Hat tip: why and how the hat matrix makes things easier 2021-03-05
Intro
Like most of my posts, this one is about linear modeling. I really believe that linear modeling is an under-appreciated discipline in today's machine learning mainstream. Also, like most of my posts, this one starts with a question on Cross Validated. I also believe that Cross Validated is under-appreciated, but I digress.
So, the question is simple: how do you show that the residual sum of squares (RSS) and the explained sum of squares (ESS) are independent random variables? I really wanted to answer this one, because it is fundamental, but from the start it seemed quite clear that only an algebraic treatment would be feasible. This is due to the quadratic forms that arise naturally in the problem, as you will see below.
Let's state what we have, then, shall we?
This is our model, a plain, old, linear regression:

$$y \mid X \sim \mathcal{N}\!\left(X\beta,\; \sigma^2 I\right)$$

Notice that we state it as a conditional distribution, where the expectation of the outcome is given by the linear combination of predictors, and there is zero covariance between outcomes, which have a constant variance attached. We could augment it, of course; from the treatment I'll expose below it will become more obvious how simple a procedure that actually is.
So, regarding the problem statement, here are RSS and ESS:

$$\mathrm{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \qquad \mathrm{ESS} = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2$$

Where $\hat{y}_i = x_i^\top \hat{\beta}$ and $\bar{y} = \frac{1}{n}\sum_{i=1}^{n} y_i$.
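If code is your thing, here is a minimal numpy sketch of these quantities, with a made-up design matrix and coefficients purely for illustration (the objects defined here are reused in the snippets further down):

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up design matrix (with an intercept column) and coefficients, for illustration only
n, p = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta, sigma = np.array([1.0, 2.0, -0.5]), 1.5

# Draw y from the model: y | X ~ N(X beta, sigma^2 I)
y = X @ beta + rng.normal(scale=sigma, size=n)

# OLS fit, fitted values, and the two sums of squares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta_hat
rss = np.sum((y - y_hat) ** 2)           # residual sum of squares
ess = np.sum((y_hat - y.mean()) ** 2)    # explained sum of squares
tss = np.sum((y - y.mean()) ** 2)        # total sum of squares
print(rss, ess, np.isclose(rss + ess, tss))  # with an intercept, RSS + ESS = TSS
```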
So, how does one go about showing that RSS and ESS are uncorrelated?
The hat treatment
Even though RSS and ESS are intuitive from their mathematical descriptions, let's take one step back (and add linear algebra to the mix, making everything harder to read, though easier to solve).
So, what's $\hat{\beta}$ again? It's the solution to the least squares problem, and it can be obtained in analytical form (I showed you how in "Enter the Ridge: derivation and properties"):

$$\hat{\beta} = (X^\top X)^{-1} X^\top y$$
With that we can retrieve the expected, or "fitted", values:

$$\hat{y} = X\hat{\beta} = X(X^\top X)^{-1} X^\top y$$
Notice something peculiar? The fitted values $\hat{y}$ can be written explicitly as a linear combination of the original values $y$. In fact, if we define $H = X(X^\top X)^{-1} X^\top$, then $\hat{y} = Hy$.
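Continuing the sketch above, we can build $H$ explicitly and check that it indeed reproduces the fitted values (forming the full $n \times n$ matrix is fine for a toy example, though you wouldn't do it for large $n$):

```python
# Hat matrix built explicitly: H = X (X'X)^{-1} X'
H = X @ np.linalg.inv(X.T @ X) @ X.T

# Fitted values as a linear combination of the original y: y_hat = H y
print(np.allclose(H @ y, X @ beta_hat))

# The diagonal of H gives the leverage of each observation; trace(H) equals the number of columns of X
leverages = np.diag(H)
print(leverages[:5], np.isclose(leverages.sum(), X.shape[1]))
```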
We call this matrix $H$ the hat matrix (it puts the "hat" on $y$). It's a square $n \times n$ matrix. It's very useful for demonstrating properties of Gaussian linear regression. It also measures something called leverage: literally, the leverage each original value in $y$ has on the prediction of itself. Technically, it's simply the partial derivative $\partial \hat{y}_i / \partial y_i = H_{ii}$.
Some neat properties can be verified outright (a quick numerical check follows right after):
The hat matrix is idempotent!

$$HH = X(X^\top X)^{-1} X^\top X (X^\top X)^{-1} X^\top = X(X^\top X)^{-1} X^\top = H$$
The hat matrix is symmetric!

$$H^\top = \left(X(X^\top X)^{-1} X^\top\right)^\top = X\left((X^\top X)^{-1}\right)^\top X^\top = X(X^\top X)^{-1} X^\top = H$$
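Or, numerically (up to floating point error), continuing from the snippets above:

```python
print(np.allclose(H @ H, H))  # idempotent: HH = H
print(np.allclose(H.T, H))    # symmetric: H' = H
```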
So, as I promised you, using $H$ makes things easier. Let's rewrite RSS and ESS. But first we must state them algebraically:

$$\mathrm{RSS} = (y - \hat{y})^\top (y - \hat{y}) \qquad \mathrm{ESS} = (\hat{y} - \bar{y}\mathbf{1})^\top (\hat{y} - \bar{y}\mathbf{1})$$

Substituting $\hat{y} = Hy$ and $\bar{y}\mathbf{1} = \frac{1}{n}\mathbf{1}\mathbf{1}^\top y = \frac{1}{n}Jy$:

$$\mathrm{RSS} = y^\top (I - H)^\top (I - H)\, y = y^\top (I - H)\, y$$

$$\mathrm{ESS} = y^\top \left(H - \tfrac{1}{n}J\right)^\top \left(H - \tfrac{1}{n}J\right) y = y^\top \left(H - \tfrac{1}{n}J\right) y$$

Because $I - H$ and $H - \frac{1}{n}J$ are both symmetric and idempotent (the latter as long as the model includes an intercept, so that $H\mathbf{1} = \mathbf{1}$ and $HJ = J$).

So there we have it, $\mathrm{RSS} = y^\top (I - H)\, y$ and $\mathrm{ESS} = y^\top \left(H - \frac{1}{n}J\right) y$. That $(I - H)$ in the RSS might give you a hint that they are uncorrelated, since the ESS matrix is built from $H$ itself (plus the centering term). Now, how to prove they are uncorrelated?
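Again continuing the earlier sketch, the quadratic forms match the sum-of-squares definitions computed before:

```python
# Centering term (1/n) J, where J is the all-ones matrix
J = np.ones((n, n))

# RSS and ESS written as quadratic forms in y
rss_qf = y @ (np.eye(n) - H) @ y
ess_qf = y @ (H - J / n) @ y
print(np.isclose(rss_qf, rss), np.isclose(ess_qf, ess))
```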
Computing the covariance
The covariance is "simply" (haha) given by

$$\mathrm{Cov}(\mathrm{RSS}, \mathrm{ESS}) = \mathrm{Cov}\!\left(y^\top (I - H)\, y,\; y^\top \left(H - \tfrac{1}{n}J\right) y\right)$$
Note that for symmetric matrices $A$ and $B$, and $y \sim \mathcal{N}(\mu, \Sigma)$, the covariance between quadratic forms[1] is given by:

$$\mathrm{Cov}\!\left(y^\top A y,\; y^\top B y\right) = 2\,\mathrm{tr}\!\left(A \Sigma B \Sigma\right) + 4\,\mu^\top A \Sigma B \mu$$
Plugging $A = I - H$ and $B = H - \frac{1}{n}J$ we retrieve:

$$\mathrm{Cov}(\mathrm{RSS}, \mathrm{ESS}) = 2\,\mathrm{tr}\!\left((I - H)\,\Sigma\left(H - \tfrac{1}{n}J\right)\Sigma\right) + 4\,\mu^\top (I - H)\,\Sigma\left(H - \tfrac{1}{n}J\right)\mu$$
Using the fact that $\Sigma = \mathrm{Var}(y \mid X) = \sigma^2 I$ and $\mu = \mathbb{E}(y \mid X) = X\beta$:

$$\mathrm{Cov}(\mathrm{RSS}, \mathrm{ESS}) = 2\sigma^4\,\mathrm{tr}\!\left((I - H)\left(H - \tfrac{1}{n}J\right)\right) + 4\sigma^2\,\beta^\top X^\top (I - H)\left(H - \tfrac{1}{n}J\right) X\beta$$
But $(I - H)\left(H - \frac{1}{n}J\right) = H - \frac{1}{n}J - H^2 + \frac{1}{n}HJ = H - \frac{1}{n}J - H + \frac{1}{n}J = 0$ (idempotency of $H$ plus $HJ = J$), so:

$$\mathrm{Cov}(\mathrm{RSS}, \mathrm{ESS}) = 0$$
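The algebra is the actual proof, but a quick Monte Carlo check (reusing X, beta, sigma and H from the sketches above) should show a sample correlation between RSS and ESS that hovers around zero:

```python
# Draw many datasets from the same model and look at the empirical correlation of RSS and ESS
def one_draw():
    y = X @ beta + rng.normal(scale=sigma, size=n)
    y_hat = H @ y
    return np.sum((y - y_hat) ** 2), np.sum((y_hat - y.mean()) ** 2)

samples = np.array([one_draw() for _ in range(20_000)])
print(np.corrcoef(samples[:, 0], samples[:, 1])[0, 1])  # should be close to 0
```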
Conclusions
To be frank, I am not even sure a similar proof could be easily obtained without the hat matrix. I hope this post convinces you that the hat matrix helps a lot in the statistical treatment of linear regression. It also makes clear where different assumptions could be plugged in, e.g. a non-spherical error covariance $\mathrm{Var}(y \mid X) = \Sigma \neq \sigma^2 I$.
[1] This proof is a bit longer and might be the subject of another post in the future. See Graybill, Franklin A. Matrices with Applications in Statistics. 1983.