The F-test and R²: an introduction from the ground up 2020-10-23
Intro
This is, in part, a repost of an answer of mine on Cross Validated. I also take the opportunity to introduce the concepts and explain (mostly informally) why they are the way they are. It's a very interesting question, which I'll reproduce below:
"What does it mean if I have a high F-stat but low $R^2$?", a question by pythonuser distributed under CC BY-SA 4.0
As far as I understand, a high F-stat leads to a high $R^2$, though the converse is not true. What does it mean if I have a high F-stat and a low $R^2$?
For context though, what is $R^2$ and where does this F-statistic appear?
The coefficient of determination, $R^2$
Consider a linear model:

$$y_i = \beta_0 + \beta_1 x_{i1} + \dots + \beta_p x_{ip} + \varepsilon_i, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2)$$
$R^2$, the coefficient of determination, is a useful metric of the fit. It can be defined from various contexts, but informally it's supposed to represent the amount of variance of the true $\mathbf{y}$ (a vector containing the values of $y_i$) your model captures. Another way of looking at it is to compare the variance of the residuals to the variance of the data: if your residuals have a proportionally small variance, that may be good news for your intended application. Formally, it's actually a comparison between two models: yours and an alternative one, usually a null model.
Let's define the Mean Squared Error (MSE) as a function acting on two vectors: $\mathbf{y}$ (the true values) and $\hat{\mathbf{y}}$ (estimates from a model):

$$\operatorname{MSE}(\mathbf{y}, \hat{\mathbf{y}}) = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$
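Just to fix ideas, here is a minimal Python sketch of that definition (the toy vectors are made up for illustration):

```python
import numpy as np

def mse(y, y_hat):
    """Mean squared error between the true values y and the estimates y_hat."""
    y, y_hat = np.asarray(y, dtype=float), np.asarray(y_hat, dtype=float)
    return np.mean((y - y_hat) ** 2)

# hypothetical toy vectors, just to show the call
print(mse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```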
Now let's denote the MSE pertaining to different models with a subscript, e.g. $\operatorname{MSE}_{\text{model}}$ and $\operatorname{MSE}_{\text{null}}$. Then, we can compare the "performance" of two models by comparing their MSEs! What if we define a baseline model, one that all other models should beat before we even consider them worthy?
A good candidate is a model with only an intercept: $\hat{y}_i = \beta_0$. Optimizing it for MSE is the same as setting $\beta_0 = \bar{y}$, the arithmetic mean. Let's compare our model and this new one by the ratio of their MSEs, and call this value the MSE-ratio (MSER):

$$\operatorname{MSER} = \frac{\operatorname{MSE}_{\text{model}}}{\operatorname{MSE}_{\text{null}}}$$
If $\operatorname{MSER} \approx 0$ (barring some very specific edge cases) our model is great™ and residuals are very small. If, however, $\operatorname{MSER} \approx 1$, then our model is actually not that much better than the boring arithmetic average: its residuals are of roughly the same magnitude! Mind you, depending on model specification and on which data we are measuring it, we can even witness $\operatorname{MSER} > 1$, meaning that our model is even worse than simply taking the mean!
If you've seen the formula for $R^2$ then you know where this is going. $R^2$ is simply one minus MSER:

$$R^2 = 1 - \operatorname{MSER} = 1 - \frac{\operatorname{MSE}_{\text{model}}}{\operatorname{MSE}_{\text{null}}}$$
Thus, the interpretation of $R^2$ follows the same lines as the one we derived for MSER, but inverted:
Low $R^2$ (high MSER): low explanatory power, not that much better than the mean, i.e. the null model of choice
High $R^2$ (low MSER): high explanatory power, much better than the mean
Negative $R^2$ (MSER greater than one): we'd be better off using the mean as our model
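To make the interpretation concrete, here is a small Python sketch on simulated data (the data-generating numbers are arbitrary, chosen only for illustration), pitting a least-squares line against the intercept-only baseline:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=200)

# Our model: a least-squares line. The null model: just the mean of y.
slope, intercept = np.polyfit(x, y, deg=1)
y_hat_model = intercept + slope * x
y_hat_null = np.full_like(y, y.mean())

mse_model = np.mean((y - y_hat_model) ** 2)
mse_null = np.mean((y - y_hat_null) ** 2)

mser = mse_model / mse_null  # the MSE-ratio
r2 = 1 - mser                # the coefficient of determination
print(f"MSER = {mser:.3f}, R^2 = {r2:.3f}")
```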
A neat interpretation of $R^2$ can be derived from deviance, linking it to distributions other than the Gaussian. These definitions of $R^2$ and MSER are valid for most models (including non-linear ones). Other derivations that will follow, however, only apply to linear regression. In the case of linear regression optimized by ordinary least squares, it is possible to show that:

$$R^2 = \frac{\operatorname{Var}(\hat{\mathbf{y}})}{\operatorname{Var}(\mathbf{y})} = \operatorname{corr}(\mathbf{y}, \hat{\mathbf{y}})^2$$
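Continuing the sketch above, we can check this identity numerically (purely an illustration, not part of the derivation):

```python
# For OLS with an intercept, R^2 coincides with Var(y_hat)/Var(y)
# and with the squared correlation between y and y_hat.
r2_from_var = np.var(y_hat_model) / np.var(y)
r2_from_corr = np.corrcoef(y, y_hat_model)[0, 1] ** 2
print(r2, r2_from_var, r2_from_corr)  # the three agree up to rounding
```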
So, what if we wanted to attach a test-statistic to $R^2$? Enter the F-test.
Comparing linear models, enter the F-test
Let's state our hypotheses:

$$H_0: \beta_1 = \beta_2 = \dots = \beta_p = 0 \qquad \text{vs.} \qquad H_1: \beta_j \neq 0 \text{ for at least one } j$$
Under the null hypothesis, the MSE is, up to scaling by $\sigma^2$ and by the number of observations, a sum of squared standard normal variables. This again entails the assumption that our dependent variable, $y$, has a normal conditional distribution. A sum of $k$ squared standard normals is Chi-squared distributed with $k$ degrees of freedom (see Wiki).
Thus we define the residual sum of squares (RSS):

$$\operatorname{RSS} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = n \cdot \operatorname{MSE}(\mathbf{y}, \hat{\mathbf{y}})$$
The ratio of two independent Chi-squared variables, each divided by its degrees of freedom, is F-distributed:

$$\frac{X_1 / d_1}{X_2 / d_2} \sim F(d_1, d_2), \qquad X_1 \sim \chi^2_{d_1},\ X_2 \sim \chi^2_{d_2} \text{ independent}$$
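A quick simulation (purely illustrative, with arbitrary degrees of freedom) shows this agreement:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
d1, d2 = 3, 50

# Ratio of two independent chi-squared draws, each divided by its df
x1 = stats.chi2.rvs(d1, size=100_000, random_state=rng)
x2 = stats.chi2.rvs(d2, size=100_000, random_state=rng)
ratio = (x1 / d1) / (x2 / d2)

# Simulated quantiles vs. theoretical F(d1, d2) quantiles
qs = [0.5, 0.9, 0.99]
print(np.quantile(ratio, qs))
print(stats.f.ppf(qs, d1, d2))
```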
With all that information, we can see that the RSSs of our alternative and our null models are (once scaled by $\sigma^2$) Chi-squared distributed under the null. In fact, the ratio of variances (which are proportional to sums of squared normal variables) is F-distributed. The F-test is commonly employed to test for equality of variances under the assumption of normality. If we can arrange our RSSs into variances, we can derive the F-statistic.
So let's define the explained and unexplained variances. The unexplained variance is the variance we couldn't account for with our model, so it makes sense to define it as proportional to the RSS of our model. The explained variance, on the other hand, is the difference between the total variance and the unexplained variance (this is particular to linear regression, and might not hold for other models due to a covariance term). The total variance is proportional to the RSS of the null model, since the null model simply predicts the mean.
With that definition, we can derive the F-statistic and perform our test. Under the null (i.e. assuming $H_0$, so that all coefficients other than the intercept are zero) the general form of $F$ can be stated as:

$$F = \frac{(\operatorname{RSS}_{\text{null}} - \operatorname{RSS}_{\text{model}}) / p}{\operatorname{RSS}_{\text{model}} / (n - p - 1)} \sim F(p,\ n - p - 1)$$

where $p$ is the number of coefficients other than the intercept and $n$ is the number of samples.
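Applying this to the toy fit from before (still just a sketch; here $p = 1$ since the toy model has a single predictor besides the intercept):

```python
from scipy import stats

n = len(y)
p = 1  # predictors besides the intercept in the toy model

rss_model = np.sum((y - y_hat_model) ** 2)
rss_null = np.sum((y - y_hat_null) ** 2)

F = ((rss_null - rss_model) / p) / (rss_model / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)  # P(F_{p, n-p-1} > F)
print(f"F = {F:.2f}, p-value = {p_value:.2e}")
```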
What's the relationship between the F-statistic and $R^2$?
$R^2$ can be defined in terms of RSS as well:

$$R^2 = 1 - \frac{\operatorname{RSS}_{\text{model}}}{\operatorname{RSS}_{\text{null}}}$$
Thus, with some rearranging, we can show that:

$$F = \frac{R^2}{1 - R^2} \cdot \frac{n - p - 1}{p}$$
A few interesting conclusions can be extracted from this:
The F-test comparing an intercept-only null model with an alternative model tests for the difference in residual sum of squares, ergo tests for $H_0: \beta_1 = \beta_2 = \dots = \beta_p = 0$, a joint test for all coefficients in the model other than the intercept
Increasing $n$, the number of samples, inflates $F$ (we kind of expect this, it becomes easier to reject point-nulls with larger sample sizes)
Following from the previous point, huge, significant values of $F$ can be attained with the tiniest non-zero $R^2$s (see the sketch below)
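The last two points are easy to see numerically. In the sketch below (same toy quantities as before), $F$ is recomputed from $R^2$ alone, and then a fixed, tiny $R^2$ is pushed through increasing sample sizes:

```python
from scipy import stats

# F recovered from R^2 alone matches the RSS-based statistic above
F_from_r2 = (r2 / (1 - r2)) * (n - p - 1) / p
print(F_from_r2)  # same value as F, up to floating-point error

# A tiny but non-zero R^2 becomes "significant" once n is large enough
tiny_r2 = 0.001
for n_big in (100, 10_000, 1_000_000):
    F_big = (tiny_r2 / (1 - tiny_r2)) * (n_big - p - 1) / p
    print(n_big, F_big, stats.f.sf(F_big, p, n_big - p - 1))
```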
An unanswered question, that I might answer in a later post, is: if we know the distribution of $R^2$, why don't we test it directly using that information? But a simple answer is that we do, under specific conditions.
This was an interesting topic to talk about and one that pops up time and time again. If you spot any inaccuracies, I would be extremely grateful if you let me know, and I'll credit you in the corrections. Until the next one!