R-squared

Steve Simon

1999-08-18

*Dear Professor Mean, I'm running quadratic regressions on my calculator, and it reports a number called R squared. What does this value tell me about my model?*

You’re performing quadratic regressions on a calculator? I hope the poor thing doesn’t overheat.

You know more than you think you do. R squared is a measure of how well your model predicts the outcome you care about.

Short explanation

In brief, R squared measures how much of the variation in your dependent variable is explained by your model. Values near 1 indicate good predictive power; values near 0 indicate that the model does not help much.

More details

Consider a data set where we are trying to predict the lymphocyte count (per cubic mm) as a quadratic function of the reticulocyte count.

R squared is computed by looking at two sources of variation: SStotal, the variation of the dependent variable about its mean, and SSerror, the variation of the dependent variable about the regression model.

Think of SStotal as the error in prediction if you did not use any information about reticulocytes. In that case, your best prediction for every patient would simply be the mean lymphocyte count, and SStotal measures how far the observations fall from that mean.
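As a rough illustration, here is a minimal Python sketch with made-up numbers (not the Rosner data) showing that SStotal is just the squared prediction error you get when the mean is your only prediction:

```python
import numpy as np

# Hypothetical data, for illustration only (not the Rosner example)
retic = np.array([1.2, 2.4, 0.9, 3.1, 1.8, 2.7, 0.5, 3.6])                  # reticulocyte counts
lymph = np.array([1800., 2900., 1500., 4100., 2300., 3500., 1300., 4600.])  # lymphocyte counts

# Without any information about reticulocytes, the best single prediction
# for every patient is the mean lymphocyte count.
mean_prediction = lymph.mean()

# SStotal: the squared prediction error when the mean is used for every patient
ss_total = np.sum((lymph - mean_prediction) ** 2)
print("SStotal =", ss_total)
```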

Example

Here is the SPSS output of a sample exercise from page 461 of Rosner (1992). We use a quadratic function of the reticulocyte count to predict the lymphocyte count.

Figure 3.1. SPSS output for the quadratic regression of lymphocytes on reticulocytes.

The output shows a value of R squared of 0.39. How is this number computed?

The formula for R squared is

R squared = 1 - SSerror / SStotal

Another formula for R squared is

R squared = SSregression / SStotal

where SSregression is the difference between SStotal and SSerror.
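To see that the two formulas agree, here is a continuation of the hypothetical sketch above, fitting the quadratic with numpy's polyfit and computing R squared both ways:

```python
import numpy as np

# Same made-up data as before (not the Rosner example)
retic = np.array([1.2, 2.4, 0.9, 3.1, 1.8, 2.7, 0.5, 3.6])
lymph = np.array([1800., 2900., 1500., 4100., 2300., 3500., 1300., 4600.])

# Fit a quadratic (degree-2 polynomial) of lymphocytes on reticulocytes
coeffs = np.polyfit(retic, lymph, deg=2)
predicted = np.polyval(coeffs, retic)

ss_total = np.sum((lymph - lymph.mean()) ** 2)   # variation about the mean
ss_error = np.sum((lymph - predicted) ** 2)      # variation about the model
ss_regression = ss_total - ss_error

r2_from_error = 1 - ss_error / ss_total
r2_from_regression = ss_regression / ss_total
print(r2_from_error, r2_from_regression)         # the two values are identical
```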

We know from the output that SSerror is 2,207,364.8. In SPSS it appears in the ANOVA table as the Residual sum of squares, and SStotal appears in the same table as the Total sum of squares.

Anyway, plugging SSerror and SStotal into the formula gives the R squared of 0.39 reported in the output.


A value of 0.39 is a low value. It tells us that the quadratic model explains only about 39% of the variation in the lymphocyte counts, so its predictive power is modest.

What is adjusted R squared?

Notice that SPSS also produces a statistic called adjusted R squared. This statistic adjusts for the degrees of freedom in the model, penalizing models that use many terms relative to the number of observations. The formula is

adjusted R squared = 1 - (SSerror / (n - k - 1)) / (SStotal / (n - 1))

where n is the number of observations and k is the number of terms in the model (two for our quadratic).

Here is a double check of the results.

[Figure: hand calculation of the adjusted R squared, as a check against the SPSS output]

We might prefer to use the adjusted R squared if we are comparing our quadratic model to other models of varying complexity, because the unadjusted R squared never decreases when extra terms are added, even when those terms contribute nothing to the prediction.
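As a small sketch, the same adjustment can be written as a Python function using the equivalent form 1 - (1 - R squared)(n - 1)/(n - k - 1); the sample size in the example call is made up, not taken from the Rosner data:

```python
def adjusted_r_squared(r2, n, k):
    """Adjust R squared for the degrees of freedom used by the model.

    n is the number of observations and k is the number of terms in the
    model (two for a quadratic: the linear and the squared term).
    """
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Illustrative call: R squared of 0.39 with a hypothetical sample of 20 patients
print(adjusted_r_squared(r2=0.39, n=20, k=2))
```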

More complicated models

Some regression and ANOVA models incorporate a random factor. These models do not have an obvious way to compute R squared.

With a random factor, you can compute two different versions of R squared: one based on the within-subject variation and one based on the total variation.

The R squared for within variation is a measure of how much the model helps when trying to predict a new observation on one of the subjects already in your study. The R squared for total variation is a measure of how much the model helps when trying to predict a new observation on a new subject.
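Authors define these two quantities in different ways. Here is one possible sketch, with entirely simulated data and assuming a random-intercept model fit with the statsmodels package; it compares the estimated variance components of models with and without the covariate, which is my own operationalization rather than a definition taken from the paper mentioned below:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated-measures data: 12 hypothetical subjects, 5 observations each
rng = np.random.default_rng(0)
subject = np.repeat(np.arange(12), 5)
x = rng.normal(size=subject.size)
subject_effect = rng.normal(scale=2.0, size=12)[subject]
y = 3.0 * x + subject_effect + rng.normal(size=subject.size)
data = pd.DataFrame({"y": y, "x": x, "subject": subject})

# Random-intercept models without and with the covariate
null = smf.mixedlm("y ~ 1", data, groups=data["subject"]).fit()
full = smf.mixedlm("y ~ x", data, groups=data["subject"]).fit()

# scale is the residual (within-subject) variance; cov_re holds the subject variance
within_r2 = 1 - full.scale / null.scale
total_r2 = 1 - (full.scale + full.cov_re.iloc[0, 0]) / (null.scale + null.cov_re.iloc[0, 0])
print(within_r2, total_r2)
```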

In theory, either measure is reasonable; the one to report depends on which type of prediction you care about.

I have a paper somewhere in my bibliography that talks about this in more detail.

Summary

R squared measures the relative predictive power of your model. It compares the variability of the residuals from your model (SSerror) to the variability of the dependent measure about its mean (SStotal). If the variability of the residuals is small relative to the overall variability, then your model has good predictive power.

Further reading

Any good textbook on regression should have lots of details on R squared. Draper and Smith (1998) discuss R squared in chapters 2 and 4. Most introductory level books will also discuss R squared. Rosner (1992) talks about R squared (though not in the context of quadratic regression) in chapter 11.

  1. Draper NR, Smith H. **Applied Regression Analysis**, Third Edition. John Wiley and Sons, 1998.
  2. Rosner B. **Fundamentals of Biostatistics**. Duxbury Press, 1992.

You can find an earlier version of this page on my original website.