Lecture 31:  Standardized Multiple Regression Model

 

Two scenarios may warrant transformation of variables to standardized form, where Yi and Xik are converted to

           

Yi’                                   =  (1/√(n−1)) · (Yi − Ȳ)/sY  =  (Yi − Ȳ) / (sY·√(n−1))

and                                 Xik’  =  (1/√(n−1)) · (Xik − X̄k)/sk  =  (Xik − X̄k) / (sk·√(n−1))

 

where                       sY  =  √[Σ(Yi − Ȳ)² / (n−1)]   and   sk  =  √[Σ(Xik − X̄k)² / (n−1)]

 

Recall that (Yi − Ȳ)/sY and (Xik − X̄k)/sk have zero mean and unit variance.
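A minimal NumPy sketch of this correlation transformation; the data values below are made up purely for illustration:

```python
import numpy as np

def correlation_transform(v):
    """Center v, divide by its sample standard deviation, and scale by 1/sqrt(n-1)."""
    n = len(v)
    return (v - v.mean()) / (np.std(v, ddof=1) * np.sqrt(n - 1))

# Made-up response and predictor values, for illustration only
Y = np.array([4.2, 5.1, 6.8, 7.0, 9.3])
X = np.array([10.0, 12.0, 15.0, 18.0, 20.0])

Y_t, X_t = correlation_transform(Y), correlation_transform(X)
print(Y_t.mean(), X_t.mean())            # both zero, up to rounding
print((Y_t**2).sum(), (X_t**2).sum())    # both sums of squares equal 1
```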

 

 

The two scenarios are

 

1.     Minimizing rounding errors in computations.

The primary source of these errors is the computation of the inverse of the matrix [X’X]. If the determinant of this matrix is near zero (caused by severe multicollinearity), then the individual elements of the inverse will be extremely large in magnitude.

Rounding errors may also occur when X variables are grossly different in magnitude.

Transformation of the response and the predictor variables will convert the [X’X] matrix into a matrix of correlation coefficients with all elements within ±1.

 

2.     Lack of comparability in regression coefficients.

Suppose we have a model with two predictors

X1 - trees per hectare in the range of 0 – 5000, and

X2 - tree diameter with a range of 0 – 50 cm.

The regression coefficients b1 and b2 are likely to have very different magnitudes, with the result that an increase of one unit in X1 will have an entirely different effect on the response than a unit change in X2.

 

Standardized regression model

The regression model with the transformed variables, Yi’ and Xik’, is called the standardized regression model

Yi’  =  b1’Xi1’ + b2’Xi2’ + … + bp-1’Xi,p-1’ + ei

 

Note that there is no intercept term in this model. Why?

 

 

Matrix of transformed variables

 

The [X’X] matrix and [X’Y] vector of the transformed variables are

 

 

rXX  =  [ 1         r12       …    r1,p-1
          r21       1         …    r2,p-1
          …         …         …    …
          rp-1,1    rp-1,2    …    1      ]        and        rXY  =  [ rY1   rY2   …   rY,p-1 ]’

where rjk is the simple correlation between Xj and Xk, and rYk is the simple correlation between Y and Xk.

 

and the vector of estimated regression coefficients is

b’  =  [x’x]⁻¹[x’y]  =  [rXX]⁻¹[rXY]

It can be shown that the individual regression coefficient bk of the untransformed model can be recovered from the standardized regression coefficient bk’

bk  =  (sY / sk) · bk’ ,     k = 1, …, p−1

and                                            b0  =  Ȳ − b1X̄1 − b2X̄2 − … − bp-1X̄p-1
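A hedged NumPy sketch, using simulated two-predictor data (not the lecture's), showing that the cross-products of the transformed variables are correlation coefficients and that the original coefficients come back by rescaling:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
X1 = rng.uniform(0, 5000, n)                      # e.g. trees per hectare
X2 = rng.uniform(0, 50, n)                        # e.g. tree diameter (cm)
Y = 10 + 0.002 * X1 + 1.5 * X2 + rng.normal(0, 2, n)

def ct(v):                                        # correlation transformation
    return (v - v.mean()) / (np.std(v, ddof=1) * np.sqrt(len(v) - 1))

Xt = np.column_stack([ct(X1), ct(X2)])
rXX = Xt.T @ Xt                                   # correlation matrix of X1, X2
rXY = Xt.T @ ct(Y)                                # correlations of Y with X1, X2
b_std = np.linalg.solve(rXX, rXY)                 # standardized coefficients b'

sY, s1, s2 = (np.std(v, ddof=1) for v in (Y, X1, X2))
b1, b2 = b_std[0] * sY / s1, b_std[1] * sY / s2   # bk = (sY / sk) * bk'
b0 = Y.mean() - b1 * X1.mean() - b2 * X2.mean()
print(b0, b1, b2)                                 # matches the untransformed LS fit
```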

 

 

 

 


Polynomial regression models

 

General form of a polynomial model of one predictor variable

 

Y  =  b0 + b1X + b2X² + … + bp-1X^(p-1) + e

 

Even though the model has the form of a multiple linear regression, there are major differences between a polynomial model and a first-order multiple regression model:

 

1.     Curvilinearity: the response is a curve in two-dimensional space, possibly with several peaks and valleys, rather than the p-dimensional response surface of a first-order multiple regression model.

2.     There is strong multicollinearity, since all predictors are different powers of a single variable.

 

Polynomial models increase in complexity with more than one predictor variable.

 

Why use polynomial models?

 

1.     The true curvilinear response function is indeed polynomial.

2.     The curvilinear response function is unknown (or complex) but a polynomial function is a good approximation.

 

Perhaps the primary reason for using polynomials is that, very often, the true form of the functional relationship between X and Y is unknown.

The main danger in using polynomials lies in extrapolation, particularly when a monotonically increasing response is modeled by a polynomial.

Polynomial regression models may contain one or more variables, and each predictor variable may be present in various powers.

We will consider here only models with one and two predictor variables raised to the first and second power.

 

One predictor variable  -  second order  (most common)

 

Yi  =  b0’ + b1’Xi + b2’Xi² + ei

 

In order to minimize computational difficulties caused by multicollinearity, Xi is replaced by the deviation xi, such that

xi  =  Xi − X̄

and the model is transformed into

Yi  =  b0 + b1xi + b2xi² + ei

A slightly different notation is used in polynomial regression to reflect the pattern of exponents

Yi  =  b0 + b1xi + b11xi² + ei

 

The original coefficients (b0’, b1’ and b2’) can be recovered from b0, b1 and b11 using the transformation

b0’  =  b0 − b1X̄ + b11X̄²

b1’  =  b1 − 2b11X̄

                        b2’  =  b11
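A short NumPy sketch (with made-up data, for illustration only) of fitting the centered quadratic and recovering the coefficients of the model in the original X:

```python
import numpy as np

# Hypothetical (X, Y) data, for illustration only
X = np.array([2.0, 5.0, 8.0, 11.0, 14.0, 17.0, 20.0])
Y = np.array([3.1, 7.9, 11.6, 13.8, 14.9, 14.6, 13.2])

x = X - X.mean()                                   # centered predictor
A = np.column_stack([np.ones_like(x), x, x**2])    # columns: 1, x, x^2
b0, b1, b11 = np.linalg.lstsq(A, Y, rcond=None)[0]

# Back-transform to the coefficients of the model in the original X
b0_orig = b0 - b1 * X.mean() + b11 * X.mean() ** 2
b1_orig = b1 - 2 * b11 * X.mean()
b2_orig = b11
print(b0_orig, b1_orig, b2_orig)
```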

 

 

Example: Data from Douglas fir trees is used to illustrate the use of a quadratic regression model to predict tree height from its diameter.

 

Second order polynomial regression of H on (D − D̄)

 

The regression equation is:                 Ŷ  =  115.053 + 6.071*x - 0.180*x²

 

where                          Y  =  H,            and      x  =  (D − D̄).

For our data                              D̄  =  14.5458

 

and the height prediction model in terms of D is

 

Ŷ  =  115.053 + 6.071*(D – 14.5458) - 0.180*(D – 14.5458)²

 

when simplified becomes

 

Ŷ  =  -11.339 + 11.307*D - 0.180*D²
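As a numerical check, the simplified equation can be reproduced from the centered fit by the back-transformation given earlier; a small sketch using the reported coefficients:

```python
# Check of the back-transformation for the reported fit
# H_hat = 115.053 + 6.071*x - 0.180*x^2  with  x = D - 14.5458
Dbar = 14.5458
b0, b1, b11 = 115.053, 6.071, -0.180

b0_orig = b0 - b1 * Dbar + b11 * Dbar**2     # constant term in D
b1_orig = b1 - 2 * b11 * Dbar                # coefficient of D
b2_orig = b11                                # coefficient of D^2
print(b0_orig, b1_orig, b2_orig)             # approx -11.34, 11.31, -0.18
```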

 

Fig 8.1:       Scatter-plot of diameter (D) and height (H) of 49 Douglas fir trees. The curve predicts height using quadratic regression of H on D.

 

 

Two predictor variables  --  second order

 

Y  =  b0 + b1x1 + b2x2 + b11x1² + b22x2² + b12x1x2 + e

 

The response surface in this case lies in three-dimensional space, with its actual form governed by the parameter values.
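A minimal NumPy least-squares sketch of fitting this full second-order model (the x1, x2 and y values are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x1 = rng.uniform(-1, 1, n)            # predictors expressed as deviations
x2 = rng.uniform(-1, 1, n)
y = (2 + 1.0 * x1 - 0.5 * x2 + 0.8 * x1**2 - 0.3 * x2**2
     + 0.6 * x1 * x2 + rng.normal(0, 0.1, n))

# Design matrix columns: 1, x1, x2, x1^2, x2^2, x1*x2
A = np.column_stack([np.ones(n), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(A, y, rcond=None)[0]
print(b)                              # b0, b1, b2, b11, b22, b12
```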

 

Additional variables and higher-order terms make the polynomial model larger and more complex to interpret.

 

The basic rule of regression, to keep the model simple, also applies to polynomials.

 

Implementation of polynomial regression models

 

Polynomial regression model fitting presents no new problems and all earlier results on fitting apply.

 

When fitting a polynomial model, start with a second- or third-order model and then perform tests to see if the higher-order terms can be dropped.
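One common way to carry out such a test is an extra-sum-of-squares (partial F) comparison of the higher-order fit against the reduced fit. A sketch with simulated data (NumPy for the fits, scipy.stats for the p-value; both libraries are assumptions, not part of the lecture):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 40
x = rng.uniform(0, 10, n) - 5.0                      # centered predictor
y = 1 + 2 * x - 0.4 * x**2 + rng.normal(0, 1, n)     # true relation is quadratic

def sse(design, y):
    """Residual sum of squares of a least-squares fit."""
    b = np.linalg.lstsq(design, y, rcond=None)[0]
    return np.sum((y - design @ b) ** 2)

full = np.column_stack([np.ones(n), x, x**2, x**3])  # third-order model
reduced = full[:, :3]                                # second-order model (drop x^3)

df_full = n - full.shape[1]                          # error df of the full model
F = (sse(reduced, y) - sse(full, y)) / (sse(full, y) / df_full)   # one extra term
p = stats.f.sf(F, 1, df_full)                        # upper-tail F probability
print(F, p)   # a large p-value suggests the cubic term can be dropped
```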

 

Some further comments on polynomial regression

 

1.               Multicollinearity is unavoidable.

2.               Extrapolation may lead to more serious errors than in general linear models.

3.               Though the quadratic term often provides a close approximation to a nonlinear form, it has only limited flexibility.

4.               Keep the model to as low an order as possible, for simplicity of interpretation.