ECONOMETRICS
Chrispin Mphuka
University of Zambia
January 2011
Assumptions
Generic form of the linear regression model: the population regression

    y = f(x_1, x_2, ..., x_k) + ε
      = x_1 β_1 + x_2 β_2 + ... + x_k β_k + ε

The classical regression model is a set of joint distributions satisfying Assumptions 1.1-1.4.
Classical Linear Regression Assumptions

Assumption 1.1 (Linearity).

    y_i = x_{i1} β_1 + x_{i2} β_2 + ... + x_{ik} β_k + ε_i    (i = 1, 2, ..., n)    (1)

where the β's are unknown parameters to be estimated and ε_i is the unobserved error term, with certain properties to be specified below.

- The right-hand side, x_{i1} β_1 + x_{i2} β_2 + ... + x_{ik} β_k, is called the regression function.
- The β's are regression coefficients; they represent the marginal and separate effects of the regressors.
- For example, β_2 represents the change in the dependent variable when the second regressor increases by one unit, holding the other variables constant; i.e., ∂y_i/∂x_{i2} = β_2.
- Linearity implies that the marginal effects do not depend on the level of the regressors.
- Example (wage equation), illustrated in the sketch below:

    log(WAGE_i) = β_1 + β_2 S_i + β_3 TENURE_i + β_4 EXPR_i + ε_i
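The following sketch simulates data from a wage equation of this form and recovers the coefficients by ordinary least squares. It is a minimal illustration, not part of the lecture: the coefficient values, variable ranges, and the use of numpy's lstsq solver are all assumptions made for the demo.

```python
import numpy as np

# Hypothetical wage-equation demo: simulate
#   log(WAGE_i) = b1 + b2*S_i + b3*TENURE_i + b4*EXPR_i + eps_i
# and recover the coefficients by ordinary least squares.
rng = np.random.default_rng(0)
n = 1_000
S = rng.uniform(8, 20, n)        # years of schooling (assumed range)
TENURE = rng.uniform(0, 15, n)   # years with current employer (assumed range)
EXPR = rng.uniform(0, 30, n)     # years of experience (assumed range)
eps = rng.normal(0, 0.3, n)      # error term, drawn independently of the regressors

beta_true = np.array([0.5, 0.08, 0.02, 0.03])        # hypothetical (b1, b2, b3, b4)
X = np.column_stack([np.ones(n), S, TENURE, EXPR])   # n x K regressor matrix
log_wage = X @ beta_true + eps

beta_hat, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
print(beta_hat)  # approximately (0.5, 0.08, 0.02, 0.03)
```

Here the estimate of β_2 ≈ 0.08 is the marginal effect of one extra year of schooling on log wages, matching the interpretation ∂y_i/∂x_{i2} = β_2 above.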
Classical Linear Regression Assumptions

Collect the regressors of observation i and the coefficients into K × 1 column vectors

    X_i = (x_{i1}, x_{i2}, ..., x_{iK})′,    β = (β_1, β_2, ..., β_K)′

Thus X_i′ β = x_{i1} β_1 + x_{i2} β_2 + ... + x_{iK} β_K and

    y_i = X_i′ β + ε_i    (i = 1, 2, ..., n)
Classical Linear Regression Assumptions

Also define the stacked vectors and the n × K data matrix

    y = (y_1, ..., y_n)′,    ε = (ε_1, ..., ε_n)′,    X = (X_1, ..., X_n)′

so that the i-th row of X is X_i′ and its (i, k) element is x_{ik}. Therefore Assumption 1.1 can be written as

    y = X β + ε
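A minimal numpy sketch of this notation (dimensions and coefficient values are hypothetical): the stacked form y = Xβ + ε computes all n regression equations at once, and each row reproduces y_i = X_i′β + ε_i.

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 5, 3
X = rng.normal(size=(n, K))        # n x K data matrix; row i is X_i'
beta = np.array([1.0, -0.5, 2.0])  # K x 1 coefficient vector (hypothetical values)
eps = rng.normal(size=n)           # n x 1 error vector

y = X @ beta + eps                 # stacked form: y = X beta + eps

# Row by row this is y_i = X_i' beta + eps_i:
for i in range(n):
    assert np.isclose(y[i], X[i] @ beta + eps[i])
```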
Classical Linear Regression Assumptions

Assumption 1.2 (Strict Exogeneity).

    E(ε_i | X) = 0    (i = 1, 2, ..., n)

- Put differently, E(ε_i | X_1, ..., X_n) = 0 (i = 1, 2, ..., n).

Implications of strict exogeneity:

- The unconditional mean of the error term is zero, i.e., E(ε_i) = 0 (i = 1, 2, ..., n), by the law of total expectations.
- If the cross moment E(xy) = 0, then x is said to be orthogonal to y. Under strict exogeneity the regressors are orthogonal to the error terms, i.e., E(x_{jk} ε_i) = 0 (i, j = 1, ..., n; k = 1, ..., K).
- Strict exogeneity therefore requires that the regressors be orthogonal not only to the error term from the same observation but also to the error terms from all other observations, as the simulation sketch below illustrates.
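A small Monte Carlo sketch of the orthogonality implication (the setup is entirely hypothetical): when the errors are drawn independently of the whole data matrix, the sample analogue of E(x_{jk} ε_i) is near zero both for the same observation (j = i) and across observations (j ≠ i).

```python
import numpy as np

# Hypothetical check: under strict exogeneity, E(x_jk * eps_i) = 0 for every
# pair of observations (i, j), not just for j == i.
rng = np.random.default_rng(2)
reps, n, K = 50_000, 4, 2

same_obs, other_obs = 0.0, 0.0
for _ in range(reps):
    X = rng.normal(size=(n, K))
    eps = rng.normal(size=n)       # drawn independently of X: strictly exogenous
    same_obs += X[0, 0] * eps[0]   # cross moment, same observation (j = i)
    other_obs += X[1, 0] * eps[0]  # cross moment, different observations (j != i)

print(same_obs / reps, other_obs / reps)  # both near 0
```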
Classical Linear Regression Assumptions

Assumption 1.3 (No Multicollinearity).

- The rank of the n × K data matrix X is K with probability 1.
- This implies that X has full column rank.
- It also implies that n ≥ K; i.e., there must be at least as many observations as regressors.

Assumption 1.4 (Spherical Error Variance).

- Homoskedasticity: E(ε_i² | X) = σ² > 0 (i = 1, 2, ..., n).
- Since E(ε_i | X) = 0 by Assumption 1.2, Var(ε_i | X) = E(ε_i² | X) − [E(ε_i | X)]² = E(ε_i² | X). The homoskedasticity assumption says that this conditional second moment is a constant.
- No correlation between errors: E(ε_i ε_j | X) = 0 (i, j = 1, 2, ..., n; i ≠ j).
- The two conditions combined imply E(ε ε′ | X) = σ² I_n, or equivalently Var(ε | X) = σ² I_n. A sketch of both conditions follows.
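The rank condition can be checked directly and σ² I_n constructed explicitly. A sketch with hypothetical dimensions, using numpy's matrix_rank for the rank computation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, K = 50, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])  # intercept + K-1 regressors

# Assumption 1.3: X has full column rank K (which requires n >= K).
assert np.linalg.matrix_rank(X) == K

# Duplicating a column creates exact multicollinearity: the rank stays at K
# even though there are now K + 1 columns, so the assumption fails.
X_bad = np.column_stack([X, X[:, 1]])
print(np.linalg.matrix_rank(X_bad), X_bad.shape[1])  # prints: 4 5

# Assumption 1.4: Var(eps | X) = sigma^2 * I_n, i.e. equal variances on the
# diagonal and zero covariances off the diagonal ("spherical" errors).
sigma2 = 0.25
V = sigma2 * np.eye(n)
```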
Classical Linear Regression Assumptions

Random sampling.

- Assume {(y_i, x_i)} is a random sample, so that the pairs (y_i, x_i) are i.i.d. across observations.

Then, as sketched below:

- E(ε_i | X) = E(ε_i | x_i)
- E(ε_i² | X) = E(ε_i² | x_i)
- and E(ε_i ε_j | X) = E(ε_i | x_i) E(ε_j | x_j)    (i ≠ j)
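A sketch of the reasoning behind these simplifications, assuming ε_i depends only on observation i's data (as it does here, since ε_i = y_i − x_i′ β):

```latex
% Under random sampling, (y_i, x_i) is independent of (y_j, x_j) for all j \neq i.
\begin{align*}
  E(\varepsilon_i \mid X)
    &= E(\varepsilon_i \mid x_1, \ldots, x_n)
     = E(\varepsilon_i \mid x_i)
     && \text{(the other observations carry no information about $\varepsilon_i$)} \\
  E(\varepsilon_i \varepsilon_j \mid X)
    &= E(\varepsilon_i \mid x_i)\, E(\varepsilon_j \mid x_j)
     && \text{for } i \neq j
     \text{ (given $X$, $\varepsilon_i$ and $\varepsilon_j$ are independent)}
\end{align*}
```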
Algebra of Least Squares