Tuesday, August 20, 2019
Fixed and random effects in panel data analysis
Panel data (also known as longitudinal or cross-sectional time-series data) is a dataset in which the behavior of entities is observed across time. With panel data you can include variables at different levels of analysis (i.e. students, schools, districts, states), suitable for multilevel or hierarchical modeling. In this document we focus on two techniques used to analyze panel data: fixed effects (FE) and random effects (RE).

Fixed effects explore the relationship between predictor and outcome variables within an entity (country, person, company, etc.). Each entity has its own individual characteristics that may or may not influence the predictor variables (for example, being male or female could influence the opinion toward a certain issue; the political system of a particular country could have some effect on trade or GDP; or the business practices of a company may influence its stock price). When using FE we assume that something within the individual may impact or bias the predictor or outcome variables and we need to control for this. This is the rationale behind the assumption of correlation between the entity's error term and the predictor variables. FE remove the effect of those time-invariant characteristics from the predictor variables so we can assess the predictors' net effect.

Another important assumption of the FE model is that those time-invariant characteristics are unique to the individual and should not be correlated with other individual characteristics. Each entity is different; therefore the entity's error term and the constant (which captures individual characteristics) should not be correlated with the others. If the error terms are correlated, then FE is not suitable, since inferences may not be correct and you need to model that relationship (probably using random effects). This is the main rationale for the Hausman test (presented later on in this document).
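The "remove the time-invariant characteristics" step is just demeaning each entity's data. The snippet below is a minimal simulation sketch (all numbers and variable names are illustrative, not from any real dataset): it builds a panel in which the entity effect is correlated with the regressor, then shows that the within (demeaned) estimator recovers the true slope while pooled OLS does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 500, 5                                 # 500 entities, 5 periods
beta = 1.5                                    # true slope

alpha = rng.normal(0, 2, size=n)              # time-invariant entity effects
x = rng.normal(size=(n, T)) + alpha[:, None]  # regressor correlated with alpha
y = beta * x + alpha[:, None] + rng.normal(size=(n, T))

# Pooled OLS ignores alpha; because x and alpha are correlated, it is biased.
b_pooled = (x * y).sum() / (x * x).sum()

# Within (fixed effects) estimator: subtract each entity's mean, which
# wipes out alpha, then run OLS on the demeaned data.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd * xd).sum()

print(b_pooled, b_fe)  # pooled drifts away from 1.5; within stays close
```

The slope-only regressions are enough here because all variables are constructed with mean zero; a real application would include an intercept and more regressors.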
The equation for the fixed effects model becomes:

Y_it = β1·X_it + α_i + u_it   [eq.1]

where α_i (i = 1…n) is the unknown intercept for each entity (n entity-specific intercepts), Y_it is the dependent variable (DV) with i = entity and t = time, X_it represents one independent variable (IV), β1 is the coefficient for that IV, and u_it is the error term.

Random effects assume that the entity's error term is not correlated with the predictors, which allows time-invariant variables to play a role as explanatory variables. In random effects you need to specify those individual characteristics that may or may not influence the predictor variables. The problem with this is that some variables may not be available, leading to omitted variable bias in the model. RE allows you to generalize the inferences beyond the sample used in the model.

To decide between fixed and random effects you can run a Hausman test, where the null hypothesis is that the preferred model is random effects and the alternative is fixed effects (see Greene, 2008, chapter 9). It basically tests whether the unique errors (u_i) are correlated with the regressors; the null hypothesis is that they are not.

Testing for random effects: Breusch-Pagan Lagrange multiplier (LM). The LM test helps you decide between a random effects regression and a simple OLS regression. The null hypothesis in the LM test is that the variance across entities is zero, that is, no significant difference across units (i.e. no panel effect). Here we failed to reject the null and conclude that random effects is not appropriate. That is, there is no evidence of significant differences across countries, so you can run a simple OLS regression.
EC968 Panel Data Analysis. Steve Pudney, ISER, University of Essex, 2007.

Panel data are a form of longitudinal data, involving regularly repeated observations on the same individuals. Individuals may be people, households, firms, areas, etc. Repeat observations may be different time periods or units within clusters (e.g. workers within firms; siblings within twin pairs).

Some terminology:
- A balanced panel has the same number of time observations (T) on each of the n individuals.
- An unbalanced panel has different numbers of time observations (Ti) on each individual.
- A compact panel covers only consecutive time periods for each individual; there are no gaps.
- Attrition is the process of drop-out of individuals from the panel, leading to an unbalanced and possibly non-compact panel.
- A short panel has a large number of individuals but few time observations on each (e.g. the BHPS has 5,500 households and 13 waves).
- A long panel has a long run of time observations on each individual, permitting separate time-series analysis for each.

Advantages of panel data. With panel data:
- We can study dynamics.
- The sequence of events in time helps to reveal causation.
- We can allow for time-invariant unobservable variables.

But:
- Variation between people usually far exceeds variation over time for an individual, so a panel with T waves doesn't give T times the information of a cross-section.
- Variation over time may not exist or may be inflated by measurement error.
- Panel data impose a fixed timing structure; continuous-time survival analysis may be more informative.

Panel Data Analysis: Advantages and Challenges. Cheng Hsiao, May 2006, IEPR Working Paper 06.49.

Panel data or longitudinal data typically refer to data containing time series observations of a number of individuals.
Therefore, observations in panel data involve at least two dimensions: a cross-sectional dimension, indicated by subscript i, and a time series dimension, indicated by subscript t. However, panel data could have a more complicated clustering or hierarchical structure. For instance, variable y may be the measurement of the level of air pollution at station _ in city j of country i at time t (e.g. Antweiler (2001), Davis (1999)). For ease of exposition, I shall confine my presentation to a balanced panel involving N cross-sectional units, i = 1, . . ., N, over T time periods, t = 1, . . ., T.

There are at least three factors contributing to the geometric growth of panel data studies: (i) data availability, (ii) greater capacity for modeling the complexity of human behavior than a single cross-section or time series data set, and (iii) challenging methodology.

Advantages of Panel Data. Panel data, by blending the inter-individual differences and intra-individual dynamics, have several advantages over cross-sectional or time-series data:

(i) More accurate inference of model parameters. Panel data usually contain more degrees of freedom and more sample variability than cross-sectional data, which may be viewed as a panel with T = 1, or time series data, which is a panel with N = 1, hence improving the efficiency of econometric estimates (e.g. Hsiao, Mountain and Ho-Illman (1995)).

(ii) Greater capacity for capturing the complexity of human behavior than a single cross-section or time series data set. These include:

(ii.a) Constructing and testing more complicated behavioral hypotheses. For instance, consider the example of Ben-Porath (1973): a cross-sectional sample of married women was found to have an average yearly labor-force participation rate of 50 percent. This could be the outcome of random draws from a homogeneous population, or the draws could come from heterogeneous populations in which 50% always work and 50% never work.
If the sample was from the former, each woman would be expected to spend half of her married life in the labor force and half out of it; job turnover would be expected to be frequent, and the average job duration would be about two years. If the sample was from the latter, there is no turnover: the current information about a woman's work status is a perfect predictor of her future work status. Cross-sectional data are not able to distinguish between these two possibilities, but panel data can, because the sequential observations for a number of women contain information about their labor participation in different subintervals of their life cycle.

Another example is the evaluation of the effectiveness of social programs (e.g. Heckman, Ichimura, Smith and Todd (1998), Hsiao, Shen, Wang and Wang (2005), Rosenbaum and Rubin (1985)). Evaluating the effectiveness of certain programs using a cross-sectional sample typically suffers from the fact that those receiving treatment are different from those without. In other words, one does not simultaneously observe what happens to an individual when she receives the treatment and when she does not: an individual is observed either receiving treatment or not receiving treatment. Using the difference between the treatment group and the control group could suffer from two sources of bias: selection bias due to differences in observable factors between the treatment and control groups, and selection bias due to endogeneity of participation in treatment. For instance, the Northern Territory (NT) in Australia decriminalized possession of small amounts of marijuana in 1996. Evaluating the effects of decriminalization on marijuana smoking behavior by comparing the differences between NT and other states that were still non-decriminalized could suffer from either or both sorts of bias.
If panel data over this time period are available, they allow the possibility of observing the before and after effects of decriminalization on individuals, as well as the possibility of isolating the effects of treatment from other factors affecting the outcome.

(ii.b) Controlling the impact of omitted variables. It is frequently argued that the real reason one finds (or does not find) certain effects is that one ignores the effects of certain variables in one's model specification which are correlated with the included explanatory variables. Panel data contain information on both the intertemporal dynamics and the individuality of the entities, which may allow one to control the effects of missing or unobserved variables. For instance, MaCurdy's (1981) life-cycle labor supply model under certainty implies that the logarithm of a worker's hours worked is a linear function of the logarithm of her wage rate and the logarithm of the worker's marginal utility of initial wealth. Leaving the logarithm of the worker's marginal utility of initial wealth out of the regression of hours worked on the wage rate, because it is unobserved, can lead to seriously biased inference on the wage elasticity of hours worked, since initial wealth is likely to be correlated with the wage rate. However, since a worker's marginal utility of initial wealth stays constant over time, if time series observations of an individual are available, one can take the difference of a worker's labor supply equation over time to eliminate the effect of marginal utility of initial wealth on hours worked. The rate of change of an individual's hours worked then depends only on the rate of change of her wage rate; it no longer depends on her marginal utility of initial wealth.

(ii.c) Uncovering dynamic relationships. Economic behavior is inherently dynamic, so most econometrically interesting relationships are explicitly or implicitly dynamic (Nerlove (2002)).
However, the estimation of time-adjustment patterns using time series data often has to rely on arbitrary prior restrictions such as Koyck or Almon distributed lag models, because time series observations of current and lagged variables are likely to be highly collinear (e.g. Griliches (1967)). With panel data, we can rely on the inter-individual differences to reduce the collinearity between current and lagged variables and estimate unrestricted time-adjustment patterns (e.g. Pakes and Griliches (1984)).

(ii.d) Generating more accurate predictions for individual outcomes by pooling the data rather than generating predictions of individual outcomes using only the data on the individual in question. If individual behaviors are similar conditional on certain variables, panel data provide the possibility of learning an individual's behavior by observing the behavior of others. Thus, it is possible to obtain a more accurate description of an individual's behavior by supplementing observations of the individual in question with data on other individuals (e.g. Hsiao, Appelbe and Dineen (1993), Hsiao, Chan, Mountain and Tsui (1989)).

(ii.e) Providing micro foundations for aggregate data analysis. Aggregate data analysis often invokes the representative agent assumption. However, if micro units are heterogeneous, not only can the time series properties of aggregate data be very different from those of disaggregate data (e.g. Granger (1990); Lewbel (1992); Pesaran (2003)), but policy evaluation based on aggregate data may be grossly misleading. Furthermore, the prediction of aggregate outcomes using aggregate data can be less accurate than the prediction based on micro-equations (e.g. Hsiao, Shen and Fujiki (2005)). Panel data containing time series observations for a number of individuals are ideal for investigating the homogeneity versus heterogeneity issue.

(iii) Simplifying computation and statistical inference.
Panel data involve at least two dimensions, a cross-sectional dimension and a time series dimension. Under normal circumstances one would expect the computation of panel data estimators or inference to be more complicated than for cross-sectional or time series data. However, in certain cases the availability of panel data actually simplifies computation and inference. For instance:

(iii.a) Analysis of nonstationary time series. When time series data are not stationary, the large sample approximations of the distributions of the least-squares or maximum likelihood estimators are no longer normal (e.g. Anderson (1959), Dickey and Fuller (1979, 1981), Phillips and Durlauf (1986)). But if panel data are available, and observations among cross-sectional units are independent, then one can invoke the central limit theorem across cross-sectional units to show that the limiting distributions of many estimators remain asymptotically normal (e.g. Binder, Hsiao and Pesaran (2005), Levin, Lin and Chu (2002), Im, Pesaran and Shin (2004), Phillips and Moon (1999)).

(iii.b) Measurement errors. Measurement errors can lead to under-identification of an econometric model (e.g. Aigner, Hsiao, Kapteyn and Wansbeek (1985)). The availability of multiple observations for a given individual or at a given time may allow a researcher to make different transformations to induce different and deducible changes in the estimators, and hence to identify an otherwise unidentified model (e.g. Biorn (1992), Griliches and Hausman (1986), Wansbeek and Koning (1989)).

(iii.c) Dynamic Tobit models. When a variable is truncated or censored, the actual realized value is unobserved. If an outcome variable depends on previous realized values and the previous realized values are unobserved, one has to integrate over the truncated range to obtain the likelihood of the observables. In a dynamic framework with multiple missing values, the multiple integration is computationally infeasible.
With panel data, the problem can be simplified by focusing only on the subsample in which previous realized values are observed (e.g. Arellano, Bover and Labeaga (1999)).

The advantages of the random effects (RE) specification are: (a) the number of parameters stays constant when the sample size increases; (b) it allows the derivation of efficient estimators that make use of both within and between (group) variation; (c) it allows the estimation of the impact of time-invariant variables. The disadvantage is that one has to specify a conditional density of α_i given x_i = (x_i1, . . ., x_iT), f(α_i | x_i), while the α_i are unobservable. A common assumption is that f(α_i | x_i) is identical to the marginal density f(α_i). However, if the effects are correlated with x_it, or if there is a fundamental difference among individual units — i.e., conditional on x_it, y_it cannot be viewed as a random draw from a common distribution — the common RE model is misspecified and the resulting estimator is biased.

The advantages of the fixed effects (FE) specification are that it allows the individual- and/or time-specific effects to be correlated with the explanatory variables x_it, and it does not require an investigator to model their correlation patterns. The disadvantages of the FE specification are: (a) the number of unknown parameters increases with the number of sample observations; in the case when T (or N for λ_t) is finite, this introduces the classical incidental parameter problem (e.g. Neyman and Scott (1948)); (b) the FE estimator does not allow the estimation of coefficients that are time-invariant.

In other words, the advantages of the RE specification are the disadvantages of the FE specification, and vice versa. To choose between the two specifications, Hausman (1978) notes that the FE estimator (or GMM), θ̂_FE, is consistent whether α_i is fixed or random, while the commonly used RE estimator (or GLS), θ̂_RE, is consistent and efficient only when α_i is indeed uncorrelated with x_it, and is inconsistent if α_i is correlated with x_it.

The advantage of the RE specification is that there is no incidental parameter problem. The problem is that f(α_i | x_i) is in general unknown. If a wrong f(α_i | x_i) is postulated, maximizing the wrong likelihood function will not yield a consistent estimator of β. Moreover, the derivation of the marginal likelihood through multiple integration may be computationally infeasible. The advantage of the FE specification is that there is no need to specify f(α_i | x_i): the likelihood function will be the product of the individual likelihoods (e.g. (4.28)) if the errors are i.i.d. The disadvantage is that it introduces incidental parameters.

Longitudinal (Panel and Time Series Cross-Section) Data. Nathaniel Beck, Department of Politics, NYU, New York, NY 10012. Jan. 2004.

What is longitudinal data? Data observed over time as well as over space. Pure cross-section data have many limitations (Kramer, 1983): there is only one historical context. A (single) time series allows for multiple historical contexts, but only one spatial location. Longitudinal data are repeated observations on units observed over time. They are a subset of hierarchical data: observations that are correlated because there is some tie to the same unit. E.g. in educational studies, where we observe student i in school u, presumably there is some tie between the observations in the same school. In such data we observe y_{j,u}, where u indicates a unit and j indicates the jth observation drawn from that unit; thus there is no relationship between y_{j,u} and y_{j,u′} even though they have the same first subscript. In true longitudinal data, t represents comparable time.

Generalized Least Squares. An alternative is GLS. If Ω is known (up to a scale factor), GLS is fully efficient and yields consistent estimates of the standard errors. The GLS estimates of β are given by

(X′Ω⁻¹X)⁻¹X′Ω⁻¹Y   (14)

with estimated covariance matrix

(X′Ω⁻¹X)⁻¹.   (15)

(Usually we simplify by finding some trick to do a simple transform on the observations that makes the resulting variance-covariance matrix of the errors satisfy the Gauss-Markov assumptions. Thus the common Cochrane-Orcutt transformation to eliminate serial correlation of the errors is almost GLS, as is weighted regression to eliminate heteroskedasticity.) The problem is that Ω is never known in practice (even up to a scale factor).
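Equations 14 and 15 can be checked numerically. The sketch below uses simulated data and, for simplicity, a diagonal Ω (pure heteroskedasticity), so GLS coincides with weighted least squares; it also illustrates the "simple transform" trick of rescaling the observations so the errors satisfy Gauss-Markov.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 400, 2
X = rng.normal(size=(n, k))
beta = np.array([1.0, -2.0])

# Known error covariance: heteroskedastic and diagonal for simplicity.
sig = rng.uniform(0.5, 3.0, size=n)
Omega = np.diag(sig ** 2)
y = X @ beta + sig * rng.normal(size=n)

Oinv = np.linalg.inv(Omega)
b_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)  # equation (14)
V_gls = np.linalg.inv(X.T @ Oinv @ X)                    # equation (15)

# The transform trick: dividing each observation by its error s.d. makes
# the errors homoskedastic, and OLS on the transformed data equals GLS.
Xw, yw = X / sig[:, None], y / sig
b_wls = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)

print(b_gls, b_wls)  # identical up to floating-point error
```

With a non-diagonal Ω the same formulas apply, but the transform becomes premultiplication by a matrix square root of Ω⁻¹ rather than a simple rescaling.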
Thus an estimate of Ω, Ω̂, is used in Equations 14 and 15. This procedure, FGLS, provides consistent estimates of β if Ω̂ is estimated from residuals computed from consistent estimates of β; OLS provides such consistent estimates. We denote the FGLS estimates of β by β̂.

In finite samples FGLS underestimates sampling variability (for normal errors). The basic insight used by Freedman and Peters is that X′Ω⁻¹X is a (weakly) concave function of Ω. FGLS uses an estimate Ω̂ in place of the true Ω. As a consequence, the expectation of the FGLS variance, over possible realizations of Ω̂, will be less than the variance computed with the true Ω. This holds even if Ω̂ is a consistent estimator of Ω. The greater the variance of Ω̂, the greater the downward bias. This problem is not severe if there are only a small number of parameters in the variance-covariance matrix to be estimated (as in Cochrane-Orcutt), but it is severe if there are many parameters relative to the amount of data.

ASIDE: Maximum likelihood would get this right, since we would estimate all parameters and take them into account. But with a large number of parameters in the error process, we would just see that ML is impossible. That would have been good.

Panel Data Analysis Using SAS. Abu Hassan Shaari Mohd Nor, Faculty of Economics and Business, Universiti Kebangsaan Malaysia; Fauziah Maarof, Faculty of Science, Universiti Putra Malaysia. 2007.

Advantages of panel data. According to Baltagi (2001) there are several advantages of using panel data compared to running the models on separate time series and cross-section data:
1) a large number of data points;
2) increased degrees of freedom and reduced collinearity;
3) improved efficiency of estimates; and
4) a broader scope of inference.

The Econometrics of Panel Data. Michel Mouchart, Institut de statistique, Université catholique de Louvain (B), 3rd March 2004.

Statistical modelling: benefits and limitations of panel data.

1.5.1 Some characteristic features of panel data. The object of this subsection is the features to bear in mind when modelling panel data.
- Size: often N (the number of individuals) is large while Ti (the length of each individual time series) is small, so N >> Ti — but this is not always the case. The number of variables is often large (often a multi-purpose survey).
- Sampling: often individuals are selected randomly; time is not. Rotating panels and split panels: individuals are partly renewed at each period.
- Non-independent data: among observations on the same individual, because of unobservable characteristics of that individual; among individuals, because of unobservable characteristics common to several individuals; between time periods, because of dynamic behaviour.

1.5.2 Some benefits from using panel data.

a) Controlling for individual heterogeneity. Example: state cigarette demand (Baltagi and Levin 1992). Units: 46 American states; time period: 1963-1988; endogenous variable: cigarette demand; explanatory variables: lagged endogenous variable, price, income. Consider other explanatory variables: Zi, time-invariant (religion, roughly stable over time; education; etc.), and Wt, state-invariant (TV and radio advertising from national campaigns). Problem: many of these variables are not available. This is heterogeneity (also known as frailty); remember that an omitted variable leads to bias (unless very specific hypotheses hold). Solutions with panel data: dummies (specific to i and/or to t) without killing the data, or differences with respect to individual averages, i.e. y_it ↦ (y_it − ȳ_i·).

b) More informative data sets: a larger sample size due to pooling the individual and time dimensions (in the balanced case, NT observations; in the unbalanced case, Σ_i Ti observations); more variability, hence
less collinearity (a frequent problem in time series data); often the variation between units is much larger than the variation within units.

c) Better study of the dynamics of adjustment. Distinguish repeated cross-sections (different individuals in different periods) from panel data (the SAME individuals in different periods). A cross-section is a photograph at one period; repeated cross-sections are different photographs at different periods; only panel data can model HOW individuals adjust over time. This is crucial for policy evaluation, life-cycle models, and intergenerational models.

d) Identification of parameters that would not be identified with pure cross-sections or pure time series. Example 1: does union membership increase wages? Panel data allow one to model BOTH union membership and individual characteristics for the individuals who enter the union during the sample period. Example 2: identifying the turnover in female participation in the labour market — or that of any other segment. In other words, panel data allow for more sophisticated behavioural models.

e) Estimation of aggregation bias, and often more precise measurements at the micro level.

Comparing the Fixed Effect and the Random Effect Models.

2.4.1 Comparing the hypotheses of the two models. The RE model and the FE model may be viewed within a hierarchical specification of a unique encompassing model. From this point of view, the two models are not fundamentally different; rather, they correspond to different levels of analysis within a unique hierarchical framework. More specifically, from a Bayesian point of view, where all the variables (latent or manifest) and parameters are jointly endowed with a (unique) probability measure, one
may consider the complete specification of the law of (y, μ, θ | Z, Z_μ) as follows:

(y | μ, θ, Z, Z_μ) ~ N(Zβ + Z_μ μ, σ² I_NT)   (2.64)
(μ | θ, Z, Z_μ) ~ N(0, σ²_μ I_N)   (2.65)
(θ | Z, Z_μ) ~ Q   (2.66)

where Q is an arbitrary prior probability on θ = (β, σ², σ²_μ). The above specification implies:

(y | θ, Z, Z_μ) ~ N(Zβ, σ²_μ Z_μ Z_μ′ + σ² I_NT)   (2.67)

Thus the FE model, i.e. (2.64), considers the distribution of (y | μ, θ, Z, Z_μ) as the sampling distribution, and the distributions of (μ | θ, Z, Z_μ) and (θ | Z, Z_μ) as the prior specification. The RE model, i.e. (2.67), considers the distribution of (y | θ, Z, Z_μ) as the sampling distribution and the distribution of (θ | Z, Z_μ) as the prior specification. Said differently, in the RE model μ is treated as a latent (i.e. not observable) variable, whereas in the FE model μ is treated as an incidental parameter. Moreover, the RE model is obtained from the FE model through a marginalization with respect to μ.

These remarks make clear that the FE model and the RE model should be expected to display different sampling properties. Also, the inference on μ is an estimation problem in the FE model whereas it is a prediction problem in the RE model: the difference between these two problems concerns the difference in the relevant sampling properties, i.e. with respect to the distribution of (y | μ, θ, Z, Z_μ) or of (y | θ, Z, Z_μ), and eventually the relevant risk functions, i.e. the sampling expectation of a loss due to an error between an estimated value and a (fixed) parameter, or between a predicted value and the realization of a (latent) random variable. This fact does not, however, imply that both levels might be used indifferently.
Indeed, from a sampling point of view: (i) the dimensions of the parameter spaces are drastically different. In the FE model, when N, the number of individuals, increases, the μ_i's, being incidental parameters, also increase in number: each new individual introduces a new parameter.
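The marginalization that turns (2.64)-(2.65) into (2.67) can be checked by simulation: draw μ ~ N(0, σ²_μ I_N) and ε ~ N(0, σ² I_NT) many times, form y = Zβ + Z_μ μ + ε, and compare the empirical covariance of y with σ²_μ Z_μ Z_μ′ + σ² I_NT. A small NumPy sketch, with arbitrary illustrative dimensions and values:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T = 4, 3                       # 4 individuals, 3 periods -> NT = 12
NT = N * T
Z = rng.normal(size=(NT, 2))      # regressor matrix
beta = np.array([0.5, -1.0])
Zmu = np.kron(np.eye(N), np.ones((T, 1)))  # individual-dummy design Z_mu
s2, s2_mu = 1.0, 2.0              # sigma^2 and sigma^2_mu

draws = 200_000
mu = rng.normal(0.0, np.sqrt(s2_mu), size=(draws, N))
eps = rng.normal(0.0, np.sqrt(s2), size=(draws, NT))
y = Z @ beta + mu @ Zmu.T + eps   # FE-level model: y = Z beta + Z_mu mu + eps

emp_cov = np.cov(y, rowvar=False)                 # Monte Carlo covariance
theo_cov = s2_mu * Zmu @ Zmu.T + s2 * np.eye(NT)  # equation (2.67)
max_err = np.abs(emp_cov - theo_cov).max()
print(max_err)  # small: the RE covariance matches the marginalized FE model
```

The block-diagonal pattern of σ²_μ Z_μ Z_μ′ is exactly the within-individual error correlation that the RE (GLS) estimator exploits.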