Author: Xie, Aolin
Date accessioned: 2013-02-06
Date available: 2013-02-06
Date issued: 2012-10
URI: https://hdl.handle.net/11299/143994
Description: University of Minnesota Ph.D. dissertation. October 2012. Major: Educational Psychology. Advisor: Michael Harwell. 1 computer file (PDF); ix, 92 pages, appendix p. 77-92.
Abstract: A key methodological decision in a meta-analysis has traditionally been the choice between the classic fixed-effects (FE) and random-effects (RE) models assumed to underlie effect sizes (see Hedges & Olkin, 1985). Recent work has criticized these models because of the implausibility of their underlying assumptions (Bonett, 2008, 2009; Hunter & Schmidt, 2000). Bonett (2009a) proposed a modified FE model and recommended using contrasts to compare mean effect sizes among levels of discrete moderators. This study empirically investigated the behavior of the Bonett (2009a) modified FE model and the classic FE model for interval estimation and hypothesis testing of effect size contrasts. The results suggested that the two models performed similarly well with normally distributed data. The Bonett model was robust to nonnormality combined with unequal within-study variances and unequal within-study sample sizes, whereas the classic FE model showed inflated Type I error rates and lower statistical power under these conditions.
Language: en-US
Keywords: Bonett's meta-analytic model; Fixed-effects model; Meta-analysis; Power; Type I error rates
Title: An empirical study of Bonett's (2009) meta-analytic model
Type: Thesis or Dissertation
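For readers unfamiliar with the contrast-based moderator tests mentioned in the abstract, the sketch below illustrates how a contrast among subgroup mean effect sizes might be estimated and tested under the classic FE model, in the spirit of Hedges & Olkin (1985). This is a minimal illustration, not code from the dissertation; the function name fe_contrast, the inputs (per-study effect sizes d, sampling variances v, contrast weights c), and the example numbers are all assumptions introduced here.

# Illustrative sketch (assumed names and data, not from the dissertation):
# a classic fixed-effects contrast among subgroup mean effect sizes.
from statistics import NormalDist

def fe_contrast(groups, c, alpha=0.05):
    """groups: list of (effect_sizes, sampling_variances), one pair per moderator level;
    c: contrast weights, one per level, summing to zero."""
    norm = NormalDist()
    means, var_means = [], []
    for d, v in groups:
        w = [1.0 / vi for vi in v]                                # inverse-variance weights
        sw = sum(w)
        means.append(sum(wi * di for wi, di in zip(w, d)) / sw)   # FE subgroup mean effect size
        var_means.append(1.0 / sw)                                # variance of that subgroup mean
    L = sum(cj * mj for cj, mj in zip(c, means))                  # contrast estimate
    se = sum(cj ** 2 * vj for cj, vj in zip(c, var_means)) ** 0.5
    z = L / se                                                    # large-sample z statistic
    p = 2.0 * (1.0 - norm.cdf(abs(z)))                            # two-sided p-value
    zc = norm.inv_cdf(1.0 - alpha / 2.0)
    return {"L": L, "se": se, "z": z, "p": p,
            "ci": (L - zc * se, L + zc * se)}                     # (1 - alpha) confidence interval

# Hypothetical example: two moderator levels compared with weights (1, -1).
group1 = ([0.40, 0.55, 0.30], [0.020, 0.030, 0.025])
group2 = ([0.10, 0.05, 0.20], [0.020, 0.040, 0.030])
print(fe_contrast([group1, group2], c=[1, -1]))

The dissertation's focus is on how the interval coverage, Type I error rate, and power of such contrast tests behave under the Bonett (2009a) modified FE model versus the classic FE model; the snippet above only shows the classic FE computation as background.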