Browsing by Author "Camilli, Gregory"
Item: A conceptual analysis of differential item functioning in terms of a multidimensional item response model (1992). Camilli, Gregory.

Differential item functioning (DIF) has been informally conceptualized as multidimensionality. Recently, more formal descriptions of DIF as multidimensionality have become available in the item response theory literature. This approach assumes that DIF is not a difference in the item parameters of two groups; rather, it is a shift in the distribution of ability along a secondary trait that influences the probability of a correct item response. That is, one group is relatively more able on an ability such as test-wiseness. The parameters of the secondary distribution are confounded with item parameters by unidimensional DIF detection models, and this manifests as differences between estimated item parameters. However, DIF is confounded with impact in multidimensional tests, which may be a serious limitation of unidimensional detection methods in some situations. In the multidimensional approach, DIF is considered to be a function of the educational histories of the examinees. Thus, a better tool for understanding DIF may be provided through structural modeling with external variables that describe background and schooling experience.

Index terms: differential item functioning, factor analysis, IRT, item bias, LISREL, multidimensionality.

Item: Scale shrinkage in vertical equating (1993). Camilli, Gregory; Yamamoto, Kentaro; Wang, Ming-mei.

As an alternative to equipercentile equating in the area of multilevel achievement test batteries, item response theory (IRT) vertical equating has produced unexpected results. When expanded standard scores were obtained to link the Comprehensive Test of Basic Skills and the California Achievement Test, the variance of test scores diminished both within particular grade levels from fall to spring, and also from lower to upper grade levels.
Equipercentile equating, on the other hand, has resulted in increasing variance both within and across grade levels, although the increases are not linear across grade levels. Three potential causes of scale shrinkage are discussed, and a more comprehensive, model-based approach to establishing vertical scales is described. Test data from the National Assessment of Educational Progress were used to estimate the distribution of ability at grades 4, 8, and 12 for several math achievement subtests. For each subtest, the variance of scores increased from grade 4 to grade 8; however, beyond grade 8 the results were not uniform.

Index terms: developmental scores, equating, IRT scaling, maximum likelihood estimation, National Assessment of Educational Progress (NAEP), scale shrinkage, vertical equating.
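The mechanism described in the first abstract, in which a group difference on a secondary trait masquerades as a difference in item parameters, can be sketched numerically. The following is a minimal simulation under a two-dimensional logistic item response model; the slopes, the difficulty, and the focal group's 0.5-SD deficit on the secondary trait are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(0)

def p_correct(theta1, theta2, a1=1.0, a2=0.8, b=0.0):
    # Two-dimensional logistic item response function: the item loads on a
    # primary trait (theta1) and a secondary trait (theta2), such as
    # test-wiseness. Parameter values here are illustrative assumptions.
    z = a1 * theta1 + a2 * theta2 - b
    return 1.0 / (1.0 + math.exp(-z))

def group_mean_p(mu2, n=100_000):
    # Both groups share the same primary-trait distribution N(0, 1);
    # only the mean of the secondary trait (mu2) differs between groups.
    total = 0.0
    for _ in range(n):
        theta1 = random.gauss(0.0, 1.0)
        theta2 = random.gauss(mu2, 1.0)
        total += p_correct(theta1, theta2)
    return total / n

p_ref = group_mean_p(mu2=0.0)    # reference group
p_foc = group_mean_p(mu2=-0.5)   # focal group, lower on the secondary trait

# The item parameters are identical for both groups, yet the focal group
# answers correctly less often. A unidimensional DIF analysis, which models
# only theta1, would attribute this gap to a group difference in item
# parameters rather than to the shift in the secondary-trait distribution.
print(f"reference: {p_ref:.3f}  focal: {p_foc:.3f}")
```

The point of the sketch is that the item itself is identical for both groups; the apparent "item bias" is produced entirely by the shift in the secondary-trait distribution, which is the confounding of DIF with impact that the abstract attributes to unidimensional detection models.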