Browsing by Author "Kim, Seock-Ho"
Now showing 1 - 4 of 4
Item: Detection of differential item functioning in the graded response model (1993)
Authors: Cohen, Allan S.; Kim, Seock-Ho; Baker, Frank B.
Abstract: Methods for detecting differential item functioning (DIF) have been proposed primarily for the item response theory dichotomous response model. Three measures of DIF for the dichotomous response model are extended to include Samejima's graded response model: two measures based on area differences between item true score functions, and a χ² statistic for comparing differences in item parameters. An illustrative example is presented.
Index terms: differential item functioning, graded response model, item response theory.

Item: An investigation of Lord's procedure for the detection of differential item functioning (1994)
Authors: Kim, Seock-Ho; Cohen, Allan S.; Kim, Hae-Ok
Abstract: Type I error rates of Lord's χ² test for differential item functioning were investigated using Monte Carlo simulations. Two- and three-parameter item response theory (IRT) models were used to generate 50-item tests for samples of 250 and 1,000 simulated examinees. Item parameters were estimated using two algorithms (marginal maximum likelihood estimation and marginal Bayesian estimation) for three IRT models (the three-parameter model, the three-parameter model with a fixed guessing parameter, and the two-parameter model). Proportions of significant χ²s at selected nominal α levels were compared to those from joint maximum likelihood estimation as reported by McLaughlin & Drasgow (1987). Type I error rates for the three-parameter model consistently exceeded theoretically expected values. Results for the three-parameter model with a fixed guessing parameter and for the two-parameter model were consistently lower than the expected values at the α levels in this study.
Index terms: differential item functioning, item response theory, Lord's χ².

Item: An investigation of the likelihood ratio test for detection of differential item functioning (1996)
Authors: Cohen, Allan S.; Kim, Seock-Ho; Wollack, James A.
Abstract: Type I error rates for the likelihood ratio test for detecting differential item functioning (DIF) were investigated using Monte Carlo simulations. Two- and three-parameter item response theory (IRT) models were used to generate 100 datasets of a 50-item test for samples of 250 and 1,000 simulated examinees for each IRT model. Item parameters were estimated by marginal maximum likelihood for three IRT models: the three-parameter model, the three-parameter model with a fixed guessing parameter, and the two-parameter model. All DIF comparisons were simulated by randomly pairing two samples from each sample size and IRT model condition so that, for each sample size and IRT model condition, there were 50 pairs of reference and focal groups. Type I error rates for the two-parameter model were within theoretically expected values at each of the α levels considered. Type I error rates for the three-parameter model and for the three-parameter model with a fixed guessing parameter, however, differed from the theoretically expected values at the α levels considered.
Index terms: bias, differential item functioning, item bias, item response theory, likelihood ratio test for DIF.

Item: A minimum χ² method for equating tests under the graded response model (1995)
Authors: Kim, Seock-Ho; Cohen, Allan S.
Abstract: The minimum χ² method for computing equating coefficients for tests with dichotomously scored items was extended to the case of Samejima's graded response items. The minimum χ² method was compared with the test response function method (also referred to as the test characteristic curve method), in which the equating coefficients were obtained by matching the test response functions of the two tests. The minimum χ² method was much less demanding computationally and yielded equating coefficients that differed little from those obtained using the test response function approach.
Index terms: equating, graded response model, item response theory, minimum χ² method, test response function method.
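The test response function (test characteristic curve) matching that the last abstract compares against can be sketched in a few lines. This is a minimal illustration only, using the dichotomous 2PL model for brevity rather than the graded response model the paper studies, and a crude grid search in place of the numerical optimization an actual equating program would use; all function names and parameter values here are hypothetical.

```python
import math

def p_2pl(theta, a, b, A=1.0, B=0.0):
    """2PL response probability after rescaling item parameters
    (a, b) -> (a / A, A * b + B), the usual IRT scale transformation."""
    return 1.0 / (1.0 + math.exp(-1.7 * (a / A) * (theta - (A * b + B))))

def tcc(theta, items, A=1.0, B=0.0):
    """Test characteristic curve: expected number-correct score at theta."""
    return sum(p_2pl(theta, a, b, A, B) for a, b in items)

def equate_tcc(items_new, items_old, thetas, grid_A, grid_B):
    """Grid-search the equating coefficients (A, B) that minimize the
    squared difference between the rescaled new-form TCC and the
    old-form TCC over a set of quadrature points."""
    best = None
    for A in grid_A:
        for B in grid_B:
            loss = sum((tcc(t, items_new, A, B) - tcc(t, items_old)) ** 2
                       for t in thetas)
            if best is None or loss < best[0]:
                best = (loss, A, B)
    return best[1], best[2]

# Hypothetical example: build a "new form" whose parameters differ from the
# old form only by a known scale shift, then recover that shift.
A_true, B_true = 1.2, 0.5
items_old = [(1.0, 0.0), (1.2, -0.5), (0.8, 1.0)]
items_new = [(a * A_true, (b - B_true) / A_true) for a, b in items_old]
thetas = [-2.0, -1.0, 0.0, 1.0, 2.0]
A_hat, B_hat = equate_tcc(items_new, items_old, thetas,
                          [0.8, 1.0, 1.2, 1.4], [-0.5, 0.0, 0.5])
# A_hat, B_hat recover (1.2, 0.5), since the loss is exactly zero there
```

The minimum χ² method of the abstract replaces this curve-matching criterion with a χ²-type discrepancy on the item parameter estimates themselves, which is why it is computationally cheaper: it needs no evaluation of response functions over quadrature points.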