Browsing by Author "Ackerman, Terry A."
Now showing 1 - 7 of 7
Item: A comparison of the information provided by essay, multiple-choice, and free-response writing tests (1988). Ackerman, Terry A.; Smith, Philip L.
This study investigated the similarity of the information provided by direct and indirect methods of writing assessment. The skills required by each of these techniques provide a framework for a cognitive model of writing skills from which the procedures can be compared. It is suggested that practitioners interested in reliably measuring all aspects of the proposed writing-process continuum, as characterized by this cognitive model, use both indirect and direct methods.
Index terms: Confirmatory factor analysis, Essay tests, Free-response tests, Multiple-choice tests, Writing assessment, Writing processes.

Item: Creating a test information profile for a two-dimensional latent space (1994). Ackerman, Terry A.
In some cognitive testing situations it is believed, despite reporting only a single score, that the test items differentiate levels of multiple traits. In such situations, the reported score may represent quite disparate composites of these multiple traits. Thus, when attempting to interpret a single score from a set of multidimensional items, several concerns naturally arise. First, it is important to know what composite of traits is being measured at each level of the reported score scale. Second, it is necessary to confirm that all examinees, no matter where they lie in the latent trait space, are being measured on the same composite of traits. Thus, the role of multidimensionality in the interpretation or meaning given to various score levels must be examined. This paper presents a method for computing multidimensional information and provides examples of how different aspects of test information can be displayed graphically to form a profile of a test in a two-dimensional latent space.
Index terms: information, item response theory, multidimensional item response theory, test information.

Item: Developments in Multidimensional Item Response Theory (1996). Ackerman, Terry A.

Item: Graphical representation of multidimensional item response theory analyses (1996). Ackerman, Terry A.
This paper illustrates how graphical analyses can enhance the interpretation and understanding of multidimensional item response theory (IRT) analyses. Many unidimensional IRT concepts, such as item response functions and information functions, can be extended to multiple dimensions; however, as dimensionality increases, new problems and issues arise, most notably how to represent these features within a multidimensional framework. Examples are provided of several graphical representations, including item response surfaces, information vectors, and centroid plots of conditional two-dimensional trait distributions. All graphs are intended to supplement quantitative and substantive analyses and thereby assist the test practitioner in determining more precisely such information as the construct validity of a test, the degree of measurement precision, and the consistency of interpretation of the number-correct score scale.
Index terms: dimensionality, graphical analysis, multidimensional item response theory, test analysis.

Item: The influence of conditioning scores in performing DIF analyses (1994). Ackerman, Terry A.; Evans, John A.
This study examined the effect of the conditioning score on the results of differential item functioning (DIF) analyses. Most DIF detection procedures match examinees from two groups of interest according to the examinees' test score (e.g., number correct) and then summarize performance differences across trait levels. DIF can occur whenever the conditioning criterion cannot account for the multidimensional interaction between items and examinees.
Response data were generated from a two-dimensional item response theory model for a 30-item test in which items measured uniformly spaced composites of two latent trait parameters, θ₁ and θ₂. Two DIF detection methods, the Mantel-Haenszel procedure and the simultaneous item bias test (SIBTEST), were applied under three sample-size conditions. When the DIF procedures were conditioned on the number-correct score or on a transformation of θ₁ or θ₂ alone, differential group performance followed hypothesized patterns. When the conditioning criterion was a function of both θ₁ and θ₂ (i.e., when the complete latent space was identified), DIF, as theory would predict, was eliminated for all items.
Index terms: construct validity, differential item functioning, item bias, Mantel-Haenszel procedure, SIBTEST.

Item: Unidimensional IRT calibration of compensatory and noncompensatory multidimensional items (1989). Ackerman, Terry A.
The characteristics of unidimensional ability estimates obtained from data generated using multidimensional compensatory IRT models were compared with those from noncompensatory IRT models. Reckase, Carlson, Ackerman, and Spray (1986) reported that when a compensatory model is used and item difficulty is confounded with dimensionality, the composition of the unidimensional ability estimates differs at different points along the unidimensional ability (θ) scale. Eight datasets (four compensatory, four noncompensatory) were generated for four different levels of correlation between the two-dimensional θs. In each dataset, difficulty was confounded with dimensionality, and the data were then calibrated using LOGIST and BILOG. The confounding of difficulty and dimensionality affected the BILOG calibration of response vectors with matched multidimensional item parameters more than it affected the LOGIST calibration.
As the correlation between the generated two-dimensional θs increased, the response data became more unidimensional, as shown in bivariate plots of the mean θ̂₁ versus the mean θ̂₂ for specified unidimensional quantiles.
Index terms: BILOG, compensatory IRT models, IRT ability estimation, LOGIST, multidimensional item response theory, noncompensatory IRT models.

Item: The use of unidimensional parameter estimates of multidimensional items in adaptive testing (1991). Ackerman, Terry A.
This study investigated the effect of using multidimensional items in a computerized adaptive test (CAT) setting that assumes all items are unidimensional. Previous research has suggested that the composite of multidimensional abilities estimated by a unidimensional IRT model is not constant throughout the unidimensional ability scale (Reckase, Carlson, Ackerman, & Spray, 1986). Results of this study suggest that unidimensional calibration of multidimensional data tends to "filter out" the multidimensionality. Items that measured a θ₁, θ₂ composite similar to the composite of the calibrated unidimensional θ scale had larger estimated unidimensional discrimination values. These items therefore had a greater probability of being administered in a CAT, where only the most informative items are selected. Results also suggest that if a CAT item pool contains items from several content areas measuring dissimilar θ₁, θ₂ composites, examinees with different unidimensional abilities may receive disparate proportions of items from the various content areas.
Index terms: adaptive testing, item response theory, multidimensionality, parallel tests, test construction.
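The information profiles and information vectors described in the two-dimensional latent space and graphical representation abstracts above rest on a standard MIRT result: a compensatory item measures best along the direction of its discrimination vector, with multidimensional discrimination MDISC = √(a₁² + a₂²), and Fisher information MDISC²·P·(1 − P) along that composite. A minimal sketch of that computation (the item parameters here are illustrative, not values from the papers):

```python
import math

def item_direction(a1, a2):
    """Angle (degrees from the theta1 axis) of the composite the item
    measures best, and its multidimensional discrimination (MDISC)."""
    mdisc = math.hypot(a1, a2)
    angle = math.degrees(math.atan2(a2, a1))
    return mdisc, angle

def directional_information(theta1, theta2, a1, a2, d):
    """Fisher information of a compensatory two-dimensional 2PL item,
    evaluated in the item's own direction of best measurement:
    I = MDISC^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-(a1 * theta1 + a2 * theta2 + d)))
    return (a1 ** 2 + a2 ** 2) * p * (1.0 - p)

# An item with a1 = a2 measures a 45-degree composite of the two traits.
mdisc, angle = item_direction(1.0, 1.0)
print(round(angle, 1))  # 45.0
```

Plotting these (MDISC, angle) pairs as arrows in the θ₁, θ₂ plane is the kind of "information vector" display the graphical-analysis abstract describes.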
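Of the two DIF procedures named in the conditioning-scores abstract, the Mantel-Haenszel procedure is the simpler: it pools 2×2 correct/incorrect tables across matched score levels into a common odds ratio, often reported on the ETS delta scale. A hedged sketch (the counts below are toy values, not data from the study):

```python
import math

def mantel_haenszel_alpha(tables):
    """tables: list of (A, B, C, D) counts per matched score level,
    where A/B are reference-group correct/incorrect and C/D are
    focal-group correct/incorrect on the studied item."""
    num = den = 0.0
    for A, B, C, D in tables:
        n = A + B + C + D
        if n == 0:
            continue  # skip empty score levels
        num += A * D / n
        den += B * C / n
    return num / den

def mh_delta(alpha):
    """ETS delta metric; values near 0 indicate negligible DIF."""
    return -2.35 * math.log(alpha)

# Identical group performance at every score level gives alpha = 1
# (and delta = 0), i.e., no DIF is flagged.
tables = [(30, 10, 30, 10), (20, 20, 20, 20)]
print(mantel_haenszel_alpha(tables))  # 1.0
```

The study's finding maps directly onto this machinery: when the matching variable in `tables` fails to capture both θ₁ and θ₂, the pooled odds ratio can drift from 1 even for unbiased items.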
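The compensatory/noncompensatory distinction in the 1989 calibration abstract can be made concrete with the two standard two-dimensional model forms (a sketch under common parameterizations; the parameter values are illustrative only):

```python
import math

def compensatory_p(theta1, theta2, a1, a2, d):
    """Compensatory M2PL: the abilities combine additively in the
    logit, so a high theta2 can offset a low theta1."""
    return 1.0 / (1.0 + math.exp(-(a1 * theta1 + a2 * theta2 + d)))

def noncompensatory_p(theta1, theta2, a1, a2, b1, b2):
    """Noncompensatory model: a product of per-dimension logistic
    terms, so a deficit on either trait caps the success probability."""
    p1 = 1.0 / (1.0 + math.exp(-a1 * (theta1 - b1)))
    p2 = 1.0 / (1.0 + math.exp(-a2 * (theta2 - b2)))
    return p1 * p2

# With one very low trait and one very high trait, the compensatory
# item is still answerable; the noncompensatory one is not.
print(compensatory_p(-3.0, 3.0, 1.0, 1.0, 0.0))          # 0.5
print(noncompensatory_p(-3.0, 3.0, 1.0, 1.0, 0.0, 0.0))  # ≈ 0.045
```

This difference in how the traits interact is why fitting a single unidimensional model to the two data types can behave so differently, as the abstract reports for the LOGIST and BILOG calibrations.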
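The adaptive-testing abstract notes that in a CAT "only the most informative items are selected," which is why multidimensional items aligned with the calibrated composite (and hence given larger discrimination estimates) dominate administration. A minimal sketch of maximum-information selection under a unidimensional 2PL (the item pool and θ̂ are hypothetical):

```python
import math

def info_2pl(theta, a, b):
    """Fisher information of a unidimensional 2PL item at theta:
    a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def select_next_item(theta_hat, pool):
    """Maximum-information CAT rule: index of the pool item (a, b)
    with the largest information at the current ability estimate."""
    return max(range(len(pool)), key=lambda i: info_2pl(theta_hat, *pool[i]))

# The high-discrimination item located near theta_hat wins.
pool = [(0.6, 0.0), (1.4, 0.1), (1.0, 2.0)]
print(select_next_item(0.0, pool))  # 1
```

Under this rule, a content area whose items measure a θ₁, θ₂ composite unlike the calibrated scale receives systematically smaller `a` estimates and is underselected, which is the imbalance the study reports.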