Real and simulated datasets were used to
investigate the effects of the systematic variation of
two major variables on the operating characteristics
of computerized adaptive testing (CAT)
applied to instruments consisting of polychotomously
scored rating scale items. The two
variables studied were the item selection procedure
and the stepsize method used until maximum
likelihood trait estimates could be calculated. The
findings suggested that (1) item pools that consist
of as few as 25 items may be adequate for CAT;
(2) the variable stepsize method of preliminary
trait estimation produced fewer cases of nonconvergence
than the use of a fixed stepsize
procedure; and (3) the scale value item selection
procedure used in conjunction with a minimum
standard error stopping rule outperformed the
information item selection technique used in
conjunction with a minimum information stopping
rule in terms of the frequencies of nonconvergent
cases, the number of items administered, and the
correlations of CAT θ estimates with full-scale
estimates and known θ values. The implications
of these findings for implementing CAT with
rating scale items are discussed. Index terms:
adaptive testing, attitude measurement, computerized
adaptive testing, item response theory, rating scale model.
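The procedure the abstract compares can be sketched in code. Below is a minimal, self-contained illustration (not Dodd's exact implementation) of a CAT loop for rating scale items: scale value item selection, a variable stepsize rule used before a finite maximum likelihood estimate exists, and a minimum standard error stopping rule. The item pool, thresholds, stepsize rule, and SE cutoff are all illustrative assumptions.

```python
import math
import random

def rsm_probs(theta, b, taus):
    """Category probabilities for one item under Andrich's rating scale model.

    b is the item scale value; taus are the shared category thresholds,
    so categories run 0..len(taus)."""
    logits, s = [0.0], 0.0
    for tau in taus:
        s += theta - b - tau
        logits.append(s)
    m = max(logits)
    ex = [math.exp(v - m) for v in logits]
    tot = sum(ex)
    return [v / tot for v in ex]

def item_info(theta, b, taus):
    """Fisher information: the variance of the item score at theta."""
    p = rsm_probs(theta, b, taus)
    mean = sum(k * pk for k, pk in enumerate(p))
    return sum(k * k * pk for k, pk in enumerate(p)) - mean * mean

def mle_theta(responses, bs, taus, lo=-4.0, hi=4.0, steps=161):
    """Grid-search maximum likelihood estimate of theta."""
    best, best_ll = lo, -math.inf
    for i in range(steps):
        th = lo + (hi - lo) * i / (steps - 1)
        ll = sum(math.log(rsm_probs(th, b, taus)[x])
                 for x, b in zip(responses, bs))
        if ll > best_ll:
            best, best_ll = th, ll
    return best

def cat_rsm(true_theta, pool_b, taus, se_stop=0.4, seed=1):
    """Scale value item selection with a minimum standard error stopping rule."""
    rng = random.Random(seed)
    top = len(taus)                       # index of the highest category
    theta, admin, resp = 0.0, [], []
    remaining = list(range(len(pool_b)))
    while remaining:
        # scale value selection: unused item whose scale value is nearest theta
        j = min(remaining, key=lambda i: abs(pool_b[i] - theta))
        remaining.remove(j)
        # simulate a response from the known true theta
        u, x, cum = rng.random(), top, 0.0
        for k, pk in enumerate(rsm_probs(true_theta, pool_b[j], taus)):
            cum += pk
            if u <= cum:
                x = k
                break
        admin.append(j)
        resp.append(x)
        if all(r == 0 for r in resp):
            # all responses in the lowest category: no finite MLE exists yet.
            # One plausible variable stepsize rule: halve the distance toward
            # the low extreme of the pool's scale values.
            theta = (theta + min(pool_b)) / 2
        elif all(r == top for r in resp):
            theta = (theta + max(pool_b)) / 2
        else:
            theta = mle_theta(resp, [pool_b[i] for i in admin], taus)
            info = sum(item_info(theta, pool_b[i], taus) for i in admin)
            # stop once the standard error falls below the cutoff
            if info > 0 and 1 / math.sqrt(info) < se_stop:
                break
    return theta, len(admin)

# Example with a 25-item pool, echoing the pool size the findings mention
pool = [-3 + 6 * i / 24 for i in range(25)]
est, n_items = cat_rsm(true_theta=1.0, pool_b=pool, taus=[-0.6, 0.0, 0.6])
```

A fixed stepsize variant would replace the halving rule with a constant increment (e.g. ±0.7 per item); the abstract reports that the variable rule produced fewer nonconvergent cases.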
Dodd, Barbara G.
The effect of item selection procedure and stepsize on computerized adaptive attitude measurement using the rating scale model.
Retrieved from the University of Minnesota Digital Conservancy.