This study investigated the convergent validity
of expert-system scores for four mathematical
constructed-response item formats. A five-factor
model comprising four constructed-response format
factors and a Graduate Record Examination
(GRE) General Test quantitative factor was posited.
Confirmatory factor analysis was used to test the
fit of this model and to compare it with several alternatives.
The five-factor model fit well, although a
solution comprising two highly correlated
dimensions, GRE-quantitative and constructed-response,
represented the data almost as well.
These results extend the meaning of the expert
system’s constructed-response scores by relating
them to a well-established quantitative measure
and by indicating that they signify the same
underlying proficiency across item formats. Index
terms: automatic scoring, constructed response, expert
system, free-response items, open-ended items.
Bennett, R. E., Sebrechts, M. M., & Rock, D. A. (1991). Expert-system scores for complex constructed-response quantitative items: A study of convergent validity. Applied Psychological Measurement, 15, 227-239. doi:10.1177/014662169101500302