Browsing by Author "Bennett, Randy Elliot"
Now showing 1 - 2 of 2
Item
Expert-system scores for complex constructed-response quantitative items: A study of convergent validity (1991)
Bennett, Randy Elliot; Sebrechts, Marc M.; Rock, Donald A.

This study investigated the convergent validity of expert-system scores for four mathematical constructed-response item formats. A five-factor model comprising four constructed-response format factors and a Graduate Record Examination (GRE) General Test quantitative factor was posed. Confirmatory factor analysis was used to test the fit of this model and to compare it with several alternatives. The five-factor model fit well, although a solution comprising two highly correlated dimensions (GRE-quantitative and constructed-response) represented the data almost as well. These results extend the meaning of the expert system’s constructed-response scores by relating them to a well-established quantitative measure and by indicating that they signify the same underlying proficiency across item formats.

Index terms: automatic scoring, constructed response, expert system, free-response items, open-ended items.

Item
The relationship of expert-system scored constrained free-response items to multiple-choice and open-ended items (1990)
Bennett, Randy Elliot; Rock, Donald A.; Braun, Henry I.; Frye, Douglas; Spohrer, James C.; Soloway, Elliot

This study examined the relationship of an expert-system scored constrained free-response item (requiring the student to debug a faulty computer program) to two other item types: (1) multiple-choice and (2) free-response (requiring production of a program). Confirmatory factor analysis was used to test the fit of a three-factor model to these data and to compare it with three alternatives. These models were fit using two random-half samples, one given a faulty program containing one bug and the other a program with three bugs.

A single-factor model best fit the data for the sample taking the one-bug constrained free-response item, and a two-factor model fit the data somewhat better for the second sample. In addition, the factor intercorrelations showed this item type to be highly related to both the free-response and multiple-choice measures.

Index terms: artificial intelligence, constructed-response items, expert-system scoring, free-response items, open-ended items.
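Both abstracts describe comparing the fit of nested confirmatory factor models (e.g., a five-factor model against a two-factor alternative). One standard way to make such a comparison is a chi-square difference (likelihood-ratio) test on the two models' fit statistics. The sketch below illustrates the mechanics only; the fit statistics shown are hypothetical and do not come from either study.

```python
from scipy.stats import chi2


def chi_square_difference(chisq_restricted, df_restricted, chisq_full, df_full):
    """Likelihood-ratio test for nested factor models.

    The restricted model (fewer factors, more degrees of freedom) is
    compared against the fuller model; a small p-value indicates the
    restriction significantly worsens fit.
    """
    d_chisq = chisq_restricted - chisq_full
    d_df = df_restricted - df_full
    p_value = chi2.sf(d_chisq, d_df)  # upper-tail probability
    return d_chisq, d_df, p_value


# Hypothetical fit statistics, for illustration only:
# a two-factor (restricted) model vs. a five-factor (full) model.
d, ddf, p = chi_square_difference(
    chisq_restricted=120.5, df_restricted=10,
    chisq_full=95.2, df_full=5,
)
```

Here a small p-value would favor retaining the fuller model, while a non-significant difference would suggest the more parsimonious model represents the data about as well, which is the kind of judgment both abstracts report.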