Browsing by Author "Weiss, David J."
Item: Bias and information of Bayesian adaptive testing (1984). Weiss, David J.; McBride, James R.

Monte Carlo simulation was used to investigate score bias and information characteristics of Owen's Bayesian adaptive testing strategy and to examine possible causes of score bias. Factors investigated in three related studies included the effects of an accurate prior θ estimate, of item discrimination, and of fixed versus variable test length. Data were generated from a three-parameter logistic model for 3,100 simulees in each of eight data sets, and Bayesian adaptive tests were administered, drawing items from a "perfect" item pool. Results showed that the Bayesian adaptive test yielded unbiased θ estimates and relatively flat information functions only when an accurate prior θ estimate was used. When a constant prior θ estimate was used with a fixed test length, severe bias was observed that varied with item discrimination; a different pattern of bias was observed with variable test length and a constant prior. Information curves for the constant-prior conditions generally became more peaked and asymmetric with increasing item discrimination. In the variable-length condition, the test length required to achieve a specified level of the posterior variance of the θ estimates was an increasing function of θ level.
These results indicate that θ estimates from Owen's Bayesian adaptive testing method are affected by the prior θ estimate used, and that the method provides measurements that are unbiased and equiprecise only when an accurate prior θ estimate is used.

Item: Bias Free Computerized Testing, March 1979 (University of Minnesota, Department of Psychology, 1979-03). Pine, Steven M.; Weiss, David J.

Item: Final Report: Computer-Based Measurement of Intellectual Capabilities (University of Minnesota, Department of Psychology, 1983-12). Weiss, David J.

Item: Final Report: Computerized Adaptive Ability Testing, April 1981 (University of Minnesota, Department of Psychology, 1981-04). Weiss, David J.

Item: Final Report: Computerized Adaptive Measurement of Achievement and Ability (University of Minnesota, Department of Psychology, 1985-06). Weiss, David J.

Item: Final Report: Computerized Adaptive Performance Evaluation, February 1980 (University of Minnesota, Department of Psychology, 1980-02). Weiss, David J.

Item: Improving measurement quality and efficiency with adaptive testing (1982). Weiss, David J.

Approaches to adaptive (tailored) testing based on item response theory are described and research results summarized. Through appropriate combinations of item pool design and different test termination criteria, adaptive tests can be designed (1) to improve both measurement quality and measurement efficiency, yielding measurements of equal precision at all trait levels; (2) to improve measurement efficiency for test batteries using item pools designed for conventional test administration; and (3) to improve the accuracy and efficiency of testing for classification (e.g., mastery testing).
Research results show that tests based on item response theory (IRT) can achieve measurements of equal precision at all trait levels, given an adequately designed item pool; these results contrast with those of conventional tests, which require trading bandwidth against fidelity/precision of measurement. Data also show reductions in the bias, inaccuracy, and root mean square error of ability estimates. Improvements in test fidelity observed in simulation studies are supported by live-testing data, in which adaptive tests required half as many items as conventional tests to achieve equal levels of reliability, and almost one-third as many to achieve equal levels of validity. When used with item pools from conventional tests, both simulation and live-testing results show reductions in test battery length relative to conventional tests, with no reduction in the quality of measurement. Adaptive tests designed for dichotomous classification also improve on conventional tests designed for the same purpose: simulation studies show reductions in test length and improvements in classification accuracy for adaptive versus conventional tests, and live-testing studies comparing adaptive tests with "optimal" conventional tests support these findings.
Thus, the research data show that IRT-based adaptive testing takes advantage of the capabilities of IRT to improve the quality and/or efficiency of measurement for each examinee.

Item: Polychotomous or Polytomous? (1995). Weiss, David J.

Item: Proceedings of the 1977 Computerized Adaptive Testing Conference (University of Minnesota, Department of Psychology, 1977-07). Weiss, David J.

Item: Proceedings of the 1979 Computerized Adaptive Testing Conference (University of Minnesota, Department of Psychology, 1979-06). Weiss, David J.

Item: Proceedings of the 1982 Item Response Theory and Computerized Adaptive Testing Conference (University of Minnesota, Department of Psychology, 1982-07). Weiss, David J.

Item: Relationship between corresponding Armed Services Vocational Aptitude Battery (ASVAB) and computerized adaptive testing (CAT) subtests (1984). Moreno, Kathleen E.; Wetzel, C. Douglas; McBride, James R.; Weiss, David J.

The relationships between selected subtests from the Armed Services Vocational Aptitude Battery (ASVAB) and corresponding subtests administered as computerized adaptive tests (CAT) were investigated using Marine recruits as subjects. Three adaptive subtests were shown to correlate as well with the ASVAB as did a second administration of the ASVAB, even though the CAT subtests contained only half the number of items. Factor analysis showed the CAT subtests to load on the same factors as the corresponding ASVAB subtests, indicating that the same abilities were being measured. The preenlistment Armed Forces Qualification Test (AFQT) composite scores were predicted as well from the CAT subtest scores as from the retest ASVAB subtest scores, even though the CAT contained only three of the four AFQT subtests.
It is concluded that CAT can achieve the same measurement precision as a conventional test with half the number of items.

Item: Research Reports, 1973, 1-4 (University of Minnesota, Department of Psychology, 1973). Weiss, David J.; Betz, Nancy E.; Bejar, Isaac I.

Item: Research Reports, 1974, 1-5 (University of Minnesota, Department of Psychology, 1974). DeWitt, Louis J.; Weiss, David J.; McBride, James R.; Larkin, Kevin C.; Betz, Nancy E.

Item: Research Reports, 1975, 1-6 (University of Minnesota, Department of Psychology, 1975). Weiss, David J.; Larkin, Kevin C.; Vale, David C.; McBride, James R.; Betz, Nancy; Sympson, James B.; Linn, Robert L.; Bock, Darrell R.

Item: Research Reports, 1976, 1-5 (University of Minnesota, Department of Psychology, 1976). McBride, James R.; Weiss, David J.

Item: Research Reports, 1978, 1-5 (University of Minnesota, Department of Psychology, 1978). Pine, Steven M.; Weiss, David J.; Prestwood, Stephen J.; Church, Austin T.; Bejar, Isaac I.; Martin, John T.

Item: Research Reports, 1979, 1-7 (University of Minnesota, Department of Psychology, 1979). Bejar, Isaac I.; Weiss, David J.; Pine, Steven M.; Church, Austin T.; Gialluca, Kathleen; Kingsbury, G. Gage; Trabin, Tom E.

Item: Research Reports, 1980, 1-5 (University of Minnesota, Department of Psychology, 1980). Gialluca, Kathleen; Weiss, David J.; Church, Austin T.; Thompson, Janet G.; Kingsbury, G. Gage

Item: Research Reports, 1981, 1-5 (University of Minnesota, Department of Psychology, 1981). Weiss, David J.; Davidson, Mark L.; Johnson, Marilyn F.; Prestwood, J. Steven; Kingsbury, G. Gage; Maurelli, Vincent A.; Gialluca, Kathleen
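Several of the abstracts above rely on the three-parameter logistic (3PL) model and on information-based adaptive item selection. As background only, here is a minimal Python sketch of those two ideas; it is not code from any of the listed reports, and the function names, the D = 1.7 scaling constant, and the example item parameters are conventional illustrative choices.

```python
import math

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta,
    for an item with discrimination a, difficulty b, and guessing c."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def item_information(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return (1.7 * a) ** 2 * (q / p) * ((p - c) / (1.0 - c)) ** 2

def next_item(theta_hat, pool, administered):
    """One common adaptive rule: administer the not-yet-used item with
    maximum information at the current ability estimate theta_hat.
    pool is a list of (a, b, c) tuples; administered is a set of indices."""
    candidates = [i for i in range(len(pool)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta_hat, *pool[i]))
```

For example, with three equally discriminating items of difficulty -2, 0, and 2, an examinee with a current estimate of θ = 0 would be given the middle-difficulty item, since it carries the most information there. The variable-length designs discussed above would repeat this selection step, re-estimating θ after each response, until the posterior variance of the θ estimate fell below a chosen threshold.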