Browsing by Author "McBride, James R."
Now showing 1 - 9 of 9
Item: Adaptive and conventional versions of the DAT: The first complete test battery comparison (1989)
Henly, Susan J.; Klebe, Kelli J.; McBride, James R.; Cudeck, Robert
A group of covariance structure models was examined to ascertain the similarity between conventionally administered and computerized adaptive (CAT) versions of the complete battery of the Differential Aptitude Tests (DAT). Two factor analysis models developed from classical test theory and three models with a multiplicative structure for these multitrait-multimethod data were developed and then fit to sample data in a double cross-validation design. All three direct-product models performed better than the factor analysis models in both calibration and cross-validation subsamples. The cross-validated, disattenuated correlation between the administration methods in the best-performing direct-product model was very high in both groups (.98 and .97), suggesting that the CAT version of the DAT is an adequate representation of the conventional test battery. However, some evidence suggested that there are substantial differences between the printed and computerized versions of the one speeded test in the battery. Index terms: adaptive tests, computerized adaptive testing, covariance structure, cross-validation, Differential Aptitude Tests, direct-product models, factor analysis, multitrait-multimethod matrices.

Item: Bias and information of Bayesian adaptive testing (1984)
Weiss, David J.; McBride, James R.
Monte Carlo simulation was used to investigate score bias and information characteristics of Owen's Bayesian adaptive testing strategy and to examine possible causes of score bias. Factors investigated in three related studies included the effects of an accurate prior θ estimate, the effects of item discrimination, and the effects of fixed versus variable test length.
Data were generated from a three-parameter logistic model for 3,100 simulees in each of eight data sets, and Bayesian adaptive tests were administered, drawing items from a "perfect" item pool. Results showed that the Bayesian adaptive test yielded unbiased θ estimates and relatively flat information functions only in the situation in which an accurate prior θ estimate was used. When a constant prior θ estimate was used with a fixed test length, severe bias was observed that varied with item discrimination. A different pattern of bias was observed with variable test length and a constant prior. Information curves for the constant prior conditions generally became more peaked and asymmetric with increasing item discrimination. In the variable test length condition, the test length required to achieve a specified level of the posterior variance of θ estimates was an increasing function of θ level. These results indicate that θ estimates from Owen's Bayesian adaptive testing method are affected by the prior θ estimate used and that the method does not provide measurements that are unbiased and equiprecise except when an accurate prior θ estimate is used.

Item: Relationship between corresponding Armed Services Vocational Aptitude Battery (ASVAB) and computerized adaptive testing (CAT) subtests (1984)
Moreno, Kathleen E.; Wetzel, C. Douglas; McBride, James R.; Weiss, David J.
The relationships between selected subtests from the Armed Services Vocational Aptitude Battery (ASVAB) and corresponding subtests administered as computerized adaptive tests (CAT) were investigated using Marine recruits as subjects. Three adaptive subtests were shown to correlate as well with ASVAB as did a second administration of ASVAB, even though the CAT subtests contained only half the number of items. Factor analysis showed the CAT subtests to load on the same factors as the corresponding ASVAB subtests, indicating that the same abilities were being measured.
The preenlistment Armed Forces Qualification Test (AFQT) composite scores were predicted as well from the CAT subtest scores as from the retest ASVAB subtest scores, even though the CAT contained only three of the four AFQT subtests. It is concluded that CAT can achieve the same measurement precision as a conventional test with half the number of items.

Item: Research Reports, 1974, 1-5 (University of Minnesota, Department of Psychology, 1974)
DeWitt, Louis J.; Weiss, David J.; McBride, James R.; Larkin, Kevin C.; Betz, Nancy E.

Item: Research Reports, 1975, 1-6 (University of Minnesota, Department of Psychology, 1975)
Weiss, David J.; Larkin, Kevin C.; Vale, David C.; McBride, James R.; Betz, Nancy; Sympson, James B.; Linn, Robert L.; Bock, Darrell R.

Item: Research Reports, 1976, 1-5 (University of Minnesota, Department of Psychology, 1976)
McBride, James R.; Weiss, David J.

Item: Research Reports, 1977, 1-7 (University of Minnesota, Department of Psychology, 1977)
Bejar, Isaac I.; McBride, James R.; Pine, Steven M.; Sympson, James B.; Vale, David C.

Item: Research Reports, 1983, 1-3 (University of Minnesota, Department of Psychology, 1983)
Martin, John T.; McBride, James R.; Weiss, David J.; Suhadolnik, Debra

Item: Some properties of a Bayesian adaptive ability testing strategy (1977)
McBride, James R.
Four Monte Carlo simulation studies of Owen's Bayesian sequential procedure for adaptive mental testing were conducted. In contrast to previous simulation studies of this procedure, which concentrated on evaluating it in terms of the correlation of its test scores with simulated ability in a normal population, these four studies explored a number of additional properties, both in a normally distributed population and in a distribution-free context. Study 1 replicated previous studies with finite item pools, but examined such properties as the bias of estimate, mean absolute error, and correlation of test length with ability.
Studies 2 and 3 examined the same variables in a number of hypothetical infinite item pools, investigating the effects of item discriminating power, guessing, and variable versus fixed test length. Study 4 investigated some properties of the Bayesian test scores as latent trait estimators, including the conditional bias of the ability estimates, the information curve of the trait estimates, and the relationship of test length to ability level. The results of these studies indicated that the ability estimates derived from the Bayesian testing strategy were highly correlated with ability level. However, the ability estimates were also highly correlated with the number of items administered, were nonlinearly biased, and did not provide equally precise measurement at all levels of ability.
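Several of the abstracts above (Weiss and McBride, 1984; McBride, 1977) describe Monte Carlo studies of Owen's Bayesian sequential procedure under a three-parameter logistic (3PL) response model. As a rough illustration of the kind of simulation involved, the sketch below generates 3PL responses and runs a simple Bayesian adaptive test. It is not the procedure used in these studies: it uses a discrete grid posterior rather than Owen's closed-form normal approximation, and a simplified difficulty-matching item-selection rule; the pool parameters are likewise illustrative assumptions.

```python
import math
import random

def p3pl(theta, a, b, c):
    """Three-parameter logistic probability of a correct response."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def bayes_update(grid, post, a, b, c, correct):
    """Multiply the discrete posterior over theta by the item likelihood
    and renormalize (a grid approximation, not Owen's closed-form update)."""
    like = [p3pl(th, a, b, c) if correct else 1.0 - p3pl(th, a, b, c)
            for th in grid]
    new = [w * l for w, l in zip(post, like)]
    s = sum(new)
    return [w / s for w in new]

def adaptive_test(true_theta, pool, n_items, rng):
    """Adaptively administer n_items from pool (a list of (a, b, c) tuples):
    repeatedly select the unused item whose difficulty b is closest to the
    current posterior mean, simulate a 3PL response for a simulee at
    true_theta, and update the posterior. Returns the final posterior-mean
    estimate of theta."""
    grid = [k / 10.0 for k in range(-40, 41)]          # theta grid, -4.0 .. 4.0
    prior = [math.exp(-th * th / 2.0) for th in grid]  # N(0, 1) prior
    s = sum(prior)
    post = [w / s for w in prior]
    used = set()
    for _ in range(n_items):
        est = sum(th * w for th, w in zip(grid, post))
        j = min((k for k in range(len(pool)) if k not in used),
                key=lambda k: abs(pool[k][1] - est))
        used.add(j)
        a, b, c = pool[j]
        correct = rng.random() < p3pl(true_theta, a, b, c)
        post = bayes_update(grid, post, a, b, c, correct)
    return sum(th * w for th, w in zip(grid, post))

if __name__ == "__main__":
    # Hypothetical pool: 50 items with difficulties spread over [-3, 3]
    pool = [(1.2, -3.0 + 6.0 * k / 49.0, 0.2) for k in range(50)]
    print(adaptive_test(1.0, pool, 30, random.Random(0)))
```

In a sketch like this, the bias discussed in the abstracts shows up as shrinkage of the posterior mean toward the prior, most visibly at extreme true θ values and short test lengths, which is consistent with the reported finding that precision was not equal at all ability levels.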