Browsing by Author "Bejar, Isaac I."
Now showing 1 - 11 of 11
Item: An application of the continuous response level model to personality measurement (1977). Bejar, Isaac I.
This paper reports an application of Samejima's latent trait model for continuous responses. A brief review of latent trait theory is presented, including an elaboration of the theory for test responses other than dichotomous responses, in order to put the continuous model in perspective. The model is then applied using the Impulsivity and Harmavoidance scales of Jackson's Personality Research Form. Special attention is given to the requirement that the model be invariant across populations and sex groups. Results showed that responses from males fit the model better than those from females, especially for the Harmavoidance scale. The practical and theoretical implications of the study are discussed.

Item: An approach to assessing unidimensionality revisited (1988). Bejar, Isaac I.
A reanalysis of data from Hambleton and Rovinelli (1986) argues that the methods suggested by Bejar (1980) are a valuable descriptive tool for assessing the unidimensionality assumption when a priori information is available about possible response factors. Index terms: achievement testing, item response theory, unidimensionality.

Item: Factorial invariance in student ratings of instruction (1981). Bejar, Isaac I.; Doyle, Kenneth O.
The factorial invariance of student ratings of instruction across three curricular areas was investigated by means of maximum likelihood factor analysis. The results indicate that a one-factor model was not completely adequate from a statistical point of view. Nevertheless, a single factor was accepted as reasonable from a practical point of view. It was concluded that the single factor was invariant across the three curricular groups. The reliability of the single factor was essentially the same in the three groups, and in every case it was very high. Some of the theoretical and practical implications of the study are discussed.

Item: A generative analysis of a three-dimensional spatial task (1990). Bejar, Isaac I.
The feasibility of incorporating research results from cognitive science into the modeling of performance on psychometric tests and the construction of test items is considered, particularly the feasibility of modeling performance on a three-dimensional rotation task within the context of item response theory (IRT). Three-dimensional items were selected because of the rich literature on the mental models that are used in their solution. An 80-item, three-dimensional rotation test was constructed. An inexpensive computer system was also developed to administer the test and record performance, including response-time data. Data were collected on high school juniors and seniors. As expected, angular disparity was a potent determinant of item difficulty. The applicability of IRT to these data was investigated by dichotomizing response time at increasing elapsed times and applying standard item parameter estimation procedures. It is concluded that this approach to psychometric modeling, which explicitly incorporates information on the mental models examinees use in solving an item, is workable and important for future developments in psychometrics. Index terms: cognitive psychology, continuous response, item response theory, mental rotation, response latency.

Item: A generative approach to the modeling of isomorphic hidden-figure items (1991). Bejar, Isaac I.; Yocom, Peter
A generative approach to psychometric modeling consists of encoding information about the cognitive processes and structures that underlie test performance into an item-generation algorithm in such a way that the generated items have known psychometric parameters. An important by-product of the approach is that the knowledge about the response process is tested every time a test is administered. Validation thus becomes an ongoing process rather than an occasional event. This approach is illustrated through an analysis of hidden-figure items, and is shown to be feasible. Index terms: construct validity, generative modeling, isomorphic problems, item difficulty, spatial ability, validation.

Item: Research Reports, 1973, 1-4 (University of Minnesota, Department of Psychology, 1973). Weiss, David J.; Betz, Nancy E.; Bejar, Isaac I.

Item: Research Reports, 1977, 1-7 (University of Minnesota, Department of Psychology, 1977). Bejar, Isaac I.; McBride, James R.; Pine, Steven M.; Sympson, James B.; Vale, David C.

Item: Research Reports, 1978, 1-5 (University of Minnesota, Department of Psychology, 1978). Pine, Steven M.; Weiss, David J.; Prestwood, Stephen J.; Church, Austin T.; Bejar, Isaac I.; Martin, John T.

Item: Research Reports, 1979, 1-7 (University of Minnesota, Department of Psychology, 1979). Bejar, Isaac I.; Weiss, David J.; Pine, Steven M.; Church, Austin T.; Gialluca, Kathleen; Kingsbury, G. Gage; Trabin, Tom E.

Item: A study of pre-equating based on item response theory (1982). Bejar, Isaac I.; Wingersky, Marilyn S.
This paper reports a feasibility study of using item response theory (IRT) to equate the Test of Standard Written English (TSWE). The study focused on the possibility of pre-equating, that is, deriving the equating transformation prior to the final administration of the test. The three-parameter logistic model was postulated as the response model, and its fit was assessed at the item, subscore, and total score levels. Minor problems were found at each of these levels, but on the whole the three-parameter model was found to portray the data well. The adequacy of the equating provided by IRT procedures was investigated in two TSWE forms. It was concluded that pre-equating does not appear to present problems beyond those inherent to IRT equating.

Item: Subject matter experts' assessment of item statistics (1983). Bejar, Isaac I.
This study was conducted to determine the degree to which subject matter experts could predict the difficulty and discrimination of items on the Test of Standard Written English. It was concluded that, despite an extended training period, the raters did not approach a high level of accuracy, nor were they able to pinpoint the factors that contribute to item difficulty and discrimination. Further research should attempt to uncover those factors by examining the items from a linguistic and psycholinguistic perspective. It is argued that by coupling linguistic features of the items with subject matter ratings, it may be possible to attain more accurate predictions of item difficulty and discrimination.
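For context, the three-parameter logistic model named in the pre-equating study above is conventionally written as follows (standard IRT notation; the symbols are not taken from this listing): the probability that an examinee with ability θ answers item i correctly is

```latex
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}}
```

where $a_i$ is the item's discrimination, $b_i$ its difficulty, and $c_i$ its lower asymptote (pseudo-guessing) parameter.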