Logistic models can be used to estimate item parameters
of a unifactor test that are independent of the particular
examinee groups used in calibration. The Rasch model was used to
identify items in the Cattell Culture Fair Intelligence
Test that did not conform to this model for a
group of Nigerian high school students and for a
group of American students, groups believed to be
different with respect to race, culture, and type of
schooling. For both groups a factor analysis yielded
a single factor accounting for 90% of the test’s variance.
Although all items conformed to the Rasch
model for both groups, 13 of the 46 items showed a significant
between-score-group fit statistic in the American
sample, the Nigerian sample, or both. These items were removed
from further analyses. Bias was defined as a
difference in the estimation of item difficulties.
There were six items biased in "favor" of the
American group and five in "favor" of the Nigerian
group; the remaining 22 items were not identified
as biased. The American group appeared to perform
better on classification of geometric forms,
while the Nigerians did better on progressive matrices.
It was suggested that the replicability of
these findings be tested, especially across other
types of stimuli.
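The bias definition above can be sketched in code. Under the Rasch model, the probability that an examinee with ability theta answers an item of difficulty b correctly is 1 / (1 + exp(-(theta - b))); an item is flagged as biased when its difficulty estimates differ between groups. The sketch below is illustrative only: it uses the logit of the proportion correct as a crude difficulty estimate (real Rasch calibration uses conditional or joint maximum likelihood), and the item data and flagging threshold are hypothetical, not taken from the study.

```python
import math

# Rasch model: P(correct | theta, b) = 1 / (1 + exp(-(theta - b)))
def rasch_prob(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Crude per-group difficulty estimate: b ~ -logit(proportion correct).
# Real Rasch calibration uses maximum-likelihood estimation; this
# transform is only an illustrative approximation.
def item_difficulty(p_correct):
    return -math.log(p_correct / (1.0 - p_correct))

# Hypothetical per-item proportions correct in two examinee groups.
group_a = {"item1": 0.80, "item2": 0.55, "item3": 0.30}
group_b = {"item1": 0.78, "item2": 0.70, "item3": 0.28}

# Bias screen: flag items whose difficulty estimates differ markedly
# between groups (threshold chosen arbitrarily for illustration).
THRESHOLD = 0.5
flagged = []
for item in group_a:
    delta = item_difficulty(group_a[item]) - item_difficulty(group_b[item])
    if abs(delta) > THRESHOLD:
        flagged.append((item, delta))
        print(f"{item}: possible bias (delta b = {delta:+.2f})")
```

With these hypothetical numbers only item2 is flagged: it is markedly easier (lower b) for group B than for group A, mirroring the paper's finding that some items favored one group and some the other.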
Nenty, H. Johnson & Dinero, Thomas E. (1981). A cross-cultural analysis of the fairness of the Cattell Culture Fair Intelligence Test using the Rasch model. Applied Psychological Measurement, 5, 355-368. doi:10.1177/014662168100500309