Identifying measurement bias in test items with statistical procedures is important because it aids the development of fair, measurement-invariant tests. Many of the statistical procedures used to assess measurement invariance, however, fail to examine the root causes of item-level bias. This study suggests the field move from identifying biased items through intuition to statistically modeling the commonalities among sets of items in order to explain why measurement bias may occur. Specifically, this study combines research on cross-classified models and explanatory item response models to examine the common traits of items that display measurement bias. By combining both approaches, qualities of test items, qualities of test takers, and their contextual relationships are quantified. These results quantify the differential performance of student groups as a function of specific test characteristics, something not possible under current measurement invariance conventions. This study examines the usefulness of cross-classified MMMs for detecting measurement bias and compares these models with more traditional approaches. Four models are compared in terms of parameter recovery and model fit: a Rasch model, a latent regression model, a latent regression-linear logistic test model, and a cross-classified model. Potential uses and applications of the cross-classified model are discussed.
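To make the explanatory-modeling idea concrete, the following sketch simulates dichotomous Rasch responses whose item difficulties are generated from item features, as in a linear logistic test model (LLTM). The feature matrix Q, the effects eta, and all numeric values are hypothetical illustrations, and the proportion-based difficulty estimate is only a crude approximation, not the estimation method used in the study.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical LLTM structure: each item's Rasch difficulty b_i is a
# linear combination of item features (a Q-matrix) and feature effects eta.
Q = [[1, 0], [0, 1], [1, 1]]   # which hypothetical features each item involves
eta = [0.6, -0.9]              # hypothetical feature effects
b = [sum(q * e for q, e in zip(row, eta)) for row in Q]  # item difficulties

# Simulate Rasch responses for N persons with ability theta ~ N(0, 1):
# P(correct) = sigmoid(theta - b_i).
N = 2000
responses = []
for _ in range(N):
    theta = random.gauss(0.0, 1.0)
    responses.append([1 if random.random() < sigmoid(theta - bi) else 0
                      for bi in b])

# Crude difficulty recovery: logit of the proportion incorrect per item.
# (A real analysis would fit the Rasch/LLTM likelihood instead.)
p_correct = [sum(r[i] for r in responses) / N for i in range(len(b))]
est_b = [math.log((1.0 - p) / p) for p in p_correct]
```

Under the LLTM, the feature effects eta, rather than the individual item difficulties, carry the explanation: items sharing a feature share a difficulty component, which is the same logic the explanatory and cross-classified extensions use to relate item characteristics to group differences in performance.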