Browsing by Subject "measurement invariance"
Now showing 1 - 3 of 3
Item: Differential Item Functioning and Measurement Invariance of Self- and Proxy-Reports: An Evaluation of Objective Quality of Life Measures for People with Intellectual and Developmental Disabilities (2015-01), Hepperlen, Renee

Abstract: The field of intellectual and developmental disabilities (ID/DD) uses objective quality of life indicators for policy and program development (Verdugo, Schalock, Keith, & Stancliffe, 2005). An ongoing concern in this field is the assessment of quality of life for people who are unable to answer for themselves. In these instances, a proxy-respondent, someone who knows the person with ID/DD well, responds on his or her behalf. Research examining the efficacy of using proxy-respondents has yielded mixed results: some studies failed to show statistically significant differences in responses (McVilly, Burton-Smith, & Davidson, 2000; Rapley, Ridgway, & Beyer, 1998; Stancliffe, 1999), while other research has found meaningful differences between matched pairs of self- and proxy-respondents (Rapley et al., 1998). A principal limitation of these previous studies is their reliance on simplistic analytic methods, such as t-tests and correlations, to determine whether similarities existed between these matched groups. The present study extends this body of research by using differential item functioning and measurement invariance analyses to examine the use of self- and proxy-respondents. Specifically, this study examined the internal structure of three objective quality of life measures on the National Core Indicators: the Community Inclusion, Life Decisions, and Everyday Choices scales.
Study findings revealed that several items function differently for these two groups when respondents are compared at the same total scale score, which implies that construct-irrelevant differences affected some item responses (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999). In addition, an examination of measurement invariance established that only metric invariance fit these data, meaning that scale scores cannot be meaningfully compared across the two groups. These findings have policy- and program-evaluation implications, since construct irrelevance (AERA, APA, & NCME, 1999) indicates that, for the items identified as functioning differently, responses also reflect a construct separate from the one the scale intends to measure. With these differences, it becomes more difficult to conclude that changes in outcomes can be attributed to the program itself. The findings also have social justice implications, since differential item functioning and measurement invariance assessments relate to fairness in testing (Huggins, 2013). When items function differently across groups, some respondents may find those items confusing or difficult, which makes full participation challenging; when individuals find items confusing or hard, their responses may not accurately reflect their experiences. These findings have implications for policy and practice, since policy makers and practitioners use these scales to make program decisions for people with ID/DD.

Item: Modeling Item Features that Characterize Measurement Bias (2016-08), Stanke, Luke

Abstract: Identifying measurement bias in test items using statistical procedures is important because it aids in the development of a measurement-invariant and fair test. Many of the statistical procedures used to identify measurement bias fail to examine its root cause in items.
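The item-level screening these abstracts describe is commonly carried out with the Mantel–Haenszel procedure, which compares reference- and focal-group performance within strata of a matching variable such as the total score. The sketch below is illustrative only; the function and toy data are hypothetical and not drawn from either study's actual analysis.

```python
import math
from collections import defaultdict

def mantel_haenszel_or(item, group, total):
    """Mantel-Haenszel common odds ratio for one dichotomous item.

    item  : 0/1 responses to the studied item
    group : 'ref' or 'focal' label for each respondent
    total : matching variable (e.g., total test score) for each respondent
    """
    strata = defaultdict(list)
    for resp, g, t in zip(item, group, total):
        strata[t].append((resp, g))
    num = den = 0.0
    for rows in strata.values():
        a = sum(1 for r, g in rows if g == 'ref' and r == 1)    # ref correct
        b = sum(1 for r, g in rows if g == 'ref' and r == 0)    # ref incorrect
        c = sum(1 for r, g in rows if g == 'focal' and r == 1)  # focal correct
        d = sum(1 for r, g in rows if g == 'focal' and r == 0)  # focal incorrect
        n = a + b + c + d
        if n:
            num += a * d / n
            den += b * c / n
    return num / den if den else float('nan')

# Hypothetical toy data: one score stratum in which reference-group members
# answer correctly more often than equally matched focal-group members.
item = [1] * 8 + [0] * 2 + [1] * 5 + [0] * 5
group = ['ref'] * 10 + ['focal'] * 10
total = [1] * 20
alpha = mantel_haenszel_or(item, group, total)  # 4.0 for this toy table
delta = -2.35 * math.log(alpha)                 # ETS delta scale; 0 = no DIF
```

An odds ratio of 1 (delta of 0) indicates no DIF; values far from 1 flag items whose responses differ between matched groups, which is the pattern the first abstract reports.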
This study suggests that the field move from identifying items with measurement bias through intuition to examining the commonalities among sets of items with statistical models that explain why measurement bias may be occurring. Specifically, this study combines research on cross-classified models and explanatory item response models to examine the common traits of items that display measurement bias. By combining both approaches, qualities of test items, test takers, and their contextual relationships are quantified. These results quantify differential performance of student groups as a function of specific test characteristics, something not possible with current measurement invariance conventions. This study examines the usefulness of cross-classified multilevel measurement models (MMMs) to detect measurement bias and compares these models to more traditional approaches. Four models are compared in terms of parameter recovery and model fit: a Rasch model, a latent regression model, a latent regression-linear logistic test model, and a cross-classified model. Potential uses and applications of the cross-classified model are discussed.

Item: Revisiting Ambivalent Sexism and the Ambivalent Sexism Inventory: Examining the Effects of Respondents’ and Targets’ Racial Group Membership on Endorsement (2024-06), Madzelan, Molly

Abstract: In recent years, social psychologists have increasingly acknowledged the importance of intersectionality in social-psychological research, especially in the domain of gender. In particular, the intersection between gender and race may be especially important to understanding gender prejudice, particularly ambivalent sexism and its two components, hostile sexism and benevolent sexism (Glick & Fiske, 1996). I argue that ambivalent sexism theory, as well as its measure, the Ambivalent Sexism Inventory (ASI), is due for an intersectional re-examination to determine the role that race plays in the endorsement and application of hostile and benevolent sexism.
In Study 1, I investigated the influence of respondent race on ASI scores, finding that measurement invariance could not be established across the four race-by-gender groups of interest (i.e., Black men, Black women, White men, and White women) and that ASI scores therefore could not be compared between these groups. This raises important questions about whether the ASI is a valid measure of ambivalent sexism for Black Americans and whether that construct properly reflects Black Americans’ contemporary sexist attitudes. In Study 2, I investigated the influence of target race on ASI scores after first establishing measurement invariance for three of the four race-by-gender groups (the exception being Black men). I found no evidence that target race affected hostile and benevolent sexism endorsement, but I did find differences in endorsement between Black women and White women, as well as between White men and White women. My research indicates that an intersectional perspective is critical to research using ambivalent sexism theory and the ASI, suggesting that, at the least, race must be considered in concert with gender when studying this particular form of sexism. Finally, and more generally, my research highlights the need for social psychologists to attend to proper measurement evaluation (including testing for measurement invariance) to ensure that measures function as intended across all groups of interest, so as to avoid invalid or overly broad findings and conclusions.
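The invariance failures reported across these abstracts share one concrete consequence: observed score differences can arise even when groups are identical on the latent trait. A small simulation sketches this mechanism under a Rasch-type model; the items, parameters, and group sizes are hypothetical and not taken from any of the studies above.

```python
import math
import random

random.seed(42)

def mean_total_score(n, latent_mean, intercept_shift):
    """Mean total score on a hypothetical 5-item binary scale.

    `intercept_shift` moves item 5's difficulty for this group only,
    independent of the latent trait -- i.e., a non-invariant item.
    """
    difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]
    correct = 0
    for _ in range(n):
        theta = random.gauss(latent_mean, 1.0)          # latent trait
        for j, b in enumerate(difficulties):
            if j == 4:
                b += intercept_shift                     # biased item
            p = 1.0 / (1.0 + math.exp(-(theta - b)))     # Rasch probability
            correct += 1 if random.random() < p else 0
    return correct / n

# Both groups share the same latent mean (0.0); only item 5 differs.
ref_mean = mean_total_score(5000, 0.0, 0.0)
focal_mean = mean_total_score(5000, 0.0, 1.5)
gap = ref_mean - focal_mean  # nonzero despite identical latent means
```

The observed gap is pure construct-irrelevant variance, which is why the studies above decline to compare group means once full (scalar) invariance cannot be established.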