Abstract
The field of intellectual and developmental disabilities (ID/DD) uses objective quality of life indicators for policy and program development (Verdugo, Schalock, Keith, & Stancliffe, 2005). An ongoing concern in this field is the assessment of quality of life for people who are unable to answer for themselves. In these instances, a proxy-respondent, someone who knows the person with ID/DD well, responds on his or her behalf. Research examining the efficacy of using proxy-respondents has yielded mixed results: some studies failed to find statistically significant differences in responses (McVilly, Burton-Smith, & Davidson, 2000; Rapley, Ridgway, & Beyer, 1998; Stancliffe, 1999), while other research found meaningful differences between matched pairs of self- and proxy-respondents (Rapley et al., 1998). A principal limitation of these previous studies is their reliance on simple analytic methods, namely t-tests and correlations, to determine whether the responses of these matched groups were similar. The present study extends this body of research by using differential item functioning and measurement invariance analyses to examine self- and proxy-responses. Specifically, this study examined the internal structure of the three objective quality of life measures on the National Core Indicators: the Community Inclusion, Life Decisions, and Everyday Choices scales. Study findings revealed that, when respondents are matched on the total score of a scale, several items function differently for these two groups, which implies that construct-irrelevant differences affected some item responses (American Educational Research Association [AERA], American Psychological Association [APA], & National Council on Measurement in Education [NCME], 1999).
In addition, an examination of measurement invariance established that only metric invariance fit these data well; because stronger levels of invariance were not supported, comparisons between these two groups are not warranted. These findings have policy- and program-evaluation implications: construct irrelevance (AERA, APA, & NCME, 1999) indicates that, for the items identified as functioning differently for these groups, responses also reflect another construct that is separate from the construct the scale intends to measure. Given these differences, it becomes more difficult to conclude that changes in outcomes can be attributed to the program itself. The findings also have social justice implications, since differential item functioning and measurement invariance assessments relate to fairness in testing (Huggins, 2013). When items function differently across groups, respondents in one group find those items harder to endorse at the same level of the underlying trait, which makes full and fair participation challenging. When individuals find items confusing or hard, their responses may not accurately reflect their experiences. These findings have implications for policy and practice, since policy makers and practitioners use these scales to make program decisions for people with ID/DD.
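To illustrate the analytic logic the abstract describes — comparing self- and proxy-respondents who are matched on a scale's total score — the sketch below computes a Mantel-Haenszel common odds ratio for one dichotomous item, a standard differential item functioning index. All data and names here are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of a Mantel-Haenszel DIF check for a single
# dichotomous item, stratifying respondents by total scale score.
# The counts below are made up for demonstration only.

def mantel_haenszel_or(strata):
    """Common odds ratio across 2x2 tables.

    Each stratum is (a, b, c, d):
      a = self-respondents endorsing the item
      b = self-respondents not endorsing
      c = proxy-respondents endorsing
      d = proxy-respondents not endorsing
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# One 2x2 table per total-score stratum (low, mid, high).
strata = [
    (12, 8, 15, 5),
    (20, 10, 25, 5),
    (30, 5, 32, 3),
]

or_mh = mantel_haenszel_or(strata)
# An odds ratio near 1.0 suggests no DIF at matched total-score levels;
# values far from 1.0 flag the item for substantive review.
print(round(or_mh, 2))  # → 0.47
```

Because the odds ratio here is well below 1.0 even after matching on total score, this hypothetical item would be flagged as functioning differently for the two respondent groups — the kind of construct-irrelevant difference the study reports.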
University of Minnesota Ph.D. dissertation. January 2015. Major: Social Work. Advisor: Elizabeth Lightfoot. 1 computer file (PDF); xi, 160 pages.
Differential Item Functioning and Measurement Invariance of Self- and Proxy-Reports: An Evaluation of Objective Quality of Life Measures for People with Intellectual and Developmental Disabilities.
Retrieved from the University of Minnesota Digital Conservancy,
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.