Simulating the Effects of Test Score Reliability and Test Dimensionality on Teacher Value-Added Scores and Inferences

Published Date

2015-12

Type

Thesis or Dissertation

Abstract

A teacher evaluation system that has gained in popularity is value-added assessment (VAA). In VAA, student test scores are used in an attempt to isolate the effect of individual teachers on student learning. Teacher value-added scores are interpreted as measures of teacher effectiveness and are now widely used to evaluate teachers and in pay-for-performance programs. Despite a body of literature on a variety of VAA issues, very little attention has been given to the psychometric properties of the student achievement tests used to derive teacher value-added scores, or to the implications that violations of model assumptions have for using value-added scores to make decisions about teachers. The purpose of the present study was to examine the potential bias produced in estimating teacher effectiveness (i.e., in teacher value-added scores) when there are violations of (a) the assumption of unidimensionality in the measurement model used to estimate student achievement scores, and (b) the assumption that control variables are free of measurement error in the statistical models used to estimate teacher value-added scores. An additional purpose was to examine the effects of these assumption violations on the use of teacher value-added scores to make high-stakes decisions about teachers' relative effectiveness. The results showed that teacher value-added scores were differentially biased by variations in the properties of the student achievement tests used to derive them. Both the bias in estimated teacher value-added scores and the consistency of teachers' estimated relative effectiveness were most sensitive to variations in the specification of test dimensionality. The results strongly caution against the use of value-added scores to make high-stakes decisions about teachers.
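
To illustrate the second assumption violation described above, the following sketch shows how measurement error in a prior-achievement control variable, combined with nonrandom sorting of students into classrooms, can bias teacher value-added scores estimated from a simple covariate-adjustment model. This is an illustration only, not the dissertation's actual simulation design; the sample sizes, effect sizes, and reliability values are assumed for demonstration.

import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 200, 25                   # assumed design: 200 teachers, 25 students each
N = n_teachers * n_students
teacher_id = np.repeat(np.arange(n_teachers), n_students)

teacher_eff = rng.normal(0.0, 0.25, n_teachers)    # true teacher effects (assumed scale)
class_mean = rng.normal(0.0, 0.5, n_teachers)      # classroom-level sorting on prior achievement
true_prior = class_mean[teacher_id] + rng.normal(0.0, 1.0, N)
current = 0.7 * true_prior + teacher_eff[teacher_id] + rng.normal(0.0, 0.5, N)

def value_added(prior_score):
    # Covariate-adjustment VAM: regress current scores on the prior score,
    # then average the residuals within each teacher.
    slope = np.cov(prior_score, current)[0, 1] / np.var(prior_score, ddof=1)
    resid = current - slope * prior_score
    return np.array([resid[teacher_id == t].mean() for t in range(n_teachers)])

for reliability in (1.0, 0.9, 0.8, 0.7):
    # Add error so the observed prior score has the target reliability.
    err_sd = np.sqrt(np.var(true_prior) * (1.0 - reliability) / reliability)
    observed_prior = true_prior + rng.normal(0.0, err_sd, N)
    va = value_added(observed_prior)
    bias = (va - va.mean()) - (teacher_eff - teacher_eff.mean())
    print(f"reliability={reliability:.1f}  "
          f"corr(VA, true effect)={np.corrcoef(va, teacher_eff)[0, 1]:.3f}  "
          f"corr(bias, classroom sorting)={np.corrcoef(bias, class_mean)[0, 1]:.3f}")

In this sketch, lowering the reliability of the observed prior score attenuates the regression adjustment, so part of the classroom sorting leaks into the teacher-level residual means; the resulting bias then tracks classroom composition rather than true teacher effectiveness.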

Description

University of Minnesota Ph.D. dissertation. December 2015. Major: Educational Psychology. Advisor: Michael Rodriguez. 1 computer file (PDF); ix, 103 pages.

Suggested citation

Dupuis, Danielle. (2015). Simulating the Effects of Test Score Reliability and Test Dimensionality on Teacher Value-Added Scores and Inferences. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/177173.
