Author: Johnson, Kelli
Date issued: July 2016
Date deposited: 2016-09-19
URI: https://hdl.handle.net/11299/182289
Description: University of Minnesota Ph.D. dissertation. July 2016. Major: Organizational Leadership, Policy, and Development. Advisor: Jean King. 1 computer file (PDF); vi, 122 pages.

Abstract: Evaluation researchers and practitioners share a commitment to evaluation use, and the research community has focused on evaluation use because it is an essential component of the practice of evaluation. Although evaluation use is among the most frequently studied topics in the field, a scale for measuring the use of evaluations in multi-site settings has yet to be validated. This study describes the development and validation of the Evaluation Use Scale for assessing program evaluation use and the factors associated with evaluation use in multi-site science, technology, engineering, and mathematics (STEM) education programs. The data used in this study were collected as part of the NSF-funded Beyond Evaluation Use study (Lawrenz & King, 2009) and included the development and administration of the NSF Program Evaluation Follow-up Survey, a web-based survey of project leaders and evaluators in four multi-site STEM education programs. The study used Messick’s unitary concept of validity as a framework for assembling empirical evidence and theoretical rationales to assess the adequacy and appropriateness of inferences and actions based on the Evaluation Use Scale. The overall evidence in support of the validity of the Evaluation Use Scale as a measure of evaluation use in multi-site evaluations is mixed and varies by aspect of validity. In four of the six aspects, the evidence is adequate to strong. However, the evidence in the remaining two aspects is sharply split and, therefore, inconclusive.

Language: en
Keywords: evaluation use; validity
Title: The Development and Validation of an Evaluation Use Scale for Multi-site Evaluations
Type: Thesis or Dissertation