Browsing by Subject "Evaluation influence"
Now showing 1 - 3 of 3
Item: An investigation of the relationship between involvement in and use of evaluation in three multi-site evaluations when those involved are not the primary intended users (2011-04), Roseland, Denise L.

This dissertation research describes an investigation of the nature of the relationship between participation in evaluation and the use of evaluation findings and processes within three large-scale multi-site evaluations. The purpose of this study is to test whether assumptions and theories about participation translate into evaluation use in the same ways as seen in single evaluation sites. Using canonical correlation analysis and a collection of 20 interviews, this study describes and tests the relationship between these two critical conceptual powerhouses in evaluation. Using data that were collected as part of the NSF-funded research Beyond Evaluation Use (Lawrenz & King, 2009), this study found that some theories and beliefs about participatory evaluation contribute to use and influence in ways similar to those seen in single-site evaluations. The differences identified in this research highlight planning and implementation considerations that might allow multi-site evaluators and funders of multi-site evaluation to enhance the use and influence of multi-site evaluations.

Item: Understanding Evaluation's Influence on Stakeholders' Attitudes and Actions at a Social Services Organization (2021-06), Anderson, Lindsay

Research and discussion of evaluation use have a long history in the field of evaluation. Recently, discussions of evaluation use have expanded to include the concept of evaluation influence in an attempt to better understand the impact of evaluation through a variety of approaches. Much of the existing research on evaluation use focuses on how and under what conditions evaluations are used by stakeholders. To date, however, limited research on evaluation influence exists, and less is known about the specific ways in which evaluation may influence stakeholders' attitudes and actions. This study focused on building an understanding of how stakeholders' attitudes and actions toward evaluation change as a result of being involved in an evaluation process. An exploratory qualitative approach and case study design was used; the case was a single social services agency whose employment program underwent an evaluation. Participants were interviewed before and after the evaluation to better understand how they were using evaluation and their attitudes toward it. This study found that evaluation influenced stakeholders in multiple ways at the individual, interpersonal, and collective levels. The research demonstrated that 1) people recognize the value of evaluation, but in different ways; 2) evaluation is about more than just the results: the process matters; 3) evaluation influences participants at multiple levels of an organization; 4) participating in an evaluation does influence stakeholders' attitudes toward evaluation in positive ways; 5) evaluation is important in facilitating communication among key stakeholders; and 6) evaluation has the potential to influence future evaluation activities.

Item: Using citation analysis methods to assess the influence of STEM education evaluation (2008-05), Greenseid, Lija Ozols

This study explores the validity of using citation analysis methods to assess the influence of program evaluations conducted within the areas of science, technology, engineering, and mathematics (STEM). The broad influence of evaluations has recently caught the attention of evaluation theorists, practitioners, and funders; however, methods for measuring the influence of evaluations have yet to be developed and validated. Citation analysis is widely used within scientific research communities to measure the relative influence of scientific research and of specific scientists. This study explores the applicability of citation analysis for understanding the broad impact of STEM education program evaluations. Nine assumptions regarding the validity of using citation analysis methods to assess the influence of STEM education evaluation products are examined using data from four sources: (1) citation analysis data, (2) the opinions of an expert panel, (3) data from a survey of principal investigators and evaluators from local projects connected with four national program evaluations, and (4) a review of relevant literatures. The data collected for the validation study suggest that citation analysis methods can help to reveal, to a limited extent, the influence of large-scale program evaluations on the fields of STEM education and evaluation. In particular, citation data can be used to understand and compare patterns of influence of multi-site STEM program evaluations. Citations, however, are only one among many possible measures of one limited type of influence arising from the dissemination of evaluation products, and citation data do not appear to be useful for precisely quantifying the actual level of influence of any one evaluation. Moreover, examining the content of citations is critical: without understanding that content, judgments cannot be made about whether citations actually measure influence. Consequently, citations measure only one possible type of influence arising from an evaluation and should be interpreted with that limitation in mind.