ECOLOGY OF EVALUATION CONTRACT WORK

Exploring the Ecology of Evaluation Contract Work in the United States: Implications for Industry

A Dissertation
SUBMITTED TO THE FACULTY OF THE UNIVERSITY OF MINNESOTA
BY
Alexandra R. Verhoye

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

David Johnson

April 2023

© Alexandra R. Verhoye 2023

Acknowledgements

This dissertation would not have been possible without the years of encouragement and grace from my support network. I am particularly grateful to my advisor, David Johnson, for his guidance and understanding. I am also appreciative of my mentor, Jean King, for her unwavering support and encouragement. My committee members, John LaVelle, Karen Miksch, and Melissa Chapman Haynes provided feedback, shared their perspectives, and were patient as I traversed this doctoral journey. I am especially grateful to my friends who endured late nights, nerdy ramblings, and last-minute social cancellations: Doris Espelien, Ray Gorlin, Kristin Huffman, Hanna Sun, and Megan Treinen. Thank you for not giving up on me. Both my current and previous colleagues have also provided years of encouragement and support. To my therapist, thank you for helping me navigate life’s ups and downs. I would also like to thank my parents, Anna and Jim Verhoye, for showing me it is possible to get a Ph.D. Thank you as well to my sisters, Sirena and Olivia Verhoye, and my extended family for their endless love and support. Lastly, I would be remiss if I did not acknowledge my cat, Frank, for his purr-ticipation and support throughout my dissertation journey.

Dedication

Gentle Rain

Gentle as the rain falling softly at my door,
whispers pull me down to a cold, hard floor.
“It’s okay,” she says, “to feel sad, scared, and alone.”
“But listen close now – to the pounding in your breast.
Do you hear the beats of your working chest?
Place your hand upon your heart. Deeply inhale air.
You’re alive my love, with which nothing can compare.”
Cautiously, I rise.

This dissertation is dedicated to the neurodivergent brains, to those who have struggled with their mental and physical health. While we might take years longer than our cohort to finish our degrees, or we may completely abandon our programs to become sheep farmers (admittedly, a personal dream during the height of COVID-19), we nevertheless persist. I myself cannot believe I have finished this freaking thing.

Abstract

While evaluators have provided goods and services to U.S. states and the federal government for decades, little is known about how the market’s demand-driven nature impacts evaluation practice. This study explored the likelihood that evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY) between FY 2008-2022, some of the factors that influenced the likelihood a firm or university acquired new HHS evaluation contracts, external research and evaluation providers’ perceptions of the federal evaluation contracts landscape, and the ways external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources. Contracts data from USAspending.gov and semi-structured interviews with 11 practicing evaluators and researchers were used to explicate the demand-driven nature of external evaluation contract work in the U.S. Contracts data focused on the composition of evaluators (i.e., organizational size, type, niche) and length of time in the HHS arena between FY 2008-2022.
Interview data focused on practicing evaluators’ perceptions of external environmental factors in the market—including changes in presidential administrations, the emergence of the COVID-19 pandemic, and George Floyd’s murder—in relation to their practice. Study results suggest that “not small” firms are more likely than small firms or universities to acquire new HHS contracts. Interviewee perceptions did not necessarily align with the literature; a few evaluators, all specializing in disability and special education-related work, described how changes in presidential administrations did not have a major impact on the number or types of federal contracts made available over the years. Discussions on the increase in work related to economic, health, and racial disparities; experiences with demand-side barriers to embedding diversity, equity, and inclusion in research and evaluation; their organization’s niche, size, and type; and the need to build internal organizational capacity were all framed in the context of needing to be responsive to funder (i.e., state and federal government) demands. Overall, this research provides a glimpse into the marketplace conditions and structures for contractual evaluative work in the United States and the implications such structure poses for the evaluation industry.
Table of Contents

Acknowledgements
Dedication
Abstract
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
  Statement of the Problem
  The Context: United States Evaluation Marketplace
  Study Purpose and Research Questions
  Significance of the Study
  Definition of Terms
  Overview of the Dissertation
Chapter 2: Literature Review
  Historical Development of Evaluation in the U.S. Federal Government
  Federal Evaluation Contract Work: Supply and Demand
  Organizational Ecology
  Summary
Chapter 3: Methods
  Study Purpose and Research Questions
  Research Design
  Data Collection
  Data Analysis
  Researcher Positionality
Chapter 4: Results
  Event History Analysis
  Interviews
Chapter 5: Discussion and Implications
  RQ 1a: Likelihood of Acquiring New Evaluation-Specific HHS Contracts
  RQ 1b: Factors Influencing Likelihood of Acquiring New HHS Contracts
  RQ 2a: Evaluation Provider Perceptions of Federal Contract Work Landscape
  RQ 2b: Evaluation Provider Positioning Within the Evaluation Market
  Study Implications
  Limitations
  Conclusion
References
Appendix A: IRB Determination Form
Appendix B: List of Universities and Firms Included in the Study
Appendix C: Evaluation Panelist Recruitment Email
Appendix D: Practicing Evaluator Recruitment Materials
Appendix E: Interview Protocol Review Instructions and Rubric
Appendix F: Practicing Evaluator Interview Protocol
Appendix G: Kaplan-Meier Estimated Survival Tables

List of Tables

Table 1: Conceptual and Theoretical Concepts by Research Question and Data Source
Table 2: Study Frameworks and Research Questions Crosswalk
Table 3: Evaluation Goods and Services Search Terms
Table 4: Overview of Practicing Evaluator and Researcher Participants
Table 5: Initial Evaluation Provider Interview Codebook
Table 6: Glossary of Event History Analysis Terms Used in this Study
Table 7: Interviewee Employment Information
Table 8: Summary of Main Findings

List of Figures

Figure 1: Evaluation Market Framework
Figure 2: U.S. Federal Government Organizational Chart
Figure 3: Historical Count of Universities and Firms in HHS Arena, FY 2008-2022
Figure 4: Annual University and Firm Births in HHS Arena, FY 2008-2022
Figure 5: Annual University and Firm Deaths in HHS Arena, FY 2008-2021
Figure 6: Kaplan-Meier Estimated Survival Curves, Universities and Firms
Figure 7: Historical Count of Total Firms in HHS Arena by Size, FY 2008-2022
Figure 8: Annual Small and Not Small Firm Births in HHS Arena, FY 2008-2022
Figure 9: Annual Small and Not Small Firm Deaths in HHS Arena, FY 2008-2021
Figure 10: Kaplan-Meier Estimated Survival Curves, Small and Not Small Firms

Chapter 1: Introduction

Despite its lack of a clear definition and identity, the evaluation field has been referred to by various scholars as “booming” (House, 1997; Leeuw, 2002; Maynard, 2000; Picciotto, 2011; Shadish et al., 1991; Vedung, 2010).
In practice, evaluations in the United States are often described as operating internally or externally to the organizing body seeking said evaluation (Datta, 2011; Sturges, 2014; Weiss, 1993). Within this dynamic, evaluation largely occurs in a contractual context between an evaluation commissioner (e.g., the U.S. Department of Health and Human Services seeking an evaluation) and an evaluation provider (e.g., an evaluation and research consulting firm) (Nielsen et al., 2018a). It then follows that analyzing the marketplace conditions and structures for contractual evaluative work is imperative to understanding the current outlook of the U.S. evaluation industry.

Statement of the Problem

The evaluation market¹ is defined by Nielsen et al. (2018a) “as an arena in which buyers and sellers interact and trade evaluation services and where forces of supply and demand operate” (p. 21). The authors go on to describe evaluation services as including a range of evaluative processes involving, but not limited to, formative and summative evaluations, implementation evaluations, outcome evaluations, and policy analyses (Nielsen et al., 2018a). In the federal arena, these services operate in an inherently political setting where programs and policies “are the creatures of political decisions … [and] evaluation is undertaken in order to feed into decision-making” (Weiss, 1993, p. 94). Situated in this environment, providing contractual evaluative services associated with the U.S. federal government becomes predominantly demand-driven work (i.e., work driven by the political, social, and economic forces that accompany the U.S. federal government). An imperfect or skewed market can occur when there is a mismatch between the demand- and supply-sides of evaluation services (House, 1997).

¹ For pithiness, the term “market” will, unless otherwise indicated, be used throughout this work in reference to the evaluation market.
House (1997) explains that the market is skewed primarily by its demand-driven nature and by the small number of primary providers on the supply-side and buyers on the demand-side, which together create an interdependency between the sellers and buyers of evaluation services. Such skewing of the market can raise concerns about the strength and sustainability of contractual evaluation as an area of practice. How competitive is the federal evaluation contracts arena? Is the evaluation community in tune with the market’s demand-driven nature and changing environment? How can evaluation providers be certain they are building and operating their evaluation practices with the market structures and tools necessary to be competitive in the field?

The Context: United States Evaluation Marketplace

The concept of a labor market has been around for centuries. Adam Smith’s work in The Wealth of Nations (1776), the 1905 manifesto and formation of the United States Industrial Workers of the World (2017), and the publications of “A Nation at Risk” (Gardner, 1983) and “Workplace Basics: The Skills Employers Want” (Carnevale et al., 1988) all, in one way or another, address how the composition of a labor market (e.g., the specific skills and expertise a worker brings to a job) is influenced by overall market factors associated with supply and demand. Drawing from Sturges (2014), Lemire et al. (2018b) address the connection between the market’s demand- and supply-sides by analyzing how the push and pull of market forces influence its composition. Their findings are synthesized in an Evaluation Market Framework (Figure 1), which outlines the market’s overall characteristics, dynamics, and relationships (p. 146) and represents the conceptual grounding for this current work. As outlined in this framework, the composition of the demand-side in today’s evaluation market is primarily dominated by select foundations and national governments.
As this study reviews the demand for evaluation services created specifically by U.S. federal agencies, a more detailed analysis of these agencies, the pressures they face(d), and the role(s) they play in the procurement of evaluation services is necessary.

Figure 1: Evaluation Market Framework. Note. This figure illustrates findings from an analysis on the overall characteristics and relationships within the U.S. evaluation marketplace (Lemire et al., 2018b, p. 146). [Figure not reproduced here.]

Research on the demand-side of the federal market has concentrated primarily on: the demand for quality governmental program evaluation (Rist & Paliokas, 2002; Wargo, 1995); the fluctuations in federal evaluation trends related to personnel, funding, and overall capacity (Lemire et al., 2018a; Rist & Paliokas, 2002; Vought, 2019; Wargo, 1995); and the increased demand for governmental accountability (Bundi, 2016; Burwell et al., 2013; Chelimsky, 2015; Henry, 2015; Maynard, 2000; Weiss, 1993). Although there has been an increase in research on the market’s supply-side composition (e.g., evaluator competencies, analyses of training needs, gap analyses between training programs and evaluation practitioners, formation of evaluator identity), this research has almost exclusively focused on interactions amongst the market’s supply-side actors (i.e., evaluators in training, practicing evaluators, evaluation and research consulting firms) (Dewey et al., 2008; Galport & Azzam, 2017; King & Stevahn, 2015; LaVelle & Donaldson, 2010; Maynard, 2000).
Scholars have expressed a need for more explicit, detailed research on the interconnectedness between the market’s supply-side composition and the economic, social, and political forces at work on the demand-side, and the implications these connections pose for evaluation providers (Chelimsky, 1995, 2007; Datta, 2011; Henry, 2001; Lemire et al., 2018b; Maynard, 2018; Wargo, 1995; Weiss, 1993). To date, the degree to which supply-side actors (i.e., external evaluators) are organizationally adaptable in the context of the overall market environment is unknown. How susceptible is the evaluation market to changes in political administrations? How vulnerable is contractual evaluative work to the oscillations of ideological and societal shifts? How reliant are the strength and sustainability of the evaluation market on economic fluctuations? The following research proposes to shed light on these questions by analyzing the competitiveness of the field and the ways in which perceived demand-side forces influence the supply-side, in hopes of elucidating evaluation providers’ level of organizational adaptability.

The supply and demand of evaluative work exist in a wide variety of contexts in the United States. While exploring each of these contexts would be useful to the evaluation field, such analysis is beyond the scope of this research. This study will limit its review of the evaluation market to the history, major players, and the conditions and structures at play in the U.S. evaluation industry. To do so, it will draw upon aspects of the Evaluation Market Framework developed by Lemire et al. (2018b). On the demand-side of the framework, it will take a particular look at U.S. policies, legislation, and the societal and economic trends that influence(d) federal agencies’ evaluation needs.
On the supply-side of the framework, it will draw upon methodological and domain expertise amongst firms and the finding that the market is dominated by relatively few, larger firms. These specific components were drawn from the framework with the assumption that they will shed light on the supply-side’s level of organizational adaptability in an ever-changing market landscape.

In conjunction with the previously outlined conceptual framework, Hannan and Freeman’s (1989) theory of organizational ecology will be used to buttress the proposed research. As defined, an ecology of organizations

seeks to understand how social conditions affect the rates at which new organizations and new organizational forms arise, the rates at which organizations change forms, and the rates at which organizations and forms die out. In addition to focusing on the effects of social, economic, and political systems on these rates, an ecology of organizations also emphasizes the dynamics that take place within organizational populations. (Hannan & Freeman, 1989, p. 7)

In defining a population of organizations, Hannan and Freeman (1989) explain the need for a population to have what they describe as a “unitary character … [a] common dependence on the material and social environment” (p. 45). The supply and demand of contractual evaluative work in the U.S. share a dependence on a material and social environment (e.g., changes in available resources, new legislative mandates, social reforms); in this sense, contractual evaluative work in the United States can be defined as a population of organizations.
Labor markets associated with higher education, retail, newspapers, female entrepreneurship, the semiconductor manufacturing industry, nonprofit organizations, and the information/technology sector are a few examples of organizational populations that have been analyzed through the lens of organizational ecology (Becker, 2007; Hannan & Freeman, 1989; Michael & Kim, 2005; Potter & Crawford, 2008; Siddiqui et al., 2018; van Witteloostuijn et al., 2018; Youn & Gamson, 1994). As these examples reflect, organizational ecology is a valuable theoretical framework for researching the lifecycle (i.e., birth, growth, adaptations, and decline) of an industry. Specific elements of organizational ecology that lend themselves to the study of labor markets include its emphasis on niche theory, legitimacy, the liabilities of newness and smallness, and an organization’s size and age (Michael & Kim, 2005; see also Carroll, 1984; Hannan & Freeman, 1977, 1989).

Organizational ecology is a useful and appropriate theory for this research, as the present work seeks to understand the environmental context within which external evaluators in the United States are situated. As a theory of change, organizational ecology will bolster this research as it works to create an understanding of both the current evaluation market landscape and the potential need for evaluation providers to adapt to an ever-evolving social, political, and economic climate.

Study Purpose and Research Questions

The purpose of this study is twofold. First, it will add to the field’s understanding of the existing job market and labor structure for external evaluators in the context of contractual evaluative work in the United States.
Through a review of existing literature, the research will employ the Evaluation Market Framework (Lemire et al., 2018b) and Hannan and Freeman’s (1989) theory of organizational ecology to examine the market forces (i.e., historical, political, economic, and societal forces) that influence the supply and demand for federal contractual evaluative work in the United States. It will then take a specific look at the organizational ecology concepts of newness and smallness to explore the likelihood that evaluation firms and universities received funding from the U.S. Department of Health and Human Services (HHS) for evaluation contract work between fiscal years (FY) 2008-2022. Second, it will investigate practicing evaluators’ awareness and prioritization of market forces, including how evaluators perceive their positioning in the U.S. market in relation to current and future funding opportunities. Altogether, the overarching aim of this research is to shed light on the changing environment of contractual evaluative work in the U.S.

This study’s research questions are:

1a. What is the likelihood evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY) between FY 2008-2022?

1b. Which factors (i.e., an entity’s size and type) influence the likelihood a firm or university acquired newly funded HHS evaluation contracts each year between FY 2008-2022?

2a. How do external research and evaluation providers perceive the federal evaluation contracts landscape?

2b. How have external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources?

Significance of the Study

This research will contribute to a greater understanding of the current composition and structure of the evaluation market in the United States.
Specifically, it will provide insight into the ecological context of the supply and demand for evaluation services. Such insight is crucial to the future strength and sustainability of the evaluation industry; for evaluation actors and enterprises to survive, the community needs to understand the current and projected demand for evaluation services and products, the competitiveness of the field, and the competencies required to flourish in an industry that is often described as “do or perish.” Potential benefits to the evaluation community as a result of this work can be viewed from several standpoints, including the human resources domain (e.g., staffing needs), the design of evaluation training programs (e.g., program structure and course requirements), the financial domain (e.g., firms and universities spending money on outreach and professional development), and the growth, adaptability, and use of evaluator competencies (e.g., content and methodological expertise) in an ever-evolving environment.

Definition of Terms

Award: The term “award” is defined as the “money the federal government has promised to pay a recipient. Funding may be awarded to a company, organization, government entity (i.e., state, local, tribal, federal, or foreign), or individual. It may be obligated (promised) in the form of a contract, grant, loan, insurance, direct payment, etc.” (USAspending, n.d.).

Awarding Agency: An “awarding agency” is the federal agency that “issues and administers the award. This agency usually pays for the funding out of its own budget. In some cases, the money is financed by another agency, called the Funding Agency” (USAspending, n.d.).

Contract: A “contract” means a “mutually binding legal relationship obligating the seller to furnish the supplies or services … and the buyer to pay for them.
It includes all types of commitments that obligate the government to an expenditure of appropriated funds and that, except as otherwise authorized, are in writing. … [C]ontracts include (but are not limited to) awards and notices of awards; job orders or task letters, issued under basic ordering agreements; letter contracts; orders, such as purchase orders, under which the contract becomes effective by written acceptance or performance; and bilateral contract modifications. Contracts do not include grants and cooperative agreements covered by 31 U.S.C. 6301, et seq.” (USAspending, n.d.). However, in this study, the terms “contract work” and “contractual work” will, unless otherwise specified, include grants and cooperative agreements.

Contractor: The term “contractor” applies to the supply-side of the evaluation market. Specifically, a supply-side contractor is defined as “a business, organization, or agency that receives funding and/or performs work on a contract. A contractor may be a corporation, small business, university, non-profit, sole proprietor, or other entity. When a company has a contract with the U.S. government, they may hire another company to perform part of the work. When this happens, the company who received the award is called the prime contractor. The company hired by the prime is called the sub-contractor” (USAspending, n.d.).

Evaluation Industry: The evaluation industry is defined “as the collective of evaluation providers engaged with the professional provision of contracted evaluation services” (Nielsen et al., 2018a, p. 23).

Evaluation Market: The evaluation market is defined “as an arena in which buyers and sellers interact and trade evaluation services and where forces of supply and demand operate” (Nielsen et al., 2018a, p. 21).

Evaluation Providers: Evaluation providers are defined as the universe of contractors that are designated as either firms or universities/institutes of higher education.
Federal Agency: The term “federal agency” applies to the demand-side of the market and is defined as “any federal department, commission, or other U.S. government entity. Agencies can have multiple sub-agencies. For example, the National Park Service is a sub-agency of the U.S. Department of the Interior” (USAspending, n.d.).

Firm: The term “contracting firm” or, more succinctly, “firm” applies to the supply-side of the evaluation market. Specifically, firms are defined as evaluation, research, or consulting entities (e.g., companies, organizations, enterprises) that explicitly provide evaluation services, are external to the party seeking services, and are not categorized as a university or institute of higher education. For example, Abt Associates provides evaluation services to the U.S. Department of Health and Human Services (HHS). Abt Associates is thus considered a “firm” as it provides evaluation services and is external to HHS.

Fiscal Year: Fiscal year (FY) is defined throughout this study as the federal fiscal year. A FY is “an accounting period that spans 12 months. For the federal government, it runs October 1 to September 30. For example, Fiscal Year 2017 (FY 2017) start[ed] October 1, 2016 and end[ed] September 30, 2017” (USAspending, n.d.).

Funding Agency: The term “funding agency” is defined as the agency that “pays for the majority of funds for an award out of its budget. Typically, the Funding Agency is the same as the Awarding Agency. In some cases, one agency will administer an award (Awarding Agency) and another agency will pay for it (Funding Agency)” (USAspending, n.d.).

Funding Obligated: The term “funding obligated” refers to “the amount of money that [a federal] agency has promised to pay, usually because the agency has signed a contract, awarded a grant, or placed an order for goods or services” (USAspending, n.d.).
Prime Award: The term “prime award” refers to “an agreement that the government makes with a non-federal entity for the purpose of carrying out a federal program. The entities receiving the award are known as prime recipients” (USAspending, n.d.), or more colloquially as the “prime.”

Request for Proposals: Requests for Proposals (RFPs) are public procurement procedures, “the major means of acquiring contracts” (Peck, 2018). In the context of the federal government, RFPs are used “in negotiated acquisitions to communicate Government requirements to prospective contractors and to solicit proposals” (FAR, n.d.). Other kinds of public procurement procedures include funding opportunity announcements (FOAs) and requests for quotation (RFQs). Throughout this study, the term “RFP” will be used to refer to all types of public procurement procedures.

Sub-Award: A sub-award is defined as “agreement[s] that a prime recipient makes with another entity to perform a portion of their award. … these recipients are known as sub-recipients” (USAspending, n.d.), or more colloquially as the “sub.”

Overview of the Dissertation

Chapter Two provides a summary of federal research and evaluation literature, starting with the historical development of the supply and demand for evaluative contract work, followed by a closer look at the composition of the industry’s demand- and supply-sides. The chapter will conclude with additional discussion of how the theory of organizational ecology can be used to explore organizational responsiveness to supply and demand influences. Chapter Three describes the study design and methods used in the present research. Chapter Four presents the results of the event history analysis and interviews. Chapter Five discusses the results and suggests implications for evaluation practice and future research.
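The event history analysis presented in Chapter Four relies in part on Kaplan-Meier survival estimation (see Appendix G). As background for readers unfamiliar with the estimator, the following sketch (hypothetical data and function names, not the study's actual code or dataset) computes the survival probability S(t) as the running product of (1 − dᵢ/nᵢ), where dᵢ is the number of "deaths" (here, entities exiting the HHS arena) at time tᵢ and nᵢ is the number of entities still at risk just before tᵢ:

```python
def kaplan_meier(durations, observed):
    """Return [(time, estimated survival probability)] pairs.

    durations: years each entity was observed in the arena
    observed:  1 if the entity exited (an observed "death"),
               0 if censored (still active when the study window closed)
    """
    at_risk = len(durations)          # n_i: entities at risk before time t
    surv = 1.0                        # running product S(t)
    curve = []
    for t in sorted(set(durations)):
        # d_i: observed exits at exactly time t
        deaths = sum(1 for d, e in zip(durations, observed) if d == t and e == 1)
        if deaths:
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        # everyone whose observation ends at t (exit or censoring) leaves the risk set
        at_risk -= sum(1 for d in durations if d == t)
    return curve

# Hypothetical example: five entities; two are censored at the window's close.
curve = kaplan_meier([2, 3, 3, 5, 5], [1, 1, 0, 1, 0])
# curve is approximately [(2, 0.80), (3, 0.60), (5, 0.30)]
```

Censored entities shrink the risk set when their observation ends but are never counted as deaths, which is what distinguishes the Kaplan-Meier estimate from a naive proportion of survivors.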
Chapter 2: Literature Review

This review examines the historical development of evaluation in the United States and explores the composition and broader environment surrounding specific buyers (i.e., the demand-side) and sellers (i.e., the supply-side) of evaluation services in the marketplace. Nielsen et al. (2018a) note how, as a field, evaluation has been and continues to be rooted within a wider setting of knowledge production services at varying levels and capacities in public and private domains. The authors go on to define which players make up the demand- and supply-sides of the market. As described, the demand-side of the market is composed of evaluation commissioners (i.e., buyers), which includes but is not limited to supranational organizations; federal, state, and local governments; foundations; and non-profit organizations (Nielsen et al., 2018a). This research will focus specifically on the demand for evaluation services created by federal agencies. Evaluation providers make up the supply-side of the market and include evaluators who are employed in an evaluation, research, and/or consulting firm, university, or college (Nielsen et al., 2018a). As evaluation is currently a demand-driven field shaped by national governments (Bundi, 2016; Davies, 2012; Davies et al., 2018; Furubo & Sandahl, 2002; House, 1997; Lahey et al., 2018), a more detailed look at the composition of the demand-side of the market is crucial to obtaining a deeper understanding of today’s labor market forces.

Historical Development of Evaluation in the U.S. Federal Government

The field of evaluation is no stranger to pedagogical fads and paradigm shifts. Throughout history, evaluation in the federal government has witnessed various societal, political, structural, and economic trends (Furubo & Sandahl, 2002).
These trends have placed pressure on both the demand-side (i.e., political administrations, federal agencies, and government programs) and the supply-side (i.e., academics, researchers, and evaluation providers) of the market in numerous ways (Alkin & King, 2016; Biderman & Sharp, 1972; Chelimsky, 2007; Della-Piana & Della-Piana, 2007; Donaldson, 2015; Henry, 2001; House, 1993; Lemire et al., 2018b; Rist & Paliokas, 2002; Vedung, 2010; Wargo, 1995). Before exploring some of these trends, a brief look at the historical development and structure of the U.S. Executive Branch is warranted, as it will situate the proposed research and elucidate the power dynamics amongst the entities that comprise the demand-side of the market.

Executive Branch

Article II of the Constitution gives the President the power of the Executive Order and the power to appoint, and with approval from the Senate confirm, Officers of the United States (U.S. Const. art. II, § 1-2). Over time, Presidents have used this power to sign legislation that establishes Federal Agencies (as one example, the U.S. Department of Agriculture was founded by President Lincoln when he signed the “Act to Establish a Department of Agriculture” in May of 1862) (United States Department of Agriculture, n.d.). To reduce government spending and surround the President with expert support, President Franklin D. Roosevelt used his Executive powers to sign the “Reorganization Act of 1939” into law, establishing the Executive Office of the President (EOP) (The United States Government Manual, n.d.). As illustrated in Figure 2, the EOP (which includes the Office of Management and Budget, or OMB) and 15 Federal Agencies fall under the purview of the Executive Branch.
The President’s power of the Executive Order, and the structure and origin of the EOP and Federal Agencies, are important to note as they play a crucial role in understanding the demand-side of the market through an organizational ecological lens (i.e., by highlighting the structural and political environment of federal evaluations).

Figure 2

U.S. Federal Government Organizational Chart

Note. This diagram illustrates the structure of the U.S. Federal Government (U.S. Government Manual, n.d.).

A Rise in Federal Evaluation Activity

Though early evaluation activity can be traced back to the mid-1800s and early 1900s (Alkin & King, 2016), federal evaluation did not see significant momentum until the early to mid-1960s (Henry, 2001; Lemire et al., 2018a; Rist & Paliokas, 2002). During this period, evaluation became rooted in “one of the great narratives of our time: that the world can be made more humane if capitalism and the market economy can be reined in by doses of central policy planning and public intervention at a comprehensive level” (Vedung, 2010, p. 265). In the wake of the “Great Society” programs (e.g., the “War on Poverty”), the notion that “public policy should be more scientific and sensible” (Vedung, 2010, p. 265) became popular opinion across the public sector. It was thus thought that by using research methods, evaluation, and analysis, the federal government would be able to legitimize its investments and public policy decisions to the general population (Datta, 2011; House, 1993). Evidenced by the creation of new agencies and divisions during the “Great Society,” the size and composition of the federal evaluation market continued to grow.
To meet the increasing demand for evaluation services, federal agencies in both the Executive and Legislative branches beefed up their internal capacity and employed outside help from methodological and content experts in academia (Henry, 2001; Rist & Paliokas, 2002). Situated in the Executive Office of the President (see Figure 2), OMB played an instrumental role in supporting agencies’ efforts to build their evaluation capacity (Stack, 2018). As of today, OMB “reach[es] into every federal agency, and its control over the major executive branch policy levers—budget, legislative proposals, and regulatory and management policy—enable it to drive government-wide reforms that aim to increase effectiveness and efficiency” (Stack, 2018, p. 113). Also noteworthy in this development was the rise of legislative evaluation units in many state capitals, and the birth of “the legendary Program Evaluation Methodology Division within the General Accounting Office” (Henry, 2001, p. 420). While the demand for federal evaluation services grew during the 1960s and ’70s, its growth was not evenly felt across all federal agencies (Rist & Paliokas, 2002). The planning, programming, and budgeting systems tool, for example (which emphasized various types of quantitative methods), was first implemented in the Defense Department in the early 1960s and was not employed by all federal agencies until President Johnson signed an executive order several years later (Rist & Paliokas, 2002).
In 1979, the Office of Management and Budget (OMB) issued the “Management Improvement and the Use of Evaluation in the Executive Branch” report, which explicitly called upon all agencies within the Executive Branch to

assess the effectiveness of their programs and the efficiency with which they are conducted and seek improvement on a continuing basis so that Federal management will reflect the most progressive practices of both public and business management and result in service to the public. (OMB, 1979: 1; cited in Rist & Paliokas, 2002, p. 227)

Yet before agencies were able to fully take up the OMB call for program assessment and evaluation, a political and ideological change swept the nation. With the Reagan Administration taking office in 1981, much of the funding and emphasis that had been placed on evaluation during the 1960s and 1970s began to decline (House, 1993; Rist & Paliokas, 2002).

1980s to Today: Federal Evaluation Activity Ebbs and Flows

The oscillating nature of political ideologies and interest groups coming in and out of Washington, DC has meant that the use of research, evaluation, and science-like analysis to inform policy decisions for the sake of social betterment has not been immune to the whims of incoming Administrations (Vedung, 2010; Weiss, 1993). This played out, for example, in the shift from prioritizing federal research and evaluation in the 1960s and 1970s to a focus on cutting costs in the 1980s (Hogan, 2007; Shadish et al., 1991). When the 1990s came around and the Clinton Administration took office, the federal government’s emphasis on evaluation and research began to pick up speed once again (Maynard, 2018). Every fiscal year (FY), OMB publishes the President’s Budget Report, which predicts the funding allocations across the federal government.
These reports are included in the broader OMB publication Analytical Perspectives, which has been used “to codify a system-wide values chain that prioritizes evaluation as an essential function for a more efficient and effective government” (Nolton, 2020, p. 7). As the amount of resources the federal government invests in evaluation activities is one major way it expresses the value it places on rigorous research and evidence (Lemire et al., 2018a), one can look to these reports to assess trends in the government’s research values and priorities. Nolton (2020), for example, used these OMB reports to track the institutionalization of evaluation in the federal government between FY 1996-2020. Reforms pushing for improved federal agency accountability from the 1990s to today have included (but are not limited to) the development of the Government Performance and Results Act (GPRA) under Clinton (which was later amended to include management, becoming the GPRA Modernization Act, or GPRMA) and the Program Assessment Rating Tool (PART) under George W. Bush (Nolton, 2020). Despite these and other Administrative moves for greater research and evidence in government, evaluation remained pervious to ideological shifts and changing political ideals (Biderman & Sharp, 1972; Chelimsky, 1995; Davies, 2012; Kettl, 1994; Mills, 2008; VanLandingham, 2006). For the evaluation industry, this meant that the structure and methodological requirements for federal grants, cooperative agreements, and contracts required more rigor, but were still susceptible to environmental (i.e., political, societal, and economic) changes (Chelimsky, 1995, 2007, 2015; Datta, 2011; Donaldson, 2015; Henry, 2015; Maynard et al., 2016; Maynard, 2018).
It was not until 2019, when the Foundations for Evidence-Based Policymaking Act (commonly referred to as the Evidence Act) was signed into law, that evaluation found itself firmly rooted in government practice (Nolton, 2020). The Evidence Act mandates greater investment in the management and use of data and evidence, promoting a “shift away from low-level activities toward actions that will support decision makers: linking spending to program outputs, delivering on mission, better managing enterprise risks, and promoting civic engagement and transparency” (Vought, 2019, p. 1). In sum, the Evidence Act “advances program evaluation as an essential component of Federal evidence building” (Vought, 2020, p. 1). From its height in the 1960s and 1970s to today, the U.S. federal evaluation market has continued to evolve. As new Administrations come and go, their attempts to tackle societal issues (new and old) continue to shape the market’s emphasis on certain methodologies (e.g., randomized control trials, survey research, impact evaluations, outcome research), requirements for evidence, and the demand for federal evaluation contract work (Chelimsky, 2015; Donaldson, 2015; Henry, 2015; Lemire et al., 2018a).

Federal Evaluation Contract Work: Supply and Demand

A population of organizations’ capacity to adapt to unstable environments is subject to the overall legitimacy of organizations, the market’s composition (i.e., niche, size, and age), and the competitiveness of the market (Hannan & Freeman, 1977, 1989). As evidenced in the previously discussed literature, the growing need for evaluation services, coupled with decreasing internal federal capacity, augmented the availability of external evaluative contract work. With the dynamic environment created by contract work in tow, this review now looks to the composition of the demand- and supply-sides of the Lemire et al.
(2018b) Evaluation Market Framework (see Figure 1) in an effort to further understand: (1) who is demanding and who is supplying services; (2) what kinds of evaluation services are being asked for and delivered; and (3) how the answers to these first two points inform the competitiveness of the field. Gathering a clearer picture of the who, what, and how of evaluation services within the U.S. marketplace will shed light on the changing evaluation landscape and the ways in which providers situate themselves in the market for opportunities. As outlined in the Lemire et al. (2018b) Evaluation Market Framework (see Figure 1), the composition of the supply-side in today’s evaluation market is primarily dominated by a few select firms. On the demand-side, the market is primarily dominated by foundations and national governments. As this study reviews the demand for evaluation services created specifically by federal agencies, a more detailed analysis of these agencies, the pressures they face(d), and the role(s) they play in the procurement of evaluation services is necessary.

Demand-Side: Federal Agencies

With priorities shifting in the 1980s, evaluation offices within federal agencies saw a swift decline in funding, resources, personnel, and overall evaluation capacity (Lemire et al., 2018a; Rist & Paliokas, 2002; Wargo, 1995). Yet the need for evaluation (and by extension systematic inquiry) remained at large; while changes occurred when the Reagan Administration took office, many programs birthed from the “Great Society” were still active, and the demand for governmental accountability (i.e., justification of resources) remained high (Datta, 2011; Henry, 2001; Lemire et al., 2018a). In response to declining evaluative capacity, federal evaluations quickly turned smaller and more internal (Lemire et al., 2018a).
This shift is reflective of the fact that the composition, structure, and behaviors of federal agencies are considerably similar to those of large business firms (Hannan & Freeman, 1989). Like large business firms, living in an environment where organizational survival depends on a finite level of resources means that competitive selection must ensue (Hannan & Freeman, 1977). The organizational structure of the Executive Office of the President (EOP) (see Figure 2) highlights this point, as agencies are required to compete against one another for limited resources. Over time, as the structure, capacity, and composition of federal agencies evolved, the gap between the demand for evaluation and the federal government’s ability to provide its own services widened. Unable to compete for finite resources (e.g., personnel and funding) to meet evaluation needs, federal agencies turned toward the private and academic sectors for support (Lemire et al., 2018a). Lemire et al. (2018a) conducted a review of the U.S. federal evaluation market’s demand-side by analyzing current trends in the allocation of federal contract dollars specified for evaluation and related knowledge-production services. To begin their study, the authors chose to analyze the amount of federal U.S. dollars spent on evaluation and related services between fiscal years (FY) 2010 and 2017 (Lemire et al., 2018a). Data were pulled from www.usaspending.gov, which is the “official source for spending data for the U.S. Federal Government. … [Search parameters limited] data [to] specific … prime-award contracts awarded by twenty-two non-defense departments that are subject to the provisions in GPRMA for FY10-FY17” (Lemire et al., 2018a, p. 69). Examples of search terms utilized in the analysis include the following labels: “‘program evaluation,’ ‘program review/development,’ or ‘program evaluation/ review/ development.
These labels were then coded according to qualitative descriptors (e.g., contract titles)” (p. 69). In reviewing the initial data, the authors noticed that notwithstanding expected ups and downs in federal spending, the overall funding for contracts labeled as evaluation activities (i.e., the aforementioned search labels) increased 65 percent, “from $394 million (in FY 2010) to $651 million (in FY 17)” (Lemire et al., 2018a, p. 70-71). Overall federal spending noticeably varied across each of the twenty-two non-defense agencies, with the Department of Health and Human Services (HHS) awarding the most ($217 million) for evaluation activities in FY 2017. Due to the amorphous definition of “evaluation,” the authors were unable to discern which specific activities were actually supported under the search labels (Lemire et al., 2018a). Consequently, the extent to which the overall increase in federal dollars spent among the twenty-two non-defense agencies between fiscal years 2010 and 2017 can be specifically attributed to program development, research, and evaluation activities is unclear (Lemire et al., 2018a). While examining the full landscape of federal agency funding would be useful to the evaluation industry, such analysis is beyond the scope of this study. Two federal agencies—the Department of Health and Human Services and the Department of Education—were chosen for further review to illustrate some of the market’s demand-side complexities. The Department of Health and Human Services was chosen as it was, of the twenty-two federal non-defense departments reviewed by Lemire et al. (2018a), the largest provider of evaluation-labeled contract funds in FY 2017. The Department of Education was chosen for further review, despite landing in ninth place amongst the twenty-two non-defense departments reviewed by Lemire et al. (2018a), due to its rich history with evaluation.

U.S. Department of Health and Human Services (HHS).
Expanding upon their initial review, Lemire et al. (2018a) decided to narrow their analysis to evaluation-specific contracts within the Department of Health and Human Services (HHS) for FY 2017 by using qualitative descriptors (e.g., contract titles) and additional online searches to code their labels. (Of note, the authors specifically looked at federal spending through the lens of the “Awarding Agency,” not the “Funding Agency.”) The authors found that “$217 million in contracts for evaluation services labeled as ‘program evaluation,’ ‘program review/development,’ or ‘program evaluation/ review/ development’” were awarded across 385 total contracts in HHS (p. 73). Of these 385 total contracts, approximately 113 (29%), averaging $531,000 per contract, were distributed for “evaluation-specific services, including program and policy evaluations, evaluation capacity building, and evaluability assessments” (Lemire et al., 2018a, p. 73). Procurement of these 113 evaluation-specific HHS contracts mainly occurred through open competition (77%) and open competition after some exclusion (10.6%) (Lemire et al., 2018a). Data results indicate that the majority (58.4%) of the 113 aforementioned contracts were “firm fixed price, whereby the evaluation provider is paid a fixed fee agreed upon at contract formation (including costs)” (Lemire et al., 2018a, p. 74). The implications of these findings in relation to the Evaluation Market Framework (i.e., how these market forces affect evaluation providers) are great and deserve much-needed attention among evaluation researchers. As an example, on the demand-side of the market, an agency’s policies around requests for proposals (RFPs) and succeeding terms of reference (ToRs) reinforce the demand-driven nature of the market by requiring methodological specification, therefore bolstering the push for buyer (federal agency) preferences (Nielsen et al., 2018b).
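The percentages and totals reported in this section can be checked with quick arithmetic. The sketch below uses only the figures quoted from Lemire et al. (2018a); the rounding and the implied-total calculation are ours.

```python
# Figures quoted from Lemire et al. (2018a) for HHS evaluation contracts.
total_labeled_contracts = 385    # FY 2017 contracts under evaluation-related labels
eval_specific_contracts = 113    # coded as evaluation-specific services
avg_award_dollars = 531_000      # reported average per evaluation-specific contract

# Share of labeled contracts that were evaluation-specific (~29%, as reported).
share = eval_specific_contracts / total_labeled_contracts
print(f"{share:.0%}")  # 29%

# Rough implied total for evaluation-specific awards (our estimate, ~$60M).
print(f"${eval_specific_contracts * avg_award_dollars:,}")  # $60,003,000

# Growth in evaluation-labeled funding, FY 2010 ($394M) to FY 2017 ($651M).
growth = (651 - 394) / 394
print(f"{growth:.0%}")  # 65%, matching the reported 65 percent increase
```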
Additionally, the finding that evaluation-specific HHS contracts were primarily procured through open competition and competition after some exclusion highlights the supply-side’s reliance on limited resources. These market forces illuminate the need for evaluation providers to be cognizant of their niche standing—that is, either specialize or perish (Hannan & Freeman, 1977; Peck, 2018). Measuring characteristics of the demand-side composition through federal dollars spent on evaluation-related contracts is one major limitation of these studies. As initially described, Lemire et al. (2018a) were unable to determine to what extent increases in overall funding could be attributed to evaluation-specific services; some activities labeled “evaluation” could include rudimentary policy research, while other evaluation services could be part of contracts that were not labeled as “evaluation.” Consequently, the authors’ findings could be an over- or underestimation of federal dollars spent on evaluation and evaluation-related activities (Lemire et al., 2018a). Pulling data exclusively from federal budget figures and records is another limitation of these studies, for “figures are subject to continual change and are never final until 1 year after the fiscal year is completed and a final accounting is made” (Lemire et al., 2018a, p. 70). Producing more high-quality data on the exact evaluation services procured from federal evaluation contracts could ease the burden of assuming contract labels provide accurate and consistent definitions of evaluation activities.

U.S. Department of Education (ED). Eras of social revolution and political turmoil are ripe for the development of new kinds of organizations (Hannan & Freeman, 1989)—a phenomenon with which education in the United States is all too familiar.
In 1958, Congress passed the National Defense Education Act (NDEA) in response to the Soviet Union’s successful launch of Sputnik 1 in 1957 (Hogan, 2007). The NDEA placed an increased emphasis on science, mathematics, and foreign language education in an effort to prepare students to better compete with the USSR (Alkin & King, 2016; Gardner, 1983; Hogan, 2007). Several years later, in 1963, educational psychologist Lee Cronbach wrote an article addressing the increased emphasis on mathematics and science in education, specifically calling attention to

the need for evaluation to make final judgments about the efficacy of the curricula but also to provide information that would assist in making modifications of courses under development. Cronbach’s idea that course improvement was an appropriate outcome of evaluation activity became the basis for the formative/summative distinction [Scriven made in 1967]. (Alkin & King, 2016, p. 571)

Cronbach’s push for evaluations to be grounded in rational and scientific inquiry illustrates how political action and changing governmental priorities have the power to directly influence the ways in which evaluation providers conduct their services. The increased emphasis in federal efforts to support education through the “Great Society” programs and NDEA, coupled with the enactment of the 1964 Civil Rights Act (which, under Title VI, 42 U.S.C. § 2000d et seq., withholds federal funds from any institution that operates in a discriminatory manner), paved the way for passage of the 1965 Elementary and Secondary Education Act (ESEA) (Alkin & King, 2016; Borman & D’Agostino, 1996; Mills, 2008). Within ESEA, Title I explicitly stipulated funding to support “a variety of supplemental services that share a collective purpose: to improve educational opportunities and outcomes for low-achieving students from schools with concentrations of poverty” (Borman & D’Agostino, 1996, p. 309).
In its mandate, Title I required school districts receiving funds to demonstrate the influence of these federal dollars by producing, or contracting for, annual program evaluations (Alkin & King, 2016; Borman & D’Agostino, 1996). As these evaluations were sent to the federal government, it quickly became apparent that further guidance on the methodological standards for Title I evaluations was needed (Maynard, 2018). In response to this charge, the Title I Evaluation and Reporting System (TIERS) was created; TIERS offered districts the option to choose one of three accepted evaluation models to report the results of the annual standardized tests administered to their Title I students (Borman & D’Agostino, 1996). From a capacity standpoint, it is important to note that NDEA, ESEA, and TIERS were developed during a time when ED existed not as a standalone agency, but as an arm of the Department of Health, Education, and Welfare (HEW) (which was established as a cabinet-level agency by Eisenhower in 1953) (Borman & D’Agostino, 1996). Being housed under the HEW umbrella meant that educational evaluations promoting evidence and systematic inquiry faced tougher competition for the environment’s finite resources (e.g., funding, personnel, legislative priority) than they would have in a standalone agency (Nolton, 2020). It was not until the Federal Reorganization Act of 1977 and the subsequent 1979 Department of Education Organization Act that the cabinet-level Department of Education (ED) (see Figure 2 for ED’s current positioning) was formed (Nolton, 2020). During the 1990s, ED saw a movement toward clearer definitions for student outcomes and school district accountability with the establishment of the National Education Goals under H. W. Bush, and the Improving America’s Schools Act (a reauthorization of ESEA) under Clinton (Mills, 2008).
ESEA was again reauthorized in 2002 with the passage of No Child Left Behind (NCLB) (Mills, 2008). Two key elements of NCLB included the further development of accountability requirements and an emphasis on applying rigorous scientific research (Mills, 2008). The rising importance of creating evidence-based evaluations and policy initiatives highlighted the need to

expand government agencies’ capacity to rigorously evaluate programs, systematically review evidence and map that evidence to standards for making policy and for funding and monitoring programs. Moreover, federal evaluation and program offices … [came] to need staff who have working knowledge of evidence standards, who understand how those standards affect their oversight responsibilities for federally mandated and/or funded evaluations, and who are able to judge whether evidence was adequate to support specific policy decisions. (Maynard, 2018, p. 137)

To meet the growing capacity needs and demand for more rigorous evaluations, the Education Sciences Reform Act (ESRA) of 2002 created the Institute of Education Sciences (IES), a “research arm of the U.S. Department of Education … formed with a mandate to increase the amount of credible, scientific evidence that was available to serve as a basis for educational decision making” (Henry, 2015, p. 65; see also Whitehurst, 2018). ESRA explicitly deemed IES a science agency (Whitehurst, 2018), meaning it “placed a premium on RCTs [randomized control trials], FEs [field experiments], and other high-quality designs for evaluating education interventions and providing funding for these types of evaluations” (Henry, 2015, p. 66).
In doing so, IES further solidified the notion that, from a methodological standpoint, credible evidence was true and accurate when it came from impact evaluations—evaluations designed using RCTs, quasi-experimental designs, and/or field experiments (Henry, 2015; Maynard, 2018; Whitehurst, 2018). This concept rippled throughout the federal government. For example, during the FY 2014 budget preparation, OMB reported that “agencies should demonstrate the use of evidence throughout their … budget submissions. … [by explicitly including] a separate section on agencies’ most innovative uses of evidence and evaluation, addressing …” current and future evaluation needs (Zients, 2012, p. 1). The advent of this evidence era has meant that evaluation providers have had to adapt to increasing demands to employ what Chelimsky (2012) calls more “treasured methods” (p. 79). While some agencies are coming to understand the inherent strengths and weaknesses of these methods deemed rigorous (and are thus opening up to the possibility of using more mixed-methods approaches), the established legitimacy of rigorously applied systematic inquiry in federal evaluations prevails (Chelimsky, 2012). With an increased understanding of these methodological priorities and the overall complexities associated with the market’s demand-side, the evaluation industry will be better suited to withstand changes in the market environment.

Supply-Side: Firms and Universities

The supply-side of the market is primarily composed of evaluators who are employed in an evaluation, research, and/or consulting firm, university, or college (Nielsen et al., 2018a). As this work seeks to further understand the supply-side’s composition (including practicing evaluators and evaluators in training), a closer look at the evaluation industry is needed. Within their review of the U.S. federal government’s demand for evaluation services, Lemire et al.
(2018a) analyzed who is included as a “main provider…of evaluation and other knowledge production services for [the] DHHS [sic]” (p. 75). To do so, the authors examined and then ranked the top ten HHS providers “according to the amount of evaluation and other knowledge production funds awarded by DHHS [sic] in fiscal year 2017” (Lemire et al., 2018a, p. 75). These top ten providers handled a total of 111 contracts amounting to $101,965,295 in total funds (Lemire et al., 2018a). The authors categorized each of these providers based on their number of contracts and total funding, resulting in three tiers. The first tier of providers was the largest and included Mathematica Policy Research, ICF, MDRC, Research Triangle Institute (RTI), and ABT Associates; each of the providers in this tier “offer evaluation services as part of a broader array of knowledge production services” (Lemire et al., 2018a, p. 75). The next tier of providers primarily offers services “in the form of other [broad] types of knowledge production services, including applied research studies and large-scale survey studies” (Lemire et al., 2018a, p. 75). The third tier of HHS providers for FY17 “offer[ed] other knowledge production services in addition to program development and support services” (Lemire et al., 2018a, p. 75). Categorization of these top ten evaluation suppliers provides important insight into the U.S. evaluation market landscape. Results of Lemire et al.’s (2018a) analysis of the FY 2017 HHS contracts demonstrate that most evaluations funded in fiscal year 2017 were

most commonly procured in full and open competition, primarily contracted as firm fixed price or cost plus fixed fee, and [were] primarily commissioned to eight main evaluation providers (all of which [were] large-scale research and consulting firms). (p.
77)

These findings echo House’s (1997) caution about the implications of the market’s limited number of primary evaluation providers (i.e., the potential skewing of the industry) and the need for a population of organizations to have a healthy level of competition (Becker, 2007). To gain greater understanding of the primary providers of evaluation services in the market, Peck (2018) expanded upon the work of Lemire et al. (2018a) by analyzing the large evaluation enterprises that make up the evaluation industry.

Large Evaluation Enterprises. Peck (2018) defined large evaluation companies as those that engage in evaluation work by “assessing the formative and summative results of a policy, program or intervention of some sort,” and have annual revenue over $20.5 million (p. 98). Peck (2018) determined the study’s sample by first collecting a list of research and evaluation firms and their revenue data, and then determining the biggest among them by analyzing the overall value of their main federal contract revenues in 2016. Primary sources in this analysis include data from the Securities and Exchange Commission (10-K forms from SEC.gov), Bloomberg.com, and each of the selected companies’ websites (Peck, 2018). These data were then cataloged by firm name, mission statement, areas of practice, services offered, office location(s), and organization type and size (Peck, 2018). The results showed that some firms focus their evaluation services in a specific topic area (e.g., education), while other firms tend to focus on a particular context (e.g., Social Impact). Of the firms analyzed, Peck (2018) noted that the factors most common across these companies were their age, size, specialization (i.e., niche—topic area and/or context), and methodological diversity—factors that all increase a firm’s odds of winning contracts.
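Peck’s (2018) $20.5 million revenue cutoff for distinguishing large evaluation enterprises amounts to a simple classification rule. The sketch below is illustrative only; the firm names and revenue figures in it are invented placeholders, not Peck’s actual data.

```python
# Peck's (2018) cutoff: firms with annual revenue over $20.5M count as "large."
LARGE_FIRM_THRESHOLD = 20_500_000

def classify_firm(annual_revenue: int) -> str:
    """Label an enterprise 'large' or 'small' per Peck's revenue cutoff."""
    return "large" if annual_revenue > LARGE_FIRM_THRESHOLD else "small"

# Hypothetical firm records; names and revenues are illustrative only.
firms = {
    "Firm A": 94_000_000,
    "Firm B": 21_500_000,
    "Firm C": 4_200_000,
}

for name, revenue in firms.items():
    print(name, classify_firm(revenue))
# Firm A and Firm B classify as large; Firm C classifies as small.
```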
These findings are consistent with organizational ecology’s notion that an organization’s legitimacy is connected to its size, age, and niche in that a population of organizations flourishes because it [specializes and therefore] maximizes its exploitation of the environment and accepts the risk of having that environment change or because it accepts a lower level of exploitation [by being generalist] in return for greater security. (Hannan & Freeman, 1977, p. 948) Interestingly, Peck (2018) also found that it was common for these large firms to subcontract parts of their evaluations to smaller evaluation businesses. In response to this discovery, Peck (2018) dove deeper into these data by drawing from www.usaspending.gov to conduct a network analysis of the subcontracting transactions across eight of the top twenty evaluation firms previously identified. Peck (2018) found that while the value of these subcontracts represented a comparatively small portion (8%) of overall funding for the firms analyzed, “they involve[d] a very large number of transactions and relationships” (p. 118). These findings highlight two important concepts: (1) designated staff who have specific skills and expertise are crucial to the successful procurement and execution of evaluations among both large firms and small (i.e., subcontracted) enterprises; and (2) the transactions and relationships between firms serve as a way for firms to be partners, as opposed to competitors, in the market’s competitive, niche environment. Small Evaluation Enterprises. The life of small evaluation enterprises has received a range of attention in recent years (Jarosewich et al., 2019). In 2018, Hwalek and Straub explored the perceived market conditions and overall composition associated with small evaluation firms.
In defining small sellers of evaluation services, Hwalek and Straub (2018) drew their operationalization from Peck (2018), who described small evaluation firms as “companies with annual revenues under $20.5 million …” (p. 97). Data for their analysis were primarily pulled from a 2015 survey of the American Evaluation Association’s (AEA) Independent Consulting Topical Interest Group (IC TIG) (Hwalek & Straub, 2018). AEA survey participants included all 932 then-current AEA IC TIG members. Of these members, 250 responded to the survey invitation and 187 satisfied the sampling frame for the study (Hwalek & Straub, 2018). To fit into the sampling frame, participants had to have indicated they were (1) a “CEO, primary owner, partner or solo practitioner; (2) their businesses employ[ed] fewer than 50 people; [and] (3) their business [was] both based in the United States and primarily provide[d] program evaluation services within the United States” (Hwalek & Straub, 2018, p. 126). The composition of these small firms was analyzed based on five primary characteristics: gender, age range, ethnicity, highest degree completed, and field of highest degree (Hwalek & Straub, 2018). Hwalek and Straub (2018) found that most survey respondents were white females, and the majority of these respondents had doctorates in education or the social sciences. Of total respondents, 76 percent reported that “their total revenues [came] from evaluation services including: conducting evaluations, writing evaluation plans, evaluation capacity building or training, etc. … [and] largely [came] from federal, state, or local government agencies” (Hwalek & Straub, 2018, p. 128). With the composition of these small firms in mind, Hwalek and Straub (2018) then analyzed respondents’ perceptions of market conditions.
The authors defined market conditions as referring “to the level of intensity of the competition faced by businesses, major competitors, demand for services, and pattern of business growth” (Hwalek & Straub, 2018, pp. 128–129). Results indicated that survey respondents frequently reported strong competition for their services, but that this competition was not overwhelming (Hwalek & Straub, 2018). In addition to competition, other market conditions respondents noted as influencing the growth, decline, or stability of their businesses included, but were not limited to: increased recognition of name or organization (i.e., legitimization) among evaluation commissioners; individual preference to limit the extent of services provided or size of the firm (i.e., niche and size); and “increased demand for evaluation services in the United States in general” (Hannan & Freeman, 1977, 1989; Hwalek & Straub, 2018, p. 130). Differentiation of services (68% of responses) and methodological expertise (52% of responses) were the most frequently referenced ways in which a firm distinguished itself from the competition (Hwalek & Straub, 2018). Evaluator Competencies. Based on the findings and literature demonstrating the need for specialized staff who have domain and methodological expertise, it is imperative for the field of evaluation to focus additional attention on how specific competencies and expertise (i.e., within the evaluation industry) are influenced by market factors. Much research on evaluator competencies and skills has been conducted over the years (Chelimsky, 2012; Datta, 2011; Dewey et al., 2008; Galport & Azzam, 2017; Germuth, 2019; King & Stevahn, 2015; Maynard, 2000, 2016; Nielsen et al., 2018b).
This work includes important evaluation research led by members of the Competency Task Force appointed by the AEA Board of Directors, and by individuals not explicitly on the Task Force (but often in partnership with Task Force members) (Stevahn et al., 2005). To provide a foundation for the wide range of evaluator competency literature, this review draws from the competency framework employed by Stevahn et al. (2005), which describes evaluator competencies as “the knowledge, skills, and dispositions program evaluators need to be effective as professionals” (p. 48). The AEA Competency Task Force (2018) identified five primary domains within which a competent evaluator’s skills, knowledge, and expertise fall: (1) Professional Practice; (2) Methodology; (3) Context; (4) Planning and Management; and (5) Interpersonal. In conjunction with the CEO Forum’s list of 21st century skills (2001), these five competency domains are directly connected to the composition of the evaluation market on the supply-side and are susceptible to fluctuations in the labor market. In their 2008 study, Dewey et al. took a close look at the disconnects and overlaps between the competencies evaluators acquire in graduate school and the competencies employers desire. The authors focused their study by analyzing data from a job bank and the results of two surveys conducted with AEA partners: “one for job seekers in the evaluation field and the other for employers of evaluators” (Dewey et al., 2008, p. 271). Results indicated that many of the nineteen competencies included in the job bank and survey analyses that were taught in graduate programs were in fact desired by employers (Dewey et al., 2008). One major limitation of this study is its lack of generalizability, for conclusions drawn from the study are limited to the evaluators that “could be reached through AEA electronic mailing lists” (Dewey et al., 2008, p. 283).
Drawing upon Dewey et al. (2008), Galport and Azzam (2017) conducted a competency training gap analysis; the authors investigated what practicing evaluators perceived as important competencies for conducting high-quality work (Galport & Azzam, 2017). Participants surveyed were drawn from a simple random sample of 1,994 AEA members (Galport & Azzam, 2017). Of the 1,994 individuals who received the survey, 403 responded, representing a 20 percent response rate (Galport & Azzam, 2017). From this sample of 403 practicing evaluators, seven participated in one focus group while three partook in one-on-one interviews (Galport & Azzam, 2017). Results from the survey, focus group, and interviews indicated that the top two competencies viewed as most important for guiding successful evaluations were professional practice and systematic inquiry; participants identified “the ability to conduct meta-evaluations … followed by responding to requests for proposals” (Galport & Azzam, 2017, p. 86) as the two least important competencies. Organizational Ecology As previously discussed, this study was guided by the Evaluation Market Framework (Lemire et al., 2018b; see Figure 1) and Hannan and Freeman’s (1989) theory of organizational ecology. The Evaluation Market Framework (Lemire et al., 2018b) situates the field of U.S. contractual research and evaluation work in an ecological context where forces of supply and demand (e.g., policies and legislation, economic trends) operate. With the field conceptually framed in an ecological context, it was then appropriate to embed organizational ecology as this study’s theoretical framework to allow for a deeper exploration of the current environment for, and changing landscape of, federal contractual evaluative work.
Four elements of organizational ecology were used to address the study’s research questions and shed light on the supply-side’s overall level of organizational adaptability: (1) the liability of newness, (2) the liability of smallness, (3) niche theory, and (4) external environmental factors. Liability of Newness Within organizational ecology, the effects “newness” can have on an organization’s probability of survival have received extensive attention over the years (Aldrich & Auster, 1986; Aldrich & Fiol, 1994; Carroll, 1984; Freeman et al., 1983; Hannan & Freeman, 1977, 1989; Jovanovic, 1982; Mata & Portugal, 1994; Michael & Kim, 2005; Siddiqui et al., 2018; Singh & Lumsden, 1990; Stinchcombe, 1965; Youn & Gamson, 1994). The basic premise behind this liability is that—for reasons related to an organization’s name recognition, reputation, and experience—the older an organization is (i.e., the longer it has existed in the population), the greater its probability of survival when compared to newer organizations. In the context of this study, the liability of newness helped assess the changing landscape of contractual evaluative work, as well as evaluation providers’ perceptions of the landscape and positioning in the field to compete for current and future opportunities. Chapter 3 describes how the liability of newness was used to address the study’s research questions. Liability of Smallness Often paired with the liability of newness, the effects “smallness” can have on an organization’s probability of survival have also received much attention over the years (Carroll, 1984; Carroll & Delacroix, 1982; Carroll & Huo, 1986; Hannan & Freeman, 1977, 1989; Jovanovic, 1982; Mars & Bronstein, 2020; Mayer & Goldstein, 1961; Michael & Kim, 2005; Siddiqui et al., 2018; Youn & Gamson, 1994).
The basic premise behind this liability is that—for reasons related to an organization’s capacity and resource availability—larger organizations have a greater probability of survival than smaller organizations. For the current study, in conjunction with the liability of newness, the liability of smallness helped assess the changing landscape of contractual evaluative work, as well as evaluation providers’ perceptions of the landscape and positioning in the field to compete for current and future opportunities. Chapter 3 describes how the liability of smallness was used to address the study’s research questions. Niche Theory Niche theory, a fundamental component of organizational ecology research, differentiates between specialist and generalist forms (e.g., evaluation and research entities) (Carroll, 1985; Freeman & Hannan, 1983; Hannan et al., 2003; Monge et al., 2011). Specialist organizations are narrow in scope, while generalist organizations provide a wide array of goods and services. A university center that only conducts research related to K-12 education is an example of a specialist form, while a firm that provides evaluation services in a wide array of topics or industries is an example of a generalist form. Chapter 3 describes how niche theory was used to address the study’s research questions. External Environmental Factors Considering external environmental factors is inherent in organizational ecology research (Hannan & Freeman, 1989). Organizational ecologists assert that, unlike some types of sociological researchers (e.g., business policy experts) who purport that organizational survival and failure are due only to internal causes, external environmental factors play a considerable role in an organization’s probability of survival or failure (Carroll, 1984).
Chapter 3 describes how external environmental factors—including changes in presidential administrations and new legislation, the emergence of the COVID-19 pandemic,3 and George Floyd’s murder4—were used to address the study’s research questions. Table 1 shows the conceptual and theoretical concepts addressed by each of the study’s four research questions. Table 2 provides a crosswalk illustrating the integration of the Evaluation Market Framework (Lemire et al., 2018b), organizational ecology, and the study’s four research questions.

3 See the Centers for Disease Control and Prevention (2022) for more information on the COVID-19 pandemic.
4 See the United States Department of Justice Office of Public Affairs (2022) for more information on George Floyd’s murder.

Table 1
Conceptual and Theoretical Concepts by Research Question and Data Source

1a. What is the likelihood evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY), between FY 2008-2022?
Data source: • Event history data
Evaluation market concepts: • Federal agencies • Organization age & type
Organizational ecology concepts: • Liability of newness

1b. Which factors (i.e., an entity’s size and type) influence the likelihood a firm or university acquired newly funded HHS evaluation contracts each year between FY 2008-2022?
Data source: • Event history data
Evaluation market concepts: • Federal agencies • Organization age, size, & type
Organizational ecology concepts: • Liability of newness • Liability of smallness

2a. How do external research and evaluation providers perceive the federal evaluation contracts landscape?
Data source: • Interviews
Evaluation market concepts: • Domain expertise • Economic trends • Methodological expertise • Organization age, size, & type • Policy and legislation • Professionalism • Requests for Proposals (RFPs) • Sourcing strategy • Strategic partnerships
Organizational ecology concepts: • Liability of newness • Liability of smallness • Niche • Outside environmental factors

2b. How have external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources?
Data source: • Interviews
Evaluation market concepts: • Client relationship management • Domain expertise • Economic trends • Methodological expertise • Organization age, size, & type • Policy and legislation • Professionalism • Requests for Proposals (RFPs) • Sourcing strategy • Strategic partnerships
Organizational ecology concepts: • Liability of newness • Liability of smallness • Niche • Outside environmental factors

Table 2
Study Frameworks and Research Questions Crosswalk

Columns (organizational ecology concepts): Liability of newness; Liability of smallness; Niche; Outside environmental factors

Supply-side: Mechanisms
Professionalism
Organization age
Organization size
Organization type
Supply-side: Composition
Boundary markets
Methodological expertise
Domain expertise
Client relationship management **** **** **** ****
Thought leadership **** **** **** ****
Strategic partnerships **** **** ****
Demand-side: Composition
Policy and legislation
Federal agencies (e.g., HHS, ED) * - -
Demand-side: Mechanisms
Economic trends
Sourcing strategy ****
Requests for Proposals (RFPs) ****

Note. * Refers to RQ1a; ** Refers to RQ1b; *** Refers to RQ2a; **** Refers to RQ2b; Refers to RQ1a & RQ1b; Refers to RQ1b, RQ2a, & RQ2b; Refers to RQ2a & RQ2b; Refers to all four RQs.

Summary This chapter provided a brief history of the rise of federal evaluation activity in the United States, the dynamic environment of evaluative contract work, and the main actors that make up the federal evaluation market.
The interconnectedness of these elements was portrayed through the lens of organizational ecology by emphasizing legitimacy, an organization’s size and age, and niche theory. As evidenced by this review, understanding current market conditions, structures, and environmental forces in the federal evaluation market is crucial to ascertaining evaluation providers’ level of adaptability to an ever-evolving landscape. The next chapter describes the research design and methods used to answer the study’s four research questions. Chapter 3: Methods In its youth, when a burgeoning federal evaluation market was at the forefront of the evaluation world, Donald T. Campbell set the course for the discovery of “a utopia he called the Experimenting Society” (Campbell, 1991, as cited in Donaldson, 2015, p. 7). Campbell envisioned the Experimenting Society would include “rational decision making by politicians based on hardheaded tests of bold social programs designed to improve society” (Donaldson, 2015, p. 7). In this sense, Campbell’s vision embraced evaluation as a tool for systematic inquiry motivated by the quest for societal betterment (Henry, 2001). The market’s demand-side (i.e., federal agencies) has, overall, continued to push for systematic inquiry to be an integral part of federal evaluation work. How exactly this push factors into the evaluation industry’s human resources domain (e.g., staffing needs), financial domain (e.g., firms and universities spending money on professional development and building relationships), and the growth, adaptability, and use of evaluator competencies (e.g., domain and methodological expertise) is yet to be examined. Further exploration of the interconnectedness between the market’s supply of and demand for contractual evaluative work is imperative, as the market lives in an environment where the U.S.
government is “the source of demand for our product, the supplier of funds for our work, the potential user of our efforts, and a critic who spur[s] us on methodologically and conceptually” (Datta, 2011, p. 275). Study Purpose and Research Questions This research was guided by the following conclusions gleaned from the literature review:
• The U.S. evaluation market is demand-driven in nature, and its primary actors include the federal government (specifically, among the twenty-two non-defense departments subject to the provisions of the Government Performance and Results Act (GPRA) Modernization Act, the Department of Health and Human Services) on the demand-side, and external research and evaluation firms and universities on the supply-side.
• Historically, the evaluation market has been fueled by factors related to an ever-evolving social, political, and economic climate.
• Currently, little is known about evaluation providers’ awareness of evaluation market factors in relation to their practice.
Reinforced by these conclusions, the overarching purpose of this study was to shed light on the changing landscape of contractual evaluative work in the United States. To do so, it drew upon concepts from organizational ecology and the Evaluation Market Framework (Lemire et al., 2018b) to employ a mixed methods research strategy that incorporated an event history analysis of the population of external evaluation providers that received new funding from the U.S. Department of Health and Human Services (HHS) between fiscal years (FY) 2008 and 2022 for their evaluation services, and open-ended interviews with eleven practicing evaluators across ten firms and universities. HHS was intentionally chosen for this research, instead of the Department of Education or other U.S. agencies, due to its high demand for evaluation services. Fiscal year
Fiscal year (FY) 2008 was chosen due to the availability of data; FY 2008 is the earliest FY for which data can be filtered in the USAspending database. FY 2022 was chosen because agencies can update their reported funding up to the end of each FY; FY 2022 ended on September 30, 2022. The researcher’s choice to examine changes in new HHS funding for evaluation services was deliberate; this study was conducted under the premise that receiving new funding is crucial to an organization’s survival. Additionally, understanding who (i.e., evaluation firms and universities) receives new funding each FY, their size (i.e., small and not small5), and length of time spent (i.e., degree of newness) in the HHS arena sheds light on the ways in which demand-side (i.e., HHS) choices (i.e., awarding new funding) influence the contractual evaluative landscape for evaluation providers. Knowing, for example, whether small organizations are less likely than not small organizations to receive evaluation-specific HHS funding could help evaluators in small organizations strategize how they respond to RFPs. Furthermore, this study was explicitly interested in understanding practicing evaluators’ awareness and prioritization of market forces, including how they perceive their organization’s positioning in the U.S. market in relation to current and future funding opportunities. Knowing evaluation practitioners’ awareness of and perceived positioning in the market could elucidate providers’ level of organizational adaptability in response to market forces. The research questions for this study were:

5 The researcher intentionally chose the organizational size distinction “not small” instead of “medium” or “large” to align this research with the U.S. Code of Federal Regulations Part 121–Small Business Size Regulations (1996). Additionally, this study uses the terms “not small” and “other than small” interchangeably.

1a.
What is the likelihood evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY), between FY 2008-2022?
1b. Which factors (i.e., an entity’s size and type) influence the likelihood a firm or university acquired newly funded HHS evaluation contracts each year between FY 2008-2022?
2a. How do external research and evaluation providers perceive the federal evaluation contracts landscape?
2b. How have external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources?
Research Design This study was designed on the premise that in the United States, federal contractual research and evaluation work exists in an inherently ecological arena where concepts of evolution (e.g., variation, selection, time) are at play. As Monge et al. (2011) note, “evolutionary phenomena are rarely described with statistics alone; mixed-methods research, including the use of interviews and historical records, often helps to accurately describe evolutionary processes” (p. 218). Thus, to address each of the proposed research questions, this exploratory study used a convergent parallel mixed methods design involving event history analysis and interview data. The use of a convergent parallel mixed methods design was suitable for this study as it enabled the researcher to concurrently gather quantitative and qualitative data via various means, and then integrate both forms of data into the analysis (Plano Clark & Creswell, 2014). Event History Analysis Event history analysis is one of the most common types of analyses used in organizational ecology studies (Landes et al., 2020; Monge et al., 2011).
In its simplest form, “event history analyses aim to analyze the time-to-event for individuals (e.g., death or any other event of interest) over a given amount of time, which does not necessarily correspond to their lifespans” (Landes et al., 2020, p. 3). The occurrence of an ‘event’ is the dependent variable, and factors related to newness, smallness, and environmental shocks (e.g., the Great Depression) are common independent variables (Carroll & Delacroix, 1982; Landes et al., 2020; Monge et al., 2011). Unlike other statistical methods that are used to analyze categorical variables (e.g., logistic regression), event history techniques allow for time-varying covariates (e.g., experience responding to RFPs) and can handle “right censoring”6 (Landes et al., 2020; Monge et al., 2011; Tuma et al., 1979). While several factors are taken into consideration when choosing an event history model, the ecological phenomenon or process of interest is the most important (Monge et al., 2011). The broad ecological phenomenon of interest in this study was the overall changing landscape of federal evaluation contract work. Specifically, this study explored survival probabilities in the HHS evaluation arena by examining the likelihood that an evaluation firm or university received new funding from HHS each year between FY 2008-2022. When the process of interest is an event not occurring (such as failing to receive new funding), modeling survival functions, as opposed to hazard functions, is the appropriate event history method of choice (Monge et al., 2011; see also Allison, 2004; Blossfeld & Rohwer, 2002; Efron, 1988; George et al., 2014; and Landes et al., 2020).

6 “Right censoring” is a common occurrence in organizational ecology research, as it refers to the instance when an observation period ends prior to the event occurring.
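As a concrete illustration of the survival functions central to this method, the following minimal sketch computes a Kaplan–Meier product-limit estimate of S(t) for a handful of invented provider lifetimes. The data, and the choice of the Kaplan–Meier estimator itself, are illustrative assumptions rather than the study’s actual model.

```python
from collections import Counter

def kaplan_meier(durations, observed):
    """Kaplan-Meier product-limit estimate of the survival function S(t).

    durations: time (e.g., fiscal years in the HHS arena) until the event
               of interest or until censoring, for each organization.
    observed:  1 if the event occurred at that time, 0 if right-censored.
    Returns (t, S(t)) pairs at each observed event time.
    """
    events = Counter(t for t, d in zip(durations, observed) if d == 1)
    s, curve = 1.0, []
    for t in sorted(events):
        # Organizations still "alive" just before time t remain at risk,
        # including those censored at or after t.
        at_risk = sum(1 for u in durations if u >= t)
        s *= 1 - events[t] / at_risk  # product-limit update
        curve.append((t, s))
    return curve

# Hypothetical provider lifetimes (years until last new HHS award);
# a 0 marks a provider still active at the end of observation.
durations = [2, 3, 3, 5, 7, 7, 9]
observed = [1, 1, 0, 1, 1, 0, 0]
print(kaplan_meier(durations, observed))
```

For these invented data, S(t) steps down from roughly 0.86 after year 2 to roughly 0.36 after year 7; the right-censored providers reduce the at-risk counts without triggering a downward step, which is exactly why event history techniques can handle censoring that ordinary categorical methods cannot.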
Survival modeling is appropriate here because hazard functions estimate “the instantaneous risk … that an individual at time t will experience the event [e.g., death] … [whereas] the survival function is the probability that an individual will remain in a state [e.g., alive] until time t” (Landes et al., 2020, p. 4). Interviews An in-depth, semi-structured interview format is one of the most common method choices used in ecological and evolutionary research and is often combined with analysis of archival data (Monge et al., 2011). In-depth interviewing strives to elicit the deep knowledge and understanding of participants (Johnson, 2011). These interviews can be structured, semi-structured, or free flowing, but all share an overall probing, open-ended, and discursive format (Gerson & Damaske, 2020). By using in-depth interviews, a researcher can gather rich information that may not otherwise come to light from the use of a survey, secondary data analysis, or observational data alone. An additional virtue of in-depth interviewing is its ability to “combine depth of understanding with [a] purposeful, systematic, analytic research design to answer theoretically motivated questions” (Lamont & Swidler, 2014, p. 159). As a primary purpose of this study was to ascertain external evaluators’ knowledge, understanding, and potential prioritization of factors that influence their professional life, conducting in-depth, semi-structured interviews with these providers was pertinent. Ethical Considerations This study was submitted to the University of Minnesota’s Institutional Review Board (IRB) before data collection processes began. The IRB submission package included a brief description of the study and a completed IRB review determination form. Upon review, the University of Minnesota IRB determined that the proposed research activity did not qualify as human subjects research (see Appendix A).
Due to the nature of the study, the researcher met with her current employer to ensure there was no conflict of interest. Upon review of the draft recruitment materials and interview protocol, her employer requested additional language be added to fully disclose that the researcher’s employer would in no way be involved with the proposed study. To meet this request, the following language was added to the recruitment materials and interview protocol: Full disclosure — I am currently employed as a research analyst at [FIRM]. Information gleaned from an interview would be solely used to complete my doctoral requirements at the University of Minnesota; this research is in no way associated or affiliated with my current employer—[FIRM]—nor will any information gathered from this interview be shared with anyone at [FIRM]. Interviews are confidential. No information that would make it possible to identify you or your organization would be included in future presentations or publications; only de-identified data will be shared. All data will be securely stored in my University of Minnesota Box Secure Storage account. I alone, as the sole researcher, will have access to raw data.
Data Collection Exploration of the federal research and evaluation contract landscape consisted of three data collection components, where component one related to event history analysis and components two and three related to expert engagement and provider interviews: (1) a systematic review of HHS research and evaluation contract work data from FY 2008-2022 to elucidate the number of firms and universities that received new contract funds each fiscal year, define the study’s population, and inform participant selection (in conjunction with purposive sampling); (2) engagement with expert evaluation consultants to inform interview protocol development; and (3) interviews with eleven practicing researchers and evaluators who engage with federal evaluation contract work. Document Review Within organizational ecology, document review and analysis have been used to help define a population’s size, age, and niche (i.e., altogether, a study’s unit of analysis) (Carroll, 1984; Hannan & Freeman, 1977, 1989; Michael & Kim, 2005; Youn & Gamson, 1994). As this work sought to further understand the supply-side’s composition and potential vulnerability to outside market forces, a closer look at evaluation providers who specifically engage in and rely on federal grants, cooperative agreements, and contracts was warranted. Following previous research on U.S. research and evaluation enterprises, areas of examination for the document review included information on an entity’s name, areas of practice, services offered, organization type, size, and federal sources of revenue (i.e., whether the organization received funding from HHS) (Hwalek & Straub, 2018; Peck, 2018). Evaluation entities were identified by first compiling a comprehensive list of firms and universities that received prime awards from HHS between FY 2008 and FY 2022 for evaluation products or services.
A simple, five-step process was used to pull and clean initial HHS data from USAspending.gov. First, data were filtered by FY, Funding Agency, prime award status, and product or service code (PSC). The researcher chose to filter the data by Funding Agency, instead of Awarding Agency, to capture the full extent of HHS contract spending on evaluation products and services. Prime award data, instead of subaward data, were pulled to focus the analysis on entities that received funding as the contract prime. Following previous research by Lemire et al. (2018a), data were filtered by the PSCs "program evaluation services" and "support-professional: program evaluation/review/development." A Product or Service Code (PSC) is a unique code used by the federal government to identify "the type of product, service, or research & development (R&D) purchased" (USAspending, n.d.). Second, data were systematically downloaded by FY (i.e., 2008 to 2022) and compiled into a single Excel document. Third, the "total_obligated_amount" column was examined for instances where zero dollars was listed. This column provides the amount of funding obligated, or "promised to pay, usually because the agency has signed a contract, awarded a grant, or placed an order for goods or services" (USAspending.gov, n.d.). There were instances where a "potential" amount existed, but no funds were obligated. Data points with zero funds obligated were removed from the dataset. Fourth, the "recipient_name" and "recipient_uei"7 columns were reviewed to identify and remove duplicates—as well as any recipient that was clearly not a university or firm (e.g., states, counties, national associations, hospitals)—as this study was solely interested in the number of unique entities8 receiving new funds from HHS each FY (i.e., not the specific amount of funds, number of contracts, or types of contracts each entity was awarded). With zero obligated dollars and duplicate entries removed from the dataset, the final step was to review each of the remaining firms' websites—including downloading and examining mission statements, capability statements, and annual reports—to determine whether the organization explicitly provided external evaluation goods or services. Only firms that mentioned providing evaluation goods or services in their mission statement, capability statement, or annual report were included in the final dataset. Table 3 lists the evaluation goods and services search terms used for the document review.

Table 3
Evaluation Goods and Services Search Terms

Primary term: Secondary terms
eval*: (none)
capacity: assess*, build*, eval*
formative: assess*, eval*
impact: assess*, eval*
implementation: assess*, eval*
outcome: assess*, eval*
policy: assess*, eval*
program: assess*, develop*, eval*
summative: assess*, eval*
survey: develop*, research

Of the 410 unique recipients that initially received funding from HHS between FY 2008 and 2022 for evaluation goods or services, 125 (96 firms and 29 universities) were included in the final dataset.

7 The Unique Entity Identifier (UEI) is a unique code "for an awardee or recipient … created in the System for Award Management" (SAM.gov, n.d.) that is used to uniquely identify "specific commercial, nonprofit, or business entities registered to do business with the federal government" (USAspending, n.d.).
8 For the purposes of this research, 'unique entities' means that entities with multiple contracts, or entities that were absorbed by a parent entity, were counted only once per FY.
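The zero-obligation filter, recipient deduplication, and search-term screening steps above can be sketched in a few lines of Python. This is a minimal illustration rather than the study's actual workflow (which used Excel and manual website review): the rows, column values, and mission statements are hypothetical, and the regular expression only approximates the wildcard terms in Table 3.

```python
import re

# Hypothetical rows mimicking a USAspending.gov contract export; the
# column names are simplified and the mission text is invented.
rows = [
    {"recipient_name": "ACME EVALUATION LLC", "total_obligated_amount": 50000.0,
     "mission": "We provide program evaluation and survey development services."},
    {"recipient_name": "ACME EVALUATION LLC", "total_obligated_amount": 12000.0,
     "mission": "We provide program evaluation and survey development services."},
    {"recipient_name": "ZERO DOLLAR CORP", "total_obligated_amount": 0.0,
     "mission": "Program evaluation consulting."},
    {"recipient_name": "WIDGET LOGISTICS INC", "total_obligated_amount": 90000.0,
     "mission": "Freight and warehousing solutions."},
]

# Step 3: drop records where zero dollars were obligated.
funded = [r for r in rows if r["total_obligated_amount"] > 0]

# Step 4: keep one record per unique recipient name.
unique = list({r["recipient_name"]: r for r in funded}.values())

# Step 5: keep only entities whose public materials mention evaluation
# goods or services, approximating the Table 3 wildcard terms
# (eval*, assess*, develop*, research).
terms = re.compile(r"\b(eval|assess|develop|research)\w*", re.IGNORECASE)
final = [r["recipient_name"] for r in unique if terms.search(r["mission"])]
print(final)  # only the evaluation provider survives all three filters
```

Here the zero-dollar recipient and the non-evaluation logistics firm are screened out, and the duplicate evaluation firm is counted once, mirroring the cleaning logic described above.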
Due to limited recipient information, the researcher was unable to ascertain which specific university centers received funding. Therefore, university center websites were not examined to determine whether they explicitly provided external evaluation goods or services. However, the HHS dataset did include information on each university contract's object class and/or base transaction description. An "object class" is a category "in a classification system that presents funding obligations by the items or services purchased by the Federal Government" (USAspending, n.d.). All university contracts in this analysis that had an associated object class were in either 25.1, which includes studies, analyses, and evaluations; 25.2, which includes contractual advisory services; or 25.5, which includes basic and applied research and development. The "base transaction description" is "a brief description of the purpose of the award" (USAspending, n.d.). Some base transaction description examples included evaluation assessments and development, program performance analysis, evaluation capacity building, and technical assistance and evaluation. Object class and base transaction description information were cross-referenced to confirm each contract was explicitly for research and evaluation services. All university contracts were determined to be for research and evaluation services. As such, all universities were included in the final dataset. See Appendix B for a list of each university, and a detailed list of each evaluation firm—firm name, year established, area(s) of expertise, service(s) offered, size, and merger status—included in the study. Size was categorized as either "small" or "other than small" as defined by the U.S. Code of Federal Regulations Part 121–Small Business Size Regulations (1996).
This study used purposive sampling (Patton, 2015; Plano Clark & Creswell, 2014) and findings from the document review to identify and recruit participants who fit into the sampling frame. Purposive sampling is a nonprobability sampling approach where the researcher identifies the most appropriate participants for the study, thereby increasing the possibility of "information rich" cases (Patton, 2015; Plano Clark & Creswell, 2014). While an appropriate choice for this study, use of a nonprobability sampling approach does "limit the conclusions that can be drawn about the results from a study" (Plano Clark & Creswell, 2014, p. 236).

Protocol Development and Expert Engagement Procedures

The initial external evaluator interview protocol was drafted based on findings from the literature discussed in Chapter Two. The researcher then selected and engaged with a panel of evaluation experts to aid in the protocol development process. As panelists, these experts helped develop and verify the content and structure of the interview questions. Gathering expert feedback in the protocol development process boosts individual item and overall protocol validity, which helped ensure the questions captured the primary intent of the interview (i.e., illuminating an evaluator's awareness and prioritization of market forces) and could be reasonably answered by respondents (Grant & Davis, 1997; Dinnesen et al., 2020). Five possible expert panelists were identified through purposive sampling. These experts were chosen based on their knowledge and experience in external evaluation work, and their involvement in the evaluation community at large. A recruitment email (see Appendix C) was sent to each person to determine their interest in participating in the study; three individuals replied that they were interested in serving as a panelist.
While the literature on engaging expert panelists does not prescribe a definitive number of experts required for a review, the consensus is that the number of experts depends on the range of desired representation and level of expertise across the panel (Grant & Davis, 1997). The researcher felt confident that—based on the representation and level of expertise across the individuals who expressed interest—three panelists constituted a sufficient number of experts. Each of the three experts engaged in this study had been heavily involved in the evaluation community for at least the past seven years. One individual had experience working at an external evaluation firm, while the other two had experience in both academia and external research and evaluation work. Upon their written consent to participate in the study, evaluation panelists were asked to provide their opinion on the initial interview protocol designed for practicing evaluators. Specifically, panelists were asked to review the protocol and judge the representativeness of the question content, the clarity of the question style, and the overall comprehensiveness of the instrument. To complete their review, panelists were given instructions and an interview protocol rubric (see Appendix E) via email. The review was estimated to take approximately 15 to 20 minutes to complete. The researcher asked panelists to return their completed rubrics one week after they were disseminated. Once all data were collected, rubric scores, notes, and feedback were compiled into a single Word document for ease of analysis. Information gleaned from this review process informed the development of a final interview protocol (see Appendix F), which was used with selected evaluation providers.
Interview Procedures and Sample

Potential interviewees were identified based on the following selection criteria: (a) be currently employed at an evaluation, research, and/or consulting firm, or at a university or college evaluation or research center that pursues federal grants, cooperative agreements, or contracts; (b) be in the role of director, research scientist, or lead procurer of grants, cooperative agreements, or contracts; and (c) have been working in some capacity as an external evaluator in the United States for a minimum of three years. These criteria are directly related to the research questions, as they enabled the researcher to gain insights from evaluators who are most directly involved in tracking and responding to federal Requests for Proposals (RFPs). An initial recruitment message was sent via LinkedIn to 10 individuals who lacked a publicly available work email address; an initial recruitment email was sent to 21 people who either had a publicly available work email address or whose work email was acquired through purposive sampling; and a follow-up recruitment email was sent to 16 people (see Appendix D for practicing evaluator recruitment materials). A total of 13 individuals (42%) expressed interest in being interviewed. Interviews were scheduled via email between the researcher and selected evaluation providers. Thirteen interviews were scheduled; one participant had to cancel due to personal reasons and one interviewee cancelled due to a perceived conflict of interest, resulting in eleven total interviews (35% response rate). Each of the 11 interview participants met the study's selection criteria. Table 4 provides an overview of participants' place of employment, role, and years of experience in the field. Nine participants came from unique entities; two participants were employed at the same firm.
Both participants from that firm were included in the study due to their unique positions, and therefore perspectives. Seven interviewees were employed at firms; four interviewees were employed at universities.

Table 4
Overview of Practicing Evaluator and Researcher Participants

Firm | Role | Years in the Field
Firm A | Proposals Compliance & Operations Manager | 5
Firm A | Business Development Manager | 18
Firm B | President | 45
ICF | Senior Director | 25
Mathematica Policy Research | Program Area Vice President | 40
MDRC | Director | 25
University of Minnesota | Center Director | 33
University of Mississippi | Center Director | 10
University of New Hampshire | Center Director | 13
University of North Carolina at Chapel Hill | Center Senior Research Scientist | 25
WestEd | Director | 32

Note. One participant was employed at a small sized evaluation firm; two participants were employed at an other than small sized evaluation firm, but their job titles paired with the firm's name would be identifying. To protect confidentiality, the names of these firms are omitted throughout the study. Additionally, "Years in the Field" refers to the overall length of time the interviewee had been in the evaluation field—not the length of time at their current place of employment—further protecting participants' identity.

The researcher sent a reminder email to participants one week prior to the scheduled interview, which included the finalized protocol for the participant to review if desired. Interviews were conducted over the researcher's Zoom (Version 5.13.11) account and recorded upon verbal consent from the participant. Upon completion of each interview, the researcher downloaded the Zoom video recording and transcript.
The recording and transcript were then immediately uploaded to the researcher's secure University of Minnesota Box Secure Storage account—"a shared, cloud-based, commercial file storage, sharing, and collaboration service" (Box Secure Storage, n.d.)—then deleted from Zoom and from the Downloads folder on the researcher's computer. Subsequently, Zoom transcripts were reviewed, cleaned for clarity (e.g., if the transcript noted the interviewee said "coven 19th" or "cobra," the researcher edited the transcript to note "COVID-19"), and then uploaded to the researcher's Dedoose (Version 9.0.17) account for future coding and analysis.

Data Analysis

Event History Data

The event history data included information to assist with participant selection, as well as to ascertain whether firms and universities received new evaluation-specific contract funding from HHS each year between FY 2008 and FY 2022. Data to inform participant selection were analyzed using simple Excel functions and filters (e.g., highlight duplicate cells). Data to assess the changing landscape of new HHS funding awarded to evaluation entities by fiscal year were analyzed using simple Excel functions and tools (e.g., SUM and line graphs) and the Kaplan-Meier survival estimator. Using simple Excel functions and tools was an appropriate first step, as it provided a snapshot of the total number of universities and firms entering the HHS evaluation arena, as well as the number of new "foundings" and "deaths" each type of entity experienced year to year. In survival analysis, a "founding" or "birth" is the instant an individual enters the study, and a "death" occurs when an individual exits the study before the observation period ends (Landes et al., 2020). For the purposes of this research, a "founding" or "birth" is defined as the instant (i.e., fiscal year) a university or firm received newly awarded HHS funding for evaluation goods or services.
For example, Westat received HHS funding for evaluation work in 2008. This means that Westat was counted as a birth in 2008. A "death" in this study is then defined as the instant a university or firm does not receive new HHS funding for evaluation work. While Westat received new funds in 2009, it was not awarded new funds in 2010. This means the firm experienced a death in 2009, as it did not survive to (i.e., receive new funding in) 2010. In instances where a university or firm experienced a death and then another birth, the birth—per survival analysis guidelines9—was counted as a new instant. For example, Westat did not receive new funding in 2010. When the firm did receive new funding in 2011, this instant was counted as a birth, similar to the birth recorded when the firm received funding at the start of the study's observation period (i.e., FY 2008). The Kaplan-Meier (KM) estimate is a nonparametric statistical method that was developed and published by Edward L. Kaplan and Paul Meier in 1958 (Kaplan & Meier, 1958). Nonparametric methods "were developed to be used in cases when the researcher knows nothing about the parameters of the variable of interest in the population (hence the name nonparametric)" (Dodge, 2008, pp. 375-376). As a nonparametric model, the KM estimate "represent[s] the real, observed data without assuming a distribution for the baseline … [and] helps researchers visualize preliminary comparisons of survival functions for different levels of categorical variables" (Landes et al., 2020, p. 6).
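The birth and death definitions illustrated by the Westat example can be sketched as a small conversion from funding years to events. This is a minimal sketch: the funding years passed in below reconstruct only the example from the text, not the firm's actual award history.

```python
# Observation window; FY 2022 data are right censored.
START, END = 2008, 2022

def events(funded_years, start=START, end=END):
    """Convert the FYs an entity received new funds into births and deaths."""
    funded = set(funded_years)
    births, deaths = [], []
    for fy in range(start, end + 1):
        # Birth: new funds this FY after not being funded the prior FY
        # (entities funded at the start of the window count as births).
        if fy in funded and (fy == start or fy - 1 not in funded):
            births.append(fy)
        # Death: funded this FY but no new funds the following FY
        # (no deaths are recorded at the censored end of the window).
        if fy in funded and fy + 1 not in funded and fy < end:
            deaths.append(fy)
    return births, deaths

# Toy history patterned on the Westat example: funded in 2008, 2009,
# and 2011, but not 2010 (and, in this sketch, not after 2011).
births, deaths = events([2008, 2009, 2011])
print(births, deaths)  # births in 2008 and 2011; deaths in 2009 and 2011
```

Note that the rebirth in 2011 is counted as a new instant, exactly as the text describes.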
Additional benefits of the KM model include the relative ease of calculation and interpretation of results, as well as its minimal assumptions: that censoring is independent of (i.e., not related to) survival probabilities, that the likelihood of survival is the same regardless of when a subject enters the study (e.g., year one versus year five), and that events (i.e., deaths) happen at a specified time (e.g., yearly) (Miller, 1983). Based on a preliminary review of the event history data, the researcher determined that the KM model—when compared to other nonparametric and parametric models—was the most appropriate method for the current study. Unlike other nonparametric models (e.g., histogram), the KM model provides nonparametric advantages while being able to account for data censoring10 (Landes et al., 2020). The resulting KM survival functions can then be compared (across different levels of categorical variables) by performing "a log-rank test, which returns a χ2 test statistic and P value" (Landes et al., 2020, p. 6). University and firm KM survival functions (or curves) were created in RStudio (R Core Team, 2022). A log-rank test, also computed in RStudio, was performed to compare university and firm survival curves. Firm data were then extrapolated to create and compare KM curves and log-rank tests for the populations of "small" and "not small" firms. A firm's size was determined as either "small" or "not small" by first examining the details listed in the USAspending contract data, and then cross-referencing the contract data by reviewing a firm's self-reported status (e.g., as a Certified Small Business) on its company website. When the size of a firm was not explicitly stated on its website, the researcher searched for the organization in the U.S. General Services Administration (GSA) eLibrary database. The GSA eLibrary "provide[s] a centralized online resource to assist acquisition professionals in the research and identification of commercial businesses providing products and services offered under GSA and VA [Veterans Affairs] acquisition solutions. Information on GSA eLibrary is updated every night" (U.S. General Services Administration, n.d., p. 3). The database's "Socio-Economic" variable lists the socio-economic indicators associated with an entity at the time it receives contract awards. Examples of possible socio-economic indicators associated with an entity include "Other than small business," "SBA Certified Small Disadvantaged business," "Small business," and "Women Owned Business."

9 See Efron (1988), George et al. (2014), Landes et al. (2020), Rich et al. (2010), and Stalpers & Kaplan (2018) for more information on this technique.
10 See Footnote 6.

Interview Data

An iterative deductive-inductive approach was used to create an initial codebook, then code and analyze the interview data. With a deductive approach, a researcher first assigns a set of predetermined codes to the data (Saldaña & Omasta, 2018). Alternatively, an inductive approach necessitates that the researcher read through the data first, allowing codes to emerge organically (Miles et al., 2020). Several factors are typically taken into consideration when deciding which method to begin with or to use exclusively; factors such as the study's purpose, conceptual or theoretical concerns, and the inherently "emergent nature of qualitative analysis" are often considered when choosing an analytic strategy (Bingham & Witkowsky, 2022, p. 134). Depending on the study purpose and research questions, iteratively employing a deductive-inductive approach can create a richer picture of the phenomena in question than is possible with either approach alone (Bingham & Witkowsky, 2022).
As this research sought to elucidate practicing evaluators' perceptions of the environment for federal evaluation contract work in relation to the study's frameworks, beginning the analysis with a deductive approach was appropriate. To create the initial codebook, the researcher drew upon the study's conceptual and theoretical frameworks (i.e., the Evaluation Market Framework and organizational ecology). For example, the parent code "Awareness of market" was created to (1) align with outside market forces identified in the Evaluation Market Framework (i.e., policy and legislation, economic trends, boundary markets, and professionalism) and (2) align with organizational ecology theory—specifically, the concept that external, environmental factors contribute to an organization's likelihood of survival. Several child codes, including "COVID-19," "Admin/policy changes," and "George Floyd's murder," were created and nested under the "Awareness of market" parent code to further align the analysis with the study's conceptual and theoretical frameworks. After reviewing the interview protocol, research questions, and study frameworks, six initial parent codes and eight initial child codes were deductively created. See Table 5 for the initial parent and child code list.

Table 5
Initial Evaluation Provider Interview Codebook

Parent code: Child code(s)
General reflections: (none)
Possible quotes: (none)
Background: Employment
Perception of the field: Employee skills/competencies; Entity situatedness
Awareness of market: COVID-19; Admin/policy changes; George Floyd's murder
Future: Overall field; Current organization

The coding process occurred in two cycles. The first cycle involved attribute coding, structural coding, and values coding to categorize and describe the data. The second cycle applied pattern coding to extract and synthesize major themes.
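The nested parent/child structure of the initial deductive codebook can be represented as a simple mapping. This sketch reflects one reading of the flattened Table 5 (the parent-to-child assignments are an interpretation of the extracted layout); an empty list means no child codes were defined at this stage.

```python
# Initial deductive codebook (Table 5), parent codes mapped to child codes.
# The grouping below is an interpretation of the flattened table layout.
initial_codebook = {
    "General reflections": [],
    "Possible quotes": [],
    "Background": ["Employment"],
    "Perception of the field": ["Employee skills/competencies",
                                "Entity situatedness"],
    "Awareness of market": ["COVID-19", "Admin/policy changes",
                            "George Floyd's murder"],
    "Future": ["Overall field", "Current organization"],
}

# Sanity check against the counts reported in the text:
print(len(initial_codebook),                           # 6 parent codes
      sum(len(c) for c in initial_codebook.values()))  # 8 child codes
```

Whatever the exact grouping, the structure matches the reported totals of six parent codes and eight child codes.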
An iterative inductive approach was embedded throughout each cycle, allowing new codes to emerge organically until a final codebook was developed, consisting of six parent codes, eight child codes, twenty-five grandchild codes, twenty-two great-grandchild codes, and twelve great-great-grandchild codes.

Researcher Positionality

As humans, we all hold implicit and explicit ideologies, biases, and worldviews. As an overall philosophical orientation, a worldview represents personal beliefs and ideals that influence the ways in which we engage with others and the world around us. One's worldview is an intrinsic reality that is, arguably, most prevalent when we set out to conduct research—to examine relationships between phenomena that exist outside of ourselves. As a researcher, I believe it is paramount to expose my own worldview, as it directly influences my choice of research topic, questions, design, and methodology (Creswell & Creswell, 2018). My decision to study the nature of the changing ecological landscape of federal external evaluation contract work in the United States was greatly influenced by my own pragmatic worldview. Pragmatism is a paradigmatic perspective primarily concerned with "what works"—the applications and solutions for what works for a specific set of research questions or within a particular setting (Creamer, 2018; Creswell & Creswell, 2018; Patton, 1990). Ontologically, the pragmatist believes that knowledge and truth are uncertain, variable over time, and context specific (Creamer, 2018). Therefore, from the pragmatist point of view, the most appropriate way to conduct research is to creatively engage in whatever array of methods is deemed necessary to thoroughly answer the research questions at hand (Creamer, 2018; Creswell & Creswell, 2018; Strauss & Corbin, 1998). Based on this study's research topic and questions, employing a mixed methods approach was appropriate.
As an external evaluator, my (admittedly and unapologetically) selfish desire to understand the state of my career is driven by the pragmatist in me. This desire has propelled my research forward as it embraces the axiological and epistemological assumptions of pragmatism—an interest in connecting research to practice, and in discerning the quality of research by its practicability (Creamer, 2018). In all, this study's research problem, questions, design, data collection, analysis, and limitations are shaped through a pragmatic lens.

Chapter 4: Results

The purpose of this exploratory study was to elucidate the changing landscape of contractual evaluative work in the United States, investigate practicing evaluators' awareness and prioritization of market forces, and explore how evaluators position themselves in the U.S. market for current and future opportunities. Several organizational ecological concepts useful to understanding the study's theoretical constructs and application were introduced in Chapter 2 (i.e., liability of newness, liability of smallness, niche theory, and external environmental factors). Chapter 4 presents the study's results derived from an event history analysis and interviews. The liabilities of newness and smallness were explored through event history data and interviews. The concept of niche and external environmental factors—including changes in presidential administrations and new legislation, the emergence of COVID-19, and racial violence and injustice (e.g., George Floyd's murder)—were assessed via interviews. Results are organized around the study's research questions.

Event History Analysis

The event history analysis used U.S.
Department of Health and Human Services (HHS) historical contracting records to examine the influences of firm size (liability of smallness) and history of being annually awarded new HHS evaluation contracts (liability of newness). Type of entity (firm or university) was also used in the analysis for comparison purposes. To understand the size of the HHS arena, this analysis began with a look at the historical count of evaluation entities that received HHS funding between fiscal years (FY) 2008-2022. As outlined in Chapter 3, the liabilities of newness and smallness were then explored through an examination of university and firm birth data, death data, and survival probabilities by FY, as the likelihood an evaluation entity received new HHS evaluation contract funds between FY 2008-2022 was this study's primary process of interest. Table 6 presents a glossary of terms used in this study's event history analysis.

Table 6
Glossary of Event History Analysis Terms Used in this Study

Birth: Defined as the instant (i.e., FY) a university or firm received new HHS funding for evaluation goods or services when it had not received funding the previous FY; all evaluation entities experienced a birth at the start of the study's observation period (i.e., FY 2008).

Death: Defined as the instant (i.e., FY) a university or firm did not receive new HHS funding for evaluation goods or services when it had received funding the previous FY; death data end in FY 2021 as 2022 data were censored.

Event: Refers to a change in an evaluation entity's state (i.e., birth, death, or continued receipt of new HHS funds).

External environmental factors: Refers to the social, political, economic, and health events that were perceived to have influenced the evaluation contracting marketplace (e.g., George Floyd's murder, changes in presidential administrations or funding priorities, the Great Recession, COVID-19).
Liability of newness: Refers to the idea that the older an entity is (i.e., the longer it has existed in the HHS evaluation funding arena) the greater its probability of survival when compared to newer entities. Various factors—including the alacrity with which entities can adjust to meet demand and the lack of a personal history with buyers (e.g., HHS contracting and procurement offices)—can contribute to newer evaluation entities' death.

Liability of smallness: Refers to the idea that the larger an entity is (i.e., "other than small" versus "small," as defined by the U.S. Code of Federal Regulations Part 121–Small Business Size Regulations) the greater its probability of survival when compared to small entities. Various factors—including an organization's content or methodological expertise and resource availability—can contribute to small evaluation entities' death.

Niche: Refers to the type of evaluation entity as either specialist (i.e., narrow in scope) or generalist (i.e., wide in scope). A university center that only conducts research within a specific federal agency priority (e.g., the HHS priority to strengthen early childhood development) is an example of a specialist entity; a firm that provides evaluation services (e.g., logic models, evaluation capacity building) or methodologies (e.g., policy analysis, RCTs, participatory research) in a wide array of topics or industries is an example of a generalist entity.

Probability of survival: Refers to the probability an evaluation entity will receive new HHS funding for evaluation contract work when it received funding the previous year.

Right censored (censored): Refers to the instance when an observation period ends prior to the event occurring; all universities and firms still alive in 2022 were censored.
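The survival probabilities defined above are estimated with the Kaplan-Meier product-limit method. The following is a minimal pure-Python sketch of that estimator, assuming yearly event times; the durations and censoring flags are toy values, not the study's HHS data (the actual curves were computed in RStudio).

```python
# Minimal Kaplan-Meier (product-limit) estimator, yearly event times assumed.
def kaplan_meier(durations, observed):
    """Return {time: estimated survival probability} at each duration."""
    at_risk = len(durations)
    surv, s = {}, 1.0
    for t in sorted(set(durations)):
        # Deaths at time t (censored observations do not count as deaths).
        deaths = sum(1 for d, e in zip(durations, observed) if d == t and e)
        s *= 1 - deaths / at_risk          # product-limit step
        surv[t] = s
        at_risk -= durations.count(t)      # deaths and censored leave risk set
    return surv

# Ten toy entities: years survived in the arena, and whether a death was
# observed (False = right censored, e.g., still funded at the window's end).
durations = [1, 1, 1, 2, 2, 3, 3, 4, 5, 5]
observed = [True, True, True, True, False, True, True, True, False, False]
surv = kaplan_meier(durations, observed)
print(surv)  # survival probability after each year
```

With these toy values, three of ten entities die in year one, so survival after year one is 0.70; the censored entity at year two reduces the risk set without counting as a death, which is exactly what distinguishes the KM estimate from a naive death-rate calculation.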
Examining evaluation entity birth data, death data, and survival probabilities by FY provides a useful illustration, as doing so allows us to assess whether there were survival differences by entity type. For example, between FY 2008-2022, did firms have a higher survival probability (i.e., a higher likelihood of continuing to receive new HHS funding each FY) compared to universities?

RQ 1a: History of Acquiring New U.S. HHS Evaluation Contract Work, FY 2008-2022

Historical Count of Evaluation Entities in HHS Arena. Figure 3 presents a historical count of unique11 universities and firms in the HHS arena (i.e., evaluation entities that received contract funding) between FY 2008-2022; a total of 29 universities and 96 firms were found to exist (see Chapter 3 for information on the data source, data collection, and cleaning procedures). In line with observations from other industries, both university and firm count data appear to follow a nonlinear pattern12 (Carroll, 1984; Carroll & Huo, 1986). These results indicate that, for both universities and firms, an overall positive relationship initially occurred between the total number of evaluation entities existing in the HHS arena and year; the total number of universities and firms in the HHS arena increased each year between FY 2008 and 2010. Results then show that this positive relationship became negative for firms in FY 2011 and for universities in FY 2014. Both firms and universities continued to experience ebbs and flows in the number of total entities each year between 2014 and 2022.

Figure 3
Historical Count of Total Universities and Firms in HHS Arena, FY 2008-2022

11 See Footnote 8.
12 A nonlinear pattern occurs when the relationship between two variables changes over time (i.e., the relationship between two variables is not linear) (Merrill, 2017).

RQ 1b: Likelihood of Acquiring New U.S. HHS Evaluation Contract Work, FY 2008-2022

Organization Type.
Evidence of the liability of newness was first explored by examining differences between entity types (i.e., universities versus firms). Analyses began with a review of annual births and deaths by type, followed by estimates of the survivor function through Kaplan-Meier curves and a log-rank test.

Liability of Newness. Beginning with an overall picture of evaluation entity births and deaths in the HHS arena between FY 2008-2022 sets the stage for further analysis of the survivor functions. Figure 4 presents university and firm birth data between FY 2008-2022. A total of 42 university births13 and 170 firm births14 occurred between FY 2008-2022. Firms saw an initial steep decline in births between FY 2008-2009; universities saw a steep decline in births between FY 2013-2014. Both firm and university birth data demonstrate a consistently nonlinear pattern throughout the observation period (FY 2008-2022).

Figure 4
Annual University and Firm Births in HHS Arena, FY 2008-2022

Figure 5 presents university and firm death data between FY 2008-2021.15 A total of 42 university deaths and 153 firm deaths occurred between FY 2008-2021; zero universities and 17 firms were right censored.16 Both university and firm death rates demonstrate a nonlinear pattern; while university death rates increased between FY 2008-2010, firm death rates fluctuated throughout FY 2008-2021.

13 The total number of university births (N=42) does not equal the total number of unique universities in the HHS arena between FY 2008-2022 (N=29) as eight universities experienced at least two births.
14 The total number of firm births (N=170) does not equal the total number of unique firms in the HHS arena between FY 2008-2022 (N=96) as 44 firms experienced at least two births.
Figure 5
Annual University and Firm Deaths in HHS Arena, FY 2008-2021

A Kaplan-Meier estimator (see Figure 6) was used to compare university and firm survival curves (i.e., to compare the probability universities and firms received new HHS funding each subsequent fiscal year). Both universities and firms had a median survival time of one year. Results indicate that the probability universities survived beyond year one was 14.29 percent; the probability firms survived beyond year one was 44.71 percent.17 The steep early decline in both university and firm survival probabilities supports the liability of newness—the less time an evaluation entity existed in the HHS arena, the greater the likelihood of entity death at a given time point.

Figure 6
Kaplan-Meier Estimated Survival Curves, Universities and Firms

Note. The dashed line represents university data while the solid line represents firm data. The solid black horizontal line visually depicts the median survival (50th-percentile point). Tick marks represent time points when data were right censored; there are no tick marks on the dashed line.

15 Death data are only presented through FY 2021, as 2022 represents censored data (i.e., 2022 represents the end of the observation period, not entity death).
16 "Right censoring" refers to the instance when an observation period ends prior to the event occurring; see Footnote 6.
17 See Appendix G for Kaplan-Meier estimated survival tables for universities and firms. Survival tables depict the data in the Kaplan-Meier curves by the number of evaluation entities at risk, the number of entities that died, the survival probabilities, standard error, and confidence intervals at each time point.

A log-rank test was performed to test whether there was a statistically significant difference between the university and firm survival curves.
Results indicate there was a statistically significant difference between the university and firm survival curves, χ²(1) = 17, p < .001; there is evidence to believe that the likelihood of receiving new HHS evaluation contracts between FY 2008-2022 was influenced by whether an entity was a university or a firm.

Organization Size: Small Versus Not Small Firms. Evidence of the liabilities of newness and smallness was further explored by examining differences by firm size (i.e., small versus not small). Analyses began with a review of annual births and deaths by size, followed by estimates of the survivor function through Kaplan-Meier curves and a log-rank test.

Liabilities of Newness and Smallness. Understanding the overall picture of small and not small firm births and deaths in the HHS arena between FY 2008-2022 helps situate the subsequent survivor function analysis. Figure 7 presents a historical count of unique18 small and not small firms in the HHS arena (i.e., evaluation entities that received contract funding) between FY 2008-2022; a total of 58 small firms and 38 not small firms were found to exist. The total historical counts of both small and not small firms demonstrated a nonlinear pattern. The total count of not small firms increased each year between FY 2010-2015; the total count of small firms increased each year between FY 2008-2010. The total count of not small firms experienced a steep decline between FY 2015-2016; the total count of small firms experienced a steep decline between FY 2019-2020.

18 See Footnote 8.

Figure 7
Historical Count of Total Small and Not Small Firms in HHS Arena, FY 2008-2022

Figure 8 presents small and not small firm birth data between FY 2008-2022. A total of 90 small firm births19 and 80 not small firm births20 occurred between FY 2008-2022.
Not small firm births increased each year between FY 2011-2013; small firm births fluctuated from year to year. Not small firm births saw a steep decline between FY 2015-2016; small firm births saw steep declines between both FY 2016-2017 and FY 2021-2022.

19 The total number of small firm births (N=90) does not equal the total number of unique small firms in the HHS arena between FY 2008-2022 (N=58), as 21 firms experienced at least two births.
20 The total number of not small firm births (N=80) does not equal the total number of unique not small firms in the HHS arena between FY 2008-2022 (N=38), as 23 firms experienced at least two births.

Figure 8
Annual Small and Not Small Firm Births in HHS Arena, FY 2008-2022

Figure 9 presents small and not small firm death data between FY 2008-2021.21 A total of 83 small firm deaths and 70 not small firm deaths occurred between FY 2008-2021; seven small firms and 10 not small firms were right censored.22 Small firm deaths increased each year between FY 2008-2010 and experienced steep increases between FY 2013-2014 and FY 2018-2019; not small firm deaths experienced periods of increase, decrease, and consistency, with a steep increase between FY 2014-2015.

21 See Footnote 15.
22 See Footnote 6.

Figure 9
Annual Small and Not Small Firm Deaths in HHS Arena, FY 2008-2021

A Kaplan-Meier estimator (see Figure 10) was used to compare small and not small firm survival curves (i.e., to compare the probability small and not small firms received new HHS funding each subsequent fiscal year). Small firms had a median survival time of one year; not small firms had a median survival time of two years. Results indicate that the probability small firms survived beyond year one was 35.56 percent; the probability not small firms survived beyond year one was 55 percent.
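The survival probabilities and log-rank statistics reported in this chapter can be sketched in miniature. The plain-Python functions below are the author's illustration only (the study's actual analysis code and data are not shown here); the toy durations are invented, and `kaplan_meier` and `log_rank_chi2` are hypothetical names. The sketch shows how duration/censoring pairs (years until entity "death," flagged False when right censored) yield a product-limit survival estimate and a two-group log-rank chi-square with one degree of freedom.

```python
def kaplan_meier(durations, observed):
    """Product-limit estimate of S(t) at each distinct event time.

    durations: years until entity death (no new contract) or censoring
    observed:  True if the entity died, False if right censored
    """
    event_times = sorted({t for t, d in zip(durations, observed) if d})
    surv, s = {}, 1.0
    for t in event_times:
        at_risk = sum(1 for dur in durations if dur >= t)
        deaths = sum(1 for dur, d in zip(durations, observed) if dur == t and d)
        s *= 1 - deaths / at_risk          # survive this interval
        surv[t] = s
    return surv


def log_rank_chi2(dur_a, obs_a, dur_b, obs_b):
    """Two-group log-rank statistic (1 degree of freedom)."""
    durations = dur_a + dur_b
    observed = obs_a + obs_b
    group = ["a"] * len(dur_a) + ["b"] * len(dur_b)
    event_times = sorted({t for t, d in zip(durations, observed) if d})
    o_minus_e, var = 0.0, 0.0
    for t in event_times:
        n = sum(1 for dur in durations if dur >= t)        # pooled at risk
        n_a = sum(1 for dur, g in zip(durations, group) if dur >= t and g == "a")
        d = sum(1 for dur, o in zip(durations, observed) if dur == t and o)
        d_a = sum(1 for dur, o, g in zip(durations, observed, group)
                  if dur == t and o and g == "a")
        o_minus_e += d_a - d * n_a / n                     # observed minus expected
        if n > 1:                                          # variance undefined at n == 1
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var


# Toy data: five entities, one right censored at year one.
surv = kaplan_meier([1, 1, 1, 2, 3], [True, True, False, True, True])
# surv[1] == 0.6: 2 of the 5 at-risk entities died in year one
chi2 = log_rank_chi2([1, 2], [True, True], [2, 3], [True, True])
```

The median survival time reported above corresponds to the first time point at which the estimated S(t) drops to 0.5 or below; the chi-square is referred to a one-degree-of-freedom distribution to obtain the p values reported in this chapter.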
The steep early decline in both small and not small firm survival probabilities supports the liability of newness—the less time a firm existed in the HHS arena, regardless of firm size, the greater the likelihood of entity death at a given time point.

Figure 10
Kaplan-Meier Estimated Survival Curves, Small and Not Small Firms

Note. The solid line represents small firm data while the dashed line represents not small firm data. The solid black horizontal line visually depicts the median survival time (the 50th percentile). Tick marks represent time points when data were right censored; tick marks on the dashed line occur at the one-, two-, three-, four-, seven-, eleven-, twelve-, and fifteen-year marks.

A log-rank test was performed to test whether there was a statistically significant difference between the small and not small firm survival curves. Results indicate there was a statistically significant difference, χ²(1) = 7.9, p = .005; there is evidence to believe that the likelihood of receiving new HHS evaluation contracts between FY 2008-2022 was influenced by whether an entity was a small or not small firm.

Overall, the event history data point to the liability of newness as having an influence on both universities and firms that existed in the HHS arena between FY 2008-2022. Additionally, the statistically significant difference between small and not small firms indicates the liability of smallness had an influence on firms that existed in the HHS arena between FY 2008-2022. Event history analysis findings are reiterated below by evaluation entity type and firm size.
• Evaluation Entity Type and the Liability of Newness: Results indicate a statistically significant difference in survival probabilities between universities and firms; the probability of receiving new HHS evaluation contract funding between FY 2008-2022 was influenced by whether an evaluation entity was a university or a firm. Additionally, the steep early decline in both university and firm survival probabilities supports the liability of newness—the less time an evaluation entity existed in the HHS arena, the greater the likelihood of entity death at a given timepoint.

• Firm Size and the Liabilities of Newness and Smallness: Results indicate a statistically significant difference in survival probabilities between small and not small firms; the probability of receiving new HHS evaluation contract funding between FY 2008-2022 was influenced by whether a firm was small or not small, which supports the liability of smallness. Additionally, the steep early decline for both small and not small firms supports the liability of newness—the less time a firm existed in the HHS arena, regardless of size, the greater the likelihood of firm death at a given timepoint.

Interviews

Interview data were drawn from semi-structured interviews with 11 practicing evaluators across ten organizations. Themes that emerged from an analysis of interview transcripts, organized around the study's research questions (which were embedded in the study's conceptual and theoretical frameworks), are presented next.

Interview Sample

Table 7 provides information on each of the 11 interviewees' places of employment (i.e., entity type, size, age) and area(s) of specialization.
Ten of the 11 interviewees were employed at an evaluation entity established over 30 years ago; one interviewee was employed at an entity established over 20 years ago. Three of the 11 interviewees were employed at small entities; eight were employed at other than small entities. The top three areas of interviewee specialization were early childhood (six), disabilities or special education (four), and education (four).

Table 7
Interviewee Employment Information

Interviewee pseudonym | Entity type | Entity size      | Entity age | Interviewee specialization(s)
Andy                  | University  | Other than small | > 30 years | Disabilities
Brandi                | University  | Small            | > 20 years | Early childhood
Carlos                | University  | Other than small | > 30 years | Disabilities
Dan                   | University  | Small            | > 30 years | Career pathways, Child development, Education
Ebony                 | Firm        | Other than small | > 30 years | Early childhood, English language learners, Special education
Frankie               | Firm        | Other than small | > 30 years | Criminal justice, Youth development
Georgina              | Firm        | Other than small | > 30 years | Child welfare, Early childhood, Education
Han                   | Firm        | Other than small | > 30 years | Child welfare, Early childhood, Education
Ivy                   | Firm        | Other than small | > 30 years | Child welfare, Early childhood, Fatherhood, Youth development
Josh                  | Firm        | Small            | > 30 years | Early childhood, Education, Disabilities
Koda                  | Firm        | Other than small | > 30 years | Healthcare, Technology

Note. Entity size was categorized as either "small" or "other than small" as defined by the U.S. Code of Federal Regulations Part 121 – Small Business Size Regulations (1996).

RQ 2a: Evaluation Provider Perceptions: Federal Contract Work Landscape

Interviewees were asked about their current and future perceptions of the landscape for federal external evaluation contract work in the United States.
Specifically, evaluators were asked whether certain outside market factors influenced the amount and type of work they were currently seeing the federal government request (e.g., through RFPs), as well as the future amount and type of work they anticipated seeing through RFPs. Within each interview, the interviewer defined "outside market factors" as anything from major health, social, or economic events (e.g., COVID-19, George Floyd's murder, the Great Recession) to changes in administrations, major policies, or legislation. The aim of these questions was to promote dialogue to ascertain whether and how evaluators perceived outside market factors as impacting contract opportunities.

Three major themes emerged across all interviews: (1) not seeing presidential administration changes as having a major impact on RFPs; (2) recently witnessing greater calls (through RFPs) for work that focuses on economic, health, and racial disparities; and (3) experiencing demand-side (i.e., federal and state funder) barriers to embedding diversity, equity, and inclusion (DEI) in research and evaluation work. Interviewees often attributed the recent increase in economic, health, and racial disparity work to the COVID-19 pandemic and heightened racial violence (e.g., George Floyd's murder). Barriers to incorporating DEI in evaluations and research were often attributed to demand-side (i.e., funder) constraints.

Changes in Presidential Administrations. Following presidential elections, changes in the scope, priorities, personnel (new political appointments), and appropriations of federal agencies can have a dramatic impact on discretionary grant and contract funding. In discussing perceptions about the landscape for federal research and evaluation contract work, some interviewees responded that they did not see changes in presidential administrations as having a major impact on RFPs.
Andy, for example, whose work at a university center focuses on disability-related research and evaluation, noted that he has

not really seen [demand-side] economic factors or changes in administrations matter that much. Assumptions are sometimes made that under Republican leadership there's less likely to be research and evaluation on social types of things, but in all honesty, federal funding for intellectual and developmental disabilities programs has expanded under Republican leadership more than it has under Democratic leadership. (Andy)

Ebony, whose work focuses on early education, English language learners, and special education, described her perspective on why, "for at least the last decade or so now," she has not seen presidential administration changes influence her work:

At the federal level, flat is the new up. We've had a lot of federal budgets over the past ten years where it was basically continuing resolutions of what it was before. In some weird way, the way we've spent our money within the federal government is not significantly different now from the Clinton administration. There have only been a handful of times that there has been an actual shift, like when IES [Institute of Education Sciences] was created. But for many years, if you get a stalemated Congress, that 'flat' ends up happening. Under Trump, a lot of people in education were terrified it was going to be a bloodbath, but it ended up being kind of stable—evaluation and research work has been stable at the federal level. There are things that have happened recently with the CARES Act and the American Rescue Plan Act that are throwing a bunch of new money into the system, so that will end up in research and evaluation. There will be this new clump of money. But generally, I haven't seen much change by administration because if changes in research or evaluation priorities even happen, it takes a long time for things to be enacted by Congress. (Ebony)
Carlos, whose work focuses on disabilities, noted that while he has not seen major changes in funding priorities based on presidential administrations, he does believe that "you still have to pay attention to who is in the federal offices and what they roll out [through RFPs]—certain administrations have been very generous in some areas and lean in others, and vice versa" (Carlos). Ivy, whose work focuses on child welfare, early childhood, fatherhood, and youth development, shared a similar sentiment: while she has overall not witnessed major changes by administration,

some administrations like [the Biden] administration, for example, [have] been better than most at pushing a couple of big issues down into federal agencies, which we are then seeing affect our work. More [federal] agency RFPs include elements of equity issues based on George Floyd of course, but also general growing concerns about racial equity and other forms of equity. (Ivy)

Influences of Economic, Health, and Racial Disparities. Societal events such as recessions, high inflationary periods, dramatic domestic events (e.g., COVID-19, George Floyd's murder), and world events (e.g., Russia's 2022 invasion of Ukraine) can cause changes in funding appropriations for discretionary grant and contract spending. All interviewees discussed the impact COVID-19 and George Floyd's murder had on the current amount and types of work they were seeing in RFPs, and what they anticipated seeing in future RFPs.

Koda reflected on how his healthcare and technology evaluation work was greatly impacted by COVID-19. He described seeing RFPs for updating what seemed like "every single diagnosis code because of new things like throat swabs. It was a huge amount of work that had to be done very fast. And then there was doing telehealth—literally needing to create and then evaluate system platforms" (Koda).
Andy explained that he sees the recent, heightened focus and interest on equity and diversity coming through government RFPs as a direct result of public outcry.

There was also a lot of worry about economic challenges like during the Great Recession. So, there was worry that COVID would put us in a recession and drastically cut the amount of work available, which would make everything super competitive, but I didn't see that happen. If anything, there was a ton of new work because of COVID and the ARPA [American Rescue Plan Act] funds that have come from it. (Andy)

Several interview respondents reflected on how COVID-19 and George Floyd's murder "underscored how inequities lead to real, massive issues for low-income families and families and communities of color. There's these bigger societal issues that funders [like the federal government] are now really lifting up and wanting addressed because of these recent events" (Frankie). Frankie, whose work focuses on criminal justice and youth development, described how she believes COVID-19 has led to an increase in RFPs around "economic mobility—on a focus on individuals who are low income and community interventions for helping folks move up in the labor market, move towards a living wage for their families and sustainable jobs for themselves" (Frankie). Carlos shared a similar perspective, noting that "with COVID and ARPA, there's been a tremendous amount of opportunities related to health and health care access, health disparities, and a lot more money related to the workforce coming about through RFPs" (Carlos).

Beyond the current state of the field, several interviewees expressed both hope and concern for the future.
There was a general hope that the recent increased emphasis on disparity and DEI work would continue, and a concern that waning COVID-19 measures (due to the advent of vaccines and an emotionally and mentally exhausted society) would mean funders shift back to pre-COVID-19 priorities (e.g., an even greater emphasis on educational testing). Han, whose work focuses on child welfare, early childhood, and education, shared his hope and perspective that

because there's no way these past two years happened without impacting children and families, the potential for work for [my firm] is great. And I think it's going to keep coming from the government because if people weren't thinking about social and emotional well-being before, they're certainly thinking about it now. There's also this sense that we need to do something to repair and address all the inequities these past two years have really explicitly and loudly brought to everyone's attention. (Han)

Frankie shared a similar hope, "that given how prominent [equity] is now, my hope is it's not one of these hot new things that will disappear in a year—so many funders are including [equity] in RFPs—it feels like people are recognizing its importance" (Frankie). Carlos, whose work focuses on disabilities, expressed both hope and concern for how COVID-19 and George Floyd's murder could impact his future work. He explained how he is hopeful because,

in our pursuit as a nation, and our continued reckoning with diversity, equity, and inclusion, and the systematic oppression—intentionally and unintentionally—of marginalized groups, only gives voice to a broader, collective body. [My university center] talked a lot after George Floyd's murder about the trauma and hurt and abuse of power that occurs, regularly, in other industries and how we as a group of people that cares about disabilities know what that injustice feels like.
We know what systematic oppression feels like. We know what institutionalization feels like because our whole history is steeped in it. We all need allies. I think the disability community is well situated to both lead and partner in that movement, whether it comes through RFPs or more broadly in policy, research, evaluation, community building, or elsewhere. (Carlos)

At the same time, Carlos expressed concern about his future work:

Because COVID put a temporary hold on certain legislation under certain policies, it's going to take us years to get back to what our pre-COVID-19 inclusion looked like—and it wasn't good before COVID; kids were still being excluded in schools. Additionally, people with disabilities had more hurdles to jump through to get access to quality health care, and there was already a lack of allied health professionals to support people with disabilities and their health care needs. It's going to take a lot of will to get back to where we were before COVID—a lot of research and evaluation that will require political and financial will, and I'm not sure we have that. (Carlos)

Barriers to Embedding DEI into Research and Evaluations. While many interviewees described witnessing an increased emphasis on economic, health, and racial disparities in the RFPs they were reviewing, several also discussed demand-side (i.e., funder-driven) barriers (e.g., bureaucratic processes) they believed prevent evaluators from fully engaging in diversity, equity, and inclusion (DEI) work. Ebony noted that most of her firm's federal grants and contracts work comes from the U.S. Department of Education, which she describes as "a very bureaucratic process that can be resistant to change" (Ebony). She described how

things are structured the way they are [in the U.S. Department of Education] because they made it through a legal review, not for any other good reason.
There are all these technical assistance projects that are funded by the Department, and they all have three tiers of technical assistance—universal, targeted, and intensive. That is the bible because a lawyer signed off on it once before. For people operating inside the system, like, they have to be able to argue that the money is a worthwhile investment of public money, and one of the ways to do that is through a legal review. Then, when I've seen the Department say that they have this 'wild,' new crazy thing that's going to revolutionize technical assistance projects and create equity and promote inclusion—I was told by a contact at the Department that the lawyers were like, 'Oh. Well, we haven't done that before so I don't think we can do it here.' In this way, it's a little bit at cross purposes because the Department wants more DEI but then there's these structural barriers to actually creating and implementing the work. (Ebony)

In addition to barriers at the federal government level—and while the current study focused on entities that receive funding from the federal government—some interviewees also discussed barriers to embedding DEI into work funded through state governments. Brandi, for example, described a situation where political forces greatly impacted one of her recent evaluations. Her team had been awarded a state agency contract to evaluate a local school district's social-emotional learning (SEL) program, and the start date for the contract's period of performance (i.e., the start and end dates of a contract) happened to coincide with an election cycle. Election results meant the state agency saw major changes in leadership and agency funding priorities. Brandi described this experience, reflecting on how major political forces can impact our work as evaluators.
In Brandi's case, she explained how the change in leadership meant that the state was now

equating SEL with critical race theory [CRT], which caused the district to have to change their website and tell teachers they have to be very careful about the language they use. Race is an important factor in SEL work, but teachers now have this incredible pressure on them to not even talk about race in the classroom. There's very much this pressure because our contract is still active—the state still wants an evaluation of this program, but now we're in this uncomfortable space because the funder wants one thing, but the education context has shifted so much that I don't think the evaluation will be an accurate portrayal of the program. I'm also questioning what the value-add is to this evaluation. Like, will the state even look at or use these results? I doubt it. It'll probably just be so they can check a box, which as an evaluator, researcher, and mom makes me both frustrated and disappointed. (Brandi)

Like Brandi, Josh—whose work focuses on early childhood, education, and disabilities—also described struggles with two recent evaluations that were funded through state education agencies. Josh explained that while the first state agency (i.e., the funder) requested the final evaluation product be a professional development session focused on overrepresentation and disproportional representation of some children in special education classrooms, the evaluation team was told they were "not allowed to use certain words, like 'race' or 'critical race theory,' in the professional development workshop" (Josh). For Josh's other state evaluation, the governor had banned schools from requiring students and staff to wear masks to prevent the spread of COVID-19. The evaluation was then greatly impacted, as many students and staff were often out sick, which made administering surveys and conducting observations incredibly difficult (Josh).
Despite these challenges, Josh believed that DEI will continue to be embedded in his future evaluation work, for "the whole conversation about diversity, equity, inclusion, and racism is not going to go away … as much as some states would love for it to" (Josh).

Summary. Interviewees frequently described their current and future perceptions of federal evaluation and research contract work in terms of the impact outside market factors had on funders, which, in a heavily demand-driven field, then trickled down to the amount and types of work they were seeing in RFPs. Additionally, there was some variation across interviewee perceptions. Those who discussed their view that changing presidential administrations had no major impact on RFPs came from different types of organizations (i.e., one from a university center and two from firms), but all came from the same sized organizations (i.e., not small) and specialized in the same niche (i.e., special education and disability-related work). All interviewees shared similar perspectives around seeing an increasing number of RFPs calling for work on economic, health, and racial disparities—including work in diversity, equity, and inclusion (DEI). However, only interviewees in the early childhood niche discussed their experiences with demand-side (i.e., state and federal funder) barriers to implementing DEI in their work.

RQ 2b: Evaluation Provider Positioning within the Evaluation Market

Interviewees were asked to describe how they see their firm or university being situated within the competitive external evaluation market.
Specifically, evaluators were asked: (1) how they view their organization's position in the market in terms of organization size, capacity, and capabilities compared to other organizations they might compete with for federal grants, cooperative agreements, and contracts, and (2) whether they perceived outside market factors as having an impact on how their organization approached or responded to new contracts (e.g., by creating a dedicated team focused on COVID-19 or health disparities research).

The goal of the first question was to spark dialogue about interviewees' perceptions of the field's levels of competitiveness and cooperation, as well as to ascertain the kinds of skills, experience, and expertise they look for in new middle- to senior-level hires. The researcher chose to ask interviewees about the middle- to senior-level tiers because employees at these levels are typically responsible for responding to RFPs and acting as principal investigators (PIs), project directors (PDs), or project managers (PMs). Interviewees' responses to this question varied based on their organization's size and market niche.

The goal of the second question—whether interviewees perceived outside market factors (e.g., COVID-19, George Floyd's murder) as having an impact on how their organization approached or responded to new contracts—was to spur discussion to capture interviewees' perceptions of whether and how their organization was reacting to changes in the discretionary contracting environment (e.g., whether they hired a new staff member with explicit DEI background and experience, or someone with expertise in participatory action research because they were seeing an uptick in the number of RFPs requiring this method).
In response to this question, evaluators frequently replied that they saw their organization recently placing a greater emphasis on embedding diversity, equity, and inclusion into multiple aspects of its work (e.g., required DEI training, intentionally making new hires through a DEI lens, and ensuring project teams either are staffed with an employee specifically dedicated to ongoing project DEI review or follow an explicit process that requires projects to consult with the organization-wide DEI team). The two main themes that emerged from these questions involved (1) the influence of organizational age, niche, size, and type, and (2) the need to build internal capacity in response to outside market factors to increase organizational nimbleness.

Organization Age, Niche, Size, and Type. Several interviewees described how the way their organization approaches new work is related to its age, size (i.e., small versus not small), and type (i.e., university versus firm). When reviewing an RFP with the aim of bidding on it, some interviewees commented on how they first consider whether their organization has the capacity to pursue the work, and then assess whether their team has any gaps in the skills, capabilities, or expertise required to submit a competitive proposal. In thinking of these gaps, interviewees also often commented on the skills, capabilities, and expertise they look for when hiring new staff.
Common responses included: demonstrated research skills (i.e., developing a research question, identifying appropriate methods, analyzing data using statistical software, writing results); the ability to write competitive grants; interpersonal skills (e.g., working well with other team members and clients); subject matter expertise (e.g., disability policy in K-12 public education); methodological expertise (e.g., mixed methods, quasi-experimental, random assignment); and project management skills.

Throughout our discussions, interviewees also mentioned what they see as the pros and cons of being situated in the external evaluation market based on their entity type (i.e., firm versus university) and niche (i.e., specialization). Georgina, whose work focuses on child welfare, early childhood services and development, and education, described how she believes her firm's long-standing relationships with federal agencies (i.e., the over 30 years the firm has been working with the federal government) have provided them the opportunity to engage in research and evaluation that has a wide-reaching impact with policymakers and decision makers.

Having an established relationship with federal agencies—around sixty percent of our work is federal—has also meant that we've been able to diversify our portfolio to state-funded and foundation-funded projects in addition to federal ones. Because the federally funded contracts can be $5 to $10 million over the course of several years, we know we are financially stable enough to pursue smaller opportunities, which then allows us to develop new relationships and expand our experience and expertise. (Georgina)

When thinking about how his university center compares to firms they might compete against or cooperate with for federal evaluation work, Carlos explained:

We are smaller and our capacity is nowhere near that of the larger firms like Mathematica, Westat, or WestEd.
So, when we think about the structures of how we bring in money to our entity, like, you know, reviewing and responding to RFPs, we oftentimes think of ourselves as a package deal, [as providing both content and methodological expertise], or we sub out pieces we're missing in expertise to other entities. When comparing us to other university centers, we lean a little bit bigger. But like I just said before, we know we're on the weaker side for some types of specific analysis work, so we tend to contract out for some of those more in-depth analysis pieces that might be in an RFP. (Carlos)

When she thinks about competition in the federal arena, Brandi described how she sees advantages and disadvantages to being in a public university research and evaluation center instead of at a firm. From her perspective, one advantage is having an indirect rate that is lower than that of a previous firm she worked for, meaning the university's costs are sometimes lower than a corporate research organization's; this makes the university more attractive and competitive than a firm when bidding on an RFP. Another advantage to being in a public university over a firm is that "because we're situated in a university, we are evaluated and incentivized to publish. So, I think we're at an advantage because our range and depth of content expertise is so great" (Brandi).
In reflecting on her previous experience at a firm, Brandi noted that an advantage to being at a firm is that it is more nimble than a university—"Because of a university's structure and various administrative systems and reporting requirements, it can be a disadvantage because we're not able to always respond to RFPs as quickly as we need to." Georgina described how she thinks being a not small firm with a specialized niche (i.e., narrow in scope, providing research and evaluation in only a few methodological and content areas) is a major advantage to her organization's level of competitiveness on RFPs. She explained that being situated in this way, on top of the organization being in the field for over thirty years, means their relationships with federal clients (i.e., federal agencies) are robust. Georgina noted that because of these long-standing relationships, "they often hear from their FPOs [federal project officers—the main point of contact for a federal contract]—about things like grants, cooperative agreements, or contract opportunities that are forecasted and that [the FPO] suggests [the firm] goes for." Ivy, whose firm has a generalist niche (i.e., wide in scope, providing research and evaluation in an array of methodological and content areas), described how, from her perspective, being among "the broadest [of firms], covering a broader range of topics and methodologies than most [organizations], has advantages and disadvantages. Being broad means greater flexibility to respond to changing [federal] priorities, but also means continually hiring new experts in those areas" (Ivy). Building Internal Capacity in Response to Outside Market Factors. Several interviewees described their organization's efforts to build internal capacity in response to funding priorities they were seeing in RFPs, which interviewees unanimously attributed to the government's response to outside market factors.
For example, four individuals employed at firms and three individuals employed at universities noted their employer had recently intentionally hired someone for a diversity, equity, and inclusion (DEI) position, pulled a current employee into a new DEI role, or created a written plan to hire and develop the organization's DEI capacity in the near future. The priority to make these new hires was attributed to the government's response to growing concerns about health equity and racial equity. For example, Han said he believes his firm's new impact plan, which emphasizes growing internal capacity around health and racial equity issues, "was created in response to current events [like COVID-19] and what the firm was seeing as being required in RFPs" (Han). Ivy reflected on the importance of an organization's ability to anticipate RFP requirements. She described how her organization was seeing the growing concern about race and health equity issues being addressed in the government through these RFPs that are coming out. And we always want to be ahead of that. Whatever kind of organization you are, you don't want to see an RFP and go, "Oh, we have to show them we're sensitive to and have experience with issue X and we never thought about it before." You have to be experienced and yet also adaptable to meet the changing needs of [federal] agencies and the environment we're in—trying to do survey collection at the height of COVID taught us that. And for us, I think our firm is particularly competitive because we've actually been thinking about this kind of [work] for many years. We've had a dedicated DEI team for years, although they were called something different until just a few years ago.
In addition to being individually responsive to increased DEI requirements in RFPs, several interviewees also discussed subcontracting (and by virtue, aspects related to relationship building) with partners as a strategy for meeting RFP requirements. Andy, for example, whose work at a not small university center focused on disability-related research and evaluation, described how his center had subcontracted to 51 other organizations in the last year alone. Andy believed that by partnering with other organizations, his university center was able to acquire more funding than if they had opted to pursue those opportunities on their own. Georgina, who worked at a not small firm specializing in child welfare, early childhood development, and education, also commented on the benefits of subcontracting to other entities, but then noted that our partners are our competitors, so we are always considering whether a subcontracting opportunity would be a mutually beneficial partnership, or if it would result in us helping someone else build their capacity when we should really be focused on building our own capacity. When thinking about building their own internal capacity, a couple of interviewees noted that their organization had recently posted hiring opportunities for specific kinds of domain and methodological expertise. Josh, for example, explained that his firm noticed a growing amount of work was becoming available in his organization's niche (early childhood, education, and disability work). To stay competitive and have the capacity to bid on more projects, Josh's firm had been looking for a new senior evaluator—an "ideal candidate with expertise in special education, evaluation, and participatory research methods." Dan also mentioned that his university center's hiring needs had shifted in recent years. Specifically, his center saw an increased amount and type of evaluation work coming out of the federal government.
They had recently started a search to hire a senior evaluator with expertise on the impacts of health and racial inequities on early childhood development as a result. The hiring process had, however, proven difficult due to "many people's preconceived notion about the political environment in [state the center is located in], so recruiting to [state] ha[d] been very difficult" (Dan). Summary. Interviewees frequently described how the way their organization was situated in the field (i.e., based on entity age, niche, type, and size) impacted how they responded to and prepared for federal contracting opportunities. Once again, evaluators expressed the strategies and practices (e.g., subcontracting, relationship building, hiring) their organization uses to be competitive in the field through the lens of needing to be responsive to funder demands (e.g., specific requirements the demand side—federal and state governments—express through RFPs). Additionally, there appeared to be some differences in evaluators' perceptions based on their organization's niche (i.e., areas of specialization), size (small versus not small), and type (university versus firm). The next chapter, Chapter 5, discusses the results that emerged through the event history analysis and interviews. Main study themes, limitations, and implications for the field and future research are examined. Chapter 5: Discussion and Implications In describing the commercial (i.e., supply and demand) side of evaluation, Nielsen et al. (2018b) state that as a field, we evaluators … fail to preach about all of our practice. What is often left silent in our exchanges, what we all too often fail to consider in the ongoing development of our evaluation theory and practice, in developing our evaluation profession, is the ever-present market conditions within which we practice evaluation. (p.
244) This research examined some of the 'ever-present market conditions' in external research and evaluation contract work to shed light on the field's dependence on, and ability to adapt to, demand-side actors (e.g., federal government agencies) and external environmental factors that shift market priorities and resources (e.g., changes in political administrations). Possessing a greater understanding of how market conditions affect evaluation practice is vital to the future strength and sustainability of the evaluation industry. For evaluation providers to survive, the community needs to understand that the current and projected demand for evaluation services and products, the competitiveness of the field, and the competencies required to flourish are all influenced by outside market factors. Using the Evaluation Market Framework developed by Lemire et al. (2018b; see Figure 1, Chapter 1, p. 3) and Hannan and Freeman's (1989) theory of organizational ecology, this study examined the landscape of federal external evaluation work in the United States. This chapter first presents a summary of the study's main findings (Table 8), followed by further discussion of the findings, study limitations, and suggestions and implications for practicing evaluators and future research. The first two research questions explored the likelihood firms and universities acquired newly funded HHS evaluation contract work each fiscal year between FY 2008-2022. The third and fourth research questions examined practicing evaluators' perceptions of and positioning in the federal evaluation contract landscape.

Table 8
Summary of Main Study Findings

RQ1: What is the likelihood evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY), between FY 2008-2022?
• A nonlinear pattern with an overall relatively small number of evaluation providers. Historical count data of evaluation entities entering the HHS evaluation contract arena for the first time between FY 2008-2022 waxed and waned; there were periods when each FY saw an increase in providers entering the HHS arena for the first time and periods when each FY saw a decrease. Overall, the total number of evaluation providers in the HHS contracting arena between FY 2008-2022 was relatively small (29 universities and 96 firms).

RQ1b: Which factors (i.e., an entity's size and type) influence the likelihood a firm or university acquired newly funded HHS evaluation contracts each year between FY 2008-2022?
• The likelihood of receiving new HHS evaluation contracts each fiscal year was influenced by entity type. Firms were significantly more likely to receive new HHS evaluation funding each FY than universities.
• The likelihood of receiving new HHS evaluation contracts each fiscal year was influenced by firm size. Not small firms were significantly more likely to receive new HHS evaluation funding each FY than small firms.

RQ2a: How do external research and evaluation providers perceive the federal evaluation contracts landscape?
• Changing presidential administrations did not have an impact on RFPs. Some interviewees said they believed presidential administration changes had no major impact on RFPs.
• Greater calls (through RFPs) for work focusing on economic, health, and racial disparities. Most evaluators described seeing a greater number of RFPs that included work related to economic, health, and racial disparities. This increased emphasis was often attributed to the emergence of COVID-19 and heightened racial violence (e.g., George Floyd's murder) across the U.S.
• Experiencing demand-side (i.e., state and federal funder) barriers to embedding diversity, equity, and inclusion (DEI) in research and evaluation work. A few interviewees discussed the difficulty of embedding DEI in their evaluation and research contract work, despite DEI being included in a contract; structural and political barriers imposed by a funder (e.g., not being allowed to use the word 'race') influenced evaluators' ability to incorporate DEI in their work.

RQ2b: How have external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources?
• Organizational niche, size, and type. Most interviewees described how their organization's niche (i.e., specialization), size, and type (i.e., firm versus university) influence the ways in which they approach new funding opportunities. Interviewees also discussed what they perceive as the pros and cons of their organization's positioning (e.g., a not small university research and evaluation center) in the external evaluation market.
• Need to build internal capacity in response to outside market factors. Interviewees frequently expressed the strategies and practices (e.g., subcontracting, relationship building, hiring) their organization uses to be competitive in the field through the lens of needing to be responsive to funder (i.e., state and federal funder) demands.

RQ 1a: What is the likelihood evaluation firms and universities acquired newly funded evaluation-specific contracts from the U.S. Department of Health and Human Services (HHS) each fiscal year (FY), between FY 2008-2022? RQ 1b: Which factors (i.e., an entity's size and type) influence the likelihood a firm or university acquired newly funded HHS evaluation contracts each year between FY 2008-2022?
This research explored the changing landscape of newly awarded external HHS evaluation contract funding to research and evaluation firms and universities in the United States. Specifically, this study examined which factors (i.e., an organization's size, type, and length of time in the HHS field) influenced the likelihood that a firm or university acquired newly funded HHS evaluation contracts between FY 2008-2022. To answer the first two research questions, Hannan and Freeman's (1989) theory of organizational ecology was employed through an event history analysis technique and basic Excel functions. The specific organizational ecology concepts examined included the liability of newness and the liability of smallness. Prior research on the liability of newness asserts that newer organizations have a higher likelihood of dying when compared to more established organizations (Singh et al., 1986; Stinchcombe, 1965); research on the liability of smallness contends that smaller organizations have a lower likelihood of survival compared to larger organizations (Carroll, 1984; Hannan & Freeman, 1977, 1989). Historical Count of Evaluation Entities in HHS Arena An initial historical count of unique (see Footnote 8) universities and firms in the HHS contracting arena between FY 2008-2022 was conducted to determine the overall size of the study's population (see Figure 3). Through a review of organization mission statements, capability statements, and annual reports, 96 evaluation-specific firms were found to exist across the study's 14-year observation period; due to limited data, all 29 universities found to exist across the study's 14-year observation period were included in the study.
While not generalizable to the evaluation field at large, this relatively low count of evaluation providers in the HHS contracting arena supports findings from previous research that suggest there is an overall limited number of evaluation providers on the supply-side of the evaluation market (House, 1997; Lemire et al., 2018a, 2018b). While neither causation nor correlation were explored in this work, the nonlinear data patterns (see Figure 3) indicate that new entry (i.e., receiving funding for an evaluation-specific HHS contract for the first time) and exit (i.e., not receiving new evaluation-specific funding the following fiscal year) were not constant; there were periods when each fiscal year saw an increase in providers entering and exiting the HHS arena, and periods when each fiscal year saw a decrease. This finding suggests there may be a cyclical component to the number of evaluation entities entering and exiting the HHS arena. While this finding might not have direct implications for practicing evaluators, simply being cognizant of the market's cyclical nature could prove beneficial; understanding that the density of the market ebbs and flows could help evaluators strategize their positioning in the market. While no one can predict the future, staying apprised of the market's major players could help evaluators as they navigate the evaluation contract landscape. Likelihood of Receiving New HHS Evaluation Contracts by Entity Type and Size As previously described, this study sought to understand whether an organization's age (i.e., newness to the HHS arena) influenced its likelihood of receiving new HHS evaluation-specific funding. Data results indicate, as evidenced by a steep early decline in survival probabilities (see Figure 6, Chapter 3, p.
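The entry and exit tallies described above can be sketched programmatically. The study itself used basic Excel functions; the Python sketch below is purely illustrative, using hypothetical award records and made-up entity names, and shows how first-time entries per fiscal year (and a simple last-award proxy for exits) could be counted from USAspending.gov-style data.

```python
from collections import defaultdict

# Hypothetical (fiscal_year, entity) award records; names are illustrative,
# not actual HHS contractors.
awards = [
    (2008, "Firm A"), (2008, "Univ X"),
    (2009, "Firm A"), (2009, "Firm B"),
    (2011, "Univ X"), (2011, "Firm C"),
]

def first_entries_per_fy(records):
    """Count entities receiving an evaluation-specific award for the
    first time in each fiscal year (i.e., 'entry' into the arena)."""
    first_fy = {}
    for fy, entity in sorted(records):
        first_fy.setdefault(entity, fy)  # keep earliest FY per entity
    counts = defaultdict(int)
    for fy in first_fy.values():
        counts[fy] += 1
    return dict(sorted(counts.items()))

def exits_per_fy(records):
    """An entity 'exits' after its last FY with new funding; here the
    last observed award year serves as a simple proxy."""
    last_fy = {}
    for fy, entity in sorted(records):
        last_fy[entity] = fy  # keep latest FY per entity
    counts = defaultdict(int)
    for fy in last_fy.values():
        counts[fy] += 1
    return dict(sorted(counts.items()))

print(first_entries_per_fy(awards))  # e.g., {2008: 2, 2009: 1, 2011: 1}
print(exits_per_fy(awards))
```

Plotting these per-FY counts over the full FY 2008-2022 window would reproduce the kind of waxing-and-waning pattern the study describes.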
65; see also Appendix G for Kaplan-Meier estimated survival tables for universities and firms), that both universities and firms were influenced by the liability of newness. That is, the less time universities and firms had existed in the HHS evaluation arena, the greater the likelihood they would not receive subsequent HHS funding. Put another way, the longer universities and firms exist in the HHS evaluation arena, the more likely they are to receive funding. This finding supports prior research which purports that "the commission of evaluation services [exists] among a finite number of buyers [and] … results in an imperfect market, whereby buyers and sellers of evaluation services have to operate in close inter-dependency" (Lemire et al., 2018b, p. 157). Additional data results indicate a statistically significant difference between evaluation firms' and universities' likelihoods of receiving new HHS evaluation-specific contracts between FY 2008-2022. Although not generalizable to all federal agencies that fund evaluation work, this finding does support—by logical extension—previous research that points to the firm-dominated nature of the federal external evaluation arena (Hwalek & Straub, 2018; Lemire et al., 2018a, 2018b; Peck, 2018). Further, the finding of a statistically significant difference between not small firms' and small firms' likelihoods of receiving new HHS evaluation-specific contracts between FY 2008-2022 supports prior research that the field is dominated by larger firms (Hwalek & Straub, 2018; Lemire et al., 2018a, 2018b; Peck, 2018). RQ 2a: How do external research and evaluation providers perceive the federal evaluation contracts landscape?
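The Kaplan-Meier survival probabilities referenced above can be illustrated with a minimal sketch. This is not the study's analysis or data: the durations below are hypothetical, and the code shows only the basic product-limit calculation behind estimates like those in Appendix G, where an "event" is an entity exiting the HHS arena and censoring marks entities still funded at the end of the observation window.

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier (product-limit) survival estimator.

    durations: years each entity remained in the arena before exiting
               or being censored (end of the observation window).
    observed:  1 if the entity exited (event occurred), 0 if censored.
    Returns a list of (time, survival_probability) steps.
    """
    at_risk = len(durations)
    survival = 1.0
    curve = []
    for t in sorted(set(durations)):  # walk distinct times in order
        events = sum(1 for d, e in zip(durations, observed) if d == t and e)
        if events:
            survival *= 1 - events / at_risk
            curve.append((t, round(survival, 3)))
        # everyone with duration t (events + censored) leaves the risk set
        at_risk -= sum(1 for d in durations if d == t)
    return curve

# Hypothetical durations (years) for six entities; zeros in `observed`
# mark entities still receiving funding at the end of the window.
durations = [1, 1, 2, 3, 5, 5]
observed = [1, 1, 1, 0, 1, 0]
print(kaplan_meier(durations, observed))
```

With these toy numbers the estimated survival probability drops sharply at the earliest event times, the same "steep early decline" pattern the study interprets as evidence of the liability of newness.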
This study examined practicing evaluators' perceptions of the federal evaluation contract landscape through in-depth semi-structured interviews with eleven evaluators across ten firms and university research centers. Impact of Changing Presidential Administrations on RFPs As evidenced by the literature review, changes following presidential elections to the federal government's priorities, political appointments, and federal agency appropriations have historically had major impacts on discretionary contracts funding—and thus the amounts and types of RFPs available to evaluation providers (Lemire et al., 2018a; Nolton, 2020). Interestingly, two interviewees from not small firms who specialized in disability and special education-related work and one interviewee from a not small university center who specialized in disability-related work shared the perspective that changes in presidential administrations (i.e., specifically from one party to another versus intra-party changes) did not have a major impact on the types or amounts of RFPs they saw coming out of the federal government over recent years. While not generalizable, this finding suggests that, for some evaluation providers, practice is not as affected by major changes to the market's demand side (i.e., the federal government) as it is for others; the fact that all three interviewees who shared this perspective were located in the same sized organization (i.e., not small) and niche (i.e., disability and special education-related work) suggests that this finding may differ for small evaluation providers or those in other niches (e.g., health, early childhood). Increasing Work on Economic, Health, and Racial Disparities This research was developed under the organizational ecological assumption that populations of organizations (e.g., the federal external evaluation market) are impacted by outside market forces.
In line with organizational ecology research about outside market influences on labor markets (Siddiqui, 2018; Youn & Gamson, 1994), interviewees attributed demand-side actions to external forces. Specifically, the perceived increase in RFPs that included work related to economic, health, and racial disparities was often related to outside market factors, including recently heightened racial violence across the U.S. (e.g., George Floyd's murder) and the emergence of COVID-19. This finding points to the demand-driven nature of federal external evaluation work; interviewees perceived factors that impact the demand side of the market as influencing the amounts and types of work available to their organizations. Experiencing Demand-side (i.e., State and Federal Funder) Barriers to Embedding Diversity, Equity, and Inclusion (DEI) in Research and Evaluation Work A few interviewees discussed the difficulty of embedding diversity, equity, and inclusion (DEI) in their evaluation and research contract work, despite DEI being included in a contract; structural and political barriers imposed by a funder (e.g., not being allowed to use the word 'race') influenced interviewees' ability to incorporate DEI in their work. Interestingly, the three interviewees who shared this sentiment all specialized in early childhood work, but one worked at a small firm, another at a not small firm, and the third in a university center. While not generalizable, this finding suggests that evaluators in some niches (i.e., early childhood) may experience similar barriers to embedding DEI in their work, regardless of the type and size of their organization. RQ 2b: How have external research and evaluation providers positioned themselves within the federal evaluation market to compete for resources?
This study was conducted under the assumption that being responsive to the evaluation market's demand-driven environment is crucial to obtaining new funding, and obtaining new funding is vital to organizational survival. As such, this research sought to explore the ways in which evaluators position themselves within the federal evaluation market to compete for similar resources (e.g., federal contracts). Specifically, interviewees were asked to describe how they view their organization's position in the market in terms of its capacity, capabilities, and size, and whether they believed outside market factors influenced the ways in which their organization approached opportunities for new work. Organizational Niche, Size, and Type Most interviewees described how their organization's niche (i.e., specialization), size (i.e., small versus not small), and type (i.e., firm versus university) influenced the ways in which they approach new funding opportunities. For example, many interviewees discussed how, when reviewing an RFP, they determine whether their organization (based on its niche, size, and type) has the capacity to pursue the full scope of work described (e.g., whether they think they will be able to complete all work described in the RFP within the provided timeframe). If they decide to respond to the RFP, they then assess whether their team has any gaps in the skills, capabilities, or expertise required to submit a competitive proposal. With these gaps in mind, interviewees then described how their organization's niche and size influence the kinds of skills, capabilities, and expertise they look for in new hires.
Some of the skills, capabilities, and expertise interviewees described include: demonstrated research skills (i.e., developing a research question, identifying appropriate methods, analyzing data using statistical software, writing results); ability to write competitive grants and responses to RFPs; interpersonal skills (e.g., working well with other team members and clients); subject matter expertise (e.g., disability policy in K-12 public education); methodological expertise (e.g., mixed methods, quasi-experimental, random assignment); and project management skills. This finding aligns with previous research on the role external environmental factors play in the composition of a labor market (e.g., the connection between RFP requirements and the skills or expertise evaluators look for in new hires) (Carnevale et al., 1988; Eisner, 2010; Gardner, 1983). While this finding also aligns with research on the importance of having staff with specific expertise and skills in order to successfully procure and execute contracts (Peck, 2018), it is contrary to a recent competency training gap analysis by Galport and Azzam (2017), wherein study participants ranked the ability to respond to RFPs as one of the least important competencies. Throughout these discussions, interviewees also expressed the pros and cons they perceived in how their organization was situated in the external market for funding opportunities. For example, one interviewee whose work focused on child welfare, early childhood services and development, and education described how she believed her not small firm's long-standing (i.e., over 30 years) relationship with federal agencies (including HHS) provided them the opportunity to not only engage in meaningful research and evaluation work, but also diversify their portfolio by expanding their capacity and expertise.
Further, having established relationships with federal agencies has provided the firm a certain level of security. This sentiment echoes previous research which suggests that "establishing and in effect securing evaluation contracts over time allows for [commissioned evaluators to have] continuity, increased capacity, and expertise and sector knowledge" (Lemire et al., 2018b, p. 158). This finding also connects to both prior research and the current study's findings on the liability of newness. Prior research on the liability of newness asserts that newer organizations have a higher likelihood of dying due to various factors, including demand-side processes and lower levels of legitimacy, when compared to more established organizations (Singh et al., 1986; Stinchcombe, 1965). As found in the current study, one interviewee who worked at a not small firm described how her organization benefited (i.e., continually won new contracts) due, in part, to its longstanding history in the federal evaluation contracts arena. Another interviewee who worked at a not small university center also described how he believed his center's history in the federal evaluation and research arena led to his institute becoming known, and then sought after, in their niche. Both interviewees' perspectives buttress the notion that an organization's legitimacy can influence its ability to acquire new federal evaluation funding. Need to Build Internal Capacity in Response to Outside Market Factors The need for evaluators to continuously build internal capacity and adapt to changes in the field is not new (AEA Competencies Task Force, 2018; Dewey et al., 2008; Maynard et al., 2016).
Interviewees frequently expressed the strategies and practices (e.g., subcontracting, relationship building, hiring) their organization used to be competitive in the field through the lens of needing to be responsive to funder (i.e., state and federal funder) demands, which most interviewees attributed to outside market forces (i.e., interviewees stated they saw how outside market factors influenced what funders were requiring in their RFPs). As with other findings in this study, this finding once again points to the heavily demand-driven nature of external evaluation contract work and the implications created by a skewed market (House, 1997). Study Implications Results from this exploratory research have implications for evaluation practice and future research. Implications for Evaluation Practice Results from this exploratory research provided insight into the current landscape of federal external evaluation contract work that may be of broad interest to practicing evaluators. For example, findings related to staffing needs in response to outside market forces (e.g., the need for evaluators who specialize in diversity, equity, and inclusion work) shed light on the importance of staying apprised of demand-side (i.e., funder) needs and priorities. Other findings related to the types of skills, capabilities, and expertise evaluators look for when hiring new staff could be useful to the design of evaluation training programs (e.g., program structure and course requirements). For example, many interviewees emphasized the importance of writing competitive grants and possessing interpersonal skills (e.g., working well with other team members and clients) when considering a new hire. To prepare future evaluators to meet employer preferences, evaluation training programs could require grant writing courses and internships that involve working with both an internal team and external clients.
Additional findings related to professional development and outreach could be useful to practicing evaluators. For example, most interviewees rarely mentioned (if at all) their organization's role in providing professional development to employees. Those who did mention professional development either casually listed one specific workshop their employer provided or noted that their professional development is typically intertwined with travel, which for the past couple of years had mostly or completely halted due to COVID-19. While the researcher's framing of the interview questions could be a contributing factor, the general lack of professional development and outreach discussions in this study suggests that these topics may not be widely prioritized or funded among evaluation firms and universities. As such, practicing evaluators who are interested in professional development may need to actively seek out opportunities on their own instead of relying on organization-provided options. Placing the onus on individual employees to pursue professional development opportunities, however, could prove disadvantageous to evaluation entities (i.e., firms and universities); not providing staff with opportunities to learn new methods, data analysis procedures, or DEI strategies could result in evaluation organizations losing their competitive edge. Implications for Future Research While research on the U.S. federal evaluation marketplace exists in a variety of contexts (Alkin & King, 2016; Biderman & Sharp, 1972; Della-Piana & Della-Piana, 2007; House, 1997; Hwalek & Straub, 2018; Lemire et al., 2018a, 2018b; Maynard et al., 2016; Nielsen et al., 2018a; Peck, 2018), there is still much to explore.
This study examined the changing landscape of federal external evaluation contract work in the United States by exploring the influence newness and smallness had on evaluation firms' and universities' likelihood of receiving new evaluation-specific funding from HHS, as well as evaluation providers' awareness of market forces. Future studies could explore the nonlinear pattern present in the historical count data. Such research could shed light on the supports and barriers evaluation entities encounter when attempting to enter the HHS contracting arena for the first time. Other studies could expand upon Peck's (2018) work to further explore the density of the federal evaluation contracting arena. For example, one could draw upon organizational ecology to examine the field's density dependency, that is, the idea that certain environments can only sustain a certain number of similar-type organizations (Amburgey & Rao, 1996; Baum & Shipilov, 2006; Hannan & Freeman, 1989; van Witteloostuijn et al., 2018). Information gleaned from this research could shed light on competition levels and resource availability within the field (e.g., the number and size of evaluation-specific contracts available for bidding). Other research could involve conducting an in-depth examination of the full landscape of evaluation-specific federal agency spending.
This could involve further exploration of: the types of evaluation services that receive funding (e.g., program evaluations, outcome evaluations, policy analyses, technical assistance and training, capacity building); the most common types of evaluation contracts awarded (e.g., firm fixed, time and materials, cost plus fixed fee); the most common types of competition (e.g., full and open competition, open competition after some exclusion, competitive delivery order); or the distribution of evaluation-specific funding by subagency (e.g., Administration for Children and Families, Health Resources and Services Administration, Indian Health Service). Additionally, the present study did not examine why firms, newer organizations, and not small organizations were more likely than their counterparts to receive newly funded HHS contracts. To explore the why, future research could draw upon organizational ecology to further examine universities’ and firms’ survival probabilities by analyzing possible correlations between various environmental variables and the likelihood of survival. For example, is the length of time a firm remains in the HHS contracting arena (i.e., continues to receive new HHS funding each fiscal year) influenced by the number and types of public health events in a given fiscal year (e.g., H1N1 flu outbreak, passage of the Affordable Care Act, opioid crisis, COVID-19)? Does the number or type of major legislation passed at the federal level (e.g., Evidence Act of 2019, overturning of Roe v. Wade) influence the length of time a university or firm stays in the federal evaluation arena? Does the number or type of economic events (e.g., 2008 Great Recession, passage of the American Rescue Plan) influence the length of time a university or firm stays in the federal evaluation arena?
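Operationalizing questions like these starts from the survival framing used throughout this study: each organization contributes a duration (consecutive fiscal years with new HHS funding) and an indicator of whether its exit from the arena was actually observed or was right-censored at the end of the study window. As a minimal sketch of the underlying Kaplan-Meier estimator, using small hypothetical durations rather than the study's actual contract records:

```python
# Minimal Kaplan-Meier estimator sketch (hypothetical data, not the
# study's actual HHS contract records). Each observation is a
# (duration_in_years, exit_observed) pair; exit_observed=False means the
# organization was still receiving funding when the window closed
# (right-censored).
def kaplan_meier(observations):
    """Return [(time, survival_probability)] at each observed exit time."""
    data = sorted(observations)
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        exits = 0
        removed = 0
        # Group all observations tied at this duration.
        while i < len(data) and data[i][0] == t:
            if data[i][1]:
                exits += 1
            removed += 1
            i += 1
        if exits:
            # Multiply in the conditional probability of surviving time t.
            survival *= 1 - exits / n_at_risk
            curve.append((t, survival))
        # Both exits and censored observations leave the risk set.
        n_at_risk -= removed
    return curve

# Example: six firms; True = left the arena, False = censored at study end.
firms = [(1, True), (2, True), (2, False), (4, True), (5, False), (6, False)]
print(kaplan_meier(firms))
```

Dedicated survival packages (e.g., survival in R or lifelines in Python) layer confidence intervals, covariate models, and the log-rank comparison on top of this same curve.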
Information gleaned from such research could provide invaluable insight into the evaluation field’s ability to adapt to an ever-changing climate. Lastly, future research involving additional evaluator interviews and surveys could further capture differing perceptions by provider type, niche, and years of experience in the field. Gathering practicing evaluators’ perceptions of the field is crucial to bridging evaluation research and practice. For example, previous research purports that changes in presidential administrations influence the amount and type of work available to evaluators (Lemire et al., 2018a; Nolton, 2020). However, in the present study, the only interviewees who discussed presidential administrations (all located in the same niche) stated they did not see such changes as influencing the amount and type of work available to them.

Limitations

There are several limitations to this research that should be considered when interpreting the study’s findings. Limitations relate to the firm and university inclusion criteria and document review process, the use of the survival analysis technique (including use of the log-rank test to compare survival curves), and participant sampling and interview data findings. While the evaluation contract data from USAspending.gov were pulled based on previous research by Lemire et al. (2018a) (i.e., by using evaluation-specific product service codes), data limitations paired with the amorphous definition of “evaluation” meant it was not possible to discern which specific evaluation activities were supported under the search labels. As such, the data could both include contracts that do not involve evaluation activities and be missing contracts that do.
It is therefore possible that the findings in this study either over- or underestimated the number of evaluation contracts, and therefore the number of firms or universities, in the HHS arena between FY 2008-2022. The lack of a clear definition of evaluation also means firms may have been wrongly excluded during the document review process if their mission statement, capability statement, or annual report did not include one of the predetermined search terms (see Table 3). While survival analysis was the appropriate method for this study, the technique has limitations. Specifically, the log-rank test can only determine statistical significance; it cannot “provide an estimate of the magnitude of difference in survival times between [two] groups” (Sedgwick, 2014). This means that while this study found a statistically significant difference in survival curves based on both organization type (i.e., firm versus university) and size (i.e., small versus not small), the effect size is unknown. Limitations related to participant sampling, and thus the findings generated from the interview data, should also be considered. This research did not intend to create a generalizable sample of participants; interviewees were identified through the researcher’s professional network and cold messaging via email and LinkedIn. As such, the researcher recognizes that differing interviewee voices and perspectives may be missing from the study’s findings.

Conclusion

As its name suggests, external evaluation most often occurs in a contractual context between an evaluation commissioner (e.g., HHS) on the demand side of the market and an evaluation provider (e.g., an evaluation and research firm) on the supply side of the market (Nielsen et al., 2018a).
Situated in a federal arena, evaluation services are inherently political (House, 1997; Della-Piana & Della-Piana, 2007; Weiss, 1993). From these premises, the researcher sought to garner understanding about the marketplace conditions and structures for contractual evaluative work, as doing so was deemed imperative to understanding the current outlook of the U.S. evaluation industry. In line with prior research, this study found that the federal evaluation market prefers larger firms that have a history in the field, is demand-driven in nature, and is influenced by outside market factors. A survival analysis of evaluation firms and universities receiving new evaluation-specific funding from HHS each fiscal year between FY 2008-2022 informed the first conclusion. Specifically, this study found that firms were more likely than universities, and not small firms were more likely than small firms, to receive new HHS funding between FY 2008-2022. Interviews with practicing evaluators informed the second and third conclusions. Specifically, evaluators shared their perspectives on how they view the evaluation contract landscape, as well as how they position themselves in the external evaluation environment for funding opportunities. Being responsive to demand-side (i.e., funder) needs and priorities is crucial to obtaining new evaluation funding. When situated in an imperfect, demand-driven market, having the nimbleness and acumen to navigate changes in one’s environment becomes even more vital. By possessing an awareness of the overall federal evaluation market structure and processes, evaluators will be better poised to build and operate their practices as the field grows from its nascent stage into a more mature industry.

References

AEA Competencies Task Force. (2018). The 2018 AEA Evaluator Competencies: Our two-part charge from the AEA Board.
https://www.eval.org/Portals/0/Docs/AEA%20Evaluator%20Competencies.pdf Aldrich, H., & Auster, E. R. (1986). Even dwarfs started small: Liabilities of age and size and their strategic implications. Research in Organizational Behavior, 8, 165-198. Aldrich, H. E., & Fiol, C. M. (1994). Fools rush in? The institutional context of industry creation. Academy of Management Review, 19(4), 645-670. https://doi.org/10.5465/amr.1994.9412190214 Alkin, M. C., & King, J. A. (2016). The historical development of evaluation use. American Journal of Evaluation, 37(4), 568-579. https://doi.org/10.1177/1098214016665164 Allison, P. (2004). Event history analysis. In M. Hardy & A. Bryman (Eds.), Handbook of Data Analysis (pp. 369-385). SAGE Publications. Amburgey, T. L., & Rao, H. (1996). Organizational ecology: Past, present, and future directions. Academy of Management Journal, 39(5), 1265-1286. https://doi.org/10.5465/256999 Baum, J. A. C., & Shipilov, A. V. (2006). Ecological approaches to organizations. Sage Handbook for Organization Studies, 55-110. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1017085 Becker, F. (2007). Organizational ecology and knowledge networks. California Management Review, 49(2), 42–61. https://doi.org/10.2307/41166382 Biderman, A. D., & Sharp, L. M. (1972). Evaluation research: Procurement and method. Social Science Information, 11(3–4), 141–170. https://doi.org/10.1177/053901847201100305 Bingham, A. J., & Witkowsky, P. (2022). Deductive and inductive approaches to qualitative data analysis. In C. Vanover, P. Mihas, & J. Saldaña (Eds.), Analyzing and interpreting qualitative data: After the interview (pp. 133-146). SAGE Publications. Blossfeld, H-P., & Rohwer, G. (2002). Techniques of event history modeling: New approaches to causal analysis (2nd ed.). Psychology Press. Borman, G. D., & D’Agostino, J. V. (1996). Title I and student achievement: A meta-analysis of federal evaluation results.
Educational Evaluation and Policy Analysis, 18(4), 309-326. https://doi.org/10.3102/01623737018004309 Box Secure Storage (n.d.). Box Secure Storage: Work with files and folders [Computer software]. Retrieved from https://it.umn.edu/services-technologies/self-help-guides/box-secure-storage-work-files-folders Bundi, P. (2016). What do we know about the demand for evaluation? Insights from the parliamentary arena. American Journal of Evaluation, 37(4), 522-541. https://doi.org/10.1177/1098214015621788 Burwell, S. M., Muñoz, C., Holdren, J., & Krueger, A. (2013). M-13-17 Memorandum to the Heads of Departments and Agencies: Next steps in the Evidence and Innovation Agenda. Executive Office of the President: Office of Management and Budget, 1-14. https://www.whitehouse.gov/omb/information-for-agencies/memoranda/#memoranda-2013 Carroll, G. R. (1984). Organizational ecology. Annual Review of Sociology, 10, 71-93. https://doi.org/10.1146/annurev.so.10.080184.000443 Carroll, G. R. (1985). Concentration and specialization: Dynamics of niche width in populations of organizations. American Journal of Sociology, 90(6), 1262-1283. http://www.jstor.org/stable/2779636 Carroll, G. R., & Delacroix, J. (1982). Organizational mortality in the newspaper industries of Argentina and Ireland: An ecological approach. Administrative Science Quarterly, 27(2), 169-198. https://doi.org/10.2307/2392299 Carroll, G. R., & Huo, Y. P. (1986). Organizational task and institutional environments in ecological perspective: Findings from the local newspaper industry. American Journal of Sociology, 91(4), 838-873. https://doi.org/10.1086/228352 Carnevale, A. P., Gainer, L. J., Meltzer, A. S., & Holland, S. L. (1988). Workplace basics: The skills employers want. Training & Development Journal, 22-30. Centers for Disease Control and Prevention. (2022, August 16). CDC Museum COVID-19 timeline.
https://www.cdc.gov/museum/timeline/covid19.html#:~:text=January%2031%2C%202020&text=The%20Secretary%20of%20the%20Department,outbreak%20a%20public%20health%20emergency CEO Forum on Education and Technology. (2001). Education Technology Must Be Included in Comprehensive Education Legislation: A Policy Paper. Washington DC: The CEO Forum on Education and Technology, 1-10. Chelimsky, E. (1995). The political environment of evaluation and what it means for the development of the field: Evaluation for a new century: A global perspective. American Journal of Evaluation, 16(3), 215-225. https://doi.org/10.1177/109821409501600301 Chelimsky, E. (2007). Factors influencing the choice of methods in federal evaluation practice. New Directions for Evaluation, 113, 13-33. https://doi.org/10.1002/ev.213 Chelimsky, E. (2012). Valuing, evaluation methods, and the politicization of the evaluation process. New Directions for Evaluation, 133, 77-83. https://doi.org/10.1002/ev.20008 Chelimsky, E. (2015). Credibility, policy use, and the evaluation synthesis. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), Credible and actionable evidence: The foundation for rigorous and influential evaluations (2nd ed., pp. 3-26). SAGE Publications, Inc. https://dx.doi.org/10.4135/9781483385839.n12 Creamer, E. G. (2018). An introduction to fully integrated mixed methods research. SAGE Publications, Inc. Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE Publications, Inc. Datta, L. (2011). Politics and evaluation: More than methodology. American Journal of Evaluation, 32(2), 273–294. https://doi.org/10.1177/1098214011400060 Davies, P. (2012). The state of evidence-based policy evaluation and its role in policy formation. National Institute Economic Review, 219(1), R41-R52.
https://doi.org/10.1177/002795011221900105 Davies, P., Morris, S., & Fox, C. (2018). The evaluation market and its industry in England. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 29-43. https://doi.org/10.1002/ev.20347 Della-Piana, C. K., & Della-Piana, G. M. (2007). Evaluation in the context of the government market place: Implications for the evaluation of research. Journal of Multidisciplinary Evaluation, 4(8), 79-91. https://journals.sfu.ca/jmde/index.php/jmde_1/article/view/33 Dewey, J. D., Montrosse, B. E., Schröter, D. C., Sullins, C. D., & Mattox, J. R. (2008). Evaluator competencies: What’s taught versus what’s sought. American Journal of Evaluation, 29(3), 268-287. https://doi.org/10.1177/1098214008321152 Dinnesen, M. S., Olszewski, A., Breit-Smith, A., & Guo, Y. (2020). Collaborating with an expert panel to establish the content validity of an intervention for preschoolers with language impairment. Communication Disorders Quarterly, 41(2), 86-99. https://doi.org/10.1177/1525740118795158 Dodge, Y. (2008). The concise encyclopedia of statistics. Springer. https://doi.org/10.1007/978-0-387-32833-1 Donaldson, S. I. (2015). Examining the backbone of contemporary evaluation practice: Credible and actionable evidence. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), Credible and actionable evidence: The foundation for rigorous and influential evaluations (2nd ed., pp. 3-26). SAGE Publications, Inc. https://doi.org/10.4135/9781483385839 Efron, B. (1988). Logistic regression, survival analysis, and the Kaplan-Meier curve. Journal of the American Statistical Association, 83(402), 414-425. https://doi.org/10.2307/2288857 Eisner, S. (2010). Grave new world? Workplace skills for today’s college graduates. American Journal of Business Education, 3(9), 27-50.
https://doi.org/10.19030/ajbe.v3i9.478 FAR. (n.d.). Requests for proposals. Retrieved from https://www.acquisition.gov/far/15.203#:~:text=(a)%20Requests%20for%20proposals%20(,contractors%20and%20to%20solicit%20proposals Freeman, J., Carroll, G. R., & Hannan, M. T. (1983). The liability of newness: Age dependence in organizational death rates. American Sociological Review, 48(5), 692-710. https://doi.org/10.2307/2094928 Freeman, J., & Hannan, M. T. (1983). Niche width and the dynamics of organizational populations. American Journal of Sociology, 88(6), 1116-1145. http://www.jstor.org/stable/2778966 Furubo, J. E., & Sandahl, R. (2002). Introduction: A diffusion perspective on global developments in evaluation. In J. E. Furubo, R. C. Rist, & R. Sandahl (Eds.), International Atlas of Evaluation (pp. 1-26). Transaction Publishers. Galport, N., & Azzam, T. (2017). Evaluator training needs and competencies: A gap analysis. American Journal of Evaluation, 38(1), 80-100. https://doi.org/10.1177/1098214016643183 Gardner, D. P. (1983). A Nation at Risk: The Imperative for Educational Reform. An Open Letter to the American People. A Report to the Nation and the Secretary of Education. Washington DC: National Commission on Excellence in Education, Department of Education. George, B., Seals, S., & Aban, I. (2014). Survival analysis and regression models. Journal of Nuclear Cardiology, 21, 686-694. https://doi.org/10.1007/s12350-014-9908-2 Germuth, A. A. (2019). Succeeding as an independent evaluation consultant: Requisite skills and attributes. In N. Martínez‐Rubin, A. A. Germuth, & M. L. Feldmann (Eds.), Independent Evaluation Consulting: Approaches and Practices from a Growing Field. New Directions for Evaluation, 164, 43–54. https://doi.org/10.1002/ev.20386 Gerson, K., & Damaske, S. (2020). The science and art of interviewing. New York: Oxford University Press. Grant, J. S., & Davis, L. L. (1997).
Selection and use of content experts for instrument development. Research in Nursing & Health, 20, 269-274. https://doi.org/10.1002/(SICI)1098-240X(199706)20:3<269::AID-NUR9>3.0.CO;2-G Hannan, M. T., Carroll, G. R., & Pólos, L. (2003). The organizational niche. Sociological Theory, 21(4), 309-340. https://www.jstor.org/stable/1602329 Hannan, M. T., & Freeman, J. (1977). The population ecology of organizations. American Journal of Sociology, 82(5), 929–964. https://doi.org/10.1086/226424 Hannan, M. T., & Freeman, J. (1989). Organizational Ecology. Harvard University Press. Henry, G. T. (2001). How modern democracies are shaping evaluation and the emerging challenges for evaluation. American Journal of Evaluation, 22(3), 419-429. https://doi.org/10.1016/S1098-2140(01)00138-2 Henry, G. T. (2015). When getting it right matters: The struggle for rigorous evidence of impact and to increase its use continues. In S. I. Donaldson, C. A. Christie, & M. M. Mark (Eds.), Credible and actionable evidence: The foundation for rigorous and influential evaluations (2nd ed., pp. 65-82). SAGE Publications. https://doi.org/10.4135/9781483385839 House, E. R. (1993). Professional Evaluation: Social Impact and Political Consequences. SAGE Publications. House, E. R. (1997). Evaluation in the government marketplace. Evaluation Practice, 18(1), 37-48. https://doi.org/10.1177/109821409701800104 Hwalek, M. A., & Straub, V. L. (2018). The small sellers of program evaluation services in the United States. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 125-143. https://doi.org/10.1002/ev.20340 Jarosewich, T., Feldmann, M. L., Martínez‐Rubin, N., & Clark, N. (2019). Who we are: Findings from the American Evaluation Association's Independent Consulting Topical Interest Group 2015 Decennial Survey. In N. Martínez‐Rubin, A. A. Germuth, & M. L.
Feldmann (Eds.), Independent Evaluation Consulting: Approaches and Practices from a Growing Field. New Directions for Evaluation, 164, 27–41. https://doi.org/10.1002/ev.20387 Johnson, J. M. (2011). In-depth interviewing. In J. F. Gubrium & J. A. Holstein (Eds.), Handbook of Interview Research (pp. 103-119). SAGE Publications. Jovanovic, B. (1982). Selection and the evolution of industry. Econometrica, 50(3), 649-670. https://doi.org/10.2307/1912606 Kaplan, E. L., & Meier, P. (1958). Nonparametric estimation from incomplete observations. Journal of the American Statistical Association, 53(282), 457-481. https://doi.org/10.2307/2281868 Kettl, D. F. (1994). Sharing Power: Public Governance and Private Markets. The Brookings Institution. King, J. A., & Stevahn, L. (2015). Competencies for program evaluators in light of adaptive action: What? So what? Now what? In J. W. Altschuld & M. Engle (Eds.), Accreditation, certification, and credentialing: Relevant concerns for U.S. evaluators. New Directions for Evaluation, 145, 21-37. https://doi.org/10.1002/ev.20109 Lahey, R., Elliott, C., & Heath, S. (2018). The evolving market for systematic evaluation in Canada. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 45-62. https://doi.org/10.1002/ev.20346 Lamont, M., & Swidler, A. (2014). Methodological pluralism and the possibilities and limits of interviewing. Qualitative Sociology, 37(2), 153-171. https://doi.org/10.1007/s11133-014-9274-z Hogan, L. R. (2007). The historical development of program evaluation: Exploring past and present. Online Journal of Workforce Education and Development, 2(4), 1–10. https://opensiuc.lib.siu.edu/ojwed/vol2/iss4/5/ Landes, J., Engelhardt, S. C., & Pelletier, F. (2020). An introduction to event history analyses for ecologists. Ecosphere, 11(10), 1-14.
https://doi.org/10.1002/ecs2.3238 LaVelle, J. M., & Donaldson, S. I. (2010). University-based evaluation training programs in the United States 1980-2008: An empirical examination. American Journal of Evaluation, 31(1), 9–23. https://doi.org/10.1177/1098214009356022 Leeuw, F. L. (2002). Evaluation in Europe 2000: Challenges to a growth industry. Evaluation, 8(1), 5-12. https://doi.org/10.1177/1358902002008001743 Lemire, S., Fierro, L. A., Kinarsky, A. R., Fujita-Conrads, E., & Christie, C. A. (2018a). The U.S. federal evaluation market. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 63-80. https://doi.org/10.1002/ev.20343 Lemire, S., Nielsen, S. B., & Christie, C. A. (2018b). Toward understanding the evaluation market and its industry—Advancing a research agenda. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 145-163. https://doi.org/10.1002/ev.20339 Manifesto of Industrial Workers of the World. (2017). Manifesto of Industrial Workers of the World, 3. Mars, M. M., & Bronstein, J. L. (2020). The population ecology of undesigned systems: An analysis of the Arizona charter school system. Journal of Organization Design, 9(17), 1-18. https://doi.org/10.1186/s41469-020-00083-y Mata, J., & Portugal, P. (1994). Life duration of new firms. The Journal of Industrial Economics, 42(3), 227-245. https://doi.org/10.2307/2950567 Mayer, K. B., & Goldstein, S. (1961). The first two years: Problems of small firm growth and survival. Small Business Administration, Washington, DC: GPO. Maynard, R. (2000). Whether a sociologist, economist, psychologist or simply a skilled evaluator: Lessons from evaluation practice in the United States. Evaluation, 6(4), 471-480.
https://doi.org/10.1177/13563890022209433 Maynard, R., Goldstein, N., & Nightingale, D. S. (2016). Program and policy evaluations in practice: Highlights from the federal perspective. In L. R. Peck (Ed.), Social experiments in practice: The what, why, when, where, and how of experimental design & analysis. New Directions for Evaluation, 152, 109-135. https://doi.org/10.1002/ev.20209 Maynard, R. (2018). The role of federal agencies in creating and administering evidence-based policies. The ANNALS of the American Academy of Political and Social Science, 678(1), 134-144. https://doi.org/10.1177/0002716218768742 Merrill, A. (2017). Curvilinear relationship. In M. Allen (Ed.), The SAGE Encyclopedia of Communication Research Methods (pp. 323-325). SAGE Publications, Inc. Michael, S. C., & Kim, S. M. (2005). The organizational ecology of retailing: A historical perspective. Journal of Retailing, 81(2), 113–123. https://doi.org/10.1016/j.jretai.2005.03.005 Miles, M. B., Huberman, A. M., & Saldaña, J. (2020). Qualitative data analysis: A methods sourcebook (4th ed.). SAGE. Miller, R. G., Jr. (1983). What price Kaplan-Meier? Biometrics, 39(4), 1077-1081. https://www.jstor.org/stable/2531341 Mills, J. I. (2008). A legislative overview of No Child Left Behind. New Directions for Evaluation, 117, 9-20. https://doi.org/10.1002/ev.248 Monge, P., Lee, S., Fulk, J., Weber, M., Shen, C., Schultz, C., Margolin, D., Gould, J., & Frank, L. B. (2011). Research methods for studying evolutionary and ecological processes in organizational communication. Management Communication Quarterly, 25(2), 211-251. https://doi.org/10.1177/0893318911399447 Nielsen, S. B., Lemire, S., & Christie, C. A. (2018a). The evaluation marketplace and its industry. In S. B. Nielsen, S. Lemire, & C. A. Christie (Eds.), The Evaluation Marketplace: Exploring the Evaluation Industry. New Directions for Evaluation, 160, 13-28. https://doi.org/10.1002/ev.20344 Nielsen, S.
B., Lemire, S., & Christie, C. A. (2018b). The commercial side of evaluation: Evaluation as an industry and as a professional service. In J. E. Furubo & N. Stame (Eds.), The evaluation enterprise: A critical view (1st ed., pp. 243-265). Routledge. Nolton, E. C. (2020). Mapping the Institutionalization of Evaluation in the U.S. Federal Government [Doctoral dissertation, George Mason University]. ProQuest Dissertations and Theses Global. Patton, M. Q. (1990). Qualitative evaluation and research methods (2nd ed.). Newbury Park: Sage. Patton, M. Q. (2015). Qualitative research & evaluation methods (4th ed.). SAGE Publications. Peck, L. R. (2018). The big evaluation enterprises in the United States. New Directions for Evaluation, 160, 97-124. https://doi.org/10.1002/ev.20341 Picciotto, R. (2011). The logic of evaluation professionalism. Evaluation, 17(2), 165-180. https://doi.org/10.1177/1356389011403362 Plano Clark, V. L., & Creswell, J. W. (2014). Understanding research: A consumer’s guide (2nd ed.). Pearson. Potter, J. D., & Crawford, S. E. S. (2008). Organizational ecology and the movement of nonprofit organizations. State & Local Government Review, 40(2), 92-100. https://www.jstor.org/stable/25469781 R Core Team. (2022). R: A language and environment for statistical computing (Version 4.2.2) [Computer software]. R Foundation for Statistical Computing, Vienna, Austria. https://www.R-project.org/ Rich, J. T., Neely, J. G., Paniello, R. C., Voelker, C. C. J., Nussenbaum, B., & Wang, E. W. (2010). A practical guide to understanding Kaplan-Meier curves. Otolaryngology-Head and Neck Surgery, 143(3), 331–336. https://doi.org/10.1016/j.otohns.2010.05.007 Rist, R. C., & Paliokas, K. L. (2002). The rise and fall (and rise again?) of the evaluation function in the U.S. government. In J. E. Furubo, R. C. Rist, & R. Sandahl (Eds.), International Atlas of Evaluation (pp. 225-245). Transaction Publishers. Saldaña, J., & Omasta, M. (2018).
Qualitative research: Analyzing life. Sage Publications. Sedgwick, P. (2014). How to read a Kaplan-Meier survival plot. British Medical Journal, 349, 1-3. https://doi.org/10.1136/bmj.g5608 Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Foundations of program evaluation: Theories of practice. Sage Publications. Siddiqui, S. H., Rasheed, R., Nawaz, M. S., & Sharif, M. S. (2018). Explaining survival and growth of women entrepreneurship: Organizational ecology perspective. Review of Economics and Development Studies, 4(2), 293–302. Singh, J. V., & Lumsden, C. J. (1990). Theory and research in organizational ecology. Annual Review of Sociology, 16, 161-195. https://doi.org/10.1146/annurev.so.16.080190.001113 Singh, J. V., Tucker, D. J., & House, R. J. (1986). Organizational legitimacy and the liability of newness. Administrative Science Quarterly, 31(2), 171-193. https://www.jstor.org/stable/2392787 Small Business Size Regulations, 13 C.F.R. § 121 (1996). https://www.ecfr.gov/current/title-13/chapter-I/part-121 Smith, A. (1776). Wealth of Nations. Generic NL Freebook Publisher. Stack, K. (2018). The Office of Management and Budget: The quarterback of evidence-based policy in the federal government. The ANNALS of the American Academy of Political and Social Science, 678(1), 112-123. https://doi.org/10.1177/0002716218768440 Stalpers, L. J. A., & Kaplan, E. L. (2018). Edward L. Kaplan and the Kaplan-Meier survival curve. British Journal for the History of Mathematics, 33(2), 109-135. https://doi.org/10.1080/17498430.2018.1450055 Stevahn, L., King, J. A., Ghere, G., & Minnema, J. (2005). Establishing essential competencies for program evaluators. American Journal of Evaluation, 26(1), 43-59. https://doi.org/10.1177/1098214004273180 Stinchcombe, A. L. (1965). Social structure and organizations. In J. G. March (Ed.), Handbook of Organizations (pp. 153-193).
Rand-McNally. Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Grounded theory procedures and techniques (2nd ed.). Thousand Oaks, CA: Sage. Sturges, K. M. (2014). External evaluation as contract work: The production of evaluator identity. American Journal of Evaluation, 35(3), 346-363. https://doi.org/10.1177/1098214013513829 Tuma, B. N., Hannan, M. T., & Groeneveld, L. P. (1979). Dynamic analysis of event histories. American Journal of Sociology, 84(4), 820-854. http://www.jstor.org/stable/2778026 United States Department of Agriculture. (n.d.). About the U.S. Department of Agriculture. United States Department of Agriculture. https://www.usda.gov/our-agency/about-usda#:~:text=On%20May%2015%2C%201862%2C%20President,economic%20development%2C%20science%2C%20natural%20resource United States Department of Justice Office of Public Affairs. (2022, February 24). Three former Minneapolis police officers convicted of federal civil rights violations for death of George Floyd [Press release]. https://www.justice.gov/opa/pr/three-former-minneapolis-police-officers-convicted-federal-civil-rights-violations-death USAspending. (n.d.). USAspending.gov glossary. Retrieved from https://www.usaspending.gov/ U.S. Const. art. II, § 1-2. U.S. General Services Administration. (n.d.). GSA eLibrary. Retrieved from https://www.gsaelibrary.gsa.gov/ElibMain/home.do U.S. Government Manual. (n.d.). Organizational chart of the U.S. government. Retrieved from https://www.usgovernmentmanual.gov/ van Witteloostuijn, A., Boin, A., Kofman, C., Kuilman, J., & Kuipers, S. (2018). Explaining the survival of public organizations: Applying density dependence theory to a population of US federal agencies. Public Administration, 96(4), 633-650. https://doi.org/10.1111/padm.12524 VanLandingham, G. R. (2006). A voice crying in the wilderness: Legislative oversight agencies' efforts to achieve utilization. New Directions for Evaluation, 112, 25-39.
https://doi.org/10.1002/ev.205 Vedung, E. (2010). Four waves of evaluation diffusion. Evaluation, 16(3), 263-277. https://doi.org/10.1177/1356389010372452 Vought, R. T. (2019, July 10). M-19-23 Memorandum to the Heads of Departments and Agencies: Phase I Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Learning agendas, personnel, and planning guidance. Executive Office of the President: Office of Management and Budget, 1-37. https://www.whitehouse.gov/omb/information-for-agencies/memoranda/#memoranda-2019 Vought, R. T. (2020, March 10). M-20-12 Memorandum to the Heads of Departments and Agencies: Phase 4 Implementation of the Foundations for Evidence-Based Policymaking Act of 2018: Program evaluation standards and practices. Executive Office of the President: Office of Management and Budget, 1-37. https://www.whitehouse.gov/omb/information-for-agencies/memoranda/#memoranda-2019 Wargo, M. J. (1995). The impact of federal government reinvention on federal evaluation activity. Evaluation Practice, 16(3), 227-237. https://doi.org/10.1177/109821409501600302 Weiss, C. (1993). Where politics and evaluation research meet. Evaluation Practice, 14(1), 93-106. https://doi.org/10.1016/0886-1633(93)90046-R Whitehurst, G. J. (Russ). (2018). The Institute of Education Sciences: A model for federal research offices. The ANNALS of the American Academy of Political and Social Science, 678(1), 124–133. https://doi.org/10.1177/0002716218768243 Youn, T. I. K., & Gamson, Z. F. (1994). Organizational responses to the labor market: A study of faculty searches in comprehensive colleges and universities. Higher Education, 28(2), 189–205. https://doi.org/10.1007/BF01383728 Zients, J. D. (2012, May 18). M-12-14 Memorandum to the Heads of Departments and Agencies: Use of evidence and evaluation in the 2014 Budget. Executive Office of the President: Office of Management and Budget, 1-5.
https://www.whitehouse.gov/omb/information- for-agencies/memoranda/#memoranda-2012 ECOLOGY OF EVALUATION CONTRACT WORK 136 Appendix A: IRB Determination Form NOT HUMAN RESEARCH June 15, 2021 David Johnson [PHONE] [EMAIL] Dear David Johnson: On 6/15/2021, the IRB reviewed the following submission: Type of Review: Initial Study Title of Study: Exploring the Future of Evaluation Contract Work in the United States: Implications for Industry Investigator: David Johnson IRB ID: STUDY00013083 Sponsored Funding: None Grant ID: None Internal UMN Funding: None Fund Management Outside University: None IND, IDE, or HDE: None Documents Reviewed with this Submission: • HRP-503-Human-Research-Determination- Form_6.10.21.docx, Category: IRB Protocol; The IRB determined that the proposed activity is not research involving human subjects as defined by DHHS and FDA regulations. To arrive at this determination, the IRB used “WORKSHEET: Human Research (HRP-310).” If you have any questions about this ECOLOGY OF EVALUATION CONTRACT WORK 137 determination, please review that Worksheet in the HRPP Toolkit Library and contact the IRB office if needed. Ongoing IRB review and approval for this activity is not required; however, this determination applies only to the activities described in the IRB submission and does not apply should any changes be made. If changes are made and there are questions about whether IRB review is required, please submit a Modification to the IRB for a determination. 
Sincerely, Bri Warner IRB Analyst ECOLOGY OF EVALUATION CONTRACT WORK 138 Appendix B: List of Universities and Firms Included in the Study Table B1 Total Universities in HHS Arena, FY 2008-2022 Brandeis University Ohio State University University of Illinois Case Western Reserve University Portland State University University of Missouri System Duke University Rector & Visitors of the University of Virginia University of North Carolina at Chapel Hill Emory University Regents of the University of Colorado University of Pittsburgh George Washington University Regents of the University of Michigan University of Puerto Rico Medical Sciences Center Icahn School of Medicine at Mount Sinai Regents of the University of Minnesota University of Texas Health Science Center of San Antonio Johns Hopkins University University of Alabama at Birmingham University of Utah Louisiana State University University of California, Davis Vanderbilt University Medical College of Wisconsin University of California, Los Angeles Vanderbilt University Medical Center Minnesota State Colleges and Universities University of California, San Francisco ECOLOGY OF EVALUATION CONTRACT WORK 139 Table B2 Total Firms in HHS Arena, FY 2008-2022 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status 2M Research Services LLC (est. 2011) Clinical research; Education research; Public health research & practice; Social, behavioral, & methodological sciences research Research & evaluation; Data management, collection, & processing; Technical assistance; Economic analysis & population forecasting; Statistical consulting; Clinical trials management; Logistics & communications Contract research organization Small A-Team Solutions LLC (est. 
2004) Professional & allied healthcare staffing services Healthcare professional staffing services; Program & project management; Consulting services; Facilitation services; Survey services; Information technology staffing Professional services company Small Abt Associates, Inc (est. 1965) Equity; Social determinants of health; Education, youth & families; Food security & agriculture; Governance & justice; Health; Housing, communities & asset building; Workforce & economic mobility Communications & behavior change; Data capture & surveys; Digital transformations; Research, monitoring & evaluation; Technical assistance & implementation Global research & consulting firm Other than small Acumen LLC (est. 1996) Insurance programs; Provider payment; Program integrity Auditing & validation; Measure development; Program design & implementation; Program evaluation; Real- time monitoring; Clinical expertise; Data & statistical support; Data visualizations; Information Systems & tools Policy research & consulting firm Other than small ECOLOGY OF EVALUATION CONTRACT WORK 140 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Advocates for Human Potential, Inc (est. 1986) Addictions & substance abuse; Behavioral health policy, financing reform, & systems integration; Criminal justice; Health care reform; Housing & homelessness; Mental health; Population health management; Recovery supports; Veterans; Workforce development Professional consulting; Research & evaluation; Technical assistance & training; Virtual solutions, publications & events; Wellness recovery action plan (WRAP) Consulting & research firm Small AFYA, Inc (est. 1991) Healthcare Planning, evaluation, analysis, & performance measurement; Training; Technical assistance Technical & professional services firm Small Altarum Institute (est. 
1946) Maternal & reproductive health; Behavioral health; Food & nutrition; Medicare & Medicaid; Eldercare; Military & veterans Public health interoperability; Disease surveillance; Medicare-Medicaid services for states; Advisory services & continuing education; Applied research & analytics; Program implementation Nonprofit organization Other than small Amdex Corporation (est. 1987) IT services & IT consulting Data collection; Program evaluation; Management & modernization of data; Application development; Data analytics; Data security & governance; Engineering tech Federal consulting corporation Small American Institutes for Research (est. 1946) Education; Health; Human services; International; Workforce Research & evaluation; Technical assistance; Data science & technology Nonprofit research & technical assistance organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 141 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Applied Public Policy Research Institute for Study and Evaluation (APPRISE) (est. 2002) Low-income usage reduction programs; low- income bill payment assistance programs; Renewable energy programs; Market transformation programs Process evaluation research; Impact evaluation research; Data tracking research; Survey research; Needs assessment; Economic & policy analysis; Non-energy impact analysis; Performance measurement; Technical assistance Nonprofit research institute Other than small Aquilent Inc (est. 1979) Federal government services Digital & cloud services; DevOps; Computer systems design services; IT strategy & architecture; Professional, scientific, & technical services IT services & consulting organization Other than small Acquired by Booz Allen in Jan 2017 Art of Resolution, LLC (est. 
2014) Equal employment opportunity; Diversity, inclusion, & equity Mediation; Group facilitation; Conflict coaching; D&I training; Diversity based workforce analytics; Diversity & inclusion strategic plans; Diversity policy; Equal employment opportunity (EEO) consulting; EEO investigations; Procedural reviews & final agency decisions; EEO assessments; HR policy evaluation; Climate & conflict assessments; Sexual harassment training and more Consulting services organization Small ASRT Inc. (est. 2017) Bioinformatics; Global health; Information management; Professional services; Management consulting; Science & health Epidemiology; Laboratory services; Statistics & biostatistics; Research & development; Environmental health & science; Program & project management; Quality & regulatory management; Emergency preparedness & response; Electronic data interchange; Health communications; Health strategic planning; Research services & consulting organization Small ECOLOGY OF EVALUATION CONTRACT WORK 142 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status IT; Administrative services; Business consulting; Program evaluation Atlas Research LLC (est. 2008) Health disparities & health equity; Families & communities; Rural health; Health & health care; Veteran & military health; Homelessness; Learning & performance; Mental health & PTSD; Caregivers; HIV/AIDS Research & evaluation; Human capital solutions; Digital & technology; Innovation & modernization; Strategic communications; Organizational excellence & transformation; Facility activation & advisory solutions Government services firm Small Audacious Inquiry LLC (est. 
2010) Health IT policy; Road- mapping & advisory; Medicaid technology & operations; Outreach & onboarding; Creative communications Market research & evaluation, legislative & regulatory analysis, & guidance for industry compliance; Healthcare IT evaluation & guidance; Planning & funding strategy, contracting strategy, re-use & modularity plans; Methods for rapid adoption of health information exchange; Communications IT services & consulting organization Small Avanti Corporation (est. 1990) Compliance; Data management; Environmental health; NEPA NPDES permitting; Compliance assistance; Inspections & audits; Regulation development; Program implementation; Permit compliance assistance; Compliance targeting; Records management; Grants coding; Grants analysis; Meeting support; Cumulative impacts analyses; environmental justice assessments; comment compilation & tracking; Administrative record management Environmental services organization Small Avaris Concepts LLC (est. 2004) Management consulting; Chemical, biological, Organization diagnosis & assessment; Coaching & mentoring; Training & Management consulting & Small ECOLOGY OF EVALUATION CONTRACT WORK 143 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status radiological, nuclear & explosive; Supply chain & logistics; Agile transformation & information technology consulting; Health services management education; Behavioral assessment & evaluation; and more technical support services company BETAH Associates (est. 1988) Social equity; Sustainable community; Healthy populations Communications & creative services; Training & technical assistance; Meeting & event management; Evaluation & post-event services; Peer review services; Professional support services Professional services firm Small Better World Advertising (est. 
1996) HIV & STDs; Tobacco; LGBT; Foster care & adoption; Obesity; BIPOC health; Youth & child welfare; Mental health; Drugs & alcohol; Stigma & discrimination; Diabetes; Global health; Healthcare; Vaccines; Community; Environment Strategy; Creative development; Production; Media planning; Implementation; Evaluation; Social media; Web development; Branding; Needs assessment; Formative research; Message development & testing Advertising services organization Small Bizzell Group LLC, The (est. 2010) Health solutions; Global programs; Workforce innovation; Management services; Behavioral health Research & evaluation; Program & project management; Learning & engagement; Data analytics; Innovation & technology; Technical assistance & training; Health information solutions; Conference & talent solutions Technology & consulting firm Small ECOLOGY OF EVALUATION CONTRACT WORK 144 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Booz Allen Hamilton Inc (est. 1914) Analytics & AI; Computing; Management consulting; Cybersecurity; Digital solutions; Engineering Modeling, simulation, & analysis; Environmental, safety, & human factors engineering; Technical, engineering, research management & evaluation support and more Management consulting firm Other than small Capital Consulting Corp. (est. 1986) Federal government services Program management; Conference planning & management; Strategic health communications; Technical & scientific publications; IT support; Research & evaluation services; Administrative services Federal services & consulting corporation Small Center for Policy Research (CPR) (est. 
1981) Child support; Father engagement & healthy relations; Economic security & healthcare; Child welfare; Gender- based violence; Early childhood & education Design pilot projects & obtain demonstration grant funding; Strategic planning & technical assistance; Program evaluation; Best practices & policy analysis; Performance measurement & continuous quality improvement and more Nonprofit research, evaluation, & technical assistance agency Other than small Child Trends (est. 1979) Child welfare; COVID-19; Early childhood; Education; Families & parenting; Health; Juvenile justice; LGBT+; Poverty & inequality; Racial equity; School health; Social & emotional development; Teen pregnancy/ reproductive health; Trauma; Youth development Evaluation services; Policy analysis & engagement; Communications consultation; Research synthesis; Data; Capacity building & technical assistance Nonprofit, nonpartisan research & evaluation organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 145 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Cloudburst Group, The (est. 2005) Public health; Environment & climate resiliency; Global development; Housing & community development; State & local consulting; Homeless assistance programs Technical assistance & training; Research & evaluation; Data & analytics; Communications Research, business consulting & services organization Small Community Science Inc (est. 1997) Equitable community development; Health & behavioral health equity; Youth leadership & engagement; Powerful citizenry; Organizational effectiveness Research & evaluation; Capacity building & learning systems; Strategy development & improvement; Cross cultural collaboration; Community engagement & power building; Community change & technologies Research services organization Small DB Consulting Group (est. 
2000) Aeronautics; Defense; Education; Health; Homeland security; Housing; Justice; Social programs IT & security; Health & clinical services, including program planning & development, research & policy studies, health communications & social marketing, & health informatics; Program management & support; Peer review & grants management; Research & evaluation; Training & technical assistance; Systems development & operations; Communications services; Conference & meeting logistics Global professional consulting firm Small Deloitte Touche Tohmatsu Limited (est. 1845) Consumer & industrial products; Energy; Life sciences & health care; Manufacturing; Real estate & construction; Acquisitions & mergers; Advisory services; Audit & assurance; Consulting; Financial & risk advisory services; Growth enterprise services Multinational management & consulting firm Other than small ECOLOGY OF EVALUATION CONTRACT WORK 146 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Technology, media, & telecommunications Designs for Learning (DL) (est. 1987) Financial management; Human resources; Special education; Technology; Program support Mandated reporting; Evaluation services; English learner program coordination; MARSS coordination; MCCC reporting; Specialized consulting; School business office functions; Audit prep; Capacity building; Annual STAR reporting; School nutrition and more Charter school & small organization support services organization Small Eastern Research Group, Inc (est. 
1984) Air quality; Clean transportation; Climate & resilience solutions; Digital & information solutions; Drinking water; Ecological services; Economic & policy analysis; Energy; Environmental & climate justice solutions; Environmental & occupational health; Facilities planning & engineering; Grant program support; Laboratory services; Life cycle services; Organizational effectiveness; Permitting, compliance, & enforcement; Public health; Strategic communications; Wastes & toxics; Water Strategic planning; Operations & governance; Monitoring, evaluation, & learning; Stakeholder & partner support; Digital transformation; Risk assessment; Program evaluation; Health economics and more Multidisciplinary consulting firm Other than small ECOLOGY OF EVALUATION CONTRACT WORK 147 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status quality & resource management Econometrica (est. 1998) Fair market rents; Clinical practice; Communications; Grants management; Homeland security; Housing policy & research; Learning solutions; Maritime & water resources; Regulatory training; Section 508; Statistical sampling Budget & financial analysis; Data & statistical analysis; Learning solutions; Project management. & IT services; Qualitative services including - Environmental scans, Focus groups, Literature review, Office of management & budget clearance support, Operations research, Outreach to program participants, Performance measure development, Policy development, Processing casework, Program evaluation, Recruitment for qualitative research studies, Survey design & data collection, Survey research, Technical expert panels, Text mining & analysis Research & management organization Small Education Development Center, Inc (est. 
1958) Early childhood development & learning; Elementary & secondary education; Behavioral, physical, & mental health; HIV & sexual & reproductive health; Opioid & other substance misuse prevention; Suicide, violence, & injury prevention; Capacity building. For individuals, organizations, & systems; Out-of-school learning; Design & development; Evaluation; Implementation; Policy; Research; EDC solutions Global nonprofit organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 148 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status STEM; Youth & workforce development Enlogica Solutions, LLC (est. 2009) IT operations; Cybersecurity; Software development Information assurance & technology; IT service management; Cybersecurity; IT systems engineering; Risk identification, analysis, evaluation, & communication IT services & consulting organization Small ESAC, Inc (est. 2000) Bioinformatics; Software engineering; Computer Science; Physical sciences; Genomics; Proteomics; Healthcare; Life sciences Research data management; Bioinformatics; Health IT Custom software development & consulting Small Acquired by ICF in 2021 Family Health International (FHI 360) (est. 1971) Civil society; Communication & social marketing; Crisis response; Economic development; Education; Environment & climate change; Gender, equity, safeguarding and social inclusion; Health; Nutrition; Research; Technology; Youth Capacity building; Creative services; Data analysis; Emerging infectious diseases & pandemic response; Monitoring & evaluation; Quality assurance; Research services; Social & behavior change; Social marketing & communication; Training & technical assistance Nonprofit human development organization Other than small Far Harbor (est. 2000) Education; Public health; Public policy Research design; Statistical analysis; Program evaluation Research services organization Small Georgia Tech Applied Research Corporation (GTARC) (est. 
1997) State of Georgia economic development; National security; Improving the human condition; Research & evaluation; Engineering; Systems engineering; Digital media; IT; Information communications technology and more Nonprofit research corporation Other than small ECOLOGY OF EVALUATION CONTRACT WORK 149 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Education; Technology and more Global Evaluation & Applied Research (GEARS) Inc (est. 2002) Evaluation & applied research; Gender- responsive programs & initiatives; Cultural responsiveness Evaluations; Technical assistance & training; Support services; Meeting & event management; Strategic planning; IT services Consulting services organization Small Health Research and Analysis, LLC (est. 2004) Healthcare - quality & cost; Chronic diseases; Injury epidemiology; Behavioral health; Substance use & abuse; Tobacco use; Patient safety; Infection control; Deployment health; Environmental & occupational exposures & health outcomes Study design; Statistical analysis (SAS, SPSS, STATA, R, SUDAAN); Data management (SQL, ACCESS); Claims & electronic health record data; Surveys/data collection; Health registries; Surveillance; Needs assessment; Predictive analytics; Performance measurement; Program evaluation; Policy analysis; Cost-benefit analysis; Systematic literature reviews; Technical reports & manuscripts; Technical assistance & training Research services organization Small Hill Group, Inc (est. 1998) Public health; Health & human services programs Technical assistance & training; Grants management; Creative services - including annual reports, evaluation, clinical & biomedical research, and more; Conference & event planning Management consulting firm Small Human Services Research Institute (HSRI) (est. 
1976) Housing & homelessness; Population health; Children, youth, & family; Aging & disabilities; Behavioral health; Aging & disabilities; Behavioral Evaluation; Quality improvement; Systems design; Data collection & analysis; Technical assistance & training Nonprofit research organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 150 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status health; Intellectual & developmental disabilities Humanitas Inc (est 1992) Health & disability; Workforce development; Education Policy & program assessment, research, & data analysis; Evaluation planning & conduct; Program planning & support; Review management & logistics; Information management & technology; Education & training Management & technology consulting firm Small ICF Incorporated LLC (est. 1969) Aviation; Consumer products; Disaster management; Education; Energy; Environment; Financial services; Healthcare; Hospitality; International development; Federal health; Public sector; Retail; Social programs; Transportation Analytics & data science; Cybersecurity; Energy; Federal IT modernization; Federal health; Policy & regulatory development; Program implementation; Research & evaluation; Technical assistance and more Global management & technology consulting firm Other than small ICF Macro (previously Macro International Inc) (est. 1966) Federal health-related programs & research; HIV/AIDS; Chronic disease prevention; International health; Behavioral health; Health communications Research & evaluation; Management consulting; Market communications; Information services Implementation & evaluation services firm Other than small Acquired by ICF in 2009 IMPAQ International LLC (est. 
2001) Health & workforce development; Education; International & human services; Advanced Research; Evaluations; Implementation Global policy research, analytics, & Other than small Acquired by AIR in 2020 ECOLOGY OF EVALUATION CONTRACT WORK 151 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status analytics; AI & machine learning implementation firm Industrial Economics Inc (Iec) (est. 1981) Natural resource management; Public policy analysis; Applied economics; Finance & forensic analysis; Program design & evaluation; Sustainability; IT & communication Natural resource damage assessment (NRDA); Site restoration & remediation; Applied sciences; Endangered species; Water resources management; Natural resource economics; Cost-benefit analysis; Health measurement & valuation; Utility rate regulation; Strategic planning & program design; Measurement & evaluation; Environmental reporting & communications; Data management; Decision support tools; Spatial analysis/GIS; Survey research; Graphic design and more Environmental consulting firm Other than small Insight Policy Research Inc (est. 2001) Education; Family support; Food & nutrition; Health; Workforce development; Military & veteran support; Advancing equity Data collection; Program evaluation; Data science & statistics; Learning & improvement; Data visualization & analytics Policy research firm Small Acquired by Westat in 2022 Intellizant (est. 2007) Management consulting; Technology consulting Community program assessment; Risk management; Operations facility selection, assessment, & evaluation; Continuity planning & organizational resilience; Regulatory compliance; Training, testing, & exercises; Information protection; Disaster recovery and more Management & technology consulting firm Small Ipsos Public Affairs LLC (est. 
1975) International development; Health; Policy & evaluation Survey management, data collection, & delivery; Policy evaluation, impact & Research services company Other than small ECOLOGY OF EVALUATION CONTRACT WORK 152 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status program assessment; Performance management & improvement James Bell Associates (JBA) (est. 1979) Child & family development; Child welfare; Tribal evaluation; Health care Program evaluation; Capacity building; Cost analysis; Performance improvement; Communications Research services organization Small JBS International (est. 1985) Aging; Child welfare; Disabilities; Education; Health safety; Health systems reform; Immigration; International development; Public health; Opioid epidemic; Tobacco control; Trauma & violence; Tribal child welfare; Workforce development Business process; Communications; Creative services; Data analytics; Interactive learning; Performance management; Program evaluation; Surveys; Technology; Training & technical assistance; Web & mobile development Business consulting & services company Other than small Operating under Celerian Group owned by Blue Cross Blue Shield of South Carolina since 2017 John Snow Inc (JSI) (est. 
1978) Applied research & evaluation; Behavioral health; Capacity building; Digital health; Health supply chain management; Health systems strengthening; Healthy communities; HIV & infectious diseases; Immunization; Social & behavior change; Women, children, & youth Needs assessment; Survey research; Implementation research; Program evaluation; Data analysis; Monitoring & evaluation; Quality improvement; Program implementation; Training & technical assistance; Curriculum development; Strategic planning; Service delivery improvements; Strategic planning; Grant writing; Immunization; Social media & community outreach; Data collection; IT solutions; Integrated behavioral health workforce development and more Global & national nonprofit public health organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 153 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Karna LLC (est. 2008) Health communication; Global health; Emergency response; HIV & infectious disease; Substance abuse/tobacco control; Aging & gerontology; Environmental health; Healthcare quality; Social determinants of health Program & policy evaluation; Health analytics; Health communications; Tailored third party administration; Technical assistance; Population health applications; Public health research & support Public health consulting organization Small Acquired by Celerian Group in 2018 Kauffman & Associates, Inc (KAI) (est. 1990) Public health; Education; Economic development Research & evaluation; Training & technical assistance; Communications; Meeting & event planning; Organizational transformation Management consulting firm Small Kingstonville LLC (est. 2012) Risk; Compliance; Program management; Staff augmentation Program management; Risk management; Administrative management Management consulting firm Small Kingsley-Kleimann Group (est. 
1998) Insurance; Mortgage; Financial; Health; Legal; Government; Plain language Survey research; Descriptive research; Experimental research; Correlational research; Causal-comparative research; Ethnographic research; Action research; Grounded theory research; Usability & cognitive research; Case studies; Evaluation Business consulting & services company Small L&M Policy Research LLC (est. 2004) Alternative payment models (healthcare); Care & disease management; Evidence-based medicine; Financial incentives for providers/patients; Health communications; Health Literature reviews & environmental scans; Market analysis; Payment policy development & evaluation; Policy & regulatory research & analysis; Program evaluation; Provider, payer, consumer, & other stakeholder interviews & focus groups; Quality & performance measurement; Research services organization Small ECOLOGY OF EVALUATION CONTRACT WORK 154 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status care delivery; Health care organizations; HealthCare.gov (health insurance marketplace); Home & community-based services; Medicaid & CHIP; Medical homes; Medicare; Population health; Provider payment; Quality & performance management; Racial & socioeconomic disparities; Aging & disabled population; Dually eligible for Medicare & Medicaid population; High-cost, high-need communities population; State health policy; Transparency & public reporting; Uninsured & underinsured; Value- based delivery innovations Quantitative services; Strategic & financial planning; Survey design & management; Technical advisory panel recruitment & management; User-centered design & usability testing Lewin Group, Inc, The (est. 
1970) Health services research; Health system modernization; Population health; Program integrity; Public sector health care programs; Quality measures; Value-based payment systems Advanced analytics; Learning & diffusion/technical assistance; Policy research; Program design, implementation, evaluation; Strategy & management consulting Research analytics & consulting organization Other than small ECOLOGY OF EVALUATION CONTRACT WORK 155 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Lord and Tucker Management Consultants LLC (LTMC) (est. 2004) Management consulting & training in entrepreneurship; Federal government contracting; Collegiate professional development; Management & leadership; Workforce development Project & program planning, development; Management & documentation; Strategic planning & organizational development; Research, evaluation & policy studies; Cooperative, interagency, and MOU agreement development; Cradle to grave grants management; Administration, communications, logistics & meeting support Business consulting & services company Small LTG Associates, Inc. (est. 1982) Breastfeeding; Computer sciences; Early childhood development; Economics; Genetics; Geriatrics; Immigration & migration; Infectious disease; International development; Intimate partner violence; Justice diversion; Medicine; Mental health; Nutrition; Organizational science; Postpartum depression; Refugee resettlement; Reproductive health; STIs; Substance use; Theology, religious organization and culture; Tuberculosis Training, mentoring, & technical assistance; Organizational development; Quantitative & mixed methods studies; Ethnography & other qualitative studies; Monitoring, evaluation, & learning; Population & issue assessment Research services & consulting organization Small MANILA Consulting Group (est. 
1999) Federal government services Policy analysis; Research & evaluation; Survey instruments; Assistive technology & accessibility services; Performance management; Strategic communication & marketing Research services & consulting organization Small Acquired by Insignia Federal in 2015 ECOLOGY OF EVALUATION CONTRACT WORK 156 Name (Est. year) Areas of Expertise Services Offered Organization Type Organization Size* Merger Status Mathematica Policy Research Inc (est. 1968) Health; Human services; International research Advisory services; Research & evaluation; Data analytics; Digital innovation; Public health data analytics Research services organization Other than small Maximus Federal Services, Inc (est. 1975) Stakeholder engagement; Technology; Clinical services; Appeals & independent medical reviews; Consulting & advisory services; Digital transformation; Eligibility & enrollment Business process assessment & redesign; Policy & procedures review & development; Quality assurance (QA); Comprehensive needs assessment; Audit preparation& compliance; Financial services practice; Higher education practice; Planning; Procurement support; Oversight & risk management and more Government administration organization Other than small MayaTech Corporation (est. 1985) Women's health; HIV/AIDS & infectious disease; Public health & policy; Higher education; Research & evaluation; Technical assistance & training; Policy & legislative analysis; Conference & logistics management; Information development & dissemination; Information technology & systems integration; Web design Applied social science consulting & technical services company Small MDB, Inc (est. 
2000) Emergency public health; Environmental justice; Environmental science & health; Global health; Occupational safety & health; Sustainability & climate change; Air; Water Strategic communication; Research, analysis, & evaluation; Specialized information services; Data visualization & graphic design; Event management, facilitation, & training; IT & website programming Business consulting & services organization Small

MDRC (est. 1974) P-12 education; Higher education; At-risk young people; Families with children; Work & income security; Criminal & juvenile justice; Health Research; Program evaluation Nonprofit, nonpartisan education & social policy research organization Other than small

MEF Associates, LLC (est. 2009) Asset building & financial capacity; Career pathways & postsecondary education & training; Child support; Child welfare; COVID-19; Criminal justice & re-entry; Early care & education; Environment & climate resiliency; Fatherhood & family strengthening programs; Food assistance; Housing; Refugee resettlement & assistance; Self-sufficiency; Social security & disability employment policy; Substance use disorder; Temporary assistance for needy families (TANF); Welfare & employment; Workforce system; Youth services Research & evaluation; Training & technical assistance Social policy research firm Small

Mission Analytics Group Inc. (Mission) (est. 2010) Child, youth, & family services; Health care delivery; HIV/AIDS; Individuals with disabilities; Long-term & community-based care Data integration & management; Performance measurement; Policy research; Program evaluation; Risk modeling, assessment, & management; Technical assistance Health & human services policy research firm Small

Nakamoto Group, Inc (est. 2003) Technical assistance; Conference planning; Corrections; Publications Program support & management; Financial, administrative, & legal services; Information management & computer services; Scientific, technical & communications (writing & editing); Editorial & graphics services; Contract negotiation; Facility/program preparation for PREA standards; Facility/program compliance monitoring, including evaluating standards; Security management; Corrections executive management temporary staffing; Policy writing for correctional facilities/reentry programs/community-based operations; Emergency plans for correctional facilities; Staffing analysis/budget analysis; Program analysis, program implementation, and more Business consulting & services company Small

National Opinion Research Center (NORC) (est. 1941) Economics, markets, & the workforce; Education, training, & learning; Global development; Health & well-being; Society, media, & public affairs Strategy & planning; Design & methodology; Data collection & management; Analytics & data sciences; Policy analysis, program implementation, & evaluation; Strategic communications & dissemination Nonpartisan research organization Other than small

Neil Hoosier & Associates Inc. (NHA) (est. 1993) Federal government & private health insurance companies Program management, project management & integration services; Change & release management services; Testing services - manual & automated; Software development & design; Business analysis - Human Centered Design framework; Technical writing, editing & 508 compliance document remediation; Outreach & education services, virtual training, e-learning, instructional design, & learning management systems; Program evaluation & analysis; Program oversight, compliance & audit services; Program quality assurance; Medical records review; Information security program management, vulnerability scanning & remediation, OWASP penetration testing Government administration organization Small

Nova Research Company (renamed Lumina Corps in 2022) (est. 1986) Biomedical & public health research & communication Research - including qualitative & quantitative, program evaluation, analytics & measurement, audience research, etc.; Communications; Creative design; Digital strategy & development; Event planning & meeting coverage Research services & consulting firm Small

OceanEast Associates (est. 1997) Critical performance & results management; Outcome focused program management; Acquisition & cost management; Digital strategy enablement; Digital marketing & communications Executive dashboard development & management; Performance/results tracking & reporting; Savings tracking & reporting; Program management organization strategy & formation; Program management execution; Program management evaluation; Strategic sourcing & category management; Acquisition organization strategy & design; Acquisition program & process evaluation; Web/mobile presence; Workflow design & enablement; User research; Social media planning & implementation and more Management & technology consulting firm Small

Optimal Solutions Group, LLC (est. 2000) Education; Health; International practice; Workforce & social policy; Housing, economic development, & transportation; Entrepreneurship & innovation Comment processing tool; Research & evaluation; Analytics group; Software products Economic & policy analysis research & consulting firm Small

Pal Technologies Inc. (Paltech) (est.
1987) Outreach & strategic communications; Conference planning & management; Program monitoring & evaluation; Staff augmentation Ad hoc consulting; Program management; Program support; Publications support; International program management & support; Training & learning; Human capital; IT Professional services company Small

Policy & Research Group (PRG) (est. 2004) Public health; Education; Workforce; Child welfare; Housing; Youth risk Impact studies; Program evaluations; Qualitative analyses; Survey development & administration; Needs assessments; Meta-analyses Research services organization Small

Public Policy Associates Inc (est. 1991) Workforce development; Education; Criminal & juvenile justice; Healthy communities Implementation evaluation; Outcome & impact evaluation including quasi-experimental & experimental designs; Cost analysis; Literature review; Performance measurement & fidelity assessment; Demographic trends & workforce analysis; Customer satisfaction studies; Scans of national practices & policies; Strategic planning; Initiative facilitation & stakeholder engagement; Evaluation capacity-building; Technical support Evaluation, research, & consulting organization Small

Public Strategies, Inc. (est. 1990) Education; Business; Faith; Criminal justice; Child welfare; Human services Project management; Marketing & communications; Training & technical assistance; Program design; Strategic consulting; Online resource centers; Social & behavior change communication; Storytelling; Event & meeting management; Evaluation Project management & communications firm Small

Quality Resource Systems Inc. (QRS) (est.
1988) Analytics; IT Communication & training; Website design & development; Database development; Health sector analysis; Mapping & GIS analysis; Women's & minority health; Healthy people assessment; Program evaluation IT services & consulting organization Small

RAND Corp (est. 1948) Children, families, & communities; Cyber & data sciences; Education & literacy; Energy & environment; Health, health care, & aging; Homeland security & public safety; Infrastructure & transportation; International affairs; Law & business; National security & terrorism; Science & technology; Social equity; Workers & the workplace Benchmarking; Case study analysis; Cost analysis; Economic analysis; Modeling, simulation, & optimization; Performance measurement & measurement development; Policy analysis; Program evaluation; Risk assessment & analysis; Strategic planning; Survey research; Technology assessment & development Nonprofit, nonpartisan research organization Other than small

Research Triangle Institute (RTI) International (est. 1958) Health; Transformative research unit for equity; Education & workforce development; International development; Water; Energy research; Environmental sciences; Justice research & policy; Food security & agriculture; Innovation ecosystems; Military support Surveys & data collection; Statistics & data science; Evaluation, assessment, & analysis; Program design & implementation; Digital solutions for social impact; Research technologies; Drug discovery & development; Analytical laboratory science; Engineering & Technology R&D Independent, nonprofit research institute Other than small

Resources for the Future (RFF) (est.
1952) Adaptation & resilience; Carbon pricing; Climate finance & financial risk; Climate risks & impacts; Comprehensive climate strategies; Data & decision tools; Earth observation for policy; Electric power; Environmental justice; Equity in the energy transition; Federal climate policy; Industry & fuels; International climate policy; Land use, forestry, & agriculture; Natural resources; Social cost of carbon; Transportation Policy design & evaluation; Evaluating regulatory impact analyses; Behavioral economics & policy evaluation; Evaluating climate risks; Economy-wide & sector-specific climate solutions and more Nonpartisan, nonprofit research institute Other than small

Ripple Effect Communications, Inc (est. 2003) Communications & outreach; Program management & policy; Research & evaluation; Project-based consulting; Strategic staffing Policy development; Stakeholder engagement; Business process analysis; Grants & program administration; Project management; Strategic planning; Public comment management & analysis; Process & outcome evaluations; Qualitative research & analysis; Survey research & analysis; Data analysis & visualization; Writing & editing; Strategic communications planning; Digital & social media; Public relations & media outreach; Event planning; Graphic design & multimedia; Data design, collection, & management; Data analysis & AI; Workflow & RPA design; Web development and more Professional consulting services organization Small

SciMetrika LLC (est. 2001) Population health; Risk assessment; Technical assistance Literature reviews; Implementation; Training & technical assistance; Evaluation; Statistics & data analysis; Primary data collection; Data collection & analysis; Sustainable programs; Information management Research services organization Small

Shattuck & Associates (est.
2000) Public health; Education Evaluation design; Summative evaluation/impacts & outcomes; Surveys/focus groups/in-depth interviews/observations; Literature reviews; Content analysis; Program design; Tracking logs Research services & consulting organization Small

Social & Scientific Systems (est. 1978) Public health Clinical & biomedical research; Epidemiology; Health policy; Program evaluation; Longitudinal studies; Data analytics Public health service organization Other than small Acquired by DLH in 2019

Social Solutions International, Inc (est. 2005) Public & global health; Environmental health & preparedness; Gender equality & inclusion; Military wealth & wellness Monitoring, evaluation, & learning; Institutional support & strengthening; Capacity building, training, & technical assistance; Program development implementation; Research & data analytics Full-service consulting firm Other than small

Summit Consulting LLC (est. 2003) Federal infrastructure finance, loans, & grants; Evidence-based program evaluation; Program & business modernization; Enforcement & litigation analytics, program integrity Data science; Qualitative research including interviews, focus groups, environmental scans, literature reviews, process maps, logic models, consumer experience maps, survey development, pretesting, qualitative analysis software, analysis of existing qualitative data; Risk analytics, modeling & statistics Quantitative & qualitative consulting firm Other than small

Systems Plus, Inc (est.
1991) Military health & R&D program management; Military health research & evaluation to dissemination & community interventions; National defense; Advanced computing technologies; Automation; Machine learning; Healthcare solutions & services; Program management Assessment; Evaluation; Research & development; IT strategies & support; System integration; Database architecture; Software development and more IT services & consulting organization Small

Three Feathers Associates (TFA) (est. 1980) Health; Education; Welfare; Focus on American Indian & Alaska Native population Logistical services - including for national, regional, state-wide conferences, institutes, seminars, & workshops; Training & technical assistance; Research & collaboration Nonprofit corporation Other than small

TMF Health Quality Institute (est. 1971) Health care payment model & support; Practice transformation; Quality improvement; Community health; Quality assurance Learning & dissemination; Monitoring & evaluation; Technical assistance & training Private, nonprofit organization Other than small

Urban Institute (est. 1968) Aging & retirement; Children & youth; Child welfare; Climate, disasters, & environment; Crime, justice, & safety; Economic mobility & inequality; Education; Evidence-based policy capacity; Families; Global issues; Health and health care; Housing finance; Housing; Immigrants & immigration; Land use; Neighborhoods, cities, & metros; Nonprofits & philanthropy; Race & equity; Sexual orientation, gender identity, & expression; Social safety net; State & local finance; Taxes & budgets; Wealth & financial well-being; Workforce Research, evaluation, & analysis; Data science; Strategic advising & assistance; Convening; Strategic communications; Integrated project management; Monitoring program implementation and more Social policy & economic think tank Other than small

Vantage Human Resource Services (Vantage) (est. 1974) Leadership development; Interpersonal & communication solutions; Career development; Organizational development solutions Design of professional leadership development programs; Managing & supporting professional leadership programs; Design, development, & delivery of leadership & management training courses; Executive leadership coaching; Organizational needs assessments; Project & program evaluation; Organizational strategic planning; Program & process improvement consultation and more Human resources services organization Small

Walter R McDonald & Associates (WRMA) (est. 1980) Early childhood; Education; Child & family welfare; Justice; Adult protective services; IT; Health initiatives Technical assistance; IT; Research & evaluation; Learning & resource supports; Program integrity solutions; HIV/AIDS programs & policies; Other health & human service supports Business consulting & services organization Small

Westat (est.
1963) Behavioral health & health policy; Clinical trials; Education; Public health & epidemiology; Social policy & economics; Transportation Communications; Data collection, management & survey research; Evaluation; IT; Statistical sciences; Technical assistance; Social marketing; Clinical trials Employee-owned corporation Other than small

*Note: Organization Size indicates the size of the firm at the time Health and Human Services (HHS) contracts were awarded. For example, although Karna was acquired in 2018 and is no longer a small business, the firm was a small business when it received HHS evaluation and research contracts in 2012, 2015, 2016, and 2017.

Appendix C: Evaluation Panelist Recruitment Email

Dear [Insert Name],

My name is Alex Verhoye and I am a current doctoral candidate in the Organizational Leadership, Policy, and Development program at the University of Minnesota. I am writing today to invite you to participate in a research study about external evaluators' awareness and prioritization of market forces that influence the evaluation industry. You have been identified as a possible participant due to your expertise and experience in external evaluation work, and your involvement in the evaluation community at large. As a participant, you would be asked to provide your expert opinion on an interview protocol designed for practicing external evaluators.

If you decide to participate in the study, you will be asked to review an interview protocol and judge the representativeness of the question content, the clarity of the question style, and the overall comprehensiveness of the protocol. You will be given instructions and an interview protocol rubric to complete the review. The review should take approximately 15 to 20 minutes to complete. Information gleaned from this review process will be used to develop a final interview protocol. Participation in this study as a panelist is completely voluntary.
If you would like to participate in this research, or have any questions or concerns about this study, please contact me directly at [EMAIL] or at [PHONE NUMBER]. Thank you for your time and consideration.

Sincerely,
Alex Verhoye

Appendix D: Practicing Evaluator Recruitment Materials

Initial LinkedIn Message

Dear [Insert Name],

My name is Alex Verhoye and I am a current Evaluation Studies PhD Candidate in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota. I am currently in the data collection phase of my studies, and I would love to connect!

Initial or Follow-up Email [24]

Dear [Insert Name],

My name is Alex Verhoye and I am a current doctoral candidate in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota, and I am now in the data collection phase of my studies. I am writing today to ask for your contribution to my research study about external evaluators' awareness and prioritization of market forces that influence the U.S. evaluation industry. You have been identified as a possible participant due to your external evaluation work at [FIRM OR UNIVERSITY CENTER] and your experience in federal government grants and contracts work.

If you decide to participate in the study, you will be interviewed by me regarding your professional experience, knowledge, and understanding of the field in relation to the U.S. evaluation market. The interview will last approximately 45-60 minutes and will be recorded for data collection purposes. I am currently looking to schedule interviews throughout April and May. Participation in this study is completely voluntary. If you would like to participate in this research, or have any questions or concerns about this study, please contact me directly at [EMAIL] or at [PHONE NUMBER]. You may also contact my doctoral advisor, David R. Johnson, Ph.D., at [EMAIL].
Full disclosure — I am currently employed as a research analyst at [FIRM]. Information gleaned from an interview would be solely used to complete my doctoral requirements at the University of Minnesota; this research is in no way associated or affiliated with my current employer—[FIRM]—nor will any information gathered from this interview be shared with anyone at [FIRM]. Interviews are confidential. No information that would make it possible to identify you or your organization would be included in future presentations or publications; only de-identified data will be shared. All data will be securely stored in my University of Minnesota Box Secure Storage account. I alone, as the sole researcher, will have access to raw data. Thank you for your time and consideration.

Sincerely,
Alex Verhoye

[24] This text was sent as an initial email to individuals who were not originally contacted via LinkedIn. If an individual was first contacted via LinkedIn and they expressed interest in participating in the study, this text was sent as a follow-up email.

Interview Reminder Email

Hello [Insert Name],

I hope this email finds you well! Thank you again for your willingness to be interviewed for my dissertation on Exploring U.S. Research and Evaluation Contract Work. Your interview is scheduled for next [Day of the week], [Month] [Day] at [Time] [Time Zone]. Attached is a list of questions that we will discuss during our call.

We will be using Zoom to meet, which will allow you to join either from a computer or by calling in from a phone. If you are joining from a computer, please click the link below. Please feel free to join with or without your video on - whatever you are most comfortable with. Please let me know if you have any questions or concerns, or if you need to cancel or reschedule the interview.
Best,
Alex

Appendix E: Interview Protocol Review Instructions and Rubric

I am developing an interview protocol to explore practicing evaluators' professional experience, knowledge, understanding, and potential prioritization of external factors that influence the U.S. evaluation market. Your participation in the protocol development process is valuable, as it serves as a foundational step toward painting a clearer picture of the current U.S. evaluation landscape.

Directions: Please complete the provided rubric during your review of the interview protocol. This rubric asks that you judge how representative each question domain is of external evaluators' experiences, and whether the item is appropriate for practicing external evaluators. You are also asked to judge the clarity of each question section and the overall comprehensiveness of the interview protocol. Throughout your review, please include any comments and suggestions you have for improvement on the individual questions, the question sections, and/or the interview protocol as a whole.

Section I
Representativeness: 1 = This section is not representative; 2 = This section requires major revisions to be representative; 3 = This section requires minor revisions to be representative; 4 = This section is representative
Clarity: 1 = This section is not clear; 2 = This section requires major revisions to be clear; 3 = This section requires minor revisions to be clear; 4 = This section is clear
Comments:

Section II
Representativeness: 1 2 3 4
Clarity: 1 2 3 4
Comments:

Section III
Representativeness: 1 2 3 4
Clarity: 1 2 3 4
Comments:

COMMENTS ON OVERALL COMPREHENSIVENESS:

Note. This rubric is adapted from Dinnesen et al. (2020). Collaborating with an expert panel to establish the content validity of an intervention for preschoolers with language impairment. Communication Disorders Quarterly, 41(2), 86-99.
Appendix F: Practicing Evaluator Interview Protocol

Introduction & Instructions

Hello [name of interviewee]. Thank you for taking the time to meet with me today. My name is Alex Verhoye, and I am a doctoral candidate in the Department of Organizational Leadership, Policy, and Development at the University of Minnesota. The purpose of this interview is to learn about practicing evaluators' awareness and potential prioritization of current marketplace factors within the U.S. evaluation industry.

My work involves examining the current U.S. research and evaluation marketplace in an attempt to better understand the interconnectedness between the market's supply-side processes (e.g., external evaluation and research organizations, university evaluation and research centers) and the economic, societal, and political forces (e.g., major economic or social events; changes in policies, legislation, and administrations) on the demand side (i.e., the federal government), and the implications these connections pose for research and evaluation providers. To this end, I am trying to learn about practicing evaluators' current understanding, awareness, and prioritization of outside factors—such as major economic and social events, as well as changes in policies, legislation, and administrations—that could influence their evaluation practice (e.g., contract procurement, skills and competencies their organization prioritizes).

The information gleaned from this interview will be solely used to complete my doctoral requirements at the University of Minnesota; this research is in no way associated or affiliated with my current employer—[FIRM]—nor will any information gleaned from this interview be shared with anyone at [FIRM]. There are no immediate or expected risks for participating in this interview. The interview is confidential.
No information that would make it possible to identify you or your organization will be included in future presentations or publications; only de-identified data will be shared. There are no immediate or expected benefits for participating in the interview. Please know that you are free to skip any question(s) that you do not want to answer and can request to end the interview at any time with zero consequence. This interview will take approximately forty to sixty minutes and will be recorded for data collection purposes.

Do you consent to having this interview recorded?

[If no, do not turn on recording.]
[If yes, start recording.]

Now that we have started recording, can you please repeat whether you consent to having this interview recorded?

Before we continue, do you have any questions or concerns about anything I've mentioned so far?

[If no, continue on to the next section.]
[If yes, answer any questions and respond to any concerns the interviewee mentions.]

Section I: Background Information

[Note: Throughout the interview, I will use probing and clarification phrases/words such as, "Can you give me an example?", "Tell me more about that", "To clarify…", "What I'm hearing you say is X – is that correct?", "What…?", "How…?", "Who…?", "When…?"]

To get us started, can you tell me a bit about yourself? What is your name, current place of employment, and position title? How long have you been at your current organization? How long have you been working in the evaluation field overall?

[TRANSITION TO NEXT SECTION]

Section II: Positioning within the Field

Do you participate in your organization's hiring process?

[IF YES] Imagine your organization is hiring for an additional position that is equivalent to your own. What skills, competencies, experience, and/or expertise would you look for in applicants?
[IF NO] Imagine you are going on the job market this year: What skills, competencies, experience, and/or expertise would you want to highlight on your resume and during interviews?

From your perspective, where do you see your organization's place within the broader U.S. evaluation field? For example, how do you view your organization's size, capacity, and capabilities in comparison to other organizations you may compete with for federal grants and contracts?

[TRANSITION TO NEXT SECTION]

Section III: Current Perception of the Field

From your perspective and experience, are there certain outside market factors that influence the types of contracts you bid on and/or win? When I say 'outside market factors' I am thinking of anything from major health, social, or economic events (e.g., COVID-19, George Floyd's murder, the Great Recession) to changes in major policies, legislation, and/or administrations.

What are these factors? How do you view their influence on your organization's work, specifically in how your organization approaches federal grants and contracts?

Of the market factors you have identified, are there any that are particularly salient to your organization? Do you prioritize certain factors over others, and if so, why?

[TRANSITION TO NEXT SECTION]

Section IV: Future of the Field

How do you see the future outlook for your organization in terms of growth, continued success or stability, and/or organizational priorities?

Is there anything in particular that you are excited or nervous about in relation to the future of federal grant and contract work in the United States?

Based on your own knowledge and experiences in research and evaluation, where do you see the field five years from now, and why?

[TRANSITION TO CONCLUSION]

Conclusion

This concludes all of the questions I had prepared. Is there anything else you would like to add or that you would like me to know?
Do you have any comments, questions, or concerns about what we discussed today?

[If yes, answer any questions and respond to any concerns.]
[If no, conclude interview.]

Thank you for taking the time to speak with me today about your experience as an external evaluator in the United States. Please do not hesitate to contact me in the future if you have any additional thoughts, questions, or concerns regarding today's interview.

Appendix G: Kaplan-Meier Estimated Survival Tables

Table G1
Kaplan-Meier Estimated Survival Table, Universities

Time (in years)  Number at risk  Number died  Probability of survival  SE      95% CI LL  95% CI UL
1                42              36           0.1429                   0.0540  0.06810    0.300
2                6               5            0.0238                   0.0235  0.00343    0.165
3                1               1            0.0000                   NaN     NA         NA

Note: CI = confidence interval; Number died = number of universities that died during the jth interval; Number at risk = number of universities that were alive and at risk of dying at the beginning of the jth interval; Probability of survival = the probability of surviving the jth interval given that the university has survived the previous intervals; SE = standard error; Time (in years) = the length of time, in years, a university was active in the HHS evaluation contract arena between FY 2008-2022.
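The estimates in Table G1 follow the standard Kaplan-Meier product-limit estimator, with standard errors from Greenwood's formula; the confidence-interval bounds shown are consistent with a log-transformed interval (the default in common survival-analysis software). A minimal sketch, assuming only the at-risk and death counts reported in the table:

```python
import math

def kaplan_meier(intervals, z=1.96):
    """Product-limit survival estimates with Greenwood SEs and log-transformed CIs.

    intervals: (time, n_at_risk, n_died) tuples in increasing time order.
    Returns (time, survival, se, ci_lower, ci_upper) tuples.
    """
    surv, gw = 1.0, 0.0
    rows = []
    for t, n, d in intervals:
        surv *= 1.0 - d / n                       # S(t) = prod_j (1 - d_j / n_j)
        gw += d / (n * (n - d)) if n > d else float("inf")
        se = surv * math.sqrt(gw)                 # Greenwood's formula
        if surv > 0.0:
            half = z * math.sqrt(gw)              # interval on the log-survival scale
            lo, hi = surv * math.exp(-half), surv * math.exp(half)
        else:
            lo = hi = float("nan")                # survival hit zero: CI undefined
        rows.append((t, surv, se, lo, hi))
    return rows

# At-risk and death counts from Table G1 (universities, FY 2008-2022)
for t, s, se, lo, hi in kaplan_meier([(1, 42, 36), (2, 6, 5), (3, 1, 1)]):
    print(f"t={t}: S={s:.4f} SE={se:.4f} CI=({lo:.5f}, {hi:.3f})")
```

Running this reproduces the Table G1 rows (e.g., S = 0.1429, SE = 0.0540, CI 0.06810-0.300 at year one), including the undefined SE once all universities have exited.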
Table G2
Kaplan-Meier Estimated Survival Table, Total Firms

Time (in years)  Number at risk  Number died  Probability of survival  SE      95% CI LL  95% CI UL
1                170             94           0.4471                   0.0381  0.3782     0.528
2                72              34           0.2359                   0.0331  0.1792     0.311
3                35              14           0.1416                   0.0279  0.0963     0.208
4                19              6            0.0969                   0.0243  0.0592     0.158
5                11              2            0.0793                   0.0229  0.0450     0.140
6                9               2            0.0616                   0.0209  0.0317     0.120
11               6               1            0.0514                   0.0198  0.0241     0.109

Note: CI = confidence interval; Number died = number of firms that died during the jth interval; Number at risk = number of firms that were alive and at risk of dying at the beginning of the jth interval; Probability of survival = the probability of surviving the jth interval given that the firm has survived the previous intervals; SE = standard error; Time (in years) = the length of time, in years, a firm was active in the HHS evaluation contract arena between FY 2008-2022.

Table G3
Kaplan-Meier Estimated Survival Table, Small Firms

Time (in years)  Number at risk  Number died  Probability of survival  SE      95% CI LL  95% CI UL
1                90              58           0.3556                   0.0505  0.2692     0.470
2                30              17           0.1541                   0.0389  0.0939     0.253
3                12              6            0.0770                   0.0295  0.0363     0.163
4                4               1            0.0578                   0.0277  0.0226     0.148
11               1               1            0.0000                   NaN     NA         NA

Note: CI = confidence interval; Number died = number of small firms that died during the jth interval; Number at risk = number of small firms that were alive and at risk of dying at the beginning of the jth interval; Probability of survival = the probability of surviving the jth interval given that the small firm has survived the previous intervals; SE = standard error; Time (in years) = the length of time, in years, a small firm was active in the HHS evaluation contract arena between FY 2008-2022.
Table G4
Kaplan-Meier Estimated Survival Table, Not Small Firms

Time (in years)  Number at risk  Number died  Probability of survival  SE      95% CI LL  95% CI UL
1                80              36           0.5500                   0.0556  0.4511     0.671
2                42              17           0.3274                   0.0532  0.2381     0.450
3                23              8            0.2135                   0.0476  0.1380     0.330
4                15              5            0.1423                   0.0410  0.0809     0.250
5                9               2            0.1107                   0.0375  0.0570     0.215
6                7               2            0.0791                   0.0328  0.0351     0.178

Note: CI = confidence interval; Number died = number of not small firms that died during the jth interval; Number at risk = number of not small firms that were alive and at risk of dying at the beginning of the jth interval; Probability of survival = the probability of surviving the jth interval given that the not small firm has survived the previous intervals; SE = standard error; Time (in years) = the length of time, in years, a not small firm was active in the HHS evaluation contract arena between FY 2008-2022.