Title: Fairness Estimation For Small And Intersecting Subgroups In Clinical Applications
Author: Wastvedt, Solvejg
Date available: 2024-04-30
Date issued: 2024-03
Handle: https://hdl.handle.net/11299/262768
Description: University of Minnesota Ph.D. dissertation. March 2024. Major: Biostatistics. Advisors: Julian Wolfson, Jared Huling. 1 computer file (PDF); x, 127 pages.

Abstract: Along with the increasing availability of health data has come the rise of data-driven models to inform decision-making and policy. These models have the potential to benefit both patients and health care providers but can also exacerbate health inequities. Existing "algorithmic fairness" methods for measuring and correcting model bias fall short of what is needed for health policy in several ways that we address in this dissertation. First, in clinical applications, risk prediction is typically used to guide treatment, creating distinct statistical issues that invalidate most existing techniques. Second, methods typically focus on a single grouping along which discrimination may occur rather than considering multiple, intersecting groups. Third, most existing techniques are only usable for relatively large subgroups. Finally, most existing algorithmic fairness methods require complete data on the grouping variables, such as race or gender, along which fairness is to be assessed. However, in many clinical settings, this information is missing or unreliable. In this dissertation, we address each of these challenges and propose methods that expand the possibilities for algorithmic fairness work in clinical settings.

Language: en
Keywords: algorithmic fairness; causal inference; health equity; risk prediction
Type: Thesis or Dissertation