Browsing by Subject "algorithmic game theory"
Incentive-aware Machine Learning for social welfare maximization under uncertainty (2024-11)
Ngo, Tuan Dung

Machine Learning algorithms are being deployed to aid decision-making in high-stakes domains such as hiring, education, and medical trials. Under the influence of these consequential algorithmic decisions, individuals (i.e., the decision-subjects) are increasingly aware of their ability to shape the outcomes of Machine Learning algorithms. Conversely, the social planner (i.e., the decision-maker) deploying these algorithms must account for the incentives of individuals who are unwilling to follow the algorithmic decisions given to them unquestioningly.

This dissertation focuses on incentive-aware machine learning under uncertainty through the lens of a centralized social planner deploying the algorithms for decision-making. Naturally, tension exists between the goals of these stakeholders: an individual's actions often spring from self-interested motives, while the social planner wants to maximize social welfare across all individuals in the population. The primary focus of this dissertation is therefore to design incentive-aware Machine Learning algorithms that align with the incentives of the decision-subjects while achieving the desired aggregate outcomes.

Concretely, I study the problem of incentivized exploration in the context of online recommender systems, where the social planner can only provide signals to influence the individuals' action choices. I extend the existing literature on incentivized exploration in three directions: structured action sets, heterogeneous populations, and practical applications. Beyond incentivized exploration, this dissertation also examines other settings where incentives play an essential role in the agents' decision-making process.
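The incentivized-exploration setting described above can be illustrated with a toy simulation. This is a minimal sketch with invented parameters (two arms, a known baseline mean of 0.5, an assumed unknown mean of 0.7, a 10% hidden-exploration rate), not the dissertation's actual model: the planner mixes a few exploratory recommendations among exploitative ones, so that following a recommendation remains attractive in expectation.

```python
import random

# Illustrative sketch (assumed parameters, not from the dissertation):
# arm 0 has a known mean reward of 0.5; arm 1 has an unknown mean.
# Myopic agents would always pick arm 0, so arm 1 would never be
# explored. The planner "hides" occasional exploratory recommendations
# of arm 1 among exploitative ones -- the core idea of incentivized
# exploration via recommendation signals.

random.seed(0)
true_mean_arm1 = 0.7    # unknown to agents; fixed here for the demo
known_mean_arm0 = 0.5
explore_prob = 0.1      # fraction of agents asked to explore

samples = []            # planner's observations of arm 1's rewards
followed_rewards = []   # rewards collected by agents who follow

for agent in range(1000):
    if samples and sum(samples) / len(samples) > known_mean_arm0:
        rec = 1         # exploit: arm 1 looks better so far
    elif random.random() < explore_prob:
        rec = 1         # hidden exploration round
    else:
        rec = 0         # safe default recommendation
    mean = true_mean_arm1 if rec == 1 else known_mean_arm0
    reward = 1 if random.random() < mean else 0
    if rec == 1:
        samples.append(reward)
    followed_rewards.append(reward)

avg = sum(followed_rewards) / len(followed_rewards)
print(f"average reward when agents follow recommendations: {avg:.2f}")
```

Once the planner's estimate of arm 1 exceeds the known baseline, the simulated population switches to the better arm, so the average realized reward rises above what purely myopic behavior (always arm 0) would achieve.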
First, I study the problem of strategic learning, where the decision-subjects are strategic and may modify their inputs to obtain a more favorable prediction outcome. Second, I examine how a centralized server can incentivize participation in a collaborative learning setting. Finally, I investigate the setting where downstream decision-makers face uncertainty in choosing an accurate, loss-minimizing predictor suited to their downstream task. Together, these works advance our understanding of how to design incentive-aware machine learning algorithms under uncertainty that both (1) align with each decision-subject's beliefs and (2) maximize cumulative social welfare.
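The strategic-learning setting mentioned above can be sketched in one dimension. This is a hypothetical illustration (the threshold, cost, and benefit values are assumptions, and `best_response` is an invented helper, not the dissertation's model): an agent below a classifier's threshold manipulates its reported feature only when the cost of doing so is worth the benefit of a positive classification.

```python
# Toy sketch of strategic learning (assumed model, not the
# dissertation's): a 1-D linear threshold classifier accepts any
# reported feature x' >= threshold. An agent with true feature x pays
# cost * |x' - x| to report x' and gains `benefit` if accepted, so it
# manipulates only when crossing the threshold is worth the cost.

def best_response(x, threshold=0.5, cost=2.0, benefit=1.0):
    """Return the feature value a rational agent reports."""
    if x >= threshold:
        return x                       # already accepted; no change
    gap = threshold - x                # distance to the threshold
    if cost * gap <= benefit:
        return threshold               # cheapest accepted report
    return x                           # manipulation too expensive

print(best_response(0.4))    # small gap: worth moving to 0.5
print(best_response(-1.0))   # large gap: stays at -1.0
```

A classifier that ignores this best response overestimates applicant quality near the threshold; incentive-aware strategic learning designs the decision rule with the agents' best responses taken into account.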