In this thesis we address the emergence of cooperation among agents operating in a simulated environment, where they must accomplish a complex task that is decomposed into sub-tasks and can be completed only through cooperation. The agents use deep reinforcement learning, with a multi-layered neural network as function approximator, to learn how to move heavy loads that require cooperation. The goal of this work is to show empirically that cooperation can emerge without explicit instructions, with agents learning to cooperate to perform complex tasks, and to analyze the correlation between task complexity and training time. A first series of experiments establishes that cooperation can emerge, but becomes unlikely in partially observable environments as the environment size grows. A second series shows that, when the environment is only partially observable, communication makes cooperative behavior more likely even as the environment scales up. However, communication is not a necessary condition for cooperation to emerge: agents with full knowledge of the environment also learn to cooperate, as demonstrated in the fully observable setting.
University of Minnesota M.S. thesis. December 2020. Major: Computer Science. Advisor: Maria Gini. 1 computer file (PDF); 50 pages.
Learning to cooperate using deep reinforcement learning in a multi-agent system.
Retrieved from the University of Minnesota Digital Conservancy.
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.