In multi-agent navigation, agents must move from their start positions to their goal locations while avoiding collisions with other agents and with static obstacles in the environment. Existing methods either compute the motions of all agents centrally or let each agent compute its own motion locally. A central controller limits the number of agents that can be controlled in real time, while a purely local method produces motions that are locally optimal but ignore the motions of the other agents, yielding inefficient global behavior when many agents move through a crowded space. This dissertation proposes a set of online action-selection methods that each agent uses to dynamically adapt its behavior to local conditions. Specifically, we propose four approaches, based on learning, planning, coordination, and model inference, that improve the global motions of a set of agents. Because each agent makes its own decisions about how to move, these approaches are highly scalable. We validate the approaches experimentally through extensive simulations in a variety of environments and with different numbers of agents. Compared to other techniques, the proposed approaches produce motions that are more efficient and make better use of the available space, allowing agents to reach their destinations faster.