Multi-robot systems, or mobile sensor networks, which have become increasingly popular due to recent advances in electronics and communications, can be used in a wide range of applications, such as space exploration, search and rescue, target tracking, and cooperative localization and mapping. In contrast to single robots, multi-robot teams are more robust to single-point failures, accomplish coverage tasks more efficiently by dispersing over large areas, and achieve higher estimation accuracy by directly communicating and fusing their sensor measurements. Realizing these advantages, however, requires addressing certain challenges. Specifically, in order for teams of robots to cooperate, or to fuse measurements from geographically dispersed sensors, they need to know their poses with respect to a common frame of reference. Initializing the robots' poses in a common frame is relatively easy when GPS is available, but very challenging in its absence. Moreover, planning the motion of multiple robots so as to achieve optimal estimation accuracy is also difficult: since the estimation accuracy depends on the locations where the robots record their sensor measurements, reaching a required level of accuracy may take an excessive amount of time if the robots' motions are not properly designed.
This thesis offers novel solutions to the aforementioned challenges. The first part of the thesis investigates the problem of relative robot pose initialization using robot-to-robot distance and/or bearing measurements collected over multiple time steps. In particular, it focuses on minimal problems and proves that in 3D only 14 such problems need to be solved. Furthermore, it provides efficient algorithms for computing the robot-to-robot transformation that exploit recent advances in algebraic geometry.
The second part of the thesis investigates optimal motion strategies for localization in leader-follower formations using distance or bearing measurements. Interestingly, the robot-to-robot pose is unobservable if the robots move on a straight line while maintaining their formation; hence, the uncertainty of the robots' poses increases over time. If, however, the robots deviate from the desired formation, their measurements provide additional information that renders the relative pose observable. This thesis addresses the trade-off between maintaining the formation and improving the estimation accuracy, and provides algorithms for computing the optimal positions to which the robots should move in order to collect the most informative measurements at the next time step.
By providing solutions to two important problems for multi-robot systems, namely motion-induced extrinsic calibration and optimal motion strategies for relative localization, the work presented in this thesis is expected to promote the use of multi-robot teams in real-world applications.