Browsing by Author "Fulton, Michael"
Now showing 1 - 4 of 4
Item: Natural, Robust, and Multi-Modal Human-Robot Interaction For Underwater Robots (2023-01)
Fulton, Michael

In the mid-twentieth century, robots began to swim in the oceans, lakes, rivers, and waterways of the world. Over the seventy years that have passed since then, autonomous underwater vehicles (AUVs) have slowly evolved, becoming smaller, more intelligent, and more capable. As they have been deployed in a wider variety of locations and for increasingly complex purposes, excitement over the idea of a collaborative AUV (co-AUV) has grown with the continued development of the field. Now we stand on the cusp of a revolution in the world of underwater work. In the coming years, thousands of divers the world over could be aided in their work by a co-AUV, helping humans to better understand and protect the critical water resources of our planet. However, for this dream to come to fruition, these co-AUVs must be capable of natural, robust communication; rich and accurate perception of their human partners; and adaptive operation in an ever-changing environment. Though researchers have been taking steps toward this goal, this thesis marks a new stage in the development of the co-AUV. In the following chapters, we present three novel methods of communication, two state-of-the-art perception capabilities, a new capability for diver approach, a new methodology for gestural AUV control, a modular software ecosystem for underwater human-robot interaction (UHRI), and an adaptive communication controller. Additionally, seven human studies evaluating these systems are presented, five of which were conducted in underwater environments with an unprecedented number of participants.
The communication methods presented in Part I are a new direction for the field, emphasizing non-text communication that is easily perceived at a distance, natural and intuitive design over information complexity, and new vectors of communication, motion and sound, that have not previously been studied underwater. The perception methods of Part II are more traditional but push the boundaries of previously developed capabilities in several ways: introducing diver motion prediction as a new capability, estimating the relative distance to a diver using only monocular vision, and enabling reconfigurable, dynamic gestural control in a way not previously attempted for AUVs. The capstone of the thesis, in Part III, is the PROTEUS underwater HRI software system, which could serve as a foundation for a great deal of future research, along with the first adaptive communication system for AUVs, ACVS. ACVS uses the perception capabilities presented in Part II to determine which of the communication vectors introduced in Part I should be used given the context of an interaction, with all components implemented within the PROTEUS framework. The research contained in this thesis is highly multidisciplinary, encompassing interaction design, software development, hardware fabrication, the design and administration of human studies, quantitative and qualitative analysis of study results, deep learning system design, training and deployment of neural networks, robot design, and general robotics development. The results of these investigations into UHRI reveal an exciting potential for the field. Nearly every method presented in this thesis achieved sufficient success in testing to indicate that, with some further development, it could be effectively applied in field environments.
The dream of co-AUVs helping divers in their work is already beginning to come to life, and the algorithms and systems presented in this document bring us closer to that goal. The work done by divers is critical to human society and the health of our planet's ecosystems, and the aid that collaborative AUVs could render in these environments is invaluable, greatly increasing diver safety and task success rates. This thesis provides novel communication methods, a new state of the art in diver perception, an adaptive communication system, and a software architecture that ties them all together, improving the flexibility and robustness of underwater human-robot interaction and providing a basis for further development along these exciting avenues.

Item: Predicting the Future Motion Trajectory of Scuba Divers for Human-Robot Interaction (2021)
Agarwal, Tanmay; Fulton, Michael; Sattar, Junaed

Autonomous Underwater Vehicles (AUVs) can be effective collaborators to human scuba divers in many applications, such as environmental surveying, mapping, or infrastructure repair. However, for these applications to be realized in the real world, it is essential that robots be able to both lead and follow their human collaborators. Current algorithms for diver following are not robust to non-uniform changes in the diver's motion, and no framework currently exists for robots to lead divers. One way to improve the robustness of diver following and to enable diver leading is to predict the future motion of a diver. In this paper, we present a vision-based approach for AUVs to predict the future motion trajectory of divers, utilizing the Vanilla-LSTM and Social-LSTM temporal deep neural networks. We also present a dense optical flow-based method to stabilize the input annotations from the dataset and reduce the effects of camera ego-motion.
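The ego-motion compensation step described above can be illustrated with a minimal sketch. It assumes per-frame global camera shifts (for example, the mean of a dense optical flow field) have already been estimated; the paper's actual stabilization procedure may differ in its details:

```python
def stabilize_trajectory(centers, ego_motion):
    """Express diver bounding-box centers in the first frame's coordinates.

    centers: list of (cx, cy) diver box centers, one per frame.
    ego_motion: list of (dx, dy) global camera shifts between consecutive
        frames (assumed precomputed, e.g. from dense optical flow).

    Subtracting the accumulated camera shift leaves (approximately) only
    the diver's own motion, which is what the predictor should learn.
    """
    out = [centers[0]]
    ox, oy = 0.0, 0.0  # accumulated camera offset
    for (cx, cy), (dx, dy) in zip(centers[1:], ego_motion):
        ox += dx
        oy += dy
        out.append((cx - ox, cy - oy))
    return out
```

With a camera panning right at 5 px/frame and a diver moving right at 7 px/frame in image space, the stabilized trajectory advances 2 px/frame, the diver's true relative motion.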
We analyze the results of these models on scenarios ranging from swimming pools to the open ocean and report the models' accuracy at varying prediction lengths. We find that our LSTM models can generate accurate predictions up to 1.5 seconds into the future and that stabilizing the input annotations significantly improves trajectory prediction performance.

Item: Using LED Gaze Cues to Enhance Underwater Human-Robot Interaction (2022-05)
Prabhu, Aditya; Fulton, Michael; Sattar, Junaed, Ph.D.

In the underwater domain, conventional methods of communication between divers and Autonomous Underwater Vehicles (AUVs) are heavily impeded. Radio signal attenuation, water turbidity (cloudiness), and low light levels make it difficult for a diver and an AUV to relay information to each other. Current solutions such as underwater tablets, slates, and tags are not intuitive and introduce additional logistical challenges and points of failure. Intuitive human-robot interaction (HRI) is imperative to seamless collaboration between AUVs and divers. Eye gaze is a natural form of relaying information between humans and an underutilized channel of communication for AUVs, while lights help overcome the darkness, turbidity, and signal attenuation that often impair diver-robot collaboration. This research implements eye gazes on LoCO (a low-cost AUV) using RGB LED rings in order to pursue intuitive forms of underwater HRI while overcoming common barriers to communication. To test the intuitiveness of the design, 9 participants with no prior knowledge of LoCO or HRI were asked to recall the meanings of each of 16 gaze indicators during pool trials, having been exposed to the indicators 3 to 4 days earlier. Compared to the baseline text display communication, which had 100% recall, recall for most eye gaze animations was exceptionally high, with an 80% accuracy score for 11 of the 16 indicators.
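One way such ring-based gaze cues could be driven is to map a gaze direction to a short arc of lit LEDs on the ring. The sketch below is purely illustrative: the ring size, arc width, and indicator scheme are assumptions, not LoCO's actual design, and the 16 indicators in the study also include animations this static mapping does not capture:

```python
def gaze_leds(angle_deg, num_leds=24, width=3):
    """Return indices of LEDs to light so the lit arc 'looks' toward
    angle_deg (0 = top of the ring, increasing clockwise).

    num_leds: LEDs on the ring (hypothetical; LoCO's count may differ).
    width: size of the lit arc, assumed odd so it centers cleanly.
    """
    # Nearest LED to the requested direction.
    center = round(angle_deg / 360 * num_leds) % num_leds
    half = width // 2
    # Arc of `width` LEDs centered on that LED, wrapping around the ring.
    return [(center + i) % num_leds for i in range(-half, half + 1)]
```

A "glance left" or "glance right" indicator would then just be this mapping evaluated at 270 or 90 degrees, with animation produced by sweeping the angle over time.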
These results suggest that certain eye indicators convey information more intuitively than others, and that additional training could make gaze indicators a viable method of communication between humans and robots.

Item: Video Diver Dataset (VDD-C): 100,000 annotated images of divers underwater (2021-04-19)
de Langis, Karin; Fulton, Michael; Sattar, Junaed; Interactive Robotics and Vision Lab

This dataset contains over 100,000 annotated images of divers underwater, gathered from videos of divers in pools and in the Caribbean off the coast of Barbados. It is intended for the development and testing of diver detection algorithms for use in autonomous underwater vehicles (AUVs). Because the images are sourced from videos, they are largely sequential, meaning that temporally aware algorithms (video object detectors) can be trained and tested on this data. Training on this data improved our diver detection algorithms significantly, as it increased our training set size seventeen-fold compared to our previous best dataset. It is released free for anyone who wants to use it.
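Because the frames are largely sequential, a temporally aware detector would typically consume them as short overlapping clips rather than as shuffled single images. A minimal sketch of that grouping follows; the clip length and stride are illustrative choices, not something the dataset itself prescribes:

```python
def make_clips(frame_ids, clip_len=8, stride=4):
    """Group an ordered list of frame identifiers into overlapping,
    fixed-length clips, as a video object detector might consume them.

    frame_ids: frame identifiers in temporal order.
    clip_len: frames per clip (illustrative).
    stride: offset between successive clip starts; stride < clip_len
        yields overlapping clips, which increases training coverage.
    """
    return [
        frame_ids[start:start + clip_len]
        for start in range(0, len(frame_ids) - clip_len + 1, stride)
    ]
```

A single-image detector would instead treat every frame independently; the clip view is only meaningful because the dataset preserves temporal ordering.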