Browsing by Subject "IRVLab"
Now showing 1 - 2 of 2
Item
Simulation of Semantically-Aware Obstacle Avoidance Algorithms for Underwater Robots (2021-08-30)
Walaszek, Chris A

Researchers working with autonomous underwater vehicles (AUVs) must be able to test their vision-based robotic algorithms in on-location field trials. These trials can be time-consuming and carry the risk that unforeseen hardware and software bugs will limit the amount of data gathered. Being able to test and evaluate algorithms risk-free in a computer simulation beforehand can therefore be invaluable for researchers. Current simulation solutions can provide realistic physics and easily modifiable worlds; however, using a 3D graphics engine to create realistic underwater scenarios can improve results considerably and ease the transition into a real-world environment. This research demonstrates the potential of the Unity 3D graphics engine to provide a realistic simulation environment by running and evaluating a vision-based underwater obstacle avoidance algorithm on a simulated Aqua robot. We find that Unity can provide simulated stereo images that the Semantic Obstacle Avoidance for Robots (SOAR) algorithm can use to navigate a simple obstacle field en route to a predetermined goal position.

Item
Using LED Gaze Cues to Enhance Underwater Human-Robot Interaction (2022-05)
Prabhu, Aditya; Fulton, Michael; Sattar, Junaed, Ph.D.

In the underwater domain, conventional methods of communication between divers and Autonomous Underwater Vehicles (AUVs) are heavily impeded. Radio signal attenuation, water turbidity (cloudiness), and low light levels make it difficult for a diver and an AUV to relay information to each other. Current solutions such as underwater tablets, slates, and tags are not intuitive and introduce additional logistical challenges and points of failure. Intuitive human-robot interaction (HRI) is imperative to ensuring seamless collaboration between AUVs and divers.
Eye gaze is a natural way for humans to relay information and an underutilized channel of communication for AUVs, while lights help eliminate the darkness, turbidity, and signal attenuation that often impair diver-robot collaboration. This research aims to implement eye gazes on LoCO (a low-cost AUV) using RGB LED rings in order to pursue intuitive forms of underwater HRI while overcoming common barriers to communication. To test the intuitiveness of the design, 9 participants with no prior knowledge of LoCO or HRI were tasked with recalling the meaning of each of 16 gaze indicators during pool trials, having been exposed to the indicators 3 to 4 days earlier. Compared to the baseline text-display communication, which had a recall of 100%, recall for most eye gaze animations was exceptionally high, with an 80% accuracy score for 11 of the 16 indicators. These results suggest that certain eye indicators convey information more intuitively than others, and that additional training could make gaze indicators a viable method of communication between humans and robots.