Browsing by Author "Sattar, Junaed"
Now showing 1 - 5 of 5
Item: On Applications of GANs and Their Latent Representations (2018-07-09)
Fabbri, Cameron; Sattar, Junaed
This report describes various applications of Generative Adversarial Networks (GANs) for image generation, image-to-image translation, and vehicle control. We also investigate the role played by the learned latent space, and show several ways of exploiting this space for controlled image generation and exploration. We present one purely generative method, which we call AstroGAN, that generates realistic images of galaxies from a set of galaxy morphologies. Two image-to-image translation methods are also presented: StereoGAN, which generates a pair of stereo images from a single image, and Underwater GAN, which restores imagery distorted by underwater environments. Lastly, we show a generative model for generating actions in a simulated self-driving car environment. (An illustrative latent-space interpolation sketch appears below.)

Item: Predicting the Future Motion Trajectory of Scuba Divers for Human-Robot Interaction (2021)
Agarwal, Tanmay; Fulton, Michael; Sattar, Junaed
Autonomous Underwater Vehicles (AUVs) can be effective collaborators for human scuba divers in many applications, such as environmental surveying, mapping, or infrastructure repair. However, for these applications to be realized in the real world, it is essential that robots are able to both lead and follow their human collaborators. Current algorithms for diver following are not robust to non-uniform changes in the motion of the diver, and no framework currently exists for robots to lead divers. One method to improve the robustness of diver following and enable the capability of diver leading is to predict the future motion of a diver. In this paper, we present a vision-based approach for AUVs to predict the future motion trajectory of divers, utilizing the Vanilla-LSTM and Social-LSTM temporal deep neural networks. We also present a dense optical flow-based method to stabilize the input annotations from the dataset and reduce the effects of camera ego-motion. We analyze the results of these models on scenarios ranging from swimming pools to the open ocean and present the models' accuracy at varying prediction lengths. We find that our LSTM models can generate predictions with significant accuracy 1.5 seconds into the future, and that stabilization significantly improves the models' trajectory prediction performance. (An illustrative ego-motion stabilization sketch appears below.)

Item: Trash-ICRA19: A Bounding Box Labeled Dataset of Underwater Trash (2020-07-21)
Fulton, Michael S; Hong, Jungseok; Sattar, Junaed; irvlab@umn.edu; Interactive Robotics and Vision Lab
This data was sourced from the J-EDI dataset of marine debris. The videos that comprise that dataset vary greatly in quality, depth, objects in scenes, and the cameras used. They contain images of many different types of marine debris, captured from real-world environments, providing a variety of objects in different states of decay, occlusion, and overgrowth. Additionally, the clarity of the water and quality of the light vary significantly from video to video. These videos were processed to extract 5,700 images, which comprise this dataset, all labeled with bounding boxes on instances of trash, biological objects such as plants and animals, and ROVs. The eventual goal is to develop efficient and accurate trash detection methods suitable for onboard robot deployment. It is our hope that the release of this dataset will facilitate further research on this challenging problem, bringing the marine robotics community closer to a solution for the urgent problem of autonomous trash detection and removal.
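The GAN report above notes ways of exploiting the learned latent space for controlled image generation and exploration. As a rough, hypothetical illustration of that general idea (not the report's actual code), the sketch below linearly interpolates between two latent vectors before decoding them with a trained generator; the Generator class, checkpoint path, and latent size are assumptions.

```python
# Minimal sketch of latent-space interpolation for a trained GAN generator.
# The generator architecture, checkpoint file, and latent dimension below are
# placeholders; AstroGAN, StereoGAN, and Underwater GAN are not reproduced here.
import torch

LATENT_DIM = 100  # assumed latent vector size

def interpolate_latents(z_start, z_end, steps=8):
    """Return `steps` latent vectors evenly spaced between z_start and z_end."""
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    return (1.0 - alphas) * z_start + alphas * z_end

# Example usage with a generic generator network G mapping latents to images:
#   G = Generator(LATENT_DIM)                      # hypothetical model class
#   G.load_state_dict(torch.load("generator.pt"))  # hypothetical checkpoint
#   G.eval()
#   z0, z1 = torch.randn(1, LATENT_DIM), torch.randn(1, LATENT_DIM)
#   with torch.no_grad():
#       images = G(interpolate_latents(z0, z1))    # one image per step
```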
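The diver-trajectory item above also mentions a dense optical-flow step that reduces the effects of camera ego-motion before trajectories are fed to the LSTM models. The paper's actual pipeline is not reproduced here; the sketch below only illustrates the general technique using OpenCV's Farneback dense optical flow, shifting a tracked point by the median flow (taken as the dominant camera motion) between consecutive frames.

```python
# Rough sketch: compensate a tracked point for camera ego-motion using dense
# optical flow (OpenCV's Farneback method). This approximates the stabilization
# idea described above; it is not the paper's implementation.
import cv2
import numpy as np

def stabilize_point(prev_gray, curr_gray, point_xy):
    """Shift point_xy = (x, y) by the median flow between two grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    median_flow = np.median(flow.reshape(-1, 2), axis=0)  # dominant (camera) motion
    return np.asarray(point_xy, dtype=np.float32) - median_flow

# Usage idea: convert consecutive video frames to grayscale with
# cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), then pass each annotated diver
# position through stabilize_point before building the trajectory sequence.
```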
Item: TrashCan 1.0: An Instance-Segmentation Labeled Dataset of Trash Observations (2020-07-23)
Hong, Jungseok; Fulton, Michael S; Sattar, Junaed; irvlab@umn.edu; Interactive Robotics and Vision Lab
The TrashCan dataset comprises annotated images (currently 7,212) which contain observations of trash, ROVs, and a wide variety of undersea flora and fauna. The annotations take the form of instance segmentation masks: bitmaps marking which pixels in the image belong to each object. The imagery in TrashCan is sourced from the J-EDI (JAMSTEC E-Library of Deep-sea Images) dataset, curated by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). This dataset contains videos from ROVs operated by JAMSTEC since 1982, largely in the Sea of Japan. The dataset has two versions, TrashCan-Material and TrashCan-Instance, corresponding to different object class configurations. The eventual goal is to develop efficient and accurate trash detection methods suitable for onboard robot deployment. While datasets containing bounding-box-level annotations of trash in marine environments have been created previously, TrashCan is, to the best of our knowledge, the first instance-segmentation annotated dataset of underwater trash. It is our hope that the release of this dataset will facilitate further research on this challenging problem, bringing the marine robotics community closer to a solution for the urgent problem of autonomous trash detection and removal. (An illustrative mask-to-bounding-box sketch appears below.)

Item: Video Diver Dataset (VDD-C): 100,000 annotated images of divers underwater (2021-04-19)
de Langis, Karin; Fulton, Michael; Sattar, Junaed; fulto081@umn.edu; Interactive Robotics and Vision Lab
This dataset contains over 100,000 annotated images of divers underwater, gathered from videos of divers in pools and in the Caribbean off the coast of Barbados. It is intended for the development and testing of diver detection algorithms for use in autonomous underwater vehicles (AUVs). Because the images are sourced from videos, they are largely sequential, meaning that temporally aware algorithms (video object detectors) can be trained and tested on this data. Training on this data improved our current diver detection algorithms significantly, because it increased our training set size by 17 times compared to our previous best dataset. It is released freely for anyone who wants to use it.
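TrashCan's annotations are per-object bitmap masks rather than bounding boxes. As a small, format-agnostic illustration (not tied to TrashCan's actual annotation files), the sketch below derives a bounding box and pixel area from a binary mask, a common first step when comparing instance-segmentation labels against box-level datasets such as Trash-ICRA19.

```python
# Minimal sketch: derive a bounding box and pixel area from a binary instance mask.
# No assumption is made about TrashCan's on-disk annotation format; `mask` is any
# 2-D boolean (or 0/1) array with True wherever the object's pixels are.
import numpy as np

def mask_to_box(mask):
    """Return (x_min, y_min, x_max, y_max, area) for a binary mask, or None if empty."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()), int(xs.size)

# Example:
#   mask = np.zeros((480, 640), dtype=bool)
#   mask[100:150, 200:260] = True
#   print(mask_to_box(mask))  # -> (200, 100, 259, 149, 3000)
```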
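Because VDD-C frames are extracted from videos and are largely sequential, a purely random train/validation split can leak near-duplicate frames across splits. The sketch below shows one common way to avoid this by splitting at the video level rather than the frame level; the "<video_id>_<frame>.jpg" file-naming convention is an assumption for illustration, not VDD-C's actual layout.

```python
# Sketch of a video-wise train/validation split that keeps near-duplicate frames
# from the same video out of both splits. The "<video_id>_<frame>.jpg" naming
# scheme is assumed for illustration and may not match the dataset's layout.
import random
from collections import defaultdict

def split_by_video(frame_paths, val_fraction=0.2, seed=0):
    """Group frame paths by source video, then assign whole videos to validation."""
    by_video = defaultdict(list)
    for path in frame_paths:
        video_id = path.rsplit("/", 1)[-1].rsplit("_", 1)[0]  # assumed naming
        by_video[video_id].append(path)
    videos = sorted(by_video)
    random.Random(seed).shuffle(videos)
    n_val = max(1, int(len(videos) * val_fraction))
    val_videos = set(videos[:n_val])
    train = [p for v in videos if v not in val_videos for p in by_video[v]]
    val = [p for v in videos if v in val_videos for p in by_video[v]]
    return train, val
```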