Browsing by Subject "underwater"
Now showing 1 - 3 of 3
Item
Trash-ICRA19: A Bounding Box Labeled Dataset of Underwater Trash (2020-07-21)
Fulton, Michael S; Hong, Jungseok; Sattar, Junaed; irvlab@umn.edu; Sattar, Junaed; Interactive Robotics and Vision Lab
This data was sourced from the J-EDI dataset of marine debris. The videos that comprise that dataset vary greatly in quality, depth, objects in the scene, and the cameras used. They contain images of many different types of marine debris, captured in real-world environments, providing a variety of objects in different states of decay, occlusion, and overgrowth. Additionally, the clarity of the water and the quality of the light vary significantly from video to video. These videos were processed to extract 5,700 images, which make up this dataset; all are labeled with bounding boxes around instances of trash, biological objects such as plants and animals, and ROVs. The eventual goal is to develop efficient and accurate trash detection methods suitable for onboard robot deployment. It is our hope that the release of this dataset will facilitate further research on this challenging problem, bringing the marine robotics community closer to a solution for the urgent problem of autonomous trash detection and removal.

Item
TrashCan 1.0: An Instance-Segmentation Labeled Dataset of Trash Observations (2020-07-23)
Hong, Jungseok; Fulton, Michael S; Sattar, Junaed; irvlab@umn.edu; Sattar, Junaed; Interactive Robotics and Vision Lab
The TrashCan dataset comprises annotated images (7,212 images currently) that contain observations of trash, ROVs, and a wide variety of undersea flora and fauna. The annotations in this dataset are instance-segmentation annotations: bitmap masks marking which pixels in the image belong to each object. The imagery in TrashCan is sourced from the J-EDI (JAMSTEC E-Library of Deep-sea Images) dataset, curated by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). That dataset contains videos from ROVs operated by JAMSTEC since 1982, largely in the Sea of Japan. The dataset has two versions, TrashCan-Material and TrashCan-Instance, corresponding to different object-class configurations. The eventual goal is to develop efficient and accurate trash detection methods suitable for onboard robot deployment. While datasets with bounding-box-level annotations of trash in marine environments have been created before, TrashCan is, to the best of our knowledge, the first instance-segmentation annotated dataset of underwater trash. It is our hope that the release of this dataset will facilitate further research on this challenging problem, bringing the marine robotics community closer to a solution for the urgent problem of autonomous trash detection and removal.
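Neither listing specifies the on-disk annotation format, but COCO-style JSON is a common convention for instance-segmentation and detection datasets. The sketch below is a minimal example under that assumption (the file name instance_anns.json is hypothetical): it decodes a TrashCan-style bitmap mask with pycocotools and reduces it to a Trash-ICRA19-style bounding box.

```python
# Minimal sketch: decoding instance-segmentation annotations and deriving
# bounding boxes from the masks. Assumes COCO-style JSON annotations (a common
# convention, not confirmed by the listing); "instance_anns.json" is hypothetical.
import numpy as np
from pycocotools.coco import COCO

coco = COCO("instance_anns.json")                 # hypothetical annotation file
img_id = coco.getImgIds()[0]                      # first image in the dataset
img_info = coco.loadImgs(img_id)[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))

for ann in anns:
    mask = coco.annToMask(ann)                    # H x W binary mask (the "bitmap")
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        continue
    # Tight box recovered from the mask, comparable to the box-level
    # labels described for Trash-ICRA19.
    x_min, x_max = xs.min(), xs.max()
    y_min, y_max = ys.min(), ys.max()
    label = coco.loadCats(ann["category_id"])[0]["name"]
    print(img_info["file_name"], label, (x_min, y_min, x_max, y_max))
```

This is only an illustration of the two annotation styles described above; the released datasets ship their own label files, which should be used directly.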
Item
Video Diver Dataset (VDD-C): 100,000 annotated images of divers underwater (2021-04-19)
de Langis, Karin; Fulton, Michael; Sattar, Junaed; fulto081@umn.edu; Fulton, Michael; Interactive Robotics and Vision Lab
This dataset contains over 100,000 annotated images of divers underwater, gathered from videos of divers in pools and in the Caribbean off the coast of Barbados. It is intended for the development and testing of diver detection algorithms for use in autonomous underwater vehicles (AUVs). Because the images are sourced from videos, they are largely sequential, meaning that temporally aware algorithms (video object detectors) can be trained and tested on this data. Training on this data significantly improved our diver detection algorithms, largely because it increased our training set size to 17 times that of our previous best dataset. The dataset is released free of charge for anyone who wants to use it.
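The VDD-C description notes that the images are largely sequential, which is what makes temporally aware (video) detectors applicable. The sketch below shows one plausible way to regroup individual frames into fixed-length clips for such a detector; the filename pattern (e.g. video01_000123.jpg) and the vddc/images directory are assumptions for illustration, not the dataset's documented layout.

```python
# Minimal sketch: grouping sequential frames into fixed-length clips for a
# temporally aware (video) object detector. The "<video>_<frame>.jpg" naming
# pattern and the image directory are assumptions; adapt to the actual layout.
import re
from collections import defaultdict
from pathlib import Path

CLIP_LEN = 8                                             # frames per training clip
pattern = re.compile(r"(?P<video>.+)_(?P<frame>\d+)\.jpg$")

frames_by_video = defaultdict(list)
for path in sorted(Path("vddc/images").glob("*.jpg")):   # hypothetical folder
    m = pattern.match(path.name)
    if m is None:
        continue
    frames_by_video[m.group("video")].append((int(m.group("frame")), path))

clips = []
for video, frames in frames_by_video.items():
    frames.sort()                                        # restore temporal order
    for i in range(0, len(frames) - CLIP_LEN + 1, CLIP_LEN):
        clips.append([p for _, p in frames[i:i + CLIP_LEN]])

print(f"built {len(clips)} clips of {CLIP_LEN} consecutive frames")
```

Clips of consecutive frames can then be fed to a video object detector, while the same images remain usable one at a time for ordinary single-frame detectors.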