Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning
Time period coverage
2022
Geographic coverage
Minneapolis, MN
Source information
Title
Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning
Published Date
2024-06-20
Author Contact
Segijn, Claire
segijn@umn.edu
Type
Dataset
Statistical Computing Software Code
Human Subjects Data
Programming Software Code
Abstract
The goal of the current study is to compare different methods for automated object detection (i.e., tag detection, shape detection, matching, and machine learning) with manual coding across different types of objects (i.e., static, dynamic, and dynamic with human interaction) and to describe the advantages and limitations of each method. We tested the methods in an experiment using mobile eye tracking, given the importance of attention in communication science and the challenge this type of data poses for analyzing different objects: visual parameters constantly change within and between participants. Python scripts, processed videos, R scripts, and processed data files are included for each method.
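For illustration, tag detection typically relies on fiducial markers (e.g., ArUco tags) placed on objects and detected frame by frame in the scene-camera video. The following is a minimal sketch using OpenCV's aruco module; the video filename, dictionary choice, and library version are assumptions, not the deposited scripts, whose exact dependencies are documented in the readme.

import cv2

# Minimal fiducial tag detection on a scene-camera video.
# Assumes opencv-contrib-python >= 4.7 (cv2.aruco.ArucoDetector API).
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture("participant_01_raw.mp4")  # hypothetical filename
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _rejected = detector.detectMarkers(frame)
    if ids is not None:
        for tag_id, quad in zip(ids.flatten(), corners):
            # quad has shape (1, 4, 2): the tag's four corners in pixel
            # coordinates, usable as the bounding box of a tagged object.
            print(frame_idx, int(tag_id), quad.reshape(4, 2).tolist())
    frame_idx += 1
cap.release()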
Description
Each zip file contains separate files for each method. Python scripts were run on the raw videos to generate the processed videos and the CSV files. CSV files with the area-of-interest (AOI) and fixation detections are included for each method. R scripts were used to analyze the CSV files and produce the statistics and tables reported in the manuscript. Processed videos are included for each participant.
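To make the AOI/fixation join concrete: a fixation counts as being on an object when its gaze point falls inside that object's detected bounding box on the same video frame. A minimal sketch follows, with hypothetical file and column names (frame, x, y for fixations; frame, x_min, y_min, x_max, y_max for AOIs); the actual CSV schemas of the deposited files are documented in the readme.

import csv

def point_in_box(x, y, box):
    # True if the fixation point (x, y) lies inside the AOI bounding box.
    return (float(box["x_min"]) <= x <= float(box["x_max"])
            and float(box["y_min"]) <= y <= float(box["y_max"]))

# Load per-frame AOI bounding boxes, grouped by frame number.
aois_by_frame = {}
with open("aoi.csv", newline="") as f:
    for row in csv.DictReader(f):
        aois_by_frame.setdefault(row["frame"], []).append(row)

# Count fixations that land inside any AOI on the same frame.
hits = total = 0
with open("fixations.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        x, y = float(row["x"]), float(row["y"])
        if any(point_in_box(x, y, box)
               for box in aois_by_frame.get(row["frame"], [])):
            hits += 1

print(f"{hits}/{total} fixations fell inside an AOI")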
Referenced by
Segijn, C.M., Menheer, P., Lee, G., Kim, E., Olsen, D., and Hofelich Mohr, A. (Submitted). Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning.
Funding information
This work was supported by the Office of the Vice President for Research, University of Minnesota [The Grant-in-Aid of Research, Artistry, and Scholarship].
Suggested citation
Segijn, Claire M.; Menheer, Pernu; Lee, Garim; Kim, Eunah; Olsen, David; Hofelich Mohr, Alicia. (2024). Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning. Retrieved from the Data Repository for the University of Minnesota (DRUM), https://doi.org/10.13020/2SMC-3642.
Files (file name: description, size)
1_ManualCoding_data.csv: Coding file for human raters on each fixation for the subset of videos (791.07 KB)
2_Shape_Detection.zip: Scripts, data, and videos for the shape detection method (1.74 GB)
3_Tag_Detection.zip: Scripts and data for the tag detection method (8.99 KB)
4_ML_Yolo7.zip: Scripts, data, and videos for the machine learning method (1.93 GB)
5_TemplateMapping.zip: Scripts, data, and videos for the template matching method (2.7 GB)
5a_FeatureMatching_Static.zip: Scripts, data, and videos for the static feature matching method (1.73 GB)
5b_FeatureMatching_Dynamic.zip: Scripts, data, and videos for the dynamic feature matching method (693.14 MB)
Readme_Segijn.txt: Readme file (22.97 KB)