Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning




Published Date

2024-06-20

Author Contact

Segijn, Claire
segijn@umn.edu

Type

Dataset
Statistical Computing Software Code
Human Subjects Data
Programming Software Code

Abstract

The goal of the current study is to compare different methods for automated object detection (i.e., tag detection, shape detection, matching, and machine learning) with manual coding on different types of objects (i.e., static, dynamic, and dynamic with human interaction), and to describe the advantages and limitations of each method. We tested the methods in an experiment that utilizes mobile eye tracking because of the importance of attention in communication science and the challenges this type of data poses for analyzing different objects, given that visual parameters change constantly within and between participants. Python scripts, processed videos, R scripts, and processed data files are included for each method.

Description

Each zip archive contains the files for one method. Python scripts were run on the raw videos to generate the processed videos and the CSV files. For each method, CSV files of the areas of interest (AOI) and fixation detections are included. R scripts were used to analyze the CSV files and produce the statistics and tables reported in the manuscript. Processed videos are included for each participant.
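As a minimal sketch of working with the fixation-detection CSV files described above, the snippet below counts detected fixations per AOI. The column names (`participant`, `aoi`, `fixation_detected`) are hypothetical placeholders; the actual headers for each method's files are documented in Readme_Segijn.txt.

```python
# Sketch: summarize a fixation-detection CSV per area of interest (AOI).
# Column names are assumed, not taken from the dataset itself.
import csv
import io
from collections import Counter

# Small in-memory stand-in for one of the per-method CSV files.
sample = io.StringIO(
    "participant,aoi,fixation_detected\n"
    "P01,static_object,1\n"
    "P01,dynamic_object,0\n"
    "P02,static_object,1\n"
)

def count_fixations_per_aoi(csv_file):
    """Count rows flagged as detected fixations, grouped by AOI."""
    counts = Counter()
    for row in csv.DictReader(csv_file):
        if row["fixation_detected"] == "1":
            counts[row["aoi"]] += 1
    return dict(counts)

print(count_fixations_per_aoi(sample))  # {'static_object': 2}
```

The same pattern applies to any of the per-method CSV files once the real column names from the ReadMe are substituted.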


Funding information

This work was supported by the Office of the Vice President for Research, University of Minnesota [The Grant-in-Aid of Research, Artistry, and Scholarship].


Suggested citation

Segijn, Claire M.; Menheer, Pernu; Lee, Garim; Kim, Eunah; Olsen, David; Hofelich Mohr, Alicia. (2024). Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning. Retrieved from the Data Repository for the University of Minnesota (DRUM).
Files

1_ManualCoding_data.csv (791.07 KB): Coding file for human raters on each fixation for the subset of videos
2_Shape_Detection.zip (1.84 GB): Scripts, data, and videos for the shape detection method
3_Tag_Detection.zip (8.99 KB): Scripts and data for the tag detection method
4_ML_Yolo7.zip (2.02 GB): Scripts, data, and videos for the machine learning method
5_TemplateMapping.zip (2.71 GB): Scripts, data, and videos for the template matching method
5a_FeatureMatching_Static.zip (1.81 GB): Scripts, data, and videos for the static feature matching method
5b_FeatureMatching_Dynamic.zip (696.16 MB): Scripts, data, and videos for the dynamic feature matching method
Readme_Segijn.txt (22.75 KB): ReadMe file
