Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning

Time period coverage

2022

Geographic coverage

Minneapolis, MN

Published Date

2024-06-20

Author Contact

Segijn, Claire
segijn@umn.edu

Type

Dataset
Statistical Computing Software Code
Human Subjects Data
Programming Software Code

Abstract

The goal of the current study is to compare different methods for automated object detection (i.e., tag detection, shape detection, matching, and machine learning) with manual coding on different types of objects (i.e., static, dynamic, and dynamic with human interaction) and to describe the advantages and limitations of each method. We tested the methods in an experiment that utilized mobile eye tracking, because of the importance of attention in communication science and because this type of data is challenging to analyze across different objects: visual parameters change constantly within and between participants. Python scripts, processed videos, R scripts, and processed data files are included for each method.
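
As an illustration of one of the automated approaches named above, the short Python sketch below shows how tag detection might locate fiducial markers frame by frame in a scene video using OpenCV's ArUco module. The tag dictionary, output columns, and file layout are assumptions for illustration only; they do not reproduce the scripts distributed with this dataset.

# Illustrative sketch only: detect fiducial tags frame by frame in a scene video
# and write one bounding box per detected tag. The dictionary choice and output
# columns are assumptions, not the scripts shipped with this dataset.
import csv
import cv2  # requires opencv-contrib-python for the aruco module

TAG_DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)  # assumed dictionary
PARAMS = cv2.aruco.DetectorParameters_create()  # pre-4.7 aruco API; newer OpenCV uses cv2.aruco.ArucoDetector

def detect_tags(video_path, out_csv):
    cap = cv2.VideoCapture(video_path)
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "tag_id", "x_min", "y_min", "x_max", "y_max"])
        frame_idx = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            corners, ids, _ = cv2.aruco.detectMarkers(gray, TAG_DICT, parameters=PARAMS)
            if ids is not None:
                for tag_corners, tag_id in zip(corners, ids.flatten()):
                    xs = tag_corners[0][:, 0]
                    ys = tag_corners[0][:, 1]
                    writer.writerow([frame_idx, int(tag_id),
                                     float(xs.min()), float(ys.min()),
                                     float(xs.max()), float(ys.max())])
            frame_idx += 1
    cap.release()

Per-frame bounding boxes recorded this way can then be matched against fixation coordinates, as sketched after the Description below.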

Description

Each zip file contains separate files for each method. Python scripts were run on the raw videos to generate the processed videos and the CSV files. CSV files with the area-of-interest (AOI) and fixation detections are included for each method. R scripts were used to analyze the CSV files and produce the statistics and tables reported in the manuscript. Processed videos are included for each participant.
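
To make the file relationships concrete, the hedged Python sketch below shows one way the AOI and fixation CSVs could be combined to count fixations falling inside each area of interest. The file names and column names are assumptions, not a documented schema; consult the included scripts for the actual processing.

# Illustrative sketch only: join a per-frame AOI CSV with a fixation CSV and count
# fixations landing inside each AOI. The column names (frame, x, y, aoi, x_min, ...)
# are assumptions about the processed files, not a documented schema.
import pandas as pd

def fixations_per_aoi(aoi_csv, fixation_csv):
    aois = pd.read_csv(aoi_csv)            # assumed columns: frame, aoi, x_min, y_min, x_max, y_max
    fixations = pd.read_csv(fixation_csv)  # assumed columns: frame, x, y
    merged = fixations.merge(aois, on="frame", how="inner")
    inside = merged[
        (merged["x"] >= merged["x_min"]) & (merged["x"] <= merged["x_max"]) &
        (merged["y"] >= merged["y_min"]) & (merged["y"] <= merged["y_max"])
    ]
    return inside.groupby("aoi").size().rename("fixation_count")

# Example use (paths are placeholders):
# counts = fixations_per_aoi("tag_detection_aoi.csv", "fixations.csv")
# print(counts)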

Referenced by

Segijn, C.M., Menheer, P., Lee, G., Kim, E., Olsen, D., and Hofelich Mohr, A. (Submitted). Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning.

Funding information

This work was supported by the Office of the Vice President for Research, University of Minnesota [The Grant-in-Aid of Research, Artistry, and Scholarship].

Suggested citation

Segijn, Claire M.; Menheer, Pernu; Lee, Garim; Kim, Eunah; Olsen, David; Hofelich Mohr, Alicia. (2024). Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning. Retrieved from the Data Repository for the University of Minnesota (DRUM), https://doi.org/10.13020/2SMC-3642.
