Title: Data supporting: Automated Object Detection in Mobile Eye-Tracking Research: Comparing Manual Coding with Tag Detection, Shape Detection, Matching, and Machine Learning
Authors: Segijn, Claire M.; Menheer, Pernu; Lee, Garim; Kim, Eunah; Olsen, David; Hofelich Mohr, Alicia
Date: 2024-06-20
Type: Dataset
Handle: https://hdl.handle.net/11299/263979
DOI: https://doi.org/10.13020/2SMC-3642
Rights: Attribution-NonCommercial 4.0 International (http://creativecommons.org/licenses/by-nc/4.0/)
Subjects: Eyetracking; Machine Learning; Advertising; Template Matching

Abstract: The goal of the current study is to compare different methods for automated object detection (i.e., tag detection, shape detection, matching, and machine learning) with manual coding on different types of objects (i.e., static, dynamic, and dynamic with human interaction) and to describe the advantages and limitations of each method. We tested the methods in an experiment that uses mobile eye tracking because of the importance of attention in communication science and because of the challenges this type of data poses for analyzing different objects: visual parameters are constantly changing within and between participants. Python scripts, processed videos, R scripts, and processed data files are included for each method.

Description: Each zip archive contains separate files for each method. Python scripts were applied to the raw videos to generate the processed videos and the CSV files. CSV files for the areas of interest (AOIs) and the fixation detections are included for each method. R scripts were used to analyze the CSV files and to produce the statistics and tables reported in the manuscript. Processed videos are included for each participant.
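To illustrate how the AOI and fixation-detection CSV files described above can be combined, here is a minimal Python sketch that maps each fixation to the AOI it falls inside. It is not taken from the dataset's scripts: the column names (`name`, `x`, `y`, `w`, `h`) and the rectangular-AOI assumption are illustrative only.

```python
import csv
import io

def fixation_hits(aoi_rows, fixation_rows):
    """Return, for each fixation, the name of the AOI it falls inside, or None.

    aoi_rows: dicts with hypothetical columns name, x, y, w, h (rectangle).
    fixation_rows: dicts with hypothetical columns x, y (gaze in video pixels).
    """
    hits = []
    for fix in fixation_rows:
        fx, fy = float(fix["x"]), float(fix["y"])
        hit = None
        for aoi in aoi_rows:
            x, y, w, h = (float(aoi[k]) for k in ("x", "y", "w", "h"))
            # Point-in-rectangle test against the AOI bounding box.
            if x <= fx <= x + w and y <= fy <= y + h:
                hit = aoi["name"]
                break
        hits.append(hit)
    return hits

# Toy inline CSV data standing in for the per-method AOI and fixation files.
aoi_csv = "name,x,y,w,h\nbillboard,100,50,200,80\n"
fix_csv = "x,y\n150,60\n400,300\n"
aois = list(csv.DictReader(io.StringIO(aoi_csv)))
fixes = list(csv.DictReader(io.StringIO(fix_csv)))
print(fixation_hits(aois, fixes))  # ['billboard', None]
```

In the dataset itself the AOI coordinates vary per frame (the objects are static, dynamic, or dynamic with human interaction), so a real analysis would join fixations and AOIs on a frame or timestamp key before applying a test like this.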