Humans receive an enormous amount of information, or stimulus, from the environment. How we interpret, react, or behave in a given situation depends on the useful information that the brain refines from this environmental stimulus. Yet both the refining process and its transformation into output behavior remain hidden or only partially understood. Researchers in many fields are interested in understanding this flow of information, since such knowledge can be applied broadly to improve research and applications. For example, computers may compute more efficiently by exploiting knowledge of which elements of the incoming information are critical. Likewise, a robot may be designed to simulate this information flow and produce a corresponding reaction by processing incoming scenes from a webcam or environmental changes from other sensors. This dissertation focuses on the flow between visual stimulus and interpretation, drawing on observable eye movement data. We aim to simulate the overall visual information flow as a user model and to examine how the user model can connect with machine learning and human-computer interaction. The user model provides insights for designing intelligent interfaces that filter and collect useful information, and it can also improve machine learning methods. We discuss this visual information flow and organize our research projects into three themes: intelligent interfaces for understanding user modeling, model-driven machine learning, and applications of the learned user model. Based on human eye movements on images, we introduce the concept of Interest-based Regions: regions that receive more attention or interest while an image is viewed.
This novel representation serves as the critical information (hidden states) in the user model between input information and output behavior. Using this representation, we demonstrate how to collect further interpretation, how to connect with updates in viewing behavior, and how to apply it in real-life applications such as a non-invasive aid for diagnosing psychological symptoms.
University of Minnesota Ph.D. dissertation. June 2020. Major: Computer Science. Advisor: Paul Schrater. 1 computer file (PDF); viii, 119 pages.
Characterizing Human Looking Behavior Considering Human Interest and Visual Attention with Contributions to Cognitive AI and HCI.
Retrieved from the University of Minnesota Digital Conservancy.