Browsing by Author "Legge, Gordon E."
Now showing 1 - 3 of 3
Data for Validating a Model of Architectural Hazard Visibility with Low-Vision Observers (2020-07-22)
Authors: Liu, Siyun; Thompson, William B.; Liu, Yichen; Shakespeare, Robert A.; Kersten, Daniel J.; Legge, Gordon E.
Contact: Liu, Siyun (liux4433@umn.edu)
Affiliations: Department of Psychology, University of Minnesota; School of Computing, University of Utah; Department of Theatre, Drama, and Contemporary Dance, Indiana University Bloomington

Pedestrians with low vision are at risk of injury when hazards, such as steps and posts, have low visibility. This study aims to validate the software implementation of a computational model that estimates hazard visibility. The model takes as input a photorealistic 3-D rendering of an architectural space, together with the acuity and contrast sensitivity of a low-vision observer, and outputs estimates of the visibility of hazards in the space. Our experiments explored whether the model can predict the likelihood of observers correctly identifying hazards. We tested fourteen normally sighted subjects wearing blur goggles that reduced acuity to 1.2 logMAR or 1.6 logMAR, and ten low-vision subjects with acuities ranging from 0.8 logMAR to 1.6 logMAR. Subjects viewed computer-generated images of a walkway containing five possible targets ahead: a large step up, a large step down, a small step up, a small step down, or a flat continuation. Each subject saw these stimuli with variations of lighting and viewpoint in 250 trials and indicated which of the five targets was present. The model generated a score on each trial that estimated the visibility of the target. If the model is valid, these scores should predict how accurately the subjects identified the targets. We used logistic regression to examine the relationship between the scores and the participants' responses. For twelve of the fourteen normally sighted subjects with artificial acuity reduction and for all ten low-vision subjects, there was a significant relationship between the scores and the probability of correct identification. These experiments provide evidence for the validity of a computational model that predicts the visibility of architectural hazards. The software implementation of the model may be useful for architects to assess the visibility of hazards in their designs, thereby enhancing the accessibility of spaces for people with low vision.
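As a rough illustration of the per-subject analysis described in the abstract above, the sketch below fits a logistic regression of response correctness on the model's per-trial visibility score. The file name and column names ("subject", "score", "correct") are hypothetical and are not taken from the released dataset; the actual data layout may differ.

```python
# Minimal sketch (assumed layout): one row per trial with a subject ID, the
# model's visibility score, and whether the target was identified correctly.
import pandas as pd
import statsmodels.api as sm

trials = pd.read_csv("trials.csv")  # hypothetical file name

for subject_id, sub in trials.groupby("subject"):
    X = sm.add_constant(sub["score"])      # intercept + visibility score
    fit = sm.Logit(sub["correct"], X).fit(disp=False)  # "correct" coded 0/1
    # A significant positive slope means higher visibility scores predict a
    # higher probability of correct identification for this subject.
    print(subject_id,
          f"slope={fit.params['score']:.3f}",
          f"p={fit.pvalues['score']:.4f}")
```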
MNREAD Baseline Data for Normal Vision Across the Lifespan (2017-08-30)
Authors: Calabrèse, Aurélie; Cheong, Allen M. Y.; Cheung, Sing-Hang; He, Yingchen; Kwon, MiYoung; Mansfield, J. Stephen; Subramanian, Ahalya; Yu, Deyue; Legge, Gordon E.
Contact: Calabrèse, Aurélie (acalabre@umn.edu)
Affiliation: Minnesota Laboratory for Low-Vision Research, Psychology Department, University of Minnesota

The continuous-text reading-acuity test MNREAD is designed to measure the reading performance of people with normal and low vision. The test is used to estimate maximum reading speed (MRS), critical print size (CPS), reading acuity (RA), and the reading accessibility index (ACC). The present dataset contains MNREAD data from 645 normally sighted participants ranging in age from 8 to 81 years. The data were collected in several studies conducted by different testers and at different sites in our research program, enabling evaluation of the robustness of the test. The data can be used to: 1) study the age dependence of reading performance in normally sighted individuals; and 2) provide baseline data for MNREAD testing.

Reconciling Print-Size and Display-Size Constraints on Reading (Minnesota Lab for Low-Vision Research, 2020) (2020-03-20)
Authors: Atilgan, Nilsu; Xiong, Yingzi; Legge, Gordon E.
Contact: Atilgan, Nilsu (atilg001@umn.edu)
Affiliation: University of Minnesota - MN Lab for Low Vision Research

The data include reading performance for both normally sighted subjects (Times and Courier groups) and low-vision subjects. The main dependent variable is reading speed, measured as the number of characters read per minute. Two independent variables and their levels are also provided: display format (laptop, tablet, phone) and, for normally sighted participants, blur condition (normal viewing versus viewing under artificial blur through goggles).
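To make the dependent variable concrete, here is a small sketch of how reading speed in characters per minute can be computed from a timed trial and, if desired, converted to words per minute using the common 6-characters-per-standard-word convention. The function names and the conversion step are illustrative assumptions, not specifications from the dataset.

```python
# Minimal sketch: reading speed as characters read per minute, with an
# optional conversion to standard-length words per minute (assumes the
# common 6-characters-per-word convention; not specified by the dataset).
def reading_speed_cpm(chars_read: int, seconds: float) -> float:
    """Characters read per minute."""
    return chars_read * 60.0 / seconds

def cpm_to_wpm(cpm: float, chars_per_word: float = 6.0) -> float:
    """Convert characters per minute to standard-length words per minute."""
    return cpm / chars_per_word

# Example: 540 characters read in 45 seconds -> 720 cpm -> 120 wpm
speed = reading_speed_cpm(540, 45.0)
print(speed, cpm_to_wpm(speed))
```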