Browsing by Subject "Low Vision"
Now showing 1 - 4 of 4
Item: Data for Validating a Model of Architectural Hazard Visibility with Low-Vision Observers (2020-07-22)
Liu, Siyun; Thompson, William B.; Liu, Yichen; Shakespeare, Robert A.; Kersten, Daniel J.; Legge, Gordon E.; liux4433@umn.edu
Department of Psychology, University of Minnesota; School of Computing, University of Utah; Department of Theatre, Drama, and Contemporary Dance, Indiana University Bloomington

Pedestrians with low vision are at risk of injury when hazards, such as steps and posts, have low visibility. This study aims to validate the software implementation of a computational model that estimates hazard visibility. The model takes as input a photorealistic 3-D rendering of an architectural space together with the acuity and contrast sensitivity of a low-vision observer, and outputs estimates of the visibility of hazards in the space. Our experiments explored whether the model can predict the likelihood of observers correctly identifying hazards. We tested fourteen normally sighted subjects wearing blur goggles that reduced acuity to 1.2 logMAR or 1.6 logMAR, and ten low-vision subjects with acuities ranging from 0.8 logMAR to 1.6 logMAR. Subjects viewed computer-generated images of a walkway containing five possible targets ahead: a large step up, a large step down, a small step up, a small step down, or a flat continuation. Each subject saw these stimuli under variations of lighting and viewpoint in 250 trials and indicated which of the five targets was present. The model generated a score on each trial that estimated the visibility of the target. If the model is valid, the scores should predict how accurately the subjects identified the targets. We used logistic regression to examine the relationship between the scores and the participants' responses.
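The logistic-regression analysis described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the variable names, the synthetic 250-trial data, and the plain gradient-descent fit are all assumptions made for the example.

```python
import numpy as np

def fit_logistic(scores, correct, lr=0.1, steps=3000):
    """Fit P(correct) = sigmoid(b0 + b1*score) by gradient descent
    on the negative log-likelihood."""
    x = np.asarray(scores, dtype=float)
    y = np.asarray(correct, dtype=float)
    b0 = b1 = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(b0 + b1 * x)))
        b0 -= lr * np.mean(p - y)        # gradient w.r.t. intercept
        b1 -= lr * np.mean((p - y) * x)  # gradient w.r.t. slope
    return b0, b1

# Synthetic stand-in for one subject's 250 trials: higher visibility
# scores should make correct identification more likely.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, 250)
p_true = 1.0 / (1.0 + np.exp(-(-2.0 + 5.0 * scores)))
correct = (rng.uniform(size=250) < p_true).astype(float)
b0, b1 = fit_logistic(scores, correct)
# A positive fitted slope b1 indicates that the model's visibility
# scores predict identification accuracy, as the validation requires.
```

A significant positive slope is the pattern the study reports for most subjects.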
For twelve of the fourteen normally sighted subjects with artificial acuity reduction and all ten low-vision subjects, there was a significant relationship between the scores and the participants' probability of correct identification. These experiments provide evidence for the validity of a computational model that predicts the visibility of architectural hazards. The software implementation of the model may be useful to architects for assessing the visibility of hazards in their designs, thereby enhancing the accessibility of spaces for people with low vision.

Item: Indoor Spatial Updating with Impaired Vision-Human Performance Data for 32 Normally Sighted Subjects, 16 Low Vision Subjects and 16 Blind Subjects (2016-09-21)
Legge, Gordon E; Granquist, Christina; Baek, Yihwa; Gage, Rachel; legge@umn.edu

Spatial updating is the ability to keep track of position and orientation while moving through an environment. We asked how normally sighted and visually impaired subjects compare in spatial updating and in estimating room dimensions. Groups of 32 normally sighted, 16 low-vision, and 16 blind subjects estimated the dimensions of six rectangular rooms. Updating was assessed by guiding the subjects along three-segment paths in the rooms. At the end of each path, they estimated the distance and direction to the starting location and to a designated target (a bean bag dropped at the first segment of their path). Spatial updating was tested in five conditions ranging from free viewing to full auditory and visual deprivation (see documentation for details). The normal and low-vision groups did not differ in their accuracy for judging room dimensions. Correlations between estimated size and physical size were high. Accuracy of low-vision performance was not correlated with acuity, contrast sensitivity, or field status. Accuracy was lower for the blind subjects.
The three groups were very similar in spatial-updating performance and exhibited only weak dependence on the nature of the viewing conditions. In conclusion, people with a wide range of low-vision conditions are able to judge room dimensions as accurately as people with normal vision. Blind subjects have difficulty judging the dimensions of quiet rooms, but some information is available from echolocation. Vision status has little impact on performance in simple spatial updating; proprioceptive and vestibular cues are sufficient.

Item: Measuring the Detection of Objects under Simulated Visual Impairment in 3D Rendered Scenes (2018-09)
Carpenter, Brent

A space is visually accessible when a person can use their vision to travel through the space and to pursue activities intended to be performed with vision within that space. Previous work has addressed the difficulty of evaluating the detection of objects in real spaces by observers with simulated visual impairments. This research addresses the viability of using physically realistic 3D renderings of public spaces under artificially induced blur in place of more resource-intensive testing in the real spaces themselves while participants wear blurring goggles. In addition, this research illustrates the efficacy of a model that predicts the portions of public scenes that an observer with simulated visual impairment would fail to detect, by comparing the predicted misses of scene geometry to actual geometry-detection failures by observers with simulated impairments. Lastly, this work addresses how well simulated low-vision observers can categorize the contents of scenes. Observer categorization rate is compared to several image metrics, and the results indicate that average classification rate across low-vision simulations can be predicted well from the averages of several image metrics within each acuity block.
Chapter 1 of this dissertation is a literature review providing the background for the state of the art of this research, together with an overview of the research itself. Chapter 2 describes an experiment in which object visibility was tested in a virtual environment, with the goal of validating the use of 3D renderings as substitutive stimuli by comparing performance between the real and digital versions of the same task (Bochsler et al., 2013). The objects were ramps, steps, and flat surfaces. Participants were normally sighted young adults who viewed either blurred or unblurred images. Images were blurred using a Gaussian filter calibrated with a Sloan chart for the viewing distance of the experiment. Patterns of object identifications and confusions between the digital and physical versions of the task were highly similar, so 3D renderings of public spaces are very likely effective substitutes for real spaces in psychophysical object-detection tasks. Avenues for parametric manipulations that might strengthen this argument are also explored. Chapter 3 extends the use of physics-based 3D renderings to simulations of visual impairment (Thompson et al., 2017; https://github.com/visual-accessibility/deva-filter). A model of visual impairment was used to render public spaces under increasing levels of impairment. Participants were then asked to draw the edges and contours of objects in these simulations under several separate task conditions: draw the edges of doors, stairs, obstacles, or floor-wall-ceiling connections. As the simulated impairment deepened, observers struggled to find the correct object contours in each task, and observer data more often closely matched the predictive model: a system that puts a premium on sudden changes in luminance contrast.
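The Gaussian-blur manipulation used to simulate reduced acuity can be illustrated with a short sketch. This is not the dissertation's calibration procedure: the kernel truncation, padding choice, and the sigma value below are illustrative assumptions, and the mapping from sigma to a particular logMAR acuity would depend on viewing distance and display geometry.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1-D Gaussian kernel truncated at 3 sigma, normalized to sum to 1."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(image, sigma):
    """Separable Gaussian blur with reflective edge padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(image, r, mode="reflect")
    # Horizontal pass, then vertical pass.
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

# Demo: a sharp vertical luminance edge, like a doorway against a wall.
scene = np.zeros((32, 32))
scene[:, 16:] = 1.0
blurred = blur(scene, 2.0)  # larger sigma ~ lower simulated acuity
```

Blurring softens the step edge toward mid-gray, which is exactly the loss of luminance-contrast information the contour-drawing tasks probe.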
In the absence of context and meaning, simulated low-vision observers tend to make false-positive geometric edge identifications when a scene has non-accidental incidences of strong luminance-contrast edges, such as bars of light and shadows. The predictive power and utility of the model for simulating visual impairment are also discussed. Chapter 4 contains a pilot experiment that seeks to understand how well simulated low-vision observers can classify the category of blurry scenes shown to them. Observers performed a three-alternative forced-choice task in which they had to identify which of three scenes an image showed, and observers' classification accuracy was tracked across acuity-level simulations. Several image metrics were calculated and regressed against classification accuracy for either single scenes or per acuity block. Average classification accuracy within an acuity block could be predicted from any one of several average image metrics of the scenes within that block when regressed across acuity levels.

Item: Navigating through buildings with impaired vision: challenges and solutions (2009-06)
Kalia, Amy Ashwin

Navigation is the ability to plan and follow routes between locations, often with an internal or external map of the environment. Vision is an important way to access environmental information for navigation. Consequently, independent navigation is a significant challenge for individuals with visual impairment. This thesis describes three studies that investigate how real or simulated visual impairment affects the ability to navigate inside buildings. Furthermore, these experiments explore methods for compensating for the loss of visual information, either by using other senses or by using assistive technology.
The visual information in an environment that is useful for navigation can be categorized into two types: geometric (visual information conveying layout geometry, such as hallways and intersections) and non-geometric (features other than geometry, such as lighting, texture, and object landmarks). The first experiment (Chapter 2) tested the effects of visual impairment and age on the use of these two types of visual information for navigation. In the second experiment (Chapter 3), visually impaired individuals were tested on their ability to follow verbal route instructions provided by an indoor navigation technology. The instructions described distances in one of three ways: feet, number of steps, or travel time in seconds. This study compared route-following performance across the three distance modes. The third experiment (Chapter 4) investigated the integration of visual and walking information for localization in a hallway. We predicted that humans integrate information (1) only when they perceive themselves to be near a landmark after walking (congruency), and (2) by weighting each information source according to its reliability. Normally sighted participants judged their location in a hallway after viewing a target and then walking blindfolded either to the visual target or to a slightly different location. Participants viewed targets in two conditions that manipulated visual reliability: normal viewing and blurry viewing. This experiment tested and confirmed a statistical model of human perception in a novel domain. Together, these three studies enhance our understanding of the effects of visual impairment on navigation ability. They also suggest that information provided by other senses or by assistive technology can improve navigation with low vision.
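The reliability-weighted integration predicted in the third experiment is the standard maximum-likelihood cue-combination rule: each cue is weighted by its inverse variance, so degrading vision (e.g., with blur) shifts weight toward the walking cue. A minimal sketch, with all numeric values chosen purely for illustration:

```python
def integrate_cues(x_vision, var_vision, x_walk, var_walk):
    """Maximum-likelihood combination of two noisy position estimates:
    each cue is weighted by its reliability (inverse variance)."""
    w_v = (1.0 / var_vision) / (1.0 / var_vision + 1.0 / var_walk)
    w_w = 1.0 - w_v
    estimate = w_v * x_vision + w_w * x_walk
    # The combined estimate is more reliable than either cue alone.
    combined_var = (var_vision * var_walk) / (var_vision + var_walk)
    return estimate, combined_var

# Vision says the target is at 10 m; walking says 12 m.
est_clear, var_clear = integrate_cues(10.0, 1.0, 12.0, 1.0)  # equal reliability
est_blur, var_blur = integrate_cues(10.0, 4.0, 12.0, 1.0)    # blurred vision
# est_clear → 11.0 (cues weighted equally)
# est_blur  → 11.6 (pulled toward the more reliable walking estimate)
```

This captures the qualitative prediction tested with the normal-viewing versus blurry-viewing conditions: lowering visual reliability should pull location judgments toward the walked distance.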