Browsing by Subject "Visual Perception"
Now showing 1 - 3 of 3
Item
Encoding of biologically significant information in the human brain: face and biological motion perception (2009-05)
Jiang, Yi
Inherently social, humans communicate identity, emotion, and intention mainly through visual signals: faces and body motions. We process and recognize such biologically salient cues so efficiently that it seems effortless. This dissertation presents four studies that employ psychophysical and brain imaging techniques to probe the neural encoding of faces and biological motion. Study 1 demonstrates behaviorally that a substantial amount of information, including face orientation, can be processed in the absence of observers' conscious awareness. Studies 2 and 3 further examine the cortical and subcortical processing of facial information that takes place at the subconscious level. By rendering face images invisible through interocular suppression, distinct patterns of responses are revealed in the fusiform face area (FFA), the superior temporal sulcus (STS), and the amygdala, with the STS and the amygdala being selectively sensitive to facial-expression information. Study 4 focuses on the processing of local biological motion signals. A series of experiments shows that such signals are processed automatically in the visual system, independent of global form and global pattern of motion, and that dorsal occipito-parietal areas are the prime neural candidate for the "life motion detector". Together, these studies indicate that the human visual system is sensitive to biologically significant information, which can be processed without awareness. The findings add to our understanding of the brain mechanisms underlying humans' remarkably efficient processing of face and biological motion information.

Item
Machine Vision for Improved Human-Robot Cooperation in Adverse Underwater Conditions (2021-05)
Islam, Md Jahidul
Visually guided underwater robots are deployed alongside human divers for cooperative exploration, inspection, and monitoring tasks in numerous shallow-water and coastal-water applications. The most essential capability of such companion robots is to visually interpret their surroundings and assist the divers during various stages of an underwater mission. Despite recent technological advancements, existing systems and solutions for real-time visual perception are greatly affected by marine artifacts such as poor visibility, lighting variation, and the scarcity of salient features. The difficulties are exacerbated by a host of non-linear image distortions caused by the characteristics of underwater light propagation (e.g., wavelength-dependent attenuation, absorption, and scattering). In this dissertation, we present a set of novel and improved visual perception solutions to address these challenges for effective underwater human-robot cooperation. The research outcomes entail the design and efficient implementation of the underlying vision- and learning-based algorithms, with extensive field-experimental validation and real-time feasibility analyses for single-board deployments. The dissertation is organized into three parts. The first part focuses on developing practical solutions for autonomous underwater vehicles (AUVs) to accompany human divers during an underwater mission. These include robust vision-based modules that enable AUVs to understand human swimming motion, hand gestures, and body pose in order to follow and interact with the divers while maintaining smooth spatiotemporal coordination.
A series of closed-water and open-water field experiments demonstrates the utility and effectiveness of the proposed perception algorithms for underwater human-robot cooperation. We also identify and quantify their performance variability over a diverse set of operating constraints in adverse visual conditions. The second part of the dissertation is devoted to designing efficient techniques that overcome the effects of poor visibility and optical distortions in underwater imagery by restoring its perceptual and statistical qualities. We further demonstrate the practical feasibility of these techniques as pre-processors in the autonomy pipeline of visually guided AUVs. Finally, the third part develops methodologies for high-level decision-making, such as modeling spatial attention for fast visual search and learning to identify when image enhancement and super-resolution modules are necessary for detailed perception. We demonstrate that these methodologies facilitate up to 45% faster processing of the on-board visual perception modules and enable AUVs to make intelligent navigational and operational decisions, particularly in autonomous exploratory tasks. In summary, this dissertation delineates our attempts to address the environmental and operational challenges of real-time machine vision for underwater human-robot cooperation. Aiming at a variety of important applications, we develop robust and efficient modules that allow AUVs to 'follow and interact' with companion divers by accurately perceiving their surroundings while relying on noisy visual sensing alone. Moreover, our proposed perception solutions enable visually guided robots to 'see better' in noisy conditions and 'do better' with limited computational resources and real-time constraints. In addition to advancing the state of the art, the proposed methodologies and systems take us one step closer to bridging the gap between theory and practice for improved human-robot cooperation in the wild.

Item
Measuring the Detection of Objects under Simulated Visual Impairment in 3D Rendered Scenes (2018-09)
Carpenter, Brent
A space is visually accessible when a person can use their vision to travel through it and to pursue activities within it that are intended to be performed with vision. Previous work has addressed the difficulty of evaluating the detection of objects in real spaces by observers with simulated visual impairments. The current research addresses the viability of using physically realistic 3D renderings of public spaces under artificially induced blur in place of more resource-intensive testing in the real spaces themselves, in which participants wear blurring goggles. In addition, this research illustrates the efficacy of a model that predicts which portions of public scenes an observer with simulated visual impairment would fail to detect, by comparing the predicted missed scene geometry to the actual geometry-detection failures of observers under simulated impairment. Lastly, this work addresses how well observers with simulated low vision can categorize the contents of scenes. Observer categorization rates are compared to several image metrics, and the results indicate that the average classification rate across low-vision simulations can be predicted very well from the averages of several different image metrics within each acuity block.
Chapter 1 of this dissertation is a literature review that provides the background and state of the art for this research, together with an overview of the research itself.

In Chapter 2, an experiment is described in which object visibility was tested in a virtual environment, with the goal of validating the use of 3D renderings as substitutive stimuli by comparing performance between the real and digital versions of the same task (Bochsler et al., 2013). The objects were ramps, steps, and flat surfaces. Participants were normally sighted young adults who viewed either blurred or unblurred images. Images were blurred using a Gaussian filter calibrated, via a Sloan chart, to the viewing distance of the experiment. Patterns of object identifications and confusions between the digital and physical versions of the task were highly similar, so it is very likely that 3D renderings of public spaces, when used in psychophysical tasks, are effective substitutive stimuli for real spaces in object-detection tasks. Avenues for parametric manipulations that might strengthen this argument are also explored.

Chapter 3 extends the use of physics-based 3D renderings to simulations of visual impairment (Thompson et al., 2017; https://github.com/visual-accessibility/deva-filter). A model of visual impairment was applied to 3D renderings of public spaces to simulate increasing levels of impairment. Participants were then asked to draw the edges and contours of objects in these simulations under several separate task conditions: drawing the edges of doors, stairs, obstacles, or floor-wall-ceiling connections. As the simulated impairment deepened, observers struggled to find the correct object contours in each of the tasks. Also, as the simulated impairment deepened, observer data often more closely matched the predictive model, a system that puts a premium on sudden changes in luminance contrast. In the absence of context and meaning, observers with simulated low vision tend to make false-positive edge identifications when a scene has non-accidental incidences of strong luminance-contrast edges, such as bars of light and shadows. The predictive power and utility of the model for simulating visual impairment are also discussed.

Chapter 4 contains a pilot experiment that examines how well observers with simulated low vision can classify the category of blurred scenes shown to them. Observers performed a three-alternative forced-choice task in which they had to identify which of three scenes an image depicted, and their classification accuracy was tracked across simulated acuity levels. Several image metrics were calculated and regressed against classification accuracy, either per scene or per acuity block. It was found that average classification accuracy within an acuity block could be predicted from any one of several average image metrics of the scenes within that block when regressed across acuity levels.
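To illustrate the general kind of analysis described in Chapters 2 and 4, the sketch below blurs an image with a Gaussian filter to approximate reduced acuity, computes a simple image metric (RMS contrast), and fits a linear regression of classification accuracy against that metric. This is a minimal, hypothetical sketch: the blur levels, the choice of metric, and the accuracy values are placeholders and are not taken from the dissertation or from the deva-filter.

```python
# Illustrative sketch only: simulate acuity loss with Gaussian blur, compute
# a simple image metric, and regress hypothetical accuracy against it.
import numpy as np
from scipy.ndimage import gaussian_filter


def simulate_acuity_loss(image: np.ndarray, sigma: float) -> np.ndarray:
    """Approximate reduced acuity by low-pass filtering the image."""
    return gaussian_filter(image, sigma=sigma)


def rms_contrast(image: np.ndarray) -> float:
    """Root-mean-square contrast of a grayscale image with values in [0, 1]."""
    return float(np.std(image))


rng = np.random.default_rng(0)
scene = rng.random((256, 256))          # stand-in for a rendered scene
sigmas = [1.0, 2.0, 4.0, 8.0]           # increasing simulated impairment (placeholder values)
accuracies = [0.95, 0.85, 0.70, 0.50]   # hypothetical mean classification accuracy per level

# One metric value per simulated acuity level.
metrics = [rms_contrast(simulate_acuity_loss(scene, s)) for s in sigmas]

# Ordinary least-squares fit: accuracy as a linear function of the metric.
slope, intercept = np.polyfit(metrics, accuracies, deg=1)
print(f"accuracy ~= {slope:.2f} * contrast + {intercept:.2f}")
```

In practice one would average each metric over the scenes within an acuity block before regressing, mirroring the per-block analysis described above.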