The human visual system can construct a 3D viewpoint of visual objects from their 2D contours. This process is presumably an essential part of the visual system that enables interaction with the environment, e.g., grasping an object. Despite its importance, it is not clearly understood to what extent conscious awareness is involved in constructing 3D information from visual input. Here we investigated whether the 3D viewpoint of an object can be extracted and represented by the visual system when observers are not aware of the object's image. To test whether the viewpoint of an invisible cube image could be processed, we measured how much the initial viewpoint of the Necker cube could be biased after adaptation to an unambiguously rendered version of the Necker cube. We found that a significant viewpoint adaptation aftereffect occurred even when 1) the adapting cube was invisible due to flash suppression, 2) the adaptation and test cubes differed in size and retinal location, and 3) the adapting cube was presented to the eye opposite the test eye. These results suggest that the visual system can construct a representation of 3D viewpoint in the absence of awareness of the visual input. They are also consistent with observations in blindsight patients, in whom appropriate visuo-motor actions can be executed with little dependence on explicit visual awareness.

Our brain resolves the perceptual ambiguity of sensory input. Bistable perception (e.g., the Necker cube and binocular rivalry) is a typical example of such perceptual ambiguity. There are many examples of bistable perception and, despite the apparent similarities and differences among them, it remains unanswered whether they are governed by a single neural mechanism for resolving perceptual ambiguity.
We measured the switching rates of three bistable phenomena across visual fields (left/right) and eyes (left/right) in both right-handed and left-handed subjects. The results showed that the temporal dynamics of bistable perception may be determined by both eye- and hemisphere-specific factors that depend on handedness: the Necker cube showed a right-hemisphere advantage (faster switching) regardless of handedness, and the rotating cylinder showed a right-eye advantage (faster switching) for right-handed subjects. These results suggest that, for different bistable phenomena, competition between alternative perceptual interpretations is likely resolved by different ambiguity-resolving mechanisms situated along the visual hierarchy.

The human visual system is very good at recognizing and categorizing many different material classes (e.g., wood, stone, metal, etc.). Glossiness is known to be one diagnostic visual property for judging material class. Many studies have found that perceived glossiness is influenced by various intrinsic and extrinsic visual factors, for example, micro-level surface geometry and illumination conditions; however, it is unknown to what extent viewing time influences glossiness perception. Here we systematically varied the amount of time available for viewing stimuli and measured how well subjects could discriminate glossy from non-glossy objects. The results showed that perceived glossiness was significantly influenced by viewing time; observers needed at least 300 ms to achieve 75% accuracy in discriminating glossy from non-glossy objects. In further experiments, we used a rotating object and tested whether rotation speed influences glossiness perception. At faster rotation speeds, perceived glossiness became similar for glossy and non-glossy objects.
Our findings suggest that glossiness perception involves computing the spatial relationship between surface shading information and bright spots, and that the efficiency of this computation scales with processing time. A broader implication of this study is that estimation of other material properties (e.g., transparency and translucency) may also depend critically on viewing time.
University of Minnesota Ph.D. dissertation. June 2015. Major: Psychology. Advisors: Daniel Kersten, Sheng He. 1 computer file (PDF); vii, 71 pages.
Cho, Shin Ho.
Spatio-temporal integration of an object's surface information in mid-level vision.
Retrieved from the University of Minnesota Digital Conservancy,
Content distributed via the University of Minnesota's Digital Conservancy may be subject to additional license and use restrictions applied by the depositor.