Browsing by Subject "Psychophysics"
Now showing 1 - 5 of 5
Item: A Computational Framework for Predicting Appearance Differences (2018-07). Ludwig, Michael.
Quantifying the perceived difference in appearance between two surfaces is an important industrial problem that is currently solved through visual inspection. The field of design has long employed trained experts to manually compare appearances, whether to verify manufacturing quality or to match design intent. More recently, the advancement of 3D printing has been held back by an inability to evaluate appearance tolerances. Much as color science greatly accelerated the design of conventional printers, a computational solution to the appearance-difference problem would aid the development of advanced 3D printing technology. Past research has produced analytical expressions for restricted versions of the problem by focusing on a single attribute, such as color, or by requiring homogeneous materials. The prediction of spatially-varying appearance differences is a far more difficult problem because the domain is highly multi-dimensional. This dissertation develops a computational framework for solving the general form of the appearance comparison problem. To begin, a method-of-adjustment task is used to measure the effects of surface structure on the overall perceived brightness of a material. In the case considered, the spatial variations of an appearance are limited to shading and highlights produced by height changes across its surface. All stimuli are rendered using computer graphics techniques so they can be viewed virtually, increasing the number of appearances evaluated per subject. Results suggest that an image-space model of brightness is an accurate approximation, justifying the later image-based models that address more general appearance evaluations. Next, a visual search study is performed to measure the perceived uniformity of 3D printed materials.
This study creates a large dataset of realistic materials by using state-of-the-art material scanners to digitize numerous tiles 3D printed with spatially-varying patterns in height, color, and shininess. After scanning, additional appearances are created by modifying the reflectance descriptions of the tiles to produce variations that cannot yet be physically manufactured with the same level of control. The visual search task is shown to efficiently measure changes in appearance uniformity resulting from these modifications. A follow-up experiment augments the collected uniformity measurements from the visual search study. A forced-choice task measures the rate of change between two appearances by interpolating along curves defined in the high-dimensional appearance space. Repeated comparisons are controlled by a Bayesian process to efficiently find the just noticeable difference thresholds between appearances. Gradients reconstructed from the measured thresholds are used to estimate perceived distances between very similar appearances, something hard to measure directly with human subjects. A neural network model is then trained to accurately predict uniformity from features extracted from the non-uniform appearance and target uniform appearance images. Finally, the computational framework for predicting general appearance differences is fully developed. Relying on the previously generated 3D printed appearances, a crowd-sourced ranking task is used to simultaneously measure the relative similarities of multiple stimuli against a reference appearance. Crowd-sourcing the perceptual data collection allows the many complex interactions between bumpiness, color, glossiness, and pattern to be evaluated efficiently. Generalized non-metric multidimensional scaling is used to estimate a metric embedding that respects the collected appearance rankings.
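The Bayesian adaptive procedure described above can be sketched in a few lines. The logistic psychometric function, its slope, the grid of candidate thresholds, and the simulated observer below are all illustrative assumptions for a minimal sketch, not the dissertation's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Candidate JND thresholds (arbitrary interpolation-parameter units) and a
# logistic psychometric function: P(correct) rises from chance (0.5 in a
# two-alternative forced choice) toward 1 as the step size exceeds the
# observer's threshold.
thresholds = np.linspace(0.01, 1.0, 200)

def p_correct(step, threshold, slope=10.0):
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (step - threshold)))

true_threshold = 0.3  # hypothetical observer, used only to simulate responses
posterior = np.ones_like(thresholds) / thresholds.size  # flat prior

for _ in range(60):
    # Adaptive placement rule: test at the current posterior mean.
    step = float(np.sum(thresholds * posterior))
    correct = rng.random() < p_correct(step, true_threshold)
    # Bayes update: multiply the prior by the likelihood of the response.
    like = p_correct(step, thresholds)
    posterior *= like if correct else (1.0 - like)
    posterior /= posterior.sum()

jnd_estimate = float(np.sum(thresholds * posterior))
```

After a few dozen trials the posterior concentrates near the simulated threshold, which is what makes this kind of adaptive control efficient compared to the method of constant stimuli.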
The embedding is sampled and used to train a deep convolutional neural network to predict the perceived distance between two appearance images. While the learned model and experiments focus on 3D printed materials, the presented approaches can apply to arbitrary material classes. The success of this computational approach creates a promising path for future work in quantifying appearance differences.

Item: Context-dependent adaptation in the visual system (2017-12). Mesik, Juraj.
The visual system continuously adjusts its sensitivities to various visual features so as to optimize neural processing, a phenomenon known as adaptation. Although this rapid form of plasticity has been extensively studied across numerous sensory modalities, it remains unclear whether its dynamics can change with experience. Specifically, the world we live in is composed of many different environments, or contexts, each of which contains its own statistical regularities. For example, forests contain more vertical energy and greenish hues than a desert landscape. Here we investigated the possibility that, through experience, the visual system can learn statistical regularities in the visual input and use this knowledge to adapt more quickly. In two sets of experiments, participants repeatedly adapted to previously unencountered regularities in orientation statistics over the course of 3-4 sessions. They adapted either to rapidly presented sequences of oriented gratings containing orientation biases, or to natural visual input that was filtered to alter its orientation statistics. We found that experience did increase adaptation rate, but only in the experiments where participants adapted to a single set of altered statistics of natural input. We found no changes in adaptation rate in experiments where participants periodically switched between adapting to different statistical regularities.
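One common way to quantify an adaptation rate of the kind measured above is to fit an exponential time constant to the growth of the aftereffect over trials. The data, the saturation level, and the exponential model form below are illustrative assumptions, not the study's analysis.

```python
import numpy as np

# Hypothetical aftereffect magnitudes over successive trials of an adaptation
# block; an exponential approach to saturation is a common descriptive model,
# and its time constant serves as a summary "adaptation rate".
trials = np.arange(20)
aftereffect = 1.0 - np.exp(-trials / 5.0)  # simulated observer, tau = 5 trials

def fit_time_constant(t, y, y_max=1.0):
    # Linearize: log(y_max - y) = -t / tau + const, then least squares.
    mask = y < y_max
    slope, _ = np.polyfit(t[mask], np.log(y_max - y[mask]), 1)
    return -1.0 / slope

tau = fit_time_constant(trials, aftereffect)
```

A faster adapter would show a smaller fitted tau, so comparing tau across sessions is one way to test whether experience speeds adaptation.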
These results demonstrate that adaptation and experience can interact under some circumstances.

Item: Discriminability of simple and complex haptic vibrations in single-cell computational and human psychophysical settings (2017-07). Theis, Nicholas.
A multiscale, multiphysics model of the Pacinian corpuscle (PC) was used to study the neurophysiological response to haptic vibrations in the 100-200 Hz range. The computational results were compared to human psychophysical experiments, emulating the pairing of psychophysics with in vivo electrophysiology in PC research. A first assessment of this approach was made by examining the discriminability (d′) of pairs of vibrotactile stimuli. The discrimination task was performed psychophysically and in silico for both one- and two-frequency stimuli. Both firing-rate and inter-spike-interval neural decoding schemes were used to calculate d′ from simulation data. Human subjects discriminated between frequencies with two components (complex stimuli) more effectively than isolated frequencies (simple stimuli), possibly due to the presence of beat frequencies in dissonant stimuli. Over a given stimulus set, in silico d′ values correlated well with the psychophysical data (R² > 0.6), but when the simple and complex data were combined the model did not match the experiment (R² < 0.1). Firing rate yielded better predictions than inter-spike interval and was more robust to noise. Results suggest that a single simulated PC can capture some, but not all, of the observed psychophysical response to a vibrotactile stimulus.

Item: Human neurophysiological mechanisms of contextual modulation in primary visual cortex (2010-05). Schumacher, Jennifer Frances.
This dissertation examines visual processing of contextually modulated artificial and natural stimuli in primary visual cortex on a local scale.
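The discriminability index d′ used in the haptic study above compares the separation of two response distributions to their pooled variability. A minimal sketch, with hypothetical firing rates standing in for the simulated PC output:

```python
import numpy as np

def d_prime(rates_a, rates_b):
    """Discriminability index for two response distributions: the mean
    separation scaled by the pooled standard deviation."""
    rates_a = np.asarray(rates_a, dtype=float)
    rates_b = np.asarray(rates_b, dtype=float)
    pooled_sd = np.sqrt((rates_a.var(ddof=1) + rates_b.var(ddof=1)) / 2.0)
    return abs(rates_a.mean() - rates_b.mean()) / pooled_sd

# Hypothetical firing rates (spikes/s) from repeated presentations of two
# vibrotactile stimuli; the values are illustrative, not from the study.
stim_100hz = [52, 55, 50, 54, 53, 51]
stim_150hz = [60, 63, 59, 62, 61, 64]
dp = d_prime(stim_100hz, stim_150hz)
```

The same formula applies whether the "responses" are firing rates or inter-spike intervals, which is how a single decoding function can serve both schemes mentioned above.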
Understanding how local features are integrated into a global structure or ignored as irrelevant background is a critical step in comprehending human vision. To investigate these mechanisms, it was first necessary to measure the relationship between inferred neural responses, such as those obtained with blood oxygenation level-dependent (BOLD) fMRI, and local stimuli. From this point, orientation-dependent contextual modulation was analyzed locally or with a contour. While focusing on primary visual cortex, these experiments with stimuli of increasing complexity provide a foundation for how local features are grouped into global structures. BOLD fMRI provides a non-invasive method to measure the inferred neural response in humans. Because the BOLD signal reflects an interaction between neural activity, blood flow, and deoxyhemoglobin concentration, it is not obvious that it relates linearly to these mechanisms or to established functions, such as the contrast response function (CRF). Chapter 2 measures the BOLD response to single Gabor patches of increasing contrast with two pulse sequences: Gradient Echo (GE) and Spin Echo (SE). GE measurements include signals from large and small veins, while SE measurements eliminate the signal from large veins. Comparing these signals at ultra-high field strength (7 Tesla) showed that the relationship between the CRF and the BOLD response to local stimuli is not linear for GE measurements. Chapters 3 and 4 focus on orientation-dependent contextual modulation of a single Gabor patch or of a vertical line of Gabor patches. In the periphery, surrounds of parallel orientation suppress the center stimulus, while surrounds of orthogonal orientation facilitate it. The relationship between the BOLD response and these suppressive or facilitative mechanisms was measured on a local scale (Chapter 3).
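The Gabor patches used as stimuli throughout these experiments are sinusoidal gratings windowed by a Gaussian envelope, and can be generated in a few lines. The size, wavelength, and envelope width below are illustrative parameters, not those of the experiments.

```python
import numpy as np

def gabor_patch(size=64, wavelength=8.0, orientation_deg=0.0,
                sigma=10.0, contrast=1.0):
    """A luminance Gabor: a cosine grating windowed by an isotropic
    Gaussian, returned in [-contrast, contrast] around a mean of 0."""
    half = size / 2.0
    y, x = np.mgrid[-half:half, -half:half]
    theta = np.deg2rad(orientation_deg)
    # Coordinate along the grating's modulation direction.
    xr = x * np.cos(theta) + y * np.sin(theta)
    grating = np.cos(2.0 * np.pi * xr / wavelength)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return contrast * grating * envelope

patch = gabor_patch(size=64, wavelength=8.0, orientation_deg=45.0, contrast=0.8)
```

Varying `orientation_deg` for a center patch versus its surround is exactly the kind of manipulation that produces the parallel-suppression and orthogonal-facilitation conditions described above.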
Then, to compare the mechanisms for orientation-dependent contextual modulation and contour integration, performance in a contour detection task was measured over an extensive parameter space (Chapter 4). These data show that the BOLD response to suppressive stimuli does not behave as predicted by psychophysical results, and that orientation-dependent contextual modulation and contour integration operate over different spatial scales and likely involve different neural mechanisms. This dissertation provides data on the relationship between the BOLD response and local stimuli, as well as on the neural mechanisms behind orientation-dependent contextual modulation, contour integration, and texture classification. An over-arching theme is that inferred neural responses, such as those measured with BOLD fMRI, behave differently on a local scale than on a global scale. However, other non-invasive measures provide details on how local stimuli are processed and further integrated into a global structure. Future work can incorporate computational models of neural activity and the BOLD response to clarify why measured responses differ on a local scale compared to a global scale.

Item: Measuring the Detection of Objects under Simulated Visual Impairment in 3D Rendered Scenes (2018-09). Carpenter, Brent.
A space is visually accessible when a person can use their vision to travel through the space and to pursue activities intended to be performed with vision within that space. Previous work has addressed the difficulty of evaluating the detection of objects in real spaces by observers with simulated visual impairments. The current research addresses the viability of using physically realistic 3D renderings of public spaces under artificially induced blur in place of more resource-intensive testing in the real spaces themselves while participants wear blurring goggles.
In addition, this research illustrates the efficacy of a model that predicts the portions of public scenes that an observer with simulated vision impairment would presumably fail to detect, by comparing the predictions of missed space geometry to actual geometry-detection failures by observers with simulated impairments. Lastly, this work addresses how well simulated low-vision observers can categorize the contents of scenes. Observer categorization rate is compared to several image metrics, and the results indicate that average classification rate across low-vision simulations can be predicted very well from the averages of several different image metrics within each of the acuity blocks. Chapter 1 of this dissertation is a literature review necessary for understanding the background of this research, along with an overview of the research itself. In Chapter 2, an experiment is described in which object visibility was tested in a virtual environment, with the goal of validating the use of 3D renderings as substitutive stimuli by comparing performance between the real and digital versions of the same task (Bochsler et al., 2013). The objects were ramps, steps, and flat surfaces. Participants were normally sighted young adults who viewed either blurred or unblurred images. Images were blurred using a Gaussian filter calibrated with a Sloan chart for the viewing distance of the experiment. Patterns of object identifications and confusions between the digital and physical versions of the task were highly similar. It is very likely that 3D renderings of public spaces, when used in psychophysical tasks, are effective substitutive stimuli for real spaces in object detection tasks. Avenues for parametric manipulations that might strengthen this argument are also explored. Chapter 3 extends the use of physics-based 3D renderings to simulations of visual impairment (Thompson et al., 2017; https://github.com/visual-accessibility/deva-filter).
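The Gaussian-blur manipulation described in Chapter 2 amounts to low-pass filtering the stimulus images. A minimal sketch follows; the test image, the blur level, and the sigma-to-acuity mapping are illustrative assumptions, standing in for the dissertation's Sloan-chart calibration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_acuity_loss(image, sigma_px):
    """Low-pass the image with a Gaussian kernel. A larger sigma_px removes
    more high spatial frequencies, mimicking coarser acuity; mapping sigma
    to a target acuity level would require calibration against an eye chart
    at the experiment's viewing distance (assumed here, not implemented)."""
    return gaussian_filter(np.asarray(image, dtype=float), sigma=sigma_px)

# A hypothetical high-contrast edge: blurring spreads it over many pixels,
# which is what makes steps and ramps hard to detect under simulated blur.
scene = np.zeros((32, 32))
scene[:, 16:] = 1.0
blurred = simulate_acuity_loss(scene, sigma_px=3.0)
```

Because the filter only attenuates spatial frequencies, strong luminance edges survive blurring better than fine texture, consistent with the edge-driven predictive model discussed in Chapter 3.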
A model of visual impairment was applied to 3D renderings of public spaces to simulate increasing levels of impairment. Participants were then asked to draw the edges and contours of objects in these simulations under several separate task conditions: drawing the edges of doors, stairs, obstacles, or floor-wall-ceiling connections. As the simulated impairment deepened, observers struggled to find the correct object contours in each of the tasks, and observer data more closely matched the predictive model: a system that puts a premium on sudden changes in luminance contrast. In the absence of context and meaning, simulated low-vision observers tend to make false-positive geometrical edge identifications when a scene has non-accidental incidences of strong luminance-contrast edges, such as bars of light and shadows. The predictive power and utility of the model for simulating visual impairment are also discussed. Chapter 4 contains a pilot experiment that seeks to understand how well simulated low-vision observers can classify the category of blurry scenes shown to them. Observers performed a three-alternative forced-choice task, identifying which of three scenes an image depicted, and their classification accuracy was tracked across acuity-level simulations. Several image metrics were calculated and regressed against classification accuracy for either single scenes or per acuity block. It was found that average classification accuracy within an acuity block could be predicted from any one of several average image metrics of the scenes within that block when regressed across acuity levels.
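The block-level regression reported in Chapter 4 has the following shape: one average image metric per acuity block is regressed against the block's mean classification accuracy. The metric values, accuracies, and the choice of ordinary least squares below are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical per-acuity-block averages: one image metric (e.g. mean RMS
# contrast of the blurred scenes) and the mean three-alternative
# forced-choice accuracy in that block. Values are illustrative only.
metric = np.array([0.45, 0.38, 0.30, 0.21, 0.12])
accuracy = np.array([0.95, 0.90, 0.78, 0.62, 0.41])

# Ordinary least-squares fit: accuracy ~ slope * metric + intercept.
slope, intercept = np.polyfit(metric, accuracy, 1)
predicted = slope * metric + intercept

# Coefficient of determination, the usual summary of such a fit.
ss_res = np.sum((accuracy - predicted) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
```

A high r_squared here corresponds to the finding that block-average accuracy is well predicted from block-average image metrics, even though per-scene accuracy is noisier.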