A Computational Framework for Predicting Appearance Differences

Persistent link to this item

https://hdl.handle.net/11299/206226

Published Date

2018-07

Type

Thesis or Dissertation

Abstract

Quantifying the perceived difference in appearance between two surfaces is an important industrial problem that is currently solved through visual inspection. The field of design has long employed trained experts to manually compare appearances, whether to verify manufacturing quality or to match design intent. More recently, the advancement of 3D printing is being held back by an inability to evaluate appearance tolerances. Much as color science greatly accelerated the design of conventional printers, a computational solution to the appearance difference problem would aid the development of advanced 3D printing technology. Past research has produced analytical expressions for restricted versions of the problem by focusing on a single attribute, such as color, or by requiring homogeneous materials. The prediction of spatially-varying appearance differences, however, is a far more difficult problem because the domain is highly multi-dimensional. This dissertation develops a computational framework for solving the general form of the appearance comparison problem.

To begin, a method-of-adjustment task is used to measure the effects of surface structure on the overall perceived brightness of a material. In the case considered, the spatial variations of an appearance are limited to the shading and highlights produced by height changes across its surface. All stimuli are rendered using computer graphics techniques so that they can be viewed virtually, increasing the number of appearances evaluated per subject. Results suggest that an image-space model of brightness is an accurate approximation, justifying the later image-based models that address more general appearance evaluations.

Next, a visual search study is performed to measure the perceived uniformity of 3D printed materials. This study creates a large dataset of realistic materials by using state-of-the-art material scanners to digitize numerous tiles 3D printed with spatially-varying patterns in height, color, and shininess. After scanning, additional appearances are created by modifying the reflectance descriptions of the tiles, producing variations that cannot yet be physically manufactured with the same level of control. The visual search task is shown to efficiently measure changes in appearance uniformity resulting from these modifications.

A follow-up experiment augments the uniformity measurements collected in the visual search study. A forced-choice task measures the rate of change between two appearances by interpolating along curves defined in the high-dimensional appearance space. Repeated comparisons are controlled by a Bayesian process to efficiently find the just noticeable difference thresholds between appearances. Gradients reconstructed from the measured thresholds are used to estimate perceived distances between very similar appearances, something that is hard to measure directly with human subjects. A neural network model is then trained to accurately predict uniformity from features extracted from the non-uniform appearance and the target uniform appearance images.
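
The Bayesian process mentioned above is not specified in the abstract. As a rough illustration of how such an adaptive threshold procedure typically works, the sketch below runs a QUEST-style search over a one-dimensional interpolation parameter between two appearances; the Weibull psychometric model, the posterior-mean placement rule, and every parameter value are assumptions made for this example, not the dissertation's method.

import numpy as np

# Illustrative QUEST-style Bayesian adaptive procedure for locating a just
# noticeable difference (JND) along an interpolation curve between two
# appearances (0 = reference, 1 = maximally different). Hypothetical code.
thresholds = np.linspace(0.01, 1.0, 200)            # candidate JND values
prior = np.ones_like(thresholds) / len(thresholds)  # flat prior

def p_correct(stimulus, threshold, slope=3.5, guess=0.5, lapse=0.02):
    """Weibull psychometric function for a two-interval forced-choice task."""
    p = 1 - np.exp(-(stimulus / threshold) ** slope)
    return guess + (1 - guess - lapse) * p

def next_stimulus(prior):
    """Place the next trial at the posterior mean, a common adaptive rule."""
    return float(np.sum(prior * thresholds))

def update(prior, stimulus, correct):
    """Bayes rule: reweight each candidate threshold by the trial likelihood."""
    likelihood = p_correct(stimulus, thresholds)
    if not correct:
        likelihood = 1 - likelihood
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Simulated observer with a true threshold of 0.3.
rng = np.random.default_rng(0)
for trial in range(40):
    s = next_stimulus(prior)
    correct = rng.random() < p_correct(s, 0.3)
    prior = update(prior, s, correct)
print(f"estimated JND: {np.sum(prior * thresholds):.3f}")
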
Finally, the computational framework for predicting general appearance differences is fully developed. Relying on the previously generated 3D printed appearances, a crowd-sourced ranking task is used to simultaneously measure the relative similarities of multiple stimuli against a reference appearance. Crowd-sourcing the perceptual data collection allows the many complex interactions between bumpiness, color, glossiness, and pattern to be evaluated efficiently. Generalized non-metric multidimensional scaling is then used to estimate a metric embedding that respects the collected appearance rankings. The embedding is sampled and used to train a deep convolutional neural network to predict the perceived distance between two appearance images. While the learned model and experiments focus on 3D printed materials, the presented approaches can apply to arbitrary material classes. The success of this computational approach creates a promising path for future work in quantifying appearance differences.
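
Generalized non-metric multidimensional scaling has a compact formulation: each collected ranking yields ordinal constraints of the form "appearance a was judged closer to reference r than appearance b was", and the embedding is optimized to violate as few of these orderings as possible. The minimal sketch below, a hinge loss minimized by plain gradient descent, is an illustrative assumption rather than the dissertation's implementation; the function name, margin, and optimizer settings are invented for the example.

import numpy as np

# Illustrative GNMDS-style embedding from ordinal triplet constraints.
# Each triplet (r, a, b) asserts that item a was ranked closer to the
# reference r than item b. Hypothetical sketch, not the dissertation's code.
def gnmds_embed(n_items, triplets, dim=2, margin=1.0, lr=0.05, steps=2000):
    rng = np.random.default_rng(0)
    X = 0.1 * rng.standard_normal((n_items, dim))
    for _ in range(steps):
        grad = np.zeros_like(X)
        for r, a, b in triplets:
            d_ra = np.sum((X[r] - X[a]) ** 2)  # squared distance to closer item
            d_rb = np.sum((X[r] - X[b]) ** 2)  # squared distance to farther item
            if d_ra + margin > d_rb:           # hinge: ordering violated
                grad[r] += 2 * ((X[r] - X[a]) - (X[r] - X[b]))
                grad[a] -= 2 * (X[r] - X[a])
                grad[b] += 2 * (X[r] - X[b])
        X -= lr * grad / max(len(triplets), 1)
    return X

# Toy usage: four items; item 1 ranked nearest item 0, item 2 next, then 3.
X = gnmds_embed(4, [(0, 1, 2), (0, 1, 3), (0, 2, 3)])
print(np.round(X, 2))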

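The deep convolutional network that predicts perceived distance is likewise left unspecified here. One plausible design, offered purely as an assumption, is a twin ("siamese") encoder whose embedding distance is regressed against distances sampled from the perceptual embedding; the PyTorch sketch below uses arbitrary layer sizes and a made-up training pair.

import torch
import torch.nn as nn

# Hypothetical twin-encoder ("siamese") distance predictor. Both images pass
# through the same small CNN; the norm of the embedding difference is trained
# to match perceived distances sampled from the perceptual embedding.
class AppearanceEncoder(nn.Module):
    def __init__(self, embed_dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, x):
        return self.proj(self.features(x).flatten(1))

class DistancePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = AppearanceEncoder()  # shared weights for both images

    def forward(self, img_a, img_b):
        za, zb = self.encoder(img_a), self.encoder(img_b)
        return torch.norm(za - zb, dim=1)   # predicted perceptual distance

model = DistancePredictor()
a, b = torch.rand(4, 3, 128, 128), torch.rand(4, 3, 128, 128)
target = torch.rand(4)  # stand-in for distances from the perceptual embedding
loss = nn.functional.mse_loss(model(a, b), target)
loss.backward()
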
Description

University of Minnesota Ph.D. dissertation. July 2018. Major: Computer and Information Sciences. Advisor: Gary Meyer. 1 computer file (PDF); xi, 166 pages.

Suggested citation

Ludwig, Michael. (2018). A Computational Framework for Predicting Appearance Differences. Retrieved from the University Digital Conservancy, https://hdl.handle.net/11299/206226.
