Item: New Information-Theoretic Analyses and Algorithmic Methods for Parameter Estimation in Structured Data Settings and Plenoptic Imaging Models (2020-08)
Author: Viswanathan Sambasivan, Abhinav

Abstract:
Parameter estimation problems involve estimating an unknown quantity (or parameter) of interest from a set of data (or observations) that contains some information about the parameter. Such problems are ubiquitous and widely studied across diverse disciplines in science and engineering, including, but not limited to, physics, computer science, signal processing, computational genomics, and economics. Information-theoretic limits of a parameter estimation problem quantify the best achievable performance (under a suitable metric), thereby establishing the fundamental difficulty of the problem. A central theme of the first two parts of this work is the development of information-theoretic tools for analyzing the fundamental limits of estimating parameters from noisy data in two very different settings: (1) the parameter of interest belongs to a structured class of signals, and (2) a concise forward model relating the observations to the parameters is analytically challenging to obtain.

The first part of this work examines the fundamental error characteristics of a general class of matrix completion problems, where the matrix of interest is the product of two a priori unknown matrices, one of which is sparse, and the observations are noisy. Our main contributions come in the form of minimax lower bounds on the expected per-element squared error for this problem under several common noise models. Our results establish that the error bounds derived in (Soni et al. 2016) for complexity-regularized maximum likelihood estimators achieve, up to multiplicative constants and logarithmic factors, the minimax error rates under certain (mild) conditions.

The rest of this work focuses on plenoptic imaging, which typically involves collecting multiple single-snapshot images of a scene across time (videos), wavelength (multi-spectral cameras), and vantage points (light field sensor arrays), thus providing substantially more information about a given scene than conventional imaging. For this thrust, we first focus on assessing the fundamental limits of scene parameter estimation in plenoptic imaging systems, with an eye towards passive indirect imaging problems. We develop a general framework for obtaining lower bounds on the variance of unbiased estimators of scene parameters from noisy plenoptic data. The novelty of this work lies in the use of computer graphics rendering software to synthesize the (often complicated) forward mapping needed to evaluate the Hammersley-Chapman-Robbins lower bound (HCR-LB), which is at least as tight as the more commonly used Cramér-Rao lower bound. When the rendering software yields inexact estimates of the forward mapping, we analyze the effects of such inaccuracies on the HCR-LB, both theoretically and via simulations, and provide a method for obtaining upper and lower bounds that bracket the true HCR-LB.
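As a rough, hedged illustration of the two estimation-theoretic settings described above (the notation \(D^*, A^*, \Omega, k, \theta, \delta, p_\theta\) below is illustrative and may differ from the conventions used in the thesis): the sparse-factor matrix completion part concerns minimax lower bounds on per-element squared error for an observation model of the form

\[
X^* = D^* A^*, \qquad \|A^*\|_0 \le k, \qquad Y_{ij} \sim p\big(\cdot \mid X^*_{ij}\big) \ \text{independently for } (i,j) \in \Omega \subseteq [n_1] \times [n_2],
\]

with risk measured as \(\frac{1}{n_1 n_2}\,\mathbb{E}\,\|\widehat{X} - X^*\|_F^2\); and, for a scalar scene parameter \(\theta\) whose likelihood \(p_\theta\) is induced by the (rendered) forward mapping, the Hammersley-Chapman-Robbins bound on any unbiased estimator \(\widehat{\theta}\) is

\[
\operatorname{Var}_\theta\big(\widehat{\theta}\big) \;\ge\; \sup_{\delta \ne 0} \frac{\delta^2}{\mathbb{E}_\theta\!\left[\left(\frac{p_{\theta+\delta}(Y)}{p_\theta(Y)} - 1\right)^{\!2}\right]} \;=\; \sup_{\delta \ne 0} \frac{\delta^2}{\chi^2\big(p_{\theta+\delta} \,\|\, p_\theta\big)},
\]

which recovers the Cramér-Rao bound \(1/I(\theta)\) in the limit \(\delta \to 0\) and is therefore at least as tight.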
The final part of this work explores algorithmic methods for Non-Line-of-Sight (NLOS) imaging from (noisy) plenoptic data, where the aim is to recover a hidden scene of interest from noisy measurements that arise from reflections off a scattering surface, e.g., a wall or the floor. We use the insight that plenoptic data is highly structured, owing to parallax and/or motion in the hidden scene, and propose a multi-way Total Variation (TV) regularized inversion methodology that leverages this structure to recover hidden scenes. We demonstrate our recovery algorithm on real-world plenoptic measurements at visible and Long-Wave InfraRed (LWIR) wavelengths. Experiments in LWIR (or thermal) imaging show that it is possible to reliably image human subjects around a corner, nearly in real time, using our framework.
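The sketch below is a minimal, hedged illustration of a multi-way TV-regularized inversion of the kind described above, assuming a generic linear forward operator and a smoothed (Charbonnier-type) TV penalty so that plain gradient descent applies; the operator A, the scene dimensions, and all parameter values are placeholders, not the thesis's actual NLOS forward model or solver.

```python
# Minimal sketch: minimize 0.5 * ||A x - y||^2 + lam * sum_over_modes TV_eps(x),
# where TV_eps is a smoothed (Charbonnier) total-variation penalty applied along
# every axis of the multi-way array x. All names and values are illustrative.
import numpy as np


def smoothed_tv_grad(x, eps):
    """Gradient of sum over axes of sum(sqrt(diff(x, axis)^2 + eps^2))."""
    grad = np.zeros_like(x)
    for axis in range(x.ndim):
        d = np.diff(x, axis=axis)             # forward differences along this mode
        w = d / np.sqrt(d ** 2 + eps ** 2)    # derivative of sqrt(d^2 + eps^2) w.r.t. d
        pad_lo = [(0, 0)] * x.ndim
        pad_lo[axis] = (1, 0)                 # aligns w[j-1] at position j
        pad_hi = [(0, 0)] * x.ndim
        pad_hi[axis] = (0, 1)                 # aligns w[j] at position j
        grad += np.pad(w, pad_lo) - np.pad(w, pad_hi)  # adjoint of the difference operator
    return grad


def tv_regularized_inversion(A, y, shape, lam=0.1, eps=1e-2, iters=500):
    """Gradient descent on the smoothed multi-way TV-regularized least-squares objective."""
    # Crude Lipschitz estimate for the gradient: data term plus smoothed TV term.
    L = np.linalg.norm(A, 2) ** 2 + lam * 4.0 * len(shape) / eps
    step = 1.0 / L
    x = np.zeros(shape)
    for _ in range(iters):
        resid = A @ x.ravel() - y
        grad = (A.T @ resid).reshape(shape) + lam * smoothed_tv_grad(x, eps)
        x -= step * grad
    return x


# Hypothetical usage: a random forward operator and a piecewise-constant hidden scene
# with two spatial modes and one temporal mode.
rng = np.random.default_rng(0)
shape = (16, 16, 8)
x_true = np.zeros(shape)
x_true[4:10, 4:10, :4] = 1.0
A = rng.standard_normal((512, x_true.size)) / np.sqrt(x_true.size)
y = A @ x_true.ravel() + 0.01 * rng.standard_normal(512)
x_hat = tv_regularized_inversion(A, y, shape)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The multi-way aspect enters only through the loop over axes in smoothed_tv_grad, which penalizes variation along every mode of the data cube (e.g., two spatial modes and a temporal mode); a practical implementation would replace the dense random A with the actual scattering-surface forward operator and a faster proximal solver.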