School of Statistics Graduate Student Research
Persistent link for this collection: https://hdl.handle.net/11299/96783
Item: Geometric ergodicity of a random-walk Metropolis algorithm for a transformed density (2010-11-22)
Johnson, Leif; Geyer, Charles J.

Curvature conditions on a target density in R^k for the geometric ergodicity of a random-walk Metropolis algorithm have previously been established (Mengersen and Tweedie (1996), Roberts and Tweedie (1996), Jarner and Hansen (2000)). However, these conditions are difficult to apply to target densities in R^k that have exponentially light tails but are not super-exponential. In this paper I establish a variable transformation for such target densities that, together with a regularity condition on the target density, ensures that a random-walk Metropolis algorithm for the transformed density is geometrically ergodic. Inference can be drawn for the original target density using Markov chain Monte Carlo estimates based on the transformed density. An application to inference on the regression parameter in multinomial logit regression with a conjugate prior is given.

Item: Supporting Theory and Data Analysis for "Long Range Search for Maximum Likelihood in Exponential Families" (2011-07-23)
Okabayashi, Saisuke; Geyer, Charles J.

Exponential families are often used to model data sets with complex dependence. Maximum likelihood estimators (MLEs) can be difficult to compute when the likelihood is expensive to evaluate. Markov chain Monte Carlo (MCMC) methods based on the MCMC-MLE algorithm of Geyer and Thompson (1992) are guaranteed in theory to converge under certain conditions from any starting value, but in practice such an algorithm may labor to converge from a poor starting value. We present a simple line search algorithm to find the MLE of a regular exponential family when the MLE exists and is unique. The algorithm can be started from any initial value and avoids the trial-and-error experimentation associated with calibrating algorithms such as stochastic approximation.
Unlike many optimization algorithms, this approach uses first-derivative information only, evaluating neither the likelihood function itself nor derivatives of order higher than first. We show convergence of the algorithm for the case where the gradient can be calculated exactly. When it cannot, the gradient has a particularly convenient form that is easily estimated with MCMC, so the algorithm remains useful to a practitioner.
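The derivative-only search described in the second item can be illustrated with a minimal sketch. For a regular exponential family the score is t(x_obs) minus the expected sufficient statistic, and its unique root is the MLE. The example below uses a hypothetical one-dimensional Bernoulli family with canonical parameter theta (so the expected sufficient statistic is available in closed form rather than by MCMC, and the bracketing-plus-bisection line search stands in for the paper's actual algorithm); it brackets the root by doubling steps in the gradient direction and then bisects on the sign of the score, never evaluating the likelihood itself.

```python
import math

def expit(t):
    """Inverse logit: mean parameter of the canonical Bernoulli family."""
    return 1.0 / (1.0 + math.exp(-t))

def score(theta, t_obs, n):
    """Score l'(theta) = t(x_obs) - E_theta[t(X)] for n Bernoulli trials."""
    return t_obs - n * expit(theta)

def line_search_mle(t_obs, n, theta0=0.0, tol=1e-12):
    # Step 1: from theta0, move in the gradient direction, doubling the
    # step until the score changes sign -- this brackets the MLE.
    g0 = score(theta0, t_obs, n)
    step = 1.0 if g0 > 0 else -1.0
    lo, hi = theta0, theta0 + step
    while score(hi, t_obs, n) * g0 > 0:
        step *= 2.0
        hi = theta0 + step
    # Step 2: bisection on the sign of the score alone; the likelihood
    # is never evaluated, only its first derivative.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if score(mid, t_obs, n) * g0 > 0:
            lo = mid
        else:
            hi = mid
        if abs(hi - lo) < tol:
            break
    return 0.5 * (lo + hi)

# 30 successes in 100 trials: the MLE of the success probability is 0.3,
# so the canonical-parameter MLE is logit(0.3).
theta_hat = line_search_mle(t_obs=30, n=100)
```

In the setting the abstract describes, the closed-form `score` would be replaced by an MCMC estimate of the expected sufficient statistic, which is exactly the quantity the paper notes is conveniently estimable.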
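The variable-transformation idea in the first item above can likewise be sketched. The code below is a minimal illustration, not the paper's construction: it assumes a standard Laplace target (exponentially light but not super-exponential tails) and a hypothetical cubic transformation x = sign(y)|y|^3, under which the induced density of Y has super-exponential tails. A random-walk Metropolis sampler targets the transformed density, and draws are mapped back to the original scale for inference.

```python
import numpy as np

rng = np.random.default_rng(42)

def log_f(x):
    """Log of the standard Laplace density, up to a constant: -|x|."""
    return -abs(x)

def gamma(y):
    """Hypothetical transformation x = sign(y)|y|^3 (not the paper's)."""
    return np.sign(y) * abs(y) ** 3

def log_h(y):
    """Transformed log density: log f(gamma(y)) + log|gamma'(y)|."""
    if y == 0.0:
        return -np.inf
    return log_f(gamma(y)) + np.log(3.0) + 2.0 * np.log(abs(y))

def rwm(log_target, start, n_iter, step=1.0):
    """Plain random-walk Metropolis with Gaussian proposals."""
    x = start
    lp = log_target(x)
    draws = np.empty(n_iter)
    for i in range(n_iter):
        prop = x + step * rng.normal()
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        draws[i] = x
    return draws

# Sample Y from the transformed density, then map back to the X scale.
ys = rwm(log_h, start=1.0, n_iter=20000)
xs = np.sign(ys) * np.abs(ys) ** 3
est = np.mean(np.abs(xs))  # E|X| = 1 for the standard Laplace
```

The point of the transformation is that the sampler runs on the well-behaved density of Y, while expectations under the original target are still recovered from the back-transformed draws.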