Browsing by Author "Geyer, Charles J."
Now showing 1 - 20 of 46
- A Philosophical Look at Aster Models (University of Minnesota, 2010-02). Geyer, Charles J.

- Annealing Markov Chain Monte Carlo with Applications to Ancestral Inference (University of Minnesota, 1994-02). Geyer, Charles J.; Thompson, Elizabeth A.

- Aster Models and Lande-Arnold Beta (2010-01-09). Geyer, Charles J.; Shaw, Ruth G.
  Lande and Arnold (1983) proposed estimating beta, the directional selection gradient, by ordinary least squares (OLS). Aster models (Geyer, Wagenius and Shaw, 2007; Shaw, Geyer, Wagenius, Hangelbroek, and Etterson, 2008) estimate exactly the same beta, so they provide no improvement over the Lande-Arnold method in point estimation of this quantity. Aster models do, however, provide correct confidence intervals, confidence regions, and hypothesis tests for beta; the corresponding procedures derived from OLS are often invalid because the assumptions for OLS are grossly incorrect. (A minimal OLS sketch of this estimate appears after this group of entries.)

- Aster Models and Lande-Arnold Beta (revised) (2010-01-13). Geyer, Charles J.; Shaw, Ruth G.
  Same abstract as the preceding entry; this revision fixes a bug that made the figure in the original incorrect.

- Aster Models for Life History Analysis (University of Minnesota, 2005-09). Geyer, Charles J.; Wagenius, Stuart; Shaw, Ruth G.

- Aster Models with Random Effects and Additive Genetic Variance for Fitness (University of Minnesota, 2013-07). Geyer, Charles J.; Shaw, Ruth G.

- Aster Models with Random Effects and Additive Genetic Variance for Fitness (2013-07-10). Geyer, Charles J.; Shaw, Ruth G.
  This technical report is a minor supplement to the paper Geyer et al. (in press) and its accompanying technical report Geyer et al. (2012). It shows how to move variance components from the canonical parameter scale to the mean value parameter scale. This is useful in estimating additive genetic variance for fitness, the quantity that appears in Fisher's fundamental theorem of natural selection, which predicts the rate of increase in fitness via natural selection.

- Aster Models with Random Effects via Penalized Likelihood (2012-10-09). Geyer, Charles J.; Ridley, Caroline E.; Latta, Robert G.; Etterson, Julie R.; Shaw, Ruth G.
  This technical report works out details of approximate maximum likelihood estimation for aster models with random effects. Fixed and random effects are estimated by penalized log likelihood. Variance components are estimated by integrating out the random effects in the Laplace approximation of the complete data likelihood, following Breslow and Clayton (1993), which can be done analytically, and then maximizing the resulting approximate missing data likelihood. A further approximation treats the second derivative matrix of the cumulant function of the exponential family, where it appears in the approximate missing data log likelihood, as a constant (not a function of parameters). First and second derivatives of the approximate missing data log likelihood can then be computed analytically. Minus the second derivative matrix of the approximate missing data log likelihood is treated as approximate Fisher information and used to estimate standard errors. (A schematic code sketch of this scheme follows this entry.)
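The estimation scheme in the "Aster Models with Random Effects via Penalized Likelihood" abstract above can be written generically. The sketch below is a minimal schematic under stated assumptions, not the aster package's implementation: the names logl, makeD, W, and nfix are hypothetical placeholders for a user-supplied log likelihood, the variance-component-to-covariance map, the (constant) second derivative matrix, and the number of fixed effects.

```r
# Schematic of penalized likelihood plus a Laplace approximation, as described
# in the abstract above.  All names are hypothetical placeholders.

# Penalized log likelihood in the fixed effects alpha and random effects b,
# with b penalized by its prior precision D(nu)^{-1}.
penalized <- function(effects, logl, Dinv, nfix) {
  b <- effects[-seq_len(nfix)]                  # random effects part of the vector
  logl(effects) - drop(t(b) %*% Dinv %*% b) / 2
}

# Laplace-approximate missing data log likelihood for the variance components
# nu, evaluated at the joint maximizer effects.hat of the penalized objective.
# W stands in for the second derivative matrix of the cumulant function,
# treated as constant (the "further approximation" the abstract mentions).
laplace_q <- function(nu, effects.hat, logl, makeD, W, nfix) {
  D <- makeD(nu)                                # covariance matrix of the random effects
  p <- penalized(effects.hat, logl, solve(D), nfix)
  p - as.numeric(determinant(diag(nrow(D)) + D %*% W)$modulus) / 2
}
```

Maximizing laplace_q over nu, re-solving the penalized problem as needed, is the outer loop the report describes; minus the Hessian of the approximate missing data log likelihood then serves as approximate Fisher information for standard errors.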
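For the two "Aster Models and Lande-Arnold Beta" entries above: the Lande-Arnold point estimate of beta is simply OLS regression of relative fitness on the traits. A minimal sketch, assuming a hypothetical data frame dat with absolute fitness fit and traits z1 and z2 (these names are illustrative, not taken from the reports):

```r
# Lande-Arnold beta by OLS on a hypothetical data frame `dat`.
dat$relfit <- dat$fit / mean(dat$fit)      # relative fitness w = W / mean(W)
lout <- lm(relfit ~ z1 + z2, data = dat)   # OLS regression of w on the traits
beta.hat <- coef(lout)[-1]                 # directional selection gradient estimate
# The point estimate is the same one an aster analysis yields; what OLS gets
# wrong, per the abstract, is the confidence intervals and tests for it.
```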
- Aster Models with Random Effects via Penalized Likelihood (University of Minnesota, 2010-07). Geyer, Charles J.; Ridley, Caroline E.; Latta, Robert G.; Etterson, Julie R.; Shaw, Ruth G.

- Commentary on Lande-Arnold Analysis (University of Minnesota, 2008-05). Geyer, Charles J.; Shaw, Ruth G.

- Commentary on Lande-Arnold Analysis (School of Statistics, University of Minnesota, 2008-05-14). Geyer, Charles J.; Shaw, Ruth G.
  A solution to the problem of estimating fitness landscapes was proposed by Lande and Arnold (1983). Another solution, which avoids problematic aspects of the Lande-Arnold methodology, was proposed by Shaw, Geyer, Wagenius, Hangelbroek, and Etterson (2008). This technical report goes through Lande-Arnold theory in detail, paying careful attention to problematic aspects. The only completely new material is a theoretical analysis of when the best quadratic approximation to a fitness landscape, which is what the Lande-Arnold method estimates, is a good approximation to the actual fitness landscape.

- Computation for the Introduction to MCMC Chapter of Handbook of Markov Chain Monte Carlo (University of Minnesota, 2010-07). Geyer, Charles J.

- Computation for the Introduction to MCMC Chapter of Handbook of Markov Chain Monte Carlo (2010-07-29). Geyer, Charles J.
  This technical report does the computation for the "Introduction to MCMC" chapter of Brooks, Gelman, Jones and Meng (forthcoming). All analyses are done in R (R Development Core Team, 2008) using the Sweave function, so this entire technical report and all of the analyses reported in it are exactly reproducible by anyone who has R with the mcmc package (Geyer, 2005) installed and the R noweb file specifying the document. (A minimal example using the mcmc package's metrop function appears after this group of entries.)

- Constrained covariance component models (1993-11). Shaw, Frank H.; Geyer, Charles J.

- Estimating Normalizing Constants and Reweighting Mixtures (1994-07-09). Geyer, Charles J.
  Markov chain Monte Carlo (the Metropolis-Hastings algorithm and the Gibbs sampler) is a general multivariate simulation method that permits sampling from any stochastic process whose density is known up to a constant of proportionality. It has recently received much attention as a method of carrying out Bayesian, likelihood, and frequentist inference in analytically intractable problems. Although many applications of Markov chain Monte Carlo do not need estimation of normalizing constants, three do: calculation of Bayes factors, calculation of likelihoods in the presence of missing data, and importance sampling from mixtures. Here reverse logistic regression is proposed as a solution to the problem of estimating normalizing constants, and convergence and asymptotic normality of the estimates are proved under very weak regularity conditions. Markov chain Monte Carlo is most useful when combined with importance reweighting, so that a Monte Carlo sample from one distribution can be used for inference about many distributions. In Bayesian inference, reweighting permits the calculation of posteriors corresponding to a range of priors using a Monte Carlo sample from just one posterior. In likelihood inference, reweighting permits the calculation of the whole likelihood function using a Monte Carlo sample from just one distribution in the model. Given this estimate of the likelihood, a parametric bootstrap calculation of the sampling distribution of the maximum likelihood estimate can be done using just one more Monte Carlo sample. Although reweighting can save much calculation, it does not work well unless the distribution being reweighted places appreciable mass in all regions of interest. Hence it is often not advisable to sample from a single distribution in the model. Reweighting a mixture of distributions in the model performs much better, but this cannot be done unless the mixture density is known, and this requires knowledge of the normalizing constants, or at least good estimates such as those provided by reverse logistic regression.
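The importance reweighting described in the "Estimating Normalizing Constants and Reweighting Mixtures" abstract just above fits in a few lines. This is a generic self-normalized version with illustrative function names, not code from the report:

```r
# Reuse draws x from an unnormalized density h.psi to estimate E_theta g(X)
# under another unnormalized density h.theta (all names are illustrative;
# log.h.theta, log.h.psi, and g are assumed vectorized).
reweight <- function(x, log.h.theta, log.h.psi, g) {
  lw <- log.h.theta(x) - log.h.psi(x)   # log unnormalized importance weights
  w <- exp(lw - max(lw))                # subtract the max for numerical stability
  w <- w / sum(w)                       # self-normalize, so unknown constants cancel
  sum(w * g(x))                         # estimate of E_theta g(X)
}
# This works poorly unless the sampling distribution puts appreciable mass
# wherever h.theta does, which is why the report recommends reweighting a
# mixture, with its normalizing constants estimated by reverse logistic
# regression.
```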
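For the two "Computation for the Introduction to MCMC Chapter" entries above: the analyses rely on the mcmc package's random-walk Metropolis function metrop(). A minimal usage sketch with a made-up target density (the actual targets live in the chapter and report, not here):

```r
library(mcmc)
lud <- function(x) -sum(x^2) / 2          # log unnormalized density: standard bivariate normal
out <- metrop(lud, initial = c(0, 0), nbatch = 1000, blen = 10, scale = 1)
out$accept                                # acceptance rate; adjust `scale` to tune it
colMeans(out$batch)                       # batch means give Monte Carlo estimates of E(X)
```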
- Geometric ergodicity of a random-walk Metropolis algorithm for a transformed density (2010-11-22). Johnson, Leif; Geyer, Charles J.
  Curvature conditions on a target density in R^k that guarantee geometric ergodicity of a random-walk Metropolis algorithm have previously been established (Mengersen and Tweedie, 1996; Roberts and Tweedie, 1996; Jarner and Hansen, 2000). However, these conditions are difficult to apply to target densities in R^k that have exponentially light tails but are not super-exponential. In this paper I establish a variable transformation to apply to such target densities that, along with a regularity condition on the target density, ensures that a random-walk Metropolis algorithm for the transformed density is geometrically ergodic. Inference can be drawn for the original target density using Markov chain Monte Carlo estimates based on the transformed density. An application to inference on the regression parameter in multinomial logit regression with a conjugate prior is given. (A one-dimensional illustration of the transform-then-sample idea appears after this group of entries.)

- Hypothesis Tests and Confidence Intervals Involving Fitness Landscapes fit by Aster Models (2010-01-09). Geyer, Charles J.; Shaw, Ruth G.
  This technical report explores some issues left open in Technical Reports 669 and 670 (Geyer and Shaw, 2008a, b): for fitness landscapes fit using aster models, we propose hypothesis tests of whether the landscape has a maximum and confidence regions for the location of the maximum. All analyses are done in R (R Development Core Team, 2008) using the aster contributed package described by Geyer, Wagenius and Shaw (2007) and Shaw, Geyer, Wagenius, Hangelbroek, and Etterson (2008). Furthermore, all analyses are done using the Sweave function in R, so this entire technical report and all of the analyses reported in it are completely reproducible by anyone who has R with the aster package installed and the R noweb file specifying the document. The revision fixes one error in the confidence ellipsoids in Section 4 (a square root was forgotten, so the regions in the original were too big).

- Likelihood and Exponential Families (University of Washington, 1990-06-08). Geyer, Charles J.
  A family of probability densities with respect to a positive Borel measure on a finite-dimensional affine space is standard exponential if the log densities are affine functions. The family is convex if the natural parameter set (gradients of the log densities) is convex. In the closure of the family in the topology of pointwise almost everywhere convergence of densities, the maximum likelihood estimate (MLE) exists whenever the supremum of the log likelihood is finite, and it is not defective if the family is convex. The MLE is a density in the original family conditioned on some affine subspace (the support of the MLE), which is determined by the "Phase I" algorithm, a sequence of linear programming feasibility problems. Standard methods determine the MLE in the family conditioned on the support ("Phase II"). An extended-real-valued function on an affine space is generalized affine if it is both convex and concave. The space of all generalized affine functions is a compact Hausdorff space, sequentially compact if the carrier is finite-dimensional. A family of probability densities is a standard generalized exponential family if the log densities are generalized affine. The closure of an exponential family is equivalent to a generalized exponential family. When the likelihood of an exponential family cannot be calculated exactly, it can sometimes be calculated by Monte Carlo using the Metropolis algorithm or the Gibbs sampler. The Monte Carlo log likelihood (the log likelihood in the exponential family generated by the Monte Carlo empirical distribution) then hypoconverges strongly (almost surely over sample paths) to the true log likelihood. For a closed convex family the Monte Carlo approximants to the MLE and all level sets of the likelihood converge strongly to the truth. For nonconvex families, the outer set limits converge. These methods are demonstrated by an autologistic model for estimation of relatedness from DNA fingerprint data and by isotonic, convex logistic regression for the maternal-age-specific incidence of Down’s syndrome, both constrained MLE problems. Hypothesis tests and confidence intervals are constructed for these models using the iterated parametric bootstrap.
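The Monte Carlo log likelihood in the "Likelihood and Exponential Families" abstract just above has a compact generic form: the relative log likelihood of a standard exponential family is l(theta) - l(psi) = <theta - psi, t(x_obs)> - log E_psi exp<theta - psi, t(X)>, and the expectation is replaced by an average over MCMC draws from the distribution at psi. A sketch under those assumptions (not code from the thesis; t.obs and t.sim are hypothetical inputs):

```r
# Monte Carlo relative log likelihood for a standard exponential family.
#   t.obs: natural statistic of the observed data (vector)
#   t.sim: matrix of natural statistics of MCMC draws from parameter psi,
#          one row per draw
mc.loglik <- function(theta, psi, t.obs, t.sim) {
  s <- drop(t.sim %*% (theta - psi))
  m <- max(s)
  sum((theta - psi) * t.obs) - (m + log(mean(exp(s - m))))  # stable log-mean-exp
}
# Maximizing this over theta gives the Monte Carlo approximant to the MLE; the
# abstract's result is that the Monte Carlo log likelihood hypoconverges to the
# true log likelihood.
```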
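For the "Geometric ergodicity of a random-walk Metropolis algorithm for a transformed density" entry above, the mechanics in one dimension: sample the density of Y, where X = Y^3, including the Jacobian term, then cube the draws to recover draws of X. The cube transformation and the toy target below are illustrative assumptions, not the paper's choices.

```r
library(mcmc)
lud.x <- function(x) -abs(x)            # toy light-tailed target on the original scale
# induced log density of Y where X = Y^3: lud.x(y^3) + log|dx/dy|, with dx/dy = 3 y^2
lud.y <- function(y) lud.x(y^3) + log(3) + 2 * log(abs(y))
out <- metrop(lud.y, initial = 1, nbatch = 1e4, scale = 0.5)
x.draws <- out$batch^3                  # transform the chain back to the original scale
mean(abs(x.draws))                      # MCMC estimate of E|X| under the original target
```

The point of the transformation is that the induced density of Y has much lighter tails than the original target, which is what makes the random-walk Metropolis chain better behaved.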
- Likelihood Ratio Tests and Inequality Constraints (School of Statistics, University of Minnesota, 1995-12-18). Geyer, Charles J.
  In likelihood ratio tests involving inequality-constrained hypotheses, the Neyman-Pearson test based on the least favourable parameter value in a compound null hypothesis can be extremely conservative. The ordinary parametric bootstrap is generally inconsistent and usually too liberal. Two methods of correcting the inconsistency of the parametric bootstrap are proposed: shrinking the constraint set toward the maximum likelihood estimate and superefficient estimation of the active set of constraints. Optimal shrinkage adjustment can be determined using bootstrap calibration. These methods are compared with the double bootstrap, the subsampling bootstrap, Bayes factors, and Bayesian P-values. The Bayesian methods are also too liberal if diffuse priors are used. (A toy illustration of this setting follows below.)

- Likelihood Ratio Tests and Inequality Constraints (University of Minnesota, 1995-12). Geyer, Charles J.
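For the two "Likelihood Ratio Tests and Inequality Constraints" entries above, the simplest instance of the setting: X_1, ..., X_n iid Normal(theta, 1) with null hypothesis theta <= 0. The toy example and the naive parametric bootstrap below are illustrative only; the reports' point is precisely that this naive scheme is inconsistent and needs shrinkage of the constraint set or superefficient estimation of the active constraints.

```r
# Likelihood ratio statistic for H0: theta <= 0 in the Normal(theta, 1) model.
lrt.stat <- function(x) {
  theta.hat  <- mean(x)                    # unconstrained MLE
  theta.null <- min(theta.hat, 0)          # MLE under the inequality constraint
  length(x) * (theta.hat - theta.null)^2   # 2 * log likelihood ratio
}
set.seed(42)
x <- rnorm(30)                             # toy data generated on the boundary theta = 0
obs <- lrt.stat(x)
theta.tilde <- min(mean(x), 0)             # constrained MLE used to simulate under H0
boot <- replicate(999, lrt.stat(rnorm(length(x), mean = theta.tilde)))
mean(boot >= obs)                          # naive parametric bootstrap P-value
```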