Pointwise and functional approximations in Monte Carlo maximum likelihood estimation

@article{Kuk1999PointwiseAF,
  title={Pointwise and functional approximations in Monte Carlo maximum likelihood estimation},
  author={Anthony Y. C. Kuk and Yuk W. Cheng},
  journal={Statistics and Computing},
  year={1999},
  volume={9},
  pages={91--99}
}
We consider the use of Monte Carlo methods to obtain maximum likelihood estimates for random effects models and distinguish between the pointwise and functional approaches. We explore the relationship between the two approaches and compare them with the EM algorithm. The functional approach is more ambitious, but the approximation is local in nature, which we demonstrate graphically using two simple examples. A remedy is to obtain successively better approximations of the relative likelihood…
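The pointwise approach summarized above can be illustrated on a toy model where the marginal likelihood is available in closed form. The following is only an illustrative sketch, not the authors' implementation: the model, observation, driving value, and sample size are all invented for the example. It uses Geyer-style importance sampling at a driving value ψ, exploiting the identity L(θ)/L(ψ) = E[f(y,U;θ)/f(y,U;ψ)] with U drawn from the posterior of the random effect at ψ.

```python
import numpy as np

# Toy random effects model: y | u ~ N(u, 1), u ~ N(theta, 1),
# so marginally y ~ N(theta, 2) and the exact likelihood is known.
rng = np.random.default_rng(0)
y, psi = 1.5, 0.0                          # observed data and driving value

# Posterior of u given y at the driving value is N((y + psi)/2, 1/2).
u = rng.normal((y + psi) / 2.0, np.sqrt(0.5), size=200_000)

def mc_rel_lik(theta):
    """Pointwise Monte Carlo estimate of L(theta)/L(psi) from draws at psi."""
    log_ratio = -0.5 * (u - theta) ** 2 + 0.5 * (u - psi) ** 2
    return float(np.mean(np.exp(log_ratio)))

def true_rel_lik(theta):
    """Exact relative likelihood from the known N(theta, 2) marginal."""
    return float(np.exp(-((y - theta) ** 2) / 4.0 + ((y - psi) ** 2) / 4.0))

print(mc_rel_lik(0.5), true_rel_lik(0.5))  # accurate near the driving value
print(mc_rel_lik(4.0), true_rel_lik(4.0))  # noisy far from it
```

The importance weights degenerate as θ moves away from ψ, which is the "local" behaviour the paper demonstrates graphically, and which motivates obtaining successively better approximations by updating the driving value.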
Automatic choice of driving values in Monte Carlo likelihood approximation via posterior simulations
  • A. Y. Kuk
  • Mathematics, Computer Science
  • Stat. Comput.
  • 2003
For smaller samples, this work proposes to use the current posterior as the next prior distribution to make the posterior simulations closer to the maximum likelihood estimate (MLE) and hence improve the likelihood approximation.
The use of approximating models in Monte Carlo maximum likelihood estimation
To obtain the likelihood of a non-Gaussian state-space model, Durbin and Koopman (1997, Biometrika, 84, 669-684) first calculate the likelihood under an approximating linear Gaussian model and then…
Monte Carlo approximation through Gibbs output in generalized linear mixed models
Geyer (J. Roy. Statist. Soc. 56 (1994) 291) proposed a Monte Carlo method to approximate the whole likelihood function. His method is limited by the need to choose a proper reference point. We attempt to improve…
Analysis of generalized linear mixed models via a stochastic approximation algorithm with Markov chain Monte-Carlo method
A new implementation of a stochastic approximation algorithm with Markov chain Monte Carlo method is investigated, which indicates that the proposed algorithm is an attractive alternative for problems with a large number of random effects or with high dimensional intractable integrals in the likelihood function.
Automatic Differentiation to Facilitate Maximum Likelihood Estimation in Nonlinear Random Effects Models
Maximum likelihood estimation in random effects models for non-Gaussian data is a computationally challenging task that currently receives much attention. This article shows that the estimation…
Automatic Approximation of the Marginal Likelihood in Nonlinear Hierarchical Models
We show that the fitting of nonlinear hierarchical random effects models by maximum likelihood can be made automatic to the same extent that Bayesian model fitting can be automated by the program…
Combining MM-Algorithms and MCMC Procedures for Maximum Likelihood Estimation in Mallows-Bradley-Terry Models
This paper is devoted to the computation of the maximum likelihood estimates of the Mallows-Bradley-Terry ranking model parameters. The maximum likelihood method is avoided because of the normalizing…
Laplace Importance Sampling for Generalized Linear Mixed Models
It is well known that the standard Laplace approximation of the integrated marginal likelihood function of a random effects model may be invalid if the dimension of the integral increases with the…
Automatic approximation of the marginal likelihood in non-Gaussian hierarchical models
Fitting of non-Gaussian hierarchical random effects models by approximate maximum likelihood can be made automatic to the same extent that Bayesian model fitting can be automated by the program BUGS.
Properties and comparison of estimation methods in a log-linear generalized linear mixed model
Generalized linear mixed models have become a popular choice for modeling correlated and non-normal response data, with an increasing number of methods available for fitting these models. However,…

References

Showing 1-10 of 30 references
On the Convergence of Monte Carlo Maximum Likelihood Calculations
SUMMARY Monte Carlo maximum likelihood for normalized families of distributions can be used for an extremely broad class of models. Given any family { h_θ : θ ∈ Θ } of non-negative integrable…
The Monte Carlo Newton-Raphson algorithm
It is shown that the Monte Carlo Newton-Raphson algorithm is a viable alternative to the Monte Carlo EM algorithm for finding maximum likelihood estimates based on incomplete data. Both Monte Carlo…
Maximum Likelihood Algorithms for Generalized Linear Mixed Models
Abstract Maximum likelihood algorithms are described for generalized linear mixed models. I show how to construct a Monte Carlo version of the EM algorithm, propose a Monte Carlo Newton-Raphson…
Hierarchical Generalized Linear Models
We consider hierarchical generalized linear models which allow extra error components in the linear predictors of generalized linear models. The distribution of these components is not restricted to…
Monte Carlo EM Estimation for Time Series Models Involving Counts
Abstract The observations in parameter-driven models for time series of counts are generated from latent unobservable processes that characterize the correlation structure. These models result in…
Approximate inference in generalized linear mixed models
Statistical approaches to overdispersion, correlated errors, shrinkage estimation, and smoothing of regression relationships may be encompassed within the framework of the generalized linear mixed…
Constrained Monte Carlo Maximum Likelihood for Dependent Data
Maximum likelihood estimates (MLEs) in autologistic models and other exponential family models for dependent data can be calculated with Markov chain Monte Carlo methods (the Metropolis algorithm or…
A Monte Carlo Implementation of the EM Algorithm and the Poor Man's Data Augmentation Algorithms
Abstract The first part of this article presents the Monte Carlo implementation of the E step of the EM algorithm. Given the current guess to the maximizer of the posterior distribution, latent data…
A gradient algorithm locally equivalent to the EM algorithm
In many problems of maximum likelihood estimation, it is impossible to carry out either the E-step or the M-step of the EM algorithm. The present paper introduces a gradient algorithm that is closely…
Estimation in generalized linear models with random effects
SUMMARY A conceptually very simple but general algorithm for the estimation of the fixed effects, random effects, and components of dispersion in generalized linear models with random effects is…