
If the distribution P is considered random and distributed according to a prior, as it is in Bayesian inference, then the posterior distribution is the conditional distribution of P given the observations. The prior is, of course, a measure on some σ-field on the set of candidate distributions, and we must assume that the expressions in the display are well defined. In particular, we assume that the…

We study the rates of convergence of the maximum likelihood estimator (MLE) and posterior distribution in density estimation problems, where the densities are location or location-scale mixtures of normal distributions with the scale parameter lying between two positive numbers. The true density is also assumed to lie in this class with the true mixing…

- Subhashis Ghosal
- 2001

Mixture models for density estimation provide a very useful setup for the Bayesian or the maximum likelihood approach. For a density on the unit interval, mixtures of beta densities form a flexible model. The class of Bernstein densities is a much smaller subclass of the beta mixtures defined by Bernstein polynomials, which can approximate any continuous…
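To make the Bernstein subclass concrete: a Bernstein density of order k built from weights w₁, …, w_k (summing to 1) is the beta mixture Σⱼ wⱼ Beta(j, k − j + 1). The sketch below is our own illustration in plain Python, not code from the paper; the function names are ours.

```python
import math

def beta_pdf(x, a, b):
    """Beta(a, b) density at x in (0, 1)."""
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def bernstein_density(x, weights):
    """Bernstein density of order k = len(weights): the mixture
    sum_j weights[j-1] * Beta(j, k - j + 1) evaluated at x."""
    k = len(weights)
    return sum(w * beta_pdf(x, j, k - j + 1)
               for j, w in enumerate(weights, start=1))
```

A handy sanity check: with equal weights 1/k, the binomial theorem collapses the mixture to the uniform density on (0, 1) exactly.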

We consider the asymptotic behavior of posterior distributions and Bayes estimators based on observations that are not required to be independent or identically distributed. We give general results on the rate of convergence of the posterior measure relative to distances derived from a testing criterion. We then specialize our results to independent,…

This article describes a Bayesian approach to estimating the spectral density of a stationary time series. A nonparametric prior on the spectral density is described through Bernstein polynomials. Because the actual likelihood is very complicated, a pseudoposterior distribution is obtained by updating the prior using the Whittle likelihood. A Markov chain…
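For orientation, the Whittle likelihood replaces the exact Gaussian likelihood by a product over Fourier frequencies, pairing the periodogram I(λⱼ) with a candidate spectral density f(λⱼ): the log pseudo-likelihood is −Σⱼ [log f(λⱼ) + I(λⱼ)/f(λⱼ)]. A minimal sketch under those definitions (plain Python; the function names are ours, not the paper's):

```python
import cmath
import math

def periodogram(x):
    """Periodogram I(lam_j) = |sum_t x_t e^{-i lam_j t}|^2 / (2 pi n)
    at Fourier frequencies lam_j = 2 pi j / n, j = 1 .. floor((n - 1) / 2)."""
    n = len(x)
    out = []
    for j in range(1, (n - 1) // 2 + 1):
        lam = 2 * math.pi * j / n
        d = sum(xt * cmath.exp(-1j * lam * t) for t, xt in enumerate(x))
        out.append(abs(d) ** 2 / (2 * math.pi * n))
    return out

def whittle_loglik(x, spec):
    """Whittle log pseudo-likelihood -sum_j [log f(lam_j) + I(lam_j) / f(lam_j)]
    for a candidate spectral density spec(lam)."""
    n = len(x)
    ll = 0.0
    for j, i_j in enumerate(periodogram(x), start=1):
        f = spec(2 * math.pi * j / n)
        ll -= math.log(f) + i_j / f
    return ll
```

In the paper's setting, `spec` would be a Bernstein-polynomial expansion of the spectral density; here any positive function of frequency can be plugged in.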

- Subhashis Ghosal
- 2003

We consider the problem of estimating the mean of an infinite-dimensional normal distribution from the Bayesian perspective. Under the assumption that the unknown true mean satisfies a “smoothness condition,” we first derive the convergence rate of the posterior distribution for a prior that is the infinite product of certain normal distributions and compare…
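In the white-noise formulation of this problem one observes Xᵢ = θᵢ + n^(−1/2)εᵢ and places an independent N(0, τᵢ²) prior on each coordinate, so the posterior factorizes into coordinatewise conjugate updates. A sketch under those assumptions (names ours; the paper's actual choice of prior variances is more specific than this generic update):

```python
def coordinatewise_posterior(x, tau2, n):
    """Conjugate update for the Gaussian sequence model X_i ~ N(theta_i, 1/n)
    with independent priors theta_i ~ N(0, tau2[i]).

    Returns posterior means and variances:
      theta_i | X_i ~ N(s_i * X_i, s_i / n),  s_i = tau2[i] / (tau2[i] + 1/n).
    """
    means, variances = [], []
    for xi, t in zip(x, tau2):
        s = t / (t + 1.0 / n)   # shrinkage factor toward the prior mean 0
        means.append(s * xi)
        variances.append(s / n)
    return means, variances
```

The shrinkage factors sᵢ show the mechanism the rate analysis quantifies: diffuse coordinates follow the data, tight ones are pulled to the prior mean.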

- Subhashis Ghosal
- 1999

Exponential families arise naturally in statistical modelling and the maximum likelihood estimate (MLE) is consistent and asymptotically normal for these models (Berk [2]). In practice, often one needs to consider models with a large number of parameters, particularly if the sample size is large; see Huber [14], Haberman [13] and Portnoy [18, 21]. One may…

We consider the problem of testing monotonicity of the regression function in a nonparametric regression model. We introduce test statistics that are functionals of a certain natural U-process. We study the limiting distribution of these test statistics through strong approximation methods and the extreme value theory for Gaussian processes. We show that…

We consider nonparametric Bayesian estimation of a probability density p based on a random sample of size n from this density using a hierarchical prior. The prior consists, for instance, of prior weights on the regularity of the unknown density combined with priors that are appropriate given that the density has this regularity. More generally,…

- Subhashis Ghosal
- 1996

We study consistency and asymptotic normality of posterior distributions of the regression coefficient in a linear model when the dimension of the parameter grows with the sample size. Under certain growth restrictions on the dimension (depending on the design matrix), we show that the posterior distributions concentrate in neighbourhoods of the true…
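With known noise variance and a Gaussian prior, the posterior in this linear model is available in closed form, whatever the dimension. A small sketch (plain Python; σ² = 1 and an independent standard-normal prior on each coefficient are our simplifying assumptions, not the paper's conditions) computes the posterior mean (XᵀX + I)⁻¹Xᵀy:

```python
def solve(A, b):
    """Solve the small dense system A x = b by Gaussian elimination
    with partial pivoting."""
    p = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, p):
            f = M[r][col] / M[col][col]
            for c in range(col, p + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * p
    for r in range(p - 1, -1, -1):
        x[r] = (M[r][p] - sum(M[r][c] * x[c] for c in range(r + 1, p))) / M[r][r]
    return x

def posterior_mean(X, y):
    """Posterior mean of beta in y = X beta + eps, eps ~ N(0, I),
    under the prior beta ~ N(0, I):  beta | y ~ N((X'X + I)^-1 X'y, (X'X + I)^-1)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[t][i] * X[t][j] for t in range(n)) + (1.0 if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(X[t][i] * y[t] for t in range(n)) for i in range(p)]
    return solve(A, b)
```

The growing-dimension results concern how fast p may increase with n while this posterior still concentrates around the true coefficient vector; the closed form itself is dimension-agnostic.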