
- Jingjing Wu, Rohana J. Karunamuni, Biao Zhang
- J. Multivariate Analysis
- 2010

- James T. Ding, Rohana J. Karunamuni
- IJMTM
- 2008

Consider an experiment yielding an observable random quantity X whose distribution F_θ depends on a parameter θ, with θ being distributed according to some distribution G_0. We study the Bayesian estimation problem of θ under squared error loss function based on X, as well as some additional data available from other similar experiments according to an… (More)
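
The squared-error-loss setup above can be sketched with a minimal conjugate example (the normal-normal pair below is an assumption for illustration, not the paper's model): under squared error loss, the Bayes estimator of θ is the posterior mean.

```python
# Conjugate normal-normal illustration (an assumption for this sketch, not
# the paper's model): theta ~ N(mu0, tau2) plays the role of G_0, and
# X | theta ~ N(theta, sigma2) plays the role of F_theta. Under squared
# error loss the Bayes estimator is the posterior mean E[theta | X].

def bayes_estimate(x, mu0, tau2, sigma2):
    """Posterior mean of theta given a single observation x."""
    w = tau2 / (tau2 + sigma2)   # weight on the data; the rest goes to the prior
    return w * x + (1.0 - w) * mu0

# Equal prior and sampling variances put the estimate halfway between
# the prior mean and the observation:
print(bayes_estimate(x=4.0, mu0=0.0, tau2=1.0, sigma2=1.0))  # 2.0
```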

- James T. Ding, Rohana J. Karunamuni
- ICCSA
- 2003

- Rohana J. Karunamuni, Jingjing Wu
- Computational Statistics & Data Analysis
- 2011

- Jingjing Wu, Rohana J. Karunamuni
- J. Multivariate Analysis
- 2012

Minimum distance techniques have become increasingly important tools for solving statistical estimation and inference problems. In particular, the successful application of the Hellinger distance approach to fully parametric models is well known. The corresponding optimal estimators, known as minimum Hellinger distance estimators, achieve efficiency at the… (More)
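
A toy version of the minimum Hellinger distance idea can be sketched as follows (the N(θ, 1) family, Gaussian kernel density estimate, and plain grid search are illustrative assumptions, not the estimators or algorithms studied in the paper): fit θ by minimizing the Hellinger distance between the parametric density and a nonparametric density estimate of the data.

```python
import math

# Toy minimum Hellinger distance estimator (illustrative assumptions: the
# parametric family is N(theta, 1), the nonparametric fit is a Gaussian KDE,
# and the minimization is a plain grid search).

def normal_pdf(x, mu, sd=1.0):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def kde(x, data, bandwidth):
    """Gaussian kernel density estimate at a single point x."""
    return sum(normal_pdf(x, xi, bandwidth) for xi in data) / len(data)

def mhd_estimate(data, bandwidth=0.5):
    """Minimize the squared Hellinger distance between N(theta, 1) and the KDE."""
    step = 0.05
    grid = [i * step for i in range(-200, 201)]          # integration grid [-10, 10]
    root_kde = [math.sqrt(kde(x, data, bandwidth)) for x in grid]

    def dist2(theta):
        # 0.5 * integral of (sqrt(f_theta) - sqrt(kde))^2, via a Riemann sum
        return 0.5 * step * sum(
            (math.sqrt(normal_pdf(x, theta)) - r) ** 2
            for x, r in zip(grid, root_kde)
        )

    candidates = [i * step for i in range(-100, 101)]    # theta grid [-5, 5]
    return min(candidates, key=dist2)

sample = [-0.3, 0.1, 0.4, 1.2, 0.6, -0.1, 0.8, 0.2]
print(mhd_estimate(sample))
```

For well-behaved data the estimate lands near the sample mean, while the Hellinger criterion is what gives these estimators their robustness to outliers.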

- Rohana J. Karunamuni, Qingguo Tang, Bangxin Zhao
- Computational Statistics & Data Analysis
- 2015

Let (Y_1, θ_1), …, (Y_n, θ_n) be independent real-valued random vectors, where Y_i, given θ_i, is distributed according to a distribution depending only on θ_i for i = 1, …, n. In this paper, best linear unbiased predictors (BLUPs) of the θ_i's are investigated. We show that BLUPs of the θ_i's do not exist in certain situations. Furthermore, we present a general… (More)
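
In the textbook special case where a BLUP does exist (an assumption for this sketch; the paper treats a far more general setting and shows BLUPs can fail to exist), the predictor shrinks each observation toward the common mean by a ratio of variances.

```python
# Textbook special case (assumed for illustration only): Y_i = theta_i + e_i
# with E[theta_i] = mu, Var(theta_i) = tau2, and Var(e_i) = sigma2. The BLUP
# of theta_i shrinks Y_i toward the common mean mu by tau2 / (tau2 + sigma2).

def blup(y, mu, tau2, sigma2):
    """Best linear unbiased predictors of the theta_i's in this simple model."""
    shrink = tau2 / (tau2 + sigma2)
    return [mu + shrink * (yi - mu) for yi in y]

print(blup([2.0, -1.0, 5.0], mu=1.0, tau2=3.0, sigma2=1.0))  # [1.75, -0.5, 4.0]
```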

- Rohana J. Karunamuni, Laisheng Wei
- Int. J. Math. Mathematical Sciences
- 2006

We investigate the empirical Bayes estimation problem of multivariate regression coefficients under squared error loss function. In particular, we consider the regression model Y = Xβ + ε, where Y is an m-vector of observations, X is a known m × k matrix, β is an unknown k-vector, and ε is an m-vector of unobservable random variables. The problem is squared… (More)
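
For context, the classical baseline for the model Y = Xβ + ε is the least squares estimate β̂ = (XᵀX)⁻¹XᵀY (shown below for a two-column design; this is the standard OLS formula, not the empirical Bayes estimators the paper develops).

```python
# Baseline least-squares fit for Y = X beta + eps (illustrative; the paper
# studies empirical Bayes estimators of beta, not plain OLS). Solves the
# 2x2 normal equations X^T X beta = X^T Y for an m x 2 design matrix.

def ols_2col(X, Y):
    """OLS coefficients for an m x 2 design matrix X, via the normal equations."""
    s11 = sum(r[0] * r[0] for r in X)
    s12 = sum(r[0] * r[1] for r in X)
    s22 = sum(r[1] * r[1] for r in X)
    t1 = sum(r[0] * y for r, y in zip(X, Y))
    t2 = sum(r[1] * y for r, y in zip(X, Y))
    det = s11 * s22 - s12 * s12
    return [(s22 * t1 - s12 * t2) / det, (s11 * t2 - s12 * t1) / det]

# Intercept-and-slope design with exact data Y = 2 + 3x:
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Y = [2.0, 5.0, 8.0, 11.0]
print(ols_2col(X, Y))  # [2.0, 3.0]
```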

- Qingguo Tang, Rohana J. Karunamuni
- J. Multivariate Analysis
- 2013