• Publications
When is B−A− a generalized inverse of AB?
In practice, factorizations of a generalized inverse often arise from factorizations of the matrix which is to be inverted. In addition to full rank factorizations, normal factorizations and singular …
On Inequality Constrained Generalized Least Squares Selections in the General Possibly Singular …
This paper deals with the general possibly singular linear model. It is assumed that in addition to the sample information we have some nonstochastic prior information concerning the unknown …
On extensions of Cramer's rule for solutions of restricted linear systems
For the unique solution of a special consistent restricted linear system Ax = b, x ∊ M, we derive two different determinantal forms, which both reduce to Cramer's classical rule if A is nonsingular. The …
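The paper's restricted determinantal forms are not reproduced in the snippet above, but the nonsingular special case they reduce to, Cramer's classical rule, can be sketched as follows (an illustrative implementation, not code from the paper):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b for square nonsingular A via Cramer's rule:
    x_i = det(A_i) / det(A), where A_i is A with column i replaced by b."""
    n = A.shape[0]
    det_A = np.linalg.det(A)
    x = np.empty(n)
    for i in range(n):
        A_i = A.copy()
        A_i[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(A_i) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))      # agrees with np.linalg.solve(A, b)
```

The determinantal forms in the paper generalize exactly this formula to consistent systems restricted to a subspace M.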
A BLUE decomposition in the general linear regression model
Abstract In this note we consider the general linear regression model (y, Xβ, V | R₂β₂ = r), where the block-partitioned regressor matrix X = (X₁ X₂) may be deficient in column rank, the dispersion matrix …
The General Expressions for the Moments of the Stochastic Shrinkage Parameters of the Liu Type Estimator
ABSTRACT One of the problems with the Liu estimator is choosing an appropriate value for the unknown biasing parameter d. In this article we consider the optimum value for d and give an upper bound for the …
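The article's moment expressions are not shown in the snippet, but the estimator it concerns can be sketched numerically. This is a minimal illustration of the standard Liu estimator with simulated data, not code or data from the article; X, y, and d here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))   # simulated regressors
y = rng.standard_normal(20)        # simulated response
d = 0.5                            # the biasing parameter whose choice the article studies

# Ordinary least squares estimate.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Liu estimator: beta_d = (X'X + I)^{-1} (X'y + d * beta_ols).
# At d = 1 it recovers OLS; smaller d shrinks the estimate.
p = X.shape[1]
beta_d = np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * beta_ols)
```

The problem the abstract raises is that d is unknown and must be chosen from the data, which makes the resulting shrinkage parameter stochastic.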
Generalized inversion and weak bi-complementarity
In the literature, complementary matrices have been studied because of their importance in statistics. The present paper shows how this notion can be extended to the concept of weak (bi-) …
More on partitioned possibly restricted linear regression
This paper deals with the general partitioned linear regression model where the regressor matrix X = (X₁ X₂) may be deficient in column rank, the dispersion matrix V is possibly …
On the matrix monotonicity of generalized inversion
Let A, B be two matrices of the same order. We write A > B (A ⩾ B) iff A − B is a positive (semi-)definite Hermitian matrix. In this paper the well-known result that if A > B > θ, then B⁻¹ > A⁻¹ > θ (cf. Bellman …
Two matrix-based proofs that the linear estimator Gy is the best linear unbiased estimator
Abstract We offer two matrix-based proofs of the well-known result that the two conditions GX = X and GVQ = 0 are necessary and sufficient for Gy to be the traditional best linear unbiased estimator …
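The two characterizing conditions above can be checked numerically in a simple special case. This sketch (not from the paper) assumes V = I and X of full column rank, so that the OLS fit G = X(XᵀX)⁻¹Xᵀ is the BLUE of Xβ and Q = I − G projects onto the orthogonal complement of the column space of X:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 2))    # full column rank with probability 1
V = np.eye(6)                      # assumed nonsingular dispersion matrix

# With V = I the BLUE of Xb is the hat-matrix projection G y.
G = X @ np.linalg.inv(X.T @ X) @ X.T

# Q projects onto the orthogonal complement of the column space of X.
Q = np.eye(6) - G

# The two characterizing conditions: GX = X (unbiasedness for Xb)
# and GVQ = 0 (optimality).
print(np.allclose(G @ X, X))       # True
print(np.allclose(G @ V @ Q, 0))   # True
```

For singular V or rank-deficient X the same two conditions characterize the BLUE, which is the generality the paper works in.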
More on BLU estimation in regression models with possibly singular covariances
Abstract Consider the general linear regression model E(y) = Aβ, Cov(y) = V, where y is an n × 1 vector of observations, A is a known real n × m matrix, and V is a known dispersion matrix. No rank …