
Covariance estimation for high dimensional signals is a classically difficult problem in statistical signal analysis and machine learning. In this paper, we propose a maximum likelihood (ML) approach to covariance estimation, which employs a novel non-linear sparsity constraint. More specifically, the covariance is constrained to have an eigen decomposition…

A variety of problems in remote sensing require that a covariance matrix be accurately estimated, often from a limited number of data samples. We investigate the utility of several variants of a recently introduced covariance estimator—the sparse matrix transform (SMT), a shrinkage-enhanced SMT, and a graph-constrained SMT—in the context of several of…
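As context for the shrinkage-enhanced variant mentioned above, here is a minimal sketch of linear shrinkage toward a scaled identity target. The function name and the fixed shrinkage weight `alpha` are illustrative assumptions, not the paper's estimator, which would select the weight from the data:

```python
import numpy as np

def shrinkage_covariance(X, alpha):
    """Convex combination of the sample covariance and a scaled identity.

    X: (n, p) array of n samples of a p-dimensional signal.
    alpha: shrinkage weight in [0, 1] (here supplied by hand; a real
    estimator would choose it, e.g. by cross-validation).
    """
    n, p = X.shape
    S = np.cov(X, rowvar=False)              # (p, p) sample covariance
    target = (np.trace(S) / p) * np.eye(p)   # scaled-identity shrinkage target
    return (1.0 - alpha) * S + alpha * target
```

Even when n < p leaves the sample covariance singular, any alpha > 0 makes the blended estimate strictly positive definite, which is the practical appeal of shrinkage in sample-starved remote sensing settings.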

We describe the design of a dual-issue single-instruction, multiple-data-like (SIMD-like) extension of the IBM PowerPC 440 floating-point unit (FPU) core and the compiler and algorithmic techniques to exploit it. This extended FPU is targeted at both the IBM massively parallel Blue Gene/L machine and the more pervasive embedded platforms. We discuss the…

We describe the design, implementation, and evaluation of a dual-issue SIMD-like extension of the PowerPC 440 floating-point unit (FPU) core. This extended FPU is targeted at both IBM's massively parallel Blue Gene/L machine and more pervasive embedded platforms. It has several novel features, such as a computational crossbar and cross-load/store…

Recently, the Sparse Matrix Transform (SMT) has been proposed as a tool for estimating the eigen-decomposition of high dimensional data vectors [1]. The SMT approach has two major advantages: First, it can improve the accuracy of the eigen-decomposition, particularly when the number of observations, n, is less than the vector dimension, p. Second, the…
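The SMT models the eigen-transformation as a short product of Givens rotations. The following is a minimal sketch in that spirit, using a greedy Jacobi-style rule that repeatedly zeros the largest off-diagonal entry of the sample covariance; the pair-selection rule and all names are illustrative simplifications, not the algorithm of [1]:

```python
import numpy as np

def givens_eigendecomposition(S, K):
    """Approximate eigen-decomposition of a symmetric matrix S as a
    product of at most K Givens rotations (SMT-flavored sketch).

    Returns (E, d) with E a product of K plane rotations and d the
    diagonal of the rotated matrix, so that E @ diag(d) @ E.T ~ S.
    """
    p = S.shape[0]
    E = np.eye(p)
    S = S.copy()
    for _ in range(K):
        # Greedily pick the pair (i, j) with the largest off-diagonal.
        A = np.abs(np.triu(S, 1))
        i, j = np.unravel_index(np.argmax(A), A.shape)
        if A[i, j] == 0:
            break  # already diagonal
        # Classical Jacobi angle that zeros S[i, j].
        theta = 0.5 * np.arctan2(2 * S[i, j], S[i, i] - S[j, j])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(p)
        G[i, i] = c; G[j, j] = c; G[i, j] = -s; G[j, i] = s
        S = G.T @ S @ G   # rotate the working covariance
        E = E @ G         # accumulate the transform
    return E, np.diag(S)
```

Restricting K to a small number is what makes the transform "sparse": the eigenvector estimate has few free parameters, which is what regularizes the estimate when n < p.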

This paper addresses two issues related to the detection of hyperspectral anomalies. The first issue is the evaluation of anomaly detector performance even when labeled data is not available. The second issue is the estimation of the covariance structure of the data in local detection methods, such as the RX detector, when the number of available training…
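For context, the RX detector scores each pixel spectrum by its squared Mahalanobis distance from the background mean. A minimal global-RX sketch, assuming rows of `X` are pixel spectra (the names are illustrative, and a practical implementation would substitute one of the regularized covariance estimates the paper studies):

```python
import numpy as np

def rx_scores(X):
    """Global RX anomaly detector: squared Mahalanobis distance of each
    pixel spectrum from the estimated background distribution.

    X: (n, p) array of n pixel spectra with p bands.
    """
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    # pinv guards against a singular estimate when training samples are
    # scarce relative to the band count -- exactly the regime the paper
    # addresses with better covariance estimators.
    Sinv = np.linalg.pinv(S)
    D = X - mu
    return np.einsum('ij,jk,ik->i', D, Sinv, D)
```

In a local variant the mean and covariance would be re-estimated from a sliding window around each pixel, which shrinks the effective training set and makes the covariance-estimation problem even harder.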

The BlueGene/L supercomputer will use system-on-a-chip integration and a highly scalable cellular architecture to deliver 360 Teraflops of peak computing power. With 65,536 compute nodes, BlueGene/L represents a new level of scalability for parallel systems. As such, it is natural for many scalability challenges to arise. In this paper, we discuss…

BlueGene/L is a massively parallel computer system with 65,536 dual-processor compute nodes. The peak performance of BlueGene/L is in excess of 360 TFLOP/s if both processor cores in a node are used for computation. The main challenge of deploying this dual-core mode of operation is that the L1 caches in each core are not hardware coherent. This forces a…