Thomas Powers

Recurrent neural networks are powerful models for processing sequential data, but they are generally plagued by vanishing and exploding gradient problems. Unitary recurrent neural networks (uRNNs), which use unitary recurrence matrices, have recently been proposed as a means to avoid these issues. However, in previous experiments, the recurrence matrices …
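As a rough illustration of why unitary recurrence matrices help, the sketch below builds a unitary matrix by exponentiating a skew-Hermitian matrix and checks that it preserves the norm of the hidden state. This is a generic construction for illustration only, not necessarily the parameterization used in the paper, and all dimensions are arbitrary placeholders.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical illustration: one way to obtain a unitary recurrence matrix is
# to exponentiate a skew-Hermitian matrix (a generic construction, not
# necessarily the paper's parameterization).
rng = np.random.default_rng(0)
n = 8                                   # hidden-state dimension (arbitrary)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = A - A.conj().T                      # skew-Hermitian: S^H = -S
W = expm(S)                             # exp of a skew-Hermitian matrix is unitary

# W^H W is the identity, so repeated multiplication by W neither shrinks
# nor blows up the hidden state, avoiding vanishing/exploding gradients
# through the linear part of the recurrence.
assert np.allclose(W.conj().T @ W, np.eye(n), atol=1e-8)

# A recurrence step h_t = f(W h_{t-1} + V x_t) preserves the norm of the
# linear term because ||W h|| = ||h|| for unitary W.
h = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.linalg.norm(W @ h), np.linalg.norm(h))   # equal up to rounding
```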
We introduce a novel speech enhancement algorithm for removing reverberation and noise from recorded speech data. Our approach centers on a single-channel minimum mean-square error log-spectral amplitude (MMSE-LSA) estimator, which applies gain coefficients in a time-frequency domain to suppress noise and reverberation. The main contribution of this …
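The following sketch shows the general pattern of single-channel enhancement by applying gain coefficients in the time-frequency (STFT) domain. The gain rule here is a toy placeholder; the actual MMSE-LSA gain, which depends on a priori and a posteriori SNR estimates, is not reproduced, and all function names and parameters are assumptions for illustration.

```python
import numpy as np
from scipy.signal import stft, istft

def apply_tf_gain(noisy, fs, gain_fn):
    """Apply a time-frequency gain to a noisy signal (generic enhancement pattern).

    gain_fn maps the magnitude spectrogram to per-bin gains in [0, 1]; the
    MMSE-LSA gain rule itself is not reproduced here.
    """
    f, t, X = stft(noisy, fs=fs, nperseg=512)
    G = gain_fn(np.abs(X))                 # per-bin suppression gains
    _, enhanced = istft(G * X, fs=fs, nperseg=512)
    return enhanced

def toy_gain(mag, noise_floor=1e-2):
    # Placeholder Wiener-like gain from a crude noise-floor estimate.
    snr = np.maximum(mag**2 - noise_floor, 0.0) / (mag**2 + 1e-12)
    return np.clip(snr, 0.1, 1.0)

fs = 16000
noisy = np.random.randn(fs)                # stand-in for a recorded utterance
enhanced = apply_tf_gain(noisy, fs, toy_gain)
```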
We develop a framework to select a subset of sensors from a field in which the sensors have an ingrained independence structure. Given an arbitrary independence pattern, we construct a graph that denotes pairwise independence between sensors, which means those sensors may operate simultaneously. The set of all fully-connected subgraphs (cliques) of this …
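A minimal sketch of the graph construction described above, assuming a hypothetical five-sensor field and using networkx for clique enumeration (find_cliques returns maximal cliques; the full set of cliques would be obtained by also taking their subsets):

```python
import networkx as nx

# Hypothetical pairwise-independence relation among five sensors.
# An edge means the two sensors may operate simultaneously.
independent_pairs = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]

G = nx.Graph()
G.add_nodes_from(range(5))
G.add_edges_from(independent_pairs)

# Each fully connected subgraph (clique) is a set of mutually independent
# sensors that could be scheduled together.
cliques = list(nx.find_cliques(G))   # maximal cliques, e.g. [[0, 1, 2], [2, 3], [3, 4]]
print(cliques)
```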
This paper focuses on sensor management for distributed sensor fields in deep water, investigating the sensor placement problem for the barrier scenario. Field performance is often based solely on sensor coverage or probability-of-detection evaluations, which provide only a fleeting snapshot of how the field is performing at a particular time. …
Recurrent neural networks (RNNs) are powerful and effective for processing sequential data. However, RNNs are usually considered “black box” models whose internal structure and learned parameters are not interpretable. In this paper, we propose an interpretable RNN based on the sequential iterative soft-thresholding algorithm (SISTA) for solving the …
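To make the building block concrete, here is the soft-thresholding operator together with a generic ISTA-style update applied sequentially and warm-started from the previous time step's state. This is only an illustrative stand-in: the exact SISTA update unfolded in the paper, including how consecutive sparse codes are coupled through the signal model, is not reproduced, and the dictionary and dimensions are placeholders.

```python
import numpy as np

def soft_threshold(x, lam):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sista_like_step(h_prev, y_t, D, lam=0.1, step=0.1, n_iters=3):
    """Generic ISTA-style sparse update for one time step (illustration only)."""
    h = h_prev.copy()                  # warm-start from the previous time step
    for _ in range(n_iters):
        grad = D.T @ (D @ h - y_t)     # gradient of 0.5 * ||y_t - D h||^2
        h = soft_threshold(h - step * grad, step * lam)
    return h

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))          # dictionary (placeholder dimensions)
h = np.zeros(50)
for y_t in rng.standard_normal((5, 20)):   # a short sequence of observations
    h = sista_like_step(h, y_t, D)
```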
Historically, sparse methods and neural networks, particularly modern deep learning methods, have been relatively disparate areas. Sparse methods are typically used for signal enhancement, compression, and recovery, usually in an unsupervised framework, while neural networks commonly rely on a supervised training set. In this paper, we use the specific …
In this paper we develop a framework to select a subset of sensors from a field in which the sensors have an ingrained independence structure. Given an arbitrary independence pattern, we construct a graph that denotes pairwise independence between sensors, which means those sensors can operate simultaneously. The set of all fully-connected subgraphs …
In this paper, we propose a novel recurrent neural network architecture for speech separation. This architecture is constructed by unfolding the iterations of a sequential iterative soft-thresholding algorithm (ISTA) that solves the optimization problem for sparse nonnegative matrix factorization (NMF) of spectrograms. We name this network architecture deep …
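A compact sketch of the unfolding idea, assuming a fixed nonnegative spectral dictionary and shared, untrained parameters; in the actual architecture the unfolded iterations become network layers with learned parameters, which is not shown here.

```python
import numpy as np

def nonneg_soft_threshold(x, lam):
    """Nonnegative soft-thresholding: prox of the l1 norm restricted to x >= 0."""
    return np.maximum(x - lam, 0.0)

def unfolded_ista(v, W, lam=0.1, n_layers=5):
    """Unfold ISTA iterations for sparse nonnegative coding of one spectrogram frame."""
    step = 1.0 / np.linalg.norm(W, 2) ** 2       # 1/L, with L the Lipschitz constant of the gradient
    h = np.zeros(W.shape[1])
    for _ in range(n_layers):                    # each iteration corresponds to one "layer"
        grad = W.T @ (W @ h - v)                 # gradient of 0.5 * ||v - W h||^2
        h = nonneg_soft_threshold(h - step * grad, step * lam)
    return h

rng = np.random.default_rng(0)
W = np.abs(rng.standard_normal((257, 40)))       # nonnegative spectral dictionary (placeholder)
v = np.abs(rng.standard_normal(257))             # one magnitude-spectrogram frame (placeholder)
h = unfolded_ista(v, W)                          # sparse nonnegative activations
```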