Sparse coding, an unsupervised feature learning technique, is often used as a basic building block for constructing deep networks. Convolutional sparse coding has been proposed in the literature to overcome the scalability issues that sparse coding techniques face on large images. In this paper we propose an efficient algorithm, based on the fast iterative shrinkage…
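The fast iterative shrinkage idea referenced above can be illustrated on the standard (non-convolutional) sparse coding subproblem, the lasso. This is a generic FISTA sketch under my own naming (`fista`, `soft_threshold` are hypothetical identifiers), not the convolutional algorithm proposed in the paper:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def fista(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)           # gradient of the smooth data term
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step is what distinguishes FISTA from plain ISTA and gives the accelerated O(1/k²) convergence rate.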
In this paper, we consider the problem of joint sensor calibration and target signature estimation using distributed measurements from a large-scale wireless sensor network with random link variations. Specifically, we propose a new Distributed Space-Alternating Generalized Expectation Maximization (EM) algorithm, DSAGE, which can estimate the (constrained)…
We consider the problem of joint enhancement of multichannel images with pixel based constraints on the multichannel data. Previous work by Çetin and Karl introduced nonquadratic regularization methods for SAR image enhancement using sparsity enforcing penalty terms. We formulate an optimization problem that jointly enhances complex-valued…
We consider the problem of joint sensor calibration and target signature estimation using distributed measurements over a large-scale sensor network. Specifically, we develop a new Distributed Signature Learning and Node Calibration algorithm (D-SLANC), which simultaneously estimates the source signal's signature and the calibration parameters local to…
Deep learning methods have resulted in significant performance improvements in several application domains, and as a result several software frameworks have been developed to facilitate their implementation. This paper presents a comparative study of four deep learning frameworks, namely Caffe, Neon, Theano, and Torch, on three aspects: extensibility, hardware…
In this paper, we focus on the task of learning influential parameters in unsteady and dynamic environments. Such environments often occur in the ramp-up phase of manufacturing. We propose a novel regularization-based framework, called Distributed Dynamic Elastic Nets (DDEN), for this problem and formulate it as a convex optimization…
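The elastic-net regularization underlying such a framework combines an l1 (sparsity) and an l2 (ridge) penalty. A minimal proximal-gradient sketch of the plain, non-distributed elastic net follows; the function name and parameters are my own illustration, not the authors' DDEN algorithm:

```python
import numpy as np

def elastic_net_prox_grad(A, b, lam1, lam2, n_iter=300):
    """Minimize 0.5*||A x - b||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2
    by proximal gradient descent (ISTA-style iterations)."""
    # Lipschitz constant of the smooth part (data term + ridge term).
    L = np.linalg.norm(A, 2) ** 2 + lam2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b) + lam2 * x    # gradient of smooth terms
        v = x - grad / L                        # gradient step
        # Soft-thresholding handles the nonsmooth l1 penalty.
        x = np.sign(v) * np.maximum(np.abs(v) - lam1 / L, 0.0)
    return x
```

The l2 term makes the objective strongly convex (stabilizing the solution when features are correlated), while the l1 term still drives irrelevant coefficients exactly to zero.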
Most machine learning algorithms involve solving a convex optimization problem. Traditional in-memory convex optimization solvers do not scale well as the data grows. This paper identifies a generic convex problem underlying most machine learning algorithms and solves it using the Alternating Direction Method of Multipliers (ADMM). Finally, such an ADMM…
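ADMM scales by splitting the objective and alternating cheap subproblem updates. As a concrete instance (my own sketch of textbook ADMM applied to the lasso, not the paper's generic formulation), the splitting x = z separates the smooth data term from the l1 penalty:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Solve min 0.5*||A x - b||^2 + lam*||z||_1 s.t. x = z, via ADMM."""
    n = A.shape[1]
    AtA, Atb = A.T @ A, A.T @ b
    # The x-update solves the linear system (A^T A + rho I) x = A^T b + rho (z - u).
    M = AtA + rho * np.eye(n)
    x = z = u = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + rho * (z - u))          # x-minimization
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of l1
        u = u + x - z                                         # scaled dual update
    return z
```

Each iteration touches the data only through A^T A and A^T b, which is why ADMM variants are a natural fit for distributed and out-of-memory solvers.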
In this paper, we study distributed classification of targets in a large-scale sensor network setting. Specifically, we consider sensor nodes which can measure only a part of the feature vector and whose communication capabilities are limited to only their neighbouring nodes. We formulate a distributed classification algorithm that learns the optimal…