IEEE Transactions on Signal Processing (Issue 12, 13)

Abstract

First, the known network structure and historical data are leveraged to design a dictionary for representing the network process. The novel approach draws from semi-supervised learning to enable learning the dictionary from only partial network observations. Once the dictionary is learned, network-wide prediction becomes possible via a regularized least-squares estimate that exploits the parsimony encapsulated in the dictionary design. Second, an online network-wide prediction algorithm is developed to jointly extrapolate the process over the network and update the dictionary accordingly. This algorithm can handle large training datasets at a fixed computational cost. More importantly, the online algorithm accounts for the temporal correlation of the underlying process, and thereby improves prediction accuracy. The performance of the novel algorithms is illustrated on the prediction of real Internet traffic.
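The abstract does not spell out the estimator, but the regularized least-squares step it describes is commonly posed as an l1-penalized fit of a sparse code to the observed nodes, followed by synthesis over the whole network. A minimal sketch under that assumption (predict_network, soft, and all parameter names are illustrative, and ISTA stands in for whatever solver the paper actually uses):

    import numpy as np

    def soft(z, t):
        # soft-thresholding: the proximal operator of t * ||.||_1
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def predict_network(B, x_obs, obs_idx, lam=0.1, n_iter=200):
        # B: (N, Q) learned dictionary for signals on an N-node network
        # x_obs: process values observed at the nodes listed in obs_idx
        B_o = B[obs_idx]                 # dictionary rows at observed nodes
        L = np.linalg.norm(B_o, 2) ** 2  # Lipschitz constant of the LS gradient
        s = np.zeros(B.shape[1])
        for _ in range(n_iter):          # ISTA for min 0.5||B_o s - x||^2 + lam ||s||_1
            grad = B_o.T @ (B_o @ s - x_obs)
            s = soft(s - grad / L, lam / L)
        return B @ s                     # extrapolate to every node

Here lam trades data fit against the parsimony the dictionary encodes; values at unobserved nodes are recovered purely through the atoms' network-wide support.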
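The online algorithm is only described at a high level. One hedged reading, reusing soft and the numpy import from the sketch above, is a per-time-step recursion that warm-starts from the previous code, models temporal correlation through a quadratic tracking penalty mu*||s - s_prev||^2 (this sketch's assumption, not necessarily the paper's mechanism), and refreshes the dictionary with a stochastic-gradient step in the spirit of online dictionary learning:

    def online_step(B, x_obs, obs_idx, s_prev, lam=0.1, mu=0.5, eta=0.01, n_iter=50):
        # one time step: sparse-code with a tracking penalty, predict, update B in place
        B_o = B[obs_idx]
        L = np.linalg.norm(B_o, 2) ** 2 + mu
        s = s_prev.copy()
        for _ in range(n_iter):
            grad = B_o.T @ (B_o @ s - x_obs) + mu * (s - s_prev)
            s = soft(s - grad / L, lam / L)
        B[obs_idx] += eta * np.outer(x_obs - B_o @ s, s)  # gradient step on observed rows
        B /= np.maximum(np.linalg.norm(B, axis=0), 1.0)   # keep atom norms bounded
        return B @ s, s                                   # network-wide prediction, new code

Each step touches only the current observations, which is consistent with the claimed fixed computational cost regardless of how much history has streamed past.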
————————

An Adaptive Approach to Learn Overcomplete Dictionaries With Efficient Numbers of Elements
M. Marsousi, K. Abhari, P. Babyn, J. Alirezaie
University of Toronto, Ryerson University, University of Saskatchewan

Abstract

To avoid suboptimal representations, a systematic approach that adapts the number of dictionary elements to the input dataset is essential. Several existing methods, such as enhanced K-SVD, sub-clustering K-SVD, and stage-wise K-SVD, attempt to address this requirement; however, they do not specify under which sparsity-level and representation-error criteria their learned dictionaries are size-optimized. We propose a new dictionary learning approach that automatically learns a dictionary with an efficient number of elements, one that attains both a desired representation error and a desired average sparsity level. In the proposed method, for any given representation error and average sparsity level, the number of elements in the learned dictionary varies with the content complexity of the training dataset. The performance of the proposed method is demonstrated on image denoising and compared against the state of the art; the results confirm the superiority of the proposed approach. (A sketch of this grow-until-targets-met idea appears after the abstracts below.)

————————

Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
Po-Yu Chen, Ivan W. Selesnick
New York University

Abstract

Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. To promote sparsity more strongly than convex regularization allows, it is also standard practice to employ non-convex optimization. In this paper, we take a third approach: we utilize a non-convex regularization term chosen such that the total cost function (consisting of the data-consistency and regularization terms) is convex. Sparsity is therefore promoted more strongly than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization.

————————
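The scalar version of the last abstract's idea is easy to make concrete. With data term 0.5*(y - x)^2 and the logarithmic penalty phi(x) = log(1 + a*|x|)/a (one of several penalties used in this line of work; the paper itself treats the group-sparse case, which this sketch simplifies to an element-wise rule), the total cost stays convex whenever a <= 1/lam, and the minimizer is a closed-form threshold:

    import numpy as np

    def cnc_threshold(y, lam, a):
        # minimizer of 0.5*(y - x)^2 + lam * log(1 + a|x|)/a, convex iff a <= 1/lam
        assert 0 < a <= 1.0 / lam, "a must satisfy a <= 1/lam to keep the cost convex"
        x = np.zeros_like(y)
        big = np.abs(y) > lam            # below lam the minimizer is exactly zero
        u = np.abs(y[big])
        x[big] = (np.sign(y[big])
                  * ((a * u - 1) + np.sqrt((1 - a * u) ** 2 + 4 * a * (u - lam)))
                  / (2 * a))
        return x

    # toy denoising: sparse spikes in Gaussian noise
    rng = np.random.default_rng(0)
    x_clean = np.zeros(200)
    x_clean[rng.choice(200, 10, replace=False)] = 5 * rng.standard_normal(10)
    y = x_clean + 0.5 * rng.standard_normal(200)
    x_hat = cnc_threshold(y, lam=1.0, a=1.0)  # a = 1/lam: strongest convex choice

As a -> 0 the rule reduces to ordinary soft thresholding; at a = 1/lam the bias on large coefficients vanishes asymptotically, behaving closer to hard thresholding while the overall cost remains convex, which is the point of the construction.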
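Returning to the second abstract, referenced above: its adaptive-size idea can be sketched as a loop that sparse-codes the training set, then adds an atom along the dominant residual direction whenever either the representation-error target eps or the average-sparsity target t_avg is missed. Everything here, including the names omp and learn_sized_dictionary and the ten-worst-columns growth rule, is an illustrative stand-in rather than the paper's algorithm:

    import numpy as np

    def omp(D, y, eps, t_max):
        # orthogonal matching pursuit with a residual-error stopping rule
        s, idx, r = np.zeros(D.shape[1]), [], y.copy()
        while np.linalg.norm(r) > eps and len(idx) < t_max:
            idx.append(int(np.argmax(np.abs(D.T @ r))))
            coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
            r = y - D[:, idx] @ coef
            s[idx] = coef
        return s

    def learn_sized_dictionary(Y, eps, t_avg, q0=8, q_max=256, rounds=50):
        # Y: (n, M) training signals as columns
        rng = np.random.default_rng(0)
        D = rng.standard_normal((Y.shape[0], q0))
        D /= np.linalg.norm(D, axis=0)
        for _ in range(rounds):
            S = np.stack([omp(D, y, eps, 2 * t_avg) for y in Y.T], axis=1)
            R = Y - D @ S
            err = np.linalg.norm(R, axis=0).mean()
            spars = np.count_nonzero(S, axis=0).mean()
            if (err > eps or spars > t_avg) and D.shape[1] < q_max:
                worst = R[:, np.argsort(np.linalg.norm(R, axis=0))[-10:]]
                u, _, _ = np.linalg.svd(worst, full_matrices=False)
                D = np.column_stack([D, u[:, 0]])  # grow by one atom
            else:
                break                              # both targets met at current size
        return D

A full method would also prune rarely used atoms and interleave K-SVD-style atom updates; the loop above only shows why the final dictionary size tracks the content complexity of the training set.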

Cite this paper

@article{Chen2014IEEETO, title={IEEE Transactions on Signal Processing (Issue 12, 13)}, author={Po-Yu Chen and Ivan W. Selesnick}, year={2014} }