1-Bit Tensor Completion
@article{Aidini20181BitTC,
  title   = {1-Bit Tensor Completion},
  author  = {Anastasia Aidini and Grigorios Tsagkatakis and Panagiotis Tsakalides},
  journal = {Electronic Imaging},
  year    = {2018},
  volume  = {2018},
  pages   = {261-1--261-6}
}
Higher-order tensor structured data arise in many imaging scenarios, including hyperspectral imaging and color video. The recovery of a tensor from an incomplete set of its entries, known as tensor completion, is crucial in applications like compression. Furthermore, in many cases observations are not only incomplete, but also highly quantized. Quantization is a critical step for high dimensional data transmission and storage in order to reduce storage requirements and power consumption…
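The 1-bit observation model described above can be illustrated with a minimal sketch: each observed tensor entry is reduced to a single bit by comparing a noisy copy against a threshold. All sizes, the CP rank, the noise level, and the sampling rate below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 3rd-order tensor built as a rank-2 CP-style sum (sizes are arbitrary).
u = rng.standard_normal((2, 10))
v = rng.standard_normal((2, 12))
w = rng.standard_normal((2, 8))
X = np.einsum('ri,rj,rk->ijk', u, v, w)

# Keep a random 40% of the entries, then quantize each to a single bit:
# the recovery algorithm only ever sees the signs, never the real values.
mask = rng.random(X.shape) < 0.4
noise = 0.1 * rng.standard_normal(X.shape)
bits = np.where(X + noise > 0, 1, -1)

observed_bits = bits[mask]
print(X.shape, observed_bits[:5])
```

A completion method would then estimate the real-valued low-rank tensor `X` from `observed_bits` and `mask` alone.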
12 Citations
Quantized Tensor Robust Principal Component Analysis
- Computer Science · 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2020
A tensor robust principal component analysis algorithm is introduced in order to recover a tensor with real-valued entries from a partly observed set of quantized and sparsely corrupted entries.
Tensor recovery from noisy and multi-level quantized measurements
- Computer Science · EURASIP J. Adv. Signal Process.
- 2020
This paper addresses the problem of tensor recovery from multi-level quantized measurements by leveraging the low CANDECOMP/PARAFAC (CP) rank property and provides the theoretical upper bounds of the recovery error, which diminish to zero when the sizes of dimensions increase to infinity.
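Multi-level quantization of the kind this line of work considers maps each real-valued measurement to one of L bins via fixed thresholds. A hedged illustration (the bin boundaries below are made up, not taken from the paper):

```python
import numpy as np

# Real-valued entries of a tensor, flattened for illustration.
values = np.array([-1.7, -0.2, 0.4, 1.1, 2.6])

# Four thresholds define L = 5 quantization levels (illustrative choice).
boundaries = np.array([-1.0, 0.0, 1.0, 2.0])

# np.digitize returns the bin index of each value; only these integer
# levels, not the original reals, would be available to the recovery method.
levels = np.digitize(values, boundaries)
print(levels)  # → [0 1 2 3 4]
```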
Quantized Higher-Order Tensor Recovery by Exploring Low-Dimensional Structures
- Computer Science · 2020 54th Asilomar Conference on Signals, Systems, and Computers
- 2020
It is proved that the recovery errors for both optimization models go to zero when the dimension lengths of tensors go to infinity, and tensors with TSVD can theoretically reach a lower error.
Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks
- Computer Science, Environmental Science · J. Imaging
- 2020
This work considers the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and proposes a resource-efficient compression scheme based on quantized low-rank tensor completion.
TenIPS: Inverse Propensity Sampling for Tensor Completion
- Computer Science · AISTATS
- 2021
This paper studies the problem of completing a partially observed tensor whose entries are missing not at random (MNAR), without prior information about the propensities, and proposes a completion algorithm that assumes both the original tensor and the tensor of propensities have low multilinear rank.
One-bit tensor completion via transformed tensor singular value decomposition
- Computer Science · Applied Mathematical Modelling
- 2021
A Probit Tensor Factorization Model For Relational Learning
- Computer Science · Journal of Computational and Graphical Statistics
- 2021
A binary tensor factorization model with a probit link is proposed, which not only inherits the computational efficiency of the classic tensor factorization model but also accounts for the binary nature of relational data.
Robust Low-Tubal-Rank Tensor Recovery From Binary Measurements
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
A new quantization scheme is developed under which the convergence rate can be accelerated to an exponential function of .
Learning Tensors From Partial Binary Measurements
- Computer Science · IEEE Transactions on Signal Processing
- 2019
The 1-bit measurement model can be used for context-aware recommender systems and the advantage of directly using the low-rank tensor structure, rather than matricization, is shown, both theoretically and numerically.
Tensor Robust Principal Component Analysis From Multilevel Quantized Observations
- Computer Science · IEEE Transactions on Information Theory
- 2023
A nonconvex constrained maximum likelihood (ML) estimation method for Quantized Tensor Robust Principal Component Analysis is proposed and an upper bound on the Frobenius norm of tensor estimation error under this method is provided.
References
Showing 1-10 of 18 references
Parallel matrix factorization for low-rank tensor completion
- Computer Science · arXiv
- 2013
Although the model is non-convex, the algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models.
Tensor completion and low-n-rank tensor recovery via convex optimization
- Computer Science
- 2011
This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e. the problem of finding the tensor of lowest n-rank that fulfills some linear constraints.
Matrix recovery from quantized and corrupted measurements
- Computer Science · 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2014
Experimental results on synthetic and two real-world collaborative filtering datasets demonstrate that directly operating with the quantized measurements - rather than treating them as real values - results in (often significantly) lower recovery error if the number of quantization bins is less than about 10.
The Augmented Lagrange Multiplier Method for Exact Recovery of Corrupted Low-Rank Matrices
- Computer Science · arXiv
- 2010
Matrix and Tensor Completion on a Human Activity Recognition Framework
- Computer Science · IEEE Journal of Biomedical and Health Informatics
- 2017
This paper proposes a classification framework that considers the reconstruction of subsampled data during the test phase, introduces the concept of forming the available data streams into low-rank two-dimensional and three-dimensional Hankel structures, and exploits data redundancies using sophisticated imputation techniques.
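The Hankel idea above can be sketched briefly: a 1-D sensor stream is folded into a matrix whose anti-diagonals repeat samples, so a smooth signal yields a low-rank matrix amenable to matrix-completion imputation. The signal and sizes below are illustrative, not from the paper.

```python
import numpy as np

# Toy accelerometer-like signal: a single sinusoid.
stream = np.sin(0.3 * np.arange(20))

# Fold it into an 8 x 13 Hankel matrix: H[i, j] = stream[i + j].
H = np.lib.stride_tricks.sliding_window_view(stream, 13)

# A sinusoid satisfies a 2-term linear recurrence, so H has rank 2;
# that redundancy is what makes imputing missing samples feasible.
print(H.shape, np.linalg.matrix_rank(H))
```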
1-bit matrix completion under exact low-rank constraint
- Computer Science, Mathematics · 2015 49th Annual Conference on Information Sciences and Systems (CISS)
- 2015
An upper bound on the matrix estimation error under this model is provided; the bound has a faster convergence rate with the matrix dimensions when the fraction of revealed 1-bit observations is held fixed, independent of the matrix dimensions.
Efficient projections onto the l1-ball for learning in high dimensions
- Computer Science · ICML '08
- 2008
Efficient algorithms for projecting a vector onto the l1-ball are described and variants of stochastic gradient projection methods augmented with these efficient projection procedures outperform interior point methods, which are considered state-of-the-art optimization techniques.
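The sort-based l1-ball projection this reference describes can be sketched in a few lines. This is a simplified rendering of the standard O(n log n) variant, not the paper's exact pseudocode:

```python
import numpy as np

def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1-ball {x : ||x||_1 <= z}."""
    v = np.asarray(v, dtype=float)
    if np.abs(v).sum() <= z:
        return v.copy()                      # already inside the ball
    u = np.sort(np.abs(v))[::-1]             # magnitudes, descending
    cssv = np.cumsum(u)
    # Largest index rho with u[rho] > (cssv[rho] - z) / (rho + 1).
    rho = np.nonzero(u > (cssv - z) / (np.arange(len(u)) + 1))[0][-1]
    theta = (cssv[rho] - z) / (rho + 1)       # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

x = project_l1_ball(np.array([3.0, -1.0, 0.5]), z=2.0)
print(x)  # → [2. 0. 0.]
```

Such a projection is the inner step of the stochastic gradient projection methods the abstract refers to.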
The Power of Convex Relaxation: Near-Optimal Matrix Completion
- Computer Science · IEEE Transactions on Information Theory
- 2010
This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).
A Singular Value Thresholding Algorithm for Matrix Completion
- Computer Science · SIAM J. Optim.
- 2010
This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
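The shrinkage operation at the heart of singular value thresholding can be sketched as follows; the threshold `tau` and the toy matrix are assumed for illustration, and a full SVT solver would iterate this step with a Lagrange-multiplier update.

```python
import numpy as np

def svt_shrink(Y, tau):
    """Soft-threshold the singular values of Y by tau (one SVT step)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)       # shrink, never below zero
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
Y = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))  # rank-2 matrix
Z = svt_shrink(Y, tau=0.5)
print(np.linalg.matrix_rank(Y), np.linalg.norm(Z) <= np.linalg.norm(Y))
```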
Exact Matrix Completion via Convex Optimization
- Computer Science, Mathematics · Found. Comput. Math.
- 2009
It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.