1-Bit Tensor Completion

Anastasia Aidini, Grigorios Tsagkatakis, Panagiotis Tsakalides
Electronic Imaging
Higher-order tensor structured data arise in many imaging scenarios, including hyperspectral imaging and color video. The recovery of a tensor from an incomplete set of its entries, known as tensor completion, is crucial in applications like compression. Furthermore, in many cases observations are not only incomplete, but also highly quantized. Quantization is a critical step for high dimensional data transmission and storage in order to reduce storage requirements and power consumption… 
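The 1-bit observation model the abstract alludes to can be sketched in a few lines: each observed entry is the sign of the underlying value plus pre-quantization noise, seen only on a random subset of indices. This is a minimal illustration under assumed parameters (a rank-2 CP ground truth, Gaussian dither, ~30% sampling), not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: a 10 x 10 x 10 tensor with CP rank 2.
A, B, C = (rng.standard_normal((10, 2)) for _ in range(3))
X = np.einsum('ir,jr,kr->ijk', A, B, C)

# 1-bit measurement model: observe sign(X + noise) on a random subset.
mask = rng.random(X.shape) < 0.3            # ~30% of entries are observed
noise = rng.standard_normal(X.shape)        # Gaussian dither before quantization
Y = np.where(mask, np.sign(X + noise), 0.)  # observations in {-1, +1}; 0 = missing
```

Recovery then seeks a low-rank tensor whose (dithered) signs are most consistent with the observed ±1 entries.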


Quantized Tensor Robust Principal Component Analysis

A tensor robust principal component analysis algorithm is introduced in order to recover a tensor with real-valued entries from a partly observed set of quantized and sparsely corrupted entries.

Tensor recovery from noisy and multi-level quantized measurements

This paper addresses the problem of tensor recovery from multi-level quantized measurements by leveraging the low CANDECOMP/PARAFAC (CP) rank property and provides the theoretical upper bounds of the recovery error, which diminish to zero when the sizes of dimensions increase to infinity.

Quantized Higher-Order Tensor Recovery by Exploring Low-Dimensional Structures

It is proved that the recovery errors for both optimization models go to zero as the dimension lengths of the tensors go to infinity, and that tensors modeled with the tensor SVD (TSVD) can theoretically reach a lower error.

Classification of Compressed Remote Sensing Multispectral Images via Convolutional Neural Networks

This work considers the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and proposes a resource-efficient compression scheme based on quantized low-rank tensor completion.

TenIPS: Inverse Propensity Sampling for Tensor Completion

This paper studies the problem of completing a partially observed tensor with MNAR observations, without prior information about the propensities, and proposes an algorithm to complete the tensor, which assumes that both the original tensor and the Tensor of Propensities have low multilinear rank.
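The inverse-propensity idea behind TenIPS can be illustrated on a toy tensor: dividing each observed entry by its observation probability makes the masked tensor an unbiased estimate of the full one. A minimal sketch with uniform (here assumed known) propensities; the paper's setting additionally estimates low-rank propensities from the data.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 6, 6))   # ground-truth tensor
P = np.full(X.shape, 0.5)            # assumed known observation propensities

# Average the IPS-weighted observations over many random masks: the
# estimate converges to X even though every single mask is incomplete.
est = np.zeros_like(X)
trials = 2000
for _ in range(trials):
    mask = rng.random(X.shape) < P
    est += np.where(mask, X / P, 0.0)
est /= trials
```

In expectation, mask * X / P equals X entrywise, which is exactly what the weighting buys.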

A Probit Tensor Factorization Model For Relational Learning

A binary tensor factorization model with probit link is proposed, which not only inherits the computation efficiency from the classic tensor factorization model but also accounts for the binary nature of relational data.

Robust Low-Tubal-Rank Tensor Recovery From Binary Measurements

A new quantization scheme is developed under which the recovery error can be made to decay exponentially.

Learning Tensors From Partial Binary Measurements

The 1-bit measurement model can be used for context-aware recommender systems and the advantage of directly using the low-rank tensor structure, rather than matricization, is shown, both theoretically and numerically.

Tensor Robust Principal Component Analysis From Multilevel Quantized Observations

A nonconvex constrained maximum likelihood (ML) estimation method for Quantized Tensor Robust Principal Component Analysis is proposed and an upper bound on the Frobenius norm of tensor estimation error under this method is provided.
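The maximum-likelihood viewpoint shared by several of these papers is easy to write down for the 1-bit case. Assuming a logistic link (one common choice for illustration; the paper above works with a multilevel quantized-observation likelihood), the objective minimized over the low-rank candidate is:

```python
import numpy as np

def one_bit_nll(X, Y, mask):
    """Negative log-likelihood of ±1 observations Y under a logistic link,
    i.e. P(Y=+1 | x) = 1 / (1 + exp(-x)), summed over observed entries."""
    z = Y[mask] * X[mask]
    return np.sum(np.logaddexp(0.0, -z))   # log(1 + exp(-y*x)), computed stably

Y = np.array([[1.0, -1.0], [0.0, 1.0]])    # 0 marks an unobserved entry
mask = Y != 0
good = one_bit_nll(np.array([[3.0, -3.0], [0.0, 3.0]]), Y, mask)
bad = one_bit_nll(np.array([[-3.0, 3.0], [0.0, -3.0]]), Y, mask)
# A candidate whose signs agree with the observations scores lower (better).
```

The ML estimator constrains this minimization to a low-rank (and bounded) set, which is what makes the problem nonconvex.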

Parallel matrix factorization for low-rank tensor completion

Although the model is non-convex, the algorithm performs consistently throughout the tests and gives better results than the compared methods, some of which are based on convex models.

Tensor completion and low-n-rank tensor recovery via convex optimization

This paper uses the n-rank of a tensor as a sparsity measure and considers the low-n-rank tensor recovery problem, i.e., the problem of finding the tensor of the lowest n-rank that fulfills some linear constraints.
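The n-rank referenced here is the tuple of matrix ranks of the tensor's mode-n unfoldings. A minimal sketch, using one common unfolding convention (the chosen mode's fibers become rows):

```python
import numpy as np

def unfold(X, mode):
    """Mode-n unfolding: bring `mode` to the front and flatten the rest."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

X = np.arange(24.0).reshape(2, 3, 4)
n_rank = tuple(np.linalg.matrix_rank(unfold(X, m)) for m in range(X.ndim))
```

Low-n-rank recovery then penalizes a convex surrogate (the nuclear norm) of each unfolding's rank simultaneously.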

Matrix recovery from quantized and corrupted measurements

Experimental results on synthetic and two real-world collaborative filtering datasets demonstrate that directly operating with the quantized measurements - rather than treating them as real values - results in (often significantly) lower recovery error if the number of quantization bins is less than about 10.
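The multi-bin setting in this comparison can be made concrete with a uniform scalar quantizer; the cited result favors modeling the resulting bin labels directly rather than, say, replacing each label with a real value. A sketch with hypothetical 5-star-style bins:

```python
import numpy as np

rng = np.random.default_rng(1)
ratings = rng.uniform(0.0, 5.0, size=1000)   # latent real-valued scores

# A hypothetical quantizer: 4 cut points, hence 5 bins (labels 0..4).
boundaries = np.array([1.0, 2.0, 3.0, 4.0])
labels = np.digitize(ratings, boundaries)

# Treating labels as real numbers discards where each score sits within
# its bin; a likelihood over bin membership models the labels exactly.
```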

Matrix and Tensor Completion on a Human Activity Recognition Framework

This paper proposes a classification framework that considers the reconstruction of subsampled data during the test phase, introduces the concept of forming the available data streams into low-rank two-dimensional and three-dimensional Hankel structures, and exploits data redundancies using sophisticated imputation techniques.
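The Hankel construction mentioned here turns a one-dimensional stream into a structured matrix whose low rank imputation methods can exploit. As a sketch, a noiseless sinusoid yields a Hankel matrix of rank exactly 2 (it is a combination of two complex exponentials):

```python
import numpy as np

def hankelize(x, L):
    """Stack the length-L sliding windows of x as columns. The result is
    a Hankel matrix: entry (i, j) depends only on i + j."""
    return np.stack([x[j:j + L] for j in range(len(x) - L + 1)], axis=1)

x = np.sin(0.3 * np.arange(64))   # a single sinusoid
H = hankelize(x, L=16)            # 16 x 49 Hankel matrix
```

Missing samples in the stream become missing entries of a low-rank matrix, so standard matrix completion applies.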

1-bit matrix completion under exact low-rank constraint

  • S. Bhaskar, Adel Javanmard
  • Computer Science, Mathematics
    2015 49th Annual Conference on Information Sciences and Systems (CISS)
  • 2015
An upper bound on the matrix estimation error under this model is provided, with a faster convergence rate in the matrix dimensions when the fraction of revealed 1-bit observations is fixed independently of those dimensions.

Efficient projections onto the l1-ball for learning in high dimensions

Efficient algorithms for projecting a vector onto the l1-ball are described and variants of stochastic gradient projection methods augmented with these efficient projection procedures outperform interior point methods, which are considered state-of-the-art optimization techniques.
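The sort-based projection these authors popularized fits in a few lines. This is an illustrative version following the general recipe of Duchi et al., not their exact code:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if np.abs(v).sum() <= radius:
        return v.copy()                     # already inside the ball
    u = np.sort(np.abs(v))[::-1]            # magnitudes, descending
    css = np.cumsum(u)
    # Largest k (0-indexed) with u_k * (k+1) > css_k - radius:
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    theta = (css[k] - radius) / (k + 1)     # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

w = project_l1_ball(np.array([0.5, -1.0, 2.0]), radius=1.0)
```

The projection is exact and costs O(n log n) for the sort, which is what makes it usable inside gradient-projection loops.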

The Power of Convex Relaxation: Near-Optimal Matrix Completion

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).

A Singular Value Thresholding Algorithm for Matrix Completion

This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
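The core shrinkage step of the SVT algorithm is a one-liner given an SVD: soft-threshold the singular values. A minimal sketch of just that operator (the full algorithm iterates it together with the sampling mask):

```python
import numpy as np

def shrink(M, tau):
    """Singular value shrinkage operator D_tau: soft-threshold the
    singular values of M by tau, keeping the singular vectors."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5)) @ rng.standard_normal((5, 20))  # rank <= 5
X = shrink(A, tau=1.0)
```

D_tau is the proximal operator of the nuclear norm, which is why each SVT iterate has low rank almost for free.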

Exact Matrix Completion via Convex Optimization

It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.