Influence-guided Data Augmentation for Neural Tensor Completion

@article{Oh2021InfluenceguidedDA,
  title={Influence-guided Data Augmentation for Neural Tensor Completion},
  author={Sejoon Oh and Sungchul Kim and Ryan A. Rossi and Srijan Kumar},
  journal={Proceedings of the 30th ACM International Conference on Information \& Knowledge Management},
  year={2021}
}
  • Sejoon Oh, Sungchul Kim, Ryan A. Rossi, Srijan Kumar
  • Published 23 August 2021
  • Computer Science
  • Proceedings of the 30th ACM International Conference on Information & Knowledge Management
How can we predict missing values in multi-dimensional data (or tensors) more accurately? The task of tensor completion is crucial in many applications such as personalized recommendation, image and video restoration, and link prediction in social networks. Many tensor factorization and neural network-based tensor completion algorithms have been developed to predict missing entries in partially observed tensors. However, they can produce inaccurate estimates, as real-world tensors are very…
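
For context, models of this kind are trained only on the observed cells of a sparse tensor. Below is a minimal, illustrative PyTorch sketch of a CP-style neural completion model; the dimensions, names, and hyperparameters are placeholders, not the paper's architecture.

# Minimal sketch of tensor completion with learned per-mode factors (PyTorch).
# All sizes and names below are illustrative, not from the paper.
import torch
import torch.nn as nn

class CPCompletion(nn.Module):
    def __init__(self, dims, rank):
        super().__init__()
        # One embedding table per tensor mode; row i is the rank-R factor of index i.
        self.factors = nn.ModuleList([nn.Embedding(d, rank) for d in dims])

    def forward(self, idx):
        # idx: (batch, n_modes) integer indices of tensor cells.
        vecs = [emb(idx[:, m]) for m, emb in enumerate(self.factors)]
        prod = vecs[0]
        for v in vecs[1:]:
            prod = prod * v
        return prod.sum(dim=1)          # CP-style predicted entry values

# Train on the observed (index, value) pairs of the sparse tensor only.
model = CPCompletion(dims=(100, 80, 60), rank=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
idx = torch.randint(0, 60, (512, 3))    # toy indices (valid for every mode)
val = torch.rand(512)                   # toy observed values
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(idx), val)
    loss.backward()
    opt.step()
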
1 Citation

Training Data Influence Analysis and Estimation: A Survey

This paper provides the first comprehensive survey of training data influence analysis and estimation, organizes state-of-the-art influence analysis methods into a taxonomy, describes each method in detail, and compares their underlying assumptions, asymptotic complexities, and overall strengths and weaknesses.
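
As a concrete instance of what the surveyed estimators compute, the sketch below shows a TracIn-style gradient dot-product between one training example and one test example. This is one well-known estimator from the family the survey covers, not the survey's own method; the model and loss function are placeholders.

# Sketch of one common influence estimator: the gradient dot-product
# (TracIn-style) between a training example and a test example.
import torch

def grad_vector(model, loss_fn, x, y):
    # Flatten the gradient of the loss at (x, y) into a single vector.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, [p for p in model.parameters() if p.requires_grad])
    return torch.cat([g.reshape(-1) for g in grads])

def influence(model, loss_fn, train_xy, test_xy):
    # Positive score: a gradient step on the training point would also
    # reduce the test loss; negative: the training point is harmful.
    g_train = grad_vector(model, loss_fn, *train_xy)
    g_test = grad_vector(model, loss_fn, *test_xy)
    return torch.dot(g_train, g_test).item()
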

References

Showing 1-10 of 47 references

CoSTCo: A Neural Tensor Completion Model for Sparse Tensors

This work proposes a novel convolutional neural network (CNN) based model, named CoSTCo (Convolutional Sparse Tensor Completion), which leverages the expressive power of CNNs to model the complex interactions inside tensors and their parameter-sharing scheme to preserve the desired low-rank structure.
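
A rough sketch of the CoSTCo idea, stacking the per-mode embeddings of an entry into a small (rank x modes) map and scoring it with a CNN, is given below; the layer shapes and sizes are illustrative guesses, not the published architecture.

# Sketch of a CoSTCo-style entry predictor (PyTorch): per-mode embeddings are
# stacked into a (rank x n_modes) "image" and a small CNN scores each cell.
import torch
import torch.nn as nn

class CoSTCoLike(nn.Module):
    def __init__(self, dims, rank=8, channels=32):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(d, rank) for d in dims])
        n = len(dims)
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=(1, n)), nn.ReLU(),        # mix modes
            nn.Conv2d(channels, channels, kernel_size=(rank, 1)), nn.ReLU(),  # mix ranks
            nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, idx):
        # Stack per-mode embeddings into (batch, 1, rank, n_modes).
        maps = torch.stack([emb(idx[:, m]) for m, emb in enumerate(self.embs)], dim=-1)
        return self.net(maps.unsqueeze(1)).squeeze(-1)
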

Neural Tensor Completion for Accurate Network Monitoring

This paper proposes a novel Neural Tensor Completion (NTC) scheme that models third-order interactions among data features with the outer product, builds a 3D interaction map, and applies 3D convolution to learn features of high-order interactions from the local range to the global range.
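
The sketch below illustrates the outer-product-plus-3D-convolution idea for a third-order tensor; the channel counts and pooling are illustrative choices, not the paper's exact network.

# Sketch of the NTC idea (PyTorch): the three mode embeddings form a
# rank x rank x rank outer-product interaction map, reduced by a 3D CNN.
import torch
import torch.nn as nn

class NTCLike(nn.Module):
    def __init__(self, dims, rank=8, channels=8):
        super().__init__()
        assert len(dims) == 3
        self.embs = nn.ModuleList([nn.Embedding(d, rank) for d in dims])
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(channels, 1),
        )

    def forward(self, idx):
        a, b, c = (emb(idx[:, m]) for m, emb in enumerate(self.embs))
        # Outer product a_i (outer) b_j (outer) c_k: (batch, rank, rank, rank).
        cube = torch.einsum('bp,bq,br->bpqr', a, b, c)
        return self.net(cube.unsqueeze(1)).squeeze(-1)
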

MTC: Multiresolution Tensor Completion from Partial and Coarse Observations

The proposed Multiresolution Tensor Completion model (MTC) explores tensor mode properties, leverages the hierarchy of resolutions to recursively initialize an optimization setup, and optimizes the coupled system using alternating least squares to ensure low computational and space complexity.
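
For reference, a single alternating-least-squares step of the kind MTC builds on looks as follows for plain CP completion; this generic sketch omits MTC's multiresolution coupling, and the regularizer lam is an assumed placeholder.

# One ALS step for CP completion (NumPy): fix factors B and C, then solve a
# small ridge-regularized system per row of A using only observed entries.
import numpy as np

def als_update_mode0(idx, val, A, B, C, lam=1e-3):
    # idx: (nnz, 3) observed indices; val: (nnz,) observed values.
    rank = A.shape[1]
    for i in range(A.shape[0]):
        rows = idx[:, 0] == i
        if not rows.any():
            continue
        # Design matrix: Khatri-Rao rows B[j] * C[k] for this slice's entries.
        H = B[idx[rows, 1]] * C[idx[rows, 2]]
        M = H.T @ H + lam * np.eye(rank)
        A[i] = np.linalg.solve(M, H.T @ val[rows])
    return A
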

Fully Scalable Methods for Distributed Tensor Factorization

This paper proposes two distributed tensor factorization methods, CDTF and SALS, which scale with all aspects of the data and exhibit a trade-off between convergence speed and memory requirements.

Scalable Tensor Decompositions for Multi-aspect Data Mining

  • T. Kolda, Jimeng Sun
  • Computer Science
  • 2008 Eighth IEEE International Conference on Data Mining
  • 2008
Memory-Efficient Tucker (MET) is proposed, which achieves over 1000X space reduction without sacrificing speed; it also allows us to work with much larger tensors that were too big to handle before.
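
One way to see the memory saving: for a sparse tensor, the (small) Tucker core can be accumulated directly from the nonzeros instead of materializing dense intermediate tensors. The sketch below shows that contraction, assuming orthonormal factor matrices; it illustrates the principle, not MET's exact algorithm.

# Contract the Tucker core G = X x1 A^T x2 B^T x3 C^T directly from the
# nonzeros of a sparse tensor X, avoiding dense intermediates (NumPy).
import numpy as np

def sparse_core(idx, val, A, B, C):
    # idx: (nnz, 3) indices; val: (nnz,) values of the sparse tensor.
    G = np.zeros((A.shape[1], B.shape[1], C.shape[1]))
    for (i, j, k), x in zip(idx, val):
        # Accumulate x * (a_i outer b_j outer c_k) into the small core.
        G += x * np.einsum('p,q,r->pqr', A[i], B[j], C[k])
    return G
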

Tucker factorization with missing data with application to low-n-rank tensor completion

This paper proposes a simple algorithm for Tucker factorization of a tensor with missing data, with application to low-n-rank tensor completion, and demonstrates in several numerical experiments that the proposed algorithm performs well even when the ranks are significantly overestimated.
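
A hedged sketch of the simplest such scheme, alternating between a Tucker fit on the filled-in tensor and re-imputing the missing cells from the reconstruction, is shown below using the tensorly library for the Tucker fit; the initialization and iteration count are illustrative, and this is an instance of the general imputation idea rather than the paper's exact algorithm.

# EM-style imputation loop for Tucker with missing data (tensorly + NumPy).
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def tucker_em(X, mask, ranks, n_iter=20):
    # X: dense array; entries where mask == False are treated as missing.
    Xf = np.where(mask, X, X[mask].mean())          # crude initial fill
    for _ in range(n_iter):
        core, factors = tucker(tl.tensor(Xf), rank=ranks)
        recon = tl.to_numpy(tl.tucker_to_tensor((core, factors)))
        Xf = np.where(mask, X, recon)               # keep observed, impute rest
    return Xf
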

Neural Tensor Model for Learning Multi-Aspect Factors in Recommender Systems

A novel nonlinear tensor machine is proposed, which combines deep neural networks and tensor algebra to capture nonlinear interactions among multi-aspect factors.

Neural Tensor Factorization for Temporal Interaction Learning

A neural-network-based Tensor Factorization (NTF) model is presented for predictive tasks on dynamic relational data, incorporating a multi-layer perceptron structure to learn the non-linearities between different latent factors.
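
A minimal sketch of this design, per-mode embeddings concatenated and passed through an MLP, follows; the widths and depth are illustrative, and the temporal component of NTF is omitted.

# Sketch of an MLP-over-latent-factors scorer (PyTorch): concatenate the
# per-mode embeddings of an entry and learn their nonlinear interaction.
import torch
import torch.nn as nn

class NTFLike(nn.Module):
    def __init__(self, dims, rank=16, hidden=64):
        super().__init__()
        self.embs = nn.ModuleList([nn.Embedding(d, rank) for d in dims])
        self.mlp = nn.Sequential(
            nn.Linear(rank * len(dims), hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, idx):
        z = torch.cat([emb(idx[:, m]) for m, emb in enumerate(self.embs)], dim=1)
        return self.mlp(z).squeeze(-1)
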

Scalable Tucker Factorization for Sparse Tensors - Algorithms and Discoveries

P-Tucker, a scalable Tucker factorization method for sparse tensors, is proposed; it successfully discovers hidden concepts and relations in a large-scale real-world tensor, while existing methods cannot reveal latent features due to their limited scalability or low accuracy.
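
The row-wise update that makes this kind of method scalable can be sketched as follows: each row of a factor matrix is solved independently from only the entries observed in its slice, so rows can be processed in parallel. The names and the ridge term lam are illustrative, not P-Tucker's exact notation.

# Row-wise factor update for sparse Tucker (NumPy): solve each row of A
# from the observed entries of its slice, given core G and factors B, C.
import numpy as np

def update_rows_mode0(idx, val, G, A, B, C, lam=1e-3):
    r0 = A.shape[1]
    for i in range(A.shape[0]):
        rows = idx[:, 0] == i
        if not rows.any():
            continue
        # Design row for entry (i, j, k): d = G x2 b_j x3 c_k, shape (r0,).
        D = np.einsum('pqr,nq,nr->np', G, B[idx[rows, 1]], C[idx[rows, 2]])
        M = D.T @ D + lam * np.eye(r0)
        A[i] = np.linalg.solve(M, D.T @ val[rows])
    return A
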

Generalized Higher-Order Orthogonal Iteration for Tensor Decomposition and Completion

This work proposes an efficient and scalable core tensor Schatten 1-norm minimization method for simultaneous tensor decomposition and completion, with a much lower computational complexity.
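
In symbols, the objective this summary describes can be written roughly as follows (constraint details vary by formulation; this is a hedged reconstruction, not the paper's exact statement):

\min_{\mathcal{G},\, U_1,\dots,U_N}\ \sum_{n=1}^{N} \big\| \mathbf{G}_{(n)} \big\|_{S_1}
\quad \text{s.t.} \quad
P_\Omega\big(\mathcal{G} \times_1 U_1 \times_2 U_2 \cdots \times_N U_N\big) = P_\Omega(\mathcal{T}),
\qquad U_n^\top U_n = I,

where \mathbf{G}_{(n)} is the mode-n unfolding of the core \mathcal{G}, \|\cdot\|_{S_1} is the Schatten 1-norm (the nuclear norm, i.e. the sum of singular values), and P_\Omega keeps only the observed entries of the target tensor \mathcal{T}.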