Corpus ID: 230435975

Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification

@article{Sidheekh2021LearningNN,
  title={Learning Neural Networks on SVD Boosted Latent Spaces for Semantic Classification},
  author={Sahil Sidheekh},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00563}
}
The availability of large amounts of data and of powerful computational resources has made deep learning models popular for text classification and sentiment analysis. Deep neural networks achieve competitive performance on these tasks when trained on naive text representations such as word-count, term-frequency, and binary-matrix embeddings. However, many of these representations give the input space a dimension on the order of the vocabulary size, which is enormous…
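
The pipeline the title names — compressing vocabulary-sized features into a low-rank latent space via truncated SVD before training a neural network — can be sketched as follows. This is a minimal illustration, not the paper's exact setup: the 20 Newsgroups corpus, the rank of 300, and the MLP architecture are all assumptions.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# A standard benchmark corpus stands in for the paper's datasets (an assumption).
train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

model = make_pipeline(
    TfidfVectorizer(max_features=50000),  # sparse, vocabulary-sized representation
    TruncatedSVD(n_components=300),       # SVD projection to a 300-d latent space (rank assumed)
    MLPClassifier(hidden_layer_sizes=(256,), max_iter=50),
)
model.fit(train.data, train.target)
print("test accuracy:", model.score(test.data, test.target))

The TruncatedSVD step is what replaces the enormous sparse TF-IDF matrix with a dense 300-dimensional latent space, which is the dimensionality reduction the abstract motivates.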

References

Showing 1–10 of 19 references

Latent Semantic Analysis Boosted Convolutional Neural Networks for Document Classification

TLDR
A parsimonious LSA-based CNN model in which natively trained LSA word vectors are fed into parallel 1-dimensional convolutional layers (1D-CNNs); it exceeds the accuracy of all linear classifiers using n-grams with TF-IDF on all analyzed datasets.
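
The parallel 1D-convolution idea this summary describes can be sketched in PyTorch as below. The filter widths, filter count, and 300-dimensional vectors are assumptions; in the cited model the inputs would be sequences of LSA-derived word vectors rather than random tensors.

import torch
import torch.nn as nn

class Parallel1DCNN(nn.Module):
    """Parallel 1-D convolutions over a sequence of (e.g. LSA-derived) word vectors."""
    def __init__(self, embed_dim=300, n_filters=100, widths=(3, 4, 5), n_classes=2):
        super().__init__()
        # One branch per filter width, all applied to the same input in parallel.
        self.convs = nn.ModuleList(
            [nn.Conv1d(embed_dim, n_filters, kernel_size=w) for w in widths]
        )
        self.fc = nn.Linear(n_filters * len(widths), n_classes)

    def forward(self, x):            # x: (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)        # Conv1d expects (batch, channels, seq_len)
        # Max-pool each branch over time, then concatenate the branch outputs.
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))

# Toy batch: 8 documents, 50 tokens each, 300-d word vectors.
logits = Parallel1DCNN()(torch.randn(8, 50, 300))

Max-pooling over the time axis makes the output independent of document length, which is what lets the parallel branches feed a single fixed-size linear layer.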

Deep Learning in Natural Language Processing

TLDR
Discusses the methods that can be adopted to perform NLP with deep learning: the representation methods available for the data, the ways to prepare data for deep learning, and the popular methods currently used for NLP.

Learning Word Vectors for Sentiment Analysis

TLDR
This work presents a model that uses a mix of unsupervised and supervised techniques to learn word vectors capturing semantic term–document information as well as rich sentiment content, and finds that it outperforms several previously introduced methods for sentiment classification.

Very Deep Convolutional Networks for Text Classification

TLDR
This work presents a new architecture (VDCNN) for text processing that operates directly at the character level, uses only small convolutions and pooling operations, and shows that its performance increases with depth.

Attention is All you Need

TLDR
Proposes a new, simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely; it generalizes well to other tasks, as shown by its successful application to English constituency parsing with both large and limited training data.
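
The building block behind the Transformer is scaled dot-product attention, Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. A minimal NumPy sketch (single head, no masking, toy shapes):

import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — the Transformer's core operation."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)   # pairwise query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the key axis
    return weights @ V                               # attention-weighted values

# Toy example: 4 queries attending over 6 key/value positions of width 8.
out = scaled_dot_product_attention(
    np.random.randn(4, 8), np.random.randn(6, 8), np.random.randn(6, 8)
)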

Character-level Convolutional Networks for Text Classification

TLDR
This article constructs several large-scale datasets to show that character-level convolutional networks can achieve state-of-the-art or competitive results in text classification.

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

TLDR
Introduces BERT, a new language representation model designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers; the pre-trained model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
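
The "one additional output layer" fine-tuning pattern can be sketched with the Hugging Face transformers library; the library choice, checkpoint name, and toy batch are assumptions for illustration, not part of the original paper.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# Attaches a single classification head on top of the pre-trained encoder;
# everything underneath is the published BERT weights.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])
outputs = model(**batch, labels=labels)   # returns loss and logits
outputs.loss.backward()                   # one fine-tuning step (optimizer omitted)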

Dimension Reduction in Text Classification with Support Vector Machines

TLDR
Adopts novel dimension reduction methods that dramatically reduce the dimension of the document vectors, and introduces decision functions for the centroid-based classification algorithm and for support vector classifiers to handle the classification problem where a document may belong to multiple classes.
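
The overall pattern — shrink the document vectors, then apply a centroid-based classifier or an SVM — looks like this in scikit-learn. TruncatedSVD stands in here for the paper's own reduction methods, which are not reproduced.

from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import NearestCentroid
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train = fetch_20newsgroups(subset="train")
test = fetch_20newsgroups(subset="test")

for clf in (NearestCentroid(), LinearSVC()):
    # Generic reduction to 100 dimensions before classifying (rank assumed).
    pipe = make_pipeline(TfidfVectorizer(), TruncatedSVD(n_components=100), clf)
    pipe.fit(train.data, train.target)
    print(type(clf).__name__, "accuracy:", pipe.score(test.data, test.target))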

Distributed Text Classification With an Ensemble Kernel-Based Learning Approach

TLDR
Proposes a new systematic distributed ensemble framework, based on a generic deployment strategy for a cluster environment, that combines task and data decomposition of the text-classification system; the decomposition follows partitioning, communication, agglomeration, and mapping to define and optimize a graph of dependent tasks.

CNN Features Off-the-Shelf: An Astounding Baseline for Recognition

TLDR
A series of experiments on different recognition tasks, using the publicly available code and model of the OverFeat network (trained for object classification on ILSVRC13), suggests that features obtained from deep convolutional networks should be the primary candidate in most visual recognition tasks.