Building Program Vector Representations for Deep Learning

@article{Peng2014BuildingPV,
  title={Building Program Vector Representations for Deep Learning},
  author={Hao Peng and Lili Mou and Ge Li and Yuxuan Liu and Lu Zhang and Zhi Jin},
  journal={ArXiv},
  year={2014},
  volume={abs/1409.3358}
}
  • Hao Peng, Lili Mou, Ge Li, Yuxuan Liu, Lu Zhang, Zhi Jin
  • Published 11 September 2014
  • Computer Science
  • ArXiv
Deep learning has made significant breakthroughs in various fields of artificial intelligence. […] This result confirms the feasibility of deep learning to analyze programs.
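As a rough illustration of the paper's approach, the sketch below (assuming PyTorch) learns one vector per AST node type under a "coding criterion": a parent node's vector should be reconstructable from its children's vectors through position-interpolated weight matrices, trained with a hinge loss against a corrupted negative sample. All class and method names are hypothetical, and the paper's leaf-count weighting of children is simplified away.

```python
import torch
import torch.nn as nn

class NodeEmbedder(nn.Module):
    """Hypothetical sketch of learning AST node-type embeddings."""
    def __init__(self, num_node_types, dim):
        super().__init__()
        self.emb = nn.Embedding(num_node_types, dim)  # one vector per node type
        self.W_l = nn.Linear(dim, dim, bias=False)    # leftmost-child weight
        self.W_r = nn.Linear(dim, dim, bias=False)    # rightmost-child weight
        self.bias = nn.Parameter(torch.zeros(dim))

    def compose(self, child_ids):
        """Predict a parent vector from its children's vectors."""
        vecs = self.emb(torch.tensor(child_ids))
        n = len(child_ids)
        out = torch.zeros_like(self.bias)
        for i, v in enumerate(vecs):
            # Interpolate between left and right weight matrices by position.
            alpha = 0.5 if n == 1 else i / (n - 1)
            out = out + (1 - alpha) * self.W_l(v) + alpha * self.W_r(v)
        return torch.tanh(out + self.bias)

    def loss(self, parent_id, child_ids, corrupt_ids):
        # Hinge loss: true children should reconstruct the parent better
        # than a corrupted child list (negative sample).
        p = self.emb(torch.tensor(parent_id))
        d_pos = torch.norm(p - self.compose(child_ids))
        d_neg = torch.norm(p - self.compose(corrupt_ids))
        return torch.relu(1.0 + d_pos - d_neg)

m = NodeEmbedder(num_node_types=100, dim=30)
print(m.loss(parent_id=5, child_ids=[7, 8], corrupt_ids=[7, 42]))
```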

TBCNN: A Tree-Based Convolutional Neural Network for Programming Language Processing

To the best of the authors' knowledge, this paper is the first to analyze programs with deep neural networks, extending the scope of deep learning to the field of programming language processing; the experimental results validate its feasibility and show a promising future for this new research area.

Learning Embeddings of API Tokens to Facilitate Deep Learning Based Program Processing

This paper proposes a neural model to learn embeddings of API tokens that combines a recurrent neural network with a convolutional neural network and uses API documents as the training corpus.

Code vectors: understanding programs through embedded abstracted symbolic traces

This paper uses abstractions of traces obtained from symbolic execution of a program as a representation for learning word embeddings and shows that embeddings learned from semantic abstractions provide nearly triple the accuracy of those learned from syntactic abstractions.
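To make the embedding step concrete: each abstracted trace is treated as a "sentence" of events and fed to an off-the-shelf word-embedding trainer. In the sketch below the trace vocabulary is invented for illustration, and gensim's Word2Vec is a stand-in rather than the authors' exact tooling.

```python
from gensim.models import Word2Vec

# Each abstracted symbolic-execution trace becomes a "sentence" of events.
traces = [
    ["call:open", "ret:ok", "call:read", "ret:ok", "call:close"],
    ["call:open", "ret:err", "call:perror", "call:exit"],
]
model = Word2Vec(traces, vector_size=32, window=2, min_count=1, epochs=50)
print(model.wv.most_similar("call:open"))  # nearest events in embedding space
```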

Deep Learning Based Code Completion Models for Programming Codes

Deep learning based models to automatically complete programming code are designed; they are LSTM-based neural networks combined with techniques such as word-embedding models from NLP (Natural Language Processing) and the multi-head attention mechanism.
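A minimal sketch of the kind of model described, assuming PyTorch: token embeddings feed an LSTM, multi-head self-attention is applied over the hidden states, and a linear head produces next-token logits. All names and sizes are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class CodeCompleter(nn.Module):
    def __init__(self, vocab_size, dim=256, heads=4):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)         # word-embedding layer
        self.lstm = nn.LSTM(dim, dim, batch_first=True)  # sequential encoder
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)           # next-token logits

    def forward(self, tokens):          # tokens: (batch, seq_len) int ids
        x = self.emb(tokens)
        h, _ = self.lstm(x)             # (batch, seq_len, dim)
        # Self-attention over LSTM states; a causal mask would be needed
        # for strict left-to-right completion.
        a, _ = self.attn(h, h, h)
        return self.head(a)             # (batch, seq_len, vocab_size)

model = CodeCompleter(vocab_size=10_000)
logits = model(torch.randint(0, 10_000, (2, 32)))  # dummy batch
print(logits.shape)                                # torch.Size([2, 32, 10000])
```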

Use of Deep Learning Model with Attention Mechanism for Software Fault Prediction

This paper constructs a deep learning model called Defect Prediction via Self-Attention Mechanism (DPSAM) to extract semantic features and predict defects automatically, and evaluates its performance on 7 open-source projects.

Convolutional Neural Networks over Tree Structures for Programming Language Processing

A novel tree-based convolutional neural network (TBCNN) is proposed for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information.
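The tree convolution itself can be sketched compactly: each node's feature combines the node with its children through "top", "left", and "right" weight matrices (position-interpolated, the continuous-binary-tree trick), followed by dynamic max pooling over all nodes. The flat (node_vec, child_vecs) tree encoding below is an assumption made for brevity.

```python
import torch
import torch.nn as nn

class TreeConv(nn.Module):
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.W_t = nn.Linear(dim_in, dim_out, bias=True)   # current node
        self.W_l = nn.Linear(dim_in, dim_out, bias=False)  # leftmost child
        self.W_r = nn.Linear(dim_in, dim_out, bias=False)  # rightmost child

    def node_feature(self, node_vec, child_vecs):
        out = self.W_t(node_vec)
        n = len(child_vecs)
        for i, c in enumerate(child_vecs):
            alpha = 0.5 if n == 1 else i / (n - 1)  # child position in [0, 1]
            out = out + (1 - alpha) * self.W_l(c) + alpha * self.W_r(c)
        return torch.relu(out)

    def forward(self, nodes):
        # nodes: list of (node_vec, [child_vec, ...]) pairs, one per AST node
        feats = [self.node_feature(v, cs) for v, cs in nodes]
        return torch.stack(feats).max(dim=0).values  # dynamic max pooling

conv = TreeConv(16, 32)
tree = [(torch.randn(16), [torch.randn(16), torch.randn(16)]),
        (torch.randn(16), [])]
print(conv(tree).shape)  # torch.Size([32])
```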

Software Defect Prediction via Convolutional Neural Network

This paper proposes a framework called Defect Prediction via Convolutional Neural Network (DP-CNN), which leverages deep learning for effective feature generation, and evaluates the method on seven open-source projects in terms of F-measure in defect prediction.

Modeling Programs Hierarchically with Stack-Augmented LSTM

Learning to Execute

This work developed a new variant of curriculum learning that improved the networks' performance in all experimental conditions and had a dramatic impact on an addition problem, enabling an LSTM to add two 9-digit numbers with 99% accuracy.
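The curriculum itself is simple to picture: a data generator with a difficulty knob that is raised as training proceeds. Below is a minimal sketch with a linear schedule; the paper's best-performing variant actually mixes easy and hard examples, so this is an illustrative simplification.

```python
import random

def make_example(max_digits):
    # Character-level addition problem of bounded difficulty.
    a = random.randint(0, 10 ** max_digits - 1)
    b = random.randint(0, 10 ** max_digits - 1)
    return f"{a}+{b}=", str(a + b)  # input string, target string

def curriculum(num_steps, final_digits=9):
    for step in range(num_steps):
        # Linearly raise difficulty from 1 digit up to final_digits.
        digits = 1 + (final_digits - 1) * step // max(1, num_steps - 1)
        yield make_example(digits)

for x, y in curriculum(5):
    print(x, y)  # e.g. "3+7=" "10" early on, 9-digit sums near the end
```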

Comparative Code Structure Analysis using Deep Learning for Performance Prediction

The results show that tree-based Long Short-Term Memory models can leverage source code's hierarchical structure to discover latent representations, and that LSTM-based predictive models can correctly predict whether source code will perform better or worse up to 84% and 73% of the time, respectively.
...

References

Showing 1-10 of 56 references

Learning Deep Architectures for AI

The motivations and principles regarding learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, which are used to construct deeper models such as Deep Belief Networks.

Greedy Layer-Wise Training of Deep Networks

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization, by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input, bringing better generalization.
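A compact sketch of the greedy layer-wise recipe, substituting plain autoencoders for brevity (the paper's experiments cover RBMs and autoencoder variants); all names and hyperparameters here are placeholders.

```python
import torch
import torch.nn as nn

def pretrain_stack(data, sizes, epochs=5):
    """data: (N, sizes[0]) tensor; sizes: layer widths, e.g. [784, 256, 64]."""
    layers, x = [], data
    for d_in, d_out in zip(sizes, sizes[1:]):
        enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
        opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.1)
        for _ in range(epochs):
            h = torch.sigmoid(enc(x))
            loss = nn.functional.mse_loss(dec(h), x)  # reconstruct the input
            opt.zero_grad(); loss.backward(); opt.step()
        layers.append(enc)                  # keep the encoder, drop the decoder
        x = torch.sigmoid(enc(x)).detach()  # frozen input for the next layer
    return layers  # used to initialize a deep net before supervised fine-tuning

layers = pretrain_stack(torch.rand(128, 784), [784, 256, 64])
```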

The Difficulty of Training Deep Architectures and the Effect of Unsupervised Pre-Training

The experiments confirm and clarify the advantage of unsupervised pre-training, and empirically show the influence of pre-training with respect to architecture depth, model capacity, and number of training examples.

Representation Learning: A Review and New Perspectives

Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.

Generating Text with Recurrent Neural Networks

The power of RNNs trained with the new Hessian-Free optimizer by applying them to character-level language modeling tasks is demonstrated, and a new RNN variant that uses multiplicative connections which allow the current input character to determine the transition matrix from one hidden state vector to the next is introduced.
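The multiplicative-connection idea fits in a few lines: the input is mapped to a factor vector that gates the hidden-to-hidden path, so each input character effectively selects its own transition matrix. A hedged PyTorch sketch with invented dimensions:

```python
import torch
import torch.nn as nn

class MRNNCell(nn.Module):
    def __init__(self, n_in, n_hid, n_fac):
        super().__init__()
        self.W_fx = nn.Linear(n_in, n_fac, bias=False)   # input -> factors
        self.W_fh = nn.Linear(n_hid, n_fac, bias=False)  # hidden -> factors
        self.W_hf = nn.Linear(n_fac, n_hid, bias=False)  # factors -> hidden
        self.W_hx = nn.Linear(n_in, n_hid)               # direct input term

    def forward(self, x, h):
        # The elementwise product makes the effective transition matrix
        # W_hf * diag(W_fx x) * W_fh, i.e. input-dependent.
        f = self.W_fx(x) * self.W_fh(h)
        return torch.tanh(self.W_hf(f) + self.W_hx(x))

cell = MRNNCell(n_in=50, n_hid=100, n_fac=64)
h = torch.zeros(1, 100)
for x in torch.eye(50)[:5]:          # five one-hot "characters"
    h = cell(x.unsqueeze(0), h)
```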

A Fast Learning Algorithm for Deep Belief Nets

A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.

Natural Language Processing (Almost) from Scratch

We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling.

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank

This work introduces a Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences, presenting new challenges for sentiment compositionality, and proposes the Recursive Neural Tensor Network.

Learning long-term dependencies with gradient descent is difficult

This work shows why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases, and exposes a trade-off between efficient learning by gradient descent and latching on information for long periods.
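The effect is easy to reproduce numerically. In the toy sketch below (assuming PyTorch, with recurrent weights deliberately scaled to be contractive for illustration), the gradient that reaches the first input of a vanilla RNN after 60 steps is vanishingly small:

```python
import torch

torch.manual_seed(0)
dim, T = 32, 60
W = torch.randn(dim, dim) * 0.05        # contractive recurrent weights
x0 = torch.randn(dim, requires_grad=True)

h = x0
for _ in range(T):                      # h_t = tanh(W h_{t-1})
    h = torch.tanh(W @ h)

h.sum().backward()
print(x0.grad.norm())                   # ~1e-15: the first input is "forgotten"
```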
...