A Convolutional Attention Network for Extreme Summarization of Source Code
@article{Allamanis2016ACA,
  title   = {A Convolutional Attention Network for Extreme Summarization of Source Code},
  author  = {Miltiadis Allamanis and Hao Peng and Charles Sutton},
  journal = {ArXiv},
  year    = {2016},
  volume  = {abs/1602.03001}
}
Attention mechanisms in neural networks have proved useful for problems in which the input and output do not have fixed dimension. Often there exist features that are locally translation invariant and would be valuable for directing the model's attention, but previous attentional architectures are not constructed to learn such features specifically. We introduce an attentional neural network that employs convolution on the input tokens to detect local time-invariant and long-range topical…
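To make the mechanism concrete, below is a minimal sketch of convolution-based attention: convolutions over the embedded input tokens produce position-wise scores, which are softmax-normalized into attention weights. All sizes, initializations, and names are illustrative assumptions, not the paper's implementation.

```python
# Minimal numpy sketch of convolutional attention (illustrative only;
# hyperparameters and variable names are assumptions, not the paper's code).
import numpy as np

def conv1d(x, kernel):
    """'Same'-padded 1-D convolution over a sequence of embeddings.
    x: (seq_len, d_in), kernel: (width, d_in, d_out) -> (seq_len, d_out)."""
    width = kernel.shape[0]
    pad = width // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([
        np.einsum('wi,wio->o', xp[t:t + width], kernel)
        for t in range(x.shape[0])
    ])

rng = np.random.default_rng(0)
seq_len, d_emb, d_feat = 12, 16, 8
tokens = rng.normal(size=(seq_len, d_emb))      # embedded input tokens

k1 = rng.normal(size=(3, d_emb, d_feat)) * 0.1  # detects local features
k2 = rng.normal(size=(3, d_feat, 1)) * 0.1      # maps features to scores

features = np.maximum(conv1d(tokens, k1), 0.0)  # ReLU over conv features
scores = conv1d(features, k2).squeeze(-1)       # one score per position
attn = np.exp(scores) / np.exp(scores).sum()    # softmax attention weights
context = attn @ tokens                         # attention-weighted summary
print(attn.round(3), context.shape)
```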
413 Citations
Effective approaches to combining lexical and syntactical information for code summarization
- Computer Science · Softw. Pract. Exp.
- 2020
Two general and effective approaches to leveraging lexical and syntactical information of code for better summarization quality are proposed: a convolutional neural network that aims to better extract vector representations of AST nodes for downstream models, and a Switch Network that learns an adaptive weight vector to combine different code representations for summary generation; a sketch of such a gated combination follows.
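For illustration, a "switch"-style combination can be sketched as a learned elementwise gate over two encoder outputs; the dimensions, gate parameterization, and names below are assumptions, not the paper's code.

```python
# Hedged sketch of a gate that adaptively mixes a lexical and a syntactic
# code representation; all sizes and names are illustrative assumptions.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
d = 32
h_lex = rng.normal(size=d)   # e.g. token-sequence encoder output
h_syn = rng.normal(size=d)   # e.g. AST encoder output

W = rng.normal(size=(d, 2 * d)) * 0.1
b = np.zeros(d)

# Elementwise gate in [0, 1] decides, per dimension, which source to trust.
g = sigmoid(W @ np.concatenate([h_lex, h_syn]) + b)
h_mixed = g * h_lex + (1.0 - g) * h_syn
print(h_mixed.shape)  # (32,)
```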
Reinforcement-Learning-Guided Source Code Summarization Using Hierarchical Attention
- Computer Science · IEEE Transactions on Software Engineering
- 2022
This paper presents a new code summarization approach that uses a hierarchical attention network, incorporating multiple code features, including type-augmented abstract syntax trees and program control flows, into a deep reinforcement learning (DRL) framework for comment generation.
Source Code Summarization Using Attention-Based Keyword Memory Networks
- Computer Science, Education · 2020 IEEE International Conference on Big Data and Smart Computing (BigComp)
- 2020
This work proposes a two-phase model, consisting of a keyword predictor and a description generator, that can effectively reduce the semantic gap and generate more accurate descriptions of source code.
Neural Code Summarization: How Far Are We?
- Computer Science · ArXiv
- 2021
A systematic and in-depth analysis of five state-of-the-art neural source code summarization models on three widely used datasets suggests that the BLEU metric has many variants and that code pre-processing choices can have a large impact on summarization performance.
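The sensitivity to BLEU variants is easy to reproduce: the snippet below (assuming NLTK is installed) scores the same short candidate summary under different smoothing choices, which can yield noticeably different numbers.

```python
# Demonstration of how BLEU variants can disagree on short summaries;
# the example sentences and smoothing choices are illustrative.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["returns", "the", "maximum", "of", "two", "numbers"]]
candidate = ["returns", "the", "max", "of", "two", "values"]

sm = SmoothingFunction()
for name, method in [("no smoothing", sm.method0),
                     ("method1 (add-eps)", sm.method1),
                     ("method4", sm.method4)]:
    score = sentence_bleu(reference, candidate, smoothing_function=method)
    print(f"{name}: {score:.4f}")
```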
Query Attention GloVe GloVe CNN Attention Flow Layer Modeling Layer Output Layer
- Computer Science
- 2017
This paper discusses how to apply convolutional neural networks (CNNs) to the machine comprehension task, incorporates CNNs into existing bidirectional attention-flow mechanisms, and compares their performance to RNN-based models.
Improving Automatic Source Code Summarization via Deep Reinforcement Learning
- Computer Science · 2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE)
- 2018
This work incorporates the abstract syntax tree structure as well as the sequential content of code snippets into a deep reinforcement learning framework (an actor-critic network) that provides the confidence of predicting the next word according to the current state, and trains both networks with an advantage reward composed of the BLEU metric.
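As a rough illustration of such a reward signal, the sketch below computes a BLEU-based advantage: the sampled summary's BLEU minus a scalar baseline standing in for the critic's value estimate. This is a simplification of the paper's actor-critic setup, and all names are assumptions.

```python
# Hedged sketch: a BLEU-based advantage scales the policy gradient in
# RL-trained summarizers. The baseline here is a stand-in for the critic.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def advantage(reference, sampled, baseline_value):
    smooth = SmoothingFunction().method1
    r = sentence_bleu([reference], sampled, smoothing_function=smooth)
    return r - baseline_value  # positive -> reinforce the sampled summary

ref = "adds an element to the list".split()
hyp = "appends an element to the list".split()
print(advantage(ref, hyp, baseline_value=0.3))
```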
Code Completion with Neural Attention and Pointer Networks
- Computer Science · IJCAI
- 2018
A pointer mixture network is proposed for better prediction of out-of-vocabulary (OoV) words in code completion; it learns either to generate a within-vocabulary word through an RNN component or to regenerate an OoV word from the local context through a pointer component.
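A pointer-mixture output layer can be sketched as a scalar gate mixing a vocabulary softmax with a copy distribution over input positions; the sizes, gate parameterization, and names below are illustrative assumptions.

```python
# Illustrative numpy sketch of a pointer-mixture output layer.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
vocab_size, seq_len, d = 10, 5, 16

h = rng.normal(size=d)                          # decoder state
W_vocab = rng.normal(size=(vocab_size, d)) * 0.1
attn_scores = rng.normal(size=seq_len)          # scores over input positions
input_ids = np.array([3, 7, 7, 1, 9])           # which vocab id sits where

p_vocab = softmax(W_vocab @ h)                  # generate distribution
p_copy_pos = softmax(attn_scores)               # copy distribution (positions)

w_gate = rng.normal(size=d) * 0.1
g = 1.0 / (1.0 + np.exp(-(w_gate @ h)))         # P(generate) vs. P(copy)

# Scatter positional copy mass onto vocabulary ids, then mix.
p_copy = np.zeros(vocab_size)
np.add.at(p_copy, input_ids, p_copy_pos)
p_final = g * p_vocab + (1.0 - g) * p_copy
print(p_final.sum())  # ~1.0
```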
Attention, please! A survey of Neural Attention Models in Deep Learning
- Computer Science · Artificial Intelligence Review
- 2022
This survey systematically reviews hundreds of architectures in the area, identifying and discussing those in which attention has shown a significant impact, and describes the primary uses of attention in convolutional networks, recurrent networks, and generative models.
Novel Natural Language Summarization of Program Code via Leveraging Multiple Input Representations
- Computer Science · EMNLP
- 2021
This work is the first code summarization work to utilize a natural-language contextual pretrained language model in its encoder; it proposes a novel code-level encoder based on BERT that is capable of expressing the semantics of code and obtains representations for every line of code.
References
Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network
- Computer Science · ArXiv
- 2014
A model is introduced that is able to represent the meaning of documents by embedding them in a low-dimensional vector space, while preserving distinctions of word and sentence order crucial for capturing nuanced semantics.
A Convolutional Neural Network for Modelling Sentences
- Computer Science · ACL
- 2014
A convolutional architecture dubbed the Dynamic Convolutional Neural Network (DCNN) is described that is adopted for the semantic modelling of sentences and induces a feature graph over the sentence that is capable of explicitly capturing short and long-range relations.
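A key ingredient of the DCNN is (dynamic) k-max pooling, which keeps the k largest activations per feature dimension while preserving their original order; a minimal sketch:

```python
# k-max pooling: keep the k largest activations per feature dimension
# *in their original sequence order*. Illustrative standalone example.
import numpy as np

def k_max_pool(x, k):
    """x: (seq_len, d) -> (k, d), preserving sequence order of kept values."""
    idx = np.argsort(x, axis=0)[-k:]       # indices of k largest per column
    idx = np.sort(idx, axis=0)             # restore original ordering
    return np.take_along_axis(x, idx, axis=0)

x = np.array([[0.1, 9.0],
              [5.0, 1.0],
              [3.0, 2.0],
              [4.0, 8.0]])
print(k_max_pool(x, 2))
# [[5. 9.]
#  [4. 8.]]
```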
A Neural Attention Model for Abstractive Sentence Summarization
- Computer Science · EMNLP
- 2015
This work proposes a fully data-driven approach to abstractive sentence summarization by utilizing a local attention-based model that generates each word of the summary conditioned on the input sentence.
Convolutional Neural Networks over Tree Structures for Programming Language Processing
- Computer Science · AAAI
- 2016
A novel tree-based convolutional neural network (TBCNN) is proposed for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information.
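A single tree-convolution step can be sketched as applying role-specific weights to a node and its children; the weight sharing below is a simplification of TBCNN's position-based scheme, and all names are assumptions.

```python
# Hedged sketch of one tree-convolution window: a parent AST node plus its
# direct children, with separate parent/child weights (a simplification).
import numpy as np

rng = np.random.default_rng(3)
d = 8
W_parent = rng.normal(size=(d, d)) * 0.1
W_child = rng.normal(size=(d, d)) * 0.1
b = np.zeros(d)

def tree_conv(parent_vec, child_vecs):
    """Convolve one subtree window: parent plus its direct children."""
    out = W_parent @ parent_vec + b
    if child_vecs:
        out += sum(W_child @ c for c in child_vecs) / len(child_vecs)
    return np.tanh(out)

node = rng.normal(size=d)
children = [rng.normal(size=d) for _ in range(3)]
print(tree_conv(node, children).shape)  # (8,)
```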
Pointer Networks
- Computer Science · NIPS
- 2015
A new neural architecture, called Ptr-Net, learns the conditional probability of an output sequence whose elements are discrete tokens corresponding to positions in an input sequence, using a recently proposed mechanism of neural attention; it not only improves over sequence-to-sequence with input attention but also generalizes to variable-size output dictionaries.
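The defining property of a Ptr-Net decoding step is that the attention distribution over input positions is itself the output distribution; a minimal sketch, with shapes and initialization assumed:

```python
# Ptr-Net-style decoding step: attention scores over the input positions
# *are* the output distribution (no fixed output vocabulary). Illustrative.
import numpy as np

rng = np.random.default_rng(4)
n, d = 6, 16
enc = rng.normal(size=(n, d))        # encoder states, one per input element
dec = rng.normal(size=d)             # current decoder state

W1 = rng.normal(size=(d, d)) * 0.1
W2 = rng.normal(size=(d, d)) * 0.1
v = rng.normal(size=d) * 0.1

scores = np.tanh(enc @ W1.T + dec @ W2.T) @ v   # u_j = v^T tanh(W1 e_j + W2 d)
p = np.exp(scores) / np.exp(scores).sum()       # distribution over positions
print(p.argmax(), p.round(3))                   # index of the selected input
```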
Neural Programmer: Inducing Latent Programs with Gradient Descent
- Computer Science · ICLR
- 2016
This work proposes Neural Programmer, an end-to-end differentiable neural network augmented with a small set of basic arithmetic and logic operations, and finds that although training the model is difficult, it can be greatly improved by adding random noise to the gradient.
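The gradient-noise trick reported to help training can be sketched in a few lines; the annealed-noise schedule below follows the commonly used eta/(1+t)^gamma form, which is an assumption rather than the paper's exact recipe.

```python
# Tiny sketch: add annealed Gaussian noise to each gradient before the
# SGD update. Schedule constants eta/gamma are assumed, not the paper's.
import numpy as np

rng = np.random.default_rng(5)

def noisy_sgd_step(params, grads, lr, t, eta=0.01, gamma=0.55):
    sigma = np.sqrt(eta / (1.0 + t) ** gamma)
    return [p - lr * (g + rng.normal(scale=sigma, size=g.shape))
            for p, g in zip(params, grads)]

params = [np.ones((2, 2)), np.zeros(3)]
grads = [np.full((2, 2), 0.5), np.array([1.0, -1.0, 0.0])]
params = noisy_sgd_step(params, grads, lr=0.1, t=0)
print(params[0])
```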
Suggesting accurate method and class names
- Computer Science · ESEC/SIGSOFT FSE
- 2015
A neural probabilistic language model for source code, specifically designed for the method naming problem, is introduced, along with a variant that is, to the authors' knowledge, the first that can propose neologisms: names that have not appeared in the training corpus.
Learning to Execute
- Computer Science · ArXiv
- 2014
This work developed a new variant of curriculum learning that improved the networks' performance in all experimental conditions and had a dramatic impact on an addition problem, enabling an LSTM to add two 9-digit numbers with 99% accuracy.
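The spirit of the "combined" curriculum, mixing current-difficulty, easier, and fully random examples rather than ramping difficulty monotonically, can be sketched as a sampler for the addition task; the mixing ratios below are assumptions.

```python
# Illustrative curriculum sampler for the addition task: mostly sample at
# the current difficulty, sometimes easier, sometimes anything up to the
# hard task. Mixing ratios (0.6/0.2/0.2) are assumed for illustration.
import random

random.seed(0)

def sample_addition_example(current_max_digits, hard_cap=9):
    r = random.random()
    if r < 0.6:                      # mostly: the current curriculum level
        digits = current_max_digits
    elif r < 0.8:                    # sometimes: an easier level
        digits = random.randint(1, current_max_digits)
    else:                            # sometimes: anything up to the hard task
        digits = random.randint(1, hard_cap)
    a = random.randint(0, 10 ** digits - 1)
    b = random.randint(0, 10 ** digits - 1)
    return f"{a}+{b}", str(a + b)    # input string, target string

for _ in range(3):
    print(sample_addition_example(current_max_digits=4))
```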
Show, Attend and Tell: Neural Image Caption Generation with Visual Attention
- Computer Science · ICML
- 2015
An attention-based model that automatically learns to describe the content of images is introduced; it can be trained deterministically using standard backpropagation techniques and stochastically by maximizing a variational lower bound.
A unified architecture for natural language processing: deep neural networks with multitask learning
- Computer Science · ICML '08
- 2008
We describe a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic…