Using Transfer Learning for Code-Related Tasks

@article{Mastropaolo2022UsingTL,
  title={Using Transfer Learning for Code-Related Tasks},
  author={Antonio Mastropaolo and Nathan Cooper and David Nader-Palacio and Simone Scalabrino and Denys Poshyvanyk and Rocco Oliveto and Gabriele Bavota},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.08574}
}
Deep learning (DL) techniques have been used to support several code-related tasks such as code summarization and bug-fixing. In particular, pre-trained transformer models are on the rise, also thanks to the excellent results they achieved in Natural Language Processing (NLP) tasks. The basic idea behind these models is to first pre-train them on a generic dataset using a self-supervised task (e.g., filling masked words in sentences). Then, these models are fine-tuned to support specific tasks of…
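To make the pre-train/fine-tune recipe described in the abstract concrete, the sketch below illustrates both steps with Hugging Face's T5 implementation. The checkpoint name, the example strings, and the single-step "training loop" are illustrative assumptions for this sketch, not the corpus, model size, or setup used in the paper.

```python
# Minimal sketch of the two-stage recipe: (1) self-supervised span-masking
# pre-training, (2) supervised fine-tuning on a code-related task.
# Checkpoint, examples and hyperparameters are placeholders.
import torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# 1) Self-supervised pre-training objective: mask spans in the input and
#    train the model to reconstruct them ("fill the masked words").
source = "public int <extra_id_0>(int a, int b) { return a <extra_id_1> b; }"
target = "<extra_id_0> sum <extra_id_1> + <extra_id_2>"

enc = tokenizer(source, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids
loss = model(input_ids=enc.input_ids,
             attention_mask=enc.attention_mask,
             labels=labels).loss
loss.backward()  # one pre-training step (optimizer omitted for brevity)

# 2) Supervised fine-tuning on a downstream code-related task, e.g. code
#    summarization: map a method to a natural-language description.
code = "summarize: public int sum(int a, int b) { return a + b; }"
summary = "returns the sum of two integers"

ft_enc = tokenizer(code, return_tensors="pt")
ft_labels = tokenizer(summary, return_tensors="pt").input_ids
ft_loss = model(input_ids=ft_enc.input_ids,
                attention_mask=ft_enc.attention_mask,
                labels=ft_labels).loss
ft_loss.backward()  # one fine-tuning step on task-specific data
```

In the study itself, the same idea is applied at scale: the model is first pre-trained on a large generic corpus with the self-supervised objective and then fine-tuned separately on each downstream task's supervised data.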

References

Showing 1-10 of 86 references

Studying the Usage of Text-To-Text Transfer Transformer to Support Code-Related Tasks

TLDR
This paper empirically investigated how the T5 model performs when pre-trained and fine-tuned to support code-related tasks, and compared the performance of this single model with the results reported in the four original papers that proposed DL-based solutions for those tasks.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Universal Language Model Fine-tuning for Text Classification

TLDR
This work proposes Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduces techniques that are key for fine-tuning a language model.

Toward Deep Learning Software Repositories

TLDR
This work motivates deep learning for software language modeling, highlighting fundamental differences between state-of-the-practice software language models and connectionist models, and proposes avenues for future work where deep learning can be brought to bear to support model-based testing, improve software lexicons, and conceptualize software artifacts.

Sequence to Sequence Learning with Neural Networks

TLDR
This paper presents a general end-to-end approach to sequence learning that makes minimal assumptions on the sequence structure, and finds that reversing the order of the words in all source sentences improved the LSTM's performance markedly, because doing so introduced many short-term dependencies between the source and the target sentence, which made the optimization problem easier.

On Learning Meaningful Code Changes Via Neural Machine Translation

TLDR
Qualitative analysis of the ability of a Neural Machine Translation model to learn how to automatically apply code changes implemented by developers during pull requests shows that the model is capable of learning and replicating a wide variety of meaningful code changes, especially refactorings and bug-fixing activities.

Summarizing Source Code using a Neural Attention Model

TLDR
This paper presents the first completely data-driven approach for generating high-level summaries of source code, which uses Long Short-Term Memory (LSTM) networks with attention to produce sentences that describe C# code snippets and SQL queries.

Attention is All you Need

TLDR
A new simple network architecture, the Transformer, based solely on attention mechanisms and dispensing with recurrence and convolutions entirely, is proposed; it generalizes well to other tasks, as shown by applying it successfully to English constituency parsing with both large and limited training data.

A Convolutional Attention Network for Extreme Summarization of Source Code

TLDR
An attentional neural network is introduced that employs convolution on the input tokens to detect local, time-invariant and long-range topical attention features in a context-dependent way, addressing the problem of extreme summarization of source code snippets into short, descriptive, function-name-like summaries.

Neural-Machine-Translation-Based Commit Message Generation: How Far Are We?

TLDR
A simpler and faster approach, named NNGen (Nearest Neighbor Generator), is proposed to generate concise commit messages using the nearest neighbor algorithm; it is over 2,600 times faster than NMT and outperforms NMT in terms of BLEU by 21%.
...