Corpus ID: 221655438

A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research

@article{Watson2020ASL,
  title={A Systematic Literature Review on the Use of Deep Learning in Software Engineering Research},
  author={Cody Watson and Nathan Cooper and David Nader-Palacio and Kevin Moran and Denys Poshyvanyk},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.06520}
}
An increasingly popular set of techniques adopted by software engineering (SE) researchers to automate development tasks are those rooted in the concept of Deep Learning (DL). The popularity of such techniques largely stems from their automated feature engineering capabilities, which aid in modeling software artifacts. However, due to the rapid pace at which DL techniques have been adopted, it is difficult to distill the current successes, failures, and opportunities of such research…
A comprehensive study of deep learning compiler bugs
TLDR: Presents the first systematic study of DL compiler bugs by analyzing 603 bugs arising in three popular DL compilers (TVM from Apache, Glow from Facebook, and nGraph from Intel), and provides a series of valuable guidelines for future work on DL compiler bug detection and debugging.

References

SHOWING 1-10 OF 204 REFERENCES
Automatic Feature Learning for Predicting Vulnerable Software Components
TLDR: Describes a new approach, built on the deep learning Long Short-Term Memory (LSTM) model, that automatically learns both semantic and syntactic features of code; the prediction power obtained from the learned features exceeds that of state-of-the-art vulnerability prediction models.
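To make the LSTM-based feature learning above concrete, here is a minimal NumPy sketch of a single LSTM step, the recurrence such models apply over sequences of code-token embeddings. The dimensions, random weights, and toy token sequence are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step: gates computed from [h_prev, x], then cell/hidden update."""
    z = W @ np.concatenate([h_prev, x]) + b               # stacked pre-activations, shape (4H,)
    H = h_prev.shape[0]
    i, f, o = (1 / (1 + np.exp(-z[k*H:(k+1)*H])) for k in range(3))  # input/forget/output gates
    g = np.tanh(z[3*H:])                                  # candidate cell state
    c = f * c_prev + i * g                                # gated cell update
    h = o * np.tanh(c)                                    # hidden state emitted per token
    return h, c

# Toy run over a "code token embedding" sequence (random weights; illustration only).
rng = np.random.default_rng(0)
H, D = 4, 3                                # hidden size, embedding size (assumed)
W = rng.normal(scale=0.1, size=(4*H, H+D))
b = np.zeros(4*H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(5, D)):          # 5 tokens of a code fragment
    h, c = lstm_step(x, h, c, W, b)
```

The final hidden state `h` plays the role of the learned feature vector that a downstream classifier would consume.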
How Well Do Change Sequences Predict Defects? Sequence Learning from Software Changes
TLDR: Proposes Fences, a novel approach that extracts six types of change sequences covering different aspects of software changes via fine-grained change analysis and models them with deep learning, achieving an average F-measure of 0.657 and significantly improving over prediction models built on traditional metrics.
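The F-measure reported above is the standard harmonic mean of precision and recall; a short sketch of its computation (with made-up counts, not the paper's data):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F_beta = (1 + beta^2) * P * R / (beta^2 * P + R)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return (1 + beta**2) * precision * recall / (beta**2 * precision + recall)

# Hypothetical confusion counts: 60 true positives, 25 false positives, 40 false negatives.
score = f_measure(60, 25, 40)   # harmonic mean of P = 60/85 and R = 60/100
```

With `beta=1` this reduces to the usual F1 score used to compare defect prediction models.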
Neural-augmented static analysis of Android communication
TLDR: Describes a neural-network architecture that encodes abstractions of communicating objects in two applications and estimates the probability that a communication link indeed exists; evaluated on a large corpus of Android applications, the approach achieves very high accuracy.
Oreo: detection of clones in the twilight zone
TLDR: Presents Oreo, a novel approach to source code clone detection that not only detects Type-1 to Type-3 clones accurately, but is also capable of detecting harder-to-detect clones in the Twilight Zone.
VulSeeker: A Semantic Learning Based Vulnerability Seeker for Cross-Platform Binary
TLDR: VulSeeker is a semantic-learning-based vulnerability seeker for cross-platform binaries that outperforms the state-of-the-art approaches in terms of accuracy and embedding vector.
Learning from Data: A Short Course. AMLbook.com, 2012.
Deep Transfer Bug Localization
TLDR: Proposes TRANP-CNN, a deep transfer learning approach that extracts transferable semantic features from a source project and fully exploits labeled data from the target project for effective cross-project bug localization.
Mining Likely Analogical APIs Across Third-Party Libraries via Large-Scale Unsupervised API Semantics Embedding
TLDR: Presents an unsupervised deep-learning-based approach that embeds both API usage semantics and API description semantics into a vector space to infer likely analogical API mappings between libraries.
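The core idea of embedding API usage semantics can be sketched without deep learning at all: APIs that appear in similar call contexts should get similar vectors. Below, a toy co-occurrence matrix over hypothetical API call sequences stands in for the learned embeddings (the method names and window size are assumptions for illustration, not the paper's model).

```python
import numpy as np

# Toy "API usage sequences" (hypothetical method names; illustration only).
sequences = [
    ["open", "read", "close"],
    ["open", "write", "close"],
    ["connect", "send", "disconnect"],
    ["connect", "recv", "disconnect"],
]

# Count co-occurrences within a +/-1 window; rows act as crude API vectors.
vocab = sorted({api for seq in sequences for api in seq})
idx = {api: i for i, api in enumerate(vocab)}
M = np.zeros((len(vocab), len(vocab)))
for seq in sequences:
    for i, api in enumerate(seq):
        for j in (i - 1, i + 1):
            if 0 <= j < len(seq):
                M[idx[api], idx[seq[j]]] += 1

def cosine(a, b):
    """Cosine similarity between the context vectors of two APIs."""
    va, vb = M[idx[a]], M[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Here `read` and `write` share the context `open`/`close`, so `cosine("read", "write")` is high, while `read` and `send` share no context and score low; analogical API mining ranks candidate mappings by exactly this kind of vector similarity.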
A Grammar-Based Structural CNN Decoder for Code Generation
TLDR: Proposes a grammar-based structural convolutional neural network (CNN) for code generation that generates a program by predicting the grammar rules of the programming language; several CNN modules are designed, including tree-based convolution and pre-order convolution, whose information is aggregated by dedicated attentive pooling layers.
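Generating a program "by predicting grammar rules" means the decoder emits a sequence of production choices that expand nonterminals until only terminals remain. A minimal sketch with a toy expression grammar, where a fixed list of rule indices stands in for the CNN's predictions (the grammar and derivation order here are illustrative assumptions):

```python
# Toy grammar over a tiny expression language (hypothetical; the paper's
# model targets a real programming language's grammar).
GRAMMAR = {
    "expr": [["expr", "+", "term"], ["term"]],
    "term": [["NUM"], ["(", "expr", ")"]],
}

def generate(rule_choices):
    """Expand the leftmost nonterminal using the given sequence of rule predictions."""
    tree = ["expr"]
    choices = iter(rule_choices)
    while any(sym in GRAMMAR for sym in tree):
        i = next(k for k, s in enumerate(tree) if s in GRAMMAR)  # leftmost nonterminal
        tree[i:i + 1] = GRAMMAR[tree[i]][next(choices)]          # apply predicted rule
    return " ".join(tree)

# Choices 0, 1, 0, 0 derive: expr -> expr + term -> term + term -> NUM + term -> NUM + NUM
program = generate([0, 1, 0, 0])  # -> "NUM + NUM"
```

Predicting over the (small) set of grammar rules instead of the (huge) token vocabulary guarantees syntactically valid output by construction.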
A Neural Model for Method Name Generation from Functional Description
TLDR: Proposes a neural network that directly generates readable method names from a natural-language description, addressing the explosion of vocabulary in large repositories and leveraging the knowledge learned from large repositories for a specific project.