Corpus ID: 234742452

Deep Learning Models in Software Requirements Engineering

Maria A. Naumcheva
Requirements elicitation is an important phase of any software project: errors in requirements are more expensive to fix than errors introduced at later stages of the software life cycle. Nevertheless, many projects do not devote sufficient time to requirements. Automated requirements generation can improve the quality of software projects. In this article we have accomplished the first step of research on this topic: we have applied a vanilla sentence autoencoder to the sentence…


Feasibility Study of Machine Learning & AI Algorithms for Classifying Software Requirements

The purpose of this research work is to understand the application and use of machine learning algorithms for the problem of requirements classification, while providing inputs for developing a “software requirements definition and description framework” using the English language.

Efficient Extraction of Technical Requirements Applying Data Augmentation

An investigation of artificially generating requirements through data augmentation shows that the performance of artificial intelligence models in requirements extraction improves when augmented data are applied; the method therefore leads to more efficient product development.

Deep Learning Model for Selecting Suitable Requirements Elicitation Techniques

The model provides a robust decision-making process for delivering the correct elicitation techniques and lowering the risk of project failure, and can be used to promote automation of the elicitation technique selection process, thereby enhancing current requirements elicitation industry practices.

Requirements Engineering: Best Practice

This chapter gives an overview of commonly used requirements engineering techniques and shows which of the techniques, when used in a software project, correlate with requirements engineering success.

Natural Language Processing (NLP) for Requirements Engineering: A Systematic Mapping Study

The landscape of NLP4RE research, which has amassed a large number of publications and attracted widespread attention from diverse communities, is surveyed to understand the state of the art and identify open problems.

The role of formalism in system requirements (full version)

The present survey discusses some of the main formal approaches, compares them to informal methods, and classifies them into five categories (general-purpose, natural-language, graph/automata, other mathematical notations, and seamless), covering altogether 22 different approaches.

Variational Template Machine for Data-to-Text Generation

This paper proposes the variational template machine (VTM), a novel method to generate text descriptions from data tables, and utilizes both small parallel data and large raw text without aligned tables to enrich the template learning.

Towards Generating Long and Coherent Text with Multi-Level Latent Variable Models

Extensive experimental results demonstrate that the proposed multi-level VAE model produces more coherent and less repetitive long text than the baselines and can mitigate the posterior-collapse issue.

Generating Sentences from a Continuous Space

This work introduces and studies an RNN-based variational autoencoder generative model that incorporates distributed latent representations of entire sentences, allowing it to explicitly model holistic properties of sentences such as style, topic, and high-level syntactic features.
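As an illustration of the sampling step that such sentence VAEs rely on, here is a minimal sketch of the Gaussian reparameterization trick and the corresponding KL penalty against a standard normal prior. The function names and dimensions are illustrative, not taken from the paper.

```python
import math
import random

def reparameterize(mu, logvar, rnd):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1), per latent dimension."""
    return [m + math.exp(0.5 * lv) * rnd.gauss(0.0, 1.0)
            for m, lv in zip(mu, logvar)]

def kl_to_standard_normal(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))

rnd = random.Random(0)
mu, logvar = [0.0] * 8, [0.0] * 8   # posterior equal to the prior
z = reparameterize(mu, logvar, rnd)
print(kl_to_standard_normal(mu, logvar))  # 0.0: no divergence from the prior
```

During training the KL term is added to the reconstruction loss, which is exactly the term whose collapse to zero ("posterior collapse") the multi-level model above tries to mitigate.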

Toward Controlled Generation of Text

A new neural generative model is proposed that combines variational autoencoders and holistic attribute discriminators for effective imposition of semantic structures in generic generation and manipulation of text.

Learning Deep Architectures for AI

The motivations and principles of learning algorithms for deep architectures are discussed, in particular those exploiting unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks to construct deeper models such as Deep Belief Networks.

Application of Low-resource Machine Translation Techniques to Russian-Tatar Language Pair

This paper applies techniques such as transfer learning and semi-supervised learning to the base Transformer model and empirically shows that the resulting models improve Russian-to-Tatar and Tatar-to-Russian translation quality by +2.57 and +3.66 BLEU, respectively.

Hierarchical Transformer for Multilingual Machine Translation

It is demonstrated that, with a carefully chosen training strategy, the hierarchical architecture can outperform bilingual models and multilingual models with full parameter sharing.