EVIL: Exploiting Software via Natural Language

@article{Liguori2021EVILES,
  title={EVIL: Exploiting Software via Natural Language},
  author={Pietro Liguori and Erfan Al-Hossami and Vittorio Orbinato and Roberto Natella and Samira Shaikh and Domenico Cotroneo and Bojan Cukic},
  journal={2021 IEEE 32nd International Symposium on Software Reliability Engineering (ISSRE)},
  year={2021},
  pages={321-332}
}
Writing exploits for security assessment is a challenging task. The writer needs to master programming and obfuscation techniques to develop a successful exploit. To make the task easier, we propose an approach (EVIL) to automatically generate exploits in assembly/Python language from descriptions in natural language. The approach leverages Neural Machine Translation (NMT) techniques and a dataset that we developed for this work. We present an extensive experimental study to evaluate the… 
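
As context for how such an approach is typically framed, the sketch below casts the natural-language description as the source sequence and the code snippet as the target of a sequence-to-sequence model. It is a minimal illustration using an off-the-shelf T5 checkpoint from the HuggingFace transformers library as a stand-in; the model choice, the task prefix, and the example pair are assumptions, not the paper's actual pipeline or dataset.

# Minimal sketch (not the authors' EVIL pipeline): framing natural-language
# exploit descriptions as sequence-to-sequence translation, with an
# off-the-shelf T5 model standing in for the paper's NMT models.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Illustrative pair, not taken from the EVIL dataset.
intent = "translate to assembly: jump to the label recv_loop if eax is zero"
target = "je recv_loop"

# Tokenize the natural-language intent (encoder input) and the code (decoder target).
inputs = tokenizer(intent, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One training-style forward pass: the model is optimized to emit the code tokens.
loss = model(**inputs, labels=labels).loss
print(f"seq2seq training loss: {loss.item():.3f}")

# Decoding the intent yields a candidate snippet; with this untuned checkpoint
# the output is only meaningful after fine-tuning on NL/code pairs.
generated = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(generated[0], skip_special_tokens=True))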

Citations

Can we generate shellcodes via natural language? An empirical study

The empirical analysis shows that NMT can generate assembly code snippets from natural language with high accuracy and that, in many cases, it can generate entire shellcodes with no errors.

Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation

This work identifies a set of perturbations and metrics tailored for the robustness assessment of NMT models, and presents a preliminary experimental evaluation showing which types of perturbations affect the models the most and deriving useful insights for future directions.
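
As a rough illustration of what perturbation-based probing looks like (the paper's concrete perturbations and metrics are not reproduced here), the sketch below applies a simple token-drop perturbation to natural-language intents and measures how often a generator's output stays unchanged; the model_fn placeholder and the stability metric are assumptions for illustration only.

# Illustrative sketch of perturbation-based robustness probing; the actual
# perturbations and metrics used in the paper are not reproduced here.
import random

def drop_token(intent: str, rng: random.Random) -> str:
    """Remove one random word from a natural-language intent."""
    words = intent.split()
    if len(words) <= 1:
        return intent
    del words[rng.randrange(len(words))]
    return " ".join(words)

def stability_rate(model_fn, intents, rng=None) -> float:
    """Fraction of intents whose generated code is unchanged under perturbation."""
    rng = rng or random.Random(0)
    stable = sum(model_fn(i) == model_fn(drop_token(i, rng)) for i in intents)
    return stable / len(intents)

if __name__ == "__main__":
    # `model_fn` stands for any NL-to-code generator; here a trivial placeholder.
    fake_model = lambda intent: "xor eax, eax" if "zero" in intent else "nop"
    print(stability_rate(fake_model, ["zero out the eax register", "do nothing"]))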

How Important are Good Method Names in Neural Code Generation? A Model Robustness Perspective

The potential of leveraging method names to enhance the performance of PCGMs is studied from a model robustness perspective, and a novel approach named RADAR (neuRAl coDe generAtor Robustifier) is proposed.

A Survey on Artificial Intelligence for Source Code: A Dialogue Systems Perspective

This survey paper overviews the major deep learning methods used in Natural Language Processing (NLP) and source code over the last 35 years, and presents a software-engineering-centered taxonomy for CI, placing each work into a category describing how it best assists the software development cycle.

Code Summarization: Do Transformers Really Understand Code?

Overall, the quality of the generated summaries even from state-of-the-art (SOTA) models is quite poor, raising questions about the utility of current approaches and datasets.

References

Showing 1-10 of 66 references

Shellcode_IA32: A Dataset for Automatic Shellcode Generation

This work takes a first step toward automatically generating shellcodes, i.e., small pieces of code used as a payload in the exploitation of a software vulnerability, from natural language comments, by introducing a dataset of challenging but common assembly instructions paired with their natural language descriptions.

Incorporating External Knowledge through Pre-training for Natural Language to Code Generation

Evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa.
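
For reference, BLEU on code generation benchmarks such as CoNaLa is reported on a 0-100 scale over tokenized code, so an improvement of 2.2 is in absolute points. The sketch below shows a generic corpus-level BLEU computation with NLTK; the token sequences and smoothing choice are illustrative assumptions, not the evaluation script used in that work.

# Generic corpus-level BLEU over tokenized code, in the spirit of CoNaLa-style
# code generation evaluation; not the paper's exact scoring script.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [[["df", ".", "dropna", "(", ")"]]]          # one list of references per example
hypotheses = [["df", ".", "dropna", "(", "inplace", "=", "True", ")"]]

smooth = SmoothingFunction().method3
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU: {100 * score:.1f}")  # 0-100 scale, so +2.2 means absolute points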

Automatically generating commit messages from diffs using neural machine translation

This paper adapts Neural Machine Translation (NMT) to automatically "translate" diffs into commit messages and designed a quality-assurance filter to detect cases in which the algorithm is unable to produce good messages, and return a warning instead.

Neural-Machine-Translation-Based Commit Message Generation: How Far Are We?

A simpler and faster approach, named NNGen (Nearest Neighbor Generator), is proposed to generate concise commit messages using the nearest neighbor algorithm; it is over 2,600 times faster than NMT and outperforms NMT in terms of BLEU by 21%.
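
A simplified sketch of the retrieval idea is shown below: diffs are represented as bag-of-words vectors, the most similar training diff is retrieved by cosine similarity, and its commit message is reused. The example diffs and messages are made up, and details of the published method (tokenization, top-k re-ranking by BLEU) are only approximated.

# Simplified sketch of the nearest-neighbor idea behind NNGen: bag-of-words
# vectors over diffs, cosine-similarity retrieval, reuse of the best match's
# commit message. Illustrative data; not the published implementation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

train_diffs = [
    "- return None\n+ return default",
    "+ import logging\n+ logger = logging.getLogger(__name__)",
]
train_msgs = [
    "Return a default value instead of None",
    "Add module-level logger",
]

vectorizer = CountVectorizer()
train_vecs = vectorizer.fit_transform(train_diffs)

def nngen(test_diff: str) -> str:
    """Return the commit message of the nearest training diff."""
    test_vec = vectorizer.transform([test_diff])
    sims = cosine_similarity(test_vec, train_vecs)[0]
    return train_msgs[sims.argmax()]

print(nngen("- value = None\n+ value = default"))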

AEG: Automatic Exploit Generation

This paper presents AEG, the first end-to-end system for fully automatic exploit generation, which was used to analyze 14 open-source projects and successfully generated 16 control flow hijacking exploits.

A Parallel Corpus of Python Functions and Documentation Strings for Automated Code Documentation and Code Generation

A large and diverse parallel corpus of a hundred thousand Python functions with their documentation strings (“docstrings”), generated by scraping open source repositories on GitHub, is introduced.
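
At its core, building such a corpus amounts to walking Python sources and pairing each function with its docstring. The sketch below does this with the standard ast module on an inline example; the original pipeline's GitHub scraping and filtering steps are omitted, and the helper name is ours.

# Minimal sketch of pairing Python functions with their docstrings, the kind
# of extraction such a corpus is built from (scraping and filtering omitted).
import ast

def function_docstring_pairs(source: str):
    """Yield (function_source, docstring) pairs from a Python source string."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc:
                yield ast.get_source_segment(source, node), doc

example = '''
def add(a, b):
    """Return the sum of a and b."""
    return a + b
'''
for code, doc in function_docstring_pairs(example):
    print(doc, "->", code.splitlines()[0])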

A Natural Language Programming Approach for Requirements-Based Security Testing

This paper proposes, applies, and assesses Misuse Case Programming (MCP), an approach that automatically generates security test cases from misuse case specifications (i.e., use case specifications capturing the behavior of malicious users).

SemFuzz: Semantics-based Automatic Generation of Proof-of-Concept Exploits

SemFuzz is presented, a novel technique that leverages vulnerability-related text to guide the automatic generation of PoC exploits for vulnerability types never automatically attacked before, indicating that more complicated flaws can also be automatically attacked.

Latent Predictor Networks for Code Generation

A novel neural network architecture is presented which generates an output sequence conditioned on an arbitrary number of input functions and allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training.

CodeBERT: A Pre-Trained Model for Programming and Natural Languages

This work develops CodeBERT with Transformer-based neural architecture, and trains it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators.
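
The released checkpoint can be loaded as a plain encoder; the sketch below encodes a natural-language/code pair with the microsoft/codebert-base checkpoint via the transformers library. The example pair is made up, and downstream fine-tuning (e.g., for code search or generation) is omitted.

# Minimal sketch of using the released CodeBERT checkpoint as an encoder for a
# natural-language / code pair; task-specific fine-tuning is not shown.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "zero out the eax register"
code = "xor eax, eax"

# The tokenizer joins the two segments with the model's separator tokens.
inputs = tokenizer(nl, code, return_tensors="pt")
outputs = model(**inputs)

# One contextual vector per token; the first position is commonly used as a
# sequence-level representation for classification or retrieval.
print(outputs.last_hidden_state.shape)
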
...