MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages
@article{Wang2022MCoNaLaAB,
  title   = {MCoNaLa: A Benchmark for Code Generation from Multiple Natural Languages},
  author  = {Zhiruo Wang and Grace Cuenca and Shuyan Zhou and Frank F. Xu and Graham Neubig},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2203.08388}
}
While there has been a recent burgeoning of applications at the intersection of natural and programming languages, such as code generation and code summarization, these applications are usually English-centric. This creates a barrier for program developers who are not proficient in English. To mitigate this gap in technology development across languages, we propose a multilingual dataset, MCoNaLa, to benchmark code generation from natural language commands extending beyond English. Modeled off…
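To make the task concrete, here is a minimal sketch of what an MCoNaLa-style evaluation loop could look like: each example pairs a natural-language intent (here in Japanese or Spanish) with a reference Python snippet, and generated code is scored with corpus-level BLEU. The record fields, the canned `generate_code` stand-in, and the use of sacrebleu are illustrative assumptions, not the benchmark's released schema or metric implementation.

```python
# Illustrative sketch of MCoNaLa-style NL-to-code pairs and a BLEU-based
# evaluation loop (schema and metric code are assumptions, not the release).
import sacrebleu

examples = [
    {"intent": "リストを逆順にする",  # Japanese: "reverse the list"
     "reference": "x = list(reversed(x))"},
    {"intent": "ordenar un diccionario por valor",  # Spanish: "sort a dict by value"
     "reference": "sorted(d.items(), key=lambda kv: kv[1])"},
]

def generate_code(intent: str) -> str:
    """Stand-in for a real NL-to-code model (hypothetical canned outputs)."""
    canned = {
        "リストを逆順にする": "x = list(reversed(x))",
        "ordenar un diccionario por valor": "sorted(d.items(), key=lambda kv: kv[1])",
    }
    return canned.get(intent, "pass")

hyps = [generate_code(ex["intent"]) for ex in examples]
refs = [[ex["reference"] for ex in examples]]  # one aligned reference stream
print(f"BLEU: {sacrebleu.corpus_bleu(hyps, refs).score:.1f}")
```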
References
Showing 1-10 of 50 references
The Flores-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
- Computer Science, TACL
- 2022
The Flores-101 evaluation benchmark is introduced, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of topics and domains; it enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems.
TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation
- Computer Science, EMNLP
- 2018
TRANX is a transition-based neural semantic parser that maps natural language utterances into formal meaning representations (MRs) and is highly generalizable, extensible, and effective, registering strong results compared to existing neural semantic parsers.
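The transition-based idea is that a program's AST is built by a sequence of grammar actions rather than emitted as flat tokens. The toy grammar, action names, and replay routine below are simplified assumptions meant only to illustrate how an ApplyRule action expands a frontier node and a GenToken action fills in a terminal; they are not the TRANX implementation.

```python
# Minimal sketch of transition-based AST construction (toy grammar assumed).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

# Toy grammar: each rule maps a nonterminal to an ordered list of child labels.
GRAMMAR = {
    "expr -> Call(func, args)": ["func", "args"],
    "func -> Name": ["Name"],
    "args -> Str": ["Str"],
}

def apply_actions(actions):
    """Replay ApplyRule/GenToken actions into a tree, leftmost-frontier-first."""
    root = Node("expr")
    frontier = [root]  # unexpanded nodes, in depth-first order
    for kind, payload in actions:
        node = frontier.pop(0)
        if kind == "ApplyRule":
            node.children = [Node(lbl) for lbl in GRAMMAR[payload]]
            frontier = node.children + frontier  # expand children next
        elif kind == "GenToken":
            node.children = [Node(payload)]      # terminal token
    return root

# Derivation for print('hello') under the toy grammar.
tree = apply_actions([
    ("ApplyRule", "expr -> Call(func, args)"),
    ("ApplyRule", "func -> Name"),
    ("GenToken", "print"),
    ("ApplyRule", "args -> Str"),
    ("GenToken", "'hello'"),
])
```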
Learning to Mine Aligned Code and Natural Language Pairs from Stack Overflow
- Computer Science, 2018 IEEE/ACM 15th International Conference on Mining Software Repositories (MSR)
- 2018
A novel method to mine high-quality aligned data from Stack Overflow using two sets of features: hand-crafted features considering the structure of the extracted snippets, and correspondence features obtained by training a probabilistic model to capture the correlation between NL and code using neural networks.
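As a rough illustration of this mining setup, candidate (NL, code) pairs can be scored by a classifier over simple structural features. The feature set and the logistic-regression stand-in below are assumptions for illustration, not the paper's actual features or its probabilistic correspondence model.

```python
# Hedged sketch: score candidate (NL, code) pairs mined from Q&A posts with a
# classifier over hand-crafted structural features (toy features assumed).
from sklearn.linear_model import LogisticRegression

def structural_features(nl: str, code: str) -> list:
    return [
        len(code.splitlines()),                             # snippet length
        int(code.strip().startswith(("import", "from"))),   # boilerplate import?
        len(set(nl.lower().split()) & set(code.lower().split())),  # token overlap
    ]

# Toy training data: label 1 = snippet actually implements the question title.
pairs = [
    ("how to reverse a list", "x[::-1]", 1),
    ("how to reverse a list", "import os", 0),
    ("sort dict by value", "sorted(d.items(), key=lambda kv: kv[1])", 1),
    ("sort dict by value", "print('done')", 0),
]
X = [structural_features(nl, code) for nl, code, _ in pairs]
y = [label for _, _, label in pairs]

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([structural_features("reverse a list", "x[::-1]")])[0, 1])
```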
Multilingual Denoising Pre-training for Neural Machine Translation
- Computer Science, Transactions of the Association for Computational Linguistics
- 2020
This paper demonstrates that multilingual denoising pre-training produces significant performance gains across a wide variety of machine translation (MT) tasks. We present mBART—a…
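A denoising objective of this kind corrupts text and trains the model to reconstruct it. The sketch below shows two noising operations associated with mBART, span masking (text infilling) and sentence permutation, in simplified form; the span-length choice and mask token here are assumptions, not the exact fairseq implementation.

```python
# Simplified sketch of mBART-style noising: mask a contiguous span in each
# sentence and permute sentence order (parameters are assumptions).
import random

def noise(sentences, mask_ratio=0.35, mask_token="<mask>"):
    sentences = sentences[:]          # avoid mutating the caller's list
    random.shuffle(sentences)         # sentence permutation
    noised = []
    for sent in sentences:
        toks = sent.split()
        n_mask = max(1, int(len(toks) * mask_ratio))
        start = random.randrange(0, len(toks) - n_mask + 1)
        # Replace the whole span with a single mask token (text infilling).
        toks[start:start + n_mask] = [mask_token]
        noised.append(" ".join(toks))
    return noised

print(noise(["the cat sat on the mat", "it was a sunny day"]))
```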
Latent Predictor Networks for Code Generation
- Computer Science, ACL
- 2016
A novel neural network architecture is presented which generates an output sequence conditioned on an arbitrary number of input functions and allows both the choice of conditioning context and the granularity of generation, for example characters or tokens, to be marginalised, thus permitting scalable and effective training.
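The key mechanism is that the choice of predictor is a latent variable: the probability of each output token is a weighted sum over predictors, computed stably in log space. The two toy predictors and fixed mixture weights below are assumptions for illustration, not the paper's architecture.

```python
# Sketch of marginalising over latent predictors:
# log P(token) = log sum_k w_k * P_k(token), via a log-sum-exp.
import math

def token_logprob(token, predictors, weights):
    terms = [math.log(w) + p.get(token, float("-inf"))
             for p, w in zip(predictors, weights)]
    m = max(terms)
    if m == float("-inf"):
        return m
    return m + math.log(sum(math.exp(t - m) for t in terms))

char_gen = {"print": math.log(0.1)}   # log-probs from a character generator
copier   = {"print": math.log(0.8)}   # log-probs from a copy predictor
print(token_logprob("print", [char_gen, copier], weights=[0.5, 0.5]))
```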
Learning to Generate Pseudo-Code from Source Code Using Statistical Machine Translation (T)
- Computer Science, 2015 30th IEEE/ACM International Conference on Automated Software Engineering (ASE)
- 2015
SMT, which was originally designed to translate between two natural languages, allows us to automatically learn the relationship between source code/pseudo-code pairs, making it possible to create a pseudo-code generator with less human effort.
SNIFF: A Search Engine for Java Using Free-Form Queries
- Computer Science, FASE
- 2009
A novel code search technique, called SNIFF, is presented that retains the flexibility of performing code search in plain English while returning a small set of relevant code snippets for the desired task.
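In the same spirit, a plain-English query can be matched against snippets indexed by their natural-language context. TF-IDF retrieval is a stand-in assumption here, and the tiny corpus is invented; SNIFF itself targets Java and annotates snippets with API documentation before intersecting results.

```python
# Hedged sketch of free-form code search: rank snippets by the TF-IDF
# similarity between a plain-English query and each snippet's description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

snippets = {
    "new BufferedReader(new FileReader(path))": "read a text file line by line",
    "Collections.sort(list)": "sort a list in ascending order",
    "new URL(u).openStream()": "download content from a url",
}
docs = list(snippets.values())

vec = TfidfVectorizer().fit(docs)
doc_mat = vec.transform(docs)

def search(query: str, k: int = 1) -> list:
    sims = cosine_similarity(vec.transform([query]), doc_mat)[0]
    ranked = sorted(zip(sims, snippets), reverse=True)
    return [code for _, code in ranked[:k]]

print(search("how do I sort a list"))
```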
Incorporating External Knowledge through Pre-training for Natural Language to Code Generation
- Computer Science, ACL
- 2020
Evaluations show that combining the two sources with data augmentation and retrieval-based data re-sampling improves the current state-of-the-art by up to 2.2% absolute BLEU score on the code generation testbed CoNaLa.
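Retrieval-based re-sampling can be pictured as up-weighting external examples whose intents look most like the target distribution before training. The Jaccard similarity score and sampling scheme below are assumptions for illustration, not the paper's re-sampling procedure.

```python
# Sketch of retrieval-based re-sampling: weight each mined intent by its
# similarity to the closest target-domain intent, then sample with weights.
import random

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

target_intents = ["reverse a list", "sort a dict by value"]
mined_intents = [
    "reverse the order of a list",
    "open a tcp socket",
    "sort a dictionary by its values",
]

weights = [max(jaccard(m, t) for t in target_intents) for m in mined_intents]
print(random.choices(mined_intents, weights=weights, k=5))
```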
Multi-Domain Multilingual Question Answering
- Computer Science, EMNLP
- 2021
This tutorial introduces standard benchmarks in multi-domain and multilingual QA, and discusses state-of-the-art approaches that achieve impressive performance, ranging from zero-shot transfer learning to out-of-the-box training with open-domain QA systems.
Cross-Lingual Training with Dense Retrieval for Document Retrieval
- Computer Science, ArXiv
- 2021
These experiments reveal that zero-shot model-based transfer using mBERT improves search quality in non-English monolingual retrieval, and that weakly supervised target-language transfer yields competitive performance against generation-based target-language transfer, which requires external translators and query generators.
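Zero-shot model-based transfer of this kind can be sketched as encoding queries and documents with mBERT and ranking by cosine similarity. The mean pooling, the bert-base-multilingual-cased checkpoint, and the toy corpus below are assumptions for illustration; the paper's models are additionally trained on relevance data before transfer.

```python
# Hedged sketch of cross-lingual dense retrieval with mBERT embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

@torch.no_grad()
def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)       # (B, T, 1)
    vecs = (hidden * mask).sum(1) / mask.sum(1)        # mean pooling
    return torch.nn.functional.normalize(vecs, dim=-1)

docs = ["Berlin is the capital of Germany.", "Der Eiffelturm steht in Paris."]
query = embed(["Wo steht der Eiffelturm?"])            # German query
scores = embed(docs) @ query.T                         # cosine similarities
print(docs[scores.argmax().item()])
```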