Publications
CamemBERT: a Tasty French Language Model
TLDR
We train a monolingual Transformer-based language model on the French language using recent large-scale corpora.
Controllable Sentence Simplification
TLDR
We adapt a discrete parametrization mechanism that provides explicit control on simplification systems based on Sequence-to-Sequence models to the task of Sentence Simplification.
EASSE: Easier Automatic Sentence Simplification Evaluation
TLDR
We introduce EASSE, a Python package aiming to facilitate and standardise automatic evaluation and comparison of Sentence Simplification (SS) systems.
Reference-less Quality Estimation of Text Simplification Systems
TLDR
We compare multiple approaches to reference-less quality estimation of sentence-level text simplification systems, based on the dataset used for the QATS 2016 shared task.
Multilingual Unsupervised Sentence Simplification
TLDR
We propose using unsupervised mining techniques to automatically create training corpora for simplification in multiple languages from raw Common Crawl web data.
ELMoLex: Connecting ELMo and Lexicon Features for Dependency Parsing
TLDR
In this paper, we present the details of the neural dependency parser and the neural tagger submitted by our team 'ParisNLP' to the CoNLL 2018 Shared Task on parsing from raw text to Universal Dependencies.
ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations
TLDR
We introduce ASSET (Abstractive Sentence Simplification Evaluation and Tuning), a new dataset for tuning and evaluation of automatic SS models.
CamemBERT Contextual Language Models for French: Impact of Training Data Size and Heterogeneity (Les modèles de langue contextuels CamemBERT pour le français : impact de la taille et de l'hétérogénéité des données d'entrainement)
TLDR
Contextual neural language models are now ubiquitous in natural language processing.