Corpus ID: 234790338

KLUE: Korean Language Understanding Evaluation

@article{Park2021KLUEKL,
  title={KLUE: Korean Language Understanding Evaluation},
  author={Sungjoon Park and Jihyung Moon and Sung-Dong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Tae Hwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Young-kuk Jeong and Inkwon Lee and Sang-gyu Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice H. Oh and Jung-Woo Ha and Kyunghyun Cho},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.09680}
}
We introduce the Korean Language Understanding Evaluation (KLUE) benchmark. KLUE is a collection of eight Korean natural language understanding (NLU) tasks: Topic Classification, Semantic Textual Similarity, Natural Language Inference, Named Entity Recognition, Relation Extraction, Dependency Parsing, Machine Reading Comprehension, and Dialogue State Tracking. We build all of the tasks from scratch from diverse source corpora while respecting copyrights, to ensure accessibility for anyone…
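The eight tasks above are released as separate sub-benchmarks. As a minimal sketch, assuming the benchmark is published on the Hugging Face hub under the `klue` dataset name, one might enumerate and load individual tasks as follows (the short configuration identifiers in the mapping are illustrative assumptions, not confirmed by the abstract):

```python
# Illustrative mapping of short configuration names to KLUE task names.
# The identifiers are assumptions; consult the official release for the
# exact configuration names.
KLUE_TASKS = {
    "ynat": "Topic Classification",
    "sts": "Semantic Textual Similarity",
    "nli": "Natural Language Inference",
    "ner": "Named Entity Recognition",
    "re": "Relation Extraction",
    "dp": "Dependency Parsing",
    "mrc": "Machine Reading Comprehension",
    "wos": "Dialogue State Tracking",
}


def load_klue_task(config: str):
    """Load one KLUE task, assuming it is hosted on the Hugging Face
    hub (requires `pip install datasets` and network access)."""
    from datasets import load_dataset

    return load_dataset("klue", config)


if __name__ == "__main__":
    for config, task in KLUE_TASKS.items():
        print(f"{config}: {task}")
```

The download itself is deferred to `load_klue_task` so the task listing can be inspected offline.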
