Corpus ID: 230435736

The Pile: An 800GB Dataset of Diverse Text for Language Modeling

@article{Gao2021ThePA,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Leo Gao and Stella Rose Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00027}
}
Recent work has demonstrated that increased training dataset diversity improves general cross-domain knowledge and downstream generalization capability for large-scale language models. With this in mind, we present the Pile: an 825 GiB English text corpus targeted at training large-scale language models. The Pile is constructed from 22 diverse high-quality subsets—both existing and newly constructed—many of which derive from academic or professional sources. Our evaluation of the untuned…
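The abstract describes the Pile as a weighted mixture of 22 subsets. As a minimal, stdlib-only sketch of that mixing idea — the subset names, documents, and weights below are purely illustrative, not the Pile's actual composition or sampling code:

```python
import random

# Hypothetical subsets and mixing weights (illustrative only; the Pile's
# real subsets, sizes, and weights differ).
subsets = {
    "pubmed": ["doc_a", "doc_b"],
    "github": ["doc_c"],
    "wiki": ["doc_d", "doc_e", "doc_f"],
}
weights = {"pubmed": 0.5, "github": 0.2, "wiki": 0.3}

def sample_corpus(subsets, weights, n, seed=0):
    """Draw n documents: pick a subset by weight, then a document uniformly.

    This is the generic weighted-mixture pattern, not the paper's pipeline.
    """
    rng = random.Random(seed)
    names = list(subsets)
    w = [weights[s] for s in names]
    out = []
    for _ in range(n):
        s = rng.choices(names, weights=w)[0]
        out.append(rng.choice(subsets[s]))
    return out
```

A higher weight on a subset makes its documents proportionally more frequent in the sampled stream, which is how a mixture corpus can up- or down-weight individual sources.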
7 Citations
Documenting the English Colossal Clean Crawled Corpus
Multilingual Augmenter: The Model Chooses
Pitfalls of Static Language Modelling
Measuring Coding Challenge Competence With APPS
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets

Table caption: Test perplexity of the Pile using GPT-2 and GPT-3. Evaluation is performed on one-tenth of the Pile's test data.
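The caption above reports test perplexity. As background, a minimal stdlib-only sketch of how perplexity is computed from per-token log-likelihoods — the function name and inputs are illustrative, not the paper's evaluation code:

```python
import math

def perplexity(token_log_probs):
    """Perplexity is exp of the negative mean per-token log-likelihood.

    token_log_probs: natural-log probabilities the model assigned to each
    observed token. Lower perplexity means the model found the text more
    predictable.
    """
    n = len(token_log_probs)
    return math.exp(-sum(token_log_probs) / n)

# A model that assigns probability 1/4 to every token has perplexity 4.
uniform_logp = [math.log(0.25)] * 8
```

In practice a language model supplies `token_log_probs` by scoring each token of the held-out test set given its preceding context.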