Never-Ending Learning


Whereas people learn many different types of knowledge from diverse experiences over many years, most current machine learning systems acquire just a single function or data model from a single data set. We propose a never-ending learning paradigm for machine learning, to better reflect the more ambitious and encompassing type of learning performed by humans. As a case study, we describe the Never-Ending Language Learner (NELL), which achieves some of the desired properties of a never-ending learner, and we discuss lessons learned. NELL has been learning to read the web 24 hours/day since January 2010, and so far has acquired a knowledge base with over 80 million confidence-weighted beliefs (e.g., servedWith(tea, biscuits)). NELL has also learned millions of features and parameters that enable it to read these beliefs from the web. Additionally, it has learned to reason over these beliefs to infer new beliefs, and is able to extend its ontology by synthesizing new relational predicates. NELL can be tracked online and followed on Twitter at @CMUNELL.
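To make the abstract's notions of confidence-weighted beliefs and inference over them concrete, here is a minimal illustrative sketch, not NELL's actual implementation: beliefs are stored as predicate triples with confidence scores, and a hand-written rule derives new beliefs. All predicate and entity names other than servedWith(tea, biscuits) are hypothetical, and the product-of-confidences combination is one simple assumed scheme; NELL learns such rules and their weights from data rather than hard-coding them.

```python
# Toy knowledge base of confidence-weighted beliefs: (predicate, arg1, arg2) -> confidence.
# "servedWith(tea, biscuits)" comes from the abstract; the other beliefs are invented examples.
kb = {
    ("servedWith", "tea", "biscuits"): 0.93,
    ("servedIn", "tea", "teacup"): 0.88,
    ("isA", "teacup", "container"): 0.95,
}

def infer_served_in_container(kb):
    """Hypothetical rule: servedIn(x, y) AND isA(y, container)
    => servedInContainerOf(x, y).
    The inferred belief's confidence is the product of its premises'
    confidences (a simple illustrative combination scheme)."""
    inferred = {}
    for (pred, x, y), c1 in kb.items():
        if pred != "servedIn":
            continue
        c2 = kb.get(("isA", y, "container"))
        if c2 is not None:
            inferred[("servedInContainerOf", x, y)] = c1 * c2
    return inferred

new_beliefs = infer_served_in_container(kb)
# e.g., servedInContainerOf(tea, teacup) with confidence 0.88 * 0.95
```

Extending the ontology with a new relational predicate, as NELL does, corresponds here to introducing a new predicate name like servedInContainerOf that did not exist in the original belief set.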

DOI: 10.3233/978-1-61499-098-7-5


Cite this paper

@inproceedings{Mitchell2015NeverEndingL,
  title     = {Never-Ending Learning},
  author    = {Tom M. Mitchell and William W. Cohen and Estevam R. Hruschka and Partha Pratim Talukdar and Justin Betteridge and Andrew Carlson and Bhavana Dalvi Mishra and Matthew Gardner and Bryan Kisiel and Jayant Krishnamurthy and Ni Lao and Kathryn Mazaitis and Thahir Mohamed and Ndapandula Nakashole and Emmanouil Antonios Platanios and Alan Ritter and Mehdi Samadi and Burr Settles and Richard C. Wang and Derry Tanti Wijaya and Abhinav Gupta and Xinlei Chen and Abulhair Saparov and Malcolm Greaves and Joel Welling},
  booktitle = {AAAI},
  year      = {2015}
}