• Corpus ID: 42064048

Machine Teaching: A New Paradigm for Building Machine Learning Systems

  @article{simard2017machineteaching,
    title={Machine Teaching: A New Paradigm for Building Machine Learning Systems},
    author={Patrice Y. Simard and Saleema Amershi and David Maxwell Chickering and Alicia Edelman Pelton and Soroush Ghorashi and Christopher Meek and Gonzalo Ramos and Jina Suh and Johan Verwey and Mo Wang and John Robert Wernsing}
  }
The current processes for building machine learning systems require practitioners with deep knowledge of machine learning. This significantly limits the number of machine learning systems that can be created and has led to a mismatch between the demand for machine learning systems and the ability of organizations to build them. We believe that in order to meet this growing demand for machine learning systems, we must significantly increase the number of individuals who can teach machines.
Interactive machine teaching: a human-centered approach to building machine-learned models
It is argued that IMT processes enable people to leverage intrinsic human capabilities and offer a variety of benefits, including making machine learning methods accessible to subject-matter experts and supporting the creation of semantic and debuggable machine learning (ML) models.
Machine Teaching by Domain Experts: Towards More Humane, Inclusive, and Intelligent Machine Learning Systems
This paper argues that a possible way to escape from the limitations of current machine learning (ML) systems is to allow their development directly by domain experts, without the mediation of ML experts.
Explainable Active Learning (XAL)
An empirical study compares the model learning outcomes, feedback content, and experience of XAL with those of traditional AL and coactive learning (providing the model's prediction without explanation), and identifies potential drawbacks: an anchoring effect with the model's judgment and additional cognitive workload.
Contextual machine teaching
The main contribution of this work is an increased focus on available features, the features space and the potential to transfer some of the domain expert's explanatory powers to the machine learning system.
Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers
The wide adoption of Machine Learning (ML) technologies has created a growing demand for people who can train ML models. Some advocated the term "machine teacher" to refer to the role of people who teach machines.
Using Expert Patterns in Assisted Interactive Machine Learning: A Study in Machine Teaching
This paper explores and shows how end-users without MT experience successfully build ML models using the MT process, and achieve results not far behind those of MT experts.
Understanding and Supporting Knowledge Decomposition for Machine Teaching
Findings from a study investigating what types of knowledge people teach, what cognitive processes they use, and what challenges they encounter when teaching a learner to classify text documents carry implications for applying the benefits of knowledge decomposition to MT and ML.
Intuitiveness in Active Teaching
This work analyzes the intuitiveness of certain algorithms when they are actively taught by users, to offer a systematic method to judge the efficacy of human-machine interactions and thus to scrutinize how accessible, understandable, and fair a system is.
A Feature Space Focus in Machine Teaching
This work focuses on domain experts and the importance of, for the ML system, available features and the space they span, and the investigation of the feature space is grounded in a conducted study and related theories.
A Level-wise Taxonomic Perspective on Automated Machine Learning to Date and Beyond: Challenges and Opportunities
A new classification system with seven levels is introduced to distinguish AutoML systems based on their level of autonomy; each level is defined by its scope of automation support, and some important challenges in achieving this ambitious goal are discussed.


Machine Learning: The High Interest Credit Card of Technical Debt
Machine learning offers a fantastically powerful toolkit for building complex systems quickly. This paper argues that it is dangerous to think of these quick wins as coming for free: using the framework of technical debt, it shows how easy it is to incur massive ongoing maintenance costs at the system level when applying machine learning.
Curriculum learning
It is hypothesized that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions).
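The curriculum-learning idea summarized above, presenting training examples in order of increasing difficulty, can be sketched in a few lines. The sketch below is illustrative, not from the cited paper: the `difficulty` scoring function and the staged widening of the training pool are assumptions chosen for clarity.

```python
def curriculum_batches(examples, difficulty, stages=3):
    """Yield training pools of increasing difficulty.

    examples   -- list of training items
    difficulty -- function mapping an item to a hardness score
    stages     -- number of curriculum stages
    """
    ranked = sorted(examples, key=difficulty)
    for stage in range(1, stages + 1):
        # Stage k exposes the easiest k/stages fraction of the data,
        # so the final stage covers the full dataset.
        cutoff = int(len(ranked) * stage / stages)
        yield ranked[:cutoff]

# Example: treat shorter "sentences" as easier than longer ones.
data = ["a b", "a b c d e", "a", "a b c", "a b c d"]
pools = list(curriculum_batches(data, difficulty=len, stages=3))
```

A real training loop would run some number of optimizer steps on each successive pool; the continuation-method view corresponds to the easy pools smoothing the loss landscape before the full, harder objective is introduced.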
Structured labeling for facilitating concept evolution in machine learning
This paper introduces the notion of concept evolution, the changing nature of a person's underlying concept which can result in inconsistent labels and thus be detrimental to machine learning, and introduces two structured labeling solutions.
Toward an Architecture for Never-Ending Language Learning
This work proposes an approach and a set of design principles for an intelligent computer agent that runs forever and describes a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs.
The mythical man-month: Essays on software engineering
  • P. Kidwell
  • Engineering
    IEEE Annals of the History of Computing
  • 1996
Like Babbage, he lobbied for mathematical reform, stumped for the centrality of science in cultural advancement, argued that government support was crucial, and proved a stubborn and crotchety
The Nature of Statistical Learning Theory
  • V. Vapnik
  • Computer Science, Mathematics
    Statistics for Engineering and Information Science
  • 2000
Topics covered include the setting of the learning problem; consistency of learning processes; bounds on the rate of convergence of learning processes; controlling the generalization ability of learning processes; and constructing learning algorithms.
Toward an Architecture for Never-Ending Language Learning
  • 2010