Corpus ID: 14892153

Teaching Classification Boundaries to Humans

@inproceedings{Basu2013TeachingCB,
  title={Teaching Classification Boundaries to Humans},
  author={Sumit Basu and Janara Christensen},
  booktitle={AAAI},
  year={2013}
}
Given a classification task, what is the best way to teach the resulting boundary to a human? While machine learning techniques can provide excellent methods for finding the boundary, including the selection of examples in an online setting, they tell us little about how we would teach a human the same task. We propose to investigate the problem of example selection and presentation in the context of teaching humans, and explore a variety of mechanisms in the interests of finding what may work…
Near-Optimal Machine Teaching via Explanatory Teaching Sets
This paper proposes NOTES, a principled framework for constructing interpretable teaching sets, utilizing explanations to accelerate the teaching process, and proves that NOTES is competitive with the optimal explanation-based teaching strategy.
Near-Optimally Teaching the Crowd to Classify
This work proposes a natural stochastic model of the learners, modeling them as randomly switching among hypotheses based on observed feedback, and develops STRICT, an efficient algorithm for selecting examples to teach to workers.
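The core idea behind teaching-set selection can be illustrated with a toy greedy scheme (a sketch only; STRICT itself works with the stochastic learner model summarized above, and the hypothesis class, function names, and data below are invented for illustration):

```python
def greedy_teaching_set(hypotheses, target, examples):
    """Greedily pick examples until every wrong hypothesis has been
    eliminated, i.e. disagrees with the target on some chosen example."""
    # keep only hypotheses that are distinguishable from the target
    remaining = [h for h in hypotheses
                 if any(h(x) != target(x) for x in examples)]
    chosen = []
    while remaining:
        # choose the example that eliminates the most surviving
        # wrong hypotheses
        best = max(examples,
                   key=lambda x: sum(h(x) != target(x) for h in remaining))
        chosen.append(best)
        remaining = [h for h in remaining if h(best) == target(best)]
    return chosen

# Hypothetical toy class: 1-D threshold classifiers x >= t.
thresholds = [lambda x, t=t: x >= t for t in range(5)]
target = thresholds[2]
teaching_set = greedy_teaching_set(thresholds, target, list(range(5)))
```

For the threshold class, the greedy rule ends up picking the two examples that straddle the target boundary, which is the classic teaching-set intuition.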
Becoming the expert - interactive multi-class machine teaching
An Interactive Machine Teaching algorithm is proposed that enables a computer to teach challenging visual concepts to a human; a teaching strategy that probabilistically models the student's ability and progress, based on their correct and incorrect answers, is shown to produce better "experts".
Teaching Categories to Human Learners with Visual Explanations
A teaching framework that provides interpretable explanations as feedback and models how the learner incorporates this additional information is proposed, and it is shown that it can automatically generate explanations that highlight the parts of the image that are responsible for the class label.
On Actively Teaching the Crowd to Classify
This work proposes a natural Bayesian model of the workers, modeling them as a learning entity with an initial skill, competence, and dynamics, and shows how a teaching system can exploit this model to interactively teach the workers.
Towards Realistic Predictors
Experimental results provide evidence in support of the effectiveness of the proposed architecture and the learned hardness predictor, and show that the realistic classifier always improves performance on the examples that it accepts to classify, performing better on these examples than an equivalent non-realistic classifier.
Gradient-based Algorithms for Machine Teaching
The problem of machine teaching is considered. A new formulation is proposed under the assumption of an optimal student, where optimality is defined in the usual machine learning sense of empirical…
SuperLoss: A Generic Loss for Robust Curriculum Learning
The SuperLoss consists of appending a novel loss function on top of any existing task loss, hence its name: the main effect is to automatically downweight the contribution of samples with a large loss, effectively mimicking the core principle of curriculum learning.
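The downweighting principle summarized above can be sketched with a toy reweighting rule (illustrative only; the actual SuperLoss uses a different closed-form weighting, and `downweighted_loss` and `tau` are names invented here):

```python
import numpy as np

def downweighted_loss(losses, tau):
    """Curriculum-style reweighting: samples whose loss exceeds the
    threshold tau are exponentially downweighted, so hard or noisy
    samples contribute less to the total objective."""
    losses = np.asarray(losses, dtype=float)
    weights = np.exp(-np.maximum(losses - tau, 0.0))
    return weights * losses, weights

per_sample = np.array([0.2, 1.0, 5.0])   # 5.0 plays a likely-noisy sample
weighted, w = downweighted_loss(per_sample, tau=1.0)
```

Easy samples (loss below `tau`) keep weight 1, while the weight of the hard sample decays exponentially with its excess loss, which is the downweighting behavior the summary describes.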
Interpretable Machine Teaching via Feature Feedback
This work proposes a teaching framework that includes both instance-level labels as well as explanations in the form of feature-level feedback to the human learners, and shows that learners taught with feature-level feedback perform better at test time compared to existing methods.
What Objective Does Self-paced Learning Indeed Optimize?
This study proves that the solving strategy of SPL accords with a majorization-minimization algorithm implemented on a latent objective function, and finds that the loss function contained in this latent objective has a configuration similar to the non-convex regularized penalty (NSPR) known in statistics and machine learning.

References

Showing 1–10 of 15 references
Curriculum learning
It is hypothesized that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions).
Human Active Learning
This first quantitative study comparing human category learning in active versus passive settings indicates that humans are capable of actively selecting informative queries, and in doing so learn better and faster than if they are given random training data, as predicted by learning theory.
Self-Paced Learning for Latent Variable Models
A novel, iterative self-paced learning algorithm where each iteration simultaneously selects easy samples and learns a new parameter vector that outperforms the state of the art method for learning a latent structural SVM on four applications.
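The alternating select-easy-then-refit scheme can be sketched on a deliberately simple 1-D mean-estimation problem (an illustration of the self-paced principle only, not the paper's latent structural SVM; all names below are invented):

```python
import numpy as np

def self_paced_mean(x, lam, n_iters=10):
    """Self-paced learning sketch: (1) mark samples 'easy' when their
    current loss is below 1/lam, (2) refit the parameter on easy
    samples only, then anneal lam so harder samples are admitted in
    later rounds."""
    theta = float(np.median(x))              # robust initialisation
    for _ in range(n_iters):
        losses = (x - theta) ** 2
        v = (losses < 1.0 / lam).astype(float)   # easy-sample indicator
        if v.sum() == 0:                     # safeguard: never fit on nothing
            v[:] = 1.0
        theta = float((v * x).sum() / v.sum())   # refit on easy samples
        lam *= 0.9                           # anneal the pace parameter
    return theta

data = np.array([0.0, 0.1, -0.1, 10.0])  # 10.0 plays a hard/noisy sample
theta = self_paced_mean(data, lam=1.0)
```

Because the outlier's loss stays above the admission threshold, the fitted value stays near zero, whereas the plain mean of the same data is pulled to 2.5 by the outlier.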
Active Learning Literature Survey
This report provides a general introduction to active learning and a survey of the literature, including a discussion of the scenarios in which queries can be formulated, and an overview of the query strategy frameworks proposed in the literature to date.
Flexible shaping: How learning in small steps helps
This work studies the shaping of a hierarchical working memory task using an abstract neural network model as the target learner and uses the model to investigate some of the elements of successful shaping.
How Do Humans Teach: On Curriculum Learning and Teaching Dimension
It is shown through behavioral studies that humans employ three distinct teaching strategies, one of which is consistent with the curriculum learning principle, and a novel theoretical framework is proposed as a potential explanation for this strategy.
Predicting the Optimal Spacing of Study: A Multiscale Context Model of Memory
A Multiscale Context Model (MCM) is able to predict the influence of a particular study schedule on retention for specific material, and is intriguingly similar to a Bayesian multiscale model of memory, yet MCM is better able to account for human declarative memory.
Survey and critique of techniques for extracting rules from trained artificial neural networks
This survey focuses on mechanisms, procedures, and algorithms designed to insert knowledge into ANNs, extract rules from trained ANNs (rule extraction), and utilise ANNs to refine existing rule bases (rule refinement).
Building Intelligent Interactive Tutors: Student-centered Strategies for Revolutionizing E-learning
Building Intelligent Interactive Tutors discusses educational systems that assess a student's knowledge and are adaptive to a student's learning needs, and taps into 20 years of research on intelligent tutors to bring designers and developers a broad range of issues and methods that produce the best intelligent learning environments possible.
A day of great illumination: B. F. Skinner's discovery of shaping.
  • G. Peterson • Psychology, Medicine • Journal of the Experimental Analysis of Behavior • 2004
Despite the seminal studies of response differentiation by the method of successive approximation detailed in chapter 8 of The Behavior of Organisms, B. F. Skinner never actually shaped an operant response by hand until a memorable incident of startling serendipity on the top floor of a flour mill in Minneapolis in 1943, causing him to appreciate as never before the significance of reinforcement mediated by biological connections with the animate social environment.