Publications
Multitask Learning
R. Caruana · Encyclopedia of Machine Learning and Data Mining · 1 May 1998
Multitask Learning is an approach to inductive transfer that improves learning for one task by using the information contained in the training signals of other related tasks.
Citations: 2,397 (175 highly influential)

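The core idea, a representation shared across related tasks and trained by their combined signals, can be sketched as a toy two-headed network. The sizes, task names, and random weights below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multitask net: one shared hidden layer feeding two task-specific heads.
n_in, n_hidden = 4, 8
W_shared = rng.normal(size=(n_in, n_hidden))
W_task_a = rng.normal(size=(n_hidden, 1))   # head for the main task
W_task_b = rng.normal(size=(n_hidden, 1))   # head for a related auxiliary task

def forward(x):
    h = np.tanh(x @ W_shared)          # representation shared across tasks
    return h @ W_task_a, h @ W_task_b  # separate per-task outputs

x = rng.normal(size=(5, n_in))
y_a, y_b = forward(x)
# Training would minimize the sum of both task losses, so the shared weights
# receive training signal from every related task, not just the main one.
joint_loss = float(np.mean(y_a**2) + np.mean(y_b**2))
```

In a real setup the joint loss would compare each head's output against its own task's labels; the point of the sketch is only the shared-parameters-plus-separate-heads structure.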
Multitask Learning
Citations: 1,538 (164 highly influential)

An empirical comparison of supervised learning algorithms
We present a large-scale empirical comparison of ten supervised learning methods: SVMs, neural nets, logistic regression, naive Bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps.
Citations: 1,808 (94 highly influential)

Do Deep Nets Really Need to be Deep?
In this paper we empirically demonstrate that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models.
Citations: 1,190 (92 highly influential)

Removing the Genetics from the Standard Genetic Algorithm
We present an abstraction of the genetic algorithm (GA), termed population-based incremental learning (PBIL), that explicitly maintains the statistics contained in a GA's population but abstracts away the crossover operator and redefines the role of the population.
Citations: 609 (69 highly influential)

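PBIL as described in the abstract can be sketched on the OneMax toy problem (maximize the number of 1-bits): a probability vector replaces the population, and after each generation it is shifted toward the fittest sampled individual. The population size, learning rate, and generation count below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# PBIL on OneMax: fitness of a bit string is its number of 1-bits.
n_bits, pop_size, lr, generations = 20, 50, 0.1, 100
p = np.full(n_bits, 0.5)  # the probability vector replaces the GA population

for _ in range(generations):
    pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample candidates
    best = pop[pop.sum(axis=1).argmax()]                    # fittest individual
    p = (1 - lr) * p + lr * best                            # shift stats toward it

# p drifts toward the all-ones optimum; no crossover operator is needed.
```

The update rule is the whole algorithm: the statistics a GA's population implicitly carries are made explicit in `p`, which is what the abstract means by abstracting away crossover.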
Multitask Learning: A Knowledge-Based Source of Inductive Bias
This paper suggests that it may be easier to learn several hard tasks at one time than to learn them separately.
Citations: 617 (65 highly influential)

Predicting good probabilities with supervised learning
We examine the relationship between the predictions made by different learning algorithms and true posterior probabilities.
Citations: 751 (61 highly influential)

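One standard remedy this line of work examines is Platt scaling: fit a sigmoid `1 / (1 + exp(-(a*s + b)))` to held-out labels so that raw scores `s` become calibrated probabilities. The sketch below uses synthetic scores in place of a real classifier's outputs, and plain gradient descent in place of a library optimizer:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic held-out data: labels generated so the true probability of
# y = 1 rises with the score s (here via a sigmoid with slope 3).
s = rng.normal(size=500)
y = (rng.random(500) < 1 / (1 + np.exp(-3 * s))).astype(float)

# Fit the two Platt parameters a, b by gradient descent on log loss.
a, b = 1.0, 0.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(a * s + b)))
    grad = p - y                       # d(log loss)/d(logit)
    a -= 0.1 * np.mean(grad * s)
    b -= 0.1 * np.mean(grad)

calibrated = 1 / (1 + np.exp(-(a * s + b)))
```

Because the two parameters are fit on held-out labels, the mapping corrects systematically over- or under-confident scores without changing their ranking.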
Model compression
We present a method for "compressing" large, complex ensembles into smaller, faster models, usually without significant loss in performance.
Citations: 963 (46 highly influential)

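The compression recipe, training a small student model on the predictions of a large teacher rather than on the original labels, can be sketched as follows. For simplicity the teacher here is a stand-in logistic model rather than an ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a stand-in model whose soft predictions we want to mimic.
X = rng.normal(size=(1000, 5))
w_teacher = rng.normal(size=(5, 1))
teacher_probs = 1 / (1 + np.exp(-(X @ w_teacher)))   # soft targets

# "Student": a small logistic model trained by gradient descent on
# cross-entropy against the teacher's probabilities, not hard labels.
w_student = np.zeros((5, 1))
for _ in range(4000):
    p = 1 / (1 + np.exp(-(X @ w_student)))
    grad = X.T @ (p - teacher_probs) / len(X)
    w_student -= 0.5 * grad

student_probs = 1 / (1 + np.exp(-(X @ w_student)))
```

The inputs `X` play the role of unlabeled transfer data: the student needs only the teacher's outputs on them, which is what lets a compact model absorb the behavior of a much larger one.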
Ensemble selection from libraries of models
We present a method for constructing ensembles from libraries of thousands of models by using many different learning algorithms and parameter settings.
Citations: 610 (44 highly influential)

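The selection step can be sketched as greedy forward selection: starting from an empty ensemble, repeatedly add whichever library model most improves validation accuracy of the averaged prediction. The library below is synthetic, and the full method includes refinements (bagged selection, sorted initialization) that this sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "library": probability predictions of varying quality on a
# validation set. Real libraries hold thousands of trained models.
n_val, n_models = 200, 15
y_val = rng.integers(0, 2, n_val)
noise = rng.random(n_models) * 0.5
library = np.clip(
    y_val[None, :] + rng.normal(size=(n_models, n_val)) * noise[:, None], 0, 1
)

def accuracy(avg_pred):
    return float(np.mean((avg_pred > 0.5) == y_val))

# Greedy forward selection (with replacement); stop when nothing helps.
chosen, current = [], 0.0
for _ in range(10):  # maximum ensemble size; illustrative
    scores = [accuracy(np.mean(library[chosen + [m]], axis=0))
              for m in range(n_models)]
    best_m = int(np.argmax(scores))
    if scores[best_m] < current:
        break
    chosen.append(best_m)
    current = scores[best_m]

ensemble_pred = np.mean(library[chosen], axis=0)
```

Because the first pick is the single best model and later picks are only kept when they do not hurt, the selected ensemble's validation accuracy never falls below that of any individual library member.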
Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
We present two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems, yielding intelligible models with state-of-the-art accuracy.
Citations: 691 (43 highly influential)
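The GA2M model form, a sum of per-feature shape functions plus selected pairwise interaction terms, can be sketched with binned means standing in for the boosted-tree terms that real GA2Ms learn. The data and single backfitting pass below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data whose target decomposes into two shape functions
# plus one pairwise interaction, matching the GA2M model form.
n = 5000
x1, x2 = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n)
y = np.sin(3 * x1) + x2**2 + x1 * x2 + rng.normal(0, 0.1, n)

edges = np.linspace(-1, 1, 11)
b1 = np.clip(np.digitize(x1, edges) - 1, 0, 9)
b2 = np.clip(np.digitize(x2, edges) - 1, 0, 9)

def bin_means(idx, target, shape):
    """Mean of `target` within each bin (empty bins default to 0)."""
    sums, counts = np.zeros(shape), np.zeros(shape)
    np.add.at(sums, idx, target)
    np.add.at(counts, idx, 1)
    return sums / np.maximum(counts, 1)

f1 = bin_means(b1, y, 10)               # shape function for x1
r = y - f1[b1]
f2 = bin_means(b2, r, 10)               # shape function for x2
r = r - f2[b2]
f12 = bin_means((b1, b2), r, (10, 10))  # pairwise interaction term

pred = f1[b1] + f2[b2] + f12[b1, b2]
```

Each term can be plotted on its own axis, which is what makes the fitted model intelligible: a clinician can inspect the learned shape for any single risk factor or pair of factors.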