• Publications
Multitask Learning
  • R. Caruana
  • Computer Science
    Encyclopedia of Machine Learning and Data Mining
  • 1 May 1998
TLDR
Suggestions for how to get the most out of multitask learning in artificial neural nets are presented, an algorithm for multitask learning with case-based methods like k-nearest neighbor and kernel regression is presented, and algorithms for multitask learning in decision trees are sketched.
Do Deep Nets Really Need to be Deep?
TLDR
This paper empirically demonstrates that shallow feed-forward nets can learn the complex functions previously learned by deep nets and achieve accuracies previously only achievable with deep models.
An empirical comparison of supervised learning algorithms
TLDR
A large-scale empirical comparison of ten supervised learning methods is presented: SVMs, neural nets, logistic regression, naive Bayes, memory-based learning, random forests, decision trees, bagged trees, boosted trees, and boosted stumps.
Predicting good probabilities with supervised learning
We examine the relationship between the predictions made by different learning algorithms and true posterior probabilities. We show that maximum margin methods such as boosted trees and boosted …
Model compression
TLDR
This work presents a method for "compressing" large, complex ensembles into smaller, faster models, usually without significant loss in performance.
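As a rough illustration of the compression idea (the toy teacher function, variable names, and training setup below are all hypothetical, not the paper's actual experiments): a large, slow model labels an unlabeled transfer set with its soft predictions, and a small, fast student model is then fit to those soft targets instead of hard 0/1 labels.

```python
import numpy as np

# Hypothetical sketch of model compression: a "teacher" labels an
# unlabeled transfer set with soft probabilities, and a small
# "student" is trained to mimic those soft predictions.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                 # unlabeled transfer set

def teacher(X):
    # Stand-in for a large, slow ensemble.
    return 1 / (1 + np.exp(-(3 * X[:, 0] - 2 * X[:, 1])))

soft = teacher(X)                              # soft targets, not hard labels

# Student: plain logistic regression fit to the soft targets
# by gradient descent on the cross-entropy loss.
w = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - soft) / len(X)       # cross-entropy gradient

# Fraction of inputs where student and teacher agree on the label.
p = 1 / (1 + np.exp(-X @ w))
agreement = np.mean((p > 0.5) == (soft > 0.5))
```

The student here has far fewer effective parameters than a real ensemble would, yet ends up agreeing with the teacher on almost all of the transfer set, which is the sense in which the ensemble is "compressed".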
Ensemble selection from libraries of models
TLDR
A method for constructing ensembles from libraries of thousands of models using forward stepwise selection, optimized to a performance metric such as accuracy, cross entropy, mean precision, or ROC area, is presented.
Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
TLDR
This work presents two case studies where high-performance generalized additive models with pairwise interactions (GA2Ms) are applied to real healthcare problems yielding intelligible models with state-of-the-art accuracy.