Toward a 'Standard Model' of Machine Learning

@article{Hu2021TowardA,
  title={Toward a 'Standard Model' of Machine Learning},
  author={Zhiting Hu and Eric P. Xing},
  journal={Harvard Data Science Review},
  year={2021}
}
  • Zhiting Hu, Eric P. Xing
  • Published 17 August 2021
  • Computer Science
  • Harvard Data Science Review
Machine learning (ML) is about computational methods that enable machines to learn concepts from experience. In handling a wide variety of experience, ranging from data instances, knowledge, and constraints to rewards, adversaries, and lifelong interaction in an ever-growing spectrum of tasks, contemporary ML/AI (artificial intelligence) research has resulted in a multitude of learning paradigms and methodologies. Despite the continual progress on all different fronts, the disparate narrowly…


References

Showing 1–10 of 102 references

Toward a Unified Science of Machine Learning

This editorial examines seven dichotomies that have emerged in recent years to partition the field of machine learning, and argues that long-term progress will occur only if these apparently competing views can be unified into a coherent whole.

Model-based machine learning

  • Christopher M. Bishop
  • Computer Science
    Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
  • 2013
It is shown how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and a large-scale commercial application of this framework involving tens of millions of users is outlined.
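
As a concrete, purely illustrative instance of the model-based recipe described above, the Python sketch below specifies a two-node Bayesian network and answers a query by exact enumeration. The structure, variable names, and probabilities are hypothetical, not taken from the paper.

    # Minimal sketch of model-based ML: a two-node Bayesian network
    # (Rain -> WetGrass) with inference by exact enumeration.
    # Structure and numbers are illustrative only.

    P_rain = {True: 0.2, False: 0.8}             # prior P(Rain)
    P_wet_given_rain = {True: 0.9, False: 0.1}   # P(WetGrass=True | Rain)

    def posterior_rain_given_wet():
        """Compute P(Rain=True | WetGrass=True) by enumerating the joint."""
        joint = {r: P_rain[r] * P_wet_given_rain[r] for r in (True, False)}
        evidence = sum(joint.values())           # P(WetGrass=True)
        return joint[True] / evidence

    if __name__ == "__main__":
        print(f"P(Rain | WetGrass) = {posterior_rain_given_wet():.3f}")

The point of the model-based view is that the modeling step (the two tables above) is separate from the inference step (the enumeration), so either can be swapped out independently.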

Learning in Implicit Generative Models

This work develops likelihood-free inference methods and highlights hypothesis testing as a principle for learning in implicit generative models, from which it derives the objective function used by GANs as well as many other related objectives.
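
The likelihood-free, hypothesis-testing view rests on the standard density-ratio trick: a Bayes-optimal classifier D trained to separate data samples from model samples (with equal class priors) recovers the ratio of the two densities, from which GAN-style objectives follow. In symbols:

    D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_\theta(x)},
    \qquad
    \frac{p_{\mathrm{data}}(x)}{p_\theta(x)} = \frac{D^*(x)}{1 - D^*(x)}.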

Connecting the Dots Between MLE and RL for Sequence Generation

A generalized entropy regularized policy optimization formulation is presented, and it is shown that the apparently distinct algorithms can all be reformulated as special instances of the framework, with the only difference being the configurations of a reward function and a couple of hyperparameters.
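
The summary above can be made concrete with a generic entropy-regularized policy optimization objective; the exact reward convention and coefficient placement in the paper may differ, so treat this as a sketch:

    \min_{q,\theta}\; -\,\mathbb{E}_{q(y)}\!\left[R(y)\right] \;-\; \alpha\,\mathrm{H}(q) \;+\; \beta\,\mathrm{KL}\!\big(q(y)\,\|\,p_\theta(y)\big).

Roughly, taking R to be a delta reward that is nonzero only on the observed target sequences (with suitable α, β) collapses this to maximum-likelihood training, while a task reward with other settings recovers policy-gradient RL; the claim is that the apparently distinct algorithms differ only in such configurations.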

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
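
The adversarial process summarized above corresponds to the familiar two-player minimax objective, with D trained to tell data from samples and G trained to fool it:

    \min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right] \;+\; \mathbb{E}_{z \sim p_z}\!\left[\log\big(1 - D(G(z))\big)\right].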

A decision-theoretic generalization of on-line learning and an application to boosting

The model studied can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting, and it is shown that the multiplicative weight-update Littlestone–Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems.
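
A minimal Python sketch of the multiplicative weight-update (Hedge) rule described in the snippet; the loss matrix and parameter values are illustrative only.

    import numpy as np

    def hedge(loss_matrix, beta=0.9):
        """Multiplicative-weights (Hedge) sketch: keep one weight per expert
        and shrink it by beta**loss after each round.  loss_matrix[t, i] is
        the loss of expert i at round t, assumed to lie in [0, 1]."""
        n_rounds, n_experts = loss_matrix.shape
        weights = np.ones(n_experts)
        total_loss = 0.0
        for t in range(n_rounds):
            probs = weights / weights.sum()           # distribution over experts
            total_loss += float(probs @ loss_matrix[t])  # expected loss this round
            weights *= beta ** loss_matrix[t]         # multiplicative update
        return weights / weights.sum(), total_loss

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        losses = rng.random((100, 5))                 # toy losses, 100 rounds, 5 experts
        final_probs, total = hedge(losses)
        print(final_probs, total)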

Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense

Learning Data Manipulation for Augmentation and Weighting

This work builds on a recent connection between supervised learning and reinforcement learning, adapting an off-the-shelf reward-learning algorithm from RL for joint data-manipulation learning and model training; the resulting algorithms significantly improve image and text classification performance in low-data and class-imbalance regimes.
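
In the same spirit, the sketch below shows one generic way to weight training examples by a reward-like signal: upweighting examples whose gradients align with the validation gradient. This is a simplified illustration under assumed details (linear logistic model, one-step weight heuristic), not the paper's exact algorithm.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad_logistic(w, x, y):
        """Per-example gradient of the logistic loss for a linear model."""
        return (sigmoid(x @ w) - y) * x

    def reweight_step(w, X_train, y_train, X_val, y_val, lr=0.1):
        """One weighted gradient step where per-example weights are set by how
        well each training gradient agrees with the mean validation gradient."""
        g_train = np.stack([grad_logistic(w, x, y) for x, y in zip(X_train, y_train)])
        g_val = np.mean([grad_logistic(w, x, y) for x, y in zip(X_val, y_val)], axis=0)
        scores = np.maximum(0.0, g_train @ g_val)     # alignment acts as a reward signal
        if scores.sum() > 0:
            weights = scores / scores.sum()
        else:
            weights = np.full(len(X_train), 1.0 / len(X_train))
        w_new = w - lr * (weights[:, None] * g_train).sum(axis=0)
        return w_new, weights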

A Convex Duality Framework for GANs

This work develops a convex duality framework for analyzing GANs, and proves that the proposed hybrid divergence changes continuously with the generative model, which suggests regularizing the discriminator's Lipschitz constant in f-GAN and vanilla GAN.

The master algorithm: how the quest for the ultimate learning machine will remake our world

This book presents the past, present, and future of the different types of machine learning algorithms, and suggests combining the best of each “tribe” into a single universal learning algorithm, able to learn any problem: the master algorithm.
...