Self-directed Machine Learning

Wenwu Zhu, Xin Wang, Pengtao Xie

Auxiliary Learning with Joint Task and Data Scheduling

The JTDS model captures joint task-data importance through a task-data scheduler, which maps task, feature, and label information to a schedule in a parameter-efficient way, and formulates the scheduler and the task-learning process as a bi-level optimization problem.
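The bi-level structure can be illustrated with a minimal sketch: an inner problem fits model parameters under a scheduler-weighted training loss, while an outer problem tunes the scheduler against a validation objective. Everything below (the two quadratic "tasks", the validation target, the finite-difference hypergradient) is a hypothetical toy, not the JTDS formulation itself:

```python
import numpy as np

def inner_solution(s):
    """Closed-form minimizer of the scheduler-weighted training loss
    s * (w - 1)^2 + (1 - s) * (w - 3)^2 over two hypothetical tasks."""
    return 3.0 - 2.0 * s

def val_loss(w):
    return (w - 1.5) ** 2    # hypothetical validation objective

s = 0.5                      # scheduler parameter (weight on task 1)
for _ in range(300):
    eps = 1e-4               # finite-difference hypergradient through the inner solver
    g = (val_loss(inner_solution(s + eps)) - val_loss(inner_solution(s - eps))) / (2 * eps)
    s = float(np.clip(s - 0.05 * g, 0.0, 1.0))
```

The outer loop drives s toward 0.75, the scheduling weight whose inner solution w = 1.5 minimizes the validation loss.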

Taking Human out of Learning Applications: A Survey on Automated Machine Learning

An up-to-date survey on AutoML that proposes a general AutoML framework which not only covers most existing approaches to date but can also guide the design of new methods.

A Survey on Contrastive Self-supervised Learning

This paper provides an extensive review of self-supervised methods that follow the contrastive approach, explaining commonly used pretext tasks in a contrastive learning setup, followed by different architectures that have been proposed so far.
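A common objective in this contrastive setup is the InfoNCE loss, which scores an anchor representation against one positive and several negatives. The sketch below is a minimal NumPy version; the function name and temperature value are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: pull the anchor toward its positive, push it from negatives."""
    def norm(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)  # cosine similarity
    a, p, n = norm(anchor), norm(positive), norm(negatives)
    logits = np.concatenate(([a @ p], n @ a)) / temperature   # positive pair first
    logits -= logits.max()                                    # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])  # cross-entropy with the positive as the target class
```

Representations that place the anchor near its positive and far from its negatives drive this loss toward zero.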

Continual Lifelong Learning with Neural Networks: A Review

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
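The full algorithm differentiates through the inner adaptation step; a first-order variant (often called FOMAML) drops that second-order term and is easy to sketch. The toy below meta-learns an initialization for 1-D linear regression over a hypothetical family of tasks indexed by their slopes:

```python
import numpy as np

rng = np.random.default_rng(0)

def task_grad(w, slope):
    """Gradient of mean squared error for the model y = w * x on one task."""
    x = rng.uniform(-1.0, 1.0, 32)            # fresh mini-batch from the task
    return np.mean(2.0 * (w - slope) * x * x)

def fomaml(slopes, meta_lr=0.1, inner_lr=0.05, steps=500):
    w = 0.0                                    # meta-learned initialization
    for _ in range(steps):
        meta_g = 0.0
        for s in slopes:
            w_adapt = w - inner_lr * task_grad(w, s)   # one inner adaptation step
            meta_g += task_grad(w_adapt, s)            # first-order outer gradient
        w -= meta_lr * meta_g / len(slopes)
    return w
```

With symmetric quadratic tasks of slopes 1 and 3, the meta-initialization settles midway between the two task optima, at 2.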

Meta-Learning in Neural Networks: A Survey

A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes of meta-learning, such as few-shot learning and reinforcement learning, are surveyed.

Self-supervised Learning: Generative or Contrastive

This survey examines new self-supervised learning methods for representation learning in computer vision, natural language processing, and graph learning, and comprehensively reviews the existing empirical methods in three main categories according to their objectives.

A Survey on Curriculum Learning

The insights on the relationships connecting CL and other machine learning concepts including transfer learning, meta-learning, continual learning and active learning, etc., are presented to point out challenges in CL as well as potential future research directions deserving further investigations.

ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks

Extensive experiments on three popular vision datasets (MNIST, CIFAR10, and Pascal VOC2012) show that networks augmented with ScreenerNet converge earlier and achieve better accuracy than state-of-the-art curriculum-learning methods.
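ScreenerNet learns per-sample weights with an auxiliary network; as a simpler point of comparison, the classic self-paced rule admits only samples whose current loss falls below a growing threshold. The sketch below applies that classic rule to a hypothetical scalar regression with one outlier, and is not ScreenerNet itself:

```python
import numpy as np

def self_paced_train(x, y, threshold=5.0, growth=1.3, rounds=5, lr=0.1, steps=100):
    """Scalar regression y ~ w * x, training on the 'easy' samples first."""
    w = 0.0
    for _ in range(rounds):
        for _ in range(steps):
            losses = (w * x - y) ** 2
            v = (losses < threshold).astype(float)   # admit only easy samples
            if v.sum() == 0:
                break
            grad = np.sum(v * 2.0 * (w * x - y) * x) / v.sum()
            w -= lr * grad
        threshold *= growth                          # let harder samples in later
    return w
```

On x = (1, 1, 1, 1, 1), y = (2, 2, 2, 2, 10), the outlier's loss stays above the threshold throughout, so the fit settles near w = 2 instead of the contaminated least-squares value 3.6.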

Experimenting to Bootstrap Self-Regulated Learning

Modern theories of cognitive and constructive learning portray students as agents who set and pursue goals. More effective students select among cognitive tactics they use to approach goals and learn.

Overcoming catastrophic forgetting in neural networks

It is shown that it is possible to overcome this limitation of connectionist models and train networks that maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
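The mechanism described (elastic weight consolidation) adds a quadratic penalty that anchors the weights important for task A while training on task B. A minimal sketch, with hypothetical quadratic task losses and a hand-chosen Fisher diagonal standing in for the weight-importance estimates:

```python
import numpy as np

theta_a = np.array([1.0, 0.0])   # weights found on task A
fisher  = np.array([5.0, 0.1])   # per-weight importance for task A (Fisher diagonal)
lam, lr = 1.0, 0.05

theta = theta_a.copy()
for _ in range(200):
    grad_b = 2.0 * (theta - np.array([0.0, 1.0]))   # task B pulls toward (0, 1)
    penalty = lam * fisher * (theta - theta_a)      # EWC anchor toward task A
    theta -= lr * (grad_b + penalty)
```

The important weight barely moves (it ends near 0.71 rather than task B's optimum of 0), while the unimportant weight adapts almost fully (near 0.95), which is exactly the selective slowing the abstract describes.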