One Citation
Auxiliary Learning with Joint Task and Data Scheduling
- Computer Science · ICML
- 2022
The JTDS model captures joint task-data importance through a task-data scheduler, which maps task, feature, and label information to the schedule in a parameter-efficient way, and formulates the scheduler and the task learning process as a bi-level optimization problem.
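As a rough illustration of what such a bi-level formulation looks like (the notation below is assumed for exposition, not taken from the paper), a scheduler with parameters $\phi$ weights each task-example pair in the inner training objective, while the outer level tunes $\phi$ against a validation loss:

\begin{aligned}
\min_{\phi} \;& \mathcal{L}_{\mathrm{val}}\bigl(\theta^{*}(\phi)\bigr) \\
\text{s.t.}\;& \theta^{*}(\phi) = \arg\min_{\theta} \sum_{k}\sum_{i} w_{\phi}(t_k, x_i, y_i)\,\ell\bigl(f_{\theta}(x_i; t_k),\, y_i\bigr)
\end{aligned}

Here $w_{\phi}$ is the scheduler's importance weight for auxiliary task $t_k$ on example $(x_i, y_i)$ and $\theta$ are the model parameters; this is a generic sketch of joint task and data scheduling, not the exact objective of JTDS.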
References
Taking Human out of Learning Applications: A Survey on Automated Machine Learning
- Computer Science · ArXiv
- 2018
An up-to-date survey on AutoML that proposes a general AutoML framework, one that not only covers most existing approaches to date but can also guide the design of new methods.
A Survey on Contrastive Self-supervised Learning
- Computer Science · Technologies
- 2020
This paper provides an extensive review of self-supervised methods that follow the contrastive approach, explaining the pretext tasks commonly used in a contrastive learning setup and the different architectures that have been proposed so far.
Self-Supervised Learning: Generative or Contrastive
- Computer Science · IEEE Transactions on Knowledge and Data Engineering
- 2023
This survey reviews new self-supervised methods for representation learning in computer vision, natural language processing, and graph learning, grouped into generative, contrastive, and generative-contrastive approaches.
Continual Lifelong Learning with Neural Networks: A Review
- Computer Science · Neural Networks
- 2019
Meta-Learning in Neural Networks: A Survey
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
A new taxonomy is proposed that provides a more comprehensive breakdown of the space of meta-learning methods today, and promising applications and successes of meta-learning, such as few-shot learning and reinforcement learning, are surveyed.
A Survey on Curriculum Learning
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
Insights on the relationships connecting curriculum learning (CL) with other machine learning concepts, including transfer learning, meta-learning, continual learning, and active learning, are presented, pointing out open challenges in CL as well as potential future research directions that deserve further investigation.
ScreenerNet: Learning Self-Paced Curriculum for Deep Neural Networks
- Computer Science
- 2018
It is shown that networks augmented with ScreenerNet achieve earlier convergence with better accuracy than state-of-the-art curriculum learning methods in extensive experiments on three popular vision datasets: MNIST, CIFAR10, and Pascal VOC2012.
Experimenting to Bootstrap Self-Regulated Learning
- Education, Psychology
- 1997
Modern theories of cognitive and constructive learning portray students as agents who set and pursue goals. More effective students select among cognitive tactics they use to approach goals and learn…
Overcoming catastrophic forgetting in neural networks
- Computer Science · Proceedings of the National Academy of Sciences
- 2017
It is shown that it is possible to overcome the limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
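The quoted mechanism, selectively slowing down learning on important weights, is the elastic weight consolidation (EWC) penalty; in its standard form, training on a new task B adds a quadratic anchor around the weights learned for an old task A:

\mathcal{L}(\theta) = \mathcal{L}_{B}(\theta) + \sum_{i} \frac{\lambda}{2}\, F_{i}\,\bigl(\theta_{i} - \theta^{*}_{A,i}\bigr)^{2}

where $\mathcal{L}_{B}$ is the new task's loss, $F_{i}$ is the diagonal Fisher information estimated on task A (a proxy for how important weight $\theta_{i}$ was), $\theta^{*}_{A}$ are the weights after training on A, and $\lambda$ trades off old-task retention against new-task learning.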
Mastering the game of Go without human knowledge
- Computer Science · Nature
- 2017
An algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond the game rules, is introduced; it achieves superhuman performance, winning 100–0 against the previously published, champion-defeating AlphaGo.