• Corpus ID: 235367721

Coarse-to-Fine Curriculum Learning

@article{Stretcu2021CoarsetoFineCL,
  title={Coarse-to-Fine Curriculum Learning},
  author={Otilia Stretcu and Emmanouil Antonios Platanios and Tom Michael Mitchell and Barnab{\'a}s P{\'o}czos},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.04072}
}
When faced with learning challenging new tasks, humans often follow sequences of steps that allow them to incrementally build up the necessary skills for performing these new tasks. However, in machine learning, models are most often trained to solve the target tasks directly. Inspired by human learning, we propose a novel curriculum learning approach which decomposes challenging tasks into sequences of easier intermediate goals that are used to pre-train a model before tackling the target task… 
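The decomposition described in the abstract can be sketched as a sequence of progressively finer label spaces: confusable fine-grained classes are merged into coarse clusters, the model is pre-trained on the coarsest task, and later stages restore the original labels. The sketch below is illustrative only (the function names and the toy 4-class hierarchy are assumptions, not the authors' code):

```python
# Hedged sketch of a coarse-to-fine curriculum: each stage is an auxiliary
# classification task obtained by merging fine-grained classes into coarser
# clusters; the final stage is the original (target) task.

def build_curriculum(hierarchy):
    """Given per-stage cluster assignments (coarsest first), return one
    fine-to-coarse label mapping per stage.

    hierarchy[s][c] is the coarse label of fine class c at stage s; the
    last stage is the identity mapping (the target task).
    """
    return [dict(enumerate(stage)) for stage in hierarchy]

def relabel(labels, mapping):
    """Project fine-grained labels onto a coarser stage's label space."""
    return [mapping[y] for y in labels]

# Hypothetical 4-class problem: classes 0/1 and 2/3 are easily confused,
# so stage 0 merges each confusable pair into a single coarse class.
hierarchy = [
    [0, 0, 1, 1],   # stage 0: two coarse classes
    [0, 1, 2, 3],   # stage 1: target task (identity)
]
curriculum = build_curriculum(hierarchy)

fine_labels = [0, 2, 3, 1]
print(relabel(fine_labels, curriculum[0]))  # coarse view: [0, 1, 1, 0]
print(relabel(fine_labels, curriculum[1]))  # target view: [0, 2, 3, 1]
```

In a full pipeline, a model would be trained on stage 0's relabeled data, then fine-tuned on each subsequent stage until it reaches the original label set.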


References

Showing 1-10 of 77 references

Curriculum learning of multiple tasks

TLDR
The experimental results show that learning multiple related tasks sequentially can be more effective than learning them jointly, that the order in which tasks are solved affects overall performance, and that the model can automatically discover a favourable order of tasks.

Competence-based Curriculum Learning for Neural Machine Translation

TLDR
A curriculum learning framework for NMT that reduces training time and the need for specialized heuristics or large batch sizes, improving the performance of both recurrent neural network models and Transformers.
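Competence-based curriculum learning gates training data by a growing "competence" value: at step t the model may only sample examples whose difficulty (expressed as a CDF value in [0, 1]) does not exceed its current competence. A minimal sketch of one such schedule, the square-root schedule, follows; the parameter values and helper names here are illustrative assumptions:

```python
import math

def competence(t, T, c0=0.01):
    """Square-root competence schedule: the fraction of the easiest
    training examples available at step t, out of T warm-up steps.
    c0 is the initial competence (fraction available at t = 0)."""
    return min(1.0, math.sqrt(t * (1 - c0 ** 2) / T + c0 ** 2))

def usable(difficulty_cdf, t, T):
    """An example is usable at step t if its difficulty CDF value does
    not exceed the current competence."""
    return difficulty_cdf <= competence(t, T)

# Early in training only the easiest ~1% of the data is available;
# by step T the entire training set is.
print(round(competence(0, 10000), 2))      # 0.01
print(round(competence(10000, 10000), 2))  # 1.0
```

Because competence grows smoothly from c0 to 1, hard examples are introduced gradually rather than via a hand-tuned staging heuristic.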

Autonomous Task Sequencing for Customized Curriculum Design in Reinforcement Learning

TLDR
This paper formulates curriculum design as a Markov Decision Process, which directly models the accumulation of knowledge as an agent interacts with tasks, and proposes a method that approximates the execution of an optimal policy in this MDP to produce an agent-specific curriculum.

Curriculum learning

TLDR
It is hypothesized that curriculum learning has both an effect on the speed of convergence of the training process to a minimum and on the quality of the local minima obtained: curriculum learning can be seen as a particular form of continuation method (a general strategy for global optimization of non-convex functions).

Reverse Curriculum Generation for Reinforcement Learning

TLDR
This work proposes a method to learn goal-oriented tasks without requiring any prior knowledge other than obtaining a single state in which the task is achieved, and generates a curriculum of start states that adapts to the agent's performance, leading to efficient training on goal-oriented tasks.

Data Parameters: A New Family of Parameters for Learning a Differentiable Curriculum

TLDR
This work introduces data parameters, which govern the importance of individual samples and classes during the learning process, and is the first curriculum learning method to show gains on large-scale image classification and detection tasks.

Self-Paced Curriculum Learning

TLDR
The missing link between CL and SPL is discovered, and a unified framework named self-paced curriculum learning (SPCL) is proposed, formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training.
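In self-paced learning, each example carries a weight v_i chosen to minimize the weighted loss plus a pacing regularizer; with the hard regularizer, the optimal weights have a simple closed form: include an example only while its loss is below an "age" threshold that grows over training. A minimal sketch (variable names are illustrative):

```python
def spl_weights(losses, lam):
    """Closed-form minimizer of the self-paced objective with the hard
    regularizer -lam * sum(v): include an example (v = 1) only if its
    current loss is below the age parameter lam, otherwise exclude it."""
    return [1.0 if loss < lam else 0.0 for loss in losses]

# As lam grows across epochs, harder (higher-loss) examples are admitted.
losses = [0.2, 1.5, 0.7, 3.0]
print(spl_weights(losses, lam=1.0))  # [1.0, 0.0, 1.0, 0.0]
print(spl_weights(losses, lam=2.0))  # [1.0, 1.0, 1.0, 0.0]
```

SPCL extends this scheme by constraining the weights with a predetermined curriculum region, so prior knowledge about example ordering and the model's own learning progress jointly determine which examples are active.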

MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels

TLDR
Experimental results demonstrate that the proposed technique of learning an auxiliary neural network, MentorNet, to supervise the training of a base deep network, StudentNet, can significantly improve the generalization performance of deep networks trained on corrupted training data.
...