Focused learning promotes continual task performance in humans

Timo Flesch, Jan Balaguer, Ronald Dekker, Hamed Nili, and Christopher Summerfield

Humans can learn to perform multiple tasks in succession over the lifespan ("continual" learning), whereas current machine learning systems fail. Here, we investigated the cognitive mechanisms that permit successful continual learning in humans. Unlike neural networks, humans who were trained on temporally autocorrelated task objectives (focused training) learned to perform new tasks more effectively, and performed better on a later test involving randomly interleaved tasks. Analysis of error…
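The training regimes contrasted in the abstract (focused, i.e. temporally blocked, vs. randomly interleaved) amount to different orderings of the same trials. A minimal sketch under assumed, hypothetical task labels "A" and "B" (the function names are illustrative, not from the paper):

```python
import random

def focused_curriculum(tasks, trials_per_task):
    """Blocked ('focused') training: all trials of one task, then the next."""
    return [t for t in tasks for _ in range(trials_per_task)]

def interleaved_curriculum(tasks, trials_per_task, seed=0):
    """Interleaved training: the same trials, presented in random order."""
    schedule = focused_curriculum(tasks, trials_per_task)
    random.Random(seed).shuffle(schedule)
    return schedule

blocked = focused_curriculum(["A", "B"], 3)   # ['A', 'A', 'A', 'B', 'B', 'B']
mixed = interleaved_curriculum(["A", "B"], 3)
```

Both schedules contain identical trials; only the temporal autocorrelation of the task objective differs, which is the manipulation the study reports as affecting later continual-learning performance.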
1 Citation
A neural network walks into a lab: towards using deep nets as models for human behavior
It is argued that methods for assessing the goodness of fit between DNN models and human behavior have to date been impoverished, and cognitive science might have to start using more complex tasks, but doing so might be beneficial for DNN-independent reasons as well.


Human category learning.
Results from four different kinds of category-learning tasks provide strong evidence that human category learning is mediated by multiple, qualitatively distinct systems.
Overcoming catastrophic forgetting in neural networks
It is shown that it is possible to overcome this limitation of connectionist models and train networks that can maintain expertise on tasks they have not experienced for a long time, by selectively slowing down learning on the weights important for previous tasks.
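The mechanism summarized above — selectively slowing learning on weights important for earlier tasks — can be sketched as a quadratic penalty on movement away from the old weights, scaled by a per-weight importance estimate. This is a minimal illustration, not the paper's implementation; the variable names and toy numbers are assumptions:

```python
import numpy as np

def importance_penalty(weights, old_weights, importance, lam=1.0):
    """Quadratic penalty that makes it costly to move weights deemed
    important for previous tasks (toy sketch of selective slowing)."""
    return 0.5 * lam * np.sum(importance * (weights - old_weights) ** 2)

# Toy example: weight 0 was important for task A, weight 1 was not.
w_old = np.array([1.0, -0.5])       # weights after learning task A
importance = np.array([10.0, 0.1])  # per-weight importance estimates
w_new = np.array([1.2, 0.5])        # candidate weights while learning task B

# Moving the important weight a little costs more than moving the
# unimportant weight a lot.
penalty = importance_penalty(w_new, w_old, importance)
```

Adding such a penalty to the task-B loss biases gradient descent toward solutions that preserve task-A performance, which is the intuition behind the approach described in the summary.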
Optimal Teaching for Limited-Capacity Human Learners
This work applies a machine teaching procedure to a cognitive model that is either limited-capacity (as humans are) or unlimited-capacity (as most machine learning systems are), and finds that the machine teacher recommends idealized training sets, and that human learners perform best when training recommendations are based on the limited-capacity model.
Human-level control through deep reinforcement learning
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
When does fading enhance perceptual category learning?
  • H. Pashler, M. Mozer
  • Psychology
    Journal of experimental psychology. Learning, memory, and cognition
  • 2013
It is argued that fading should have practical utility in naturalistic category learning tasks, which involve extremely high dimensional stimuli and many irrelevant dimensions.
Catastrophic forgetting in connectionist networks
  • R. French
  • Computer Science
    Trends in Cognitive Sciences
  • 1999
What Learning Systems do Intelligent Agents Need? Complementary Learning Systems Theory Updated
The importance of mixed selectivity in complex cognitive tasks
It is shown that mixed selectivity neurons encode distributed information about all task-relevant aspects, so that each aspect can be decoded from the population of neurons even when single-cell selectivity to that aspect is eliminated.
Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory.
The account presented here suggests that memories are first stored via synaptic changes in the hippocampal system, that these changes support reinstatement of recent memories in the neocortex, that neocortical synapses change a little on each reinstatement, and that remote memory is based on accumulated neocortical changes.
Connectionist models of recognition memory: constraints imposed by learning and forgetting functions.
  • R. Ratcliff
  • Psychology, Computer Science
    Psychological review
  • 1990
The problems discussed place limitations on connectionist models applied to human memory and to tasks where the information to be learned is not all available during learning.