Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts

@article{Schmidhuber2006DevelopmentalRO,
  title={Developmental robotics, optimal artificial curiosity, creativity, music, and the fine arts},
  author={J{\"u}rgen Schmidhuber},
  journal={Connection Science},
  year={2006},
  volume={18},
  pages={173--187}
}
Even in the absence of external reward, babies, scientists, and others explore their world. Using some sort of adaptive predictive world model, they improve their ability to answer questions such as: what happens if I do this or that? They lose interest both in predictable things and in those predicted to remain unpredictable despite some effort. One can design curious robots that do the same. The author’s basic idea (1990, 1991) for doing so is that a reinforcement learning (RL) controller is…
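The prediction-improvement idea from the abstract can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; the names `intrinsic_reward`, `predict`, and `update` are assumptions introduced here, and the linear model stands in for whatever adaptive world model the agent uses:

```python
import numpy as np

def intrinsic_reward(predict, update, x, y):
    """Curiosity reward: how much the world model's prediction of
    outcome y (for action/context x) improves after one learning step."""
    err_before = np.mean((predict(x) - y) ** 2)  # error of the current model
    update(x, y)                                 # let the adaptive model learn
    err_after = np.mean((predict(x) - y) ** 2)   # error after learning
    return err_before - err_after                # positive iff the model improved

# Toy adaptive world model: a linear predictor trained by one gradient step.
w = np.zeros(3)
predict = lambda x: x @ w
def update(x, y, lr=0.1):
    global w
    w = w + lr * (y - x @ w) * x

rng = np.random.default_rng(0)
x = rng.normal(size=3)
y = x @ np.array([1.0, -2.0, 0.5])  # a learnable regularity in the "world"
r = intrinsic_reward(predict, update, x, y)  # positive: the event was interesting
```

An agent maximizing such rewards over repeated visits matches the abstract's description: already-predictable events yield near-zero improvement, and irreducibly noisy events yield no lasting improvement, so both become boring.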
Simple Algorithmic Principles of Discovery, Subjective Beauty, Selective Attention, Curiosity & Creativity
TLDR
It is discussed how all of the above can be naturally implemented on computers, through an extension of passive unsupervised learning to the case of active data selection: the authors reward a general reinforcement learner for actions that improve the subjective compressibility of the growing data.
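The compression-progress reward described in this TLDR can be caricatured with an off-the-shelf compressor. In this sketch, zlib at two effort levels stands in for the agent's compressor before and after learning; this is a crude proxy of my own, not the paper's method, and `compression_progress` is a hypothetical helper:

```python
import zlib

def compression_progress(history: bytes, compress_old, compress_new) -> int:
    """Intrinsic reward for the growing data history: bytes saved on the
    same data once the agent's subjective compressor has improved."""
    return len(compress_old(history)) - len(compress_new(history))

# zlib effort levels as a stand-in for a compressor before/after learning.
weak = lambda d: zlib.compress(d, 1)
strong = lambda d: zlib.compress(d, 9)

regular = b"up-down-" * 1000  # history hiding a simple regularity
reward = compression_progress(regular, weak, strong)
```

Under this view the agent is rewarded for steering toward data whose description length it can still shrink: fully random data offers no progress, and fully understood data is already short.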
A Formal Theory of Creativity to Model the Creation of Art
According to the Formal Theory of Creativity (1990–2010), a creative agent—one that never stops generating non-trivial, novel, and surprising behaviours and data—must have two learning components: a…
Maximizing Fun by Creating Data with Easily Reducible Subjective Complexity
  • J. Schmidhuber
  • Computer Science
    Intrinsically Motivated Learning in Natural and Artificial Systems
  • 2013
TLDR
The Formal Theory of Fun and Creativity (1990–2010) describes principles of a curious and creative agent that never stops generating non-trivial, novel, and surprising tasks and data.
Intrinsically Motivated Learning in Natural and Artificial Systems
TLDR
This book introduces the concept of intrinsic motivation in artificial systems, reviews the relevant literature, offers insights from the neural and behavioural sciences, and presents novel tools for research.
Slowness learning for curiosity-driven agents
TLDR
The first contribution, called the incremental SFA (IncSFA), is a low-complexity, online algorithm that extracts slow features without storing any input data or estimating costly covariance matrices, thereby making it suitable to be used for several online learning applications.
Curiosity driven reinforcement learning for motion planning on humanoids
TLDR
This work embodies a curious agent in the complex iCub humanoid robot, the first embodied curious agent for real-time motion planning on a humanoid, and demonstrates that it can learn compact Markov models to represent large regions of the iCub's configuration space.
Innovation Engines: Automated Creativity and Improved Stochastic Optimization via Deep Learning
TLDR
The long-term vision for the Innovation Engine algorithm is described, which involves many technical challenges that remain to be solved and suggests that Innovation Engines could ultimately automate the production of endless streams of interesting solutions in any domain.
Formal Theory of Creativity, Fun, and Intrinsic Motivation (1990–2010)
  • J. Schmidhuber
  • Psychology
    IEEE Transactions on Autonomous Mental Development
  • 2010
TLDR
This overview first describes theoretically optimal (but not necessarily practical) ways of implementing the basic computational principles on exploratory, intrinsically motivated agents or robots, encouraging them to provoke event sequences exhibiting previously unknown, but learnable algorithmic regularities.
...

References

Showing 1–10 of 125 references
Introduction to developmental robotics
TLDR
This new rubric captures the essential features of many related, previous research agendas, including embodied cognition, evolutionary robotics and machine learning.
Exploring the predictable
TLDR
This work studies an embedded active learner that can limit its predictions to almost arbitrary computable aspects of spatio-temporal events; it constructs probabilistic algorithms that map event sequences to abstract internal representations (IRs) and predicts IRs from IRs computed earlier.
A possibility for implementing curiosity and boredom in model-building neural controllers
TLDR
It is described how the particular algorithm (as well as similar model-building algorithms) may be augmented by dynamic curiosity and boredom in a natural manner by introducing (delayed) reinforcement for actions that increase the model network's knowledge about the world.
Bootstrap learning of foundational representations
TLDR
The first steps toward learning an ontology of objects are taken, showing that a bootstrap learning robot can learn to individuate objects through motion, separating them from the static environment and from each other, and can learn properties useful for classification.
The Discovery of Communication
TLDR
A computational model and a robotic experiment are presented to articulate the hypothesis that children discover communication as a result of exploring and playing with their environment, and that the agent ends up interested in communication through vocal interactions without having a specific drive for it.
Intrinsically Motivated Learning of Hierarchical Collections of Skills
TLDR
Initial results from a computational study of intrinsically motivated learning aimed at allowing artificial agents to construct and extend hierarchies of reusable skills that are needed for competent autonomy are presented.
Reinforcement Learning: An Introduction
TLDR
This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.
Completely Self-referential Optimal Reinforcement Learners
TLDR
This work presents the first class of mathematically rigorous, general, fully self-referential, self-improving, optimal reinforcement learning systems, which not only boast an optimal order of complexity but can also optimally reduce any slowdowns hidden by the O()-notation, provided the utility of such speed-ups is provable at all.
Learning to Generate Artificial Fovea Trajectories for Target Detection
This paper shows how ‘static’ neural approaches to adaptive target detection can be replaced by a more efficient and more sequential alternative. The latter is inspired by the observation that…
...