Corpus ID: 3854508

Fractal AI: A fragile theory of intelligence

@article{Cerezo2018FractalAA,
  title={Fractal AI: A fragile theory of intelligence},
  author={Sergio Hernandez Cerezo and Guillem Duran Ballester},
  journal={ArXiv},
  year={2018},
  volume={abs/1803.05049}
}
Fractal AI is a theory of general artificial intelligence. It allows the derivation of new mathematical tools that constitute the foundations of a new kind of stochastic calculus, by modelling information using cellular-automaton-like structures instead of smooth functions. In the accompanying repository we present a new agent, derived from the first principles of the theory, which is capable of solving Atari games several orders of magnitude more efficiently than other similar techniques, like…
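The swarm-based exploration behind this agent can be sketched on a toy task. Everything below is illustrative, not the authors' implementation: the environment, the parameters, and the exact cloning rule are assumptions made for the sketch. The idea is that a population of walkers explores by random perturbations, and walkers with a low "virtual reward" (score weighted by distance to a random companion, so diversity counts) clone onto better-placed companions.

```python
import random

def fmc_sketch(n_walkers=50, n_steps=100, seed=0):
    """Toy sketch of a swarm-of-walkers search (illustrative only)."""
    rng = random.Random(seed)
    # Hypothetical toy environment: maximize x on a 1D line, starting at 0.
    positions = [0.0] * n_walkers
    for _ in range(n_steps):
        # Perturb: each walker takes a random action.
        positions = [x + rng.uniform(-1.0, 1.0) for x in positions]
        # Score is the position; distance is measured to a random companion.
        companions = [rng.randrange(n_walkers) for _ in range(n_walkers)]
        lo, hi = min(positions), max(positions)
        span = (hi - lo) or 1.0
        rel_score = [(x - lo) / span for x in positions]
        dist = [abs(positions[i] - positions[companions[i]])
                for i in range(n_walkers)]
        dhi = max(dist) or 1.0
        # Virtual reward balances exploitation (score) and diversity (distance).
        virtual_reward = [rel_score[i] * dist[i] / dhi for i in range(n_walkers)]
        # Clone: a walker jumps to its companion with probability proportional
        # to how much better the companion's virtual reward is.
        new_positions = list(positions)
        for i in range(n_walkers):
            j = companions[i]
            vi, vj = virtual_reward[i], virtual_reward[j]
            p = (vj - vi) / vi if vi > 0 else 1.0
            if rng.random() < p:
                new_positions[i] = positions[j]
        positions = new_positions
    return max(positions)

best = fmc_sketch()
```

With cloning biased toward high-score, well-spread walkers, the swarm drifts up the toy objective without any gradient information.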
Citations

Solving Atari Games Using Fractals And Entropy
Fractal Monte Carlo, a novel MCTS-based approach derived from the laws of thermodynamics, allows us to create an agent that takes intelligent actions in both continuous and discrete environments while providing control over every aspect of the agent's behavior.
Go-Explore: a New Approach for Hard-Exploration Problems
A new algorithm, Go-Explore, exploits the following principles: remember previously visited states, solve simulated environments through any available means, and robustify via imitation learning. This results in a dramatic performance improvement on hard-exploration problems.

References

Showing 1–10 of 26 references
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
This work explores the use of Evolution Strategies (ES), a class of black-box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and policy gradients, and highlights several advantages of ES as a black-box optimization technique.
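The black-box gradient estimator at the heart of ES can be illustrated on a one-dimensional toy objective. The function, step sizes, and population size below are invented for the sketch; the paper's version is distributed and operates on neural-network parameters.

```python
import random

def es_optimize(f, theta=0.0, sigma=0.1, alpha=0.05, pop=50, iters=200, seed=0):
    """Maximize f by evolution strategies: sample Gaussian perturbations,
    then move theta along the reward-weighted average of the noise."""
    rng = random.Random(seed)
    for _ in range(iters):
        eps = [rng.gauss(0.0, 1.0) for _ in range(pop)]
        rewards = [f(theta + sigma * e) for e in eps]
        baseline = sum(rewards) / pop  # mean-reward baseline reduces variance
        grad = sum((r - baseline) * e
                   for r, e in zip(rewards, eps)) / (pop * sigma)
        theta += alpha * grad  # ascend the estimated reward gradient
    return theta

# Toy objective: maximized at theta = 3.
theta = es_optimize(lambda t: -(t - 3.0) ** 2)
```

No backpropagation or value function is needed; only black-box reward evaluations, which is what makes the method embarrassingly parallel.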
Human-level control through deep reinforcement learning
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent capable of learning to excel at a diverse array of challenging tasks.
Causal entropic forces
A causal generalization of entropic forces is found that can cause two defining behaviors of the human "cognitive niche", tool use and social cooperation, to spontaneously emerge in simple physical systems.
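The central quantity of that paper can be written as a force proportional to the gradient of causal path entropy. This is a sketch from memory of the form given by Wissner-Gross and Freer, so the symbols should be checked against the original:

\[
F(X_0, \tau) = T_c \, \nabla_{X_0} S_c(X_0, \tau)
\]

where \(S_c(X_0, \tau)\) is the entropy of the distribution of feasible paths through configuration space starting at \(X_0\) over time horizon \(\tau\), and \(T_c\) is a causal-path temperature setting the force's strength. The system is driven toward states that keep the largest diversity of futures open, which is the behavioral link to Fractal AI's entropy-seeking walkers.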
The Arcade Learning Environment: An Evaluation Platform for General Agents (Extended Abstract)
The promise of ALE is illustrated by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning, and an evaluation methodology made possible by ALE is proposed.
Classical Planning with Simulators: Results on the Atari Video Games
Empirical results over 54 Atari games show that the simplest such algorithm performs at the level of UCT, the state-of-the-art planning method in this domain, and suggest the potential of width-based methods for planning with simulators when factored, compact action models are not available.
Blind Search for Atari-Like Online Planning Revisited
Planning effectiveness can be further improved by treating online planning for the Atari games as a multi-armed-bandit-style competition between the various actions available at the state planned for, rather than purely as a classical-planning-style action-sequence optimization problem.
Deep Learning for Real-Time Atari Game Play Using Offline Monte-Carlo Tree Search Planning
The central idea is to use slow planning-based agents to provide training data for a deep-learning architecture capable of real-time play; new agents based on this idea are proposed and shown to outperform DQN.
Monte Carlo Tree Search in Continuous Action Spaces with Execution Uncertainty
A new Monte Carlo tree search (MCTS) algorithm specifically designed to exploit an execution model in this setting is proposed, using kernel regression to generalize information about action quality between actions and to unexplored parts of the action space.
Noisy Networks for Exploration
Replacing the conventional exploration heuristics for A3C, DQN, and dueling agents with NoisyNet yields substantially higher scores on a wide range of Atari games, in some cases advancing the agent from sub-human to super-human performance.
A Survey of Monte Carlo Tree Search Methods
A survey of the literature on Monte Carlo tree search, intended to provide a snapshot of the state of the art after the first five years of MCTS research: it outlines the core algorithm's derivation, imparts some structure on the many variations and enhancements that have been proposed, and summarizes results from the key game and non-game domains.
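The selection rule at the core of UCT, the best-known MCTS variant covered by that survey, fits in a few lines. The child statistics and exploration constant below are illustrative values, not taken from any particular implementation:

```python
import math

def ucb1(total_value, visits, parent_visits, c=1.4):
    """Upper confidence bound (UCB1) score for one child of a tree node."""
    if visits == 0:
        return float("inf")  # unvisited children are always tried first
    exploit = total_value / visits                       # mean value so far
    explore = c * math.sqrt(math.log(parent_visits) / visits)
    return exploit + explore

def select_child(children):
    """children: list of (total_value, visits) pairs; returns the index
    of the child UCT would descend into next."""
    parent_visits = sum(v for _, v in children) or 1
    scores = [ucb1(t, v, parent_visits) for t, v in children]
    return max(range(len(children)), key=scores.__getitem__)
```

The exploration term shrinks as a child accumulates visits, so the search provably balances trying promising moves against sampling neglected ones.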