Minimal neural network models for permutation invariant agents
@article{Pedersen2022MinimalNN, title={Minimal neural network models for permutation invariant agents}, author={Joachim Winther Pedersen and Sebastian Risi}, journal={Proceedings of the Genetic and Evolutionary Computation Conference}, year={2022} }
Organisms in nature have evolved to exhibit flexibility in the face of changes to the environment and/or to themselves. Artificial neural networks (ANNs) have proven useful for controlling artificial agents acting in environments. However, most ANN models used for reinforcement learning-type tasks have a rigid structure that does not allow for varying input sizes. Further, they fail catastrophically if inputs are presented in an ordering unseen during optimization. We find that these two ANN…
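The two properties the abstract mentions, handling a varying number of inputs and being unaffected by their ordering, can be illustrated with a small sketch. The following is a hypothetical NumPy example, not the architecture proposed in the paper: a single shared encoder is applied to every input element and the results are mean-pooled, so the controller accepts observations of any length and produces the same output when the inputs are shuffled.

```python
# Minimal sketch (assumed, not the paper's exact model) of a permutation
# invariant controller: a shared encoder processes each input element, and
# mean pooling aggregates them in an order-independent way.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: scalar input elements, hidden width 8, 2 action logits.
W_in = rng.normal(size=(1, 8))    # shared encoder weights (applied per element)
W_out = rng.normal(size=(8, 2))   # decoder from pooled features to action logits

def policy(obs):
    """obs: 1-D array of any length; returns action logits."""
    h = np.tanh(obs[:, None] @ W_in)   # encode each element independently
    pooled = h.mean(axis=0)            # order-independent aggregation
    return pooled @ W_out

obs = np.array([0.3, -1.2, 0.7, 0.1])
perm = rng.permutation(len(obs))
# Same logits whether or not the observation is permuted, and the same
# weights would work for an observation of a different length.
assert np.allclose(policy(obs), policy(obs[perm]))
```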
References
Showing 1-10 of 36 references
Evolving Plasticity for Autonomous Learning under Changing Environmental Conditions
- Computer Science, Evolutionary Computation
- 2021
This work uses a discrete representation to encode the learning rules in a finite search space, and employs genetic algorithms to optimize these rules to allow learning on two separate tasks (a foraging and a prey-predator scenario) in online lifetime learning settings.
Evolving and merging Hebbian learning rules: increasing generalization by decreasing the number of rules
- Computer Science, GECCO
- 2021
It is shown that by allowing multiple connections in the network to share the same local learning rule, it is possible to drastically reduce the number of trainable parameters, while obtaining a more robust agent.
Introducing Symmetries to Black Box Meta Reinforcement Learning
- Computer Science, ArXiv
- 2021
This paper develops a black-box meta RL system that exhibits certain symmetries (specifically the reuse of the learning rule, and invariance to input and output permutations) that are not present in typical black-box meta RL systems.
Generalization in Reinforcement Learning: Successful Examples Using Sparse Coarse Coding
- Computer Science, NIPS
- 1995
It is concluded that reinforcement learning can work robustly in conjunction with function approximators, and that there is little justification at present for avoiding the case of general λ.
Born to Learn: the Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
- Computer Science, Neural Networks
- 2018
Neuronlike adaptive elements that can solve difficult learning control problems
- Computer Science, IEEE Transactions on Systems, Man, and Cybernetics
- 1983
It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem and the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences.
On the Binding Problem in Artificial Neural Networks
- Computer Science, ArXiv
- 2020
This paper proposes a unifying framework that revolves around forming meaningful entities from unstructured sensory inputs (segregation), maintaining this separation of information at a representational level (representation), and using these entities to construct new inferences, predictions, and behaviors (composition).
Evolution Strategies as a Scalable Alternative to Reinforcement Learning
- Computer Science, ArXiv
- 2017
This work explores the use of Evolution Strategies (ES), a class of black-box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients, and highlights several advantages of ES as a black-box optimization technique.
Efficient memory-based learning for robot control
- Computer Science
- 1990
A method of learning is presented in which all the experiences in the lifetime of the robot are explicitly remembered, thus permitting very quick predictions of the effects of proposed actions and, given a goal behaviour, permitting fast generation of a candidate action.
Evolving Plastic Neural Networks for Online Learning: Review and Future Directions
- Computer Science, Australasian Conference on Artificial Intelligence
- 2012
Prior work in evolving plastic neural networks for online learning is reviewed, including problem domains and tasks, fitness functions, synaptic plasticity models and neural network encoding schemes, as well as addressing the "general" in general intelligence through the introduction of previously unseen tasks during the evolution process.