Development of swarm behavior in artificial learning agents that adapt to different foraging environments

@article{LpezIncera2020DevelopmentOS,
  title   = {Development of swarm behavior in artificial learning agents that adapt to different foraging environments},
  author  = {Andrea L{\'o}pez-Incera and Katja Ried and Thomas M{\"u}ller and Hans J. Briegel},
  journal = {PLoS ONE},
  year    = {2020},
  volume  = {15},
  number  = {12},
  pages   = {e0243628}
}
Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics. In this work, we apply Projective Simulation to model each individual as an artificial learning agent that interacts with its neighbors and surroundings in order to make decisions and learn from them. Within a reinforcement learning framework, we discuss one-dimensional learning scenarios where agents need to get to food resources… 
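To make the learning mechanic concrete, here is a minimal sketch of a two-layer Projective Simulation agent. The h-value update (damping toward 1, plus reward on the used percept-action edge) is the standard basic PS rule; the two-percept foraging toy, the reward scheme, and all parameter values are invented for illustration and are far simpler than the paper's multi-agent model.

```python
import numpy as np

class PSAgent:
    """Minimal two-layer Projective Simulation agent (percepts -> actions)."""

    def __init__(self, n_percepts, n_actions, gamma=0.01):
        self.h = np.ones((n_percepts, n_actions))  # unbiased initial h-values
        self.gamma = gamma                         # forgetting (damping) rate

    def act(self, percept):
        # Sample an action with probability proportional to its h-value.
        p = self.h[percept] / self.h[percept].sum()
        return np.random.choice(len(p), p=p)

    def learn(self, percept, action, reward):
        # Basic PS update: damp all h-values toward 1, then reinforce the
        # percept-action edge that was just used by the received reward.
        self.h -= self.gamma * (self.h - 1.0)
        self.h[percept, action] += reward

# Toy 1D foraging loop (invented for illustration): percept 0/1 says whether
# the nearest food lies to the left or right; actions 0/1 are "step left"
# and "step right"; reward 1 when the agent steps toward the food.
agent = PSAgent(n_percepts=2, n_actions=2)
rng = np.random.default_rng(0)
for _ in range(2000):
    percept = int(rng.integers(2))
    action = agent.act(percept)
    agent.learn(percept, action, reward=1.0 if action == percept else 0.0)

print(agent.h)  # h-values concentrate on the "move toward food" edges
```

After training, the diagonal h-values dominate, i.e. the agent has learned the deterministic "move toward food" policy; the paper's agents face the harder task of learning such responses within a swarm, from collective foraging outcomes.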

Honeybee communication during collective defence is shaped by predation

The approach is established as a powerful tool to explore how selection based on a collective outcome shapes individual responses, which remains a challenging issue in the field of evolutionary biology.

Collective Evolution Learning Model for Vision-Based Collective Motion with Collision Avoidance

In this work, a novel vision-based collective motion (CM) with collision avoidance (CA) model (VCMCA), which simulates a collective evolution learning process, is proposed; using nature-inspired genetic algorithms and reinforcement-learning methods, the agents successfully implement a collective behavior similar to the one encountered in nature.

Optimal foraging strategies can be learned and outperform Lévy walks

It is proved theoretically that maximizing rewards in the reinforcement learning model is equivalent to optimizing foraging efficiency, and it is shown with numerical experiments that the agents learn foraging strategies which outperform the efficiency of known strategies such as Lévy walks.
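As a rough illustration of that equivalence, the sketch below measures foraging efficiency (targets found per unit distance travelled) for a saltatory Lévy searcher on a one-dimensional ring: with a constant cost per unit of movement, cumulative reward per unit distance is exactly this efficiency, so a reward-maximizing learner is an efficiency-maximizing forager. The ring world, the landing-point-only detection rule, and all parameters are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_step(mu=2.0, l_min=1.0):
    # Power-law step length, P(l) ~ l^(-mu) for l >= l_min (1 < mu <= 3),
    # sampled by inverse transform of the Pareto survival function.
    return l_min * (1.0 - rng.random()) ** (-1.0 / (mu - 1.0))

def foraging_efficiency(mu, n_targets=200, world=1000.0, horizon=10_000):
    """Destructive foraging on a ring: efficiency = targets found / distance."""
    targets = set(rng.integers(0, int(world), n_targets).tolist())
    x, found, travelled = 0.0, 0, 0.0
    for _ in range(horizon):
        step = levy_step(mu) * rng.choice([-1.0, 1.0])
        x = (x + step) % world
        travelled += abs(step)
        cell = int(x)
        if cell in targets:        # reward event: +1 per target found
            targets.discard(cell)  # destructive: each target counts once
            found += 1
    return found / travelled

for mu in (1.5, 2.0, 2.5):
    print(f"mu = {mu}: efficiency = {foraging_efficiency(mu):.4f}")
```

A learned policy would replace the fixed exponent mu with state-dependent step choices; the paper's claim is that such learned strategies can beat every fixed-exponent Lévy walk.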

Reinforcement learning and decision making via single-photon quantum walks

A variational approach to quantizing projective simulation (PS), a reinforcement learning model aimed at interpretable artificial intelligence, is presented, and it is shown that the quantized PS learning model can outperform its classical counterpart.

References


The Physics of Foraging: An Introduction to Random Searches and Biological Encounters

Table of contents (excerpt): Part I. Introduction: Movement: 1. Empirical motivation for studying movement; 2. Statistical physics of biological motion; 3. Random walks and Lévy flights; 4. Wandering albatrosses. Part II. …

Hidden Markov Models for Time Series: An Introduction Using R

Chapter topics include the model, likelihood evaluation, parameter estimation by maximum likelihood, model checking, inferring the underlying state, models for a heterogeneous group of subjects, and other modifications or extensions; applications, such as one to caterpillar feeding behavior, appear at the end of most chapters.
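The book's worked examples are in R; as a sketch of its central computation (likelihood evaluation), here is the standard scaled forward recursion for an HMM, transcribed to Python. The function name and interface are invented for illustration.

```python
import numpy as np

def hmm_log_likelihood(delta, Gamma, log_dens):
    """Log-likelihood of an HMM via the scaled forward algorithm.

    delta    : (m,)  initial state distribution
    Gamma    : (m,m) transition probability matrix (rows sum to 1)
    log_dens : (T,m) log state-dependent density of each observation
    """
    alpha = delta * np.exp(log_dens[0])   # forward probabilities at t = 1
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()                  # rescale to avoid underflow
    for t in range(1, log_dens.shape[0]):
        alpha = (alpha @ Gamma) * np.exp(log_dens[t])
        c = alpha.sum()
        loglik += np.log(c)               # accumulate the scale factors
        alpha /= c
    return loglik
```

Maximum-likelihood parameter estimation then amounts to handing this function to a numerical optimizer over delta, Gamma, and the state-dependent density parameters.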

Understanding movements of organisms: it's time to abandon the Lévy foraging hypothesis

The Lévy walk model is unrealistic, especially as it omits directionality between successive steps, which results in lower foraging efficiency than other, more realistic models, and the evidence that organisms actually ‘do the Lévy walk’ is weak to non-existent.

Multimodel Inference

Various facets of such multimodel inference are presented here, particularly methods of model averaging, which can be derived as a non-Bayesian result.
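In the Burnham-Anderson framing, model averaging is typically carried out with Akaike weights, which follow directly from AIC differences; a minimal sketch, with hypothetical AIC scores and per-model predictions:

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights: w_i = exp(-0.5*delta_i) / sum_j exp(-0.5*delta_j)."""
    delta = np.asarray(aic, dtype=float) - np.min(aic)  # AIC differences
    w = np.exp(-0.5 * delta)
    return w / w.sum()

aic = [102.3, 100.1, 105.8]         # hypothetical AIC scores of three models
preds = np.array([1.9, 2.4, 1.2])   # hypothetical per-model predictions
w = akaike_weights(aic)
print(w)                            # relative support for each model
print((w * preds).sum())            # model-averaged prediction
```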

How many animals really do the Lévy walk?

Lévy walks are superdiffusive and scale-free random walks that have recently emerged as a new conceptual tool for modeling animal search paths; other movement patterns may be confounded with them because they present apparently heavy-tailed move-length distributions and superdiffusivity.
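The confound is easy to reproduce. The sketch below draws step lengths from a composite (two-mode exponential) walk, i.e. intensive search interleaved with extensive relocation; although no power law is present, at moderate distances its survival probability sits orders of magnitude above a single exponential's, which is why finite samples of it can pass naive power-law tail fits. Mixture weights and means are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Composite walk: 90% short "intensive" steps, 10% long "extensive" steps.
steps = np.where(rng.random(n) < 0.9,
                 rng.exponential(1.0, n),    # intensive mode, mean 1
                 rng.exponential(50.0, n))   # extensive mode, mean 50

# Compare the mixture's survival function with a single exponential of
# mean 1: the slowly decaying mixture curve is easily misfit by a power law.
for l in (1, 5, 10, 20, 50):
    print(f"l = {l:>3}   P(L>l) mixture: {(steps > l).mean():.5f}   "
          f"single exponential: {np.exp(-l):.2e}")
```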

Sociol. Methods Res. 33, 261 (2004)

Simul. (2021)

Learning to flock through reinforcement

It is shown that such velocity alignment may have naturally evolved as an adaptive behavior that aims at minimizing the rate of neighbor loss, and it is proved that this alignment not only favors (local) polar order but also corresponds to the best policy or strategy for keeping group cohesion when the sensory input is limited to the velocities of neighboring agents.
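For contrast with that learned strategy, here is a minimal Vicsek-style alignment sketch (an explicit rule, not learning): each agent adopts the mean heading of neighbors within a radius, plus angular noise. Staying aligned keeps neighbors inside the interaction range, which is precisely the neighbor-loss rate the learned policy is said to minimize. All parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

N, L, R, v, eta, steps = 100, 10.0, 1.0, 0.1, 0.2, 500
pos = rng.random((N, 2)) * L              # positions on a periodic square
theta = rng.uniform(-np.pi, np.pi, N)     # headings

for _ in range(steps):
    # Pairwise displacements with the minimum-image convention (periodic box).
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neigh = (d ** 2).sum(-1) < R ** 2     # neighbor mask (includes self)
    # Mean heading of neighbors, via the angle of the summed unit vectors,
    # plus uniform angular noise of amplitude eta * pi.
    s = (neigh * np.sin(theta)[None, :]).sum(1)
    c = (neigh * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(s, c) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v * np.column_stack((np.cos(theta), np.sin(theta)))) % L

# Polar order parameter: ~1 for an aligned flock, ~0 for disordered motion.
phi = np.hypot(np.sin(theta).mean(), np.cos(theta).mean())
print(f"polar order = {phi:.2f}")
```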
...