Modeling learning and adaptation processes in activity-travel choice: A framework and numerical experiment
Theo A. Arentze and Harry J. P. Timmermans

This paper develops a framework for modeling dynamic choice based on a theory of reinforcement learning and adaptation. According to this theory, individuals develop and continuously adapt choice rules while interacting with their environment. The proposed model framework specifies required components of learning systems including a reward function, incremental action value functions, and action selection methods. Furthermore, the system incorporates an incremental induction method that… 
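The components named above (a reward function, incremental action-value updates, and an action-selection method) can be illustrated with a minimal sketch. This is not the paper's model; it is a generic epsilon-greedy bandit learner with illustrative names, assuming two travel options with different average rewards.

```python
import random

def choose_action(values, epsilon=0.1):
    """Epsilon-greedy selection over a dict {action: estimated value}."""
    if random.random() < epsilon:
        return random.choice(list(values))   # explore a random option
    return max(values, key=values.get)       # exploit the best estimate

def update_value(values, counts, action, reward):
    """Incremental sample-average update: Q <- Q + (r - Q) / n."""
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]

# toy run: two hypothetical travel options with different mean rewards
random.seed(0)
values = {"route_A": 0.0, "route_B": 0.0}
counts = {"route_A": 0, "route_B": 0}
for _ in range(1000):
    a = choose_action(values)
    r = random.gauss(1.0 if a == "route_A" else 0.5, 0.1)
    update_value(values, counts, a, r)
```

After enough experience the value estimates converge toward the options' true mean rewards, and the agent selects the better option most of the time.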

Calibrating a New Reinforcement Learning Mechanism for Modeling Dynamic Activity-Travel Behavior and Key Events

The goal of the present study is to relax the assumption of a predefined activity-travel sequence and to allow the algorithm to determine the activity-travel sequence autonomously, as this aspect was previously also fixed within the schedule.

Implementing an Improved Reinforcement Learning Algorithm for the Simulation of Weekly Activity-Travel Sequences

The research presented here contributes to the current state of the art by formulating a framework for the simulation of individual activity-travel patterns, redesigning an existing reinforcement learning technique through the addition of a regression-tree function approximator.
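A regression tree as a function approximator replaces a tabular value store with a model that generalizes across states. The following is a hedged sketch, not the paper's design: the feature encoding, names, and data are illustrative, and a scikit-learn tree stands in for whatever approximator the authors used.

```python
# Sketch: a regression tree generalizes learned action values across
# states, replacing a lookup table. Features and data are invented.
from sklearn.tree import DecisionTreeRegressor

# training data: (day_of_week, time_slot, activity_code) -> observed reward
X = [[0, 8, 1], [0, 18, 2], [5, 10, 3], [5, 18, 2], [2, 8, 1], [6, 10, 3]]
y = [1.0, 0.4, 0.9, 0.5, 1.1, 0.8]

tree = DecisionTreeRegressor(max_depth=2, random_state=0)
tree.fit(X, y)

# the tree now predicts a value for an unseen state-action combination
estimate = tree.predict([[1, 8, 1]])[0]
```

The appeal over a table is that states never visited still receive a value estimate from structurally similar states, at the cost of approximation error.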

Modeling Context-Sensitive Dynamic Activity-Travel Behavior Under Conditions of Uncertainty Incorporating Reinforcement Learning, Habit Formation, and Behavioral and Cognitive Adaptation Strategies

Activity-based analysis has slowly shifted gears from the analysis of daily activity patterns to the analysis and modeling of dynamic activity-travel patterns. This paper describes and illustrates a

Comparing paradigms for strategy learning of route choice with traffic information under uncertainty

A Multi-Agent Modeling Approach to Simulate Dynamic Activity-Travel Patterns

This chapter discusses a framework for an agent-based modeling approach focusing on the dynamic formation of (location) choice sets based on principles of reinforcement learning, Bayesian learning, and social comparison theories.

Learning and Affective Responses in Location-Choice Dynamics

A dynamic agent-based model which simulates how agents search and explore in nonstationary environments and ultimately develop habitual, context-dependent, activity–travel patterns is discussed, indicating that solutions generated by the model are sensitive to rational and emotional considerations in decision making in well-interpretable ways.

Modeling individuals’ cognitive and affective responses in spatial learning behavior

Activity-based analysis has slowly shifted gear from analysis of daily activity patterns to analysis and modeling of dynamic activity-travel patterns. In this paper, we describe a dynamic model that

Modelling Learning and Adaptation in Route and Departure Time Choice Behaviour: Achievements and Prospects

This paper presents an overview of a study that focuses on the day-to-day dynamics of decisions in interaction with the performance of the transport system. As the development of this area is still

Incorporating Bounded Rationality in a Model of Endogenous Dynamics of Activity-Travel Behaviour

This chapter discusses the formulation of an agent-based model to simulate day-to-day dynamics in activity-travel patterns, based on short- and long-term adaptations to endogenous and exogenous changes. It is one of the first attempts to formulate a dynamic model of activity-travel behaviour based on principles of bounded rationality.

The Allocation of Time and Location Information to Activity-Travel Sequence Data by Means of Reinforcement Learning

This book shows that reinforcement learning is a very dynamic area in terms of theory and applications and it shall stimulate and encourage new research in this field.

Inductive Learning Approach to Evolutionary Decision Processes in Activity-Scheduling Behavior: Theory and Numerical Experiments

The development of an inductive learning agent for simulating evolutionary processes (Ilse), which is meant to be linked to the ALBATROSS model, is discussed. The agent was developed to simulate

Route Choice Model with Inductive Learning

This research views drivers’ behaviors as psychological and heterogeneous rather than economical and homogeneous and indicates that system behavior is much more complex and dynamic than implied by equilibrium analysis.

Reinforcement Learning: An Introduction

This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.

Modeling Learning and Evolutionary Adaptation Processes in Activity Settings: Theory and Numerical Simulations

This paper describes the development of a conceptual framework for building a model of the multi-faceted choices underlying activity behavior, which views individuals as developing stereotyped behavior, or scripts, over time through learning as a function of state-dependent variables and latent behavior and adjustment principles.

Dynamic network models and driver information systems

Experimental analysis of dynamic route choice behavior


The objective of this paper is to investigate the influence of a range of different learning mechanisms on the dynamics of transport systems in the context of stochastic traffic assignment, which provides a natural framework for investigating day-to-day learning processes.

Drivers’ Learning and Network Behavior: Dynamic Analysis of the Driver-Network System as a Complex System

A model system of drivers’ cognition, learning, and route choice is formulated, taking into account the limitations in drivers’ cognitive capabilities, and is applied to examine the dynamic nature of


This paper outlines the use of the simple exponential smoothing model for estimating the influence of experience on the expectation of journey time and departure decision time. The author considers