Symmetry Detection in Trajectory Data for More Meaningful Reinforcement Learning Representations

Marissa D'Alonzo and Rebecca L. Russell
Knowledge of the symmetries of reinforcement learning (RL) systems can be used to create compressed and semantically meaningful representations of a low-level state space. We present a method of automatically detecting RL symmetries directly from raw trajectory data without requiring active control of the system. Our method generates candidate symmetries and trains a recurrent neural network (RNN) to discriminate between the original trajectories and the transformed trajectories for each…
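The discriminator idea in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: a logistic discriminator over simple per-trajectory summary features stands in for the paper's RNN, the dynamics and candidate transforms are invented for illustration, and all function names are hypothetical. The principle is the same: if a candidate transform is a true symmetry of the system, a discriminator trained to separate original from transformed trajectories can do no better than chance; if it is not a symmetry, the discriminator succeeds.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, T=20):
    # Toy dynamics: 2-D random walk with drift along +x only.
    # Reflection in y is a symmetry of these dynamics; reflection in x is not.
    steps = rng.normal([0.1, 0.0], 0.1, size=(n, T, 2))
    return np.cumsum(steps, axis=1)

def features(traj):
    # Simple summary features per trajectory (mean position, final position),
    # standing in for an RNN's learned representation.
    return np.concatenate([traj.mean(axis=1), traj[:, -1, :]], axis=1)

def discriminator_accuracy(transform, n=500, epochs=200, lr=0.5):
    """Train a logistic discriminator: original (label 0) vs transformed (1)."""
    X = features(np.concatenate([simulate(n), transform(simulate(n))]))
    y = np.concatenate([np.zeros(n), np.ones(n)])
    X = (X - X.mean(0)) / (X.std(0) + 1e-8)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-(X @ w + b)))   # sigmoid
        g = p - y                            # logistic-loss gradient
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    p = 1 / (1 + np.exp(-(X @ w + b)))
    return float(((p > 0.5) == y).mean())

reflect_y = lambda t: t * np.array([1.0, -1.0])   # candidate: flip y
reflect_x = lambda t: t * np.array([-1.0, 1.0])   # candidate: flip x

acc_sym = discriminator_accuracy(reflect_y)   # near chance: y-flip is a symmetry
acc_asym = discriminator_accuracy(reflect_x)  # near 1.0: x-flip reverses the drift
```

A near-chance accuracy flags the candidate transform as a detected symmetry; a high accuracy rejects it.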

Figures and Tables from this paper



Learning state representations with robotic priors

This work identifies five robotic priors, explains how they can be used to learn pertinent state representations, and shows that the state representations learned by the method greatly improve generalization in reinforcement learning.

Learning Visual Feature Spaces for Robotic Manipulation with Deep Spatial Autoencoders

An approach that automates state-space construction by learning a state representation directly from camera images, using a deep spatial autoencoder to acquire a set of feature points that describe the environment for the current task, such as the positions of objects.

Explaining Conditions for Reinforcement Learning Behaviors from Real and Imagined Data

This work presents a method of generating human-interpretable abstract behavior models that identify the experiential conditions leading to different task execution strategies and outcomes in a model-based RL setting.

Don't Start From Scratch: Leveraging Prior Data to Automate Robotic Reinforcement Learning

The empirical results demonstrate that incorporating prior data into robotic reinforcement learning enables autonomous learning, substantially improves sample-efficiency of learning, and enables better generalization.

Symmetry-Based Disentangled Representation Learning requires Interaction with Environments

It is argued that Symmetry-Based Disentangled Representation Learning cannot be based only on static observations: agents should interact with the environment to discover its symmetries.

Explainability in Deep Reinforcement Learning

Learning Linear Temporal Properties from Noisy Data: A MaxSAT Approach

This work devises two algorithms for inferring concise LTL formulas even in the presence of noise, by reducing the inference problem to a maximum satisfiability problem and using off-the-shelf MaxSAT solvers to find a solution.

Explainable Reinforcement Learning: A Survey

It is found that a) the majority of XRL methods function by mimicking and simplifying a complex model instead of designing an inherently simple one, and b) XRL (and XAI) methods often neglect to consider the human side of the equation, not taking into account research from related fields like psychology or philosophy.

Explainable Reinforcement Learning via Reward Decomposition

This work exploits an off-policy variant of Q-learning that provably converges to an optimal policy and the correct decomposed action values, and introduces the concept of minimum sufficient explanations for compactly explaining why one action is preferred over another in terms of the reward types.

Uncertainty-Aware Signal Temporal Logic Inference

This paper investigates the uncertainties associated with trajectories of a system, represents such uncertainties in the form of interval trajectories, and proposes two uncertainty-aware signal temporal logic (STL) inference approaches to classify the undesired and desired behaviors of a system.