Corpus ID: 238634825

LEO: Learning Energy-based Models in Factor Graph Optimization

@inproceedings{Sodhi2021LEOLE,
  title={LEO: Learning Energy-based Models in Factor Graph Optimization},
  author={Paloma Sodhi and Eric Dexheimer and Mustafa Mukadam and Stuart Anderson and Michael Kaess},
  booktitle={CoRL},
  year={2021}
}
We address the problem of learning observation models end-to-end for estimation. Robots operating in partially observable environments must infer latent states from multiple sensory inputs using observation models that capture the joint distribution between latent states and observations. This inference problem can be formulated as an objective over a graph that optimizes for the most likely sequence of states using all previous measurements. Prior work uses observation models that are… 
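As a rough illustration of the kind of objective the abstract refers to (the notation here is assumed, not taken from the paper), MAP inference over a state sequence x_{1:T} given measurements z can be written as an energy minimization over factor terms:

\[
\hat{x}_{1:T} \;=\; \arg\min_{x_{1:T}} \; \sum_{i} E_{\theta}(x_i, z_i)
\;=\; \arg\min_{x_{1:T}} \; \sum_{i} \big\lVert h_{\theta}(x_i) - z_i \big\rVert^{2}_{\Sigma_i},
\]

where the second form assumes Gaussian factors; learning the observation model end-to-end then amounts to learning the parameters \theta of these factor energies.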
Learning (Local) Surrogate Loss Functions for Predict-Then-Optimize Problems
TLDR
This paper provides an approach to learning faithful task-specific surrogate losses that requires only access to a black-box oracle able to solve the optimization problem, and is thus generalizable; the surrogates can be convex by construction and so are easy to optimize over.
Probabilistic Tracking with Deep Factors
TLDR
A likelihood model is presented that combines a learned feature encoder with generative densities over the encoded features, and leverages the Lie group properties of the tracked target’s pose to apply the feature encoding to an image patch extracted through a differentiable warp function inspired by spatial transformer networks.
Category-Independent Articulated Object Tracking with Factor Graphs
TLDR
A category-independent framework is proposed for predicting the articulation models of unknown objects from sequences of RGB-D images, together with a manipulation-oriented metric that evaluates predicted joint twists in terms of how well a compliant robot controller would be able to manipulate the articulated object given the predicted twist.

References

SHOWING 1-10 OF 78 REFERENCES
ICS: Incremental Constrained Smoothing for State Estimation
TLDR
This work proposes ICS, a framework that combines a primal-dual method such as the Augmented Lagrangian with an incremental Gauss-Newton approach that reuses previously computed matrix factorizations, and evaluates it on a set of simulated and real-world problems involving equality constraints such as object contact and inequality constraints such as collision avoidance.
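For orientation, a generic augmented Lagrangian for such a constrained smoothing problem (the notation is assumed here, with weighted residuals r_i(x) and equality constraints c(x) = 0) is

\[
L_{\rho}(x, \lambda) \;=\; \tfrac{1}{2} \sum_i \lVert r_i(x) \rVert^{2}_{\Sigma_i} \;+\; \lambda^{\top} c(x) \;+\; \tfrac{\rho}{2} \lVert c(x) \rVert^{2},
\]

which is minimized over x with a Gauss-Newton-style solver while the multipliers \lambda and penalty weight \rho are updated in an outer primal-dual loop.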
End to end learning and optimization on graphs
TLDR
This work proposes an alternative, decision-focused learning approach that integrates a differentiable proxy for common graph optimization problems as a layer in learned systems, learning a representation that maps the original optimization problem onto a simpler proxy problem that can be efficiently differentiated through.
Differentiable Particle Filters: End-to-End Learning with Algorithmic Priors
TLDR
This work presents differentiable particle filters (DPFs), a differentiable implementation of the particle filter algorithm with learnable motion and measurement models, which encodes the structure of recursive state estimation: prediction and measurement updates that operate on a probability distribution over states.
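As a hypothetical sketch (PyTorch assumed; the class names and architecture are illustrative, not from the paper), the piece that makes such a filter end-to-end trainable is a differentiable measurement update that re-scores particle weights with a learned observation likelihood:

import math
import torch
import torch.nn as nn

class LearnedLikelihood(nn.Module):
    # Hypothetical learned observation model: scores how well each particle explains the observation.
    def __init__(self, state_dim, obs_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, particles, obs):
        # particles: (N, state_dim), obs: (obs_dim,) -> unnormalized log-likelihood per particle
        obs_rep = obs.unsqueeze(0).expand(particles.shape[0], -1)
        return self.net(torch.cat([particles, obs_rep], dim=-1)).squeeze(-1)

def measurement_update(particles, log_weights, obs, likelihood):
    # Differentiable reweighting: add learned log-likelihoods to prior log-weights, renormalize in log space.
    log_w = log_weights + likelihood(particles, obs)
    return log_w - torch.logsumexp(log_w, dim=0)

# Usage: propagate particles with a (learned) motion model, then reweight against the new observation.
likelihood = LearnedLikelihood(state_dim=3, obs_dim=4)
particles = torch.randn(100, 3)
log_weights = torch.full((100,), -math.log(100.0))
log_weights = measurement_update(particles, log_weights, torch.randn(4), likelihood)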
Backprop KF: Learning Discriminative Deterministic State Estimators
TLDR
This work presents an alternative approach where the parameters of the latent state distribution are directly optimized as a deterministic computation graph, resulting in a simple and effective gradient descent algorithm for training discriminative state estimators.
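A minimal sketch of that idea, assuming PyTorch and standard Kalman filter notation (not code from the paper): the measurement update is written as ordinary differentiable tensor operations, so an upstream encoder that produces the observation z and its noise covariance R can be trained end-to-end through the filter.

import torch

def kalman_update(mu, Sigma, z, R, H):
    # One differentiable Kalman measurement update; every operation supports autograd.
    S = H @ Sigma @ H.T + R                   # innovation covariance
    K = Sigma @ H.T @ torch.linalg.inv(S)     # Kalman gain
    mu_new = mu + K @ (z - H @ mu)            # corrected state mean
    Sigma_new = (torch.eye(mu.shape[0]) - K @ H) @ Sigma
    return mu_new, Sigma_new

Stacking such updates over a sequence yields a deterministic computation graph whose outputs can be penalized directly against ground-truth states.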
End-to-End Learning for Structured Prediction Energy Networks
TLDR
End-to-end learning for SPENs is presented, where the energy function is discriminatively trained by back-propagating through gradient-based prediction, and the approach is substantially more accurate than the structured SVM method of Belanger and McCallum (2016).
Variational Inference With Parameter Learning Applied to Vehicle Trajectory Estimation
TLDR
This letter learns the covariances for the motion and sensor models used within vehicle trajectory estimation and demonstrates that the ESGVI framework can be used to solve pose graph optimization even with many false loop closures.
Predictive-State Decoders: Encoding the Future into Recurrent Networks
TLDR
This work seeks to combine the advantages of RNNs and PSRs by augmenting existing state-of-the-art recurrent neural networks with Predictive-State Decoders (PSDs), which add supervision to the network's internal state representation to target predicting future observations.
Learning to Filter with Predictive State Inference Machines
TLDR
This work presents the Predictive State Inference Machine (PSIM), a data-driven method that considers the inference procedure on a dynamical system as a composition of predictors and directly learns predictors for inference in predictive state space.
Meta Learning via Learned Loss
TLDR
This paper presents a meta-learning method for learning parametric loss functions that can generalize across different tasks and model architectures, and develops a pipeline for “meta-training” such loss functions, targeted at maximizing the performance of the model trained under them.
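A toy sketch of such a meta-training loop (PyTorch assumed; the tiny linear model and dimensions are placeholders, not from the paper): an inner step updates the model using the parametric loss, and the outer step scores the updated model with the true task objective, backpropagating into the loss parameters.

import torch
import torch.nn as nn

learned_loss = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # parametric loss
meta_opt = torch.optim.Adam(learned_loss.parameters(), lr=1e-3)

w = torch.zeros(1, requires_grad=True)             # toy model: y_hat = w * x
x, y = torch.randn(32, 1), torch.randn(32, 1)

for _ in range(100):
    y_hat = w * x
    inner_loss = learned_loss(torch.cat([y_hat, y], dim=-1)).mean()
    g, = torch.autograd.grad(inner_loss, w, create_graph=True)   # differentiable inner update
    w_updated = w - 0.1 * g
    task_loss = ((w_updated * x - y) ** 2).mean()                # true task objective on updated model
    meta_opt.zero_grad()
    task_loss.backward()                                         # flows into learned_loss parameters
    meta_opt.step()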
Model Based Planning with Energy Based Models
TLDR
This work provides an online algorithm to train EBMs while interacting with the environment, and shows that EBMs allow for significantly better online learning than corresponding feed-forward networks and support maximum entropy state inference.
...