Publications
Reducing the variance in online optimization by transporting past gradients
TLDR: We propose implicit gradient transport (IGT), which transforms gradients computed at previous iterates into gradients evaluated at the current iterate without using the Hessian explicitly. (A sketch of the update follows.)
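As a rough illustration of the update this TLDR describes, here is a minimal sketch of IGT paired with SGD: each new stochastic gradient is evaluated at an extrapolated point, so the running average behaves like a gradient taken at the current iterate, with no Hessian-vector products. The anchoring schedule gamma_t = t/(t+1) and the plain-SGD outer step are assumptions of the sketch, not necessarily the paper's exact recipe.

```python
import numpy as np

def igt_sgd(grad_fn, theta0, lr=0.1, steps=100):
    """Implicit gradient transport (IGT) sketch: average past gradients after
    'transporting' them to the current iterate by evaluating each new
    stochastic gradient at an extrapolated point (no explicit Hessian)."""
    theta = theta0.copy()
    theta_prev = theta0.copy()
    v = np.zeros_like(theta0)           # running transported-gradient average
    for t in range(steps):
        gamma = t / (t + 1.0)           # assumed anchoring schedule
        shift = gamma / (1.0 - gamma)   # extrapolation coefficient
        point = theta + shift * (theta - theta_prev)
        g = grad_fn(point)              # stochastic gradient at the shifted point
        v = gamma * v + (1.0 - gamma) * g
        theta_prev = theta.copy()
        theta = theta - lr * v          # plain SGD step on the averaged gradient
    return theta
```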
Decoupling Adaptation from Modeling with Meta-Optimizers for Meta Learning
TLDR: Meta-learning methods, most notably Model-Agnostic Meta-Learning (MAML), have achieved great success in adapting to new tasks quickly after having been trained on similar tasks. (A generic sketch of the decoupling idea follows.)
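The title's "decoupling" suggests separating the model from the update rule used to adapt it. Below is a hedged, generic sketch of that idea: an MAML-style inner loop where each step is produced by a small learned module (here just a single learned step size) rather than fixed gradient descent. The MetaOptimizer module and its form are hypothetical illustrations, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MetaOptimizer(nn.Module):
    """Hypothetical learned update rule: maps a gradient to a parameter update.
    Its weights are trained in the outer loop, decoupling how the model adapts
    (this module) from what the model computes (the task network)."""
    def __init__(self):
        super().__init__()
        self.log_lr = nn.Parameter(torch.log(torch.tensor(0.01)))

    def forward(self, grad):
        return -self.log_lr.exp() * grad  # learned scaling of the raw gradient

def adapt(model_params, meta_opt, loss_fn, inner_steps=5):
    """MAML-style inner loop with the update supplied by the meta-optimizer."""
    params = [p.clone() for p in model_params]
    for _ in range(inner_steps):
        loss = loss_fn(params)
        # create_graph=True keeps the graph so outer-loop meta-gradients flow
        grads = torch.autograd.grad(loss, params, create_graph=True)
        params = [p + meta_opt(g) for p, g in zip(params, grads)]
    return params
```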
When MAML Can Adapt Fast and How to Assist When It Cannot
TLDR: Model-Agnostic Meta-Learning (MAML) and its variants have achieved success in meta-learning tasks on many datasets and settings.
Analyzing the Variance of Policy Gradient Estimators for the Linear-Quadratic Regulator
TLDR: We study the variance of the REINFORCE policy gradient estimator in environments with continuous state and action spaces, linear dynamics, quadratic cost, and Gaussian noise. (A numerical sketch of this setting follows.)
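For concreteness, here is a hedged numpy sketch of the setting the TLDR describes: a one-dimensional LQR with a linear-Gaussian policy, where the variance of the vanilla REINFORCE estimator can be measured empirically over rollouts. All constants (A, B, q, r, sigma, the horizon) are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
A, B = 0.9, 1.0        # linear dynamics: s' = A s + B u + noise
q, r = 1.0, 0.1        # quadratic cost: q s^2 + r u^2
sigma = 0.5            # policy noise: u ~ N(K s, sigma^2)
K, horizon = -0.5, 20

def reinforce_estimate():
    """One REINFORCE estimate of d E[cost] / dK from a single rollout."""
    s, cost, score = 1.0, 0.0, 0.0
    for _ in range(horizon):
        u = K * s + sigma * rng.normal()
        # gradient of log N(u; K s, sigma^2) with respect to K
        score += (u - K * s) * s / sigma**2
        cost += q * s**2 + r * u**2
        s = A * s + B * u + 0.1 * rng.normal()
    return score * cost  # vanilla (total-cost) REINFORCE estimator

samples = np.array([reinforce_estimate() for _ in range(10_000)])
print("mean:", samples.mean(), "variance:", samples.var())
```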
Shapechanger: Environments for Transfer Learning
TLDR: We present Shapechanger, a library for transfer reinforcement learning specifically designed for robotic tasks.
learn2learn: A Library for Meta-Learning Research
TLDR: We introduce learn2learn, a software library for meta-learning research focused on solving prototyping and reproducibility issues. (A usage sketch follows.)
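As a usage sketch: learn2learn wraps a PyTorch module so that task-specific adaptation and meta-updates stay separate. The snippet follows the library's documented MAML wrapper (l2l.algorithms.MAML with clone() and adapt()); the toy sine-regression task and hyperparameters are placeholders.

```python
import torch
import learn2learn as l2l

model = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 1))
maml = l2l.algorithms.MAML(model, lr=0.01)           # inner-loop learning rate
opt = torch.optim.Adam(maml.parameters(), lr=0.001)  # outer-loop optimizer

for iteration in range(1000):
    opt.zero_grad()
    # Toy task; real experiments would sample tasks from a task distribution.
    x = torch.linspace(-3, 3, 32).unsqueeze(1)
    y = torch.sin(x + torch.rand(1))
    learner = maml.clone()                 # task-specific copy, graph preserved
    for _ in range(3):                     # adaptation (inner) steps
        learner.adapt(torch.nn.functional.mse_loss(learner(x), y))
    meta_loss = torch.nn.functional.mse_loss(learner(x), y)
    meta_loss.backward()                   # meta-gradient flows through adapt()
    opt.step()
```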
Accelerating SGD for Distributed Deep-Learning Using Approximated Hessian Matrix
TLDR: We introduce a novel method to compute a rank-m approximation of the inverse Hessian matrix in the distributed regime by leveraging the differences in gradients and parameters across multiple workers. (A generic quasi-Newton sketch follows.)
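The TLDR's ingredients (parameter and gradient differences yielding a rank-m curvature estimate) are the same ones quasi-Newton methods use. As a hedged, generic illustration (not the paper's exact construction), the L-BFGS two-loop recursion below applies an approximate inverse Hessian built from m (delta-theta, delta-gradient) pairs, which in a distributed setting could be gathered from m workers.

```python
import numpy as np

def apply_inverse_hessian(grad, s_pairs, y_pairs):
    """Approximate H^{-1} @ grad with the L-BFGS two-loop recursion.

    s_pairs[i] = theta_i - theta_{i-1}  (parameter differences)
    y_pairs[i] = grad_i  - grad_{i-1}   (gradient differences)
    Using m pairs gives a rank-m-style curvature correction without
    ever forming the Hessian explicitly."""
    q = grad.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(s_pairs, y_pairs)]
    alphas = []
    for s, y, rho in reversed(list(zip(s_pairs, y_pairs, rhos))):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    # Scale by a standard initial diagonal Hessian guess (gamma scaling).
    q *= (s_pairs[-1] @ y_pairs[-1]) / (y_pairs[-1] @ y_pairs[-1])
    for (s, y, rho), a in zip(zip(s_pairs, y_pairs, rhos), reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q
```

A descent step would then be theta -= lr * apply_inverse_hessian(g, S, Y); how the pairs are collected and synchronized across workers is where the paper's distributed contribution would enter.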