- Aaron Defazio, Francis R. Bach, Simon Lacoste-Julien
- NIPS
- 2014

In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal…
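The SAGA update described above can be sketched as follows. This is an illustrative toy implementation on a simple quadratic finite sum, not the authors' reference code; the step size and problem are chosen ad hoc for the demo.

```python
import numpy as np

def saga(grad_i, n, x0, step, iters, seed=0):
    """Minimal SAGA sketch. grad_i(i, x) returns the gradient of the
    i-th term f_i at x. A table of the last-seen gradient per term is
    kept; each step uses the variance-reduced estimate
        g = grad_i(j, x) - table[j] + mean(table).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    table = np.array([grad_i(i, x) for i in range(n)])  # stored gradients
    avg = table.mean(axis=0)
    for _ in range(iters):
        j = rng.integers(n)
        g_new = grad_i(j, x)
        x = x - step * (g_new - table[j] + avg)
        avg = avg + (g_new - table[j]) / n  # maintain running mean
        table[j] = g_new
    return x

# Toy problem: f_i(x) = 0.5 * (x - b_i)^2, whose sum is minimized at mean(b).
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda i, x: x - b[i]
x_star = saga(grad, len(b), np.array([0.0]), step=0.2, iters=500)
```

Because the gradient estimate is unbiased with vanishing variance at the optimum, a constant step size suffices, which is the source of SAGA's linear convergence rate.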

- Aaron Defazio, Justin Domke, Tibério S. Caetano
- ICML
- 2014

Recent advances in optimization theory have shown that smooth strongly convex finite sums can be minimized faster than by treating them as a black box "batch" problem. In this work we introduce a new method in this class with a theoretical convergence rate four times faster than existing methods, for sums with sufficiently many terms. This method is also…

- Aaron Defazio, Tibério S. Caetano
- NIPS
- 2012

A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using submodular functions, and we use their Lovász extension to…

- Mark Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar
- 2015

We apply stochastic average gradient (SAG) algorithms for training conditional random fields (CRFs). We describe a practical implementation that uses structure in the CRF gradient to reduce the memory requirement of this linearly-convergent stochastic gradient method, propose a non-uniform sampling scheme that substantially improves practical performance,…
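The non-uniform sampling idea can be sketched on a plain finite sum. This is an illustrative toy, not the paper's CRF implementation: it samples terms in proportion to fixed per-term Lipschitz constants, whereas the paper also adapts those estimates during training and exploits CRF gradient structure to save memory.

```python
import numpy as np

def sag_nonuniform(grad_i, L, x0, step, iters, seed=0):
    """Sketch of SAG with Lipschitz-proportional sampling.
    grad_i(i, x): gradient of term i; L: per-term Lipschitz constants.
    SAG steps along the plain average of the stored gradient table,
    so biased sampling only changes which entry gets refreshed."""
    rng = np.random.default_rng(seed)
    n = len(L)
    p = np.asarray(L, dtype=float) / np.sum(L)  # sample "hard" terms more often
    x = np.asarray(x0, dtype=float)
    table = np.array([grad_i(i, x) for i in range(n)])
    avg = table.mean(axis=0)
    for _ in range(iters):
        j = rng.choice(n, p=p)
        g = grad_i(j, x)
        avg = avg + (g - table[j]) / n  # refresh the table average
        table[j] = g
        x = x - step * avg
    return x

# Toy problem: f_i(x) = 0.5 * L_i * (x - b_i)^2, minimized at sum(L*b)/sum(L).
L = np.array([1.0, 1.0, 1.0, 3.0])
b = np.array([1.0, 2.0, 3.0, 4.0])
grad = lambda i, x: L[i] * (x - b[i])
x_min = sag_nonuniform(grad, L, np.array([0.0]), step=0.05, iters=4000)
```

Terms with larger curvature change faster, so refreshing them more often keeps the stored gradient table less stale.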

- Aaron Defazio, Thore Graepel
- ArXiv
- 2014

Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the Arcade Learning Environment which do not…

- Aaron Defazio
- NIPS
- 2016

We describe a novel optimization method for finite sums (such as empirical risk minimization problems) building on the recently introduced SAGA method. Our method achieves an accelerated convergence rate on strongly convex smooth problems. Our method has only one parameter (a step size), and is radically simpler than other accelerated methods for finite…
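A proximal variant of the SAGA table idea can be sketched as below. This follows the update as commonly presented for this line of work (maintain a gradient table, but take each step through the proximal operator of the sampled term rather than along its gradient); treat the exact constants and ordering as assumptions, and the quadratic problem as a toy.

```python
import numpy as np

def prox_incremental(prox_i, grad_i, n, x0, gamma, iters, seed=0):
    """Sketch of an incremental proximal-point method with a gradient
    table. prox_i(i, z, gamma) returns
        argmin_x f_i(x) + ||x - z||^2 / (2 * gamma).
    The single tunable parameter is the step size gamma."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    g = np.array([grad_i(i, x) for i in range(n)])  # gradient table
    gbar = g.mean()
    for _ in range(iters):
        j = rng.integers(n)
        z = x + gamma * (g[j] - gbar)   # re-center using the table
        x_new = prox_i(j, z, gamma)     # proximal step on term j
        g_new = (z - x_new) / gamma     # gradient of f_j at x_new
        gbar += (g_new - g[j]) / n
        g[j] = g_new
        x = x_new
    return x

# Toy problem: f_i(x) = 0.5 * (x - b_i)^2 has the closed-form prox below.
b = np.array([1.0, 2.0, 3.0, 4.0])
prox = lambda i, z, gamma: (z + gamma * b[i]) / (1.0 + gamma)
grad = lambda i, x: x - b[i]
x_opt = prox_incremental(prox, grad, len(b), 0.0, gamma=0.5, iters=1000)
```

At a fixed point the table holds the true per-term gradients, their mean is zero, and the proximal step leaves the minimizer unchanged.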

- Aaron Defazio, Tibério S. Caetano
- ICML
- 2012

Item neighbourhood methods for collaborative filtering learn a weighted graph over the set of items, where each item is connected to those it is most similar to. The prediction of a user's rating on an item is then given by the ratings of neighbouring items, weighted by their similarity. This paper presents a new neighbourhood approach which we call item…
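The neighbourhood prediction rule can be sketched as follows. This toy uses fixed, hand-written similarity weights purely for illustration; in the paper the weighted graph over items is learned, not given.

```python
import numpy as np

def predict(ratings, sim, user, item):
    """Similarity-weighted average of the user's ratings on neighbours.
    ratings: (users x items) matrix with np.nan for unrated items;
    sim: (items x items) nonnegative similarity weights, zero diagonal."""
    rated = ~np.isnan(ratings[user])       # mask of items the user rated
    w = sim[item] * rated                  # weights restricted to rated items
    if w.sum() == 0:
        return np.nan                      # no rated neighbours to draw on
    return np.nansum(w * ratings[user]) / w.sum()

# Toy data: 2 users, 3 items; user 0 has not rated item 2.
ratings = np.array([[5.0, 4.0, np.nan],
                    [3.0, np.nan, 2.0]])
sim = np.array([[0.0, 0.9, 0.3],
                [0.9, 0.0, 0.1],
                [0.3, 0.1, 0.0]])
pred = predict(ratings, sim, user=0, item=2)
```

Here the prediction for item 2 blends the user's ratings on items 0 and 1, with item 0 dominating because it is more similar to item 2.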

- Aaron Defazio
- ArXiv
- 2015

In this work we introduce several new optimisation methods for problems in machine learning. Our algorithms broadly fall into two categories: optimisation of finite sums and of graph structured objectives. The finite sum problem is simply the minimisation of objective functions that are naturally expressed as a summation over a large number of terms, where…

- Mark Schmidt, Reza Babanezhad, Mohamed Osama Ahmed, Aaron Defazio, Ann Clifton, Anoop Sarkar
- 2015

In this supplementary material we provide the proofs of both parts of the propositions, as well as extended experimental results. Proof of Part (a) of Proposition 1: In this section we consider the minimization problem $\min_x f(x) = \frac{1}{n}\sum_{i=1}^{n} f_i(x)$, (1) where each $f_i'$ is $L$-Lipschitz continuous and each $f_i$ is $\mu$-strongly convex. We will define Algorithm…
