• Publications
Online Learning for Matrix Factorization and Sparse Coding
TLDR
A new online optimization algorithm is proposed, based on stochastic approximations, which scales up gracefully to large data sets with millions of training samples, and extends naturally to various matrix factorization formulations, making it suitable for a wide range of learning problems.
SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives
TLDR
This work introduces a new optimisation method called SAGA, which improves on the theory behind SAG and SVRG with better convergence rates, and supports composite objectives where a proximal operator is used on the regulariser.
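As a rough illustration of the SAGA update (not the authors' code; the least-squares plus L1 objective, function names, and step-size choice below are assumptions), the sketch keeps a table of per-sample gradients, corrects the current stochastic gradient with the stored one, and applies the proximal operator of the regulariser:

```python
import numpy as np

def saga_lasso(X, y, lam=0.1, step=None, n_epochs=20, seed=0):
    """Minimal SAGA sketch for 1/(2n) ||Xw - y||^2 + lam * ||w||_1.

    Illustrative only: function names, the objective, and the step-size
    heuristic are assumptions, not the paper's code.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if step is None:
        # Crude Lipschitz-based step size; 1/(3L) is a common SAGA choice.
        step = 1.0 / (3.0 * np.max(np.sum(X ** 2, axis=1)))
    w = np.zeros(d)
    grads = np.zeros((n, d))   # table of stored per-sample gradients
    grad_avg = np.zeros(d)     # their running average
    for _ in range(n_epochs * n):
        j = rng.integers(n)
        g_new = (X[j] @ w - y[j]) * X[j]              # gradient of sample j
        w = w - step * (g_new - grads[j] + grad_avg)  # SAGA direction
        # Proximal step for the L1 regulariser (soft-thresholding).
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
        grad_avg += (g_new - grads[j]) / n            # refresh running average
        grads[j] = g_new                              # refresh stored gradient
    return w
```

The corrected direction is an unbiased estimate of the full gradient, which is what distinguishes SAGA from SAG (sketched further below).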
Online dictionary learning for sparse coding
TLDR
A new online optimization algorithm for dictionary learning is proposed, based on stochastic approximations, which scales up gracefully to large datasets with millions of training samples, and leads to faster performance and better dictionaries than classical batch algorithms for both small and large datasets.
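A minimal usage sketch with scikit-learn's MiniBatchDictionaryLearning, which implements this style of online dictionary learning; the toy data and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Toy data: 1000 samples in 64 dimensions (e.g. vectorised 8x8 patches).
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 64))

# Each mini-batch is sparse-coded against the current dictionary, then the
# dictionary atoms are refreshed from the accumulated sufficient statistics.
dico = MiniBatchDictionaryLearning(n_components=100, alpha=1.0,
                                   batch_size=16, random_state=0)
codes = dico.fit(X).transform(X)   # sparse codes for each sample
D = dico.components_               # learned dictionary (100 x 64)
```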
Online Learning for Latent Dirichlet Allocation
TLDR
An online variational Bayes (VB) algorithm for Latent Dirichlet Allocation (LDA) based on online stochastic optimization with a natural gradient step is developed, which is shown to converge to a local optimum of the VB objective function.
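A minimal sketch of online variational Bayes for LDA via scikit-learn's LatentDirichletAllocation with learning_method="online"; the toy corpus and parameter values are assumptions for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "sparse coding dictionary learning image patches",
    "variational bayes topic model text corpus",
    "stochastic gradient optimization convergence rate",
] * 100  # tiny toy corpus; replace with a real document stream

counts = CountVectorizer().fit_transform(docs)

# learning_method="online" uses mini-batch variational Bayes: each batch
# updates the topic-word parameters with a natural-gradient step whose
# weight decays over time, so the full corpus is never needed at once.
lda = LatentDirichletAllocation(n_components=5, learning_method="online",
                                batch_size=32, random_state=0)
doc_topics = lda.fit_transform(counts)   # per-document topic proportions
```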
Multiple kernel learning, conic duality, and the SMO algorithm
TLDR
Experimental results show that solving the proposed dual formulation of the QCQP as a second-order cone programming problem is significantly more efficient than using the general-purpose interior-point methods available in current optimization toolboxes.
Kernel independent component analysis
TLDR
A class of algorithms for independent component analysis which use contrast functions based on canonical correlations in a reproducing kernel Hilbert space is presented, showing that these algorithms outperform many of the presently known algorithms.
Minimizing finite sums with the stochastic average gradient
TLDR
Numerical experiments indicate that the new SAG method often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.
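A rough NumPy sketch of the SAG update for a least-squares objective (illustrative only; the names and the 1/(16L) step-size heuristic follow common practice rather than this exact experimental setup). Compared with the SAGA sketch above, the step moves along the running average of the stored gradients without the unbiasing correction:

```python
import numpy as np

def sag_least_squares(X, y, step=None, n_epochs=20, seed=0):
    """Minimal SAG sketch for 1/(2n) ||Xw - y||^2 (no regulariser)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    if step is None:
        step = 1.0 / (16.0 * np.max(np.sum(X ** 2, axis=1)))  # 1/(16 L) heuristic
    w = np.zeros(d)
    grads = np.zeros((n, d))   # stored per-sample gradients
    grad_sum = np.zeros(d)     # their running sum
    for _ in range(n_epochs * n):
        j = rng.integers(n)
        g_new = (X[j] @ w - y[j]) * X[j]
        grad_sum += g_new - grads[j]
        grads[j] = g_new
        w = w - (step / n) * grad_sum   # move along the average stored gradient
    return w
```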
Non-local sparse models for image restoration
TLDR
Experimental results in image denoising and demosaicking tasks with synthetic and real noise show that the proposed method outperforms the state of the art, making it possible to effectively restore raw images from digital cameras at a reasonable speed and memory cost.
Consistency of the group Lasso and multiple kernel learning
  • F. Bach, J. Mach. Learn. Res., 23 July 2007
TLDR
This paper derives necessary and sufficient conditions for the consistency of the group Lasso under practical assumptions, and proposes an adaptive scheme that yields a consistent model estimate even when the necessary condition required for the non-adaptive scheme is not satisfied.
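For reference, a minimal proximal-gradient sketch of the group-Lasso estimator analysed here; the block soft-thresholding prox is standard, but the function names and driver below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def group_soft_threshold(w, groups, thresh):
    """Proximal operator of thresh * sum_g ||w_g||_2 (block soft-thresholding).

    `groups` is a list of index arrays partitioning the coefficients.
    """
    out = w.copy()
    for g in groups:
        norm = np.linalg.norm(w[g])
        out[g] = 0.0 if norm <= thresh else (1.0 - thresh / norm) * w[g]
    return out

def group_lasso(X, y, groups, lam=0.1, n_iters=500):
    """Proximal gradient for 1/(2n) ||Xw - y||^2 + lam * sum_g ||w_g||_2."""
    n, d = X.shape
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)   # 1/L for the smooth part
    w = np.zeros(d)
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y) / n
        w = group_soft_threshold(w - step * grad, groups, step * lam)
    return w
```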