Learning Permutations with Exponential Weights

@article{Helmbold2007LearningPW,
  title={Learning Permutations with Exponential Weights},
  author={David P. Helmbold and Manfred K. Warmuth},
  journal={J. Mach. Learn. Res.},
  year={2009},
  volume={10},
  pages={1705--1736}
}
We give an algorithm for learning a permutation on-line. The algorithm maintains its uncertainty about the target permutation as a doubly stochastic matrix. This matrix is updated by multiplying the current matrix entries by exponential factors. These factors destroy the doubly stochastic property of the matrix and an iterative procedure is needed to re-normalize the rows and columns. Even though the result of the normalization procedure does not have a closed form, we can still bound the… 
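The update described in the abstract is easy to sketch. The following is a minimal illustration, not the authors' reference implementation: the loss-matrix convention (L[i][j] is the loss of mapping item i to position j), the learning rate eta, and the fixed number of balancing iterations are assumptions made for this sketch.

import numpy as np

def permelearn_update(W, L, eta, balance_iters=50):
    """One update of the doubly stochastic weight matrix W.

    Each entry is multiplied by an exponential factor of its loss; since
    this destroys the doubly stochastic property, rows and columns are
    alternately re-normalized (Sinkhorn balancing).  The iteration count
    is an illustrative choice, as the balanced matrix has no closed form.
    """
    W = W * np.exp(-eta * L)                    # exponential factors
    for _ in range(balance_iters):              # iterative re-normalization
        W = W / W.sum(axis=1, keepdims=True)    # rows sum to 1
        W = W / W.sum(axis=0, keepdims=True)    # columns sum to 1
    return W

n = 4
W = np.full((n, n), 1.0 / n)   # uniform doubly stochastic starting matrix
L = np.random.rand(n, n)       # hypothetical per-entry losses for one trial
W = permelearn_update(W, L, eta=0.5)

To predict, such a matrix can be decomposed into a convex combination of permutation matrices (Birkhoff-von Neumann) and a permutation sampled accordingly.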

Online Learning of Permutations Using Extended Formulation

TLDR
A new method for efficiently learning permutations that exploits the technique of extended formulation from the combinatorial optimization community, encoding the hard-to-describe polytope of permutations as the projection of an easier-to-describe polytope in a higher-dimensional space.

Sinkhorn Permutation Variational Marginal Inference

TLDR
Sinkhorn variational marginal inference is introduced as a scalable alternative, a method whose validity is ultimately justified by the so-called Sinkhorn approximation of the permanent.
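For orientation, two standard facts underlie this kind of approximation (stated here generically; the paper's precise estimator may differ). Sinkhorn scaling writes a positive matrix A as A = D_r B D_c with D_r, D_c diagonal and B doubly stochastic, and since the permanent is multilinear in rows and columns,

\mathrm{perm}(A) = \mathrm{perm}(B) \prod_i (D_r)_{ii} \prod_j (D_c)_{jj},

while for doubly stochastic B the permanent is pinned down to within a factor of roughly e^n by n!/n^n \le \mathrm{perm}(B) \le 1 (the Van der Waerden bound on the left).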

Learning rotations with little regret

TLDR
The expected regret of the online algorithm, compared to the best fixed rotation chosen offline over T iterations, is of order √(nT), and a matching lower bound is given that proves this expected regret bound is optimal to within a constant factor.

Online Linear Optimization over Permutations

TLDR
An algorithm for the online linear optimization problem over permutations; the objective of the online learner is to choose a permutation of {1,…,n} at each trial so as to minimize the "regret" over T trials.
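In the standard notation for this setting (assumed here, since the summary does not spell it out), the learner plays a permutation \hat{\sigma}_t at trial t, a loss is revealed, and the regret over T trials is

R_T = \sum_{t=1}^{T} \ell_t(\hat{\sigma}_t) - \min_{\sigma \in S_n} \sum_{t=1}^{T} \ell_t(\sigma),

the excess cumulative loss over the best single permutation chosen in hindsight.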

Fourier Theoretic Probabilistic Inference over Permutations

TLDR
This paper uses the "low-frequency" terms of a Fourier decomposition to represent distributions over permutations compactly, and presents Kronecker conditioning, a novel approach for maintaining and updating these distributions directly in the Fourier domain, allowing for polynomial-time bandlimited approximations.
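The lowest non-trivial Fourier coefficients of a distribution over permutations correspond, up to a change of basis, to the matrix of first-order marginals M[i][j] = P(sigma(i) = j). A minimal sketch of estimating this bandlimited summary from samples (function and variable names are hypothetical):

def first_order_marginals(perms, n):
    """Estimate M[i][j] = P(sigma(i) = j) from sampled permutations.

    Each permutation is a length-n list sigma with sigma[i] = j.  The
    result is an n x n doubly stochastic matrix: the lowest-frequency
    (first-order) summary of the distribution.
    """
    M = [[0.0] * n for _ in range(n)]
    for sigma in perms:
        for i, j in enumerate(sigma):
            M[i][j] += 1.0 / len(perms)
    return M

sample = [[0, 1, 2], [1, 0, 2], [0, 1, 2]]   # three observed permutations
M = first_order_marginals(sample, 3)         # M[0][0] = 2/3, M[0][1] = 1/3, ...

Kronecker conditioning then maintains and updates summaries of this kind directly in the Fourier domain.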

Extended Formulation for Online Learning of Combinatorial Objects

TLDR
A general technique is developed for converting extended formulations, which encode the convex hull of combinatorial objects such as Huffman trees as a polytope with only polynomially many facets in a higher-dimensional space, into efficient on-line algorithms with good relative loss bounds.

Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

TLDR
An on-line algorithm for Principal Component Analysis whose total expected quadratic approximation error is bounded by that of the best subspace chosen in hindsight, plus an additional term that grows linearly in the dimension of the subspace but only logarithmically in the dimension of the instances.

Randomized Online PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

TLDR
The methodology is first developed in the expert setting of online learning, by giving an algorithm that learns as well as the best subset of experts of a certain size, and is then lifted to the matrix setting, where subsets of experts correspond to subspaces.

Learning Probability Distributions over Permutations by Means of Fourier Coefficients

TLDR
This paper presents a method for learning a probability distribution that approximates the distribution generating a given sample of permutations, learning the Fourier-domain information that represents this distribution.

Learning of Combinatorial Objects via Extended Formulation

TLDR
A general framework for converting extended formulations into efficient online algorithms with good relative loss bounds is developed, and applications are presented that carry over to other combinatorial objects.
...

References

Showing 1-10 of 57 references

Online kernel PCA with entropic matrix updates

TLDR
The main problem addressed is the kernelization of an online PCA algorithm that belongs to a family of updates for density matrices, updates which involve a softmin calculation based on matrix logarithms and matrix exponentials.

Fourier Theoretic Probabilistic Inference over Permutations

TLDR
This paper uses the "low-frequency" terms of a Fourier decomposition to represent distributions over permutations compactly, and presents Kronecker conditioning, a novel approach for maintaining and updating these distributions directly in the Fourier domain, allowing for polynomial-time bandlimited approximations.

Randomized PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

TLDR
An on-line algorithm for Principal Component Analysis whose total expected quadratic approximation error is bounded by that of the best subspace chosen in hindsight, plus an additional term that grows linearly in the dimension of the subspace but only logarithmically in the dimension of the instances.

Quadratic Convergence for Scaling of Matrices

TLDR
A scaling algorithm is proposed that is conjectured to run much faster than any previous scaling algorithm, and it is shown that this algorithm converges quadratically for strictly scalable matrices, suggesting that the algorithm might always be fast.

Randomized Online PCA Algorithms with Regret Bounds that are Logarithmic in the Dimension

TLDR
The methodology is first developed in the expert setting of online learning, by giving an algorithm that learns as well as the best subset of experts of a certain size, and is then lifted to the matrix setting, where subsets of experts correspond to subspaces.

A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries

TLDR
A polynomial-time randomized algorithm for estimating the permanent of an arbitrary n × n matrix with nonnegative entries computes an approximation that is within an arbitrarily small specified relative error of the true value of the permanent.
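For scale: the quantity being approximated can be computed exactly by Ryser's inclusion-exclusion formula, but only in exponential time, which is what motivates the randomized estimator. A sketch of the exact baseline, usable for small n only:

from itertools import combinations

def permanent_ryser(A):
    """Exact permanent via Ryser's formula:
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod_i sum_{j in S} A[i][j].
    Runs in O(2^n * n^2) time as written.
    """
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            prod = 1.0
            for i in range(n):
                prod *= sum(A[i][j] for j in S)
            total += (-1) ** k * prod
    return (-1) ** n * total

A = [[1.0, 2.0], [3.0, 4.0]]
print(permanent_ryser(A))   # 10.0, i.e. 1*4 + 2*3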

A Deterministic Strongly Polynomial Algorithm for Matrix Scaling and Approximate Permanents

TLDR
This work develops the first strongly polynomial-time algorithm for matrix scaling, an important nonlinear optimization problem with many applications, and suggests a simple new (slow) polynomial-time decision algorithm for bipartite perfect matching.

On the Complexity of Nonnegative-Matrix Scaling

Polynomial approximation algorithms for belief matrix maintenance in identity management

TLDR
It is proved that even in cases in which the matrices are not exactly scalable, the problem can be solved to ε-optimality in strongly polynomial time, improving the best known bound for the problem of scaling arbitrary nonnegative rectangular matrices to prescribed row and column sums.

Relative Loss Bounds for On-Line Density Estimation with the Exponential Family of Distributions

TLDR
This work considers on-line density estimation with a parameterized density from the exponential family, and uses a Bregman divergence to derive and analyze algorithms with the best possible relative loss bounds.
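The Bregman divergence in question is the standard one: for a strictly convex function F,

\Delta_F(u, v) = F(u) - F(v) - \nabla F(v) \cdot (u - v),

which, for the negative-entropy choice F(u) = \sum_i u_i \ln u_i, becomes the relative entropy and recovers exponential-weight style updates of the kind used for the permutation algorithm above.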
...