Corpus ID: 3204312

Accelerated Training for Matrix-norm Regularization: A Boosting Approach

@inproceedings{Zhang2012AcceleratedTF,
  title={Accelerated Training for Matrix-norm Regularization: A Boosting Approach},
  author={X. Zhang and Y. Yu and Dale Schuurmans},
  booktitle={NIPS},
  year={2012}
}
  • X. Zhang, Y. Yu, Dale Schuurmans
  • Published in NIPS 2012
  • Computer Science, Mathematics
  • Sparse learning models typically combine a smooth loss with a nonsmooth penalty, such as the trace norm. Although recent developments in sparse approximation have offered promising solution methods, current approaches either apply only to matrix-norm constrained problems or provide suboptimal convergence rates. In this paper, we propose a boosting method for regularized learning that guarantees ε accuracy within O(1/ε) iterations. Performance is further accelerated by interlacing boosting with…
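The abstract describes greedy rank-one ("boosting") updates for a smooth loss with a trace-norm penalty. As a purely illustrative aid, the sketch below shows that general family of methods, a generalized conditional-gradient style loop on a synthetic matrix-completion instance. The problem setup, step-size schedule, grid line search, and all names are assumptions made here for illustration; this is not the paper's algorithm, which additionally interlaces local optimization between boosting steps to accelerate convergence.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic instance: recover a low-rank matrix from a subset of
# its entries (sizes and names are assumptions, not taken from the paper).
m, n, r = 60, 50, 3
W_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
mask = rng.random((m, n)) < 0.3          # observed entries
lam = 5.0                                # trace-norm penalty weight

def f(W):
    # Smooth loss: squared error on the observed entries.
    return 0.5 * np.sum((mask * (W - W_true)) ** 2)

def grad_f(W):
    return mask * (W - W_true)

def boosting_step(W, tau, t):
    # One greedy rank-one ("weak learner") update for min_W f(W) + lam * ||W||_tr.
    G = grad_f(W)
    # The leading singular pair of -G gives the best unit-norm rank-one descent atom.
    U, s, Vt = np.linalg.svd(-G, full_matrices=False)
    u, sigma, v = U[:, 0], s[0], Vt[0, :]
    if sigma <= lam:
        # Stopping heuristic in this sketch: no rank-one atom improves the
        # linearized, penalized objective.
        return W, tau, True
    eta = 2.0 / (t + 2.0)                # step-size schedule typical of O(1/eps) methods
    # Choose the atom's scale theta by a crude grid search on the penalized
    # objective; tau tracks an upper bound on ||W||_tr.
    best_theta, best_val = 0.0, np.inf
    for theta in np.linspace(0.0, 200.0, 201):
        W_try = (1 - eta) * W + eta * theta * np.outer(u, v)
        val = f(W_try) + lam * ((1 - eta) * tau + eta * theta)
        if val < best_val:
            best_theta, best_val = theta, val
    W = (1 - eta) * W + eta * best_theta * np.outer(u, v)
    tau = (1 - eta) * tau + eta * best_theta
    return W, tau, False

W, tau = np.zeros((m, n)), 0.0
for t in range(200):
    W, tau, converged = boosting_step(W, tau, t)
    if converged:
        break
print("penalized objective:", f(W) + lam * tau)

In the paper's framing, the boosting loop above is what yields the O(1/ε) guarantee, and the reported acceleration comes from periodically re-optimizing over the rank-one atoms collected so far; the grid line search here is only a crude stand-in for that local optimization.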
    91 Citations
      • Learning of Generalized Low-Rank Models: A Greedy Approach (1 citation)
      • Nuclear Norm Minimization via Active Subspace Selection (121 citations)
      • Scalable and Sound Low-Rank Tensor Learning (18 citations)
      • Proximal Riemannian Pursuit for Large-Scale Trace-Norm Minimization (6 citations)
      • Greedy Learning of Generalized Low-Rank Models (13 citations)
      • Fast Low-Rank Matrix Learning with Nonconvex Regularization (39 citations)
      • Approximate Low-Rank Tensor Learning (9 citations)
      • Efficient Structured Matrix Rank Minimization (17 citations)
