Corpus ID: 203951320

Bregman Proximal Framework for Deep Linear Neural Networks

  • Mahesh Chandra Mukkamala, Felix Westerkamp, E. Laude, D. Cremers, P. Ochs
  • Published 2019
  • Computer Science, Mathematics
  • ArXiv
  • A typical assumption for the analysis of first-order optimization methods is Lipschitz continuity of the gradient of the objective function. However, this assumption is violated in many practical applications, including loss functions in deep learning. To overcome this issue, extensions based on generalized proximity measures known as Bregman distances were introduced. This initiated the development of the Bregman proximal gradient (BPG) algorithm and an inertial variant (momentum) …
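The abstract above describes replacing the Euclidean proximity term in gradient methods with a Bregman distance. As a minimal illustrative sketch (not the paper's method or kernel), the following shows one BPG step with the classic negative-entropy kernel h(x) = Σᵢ xᵢ log xᵢ on the probability simplex, whose Bregman distance is the KL divergence; the resulting closed-form update is the exponentiated-gradient rule. The objective, step size, and variable names are illustrative choices, not taken from the paper.

```python
import numpy as np

def bpg_entropy_step(x, grad, step):
    """One Bregman proximal gradient step on the probability simplex.

    With the negative-entropy kernel h(x) = sum_i x_i * log(x_i), the
    Bregman distance D_h is the KL divergence, and
        x+ = argmin_y <grad, y> + (1/step) * D_h(y, x)  s.t.  y in simplex
    has the closed form x+ ∝ x * exp(-step * grad).
    """
    y = x * np.exp(-step * grad)
    return y / y.sum()

# Toy objective (illustrative): f(x) = 0.5 * ||x - c||^2 over the simplex,
# with gradient x - c; its minimizer is c, since c lies in the simplex.
c = np.array([0.2, 0.5, 0.3])
x = np.full(3, 1.0 / 3.0)            # start at the simplex barycenter
for _ in range(200):
    x = bpg_entropy_step(x, x - c, step=0.5)
# x now approximates c while staying a valid probability vector
```

Each iterate remains strictly positive and sums to one by construction, which is the point of matching the kernel to the constraint geometry; the paper pursues the analogous idea with kernels adapted to deep linear networks, where the Euclidean Lipschitz-gradient assumption fails.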
    3 Citations
    • Global Convergence of Model Function Based Bregman Proximal Minimization Algorithms
    • First-Order Algorithms Without Lipschitz Gradient: A Sequential Local Optimization Approach
    • Convex-Concave Backtracking for Inertial Bregman Proximal Gradient Algorithms in Non-Convex Optimization


    References
    • Beyond Alternating Updates for Matrix Factorization with Inertial Bregman Proximal Gradient Algorithms
    • A Descent Lemma Beyond Lipschitz Gradient Continuity: First-Order Methods Revisited and Applications (Highly Influential)
    • Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
    • Implicit Regularization of Discrete Gradient Dynamics in Deep Linear Neural Networks
    • Adam: A Method for Stochastic Optimization
    • A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion