We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we …
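For reference, a minimal sketch of the standard-form SDP that such methods target, together with the augmented Lagrangian built on it; the symbols C, \mathcal{A}, b, y, and \sigma are standard notation assumed here, not taken from the abstract:
\[
\min_{X \in \mathbb{S}^n} \ \langle C, X \rangle \quad \text{s.t.} \quad \mathcal{A}(X) = b, \ X \succeq 0,
\qquad
L_\sigma(X; y) = \langle C, X \rangle - \langle y, \mathcal{A}(X) - b \rangle + \tfrac{\sigma}{2}\,\|\mathcal{A}(X) - b\|^2 .
\]
In methods of this type, each outer iteration (approximately) minimizes L_\sigma over the positive semidefinite cone, and it is in these inner subproblems that a semismooth Newton method with conjugate-gradient steps is typically applied.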
The smoothing Newton method for solving a system of nonsmooth equations F(x) = 0, which may arise from the nonlinear complementarity problem, the variational inequality problem or other problems, can be regarded as a variant of the smoothing method. At the kth step, the nonsmooth function F is approximated by a smooth function f(·, ε_k), and the …
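A minimal sketch of the iteration being described, with f and ε_k as in the abstract; the generic update rule written below is assumed for illustration, not quoted from the paper:
\[
x^{k+1} = x^{k} - f_x'(x^{k}, \varepsilon_k)^{-1} f(x^{k}, \varepsilon_k),
\qquad \varepsilon_k \downarrow 0, \quad f(\cdot, \varepsilon_k) \to F,
\]
where f_x' denotes the Jacobian of f with respect to x.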
We study analyticity, differentiability, and semismoothness of Löwner's operator and spectral functions under the framework of Euclidean Jordan algebras. In particular, we show that many optimization-related classical results in the symmetric matrix space can be generalized within this framework. For example, the metric projection operator over any …
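A hedged sketch of Löwner's operator in this setting, using the spectral decomposition of a Euclidean Jordan algebra; the notation (eigenvalues λ_i(x) and idempotents c_i) is standard and assumed here:
\[
x = \sum_{i=1}^{r} \lambda_i(x)\, c_i \ \Longrightarrow\ f^{\mathrm{sym}}(x) := \sum_{i=1}^{r} f(\lambda_i(x))\, c_i ,
\]
so that, for instance, the metric projection onto the symmetric cone is recovered by taking f(t) = max(t, 0).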
We introduce a flexible optimization framework for nuclear norm minimization of matrices with linear structure, including Hankel, Toeplitz and moment structures, and catalog applications from diverse fields under this framework. We discuss various first-order methods …
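One generic instance of the structured problem class described; the linear structure map \mathcal{M}, the constraint map \mathcal{A}, and the data b are illustrative assumptions:
\[
\min_{x \in \mathbb{R}^m} \ \| \mathcal{M}(x) \|_* \quad \text{s.t.} \quad \mathcal{A}(x) = b,
\]
where \mathcal{M}(x) is, e.g., the Hankel or Toeplitz matrix assembled linearly from the parameter vector x and \|\cdot\|_* is the nuclear norm.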
Matrix-valued functions play an important role in the development of algorithms for semidefinite programming problems. This paper studies generalized differential properties of such functions related to nonsmooth and smoothing Newton methods. The first part of this paper discusses basic properties such as the generalized derivative, Rademacher's …
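A concrete example of such a matrix-valued function is the projection onto the positive semidefinite cone; the short Python sketch below is illustrative only, not code from the paper, and evaluates it via an eigenvalue decomposition.

```python
import numpy as np

def proj_psd(X):
    """Projection of a symmetric matrix onto the PSD cone:
    X = P diag(lam) P^T  ->  P diag(max(lam, 0)) P^T.
    A standard nonsmooth matrix-valued function of the kind
    studied above; illustrative sketch only."""
    Xs = (X + X.T) / 2              # symmetrize to guard against round-off
    lam, P = np.linalg.eigh(Xs)     # spectral decomposition of a symmetric matrix
    return (P * np.maximum(lam, 0.0)) @ P.T
```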
The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second-order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in …
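A representative formulation of this problem class; the equality data (\mathcal{A}, b), the second-order-cone block (\mathcal{B}, d, \delta), and the norm bound are assumed here for illustration:
\[
\min_{X \in \mathbb{R}^{m \times n}} \ \| X \|_* \quad \text{s.t.} \quad \mathcal{A}(X) = b, \quad \| \mathcal{B}(X) - d \| \le \delta .
\]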
In this paper we take a new look at smoothing Newton methods for solving the nonlinear complementarity problem (NCP) and the box-constrained variational inequalities (BVI). Instead of using an infinite sequence of smoothing approximation functions, we use a single smoothing approximation function and Robinson's normal equation to reformulate NCP and BVI as …
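For context, Robinson's normal equation for a variational inequality over a box B = [l, u] (the projection Π_B and the box B are standard notation assumed here) reads
\[
F\bigl(\Pi_{B}(z)\bigr) + z - \Pi_{B}(z) = 0,
\]
with a solution of the original problem recovered as x^* = Π_B(z^*); the NCP is the special case B = [0, ∞)^n, and the smoothing step described in the abstract replaces Π_B by a single smooth approximation.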
The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods like BFGS have been used directly to …
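For reference, the primal problem and one common rendering of its dual (written up to an additive constant; G is the given matrix, e the vector of ones, Diag(y) the diagonal matrix built from y, and Π_{S^n_+} the projection onto the positive semidefinite cone; this form is assumed, not quoted from the abstract):
\[
\min_{X \in S^n_+} \ \tfrac12 \| X - G \|_F^2 \quad \text{s.t.} \quad X_{ii} = 1, \ i = 1, \dots, n,
\]
whose Lagrangian dual can be written as the unconstrained smooth problem
\[
\min_{y \in \mathbb{R}^n} \ \theta(y) := \tfrac12 \bigl\| \Pi_{S^n_+}\bigl(G + \mathrm{Diag}(y)\bigr) \bigr\|_F^2 - e^{\top} y,
\qquad
\nabla \theta(y) = \mathrm{diag}\bigl( \Pi_{S^n_+}(G + \mathrm{Diag}(y)) \bigr) - e .
\]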
The paper presents concrete realizations of quasi-Newton methods for solving several standard problems including complementarity problems, special variational inequality problems, and the Karush–Kuhn–Tucker (KKT) system of nonlinear programming. A new approximation idea is introduced in this paper. The Q-superlinear convergence of the Newton method and the …
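As an example of the kind of nonsmooth reformulation such realizations build on, complementarity conditions can be rewritten as equations via, e.g., the min function or the Fischer–Burmeister function (the particular choice below is illustrative, not necessarily the one used in the paper):
\[
0 \le a \perp b \ge 0
\quad\Longleftrightarrow\quad
\min(a, b) = 0
\quad\Longleftrightarrow\quad
\phi_{\mathrm{FB}}(a, b) := \sqrt{a^2 + b^2} - a - b = 0,
\]
so that the NCP or KKT conditions become a system of nonsmooth equations to which (quasi-)Newton methods can be applied.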
We propose a Newton-CG primal proximal point algorithm for solving large-scale log-determinant optimization problems. Our algorithm employs the essential ideas of the proximal point algorithm, the Newton method and the preconditioned conjugate gradient solver. When applying the Newton method to solve the inner sub-problem, we find that the log-determinant …
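A representative log-determinant model of the kind described; the cost matrix C, the linear map \mathcal{A}, the data b, and the weight \mu > 0 are notational assumptions, not quoted from the abstract:
\[
\min_{X \succ 0} \ \langle C, X \rangle - \mu \log \det X \quad \text{s.t.} \quad \mathcal{A}(X) = b .
\]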