Inexact Proximal-Point Penalty Methods for Non-Convex Optimization with Non-Convex Constraints

08/30/2019
by Qihang Lin, et al.

Non-convex optimization problems arise in many areas of science and engineering. Although many numerical methods and theoretical results have been developed for unconstrained non-convex problems, the parallel development for constrained non-convex problems remains limited, which restricts the practice of mathematical modeling and quantitative decision making in many disciplines. In this paper, an inexact proximal-point penalty method is proposed for constrained optimization problems in which both the objective function and the constraints can be non-convex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective function a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is strongly convex and can be solved efficiently to the required accuracy by an optimal gradient-type method. The theoretical properties of the proposed method are analyzed in two cases: in the first, the objective function is non-convex but the constraint functions are convex; in the second, both the objective function and the constraints are non-convex. For both cases, we give complexity results in terms of the number of function-value and gradient evaluations needed to produce near-stationary points. Because of the different structures, different definitions of near-stationarity are used in the two cases. The complexity for producing a nearly ε-stationary point is Õ(ε^-5/2) in the first case and Õ(ε^-4) in the second.
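To make the subproblem construction concrete, the sketch below illustrates one way a penalized proximal subproblem of this kind could be formed and inexactly minimized. It is a minimal illustration only, assuming a smooth objective with gradient f_grad, inequality constraints g(x) ≤ 0 with Jacobian g_jac, and a proximal weight rho exceeding the weak-convexity modulus; all names, the penalty-update schedule, and the plain gradient loop (standing in for the optimal accelerated method mentioned above) are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def subproblem_grad(x, x_center, f_grad, g, g_jac, rho, beta):
    """Gradient of the penalized proximal subproblem
        F_k(x) = f(x) + rho * ||x - x_center||^2
                 + (beta / 2) * sum_i max(0, g_i(x))^2.
    With rho larger than the weak-convexity modulus, F_k is strongly convex,
    so it can be minimized efficiently by a gradient-type method."""
    viol = np.maximum(g(x), 0.0)                     # constraint violations
    return (f_grad(x)
            + 2.0 * rho * (x - x_center)             # proximal-term gradient
            + beta * g_jac(x).T @ viol)              # quadratic-penalty gradient

def solve_subproblem(x0, x_center, f_grad, g, g_jac, rho, beta,
                     step=1e-2, iters=500):
    """Plain gradient descent as a stand-in for the accelerated method;
    it only solves the subproblem to a rough accuracy."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * subproblem_grad(x, x_center, f_grad, g, g_jac, rho, beta)
    return x

def inexact_prox_point_penalty(x0, f_grad, g, g_jac, rho=1.0,
                               beta0=1.0, outer_iters=20):
    """Outer loop: re-center the proximal term at the latest iterate and
    grow the penalty parameter across subproblems (illustrative schedule)."""
    x, beta = x0.copy(), beta0
    for _ in range(outer_iters):
        x = solve_subproblem(x, x, f_grad, g, g_jac, rho, beta)
        beta *= 2.0
    return x
```

In this sketch the strong convexity supplied by the proximal term is what lets an inner gradient-type solver reach the required accuracy, while the growing quadratic penalty drives the constraint violation toward zero across outer iterations.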
