A general double-proximal gradient algorithm for difference of convex function programming

Abstract:
The possibilities of exploiting the special structure of difference-of-convex programs, i.e., problems of minimizing the difference of two convex functions, are currently more or less limited to variants of the subgradient-based DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. We propose another algorithm, which evaluates both the convex and the concave part via their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part may be the composition of a concave function with a linear operator, both of which are evaluated separately. For this algorithm we show that every cluster point of the iterates is a solution of the optimization problem. Furthermore, we show the connection to the Toland dual problem and the descent of the objective function values for a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka-Łojasiewicz property. In the last part, we show how to apply the algorithm to an image-processing model due to Osher et al., which involves a difference of isotropic and anisotropic total variation.
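
To make the splitting structure concrete, the following minimal Python sketch applies the three building blocks described in the abstract to a toy problem min_x g(x) + h(x) - f(x), with g a nonsmooth convex function handled by its proximal point, h a smooth convex function handled by its gradient, and the concave part -f also accessed through a proximal map. This is an illustrative sketch only, not the algorithm analyzed in the talk: the choice g = lam*||x||_1, h = 0.5*||x - b||^2, f = mu*||x||_2 and the parameters gamma, lam, mu are assumptions for the example, and the exact update rule and step-size conditions are those of the paper.

import numpy as np

def prox_l1(z, t):
    # Proximal map of t*||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def prox_l2(z, t):
    # Proximal map of t*||.||_2 (shrink the whole vector toward 0).
    n = np.linalg.norm(z)
    return np.zeros_like(z) if n <= t else (1.0 - t / n) * z

def dc_step(x, b, gamma=0.5, lam=0.1, mu=0.05):
    # Proximal evaluation of the concave part: by a standard identity,
    # y = (x - prox_{gamma*f}(x)) / gamma lies in the subdifferential of f
    # at prox_{gamma*f}(x), so the prox of f yields the linearization point.
    y = (x - prox_l2(x, gamma * mu)) / gamma
    # Forward (gradient) step on the smooth part h(x) = 0.5*||x - b||^2,
    # then backward (proximal) step on g, with -f linearized through y.
    grad_h = x - b
    return prox_l1(x - gamma * (grad_h - y), gamma * lam)

b = np.array([1.5, -0.2, 0.0, 3.0])
x = np.zeros_like(b)
for _ in range(200):
    x = dc_step(x, b)
print(x)

The point of the sketch is only that each part of the objective is touched through a cheap closed-form operation (two proximal maps and one gradient), which is what distinguishes this kind of splitting from a plain subgradient-based DCA.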

Time: Fri 2017-01-27 11.00 - 12.00

Location: Seminar room 3721, Lindstedtsvägen 25, KTH

Participating: Sebastian Banert, University of Vienna

Export to calendar