VA & Opt Webinar: Yalçın Kaya (UniSA)

Title: Constraint Splitting and Projection Methods for Optimal Control

Speaker: Yalçın Kaya (UniSA)

Date and Time: September 30th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We consider a class of optimal control problems with a constrained control variable. We split the ODE constraint and the control constraint of the problem so as to obtain two optimal control subproblems, the solutions of each of which can be written down simply. Employing these simpler solutions as projections, we find numerical solutions to the original problem by applying four different projection-type methods: (i) Dykstra’s algorithm, (ii) the Douglas–Rachford (DR) method, (iii) the Aragón Artacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinkage-thresholding algorithm (FISTA). The problem we study is posed in infinite-dimensional Hilbert spaces. The behaviour of the DR and AAC algorithms is explored via numerical experiments with respect to their parameters. An error analysis is also carried out numerically for a particular instance of the problem for each of the algorithms. This is joint work with Heinz Bauschke and Regina Burachik.
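To fix ideas, here is a minimal finite-dimensional sketch of the Douglas–Rachford iteration from item (ii), applied to a toy feasibility problem (a box intersected with a hyperplane). This is only an illustration under simplifying assumptions; the talk's infinite-dimensional optimal control splitting and the other three methods are not reproduced here.

```python
import numpy as np

def douglas_rachford(proj_A, proj_B, x0, n_iter=500):
    """Douglas-Rachford for feasibility: find a point in A and B,
    given the projectors onto A and B. R_A = 2 P_A - I is the
    reflector across A; the governing map is (I + R_B R_A) / 2."""
    x = x0.copy()
    for _ in range(n_iter):
        y = 2 * proj_A(x) - x      # reflect across A
        z = 2 * proj_B(y) - y      # reflect across B
        x = 0.5 * (x + z)          # average
    return proj_A(x)               # the "shadow" point solves the problem

# Toy instance: a box (think: control bounds) meets a hyperplane.
proj_box = lambda x: np.clip(x, -1.0, 1.0)
a, b = np.array([1.0, 2.0]), 1.5
proj_plane = lambda x: x - (a @ x - b) / (a @ a) * a
print(douglas_rachford(proj_box, proj_plane, np.zeros(2)))  # lies in both sets
```

In the setting of the talk, the two projections come instead from the ODE-constrained subproblem and the control-constrained subproblem, each of which admits a simple solution.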

VA & Opt Webinar: Regina Burachik (UniSA)

Title: A Primal–Dual Penalty Method via Rounded Weighted-L1 Lagrangian Duality

Speaker: Regina Burachik (UniSA)

Date and Time: September 23rd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We propose a new duality scheme based on a sequence of smooth minorants of the weighted-ℓ1 penalty function, interpreted as a parametrized sequence of augmented Lagrangians, to solve nonconvex constrained optimization problems. For the induced sequence of dual problems, we establish strong asymptotic duality properties. Namely, we show that (i) the sequence of dual problems is convex and (ii) the dual values monotonically increase to the optimal primal value. We use these properties to devise a subgradient-based primal–dual method, and show that the generated primal sequence accumulates at a solution of the original problem. We illustrate the performance of the new method with three different types of test problems: a polynomial nonconvex problem, large-scale instances of the celebrated kissing number problem, and the Markov–Dubins problem. Our numerical experiments demonstrate that, when compared with the traditional implementation of a well-known smooth solver, our new method (using the same well-known solver in its subproblem) can find better quality solutions, i.e., “deeper” local minima, or solutions closer to the global minimum. Moreover, our method seems to be more time efficient, especially when the problem has a large number of constraints.

This is a joint work with C. Y. Kaya (UniSA) and C. J. Price (University of Canterbury, Christchurch, New Zealand).
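As background for the primal–dual method described above, a classical Lagrangian dual subgradient scheme looks as follows. This is only a stand-in sketch: the talk's rounded weighted-ℓ1 augmented Lagrangian and its parameter updates are not reproduced here, and a generic smooth solver plays the primal role.

```python
import numpy as np
from scipy.optimize import minimize

def dual_subgradient(f, g, x0, lam0, steps=50):
    """Classical dual subgradient ascent for min f(x) s.t. g(x) = 0.
    At multipliers lam, the constraint value g(x(lam)) is a
    (super)gradient of the dual function q(lam) = min_x f + lam.g."""
    x, lam = np.asarray(x0, float), np.asarray(lam0, float)
    for k in range(1, steps + 1):
        # primal step: (approximately) minimize the Lagrangian
        x = minimize(lambda z: f(z) + lam @ g(z), x).x
        # dual step: ascend along the supergradient, diminishing step size
        lam = lam + (1.0 / k) * g(x)
    return x, lam
```

The monotone increase of the dual values established in the talk is what makes an ascent of this kind on the dual side meaningful even for nonconvex primal problems.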

VA & Opt Webinar: Christopher Price (University of Canterbury)

Title: A direct search method for constrained optimization via the rounded ℓ1 penalty function

Speaker: Christopher Price (University of Canterbury)

Date and Time: September 16th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: This talk looks at the constrained optimization problem when the objective and constraints are Lipschitz continuous black-box functions. The approach uses a sequence of smoothed and offset ℓ1 penalty functions. The method generates an approximate minimizer to each penalty function, and then adjusts the offsets and other parameters. The smoothing is steadily reduced, ultimately revealing the ℓ1 exact penalty function. The method preferentially uses a discrete quasi-Newton step, backed up by a global direction search. Theoretical convergence results are given for the smooth and non-smooth cases subject to relevant conditions. Numerical results are presented on a variety of problems with non-smooth objective or constraint functions. These results show the method is effective in practice.
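A rough sketch of the outer loop this abstract describes might look as follows, with a generic smooth solver standing in for the talk's quasi-Newton/direct search steps and with the offset updates omitted. The smoothing sqrt(g² + ε²) − ε is one plausible smooth minorant of |g|, not necessarily the talk's.

```python
import numpy as np
from scipy.optimize import minimize

def smoothed_l1_loop(f, g, x0, mu=1.0, eps=1.0, n_outer=15):
    """Outer loop: approximately minimize a smoothed l1 penalty for
    the constraints g(x) = 0, then reduce the smoothing so that the
    exact l1 penalty is steadily revealed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        pen = lambda z: f(z) + mu * np.sum(np.sqrt(g(z) ** 2 + eps ** 2) - eps)
        x = minimize(pen, x).x   # inner approximate minimization
        eps *= 0.5               # steadily reduce the smoothing
        mu *= 1.5                # modest penalty increase
    return x

# Toy test: minimize ||x||^2 subject to x0 + x1 = 1; the answer is (0.5, 0.5).
f = lambda x: x @ x
g = lambda x: np.array([x[0] + x[1] - 1.0])
print(smoothed_l1_loop(f, g, np.zeros(2)))
```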

VA & Opt Webinar: Christiane Tammer (MLU)

Title: Subdifferentials and Lipschitz properties of translation invariant functionals and applications

Speaker: Christiane Tammer (MLU)

Date and Time: September 9th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk, we deal with translation invariant functionals and their application to deriving necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets.

Translation invariant functionals are a natural and powerful tool for the separation of not necessarily convex sets and scalarization. There are many applications of translation invariant functionals in nonlinear functional analysis, vector optimization, set optimization, optimization under uncertainty, mathematical finance as well as consumer and production theory.

The primary objective of this talk is to establish formulas for basic and singular subdifferentials of translation invariant functionals and to study important properties such as monotonicity, the PSNC property, the Lipschitz behavior, etc. of these nonlinear functionals without assuming that the shifted set involved in the definition of the functional is convex. The second objective is to propose a new way to scalarize a set-valued optimization problem. It allows us to study necessary conditions for minimal solutions in a very broad setting in which the domination set is not necessarily convex or solid or conical. The third objective is to apply our results to vector-valued approximation problems.
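For orientation, the prototypical translation invariant functional (often called the Gerstewitz, or Tammer, functional) scalarizes with respect to a set A ⊆ Y and a direction k ∈ Y. The defining formula and the translation property that gives these functionals their name are:

```latex
\varphi_{A,k}(y) = \inf\{\, t \in \mathbb{R} : y \in t\,k - A \,\},
\qquad
\varphi_{A,k}(y + s\,k) = \varphi_{A,k}(y) + s \quad \text{for all } s \in \mathbb{R}.
```

Separation of nonconvex sets is possible because, under mild assumptions on A and k, the sublevel sets of this functional are translates of −A along k, with no convexity of A required.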

This is a joint work with T.Q. Bao (Northern Michigan University).

VA & Opt Webinar: Gerd Wachsmuth (BTU)

Title: New Constraint Qualifications for Optimization Problems in Banach Spaces based on Asymptotic KKT Conditions

Speaker: Gerd Wachsmuth (BTU)

Date and Time: September 2nd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization theory in Banach spaces suffers from the lack of available constraint qualifications. Not only do very few constraint qualifications exist, but those that do are often violated even in simple applications. This is very much in contrast to finite-dimensional nonlinear programs, where a large number of constraint qualifications is known. Since these constraint qualifications are usually defined using the set of active inequality constraints, it is difficult to extend them to the infinite-dimensional setting. One exception is a recently introduced sequential constraint qualification based on asymptotic KKT conditions. This paper shows that this so-called asymptotic KKT regularity allows suitable extensions to the Banach space setting in order to obtain new constraint qualifications. The relation of these new constraint qualifications to existing ones is discussed in detail. Their usefulness is also shown by several examples as well as an algorithmic application to the class of augmented Lagrangian methods.
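For readers unfamiliar with the finite-dimensional notion being extended: a feasible point x* of min f(x) subject to g(x) ≤ 0 satisfies asymptotic (approximate) KKT conditions if, roughly,

```latex
\exists\, x_k \to x^{*},\ \lambda^k \in \mathbb{R}^m_{\ge 0}:\quad
\nabla f(x_k) + \nabla g(x_k)^{\top} \lambda^k \to 0,
\qquad
\lambda^k_i = 0 \ \text{ whenever } g_i(x^{*}) < 0,
```

i.e., the KKT residual vanishes along a sequence of (not necessarily feasible) approximations, with multipliers supported on the active constraints. A sequential constraint qualification of the kind the talk extends then asks, roughly, that every such asymptotic KKT point be a genuine KKT point.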

This is a joint work with Christian Kanzow (Würzburg) and Patrick Mehlitz (Cottbus).

VA & Opt Webinar: Jein-Shan Chen (NTNU)

Title: Two approaches for the absolute value equation using smoothing functions

Speaker: Jein-Shan Chen (NTNU)

Date and Time: August 26th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk, we present two approaches for solving the absolute value equation. These two approaches are based on using smoothing functions. In particular, there are several systematic ways of constructing smoothing functions. Numerical experiments with comparisons are reported, which suggest what kinds of smoothing functions work well with the proposed approaches.
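The absolute value equation in question is Ax − |x| = b, with |x| taken componentwise. Below is a minimal sketch of the smoothing idea, using sqrt(x² + ε²) as one common smoothing of |x| (the talk compares several choices) and a plain Newton iteration on the smoothed system.

```python
import numpy as np

def solve_ave(A, b, eps=1e-6, n_iter=50):
    """Solve the absolute value equation Ax - |x| = b by smoothing:
    replace |x| componentwise by sqrt(x^2 + eps^2) and apply Newton's
    method to F(x) = Ax - sqrt(x^2 + eps^2) - b = 0."""
    x = np.zeros(A.shape[0])
    for _ in range(n_iter):
        s = np.sqrt(x ** 2 + eps ** 2)
        F = A @ x - s - b
        J = A - np.diag(x / s)        # Jacobian of the smoothed map
        x = x - np.linalg.solve(J, F)
    return x

# Toy instance with known solution x* = [1, -2]:
A = np.array([[3.0, 0.0], [0.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)
print(solve_ave(A, b))               # approx. [1, -2]
```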

VA & Opt Webinar: Hieu Thao Nguyen (TU Delft)

Title: Projection Algorithms for Phase Retrieval with High Numerical Aperture

Speaker: Hieu Thao Nguyen (TU Delft)

Date and Time: August 19th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We develop the mathematical framework in which the class of projection algorithms can be applied to high numerical aperture (NA) phase retrieval. Within this framework we first analyze the basic steps of solving this problem by projection algorithms and establish the closed forms of all the relevant prox-operators. We then study the geometry of the high-NA phase retrieval problem and the obtained results are subsequently used to establish convergence criteria of projection algorithms. Making use of the vectorial point-spread-function (PSF) is, on the one hand, the key difference between this work and the literature of phase retrieval mathematics which mostly deals with the scalar PSF. The results of this paper, on the other hand, can be viewed as extensions of those concerning projection methods for low-NA phase retrieval. Importantly, the improved performance of projection methods over the other classes of phase retrieval algorithms in the low-NA setting now also becomes applicable to the high-NA case. This is demonstrated by the accompanying numerical results which show that all available solution approaches for high-NA phase retrieval are outperformed by projection methods.
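For contrast with the high-NA setting of the talk, here is the classical low-NA error-reduction (alternating projections) scheme under a scalar FFT model; in the talk this model is replaced by the vectorial PSF, and the projectors change accordingly.

```python
import numpy as np

def error_reduction(amp_meas, support, n_iter=200, seed=0):
    """Classical error-reduction sketch for scalar phase retrieval.
    amp_meas: measured Fourier magnitudes; support: boolean object support."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(amp_meas.shape) * support
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = amp_meas * np.exp(1j * np.angle(X))   # project onto measured magnitudes
        x = np.real(np.fft.ifft2(X)) * support    # project onto the support set
    return x
```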

VA & Opt Webinar: Xiaoqi Yang (Hong Kong PolyU)

Title: On error bound moduli for locally Lipschitz and regular functions

Speaker: Xiaoqi Yang (Hong Kong PolyU)

Date and Time: August 12th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We first introduce, for a closed and convex set, two classes of subsets: the near and far ends relative to a point, and give full characterizations of these end sets by virtue of the face theory of closed and convex sets. We provide some connections between closedness of the far (near) end and the relative continuity of the gauge (cogauge) of closed and convex sets. We illustrate that the distance from 0 to the outer limiting subdifferential of the support function of the subdifferential set, which is essentially the distance from 0 to the end set of the subdifferential set, is an upper estimate of the local error bound modulus. This upper estimate becomes tight for a convex function under some regularity conditions. We show that the distance from 0 to the outer limiting subdifferential set of a lower-C^1 function is equal to the local error bound modulus.
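In standard notation, the local error bound modulus referred to here is, for the sublevel set [f ≤ f(x̄)]:

```latex
\operatorname{Er} f(\bar{x}) \;=\; \liminf_{\substack{x \to \bar{x} \\ f(x) > f(\bar{x})}}
\frac{f(x) - f(\bar{x})}{d\big(x,\, [f \le f(\bar{x})]\big)},
```

so the talk's main result identifies this quantity, for lower-C^1 functions, with the distance from 0 to an outer limiting subdifferential set.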

References:
Li, M.H., Meng, K.W. and Yang, X.Q., On far and near ends of closed and convex sets, Journal of Convex Analysis 27 (2020) 407–421.
Li, M.H., Meng, K.W. and Yang, X.Q., On error bound moduli for locally Lipschitz and regular functions, Mathematical Programming 171 (2018) 463–487.

VA & Opt Webinar: Evgeni Nurminski (FEFU)

Title: Practical Projection with Applications

Speaker: Evgeni Nurminski (FEFU, Vladivostok)

Date and Time: August 5th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Projection of a point onto a given set is a very common computational operation in an endless number of algorithms and applications. However, with the exception of the simplest sets, it is by itself a nontrivial operation, often complicated by large dimension, computational degeneracy, nonuniqueness (even for orthogonal projection onto convex sets in certain situations), and so on. This talk aims to present some practical solutions, i.e. finite algorithms, for projection onto polyhedral sets, among them simplices, polytopes, polyhedra and finitely generated cones, with a certain discussion of “nonlinearities”, decomposition and parallel computations. We also consider the application of the projection operation in linear optimization, and an epi-projection algorithm for convex optimization.
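As one concrete example of the finite projection algorithms the talk is concerned with, the Euclidean projection onto the unit simplex admits a classical sort-based routine:

```python
import numpy as np

def project_simplex(y):
    """Finite (sort-based) algorithm for the Euclidean projection of y
    onto the unit simplex {x >= 0 : sum(x) = 1}: find the largest index
    rho for which the shifted entries stay positive, then shift and clip."""
    u = np.sort(y)[::-1]                     # sort descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(y) + 1) > css - 1.0)[0].max()
    theta = (css[rho] - 1.0) / (rho + 1.0)   # optimal shift
    return np.maximum(y - theta, 0.0)

print(project_simplex(np.array([0.8, 1.2, -0.3])))  # nonnegative, sums to 1
```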

VA & Opt Webinar: Akiko Takeda (University of Tokyo)

Title: Deterministic and Stochastic Gradient Methods for Non-Smooth Non-Convex Regularized Optimization

Speaker: Akiko Takeda (University of Tokyo)

Date and Time: July 29th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Our work focuses on deterministic and stochastic gradient methods for optimizing a smooth non-convex loss function with a non-smooth non-convex regularizer. Research on stochastic gradient methods for such problems is quite limited, and until recently no non-asymptotic convergence results had been reported. After presenting a deterministic approach, we give simple stochastic gradient algorithms, for finite-sum and general stochastic optimization problems, which have superior convergence complexities compared to the current state of the art. We also compare our algorithms’ performance in practice for empirical risk minimization.
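The deterministic workhorse behind methods of this kind is the proximal gradient template sketched below; the stochastic variants discussed in the talk replace the full gradient with mini-batch estimates. The toy regularizer (an ℓ0 penalty, whose prox is hard thresholding) is an illustrative choice, not one from the talk.

```python
import numpy as np

def prox_grad(grad_f, prox_r, x0, step=0.1, n_iter=1000):
    """Proximal gradient for min f(x) + r(x): f smooth (possibly
    non-convex), r non-smooth, accessed only through its prox."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = prox_r(x - step * grad_f(x), step)   # gradient step, then prox
    return x

# Toy example: r(x) = lam * ||x||_0, whose prox keeps entries with
# |v| > sqrt(2 * lam * t) and zeroes the rest (hard thresholding).
lam = 0.1
hard_thresh = lambda v, t: np.where(np.abs(v) > np.sqrt(2 * lam * t), v, 0.0)
grad = lambda x: x - np.array([1.0, 0.05])       # f(x) = 0.5 ||x - c||^2
print(prox_grad(grad, hard_thresh, np.zeros(2)))  # approx. [1, 0]
```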

This is based on joint work with Tianxiang Liu, Ting Kei Pong and Michael R. Metel.
