VA & Opt Webinar: Yalçın Kaya (UniSA)

Title: Constraint Splitting and Projection Methods for Optimal Control

Speaker: Yalçın Kaya (UniSA)

Date and Time: September 30th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We consider a class of optimal control problems with a constrained control variable. We split the ODE constraint and the control constraint of the problem so as to obtain two optimal control subproblems, each of which admits a simply expressed solution. Employing these simpler solutions as projections, we find numerical solutions to the original problem by applying four different projection-type methods: (i) Dykstra’s algorithm, (ii) the Douglas–Rachford (DR) method, (iii) the Aragón Artacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinkage-thresholding algorithm (FISTA). The problem we study is posed in infinite-dimensional Hilbert spaces. The behaviour of the DR and AAC algorithms is explored via numerical experiments with respect to their parameters. An error analysis is also carried out numerically for a particular instance of the problem for each of the algorithms. This is joint work with Heinz Bauschke and Regina Burachik.
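
For readers unfamiliar with these projection-type schemes, the sketch below illustrates the Douglas–Rachford iteration on a small finite-dimensional two-set feasibility problem, using only the projection operators onto the two sets. The box and hyperplane (and all parameter choices) are illustrative assumptions standing in for the two subproblems of the talk, not its infinite-dimensional optimal control setting.

```python
import numpy as np

# Sketch: Douglas-Rachford iteration for finding a point in the intersection of
# two convex sets, given only their projection operators.  The box and the
# hyperplane below are illustrative stand-ins, not the talk's subproblems.

def proj_box(x, lo=-1.0, hi=1.0):
    """Projection onto the box [lo, hi]^n (think: a simple control constraint)."""
    return np.clip(x, lo, hi)

def proj_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a.x = b}."""
    return x - (a @ x - b) / (a @ a) * a

def douglas_rachford(x0, proj_A, proj_B, iters=200):
    """Iterate x <- x + P_B(2 P_A(x) - x) - P_A(x); return the 'shadow' point P_A(x)."""
    x = x0.copy()
    for _ in range(iters):
        pa = proj_A(x)
        x = x + proj_B(2 * pa - x) - pa
    return proj_A(x)

rng = np.random.default_rng(0)
n = 5
a = rng.normal(size=n)
sol = douglas_rachford(rng.normal(size=n), proj_box,
                       lambda y: proj_hyperplane(y, a, 0.5))
print(sol, a @ sol)  # sol lies in the box and (approximately) on the hyperplane
```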

VA & Opt Webinar: Regina Burachik (UniSA)

Title: A Primal–Dual Penalty Method via Rounded Weighted-L1 Lagrangian Duality

Speaker: Regina Burachik (UniSA)

Date and Time: September 23rd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We propose a new duality scheme based on a sequence of smooth minorants of the weighted-ℓ1 penalty function, interpreted as a parametrized sequence of augmented Lagrangians, to solve nonconvex constrained optimization problems. For the induced sequence of dual problems, we establish strong asymptotic duality properties. Namely, we show that (i) the sequence of dual problems is convex and (ii) the dual values monotonically increase to the optimal primal value. We use these properties to devise a subgradient-based primal–dual method, and show that the generated primal sequence accumulates at a solution of the original problem. We illustrate the performance of the new method with three different types of test problems: a polynomial nonconvex problem, large-scale instances of the celebrated kissing number problem, and the Markov–Dubins problem. Our numerical experiments demonstrate that, when compared with the traditional implementation of a well-known smooth solver, our new method (using the same well-known solver in its subproblem) can find better quality solutions, i.e., “deeper” local minima, or solutions closer to the global minimum. Moreover, our method seems to be more time efficient, especially when the problem has a large number of constraints.

This is joint work with C. Y. Kaya (UniSA) and C. J. Price (University of Canterbury, Christchurch, New Zealand).
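
As a rough illustration of one ingredient of the scheme, the sketch below replaces a weighted-ℓ1 penalty by a Huber-style smooth minorant that is progressively sharpened on a toy equality-constrained problem. The particular minorant, the test problem and the update rules for the smoothing and weight parameters are assumptions chosen for illustration; the sketch does not implement the authors' dual problems or their subgradient-based primal–dual updates.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: a Huber-style smooth minorant of a weighted-l1 penalty, sharpened over a
# sequence of smooth subproblems.  The toy problem, the particular minorant and the
# update rules for mu and w are assumptions for illustration only.

def f(x):                              # objective
    return (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                              # equality constraint g(x) = 0
    return x[0] + x[1] - 1.0

def smooth_abs(t, mu):
    """Smooth minorant of |t|: quadratic on [-mu, mu], linear outside; -> |t| as mu -> 0."""
    return t * t / (2.0 * mu) if abs(t) <= mu else abs(t) - mu / 2.0

x, mu, w = np.zeros(2), 1.0, 10.0
for _ in range(8):
    x = minimize(lambda z: f(z) + w * smooth_abs(g(z), mu), x, method="BFGS").x
    mu *= 0.5                          # sharpen the minorant toward the exact l1 penalty
    w *= 2.0                           # increase the constraint weight
print(x, g(x))                         # x should approach the constrained minimizer (0.5, 0.5)
```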

VA & Opt Webinar: Christopher Price (University of Canterbury)

Title: A Direct Search Method for Constrained Optimization via the Rounded ℓ1 Penalty Function

Speaker: Christopher Price (University of Canterbury)

Date and Time: September 16th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: This talk looks at the constrained optimization problem in which the objective and constraints are Lipschitz continuous black-box functions. The approach uses a sequence of smoothed and offset ℓ1 penalty functions. The method generates an approximate minimizer of each penalty function, and then adjusts the offsets and other parameters. The smoothing is steadily reduced, ultimately revealing the exact ℓ1 penalty function. The method preferentially uses a discrete quasi-Newton step, backed up by a global direction search. Theoretical convergence results are given for the smooth and non-smooth cases under relevant conditions. Numerical results are presented for a variety of problems with non-smooth objective or constraint functions; these results show the method is effective in practice.
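
The sketch below gives a much-simplified flavour of this approach: a compass-style direct search applied to a smoothed ℓ1 penalty whose smoothing is steadily reduced. It omits the discrete quasi-Newton step, the offsets and the global direction search of the talk; the test problem, the Huber-style smoothing and the parameter schedule are illustrative assumptions.

```python
import numpy as np

# Sketch: a compass-style direct search on a smoothed l1 penalty whose smoothing is
# steadily reduced.  This omits the discrete quasi-Newton step, the offsets and the
# global direction search; problem data and schedules are illustrative assumptions.

def penalty(x, mu, w=10.0):
    """Objective plus a smoothed (Huber-style) l1 penalty on the constraint x0 + x1 = 1."""
    f = (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
    t = x[0] + x[1] - 1.0
    smooth_abs = t * t / (2.0 * mu) if abs(t) <= mu else abs(t) - mu / 2.0
    return f + w * smooth_abs

def compass_search(x, mu, step=1.0, tol=1e-6):
    """Poll the +/- coordinate directions; halve the step when no direction improves."""
    directions = np.vstack([np.eye(len(x)), -np.eye(len(x))])
    while step > tol:
        best, improved = penalty(x, mu), False
        for d in directions:
            trial = x + step * d
            if penalty(trial, mu) < best:
                x, best, improved = trial, penalty(trial, mu), True
        if not improved:
            step *= 0.5
    return x

x, mu = np.zeros(2), 1.0
for _ in range(6):
    x = compass_search(x, mu)
    mu *= 0.1                 # steadily reduce the smoothing toward the exact l1 penalty
print(x)                      # x should end up near the constrained minimizer (2, -1)
```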