VA & OPT: Lars Grüne

Title: The turnpike property: a classical feature of optimal control problems revisited

Speaker: Lars Grüne (University of Bayreuth)

Date and Time: Wed May 04 2022, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

The turnpike property describes a particular behavior of optimal control problems that was first observed by Ramsey in the 1920s and by von Neumann in the 1930s. Since then it has found widespread attention in mathematical economics and control theory alike. In recent years it received renewed interest, on the one hand in optimization with partial differential equations and on the other hand in model predictive control (MPC), one of the most popular optimization-based control schemes in practice. In this talk we will first give a general introduction to and a brief history of the turnpike property, before we look at it from a systems and control theoretic point of view. In particular, we will clarify its relation to dissipativity, detectability, and sensitivity properties of optimal control problems in both finite and infinite dimensions. In the final part of the talk we will explain why the turnpike property is important for analyzing the performance of MPC.
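
As a minimal numerical illustration (a sketch added for this page, not material from the talk), the turnpike can be seen in a scalar linear-quadratic problem solved by a backward Riccati recursion: the optimal trajectory quickly approaches the optimal steady state x = 0 and stays there for most of the horizon.

```python
import numpy as np

# Scalar linear-quadratic problem: x_{k+1} = A x_k + B u_k,
# cost sum_k (Q x_k^2 + R u_k^2), horizon N, no terminal cost.
# The optimal steady state is x = 0, so the turnpike is at the origin.
A, B, Q, R = 1.0, 1.0, 1.0, 1.0
N, x0 = 20, 5.0

# Backward Riccati recursion for the finite-horizon feedback gains
P = 0.0  # terminal weight (no terminal cost)
gains = []
for _ in range(N):
    K = (B * P * A) / (R + B * P * B)
    gains.append(K)
    P = Q + A * P * A - (A * P * B) ** 2 / (R + B * P * B)
gains.reverse()  # gains[k] is the gain applied at time k

# Simulate the optimal trajectory under u_k = -K_k x_k
x = [x0]
for K in gains:
    x.append((A - B * K) * x[-1])

# Turnpike behaviour: the state decays rapidly toward 0 and stays there
print([round(v, 4) for v in x])
```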

VA & OPT: Andreas Löhne

Title: Approximating convex bodies using multiple objective optimization

Speaker: Andreas Löhne (Friedrich Schiller University Jena)

Date and Time: Wed Apr 27 2022, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

The problem of computing polyhedral outer and inner approximations of a convex body can be reformulated as the problem of approximately solving a convex multiple objective optimization problem. This extends a previous result showing that multiple objective linear programming is equivalent to computing a $V$-representation of the projection of an $H$-polyhedron. These results are also discussed with respect to duality, solution methods and error bounds.
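
As a toy illustration of polyhedral inner and outer approximation of a convex body (here the unit disk; this sketch, added for this page, does not use the multiobjective reformulation discussed in the talk):

```python
import numpy as np

def shoelace_area(pts):
    """Area of a polygon given its vertices in counterclockwise order."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

def disk_approximations(n):
    """Inscribed and circumscribed n-gons of the unit disk built from
    n equally spaced outer normal directions."""
    theta = 2 * np.pi * np.arange(n) / n
    # Inner approximation: convex hull of n boundary points (regular n-gon)
    boundary = np.column_stack([np.cos(theta), np.sin(theta)])
    inner = shoelace_area(boundary)
    # Outer approximation: intersection of the supporting half-planes
    # {x : <d, x> <= 1}; adjacent tangent lines meet at radius 1/cos(pi/n)
    phi = theta + np.pi / n
    outer_pts = np.column_stack([np.cos(phi), np.sin(phi)]) / np.cos(np.pi / n)
    outer = shoelace_area(outer_pts)
    return inner, outer

inner, outer = disk_approximations(64)
print(inner, np.pi, outer)  # the disk's area is sandwiched between the two
```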

VA & OPT Webinar: Héctor Ramírez

Title: Extensions of the Constant Rank Constraint Qualification to Nonlinear Conic Programming

Speaker: Héctor Ramírez (Universidad de Chile)

Date and Time: Wed Apr 13 2022, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

We present new constraint qualification conditions for nonlinear conic programming that extend some of the constant rank-type conditions from nonlinear programming. As an application of these conditions, we provide a unified global convergence proof of a class of algorithms to stationary points without assuming either uniqueness of the Lagrange multiplier or boundedness of the Lagrange multiplier set. This class of algorithms includes, for instance, general forms of augmented Lagrangian, sequential quadratic programming, and interior point methods. We also compare these new conditions with some of the existing ones, including the nondegeneracy condition, Robinson’s constraint qualification, and the metric subregularity constraint qualification. Finally, we propose a more general and geometric approach for defining a new extension of this condition to the conic context. The main advantage of the latter is that we are able to recast the strong second-order properties of the constant rank condition in a conic context. In particular, we obtain a second-order necessary optimality condition that is stronger than the classical one obtained under Robinson’s constraint qualification, in the sense that it holds for every Lagrange multiplier, even though our condition is independent of Robinson’s condition.

VA & OPT Webinar: Sorin-Mihai Grad

Title: Extending the proximal point algorithm beyond convexity

Speaker: Sorin-Mihai Grad (ENSTA Paris)

Date and Time: Wed Apr 06 2022, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

Introduced in the 1970s by Martinet for minimizing convex functions and extended shortly afterwards by Rockafellar towards monotone inclusion problems, the proximal point algorithm turned out to be a viable computational method for solving various classes of (structured) optimization problems even beyond the convex framework. In this talk we discuss some extensions of proximal point type algorithms beyond convexity. First we propose a relaxed-inertial proximal point type algorithm for solving optimization problems consisting of minimizing strongly quasiconvex functions whose variables lie in finite-dimensional linear subspaces; this can be extended to equilibrium problems involving such functions. Then we briefly discuss another generalized convexity notion for functions, which we call prox-convexity, for which the proximity operator is single-valued and firmly nonexpansive, and see that the standard proximal point algorithm and Malitsky’s Golden Ratio Algorithm (originally proposed for solving convex mixed variational inequalities) remain convergent when the involved functions are prox-convex. The talk contains joint work with Felipe Lara and Raúl Tintaya Marcavillaca (both from University of Tarapacá).
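
For reference, the classical convex case the talk starts from can be sketched in a few lines: the proximal point iteration x_{k+1} = prox_{λf}(x_k), here applied to f(x) = ‖x‖₁, whose proximal operator is soft-thresholding (a minimal sketch added for this page; the quasiconvex and prox-convex extensions of the talk are not shown).

```python
import numpy as np

def prox_abs(x, lam):
    """Proximal operator of f(t) = |t| applied componentwise:
    the soft-thresholding map."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(prox, x0, lam=0.5, iters=20):
    """Classical proximal point iteration x_{k+1} = prox_{lam f}(x_k)."""
    x = x0
    for _ in range(iters):
        x = prox(x, lam)
    return x

x_star = proximal_point(prox_abs, x0=np.array([3.0, -2.0]))
print(x_star)  # converges to [0, 0], the minimizer of f(x) = |x|
```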

VA & OPT Webinar: Pham Ky Anh

Title: Regularized dynamical systems associated with structured monotone inclusions

Speaker: Pham Ky Anh (Vietnam National University)

Date and Time: Wed Mar 30 2022, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

In this talk, we consider two dynamical systems associated with additively structured monotone inclusions involving a multi-valued maximally monotone operator A and a single-valued operator B in real Hilbert spaces. We establish strong convergence of the regularized forward-backward and regularized forward-backward-forward dynamics to an “optimal” solution of the original inclusion under a weak assumption on the single-valued operator B. Convergence estimates are obtained if the composite operator A + B is maximally monotone and strongly (pseudo)monotone. Time-discretization of the corresponding continuous dynamics provides an iterative regularization forward-backward method or an iterative regularization forward-backward-forward method with relaxation parameters. Some simple numerical examples are given to illustrate the agreement between analytical and numerical results as well as the performance of the proposed algorithms.
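
For orientation, the classical (unregularized) forward-backward scheme underlying the talk can be sketched as follows: for 0 ∈ Ax + Bx with B single-valued, iterate x⁺ = J_{λA}(x − λBx), where J_{λA} is the resolvent of A (a hedged sketch added for this page; the talk's regularized dynamics add a vanishing regularization term not shown here).

```python
import numpy as np

def soft_threshold(x, lam):
    """Resolvent (prox) of the maximally monotone operator A = ∂‖·‖₁."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def forward_backward(B, resolvent, x0, step, iters=200):
    """Forward-backward iteration x⁺ = J_{λA}(x − λ B x) for 0 ∈ Ax + Bx,
    with B single-valued and J_{λA} given by `resolvent`."""
    x = x0
    for _ in range(iters):
        x = resolvent(x - step * B(x), step)
    return x

# Toy inclusion 0 ∈ ∂‖x‖₁ + (x − c): the solution soft-thresholds c at level 1
c = np.array([3.0, 0.5, -2.0])
x = forward_backward(lambda x: x - c, soft_threshold, np.zeros(3), step=0.5)
print(x)  # ≈ [2.0, 0.0, -1.0]
```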

VA & OPT Webinar: Shawn Wang

Title: Roots of the identity operator and proximal mappings: (classical and phantom) cycles and gap vectors

Speaker: Shawn Wang (The University of British Columbia)

Date and Time: Wed Mar 23 2022, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

Recently, Simons provided a lemma for a support function of a closed convex set in a general Hilbert space and used it to prove the geometry conjecture on cycles of projections. We extend Simons’s lemma to closed convex functions, show its connections to Attouch-Théra duality, and use it to characterize (classical and phantom) cycles and gap vectors of proximal mappings. Joint work with H. Bauschke.
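
A classical cycle and gap vector can be computed in a toy example (a sketch added for this page; the talk treats general proximal mappings, of which projections are a special case): alternating projections between two disjoint closed convex sets converge to a pair of cycle points, and their difference is the gap vector.

```python
import numpy as np

def proj_ball(center, radius):
    """Projection onto a closed Euclidean ball."""
    center = np.asarray(center, dtype=float)
    def P(x):
        d = x - center
        n = np.linalg.norm(d)
        return x.copy() if n <= radius else center + radius * d / n
    return P

# Two disjoint unit balls: alternating projections converge to a cycle
P1 = proj_ball([0.0, 0.0], 1.0)
P2 = proj_ball([4.0, 0.0], 1.0)

x = np.array([1.0, 5.0])
for _ in range(200):
    x = P1(P2(x))  # one full cycle of the two projections

gap = P2(x) - x  # difference of the two cycle points
print(x, gap)    # cycle point ≈ [1, 0], gap vector ≈ [2, 0]
```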

VA & Opt Webinar: Janosch Rieger

Title: Generalized Gearhart-Koshy acceleration for the Kaczmarz method

Speaker: Janosch Rieger (Monash University)

Date and Time: Wed Mar 16 2022, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

The Kaczmarz method is an iterative numerical method for solving large and sparse rectangular systems of linear equations. Gearhart and Koshy have developed an acceleration technique for the Kaczmarz method for homogeneous linear systems that minimises the distance to the desired solution in the direction of a full Kaczmarz step. Matthew Tam has recently generalised this acceleration technique to inhomogeneous linear systems.

In this talk, I will develop this technique into an acceleration scheme that minimises the Euclidean norm error over an affine subspace spanned by a number of previous iterates and one additional cycle of the Kaczmarz method. The key challenge is to find a formulation in which all parameters of the least-squares problem defining the unique minimizer are known, and to solve this problem efficiently.
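
For readers unfamiliar with the base method, here is the plain cyclic Kaczmarz iteration (a sketch added for this page, without the Gearhart-Koshy-type acceleration the talk develops): each step projects the current iterate onto the hyperplane of one equation.

```python
import numpy as np

def kaczmarz(A, b, x0=None, sweeps=100):
    """Classical cyclic Kaczmarz method: project the iterate onto the
    hyperplane {x : <a_i, x> = b_i} of each equation in turn."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(sweeps):
        for i in range(m):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(kaczmarz(A, b))  # ≈ the exact solution [0.8, 1.4]
```

For consistent systems the iterates converge to a solution; the acceleration discussed in the talk reuses previous iterates to minimise the error over an affine subspace rather than along a single Kaczmarz step.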

VA & Opt Webinar: David Yost

Title: Minimising the number of faces of a class of polytopes

Speaker: David Yost (Federation University Australia)

Date and Time: Wed Dec 1, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

Polytopes are the natural domains of many optimisation problems. We consider a “higher order” optimisation problem, whose domain is a class of polytopes, asking what the minimum number of faces (of a given dimension) is for this class, and which polytopes are the minimisers. Generally we consider the class of d-dimensional polytopes with V vertices, for fixed V and d. The corresponding maximisation problem was solved decades ago, but serious progress on the minimisation question has only been made in recent years.

VA & Opt Webinar: Fred Roosta-Khorasani

Title: A Newton-MR Algorithm with Complexity Guarantee for Non-Convex Problems

Speaker: Fred Roosta-Khorasani (The University of Queensland)

Date and Time: Wed Dec 1, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

Classically, the conjugate gradient (CG) method has been the dominant solver in most inexact Newton-type methods for unconstrained optimization. In this talk, we consider replacing CG with the minimum residual method (MINRES), which is often used for symmetric but possibly indefinite linear systems. We show that MINRES has an inherent ability to detect negative-curvature directions. Equipped with this advantage, we discuss algorithms, under the general name of Newton-MR, which can be used for optimization of general non-convex objectives, and that come with favourable complexity guarantees. We also give numerical examples demonstrating the performance of these methods for large-scale non-convex machine learning problems.
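
The basic structure of an inexact Newton method with MINRES as the inner solver can be sketched as follows (a simplified convex-case sketch added for this page: the Newton-MR methods of the talk additionally exploit MINRES's negative-curvature detection and use line searches to handle non-convex objectives).

```python
import numpy as np
from scipy.sparse.linalg import minres

def newton_mr_sketch(grad, hess, x0, iters=20):
    """Inexact Newton iteration using MINRES for the Newton system
    H d = -g; MINRES only requires H to be symmetric, not definite."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = grad(x)
        d, info = minres(hess(x), -g)  # inexact solve of H d = -g
        x = x + d
    return x

# Convex test objective f(x) = sum(x_i^4)/4 + ||x||^2/2, minimized at 0
grad = lambda x: x ** 3 + x
hess = lambda x: np.diag(3 * x ** 2 + 1)
x = newton_mr_sketch(grad, hess, np.array([2.0, -1.5]))
print(x)  # ≈ [0, 0]
```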

VA & Opt Webinar: Majid Abbasov

Title: Converting exhausters and coexhausters

Speaker: Majid Abbasov (Saint-Petersburg State University)

Date and Time: Wed Nov 17, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

Exhausters and coexhausters are notions of constructive nonsmooth analysis which are used to study extremal properties of functions. An upper exhauster (coexhauster) is used to approximate a considered function in the neighborhood of a point in the form of a minmax of linear (affine) functions. A lower exhauster (coexhauster) is used to represent the approximation in the form of a maxmin of linear (affine) functions. Conditions for a minimum are expressed most simply by means of upper exhausters and coexhausters, while conditions for a maximum are described in terms of lower exhausters and coexhausters. Thus the problem arises of obtaining an upper exhauster or coexhauster when the lower one is given, and vice versa. In the talk I will consider this problem and present a new method for performing such a conversion. All needed auxiliary information will also be provided.
