VA & Opt Webinar: Chayne Planiden (UoW)

Title: New Gradient and Hessian Approximation Methods for Derivative-free Optimisation

Speaker: Chayne Planiden (UoW)

Date and Time: November 4th, 2020, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: In general, derivative-free optimisation (DFO) uses approximations of first- and second-order information in minimisation algorithms. DFO appears in direct-search, model-based, trust-region and other mainstream optimisation techniques, and has gained popularity in recent years. This work discusses previous results on two particular uses of DFO, the proximal bundle method and the VU-algorithm, and then presents improvements made this year to the gradient and Hessian approximation techniques. These improvements can be inserted into any routine that requires such estimations.
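
For orientation, the classical baseline that such approximation techniques refine is the finite-difference estimate, which recovers first- and second-order information from function values alone. Below is a minimal sketch (central differences; the step sizes and the test function are illustrative assumptions, not the speaker's new method):

```python
import numpy as np

def fd_gradient(f, x, h=1e-6):
    """Central-difference gradient estimate: needs only function values."""
    n = x.size
    g = np.zeros(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def fd_hessian(f, x, h=1e-4):
    """Symmetric central-difference Hessian estimate."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        ei = np.zeros(n); ei[i] = h
        for j in range(i, n):
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
            H[j, i] = H[i, j]
    return H

# Example: f(x) = x0^2 + 3*x0*x1 at x = (1, 2)
f = lambda x: x[0]**2 + 3 * x[0] * x[1]
x = np.array([1.0, 2.0])
print(fd_gradient(f, x))   # approx [8, 3]
print(fd_hessian(f, x))    # approx [[2, 3], [3, 0]]
```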

VA & Opt Webinar: Radek Cibulka (University of West Bohemia)

Title: Continuous selections for inverse mappings in Banach spaces

Speaker: Radek Cibulka (University of West Bohemia)

Date and Time: October 28th, 2020, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: Influenced by recent work of A. V. Arutyunov, A. F. Izmailov, and S. E. Zhukovskiy, we establish a general Ioffe-type criterion guaranteeing the existence of a continuous and calm selection for the inverse of a single-valued uniformly continuous mapping between Banach spaces with a closed domain. We show that the general statement yields elegant proofs following the same pattern as in the case of the usual openness with a linear rate, by considering mappings instead of points. As in the case of Ioffe's criterion for linear openness around the reference point, this allows us to avoid iteration, that is, the construction of a sequence of continuous functions whose limit is the desired continuous selection for the inverse mapping; we illustrate this with a proof of the Bartle-Graves theorem. We then formulate sufficient conditions based on approximations given by positively homogeneous mappings and bunches of linear operators. The talk is based on a joint work with Marián Fabian.
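
For reference, the usual notion of openness with a linear rate invoked in the abstract can be stated as follows (a standard reminder, not part of the talk's new results):

```latex
% f : X -> Y is open with linear rate c > 0 around \bar{x} if there exist
% a neighbourhood U of \bar{x} and r > 0 such that
\[
  B\bigl(f(x),\, c\,t\bigr) \;\subset\; f\bigl(B(x, t)\bigr)
  \qquad \text{for all } x \in U,\ t \in (0, r),
\]
% where B(x, t) denotes the closed ball of radius t centred at x.
```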

VA & Opt Webinar: Wilfredo Sosa (UCB)

Title: On diametrically maximal sets, maximal premonotone maps and premonotone bifunctions

Speaker: Wilfredo Sosa (UCB)

Date and Time: October 21st, 2020, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: First, we study diametrically maximal sets in Euclidean space (those which are not properly contained in a set with the same diameter), establishing their main properties. Then, we use these sets to exhibit an explicit family of maximal premonotone operators. We also establish some relevant properties of maximal premonotone operators, such as their local boundedness. Finally, we introduce the notion of premonotone bifunctions, presenting a canonical relation between premonotone operators and bifunctions that extends the well-known one holding in the monotone case.
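
In symbols, the defining property quoted in the abstract reads (a restatement for the Euclidean setting):

```latex
\[
  A \subset \mathbb{R}^n \ \text{is diametrically maximal}
  \quad\Longleftrightarrow\quad
  \operatorname{diam}\bigl(A \cup \{x\}\bigr) > \operatorname{diam}(A)
  \ \ \text{for every } x \in \mathbb{R}^n \setminus A.
\]
```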

VA & Opt Webinar: Björn Rüffer (UoN)

Title: A Lyapunov perspective to projection algorithms

Speaker: Björn Rüffer (UoN)

Date and Time: October 14th, 2020, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: The operator-theoretic point of view has been very successful in the study of iterative splitting methods under a unified framework. These algorithms include the Method of Alternating Projections as well as the Douglas-Rachford Algorithm, which is dual to the Alternating Direction Method of Multipliers, and they admit nice geometric interpretations. While convergence results for these algorithms have been known for decades when the underlying problems are convex, for non-convex problems progress on convergence results accelerated significantly once arguments based on Lyapunov functions were employed. In this talk we give an overview of the underlying techniques in Lyapunov's direct method and look at the convergence of iterative projection methods through this lens.
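
As a concrete illustration of the Lyapunov viewpoint, the sketch below runs the Method of Alternating Projections on two convex sets in the plane and monitors a Lyapunov-function candidate V, the sum of squared distances to the two sets, which is non-increasing along the iterates in this convex setting (the sets and starting point are illustrative assumptions):

```python
import numpy as np

c, r = np.array([0.0, 0.5]), 1.0            # ball B = {z : |z - c| <= r}

def proj_line(z):                           # A = {(x, y) : y = 0}
    return np.array([z[0], 0.0])

def proj_ball(z):                           # projection onto the ball B
    d = z - c
    return c + d / max(1.0, np.linalg.norm(d) / r)

def V(z):                                   # Lyapunov-function candidate
    return (np.linalg.norm(z - proj_line(z))**2
            + np.linalg.norm(z - proj_ball(z))**2)

z = np.array([3.0, 4.0])
for k in range(20):
    z = proj_line(proj_ball(z))             # one MAP sweep: z_{k+1} = P_A P_B z_k
    print(k, z, V(z))                       # V is non-increasing for convex sets
```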

VA & Opt Webinar: Reinier Diaz Millan (Deakin)

Title: An algorithm for pseudo-monotone operators with application to rational approximation

Speaker: Reinier Diaz Millan (Deakin)

Date and Time: October 7th, 2020, 17:00 AEDT (Register here for remote connection via Zoom)

Abstract: The motivation for this paper is the development of an optimisation method for Chebyshev rational and generalised rational approximation problems, where the approximations are constructed as ratios of linear forms (linear combinations of basis functions). The coefficients of the linear forms are subject to optimisation and the basis functions are continuous functions. It is known that the objective functions in generalised rational approximation problems are quasi-convex. In this paper we also prove a stronger result: the objective functions are pseudo-convex. We then develop numerical methods that are efficient for a wide range of pseudo-convex functions and test them on generalised rational approximation problems.
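
To make the setting concrete, the sketch below evaluates the Chebyshev-type objective for a generalised rational approximant, a ratio of two linear forms in the coefficients; the monomial basis, grid, and target function are illustrative assumptions, and the talk's algorithm itself is not reproduced:

```python
import numpy as np

t = np.linspace(-1.0, 1.0, 201)             # discretisation grid
f = np.abs(t)                               # function to approximate (example)

def objective(a, b):
    """max_i |f(t_i) - p(t_i)/q(t_i)|, with p and q linear in a and b."""
    p = sum(a_j * t**j for j, a_j in enumerate(a))   # numerator linear form
    q = sum(b_j * t**j for j, b_j in enumerate(b))   # denominator linear form
    if np.any(q <= 0):                      # keep the denominator positive
        return np.inf
    return np.max(np.abs(f - p / q))

print(objective(np.array([0.1, 0.0, 1.0]), np.array([1.0, 0.0, 0.5])))
```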

VA & Opt Webinar: Yalçın Kaya (UniSA)

Title: Constraint Splitting and Projection Methods for Optimal Control

Speaker: Yalçın Kaya (UniSA)

Date and Time: September 30th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We consider a class of optimal control problems with a constrained control variable. We split the ODE constraint and the control constraint of the problem so as to obtain two optimal control subproblems, each of whose solutions can be written down simply. Employing these simpler solutions as projections, we find numerical solutions to the original problem by applying four different projection-type methods: (i) Dykstra’s algorithm, (ii) the Douglas–Rachford (DR) method, (iii) the Aragón Artacho–Campoy (AAC) algorithm and (iv) the fast iterative shrinkage-thresholding algorithm (FISTA). The problem we study is posed in infinite-dimensional Hilbert spaces. The behaviour of the DR and AAC algorithms is explored via numerical experiments with respect to their parameters. An error analysis is also carried out numerically for a particular instance of the problem for each of the algorithms. This is joint work with Heinz Bauschke and Regina Burachik.
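
Of the four methods, Dykstra's algorithm is perhaps the least widely known; the sketch below shows its standard two-set form in the plane, with a halfspace and a ball standing in, purely for illustration, for the ODE and control constraints of the talk:

```python
import numpy as np

def proj_A(z):                              # A = {(x, y) : x + y <= 1}
    v = np.array([1.0, 1.0])
    return z - max(0.0, v @ z - 1.0) * v / (v @ v)

def proj_B(z):                              # B = {z : |z| <= 2}
    n = np.linalg.norm(z)
    return z if n <= 2.0 else 2.0 * z / n

x0 = np.array([3.0, 3.0])
x, p, q = x0.copy(), np.zeros(2), np.zeros(2)
for _ in range(100):
    y = proj_A(x + p)                       # project corrected point onto A
    p = x + p - y                           # update correction for A
    x = proj_B(y + q)                       # project corrected point onto B
    q = y + q - x                           # update correction for B
print(x)                                    # approx. projection of x0 onto A ∩ B
```

Unlike plain alternating projections, Dykstra's correction vectors p and q make the iterates converge to the projection of the starting point onto the intersection, not merely to some point of it.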

VA & Opt Webinar: Regina Burachik (UniSA)

Title: A Primal–Dual Penalty Method via Rounded Weighted-L1 Lagrangian Duality

Speaker: Regina Burachik (UniSA)

Date and Time: September 23rd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We propose a new duality scheme, based on a sequence of smooth minorants of the weighted-ℓ1 penalty function interpreted as a parametrized sequence of augmented Lagrangians, to solve nonconvex constrained optimization problems. For the induced sequence of dual problems, we establish strong asymptotic duality properties. Namely, we show that (i) the sequence of dual problems is convex and (ii) the dual values monotonically increase to the optimal primal value. We use these properties to devise a subgradient-based primal–dual method, and show that the generated primal sequence accumulates at a solution of the original problem. We illustrate the performance of the new method on three different types of test problems: a polynomial nonconvex problem, large-scale instances of the celebrated kissing number problem, and the Markov–Dubins problem. Our numerical experiments demonstrate that, when compared with the traditional implementation of a well-known smooth solver, our new method (using the same solver for its subproblems) can find better-quality solutions, i.e., “deeper” local minima, or solutions closer to the global minimum. Moreover, our method appears to be more time-efficient, especially when the problem has a large number of constraints.

This is a joint work with C. Y. Kaya (UniSA) and C. J. Price (University of Canterbury, Christchurch, New Zealand).
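
For intuition, one standard family of smooth minorants of a weighted-ℓ1 penalty is the hyperbolic smoothing below; this is illustrative only, and the talk's "rounded" construction may differ in detail:

```latex
% A smooth minorant of the weighted-l1 penalty, for weights w_i >= 0:
\[
  P_\varepsilon(x) \;=\; \sum_{i} w_i \Bigl( \sqrt{x_i^2 + \varepsilon^2} - \varepsilon \Bigr)
  \;\le\; \sum_{i} w_i \,\lvert x_i \rvert, \qquad \varepsilon > 0,
\]
% with P_\varepsilon smooth and increasing to the weighted-l1 penalty
% as \varepsilon decreases to 0, matching the monotone behaviour of the
% dual values described in the abstract.
```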

VA & Opt Webinar: Christopher Price (University of Canterbury)

Title: A direct search method for constrained optimization via the rounded ℓ1 penalty function

Speaker: Christopher Price (University of Canterbury)

Date and Time: September 16th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: This talk looks at the constrained optimization problem in which the objective and constraints are Lipschitz-continuous black-box functions. The approach uses a sequence of smoothed and offset ℓ1 penalty functions. The method generates an approximate minimizer of each penalty function, and then adjusts the offsets and other parameters. The smoothing is steadily reduced, ultimately revealing the ℓ1 exact penalty function. The method preferentially uses a discrete quasi-Newton step, backed up by a global direction search. Theoretical convergence results are given for the smooth and non-smooth cases subject to relevant conditions. Numerical results are presented for a variety of problems with non-smooth objective or constraint functions; these results show the method is effective in practice.
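
For context, the ℓ1 exact penalty function that the smoothing ultimately reveals has the classical form sketched below; the smoothed, offset variants that drive the method are part of the talk and are not reproduced here (the test problem is an illustrative assumption):

```python
import numpy as np

def l1_penalty(f, gs, mu):
    """Return x -> f(x) + mu * total violation of the constraints g_i(x) <= 0."""
    return lambda x: f(x) + mu * sum(max(0.0, g(x)) for g in gs)

# Example black-box problem: minimise f subject to g(x) <= 0.
f = lambda x: (x[0] - 2.0)**2 + (x[1] - 1.0)**2
g = lambda x: x[0]**2 + x[1]**2 - 1.0       # unit-disk constraint
phi = l1_penalty(f, [g], mu=10.0)
print(phi(np.array([0.5, 0.5])))            # feasible point: no penalty added
```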

VA & Opt Webinar: Christiane Tammer (MLU)

Title: Subdifferentials and Lipschitz properties of translation invariant functionals and applications

Speaker: Christiane Tammer (MLU)

Date and Time: September 9th, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk, we deal with translation invariant functionals and their application to deriving necessary conditions for minimal solutions of constrained and unconstrained optimization problems with respect to general domination sets.

Translation invariant functionals are a natural and powerful tool for the separation of not necessarily convex sets and scalarization. There are many applications of translation invariant functionals in nonlinear functional analysis, vector optimization, set optimization, optimization under uncertainty, mathematical finance as well as consumer and production theory.

The primary objective of this talk is to establish formulas for the basic and singular subdifferentials of translation invariant functionals and to study important properties of these nonlinear functionals, such as monotonicity, the PSNC property and Lipschitz behavior, without assuming that the shifted set involved in the definition of the functional is convex. The second objective is to propose a new way to scalarize a set-valued optimization problem, which allows us to study necessary conditions for minimal solutions in a very broad setting in which the domination set is not necessarily convex, solid or conical. The third objective is to apply our results to vector-valued approximation problems.
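
A prototypical translation invariant functional is the Gerstewitz (Tammer-Weidner) scalarising functional, recalled below for a set A in Y and a direction k in Y; this is a standard reminder, and the talk's setting may be more general:

```latex
\[
  \varphi_{A,k}(y) \;=\; \inf\{\, t \in \mathbb{R} \;:\; y \in t\,k - A \,\},
\]
% which is translation invariant along k:
\[
  \varphi_{A,k}(y + s\,k) \;=\; \varphi_{A,k}(y) + s
  \qquad \text{for all } s \in \mathbb{R}.
\]
```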

This is a joint work with T.Q. Bao (Northern Michigan University).

VA & Opt Webinar: Gerd Wachsmuth (BTU)

Title: New Constraint Qualifications for Optimization Problems in Banach Spaces based on Asymptotic KKT Conditions

Speaker: Gerd Wachsmuth (BTU)

Date and Time: September 2nd, 2020, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization theory in Banach spaces suffers from a lack of available constraint qualifications. Not only do very few constraint qualifications exist, but they are also often violated even in simple applications. This is in stark contrast to finite-dimensional nonlinear programs, where a large number of constraint qualifications is known. Since these constraint qualifications are usually defined using the set of active inequality constraints, it is difficult to extend them to the infinite-dimensional setting. One exception is a recently introduced sequential constraint qualification based on asymptotic KKT conditions. This paper shows that this so-called asymptotic KKT regularity admits suitable extensions to the Banach space setting, yielding new constraint qualifications. The relation of these new constraint qualifications to existing ones is discussed in detail. Their usefulness is also demonstrated by several examples as well as an algorithmic application to the class of augmented Lagrangian methods.
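
For readers anchored in the finite-dimensional case, one common formulation of the asymptotic (sequential) KKT conditions is recalled below; the Banach-space extensions are the subject of the talk:

```latex
% x* is an asymptotic KKT point of  min f(x)  s.t.  g_i(x) <= 0, i = 1,...,m,
% if there exist x^k -> x* and multipliers lambda^k >= 0 such that
\[
  \nabla f(x^k) + \sum_{i=1}^{m} \lambda_i^k \,\nabla g_i(x^k)
  \;\longrightarrow\; 0,
  \qquad
  \min\{\, -g_i(x^k),\ \lambda_i^k \,\} \;\longrightarrow\; 0
  \quad (i = 1, \dots, m).
\]
```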

This is a joint work with Christian Kanzow (Würzburg) and Patrick Mehlitz (Cottbus).
