VA & Opt Webinar: Dominikus Noll

Title: Alternating projections with applications to Gerchberg-Saxton error reduction

Speaker: Dominikus Noll (Institut de Mathématiques de Toulouse)

Date and Time: Wed Oct 6, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

We discuss alternating projections between closed non-convex sets A, B in R^n and obtain criteria for convergence when A, B do not intersect transversally. The infeasible case, A∩B=∅, is also addressed, and here we expect convergence toward a gap between A and B. For sub-analytic sets A, B, sub-linear convergence rates depending on the Łojasiewicz exponent of the distance function can be computed. We then present applications to the Gerchberg-Saxton error reduction algorithm, to Cadzow’s denoising algorithm, and to instances of the Gaussian EM-algorithm.
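
To make the basic iteration concrete, here is a minimal numerical sketch of alternating projections between a non-convex set A (a circle) and a convex set B (a line) that do not intersect, so the iterates approach the gap between the sets. The sets, starting point and tolerance are illustrative choices, not taken from the talk.

# Minimal sketch of alternating projections x_{k+1} = P_A(P_B(x_k)).
# Illustrative only: A is a circle (non-convex), B is a line, and A and B
# are disjoint, so the iterates approach a gap between the two sets.
import numpy as np

def proj_circle(x, center, radius):
    """Project onto the circle {z : ||z - center|| = radius} (non-convex set)."""
    d = x - center
    n = np.linalg.norm(d)
    if n == 0.0:  # degenerate case: any boundary point is a nearest point
        return center + np.array([radius, 0.0])
    return center + radius * d / n

def proj_line(x, point, direction):
    """Project onto the line {point + t * direction : t real}."""
    direction = direction / np.linalg.norm(direction)
    return point + np.dot(x - point, direction) * direction

x = np.array([3.0, 1.0])
for k in range(200):
    y = proj_line(x, np.array([0.0, 2.0]), np.array([1.0, 0.0]))  # P_B
    x_next = proj_circle(y, np.zeros(2), 1.0)                     # P_A
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next
print(k, x, y, np.linalg.norm(x - y))  # ||x - y|| estimates the gap when A and B are disjoint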

ACEMS OCTOBER LECTURES

Dear MOCAO members,

We would like to invite you to the ACEMS October lectures:

Optimal decision making: a tribute to female ingenuity
Professor Kate Smith-Miles & Alison Harcourt AO
Tuesday 12 October | 12pm-1pm AEDT

and

Statistical Methodology Development & Software Dissemination
Distinguished Professor Matt Wand, UTS
Thursday 28 October | 12pm-1pm AEDT

For more information and registration, please refer to

https://mailchi.mp/9fad69599771/acems-public-lectures-may-5541985?e=cf88c86980

VA & Opt Webinar: Quoc Tran-Dinh

Title: Randomized Douglas-Rachford Splitting Algorithms for Federated Composite Optimization

Speaker: Quoc Tran-Dinh (University of North Carolina)

Date and Time: Wed Sep 29, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

In this talk, we present two randomized Douglas-Rachford splitting algorithms to solve a class of composite nonconvex finite-sum optimization problems arising from federated learning. Our algorithms rely on a combination of three main techniques: the Douglas-Rachford splitting scheme, a randomized block-coordinate technique, and an asynchronous strategy. We show that our algorithms achieve the best-known communication complexity bounds under standard assumptions in the nonconvex setting, while allowing local models to be updated inexactly with only a subset of users in each round and handling nonsmooth convex regularizers. Our second algorithm can be implemented in an asynchronous mode using a general probabilistic model to capture different computational architectures. We illustrate our algorithms with many numerical examples and show that the new algorithms perform promisingly compared to common existing methods.

This talk is based on joint work with Nhan Pham (UNC), Lam M. Nguyen (IBM), and Dzung Phan (IBM).
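
As background for the splitting scheme underlying the talk, the sketch below shows the classical deterministic, centralized Douglas-Rachford iteration for minimizing f + g; the randomized block-coordinate and asynchronous federated variants presented in the talk build on this scheme but are not reproduced here. The least-squares loss, l1 regularizer and step size are placeholder choices.

# Classical Douglas-Rachford splitting for min_x f(x) + g(x), sketched for
# f(x) = 0.5*||Ax - b||^2 and g(x) = lam*||x||_1.  Only the basic deterministic
# scheme is shown; the talk's randomized/asynchronous federated variants are not.
import numpy as np

def prox_f(y, A, b, gamma):
    """prox of gamma * 0.5*||Ax - b||^2: solve (I + gamma A^T A) x = y + gamma A^T b."""
    n = A.shape[1]
    return np.linalg.solve(np.eye(n) + gamma * A.T @ A, y + gamma * A.T @ b)

def prox_g(y, gamma, lam):
    """prox of gamma * lam * ||x||_1: componentwise soft-thresholding."""
    return np.sign(y) * np.maximum(np.abs(y) - gamma * lam, 0.0)

rng = np.random.default_rng(0)
A, b = rng.standard_normal((30, 10)), rng.standard_normal(30)
gamma, lam = 1.0, 0.1
y = np.zeros(10)
for _ in range(500):
    x = prox_f(y, A, b, gamma)      # shadow sequence: x converges to a minimizer
    z = prox_g(2 * x - y, gamma, lam)
    y = y + z - x                   # governing (Douglas-Rachford) sequence
print(x)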

VA & Opt Webinar: Maxim Dolgopolik

Title: DC Semidefinite Programming

Speaker: Maxim Dolgopolik (Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences)

Date and Time: Wed Sep 22, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: DC (Difference-of-Convex) optimization has been an active area of research in nonsmooth nonlinear optimization for over 30 years. The interest in this class of problems is based on the fact that one can efficiently utilize ideas and methods of convex analysis/optimization to solve DC optimization problems. The main results of DC optimization can be extended to the case of nonlinear semidefinite programming problems, i.e. problems with matrix-valued constraints, in several different ways. We will discuss two possible generalizations of the notion of DC function to the case of matrix-valued functions and show how these generalizations lead to two different DC optimization approaches to nonlinear semidefinite programming.

VA & Opt Webinar: Walaa Moursi

Title: The Douglas-Rachford algorithm for solving possibly inconsistent optimization problems

Speaker: Walaa Moursi (University of Waterloo)

Date and Time: Wed Sep 15, 11:00 AEST (Register here for remote connection via Zoom)

Abstract: More than 40 years ago, Lions and Mercier introduced the Douglas–Rachford algorithm in a seminal paper. Today, this method is well recognized as a classical and highly successful splitting method to find minimizers of the sum of two (not necessarily smooth) convex functions. While the underlying theory has matured, one case remains a mystery: the behaviour of the shadow sequence when the given functions have disjoint domains. Building on previous work, we establish for the first time weak and value convergence of the shadow sequence generated by the Douglas–Rachford algorithm in a setting of unprecedented generality. The weak limit point is shown to solve the associated normal problem, which is a minimal perturbation of the original optimization problem. We also present new results on the geometry of the minimal displacement vector.
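
A one-dimensional toy computation (an illustration, not taken from the talk) shows the phenomenon: for two disjoint intervals the Douglas-Rachford governing sequence drifts by roughly the minimal displacement vector at every step, while the shadow sequence converges.

# Toy illustration of Douglas-Rachford on an inconsistent feasibility problem:
# A = [0, 1] and B = [2, 3] are disjoint intervals.  The governing sequence y_k
# drifts to +infinity with increments approaching the minimal displacement
# vector (here 1), while the shadow sequence P_A(y_k) converges to 1, the
# point of A nearest to B.
import numpy as np

P_A = lambda t: np.clip(t, 0.0, 1.0)  # projection onto A = [0, 1]
P_B = lambda t: np.clip(t, 2.0, 3.0)  # projection onto B = [2, 3]

y = 0.0
for k in range(20):
    x = P_A(y)              # shadow iterate
    z = P_B(2 * x - y)
    y_next = y + z - x      # governing iterate
    print(k, "shadow:", x, "increment:", y_next - y)
    y = y_next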

VA & Opt Webinar: Andrew Eberhard

Title: Bridges between Discrete and Continuous Optimisation in Stochastic Programming

Speaker: Andrew Eberhard (RMIT University)

Date and Time: June 30th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: For many years there has been a divide between the theoretical underpinnings of algorithmic analysis in discrete and continuous optimisation. Stochastic optimisation provides a classic case study: the theoretical foundations of continuous stochastic optimisation lie in the theory of monotone operators, operator splitting and nonsmooth analysis, none of which appear to be applicable to discrete problems. In this talk we will discuss the application of ideas from continuous optimisation and variational analysis to the study of progressive hedging-like methods for discrete optimisation models. The key to the success of such approaches is accepting the existence of MIP and QMIP solvers that can be integrated into the analysis as “black box solvers” returning solutions within a broader algorithmic framework. Here methods more familiar to continuous optimisers and nonsmooth analysts can be used to prove convergence of both primal and dual methods. Unlike in continuous optimisation, separate primal and dual methods and analyses still exist in the discrete context. We will discuss this aspect and some convergent modifications that yield robust and effective versions of these methods, along with numerical validation of their effectiveness.
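
To fix ideas, the sketch below runs the progressive hedging loop on a toy continuous problem with closed-form scenario subproblems; in the discrete setting discussed in the talk, the subproblem solves would instead be delegated to a MIP or QMIP solver treated as a black box. The scenario data and penalty parameter are placeholders.

# Toy progressive hedging for min_x E_s[ 0.5*||x - c_s||^2 ] over scenarios s,
# enforcing nonanticipativity (all scenario copies must agree).  Each scenario
# subproblem  argmin_x 0.5*||x - c_s||^2 + <w_s, x> + (rho/2)*||x - x_bar||^2
# has the closed form (c_s - w_s + rho*x_bar) / (1 + rho).  In the discrete
# setting of the talk, this subproblem solve would be handed to a MIP solver.
import numpy as np

rng = np.random.default_rng(1)
C = rng.standard_normal((5, 3))   # one target c_s per scenario (placeholder data)
rho = 1.0
W = np.zeros_like(C)              # scenario multipliers w_s
x_bar = C.mean(axis=0)

for it in range(100):
    X = (C - W + rho * x_bar) / (1.0 + rho)  # per-scenario subproblem solutions
    x_bar = X.mean(axis=0)                   # implementable (nonanticipative) point
    W += rho * (X - x_bar)                   # multiplier update
print(x_bar, np.abs(X - x_bar).max())        # scenario copies agree at convergence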

VA & Opt Webinar: Adil Bagirov

Title: Nonsmooth DC optimization: recent developments

Speaker: Adil Bagirov (Federation University)

Date and Time: June 23rd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk we consider unconstrained optimization problems where the objective functions are represented as a difference of two convex (DC) functions. Various applications of DC optimization in machine learning are presented. We discuss two different approaches to design methods of nonsmooth DC optimization: an approach based on the extension of bundle methods and an approach based on the DCA (difference of convex algorithm). We also discuss numerical results obtained using these methods.
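
For readers new to the topic, the sketch below shows the basic DCA iteration on a toy DC decomposition (a convex quadratic minus the l1 norm); the decomposition and data are illustrative choices, not examples from the talk.

# Minimal DCA sketch for f(x) = g(x) - h(x) with g(x) = 0.5*x^T Q x (convex
# quadratic, Q positive definite) and h(x) = ||x||_1 (convex).  Each step takes
# a subgradient y_k of h at x_k and minimizes the convex model g(x) - <y_k, x>,
# which here has the closed form x_{k+1} = Q^{-1} y_k.
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
Q = M @ M.T + np.eye(4)                 # positive definite
f = lambda x: 0.5 * x @ Q @ x - np.abs(x).sum()

x = rng.standard_normal(4)
for k in range(50):
    y = np.sign(x)                      # a subgradient of ||.||_1 at x
    x_next = np.linalg.solve(Q, y)      # minimizer of the convex model g(x) - <y, x>
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
print(k, x, f(x))                       # objective values decrease monotonically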

VA & Opt Webinar: Bruno F. Lourenço

Title: Error bounds, amenable cones and beyond

Speaker: Bruno F. Lourenço (Institute of Statistical Mathematics)

Date and Time: June 16th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk we present an overview of the theory of amenable cones, facial residual functions and their applications to error bounds for conic linear systems. A feature of our results is that no constraint qualifications are ever assumed, so they are applicable even to some problems with unfavourable theoretical properties. Time allowing, we will discuss some recent findings on the geometry of amenable cones and also some extensions for non-amenable cones.

VA & Opt Webinar: Vuong Phan

Title: The Boosted Difference of Convex Functions Algorithm

Speaker: Vuong Phan (University of Southampton)

Date and Time: June 9th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We introduce a new algorithm for solving difference-of-convex (DC) programming problems, called the Boosted Difference of Convex functions Algorithm (BDCA). BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration, and that the corresponding objective value is monotonically decreasing and convergent. Global convergence and convergence rates of the iterations are obtained under the Kurdyka-Łojasiewicz property. We provide applications and numerical experiments for a hard problem in biochemistry and two challenging problems in machine learning, demonstrating that BDCA outperforms DCA. For the biochemistry problem, BDCA was five times faster than DCA; for the Minimum Sum-of-Squares Clustering problem, BDCA was on average sixteen times faster; and for the Multidimensional Scaling problem, BDCA was three times faster.

Joint work with Francisco J. Aragón Artacho (University of Alicante, Spain).
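
The sketch below illustrates the extra line-search step that distinguishes BDCA from DCA, on a small smooth DC function chosen for illustration only (the applications in the abstract are far larger): after the DCA step z_k, BDCA searches along d_k = z_k - x_k and accepts a longer step while a sufficient-decrease test holds.

# Sketch of the BDCA step structure on a toy smooth DC function (an illustrative
# choice, not from the talk):  f(x) = g(x) - h(x) with g(x) = ||x||^2 and
# h(x) = sqrt(1 + ||x||^2).  Each iteration performs the usual DCA step to get
# z_k, then a backtracking line search along d_k = z_k - x_k to "boost" the step.
import numpy as np

g = lambda x: np.dot(x, x)                    # convex part
h = lambda x: np.sqrt(1.0 + np.dot(x, x))     # convex and smooth part
f = lambda x: g(x) - h(x)
grad_h = lambda x: x / np.sqrt(1.0 + np.dot(x, x))

alpha, beta = 0.1, 0.5                        # sufficient-decrease and backtracking parameters
x = np.array([3.0, -2.0])
for k in range(100):
    z = grad_h(x) / 2.0                       # DCA step: argmin ||x||^2 - <grad_h(x_k), x>
    d = z - x                                 # direction used by the boosting line search
    lam = 2.0
    while f(z + lam * d) > f(z) - alpha * lam**2 * np.dot(d, d) and lam > 1e-8:
        lam *= beta                           # backtrack until sufficient decrease holds
    x_next = z + lam * d if lam > 1e-8 else z
    if np.linalg.norm(x_next - x) < 1e-10:
        break
    x = x_next
print(k, x, f(x))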

VA & Opt Webinar: Scott Lindstrom

Title: A primal/dual computable approach to improving spiraling algorithms, based on minimizing spherical surrogates for Lyapunov functions

Speaker: Scott Lindstrom (Curtin University)

Date and Time: June 2nd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization problems are frequently tackled by iterative application of an operator whose fixed points allow for fast recovery of locally optimal solutions. Under light-weight assumptions, stability is equivalent to the existence of a function, called a Lyapunov function, that encodes structural information about both the problem and the operator. Lyapunov functions are usually hard to find, but if a practitioner had a priori knowledge, or a reasonable guess, about one’s structure, they could equivalently tackle the problem by seeking to minimize the Lyapunov function directly. We introduce a class of methods that does this. Interestingly, for certain feasibility problems, the circumcentered-reflection method (CRM) is an existing member of this class. However, CRM may not lend itself well to primal/dual adaptation, for reasons we show. Motivated by the discovery of our new class, we experimentally demonstrate the success of another of its members, implemented in a primal/dual framework.
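
For concreteness, here is a minimal sketch of the circumcentered-reflection method (CRM) for a two-set feasibility problem with two hyperplanes (a toy instance chosen for illustration, not from the talk): each iterate is the circumcenter of the current point and its successive reflections through the two sets.

# Minimal CRM sketch for "find x in A and B" with two hyperplanes in R^3.
# Each iterate is the circumcenter of x, R_A(x) and R_B(R_A(x)), where
# R_A = 2*P_A - I is the reflection through A.
import numpy as np

def reflect_hyperplane(x, a, alpha):
    """Reflection through the hyperplane {z : <a, z> = alpha}."""
    return x - 2.0 * (a @ x - alpha) / (a @ a) * a

def circumcenter(p0, p1, p2):
    """Point of the affine hull of p0, p1, p2 equidistant from all three."""
    V = np.column_stack([p1 - p0, p2 - p0])
    rhs = 0.5 * np.array([(p1 - p0) @ (p1 - p0), (p2 - p0) @ (p2 - p0)])
    t, *_ = np.linalg.lstsq(V.T @ V, rhs, rcond=None)  # handles degenerate cases
    return p0 + V @ t

a, alpha = np.array([1.0, 1.0, 0.0]), 1.0      # hyperplane A
b, beta = np.array([0.0, 1.0, 2.0]), 3.0       # hyperplane B
x = np.array([2.0, 2.0, -1.0])
for k in range(50):
    ra = reflect_hyperplane(x, a, alpha)       # R_A(x)
    rba = reflect_hyperplane(ra, b, beta)      # R_B(R_A(x))
    x_next = circumcenter(x, ra, rba)
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
print(k, x, a @ x - alpha, b @ x - beta)       # residuals ~ 0 at a feasible point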
