VA & Opt Webinar: Maxim Dolgopolik

Title: DC Semidefinite Programming

Speaker: Maxim Dolgopolik (Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences)

Date and Time: September 22nd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: DC (Difference-of-Convex) optimization has been an active area of research in nonsmooth nonlinear optimization for over 30 years. The interest in this class of problems is based on the fact that one can efficiently utilize ideas and methods of convex analysis/optimization to solve DC optimization problems. The main results of DC optimization can be extended to the case of nonlinear semidefinite programming problems, i.e. problems with matrix-valued constraints, in several different ways. We will discuss two possible generalizations of the notion of DC function to the case of matrix-valued functions and show how these generalizations lead to two different DC optimization approaches to nonlinear semidefinite programming.
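
For readers new to the terminology, recall that a scalar-valued function is DC if it can be written as the difference of two convex functions; the matrix-valued generalisations compared in the talk extend this decomposition in different directions, which are not reproduced here. The block below only recalls the scalar definition, with an illustrative decomposition chosen by us, not taken from the talk.

```latex
% A function f : R^n -> R is DC if it admits a decomposition
%   f = g - h  with  g and h convex.
f(x) \;=\; g(x) - h(x), \qquad g,\ h \ \text{convex on } \mathbb{R}^n .
% Illustrative example (ours): f(x) = \sin x is DC via
%   g(x) = \sin x + x^2  (convex, since g''(x) = 2 - \sin x > 0)  and  h(x) = x^2.
```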

VA & Opt Webinar: Walaa Moursi

Title: The Douglas-Rachford algorithm for solving possibly inconsistent optimization problems

Speaker: Walaa Moursi (University of Waterloo)

Date and Time: September 15th, 2021, 11:00 AEST (Register here for remote connection via Zoom)

Abstract: More than 40 years ago, Lions and Mercier introduced in a seminal paper the Douglas–Rachford algorithm. Today, this method is well recognized as a classical and highly successful splitting method to find minimizers of the sum of two (not necessarily smooth) convex functions. While the underlying theory has matured, one case remains a mystery: the behaviour of the shadow sequence when the given functions have disjoint domains. Building on previous work, we establish for the first time weak and value convergence of the shadow sequence generated by the Douglas–Rachford algorithm in a setting of unprecedented generality. The weak limit point is shown to solve the associated normal problem which is a minimal perturbation of the original optimization problem. We also present new results on the geometry of the minimal displacement vector.
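
For orientation, here is a minimal sketch of the Douglas-Rachford iteration for minimising a sum f + g; the "shadow sequence" referred to in the abstract is the sequence of points x_k = prox_f(z_k). The two proximal operators below (for a pair of simple quadratics with minimisers a and b) are illustrative assumptions, not the setting of the talk.

```python
import numpy as np

def prox_f(z, lam=1.0):
    # prox of f(x) = 0.5*||x - a||^2 with a = (2, 0); illustrative choice
    a = np.array([2.0, 0.0])
    return (z + lam * a) / (1.0 + lam)

def prox_g(z, lam=1.0):
    # prox of g(x) = 0.5*||x - b||^2 with b = (0, 2); illustrative choice
    b = np.array([0.0, 2.0])
    return (z + lam * b) / (1.0 + lam)

def douglas_rachford(z, n_iter=100):
    """One standard form of the Douglas-Rachford iteration for minimising f + g."""
    for _ in range(n_iter):
        x = prox_f(z)           # "shadow" point: the sequence studied in the talk
        y = prox_g(2 * x - z)
        z = z + (y - x)         # governing sequence
    return prox_f(z)            # report the shadow point

print(douglas_rachford(np.zeros(2)))   # approaches the minimiser (1, 1) of f + g
```

In the consistent, well-behaved case sketched here the shadow points converge to a minimiser of f + g; the talk concerns what survives of this picture when the domains of the two functions do not intersect.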

VA & Opt Webinar: Andrew Eberhard

Title: Bridges between Discrete and Continuous Optimisation in Stochastic Programming

Speaker: Andrew Eberhard (RMIT University)

Date and Time: June 30th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: For many years there has been a divide between the theoretical underpinnings of algorithmic analysis in discrete and continuous optimisation. Stochastic optimisation provides a classic case study: the theoretical foundations of continuous stochastic optimisation lie in the theory of monotone operators, operator splitting and nonsmooth analysis, none of which appear to be applicable to discrete problems. In this talk we will discuss the application of ideas from continuous optimisation and variational analysis to the study of progressive hedging-like methods for discrete optimisation models. The key to the success of such approaches is the acceptance of MIP and QMIP solvers that can be integrated into the analysis as "black box solvers" returning solutions within a broader algorithmic framework. Here methods more familiar to continuous optimisers and nonsmooth analysts can be used to provide proofs of convergence of both primal and dual methods. Unlike in continuous optimisation, separate primal and dual methods and analyses still exist in the discrete context. We will discuss this aspect and some convergent modifications that yield robust and effective versions of these methods, along with numerical validation of their effectiveness.
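
To fix ideas, the sketch below shows the generic shape of a progressive hedging loop for a two-stage stochastic program in which each scenario subproblem is handed to a solver treated as a black box. The scenario data, the toy quadratic subproblem in solve_scenario and the penalty parameter rho are illustrative assumptions; the convergent modifications discussed in the talk are not reproduced.

```python
def solve_scenario(s, w, x_bar, rho):
    # Placeholder for a black-box (MIP/QMIP) scenario subproblem solver.
    # Here each scenario s simply minimises 0.5*(x - c_s)^2 plus the
    # progressive-hedging penalty terms, which has a closed-form solution:
    #   argmin_x 0.5*(x - c_s)^2 + w*x + 0.5*rho*(x - x_bar)^2
    c = {0: 1.0, 1: 3.0, 2: 5.0}[s]
    return (c - w + rho * x_bar) / (1.0 + rho)

def progressive_hedging(scenarios, probs, rho=1.0, n_iter=50):
    w = {s: 0.0 for s in scenarios}          # scenario dual multipliers
    x_bar = 0.0                              # implementable (non-anticipative) iterate
    for _ in range(n_iter):
        x = {s: solve_scenario(s, w[s], x_bar, rho) for s in scenarios}
        x_bar = sum(probs[s] * x[s] for s in scenarios)
        for s in scenarios:                  # dual update enforcing consensus
            w[s] += rho * (x[s] - x_bar)
    return x_bar

print(progressive_hedging([0, 1, 2], {0: 1/3, 1: 1/3, 2: 1/3}))
```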

VA & Opt Webinar: Adil Bagirov

Title: Nonsmooth DC optimization: recent developments

Speaker: Adil Bagirov (Federation University)

Date and Time: June 23rd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk we consider unconstrained optimization problems where the objective functions are represented as a difference of two convex (DC) functions. Various applications of DC optimization in machine learning are presented. We discuss two different approaches to design methods of nonsmooth DC optimization: an approach based on the extension of bundle methods and an approach based on the DCA (difference of convex algorithm). We also discuss numerical results obtained using these methods.
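
As background for the comparison, a minimal sketch of the classical DCA for minimising f = g - h (g, h convex) follows: linearise h at the current point, then solve the remaining convex subproblem. The particular choices g(x) = 0.5*||x||^2 and h(x) = ||x||_1, for which both steps have closed form, are our own illustrative assumptions.

```python
import numpy as np

def convex_subproblem(y):
    # DCA subproblem: x_{k+1} = argmin_x g(x) - <y, x>.
    # With the illustrative choice g(x) = 0.5*||x||^2 the solution is x = y.
    return y

def subgrad_h(x):
    # A subgradient of the illustrative choice h(x) = ||x||_1.
    return np.sign(x)

def dca(x, n_iter=20):
    """Classical DCA for f = g - h: linearise h, then solve the convex part."""
    for _ in range(n_iter):
        y = subgrad_h(x)          # y_k in the subdifferential of h at x_k
        x = convex_subproblem(y)  # x_{k+1} solves the convexified subproblem
    return x

# With g(x) = 0.5*||x||^2 and h(x) = ||x||_1, critical points have entries in {-1, 0, 1}.
print(dca(np.array([0.3, -2.0])))
```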

VA & Opt Webinar: Bruno F. Lourenço

Title: Error bounds, amenable cones and beyond

Speaker: Bruno F. Lourenço (Institute of Statistical Mathematics)

Date and Time: June 16th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk we present an overview of the theory of amenable cones, facial residual functions and their applications to error bounds for conic linear systems. A feature of our results is that no constraint qualifications are ever assumed, so they are applicable even to some problems with unfavourable theoretical properties. Time allowing, we will discuss some recent findings on the geometry of amenable cones and also some extensions for non-amenable cones.
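
Schematically, the error bounds in question control the distance to the feasible region of a conic linear system by the individual constraint violations. Under suitable assumptions, which the talk makes precise via amenability and facial residual functions, a Hölderian bound of the following shape holds on bounded sets; the constant c and exponent gamma below are unspecified placeholders, and the general results of the talk go beyond this form.

```latex
% Conic linear system:  F = \{x : x \in K\} \cap (L + a),  with K a closed convex
% cone and L + a an affine subspace.  A Hölderian error bound on a bounded set:
\operatorname{dist}(x, F) \;\le\;
  c\,\bigl(\operatorname{dist}(x, K) + \operatorname{dist}(x, L + a)\bigr)^{\gamma},
\qquad \gamma \in (0, 1].
```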

VA & Opt Webinar: Vuong Phan

Title: The Boosted Difference of Convex Functions Algorithm

Speaker: Vuong Phan (University of Southampton)

Date and Time: June 9th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: We introduce a new algorithm, called the Boosted Difference of Convex functions Algorithm (BDCA), for solving difference of convex functions (DC) programming problems. BDCA accelerates the convergence of the classical difference of convex functions algorithm (DCA) thanks to an additional line search step. We prove that any limit point of the BDCA iterative sequence is a critical point of the problem under consideration and that the corresponding objective value is monotonically decreasing and convergent. The global convergence and convergence rate of the iterations are obtained under the Kurdyka-Łojasiewicz property. We provide applications and numerical experiments for a hard problem in biochemistry and two challenging problems in machine learning, demonstrating that BDCA outperforms DCA. For the biochemistry problem, BDCA was five times faster than DCA; for the Minimum Sum-of-Squares Clustering problem, BDCA was on average sixteen times faster than DCA; and for the Multidimensional Scaling problem, BDCA was three times faster than DCA.

Joint work with Francisco J. Aragón Artacho (University of Alicante, Spain).
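
To make the extra line search concrete, here is a hedged sketch of a BDCA-style iteration for a smooth DC function f = g - h: a standard DCA step produces y, and a backtracking search along the direction d = y - x tries to advance further. The decomposition and the line-search constants below are illustrative assumptions, not the exact setting of the paper.

```python
import numpy as np

# Illustrative smooth DC decomposition (assumed, not from the paper):
#   g(x) = 0.5*||x||^2,   h(x) = 2*sqrt(1 + ||x||^2),   f = g - h.
def f(x):
    return 0.5 * x @ x - 2.0 * np.sqrt(1.0 + x @ x)

def dca_step(x):
    # y_k = grad h(x_k); the convex subproblem argmin_z g(z) - <y_k, z> returns y_k.
    return 2.0 * x / np.sqrt(1.0 + x @ x)

def bdca_step(x, alpha=0.1, beta=0.5, lam_bar=2.0):
    y = dca_step(x)                     # ordinary DCA point
    d = y - x                           # direction explored by the extra line search
    lam = lam_bar
    while lam > 1e-12 and f(y + lam * d) > f(y) - alpha * lam**2 * (d @ d):
        lam *= beta                     # backtracking: the "boosting" step of BDCA
    return y + lam * d

x = np.array([0.1, 0.2])
for _ in range(10):
    x = bdca_step(x)
print(x, f(x))                          # approaches the critical set ||x|| = sqrt(3)
```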

VA & Opt Webinar: Scott Lindstrom

Title: A primal/dual computable approach to improving spiraling algorithms, based on minimizing spherical surrogates for Lyapunov functions

Speaker: Scott Lindstrom (Curtin University)

Date and Time: June 2nd, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Optimization problems are frequently tackled by iterative application of an operator whose fixed points allow for fast recovery of locally optimal solutions. Under light-weight assumptions, stability is equivalent to the existence of a function, called a Lyapunov function, that encodes structural information about both the problem and the operator. Lyapunov functions are usually hard to find, but if a practitioner had a priori knowledge, or a reasonable guess, about one's structure, they could equivalently tackle the problem by seeking to minimize the Lyapunov function directly. We introduce a class of methods that does this. Interestingly, for certain feasibility problems, the circumcentered-reflection method (CRM) is an extant member of this class. However, CRM may not lend itself well to primal/dual adaptation, for reasons we show. Motivated by the discovery of our new class, we experimentally demonstrate the success of one of its other members, implemented in a primal/dual framework.
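
As a pointer to the kind of operator involved, the sketch below implements only the circumcentering computation at the heart of CRM for the feasibility problem of two hyperplanes: the circumcenter of x, R_A(x) and R_B(R_A(x)) is equidistant from the three points and therefore lies on both mirrors. The hyperplane data are made up for illustration, and the primal/dual surrogate-minimisation scheme of the talk is not reproduced.

```python
import numpy as np

def reflect_hyperplane(x, a, b):
    # Reflection across the hyperplane {z : <a, z> = b}.
    return x - 2.0 * (a @ x - b) / (a @ a) * a

def circumcenter(x, y, z):
    """Point of the affine hull of {x, y, z} equidistant from all three points."""
    v1, v2 = y - x, z - x
    G = np.array([[v1 @ v1, v1 @ v2],
                  [v1 @ v2, v2 @ v2]])
    rhs = 0.5 * np.array([v1 @ v1, v2 @ v2])
    coeffs = np.linalg.solve(G, rhs)
    return x + coeffs[0] * v1 + coeffs[1] * v2

def crm_step(x, a1, b1, a2, b2):
    # One circumcentered-reflection step for two hyperplanes A and B.
    ra = reflect_hyperplane(x, a1, b1)
    rba = reflect_hyperplane(ra, a2, b2)
    return circumcenter(x, ra, rba)

x0 = np.array([3.0, 1.0])
x1 = crm_step(x0, np.array([1.0, 0.0]), 0.0, np.array([1.0, 1.0]), 0.0)
print(x1)   # for two lines through the origin, one CRM step already lands in A ∩ B
```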

VA & Opt Webinar: Guoyin Li

Title: Proximal methods for nonsmooth and nonconvex fractional programs: when sparse optimization meets fractional programs

Speaker: Guoyin Li (UNSW)

Date and Time: May 26th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: Nonsmooth and nonconvex fractional programs are ubiquitous and also highly challenging. This class includes the composite optimization problems studied extensively of late, and it encompasses many important modern optimization problems arising from diverse areas, such as the recently proposed scale-invariant sparse signal reconstruction problem in signal processing, the robust Sharpe ratio optimization problem in finance, and the sparse generalized eigenvalue problem in discriminant analysis. In this talk, we will introduce extrapolated proximal methods for solving nonsmooth and nonconvex fractional programs and analyse their convergence behaviour. Interestingly, we will show that the proposed algorithm exhibits linear convergence for the sparse generalized eigenvalue problem with either cardinality regularization or sparsity constraints. This is achieved by identifying the explicit desingularization function of the Kurdyka-Łojasiewicz inequality for the merit function of the fractional optimization models. Finally, if time permits, we will present some preliminary encouraging numerical results for the proposed methods for sparse signal reconstruction and sparse Fisher discriminant analysis.
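
Two of the fractional models alluded to can be written schematically as follows; the exact formulations used in the talk may differ (for instance in how sparsity is imposed), so treat these as representative forms only.

```latex
% Scale-invariant sparse recovery: minimise the ratio of the l1 and l2 norms
% over the solutions of a linear system,
\min_{x \neq 0} \ \frac{\|x\|_1}{\|x\|_2} \quad \text{s.t.} \quad Ax = b;
% Sparse generalized eigenvalue problem: maximise a Rayleigh-quotient-type ratio
% under a cardinality (sparsity) restriction,
\max_{x \neq 0} \ \frac{x^{\top} A x}{x^{\top} B x} \quad \text{s.t.} \quad \|x\|_0 \le s .
```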

VA & Opt Webinar: Yura Malitsky

Title: Adaptive gradient descent without descent

Speaker: Yura Malitsky (Linköping University)

Date and Time: May 19th, 2021, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: In this talk I will present some recent results for the most classical optimization method: gradient descent. We will show that a simple zero-cost rule is sufficient to completely automate gradient descent. The method adapts to the local geometry, with convergence guarantees depending only on the smoothness in a neighborhood of a solution. The presentation is based on joint work with K. Mishchenko, see arxiv.org/abs/1910.09529.
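
The "zero-cost rule" is a step-size update that reuses quantities gradient descent already computes. The sketch below follows the rule from the cited paper as we recall it; the exact constants should be treated as an assumption and checked against arxiv.org/abs/1910.09529, and the quadratic test problem is purely illustrative.

```python
import numpy as np

def adaptive_gd(grad, x0, lam0=1e-6, n_iter=200):
    """Gradient descent with a locally adaptive step size in the spirit of
    Malitsky & Mishchenko (arXiv:1910.09529); constants recalled from memory,
    so verify against the paper before relying on them."""
    x_prev, g_prev = x0, grad(x0)
    x = x_prev - lam0 * g_prev            # one plain step to initialise the history
    lam_prev, theta = lam0, np.inf
    for _ in range(n_iter):
        g = grad(x)
        # local inverse-Lipschitz estimate built from the last two iterates
        local = np.linalg.norm(x - x_prev) / (2.0 * np.linalg.norm(g - g_prev) + 1e-16)
        lam = min(np.sqrt(1.0 + theta) * lam_prev, local)
        x_prev, g_prev = x, g
        x = x - lam * g
        theta, lam_prev = lam / lam_prev, lam
    return x

# Example: a mildly ill-conditioned quadratic (illustrative only); the minimiser is 0.
A = np.diag([1.0, 10.0])
print(adaptive_gd(lambda x: A @ x, np.array([1.0, 1.0])))
```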

VA & Opt Webinar: Hung Phan

Title: Adaptive splitting algorithms for the sum of operators

Speaker: Hung Phan (University of Massachusetts Lowell)

Date and Time: May 12th, 2021, 11:00 AEST (Register here for remote connection via Zoom)

Abstract: A general optimization problem can often be reduced to finding a zero of a sum of multiple (maximally) monotone operators, which is challenging to handle computationally as a whole. This motivates the development of splitting algorithms, which simplify the computations by dealing with each operator separately, hence the name “splitting”. Some of the most successful splitting algorithms in applications are the forward-backward algorithm, the Douglas-Rachford algorithm, and the alternating direction method of multipliers (ADMM). In this talk, we discuss some adaptive splitting algorithms for finding a zero of the sum of operators. The main idea is to adapt the algorithm parameters to the generalized monotonicity of the operators so that the generated sequence converges to a fixed point.
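
As a reminder of the splitting idea, the sketch below shows a forward-backward iteration for finding a zero of A + B with B single-valued: a forward (explicit) step on B followed by a backward (resolvent) step on A. The operators chosen here are simple monotone examples, not those of the talk, and the adaptive parameter selection discussed in the abstract is not reproduced.

```python
import numpy as np

def soft_threshold(v, t):
    # Resolvent (proximal map) of the subdifferential of t*||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def forward_backward(M, b, lam=None, n_iter=500):
    """Forward-backward splitting for 0 in A(x) + B(x) with
    A = subdifferential of ||.||_1 and B = gradient of 0.5*||Mx - b||^2
    (illustrative operators), i.e. a minimiser of ||x||_1 + 0.5*||Mx - b||^2."""
    if lam is None:
        lam = 1.0 / np.linalg.norm(M, 2) ** 2   # step size below 2/L for the forward operator
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - lam * M.T @ (M @ x - b), lam)  # forward step, then backward step
    return x

M = np.array([[1.0, 0.5], [0.0, 1.0]])
b = np.array([1.0, -2.0])
print(forward_backward(M, b))
```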
