VA & Opt Webinar: Jane Ye

Title: Difference of convex algorithms for bilevel programs with applications in hyperparameter selection

Speaker: Jane Ye (University of Victoria, Canada)

Date and Time: Wed Nov 10, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

A bilevel program is a sequence of two optimization problems where the constraint region of the upper level problem is determined implicitly by the solution set of the lower level problem. In this talk, I will present difference of convex algorithms for solving bilevel programs in which the upper level objective functions are differences of convex functions and the lower level programs are fully convex. This nontrivial class of bilevel programs provides a powerful modelling framework for dealing with applications arising from hyperparameter selection in machine learning. Thanks to the full convexity of the lower level program, the value function of the lower level program turns out to be convex and hence the bilevel program can be reformulated as a difference of convex bilevel program. We propose two algorithms for solving the reformulated difference of convex program and show their convergence to stationary points under very mild assumptions. Finally, we conduct numerical experiments on a bilevel model of support vector machine classification.
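For readers who want to see the reformulation concretely, here is a rough sketch in our own notation (F, f, g, V are not necessarily the symbols used in the talk). The bilevel program and its value function reformulation read

\min_{x,y} F(x,y) \quad \text{s.t.} \quad y \in S(x) := \operatorname*{argmin}_{y'} \{ f(x,y') : g(x,y') \le 0 \},

V(x) := \min_{y'} \{ f(x,y') : g(x,y') \le 0 \},

\min_{x,y} F(x,y) \quad \text{s.t.} \quad f(x,y) - V(x) \le 0, \quad g(x,y) \le 0.

When f and g are jointly convex in (x,y) (the fully convex case), V is convex, so the constraint f(x,y) - V(x) \le 0 involves a difference of convex functions and, with F itself DC, the reformulated problem is a DC program.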

VA & Opt Webinar: Nadezda Sukhorukova

Title: Rational approximation and its role in different branches of mathematics and applications

Speaker: Nadezda Sukhorukova (Swinburne University of Technology)

Date and Time: Wed Nov 3, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

Rational approximation is a powerful function approximation tool. It is approximation by a ratio of two polynomials, whose coefficients are subject to optimisation. Numerical methods for rational approximation have been developed independently in different branches of mathematics. In this talk, I will present the interconnections between the different numerical methods developed for rational approximation. Most of them can be extended to the case of so-called generalised rational approximation, where the approximation is a ratio of two linear forms and the basis functions are not limited to monomials. Finally, I am going to talk about real-life applications of rational and generalised rational approximation.
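As a hedged illustration of the objects involved (our notation, not necessarily the speaker's), a rational approximation of a function f on an interval seeks

R(x) = \frac{p(x)}{q(x)} = \frac{\sum_{i=0}^{n} a_i x^i}{\sum_{j=0}^{m} b_j x^j},

with the coefficients a_i, b_j chosen, for instance, to minimise the uniform deviation \max_x |f(x) - R(x)|. In the generalised setting the monomials are replaced by prescribed basis functions g_i and h_j, so that R(x) = \sum_i a_i g_i(x) / \sum_j b_j h_j(x) is a ratio of two linear forms in the coefficients.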

VA & Opt Webinar: Nghia Tran

Title: Sharp and strong minima for robust recovery

Speaker: Nghia Tran (Oakland University)

Date and Time: Wed Oct 27, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

In this talk, we show the important roles of sharp minima and strong minima for robust recovery. We also obtain several characterizations of sharp minima for convex regularized optimization problems. Our characterizations are quantitative and verifiable especially for the case of decomposable norm regularized problems including sparsity, group-sparsity, and low-rank convex problems. For group-sparsity optimization problems, we show that a unique solution is a strong solution and obtain quantitative characterizations for solution uniqueness.
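As background only (standard definitions in our wording; the talk may use refined versions), for f: \mathbb{R}^n \to \mathbb{R} \cup \{+\infty\} a minimizer \bar{x} is a sharp minimum if there exist c > 0 and r > 0 with

f(x) \ge f(\bar{x}) + c \, \|x - \bar{x}\| \quad \text{for all } \|x - \bar{x}\| \le r,

and a strong minimum (quadratic growth) if the bound holds with \|x - \bar{x}\|^2 in place of \|x - \bar{x}\|.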

VA & Opt Webinar: Sidney Morris

Title: Tweaking Ramanujan’s Approximation of n!

Speaker: Sidney Morris (Federation University, and La Trobe University)

Date and Time: Wed Oct 20, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

In 1730 James Stirling, building on the work of Abraham de Moivre, published what is known as Stirling’s approximation of n!. He gave a good formula which is asymptotic to n!. Since then hundreds of papers have given alternative proofs of his result and improved upon it, notably including those by Burnside, Gosper, and Mortici. However, Srinivasa Ramanujan gave a remarkably better asymptotic formula. Hirschhorn and Villarino gave a nice proof of Ramanujan’s result and an error estimate for the approximation.

This century there have been several improvements of Stirling’s formula including by Nemes, Windschitl, and Chen. In this presentation it is shown 

(i) how all these asymptotic results can be easily verified; 

(ii) how Hirschhorn and Villarino’s argument allows a tweaking of Ramanujan’s result to give a better approximation; 

(iii) that a new asymptotic formula can be obtained by further tweaking of Ramanujan’s result;

(iv) that Chen’s asymptotic formula is better than the others mentioned here, and the new asymptotic formula is comparable with Chen’s.
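Purely as an illustration (not material from the talk), the standard published formulas of Stirling and Ramanujan can be checked numerically in Python against ln(n!) computed via the log-gamma function; the function names below are ours.

import math

def log_stirling(n):
    # Stirling: n! ~ sqrt(2*pi*n) * (n/e)^n, compared on the log scale
    return 0.5 * math.log(2 * math.pi * n) + n * (math.log(n) - 1)

def log_ramanujan(n):
    # Ramanujan: n! ~ sqrt(pi) * (n/e)^n * (8n^3 + 4n^2 + n + 1/30)^(1/6)
    return (0.5 * math.log(math.pi) + n * (math.log(n) - 1)
            + math.log(8 * n**3 + 4 * n**2 + n + 1 / 30) / 6)

for n in (5, 10, 100):
    exact = math.lgamma(n + 1)  # ln(n!)
    print(n, exact - log_stirling(n), exact - log_ramanujan(n))

Even for modest n, the error of Ramanujan's formula is several orders of magnitude smaller than that of Stirling's.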

VA & Opt Webinar: Rubén Campoy

Title: A product space reformulation with reduced dimension

Speaker: Rubén Campoy (University of Valencia)

Date and Time: Wed Oct 13, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

The product space reformulation is a powerful trick when tackling monotone inclusions defined by finitely many operators with splitting algorithms. This technique constructs an equivalent two-operator problem, embedded in a product Hilbert space, that preserves computational tractability. Each operator in the original problem requires one dimension in the product space. In this talk, we propose a new reformulation that reduces the dimension of the resulting product Hilbert space. We shall discuss the case of not necessarily convex feasibility problems. As an application, we obtain a new parallel variant of the Douglas-Rachford algorithm with a reduction in the number of variables. The computational advantage is illustrated through some numerical experiments.
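For context, the classical product space trick (due to Pierra) that the talk refines works as follows; this is only the standard construction, and the reduced-dimension reformulation is the speaker's contribution, not reproduced here. Given sets (or operators) A_1, \dots, A_m in a Hilbert space H, one works in H^m with

\mathbf{A} := A_1 \times \cdots \times A_m, \qquad D := \{(x, \dots, x) : x \in H\},

so that x \in A_1 \cap \cdots \cap A_m \iff (x, \dots, x) \in \mathbf{A} \cap D: an m-set problem becomes a two-set problem, at the cost of an m-fold increase in dimension.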

VA & Opt Webinar: Dominikus Noll

Title: Alternating projections with applications to Gerchberg-Saxton error reduction

Speaker: Dominikus Noll (Institut de Mathématiques de Toulouse)

Date and Time: Wed Oct 6, 17:00 AEST (Register here for remote connection via Zoom)

Abstract:

We discuss alternating projections between closed non-convex sets A, B in R^n and obtain criteria for convergence when A, B do not intersect transversally. The infeasible case, A∩B=∅, is also addressed, and here we expect convergence toward the gap between A and B. For sub-analytic sets A, B, sub-linear convergence rates depending on the Łojasiewicz exponent of the distance function can be computed. We then present applications to the Gerchberg-Saxton error reduction algorithm, to Cadzow’s denoising algorithm, and to instances of the Gaussian EM algorithm.
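A minimal sketch of the alternating projections scheme itself (a toy example with a non-convex circle and a line; the projector names and parameters are ours, not the speaker's):

import numpy as np

def alternating_projections(proj_A, proj_B, x0, iters=500):
    # von Neumann-style alternating projections: x <- P_A(P_B(x))
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = proj_A(proj_B(x))
    return x

# toy example: A = unit circle (non-convex), B = horizontal line y = 0.5
def proj_A(x):
    n = np.linalg.norm(x)
    return x / n if n > 0 else np.array([1.0, 0.0])

def proj_B(x):
    return np.array([x[0], 0.5])

print(alternating_projections(proj_A, proj_B, [2.0, 2.0]))  # -> approx (0.866, 0.5)

In the infeasible case (say the line y = 2 instead of y = 0.5) the iterates settle into a pair of points realising the gap between the two sets, which is the situation analysed in the talk.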

ACEMS OCTOBER LECTURES

Dear MOCAO members,

We would like to invite you to the ACEMS October lectures:

Optimal decision making: a tribute to female ingenuity
Professor Kate Smith-Miles & Alison Harcourt AO
Tuesday 12 October | 12pm-1pm AEDT

and

Statistical Methodology Development & Software Dissemination
Distinguished Professor Matt Wand, UTS
Thursday 28 October | 12pm-1pm AEDT

For more information and registration, please refer to

https://mailchi.mp/9fad69599771/acems-public-lectures-may-5541985?e=cf88c86980

VA & Opt Webinar: Quoc Tran-Dinh

Title: Randomized Douglas-Rachford Splitting Algorithms for Federated Composite Optimization

Speaker: Quoc Tran-Dinh (University of North Carolina)

Date and Time: Wed Sep 29, 11:00 AEST (Register here for remote connection via Zoom)

Abstract:

In this talk, we present two randomized Douglas-Rachford splitting algorithms to solve a class of composite nonconvex finite-sum optimization problems arising from federated learning. Our algorithms rely on a combination of three main techniques: the Douglas-Rachford splitting scheme, a randomized block-coordinate technique, and an asynchronous strategy. We show that our algorithms achieve the best-known communication complexity bounds under standard assumptions in the nonconvex setting, while allowing local models to be updated inexactly with only a subset of users in each round and handling nonsmooth convex regularizers. Our second algorithm can be implemented in an asynchronous mode using a general probabilistic model to capture different computational architectures. We illustrate our algorithms with many numerical examples and show that the new algorithms have promising performance compared to common existing methods.

This talk is based on joint work with Nhan Pham (UNC), Lam M. Nguyen (IBM), and Dzung Phan (IBM).
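For orientation, the classical two-operator Douglas-Rachford iteration that these federated algorithms build on can be sketched in a few lines of Python; this is not the speaker's randomized block-coordinate or asynchronous variant, and all names below are ours.

import numpy as np

def douglas_rachford(prox_f, prox_g, z0, gamma=1.0, iters=200):
    # classical Douglas-Rachford splitting for min_x f(x) + g(x):
    #   x = prox_{gamma f}(z);  y = prox_{gamma g}(2x - z);  z <- z + y - x
    z = np.asarray(z0, dtype=float)
    for _ in range(iters):
        x = prox_f(z, gamma)
        y = prox_g(2 * x - z, gamma)
        z = z + y - x
    return x  # the "shadow" point prox_{gamma f}(z)

# toy example: f(x) = 0.5*||x - a||^2, g(x) = ||x||_1 (prox = soft-thresholding)
a = np.array([3.0, -0.2, 0.0])
prox_f = lambda z, g: (z + g * a) / (1 + g)
prox_g = lambda z, g: np.sign(z) * np.maximum(np.abs(z) - g, 0.0)

print(douglas_rachford(prox_f, prox_g, np.zeros(3)))  # -> approx (2, 0, 0)

In the federated setting described above, roughly speaking, each user holds one component of the finite sum and only a randomly chosen subset of users performs (inexact) proximal updates in each communication round.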

VA & Opt Webinar: Maxim Dolgopolik

Title: DC Semidefinite Programming

Speaker: Maxim Dolgopolik (Institute for Problems in Mechanical Engineering of the Russian Academy of Sciences)

Date and Time: Wed Sep 22, 17:00 AEST (Register here for remote connection via Zoom)

Abstract: DC (Difference-of-Convex) optimization has been an active area of research in nonsmooth nonlinear optimization for over 30 years. The interest in this class of problems is based on the fact that one can efficiently utilize ideas and methods of convex analysis/optimization to solve DC optimization problems. The main results of DC optimization can be extended to the case of nonlinear semidefinite programming problems, i.e. problems with matrix-valued constraints, in several different ways. We will discuss two possible generalizations of the notion of DC function to the case of matrix-valued functions and show how these generalizations lead to two different DC optimization approaches to nonlinear semidefinite programming.
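For readers new to the area: a function f: \mathbb{R}^n \to \mathbb{R} is called DC if it can be written as

f(x) = g(x) - h(x) \quad \text{with } g, h \text{ convex},

and a DC program minimises a DC function subject to DC constraints. How to extend this decomposition to matrix-valued functions F: \mathbb{R}^n \to S^m appearing in constraints F(x) \preceq 0 is, as the abstract indicates, the subject of the talk; the two generalisations discussed there are not reproduced here.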

VA & Opt Webinar: Walaa Moursi

Title: The Douglas-Rachford algorithm for solving possibly inconsistent optimization problems

Speaker: Walaa Moursi (University of Waterloo)

Date and Time: Wed Sep 15, 11:00 AEST (Register here for remote connection via Zoom)

Abstract: More than 40 years ago, Lions and Mercier introduced the Douglas–Rachford algorithm in a seminal paper. Today, this method is well recognized as a classical and highly successful splitting method for finding minimizers of the sum of two (not necessarily smooth) convex functions. While the underlying theory has matured, one case remains a mystery: the behaviour of the shadow sequence when the given functions have disjoint domains. Building on previous work, we establish for the first time weak and value convergence of the shadow sequence generated by the Douglas–Rachford algorithm in a setting of unprecedented generality. The weak limit point is shown to solve the associated normal problem, which is a minimal perturbation of the original optimization problem. We also present new results on the geometry of the minimal displacement vector.
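For concreteness (our notation; the talk may use a different normalisation), with step size \gamma > 0 the Douglas–Rachford iteration for \min_x f(x) + g(x) can be written as

z_{k+1} = T z_k, \qquad T := \mathrm{Id} - \mathrm{prox}_{\gamma f} + \mathrm{prox}_{\gamma g} \circ (2\,\mathrm{prox}_{\gamma f} - \mathrm{Id}), \qquad x_k := \mathrm{prox}_{\gamma f}(z_k),

where (x_k) is the shadow sequence referred to above. The minimal displacement vector is v := P_{\overline{\mathrm{ran}}(\mathrm{Id} - T)}(0); it vanishes in the consistent case, while in the inconsistent case it quantifies the unavoidable drift z_k / k \to -v of the governing sequence.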
