Gitta Kutyniok, Ludwig Maximilian University of Munich, Germany
Scientific Computation meets Artificial Intelligence
J. Nathan Kutz, University of Washington, USA
Dynamical Models from Data
Caroline Lasser, Technical University of Munich, Germany
Quantum dynamics on the fly
Ari Stern, Washington University in St. Louis, USA
Structure-preserving hybrid methods
Beth Wingate, University of Exeter, UK
On the way to the limit: time-parallel algorithms for oscillatory, multiscale PDEs
Konstantinos C. Zygalakis, University of Edinburgh, UK
Lyapunov functions, convergence to equilibrium and applications to sampling and optimization
Gitta Kutyniok
Ludwig Maximilian University of Munich, Germany
Gitta Kutyniok currently holds a Bavarian AI Chair for Mathematical Foundations of Artificial Intelligence at the Ludwig-Maximilians Universität München. She received her Diploma in Mathematics and Computer Science as well as her Ph.D. degree from the Universität Paderborn in Germany, and her Habilitation in Mathematics in 2006 at the Justus-Liebig Universität Gießen. From 2001 to 2008 she held visiting positions at several US institutions, including Princeton University, Stanford University, Yale University, Georgia Institute of Technology, and Washington University in St. Louis, and was a Nachdiplom Lecturer at ETH Zurich in 2014. In 2008, she became a full professor of mathematics at the Universität Osnabrück, and moved to Berlin three years later, where she held an Einstein Chair in the Institute of Mathematics at the Technische Universität Berlin and a courtesy appointment in the Department of Computer Science and Engineering until 2020. In addition, she has held an Adjunct Professorship in Machine Learning at the University of Tromsø since 2019.
Artificial intelligence is currently leading to one breakthrough after another, both in public life, with, for instance, autonomous driving and speech recognition, and in the sciences, in areas such as medical diagnostics or molecular dynamics. A similarly strong impact can currently be witnessed in scientific computation, for instance in solvers for inverse problems and in the numerical analysis of partial differential equations. In this lecture, we will first provide an introduction to this vibrant new research area. We will then survey recent advances at the intersection of scientific computation and artificial intelligence, and finally discuss fundamental limitations of such methodologies, in particular in terms of computability aspects.
J. Nathan Kutz
University of Washington, USA
Nathan Kutz is the Yasuko Endo and Robert Bolles Professor of Applied Mathematics at the University of Washington, having served as chair of the department from 2007 to 2015. He has a wide range of interests, from neuroscience to fluid dynamics, where he integrates machine learning with dynamical systems and control.
Machine learning and artificial intelligence algorithms are now being used to automate the discovery of governing physical equations and coordinate systems from measurement data alone. However, positing a universal physical law from data is challenging: (i) an appropriate coordinate system must also be identified, and (ii) an accompanying discrepancy model must be proposed simultaneously to account for the inevitable mismatch between theory and measurements. Using a combination of deep learning and sparse regression, specifically the sparse identification of nonlinear dynamics (SINDy) algorithm, we show how a robust mathematical infrastructure can be formulated for simultaneously learning physics models and their coordinate systems. This can be done with limited data and sensors. We demonstrate the methods on a diverse set of examples, showing how data can be maximally exploited for scientific and engineering applications.
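As a minimal illustration of the regression step at the heart of SINDy (a hedged sketch, not the speaker's implementation; the candidate library, threshold, and toy system below are illustrative choices), sequentially thresholded least squares fits sparse coefficients over a library of candidate terms:

```python
import numpy as np

def stlsq(Theta, dXdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: the sparse regression step used in SINDy."""
    Xi, *_ = np.linalg.lstsq(Theta, dXdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold            # coefficients to prune
        Xi[small] = 0.0
        for k in range(dXdt.shape[1]):            # refit each state's equation on surviving terms
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(Theta[:, big], dXdt[:, k], rcond=None)
    return Xi

# Toy usage: recover dx/dt = -2x + y, dy/dt = -y from noisy derivative data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                                       # state snapshots (x, y)
dXdt = X @ np.array([[-2.0, 0.0], [1.0, -1.0]]) + 1e-3 * rng.normal(size=X.shape)
Theta = np.column_stack([X[:, 0], X[:, 1],
                         X[:, 0]**2, X[:, 0]*X[:, 1], X[:, 1]**2])  # candidate term library
print(stlsq(Theta, dXdt))
```

The nonzero rows of the returned coefficient matrix indicate which library terms enter each governing equation; in practice the library also includes learned coordinates and a discrepancy term, as discussed in the talk.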
Caroline Lasser
Technical University of Munich, Germany
Caroline Lasser is a professor for the numerics of partial differential equations at the Technical University of Munich. Her research focuses on highly oscillatory evolution equations, in particular on high-dimensional quantum systems.
What do chemical physicists do when simulating the quantum mechanical dynamics of larger molecules? Conventional grid-based discretization becomes infeasible for molecular systems with more than five to six degrees of freedom, so that tensor-based and mesh-free methods, as well as combinations thereof, are routinely applied. Our talk will aim at some mathematical grounding of these intriguing computational approaches.
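To make the dimensional barrier concrete, here is a back-of-the-envelope estimate (the resolution and storage figures are illustrative assumptions, for a naive tensor-product grid with N points per degree of freedom):

```python
# Storage for one complex-valued wavefunction on a tensor-product grid:
# N points per degree of freedom, d degrees of freedom -> N**d complex amplitudes.
N = 32                       # illustrative grid resolution per coordinate
bytes_per_amplitude = 16     # one complex double
for d in (3, 6, 9, 12):
    gigabytes = N**d * bytes_per_amplitude / 1e9
    print(f"d = {d:2d}: {N**d:.3e} grid points, about {gigabytes:.3e} GB")
```

Already d = 6 requires on the order of a billion amplitudes (roughly 17 GB), and the cost grows exponentially in d; this is what rules out conventional grids beyond a handful of degrees of freedom and motivates tensor-based and mesh-free representations.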
Ari Stern
Washington University in St. Louis, USA
Ari Stern is an Associate Professor of Mathematics and Statistics at Washington University in St. Louis. He received his B.A. and M.A. from Columbia University, completed his Ph.D. at Caltech under the direction of Jerrold E. Marsden and Mathieu Desbrun, and was a postdoctoral researcher at UCSD working with Michael Holst. His research focuses on the interplay between geometry, differential equations, and numerical analysis.
The classical finite element method uses piecewise-polynomial function spaces satisfying continuity and boundary conditions. Hybrid finite element methods, by contrast, drop these conditions from the function spaces and instead enforce them weakly using Lagrange multipliers. The hybrid approach has several numerical and implementational advantages, which have been studied over the last few decades.
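For orientation, here is a schematic instance of this idea for the model Poisson problem $-\Delta u = f$ with homogeneous Dirichlet data (a standard textbook hybridization, not necessarily the formulation of the talk): seek $u_h$ in a broken, element-wise polynomial space $V_h$ and a multiplier $\lambda_h$ in a space $\Lambda_h$ on the element boundaries such that
\[
\sum_{K \in \mathcal{T}_h} \int_K \nabla u_h \cdot \nabla v_h \, dx
\;-\; \sum_{K \in \mathcal{T}_h} \int_{\partial K} \lambda_h \, v_h \, ds
\;=\; \int_\Omega f \, v_h \, dx,
\qquad
\sum_{K \in \mathcal{T}_h} \int_{\partial K} \mu_h \, u_h \, ds \;=\; 0,
\]
for all test functions $v_h \in V_h$ and $\mu_h \in \Lambda_h$. The second equation weakly enforces the interelement continuity and boundary conditions that the classical method builds into the function space, while the multiplier $\lambda_h$ approximates the normal flux across element boundaries.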
In this talk, we show how this hybrid framework has given new insight into a variety of “structure-preserving” methods for differential equations, including (multi)symplectic methods for Hamiltonian systems, charge-conserving methods for the Maxwell and Yang-Mills equations, and finite element exterior calculus. In particular, this provides a bridge linking “geometric numerical integration” of ODEs to numerical PDEs.
Beth Wingate
University of Exeter, UK
Professor Beth Wingate’s main research interest is the study of oscillations in geophysical fluid dynamics and numerics for high-performance computing. Her recent research focuses on idealised slow/fast multiscale dynamics of the atmosphere and ocean, direct numerical simulations of idealised turbulence with rotation and magnetic fields, and time-stepping methods for climate modeling. Other interests include spectral element methods, in particular the investigation of near-optimal interpolation on triangles. She did her PhD at the University of Michigan, then spent many years at Los Alamos National Laboratory in New Mexico, USA, before moving to the University of Exeter in Devon, UK, in 2013.
Motivated by using exascale computers for high-resolution simulations of time-evolution problems, in this talk I will discuss time-parallelism in the context of oscillatory PDEs. I will give an introduction to time-parallel time-stepping methods and then go on to discuss work on understanding and using time-parallel methods for multiscale oscillatory PDEs (fast singular limits) like those found in the atmosphere, the ocean, magnetic fields and plasmas. I will give concrete examples from ODEs, such as the swinging spring, and PDEs, such as the rotating shallow water equations. All these problems share the common structure of having a parameter, epsilon, associated with purely oscillatory fast frequencies. I will show results of superlinear convergence in the limit as epsilon goes to zero, and sketch the proof of convergence for the more important case, when epsilon is finite. Time permitting, I will also discuss new directions for this work, including work on multi-level parareal for fast singular limits and new strategies for using the exponential map with mean-field corrections.
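As a hedged sketch of the basic time-parallel idea (the parareal iteration for a generic ODE u' = f(u); the forward-Euler coarse solver, Runge-Kutta fine solver, and linear-oscillator test problem are illustrative choices, not the schemes analysed in the talk):

```python
import numpy as np

def parareal(f, u0, T, n_windows, coarse, fine, n_iter):
    """Parareal iteration: U_{n+1}^{k+1} = G(U_n^{k+1}) + F(U_n^k) - G(U_n^k)."""
    dt = T / n_windows
    U = np.empty((n_windows + 1, len(u0)))
    U[0] = u0
    for n in range(n_windows):                        # initial serial coarse prediction
        U[n + 1] = coarse(f, U[n], dt)
    for _ in range(n_iter):
        F = np.array([fine(f, U[n], dt) for n in range(n_windows)])  # parallelisable over n
        Unew = np.empty_like(U)
        Unew[0] = u0
        for n in range(n_windows):                    # serial coarse correction sweep
            Unew[n + 1] = coarse(f, Unew[n], dt) + F[n] - coarse(f, U[n], dt)
        U = Unew
    return U

# Illustrative solvers and problem: the linear oscillator u'' + u = 0 as a first-order system.
def euler(f, u, dt, m=5):                             # cheap coarse propagator
    h = dt / m
    for _ in range(m):
        u = u + h * f(u)
    return u

def rk4(f, u, dt, m=50):                              # accurate fine propagator
    h = dt / m
    for _ in range(m):
        k1 = f(u)
        k2 = f(u + 0.5 * h * k1)
        k3 = f(u + 0.5 * h * k2)
        k4 = f(u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

rhs = lambda u: np.array([u[1], -u[0]])
U = parareal(rhs, np.array([1.0, 0.0]), T=2 * np.pi, n_windows=20,
             coarse=euler, fine=rk4, n_iter=5)
print(U[-1])   # approaches [1, 0] after one period as the iteration converges
```

For the oscillatory problems of the talk, the interesting regime is precisely when the fast frequency (of order 1/epsilon) makes the coarse propagator inaccurate, which is where averaged or exponential-map coarse solvers come in.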
Konstantinos C. Zygalakis
University of Edinburgh, UK
Konstantinos Zygalakis is a Reader in the Mathematics of Data Science at the University of Edinburgh. He received a 5-year Diploma in Applied Mathematics and Physics from the National Technical University of Athens in 2004, and his MSc and PhD from the University of Warwick in 2005 and 2009, respectively. Before Edinburgh, he was a David Crighton Fellow at the University of Cambridge and held further postdoctoral positions at the University of Oxford and the Swiss Federal Institute of Technology, Lausanne, as well as a lectureship in Applied Mathematics at the University of Southampton. His research spans a number of areas at the intersection of applied mathematics, numerical analysis, statistics and data science. In 2011, he was awarded a Leslie Fox Prize in Numerical Analysis (IMA UK), and he has been a Fellow of the Alan Turing Institute since 2016. He has co-authored over forty research articles, as well as a graduate textbook on the mathematics of data assimilation.
Optimization and sampling problems lie at the heart of Bayesian inverse problems. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, thus giving rise to computationally challenging high-dimensional optimization and sampling problems. In this talk, we will connect the corresponding optimization and sampling problems to the large-time behaviour of solutions to (stochastic) differential equations. In addition, using a control-theoretic formulation of these equations, we will utilise a set of linear matrix inequalities (applicable in the case of strongly convex potentials) to establish a framework that allows us to deduce their long-time properties, as well as those of their numerical discretisations. In particular, using this framework, we give an alternative explanation for the good properties of Nesterov's method for strongly convex functions, and highlight the reasons behind the failure of the heavy ball method. Additionally, this framework allows us to study in a unified way the error (in the 2-Wasserstein distance) between the invariant distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation, for a number of different integrators proposed in the literature for the overdamped and underdamped Langevin dynamics.
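As a concrete instance of the sampling-via-SDE connection described above, here is a minimal sketch (the unadjusted Langevin algorithm, i.e. the Euler–Maruyama discretisation of overdamped Langevin dynamics, applied to an illustrative strongly convex quadratic potential; the step size and target are assumptions made for the example, not choices from the talk):

```python
import numpy as np

# Overdamped Langevin dynamics  dX_t = -grad U(X_t) dt + sqrt(2) dW_t  has invariant
# density proportional to exp(-U).  Its Euler-Maruyama discretisation (ULA) samples an
# approximation of that density, with a bias controlled by the step size h.
def ula(grad_U, x0, h, n_steps, rng):
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps, x.size))
    for k in range(n_steps):
        x = x - h * grad_U(x) + np.sqrt(2.0 * h) * rng.normal(size=x.size)
        samples[k] = x
    return samples

# Illustrative strongly convex potential U(x) = 0.5 * x^T A x, target N(0, A^{-1}).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_U = lambda x: A @ x
samples = ula(grad_U, x0=[5.0, -5.0], h=0.05, n_steps=50_000,
              rng=np.random.default_rng(1))
print(np.cov(samples[5_000:].T))   # close to inv(A), up to an O(h) discretisation bias
print(np.linalg.inv(A))
```

The empirical covariance matches the target covariance A^{-1} only up to a step-size-dependent bias; quantifying exactly this kind of discretisation error, in the 2-Wasserstein distance and for a range of integrators, is what the framework of the talk addresses.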