Shi Jin, Institute of Natural Sciences, China
Random Batch Methods for classical and quantum N-body problems
Gitta Kutyniok, Ludwig-Maximilians-Universität München, Germany
J. Nathan Kutz, University of Washington, USA
Dynamical Models from Data
Caroline Lasser, Technical University of Munich, Germany
Quantum dynamics on the fly
Ari Stern, Washington University in St. Louis, USA
Structure-preserving hybrid methods
Beth Wingate, University of Exeter, UK
Konstantinos C. Zygalakis, University of Edinburgh, UK
Bayesian inverse problems, prior modelling and algorithms for posterior sampling
Institute of Natural Sciences, China
He is the Director of the Institute of Natural Sciences and Chair Professor of Mathematics at Shanghai Jiao Tong University. He obtained his BS degree from Peking University and his Ph.D. from the University of Arizona. He was a postdoc at the Courant Institute, New York University; an assistant and then associate professor at the Georgia Institute of Technology; a full professor, department chair, and Vilas Distinguished Achievement Professor at the University of Wisconsin-Madison; and Chair of the Department of Mathematics at Shanghai Jiao Tong University.
He also serves as a co-director of the Shanghai Center of Applied Mathematics, director of the Ministry of Education Key Lab on Scientific and Engineering Computing, and director of the Center for Mathematical Foundation of Artificial Intelligence at Shanghai Jiao Tong University.
He received the Feng Kang Prize of Scientific Computing and a Morningside Silver Medal of Mathematics at the International Congress of Chinese Mathematicians. He is an inaugural Fellow of the American Mathematical Society (AMS), a Fellow of the Society for Industrial and Applied Mathematics (SIAM), and was an Invited Speaker at the International Congress of Mathematicians in 2018.
His research interests include kinetic theory, quantum dynamics, uncertainty quantification, interacting particle systems, and computational fluid dynamics.
We first develop random batch methods for classical interacting particle systems with a large number of particles. These methods use small but random batches for particle interactions, reducing the computational cost per time step from O(N^2) to O(N) for a system of N particles with binary interactions. For one of the methods, we give a particle-number-independent error estimate for certain classes of interactions.
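To make the O(N^2) to O(N) reduction concrete, here is a minimal sketch of one random-batch time step. The function names, the simple velocity update, and the unbiasedness rescaling factor are illustrative assumptions, not the exact scheme of the talk; `force(xi, xj)` stands for a generic pairwise interaction kernel.

```python
import numpy as np

def random_batch_step(x, v, force, dt, batch_size=2, rng=None):
    """One illustrative random-batch step: instead of summing all
    O(N^2) pairwise forces, particles are shuffled into small random
    batches and interact only within their batch, giving O(N) cost
    per step for a fixed batch size."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(x)
    perm = rng.permutation(N)
    for start in range(0, N, batch_size):
        batch = perm[start:start + batch_size]
        for i in batch:
            for j in batch:
                if i != j:
                    # rescale by (N-1)/(p-1) so the batch force is an
                    # unbiased estimator of the full pairwise sum
                    v[i] += dt * force(x[i], x[j]) * (N - 1) / (batch_size - 1)
    x += dt * v
    return x, v
```

For an antisymmetric kernel such as `force = lambda a, b: b - a`, interactions within each batch cancel pairwise, so total momentum is conserved exactly, just as in the full O(N^2) sum.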
This method also extends to quantum Monte Carlo methods for the N-body Schrödinger equation, where it will be shown to yield significant computational speedups over the classical Metropolis-Hastings algorithm and the Langevin-dynamics-based Euler-Maruyama method for statistical sampling of general distributions of interacting particles.
For the quantum N-body Schrödinger equation with pair-wise random interactions, we also obtain a convergence estimate for the Wigner transform of the single-particle reduced density matrix of the particle system at time t that is uniform in N > 1 and independent of the Planck constant \hbar. To this end we introduce a new metric specially tailored to handle simultaneously the difficulties of the small-\hbar regime (classical limit) and those of the large-N regime (mean-field limit).
This talk is based on joint works with Lei Li, Jian-Guo Liu, François Golse, Thierry Paul and Xiantao Li.
Ludwig-Maximilians-Universität München, Germany
University of Washington, USA
Nathan Kutz is the Yasuko Endo and Robert Bolles Professor of Applied Mathematics at the University of Washington, having served as chair of the department from 2007 to 2015. He has a wide range of interests, from neuroscience to fluid dynamics, where he integrates machine learning with dynamical systems and control.
Machine learning and artificial intelligence algorithms are now being used to automate the discovery of governing physical equations and coordinate systems from measurement data alone. However, positing a universal physical law from data is challenging: (i) an appropriate coordinate system must also be identified, and (ii) an accompanying discrepancy model must be proposed simultaneously to account for the inevitable mismatch between theory and measurements. Using a combination of deep learning and sparse regression, specifically the sparse identification of nonlinear dynamics (SINDy) algorithm, we show how a robust mathematical infrastructure can be formulated for simultaneously learning physics models and their coordinate systems. This can be done with limited data and sensors. We demonstrate the methods on a diverse set of examples, showing how data can be maximally exploited for scientific and engineering applications.
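The sparse-regression core of SINDy can be sketched in a few lines. The snippet below implements sequentially thresholded least squares on a candidate feature library; the function name, thresholds, and the toy library are illustrative assumptions, not the full deep-learning pipeline described above.

```python
import numpy as np

def sindy_stlsq(Theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares, the regression step at
    the heart of SINDy (a sketch). Theta: library of candidate
    features evaluated on the data; dxdt: measured derivatives.
    Returns a sparse coefficient matrix Xi with Theta @ Xi ≈ dxdt."""
    Xi, *_ = np.linalg.lstsq(Theta, dxdt, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0                     # prune small coefficients
        for k in range(dxdt.shape[1]):      # refit the survivors
            big = ~small[:, k]
            if big.any():
                Xi[big, k], *_ = np.linalg.lstsq(
                    Theta[:, big], dxdt[:, k], rcond=None)
    return Xi
```

On data generated from dx/dt = -2x with the library [x, x^2], the procedure recovers the coefficient -2 on x and zeros out the spurious x^2 term.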
Technical University of Munich, Germany
Caroline Lasser is a professor for the numerics of partial differential equations at the Technical University of Munich. Her research focuses on highly oscillatory evolution equations, in particular high-dimensional quantum systems.
What do chemical physicists do when simulating the quantum mechanical dynamics of larger molecules? Conventional grid-based discretization becomes infeasible for molecular systems with more than five or six degrees of freedom, so tensor-based and mesh-free methods, as well as combinations thereof, are routinely applied. Our talk aims at providing some mathematical grounding for these intriguing computational approaches.
Washington University in St. Louis, USA
Ari Stern is an Associate Professor of Mathematics and Statistics at Washington University in St. Louis. He received his B.A. and M.A. from Columbia University, completed his Ph.D. at Caltech under the direction of Jerrold E. Marsden and Mathieu Desbrun, and was a postdoctoral researcher at UCSD working with Michael Holst. His research focuses on the interplay between geometry, differential equations, and numerical analysis.
The classical finite element method uses piecewise-polynomial function spaces satisfying continuity and boundary conditions. Hybrid finite element methods, by contrast, drop these conditions from the function spaces and instead enforce them weakly using Lagrange multipliers. The hybrid approach has several numerical and implementational advantages, which have been studied over the last few decades.
In this talk, we show how this hybrid framework has given new insight into a variety of “structure-preserving” methods for differential equations, including (multi)symplectic methods for Hamiltonian systems, charge-conserving methods for the Maxwell and Yang-Mills equations, and finite element exterior calculus. In particular, this provides a bridge linking “geometric numerical integration” of ODEs to numerical PDEs.
University of Exeter, UK
University of Edinburgh, UK
Konstantinos Zygalakis is a Reader in the Mathematics of Data Science at the University of Edinburgh. He received a 5-year Diploma in Applied Mathematics and Physics from the National Technical University of Athens in 2004, and his MSc and PhD from the University of Warwick in 2005 and 2009, respectively. Before Edinburgh he was a David Crighton Fellow at the University of Cambridge and held further postdoctoral positions at the University of Oxford and the Swiss Federal Institute of Technology, Lausanne, as well as a lectureship in Applied Mathematics at the University of Southampton. His research spans a number of areas at the intersection of applied mathematics, numerical analysis, statistics and data science. In 2011 he was awarded a Leslie Fox Prize in Numerical Analysis (IMA UK), and he has been a Fellow of the Alan Turing Institute since 2016. He has co-authored over forty research articles, as well as a graduate textbook on the Mathematics of Data Assimilation.
Bayesian inverse problems provide a coherent mathematical and algorithmic framework that enables researchers to combine mathematical models with data. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the posterior distribution, which itself requires the solution of high-dimensional optimization and sampling problems. In this talk, we will study different algorithms for efficient sampling from the posterior distribution under two different prior modelling paradigms.

In the first paradigm, we use specific non-smooth functions, such as the total variation norm, to model the prior. The main computational challenge here is the non-smoothness of the prior, which leads to “stiffness” in the corresponding stochastic differential equations that need to be discretised to perform sampling. We address this issue by using tailored stochastic numerical integrators, known as stochastic orthogonal Runge-Kutta Chebyshev (S-ROCK) methods, and show that the corresponding algorithms outperform the current state-of-the-art methods.

In the second paradigm, the prior knowledge available is given in the form of training examples, and we use machine learning techniques to learn an analytic representation for the prior. The main computational challenge here is that the corresponding posterior distribution becomes multimodal, which results in a challenging sampling problem, since standard Markov chain Monte Carlo (MCMC) methods can get stuck in different local maxima of the posterior distribution. We address this issue by using specifically designed MCMC methods, and we show numerically that this “data-driven” approach improves performance in a number of imaging tasks, such as image denoising and image deblurring.
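For readers unfamiliar with Langevin-based posterior sampling under a non-smooth prior, the following sketch shows a standard baseline of this family: the Moreau-Yosida regularized unadjusted Langevin algorithm (MYULA), which smooths the non-smooth prior through its proximal operator before discretising. This is a generic illustration of the problem setting, not the S-ROCK integrator presented in the talk; all names and parameter choices are assumptions.

```python
import numpy as np

def myula_sample(grad_f, prox_g, lam, step, x0, n_steps, rng=None):
    """MYULA sketch for sampling pi(x) ∝ exp(-f(x) - g(x)) with
    smooth f and non-smooth g. The non-smooth g is replaced by its
    Moreau envelope, whose gradient is (x - prox_g(x, lam)) / lam,
    and the resulting Langevin SDE is discretised by Euler-Maruyama."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    samples = []
    for _ in range(n_steps):
        grad = grad_f(x) + (x - prox_g(x, lam)) / lam
        x = x - step * grad + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)
```

With a Gaussian likelihood (grad_f(x) = x) and a Laplace-type prior g(x) = |x|, whose proximal map is soft thresholding, the chain samples a symmetric unimodal target centred at zero. The "stiffness" the abstract mentions appears when lam is small: the explicit Euler-Maruyama step must then shrink accordingly, which is precisely what S-ROCK-type integrators are designed to avoid.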