Prob Num — 11–13 April 2018 — The Alan Turing Institute, British Library, London, UK

Programme

The workshop will consist of three kinds of talk: Overview Talks, Research Talks, and Short Talks.

In addition, a number of expert “discussion leaders” will steer sessions of Collaborative Research throughout the workshop.

The provisional programme is as follows:

Wednesday 11
10:00–10:15 Introductions Chris Oates (Newcastle University) and
Tim Sullivan (Freie Universität Berlin and Zuse Institute Berlin)
10:15–11:00 Overview Talk Philipp Hennig (Max Planck Institute for Intelligent Systems, Tübingen)
11:00–11:30 Coffee Break  
11:30–12:15 Overview Talk Tim Sullivan (Freie Universität Berlin and Zuse Institute Berlin) Slides
12:15–13:00 Overview Talk Mike Osborne (University of Oxford) Slides
13:00–14:00 Lunch Break  
14:00–14:45 Overview Talk Fred J. Hickernell (Illinois Institute of Technology) Slides
14:45–15:30 Overview Talk Youssef Marzouk (Massachusetts Institute of Technology)
15:30–16:00 Coffee Break  
16:00–17:00 Collaborative Research Identification of Topics/Groups
Thursday 12
10:00–10:45 Overview Talk Houman Owhadi (California Institute of Technology)
10:45–11:15 Research Talk Oksana Chkrebtii (Ohio State University)
11:15–11:45 Coffee Break  
11:45–13:00 Collaborative Research  
13:00–14:00 Lunch Break  
14:00–14:30 Research Talk Toni Karvonen (Aalto University) Slides
14:30–15:30 Short Talks Florian Schäfer (California Institute of Technology)
Hans Kersting (Max Planck Institute for Intelligent Systems, Tübingen) Slides
Motonobu Kanagawa (Max Planck Institute for Intelligent Systems, Tübingen) Slides
Alexandra Gessner (Max Planck Institute for Intelligent Systems, Tübingen) Slides
Onur Teymur (Imperial College London) Slides
François-Xavier Briol (University of Warwick and Imperial College London) Slides
15:30–16:00 Coffee Break  
16:00–17:00 Collaborative Research  
Friday 13
10:00–10:30 Research Talk Han Cheng Lie (Freie Universität Berlin)
10:30–11:00 Research Talk Jon Cockayne (University of Warwick)
11:00–11:30 Research Talk Junyang Wang (Newcastle University)
11:30–12:00 Coffee Break  
12:00–13:00 Summary Discussion A Euston Road Manifesto? Chairs: Chris Oates (Newcastle University) and Tim Sullivan (Freie Universität Berlin and Zuse Institute Berlin)
Panel: Oksana Chkrebtii (Ohio State University), Philipp Hennig (MPI Tübingen), Youssef Marzouk (MIT), Mike Osborne (University of Oxford), Houman Owhadi (Caltech)
Video
13:00–14:00 Lunch Break  
14:00–16:00 Collaborative Research  

Abstracts

Oksana Chkrebtii, Ohio State University
“Probability models for discretization uncertainty with adaptive grid designs”
When models are defined implicitly as systems of differential equations with no closed-form solution, the choice of discretization grid for their approximation represents a trade-off between the accuracy of the estimated solution and computational resources. We apply principles of statistical design to a class of sequential, probability-based models of discretization uncertainty in order to select the discretization grid adaptively. Our proposal is compared to other approaches in the literature.
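
As a rough illustration of the sequential design idea only, the sketch below greedily adds the candidate point at which a probabilistic model reports the largest predictive variance. Everything in it is an assumption for illustration (a unit-variance squared-exponential Gaussian process and a dense candidate set on \( [0, 1] \)); it is not the discretization-uncertainty model of the talk.

    import numpy as np

    # Hypothetical sequential design loop: add the candidate point where a
    # GP variance model is largest.  Kernel and candidate set are stand-ins.

    def kernel(x, y, ell=0.2):
        return np.exp(-0.5 * (x[:, None] - y[None, :]) ** 2 / ell ** 2)

    def posterior_variance(cand, grid, jitter=1e-9):
        K = kernel(grid, grid) + jitter * np.eye(len(grid))
        k = kernel(cand, grid)
        return 1.0 - np.sum(k @ np.linalg.inv(K) * k, axis=1)  # prior var is 1

    cand = np.linspace(0.0, 1.0, 201)          # dense candidate set on [0, 1]
    grid = np.array([0.0, 1.0])                # start from the endpoints
    for _ in range(8):                         # add 8 grid points sequentially
        var = posterior_variance(cand, grid)
        grid = np.sort(np.append(grid, cand[np.argmax(var)]))
    print(grid)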

Jon Cockayne, University of Warwick
“A Bayesian conjugate gradient method”
A fundamental task in numerical computation is the solution of large linear systems. The conjugate gradient method is an iterative method which offers rapid convergence to the solution, particularly when an effective preconditioner is employed. However, for more challenging systems a substantial error can be present even after many iterations have been performed. The estimates obtained in this case are of little value unless further information can be provided about the numerical error. We propose a novel statistical model for this numerical error set in a Bayesian framework. Our approach is a strict generalisation of the conjugate gradient method, which is recovered as the posterior mean for a particular choice of prior. The estimates obtained are analysed with Krylov subspace methods and a contraction result for the posterior is presented. The method is then analysed in a simulation study as well as being applied to a challenging problem in medical imaging.
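
As a hedged sketch of the general idea (not the talk's algorithm): a Gaussian prior on the solution \( x \) of \( Ax = b \) can be conditioned in closed form on a few projections \( S^\top A x = S^\top b \). The prior and the randomly drawn search directions below are placeholder assumptions; per the abstract, the conjugate gradient method is recovered as the posterior mean for a particular choice of prior.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 10
    M = rng.standard_normal((n, n))
    A = M @ M.T + n * np.eye(n)          # a symmetric positive definite system
    b = rng.standard_normal(n)

    x0 = np.zeros(n)                     # prior mean (assumption)
    Sigma0 = np.eye(n)                   # prior covariance (assumption)
    S = rng.standard_normal((n, m))      # placeholder search directions

    # Condition x ~ N(x0, Sigma0) on the m projections S^T A x = S^T b.
    AS = A.T @ S
    G = AS.T @ Sigma0 @ AS               # Gram matrix of the observations
    mean = x0 + Sigma0 @ AS @ np.linalg.solve(G, S.T @ (b - A @ x0))
    cov = Sigma0 - Sigma0 @ AS @ np.linalg.solve(G, AS.T @ Sigma0)

    print("error after m projections:", np.linalg.norm(mean - np.linalg.solve(A, b)))
    print("remaining uncertainty (trace):", np.trace(cov))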

Philipp Hennig, Max Planck Institute for Intelligent Systems, Tübingen
“New applications for new numerics — Uses for PN in machine learning”
Much of PN research continues to focus on the much-needed foundations. But, as progress is made on those foundations, we also have to identify crucial applications of the framework to motivate the research. This talk will identify a few potential killer apps for PN in the domain of machine learning, where PN promises to improve the computational methods of learning machines by treating computational methods as learning machines.

Fred J. Hickernell, Illinois Institute of Technology
“Adaptive probabilistic numerical methods using fast transforms”
Adaptive numerical algorithms determine the sample size required to achieve the accuracy demanded based on the function values sampled. By assuming a Bayesian framework, probabilistic numerical methods can adaptively determine the sample size via credible intervals for the true answer. However, these credible intervals rely on assuming a reasonable prior for the sample space of input functions. Thus, one may assume a prior with parameters to be fit by maximum likelihood estimation. However, evaluating the likelihood function may be costly, requiring \( O(n^3) \) operations, where \( n \) is the number of function values. We describe situations where the choice of data sites together with the family of covariance kernels makes it possible to evaluate the likelihood function in only \( O(n \log n) \) operations. This makes adaptive probabilistic numerical methods practical. We describe the promise of this research direction.
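
One standard setting with this structure pairs equally spaced data sites on \( [0, 1) \) with a periodic, shift-invariant kernel, so that the Gram matrix is circulant and diagonalised by the FFT. The sketch below is under stated assumptions (a Bernoulli-polynomial kernel, one familiar choice from the lattice-rule literature; the talk covers a broader family) and evaluates the Gaussian log-likelihood in \( O(n \log n) \).

    import numpy as np

    # Equally spaced sites on [0, 1) plus a periodic, shift-invariant kernel
    # make the Gram matrix circulant: its eigenvalues come from one FFT, so
    # the Gaussian log-likelihood costs O(n log n) rather than O(n^3).

    n = 1024
    x = np.arange(n) / n                       # equally spaced data sites
    y = np.sin(2 * np.pi * x)                  # example data

    def kernel_row(t, theta=1.0):
        # k(x, x') = 1 + theta * B2({x - x'}), with B2(u) = u^2 - u + 1/6
        u = t % 1.0
        return 1.0 + theta * (u ** 2 - u + 1.0 / 6.0)

    lam = np.fft.fft(kernel_row(x)).real       # eigenvalues of the Gram matrix
    y_hat = np.fft.fft(y)
    quad = np.sum(np.abs(y_hat) ** 2 / lam) / n     # y^T K^{-1} y via the FFT
    log_det = np.sum(np.log(lam))
    log_lik = -0.5 * (quad + log_det + n * np.log(2 * np.pi))
    print("log-likelihood:", log_lik)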

Toni Karvonen, Aalto University
“A Bayes–Sard Cubature Method”
To date, research effort on numerical integration as an inferential task has focussed on the development of Bayesian cubature, whose distributional output provides uncertainty quantification for the integral. However, the natural point estimators associated with Bayesian cubature do not, in general, correspond to widely-used and well-studied standard integration methods, such as Gaussian cubatures or quasi-Monte Carlo. We present Bayes–Sard, a general framework in which any cubature rule can be endowed with a meaningful probabilistic output. This is achieved by considering a Gaussian process model for the integrand, whose mean is a parametric regression model with an improper flat prior on each regression coefficient. The features in the regression model consist of test functions which are exactly integrated, whilst the remainder of the computational budget is afforded to the non-parametric part. It is demonstrated that a judicious choice of test functions allows any cubature rule to be recovered as the posterior mean in the Bayes–Sard output.
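
A minimal sketch of this construction, under assumptions chosen only for their closed-form integrals (the Brownian-motion kernel \( k(x, x') = \min(x, x') \) on \( [0, 1] \) and test functions \( \{1, x\} \)): the flat prior on the regression coefficients leads to a universal-kriging saddle-point system whose constraint block forces the weights to integrate the test functions exactly.

    import numpy as np

    # Bayes--Sard sketch: a GP prior with a parametric mean and a flat prior
    # on its coefficients.  Assumptions for illustration: Brownian-motion
    # kernel k(x, x') = min(x, x') on [0, 1], test functions {1, x}.

    X = np.array([0.2, 0.5, 0.8])              # cubature nodes
    f = lambda x: np.exp(x)                    # integrand, true integral e - 1
    K = np.minimum(X[:, None], X[None, :])     # kernel Gram matrix
    z = X - X ** 2 / 2                         # \int_0^1 min(x, t) dt
    P = np.column_stack([np.ones_like(X), X])  # test functions at the nodes
    p = np.array([1.0, 0.5])                   # exact integrals of 1 and x

    # Universal-kriging saddle-point system: the flat prior forces the
    # weights to integrate the test functions exactly (P^T w = p).
    n, q = len(X), P.shape[1]
    A = np.block([[K, P], [P.T, np.zeros((q, q))]])
    w = np.linalg.solve(A, np.concatenate([z, p]))[:n]

    print("Bayes-Sard estimate:", w @ f(X), "   truth:", np.e - 1)
    print("test functions integrated exactly:", P.T @ w)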

Han Cheng Lie, Freie Universität Berlin
“Strong convergence of probabilistic integrators for deterministic ordinary differential equations”
In the study of probabilistic integrators for deterministic ordinary differential equations, one goal is to establish the convergence (in an appropriate topology) of the random solutions to the true deterministic solution of an initial value problem defined by some vector field. The challenge is to identify the right conditions on the additive noise with which one constructs the probabilistic integrator, so that the convergence of the random solutions has the same order as the underlying deterministic integrator. Conrad et al. (Stat. Comput., 2017) established the mean-square convergence of the solutions for globally Lipschitz vector fields, under the assumptions of i.i.d., state-independent, mean-zero Gaussian noise. We extend their analysis by considering vector fields that need not be globally Lipschitz, and by considering non-Gaussian, non-i.i.d. noise that can depend on the state and that can have nonzero mean. A key assumption is a uniform moment bound condition on the noise. We obtain convergence in the stronger topology of the uniform norm, and establish results that connect this topology to the regularity of the additive noise. Joint work with A. M. Stuart (Caltech) and T. J. Sullivan.
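
For concreteness, a minimal sketch of the kind of integrator analysed by Conrad et al.: a deterministic Euler step plus i.i.d., state-independent, mean-zero Gaussian noise whose scale \( h^{3/2} \) matches the first-order method. The talk's results relax each of these assumptions on the noise; the test problem and noise scale below are illustrative choices.

    import numpy as np

    # Perturbed-Euler sketch in the style of Conrad et al.: a deterministic
    # Euler step plus i.i.d. mean-zero Gaussian noise of scale h^{3/2}.

    def probabilistic_euler(f, u0, T, h, sigma=1.0, seed=None):
        rng = np.random.default_rng(seed)
        u = u0
        for _ in range(int(round(T / h))):
            u = u + h * f(u) + sigma * h ** 1.5 * rng.standard_normal()
        return u

    f = lambda u: -u                           # a globally Lipschitz test problem
    ends = np.array([probabilistic_euler(f, 1.0, 1.0, 0.01, seed=s)
                     for s in range(200)])
    print("sample mean:", ends.mean(), "   sample std:", ends.std())
    print("truth u(1): ", np.exp(-1.0))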

Mike Osborne, University of Oxford
“Bayesian optimisation is probabilistic numerics”
Bayesian optimisation, within machine learning, is perhaps the one area of probabilistic numerics in which demand outstrips supply, driven by industrial interest in automated machine learning. I propose that this beachhead should be used for a full-fledged incursion of probabilistic numerics into machine learning. Probabilistic numerics has much to offer to, and to learn from, machine learning. In particular, machine learning has diverse numerics needs that are core to performance, reliability and interpretability: probabilistic numerics can meet those needs. To get there, I think that probabilistic numerics needs to follow Bayesian optimisation's example in community-building, entrepreneurialism, and adopting a ruthless focus on the real pain-points of users.

Houman Owhadi, California Institute of Technology
“A game theoretic approach to numerical approximation and algorithm design”
This talk will review interplays between Game Theory, Numerical Approximation and Gaussian Process Regression. We will illustrate this interface between statistical inference and numerical analysis through problems related to numerical homogenization, operator adapted wavelets, fast solvers, and computation with dense kernel matrices. We will emphasize open problems, unexplored areas and opportunities for collaborative research. This talk will cover joint work with F. Schäfer, C. Scovel, T. Sullivan and L. Zhang.

Tim Sullivan, Freie Universität Berlin and Zuse Institute Berlin
“Bayesian probabilistic numerical methods”
Many probabilistic numerical methods have been introduced in recent years for tasks such as quadrature, optimisation, and the solution of differential equations. This talk will describe, on a formal level, what a probabilistic numerical method is, and what it means for one to be Bayesian. We will further examine the costs and benefits associated to being Bayesian, and how this differs from established notions of average-case optimality. This talk is based upon joint work with J. Cockayne, M. Girolami, and C. Oates.

Junyang Wang, Newcastle University
“An exact Bayesian probabilistic numerical method for ODEs?”
It has been argued that Bayesian probabilistic numerical methods (BPNM) provide a coherent framework in which numerical uncertainty can be propagated. However, a consequence of the strict definition of BPNM is that there are currently no closed-form BPNM for the solution of ODEs. It is thus interesting to ask: does a closed-form BPNM for ODEs exist? In this talk, we demonstrate how one can indeed construct closed-form BPNM for certain ODEs that are transformation-invariant. The method is presented for first-order ODEs of the form \( u' = f(u, x) \) and requires that certain conditions associated with the Lie algebra of the ODE are satisfied.
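
To illustrate the directly integrable special case only (not the Lie-algebra construction of the talk): for \( u' = f(x) \), \( u(0) = u_0 \), a Gaussian process prior on \( f \) induces, in closed form, a Gaussian posterior on \( u(x) = u_0 + \int_0^x f(t)\,dt \). The Brownian-motion kernel below is an assumption chosen for its closed-form integrals.

    import numpy as np

    # Integrable case u' = f(x), u(0) = u0: condition a GP prior on f at a
    # few points and read off the Gaussian posterior on the solution.
    # Assumption: Brownian-motion kernel k(s, t) = min(s, t).

    def Z(x, t):
        # \int_0^x min(s, t) ds, elementwise over the nodes t
        return np.where(x <= t, x ** 2 / 2, t * x - t ** 2 / 2)

    f, u0 = np.sin, 0.0                        # u' = sin(x), so u(x) = 1 - cos(x)
    t = np.linspace(0.1, 2.0, 10)              # points at which f is evaluated
    K = np.minimum(t[:, None], t[None, :])

    x = 1.5
    z = Z(x, t)
    mean = u0 + z @ np.linalg.solve(K, f(t))
    var = x ** 3 / 3 - z @ np.linalg.solve(K, z)   # x^3/3: double integral of k
    print("posterior:", mean, "+/-", np.sqrt(var), "   truth:", 1 - np.cos(x))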