Thomas Baldwin-McDonald: Bayesian Deep Learning with Physics-informed Gaussian Processes
Abstract

Dynamical systems are ubiquitous across the natural sciences, with many physical and biological processes driven at a fundamental level by differential equations. In particularly complex systems, it is often infeasible to characterise all of the individual processes present and the interactions between them. Rather than attempt to fully describe such systems, latent force models (LFMs) specify a simplified mechanistic model which captures the salient features of the dynamics present. This yields a model able to extrapolate readily beyond the training input space, thereby retaining one of the key advantages of mechanistic modelling over purely data-driven techniques. However, modelling nonlinear dynamical systems presents an additional challenge, as shallow models such as LFMs are generally less capable of capturing the non-stationarities often present in nonlinear systems than deep probabilistic models such as deep Gaussian processes (DGPs).
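As an illustration of the mechanistic kernels underlying LFMs, the sketch below (my own, not code from the talk) considers a standard first-order example: an output driven by the ODE dx/dt + D·x = S·u(t), where the latent force u(t) has an RBF covariance. The output covariance is then a double convolution of the Green's function e^{-D(t-τ)} with the latent kernel, which has a known closed form but is approximated here by simple trapezoidal quadrature for clarity. The parameter names D, S and lengthscale are assumed for illustration.

```python
import numpy as np

def quad_weights(a, b, n):
    # Uniform grid with composite-trapezoid quadrature weights.
    h = (b - a) / (n - 1)
    w = np.full(n, h)
    w[[0, -1]] = h / 2
    return np.linspace(a, b, n), w

def lfm_kernel(t, t_p, D=1.0, S=1.0, lengthscale=0.5, n=200):
    # k(t, t') = S^2 * int_0^t int_0^{t'} e^{-D(t-tau)} e^{-D(t'-tau')}
    #            * k_u(tau, tau') dtau dtau', with an RBF latent kernel k_u.
    tau, w = quad_weights(0.0, t, n)
    tau_p, w_p = quad_weights(0.0, t_p, n)
    G = np.exp(-D * (t - tau))[:, None] * np.exp(-D * (t_p - tau_p))[None, :]
    K_u = np.exp(-((tau[:, None] - tau_p[None, :]) ** 2)
                 / (2 * lengthscale ** 2))
    return S ** 2 * (w @ (G * K_u) @ w_p)

print(lfm_kernel(1.0, 1.0))  # positive variance at t = t' = 1
```

Because the ODE solution enters the covariance directly, samples from this GP respect the first-order dynamics by construction, which is what lets LFMs extrapolate more gracefully than a generic kernel.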
In this talk, I will introduce the concept of a deep latent force model (DLFM), which aims to bridge this gap by combining the advantages of LFMs and deep probabilistic models. We will consider two approaches to formulating the DLFM for the simple case of a first-order ODE-based kernel: a weight-space approximation, and an inducing-point-based method which relies on pathwise conditioning. Both models are amenable to doubly stochastic variational inference, and we find empirically that the proposed framework captures nonlinear dynamics effectively, whilst also being applicable to more general tabular regression problems.