Combining Parareal and Machine Learning

Main innovation: Demonstration that physics-informed neural networks (PINNs) can be used to build coarse propagators for Parareal, and that this can improve performance over a numerical coarse propagator.

A key challenge to achieving speedup with Parareal (and other across-the-steps time-parallel methods) is the design of a good coarse propagator. It needs to be accurate enough to make Parareal converge quickly, but also computationally cheap, as it forms the serial part of the method. While many strategies have been explored to build coarse propagators, for example using reduced models or algorithms of reduced precision, building effective coarse models remains difficult. A recent Time-X-funded paper explores the use of physics-informed neural networks (PINNs) to build a coarse propagator for the Black-Scholes partial differential equation, a model used in computational finance. PINNs have the advantage that they include the PDE itself in the loss function and can therefore be trained without requiring large amounts of data.
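To illustrate how the PDE enters a PINN's training objective, the sketch below (not taken from the paper; all parameter values are illustrative) evaluates the Black-Scholes residual of a candidate price function. For brevity the derivatives are approximated with finite differences, where a real PINN would use automatic differentiation of the network; the mean squared residual over sampled collocation points would then serve as the physics loss.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, t, K=100.0, r=0.05, sigma=0.2, T=1.0):
    """Closed-form Black-Scholes price of a European call (illustrative parameters)."""
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    return S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)

def pde_residual(V, S, t, r=0.05, sigma=0.2, h=1e-4):
    """Black-Scholes PDE residual V_t + 0.5*sigma^2*S^2*V_SS + r*S*V_S - r*V,
    approximated here with central finite differences (a PINN would instead
    differentiate the network via autodiff)."""
    V_t = (V(S, t + h) - V(S, t - h)) / (2 * h)
    V_S = (V(S + h, t) - V(S - h, t)) / (2 * h)
    V_SS = (V(S + h, t) - 2 * V(S, t) + V(S - h, t)) / h**2
    return V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V(S, t)

# The exact solution drives the residual to (numerically) zero; a PINN is
# trained by minimizing this residual over many sampled (S, t) points.
res = pde_residual(bs_call, S=110.0, t=0.3)
```

Because the loss penalizes the PDE residual directly, training only needs sampled collocation points plus initial/boundary conditions, not a precomputed solution dataset.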

In benchmark results using up to 16 MPI processes on a single node, Parareal with a PINN coarse propagator achieved about twice the speedup of Parareal with a numerical coarse propagator of comparable accuracy. Furthermore, the different compute pattern of the PINN allowed for effective use of a GPU attached to the node: running the numerical fine model on the CPU while moving the PINN coarse propagator to the GPU was considerably faster than running both on only the CPU or only the GPU.

The results will be presented at the Euro-Par 2023 conference in August 2023.

Publication
