Use cases

TIME-X will showcase its potential in four diverse and challenging applications with high societal relevance:

  • Medicine
  • Electromagnetics
  • Drug design
  • Weather & climate

These four selected applications share key properties: the inherent multi-scale nature of the physics, the presence of strong nonlinearities, and a pressing need to run simulations within a short wall-clock time. PinT methods have been applied with some success to simplified situations in those applications: to the electromagnetic simulation of a 2D machine model, to simplified versions of numerical weather prediction models, and to a set of parabolic equations in computational medicine. Still, many questions about PinT efficiency remain, and there is great potential for improvements. Those features and circumstances, combined with the high societal relevance of each of these applications, make them ideal testbeds for novel PinT methods on exascale systems.

Discover our targeted applications

Medicine

Cardiovascular diseases and diseases of the musculoskeletal system are among the most prevalent diseases in Western countries. In fact, surgeries such as total hip or total knee replacement are extremely common (e.g. more than 170,000 total hip replacements per year in Germany), leading to a huge societal and economic impact. For this use case, we will provide PinT methods for prototypical applications from cardiac electrophysiology, i.e. electrical propagation in the human heart, and from biomechanics, i.e. wear in prostheses.

We will first focus on accelerating the computation of the electrical activity of the heart governed by the so-called monodomain equation, which describes the propagation of the electrical signal in the human heart. The monodomain equation is a nonlinear parabolic equation incorporating a multi-scale model: models of the chemical activity in the myocytes are combined with diffusion to describe the electrical activity on the organ scale. It is a well-established and validated model, widely used in cardiovascular research for understanding and treating diseases such as atrial fibrillation, diseases of the ion channels, or infarctions. For parallelization in time, we will build on existing space-time multilevel methods for parabolic equations, which use so-called semi-geometric hierarchies of approximation spaces. These methods and their implementation are already available for smaller-scale problems, but not for the exascale range. We will extend the existing methodological and software basis to enable exascale cardiac simulation. Since medical data often contain uncertainties, we will also integrate the multilevel Monte Carlo methods for uncertainty quantification developed in the TIME-X project into our space-time model. The activation of the ion channels will be offloaded to GPUs via Kokkos.
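
For reference, a standard textbook form of the monodomain model (the notation here is the common one from the literature, not project-specific) couples a reaction-diffusion equation for the transmembrane potential v with ordinary differential equations for the state variables w of the ionic cell model:

    \chi \left( C_m \frac{\partial v}{\partial t} + I_{\mathrm{ion}}(v, w) \right) = \nabla \cdot \left( \sigma \nabla v \right) + I_{\mathrm{stim}}, \qquad \frac{\partial w}{\partial t} = g(v, w),

where \chi is the membrane surface-to-volume ratio, C_m the membrane capacitance, \sigma the effective conductivity tensor, and I_{\mathrm{stim}} an applied stimulus current. The stiff reaction term I_{\mathrm{ion}} is where the myocyte-scale chemistry enters the organ-scale diffusion problem, which is what makes the equation both nonlinear and multi-scale.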

On the biomechanical side, we will focus on wear in (knee) prostheses. Wear is a key factor determining the longevity of a prosthesis: reducing it will not only significantly lower the cost of prosthesis replacement, but may also spare patients the undesirable replacement procedure altogether. Building on quasi-static models for friction and on our expertise in multilevel methods for contact problems, we will construct time-parallel multilevel methods for the simulation of wear in knee prostheses. Practical tests following the ISO 14243 standard are extremely time-consuming and expensive, and during the development phase they can be replaced by our wear simulations. With PinT, these simulations can be expected to shorten development and product cycles significantly, thereby reducing the overall cost for the health system. We will build on the software and the multilevel structure already used for the first part of this use case on cardiac activation, as the quasi-static approximation again leads to an equation of parabolic type. For this use case, however, we will additionally deal with contact constraints at the contact surfaces and with the friction law, i.e. Coulomb friction or a non-local regularized version of it. The outcome shall be a time-parallel software library that allows fast simulation of wear in (knee) prostheses according to existing ISO norms. We will furthermore consider time-parallel optimization procedures for optimizing the shape of prostheses with respect to minimal wear.
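
For orientation, the Coulomb friction law mentioned above can be written (in standard contact-mechanics notation, assumed here rather than taken from the project) as a bound on the tangential contact stress \sigma_t in terms of the normal contact stress \sigma_n:

    |\sigma_t| \le \mathcal{F} |\sigma_n|, \qquad |\sigma_t| < \mathcal{F} |\sigma_n| \Rightarrow \dot{u}_t = 0, \qquad |\sigma_t| = \mathcal{F} |\sigma_n| \Rightarrow \dot{u}_t = -\lambda \sigma_t, \; \lambda \ge 0,

where \mathcal{F} is the friction coefficient and \dot{u}_t the tangential slip velocity: no slip occurs below the friction bound, and slip opposes the tangential stress once the bound is reached. The non-local regularized version replaces \sigma_n by a mollified (locally averaged) normal stress, which improves the mathematical well-posedness of the problem.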

Electromagnetics

This use case considers two challenging applications: industry-driven robust optimization of energy converters, and the simulation of electromagnetic fields and particle motion in accelerators (e.g. at CERN) or fusion reactors (e.g. at CCFE). Both applications lead to time-domain problems which must be considered on long intervals and are subject to uncertainties.

Driven by the energy transition, efficient and robust designs of electromechanical energy converters, especially electrical machines, are gaining in importance, whether as generators or drives, as components in the automotive industry, in industrial automation, or in household appliances. These devices can be simulated very accurately by a multiphysical model consisting of the electromagnetic field, the motion of the rotor, the thermal field, and an excitation from surrounding circuitry. The resulting high-fidelity models lead, mathematically speaking, to a coupled system of partial differential-algebraic equations, whose numerical solution, e.g. by time stepping and finite element discretization, may need weeks on desktop computers since long time intervals have to be considered (cf. the figure on the left). Therefore, engineers typically consider a hierarchy of models, e.g. using coarser resolutions in space and time, or even lumped-element models in the frequency domain which can be obtained by semi-analytical means.
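
To sketch the model structure (in standard magnetoquasistatic notation, assumed here rather than taken from the project): the eddy-current approximation of Maxwell's equations for the magnetic vector potential A reads

    \sigma \frac{\partial A}{\partial t} + \nabla \times \left( \nu \nabla \times A \right) = J_s,

with conductivity \sigma, reluctivity \nu, and source current density J_s. After finite element discretization and coupling to the circuit and motion equations, this yields a system of the form M \dot{x}(t) + K(x(t)) x(t) = f(t), where the mass matrix M is singular in non-conducting regions; this singularity is exactly what turns the problem into a differential-algebraic rather than an ordinary differential system.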

All those converters are developed close to the technical limit, supported by numerical simulations and high-dimensional multi-objective optimization. For example, a typical goal is the minimization of Joule losses or the reduction of rare-earth permanent magnets, which are expensive and environmentally harmful. However, the optimization often uses simplified lumped models rather than transient high-fidelity models. Moreover, uncertainties are commonly quantified only late in the development process, merely to verify robustness. Since they are not taken into account during the optimization loop, the optimal robust design may not be found. This motivates the consideration of uncertainty quantification and optimization in conjunction with parallel-in-time methods.

In high-energy particle accelerators and plasma reactors, (low- and high-temperature) superconducting technologies make it possible to increase the device's performance. As a consequence, the design of the experiment and the prediction of the particle motion become more complicated, and overly simple models may limit the assessment of the overall stability. Among the various elements that constitute, for example, an accelerator, the magnetic quadrupoles, responsible for the beam focusing, are critical for the overall beam stability. It is therefore important to study their impact on the particle motion. The assessment of the beam stability requires long-term simulations (e.g. 100,000 revolutions in the accelerator); it is therefore important to compute the particle trajectories efficiently, accurately, and in a stability-preserving way.

Partner TUD has already contributed significantly to the simulation of CERN's superconducting magnets using waveform relaxation, and uncertainty quantification was investigated in two recent PhD theses to quantify the impact of manufacturing tolerances on the electromagnetic fields ("multipoles"). However, parallelism in time was not exploited. Similarly, partner TUHH improved numerical time stepping methods for the LOCUST particle tracking code used by an international consortium to design the International Thermonuclear Experimental Reactor (ITER). Performance improvements were shown for realistic test cases related to fusion reactor modelling. The new algorithm is easy to implement, still relies on the standard Boris method, and promises particularly high speed-ups when parallelism in time can be used. Finally, the impact of manufacturing tolerances on the fields, and consequently on the tracking, is a challenging question that requires efficient algorithms and significant computing resources.
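
For context, the Boris method referenced above advances a charged particle through electric and magnetic fields via two half electric kicks wrapped around a rotation in the magnetic field. Below is a minimal self-contained Python sketch of a single Boris step (textbook formulation; variable names are ours, and this is not the LOCUST implementation):

    import numpy as np

    def boris_push(x, v, E, B, q, m, dt):
        """One Boris step: advance position x and velocity v (3-vectors)
        of a particle with charge q and mass m through fields E and B."""
        qm = q * dt / (2.0 * m)
        v_minus = v + qm * E                      # first half electric kick
        t = qm * B                                # rotation vector from B
        s = 2.0 * t / (1.0 + np.dot(t, t))
        v_prime = v_minus + np.cross(v_minus, t)  # rotate velocity around B
        v_plus = v_minus + np.cross(v_prime, s)
        v_new = v_plus + qm * E                   # second half electric kick
        x_new = x + dt * v_new                    # position drift
        return x_new, v_new

The rotation step conserves the magnitude of the velocity exactly in a pure magnetic field, which is why the scheme remains well behaved over the very long tracking horizons (10^5 revolutions and more) mentioned above.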

For example, the uncertainties in the separation dipole magnet shown below on the left affect the particle trajectories shown on the right (images taken from the dissertation of U. Römer):

An implementation of the most promising methods will be made freely available, based on the open-source pySDC software.

Molecular dynamics for drug design

Molecular dynamics (MD) enables the simulation of huge molecular systems, consisting of large molecules such as the ribosome (with more than 3 million atoms) in solvent boxes, and gives a route to the dynamical properties of the system. Properties of interest, such as transport coefficients, linear and nonlinear responses to perturbations, and rheological properties, can be derived from such time-dependent simulations. A crucial step in drug design is the evaluation of the drug-protein binding free energy, which requires an accuracy of about 0.5 kcal/mol. Ideally, we would like to model a whole cell in its full complexity, which would pave the way for genomics and personalized medicine at the atomic level; important intermediate milestones have to be reached beforehand.

Indeed, such quantities are obtained as time averages of MD simulations over very long time scales: simulations over several microseconds of physical time are usually necessary to reach convergence with satisfactory accuracy. This time scale has to be compared with the typical time scale of chemical bond vibrations, on the order of a femtosecond, which tightly constrains the time step in MD simulations. An even more stringent time-scale problem arises in protein folding, which would sometimes require simulations over several milliseconds of physical time (10^12 to 10^14 time steps). The Parareal strategy is one way to accelerate those computations on exascale platforms.

Parareal simulations for such systems have already been tested on small, simple molecules: the coarse propagator can be represented by simple force-field models (possibly not including polarization), while the fine solver runs within the Tinker-HP software. The purpose of this use case is to tackle systems of more realistic size over ranges of several milliseconds of physical time.
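
To make the coarse/fine splitting concrete, here is a minimal serial Python sketch of the Parareal iteration (the propagators are hypothetical placeholders; in this use case the coarse propagator would correspond to a simplified force-field integrator and the fine propagator to Tinker-HP, and the fine solves would run concurrently across time slices):

    import numpy as np

    def parareal(u0, t0, t1, n_slices, coarse, fine, n_iter):
        """Parareal: coarse/fine are propagators (u, t_a, t_b) -> u(t_b).
        Returns the solution values at the time-slice boundaries."""
        ts = np.linspace(t0, t1, n_slices + 1)
        # Initial guess: one serial sweep of the cheap coarse propagator.
        U = [u0]
        for n in range(n_slices):
            U.append(coarse(U[n], ts[n], ts[n + 1]))
        for k in range(n_iter):
            # Fine and coarse propagation of the current iterate; the
            # fine loop is embarrassingly parallel across time slices.
            F = [fine(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
            G_old = [coarse(U[n], ts[n], ts[n + 1]) for n in range(n_slices)]
            # Serial correction sweep: U_{n+1} = G_new + F_old - G_old.
            U_new = [u0]
            for n in range(n_slices):
                G_new = coarse(U_new[n], ts[n], ts[n + 1])
                U_new.append(G_new + F[n] - G_old[n])
            U = U_new
        return U

After k iterations the first k time slices coincide with the fine solution, so the iteration is stopped long before n_iter reaches n_slices; the speed-up comes from the fine solves running in parallel while only the cheap coarse sweep remains serial.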

Previously developed parallel-in-time methods will be combined with parallel-in-space methods, in particular Krylov subspace methods. We will use error estimators that allow us to identify stopping criteria for the fine solver based on Krylov subspace methods without losing convergence of the parallel-in-time algorithm. These stopping criteria will reduce the number of iterations of the Krylov subspace solver used at each iteration and each step of the parallel-in-time method, thus reducing the number of floating-point operations and the amount of communication required at each iteration in time. We will further reduce the number of iterations of the Krylov subspace solvers by using information from previous iterations, integrating it either into the preconditioner or into the search space of the Krylov solver. This requires some extra computation, but it can be done while processors are idle, waiting for results from the processor in charge of the previous time step.
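
To illustrate the coupling of inner stopping criteria to the outer iteration, here is a hypothetical Python sketch: a hand-rolled conjugate gradient solver is stopped at a relative tolerance that is tightened as the parallel-in-time iteration proceeds. The geometric schedule tol0 * theta**k is an illustrative assumption, not the project's error estimator:

    import numpy as np

    def cg(A, b, x0, tol, max_iter=500):
        """Conjugate gradient with a relative-residual stopping criterion."""
        x = x0.copy()
        r = b - A @ x
        p = r.copy()
        rs = r @ r
        b_norm = np.linalg.norm(b)
        for _ in range(max_iter):
            if np.sqrt(rs) <= tol * b_norm:
                break                     # inner solve is "accurate enough"
            Ap = A @ p
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    def fine_solve(A, b, x_prev, k, tol0=1e-2, theta=0.3):
        """Fine solve at PinT iteration k (hypothetical wrapper): warm-start
        from the previous iterate and tighten the Krylov tolerance with k,
        so early, still-inaccurate outer iterations do not pay for full
        accuracy."""
        return cg(A, b, x0=x_prev, tol=tol0 * theta**k)

Warm-starting from the previous time-parallel iterate is the simplest way to reuse information from earlier iterations; recycling Krylov search spaces or preconditioners, as mentioned above, is a more sophisticated variant of the same idea.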

Weather & climate

State-of-the-art time integration methods applied to dynamical cores (the numerics used within weather and climate simulations) are based on over 50 years of research. Novel time integration methods for dynamical cores are nowadays developed and assessed initially with simplified benchmarks, which reveal intrinsic problems such as instabilities or violations of the conservation of physical quantities that need to be taken into account. This is of extreme importance for the development of robust dynamical cores, and we will utilize corresponding benchmarks for investigating parallel-in-time integration methods on dynamical cores.

Our work will build on the first successful parallel-in-time research bridging mathematics and HPC for dynamical cores. This work exploits discretization methods also used by the European Centre for Medium-Range Weather Forecasts (ECMWF) in order to fulfill the requirements of both mathematics (multi-modes, spectral solver properties) and high-performance scientific computing (reduced wall-clock time). It has already led to improved wall-clock time vs. error with different PinT approaches, identifying Cauchy contour integral based parallel-in-time integration for exponential time integration, as well as PFASST, as the two most promising candidates.
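
The contour-integral approach can be summarized as follows (in standard rational-approximation notation, assumed here rather than quoted from the project). For a linear operator L, the exponential integrator step is written as a Cauchy integral and discretized by quadrature:

    e^{\Delta t L} u_0 = \frac{1}{2\pi i} \oint_{\Gamma} e^{z} \left( z I - \Delta t L \right)^{-1} u_0 \, dz \approx \sum_{j=1}^{N} \beta_j \left( \Delta t L + \alpha_j I \right)^{-1} u_0,

where \Gamma encircles the relevant spectrum of \Delta t L and the coefficients \alpha_j, \beta_j come from the quadrature. The N shifted linear solves are mutually independent, so they can be distributed across processors, turning the time integration itself into a parallel computation.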

As part of this project, we will work on improving the efficiency of iterative PinT methods such as PFASST and Parareal. In particular, we will adaptively vary the number of speculative time steps at runtime in order to make optimal use of the available computing resources.

Because we use numerics similar to those of ECMWF's dynamical core IFS, our research has already attracted ECMWF's interest, given its potential impact on Europe's next medium-range forecasting system.
