About TIME-X

Context

The shift towards massive parallelism. From the 1950s up until today, we have been observing Moore's law, the observation that the number of transistors on a chip doubles roughly every 18–24 months. Up until the early 2000s, this doubling translated into higher clock frequencies, allowing single-threaded code to execute faster. However, as the clock frequency of a processor increases, so does the heat it dissipates. Around 2005, the process of shrinking transistors while increasing clock frequency hit a technical limit: processors could no longer be cooled sufficiently and would simply melt. At that point, chip manufacturers decided to use the available transistors to create multiple cores on a single chip, shifting from single-core to multi-core processors. We have since seen dual-core, quad-core, and octa-core processors, and today machines with tens or even hundreds of thousands of cores, opening the pathway towards massive parallelism.

The urgent need for new algorithmic concepts. The shift towards massive parallelism is causing a growing disruption in the computational sciences and poses numerous challenges for algorithm design. Before this shift, novel numerical algorithms and hardware evolution contributed independently to the increase in computational power. Now that the parallelization potential of an algorithm is critical for its performance on modern HPC architectures, it is entirely possible that today's most efficient algorithms (in terms of the total number of required computations) prove suboptimal on massively parallel computers, because they contain a few serial steps that cannot be parallelized. Therefore, to unlock the growth in HPC system computing power, parallelization must be taken into account from the outset.

The recent rise of time parallelization. The need for new algorithmic concepts is particularly urgent in fields that require simulating the evolution of a system from a given initial condition. Traditionally, the focus in these domains lies on efficient parallelization in space, while time is processed serially via time-stepping. Although this is a viable approach in many situations, the amount of parallelism that can be exploited is inherently limited by the number of spatial degrees of freedom. For long-time simulations, including those arising from the applications treated by TIME-X (medicine, electromagnetics, drug design, and weather/climate), such algorithms will not be able to fully exploit the growth in computational power offered by future Exascale HPC architectures. Moreover, forward-in-time simulation is often only part of the task, representing, for instance, a constraint when computing an optimal design or performing real-time control. Therefore, parallelization in time is a crucial component of Exascale computing.
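The serial time-stepping bottleneck described above is precisely what parallel-in-time (PinT) methods attack. As a concrete illustration, here is a minimal sketch of Parareal, one well-known PinT scheme (the text does not commit to a specific algorithm; the model problem, propagators, and parameters below are illustrative assumptions): a cheap coarse sweep stays serial, but the expensive fine propagations over the individual time slices become independent and could run concurrently on separate processors.

```python
import math

# Illustrative model problem (an assumption, not from the text):
# dy/dt = lam * y, y(0) = y0, on [0, T], split into N time slices.
lam, T, N = -1.0, 2.0, 10
dt = T / N  # length of one time slice

def G(y):
    """Coarse propagator: a single, cheap explicit Euler step over one slice."""
    return y + dt * lam * y

def F(y, m=100):
    """Fine propagator: m explicit Euler sub-steps (the accurate, costly solver)."""
    h = dt / m
    for _ in range(m):
        y = y + h * lam * y
    return y

def parareal(y0, K=5):
    # Initial guess: one serial sweep of the coarse propagator.
    y = [y0] * (N + 1)
    for n in range(N):
        y[n + 1] = G(y[n])
    for _ in range(K):
        # Fine stage: the N slices are independent, so on a real machine
        # these calls would run in parallel; here they run in a plain loop.
        f = [F(y[n]) for n in range(N)]
        # Serial coarse correction sweep (the standard Parareal update).
        y_new = [y0] * (N + 1)
        for n in range(N):
            y_new[n + 1] = G(y_new[n]) + f[n] - G(y[n])
        y = y_new
    return y

print(abs(parareal(1.0)[-1] - math.exp(lam * T)))  # small: iterate tracks the fine solution
```

After a handful of iterations the Parareal iterate matches the serial fine solution to within its own discretization error, while the dominant cost, the fine propagations, parallelizes across the time slices.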

TIME-X Objectives

By bringing together experts in numerical analysis and applied mathematics, computer scientists, and scientists from four selected application domains with direct high-level impact on society, TIME-X aims to advance parallel-in-time integration from an academic methodology into a widely available technology, delivering Exascale performance for a wide range of scientific and industrial applications.

In particular, we will:

  1. Enhance the scalability and robustness of application software implementations on future Exascale HPC systems using PinT, ensuring that the expected technical challenges of Exascale systems (increased concurrency, hardware failures, reduced memory and memory bandwidth, and energy constraints) are built into the algorithms.
  2. Deliver a series of necessary methodological advances to propel PinT from an approach with a formulated concept and isolated successes to a technology with demonstrated potential to enable Exascale performance for a wide range of applications.
  3. Demonstrate the efficacy of PinT methods in four diverse and challenging applications of high societal relevance (medicine, electromagnetics, drug design, weather/climate).

Project structure

The structure of the TIME-X project can be seen in the figure below. The scientific work is performed in three strongly connected work packages, which are detailed below.

WP2: Enabling extreme scale parallel computing through PinT

To make an impact in extreme-scale computing practice, PinT needs to provide speedup on top of widely established parallelization techniques in space or across ensembles. Therefore, this work package will enable the scalable and robust combination of classical parallelization approaches with PinT. We will develop effective ways to integrate PinT algorithms with spatial parallelization and deliver optimal strategies for mapping space-time-ensemble parallel software to future architectures. By devising PinT methods that remain effective when confronted with inexactness from various sources, we will improve efficiency by reducing memory and communication requirements while simultaneously improving robustness. In doing so, WP2 will provide the methodologies needed to leverage PinT techniques for improved extreme-scale performance in the applications of WP4.

Research highlights in WP2

WP3: PinT algorithms for simulation, optimization and uncertainty quantification

As a next step in the development of PinT methods with impact in extreme-scale computing practice, this work package focuses on algorithmic developments that go beyond simulation with a given mathematical model, together with the mathematical analysis of their efficiency. We will develop algorithms for optimizing a design or the control of an operational system, in which the PinT simulations themselves are embedded within an iterative computational procedure. We will develop algorithms for uncertainty quantification and/or data assimilation for cases where the mathematical model contains uncertain parameters, and/or measurement data is available to inform the model. Finally, we will develop algorithms that take advantage of multiple mathematical models (at different scales) to increase parallel efficiency.

Research highlights in WP3

WP4: Large-scale applications of PinT

This work package is concerned with the PinT simulation of prototypical systems in four selected application domains: medicine, electromagnetics, drug design, and weather/climate. As such, WP4 will put to the test the methodological advances of WP3 as well as the software developments of WP2. Periodic integration of developments from these work packages into the application-specific simulations will steer and adjust their work plans. Demonstrating the speed-ups realized in these applications will maximize impact in those fields and beyond.

Research highlights in WP4