Dec-13-2022

Scientists at the Department of Energy’s Oak Ridge National Laboratory are leading a new project to ensure that the fastest supercomputers can keep up with big data from high energy physics research.

“For scientific big data, this is one of the largest challenges in the world,” said Marcel Demarteau, director of ORNL’s Physics Division and principal investigator of the project, which first aims to address a data tsunami that will arise from a major upgrade to the world’s most powerful particle accelerator, the Large Hadron Collider, or LHC. “Each of its largest particle detectors will be capable of streaming 50 terabits per second — the data equivalent to watching 10 million high-definition Netflix movies concurrently.”
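That comparison can be sanity-checked with back-of-the-envelope arithmetic. Below is a minimal check in Python, assuming a high-definition stream runs at roughly 5 megabits per second; the per-stream bitrate is an assumption, not a figure from the article:

    # Rough check of the "10 million HD streams" comparison.
    # Assumption: one HD video stream is roughly 5 megabits per second.
    detector_rate_tbps = 50                            # terabits per second, from the quote
    hd_stream_mbps = 5                                 # assumed per-stream bitrate
    streams = (detector_rate_tbps * 1e12) / (hd_stream_mbps * 1e6)
    print(f"{streams:,.0f} concurrent HD streams")     # 10,000,000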

The LHC sits deep underground at CERN, the European Organization for Nuclear Research, on the border between Switzerland and France. Smashing together protons and heavier nuclei, the LHC produces progeny particles that its detectors track. The detectors generate enormous amounts of data that researchers compare against simulations to validate theories. The knowledge gained improves our understanding of fundamental forces.

Researchers expect the upgraded particle accelerator, named the High-Luminosity LHC, to begin operations in 2029. Luminosity measures how tightly packed particles are as they zip through the accelerator and collide; higher luminosity means more particle collisions. The upgraded LHC promises discoveries, but at a cost: it will produce far more data than current simulation codes can keep up with.

“The High-Luminosity LHC will boost the number of proton collisions to 10 times what the LHC can produce,” Demarteau said.

To address this challenge, partners in the new project are developing a simulation code called Celeritas, the Latin word for speed. Current simulation codes work by calculating the particles’ electromagnetic interactions as they move through the detectors. To vastly increase the data throughput of high-fidelity simulations of high energy physics experiments, Celeritas will use new algorithms that employ graphics processing units for massively parallel processing on leadership-class computing platforms such as ORNL’s Frontier. The world’s first exascale computer, Frontier can perform a quintillion calculations per second. In other words, it can complete in one second a task that would take the entire global population more than four years if each person completed one calculation every second.
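To illustrate the kind of data-parallel pattern that maps well onto GPUs, the sketch below advances a large batch of particle tracks in lockstep. It is illustrative only and not drawn from the Celeritas code: NumPy on the CPU stands in for a GPU kernel, and the step-length model is a placeholder assumption.

    import numpy as np

    # Toy data-parallel step: move many particle tracks at once.
    # Illustrative only; not Celeritas code. NumPy stands in for a GPU kernel.
    rng = np.random.default_rng(0)
    n_tracks = 1_000_000

    positions = np.zeros((n_tracks, 3))                 # x, y, z in arbitrary units
    directions = rng.normal(size=(n_tracks, 3))
    directions /= np.linalg.norm(directions, axis=1, keepdims=True)

    # Sample a step length for every track simultaneously, then advance all
    # of them in one vectorized operation rather than looping per particle.
    step_lengths = rng.exponential(scale=1.0, size=(n_tracks, 1))
    positions += step_lengths * directions

    print(positions[:3])                                # first few updated tracks

The point of the pattern is that every track undergoes the same operation at the same time, which is exactly the kind of work a GPU executes efficiently.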

Celeritas is one of five projects that DOE’s Office of Science is funding to accelerate high energy physics discoveries through high-performance computing. Its Advanced Scientific Computing Research and High Energy Physics offices support the project through a program called Scientific Discovery through Advanced Computing, or SciDAC.

“Celeritas is an important step in reworking the entire way computational simulations and analyses are done in the high energy physics ecosystem,” said ORNL’s Tom Evans. He will lead the multilaboratory project, which includes scientists at Argonne National Laboratory and Fermi National Accelerator Laboratory, or Fermilab.

Evans also leads ORNL’s High-Performance Computing Methods for Nuclear Applications Group and spearheads applications development related to the nation’s energy portfolio for DOE’s Exascale Computing Project. He uses Monte Carlo techniques that rely on repeated random sampling to step each particle through a virtual world and simulate the history of its movement.
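As a rough illustration of that idea, and not the project’s actual code, the short Python sketch below follows a single particle through a one-dimensional slab: at each step it randomly samples the distance to the next interaction and randomly decides whether the particle scatters or is absorbed. The slab thickness, mean free path and absorption probability are placeholder assumptions, not physics data.

    import random

    def transport_one_particle(slab_thickness=10.0, mean_free_path=1.0,
                               absorption_probability=0.3, seed=None):
        """Toy Monte Carlo history: step one particle through a 1D slab.

        Illustrative only; all parameters are placeholder assumptions.
        """
        rng = random.Random(seed)
        x = 0.0
        history = []
        while True:
            x += rng.expovariate(1.0 / mean_free_path)   # sample distance to next interaction
            if x >= slab_thickness:
                history.append(("escaped", x))
                break
            if rng.random() < absorption_probability:
                history.append(("absorbed", x))
                break
            history.append(("scattered", x))             # forward scatter, keep stepping
        return history

    print(transport_one_particle(seed=42))

Repeating such randomly sampled histories for huge numbers of particles is what Monte Carlo transport means in practice, and it is the workload Celeritas aims to run in parallel on GPUs.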