The fact that the world’s fourth-fastest supercomputer is now located in Europe marks a significant step toward greater technological independence from the U.S. and China. The newly inaugurated “Jupiter” supercomputer in Jülich is set to achieve a computing performance of one exaFLOP per second later this year.
Here, “FLOPS” (or “FLOP/s”) doesn’t refer to “blunders” – but rather to floating-point operations per second. “Exa” is the SI prefix for 10¹⁸ – i.e., one trillion on the long scale, or one quintillion on the short scale, as used in the U.S. or Russia.
The Jupiter supercomputer, officially inaugurated in early September 2025 at the Jülich Research Centre between Cologne and the Belgian border, is expected – according to Heise – to ultimately deliver approximately 1.4 exaFLOPS for double-precision floating-point calculations (FP64), powered by nearly 24,000 NVIDIA GH200 Grace Hopper superchips. That far exceeds its initial benchmark result for the Top500 list – yet even with that preliminary score of 793 petaFLOPS, Jupiter already secured fourth place. For AI applications – which often use lower-precision FP8 arithmetic – Jupiter is projected to reach over 70 exaFLOPS.
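For readers juggling the peta- and exa-scale figures above, here is a minimal sketch of the unit bookkeeping; the numeric values are simply the ones quoted in this article:

```python
# SI-prefix bookkeeping: peta = 10^15, exa = 10^18,
# so 1 exaFLOPS = 1,000 petaFLOPS.
PETA, EXA = 1e15, 1e18

rmax = 793 * PETA      # Jupiter's preliminary Top500 score (FP64)
target = 1.4 * EXA     # projected final FP64 performance

print(rmax / EXA)      # ≈ 0.793 -> still short of the 1-exaFLOPS mark
print(target / PETA)   # ≈ 1400 petaFLOPS
```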
“Jupiter Strengthens Germany’s Digital Sovereignty”
With 24,000 NVIDIA GPUs, Jupiter is ready for AI, quantum physics, and data-intensive research. Image source: Pexels/panumas nikhomkhai.
“The Jupiter supercomputer strengthens Germany’s digital sovereignty,” says Dr. Ralf Wintergerst, President of Bitkom, adding in a press release from the German digital association: “With Jupiter, Germany has entered the global elite of high-performance computing – and thereby improved conditions for developing artificial intelligence. Crucially, access to Jupiter must be made as straightforward and unbureaucratic as possible – for both startups and established companies. This would give AI development in Germany a real boost and help attract top international talent.”
As t3n reports, China has invested massively in supercomputer development and construction for roughly two decades. In 2013, the Tianhe-2 – with 33 petaFLOPS – dethroned the previous U.S.-built No. 1 system, outperforming it by a factor of two. It held the top spot until 2016, when it was overtaken by China’s Sunway TaihuLight – only to lose first place again in 2018 to the U.S. system Summit at Oak Ridge National Laboratory in Tennessee. Summit, after an upgrade, reached 148.6 petaFLOPS (148,600 teraFLOPS) – but was itself dethroned in May 2020 by Japan’s Fugaku supercomputer at the RIKEN Center for Computational Science in Kobe.
Europe’s top-ranked system in 2024 was HPC6, then the world’s fifth-fastest machine, with a measured 477.90 petaFLOPS (Rmax) against a theoretical peak (Rpeak) of 606.97 petaFLOPS. As of today, El Capitan – likewise built on an HPE Cray platform – holds the global lead with 1.742 exaFLOPS Rmax and 2.74638 exaFLOPS Rpeak.
Already Ranked #4 in Trial Operation
As t3n further notes, however, the Top500 list is steadily losing relevance – and since 2024, China has stopped reporting any of its supercomputers to the list’s organizers. Beyond that, a growing number of AI-dedicated systems – especially from the U.S. and China – are entering the exaFLOPS realm. That’s because AI training typically requires far less precision than standard 64-bit arithmetic.
As noted earlier, Jupiter achieved 793 petaFLOPS during its trial operation starting in June 2025 – landing it squarely at No. 4 on the Top500. Benedikt von St. Vieth, head of the High-Performance Computing, Cloud and Data Systems and Services department at the Jülich Supercomputing Centre (JSC), remains confident: “We expect to hit one exaFLOP before the end of this year.”
Jupiter has a modular design and has been in operation since May 2025. During the June benchmark run, however, some racks remained switched off to protect the system: at that time, only 22,000 of the now fully deployed 24,000 GPUs (graphics processing units) were active. “We observed the system running too hot – so we had to optimize cooling,” von St. Vieth told t3n.
In 2026, at least one further module is set to join the Jupiter Booster: the Jupiter Cluster, equipped with especially powerful processors designed for highly “data-intensive applications.” However, these processors – developed in Europe by SiPearl and manufactured by TSMC – have not yet been delivered.
Also Suited for Training Large Language Models
Jupiter is primarily designed to run all kinds of simulation calculations. Its application spectrum spans cosmological models in astrophysics, simulations of biological systems, particle physics computations – and even AI tools for better analysis of brain data and neural functions.
AI capabilities are naturally central to Jupiter, emphasizes Andreas Herten, head of the Accelerating Devices Lab at JSC. The Jupiter Booster, developed by Eviden, integrates around 24,000 NVIDIA GH200 Grace Hopper Superchips, according to the Helmholtz Institute. The module comprises roughly 6,000 compute nodes, each equipped with four Grace Hopper Superchips – and each superchip pairs a Grace CPU with a Hopper GPU serving as AI accelerator. The module thus houses approximately 24,000 GPUs and just as many CPUs.
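The node counts above can be sanity-checked with nothing more than the figures just quoted:

```python
# Jupiter Booster node arithmetic, using the figures from the article.
nodes = 6_000
superchips_per_node = 4   # each GH200 pairs one Grace CPU with one Hopper GPU

gpus = nodes * superchips_per_node
cpus = gpus               # 1:1 CPU/GPU pairing on the superchip

print(gpus, cpus)         # 24000 24000
```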
This computational capacity is sufficient to train a large language model (LLM) with 100 billion parameters in about one week. “That’s obviously a highly simplified estimate,” says Herten in t3n, adding: “It depends on the model architecture, the volume of training data, and the desired precision.” In theory, even significantly larger models could be trained – but that would take considerably longer. A planned AI model named Jarvis – scheduled for early 2026 – will serve primarily to train AI applications under the European AI Factory initiative.
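Herten’s one-week figure can be roughly reproduced with the widely used ~6·N·D rule of thumb for transformer training FLOPs. The token budget and sustained throughput below are illustrative assumptions on my part, not published Jupiter numbers:

```python
# Back-of-the-envelope LLM training-time estimate. All inputs are
# illustrative assumptions, not official Jupiter figures.

def training_days(params: float, tokens: float, sustained_flops: float) -> float:
    """Wall-clock days via the common ~6 * N * D training-FLOPs heuristic."""
    total_flops = 6 * params * tokens
    return total_flops / sustained_flops / 86_400  # 86,400 seconds per day

# Assumptions: 100 billion parameters, ~20 training tokens per parameter
# (Chinchilla-style), ~2 exaFLOP/s sustained mixed-precision throughput.
days = training_days(params=100e9, tokens=2e12, sustained_flops=2e18)
print(f"{days:.1f} days")  # ≈ 7 days, in line with the "about one week" estimate
```

As the quote notes, the real answer shifts with architecture, data volume, and numeric precision; halving the sustained throughput doubles the estimate.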
Header Image Source: Forschungszentrum Jülich / Sascha Kreklau