The World’s 20 Most Powerful AI Supercomputers

Circle graphic showing the world's most powerful supercomputers.

The World’s Most Powerful AI Supercomputers

  • xAI’s Colossus cluster in Memphis leads the world with an estimated 200,000 Nvidia H100 chip equivalents.
  • Meta and Microsoft/OpenAI each operate clusters with roughly 100,000 H100 equivalents, with Oracle's largest cluster close behind at 65,536.
  • The cost and power demands of these systems are rising sharply, with costs doubling roughly every 13 months (see the sketch after this list).
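To give a feel for what a 13-month doubling time implies, here is a minimal back-of-envelope sketch in Python. The doubling time comes from the chart's source; the dollar figure is purely illustrative, not a real price tag.

```python
# Back-of-envelope: what a 13-month cost doubling time implies.
# The 13-month figure is from the article; the starting budget below
# is purely illustrative.

DOUBLING_TIME_MONTHS = 13

def growth_multiple(months: float) -> float:
    """Cost multiple after `months`, assuming steady exponential growth."""
    return 2 ** (months / DOUBLING_TIME_MONTHS)

annual = growth_multiple(12)
print(f"Growth over 12 months: x{annual:.2f}")            # ~x1.90 per year
print(f"Growth over 3 years:   x{growth_multiple(36):.1f}")  # ~x6.8

# Illustrative: a hypothetical $1B frontier cluster today would imply
# roughly $1.9B for comparable frontier scale a year from now.
print(f"$1.0B today -> ${1.0 * annual:.1f}B in a year (illustrative)")
```

In other words, a 13-month doubling time works out to costs growing close to twofold every year.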

Today, the rapid rise of AI is fueling a new arms race for compute power.

Tech companies are rapidly building AI supercomputers, also known as GPU clusters or AI datacenters, by packing them with ever-growing numbers of chips for training advanced AI models. So far, the U.S. has taken the lead, as Big Tech firms pour billions into AI infrastructure to secure a competitive edge.

This graphic shows the world’s leading compute clusters, based on data from Epoch AI.

Ranked: The Top 20 Compute Clusters Globally

Below, we show the most powerful AI supercomputers in the world as of mid-2025:

Compute Cluster | H100 Equivalents | Owner | Country | Certainty
xAI Colossus Memphis Phase 2 | 200,000 | xAI | U.S. | Likely
Meta 100k | 100,000 | Meta AI | U.S. | Likely
OpenAI/Microsoft Goodyear Arizona | 100,000 | Microsoft, OpenAI | U.S. | Likely
xAI Colossus Memphis Phase 1 | 100,000 | xAI | U.S. | Confirmed
Oracle OCI Supercluster H200s | 65,536 | Oracle | U.S. | Likely
Tesla Cortex Phase 1 | 50,000 | Tesla | U.S. | Confirmed
Lawrence Livermore NL El Capitan Phase 2 | 44,143 | U.S. Department of Energy | U.S. | Confirmed
CoreWeave H200s | 42,000 | CoreWeave | U.S. | Likely
Lambda Labs H100/H200 | 32,000 | Lambda Labs | U.S. | Likely
Anonymized Chinese System | 30,000 | N/A | China | Confirmed
Meta GenAI 2024a | 24,576 | Meta AI | U.S. | Confirmed
Meta GenAI 2024b | 24,576 | Meta AI | U.S. | Confirmed
Jupiter, Jülich | 23,536 | EuroHPC JU, Jülich Supercomputing Center | Germany | Confirmed
Oracle OCI MI300x | 21,649 | Oracle | U.S. | Likely
Anonymized Chinese System | 20,000 | N/A | China | Confirmed
Andreessen Horowitz Oxygen | 20,000 | Andreessen Horowitz | U.S. | Likely
AWS EC2 P5 UltraClusters | 20,000 | Amazon | U.S. | Likely
Anonymized Chinese System | 20,000 | N/A | China | Likely
Anonymized Chinese System | 20,000 | N/A | China | Confirmed
NexGen Cloud Hyperstack AQ Compute Supercomputer | 16,384 | NexGen Cloud | Norway | Likely
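The "H100 equivalents" column expresses each cluster's size as if it were built entirely from Nvidia H100s, with other accelerators weighted by their throughput relative to an H100. The sketch below shows that kind of normalization in Python; the per-chip ratios and the example fleet are illustrative assumptions for this article, not Epoch AI's exact figures.

```python
# Sketch: normalizing a mixed accelerator fleet into "H100 equivalents".
# Each chip type is weighted by its assumed throughput relative to an
# Nvidia H100. The ratios below are rough assumptions for illustration,
# not Epoch AI's published weighting.

ASSUMED_H100_RATIO = {
    "H100": 1.0,
    "H200": 1.0,    # assumption: similar peak FLOP/s to the H100
    "MI300X": 1.3,  # assumption: somewhat higher peak dense throughput
}

def h100_equivalents(fleet: dict[str, int]) -> float:
    """Sum chip counts weighted by their assumed H100 performance ratio."""
    return sum(count * ASSUMED_H100_RATIO[chip] for chip, count in fleet.items())

# Hypothetical mixed cluster, purely for illustration:
example_fleet = {"H100": 32_000, "H200": 16_384, "MI300X": 8_192}
print(f"{h100_equivalents(example_fleet):,.0f} H100 equivalents")
```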

xAI’s Colossus Memphis Phase 2 tops the chart with 200,000 H100 equivalents—twice the size of the next largest cluster.

Colossus delivers roughly 10^20.6 FLOP/s of non-sparse compute, or over 400 quintillion operations per second. That would be enough to complete the full two-week training run of OpenAI's 2020 GPT-3 in under two hours.
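A quick back-of-envelope check of that claim, in Python: the Colossus throughput comes from the figure above, the GPT-3 training compute (~3.14e23 FLOP) is the commonly cited estimate from the GPT-3 paper, and the utilization figure is our assumption.

```python
# Back-of-envelope check: how fast could Colossus retrain GPT-3?
# Assumptions: Colossus at ~10^20.6 FLOP/s non-sparse (per the article),
# GPT-3 training cost ~3.14e23 FLOP (commonly cited estimate), and 40%
# hardware utilization (our assumption).

COLOSSUS_FLOPS = 10 ** 20.6   # ~4.0e20 FLOP/s, i.e. ~400 quintillion/sec
GPT3_TRAIN_FLOP = 3.14e23     # total training compute for GPT-3 (2020)
UTILIZATION = 0.40            # assumed fraction of peak actually achieved

seconds = GPT3_TRAIN_FLOP / (COLOSSUS_FLOPS * UTILIZATION)
print(f"Estimated training time: {seconds / 3600:.1f} hours")
# -> roughly half an hour at 40% utilization, and still under two hours
#    even at a pessimistic ~15% utilization.
```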

Other major players are also scaling up. Meta's 100K system, Microsoft/OpenAI's Goodyear cluster, and Oracle's H200-based machine follow, with H100-equivalent counts ranging from 65,536 to 100,000.

Notably, Google is absent from the rankings and Amazon appears only lower down the list, as neither company has disclosed as much information on its AI hardware builds.

Learn More on the Voronoi App

To learn more about this topic from a Big Tech perspective, check out this graphic on the rise of hyperscaler spending on AI.
