Elon Musk Launches ‘World’s Most Powerful AI Cluster’: One With 100,000 NVIDIA H100 GPUs

Musk’s goal is to train his Grok 3 artificial intelligence model, which he promises to have ready in December.

Elon Musk has been preparing for months to win the AI battle. Now xAI has taken another step in that direction, bringing online a giant cluster of 100,000 NVIDIA H100 GPUs that will be critical to training its next AI model.

AI obsession. After his falling-out with OpenAI, the mogul created xAI in 2023 to compete in that market. Later that year, he launched Grok, his own (and deliberately sarcastic) rival to ChatGPT, and a few months later he tried to win over developers by releasing much of the project as open source.

100,000 NVIDIA H100 GPUs. A few hours ago, Musk announced the launch of the “Memphis Supercluster,” which began operations with 100,000 liquid-cooled NVIDIA H100 graphics cards. According to him, it is “the world’s most powerful AI training cluster.” The project was made possible in collaboration with Supermicro, whose CEO Charles Liang congratulated Musk after the cluster’s launch.


Card famine. The head of Tesla and SpaceX has been buying up these cards for months, both for xAI and to train Tesla’s self-driving systems, though it appears some shipments have recently been redirected to X.

xAI will have a new model in December. Musk said that “this is a significant advantage for training the most powerful AI in the world by any metric,” and added that it would be available in “December of this year.” This likely refers to Grok 3, the third generation of the model, which is currently less popular than its competitors.

Musk didn’t want to wait. Two months ago, The Information reported that Musk was preparing this “Computing Gigafactory.” At the time, using the newer B200 cards was under consideration, but Musk apparently preferred not to wait for them, despite their theoretical gains in performance and efficiency.

This “supercomputer” would theoretically top the Top500 list. The giant Memphis Supercluster could suddenly become the absolute leader of the Top500, at least judging by its number of GPUs and their raw power.

The world’s most powerful supercomputers don’t have that many GPUs: Frontier has 37,888 AMD GPUs, Aurora has 60,000 from Intel, and Microsoft’s Eagle has 14,400 NVIDIA H100s. This new monster steals the show, though it’s unclear whether its narrow focus on training AI models will land it in future editions of this prestigious list of the world’s most powerful supercomputers.

Image | xAI

In Xataka | Despite everything, Threads has become a serious competitor to X: 175 million users and counting.
