Tesla Increases Number Of A100 GPUs In AI Supercomputer By 28%
Electric car maker Tesla has increased the number of A100 GPUs in its AI supercomputer by roughly 28%. The system previously held 5,760 GPUs; the company has now expanded it to 7,360.
However, Tesla has not benchmarked the system publicly. The GPU count alone would be enough to place it around number seven on the Top500 list of the world's supercomputers.
Tesla’s Supercomputer Predicted To Perform 100 Linpack Petaflops
If Tesla does benchmark the system, it will compete with other GPU-based machines such as Nvidia's Selene, which has 4,480 A100 GPUs, and NERSC's Perlmutter, which has 6,144 Nvidia A100 GPUs.
Selene performs 63.46 Linpack petaflops, while Perlmutter delivers 70.87. Scaling Tesla's machine from Selene's Top500 submission suggests it could reach roughly 100 Linpack petaflops.
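To show where that figure comes from, here is a minimal back-of-the-envelope sketch in Python that scales Selene's Linpack result linearly with GPU count. Linear scaling is an assumption; real Linpack efficiency depends on the interconnect, host CPUs, and tuning, so the result is a rough ceiling rather than a measured number.

# Rough estimate: assume Linpack throughput scales roughly linearly
# with the number of A100 GPUs (an optimistic simplification).
selene_gpus = 4480          # Nvidia Selene A100 count
selene_petaflops = 63.46    # Selene's Linpack (Rmax) result

tesla_gpus = 7360           # Tesla's expanded A100 count

estimate = selene_petaflops * tesla_gpus / selene_gpus
print(f"Estimated Linpack performance: {estimate:.1f} petaflops")
# Prints roughly 104 petaflops, consistent with the ~100 petaflops figure above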
However, Tesla likely runs mostly lower- and single-precision workloads (bfloat16, FP16, FP32, and so on). At the start of 2022, Meta revealed the details of a faster AI supercomputer, the Research SuperCluster.
The Research SuperCluster will have over 16,000 A100 GPUs, which would deliver more than 200 double-precision petaflops. Meta expects to complete the supercomputer this summer.
Andrej Karpathy, Tesla's senior director of AI, first revealed details of the supercomputer in June 2021, during the CVPR 2021 event.
Tesla's AI Day Set For September 30th
Karpathy told attendees that the company was building what he called an insane supercomputer. At the time, the machine had 720 nodes, each powered by eight of Nvidia's 80GB A100 GPUs.
That totaled 5,760 A100s. The addition of 1,600 GPUs, or 200 nodes, brings the machine to 920 nodes today. Tim Zaman, an engineering manager at Tesla, tweeted about the expansion.
Zaman framed the announcement as a plug for the upcoming MLSysConf, which runs from August 29th to September 1st. Tesla will also host its AI Day on September 30th.
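For reference, the node and GPU figures above tie together as a quick tally; all values are taken directly from the article.

gpus_per_node = 8                         # eight 80GB A100s per node

old_nodes = 720
old_gpus = old_nodes * gpus_per_node      # 5,760 A100s as of CVPR 2021

added_nodes = 200
added_gpus = added_nodes * gpus_per_node  # 1,600 additional A100s

new_nodes = old_nodes + added_nodes       # 920 nodes
new_gpus = old_gpus + added_gpus          # 7,360 A100s in total

print(f"Increase: {added_gpus / old_gpus:.1%}")  # about 27.8%, i.e. roughly 28%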
Several chip makers are vying to power the world's best AI supercomputers. Alongside Nvidia's GPUs, AMD's GPUs now power Frontier, currently the world's fastest supercomputer.
Additionally, Intel is preparing to release its Ponte Vecchio GPU, which will power the upcoming Aurora supercomputer. Custom processors are also gaining popularity.
Amazon has introduced its Inferentia and Trainium AI chips, Microsoft has invested in FPGAs for handling AI workloads, and Google has begun using its fourth-generation TPUs.