
“We need a bigger GPU”

Nvidia’s CEO yesterday unveiled his company’s new GPU dedicated to artificial intelligence (AI), the B200. Based on a new architecture called Blackwell, this chip improves efficiency by reducing power consumption.


Last night, the man who claims AI will overtake humans within five years presented a new breakthrough in the field. Nvidia CEO Jensen Huang unveiled the Blackwell B200 chip and the GB200 “superchip,” both designed for artificial intelligence workloads, in front of an audience of 18,000 people. To introduce his new creations, the co-founder of Nvidia declared: “We need a bigger GPU.” The tone, in Nvidia green, was set.

What does Nvidia’s Blackwell B200 chip offer?

Currently, the most common GPU for AI workloads is the Nvidia H100, a chip that notably powers Tesla’s supercomputer. To companies in the field, such as OpenAI, the component sells for more than $40,000 per unit. Older cards, such as the A100 launched in 2020, still sell for around $20,000. These chips made Nvidia the third-largest market capitalization in the world, behind Microsoft and Apple.

Blackwell chips go further than this wildly successful GPU. The new B200 GPU delivers up to 20 petaflops of FP4 compute thanks to its 208 billion transistors. The GB200 superchip combines two of these GPUs with a Grace processor, providing up to 30 times better performance for the computations needed to run large language models like GPT.

On a GPT-3 benchmark with 175 billion parameters, Nvidia claims the GB200 delivers seven times the performance of the H100. Nvidia also claims it trains four times faster, that is, the computations used to build large language models.


The B200 GPU is more energy efficient than other Nvidia chips

But performance is not the only area of significant progress. AI’s power consumption is notoriously enormous, and these cards are more efficient: the GB200 “reduces cost and energy consumption by up to 25 times” compared to an H100, according to Nvidia.

For example, training a model with 1.8 trillion parameters would previously require 8,000 Hopper GPUs and 15 megawatts of power, according to Nvidia. Today, Jensen Huang explains, 2,000 Blackwell GPUs can do the same job using just four megawatts.
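
As a rough sanity check on these figures, here is a minimal back-of-the-envelope sketch in Python, using only the numbers Nvidia quotes above; the per-GPU values it derives are illustrative, not official specifications.

```python
# Back-of-the-envelope check of Nvidia's stated training scenario
# (a 1.8-trillion-parameter model); figures come from the article above.

hopper_gpus, hopper_power_mw = 8_000, 15        # H100 (Hopper) cluster
blackwell_gpus, blackwell_power_mw = 2_000, 4   # B200 (Blackwell) cluster

# Implied average power draw per GPU, in kilowatts (a whole-system figure,
# not just the chip itself).
hopper_kw_per_gpu = hopper_power_mw * 1_000 / hopper_gpus           # ~1.9 kW
blackwell_kw_per_gpu = blackwell_power_mw * 1_000 / blackwell_gpus  # ~2.0 kW

# The headline gain comes from needing 4x fewer GPUs at roughly the same
# per-GPU draw, i.e. about 3.75x less total power for the same job.
gpu_count_ratio = hopper_gpus / blackwell_gpus            # 4.0
total_power_ratio = hopper_power_mw / blackwell_power_mw  # 3.75

print(f"H100 cluster: {hopper_kw_per_gpu:.2f} kW/GPU, "
      f"B200 cluster: {blackwell_kw_per_gpu:.2f} kW/GPU")
print(f"{gpu_count_ratio:.0f}x fewer GPUs, {total_power_ratio:.2f}x less total power")
```

The separate “25x” figure quoted earlier is Nvidia’s own cost-and-energy claim and is not derived from this raw megawatt comparison.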

A great technological advance, but investors were unmoved. Instead of extending its wild rise of the past twelve months, Nvidia’s stock fell 1.77% following the announcement of the Blackwell architecture and the B200 GPU. Nvidia won’t overtake Apple, at least not yet.

Source: Bloomberg

  • Nvidia CEO Jensen Huang last night introduced his new GPU dedicated to AI calculations.
  • The GPU is also paired in a superchip, the GB200, which delivers even greater performance.
  • The chip also marks a big advance in power consumption.
