Nvidia Unveils Blackwell AI Accelerator & NIM Software at GTC

Published on 19 Mar, 2024, 8:47 AM IST
Updated on 19 Mar, 2024, 9:02 AM IST
Sahil Mohan Gupta
4 min read

[Image] The Blackwell GB200 includes two B200 GPUs and one ARM-based Grace CPU

Nvidia, the world's largest semiconductor company, unveiled its latest generation of AI accelerators, Blackwell, along with a new revenue-generating software offering called Nvidia Inference Microservice (NIM), at its GTC developer conference. The Blackwell chip, named GB200, is set to start shipping later this year and represents a significant upgrade over the previous generation's Hopper-based H100 chip. Nvidia also showed off a cloud-based quantum computing simulation service.

During the conference, Nvidia CEO Jensen Huang highlighted the necessity for more powerful GPUs, stating, "Hopper is fantastic, but we need bigger GPUs. We have created a processor for the generative AI era." This statement underscores the company's commitment to pushing the boundaries of AI hardware performance to meet the growing demands of the industry.

Nvidia aims to transform itself into an end-to-end platform by offering both the underlying chipset architecture and software layer. This approach mirrors the strategies employed by tech giants like Apple, Microsoft, and Google in the consumer electronics space. The Blackwell platform consists of the GB200 chipset and the NIM software layer, showcasing Nvidia's holistic approach to AI hardware and software development.

Manuvir Das, Nvidia's enterprise VP, emphasised the company's expanding commercial software business. "The sellable commercial product was the GPU and the software was all to help people use the GPU in different ways. Of course, we still do that. But what's really changed is, we really have a commercial software business now," he said. This shift highlights Nvidia's recognition of the increasing significance of software in the AI ecosystem.

The NIM software platform is designed to simplify the process of running programs on any Nvidia GPU, including older models. Manuvir Das explained, "If you're a developer, you've got an interesting model you want people to adopt, if you put it in a NIM, we'll make sure that it's runnable on all our GPUs, so you reach a lot of people." This feature aims to make Nvidia's AI hardware more accessible and user-friendly for developers.

The Blackwell-based GB200 chips offer a substantial performance boost over the widely used H100. With 20 petaflops of AI performance, the GB200 significantly outpaces the H100's 4 petaflops. The GB200 combines two B200 Blackwell GPUs with an ARM-based Grace CPU and features a transformer engine tuned specifically for the transformer architecture underpinning models such as OpenAI's ChatGPT and Anthropic's Claude. This combination is expected to enable the training of larger and more complex AI models.

Nvidia announced that all four major hyperscalers—Microsoft Azure, AWS, Google Cloud, and Oracle—will offer the GB200. Notably, AWS plans to build a server cluster with over 20,000 GB200 chips, enough to train an AI model with as many as 27 trillion parameters. To put this into perspective, GPT-4, one of the most advanced language models, is reported to have about 1.7 trillion parameters.

While no specific pricing has been announced for the GB200 hardware, the H100 sells for between $25,000 and $40,000 (₹20.5 lakh to ₹32.8 lakh), and a complete server can cost up to $200,000 (₹1.64 crore). The high-end nature of these products suggests the GB200 will likely carry a premium price tag.

NIM, an enterprise software subscription service, allows modern AI software to run more efficiently on older Nvidia GPUs, meaning companies can run their own AI models with less computational power. Nvidia is bundling NIM with access to its own cloud servers running its latest chipsets, priced at $4,500 (₹3.69 lakh) per GPU per year.

Nvidia is collaborating with Microsoft and Hugging Face to ensure AI models are optimised to run on compatible Nvidia chips. One of the most significant aspects of NIM is that it enables on-device processing, allowing AI models to run on Nvidia GPU-equipped laptops without relying on cloud connectivity. This capability has the potential to change how AI is deployed and used across industries.

