
NVIDIA Announces H100 NVL Card for Large AI Models

The market for large language models (LLMs) has become one of the most important areas in artificial intelligence (AI). NVIDIA recently announced a new acceleration card based on the Hopper architecture, designed specifically to meet the needs of LLM users: the H100 NVL.

The H100 NVL is an interesting variant of NVIDIA’s H100 PCIe card aimed at a single market: LLM deployment. It is essentially two PCIe H100 boards shipped pre-bridged together, and its headline feature is memory capacity: the pair carries a total of 188 GB of HBM3, or 94 GB per GPU, more than any other NVIDIA model to date.

Card Specifications and Features

The H100 NVL offers performance on par with the H100 SXM5 model, with 2 x 16896 FP32 CUDA cores and 2 x 528 tensor cores. The boost clock is 1.98 GHz, and the HBM3 memory runs at approximately 5.1 Gbps. Each GPU has a 6144-bit memory bus, which gives the pair a memory bandwidth of 2 x 3.9 TB/second. The H100 NVL card delivers 2 x 67 TFLOPS of FP32 vector throughput and 2 x 1980 TOPS of INT8 tensor throughput.
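The bandwidth figure follows directly from the quoted bus width and data rate. As a sanity check, here is a minimal Python sketch reproducing the arithmetic, using only the numbers already given above:

```python
# Reproducing the per-GPU bandwidth figure from the quoted specs.
bus_width_bits = 6144   # HBM3 memory bus width per GPU
data_rate_gbps = 5.1    # approximate per-pin data rate, in Gbps

# bandwidth = bus width in bytes x per-pin data rate
bandwidth_gbs = (bus_width_bits / 8) * data_rate_gbps
print(f"~{bandwidth_gbs / 1000:.1f} TB/s per GPU")  # ~3.9 TB/s
```

The 2 x 3.9 TB/second quoted above is simply this per-GPU figure counted once for each of the two GPUs.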

As for the design of the card, NVIDIA has built the H100 NVL as a dual-GPU, dual-card product that presents itself to the host system as two GPUs. The two PCIe H100 boards are linked via three NVLink 4 bridges, providing a direct GPU-to-GPU connection at 600 GB/second. Each board has a TDP of 350W to 400W, which makes the H100 NVL a 700W to 800W product in total.
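Since the two GPUs enumerate as separate devices, applications see an H100 NVL as an ordinary NVLink-bridged pair. As a minimal sketch, assuming a system where the two halves appear as CUDA devices 0 and 1 (the indices are an assumption), a framework such as PyTorch could confirm the direct peer connection like this:

```python
import torch

# Hypothetical check on an assumed two-GPU H100 NVL system: the two
# halves enumerate as ordinary CUDA devices, so standard multi-GPU
# APIs apply.
if torch.cuda.device_count() >= 2:
    for i in (0, 1):
        print(torch.cuda.get_device_name(i))
    # True when the driver exposes a direct peer path (e.g. over NVLink)
    print("Peer access 0 -> 1:", torch.cuda.can_device_access_peer(0, 1))
```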

What Makes the H100 NVL So Special?

The H100 NVL is an acceleration card designed specifically to meet the needs of LLM users. Large language models such as the GPT family require enormous amounts of memory, more than a regular PCIe H100 card can supply on its own. The H100 NVL, on the other hand, offers a higher memory capacity per GPU, which makes it well suited to these workloads.
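To get a sense of why per-GPU capacity matters, a rough back-of-the-envelope estimate helps: a GPT-3-class model with 175 billion parameters stored in 16-bit precision needs about 350 GB for its weights alone. The sketch below works through the arithmetic; the 80 GB and 94 GB figures correspond to a standard H100 and an H100 NVL GPU respectively.

```python
import math

# Back-of-the-envelope memory estimate for a GPT-3-class model:
# 175 billion parameters stored in 16-bit (FP16/BF16) precision.
params = 175e9
bytes_per_param = 2

weights_gb = params * bytes_per_param / 1e9
print(f"Weights alone: ~{weights_gb:.0f} GB")  # ~350 GB

# Minimum GPU count just to hold the weights, ignoring activations,
# KV cache, and framework overhead (which add substantially more).
for mem_gb in (80, 94):  # standard H100 vs. H100 NVL per-GPU capacity
    print(f"{mem_gb} GB/GPU -> at least {math.ceil(weights_gb / mem_gb)} GPUs")
```

By this measure, four H100 NVL GPUs (two bridged pairs) can hold the weights of such a model, where five standard 80 GB cards would be needed.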

The H100 NVL card is essentially an upgraded version of NVIDIA’s H100 PCIe card, with all 6 HBM memory stacks enabled instead of 5, which means it can offer more memory and higher memory bandwidth. Enabling the sixth HBM stack gives users access to the additional memory it provides. Although this likely comes at a higher price, NVIDIA has noted that the additional memory capacity is vital for LLM tasks, and most users are willing to pay more for a more capable acceleration card.

The H100 NVL stands out in the acceleration card market as it is the card with the most memory per GPU that NVIDIA has released to date. In addition, it offers much higher memory bandwidth than any other NVIDIA model. Overall, the H100 NVL is a very promising card for the LLM market and may become the preferred choice for many users.

What is the Impact of the H100 NVL on the Market?

The H100 NVL is a big step forward for NVIDIA, as it allows the company to expand its offering in the LLM market. The additional memory capacity and increased memory bandwidth this card offers are a big draw for LLM users, which means NVIDIA can capture a share of the market that was previously out of reach.

The announcement of the H100 NVL is also important for the AI industry in general, as it shows how acceleration card manufacturers are responding to the specific needs of LLM users. The ability to process large amounts of natural language data is becoming increasingly important in the world of AI, and the H100 NVL can be a key player in meeting this demand.

More information at anandtech.com
