The new NVIDIA H100 is a mammoth GPU with 80 billion transistors, and it’s not for you


Yesterday NVIDIA held an event full of announcements, all focused on the data center. But, as happened two years ago with Ampere, what was presented a few hours ago helps us understand what is coming to future GPUs for end users.

Among the novelties, the company highlighted the new NVIDIA Hopper architecture, successor to Ampere (present in the RTX 3000 series), and also its first practical implementation: the NVIDIA H100, a GPU that goes even further than its predecessor, the A100, and offers unprecedented power thanks to its 80 billion transistors.


Transistors, and lots of them

The new GPU is a technological marvel. TSMC’s 4 nm photolithography is one of the keys to a chip that packs an absurd number of transistors: 80 billion. It is difficult to grasp the magnitude of that figure, but as a reference, its predecessor, the A100, had 54.2 billion.


As can be seen in this comparison table, the evolution between these generations of data-center GPUs is brutal, and it marks the future of GPUs for end users. Source: AnandTech

Virtually everything in the NVIDIA H100 is improved over its predecessor. The numbers are promising in all areas, but it is also true that the TDP almost doubles, rising from 400 W to 700 W: with electricity prices at record highs, running these chips is not going to be cheap for companies.
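To put that TDP jump in perspective, here is a back-of-envelope sketch of what a year of continuous operation at those power levels might cost. The electricity price is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope: yearly energy cost of running a GPU at its TDP.
# Assumption (not from the article): electricity at 0.30 EUR per kWh.
PRICE_PER_KWH = 0.30  # EUR, illustrative assumption
HOURS_PER_YEAR = 24 * 365

def yearly_energy_cost(tdp_watts: float) -> float:
    """Cost in EUR of running a chip at its TDP for a full year."""
    kwh = tdp_watts / 1000 * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH

h100 = yearly_energy_cost(700)  # H100 TDP
a100 = yearly_energy_cost(400)  # A100 TDP
print(f"H100: {h100:.0f} EUR/year, A100: {a100:.0f} EUR/year, extra: {h100 - a100:.0f} EUR")
```

Under those assumptions, the 300 W difference adds several hundred euros per GPU per year, which scales quickly across a data center with thousands of them.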


The H100 is a GPU intended entirely for data centers. The commitment to artificial intelligence is enormous, and in fact the company placed special emphasis on its Transformer Engine, “designed to speed up the training of artificial intelligence models”. This kind of technology is behind systems like GPT-3, and it will make training such models much faster.

The GPU also benefits from fourth-generation NVLink technology, which allows the interconnection of all its nodes to scale. It offers up to 900 GB/s of bidirectional transfers per GPU; in other words, seven times the bandwidth of PCIe 5, a standard that has barely even reached the market.
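The “seven times” claim can be sanity-checked with quick arithmetic. The PCIe 5.0 x16 figure below is an approximation we are assuming (roughly 64 GB/s per direction, 128 GB/s bidirectional), not a number from the article:

```python
# Sanity-check the "seven times PCIe 5" claim.
NVLINK4_BIDIR_GBPS = 900    # per the announcement
PCIE5_X16_BIDIR_GBPS = 128  # approximate PCIe 5.0 x16, assumption

ratio = NVLINK4_BIDIR_GBPS / PCIE5_X16_BIDIR_GBPS
print(f"NVLink 4 is ~{ratio:.1f}x PCIe 5.0 x16 bandwidth")
```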

The Hopper architecture is also a fundamental part of these advances. NVIDIA highlighted the ability of this new architecture to accelerate dynamic programming, “a problem-solving technique used in algorithms for genomics, quantum computing, path optimization, and more.”

According to the manufacturer, all these operations will now run up to 40 times faster thanks to DPX, a new set of instructions that can be applied in these areas.
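For readers unfamiliar with the term, here is a classic example of dynamic programming in plain Python: edit distance, the kind of table-filling algorithm used in genomic sequence alignment. This is purely an illustration of the technique the DPX instructions target, not code that uses those instructions:

```python
# Dynamic programming illustration: edit (Levenshtein) distance.
def edit_distance(a: str, b: str) -> int:
    """Minimum insertions, deletions, and substitutions to turn a into b."""
    m, n = len(a), len(b)
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        dp[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

print(edit_distance("kitten", "sitting"))  # 3
```

Each cell of the table reuses previously computed subproblems, which is exactly the memory-and-compute pattern that benefits from hardware acceleration.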


Grace CPU Superchip

Another of the most striking announcements was the Grace CPU Superchip, which is made up of two processors connected via a low-latency NVLink-C2C link. The idea is to aim this chip at “serving large-scale HPC centers and artificial intelligence applications” alongside Hopper architecture GPUs.

This “double chip” is the evolution of the Grace Hopper Superchip announced last year. This iteration includes 144 ARMv9 cores and achieves 1.5 times the performance of the dual CPUs in the DGX A100 systems that NVIDIA has long offered.


NVIDIA also indicated that it is building a new supercomputer called Eos, intended for artificial intelligence tasks. According to the manufacturer, it will be the most powerful in the world when it is deployed. The project will consist of 4,600 H100 GPUs delivering 18.4 exaflops of AI performance, and it is expected to be ready in a few months, although it will only be used for NVIDIA’s internal research.
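Those two Eos figures imply a per-GPU AI throughput, which we can derive with simple arithmetic from the numbers in the announcement:

```python
# Implied per-GPU AI throughput of the Eos supercomputer,
# derived from the two figures in the announcement.
EOS_TOTAL_EXAFLOPS = 18.4
EOS_GPU_COUNT = 4600

per_gpu_petaflops = EOS_TOTAL_EXAFLOPS * 1000 / EOS_GPU_COUNT
print(f"~{per_gpu_petaflops:.0f} PFLOPS of AI compute per H100")
```

That works out to roughly 4 petaflops of AI compute per H100, consistent with the generational leap described above.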

The Grace CPU Superchip is expected to be available in the first half of 2023, so we will have to be patient. The NVIDIA H100 will arrive sooner, in the third quarter of 2022.