Nvidia launches H200 AI superchip: the first to be paired with 141GB of cutting-edge HBM3e memory


Last updated 13 months ago

Hardware
Industry
nvidia
ai




Why it matters: The generative AI race shows no signs of slowing down, and Nvidia is looking to fully capitalize on it with the introduction of a brand new AI superchip, the H200 Tensor Core GPU. The biggest improvement over its predecessor is the use of HBM3e memory, which allows for greater density and higher memory bandwidth, both crucial factors in improving the speed of services like ChatGPT and Google Bard.

Nvidia this week introduced a new monster processing unit for AI workloads, the HGX H200. As the name suggests, the new chip is the successor to the wildly popular H100 Tensor Core GPU that debuted in 2022, just as the generative AI hype train started picking up speed.

Team Green announced the new platform during the Supercomputing 2023 conference in Denver, Colorado. Based on the Hopper architecture, the H200 is expected to deliver almost double the inference speed of the H100 on Llama 2, a large language model (LLM) with 70 billion parameters. The H200 also offers around 1.6 times the inference speed of the H100 when running GPT-3, which has 175 billion parameters.

Some of these performance improvements come from architectural refinements, but Nvidia says it has also done significant optimization work on the software front. This is reflected in the recent release of open-source software libraries like TensorRT-LLM, which can deliver up to 8 times higher performance and up to 6 times lower energy consumption when running the latest LLMs for generative AI.

Another highlight of the H200 platform is that it's the first to use the faster HBM3e memory spec. The new Tensor Core GPU's total memory bandwidth is a whopping 4.8 terabytes per second, a good bit faster than the 3.35 terabytes per second achieved by the H100's memory subsystem. Total memory capacity has also increased from 80 GB on the H100 to 141 GB on the H200 platform.
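For context, a quick back-of-the-envelope calculation using only the figures quoted above (not official Nvidia ratios) puts those gains at roughly 1.4x on bandwidth and 1.76x on capacity:

```python
# Sketch: compare the published H100 vs. H200 memory specs quoted in the article.
h100 = {"bandwidth_tbps": 3.35, "capacity_gb": 80}
h200 = {"bandwidth_tbps": 4.8, "capacity_gb": 141}

# Generation-over-generation uplift ratios
bandwidth_uplift = h200["bandwidth_tbps"] / h100["bandwidth_tbps"]
capacity_uplift = h200["capacity_gb"] / h100["capacity_gb"]

print(f"Bandwidth uplift: {bandwidth_uplift:.2f}x")  # ~1.43x
print(f"Capacity uplift:  {capacity_uplift:.2f}x")   # ~1.76x
```

Both numbers matter for LLM inference, since serving large models is typically bound by how fast weights and KV-cache data can be streamed out of memory.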

Nvidia says the H200 is designed to be compatible with the same systems that support the H100 GPU. That said, the H200 will be available in several form factors, including HGX H200 server boards in four- or eight-way configurations, or as part of the GH200 Grace Hopper Superchip, where it is paired with a powerful 72-core Arm-based CPU on the same board. The GH200 allows for up to 1.1 terabytes of aggregate high-bandwidth memory and 32 petaflops of FP8 performance for deep learning applications.

Just like the H100 GPU, the new Hopper superchip will be in high demand and command an eye-watering price. A single H100 sells for an estimated $25,000 to $40,000 depending on order volume, and many companies in the AI space are buying them by the thousands. This is forcing smaller companies to partner up just to get limited access to Nvidia's AI GPUs, and lead times don't seem to be getting any shorter as time goes on.

Speaking of lead times, Nvidia makes a hefty profit on every H100 sold, so it has even shifted some production away from the RTX 40 series toward making more Hopper GPUs. Nvidia's Kristin Uchiyama says supply won't be an issue, as the company is continuously working on adding more manufacturing capacity, but she declined to provide further details on the matter.

One thing is for certain – Team Green is much more interested in selling AI-focused GPUs, as sales of Hopper chips make up an increasingly large chunk of its revenue. It's even going to great lengths to develop and manufacture cut-down versions of its A100 and H100 chips just to work around US export controls and supply them to Chinese tech giants. All of this makes it hard to get too excited about the upcoming RTX 4000 Super graphics cards, as availability could be a big contributing factor to their retail price.

Microsoft Azure, Google Cloud, Amazon Web Services, and Oracle Cloud Infrastructure will be the first cloud providers to offer access to H200-based instances, starting in Q2 2024.



