Nvidia launches H200 AI superchip: the first to be paired with 141GB of cutting-edge HBM3e memory


Last updated 12 months ago

Hardware
Industry
nvidia
ai




Why it matters: The generative AI race shows no signs of slowing down, and Nvidia is looking to fully capitalize on it with the introduction of a brand-new AI superchip, the H200 Tensor Core GPU. The biggest improvement over its predecessor is the use of HBM3e memory, which allows for greater density and higher memory bandwidth, both crucial factors in improving the speed of services like ChatGPT and Google Bard.

Nvidia this week introduced a new monster processing unit for AI workloads, the HGX H200. As the name suggests, the new chip is the successor to the wildly popular H100 Tensor Core GPU that debuted in 2022, when the generative AI hype train started picking up speed.

Team Green announced the new platform during the Supercomputing 2023 conference in Denver, Colorado. Based on the Hopper architecture, the H200 is expected to deliver nearly double the inference speed of the H100 on Llama 2, a large language model (LLM) with 70 billion parameters. The H200 also offers around 1.6 times the inference speed when running GPT-3, which has 175 billion parameters.

Some of those performance improvements come from architectural refinements, but Nvidia says it has also done significant optimization work on the software front. This is reflected in the recent release of open-source software libraries like TensorRT-LLM, which can deliver up to 8 times higher performance and up to 6 times lower power consumption when running the latest LLMs for generative AI.

Another highlight of the H200 platform is that it's the first to use faster-spec HBM3e memory. The new Tensor Core GPU's total memory bandwidth is a whopping 4.8 terabytes per second, a good bit faster than the 3.35 terabytes per second achieved by the H100's memory subsystem. Total memory capacity has also increased from 80 GB on the H100 to 141 GB on the H200 platform.
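Taken together, the quoted figures work out to roughly a 1.4x bandwidth uplift and a 1.8x capacity uplift over the H100. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope comparison of the quoted H100 vs. H200 memory specs
h100_bw_tbps, h200_bw_tbps = 3.35, 4.8   # total memory bandwidth, TB/s
h100_cap_gb, h200_cap_gb = 80, 141       # total memory capacity, GB

bw_uplift = h200_bw_tbps / h100_bw_tbps   # ~1.43x
cap_uplift = h200_cap_gb / h100_cap_gb    # ~1.76x

print(f"Bandwidth uplift: {bw_uplift:.2f}x")
print(f"Capacity uplift:  {cap_uplift:.2f}x")
```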

Nvidia says the H200 is designed to be compatible with the same systems that support the H100 GPU. That said, the H200 will be available in several form factors, including HGX H200 server boards in four- or eight-way configurations, or as a GH200 Grace Hopper Superchip, where it will be paired with a powerful 72-core Arm-based CPU on the same board. The GH200 will allow for up to 1.1 terabytes of aggregate high-bandwidth memory and 32 petaflops of FP8 performance for deep-learning applications.
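The article doesn't spell out how the 1.1-terabyte aggregate figure is reached; one plausible reading (an assumption, not stated by Nvidia here) is that it simply sums per-GPU capacity across an eight-GPU node, which lands close to the quoted number:

```python
# Assumption: aggregate HBM = per-GPU capacity summed over an 8-GPU node
per_gpu_gb = 141   # H200 memory capacity quoted above
num_gpus = 8       # eight-way HGX H200 configuration

aggregate_gb = per_gpu_gb * num_gpus   # 1128 GB
aggregate_tb = aggregate_gb / 1000     # ~1.13 TB, close to the quoted 1.1 TB

print(f"Aggregate HBM: {aggregate_tb:.2f} TB")
```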

Just like the H100 GPU, the new Hopper superchip will be in high demand and command an eye-watering price. A single H100 sells for an estimated $25,000 to $40,000 depending on order volume, and many companies in the AI space are buying them by the thousands. This is forcing smaller firms to partner up just to get limited access to Nvidia's AI GPUs, and lead times don't seem to be getting any shorter as time goes on.

Speaking of lead times, Nvidia makes a massive profit on every H100 sold, so it has even shifted some production away from the RTX 40 series toward making more Hopper GPUs. Nvidia's Kristin Uchiyama says supply won't be an issue, as the company is continuously working on adding more manufacturing capacity, but she declined to provide more details on the matter.

One thing is for certain – Team Green is much more interested in selling AI-focused GPUs, as sales of Hopper chips make up an increasingly large chunk of its revenue. It's even going to great lengths to develop and manufacture cut-down versions of its A100 and H100 chips just to get around US export controls and ship them to Chinese tech giants. This makes it difficult to get too excited about the upcoming RTX 4000 Super graphics cards, as availability could be a major factor in their retail pricing.

Microsoft Azure, Google Cloud, Amazon Web Services, and Oracle Cloud Infrastructure will be the first cloud providers to offer access to H200-based instances starting in Q2 2024.



safirsoft.com © 2023 All rights reserved
