AMD says its MI300X AI accelerator is quicker than Nvidia's H100

Last updated 6 months ago

AI
Hardware
amd
nvidia

A hot potato: AMD is firing back at Nvidia's claims about the H100 GPU accelerator, which according to Team Green is faster than the competition. Team Red says Nvidia didn't tell the whole story, and has published additional benchmark results based on industry-standard inferencing workloads.

AMD has finally launched its Instinct MI300X accelerators, a new generation of server GPUs designed to deliver compelling performance for generative AI workloads and other high-performance computing (HPC) applications. The MI300X is faster than the H100, AMD said earlier this month, but Nvidia attempted to refute the competitor's claims with new benchmarks released a few days ago.

Nvidia tested its H100 accelerators with TensorRT-LLM, an open-source library and SDK designed to efficiently accelerate generative AI inference. According to the GPU maker, with the proper optimizations TensorRT-LLM was able to run 2x faster on the H100 than on AMD's MI300X.

AMD is now offering its own version of the story, refuting Nvidia's claims of H100 superiority. Nvidia used TensorRT-LLM on the H100 rather than the vLLM library used in AMD's benchmarks, while comparing FP16 datatype performance on the AMD Instinct MI300X to FP8 datatype performance on the H100. Furthermore, Team Green converted AMD's published performance data from relative latency numbers into absolute throughput.

AMD suggests that Nvidia tried to rig the game, while it is still busy finding new ways to unlock performance and raw power on Instinct MI300 accelerators. The company shared the latest performance levels achieved by the Llama 70B chatbot model on the MI300X, showing an even bigger edge over Nvidia's H100.

By using the vLLM library for both accelerators, the MI300X was able to achieve 2.1x the performance of the H100 thanks to the latest optimizations in AMD's ROCm software stack. The company had highlighted a 1.4x performance advantage over the H100 (with an equivalent datatype and library setup) earlier in December. vLLM was chosen because of its broad adoption in the community and its ability to run on both GPU architectures.
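For context, a benchmark of this kind boils down to loading an FP16 Llama 70B checkpoint in vLLM and timing batched generation. The sketch below is illustrative only: the checkpoint name, prompt batch, and tensor-parallel degree are assumptions, not AMD's published test configuration, and the same script works on both vendors' GPUs because vLLM supports CUDA as well as ROCm builds.

# Minimal sketch of a vLLM offline-inference benchmark (assumed configuration).
import time

from vllm import LLM, SamplingParams

# FP16 weights sharded across 8 GPUs; model name and parallelism are illustrative.
llm = LLM(
    model="meta-llama/Llama-2-70b-chat-hf",
    dtype="float16",
    tensor_parallel_size=8,
)

prompts = ["Explain what an AI accelerator does."] * 32  # illustrative batch
params = SamplingParams(temperature=0.0, max_tokens=256)

start = time.perf_counter()
outputs = llm.generate(prompts, params)
elapsed = time.perf_counter() - start

# Report both views of the same run: end-to-end latency and token throughput.
generated_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"End-to-end latency: {elapsed:.2f} s")
print(f"Throughput: {generated_tokens / elapsed:.1f} tokens/s")

The last two lines also illustrate why the dispute over "relative latency" versus "absolute throughput" matters: both figures come from the same run, but they weight batch size and response length differently, so converting one into the other can change how a comparison looks.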

Even when using TensorRT-LLM for the H100 and vLLM for the MI300X, AMD was still able to show a 1.3x improvement in latency. When using lower-precision FP8 with TensorRT-LLM on the H100 and higher-precision FP16 with vLLM on the MI300X, AMD's accelerator was apparently still able to demonstrate an advantage in absolute latency.

vLLM does not support FP8, AMD explained, and the FP16 datatype was chosen for its popularity. AMD said its results show that the MI300X using FP16 is comparable to the H100 even when the latter uses its best performance settings, with the FP8 datatype and TensorRT-LLM.

