NVIDIA has announced its latest AI supercomputing chip. The HGX H200 GPU features HBM3e memory, giving it higher memory bandwidth and capacity than its predecessor, the H100, and it nearly doubles inference speed on Llama 2. It is slated to ship in the second quarter of 2024 for deployment across a range of data centers.
NVIDIA’s GH200 Grace Hopper “superchip” pairs the H200 GPU with the Arm-based NVIDIA Grace CPU over the NVLink-C2C interconnect. Designed for supercomputing, it will appear in “40+ AI supercomputers across global research centers, manufacturers and cloud providers.” Notably, HPE’s Cray EX2500 supercomputers will use quad-GH200 nodes, scaling to tens of thousands of Grace Hopper Superchip nodes.
The JUPITER supercomputer, built on NVIDIA GH200 Superchips, is expected to be “the world’s most powerful AI system” when it comes online next year. It is likely to play a key role in advancing scientific research in areas ranging from climate and weather prediction to drug discovery, quantum computing and industrial engineering. NVIDIA expects the new chips to keep driving its strong revenue growth in the AI and data center segments and to reinforce its dominance of the sector.