Hopper (microarchitecture)



Hopper is a graphics processing unit (GPU) microarchitecture developed by Nvidia. It is designed for datacenters and is used alongside the Ada Lovelace architecture.

Named for computer scientist and United States Navy rear admiral Grace Hopper, the Hopper architecture was leaked in November 2019 and officially revealed in March 2022. It improves upon its predecessors, the Turing and Ampere microarchitectures, featuring a new streaming multiprocessor and a faster memory subsystem.

Architecture

The Nvidia Hopper H100 GPU is implemented using the TSMC 4N process with 80 billion transistors. It consists of up to 144 streaming multiprocessors.[1] In its SXM5 form factor, the H100 offers higher performance than in its PCIe form factor.[2]

Streaming multiprocessor

The streaming multiprocessors for Hopper improve upon the Turing and Ampere microarchitectures, although the maximum number of concurrent warps per streaming multiprocessor (SM) remains the same between the Ampere and Hopper architectures, at 64.[3] The Hopper architecture provides a Tensor Memory Accelerator (TMA), which supports bidirectional asynchronous memory transfer between shared memory and global memory.[4] With TMA, applications may transfer tensors with up to five dimensions. When writing from shared memory to global memory, elementwise reduction and bitwise operators may be used, avoiding registers and SM instructions while enabling users to write warp-specialized code. TMA is exposed through cuda::memcpy_async.[5]
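The following is a minimal sketch of how cuda::memcpy_async can stage data into shared memory behind a block-wide barrier; on Hopper-class hardware the copy may be serviced by the TMA, while older GPUs fall back to other asynchronous copy mechanisms. The kernel name, tile size, and the trivial elementwise use of the data are illustrative and not taken from the cited sources.

```cuda
// Minimal sketch (assumes CUDA 11.7+ with libcu++): each block asynchronously
// stages one tile of global memory into shared memory via cuda::memcpy_async,
// waits on a block-wide barrier, then uses the data. Launch with
// blockDim.x == TILE.
#include <cooperative_groups.h>
#include <cuda/barrier>

constexpr int TILE = 256;

__global__ void stage_tile(const float* __restrict__ in,
                           float* __restrict__ out, int n) {
    __shared__ float tile[TILE];
    __shared__ cuda::barrier<cuda::thread_scope_block> bar;

    int base = blockIdx.x * TILE;
    if (base >= n) return;                 // whole block exits together

    auto block = cooperative_groups::this_thread_block();
    if (block.thread_rank() == 0) {
        init(&bar, block.size());          // one-time barrier initialization
    }
    block.sync();

    int count = min(TILE, n - base);       // handle the final partial tile
    // Asynchronous bulk copy from global to shared memory.
    cuda::memcpy_async(block, tile, in + base, sizeof(float) * count, bar);
    bar.arrive_and_wait();                 // wait until the copy has landed

    int i = block.thread_rank();
    if (i < count) {
        out[base + i] = tile[i] * 2.0f;    // trivial elementwise use of the tile
    }
}
```

A launch such as stage_tile<<<(n + TILE - 1) / TILE, TILE>>>(in, out, n) would cover the whole array.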

When parallelizing applications, developers can use thread block clusters. Thread blocks may perform atomics in the shared memory of other thread blocks within their cluster, a capability known as distributed shared memory. Distributed shared memory may be used by an SM simultaneously with the L2 cache; when used to communicate data between SMs, this can utilize the combined bandwidth of distributed shared memory and L2. The maximum portable cluster size is 8, although the Nvidia Hopper H100 can support a cluster size of 16 by setting the cudaFuncAttributeNonPortableClusterSizeAllowed attribute, potentially at the cost of a reduced number of active blocks.[6] With L2 multicasting and distributed shared memory, the required bandwidth for dynamic random-access memory reads and writes is reduced.[7]
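A minimal sketch of a thread block cluster, assuming CUDA 12 or later compiled for sm_90; the kernel name, data layout, and rank-exchange logic are illustrative. Each block publishes a value in its own shared memory, and its neighbour in the cluster reads it through distributed shared memory.

```cuda
// Minimal sketch (assumes CUDA 12+, compiled with -arch=sm_90): a kernel that
// runs in clusters of 8 thread blocks and reads a peer block's shared memory
// through distributed shared memory.
#include <cooperative_groups.h>
#include <cuda_runtime.h>
namespace cg = cooperative_groups;

__global__ void __cluster_dims__(8, 1, 1) exchange_kernel(int* out) {
    __shared__ int smem[1];
    cg::cluster_group cluster = cg::this_cluster();
    unsigned int rank = cluster.block_rank();      // this block's rank in its cluster

    if (threadIdx.x == 0) {
        smem[0] = static_cast<int>(rank);          // publish a value in shared memory
    }
    cluster.sync();                                // make every block's shared memory visible

    unsigned int peer = (rank + 1) % cluster.num_blocks();
    int* peer_smem = cluster.map_shared_rank(smem, peer);  // map the peer's shared memory
    if (threadIdx.x == 0) {
        out[blockIdx.x] = peer_smem[0];            // read directly from the peer block
    }
    cluster.sync();                                // keep peer memory alive until reads finish
}

int main() {
    int* d_out = nullptr;
    cudaMalloc(&d_out, 64 * sizeof(int));
    // 64 blocks => 8 clusters of 8 blocks; block size is arbitrary here.
    exchange_kernel<<<64, 128>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```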

Hopper features improved single-precision floating-point (FP32) throughput, with twice as many FP32 operations per cycle per SM as its predecessor. Additionally, the Hopper architecture adds support for new instructions, including DPX instructions that accelerate dynamic-programming algorithms such as the Smith–Waterman algorithm.[6] Like Ampere, Hopper supports TensorFloat-32 (TF32) arithmetic, and the mapping pattern for both architectures is identical.[8]

Memory

The Nvidia Hopper H100 supports HBM3 and HBM2e memory up to 80 GB; the HBM3 memory system provides 3 TB/s of bandwidth, an increase of 50% over the Nvidia Ampere A100's 2 TB/s. Across the architecture, the L2 cache capacity and bandwidth were increased.[9]

Hopper allows CUDA compute kernels to utilize inline compression, which can be enabled on individual memory allocations. The compressor automatically chooses between several compression algorithms. Compression reduces the bandwidth needed to read and write the data but does not reduce memory footprint, because compressed allocations still reserve their full size in memory.[9]
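The per-allocation opt-in is available through the CUDA driver's virtual memory management API; the following is a minimal sketch, assuming an already-initialized CUDA context, with error handling and cleanup omitted. Whether an allocation is actually compressed depends on the hardware and driver.

```cuda
// Minimal sketch (CUDA driver API; assumes cuInit() and a current context,
// error handling omitted): allocate device memory that is eligible for inline
// compression by requesting generic compression in the allocation properties.
#include <cuda.h>

CUdeviceptr alloc_compressible(size_t size, int device) {
    CUmemAllocationProp prop = {};
    prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
    prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
    prop.location.id = device;
    prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;  // opt in

    size_t gran = 0;
    cuMemGetAllocationGranularity(&gran, &prop, CU_MEM_ALLOC_GRANULARITY_MINIMUM);
    size_t padded = ((size + gran - 1) / gran) * gran;   // round up to granularity

    CUmemGenericAllocationHandle handle;
    cuMemCreate(&handle, padded, &prop, 0);              // physical allocation

    CUdeviceptr ptr = 0;
    cuMemAddressReserve(&ptr, padded, 0, 0, 0);          // reserve a virtual range
    cuMemMap(ptr, padded, 0, handle, 0);                 // map physical into it

    CUmemAccessDesc access = {};
    access.location = prop.location;
    access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
    cuMemSetAccess(ptr, padded, &access, 1);             // enable read/write access
    return ptr;
}
```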

The Nvidia Hopper H100 increases the capacity of the combined L1 cache, texture cache, and shared memory to 256 KB per SM. Like its predecessors, it combines the L1 and texture caches into a unified cache that also acts as a coalescing buffer. The attribute cudaFuncAttributePreferredSharedMemoryCarveout may be used to define the portion of this unified storage set aside as shared memory (the carveout). Hopper introduces enhancements to NVLink through a new generation with faster overall communication bandwidth.[10]
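As an illustration, the carveout is requested per kernel through the CUDA runtime; the sketch below, with a hypothetical kernel name, asks for half of the unified storage to be configured as shared memory. The runtime treats the value as a hint.

```cuda
// Minimal sketch (hypothetical kernel name): request a 50% shared-memory
// carveout of the unified L1/shared storage for one kernel.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void tile_kernel(float* data) {
    __shared__ float tile[8 * 1024];    // heavy shared-memory user (illustrative)
    // ... kernel body ...
    (void)data; (void)tile;
}

int main() {
    // The value is a percentage of the unified storage; the driver may round
    // it to the nearest supported configuration.
    cudaError_t err = cudaFuncSetAttribute(
        tile_kernel, cudaFuncAttributePreferredSharedMemoryCarveout, 50);
    if (err != cudaSuccess) {
        std::printf("carveout request failed: %s\n", cudaGetErrorString(err));
    }
    return 0;
}
```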

Memory synchronization domains

Some CUDA applications may experience interference when performing fence or flush operations due to memory ordering. Because the GPU cannot know which writes must be made visible and which are visible only by chance timing, it may wait on unnecessary memory operations, thus slowing down fence or flush operations. For example, when one kernel performs computations in GPU memory while a parallel kernel communicates with a peer, the local kernel will flush its writes, resulting in slower NVLink or PCIe writes. In the Hopper architecture, memory synchronization domains allow a fence to wait only on writes from kernels in its own domain, reducing the set of memory operations it must wait on.[11]
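Domains are assigned at kernel launch time; a minimal sketch under that assumption (CUDA 12 or later; the kernel name is illustrative) places a communication kernel in the remote domain so that fences in the default domain need not wait on its traffic.

```cuda
// Minimal sketch (assumes CUDA 12+): launch a communication kernel in the
// "remote" memory synchronization domain via a launch attribute, so fences
// issued by compute kernels in the default domain do not wait on its
// NVLink/PCIe writes.
#include <cuda_runtime.h>

__global__ void comm_kernel(float* peer_buf) {
    // ... writes to a peer GPU's memory would go here ...
    (void)peer_buf;
}

int main() {
    float* d_buf = nullptr;
    cudaMalloc(&d_buf, 1024 * sizeof(float));

    cudaLaunchAttribute attr{};
    attr.id = cudaLaunchAttributeMemSyncDomain;
    attr.val.memSyncDomain = cudaLaunchMemSyncDomainRemote;  // isolate this kernel's writes

    cudaLaunchConfig_t config{};
    config.gridDim = dim3(32);
    config.blockDim = dim3(128);
    config.attrs = &attr;
    config.numAttrs = 1;

    cudaLaunchKernelEx(&config, comm_kernel, d_buf);
    cudaDeviceSynchronize();
    cudaFree(d_buf);
    return 0;
}
```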

DPX instructions

The Hopper architecture math application programming interface (API) exposes DPX functions in the SM such as __viaddmin_s16x2_relu, which performs the per-halfword max(min(a + b, c), 0). For the Smith–Waterman algorithm, __vimax3_s16x2_relu can be used; it computes a per-halfword three-way maximum followed by a clamp to zero.[12] Similarly, Hopper speeds up implementations of the Needleman–Wunsch algorithm.[13]
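A minimal sketch of these intrinsics, assuming CUDA 12 or later (the kernel name and packed test values are illustrative); each 32-bit argument packs two signed 16-bit lanes, and on Hopper the operations map to single DPX instructions.

```cuda
// Minimal sketch (assumes CUDA 12+; hardware-accelerated on sm_90): exercise
// two DPX intrinsics on packed pairs of signed 16-bit values.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dpx_demo(unsigned int a, unsigned int b, unsigned int c,
                         unsigned int* out) {
    out[0] = __vimax3_s16x2_relu(a, b, c);    // per-halfword max(a, b, c, 0)
    out[1] = __viaddmin_s16x2_relu(a, b, c);  // per-halfword max(min(a + b, c), 0)
}

int main() {
    unsigned int* d_out = nullptr;
    cudaMalloc(&d_out, 2 * sizeof(unsigned int));
    // Each argument packs two signed 16-bit lanes into one 32-bit word.
    dpx_demo<<<1, 1>>>(0x0003FFFEu, 0x00020004u, 0x00050001u, d_out);

    unsigned int h_out[2] = {};
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    std::printf("%08x %08x\n", h_out[0], h_out[1]);
    cudaFree(d_out);
    return 0;
}
```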

Transformer engine

The Hopper architecture utilizes a transformer engine, which dynamically chooses between FP8 and FP16 precision to accelerate the training and inference of transformer models while preserving accuracy.[14]

Power efficiency

The SXM5 form factor H100 has a thermal design power (TDP) of 700 watts. Owing to its greater asynchrony, the Hopper architecture may attain high degrees of utilization and thus better performance per watt.[15]

History

In November 2019, a well-known Twitter account posted a tweet revealing that the next architecture after Ampere would be called Hopper, named after computer scientist and United States Navy rear admiral Grace Hopper, one of the first programmers of the Harvard Mark I. The account stated that Hopper would be based on a multi-chip module design, which would improve yields by reducing wastage.[16]

During the 2022 Nvidia GTC, Nvidia officially announced Hopper.[17]

References

Citations

  1. Elster & Haugdahl 2022, p. 4.
  2. Nvidia 2023c, p. 20.
  3. Nvidia 2023b, p. 9.
  4. Fujita et al. 2023, p. 6.
  5. Nvidia 2023b, pp. 9–10.
  6. Nvidia 2023b, p. 10.
  7. Vishal Mehta (September 2022). CUDA Programming Model for Hopper Architecture. Santa Clara: Nvidia. Retrieved May 29, 2023.
  8. Fujita et al. 2023, p. 4.
  9. Nvidia 2023b, p. 11.
  10. Nvidia 2023b, p. 12.
  11. Nvidia 2023a, p. 44.
  12. Tirumala, Ajay; Eaton, Joe; Tyrlik, Matt (December 8, 2022). "Boosting Dynamic Programming Performance Using NVIDIA Hopper GPU DPX Instructions". Nvidia. https://developer.nvidia.com/blog/boosting-dynamic-programming-performance-using-nvidia-hopper-gpu-dpx-instructions/. 
  13. Harris, Dion (March 22, 2022). "NVIDIA Hopper GPU Architecture Accelerates Dynamic Programming Up to 40x Using New DPX Instructions". Nvidia. https://blogs.nvidia.com/blog/2022/03/22/nvidia-hopper-accelerates-dynamic-programming-using-dpx-instructions/. 
  14. Salvator, Dave (March 22, 2022). "H100 Transformer Engine Supercharges AI Training, Delivering Up to 6x Higher Performance Without Losing Accuracy". Nvidia. https://blogs.nvidia.com/blog/2022/03/22/h100-transformer-engine/. 
  15. Elster & Haugdahl 2022, p. 8.
  16. Pirzada, Usman (November 16, 2019). "NVIDIA Next Generation Hopper GPU Leaked – Based On MCM Design, Launching After Ampere". Wccftech. https://wccftech.com/nvidia-hopper-gpu-mcm-leaked/. 
  17. Vincent, James (March 22, 2022). "Nvidia reveals H100 GPU for AI and teases 'world's fastest AI supercomputer'". The Verge. https://www.theverge.com/2022/3/22/22989182/nvidia-ai-hopper-architecture-h100-gpu-eos-supercomputer. 

Works cited






Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Engineering:Hopper_(microarchitecture)