Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows the integration of central processing units and graphics processors on the same bus, with shared memory and tasks.[1] HSA is being developed by the HSA Foundation, whose members include (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these various devices more compatible from a programmer's perspective,[2]:3[3] relieving the programmer of the task of planning data movement between the devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4] CUDA and OpenCL, as well as most other sufficiently advanced programming languages, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.[7]
The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called FSA, the idea was later extended to encompass processing units other than GPUs, such as other manufacturers' DSPs.
Steps performed when offloading calculations to the GPU on a non-HSA system
Modern GPUs are very well suited to single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still being optimized for branching, etc.
Sharing system memory directly between multiple system actors, an approach originally introduced by embedded systems such as the Cell Broadband Engine, makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.
Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that devices can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]:6–7 To render interoperability possible and also to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
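The following sketch illustrates what such pointer sharing looks like to a programmer. It uses OpenCL 2.0 fine-grained shared virtual memory (SVM), which exposes this kind of unified virtual address space on hardware that supports it (such as HSA-enabled APUs); it is not HSA's own runtime API, and the kernel name, sizes and omitted error handling are purely illustrative.

```cpp
// Sketch: CPU and GPU exchange data through one pointer, with no staging copies.
#include <CL/cl.h>
#include <cstdio>

static const char* kSource = R"(
__kernel void scale(__global float* data, float factor) {
    size_t i = get_global_id(0);
    data[i] *= factor;
})";

int main() {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue queue = clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "scale", nullptr);

    const size_t n = 1024;
    // One allocation, visible to both CPU and GPU through the same pointer.
    float* data = (float*)clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                     n * sizeof(float), 0);
    for (size_t i = 0; i < n; ++i) data[i] = (float)i;    // CPU writes directly

    float factor = 2.0f;
    clSetKernelArgSVMPointer(kernel, 0, data);             // pass the raw pointer to the GPU
    clSetKernelArg(kernel, 1, sizeof(factor), &factor);
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(queue);

    printf("data[10] = %f\n", data[10]);                   // CPU reads the result in place
    clSVMFree(ctx, data);
    return 0;
}
```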
So far, the HSA specifications cover, among other components, HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs.
Mobile devices are one of the HSA's application areas, in which it yields improved power efficiency.[6]
The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.
Standard architecture with a discrete GPU attached to the PCI Express bus. Zero-copy between the GPU and CPU is not possible due to distinct physical memories.
Unified main memory, where GPU and CPU are HSA-enabled. This makes zero-copy operation possible.[8]
The CPU's MMU and the GPU's IOMMU must both comply with HSA hardware specifications.
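For contrast, the fragment below sketches the extra staging steps that the discrete-GPU case above requires, written in OpenCL 1.x buffer style; it reuses ctx, queue, kernel and factor from the earlier SVM sketch and is illustrative rather than a complete program.

```cpp
// Non-HSA style: the GPU works on its own copy, so data must be copied both ways.
const size_t n = 1024;
float host[1024];                                      // ordinary host memory
for (size_t i = 0; i < n; ++i) host[i] = (float)i;

cl_mem dev = clCreateBuffer(ctx, CL_MEM_READ_WRITE, n * sizeof(float), nullptr, nullptr);
clEnqueueWriteBuffer(queue, dev, CL_TRUE, 0, n * sizeof(float), host, 0, nullptr, nullptr);  // copy CPU -> GPU
clSetKernelArg(kernel, 0, sizeof(cl_mem), &dev);       // kernel sees the device copy, not the host pointer
clSetKernelArg(kernel, 1, sizeof(factor), &factor);
clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
clEnqueueReadBuffer(queue, dev, CL_TRUE, 0, n * sizeof(float), host, 0, nullptr, nullptr);   // copy GPU -> CPU
clReleaseMemObject(dev);
```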
Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and for APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015.[10] This very first implementation, known as amdkfd, focuses on "Kaveri" and "Berlin" APUs and works alongside the existing Radeon kernel graphics driver. Programs do not interact with amdkfd directly, but queue their jobs using the HSA runtime.[11]
Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into Linux kernel mainline version 4.14.[12]
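As a rough sketch of what queuing work through the HSA runtime (rather than amdkfd directly) looks like, the following finds a GPU agent and creates a user-mode queue with the HSA runtime C API (hsa.h); it assumes an HSA-enabled APU with the runtime installed, and it omits error handling and the actual AQL dispatch packet.

```cpp
#include <hsa/hsa.h>
#include <cstdint>
#include <cstdio>

// Callback for hsa_iterate_agents: remember the first GPU agent found.
static hsa_status_t find_gpu(hsa_agent_t agent, void* data) {
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE_TYPE, &type);
    if (type == HSA_DEVICE_TYPE_GPU) {
        *(hsa_agent_t*)data = agent;
        return HSA_STATUS_INFO_BREAK;      // stop iterating
    }
    return HSA_STATUS_SUCCESS;
}

int main() {
    hsa_init();
    hsa_agent_t gpu = {};
    hsa_iterate_agents(find_gpu, &gpu);

    // A user-mode queue: AQL packets written into it are consumed by the GPU's
    // hardware scheduler, without a per-dispatch system call into the driver.
    hsa_queue_t* queue = nullptr;
    hsa_queue_create(gpu, 4096, HSA_QUEUE_TYPE_SINGLE, nullptr, nullptr,
                     UINT32_MAX, UINT32_MAX, &queue);
    printf("queue created at %p\n", (void*)queue);

    hsa_queue_destroy(queue);
    hsa_shut_down();
    return 0;
}
```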
Integrated support for HSA platforms has been announced for the "Sumatra" release of OpenJDK, due in 2015.[13]
AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[14]
GPUOpen comprises several other software tools related to HSA. CodeXL version 2.0 includes an HSA profiler.[15]
Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.[citation needed]
The following table shows features of AMD's APUs (see also: List of AMD accelerated processing units).
Codename | Server | Basic | Toronto | |||||||||||||||
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Micro | Kyoto | |||||||||||||||||
Desktop | Mainstream | Carrizo | Bristol Ridge | Raven Ridge | Picasso | |||||||||||||
Entry | Llano | Trinity | Richland | Kaveri | ||||||||||||||
Basic | Kabini | |||||||||||||||||
Mobile | Performance | Renoir | ||||||||||||||||
Mainstream | Llano | Trinity | Richland | Kaveri | Carrizo | Bristol Ridge | Raven Ridge | Picasso | ||||||||||
Entry | Dalí | |||||||||||||||||
Basic | Desna, Ontario, Zacate | Kabini, Temash | Beema, Mullins | Carrizo-L | Stoney Ridge | |||||||||||||
Embedded | Trinity | Bald Eagle | Merlin Falcon, Brown Falcon | Great Horned Owl | Ontario, Zacate | Kabini | Steppe Eagle, Crowned Eagle, LX-Family | Prairie Falcon | Banded Kestrel
Platform | High, standard and low power | Low and ultra-low power | ||||||||||||||||
Released | Aug 2011 | Oct 2012 | Jun 2013 | Jan 2014 | Jun 2015 | Jun 2016 | Oct 2017 | Jan 2019 | Mar 2020 | Jan 2011 | May 2013 | Apr 2014 | May 2015 | Feb 2016 | Apr 2019 | |||
CPU microarchitecture | K10 | Piledriver | Steamroller | Excavator | "Excavator+"[16] | Zen | Zen+ | Zen 2 | Bobcat | Jaguar | Puma | Puma+[17] | "Excavator+" | Zen | ||||
ISA | x86-64 | x86-64 | ||||||||||||||||
Socket | Desktop | High-end | N/A | N/A | ||||||||||||||
Mainstream | N/A | AM4 | ||||||||||||||||
Entry | FM1 | FM2 | FM2+[lower-alpha 1] | N/A | ||||||||||||||
Basic | N/A | N/A | AM1 | N/A | ||||||||||||||
Other | FS1 | FS1+, FP2 | FP3 | FP4 | FP5 | FP6 | FT1 | FT3 | FT3b | FP4 | FP5 | |||||||
PCI Express version | 2.0 | 3.0 | 2.0 | 3.0 | ||||||||||||||
Fab. (nm) | GF 32SHP (HKMG SOI) | GF 28SHP (HKMG bulk) | GF 14LPP (FinFET bulk) | GF 12LP (FinFET bulk) | TSMC N7 (FinFET bulk) | TSMC N40 (bulk) | TSMC N28 (HKMG bulk) | GF 28SHP (HKMG bulk) | GF 14LPP (FinFET bulk)
Die area (mm2) | 228 | 246 | 245 | 245 | 250 | 210[18] | 156 | 75 (+ 28 FCH) | 107 | ? | 125 | |||||||
Min TDP (W) | 35 | 17 | 12 | 10 | 4.5 | 4 | 3.95 | 10 | 6 | |||||||||
Max APU TDP (W) | 100 | 95 | 65 | 54 | 18 | 25 | ||||||||||||
Max stock APU base clock (GHz) | 3 | 3.8 | 4.1 | 3.7 | 3.8 | 3.6 | 3.7 | 3.3 | 1.75 | 2.2 | 2 | 2.2 | 3.2 | 3.3 | ||||
Max APUs per node[lower-alpha 2] | 1 | 1 | ||||||||||||||||
Max CPU[lower-alpha 3] cores per APU | 4 | 8 | 2 | 4 | 2 | |||||||||||||
Max threads per CPU core | 1 | 2 | 1 | 2 | ||||||||||||||
Integer structure | 3+3 | 2+2 | 4+2 | 4+2+1 | 1+1+1+1 | 2+2 | 4+2 | |||||||||||
i386, i486, i586, CMOV, NOPL, i686, PAE, NX bit, CMPXCHG16B, AMD-V, RVI, ABM, and 64-bit LAHF/SAHF | ||||||||||||||||||
IOMMU[lower-alpha 4] | N/A | |||||||||||||||||
BMI1, AES-NI, CLMUL, and F16C | N/A | |||||||||||||||||
MOVBE | N/A | |||||||||||||||||
AVIC, BMI2 and RDRAND | N/A | |||||||||||||||||
ADX, SHA, RDSEED, SMAP, SMEP, XSAVEC, XSAVES, XRSTORS, CLFLUSHOPT, and CLZERO | N/A | N/A | ||||||||||||||||
WBNOINVD, CLWB, RDPID, RDPRU, and MCOMMIT | N/A | N/A | ||||||||||||||||
FPUs per core | 1 | 0.5 | 1 | 1 | 0.5 | 1 | ||||||||||||
Pipes per FPU | 2 | 2 | ||||||||||||||||
FPU pipe width | 128-bit | 256-bit | 80-bit | 128-bit | ||||||||||||||
CPU instruction set SIMD level | SSE4a[lower-alpha 5] | AVX | AVX2 | SSSE3 | AVX | AVX2 | ||||||||||||
3DNow! | 3DNow!+ | N/A | N/A | |||||||||||||||
PREFETCH/PREFETCHW | ||||||||||||||||||
FMA4, LWP, TBM, and XOP | N/A | N/A | N/A | N/A | ||||||||||||||
FMA3 | ||||||||||||||||||
L1 data cache per core (KiB) | 64 | 16 | 32 | 32 | ||||||||||||||
L1 data cache associativity (ways) | 2 | 4 | 8 | 8 | ||||||||||||||
L1 instruction caches per core | 1 | 0.5 | 1 | 1 | 0.5 | 1 | ||||||||||||
Max APU total L1 instruction cache (KiB) | 256 | 128 | 192 | 256 | 64 | 128 | 96 | 128 | ||||||||||
L1 instruction cache associativity (ways) | 2 | 3 | 4 | 8 | 2 | 3 | 4 | |||||||||||
L2 caches per core | 1 | 0.5 | 1 | 1 | 0.5 | 1 | ||||||||||||
Max APU total L2 cache (MiB) | 4 | 2 | 4 | 1 | 2 | 1 | ||||||||||||
L2 cache associativity (ways) | 16 | 8 | 16 | 8 | ||||||||||||||
APU total L3 cache (MiB) | N/A | 4 | 8 | N/A | 4 | |||||||||||||
APU L3 cache associativity (ways) | 16 | 16 | ||||||||||||||||
L3 cache scheme | Victim | N/A | Victim | Victim | ||||||||||||||
Max stock DRAM support | DDR3-1866 | DDR3-2133 | DDR3-2133, DDR4-2400 | DDR4-2400 | DDR4-2933 | DDR4-3200, LPDDR4-4266 | DDR3L-1333 | DDR3L-1600 | DDR3L-1866 | DDR3-1866, DDR4-2400 | DDR4-2400 | |||||||
Max DRAM channels per APU | 2 | 1 | 2 | |||||||||||||||
Max stock DRAM bandwidth (GB/s) per APU | 29.866 | 34.132 | 38.400 | 46.932 | 68.256 | 10.666 | 12.800 | 14.933 | 19.200 | 38.400 | ||||||||
GPU microarchitecture | TeraScale 2 (VLIW5) | TeraScale 3 (VLIW4) | GCN 2nd gen | GCN 3rd gen | GCN 5th gen[19] | TeraScale 2 (VLIW5) | GCN 2nd gen | GCN 3rd gen[19] | GCN 5th gen | |||||||||
GPU instruction set | TeraScale instruction set | GCN instruction set | TeraScale instruction set | GCN instruction set | ||||||||||||||
Max stock GPU base clock (MHz) | 600 | 800 | 844 | 866 | 1108 | 1250 | 1400 | 1750 | 538 | 600 | ? | 847 | 900 | 1200 | ||||
Max stock GPU base GFLOPS[lower-alpha 6] | 480 | 614.4 | 648.1 | 886.7 | 1134.5 | 1760 | 1971.2 | 1792 | 86 | ? | ? | ? | 345.6 | 460.8 | ||||
3D engine[lower-alpha 7] | Up to 400:20:8 | Up to 384:24:6 | Up to 512:32:8 | Up to 704:44:16[20] | Up to 512:?:? | 80:8:4 | 128:8:4 | Up to 192:?:? | Up to 192:?:? | |||||||||
IOMMUv1 | IOMMUv2 | IOMMUv1 | ? | IOMMUv2 | ||||||||||||||
Video decoder | UVD 3.0 | UVD 4.2 | UVD 6.0 | VCN 1.0[21] | UVD 3.0 | UVD 4.0 | UVD 4.2 | UVD 6.0 | UVD 6.3 | VCN 1.0 | ||||||||
Video encoder | N/A | VCE 1.0 | VCE 2.0 | VCE 3.1 | N/A | VCE 2.0 | VCE 3.1 | |||||||||||
GPU power saving | PowerPlay | PowerTune | PowerPlay | PowerTune[22] | ||||||||||||||
TrueAudio | N/A | [23] | N/A | |||||||||||||||
FreeSync | 1 2 | 1 2
HDCP[lower-alpha 8] | ? | 1.4 | 1.4 2.2 | ? | 1.4 | 1.4 2.2
PlayReady[lower-alpha 8] | N/A | 3.0 not yet | N/A | 3.0 not yet | ||||||||||||||
Supported displays[lower-alpha 9] | 2–3 | 2–4 | 3 | 3 (desktop), 4 (mobile, embedded) | 4 | 2 | 3 | 4
/drm/radeon [lower-alpha 10][25][26] | N/A | N/A
/drm/amdgpu [lower-alpha 10][27] | N/A | [28] | N/A | [28]
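Two of the rows above can be cross-checked arithmetically (an inference from the listed values, not part of the sourced table): the "Max stock GPU base GFLOPS" figures match shader count × 2 FLOPs per clock × base clock, e.g. 704 shaders × 2 × 1.25 GHz = 1760 GFLOPS, and the "Max stock DRAM bandwidth" figures match channels × transfer rate × 8 bytes per transfer, e.g. 2 × 1866.67 MT/s × 8 B ≈ 29.9 GB/s for dual-channel DDR3-1866.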
ARM's Bifrost microarchitecture, as implemented in the Mali-G71,[29] is fully compliant with the HSA 1.1 hardware specifications. As of June 2016, ARM had not announced software support that would use this hardware feature.
Original source: https://en.wikipedia.org/wiki/Heterogeneous_System_Architecture