Heterogeneous System Architecture


Heterogeneous System Architecture (HSA) is a cross-vendor set of specifications that allows central processing units and graphics processors to be integrated on the same bus, with shared memory and tasks.[1] HSA is being developed by the HSA Foundation, which includes (among many others) AMD and ARM. The platform's stated aim is to reduce communication latency between CPUs, GPUs and other compute devices, and to make these devices more compatible from a programmer's perspective,[2]:3[3] relieving the programmer of the task of planning data movement between the devices' disjoint memories (as must currently be done with OpenCL or CUDA).[4] CUDA and OpenCL, as well as most other sufficiently advanced programming languages, can use HSA to increase their execution performance.[5] Heterogeneous computing is widely used in system-on-chip devices such as tablets, smartphones, other mobile devices, and video game consoles.[6] HSA allows programs to use the graphics processor for floating-point calculations without separate memory or scheduling.[7]

Rationale

The rationale behind HSA is to ease the burden on programmers when offloading calculations to the GPU. Originally driven solely by AMD and called the FSA (Fusion System Architecture), the idea was later extended to encompass processing units other than GPUs, such as other manufacturers' DSPs.

Modern GPUs are very well suited to single instruction, multiple data (SIMD) and single instruction, multiple threads (SIMT) workloads, while modern CPUs are still optimized for branch-heavy, serial code.

Overview

Sharing system memory directly between multiple system actors, an approach originally introduced in embedded systems such as the Cell Broadband Engine, makes heterogeneous computing more mainstream. Heterogeneous computing itself refers to systems that contain multiple processing units: central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs), or any type of application-specific integrated circuit (ASIC). The system architecture allows any accelerator, for instance a graphics processor, to operate at the same processing level as the system's CPU.

Among its main features, HSA defines a unified virtual address space for compute devices: where GPUs traditionally have their own memory, separate from the main (CPU) memory, HSA requires these devices to share page tables so that they can exchange data by sharing pointers. This is to be supported by custom memory management units.[2]:6–7 To make this interoperability possible, and to ease various aspects of programming, HSA is intended to be ISA-agnostic for both CPUs and accelerators, and to support high-level programming languages.
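The same pointer-sharing model is exposed to applications through shared virtual memory (SVM) in OpenCL 2.0, which HSA-class hardware is designed to support. The following is a minimal sketch, not HSA-specific code, assuming an OpenCL 2.0 device with fine-grained SVM; the kernel is illustrative and error handling is omitted:

```cpp
// Pointer sharing via OpenCL 2.0 shared virtual memory (SVM): one allocation is
// visible to both CPU and GPU through the same pointer, so no explicit buffer
// copies are needed. Assumes a device with fine-grained SVM support.
#define CL_TARGET_OPENCL_VERSION 200
#include <CL/cl.h>
#include <cstdio>

int main() {
    cl_platform_id platform; cl_device_id device;
    clGetPlatformIDs(1, &platform, nullptr);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q = clCreateCommandQueueWithProperties(ctx, device, nullptr, nullptr);

    // Illustrative kernel that doubles each element in place.
    const char* src =
        "__kernel void scale(__global float* data) {"
        "    data[get_global_id(0)] *= 2.0f;"
        "}";
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, "-cl-std=CL2.0", nullptr, nullptr);
    cl_kernel k = clCreateKernel(prog, "scale", nullptr);

    // A single shared allocation; the CPU writes through the raw pointer ...
    const size_t n = 1024;
    float* data = (float*)clSVMAlloc(ctx, CL_MEM_READ_WRITE | CL_MEM_SVM_FINE_GRAIN_BUFFER,
                                     n * sizeof(float), 0);
    for (size_t i = 0; i < n; ++i) data[i] = (float)i;

    // ... the GPU kernel receives the very same pointer ...
    clSetKernelArgSVMPointer(k, 0, data);
    clEnqueueNDRangeKernel(q, k, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clFinish(q);

    // ... and the CPU reads the result directly, with no clEnqueueReadBuffer.
    std::printf("data[10] = %f\n", data[10]);
    clSVMFree(ctx, data);
    return 0;
}
```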

So far, the HSA specifications cover:

HSA Intermediate Layer

HSAIL (Heterogeneous System Architecture Intermediate Language), a virtual instruction set for parallel programs

  • similar to LLVM Intermediate Representation and SPIR (used by OpenCL and Vulkan)
  • finalized to a specific instruction set by a JIT compiler
  • allows late decisions about which core(s) should run a task
  • explicitly parallel
  • supports exceptions, virtual functions and other high-level features
  • debugging support

HSA memory model

  • compatible with C++11, OpenCL, Java and .NET memory models (see the C++11 sketch after this list)
  • relaxed consistency
  • designed to support both managed languages (e.g. Java) and unmanaged languages (e.g. C)
  • intended to make it much easier to develop third-party compilers for a wide range of heterogeneous products programmed in Fortran, C++, C++ AMP, Java, et al.
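Because the model is specified for compatibility with C++11, ordinary C++11 acquire/release synchronization illustrates the discipline it builds on. The sketch below is plain host-side CPU code, not HSA-specific: a producer publishes data with a release store and a consumer observes it with an acquire load, even though the overall model is relaxed:

```cpp
// Acquire/release publication under a relaxed memory model, as in C++11
// (and, by compatibility, the HSA memory model): the consumer is guaranteed
// to see the payload once it observes the release-store to the flag.
#include <atomic>
#include <cassert>
#include <thread>

static int payload = 0;                    // ordinary, non-atomic shared data
static std::atomic<bool> ready{false};     // synchronization flag

void producer() {
    payload = 42;                                  // write the data first
    ready.store(true, std::memory_order_release);  // then publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // wait for publication
        ;                                          // (busy-wait; fine for a sketch)
    assert(payload == 42);                         // guaranteed to be visible here
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
    return 0;
}
```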

HSA dispatcher and run-time

  • designed to enable heterogeneous task queueing: a work queue per core, distribution of work into queues, load balancing by work stealing (see the sketch after this list)
  • any core can schedule work for any other, including itself
  • significantly reduces the overhead of scheduling work for a core
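The sketch below illustrates the queueing pattern listed above: one queue per worker, with an idle worker stealing from the front of a peer's queue. It is a generic C++ illustration with hypothetical names (Worker, Task), not the HSA runtime's actual implementation:

```cpp
// Per-worker task queues with work stealing: owners push/pop at the back of
// their own deque, thieves steal from the front of a peer's deque.
#include <cstdio>
#include <deque>
#include <functional>
#include <mutex>
#include <optional>
#include <vector>

using Task = std::function<void()>;

class Worker {
public:
    void push(Task t) {                        // owner enqueues at the back
        std::lock_guard<std::mutex> lock(m_);
        q_.push_back(std::move(t));
    }
    std::optional<Task> pop() {                // owner dequeues from the back
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        Task t = std::move(q_.back()); q_.pop_back();
        return t;
    }
    std::optional<Task> steal() {              // thieves take from the front
        std::lock_guard<std::mutex> lock(m_);
        if (q_.empty()) return std::nullopt;
        Task t = std::move(q_.front()); q_.pop_front();
        return t;
    }
private:
    std::mutex m_;
    std::deque<Task> q_;
};

// A worker drains its own queue first and steals from peers when it runs dry.
void run_worker(Worker& self, const std::vector<Worker*>& peers) {
    for (;;) {
        if (auto t = self.pop()) { (*t)(); continue; }
        bool stole = false;
        for (Worker* p : peers) {
            if (auto t = p->steal()) { (*t)(); stole = true; break; }
        }
        if (!stole) break;                     // no work anywhere: stop
    }
}

int main() {
    Worker a, b;
    std::vector<Worker*> peers_of_a{&b};
    for (int i = 0; i < 4; ++i) b.push([i] { std::printf("task %d\n", i); });
    run_worker(a, peers_of_a);                 // a has no own work, so it steals from b
    return 0;
}
```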

Mobile devices are one of HSA's application areas, in which it yields improved power efficiency.[6]

Block diagrams

The illustrations below compare CPU-GPU coordination under HSA versus under traditional architectures.

Software support

AMD GPUs contain certain additional functional units intended to be used as part of HSA. On Linux, the amdkfd kernel driver provides the required support.[9][10]

Some of the HSA-specific features implemented in the hardware need to be supported by the operating system kernel and by specific device drivers. For example, support for AMD Radeon and AMD FirePro graphics cards, and for APUs based on Graphics Core Next (GCN), was merged into version 3.19 of the Linux kernel mainline, released on 8 February 2015.[10] Programs do not interact with amdkfd directly; instead, they queue their jobs through the HSA runtime.[11] This first implementation of amdkfd focuses on the "Kaveri" and "Berlin" APUs and works alongside the existing Radeon kernel graphics driver.
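A minimal sketch of this split, assuming the standard HSA runtime C API (header hsa/hsa.h, e.g. as shipped with AMD's ROCm; the exact header path may vary between packages): the application initializes the runtime and enumerates the agents (CPU and GPU devices) it exposes, rather than opening amdkfd itself:

```cpp
// Talking to the HSA runtime instead of the amdkfd kernel driver directly:
// initialize the runtime and list the agents it knows about. Work would then
// be dispatched through user-mode queues created on a chosen agent.
#include <hsa/hsa.h>
#include <cstdio>

// Callback invoked once per agent (CPU, GPU, ...) exposed by the runtime.
static hsa_status_t print_agent(hsa_agent_t agent, void*) {
    char name[64] = {0};
    hsa_device_type_t type;
    hsa_agent_get_info(agent, HSA_AGENT_INFO_NAME, name);
    hsa_agent_get_info(agent, HSA_AGENT_INFO_DEVICE, &type);
    std::printf("agent: %-24s type: %s\n", name,
                type == HSA_DEVICE_TYPE_GPU ? "GPU" :
                type == HSA_DEVICE_TYPE_CPU ? "CPU" : "other");
    return HSA_STATUS_SUCCESS;
}

int main() {
    if (hsa_init() != HSA_STATUS_SUCCESS) {
        std::fprintf(stderr, "HSA runtime not available\n");
        return 1;
    }
    hsa_iterate_agents(print_agent, nullptr);
    hsa_shut_down();
    return 0;
}
```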

Additionally, amdkfd supports heterogeneous queuing (HQ), which aims to simplify the distribution of computational jobs among multiple CPUs and GPUs from the programmer's perspective. Support for heterogeneous memory management (HMM), suited only for graphics hardware featuring version 2 of AMD's IOMMU, was accepted into Linux kernel mainline version 4.14.[12]

Integrated support for HSA platforms was announced for the "Sumatra" release of OpenJDK, due in 2015.[13]

AMD APP SDK is AMD's proprietary software development kit targeting parallel computing, available for Microsoft Windows and Linux. Bolt is a C++ template library optimized for heterogeneous computing.[14]
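Bolt mirrors the C++ standard library's algorithm interface. The sketch below assumes Bolt's documented bolt::cl interface (header paths and functor names taken from the Bolt project and best treated as assumptions); it performs an element-wise vector addition that Bolt can dispatch to an OpenCL device or fall back to the CPU:

```cpp
// STL-style element-wise vector addition through Bolt's bolt::cl interface.
#include <bolt/cl/functional.h>
#include <bolt/cl/transform.h>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);

    // Same shape as std::transform, but dispatched through the Bolt runtime.
    bolt::cl::transform(a.begin(), a.end(), b.begin(), c.begin(),
                        bolt::cl::plus<float>());

    std::printf("c[0] = %f\n", c[0]);   // expected: 3.000000
    return 0;
}
```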

GPUOpen comprises a couple of other software tools related to HSA; CodeXL version 2.0, for instance, includes an HSA profiler.[15]

Hardware support

AMD

Post-2015 Carrizo and Bristol Ridge APUs also include the version 2 IOMMU functionality for the integrated GPU.[citation needed]

The following table shows features of AMD's APUs (see also: List of AMD accelerated processing units).


Codename Server Basic Toronto
Micro Kyoto
Desktop Mainstream Carrizo Bristol Ridge Raven Ridge Picasso
Entry Llano Trinity Richland Kaveri
Basic Kabini
Mobile Performance Renoir
Mainstream Llano Trinity Richland Kaveri Carrizo Bristol Ridge Raven Ridge Picasso
Entry Dalí
Basic Desna, Ontario, Zacate Kabini, Temash Beema, Mullins Carrizo-L Stoney Ridge
Embedded Trinity Bald Eagle Merlin Falcon,
Brown Falcon
Great Horned Owl Ontario, Zacate Kabini Steppe Eagle, Crowned Eagle,
LX-Family
Prairie Falcon Banded Kestrel
Platform High, standard and low power Low and ultra-low power
Released Aug 2011 Oct 2012 Jun 2013 Jan 2014 Jun 2015 Jun 2016 Oct 2017 Jan 2019 Mar 2020 Jan 2011 May 2013 Apr 2014 May 2015 Feb 2016 Apr 2019
CPU microarchitecture K10 Piledriver Steamroller Excavator "Excavator+"[16] Zen Zen+ Zen 2 Bobcat Jaguar Puma Puma+[17] "Excavator+" Zen
ISA x86-64 x86-64
Socket Desktop High-end N/A N/A
Mainstream N/A AM4
Entry FM1 FM2 FM2+[lower-alpha 1] N/A
Basic N/A N/A AM1 N/A
Other FS1 FS1+, FP2 FP3 FP4 FP5 FP6 FT1 FT3 FT3b FP4 FP5
PCI Express version 2.0 3.0 2.0 3.0
Fab. (nm) GF 32SHP
(HKMG SOI)
GF 28SHP
(HKMG bulk)
GF 14LPP
(FinFET bulk)
GF 12LP
(FinFET bulk)
TSMC N7
(FinFET bulk)
TSMC N40
(bulk)
TSMC N28
(HKMG bulk)
GF 28SHP
(HKMG bulk)
GF 14LPP
(FinFET bulk)
Die area (mm2) 228 246 245 245 250 210[18] 156 75 (+ 28 FCH) 107 ? 125
Min TDP (W) 35 17 12 10 4.5 4 3.95 10 6
Max APU TDP (W) 100 95 65 54 18 25
Max stock APU base clock (GHz) 3 3.8 4.1 3.7 3.8 3.6 3.7 3.3 1.75 2.2 2 2.2 3.2 3.3
Max APUs per node[lower-alpha 2] 1 1
Max CPU[lower-alpha 3] cores per APU 4 8 2 4 2
Max threads per CPU core 1 2 1 2
Integer structure 3+3 2+2 4+2 4+2+1 1+1+1+1 2+2 4+2
i386, i486, i586, CMOV, NOPL, i686, PAE, NX bit, CMPXCHG16B, AMD-V, RVI, ABM, and 64-bit LAHF/SAHF Yes Yes
IOMMU[lower-alpha 4] N/A Yes
BMI1, AES-NI, CLMUL, and F16C N/A Yes
MOVBE N/A Yes
AVIC, BMI2 and RDRAND N/A Yes
ADX, SHA, RDSEED, SMAP, SMEP, XSAVEC, XSAVES, XRSTORS, CLFLUSHOPT, and CLZERO N/A Yes N/A Yes
WBNOINVD, CLWB, RDPID, RDPRU, and MCOMMIT N/A Yes N/A
FPUs per core 1 0.5 1 1 0.5 1
Pipes per FPU 2 2
FPU pipe width 128-bit 256-bit 80-bit 128-bit
CPU instruction set SIMD level SSE4a[lower-alpha 5] AVX AVX2 SSSE3 AVX AVX2
3DNow! 3DNow!+ N/A N/A
PREFETCH/PREFETCHW Yes Yes
FMA4, LWP, TBM, and XOP N/A Yes N/A N/A Yes N/A
FMA3 Yes Yes
L1 data cache per core (KiB) 64 16 32 32
L1 data cache associativity (ways) 2 4 8 8
L1 instruction caches per core 1 0.5 1 1 0.5 1
Max APU total L1 instruction cache (KiB) 256 128 192 256 64 128 96 128
L1 instruction cache associativity (ways) 2 3 4 8 2 3 4
L2 caches per core 1 0.5 1 1 0.5 1
Max APU total L2 cache (MiB) 4 2 4 1 2 1
L2 cache associativity (ways) 16 8 16 8
APU total L3 cache (MiB) N/A 4 8 N/A 4
APU L3 cache associativity (ways) 16 16
L3 cache scheme Victim N/A Victim Victim
Max stock DRAM support DDR3-1866 DDR3-2133 DDR3-2133, DDR4-2400 DDR4-2400 DDR4-2933 DDR4-3200, LPDDR4-4266 DDR3L-1333 DDR3L-1600 DDR3L-1866 DDR3-1866, DDR4-2400 DDR4-2400
Max DRAM channels per APU 2 1 2
Max stock DRAM bandwidth (GB/s) per APU 29.866 34.132 38.400 46.932 68.256 10.666 12.800 14.933 19.200 38.400
GPU microarchitecture TeraScale 2 (VLIW5) TeraScale 3 (VLIW4) GCN 2nd gen GCN 3rd gen GCN 5th gen[19] TeraScale 2 (VLIW5) GCN 2nd gen GCN 3rd gen[19] GCN 5th gen
GPU instruction set TeraScale instruction set GCN instruction set TeraScale instruction set GCN instruction set
Max stock GPU base clock (MHz) 600 800 844 866 1108 1250 1400 1750 538 600 ? 847 900 1200
Max stock GPU base GFLOPS[lower-alpha 6] 480 614.4 648.1 886.7 1134.5 1760 1971.2 1792 86 ? ? ? 345.6 460.8
3D engine[lower-alpha 7] Up to 400:20:8 Up to 384:24:6 Up to 512:32:8 Up to 704:44:16[20] Up to 512:?:? 80:8:4 128:8:4 Up to 192:?:? Up to 192:?:?
IOMMUv1 IOMMUv2 IOMMUv1 ? IOMMUv2
Video decoder UVD 3.0 UVD 4.2 UVD 6.0 VCN 1.0[21] UVD 3.0 UVD 4.0 UVD 4.2 UVD 6.0 UVD 6.3 VCN 1.0
Video encoder N/A VCE 1.0 VCE 2.0 VCE 3.1 N/A VCE 2.0 VCE 3.1
GPU power saving PowerPlay PowerTune PowerPlay PowerTune[22]
TrueAudio N/A Yes[23] N/A Yes
FreeSync 1
2
1
2
HDCP[lower-alpha 8] ? 1.4 1.4
2.2
? 1.4 1.4
2.2
PlayReady[lower-alpha 8] N/A 3.0 not yet N/A 3.0 not yet
Supported displays[lower-alpha 9] 2–3 2–4 3 3 (desktop)
4 (mobile, embedded)
4 2 3 4
/drm/radeon[lower-alpha 10][25][26] Yes N/A Yes N/A
/drm/amdgpu[lower-alpha 10][27] N/A Yes[28] Yes N/A Yes[28] Yes
  1. APU models: A8-7680, A6-7480. CPU only: Athlon X4 845.
  2. A PC would be one node.
  3. An APU combines a CPU and a GPU. Both have cores.
  4. Requires firmware support.
  5. No SSE4. No SSSE3.
  6. Single-precision performance is calculated from the base (or boost) core clock speed based on an FMA operation (see the worked example after these notes).
  7. Unified shaders : texture mapping units : render output units
  8. 8.0 8.1 Playing protected video content also requires card, operating system, driver, and application support, as well as a compatible HDCP display. HDCP is mandatory for the output of certain audio formats, placing additional constraints on the multimedia setup.
  9. To feed more than two displays, the additional panels must have native DisplayPort support.[24] Alternatively active DisplayPort-to-DVI/HDMI/VGA adapters can be employed.
  10. 10.0 10.1 DRM (Direct Rendering Manager) is a component of the Linux kernel. Support in this table refers to the most current version.
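As a worked example of note 6, using values from the table above (the factor of 2 counts the multiply and the add of a single FMA):

\[ \text{GFLOPS}_{\text{SP}} = N_{\text{shaders}} \times 2 \times f_{\text{GPU}}\ [\text{GHz}] \]

For the Trinity column, which lists up to 384 unified shaders at an 800 MHz GPU base clock, this gives 384 × 2 × 0.8 = 614.4 GFLOPS, the value tabulated above.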

ARM

ARM's Bifrost microarchitecture, as implemented in the Mali-G71,[29] is fully compliant with the HSA 1.1 hardware specifications. As of June 2016, ARM had not announced software support that would use this hardware feature.

References

  1. Tarun Iyer (30 April 2013). "AMD Unveils its Heterogeneous Uniform Memory Access (hUMA) Technology". http://www.tomshardware.com/news/AMD-HSA-hUMA-APU,22324.html. 
  2. 2.0 2.1 George Kyriazis (30 August 2012). Heterogeneous System Architecture: A Technical Review (Report). AMD. http://amd-dev.wpengine.netdna-cdn.com/wordpress/media/2012/10/hsa10.pdf. Retrieved 26 May 2014. 
  3. "What is Heterogeneous System Architecture (HSA)?". AMD. http://developer.amd.com/resources/heterogeneous-computing/what-is-heterogeneous-system-architecture-hsa/. 
  4. Joel Hruska (2013-08-26). "Setting HSAIL: AMD explains the future of CPU/GPU cooperation". Ziff Davis. http://www.extremetech.com/gaming/164817-setting-hsail-amd-cpu-gpu-cooperation. 
  5. Linaro (21 March 2014). "LCE13: Heterogeneous System Architecture (HSA) on ARM". slideshare.net. http://www.slideshare.net/mobile/linaroorg/hsa-linaro-updatejuly102013. 
  6. 6.0 6.1 "Heterogeneous System Architecture: Purpose and Outlook". 2012-11-09. http://gpuscience.com/cs/heterogeneous-system-architecture-purpose-and-outlook/. 
  7. "Heterogeneous system architecture: Multicore image processing using a mix of CPU and GPU elements". http://embedded-computing.com/articles/heterogeneous-processing-using-mix-cpu-gpu-elements/. 
  8. "Kaveri microarchitecture". SemiAccurate. 2014-01-15. http://semiaccurate.com/2014/01/15/technical-look-amds-kaveri-architecture/. 
  9. Michael Larabel (21 July 2014). "AMDKFD Driver Still Evolving For Open-Source HSA On Linux". Phoronix. https://www.phoronix.com/scan.php?page=news_item&px=MTc0NTk. 
  10. 10.0 10.1 "Linux kernel 3.19, Section 1.3. HSA driver for AMD GPU devices". 8 February 2015. http://kernelnewbies.org/Linux_3.19#head-ae54e026ef7588f4431f7e94178d27d5cd830bbf. 
  11. "HSA-Runtime-Reference-Source/README.md at master". 14 November 2014. https://github.com/HSAFoundation/HSA-Runtime-Reference-Source/blob/master/README.md. 
  12. "Linux Kernel 4.14 Announced with Secure Memory Encryption and More". 13 November 2017. https://www.xda-developers.com/linux-kernel-414/. 
  13. Alex Woodie (26 August 2013). "HSA Foundation Aims to Boost Java's GPU Prowess". http://www.hpcwire.com/2013/08/26/hsa_foundation_aims_to_boost_javas_gpu_prowess/. 
  14. "Bolt on github". 11 January 2022. https://github.com/HSA-Libraries/Bolt. 
  15. AMD GPUOpen (2016-04-19). "CodeXL 2.0 includes HSA profiler". http://gpuopen.com/codexl-2-0-is-here-and-open-source/. 
  16. "AMD Announces the 7th Generation APU: Excavator mk2 in Bristol Ridge and Stoney Ridge for Notebooks". 31 May 2016. https://www.anandtech.com/show/10362/amd-7th-generation-apu-bristol-ridge-stoney-ridge-for-notebooks. Retrieved 3 January 2020. 
  17. "AMD Mobile "Carrizo" Family of APUs Designed to Deliver Significant Leap in Performance, Energy Efficiency in 2015" (Press release). 20 November 2014. Retrieved 16 February 2015.
  18. "The Mobile CPU Comparison Guide Rev. 13.0 Page 5 : AMD Mobile CPU Full List". TechARP.com. https://www.techarp.com/guides/mobile-cpu-comparison-guide/5/. Retrieved 13 December 2017. 
  19. 19.0 19.1 "AMD VEGA10 and VEGA11 GPUs spotted in OpenCL driver". VideoCardz.com. http://videocardz.com/62250/amd-vega10-and-vega11-gpus-spotted-in-opencl-driver/. Retrieved 6 June 2017. 
  20. Cutress, Ian (1 February 2018). "Zen Cores and Vega: Ryzen APUs for AM4 – AMD Tech Day at CES: 2018 Roadmap Revealed, with Ryzen APUs, Zen+ on 12nm, Vega on 7nm". Anandtech. https://www.anandtech.com/show/12233/amd-tech-day-at-ces-2018-roadmap-revealed-with-ryzen-apus-zen-on-12nm-vega-on-7nm/3. Retrieved 7 February 2018. 
  21. Larabel, Michael (17 November 2017). "Radeon VCN Encode Support Lands in Mesa 17.4 Git". Phoronix. https://www.phoronix.com/scan.php?page=news_item&px=Radeon-VCN-Encode-Lands. Retrieved 20 November 2017. 
  22. Tony Chen; Jason Greaves, "AMD's Graphics Core Next (GCN) Architecture", AMD, http://meseec.ce.rit.edu/551-projects/fall2014/3-4.pdf, retrieved 13 August 2016 
  23. "A technical look at AMD's Kaveri architecture". Semi Accurate. http://semiaccurate.com/2014/01/15/technical-look-amds-kaveri-architecture/. Retrieved 6 July 2014. 
  24. "How do I connect three or More Monitors to an AMD Radeon™ HD 5000, HD 6000, and HD 7000 Series Graphics Card?". AMD. http://support.amd.com/en-us/search/faq/154. Retrieved 8 December 2014. 
  25. Airlie, David (26 November 2009). "DisplayPort supported by KMS driver mainlined into Linux kernel 2.6.33". http://airlied.livejournal.com/68805.html. Retrieved 16 January 2016. 
  26. "Radeon feature matrix". freedesktop.org. http://xorg.freedesktop.org/wiki/RadeonFeature/. Retrieved 10 January 2016. 
  27. Deucher, Alexander (16 September 2015). "XDC2015: AMDGPU". http://www.x.org/wiki/Events/XDC2015/Program/deucher_zhou_amdgpu.pdf. Retrieved 16 January 2016. 
  28. 28.0 28.1 Michel Dänzer (17 November 2016). "[ANNOUNCE] xf86-video-amdgpu 1.2.0". lists.x.org. https://lists.x.org/archives/xorg-announce/2016-November/002741.html. 
  29. "ARM Bifrost GPU Architecture". 2016-05-30. http://www.anandtech.com/show/10375/arm-unveils-bifrost-and-mali-g71/5. 

Licensed under CC BY-SA 3.0 | Source: https://handwiki.org/wiki/Engineering:Heterogeneous_System_Architecture