Computer performance by orders of magnitude


This list compares various amounts of computing power, in instructions per second (IPS) or floating-point operations per second (FLOPS), organized by order of magnitude.
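
The bands below group rates by their power of ten. As a quick illustration, a short Python sketch (the helper name is ours, not from the article) maps a rate in OP/S to the order of magnitude it falls under:

```python
import math

def order_of_magnitude(ops_per_second: float) -> int:
    """Floor of log10: the power of ten a rate falls under in this list."""
    return math.floor(math.log10(ops_per_second))

# Examples from the list: human pen-and-paper multiplication (~2e-3 OP/S)
# lands at order -3 (milliscale); Frontier (~1.1e18 FLOPS) at order 18 (exascale).
```

Note that the floor matters: 2×10⁻³ has log₁₀ ≈ −2.7, which rounds down to −3, placing it in the milliscale band rather than the centiscale one.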


Milliscale computing (10⁻³)

  • 2×10⁻³: average human multiplication of two 10-digit numbers using pen and paper without aids[1]

Deciscale computing (10⁻¹)

  • 1×10⁻¹: multiplication of two 10-digit numbers by a 1940s electromechanical desk calculator[1]
  • 3×10⁻¹: multiplication on Zuse Z3 and Z4, the first programmable digital computers, 1941 and 1945 respectively
  • 5×10⁻¹: computing power of the average human mental calculation[clarification needed] for multiplication using pen and paper

Scale computing (10⁰)

  • 1 OP/S: power of an average human performing calculations[clarification needed] using pen and paper
  • 1.2 OP/S: addition on Z3, 1941, and multiplication on Bell Model V, 1946
  • 2.4 OP/S: addition on Z4, 1945
  • 5 OP/S: world record speed for human addition

Decascale computing (10¹)

  • 1.8×10¹: ENIAC, first programmable electronic digital computer, 1945[2]
  • 5×10¹: upper end of serialized human perception computation (light bulbs do not flicker to the human observer)
  • 7×10¹: Whirlwind I, a 1951 vacuum-tube computer, and the IBM 1620, a 1959 transistorized scientific minicomputer[2]

Hectoscale computing (10²)

  • 1.3×10²: PDP-4 commercial minicomputer, 1962[2]
  • 2×10²: IBM 602 electromechanical calculator (then called a computer), 1946[citation needed]
  • 2.2×10²: upper end of serialized human throughput, roughly expressed by the lower limit of accurate event placement on small scales of time (the swing of a conductor's arm, the reaction time to lights on a drag strip, etc.)[3]
  • 6×10²: Manchester Mark 1 electronic general-purpose stored-program digital computer, 1949[4]

Kiloscale computing (10³)

Megascale computing (10⁶)

Gigascale computing (10⁹)

Terascale computing (10¹²)

Petascale computing (10¹⁵)

  • 1.026×10¹⁵: IBM Roadrunner supercomputer, 2008
  • 1.32×10¹⁵: Nvidia GeForce RTX 4090 consumer graphics card, which reaches 1.32 petaflops in AI applications, October 2022[10]
  • 2×10¹⁵: Nvidia DGX-2, a 2-petaflop machine-learning system (the newer DGX A100 has 5-petaflop performance)
  • 10×10¹⁵: minimum computing power of a Type I Kardashev civilization[5]
  • 11.5×10¹⁵: Google TPU pod containing 64 second-generation TPUs, May 2017[11]
  • 17.17×10¹⁵: IBM Sequoia's LINPACK performance, June 2013[12]
  • 20×10¹⁵: roughly the hardware equivalent of the human brain according to Ray Kurzweil, published in his 1999 book The Age of Spiritual Machines: When Computers Exceed Human Intelligence[13]
  • 33.86×10¹⁵: Tianhe-2's LINPACK performance, June 2013[12]
  • 36.8×10¹⁵: 2001 estimate of the computational power required to simulate a human brain in real time[14]
  • 93.01×10¹⁵: Sunway TaihuLight's LINPACK performance, June 2016[15]
  • 143.5×10¹⁵: Summit's LINPACK performance, November 2018[16]

Exascale computing (10¹⁸)

  • 1×10¹⁸: the U.S. Department of Energy and NSA estimated in 2008 that they would need exascale computing around 2018[17]
  • 1×10¹⁸: Fugaku 2020 supercomputer in single-precision mode[18]
  • 1.1×10¹⁸: Frontier 2022 supercomputer
  • 1.88×10¹⁸: U.S. Summit achieved a peak throughput of this many operations per second while analysing genomic data using a mixture of numerical precisions[19]
  • 2.43×10¹⁸: Folding@home distributed computing system during COVID-19 pandemic response[20]

Zettascale computing (10²¹)

  • 1×10²¹: accurate global weather prediction roughly two weeks ahead.[21] Assuming Moore's law remains applicable, such systems may be feasible around 2035.[22]
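
The ~2035 figure can be sanity-checked with a simple doubling-time extrapolation. The parameters here are our assumptions, not the article's: Frontier's ~1.1 exaflops in 2022 as the baseline, and an 18-month performance doubling time:

```python
import math

baseline_flops = 1.1e18   # assumed baseline: Frontier, 2022
baseline_year = 2022
doubling_years = 1.5      # assumed Moore's-law-style doubling time
target_flops = 1e21       # zettascale

# Number of doublings needed, then the year they would accumulate by.
doublings = math.log2(target_flops / baseline_flops)
year = baseline_year + doublings * doubling_years
```

With these assumptions the crossover lands in the mid-to-late 2030s, broadly consistent with the cited 2035 projection; a 2-year doubling time pushes it past 2040, which shows how sensitive such forecasts are to the assumed cadence.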

A zettascale computer system could generate more single-precision floating-point data in one second than was stored by any digital means on Earth as of the first quarter of 2011.[citation needed]
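
The claim above is plausible on back-of-the-envelope arithmetic. Assuming 4 bytes per single-precision float (IEEE 754) and a commonly cited estimate of roughly 3×10²⁰ bytes (~300 exabytes) of global digital storage around 2011 (our assumed figure, not sourced from the article):

```python
flops = 1e21              # one zettascale machine, one second of output
bytes_per_float = 4       # IEEE 754 single precision
data_per_second = flops * bytes_per_float  # 4e21 bytes = 4 zettabytes

storage_2011 = 3e20       # assumed: ~300 EB global storage, early 2010s
```

One second of output (~4 ZB) exceeds that storage estimate by roughly an order of magnitude.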

Beyond zettascale computing (>10²¹)

  • 1.12×10³⁶: estimated computational power of a Matrioshka brain, assuming 1.87×10²⁶ watts of power produced by solar panels and an efficiency of 6 GFLOPS/watt[23]
  • 4×10⁴⁸: estimated computational power of a Matrioshka brain whose power source is the Sun, whose outermost layer operates at 10 kelvins, and whose constituent parts operate at or near the Landauer limit and draw power at the efficiency of a Carnot engine
  • 5×10⁵⁸: estimated computational power of a galaxy equivalent in luminosity to the Milky Way converted into Matrioshka brains
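
The first Matrioshka-brain figure is a straightforward product of the two stated assumptions, which a one-line check confirms:

```python
watts = 1.87e26           # power output assumed in the article (solar panels)
flops_per_watt = 6e9      # 6 GFLOPS/watt efficiency, as stated
total_flops = watts * flops_per_watt  # ≈ 1.12e36 FLOPS
```

Multiplying the captured power by the conversion efficiency reproduces the listed 1.12×10³⁶ FLOPS to within rounding.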


References

[edit]
  1. Neumann, John von; Brody, F.; Vamos, Tibor (1995). The Neumann Compendium. World Scientific. ISBN 978-981-02-2201-7.
  2. "Cost of CPU Performance Through Time 1944-2003". www.jcmit.net. Retrieved 2024-01-15.
  3. "How many frames per second can the human eye see?". 2004-05-19. Retrieved 2013-02-19.
  4. Copeland, B. Jack (2012-05-24). Alan Turing's Electronic Brain: The Struggle to Build the ACE, the World's Fastest Computer. OUP Oxford. ISBN 978-0-19-960915-4.
  5. Gray, Robert H. (2020-04-23). "The Extended Kardashev Scale". The Astronomical Journal. 159 (5): 228. Bibcode:2020AJ....159..228G. doi:10.3847/1538-3881/ab792b. ISSN 1538-3881. S2CID 218995201.
  6. "Intel 980x Gulftown | Synthetic Benchmarks | CPU & Mainboard | OC3D Review". www.overclock3d.net. March 12, 2010.
  7. Pearson, Tony. IBM Watson - How to build your own "Watson Jr." in your basement. Inside System Storage.
  8. "DGX-1 deep learning system" (PDF). NVIDIA DGX-1 Delivers 75X Faster Training. Note: Caffe benchmark with AlexNet, training 1.28M images with 90 epochs.
  9. "DGX Server". Nvidia. Retrieved 7 September 2017.
  10. "NVIDIA GeForce-News". 12 October 2022.
  11. "Build and train machine learning models on our new Google Cloud TPUs". 17 May 2017.
  12. "Top500 List - June 2013 | TOP500 Supercomputer Sites". top500.org. Archived from the original on 2013-06-22.
  13. Kurzweil, Ray (1999). The Age of Spiritual Machines: When Computers Exceed Human Intelligence. New York, NY: Penguin. ISBN 9780140282023.
  14. "Brain on a Chip". 30 November 2001.
  15. "Top500 List - June 2016 | TOP500 Supercomputer Sites". top500.org.
  16. "November 2018 | TOP500 Supercomputer Sites". www.top500.org. Retrieved 2018-11-30.
  17. "'Exaflop' Supercomputer Planning Begins". 2008-02-02. Archived from the original on 2008-10-01. Retrieved 2010-01-04. Through the IAA, scientists plan to conduct the basic research required to create a computer capable of performing a million trillion calculations per second, otherwise known as an exaflop.
  18. "June 2020 | TOP500".
  19. "Genomics Code Exceeds Exaops on Summit Supercomputer". Oak Ridge Leadership Computing Facility. Retrieved 2018-11-30.
  20. Pande lab. "Client Statistics by OS". Archive.is. Archived from the original on 2020-04-12. Retrieved 2020-04-12.
  21. DeBenedictis, Erik P. (2005). "Reversible logic for supercomputing". Proceedings of the 2nd Conference on Computing Frontiers. ACM Press. pp. 391–402. ISBN 1-59593-019-1.
  22. "Zettascale by 2035? China Thinks So". 6 December 2018.
  23. Eddison, Jacob; Marsden, Joe; Levin, Guy; Vigneswara, Darshan (2017-12-12). "Matrioshka Brain". Journal of Physics Special Topics. 16 (1). Department of Physics and Astronomy, University of Leicester.
  24. Moore, Gordon E. (1965). "Cramming more components onto integrated circuits" (PDF). Electronics Magazine. p. 4. Retrieved 2006-11-11.

Licensed under CC BY-SA 3.0 | Source: https://en.wikipedia.org/wiki/Computer_performance_by_orders_of_magnitude