Artificial intelligence, especially foundation models, has made rapid progress, surpassing human capabilities in various benchmarks.
Progress in artificial intelligence (AI) refers to the advances, milestones, and breakthroughs achieved in the field over time. AI is a branch of computer science that aims to create machines and systems capable of performing tasks that typically require human intelligence. AI applications have been used in a wide range of fields including medical diagnosis, finance, robotics, law, video games, agriculture, and scientific discovery. Society as a whole expects artificial intelligence to be a key factor in the coming years because of its potential. However, many AI applications are not perceived as AI: "A lot of cutting-edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[1][2]
"Many thousands of AI applications are deeply embedded in the infrastructure of every industry."[3] In the late 1990s and early 2000s, AI technology became widely used as elements of larger systems,[3][4] but the field was rarely credited for these successes at the time.
Kaplan and Haenlein structure artificial intelligence along three evolutionary stages:
Artificial narrow intelligence – AI capable only of specific tasks;
Artificial general intelligence – AI with ability in several areas, and able to autonomously solve problems they were never even designed for;
Artificial superintelligence – AI capable of general tasks, including scientific creativity, social skills, and general wisdom.[2]
To allow comparison with human performance, artificial intelligence can be evaluated on constrained and well-defined problems. Such tests have been termed subject-matter expert Turing tests. Smaller problems also provide more achievable goals, and there is an ever-increasing number of positive results.
In 2023, humans still substantially outperformed both GPT-4 and other models tested on the ConceptARC benchmark. Those models scored 60% on most, and 77% on one category, while humans scored 91% on all and 97% on one category.[5] However, later research in 2025 showed that human-generated output grids were only accurate 73% of the time, while AI models available that year managed to score above 77%.[6]
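The accuracy figures above are pass@1 rates: the fraction of tasks solved on the first attempt. A minimal sketch of the standard pass@k estimator commonly used in such evaluations (a general formula, not specific to ConceptARC; the numbers in the final check are illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n attempts of which c were correct,
    solves the task."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws: success guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# pass@1 reduces to the plain first-attempt success rate c / n.
assert abs(pass_at_k(480, 350, 1) - 350 / 480) < 1e-12
```

For k = 1 the formula collapses to c / n, which is why pass@1 numbers can be read directly as accuracy percentages.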
There are many useful abilities that can be described as showing some form of intelligence, and evaluating them separately gives better insight into the comparative success of artificial intelligence in different areas.
AI, like electricity or the steam engine, is a general-purpose technology. There is no consensus on how to characterize which tasks AI tends to excel at.[16] Some versions of Moravec's paradox observe that humans are more likely to outperform machines in areas such as physical dexterity that have been the direct target of natural selection.[17] While projects such as AlphaZero have succeeded in generating their own knowledge from scratch, many other machine learning projects require large training datasets.[18][19] Researcher Andrew Ng has suggested, as a "highly imperfect rule of thumb", that "almost anything a typical human can do with less than one second of mental thought, we can probably now or in the near future automate using AI."[20]
Games provide a high-profile benchmark for assessing rates of progress; many games have a large professional player base and a well-established competitive rating system. AlphaGo brought the era of classical board-game benchmarks to a close in 2016, when DeepMind's AlphaGo program defeated Lee Sedol, then one of the world's best professional Go players.[21] Games of imperfect knowledge provide new challenges to AI in the area of game theory; the most prominent milestone in this area was Libratus' poker victory in 2017.[22][23] E-sports continue to provide additional benchmarks; Facebook AI, DeepMind, and others have engaged with the popular StarCraft franchise of video games.[24][25]
Broad classes of outcome for an AI test may be given as:
optimal: it is not possible to perform better (note: some of these entries were solved by humans)
Heads-up limit hold'em poker: Statistically optimal in the sense that "a human lifetime of play is not sufficient to establish with statistical significance that the strategy is not an exact solution" (2015)[28]
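The "exact solution" claim above is a statement about exploitability: how much a best-responding opponent could gain against the published strategy. A minimal sketch of the idea on a toy zero-sum matrix game (matching pennies stands in for poker here; the game and payoffs are illustrative, not from the cited work):

```python
def best_response_value(payoff, row_strategy):
    """Row player's guaranteed payoff once the column player best-responds:
    min over columns j of sum_i row_strategy[i] * payoff[i][j]."""
    n_cols = len(payoff[0])
    return min(
        sum(p * payoff[i][j] for i, p in enumerate(row_strategy))
        for j in range(n_cols)
    )

# Matching pennies (game value 0): the uniform strategy concedes nothing,
# while a biased strategy loses value to a best-responding opponent.
pennies = [[1, -1], [-1, 1]]
print(best_response_value(pennies, [0.5, 0.5]))  # 0.0
print(best_response_value(pennies, [0.9, 0.1]))  # about -0.8
```

A strategy is "essentially solved" in the 2015 sense when this gap from the game value is too small to detect statistically within a human lifetime of play.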
Proposed "universal intelligence" tests aim to compare how well machines, humans, and even non-human animals perform on problem sets that are as generic as possible. At an extreme, the test suite can contain every possible problem, weighted by Kolmogorov complexity; however, these problem sets tend to be dominated by impoverished pattern-matching exercises where a tuned AI can easily exceed human performance levels.[78][79][80][81][82]
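One way to make this weighting concrete is a sketch along the lines of Legg and Hutter's "universal intelligence" measure (the notation below is assumed for illustration, not taken from the cited sources):

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
```

where $E$ is a set of computable environments, $K(\mu)$ is the Kolmogorov complexity of environment $\mu$, and $V^{\pi}_{\mu}$ is the expected cumulative reward agent $\pi$ obtains in $\mu$. Because the weight $2^{-K(\mu)}$ decays exponentially with description length, the score is dominated by the simplest environments, which is one formal reason such suites tend toward impoverished pattern-matching exercises.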
Exams
According to OpenAI, in 2023 GPT-4 achieved high scores on several standardized and professional examinations, including around the 90th percentile on the Uniform Bar Exam, the 89th percentile on the mathematics section of the SAT, the 93rd percentile on SAT Reading and Writing, the 54th percentile on the analytical writing section of the GRE, the 88th percentile on GRE quantitative reasoning, and the 99th percentile on GRE verbal reasoning. OpenAI also reported that GPT-4 scored in the 99th to 100th percentile on the 2020 USA Biology Olympiad semifinal exam and earned top scores on several AP exams.[83]
Independent researchers found in 2023 that ChatGPT based on GPT-3.5 performed "at or near the passing threshold" on all three parts of the United States Medical Licensing Examination (USMLE), suggesting that large language models could reach passing-level performance on some medical knowledge assessments even without domain-specific fine-tuning.[84] GPT-3.5 was also reported to attain a low but passing grade on examinations for four law school courses at the University of Minnesota.[83]
Further studies reported that GPT-4 passed a text-based radiology board-style examination.[85] Later radiology studies in 2024–2025 continued to find strong performance by newer models on exam-style questions, including image-based and student radiology examinations, while also noting persistent weaknesses and variation by task type.[86][87]
By 2025, comparative studies found substantial variation in medical-exam performance across models rather than a uniform "passing" level. A 2025 benchmarking study on publicly available USMLE sample questions reported that newer models such as ChatGPT and DeepSeek outperformed some rivals, but also made distinct errors and still showed limitations in clinical reasoning and domain-specific understanding.[88]
Newer legal benchmarks published in 2025 likewise suggested that exam performance remained uneven. The LEXam benchmark, built from 340 law exams across 116 law school courses, found that long-form legal reasoning remained challenging for contemporary large language models, especially on open-ended questions requiring structured, multi-step analysis.[89]
By 2026, broader work on expert-level academic testing emphasized that many older benchmarks and exam-style tasks were becoming saturated. A 2026 Nature paper introducing Humanity's Last Exam argued that state-of-the-art systems had surpassed 90% accuracy on several popular benchmarks, while still showing low accuracy on a more difficult benchmark designed to test the frontier of expert human knowledge.[90] Stanford HAI also cautioned in 2025 that benchmark and exam performance should not be treated as equivalent to reliable real-world performance or trustworthy decision-making.[91]
Many competitions and prizes, such as the ImageNet Challenge, promote research in artificial intelligence. The most common areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games.[92]
Past and current predictions
An expert poll around 2016, conducted by Katja Grace of the Future of Humanity Institute and associates, gave median estimates of 3 years for championship Angry Birds, 4 years for the World Series of Poker, and 6 years for StarCraft. On more subjective tasks, the poll gave 6 years for folding laundry as well as an average human worker, 7–10 years for expertly answering 'easily Googleable' questions, 8 years for average speech transcription, 9 years for average telephone banking, and 11 years for expert songwriting, but over 30 years for writing a New York Times bestseller or winning the Putnam math competition.[93][94][95]
Subsequent developments in the late 2010s and early 2020s showed rapid progress in several benchmark tasks, particularly in games and structured problem domains. Systems such as AlphaGo, AlphaZero, and later large language models achieved or exceeded human-level performance on a range of established benchmarks.[96][97][98]
At the same time, researchers have noted that performance on narrow benchmarks can saturate as systems are optimized for specific tasks, and that success on such evaluations does not necessarily generalize to broader forms of intelligence.[99]
Chess
Deep Blue at the Computer History Museum
An AI defeated a grandmaster in a regulation tournament game for the first time in 1988; rebranded as Deep Blue, it beat the reigning human world chess champion in 1997 (see Deep Blue versus Garry Kasparov).[100]
By the 2010s, chess engines running on consumer hardware had surpassed top human players by a wide margin. Neural-network-based systems such as AlphaZero demonstrated that superhuman performance could be achieved through reinforcement learning from self-play without reliance on human expert data.[101] Modern engines are widely used in preparation and analysis, and unaided human play is no longer competitive with top computer systems.
Estimates of when computers would exceed humans at chess
AlphaGo defeated a European Go champion in October 2015, and in March 2016 defeated Lee Sedol, one of the world's top players (see AlphaGo versus Lee Sedol). According to Scientific American and other sources, most observers had expected superhuman computer Go performance to be at least a decade away.[104][105][106]
Subsequent systems such as AlphaGo Zero and AlphaZero demonstrated that superhuman performance could be achieved without human training data, using reinforcement learning from self-play.[107] By the late 2010s, computer Go programs had surpassed human champions by a substantial margin, and Go ceased to be a primary frontier benchmark for AI research.
Estimates of when computers would exceed humans at Go
AI pioneer and economist Herbert A. Simon inaccurately predicted in 1965: "Machines will be capable, within twenty years, of doing any work a man can do". Similarly, in 1970 Marvin Minsky wrote that "Within a generation... the problem of creating artificial intelligence will substantially be solved."[113]
Four polls conducted in 2012 and 2013 suggested that the median estimate among experts for when AGI would arrive was 2040 to 2050, depending on the poll.[114][115]
The Grace poll around 2016 found results varied depending on how the question was framed. Respondents asked to estimate "when unaided machines can accomplish every task better and more cheaply than human workers" gave an aggregated median answer of 45 years and a 10% chance of it occurring within 9 years. Other respondents asked to estimate "when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers" estimated a median of 122 years and a 10% probability of 20 years. The median response for when "AI researcher" could be fully automated was around 90 years. No link was found between seniority and optimism, but Asian researchers were much more optimistic than North American researchers on average; Asians predicted 30 years on average for "accomplish every task", compared with the 74 years predicted by North Americans.[93][94][95]
A larger survey of 2,778 researchers who had published in top AI venues, fielded in 2023 and published in 2025, found shorter timelines for what it called "high-level machine intelligence". In that survey, the aggregate forecast assigned a 10% chance to unaided machines outperforming humans at every task by 2027 and a 50% chance by 2047. The same survey estimated that the full automation of all human occupations would reach a 10% probability by 2037 and a 50% probability by 2116.[116]
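As a reading aid for such quantile forecasts, one can linearly interpolate between the two reported dates (a crude sketch only; the survey publishes quantiles rather than a full distribution, and the clamping outside the quoted range is an assumption made here):

```python
def interpolated_probability(year: int, p10_year: int, p50_year: int) -> float:
    """Linearly interpolate a cumulative probability between the years at
    which a forecast assigns 10% and 50% probability."""
    if year <= p10_year:
        return 0.10
    if year >= p50_year:
        return 0.50
    frac = (year - p10_year) / (p50_year - p10_year)
    return 0.10 + frac * (0.50 - 0.10)

# "Every task" forecast from the 2023 survey: 10% by 2027, 50% by 2047.
print(round(interpolated_probability(2037, 2027, 2047), 2))  # 0.3
```

Under this linear reading, the midpoint year 2037 sits at roughly a 30% cumulative probability, halfway between the two quoted quantiles.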
Despite increasingly short timelines in some surveys, there was still no consensus in late 2025 and early 2026 that AGI was imminent. In Stanford HAI's predictions for 2026, co-director James Landay said: "there will be no AGI this year".[117]
Estimates of when AGI will arrive

Year prediction made | Predicted year | Number of years | Predictor | Contemporaneous source
1965 | 1985 or sooner | 20 or less | Herbert A. Simon | The shape of automation for men and management[113][118]
Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons 62: 15–25. doi:10.1016/j.bushor.2018.08.004.
Beger, Claas; Yi, Ryan; Fu, Shuhao; Moskvichev, Arseny; Tsai, Sarah W.; Rajamanickam, Sivasankaran; Mitchell, Melanie (6 October 2025). "Do AI Models Perform Human-like Abstract Reasoning Across Modalities?". arXiv:2510.02125 [cs.AI]. "we found that human-generated output grids achieved an overall pass@1 accuracy of 73% on the 480 ConceptARC tasks, lower than that of the top reasoning models in the textual modality."
Approximate year AI started beating top human experts
van den Herik, H. Jaap; Uiterwijk, Jos W.H.M.; van Rijswijck, Jack (January 2002). "Games solved: Now and in the future". Artificial Intelligence 134 (1–2): 277–311. doi:10.1016/S0004-3702(01)00152-7.
Mokyr, Joel (2019-11-01). "The Technology Trap: Capital, Labor, and Power in the Age of Automation. By Carl Benedikt Frey. Princeton: Princeton University Press, 2019. Pp. 480. $29.95, hardcover". The Journal of Economic History 79 (4): 1183–1189. doi:10.1017/s0022050719000639. ISSN 0022-0507.
Ontanon, Santiago; Synnaeve, Gabriel; Uriarte, Alberto; Richoux, Florian; Churchill, David; Preuss, Mike (December 2013). "A Survey of Real-Time Strategy Game AI Research and Competition in StarCraft". IEEE Transactions on Computational Intelligence and AI in Games 5 (4): 293–311. doi:10.1109/TCIAIG.2013.2286295.
Tesauro, Gerald (January 2002). "Programming backgammon using self-teaching neural nets". Artificial Intelligence 134 (1–2): 181–199. doi:10.1016/S0004-3702(01)00110-2. "...at least two other neural net programs also appear to be capable of superhuman play."
"Mastering Stratego with Model-Free Multiagent Reinforcement Learning". Science. 2022.
Bakhtin, Anton; Wu, David; Lerer, Adam; Gray, Jonathan; Jacob, Athul; Farina, Gabriele; Miller, Alexander; Brown, Noam (11 October 2022). "Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning". arXiv:2210.05492 [cs.GT].
Hu, Hengyuan; Wu, David; Lerer, Adam; Foerster, Jakob; Brown, Noam (11 October 2022). "Human-AI Coordination via Human-Regularized Search and Learning". arXiv:2210.05125 [cs.AI].
Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (2019). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". NAACL-HLT.
Santoro, Adam; Bartunov, Sergey; Botvinick, Matthew; Wierstra, Daan; Lillicrap, Timothy (19 May 2016). "One-shot Learning with Memory-Augmented Neural Networks". p. 5, Table 1. arXiv:1605.06065 [cs.LG]. 4.2. Omniglot Classification: "The network exhibited high classification accuracy on just the second presentation of a sample from a class within an episode (82.8%), reaching up to 94.9% accuracy by the fifth instance and 98.1% accuracy by the tenth."
Zhang, D., Mishra, S., Brynjolfsson, E., Etchemendy, J., Ganguli, D., Grosz, B., ... & Perrault, R. (2021). The AI Index 2021 Annual Report. AI Index (Stanford University). arXiv preprint arXiv:2103.06312.
van der Maas, Han L.J.; Snoek, Lukas; Stevenson, Claire E. (2021). "How much intelligence is there in artificial intelligence? A 2020 update". Intelligence 87. doi:10.1016/j.intell.2021.101548.
Smith, Ray (2007). "An Overview of the Tesseract OCR Engine". Proceedings of the Ninth International Conference on Document Analysis and Recognition. doi:10.1109/ICDAR.2007.4376991.
He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). "Deep Residual Learning for Image Recognition". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. doi:10.1109/CVPR.2016.90.
Brynjolfsson, Erik; Mitchell, Tom (2017). "What can machine learning do? Workforce implications". Science 358 (6370): 1530–1534. doi:10.1126/science.aap8062.
Nie, Weili; Yu, Zhaowei; Mao, Liqiang; Patel, Ankit B.; Zhu, Yuke; Anandkumar, Anima (2020). "Bongard-LOGO: A New Benchmark for Human-level Concept Learning and Reasoning". Advances in Neural Information Processing Systems 33.
Fischer, Thomas; Krauss, Christopher (2018). "Deep learning with long short-term memory networks for financial market predictions". European Journal of Operational Research 270 (2): 654–669. doi:10.1016/j.ejor.2017.11.054.
Stephenson, Matthew; Renz, Jochen; Ge, Xiaoyu (2020). "The computational complexity of Angry Birds". Artificial Intelligence 280. doi:10.1016/j.artint.2019.103232.
Koehn, Philipp (2020). Neural Machine Translation. Cambridge University Press.
Navigli, Roberto (2009). "Word Sense Disambiguation: A Survey". ACM Computing Surveys 41 (2). doi:10.1145/1459352.1459355.
Schoenick, Carissa; Clark, Peter; Tafjord, Oyvind; Turney, Peter; Etzioni, Oren (23 August 2017). "Moving beyond the Turing Test with the Allen AI Science Challenge". Communications of the ACM 60 (9): 60–64. doi:10.1145/3122814.
Feigenbaum, Edward A. (2003). "Some challenges and grand challenges for computational intelligence". Journal of the ACM 50 (1): 32–40. doi:10.1145/602382.602400.
Hernandez-Orallo, J.; Dowe, D. L. (2010). "Measuring Universal Intelligence: Towards an Anytime Intelligence Test". Artificial Intelligence 174 (18): 1508–1539. doi:10.1016/j.artint.2010.09.006.
Hernández-Orallo, José; Dowe, David L.; Hernández-Lloreda, M. Victoria (March 2014). "Universal psychometrics: Measuring cognitive abilities in the machine kingdom". Cognitive Systems Research 27: 50–74. doi:10.1016/j.cogsys.2013.06.001.
Gotta, Jan; Brendlin, Anna; Hempel, Jan Matthias; Weikert, Thomas; Gassenmaier, Thomas (2025). "Large language models (LLMs) in radiology exams for medical students: results of GPT-3.5, GPT-4, Perplexity and Bing". European Radiology. doi:10.1007/s00330-024-11218-0. PMID 39496293.
Sun, Shenghan; Yoon, Hye Mi; Lee, Jin Woo; Kim, Jin Mo (2025). "Large Language Models with Vision on Diagnostic Radiology Board Examination Questions". Radiology: Artificial Intelligence. doi:10.1148/ryai.240241. PMID 39632215.
Grace, Katja; Salvatier, John; Dafoe, Allan; Zhang, Baobao; Evans, Owain (2017). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
Silver, David; Hubert, Thomas; Schrittwieser, Julian (2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv.
Gibney, Elizabeth (28 January 2016). "Google AI algorithm masters ancient game of Go". Nature 529 (7587): 445–446. doi:10.1038/529445a.
Bowman, Samuel R. (2022). "Challenges in Measuring Progress in Natural Language Understanding". Communications of the ACM 65 (1): 60–68. doi:10.1145/3491209.
Silver, David; Hubert, Thomas; Schrittwieser, Julian (2017). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv.
Silver, David; Schrittwieser, Julian; Simonyan, Karen (2017). "Mastering the game of Go without human knowledge". Nature 550: 354–359. doi:10.1038/nature24270.
Müller, Vincent C.; Bostrom, Nick (2016). "Future progress in artificial intelligence: A survey of expert opinion". Fundamental Issues of Artificial Intelligence. Cham: Springer. pp. 555–572. doi:10.1007/978-3-319-26485-1_33.
Grace, Katja; Thomas, Stephen; Stein-Perlman, Zach; Brauner, Jan; Korzekwa, Richard C. (2025). "Thousands of AI Authors on the Future of AI". Journal of Artificial Intelligence Research 82: 1–58. doi:10.1613/jair.1.19087.