Artificial intelligence


Artificial intelligence (AI) is a somewhat vague concept, but answering three questions can narrow down what is and is not AI:[1]:12-13

  1. "…does the task require creative effort or training for a human to perform?"
  2. "Does the behavior of the system directly specified in code by the developer or did it indirectly emerge, say by learning from examples or searching through a database?"
  3. Does the system make decisions more or less autonomously and possess some degree of flexibility and adaptability to the environment?

Because of question #2 (the system's behavior emerges indirectly rather than being fixed in code, so there is no single predetermined result), many AI programs rely on Bayesian probability networks.[2]
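As a minimal sketch of how such a network answers probabilistic queries (using the textbook rain/sprinkler/wet-grass example; all probabilities here are invented for illustration and do not come from any particular system):

```python
# Toy Bayesian network (the textbook rain/sprinkler/wet-grass example).
# All numbers are made up for illustration; real systems learn far
# larger networks from data.

P_RAIN = 0.2        # P(Rain)
P_SPRINKLER = 0.3   # P(Sprinkler); independent of rain in this toy model
P_WET = {           # P(WetGrass | Rain, Sprinkler)
    (True, True): 0.99,
    (True, False): 0.90,
    (False, True): 0.85,
    (False, False): 0.01,
}

def joint(rain: bool, sprinkler: bool, wet: bool) -> float:
    """Joint probability from the network's factored form."""
    p = P_RAIN if rain else 1 - P_RAIN
    p *= P_SPRINKLER if sprinkler else 1 - P_SPRINKLER
    p_wet = P_WET[(rain, sprinkler)]
    return p * (p_wet if wet else 1 - p_wet)

# Inference by enumeration: P(Rain | WetGrass = True).
numer = sum(joint(True, s, True) for s in (True, False))
denom = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(f"P(rain | wet grass) = {numer / denom:.3f}")  # ≈ 0.469
```

The point is that the program's output is a probability updated from evidence, not a fixed, hand-coded answer.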

There are many types of AI, but most AI programs fall into two general categories, generative and predictive. Generative AI encompasses chatbots and image generators, whereas predictive AI encompasses programs that try to predict future outcomes and are used in decision making.[1]:2 Each type has its own set of potential problems, but predictive AI suffers from a replication crisis, with published results that other researchers cannot reproduce, and from products that often do not work as claimed.[1]:9-12[3][4][5][6]

The history of the field of AI research spans great dreams and more than half a century of failure to realize "strong AI" (an artificial mind of more general capability).[note 1] Eventually, research and development refocused on the productive goal of making systems each with some specific skill, such as analyzing data to classify objects, navigating robotic equipment, or strategic decision-making within the framework of a particular "game" (e.g. chess) or situation. In the early 2020s, the chatbot (earlier mostly a toy) was reinvented on the foundation of the large language model, producing something with a greater range of skills, yet still far from the old dream of strong AI.

Older AI ambitions were often rather anthropocentric, e.g. the striving for an artificial brain which may replace the human organ. Anthropocentric thinking also underlies the old Turing test proposed by Alan Turing for evaluating a device as intelligent: (roughly) if a conversation with the device cannot be differentiated from a similar conversation with a human being, then the device can be called intelligent. Such a test is based on the response of an audience to a performance, much as an audience may respond to a stage magician, and as such tests the discernment of the audience more than the device.

Strong AI[edit]

Artificial general intelligence (AGI), also called strong AI,[7][8] is a hypothetical type of artificial intelligence that would not be limited to a specific task (as opposed to current AI, which is called specialized or "weak"), but would rather possess a general capability of thinking, learning, and improving much like an organic mind (though not necessarily working like an organic brain). It differs from a collection of specialized weak AIs, which is what e.g. Amazon Alexa and Siri are; intelligent, human-like marketeers can bundle as many such specialized tools as they want into a package, analogously to the parts of a Swiss army knife, without the package gaining the ability to develop new tools in a self-directed way.

With the generative AI boom of the 2020s and systems such as ChatGPT, built on large language models, confusion about the meaning of strong AI or AGI has spread, allegedly deliberately on the part of some AI vendors, who hope to be able to more easily claim having achieved AGI.[9] For example, AGI has at times been defined by ChatGPT maker OpenAI as an AI surpassing humans in a majority of economically valuable tasks – so that if it sells and can profitably replace many types of human workers, it counts as AGI. This need not imply a fluid intelligence, and could hypothetically be achieved through the machine-learning version of large-scale rote memorization and brute force – particularly if commercial use values quantity and superhuman speed above quality and originality in arts and intellectual crafts where applicable.

The older, more established AGI idea of a "full artificial mind" (often accompanied by some kind of artificial body) is a basis of transhumanism and singularity lore. It is also commonly portrayed in science fiction in diverse ways. Without it, neither the wondrous things transhumanists often expect to happen in the future nor the fearsome cybernetic revolts (robot uprisings and AI doomsday scenarios) they sometimes warn about and many sci-fi works include will be possible.

Its possibility[edit]

Despite immense amounts of money, research, and a broad range of specialized or "weak" AI products having been created, a general artificial intelligence — a sentient computer, capable of initiative, general reasoning, and seamless human interaction — has yet to come to fruition. (Some argue that a sentient computer might be more appropriately referred to as artificial consciousness than as artificial intelligence.) The boom in generative AI, however, pushes stochastic machine imitation to its limits,[note 2] showing that greater quantity in model size and training data leads to broader canned skills. This has led to new debates about how to distinguish the presence or absence of intelligence, and about the relation of such canned skills to general intelligence, with skepticism expressed by a broad range of researchers, including in cognitive science.[11]

In 2023, some researchers argued that the most powerful LLMs already constituted AGI.[12] Yet, as of 2024, their claim that LLMs can be competent at nearly any human information task is still very much lacking in evidence: LLMs have trouble with questions such as what the word "it" in a sentence refers to, and otherwise evince a lack of abstract thought. Beyond the specific points such "LLMs lead to AGI" advocates make, there may be larger disagreements between them and their critics about human intelligence, and intelligence in general and how it works, than about the AI itself. It would not be the first time such a gap in understanding led to a failure to deliver on the AGI front after AI researchers assumed they were nearly there.

Hubert Dreyfus's critique of artificial intelligence research, made back in the era when AI research tried to create AGI through symbol-manipulation systems, has been especially enduring.[13][14] Dreyfus does not explicitly deny the possibility of strong AI, but asserts that the fundamental assumptions of AI researchers at the time were either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Maurice Merleau-Ponty, it was largely ignored (and lampooned) at the time of its arrival. However, as the fantastic predictions of early AI researchers continually failed to pan out (predictions which included the solution to all philosophical problems!), his critique has largely been vindicated, and even incorporated into modern AI research. But this has arguably only happened piecemeal, problem by problem, and in response to the problems rather than in response to Dreyfus.[15]

Some commentators question more categorically whether a computer can even qualify as intelligent in principle. John Searle proposed his "Chinese room" thought experiment to demonstrate that a computer program merely shuffles symbols around according to simple rules of syntax, without the program ever obtaining any semantic grasp of what the symbols really mean.[16] Proponents of "strong AI", who believe an awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument. Ultimately, Searle's argument seems inconclusive.

There are also woo objections to the possibility of strong AI, or at any rate objections on unfalsifiable grounds. These can, for example, be religious, based on ideas of quantum consciousness, or espouse some notion that biology (or maybe humanity in particular) is special. Some LLM and AGI hypers tend to lump all critics into this category.

Transhumanist dreams[edit]

Dreams of replacing the human brain with a device equal to or greater in capacity than it are central to transhumanism, a staple of science fiction, and have long accompanied ideas of strong AI. Much like strong AI in general, such dreams may or may not become possible with future technology, going by what is known today. This is in contrast to functions separate from the need for strong AI, e.g. prosthetic limbs and implants related to sensory processing, where some types are known to be possible or are even in use by people.

Brains and cognition are not currently well understood, and the scale of computation needed for an artificial brain is unknown. The power consumption of computers, however, invites speculation that an artificial brain would need orders of magnitude more power than its biological equivalent. The human brain consumes about 20 W of power (and most of that seems spent just keeping it permanently up and running, with some energy simply leaked away uselessly[17]), whereas a current supercomputer may use as much as 1 MW, roughly 50,000 times more, suggesting that AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, using the analogy of early attempts to construct flying machines modeled after birds.[18][19]
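The arithmetic behind that power comparison is straightforward:

```latex
\frac{P_{\text{supercomputer}}}{P_{\text{brain}}} \approx \frac{1\,\mathrm{MW}}{20\,\mathrm{W}} = \frac{10^{6}\,\mathrm{W}}{20\,\mathrm{W}} = 5 \times 10^{4}
```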

An artificial brain would not fall under the current biological definition of life any more than a kidney-dialysis machine does. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics.

Machine learning[edit]

See the main article on this topic: Machine learning

In the field of artificial intelligence, machine learning is a set of techniques that make it possible to train a computer model so that its behavior matches given sample inputs and expected outputs. For example, a machine-learning model can recognize objects in images or perform other complex tasks that would be too complicated to describe with traditional procedural code.
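A minimal sketch of the idea, assuming nothing beyond standard Python: a single-parameter model is fitted to sample input/output pairs by gradient descent, the same principle real systems apply at enormously larger scale (the sample data here is invented for the example):

```python
# Minimal machine learning: fit y ≈ w * x to sample (input, output) pairs
# by gradient descent on the mean squared error. One parameter instead of
# millions, but the training principle is the same.

samples = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w = 0.0    # the single trainable parameter
lr = 0.01  # learning rate (step size)

for _ in range(1000):
    # d/dw of mean((w*x - y)^2) over the samples
    grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
    w -= lr * grad  # step downhill on the error surface

print(f"learned w = {w:.2f}")  # ≈ 2.03: the trend implied by the samples
```

Nothing in the loop spells out "the answer is about 2"; the behavior emerges from the examples, which is exactly the distinction drawn in question #2 above.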

Large language models (LLMs) are neural networks for language modeling that are large, with at least tens of millions of "parameters" (the network's learned weights), and in current flagship systems billions of them. These include the model behind ChatGPT, which sparked a great, renewed interest in chatbots, reinvented to be based on LLMs.
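Real LLMs are transformer networks trained on huge corpora; as a loose toy illustration of the shared underlying idea, predicting the next token from probabilities estimated over training text (emphatically not how a transformer works internally), here is a bigram model over a made-up corpus:

```python
import random

# Bigram "language model" over a tiny invented corpus: count which word
# follows which, then sample continuations in proportion to those counts.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    followers = counts.setdefault(prev, {})
    followers[nxt] = followers.get(nxt, 0) + 1

word = "the"
output = [word]
for _ in range(8):
    followers = counts.get(word)
    if not followers:  # dead end: this word was never seen with a successor
        break
    word = random.choices(list(followers), weights=list(followers.values()))[0]
    output.append(word)

print(" ".join(output))  # e.g. "the dog sat on the mat and the cat"
```

The "stochastic parrot" label (see note 2) gets at this: output assembled from statistical patterns in the training text, with no model of meaning behind it.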

Risks of AI[edit]

Some science fiction has highlighted the risk of an AI takeover of human society. Most risks of this type are unrealistic and, even if they were not, are too remote from today's society to be worth worrying about (though Eliezer Yudkowsky disagrees and will not tire of letting people know it).

However, there are also many more prosaic potential downsides of AI that may necessitate cautious use of the technology, changes in regulations, or political action – way before hypothetical future AI technology reaches Terminator-like levels of general intelligence. These include:

  • Proliferation of junk information, harmful messages, or works infringing on rights. From spam, to plagiarism and forgeries, to psychological warfare, various machine learning approaches can be used to produce remarkable quality, remarkable quantity, or even both, of bogus information of some chosen kind – text, images, audio, videos, etc. Deepfakes are fabrications made to have convincing qualities, e.g. using a generative adversarial network (GAN). In terms of quantity, LLMs like ChatGPT led to new fears of both mass plagiarism and automated disinformation sweeping the world.
    • AIs programmed to learn from any internet users who happen to interact with them, picking up racism, sexism, and misinformation, and parroting it – this has happened with older types of chatbot.[20] Similarly, generative AIs like LLMs soak up expressions of bigotry, and other biases and flaws in their data sets (sourced in large part from the web), when the model is trained, and imitate them. While work can be done on a model to reduce the issue, large models are inscrutable and not all problems can reasonably be weeded out.[10]
    • Copyright issues around art produced by generative AIs, both because their training datasets include copyrighted works used without asking the artists, and because it is unclear who owns the copyright of a work generated this way[21][note 3]
  • AIs used by social-media systems and platforms, for example YouTube, creating filter bubbles – potentially inadvertently increasing political polarisation and extremism.
  • Successive waves of technological unemployment. Self-driving cars and trucks may eventually take over the work of professional drivers.[note 4] Various white collar workers who handle boilerplate text, or chat with people, may be replaced by chatbots.[note 5] Online "influencers" can be replaced by generative AI, creating artificial social media personalities.[22] Business leaders also dream of replacing software developers with chatbots, though without AGI this cannot go far.[note 6] New jobs involving the use and supervision of AI tools can be expected, but how many? No one can yet be sure how employment will be affected on the whole. Some, notably including 2020 US Presidential candidate Andrew Yang, have advocated a universal basic income to act as a buffer against technological unemployment, though others have argued instead for a return to the 20th-century idea of government full-employment programs.
  • Some Tesla fans have theorised that self-driving cars will, in the not-too-distant future, also mean that new cars become unaffordable for all but the very wealthy, as car manufacturers focus on selling highly profitable and expensive autonomous vehicles to Uber and Lyft, or simply start their own autonomous taxi operations (as Tesla plans to do).
  • AI algorithms inadvertently making racist or sexist decisions about matters such as mortgages and other loans, or even about criminal-justice matters such as crime detection, sifting through large amounts of evidence, bail or probation decisions – this has also already happened.[23]
  • Black-box AIs making decisions that affect people, but whose reasoning is completely opaque and essentially undiscoverable by customers, judges and juries, and even by the organisations that own them.
    • The logical implication is that these algorithms could be hacked for pecuniary advantage, or to literally "get out of jail free", and possibly no one would even notice.
    • Also, combine this with the tendency of some politicians and bureaucrats with little understanding of technology to simply say "computer says no" when confronted with disagreements about computer-generated decisions, even in the absence of any machine intelligence at all, and you have a recipe for special interests to achieve "regulatory capture" in a whole new way.
  • Environmental concerns, given how energy-intensive generative AIs are even for trivial tasks such as chatting or generating images, especially as artificial intelligence becomes more widely adopted.[24][25]
  • Most disturbingly of all, flying drones controlled by autonomous AIs could be used by rogue states or terrorist groups (or even by ordinary states in war scenarios) to injure or assassinate individuals, or even to target large groups of people with pinpoint accuracy, like political opponents of an authoritarian leader, without it necessarily being traceable back to the people giving the orders. This dystopian scenario was vividly explored in a disturbing video titled "Slaughterbots", produced by the Campaign to Stop Killer Robots (yes, they are actually called that).[26]

Stephen Hawking's view[edit]

In a humorous interview with John Oliver, Stephen Hawking described AI as potentially dangerous. His final Reddit "Ask Me Anything" was also almost entirely focused on his views on AI.[27]

See also[edit]

For those of you in the mood, RationalWiki has a fun article about Artificial intelligence.
For those of you in the mood, RationalWiki has a fun article about Artificial stupidity.

Want to read this in another language?[edit]

If you are looking for this article in Portuguese, see Inteligência artificial.


Further reading[edit]

  • George Johnson, Machinery of Mind: Inside the New Science of Artificial Intelligence. Time Books, 1986. ISBN 0812912292
  • Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford University Press, 1989. ISBN 0198519737
  • Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1999. ISBN 0465026567

Notes[edit]

  1. Strong AI has been dreamed of and vaguely imagined to be possible throughout the history of the electronic computer.
  2. Emily M. Bender et al. have labeled probabilistic imitation machines that parrot things with random variation without understanding as "stochastic parrots".[10]
  3. See, for example, this AI-generated picture of the god Cernunnos and the garbled watermark at bottom left.
  4. Self-driving vehicle technology is slowly improving, though as of the early 2020s the hype remains overly optimistic about the rate of progress.
  5. In roles of technical or work support, psychological counseling, etc., LLM chatbots are poor substitutes for skilled human guidance, but they're cheaper to operate, making the transition profitable.
  6. Human developers may use LLM assistants to speed up some tasks, but can't generally be replaced by LLMs for the thinking involved in design, innovation, and security expertise, to name some key areas.

References[edit]

  1. 1.0 1.1 1.2 AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference by Arvind Narayanan & Sayash Kapoor (2024) Princeton University Press. ISBN 069124913X.
  2. An Overview of Bayesian Networks in AI. Turing.
  3. Transparency and reproducibility in artificial intelligence by Benjamin Haibe-Kains et al. (2020) Nature 586:E14-E18. doi:10.1038/s41586-020-2766-y.
  4. The Fallacy of AI Functionality by Inioluwa Deborah Raji et al. (2022) 2022 ACM Conference on Fairness, Accountability, and Transparency, Seoul, Republic of Korea, June 2022. doi:10.1145/3531146.3533158.
  5. AI is wrestling with a replication crisis: Tech giants dominate research but the line between real breakthrough and product showcase can be fuzzy. Some scientists have had enough. by Will Douglas Heaven (November 12, 2020) MIT Technology Review.
  6. Leakage and the reproducibility crisis in machine-learning-based science by Sayash Kapoor and Arvind Narayanan (2023) Patterns 4(9). doi:10.1016/j.patter.2023.100804.
  7. Long Live AI by Ray Kurzweil (15 August 2005) Forbes. Ray Kurzweil described strong AI as "machine intelligence with the full range of human intelligence."
  8. Advanced Human Intelligence. Responsible Nanotechnology, 10 August 2005.
  9. Why everyone seems to disagree on how to define Artificial General Intelligence by Mark Sullivan (18 October 2023) Fast Company.
  10. 10.0 10.1 On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021) Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. ISBN 978-1-4503-8309-7. doi:10.1145/3442188.3445922.
  11. Clark, Lindsay (4 July 2023). "Artificial General Intelligence remains a distant dream despite LLM boom". The Register.
  12. Artificial General Intelligence Is Already Here by Blaise Agüera y Arcas & Peter Norvig (October 10, 2023) Noema.
  13. Hubert L. Dreyfus’s Critique of Classical AI and its Rationalist Assumptions by Setargew Kenaw (2008) Minds & Machines 18:227–238.
  14. What Computers Can't Do by Hubert Dreyfus (1972) MIT Press. ISBN 978-0-06-090613-9
  15. Machines Who Think: A Personal Inquiry into the History and Prospects of Artificial Intelligence by Pamela McCorduck (2004, 2nd ed.) A.K. Peters. ISBN 1-56881-205-1.
  16. Minds, brains, and programs by John Searle (1980) The Behavioral and Brain Sciences 3:417-457.

    "Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

  17. We finally know why the brain uses so much energy
  18. Goertzel, Ben (December 2007). "Human-level artificial general intelligence and the possibility of a technological singularity: a reaction to Ray Kurzweil's The Singularity Is Near, and McDermott's critique of Kurzweil". Artificial Intelligence 171 (18, Special Review Issue): 1161–1173. Retrieved April 1, 2009. 
  19. Fox and Hayes quoted in Nilsson, Nils (1998), Artificial Intelligence: A New Synthesis, p. 581. Morgan Kaufmann Publishers.
  20. Microsoft silences its new A.I. bot Tay, after Twitter users teach it racism by Sarah Perez (March 24, 2016) TechCrunch.
  21. Who owns AI art? by Adi Robertson (Nov 15, 2023) The Verge.
  22. AI-created “virtual influencers” are stealing business from humans by Christina Criddle (2023-12-29) Financial Times (on Ars Technica).
  23. Rise of the racist robots – how AI is learning all our worst impulses by Stephen Buranyi (8 Aug 2017) The Guardian.
  24. The AI Boom Could Use a Shocking Amount of Electricity by Lauren Leffer (October 13, 2023) Scientific American.
  25. Generating Just a Few AI Images Consumes As Much Energy As Charging Your Smartphone
  26. Slaughterbots by Stop Autonomous Weapons (Nov 12, 2017) YouTube.
  27. u/Prof-Stephen-Hawking: Comments (Jul 3, 2015) Reddit.

Licensed under CC BY-SA 3.0 | Source: https://rationalwiki.org/wiki/Artificial_intelligence