Artificial intelligence (AI) is a somewhat vague concept, but answering three questions can narrow what is and is not AI:[1]:12-13
Because of question #2 (that AI does not produce a fixed result), AI computer programs rely on Bayesian probability networks.[2]
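To make that probabilistic flavour concrete, here is a minimal sketch in plain Python applying Bayes' theorem; the spam-filter framing and all of the numbers are invented for illustration and are not taken from the cited source. The point is that the program outputs an updated degree of belief rather than a fixed result.

```python
# Minimal sketch (plain Python, no external libraries): a two-variable example
# of Bayesian updating. The spam-filter scenario and the probabilities below
# are invented for illustration.

# Prior probability that an incoming email is spam.
p_spam = 0.3

# Conditional probabilities of seeing the word "offer" in the email.
p_word_given_spam = 0.7
p_word_given_ham = 0.1

# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)
p_spam_given_word = p_word_given_spam * p_spam / p_word

print(f"P(spam | 'offer' present) = {p_spam_given_word:.3f}")  # ~0.750
```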
There are many types of AI, but most AI programs fall into two general categories, generative and predictive. Generative AI encompasses chatbots and image generators, whereas predictive AI encompasses programs that try to predict future outcomes and are used in decision making.[1]:2 Each type has its own set of potential problems, but predictive AI in particular faces a replication crisis (published results that other researchers cannot reproduce) and a track record of products that often do not work as claimed.[1]:9-12[3][4][5][6]
The history of the field of AI research spans great dreams, and more than half a century of failure to realize "strong AI" (an artificial mind of more general capability).[note 1] Eventually, research and development refocused on the productive goal of making systems with some specific skill each, such as analyzing data to classify objects, navigating robotic equipment, strategic decision-making within the framework of a particular "game" (e.g. chess) or situation, etc. In the early 2020s, the chatbot (earlier mostly a toy) was reinvented on the foundation of the large language model, producing something with a greater range of skills, yet still far from the old dream of strong AI.
Older AI ambitions were often rather anthropocentric, e.g. the striving for an artificial brain which may replace the human organ. Anthropocentric thinking also underlies the old Turing test proposed by Alan Turing for evaluating a device as intelligent: (roughly) If a conversation with the device cannot be differentiated from a similar conversation with a human being then the device can be called intelligent. Such a test is based on the response of an audience to a performance, similarly to how an audience may respond to a stage magician, and as such tests the discernment of the audience more than the device.
Artificial general intelligence (AGI), also called strong AI,[7][8] is a hypothetical type of artificial intelligence that would not be limited to a specific task (as opposed to current AI, which is called specialized or "weak"), but would rather possess a general capability of thinking, learning, and improving much like an organic mind (though not necessarily working like an organic brain). It differs from a collection of specialized weak AIs, which is what e.g. Amazon Alexa and Siri are; intelligent human-like marketeers can include as many such specialized tools as they want in a package, analogously to the parts of a Swiss army knife, without that package gaining the ability to develop new tools in a self-directed way.
With the generative AI boom of the 2020s and systems such as the large language model-based ChatGPT, confusion about the meaning of strong AI or AGI has spread, allegedly deliberately on the part of some AI vendors, who hope to be able to more easily claim having achieved AGI.[9] For example, AGI has at times been defined by ChatGPT maker OpenAI as an AI surpassing humans in a majority of economically valuable tasks – so that if it sells and can profitably replace many types of human workers, it counts as AGI. This need not imply a fluid intelligence, and could hypothetically be achieved through the machine-learning version of large-scale rote memorization and brute force – particularly if commercial use values quantity and superhuman speed above quality and originality in arts and intellectual crafts where applicable.
The older, more established AGI idea of a "full artificial mind" (often accompanied by some kind of artificial body) is a basis of transhumanist and singularity lore. It is also commonly portrayed in science fiction in diverse ways. Without it, neither the wondrous things transhumanists often expect to happen in the future, nor the fearsome cybernetic revolts (robot uprisings and AI doomsday scenarios) they sometimes warn about and many sci-fi works depict, would be possible.
Despite immense amounts of money, research, and a broad range of specialized or "weak" AI products having been created, a general artificial intelligence — a sentient computer, capable of initiative, general reasoning, and seamless human interaction — has yet to come to fruition. (Some argue that a sentient computer might be more appropriately referred to as artificial consciousness than as artificial intelligence.) The boom in generative AI, however, pushes stochastic machine imitation to its limits,[note 2] showing that sheer increases in model size and training data lead to a broader range of canned skills. This has led to new debates about how to distinguish the presence or absence of intelligence, and about the relation of such canned skills to general intelligence, with skepticism expressed by a broader range of researchers, including in cognitive science.[11]
In 2023, some researchers argued that the most powerful LLMs already constituted AGI.[12] Yet, as of 2024, their claim that LLMs can be competent at nearly any human information task is still very much lacking in evidence, as LLMs have trouble with questions such as what the word "it" in a sentence refers to, and otherwise evince a lack of abstract thought. It may also be that, beyond the points made by such "LLMs lead to AGI" debaters, there are larger gaps in understanding between them and their critics than the questions about the AI itself, namely questions about human intelligence, and intelligence in general, and how it works. It would not be the first time such a gap in understanding has led to a failure to deliver on the AGI front after AI researchers assumed they were nearly there.
Hubert Dreyfus's critique of artificial intelligence research, made back in the era when AI research tried to create AGI through symbol-manipulation systems, has been especially enduring.[13][14] Dreyfus does not explicitly deny the possibility of strong AI, but asserts that the fundamental assumptions of AI researchers at the time were either baseless or misguided. Because Dreyfus's critique draws on philosophers such as Heidegger and Maurice Merleau-Ponty, it was largely ignored (and lampooned) at the time of its arrival. However, as the fantastic predictions of early AI researchers continually failed to pan out (which included the solution to all philosophical problems!), his critique has largely been vindicated, and even incorporated into modern AI research. But this has arguably only happened piecemeal, problem by problem, and in response to the problems rather than in response to Dreyfus.[15]
Some commentators question more categorically whether or not a computer can even qualify in principle. John Searle proposed his "Chinese room" thought experiment to demonstrate that a computer program merely shuffles symbols around according to simple rules of syntax, but no semantic grasp of what the symbols really mean is obtained by the program.[16] Proponents of "strong AI", who believe an awareness can exist within a purely algorithmic process, have put forward various critiques of Searle's argument. Ultimately, Searle's argument seems inconclusive.
There are also woo objections to the possibility of strong AI, or at any rate objections on unfalsifiable grounds. These can, for example, be religious, rest on ideas of quantum consciousness, or hold that biology (or maybe humanity in particular) is somehow special. Some LLM and AGI hypers tend to lump all critics into this category.
Dreams of replacing the human brain with a device of equal or greater capacity are central to transhumanism, a staple of science fiction, and have long accompanied ideas of strong AI. Much like strong AI in general, such dreams may or may not become possible with future technology, going by what is known today. This is in contrast to functions that do not require strong AI, e.g. prosthetic limbs and implants related to sensory processing, some types of which are known to be possible or are already in use.
Brains and cognition are not currently well understood, and the scale of computation needed for an artificial brain is unknown. The power consumption of today's computers, however, invites speculation that an artificial brain would require orders of magnitude more energy than its biological equivalent. The human brain consumes about 20 W of power (most of which seems to go into simply keeping it permanently up and running, plus energy that is essentially leaked away uselessly[17]), whereas a current supercomputer may draw as much as 1 MW, roughly 50,000 times more, suggesting that AI may be a staggeringly energy-inefficient form of intelligence. Critics of brain simulation believe that artificial intelligence can be modeled without imitating nature, using the analogy of early attempts to construct flying machines modeled after birds.[18][19]
An artificial brain would not fall under the current biological definition of life any more than a kidney-dialysis machine. An example of a fictional character with this kind of prosthetic is Cyborg from the Teen Titans comics.
In the field of artificial intelligence, machine learning is a set of techniques that make it possible to train a computer model so that it behaves according to some given sample inputs and expected outputs. For example, machine learning can recognize objects in images or perform other complex tasks that would be too complicated to be described with traditional procedural code.
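As a deliberately tiny illustration of that idea, the following plain-Python sketch fits a two-parameter linear model to example input/output pairs by gradient descent; the temperature-conversion task, the learning rate, and the iteration count are invented for this example and are not drawn from any particular library or source.

```python
# Minimal sketch of supervised machine learning in plain Python: fit a linear
# model y = w*x + b to sample input/output pairs by gradient descent.
# The Celsius-to-Fahrenheit task is an invented toy example.

samples = [(0.0, 32.0), (10.0, 50.0), (20.0, 68.0), (37.0, 98.6), (100.0, 212.0)]

w, b = 0.0, 0.0          # model parameters, learned from the data
learning_rate = 1e-4

for _ in range(200_000):
    grad_w = grad_b = 0.0
    for x, y in samples:
        error = (w * x + b) - y          # prediction minus expected output
        grad_w += 2 * error * x / len(samples)
        grad_b += 2 * error / len(samples)
    w -= learning_rate * grad_w          # nudge parameters to reduce the error
    b -= learning_rate * grad_b

print(f"learned: y = {w:.2f}*x + {b:.2f}")        # approaches y = 1.80*x + 32.00
print(f"prediction for 25 C: {w * 25 + b:.1f} F")  # about 77.0
```

Real machine-learning systems work the same way in spirit, but with far more parameters, far larger training sets, and far more elaborate models than this toy fit.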
Large language models (LLMs) are neural networks for language modeling that are large (at least tens of millions of "parameters", the learned weights connecting their artificial neurons). These underlie ChatGPT, which sparked a great renewed interest in chatbots, now reinvented to be based on LLMs.
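For intuition about what "language modeling" means, here is a toy sketch that predicts the next word from bigram counts over a tiny invented corpus; it is emphatically not an LLM, which replaces such counting with a deep neural network holding millions to billions of learned parameters.

```python
# Toy sketch of language modeling: count word bigrams in a tiny corpus and
# predict the most likely next word. The corpus is an invented example.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the training corpus."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' (it followed "the" most often)
```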
Some science fiction has highlighted the risk of an AI takeover of human society. Most risks of this type are unrealistic and, even if they weren't, are too remote from today's society to be worth worrying about now (though Eliezer Yudkowsky disagrees and will not tire of letting people know it).
However, there are also many more prosaic potential downsides of AI that may necessitate cautious use of the technology, changes in regulations, or political action – way before hypothetical future AI technology reaches Terminator-like levels of general intelligence. These include:
In a humorous interview with John Oliver, Stephen Hawking referenced AI as potentially dangerous. His final Reddit "Ask Me Anything" was also almost entirely focused on his views on AI.[27]
"Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.