Sentience is the simplest or most primitive form of cognition, consisting of a conscious awareness of stimuli without association or interpretation.[1] The word was coined by philosophers in the 1630s for the ability to feel, derived from the Latin sentiens (feeling),[2] to distinguish it from the ability to think (reason).[citation needed]
In modern Western philosophy, sentience is the ability to experience sensations. In different Asian religions, the word "sentience" has been used to translate a variety of concepts. In science fiction, the word "sentience" is sometimes used interchangeably with "sapience", "self-awareness", or "consciousness".[3]
Some writers differentiate between the mere ability to perceive sensations, such as light or pain, and the ability to perceive emotions, such as fear or grief. The subjective awareness of experiences by a conscious individual is known as qualia in Western philosophy.[3]
In philosophy, different authors draw different distinctions between consciousness and sentience. According to Antonio Damasio, sentience is a minimalistic way of defining consciousness, which otherwise commonly describes sentience plus further features of the mind, such as creativity, intelligence, sapience, self-awareness, and intentionality (the ability to have thoughts about something). These further features of consciousness may not be necessary for sentience, which is the capacity to feel sensations and emotions.[4]
According to Thomas Nagel in his paper "What Is It Like to Be a Bat?", consciousness can refer to the ability of any entity to have subjective perceptual experiences, or as some philosophers refer to them, "qualia"—in other words, the ability to have states that it feels like something to be in.[5] Some philosophers, notably Colin McGinn, believe that the physical process causing consciousness to happen will never be understood, a position known as "new mysterianism." They do not deny that most other aspects of consciousness are subject to scientific investigation but they argue that qualia will never be explained.[citation needed] Other philosophers, such as Daniel Dennett, argue that qualia is not a meaningful concept.[6]
Regarding animal consciousness, the Cambridge Declaration on Consciousness, publicly proclaimed on 7 July 2012 at Cambridge University, states that consciousness requires specialized neuroanatomical, neurochemical, and neurophysiological substrates, which in more complex organisms manifest as a central nervous system.[a] Accordingly, only organisms that possess these substrates, all within the animal kingdom, are said to be conscious.[7]
David Chalmers argues that sentience is sometimes used as shorthand for phenomenal consciousness, the capacity to have any subjective experience at all, but sometimes refers to the narrower concept of affective consciousness, the capacity to experience subjective states that have affective valence (i.e., a positive or negative character), such as pain and pleasure.[8]
While it has traditionally been assumed that sentience and sapience are, in principle, independent of each other, that assumption has been criticized. One criticism concerns recognition paradoxes, one example of which is that an entity that cannot distinguish a spider from a non-spider cannot be arachnophobic. More generally, it is argued that since an emotional response cannot attach to stimuli that are not recognized, emotions cannot exist independently of a cognition capable of recognition. The claim that precise recognition exists as specific attention to some details in a modular mind is criticized on two grounds. The first is data loss: a small system of disambiguating synapses in a module physically cannot make distinctions as precise as a larger synaptic system encompassing the whole brain. The second is energy loss: maintaining one motivational system with enough built-in cognition to recognize anything, alongside a separate cognitive system for making strategies, would cost more energy than integrating it all in one system that uses the same synapses. Data losses inherent in any information transfer from a more precise system to a less precise one are also argued to make it impossible for an imprecise system to use a more precise system as an "emissary", since the less precise system could not tell whether the output of the more precise system was in its interest or not.[9][10]
The original studies by Ivan Pavlov showed that conditioned reflexes in human children are more discriminating than those in dogs: children salivated only at ticking frequencies very close to those at which food had been served, while dogs drooled at a wider range of frequencies. These studies have been followed up in recent years with comparative studies on more species. Both brain size and brain-wide connectivity have been shown to contribute to making perception more discriminating, as predicted by the theory of a brain-wide perception system but not by the theory of separate systems for emotion and cognition.[11]
Eastern religions including Hinduism, Buddhism, Sikhism, and Jainism recognise non-humans as sentient beings.[12] The term sentient beings is translated from various Sanskrit terms (jantu, bahu jana, jagat, sattva) and "conventionally refers to the mass of living things subject to illusion, suffering, and rebirth (Saṃsāra)".[13] In some forms of Buddhism plants, stones and other inanimate objects are considered to be 'sentient'.[14][15] In Jainism many things are endowed with a soul, jīva, which is sometimes translated as 'sentience'.[16][17] Some things are without a soul, ajīva, such as a chair or spoon.[18] There are different rankings of jīva based on the number of senses it has. Water, for example, is a sentient being of the first order, as it is considered to possess only one sense, that of touch.[19]
In Jainism and Hinduism, this is related to the concept of ahimsa, non-violence toward other beings.[citation needed]
Sentience in Buddhism is the state of having senses. In Buddhism, there are six senses, the sixth being the subjective experience of the mind. Sentience is simply awareness prior to the arising of Skandha. Thus, an animal qualifies as a sentient being. According to Buddhism, sentient beings made of pure consciousness are possible. In Mahayana Buddhism, which includes Zen and Tibetan Buddhism, the concept is related to the Bodhisattva, an enlightened being devoted to the liberation of others. The first vow of a Bodhisattva states, "Sentient beings are numberless; I vow to free them."
Sentience has been a central concept in the animal rights movement, tracing back to the well-known writing of Jeremy Bentham in An Introduction to the Principles of Morals and Legislation: "The question is not, Can they reason? nor, Can they talk? but, Can they suffer?"
Richard D. Ryder defines sentientism broadly as the position according to which an entity has moral status if and only if it is sentient.[20] In David Chalmers's more specific terminology, Bentham is a narrow sentientist, since his criterion for moral status is not only the ability to experience any phenomenal consciousness at all, but specifically the ability to experience conscious states with negative affective valence (i.e. suffering).[8] Animal welfare and rights advocates often invoke similar capacities. For example, the documentary Earthlings argues that while animals do not have all the desires, or the capacity to comprehend, that humans do, they do share the desires for food and water, shelter and companionship, freedom of movement and avoidance of pain.[21][b]
Animal-welfare advocates typically argue that any sentient being is entitled, at a minimum, to protection from unnecessary suffering[citation needed], though animal-rights advocates may differ on which rights (e.g., the right to life) are entailed by simple sentience. Sentiocentrism describes the theory that sentient individuals are the center of moral concern.
Gary Francione also bases his abolitionist theory of animal rights, which differs significantly from Peter Singer's, on sentience. He asserts that, "All sentient beings, humans or nonhuman, have one right: the basic right not to be treated as the property of others."[22]
Andrew Linzey, founder of the Oxford Centre for Animal Ethics in England, considers recognising animals as sentient beings as an aspect of his Christianity. The Interfaith Association of Animal Chaplains encourages animal ministry groups to adopt a policy of recognising and valuing sentient beings.[citation needed]
In 1997 the concept of animal sentience was written into the basic law of the European Union. The legally binding protocol annexed to the Treaty of Amsterdam recognises that animals are "sentient beings", and requires the EU and its member states to "pay full regard to the welfare requirements of animals".
Digital sentience (or artificial sentience) means the sentience of artificial intelligences. The question of whether artificial intelligences can be sentient is controversial.[23]
The AI research community does not consider sentience (that is, the "ability to feel sensations") an important research goal, unless it can be shown that consciously "feeling" a sensation can make a machine more intelligent than merely receiving input from sensors and processing it as information. Stuart Russell and Peter Norvig wrote in 2021: "We are interested in programs that behave intelligently. Individual aspects of consciousness – awareness, self-awareness, attention – can be programmed and can be part of an intelligent machine. The additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[24] Indeed, leading AI textbooks do not mention "sentience" at all.[25]
Digital sentience is of considerable interest to the philosophy of mind. Functionalist philosophers consider that sentience is about "causal roles" played by mental states, which involve information processing. In this view, the physical substrate of this information processing does not need to be biological, so there is no theoretical barrier to the possibility of sentient machines.[26] According to type physicalism, however, the physical constitution is important; and depending on the types of physical systems required for sentience, it may or may not be possible for certain types of machines (such as electronic computing devices) to be sentient.[27]
The discussion of the alleged sentience of artificial intelligence was reignited in 2022 by claims that Google's LaMDA (Language Model for Dialogue Applications) artificial intelligence system was "sentient" and had a "soul."[28] LaMDA is an artificial intelligence system that creates chatbots – AI programs designed to converse with humans – by gathering vast amounts of text from the internet and using algorithms to respond to queries as fluidly and naturally as possible. Transcripts of conversations between scientists and LaMDA show that the system excels at this, answering challenging questions about the nature of emotions, generating Aesop-style fables on cue, and even describing its alleged fears.[29]
In 2022, philosopher David Chalmers gave a talk on whether large language models (LLMs) can be conscious, encouraging more research on the subject. He said that it is very plausible that the training of AI models can cause a world model to emerge in them. He personally estimated the chances that the most advanced LLMs are conscious at less than 10% in 2022 and more than 20% in 2032, reaching around 50% if an LLM attains "virtual perception, language, action, unified agents" exceeding the cognitive level of a fish. He stated that "If you see conscious A.I. coming somewhere down the line, then that's going to raise a whole new important group of extremely snarly ethical challenges with, you know, the potential for new forms of injustice".[30]
Nick Bostrom considers that while LaMDA is probably not sentient, being very sure of it would require understanding how consciousness works, having access to unpublished information about LaMDA's architecture, and determining how to apply a philosophical theory of consciousness to the machine.[31] He also said of LLMs that "it's not doing them justice to say they're simply regurgitating text", noting that they "exhibit glimpses of creativity, insight and understanding that are quite impressive and may show the rudiments of reasoning". He thinks that "sentience is a matter of degree".[23]
The sentience quotient concept was introduced by Robert A. Freitas Jr. in the late 1970s.[32] It quantifies sentience as a relationship between a system's total information processing rate and its mass, determined by the processing rate of each individual unit (such as a neuron), the weight/size of a single unit, and the total number of processing units. It was proposed as a measure for the sentience of all living beings and computers, from a single neuron up to a hypothetical being at the theoretical computational limit of the entire universe. On a logarithmic scale it runs from −70 up to +50.
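Freitas's quotient is commonly stated as SQ = log₁₀(I/M), where I is the information processing rate in bits per second and M is the mass in kilograms. The short sketch below illustrates the arithmetic; the brain figures used are rough order-of-magnitude assumptions for illustration, not values taken from Freitas's original paper.

```python
import math

def sentience_quotient(bits_per_second: float, mass_kg: float) -> float:
    """Freitas's sentience quotient: SQ = log10(I / M),
    with I in bits per second and M in kilograms."""
    return math.log10(bits_per_second / mass_kg)

# Rough, illustrative assumptions (not Freitas's exact figures):
# a human brain processing on the order of 1e13 bits/s, with a mass of ~1.5 kg.
human_sq = sentience_quotient(1e13, 1.5)
print(f"Human brain SQ ≈ {human_sq:.1f}")  # lands near +13, the commonly
                                           # cited human value on this scale
```

Because the scale is logarithmic, each unit of SQ corresponds to a tenfold difference in processing rate per unit mass, which is how the scale can span a range as wide as −70 to +50.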
a. ^ Quote: "The absence of a neocortex does not appear to preclude an organism from experiencing affective states. Convergent evidence indicates that non-human animals have the neuroanatomical, neurochemical, and neurophysiological substrates of conscious states along with the capacity to exhibit intentional behaviors. Consequently, the weight of evidence indicates that humans are not unique in possessing the neurological substrates that generate consciousness. Non-human animals, including all mammals and birds, and many other creatures, including octopuses, also possess these neurological substrates."[7]
b. ^ Quote: "Granted, these animals do not have all the desires we humans have; granted, they do not comprehend everything we humans comprehend; nevertheless, we and they do have some of the same desires and do comprehend some of the same things. The desires for food and water, shelter and companionship, freedom of movement and avoidance of pain."[21]
Original source: https://en.wikipedia.org/wiki/Sentience.