Perception is the process by which stimulation of the senses is translated into meaningful experience. The understanding of this process has long been the topic of debate by philosophers, who largely question the source and validity of human knowledge—the study now known as epistemology—and in more recent times by psychologists.
In contemporary psychology, perception is defined as the brain’s interpretation of sensory information so as to give it meaning. The cognitive sciences describe the process in more detail: Perception is the process of acquiring, interpreting, selecting, and organizing sensory information. Many cognitive psychologists hold that, as we move about in the world, we create a model of how the world works. That is, we sense the objective world, but our sensations map to percepts, and these percepts are provisional, in the same sense that scientific hypotheses are provisional.
The word perception comes from the Latin perceptio (stem perception-), from percipere, meaning "receiving, collecting, action of taking possession, apprehension with the mind or senses."[1]
Our perception of the external world begins with the senses, which lead us to generate empirical concepts representing the world around us, within a mental framework relating new concepts to preexisting ones. Perception takes place in the brain. Using sensory information as raw material, the brain creates perceptual experiences that go beyond what is sensed directly. Familiar objects tend to be seen as having a constant shape, even though the retinal images they cast change as they are viewed from different angles. This is because our perceptions have the quality of constancy, which refers to the tendency to sense and perceive objects as relatively stable and unchanging despite changing sensory stimulation and information.
Once we have formed a stable perception of an object, we can recognize it from almost any position, at almost any distance, and under almost any illumination. A white house looks like a white house by day or by night and from any angle. We see it as the same house. The sensory information may change as illumination and perspective change, but the object is perceived as constant.
The perception of an object as the same regardless of the distance from which it is viewed is called size constancy. Shape constancy is the tendency to see an object as the same no matter what angle it is viewed from. Color constancy is the inclination to perceive familiar objects as retaining their color despite changes in sensory information. Similarly, brightness constancy is the perception of brightness as the same, even though the amount of light reaching the retina changes. Size, shape, brightness, and color constancies help us to better understand and relate to the world. Psychophysical evidence suggests that without this ability, we would find the world very confusing.
Perception is categorized as internal and external: "Internal perception" ("interoception") tells us what is going on in our bodies. We can sense where our limbs are, whether we are sitting or standing; we can also sense whether we are hungry, or tired, and so forth. "External perception" or "sensory perception," ("exteroception"), tells us about the world outside our bodies. Using our senses of sight, hearing, touch, smell, and taste, we discover colors, sounds, textures, and so forth of the world at large.
Methods of studying perception range from essentially biological or physiological approaches, through psychological approaches, to the philosophy of mind and empiricist epistemology, such as that of David Hume, John Locke, and George Berkeley, or Merleau-Ponty's affirmation of perception as the basis of all science and knowledge.
The philosophy of perception concerns how mental processes and symbols depend on the world internal and external to the perceiver. The philosophy of perception is very closely related to a branch of philosophy known as epistemology—the theory of knowledge.
While René Descartes concluded that the question "Do I exist?" can only be answered in the affirmative (cogito ergo sum), Freudian psychology suggests that self-perception is an illusion of the ego, and cannot be trusted to decide what is in fact real. Such questions are continuously reanimated, as each generation grapples with the nature of existence from within the human condition. The questions remain: Do our perceptions allow us to experience the world as it "really is?" Can we ever know another point of view in the way we know our own?
There are two basic understandings of perception: passive perception (PP) and active perception (AP). Passive perception, conceived by René Descartes, can be summarized as the following sequence of events: surrounding → input (senses) → processing (brain) → output (reaction). Although still supported by mainstream philosophers, psychologists, and neurologists, this theory is losing momentum. The theory of active perception has emerged from extensive research on sensory illusions, notably in the work of psychologists such as Richard L. Gregory. This theory is increasingly gaining experimental support and can be summarized as a dynamic relationship between "description" (in the brain) ↔ senses ↔ surrounding.
The most common theory of perception is naïve realism in which people believe what they perceive to be things in themselves. Children develop this theory as a working hypothesis of how to deal with the world. Many people who have not studied biology carry this theory into adult life and regard their perception to be the world itself rather than a pattern that overlays the form of the world. Thomas Reid took this theory a step further. He realized that sensation was composed of a set of data transfers but declared that these were in some way transparent so that there is a direct connection between perception and the world. This idea is called direct realism and has become popular in recent years with the rise of postmodernism and behaviorism. Direct realism does not clearly specify the nature of the bit of the world that is an object in perception, especially in cases where the object is something like a silhouette.
The succession of data transfers that are involved in perception suggests that somewhere in the brain there is a final set of activity, called sense data, that is the substrate of the percept. Perception would then be some form of brain activity and somehow the brain would be able to perceive itself. This concept is known as indirect realism. In indirect realism it is held that we can only be aware of external objects by being aware of representations of objects. This idea was held by John Locke and Immanuel Kant. The common argument against indirect realism, used by Gilbert Ryle amongst others, is that it implies a homunculus or Ryle's regress where it appears as if the mind is seeing the mind in an endless loop. This argument assumes that perception is entirely due to data transfer and classical information processing. This assumption is highly contentious and the argument can be avoided by proposing that the percept is a phenomenon that does not depend wholly upon the transfer and rearrangement of data.
Direct realism and indirect realism are known as "realist theories of perception" because they hold that there is a world external to the mind. Direct realism holds that the representation of an object is located next to, or is even part of, the actual physical object whereas indirect realism holds that the representation of an object is brain activity. Direct realism proposes some as yet unknown direct connection between external representations and the mind whilst indirect realism requires some feature of modern physics to create a phenomenon that avoids infinite regress. Indirect realism is consistent with experiences such as: dreams, imaginings, hallucinations, illusions, the resolution of binocular rivalry, the resolution of multistable perception, the modeling of motion that allows us to watch television, the sensations that result from direct brain stimulation, the update of the mental image by saccades of the eyes, and the referral of events backwards in time.
There are also "anti-realist" understandings of perception: idealism and skepticism. Idealism holds that we create our reality, whereas skepticism holds that reality is always beyond us. One of the most influential proponents of idealism was George Berkeley, who maintained that everything was mind or dependent upon mind. Berkeley's idealism has two main strands: phenomenalism, in which physical events are viewed as a special kind of mental event, and subjective idealism. David Hume is probably the most influential proponent of skepticism.
Perception is one of the oldest fields within scientific psychology, and there are correspondingly many theories about its underlying processes. The oldest quantitative law in psychology is the Weber-Fechner law, which quantifies the relationship between the intensity of physical stimuli and their perceptual effects. It was the study of perception that gave rise to the Gestalt school of psychology, with its emphasis on a holistic approach.
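The Weber-Fechner law states that perceived intensity grows with the logarithm of physical intensity above the detection threshold. As an illustrative sketch (the scaling constant k and the threshold value here are arbitrary, chosen only for demonstration), the relationship can be expressed in a few lines of Python:

```python
import math

def perceived_intensity(stimulus, threshold=1.0, k=1.0):
    """Weber-Fechner law: perceived intensity grows with the
    logarithm of physical intensity above the detection threshold."""
    return k * math.log(stimulus / threshold)

# Equal ratios of stimulus produce equal increments of sensation:
# doubling a weak stimulus adds as much perceived intensity
# as doubling a strong one.
low = perceived_intensity(2.0) - perceived_intensity(1.0)
high = perceived_intensity(200.0) - perceived_intensity(100.0)
```

The logarithmic form captures why, for example, adding one candle to a dimly lit room is noticeable while adding one candle to a brightly lit room is not.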
The science of perception is concerned with how events are observed and interpreted. An event may be the occurrence of an object at some distance from an observer. According to the scientific account, this object will reflect light from the sun in all directions. Some of this reflected light from a particular, unique point on the object will fall across the corneas of the eyes, and the combined cornea/lens system of each eye will focus the light to two points, one on each retina. The pattern of points of light on each retina forms an image. This process also occurs in the case of silhouettes, where the pattern of absence of points of light forms an image. The overall effect is to encode position data on a stream of photons and to transfer this encoding onto a pattern on the retinas. The patterns on the retinas are the only optical images found in perception; prior to the retinas, light is arranged as a fog of photons going in all directions.
The images on the two retinas are slightly different and the disparity between the electrical outputs from these is resolved either at the level of the lateral geniculate nucleus or in a part of the visual cortex called 'V1'. The resolved data is further processed in the visual cortex where some areas have relatively more specialized functions, for instance area V5 is involved in the modeling of motion and V4 in adding color. The resulting single image that subjects report as their experience is called a 'percept'. Studies involving rapidly changing scenes show that the percept derives from numerous processes that each involve time delays.[2]
fMRI studies show that dreams, imaginings, and perceptions of similar things such as faces are accompanied by activity in many of the same areas of the brain. Thus, it seems that imagery that originates from the senses and internally generated imagery may have a shared ontology at higher levels of cortical processing.
If an object is also a source of sound this is transmitted as pressure waves that are sensed by the cochlea in the ear. If the observer is blindfolded it is difficult to locate the exact source of sound waves, if the blindfold is removed the sound can usually be located at the source. The data from the eyes and the ears is combined to form a 'bound' percept. The problem of how the bound percept is produced is known as the binding problem and is the subject of considerable study. The binding problem is also a question of how different aspects of a single sense (say, color and contour in vision) are bound to the same object when they are processed by spatially different areas of the brain.
In psychology, visual perception is the ability to interpret visible light information reaching the eyes which is then made available for planning and action. The resulting perception is also known as eyesight, sight or vision. The various components involved in vision are known as the visual system.
The visual system allows us to assimilate information from the environment to help guide our actions. The act of seeing starts when the lens of the eye focuses an image of the outside world onto a light-sensitive membrane in the back of the eye, called the retina. The retina is actually part of the brain that is isolated to serve as a transducer for the conversion of patterns of light into neuronal signals. The lens of the eye focuses light on the photoreceptive cells of the retina, which detect the photons of light and respond by producing neural impulses. These signals are processed in a hierarchical fashion by different parts of the brain, from the retina to the lateral geniculate nucleus, to the primary and secondary visual cortex of the brain.
The major problem in visual perception is that what people see is not simply a translation of retinal stimuli (i.e., the image on the retina). Thus, people interested in perception have long struggled to explain what visual processing does to create what we actually see.
Ibn al-Haytham (Alhacen), the "father of optics," pioneered the scientific study of the psychology of visual perception in his influential Book of Optics in the early eleventh century, being the first scientist to argue that vision occurs in the brain, rather than the eyes. He pointed out that personal experience has an effect on what people see and how they see, and that vision and perception are subjective. He explained possible errors in vision in detail, describing as an example how a small child with less experience may have more difficulty interpreting what he or she sees. He also gave an example of an adult who can make mistakes in vision because experience suggests that he or she is seeing one thing, when really seeing something else.[3]
Ibn al-Haytham's investigations and experiments on visual perception also included sensation, variations in sensitivity, sensation of touch, perception of colors, perception of darkness, the psychological explanation of the moon illusion, and binocular vision.[4]
Hermann von Helmholtz is often credited with the first study of visual perception in modern times. Helmholtz held vision to be a form of unconscious inference: vision is a matter of deriving a probable interpretation for incomplete data.
Inference requires prior assumptions about the world: two well-known assumptions that we make in processing visual information are that light comes from above, and that objects are viewed from above and not below. The study of visual illusions (cases when the inference process goes wrong) has yielded much insight into what sort of assumptions the visual system makes.
The unconscious inference hypothesis has recently been revived in so-called Bayesian studies of visual perception. Proponents of this approach consider that the visual system performs some form of Bayesian inference to derive a perception from sensory data. Models based on this idea have been used to describe various visual subsystems, such as the perception of motion or the perception of depth.[5][6]
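The core of the Bayesian account can be illustrated with the simplest possible case: fusing a Gaussian prior belief with a Gaussian sensory likelihood. This is only a minimal sketch (the function name and the particular numbers are illustrative, not drawn from any specific model in the literature), but it shows the key property these models exploit: the more reliable source of information dominates the resulting percept.

```python
def fuse_gaussian(mu_prior, var_prior, mu_obs, var_obs):
    """Combine a Gaussian prior with a Gaussian sensory likelihood.
    The posterior mean is a precision-weighted average of the two,
    and the posterior variance is smaller than either input's."""
    w = (1 / var_prior) / (1 / var_prior + 1 / var_obs)
    mu_post = w * mu_prior + (1 - w) * mu_obs
    var_post = 1 / (1 / var_prior + 1 / var_obs)
    return mu_post, var_post

# A reliable observation (small variance) dominates a vague prior:
# prior says 0 (variance 4), the senses say 10 (variance 1).
mu, var = fuse_gaussian(mu_prior=0.0, var_prior=4.0, mu_obs=10.0, var_obs=1.0)
```

Here the posterior lands much closer to the observation than to the prior, and its variance is lower than either source alone, mirroring the claim that perception integrates uncertain cues into a single, more confident estimate.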
Gestalt psychologists working primarily in the 1930s and 1940s raised many of the research questions that are studied by vision scientists today. The Gestalt Laws of Organization have guided the study of how people perceive visual components as organized patterns or wholes, instead of many different parts. Gestalt is a German word that translates to "configuration or pattern." According to this theory, there are six main factors that determine how we group things according to visual perception: Proximity, Similarity, Closure, Symmetry, Common fate and Continuity.
The major problem with the Gestalt laws (and the Gestalt school generally) is that they are descriptive not explanatory. For example, one cannot explain how humans see continuous contours by simply stating that the brain "prefers good continuity." Computational models of vision have had more success in explaining visual phenomena[7] and have largely superseded Gestalt theory.
Color vision is the capacity of an organism or machine to distinguish objects based on the wavelengths (or frequencies) of the light they reflect or emit. The nervous system derives color by comparing the responses to light from the several types of cone photoreceptors in the eye. These cone photoreceptors are sensitive to different portions of the visible spectrum. For humans, the visible spectrum ranges approximately from 380 to 750 nm, and there are normally three types of cones. The visible range and number of cone types differ between species.
In the human eye, the cones are maximally receptive to short, medium, and long wavelengths of light and are therefore usually called S-, M-, and L-cones. L-cones are often referred to as the red receptor, but while the perception of red depends on this receptor, microspectrophotometry has shown that its peak sensitivity is in the greenish-yellow region of the spectrum. In most primates closely related to humans there are three types of color receptors (known as cone cells). This confers trichromatic color vision, so these primates, like humans, are known as trichromats. Many other primates and other mammals are dichromats, and many mammals have little or no color vision.
The peak response of human color receptors varies, even amongst individuals with 'normal' color vision.[8] In non-human species this polymorphic variation is even greater, and it may well be adaptive.[9]
An object may be viewed under various conditions. For example, it may be illuminated by sunlight, the light of a fire, or a harsh electric light. In all of these situations, human vision perceives that the object has the same color: an apple always appears red, whether viewed at night or during the day. On the other hand, a camera with no adjustment for light may register the apple as having many different shades. This feature of the visual system is called chromatic adaptation, or color constancy; when the correction occurs in a camera it is referred to as white balance.
Chromatic adaptation is one aspect of vision that may fool someone into observing a color-based optical illusion. Though the human visual system generally does maintain constant perceived color under different lighting, there are situations where the relative brightness of two stimuli will appear reversed at night compared to the day. For example, the bright yellow petals of flowers will appear dark compared to the green leaves in very dim light; the opposite is true during the day. This is known as the Purkinje effect, and it arises because in very low light human vision is approximately monochromatic and most sensitive to the region near a wavelength of 507 nm (blue-green).
We are constantly judging the distance between ourselves and other objects, and we use many cues to determine the distance and the depth of objects. Some of these cues depend on visual messages that one eye alone can transmit: these are called "monocular cues." Others, known as "binocular cues," require the use of both eyes. Having two eyes allows us to make more accurate judgments about distance and depth, particularly when the objects are relatively close.
Depth perception is the visual ability to perceive the world in three dimensions. It is a trait common to many higher animals. Depth perception allows the beholder to accurately gauge the distance to an object.
Depth perception combines several types of depth cues grouped into two main categories: monocular cues (cues available from the input of just one eye) and binocular cues (cues that require input from both eyes).
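The geometric basis of the strongest binocular cue, retinal disparity, can be sketched with the standard stereo triangulation relation: depth is inversely proportional to the disparity between the two images. The sketch below uses camera-style units (focal length in pixels, baseline in meters) purely for illustration; the 0.065 m baseline is a rough stand-in for the human interocular distance.

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Stereo triangulation: depth Z = f * B / d, so depth falls off
    as the disparity d between the two images grows."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Halving the disparity doubles the estimated depth.
near = depth_from_disparity(800, 0.065, 40)
far = depth_from_disparity(800, 0.065, 20)
```

The inverse relationship explains why binocular depth judgments are most accurate for nearby objects, as noted above: beyond a few meters the disparity shrinks toward zero and small measurement errors translate into large depth errors.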
As art students learn, there are several ways in which perspective can help in estimating distance and depth. In linear perspective, two parallel lines that extend into the distance seem to come together at some point in the horizon. In aerial perspective, distant objects have a hazy appearance and a somewhat blurred outline. The elevation of an object also serves as a perspective cue to depth.
Trained artists are keenly aware of the various methods for indicating spatial depth (color shading, distance fog, perspective, and relative size), and take advantage of them to make their works appear "real." The viewer feels it would be possible to reach in and grab the nose of a Rembrandt portrait or an apple in a Cézanne still life, or step inside a landscape and walk around among its trees and rocks.
Photographs capturing perspective are two-dimensional images that often illustrate the illusion of depth. Stereoscopes and View-Masters, as well as three-dimensional movies, employ binocular vision by forcing the viewer to see two images created from slightly different positions (points of view). By contrast, a telephoto lens—used in televised sports, for example, to zero in on members of a stadium audience—has the opposite effect. The viewer sees the size and detail of the scene as if it were close enough to touch, but the camera's perspective is still derived from its actual position a hundred meters away, so background faces and objects appear about the same size as those in the foreground.
At the end of World War II, Merleau-Ponty published Phenomenology of Perception, in which he wrote:
All my knowledge of the world ... is gained from my own particular point of view, or from some experience of the world without which the symbols of science would be meaningless ... I am the absolute source, my existence does not stem from my antecedents, from my physical and social environment; instead it moves out towards them and sustains them, for I alone bring into being for myself ... the horizon whose distance from me would be abolished ... if I were not there to scan it with my gaze.
Many artists applied such theories to their own aesthetics and started to focus on the importance of the person's point of view (and its mutability) for a wider understanding of reality. They began to conceive the audience, and the exhibition’s space, as fundamental parts of the artwork, creating a kind of mutual communication between them and the viewer, who ceased, therefore, to be a mere addressee of their message.
The perception of movement is a complicated process involving both visual information from the retina and messages from the muscles around the eyes as they follow an object. The perception of movement depends in part on movement of the image across the retina of the eye. If you stand still and move your head to look around you, the images of all the objects in the room will pass across your retina. Yet, you will perceive all the objects as stationary. Even if you hold your head still and move only your eyes, the images will continue to pass across your retina. But the messages from the eye muscles seem to counteract those from the retina, so the objects in the room will be perceived as motionless.
Motion perception is the process of inferring the speed and direction of objects and surfaces that move in a visual scene given some visual input. Although this process appears straightforward to most observers, it has proven to be a difficult problem from a computational perspective, and extraordinarily difficult to explain in terms of neural processing. Motion perception is studied by many disciplines, including psychology, neuroscience, neurophysiology, and computer science.
At times our perceptual processes trick us into believing that an object is moving when, in fact, it is not. There is a difference, then, between real movement and apparent movement. Examples of apparent movement are the autokinetic illusion, stroboscopic motion, and the phi phenomenon. The autokinetic illusion is the perception that a stationary object, such as a pinpoint of light in a dark room, is moving. Apparent movement that results from flashing a series of still pictures in rapid succession, as in a motion picture, is called stroboscopic motion. The phi phenomenon is the apparent movement caused by flashing lights in sequence, as on theater marquees.
Visual illusions occur when we use a variety of sensory cues to create perceptual experiences that do not actually exist. Some are physical illusions, such as the bent appearance of a stick in water. Others are perceptual illusions, which occur because a stimulus contains misleading cues that lead to an inaccurate perception. Most stage magic is based on the principles of physical and perceptual illusion.
Amodal perception is the term used to describe the full perception of a physical structure when it is only partially perceived. For example, a table will be perceived as a complete volumetric structure even if only part of it is visible; the internal volumes and hidden rear surfaces are perceived despite the fact that only the near surfaces are exposed to view, and the world around us is perceived as a surrounding void, even though only part of it is in view at any time.
Formulation of the theory is credited to the Belgian psychologist Albert Michotte and Italian psychologist Fabio Metelli, with their work developed in recent years by E. S. Reed and Gestaltists.
Modal completion is a similar phenomenon in which a shape is perceived to be occluding other shapes even when the shape itself is not drawn. Examples include the triangle that appears to be occluding three disks in the Kanizsa triangle and the circles and squares that appear in different versions of the Koffka cross.
The haptic perceptual system is unusual in that it can include the sensory receptors from the whole body. It is closely linked to the movement of the body so can have a direct effect on the world being perceived. Gibson (1966) defined the haptic system as "The sensibility of the individual to the world adjacent to his body by use of his body."[10]
The concept of haptic perception is closely allied to the concept of "active touch," the recognition that more information is gathered when a motor plan (movement) is associated with the sensory system, and to that of "extended physiological proprioception," the realization that when using a tool such as a stick, perception is transparently transferred to the end of the tool.
Interestingly, the capabilities of the haptic sense, and of the somatic senses in general, have traditionally been underrated. Contrary to common expectation, loss of the sense of touch is a catastrophic deficit: it makes it almost impossible to walk or to perform other skilled actions such as holding objects or using tools.[11] This highlights the critical and subtle capabilities of touch and the somatic senses in general. It also highlights the potential of haptic technology.
Speech perception refers to the processes by which humans are able to interpret and understand the sounds used in language. The process of perceiving speech begins at the level of the sound signal and the process of audition. After processing the initial auditory signal, speech sounds are further processed to extract acoustic cues and phonetic information. This speech information can then be used for higher-level language processes, such as word recognition.
The study of speech perception is closely linked to the fields of phonetics and phonology in linguistics and cognitive psychology, and perception in psychology. Research in speech perception seeks to understand how human listeners recognize speech sounds and use this information to understand spoken language. Speech research has applications in building computer systems that can recognize speech, as well as improving speech recognition for hearing- and language-impaired listeners.
Proprioception (from Latin proprius, meaning "one's own" and perception) is the sense of the relative position of neighboring parts of the body. Unlike the six "exteroceptive" senses (sight, taste, smell, touch, hearing, and balance) by which we perceive the outside world, and "interoceptive" senses, by which we perceive the pain and the stretching of internal organs, proprioception is a third distinct sensory modality that provides feedback solely on the status of the body internally. It is the sense that indicates whether the body is moving with required effort, as well as where the various parts of the body are located in relation to each other.
Many cognitive psychologists hold that, as we move about in the world, we create a model of how the world works. That is, we sense the objective world, but our sensations map to percepts, and these percepts are provisional, in the same sense that scientific hypotheses are provisional (cf. scientific method). As we acquire new information, our percepts shift, thus solidifying the idea that perception is a matter of belief.
Just as one object can give rise to multiple percepts, such as ambiguous images and other visual illusions, so an object may fail to give rise to any percept at all: if the percept has no grounding in a person's experience, the person may literally not perceive it.
This confusing ambiguity of perception is exploited in human technologies such as camouflage, and also in biological mimicry, for example by Peacock butterflies, whose wings bear eye markings that birds respond to as though they were the eyes of a dangerous predator. Perceptual ambiguity is not restricted to vision. For example, recent touch perception research found that kinesthesia-based haptic perception strongly relies on the forces experienced during touch.[12] This makes it possible to produce illusory touch percepts.[13][14]
Cognitive theories of perception assume there is a poverty of stimulus. This (with reference to perception) is the claim that sensations are, by themselves, unable to provide a unique description of the world. Sensations require 'enriching', which is the role of the mental model. A different type of theory is the perceptual ecology approach of James J. Gibson.
Gibson rejected the assumption of a poverty of stimulus by rejecting the notion that perception is based in sensations. Instead, he investigated what information is actually presented to the perceptual systems. He (and the psychologists who work within this paradigm) detailed how the world could be specified to a mobile, exploring organism via the lawful projection of information about the world into energy arrays. Specification is a 1:1 mapping of some aspect of the world into a perceptual array; given such a mapping, no enrichment is required and perception is direct.
The ecological understanding of perception advanced from Gibson's early work is perception-in-action, the notion that perception is a requisite property of animate action. Without perception, action would not be guided, and without action, perception would be pointless. Animate actions require perceiving and moving together. In a sense, "perception and movement are two sides of the same coin, the coin is action."[15] A mathematical theory of perception-in-action, General Tau Theory, has been devised and investigated in many forms of controlled movement by many different species of organism. According to this theory, tau information, or time-to-goal information, is the fundamental 'percept' in perception.
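The quantity at the heart of the theory is simple to state: tau is the size of a gap to a goal divided by its current rate of closure, which gives a first-order estimate of time-to-goal. The sketch below is illustrative only (the function name and the numbers are not from Lee's formal treatment), but it shows why tau is attractive as a directly perceivable variable: it yields a time-to-contact without the observer needing to know absolute distance or speed separately.

```python
def tau(gap, closure_rate):
    """General Tau Theory's tau: the current gap to a goal divided
    by its rate of closure, i.e. first-order time-to-goal."""
    if closure_rate <= 0:
        raise ValueError("gap must be closing")
    return gap / closure_rate

# An object 10 m away, approached at 2 m/s: contact in about 5 s.
# Scaling both gap and rate by the same factor leaves tau unchanged,
# which is why tau can be picked up without absolute distance or speed.
t = tau(10.0, 2.0)
t_scaled = tau(5.0, 1.0)
```

The scale invariance in the comment is the key ecological claim: the same tau value is available, for instance, from the rate of optical expansion of an approaching object's image, regardless of the object's actual size.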
We gather information about the world and interact with it through our actions. Perceptual information is critical for action, and perceptual deficits may lead to profound deficits in action, as has been shown for touch-perception-related deficits.[16]
Perception is often referred to as a "cognitive process" in which information processing is used to transfer information from the world into the brain and mind where it is further processed and related to other information. Some philosophers and psychologists propose that this processing gives rise to particular mental states, whilst others envisage a direct path back into the external world in the form of action.
Many eminent behaviorists such as John B. Watson and B. F. Skinner proposed that perception acts largely as a process between a stimulus and a response, with other brain activities apparently irrelevant to the process. As Skinner wrote:
The objection to inner states is not that they do not exist, but that they are not relevant in a functional analysis.[17]
However, it has been shown by numerous researchers that sensory and perceptual experiences are affected by many factors that are not attributes of the object of perception but rather of the observer. These include the person’s race, gender, and age, among many others.

Unlike puppies and kittens, human babies are born with their eyes open and functioning. Neonates begin to absorb and process information from the outside world as soon as they enter it (in some respects even before). Even before babies are born, their ears are in working order. Fetuses in the uterus can hear sounds and startle at a sudden, loud noise in the mother’s environment. After birth, babies show signs that they remember sounds they heard in the womb. Babies also are born with the ability to tell the direction of a sound, which they show by turning their heads toward its source. Infants are particularly tuned in to the sounds of human speech. Their senses work fairly well at birth and rapidly improve to near-adult levels.

Besides experience and learning, our perceptions can also be influenced by such factors as our motivations, values, interests and expectations, and cultural preconceptions.
New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards. This article abides by the terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license, which can reference both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation.
Note: Some restrictions may apply to use of individual images which are separately licensed.