In psychology, parallel processing is the ability of the brain to simultaneously process incoming stimuli of differing quality.[1] Parallel processing is associated with the visual system in that the brain divides what it sees into four components: color, motion, shape, and depth. These are individually analyzed and then compared to stored memories, which helps the brain identify what is being viewed.[2] The brain then combines these components into the field of view that is seen and comprehended.[3] This is a continual and seamless operation. For example, if one is standing between two different groups of people who are simultaneously carrying on two different conversations, one may be able to pick up only some information from both conversations at the same time.[4] Some experimental psychologists have linked parallel processing to the Stroop effect, which arises in the Stroop test when the name of a color and the color the word is printed in do not match.[5] In the Stroop effect, people's selective attention reveals an inability to attend to all stimuli at once.[6]
In 1990, American psychologist David Rumelhart proposed the model of parallel distributed processing (PDP) in hopes of studying neural processes through computer simulations.[7] According to Rumelhart, the PDP model represents information processing as interactions between elements called units, with the interactions being either excitatory or inhibitory in nature.[8] Parallel distributed processing models are neurally inspired, emulating the organizational structure of the nervous systems of living organisms,[9] and they are given a general mathematical framework.
Parallel processing models assume that information is represented in the brain as patterns of activation. Information processing involves the interactions of neuron-like units linked by synapse-like connections, which can be either excitatory or inhibitory. Each unit's activation level is updated as a function of the connection strengths and the activation levels of the other units. A set of response units is activated by the propagation of activation patterns, and the connection weights are eventually adjusted through learning.[10]
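As a rough schematic (not a formula from the cited sources), such an update can be written as a_i(t+1) = F(a_i(t), Σ_j w_ij · a_j(t)), where a_i is the activation of unit i, w_ij is the weight of the connection from unit j to unit i (positive if excitatory, negative if inhibitory), and F is the unit's activation rule; learning then amounts to gradually adjusting the weights w_ij.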
In contrast to parallel processing, serial processing involves sequential processing of information, without any overlap of processing times.[11] The distinction between these two processing models is most apparent when a visual stimulus must be located and processed, a task known as visual search.
In serial processing, the elements are examined one after another, in order, until the target is found; otherwise the search continues to the end of the display to confirm that the target is not present. This results in reduced accuracy and increased response time for displays containing more objects.
In parallel processing, on the other hand, all objects are processed simultaneously, although completion times may vary. This may or may not reduce accuracy, but the time course is similar regardless of the size of the display.[12]
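The contrast can be made concrete with a toy simulation. The sketch below is illustrative only: the timing constants and the functions serial_search and parallel_search are invented for this example rather than drawn from the cited studies. It simply shows why response time grows with display size under a self-terminating serial search but stays roughly flat when all items are checked at once.

```python
# Toy illustration of serial vs. parallel visual search (values are arbitrary).
import random

TIME_PER_ITEM = 50   # hypothetical ms spent inspecting one item serially
PARALLEL_TIME = 60   # hypothetical ms to process the whole display at once

def serial_search(display, target):
    """Self-terminating serial search: inspect items one by one and stop
    as soon as the target is found (or the display is exhausted)."""
    elapsed = 0
    for item in display:
        elapsed += TIME_PER_ITEM
        if item == target:
            return True, elapsed
    return False, elapsed

def parallel_search(display, target):
    """Idealized parallel search: every item is processed at the same time,
    so the time cost barely depends on how many items there are."""
    return target in display, PARALLEL_TIME

for size in (4, 8, 16, 32):
    display = ["distractor"] * (size - 1) + ["target"]
    random.shuffle(display)
    _, t_serial = serial_search(display, "target")
    _, t_parallel = parallel_search(display, "target")
    print(f"display size {size:>2}: serial ~ {t_serial} ms, parallel ~ {t_parallel} ms")
```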
However, there are concerns about the efficiency of parallel processing models for complex tasks, which are discussed later in this article.
There are eight major aspects of a parallel distributed processing model:[8]
Processing units: these units may include abstract elements such as features, shapes and words, and are generally categorized into three types: input, output and hidden units.
State of activation: this is a representation of the state of the system. The pattern of activation over the set of processing units is represented as a vector of N real numbers, and it is this pattern that captures what the system is representing at any time.
Output function: an output function maps the current state of activation to an output signal. Units interact with their neighboring units by transmitting signals, whose strength is determined by their degree of activation; this in turn determines how strongly they affect their neighbors.
Pattern of connectivity: the pattern of connectivity determines how the system will respond to an arbitrary input. The total pattern of connectivity is specified by the weight of every connection; a positive weight represents an excitatory connection and a negative weight an inhibitory one.
Propagation rule: a net input is produced for each type of input by rules that take the output vector and combine it with the connectivity matrices. For more complex patterns of connectivity, the rules are correspondingly more complex (a schematic sketch of these aspects follows the list).
Activation rule: a new state of activation for each unit is produced by combining the net inputs impinging on it with its current state of activation.
Learning rule: the patterns of connectivity are modified through experience. The modifications can be of three types: the development of new connections, the loss of existing connections, and the modification of the strengths of connections that already exist. The first two can be considered special cases of the last: changing a connection strength from zero to a positive or negative value is like forming a new connection, and changing it to zero is like losing an existing connection.
Representation of the environment: in PDP models, the environment is represented as a time-varying stochastic function over the space of input patterns.[13] This means that at any given time, there is some probability that any of the possible input patterns is impinging on the input units.[9]
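These eight aspects can be read as a loose recipe for a simulation. The sketch below is a minimal illustration assuming NumPy; the particular choices (a tanh output function, a single weight matrix as the propagation rule, a Hebbian-style learning rule, and random input patterns as the "environment") are stand-ins for the many options the PDP framework allows, not the specific model described in the cited chapter.

```python
# Schematic PDP-style network: each comment names the aspect it stands for.
import numpy as np

rng = np.random.default_rng(0)

N = 6                                   # 1. processing units (here, just indices 0..5)
activation = rng.uniform(0, 1, N)       # 2. state of activation (a vector of N reals)

def output(a):                          # 3. output function: activation -> transmitted signal
    return np.tanh(a)

W = rng.normal(0, 0.5, (N, N))          # 4. pattern of connectivity: weights (>0 excitatory, <0 inhibitory)

def propagate(W, o):                    # 5. propagation rule: combine outputs with the weight matrix
    return W @ o

def activate(a, net, decay=0.2):        # 6. activation rule: merge net input with the current state
    return np.clip((1 - decay) * a + decay * net, -1, 1)

def learn(W, a, rate=0.01):             # 7. learning rule: Hebbian-style weight adjustment
    return W + rate * np.outer(a, a)

def environment():                      # 8. environment: a stochastic source of input patterns
    return rng.uniform(0, 1, N)

for _ in range(20):
    activation = activate(activation, propagate(W, output(activation)) + environment())
    W = learn(W, activation)

print(np.round(activation, 2))
```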
An example of the PDP model is illustrated in Rumelhart's book Parallel Distributed Processing: individuals who live in the same neighborhood belong to different gangs, and other information about them is included, such as their names, age groups, marital status, and occupations within their respective gangs. Rumelhart treated each category as a 'unit', and each individual has connections to the corresponding units. For instance, if more information is sought on an individual named Ralph, the 'Ralph' name unit is activated, revealing connections to Ralph's other properties, such as his marital status or age group.[8]
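A much-simplified sketch of that retrieval idea is shown below. The individuals, their property values, and the one-step "spread" of activation are invented for illustration and are far simpler than the interactive activation model actually described in the book.

```python
# Toy content-addressable lookup in the spirit of the neighborhood/gang example.
# Activating a name unit reveals the property units it is connected to.
# The people and property values here are made up for illustration.
people = {
    "Ralph": {"gang": "Jets", "age_group": "30s", "marital_status": "single"},
    "Sam":   {"gang": "Sharks", "age_group": "20s", "marital_status": "married"},
}

def activate_name(name):
    """Turn on the name unit and report the property units it connects to."""
    for unit, value in people[name].items():
        print(f"{name} -> {unit}: {value}")

activate_name("Ralph")
```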
To sense depth, humans use both eyes to see three-dimensional objects. This sense is present at birth in humans and in some animals, such as cats, dogs, owls, and monkeys.[14] Animals with wider-set eyes, such as horses and cows, have a harder time establishing depth. A special depth test for infants, called the visual cliff, was devised.[15] This test consisted of a table, half covered in a checkerboard pattern, with the other half a clear plexiglass sheet revealing a second checkerboard platform about a foot below. Although the plexiglass was safe to climb on, the infants refused to cross over because they perceived a visual cliff. This test showed that most infants already have a good sense of depth. The phenomenon is similar to how adults perceive heights.
Certain cues help establish depth perception. Binocular cues arise from the two eyes, whose slightly different images are subconsciously compared to calculate distance.[16] This idea of two separate images is used by 3-D and VR filmmakers to give two-dimensional footage the element of depth. Monocular cues can be used by a single eye with hints from the environment: relative height, relative size, linear perspective, lights and shadows, and relative motion.[15] Each hint helps establish small facts about a scene, and together they form a perception of depth. Binocular and monocular cues are used constantly and subconsciously to sense depth.
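As a simplified geometric illustration (not taken from the cited sources), the distance Z to an object can be estimated from binocular disparity roughly as Z ≈ (f × b) / d, where b is the separation between the two eyes, f is an effective focal length, and d is the disparity between the positions of the object in the two retinal images; smaller disparities correspond to objects that are farther away.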
Limitations of parallel processing have been raised in several analytical studies. Those most often highlighted include the brain's capacity limits, attentional blink interference, limited processing capabilities, and information limitations in visual search.
The brain has processing limits in the execution of complex tasks such as object recognition. Not all parts of the brain can process at full capacity in parallel; attention controls the allocation of resources to the tasks. To work efficiently, attention must be guided from object to object.[17]
These limits on attentional resources sometimes lead to serial bottlenecks in parallel processing, meaning that parallel processing is interrupted by stretches of serial processing. However, there is evidence for the coexistence of serial and parallel processes.[18]
Anne Treisman's feature integration theory is one theory that integrates serial and parallel processing while taking attentional resources into account. It consists of two stages: