Models of consciousness

A model of consciousness is a theoretical description that relates brain properties of consciousness (e.g., fast irregular electrical activity, widespread brain activation) to phenomenal properties of consciousness (e.g., qualia, a first-person perspective, the unity of a conscious scene). Because of the diverse nature of these properties (Seth et al. 2005), useful models can be either mathematical/logical or verbal/conceptual.

Introduction

Models of consciousness should be distinguished from so-called neural correlates of consciousness (Crick & Koch 1990). While the identification of correlations between aspects of brain activity and aspects of consciousness may constrain the specification of neurobiologically plausible models, such correlations do not by themselves provide explanatory links between neural activity and consciousness. Models should also be distinguished from theories that do not propose any mechanistic implementation (e.g., Rosenthal’s ‘higher-order thought’ theories; Rosenthal 2005). Models of consciousness are valuable precisely to the extent that they propose such explanatory links (Seth, 2009). This article summarizes models whose computational, informational, or neurodynamic elements propose explanatory links between neural properties and phenomenal properties.

For each model, or class of models, only a brief summary is given. Some models may have more extensive descriptions elsewhere.

Global workspace models

Baars’ global workspace theory (GW; Baars 1988) has inspired a variety of related consciousness models. The central idea of GW theory is that conscious cognitive content is globally available for diverse cognitive processes including attention, evaluation, memory, and verbal report. The notion of global availability is suggested to explain the association of consciousness with integrative cognitive processes such as attention, decision making and action selection. Also, because global availability is necessarily limited to a single stream of content, GW theory may naturally account for the serial nature of conscious experience.

GW theory was originally described in terms of a ‘blackboard’ architecture in which separate, quasi-independent processing modules interface with a centralized, globally available resource (Baars 1988). This cognitive level of description is preserved in the computational models of Franklin and Graesser (1999), who proposed a model consisting of a population of interacting ‘software agents’, and of Shanahan (2005), whose model incorporates internal simulation in support of executive control and has more recently been implemented with spiking neurons (Shanahan 2008).
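
The competition-and-broadcast cycle at the heart of such blackboard architectures can be sketched in a few lines of Python. The sketch below is a minimal illustration only, not any published implementation; the module names, random salience values, and winner-take-all rule are invented for exposition.

    # Minimal sketch of a global-workspace ('blackboard') cycle: independent
    # processors post candidate contents tagged with a salience value; the most
    # salient content wins access to the workspace and is broadcast back to all
    # modules. Names, saliences, and the winner-take-all rule are invented.

    import random

    random.seed(0)

    class Module:
        def __init__(self, name):
            self.name = name
            self.received = []                   # broadcasts seen by this module

        def propose(self):
            # Offer some content for the workspace, tagged with a salience.
            return (random.random(), f"content from {self.name}")

        def receive(self, content):
            self.received.append(content)

    modules = [Module(n) for n in ("vision", "audition", "memory", "planning")]

    for cycle in range(3):
        proposals = [m.propose() for m in modules]
        salience, winner = max(proposals)        # competition for access
        for m in modules:                        # global broadcast of the winner
            m.receive(winner)
        print(f"cycle {cycle}: broadcast {winner!r} (salience {salience:.2f})")

Because only one winner is broadcast per cycle, the sketch also exhibits the single-stream, serial character that GW theory attributes to conscious content.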

Figure 1: A schematic of the neuronal global workspace. A central global workspace, constituted by long-range cortico-cortical connections, assimilates other processes according to their salience. Other automatically activated processors do not enter the global workspace. This figure has been adapted from (Dehaene et al. 2003).

Dehaene, Changeux and colleagues have proposed a neuronal implementation of a global workspace architecture, the so-called ‘neuronal global workspace’ (see Figure 1; Dehaene et al. 2003). In this model, sensory stimuli mobilize excitatory neurons with long-range cortico-cortical axons, leading to the genesis of a global activity pattern among workspace neurons. Any such global pattern can inhibit alternative activity patterns among workspace neurons, thus preventing the conscious processing of alternative stimuli (for example, during the so-called attentional blink). The neuronal global workspace model predicts that conscious perception is a nonlinear function of stimulus salience; i.e., a gradual increase in stimulus visibility should be accompanied by a sudden transition of the neuronal workspace into a corresponding activity pattern (Dehaene et al. 2003).
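
The predicted nonlinearity can be illustrated with a toy bistable rate model: a single unit with strong recurrent self-excitation stays on a low-activity branch until a gradually ramped stimulus pushes it past a saddle-node point, at which activity jumps discontinuously ('ignition'). This is a generic bistability sketch under assumed parameter values, not the actual network of Dehaene et al. (2003).

    # Toy illustration of 'ignition': a rate unit with strong recurrent
    # self-excitation is bistable, so a gradually ramped input produces a
    # sudden, all-or-none jump in activity rather than a graded response.
    # All parameters are invented for illustration.

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    w = 10.0       # recurrent self-excitation (assumed strong enough for bistability)
    theta = 5.0    # activation threshold
    rate = 0.0     # firing rate, starting from silence

    for step in range(21):
        stim = 0.25 * step                       # slowly ramped stimulus strength
        for _ in range(500):                     # relax towards steady state
            rate += 0.1 * (-rate + sigmoid(w * rate + stim - theta))
        print(f"stimulus {stim:4.2f} -> rate {rate:.3f}")

With these values the steady-state rate hovers near zero until the stimulus reaches roughly 2.0, then jumps to near saturation: a gradual input change, an all-or-none response.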

Wallace has advocated a network-theoretic modelling perspective on global workspace theory (Wallace 2005). In this view, transient links among specialized processing modules comprise dynamically formed networks. The ignition of a global workspace corresponds to the formation of a ‘giant component’ whereby previously disconnected sub-networks coalesce into a single network encompassing the majority of modules. The emergence of giant components in dynamic networks can be considered as a phase transition.
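
The giant-component transition is easy to demonstrate with a generic Erdős–Rényi random graph, in which a component spanning a finite fraction of all nodes appears abruptly once the mean degree exceeds 1. The sketch below illustrates only this kind of phase transition; it is not Wallace's own formalism.

    # Giant-component phase transition in an Erdos-Renyi random graph: once the
    # mean degree c exceeds 1, a component containing a finite fraction of all
    # nodes appears abruptly. A generic illustration of the kind of transition
    # Wallace (2005) invokes, with invented sizes and degrees.

    import random

    random.seed(0)

    def largest_component(n, p):
        # Union-find over randomly placed edges.
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]    # path halving
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if random.random() < p:
                    parent[find(i)] = find(j)
        sizes = {}
        for i in range(n):
            root = find(i)
            sizes[root] = sizes.get(root, 0) + 1
        return max(sizes.values())

    n = 2000
    for c in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0):     # mean degree; edge prob c / n
        frac = largest_component(n, c / n) / n
        print(f"mean degree {c:.1f}: largest component spans {frac:.1%} of modules")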

Multiple drafts theory

Multiple drafts theory was introduced by Daniel Dennett (1991) to challenge the idea of a ‘Cartesian theatre’ in which all perceptions, thoughts, and other mental contents are presented to a conscious observer. According to the theory, distributed neural/cognitive processes continually produce content in parallel, and conscious content is simply that content which has the greatest impact on the rest of the system. More recently, Dennett (2001) has used the metaphor of ‘fame in the brain’ to describe his theory. This metaphor emphasizes temporal aspects of the theory, in particular that (i) there is no precise time at which particular content becomes ‘famous’ and (ii) fame can only be determined retrospectively.

According to Dennett, the only ‘hard problem’ of consciousness is acknowledging that there is nothing more to consciousness than mechanisms of conscious access and global influence.

The dynamic core

A different approach to modelling consciousness has focused on informational aspects of consciousness. The dynamic core hypothesis (Edelman and Tononi 2000; Tononi and Edelman 1998) was developed in the context of the theory of neuronal group selection (TNGS, also known as neural Darwinism), a selectionist theory of brain development and brain function (Edelman 1987; Edelman 2003). It is argued that the occurrence of any particular conscious scene constitutes a highly informative discrimination, for the reason that conscious scenes are at once ‘integrated’ (every conscious scene is experienced “all of a piece”) and ‘differentiated’ (every conscious scene is unique). A central claim of the dynamic core hypothesis is that conscious qualia are these discriminations (Edelman 2003). The dynamic core hypothesis proposes that the neural mechanisms underlying consciousness consist of a functional cluster in the thalamocortical system, within which reentrant neuronal interactions yield a succession of differentiated yet unitary metastable states. The boundaries of the dynamic core are suggested to shift over time, with some neuronal groups leaving and others being incorporated, these transitions occurring under the influence of internal and external signals (Edelman and Tononi 2000).

While a detailed neuronal model of the dynamic core is lacking, a notable feature of the dynamic core hypothesis is the proposal of a quantitative measure of “neural complexity” (Tononi et al. 1994), high values of which are suggested to accompany consciousness. Neural complexity measures the extent to which the dynamics of a neural system are both integrated and differentiated (Tononi et al. 1994). The component parts of a neurally complex system are differentiated; however, as larger and larger subsets of elements are considered they become increasingly integrated. According to the dynamic core hypothesis, the distinctive reentrant anatomy of the thalamocortical system is ideally suited to producing dynamics of high neural complexity (Sporns et al. 2000).
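
For stationary Gaussian systems, neural complexity can be computed directly from a covariance matrix as \(C_N(X) = \sum_k [\langle H(X_k) \rangle - (k/n) H(X)]\), where \(\langle H(X_k) \rangle\) is the average entropy of subsets of size k (Tononi et al. 1994). The sketch below implements this definition for a small covariance matrix invented for illustration.

    # Sketch of the neural complexity measure of Tononi, Sporns & Edelman (1994)
    # for a stationary Gaussian system: C_N(X) = sum over subset sizes k of
    # [<H(X_k)> - (k/n) * H(X)], with Gaussian entropies
    # H = 0.5 * ln((2*pi*e)^m * det(Cov)). The covariance matrix is invented.

    import itertools
    import math

    import numpy as np

    def gaussian_entropy(cov):
        m = cov.shape[0]
        return 0.5 * math.log(((2 * math.pi * math.e) ** m) * np.linalg.det(cov))

    def neural_complexity(cov):
        n = cov.shape[0]
        h_total = gaussian_entropy(cov)
        c_n = 0.0
        for k in range(1, n):                    # the k = n term is zero
            subset_entropies = [gaussian_entropy(cov[np.ix_(s, s)])
                                for s in itertools.combinations(range(n), k)]
            c_n += float(np.mean(subset_entropies)) - (k / n) * h_total
        return c_n

    # Two moderately coupled clusters of three units: differentiated within,
    # integrated across, which is the regime the measure is meant to reward.
    within = 0.4 * np.eye(3) + 0.6 * np.ones((3, 3))       # unit variances
    cov = np.block([[within, 0.2 * np.ones((3, 3))],
                    [0.2 * np.ones((3, 3)), within]])
    print("coupled clusters :", round(neural_complexity(cov), 3))
    print("independent units:", round(neural_complexity(np.eye(6)), 3))  # ~0.0

Fully independent units score zero, as the definition requires; the coupled-cluster system scores higher because its subsets are both differentiated from and informative about the rest of the system.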

The dynamic core hypothesis fits within an extended conceptual model of consciousness provided by the TNGS (Edelman 2003; Seth and Baars 2005). According to the TNGS, primary (sensory) consciousness arose in evolution when ongoing perceptual categorization was linked via reentry to a value-dependent memory creating the so-called “remembered present” (Edelman 1989). Higher-order consciousness, distinguished in humans by an explicit sense of self and the ability to construct past and future scenes, arose at a later stage with reentrant pathways linking value-dependent categorization with linguistic performance and conceptual memory (Edelman 2003).

Information integration

The information integration theory of consciousness (IITC; Tononi 2004, 2008) claims that consciousness corresponds to the capacity of a system to integrate information. A system is deemed capable of information integration to the extent that it has available a large repertoire of states and that the states of each element are causally dependent on the states of other elements. Like the dynamic core hypothesis, the IITC is based on the notion that the occurrence of any conscious scene simultaneously rules out the occurrence of a vast number of alternatives and therefore constitutes a highly informative discrimination. Also like the dynamic core hypothesis, the IITC proposes that the thalamocortical system provides the neuroanatomical substrate for the neural processes that underlie consciousness.

The IITC proposes a novel measure of the ‘quantity’ of consciousness generated by a system. This measure, \(\Phi\) (phi), is defined as the amount of causally effective information that can be integrated across the weakest link of a system (Tononi and Sporns 2003; Tononi 2004). One important distinction between \(\Phi\) and “neural complexity” (see Dynamic Core) is that \(\Phi\) measures directed, causal interactions within a system. According to the IITC, consciousness as measured by \(\Phi\) is characterized as a “disposition” or “potentiality”. The contents of any given conscious scene are specified by the value, at any given time, of the variables mediating informational interactions within the system. A distinguishing feature of the IITC is that \(\Phi\) is proposed to be a sufficient condition for consciousness, so that any system with sufficiently high \(\Phi\) – whether biological or non-biological – would be conscious (Tononi 2004). For a comparative analysis of \(\Phi\), neural complexity, and a third measure of complex neural dynamics – causal density – see Seth et al. (2006).
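
Exact computation of \(\Phi\) is demanding even for small systems; the following toy conveys only the flavour of the 'weakest link' idea. For a small deterministic binary network with an invented update rule, it injects maximum-entropy states and, for every bipartition, measures how much each half's next state depends on the other half's current state, reporting the minimum over bipartitions. The published measure instead uses normalized effective information over the minimum information bipartition (Tononi and Sporns 2003).

    # A drastically simplified toy of the 'weakest link' idea behind Phi. The
    # update rule, network size, and the use of plain mutual information in
    # place of normalized effective information are all simplifying assumptions.

    import itertools
    from collections import Counter
    from math import log2

    N = 4

    def step(state):
        # Invented rule: a unit turns on when at least two of the other three
        # units are on, so every unit's future depends on the rest of the system.
        return tuple(1 if sum(state) - state[i] >= 2 else 0 for i in range(N))

    def mutual_information(pairs):
        # Exact mutual information (bits) over equiprobable (x, y) samples.
        total = len(pairs)
        px = Counter(x for x, _ in pairs)
        py = Counter(y for _, y in pairs)
        pxy = Counter(pairs)
        return sum((c / total) * log2((c / total) / ((px[x] / total) * (py[y] / total)))
                   for (x, y), c in pxy.items())

    states = list(itertools.product((0, 1), repeat=N))    # max-entropy injection
    weakest = None
    for r in range(1, N // 2 + 1):
        for a in itertools.combinations(range(N), r):
            b = tuple(i for i in range(N) if i not in a)
            cross = 0.0
            for part, rest in ((a, b), (b, a)):
                pairs = []
                for s in states:
                    ns = step(s)
                    pairs.append((tuple(s[i] for i in part),
                                  tuple(ns[j] for j in rest)))
                cross += mutual_information(pairs)
            weakest = cross if weakest is None else min(weakest, cross)
    print(f"toy 'weakest link' integration: {weakest:.2f} bits")

Taking the minimum over bipartitions captures the key intuition: a system decomposable into informationally independent halves scores zero, however rich each half may be on its own.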

In 2008 Tononi published a revised version of the theory in which \(\Phi\) is a measure of dynamics as well as of capacity (Tononi, 2008). In this revised theory, \(\Phi\) measures the information generated when a system transitions from one state to another. Specifically, \(\Phi\) measures the information generated by a complex of elements during such a transition, over and above the information generated by its parts (\(\Phi\) can therefore also serve as an interesting measure of ‘emergence’). A number of new concepts are also introduced in the revised IITC, most prominently the notion of a “qualia arrow” or “qarrow”, collections of which define shapes within qualia space, with each (highly multidimensional) shape corresponding to a distinct phenomenal experience.

Thalamocortical rhythms

Several other approaches emphasize the thalamocortical system in the generation of consciousness. In one of these, Llinás, Ribary, Contreras & Pedroarena (1998) proposed, on the basis of earlier studies using MEG, that synchronous oscillations, or rhythms, in thalamocortical loops in the gamma band of frequencies create the conscious state. They also proposed that this “thalamocortical resonance” is modulated by the brainstem, and that it is given content by sensory input during waking and by intrinsic inputs during dreaming. They identified two separate systems of thalamocortical loops, one with specific projections and one with diffuse projections. Interaction between these two loop systems was proposed to underlie the temporal binding that characterizes conscious processing. They also suggested that some disorders of consciousness, such as petit mal epilepsy, arise because the thalamocortical rhythms are disrupted (‘thalamocortical dysrhythmias’), preventing the gamma-band resonances that constitute the conscious state from occurring.
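
The underlying intuition, that sufficiently strong coupling among gamma-band oscillators yields a coherent collective rhythm, can be illustrated with a generic Kuramoto model. The sketch below is not Llinás et al.'s circuit; the frequencies, coupling strengths, and integration parameters are invented.

    # Binding-by-synchrony illustration: Kuramoto oscillators with gamma-band
    # natural frequencies (around 40 Hz) phase-lock once coupling exceeds a
    # critical value. A generic synchronization sketch with invented parameters.

    import cmath
    import math
    import random

    random.seed(1)
    n = 50
    omega = [2 * math.pi * random.gauss(40.0, 2.0) for _ in range(n)]  # rad/s
    dt = 0.0005                                                        # s

    for coupling in (5.0, 50.0, 200.0):
        theta = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
        for _ in range(4000):                    # simulate 2 s of dynamics
            mean_field = sum(cmath.exp(1j * th) for th in theta) / n
            r, psi = abs(mean_field), cmath.phase(mean_field)
            theta = [th + dt * (w + coupling * r * math.sin(psi - th))
                     for th, w in zip(theta, omega)]
        r = abs(sum(cmath.exp(1j * th) for th in theta) / n)
        print(f"coupling {coupling:6.1f}: phase coherence r = {r:.2f}")

Below the critical coupling the phase coherence r stays near the incoherent baseline; above it, r approaches 1, a simple analogue of the proposed resonant conscious state.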

Coalitions of neurons

In contrast to their earlier position that 40 Hz oscillations were sufficient for consciousness (Crick and Koch 1990), Crick and Koch (2003) suggested instead that consciousness may require competition among “coalitions” of neurons, in which winning coalitions determine the contents of consciousness at a given time. They pointed out that the notion of neuronal coalitions bears similarities both to the older Hebbian concept of a cell assembly and to the more recent concept of a dynamic core.

Crick and Koch (2003) offered a framework in which a central idea is that the much-criticized notion of an “inner homunculus” may in fact have neurobiological validity. In particular, they suggested that the “front” of the brain may be “looking at” sensory systems in the “back” of the brain, similar to earlier suggestions by Minsky (1986) and Ward (1992). They argued that such a homuncular arrangement would reflect a common intuition about the conscious self. They also suggested that unconscious processing may consist largely of feed-forward cortical waves, whereas consciousness may involve standing waves created by bidirectional signal flow. Lamme (2003) proposed a similar model, in which what he called the feedforward sweep is unconscious but all reentrant cortical processing is phenomenally conscious; in his model, attention selects certain phenomenally conscious contents for access to the global workspace.

Field models

The view that consciousness depends on global aspects of brain function is particularly prominent in so-called neural field theories (Kinsbourne 1988). E. Roy John has proposed that consciousness emerges from “global negative entropy”, which is a property of an “electrical field resonating in a critical mass of brain regions” (John 2001). John suggests that sensory stimuli can evoke localized synchronous neuronal activation causing deviations from a baseline brain state of maximum entropy. These deviations establish spatially distributed islands of local negative entropy which correspond to “fragments of sensation”. These fragments can be bound into a unified global percept via integrative interactions within thalamocortical loops.

Another type of field model was proposed separately by both Pockett (2000) and McFadden (2002). Both authors proposed that consciousness is supported by an electromagnetic (E-M) field generated by neural electrical activity. They asserted that the integrated nature of this field is responsible for the unitary aspect of conscious experience. Both authors also emphasized synchronous neural activity as the generator of the coherent conscious E-M field, whereas asynchronous neural activity was said to generate weak and incoherent E-M fields that are not experienced. McFadden suggested that E-M field effects on final motor pathways are necessary for consciousness, whereas Pockett proposed that such effects provide a mechanism by which E-M fields could influence their neural generators.

Neural field theories may account well for the unified nature of conscious experience, as well as for associated properties including seriality and metastability (see above). As yet, however, detailed descriptions of how such conscious neural fields are generated, and how they can be distinguished from other kinds of E-M fields, are lacking.

Subcortical models

The models described above emphasize to a greater or lesser extent the importance of cortical or thalamocortical activity in the generation of consciousness. An alternative view, originating with the neurosurgical experiments of Penfield and Jasper (1954), emphasizes instead subcortical neural activity, in particular the diencephalon and/or the mesencephalon, as the locus of necessary and sufficient neural activity for consciousness. This so-called “centrencephalic” proposal has been updated most recently by Merker (2007). Merker suggested that the superior colliculus, in addition to coordinating eye movements, shifting attention, and integrating sensory information across modalities, also generates an analog simulation of the sensory world that corresponds to primary (sensory) consciousness. In this theory, the mesencephalon, along with other subcortical areas (the zona incerta, ventral thalamus, and the ascending value systems) constitutes a bottleneck in the relation between cortical and motor processing that allows an organism to behave in an adaptive and integrated fashion in the face of an ever-changing environment. Under this theory, the highly developed cerebral cortex of modern mammals elaborates conscious contents, whereas the centrencephalic system provides the neural substrate for consciousness per se.

Ward (2011) described a similar approach to that of Merker, but Ward emphasized the diencephalon, consisting mostly of the thalamus, as the critical substrate of conscious experience. He acknowledged that the cortex computes most of the contents of experience (in mammals at least), but identified the higher-order nuclei of the dorsal thalamus in particular with enabling the conscious state itself. Ward brought together four lines of evidence for this view: first, we experience the results, not the processes, of cortical computations; second, the thalamus is implicated as a common brain locus of action among general anesthetics, and as a critical locus of damage leading to vegetative state; third, the anatomy and physiology of the thalamus ideally suit it to play the role of a dynamic blackboard for cortical computations; and fourth, its neurons are deeply implicated in neural synchronization, which has been identified as a neural correlate of conscious perception (see the Dynamic Core model). Thus Ward argued that whereas the cortex computes conscious contents, the thalamus (or particular higher-order nuclei in it) actually is the locus at which experiencing takes place via a dynamic core of synchronized neural activity: a thalamic dynamic core.

Internal simulation and self-modeling

High-level conceptual models can provide insights into the processes implemented by the neural mechanisms underlying consciousness, without necessarily specifying the mechanisms themselves. Several such models propose variations of the notion that consciousness arises from brain-based simulation of organism-environment interactions. These models illuminate in particular two fundamental aspects of phenomenology: the attribution of conscious experience to an experiencing ‘self’, and the first-person perspective that structures each conscious scene.

World simulation metaphor

Revonsuo (2006) draws on the technology of virtual reality to motivate his “world simulation metaphor”, according to which consciousness arises from full immersion in an internal simulation of organism-environment interaction: the world we experience is a world simulation in the brain. Revonsuo notes that such simulation is particularly evident during dreaming, where it may have adaptive utility in the risk-free simulation of threatening situations.

Retinoid model

Trehub (1991, 2007) has proposed a set of minimal neuronal specifications for a system of brain mechanisms that enables the brain to model the world from a privileged egocentric perspective, arguing that neuronal activity in this ‘retinoid structure’ constitutes the phenomenal content of consciousness and provides a sense of self. The retinoid model can be viewed as a neural implementation of Baars' global workspace with additional emphasis on perspectivalness (the unique spatiotemporal ‘origin’ of all of one's phenomenal experience), as emphasized by Revonsuo (2006) and Metzinger (below). According to Trehub, a phenomenal self-model (Metzinger's PSM) cannot exist without the prior existence of the ‘self locus’, a neuronal entity constituting the ‘core self’ which is the origin of egocentric space. In the model, an innate core self is an essential part of a larger cognitive brain system which enables (among other important functions) a PSM to be constructed and reshaped as we mature and engage with the world (Trehub, 2007).

Figure 2: Metzinger’s ‘phenomenal model of the intentionality relation’ (PMIR). A subject-component S (the ‘phenomenal self-model’ or PSM) is phenomenally represented as being directed at an “intentional object” O. This figure has been adapted from (Metzinger 2003).

Self-model theory of subjectivity

Metzinger (2003) has developed a series of conceptual models that focus on the notion of the self in conscious experience. According to his ‘self-model theory of subjectivity’, there are no such things as ‘selves’ in the world. Instead, there exist ‘phenomenal self-models’ (PSMs). A PSM is a self-model, an internal and dynamic representation of the organism, that cannot be recognized as a model by the system using it (i.e., it is transparent). The existence of a PSM allows a distinction to be drawn between environment-related signals and organism-related signals, which in turn allows the organism to model the intentional relation between subject (PSM) and world. Metzinger suggests that this form of modelling can have phenomenal content, in which case it is a ‘phenomenal model of the intentionality relation’ (PMIR; see Figure 2). In short, according to Metzinger, a core feature of human consciousness has to do with “continuously co-representing the representational relation itself”.

Other approaches

The internal self-modelling approach has also been explored in detail by Hesslow (2002), who argues that the simulation approach can explain the relations between sensory, motor, and cognitive functions and the appearance of an inner world, and by Grush (2004), who has proposed a functional architecture by which visual perception results from such internal simulation models being used to form expectations of, and to interpret, sensory input.
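
Grush's emulation theory draws explicitly on control-theoretic forward models, so its core loop can be sketched directly: an emulator predicts the sensory consequence of each motor command, and perception is the prediction corrected by noisy sensory input with a Kalman-style blend. The dynamics, noise level, and fixed gain below are assumptions for illustration, not Grush's own formal development.

    # Minimal forward-model ('emulator') sketch in the spirit of Grush (2004).
    # The one-dimensional dynamics, noise level, and fixed gain are invented.

    import random

    random.seed(2)
    true_pos, est_pos = 0.0, 0.0
    kalman_gain = 0.3                    # fixed blend weight (assumed, not derived)

    for step in range(10):
        command = 1.0                    # motor command: move one unit per step
        true_pos += command              # the world responds to the command
        predicted = est_pos + command    # the emulator predicts the sensory outcome
        sensed = true_pos + random.gauss(0.0, 0.5)     # noisy sensory signal
        est_pos = predicted + kalman_gain * (sensed - predicted)
        print(f"step {step}: true {true_pos:5.2f}, perceived {est_pos:5.2f}")

The perceived trajectory is smoother than the raw sensory signal because it leans on the internal model, which is the sense in which perception here results from simulation rather than from input alone.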


Sensorimotor theory

Another high-level approach is provided by the ‘sensorimotor theory’ of O’Regan and Noë (2001). These authors suggest that sensory experiences arise as a result of “cognitive access to the fact of currently exercising mastery of a sensorimotor skill”. This conceptual model shares with internal simulation models the intuition that consciousness arises from representation of organism-environment interaction; in this case, what is represented is the structure of the rules governing the sensory changes induced by different motor actions, a ‘sensorimotor contingency’. According to the sensorimotor theory, the phenomenal differences between the experiences of seeing and hearing can be accounted for by the different things we do when we see and when we hear. The sensorimotor contingency theory as expressed by O’Regan and Noë (2001) challenges the notion of qualia as a purely experiential phenomenon. It is suggested instead that sensation consists in the exercise of a sensorimotor skill: a conscious experience is ‘something that is done’.
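
A toy operational reading of a sensorimotor contingency is an agent that records how each motor action transforms its sensory input, thereby acquiring the lawful action-sensation structure the theory appeals to. The one-dimensional world, sensor, and actions below are invented for illustration.

    # Toy reading of a 'sensorimotor contingency': an agent records how each
    # action changes its sensory input. World, sensor, and actions are invented.

    world = [0, 1, 2, 3, 4, 5, 6, 7]          # a simple sensory gradient
    pos = 3
    contingencies = {}                         # action -> observed sensory changes

    for action in ("left", "right", "left", "right", "right"):
        before = world[pos]
        pos = max(0, pos - 1) if action == "left" else min(len(world) - 1, pos + 1)
        contingencies.setdefault(action, []).append(world[pos] - before)

    # The acquired 'law': moving right increases the signal, left decreases it.
    for action, deltas in contingencies.items():
        print(f"{action}: sensory changes {deltas}")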

Other cognitive models

This final section summarizes a variety of models that share with global workspace models a cognitive perspective on the mechanisms of consciousness.

Supramodular interaction theory

Morsella (2005) contrasts the task demands of consciously penetrable processes (e.g., pain, conflicting urges, and the delay of gratification) with those of consciously impenetrable processes (e.g., intersensory conflicts, peristalsis, and the pupillary reflex). The resulting Supramodular Interaction Theory (SIT) specifies which kinds of information integration require conscious processing and which kinds do not (e.g., some intersensory processes). SIT proposes that conscious processes are required to integrate high-level systems in the brain that are vying specifically for skeletomotor control. For example, regarding processes such as digestion and excretion, conscious scenes are associated only with those phases that require coordination with skeletomotor plans (e.g., chewing or micturating) and with none of those that do not (e.g., peristalsis). From this standpoint, consciousness is associated with ‘cross-talk’ among high-level, specialized, and often multimodal, systems.

Multilevel feedback

Haikonen (2003) has proposed a cognitive model in which consciousness is associated with feedback at multiple levels of representation; unconscious processes involve direct links between sensors and actuators.

Intermediate level theory

According to intermediate level theory (Jackendoff 1987; Prinz 2005), representations that are either too weak or too strong do not enter consciousness. Intuitively, representations that are too strong correspond to inflexible, overlearned processes; those that are too weak are unable to generate sufficient activity for consciousness. Jackendoff draws analogies in several domains. For example, in speech perception we are conscious of word forms but not of acoustics (too low a level), nor of syntactic or grammatical rules (too high a level).

Radical plasticity thesis

This theory, due to Cleeremans (2008), is similar to a connectionist version of intermediate level theory. According to Cleeremans, conscious experience occurs only when an information-processing system has learned about its own representations of the world. Learning increases the ‘quality’ of a representation in terms of its stability, strength, and distinctiveness, which in turn enhances its availability to consciousness. Overlearned representations, however, lose quality by becoming automatic. The theory has resonances with both Karmiloff-Smith’s concept of ‘representational redescription’ and Lau’s ‘signal detection on the mind’.

CODAM

According to Taylor (2002), attention is the crucial gateway to consciousness. Taylor's CODAM (Corollary Discharge of Attention Movement) model uses an engineering control approach to model sensory attention, allowing the brain processes involved in attentional control to be coordinated into a set of functional modules. The model suggests that the efference copy, or corollary discharge, of the attention-movement control signal creates the experience of ownership in relation to incoming stimulus activity from lower brain centers.
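
A generic sketch of the corollary-discharge idea: a copy of the attention-movement control signal arrives ahead of the sensory input and sets a gain pattern that amplifies the expected target and suppresses distractors. The locations, values, and gating rule below are invented; this is not Taylor's set of modules.

    # Generic corollary-discharge sketch in the spirit of CODAM (Taylor 2002).
    # Locations, strengths, and the gating rule are invented for illustration.

    stimuli = {"left": 0.9, "right": 0.4}      # input strength at two locations

    def attend(target):
        # The corollary discharge of the 'move attention to target' command is
        # available early, before the stimulus itself is fully processed.
        efference_copy = target
        gains = {loc: (1.5 if loc == efference_copy else 0.5) for loc in stimuli}
        return {loc: s * gains[loc] for loc, s in stimuli.items()}

    print(attend("left"))                      # left boosted, right suppressed
    print(attend("right"))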

CogAff

Sloman and Chrisley (2003) observe that consciousness comes in many different varieties, and they propose a schema for designing computational architectures that reflect this diversity. Their “CogAff” (cognitive and affective) computational schema includes multiple layers for reactive, deliberative, and meta-management processes.

Consciousness as Attention to Memories

Izhikevich (2006) suggested that consciousness can be modelled as attention to memories. In this scheme, memories correspond to the existence of polychronous groups, i.e., sets of neurons that can fire in time-locked but not synchronous spiking patterns. Coordinated co-activation of groups representing an object in a sensory stream corresponds to attention being directed to the object. When such groups co-activate spontaneously, e.g., due to input from groups representing other objects rather than sensory input, attention is directed to the memory of the object. This proposal is related to the neural Darwinism theory of Edelman (1987).
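
A toy example shows what 'time-locked but not synchronous' means: with suitable axonal delays, spikes injected at staggered times converge simultaneously on downstream coincidence detectors and ignite the whole group in a reproducible pattern, whereas the same spikes delivered synchronously do not. The connectivity, delays, and two-coincidence firing rule below are invented, far simpler than Izhikevich's spiking model.

    # Toy polychronous group in the spirit of Izhikevich (2006). All values
    # below (connectivity, delays, firing rule) are invented for illustration.

    from collections import defaultdict

    delays = {(0, 2): 1, (1, 2): 5, (0, 3): 3, (2, 3): 2}   # (src, tgt): delay, ms

    def run(stimulus):
        # stimulus maps neuron -> externally imposed spike time (ms). A neuron
        # fires when stimulated, or when two spikes arrive in the same millisecond.
        arrivals = defaultdict(lambda: defaultdict(int))    # neuron -> time -> count
        fired = {}
        for t in range(20):
            for n in set(stimulus) | set(arrivals):
                if n in fired:
                    continue
                if stimulus.get(n) == t or arrivals[n][t] >= 2:
                    fired[n] = t
                    for (src, tgt), d in delays.items():
                        if src == n:
                            arrivals[tgt][t + d] += 1
        return fired

    print("timed input      :", run({1: 0, 0: 4}))   # -> {1: 0, 0: 4, 2: 5, 3: 7}
    print("synchronous input:", run({0: 0, 1: 0}))   # coincidences fail; no ignition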

Summary

Because consciousness is a rich biological phenomenon, it is likely that a satisfactory scientific theory of consciousness will require the specification of detailed mechanistic models. The models of consciousness surveyed in this article vary in terms of their level of abstraction as well as in the aspects of phenomenal experience that they are proposed to explain. At present, however, no single model of consciousness appears sufficient to account fully for the multidimensional properties of conscious experience. Moreover, although some of these models have gained prominence, none has yet been accepted as definitive, or even as a foundation upon which to build a definitive model.

References

  • Baars, B. J. 1988 A cognitive theory of consciousness. New York, NY: Cambridge University Press.
  • Cleeremans, A. 2008 Consciousness: the radical plasticity thesis. Progress in Brain Research 168, 19-33.
  • Crick, F. & Koch, C. 1990 Towards a neurobiological theory of consciousness. Seminars in the Neurosciences 2, 263-275.
  • Crick, F. & Koch, C. 2003 A framework for consciousness. Nature Neuroscience 6, 119-126.
  • Dehaene, S., Sergent, C. & Changeux, J. P. 2003 A neuronal network model linking subjective reports and objective physiological data during conscious perception. Proc Natl Acad Sci U S A 100, 8520-5.
  • Dennett, D.C. 1991 Consciousness Explained. Boston: Little Brown.
  • Dennett, D.C. 2001 Are we explaining consciousness yet? Cognition 79, 221-237.
  • Edelman, G. M. 1987 Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books, Inc.
  • Edelman, G. M. 1989 The remembered present. New York, NY: Basic Books.
  • Edelman, G. M. 2003 Naturalizing consciousness: a theoretical framework. Proc Natl Acad Sci U S A 100, 5520-4.
  • Edelman, G. M. & Tononi, G. 2000 A universe of consciousness : how matter becomes imagination. New York, NY: Basic Books.
  • Franklin, S. & Graesser, A. 1999 A software agent model of consciousness. Conscious Cogn 8, 285-301.
  • Grush, R. 2004 The emulation theory of representation: motor control, imagery, and perception. Behav Brain Sci 27, 377-96; discussion 396-442.
  • Haikonen, P. M. 2003 The cognitive approach to machine consciousness. Exeter, UK: Imprint Academic.
  • Hesslow, G. 2002 Conscious thought as simulation of behaviour and perception. Trends Cogn Sci 6, 242-247.
  • Izhikevich, E.M. 2006 Polychronization: computation with spikes. Neural Computation 18, 245-282.
  • Jackendoff, R. 1987 Consciousness and the Computational Mind. Cambridge, MA: MIT Press.
  • John, E. R. 2001 A field theory of consciousness. Conscious Cogn 10, 184-213.
  • Kinsbourne, M. 1988. Integrated field theory of consciousness. In A.J. Marcel & E. Bisiach, (Eds.), Consciousness in Contemporary Science (pp. 239-256). New York: Oxford University Press.
  • Lamme, V.A.F. 2003. Why visual attention and awareness are different. Trends in Cognitive Sciences, 7, 12-18
  • Llinás, R., Ribary, U., Contreras, D. & Pedroarena, C. 1998. The neuronal basis for consciousness. Philosophical Transactions of the Royal Society of London, Series B, 353, 1841-1849.
  • McFadden, J. 2002. Synchronous firing and its influence on the brain’s magnetic field. Journal of Consciousness Studies, 9, 23-50.
  • Merker, B. 2007. Consciousness without a cerebral cortex: A challenge for neuroscience and medicine. Behavioral and Brain Sciences, 30(1):63-81
  • Metzinger, T. 2003 Being No-One. Cambridge, MA: MIT Press.
  • Minsky, M. 1986. The Society of Mind. New York: Simon and Schuster.
  • Morsella, E. 2005. The function of phenomenal states. Psychological Review 112(4):1000-21
  • O'Regan, J. K. & Noe, A. 2001 A sensorimotor account of vision and visual consciousness. Behav Brain Sci 24, 939-73; discussion 973-1031.
  • Penfield, W. & Jasper, H.H. 1954. Epilepsy and the functional autonomy of the human brain. Boston: Little, Brown & Co.
  • Pockett, S. 2000. The Nature of Consciousness: A Hypothesis. San Jose: Writers Club Press.
  • Prinz, J. 2005 A neurofunctional theory of consciousness. In A. Brook and K. Akins (Eds.), Cognition and the Brain: Philosophy and Neuroscience Movement (pp. 381-396). Cambridge University Press.
  • Revonsuo, A. 2006 Inner Presence: Consciousness as a Biological Phenomenon. Cambridge, MA: MIT Press.
  • Rosenthal, D. 2005 Consciousness and Mind. Oxford, Clarendon.
  • Seth, A.K. 2009 Explanatory correlates of consciousness: Theoretical and computational challenges. Cognitive Computation 1(1):50-63.
  • Seth, A. K. & Baars, B. J. 2005 Neural Darwinism and consciousness. Consciousness and Cognition 14, 140-168.
  • Seth, A. K., Baars, B. J. & Edelman, D. B. 2005 Criteria for consciousness in humans and other mammals. Consciousness and Cognition 14, 119-139.
  • Seth, A. K., Izhikevich, E., Reeke, G. N. & Edelman, G. M. 2006 Theories and measures of consciousness: An extended framework. Proc Natl Acad Sci U S A 103, 10799-10804.
  • Shanahan, M. 2005 A cognitive architecture that combines internal simulation with a global workspace. Conscious Cogn. 15(2):433-49
  • Shanahan, M. 2008 A spiking neuron model of cortical broadcast and competition. Conscious Cogn. 17(1):288-303.
  • Sloman, A. & Chrisley, R. 2003 Virtual machines and consciousness. Journal of Consciousness Studies 10, 133-72.
  • Sporns, O., Tononi, G. & Edelman, G. M. 2000 Theoretical neuroanatomy: Relating anatomical and functional connectivity in graphs and cortical connection matrices. Cerebral Cortex 10, 127-141.
  • Taylor, J. G. 2002 Consciousness: Theories of. In Handbook of Brain Theory and Neural Computation (ed. M. A. Arbib). Cambridge, MA: MIT Press.
  • Tononi, G. 2004 An information integration theory of consciousness. BMC Neurosci 5, 42.
  • Tononi, G. 2008 Consciousness as integrated information: A provisional manifesto. Biological Bulletin 215, 216-42.
  • Tononi, G. & Edelman, G. M. 1998 Consciousness and complexity. Science 282, 1846-51.
  • Tononi, G. & Sporns, O. 2003 Measuring information integration. BMC Neurosci 4, 31.
  • Tononi, G., Sporns, O. & Edelman, G. M. 1994 A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc Natl Acad Sci U S A 91, 5033-7.
  • Trehub, A. (1991). The Cognitive Brain. MIT Press.
  • Trehub, A. 2007. Space, self, and the theater of consciousness. Consciousness and Cognition, 16, 310-330.
  • Wallace, R. 2005 Consciousness: A Mathematical Treatment of the Neuronal Global Workspace Model. New York, NY: Springer.
  • Ward, L.M. 1992. Mind in psychophysics. In D. Algom (Ed.), Psychophysical Approaches to Cognition (pp. 187-249). New York: North-Holland.
  • Ward, L.M. 2011. The thalamic dynamic core theory of conscious experience. Consciousness and Cognition, 20, 464-486.

See also

Complexity, Consciousness, Consciousness from Attention, Information integration theory, Metastability in the Brain, Models of Attention, Multiple drafts model, Mutual Information, Neural Correlates of Consciousness, Neural Darwinism, Neural Fields, Thalamocortical Oscillations


Licensed under CC BY-SA 3.0 | Source: http://www.scholarpedia.org/article/Models_of_consciousness