Perceived Location of Touch
Perceiving the location of touch on our skin is a surprisingly complex process. Signals from cutaneous receptors in the skin are transmitted by afferent neurons, via the thalamus, to primary somatosensory cortex (S1). Given noise both at the level of skin receptors and in neural transmission, the brain needs to decide whether a stimulus was presented and where that stimulus is located. S1 is organized as a topographic map in which adjacent locations on the skin are represented next to one another. These primary representations have been shown to be plastic, such that a neuron in S1 may represent touch at one location on the skin surface at one point in time, but represent touch at a different location after cortical or peripheral changes. Therefore, further processing is required to relate activity in these primary representations to a sensation at a specific position on the skin surface. This is achieved by integration with higher-order body representations. Finally, touch needs to be localized not only to a position on the skin surface, but also in reference frames relative to the body and the external environment. The location of touch is therefore maintained in a number of different representations, each with its own reference frame. We will present evidence on the different stages of processing and representation that are involved in localizing touch.
There are two major hypotheses regarding conscious detection of touch and its localization. The first hypothesis holds that detection and localization are completely dissociable processes. If this were the case, one would expect to find individuals who can detect but not localize touch, along with individuals who can localize but not detect touch. A number of individuals have been reported with intact tactile detection but poor tactile localization (Anema et al. 2009; Halligan et al. 1995; Paillard, Michel & Stelmach, 1983). Evidence for the second arm of this double dissociation comes from a disorder known as numbsense, in which a touch that cannot be detected can nevertheless be accurately localized. The first case was documented in 1983 in a patient with left parietal damage, who was apparently unable to feel touch on her right hand (Paillard, Michel & Stelmach, 1983). In an experiment designed to examine her localization abilities, she was instructed to point to where she was touched on her right hand, even if she could not feel it. Her responses were broadly accurate, as she was able to "point approximately to the locus of stimulation". This observation was taken as evidence for a dissociation between tactile detection and localization.
However, there are a number of concerns regarding whether this evidence provides strong support for the hypothesis that tactile detection and localization are separable processes. The primary concern is the criterion used for a correct response in detection versus localization tasks. First, the claim of intact localization depends strongly on what is coded as a "correct" localization response. If being within 20 mm of the target location is used as the criterion, the patient in Paillard's study was correct on only 26% of localization trials, in contrast to 90% for a healthy control participant. Furthermore, the criterion for detecting touch can also vary. When presented with a near-threshold stimulus, participants need to decide whether a diminished sensation was caused by a tactile stimulus. If a participant only reports touch that has an intensity and quality similar to touch presented to the intact hand, they may not report feeling the touch at all. Consistent with this, Paillard reported that the participant described static pressure on her right hand as an "event" in later testing sessions, in contrast to earlier sessions in which she reported feeling no touch. If such "events" were felt but not reported in earlier sessions, it is possible that she could both detect and localize touch. Overall, it remains unclear whether the numbsense of Paillard's patient is evidence for a detection/localization dissociation, or whether it can be explained by a different mechanism (see figure 2; see also Signal to noise ratio in neuroscience).
In opposition to the hypothesis that tactile detection and localization are separable, a second, serial hypothesis has been proposed (see Harris, Thein & Clifford, 2004). On this serial hypothesis, participants first detect whether a tactile stimulus has been presented, and then localize that stimulus; because the process is serial, detection is necessary for tactile localization. If this is the case, however, how could numbsense-like performance be observed? This was examined by instructing healthy individuals to point to near-threshold touch on the fingers (Harris, 2006). Participants were presented with a brief tactile stimulus, followed by a backward mask, making tactile detection difficult. Detection was measured using a yes-no method, whereas localization was measured using a forced-choice method in which participants had to report which finger was touched. Participants were able to localize stimuli that they apparently did not detect. This finding, similar to numbsense, appeared to be evidence for parallel processing. However, Harris and colleagues noted an important difference between the detection and localization tasks. Tactile detection involves adopting an arbitrary detection criterion for the yes-no task, such that a touch may have been detected but not declared as detected; no such criterion is involved in the forced-choice localization task. A conservative threshold for a "yes" response could result in participants not reporting touch even though they have sufficient information to detect, and even localize, the touch. The experiment was therefore repeated using a forced-choice task for both tactile detection and localization, removing the issue of different detection criteria across tasks. When both tasks were forced-choice, participants could not localize stimuli that they did not detect. These results suggest that the dissociations observed in individuals with numbsense may be due to different criteria being used in localization versus detection tasks (Medina & Coslett, 2016). Future studies of potential numbsense will need larger sample sizes and equivalent reporting methods to allow comparison between studies.
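To make the criterion argument concrete, the following is a minimal signal-detection simulation (our own illustration, not code from the cited studies); the signal strength, criterion value, and two-finger setup are assumptions chosen for clarity. It shows how a conservative yes-no criterion alone can yield above-chance forced-choice localization on trials reported as "not detected".

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 100_000
signal_strength = 1.0   # assumed mean evidence added by the touch (in noise SD units)
criterion = 1.5         # assumed conservative "yes" criterion for the yes-no task

# On each trial one of two fingers is touched; each finger channel carries
# Gaussian noise, with extra evidence on the touched finger.
touched = rng.integers(0, 2, n_trials)
evidence = rng.normal(0.0, 1.0, (n_trials, 2))
evidence[np.arange(n_trials), touched] += signal_strength

# Yes-no detection: report "yes" only if the strongest evidence exceeds the criterion.
detected = evidence.max(axis=1) > criterion

# Forced-choice localization: always pick the finger with more evidence,
# whether or not the touch was reported as detected.
localized_correctly = evidence.argmax(axis=1) == touched

print(f"reported 'yes' on {detected.mean():.0%} of trials")
print(f"localization accuracy on 'detected' trials: {localized_correctly[detected].mean():.2f}")
print(f"localization accuracy on 'missed' trials:   {localized_correctly[~detected].mean():.2f}  (chance = 0.50)")
```

In this sketch, localization on "missed" trials exceeds chance simply because the yes-no criterion is stricter than the forced-choice decision rule, not because localization bypasses detection.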
After the skin surface is stimulated, activity travels through the thalamus to primary somatosensory cortex (S1). Somatosensory cortex is organized topographically, such that (with a few exceptions) adjacent locations on the body are represented in neighboring locations on the map (see figure 3). Although the map is topographic, the relationship between the size of a region of skin and the size of its cortical representation is not uniform across the body. These non-uniformities can arise through regional differences in the density of sensory innervation or in limb usage.
Somatosensory cortex is also plastic. In non-human primates, reorganization of the somatosensory cortex after amputation, skin island transfers, and other interventions is well studied (Merzenich & Jenkins, 1993). For example, when the third digit is amputated, the representations of the palm and adjacent digits expand into the vacated territory, so that the second and fourth digits come to share a border in the cortex. Intensive stimulation of the skin surface also results in cortical reorganization, and the spatial and temporal properties of the stimulation determine how S1 is reorganized. When the fingers of monkeys are stimulated simultaneously for a prolonged period, the finger representations move closer together, whereas sequential stimulation moves them apart (Wang et al. 1995). In humans, synchronous stimulation of the fingers also results in changes in S1. Braun and colleagues (Braun et al. 2000) touched participants simultaneously on the first and fifth digits for one hour a day, for a total of twenty hours. Afterwards, near-threshold touch on either finger was misattributed to the other finger at a much higher rate than before the training. Thus, increased usage not only induces topographic changes in S1, but also changes the perceived location of touch. Touch arising from self-generated movement can have similar effects: experienced piano players have much better two-point discrimination thresholds on the fingertips than non-musicians (Ragert et al. 2004), and their tactile acuity on the fingers shows a dose-dependent relationship with hours of practice.
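A correlation-based (Hebbian) account is often invoked to explain why synchronous co-stimulation merges finger representations. The sketch below is a toy illustration of that idea (not a model from the cited studies); the number of units, learning rate, and weight normalization are all assumptions, and the sketch captures only the merging under synchronous input versus continued segregation under alternating input.

```python
import numpy as np

rng = np.random.default_rng(1)
n_cortical = 20   # assumed number of cortical units
lr = 0.05         # assumed learning rate

# Two input channels: digit 1 and digit 5. Cortical units start with largely
# segregated receptive fields (half prefer each digit) plus a little noise.
W0 = np.zeros((n_cortical, 2))
W0[:n_cortical // 2, 0] = 1.0
W0[n_cortical // 2:, 1] = 1.0
W0 += 0.05 * rng.random(W0.shape)

def train(W, synchronous, n_steps=500):
    W = W.copy()
    for t in range(n_steps):
        # Either both digits are stimulated together, or they alternate.
        x = np.array([1.0, 1.0]) if synchronous else np.eye(2)[t % 2]
        y = W @ x                                       # cortical response
        W += lr * np.outer(y, x)                        # Hebbian update
        W /= np.linalg.norm(W, axis=1, keepdims=True)   # keep weights bounded
    return W

for label, sync in [("synchronous", True), ("sequential", False)]:
    Wt = train(W0, sync)
    # Overlap: how strongly the same units respond to both digits. High overlap
    # means touch on one digit is more easily misattributed to the other.
    overlap = float(np.mean(Wt[:, 0] * Wt[:, 1]))
    print(f"{label:12s} stimulation -> mean receptive-field overlap: {overlap:.2f}")
```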
Studies of cortical plasticity demonstrate extensive changes in S1 topography. However, less research has examined how plasticity-related changes in S1 topography relate to changes in perception. Given that S1 is plastic, the relationship between activity in a specific region of S1 and the perception of touch at a particular location on the skin surface cannot be fixed, with one set of neurons always representing touch at one specific location. There must be further processing that takes information from somatosensory regions and interprets it, such that a conscious percept of touch location emerges. Very little is known about exactly how the brain interprets somatosensory activity as a particular tactile sensation; some initial evidence comes from individuals with brain damage due to stroke.
Individuals who have had strokes affecting somatosensory regions often report reduced sensitivity to touch along with biases in tactile localization. For example, stroke patients often make localization errors in which tactile stimuli are localized towards the center of the hand (Rapp, Hendel & Medina, 2002). Interestingly, healthy individuals show similar "central" biases when presented with near-threshold tactile stimuli; for example, weak touch on the forearm is mislocalized toward its middle (Steenbergen et al. 2014). Why would individuals with somatosensory damage presented with suprathreshold stimuli, and neurologically intact individuals presented with near-threshold stimuli, both show such a central tendency? General models of spatial bias under uncertainty could explain these tactile localization biases. Huttenlocher and colleagues proposed the category adjustment model (Huttenlocher et al. 1991) to explain biases in spatial memory: remembered locations are biased towards the middle of a categorical space and away from category boundaries, producing a central error, and this central error increases as a function of uncertainty. In both cases (suprathreshold touch for brain-damaged individuals, and near-threshold touch for neurologically intact individuals), the somatosensory information is noisy and uncertain. One possibility is that, in interpreting information from somatosensory regions, the brain uses similar heuristics to interpret noisy activation as touch at a particular location.
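In its simplest form, the category adjustment account amounts to a weighted combination of the noisy sensed location with a category prototype, with the prototype weighted more heavily as sensory uncertainty grows. Below is a minimal sketch of that computation; the forearm length, prototype location, and noise values are illustrative assumptions, not parameters from the cited studies.

```python
def category_adjustment(sensed_mm, prototype_mm=100.0,
                        sigma_sensory=5.0, sigma_category=40.0):
    """Weighted combination of a noisy sensed location with a category prototype.

    The prototype's weight grows with sensory uncertainty, so noisier input
    is pulled more strongly toward the centre of the category.
    """
    w_prototype = sigma_sensory**2 / (sigma_sensory**2 + sigma_category**2)
    return w_prototype * prototype_mm + (1 - w_prototype) * sensed_mm

# A touch sensed 30 mm from the wrist on a 200 mm forearm whose categorical
# prototype is its centre (100 mm). More sensory noise -> larger central bias.
for sigma in (5.0, 20.0, 40.0):
    reported = category_adjustment(30.0, sigma_sensory=sigma)
    print(f"sigma_sensory = {sigma:4.1f} mm -> reported location {reported:5.1f} mm")
```

On this reading, both damaged somatosensory cortex (with suprathreshold touch) and near-threshold stimulation (in intact observers) simply correspond to a larger sensory noise term, and hence a larger pull toward the middle.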
Representations in S1 are distorted. However, our everyday experience is not consistent with having a distorted body: our fingers do not feel larger than they are, even though they have a larger cortical representation than our back. Therefore, information from the "distorted" representation in S1 needs to be mapped onto a veridical representation of the skin surface. The majority view is that there are multiple representations of the body that are integrated with the output of S1 (Schwoebel & Coslett, 2005); on this view, higher-order body representations serve to link S1 to a representation of locations on the skin surface. One way to examine the relationship between tactile localization and these higher-order representations is by manipulating perceived body size.
The Pinocchio illusion provides a key piece of evidence supporting the existence of these representations (Lackner, 1988). In this illusion the participant holds their nose with one hand. The biceps brachii of that arm is vibrated; this is known to engage muscle receptors, creating the illusion that the arm is extending. Given that the hand is touching the nose, and that the hand is perceived as moving away from the body, the individual often experiences the nose as growing longer. One explanation is that the brain must reconcile conflicting information (the arm is moving forward, yet the hand is still touching the nose), and it does so by altering a higher-order representation of body size and shape, resulting in the perceived elongation of the nose. If these higher-order representations of body size and shape change, how does this affect tactile localization? That is, if someone perceives a body part to be longer or shorter, do they perceive concomitant changes in touch? Or are changes in perceived body size and shape separate from changes in tactile localization? In a variant of the Pinocchio illusion, one finger is grasped by the other hand and the elbow flexors of the grasping arm are vibrated, creating the illusion that the finger is elongating. Using this illusion, de Vignemont and colleagues (de Vignemont et al. 2005) had participants judge the distance between two tactile stimuli presented on either the forehead (a reference judgment) or the finger during the illusion. Tactile distances were perceived to be longer on the finger, providing evidence that changes in perceived body size lead to changes in the perception of tactile distance. Visual information can also distort higher-order representations of body size. To test whether such distortions influence touch perception, an experiment was devised using distorted views of the body (Taylor-Clarke, Jacobsen & Haggard, 2004). Participants viewed their hand as reduced and their forearm as magnified for one hour, after which touch perception was tested. Perceived tactile distance was reduced on the finger and increased on the forearm. In conjunction with the vibration results, these observations suggest that changes in perceived body size result in changes in perceived extent on the skin.
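Both results are qualitatively consistent with a simple proportional rescaling, in which perceived tactile distance scales with the ratio of perceived to actual body-part size. The following toy sketch makes that relationship explicit; it is our illustration, not a model proposed in the cited studies, and the millimetre values are arbitrary.

```python
def perceived_tactile_distance(physical_mm, perceived_part_mm, actual_part_mm):
    """Toy rescaling: perceived tactile distance scales with perceived / actual part size."""
    return physical_mm * (perceived_part_mm / actual_part_mm)

# Finger felt as elongated (vibration illusion): tactile distances expand.
print(perceived_tactile_distance(20.0, perceived_part_mm=100.0, actual_part_mm=80.0))  # 25.0 mm
# Finger viewed as reduced (distorted visual feedback): tactile distances shrink.
print(perceived_tactile_distance(20.0, perceived_part_mm=60.0, actual_part_mm=80.0))   # 15.0 mm
```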
The studies reviewed so far have focused primarily on localizing touch to a location on the skin surface, usually referred to as a somatotopic frame of reference. In this frame of reference, the location of touch is represented relative to the skin surface itself, such that stimulation of the right index fingertip would yield the same response regardless of where that finger is in external space. However, further processing is required to account for stretch and movement of the skin during muscle contractions, movement of the limbs with respect to the body, and movement of the body relative to the outside world. To act on the external world, one needs to know where a tactile stimulus is relative to a number of different frames of reference beyond location on the skin surface. Generally speaking, these are called "external" frames of reference, and they are available in parallel with somatotopic reference frames for processing touch location. The spatial co-ordinates of these external reference frames can have different origins. A number of non-human primate studies have demonstrated that the body is encoded in various external frames of reference, including eye-centered reference frames for reach planning (Batista et al. 1999) and body-centered reference frames for limb movement (Lacquaniti et al. 1995; see Colby, 1998 for a review). Studies of neurologically intact and brain-damaged individuals provide evidence for coding tactile location in various external reference frames.
Medina and Rapp (Medina & Rapp, 2008) reported an individual with left fronto-parietal damage who experienced tactile synchiria, a condition in which stimulation of the ipsilesional hand results in sensation on both hands. These phantom contralesional percepts following ipsilesional stimulation were highly localizable, more so than actual stimulation of the contralesional hand itself. If the lesion only affected somatotopic representations, then moving the hand into contralesional space should not change the rate of phantom percepts. In contrast, if the lesion influenced external representations, then moving the hand should modulate the effect. They found that changing hand position did change the rate of phantom percepts: more phantoms were observed when the hands were in contralesional space (versus ipsilesional space), in both head- and trunk-centered reference frames (see also Bartolomeo et al. 2004; Moro, Zampini & Aglioti, 2004, for similar findings in tactile extinction).
A number of behavioural studies provide evidence for how tactile information might be encoded in both somatotopic and external reference frames (see also evidence from tactile temporal order judgments). Azañón & Soto-Faraco (Azañón & Soto-Faraco, 2008) asked participants with crossed arms to judge the location of a visual stimulus preceded by a tactile cue. They found a crossmodal cueing effect, but interestingly, when the interstimulus interval (ISI) was less than 100 msec the cueing effect was significant for visual stimuli on the side opposite the cued hand's external location (i.e., congruent with its anatomical side). In contrast, for ISIs over 200 msec, there was a significant cueing effect when cue and visual stimulus were on the same external side. These results could be explained by a "tactile remapping" process, in which tactile location is first represented in a somatotopic reference frame and then in an external frame (see Yamamoto & Kitazawa, 2001). A second hypothesis, however, is that the transformation between reference-frame co-ordinates occurs rapidly and is followed by integration of spatial information across frames. Brandes & Heed (Brandes & Heed 2015) measured reach trajectories to targets on uncrossed or crossed feet, with the target stimulus presented after the reach was initiated. Reaches to visual targets were deflected towards the correct foot at 138 msec regardless of foot posture. Reaches to tactile targets on uncrossed feet were redirected at 158 msec; this additional 20 msec likely reflects remapping from somatotopic to external co-ordinates. When the feet were crossed, putting the somatotopic and external reference frames in conflict, the deflection was delayed to over 200 msec. This additional delay can be attributed to a spatial integration process, in which enough evidence for the target location has to accumulate before it can guide the reach. Although participants were generally accurate, they occasionally initiated a movement in the wrong direction. These observations are consistent with rapid co-ordinate transformation making the reference frames concurrently available for the integration of spatial information, with each reference frame weighted individually when determining touch location (see Badde & Heed, 2016 for an in-depth discussion of this hypothesis).
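One way to picture this weighted-integration hypothesis is as a combination of probability distributions over candidate locations, one per reference frame. The sketch below is a minimal illustration under assumed weights; the weight values and two-location setup are our assumptions, not parameters from Badde & Heed (2016). When the frames conflict, the combined distribution is less peaked, consistent with slower redirection of the reach and occasional initial errors.

```python
import numpy as np

def integrate_frames(p_somatotopic, p_external, w_somatotopic=0.6, w_external=0.4):
    """Weighted integration of location evidence from two reference frames.

    Each argument is a probability distribution over candidate locations
    (here: left foot, right foot). The weights are free parameters of this sketch.
    """
    combined = w_somatotopic * np.asarray(p_somatotopic) + w_external * np.asarray(p_external)
    return combined / combined.sum()

# Uncrossed posture: both frames point to the right foot -> a sharply peaked estimate.
print(integrate_frames([0.1, 0.9], [0.1, 0.9]))   # [0.1, 0.9]

# Crossed posture: the frames conflict -> a flatter estimate, so more evidence must
# accumulate before the reach can be redirected, and errors become more likely.
print(integrate_frames([0.1, 0.9], [0.9, 0.1]))   # [0.42, 0.58]
```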
There are no reports (that we are aware of) demonstrating that changes of hand position in head- and trunk-centered reference frames create errors in tactile localization (e.g. the participant is touched on the index finger but feels it on the middle finger when the hand is in a different hemispace). However, there is evidence for changes in perceived touch location when finger position is changed relative to the hand. Coslett (Coslett, 1998) reported a brain-damaged individual who made more tactile localization errors on the contralesional hand when the fingers were spread apart versus close together. Haggard and colleagues (Haggard et al. 2006) designed a study in which participants had to verbally identify the finger or the hand that was touched. The hands were positioned either one above the other (postural condition) or with interlaced fingers (spatial condition). The postural condition did not influence finger or hand identification. In contrast, interlacing the fingers impaired the ability to name the hand that was touched, but not the finger that was touched. Since neither manipulation influenced finger identification, it would seem that finger identification accesses only a somatotopic representation, whereas hand identification also relies on an external representation. However, other studies find evidence that finger identification does refer to external space, albeit to a lesser degree than hand identification (Riemer et al. 2010), and a study using temporal order judgements confirms that the fingers are localized in external co-ordinates (Heed, Backhaus & Roder, 2012). Thus hand and finger identification tasks may use the same representations but rely on them to different degrees. A related study by Overvliet and colleagues (Overvliet et al. 2011) found that performance on a forced-choice tactile localization task was best when the fingers were far apart, as opposed to close together or interlaced. This provides some evidence that external spatial representations, perhaps in a hand-centered reference frame, influence tactile localization on the skin surface.
A number of challenges remain in understanding how touch is localized. The different representations of the body need to be explored further, as do their locations in the brain. For instance, it is not known whether localization errors in relatively simple tasks, such as pointing to landmarks on the hand, result from distorted body representations (Longo & Haggard, 2010) or from other central biases. If touch localization incorporates expectations, what are these prior expectations and where are their neural representations? Is touch localization optimised to minimise error (the mean), to select the most probable location (the mode), or for the goal of the task of which it is a part? Furthermore, how do multisensory inputs influence touch localization? Although localizing touch seems like a simple task, it is a complex process with many open questions.
Currently we can use relatively non-invasive techniques to induce rapid and robust distortions of body representations, but long-lasting changes have not been observed. Treatments are needed for patients with impaired touch localization and other sensorimotor functions; more than forty disorders of body representation have been described (de Vignemont, 2010). Tactile illusions that change the size, shape, location, or ownership of a body part could have therapeutic potential for these disorders.
This material is based upon work supported by the National Science Foundation under grant no. 1632849.