| Dichotic listening | |
|---|---|
| Synonyms | Dichotic listening test |
| Purpose | Used to investigate auditory laterality and selective attention |
Dichotic listening is a psychological test commonly used to investigate selective attention and the lateralization of brain function within the auditory system. It is used within the fields of cognitive psychology and neuroscience.
In a standard dichotic listening test, a participant is presented with two different auditory stimuli simultaneously (usually speech), directed into different ears over headphones.[1] In one type of test, participants are asked to pay attention to one or both of the stimuli; later, they are asked about the content of either the stimulus they were instructed to attend to or the stimulus they were instructed to ignore.[1][2]
Donald Broadbent is credited with being the first scientist to systematically use dichotic listening tests in his work.[3][4] In the 1950s, Broadbent employed dichotic listening tests in his studies of attention, asking participants to focus attention on either a left- or right-ear sequence of digits.[5][6] He suggested that, because of its limited capacity, the human information-processing system must select which channel of stimuli to attend to; from this he derived his filter model of attention.[6]
In the early 1960s, Doreen Kimura used dichotic listening tests to draw conclusions about lateral asymmetry of auditory processing in the brain.[7][8] She demonstrated, for example, that healthy participants have a right-ear superiority for the reception of verbal stimuli and a left-ear superiority for the perception of melodies.[9] From that study, and from other studies of neurological patients with brain lesions, she concluded that the left hemisphere is predominant for speech perception and the right hemisphere for melodic perception.[10][11]
In the late 1960s and early 1970s, Donald Shankweiler[12] and Michael Studdert-Kennedy[13] of Haskins Laboratories used a dichotic listening technique (presenting a different nonsense syllable to each ear) to demonstrate the dissociation of phonetic (speech) and auditory (nonspeech) perception, finding that phonetic structure devoid of meaning is an integral part of language and is typically processed in the left cerebral hemisphere.[14][15][16] A dichotic listening performance advantage for one ear is interpreted as indicating a processing advantage in the contralateral hemisphere. In another example, Sidtis (1981)[17] found that healthy adults have a left-ear advantage in a dichotic pitch recognition experiment, which he interpreted as indicating right-hemisphere dominance for pitch discrimination.
An alternative explanation of the right-ear advantage in speech perception is that, because most people are right-handed, more of them hold a telephone to their right ear.[18][19] The two explanations are not necessarily incompatible, since telephoning habits could themselves partly reflect hemispheric asymmetry. Some of the converse findings for nonspeech stimuli (e.g. environmental sounds[20][21]) can also be interpreted within this framework.
During the early 1970s, Tim Rand demonstrated dichotic perception at Haskins Laboratories.[22][23] In his study, the first formant (F1) was presented to one ear while the second and third formants (F2 and F3) were presented to the opposite ear, with F2 and F3 varied between low and high intensity. He found that, in comparison to the binaural condition, "peripheral masking is avoided when speech is heard dichotically."[23] This demonstration was originally known as "the Rand effect," was later renamed "dichotic release from masking," and was eventually called "dichotic perception" or "dichotic listening." Around the same time, Jim Cutting (1976),[24] an investigator at Haskins Laboratories, researched how listeners could correctly identify syllables when different components of the syllable were presented to different ears. The formants of vowel sounds and their relations are crucial for differentiating vowel sounds. Even though the listeners heard two separate signals, with neither ear receiving a 'complete' vowel sound, they could still identify the syllables.
The "dichotic fused words test" (DFWT) is a modified version of the basic dichotic listening test. It was originally explored by Johnson et al. (1977),[25] but in the early 1980s Wexler and Halwes (1983)[26] modified the original test to obtain more accurate data on the hemispheric specialization of language function. In the DFWT, each participant listens to pairs of monosyllabic rhyming consonant-vowel-consonant (CVC) words that differ only in the initial consonant. The significant difference in this test is that "the stimuli are constructed and aligned in such a way that partial interaural fusion occurs: subjects generally experience and report only one stimulus per trial."[27] According to Zatorre (1989), major advantages of this method include "minimizing attentional factors, since the percept is unitary and localized to the midline" and that "stimulus dominance effects may be explicitly calculated, and their influence on ear asymmetries assessed and eliminated."[27] Wexler and Halwes's study obtained high test-retest reliability (r = 0.85),[26] indicating that the test yields consistent results across repeated administrations.
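Test-retest reliability of this kind is typically reported as a Pearson correlation between scores from two testing sessions. A minimal sketch in Python, using hypothetical ear-advantage scores (the numbers are illustrative, not data from the original study):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired lists of scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical scores for five participants tested in two sessions;
# session 2 closely tracks session 1, so reliability is high.
session_1 = [12, 8, 15, 3, 10]
session_2 = [11, 9, 14, 4, 10]
print(round(pearson_r(session_1, session_2), 2))  # → 0.99
```

A value near 1.0 means participants keep roughly the same rank order across sessions, which is what the r = 0.85 figure above conveys.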
An emotional version of the dichotic listening task has also been developed. In this version, individuals listen to the same word in each ear but hear it in either a surprised, happy, sad, angry, or neutral tone; participants then press a button indicating which tone they heard. Dichotic listening tests usually show a right-ear advantage for speech sounds. A right-ear/left-hemisphere advantage is expected because Broca's area and Wernicke's area are both located in the left hemisphere. In contrast, the left ear (and therefore the right hemisphere) is often better at processing nonlinguistic material.[28] The data from the emotional dichotic listening task are consistent with these findings, as participants tend to give more correct responses for the left ear than for the right.[29] The emotional task also appears to be harder for participants than the phonemic dichotic listening task, as they give more incorrect responses.
The manipulation of voice onset time (VOT) during dichotic listening tests has given many insights into brain function.[30] To date, the most common design uses four VOT conditions: short-long pairs (SL), in which a consonant-vowel (CV) syllable with a short VOT is presented to the left ear and a CV syllable with a long VOT to the right ear, as well as long-short (LS), short-short (SS) and long-long (LL) pairs. In 2006, Rimol, Eichele, and Hugdahl[31] first reported that, in healthy adults, SL pairs elicit the largest right-ear advantage (REA) while LS pairs elicit a significant left-ear advantage (LEA). A study of children aged 5–8 has shown a developmental trajectory whereby long VOTs gradually start to dominate over short VOTs when LS pairs are presented under dichotic conditions.[32] Converging evidence from studies of attentional modulation of the VOT effect shows that children below about age 9 lack the adult-like cognitive flexibility required to exert top-down control over stimulus-driven bottom-up processes.[33][34] Arciuli et al. (2010) further demonstrated that this kind of cognitive flexibility predicts proficiency with complex tasks such as reading.[30][35]
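The four pairing conditions can be made concrete with a short sketch. Assuming, for illustration, CV syllables built from the voiced stops (short VOT) and unvoiced stops (long VOT) with the vowel /a/; the specific syllable lists are illustrative, not the materials of any particular study:

```python
from itertools import product

# Voiced stops have short voice onset times; unvoiced stops have long ones.
short_vot = ["ba", "da", "ga"]
long_vot = ["pa", "ta", "ka"]

# Each pair is (left-ear syllable, right-ear syllable).
conditions = {
    "SL": list(product(short_vot, long_vot)),   # short VOT left, long VOT right
    "LS": list(product(long_vot, short_vot)),   # long VOT left, short VOT right
    "SS": [(l, r) for l, r in product(short_vot, repeat=2) if l != r],
    "LL": [(l, r) for l, r in product(long_vot, repeat=2) if l != r],
}

for name, pairs in conditions.items():
    print(name, len(pairs))
```

The SS and LL conditions exclude identical pairs, since presenting the same syllable to both ears would not be dichotic.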
Dichotic listening tests can also be used as a lateralized speech assessment task. Neuropsychologists have used the test to explore the role of particular neuroanatomical structures in speech perception and language asymmetry. For example, Hugdahl et al. (2003) investigated dichotic listening performance and frontal lobe function in nonaphasic patients with left- or right-hemisphere frontal lobe lesions, compared to healthy controls.[36] In the study, all groups were exposed to 36 dichotic trials with pairs of CV syllables, and each participant was asked to state which syllable he or she heard best. As expected, the right-lesioned patients showed a right-ear advantage like the healthy control group, but the left-lesioned patients were impaired relative to both the right-lesioned patients and the controls. From this study, the researchers concluded that "dichotic listening taps into a neuronal circuitry which also involves the frontal lobes, and that this may be a critical aspect of speech perception."[36] Similarly, Westerhausen and Hugdahl (2008)[37] analyzed the role of the corpus callosum in dichotic listening and speech perception. After reviewing many studies, they concluded that "...dichotic listening should be considered a test of functional inter-hemispheric interaction and connectivity, besides being a test of lateralized temporal lobe language function" and that "the corpus callosum is critically involved in the top-down attentional control of dichotic listening performance, thus having a critical role in auditory laterality."[37]
Dichotic listening can also be used to test the hemispheric asymmetry of language processing. In the early 1960s, Doreen Kimura reported that dichotic verbal stimuli (specifically spoken numerals) produced a right ear advantage (REA).[38] She attributed the right-ear advantage "to the localization of speech and language processing in the so-called dominant left hemisphere of the cerebral cortex."[39]: 115 According to her study, this phenomenon was related to the structure of the auditory nerves and the left-sided dominance for language processing.[40] The REA does not, however, apply to non-speech sounds. In "Hemispheric Specialization for Speech Perception," Studdert-Kennedy and Shankweiler (1970)[14] examined dichotic listening of CVC syllable pairs. The six stop consonants (b, d, g, p, t, k) were paired with the six vowels, and variations in the initial and final consonants were analyzed. The REA is strongest when both the initial and final consonants differ and weakest when only the vowel changes. Asbjornsen and Bryden (1996) state that "many researchers have chosen to use CV syllable pairs, usually consisting of the six stop consonants paired with the vowel /a/. Over the years, a large amount of data has been generated using such material."[41]
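The size of an ear advantage in such studies is commonly summarized as a laterality index computed from the number of correctly reported stimuli per ear. One common form is (R − L) / (R + L) × 100; a minimal sketch with hypothetical counts:

```python
def laterality_index(right_correct, left_correct):
    """Laterality index: +100 = complete right-ear advantage,
    -100 = complete left-ear advantage, 0 = no asymmetry."""
    total = right_correct + left_correct
    if total == 0:
        raise ValueError("no correct reports from either ear")
    return 100.0 * (right_correct - left_correct) / total

# Hypothetical participant: 24 right-ear and 16 left-ear syllables correct.
print(laterality_index(24, 16))  # → 20.0 (a right-ear advantage)
```

Normalizing by the total keeps the index comparable across participants who differ in overall accuracy.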
In selective attention experiments, participants may be asked to repeat aloud the content of the message they are listening to. This task is known as shadowing. As Colin Cherry (1953)[42] found, people do not recall the shadowed message well, suggesting that most of the processing necessary to shadow the attended message occurs in working memory and is not preserved in the long-term store. Performance on the unattended message is worse: participants are generally able to report almost nothing about its content. In fact, a change from English to German in the unattended channel frequently goes unnoticed. However, participants are able to report that the unattended message is speech rather than non-verbal content. In addition, if the unattended message contains certain information, such as the listener's name, it is more likely to be noticed and remembered.[43] A demonstration of this was provided by Conway, Cowan, and Bunting (2001), who had subjects shadow words in one ear while ignoring words in the other. At some point, the subject's name was spoken in the ignored ear, and the question was whether the subject would report hearing it. Subjects with a high working memory (WM) span were better able to block out the distracting information.[44] Similarly, if the unattended message contains sexually explicit words, people usually notice them immediately.[45] This suggests that unattended information also undergoes some analysis and that keywords can divert attention to it.
Some data gathered from dichotic listening experiments suggest that there may be a small sex difference in perceptual and auditory asymmetries and language laterality. According to Voyer (2011),[46] "Dichotic listening tasks produced homogenous effect sizes regardless of task type (verbal, non-verbal), reflecting a significant sex difference in the magnitude of laterality effects, with men obtaining larger laterality effects than women."[46]: 245–246 However, the authors discuss numerous limiting factors, ranging from publication bias to small effect sizes. Furthermore, as discussed in "Attention, reliability, and validity of perceptual asymmetries in the fused dichotic words test,"[47] women reported more "intrusions" (words presented to the uncued ear) than men when presented with exogenous cues in the fused dichotic words task, which suggests two possibilities: 1) women experience more difficulty attending to the cued word than men, and/or 2) regardless of the cue, women spread their attention evenly, whereas men may focus more intently on exogenous cues.[46]
A study involving the dichotic listening test, with emphasis on subtypes of schizophrenia (particularly paranoid and undifferentiated), demonstrated that people with paranoid schizophrenia have the largest left hemisphere advantage, whereas people with undifferentiated schizophrenia (in which psychotic symptoms are present but the criteria for the paranoid, disorganized, or catatonic types have not been met) have the smallest.[48] The application of the dichotic listening test supported the view that preserved left hemisphere processing is characteristic of paranoid schizophrenia, while reduced left hemisphere activity is characteristic of undifferentiated schizophrenia. In 1994, M.F. Green and colleagues used a dichotic listening study to examine "the functional integration of the left hemisphere in hallucinating and nonhallucinating psychotic patients". The study showed that auditory hallucinations are connected to a malfunction in the left hemisphere of the brain.[49]
Dichotic listening has also been used to study the emotion-processing regions of the brain. Phil Bryden's dichotic listening research focused on emotionally loaded stimuli (Hugdahl, 2015).[50] Later research examined lateralization and the identification of the cortical regions implicated when two dichotic listening tasks are given at once. Jancke et al. (2001) used functional magnetic resonance imaging (fMRI) to determine which brain regions were activated when participants attended to phonetic as opposed to emotional features of the auditory stimuli. The results indicated that the attended stimulus feature (phonetic or emotional) significantly affected which of the corresponding brain regions were activated, although no overall difference in cortical activation was found.[51]