Bimodal bilingualism is an individual or community's bilingual competency in at least one oral language and at least one sign language, which utilize two different modalities. An oral language uses a vocal-aural modality, whereas a signed language uses a visual-spatial modality.[1] A substantial number of bimodal bilinguals are children of deaf adults (CODAs) or other hearing people who learn sign language for various reasons. Deaf people as a group have their own sign language(s) and culture, referred to as Deaf,[2] but invariably live within a larger hearing culture with its own oral language. Thus, "most deaf people are bilingual to some extent in [an oral] language in some form".[3] In discussions of multilingualism in the United States, bimodal bilingualism and bimodal bilinguals have often not been mentioned or even considered, in part because American Sign Language, the predominant sign language used in the U.S., only began to be acknowledged as a natural language in the 1960s (in discussions of bimodal bilingualism in the U.S., the two languages involved are generally ASL and English). However, bimodal bilinguals share many of the same traits as traditional bilinguals (those with competency in at least two spoken languages), while also differing in some interesting ways, owing to the unique characteristics of the Deaf community. Bimodal bilinguals also experience neurological benefits similar to those of unimodal bilinguals (see Cognitive effects of bilingualism), with significantly increased grey matter in various brain areas, evidence of increased plasticity, and neuroprotective advantages that can help slow or even prevent the onset of age-related cognitive diseases such as Alzheimer's disease and dementia.
Most modern neurological studies of bilingualism employ functional neuroimaging techniques to uncover the neurological underpinnings of multilingualism and how multilingualism is beneficial to the brain. Neuroimaging and other neurological studies have demonstrated in recent years that multilingualism has a significant impact on the human brain. The mechanisms required by bilinguals to code-switch (that is, alternate rapidly between multiple languages within a conversation) not only demonstrate increased connectivity and density of the neural network in multilinguals, but also appear to provide protection against damage due to age and age-related pathologies, such as Alzheimer's.[4] Multilingualism, especially bimodal multilingualism, can help slow the process of cognitive decline in aging. It is thought that this is a result of the increased workload that the executive system, housed mostly in the frontal cortex, must assume in order to successfully control the use of multiple languages at once. This means that the cortex must be more finely tuned, which results in a "neural reserve" that then has neuroprotective benefits.
Gray matter volume (GMV) has been shown to be significantly preserved in bimodal bilinguals as compared to monolinguals in multiple brain areas, including the hippocampus, amygdala, anterior temporal lobes, and left insula. Similarly, neuroimaging studies that have compared monolinguals, unimodal bilinguals, and bimodal bilinguals provide evidence that deaf signers exhibit brain activation in patterns different from those of hearing signers, especially with regard to the left superior temporal sulcus. In deaf signers, activation of the superior temporal sulcus is highly lateralized to the left side during facial recognition tasks, while this lateralization is not present in hearing bimodal signers.[5] Bilinguals also require an effective and fast neural control system to allow them to select and control their languages even while code-switching rapidly. Evidence indicates that the left caudate nucleus, a centrally located brain structure near the thalamus and the basal ganglia, is an important part of this mechanism, as bilinguals tend to have significantly increased GMV and activation in this region as compared to monolinguals, especially during active code-switching tasks.[6] As implied by the significant preservation of gray matter in the hippocampi (an area of the brain largely associated with memory consolidation and higher cognitive functions such as decision-making) of bimodal bilinguals, areas of the brain that help control phonological working memory also tend to show higher activation in individuals who are proficient in two or more languages. There is also evidence that the age at which an individual acquires a second language may play a significant role in the varying brain functions associated with bilingualism. For example, individuals who acquired their second language early (before the age of 10) tend to have drastically different activation patterns than late learners.
However, late learners who achieve full proficiency in their second language tend to show similar patterns of activation during auditory tasks regardless of which language is being used, whereas early learners tend to activate different brain areas depending upon which language is being used.[7]
Along with the neuroprotective benefits that help to prevent the onset of age-related cognitive issues such as dementia, bimodal bilinguals also experience a slightly different pattern of language organization in the brain. While hearing bimodal bilinguals showed less parietal activation than deaf signers when asked to use only sign language, those same bimodal bilinguals demonstrated greater left parietal activation than did monolinguals.[8] Parietal activation is not typically associated with language production but rather with motor activity. It is therefore plausible that bimodal bilinguals, when switching between speech- and sign-based language, stimulate their left parietal areas because of their increased need to combine motor action with language production. Moreover, there is evidence of parallel or simultaneous language activation during language use: this activation occurs whenever the bilingual uses language, regardless of whether the L1 or the L2 is being used, and the dominance or non-dominance of a language does not affect this stimulation. The same activation occurs with any language modality, meaning the brain is activated whether the language is written, signed, or spoken.[9]
A 2021 study used event-related potentials (ERPs) to track language switching in the brains of bimodal bilinguals and measured a brain response pattern not found in unimodal bilinguals.[10]
To be defined as bilingual, an individual need not have perfect fluency or equal skill in both languages.[11] Bimodal bilinguals, like oral-language bilinguals, exhibit a wide range of language competency in their first and second languages. For Deaf people (the majority of bimodal bilinguals in the U.S.), level of competency in ASL and English may be influenced by factors such as degree of hearing loss, whether the individual is prelingually or post-lingually deaf, the style of and language used in their education, and whether the individual comes from a hearing or Deaf family.[12] Historically, assessments of bilingual children would measure proficiency in only one of their languages, a design flaw that linguists and educators have identified in more recent research. Most bilingual children achieve phonological, lexical, and grammatical milestones at the same rate as monolingual children, and the same holds when comparing unimodal and bimodal bilinguals: in a study by Fish & Morford (2012), bimodal bilingual CODAs demonstrated the same rate of success in these areas as their unimodal bilingual peers.[13]
Regardless of English competency in other areas, no Deaf individual is likely to comprehend spoken English in the same way a hearing person does, because only a small percentage of English phonemes are clearly visible through lip reading. Additionally, many Deaf bilinguals who are fluent in written English choose not to speak it because of the general social unacceptability of their voices, or because they are unable to monitor factors like pitch and volume.[12] The simultaneous production of speech and sign is referred to as code-blending. One line of research, drawing on fundamental concepts of Minimalism and Distributed Morphology and employing the Synthesis model with WH-question data, has examined code-switching and code-blending in bimodal bilinguals. The model accommodates the signed modality and is intended to capture a variety of data from, and predict bilingual effects for, any two language pairs.[14]
Like hearing oral-language bilinguals, Deaf bimodal bilinguals generally "do not judge themselves to be bilingual".[15] Whether because they do not believe the sign language to be a legitimate and separate language from the majority oral language, or because they do not consider themselves sufficiently fluent in one of their languages, denial of one's bilingualism is a common and well-known phenomenon among bilinguals, be they hearing or Deaf.[15]
Deaf or bimodal bilinguals, in their day-to-day lives, move among and between various points on the language mode continuum depending on the situation and the language competency and skills of those with whom they are interacting. For example, when conversing with a monolingual, all bilinguals will restrict themselves to the language of the individual with whom they are conversing. However, when interacting with another bilingual, all bilinguals can use a mixture of the two common languages.[15] While early aged bimodal bilinguals have more than one mode to communicate a language, they are just as susceptible as unimodal bilinguals to confusing domains and using the "wrong" language in a given situation.[16] Code-switching is a common phenomenon found among bilinguals; for bimodal bilinguals, another equivalent phenomenon is code-blending, which "involves simultaneous production of parts of an utterance in speech and sign." Examples of code-blending would be using ASL word order in a spoken English utterance, or conversing by showing an ASL classifier and speaking the English equivalent phrase simultaneously.[16] Like unimodal bilinguals, bimodal bilinguals will activate, deactivate or adjust their use of each language according to their domain. For ASL-English bilingualism, "deaf students' vocabulary knowledge in each language will be related to the contexts where the two languages are used." That is, vocabulary and topics learned and discussed in ASL will be recognized and recalled in ASL, and "English vocabulary will reflect the contexts where English is accessible to deaf students."[17]
As is the case in many situations of oral-language bilingualism, bimodal bilingualism in the U.S. involves two languages with vastly different social status. ASL has traditionally not even had the status of being considered a legitimate language, and Deaf children have been prevented from learning it through such "methods" as having their hands tied together. Hearing parents of Deaf children have historically been advised not to allow their children to learn ASL, as they were informed it would prevent the acquisition of English. Despite the fact that Deaf children's early exposure to ASL has now been shown to enhance their aptitude for acquiring English competency, the unequal social status of ASL and English, and of sign languages and oral languages, remains.[12][18] Consequently, CODAs experience a wide range of both positive and negative impacts, depending on their individual circumstances.
Since linguists did not recognize ASL as a true language until the second half of the twentieth century, there has been very little acknowledgment of, or attention or study devoted to, the bilingual status of the American Deaf community.[15] Deaf people are often "still seen by many as monolingual in the majority language whereas in fact many are bilingual in that language and in sign".[15]
Because almost all members of the American Deaf community are to some extent bilingual in ASL and English, it is rare that a Deaf person will find themselves conversing with a person who is monolingual in ASL. Therefore, unless an American Deaf person is communicating with someone who is monolingual in English (the majority language), he or she can expect to be conversing in a "bilingual language mode".[15] The result of this prolonged bilingual contact and mixing between a sign language and an oral language is known as contact sign.[12] Deaf children and their parents often communicate using several modalities, such as oral-aural and visual-gestural, and it is this mixed use of ASL and spoken English by bilinguals that gave rise to contact signing. Contact signing takes many forms, layering ASL and spoken-English expressions in complex ways, and is a common occurrence in the deaf community.[19]
Language shift "occurs when speakers in a community give up speaking their language and take up the use of another in its place".[3] ASL in particular, and sign languages in general, are undeniably influenced by their close contact with English or other oral languages, as evidenced by phenomena such as "loan signs" or lexicalized fingerspelling (the sign language equivalent of loanwords), and through the influence of Contact Sign. However, due to the physical fact of deafness or hearing loss, deaf people generally cannot acquire and speak the majority language in the same way or with the same competency that the hearing population does. Simultaneously, Deaf people still often have a need or desire to learn some form of English in order to communicate with family members and the majority culture.[18] Thus, Deaf communities and individuals, in contrast to many hearing bilingual communities and individuals, will tend to "remain bilingual throughout their lives and from generation to generation".[15]
Unlike unimodal bilinguals, bimodal bilinguals are able to produce and perceive a spoken and a signed language simultaneously. Unimodal individuals can perceive only one spoken language at a given time and cannot process a signed language at the same time unless they are proficient in ASL.[20] However, those who produce and perceive a spoken and a signed language simultaneously demonstrate a slower speech rate, decreased lexical richness, and lower syntactic complexity compared with a speech-only condition.[20] In addition, ASL users rely more on pragmatic inferences and background context than on syntactic information.[21]
In more recent research related to bilingualism and ASL, early exposure and adequate access to a first language have proven imperative to children's development of language, academic and social opportunities, and critical thinking and reasoning skills – all of which can be "applied to literacy development in a spoken language (such as English)."[13] This research emphasizes the need for more additive models of bilingual education, as opposed to subtractive or transitional models, which are designed to shift the learner away from the native language toward complete use of and reliance on the majority language. For deaf children, subtractive models of bilingual education, combined with the lack of a foundation in a native language, typically result in language deprivation and delayed cognitive development. In comparison, the aim of the maintenance model, an additive model, is "to support the development of the native language while also fostering acquisition and use of the majority language." This model is embedded in a bimodal, bilingual education and may include "comparative and integrative pedagogic strategies such as translation, fingerspelling, and chaining/sandwiching strategies."[22]
Simultaneous communication, or SimCom, is a method of signing that represents English in its structure and elements, typically following English word order while using one sign per word. However, research has shown this method of communication is not ideal for bilingual language learning. In a study of bimodal bilingual teachers' and students' vocabulary levels, the results revealed a "slower speech rate, lower lexical richness, and lower syntactic complexity in the SimCom [teaching] condition compared with the speech-only condition." These findings suggest that "the [teachers'] production of the less dominant language (ASL) during SimCom entails inhibition of the dominant [spoken English] language relative to the speech-only condition." The study also acknowledges that SimCom is a "complex communication unit that cannot be reduced to the combination of two languages."[20]
Methodologies, strategies, and support in bimodal bilingual education, as well as the language background and linguistic capital of bimodal bilingual educators, are key to the language competence achieved by bimodal bilingual learners.
The written forms of language can be considered another modality. Sign languages do not have widely accepted written forms, so deaf individuals learn to read and write an oral language. This is known as sign–print bilingualism: a deaf individual has fluency in (at least) one sign language as their primary language and has literacy skills in the written form of (at least) one oral language, without access to the other resources of the oral language that are gained through auditory stimuli.[23] Orthographic systems employ the morphology, syntax, lexical choices, and often phonetic representation of their target language in at least superficial ways; one must learn these new features of the target language in order to read or write it. In communities with standardized education for the deaf, such as the United States and the Netherlands, deaf individuals gain skill sets in reading and writing in the oral language of the community. In such settings, bilingualism is achieved between a sign language and the written form of the community's oral language. In this view, all sign–print bilinguals are bimodal bilinguals, but not all bimodal bilinguals are sign–print bilinguals.
Children who are deaf and employ a sign language as their primary language learn to read in slightly different ways than their hearing counterparts. Much as speakers of oral languages most frequently achieve spoken fluency before they learn to read and write, the most successful profoundly deaf readers first learn to communicate in a sign language.[24] Research suggests that there is a mapping process in which features from the sign language are accessed as a basis for the written language, similar to the way hearing unimodal bilinguals access their primary language when communicating in their second language.[25][26] Among profoundly deaf ASL signers, fluency in ASL is the best predictor of proficiency in written English.[24] In addition, highly proficient signing deaf children use more evaluative devices when writing than less proficient signing deaf children, and the relatively frequent omission of articles by proficient signers when writing in English may indicate a stage in which the transfer effect (which normally facilitates deaf children's reading) produces a mix of the morphosyntactic systems of written English and ASL.[25] Deaf children thus appear to map the new morphology, syntax, and lexical choices of their written language onto the existing structures of their primary sign language. One study examined deaf and hearing readers' responses to syntactic manipulations using self-paced reading methods; the experimental materials included animate and inanimate subjects, actives and passives, and subject and object relatives. Hearing readers showed higher comprehension accuracy than deaf readers, but deaf readers read and grasped the sentences faster, suggesting that self-pacing is a better method for deaf readers.[17]
There are mixed results regarding how important phonological information is to deaf individuals when reading and when that information is obtained. Alphabets, abugidas, abjads, and syllabaries all seem to require the reader/writer to know something about the phonology of their target language prior to learning the system. Profoundly deaf children do not have access to the same auditory base that hearing children do.[24] Orally trained deaf children do not always use phonological information in reading tasks, word recognition tasks, or homophonic tasks; however, deaf signers who are not orally trained do utilize phonological information in word-rhyming tasks.[24] Furthermore, when performing tasks with phonologically confusable initial sounds, hearing readers made more errors than deaf readers.[27] Yet when given sentences that are sub-lexically confusable when translated into ASL, deaf readers made more errors than hearing readers.[27] The body of literature clearly shows that skilled deaf readers can employ phonological skills, even if they do not do so all the time; without additional longitudinal studies it is uncertain whether a profoundly deaf person must know something about the phonology of the target language to become a skilled reader (less than 75% of the deaf population) or whether, by becoming a skilled reader, a deaf person learns how to employ the phonological skills of the target language.[24]
In 2012, "About one in five deaf students who graduate from high school have reading skills at or below the second grade level; about one in three deaf students who graduate from high school have reading skills between the second and fourth grade level. Compared to deaf students, hard of hearing students (i.e., those with mild to moderate hearing loss) fare better overall, but even mild hearing losses can create significant challenges for developing reading skills".[28] These concerning numbers are generally the result of varying levels of early language exposure. Most deaf children are born to hearing parents, which usually leaves a deficiency in their language exposure and development compared to children whose parents use the same modality to communicate. These children acquire a wide range of proficiency in a first language, which then impacts their ability to become proficient in a second (though sometimes possibly a first) language in the written modality.[24] Children exposed to Manually Coded English (MCE) as their primary form of communication show lower literacy levels than their ASL-signing peers. However, in countries such as Sweden, which has adopted a bilingual–bicultural policy in its schools for the deaf, one sees a higher literacy rate than in school systems favoring an oral tradition.[23]