| Barbara Landau | |
| --- | --- |
| Alma mater | University of Pennsylvania |
| Known for | Research in language development, spatial cognition |
| Scientific career | |
| Fields | Cognitive science |
| Institutions | Johns Hopkins University |
Dr. Barbara Landau is the Dick and Lydia Todd Professor in the Department of Cognitive Science at Johns Hopkins University.[1] Landau specializes in language learning, spatial representation, and the relationships between these foundational systems of human knowledge. She examines how the two systems work together to enhance human cognition and whether one is foundational to the other. She is known for her research on unusual cases of development and is a leading authority on language and spatial cognition in people with Williams syndrome.
Landau received her B.A. in sociology from the University of Pennsylvania in 1970, her Ed.M. in educational psychology from Rutgers University in 1977 and her Ph.D. in psychology from the University of Pennsylvania in 1982.[2] Prior to her current position at Johns Hopkins University, she was a faculty member at Columbia University, the University of California, Irvine, and the University of Delaware. She was awarded a Guggenheim Fellowship in 2009. She was elected a member of the National Academy of Sciences in 2018. In addition, she is a fellow of the American Academy of Arts and Sciences, the Cognitive Science Society, the American Psychological Association, and the American Association for the Advancement of Science.
Landau's research focuses on spatial language, spatial cognition, and the ways in which the two interact. There are several possibilities for this relationship: universal spatial representations may give rise to spatial language, language may shape our spatial representations, or each may play a role in shaping the other.[3] Landau has examined these possibilities in a number of different experimental settings.
One way of looking at the relationship is to examine the interaction between non-linguistic spatial memory and language. One study did so by comparing English, Korean, and Japanese speakers on several tasks.[3] When participants had to describe spatial relationships, the three groups used different types of language. For example, English speakers used contact terms (e.g. touch, sit on) only when the reference object was touching the top side of the figure object, whereas Japanese and Korean speakers used contact terms regardless of which side of the figure object the reference object was touching. However, the languages did not differ on every dimension of use: axial terms, which refer to the vertical or horizontal orientation of objects (e.g. left, above), were used consistently across languages.
When participants were asked to view spatial relationships and hold them in memory, they performed equally well regardless of which language they spoke.[3] All participants remembered best the configurations in which the objects were in contact with one another or lay along an axis from one another (e.g. directly horizontal or vertical). Overall, the languages differed in how they described spatial relationships but not in how those relationships were remembered. Landau's research implies that the perception of axial structure and of contact/support is foundational to cognition: because these are such basic, underlying aspects of cognition, they are not affected by the fact that different languages describe them differently.
Landau and colleagues have also studied feature conjunctions, the way in which people hold multiple features of an object (e.g. shape and color) in memory at one time. This is generally difficult. In studies where squares were divided in half, with each half a different color, people recalled how the squares were divided (e.g. horizontally, vertically, or diagonally) but had difficulty recalling which half was which color.[4] Verbal cues given during the memorization period (e.g. saying that red was left of green) improved memory for feature conjunctions, but non-linguistic cues (e.g. flashing colors) did not. Additionally, only directional language produced the improvement: simply saying that red was touching green was unhelpful. The proposed explanation is that linguistic cues allow people to create hybrid representational schemes: temporary mental representations of the target that incorporate both the directional language and the spatial orientation of the target. These hybrid schemes are much easier to hold in memory than purely spatial representations.
Language is also relevant to differences in spatial representation across species. Many species besides humans represent information spatially.[5] For instance, after being allowed to orient themselves and then being disoriented, many species, such as mice and birds, are able to reorient themselves. Spatial information is used preferentially, while non-spatial information is ignored even when it is highly salient or relevant to the situation at hand: when remembering a specific corner of a room, for example, the length and shape of the walls are used but the color of the walls is often ignored. There is evidence that spatial representations arise in a specific area of the brain across species, most likely the hippocampus. Humans differ from other species in that language allows us to hold more stable spatial representations in memory and to share them with others. Language enhances memory for spatial representations but does not dramatically change them. In essence, language makes encoding more efficient: people can remember a spatial representation as a single phrase (e.g. to the right of the blue wall) rather than as a mental image of the space itself, and this allows humans to create a more unified representation of geometric and non-geometric information than other species can.
Another area of work on spatial representation and language involves the different ways that people encode objects and places.[6] In general, we have mental representations of the things we use language for. Landau has examined the geometric properties involved in people's representations of object nouns as opposed to spatial prepositions and has found differences in how the two are encoded. For object nouns, people's mental representations include detailed geometric features, such as the parts of the object, whether it is hollow or solid, and the orientation of its axes (e.g. the back, front, and sides of the object). For prepositions, people's spatial representations are much less detailed: the only shape information concerns the axes, amounting to a general "sketch" of the object, and the spatial relationships communicated by prepositions likewise lack detail, consisting of simple states (e.g. the objects are in contact, one object contains another) and relative distances.
Landau provides two explanations that work in tandem to account for why objects and places are encoded so differently.[6] The first is the Design of Language Hypothesis, which concerns the constraints of language itself: language filters spatial representations down to a small amount of information. An infinite number of spatial relations can be encoded in mental representations, but these do not need to be represented precisely by language. For instance, exact sizes and distances are generally not encoded in language, except within an agreed-upon scientific system of measurement. The second is the Design of Spatial Representation Hypothesis, which concerns innate human cognition. This hypothesis holds that mental representations genuinely differ for objects and places because different areas of the brain encode "where" and "what" information. These systems must be able to combine their information, since humans need to understand what goes where, but the two kinds of information are processed separately. Overall, what can be encoded by language is one factor, but a perhaps greater factor, as Landau postulates, is that the brain is naturally equipped to handle places and objects differently.
Landau has also participated in research on the time frame over which language can modify spatial representations.[7] There are several mechanisms by which such modification can occur. The first, selectivity, is that language encodes only certain aspects of space, not all of them; because not everything is encoded in language, people are attuned by language to pay attention to certain aspects of a spatial situation and to ignore others. Another mechanism is enrichment, the idea that language allows people to combine spatial information with other information in a simple phrase, leading to more stable mental representations. However, these modifications have been found to occur only on a temporary, task-by-task basis, meaning that language can influence human spatial representations but does not permanently change spatial cognition. Overall, Landau's research provides evidence for an interaction between spatial representation and language, in which each plays a role in shaping the other.
Landau has also studied how people come to understand paths of motion and of transition, and specifically the fact that people tend to show a preference for goals when describing these paths. Paths can be goal-oriented (moving towards something) or source-oriented (moving away from something). They can be physical paths of motion (e.g. the boy ran from the house to the fence) but can also include transitions of state (e.g. she sells fruit to the man).[8] In using language to discuss a path, the speaker has to encode an accurate mental representation of the path and then choose which prepositional phrases to use to describe it. When describing paths that start at one point and end at another, both children and adults regularly include the goal but not the source.[8] This occurred even when people were cued with source verbs (e.g. ran from). Some verbs inherently encode paths (e.g. buy and sell), but even for these, people would say "the girl sold a muffin to the man" much more often than "the man bought a muffin from the girl." Overall, people show a goal-path bias when describing events, even when the events are neutral and the verbs used would allow either option.
Further work by Landau and colleagues shows that the goal bias develops in infancy, before the emergence of full language.[9] Infants are able to perceive and encode the sources of paths, but only if they are very salient. The goal bias therefore does not exist because infants are incapable of perceiving information about sources. However, when infants are shown motion with a salient source and an ordinary goal, they encode information about the goal in preference to information about the source. Thus, the bias towards goal paths is not linguistic but is present even before language abilities emerge. There are several possible origins for this bias. One is that cognition depends on moving forward and planning ahead, which requires particular attention to the goal. Relatedly, the goal bias may be specific to intentional events, which tend to be about moving towards an endpoint rather than away from a starting point.
Landau has done intensive research into the ways in which children learn new words, and specifically into the way that spatial information affects this word learning. One focus has been determining which aspects of appearance children weight most heavily when learning object names. Research has shown that, for both children and adults, shape matters more than size or texture when learning novel object names.[10] For instance, when people learn that a square object is a Dax, they do not view non-squares as Daxes but still consider squares of a different size or texture to be good examples of a Dax. This bias towards shape increases with age. In fact, it has been hypothesized that the bias develops as a way of learning words but comes to be used in general categorization tasks as children develop.
Similarly, different spatial information is taken into account when children are learning different types of words. In one study, a novel word was used as either a noun or a preposition to describe an object being placed in a standard position on top of a box.[11] Adults and children were then asked to infer whether other objects were examples of the word. Object shape and position were treated differently depending on whether people were making inferences about a noun or a preposition. The precise shape of the object was used to decide whether the noun applied, while the position of the object was irrelevant. For prepositions, the opposite was true: the word was extended to new objects based on the object's orientation relative to the ground object, and sometimes on the object's main axis, but not on the object's shape. People attend to different geometric properties when learning nouns and prepositions because they are aware that these categories of words refer to different properties in the world.
Landau has also been involved in work on the influence that in-laboratory learning has on later learning. In general, learning names for objects means paying attention to the right properties of the object; for instance, the most important feature in calling something a cup is that it is cup-shaped. Seventeen-month-olds were trained on novel object names in a way that cued them to learn that the words were based specifically on the shape of the objects.[12] This in-laboratory training accelerated children's word learning outside of the laboratory: when children learned in the laboratory that novel object names were based on shape, they were cued to pay closer attention to the relationships between shapes and object names in the real world. Thus, not only do children often learn words based on shape, but this learning can shape later learning. These results bear on the Gavagai problem: the question of how children work out exactly what a novel word refers to. Though prior research has supported the idea that innate constraints on word understanding allow children to do this, Landau's work implies that children may learn which factors to pay attention to through early word-learning experience. In general, the research Landau has been involved in demonstrates that many aspects of word learning depend on paying attention to spatial features.
To understand the typical development of spatial and language abilities, Landau has done extensive research into cases of unusual development: cases where people have cognitive or visual impairments that might make it more difficult to develop typical spatial or language abilities. The differences between atypical and typical development can shed light on how people gain these abilities. In particular, Landau has done extensive research on Williams syndrome. People with Williams syndrome have severe deficits in spatial understanding combined with a relatively intact language system.[13] Much of Landau's work has focused on determining the underlying cause of this spatial deficit.
Williams syndrome (WS) is often studied using standardized tasks, such as asking participants to copy block models, which people with WS find difficult. Work by Landau and colleagues has revealed that this difficulty is not due to problems with the executive processes involved in problem-solving (such as correcting for errors) but instead to impairments in maintaining spatial representations of the blocks in the model.[13] This work clarified which specific aspects of spatial representation are most impaired. Children with WS were just as competent as typically developing children at replicating simple models but were much less accurate when copying more complex ones. They understood how patterns were oriented (horizontally, vertically, or diagonally) but had problems determining the arrangement of the blocks (e.g. which color blocks went in which locations). These impairments in spatial representation did have some impact on executive processes: when copying complex models, children would often assemble the blocks in a semi-random way instead of checking carefully for errors, as they did with simple models. Their impaired spatial abilities led them to adopt a different pattern of problem-solving when faced with a complex model.
Work on the ability of people with Williams syndrome to track multiple objects at once also reveals spatial deficits.[14] People with Williams syndrome were not impaired at tracking multiple stationary objects but had a much harder time than typically developing children when the objects were moving. Landau and colleagues traced this to an impaired visual indexing system, the system that allows people to track multiple objects at one time. There is evidence that typical adults have five visual indexes (meaning that they can track five objects at once), whereas people with Williams syndrome appear to have fewer and can therefore track fewer objects.
One area in which people with Williams syndrome appear to have normal spatial abilities is the perception of biological motion.[15] Perception of biological motion can be tested using point-light walkers: collections of points of light which, when moving together, show a human figure walking either left or right. Children with Williams syndrome were just as accurate as typically developing children at perceiving the motion of these point-light walkers. This not only provides information about spatial cognition in individuals with Williams syndrome; the "selective sparing" of biological motion perception in these individuals may also suggest that it is handled by a specialized system that is not affected by the disorder.
Landau has also worked on spatial understanding in blind individuals, and particularly on the way that spatial knowledge develops in blind children.[16] Understanding how blind children acquire spatial information can shed light on the non-visual aspects of spatial learning. In one case study, a child blind from birth who was taken along paths between several objects or places was then able to travel novel routes between those objects and places. This shows that blind children are still able to make spatial inferences and find new routes between pairs of locations. In fact, the child performed at the same level as sighted children who were blindfolded for the experiment, demonstrating that she had the same spatial abilities as children who had been able to learn about spatial relationships visually. This evidence contrasted with the previously widespread idea that blind people are deficient in spatial knowledge. Blind children are able to hold abstract spatial representations in mind and possess a set of rules about how space is organized.