Mind in Motion

How Action Shapes Thought

By Barbara Tversky

Formats and Prices

  1. Hardcover $35.00 ($44.00 CAD)
  2. ebook $18.99 ($24.99 CAD)

An eminent psychologist offers a major new theory of human cognition: movement, not language, is the foundation of thought

When we try to think about how we think, we can’t help but think of words. Indeed, some have called language the stuff of thought. But pictures are remembered far better than words, and describing faces, scenes, and events defies words. Anytime you take a shortcut or play chess or basketball or rearrange your furniture in your mind, you’ve done something remarkable: abstract thinking without words.

In Mind in Motion, psychologist Barbara Tversky shows that spatial cognition isn’t just a peripheral aspect of thought, but its very foundation, enabling us to draw meaning from our bodies and their actions in the world. Our actions in real space get turned into mental actions on thought, often spouting spontaneously from our bodies as gestures. Spatial thinking underlies creating and using maps, assembling furniture, devising football strategies, designing airports, understanding the flow of people, traffic, water, and ideas. Spatial thinking even underlies the structure and meaning of language: why we say we push ideas forward or tear them apart, why we’re feeling up or have grown far apart.

Like Thinking, Fast and Slow before it, Mind in Motion gives us a new way to think about how, and where, thinking takes place.

Excerpt




CHAPTER ONE

The Space of the Body: Space Is for Action

In which we show that we have an insider’s view of the body, one shaped by our actions and sensations, unlike our outsider view of other things in our world that is shaped by appearance. Mirror neurons map others’ bodies onto our own, allowing us to understand other bodies through our own and to coordinate our actions with theirs.

WE BEGIN IN OUR SKIN, THAT THIN, FLEXIBLE MEMBRANE THAT encloses our bodies and separates us from everything else. A highly significant boundary. All our actions take place in the space outside our skin, and our lives depend on those actions. As any mother will happily tell you, that activity begins before birth. Who knows why those curious creatures growing inside us keep “kicking”—perhaps to find a more comfortable position? Or why they seem so active at inopportune times—one of them kept popping my dress up and down during my PhD orals.

Mercifully, bodies do far more than kick. They eventually perform an astounding assortment of activities. The harmonious coordination underlying those diverse behaviors depends on the continuous integration of a variable stream of information from many senses with the articulated actions of dozens of muscles (apologies for beginning with such a mouthful!). Although our skin encloses and separates our bodies from the surrounding world, accomplishing those activities entails countless interactions with the world. We cannot truly be separated from the world around us. It is those interactions that underlie our conceptions of our bodies.

Viewed from the outside, bodies are like other familiar objects: tables, chairs, apples, trees, dogs, or cars. We become adept at rapidly recognizing those common objects, primarily from their outlines, their contours, in their prototypical orientations. The contours of objects are, in turn, shaped by the configuration of their parts, legs and bodies for dogs and tables, trunks and canopies for trees. That skill, recognizing objects, takes up residency in a slew of places in the brain. Faces in one area, bodies in another, scenes in yet another. Those regions are active—light up—when we view those kinds of things and not when we view things from other categories.

For objects (and faces), some views are better than others. An upside-down table or tree is harder to recognize than a right-side-up version; the backside of a dog or the top view of a bicycle is harder to recognize than side views of either. A good view is one that shows the distinctive features of the object. A prototypical dog has four legs (like a prototypical table), an elongated horizontal tube for a body, and a symmetric head with eyes, snout, and a mouth as well as ears protruding from either side. The best view of a dog would show those features. Exactly those views, the ones that present more of the characteristic features in the proper configuration, are the ones we are fastest to recognize and the ones we judge as better representations of the object. For many objects, like dogs or tables, the best views are, of course, upright and in three-quarter view or profile. In many cases, the contours or silhouettes of good views are sufficient for rapid recognition.

BODIES AND THEIR PARTS

Just as for objects, contours of canonical orientations are especially effective for recognizing bodies—when we view them from the outside. But, singularly, for bodies we also have an insider perspective. That intimate insider perspective comes with many extras. We know what bodies can do and what bodies feel like from the inside. We can’t have that knowledge for chairs or even bugs (Kafka aside) or dogs or chimpanzees. We know what it feels like to stand tall or sit slumped, to climb stairs and trees, to jump and hop, to fasten buttons and tie shoes, to signal thumbs up or OK, to cry and laugh. We know not only what it feels like to act in those ways but, even more significantly, also what it means to act in those ways, stretching or slumping, crying or laughing. Importantly, we can map other bodies and their actions onto our own, suggesting that we understand other bodies not only by recognizing them but also by internalizing them.

Before that, we map our bodies onto our brains, onto the homunculus, the “little man,” sprawled ear-to-ear across the top shell, the cortex, of our brains. (See Figure 1.1.) The cortex is a thick, crenellated layer splayed over the parts of the brain that are evolutionarily older. From the outside, the brain looks like a giant walnut. And like a walnut, the brain is divided front to back into two not quite symmetric halves, or hemispheres, right and left. For the most part, the right hemisphere controls and has inputs from the left side of the body. The reverse holds for the left hemisphere. Each hemisphere is divided into plateaus called lobes that are separated by valleys, or sulci (singular, sulcus). It’s hard not to talk about the cortex geographically, and undoubtedly there are analogies in the formation of plateaus and layers and valleys on the earth and in the brain. Those wrinkles create more surface, important for the land and important for the brain. The inputs from the various sensory systems are partly channeled to separate lobes of the cortex, for example, vision to the occipital lobe at the back of the head and sound to the temporal lobes above the ears. Yet each lobe is wondrously complex, with many regions, many layers, many connections, many kinds of cells, and many functions. Remarkably, even single neurons can be specialized, for a specific view of a face or for tracking an object that moves behind a screen. And there are billions of them in the human brain. A recent estimate is eighty-six billion.

There are actually two pairs of homunculi splayed along the central sulcus; one pair maps the sensations from the body, the other pair maps motor output to the body. The pair on the left side of the brain maps the right side of the body and the pair on the right side of the brain maps the left side of the body. The sensory and motor homunculi face each other. The motor homunculus is, perhaps significantly, positioned more forward (technical terms: anterior or frontal), toward the eyes and nose. It controls the output, telling the muscles how to move. The sensory homunculus is positioned toward the back of the head (technical terms: posterior or caudal, from the Latin for “tail”). It brings the input from the many kinds of sensations our bodies respond to, position, pain, pressure, temperature, and more. The homunculi are strange little people, with oversized heads, huge tongues, enormous hands, and skinny torsos and limbs.

FIGURE 1.1. Sensory homunculus.

You can’t help but see that these cortical proportions are far from the proportions of the body. Rather than representing the sizes of the body parts, the sizes of the cortical representations of the various body parts are proportional to the quantities of neurons ascending to them or descending from them. That is, the head and hands have more cortical neurons relative to their body size, and the torso and limbs have fewer cortical neurons relative to their body size. More neural connections mean more sensory sensitivity on the sensory side and more action articulation on the action side. The disproportionate sizes of cortical real estate make perfect sense once we think about the multitude of articulated actions that the face, tongue, and hands must perform and the sensory feedback needed to modulate their actions. Our tongues are involved in the intricate coordinated actions necessary for eating, sucking, and swallowing, for speaking, groaning, and singing, and for many other activities that I will leave to your imagination. Our mouths smile and frown and scowl, they blow bubbles and whistle and kiss. Hands type and play the piano, throw balls and catch them, weave and knit, tickle babies and pat puppies. Our toes, on the other hand, are sadly underused, incompetent, and unnoticed—until we stub them. That functional significance trounces size is deep inside us, or rather, right there at the top of the head.

Significance trounces size not only in the brain but also in talk and thought. We saw this in research in our laboratory. We first collected the body parts most frequently named across languages. Zipf’s Law tells us that the more a term gets used, the shorter it gets; co-op, TV, and NBA are examples. The presumption is that if a body part is named across languages, it’s probably important irrespective of culture. The top seven were head, hands, feet, arms, legs, front, back. All the names are short, and, in fact, all are important even compared to other useful parts, like elbow or forearm. We asked a large group of students to rank those parts by significance and another group by size. As expected, similar to the homunculus in the brain, significance and size didn’t always line up. Significance reflected size of cortical territory, not body size: head and hands were rated as highly significant but aren’t particularly large, and backs and legs are large but were rated lower in significance.

Next, we asked which body parts people are faster to recognize: the large ones or the significant ones. We tried it two ways. In one study, people saw pairs of pictures of bodies, each in a different pose, each with a part highlighted. You might be thinking that people would naturally find large parts faster. To make all parts equal irrespective of size, we highlighted each part with a dot at its middle. In the other study, people first saw a name of a body part and then a picture of a body with a part highlighted. In both studies, half the pairs had the same part highlighted and half had different parts highlighted. Participants were asked to indicate “same” or “different” as fast as possible. An easy task; there were very few errors. Our interest was in the time to respond: Would people respond faster for significant parts or for large ones? You’ve probably already guessed what happened. Significant parts were faster.

The triumph of significance over size was even stronger for name-body comparisons than for body-body comparisons. Names are a string of letters; they lack the concrete features of pictures like size and shape. Names, then, are more abstract than depictions. Similarly, associations to names of objects are more abstract than associations to depictions of objects. Names of things evoke abstract features like function and significance, whereas pictures of things evoke concrete perceptible features.

First General Fact Worth Remembering: Associations to names are more abstract than associations to pictures.

Remember that all the parts used in our studies were significant compared to familiar but less significant parts like shoulder or ankle. Notably, the word for each part—head, hands, feet, arms, legs, front, and back—has numerous extended uses, uses so common that we’re unaware of their bodily origins. Here are just a few: head of the nation, lost his head; right-hand person, on the one hand, hands down; foot of the mountains, all feet; arm of a chair, arm of the government; the idea doesn’t have legs, shake a leg, break a leg; up front, front organization; not enough backing, behind the back. Notice that some of these figurative meanings play on the appearance of the parts, elongated as in the arms and legs of a chair; others play on the functions of the parts, such as the head of the nation and the idea has no legs. Of course, many other body parts have figurative extensions: someone might be the butt of a joke or have their fingers into everything. Then there are all the places claiming to be the navel of the world—visiting all of them could keep you traveling for months—the navel, that odd dot on our bellies, a remnant of the lifeline that once connected us to our mothers. Once you start noticing figurative uses, you see and hear them everywhere.

Like our knowledge of space, we know about our bodies from a multitude of senses. We can see our own bodies as well as those of others. We can hear our footsteps and our hands clapping and our joints clicking and our mouths speaking. We sense temperature and texture and pressure and pleasure and pain and the positions of our limbs both from the surface of our skin and from proprioception, those sensations of our bodies from the inside. We know where our arms and legs are without looking; we can feel when we are off balance or about to be. It’s mind-boggling to think of how much delicate and precise coordination of so many sensory systems is needed just to stand and walk, not to mention shoot a basket or do a cartwheel. We weren’t born doing those things.

Babies have so much to learn. And they learn so fast: their brains create millions of synapses, connections between neurons, per second. But their brains also prune synapses. Otherwise, our brains would become tangled messes, everything connected to everything else, a multitude of possibilities but no focused action, no way to strengthen important connections and weaken irrelevant ones, no way to choose among all those possibilities and organize resources to act. Among other things, pruning allows us to quickly recognize objects in the world and to quickly catch falling teacups but not burning matches. But that process has costs: we can mistake a coyote for a dog and a heavy rock for a rubber ball.

This brings us to our First Law of Cognition: There are no benefits without costs. Searching through many possibilities to find the best can be time consuming and exhausting. Typically, we simply don’t have enough time or energy to search and consider all the possibilities. Is it a friend or a stranger? Is it a dog or a coyote? We need to quickly extend our hands when a ball is tossed to us but quickly duck when a rock is hurled at us. Life, if nothing else, is a series of trade-offs. The trade-off here is between considering possibilities and acting effectively and efficiently. Like all laws in psychology, this one is an oversimplification, and the small print has the usual caveats. Nevertheless, this law is so fundamental that we will return to it again and again.

INTEGRATING BODIES: ACTION AND SENSATION

With this in mind, watching five-month-old babies is all the more mystifying. On their backs, as they are now supposed to be placed, they can suddenly catch sight of their hand and are captivated. They stare intently at their hand as though it were the most interesting thing in the world. They don’t seem to understand that what they are regarding so attentively is their own hand. They might move their hand quite unintentionally and then watch the movement without realizing that they’ve caused it. If you put your finger or a rattle in their hand, they’ll grasp it; grasping is reflexive. But if the hand and the rattle disappear from sight, they won’t track them. Gradually, sight and sensation and action get integrated, starting at the top of the body, hands first. Weeks later, after they’ve accomplished reaching and grasping with their hands, they might accidentally catch their foot. Flexible little things with stubby legs, they might then bring their foot to their mouth. Putting whatever’s in the hand into the mouth is also quite automatic, but at first they don’t seem to realize that it’s their own foot.

Babies start disconnected. They don’t link what they see with what they do and what they feel. And they don’t link the parts of their body with each other. We take the connections between what we see and what we feel for granted, but human babies don’t enter the world with those connections; the connections are learned, slowly over many months. Ultimately, what unites the senses foremost is action. That is, the output—action—informs and integrates the input—sensation—through a feedback loop. Unifying the senses depends on acting: doing and seeing and feeling, sensing the feedback from the doing at the same time.

It’s not just babies who calibrate perception through action. We adults do it too. Experiments in which people don prismatic glasses that distort the world by turning it upside down or sliding it sideways show this dramatically. The first known experiments showing adaptation to distorting lenses were performed in the late nineteenth century by George Stratton, then a graduate student and later the founder of the Berkeley Psychology Department. Stratton fashioned lenses that distorted vision in several ways and tried them himself, wearing them for weeks. At first, Stratton was dizzy, nauseated, and clumsy, but gradually he adapted. After a week, the upside-down world seemed normal, and so did his behavior. In fact, when he removed the lenses, he got dizzy and stumbled again. Since then, experiments with prismatic lenses that turn the world every which way have been repeated many times. You can try the lenses in many science museums or buy them on the Web. A charismatic introductory psychology teacher at Stanford used to bring a star football player to class and hand him distorting lenses. Then the instructor would toss the player a football, and of course the star player fumbled, much to everyone’s delight. A rather convincing demonstration! That disrupted behavior, the errors in reaching or walking, is the measure of adaptation to the prismatic world.

The surprising finding is this: seeing in the absence of acting doesn’t change perception. If people are wheeled about in a chair and handed what they need—if they don’t walk or reach for objects—they do not adapt to the prismatic lenses. Then, when the lenses are removed, the behavior of passive sitters is normal. No fumbling. No dizziness.

Because acting changes perception, it should not be surprising that acting changes the brain. This has been shown many times in many ways, in monkeys as well as in humans. Here’s the basic paradigm: give an animal or a person extensive experience using a tool. Then check areas of the brain that underlie perception of the body to see if they now extend outside the body to include the tool. Monkeys, for example, can quickly learn to use a hand rake to pull out-of-reach objects, especially treats, to themselves. After they become adept at using a rake, the brain regions that keep track of the area around the hand as it moves expand to include the rake as well as the hand. These findings were so exciting that they have been replicated many times in many variations in many species. The general finding is that extensive practice using tools enlarges both our conscious body image and our largely unconscious body schema.

That extensive tool use enlarges our body images to include the tools provides evidence for the claim that many of us jokingly make, that our cell phones or computers are parts of our bodies. But it also makes you wish that the people who turn and whack you with their backpacks had had enough experience with backpacks that their backpacks had become part of their body schemas. Too bad we don’t use our backpacks the ways we use the tools in our hands.

The evidence on action is sufficient to declare the Second Law of Cognition: Action molds perception. There are those who go farther and declare that perception is for action. Yes, perception serves action, but perception serves so much more. There are the pure pleasures of seeing and hugging people we love, listening to music we enjoy, viewing art that elevates us. There are the meanings we attach to what we feel and see and hear, the sight of a forgotten toy or the sound of a grandparent’s voice or the taste, for Proust, of a madeleine. Suffice it to say that action molds perception.

Earlier I observed that our skin surrounds and encloses our bodies, separating our bodies from the rest of the world. It turns out that it’s not quite that simple (never forget my caveats and my caveats about caveats). It turns out that we can rather easily be tricked into thinking that a rubber hand—yuck—is our own.

In a paradigmatic experiment, participants were seated at a table, with their left arm under the table, out of view. On the table was a very humanlike rubber hand positioned like the participant’s real arm. Participants watched as the experimenter gently stroked the rubber arm with a fine paintbrush. In synchrony, the experimenter stroked the participant’s real but not visible arm with an equivalent brush, matching the rhythm. Amazingly, most participants began to think that the arm they could see, the rubber arm, was their own. They reported that what they saw was what they felt. Action, per se, is not involved in creating this illusion, but proprioceptive feedback seems to be crucial. Both hands, the participant’s real hand and the rubber hand, are immobile. What seems to underlie the illusion is sensory integration, the integration of simultaneously seeing and feeling.

If people perceive the rubber arm as their own arm, then if they watch a threat to the rubber arm, they should get alarmed. This happened in subsequent experiments. First, as before, participants experienced enough synchronous stroking of their hidden real arm and the visible rubber arm to claim ownership of the rubber arm. Then the experimenters threatened the rubber arm by initiating an attack on it with a sharp needle. At the same time, they measured activation in areas of the brain known to respond to anticipated pain, empathetic pain, and anxiety. The more participants reported ownership of the rubber hand, the greater the activation in the brain regions underlying anticipated pain (left insula, left anterior cingulate cortex) during the threatened, but aborted, attack.

The rubber hand phenomenon provides yet another explanation of why people’s body schemas enlarge to include tools but don’t seem to enlarge to include their backpacks. Ownership of a rubber hand depends on simultaneous seeing and sensing, seeing the rubber hand stroked and sensing simultaneous stroking on the real hand. We can’t see our backpacks, and whatever sensations we have are pressure or weight on our backs and shoulders, which give no clue to the width of the backpack generating the pressure.

UNDERSTANDING OTHERS’ BODIES

Now to the bodies of others. It turns out that our perception and understanding of the bodies of others are deeply connected to the actions and sensations of our own bodies. What’s more, the connection of our bodies to those of others is mediated by the very structure of the brain and the nervous system. Let’s begin again with babies, let’s say, one-year-olds. Babies that young have begun to understand the goals and intentions of the actions of others, at least for simple actions like reaching. You might wonder how we know what babies are thinking. After all, they can’t tell us (not that what we say we are thinking is necessarily reliable). We know what babies are thinking the same way we often know what adults are thinking: from what they are looking at. Sometimes actions can be more revealing than words.

The most common way researchers infer the thoughts of babies is through a paradigm known as habituation of looking. Two ideas underlie this paradigm: first, that people, even, or especially, babies, look at what they’re thinking about; and second, that what’s new grabs attention and thought. In a typical task, researchers show infants a stimulus or an event, in this case, a video of someone reaching for an object. At the same time, they monitor how much the infants are looking at the event. They show the event again, and monitor again. The researchers show the stimulus or the event over and over until the baby loses interest and looks away, that is, until the infant habituates to the event. After the infant habituates, the researchers show a new event that alters the previous one in one of two ways. They change the goal of the action by switching the object of reaching, or they switch the means of attaining the goal by changing the manner of reaching. The question of interest is whether infants will look more at the event where the goal of reaching was changed or the event where the means of attaining the goal was changed.

If the infant understands that it’s the goal that matters, not the means to the goal, the infant should look more when the goal changes than when the means changes. At ten months, infants were indifferent to the changes; they looked equally at both. Both events were new, and the infants didn’t regard a change of goal as more interesting than a change of manner of attaining the goal. That changed in only two months. Twelve-month-old infants looked more when the goal changed than when the means to the goal changed. A leap of understanding of goal-directed behavior in two months.

More support for the notion that one-year-olds understand action-goal couplings comes from tracking their eye movements as they watch someone reaching. Remarkably, the eye movements of one-year-old infants jump to the goal of the action before the hand even reaches the goal, suggesting that they anticipate the goal.

Perhaps even more impressive is what happens earlier still, at three months. At that tender age, if infants have performed similar actions, they are more likely to understand the goals of others’ actions. At three months, infants don’t have good motor control; they cannot yet reach and grasp reliably, and their hands flail about. The clever experimenters put mittens with Velcro on the babies’ hands and placed a toy in front of them. Eventually, with enough flailing, the Velcroed hand would catch the toy. The infants who had had practice “grasping” objects in this way anticipated the viewed reaching and grasping actions of others more reliably than infants without that practice.

This is remarkable evidence that infants can understand the intentions behind the actions of others. Not all intentions and actions, of course, but reaching for an object is an important and common one, and there are undoubtedly others. Understanding others’ intentions comes about in part because of experience enacting similar actions with similar intentions. Moreover, as we shall see next, it has become clear that the very structure of the brain is primed for understanding observed action, through the mirror neuron system.

MIRROR NEURONS

In the late 1980s, a group of neuroscientists in Parma, Italy, led by Giacomo Rizzolatti, made a surprising discovery. They implanted tiny electrodes in individual neurons in the premotor cortex (inferior frontal gyrus and inferior parietal lobe) of macaque monkeys, allowing them to record activity in single neurons in animals who were moving about as they normally do. They found single neurons that fired when the monkey performed a specific action, like grasping or throwing. What was remarkable was that the exact same neuron fired when the animal saw someone else, in this case, a human, perform the same action. They called these remarkable neurons mirror neurons.

Praise

  • "An earnest effort to describe how our physical movements and the movements of those around us shape our consciousness...A well-informed book that will appeal to psychology buffs willing to pay close attention."—Kirkus
  • "This beautifully written book engages you in a one-on-one conversation with a rich and fascinating mind. It will guide you on a tour of your own experience and show you a new way to think about thinking."—Daniel Kahneman, author of Thinking, Fast and Slow
  • "An intriguing exploration of the spatial thinking that is embedded in our reasoning, our language, and our culture, from one of the world's leading researchers on these topics."—Steven Pinker, Johnstone Professor of Psychology at Harvard University and author of How the Mind Works
  • "In this engrossing new book, Tversky shows how motion, actions, and bodies are fundamental to the way we think. The mind extends from the brain and body to the world and environment, building upon how we perceive and manipulate our bodies and the objects around us. Truly engaging. Truly important."—Don Norman, director of The Design Lab at University of California, San Diego and author of The Design of Everyday Things
  • "Nimbly maneuvering between data, scientific theory, and extraordinary personal insight, Tversky elegantly establishes spatial thinking as core to our very existence as humans. Ranging from physics to linguistics to design, this sophisticated new book distills the author's expertise into a compelling geometry of facts: a delight for experts and accidental readers alike."—Paola Antonelli, Senior Curator of Architecture & Design at the Museum of Modern Art

On Sale
May 21, 2019
Page Count
384 pages
Publisher
Basic Books
ISBN-13
9780465093069

Barbara Tversky

About the Author

Barbara Tversky is an emerita professor of psychology at Stanford University and a professor of psychology at Teachers College at Columbia University. She is also the President of the Association for Psychological Science. Tversky has published over 200 scholarly articles about memory, spatial thinking, design, and creativity, and regularly speaks about embodied cognition at interdisciplinary conferences and workshops around the world. She lives in New York.
