Intelligence Reframed

Multiple Intelligences for the 21st Century


By Howard E. Gardner

Formats and Prices

  1. ebook $12.99 ($16.99 CAD)
  2. Trade Paperback $18.99 ($23.99 CAD)


Harvard psychologist Howard Gardner has been acclaimed as the most influential educational theorist since John Dewey. His ideas about intelligence and creativity – explicated in such bestselling books as Frames of Mind and Multiple Intelligences (over 200,000 copies in print combined) – have revolutionized our thinking. In his groundbreaking 1983 book Frames of Mind, Howard Gardner first introduced the theory of multiple intelligences, which posits that intelligence is more than a single property of the human mind. That theory has become widely accepted as one of the seminal ideas of the twentieth century and continues to attract attention all over the world. Now in Intelligence Reframed, Gardner provides a much-needed report on the theory, its evolution and revisions. He offers practical guidance on the educational uses of the theory and responds to the critiques leveled against him. He also introduces two new intelligences (existential intelligence and naturalist intelligence) and argues that the concept of intelligence should be broadened, but not so absurdly that it includes every human virtue and value. Ultimately, argues Gardner, possessing a basic set of seven or eight intelligences is not only a unique trademark of the human species, but also perhaps even a working definition of the species. Gardner also offers provocative ideas about creativity, leadership, and moral excellence, and speculates about the relationship between multiple intelligences and the world of work in the future.



Other Books by Howard Gardner

  • The Quest for Mind (1973)
  • The Arts and Human Development (1973)
  • The Shattered Mind (1975)
  • Developmental Psychology (1978)
  • Artful Scribbles (1980)
  • Art, Mind, and Brain (1982)
  • Frames of Mind (1983)
  • The Mind’s New Science (1985)
  • To Open Minds (1989)
  • The Unschooled Mind (1991)
  • Creating Minds (1993)
  • Multiple Intelligences (1993)
  • Leading Minds (1996)
  • Intelligence: Multiple Perspectives (with Kornhaber and Wake) (1996)
  • Extraordinary Minds (1997)
  • The Disciplined Mind (1999)








This book draws heavily on essays written in the 1990s. For their critical comments on one or more of these essays, I thank Thomas Armstrong, Eric Blumenson, Veronica Boix-Mansilla, Mihaly Csikszentmihalyi, Patricia Bolanos, Antonio Damasio, William Damon, Reuven Feuerstein, Daniel Goleman, Tom Hatch, Tom Hoerr, Jeff Kane, Paul Kaufman, Mindy Kornhaber, Mara Krechevsky, Jonathan Levy, Tanya Luhrmann, Robert Ornstein, David Perkins, Charles Reigeluth, Courtney Ross, Mark Runco, Mark Turner, Julie Viens, Joe Walters, E. O. Wilson, and Ellen Winner. Jo Ann Miller, Donya Levine, Richard Fumosa, and Sharon Sharp of Perseus Books aided ably in the editorial process.

In particular, I have drawn on the following:

Gardner, H. “Reflections on Multiple Intelligences: Myths and Messages.” Phi Delta Kappan 77, no. 3 (1995): 200–209.

______. “Are There Additional Intelligences?” In Education, Information, and Transformation, ed. J. Kane. Upper Saddle River, N.J.: Prentice Hall, 1999.

______. “Multiple Approaches to Understanding.” In Instructional-Design Theories and Models: A New Paradigm of Instructional Theory, ed. C. Reigeluth. Mahwah, N.J.: Erlbaum, 1999.

______. “Who Owns Intelligence?” Atlantic Monthly, February 1999, 67–76.

The work described in these essays has been made possible by generous funders. I would like to thank the Bauman Foundation, the Carnegie Corporation, the Nathan Cummings Foundation, Jeffrey Epstein, the Fetzer Institute, the Ford Foundation, the William T. Grant Foundation, the William and Flora Hewlett Foundation, the Christian A. Johnson Endeavor Foundation, Thomas H. Lee, the New American Schools Development Corporation, the Jesse Phillips Foundation, the Rockefeller Brothers Foundation, the Rockefeller Foundation, the Louise and Claude Rosenberg Jr. Family Foundation, the Ross Family Charitable Foundation, the Spencer Foundation, and the Bernard Van Leer Foundation, as well as a generous funder who wishes to remain anonymous.

Finally, I want to thank, though not by name, the many people in the United States and abroad who have worked with me to develop the implications of the theory of multiple intelligences. Many of you are listed in the appendices. To all of you, I express my heartfelt gratitude.

Cambridge, MA
June 1999



EVERY SOCIETY FEATURES its ideal human being. The ancient Greeks valued the person who displayed physical agility, rational judgment, and virtuous behavior. The Romans highlighted manly courage, and followers of Islam prized the holy soldier. Under the influence of Confucius, Chinese populations traditionally valued the person who was skilled in poetry, music, calligraphy, archery, and drawing. Among the Keres tribe of the Pueblo Indians today, the person who cares for others is held in high regard.

Over the past few centuries, particularly in Western societies, a certain ideal has become pervasive: that of the intelligent person. The exact dimensions of that ideal evolve over time and setting. In traditional schools, the intelligent person could master classical languages and mathematics, particularly geometry. In a business setting, the intelligent person could anticipate commercial opportunities, take measured risks, build up an organization, and keep the books balanced and the stockholders satisfied. At the beginning of the twentieth century, the intelligent person was one who could be dispatched to the far corners of an empire and who could then execute orders competently. Such notions remain important to many people.

As the turn of this millennium approaches, however, a premium has been placed on two new intellectual virtuosos: the “symbol analyst” and the “master of change.”1 A symbol analyst can sit for hours in front of a string of numbers and words, usually displayed on a computer screen, and readily discern meaning in this thicket of symbols. This person can then make reliable, useful projections. A master of change readily acquires new information, solves problems, forms “weak ties” with mobile and highly dispersed people, and adjusts easily to changing circumstances.

Those charged with guiding a society have always been on the lookout for intelligent young people. Two thousand years ago, Chinese imperial officials administered challenging examinations to identify those who could join and direct the bureaucracy. In the Middle Ages, church leaders searched for students who displayed a combination of studiousness, shrewdness, and devotion. In the late nineteenth century, Francis Galton, one of the founders of modern psychological measurement, thought that intelligence ran in families, and so he looked for intelligence in the offspring of those who occupied leading positions in British society.

Galton did not stop with hereditary lineages, however. He also believed that intelligence could be measured more directly. Beginning around 1870, he began to devise more formal tests of intelligence, ones consistent with the emerging view of the human mind as subject to measurement and experimentation. Galton thought that more intelligent persons would exhibit greater sensory acuity, and so the first formal measures of intelligence probed the ways in which individuals distinguished among sounds of different loudness, lights of different brightness, and objects of different weight. As it turned out, Galton (who thought himself very intelligent) bet on indices of intelligence that proved unrevealing for his purposes. But in his wager on the possibility of measuring intelligence, he was proved correct.

Since Galton’s time, countless people have avidly pursued the best ways of defining, measuring, and nurturing intelligence. Intelligence tests represent but the tip of the cognitive iceberg. In the United States, tests such as the Scholastic Assessment Test, the Miller Analogies Test, and the various primary, secondary, graduate, and professional examinations are all based on technology originally developed to test intelligence. Even assessments that are deliberately focused on measuring achievement (as opposed to “aptitude” or “potential for achievement”) often strongly resemble traditional tests of intelligence. Similar testing trends have occurred in many other nations as well. It is likely that efforts to measure intelligence will continue and, indeed, become more widespread in the future. Certainly, the prospect of devising robust measures of a highly valued human trait is attractive, for example, for those faced with decisions about educational placement or employment. And the press to determine who is intelligent and to do so at the earliest possible age is hardly going to disappear.

Despite the strong possibility that intelligence testing will remain with us indefinitely, this book is based on a different premise, namely, that intelligence is too important to be left to the intelligence testers. Just in the past half century, our understanding of the human mind and the human brain has been fundamentally altered. For example, we now understand that the human mind, reflecting the structure of the brain, is composed of many separate modules or faculties. At the same time, in the light of scientific and technological changes, the needs and desires of cultures all over the world have undergone equally dramatic shifts. We are faced with a stark choice: either to continue with the traditional views of intelligence and how it should be measured or to come up with a different, and better, way of conceptualizing the human intellect. In this book, I adopt the latter tack. I present evidence that human beings possess a range of capacities and potentials–multiple intelligences–that, both individually and in concert, can be put to many productive uses. Individuals can not only come to understand their multiple intelligences but also deploy them in maximally flexible and productive ways within the human roles that various societies have created. Multiple intelligences can be mobilized at school, at home, at work, or on the street–that is, throughout the various institutions of a society.

But the task for the new millennium is not merely to hone our various intelligences and use them properly. We must figure out how intelligence and morality can work together to create a world in which a great variety of people will want to live. After all, a society led by “smart” people still might blow up itself or the rest of the world. Intelligence is valuable but, as Ralph Waldo Emerson famously remarked, “Character is higher than intellect.” That insight applies at both the individual and the societal levels.


In Chapter 2, I describe the traditional scientific view of intelligence. I introduce my own view–the theory of multiple intelligences–in Chapter 3. While this theory was developed nearly two decades ago, it has not remained static. Thus, in Chapters 4 and 5, I consider several new candidate intelligences, including naturalist, spiritual, existential, and moral ones. In Chapter 6, I address some of the questions and criticisms that have arisen about the theory and I dispel some of the more prominent myths. I treat other controversial issues in Chapter 7. And I explore in Chapter 8 the relationships among intelligence, creativity, and leadership.

The next three chapters focus on ways in which the theory of multiple intelligences can be applied. Chapters 9 and 10 are devoted to a discussion of the theory in scholastic settings, and in Chapter 11 I discuss its applications in the wider world. Finally, returning to the issues raised in Chapter 1, in Chapter 12 I explore my answer to the provocative question “Who owns intelligence?”

Since my presentation of the theory almost twenty years ago, an enormous secondary literature has developed around it. And many individuals have propagated the theory in various ways. In the appendices, I present an up-to-date listing of my own writings on the theory, writings by other scholars who have devoted books or major articles to the theory, selected miscellaneous materials, and key individuals in the United States and abroad who have contributed to the development of the theory or related practices. I provided a similar, but much smaller, listing of resources in Multiple Intelligences: The Theory in Practice, completed in 1992. I am humbled by the continued and growing interest in the theory, and proud that it has touched so many people all over the world.

1Reference notes are found at the end of the book.



In the fall of 1994, an unusual event occurred in the book-publishing industry. An eight-hundred-page book, written by two scholars and including two hundred pages of statistical appendices, was issued by a general trade publisher. The manuscript had been kept under embargo and therefore had not been seen by potential reviewers. Despite (or perhaps because of) this secrecy, The Bell Curve, by Richard J. Herrnstein and Charles Murray, received front-page coverage in the weekly news magazines and became a major topic of discussion in the media and around dinner tables. Indeed, one would have had to go back half a century to a landmark treatise on black-white relations, Gunnar Myrdal’s An American Dilemma, to find a social science book that engendered a comparable buzz.

Even in retrospect, it is difficult to know fully what contributed to the notoriety surrounding The Bell Curve. None of the book’s major arguments were new to the educated public. Herrnstein, a Harvard psychology professor, and Murray, an American Enterprise Institute political scientist, argued that intelligence is best thought of as a single property distributed within the general population along a bell-shaped curve. That is, comparatively few people have very high intelligence (say, IQ over 130), comparatively few have very low intelligence (IQ under 70), and most people are clumped together somewhere in between (IQ from 85 to 115). Moreover, the authors adduced evidence that intelligence is to a significant extent inherited–that is, within a defined population, the variation in measured intelligence is due primarily to the genetic contributions of one’s biological parents.

These claims were fairly well known and hardly startling. But Herrnstein and Murray went further. They moved well beyond a discussion of measuring intelligence to claim that many of our current social ills are due to the behaviors and capacities of people with relatively low intelligence. The authors made considerable use of the National Longitudinal Survey of Youth, a rich data set of over 12,000 youths who have been followed since 1979. The population was selected in such a way as to include adequate representation from various social, ethnic, and racial groups; members of the group took a set of cognitive and aptitude measures under well-controlled conditions. On the basis of these data, the authors presented evidence that those with low intelligence are more likely to be on welfare, to be involved in crime, to come from broken homes, to drop out of school, and to exhibit other forms of social pathology. And while they did not take an explicit stand on the well-known data showing higher IQs among whites than among blacks, they left the clear impression that these differences were difficult to change and, therefore, probably were a product of genetic factors.

I have labeled the form of argument in The Bell Curve “rhetorical brinkmanship.” Instead of stating the unpalatable, the authors lead readers to a point where they are likely to draw a certain conclusion on their own. And so, while Herrnstein and Murray claimed to remain “resolutely neutral” on the sources of black-white differences in intelligence, the evidence they presented strongly suggests a genetic basis for the disparity. Similarly, while they did not recommend eugenic practices, they repeatedly used the following form of reasoning: Social pathology is due to low intelligence, and intelligence cannot be significantly changed through societal interventions. The reader is drawn, almost ineluctably, to conclude that “we” (the intelligent reader, of course) must find a way to reduce the number of “unintelligent” people.

The reviews of The Bell Curve were primarily negative, with the major exception of those in politically conservative publications. Scholars were extremely critical, particularly in regard to the alleged links between low intelligence and social pathology. Not surprisingly, the authors’ conclusions about intelligence have been endorsed by many psychologists who specialize in measurement and on whose work much of the book was built.

Why the fuss over a book that offered few new ideas and dubious scholarship? I would not minimize the skill of the publisher, who kept the book under wraps from scholars while making sure that it got into the hands of people who would promote it or write at length about it. The application of seemingly scientific objectivity to racial issues on which many people hold private views may also have contributed to the book’s success. But my own, admittedly more cynical, view is that a demand arises every twenty-five years or so for a restatement of the “nature,” or hereditary explanation, of intelligence. Supporting this view is the fact that the Harvard Educational Review in 1969 published a controversial article titled “How Much Can We Boost IQ and Scholastic Achievement?” The author, the psychologist Arthur Jensen, harshly criticized the effectiveness of early childhood intervention programs like Head Start. He said that such programs did not genuinely aid disadvantaged children and suggested that perhaps black children needed to be taught in a different way.

Just one year after the appearance of The Bell Curve, another book was published to even greater acclaim. In most respects, Emotional Intelligence, by the New York Times reporter and psychologist Daniel Goleman, could not have been more different from The Bell Curve. Issued by a mass-market trade publisher, Goleman’s short book was filled with anecdotes and presented only a few scattered statistics. Moreover, in sharp contrast to The Bell Curve, Emotional Intelligence contained a dim view of the entire psychometric tradition, as indicated by its subtitle: Why It Can Matter More Than IQ.

In Emotional Intelligence, Goleman argued that our world has largely ignored a tremendously significant set of skills and abilities–those dealing with people and emotions. In particular, Goleman wrote about the importance of recognizing one’s own emotional life, regulating one’s own feelings, understanding others’ emotions, being able to work with others, and having empathy for others. He described ways of enhancing these capacities, particularly among children. More generally, he argued that the world could be more hospitable if we cultivated emotional intelligence as diligently as we now promote cognitive intelligence. Emotional Intelligence may well be the best-selling social science book ever published. By 1998, it had sold over 3 million copies worldwide, and in countries as diverse as Brazil and Taiwan it has remained on the best-seller list for unprecedented lengths of time. On the surface, it is easy to see why Emotional Intelligence is so appealing to readers. Its message is hopeful, and the author tells readers how to enhance their own emotional intelligence and that of others close to them. And–this is meant without disrespect–the message of the book is contained in its title and its subtitle.

I often wonder whether the readers of The Bell Curve have also read Emotional Intelligence. Can one be a fan of both books? There are probably gender and disciplinary differences in the audiences: To put it sharply, if not stereotypically, business people and tough-minded social scientists are probably more likely to gravitate toward The Bell Curve, while teachers, social workers, and parents are probably more likely to embrace Emotional Intelligence. (However, a successor volume, Goleman’s Working with Emotional Intelligence, sought to attract the former audiences, too.) But I suspect that there is also some overlap. Clearly, educators, business people, parents, and many others realize that the concept of intelligence is important and that conceptualizations of it are changing more rapidly than ever before.


By 1860 Charles Darwin had established the scientific case for the origin and evolution of all species. Darwin had also become curious about the origin and development of psychological traits, including intellectual and emotional ones. It did not take long before a wide range of scholars began to ponder the intellectual differences across the species, as well as within specific groups, such as infants, children, adults, or the “feeble-minded” and “eminent geniuses.” Much of this pondering occurred in the armchair; it was far easier to speculate about differences in intellectual power among dogs, chimpanzees, and people of different cultures than to gather comparative data relevant to these putative differences. It is perhaps not a coincidence that Darwin’s cousin, the polymath Francis Galton, was the first to establish an anthropometric laboratory for the purpose of assembling empirical evidence of people’s intellectual differences.

Still, the honor of having fashioned the first intelligence test is usually awarded to Alfred Binet, a French psychologist particularly interested in children and education. In the early 1900s, families were flocking into Paris from the provinces and from far-flung French territories, and some of the children from these families were having great difficulty with schoolwork. Binet and his colleague Théodore Simon were approached by the French Ministry of Education to help predict which children were at risk for school failure. Proceeding in a completely empirical fashion, Binet administered hundreds of test questions to these children. He wanted to identify a set of questions that were discriminating, that is, items that, when passed, predicted success in school and, when failed, predicted difficulty in school.

Like Galton, Binet began with largely sensory-based items but soon discovered the superior predictive power of other, more “scholastic” questions. From Binet’s time on, intelligence tests have been heavily weighted toward measuring verbal memory, verbal reasoning, numerical reasoning, appreciation of logical sequences, and ability to state how one would solve problems of daily living. Without fully realizing it, Binet had invented the first tests of intelligence.

A few years later, in 1912, the German psychologist Wilhelm Stern came up with the name and measure of the “intelligence quotient,” or the ratio of one’s mental age to one’s chronological age, with the ratio to be multiplied by 100 (which is why it is better to have an IQ of 130 than one of 70).

Like many Parisian fashions of the day, the IQ test made its way across the Atlantic–with a vengeance–and became Americanized during the 1920s and 1930s. Whereas Binet’s test had been administered one on one, American psychometricians–led by Stanford University psychologist Lewis Terman and the Harvard professor and army major Robert Yerkes–prepared paper-and-pencil (and, later, machine-scorable) versions that could be administered easily to many individuals. Since specific instructions were written out and norms were created, test takers could be examined under uniform conditions and their scores could be compared. Certain populations elicited special interest; much was written about the IQs of mentally deficient people, putative young geniuses, U.S. Army recruits, members of different racial and ethnic groups, and immigrants from northern, central, and southern Europe. By the mid-1920s, the intelligence test had become a fixture in educational practice in the United States and throughout much of western Europe.

Early intelligence tests were not without their critics. Many enduring concerns were first raised by the influential American journalist Walter Lippmann. In a series of debates with Lewis Terman, published in the New Republic, Lippmann criticized the test items’ superficiality and possible cultural biases, and he noted the risks associated with assessing an individual’s intellectual potential via a single, brief oral or paper-and-pencil method. IQ tests were also the subject of countless jokes and cartoons. Still, by sticking to their tests and their tables of norms, the psychometricians were able to defend their instruments, even as they made their way back and forth among the halls of academe; their testing cubicles in schools, hospitals, and employment agencies; and the vaults in their banks.


On Sale
Sep 18, 2000
Page Count
300 pages
Basic Books

Howard E. Gardner

About the Author

Howard Gardner is the John H. and Elisabeth A. Hobbs Professor of Cognition and Education at the Harvard Graduate School of Education and Senior Director of Harvard Project Zero. The author of more than twenty books and the recipient of a MacArthur Fellowship and twenty-one honorary degrees, he lives in Cambridge, Massachusetts.
