The Vanishing

In “Collapse,” Jared Diamond shows how societies destroy themselves.


A thousand years ago, a group of Vikings led by Erik the Red set sail from Iceland for the vast Arctic landmass west of Scandinavia which came to be known as Greenland. It was largely uninhabitable—a forbidding expanse of snow and ice. But along the southwestern coast there were two deep fjords protected from the harsh winds and saltwater spray of the North Atlantic Ocean, and as the Norse sailed upriver they saw grassy slopes flowering with buttercups, dandelions, and bluebells, and thick forests of willow and birch and alder. Two colonies were formed, three hundred miles apart, known as the Eastern and Western Settlements. The Norse raised sheep, goats, and cattle. They turned the grassy slopes into pastureland. They hunted seal and caribou. They built a string of parish churches and a magnificent cathedral, the remains of which are still standing. They traded actively with mainland Europe, and tithed regularly to the Roman Catholic Church. The Norse colonies in Greenland were law-abiding, economically viable, fully integrated communities, numbering at their peak five thousand people. They lasted for four hundred and fifty years—and then they vanished.

The story of the Eastern and Western Settlements of Greenland is told in Jared Diamond’s “Collapse: How Societies Choose to Fail or Succeed” (Viking; $29.95). Diamond teaches geography at U.C.L.A. and is well known for his best-seller “Guns, Germs, and Steel,” which won a Pulitzer Prize. In “Guns, Germs, and Steel,” Diamond looked at environmental and structural factors to explain why Western societies came to dominate the world. In “Collapse,” he continues that approach, only this time he looks at history’s losers—like the Easter Islanders, the Anasazi of the American Southwest, the Mayans, and the modern-day Rwandans. We live in an era preoccupied with the way that ideology and culture and politics and economics help shape the course of history. But Diamond isn’t particularly interested in any of those things—or, at least, he’s interested in them only insofar as they bear on what to him is the far more important question, which is a society’s relationship to its climate and geography and resources and neighbors. “Collapse” is a book about the most prosaic elements of the earth’s ecosystem—soil, trees, and water—because societies fail, in Diamond’s view, when they mismanage those environmental factors.

There was nothing wrong with the social organization of the Greenland settlements. The Norse built a functioning reproduction of the predominant northern-European civic model of the time—devout, structured, and reasonably orderly. In 1408, right before the end, records from the Eastern Settlement dutifully report that Thorstein Olafsson married Sigrid Bjornsdotter in Hvalsey Church on September 14th of that year, with Brand Halldorstson, Thord Jorundarson, Thorbjorn Bardarson, and Jon Jonsson as witnesses, following the proclamation of the wedding banns on three consecutive Sundays.

The problem with the settlements, Diamond argues, was that the Norse thought that Greenland really was green; they treated it as if it were the verdant farmland of southern Norway. They cleared the land to create meadows for their cows, and to grow hay to feed their livestock through the long winter. They chopped down the forests for fuel, and for the construction of wooden objects. To make houses warm enough for the winter, they built their homes out of six-foot-thick slabs of turf, which meant that a typical home consumed about ten acres of grassland.

But Greenland’s ecosystem was too fragile to withstand that kind of pressure. The short, cool growing season meant that plants developed slowly, which in turn meant that topsoil layers were shallow and lacking in soil constituents, like organic humus and clay, that hold moisture and keep soil resilient in the face of strong winds. “The sequence of soil erosion in Greenland begins with cutting or burning the cover of trees and shrubs, which are more effective at holding soil than is grass,” he writes. “With the trees and shrubs gone, livestock, especially sheep and goats, graze down the grass, which regenerates only slowly in Greenland’s climate. Once the grass cover is broken and the soil is exposed, soil is carried away especially by the strong winds, and also by pounding from occasionally heavy rains, to the point where the topsoil can be removed for a distance of miles from an entire valley.” Without adequate pastureland, the summer hay yields shrank; without adequate supplies of hay, keeping livestock through the long winter got harder. And, without adequate supplies of wood, getting fuel for the winter became increasingly difficult.

The Norse needed to reduce their reliance on livestock—particularly cows, which consumed an enormous amount of agricultural resources. But cows were a sign of high status; to northern Europeans, beef was a prized food. They needed to copy the Inuit practice of burning seal blubber for heat and light in the winter, and to learn from the Inuit the difficult art of hunting ringed seals, which were the most reliably plentiful source of food available in the winter. But the Norse had contempt for the Inuit—they called them skraelings, “wretches”—and preferred to practice their own brand of European agriculture. In the summer, when the Norse should have been sending ships on lumber-gathering missions to Labrador, in order to relieve the pressure on their own forestlands, they instead sent boats and men to the coast to hunt for walrus. Walrus tusks, after all, had great trade value. In return for those tusks, the Norse were able to acquire, among other things, church bells, stained-glass windows, bronze candlesticks, Communion wine, linen, silk, silver, churchmen’s robes, and jewelry to adorn their massive cathedral at Gardar, with its three-ton sandstone building blocks and eighty-foot bell tower. In the end, the Norse starved to death.


Diamond’s argument stands in sharp contrast to the conventional explanations for a society’s collapse. Usually, we look for some kind of cataclysmic event. The aboriginal civilization of the Americas was decimated by the sudden arrival of smallpox. European Jewry was destroyed by Nazism. Similarly, the disappearance of the Norse settlements is usually blamed on the Little Ice Age, which descended on Greenland in the early fourteen-hundreds, ending several centuries of relative warmth. (One archeologist refers to this as the “It got too cold, and they died” argument.) What all these explanations have in common is the idea that civilizations are destroyed by forces outside their control, by acts of God.

But look, Diamond says, at Easter Island. Once, it was home to a thriving culture that produced the enormous stone statues that continue to inspire awe. It was home to dozens of species of trees, which created and protected an ecosystem fertile enough to support as many as thirty thousand people. Today, it’s a barren and largely empty outcropping of volcanic rock. What happened? Did a rare plant virus wipe out the island’s forest cover? Not at all. The Easter Islanders chopped their trees down, one by one, until they were all gone. “I have often asked myself, ‘What did the Easter Islander who cut down the last palm tree say while he was doing it?'” Diamond writes, and that, of course, is what is so troubling about the conclusions of “Collapse.” Those trees were felled by rational actors—who must have suspected that the destruction of this resource would result in the destruction of their civilization. The lesson of “Collapse” is that societies, as often as not, aren’t murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death.

This doesn’t mean that acts of God don’t play a role. It did get colder in Greenland in the early fourteen-hundreds. But it didn’t get so cold that the island became uninhabitable. The Inuit survived long after the Norse died out, and the Norse had all kinds of advantages, including a more diverse food supply, iron tools, and ready access to Europe. The problem was that the Norse simply couldn’t adapt to the country’s changing environmental conditions. Diamond writes, for instance, of the fact that nobody can find fish remains in Norse archeological sites. One scientist sifted through tons of debris from the Vatnahverfi farm and found only three fish bones; another researcher analyzed thirty-five thousand bones from the garbage of another Norse farm and found two fish bones. How can this be? Greenland is a fisherman’s dream: Diamond describes running into a Danish tourist in Greenland who had just caught two Arctic char in a shallow pool with her bare hands. “Every archaeologist who comes to excavate in Greenland . . . starts out with his or her own idea about where all those missing fish bones might be hiding,” he writes. “Could the Norse have strictly confined their munching on fish to within a few feet of the shoreline, at sites now underwater because of land subsidence? Could they have faithfully saved all their fish bones for fertilizer, fuel, or feeding to cows?” It seems unlikely. There are no fish bones in Norse archeological remains, Diamond concludes, for the simple reason that the Norse didn’t eat fish. For one reason or another, they had a cultural taboo against it.

Given the difficulty that the Norse had in putting food on the table, this was insane. Eating fish would have substantially reduced the ecological demands of the Norse settlements. The Norse would have needed fewer livestock and less pastureland. Fishing is not nearly as labor-intensive as raising cattle or hunting caribou, so eating fish would have freed time and energy for other activities. It would have diversified their diet.

Why did the Norse choose not to eat fish? Because they weren’t thinking about their biological survival. They were thinking about their cultural survival. Food taboos are one of the idiosyncrasies that define a community. Not eating fish served the same function as building lavish churches, and doggedly replicating the untenable agricultural practices of their land of origin. It was part of what it meant to be Norse, and if you are going to establish a community in a harsh and forbidding environment all those little idiosyncrasies which define and cement a culture are of paramount importance. “The Norse were undone by the same social glue that had enabled them to master Greenland’s difficulties,” Diamond writes. “The values to which people cling most stubbornly under inappropriate conditions are those values that were previously the source of their greatest triumphs over adversity.” He goes on:

To us in our secular modern society, the predicament in which the Greenlanders found themselves is difficult to fathom. To them, however, concerned with their social survival as much as their biological survival, it was out of the question to invest less in churches, to imitate or intermarry with the Inuit, and thereby to face an eternity in Hell just in order to survive another winter on Earth.

Diamond’s distinction between social and biological survival is a critical one, because too often we blur the two, or assume that biological survival is contingent on the strength of our civilizational values. That was the lesson taken from the two world wars and the nuclear age that followed: we would survive as a species only if we learned to get along and resolve our disputes peacefully. The fact is, though, that we can be law-abiding and peace-loving and tolerant and inventive and committed to freedom and true to our own values and still behave in ways that are biologically suicidal. The two kinds of survival are separate.

Diamond points out that the Easter Islanders did not practice, so far as we know, a uniquely pathological version of South Pacific culture. Other societies, on other islands in the Pacific, chopped down trees and farmed and raised livestock just as the Easter Islanders did. What doomed the Easter Islanders was the interaction between what they did and where they were. Diamond and a colleague, Barry Rolett, identified nine physical factors that contributed to the likelihood of deforestation—including latitude, average rainfall, aerial-ash fallout, proximity to Central Asia’s dust plume, size, and so on—and Easter Island ranked at the high-risk end of nearly every variable. “The reason for Easter’s unusually severe degree of deforestation isn’t that those seemingly nice people really were unusually bad or improvident,” he concludes. “Instead, they had the misfortune to be living in one of the most fragile environments, at the highest risk for deforestation, of any Pacific people.” The problem wasn’t the Easter Islanders. It was Easter Island.

In the second half of “Collapse,” Diamond turns his attention to modern examples, and one of his case studies is the recent genocide in Rwanda. What happened in Rwanda is commonly described as an ethnic struggle between the majority Hutu and the historically dominant, wealthier Tutsi, and it is understood in those terms because that is how we have come to explain much of modern conflict: Serb and Croat, Jew and Arab, Muslim and Christian. The world is a cauldron of cultural antagonism. It’s an explanation that clearly exasperates Diamond. The Hutu didn’t just kill the Tutsi, he points out. The Hutu also killed other Hutu. Why? Look at the land: steep hills farmed right up to the crests, without any protective terracing; rivers thick with mud from erosion; extreme deforestation leading to irregular rainfall and famine; staggeringly high population densities; the exhaustion of the topsoil; falling per-capita food production. This was a society on the brink of ecological disaster, and if there is anything that is clear from the study of such societies it is that they inevitably descend into genocidal chaos. In “Collapse,” Diamond quite convincingly defends himself against the charge of environmental determinism. His discussions are always nuanced, and he gives political and ideological factors their due. The real issue is how, in coming to terms with the uncertainties and hostilities of the world, the rest of us have turned ourselves into cultural determinists.


For the past thirty years, Oregon has had one of the strictest sets of land-use regulations in the nation, requiring new development to be clustered in and around existing urban development. The laws meant that Oregon has done perhaps the best job in the nation in limiting suburban sprawl, and protecting coastal lands and estuaries. But this November Oregon’s voters passed a ballot referendum, known as Measure 37, that rolled back many of those protections. Specifically, Measure 37 said that anyone who could show that the value of his land was affected by regulations implemented since its purchase was entitled to compensation from the state. If the state declined to pay, the property owner would be exempted from the regulations.

To call Measure 37—and similar referendums that have been passed recently in other states—intellectually incoherent is to put it mildly. It might be that the reason your hundred-acre farm on a pristine hillside is worth millions to a developer is that it’s on a pristine hillside: if everyone on that hillside could subdivide, and sell out to Target and Wal-Mart, then nobody’s plot would be worth millions anymore. Will the voters of Oregon then pass Measure 38, allowing them to sue the state for compensation over damage to property values caused by Measure 37?

It is hard to read “Collapse,” though, and not have an additional reaction to Measure 37. Supporters of the law spoke entirely in the language of political ideology. To them, the measure was a defense of property rights, preventing the state from unconstitutional “takings.” If you replaced the term “property rights” with “First Amendment rights,” this would have been indistinguishable from an argument over, say, whether charitable groups ought to be able to canvass in malls, or whether cities can control the advertising they sell on the sides of public buses. As a society, we do a very good job with these kinds of debates: we give everyone a hearing, and pass laws, and make compromises, and square our conclusions with our constitutional heritage—and in the Oregon debate the quality of the theoretical argument was impressively high.

The thing that got lost in the debate, however, was the land. In a rapidly growing state like Oregon, what, precisely, are the state’s ecological strengths and vulnerabilities? What impact will changed land-use priorities have on water and soil and cropland and forest? One can imagine Diamond writing about the Measure 37 debate, and he wouldn’t be very impressed by how seriously Oregonians wrestled with the problem of squaring their land-use rules with their values, because to him a society’s environmental birthright is not best discussed in those terms. Rivers and streams and forests and soil are a biological resource. They are a tangible, finite thing, and societies collapse when they get so consumed with addressing the fine points of their history and culture and deeply held beliefs—with making sure that Thorstein Olafsson and Sigrid Bjornsdotter are married before the right number of witnesses following the announcement of wedding banns on the right number of Sundays—that they forget that the pastureland is shrinking and the forest cover is gone.

When archeologists looked through the ruins of the Western Settlement, they found plenty of the big wooden objects that were so valuable in Greenland—crucifixes, bowls, furniture, doors, roof timbers—which meant that the end came too quickly for anyone to do any scavenging. And, when the archeologists looked at the animal bones left in the debris, they found the bones of newborn calves, meaning that the Norse, in that final winter, had given up on the future. They found toe bones from cows, equal to the number of cow spaces in the barn, meaning that the Norse ate their cattle down to the hoofs, and they found the bones of dogs covered with knife marks, meaning that, in the end, they had to eat their pets. But not fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.
Brain Candy

Is pop culture dumbing us down or smartening us up?


Twenty years ago, a political philosopher named James Flynn uncovered a curious fact. Americans—at least, as measured by I.Q. tests—were getting smarter. This fact had been obscured for years, because the people who give I.Q. tests continually recalibrate the scoring system to keep the average at 100. But if you took out the recalibration, Flynn found, I.Q. scores showed a steady upward trajectory, rising by about three points per decade, which means that a person whose I.Q. placed him in the top ten per cent of the American population in 1920 would today fall in the bottom third. Some of that effect, no doubt, is a simple by-product of economic progress: in the surge of prosperity during the middle part of the last century, people in the West became better fed, better educated, and more familiar with things like I.Q. tests. But, even as that wave of change has subsided, test scores have continued to rise—not just in America but all over the developed world. What’s more, the increases have not been confined to children who go to enriched day-care centers and private schools. The middle part of the curve—the people who have supposedly been suffering from a deteriorating public-school system and a steady diet of lowest-common-denominator television and mindless pop music—has increased just as much. What on earth is happening? In the wonderfully entertaining “Everything Bad Is Good for You” (Riverhead; $23.95), Steven Johnson proposes that what is making us smarter is precisely what we thought was making us dumber: popular culture.
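Flynn’s three-points-a-decade figure is what turns a 1920 top-ten-per-cent score into a bottom-third score today, and the arithmetic can be checked with a short sketch. (A sketch only, on stated assumptions: the conventional I.Q. scale of mean 100 and standard deviation 15, a normal distribution of scores, and 2005—the year of Johnson’s book—standing in for “today.”)

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + erf(x / sqrt(2)))

def norm_ppf(p, lo=-10.0, hi=10.0, tol=1e-9):
    # Inverse of the CDF by bisection -- crude but sufficient here.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

MEAN, SD = 100, 15            # conventional I.Q. scale (assumption)
GAIN_PER_DECADE = 3           # Flynn's estimate, from the essay
decades = (2005 - 1920) / 10  # taking 2005 as "today" (assumption)

# Raw score needed to be in the top ten per cent of the 1920 population.
top10_1920 = MEAN + norm_ppf(0.90) * SD

# The same raw performance, re-scored against today's higher norms.
equivalent_today = top10_1920 - GAIN_PER_DECADE * decades

# Where that re-scored performance falls in today's population.
percentile_today = norm_cdf((equivalent_today - MEAN) / SD)

print(round(top10_1920, 1))        # ~119.2
print(round(percentile_today, 2))  # ~0.34
```

Under these assumptions the 1920 cutoff comes out near 119, and the same performance lands around the 34th percentile of the modern distribution—which is the comparison the essay’s “bottom third” is rounding.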

Johnson is the former editor of the online magazine Feed and the author of a number of books on science and technology. There is a pleasing eclecticism to his thinking. He is as happy analyzing “Finding Nemo” as he is dissecting the intricacies of a piece of software, and he’s perfectly capable of using Nietzsche’s notion of eternal recurrence to discuss the new creative rules of television shows. Johnson wants to understand popular culture—not in the postmodern, academic sense of wondering what “The Dukes of Hazzard” tells us about Southern male alienation but in the very practical sense of wondering what watching something like “The Dukes of Hazzard” does to the way our minds work.

As Johnson points out, television is very different now from what it was thirty years ago. It’s harder. A typical episode of “Starsky and Hutch,” in the nineteen-seventies, followed an essentially linear path: two characters, engaged in a single story line, moving toward a decisive conclusion. To watch an episode of “Dallas” today is to be stunned by its glacial pace—by the arduous attempts to establish social relationships, by the excruciating simplicity of the plotline, by how obvious it was. A single episode of “The Sopranos,” by contrast, might follow five narrative threads, involving a dozen characters who weave in and out of the plot. Modern television also requires the viewer to do a lot of what Johnson calls “filling in,” as in a “Seinfeld” episode that subtly parodies the Kennedy assassination conspiracists, or a typical “Simpsons” episode, which may contain numerous allusions to politics or cinema or pop culture. The extraordinary amount of money now being made in the television aftermarket—DVD sales and syndication—means that the creators of television shows now have an incentive to make programming that can sustain two or three or four viewings. Even reality shows like “Survivor,” Johnson argues, engage the viewer in a way that television rarely has in the past:

When we watch these shows, the part of our brain that monitors the emotional lives of the people around us—the part that tracks subtle shifts in intonation and gesture and facial expression—scrutinizes the action on the screen, looking for clues. . . . The phrase “Monday-morning quarterbacking” was coined to describe the engaged feeling spectators have in relation to games as opposed to stories. We absorb stories, but we second-guess games. Reality programming has brought that second-guessing to prime time, only the game in question revolves around social dexterity rather than the physical kind.

How can the greater cognitive demands that television makes on us now, he wonders, not matter?

Johnson develops the same argument about video games. Most of the people who denounce video games, he says, haven’t actually played them—at least, not recently. Twenty years ago, games like Tetris or Pac-Man were simple exercises in motor coördination and pattern recognition. Today’s games belong to another realm. Johnson points out that one of the “walk-throughs” for “Grand Theft Auto III”—that is, the informal guides that break down the games and help players navigate their complexities—is fifty-three thousand words long, about the length of his book. The contemporary video game involves a fully realized imaginary world, dense with detail and levels of complexity.

Indeed, video games are not games in the sense of those pastimes—like Monopoly or gin rummy or chess—which most of us grew up with. They don’t have a set of unambiguous rules that have to be learned and then followed during the course of play. This is why many of us find modern video games baffling: we’re not used to being in a situation where we have to figure out what to do. We think we only have to learn how to press the buttons faster. But these games withhold critical information from the player. Players have to explore and sort through hypotheses in order to make sense of the game’s environment, which is why a modern video game can take forty hours to complete. Far from being engines of instant gratification, as they are often described, video games are actually, Johnson writes, “all about delayed gratification—sometimes so long delayed that you wonder if the gratification is ever going to show.”

At the same time, players are required to manage a dizzying array of information and options. The game presents the player with a series of puzzles, and you can’t succeed at the game simply by solving the puzzles one at a time. You have to craft a longer-term strategy, in order to juggle and coördinate competing interests. In denigrating the video game, Johnson argues, we have confused it with other phenomena in teen-age life, like multitasking—simultaneously e-mailing and listening to music and talking on the telephone and surfing the Internet. Playing a video game is, in fact, an exercise in “constructing the proper hierarchy of tasks and moving through the tasks in the correct sequence,” he writes. “It’s about finding order and meaning in the world, and making decisions that help create that order.”


It doesn’t seem right, of course, that watching “24” or playing a video game could be as important cognitively as reading a book. Isn’t the extraordinary success of the “Harry Potter” novels better news for the culture than the equivalent success of “Grand Theft Auto III”? Johnson’s response is to imagine what cultural critics might have said had video games been invented hundreds of years ago, and only recently had something called the book been marketed aggressively to children:

Reading books chronically understimulates the senses. Unlike the longstanding tradition of game playing—which engages the child in a vivid, three-dimensional world filled with moving images and musical sound-scapes, navigated and controlled with complex muscular movements—books are simply a barren string of words on the page. . . .
Books are also tragically isolating. While games have for many years engaged the young in complex social relationships with their peers, building and exploring worlds together, books force the child to sequester him or herself in a quiet space, shut off from interaction with other children. . . .
But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path. You can’t control their narratives in any fashion—you simply sit back and have the story dictated to you. . . . This risks instilling a general passivity in our children, making them feel as though they’re powerless to change their circumstances. Reading is not an active, participatory process; it’s a submissive one.

He’s joking, of course, but only in part. The point is that books and video games represent two very different kinds of learning. When you read a biology textbook, the content of what you read is what matters. Reading is a form of explicit learning. When you play a video game, the value is in how it makes you think. Video games are an example of collateral learning, which is no less important.

Being “smart” involves facility in both kinds of thinking—the kind of fluid problem solving that matters in things like video games and I.Q. tests, but also the kind of crystallized knowledge that comes from explicit learning. If Johnson’s book has a flaw, it is that he sometimes speaks of our culture being “smarter” when he’s really referring just to that fluid problem-solving facility. When it comes to the other kind of intelligence, it is not clear at all what kind of progress we are making, as anyone who has read, say, the Gettysburg Address alongside any Presidential speech from the past twenty years can attest. The real question is what the right balance of these two forms of intelligence might look like. “Everything Bad Is Good for You” doesn’t answer that question. But Johnson does something nearly as important, which is to remind us that we shouldn’t fall into the trap of thinking that explicit learning is the only kind of learning that matters.

In recent years, for example, a number of elementary schools have phased out or reduced recess and replaced it with extra math or English instruction. This is the triumph of the explicit over the collateral. After all, recess is “play” for a ten-year-old in precisely the sense that Johnson describes video games as play for an adolescent: an unstructured environment that requires the child actively to intervene, to look for the hidden logic, to find order and meaning in chaos.

One of the ongoing debates in the educational community, similarly, is over the value of homework. Meta-analysis of hundreds of studies done on the effects of homework shows that the evidence supporting the practice is, at best, modest. Homework seems to be most useful in high school and for subjects like math. At the elementary-school level, homework seems to be of marginal or no academic value. Its effect on discipline and personal responsibility is unproved. And the causal relation between high-school homework and achievement is unclear: it hasn’t been firmly established whether spending more time on homework in high school makes you a better student or whether better students, finding homework more pleasurable, spend more time doing it. So why, as a society, are we so enamored of homework? Perhaps because we have so little faith in the value of the things that children would otherwise be doing with their time. They could go out for a walk, and get some exercise; they could spend time with their peers, and reap the rewards of friendship. Or, Johnson suggests, they could be playing a video game, and giving their minds a rigorous workout.
The Moral-Hazard Myth

The bad idea behind our failed health-care system.


Tooth decay begins, typically, when debris becomes trapped between the teeth and along the ridges and in the grooves of the molars. The food rots. It becomes colonized with bacteria. The bacteria feeds off sugars in the mouth and forms an acid that begins to eat away at the enamel of the teeth. Slowly, the bacteria works its way through to the dentin, the inner structure, and from there the cavity begins to blossom three-dimensionally, spreading inward and sideways. When the decay reaches the pulp tissue, the blood vessels, and the nerves that serve the tooth, the pain starts—an insistent throbbing. The tooth turns brown. It begins to lose its hard structure, to the point where a dentist can reach into a cavity with a hand instrument and scoop out the decay. At the base of the tooth, the bacteria mineralizes into tartar, which begins to irritate the gums. They become puffy and bright red and start to recede, leaving more and more of the tooth’s root exposed. When the infection works its way down to the bone, the structure holding the tooth in begins to collapse altogether.

Several years ago, two Harvard researchers, Susan Starr Sered and Rushika Fernandopulle, set out to interview people without health-care coverage for a book they were writing, “Uninsured in America.” They talked to as many kinds of people as they could find, collecting stories of untreated depression and struggling single mothers and chronically injured laborers—and the most common complaint they heard was about teeth. Gina, a hairdresser in Idaho, whose husband worked as a freight manager at a chain store, had “a peculiar mannerism of keeping her mouth closed even when speaking.” It turned out that she hadn’t been able to afford dental care for three years, and one of her front teeth was rotting. Daniel, a construction worker, pulled out his bad teeth with pliers. Then, there was Loretta, who worked nights at a university research center in Mississippi, and was missing most of her teeth. “They’ll break off after a while, and then you just grab a hold of them, and they work their way out,” she explained to Sered and Fernandopulle. “It hurts so bad, because the tooth aches. Then it’s a relief just to get it out of there. The hole closes up itself anyway. So it’s so much better.”

People without health insurance have bad teeth because, if you’re paying for everything out of your own pocket, going to the dentist for a checkup seems like a luxury.  It isn’t, of course.  The loss of teeth makes eating fresh fruits and vegetables difficult, and a diet heavy in soft, processed foods exacerbates more serious health problems, like diabetes.  The pain of tooth decay leads many people to use alcohol as a salve.  And those struggling to get ahead in the job market quickly find that the unsightliness of bad teeth, and the self-consciousness that results, can become a major barrier.  If your teeth are bad, you’re not going to get a job as a receptionist, say, or a cashier.  You’re going to be put in the back somewhere, far from the public eye.  What Loretta, Gina, and Daniel understand, the two authors tell us, is that bad teeth have come to be seen as a marker of “poor parenting, low educational achievement and slow or faulty intellectual development.” They are an outward marker of caste.  “Almost every time we asked interviewees what their first priority would be if the president established universal health coverage tomorrow,” Sered and Fernandopulle write, “the immediate answer was ‘my teeth.’ ”

The U.S. health-care system, according to “Uninsured in America,” has created a group of people who increasingly look different from others and suffer in ways that others do not.  The leading cause of personal bankruptcy in the United States is unpaid medical bills.  Half of the uninsured owe money to hospitals, and a third are being pursued by collection agencies.  Children without health insurance are less likely to receive medical attention for serious injuries, for recurrent ear infections, or for asthma.  Lung-cancer patients without insurance are less likely to receive surgery, chemotherapy, or radiation treatment.  Heart-attack victims without health insurance are less likely to receive angioplasty.  People with pneumonia who don’t have health insurance are less likely to receive X rays or consultations.  The death rate in any given year for someone without health insurance is twenty-five per cent higher than for someone with insurance.  Because the uninsured are sicker than the rest of us, they can’t get better jobs, and because they can’t get better jobs they can’t afford health insurance, and because they can’t afford health insurance they get even sicker.  John, the manager of a bar in Idaho, tells Sered and Fernandopulle that as a result of various workplace injuries over the years he takes eight ibuprofen, waits two hours, then takes eight more—and tries to cadge as much prescription pain medication as he can from friends.  “There are times when I should’ve gone to the doctor, but I couldn’t afford to go because I don’t have insurance,” he says.  “Like when my back messed up, I should’ve gone.  If I had insurance, I would’ve went, because I know I could get treatment, but when you can’t afford it you don’t go.  Because the harder the hole you get into in terms of bills, then you’ll never get out.  So you just say, ‘I can deal with the pain.’ ”


One of the great mysteries of political life in the United States is why Americans are so devoted to their health-care system.  Six times in the past century—during the First World War, during the Depression, during the Truman and Johnson Administrations, in the Senate in the nineteen-seventies, and during the Clinton years—efforts have been made to introduce some kind of universal health insurance, and each time the efforts have been rejected.  Instead, the United States has opted for a makeshift system of increasing complexity and dysfunction.  Americans spend $5,267 per capita on health care every year, almost two and a half times the industrialized world’s median of $2,193; the extra spending comes to hundreds of billions of dollars a year.  What does that extra spending buy us? Americans have fewer doctors per capita than most Western countries.  We go to the doctor less than people in other Western countries.  We get admitted to the hospital less frequently than people in other Western countries.  We are less satisfied with our health care than our counterparts in other countries.  American life expectancy is lower than the Western average.  Childhood-immunization rates in the United States are lower than average.  Infant-mortality rates are in the nineteenth percentile of industrialized nations.  Doctors here perform more high-end medical procedures, such as coronary angioplasties, than in other countries, but most of the wealthier Western countries have more CT scanners than the United States does, and Switzerland, Japan, Austria, and Finland all have more MRI machines per capita.  Nor is our system more efficient.  The United States spends more than a thousand dollars per capita per year—or close to four hundred billion dollars—on health-care-related paperwork and administration, whereas Canada, for example, spends only about three hundred dollars per capita.
And, of course, every other country in the industrialized world insures all its citizens; despite those extra hundreds of billions of dollars we spend each year, we leave forty-five million people without any insurance.  A country that displays an almost ruthless commitment to efficiency and performance in every aspect of its economy—a country that switched to Japanese cars the moment they were more reliable, and to Chinese T-shirts the moment they were five cents cheaper—has loyally stuck with a health-care system that leaves its citizenry pulling out their teeth with pliers.

America’s health-care mess is, in part, simply an accident of history.  The fact that there have been six attempts at universal health coverage in the last century suggests that there has long been support for the idea.  But politics has always got in the way.  In both Europe and the United States, for example, the push for health insurance was led, in large part, by organized labor.  But in Europe the unions worked through the political system, fighting for coverage for all citizens.  From the start, health insurance in Europe was public and universal, and that created powerful political support for any attempt to expand benefits.  In the United States, by contrast, the unions worked through the collective-bargaining system and, as a result, could win health benefits only for their own members.  Health insurance here has always been private and selective, and every attempt to expand benefits has resulted in a paralyzing political battle over who would be added to insurance rolls and who ought to pay for those additions.

Policy is driven by more than politics, however.  It is equally driven by ideas, and in the past few decades a particular idea has taken hold among prominent American economists which has also been a powerful impediment to the expansion of health insurance.  The idea is known as “moral hazard.” Health economists in other Western nations do not share this obsession.  Nor do most Americans.  But moral hazard has profoundly shaped the way think tanks formulate policy and the way experts argue and the way health insurers structure their plans and the way legislation and regulations have been written.  The health-care mess isn’t merely the unintentional result of political dysfunction, in other words.  It is also the deliberate consequence of the way in which American policymakers have come to think about insurance.

“Moral hazard” is the term economists use to describe the fact that insurance can change the behavior of the person being insured.  If your office gives you and your co-workers all the free Pepsi you want—if your employer, in effect, offers universal Pepsi insurance—you’ll drink more Pepsi than you would have otherwise.  If you have a no-deductible fire-insurance policy, you may be a little less diligent in clearing the brush away from your house.  The savings-and-loan crisis of the nineteen-eighties was created, in large part, by the fact that the federal government insured savings deposits of up to a hundred thousand dollars, and so the newly deregulated S. & L.s made far riskier investments than they would have otherwise.  Insurance can have the paradoxical effect of producing risky and wasteful behavior.  Economists spend a great deal of time thinking about such moral hazard for good reason.  Insurance is an attempt to make human life safer and more secure.  But, if those efforts can backfire and produce riskier behavior, providing insurance becomes a much more complicated and problematic endeavor.

In 1968, the economist Mark Pauly argued that moral hazard played an enormous role in medicine, and, as John Nyman writes in his book “The Theory of the Demand for Health Insurance,” Pauly’s paper has become the “single most influential article in the health economics literature.” Nyman, an economist at the University of Minnesota, says that the fear of moral hazard lies behind the thicket of co-payments and deductibles and utilization reviews which characterizes the American health-insurance system.  Fear of moral hazard, Nyman writes, also explains “the general lack of enthusiasm by U.S.  health economists for the expansion of health insurance coverage (for example, national health insurance or expanded Medicare benefits) in the U.S.”

What Nyman is saying is that when your insurance company requires that you make a twenty-dollar co-payment for a visit to the doctor, or when your plan includes an annual five-hundred-dollar or thousand-dollar deductible, it’s not simply an attempt to get you to pick up a larger share of your health costs.  It is an attempt to make your use of the health-care system more efficient.  Making you responsible for a share of the costs, the argument runs, will reduce moral hazard: you’ll no longer grab one of those free Pepsis when you aren’t really thirsty.  That’s also why Nyman says that the notion of moral hazard is behind the “lack of enthusiasm” for expansion of health insurance.  If you think of insurance as producing wasteful consumption of medical services, then the fact that there are forty-five million Americans without health insurance is no longer an immediate cause for alarm.  After all, it’s not as if the uninsured never go to the doctor.  They spend, on average, $934 a year on medical care.  A moral-hazard theorist would say that they go to the doctor when they really have to.  Those of us with private insurance, by contrast, consume $2,347 worth of health care a year.  If a lot of that extra $1,413 is waste, then maybe the uninsured person is the truly efficient consumer of health care.

The moral-hazard argument makes sense, however, only if we consume health care in the same way that we consume other consumer goods, and to economists like Nyman this assumption is plainly absurd.  We go to the doctor grudgingly, only because we’re sick.  “Moral hazard is overblown,” the Princeton economist Uwe Reinhardt says.  “You always hear that the demand for health care is unlimited.  This is just not true.  People who are very well insured, who are very rich, do you see them check into the hospital because it’s free? Do people really like to go to the doctor? Do they check into the hospital instead of playing golf?”

For that matter, when you have to pay for your own health care, does your consumption really become more efficient? In the late nineteen-seventies, the RAND Corporation did an extensive study on the question, randomly assigning families to health plans with co-payment levels at zero per cent, twenty-five per cent, fifty per cent, or ninety-five per cent, up to six thousand dollars.  As you might expect, the more that people were asked to chip in for their health care the less care they used.  The problem was that they cut back equally on both frivolous care and useful care.  Poor people in the high-deductible group with hypertension, for instance, didn’t do nearly as good a job of controlling their blood pressure as those in other groups, resulting in a ten-per-cent increase in the likelihood of death.  As a recent Commonwealth Fund study concluded, cost sharing is “a blunt instrument.” Of course it is: how should the average consumer be expected to know beforehand what care is frivolous and what care is useful? I just went to the dermatologist to get moles checked for skin cancer.  If I had had to pay a hundred per cent, or even fifty per cent, of the cost of the visit, I might not have gone.  Would that have been a wise decision? I have no idea.  But if one of those moles really is cancerous, that simple, inexpensive visit could save the health-care system tens of thousands of dollars (not to mention saving me a great deal of heartbreak).  The focus on moral hazard suggests that the changes we make in our behavior when we have insurance are nearly always wasteful.  Yet, when it comes to health care, many of the things we do only because we have insurance—like getting our moles checked, or getting our teeth cleaned regularly, or getting a mammogram or engaging in other routine preventive care—are anything but wasteful and inefficient.  In fact, they are behaviors that could end up saving the health-care system a good deal of money.

Sered and Fernandopulle tell the story of Steve, a factory worker from northern Idaho, with a “grotesque-looking left hand—what looks like a bone sticks out the side.” When he was younger, he broke his hand.  “The doctor wanted to operate on it,” he recalls.  “And because I didn’t have insurance, well, I was like ‘I ain’t gonna have it operated on.’ The doctor said, ‘Well, I can wrap it for you with an Ace bandage.’ I said, ‘Ahh, let’s do that, then.’ ” Steve uses less health care than he would if he had insurance, but that’s not because he has defeated the scourge of moral hazard.  It’s because instead of getting a broken bone fixed he put a bandage on it.


At the center of the Bush Administration’s plan to address the health-insurance mess are Health Savings Accounts, and Health Savings Accounts are exactly what you would come up with if you were concerned, above all else, with minimizing moral hazard.  The logic behind them was laid out in the 2004 Economic Report of the President.  Americans, the report argues, have too much health insurance: typical plans cover things that they shouldn’t, creating the problem of overconsumption.  Several paragraphs are then devoted to explaining the theory of moral hazard.  The report turns to the subject of the uninsured, concluding that they fall into several groups.  Some are foreigners who may be covered by their countries of origin.  Some are people who could be covered by Medicaid but aren’t, or aren’t admitting that they are.  Finally, a large number “remain uninsured as a matter of choice.” The report continues, “Researchers believe that as many as one-quarter of those without health insurance had coverage available through an employer but declined the coverage…. Still others may remain uninsured because they are young and healthy and do not see the need for insurance.” In other words, those with health insurance are overinsured and their behavior is distorted by moral hazard.  Those without health insurance use their own money to make decisions about insurance based on an assessment of their needs.  The insured are wasteful.  The uninsured are prudent.  So what’s the solution? Make the insured a little bit more like the uninsured.

Under the Health Savings Accounts system, consumers are asked to pay for routine health care with their own money—several thousand dollars of which can be put into a tax-free account.  To handle their catastrophic expenses, they then purchase a basic health-insurance package with, say, a thousand-dollar annual deductible.  As President Bush explained recently, “Health Savings Accounts all aim at empowering people to make decisions for themselves, owning their own health-care plan, and at the same time bringing some demand control into the cost of health care.”

The country described in the President’s report is a very different place from the country described in “Uninsured in America.” Sered and Fernandopulle look at the billions we spend on medical care and wonder why Americans have so little insurance.  The President’s report considers the same situation and worries that we have too much.  Sered and Fernandopulle see the lack of insurance as a problem of poverty; a third of the uninsured, after all, have incomes below the federal poverty line.  In the section on the uninsured in the President’s report, the word “poverty” is never used.  In the Administration’s view, people are offered insurance but “decline the coverage” as “a matter of choice.” The uninsured in Sered and Fernandopulle’s book decline coverage, but only because they can’t afford it.  Gina, for instance, works for a beauty salon that offers her a bare-bones health-insurance plan with a thousand-dollar deductible for two hundred dollars a month.  What’s her total income? Nine hundred dollars a month.  She could “choose” to accept health insurance, but only if she chose to stop buying food or paying the rent.

The biggest difference between the two accounts, though, has to do with how each views the function of insurance.  Gina, Steve, and Loretta are ill, and need insurance to cover the costs of getting better.  In their eyes, insurance is meant to help equalize financial risk between the healthy and the sick.  In the insurance business, this model of coverage is known as “social insurance,” and historically it was the way health coverage was conceived.  If you were sixty and had heart disease and diabetes, you didn’t pay substantially more for coverage than a perfectly healthy twenty-five-year-old.  Under social insurance, the twenty-five-year-old agrees to pay thousands of dollars in premiums even though he didn’t go to the doctor at all in the previous year, because he wants to make sure that someone else will subsidize his health care if he ever comes down with heart disease or diabetes.  Canada and Germany and Japan and all the other industrialized nations with universal health care follow the social-insurance model.  Medicare, too, is based on the social-insurance model, and, when Americans with Medicare report themselves to be happier with virtually every aspect of their insurance coverage than people with private insurance (as they do, repeatedly and overwhelmingly), they are referring to the social aspect of their insurance.  They aren’t getting better care.  But they are getting something just as valuable: the security of being insulated against the financial shock of serious illness.

There is another way to organize insurance, however, and that is to make it actuarial.  Car insurance, for instance, is actuarial.  How much you pay is in large part a function of your individual situation and history: someone who drives a sports car and has received twenty speeding tickets in the past two years pays a much higher annual premium than a soccer mom with a minivan.  In recent years, the private insurance industry in the United States has been moving toward the actuarial model, with profound consequences.  The triumph of the actuarial model over the social-insurance model is the reason that companies unlucky enough to employ older, high-cost employees—like United Airlines—have run into such financial difficulty.  It’s the reason that automakers are increasingly moving their operations to Canada.  It’s the reason that small businesses that have one or two employees with serious illnesses suddenly face unmanageably high health-insurance premiums, and it’s the reason that, in many states, people suffering from a potentially high-cost medical condition can’t get anyone to insure them at all.

Health Savings Accounts represent the final, irrevocable step in the actuarial direction.  If you are preoccupied with moral hazard, then you want people to pay for care with their own money, and, when you do that, the sick inevitably end up paying more than the healthy.  And when you make people choose an insurance plan that fits their individual needs, those with significant medical problems will choose expensive health plans that cover lots of things, while those with few health problems will choose cheaper, bare-bones plans.  The more expensive the comprehensive plans become, and the less expensive the bare-bones plans become, the more the very sick will cluster together at one end of the insurance spectrum, and the more the well will cluster together at the low-cost end.  The days when the healthy twenty-five-year-old subsidizes the sixty-year-old with heart disease or diabetes are coming to an end.  “The main effect of putting more of it on the consumer is to reduce the social redistributive element of insurance,” the Stanford economist Victor Fuchs says.  Health Savings Accounts are not a variant of universal health care.  In their governing assumptions, they are the antithesis of universal health care.

The issue about what to do with the health-care system is sometimes presented as a technical argument about the merits of one kind of coverage over another or as an ideological argument about socialized versus private medicine.  It is, instead, about a few very simple questions.  Do you think that this kind of redistribution of risk is a good idea? Do you think that people whose genes predispose them to depression or cancer, or whose poverty complicates asthma or diabetes, or who get hit by a drunk driver, or who have to keep their mouths closed because their teeth are rotting ought to bear a greater share of the costs of their health care than those of us who are lucky enough to escape such misfortunes? In the rest of the industrialized world, it is assumed that the more equally and widely the burdens of illness are shared, the better off the population as a whole is likely to be.  The reason the United States has forty-five million people without coverage is that its health-care policy is in the hands of people who disagree, and who regard health insurance not as the solution but as the problem.
Project Delta aims to create the perfect cookie.


Steve Gundrum launched Project Delta at a small dinner last fall at Il Fornaio, in Burlingame, just down the road from the San Francisco Airport. It wasn’t the first time he’d been to Il Fornaio, and he made his selection quickly, with just a glance at the menu; he is the sort of person who might have thought about his choice in advance — maybe even that morning, while shaving. He would have posed it to himself as a question — Ravioli alla Lucana?—and turned it over in his mind, assembling and disassembling the dish, ingredient by ingredient, as if it were a model airplane. Did the Pecorino pepato really belong? What if you dropped the basil? What would the ravioli taste like if you froze it, along with the ricotta and the Parmesan, and tried to sell it in the supermarket? And then what would you do about the fennel?

Gundrum is short and round. He has dark hair and a mustache and speaks with the flattened vowels of the upper Midwest. He is voluble and excitable and doggedly unpretentious, to the point that your best chance of seeing him in a suit is probably Halloween. He runs Mattson, one of the country’s foremost food research-and-development firms, which is situated in a low-slung concrete-and-glass building in a nondescript office park in Silicon Valley. Gundrum’s office is a spare, windowless room near the rear, and all day long white-coated technicians come to him with prototypes in little bowls, or on skewers, or in Tupperware containers. His job is to taste and advise, and the most common words out of his mouth are “I have an idea.” Just that afternoon, Gundrum had ruled on the reformulation of a popular spinach dip (which had an unfortunate tendency to smell like lawn clippings) and examined the latest iteration of a low-carb kettle corn for evidence of rhythmic munching (the metronomic hand-to-mouth cycle that lies at the heart of any successful snack experience). Mattson created the shelf-stable Mrs. Fields Chocolate Chip Cookie, the new Boca Burger products for Kraft Foods, Orville Redenbacher’s Butter Toffee Popcorn Clusters, and so many other products that it is impossible to walk down the aisle of a supermarket and not be surrounded by evidence of the company’s handiwork.

That evening, Gundrum had invited two of his senior colleagues at Mattson — Samson Hsia and Carol Borba — to dinner, along with Steven Addis, who runs a prominent branding firm in the Bay Area. They sat around an oblong table off to one side of the dining room, with the sun streaming in the window, and Gundrum informed them that he intended to reinvent the cookie, to make something both nutritious and as “indulgent” as the premium cookies on the supermarket shelf. “We want to delight people,” he said. “We don’t want some ultra-high-nutrition power bar, where you have to rationalize your consumption.” He said it again: “We want to delight people.”

As everyone at the table knew, a healthful, good-tasting cookie is something of a contradiction. A cookie represents the combination of three unhealthful ingredients—sugar, white flour, and shortening. The sugar adds sweetness, bulk, and texture: along with baking powder, it produces the tiny cell structures that make baked goods light and fluffy. The fat helps carry the flavor. If you want a big hit of vanilla, or that chocolate taste that really blooms in the nasal cavities, you need fat. It also keeps the strands of gluten in the flour from getting too tightly bound together, so that the cookie stays chewable. The flour, of course, gives the batter its structure, and, with the sugar, provides the base for the browning reaction that occurs during baking. You could replace the standard white flour with wheat flour, which is higher in fibre, but fibre adds grittiness. Over the years, there have been many attempts to resolve these contradictions — from Snackwells and diet Oreos to the dry, grainy hockey pucks that pass for cookies in health-food stores — but in every case flavor or fluffiness or tenderness has been compromised. Steve Gundrum was undeterred. He told his colleagues that he wanted Project Delta to create the world’s greatest cookie. He wanted to do it in six months. He wanted to enlist the biggest players in the American food industry. And how would he come up with this wonder cookie? The old-fashioned way. He wanted to hold a bakeoff.


The standard protocol for inventing something in the food industry is called the matrix model. There is a department for product development, which comes up with a new idea, and a department for process development, which figures out how to realize it, and then, down the line, departments for packing, quality assurance, regulatory affairs, chemistry, microbiology, and so on. In a conventional bakeoff, Gundrum would have pitted three identical matrixes against one another and compared the results. But he wasn’t satisfied with the unexamined assumption behind the conventional bakeoff — that there was just one way of inventing something new.

Gundrum had a particular interest, as it happened, in software. He had read widely about it, and once, when he ran into Steve Jobs at an Apple store in the Valley, chatted with him for forty-five minutes on technical matters relating to the Apple operating system. He saw little difference between what he did for a living and what the software engineers in the surrounding hills of Silicon Valley did. “Lines of code are no different from a recipe,” he explains. “It’s the same thing. You add a little salt, and it tastes better. You write a little piece of code, and it makes the software work faster.” But in the software world, Gundrum knew, there were ongoing debates about the best way to come up with new code.

On the one hand, there was the “open source” movement. Its patron saint was Linus Torvalds, the Finnish hacker who decided to build a free version of Unix, the hugely complicated operating system that runs many of the world’s large computers. Torvalds created the basic implementation of his version, which he called Linux, posted it online, and invited people to contribute to its development. Over the years, thousands of programmers had helped, and Linux was now considered as good as proprietary versions of Unix. “Given enough eyeballs, all bugs are shallow” was the Linux mantra: a thousand people working for an hour each can do a better job writing and fixing code than a single person working for a thousand hours, because the chances are that among those thousand people you can find precisely the right expert for every problem that comes up.

On the other hand, there was the “extreme programming” movement, known as XP, which was led by a legendary programmer named Kent Beck. He called for breaking a problem into the smallest possible increments, and proceeding as simply and modestly as possible. He thought that programmers should work in pairs, two to a computer, passing the keyboard back and forth. Between Beck and Torvalds were countless other people, arguing for slightly different variations. But everyone in the software world agreed that trying to get people to be as creative as possible was, as often as not, a social problem: it depended not just on who was on the team but on how the team was organized.

“I remember once I was working with a printing company in Chicago,” Beck says. “The people there were having a terrible problem with their technology. I got there, and I saw that the senior people had these corner offices, and they were working separately and doing things separately that they had trouble integrating later on. So I said, ‘Find a space where you can work together.’ So they found a corner of the machine room. It was a raised floor, ice cold. They just loved it. They would go there five hours a day, making lots of progress. I flew home. They hired me for my technical expertise. And I told them to rearrange the office furniture, and that was the most valuable thing I could offer them.”

It seemed to Gundrum that people in the food world had a great deal to learn from all this. They had become adept at solving what he called “science projects” — problems that required straightforward, linear applications of expensive German machinery and armies of white-coated people with advanced degrees in engineering. Cool Whip was a good example: a product processed so exquisitely — with air bubbles of such fantastic uniformity and stability — that it remains structurally sound for months, at high elevation and at low elevation, frozen and thawed and then refrozen. But coming up with a healthy cookie, which required finessing the inherent contradictions posed by sugar, flour, and shortening, was the kind of problem that the food industry had more trouble with. Gundrum recalled one brainstorming session that a client of his, a major food company, had convened. “This is no joke,” he said. “They played a tape where it sounded like the wind was blowing and the birds were chirping. And they posed us out on a dance floor, and we had to hold our arms out like we were trees and close our eyes, and the ideas were supposed to grow like fruits off the limbs of the trees. Next to me was the head of R. & D., and he looked at me and said: ‘What the hell are we doing here?’ ”

For Project Delta, Gundrum decreed that there would be three teams, each representing a different methodology of invention. He had read Kent Beck’s writings, and decided that the first would be the XP team. He enlisted two of Mattson’s brightest young associates — Peter Dea and Dan Howell. Dea is a food scientist, who worked as a confectionist before coming to Mattson. He is tall and spare, with short dark hair. “Peter is really good at hitting the high note,” Gundrum said. “If a product needs to have a particular flavor profile, he’s really good at getting that one dimension and getting it right.” Howell is a culinarian, goateed and talkative, a man of enthusiasms who uses high-end Mattson equipment to make an exceptional cup of espresso every afternoon. He started his career as a barista at Starbucks, and then realized that his vocation lay elsewhere. “A customer said to me, ‘What do you want to be doing? Because you clearly don’t want to be here,’ ” Howell said. “I told him, ‘I want to be sitting in a room working on a better non-fat pudding.’ ”

The second team was headed by Barb Stuckey, an executive vice-president of marketing at Mattson and one of the firm’s stars. She is slender and sleek, with short blond hair. She tends to think out loud, and, because she thinks quickly, she ends up talking quickly, too, in nervous, brilliant bursts. Stuckey, Gundrum decided, would represent “managed” research and development—a traditional hierarchical team, as opposed to a partnership like Dea and Howell’s. She would work with Doug Berg, who runs one of Mattson’s product-development teams. Stuckey would draw the big picture. Berg would serve as sounding board and project director. His team would execute their conceptions.

Then Gundrum was at a technology conference in California and heard the software pioneer Mitch Kapor talking about the open-source revolution. Afterward, Gundrum approached Kapor. “I said to Mitch, ‘What do you think? Can I apply this—some of the same principles—outside of software and bring it to the food industry?'” Gundrum recounted. “He stopped and said, ‘Why the hell not!'” So Gundrum invited an élite group of food-industry bakers and scientists to collaborate online. They would be the third team. He signed up a senior person from Mars, Inc., someone from R. & D. at Kraft, the marketing manager for Nestlé Toll House refrigerated/frozen cookie dough, a senior director of R. & D. at Birds Eye Foods, the head of the innovation program for Kellogg’s Morning Foods, the director of seasoning at McCormick, a cookie maven formerly at Keebler, and six more high-level specialists. Mattson’s innovation manager, Carol Borba, who began her career as a line cook at Bouley, in Manhattan, was given the role of project manager. Two Mattson staffers were assigned to carry out the group’s recommendations. This was the Dream Team. It is quite possible that this was the most talented group of people ever to work together in the history of the food industry.

Soon after the launch of Project Delta, Steve Gundrum and his colleague Samson Hsia were standing around, talking about the current products in the supermarket that they particularly admired. “I like the Uncrustable line from Smuckers,” Hsia said. “It’s a frozen sandwich without any crust. It eats very well. You can put it in a lunchbox frozen, and it will be unfrozen by lunchtime.” Hsia is a trim, silver-haired man who is said to know as much about emulsions as anyone in the business. “There’s something else,” he said, suddenly. “We just saw it last week. It’s made by Jennie-O. It’s turkey in a bag.” This was a turkey that was seasoned, plumped with brine, and sold in a heat-resistant plastic bag: the customer simply has to place it in the oven. Hsia began to stride toward the Mattson kitchens, because he realized they actually had a Jennie-O turkey in the back. Gundrum followed, the two men weaving their way through the maze of corridors that make up the Mattson offices. They came to a large freezer. Gundrum pulled out a bright-colored bag. Inside was a second, clear bag, and inside that bag was a twelve-pound turkey. “This is one of my favorite innovations of the last year,” Gundrum said, as Hsia nodded happily. “There is material science involved. There is food science involved. There is positioning involved. You can take this thing, throw it in your oven, and people will be blown away. It’s that good. If I was Butterball, I’d be terrified.”

Jennie-O had taken something old and made it new. But where had that idea come from? Was it a team? A committee? A lone turkey genius? Those of us whose only interaction with such innovations is at the point of sale have a naïve faith in human creativity; we suppose that a world capable of coming up with turkey in a bag is capable of coming up with the next big thing as well—a healthy cookie, a faster computer chip, an automobile engine that gets a hundred miles to the gallon. But if you’re the one responsible for those bright new ideas there is no such certainty. You come up with one great idea, and the process is so miraculous that all you do is puzzle over how on earth you ever did it, and worry whether you’ll ever be able to do it again.


The Mattson kitchens are a series of large, connecting rooms, running along the back of the building. There is a pilot plant in one corner — containing a mini version of the equipment that, say, Heinz would use to make canned soup, a soft-serve ice-cream machine, an industrial-strength pasta-maker, a colloid mill for making oil-and-water emulsions, a flash pasteurizer, and an eighty-five-thousand-dollar Japanese-made coextruder for, among other things, pastry-and-filling combinations. At any given time, the firm may have as many as fifty or sixty projects under way, so the kitchens are a hive of activity, with pressure cookers filled with baked beans bubbling in one corner, and someone rushing from one room to another carrying a tray of pizza slices with experimental toppings.

Dea and Howell, the XP team, took over part of one of the kitchens, setting up at a long stainless-steel lab bench. The countertop was crowded with tins of flour, a big white plastic container of wheat dextrin, a dozen bottles of liquid sweeteners, two plastic bottles of Kirkland olive oil, and, somewhat puzzlingly, three varieties of single-malt Scotch. The Project Delta brief was simple. All cookies had to have fewer than a hundred and thirty calories per serving. Carbohydrates had to be under 17.5 grams, saturated fat under two grams, fibre more than one gram, protein more than two grams, and so on; in other words, the cookie was to be at least fifteen per cent superior to the supermarket average in the major nutritional categories. To Dea and Howell, that suggested oatmeal, and crispy, as opposed to soft. “I’ve tried lots of cookies that are sold as soft and I never like them, because they’re trying to be something that they’re not,” Dea explained. “A soft cookie is a fresh cookie, and what you are trying to do with soft is be a fresh cookie that’s a month old. And that means you need to fake the freshness, to engineer the cookie.”

The two decided to focus on a kind of oatmeal-chocolate-chip hybrid, with liberal applications of roasted soy nuts, toffee, and caramel. A straight oatmeal-raisin cookie or a straight low-cal chocolate-chip cookie was out of the question. This was a reflection of what might be called the Hidden Valley Ranch principle, in honor of a story that Samson Hsia often told about his years working on salad dressing when he was at Clorox. The couple who owned Hidden Valley Ranch, near Santa Barbara, had come up with a seasoning blend of salt, pepper, onion, garlic, and parsley flakes that was mixed with equal parts mayonnaise and buttermilk to make what was, by all accounts, an extraordinary dressing. Clorox tried to bottle it, but found that the buttermilk could not coexist, over any period of time, with the mayonnaise. The way to fix the problem, and preserve the texture, was to make the combination more acidic. But when you increased the acidity you ruined the flavor. Clorox’s food engineers worked on Hidden Valley Ranch dressing for close to a decade. They tried different kinds of processing and stability control and endless cycles of consumer testing before they gave up and simply came out with a high-acid Hidden Valley Ranch dressing — which promptly became a runaway best-seller. Why? Because consumers had never tasted real Hidden Valley Ranch dressing, and as a result had no way of knowing that what they were eating was inferior to the original. For those in the food business, the lesson was unforgettable: if something was new, it didn’t have to be perfect. And, since healthful, indulgent cookies couldn’t be perfect, they had to be new: hence oatmeal, chocolate chips, toffee, and caramel.

Cookie development, at the Mattson level, is a matter of endless iteration, and Dea and Howell began by baking version after version in quick succession — establishing the cookie size, the optimal baking time, the desired variety of chocolate chips, the cut of oats (bulk oats? rolled oats? groats?), the varieties of flour, and the toffee dosage, while testing a variety of high-tech supplements, notably inulin, a fibre source derived from chicory root. As they worked, they made notes on tablet P.C.s, which gave them a running electronic record of each version. “With food, there’s a large circle of pretty good, and we’re solidly in pretty good,” Dea announced, after several intensive days of baking. A tray of cookies was cooling in front of him on the counter. “Typically, that’s when you take it to the customers.”

In this case, the customer was Gundrum, and the next week Howell marched over to Gundrum’s office with two Ziploc bags of cookies in his hand. There was a package of Chips Ahoy! on the table, and Howell took one out. “We’ve been eating these versus Chips Ahoy!,” he said.

The two cookies looked remarkably alike. Gundrum tried one of each. “The Chips Ahoy!, it’s tasty,” he said. “When you eat it, the starch hydrates in your mouth. The XP doesn’t have that same granulated-sugar kind of mouth feel.”

“It’s got more fat than us, though, and subsequently it’s shorter in texture,” Howell said. “And so, when you break it, it breaks more nicely. Ours is a little harder to break.”

By “shorter in texture,” he meant that the cookie “popped” when you bit into it. Saturated fats are solid fats, and give a cookie crispness. Parmesan cheese is short-textured. Brie is long. A shortbread like a Lorna Doone is a classic short-textured cookie. But the XP cookie had, for health reasons, substituted unsaturated fats for saturated fats, and unsaturated fats are liquid. They make the dough stickier, and inevitably compromise a little of that satisfying pop.

“The whole-wheat flour makes us a little grittier, too,” Howell went on. “It has larger particulates.” He broke open one of the Chips Ahoy!. “See how fine the grain is? Now look at one of our cookies. The particulates are larger. It is part of what we lose by going with a healthy profile. If it was just sugar and flour, for instance, the carbohydrate chains are going to be shorter, and so they will dissolve more quickly in your mouth. Whereas with more fibre you get longer carbohydrate chains and they don’t dissolve as quickly, and you get that slightly tooth-packing feel.”

“It looks very wholesome, like something you would want to feed your kids,” Gundrum said, finally. They were still only in the realm of pretty good.


Team Stuckey, meanwhile, was having problems of its own. Barb Stuckey’s first thought had been a tea cookie, or, more specifically, a chai cookie — something with cardamom and cinnamon and vanilla and cloves and a soft dairy note. Doug Berg was dispatched to run the experiment. He and his team did three or four rounds of prototypes. The result was a cookie that tasted, astonishingly, like a cup of chai, which was, of course, its problem. Who wanted a cookie that tasted like a cup of chai? Stuckey called a meeting in the Mattson trophy room, where samples of every Mattson product that has made it to market are displayed. After everyone was done tasting the cookies, a bag of them sat in the middle of the table for forty-five minutes—and no one reached to take a second bite. It was a bad sign.

“You know, before the election Good Housekeeping had this cookie bakeoff,” Stuckey said, as the meeting ended. “Laura Bush’s entry was full of chocolate chips and had familiar ingredients. And Teresa Heinz went with pumpkin-spice cookies. I remember thinking, That’s just like the Democrats! So not mainstream! I wanted her to win. But she’s chosen this cookie that’s funky and weird and out of the box. And I kind of feel the same way about the tea cookie. It’s too far out, and will lose to something that’s more comfortable for consumers.”

Stuckey’s next thought involved strawberries and a shortbread base. But shortbread was virtually impossible under the nutritional guidelines: there was no way to get that smooth butter-flour-sugar combination. So Team Stuckey switched to something closer to a strawberry-cobbler cookie, which had the Hidden Valley Ranch advantage that no one knew what a strawberry-cobbler cookie was supposed to taste like. Getting the carbohydrates down to the required 17.5 grams, though, was a struggle, because of how much flour and fruit cobbler requires. The obvious choice to replace the flour was almonds. But nuts have high levels of both saturated and unsaturated fat. “It became a balancing act,” Anne Cristofano, who was doing the bench work for Team Stuckey, said. She baked batch after batch, playing the carbohydrates (first the flour, and then granulated sugar, and finally various kinds of what are called sugar alcohols, low-calorie sweeteners derived from hydrogenating starch) against the almonds. Cristofano took a version to Stuckey. It didn’t go well.

“We’re not getting enough strawberry impact from the fruit alone,” Stuckey said. “We have to find some way to boost the strawberry.” She nibbled some more. “And, because of the low fat and all that stuff, I don’t feel like we’re getting that pop.”

The Dream Team, by any measure, was the overwhelming Project Delta favorite. This was, after all, the Dream Team, and if any idea is ingrained in our thinking it is that the best way to solve a difficult problem is to bring the maximum amount of expertise to bear on it. Sure enough, in the early going the Dream Team was on fire. The members of the Dream Team did not doggedly fix on a single idea, like Dea and Howell, or move in fits and starts from chai sugar cookies to strawberry shortbread to strawberry cobbler, like Team Stuckey. It came up with thirty-four ideas, representing an astonishing range of cookie philosophies: a chocolate cookie with gourmet cocoa, high-end chocolate chips, pecans, raisins, Irish steel-cut oats, and the new Ultragrain White Whole Wheat flour; a bite-size oatmeal cookie with a Ceylon cinnamon filling, or chili and tamarind, or pieces of dried peaches with a cinnamon-and-ginger dusting; the classic seven-layer bar with oatmeal instead of graham crackers, coated in chocolate with a choice of coffee flavors; a “wellness” cookie, with an oatmeal base, soy and whey proteins, inulin and oat beta glucan and a combination of erythritol and sugar and sterol esters—and so on.

In the course of spewing out all those new ideas, however, the Dream Team took a difficult turn. A man named J. Hugh McEvoy (a.k.a. Chef J.), out of Chicago, tried to take control of the discussion. He wanted something exotic — not a health-food version of something already out there. But in the e-mail discussions with others on the team his sense of what constituted exotic began to get really exotic — “Chinese star anise plus fennel plus Pastis plus dark chocolate.” Others, emboldened by his example, began talking about a possible role for zucchini or wasabi peas. Meanwhile, a more conservative faction, mindful of the Project Delta mandate to appeal to the whole family, started talking up peanut butter. Within a few days, the tensions were obvious:

From: Chef J.

Subject: <no subject>

Please keep in mind that less than 10 years ago, espresso, latte and dulce de leche were EXOTIC flavors / products that were considered unsuitable for the mainstream. And let’s not even mention CHIPOTLE.

From: Andy Smith

Subject: Bought any Ben and Jerry’s recently?

While we may not want to invent another Oreo or Chips Ahoy!, last I looked, World’s Best Vanilla was B&J’s # 2 selling flavor and Haagen Dazs’ Vanilla (their top seller) outsold Dulce 3 to 1.

From: Chef J.

Subject: <no subject>

Yes. Gourmet Vanilla does outsell any new flavor. But we must remember that DIET vanilla does not and never has. It is the high end, gourmet segment of ice cream that is growing. Diet Oreos were vastly outsold by new entries like Snackwells. Diet Snickers were vastly outsold by new entries like balance bars. New Coke failed miserably, while Red Bull is still growing.

What flavor IS Red Bull, anyway?

Eventually, Carol Borba, the Dream Team project leader, asked Gundrum whether she should try to calm things down. He told her no; the group had to find its “own kind of natural rhythm.” He wanted to know what fifteen high-powered bakers thrown together on a project felt like, and the answer was that they felt like chaos. They took twice as long as the XP team. They created ten times the headache.

Worse, no one in the open-source group seemed to be having any fun. “Quite honestly, I was expecting a bit more involvement in this,” Howard Plein, of Edlong Dairy Flavors, confessed afterward. “They said, expect to spend half an hour a day. But without doing actual bench work — all we were asked to do was to come up with ideas.” He wanted to bake: he didn’t enjoy being one of fifteen cogs in a machine. To Dan Fletcher, of Kellogg’s, “the whole thing spun in place for a long time. I got frustrated with that. The number of people involved seemed unwieldy. You want some diversity of youth and experience, but you want to keep it close-knit as well. You get some depth in the process versus breadth. We were a mile wide and an inch deep.” Chef J., meanwhile, felt thwarted by Carol Borba; he felt that she was pushing her favorite, a caramel turtle, to the detriment of better ideas. “We had the best people in the country involved,” he says. “We were irrelevant. That’s the weakness of it. Fifteen is too many. How much true input can any one person have when you are lost in the crowd?” In the end, the Dream Team whittled down its thirty-four possibilities to one: a chewy oatmeal cookie, with a pecan “thumbprint” in the middle, and ribbons of caramel-and-chocolate glaze. When Gundrum tasted it, he had nothing but praise for its “cookie hedonics.” But a number of the team members were plainly unhappy with the choice. “It is not bad,” Chef J. said. “But not bad doesn’t win in the food business. There was nothing there that you couldn’t walk into a supermarket and see on the shelf. Any Pepperidge Farm product is better than that. Any one.”

It may have been a fine cookie. But, since no single person played a central role in its creation, it didn’t seem to anyone to be a fine cookie.

The strength of the Dream Team — the fact that it had so many smart people on it — was also its weakness: it had too many smart people on it. Size provides expertise. But it also creates friction, and one of the truths Project Delta exposed is that we tend to overestimate the importance of expertise and underestimate the problem of friction. Gary Klein, a decision-making consultant, once examined this issue in depth at a nuclear power plant in North Carolina. In the nineteen-nineties, the power supply used to keep the reactor cool malfunctioned. The plant had to shut down in a hurry, and the shutdown went badly. So the managers brought in Klein’s consulting group to observe as they ran through one of the crisis rehearsals mandated by federal regulators. “The drill lasted four hours,” David Klinger, the lead consultant on the project, recalled. “It was in this big operations room, and there were between eighty and eighty-five people involved. We roamed around, and we set up a video camera, because we wanted to make sense of what was happening.”

When the consultants asked people what was going on, though, they couldn’t get any satisfactory answers. “Each person only knew a little piece of the puzzle, like the radiation person knew where the radiation was, or the maintenance person would say, ‘I’m trying to get this valve closed,’ ” Klinger said. “No one had the big picture. We started to ask questions. We said, ‘What is your mission?’ And if the person didn’t have one, we said, ‘Get out.’ There were just too many people. We ended up getting that team down from eighty-five to thirty-five people, and the first thing that happened was that the noise in the room was dramatically reduced.” The room was quiet and calm enough so that people could easily find those they needed to talk to. “At the very end, they had a big drill that the N.R.C. was going to regulate. The regulators said it was one of their hardest drills. And you know what? They aced it.” Was the plant’s management team smarter with thirty-five people on it than it was with eighty-five? Of course not, but the expertise of those additional fifty people was more than cancelled out by the extra confusion and noise they created.

The open-source movement has had the same problem. The number of people involved can result in enormous friction. The software theorist Joel Spolsky points out that open-source software tends to have user interfaces that are difficult for ordinary people to use: “With Microsoft Windows, you right-click on a folder, and you’re given the option to share that folder over the Web. To do the same thing with Apache, the open-source Web server, you’ve got to track down a file that has a different name and is stored in a different place on every system. Then you have to edit it, and it has its own syntax and its own little programming language, and there are lots of different comments, and you edit it the first time and it doesn’t work and then you edit it the second time and it doesn’t work.”

Because there are so many individual voices involved in an open-source project, no one can agree on the right way to do things. And, because no one can agree, every possible option is built into the software, thereby frustrating the central goal of good design, which is, after all, to understand what to leave out. Spolsky notes that almost all the successful open-source products have been attempts to clone some preexisting software program, like Microsoft’s Internet Explorer, or Unix. “One of the reasons open source works well for Linux is that there isn’t any real design work to be undertaken,” he says. “They were doing what we would call chasing tail-lights.” Open source was great for a science project, in which the goals were clearly defined and the technical hurdles easily identifiable. Had Project Delta been a Cool Whip bakeoff, an exercise in chasing tail-lights, the Dream Team would easily have won. But if you want to design a truly innovative software program — or a truly innovative cookie — the costs of bigness can become overwhelming.

In the frantic final weeks before the bakeoff, while the Dream Team was trying to fix a problem with crumbling, and hit on the idea of glazing the pecan on the face of the cookie, Dea and Howell continued to make steady, incremental improvements.

“These cookies were baked five days ago,” Howell told Gundrum, as he handed him a Ziploc bag. Dea was off somewhere in the Midwest, meeting with clients, and Howell looked apprehensive, stroking his goatee nervously as he stood by Gundrum’s desk. “We used wheat dextrin, which I think gives us some crispiness advantages and some shelf-stability advantages. We have a little more vanilla in this round, which gives you that brown, rounding background note.”

Gundrum nodded. “The vanilla is almost like a surrogate for sugar,” he said. “It potentiates the sweetness.”

“Last time, the leavening system was baking soda and baking powder,” Howell went on. “I switched that to baking soda and monocalcium phosphate. That helps them rise a little bit better. And we baked them at a slightly higher temperature for slightly longer, so that we drove off a little bit more moisture.”

“How close are you?” Gundrum asked.

“Very close,” Howell replied.

Gundrum was lost in thought for a moment. “It looks very wholesome. It looks like something you’d want to feed your kids. It has very good aroma. I really like the texture. My guess is that it eats very well with milk.” He turned back to Howell, suddenly solicitous. “Do you want some milk?”

Meanwhile, Barb Stuckey had a revelation. She was working on a tortilla-chip project, and had bags of tortilla chips all over her desk. “You have no idea how much engineering goes into those things,” she said, holding up a tortilla chip. “It’s greater than what it takes to build a bridge. It’s crazy.” And one of the clever things about cheese tortilla chips—particularly the low-fat versions—is how they go about distracting the palate. “You know how you put a chip in your mouth and the minute it hits your tongue it explodes with flavor?” Stuckey said. “It’s because it’s got this topical seasoning. It’s got dried cheese powders and sugar and probably M.S.G. and all that other stuff on the outside of the chip.”

Her idea was to apply that technique to strawberry cobbler—to take large crystals of sugar, plate them with citric acid, and dust the cookies with them. “The minute they reach your tongue, you get this sweet-and-sour hit, and then you crunch into the cookie and get the rest—the strawberry and the oats,” she said. The crystals threw off your taste buds. You weren’t focussed on the fact that there was half as much fat in the cookie as there should be. Plus, the citric acid brought a tangy flavor to the dried strawberries: suddenly they felt fresh.

Batches of the new strawberry-cobbler prototype were ordered up, with different formulations of the citric acid and the crystals. A meeting was called in the trophy room. Anne Cristofano brought two plastic bags filled with cookies. Stuckey was there, as was a senior Mattson food technologist named Karen Smithson, an outsider brought to the meeting in an advisory role. Smithson, a former pastry chef, was a little older than Stuckey and Cristofano, with an air of self-possession. She broke the seal on the first bag, and took a bite with her eyes half closed. The other two watched intently.

“Umm,” Smithson said, after the briefest of pauses. “That is pretty darn good. And this is one of the healthy cookies? I would not say, ‘This is healthy.’ I can’t taste the trade-off.” She looked up at Stuckey. “How old are they?”

“Today,” Stuckey replied.

“O.K. . . .” This was a complicating fact. Any cookie tastes good on the day it’s baked. The question was how it tasted after baking and packaging and shipping and sitting in a warehouse and on a supermarket shelf and finally in someone’s cupboard.

“What we’re trying to do here is a shelf-stable cookie that will last six months,” Stuckey said. “I think we’re better off if we can make it crispy.”

Smithson thought for a moment. “You can have either a crispy, low-moisture cookie or a soft and chewy cookie,” she said. “But you can’t get the outside crisp and the inside chewy. We know that. The moisture will migrate. It will equilibrate over time, so you end up with a cookie that’s consistent all the way through. Remember we did all that work on Mrs. Fields? That’s what we learned.”

They talked for a bit, in technical terms, about various kinds of sugars and starches. Smithson didn’t think that the stability issue was going to be a problem.

“Isn’t it compelling, visually?” Stuckey blurted out, after a lull in the conversation. And it was: the dried-strawberry chunks broke through the surface of the cookie, and the tiny citric-sugar crystals glinted in the light. “I just think you get so much more bang for the buck when you put the seasoning on the outside.”

“Yet it’s not weird,” Smithson said, nodding. She picked up another cookie. “The mouth feel is a combination of chewy and crunchy. With the flavors, you have the caramelized sugar, the brown-sugar notes. You have a little bit of a chew from the oats. You have a flavor from the strawberry, and it helps to have a combination of the sugar alcohol and the brown sugar. You know, sugars have different deliveries, and sometimes you get some of the sweetness right off and some of it continues on. You notice that a lot with the artificial sweeteners. You get the sweetness that doesn’t go away, long after the other flavors are gone. With this one, the sweetness is nice. The flavors come together at the same time and fade at the same time, and then you have the little bright after-hits from the fruit and the citric crunchies, which are” — she paused, looking for the right word — “brilliant.”


The bakeoff took place in April. Mattson selected a representative sample of nearly three hundred households from around the country. Each was mailed bubble-wrapped packages containing all three entrants. The vote was close but unequivocal. Fourteen per cent of the households voted for the XP oatmeal-chocolate-chip cookie. Forty-one per cent voted for the Dream Team’s oatmeal-caramel cookie. Forty-four per cent voted for Team Stuckey’s strawberry cobbler.

The Project Delta postmortem was held at Chaya Brasserie, a French-Asian fusion restaurant on the Embarcadero, in San Francisco. It was just Gundrum and Steven Addis, from the first Project Delta dinner, and their wives. Dan Howell was immersed in a confidential project for a big food conglomerate back East. Peter Dea was working with Cargill on a wellness product. Carol Borba was in Chicago, at a meeting of the Food Marketing Institute. Barb Stuckey was helping Ringling Brothers rethink the food at its concessions. “We’ve learned a lot about the circus,” Gundrum said. Meanwhile, Addis’s firm had created a logo and a brand name for Project Delta. Mattson has offered to license the winning cookie at no cost, as long as a percentage of its sales goes to a charitable foundation that Mattson has set up to feed the hungry. Someday soon, you should be able to go into a supermarket and buy Team Stuckey’s strawberry-cobbler cookie.

“Which one would you have voted for?” Addis asked Gundrum.

“I have to say, they were all good in their own way,” Gundrum replied. It was like asking a mother which of her children she liked best. “I thought Barb’s cookie was a little too sweet, and I wish the open-source cookie was a little tighter, less crumbly. With XP, I think we would have done better, but we had a wardrobe malfunction. They used too much batter, overbaked it, and the cookie came out too hard and thick.”

In the end, it was not so much which cookie won that interested him. It was who won—and why. Three people from his own shop had beaten a Dream Team, and the decisive edge had come not from the collective wisdom of a large group but from one person’s ability to make a lateral connection between two previously unconnected objects — a tortilla chip and a cookie. Was that just Barb being Barb? In large part, yes. But it was hard to believe that one of the Dream Team members would not have made the same kind of leap had they been in an environment quiet enough to allow them to think.

“Do you know what else we learned?” Gundrum said. He was talking about a questionnaire given to the voters. “We were looking at the open-ended questions — where all the families who voted could tell us what they were thinking. They all said the same thing — all of them.” His eyes grew wide. “They wanted better granola bars and breakfast bars. I would not have expected that.” He fell silent for a moment, turning a granola bar over and around in his mind, assembling and disassembling it piece by piece, as if it were a model airplane. “I thought that they were pretty good,” he said. “I mean, there are so many of them out there. But apparently people want them better.”
How Rick Warren built his ministry.


On the occasion of the twenty-fifth anniversary of Saddleback Church, Rick Warren hired the Anaheim Angels’ baseball stadium. He wanted to address his entire congregation at once, and there was no way to fit everyone in at Saddleback, where the crowds are spread across services held over the course of an entire weekend. So Warren booked the stadium and printed large, silver-black-and-white tickets, and, on a sunny Sunday morning last April, the tens of thousands of congregants of one of America’s largest churches began to file into the stands. They were wearing shorts and T-shirts and buying Cokes and hamburgers from the concession stands, if they had not already tailgated in the parking lot. On the field, a rock band played loudly and enthusiastically. Just after one o’clock, a voice came over the public-address system—”RIIIICK WARRRREN”—and Warren bounded onto the stage, wearing black slacks, a red linen guayabera shirt, and wraparound NASCAR sunglasses. The congregants leaped to their feet. “You know,” Warren said, grabbing the microphone, “there are two things I’ve always wanted to do in a stadium.” He turned his body sideways, playing an imaginary guitar, and belted out the first few lines of Jimi Hendrix’s “Purple Haze.” His image was up on the Jumbotrons in right and left fields, just below the Verizon and Pepsi and Budweiser logos. He stopped and grinned. “The other thing is, I want to do a wave!” He pointed to the bleachers, and then to the right-field seats, and around and around the stadium the congregation rose and fell, in four full circuits. “You are the most amazing church in America!” Warren shouted out, when they had finally finished. “AND I LOVE YOU!”


Rick Warren is a large man, with a generous stomach. He has short, spiky hair and a goatee. He looks like an ex-athlete, or someone who might have many tattoos. He is a hugger, enfolding those he meets in his long arms and saying things like “Hey, man.” According to Warren, from sixth grade through college there wasn’t a day in his life that he wasn’t president of something, and that makes sense, because he’s always the one at the center of the room talking or laughing, with his head tilted way back, or crying, which he does freely. In the evangelical tradition, preachers are hard or soft. Billy Graham, with his piercing eyes and protruding chin and Bible clenched close to his chest, is hard. So was Martin Luther King, Jr., who overwhelmed his audience with his sonorous, forcefully enunciated cadences. Warren is soft. His sermons are conversational, delivered in a folksy, raspy voice. He talks about how he loves Krispy Kreme doughnuts, drives a four-year-old Ford, and favors loud Hawaiian shirts, even at the pulpit, because, he says, “they do not itch.”

In December of 1979, when Warren was twenty-five years old, he and his wife, Kay, took their four-month-old baby and drove in a U-Haul from Texas to Saddleback Valley, in Orange County, because Warren had read that it was one of the fastest-growing counties in the country. He walked into the first real-estate office he found and introduced himself to the first agent he saw, a man named Don Dale. He was looking for somewhere to live, he said.

“Do you have any money to rent a house?” Dale asked.
“Not much, but we can borrow some,” Warren replied.
“Do you have a job?”
“No. I don’t have a job.”
“What do you do for a living?”
“I’m a minister.”
“So you have a church?”
“Not yet.”

Dale found him an apartment that very day, of course: Warren is one of those people whose lives have an irresistible forward momentum. In the car on the way over, he recruited Dale as the first member of his still nonexistent church. And when he held his first public service, three months later, he stood up in front of two hundred and five people he barely knew in a high-school gymnasium—this shiny-faced preacher fresh out of seminary—and told them that one day soon their new church would number twenty thousand people and occupy a campus of fifty acres. Today, Saddleback Church has twenty thousand members and occupies a campus of a hundred and twenty acres. Once, Warren wanted to increase the number of small groups at Saddleback—the groups of six or seven that meet for prayer and fellowship during the week—by three hundred. He went home and prayed and, as he tells it, God said to him that what he really needed to do was increase the number of small groups by three thousand, which is just what he did. Then, a few years ago, he wrote a book called “The Purpose-Driven Life,” a genre of book that is known in the religious-publishing business as “Christian Living,” and that typically sells thirty or forty thousand copies a year. Warren’s publishers came to see him at Saddleback, and sat on the long leather couch in his office, and talked about their ideas for the book. “You guys don’t understand,” Warren told them. “This is a hundred-million-copy book.” Warren remembers stunned silence: “Their jaws dropped.” But now, nearly three years after its publication, “The Purpose-Driven Life” has sold twenty-three million copies. It is among the best-selling nonfiction hardcover books in American history. Neither the New York Times, the Los Angeles Times, nor the Washington Post has reviewed it. Warren’s own publisher didn’t see it coming. Only Warren had faith.
“The best of the evangelical tradition is that you don’t plan your way forward—you prophesy your way forward,” the theologian Leonard Sweet says. “Rick’s prophesying his way forward.”

Not long after the Anaheim service, Warren went back to his office on the Saddleback campus. He put his feet up on the coffee table. On the wall in front of him were framed originals of the sermons of the nineteenth-century preacher Charles Spurgeon, and on the bookshelf next to him was his collection of hot sauces. “I had dinner with Jack Welch last Sunday night,” he said. “He came to church, and we had dinner. I’ve been kind of mentoring him on his spiritual journey. And he said to me, ‘Rick, you are the biggest thinker I have ever met in my life. The only other person I know who thinks globally like you is Rupert Murdoch.’ And I said, ‘That’s interesting. I’m Rupert’s pastor! Rupert published my book!'” Then he tilted back his head and gave one of those big Rick Warren laughs.


Churches, like any large voluntary organization, have at their core a contradiction. In order to attract newcomers, they must have low barriers to entry. They must be unintimidating, friendly, and compatible with the culture they are a part of. In order to retain their membership, however, they need to have an identity distinct from that culture. They need to give their followers a sense of community—and community, exclusivity, a distinct identity are all, inevitably, casualties of growth. As an economist would say, the bigger an organization becomes, the greater a free-rider problem it has. If I go to a church with five hundred members, in a magnificent cathedral, with spectacular services and music, why should I volunteer or donate any substantial share of my money? What kind of peer pressure is there in a congregation that large? If the barriers to entry become too low—and the ties among members become increasingly tenuous—then a church as it grows bigger becomes weaker.

One solution to the problem is simply not to grow, and, historically, churches have sacrificed size for community. But there is another approach: to create a church out of a network of lots of little church cells—exclusive, tightly knit groups of six or seven who meet in one another’s homes during the week to worship and pray. The small group as an instrument of community is initially how Communism spread, and in the postwar years Alcoholics Anonymous and its twelve-step progeny perfected the small-group technique. The small group did not have a designated leader who stood at the front of the room. Members sat in a circle. The focus was on discussion and interaction—not one person teaching and the others listening—and the remarkable thing about these groups was their power. An alcoholic could lose his job and his family, he could be hospitalized, he could be warned by half a dozen doctors—and go on drinking. But put him in a room of his peers once a week—make him share the burdens of others and have his burdens shared by others—and he could do something that once seemed impossible.

When churches—in particular, the megachurches that became the engine of the evangelical movement, in the nineteen-seventies and eighties—began to adopt the cellular model, they found out the same thing. The small group was an extraordinary vehicle of commitment. It was personal and flexible. It cost nothing. It was convenient, and every worshipper was able to find a small group that precisely matched his or her interests. Today, at least forty million Americans are in a religiously based small group, and the growing ranks of small-group membership have caused a profound shift in the nature of the American religious experience.

“As I see it, one of the most unfortunate misunderstandings of our time has been to think of small intentional communities as groups ‘within’ the church,” the philosopher Dick Westley writes in one of the many books celebrating the rise of small-group power. “When are we going to have the courage to publicly proclaim what everyone with any experience with small groups has known all along: they are not organizations ‘within’ the church; they are church.”

Ram Cnaan, a professor of social work at the University of Pennsylvania, recently estimated the replacement value of the charitable work done by the average American church—that is, the amount of money it would take to equal the time, money, and resources donated to the community by a typical congregation—and found that it came to about a hundred and forty thousand dollars a year. In the city of Philadelphia, for example, that works out to an annual total of two hundred and fifty million dollars’ worth of community “good”; on a national scale, the contribution of religious groups to the public welfare is, as Cnaan puts it, “staggering.” In the past twenty years, as the enthusiasm for publicly supported welfare has waned, churches have quietly and steadily stepped in to fill the gaps. And who are the churchgoers donating all that time and money? People in small groups. Membership in a small group is a better predictor of whether people volunteer or give money than how often they attend church, whether they pray, whether they’ve had a deep religious experience, or whether they were raised in a Christian home. Social action is not a consequence of belief, in other words. I don’t give because I believe in religious charity. I give because I belong to a social structure that enforces an ethic of giving. “Small groups are networks,” the Princeton sociologist Robert Wuthnow, who has studied the phenomenon closely, says. “They create bonds among people. Expose people to needs, provide opportunities for volunteering, and put people in harm’s way of being asked to volunteer. That’s not to say that being there for worship is not important. But, even in earlier research, I was finding that if people say all the right things about being a believer but aren’t involved in some kind of physical social setting that generates interaction, they are just not as likely to volunteer.”

Rick Warren came to the Saddleback Valley just as the small-group movement was taking off. He was the son of a preacher—a man who started seven churches in and around Northern California and was enough of a carpenter to have built a few dozen more with his own hands—and he wanted to do what his father had done: start a church from scratch.

For the first three months, he went from door to door in the neighborhood around his house, asking people why they didn’t attend church. Churches were boring and irrelevant to everyday life, he was told. They were unfriendly to visitors. They were too interested in money. They had inadequate children’s programs. So Warren decided that in his new church people would play and sing contemporary music, not hymns. (He could find no one, Warren likes to say, who listened to organ music in the car.) He would wear the casual clothes of his community. The sermons would be practical and funny and plainspoken, and he would use video and drama to illustrate his message. And when an actual church was finally built—Saddleback used seventy-nine different locations in its first thirteen years, from high-school auditoriums to movie theatres and then tents before building a permanent home—the church would not look churchy: no pews, or stained glass, or lofty spires. Saddleback looks like a college campus, and the main sanctuary looks like the school gymnasium. Parking is plentiful. The chairs are comfortable. There are loudspeakers and television screens everywhere broadcasting the worship service, and all the doors are open, so anyone can slip in or out, at any time, in the anonymity of the enormous crowds. Saddleback is a church with very low barriers to entry.

But beneath the surface is a network of thousands of committed small groups. “Orange County is virtually a desert in social-capital terms,” the Harvard political scientist Robert Putnam, who has taken a close look at the Saddleback success story, says. “The rate of mobility is really high. It has long and anonymous commutes. It’s a very friendless place, and this church offers serious heavy friendship. It’s a very interesting experience to talk to some of those groups. There were these eight people and they were all mountain bikers—mountain bikers for God. They go biking together, and they are one another’s best friends. If one person’s wife gets breast cancer, he can go to the others for support. If someone loses a job, the others are there for him. They are deeply best friends, in a larger social context where it is hard to find a best friend.”

Putnam goes on, “Warren didn’t invent the cellular church. But he’s brought it to an amazing level of effectiveness. The real job of running Saddleback is the recruitment and training and retention of the thousands of volunteer leaders for all the small groups it has. That’s the surprising thing to me—that they are able to manage that. Those small groups are incredibly vulnerable, and complicated to manage. How to keep all those little dinghies moving in the same direction is, organizationally, a major accomplishment.”

At Saddleback, members are expected to tithe, and to volunteer. Sunday-school teachers receive special training and a police background check. Recently, Warren decided that Saddleback would feed every homeless person in Orange County three meals a day for forty days. Ninety-two hundred people volunteered. Two million pounds of food were collected, sorted, and distributed.

It may be easy to start going to Saddleback. But it is not easy to stay at Saddleback. “Last Sunday, we took a special offering called Extend the Vision, for people to give over and above their normal offering,” Warren said. “We decided we would not use any financial consultants, no high-powered gimmicks, no thermometer on the wall. It was just ‘Folks, you know you need to give.’ Sunday’s offering was seven million dollars in cash and fifty-three million dollars in commitments. That’s one Sunday. The average commitment was fifteen thousand dollars a family. That’s in addition to their tithe. When people say megachurches are shallow, I say you have no idea. These people are committed.”

Warren’s great talent is organizational. He’s not a theological innovator. When he went from door to door, twenty-five years ago, he wasn’t testing variants on the Christian message. As far as he was concerned, the content of his message was non-negotiable. Theologically, Warren is a straight-down-the-middle evangelical. What he wanted to learn was how to construct an effective religious institution. His interest was sociological. Putnam compares Warren to entrepreneurs like Ray Kroc and Sam Walton, pioneers not in what they sold but in how they sold. The contemporary thinker Warren cites most often in conversation is the management guru Peter Drucker, who has been a close friend of his for years. Before Warren wrote “The Purpose-Driven Life,” he wrote a book called “The Purpose-Driven Church,” which was essentially a how-to guide for church builders. He’s run hundreds of training seminars around the world for ministers of small-to-medium-sized churches. At the beginning of the Internet boom, he created a Web site on which he posted his sermons for sale for four dollars each. There were many pastors in the world, he reasoned, who were part time. They had a second, nine-to-five job and families of their own, and what little free time they had was spent ministering to their congregation. Why not help them out with Sunday morning? The Web site now gets nearly four hundred thousand hits a day.

“I went to South Africa two years ago,” Warren said. “We did the purpose-driven-church training, and we simulcast it to ninety thousand pastors across Africa. After it was over, I said, ‘Take me out to a village and show me some churches.'”

In the first village they went to, the local pastor came out, saw Warren, and said, “I know who you are. You’re Pastor Rick.”

“And I said, ‘How do you know who I am?’ ” Warren recalled. “He said, ‘I get your sermons every week.’ And I said, ‘How? You don’t even have electricity here.’ And he said, ‘We’re putting the Internet in every post office in South Africa. Once a week, I walk an hour and a half down to the post office. I download it. Then I teach it. You are the only training I have ever received.'”

A typical evangelist, of course, would tell stories about reaching ordinary people, the unsaved laity. But a typical evangelist is someone who goes from town to town, giving sermons to large crowds, or preaching to a broad audience on television. Warren has never pastored any congregation but Saddleback, and he refuses to preach on television, because that would put him in direct competition with the local pastors he has spent the past twenty years cultivating. In the argot of the New Economy, most evangelists follow a business-to-consumer model: b-to-c. Warren follows a business-to-business model: b-to-b. He reaches the people who reach people. He’s a builder of religious networks. “I once heard Drucker say this,” Warren said. “‘Warren is not building a tent revival ministry, like the old-style evangelists. He’s building an army, like the Jesuits.'”


To write “The Purpose-Driven Life,” Warren holed up in an office in a corner of the Saddleback campus, twelve hours a day for seven months. “I would get up at four-thirty, arrive at my special office at five, and I would write from five to five,” he said. “I’m a people person, and it about killed me to be alone by myself. By eleven-thirty, my A.D.D. would kick in. I would do anything not to be there. It was like birthing a baby.” The book didn’t tell any stories. It wasn’t based on any groundbreaking new research or theory or theological insight. “I’m just not that good a writer,” Warren said. “I’m a pastor. There’s nothing new in this book. But sometimes as I was writing it I would break down in tears. I would be weeping, and I would feel like God was using me.”

The book begins with an inscription: “This book is dedicated to you. Before you were born, God planned this moment in your life. It is no accident that you are holding this book. God longs for you to discover the life he created you to live—here on earth, and forever in eternity.” Five sections follow, each detailing one of God’s purposes in our lives—”You Were Planned for God’s Pleasure”; “You Were Formed for God’s Family”; “You Were Created to Become Like Christ”; “You Were Shaped for Serving God”; “You Were Made for a Mission”—and each of the sections, in turn, is divided into short chapters (“Understanding Your Shape” or “Using What God Gave You” or “How Real Servants Act”). The writing is simple and unadorned. The scriptural interpretation is literal: “Noah had never seen rain, because prior to the Flood, God irrigated the earth from the ground up.” The religious vision is uncomplicated and accepting: “God wants to be your best friend.” Warren’s Christianity, like his church, has low barriers to entry: “Wherever you are reading this, I invite you to bow your head and quietly whisper the prayer that will change your eternity. Jesus, I believe in you and I receive you. Go ahead. If you sincerely meant that prayer, congratulations! Welcome to the family of God! You are now ready to discover and start living God’s purpose for your life.”

It is tempting to interpret the book’s message as a kind of New Age self-help theology. Warren’s God is not awesome or angry and does not stand in judgment of human sin. He’s genial and mellow. “Warren’s God ‘wants to be your best friend,’ and this means, in turn, that God’s most daunting property, the exercise of eternal judgment, is strategically downsized,” the critic Chris Lehmann writes, echoing a common complaint:

When Warren turns his utility-minded feel-speak upon the symbolic iconography of the faith, the results are offensively bathetic: “When Jesus stretched his arms wide on the cross, he was saying, ‘I love you this much.’ ” But God needs to be at a greater remove than a group hug.

The self-help genre, however, is fundamentally inward-focussed. M. Scott Peck’s “The Road Less Traveled”—the only spiritual work that, in terms of sales, can even come close to “The Purpose-Driven Life”—begins with the sentence “Life is difficult.” That’s a self-help book: it focusses the reader on his own experience. Warren’s first sentence, by contrast, is “It’s not about you,” which puts it in the spirit of traditional Christian devotional literature, which focusses the reader outward, toward God. In look and feel, in fact, “The Purpose-Driven Life” is less twenty-first-century Orange County than it is the nineteenth century of Warren’s hero, the English evangelist Charles Spurgeon. Spurgeon was the Warren of his day: the pastor of a large church in London, and the author of best-selling devotional books. On Sunday, good Christians could go and hear Spurgeon preach at the Metropolitan Tabernacle. But during the week they needed something to replace the preacher, and so Spurgeon, in one of his best-known books, “Morning and Evening,” wrote seven hundred and thirty-two short homilies, to be read in the morning and the evening of each day of the year. The homilies are not complex investigations of theology. They are opportunities for spiritual reflection. (Sample Spurgeonism: “Every child of God is where God has placed him for some purpose, and the practical use of this first point is to lead you to inquire for what practical purpose has God placed each one of you where you now are.” Sound familiar?) The Oxford Times described one of Spurgeon’s books as “a rich store of topics treated daintily, with broad humour, with quaint good sense, yet always with a subdued tone and high moral aim,” and that describes “The Purpose-Driven Life” as well. It’s a spiritual companion. And, like “Morning and Evening,” it is less a book than a program. 
It’s divided into forty chapters, to be read during “Forty Days of Purpose.” The first page of the book is called “My Covenant.” It reads, “With God’s help, I commit the next 40 days of my life to discovering God’s purpose for my life.”

Warren departs from Spurgeon, though, in his emphasis on the purpose-driven life as a collective experience. Below the boxed covenant is a space for not one signature but three: “Your name,” “Partner’s name,” and then Rick Warren’s signature, already printed, followed by a quotation from Ecclesiastes 4:9:

“Two are better off than one, because together they can work more effectively. If one of them falls down, the other can help him up. . . . Two people can resist an attack that would defeat one person alone. A rope made of three cords is hard to break.”

“The Purpose-Driven Life” is meant to be read in groups. If the vision of faith sometimes seems skimpy, that’s because the book is supposed to be supplemented by a layer of discussion and reflection and debate. It is a testament to Warren’s intuitive understanding of how small groups work that this is precisely how “The Purpose-Driven Life” has been used. It spread along the network that he has spent his career putting together, not from person to person but from group to group. It presold five hundred thousand copies. It averaged more than half a million copies in sales a month in its first two years, which is possible only when a book is being bought in lots of fifty or a hundred or two hundred. Of those who bought the book as individuals, nearly half have bought more than one copy, sixteen per cent have bought four to six copies, and seven per cent have bought ten or more. Twenty-five thousand churches have now participated in the congregation-wide “40 Days of Purpose” campaign, as have hundreds of small groups within companies and organizations, from the N.B.A. to the United States Postal Service.

“I remember the first time I met Rick,” says Scott Bolinder, the head of Zondervan, the Christian publishing division of HarperCollins and the publisher of “The Purpose-Driven Life.” “He was telling me about his Web site. This is during the height of the dot-com boom. I was thinking, What’s your angle? He had no angle. He said, ‘I love pastors. I know what they go through.’ I said, ‘What do you put on there?’ He said, ‘I put my sermons with a little disclaimer on there: “You are welcome to preach it any way you can. I only ask one thing—I ask that you do it better than I did.”’ So then fast-forward seven years: he’s got hundreds of thousands of pastors who come to this Web site. And he goes, ‘By the way, my church and I are getting ready to do forty days of purpose. If you want to join us, I’m going to preach through this and put my sermons up. And I’ve arranged with my publisher that if you do join us with this campaign they will sell the book to you for a low price.’ That became the tipping point—being able to launch that book with eleven hundred churches, right from the get-go. They became the evangelists for the book.”

The book’s high-water mark came earlier this year, when a fugitive named Brian Nichols, who had shot and killed four people in an Atlanta courthouse, accosted a young single mother, Ashley Smith, outside her apartment, and held her captive in her home for seven hours.

“I asked him if I could read,” Smith said at the press conference after her ordeal was over, and so she went and got her copy of “The Purpose-Driven Life” and turned to the chapter she was reading that day. It was Chapter 33, “How Real Servants Act.” It begins:

“We serve God by serving others.

The world defines greatness in terms of power, possessions, prestige, and position. If you can demand service from others, you’ve arrived. In our self-serving culture with its me-first mentality, acting like a servant is not a popular concept.

Jesus, however, measured greatness in terms of service, not status. God determines your greatness by how many people you serve, not how many people serve you.”

Nichols listened and said, “Stop. Will you read it again?”

Smith read it to him again. They talked throughout the night. She made him pancakes. “I said, ‘Do you believe in miracles? Because if you don’t believe in miracles — you are here for a reason. You’re here in my apartment for some reason.’ ” She might as well have been quoting from “The Purpose-Driven Life.” She went on, “You don’t think you’re supposed to be sitting here right in front of me listening to me tell you, you know, your reason for being here?” When morning came, Nichols let her go.

Hollywood could not have scripted a better testimonial for “The Purpose-Driven Life.” Warren’s sales soared further. But the real lesson of that improbable story is that it wasn’t improbable at all. What are the odds that a young Christian—a woman who, it turns out, sends her daughter to Hebron Church, in Dacula, Georgia—isn’t reading “The Purpose-Driven Life”? And is it surprising that Ashley Smith would feel compelled to read aloud from the book to her captor, and that, in the discussion that followed, Nichols would come to some larger perspective on his situation? She and Nichols were in a small group, and reading aloud from “The Purpose-Driven Life” is what small groups do.


Not long ago, the sociologist Christian Smith decided to find out what American evangelicals mean when they say that they believe in a “Christian America.” The phrase seems to suggest that evangelicals intend to erode the separation of church and state. But when Smith asked a representative sample of evangelicals to explain the meaning of the phrase, the most frequent explanation was that America was founded by people who sought religious liberty and worked to establish religious freedom. The second most frequent explanation offered was that a majority of Americans of earlier generations were sincere Christians, which, as Smith points out, is empirically true. Others said what they meant by a Christian nation was that the basic laws of American government reflected Christian principles—which sounds potentially theocratic, except that when Smith asked his respondents to specify what they meant by basic laws they came up with representative government and the balance of powers.

“In other words,” Smith writes, “the belief that America was once a Christian nation does not necessarily mean a commitment to making it a ‘Christian’ nation today, whatever that might mean. Some evangelicals do make this connection explicitly. But many discuss America’s Christian heritage as a simple fact of history that they are not particularly interested in or optimistic about reclaiming. Further, some evangelicals think America never was a Christian nation; some think it still is; and others think it should not be a Christian nation, whether or not it was so in the past or is now.”

As Smith explored one issue after another with the evangelicals—gender equality, education, pluralism, and politics—he found the same scattershot pattern. The Republican Party may have been adept at winning the support of evangelical voters, but that affinity appears to be as much cultural as anything; the Party has learned to speak the evangelical language. Scratch the surface, and the appearance of homogeneity and ideological consistency disappears. Evangelicals want children to have the right to pray in school, for example, and they vote for conservative Republicans who support that right. But what do they mean by prayer? The New Testament’s most left-liberal text, the Lord’s Prayer—which, it should be pointed out, begins with a call for utopian social restructuring (“Thy will be done, On earth as it is in Heaven”), then welfare relief (“Give us this day our daily bread”), and then income redistribution (“Forgive us our debts as we also have forgiven our debtors”). The evangelical movement isn’t a movement, if you take movements to be characterized by a coherent philosophy, and that’s hardly surprising when you think of the role that small groups have come to play in the evangelical religious experience. The answers that Smith got to his questions are the kind of answers you would expect from people who think most deeply about their faith and its implications on Tuesday night, or Wednesday, with five or six of their closest friends, and not Sunday morning, in the controlling hands of a pastor.

“Small groups cultivate spirituality, but it is a particular kind of spirituality,” Robert Wuthnow writes. “They cannot be expected to nurture faith in the same way that years of theological study, meditation and reflection might.” He says, “They provide ways of putting faith in practice. For the most part, their focus is on practical applications, not on abstract knowledge, or even on ideas for the sake of ideas themselves.”

We are so accustomed to judging a social movement by its ideological coherence that the vagueness at the heart of evangelicalism sounds like a shortcoming. Peter Drucker calls Warren’s network an army, like the Jesuits. But the Jesuits marched in lockstep and held to an all-encompassing and centrally controlled creed. The members of Warren’s network don’t all dress the same, and they march to the tune only of their own small group, and they agree, fundamentally, only on who the enemy is. It’s not an army. It’s an insurgency.

In the wake of the extraordinary success of “The Purpose-Driven Life,” Warren says, he underwent a period of soul-searching. He had suddenly been given enormous wealth and influence and he did not know what he was supposed to do with it. “God led me to Psalm 72, which is Solomon’s prayer for more influence,” Warren says. “It sounds pretty selfish. Solomon is already the wisest and wealthiest man in the world. He’s the King of Israel at the apex of its glory. And in that psalm he says, ‘God, I want you to make me more powerful and influential.’ It looks selfish until he says, ‘So that the King may support the widow and orphan, care for the poor, defend the defenseless, speak up for the immigrant, the foreigner, be a friend to those in prison.’ Out of that psalm, God said to me that the purpose of influence is to speak up for those who have no influence. That changed my life. I had to repent. I said, I’m sorry, widows and orphans have not been on my radar. I live in Orange County. I live in the Saddleback Valley, which is all gated communities. There aren’t any homeless people around. They are thirteen miles away, in Santa Ana, not here.” He gestured toward the rolling green hills outside. “I started reading through Scripture. I said, How did I miss the two thousand verses on the poor in the Bible? So I said, I will use whatever affluence and influence that you give me to help those who are marginalized.”

He and his wife, Kay, decided to reverse tithe, giving away ninety per cent of the tens of millions of dollars they earned from “The Purpose-Driven Life.” They sat down with gay community leaders to talk about fighting AIDS. Warren has made repeated trips to Africa. He has sent out volunteers to forty-seven countries around the world, test-piloting experiments in microfinance and H.I.V. prevention and medical education. He decided to take the same networks he had built to train pastors and spread the purpose-driven life and put them to work on social problems.

“There is only one thing big enough to handle the world’s problems, and that is the millions and millions of churches spread out around the world,” he says. “I can take you to thousands of villages where they don’t have a school. They don’t have a grocery store, don’t have a fire department. But they have a church. They have a pastor. They have volunteers. The problem today is distribution. In the tsunami, millions of dollars of foodstuffs piled up on the shores and people couldn’t get it into the places that needed it, because they didn’t have a network. Well, the biggest distribution network in the world is local churches. There are millions of them, far more than all the franchises in the world. Put together, they could be a force for good.”

That is, in one sense, a typical Warren pronouncement—bold to the point of audacity, like telling his publisher that his book will sell a hundred million copies. In another sense, it is profoundly modest. When Warren’s nineteenth-century evangelical predecessors took on the fight against slavery, they brought to bear every legal, political, and economic lever they could get their hands on. But that was a different time, and that was a different church. Today’s evangelicalism is a network, and networks, for better or worse, are informal and personal.

At the Anaheim stadium service, Warren laid out his plan for attacking poverty and disease. He didn’t talk about governments, though, or the United Nations, or structures, or laws. He talked about the pastors he had met in his travels around the world. He brought out the President of Rwanda, who stood up at the microphone—a short, slender man in an immaculate black suit—and spoke in halting English about how Warren was helping him rebuild his country. When he was finished, the crowd erupted in applause, and Rick Warren walked across the stage and enfolded him in his long arms.

The social logic of Ivy League admissions.


I applied to college one evening, after dinner, in the fall of my senior year in high school. College applicants in Ontario, in those days, were given a single sheet of paper which listed all the universities in the province. It was my job to rank them in order of preference. Then I had to mail the sheet of paper to a central college-admissions office. The whole process probably took ten minutes. My school sent in my grades separately. I vaguely remember filling out a supplementary two-page form listing my interests and activities. There were no S.A.T. scores to worry about, because in Canada we didn’t have to take the S.A.T.s. I don’t know whether anyone wrote me a recommendation. I certainly never asked anyone to. Why would I? It wasn’t as if I were applying to a private club.

I put the University of Toronto first on my list, the University of Western Ontario second, and Queen’s University third. I was working off a set of brochures that I’d sent away for. My parents’ contribution consisted of my father’s agreeing to drive me one afternoon to the University of Toronto campus, where we visited the residential college I was most interested in. I walked around. My father poked his head into the admissions office, chatted with the admissions director, and—I imagine—either said a few short words about the talents of his son or (knowing my father) remarked on the loveliness of the delphiniums in the college flower beds. Then we had ice cream. I got in.

Am I a better or more successful person for having been accepted at the University of Toronto, as opposed to my second or third choice? It strikes me as a curious question. In Ontario, there wasn’t a strict hierarchy of colleges. There were several good ones and several better ones and a number of programs—like computer science at the University of Waterloo—that were world-class. But since all colleges were part of the same public system and tuition everywhere was the same (about a thousand dollars a year, in those days), and a B average in high school pretty much guaranteed you a spot in college, there wasn’t a sense that anything great was at stake in the choice of which college we attended. The issue was whether we attended college, and—most important—how seriously we took the experience once we got there. I thought everyone felt this way. You can imagine my confusion, then, when I first met someone who had gone to Harvard.

There was, first of all, that strange initial reluctance to talk about the matter of college at all—a glance downward, a shuffling of the feet, a mumbled mention of Cambridge. “Did you go to Harvard?” I would ask. I had just moved to the United States. I didn’t know the rules. An uncomfortable nod would follow. Don’t define me by my school, they seemed to be saying, which implied that their school actually could define them. And, of course, it did. Wherever there was one Harvard graduate, another lurked not far behind, ready to swap tales of late nights at the Hasty Pudding, or recount the intricacies of the college-application essay, or wonder out loud about the whereabouts of Prince So-and-So, who lived down the hall and whose family had a place in the South of France that you would not believe. In the novels they were writing, the precocious and sensitive protagonist always went to Harvard; if he was troubled, he dropped out of Harvard; in the end, he returned to Harvard to complete his senior thesis. Once, I attended a wedding of a Harvard alum in his fifties, at which the best man spoke of his college days with the groom as if neither could have accomplished anything of greater importance in the intervening thirty years. By the end, I half expected him to take off his shirt and proudly display the large crimson “H” tattooed on his chest. What is this “Harvard” of which you Americans speak so reverently?


In 1905, Harvard College adopted the College Entrance Examination Board tests as the principal basis for admission, which meant that virtually any academically gifted high-school senior who could afford a private college had a straightforward shot at attending. By 1908, the freshman class was seven per cent Jewish, nine per cent Catholic, and forty-five per cent from public schools, an astonishing transformation for a school that historically had been the preserve of the New England boarding-school complex known in the admissions world as St. Grottlesex.

As the sociologist Jerome Karabel writes in “The Chosen” (Houghton Mifflin; $28), his remarkable history of the admissions process at Harvard, Yale, and Princeton, that meritocratic spirit soon led to a crisis. The enrollment of Jews began to rise dramatically. By 1922, they made up more than a fifth of Harvard’s freshman class. The administration and alumni were up in arms. Jews were thought to be sickly and grasping, grade-grubbing and insular. They displaced the sons of wealthy Wasp alumni, which did not bode well for fund-raising. A. Lawrence Lowell, Harvard’s president in the nineteen-twenties, stated flatly that too many Jews would destroy the school: “The summer hotel that is ruined by admitting Jews meets its fate . . . because they drive away the Gentiles, and then after the Gentiles have left, they leave also.”

The difficult part, however, was coming up with a way of keeping Jews out, because as a group they were academically superior to everyone else. Lowell’s first idea—a quota limiting Jews to fifteen per cent of the student body—was roundly criticized. Lowell tried restricting the number of scholarships given to Jewish students, and made an effort to bring in students from public schools in the West, where there were fewer Jews. Neither strategy worked. Finally, Lowell—and his counterparts at Yale and Princeton—realized that if a definition of merit based on academic prowess was leading to the wrong kind of student, the solution was to change the definition of merit. Karabel argues that it was at this moment that the history and nature of the Ivy League took a significant turn.

The admissions office at Harvard became much more interested in the details of an applicant’s personal life. Lowell told his admissions officers to elicit information about the “character” of candidates from “persons who know the applicants well,” and so the letter of reference became mandatory. Harvard started asking applicants to provide a photograph. Candidates had to write personal essays, demonstrating their aptitude for leadership, and list their extracurricular activities. “Starting in the fall of 1922,” Karabel writes, “applicants were required to answer questions on ‘Race and Color,’ ‘Religious Preference,’ ‘Maiden Name of Mother,’ ‘Birthplace of Father,’ and ‘What change, if any, has been made since birth in your own name or that of your father? (Explain fully).’ ”

At Princeton, emissaries were sent to the major boarding schools, with instructions to rate potential candidates on a scale of 1 to 4, where 1 was “very desirable and apparently exceptional material from every point of view” and 4 was “undesirable from the point of view of character, and, therefore, to be excluded no matter what the results of the entrance examinations might be.” The personal interview became a key component of admissions in order, Karabel writes, “to ensure that ‘undesirables’ were identified and to assess important but subtle indicators of background and breeding such as speech, dress, deportment and physical appearance.” By 1933, the end of Lowell’s term, the percentage of Jews at Harvard was back down to fifteen per cent.

If this new admissions system seems familiar, that’s because it is essentially the same system that the Ivy League uses to this day. According to Karabel, Harvard, Yale, and Princeton didn’t abandon the elevation of character once the Jewish crisis passed. They institutionalized it.

Starting in 1953, Arthur Howe, Jr., spent a decade as the chair of admissions at Yale, and Karabel describes what happened under his guidance:

The admissions committee viewed evidence of “manliness” with particular enthusiasm. One boy gained admission despite an academic prediction of 70 because “there was apparently something manly and distinctive about him that had won over both his alumni and staff interviewers.” Another candidate, admitted despite his schoolwork being “mediocre in comparison with many others,” was accepted over an applicant with a much better record and higher exam scores because, as Howe put it, “we just thought he was more of a guy.” So preoccupied was Yale with the appearance of its students that the form used by alumni interviewers actually had a physical characteristics checklist through 1965. Each year, Yale carefully measured the height of entering freshmen, noting with pride the proportion of the class at six feet or more.

At Harvard, the key figure in that same period was Wilbur Bender, who, as the dean of admissions, had a preference for “the boy with some athletic interests and abilities, the boy with physical vigor and coordination and grace.” Bender, Karabel tells us, believed that if Harvard continued to suffer on the football field it would contribute to the school’s reputation as a place with “no college spirit, few good fellows, and no vigorous, healthy social life,” not to mention a “surfeit of ‘pansies,’ ‘decadent esthetes’ and ‘precious sophisticates.’ ” Bender concentrated on improving Harvard’s techniques for evaluating “intangibles” and, in particular, its “ability to detect homosexual tendencies and serious psychiatric problems.”

By the nineteen-sixties, Harvard’s admissions system had evolved into a series of complex algorithms. The school began by lumping all applicants into one of twenty-two dockets, according to their geographical origin. (There was one docket for Exeter and Andover, another for the eight Rocky Mountain states.) Information from interviews, references, and student essays was then used to grade each applicant on a scale of 1 to 6, along four dimensions: personal, academic, extracurricular, and athletic. Competition, critically, was within each docket, not between dockets, so there was no way for, say, the graduates of Bronx Science and Stuyvesant to shut out the graduates of Andover and Exeter. More important, academic achievement was just one of four dimensions, further diluting the value of pure intellectual accomplishment. Athletic ability, rather than falling under “extracurriculars,” got a category all to itself, which explains why, even now, recruited athletes have an acceptance rate to the Ivies at well over twice the rate of other students, despite S.A.T. scores that are on average more than a hundred points lower. And the most important category? That mysterious index of “personal” qualities. According to Harvard’s own analysis, the personal rating was a better predictor of admission than the academic rating. Those with a rank of 4 or worse on the personal scale had, in the nineteen-sixties, a rejection rate of ninety-eight per cent. Those with a personal rating of 1 had a rejection rate of 2.5 per cent. When the Office of Civil Rights at the federal education department investigated Harvard in the nineteen-eighties, they found handwritten notes scribbled in the margins of various candidates’ files. “This young woman could be one of the brightest applicants in the pool but there are several references to shyness,” read one. 
Another comment reads, “Seems a tad frothy.” One application—and at this point you can almost hear it going to the bottom of the pile—was notated, “Short with big ears.”


Social scientists distinguish between what are known as treatment effects and selection effects. The Marine Corps, for instance, is largely a treatment-effect institution. It doesn’t have an enormous admissions office grading applicants along four separate dimensions of toughness and intelligence. It’s confident that the experience of undergoing Marine Corps basic training will turn you into a formidable soldier. A modelling agency, by contrast, is a selection-effect institution. You don’t become beautiful by signing up with an agency. You get signed up by an agency because you’re beautiful.

At the heart of the American obsession with the Ivy League is the belief that schools like Harvard provide the social and intellectual equivalent of Marine Corps basic training—that being taught by all those brilliant professors and meeting all those other motivated students and getting a degree with that powerful name on it will confer advantages that no local state university can provide. Fuelling the treatment-effect idea are studies showing that if you take two students with the same S.A.T. scores and grades, one of whom goes to a school like Harvard and one of whom goes to a less selective college, the Ivy Leaguer will make far more money ten or twenty years down the road.

The extraordinary emphasis the Ivy League places on admissions policies, though, makes it seem more like a modelling agency than like the Marine Corps, and, sure enough, the studies based on those two apparently equivalent students turn out to be flawed. How do we know that two students who have the same S.A.T. scores and grades really are equivalent? It’s quite possible that the student who goes to Harvard is more ambitious and energetic and personable than the student who wasn’t let in, and that those same intangibles are what account for his better career success. To assess the effect of the Ivies, it makes more sense to compare the student who got into a top school with the student who got into that same school but chose to go to a less selective one. Three years ago, the economists Alan Krueger and Stacy Dale published just such a study. And they found that when you compare apples and apples the income bonus from selective schools disappears.

“As a hypothetical example, take the University of Pennsylvania and Penn State, which are two schools a lot of students choose between,” Krueger said. “One is Ivy, one is a state school. Penn is much more highly selective. If you compare the students who go to those two schools, the ones who go to Penn have higher incomes. But let’s look at those who got into both types of schools, some of whom chose Penn and some of whom chose Penn State. Within that set it doesn’t seem to matter whether you go to the more selective school. Now, you would think that the more ambitious student is the one who would choose to go to Penn, and the ones choosing to go to Penn State might be a little less confident in their abilities or have a little lower family income, and both of those factors would point to people doing worse later on. But they don’t.”

Krueger says that there is one exception to this. Students from the very lowest economic strata do seem to benefit from going to an Ivy. For most students, though, the general rule seems to be that if you are a hardworking and intelligent person you’ll end up doing well regardless of where you went to school. You’ll make good contacts at Penn. But Penn State is big enough and diverse enough that you can make good contacts there, too. Having Penn on your résumé opens doors. But if you were good enough to get into Penn you’re good enough that those doors will open for you anyway. “I can see why families are really concerned about this,” Krueger went on. “The average graduate from a top school is making nearly a hundred and twenty thousand dollars a year, the average graduate from a moderately selective school is making ninety thousand dollars. That’s an enormous difference, and I can see why parents would fight to get their kids into the better school. But I think they are just assigning to the school a lot of what the student is bringing with him to the school.”

Bender was succeeded as the dean of admissions at Harvard by Fred Glimp, who, Karabel tells us, had a particular concern with academic underperformers. “Any class, no matter how able, will always have a bottom quarter,” Glimp once wrote. “What are the effects of the psychology of feeling average, even in a very able group? Are there identifiable types with the psychological or what-not tolerance to be ‘happy’ or to make the most of education while in the bottom quarter?” Glimp thought it was critical that the students who populated the lower rungs of every Harvard class weren’t so driven and ambitious that they would be disturbed by their status. “Thus the renowned (some would say notorious) Harvard admission practice known as the ‘happy-bottom-quarter’ policy was born,” Karabel writes.

It’s unclear whether or not Glimp found any students who fit that particular description. (He wondered, in a marvellously honest moment, whether the answer was “Harvard sons.”) But Glimp had the realism of the modelling scout. Glimp believed implicitly what Krueger and Dale later confirmed: that the character and performance of an academic class is determined, to a significant extent, at the point of admission; that if you want to graduate winners you have to admit winners; that if you want the bottom quarter of your class to succeed you have to find people capable of succeeding in the bottom quarter. Karabel is quite right, then, to see the events of the nineteen-twenties as the defining moment of the modern Ivy League. You are whom you admit in the élite-education business, and when Harvard changed whom it admitted, it changed Harvard. Was that change for the better or for the worse?


In the wake of the Jewish crisis, Harvard, Yale, and Princeton chose to adopt what might be called the “best graduates” approach to admissions. France’s École Normale Supérieure, Japan’s University of Tokyo, and most of the world’s other élite schools define their task as looking for the best students—that is, the applicants who will have the greatest academic success during their time in college. The Ivy League schools justified their emphasis on character and personality, however, by arguing that they were searching for the students who would have the greatest success after college. They were looking for leaders, and leadership, the officials of the Ivy League believed, was not a simple matter of academic brilliance. “Should our goal be to select a student body with the highest possible proportions of high-ranking students, or should it be to select, within a reasonably high range of academic ability, a student body with a certain variety of talents, qualities, attitudes, and backgrounds?” Wilbur Bender asked. To him, the answer was obvious. If you let in only the brilliant, then you produced bookworms and bench scientists: you ended up as socially irrelevant as the University of Chicago (an institution Harvard officials looked upon and shuddered). “Above a reasonably good level of mental ability, above that indicated by a 550-600 level of S.A.T. score,” Bender went on, “the only thing that matters in terms of future impact on, or contribution to, society is the degree of personal inner force an individual has.”

It’s easy to find fault with the best-graduates approach. We tend to think that intellectual achievement is the fairest and highest standard of merit. The Ivy League process, quite apart from its dubious origins, seems subjective and opaque. Why should personality and athletic ability matter so much? The notion that “the ability to throw, kick, or hit a ball is a legitimate criterion in determining who should be admitted to our greatest research universities,” Karabel writes, is “a proposition that would be considered laughable in most of the world’s countries.” At the same time that Harvard was constructing its byzantine admissions system, Hunter College Elementary School, in New York, required simply that applicants take an exam, and if they scored in the top fifty they got in. It’s hard to imagine a more objective and transparent procedure.

But what did Hunter achieve with that best-students model? In the nineteen-eighties, a handful of educational researchers surveyed the students who attended the elementary school between 1948 and 1960. This was a group with an average I.Q. of 157—three and a half standard deviations above the mean—who had been given what, by any measure, was one of the finest classroom experiences in the world. As graduates, though, they weren’t nearly as distinguished as they were expected to be. “Although most of our study participants are successful and fairly content with their lives and accomplishments,” the authors conclude, “there are no superstars . . . and only one or two familiar names.” The researchers spend a great deal of time trying to figure out why Hunter graduates are so disappointing, and end up sounding very much like Wilbur Bender. Being a smart child isn’t a terribly good predictor of success in later life, they conclude. “Non-intellective” factors—like motivation and social skills—probably matter more. Perhaps, the study suggests, “after noting the sacrifices involved in trying for national or world-class leadership in a field, H.C.E.S. graduates decided that the intelligent thing to do was to choose relatively happy and successful lives.” It is a wonderful thing, of course, for a school to turn out lots of relatively happy and successful graduates. But Harvard didn’t want lots of relatively happy and successful graduates. It wanted superstars, and Bender and his colleagues recognized that if this is your goal a best-students model isn’t enough.

Most élite law schools, to cite another example, follow a best-students model. That’s why they rely so heavily on the L.S.A.T. Yet there’s no reason to believe that a person’s L.S.A.T. scores have much relation to how good a lawyer he will be. In a recent research project funded by the Law School Admission Council, the Berkeley researchers Sheldon Zedeck and Marjorie Shultz identified twenty-six “competencies” that they think effective lawyering demands—among them practical judgment, passion and engagement, legal-research skills, questioning and interviewing skills, negotiation skills, stress management, and so on—and the L.S.A.T. picks up only a handful of them. A law school that wants to select the best possible lawyers has to use a very different admissions process from a law school that wants to select the best possible law students. And wouldn’t we prefer that at least some law schools try to select good lawyers instead of good law students?

This search for good lawyers, furthermore, is necessarily going to be subjective, because things like passion and engagement can’t be measured as precisely as academic proficiency. Subjectivity in the admissions process is not just an occasion for discrimination; it is also, in better times, the only means available for giving us the social outcome we want. The first black captain of the Yale football team was a man named Levi Jackson, who graduated in 1950. Jackson was a hugely popular figure on campus. He went on to be a top executive at Ford, and is credited with persuading the company to hire thousands of African-Americans after the 1967 riots. When Jackson was tapped for the exclusive secret society Skull and Bones, he joked, “If my name had been reversed, I never would have made it.” He had a point. The strategy of discretion that Yale had once used to exclude Jews was soon being used to include people like Levi Jackson.

In the 2001 book “The Game of Life,” James L. Shulman and William Bowen (a former president of Princeton) conducted an enormous statistical analysis on an issue that has become one of the most contentious in admissions: the special preferences given to recruited athletes at selective universities. Athletes, Shulman and Bowen demonstrate, have a large and growing advantage in admission over everyone else. At the same time, they have markedly lower G.P.A.s and S.A.T. scores than their peers. Over the past twenty years, their class rankings have steadily dropped, and they tend to segregate themselves in an “athletic culture” different from the culture of the rest of the college. Shulman and Bowen think the preference given to athletes by the Ivy League is shameful.

Halfway through the book, however, Shulman and Bowen present a surprising finding. Male athletes, despite their lower S.A.T. scores and grades, and despite the fact that many of them are members of minorities and come from lower socioeconomic backgrounds than other students, turn out to earn a lot more than their peers. Apparently, athletes are far more likely to go into the high-paying financial-services sector, where they succeed because of their personality and psychological makeup. In what can only be described as a textbook example of burying the lead, Bowen and Shulman write:

One of these characteristics can be thought of as drive—a strong desire to succeed and unswerving determination to reach a goal, whether it be winning the next game or closing a sale. Similarly, athletes tend to be more energetic than the average person, which translates into an ability to work hard over long periods of time—to meet, for example, the workload demands placed on young people by an investment bank in the throes of analyzing a transaction. In addition, athletes are more likely than others to be highly competitive, gregarious and confident of their ability to work well in groups (on teams).

Shulman and Bowen would like to argue that the attitudes of selective colleges toward athletes are a perversion of the ideals of American élite education, but that’s because they misrepresent the actual ideals of American élite education. The Ivy League is perfectly happy to accept, among others, the kind of student who makes a lot of money after graduation. As the old saying goes, the definition of a well-rounded Yale graduate is someone who can roll all the way from New Haven to Wall Street.


I once had a conversation with someone who worked for an advertising agency that represented one of the big luxury automobile brands. He said that he was worried that his client’s new lower-priced line was being bought disproportionately by black women. He insisted that he did not mean this in a racist way. It was just a fact, he said. Black women would destroy the brand’s cachet. It was his job to protect his client from the attentions of the socially undesirable.

This is, in no small part, what Ivy League admissions directors do. They are in the luxury-brand-management business, and “The Chosen,” in the end, is a testament to just how well the brand managers in Cambridge, New Haven, and Princeton have done their job in the past seventy-five years. In the nineteen-twenties, when Harvard tried to figure out how many Jews they had on campus, the admissions office scoured student records and assigned each suspected Jew the designation j1 (for someone who was “conclusively Jewish”), j2 (where the “preponderance of evidence” pointed to Jewishness), or j3 (where Jewishness was a “possibility”). In the branding world, this is called customer segmentation. In the Second World War, as Yale faced plummeting enrollment and revenues, it continued to turn down qualified Jewish applicants. As Karabel writes, “In the language of sociology, Yale judged its symbolic capital to be even more precious than its economic capital.” No good brand manager would sacrifice reputation for short-term gain. The admissions directors at Harvard have always, similarly, been diligent about rewarding the children of graduates, or, as they are quaintly called, “legacies.” In the 1985-92 period, for instance, Harvard admitted children of alumni at a rate more than twice that of non-athlete, non-legacy applicants, despite the fact that, on virtually every one of the school’s magical ratings scales, legacies significantly lagged behind their peers. Karabel calls the practice “unmeritocratic at best and profoundly corrupt at worst,” but rewarding customer loyalty is what luxury brands do. Harvard wants good graduates, and part of their definition of a good graduate is someone who is a generous and loyal alumnus. And if you want generous and loyal alumni you have to reward them. Aren’t the tremendous resources provided to Harvard by its alumni part of the reason so many people want to go to Harvard in the first place?

The endless battle over admissions in the United States proceeds on the assumption that some great moral principle is at stake in the matter of whom schools like Harvard choose to let in—that those who are denied admission by the whims of the admissions office have somehow been harmed. If you are sick and a hospital shuts its doors to you, you are harmed. But a selective school is not a hospital, and those it turns away are not sick. Élite schools, like any luxury brand, are an aesthetic experience—an exquisitely constructed fantasy of what it means to belong to an élite—and they have always been mindful of what must be done to maintain that experience.

In the nineteen-eighties, when Harvard was accused of enforcing a secret quota on Asian admissions, its defense was that once you adjusted for the preferences given to the children of alumni and for the preferences given to athletes, Asians really weren’t being discriminated against. But you could sense Harvard’s exasperation that the issue was being raised at all. If Harvard had too many Asians, it wouldn’t be Harvard, just as Harvard wouldn’t be Harvard with too many Jews or pansies or parlor pinks or shy types or short people with big ears.