In “Collapse,” Jared Diamond shows how societies destroy themselves.

1.

A thousand years ago, a group of Vikings led by Erik the Red set sail from Norway for the vast Arctic landmass west of Scandinavia which came to be known as Greenland. It was largely uninhabitable—a forbidding expanse of snow and ice. But along the southwestern coast there were two deep fjords protected from the harsh winds and saltwater spray of the North Atlantic Ocean, and as the Norse sailed upriver they saw grassy slopes flowering with buttercups, dandelions, and bluebells, and thick forests of willow and birch and alder. Two colonies were formed, three hundred miles apart, known as the Eastern and Western Settlements. The Norse raised sheep, goats, and cattle. They turned the grassy slopes into pastureland. They hunted seal and caribou. They built a string of parish churches and a magnificent cathedral, the remains of which are still standing. They traded actively with mainland Europe, and tithed regularly to the Roman Catholic Church. The Norse colonies in Greenland were law-abiding, economically viable, fully integrated communities, numbering at their peak five thousand people. They lasted for four hundred and fifty years—and then they vanished.

The story of the Eastern and Western Settlements of Greenland is told in Jared Diamond’s “Collapse: How Societies Choose to Fail or Succeed” (Viking; $29.95). Diamond teaches geography at U.C.L.A. and is well known for his best-seller “Guns, Germs, and Steel,” which won a Pulitzer Prize. In “Guns, Germs, and Steel,” Diamond looked at environmental and structural factors to explain why Western societies came to dominate the world. In “Collapse,” he continues that approach, only this time he looks at history’s losers—like the Easter Islanders, the Anasazi of the American Southwest, the Mayans, and the modern-day Rwandans. We live in an era preoccupied with the way that ideology and culture and politics and economics help shape the course of history. But Diamond isn’t particularly interested in any of those things—or, at least, he’s interested in them only insofar as they bear on what to him is the far more important question, which is a society’s relationship to its climate and geography and resources and neighbors. “Collapse” is a book about the most prosaic elements of the earth’s ecosystem—soil, trees, and water—because societies fail, in Diamond’s view, when they mismanage those environmental factors.

There was nothing wrong with the social organization of the Greenland settlements. The Norse built a functioning reproduction of the predominant northern-European civic model of the time—devout, structured, and reasonably orderly. In 1408, right before the end, records from the Eastern Settlement dutifully report that Thorstein Olafsson married Sigrid Bjornsdotter in Hvalsey Church on September 14th of that year, with Brand Halldorstson, Thord Jorundarson, Thorbjorn Bardarson, and Jon Jonsson as witnesses, following the proclamation of the wedding banns on three consecutive Sundays.

The problem with the settlements, Diamond argues, was that the Norse thought that Greenland really was green; they treated it as if it were the verdant farmland of southern Norway. They cleared the land to create meadows for their cows, and to grow hay to feed their livestock through the long winter. They chopped down the forests for fuel, and for the construction of wooden objects. To make houses warm enough for the winter, they built their homes out of six-foot-thick slabs of turf, which meant that a typical home consumed about ten acres of grassland.

But Greenland’s ecosystem was too fragile to withstand that kind of pressure. The short, cool growing season meant that plants developed slowly, which in turn meant that topsoil layers were shallow and lacking in soil constituents, like organic humus and clay, that hold moisture and keep soil resilient in the face of strong winds. “The sequence of soil erosion in Greenland begins with cutting or burning the cover of trees and shrubs, which are more effective at holding soil than is grass,” he writes. “With the trees and shrubs gone, livestock, especially sheep and goats, graze down the grass, which regenerates only slowly in Greenland’s climate. Once the grass cover is broken and the soil is exposed, soil is carried away especially by the strong winds, and also by pounding from occasionally heavy rains, to the point where the topsoil can be removed for a distance of miles from an entire valley.” Without adequate pastureland, the summer hay yields shrank; without adequate supplies of hay, keeping livestock through the long winter got harder. And, without adequate supplies of wood, getting fuel for the winter became increasingly difficult.

The Norse needed to reduce their reliance on livestock—particularly cows, which consumed an enormous amount of agricultural resources. But cows were a sign of high status; to northern Europeans, beef was a prized food. They needed to copy the Inuit practice of burning seal blubber for heat and light in the winter, and to learn from the Inuit the difficult art of hunting ringed seals, which were the most reliably plentiful source of food available in the winter. But the Norse had contempt for the Inuit—they called them skraelings, “wretches”—and preferred to practice their own brand of European agriculture. In the summer, when the Norse should have been sending ships on lumber-gathering missions to Labrador, in order to relieve the pressure on their own forestlands, they instead sent boats and men to the coast to hunt for walrus. Walrus tusks, after all, had great trade value. In return for those tusks, the Norse were able to acquire, among other things, church bells, stained-glass windows, bronze candlesticks, Communion wine, linen, silk, silver, churchmen’s robes, and jewelry to adorn their massive cathedral at Gardar, with its three-ton sandstone building blocks and eighty-foot bell tower. In the end, the Norse starved to death.

2.

Diamond’s argument stands in sharp contrast to the conventional explanations for a society’s collapse. Usually, we look for some kind of cataclysmic event. The aboriginal civilization of the Americas was decimated by the sudden arrival of smallpox. European Jewry was destroyed by Nazism. Similarly, the disappearance of the Norse settlements is usually blamed on the Little Ice Age, which descended on Greenland in the early fourteen-hundreds, ending several centuries of relative warmth. (One archeologist refers to this as the “It got too cold, and they died” argument.) What all these explanations have in common is the idea that civilizations are destroyed by forces outside their control, by acts of God.

But look, Diamond says, at Easter Island. Once, it was home to a thriving culture that produced the enormous stone statues that continue to inspire awe. It was home to dozens of species of trees, which created and protected an ecosystem fertile enough to support as many as thirty thousand people. Today, it’s a barren and largely empty outcropping of volcanic rock. What happened? Did a rare plant virus wipe out the island’s forest cover? Not at all. The Easter Islanders chopped their trees down, one by one, until they were all gone. “I have often asked myself, ‘What did the Easter Islander who cut down the last palm tree say while he was doing it?'” Diamond writes, and that, of course, is what is so troubling about the conclusions of “Collapse.” Those trees were felled by rational actors—who must have suspected that the destruction of this resource would result in the destruction of their civilization. The lesson of “Collapse” is that societies, as often as not, aren’t murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death.

This doesn’t mean that acts of God don’t play a role. It did get colder in Greenland in the early fourteen-hundreds. But it didn’t get so cold that the island became uninhabitable. The Inuit survived long after the Norse died out, and the Norse had all kinds of advantages, including a more diverse food supply, iron tools, and ready access to Europe. The problem was that the Norse simply couldn’t adapt to the country’s changing environmental conditions. Diamond writes, for instance, of the fact that nobody can find fish remains in Norse archeological sites. One scientist sifted through tons of debris from the Vatnahverfi farm and found only three fish bones; another researcher analyzed thirty-five thousand bones from the garbage of another Norse farm and found two fish bones. How can this be? Greenland is a fisherman’s dream: Diamond describes running into a Danish tourist in Greenland who had just caught two Arctic char in a shallow pool with her bare hands. “Every archaeologist who comes to excavate in Greenland . . . starts out with his or her own idea about where all those missing fish bones might be hiding,” he writes. “Could the Norse have strictly confined their munching on fish to within a few feet of the shoreline, at sites now underwater because of land subsidence? Could they have faithfully saved all their fish bones for fertilizer, fuel, or feeding to cows?” It seems unlikely. There are no fish bones in Norse archeological remains, Diamond concludes, for the simple reason that the Norse didn’t eat fish. For one reason or another, they had a cultural taboo against it.

Given the difficulty that the Norse had in putting food on the table, this was insane. Eating fish would have substantially reduced the ecological demands of the Norse settlements. The Norse would have needed fewer livestock and less pastureland. Fishing is not nearly as labor-intensive as raising cattle or hunting caribou, so eating fish would have freed time and energy for other activities. It would have diversified their diet.

Why did the Norse choose not to eat fish? Because they weren’t thinking about their biological survival. They were thinking about their cultural survival. Food taboos are one of the idiosyncrasies that define a community. Not eating fish served the same function as building lavish churches, and doggedly replicating the untenable agricultural practices of their land of origin. It was part of what it meant to be Norse, and if you are going to establish a community in a harsh and forbidding environment all those little idiosyncrasies which define and cement a culture are of paramount importance. “The Norse were undone by the same social glue that had enabled them to master Greenland’s difficulties,” Diamond writes. “The values to which people cling most stubbornly under inappropriate conditions are those values that were previously the source of their greatest triumphs over adversity.” He goes on:

To us in our secular modern society, the predicament in which the Greenlanders found themselves is difficult to fathom. To them, however, concerned with their social survival as much as their biological survival, it was out of the question to invest less in churches, to imitate or intermarry with the Inuit, and thereby to face an eternity in Hell just in order to survive another winter on Earth.

Diamond’s distinction between social and biological survival is a critical one, because too often we blur the two, or assume that biological survival is contingent on the strength of our civilizational values. That was the lesson taken from the two world wars and the nuclear age that followed: we would survive as a species only if we learned to get along and resolve our disputes peacefully. The fact is, though, that we can be law-abiding and peace-loving and tolerant and inventive and committed to freedom and true to our own values and still behave in ways that are biologically suicidal. The two kinds of survival are separate.

Diamond points out that the Easter Islanders did not practice, so far as we know, a uniquely pathological version of South Pacific culture. Other societies, on other islands in the Pacific, chopped down trees and farmed and raised livestock just as the Easter Islanders did. What doomed the Easter Islanders was the interaction between what they did and where they were. Diamond and a colleague, Barry Rolett, identified nine physical factors that contributed to the likelihood of deforestation—including latitude, average rainfall, aerial-ash fallout, proximity to Central Asia’s dust plume, size, and so on—and Easter Island ranked at the high-risk end of nearly every variable. “The reason for Easter’s unusually severe degree of deforestation isn’t that those seemingly nice people really were unusually bad or improvident,” he concludes. “Instead, they had the misfortune to be living in one of the most fragile environments, at the highest risk for deforestation, of any Pacific people.” The problem wasn’t the Easter Islanders. It was Easter Island.

In the second half of “Collapse,” Diamond turns his attention to modern examples, and one of his case studies is the recent genocide in Rwanda. What happened in Rwanda is commonly described as an ethnic struggle between the majority Hutu and the historically dominant, wealthier Tutsi, and it is understood in those terms because that is how we have come to explain much of modern conflict: Serb and Croat, Jew and Arab, Muslim and Christian. The world is a cauldron of cultural antagonism. It’s an explanation that clearly exasperates Diamond. The Hutu didn’t just kill the Tutsi, he points out. The Hutu also killed other Hutu. Why? Look at the land: steep hills farmed right up to the crests, without any protective terracing; rivers thick with mud from erosion; extreme deforestation leading to irregular rainfall and famine; staggeringly high population densities; the exhaustion of the topsoil; falling per-capita food production. This was a society on the brink of ecological disaster, and if there is anything that is clear from the study of such societies it is that they inevitably descend into genocidal chaos. In “Collapse,” Diamond quite convincingly defends himself against the charge of environmental determinism. His discussions are always nuanced, and he gives political and ideological factors their due. The real issue is how, in coming to terms with the uncertainties and hostilities of the world, the rest of us have turned ourselves into cultural determinists.

3.

For the past thirty years, Oregon has had one of the strictest sets of land-use regulations in the nation, requiring new development to be clustered in and around existing urban development. The laws meant that Oregon has done perhaps the best job in the nation in limiting suburban sprawl, and protecting coastal lands and estuaries. But this November Oregon’s voters passed a ballot referendum, known as Measure 37, that rolled back many of those protections. Specifically, Measure 37 said that anyone who could show that the value of his land was affected by regulations implemented since its purchase was entitled to compensation from the state. If the state declined to pay, the property owner would be exempted from the regulations.

To call Measure 37—and similar referendums that have been passed recently in other states—intellectually incoherent is to put it mildly. It might be that the reason your hundred-acre farm on a pristine hillside is worth millions to a developer is that it’s on a pristine hillside: if everyone on that hillside could subdivide, and sell out to Target and Wal-Mart, then nobody’s plot would be worth millions anymore. Will the voters of Oregon then pass Measure 38, allowing them to sue the state for compensation over damage to property values caused by Measure 37?

It is hard to read “Collapse,” though, and not have an additional reaction to Measure 37. Supporters of the law spoke entirely in the language of political ideology. To them, the measure was a defense of property rights, preventing the state from unconstitutional “takings.” If you replaced the term “property rights” with “First Amendment rights,” this would have been indistinguishable from an argument over, say, whether charitable groups ought to be able to canvass in malls, or whether cities can control the advertising they sell on the sides of public buses. As a society, we do a very good job with these kinds of debates: we give everyone a hearing, and pass laws, and make compromises, and square our conclusions with our constitutional heritage—and in the Oregon debate the quality of the theoretical argument was impressively high.

The thing that got lost in the debate, however, was the land. In a rapidly growing state like Oregon, what, precisely, are the state’s ecological strengths and vulnerabilities? What impact will changed land-use priorities have on water and soil and cropland and forest? One can imagine Diamond writing about the Measure 37 debate, and he wouldn’t be very impressed by how seriously Oregonians wrestled with the problem of squaring their land-use rules with their values, because to him a society’s environmental birthright is not best discussed in those terms. Rivers and streams and forests and soil are a biological resource. They are a tangible, finite thing, and societies collapse when they get so consumed with addressing the fine points of their history and culture and deeply held beliefs—with making sure that Thorstein Olafsson and Sigrid Bjornsdotter are married before the right number of witnesses following the announcement of wedding banns on the right number of Sundays—that they forget that the pastureland is shrinking and the forest cover is gone.

When archeologists looked through the ruins of the Western Settlement, they found plenty of the big wooden objects that were so valuable in Greenland—crucifixes, bowls, furniture, doors, roof timbers—which meant that the end came too quickly for anyone to do any scavenging. And, when the archeologists looked at the animal bones left in the debris, they found the bones of newborn calves, meaning that the Norse, in that final winter, had given up on the future. They found toe bones from cows, equal to the number of cow spaces in the barn, meaning that the Norse ate their cattle down to the hoofs, and they found the bones of dogs covered with knife marks, meaning that, in the end, they had to eat their pets. But not fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.

Is pop culture dumbing us down or smartening us up?

1.

Twenty years ago, a political philosopher named James Flynn uncovered a curious fact. Americans—at least, as measured by I.Q. tests—were getting smarter. This fact had been obscured for years, because the people who give I.Q. tests continually recalibrate the scoring system to keep the average at 100. But if you took out the recalibration, Flynn found, I.Q. scores showed a steady upward trajectory, rising by about three points per decade, which means that a person whose I.Q. placed him in the top ten per cent of the American population in 1920 would today fall in the bottom third. Some of that effect, no doubt, is a simple by-product of economic progress: in the surge of prosperity during the middle part of the last century, people in the West became better fed, better educated, and more familiar with things like I.Q. tests. But, even as that wave of change has subsided, test scores have continued to rise—not just in America but all over the developed world. What’s more, the increases have not been confined to children who go to enriched day-care centers and private schools. The middle part of the curve—the people who have supposedly been suffering from a deteriorating public-school system and a steady diet of lowest-common-denominator television and mindless pop music—has increased just as much. What on earth is happening? In the wonderfully entertaining “Everything Bad Is Good for You” (Riverhead; $23.95), Steven Johnson proposes that what is making us smarter is precisely what we thought was making us dumber: popular culture.
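
The arithmetic behind that startling claim is easy to check. The sketch below is a back-of-the-envelope calculation, not anything taken from Flynn's data: it assumes scores are normally distributed with a mean of 100 and a standard deviation of 15, and treats the gain as roughly three points a decade over the eight and a half decades between 1920 and the mid-two-thousands.

    # Back-of-the-envelope check of the Flynn-effect claim. Assumes I.Q. is
    # normally distributed (mean 100, SD 15) and rose about 3 points per
    # decade between 1920 and the mid-2000s; both assumptions are illustrative.
    from statistics import NormalDist

    iq = NormalDist(mu=100, sigma=15)

    gain = 3 * 8.5                        # roughly 25 points of drift
    top_decile_1920 = iq.inv_cdf(0.90)    # score that put you in the top 10% in 1920
    rescored = top_decile_1920 - gain     # the same person against today's norms
    percentile = iq.cdf(rescored) * 100

    print(f"Top-10% score in 1920: {top_decile_1920:.0f}")
    print(f"Against today's norms: {rescored:.0f}, about the {percentile:.0f}th percentile")
    # Roughly 119 falls to about 94: near the 34th percentile, i.e. the bottom third.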

Johnson is the former editor of the online magazine Feed and the author of a number of books on science and technology. There is a pleasing eclecticism to his thinking. He is as happy analyzing “Finding Nemo” as he is dissecting the intricacies of a piece of software, and he’s perfectly capable of using Nietzsche’s notion of eternal recurrence to discuss the new creative rules of television shows. Johnson wants to understand popular culture—not in the postmodern, academic sense of wondering what “The Dukes of Hazzard” tells us about Southern male alienation but in the very practical sense of wondering what watching something like “The Dukes of Hazzard” does to the way our minds work.

As Johnson points out, television is very different now from what it was thirty years ago. It’s harder. A typical episode of “Starsky and Hutch,” in the nineteen-seventies, followed an essentially linear path: two characters, engaged in a single story line, moving toward a decisive conclusion. To watch an episode of “Dallas” today is to be stunned by its glacial pace—by the arduous attempts to establish social relationships, by the excruciating simplicity of the plotline, by how obvious it was. A single episode of “The Sopranos,” by contrast, might follow five narrative threads, involving a dozen characters who weave in and out of the plot. Modern television also requires the viewer to do a lot of what Johnson calls “filling in,” as in a “Seinfeld” episode that subtly parodies the Kennedy assassination conspiracists, or a typical “Simpsons” episode, which may contain numerous allusions to politics or cinema or pop culture. The extraordinary amount of money now being made in the television aftermarket—DVD sales and syndication—means that the creators of television shows now have an incentive to make programming that can sustain two or three or four viewings. Even reality shows like “Survivor,” Johnson argues, engage the viewer in a way that television rarely has in the past:

When we watch these shows, the part of our brain that monitors the emotional lives of the people around us—the part that tracks subtle shifts in intonation and gesture and facial expression—scrutinizes the action on the screen, looking for clues. . . . The phrase “Monday-morning quarterbacking” was coined to describe the engaged feeling spectators have in relation to games as opposed to stories. We absorb stories, but we second-guess games. Reality programming has brought that second-guessing to prime time, only the game in question revolves around social dexterity rather than the physical kind.

How can the greater cognitive demands that television makes on us now, he wonders, not matter?

Johnson develops the same argument about video games. Most of the people who denounce video games, he says, haven’t actually played them—at least, not recently. Twenty years ago, games like Tetris or Pac-Man were simple exercises in motor coördination and pattern recognition. Today’s games belong to another realm. Johnson points out that one of the “walk-throughs” for “Grand Theft Auto III”—that is, the informal guides that break down the games and help players navigate their complexities—is fifty-three thousand words long, about the length of his book. The contemporary video game involves a fully realized imaginary world, dense with detail and levels of complexity.

Indeed, video games are not games in the sense of those pastimes—like Monopoly or gin rummy or chess—which most of us grew up with. They don’t have a set of unambiguous rules that have to be learned and then followed during the course of play. This is why many of us find modern video games baffling: we’re not used to being in a situation where we have to figure out what to do. We think we only have to learn how to press the buttons faster. But these games withhold critical information from the player. Players have to explore and sort through hypotheses in order to make sense of the game’s environment, which is why a modern video game can take forty hours to complete. Far from being engines of instant gratification, as they are often described, video games are actually, Johnson writes, “all about delayed gratification—sometimes so long delayed that you wonder if the gratification is ever going to show.”

At the same time, players are required to manage a dizzying array of information and options. The game presents the player with a series of puzzles, and you can’t succeed at the game simply by solving the puzzles one at a time. You have to craft a longer-term strategy, in order to juggle and coördinate competing interests. In denigrating the video game, Johnson argues, we have confused it with other phenomena in teen-age life, like multitasking—simultaneously e-mailing and listening to music and talking on the telephone and surfing the Internet. Playing a video game is, in fact, an exercise in “constructing the proper hierarchy of tasks and moving through the tasks in the correct sequence,” he writes. “It’s about finding order and meaning in the world, and making decisions that help create that order.”

2.

It doesn’t seem right, of course, that watching “24” or playing a video game could be as important cognitively as reading a book. Isn’t the extraordinary success of the “Harry Potter” novels better news for the culture than the equivalent success of “Grand Theft Auto III”? Johnson’s response is to imagine what cultural critics might have said had video games been invented hundreds of years ago, and only recently had something called the book been marketed aggressively to children:

Reading books chronically understimulates the senses. Unlike the longstanding tradition of game playing—which engages the child in a vivid, three-dimensional world filled with moving images and musical sound-scapes, navigated and controlled with complex muscular movements—books are simply a barren string of words on the page. . . .
Books are also tragically isolating. While games have for many years engaged the young in complex social relationships with their peers, building and exploring worlds together, books force the child to sequester him or herself in a quiet space, shut off from interaction with other children. . . .
But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path. You can’t control their narratives in any fashion—you simply sit back and have the story dictated to you. . . . This risks instilling a general passivity in our children, making them feel as though they’re powerless to change their circumstances. Reading is not an active, participatory process; it’s a submissive one.

He’s joking, of course, but only in part. The point is that books and video games represent two very different kinds of learning. When you read a biology textbook, the content of what you read is what matters. Reading is a form of explicit learning. When you play a video game, the value is in how it makes you think. Video games are an example of collateral learning, which is no less important.

Being “smart” involves facility in both kinds of thinking—the kind of fluid problem solving that matters in things like video games and I.Q. tests, but also the kind of crystallized knowledge that comes from explicit learning. If Johnson’s book has a flaw, it is that he sometimes speaks of our culture being “smarter” when he’s really referring just to that fluid problem-solving facility. When it comes to the other kind of intelligence, it is not clear at all what kind of progress we are making, as anyone who has read, say, the Gettysburg Address alongside any Presidential speech from the past twenty years can attest. The real question is what the right balance of these two forms of intelligence might look like. “Everything Bad Is Good for You” doesn’t answer that question. But Johnson does something nearly as important, which is to remind us that we shouldn’t fall into the trap of thinking that explicit learning is the only kind of learning that matters.

In recent years, for example, a number of elementary schools have phased out or reduced recess and replaced it with extra math or English instruction. This is the triumph of the explicit over the collateral. After all, recess is “play” for a ten-year-old in precisely the sense that Johnson describes video games as play for an adolescent: an unstructured environment that requires the child actively to intervene, to look for the hidden logic, to find order and meaning in chaos.

One of the ongoing debates in the educational community, similarly, is over the value of homework. Meta-analysis of hundreds of studies done on the effects of homework shows that the evidence supporting the practice is, at best, modest. Homework seems to be most useful in high school and for subjects like math. At the elementary-school level, homework seems to be of marginal or no academic value. Its effect on discipline and personal responsibility is unproved. And the causal relation between high-school homework and achievement is unclear: it hasn’t been firmly established whether spending more time on homework in high school makes you a better student or whether better students, finding homework more pleasurable, spend more time doing it. So why, as a society, are we so enamored of homework? Perhaps because we have so little faith in the value of the things that children would otherwise be doing with their time. They could go out for a walk, and get some exercise; they could spend time with their peers, and reap the rewards of friendship. Or, Johnson suggests, they could be playing a video game, and giving their minds a rigorous workout.

The bad idea behind our failed health-care system.

1.

Tooth decay begins, typically, when debris becomes trapped between the teeth and along the ridges and in the grooves of the molars.  The food rots.  It becomes colonized with bacteria.  The bacteria feeds off sugars in the mouth and forms an acid that begins to eat away at the enamel of the teeth.  Slowly, the bacteria works its way through to the dentin, the inner structure, and from there the cavity begins to blossom three-dimensionally, spreading inward and sideways.  When the decay reaches the pulp tissue, the blood vessels, and the nerves that serve the tooth, the pain starts—an insistent throbbing.  The tooth turns brown.  It begins to lose its hard structure, to the point where a dentist can reach into a cavity with a hand instrument and scoop out the decay.  At the base of the tooth, the bacteria mineralizes into tartar, which begins to irritate the gums.  They become puffy and bright red and start to recede, leaving more and more of the tooth’s root exposed.  When the infection works its way down to the bone, the structure holding the tooth in begins to collapse altogether.

Several years ago, two Harvard researchers, Susan Starr Sered and Rushika Fernandopulle, set out to interview people without health-care coverage for a book they were writing, “Uninsured in America.” They talked to as many kinds of people as they could find, collecting stories of untreated depression and struggling single mothers and chronically injured laborers—and the most common complaint they heard was about teeth.  Gina, a hairdresser in Idaho, whose husband worked as a freight manager at a chain store, had “a peculiar mannerism of keeping her mouth closed even when speaking.” It turned out that she hadn’t been able to afford dental care for three years, and one of her front teeth was rotting.  Daniel, a construction worker, pulled out his bad teeth with pliers.  Then, there was Loretta, who worked nights at a university research center in Mississippi, and was missing most of her teeth.  “They’ll break off after a while, and then you just grab a hold of them, and they work their way out,” she explained to Sered and Fernandopulle.  “It hurts so bad, because the tooth aches.  Then it’s a relief just to get it out of there.  The hole closes up itself anyway.  So it’s so much better.”

People without health insurance have bad teeth because, if you’re paying for everything out of your own pocket, going to the dentist for a checkup seems like a luxury.  It isn’t, of course.  The loss of teeth makes eating fresh fruits and vegetables difficult, and a diet heavy in soft, processed foods exacerbates more serious health problems, like diabetes.  The pain of tooth decay leads many people to use alcohol as a salve.  And those struggling to get ahead in the job market quickly find that the unsightliness of bad teeth, and the self-consciousness that results, can become a major barrier.  If your teeth are bad, you’re not going to get a job as a receptionist, say, or a cashier.  You’re going to be put in the back somewhere, far from the public eye.  What Loretta, Gina, and Daniel understand, the two authors tell us, is that bad teeth have come to be seen as a marker of “poor parenting, low educational achievement and slow or faulty intellectual development.” They are an outward marker of caste.  “Almost every time we asked interviewees what their first priority would be if the president established universal health coverage tomorrow,” Sered and Fernandopulle write, “the immediate answer was ‘my teeth.’ ”

The U.S. health-care system, according to “Uninsured in America,” has created a group of people who increasingly look different from others and suffer in ways that others do not.  The leading cause of personal bankruptcy in the United States is unpaid medical bills.  Half of the uninsured owe money to hospitals, and a third are being pursued by collection agencies.  Children without health insurance are less likely to receive medical attention for serious injuries, for recurrent ear infections, or for asthma.  Lung-cancer patients without insurance are less likely to receive surgery, chemotherapy, or radiation treatment.  Heart-attack victims without health insurance are less likely to receive angioplasty.  People with pneumonia who don’t have health insurance are less likely to receive X rays or consultations.  The death rate in any given year for someone without health insurance is twenty-five per cent higher than for someone with insurance.  Because the uninsured are sicker than the rest of us, they can’t get better jobs, and because they can’t get better jobs they can’t afford health insurance, and because they can’t afford health insurance they get even sicker.  John, the manager of a bar in Idaho, tells Sered and Fernandopulle that as a result of various workplace injuries over the years he takes eight ibuprofen, waits two hours, then takes eight more—and tries to cadge as much prescription pain medication as he can from friends.  “There are times when I should’ve gone to the doctor, but I couldn’t afford to go because I don’t have insurance,” he says.  “Like when my back messed up, I should’ve gone.  If I had insurance, I would’ve went, because I know I could get treatment, but when you can’t afford it you don’t go.  Because the harder the hole you get into in terms of bills, then you’ll never get out.  So you just say, ‘I can deal with the pain.’ ”

2.

One of the great mysteries of political life in the United States is why Americans are so devoted to their health-care system.  Six times in the past century—during the First World War, during the Depression, during the Truman and Johnson Administrations, in the Senate in the nineteen-seventies, and during the Clinton years—efforts have been made to introduce some kind of universal health insurance, and each time the efforts have been rejected.  Instead, the United States has opted for a makeshift system of increasing complexity and dysfunction.  Americans spend $5,267 per capita on health care every year, almost two and a half times the industrialized world’s median of $2,193; the extra spending comes to hundreds of billions of dollars a year.  What does that extra spending buy us? Americans have fewer doctors per capita than most Western countries.  We go to the doctor less than people in other Western countries.  We get admitted to the hospital less frequently than people in other Western countries.  We are less satisfied with our health care than our counterparts in other countries.  American life expectancy is lower than the Western average.  Childhood-immunization rates in the United States are lower than average.  Infant-mortality rates are in the nineteenth percentile of industrialized nations.  Doctors here perform more high-end medical procedures, such as coronary angioplasties, than in other countries, but most of the wealthier Western countries have more CT scanners than the United States does, and Switzerland, Japan, Austria, and Finland all have more MRI machines per capita.  Nor is our system more efficient.  The United States spends more than a thousand dollars per capita per year—or close to four hundred billion dollars—on health-care-related paperwork and administration, whereas Canada, for example, spends only about three hundred dollars per capita.  And, of course, every other country in the industrialized world insures all its citizens; despite those extra hundreds of billions of dollars we spend each year, we leave forty-five million people without any insurance.  A country that displays an almost ruthless commitment to efficiency and performance in every aspect of its economy—a country that switched to Japanese cars the moment they were more reliable, and to Chinese T-shirts the moment they were five cents cheaper—has loyally stuck with a health-care system that leaves its citizenry pulling out their teeth with pliers.
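
Those magnitudes are easy to sanity-check. The short calculation below simply redoes the arithmetic in the paragraph above; the per-capita figures are the ones quoted there, while the population of roughly 293 million is my own assumption for the mid-two-thousands United States.

    # Rough arithmetic behind the spending figures quoted above. The per-capita
    # numbers come from the text; the ~293 million population is an assumed
    # mid-2000s value, used only to turn per-capita figures into totals.
    population = 293_000_000

    us_per_capita = 5267        # annual U.S. health spending per person
    median_per_capita = 2193    # industrialized-world median

    ratio = us_per_capita / median_per_capita
    extra_total = (us_per_capita - median_per_capita) * population
    admin_floor = 1000 * population   # "more than a thousand dollars per capita"

    print(f"U.S. spending is {ratio:.1f} times the median")                      # ~2.4x
    print(f"Extra spending: roughly ${extra_total / 1e9:.0f} billion a year")    # ~$900 billion
    print(f"Paperwork and administration: at least ${admin_floor / 1e9:.0f} billion a year")
    # The text's "close to four hundred billion dollars" implies a per-capita
    # administration figure in the $1,300 range for a population of this size.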

America’s health-care mess is, in part, simply an accident of history.  The fact that there have been six attempts at universal health coverage in the last century suggests that there has long been support for the idea.  But politics has always got in the way.  In both Europe and the United States, for example, the push for health insurance was led, in large part, by organized labor.  But in Europe the unions worked through the political system, fighting for coverage for all citizens.  From the start, health insurance in Europe was public and universal, and that created powerful political support for any attempt to expand benefits.  In the United States, by contrast, the unions worked through the collective-bargaining system and, as a result, could win health benefits only for their own members.  Health insurance here has always been private and selective, and every attempt to expand benefits has resulted in a paralyzing political battle over who would be added to insurance rolls and who ought to pay for those additions.

Policy is driven by more than politics, however.  It is equally driven by ideas, and in the past few decades a particular idea has taken hold among prominent American economists which has also been a powerful impediment to the expansion of health insurance.  The idea is known as “moral hazard.” Health economists in other Western nations do not share this obsession.  Nor do most Americans.  But moral hazard has profoundly shaped the way think tanks formulate policy and the way experts argue and the way health insurers structure their plans and the way legislation and regulations have been written.  The health-care mess isn’t merely the unintentional result of political dysfunction, in other words.  It is also the deliberate consequence of the way in which American policymakers have come to think about insurance.

“Moral hazard” is the term economists use to describe the fact that insurance can change the behavior of the person being insured.  If your office gives you and your co-workers all the free Pepsi you want—if your employer, in effect, offers universal Pepsi insurance—you’ll drink more Pepsi than you would have otherwise.  If you have a no-deductible fire-insurance policy, you may be a little less diligent in clearing the brush away from your house.  The savings-and-loan crisis of the nineteen-eighties was created, in large part, by the fact that the federal government insured savings deposits of up to a hundred thousand dollars, and so the newly deregulated S. & L.s made far riskier investments than they would have otherwise.  Insurance can have the paradoxical effect of producing risky and wasteful behavior.  Economists spend a great deal of time thinking about such moral hazard for good reason.  Insurance is an attempt to make human life safer and more secure.  But, if those efforts can backfire and produce riskier behavior, providing insurance becomes a much more complicated and problematic endeavor.

In 1968, the economist Mark Pauly argued that moral hazard played an enormous role in medicine, and, as John Nyman writes in his book “The Theory of the Demand for Health Insurance,” Pauly’s paper has become the “single most influential article in the health economics literature.” Nyman, an economist at the University of Minnesota, says that the fear of moral hazard lies behind the thicket of co-payments and deductibles and utilization reviews which characterizes the American health-insurance system.  Fear of moral hazard, Nyman writes, also explains “the general lack of enthusiasm by U.S. health economists for the expansion of health insurance coverage (for example, national health insurance or expanded Medicare benefits) in the U.S.”

What Nyman is saying is that when your insurance company requires that you make a twenty-dollar co-payment for a visit to the doctor, or when your plan includes an annual five-hundred-dollar or thousand-dollar deductible, it’s not simply an attempt to get you to pick up a larger share of your health costs.  It is an attempt to make your use of the health-care system more efficient.  Making you responsible for a share of the costs, the argument runs, will reduce moral hazard: you’ll no longer grab one of those free Pepsis when you aren’t really thirsty.  That’s also why Nyman says that the notion of moral hazard is behind the “lack of enthusiasm” for expansion of health insurance.  If you think of insurance as producing wasteful consumption of medical services, then the fact that there are forty-five million Americans without health insurance is no longer an immediate cause for alarm.  After all, it’s not as if the uninsured never go to the doctor.  They spend, on average, $934 a year on medical care.  A moral-hazard theorist would say that they go to the doctor when they really have to.  Those of us with private insurance, by contrast, consume $2,347 worth of health care a year.  If a lot of that extra $1,413 is waste, then maybe the uninsured person is the truly efficient consumer of health care.

The moral-hazard argument makes sense, however, only if we consume health care in the same way that we consume other consumer goods, and to economists like Nyman this assumption is plainly absurd.  We go to the doctor grudgingly, only because we’re sick.  “Moral hazard is overblown,” the Princeton economist Uwe Reinhardt says.  “You always hear that the demand for health care is unlimited.  This is just not true.  People who are very well insured, who are very rich, do you see them check into the hospital because it’s free? Do people really like to go to the doctor? Do they check into the hospital instead of playing golf?”

For that matter, when you have to pay for your own health care, does your consumption really become more efficient? In the late nineteen-seventies, the RAND Corporation did an extensive study on the question, randomly assigning families to health plans with co-payment levels at zero per cent, twenty-five per cent, fifty per cent, or ninety-five per cent, up to six thousand dollars.  As you might expect, the more that people were asked to chip in for their health care the less care they used.  The problem was that they cut back equally on both frivolous care and useful care.  Poor people in the high-deductible group with hypertension, for instance, didn’t do nearly as good a job of controlling their blood pressure as those in other groups, resulting in a ten-per-cent increase in the likelihood of death.  As a recent Commonwealth Fund study concluded, cost sharing is “a blunt instrument.” Of course it is: how should the average consumer be expected to know beforehand what care is frivolous and what care is useful? I just went to the dermatologist to get moles checked for skin cancer.  If I had had to pay a hundred per cent, or even fifty per cent, of the cost of the visit, I might not have gone.  Would that have been a wise decision? I have no idea.  But if one of those moles really is cancerous, that simple, inexpensive visit could save the health-care system tens of thousands of dollars (not to mention saving me a great deal of heartbreak).  The focus on moral hazard suggests that the changes we make in our behavior when we have insurance are nearly always wasteful.  Yet, when it comes to health care, many of the things we do only because we have insurance—like getting our moles checked, or getting our teeth cleaned regularly, or getting a mammogram or engaging in other routine preventive care—are anything but wasteful and inefficient.  In fact, they are behaviors that could end up saving the health-care system a good deal of money.

Sered and Fernandopulle tell the story of Steve, a factory worker from northern Idaho, with a “grotesque-looking left hand—what looks like a bone sticks out the side.” When he was younger, he broke his hand.  “The doctor wanted to operate on it,” he recalls.  “And because I didn’t have insurance, well, I was like ‘I ain’t gonna have it operated on.’ The doctor said, ‘Well, I can wrap it for you with an Ace bandage.’ I said, ‘Ahh, let’s do that, then.’ ” Steve uses less health care than he would if he had insurance, but that’s not because he has defeated the scourge of moral hazard.  It’s because instead of getting a broken bone fixed he put a bandage on it.

3.

At the center of the Bush Administration’s plan to address the health-insurance mess are Health Savings Accounts, and Health Savings Accounts are exactly what you would come up with if you were concerned, above all else, with minimizing moral hazard.  The logic behind them was laid out in the 2004 Economic Report of the President.  Americans, the report argues, have too much health insurance: typical plans cover things that they shouldn’t, creating the problem of overconsumption.  Several paragraphs are then devoted to explaining the theory of moral hazard.  The report turns to the subject of the uninsured, concluding that they fall into several groups.  Some are foreigners who may be covered by their countries of origin.  Some are people who could be covered by Medicaid but aren’t or aren’t admitting that they are.  Finally, a large number “remain uninsured as a matter of choice.” The report continues, “Researchers believe that as many as one-quarter of those without health insurance had coverage available through an employer but declined the coverage…. Still others may remain uninsured because they are young and healthy and do not see the need for insurance.” In other words, those with health insurance are overinsured and their behavior is distorted by moral hazard.  Those without health insurance use their own money to make decisions about insurance based on an assessment of their needs.  The insured are wasteful.  The uninsured are prudent.  So what’s the solution? Make the insured a little bit more like the uninsured.

Under the Health Savings Accounts system, consumers are asked to pay for routine health care with their own money—several thousand dollars of which can be put into a tax-free account.  To handle their catastrophic expenses, they then purchase a basic health-insurance package with, say, a thousand-dollar annual deductible.  As President Bush explained recently, “Health Savings Accounts all aim at empowering people to make decisions for themselves, owning their own health-care plan, and at the same time bringing some demand control into the cost of health care.”

The country described in the President’s report is a very different place from the country described in “Uninsured in America.” Sered and Fernandopulle look at the billions we spend on medical care and wonder why Americans have so little insurance.  The President’s report considers the same situation and worries that we have too much.  Sered and Fernandopulle see the lack of insurance as a problem of poverty; a third of the uninsured, after all, have incomes below the federal poverty line.  In the section on the uninsured in the President’s report, the word “poverty” is never used.  In the Administration’s view, people are offered insurance but “decline the coverage” as “a matter of choice.” The uninsured in Sered and Fernandopulle’s book decline coverage, but only because they can’t afford it.  Gina, for instance, works for a beauty salon that offers her a bare-bones health-insurance plan with a thousand-dollar deductible for two hundred dollars a month.  What’s her total income? Nine hundred dollars a month.  She could “choose” to accept health insurance, but only if she chose to stop buying food or paying the rent.

The biggest difference between the two accounts, though, has to do with how each views the function of insurance.  Gina, Steve, and Loretta are ill, and need insurance to cover the costs of getting better.  In their eyes, insurance is meant to help equalize financial risk between the healthy and the sick.  In the insurance business, this model of coverage is known as “social insurance,” and historically it was the way health coverage was conceived.  If you were sixty and had heart disease and diabetes, you didn’t pay substantially more for coverage than a perfectly healthy twenty-five-year-old.  Under social insurance, the twenty-five-year-old agrees to pay thousands of dollars in premiums even though he didn’t go to the doctor at all in the previous year, because he wants to make sure that someone else will subsidize his health care if he ever comes down with heart disease or diabetes.  Canada and Germany and Japan and all the other industrialized nations with universal health care follow the social-insurance model.  Medicare, too, is based on the social-insurance model, and, when Americans with Medicare report themselves to be happier with virtually every aspect of their insurance coverage than people with private insurance (as they do, repeatedly and overwhelmingly), they are referring to the social aspect of their insurance.  They aren’t getting better care.  But they are getting something just as valuable: the security of being insulated against the financial shock of serious illness.

There is another way to organize insurance, however, and that is to make it actuarial.  Car insurance, for instance, is actuarial.  How much you pay is in large part a function of your individual situation and history: someone who drives a sports car and has received twenty speeding tickets in the past two years pays a much higher annual premium than a soccer mom with a minivan.  In recent years, the private insurance industry in the United States has been moving toward the actuarial model, with profound consequences.  The triumph of the actuarial model over the social-insurance model is the reason that companies unlucky enough to employ older, high-cost employees—like United Airlines—have run into such financial difficulty.  It’s the reason that automakers are increasingly moving their operations to Canada.  It’s the reason that small businesses that have one or two employees with serious illnesses suddenly face unmanageably high health-insurance premiums, and it’s the reason that, in many states, people suffering from a potentially high-cost medical condition can’t get anyone to insure them at all.

Health Savings Accounts represent the final, irrevocable step in the actuarial direction.  If you are preoccupied with moral hazard, then you want people to pay for care with their own money, and, when you do that, the sick inevitably end up paying more than the healthy.  And when you make people choose an insurance plan that fits their individual needs, those with significant medical problems will choose expensive health plans that cover lots of things, while those with few health problems will choose cheaper, bare-bones plans.  The more expensive the comprehensive plans become, and the less expensive the bare-bones plans become, the more the very sick will cluster together at one end of the insurance spectrum, and the more the well will cluster together at the low-cost end.  The days when the healthy twenty-five-year-old subsidizes the sixty-year-old with heart disease or diabetes are coming to an end.  “The main effect of putting more of it on the consumer is to reduce the social redistributive element of insurance,” the Stanford economist Victor Fuchs says.  Health Savings Accounts are not a variant of universal health care.  In their governing assumptions, they are the antithesis of universal health care.
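
The clustering described above is what insurance economists call adverse selection, and the feedback loop is easy to make concrete. The toy simulation below uses entirely invented numbers (a population split into low-cost and high-cost people, and two plans that re-price each year to the average cost of whoever enrolled the year before); nothing in it comes from the book or the article, and its only point is to show the direction of the spiral.

    # Toy adverse-selection spiral, under invented numbers: two plans that
    # re-price each year to the average cost of last year's enrollees.
    # Nothing here comes from the book or the article; the sketch only
    # illustrates the clustering dynamic described above.
    HEALTHY_COST, SICK_COST = 500, 8000    # assumed expected annual medical costs
    N_HEALTHY, N_SICK = 900, 100

    # Start from community-rated premiums: everyone pooled together.
    pooled = (N_HEALTHY * HEALTHY_COST + N_SICK * SICK_COST) / (N_HEALTHY + N_SICK)
    plans = {
        "comprehensive": {"covers": 1.0, "premium": pooled},
        "bare-bones":    {"covers": 0.5, "premium": pooled / 2},
    }

    def best_plan(expected_cost):
        # Each person picks the plan minimizing premium plus expected
        # out-of-pocket spending for someone with their expected costs.
        return min(plans, key=lambda name: plans[name]["premium"]
                   + (1 - plans[name]["covers"]) * expected_cost)

    for year in range(1, 5):
        enrolled = {name: [] for name in plans}
        enrolled[best_plan(HEALTHY_COST)] += [HEALTHY_COST] * N_HEALTHY
        enrolled[best_plan(SICK_COST)] += [SICK_COST] * N_SICK
        print(f"year {year}: " + ", ".join(
            f"{name} ${plan['premium']:,.0f} premium, {len(enrolled[name])} members"
            for name, plan in plans.items()))
        for name, plan in plans.items():   # re-price each plan for next year
            if enrolled[name]:
                plan["premium"] = plan["covers"] * sum(enrolled[name]) / len(enrolled[name])
    # The comprehensive plan re-prices toward the full cost of the sick and
    # empties out; the healthy cluster in the bare-bones plan, and the
    # cross-subsidy between the two groups disappears.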

The issue about what to do with the health-care system is sometimes presented as a technical argument about the merits of one kind of coverage over another or as an ideological argument about socialized versus private medicine.  It is, instead, about a few very simple questions.  Do you think that this kind of redistribution of risk is a good idea? Do you think that people whose genes predispose them to depression or cancer, or whose poverty complicates asthma or diabetes, or who get hit by a drunk driver, or who have to keep their mouths closed because their teeth are rotting ought to bear a greater share of the costs of their health care than those of us who are lucky enough to escape such misfortunes? In the rest of the industrialized world, it is assumed that the more equally and widely the burdens of illness are shared, the better off the population as a whole is likely to be.  The reason the United States has forty-five million people without coverage is that its health-care policy is in the hands of people who disagree, and who regard health insurance not as the solution but as the problem.

Project Delta aims to create the perfect cookie.

1.

Steve Gundrum launched Project Delta at a small dinner last fall at Il Fornaio, in Burlingame, just down the road from the San Francisco Airport. It wasn’t the first time he’d been to Il Fornaio, and he made his selection quickly, with just a glance at the menu; he is the sort of person who might have thought about his choice in advance — maybe even that morning, while shaving. He would have posed it to himself as a question — Ravioli alla Lucana?—and turned it over in his mind, assembling and disassembling the dish, ingredient by ingredient, as if it were a model airplane. Did the Pecorino pepato really belong? What if you dropped the basil? What would the ravioli taste like if you froze it, along with the ricotta and the Parmesan, and tried to sell it in the supermarket? And then what would you do about the fennel?

Gundrum is short and round. He has dark hair and a mustache and speaks with the flattened vowels of the upper Midwest. He is voluble and excitable and doggedly unpretentious, to the point that your best chance of seeing him in a suit is probably Halloween. He runs Mattson, one of the country’s foremost food research-and-development firms, which is situated in a low-slung concrete-and-glass building in a nondescript office park in Silicon Valley. Gundrum’s office is a spare, windowless room near the rear, and all day long white-coated technicians come to him with prototypes in little bowls, or on skewers, or in Tupperware containers. His job is to taste and advise, and the most common words out of his mouth are “I have an idea.” Just that afternoon, Gundrum had ruled on the reformulation of a popular spinach dip (which had an unfortunate tendency to smell like lawn clippings) and examined the latest iteration of a low-carb kettle corn for evidence of rhythmic munching (the metronomic hand-to-mouth cycle that lies at the heart of any successful snack experience). Mattson created the shelf-stable Mrs. Fields Chocolate Chip Cookie, the new Boca Burger products for Kraft Foods, Orville Redenbacher’s Butter Toffee Popcorn Clusters, and so many other products that it is impossible to walk down the aisle of a supermarket and not be surrounded by evidence of the company’s handiwork.

That evening, Gundrum had invited two of his senior colleagues at Mattson — Samson Hsia and Carol Borba — to dinner, along with Steven Addis, who runs a prominent branding firm in the Bay Area. They sat around an oblong table off to one side of the dining room, with the sun streaming in the window, and Gundrum informed them that he intended to reinvent the cookie, to make something both nutritious and as “indulgent” as the premium cookies on the supermarket shelf. “We want to delight people,” he said. “We don’t want some ultra-high-nutrition power bar, where you have to rationalize your consumption.” He said it again: “We want to delight people.”

As everyone at the table knew, a healthful, good-tasting cookie is something of a contradiction. A cookie represents the combination of three unhealthful ingredients—sugar, white flour, and shortening. The sugar adds sweetness, bulk, and texture: along with baking powder, it produces the tiny cell structures that make baked goods light and fluffy. The fat helps carry the flavor. If you want a big hit of vanilla, or that chocolate taste that really blooms in the nasal cavities, you need fat. It also keeps the strands of gluten in the flour from getting too tightly bound together, so that the cookie stays chewable. The flour, of course, gives the batter its structure, and, with the sugar, provides the base for the browning reaction that occurs during baking. You could replace the standard white flour with wheat flour, which is higher in fibre, but fibre adds grittiness. Over the years, there have been many attempts to resolve these contradictions — from Snackwells and diet Oreos to the dry, grainy hockey pucks that pass for cookies in health-food stores — but in every case flavor or fluffiness or tenderness has been compromised. Steve Gundrum was undeterred. He told his colleagues that he wanted Project Delta to create the world’s greatest cookie. He wanted to do it in six months. He wanted to enlist the biggest players in the American food industry. And how would he come up with this wonder cookie? The old-fashioned way. He wanted to hold a bakeoff.

2.

The standard protocol for inventing something in the food industry is called the matrix model. There is a department for product development, which comes up with a new idea, and a department for process development, which figures out how to realize it, and then, down the line, departments for packaging, quality assurance, regulatory affairs, chemistry, microbiology, and so on. In a conventional bakeoff, Gundrum would have pitted three identical matrixes against one another and compared the results. But he wasn’t satisfied with the unexamined assumption behind the conventional bakeoff — that there was just one way of inventing something new.

Gundrum had a particular interest, as it happened, in software. He had read widely about it, and once, when he ran into Steve Jobs at an Apple store in the Valley, chatted with him for forty-five minutes on technical matters relating to the Apple operating system. He saw little difference between what he did for a living and what the software engineers in the surrounding hills of Silicon Valley did. “Lines of code are no different from a recipe,” he explains. “It’s the same thing. You add a little salt, and it tastes better. You write a little piece of code, and it makes the software work faster.” But in the software world, Gundrum knew, there were ongoing debates about the best way to come up with new code.

On the one hand, there was the “open source” movement. Its patron saint was Linus Torvalds, the Finnish hacker who decided to build a free version of Unix, the hugely complicated operating system that runs many of the world’s large computers. Torvalds created the basic implementation of his version, which he called Linux, posted it online, and invited people to contribute to its development. Over the years, thousands of programmers had helped, and Linux was now considered as good as proprietary versions of Unix. “Given enough eyeballs, all bugs are shallow” was the Linux mantra: a thousand people working for an hour each can do a better job writing and fixing code than a single person working for a thousand hours, because the chances are that among those thousand people you can find precisely the right expert for every problem that comes up.

On the other hand, there was the “extreme programming” movement, known as XP, which was led by a legendary programmer named Kent Beck. He called for breaking a problem into the smallest possible increments, and proceeding as simply and modestly as possible. He thought that programmers should work in pairs, two to a computer, passing the keyboard back and forth. Between Beck and Torvalds were countless other people, arguing for slightly different variations. But everyone in the software world agreed that trying to get people to be as creative as possible was, as often as not, a social problem: it depended not just on who was on the team but on how the team was organized.

“I remember once I was working with a printing company in Chicago,” Beck says. “The people there were having a terrible problem with their technology. I got there, and I saw that the senior people had these corner offices, and they were working separately and doing things separately that they had trouble integrating later on. So I said, ‘Find a space where you can work together.’ So they found a corner of the machine room. It was a raised floor, ice cold. They just loved it. They would go there five hours a day, making lots of progress. I flew home. They hired me for my technical expertise. And I told them to rearrange the office furniture, and that was the most valuable thing I could offer them.”

It seemed to Gundrum that people in the food world had a great deal to learn from all this. They had become adept at solving what he called “science projects” — problems that required straightforward, linear applications of expensive German machinery and armies of white-coated people with advanced degrees in engineering. Cool Whip was a good example: a product processed so exquisitely — with air bubbles of such fantastic uniformity and stability — that it remains structurally sound for months, at high elevation and at low elevation, frozen and thawed and then refrozen. But coming up with a healthy cookie, which required finessing the inherent contradictions posed by sugar, flour, and shortening, was the kind of problem that the food industry had more trouble with. Gundrum recalled one brainstorming session that a client of his, a major food company, had convened. “This is no joke,” he said. “They played a tape where it sounded like the wind was blowing and the birds were chirping. And they posed us out on a dance floor, and we had to hold our arms out like we were trees and close our eyes, and the ideas were supposed to grow like fruits off the limbs of the trees. Next to me was the head of R. & D., and he looked at me and said: ‘What the hell are we doing here?'”

For Project Delta, Gundrum decreed that there would be three teams, each representing a different methodology of invention. He had read Kent Beck’s writings, and decided that the first would be the XP team. He enlisted two of Mattson’s brightest young associates — Peter Dea and Dan Howell. Dea is a food scientist, who worked as a confectionist before coming to Mattson. He is tall and spare, with short dark hair. “Peter is really good at hitting the high note,” Gundrum said. “If a product needs to have a particular flavor profile, he’s really good at getting that one dimension and getting it right.” Howell is a culinarian, goateed and talkative, a man of enthusiasms who uses high-end Mattson equipment to make an exceptional cup of espresso every afternoon. He started his career as a barista at Starbucks, and then realized that his vocation lay elsewhere. “A customer said to me, ‘What do you want to be doing? Because you clearly don’t want to be here,’” Howell said. “I told him, ‘I want to be sitting in a room working on a better non-fat pudding.’ ”

The second team was headed by Barb Stuckey, an executive vice-president of marketing at Mattson and one of the firm’s stars. She is slender and sleek, with short blond hair. She tends to think out loud, and, because she thinks quickly, she ends up talking quickly, too, in nervous, brilliant bursts. Stuckey, Gundrum decided, would represent “managed” research and development—a traditional hierarchical team, as opposed to a partnership like Dea and Howell’s. She would work with Doug Berg, who runs one of Mattson’s product-development teams. Stuckey would draw the big picture. Berg would serve as sounding board and project director. His team would execute their conceptions.

Then Gundrum was at a technology conference in California and heard the software pioneer Mitch Kapor talking about the open-source revolution. Afterward, Gundrum approached Kapor. “I said to Mitch, ‘What do you think? Can I apply this—some of the same principles—outside of software and bring it to the food industry?'” Gundrum recounted. “He stopped and said, ‘Why the hell not!'” So Gundrum invited an élite group of food-industry bakers and scientists to collaborate online. They would be the third team. He signed up a senior person from Mars, Inc., someone from R. & D. at Kraft, the marketing manager for Nestlé Toll House refrigerated/frozen cookie dough, a senior director of R. & D. at Birds Eye Foods, the head of the innovation program for Kellogg’s Morning Foods, the director of seasoning at McCormick, a cookie maven formerly at Keebler, and six more high-level specialists. Mattson’s innovation manager, Carol Borba, who began her career as a line cook at Bouley, in Manhattan, was given the role of project manager. Two Mattson staffers were assigned to carry out the group’s recommendations. This was the Dream Team. It is quite possible that this was the most talented group of people ever to work together in the history of the food industry.

Soon after the launch of Project Delta, Steve Gundrum and his colleague Samson Hsia were standing around, talking about the current products in the supermarket which they particularly admired. “I like the Uncrustable line from Smuckers,” Hsia said. “It’s a frozen sandwich without any crust. It eats very well. You can put it in a lunchbox frozen, and it will be unfrozen by lunchtime.” Hsia is a trim, silver-haired man who is said to know as much about emulsions as anyone in the business. “There’s something else,” he said, suddenly. “We just saw it last week. It’s made by Jennie-O. It’s turkey in a bag.” This was a turkey that was seasoned, plumped with brine, and sold in a heat-resistant plastic bag: the customer simply has to place it in the oven. Hsia began to stride toward the Mattson kitchens, because he realized they actually had a Jennie-O turkey in the back. Gundrum followed, the two men weaving their way through the maze of corridors that make up the Mattson offices. They came to a large freezer. Gundrum pulled out a bright-colored bag. Inside was a second, clear bag, and inside that bag was a twelve-pound turkey. “This is one of my favorite innovations of the last year,” Gundrum said, as Hsia nodded happily. “There is material science involved. There is food science involved. There is positioning involved. You can take this thing, throw it in your oven, and people will be blown away. It’s that good. If I was Butterball, I’d be terrified.”

Jennie-O had taken something old and made it new. But where had that idea come from? Was it a team? A committee? A lone turkey genius? Those of us whose only interaction with such innovations is at the point of sale have a naïve faith in human creativity; we suppose that a world capable of coming up with turkey in a bag is capable of coming up with the next big thing as well—a healthy cookie, a faster computer chip, an automobile engine that gets a hundred miles to the gallon. But if you’re the one responsible for those bright new ideas there is no such certainty. You come up with one great idea, and the process is so miraculous that all you do is puzzle over how on earth you ever did it, and worry whether you’ll ever be able to do it again.

3.

The Mattson kitchens are a series of large, connecting rooms, running along the back of the building. There is a pilot plant in one corner — containing a mini version of the equipment that, say, Heinz would use to make canned soup, a soft-serve ice-cream machine, an industrial-strength pasta-maker, a colloid mill for making oil-and-water emulsions, a flash pasteurizer, and an eighty-five-thousand-dollar Japanese-made coextruder for, among other things, pastry-and-filling combinations. At any given time, the firm may have as many as fifty or sixty projects under way, so the kitchens are a hive of activity, with pressure cookers filled with baked beans bubbling in one corner, and someone rushing from one room to another carrying a tray of pizza slices with experimental toppings.

Dea and Howell, the XP team, took over part of one of the kitchens, setting up at a long stainless-steel lab bench. The countertop was crowded with tins of flour, a big white plastic container of wheat dextrin, a dozen bottles of liquid sweeteners, two plastic bottles of Kirkland olive oil, and, somewhat puzzlingly, three varieties of single-malt Scotch. The Project Delta brief was simple. All cookies had to have fewer than a hundred and thirty calories per serving. Carbohydrates had to be under 17.5 grams, saturated fat under two grams, fibre more than one gram, protein more than two grams, and so on; in other words, the cookie was to be at least fifteen per cent superior to the supermarket average in the major nutritional categories. To Dea and Howell, that suggested oatmeal, and crispy, as opposed to soft. “I’ve tried lots of cookies that are sold as soft and I never like them, because they’re trying to be something that they’re not,” Dea explained. “A soft cookie is a fresh cookie, and what you are trying to do with soft is be a fresh cookie that’s a month old. And that means you need to fake the freshness, to engineer the cookie.”
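(For readers who want the brief in concrete terms, here is a minimal sketch, in Python, of the thresholds described above. The class, field names, and example numbers are hypothetical illustrations only; nothing like this code figures in Mattson's actual process.)

from dataclasses import dataclass

@dataclass
class CookieServing:
    calories: float    # calories per serving
    carbs_g: float     # total carbohydrates, in grams
    sat_fat_g: float   # saturated fat, in grams
    fibre_g: float     # dietary fibre, in grams
    protein_g: float   # protein, in grams

def meets_delta_brief(c: CookieServing) -> bool:
    # The Project Delta thresholds as described above: under 130 calories,
    # under 17.5 grams of carbohydrates, under 2 grams of saturated fat,
    # more than 1 gram of fibre, and more than 2 grams of protein.
    return (
        c.calories < 130
        and c.carbs_g < 17.5
        and c.sat_fat_g < 2
        and c.fibre_g > 1
        and c.protein_g > 2
    )

# A hypothetical prototype that just clears the brief.
print(meets_delta_brief(CookieServing(125, 17.0, 1.5, 1.5, 2.5)))  # True

The point of the exercise is that a prototype had to clear every one of these conditions at once, which is what made the brief hard.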

The two decided to focus on a kind of oatmeal-chocolate-chip hybrid, with liberal applications of roasted soy nuts, toffee, and caramel. A straight oatmeal-raisin cookie or a straight low-cal chocolate-chip cookie was out of the question. This was a reflection of what might be called the Hidden Valley Ranch principle, in honor of a story that Samson Hsia often told about his years working on salad dressing when he was at Clorox. The couple who owned Hidden Valley Ranch, near Santa Barbara, had come up with a seasoning blend of salt, pepper, onion, garlic, and parsley flakes that was mixed with equal parts mayonnaise and buttermilk to make what was, by all accounts, an extraordinary dressing. Clorox tried to bottle it, but found that the buttermilk could not coexist, over any period of time, with the mayonnaise. The way to fix the problem, and preserve the texture, was to make the combination more acidic. But when you increased the acidity you ruined the flavor. Clorox’s food engineers worked on Hidden Valley Ranch dressing for close to a decade. They tried different kinds of processing and stability control and endless cycles of consumer testing before they gave up and simply came out with a high-acid Hidden Valley Ranch dressing — which promptly became a runaway best-seller. Why? Because consumers had never tasted real Hidden Valley Ranch dressing, and as a result had no way of knowing that what they were eating was inferior to the original. For those in the food business, the lesson was unforgettable: if something was new, it didn’t have to be perfect. And, since healthful, indulgent cookies couldn’t be perfect, they had to be new: hence oatmeal, chocolate chips, toffee, and caramel.

Cookie development, at the Mattson level, is a matter of endless iteration, and Dea and Howell began by baking version after version in quick succession — establishing the cookie size, the optimal baking time, the desired variety of chocolate chips, the cut of oats (bulk oats? rolled oats? groats?), the varieties of flour, and the toffee dosage, while testing a variety of high-tech supplements, notably inulin, a fibre source derived from chicory root. As they worked, they made notes on tablet P.C.s, which gave them a running electronic record of each version. “With food, there’s a large circle of pretty good, and we’re solidly in pretty good,” Dea announced, after several intensive days of baking. A tray of cookies was cooling in front of him on the counter. “Typically, that’s when you take it to the customers.”

In this case, the customer was Gundrum, and the next week Howell marched over to Gundrum’s office with two Ziploc bags of cookies in his hand. There was a package of Chips Ahoy! on the table, and Howell took one out. “We’ve been eating these versus Chips Ahoy!,” he said.

The two cookies looked remarkably alike. Gundrum tried one of each. “The Chips Ahoy!, it’s tasty,” he said. “When you eat it, the starch hydrates in your mouth. The XP doesn’t have that same granulated-sugar kind of mouth feel.”

“It’s got more fat than us, though, and subsequently it’s shorter in texture,” Howell said. “And so, when you break it, it breaks more nicely. Ours is a little harder to break.”

By “shorter in texture,” he meant that the cookie “popped” when you bit into it. Saturated fats are solid fats, and give a cookie crispness. Parmesan cheese is short-textured. Brie is long. A shortbread like a Lorna Doone is a classic short-textured cookie. But the XP cookie had, for health reasons, substituted unsaturated fats for saturated fats, and unsaturated fats are liquid. They make the dough stickier, and inevitably compromise a little of that satisfying pop.

“The whole-wheat flour makes us a little grittier, too,” Howell went on. “It has larger particulates.” He broke open one of the Chips Ahoy!. “See how fine the grain is? Now look at one of our cookies. The particulates are larger. It is part of what we lose by going with a healthy profile. If it was just sugar and flour, for instance, the carbohydrate chains are going to be shorter, and so they will dissolve more quickly in your mouth. Whereas with more fibre you get longer carbohydrate chains and they don’t dissolve as quickly, and you get that slightly tooth-packing feel.”

“It looks very wholesome, like something you would want to feed your kids,” Gundrum said, finally. They were still only in the realm of pretty good.

4.

Team Stuckey, meanwhile, was having problems of its own. Barb Stuckey’s first thought had been a tea cookie, or, more specifically, a chai cookie — something with cardamom and cinnamon and vanilla and cloves and a soft dairy note. Doug Berg was dispatched to run the experiment. He and his team did three or four rounds of prototypes. The result was a cookie that tasted, astonishingly, like a cup of chai, which was, of course, its problem. Who wanted a cookie that tasted like a cup of chai? Stuckey called a meeting in the Mattson trophy room, where samples of every Mattson product that has made it to market are displayed. After everyone was done tasting the cookies, a bag of them sat in the middle of the table for forty-five minutes—and no one reached to take a second bite. It was a bad sign.

“You know, before the election Good Housekeeping had this cookie bakeoff,” Stuckey said, as the meeting ended. “Laura Bush’s entry was full of chocolate chips and had familiar ingredients. And Teresa Heinz went with pumpkin-spice cookies. I remember thinking, That’s just like the Democrats! So not mainstream! I wanted her to win. But she’s chosen this cookie that’s funky and weird and out of the box. And I kind of feel the same way about the tea cookie. It’s too far out, and will lose to something that’s more comfortable for consumers.”

Stuckey’s next thought involved strawberries and a shortbread base. But shortbread was virtually impossible under the nutritional guidelines: there was no way to get that smooth butter-flour-sugar combination. So Team Stuckey switched to something closer to a strawberry-cobbler cookie, which had the Hidden Valley Ranch advantage that no one knew what a strawberry-cobbler cookie was supposed to taste like. Getting the carbohydrates down to the required 17.5 grams, though, was a struggle, because of how much flour and fruit cobbler requires. The obvious choice to replace the flour was almonds. But nuts have high levels of both saturated and unsaturated fat. “It became a balancing act,” Anne Cristofano, who was doing the bench work for Team Stuckey, said. She baked batch after batch, playing the carbohydrates (first the flour, and then granulated sugar, and finally various kinds of what are called sugar alcohols, low-calorie sweeteners derived from hydrogenating starch) against the almonds. Cristofano took a version to Stuckey. It didn’t go well.

“We’re not getting enough strawberry impact from the fruit alone,” Stuckey said. “We have to find some way to boost the strawberry.” She nibbled some more. “And, because of the low fat and all that stuff, I don’t feel like we’re getting that pop.”

The Dream Team, by any measure, was the overwhelming Project Delta favorite. This was, after all, the Dream Team, and if any idea is ingrained in our thinking it is that the best way to solve a difficult problem is to bring the maximum amount of expertise to bear on it. Sure enough, in the early going the Dream Team was on fire. The members of the Dream Team did not doggedly fix on a single idea, like Dea and Howell, or move in fits and starts from chai sugar cookies to strawberry shortbread to strawberry cobbler, like Team Stuckey. It came up with thirty-four ideas, representing an astonishing range of cookie philosophies: a chocolate cookie with gourmet cocoa, high-end chocolate chips, pecans, raisins, Irish steel-cut oats, and the new Ultragrain White Whole Wheat flour; a bite-size oatmeal cookie with a Ceylon cinnamon filling, or chili and tamarind, or pieces of dried peaches with a cinnamon-and-ginger dusting; the classic seven-layer bar with oatmeal instead of graham crackers, coated in chocolate with a choice of coffee flavors; a “wellness” cookie, with an oatmeal base, soy and whey proteins, inulin and oat beta glucan and a combination of erythritol and sugar and sterol esters—and so on.

In the course of spewing out all those new ideas, however, the Dream Team took a difficult turn. A man named J. Hugh McEvoy (a.k.a. Chef J.), out of Chicago, tried to take control of the discussion. He wanted something exotic — not a health-food version of something already out there. But in the e-mail discussions with others on the team his sense of what constituted exotic began to get really exotic — “Chinese star anise plus fennel plus Pastis plus dark chocolate.” Others, emboldened by his example, began talking about a possible role for zucchini or wasabi peas. Meanwhile, a more conservative faction, mindful of the Project Delta mandate to appeal to the whole family, started talking up peanut butter. Within a few days, the tensions were obvious:

From: Chef J.

Subject: <no subject>

Please keep in mind that less than 10 years ago, espresso, latte and dulce de leche were EXOTIC flavors / products that were considered unsuitable for the mainstream. And let’s not even mention CHIPOTLE.

From: Andy Smith

Subject: Bought any Ben and Jerry’s recently?

While we may not want to invent another Oreo or Chips Ahoy!, last I looked, World’s Best Vanilla was B&J’s # 2 selling flavor and Haagen Dazs’ Vanilla (their top seller) outsold Dulce 3 to 1.

From: Chef J.

Subject: <no subject>

Yes. Gourmet Vanilla does outsell any new flavor. But we must remember that DIET vanilla does not and never has. It is the high end, gourmet segment of ice cream that is growing. Diet Oreos were vastly outsold by new entries like Snackwells. Diet Snickers were vastly outsold by new entries like balance bars. New Coke failed miserably, while Red Bull is still growing.

What flavor IS Red Bull, anyway?

Eventually, Carol Borba, the Dream Team project leader, asked Gundrum whether she should try to calm things down. He told her no; the group had to find its “own kind of natural rhythm.” He wanted to know what fifteen high-powered bakers thrown together on a project felt like, and the answer was that they felt like chaos. They took twice as long as the XP team. They created ten times the headache.

Worse, no one in the open-source group seemed to be having any fun. “Quite honestly, I was expecting a bit more involvement in this,” Howard Plein, of Edlong Dairy Flavors, confessed afterward. “They said, expect to spend half an hour a day. But without doing actual bench work — all we were asked to do was to come up with ideas.” He wanted to bake: he didn’t enjoy being one of fifteen cogs in a machine. To Dan Fletcher, of Kellogg’s, “the whole thing spun in place for a long time. I got frustrated with that. The number of people involved seemed unwieldy. You want some diversity of youth and experience, but you want to keep it close-knit as well. You get some depth in the process versus breadth. We were a mile wide and an inch deep.” Chef J., meanwhile, felt thwarted by Carol Borba; he felt that she was pushing her favorite, a caramel turtle, to the detriment of better ideas. “We had the best people in the country involved,” he says. “We were irrelevant. That’s the weakness of it. Fifteen is too many. How much true input can any one person have when you are lost in the crowd?” In the end, the Dream Team whittled down its thirty-four possibilities to one: a chewy oatmeal cookie, with a pecan “thumbprint” in the middle, and ribbons of caramel-and-chocolate glaze. When Gundrum tasted it, he had nothing but praise for its “cookie hedonics.” But a number of the team members were plainly unhappy with the choice. “It is not bad,” Chef J. said. “But not bad doesn’t win in the food business. There was nothing there that you couldn’t walk into a supermarket and see on the shelf. Any Pepperidge Farm product is better than that. Any one.”

It may have been a fine cookie. But, since no single person played a central role in its creation, it didn’t seem to anyone to be a fine cookie.

The strength of the Dream Team — the fact that it had so many smart people on it — was also its weakness: it had too many smart people on it. Size provides expertise. But it also creates friction, and one of the truths Project Delta exposed is that we tend to overestimate the importance of expertise and underestimate the problem of friction. Gary Klein, a decision-making consultant, once examined this issue in depth at a nuclear power plant in North Carolina. In the nineteen-nineties, the power supply used to keep the reactor cool malfunctioned. The plant had to shut down in a hurry, and the shutdown went badly. So the managers brought in Klein’s consulting group to observe as they ran through one of the crisis rehearsals mandated by federal regulators. “The drill lasted four hours,” David Klinger, the lead consultant on the project, recalled. “It was in this big operations room, and there were between eighty and eighty-five people involved. We roamed around, and we set up a video camera, because we wanted to make sense of what was happening.”

When the consultants asked people what was going on, though, they couldn’t get any satisfactory answers. “Each person only knew a little piece of the puzzle, like the radiation person knew where the radiation was, or the maintenance person would say, ‘I’m trying to get this valve closed,’ ” Klinger said. “No one had the big picture. We started to ask questions. We said, ‘What is your mission?’ And if the person didn’t have one, we said, ‘Get out.’ There were just too many people. We ended up getting that team down from eighty-five to thirty-five people, and the first thing that happened was that the noise in the room was dramatically reduced.” The room was quiet and calm enough so that people could easily find those they needed to talk to. “At the very end, they had a big drill that the N.R.C. was going to regulate. The regulators said it was one of their hardest drills. And you know what? They aced it.” Was the plant’s management team smarter with thirty-five people on it than it was with eighty-five? Of course not, but the expertise of those additional fifty people was more than cancelled out by the extra confusion and noise they created.

The open-source movement has had the same problem. The number of people involved can result in enormous friction. The software theorist Joel Spolsky points out that open-source software tends to have user interfaces that are difficult for ordinary people to use: “With Microsoft Windows, you right-click on a folder, and you’re given the option to share that folder over the Web. To do the same thing with Apache, the open-source Web server, you’ve got to track down a file that has a different name and is stored in a different place on every system. Then you have to edit it, and it has its own syntax and its own little programming language, and there are lots of different comments, and you edit it the first time and it doesn’t work and then you edit it the second time and it doesn’t work.”

Because there are so many individual voices involved in an open-source project, no one can agree on the right way to do things. And, because no one can agree, every possible option is built into the software, thereby frustrating the central goal of good design, which is, after all, to understand what to leave out. Spolsky notes that almost all the successful open-source products have been attempts to clone some preexisting software program, like Microsoft’s Internet Explorer, or Unix. “One of the reasons open source works well for Linux is that there isn’t any real design work to be undertaken,” he says. “They were doing what we would call chasing tail-lights.” Open source was great for a science project, in which the goals were clearly defined and the technical hurdles easily identifiable. Had Project Delta been a Cool Whip bakeoff, an exercise in chasing tail-lights, the Dream Team would easily have won. But if you want to design a truly innovative software program — or a truly innovative cookie — the costs of bigness can become overwhelming.

In the frantic final weeks before the bakeoff, while the Dream Team was trying to fix a problem with crumbling, and hit on the idea of glazing the pecan on the face of the cookie, Dea and Howell continued to make steady, incremental improvements.

“These cookies were baked five days ago,” Howell told Gundrum, as he handed him a Ziploc bag. Dea was off somewhere in the Midwest, meeting with clients, and Howell looked apprehensive, stroking his goatee nervously as he stood by Gundrum’s desk. “We used wheat dextrin, which I think gives us some crispiness advantages and some shelf-stability advantages. We have a little more vanilla in this round, which gives you that brown, rounding background note.”

Gundrum nodded. “The vanilla is almost like a surrogate for sugar,” he said. “It potentiates the sweetness.”

“Last time, the leavening system was baking soda and baking powder,” Howell went on. “I switched that to baking soda and monocalcium phosphate. That helps them rise a little bit better. And we baked them at a slightly higher temperature for slightly longer, so that we drove off a little bit more moisture.”

“How close are you?” Gundrum asked.

“Very close,” Howell replied.

Gundrum was lost in thought for a moment. “It looks very wholesome. It looks like something you’d want to feed your kids. It has very good aroma. I really like the texture. My guess is that it eats very well with milk.” He turned back to Howell, suddenly solicitous. “Do you want some milk?”

Meanwhile, Barb Stuckey had a revelation. She was working on a tortilla-chip project, and had bags of tortilla chips all over her desk. “You have no idea how much engineering goes into those things,” she said, holding up a tortilla chip. “It’s greater than what it takes to build a bridge. It’s crazy.” And one of the clever things about cheese tortilla chips—particularly the low-fat versions—is how they go about distracting the palate. “You know how you put a chip in your mouth and the minute it hits your tongue it explodes with flavor?” Stuckey said. “It’s because it’s got this topical seasoning. It’s got dried cheese powders and sugar and probably M.S.G. and all that other stuff on the outside of the chip.”

Her idea was to apply that technique to strawberry cobbler—to take large crystals of sugar, plate them with citric acid, and dust the cookies with them. “The minute they reach your tongue, you get this sweet-and-sour hit, and then you crunch into the cookie and get the rest—the strawberry and the oats,” she said. The crystals threw off your taste buds. You weren’t focussed on the fact that there was half as much fat in the cookie as there should be. Plus, the citric acid brought a tangy flavor to the dried strawberries: suddenly they felt fresh.

Batches of the new strawberry-cobbler prototype were ordered up, with different formulations of the citric acid and the crystals. A meeting was called in the trophy room. Anne Cristofano brought two plastic bags filled with cookies. Stuckey was there, as was a senior Mattson food technologist named Karen Smithson, an outsider brought to the meeting in an advisory role. Smithson, a former pastry chef, was a little older than Stuckey and Cristofano, with an air of self-possession. She broke the seal on the first bag, and took a bite with her eyes half closed. The other two watched intently.

“Umm,” Smithson said, after the briefest of pauses. “That is pretty darn good. And this is one of the healthy cookies? I would not say, ‘This is healthy.’ I can’t taste the trade-off.” She looked up at Stuckey. “How old are they?”

“Today,” Stuckey replied.

“O.K. . . .” This was a complicating fact. Any cookie tastes good on the day it’s baked. The question was how it tasted after baking and packaging and shipping and sitting in a warehouse and on a supermarket shelf and finally in someone’s cupboard.

“What we’re trying to do here is a shelf-stable cookie that will last six months,” Stuckey said. “I think we’re better off if we can make it crispy.”

Smithson thought for a moment. “You can have either a crispy, low-moisture cookie or a soft and chewy cookie,” she said. “But you can’t get the outside crisp and the inside chewy. We know that. The moisture will migrate. It will equilibrate over time, so you end up with a cookie that’s consistent all the way through. Remember we did all that work on Mrs. Fields? That’s what we learned.”

They talked for a bit, in technical terms, about various kinds of sugars and starches. Smithson didn’t think that the stability issue was going to be a problem.

“Isn’t it compelling, visually?” Stuckey blurted out, after a lull in the conversation. And it was: the dried-strawberry chunks broke through the surface of the cookie, and the tiny citric-sugar crystals glinted in the light. “I just think you get so much more bang for the buck when you put the seasoning on the outside.”

“Yet it’s not weird,” Smithson said, nodding. She picked up another cookie. “The mouth feel is a combination of chewy and crunchy. With the flavors, you have the caramelized sugar, the brown-sugar notes. You have a little bit of a chew from the oats. You have a flavor from the strawberry, and it helps to have a combination of the sugar alcohol and the brown sugar. You know, sugars have different deliveries, and sometimes you get some of the sweetness right off and some of it continues on. You notice that a lot with the artificial sweeteners. You get the sweetness that doesn’t go away, long after the other flavors are gone. With this one, the sweetness is nice. The flavors come together at the same time and fade at the same time, and then you have the little bright after-hits from the fruit and the citric crunchies, which are” — she paused, looking for the right word — “brilliant.”

5.

The bakeoff took place in April. Mattson selected a representative sample of nearly three hundred households from around the country. Each was mailed a bubble-wrapped package containing all three entrants. The vote was close but unequivocal. Fourteen per cent of the households voted for the XP oatmeal-chocolate-chip cookie. Forty-one per cent voted for the Dream Team’s oatmeal-caramel cookie. Forty-four per cent voted for Team Stuckey’s strawberry cobbler.

The Project Delta postmortem was held at Chaya Brasserie, a French-Asian fusion restaurant on the Embarcadero, in San Francisco. It was just Gundrum and Steven Addis, from the first Project Delta dinner, and their wives. Dan Howell was immersed in a confidential project for a big food conglomerate back East. Peter Dea was working with Cargill on a wellness product. Carol Borba was in Chicago, at a meeting of the Food Marketing Institute. Barb Stuckey was helping Ringling Brothers rethink the food at its concessions. “We’ve learned a lot about the circus,” Gundrum said. Meanwhile, Addis’s firm had created a logo and a brand name for Project Delta. Mattson has offered to license the winning cookie at no cost, as long as a percentage of its sales goes to a charitable foundation that Mattson has set up to feed the hungry. Someday soon, you should be able to go into a supermarket and buy Team Stuckey’s strawberry-cobbler cookie.

“Which one would you have voted for?” Addis asked Gundrum.

“I have to say, they were all good in their own way,” Gundrum replied. It was like asking a mother which of her children she liked best. “I thought Barb’s cookie was a little too sweet, and I wish the open-source cookie was a little tighter, less crumbly. With XP, I think we would have done better, but we had a wardrobe malfunction. They used too much batter, overbaked it, and the cookie came out too hard and thick.”

In the end, it was not so much which cookie won that interested him. It was who won—and why. Three people from his own shop had beaten a Dream Team, and the decisive edge had come not from the collective wisdom of a large group but from one person’s ability to make a lateral connection between two previously unconnected objects — a tortilla chip and a cookie. Was that just Barb being Barb? In large part, yes. But it was hard to believe that one of the Dream Team members would not have made the same kind of leap had they been in an environment quiet enough to allow them to think.

“Do you know what else we learned?” Gundrum said. He was talking about a questionnaire given to the voters. “We were looking at the open-ended questions — where all the families who voted could tell us what they were thinking. They all said the same thing — all of them.” His eyes grew wide. “They wanted better granola bars and breakfast bars. I would not have expected that.” He fell silent for a moment, turning a granola bar over and around in his mind, assembling and disassembling it piece by piece, as if it were a model airplane. “I thought that they were pretty good,” he said. “I mean, there are so many of them out there. But apparently people want them better.”