The Moral Hazard Myth

In “Collapse,” Jared Diamond shows how societies destroy themselves.

1.

A thousand years ago, a group of Vikings led by Erik the Red set sail from Norway for the vast Arctic landmass west of Scandinavia which came to be known as Greenland. It was largely uninhabitable—a forbidding expanse of snow and ice. But along the southwestern coast there were two deep fjords protected from the harsh winds and saltwater spray of the North Atlantic Ocean, and as the Norse sailed upriver they saw grassy slopes flowering with buttercups, dandelions, and bluebells, and thick forests of willow and birch and alder. Two colonies were formed, three hundred miles apart, known as the Eastern and Western Settlements. The Norse raised sheep, goats, and cattle. They turned the grassy slopes into pastureland. They hunted seal and caribou. They built a string of parish churches and a magnificent cathedral, the remains of which are still standing. They traded actively with mainland Europe, and tithed regularly to the Roman Catholic Church. The Norse colonies in Greenland were law-abiding, economically viable, fully integrated communities, numbering at their peak five thousand people. They lasted for four hundred and fifty years—and then they vanished.

The story of the Eastern and Western Settlements of Greenland is told in Jared Diamond’s “Collapse: How Societies Choose to Fail or Succeed” (Viking; $29.95). Diamond teaches geography at U.C.L.A. and is well known for his best-seller “Guns, Germs, and Steel,” which won a Pulitzer Prize. In “Guns, Germs, and Steel,” Diamond looked at environmental and structural factors to explain why Western societies came to dominate the world. In “Collapse,” he continues that approach, only this time he looks at history’s losers—like the Easter Islanders, the Anasazi of the American Southwest, the Mayans, and the modern-day Rwandans. We live in an era preoccupied with the way that ideology and culture and politics and economics help shape the course of history. But Diamond isn’t particularly interested in any of those things—or, at least, he’s interested in them only insofar as they bear on what to him is the far more important question, which is a society’s relationship to its climate and geography and resources and neighbors. “Collapse” is a book about the most prosaic elements of the earth’s ecosystem—soil, trees, and water—because societies fail, in Diamond’s view, when they mismanage those environmental factors.

There was nothing wrong with the social organization of the Greenland settlements. The Norse built a functioning reproduction of the predominant northern-European civic model of the time—devout, structured, and reasonably orderly. In 1408, right before the end, records from the Eastern Settlement dutifully report that Thorstein Olafsson married Sigrid Bjornsdotter in Hvalsey Church on September 14th of that year, with Brand Halldorstson, Thord Jorundarson, Thorbjorn Bardarson, and Jon Jonsson as witnesses, following the proclamation of the wedding banns on three consecutive Sundays.

The problem with the settlements, Diamond argues, was that the Norse thought that Greenland really was green; they treated it as if it were the verdant farmland of southern Norway. They cleared the land to create meadows for their cows, and to grow hay to feed their livestock through the long winter. They chopped down the forests for fuel, and for the construction of wooden objects. To make houses warm enough for the winter, they built their homes out of six-foot-thick slabs of turf, which meant that a typical home consumed about ten acres of grassland.

But Greenland’s ecosystem was too fragile to withstand that kind of pressure. The short, cool growing season meant that plants developed slowly, which in turn meant that topsoil layers were shallow and lacking in soil constituents, like organic humus and clay, that hold moisture and keep soil resilient in the face of strong winds. “The sequence of soil erosion in Greenland begins with cutting or burning the cover of trees and shrubs, which are more effective at holding soil than is grass,” he writes. “With the trees and shrubs gone, livestock, especially sheep and goats, graze down the grass, which regenerates only slowly in Greenland’s climate. Once the grass cover is broken and the soil is exposed, soil is carried away especially by the strong winds, and also by pounding from occasionally heavy rains, to the point where the topsoil can be removed for a distance of miles from an entire valley.” Without adequate pastureland, the summer hay yields shrank; without adequate supplies of hay, keeping livestock through the long winter got harder. And, without adequate supplies of wood, getting fuel for the winter became increasingly difficult.

The Norse needed to reduce their reliance on livestock—particularly cows, which consumed an enormous amount of agricultural resources. But cows were a sign of high status; to northern Europeans, beef was a prized food. They needed to copy the Inuit practice of burning seal blubber for heat and light in the winter, and to learn from the Inuit the difficult art of hunting ringed seals, which were the most reliably plentiful source of food available in the winter. But the Norse had contempt for the Inuit—they called them skraelings, “wretches”—and preferred to practice their own brand of European agriculture. In the summer, when the Norse should have been sending ships on lumber-gathering missions to Labrador, in order to relieve the pressure on their own forestlands, they instead sent boats and men to the coast to hunt for walrus. Walrus tusks, after all, had great trade value. In return for those tusks, the Norse were able to acquire, among other things, church bells, stained-glass windows, bronze candlesticks, Communion wine, linen, silk, silver, churchmen’s robes, and jewelry to adorn their massive cathedral at Gardar, with its three-ton sandstone building blocks and eighty-foot bell tower. In the end, the Norse starved to death.

2.

Diamond’s argument stands in sharp contrast to the conventional explanations for a society’s collapse. Usually, we look for some kind of cataclysmic event. The aboriginal civilization of the Americas was decimated by the sudden arrival of smallpox. European Jewry was destroyed by Nazism. Similarly, the disappearance of the Norse settlements is usually blamed on the Little Ice Age, which descended on Greenland in the early fourteen-hundreds, ending several centuries of relative warmth. (One archeologist refers to this as the “It got too cold, and they died” argument.) What all these explanations have in common is the idea that civilizations are destroyed by forces outside their control, by acts of God.

But look, Diamond says, at Easter Island. Once, it was home to a thriving culture that produced the enormous stone statues that continue to inspire awe. It was home to dozens of species of trees, which created and protected an ecosystem fertile enough to support as many as thirty thousand people. Today, it’s a barren and largely empty outcropping of volcanic rock. What happened? Did a rare plant virus wipe out the island’s forest cover? Not at all. The Easter Islanders chopped their trees down, one by one, until they were all gone. “I have often asked myself, ‘What did the Easter Islander who cut down the last palm tree say while he was doing it?'” Diamond writes, and that, of course, is what is so troubling about the conclusions of “Collapse.” Those trees were felled by rational actors—who must have suspected that the destruction of this resource would result in the destruction of their civilization. The lesson of “Collapse” is that societies, as often as not, aren’t murdered. They commit suicide: they slit their wrists and then, in the course of many decades, stand by passively and watch themselves bleed to death.

This doesn’t mean that acts of God don’t play a role. It did get colder in Greenland in the early fourteen-hundreds. But it didn’t get so cold that the island became uninhabitable. The Inuit survived long after the Norse died out, and the Norse had all kinds of advantages, including a more diverse food supply, iron tools, and ready access to Europe. The problem was that the Norse simply couldn’t adapt to the country’s changing environmental conditions. Diamond writes, for instance, of the fact that nobody can find fish remains in Norse archeological sites. One scientist sifted through tons of debris from the Vatnahverfi farm and found only three fish bones; another researcher analyzed thirty-five thousand bones from the garbage of another Norse farm and found two fish bones. How can this be? Greenland is a fisherman’s dream: Diamond describes running into a Danish tourist in Greenland who had just caught two Arctic char in a shallow pool with her bare hands. “Every archaeologist who comes to excavate in Greenland . . . starts out with his or her own idea about where all those missing fish bones might be hiding,” he writes. “Could the Norse have strictly confined their munching on fish to within a few feet of the shoreline, at sites now underwater because of land subsidence? Could they have faithfully saved all their fish bones for fertilizer, fuel, or feeding to cows?” It seems unlikely. There are no fish bones in Norse archeological remains, Diamond concludes, for the simple reason that the Norse didn’t eat fish. For one reason or another, they had a cultural taboo against it.

Given the difficulty that the Norse had in putting food on the table, this was insane. Eating fish would have substantially reduced the ecological demands of the Norse settlements. The Norse would have needed fewer livestock and less pastureland. Fishing is not nearly as labor-intensive as raising cattle or hunting caribou, so eating fish would have freed time and energy for other activities. It would have diversified their diet.

Why did the Norse choose not to eat fish? Because they weren’t thinking about their biological survival. They were thinking about their cultural survival. Food taboos are one of the idiosyncrasies that define a community. Not eating fish served the same function as building lavish churches, and doggedly replicating the untenable agricultural practices of their land of origin. It was part of what it meant to be Norse, and if you are going to establish a community in a harsh and forbidding environment, all those little idiosyncrasies which define and cement a culture are of paramount importance. “The Norse were undone by the same social glue that had enabled them to master Greenland’s difficulties,” Diamond writes. “The values to which people cling most stubbornly under inappropriate conditions are those values that were previously the source of their greatest triumphs over adversity.” He goes on:

To us in our secular modern society, the predicament in which the Greenlanders found themselves is difficult to fathom. To them, however, concerned with their social survival as much as their biological survival, it was out of the question to invest less in churches, to imitate or intermarry with the Inuit, and thereby to face an eternity in Hell just in order to survive another winter on Earth.

Diamond’s distinction between social and biological survival is a critical one, because too often we blur the two, or assume that biological survival is contingent on the strength of our civilizational values. That was the lesson taken from the two world wars and the nuclear age that followed: we would survive as a species only if we learned to get along and resolve our disputes peacefully. The fact is, though, that we can be law-abiding and peace-loving and tolerant and inventive and committed to freedom and true to our own values and still behave in ways that are biologically suicidal. The two kinds of survival are separate.

Diamond points out that the Easter Islanders did not practice, so far as we know, a uniquely pathological version of South Pacific culture. Other societies, on other islands in the Pacific, chopped down trees and farmed and raised livestock just as the Easter Islanders did. What doomed the Easter Islanders was the interaction between what they did and where they were. Diamond and a colleague, Barry Rolett, identified nine physical factors that contributed to the likelihood of deforestation—including latitude, average rainfall, aerial-ash fallout, proximity to Central Asia’s dust plume, size, and so on—and Easter Island ranked at the high-risk end of nearly every variable. “The reason for Easter’s unusually severe degree of deforestation isn’t that those seemingly nice people really were unusually bad or improvident,” he concludes. “Instead, they had the misfortune to be living in one of the most fragile environments, at the highest risk for deforestation, of any Pacific people.” The problem wasn’t the Easter Islanders. It was Easter Island.

In the second half of “Collapse,” Diamond turns his attention to modern examples, and one of his case studies is the recent genocide in Rwanda. What happened in Rwanda is commonly described as an ethnic struggle between the majority Hutu and the historically dominant, wealthier Tutsi, and it is understood in those terms because that is how we have come to explain much of modern conflict: Serb and Croat, Jew and Arab, Muslim and Christian. The world is a cauldron of cultural antagonism. It’s an explanation that clearly exasperates Diamond. The Hutu didn’t just kill the Tutsi, he points out. The Hutu also killed other Hutu. Why? Look at the land: steep hills farmed right up to the crests, without any protective terracing; rivers thick with mud from erosion; extreme deforestation leading to irregular rainfall and famine; staggeringly high population densities; the exhaustion of the topsoil; falling per-capita food production. This was a society on the brink of ecological disaster, and if there is anything that is clear from the study of such societies it is that they inevitably descend into genocidal chaos. In “Collapse,” Diamond quite convincingly defends himself against the charge of environmental determinism. His discussions are always nuanced, and he gives political and ideological factors their due. The real issue is how, in coming to terms with the uncertainties and hostilities of the world, the rest of us have turned ourselves into cultural determinists.

3.

For the past thirty years, Oregon has had one of the strictest sets of land-use regulations in the nation, requiring new development to be clustered in and around existing urban development. Those laws have meant that Oregon has done perhaps the best job in the nation of limiting suburban sprawl and protecting coastal lands and estuaries. But this November Oregon’s voters passed a ballot referendum, known as Measure 37, that rolled back many of those protections. Specifically, Measure 37 said that anyone who could show that the value of his land was affected by regulations implemented since its purchase was entitled to compensation from the state. If the state declined to pay, the property owner would be exempted from the regulations.

To call Measure 37—and similar referendums that have been passed recently in other states—intellectually incoherent is to put it mildly. It might be that the reason your hundred-acre farm on a pristine hillside is worth millions to a developer is that it’s on a pristine hillside: if everyone on that hillside could subdivide, and sell out to Target and Wal-Mart, then nobody’s plot would be worth millions anymore. Will the voters of Oregon then pass Measure 38, allowing them to sue the state for compensation over damage to property values caused by Measure 37?

It is hard to read “Collapse,” though, and not have an additional reaction to Measure 37. Supporters of the law spoke entirely in the language of political ideology. To them, the measure was a defense of property rights, preventing the state from unconstitutional “takings.” If you replaced the term “property rights” with “First Amendment rights,” this would have been indistinguishable from an argument over, say, whether charitable groups ought to be able to canvass in malls, or whether cities can control the advertising they sell on the sides of public buses. As a society, we do a very good job with these kinds of debates: we give everyone a hearing, and pass laws, and make compromises, and square our conclusions with our constitutional heritage—and in the Oregon debate the quality of the theoretical argument was impressively high.

The thing that got lost in the debate, however, was the land. In a rapidly growing state like Oregon, what, precisely, are the state’s ecological strengths and vulnerabilities? What impact will changed land-use priorities have on water and soil and cropland and forest? One can imagine Diamond writing about the Measure 37 debate, and he wouldn’t be very impressed by how seriously Oregonians wrestled with the problem of squaring their land-use rules with their values, because to him a society’s environmental birthright is not best discussed in those terms. Rivers and streams and forests and soil are a biological resource. They are a tangible, finite thing, and societies collapse when they get so consumed with addressing the fine points of their history and culture and deeply held beliefs—with making sure that Thorstein Olafsson and Sigrid Bjornsdotter are married before the right number of witnesses following the announcement of wedding banns on the right number of Sundays—that they forget that the pastureland is shrinking and the forest cover is gone.

When archeologists looked through the ruins of the Western Settlement, they found plenty of the big wooden objects that were so valuable in Greenland—crucifixes, bowls, furniture, doors, roof timbers—which meant that the end came too quickly for anyone to do any scavenging. And, when the archeologists looked at the animal bones left in the debris, they found the bones of newborn calves, meaning that the Norse, in that final winter, had given up on the future. They found toe bones from cows, equal to the number of cow spaces in the barn, meaning that the Norse ate their cattle down to the hoofs, and they found the bones of dogs covered with knife marks, meaning that, in the end, they had to eat their pets. But not fish bones, of course. Right up until they starved to death, the Norse never lost sight of what they stood for.
Is pop culture dumbing us down or smartening us up?

1.

Twenty years ago, a political philosopher named James Flynn uncovered a curious fact. Americans—at least, as measured by I.Q. tests—were getting smarter. This fact had been obscured for years, because the people who give I.Q. tests continually recalibrate the scoring system to keep the average at 100. But if you took out the recalibration, Flynn found, I.Q. scores showed a steady upward trajectory, rising by about three points per decade, which means that a person whose I.Q. placed him in the top ten per cent of the American population in 1920 would today fall in the bottom third. Some of that effect, no doubt, is a simple by-product of economic progress: in the surge of prosperity during the middle part of the last century, people in the West became better fed, better educated, and more familiar with things like I.Q. tests. But, even as that wave of change has subsided, test scores have continued to rise—not just in America but all over the developed world. What’s more, the increases have not been confined to children who go to enriched day-care centers and private schools. The middle part of the curve—the people who have supposedly been suffering from a deteriorating public-school system and a steady diet of lowest-common-denominator television and mindless pop music—has increased just as much. What on earth is happening? In the wonderfully entertaining “Everything Bad Is Good for You” (Riverhead; $23.95), Steven Johnson proposes that what is making us smarter is precisely what we thought was making us dumber: popular culture.
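
The percentile claim in that paragraph can be checked with a little arithmetic. Here is a minimal sketch, assuming a normal I.Q. distribution with a mean of 100 and a standard deviation of 15, a steady gain of three points per decade, and a comparison window running from 1920 to the mid-two-thousands; those modelling choices are mine, not Flynn’s or Johnson’s.

```python
# Back-of-envelope check of the Flynn-effect claim: someone in the top ten per
# cent in 1920 would land near the bottom third today.
# Assumptions (mine): normal I.Q. distribution (mean 100, s.d. 15), a steady
# three-point gain per decade, and a 1920-to-2005 comparison window.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

gain_per_decade = 3.0
decades = (2005 - 1920) / 10            # ~8.5 decades
total_gain = gain_per_decade * decades  # ~25.5 points of re-norming

# Score needed to be in the top 10% of the 1920 population (1920 norms).
top_decile_1920 = iq.inv_cdf(0.90)      # ~119.2

# The same raw performance, re-scored against today's (higher) norms.
equivalent_today = top_decile_1920 - total_gain   # ~93.7

percentile_today = iq.cdf(equivalent_today)       # ~0.34
print(f"Top-decile 1920 score: {top_decile_1920:.1f}")
print(f"Equivalent score on today's norms: {equivalent_today:.1f}")
print(f"Percentile today: {percentile_today:.0%}")  # roughly the bottom third
```

On those assumptions the 1920 top-decile performer lands around the thirty-fourth percentile today, which is the “bottom third” of the claim.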

Johnson is the former editor of the online magazine Feed and the author of a number of books on science and technology. There is a pleasing eclecticism to his thinking. He is as happy analyzing “Finding Nemo” as he is dissecting the intricacies of a piece of software, and he’s perfectly capable of using Nietzsche’s notion of eternal recurrence to discuss the new creative rules of television shows. Johnson wants to understand popular culture—not in the postmodern, academic sense of wondering what “The Dukes of Hazzard” tells us about Southern male alienation but in the very practical sense of wondering what watching something like “The Dukes of Hazzard” does to the way our minds work.

As Johnson points out, television is very different now from what it was thirty years ago. It’s harder. A typical episode of “Starsky and Hutch,” in the nineteen-seventies, followed an essentially linear path: two characters, engaged in a single story line, moving toward a decisive conclusion. To watch an episode of “Dallas” today is to be stunned by its glacial pace—by the arduous attempts to establish social relationships, by the excruciating simplicity of the plotline, by how obvious it was. A single episode of “The Sopranos,” by contrast, might follow five narrative threads, involving a dozen characters who weave in and out of the plot. Modern television also requires the viewer to do a lot of what Johnson calls “filling in,” as in a “Seinfeld” episode that subtly parodies the Kennedy assassination conspiracists, or a typical “Simpsons” episode, which may contain numerous allusions to politics or cinema or pop culture. The extraordinary amount of money now being made in the television aftermarket—DVD sales and syndication—means that the creators of television shows now have an incentive to make programming that can sustain two or three or four viewings. Even reality shows like “Survivor,” Johnson argues, engage the viewer in a way that television rarely has in the past:

When we watch these shows, the part of our brain that monitors the emotional lives of the people around us—the part that tracks subtle shifts in intonation and gesture and facial expression—scrutinizes the action on the screen, looking for clues. . . . The phrase “Monday-morning quarterbacking” was coined to describe the engaged feeling spectators have in relation to games as opposed to stories. We absorb stories, but we second-guess games. Reality programming has brought that second-guessing to prime time, only the game in question revolves around social dexterity rather than the physical kind.

How can the greater cognitive demands that television makes on us now, he wonders, not matter?

Johnson develops the same argument about video games. Most of the people who denounce video games, he says, haven’t actually played them—at least, not recently. Twenty years ago, games like Tetris or Pac-Man were simple exercises in motor coördination and pattern recognition. Today’s games belong to another realm. Johnson points out that one of the “walk-throughs” for “Grand Theft Auto III”—that is, the informal guides that break down the games and help players navigate their complexities—is fifty-three thousand words long, about the length of his book. The contemporary video game involves a fully realized imaginary world, dense with detail and levels of complexity.

Indeed, video games are not games in the sense of those pastimes—like Monopoly or gin rummy or chess—which most of us grew up with. They don’t have a set of unambiguous rules that have to be learned and then followed during the course of play. This is why many of us find modern video games baffling: we’re not used to being in a situation where we have to figure out what to do. We think we only have to learn how to press the buttons faster. But these games withhold critical information from the player. Players have to explore and sort through hypotheses in order to make sense of the game’s environment, which is why a modern video game can take forty hours to complete. Far from being engines of instant gratification, as they are often described, video games are actually, Johnson writes, “all about delayed gratification—sometimes so long delayed that you wonder if the gratification is ever going to show.”

At the same time, players are required to manage a dizzying array of information and options. The game presents the player with a series of puzzles, and you can’t succeed at the game simply by solving the puzzles one at a time. You have to craft a longer-term strategy, in order to juggle and coördinate competing interests. In denigrating the video game, Johnson argues, we have confused it with other phenomena in teen-age life, like multitasking—simultaneously e-mailing and listening to music and talking on the telephone and surfing the Internet. Playing a video game is, in fact, an exercise in “constructing the proper hierarchy of tasks and moving through the tasks in the correct sequence,” he writes. “It’s about finding order and meaning in the world, and making decisions that help create that order.”

2.

It doesn’t seem right, of course, that watching “24” or playing a video game could be as important cognitively as reading a book. Isn’t the extraordinary success of the “Harry Potter” novels better news for the culture than the equivalent success of “Grand Theft Auto III”? Johnson’s response is to imagine what cultural critics might have said had video games been invented hundreds of years ago, and only recently had something called the book been marketed aggressively to children:

Reading books chronically understimulates the senses. Unlike the longstanding tradition of game playing—which engages the child in a vivid, three-dimensional world filled with moving images and musical sound-scapes, navigated and controlled with complex muscular movements—books are simply a barren string of words on the page. . . .
Books are also tragically isolating. While games have for many years engaged the young in complex social relationships with their peers, building and exploring worlds together, books force the child to sequester him or herself in a quiet space, shut off from interaction with other children. . . .
But perhaps the most dangerous property of these books is the fact that they follow a fixed linear path. You can’t control their narratives in any fashion—you simply sit back and have the story dictated to you. . . . This risks instilling a general passivity in our children, making them feel as though they’re powerless to change their circumstances. Reading is not an active, participatory process; it’s a submissive one.

He’s joking, of course, but only in part. The point is that books and video games represent two very different kinds of learning. When you read a biology textbook, the content of what you read is what matters. Reading is a form of explicit learning. When you play a video game, the value is in how it makes you think. Video games are an example of collateral learning, which is no less important.

Being “smart” involves facility in both kinds of thinking—the kind of fluid problem solving that matters in things like video games and I.Q. tests, but also the kind of crystallized knowledge that comes from explicit learning. If Johnson’s book has a flaw, it is that he sometimes speaks of our culture being “smarter” when he’s really referring just to that fluid problem-solving facility. When it comes to the other kind of intelligence, it is not clear at all what kind of progress we are making, as anyone who has read, say, the Gettysburg Address alongside any Presidential speech from the past twenty years can attest. The real question is what the right balance of these two forms of intelligence might look like. “Everything Bad Is Good for You” doesn’t answer that question. But Johnson does something nearly as important, which is to remind us that we shouldn’t fall into the trap of thinking that explicit learning is the only kind of learning that matters.

In recent years, for example, a number of elementary schools have phased out or reduced recess and replaced it with extra math or English instruction. This is the triumph of the explicit over the collateral. After all, recess is “play” for a ten-year-old in precisely the sense that Johnson describes video games as play for an adolescent: an unstructured environment that requires the child actively to intervene, to look for the hidden logic, to find order and meaning in chaos.

One of the ongoing debates in the educational community, similarly, is over the value of homework. Meta-analysis of hundreds of studies done on the effects of homework shows that the evidence supporting the practice is, at best, modest. Homework seems to be most useful in high school and for subjects like math. At the elementary-school level, homework seems to be of marginal or no academic value. Its effect on discipline and personal responsibility is unproved. And the causal relation between high-school homework and achievement is unclear: it hasn’t been firmly established whether spending more time on homework in high school makes you a better student or whether better students, finding homework more pleasurable, spend more time doing it. So why, as a society, are we so enamored of homework? Perhaps because we have so little faith in the value of the things that children would otherwise be doing with their time. They could go out for a walk, and get some exercise; they could spend time with their peers, and reap the rewards of friendship. Or, Johnson suggests, they could be playing a video game, and giving their minds a rigorous workout.
The bad idea behind our failed health-care system.

1.

Tooth decay begins, typically, when debris becomes trapped between the teeth and along the ridges and in the grooves of the molars.  The food rots.  It becomes colonized with bacteria.  The bacteria feeds off sugars in the mouth and forms an acid that begins to eat away at the enamel of the teeth.  Slowly, the bacteria works its way through to the dentin, the inner structure, and from there the cavity begins to blossom three-dimensionally, spreading inward and sideways.  When the decay reaches the pulp tissue, the blood vessels, and the nerves that serve the tooth, the pain starts—an insistent throbbing.  The tooth turns brown.  It begins to lose its hard structure, to the point where a dentist can reach into a cavity with a hand instrument and scoop out the decay.  At the base of the tooth, the bacteria mineralizes into tartar, which begins to irritate the gums.  They become puffy and bright red and start to recede, leaving more and more of the tooth’s root exposed.  When the infection works its way down to the bone, the structure holding the tooth in begins to collapse altogether.

Several years ago, two Harvard researchers, Susan Starr Sered and Rushika Fernandopulle, set out to interview people without health-care coverage for a book they were writing, “Uninsured in America.” They talked to as many kinds of people as they could find, collecting stories of untreated depression and struggling single mothers and chronically injured laborers—and the most common complaint they heard was about teeth.  Gina, a hairdresser in Idaho, whose husband worked as a freight manager at a chain store, had “a peculiar mannerism of keeping her mouth closed even when speaking.” It turned out that she hadn’t been able to afford dental care for three years, and one of her front teeth was rotting.  Daniel, a construction worker, pulled out his bad teeth with pliers.  Then, there was Loretta, who worked nights at a university research center in Mississippi, and was missing most of her teeth.  “They’ll break off after a while, and then you just grab a hold of them, and they work their way out,” she explained to Sered and Fernandopulle.  “It hurts so bad, because the tooth aches.  Then it’s a relief just to get it out of there.  The hole closes up itself anyway.  So it’s so much better.”

People without health insurance have bad teeth because, if you’re paying for everything out of your own pocket, going to the dentist for a checkup seems like a luxury.  It isn’t, of course.  The loss of teeth makes eating fresh fruits and vegetables difficult, and a diet heavy in soft, processed foods exacerbates more serious health problems, like diabetes.  The pain of tooth decay leads many people to use alcohol as a salve.  And those struggling to get ahead in the job market quickly find that the unsightliness of bad teeth, and the self-consciousness that results, can become a major barrier.  If your teeth are bad, you’re not going to get a job as a receptionist, say, or a cashier.  You’re going to be put in the back somewhere, far from the public eye.  What Loretta, Gina, and Daniel understand, the two authors tell us, is that bad teeth have come to be seen as a marker of “poor parenting, low educational achievement and slow or faulty intellectual development.” They are an outward marker of caste.  “Almost every time we asked interviewees what their first priority would be if the president established universal health coverage tomorrow,” Sered and Fernandopulle write, “the immediate answer was ‘my teeth.’ ”

The U.S. health-care system, according to “Uninsured in America,” has created a group of people who increasingly look different from others and suffer in ways that others do not.  The leading cause of personal bankruptcy in the United States is unpaid medical bills.  Half of the uninsured owe money to hospitals, and a third are being pursued by collection agencies.  Children without health insurance are less likely to receive medical attention for serious injuries, for recurrent ear infections, or for asthma.  Lung-cancer patients without insurance are less likely to receive surgery, chemotherapy, or radiation treatment.  Heart-attack victims without health insurance are less likely to receive angioplasty.  People with pneumonia who don’t have health insurance are less likely to receive X rays or consultations.  The death rate in any given year for someone without health insurance is twenty-five per cent higher than for someone with insurance.  Because the uninsured are sicker than the rest of us, they can’t get better jobs, and because they can’t get better jobs they can’t afford health insurance, and because they can’t afford health insurance they get even sicker.  John, the manager of a bar in Idaho, tells Sered and Fernandopulle that as a result of various workplace injuries over the years he takes eight ibuprofen, waits two hours, then takes eight more—and tries to cadge as much prescription pain medication as he can from friends.  “There are times when I should’ve gone to the doctor, but I couldn’t afford to go because I don’t have insurance,” he says.  “Like when my back messed up, I should’ve gone.  If I had insurance, I would’ve went, because I know I could get treatment, but when you can’t afford it you don’t go.  Because the harder the hole you get into in terms of bills, then you’ll never get out.  So you just say, ‘I can deal with the pain.’ ”

2.

One of the great mysteries of political life in the United States is why Americans are so devoted to their health-care system.  Six times in the past century—during the First World War, during the Depression, during the Truman and Johnson Administrations, in the Senate in the nineteen-seventies, and during the Clinton years—efforts have been made to introduce some kind of universal health insurance, and each time the efforts have been rejected.  Instead, the United States has opted for a makeshift system of increasing complexity and dysfunction.  Americans spend $5,267 per capita on health care every year, almost two and a half times the industrialized world’s median of $2,193; the extra spending comes to hundreds of billions of dollars a year.  What does that extra spending buy us? Americans have fewer doctors per capita than most Western countries.  We go to the doctor less than people in other Western countries.  We get admitted to the hospital less frequently than people in other Western countries.  We are less satisfied with our health care than our counterparts in other countries.  American life expectancy is lower than the Western average.  Childhood-immunization rates in the United States are lower than average.  Infant-mortality rates are in the nineteenth percentile of industrialized nations.  Doctors here perform more high-end medical procedures, such as coronary angioplasties, than in other countries, but most of the wealthier Western countries have more CT scanners than the United States does, and Switzerland, Japan, Austria, and Finland all have more MRI machines per capita.  Nor is our system more efficient.  The United States spends more than a thousand dollars per capita per year—or close to four hundred billion dollars—on health-care-related paperwork and administration, whereas Canada, for example, spends only about three hundred dollars per capita.  And, of course, every other country in the industrialized world insures all its citizens; despite those extra hundreds of billions of dollars we spend each year, we leave forty-five million people without any insurance.  A country that displays an almost ruthless commitment to efficiency and performance in every aspect of its economy—a country that switched to Japanese cars the moment they were more reliable, and to Chinese T-shirts the moment they were five cents cheaper—has loyally stuck with a health-care system that leaves its citizenry pulling out their teeth with pliers.
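
Since that paragraph leans on a string of per-capita figures, a quick sketch turns them into aggregates. The dollar amounts are the ones quoted above; the population figure, roughly 295 million, is my own assumption for the mid-two-thousands United States.

```python
# Back-of-envelope arithmetic for the spending figures quoted above.
# Assumption (mine): a mid-2000s U.S. population of roughly 295 million;
# the per-capita dollar figures are the ones given in the text.
US_POPULATION = 295_000_000

us_per_capita = 5_267           # annual U.S. health spending per person
oecd_median_per_capita = 2_193  # industrialized-world median cited in the text

ratio = us_per_capita / oecd_median_per_capita
extra_total = (us_per_capita - oecd_median_per_capita) * US_POPULATION

print(f"U.S. spending vs. median: {ratio:.1f}x")                     # ~2.4, "almost two and a half times"
print(f"Extra spending per year:  ${extra_total / 1e9:.0f} billion")  # many hundreds of billions

# Administrative overhead: "more than a thousand dollars per capita" here
# versus about three hundred dollars in Canada.
us_admin, canada_admin = 1_000, 300
admin_gap = (us_admin - canada_admin) * US_POPULATION
print(f"Admin-cost gap vs. Canada: ${admin_gap / 1e9:.0f} billion per year")
```

On these assumptions the gap with the rest of the industrialized world works out to several hundred billion dollars a year, which is the scale the paragraph has in mind.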

America’s health-care mess is, in part, simply an accident of history.  The fact that there have been six attempts at universal health coverage in the last century suggests that there has long been support for the idea.  But politics has always got in the way.  In both Europe and the United States, for example, the push for health insurance was led, in large part, by organized labor.  But in Europe the unions worked through the political system, fighting for coverage for all citizens.  From the start, health insurance in Europe was public and universal, and that created powerful political support for any attempt to expand benefits.  In the United States, by contrast, the unions worked through the collective-bargaining system and, as a result, could win health benefits only for their own members.  Health insurance here has always been private and selective, and every attempt to expand benefits has resulted in a paralyzing political battle over who would be added to insurance rolls and who ought to pay for those additions.

Policy is driven by more than politics, however.  It is equally driven by ideas, and in the past few decades a particular idea has taken hold among prominent American economists which has also been a powerful impediment to the expansion of health insurance.  The idea is known as “moral hazard.” Health economists in other Western nations do not share this obsession.  Nor do most Americans.  But moral hazard has profoundly shaped the way think tanks formulate policy and the way experts argue and the way health insurers structure their plans and the way legislation and regulations have been written.  The health-care mess isn’t merely the unintentional result of political dysfunction, in other words.  It is also the deliberate consequence of the way in which American policymakers have come to think about insurance.

“Moral hazard” is the term economists use to describe the fact that insurance can change the behavior of the person being insured.  If your office gives you and your co-workers all the free Pepsi you want—if your employer, in effect, offers universal Pepsi insurance—you’ll drink more Pepsi than you would have otherwise.  If you have a no-deductible fire-insurance policy, you may be a little less diligent in clearing the brush away from your house.  The savings-and-loan crisis of the nineteen-eighties was created, in large part, by the fact that the federal government insured savings deposits of up to a hundred thousand dollars, and so the newly deregulated S. & L.s made far riskier investments than they would have otherwise.  Insurance can have the paradoxical effect of producing risky and wasteful behavior.  Economists spend a great deal of time thinking about such moral hazard for good reason.  Insurance is an attempt to make human life safer and more secure.  But, if those efforts can backfire and produce riskier behavior, providing insurance becomes a much more complicated and problematic endeavor.

In 1968, the economist Mark Pauly argued that moral hazard played an enormous role in medicine, and, as John Nyman writes in his book “The Theory of the Demand for Health Insurance,” Pauly’s paper has become the “single most influential article in the health economics literature.” Nyman, an economist at the University of Minnesota, says that the fear of moral hazard lies behind the thicket of co-payments and deductibles and utilization reviews which characterizes the American health-insurance system.  Fear of moral hazard, Nyman writes, also explains “the general lack of enthusiasm by U.S.  health economists for the expansion of health insurance coverage (for example, national health insurance or expanded Medicare benefits) in the U.S.”

What Nyman is saying is that when your insurance company requires that you make a twenty-dollar co-payment for a visit to the doctor, or when your plan includes an annual five-hundred-dollar or thousand-dollar deductible, it’s not simply an attempt to get you to pick up a larger share of your health costs.  It is an attempt to make your use of the health-care system more efficient.  Making you responsible for a share of the costs, the argument runs, will reduce moral hazard: you’ll no longer grab one of those free Pepsis when you aren’t really thirsty.  That’s also why Nyman says that the notion of moral hazard is behind the “lack of enthusiasm” for expansion of health insurance.  If you think of insurance as producing wasteful consumption of medical services, then the fact that there are forty-five million Americans without health insurance is no longer an immediate cause for alarm.  After all, it’s not as if the uninsured never go to the doctor.  They spend, on average, $934 a year on medical care.  A moral-hazard theorist would say that they go to the doctor when they really have to.  Those of us with private insurance, by contrast, consume $2,347 worth of health care a year.  If a lot of that extra $1,413 is waste, then maybe the uninsured person is the truly efficient consumer of health care.

The moral-hazard argument makes sense, however, only if we consume health care in the same way that we consume other consumer goods, and to economists like Nyman this assumption is plainly absurd.  We go to the doctor grudgingly, only because we’re sick.  “Moral hazard is overblown,” the Princeton economist Uwe Reinhardt says.  “You always hear that the demand for health care is unlimited.  This is just not true.  People who are very well insured, who are very rich, do you see them check into the hospital because it’s free? Do people really like to go to the doctor? Do they check into the hospital instead of playing golf?”

For that matter, when you have to pay for your own health care, does your consumption really become more efficient? In the late nineteen-seventies, the RAND Corporation did an extensive study on the question, randomly assigning families to health plans with co-payment levels at zero per cent, twenty-five per cent, fifty per cent, or ninety-five per cent, up to six thousand dollars.  As you might expect, the more that people were asked to chip in for their health care the less care they used.  The problem was that they cut back equally on both frivolous care and useful care.  Poor people in the high-deductible group with hypertension, for instance, didn’t do nearly as good a job of controlling their blood pressure as those in other groups, resulting in a ten-per-cent increase in the likelihood of death.  As a recent Commonwealth Fund study concluded, cost sharing is “a blunt instrument.” Of course it is: how should the average consumer be expected to know beforehand what care is frivolous and what care is useful? I just went to the dermatologist to get moles checked for skin cancer.  If I had had to pay a hundred per cent, or even fifty per cent, of the cost of the visit, I might not have gone.  Would that have been a wise decision? I have no idea.  But if one of those moles really is cancerous, that simple, inexpensive visit could save the health-care system tens of thousands of dollars (not to mention saving me a great deal of heartbreak).  The focus on moral hazard suggests that the changes we make in our behavior when we have insurance are nearly always wasteful.  Yet, when it comes to health care, many of the things we do only because we have insurance—like getting our moles checked, or getting our teeth cleaned regularly, or getting a mammogram or engaging in other routine preventive care—are anything but wasteful and inefficient.  In fact, they are behaviors that could end up saving the health-care system a good deal of money.
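
The mole-check anecdote is really an expected-value argument, and it can be made concrete with a toy calculation. Every number below is hypothetical, chosen only to show the shape of the reasoning, not to estimate real screening costs or cancer rates.

```python
# An expected-value sketch of the mole-check example. All figures are
# hypothetical and illustrate the structure of the argument only.
visit_cost = 150            # hypothetical price of a dermatology screening visit
p_cancer = 0.01             # hypothetical chance a screened patient has an early melanoma
cost_caught_early = 5_000   # hypothetical treatment cost when caught at screening
cost_caught_late = 100_000  # hypothetical cost when it surfaces much later

expected_saving = p_cancer * (cost_caught_late - cost_caught_early) - visit_cost
print(f"Expected net saving per screening visit: ${expected_saving:,.0f}")
# Whenever this number is positive, the "extra" care that insurance encourages
# is not moral-hazard waste at all; it is exactly what the system should want.
```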

Sered and Fernandopulle tell the story of Steve, a factory worker from northern Idaho, with a “grotesque-looking left hand—what looks like a bone sticks out the side.” When he was younger, he broke his hand.  “The doctor wanted to operate on it,” he recalls.  “And because I didn’t have insurance, well, I was like ‘I ain’t gonna have it operated on.’ The doctor said, ‘Well, I can wrap it for you with an Ace bandage.’ I said, ‘Ahh, let’s do that, then.’ ” Steve uses less health care than he would if he had insurance, but that’s not because he has defeated the scourge of moral hazard.  It’s because instead of getting a broken bone fixed he put a bandage on it.

3.

At the center of the Bush Administration’s plan to address the health-insurance mess are Health Savings Accounts, and Health Savings Accounts are exactly what you would come up with if you were concerned, above all else, with minimizing moral hazard.  The logic behind them was laid out in the 2004 Economic Report of the President.  Americans, the report argues, have too much health insurance: typical plans cover things that they shouldn’t, creating the problem of overconsumption.  Several paragraphs are then devoted to explaining the theory of moral hazard.  The report turns to the subject of the uninsured, concluding that they fall into several groups.  Some are foreigners who may be covered by their countries of origin.  Some are people who could be covered by Medicaid but aren’t or aren’t admitting that they are.  Finally, a large number “remain uninsured as a matter of choice.” The report continues, “Researchers believe that as many as one-quarter of those without health insurance had coverage available through an employer but declined the coverage…. Still others may remain uninsured because they are young and healthy and do not see the need for insurance.” In other words, those with health insurance are overinsured and their behavior is distorted by moral hazard.  Those without health insurance use their own money to make decisions about insurance based on an assessment of their needs.  The insured are wasteful.  The uninsured are prudent.  So what’s the solution? Make the insured a little bit more like the uninsured.

Under the Health Savings Accounts system, consumers are asked to pay for routine health care with their own money—several thousand dollars of which can be put into a tax-free account.  To handle their catastrophic expenses, they then purchase a basic health-insurance package with, say, a thousand-dollar annual deductible.  As President Bush explained recently, “Health Savings Accounts all aim at empowering people to make decisions for themselves, owning their own health-care plan, and at the same time bringing some demand control into the cost of health care.”
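
As a rough illustration of the mechanics described above, here is a minimal sketch of a consumer’s exposure under such a design: routine care paid out of the consumer’s own tax-free account, with a catastrophic policy picking up costs above a fixed annual deductible. The deductible and the spending levels are illustrative assumptions, and the sketch ignores premiums, co-insurance, and the tax treatment of the account.

```python
# Out-of-pocket exposure under a simplified HSA-style design: the consumer pays
# everything up to the catastrophic deductible, the insurer pays the rest.
# The figures are illustrative assumptions, not the terms of any actual plan.
DEDUCTIBLE = 1_000  # hypothetical annual deductible on the catastrophic policy

def out_of_pocket(annual_medical_costs: int) -> int:
    """Amount the consumer pays from his or her own (tax-free) account."""
    return min(annual_medical_costs, DEDUCTIBLE)

for costs in (200, 900, 5_000, 40_000):
    print(f"care costing ${costs:>6,}: consumer pays ${out_of_pocket(costs):,}")
# A healthy person pays a few hundred dollars a year; anyone with routine,
# recurring needs hits the full deductible every single year.
```

The point of the sketch is simply that the design shifts predictable, recurring costs onto the people who have them, which is where the argument below picks up.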

The country described in the President’s report is a very different place from the country described in “Uninsured in America.” Sered and Fernandopulle look at the billions we spend on medical care and wonder why Americans have so little insurance.  The President’s report considers the same situation and worries that we have too much.  Sered and Fernandopulle see the lack of insurance as a problem of poverty; a third of the uninsured, after all, have incomes below the federal poverty line.  In the section on the uninsured in the President’s report, the word “poverty” is never used.  In the Administration’s view, people are offered insurance but “decline the coverage” as “a matter of choice.” The uninsured in Sered and Fernandopulle’s book decline coverage, but only because they can’t afford it.  Gina, for instance, works for a beauty salon that offers her a bare-bones health-insurance plan with a thousand-dollar deductible for two hundred dollars a month.  What’s her total income? Nine hundred dollars a month.  She could “choose” to accept health insurance, but only if she chose to stop buying food or paying the rent.

The biggest difference between the two accounts, though, has to do with how each views the function of insurance.  Gina, Steve, and Loretta are ill, and need insurance to cover the costs of getting better.  In their eyes, insurance is meant to help equalize financial risk between the healthy and the sick.  In the insurance business, this model of coverage is known as “social insurance,” and historically it was the way health coverage was conceived.  If you were sixty and had heart disease and diabetes, you didn’t pay substantially more for coverage than a perfectly healthy twenty-five-year-old.  Under social insurance, the twenty-five-year-old agrees to pay thousands of dollars in premiums even though he didn’t go to the doctor at all in the previous year, because he wants to make sure that someone else will subsidize his health care if he ever comes down with heart disease or diabetes.  Canada and Germany and Japan and all the other industrialized nations with universal health care follow the social-insurance model.  Medicare, too, is based on the social-insurance model, and, when Americans with Medicare report themselves to be happier with virtually every aspect of their insurance coverage than people with private insurance (as they do, repeatedly and overwhelmingly), they are referring to the social aspect of their insurance.  They aren’t getting better care.  But they are getting something just as valuable: the security of being insulated against the financial shock of serious illness.

There is another way to organize insurance, however, and that is to make it actuarial.  Car insurance, for instance, is actuarial.  How much you pay is in large part a function of your individual situation and history: someone who drives a sports car and has received twenty speeding tickets in the past two years pays a much higher annual premium than a soccer mom with a minivan.  In recent years, the private insurance industry in the United States has been moving toward the actuarial model, with profound consequences.  The triumph of the actuarial model over the social-insurance model is the reason that companies unlucky enough to employ older, high-cost employees—like United Airlines—have run into such financial difficulty.  It’s the reason that automakers are increasingly moving their operations to Canada.  It’s the reason that small businesses that have one or two employees with serious illnesses suddenly face unmanageably high health-insurance premiums, and it’s the reason that, in many states, people suffering from a potentially high-cost medical condition can’t get anyone to insure them at all.

Health Savings Accounts represent the final, irrevocable step in the actuarial direction.  If you are preoccupied with moral hazard, then you want people to pay for care with their own money, and, when you do that, the sick inevitably end up paying more than the healthy.  And when you make people choose an insurance plan that fits their individual needs, those with significant medical problems will choose expensive health plans that cover lots of things, while those with few health problems will choose cheaper, bare-bones plans.  The more expensive the comprehensive plans become, and the less expensive the bare-bones plans become, the more the very sick will cluster together at one end of the insurance spectrum, and the more the well will cluster together at the low-cost end.  The days when the healthy twenty-five-year-old subsidizes the sixty-year-old with heart disease or diabetes are coming to an end.  “The main effect of putting more of it on the consumer is to reduce the social redistributive element of insurance,” the Stanford economist Victor Fuchs says.  Health Savings Accounts are not a variant of universal health care.  In their governing assumptions, they are the antithesis of universal health care.
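
The contrast between the two models can be put in miniature. In the sketch below, the expected annual costs are hypothetical and exist only to show how the premiums diverge: under social insurance everyone pays the pool’s average, while under actuarial pricing each person pays something close to his or her own expected cost.

```python
# A toy comparison of social (community-rated) versus actuarial pricing.
# Expected-cost figures are hypothetical, chosen only for illustration.
expected_annual_cost = {
    "healthy twenty-five-year-old": 1_000,
    "sixty-year-old with heart disease and diabetes": 15_000,
}

# Social insurance: everyone in the pool pays the same average expected cost.
social_premium = sum(expected_annual_cost.values()) / len(expected_annual_cost)

# Actuarial insurance: each person's premium tracks his or her own expected cost.
for person, own_cost in expected_annual_cost.items():
    print(f"{person}: social ${social_premium:,.0f} vs. actuarial ${own_cost:,}")

# Under the social model the healthy cross-subsidize the sick; under the
# actuarial model that subsidy disappears, which is the clustering described above.
```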

The issue about what to do with the health-care system is sometimes presented as a technical argument about the merits of one kind of coverage over another or as an ideological argument about socialized versus private medicine.  It is, instead, about a few very simple questions.  Do you think that this kind of redistribution of risk is a good idea? Do you think that people whose genes predispose them to depression or cancer, or whose poverty complicates asthma or diabetes, or who get hit by a drunk driver, or who have to keep their mouths closed because their teeth are rotting ought to bear a greater share of the costs of their health care than those of us who are lucky enough to escape such misfortunes? In the rest of the industrialized world, it is assumed that the more equally and widely the burdens of illness are shared, the better off the population as a whole is likely to be.  The reason the United States has forty-five million people without coverage is that its health-care policy is in the hands of people who disagree, and who regard health insurance not as the solution but as the problem.