
How caffeine created the modern world.

1.

The original Coca-Cola was a late-nineteenth-century concoction known as Pemberton’s French Wine Coca, a mixture of alcohol, the caffeine-rich kola nut, and coca, the raw ingredient of cocaine. In the face of social pressure, first the wine and then the coca were removed, leaving the more banal modern beverage in its place: carbonated, caffeinated sugar water with less kick to it than a cup of coffee. But is that the way we think of Coke? Not at all. In the nineteen-thirties, a commercial artist named Haddon Sundblom had the bright idea of posing a portly retired friend of his in a red Santa Claus suit with a Coke in his hand, and plastering the image on billboards and advertisements across the country. Coke, magically, was reborn as caffeine for children, caffeine without any of the weighty adult connotations of coffee and tea. It was–as the ads with Sundblom’s Santa put it–“the pause that refreshes.” It added life. It could teach the world to sing.

One of the things that have always made drugs so powerful is their cultural adaptability, their way of acquiring meanings beyond their pharmacology. We think of marijuana, for example, as a drug of lethargy, of disaffection. But in Colombia, the historian David T. Courtwright points out in “Forces of Habit” (Harvard; $24.95), “peasants boast that cannabis helps them to quita el cansancio or reduce fatigue; increase their fuerza and ánimo, force and spirit; and become incansable, tireless.” In Germany right after the Second World War, cigarettes briefly and suddenly became the equivalent of crack cocaine. “Up to a point, the majority of the habitual smokers preferred to do without food even under extreme conditions of nutrition rather than to forgo tobacco,” according to one account of the period. “Many housewives… bartered fat and sugar for cigarettes.” Even a drug as demonized as opium has been seen in a more favorable light. In the eighteen-thirties, Franklin Delano Roosevelt’s grandfather Warren Delano II made the family fortune exporting the drug to China, and Delano was able to sugarcoat his activities so plausibly that no one ever accused his grandson of being the scion of a drug lord. And yet, as Bennett Alan Weinberg and Bonnie K. Bealer remind us in their marvellous new book “The World of Caffeine” (Routledge; $27.50), there is no drug quite as effortlessly adaptable as caffeine, the Zelig of chemical stimulants.

At one moment, in one form, it is the drug of choice of café intellectuals and artists; in another, of housewives; in another, of Zen monks; and, in yet another, of children enthralled by a fat man who slides down chimneys. King Gustav III, who ruled Sweden in the latter half of the eighteenth century, was so convinced of the particular perils of coffee over all other forms of caffeine that he devised an elaborate experiment. A convicted murderer was sentenced to drink cup after cup of coffee until he died, with another murderer sentenced to a lifetime of tea drinking, as a control. (Unfortunately, the two doctors in charge of the study died before anyone else did; then Gustav was murdered; and finally the tea drinker died, at eighty-three, of old age–leaving the original murderer alone with his espresso, and leaving coffee’s supposed toxicity in some doubt.) Later, the various forms of caffeine began to be divided up along sociological lines. Wolfgang Schivelbusch, in his book “Tastes of Paradise,” argues that, in the eighteenth century, coffee symbolized the rising middle classes, whereas its great caffeinated rival in those years–cocoa, or, as it was known at the time, chocolate–was the drink of the aristocracy. “Goethe, who used art as a means to lift himself out of his middle class background into the aristocracy, and who as a member of a courtly society maintained a sense of aristocratic calm even in the midst of immense productivity, made a cult of chocolate, and avoided coffee,” Schivelbusch writes. “Balzac, who despite his sentimental allegiance to the monarchy, lived and labored for the literary marketplace and for it alone, became one of the most excessive coffee-drinkers in history. Here we see two fundamentally different working styles and means of stimulation–fundamentally different psychologies and physiologies.” Today, of course, the chief cultural distinction is between coffee and tea, which, according to a list drawn up by Weinberg and Bealer, have come to represent almost entirely opposite sensibilities:

Coffee Aspect      Tea Aspect
Male               Female
Boisterous         Decorous
Indulgence         Temperance
Hardheaded         Romantic
Topology           Geometry
Heidegger          Carnap
Beethoven          Mozart
Libertarian        Statist
Promiscuous        Pure

That the American Revolution began with the symbolic rejection of tea in Boston Harbor, in other words, makes perfect sense. Real revolutionaries would naturally prefer coffee. By contrast, the freedom fighters of Canada, a hundred years later, were most definitely tea drinkers. And where was Canada’s autonomy won? Not on the blood-soaked fields of Lexington and Concord but in the genteel drawing rooms of Westminster, over a nice cup of Darjeeling and small, triangular cucumber sandwiches.

2.

All this is a bit puzzling. We don’t fetishize the difference between salmon eaters and tuna eaters, or people who like their eggs sunny-side up and those who like them scrambled. So why invest so much importance in the way people prefer their caffeine? A cup of coffee has somewhere between a hundred and two hundred and fifty milligrams of caffeine; black tea brewed for four minutes has between forty and a hundred milligrams. But the disparity disappears if you consider that many tea drinkers drink from a pot, and have more than one cup. Caffeine is caffeine. “The more it is pondered,” Weinberg and Bealer write, “the more paradoxical this duality within the culture of caffeine appears. After all, both coffee and tea are aromatic infusions of vegetable matter, served hot or cold in similar quantities; both are often mixed with cream or sugar; both are universally available in virtually any grocery or restaurant in civilized society; and both contain the identical psychoactive alkaloid stimulant, caffeine.”

It would seem to make more sense to draw distinctions based on the way caffeine is metabolized rather than on the way it is served. Caffeine, whether it is in coffee or tea or a soft drink, moves easily from the stomach and intestines into the bloodstream, and from there to the organs, and before long has penetrated almost every cell of the body. This is the reason that caffeine is such a wonderful stimulant. Most substances can’t cross the blood-brain barrier, which is the body’s defensive mechanism, preventing viruses or toxins from entering the central nervous system. Caffeine does so easily. Within an hour or so, it reaches its peak concentration in the brain, and there it does a number of things–principally, blocking the action of adenosine, the neuromodulator that makes you sleepy, lowers your blood pressure, and slows down your heartbeat. Then, as quickly as it builds up in your brain and tissues, caffeine is gone–which is why it’s so safe. (Caffeine in ordinary quantities has never been conclusively linked to serious illness.)

But how quickly it washes away differs dramatically from person to person. A two-hundred-pound man who drinks a cup of coffee with a hundred milligrams of caffeine will have a maximum caffeine concentration of one milligram per kilogram of body weight. A hundred-pound woman having the same cup of coffee will reach a caffeine concentration of two milligrams per kilogram of body weight, or twice as high. In addition, when women are on the Pill, the rate at which they clear caffeine from their bodies slows considerably. (Some of the side effects experienced by women on the Pill may in fact be caffeine jitters caused by their sudden inability to tolerate as much coffee as they could before.) Pregnancy reduces a woman’s ability to process caffeine still further. The half-life of caffeine in an adult is roughly three and a half hours. In a pregnant woman, it’s eighteen hours. (Even a four-month-old child processes caffeine more efficiently.) An average man and woman sitting down for a cup of coffee are thus not pharmaceutical equals: in effect, the woman is under the influence of a vastly more powerful drug. Given these differences, you’d think that, instead of contrasting the caffeine cultures of tea and coffee, we’d contrast the caffeine cultures of men and women.
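The arithmetic above is simple enough to check. Here is a minimal sketch, assuming peak concentration is just dose divided by body mass and that clearance follows first-order (exponential) decay; the half-lives are the figures quoted in the text, while the function names and the seven-hour example are illustrative:

```python
LB_TO_KG = 0.4536

def peak_concentration(dose_mg, weight_lb):
    """Rough peak caffeine concentration, in mg per kg of body weight."""
    return dose_mg / (weight_lb * LB_TO_KG)

def remaining(dose_mg, hours, half_life_h):
    """Caffeine left in the body after `hours`, assuming first-order decay."""
    return dose_mg * 0.5 ** (hours / half_life_h)

print(round(peak_concentration(100, 200), 1))  # ~1.1 mg/kg: the 200-lb man
print(round(peak_concentration(100, 100), 1))  # ~2.2 mg/kg: the 100-lb woman
print(round(remaining(100, 7, 3.5)))   # ~25 mg left after 7 h (3.5-h half-life)
print(round(remaining(100, 7, 18.0)))  # ~76 mg left after 7 h (18-h, pregnancy)
```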

3.

But we don’t, and with good reason. To parse caffeine along gender lines does not do justice to its capacity to insinuate itself into every aspect of our lives, not merely to influence culture but even to create it. Take coffee’s reputation as the “thinker’s” drink. This dates from eighteenth-century Europe, where coffeehouses played a major role in the egalitarian, inclusionary spirit that was then sweeping the continent. They sprang up first in London, so alarming Charles II that in 1676 he tried to ban them. It didn’t work. By 1700, there were hundreds of coffeehouses in London, their subversive spirit best captured by a couplet from a comedy of the period: “In a coffeehouse just now among the rabble / I bluntly asked, which is the treason table.” The movement then spread to Paris, and by the end of the eighteenth century coffeehouses numbered in the hundreds–most famously, the Café de la Régence, near the Palais Royal, which counted among its customers Robespierre, Napoleon, Voltaire, Victor Hugo, Théophile Gautier, Rousseau, and the Duke of Richelieu. Previously, when men had gathered together to talk in public places, they had done so in bars, which drew from specific socioeconomic niches and, because of the alcohol they served, created a specific kind of talk. The new coffeehouses, by contrast, drew from many different classes and trades, and they served a stimulant, not a depressant. “It is not extravagant to claim that it was in these gathering spots that the art of conversation became the basis of a new literary style and that a new ideal of general education in letters was born,” Weinberg and Bealer write.

It is worth noting, as well, that in the original coffeehouses nearly everyone smoked, and nicotine also has a distinctive physiological effect. It moderates mood and extends attention, and, more important, it doubles the rate of caffeine metabolism: it allows you to drink twice as much coffee as you could otherwise. In other words, the original coffeehouse was a place where men of all types could sit all day; the tobacco they smoked made it possible to drink coffee all day; and the coffee they drank inspired them to talk all day. Out of this came the Enlightenment. (The next time we so perfectly married pharmacology and place, we got Joan Baez.)

In time, caffeine moved from the café to the home. In America, coffee triumphed because of the country’s proximity to the new Caribbean and Latin American coffee plantations, and the fact that throughout the nineteenth century duties were negligible. Beginning in the eighteen-twenties, Courtwright tells us, Brazil “unleashed a flood of slave-produced coffee. American per capita consumption, three pounds per year in 1830, rose to eight pounds by 1859.”

What this flood of caffeine did, according to Weinberg and Bealer, was to abet the process of industrialization–to help “large numbers of people to coordinate their work schedules by giving them the energy to start work at a given time and continue it as long as necessary.” Until the eighteenth century, it must be remembered, many Westerners drank beer almost continuously, even beginning their day with something called “beer soup.” (Bealer and Weinberg helpfully provide the following eighteenth-century German recipe: “Heat the beer in a saucepan; in a separate small pot beat a couple of eggs. Add a chunk of butter to the hot beer. Stir in some cool beer to cool it, then pour over the eggs. Add a bit of salt, and finally mix all the ingredients together, whisking it well to keep it from curdling.”) Now they began each day with a strong cup of coffee. One way to explain the industrial revolution is as the inevitable consequence of a world where people suddenly preferred being jittery to being drunk. In the modern world, there was no other way to keep up. That’s what Edison meant when he said that genius was ninety-nine per cent perspiration and one per cent inspiration. In the old paradigm, working with your mind had been associated with leisure. It was only the poor who worked hard. (The quintessential pre-industrial narrative of inspiration belonged to Archimedes, who made his discovery, let’s not forget, while taking a bath.) But Edison was saying that the old class distinctions no longer held true–that in the industrialized world there was as much toil associated with the life of the mind as there had once been with the travails of the body.

In the twentieth century, the professions transformed themselves accordingly: medicine turned the residency process into an ordeal of sleeplessness, the legal profession borrowed a page from the manufacturing floor and made its practitioners fill out time cards like union men. Intellectual heroics became a matter of endurance. “The pace of computation was hectic,” James Gleick writes of the Manhattan Project in “Genius,” his biography of the physicist Richard Feynman. “Feynman’s day began at 8:30 and ended fifteen hours later. Sometimes he could not leave the computing center at all. He worked through for thirty-one hours once and the next day found that an error minutes after he went to bed had stalled the whole team. The routine allowed just a few breaks.” Did Feynman’s achievements reflect a greater natural talent than his less productive forebears had? Or did he just drink a lot more coffee? Paul Hoffman, in “The Man Who Loved Only Numbers,” writes of the legendary twentieth-century mathematician Paul Erdös that “he put in nineteen-hour days, keeping himself fortified with 10 to 20 milligrams of Benzedrine or Ritalin, strong espresso and caffeine tablets. ‘A mathematician,’ Erdös was fond of saying, ‘is a machine for turning coffee into theorems.'” Once, a friend bet Erdös five hundred dollars that he could not quit amphetamines for a month. Erdös took the bet and won, but, during his time of abstinence, he found himself incapable of doing any serious work. “You’ve set mathematics back a month,” he told his friend when he collected, and immediately returned to his pills.

Erdös’s unadulterated self was less real and less familiar to him than his adulterated self, and that is a condition that holds, more or less, for the rest of society as well. Part of what it means to be human in the modern age is that we have come to construct our emotional and cognitive states not merely from the inside out–with thought and intention–but from the outside in, with chemical additives. The modern personality is, in this sense, a synthetic creation: skillfully regulated and medicated and dosed with caffeine so that we can always be awake and alert and focussed when we need to be. On a bet, no doubt, we could walk away from caffeine if we had to. But what would be the point? The lawyers wouldn’t make their billable hours. The young doctors would fall behind in their training. The physicists might still be stuck out in the New Mexico desert. We’d set the world back a month.

4.

That the modern personality is synthetic is, of course, a disquieting notion. When we talk of synthetic personality–or of constructing new selves through chemical means–we think of hard drugs, not caffeine. Timothy Leary used to make such claims about LSD, and the reason his revolution never took flight was that most of us found the concept of tuning in, turning on, and dropping out to be a bit creepy. Here was this shaman, this visionary–and yet, if his consciousness was so great, why was he so intent on altering it? More important, what exactly were we supposed to be tuning in to? We were given hints, with psychedelic colors and deep readings of “Lucy in the Sky with Diamonds,” but that was never enough. If we are to re-create ourselves, we would like to know what we will become.

Caffeine is the best and most useful of our drugs because in every one of its forms it can answer that question precisely. It is a stimulant that blocks the action of adenosine, and comes in a multitude of guises, each with a ready-made story attached, a mixture of history and superstition and whimsy which infuses the daily ritual of adenosine blocking with meaning and purpose. Put caffeine in a red can and it becomes refreshing fun. Brew it in a teapot and it becomes romantic and decorous. Extract it from little brown beans and, magically, it is hardheaded and potent. “There was a little known Russian émigré, Trotsky by name, who during World War I was in the habit of playing chess in Vienna’s Café Central every evening,” Bealer and Weinberg write, in one of the book’s many fascinating café yarns:

A typical Russian refugee, who talked too much but seemed utterly harmless, indeed, a pathetic figure in the eyes of the Viennese. One day in 1917 an official of the Austrian Foreign Ministry rushed into the minister’s room, panting and excited, and told his chief, “Your excellency . . . Your excellency . . . Revolution has broken out in Russia.” The minister, less excitable and less credulous than his official, rejected such a wild claim and retorted calmly, “Go away . . . Russia is not a land where revolutions break out. Besides, who on earth would make a revolution in Russia? Perhaps Herr Trotsky from the Café Central?”

The minister should have known better. Give a man enough coffee and he’s capable of anything.

In the early morning of July 7th, the thirty-year-old publicist Lizzie Grubman backed her father’s brand-new Mercedes-Benz S.U.V. into a crowd outside a Southampton night club, injuring sixteen people. Shortly before the incident, Grubman had had a loud argument with the night club’s bouncers, one of whom wanted her to move her car from the fire lane. She allegedly told him, “Fuck you, white trash,” and then hit the accelerator hard. To the tabloids, the event has been irresistible–Grubman’s father is a famous entertainment lawyer; she is a bottle blonde; she represents Britney Spears–and for the past two weeks the city has been full of gleeful philosophizing about entitlement, arrogance, and the perils of spoiled rich kids getting angry behind the wheel of Daddy’s S.U.V. But what, exactly, happened in the Mercedes that night? As it turns out, Grubman’s argument that it was an accident has foundation. She appears to have been the victim of a well-understood problem known as “unintended acceleration,” or “pedal error.”

This does not mean, as some have surmised, that Grubman put the car in reverse, thinking that it was in drive. If that were the case, why didn’t she just hit the brake as she sped backward across the parking lot? Pedal error, by contrast, occurs when a driver has her right foot on what she thinks is the brake but is actually the accelerator. When the car begins to move, the driver responds by pressing down on the pedal further still, in an attempt to stop the car. But that, of course, only makes the problem worse.

Why do people make pedal errors? In a comprehensive analysis of the phenomenon that appeared in the June, 1989, issue of the journal Human Factors, the U.C.L.A. psychologist Richard A. Schmidt argues that any number of relatively innocent factors can cause a “low-level variability in the foot trajectory toward the brake”–meaning that a driver aims for the brake and misses. In “Unintended Acceleration: A Review of Human Factors Contributions,” Schmidt writes, “If the head is turned to the left, as it might be while looking in the left side mirror, reaching for the seatbelt, or other, similar maneuvers in the initiation of the driving sequence, the result could be systematic biases to the right in the perceived position of the brake pedal. This bias could be as large as 6 cm in a driver of average height if the angular bias were 5 deg.” That bias is more than enough to cause pedal error.
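Schmidt’s two figures are mutually consistent. As a quick check (treating the foot’s path as a simple arc is our simplification, not Schmidt’s; the implied reach is inferred, not given in the paper):

```python
import math

# A lateral pedal error d produced by an angular bias theta implies a
# swing radius r = d / sin(theta).
theta = math.radians(5)  # the 5-degree bias Schmidt cites
d_cm = 6.0               # the 6-cm lateral error Schmidt cites
print(round(d_cm / math.sin(theta), 1))  # ~68.8 cm: a plausible hip-to-pedal reach
```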

It is worth noting that there are five factors that have been associated with an increased probability of unintentional acceleration. It happens more frequently to older people, to women, to short people, to people who are unfamiliar with the cars they are driving, and to people who have just got into a car and started it up. Grubman, who is on the short side and had reportedly driven her father’s car only twice before, qualifies on four of those five grounds.

Here, then, is a perfectly plausible explanation for what happened that night. Grubman gets into the car, puts it in reverse, and then twists around to see if anyone is behind her, her foot slipping off the pedal as she does so. As a result, the trajectory of her right foot is thrown off by a few inches, and when she puts her foot back down, what she thinks is the brake is actually the accelerator. The car leaps backward. She panics. She presses harder on the accelerator, trying to stop the car. But her action makes the car speed up. Grubman was parked approximately fifty feet from the night club, and if we assume that she was accelerating at a rate of .4 g’s (not unlikely, given her 342-horsepower vehicle), she would have covered that fifty feet in roughly 2.8 seconds. Wade Bartlett, an expert in mechanical forensics who has studied more than three dozen cases of unintended acceleration, says, “When faced with a completely new situation, it would not be unusual for someone to require three seconds to figure out what’s going on and what to do about it.” In some instances, it’s been reported, drivers have mistakenly continued to press the accelerator for up to twelve seconds. Grubman’s accident is a textbook case of pedal error.
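The timing claim is easy to verify. A minimal sketch, assuming constant acceleration from rest (the 0.4 g and the fifty feet are the figures given above):

```python
import math

G_FT_S2 = 32.174  # standard gravity, in feet per second squared

def time_to_cover(distance_ft, accel_g):
    """Seconds to cover a distance from rest at constant acceleration."""
    return math.sqrt(2 * distance_ft / (accel_g * G_FT_S2))

print(round(time_to_cover(50, 0.4), 1))  # ~2.8 seconds across the parking lot
```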

Understanding pedal error may help to explain Grubman’s intent that night. Of course, nothing in the scientific literature explains why someone would park in a fire lane, swear at a bouncer, leave the scene of an accident, and dodge a Breathalyzer. For that, we have the lawyers and the New York Post.

To beat the competition, first you have to beat the drug test.

1.

At the age of twelve, Christiane Knacke-Sommer was plucked from a small town in Saxony to train with the elite SC Dynamo swim club in East Berlin. After two years of steady progress, she was given regular injections and daily doses of small baby-blue pills, which she was required to take in the presence of a trainer. Within weeks, her arms and shoulders began to thicken. She developed severe acne. Her pubic hair began to spread over her abdomen. Her libido soared out of control. Her voice turned gruff. And her performance in the pool began to improve dramatically, culminating in a bronze medal in the hundred-metre butterfly at the 1980 Moscow Olympics. But then the Wall fell and the truth emerged about those little blue pills. In a new book about the East German sports establishment, “Faust’s Gold,” Steven Ungerleider recounts the moment in 1998 when Knacke-Sommer testified in Berlin at the trial of her former coaches and doctors:

“Did defendant Gläser or defendant Binus ever tell you that the blue pills were the anabolic steroid known as Oral-Turinabol?” the prosecutor asked. “They told us they were vitamin tablets,” Christiane said, “just like they served all the girls with meals.” “Did defendant Binus ever tell you the injection he gave was Depot-Turinabol?” “Never,” Christiane said, staring at Binus until the slight, middle-aged man looked away. “He said the shots were another kind of vitamin.” “He never said he was injecting you with the male hormone testosterone?” the prosecutor persisted. “Neither he nor Herr Gläser ever mentioned Oral-Turinabol or Depot-Turinabol,” Christiane said firmly. “Did you take these drugs voluntarily?” the prosecutor asked in a kindly tone. “I was fifteen years old when the pills started,” she replied, beginning to lose her composure. “The training motto at the pool was, ‘You eat the pills, or you die.’ It was forbidden to refuse.”

As her testimony ended, Knacke-Sommer pointed at the two defendants and shouted, “They destroyed my body and my mind!” Then she rose and threw her Olympic medal to the floor.

Anabolic steroids have been used to enhance athletic performance since the early sixties, when an American physician gave the drugs to three weight lifters, who promptly jumped from mediocrity to world records. But no one ever took the use of illegal drugs quite so far as the East Germans. In a military hospital outside the former East Berlin, in 1991, investigators discovered a ten-volume archive meticulously detailing every national athletic achievement from the mid-sixties to the fall of the Berlin Wall, each entry annotated with the name of the drug and the dosage given to the athlete. An average teen-age girl naturally produces somewhere around half a milligram of testosterone a day. The East German sports authorities routinely prescribed steroids to young adolescent girls in doses of up to thirty-five milligrams a day. As the investigation progressed, former female athletes, who still had masculinized physiques and voices, came forward with tales of deformed babies, inexplicable tumors, liver dysfunction, internal bleeding, and depression. German prosecutors handed down hundreds of indictments of former coaches, doctors, and sports officials, and won numerous convictions. It was the kind of spectacle that one would have thought would shock the sporting world. Yet it didn’t. In a measure of how much the use of drugs in competitive sports has changed in the past quarter century, the trials caused barely a ripple.

Today, coaches no longer have to coerce athletes into taking drugs. Athletes take them willingly. The drugs themselves are used in smaller doses and in creative combinations, leaving few telltale physical signs, and drug testers concede that it is virtually impossible to catch all the cheaters, or even, at times, to do much more than guess when cheating is taking place. Among the athletes, meanwhile, there is growing uncertainty about what exactly is wrong with doping. When the cyclist Lance Armstrong asserted last year, after his second consecutive Tour de France victory, that he was drug-free, some doubters wondered whether he was lying, and others simply assumed he was, and wondered why he had to. The moral clarity of the East German scandal–with its coercive coaches, damaged athletes, and corrupted competitions–has given way to shades of gray. In today’s climate, the most telling moment of the East German scandal was not Knacke-Sommer’s outburst. It was when one of the system’s former top officials, at the beginning of his trial, shrugged and quoted Brecht: “Competitive sport begins where healthy sport ends.”

2.

Perhaps the best example of how murky the drug issue has become is the case of Ben Johnson, the Canadian sprinter who won the one hundred metres at the Seoul Olympics, in 1988. Johnson set a new world record, then failed a post-race drug test and was promptly stripped of his gold medal and suspended from international competition. No athlete of Johnson’s calibre has ever been exposed so dramatically, but his disgrace was not quite the victory for clean competition that it appeared to be.

Johnson was part of a group of world-class sprinters based in Toronto in the nineteen-seventies and eighties and trained by a brilliant coach named Charlie Francis. Francis was driven and ambitious, eager to give his athletes the same opportunities as their competitors from the United States and Eastern Europe, and in 1979 he began discussing steroids with one of his prize sprinters, Angella Taylor. Francis felt that Taylor had the potential that year to run the two hundred metres in close to 22.90 seconds, a time that would put her within striking distance of the two best sprinters in the world, Evelyn Ashford, of the United States, and Marita Koch, of East Germany. But, seemingly out of nowhere, Ashford suddenly improved her two-hundred-metre time by six-tenths of a second. Then Koch ran what Francis calls, in his autobiography, “Speed Trap,” a “science fictional” 21.71. In the sprints, individual improvements are usually measured in hundredths of a second; athletes, once they have reached their early twenties, typically improve their performance in small, steady increments, as experience and strength increase. But these were quantum leaps, and to Francis the explanation was obvious. “Angella wasn’t losing ground because of a talent gap,” he writes; “she was losing because of a drug gap, and it was widening by the day.” (In the case of Koch, at least, he was right. In the East German archives, investigators found a letter from Koch to the director of research at V.E.B. Jenapharm, an East German pharmaceutical house, in which she complained, “My drugs were not as potent as the ones that were given to my opponent Bärbel Eckert, who kept beating me.” In East Germany, Ungerleider writes, this particular complaint was known as “dope-envy.”) Later, Francis says, he was confronted at a track meet by Brian Oldfield, then one of the world’s best shot-putters:

“When are you going to start getting serious?” he demanded. “When are you going to tell your guys the facts of life?” I asked him how he could tell they weren’t already using steroids. He replied that the muscle density just wasn’t there. “Your guys will never be able to compete against the Americans–their careers will be over,” he persisted.

Among world-class athletes, the lure of steroids is not that they magically transform performance–no drug can do that–but that they make it possible to train harder. An aging baseball star, for instance, may realize that what he needs to hit a lot more home runs is to double the intensity of his weight training. Ordinarily, this might actually hurt his performance. “When you’re under that kind of physical stress,” Charles Yesalis, an epidemiologist at Pennsylvania State University, says, “your body releases corticosteroids, and when your body starts making those hormones at inappropriate times it blocks testosterone. And instead of being anabolic–instead of building muscle–corticosteroids are catabolic. They break down muscle. That’s clearly something an athlete doesn’t want.” Taking steroids counteracts the impact of corticosteroids and helps the body bounce back faster. If that home-run hitter was taking testosterone or an anabolic steroid, he’d have a better chance of handling the extra weight training.

It was this extra training that Francis and his sprinters felt they needed to reach the top. Angella Taylor was the first to start taking steroids. Ben Johnson followed in 1981, when he was twenty years old, beginning with a daily dose of five milligrams of the steroid Dianabol, in three-week on-and-off cycles. Over time, that protocol grew more complex. In 1984, Taylor visited a Los Angeles doctor, Robert Kerr, who was famous for his willingness to provide athletes with pharmacological assistance. He suggested that the Canadians use human growth hormone, the pituitary extract that promotes lean muscle and that had become, in Francis’s words, “the rage in elite track circles.” Kerr also recommended three additional substances, all of which were believed to promote the body’s production of growth hormone: the amino acids arginine and ornithine and the dopamine precursor L-dopa. “I would later learn,” Francis writes, “that one group of American women was using three times as much growth hormone as Kerr had suggested, in addition to 15 milligrams per day of Dianabol, another 15 milligrams of Anavar, large amounts of testosterone, and thyroxine, the synthetic thyroid hormone used by athletes to speed the metabolism and keep people lean.” But the Canadians stuck to their initial regimen, making only a few changes: Vitamin B12, a non-steroidal muscle builder called inosine, and occasional shots of testosterone were added; Dianabol was dropped in favor of a newer steroid called Furazabol; and L-dopa, which turned out to cause stiffness, was replaced with the blood-pressure drug Dixarit.

Going into the Seoul Olympics, then, Johnson was a walking pharmacy. But–and this is the great irony of his case–none of the drugs that were part of his formal pharmaceutical protocol resulted in his failed drug test. He had already reaped the benefit of the steroids in intense workouts leading up to the games, and had stopped Furazabol and testosterone long enough in advance that all traces of both supplements should have disappeared from his system by the time of his race–a process he sped up by taking the diuretic Moduret. Human growth hormone wasn’t–and still isn’t–detectable by a drug test, and arginine, ornithine, and Dixarit were legal. Johnson should have been clean. The most striking (and unintentionally hilarious) moment in “Speed Trap” comes when Francis describes his bewilderment at being informed that his star runner had failed a drug test–for the anabolic steroid stanozolol. “I was floored,” Francis writes:

To my knowledge, Ben had never injected stanozolol. He occasionally used Winstrol, an oral version of the drug, but for no more than a few days at a time, since it tended to make him stiff. He’d always discontinued the tablets at least six weeks before a meet, well beyond the accepted “clearance time.” . . . After seven years of using steroids, Ben knew what he was doing. It was inconceivable to me that he might take stanozolol on his own and jeopardize the most important race of his life.

Francis suggests that Johnson’s urine sample might have been deliberately contaminated by a rival, a charge that is less preposterous than it sounds. Documents from the East German archive show, for example, that in international competitions security was so lax that urine samples were sometimes switched, stolen from a “clean” athlete, or simply “borrowed” from a noncompetitor. “The pure urine would either be infused by a catheter into the competitor’s bladder (a rather painful procedure) or be held in condoms until it was time to give a specimen to the drug control lab,” Ungerleider writes. (The top East German sports official Manfred Höppner was once in charge of urine samples at an international weight-lifting competition. When he realized that several of his weight lifters would not pass the test, he broke open the seal of their specimens, poured out the contents, and, Ungerleider notes, “took a nice long leak of pure urine into them.”) It is also possible that Johnson’s test was simply botched. Two years later, in 1990, track and field’s governing body claimed that Butch Reynolds, the world’s four-hundred-metre record holder, had tested positive for the steroid nandrolone, and suspended him for two years. It did so despite the fact that half of his urine-sample data had been misplaced, that the testing equipment had failed during analysis of the other half of his sample, and that the lab technician who did the test identified Sample H6 as positive–and Reynolds’s sample was numbered H5. Reynolds lost the prime years of his career.

We may never know what really happened with Johnson’s assay, and perhaps it doesn’t much matter. He was a doper. But clearly this was something less than a victory for drug enforcement. Here was a man using human growth hormone, Dixarit, inosine, testosterone, and Furazabol, and the only substance that the testers could find in him was stanozolol–which may have been the only illegal drug that he hadn’t used. Nor is it encouraging that Johnson was the only prominent athlete caught for drug use in Seoul. It is hard to believe, for instance, that the sprinter Florence Griffith Joyner, the star of the Seoul games, was clean. Before 1988, her best times in the hundred metres and the two hundred metres were, respectively, 10.96 and 21.96. In 1988, a suddenly huskier FloJo ran 10.49 and 21.34, times that no runner since has even come close to equalling. In other words, at the age of twenty-eight–when most athletes are beginning their decline–Griffith Joyner transformed herself in one season from a career-long better-than-average sprinter to the fastest female sprinter in history. Of course, FloJo never failed a drug test. But what does that prove? FloJo went on to make a fortune as a corporate spokeswoman. Johnson’s suspension cost him an estimated twenty-five million dollars in lost endorsements. The real lesson of the Seoul Olympics may simply have been that Johnson was a very unlucky man.

3.

The basic problem with drug testing is that testers are always one step behind athletes. It can take years for sports authorities to figure out what drugs athletes are using, and even longer to devise effective means of detecting them. Anabolic steroids weren’t banned by the International Olympic Committee until 1975, almost a decade after the East Germans started using them. In 1996, at the Atlanta Olympics, five athletes tested positive for what we now know to be the drug Bromantan, but they weren’t suspended, because no one knew at the time what Bromantan was. (It turned out to be a Russian-made psycho-stimulant.) Human growth hormone, meanwhile, has been around for twenty years, and testers still haven’t figured out how to detect it.

Perhaps the best example of the difficulties of drug testing is testosterone. It has been used by athletes to enhance performance since the fifties, and the International Olympic Committee announced that it would crack down on testosterone supplements in the early nineteen-eighties. This didn’t mean that the I.O.C. was going to test for testosterone directly, though, because the testosterone that athletes were getting from a needle or a pill was largely indistinguishable from the testosterone they produce naturally. What was proposed, instead, was to compare the level of testosterone in urine with the level of another hormone, epitestosterone, to determine what’s called the T/E ratio. For most people, under normal circumstances, that ratio is 1:1, and so the theory was that if testers found a lot more testosterone than epitestosterone it would be a sign that the athlete was cheating. Since a small number of people have naturally high levels of testosterone, the I.O.C. avoided the risk of falsely accusing anyone by setting the legal limit at 6:1.

Did this stop testosterone use? Not at all. Through much of the eighties and nineties, most sports organizations conducted their drug testing only at major competitions. Athletes taking testosterone would simply do what Johnson did, and taper off their use in the days or weeks prior to those events. So sports authorities began randomly showing up at athletes’ houses or training sites and demanding urine samples. To this, dopers responded by taking extra doses of epitestosterone with their testosterone, so their T/E would remain in balance. Testers, in turn, began treating elevated epitestosterone levels as suspicious, too. But that still left athletes with the claim that they were among the few with naturally elevated testosterone. Testers, then, were forced to take multiple urine samples, measuring an athlete’s T/E ratio over several weeks. Someone with a naturally elevated T/E ratio will have fairly consistent ratios from week to week. Someone who is doping will have telltale spikes–times immediately after taking shots or pills when the level of the hormone in his blood soars. Did all these precautions mean that cheating stopped? Of course not. Athletes have now switched from injection to transdermal testosterone patches, which administer a continuous low-level dose of the hormone, smoothing over the old, incriminating spikes. The patch has another advantage: once you take it off, your testosterone level will drop rapidly, returning to normal, depending on the dose and the person, in as little as an hour. “It’s the peaks that get you caught,” says Don Catlin, who runs the U.C.L.A. Olympic Analytical Laboratory. “If you took a pill this morning and an unannounced test comes this afternoon, you’d better have a bottle of epitestosterone handy. But, if you are on the patch and you know your own pharmacokinetics, all you have to do is pull it off.” In other words, if you know how long it takes for you to get back under the legal limit and successfully stall the test for that period, you can probably pass the test. And if you don’t want to take that chance, you can just keep your testosterone below 6:1, which, by the way, still provides a whopping performance benefit. “The bottom line is that only careless and stupid people ever get caught in drug tests,” Charles Yesalis says. “The élite athletes can hire top medical and scientific people to make sure nothing bad happens, and you can’t catch them.”
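The longitudinal screening described here amounts to a simple rule: flag any sample over the legal limit, or any sample that spikes well above the athlete’s own baseline. A minimal sketch of that logic (the spike threshold and the sample numbers are illustrative assumptions, not the I.O.C.’s actual protocol):

```python
LEGAL_LIMIT = 6.0   # the 6:1 T/E ceiling mentioned above
SPIKE_FACTOR = 2.0  # assumed: flag any week at double the athlete's own baseline

def flag_te_ratios(weekly_ratios):
    """Return indices of weekly T/E ratios that look suspicious."""
    baseline = sum(weekly_ratios) / len(weekly_ratios)
    return [i for i, r in enumerate(weekly_ratios)
            if r > LEGAL_LIMIT or r > SPIKE_FACTOR * baseline]

print(flag_te_ratios([1.1, 0.9, 1.0, 1.2]))  # []  -- naturally steady ratios
print(flag_te_ratios([1.0, 1.1, 5.8, 1.0]))  # [2] -- a telltale spike
```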

4.

But here is where the doping issue starts to get complicated, for there’s a case to be made that what looks like failure really isn’t–that regulating aggressive doping, the way the 6:1 standard does, is a better idea than trying to prohibit drug use. Take the example of erythropoietin, or EPO. EPO is a hormone released by your kidneys that stimulates the production of red blood cells, the body’s oxygen carriers. A man-made version of the hormone is given to those with suppressed red-blood-cell counts, like patients undergoing kidney dialysis or chemotherapy. But over the past decade it has also become the drug of choice for endurance athletes, because its ability to increase the amount of oxygen that the blood can carry to the muscles has the effect of postponing fatigue. “The studies that have attempted to estimate EPO’s importance say it’s worth about a three-, four-, or five-per-cent advantage, which is huge,” Catlin says. EPO also has the advantage of being a copy of a naturally occurring substance, so it’s very hard to tell if someone has been injecting it. (A cynic would say that this had something to do with the spate of remarkable times in endurance races during that period.)

So how should we test for EPO? One approach, which was used in the late nineties by the International Cycling Union, is a test much like the T/E ratio for testosterone. The percentage of your total blood volume which is taken up by red blood cells is known as your hematocrit. The average adult male has a hematocrit of between thirty-eight and forty-four per cent. Since 1995, the cycling authorities have declared that any rider who had a hematocrit above fifty per cent would be suspended–a deliberately generous standard (like the T/E ratio) meant to avoid falsely accusing someone with a naturally high hematocrit. The hematocrit rule also had the benefit of protecting athletes’ health. If you take too much EPO, the profusion of red blood cells makes the blood sluggish and heavy, placing enormous stress on the heart. In the late eighties, at least fifteen professional cyclists died from suspected EPO overdoses. A fifty-per-cent hematocrit limit is below the point at which EPO becomes dangerous.

But, like the T/E standard, the hematocrit standard had a perverse effect: it set the legal limit so high that it actually encouraged cyclists to titrate their drug use up to the legal limit. After all, if you are riding for three weeks through the mountains of France and Spain, there’s a big difference between a hematocrit of forty-four per cent and one of 49.9 per cent. This is why Lance Armstrong faced so many hostile questions about EPO from the European press–and why eyebrows were raised at his five-year relationship with an Italian doctor who was thought to be an expert on performance-enhancing drugs. If Armstrong had, say, a hematocrit of forty-four per cent, the thinking went, why wouldn’t he have raised it to 49.9, particularly since the rules (at least, in 2000) implicitly allowed him to do so? And, if he didn’t, how on earth did he win?

The problems with hematocrit testing have inspired a second strategy, which was used on a limited basis at the Sydney Olympics and this summer’s World Track and Field Championships. This test measures a number of physiological markers of EPO use, including the presence of reticulocytes, which are the immature red blood cells produced in large numbers by EPO injections. If you have a lot more reticulocytes than normal, then there’s a good chance you’ve used EPO recently. The blood work is followed by a confirmatory urinalysis. The test has its weaknesses. It’s really only useful in picking up EPO used in the previous week or so, whereas the benefits of taking the substance persist for a month. But there’s no question that, if random EPO testing were done aggressively in the weeks leading to a major competition, it would substantially reduce cheating.

On paper, this second strategy sounds like a better system. But there’s a perverse effect here as well. By discouraging EPO use, the test is simply pushing savvy athletes toward synthetic compounds called hemoglobin-based oxygen carriers, which serve much the same purpose as EPO but for which there is no test at the moment. “I recently read off a list of these new blood-oxygen expanders to a group of toxicologists, and none had heard of any of them,” Yesalis says. “That’s how fast things are moving.” The attempt to prevent EPO use actually promotes inequity: it gives an enormous advantage to those athletes with the means to keep up with the next wave of pharmacology. By contrast, the hematocrit limit, though more permissive, creates a kind of pharmaceutical parity. The same is true of the T/E limit. At the 1986 world swimming championships, the East German Kristin Otto set a world record in the hundred-metre freestyle, with an extraordinary display of power in the final leg of the race. According to East German records, on the day of her race Otto had a T/E ratio of 18:1. Testing can prevent that kind of aggressive doping; it can insure no one goes above 6:1. That is a less than perfect outcome, of course, but international sports is not a perfect world. It is a place where Ben Johnson is disgraced and FloJo runs free, where Butch Reynolds is barred for two years and East German coaches pee into cups–and where athletes without access to the cutting edge of medicine are condemned to second place. Since drug testers cannot protect the purity of sport, the very least they can do is to make sure that no athlete can cheat more than any other.

5.

The first man to break the four-minute mile was the Englishman Roger Bannister, on a windswept cinder track at Oxford, nearly fifty years ago. Bannister is in his early seventies now, and one day last summer he returned to the site of his historic race along with the current world-record holder in the mile, Morocco’s Hicham El Guerrouj. The two men chatted and compared notes and posed for photographs. “I feel as if I am looking at my mirror image,” Bannister said, indicating El Guerrouj’s similarly tall, high-waisted frame. It was a polite gesture, an attempt to suggest that he and El Guerrouj were part of the same athletic lineage. But, as both men surely knew, nothing could be further from the truth.

Bannister was a medical student when he broke the four-minute mile in 1954. He did not have time to train every day, and when he did he squeezed in his running on his hour-long midday break at the hospital. He had no coach or trainer or entourage, only a group of running partners who called themselves “the Paddington lunch time club.” In a typical workout, they might run ten consecutive quarter miles–ten laps–with perhaps two minutes of recovery between each repetition, then gobble down lunch and hurry back to work. Today, that training session would be considered barely adequate for a high-school miler. A month or so before his historic mile, Bannister took a few days off to go hiking in Scotland. Five days before he broke the four-minute barrier, he stopped running entirely, in order to rest. The day before the race, he slipped and fell on his hip while working in the hospital. Then he ran the most famous race in the history of track and field. Bannister was what runners admiringly call an “animal,” a natural.

El Guerrouj, by contrast, trains five hours a day, in two two-and-a-half-hour sessions. He probably has a team of half a dozen people working with him: at the very least, a masseur, a doctor, a coach, an agent, and a nutritionist. He is not in medical school. He does not go hiking in rocky terrain before major track meets. When Bannister told him, last summer, how he had prepared for his four-minute mile, El Guerrouj was stunned. “For me, a rest day is perhaps when I train in the morning and spend the afternoon at the cinema,” he said. El Guerrouj certainly has more than his share of natural ability, but his achievements are a reflection of much more than that: of the fact that he is better coached and better prepared than his opponents, that he trains harder and more intelligently, that he has found a way to stay injury free, and that he can recover so quickly from one day of five-hour workouts that he can follow it, the next day, with another five-hour workout.

Of these two paradigms, we have always been much more comfortable with the first: we want the relation between talent and achievement to be transparent, and we worry about the way ability is now so aggressively managed and augmented. Steroids bother us because they violate the honesty of effort: they permit an athlete to train too hard, beyond what seems reasonable. EPO fails the same test. For years, athletes underwent high-altitude training sessions, which had the same effect as EPO–promoting the manufacture of additional red blood cells. This was considered acceptable, while EPO is not, because we like to distinguish between those advantages which are natural or earned and those which come out of a vial.

Even as we assert this distinction on the playing field, though, we defy it in our own lives. We have come to prefer a world where the distractible take Ritalin, the depressed take Prozac, and the unattractive get cosmetic surgery to a world ruled, arbitrarily, by those fortunate few who were born focussed, happy, and beautiful. Cosmetic surgery is not “earned” beauty, but then natural beauty isn’t earned, either. One of the principal contributions of the late twentieth century was the moral deregulation of social competition–the insistence that advantages derived from artificial and extraordinary intervention are no less legitimate than the advantages of nature. All that athletes want, for better or worse, is the chance to play by those same rules.

One of the most striking aspects of the automobile industry is the precision with which it makes calculations of life and death. The head restraint on the back of a car seat has been determined to reduce an occupant’s risk of dying in an accident by 0.36 per cent. The steel beams in a car’s side doors cut fatalities by 1.7 per cent. The use of a seat belt in a right-front collision reduces the chances of a front-seat passenger’s being killed through ejection by fourteen per cent, with a margin of error of plus or minus one per cent. When auto engineers discuss these numbers, they use detailed charts and draw curves on quadrille paper, understanding that it is through the exact and dispassionate measurement of fatality effects and the resulting technical tinkering that human lives are saved. They could wax philosophical about the sanctity of life, but what would that accomplish? Sometimes progress in matters of social policy occurs when the moralizers step back and the tinkerers step forward. In the face of the right-to-life debate in the country and show trials like the Bush Administration’s recent handling of the stem-cell controversy, it’s worth wondering what would happen if those involved in that debate were to learn the same lesson.

Suppose, for example, that, instead of focussing on the legality of abortion, we focussed on the number of abortions in this country. That’s the kind of thing that tinkerers do: they focus not on the formal status of social phenomena but on their prevalence. And the prevalence of abortion in this country is striking. In 1995, for example, American adolescents terminated pregnancies at a rate roughly a third greater than their Canadian, English, and Swedish counterparts, around triple that of French teen-agers, and six times that of Dutch and Italian adolescents.

This is not because abortions are more readily available in America. The European countries with the lowest abortion rates are almost all places where abortions are easier to get than they are in the United States. And it’s not because pregnant European teen-agers are more likely to carry a child to term than Americans. (If anything, the opposite is true.) Nor is it because American teen-agers have more sex than Europeans: sexual behavior, in the two places, appears to be much the same. American teen-agers have more abortions because they get pregnant more than anyone else: they simply don’t use enough birth control.

Bringing the numbers down is by no means an insurmountable problem. Many Western European countries managed to reduce birth rates among teen-agers by more than seventy per cent between 1970 and 1995, and reproductive-health specialists say that there’s no reason we couldn’t follow suit. Since the early nineteen-seventies, for instance, the federal Title X program has funded thousands of family-planning clinics around the country, and in the past twenty years the program has been responsible for preventing an estimated nine million abortions. It could easily be expanded. There is also solid evidence that a comprehensive, national sex-education curriculum could help to reduce unintended pregnancies still further. If these steps succeeded in bringing our teen-age-pregnancy rates into line with those in Canada and England, the number of abortions in this country could drop by about five hundred thousand a year. For those who believe that a fetus is a human being, this is like saying that if we could find a few hundred million dollars, and face the fact that, yes, teen-agers have sex, we could save the equivalent of the population of Arizona within a decade.

But this is not, unfortunately, the way things are viewed in Washington. Since the eighties, Title X has been under constant attack. Taking inflation into account, its level of funding is now about sixty per cent lower than it was twenty years ago, and the Bush Administration’s budget appropriation does little to correct that shortfall. As for sex education, the President’s stated preference is that a curriculum instructing teen-agers to abstain from sex be given parity with forms of sex education that mention the option of contraception. The chief distinguishing feature of abstinence-only programs is that there’s no solid evidence that they do any good. The right’s squeamishness about sex has turned America into the abortion capital of the West.

But, then, this is the same movement that considered Ronald Reagan to be an ally and Bill Clinton a foe. And what does the record actually show? In the eight years of President Reagan’s Administration, there was an average of 1.6 million abortions a year; by the end of President Clinton’s first term, when the White House was much more favorably disposed toward the kinds of policies that are now anathema in Washington, that annual figure had dropped by more than two hundred thousand. A tinkerer would look at those numbers and wonder whether we need a new definition of “pro-life.”

How far can airline safety go?

1.

On November 24, 1971, a man in a dark suit, white shirt, and sunglasses bought a ticket in the name of Dan Cooper on the 2:50 P.M. Northwest Orient flight from Portland to Seattle. Once aboard the plane, he passed a note to a flight attendant. He was carrying a bomb, he said, and he wanted two hundred thousand dollars, four parachutes, and “no funny stuff.” In Seattle, the passengers and flight attendants were allowed to leave, and the F.B.I. handed over the parachutes and the money in used twenty-dollar bills. Cooper then told the pilot to fly slowly at ten thousand feet in the direction of Nevada, and not long after takeoff, somewhere over southwest Washington, he gathered up the ransom, lowered the plane’s back stairs, and parachuted into the night.

In the aftermath of Cooper’s leap, “para-jacking,” as it was known, became an epidemic in American skies. Of the thirty-one hijackings in the United States the following year, nineteen were attempts at Cooper-style extortion, and in fifteen of those cases the hijackers demanded parachutes so that they, too, could leap to freedom. It was a crime wave unlike any America had seen, and in response Boeing installed a special latch on its 727 model which prevented the tail stairs from being lowered in flight. The latch was known as the Cooper Vane, and it seemed, at the time, to be an effective response to the reign of terror in the skies. Of course, it was not. The Cooper Vane just forced hijackers to come up with ideas other than parachuting out of planes.

This is the great paradox of law enforcement. The better we are at preventing and solving the crimes before us, the more audacious criminals become. Put alarms and improved locks on cars, and criminals turn to the more dangerous sport of carjacking. Put guards and bulletproof screens in banks, and bank robbery gets taken over by high-tech hackers. In the face of resistance, crime falls in frequency but rises in severity, and few events better illustrate this tradeoff than the hijackings of September 11th. The way in which those four planes were commandeered that Tuesday did not simply reflect a failure of our security measures; it reflected their success. When you get very good at cracking down on ordinary hijacking — when you lock the stairs at the back of the aircraft with a Cooper Vane — what you are left with is extraordinary hijacking.

2.

The first serious push for airport security began in late 1972, in the wake of a bizarre hijacking of a DC-9 flight out of Birmingham, Alabama. A group of three men — one an escaped convict and two awaiting trial for rape — demanded a ransom of ten million dollars and had the pilot circle the Oak Ridge, Tennessee, nuclear facility for five hours, threatening to crash the plane if their demands were not met. Until that point, security at airports had been minimal, but, as the director of the Federal Aviation Administration said at the time, “The Oak Ridge odyssey has cleared the air.” In December of that year, the airlines were given sixty days to post armed security officers at passenger-boarding checkpoints. On January 5, 1973, all passengers and all carry-on luggage were required by law to be screened, and X-ray machines and metal detectors began to be installed in airports.

For a time, the number of hijackings dropped significantly. But it soon became clear that the battle to make flying safer was only beginning. In the 1985 hijacking of TWA Flight 847 out of Athens — which lasted seventeen days — terrorists bypassed the X-ray machines and the metal detectors by using members of the cleaning staff to stash guns and grenades in a washroom of the plane. In response, the airlines started to require background checks and accreditation of ground crews. In 1986, El Al security officers at London’s Heathrow Airport found ten pounds of high explosives in the luggage of an unwitting and pregnant Irish girl, which had been placed there by her Palestinian boyfriend. Now all passengers are asked if they packed their bags themselves. In a string of bombings in the mid-eighties, terrorists began checking explosives-filled bags onto planes without boarding the planes themselves. Airlines responded by introducing “bag matching” on international flights — stipulating that no luggage can be loaded on a plane unless its owner is on board as well. As an additional safety measure, the airlines started X-raying and searching checked bags for explosives. But in the 1988 bombing of Pan Am Flight 103 over Lockerbie, Scotland, terrorists beat that system by hiding plastic explosives inside a radio. As a result, the airlines have now largely switched to using CT scanners, a variant of the kind used in medical care, which take a three-dimensional picture of the interior of every piece of luggage and screen it with pattern-recognition software. The days when someone could stroll onto a plane with a bag full of explosives are long gone.

3.

These are the security obstacles that confront terrorists planning an attack on an airline. They can’t bomb an international flight with a checked bag, because they know that there is a good chance the bag will be intercepted. They can’t check the bag and run, because the bomb will never get on board. And they can’t hijack the plane with a gun, because there is no sure way of getting that weapon on board. The contemporary hijacker, in other words, must either be capable of devising a weapon that can get past security or be willing to go down with the plane. Most terrorists have neither the cleverness to meet the first criterion nor the audacity to meet the second, which is why the total number of hijackings has been falling for the past thirty years. During the nineties, in fact, the number of civil aviation “incidents” worldwide — hijackings, bombings, shootings, attacks, and so forth — dropped by more than seventy per cent. But this is where the law-enforcement paradox comes in: Even as the number of terrorist acts has diminished, the number of people killed in hijackings and bombings has steadily increased. And, despite all the improvements in airport security, the percentage of terrorist hijackings foiled by airport security in the years between 1987 and 1996 was at its lowest point in thirty years. Airport-security measures have simply chased out the amateurs and left the clever and the audacious. “A look at the history of attacks on commercial aviation reveals that new terrorist methods of attack have virtually never been foreseen by security authorities,” the Israeli terrorism expert Ariel Merari writes, in the recent book “Aviation Terrorism and Security.”

The security system was caught by surprise when an airliner was first hijacked for political extortion; it was unprepared when an airliner was attacked on the tarmac by a terrorist team firing automatic weapons; when terrorists, who arrived as passengers, collected their luggage from the conveyer belt, took out weapons from their suitcases, and strafed the crowd in the arrivals hall; when a parcel bomb sent by mail exploded in an airliner’s cargo hold in mid-flight; when a bomb was brought on board by an unwitting passenger. . . . The history of attacks on aviation is the chronicle of a cat-and-mouse game, where the cat is busy blocking old holes and the mouse always succeeds in finding new ones.

And no hole was bigger than the one found on September 11th.

4.

What the attackers understood was the structural weakness of the passenger-gate security checkpoint, particularly when it came to the detection of knives. Hand-luggage checkpoints use X-ray machines, which do a good job of picking out a large, dense, and predictable object like a gun. Now imagine looking at a photograph of a knife. From the side, the shape is unmistakable. But if the blade edge is directly facing the camera, what you’ll see is just a thin line. “If you stand the knife on its edge, it could be anything,” says Harry Martz, who directs the Center for Nondestructive Characterization at Lawrence Livermore National Laboratory. “It could be a steel ruler. Then you put in computers, hair dryers, pens, clothes hangers, and it makes it even more difficult to pick up the pattern.”

The challenge of detecting something like a knife blade is made harder still by the psychological demands on X-ray operators. What they are looking for — weapons — is called the “signal,” and a well-documented principle of human-factors research is that as the “signal rate” declines, detection accuracy declines as well. If there were a gun in every second bag, for instance, you could expect the signals to be detected with almost perfect accuracy: the X-ray operator would be on his toes. But guns are almost never found in bags, which means that the vigilance of the operator inevitably falters. This is a significant problem in many fields, from nuclear-plant inspection to quality control in manufacturing plants — where the job of catching defects on, say, a car becomes harder and harder as cars become better made. “I’ve studied this in people who look for cracks in the rotor disks of airplane engines,” says Colin Drury, a human-factors specialist at the University at Buffalo. “Remember the DC-10 crash at Sioux City? That was a rotor disk. Well, the probability of that kind of crack happening is incredibly small. Most inspectors won’t see one in their lifetime, so it’s very difficult to remain alert to that.” The F.A.A. periodically plants weapons in baggage to see whether they are detected. But it’s not clear what effect that kind of test has on vigilance. In the wake of the September attacks, some commentators called for increased training for X-ray security operators. Yet the problem is not just a lack of expertise; it is the paucity of signals. “Better training is only going to get you so far,” explains Douglas Harris, chairman of Anacapa Sciences, a California-based human-factors firm. “If it now takes a day to teach people the techniques they need, adding another day isn’t going to make much difference.”
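
That relationship between signal rate and vigilance can be made concrete with a toy model. Nothing below is drawn from the human-factors literature; the curve and its constants are invented purely to illustrate the qualitative point that rare signals drive detection toward chance:

```python
import math

# Toy vigilance model (invented for illustration): detection accuracy
# rises from chance (50%) toward near-certainty as signals get common.
def detection_accuracy(signal_rate: float) -> float:
    return 0.5 + 0.49 * (1 - math.exp(-40 * signal_rate))

# A gun in every second bag, one in a hundred, one in a million:
for rate in (0.5, 0.01, 0.000001):
    print(f"signal rate {rate:g}: accuracy {detection_accuracy(rate):.0%}")
# signal rate 0.5: accuracy 99%
# signal rate 0.01: accuracy 66%
# signal rate 1e-06: accuracy 50%
```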

A sophisticated terrorist wanting to smuggle knives on board, in other words, has a good shot at “gaming” the X-ray machine by packing his bags cleverly and exploiting the limitations of the operator. If he chooses, he can also beat the metal detector by concealing on his person knives made of ceramic or plastic, which wouldn’t trip the alarm. The knife strategy has its drawbacks, of course. It’s an open question how long a group of terrorists armed only with knives can hold off a cabin full of passengers. But if all they need is to make a short flight from Boston to downtown Manhattan, knives would suffice.

5.

Can we close the loopholes that led to the September 11th attack? Logistically, an all-encompassing security system is probably impossible. A new safety protocol that adds thirty seconds to the check-in time of every passenger would add more than three hours to the preparation time for a 747, assuming that there are no additional checkpoints. Reforms that further encumber the country’s already overstressed air-traffic system are hardly reforms; they are self-inflicted wounds. People have suggested that we station armed federal marshals on more flights. This could be an obstacle for some terrorists but an opportunity for others, who could overcome a marshal to gain possession of a firearm.
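
The arithmetic behind that three-hour figure is easy to check. Here is a minimal sketch; the four-hundred-passenger load and the single serial checkpoint are assumptions, not figures from the reporting:

```python
# Back-of-the-envelope check on the boarding-delay claim.
SECONDS_ADDED_PER_PASSENGER = 30
PASSENGERS_ON_747 = 400  # assumed full load for a 747

added_hours = SECONDS_ADDED_PER_PASSENGER * PASSENGERS_ON_747 / 3600
print(f"{added_hours:.1f} extra hours")
# 3.3 extra hours: consistent with "more than three hours"
# when every passenger files through a single checkpoint.
```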

What we ought to do is beef up security for a small percentage of passengers deemed to be high-risk. The airlines already have in place a screening technology of this sort, known as CAPPS — Computer-Assisted Passenger Prescreening System. When a ticket is purchased on a domestic flight in the United States, the passenger is rated according to approximately forty pieces of data. Though the parameters are classified, they appear to include the traveller’s address, credit history, and destination; whether he or she is travelling alone; whether the ticket was paid for in cash; how long before the departure it was bought; and whether it is one way. (A recent review by the Department of Justice affirmed that the criteria are not discriminatory on the basis of ethnicity.) A sixty-eight-year-old male who lives on Park Avenue, has a fifty-thousand-dollar limit on his credit card, and has flown on the Washington-New York shuttle twice a week for the past eight years, for instance, is never going to get flagged by the CAPPS system. Probably no more than a handful of people per domestic flight ever are, but those few have their checked luggage treated with the kind of scrutiny that, until this month, was reserved for international flights. Their bags are screened for explosives and held until the passengers are actually on board. It would be an easy step to use the CAPPS ratings at the gate as well. Those dubbed high-risk could have their hand luggage scrutinized by the slower but much more comprehensive CT scanner, which would make hiding knives or other weapons in hand luggage all but impossible.
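
To get a feel for how rule-based prescreening of this kind might work, consider the deliberately toy sketch below. The real CAPPS criteria are classified; every field name, weight, and threshold here is invented for illustration only:

```python
# Hypothetical CAPPS-style scorer; all rules and weights are made up.
def risk_score(passenger: dict) -> int:
    score = 0
    if passenger.get("paid_cash"):
        score += 3
    if passenger.get("one_way"):
        score += 2
    if passenger.get("days_before_departure", 30) < 2:
        score += 2
    if passenger.get("travelling_alone"):
        score += 1
    if passenger.get("frequent_flyer_years", 0) >= 5:
        score -= 4  # a long, stable travel history lowers the score
    return score

FLAG_THRESHOLD = 4  # hypothetical cutoff for extra screening

# The Park Avenue shuttle regular from the example above:
shuttle_regular = {"frequent_flyer_years": 8, "one_way": False}
print(risk_score(shuttle_regular) >= FLAG_THRESHOLD)  # False: never flagged
```

The point of the sketch is structural: a handful of additive rules and a threshold is all a system like this needs in order to sort a planeload of passengers into the ordinary and the worth-a-second-look.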

At the same time, high-risk passengers could be asked to undergo an electronic strip search known as a body scan. In a conventional X-ray, the rays pass through the body, leaving an imprint on a detector on the other side. In a body scanner, the X-rays are much weaker, penetrating clothes but not the body, so they bounce back and leave an imprint of whatever lies on the surface of the skin. A body scanner would have picked up a ceramic knife in an instant. Focussing on a smaller group of high-risk people would have the additional benefit of improving the detection accuracy of the security staff: it would raise the signal rate.

We may never know, of course, whether an expanded CAPPS system would have flagged the September 11th terrorists, but certainly those who planned the attack would have had to take that possibility seriously. The chief distinction between American and Israeli airport defense, at the moment, is that the American system focusses on technological examination of the baggage while the Israeli system focusses on personal interrogation and assessment of the passenger — which has resulted in El Al’s having an almost unblemished record against bombings and hijackings over the past twenty years. Wider use of CAPPS profiling would correct that shortcoming, and narrow still further the options available for any would-be terrorist. But we shouldn’t delude ourselves that these steps will end hijackings, any more than the Cooper Vane did thirty years ago. Better law enforcement doesn’t eliminate crime. It forces the criminals who remain to come up with something else. And, as we have just been reminded, that something else, all too frequently, is something worse.

If you are wondering what to worry about when it comes to biological weapons, you should concern yourself, first of all, with things that are easy to deliver. Biological agents are really dangerous only when they can reach lots of people, and very few bioweapons can easily do that. In 1990, members of Japan’s Aum Shinrikyo cult drove around the Parliament buildings in Tokyo in an automobile rigged to disseminate botulinum toxin. It didn’t work. The same group also tried, repeatedly, to release anthrax from a rooftop, and that didn’t work, either. It’s simply too complicated to make anthrax in the fine, “mist” form that is the most lethal. And the spores are destroyed so quickly by sunlight that any kind of mass administration of anthrax is extremely difficult.

A much scarier biological weapon would be something contagious: something a few infected people could spread, unwittingly, in ever widening and more terrifying circles. Even with a contagious agent, though, you don’t really have to worry about pathogens that are what scientists call stable–that are easy to identify and that don’t change from place to place or year to year–because those kinds of biological agents are easy to defend against. That’s why you shouldn’t worry quite so much about smallpox. Deadly as it is, smallpox is so well understood that the vaccine is readily made and extraordinarily effective, and works for decades. If we wanted to, we could all be inoculated against smallpox in a matter of years.

What you really should worry about, then, is something that is highly contagious and highly unstable, a biological agent that kills lots of people and isn’t easy to treat, that mutates so rapidly that each new bout of terror requires a brand-new vaccine. What you should worry about, in other words, is the influenza virus.

If there is an irony to America’s current frenzy over anthrax and biological warfare–the paralyzed mailrooms, the endless talk-show discussions, the hoarding of antibiotics, and the closed halls of Congress–it is that it has occurred right at the beginning of the flu season, the time each year when the democracies of the West are routinely visited by one of the most deadly of all biological agents. This year, around twenty thousand Americans will die of the flu, and if this is one of those years, like 1957 or 1968, when we experience an influenza pandemic, that number may hit fifty thousand. The victims will primarily be the very old and the very young, although there will be a significant number of otherwise healthy young adults among them, including many pregnant women. All will die horrible deaths, racked by raging fevers, infections, headaches, chills, and sweats. And the afflicted, as they suffer, will pass their illness on to others, creating a wave of sickness that will cost the country billions of dollars. Influenza “quietly kills tens of thousands of people every year,” Edwin Kilbourne, a research professor at New York Medical College and one of the country’s leading flu experts, says. “And those who don’t die are incapacitated for weeks. It mounts a silent and pervasive assault.”

That we have chosen to worry more about anthrax than about the flu is hardly surprising. The novel is always scarier than the familiar, and the flu virus, as far as we know, isn’t being sent through the mail by terrorists. But it is a strange kind of public-health policy that concerns itself more with the provenance of illness than with its consequences; and the consequences of the flu, year in, year out, dwarf everything but the most alarmist bioterror scenarios. If even a fraction of the energy and effort now being marshalled against anthrax were directed instead at the flu, we could save thousands of lives. Kilbourne estimates that at least half the deaths each year from the flu are probably preventable: vaccination rates among those most at risk under the age of fifty are a shameful twenty-three per cent, and for asthmatic children, who are also at high risk, the vaccination rate is ten per cent. And vaccination has been shown to save money: the costs of hospitalization for those who get sick far exceed the costs of inoculating everyone else. Why, under the circumstances, this country hasn’t mounted an aggressive flu-vaccination program is a question that Congress might want to consider, when it returns to its newly fumigated, anthrax-free chambers. Not all threats to health and happiness come from terrorists in faraway countries. Many are the result of what, through simple indifference, we do to ourselves.

The disposable diaper and the meaning of progress.

1.

The best way to explore the mystery of the Huggies Ultratrim disposable diaper is to unfold it and then cut it in half, widthwise, across what is known as the diaper’s chassis. At Kimberly-Clark’s Lakeview plant, in Neenah, Wisconsin, where virtually all the Huggies in the Midwest are made, there is a quality-control specialist who does this all day long, culling diapers from the production line, pinning them up against a lightboard, and carefully dismembering them with a pair of scissors. There is someone else who does a “visual cull,” randomly picking out Huggies and turning them over to check for flaws. But a surface examination tells you little. A diaper is not like a computer that makes satisfying burbling noises from time to time, hinting at great inner complexity. It feels like papery underwear wrapped around a thin roll of Cottonelle. But peel away the soft fabric on the top side of the diaper, the liner, which receives what those in the trade delicately refer to as the “insult.” You’ll find a layer of what’s called polyfilm, which is thinner than a strip of Scotch tape. This layer is one of the reasons the garment stays dry: it has pores that are large enough to let air flow in, so the diaper can breathe, but small enough to keep water from flowing out, so the diaper doesn’t leak.

Or run your hands along that liner. It feels like cloth. In fact, the people at Kimberly-Clark make the liner out of a special form of plastic, a polyresin. But they don’t melt the plastic into a sheet, as one would for a plastic bag. They spin the resin into individual fibres, and then use the fibres to create a kind of microscopic funnel, channelling the insult toward the long, thick rectangular pad that runs down the center of the chassis, known as the absorbent core. A typical insult arrives at a rate of seven millilitres a second, and might total seventy millilitres of fluid. The liner can clear that insult in less than twenty seconds. The core can hold three or more of those insults, with a chance of leakage in the single digits. The baby’s skin will remain almost perfectly dry, and that is critical, because prolonged contact between the baby and the insult (in particular, ammonium hydroxide, a breakdown product of urine) is what causes diaper rash. And all this will be accomplished by a throwaway garment measuring, in the newborn size, just seven by thirteen inches. This is the mystery of the modern disposable diaper: how does something so small do so much?
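
Those numbers fit together neatly, and the arithmetic is worth a quick sanity check; the sketch below uses only figures quoted in this paragraph:

```python
# Sanity check on the insult arithmetic quoted above.
insult_ml = 70            # a typical insult
arrival_rate_ml_s = 7     # rate of arrival
liner_clear_time_s = 20   # upper bound on clearance
insults_held = 3          # minimum core capacity, in insults

delivery_time_s = insult_ml / arrival_rate_ml_s  # 10.0 seconds
core_capacity_ml = insults_held * insult_ml      # 210 millilitres

print(delivery_time_s, core_capacity_ml)
# The liner's sub-20-second clearance comfortably outpaces the
# 10-second delivery, and a 210 ml floor squares with the core
# capacities quoted later in the article.
```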

2.

Thirty-seven years ago, the Silicon Valley pioneer Gordon Moore made a famous prediction. The number of transistors that engineers could fit onto a microchip, he said, would double every two years. It seemed like a foolhardy claim: it was not clear that you could keep making transistors smaller and smaller indefinitely. It also wasn’t clear that it would make sense to do so. Most of the time when we make things smaller, after all, we pay a price. A smaller car is cheaper and more fuel-efficient, and easier to park and maneuver, but it will never be as safe as a larger car. In the nineteen-fifties and sixties, the transistor radio was all the rage; it could fit inside your pocket and run on a handful of batteries. But, because it was so small, the sound was terrible, and virtually all the other mini-electronics turn out to be similarly imperfect. Tiny cell phones are hard to dial. Tiny televisions are hard to watch. In making an object smaller, we typically compromise its performance. The remarkable thing about chips, though, was that there was no drawback: if you could fit more and more transistors onto a microchip, then instead of using ten or twenty or a hundred microchips for a task you could use just one. This meant, in turn, that you could fit microchips in all kinds of places (such as cellular phones and laptops) that you couldn’t before, and, because you were using one chip and not a hundred, computer power could be had at a fraction of the price, and because chips were now everywhere and in such demand they became even cheaper to make–and so on and so on. Moore’s Law, as it came to be called, describes that rare case in which there is no trade-off between size and performance. Microchips are what might be termed a perfect innovation.
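
The doubling rule itself takes one line to state. A minimal sketch, starting from a hypothetical sixty-four-transistor chip:

```python
# Moore's Law as the article states it: a doubling every two years.
def transistors(initial: int, years: int) -> int:
    return initial * 2 ** (years // 2)

for years in (0, 10, 20, 30):
    print(years, transistors(64, years))
# 0: 64 | 10: 2,048 | 20: 65,536 | 30: 2,097,152; the compounding
# that turns dozens of transistors into millions within a career.
```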

In the past twenty years, diapers have got smaller and smaller, too. In the early eighties, they were three times bulkier than they are now, thicker and substantially wider in the crotch. But in the mid-eighties Huggies and Procter & Gamble’s Pampers were reduced in bulk by fifty per cent; in the mid-nineties they shrank by a third or so; and in the next few years they may shrink still more. It seems reasonable that there should have been a downside to this, just as there was to the shrinking of cars and radios: how could you reduce the amount of padding in a diaper and not, in some way, compromise its ability to handle an insult? Yet, as diapers got smaller, they got better, and that fact elevates the diaper above nearly all the thousands of other products on the supermarket shelf.

Kimberly-Clark’s Lakeview plant is a huge facility, just down the freeway from Green Bay. Inside, it is as immaculate as a hospital operating room. The walls and floors have been scrubbed white. The stainless-steel machinery gleams. The employees are dressed in dark-blue pants, starched light-blue button-down shirts, and tissue-paper caps. There are rows of machines in the plant, each costing more than fifteen million dollars–a dizzying combination of conveyor belts and whirling gears and chutes stretching as long as a city block and creating such a din that everyone on the factory floor wears headsets and communicates by radio. Computers monitor a million data points along the way, insuring that each of those components is precisely cut and attached according to principles and processes and materials protected, on the Huggies Ultratrim alone, by hundreds of patents. At the end of the line, the Huggies come gliding out of the machine, stacked upright, one after another in an endless row, looking like exquisitely formed slices of white bread in a toast rack. For years, because of Moore’s Law, we have considered the microchip the embodiment of the technological age. But if the diaper is also a perfect innovation, doesn’t it deserve a place beside the chip?

3.

The modern disposable diaper was invented twice, first by Victor Mills and then by Carlyle Harmon and Billy Gene Harper. Mills worked for Procter & Gamble, and he was a legend. Ivory soap used to be made in an expensive and time-consuming batch-by-batch method. Mills figured out a simpler, continuous process. Duncan Hines cake mixes used to have a problem blending flour, sugar, and shortening in a consistent mixture. Mills introduced the machines used for milling soap, which ground the ingredients much more finely than before, and the result was New, Improved Duncan Hines cake mix. Ever wonder why Pringles, unlike other potato chips, are all exactly the same shape? Because they are made like soap: the potato is ground into a slurry, then pressed, baked, and wrapped–and that was Victor Mills’s idea, too.

In 1957, Procter & Gamble bought the Charmin Paper Company, of Green Bay, Wisconsin, and Mills was told to think of new products for the paper business. Since he was a grandfather–and had always hated washing diapers–he thought of a disposable diaper. “One of the early researchers told me that among the first things they did was go out to a toy store and buy one of those Betsy Wetsy-type dolls, where you put water in the mouth and it comes out the other end,” Ed Rider, the head of the archives department at Procter & Gamble, says. “They brought it back to the lab, hooked up its legs on a treadmill to make it walk, and tested diapers on it.” The end result was Pampers, which were launched in Peoria, in 1961. The diaper had a simple rectangular shape. Its liner, which lay against the baby’s skin, was made of rayon. The outside material was plastic. In between were multiple layers of crêped tissue. The diaper was attached with pins and featured what was known as a Z fold, meaning that the edges of the inner side were pleated, to provide a better fit around the legs.

In 1968, Kimberly-Clark brought out Kimbies, which took the rectangular diaper and shaped it to more closely fit a baby’s body. In 1976, Procter & Gamble brought out Luvs, which elasticized the leg openings to prevent leakage. But diapers still adhered to the basic Millsian notion of an absorbent core made out of paper–and that was a problem. When paper gets wet, the fluid soaks right through, which makes diaper rash worse. And if you put any kind of pressure on paper–if you squeeze it, or sit on it–it will surrender some of the water it has absorbed, which creates further difficulties, because a baby, in the usual course of squirming and crawling and walking, might place as much as five kilopascals of pressure on the absorbent core of a diaper. Diaper-makers tried to address this shortcoming by moving from crêped tissue to what they called fluff, which was basically finely shredded cellulose. Then they began to compensate for paper’s failing by adding more and more of it, until diapers became huge. But they now had Moore’s Law in reverse: in order to get better, they had to get bigger–and bigger still wasn’t very good.

Carlyle Harmon worked for Johnson & Johnson and Billy Gene Harper worked for Dow Chemical, and they had a solution. In 1966, each filed separate but virtually identical patent applications, proposing that the best way to solve the diaper puzzle was with a peculiar polymer that came in the form of little pepperlike flakes and had the remarkable ability to absorb up to three hundred times its weight in water.

In the Dow patent, Harper and his team described how they sprinkled two grams of the superabsorbent polymer between two twenty-inch-square sheets of nylon broadcloth, and then quilted the nylon layers together. The makeshift diaper was “thereafter put into use in personal management of a baby of approximately 6 months age.” After four hours, the diaper was removed. It now weighed a hundred and twenty grams, meaning the flakes had soaked up sixty times their weight in urine.
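
The patent’s arithmetic checks out, on the assumption (not stated in the excerpt) that the dry diaper’s own weight was negligible next to the absorbed fluid:

```python
# Verifying the Dow experiment's "sixty times their weight" claim.
polymer_g = 2
wet_weight_g = 120  # the diaper after four hours of use

absorbed_g = wet_weight_g - polymer_g  # ignoring the dry cloth's weight
print(absorbed_g / polymer_g)  # 59.0, i.e. roughly sixty times
```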

Harper and Harmon argued that it was quite unnecessary to solve the paper problem by stuffing the core of the diaper with thicker and thicker rolls of shredded pulp. Just a handful of superabsorbent polymer would do the job. Thus was the modern diaper born. Since the mid-eighties, Kimberly-Clark and Procter & Gamble have made diapers the Harper and Harmon way, pulling out paper and replacing it with superabsorbent polymer. The old, paper-filled diaper could hold, at most, two hundred and seventy-five millilitres of fluid, or a little more than a cup. Today, a diaper full of superabsorbent polymer can handle as much as five hundred millilitres, almost twice that. The chief characteristic of the Mills diaper was its simplicity: the insult fell directly into the core. But the presence of the polymer has made the diaper far more complex. The polymer takes longer than paper to fully absorb an insult, for instance. So another component was added, the acquisition layer, between the liner and the core. The acquisition layer acts like blotting paper, holding the insult while the core slowly does its work, and distributing the fluid over its full length.

Diaper researchers sometimes perform what is called a re-wet test, where they pour a hundred millilitres of fluid onto the surface of a diaper and then apply a piece of filter paper to the diaper liner with five kilopascals of pressure–the average load a baby would apply to a diaper during ordinary use. In a contemporary superabsorbent diaper, like a Huggies or a Pampers, the filter paper will come away untouched after one insult. After two insults, there might be 0.1 millilitres of fluid on the paper. After three insults, the diaper will surrender, at most, only two millilitres of moisture–which is to say that, with the aid of superabsorbents, a pair of Huggies or Pampers can effortlessly hold, even under pressure, a baby’s entire night’s work.

The heir to the legacy of Billy Gene Harper at Dow Chemical is Fredric Buchholz, who works in Midland, Michigan, a small town two hours northwest of Detroit, where Dow has its headquarters. His laboratory is in the middle of the sprawling chemical works, a mile or two away from corporate headquarters, in a low, unassuming brick building. “We still don’t understand perfectly how these polymers work,” Buchholz said on a recent fall afternoon. What we do know, he said, is that superabsorbent polymers appear, on a microscopic level, to be like a tightly bundled fisherman’s net. In the presence of water, that net doesn’t break apart into thousands of pieces and dissolve, like sugar. Rather, it just unravels, the way a net would open up if you shook it out, and as it does the water gets stuck in the webbing. That ability to hold huge amounts of water, he said, could make superabsorbent polymers useful in fire fighting or irrigation, because slightly gelled water is more likely to stay where it’s needed. There are superabsorbents mixed in with the sealant on the walls of the Chunnel between England and France, so if water leaks in, the polymer will absorb the water and plug the hole.

Right now, one of the major challenges facing diaper technology, Buchholz said, is that urine is salty, and salt impairs the unravelling of the netting: superabsorbents can handle only a tenth as much salt water as fresh water. “One idea is to remove the salt from urine. Maybe you could have a purifying screen,” he said. If the molecular structure of the superabsorbent were optimized, he went on, its absorptive capacity could increase by another five hundred per cent. “Superabsorbents could go from absorbing three hundred times their weight to absorbing fifteen hundred times their weight. We could have just one perfect particle of super-absorbent in a diaper. If you are going to dream, why not make the diaper as thin as a pair of underwear?”
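
Buchholz’s figures can be roughed out in a few lines. The five-hundred-millilitre night’s load and the ten-to-one salt-water penalty come from the article; treating a millilitre of urine as weighing about a gram is my assumption:

```python
# How many grams of polymer a night's work would take, at current
# and at Buchholz's hoped-for capacity.
night_load_g = 500  # ~500 ml, per the article; 1 ml taken as ~1 g

for fresh_capacity in (300, 1500):        # times its own weight
    urine_capacity = fresh_capacity / 10  # the salt penalty
    print(fresh_capacity, round(night_load_g / urine_capacity, 1))
# 300x needs ~16.7 g of flakes; 1500x needs ~3.3 g, which is why a
# fivefold gain in capacity points toward a diaper as thin as underwear.
```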

Buchholz was in his laboratory, and he held up a small plastic cup filled with a few tablespoons of superabsorbent flakes, each not much larger than a grain of salt. “It’s just a granular material, totally nontoxic,” he said. “This is about two grams.” He walked over to the sink and filled a large beaker with tap water, and poured the contents of the beaker into the jar of superabsorbent. At first, nothing happened. The amounts were so disproportionate that it looked as if the water would simply engulf the flakes. But, slowly and steadily, the water began to thicken. “Look,” Buchholz said. “It’s becoming soupy.” Sure enough, little beads of gel were forming. Nothing else was happening: there was no gas given off, no burbling or sizzling as the chemical process took place. The superabsorbent polymer was simply swallowing up the water, and within minutes the contents of the cup had thickened into what looked like slightly lumpy, spongy pudding. Buchholz picked up the jar and tilted it, to show that nothing at all was coming out. He pushed and prodded the mass with his finger. The water had disappeared. To soak up that much liquid, the Victor Mills diaper would have needed a thick bundle of paper towelling. Buchholz had used a few tablespoons of superabsorbent flakes. Superabsorbent was not merely better; it was smaller.

4.

Why does it matter that the diaper got so small? It seems a trivial thing, chiefly a matter of convenience to the parent taking a bag of diapers home from the supermarket. But it turns out that size matters a great deal. There’s a reason that there are now “new, improved concentrated” versions of laundry detergent, and that some cereals now come in smaller boxes. Smallness is one of those changes that send ripples through the whole economy. The old disposable diapers, for example, created a transportation problem. Tractor-trailers are prohibited by law from weighing more than eighty thousand pounds when loaded. That’s why a truck carrying something heavy and compact like bottled water or Campbell’s soup is “full,” when the truck itself is still half empty. But the diaper of the eighties was what is known as a “high cube” item. It was bulky and not very heavy, meaning that a diaper truck was full before it reached its weight limit. By cutting the size of a diaper in half, companies could fit twice as many diapers on a truck, and cut transportation expenses in half. They could also cut the amount of warehouse space and labor they needed in half. And companies could begin to rethink their manufacturing operations. “Distribution costs used to force you to have plants in lots of places,” Dudley Lehman, who heads the Kimberly-Clark diaper business, says. “As that becomes less and less of an issue, you say, ‘Do I really need all my plants?’ In the United States, it used to take eight. Now it takes five.” (Kimberly-Clark didn’t close any plants. But other manufacturers did, and here, perhaps, is a partial explanation for the great wave of corporate restructuring that swept across America in the late eighties and early nineties: firms could downsize their workforce because they had downsized their products.) And, because using five plants to make diapers is more efficient than using eight, it became possible to improve diapers without raising diaper prices–which is important, because the sheer number of diapers parents have to buy makes it a price-sensitive product. Until recently, diapers were fastened with little pieces of tape, and if the person changing the diapers got lotion or powder on her fingers the tape wouldn’t work. A hook-and-loop, Velcro-like fastener doesn’t have this problem. But it was years before the hook-and-loop fastener was incorporated into the diaper chassis: until over-all manufacturing costs were reduced, it was just too expensive.
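
The “high cube” problem reduces to a simple pair of constraints: a trailer fills by volume or by weight, whichever binds first. In the sketch below, only the eighty-thousand-pound limit comes from the article; the trailer volume and the bag sizes and weights are invented for illustration:

```python
# Which constraint fills the truck first: volume or weight?
WEIGHT_LIMIT_LB = 80_000    # the legal limit cited in the article
TRAILER_VOLUME_FT3 = 3_800  # assumed usable trailer volume

def bags_per_truck(bag_ft3: float, bag_lb: float) -> int:
    by_volume = TRAILER_VOLUME_FT3 / bag_ft3
    by_weight = WEIGHT_LIMIT_LB / bag_lb
    return int(min(by_volume, by_weight))

print(bags_per_truck(bag_ft3=2.0, bag_lb=5))  # 1900: the bulky eighties bag
print(bags_per_truck(bag_ft3=1.0, bag_lb=5))  # 3800: halve the bulk and
# the load doubles, because volume, not weight, is the binding constraint.
```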

Most important, though, is how size affects the way diapers are sold. The shelves along the aisles of a supermarket are divided into increments of four feet, and the space devoted to a given product category is almost always a multiple of that. Diapers, for example, might be presented as a twenty-foot set. But when diapers were at their bulkiest the space reserved for them was never enough. “You could only get a limited number on the shelf,” says Sue Klug, the president of Catalina Marketing Solutions and a former executive for Albertson’s and Safeway. “Say you only had six bags. Someone comes in and buys a few, and then someone else comes in and buys a few more. Now you’re out of stock until someone reworks the shelf, which in some supermarkets might be a day or two.” Out-of-stock rates are already a huge problem in the retail business. At any given time, only about ninety-two per cent of the products that a store is supposed to be carrying are actually on the shelf–which, if you consider that the average supermarket has thirty-five thousand items, works out to twenty-eight hundred products that are simply not there. (For a highly efficient retailer like Wal-Mart, in-stock rates might be as high as ninety-nine per cent; for a struggling firm, they might be in the low eighties.) But, for a fast-moving, bulky item like diapers, the problem of restocking was much worse. Supermarkets could have allocated more shelf space to diapers, of course, but diapers aren’t a particularly profitable category for retailers–profit margins are about half what they are for the grocery department. So retailers would much rather give more shelf space to a growing and lucrative category like bottled water. “It’s all a trade-off,” Klug says. “If you expand diapers four feet, you’ve got to give up four feet of something else.” The only way diaper-makers could insure that their products would actually be on the shelves was to make the products smaller, so they could fit twelve bags into the space of six. And if you can fit twelve bags on a shelf, you can introduce different kinds of diapers. You can add pull-ups and premium diapers and low-cost private-label diapers, all of which give parents more options.

“We cut the cost of trucking in half,” says Ralph Drayer, who was in charge of logistics for Procter & Gamble for many years and now runs his own supply-chain consultancy in Cincinnati. “We cut the cost of storage in half. We cut handling in half, and we cut the cost of the store shelf in half, which is probably the most expensive space in the whole chain.” Everything in the diaper world, from plant closings and trucking routes to product improvements and consumer choice and convenience, turns, in the end, on the fact that Harmon and Harper’s absorbent core was smaller than Victor Mills’s.

The shame of it, though, is that Harmon and Harper have never been properly celebrated for their accomplishment. Victor Mills is the famous one. When he died, he was given a Times obituary, in which he was called “the father of disposable diapers.” When Carlyle Harmon died, seven months earlier, he got four hundred words in Utah’s Deseret News, stressing his contributions to the Mormon Church. We tend to credit those who create an idea, not those who perfect it, forgetting that it is often only in the perfection of an idea that true progress occurs. Putting sixty-four transistors on a chip allowed people to dream of the future. Putting four million transistors on a chip actually gave them the future. The diaper is no different. The paper diaper changed parenting. But a diaper that could hold four insults without leakage, keep a baby’s skin dry, clear an insult in twenty seconds flat, and would nearly always be in stock, even if you arrived at the supermarket at eight o’clock in the evening–and that would keep getting better at all those things, year in and year out–was another thing altogether. This was more than a good idea. This was something like perfection.