
Who can be blamed for a disaster like the Challenger explosion, a decade ago? No one, according to the new risk theorists, and we’d better get used to it.

1.

In the technological age, there is a ritual to disaster. When planes crash or chemical plants explode, each piece of physical evidence, of twisted metal or fractured concrete, becomes a kind of fetish object, painstakingly located, mapped, tagged, and analyzed, with findings submitted to boards of inquiry that then probe and interview and soberly draw conclusions. It is a ritual of reassurance, based on the principle that what we learn from one accident can help us prevent another, and a measure of its effectiveness is that Americans did not shut down the nuclear industry after Three Mile Island and do not abandon the skies after each new plane crash. But the rituals of disaster have rarely been played out so dramatically as they were in the case of the Challenger space shuttle, which blew up over southern Florida on January 28th ten years ago.

Fifty-five minutes after the explosion, when the last of the debris had fallen into the ocean, recovery ships were on the scene. They remained there for the next three months, as part of what turned into the largest maritime salvage operation in history, combing a hundred and fifty thousand square nautical miles for floating debris, while the ocean floor surrounding the crash site was inspected by submarines. In mid-April of 1986, the salvage team found several chunks of charred metal that confirmed what had previously been only suspected: the explosion was caused by a faulty seal in one of the shuttle’s rocket boosters, which had allowed a stream of flame to escape and ignite an external fuel tank.

Armed with this confirmation, a special Presidential investigative commission concluded the following June that the deficient seal reflected shoddy engineering and lax management at NASA and its prime contractor, Morton Thiokol. Properly chastised, NASA returned to the drawing board, to emerge thirty-two months later with a new shuttle, Discovery, redesigned according to the lessons learned from the disaster. During that first post-Challenger flight, as America watched breathlessly, the crew of the Discovery held a short commemorative service. “Dear friends,” the mission commander, Captain Frederick H. Hauck, said, addressing the seven dead Challenger astronauts, “your loss has meant that we could confidently begin anew.” The ritual was complete. NASA was back.

But what if the assumptions that underlie our disaster rituals aren’t true? What if these public post mortems don’t help us avoid future accidents? Over the past few years, a group of scholars has begun making the unsettling argument that the rituals that follow things like plane crashes or the Three Mile Island crisis are as much exercises in self-deception as they are genuine opportunities for reassurance. For these revisionists, high-technology accidents may not have clear causes at all. They may be inherent in the complexity of the technological systems we have created.

This month, on the tenth anniversary of the Challenger disaster, such revisionism has been extended to the space shuttle with the publication, by the Boston College sociologist Diane Vaughan, of “The Challenger Launch Decision” (Chicago), which is the first truly definitive analysis of the events leading up to January 28, 1986. The conventional view is that the Challenger accident was an anomaly, that it happened because people at NASA had not done their job. But the study’s conclusion is the opposite: it says that the accident happened because people at NASA had done exactly what they were supposed to do. “No fundamental decision was made at NASA to do evil,” Vaughan writes. “Rather, a series of seemingly harmless decisions were made that incrementally moved the space agency toward a catastrophic outcome.”

No doubt Vaughan’s analysis will be hotly disputed in the coming months, but even if she is only partly right the implications of this kind of argument are enormous. We have surrounded ourselves in the modern age with things like power plants and nuclear-weapons systems and airports that handle hundreds of planes an hour, on the understanding that the risks they represent are, at the very least, manageable. But if the potential for catastrophe is actually found in the normal functioning of complex systems, this assumption is false. Risks are not easily manageable, accidents are not easily preventable, and the rituals of disaster have no meaning. The first time around, the story of the Challenger was tragic. In its retelling, a decade later, it is merely banal.

2.

Perhaps the best way to understand the argument over the Challenger explosion is to start with an accident that preceded it: the near-disaster at the Three Mile Island (T.M.I.) nuclear-power plant in March of 1979. The conclusion of the President’s commission that investigated the T.M.I. accident was that it was the result of human error, particularly on the part of the plant’s operators. But the truth of what happened there, the revisionists maintain, is a good deal more complicated than that, and their arguments are worth examining in detail.

The trouble at T.M.I. started with a blockage in what is called the plant’s polisher, a kind of giant water filter. Polisher problems were not unusual at T.M.I., or particularly serious. But in this case the blockage caused moisture to leak into the plant’s air system, inadvertently tripping two valves and shutting down the flow of cold water into the plant’s steam generator.

As it happens, T.M.I. had a backup cooling system for precisely this situation. But on that particular day, for reasons that no one really knows, the valves for the backup system weren’t open. They had been closed, and an indicator in the control room showing they were closed was blocked by a repair tag hanging from a switch above it. That left the reactor dependent on another backup system, a special sort of relief valve. But, as luck would have it, the relief valve wasn’t working properly that day, either. It stuck open when it was supposed to close, and, to make matters even worse, a gauge in the control room which should have told the operators that the relief valve wasn’t working was itself not working. By the time T.M.I.’s engineers realized what was happening, the reactor had come dangerously close to a meltdown.

Here, in other words, was a major accident caused by five discrete events. There is no way the engineers in the control room could have known about any of them. No glaring errors or spectacularly bad decisions were made that exacerbated those events. And all the malfunctions (the blocked polisher, the shut valves, the obscured indicator, the faulty relief valve, and the broken gauge) were in themselves so trivial that individually they would have created no more than a nuisance. What caused the accident was the way minor events unexpectedly interacted to create a major problem.

This kind of disaster is what the Yale University sociologist Charles Perrow has famously called a “normal accident.” By “normal” Perrow does not mean that it is frequent; he means that it is the kind of accident one can expect in the normal functioning of a technologically complex operation. Modern systems, Perrow argues, are made up of thousands of parts, all of which interrelate in ways that are impossible to anticipate. Given that complexity, he says, it is almost inevitable that some combinations of minor failures will eventually amount to something catastrophic. In a classic 1984 treatise on accidents, Perrow takes examples of well-known plane crashes, oil spills, chemical-plant explosions, and nuclear-weapons mishaps and shows how many of them are best understood as “normal.” If you saw last year’s hit movie “Apollo 13,” in fact, you have seen a perfect illustration of one of the most famous of all normal accidents: the Apollo flight went awry because of the interaction of failures of the spacecraft’s oxygen and hydrogen tanks, and an indicator light that diverted the astronauts’ attention from the real problem.

Had this been a “real” accident, if the mission had run into trouble because of one massive or venal error, the story would have made for a much inferior movie. In real accidents, people rant and rave and hunt down the culprit. They do, in short, what people in Hollywood thrillers always do. But what made Apollo 13 unusual was that the dominant emotion was not anger but bafflement: bafflement that so much could go wrong for so little apparent reason. There was no one to blame, no dark secret to unearth, no recourse but to re-create an entire system in place of one that had inexplicably failed. In the end, the normal accident was the more terrifying one.

3.

Was the Challenger explosion a “normal accident”? In a narrow sense, the answer is no. Unlike what happened at T.M.I., it was caused by a single, catastrophic malfunction: the so-called O-rings that were supposed to prevent hot gases from leaking out of the rocket boosters didn’t do their job. But Vaughan argues that the O-ring problem was really just a symptom. The cause of the accident was the culture of NASA, she says, and that culture led to a series of decisions about the Challenger which very much followed the contours of a normal accident.

The heart of the question is how NASA chose to evaluate the problems it had been having with the rocket boosters’ O-rings. These are the thin rubber bands that run around the lips of each of the rocket’s four segments, and each O-ring was meant to work like the rubber seal on the top of a bottle of preserves, making the fit between each part of the rocket snug and airtight. But from as far back as 1981, on one shuttle flight after another, the O-rings had shown increasing problems. In a number of instances, the rubber seal had been dangerously eroded, a condition suggesting that hot gases had almost escaped. What’s more, O-rings were strongly suspected to be less effective in cold weather, when the rubber would harden and not give as tight a seal. On the morning of January 28, 1986, the shuttle launchpad was encased in ice, and the temperature at liftoff was just above freezing. Anticipating these low temperatures, engineers at Morton Thiokol, the manufacturer of the shuttle’s rockets, had recommended that the launch be delayed. Morton Thiokol brass and NASA, however, overruled the recommendation, and that decision led both the President’s commission and numerous critics since to accuse NASA of egregious, if not criminal, misjudgment.

Vaughan doesn’t dispute that the decision was fatally flawed. But, after reviewing thousands of pages of transcripts and internal NASA documents, she can’t find any evidence of people acting negligently, or nakedly sacrificing safety in the name of politics or expediency. The mistakes that NASA made, she says, were made in the normal course of operation. For example, in retrospect it may seem obvious that cold weather impaired O-ring performance. But it wasn’t obvious at the time. A previous shuttle flight that had suffered worse O-ring damage had been launched in seventy-five-degree heat. And on a series of previous occasions when NASA had proposed, but eventually scrubbed for other reasons, shuttle launches in weather as cold as forty-one degrees, Morton Thiokol had not said a word about the potential threat posed by the cold, so its pre-Challenger objection had seemed to NASA not reasonable but arbitrary. Vaughan confirms that there was a dispute between managers and engineers on the eve of the launch but points out that in the shuttle program disputes of this sort were commonplace. And, while the President’s commission was astonished by NASA’s repeated use of the phrases “acceptable risk” and “acceptable erosion” in internal discussion of the rocket-booster joints, Vaughan shows that flying with acceptable risks was a standard part of NASA culture. The lists of “acceptable risks” on the space shuttle, in fact, filled six volumes. “Although [O-ring] erosion itself had not been predicted, its occurrence conformed to engineering expectations about large-scale technical systems,” she writes. “At NASA, problems were the norm. The word ‘anomaly’ was part of everyday talk. . . . The whole shuttle system operated on the assumption that deviation could be controlled but not eliminated.”

What NASA had created was a closed culture that, in her words, “normalized deviance,” so that decisions which to the outside world were obviously questionable were seen by NASA’s management as prudent and reasonable. It is her depiction of this internal world that makes her book so disquieting: when she lays out the sequence of decisions which led to the launch, each decision as trivial as the string of failures that led to T.M.I., it is difficult to find any precise point where things went wrong or where things might be improved next time. “It can truly be said that the Challenger launch decision was a rule-based decision,” she concludes. “But the cultural understandings, rules, procedures, and norms that always had worked in the past did not work this time. It was not amorally calculating managers violating rules that were responsible for the tragedy. It was conformity.”

4.

There is another way to look at this problem, and that is from the standpoint of how human beings handle risk. One of the assumptions behind the modern disaster ritual is that when a risk can be identified and eliminated a system can be made safer. The new booster joints on the shuttle, for example, are so much better than the old ones that the over-all chances of a Challenger-style accident’s ever happening again must be lower, right? This is such a straightforward idea that questioning it seems almost impossible. But that is just what another group of scholars has done, under what is called the theory of “risk homeostasis.” It should be said that within the academic community there are huge debates over how widely the theory of risk homeostasis can and should be applied. But the basic idea, which has been laid out brilliantly by the Canadian psychologist Gerald Wilde in his book “Target Risk,” is quite simple: under certain circumstances, changes that appear to make a system or an organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.

Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (A.B.S.), the recent technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups, which were otherwise perfectly matched, were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers A.B.S. made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn’t merge as well, and they were involved in more near-misses. In other words, the A.B.S. systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they “consumed” the risk reduction, they didn’t save it.

Risk homeostasis doesn’t happen all the time. Often, as in the case of seat belts, say, compensatory behavior only partly offsets the risk reduction of a safety measure. But it happens often enough that it must be given serious consideration. Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the “safe” environment of a marked crossing by being less vigilant about oncoming traffic. Why did the introduction of childproof lids on medicine bottles lead, according to one study, to a substantial increase in fatal child poisonings? Because adults became less careful in keeping pill bottles out of the reach of children.

Risk homeostasis also works in the opposite direction. In the late nineteen-sixties, Sweden changed over from driving on the left-hand side of the road to driving on the right, a switch that one would think would create an epidemic of accidents. But, in fact, the opposite was true. People compensated for their unfamiliarity with the new traffic patterns by driving more carefully. During the next twelve months, traffic fatalities dropped seventeen per cent-before returning slowly to their previous levels. As Wilde only half-facetiously argues, countries truly interested in making their streets and highways safer should think about switching over from one side of the road to the other on a regular basis.

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize-winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

This is a depressing conclusion, but it shouldn’t come as a surprise. The truth is that our stated commitment to safety, our faithful enactment of the rituals of disaster, has always masked a certain hypocrisy. We don’t really want the safest of all possible worlds. The national fifty-five-mile-per-hour speed limit probably saved more lives than any other single government intervention of the past twenty-five years. But the fact that Congress lifted it last month with a minimum of argument proves that we would rather consume the recent safety advances of things like seat belts and air bags than save them. The same is true of the dramatic improvements that have been made in recent years in the design of aircraft and flight-navigation systems. Presumably, these innovations could be used to bring down the airline-accident rate as low as possible. But that is not what consumers want. They want air travel to be cheaper, more reliable, or more convenient, and so those safety advances have been at least partly consumed by flying and landing planes in worse weather and heavier traffic conditions.

What accidents like the Challenger should teach us is that we have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life. At some point in the future-for the most mundane of reasons, and with the very best of intentions-a NASA spacecraft will again go down in flames. We should at least admit this to ourselves now. And if we cannot-if the possibility is too much to bear-then our only option is to start thinking about getting rid of things like space shuttles altogether.

Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (A.B.S.), the recent technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups-which were otherwise perfectly matched-were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers A.B.S. made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn’t merge as well, and they were involved in more near-misses. In other words, the A.B.S. systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they “consumed” the risk reduction, they didn’t save it.

Risk homeostasis doesn’t happen all the time. Often-as in the case of seat belts, say-compensatory behavior only partly offsets the risk-reduction of a safety measure. But it happens often enough that it must be given serious consideration. Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the “safe” environment of a marked crossing by being less viligant about oncoming traffic. Why did the introduction of childproof lids on medicine bottles lead, according to one study, to a substantial increase in fatal child poisonings? Because adults became less careful in keeping pill bottles out of the reach of children.

Risk homeostasis also works in the opposite direction. In the late nineteen-sixties, Sweden changed over from driving on the left-hand side of the road to driving on the right, a switch that one would think would create an epidemic of accidents. But, in fact, the opposite was true. People compensated for their unfamiliarity with the new traffic patterns by driving more carefully. During the next twelve months, traffic fatalities dropped seventeen per cent-before returning slowly to their previous levels. As Wilde only half-facetiously argues, countries truly interested in making their streets and highways safer should think about switching over from one side of the road to the other on a regular basis.

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize- winning physicist who served on the Challenger commission, said that at NASA decision-making was “a kind of Russian roulette.” When the O-rings began to have problems and nothing happened, the agency began to believe that “the risk is no longer so high for the next flights,” Feynman said, and that “we can lower our standards a little bit because we got away with it last time.” But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

This is a depressing conclusion, but it shouldn’t come as a surprise. The truth is that our stated commitment to safety, our faithful enactment of the rituals of disaster, has always masked a certain hypocrisy. We don’t really want the safest of all possible worlds. The national fifty-five-mile-per-hour speed limit probably saved more lives than any other single government intervention of the past twenty-five years. But the fact that Congress lifted it last month with a minimum of argument proves that we would rather consume the recent safety advances of things like seat belts and air bags than save them. The same is true of the dramatic improvements that have been made in recent years in the design of aircraft and flight- navigation systems. Presumably, these innovations could be used to bring down the airline-accident rate as low as possible. But that is not what consumers want. They want air travel to be cheaper, more reliable, or more convenient, and so those safety advances have been at least partly consumed by flying and landing planes in worse weather and heavier traffic conditions.

What accidents like the Challenger should teach us is that we have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life. At some point in the future-for the most mundane of reasons, and with the very best of intentions-a NASA spacecraft will again go down in flames. We should at least admit this to ourselves now. And if we cannot-if the possibility is too much to bear-then our only option is to start thinking about getting rid of things like space shuttles altogether.

January 22, 1996
DEPT. OF DISPUTATION


But what if the assumptions that underlie our disaster rituals aren’t true? What if these public post mortems don’t help us avoid future accidents? Over the past few years, a group of scholars has begun making the unsettling argument that the rituals that follow things like plane crashes or the Three Mile Island crisis are as much exercises in self-deception as they are genuine opportunities for reassurance. For these revisionists, high-technology accidents may not have clear causes at all. They may be inherent in the complexity of the technological systems we have created.

This month, on the tenth anniversary of the Challenger disaster, such revisionism has been extended to the space shuttle with the publication, by the Boston College sociologist Diane Vaughan, of "The Challenger Launch Decision" (Chicago), which is the first truly definitive analysis of the events leading up to January 28, 1986. The conventional view is that the Challenger accident was an anomaly, that it happened because people at NASA had not done their job. But the study’s conclusion is the opposite: it says that the accident happened because people at NASA had done exactly what they were supposed to do. "No fundamental decision was made at NASA to do evil," Vaughan writes. "Rather, a series of seemingly harmless decisions were made that incrementally moved the space agency toward a catastrophic outcome."

No doubt Vaughan’s analysis will be hotly disputed in the coming months, but even if she is only partly right the implications of this kind of argument are enormous. We have surrounded ourselves in the modern age with things like power plants and nuclear-weapons systems and airports that handle hundreds of planes an hour, on the understanding that the risks they represent are, at the very least, manageable. But if the potential for catastrophe is actually found in the normal functioning of complex systems, this assumption is false. Risks are not easily manageable, accidents are not easily preventable, and the rituals of disaster have no meaning. The first time around, the story of the Challenger was tragic. In its retelling, a decade later, it is merely banal.

2.

Perhaps the best way to understand the argument over the Challenger explosion is to start with an accident that preceded it-the near-disaster at the Three Mile Island (T.M.I.) nuclear-power plant in March of 1979. The conclusion of the President’s commission that investigated the T.M.I. accident was that it was the result of human error, particularly on the part of the plant’s operators. But the truth of what happened there, the revisionists maintain, is a good deal more complicated than that, and their arguments are worth examining in detail.

The trouble at T.M.I. started with a blockage in what is called the plant’s polisher-a kind of giant water filter. Polisher problems were not unusual at T.M.I., or particularly serious. But in this case the blockage caused moisture to leak into the plant’s air system, inadvertently tripping two valves and shutting down the flow of cold water into the plant’s steam generator.

As it happens, T.M.I. had a backup cooling system for precisely this situation. But on that particular day, for reasons that no one really knows, the valves for the backup system weren’t open. They had been closed, and an indicator in the control room showing they were closed was blocked by a repair tag hanging from a switch above it. That left the reactor dependent on another backup system, a special sort of relief valve. But, as luck would have it, the relief valve wasn’t working properly that day, either. It stuck open when it was supposed to close, and, to make matters even worse, a gauge in the control room which should have told the operators that the relief valve wasn’t working was itself not working. By the time T.M.I.’s engineers realized what was happening, the reactor had come dangerously close to a meltdown.

Here, in other words, was a major accident caused by five discrete events. There is no way the engineers in the control room could have known about any of them. No glaring errors or spectacularly bad decisions were made that exacerbated those events. And all the malfunctions-the blocked polisher, the shut valves, the obscured indicator, the faulty relief valve, and the broken gauge-were in themselves so trivial that individually they would have created no more than a nuisance. What caused the accident was the way minor events unexpectedly interacted to create a major problem.

This kind of disaster is what the Yale University sociologist Charles Perrow has famously called a "normal accident." By "normal" Perrow does not mean that it is frequent; he means that it is the kind of accident one can expect in the normal functioning of a technologically complex operation. Modern systems, Perrow argues, are made up of thousands of parts, all of which interrelate in ways that are impossible to anticipate. Given that complexity, he says, it is almost inevitable that some combinations of minor failures will eventually amount to something catastrophic. In a classic 1984 treatise on accidents, Perrow takes examples of well-known plane crashes, oil spills, chemical-plant explosions, and nuclear-weapons mishaps and shows how many of them are best understood as "normal." If you saw last year’s hit movie "Apollo 13," in fact, you have seen a perfect illustration of one of the most famous of all normal accidents: the Apollo flight went awry because of the interaction of failures of the spacecraft’s oxygen and hydrogen tanks, and an indicator light that diverted the astronauts’ attention from the real problem.

Had this been a "real" accident-if the mission had run into trouble because of one massive or venal error-the story would have made for a much inferior movie. In real accidents, people rant and rave and hunt down the culprit. They do, in short, what people in Hollywood thrillers always do. But what made Apollo 13 unusual was that the dominant emotion was not anger but bafflement-bafflement that so much could go wrong for so little apparent reason. There was no one to blame, no dark secret to unearth, no recourse but to re-create an entire system in place of one that had inexplicably failed. In the end, the normal accident was the more terrifying one.

3.

Was the Challenger explosion a "normal accident"? In a narrow sense, the answer is no. Unlike what happened at T.M.I., its explosion was caused by a single, catastrophic malfunction: the so-called O-rings that were supposed to prevent hot gases from leaking out of the rocket boosters didn’t do their job. But Vaughan argues that the O-ring problem was really just a symptom. The cause of the accident was the culture of NASA, she says, and that culture led to a series of decisions about the Challenger which very much followed the contours of a normal accident.

The heart of the question is how NASA chose to evaluate the problems it had been having with the rocket boosters’ O-rings. These are the thin rubber bands that run around the lips of each of the rocket’s four segments, and each O-ring was meant to work like the rubber seal on the top of a bottle of preserves, making the fit between each part of the rocket snug and airtight. But from as far back as 1981, on one shuttle flight after another, the O-rings had shown increasing problems. In a number of instances, the rubber seal had been dangerously eroded-a condition suggesting that hot gases had almost escaped. What’s more, O-rings were strongly suspected to be less effective in cold weather, when the rubber would harden and not give as tight a seal. On the morning of January 28, 1986, the shuttle launchpad was encased in ice, and the temperature at liftoff was just above freezing. Anticipating these low temperatures, engineers at Morton Thiokol, the manufacturer of the shuttle’s rockets, had recommended that the launch be delayed. Morton Thiokol brass and NASA, however, overruled the recommendation, and that decision led both the President’s commission and numerous critics since to accuse NASA of egregious-if not criminal-misjudgment.

Vaughan doesn’t dispute that the decision was fatally flawed. But, after reviewing thousands of pages of transcripts and internal NASA documents, she can’t find any evidence of people acting negligently, or nakedly sacrificing safety in the name of politics or expediency. The mistakes that NASA made, she says, were made in the normal course of operation. For example, in retrospect it may seem obvious that cold weather impaired O-ring performance. But it wasn’t obvious at the time. A previous shuttle flight that had suffered worse O-ring damage had been launched in seventy-five-degree heat. And on a series of previous occasions when NASA had proposed-but eventually scrubbed for other reasons-shuttle launches in weather as cold as forty-one degrees, Morton Thiokol had not said a word about the potential threat posed by the cold, so its pre-Challenger objection had seemed to NASA not reasonable but arbitrary. Vaughan confirms that there was a dispute between managers and engineers on the eve of the launch but points out that in the shuttle program disputes of this sort were commonplace. And, while the President’s commission was astonished by NASA’s repeated use of the phrases "acceptable risk" and "acceptable erosion" in internal discussion of the rocket-booster joints, Vaughan shows that flying with acceptable risks was a standard part of NASA culture. The lists of "acceptable risks" on the space shuttle, in fact, filled six volumes. "Although [O-ring] erosion itself had not been predicted, its occurrence conformed to engineering expectations about large-scale technical systems," she writes. "At NASA, problems were the norm. The word ‘anomaly’ was part of everyday talk. . . . The whole shuttle system operated on the assumption that deviation could be controlled but not eliminated."

What NASA had created was a closed culture that, in her words, "normalized deviance" so that to the outside world decisions that were obviously questionable were seen by NASA’s management as prudent and reasonable. It is her depiction of this internal world that makes her book so disquieting: when she lays out the sequence of decisions which led to the launch-each decision as trivial as the string of failures that led to T.M.I.-it is difficult to find any precise point where things went wrong or where things might be improved next time. "It can truly be said that the Challenger launch decision was a rule-based decision," she concludes. "But the cultural understandings, rules, procedures, and norms that always had worked in the past did not work this time. It was not amorally calculating managers violating rules that were responsible for the tragedy. It was conformity."

4.

There is another way to look at this problem, and that is from the standpoint of how human beings handle risk. One of the assumptions behind the modern disaster ritual is that when a risk can be identified and eliminated a system can be made safer. The new booster joints on the shuttle, for example, are so much better than the old ones that the over-all chances of a Challenger-style accident’s ever happening again must be lower, right? This is such a straightforward idea that questioning it seems almost impossible. But that is just what another group of scholars has done, under what is called the theory of "risk homeostasis." It should be said that within the academic community there are huge debates over how widely the theory of risk homeostasis can and should be applied. But the basic idea, which has been laid out brilliantly by the Canadian psychologist Gerald Wilde in his book "Target Risk," is quite simple: under certain circumstances, changes that appear to make a system or an organization safer in fact don’t. Why? Because human beings have a seemingly fundamental tendency to compensate for lower risks in one area by taking greater risks in another.

Consider, for example, the results of a famous experiment conducted several years ago in Germany. Part of a fleet of taxicabs in Munich was equipped with antilock brake systems (A.B.S.), the recent technological innovation that vastly improves braking, particularly on slippery surfaces. The rest of the fleet was left alone, and the two groups-which were otherwise perfectly matched-were placed under careful and secret observation for three years. You would expect the better brakes to make for safer driving. But that is exactly the opposite of what happened. Giving some drivers A.B.S. made no difference at all in their accident rate; in fact, it turned them into markedly inferior drivers. They drove faster. They made sharper turns. They showed poorer lane discipline. They braked harder. They were more likely to tailgate. They didn’t merge as well, and they were involved in more near-misses. In other words, the A.B.S. systems were not used to reduce accidents; instead, the drivers used the additional element of safety to enable them to drive faster and more recklessly without increasing their risk of getting into an accident. As economists would say, they "consumed" the risk reduction; they didn’t save it.

Risk homeostasis doesn’t happen all the time. Often-as in the case of seat belts, say-compensatory behavior only partly offsets the risk reduction of a safety measure. But it happens often enough that it must be given serious consideration. Why are more pedestrians killed crossing the street at marked crosswalks than at unmarked crosswalks? Because they compensate for the "safe" environment of a marked crossing by being less vigilant about oncoming traffic. Why did the introduction of childproof lids on medicine bottles lead, according to one study, to a substantial increase in fatal child poisonings? Because adults became less careful in keeping pill bottles out of the reach of children.

Risk homeostasis also works in the opposite direction. In the late nineteen-sixties, Sweden changed over from driving on the left-hand side of the road to driving on the right, a switch that one would think would create an epidemic of accidents. But, in fact, the opposite was true. People compensated for their unfamiliarity with the new traffic patterns by driving more carefully. During the next twelve months, traffic fatalities dropped seventeen per cent-before returning slowly to their previous levels. As Wilde only half-facetiously argues, countries truly interested in making their streets and highways safer should think about switching over from one side of the road to the other on a regular basis.

It doesn’t take much imagination to see how risk homeostasis applies to NASA and the space shuttle. In one frequently quoted phrase, Richard Feynman, the Nobel Prize-winning physicist who served on the Challenger commission, said that at NASA decision-making was "a kind of Russian roulette." When the O-rings began to have problems and nothing happened, the agency began to believe that "the risk is no longer so high for the next flights," Feynman said, and that "we can lower our standards a little bit because we got away with it last time." But fixing the O-rings doesn’t mean that this kind of risk-taking stops. There are six whole volumes of shuttle components that are deemed by NASA to be as risky as O-rings. It is entirely possible that better O-rings just give NASA the confidence to play Russian roulette with something else.

This is a depressing conclusion, but it shouldn’t come as a surprise. The truth is that our stated commitment to safety, our faithful enactment of the rituals of disaster, has always masked a certain hypocrisy. We don’t really want the safest of all possible worlds. The national fifty-five-mile-per-hour speed limit probably saved more lives than any other single government intervention of the past twenty-five years. But the fact that Congress lifted it last month with a minimum of argument proves that we would rather consume the recent safety advances of things like seat belts and air bags than save them. The same is true of the dramatic improvements that have been made in recent years in the design of aircraft and flight-navigation systems. Presumably, these innovations could be used to bring down the airline-accident rate as low as possible. But that is not what consumers want. They want air travel to be cheaper, more reliable, or more convenient, and so those safety advances have been at least partly consumed by flying and landing planes in worse weather and heavier traffic conditions.

What accidents like the Challenger should teach us is that we have constructed a world in which the potential for high-tech catastrophe is embedded in the fabric of day-to-day life. At some point in the future-for the most mundane of reasons, and with the very best of intentions-a NASA spacecraft will again go down in flames. We should at least admit this to ourselves now. And if we cannot-if the possibility is too much to bear-then our only option is to start thinking about getting rid of things like space shuttles altogether.


Why do some people turn into violent criminals? New evidence suggests that it may all be in the brain.

1.

On the morning of November 18, 1996, Joseph Paul Franklin was led into Division 15 of the St. Louis County Courthouse, in Clayton, Missouri. He was wearing a pair of black high-top sneakers, an orange jumpsuit with short sleeves that showed off his prison biceps, and a pair of thick black-rimmed glasses. There were two guards behind him, two guards in front of him, and four more guards stationed around the courtroom, and as he walked into the room-or, rather, shuffled, since his feet were manacled-Franklin turned to one of them and said “Wassup?” in a loud, Southern-accented voice. Then he sat down between his attorneys and stared straight ahead at the judge, completely still except for his left leg, which bounced up and down in an unceasing nervous motion.

Joseph Franklin takes credit for shooting and paralyzing Larry Flynt, the publisher of Hustler, outside a Lawrenceville, Georgia, courthouse in March of 1978, apparently because Flynt had printed photographs of a racially mixed couple. Two years later, he says, he gunned down the civil-rights leader Vernon Jordan outside a Marriott in Fort Wayne, Indiana, tearing a hole in Jordan’s back the size of a fist. In the same period in the late seventies, as part of what he later described as a “mission” to rid America of blacks and Jews and of whites who like blacks and Jews, Franklin says that he robbed several banks, bombed a synagogue in Tennessee, killed two black men jogging with white women in Utah, shot a black man and a white woman coming out of a Pizza Hut in a suburb of Chattanooga, Tennessee, and on and on-a violent spree that may have spanned ten states and claimed close to twenty lives, and, following Franklin’s arrest, in 1980, earned him six consecutive life sentences.

Two years ago, while Franklin was imprisoned in Marion Federal Penitentiary, in Illinois, he confessed to another crime. He was the one, he said, who had hidden in the bushes outside a synagogue in suburban St. Louis in the fall of 1977 and opened fire on a group of worshippers, killing forty-two-year-old Gerald Gordon. After the confession, the State of Missouri indicted him on one count of capital murder and two counts of assault. He was moved from Marion to the St. Louis County jail, and from there, on a sunny November morning last year, he was brought before Judge Robert Campbell, of the St. Louis County Circuit Court, so that it could be determined whether he was fit to stand trial-whether, in other words, embarking on a campaign to rid America of Jews and blacks was an act of evil or an act of illness.

The prosecution went first. On a television set at one side of the courtroom, two videotapes were shown-one of an interview with Franklin by a local news crew and the other of Franklin’s formal confession to the police. In both, he seems lucid and calm, patiently retracing how he planned and executed his attack on the synagogue. He explains that he bought the gun in a suburb of Dallas, answering a classified ad, so the purchase couldn’t be traced. He drove to the St. Louis area and registered at a Holiday Inn. He looked through the Yellow Pages to find the names of synagogues. He filed the serial number off his rifle and bought a guitar case to carry the rifle in. He bought a bicycle. He scouted out a spot near his chosen synagogue from which he could shoot without being seen. He parked his car in a nearby parking lot and rode his bicycle to the synagogue. He lay in wait in the bushes for several hours, until congregants started to emerge. He fired five shots. He rode the bicycle back to the parking lot, climbed into his car, pulled out of the lot, checked his police scanner to see if he was being chased, then drove south, down I-55, back home toward Memphis.

In the interview with the news crew, Franklin answered every question, soberly and directly. He talked about his tattoos (“This one is the Grim Reaper. I got it in Dallas”) and his heroes (“One person I like is Howard Stern. I like his honesty”), and he respectfully disagreed with the media’s description of racially motivated crimes as “hate crimes,” since, he said, “every murder is committed out of hate.” In his confession to the police, after he detailed every step of the synagogue attack, Franklin was asked if there was anything he’d like to say. He stared thoughtfully over the top of his glasses. There was a long silence. “I can’t think of anything,” he answered. Then he was asked if he felt any remorse. There was another silence. “I can’t say that I do,” he said. He paused again, then added, “The only thing I’m sorry about is that it’s not legal.”

“What’s not legal?”

Franklin answered as if he’d just been asked the time of day: “Killing Jews.”

After a break for lunch, the defense called Dorothy Otnow Lewis, a psychiatrist at New York’s Bellevue Hospital and a professor at New York University School of Medicine. Over the past twenty years, Lewis has examined, by her own rough estimate, somewhere between a hundred and fifty and two hundred murderers. She was the defense’s only expert witness in the trial of Arthur Shawcross, the Rochester serial killer who strangled eleven prostitutes in the late eighties. She examined Joel Rifkin, the Long Island serial killer, and Mark David Chapman, who shot John Lennon-both for the defense. Once, in a Florida prison, she sat for hours talking with Ted Bundy. It was the day before his execution, and when they had finished Bundy bent down and kissed her cheek. “Bundy thought I was the only person who didn’t want something from him,” Lewis says. Frequently, Lewis works with Jonathan Pincus, a neurologist at Georgetown University. Lewis does the psychiatric examination; Pincus does the neurological examination. But Franklin put his foot down. He could tolerate being examined by a Jewish woman, evidently, but not by a Jewish man. Lewis testified alone.

Lewis is a petite woman in her late fifties, with short dark hair and large, liquid brown eyes. She was wearing a green blazer and a black skirt with a gold necklace, and she was so dwarfed by the witness stand that from the back of the courtroom only her head was visible. Under direct examination she said that she had spoken with Franklin twice-once for six hours and once for less than half an hour-and had concluded that he was a paranoid schizophrenic: a psychotic whose thinking was delusional and confused, a man wholly unfit to stand trial at this time. She talked of brutal physical abuse he had suffered as a child. She mentioned scars on his scalp from blows Franklin had told her were inflicted by his mother. She talked about his obsessive desire to be castrated, his grandiosity, his belief that he may have been Jewish in an earlier life, his other bizarre statements and beliefs. At times, Lewis seemed nervous, her voice barely audible, but perhaps that was because Franklin was staring at her unblinkingly, his leg bouncing faster and faster under the table. After an hour, Lewis stepped down. She paused in front of Franklin and, ever the psychiatrist, suggested that when everything was over they should talk. Then she walked slowly through the courtroom, past the defense table and the guards, and out the door.

Later that day, on the plane home to New York City, Lewis worried aloud that she hadn’t got her point across. Franklin, at least as he sat there in the courtroom, didn’t seem insane. The following day, Franklin took the stand himself for two hours, during which he did his own psychiatric diagnosis, confessing to a few “minor neuroses,” but not to being “stark raving mad,” as he put it. Of the insanity defense, he told the court, “I think it is hogwash, to tell you the truth. I knew exactly what I was doing.” During his testimony, Franklin called Lewis “a well-intentioned lady” who “seems to embellish her statements somewhat.” Lewis seemed to sense that that was the impression she’d left: that she was overreaching, that she was some kind of caricature-liberal Jewish New York psychiatrist comes to Middle America to tell the locals to feel sorry for a murderer. Sure enough, a week later the Judge rejected Lewis’s arguments and held Franklin competent to stand trial. But, flying back to New York, Lewis insisted that she wasn’t making an ideological point about Franklin; rather, she was saying that she didn’t feel that Franklin’s brain worked the way brains are supposed to work-that he had identifiable biological and psychiatric problems that diminished his responsibility for his actions. “I just don’t believe people are born evil,” she said. “To my mind, that is mindless. Forensic psychiatrists tend to buy into the notion of evil. I felt that that’s no explanation. The deed itself is bizarre, grotesque. But it’s not evil. To my mind, evil bespeaks conscious control over something. Serial murderers are not in that category. They are driven by forces beyond their control.”

The plane was in the air now. By some happy set of circumstances, Lewis had been bumped up to first class. She was sipping champagne. Her shoes were off. “You know, when I was leaving our last interview, he sniffed me right here,” she said, and she touched the back of her neck and flared her nostrils in mimicry of Franklin’s gesture. “He’d said to his attorney, ‘You know, if you weren’t here, I’d make a play for her.’ ” She had talked for six hours to this guy who hated Jews so much that he hid in the bushes and shot at them with a rifle, and he had come on to her, just like that. She shivered at the memory: “He said he wanted some pussy.”

2.

When Dorothy Lewis graduated from Yale School of Medicine, in 1963, neurology, the study of the brain and the rest of the nervous system, and psychiatry, the study of behavior and personality, were entirely separate fields. This was still the Freudian era. Little attempt was made to search for organic causes of criminality. When, after medical school, she began working with juvenile delinquents in New Haven, the theory was that these boys were robust, healthy. According to the prevailing wisdom, a delinquent was simply an ordinary kid who had been led astray by a troubled home life-by parents who were too irresponsible or too addled by drugs and alcohol to provide proper discipline. Lewis came from the archetypal do-gooding background-reared on Central Park West; schooled at Ethical Culture; a socialist mother who as a child had once introduced Eugene V. Debs at a political rally; father in the garment business; heated dinner-table conversations about the Rosenbergs-and she accepted this dogma. Criminals were just like us, only they had been given bad ideas about how to behave. The trouble was that when she began working with delinquents they didn’t seem like that at all. They didn’t lack for discipline. If anything, she felt, they were being disciplined too much. And these teenagers weren’t robust and rowdy; on the contrary, they seemed to be damaged and impaired. “I was studying for my boards in psychiatry, and in order to do a good job you wanted to do a careful medical history and a careful mental-status exam,” she says. “I discovered that many of these kids had had serious accidents, injuries, or illnesses that seemed to have affected the central nervous system and that hadn’t been identified previously.”

In 1976, she was given a grant by the State of Connecticut to study a group of nearly a hundred juvenile delinquents. She immediately went to see Pincus, then a young professor of neurology at Yale. They had worked together once before. “Dorothy came along and said she wanted to do this project with me,” Pincus says. “She wanted to look at violence. She had this hunch that there was something physically wrong with these kids. I said, ‘That’s ridiculous. Everyone knows violence has nothing to do with neurology.’ ” At that point, Pincus recalls, he went to his bookshelf and began reading out loud from what was then the definitive work in the field: “Criminality and Psychiatric Disorders,” by Samuel Guze, the chairman of the psychiatry department of Washington University, in St. Louis. “Sociopathy, alcoholism, and drug dependence are the psychiatric disorders characteristically associated with serious crime,” he read. “Schizophrenia, primary affective disorders, anxiety neurosis, obsessional neurosis, phobic neurosis, and”-and there he paused- “brain syndromes are not.” But Lewis would have none of it. “She said, ‘We should do it anyway.’ I said, ‘I don’t have the time.’ She said, ‘Jonathan, I can pay you.’ So I would go up on Sunday, and I would examine three or four youths, just give them a standard neurological examination.” But, after seeing the kids for himself, Pincus, too, became convinced that the prevailing wisdom about juvenile delinquents–and, by extension, about adult criminals–was wrong, and that Lewis was right. “Almost all the violent ones were damaged,” Pincus recalls, shaking his head.

Over the past twenty years, Lewis and Pincus have testified for the defense in more than a dozen criminal cases, most of them death-penalty appeals. Together, they have published a series of groundbreaking studies on murderers and delinquents, painstakingly outlining the medical and psychiatric histories of the very violent; one of their studies has been cited twice in United States Supreme Court opinions. Of the two, Pincus is more conservative. He doesn’t have doubts about evil the way Lewis does, and sharply disagrees with her on some of the implications of their work. On the core conclusions, however, they are in agreement. They believe that the most vicious criminals are, overwhelmingly, people with some combination of abusive childhoods, brain injuries, and psychotic symptoms (in particular, paranoia), and that while each of these problems individually has no connection to criminality (most people who have been abused or have brain injuries or psychotic symptoms never end up harming anyone else), somehow these factors together create such terrifying synergy as to impede these individuals’ ability to play by the rules of society.

Trying to determine the causes of human behavior is, of course, a notoriously tricky process. Lewis and Pincus haven’t done the kind of huge, population-wide studies that could definitively answer just how predictive of criminality these factors are. Their findings are, however, sufficiently tantalizing that their ideas have steadily gained ground in recent years. Other researchers have now done some larger studies supporting their ideas. Meanwhile, a wave of new findings in the fields of experimental psychiatry and neurology has begun to explain why it is that brain dysfunction and child abuse can have such dire effects. The virtue of this theory is that it sidesteps all the topics that so cripple contemporary discussions of violence-genetics, biological determinism, and, of course, race. In a sense, it’s a return to the old liberal idea that environment counts, and that it is possible to do something significant about crime by changing the material conditions of people’s lives. Only, this time the maddening imprecision of the old idea (what, exactly, was it about bad housing, say, that supposedly led to violent crime?) has been left behind. Lewis and Pincus and other neurologists and psychiatrists working in the field of criminal behavior think they are beginning to understand what it is that helps to turn some people into violent criminals-right down to which functions of the brain are damaged by abuse and injury. That’s what Lewis means when she says she doesn’t think that people are intrinsically evil. She thinks that some criminals simply suffer from a dysfunction of the brain, the way cardiac patients suffer from a dysfunction of the heart, and this is the central and in some ways disquieting thing about her. When she talks about criminals as victims, she doesn’t use the word in the standard liberal metaphorical sense. She means it literally.

Lewis works out of a tiny set of windowless offices on the twenty-first floor of the new wing of Bellevue Hospital, in Manhattan’s East Twenties. The offices are decorated in institutional colors-gray carpeting and bright-orange trim-and since they’re next to the children’s psychiatric ward you can sometimes hear children crying out. Lewis’s desk is stacked high with boxes of medical and court records from cases she has worked on, and also with dozens of videotapes of interviews with murderers which she has conducted over the years. She talks about some of her old cases-especially some of her death-row patients-as if they had just happened, going over and over details, sometimes worrying about whether she made the absolutely correct diagnosis. The fact that everyone else has long since given up on these people seems to be just what continues to attract her. Years ago, when she was in college, Lewis found herself sitting next to the Harvard theologian Paul Tillich on the train from New York to Boston. “When you read about witches being burned at the stake,” Tillich asked her, in the midst of a long and wide-ranging conversation, “do you identify with the witch or with the people looking on?” Tillich said he himself identified with the crowd. Not Lewis. She identified with the witch.

In her offices, Lewis has tapes of her interviews with Shawcross, the serial killer, and also tapes of Shawcross being interviewed by Park Elliott Dietz, the psychiatrist who testified for the prosecution in that case. Dietz is calm, in control, and has a slightly bored air, as if he had heard everything before. By contrast, Lewis, in her interviews, has a kind of innocence about her. She seems completely caught up in what is happening, and at one point, when Shawcross makes some particularly outrageous comment on what he did to one of the prostitutes he murdered, she looks back at the camera wide-eyed, as if to say “Wow!” When Dietz was on the stand, his notes were beside him in one of those rolling evidence carts, where everything is labelled and items are distinguished by color-coded dividers, so that he had the entire case at his fingertips. When Lewis testified, she kept a big stack of untidy notes on her lap and fussed through them after she was asked a question. She is like that in everyday life as well-a little distracted and spacey, wrapped up in the task at hand. It makes her so approachable and so unthreatening that it’s no wonder she gets hardened criminals to tell her their secrets. It’s another way of identifying with the witch. Once, while talking with Bundy, Lewis looked up after several hours and found that she had been so engrossed in their conversation that she hadn’t noticed that everyone outside the soundproof glass of the interview booth-the guard, the prison officials at their desks-had left for lunch. She and Bundy were utterly alone. Terrified, Lewis stayed glued to her seat, her eyes never leaving his. “I didn’t bat an eyelash,” she recalls. Another time, after Lewis had interviewed a murderer in a Tennessee prison, she returned to her hotel room to find out that there had been a riot in the prison while she was there.

3.

The human brain comprises, in the simplest terms, four interrelated regions, stacked up in ascending order of complexity. At the bottom is the brain stem, which governs the most basic and primitive functions-breathing, blood pressure, and body temperature. Above that is the diencephalon, the seat of sleep and appetite. Then comes the limbic region, the seat of sexual behavior and instinctual emotions. And on top, covering the entire outside of the brain in a thick carpet of gray matter, is the cortex, the seat of both concrete and abstract thought. It is the function of the cortex-and, in particular, those parts of the cortex beneath the forehead, known as the frontal lobes-to modify the impulses that surge up from within the brain, to provide judgment, to organize behavior and decision-making, to learn and adhere to rules of everyday life. It is the dominance of the cortex and the frontal lobes, in other words, that is responsible for making us human; and the central argument of the school to which Lewis and Pincus belong is that what largely distinguishes many violent criminals from the rest of us is that something has happened inside their brains to throw the functioning of the cortex and the frontal lobes out of whack. “We are a highly socialized animal. We can sit in theatres with strangers and not fight with each other,” Stuart Yudofsky, the chairman of psychiatry at Baylor College of Medicine, in Houston, told me. “Many other mammals could never crowd that closely together. Our cortex helps us figure out when we are and are not in danger. Our memory tells us what we should be frightened of and angry with and what we shouldn’t. But if there are problems there-if it’s impaired-one can understand how that would lead to confusion, to problems with disinhibition, to violence.” One of the most important things that Lewis and Pincus have to do, then, when they evaluate a murderer is check for signs of frontal-lobe impairment. This, the neurological exam, is Pincus’s task.

Pincus begins by taking a medical history: he asks about car accidents and falls from trees and sports injuries and physical abuse and problems at birth and any blows to the head of a kind that might have caused damage to the frontal lobes. He asks about headaches, tests for reflexes and sensorimotor functions, and compares people’s right and left sides and observes gait. “I measure the head circumference-if it’s more than two standard deviations below the normal brain circumference, there may be some degree of mental retardation, and, if it’s more than two standard deviations above, there may be hydrocephalus,” Pincus told me. “I also check gross motor coördination. I ask people to spread their fingers and hold their hands apart and look for choreiform movements-discontinuous little jerky movements of the fingers and arms.” We were in Pincus’s cluttered office at Georgetown University Medical Center, in Washington, D.C., and Pincus, properly professorial in a gray Glen-plaid suit, held out his hand to demonstrate. “Then I ask them to skip, to hop,” he went on, and he hopped up and down in a small space on the floor between papers and books.

Pincus stands just over six feet, has the long-limbed grace of an athlete, and plays the part of neurologist to perfection: calm, in command, with a distinguished sprinkle of white hair. At the same time, he has a look of mischief in his eyes, a streak of irreverence that allows him to jump up and down in his office before a total stranger. It’s an odd combination, like Walter Matthau playing Sigmund Freud.

“Then I check for mixed dominance, to see if the person is, say, right-eyed, left-footed,” he said. “If he is, it might mean that his central nervous system hasn’t differentiated the way it should.” He was sitting back down now. “No one of these by itself means he is damaged. But they can tell us something in aggregate.”

At this point, Pincus held up a finger forty-five degrees to my left and moved it slowly to the right. “Now we’re checking for frontal functions,” he said. “A person should be able to look at the examiner’s finger and follow it smoothly with his eyes. If he can only follow it jerkily, the frontal eye fields are not working properly. Then there’s upward gaze.” He asked me to direct my eyes to the ceiling. “The eye should go up five millimetres and a person should also be able to direct his gaze laterally and maintain it for twenty seconds. If he can’t, that’s motor impersistence.” Ideally, Pincus will attempt to amplify his results with neuropsychological testing, an EEG (an electroencephalogram, which measures electrical patterns in the brain), and an MRI scan (that’s magnetic resonance imaging), to see if he can spot scarring or lesions in any of the frontal regions which might contribute to impairment.

Pincus is also interested in measuring judgment. But since there is no objective standard for judgment, he tries to pick up evidence of an inability to cope with complexity, a lack of connection between experience and decision-making which is characteristic of cortical dysfunction. Now he walked behind me, reached over the top of my head, and tapped the bridge of my nose in a steady rhythm. I blinked once, then stopped. That, he told me, was normal.

“When you tap somebody on the bridge of the nose, it’s reasonable for a person to blink a couple of times, because there is a threat from the outside,” Pincus said. “When it’s clear there is no threat, the subject should be able to accommodate that. But, if the subject blinks more than three times, that’s ‘insufficiency of suppression,’ which may reflect frontal-lobe dysfunction. The inability to accommodate means you can’t adapt to a new situation. There’s a certain rigidity there.”

Arthur Shawcross, who had a cyst pressing on one temporal lobe and scarring in both frontal lobes (probably from, among other things, being hit on the head with a sledgehammer and with a discus, and falling on his head from the top of a forty-foot ladder), used to walk in absolutely straight lines, splashing through puddles instead of walking around them, and he would tear his pants on a barbed-wire fence instead of using a gate a few feet away. That’s the kind of behavior Pincus tries to correlate with abnormalities on the neurological examination. “In the Wisconsin Card Sorting Test, the psychologist shows the subject four playing cards-three red ones, one black one-and asks which doesn’t fit,” Pincus said. “Then he shows the subject, say, the four of diamonds, the four of clubs, the four of hearts, and the three of diamonds. Somebody with frontal-lobe damage who correctly picked out the black one the first time-say, the four of clubs-is going to pick the four of clubs the second time. But the rules have changed. It’s now a three we’re after. We’re going by numbers now, not color. It’s that kind of change that people with frontal-lobe damage can’t make. They can’t change the rules. They get stuck in a pattern. They keep using rules that are demonstrably wrong. Then there’s the word-fluency test. I ask them to name in one minute as many different words as they can think of which begin with the letter ‘f.’ Normal is fourteen, plus or minus five. Anyone who names fewer than nine is abnormal.”

This is not an intelligence test. People with frontal-lobe damage might do just as well as anyone else if they were asked, say, to list the products they might buy in a supermarket. “Under those rules, most people can think of at least sixteen products in a minute and rattle them off,” Pincus said. But that’s a structured test, involving familiar objects, and it’s a test with rules. The thing that people with frontal-lobe damage can’t do is cope with situations where there are no rules, where they have to improvise, where they need to make unfamiliar associations. “Very often, they get stuck on one word- they’ll say ‘four,’ ‘fourteen,’ ‘forty-four,’ ” Pincus said. “They’ll use the same word again and again-‘farm’ and then ‘farming.’ Or, as one fellow in a prison once said to me, ‘fuck,’ ‘fucker,’ ‘fucking.’ They don’t have the ability to come up with something else.”

What’s at stake, fundamentally, with frontal-lobe damage is the question of inhibition. A normal person is able to ignore the tapping after one or two taps, the same way he can ignore being jostled in a crowded bar. A normal person can screen out and dismiss irrelevant aspects of the environment. But if you can’t ignore the tapping, if you can’t screen out every environmental annoyance and stimulus, then you probably can’t ignore being jostled in a bar, either. It’s living life with a hair trigger.

A recent study of two hundred and seventy-nine veterans who suffered penetrating head injuries in Vietnam showed that those with frontal-lobe damage were anywhere from two to six times as violent and aggressive as veterans who had not suffered such injuries. This kind of aggression is what is known as neurological, or organic, rage. Unlike normal anger, it’s not calibrated by the magnitude of the original insult. It’s explosive and uncontrollable, the anger of someone who no longer has the mental equipment to moderate primal feelings of fear and aggression.

“There is a reactivity to it, in which a modest amount of stimulation results in a severe overreaction,” Stuart Yudofsky told me. “Notice that reactivity implies that, for the most part, this behavior is not premeditated. The person is rarely violent and frightening all the time. There are often brief episodes of violence punctuating stretches when the person does not behave violently at all. There is also not any gain associated with organic violence. The person isn’t using the violence to manipulate someone else or get something for himself. The act of violence does just the opposite. It is usually something that causes loss for the individual. He feels that it is out of his control and unlike himself. He doesn’t blame other people for it. He often says, ‘I hate myself for acting this way.’ The first person with organic aggression I ever treated was a man who had been inflating a truck tire when the tire literally exploded and the rim was driven into his prefrontal cortex. He became extraordinarily aggressive. It was totally uncharacteristic: he had been a religious person with strong values. But now he would not only be physically violent-he would curse. When he came to our unit, a nurse offered him some orange juice. He was calm at that moment. But then he realized that the orange juice was warm, and in one quick motion he threw it back at her, knocking her glasses off and injuring her cornea. When we asked him why, he said, ‘The orange juice was warm.’ But he also said, ‘I don’t know what got into me.’ It wasn’t premeditated. It was something that accelerated quickly. He went from zero to a hundred in a millisecond.” At that point, I asked Yudofsky an obvious question. Suppose you had a person from a difficult and disadvantaged background, who had spent much of his life on the football field, getting his head pounded by the helmets of opposing players. Suppose he was involved in a tempestuous on-again, off-again relationship with his ex-wife. Could a vicious attack on her and another man fall into the category of neurological rage? “You’re not the first person to ask that question,” Yudofsky replied dryly, declining to comment further.

Pincus has found that when he examines murderers neurological problems of this kind come up with a frequency far above what would be expected in the general population. For example, Lewis and Pincus published a study of fifteen death-row inmates randomly referred to them for examination; they were able to verify forty-eight separate incidents of significant head injury. Here are the injuries suffered by just the first three murderers examined:

I.
three years: beaten almost to death by father (multiple facial scars)
early childhood: thrown into sink onto head (palpable scar)
late adolescence: one episode of loss of consciousness while boxing

II.
childhood: beaten in head with two-by-fours by parents
childhood: fell into pit, unconscious for several hours
seventeen years: car accident with injury to right eye
eighteen years: fell from roof apparently because of a blackout

III.
six years: glass bottle deliberately dropped onto head from tree (palpable scar on top of cranium)
eight years: hit by car
nine years: fell from platform, received head injury
fourteen years: jumped from moving car, hit head.

4.

Dorothy Lewis’s task is harder than Jonathan Pincus’s. He administers relatively straightforward tests of neurological function. But she is interested in the psychiatric picture, which means getting a murderer to talk about his family, his feelings and behavior, and, perhaps most important, his childhood. It is like a normal therapy session, except that Lewis doesn’t have weeks in which to establish intimacy. She may have only a session or two. On one occasion, when she was visiting a notorious serial killer at San Quentin, she got lucky. “By chance, one of the lawyers had sent me some clippings from the newspaper, where I read that when he was caught he had been carrying some Wagner records,” she told me. “For some reason, that stuck in my mind. The first time I went to see him, I started to approach him and he pointed at me and said, ‘What’s happening on June 18th?’ And I said, ‘That’s the first night PBS is broadcasting “Der Ring des Nibelungen.” ’ You know, we’d studied Wagner at Ethical Culture. Granted, it was a lucky guess. But I showed him some respect, and you can imagine the rapport that engendered.” Lewis says that even after talking for hours with someone guilty of horrendous crimes she never gets nightmares. She seems to be able to separate her everyday life from the task at hand-to draw a curtain between her home and her work. Once, I visited Lewis at her home: she and her husband, Mel, who is a professor of psychiatry at Yale, live in New Haven. The two dote on each other (“When I met Mel, I knew within a week that this was the man I wanted to marry,” she says, flushing, “and I’ve never forgiven him, because it took him two weeks to ask me”), and as soon as I walked in they insisted on giving me a detailed tour of their house, picking up each memento, pointing out their children’s works of art, and retelling the stories behind thirty years of anniversaries and birthdays: sometimes they told their stories in alternating sentences, and sometimes they told a story twice, first from Dorothy’s perspective and then from Mel’s. All in all, it was a full hour of domestic vaudeville. Then Dorothy sat on her couch, with her cat, Ptolemy, on her lap, and began to talk about serial killers, making a seamless transition from the sentimental to the unspeakable.

At the heart of Lewis’s work with murderers is the search for evidence of childhood abuse. She looks for scars. She combs through old medical records for reports of suspicious injuries. She tries to talk to as many family members and friends as possible. She does all this because, of course, child abuse has devastating psychological consequences for children and the adults they become. But there is the more important reason-the one at the heart of the new theory of violence-which is that finding evidence of prolonged child abuse is a key to understanding criminal behavior because abuse also appears to change the anatomy of the brain.

When a child is born, the parts of his brain that govern basic physiological processes-that keep him breathing and keep his heart beating-are fully intact. But a newborn can’t walk, can’t crawl, can’t speak, can’t reason or do much of anything besides sleep and eat, because the higher regions of his brain-the cortex, in particular-aren’t developed yet. In the course of childhood, neurons in the cortex begin to organize themselves-to differentiate and make connections-and that maturation process is in large part responsive to what happens in the child’s environment. Bruce Perry, a psychiatrist at Baylor College of Medicine, has done brain scans of children who have been severely neglected, and has found that their cortical and sub-cortical areas never developed properly, and that, as a result, those regions were roughly twenty or thirty per cent smaller than normal. This kind of underdevelopment doesn’t affect just intelligence; it affects emotional health. “There are parts of the brain that are involved in attachment behavior-the connectedness of one individual to another-and in order for that to be expressed we have to have a certain nature of experience and have that experience at the right time,” Perry told me. “If early in life you are not touched and held and given all the somatosensory stimuli that are associated with what we call love, that part of the brain is not organized in the same way.”

According to Perry, the section of the brain involved in attachment-which he places just below the cortex, in the limbic region-would look different in someone abused or neglected. The wiring wouldn’t be as dense or as complex. “Such a person is literally lacking some brain organization that would allow him to actually make strong connections to other human beings. Remember the orphans in Romania? They’re a classic example of children who, by virtue of not being touched and held and having their eyes gazed into, didn’t get the somatosensory bath. It doesn’t matter how much you love them after age two-they’ve missed that critical window.”

In a well-known paper in the field of child abuse, Mary Main, a psychologist at Berkeley, and Carol George, now at Mills College, studied a group of twenty disadvantaged toddlers, half of whom had been subjected to serious physical abuse and half of whom had not. Main and George were interested in how the toddlers responded to a classmate in distress. What they found was that almost all the nonabused children responded to a crying or otherwise distressed peer with concern or sadness or, alternatively, showed interest and made some attempt to provide comfort. But not one of the abused toddlers showed any concern. At the most, they showed interest. The majority of them either grew distressed and fearful themselves or lashed out with threats, anger, and physical assaults. Here is the study’s description of Martin, an abused boy of two and a half, who-emotionally retarded in the way that Perry describes-seemed incapable of normal interaction with another human being:

Martin . . . tried to take the hand of the crying other child, and when she resisted, he slapped her on the arm with his open hand. He then turned away from her to look at the ground and began vocalizing very strongly. “Cut it out! cut it out!,” each time saying it a little faster and louder. He patted her, but when she became disturbed by his patting, he retreated “hissing at her and baring his teeth.” He then began patting her on the back again, his patting became beating, and he continued beating her despite her screams.

Abuse also disrupts the brain’s stress-response system, with profound results. When something traumatic happens-a car accident, a fight, a piece of shocking news-the brain responds by releasing several waves of hormones, the last of which is cortisol. The problem is that cortisol can be toxic. If someone is exposed to too much stress over too long a time, one theory is that all that cortisol begins to eat away at the organ of the brain known as the hippocampus, which serves as the brain’s archivist: the hippocampus organizes and shapes memories and puts them in context, placing them in space and time and tying together visual memory with sound and smell. J. Douglas Bremner, a psychiatrist at Yale, has measured the damage that cortisol apparently does to the hippocampus by taking M.R.I. scans of the brains of adults who suffered severe sexual or physical abuse as children and comparing them with the brains of healthy adults. An M.R.I. scan is a picture of a cross-section of the brain-as if someone’s head had been cut into thin slices like a tomato, and then each slice had been photographed-and in the horizontal section taken by Bremner the normal hippocampus is visible as two identical golf-ball-size organs, one on the left and one on the right, and each roughly even with the ear. In child-abuse survivors, Bremner found, the golf ball on the left is on average twelve per cent smaller than that of a healthy adult, and the theory is that it was shrunk by cortisol. Lewis says that she has examined murderers with dozens of scars on their backs, and that they have no idea how the scars got there. They can’t remember their abuse, and if you look at Bremner’s scans that memory loss begins to make sense: the archivist in their brain has been crippled.

Abuse also seems to affect the relationship between the left hemisphere of the brain, which plays a large role in logic and language, and the right hemisphere, which is thought to play a large role in creativity and depression. Martin Teicher, a professor of psychiatry at Harvard and McLean Hospital, recently gave EEGs to a hundred and fifteen children who had been admitted to a psychiatric facility, some of whom had a documented history of abuse. Not only did the rate of abnormal EEGs among the abused turn out to be twice that of the non-abused but all those abnormal brain scans turned out to be a result of problems on the left side of the brain. Something in the brain’s stress response, Teicher theorized, was interfering with the balanced development of the brain’s hemispheres.

Then Teicher did M.R.I.s of the brains of a subset of the abused children, looking at what is known as the corpus callosum. This is the fibre tract-the information superhighway-that connects the right and the left hemispheres. Sure enough, he found that parts of the corpus callosum of the abused kids were smaller than they were in the nonabused children. Teicher speculated that these abnormalities were a result of something wrong with the sheathing-the fatty substance, known as myelin, that coats the nerve cells of the corpus callosum. In a healthy person, the myelin helps the neuronal signals move quickly and efficiently. In the abused kids, the myelin seemed to have been eaten away, perhaps by the same excess cortisol that is thought to attack the hippocampus.

Taken together, these changes in brain hardware are more than simple handicaps. They are, in both subtle and fundamental ways, corrosive of self. Richard McNally, a professor of psychology at Harvard, has done memory studies with victims of serious trauma, and he has discovered that people with post-traumatic-stress disorder, or P.T.S.D., show marked impairment in recalling specific autobiographical memories. A healthy trauma survivor, asked to name an instance when he exhibited kindness, says, “Last Friday, I helped a neighbor plow out his driveway.” But a trauma survivor with P.T.S.D. can only say something like “I was kind to people when I was in high school.” This is what seems to happen when your hippocampus shrinks: you can’t find your memories. “The ability to solve problems in the here and now depends on one’s ability to access specific autobiographical memories in which one has encountered similar problems in the past,” McNally says. “It depends on knowing what worked and what didn’t.” With that ability impaired, abuse survivors cannot find coherence in their lives. Their sense of identity breaks down.

It is a very short walk from this kind of psychological picture to a diagnosis often associated with child abuse; namely, dissociative identity disorder, or D.I.D. Victims of child abuse are thought sometimes to dissociate, as a way of coping with their pain, of distancing themselves from their environment, of getting away from the danger they faced. It’s the kind of disconnection that would make sense if a victim’s memories were floating around without context and identification, his left and right hemispheres separated and unequal, and his sense of self fragmented and elusive. It’s also a short walk from here to understanding how someone with such neurological problems could become dangerous. Teicher argues that in some of his EEG and M.R.I. analyses of the imbalance between the left and the right hemispheres he is describing the neurological basis for the polarization so often observed in psychiatrically disturbed patients-the mood swings, the sharply contrasting temperaments. Instead of having two integrated hemispheres, these patients have brains that are, in some sense, divided down the middle. “What you get is a kind of erratic-ness,” says Frank Putnam, who heads the Unit on Developmental Traumatology at the National Institute of Mental Health, in Maryland. “These kinds of people can be very different in one situation compared with another. There is the sense that they don’t have a larger moral compass.”

Several years ago, Lewis and Pincus worked together on an appeal for David Wilson, a young black man on death row in Louisiana. Wilson had been found guilty of murdering a motorist, Stephen Stinson, who had stopped to help when the car Wilson was in ran out of gas on I-10 outside New Orleans; and the case looked, from all accounts, almost impossible to appeal. Wilson had Stinson’s blood on his clothes, in his pocket he had a shotgun shell of the same type and gauge as the one found in the gun at the murder scene, and the prosecution had an eyewitness to the whole shooting. At the trial, Wilson denied that the bloody clothes were his, denied that he had shot Stinson, denied that a tape-recorded statement the prosecution had played for the jury was of his voice, and claimed he had slept through the entire incident. It took the jury thirty-five minutes to convict him of first-degree murder and sixty-five minutes more, in the sentencing phase, to send him to the electric chair.

But when Lewis and Pincus examined him they became convinced that his story was actually much more complicated. In talking to Wilson’s immediate family and other relatives, they gathered evidence that he had never been quite normal-that his personality had always seemed fractured and polarized. His mother recalled episodes from a very early age during which he would become “glassy-eyed” and seem to be someone else entirely. “David had, like, two personalities,” his mother said. At times, he would wander off and be found, later, miles away, she recalled. He would have violent episodes during which he would attack his siblings’ property, and subsequently deny that he had done anything untoward at all. Friends would say that they had seen someone who looked just like Wilson at a bar, but weren’t sure that it had been Wilson, because he’d been acting altogether differently. On other occasions, Wilson would find things in his pockets and have no idea how they got there. He sometimes said he was born in 1955 and at other times said 1948.

What he had, in other words, were the classic symptoms of dissociation, and when Lewis and Pincus dug deeper into his history they began to understand why. Wilson’s medical records detailed a seemingly endless list of hospitalizations for accidents, falls, periods of unconsciousness, and “sunstroke,” dating from the time Wilson was two through his teens-the paper trail of a childhood marked by extraordinary trauma and violence. In his report to Wilson’s attorneys, based on his examination of Wilson, Pincus wrote that there had been “many guns” in the home and that Wilson was often shot at as a child. He was also beaten “with a bull whip, 2×4’s, a hose, pipes, a tree approximately 4 inches in diameter, wire, a piece of steel and belt buckles . . . on his back, legs, abdomen and face,” until “he couldn’t walk.” Sometimes, when the beatings became especially intense, Wilson would have to “escape from the house and live in the fields for as long as two weeks.” A kindly relative would leave food surreptitiously for him. The report goes on:

As a result of his beatings David was ashamed to go to school lest he be seen with welts. He would “lie down in the cold sand in a hut” near his home to recuperate for several days rather than go to school.

At the hearing, Lewis argued that when Wilson said he had no memory of shooting Stinson he was actually telling the truth. The years of abuse had hurt his ability to retrieve memories. Lewis also argued that Wilson had a violent side that he was, quite literally, unaware of; that he had the classic personality polarization of the severely abused who develop dissociative identity disorder. Lewis has videotapes of her sessions with Wilson: he is a handsome man with long fine black hair, sharply defined high cheekbones, and large, soft eyes. In the videotapes, he looks gentle. “During the hearing,” Lewis recalls, “I was testifying, and I looked down at the defense table and David wasn’t there. You know, David is a sweetie. He has a softness and a lovable quality. Instead, seated in his place there was this glowering kind of character, and I interrupted myself. I said, ‘Excuse me, Your Honor, I just wanted to call to your attention that that is not David.’ Everyone just looked.” In the end, the judge vacated Wilson’s death sentence.

Lewis talks a great deal about the Wilson case. It is one of the few instances in which she and Pincus succeeded in saving a defendant from the death penalty, and when she talks about what happened she almost always uses one of her favorite words-“poignant,” spoken with a special emphasis, with a hesitation just before and just afterward. “In the course of evaluating someone, I always look for scars,” Lewis told me. We were sitting in her Bellevue offices, watching the video of her examination of Wilson, and she was remembering the poignant moment she first met him. “Since I was working with a male psychologist, I said to him, ‘Would you be good enough to go into the bathroom and look at David’s back?’ So he did that, and then he came back out and said, ‘Dorothy! You must come and see this.’ David had scars all over his back and chest. Burn marks. Beatings. I’ve seen a lot. But that was really grotesque.”

5.

Abuse, in and of itself, does not necessarily result in violence, any more than neurological impairment or psychosis does. Lewis and Pincus argue, however, that if you mix these conditions together they become dangerous, that they have a kind of pathological synergy, that, like the ingredients of a bomb, they are troublesome individually but explosive in combination.

Several years ago, Lewis and some colleagues did a follow-up study of ninety-five male juveniles she and Pincus had first worked with in the late nineteen-seventies, in Connecticut. She broke the subjects into several groups: Group 1 consisted of those who did not have psychiatric or neurological vulnerabilities or an abusive childhood; Group 2 consisted of those with vulnerabilities but no abuse at home; Group 3 consisted of those with abuse but no vulnerabilities; Group 4 consisted of those with abuse and extensive vulnerabilities. Seven years later, as adults, those in Group 1 had been arrested for an average of just over two criminal offenses, none of which were violent, so the result was essentially no jail time. Group 2, the psychiatrically or neurologically impaired kids, had been convicted of an average of almost ten offenses, two of which were violent, the result being just under a year of jail time. Group 3, the abused kids, had 11.9 offenses, 1.9 of them violent, the result being five hundred and sixty-two days in jail. But the children of Group 4, who had both vulnerabilities and abuse, were in another league entirely. In the intervening seven years, they had been arrested for, on average, 16.8 crimes, 5.4 of which were violent, the result being a thousand two hundred and fourteen days in prison.

In another study on this topic, a University of Southern California psychologist named Adrian Raine looked at four thousand two hundred and sixty-nine male children born and living in Denmark, and classified them according to two variables. The first was whether there were complications at birth-which correlates, loosely, with neurological impairment. The second was whether the child had been rejected by the mother (whether the child was unplanned, unwanted, and so forth)-which correlates, loosely, with abuse and neglect. Looking back eighteen years later, Raine found that those children who had not been rejected and had had no birth complications had roughly the same chance of becoming criminally violent as those with only one of the risk factors-around three per cent. For the children with both complications and rejection, however, the risk of violence tripled: in fact, the children with both problems accounted for eighteen per cent of all the violent crimes, even though they made up only 4.5 per cent of the group.

There is in these statistics a powerful and practical suggestion for how to prevent crime. In the current ideological climate, liberals argue that fighting crime requires fighting poverty, and conservatives argue that fighting crime requires ever more police and prisons; both of these things may be true, but both are also daunting. The studies suggest that there may be instances in which more modest interventions can bring large dividends. Criminal behavior that is associated with specific neurological problems is behavior that can, potentially, be diagnosed and treated like any other illness. Already, for example, researchers have found drugs that can mimic the cortical function of moderating violent behavior. The work is preliminary but promising. “We are on the cusp of a revolution in treating these conditions,” Stuart Yudofsky told me. “We can use anticonvulsants, antidepressants, antihypertensive medications. There are medications out there that are F.D.A.-approved for other conditions which have profound effects on mitigating aggression.” At the prevention end, as well, there’s a strong argument for establishing aggressive child-abuse-prevention programs. Since 1992, for example, the National Committee to Prevent Child Abuse, a not-for-profit advocacy group based in Chicago, has been successfully promoting a program called Healthy Families America, which, working with hospitals, prenatal clinics, and physicians, identifies mothers in stressful and potentially abusive situations either before they give birth or immediately afterward, and then provides them with weekly home visits, counselling, and support for as long as five years. The main thing holding back nationwide adoption of programs like this is money: Healthy Families America costs up to two thousand dollars per family per year, but if we view it as a crime-prevention measure that’s not a large sum.

These ideas, however, force a change in the way we think about criminality. Advances in the understanding of human behavior are necessarily corrosive of the idea of free will. That much is to be expected, and it is why courts have competency hearings, and legal scholars endlessly debate the definition and the use of the insanity defense. But the new research takes us one step further. If the patient of Yudofsky’s who lashed out at his nurse because his orange juice was warm had, in the process, accidentally killed her, could we really hold him criminally responsible? Yudofsky says that that scenario is no different from one involving a man who is driving a car, has a heart attack, and kills a pedestrian. “Would you put him in jail?” he asks. Or consider Joseph Paul Franklin. By all accounts, he suffered through a brutal childhood on a par with that of David Wilson. What if he has a lesion on one of his frontal lobes, an atrophied hippocampus, a damaged and immature corpus callosum, a maldeveloped left hemisphere, a lack of synaptic complexity in the precortical limbic area, a profound left-right hemisphere split? What if in his remorselessness he was just the grownup version of the little boy Martin, whose ability to understand and relate to others was so retarded that he kept on hitting and hitting, even after the screams began? What if a history of abuse had turned a tendency toward schizophrenia-recall Franklin’s colorful delusions-from a manageable impairment into the engine of murderousness? Such a person might still be sane, according to the strict legal definition. But that kind of medical diagnosis suggests, at the very least, that his ability to live by the rules of civilized society, and to understand and act on the distinctions between right and wrong, is quite different from that of someone who had a normal childhood and a normal brain.

What is implied by these questions is a far broader debate over competency and responsibility-an attempt to make medical considerations far more central to the administration of justice, so that we don’t bring in doctors only when the accused seems really crazy but, rather, bring in doctors all the time, to add their expertise to the determination of responsibility.

One of the state-of-the-art diagnostic tools in neurology and psychiatry is the PET scan, a computerized X-ray that tracks the movement and rate of the body’s metabolism. When you sing, for instance, the neurons in the specific regions that govern singing will start to fire. Blood will flow toward those regions, and if you take a PET scan at that moment the specific areas responsible for singing will light up on the PET computer monitor. Bremner, at Yale, has done PET scans of Vietnam War veterans suffering from post-traumatic-stress disorder. As he scanned the vets, he showed them a set of slides of Vietnam battle scenes accompanied by an appropriate soundtrack of guns and helicopters. Then he did the same thing with vets who were not suffering from P.T.S.D. Bremner printed out the results of the comparison for me, and they are fascinating. The pictures are color-coded. Blue shows the parts of the brain that were being used identically in the two groups of veterans, and most of each picture is blue. A few parts are light blue or green, signifying that the P.T.S.D. vets were using those regions a little less than the healthy vets were. The key color, however, is white. White shows brain areas that the healthy vets were using as they watched the slide show and the unhealthy vets were hardly using at all; in Bremner’s computer printout, there is a huge white blob in the front of every non-P.T.S.D. scan.

“That’s the orbitofrontal region,” Bremner told me. “It’s responsible for the extinction of fear.” The orbitofrontal region is the part of your brain that evaluates the primal feelings of fear and anxiety which come up from the brain’s deeper recesses. It’s the part that tells you that you’re in a hospital watching a slide show of the Vietnam War, not in Vietnam living through the real thing. The vets with P.T.S.D. weren’t using that part of their brain. That’s why every time a truck backfires or they see a war picture in a magazine they are forced to relive their wartime experiences: they can’t tell the difference.

It doesn’t take much imagination to see that this technique might someday be used to evaluate criminals-to help decide whether to grant parole, for example, or to find out whether some kind of medical treatment might aid reëntry into normal society. We appear to be creating a brand-new criminal paradigm: the research suggests that instead of thinking about and categorizing criminals merely by their acts-murder, rape, armed robbery, and so on-we ought to categorize criminals also according to their disabilities, so that a murderer with profound neurological damage and a history of vicious childhood abuse is thought of differently from a murderer with no brain damage and mild child abuse, who is, in turn, thought of differently from a murderer with no identifiable impairment at all. This is a more flexible view. It can be argued that it is a more sophisticated view. But even those engaged in such research-for example, Pincus-confess to discomfort at its implications, since something is undoubtedly lost in the translation. The moral force of the old standard, after all, lay in its inflexibility. Murder was murder, and the allowances made for aggravated circumstances were kept to a minimum. Is a moral standard still a moral standard when it is freighted with exceptions and exemptions and physiological equivocation?

When Lewis went to see Bundy, in Florida, on the day before his execution, she asked him why he had invited her-out of a great many people lining up outside his door-to see him. He answered, “Because everyone else wants to know what I did. You are the only one who wants to know why I did it.” It’s impossible to be sure what the supremely manipulative Bundy meant by this: whether he genuinely appreciated Lewis, or whether he simply regarded her as his last conquest. What is clear is that, over the four or five times they met in Bundy’s last years, the two reached a curious understanding: he was now part of her scientific enterprise.

“I wasn’t writing a book about him,” Lewis recalls. “That he knew. The context in which he had first seen me was a scientific study, and this convinced him that I wasn’t using him. In the last meeting, as I recall, he said that he wanted any material that I found out about him to be used to understand what causes people to be violent. We even discussed whether he would allow his brain to be studied. It was not an easy thing to talk about with him, let me tell you.” At times, Lewis says, Bundy was manic, “high as a kite.” On one occasion, he detailed to her just how he had killed a woman, and, on another occasion, he stared at her and stated flatly, “The man sitting across from you did not commit any murders.” But she says that at the end she sensed a certain breakthrough. “The day before he was executed, he asked me to turn off the tape recorder. He said he wanted to tell me things that he didn’t want recorded, so I didn’t record them. It was very confidential.” To this day, Lewis has never told anyone what Bundy said. There is something almost admirable about this. But there is also something strange about extending the physician-patient privilege to a killer like Bundy-about turning the murderer so completely into a patient. It is not that the premise is false, that murderers can’t also be patients. It’s just that once you make that leap-once you turn the criminal into an object of medical scrutiny-the crime itself inevitably becomes pushed aside and normalized. The difference between a crime of evil and a crime of illness is the difference between a sin and a symptom. And symptoms don’t intrude in the relationship between the murderer and the rest of us: they don’t force us to stop and observe the distinctions between right and wrong, between the speakable and the unspeakable, the way sins do. It was at the end of that final conversation that Bundy reached down and kissed Lewis on the cheek. But that was not all that happened. 
Lewis then reached up, put her arms around him, and kissed him back.

The Coolhunt
Who decides what’s cool? Certain kids in certain places–and only the coolhunters know who they are.

1.

Baysie Wightman met DeeDee Gordon, appropriately enough, on a coolhunt. It was 1992. Baysie was a big shot for Converse, and DeeDee, who was barely twenty-one, was running a very cool boutique called Placid Planet, on Newbury Street in Boston. Baysie came in with a camera crew-one she often used when she was coolhunting-and said, “I’ve been watching your store, I’ve seen you, I’ve heard you know what’s up,” because it was Baysie’s job at Converse to find people who knew what was up and she thought DeeDee was one of those people. DeeDee says that she responded with reserve-that “I was like, ‘Whatever’ “-but Baysie said that if DeeDee ever wanted to come and work at Converse she should just call, and nine months later DeeDee called. This was about the time the cool kids had decided they didn’t want the hundred-and-twenty-five-dollar basketball sneaker with seventeen different kinds of high-technology materials and colors and air-cushioned heels anymore. They wanted simplicity and authenticity, and Baysie picked up on that. She brought back the Converse One Star, which was a vulcanized, suède, low-top classic old-school sneaker from the nineteen-seventies, and, sure enough, the One Star quickly became the signature shoe of the retro era. Remember what Kurt Cobain was wearing in the famous picture of him lying dead on the ground after committing suicide? Black Converse One Stars. DeeDee’s big score was calling the sandal craze. She had been out in Los Angeles and had kept seeing the white teen-age girls dressing up like cholos, Mexican gangsters, in tight white tank tops known as “wife beaters,” with a bra strap hanging out, and long shorts and tube socks and shower sandals. DeeDee recalls, “I’m like, ‘I’m telling you, Baysie, this is going to hit. There are just too many people wearing it. 
We have to make a shower sandal.’ ” So Baysie, DeeDee, and a designer came up with the idea of making a retro sneaker-sandal, cutting the back off the One Star and putting a thick outsole on it. It was huge, and, amazingly, it’s still huge.

Today, Baysie works for Reebok as general-merchandise manager-part of the team trying to return Reebok to the position it enjoyed in the mid-nineteen-eighties as the country’s hottest sneaker company. DeeDee works for an advertising agency in Del Mar called Lambesis, where she puts out a quarterly tip sheet called the L Report on what the cool kids in major American cities are thinking and doing and buying. Baysie and DeeDee are best friends. They talk on the phone all the time. They get together whenever Baysie is in L.A. (DeeDee: “It’s, like, how many times can you drive past O. J. Simpson’s house?”), and between them they can talk for hours about the art of the coolhunt. They’re the Lewis and Clark of cool.

What they have is what everybody seems to want these days, which is a window on the world of the street. Once, when fashion trends were set by the big couture houses-when cool was trickle-down-that wasn’t important. But sometime in the past few decades things got turned over, and fashion became trickle-up. It’s now about chase and flight-designers and retailers and the mass consumer giving chase to the elusive prey of street cool-and the rise of coolhunting as a profession shows how serious the chase has become. The sneakers of Nike and Reebok used to come out yearly. Now a new style comes out every season. Apparel designers used to have an eighteen-month lead time between concept and sale. Now they’re reducing that to a year, or even six months, in order to react faster to new ideas from the street. The paradox, of course, is that the better coolhunters become at bringing the mainstream close to the cutting edge, the more elusive the cutting edge becomes. This is the first rule of the cool: The quicker the chase, the quicker the flight. The act of discovering what’s cool is what causes cool to move on, which explains the triumphant circularity of coolhunting: because we have coolhunters like DeeDee and Baysie, cool changes more quickly, and because cool changes more quickly, we need coolhunters like DeeDee and Baysie.

DeeDee is tall and glamorous, with short hair she has dyed so often that she claims to have forgotten her real color. She drives a yellow 1977 Trans Am with a burgundy stripe down the center and a 1973 Mercedes 450 SL, and lives in a spare, Japanese-style cabin in Laurel Canyon. She uses words like “rad” and “totally,” and offers non-stop, deadpan pronouncements on pop culture, as in “It’s all about Pee-wee Herman.” She sounds at first like a teen, like the very teens whom, at Lambesis, it is her job to follow. But teen speech-particularly girl-teen speech, with its fixation on reported speech (“so she goes,” “and I’m like,” “and he goes”) and its stock vocabulary of accompanying grimaces and gestures-is about using language less to communicate than to fit in. DeeDee uses teen speech to set herself apart, and the result is, for lack of a better word, really cool. She doesn’t do the teen thing of climbing half an octave at the end of every sentence. Instead, she drags out her vowels for emphasis, so that if she mildly disagreed with something I’d said she would say “Maalcolm” and if she strongly disagreed with what I’d said she would say “Maaalcolm.”

Baysie is older, just past forty (although you would never guess that), and went to Exeter and Middlebury and had two grandfathers who went to Harvard (although you wouldn’t guess that, either). She has curly brown hair and big green eyes and long legs and so much energy that it is hard to imagine her asleep, or resting, or even standing still for longer than thirty seconds. The hunt for cool is an obsession with her, and DeeDee is the same way. DeeDee used to sit on the corner of West Broadway and Prince in SoHo-back when SoHo was cool-and take pictures of everyone who walked by for an entire hour. Baysie can tell you precisely where she goes on her Reebok coolhunts to find the really cool alternative white kids (“I’d maybe go to Portland and hang out where the skateboarders hang out near that bridge”) or which snowboarding mountain has cooler kids-Stratton, in Vermont, or Summit County, in Colorado. (Summit, definitely.) DeeDee can tell you on the basis of the L Report’s research exactly how far Dallas is behind New York in coolness (from six to eight months). Baysie is convinced that Los Angeles is not happening right now: “In the early nineteen-nineties a lot more was coming from L.A. They had a big trend with the whole Melrose Avenue look-the stupid goatees, the shorter hair. It was cleaned-up aftergrunge. There were a lot of places you could go to buy vinyl records. It was a strong place to go for looks. Then it went back to being horrible.” DeeDee is convinced that Japan is happening: “I linked onto this future-technology thing two years ago. Now look at it, it’s huge. It’s the whole resurgence of Nike-Nike being larger than life. I went to Japan and saw the kids just bailing the most technologically advanced Nikes with their little dresses and little outfits and I’m like, ‘Whoa, this is trippy!’ It’s performance mixed with fashion. 
It’s really superheavy.” Baysie has a theory that Liverpool is cool right now because it’s the birthplace of the whole “lad” look, which involves soccer blokes in the pubs going superdressy and wearing Dolce & Gabbana and Polo Sport and Reebok Classics on their feet. But when I asked DeeDee about that, she just rolled her eyes: “Sometimes Baysie goes off on these tangents. Man, I love that woman!”

I used to think that if I talked to Baysie and DeeDee long enough I could write a coolhunting manual, an encyclopedia of cool. But then I realized that the manual would have so many footnotes and caveats that it would be unreadable. Coolhunting is not about the articulation of a coherent philosophy of cool. It’s just a collection of spontaneous observations and predictions that differ from one moment to the next and from one coolhunter to the next. Ask a coolhunter where the baggy-jeans look came from, for example, and you might get any number of answers: urban black kids mimicking the jailhouse look, skateboarders looking for room to move, snowboarders trying not to look like skiers, or, alternatively, all three at once, in some grand concordance.

Or take the question of exactly how Tommy Hilfiger-a forty-five-year-old white guy from Greenwich, Connecticut, doing all-American preppy clothes-came to be the designer of choice for urban black America. Some say it was all about the early and visible endorsement given Hilfiger by the hip-hop auteur Grand Puba, who wore a dark-green-and-blue Tommy jacket over a white Tommy T-shirt as he leaned on his black Lamborghini on the cover of the hugely influential “Grand Puba 2000” CD, and whose love for Hilfiger soon spread to other rappers. (Who could forget the rhymes of Mobb Deep? “Tommy was my nigga / And couldn’t figure / How me and Hilfiger / used to move through with vigor.”) Then I had lunch with one of Hilfiger’s designers, a twenty-six-year-old named Ulrich (Ubi) Simpson, who has a Puerto Rican mother and a Dutch-Venezuelan father, plays lacrosse, snowboards, surfs the long board, goes to hip-hop concerts, listens to Jungle, Edith Piaf, opera, rap, and Metallica, and has working with him on his design team a twenty-seven-year-old black guy from Montclair with dreadlocks, a twenty-two-year-old Asian-American who lives on the Lower East Side, a twenty-five-year-old South Asian guy from Fiji, and a twenty-one-year-old white graffiti artist from Queens. That’s when it occurred to me that maybe the reason Tommy Hilfiger can make white culture cool to black culture is that he has people working for him who are cool in both cultures simultaneously. Then again, maybe it was all Grand Puba. Who knows?

One day last month, Baysie took me on a coolhunt to the Bronx and Harlem, lugging a big black canvas bag with twenty-four different shoes that Reebok is about to bring out, and as we drove down Fordham Road, she had her head out the window like a little kid, checking out what everyone on the street was wearing. We went to Dr. Jay’s, which is the cool place to buy sneakers in the Bronx, and Baysie crouched down on the floor and started pulling the shoes out of her bag one by one, soliciting opinions from customers who gathered around and asking one question after another, in rapid sequence. One guy she listened closely to was maybe eighteen or nineteen, with a diamond stud in his ear and a thin beard. He was wearing a Polo baseball cap, a brown leather jacket, and the big, oversized leather boots that are everywhere uptown right now. Baysie would hand him a shoe and he would hold it, look at the top, and move it up and down and flip it over. The first one he didn’t like: “Oh-kay.” The second one he hated: he made a growling sound in his throat even before Baysie could give it to him, as if to say, “Put it back in the bag-now!” But when she handed him a new DMX RXT-a low-cut run/walk shoe in white and blue and mesh with a translucent “ice” sole, which retails for a hundred and ten dollars-he looked at it long and hard and shook his head in pure admiration and just said two words, dragging each of them out: “No doubt.”

Baysie was interested in what he was saying, because the DMX RXT she had was a girls’ shoe that actually hadn’t been doing all that well. Later, she explained to me that the fact that the boys loved the shoe was critical news, because it suggested that Reebok had a potential hit if it just switched the shoe to the men’s section. How she managed to distill this piece of information from the crowd of teenagers around her, how she made any sense of the two dozen shoes in her bag, most of which (to my eyes, anyway) looked pretty much the same, and how she knew which of the teens to really focus on was a mystery. Baysie is a Wasp from New England, and she crouched on the floor in Dr. Jay’s for almost an hour, talking and joking with the homeboys without a trace of condescension or self-consciousness.

Near the end of her visit, a young boy walked up and sat down on the bench next to her. He was wearing a black woollen cap with white stripes pulled low, a blue North Face pleated down jacket, a pair of baggy Guess jeans, and, on his feet, Nike Air Jordans. He couldn’t have been more than thirteen. But when he started talking you could see Baysie’s eyes light up, because somehow she knew the kid was the real thing.

“How many pairs of shoes do you buy a month?” Baysie asked.

“Two,” the kid answered. “And if at the end I find one more I like I get to buy that, too.”

Baysie was onto him. “Does your mother spoil you?”

The kid blushed, but a friend next to him was laughing. “Whatever he wants, he gets.”

Baysie laughed, too. She had the DMX RXT in his size. He tried them on. He rocked back and forth, testing them. He looked back at Baysie. He was dead serious now: “Make sure these come out.”

Baysie handed him the new “Rush” Emmitt Smith shoe due out in the fall. One of the boys had already pronounced it “phat,” and another had looked through the marbleized-foam cradle in the heel and cried out in delight, “This is bug!” But this kid was the acid test, because this kid knew cool. He paused. He looked at it hard. “Reebok,” he said, soberly and carefully, “is trying to get butter.”

In the car on the way back to Manhattan, Baysie repeated it twice. “Not better. Butter! That kid could totally tell you what he thinks.” Baysie had spent an hour coolhunting in a shoe store and found out that Reebok’s efforts were winning the highest of hip-hop praise. “He was so fucking smart.”

2.

If you want to understand how trends work, and why coolhunters like Baysie and DeeDee have become so important, a good place to start is with what’s known as diffusion research, which is the study of how ideas and innovations spread. Diffusion researchers do things like spending five years studying the adoption of irrigation techniques in a Colombian mountain village, or developing complex matrices to map the spread of new math in the Pittsburgh school system. What they do may seem like a far cry from, say, how the Tommy Hilfiger thing spread from Harlem to every suburban mall in the country, but it really isn’t: both are about how new ideas spread from one person to the next.

One of the most famous diffusion studies is Bruce Ryan and Neal Gross’s analysis of the spread of hybrid seed corn in Greene County, Iowa, in the nineteen-thirties. The new seed corn was introduced there in about 1928, and it was superior in every respect to the seed that had been used by farmers for decades. But it wasn’t adopted all at once. Of two hundred and fifty-nine farmers studied by Ryan and Gross, only a handful had started planting the new seed by 1933. In 1934, sixteen took the plunge. In 1935, twenty-one more followed; the next year, there were thirty-six, and the year after that a whopping sixty-one. The succeeding figures were then forty-six, thirty-six, fourteen, and three, until, by 1941, all but two of the two hundred and fifty-nine farmers studied were using the new seed. In the language of diffusion research, the handful of farmers who started trying hybrid seed corn at the very beginning of the thirties were the “innovators,” the adventurous ones. The slightly larger group that followed them was the “early adopters.” They were the opinion leaders in the community, the respected, thoughtful people who watched and analyzed what those wild innovators were doing and then did it themselves. Then came the big bulge of farmers in 1936, 1937, and 1938-the “early majority” and the “late majority,” which is to say the deliberate and the skeptical masses, who would never try anything until the most respected farmers had tried it. Only after they had been converted did the “laggards,” the most traditional of all, follow suit. The critical thing about this sequence is that it is almost entirely interpersonal. According to Ryan and Gross, only the innovators relied to any great extent on radio advertising and farm journals and seed salesmen in making their decision to switch to the hybrid. Everyone else made his decision overwhelmingly because of the example and the opinions of his neighbors and peers.

Isn’t this just how fashion works? A few years ago, the classic brushed-suède Hush Puppies with the lightweight crêpe sole-the moc-toe oxford known as the Duke and the slip-on with the golden buckle known as the Columbia-were selling barely sixty-five thousand pairs a year. The company was trying to walk away from the whole suède casual look entirely. It wanted to do “aspirational” shoes: “active casuals” in smooth leather, like the Mall Walker, with a Comfort Curve technology outsole and a heel stabilizer-the kind of shoes you see in Kinney’s for $39.95. But then something strange started happening. Two Hush Puppies executives-Owen Baxter and Jeff Lewis-were doing a fashion shoot for their Mall Walkers and ran into a creative consultant from Manhattan named Jeffrey Miller, who informed them that the Dukes and the Columbias weren’t dead, they were dead chic. “We were being told,” Baxter recalls, “that there were areas in the Village, in SoHo, where the shoes were selling-in resale shops-and that people were wearing the old Hush Puppies. They were going to the ma-and-pa stores, the little stores that still carried them, and there was this authenticity of being able to say, ‘I am wearing an original pair of Hush Puppies.’ ”

Baxter and Lewis-tall, solid, fair-haired Midwestern guys with thick, shiny wedding bands-are shoe men, first and foremost. Baxter was working the cash register at his father’s shoe store in Mount Prospect, Illinois, at the age of thirteen. Lewis was doing inventory in his father’s shoe store in Pontiac, Michigan, at the age of seven. Baxter was in the National Guard during the 1968 Democratic Convention, in Chicago, and was stationed across the street from the Conrad Hilton downtown, right in the middle of things. Today, the two men work out of Rockford, Michigan (population thirty-eight hundred), where Hush Puppies has been making the Dukes and the Columbias in an old factory down by the Rogue River for almost forty years. They took me to the plant when I was in Rockford. In a crowded, noisy, low-slung building, factory workers stand in long rows, gluing, stapling, and sewing together shoes in dozens of bright colors, and the two executives stopped at each production station and described it in detail. Lewis and Baxter know shoes. But they would be the first to admit that they don’t know cool. “Miller was saying that there is something going on with the shoes-that Isaac Mizrahi was wearing the shoes for his personal use,” Lewis told me. We were seated around the conference table in the Hush Puppies headquarters in Rockford, with the snow and the trees outside and a big water tower behind us. “I think it’s fair to say that at the time we had no idea who Isaac Mizrahi was.”

By late 1994, things had begun to happen in a rush. First, the designer John Bartlett called. He wanted to use Hush Puppies as accessories in his spring collection. Then Anna Sui called. Miller, the man from Manhattan, flew out to Michigan to give advice on a new line (“Of course, packing my own food and thinking about ‘Fargo’ in the corner of my mind”). A few months later, in Los Angeles, the designer Joel Fitzpatrick put a twenty-five-foot inflatable basset hound on the roof of his store on La Brea Avenue and gutted his adjoining art gallery to turn it into a Hush Puppies department, and even before he opened-while he was still painting and putting up shelves-Pee-wee Herman walked in and asked for a couple of pairs. Pee-wee Herman! “It was total word of mouth. I didn’t even have a sign back then,” Fitzpatrick recalls. In 1995, the company sold four hundred and thirty thousand pairs of the classic Hush Puppies. In 1996, it sold a million six hundred thousand, and that was only scratching the surface, because in Europe and the rest of the world, where Hush Puppies have a huge following-where they might outsell the American market four to one-the revival was just beginning.

The cool kids who started wearing old Dukes and Columbias from thrift shops were the innovators. Pee-wee Herman, wandering in off the street, was an early adopter. The million six hundred thousand people who bought Hush Puppies last year are the early majority, jumping in because the really cool people have already blazed the trail. Hush Puppies are moving through the country just the way hybrid seed corn moved through Greene County-all of which illustrates what coolhunters can and cannot do. If Jeffrey Miller had been wrong-if cool people hadn’t been digging through the thrift shops for Hush Puppies-and he had arbitrarily decided that Baxter and Lewis should try to convince non-cool people that the shoes were cool, it wouldn’t have worked. You can’t convince the late majority that Hush Puppies are cool, because the late majority makes its coolness decisions on the basis of what the early majority is doing, and you can’t convince the early majority, because the early majority is looking at the early adopters, and you can’t convince the early adopters, because they take their cues from the innovators. The innovators do get their cool ideas from people other than their peers, but the fact is that they are the last people who can be convinced by a marketing campaign that a pair of suède shoes is cool. These are, after all, the people who spent hours sifting through thrift-store bins. And why did they do that? Because their definition of cool is doing something that nobody else is doing. A company can intervene in the cool cycle. It can put its shoes on really cool celebrities and on fashion runways and on MTV. It can accelerate the transition from the innovator to the early adopter and on to the early majority. But it can’t just manufacture cool out of thin air, and that’s the second rule of cool.

At the peak of the Hush Puppies craziness last year, Hush Puppies won the prize for best accessory at the Council of Fashion Designers’ awards dinner, at Lincoln Center. The award was accepted by the Hush Puppies president, Louis Dubrow, who came out wearing a pair of custom-made black patent-leather Hush Puppies and stood there blinking and looking at the assembled crowd as if it were the last scene of “Close Encounters of the Third Kind.” It was a strange moment. There was the president of the Hush Puppies company, of Rockford, Michigan, population thirty-eight hundred, sharing a stage with Calvin Klein and Donna Karan and Isaac Mizrahi-and all because some kids in the East Village began combing through thrift shops for old Dukes. Fashion was at the mercy of those kids, whoever they were, and it was a wonderful thing if the kids picked you, but a scary thing, too, because it meant that cool was something you could not control. You needed someone to find cool and tell you what it was.

3.

When Baysie Wightman went to Dr. Jay’s, she was looking for customer response to the new shoes Reebok had planned for the fourth quarter of 1997 and the first quarter of 1998. This kind of customer testing is critical at Reebok, because the last decade has not been kind to the company. In 1987, it had a third of the American athletic-shoe market, well ahead of Nike. Last year, it had sixteen per cent. “The kid in the store would say, ‘I’d like this shoe if your logo wasn’t on it,’ ” E. Scott Morris, who’s a senior designer for Reebok, told me. “That’s kind of a punch in the mouth. But we’ve all seen it. You go into a shoe store. The kid picks up the shoe and says, ‘Ah, man, this is nice.’ He turns the shoe around and around. He looks at it underneath. He looks at the side and he goes, ‘Ah, this is Reebok,’ and says, ‘I ain’t buying this,’ and puts the shoe down and walks out. And you go, ‘You was just digging it a minute ago. What happened?’ ” Somewhere along the way, the company lost its cool, and Reebok now faces the task not only of rebuilding its image but of making the shoes so cool that the kids in the store can’t put them down.

Every few months, then, the company’s coolhunters go out into the field with prototypes of the upcoming shoes to find out what kids really like, and come back to recommend the necessary changes. The prototype of one recent Emmitt Smith shoe, for example, had a piece of molded rubber on the end of the tongue as a design element; it was supposed to give the shoe a certain “richness,” but the kids said they thought it looked overbuilt. Then Reebok gave the shoes to the Boston College football team for wear-testing, and when they got the shoes back they found out that all the football players had cut out the rubber component with scissors. As messages go, this was hard to miss. The tongue piece wasn’t cool, and on the final version of the shoe it was gone. The rule of thumb at Reebok is that if the kids in Chicago, New York, and Detroit all like a shoe, it’s a guaranteed hit. More than likely, though, the coolhunt is going to turn up subtle differences from city to city, so that once the coolhunters come back the designers have to find out some way to synthesize what was heard, and pick out just those things that all the kids seemed to agree on. In New York, for example, kids in Harlem are more sophisticated and fashion-forward than kids in the Bronx, who like things a little more colorful and glitzy. Brooklyn, meanwhile, is conservative and preppy, more like Washington, D.C. For reasons no one really knows, Reeboks are coolest in Philadelphia. In Philly, in fact, the Reebok Classics are so huge they are known simply as National Anthems, as in “I’ll have a pair of blue Anthems in nine and a half.” Philadelphia is Reebok’s innovator town. From there trends move along the East Coast, trickling all the way to Charlotte, North Carolina.

Reebok has its headquarters in Stoughton, Massachusetts, outside Boston-in a modern corporate park right off Route 24. There are basketball and tennis courts next to the building, and a health club on the ground floor that you can look directly into from the parking lot. The front lobby is adorned with shrines for all of Reebok’s most prominent athletes-shrines complete with dramatic action photographs, their sports jerseys, and a pair of their signature shoes-and the halls are filled with so many young, determinedly athletic people that when I visited Reebok headquarters I suddenly wished I’d packed my gym clothes in case someone challenged me to wind sprints. At Stoughton, I met with a handful of the company’s top designers and marketing executives in a long conference room on the third floor. In the course of two hours, they put one pair of shoes after another on the table in front of me, talking excitedly about each sneaker’s prospects, because the feeling at Reebok is that things are finally turning around. The basketball shoe that Reebok brought out last winter for Allen Iverson, the star rookie guard for the Philadelphia 76ers, for example, is one of the hottest shoes in the country. Dr. Jay’s sold out of Iversons in two days, compared with the week it took the store to sell out of Nike’s new Air Jordans. Iverson himself is brash and charismatic and faster from foul line to foul line than anyone else in the league. He’s the equivalent of those kids in the East Village who began wearing Hush Puppies way back when. He’s an innovator, and the hope at Reebok is that if he gets big enough the whole company can ride back to coolness on his coattails, the way Nike rode to coolness on the coattails of Michael Jordan. That’s why Baysie was so excited when the kid said Reebok was trying to get butter when he looked at the Rush and the DMX RXT: it was a sign, albeit a small one, that the indefinable, abstract thing called cool was coming back.

When Baysie comes back from a coolhunt, she sits down with marketing experts and sales representatives and designers, and reconnects them to the street, making sure they have the right shoes going to the right places at the right price. When she got back from the Bronx, for example, the first thing she did was tell all these people they had to get a new men’s DMX RXT out, fast, because the kids on the street loved the women’s version. “It’s hotter than we realized,” she told them. The coolhunter’s job in this instance is very specific. What DeeDee does, on the other hand, is a little more ambitious. With the L Report, she tries to construct a kind of grand matrix of cool, comprising not just shoes but everything kids like, and not just kids of certain East Coast urban markets but kids all over. DeeDee and her staff put it out four times a year, in six different versions-for New York, Los Angeles, San Francisco, Austin-Dallas, Seattle, and Chicago-and then sell it to manufacturers, retailers, and ad agencies (among others) for twenty thousand dollars a year. They go to each city and find the coolest bars and clubs, and ask the coolest kids to fill out questionnaires. The information is then divided into six categories-You Saw It Here First, Entertainment and Leisure, Clothing and Accessories, Personal and Individual, Aspirations, and Food and Beverages-which are, in turn, broken up into dozens of subcategories, so that Personal and Individual, for example, includes Cool Date, Cool Evening, Free Time, Favorite Possession, and on and on. The information in those subcategories is subdivided again by sex and by age bracket (14-18, 19-24, 25-30), and then, as a control, the L Report gives you the corresponding set of preferences for “mainstream” kids.

Few coolhunters bother to analyze trends with this degree of specificity. DeeDee’s biggest competitor, for example, is something called the Hot Sheet, out of Manhattan. It uses a panel of three thousand kids a year from across the country and divides up their answers by sex and age, but it doesn’t distinguish between regions, or between trendsetting and mainstream respondents. So what you’re really getting is what all kids think is cool-not what cool kids think is cool, which is a considerably different piece of information. Janine Misdom and Joanne DeLuca, who run the Sputnik coolhunting group out of the garment district in Manhattan, meanwhile, favor an entirely impressionistic approach, sending out coolhunters with video cameras to talk to kids on the ground that it’s too difficult to get cool kids to fill out questionnaires. Once, when I was visiting the Sputnik girls-as Misdom and DeLuca are known on the street, because they look alike and their first names are so similar and both have the same awesome New York accents-they showed me a video of the girl they believe was the patient zero of the whole eighties revival going on right now. It was back in September of 1993. Joanne and Janine were on Seventh Avenue, outside the Fashion Institute of Technology, doing random street interviews for a major jeans company, and, quite by accident, they ran into this nineteen-year-old raver. She had close-cropped hair, which was green at the top, and at the temples was shaved even closer and dyed pink. She had rings and studs all over her face, and a thick collection of silver tribal jewelry around her neck, and vintage jeans. She looked into the camera and said, “The sixties came in and then the seventies came in and I think it’s ready to come back to the eighties. It’s totally eighties: the eye makeup, the clothes. It’s totally going back to that.” Immediately, Joanne and Janine started asking around.
“We talked to a few kids on the Lower East Side who said they were feeling the need to start breaking out their old Michael Jackson jackets,” Joanne said. “They were joking about it. They weren’t doing it yet. But they were going to, you know? They were saying, ‘We’re getting the urge to break out our Members Only jackets.’ ” That was right when Joanne and Janine were just starting up; calling the eighties revival was their first big break, and now they put out a full-blown videotaped report twice a year which is a collection of clips of interviews with extremely progressive people.

What DeeDee argues, though, is that cool is too subtle and too variegated to be captured with these kinds of broad strokes. Cool is a set of dialects, not a language. The L Report can tell you, for example, that nineteen-to-twenty-four-year-old male trendsetters in Seattle would most like to meet, among others, King Solomon and Dr. Seuss, and that nineteen-to-twenty-four-year-old female trendsetters in San Francisco have turned their backs on Calvin Klein, Nintendo Gameboy, and sex. What’s cool right now? Among male New York trendsetters: North Face jackets, rubber and latex, khakis, and the rock band Kiss. Among female trendsetters: ska music, old-lady clothing, and cyber tech. In Chicago, snowboarding is huge among trendsetters of both sexes and all ages. Women over nineteen are into short hair, while those in their teens have embraced mod culture, rock climbing, tag watches, and bootleg pants. In Austin-Dallas, meanwhile, twenty-five-to-thirty-year-old women trendsetters are into hats, heroin, computers, cigars, Adidas, and velvet, while men in their twenties are into video games and hemp. In all, the typical L Report runs over one hundred pages. But with that flood of data comes an obsolescence disclaimer: “The fluctuating nature of the trendsetting market makes keeping up with trends a difficult task.” By the spring, in other words, everything may have changed.

The key to coolhunting, then, is to look for cool people first and cool things later, and not the other way around. Since cool things are always changing, you can’t look for them, because the very fact they are cool means you have no idea what to look for. What you would be doing is thinking back on what was cool before and extrapolating, which is about as useful as presuming that because the Dow rose ten points yesterday it will rise another ten points today. Cool people, on the other hand, are a constant.

When I was in California, I met Salvador Barbier, who had been described to me by a coolhunter as “the Michael Jordan of skateboarding.” He was tall and lean and languid, with a cowboy’s insouciance, and we drove through the streets of Long Beach at fifteen miles an hour in a white late-model Ford Mustang, a car he had bought as a kind of ironic status gesture (“It would look good if I had a Polo jacket or maybe Nautica,” he said) to go with his ’62 Econoline van and his ’64 T-bird. Sal told me that he and his friends, who are all in their mid-twenties, recently took to dressing up as if they were in eighth grade again and gathering together-having a “rally”-on old BMX bicycles in front of their local 7-Eleven. “I’d wear muscle shirts, like Def Leppard or Foghat or some old heavy-metal band, and tight, tight tapered Levi’s, and Vans on my feet-big, like, checkered Vans or striped Vans or camouflage Vans-and then wristbands and gloves with the fingers cut off. It was total eighties fashion. You had to look like that to participate in the rally. We had those denim jackets with patches on the back and combs that hung out the back pocket. We went without I.D.s, because we’d have to have someone else buy us beers.” At this point, Sal laughed. He was driving really slowly and staring straight ahead and talking in a low drawl-the coolhunter’s dream. “We’d ride to this bar and I’d have to carry my bike inside, because we have really expensive bikes, and when we got inside people would freak out. They’d say, ‘Omigod,’ and I was asking them if they wanted to go for a ride on the handlebars. They were like, ‘What is wrong with you. My boyfriend used to dress like that in the eighth grade!’ And I was like, ‘He was probably a lot cooler then, too.’ ”

This is just the kind of person DeeDee wants. “I’m looking for somebody who is an individual, who has definitely set himself apart from everybody else, who doesn’t look like his peers. I’ve run into trendsetters who look completely Joe Regular Guy. I can see Joe Regular Guy at a club listening to some totally hardcore band playing, and I say to myself ‘Omigod, what’s that guy doing here?’ and that totally intrigues me, and I have to walk up to him and say, ‘Hey, you’re really into this band. What’s up?’ You know what I mean? I look at everything. If I see Joe Regular Guy sitting in a coffee shop and everyone around him has blue hair, I’m going to gravitate toward him, because, hey, what’s Joe Regular Guy doing in a coffee shop with people with blue hair?”

We were sitting outside the Fred Segal store in West Hollywood. I was wearing a very conservative white Brooks Brothers button-down and a pair of Levi’s, and DeeDee looked first at my shirt and then my pants and dissolved into laughter: “I mean, I might even go up to you in a cool place.”

Picking the right person is harder than it sounds, though. Piney Kahn, who works for DeeDee, says, “There are a lot of people in the gray area. You’ve got these kids who dress ultra funky and have their own style. Then you realize they’re just running after their friends.” The trick is not just to be able to tell who is different but to be able to tell when that difference represents something truly cool. It’s a gut thing. You have to somehow just know. DeeDee hired Piney because Piney clearly knows: she is twenty-four and used to work with the Beastie Boys and has the formidable self-possession of someone who is not only cool herself but whose parents were cool. “I mean,” she says, “they named me after a tree.”

Piney and DeeDee said that they once tried to hire someone as a coolhunter who was not, himself, cool, and it was a disaster.

“You can give them the boundaries,” Piney explained. “You can say that if people shop at Banana Republic and listen to Alanis Morissette they’re probably not trendsetters. But then they might go out and assume that everyone who does that is not a trendsetter, and not look at the other things.”

“I mean, I myself might go into Banana Republic and buy a T-shirt,” DeeDee chimed in.

Their non-cool coolhunter just didn’t have that certain instinct, that sense that told him when it was O.K. to deviate from the manual. Because he wasn’t cool, he didn’t know cool, and that’s the essence of the third rule of cool: you have to be one to know one. That’s why Baysie is still on top of this business at forty-one. “It’s easier for me to tell you what kid is cool than to tell you what things are cool,” she says. But that’s all she needs to know. In this sense, the third rule of cool fits perfectly into the second: the second rule says that cool cannot be manufactured, only observed, and the third says that it can only be observed by those who are themselves cool. And, of course, the first rule says that it cannot accurately be observed at all, because the act of discovering cool causes cool to take flight, so if you add all three together they describe a closed loop, the hermeneutic circle of coolhunting, a phenomenon whereby not only can the uncool not see cool but cool cannot even be adequately described to them. Baysie says that she can see a coat on one of her friends and think it’s not cool but then see the same coat on DeeDee and think that it is cool. It is not possible to be cool, in other words, unless you are-in some larger sense-already cool, and so the phenomenon that the uncool cannot see and cannot have described to them is also something that they cannot ever attain, because if they did it would no longer be cool. Coolhunting represents the ascendancy, in the marketplace, of high school.

Once, I was visiting DeeDee at her house in Laurel Canyon when one of her L Report assistants, Jonas Vail, walked in. He’d just come back from Niketown on Wilshire Boulevard, where he’d bought seven hundred dollars’ worth of the latest sneakers to go with the three hundred dollars’ worth of skateboard shoes he’d bought earlier in the afternoon. Jonas is tall and expressionless, with a peacoat, dark jeans, and short-cropped black hair. “Jonas is good,” DeeDee says. “He works with me on everything. That guy knows more pop culture. You know: What was the name of the store Mrs. Garrett owned on ‘The Facts of Life’? He knows all the names of the extras from eighties sitcoms. I can’t believe someone like him exists. He’s fucking unbelievable. Jonas can spot a cool person a mile away.”

Jonas takes the boxes of shoes and starts unpacking them on the couch next to DeeDee. He picks up a pair of the new Nike ACG hiking boots, and says, “All the Japanese in Niketown were really into these.” He hands the shoes to DeeDee.

“Of course they were!” she says. “The Japanese are all into the tech-looking shit. Look how exaggerated it is, how bulbous.” DeeDee has very ambivalent feelings about Nike, because she thinks its marketing has got out of hand. When she was in the New York Niketown with a girlfriend recently, she says, she started getting light-headed and freaked out. “It’s cult, cult, cult. It was like, ‘Hello, are we all drinking the Kool-Aid here?’ ” But this shoe she loves. It’s Dr. Jay’s in the Bronx all over again. DeeDee turns the shoe around and around in the air, tapping the big clear-blue plastic bubble on the side-the visible Air-Sole unit-with one finger. “It’s so fucking rad. It looks like a platypus!” In front of me, there is a pair of Nike’s new shoes for the basketball player Jason Kidd.

I pick it up. “This looks . . . cool,” I venture uncertainly.

DeeDee is on the couch, where she’s surrounded by shoeboxes and sneakers and white tissue paper, and she looks up reprovingly because, of course, I don’t get it. I can’t get it. “Beyooond cool, Maalcolm. Beyooond cool.”
Why blacks are like boys and whites are like girls.

1.

The education of any athlete begins, in part, with an education in the racial taxonomy of his chosen sport-in the subtle, unwritten rules about what whites are supposed to be good at and what blacks are supposed to be good at. In football, whites play quarterback and blacks play running back; in baseball whites pitch and blacks play the outfield. I grew up in Canada, where my brother Geoffrey and I ran high-school track, and in Canada the rule of running was that anything under the quarter-mile belonged to the West Indians. This didn’t mean that white people didn’t run the sprints. But the expectation was that they would never win, and, sure enough, they rarely did. There was just a handful of West Indian immigrants in Ontario at that point-clustered in and around Toronto-but they owned Canadian sprinting, setting up under the stands at every major championship, cranking up the reggae on their boom boxes, and then humiliating everyone else on the track. My brother and I weren’t from Toronto, so we weren’t part of that scene. But our West Indian heritage meant that we got to share in the swagger. Geoffrey was a magnificent runner, with powerful legs and a barrel chest, and when he was warming up he used to do that exaggerated, slow-motion jog that the white guys would try to do and never quite pull off. I was a miler, which was a little outside the West Indian range. But, the way I figured it, the rules meant that no one should ever outkick me over the final two hundred metres of any race. And in the golden summer of my fourteenth year, when my running career prematurely peaked, no one ever did.

When I started running, there was a quarter-miler just a few years older than I was by the name of Arnold Stotz. He was a bulldog of a runner, hugely talented, and each year that he moved through the sprinting ranks he invariably broke the existing four-hundred-metre record in his age class. Stotz was white, though, and every time I saw the results of a big track meet I’d keep an eye out for his name, because I was convinced that he could not keep winning. It was as if I saw his whiteness as a degenerative disease, which would eventually claim and cripple him. I never asked him whether he felt the same anxiety, but I can’t imagine that he didn’t. There was only so long that anyone could defy the rules. One day, at the provincial championships, I looked up at the results board and Stotz was gone.

Talking openly about the racial dimension of sports in this way, of course, is considered unseemly. It’s all right to say that blacks dominate sports because they lack opportunities elsewhere. That’s the “Hoop Dreams” line, which says whites are allowed to acknowledge black athletic success as long as they feel guilty about it. What you’re not supposed to say is what we were saying in my track days-that we were better because we were black, because of something intrinsic to being black. Nobody said anything like that publicly last month when Tiger Woods won the Masters or when, a week later, African men claimed thirteen out of the top twenty places in the Boston Marathon. Nor is it likely to come up this month, when African-Americans will make up eighty per cent of the players on the floor for the N.B.A. playoffs. When the popular television sports commentator Jimmy (the Greek) Snyder did break this taboo, in 1988-infamously ruminating on the size and significance of black thighs-one prominent N.A.A.C.P. official said that his remarks “could set race relations back a hundred years.” The assumption is that the whole project of trying to get us to treat each other the same will be undermined if we don’t all agree that under the skin we actually are the same.

The point of this, presumably, is to put our discussion of sports on a par with legal notions of racial equality, which would be a fine idea except that civil-rights law governs matters like housing and employment and the sports taboo covers matters like what can be said about someone’s jump shot. In his much-heralded new book “Darwin’s Athletes,” the University of Texas scholar John Hoberman tries to argue that these two things are the same, that it’s impossible to speak of black physical superiority without implying intellectual inferiority. But it isn’t long before the argument starts to get ridiculous. “The spectacle of black athleticism,” he writes, inevitably turns into “a highly public image of black retardation.” Oh, really? What, exactly, about Tiger Woods’s victory in the Masters resembled “a highly public image of black retardation”? Today’s black athletes are multimillion-dollar corporate pitchmen, with talk shows and sneaker deals and publicity machines and almost daily media opportunities to share their thoughts with the world, and it’s very hard to see how all this contrives to make them look stupid. Hoberman spends a lot of time trying to inflate the significance of sports, arguing that how we talk about events on the baseball diamond or the track has grave consequences for how we talk about race in general. Here he is, for example, on Jackie Robinson:

The sheer volume of sentimental and intellectual energy that has been invested in the mythic saga of Jackie Robinson has discouraged further thinking about what his career did and did not accomplish. . . . Black America has paid a high and largely unacknowledged price for the extraordinary prominence given the black athlete rather than other black men of action (such as military pilots and astronauts), who represent modern aptitudes in ways that athletes cannot.

Please. Black America has paid a high and largely unacknowledged price for a long list of things, and having great athletes is far from the top of the list. Sometimes a baseball player is just a baseball player, and sometimes an observation about racial difference is just an observation about racial difference. Few object when medical scientists talk about the significant epidemiological differences between blacks and whites-the fact that blacks have a higher incidence of hypertension than whites, that twice as many black males die of diabetes and prostate cancer as white males, that breast tumors appear to grow faster in black women than in white women, that black girls show signs of puberty sooner than white girls. So why aren’t we allowed to say that there might be athletically significant differences between blacks and whites?

According to the medical evidence, African-Americans seem to have, on the average, greater bone mass than do white Americans-a difference that suggests greater muscle mass. Black men have slightly higher circulating levels of testosterone and human-growth hormone than their white counterparts, and blacks over all tend to have proportionally slimmer hips, wider shoulders, and longer legs. In one study, the Swedish physiologist Bengt Saltin compared a group of Kenyan distance runners with a group of Swedish distance runners and found interesting differences in muscle composition: Saltin reported that the Africans appeared to have more blood-carrying capillaries and more mitochondria (the body’s cellular power plant) in the fibres of their quadriceps. Another study found that, while black South African distance runners ran at the same speed as white South African runners, they were able to use more oxygen-eighty-nine per cent versus eighty-one per cent-over extended periods: somehow, they were able to exert themselves more. Such evidence suggests that there are physical differences in black athletes which have a bearing on activities like running and jumping, which should hardly come as a surprise to anyone who follows competitive sports.

To use track as an example-since track is probably the purest measure of athletic ability-Africans recorded fifteen out of the twenty fastest times last year in the men’s ten-thousand-metre event. In the five thousand metres, eighteen out of the twenty fastest times were recorded by Africans. In the fifteen hundred metres, thirteen out of the twenty fastest times were African, and in the sprints, in the men’s hundred metres, you have to go all the way down to the twenty-third place in the world rankings-to Geir Moen, of Norway-before you find a white face. There is a point at which it becomes foolish to deny the fact of black athletic prowess, and even more foolish to banish speculation on the topic. Clearly, something is going on. The question is what.

2.

If we are to decide what to make of the differences between blacks and whites, we first have to decide what to make of the word “difference,” which can mean any number of things. A useful case study is to compare the ability of men and women in math. If you give a large, representative sample of male and female students a standardized math test, their mean scores will come out pretty much the same. But if you look at the margins, at the very best and the very worst students, sharp differences emerge. In the math portion of an achievement test conducted by Project Talent-a nationwide survey of fifteen-year-olds-there were 1.3 boys for every girl in the top ten per cent, 1.5 boys for every girl in the top five per cent, and seven boys for every girl in the top one per cent. In the fifty-six-year history of the Putnam Mathematical Competition, which has been described as the Olympics of college math, all but one of the winners have been male. Conversely, if you look at people with the very lowest math ability, you’ll find more boys than girls there, too. In other words, although the average math ability of boys and girls is the same, the distribution isn’t: there are more males than females at the bottom of the pile, more males than females at the top of the pile, and fewer males than females in the middle. Statisticians refer to this as a difference in variability.
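The same-mean, different-spread pattern described above is easy to see in a quick simulation. This is a hypothetical sketch in Python; the means, standard deviations, and cutoffs are invented for illustration and are not the Project Talent figures:

```python
import random
import statistics

# Two groups with the SAME average score but different variability
# (all numbers invented for illustration).
random.seed(0)
girls = [random.gauss(100, 13) for _ in range(100_000)]
boys = [random.gauss(100, 15) for _ in range(100_000)]  # more variable

# On average, the two groups are indistinguishable...
assert abs(statistics.mean(boys) - statistics.mean(girls)) < 1

# ...but at the margins they are not: count scores in the extremes.
top_boys = sum(score > 130 for score in boys)
top_girls = sum(score > 130 for score in girls)
bottom_boys = sum(score < 70 for score in boys)
bottom_girls = sum(score < 70 for score in girls)

print(top_boys / top_girls)        # ratio above 1: more boys at the top...
print(bottom_boys / bottom_girls)  # ...and more boys at the bottom, too
```

The small difference in spread produces roughly twice as many of the more-variable group in both tails, even though the middles of the two piles look identical.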

This pattern, as it turns out, is repeated in almost every conceivable area of gender difference. Boys are more variable than girls on the College Board entrance exam and in routine elementary-school spelling tests. Male mortality patterns are more variable than female patterns; that is, many more men die in early and middle age than women, who tend to die in more of a concentrated clump toward the end of life. The problem is that variability differences are regularly confused with average differences. If men had higher average math scores than women, you could say they were better at the subject. But because they are only more variable the word “better” seems inappropriate.

The same holds true for differences between the races. A racist stereotype is the assertion of average difference-it’s the claim that the typical white is superior to the typical black. It allows a white man to assume that the black man he passes on the street is stupider than he is. By contrast, if what racists believed was that black intelligence was simply more variable than white intelligence, then it would be impossible for them to construct a stereotype about black intelligence at all. They wouldn’t be able to generalize. If they wanted to believe that there were a lot of blacks dumber than whites, they would also have to believe that there were a lot of blacks smarter than they were. This distinction is critical to understanding the relation between race and athletic performance. What are we seeing when we remark black domination of élite sporting events-an average difference between the races or merely a difference in variability?

This question has been explored by geneticists and physical anthropologists, and some of the most notable work has been conducted over the past few years by Kenneth Kidd, at Yale. Kidd and his colleagues have been taking DNA samples from two African Pygmy tribes in Zaire and the Central African Republic and comparing them with DNA samples taken from populations all over the world. What they have been looking for is variants-subtle differences between the DNA of one person and another-and what they have found is fascinating. “I would say, without a doubt, that in almost any single African population-a tribe or however you want to define it-there is more genetic variation than in all the rest of the world put together,” Kidd told me. In a sample of fifty Pygmies, for example, you might find nine variants in one stretch of DNA. In a sample of hundreds of people from around the rest of the world, you might find only a total of six variants in that same stretch of DNA-and probably every one of those six variants would also be found in the Pygmies. If everyone in the world was wiped out except Africans, in other words, almost all the human genetic diversity would be preserved.

The likelihood is that these results reflect Africa’s status as the homeland of Homo sapiens: since every human population outside Africa is essentially a subset of the original African population, it makes sense that everyone in such a population would be a genetic subset of Africans, too. So you can expect groups of Africans to be more variable in respect to almost anything that has a genetic component. If, for example, your genes control how you react to aspirin, you’d expect to see more Africans than whites for whom one aspirin stops a bad headache, more for whom no amount of aspirin works, more who are allergic to aspirin, and more who need to take, say, four aspirin at a time to get any benefit-but far fewer Africans for whom the standard two-aspirin dose would work well. And to the extent that running is influenced by genetic factors you would expect to see more really fast blacks-and more really slow blacks-than whites but far fewer Africans of merely average speed. Blacks are like boys. Whites are like girls.
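The founder-subset logic in the paragraph above can be sketched as a toy model. Everything here is invented for illustration (the nine-versus-six variant counts echo the example in the text, but the sampling is arbitrary):

```python
import random

# Toy model of the founder-subset argument: a group that leaves the
# original population carries only some of its gene variants, so its
# descendants can never be MORE diverse at that stretch of DNA.
random.seed(1)
african_variants = set(range(9))  # say, nine variants at one DNA stretch

# the emigrating founders happen to carry six of the nine variants
founder_variants = set(random.sample(sorted(african_variants), 6))
rest_of_world_variants = founder_variants  # descendants inherit only these

assert rest_of_world_variants <= african_variants  # a subset: nothing new
assert len(african_variants) >= len(rest_of_world_variants)
print(sorted(rest_of_world_variants))
```

Since every variant outside the ancestral population is (in this simplified picture) inherited from it, wiping out everyone except the ancestral population would preserve nearly all the diversity, which is the point Kidd's data makes.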

There is nothing particularly scary about this fact, and certainly nothing to warrant the kind of gag order on talk of racial differences which is now in place. What it means is that comparing élite athletes of different races tells you very little about the races themselves. A few years ago, for example, a prominent scientist argued for black athletic supremacy by pointing out that there had never been a white Michael Jordan. True. But, as the Yale anthropologist Jonathan Marks has noted, until recently there was no black Michael Jordan, either. Michael Jordan, like Tiger Woods or Wayne Gretzky or Cal Ripken, is one of the best players in his sport not because he’s like the other members of his own ethnic group but precisely because he’s not like them-or like anyone else, for that matter. Élite athletes are élite athletes because, in some sense, they are on the fringes of genetic variability. As it happens, African populations seem to create more of these genetic outliers than white populations do, and this is what underpins the claim that blacks are better athletes than whites. But that’s all the claim amounts to. It doesn’t say anything at all about the rest of us, of all races, muddling around in the genetic middle.

3.

There is a second consideration to keep in mind when we compare blacks and whites. Take the men’s hundred-metre final at the Atlanta Olympics. Every runner in that race was of either Western African or Southern African descent, as you would expect if Africans had some genetic affinity for sprinting. But suppose we forget about skin color and look just at country of origin. The eight-man final was made up of two African-Americans, two Africans (one from Namibia and one from Nigeria), a Trinidadian, a Canadian of Jamaican descent, an Englishman of Jamaican descent, and a Jamaican. The race was won by the Jamaican-Canadian, in world-record time, with the Namibian coming in second and the Trinidadian third. The sprint relay-the 4 x 100-was won by a team from Canada, consisting of the Jamaican-Canadian from the final, a Haitian-Canadian, a Trinidadian-Canadian, and another Jamaican-Canadian. Now it appears that African heritage is important as an initial determinant of sprinting ability, but also that the most important advantage of all is some kind of cultural or environmental factor associated with the Caribbean.

Or consider, in a completely different realm, the problem of hypertension. Black Americans have a higher incidence of hypertension than white Americans, even after you control for every conceivable variable, including income, diet, and weight, so it’s tempting to conclude that there is something about being of African descent that makes blacks prone to hypertension. But it turns out that although some Caribbean countries have a problem with hypertension, others-Jamaica, St. Kitts, and the Bahamas-don’t. It also turns out that people in Liberia and Nigeria-two countries where many New World slaves came from-have blood-pressure rates similar to, and perhaps even lower than, those of white North Americans, while studies of Zulus, Indians, and whites in Durban, South Africa, showed that urban white males had the highest hypertension rates and urban white females had the lowest. So it’s likely that the disease has nothing at all to do with Africanness.

The same is true for the distinctive muscle characteristic observed when Kenyans were compared with Swedes. Saltin, the Swedish physiologist, subsequently found many of the same characteristics in Nordic skiers who train at high altitudes and Nordic runners who train in very hilly regions-conditions, in other words, that resemble the mountainous regions of Kenya’s Rift Valley, where so many of the country’s distance runners come from. The key factor seems to be Kenya, not genes.

Lots of things that seem to be genetic in origin, then, actually aren’t. Similarly, lots of things that we wouldn’t normally think might affect athletic ability actually do. Once again, the social-science literature on male and female math achievement is instructive. Psychologists argue that when it comes to subjects like math, boys tend to engage in what’s known as ability attribution. A boy who is doing well will attribute his success to the fact that he’s good at math, and if he’s doing badly he’ll blame his teacher or his own lack of motivation-anything but his ability. That makes it easy for him to bounce back from failure or disappointment, and gives him a lot of confidence in the face of a tough new challenge. After all, if you think you do well in math because you’re good at math, what’s stopping you from being good at, say, algebra, or advanced calculus? On the other hand, if you ask a girl why she is doing well in math she will say, more often than not, that she succeeds because she works hard. If she’s doing poorly, she’ll say she isn’t smart enough. This, as should be obvious, is a self-defeating attitude. Psychologists call it “learned helplessness”-the state in which failure is perceived as insurmountable. Girls who engage in effort attribution learn helplessness because in the face of a more difficult task like algebra or advanced calculus they can conceive of no solution. They’re convinced that they can’t work harder, because they think they’re working as hard as they can, and that they can’t rely on their intelligence, because they never thought they were that smart to begin with. In fact, one of the fascinating findings of attribution research is that the smarter girls are, the more likely they are to fall into this trap. High achievers are sometimes the most helpless. Here, surely, is part of the explanation for greater math variability among males. 
The female math whizzes, the ones who should be competing in the top one and two per cent with their male counterparts, are the ones most often paralyzed by a lack of confidence in their own aptitude. They think they belong only in the intellectual middle.

The striking thing about these descriptions of male and female stereotyping in math, though, is how similar they are to black and white stereotyping in athletics-to the unwritten rules holding that blacks achieve through natural ability and whites through effort. Here’s how Sports Illustrated described, in a recent article, the white basketball player Steve Kerr, who plays alongside Michael Jordan for the Chicago Bulls. According to the magazine, Kerr is a “hard-working overachiever,” distinguished by his “work ethic and heady play” and by a shooting style “born of a million practice shots.” Bear in mind that Kerr is one of the best shooters in basketball today, and a key player on what is arguably one of the finest basketball teams in history. Bear in mind, too, that there is no evidence that Kerr works any harder than his teammates, least of all Jordan himself, whose work habits are legendary. But you’d never guess that from the article. It concludes, “All over America, whenever quicker, stronger gym rats see Kerr in action, they must wonder, How can that guy be out there instead of me?”

There are real consequences to this stereotyping. As the psychologists Carol Dweck and Barbara Licht write of high-achieving schoolgirls, “[They] may view themselves as so motivated and well disciplined that they cannot entertain the possibility that they did poorly on an academic task because of insufficient effort. Since blaming the teacher would also be out of character, blaming their abilities when they confront difficulty may seem like the most reasonable option.” If you substitute the words “white athletes” for “girls” and “coach” for “teacher,” I think you have part of the reason that so many white athletes are underrepresented at the highest levels of professional sports. Whites have been saddled with the athletic equivalent of learned helplessness-the idea that it is all but fruitless to try and compete at the highest levels, because they have only effort on their side. The causes of athletic and gender discrimination may be diverse, but their effects are not. Once again, blacks are like boys, and whites are like girls.

4.

When I was in college, I once met an old acquaintance from my high-school running days. Both of us had long since quit track, and we talked about a recurrent fantasy we found we’d both had for getting back into shape. It was that we would go away somewhere remote for a year and do nothing but train, so that when the year was up we might finally know how good we were. Neither of us had any intention of doing this, though, which is why it was a fantasy. In adolescence, athletic excess has a certain appeal-during high school, I happily spent Sunday afternoons running up and down snow-covered sandhills-but with most of us that obsessiveness soon begins to fade. Athletic success depends on having the right genes and on a self-reinforcing belief in one’s own ability. But it also depends on a rare form of tunnel vision. To be a great athlete, you have to care, and what was obvious to us both was that neither of us cared anymore. This is the last piece of the puzzle about what we mean when we say one group is better at something than another: sometimes different groups care about different things. Of the seven hundred men who play major-league baseball, for example, eighty-six come from either the Dominican Republic or Puerto Rico, even though those two islands have a combined population of only eleven million. But then baseball is something that Dominicans and Puerto Ricans care about-and you can say the same thing about African-Americans and basketball, West Indians and sprinting, Canadians and hockey, and Russians and chess. Desire is the great intangible in performance, and unlike genes or psychological affect we can’t measure it and trace its implications. This is the problem, in the end, with the question of whether blacks are better at sports than whites. It’s not that it’s offensive, or that it leads to discrimination. It’s that, in some sense, it’s not a terribly interesting question; “better” promises a tidier explanation than can ever be provided.
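The baseball figures in the paragraph above imply a striking degree of overrepresentation, which a back-of-the-envelope calculation makes concrete. Only the numbers stated in the text are used here; no outside population figures are assumed:

```python
# Back-of-the-envelope arithmetic on the figures quoted in the text:
# 86 of 700 major-league players come from two islands with a combined
# population of eleven million.
players_total = 700
players_islands = 86
island_pop_millions = 11

share = players_islands / players_total
per_million = players_islands / island_pop_millions

print(f"{share:.1%} of major-leaguers from the two islands")
print(f"{per_million:.1f} major-leaguers per million islanders")
```

Roughly one player in eight comes from a population of eleven million people, which is the kind of lopsidedness that desire, rather than genes alone, seems needed to explain.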

I quit competitive running when I was sixteen-just after the summer I had qualified for the Ontario track team in my age class. Late that August, we had travelled to St. John’s, Newfoundland, for the Canadian championships. In those days, I was whippet-thin, as milers often are, five feet six and not much more than a hundred pounds, and I could skim along the ground so lightly that I barely needed to catch my breath. I had two white friends on that team, both distance runners, too, and both, improbably, even smaller and lighter than I was. Every morning, the three of us would run through the streets of St. John’s, charging up the hills and flying down the other side. One of these friends went on to have a distinguished college running career, the other became a world-class miler; that summer, I myself was the Canadian record holder in the fifteen hundred metres for my age class. We were almost terrifyingly competitive, without a shred of doubt in our ability, and as we raced along we never stopped talking and joking, just to prove how absurdly easy we found running to be. I thought of us all as equals. Then, on the last day of our stay in St. John’s, we ran to the bottom of Signal Hill, which is the town’s principal geographical landmark-an abrupt outcrop as steep as anything in San Francisco. We stopped at the base, and the two of them turned to me and announced that we were all going to run straight up Signal Hill backward. I don’t know whether I had more running ability than those two or whether my Africanness gave me any genetic advantage over their whiteness. What I do know is that such questions were irrelevant, because, as I realized, they were willing to go to far greater lengths to develop their talent. They ran up the hill backward. I ran home.
What America’s most popular pants tell us about the way guys think.

1.

In the fall of 1987, Levi Strauss & Co. began running a series of national television commercials to promote Dockers, its new brand of men’s khakis. All the spots-and there were twenty-eight-had the same basic structure. A handheld camera would follow a group of men as they sat around a living room or office or bar. The men were in their late thirties, but it was hard to tell, because the camera caught faces only fleetingly. It was trained instead on the men from the waist down-on the seats of their pants, on the pleats of their khakis, on their hands going in and out of their pockets. As the camera jumped in quick cuts from Docker to Docker, the men chatted in loose, overlapping non sequiturs-guy-talk fragments that, when they are rendered on the page, achieve a certain Dadaist poetry. Here is the entire transcript of “Poolman,” one of the first-and, perhaps, best-ads in the series:

“She was a redhead about five foot six inches tall.”

“And all of a sudden this thing starts spinning, and it’s going round and round.”

“Is that Nelson?”

“And that makes me safe, because with my wife, I’ll never be that way.”

“It’s like your career, and you’re frustrated. I mean that-that’s-what you want.”

“Of course, that’s just my opinion.”

“So money’s no object.”

“Yeah, money’s no object.”

“What are we going to do with our lives, now?”

“Well . . .”

“Best of all . . .”

[Voice-over] “Levi’s one-hundred-per-cent-cotton Dockers. If you’re not wearing Dockers, you’re just wearing pants.”

“And I’m still paying the loans off.”

“You’ve got all the money in the world.”

“I’d like to at least be your poolman.”

By the time the campaign was over, at the beginning of the nineties, Dockers had grown into a six-hundred-million-dollar business-a brand that, if it had spun off from Levi’s, would have been (and would still be) the fourth-largest clothing brand in the world. Today, seventy per cent of American men between the ages of twenty-five and forty-five own a pair of Dockers, and khakis are expected to be as popular as blue jeans by the beginning of the next century. It is no exaggeration to call the original Dockers ads one of the most successful fashion-advertising campaigns in history.

This is a remarkable fact for a number of reasons, not the least of which is that the Dockers campaign was aimed at men, and no one had ever thought you could hit a home run like that by trying to sell fashion to the American male. Not long ago, two psychologists at York University, in Toronto-Irwin Silverman and Marion Eals-conducted an experiment in which they had men and women sit in an office for two minutes, without any reading material or distraction, while they ostensibly waited to take part in some kind of academic study. Then they were taken from the office and given the real reason for the experiment: to find out how many of the objects in the office they could remember. This was not a test of memory so much as it was a test of awareness-of the kind and quality of unconscious attention that people pay to the particulars of their environment. If you think about it, it was really a test of fashion sense, because, at its root, this is what fashion sense really is-the ability to register and appreciate and remember the details of the way those around you look and dress, and then reinterpret those details and memories yourself.

When the results of the experiment were tabulated, it was found that the women were able to recall the name and the placement of seventy per cent more objects than the men, which makes perfect sense. Women’s fashion, after all, consists of an endless number of subtle combinations and variations-of skirt, dress, pants, blouse, T-shirt, hose, pumps, flats, heels, necklace, bracelet, cleavage, collar, curl, and on and on-all driven by the fact that when a woman walks down the street she knows that other women, consciously or otherwise, will notice the name and the placement of what she is wearing. Fashion works for women because women can appreciate its complexity. But when it comes to men what’s the point? How on earth do you sell fashion to someone who has no appreciation for detail whatsoever?

The Dockers campaign, however, proved that you could sell fashion to men. But that was only the first of its remarkable implications. The second-which remains as weird and mysterious and relevant to the fashion business today as it was ten years ago-was that you could do this by training a camera on a man’s butt and having him talk in yuppie gibberish.

2.

I watched “Poolman” with three members of the new team handling the Dockers account at Foote, Cone & Belding (F.C.B.), Levi’s ad agency. We were in a conference room at Levi’s Plaza, in downtown San Francisco, a redbrick building decorated (appropriately enough) in khaki-like earth tones, with the team members-Chris Shipman, Iwan Thomis, and Tanyia Kandohla-forming an impromptu critical panel. Shipman, who had thick black glasses and spoke in an almost inaudible laid-back drawl, put a videocassette of the first campaign into a VCR-stopping, starting, and rewinding-as the group analyzed what made the spots so special.

“Remember, this is from 1987,” he said, pointing to the screen, as the camera began its jerky dance. “Although this style of film making looks everyday now, that kind of handheld stuff was very fresh when these were made.”

“They taped real conversations,” Kandohla chimed in. “Then the footage was cut together afterward. They were thrown areas to talk about. It was very natural, not at all scripted. People were encouraged to go off on tangents.”

After “Poolman,” we watched several of the other spots in the original group (“Scorekeeper” and “Dad’s Chair,” “Flag Football,” and “The Meaning of Life”), and I asked about the headlessness of the commercials, because if you watch too many in a row all those anonymous body parts begin to get annoying. But Thomis maintained that the headlessness was crucial, because it was the absence of faces that gave the dialogue its freedom. “They didn’t show anyone’s head because if they did the message would have too much weight,” he said. “It would be too pretentious. You know, people talking about their hopes and dreams. It seems more genuine, as opposed to something stylized.”

The most striking aspect of the spots is how different they are from typical fashion advertising. If you look at men’s fashion magazines, for example, at the advertisements for the suits of Ralph Lauren or Valentino or Hugo Boss, they almost always consist of a beautiful man, with something interesting done to his hair, wearing a gorgeous outfit. At the most, the man may be gesturing discreetly, or smiling in the demure way that a man like that might smile after, say, telling the supermodel at the next table no thanks, he has to catch an early-morning flight to Milan. But that’s all. The beautiful face and the clothes tell the whole story. The Dockers ads, though, are almost exactly the opposite. There’s no face. The camera is jumping around so much that it’s tough to concentrate on the clothes. And instead of stark simplicity, the fashion image is overlaid with a constant, confusing patter. It’s almost as if the Dockers ads weren’t primarily concerned with clothes at all, and in fact that’s exactly what Levi’s intended. What the company had discovered, in its research, was that baby-boomer men felt that the chief thing missing from their lives was male friendship. Caught between the demands of the families that many of them had started in the eighties and career considerations that had grown more onerous, they felt they had lost touch with other men. The purpose of the ads (the chatter, the lounging around, the quick cuts) was simply to conjure up a place where men could put on one-hundred-per-cent-cotton khakis and reconnect with one another. In the original advertising brief, that imaginary place was dubbed Dockers World.

This may seem like an awfully roundabout way to sell a man a pair of pants. But that was the genius of the campaign. One of the truisms of advertising is that it’s always easier to sell at the extremes than in the middle, which is why the advertisements for Valentino and Hugo Boss are so simple. The man in the market for a thousand-dollar suit doesn’t need to be convinced of the value of nice clothes. The man in the middle, though, the man in the market for a forty-dollar pair of khakis, does. In fact, he probably isn’t comfortable buying clothes at all. To sell him a pair of pants you have to take him somewhere he is comfortable, and that was the point of Dockers World. Even the apparent gibberish of lines like “ ‘She was a redhead about five foot six inches tall.’ / ‘And all of a sudden this thing starts spinning, and it’s going round and round.’ / ‘Is that Nelson?’ ” has, if you listen closely enough, a certain quintessentially guy-friendly feel. It’s the narrative equivalent of the sports-highlight reel, the sequence of five-second film clips of the best plays from the day’s basketball or football or baseball games, which millions of American men watch every night on television. This nifty couplet from “Scorekeeper,” for instance (“ ‘Who remembers their actual first girlfriend?’ / ‘I would have done better, but I was bald then, too’ ”), is not nonsense but a twenty-minute conversation edited down to two lines. A man schooled in the highlight reel no more needs the other nineteen minutes and fifty-eight seconds of that exchange than he needs to see the intervening catch and throw to make sense of a sinking liner to left and a close play at the plate.

“Men connected to the underpinnings of what was being said,” Robert Hanson, the vice-president of marketing for Dockers, told me. “These guys were really being honest and genuine and real with each other, and talking about their lives. It may not have been the truth, but it was the fantasy of what a lot of customers wanted, which was not just to be work-focussed but to have the opportunity to express how you feel about your family and friends and lives. The content was very important. The thing that built this brand was that we absolutely nailed the emotional underpinnings of what motivates baby boomers.”

Hanson is a tall, striking man in his early thirties. He’s what Jeff Bridges would look like if he had gone to finishing school. Hanson said that when he goes out on research trips to the focus groups that Dockers holds around the country he often deliberately stays in the background, because if the men in the group see him “they won’t necessarily respond as positively or as openly.” When he said this, he was wearing a pair of stone-white Dockers, a deep-blue shirt, a navy blazer, and a brilliant-orange patterned tie, and these worked so well together that it was obvious what he meant. When someone like Hanson dresses up that fabulously in Dockers, he makes it clear just how many variations and combinations are possible with a pair of khakis, but that, of course, defeats the purpose of the carefully crafted Dockers World message, which is to appeal to the man who wants nothing to do with fashion’s variations and combinations. It’s no coincidence that every man in every one of the group settings profiled in each commercial is wearing (albeit in different shades) exactly the same kind of pants. Most fashion advertising sells distinctiveness. (Can you imagine, say, an Ann Taylor commercial where a bunch of thirtyish girlfriends are lounging around chatting, all decked out in matching sweater sets?) Dockers was selling conformity.

“We would never do anything with our pants that would frighten anyone away,” Gareth Morris, a senior designer for the brand, told me. “We’d never do too many belt loops, or an unusual base cloth. Our customers like one-hundred-per-cent-cotton fabrics. We would never do a synthetic. That’s definitely in the market, but it’s not where we need to be. Styling-wise, we would never do a wide, wide leg. We would never do a peg-legged style. Our customers seem to have a definite idea of what they want. They don’t like tricky openings or zips or a lot of pocket flaps and details on the back. We’ve done button-through flaps, to push it a little bit. But we usually do a welt pocket (that’s a pocket with a button-through). It’s funny. We have focus groups in New York, Chicago, and San Francisco, and whenever we show them a pocket with a flap (it’s a simple thing) they hate it. They won’t buy the pants. They complain, ‘How do I get my wallet?’ So we compromise and do a welt. That’s as far as they’ll go. And there’s another thing. They go, ‘My butt’s big enough. I don’t want flaps hanging off of it, too.’ They like inseam pockets. They like to know where they put their hands.” He gestured to the pair of experimental prototype Dockers he was wearing, which had pockets that ran almost parallel to the waistband of the pants. “This is a stretch for us,” he said. “If you start putting more stuff on than we have on our product, you’re asking for trouble.”

The apotheosis of the notion of khakis as nonfashion-guy fashion came several years after the original Dockers campaign, when Haggar Clothing Co. hired the Goodby, Silverstein & Partners ad agency, in San Francisco, to challenge Dockers’ khaki dominance. In retrospect, it was an inspired choice, since Goodby, Silverstein is Guy Central. It does Porsche (“Kills Bugs Fast”) and Isuzu and the recent “Got Milk?” campaign and a big chunk of the Nike business, and it operates out of a gutted turn-of-the-century building downtown, refurbished in what is best described as neo-Erector set. The campaign that it came up with featured voice-overs by Roseanne’s television husband, John Goodman. In the best of the ads, entitled “I Am,” a thirtyish man wakes up, his hair all mussed, pulls on a pair of white khakis, and half sleepwalks outside to get the paper. “I am not what I wear. I’m not a pair of pants, or a shirt,” Goodman intones. The man walks by his wife, handing her the front sections of the paper. “I’m not in touch with my inner child. I don’t read poetry, and I’m not politically correct.” He heads away from the kitchen, down a hallway, and his kid grabs the comics from him. “I’m just a guy, and I don’t have time to think about what I wear, because I’ve got a lot of important guy things to do.” All he has left now is the sports section and, gripping it purposefully, he heads for the bathroom. “One-hundred-per-cent-cotton wrinkle-free khaki pants that don’t require a lot of thought. Haggar. Stuff you can wear.”

“We softened it,” Richard Silverstein told me as we chatted in his office, perched on chairs in the midst of (among other things) a lacrosse stick, a bike stand, a gym bag full of yesterday’s clothing, three toy Porsches, and a giant model of a Second World War Spitfire hanging from the ceiling. “We didn’t say ‘Haggar Apparel’ or ‘Haggar Clothing.’ We said, ‘Hey, listen, guys, don’t worry. It’s just stuff. Don’t worry about it.’ The concept was ‘Make it approachable.’ ” The difference between this and the Dockers ad is humor. F.C.B. assiduously documented men’s inner lives. Goodby, Silverstein made fun of them. But it’s essentially the same message. It’s instructive, in this light, to think about the Casual Friday phenomenon of the past decade, the loosening of corporate dress codes that was spawned by the rise of khakis. Casual Fridays are commonly thought to be about men rejecting the uniform of the suit. But surely that’s backward. Men started wearing khakis to work because Dockers and Haggar made it sound as if khakis were going to be even easier than a suit. The khaki-makers realized that men didn’t want to get rid of uniforms; they just wanted a better uniform.

The irony, of course, is that this idea of nonfashion (of khakis as the choice that diminishes, rather than enhances, the demands of fashion) turned out to be a white lie. Once you buy even the plainest pair of khakis, you invariably also buy a sports jacket and a belt and a whole series of shirts to go with it (maybe a polo knit for the weekends, something in plaid for casual, and a button-down for a dressier look), and before long your closet is thick with just the kinds of details and options that you thought you were avoiding. You may not add these details as brilliantly or as consciously as, say, Hanson does, but you end up doing it nonetheless. In the past seven years, sales of men’s clothing in the United States have risen an astonishing twenty-one per cent, in large part because of this very fact: that khakis, even as they have simplified the bottom half of the male wardrobe, have forced a steady revision of the top. At the same time, even khakis themselves, within the narrow constraints of khakidom, have quietly expanded their range. When Dockers were launched, in the fall of 1986, there were just three basic styles: the double-pleated Docker in khaki, olive, navy, and black; the Steamer, in cotton canvas; and the more casual flat-fronted Docker. Now there are twenty-four. Dockers and Haggar and everyone else have been playing a game of bait and switch: lure men in with the promise of a uniform and then slip them, bit by bit, fashion. Put them in an empty room and then, ever so slowly, so as not to scare them, fill the room with objects.

3.

There is a puzzle in psychology known as the canned-laughter problem, which has a deeper and more complex set of implications about men and women and fashion and why the Dockers ads were so successful. Over the years, several studies have been devoted to this problem, but perhaps the most instructive was done by two psychologists at the University of Wisconsin, Gerald Cupchik and Howard Leventhal. Cupchik and Leventhal took a stack of cartoons (including many from The New Yorker), half of which an independent panel had rated as very funny and half of which it had rated as mediocre. They put the cartoons on slides, had a voice-over read the captions, and presented the slide show to groups of men and women. As you might expect, both sexes reacted pretty much the same way. Then Cupchik and Leventhal added a laugh track to the voice-over (the subjects were told that it was actual laughter from people who were in the room during the taping) and repeated the experiment. This time, however, things got strange. The canned laughter made the women laugh a little harder and rate the cartoons as a little funnier than they had before. But not the men. They laughed a bit more at the good cartoons but much more at the bad cartoons. The canned laughter also made them rate the bad cartoons as much funnier than they had rated them before, but it had little or no effect on their ratings of the good cartoons. In fact, the men found a bad cartoon with a laugh track to be almost as funny as a good cartoon without one. What was going on?

The guru of male-female differences in the ad world is Joan Meyers-Levy, a professor at the University of Chicago business school. In a groundbreaking series of articles written over the past decade, Meyers-Levy has explained the canned-laughter problem and other gender anomalies by arguing that men and women use fundamentally different methods of processing information. Given two pieces of evidence about how funny something is, their own opinion and the opinion of others (the laugh track), the women came up with a higher score than before because they added the two clues together: they integrated the information before them. The men, on the other hand, picked one piece of evidence and ignored the other. For the bad cartoons, they got carried away by the laugh track and gave out hugely generous scores for funniness. For the good cartoons, however, they were so wedded to their own opinion that suddenly the laugh track didn’t matter at all.

This idea, that men eliminate and women integrate, is what Meyers-Levy calls the “selectivity hypothesis.” Men are looking for a way to simplify the route to a conclusion, so they seize on the most obvious evidence and ignore the rest, while women, by contrast, try to process information comprehensively. So-called bandwidth research, for example, has consistently shown that if you ask a group of people to sort a series of objects or ideas into categories, the men will create fewer and larger categories than the women will. They use bigger mental bandwidths. Why? Because the bigger the bandwidth the less time and attention you have to pay to each individual object. Or consider what is called the invisibility question. If a woman is being asked a series of personal questions by another woman, she’ll say more if she’s facing the woman she’s talking to than she will if her listener is invisible. With men, it’s the opposite. When they can’t see the person who’s asking them questions, they suddenly and substantially open up. This, of course, is a condition of male communication which has been remarked on by women for millennia. But the selectivity hypothesis suggests that the cause of it has been misdiagnosed. It’s not that men necessarily have trouble expressing their feelings; it’s that in a face-to-face conversation they experience emotional overload. A man can’t process nonverbal information (the expression and body language of the person asking him questions) and verbal information (the personal question being asked) at the same time any better than he can process other people’s laughter and his own laughter at the same time. He has to select, and it is Meyers-Levy’s contention that this pattern of behavior suggests significant differences in the way men and women respond to advertising.

Joan Meyers-Levy is a petite woman in her late thirties, with a dark pageboy haircut and a soft voice. She met me in the downtown office of the University of Chicago with three large folders full of magazine advertisements under one arm, and after chatting about the origins and the implications of her research, she handed me an ad from several years ago for Evian bottled water. It has a beautiful picture of the French Alps and, below that, in large type, “Our factory.” The text ran for several paragraphs, beginning:

You’re not just looking at the French Alps. You’re looking at one of the most pristine places on earth. And the origin of Evian Natural Spring Water.

Here, it takes no less than 15 years for nature to purify every drop of Evian as it flows through mineral-rich glacial formations deep within the mountains. And it is here that Evian acquires its unique balance of minerals.

“Now, is that a male or a female ad?” she asked. I looked at it again. The picture baffled me. But the word “factory” seemed masculine, so I guessed male.

She shook her head. “It’s female. Look at the picture. It’s just the Alps, and then they label it ‘Our factory.’ They’re using a metaphor. To understand this, you’re going to have to engage in a fair amount of processing. And look at all the imagery they’re encouraging you to build up. You’re not just looking at the French Alps. It’s ‘one of the most pristine places on earth’ and it will take nature ‘no less than fifteen years’ to purify.” Her point was that this is an ad that works only if the viewer appreciates all its elements, if the viewer integrates, not selects. A man, for example, glancing at the ad for a fraction of a second, might focus only on the words “Our factory” and screen out the picture of the Alps entirely, the same way he might have screened out the canned laughter. Then he wouldn’t get the visual metaphor. In fact, he might end up equating Evian with a factory, and that would be a disaster. Anyway, why bother going into such detail about the glaciers if it’s just going to get lost in the big male bandwidth?

Meyers-Levy handed me another Evian advertisement. It showed a man, the Olympic Gold Medal swimmer Matt Biondi, by a pool drinking Evian, with the caption “Revival of the fittest.” The women’s ad had a hundred and nineteen words of text. This ad had just twenty-nine words: “No other water has the unique, natural balance of minerals that Evian achieves during its 15-year journey deep within the French Alps. To be the best takes time.” Needless to say, it came from a men’s magazine. “With men, you don’t want the fluff,” she said. “Women, though, participate a lot more in whatever they are processing. By giving them more cues, you give them something to work with. You don’t have to be so literal. With women you can be more allusive, so you can draw them in. They will engage in elaboration, and the more associations they make the easier it is to remember and retrieve later on.”

Meyers-Levy took a third ad from her pile, this one for the 1997 Mercury Mountaineer four-wheel-drive sport-utility vehicle. It covers two pages, has the heading “Take the Rough with the Smooth,” and shows four pictures: one of the vehicle itself, one of a mother and her child, one of a city skyline, and a large one of the interior of the car, over which the ad’s text is superimposed. Around the border of the ad are forty-four separate, tiny photographs of roadways and buildings and construction sites and manhole covers. Female. Next to it on the table she put another ad, this one a single page, with a picture of the Mountaineer’s interior, fifteen lines of text, a picture of the car’s exterior, and, at the top, the heading: “When the Going Gets Tough, the Tough Get Comfortable.” Male. “It’s details, details. They’re saying lots of different stuff,” she said, pointing to the female version. “With men, instead of trying to cover everything in a single execution, you’d probably want to have a whole series of ads, each making a different point.”

After a while, the game got very easy, if a bit humiliating. Meyers-Levy said that her observations were not antimale, that both the male and the female strategies have their strengths and their weaknesses, and, of course, she’s right. On the other hand, reading the gender of ads makes it painfully obvious how much the advertising world, consciously or not, talks down to men. Before I met Meyers-Levy, I thought that the genius of the famous first set of Dockers ads was their psychological complexity, their ability to capture the many layers of eighties guyness. But when I thought about them again after meeting Meyers-Levy, I began to think that their real genius lay in their heroic simplicity, in the fact that F.C.B. had the self-discipline to fill the allotted thirty seconds with as little as possible. Why no heads? The invisibility rule. Guys would never listen to that Dadaist extemporizing if they had to process nonverbal cues, too. Why were the ads set in people’s living rooms and at the office? Bandwidth. The message was that khakis were wide-bandwidth pants. And why were all the ads shot in almost exactly the same way, and why did all the dialogue run together in one genial, faux-philosophical stretch of highlight reel? Because of canned laughter. Because if there were more than one message to be extracted men would get confused.

4.

In the early nineties, Dockers began to falter. In 1992, the company sold sixty-six million pairs of khakis, but in 1993, as competition from Haggar and the Gap and other brands grew fiercer, that number slipped to fifty-nine million six hundred thousand, and by 1994 it had fallen to forty-seven million. In marketing-speak, user reality was encroaching on brand personality; that is, Dockers were being defined by the kind of middle-aged men who wore them, and not by the hipper, younger men in the original advertisements. The brand needed a fresh image, and the result was the “Nice Pants” campaign currently being shown on national television, a campaign widely credited with the resurgence of Dockers’ fortunes. In one of the spots, “Vive la France,” a scruffy young man in his early twenties, wearing Dockers, is sitting in a café in Paris. He’s obviously a tourist. He glances up and sees a beautiful woman (actually, the supermodel Tatjana Patitz) looking right at him. He’s in heaven. She starts walking directly toward him, and as she passes by she says, “Beau pantalon.” As he looks frantically through his French phrase book for a translation, the waiter comes by and cuffs him on the head: “Hey, she says, ‘Nice pants.’ ” Another spot in the series, “Subway Love,” takes place on a subway car in Chicago. He (a nice young man wearing Dockers) spots her (a total babe), and their eyes lock. Romantic music swells. He moves toward her, but somehow, in a sudden burst of pushing and shoving, they get separated. Last shot: she’s inside the car, her face pushed up against the glass. He’s outside the car, his face pushed up against the glass. As the train slowly pulls away, she mouths two words: “Nice pants.”

It may not seem like it, but “Nice Pants” is as radical a campaign as the original Dockers series. If you look back at the way that Sansabelt pants, say, were sold in the sixties, each ad was what advertisers would call a pure “head” message: the pants were comfortable, durable, good value. The genius of the first Dockers campaign was the way it combined head and heart: these were all-purpose, no-nonsense pants that connected to the emotional needs of baby boomers. What happened to Dockers in the nineties, though, was that everyone started to do head and heart for khakis. Haggar pants were wrinkle-free (head) and John Goodman-guy (heart). The Gap, with its brilliant billboard campaign of the early nineties (“James Dean wore khakis,” “Frank Lloyd Wright wore khakis”), perfected the heart message by forging an emotional connection between khakis and a particular nostalgic, glamorous all-Americanness. To reassert itself, Dockers needed to go an extra step. Hence “Nice Pants,” a campaign that for the first time in Dockers history raises the subject of sex.

“It’s always been acceptable for a man to be a success in business,” Hanson said, explaining the rationale behind “Nice Pants.” “It’s always been expected of a man to be a good provider. The new thing that men are dealing with is that it’s O.K. for men to have a sense of personal style, and that it’s O.K. to be seen as sexy. It’s less about the head than about the combination of the head, the heart, and the groin. It’s those three things. That’s the complete man.”

The radical part about this, about adding the groin to the list, is that almost no other subject for men is as perilous as the issue of sexuality and fashion. What “Nice Pants” had to do was talk about sex the same way that “Poolman” talked about fashion, which was to talk about it by not talking about it, or, at least, to talk about it in such a coded, cautious way that no man would ever think Dockers was suggesting that he wear khakis in order to look pretty. When I took a videotape of the “Nice Pants” campaign to several of the top agencies in New York and Los Angeles, virtually everyone agreed that the spots were superb, meaning that somehow F.C.B. had managed to pull off this balancing act.

What David Altschiller, at Hill, Holliday/Altschiller, in Manhattan, liked about the spots, for example, was that the hero was naïve: in neither case did he know that he had on nice pants until a gorgeous woman told him so. Naïveté, Altschiller stressed, is critical. Several years ago, he did a spot for Claiborne for Men cologne in which a great-looking guy in a bar, wearing a gorgeous suit, was obsessing neurotically about a beautiful woman at the other end of the room: “I see this woman. She’s perfect. She’s looking at me. She’s smiling. But wait. Is she smiling at me? Or laughing at me? . . . Or looking at someone else?” You’d never do this in an ad for women’s cologne. Can you imagine? “I see this guy. He’s perfect. Ohmigod. Is he looking at me?” In women’s advertising, self-confidence is sexy. But if a man is self-confident, if he knows he is attractive and is beautifully dressed, then he’s not a man anymore. He’s a fop. He’s effeminate. The cologne guy had to be neurotic or the ad wouldn’t work. “Men are still abashed about acknowledging that clothing is important,” Altschiller said. “Fashion can’t be important to me as a man. Even when, in the first commercial, the waiter says ‘Nice pants,’ it doesn’t compute to the guy wearing the nice pants. He’s thinking, What do you mean, ‘Nice pants’?” Altschiller was looking at a videotape of the Dockers ad as he talked, standing at a forty-five-degree angle to the screen, with one hand on the top of the monitor, one hand on his hip, and a small, bemused smile on his lips. “The world may think they are nice, but so long as he doesn’t think so he doesn’t have to be self-conscious about it, and the lack of self-consciousness is very important to men. Because ‘I don’t care.’ Or ‘Maybe I care, but I can’t be seen to care.’ ” For the same reason, Altschiller liked the relative understatement of the phrase “nice pants,” as opposed to something like “great pants,” since somewhere between “nice” and “great” a guy goes from just happening to look good to the unacceptable position of actually trying to look good. “In focus groups, men said that to be told you had ‘nice pants’ was one of the highest compliments a man could wish for,” Tanyia Kandohla told me later, when I asked about the slogan. “They wouldn’t want more attention drawn to them than that.”

In many ways, the “Nice Pants” campaign is a direct descendant of the hugely successful campaign that Rubin-Postaer & Associates, in Santa Monica, did for Bugle Boy Jeans in the early nineties. In the most famous of those spots, the camera opens on an attractive but slightly goofy-looking man in a pair of jeans who is hitchhiking by the side of a desert highway. Then a black Ferrari with a fabulous babe at the wheel drives by, stops, and backs up. The babe rolls down the window and says, “Excuse me. Are those Bugle Boy Jeans that you’re wearing?” The goofy guy leans over and pokes his head in the window, a surprised half smile on his face: “Why, yes, they are Bugle Boy Jeans.”

“Thank you,” the babe says, and she rolls up the window and drives away.

This is really the same ad as “Nice Pants”: the babe, the naïve hero, the punch line. The two ads have something else in common. In the Bugle Boy spot, the hero wasn’t some stunning male model. “I think he was actually a box boy at Vons in Huntington Beach,” Larry Postaer, the creative director of Rubin-Postaer & Associates, told me. “I guess someone” (at Bugle Boy) “liked him.” He’s O.K.-looking, but not nearly in the same class as the babe in the Ferrari. In “Subway Love,” by the same token, the Dockers man is medium-sized, almost small, and gets pushed around by much tougher people in the tussle on the train. He’s cute, but he’s a little bit of a wimp. Kandohla says that F.C.B. tried very hard to find someone with that look, someone who was, in her words, “aspirational real,” not some “buff, muscle-bound jock.” In a fashion ad for women, you can use Claudia Schiffer to sell a cheap pair of pants. But not in a fashion ad for men. The guy has to be believable. “A woman cannot be too gorgeous,” Postaer explained. “A man, however, can be too gorgeous, because then he’s not a man anymore. It’s pretty rudimentary. Yet there are people who don’t buy that, and have gorgeous men in their ads. I don’t get it. Talk to Barneys about how well that’s working. It couldn’t stay in business trying to sell that high-end swagger to a mass market. The general public wouldn’t accept it. Look at beer commercials. They always have these gorgeous girls (even now, after all the heat) and the guys are always just guys. That’s the way it is. We only reflect what’s happening out there, we’re not creating it. Those guys who run the real high-end fashion ads, they don’t understand that. They’re trying to remold how people think about gender. I can’t explain it, though I have my theories. It’s like a Grecian ideal. But you can’t be successful at advertising by trying to re-create the human condition. You can’t alter men’s minds, particularly on subjects like sexuality. It’ll never happen.”

Postaer is a gruff, rangy guy, with a Midwestern accent and a gravelly voice, who did Budweiser commercials in Chicago before moving West fifteen years ago. When he wasn’t making fun of the pretentious style of East Coast fashion advertising, he was making fun of the pretentious questions of East Coast writers. When, for example, I earnestly asked him to explain the logic behind having the goofy guy screw up his face in such a (well, goofy) way when he says, “Why, yes, they are Bugle Boy Jeans,” Postaer took his tennis shoes off his desk, leaned forward bemusedly in his chair, and looked at me as if my head came to a small point. “Because that’s the only way he could say it,” he said. “I suppose we might have had him say it a little differently if he could actually act.”

Incredibly, Postaer said, the people at Bugle Boy wanted the babe to invite the goofy guy into the car, despite the fact that this would have violated the most important rule that governs this new style of groin messages in men’s-fashion advertising, which is that the guy absolutely cannot ever get the girl. It’s not just that if he got the girl the joke wouldn’t work anymore; it’s that if he got the girl it might look as if he had deliberately dressed to get the girl, and although at the back of every man’s mind as he’s dressing in the morning there is the thought of getting the girl, any open admission that that’s what he’s actually trying to do would undermine the whole unself-conscious, antifashion statement that men’s advertising is about. If Tatjana Patitz were to say “Beau garçon” to the guy in “Vive la France,” or the babe on the subway were to give the wimp her number, Dockers would suddenly become terrifyingly conspicuous, the long-pants equivalent of wearing a tight little Speedo to the beach. And if the Vons box boy should actually get a ride from the Ferrari babe, the ad would suddenly become believable only to that thin stratum of manhood which thinks that women in Ferraris find twenty-four-dollar jeans irresistible. “We fought that tooth and nail,” Postaer said. “And it more or less cost us the account, even though the ad was wildly successful.” He put his tennis shoes back up on the desk. “But that’s what makes this business fun: trying to prove to clients how wrong they are.”

5.

The one ad in the “Nice Pants” campaign which isn’t like the Bugle Boy spots is called “Motorcycle.” In it a nice young man happens upon a gleaming Harley on a dark back street of what looks like downtown Manhattan. He strokes the seat and then, unable to contain himself, climbs aboard the bike and bounces up and down, showing off his Dockers (the “product shot”) but accidentally breaking a mirror on the handlebar. He looks up. The Harley’s owner-a huge, leather-clad biker-is looking down at him. The biker glowers, looking him up and down, and says, “Nice pants.” Last shot: the biker rides away, leaving the guy standing on the sidewalk in just his underwear.

What’s surprising about this ad is that, unlike “Vive la France” and “Subway Love,” it does seem to cross the boundaries of acceptable sex talk. The rules of guy advertising so carefully observed in those spots-the fact that the hero has to be naïve, that he can’t be too good-looking, that he can’t get the girl, and that he can’t be told anything stronger than “Nice pants”-are all, in some sense, reactions to the male fear of appearing too concerned with fashion, of being too pretty, of not being masculine. But what is “Motorcycle”? It’s an ad about a sweet-looking guy down in the Village somewhere who loses his pants to a butch-looking biker in leather. “I got so much feedback at the time of ‘Well, God, that’s kind of gay, don’t you think?’ ” Robert Hanson said. “People were saying, ‘This buff guy comes along and he rides off with the guy’s pants. I mean, what the hell were they doing?’ It came from so many different people within the industry. It came from some of our most conservative retailers. But do you know what? If you put these three spots up-‘Vive la France,’ ‘Subway Love,’ and ‘Motorcycle’-which one do you think men will talk about ad nauseam? ‘Motorcycle.’ It’s No. 1. It’s because he’s really cool. He’s in a really cool environment, and it’s every guy’s fantasy to have a really cool, tricked-out fancy motorcycle.”

Hanson paused, as if he recognized that what he was saying was quite sensitive. He didn’t want to say that men failed to pick up the gay implications of the ad because they’re stupid, because they aren’t stupid. And he didn’t want to sound condescending, because Dockers didn’t build a six-hundred-million-dollar business in five years by sounding condescending. All he was trying to do was point out the fundamental exegetical error in calling this a gay ad, because the only way for a Dockers man to be offended by “Motorcycle” would be if he thought about it with a little imagination, if he picked up on some fairly subtle cues, if he integrated an awful lot of detail. In other words, a Dockers man could only be offended if he did precisely what, according to Meyers-Levy, men don’t do. It’s not a gay ad because it’s a guy ad. “The fact is,” Hanson said, “that most men’s interpretation of that spot is: You know what? Those pants must be really cool, because they prevented him from getting the shit kicked out of him.”