The Picture Problem

How the S.U.V. ran over automotive safety.


In the summer of 1996, the Ford Motor Company began building the Expedition, its new, full-sized S.U.V., at the Michigan Truck Plant, in the Detroit suburb of Wayne. The Expedition was essentially the F-150 pickup truck with an extra set of doors and two more rows of seats—and the fact that it was a truck was critical. Cars have to meet stringent fuel-efficiency regulations. Trucks don’t. The handling and suspension and braking of cars have to be built to the demanding standards of drivers and passengers. Trucks only have to handle like, well, trucks. Cars are built with what is called unit-body construction. To be light enough to meet fuel standards and safe enough to meet safety standards, they have expensive and elaborately engineered steel skeletons, with built-in crumple zones to absorb the impact of a crash. Making a truck is a lot more rudimentary. You build a rectangular steel frame. The engine gets bolted to the front. The seats get bolted to the middle. The body gets lowered over the top. The result is heavy and rigid and not particularly safe. But it’s an awfully inexpensive way to build an automobile. Ford had planned to sell the Expedition for thirty-six thousand dollars, and its best estimate was that it could build one for twenty-four thousand—which, in the automotive industry, is a terrifically high profit margin. Sales, the company predicted, weren’t going to be huge. After all, how many Americans could reasonably be expected to pay a twelve-thousand-dollar premium for what was essentially a dressed-up truck? But Ford executives decided that the Expedition would be a highly profitable niche product. They were half right. The “highly profitable” part turned out to be true. Yet, almost from the moment Ford’s big new S.U.V.s rolled off the assembly line in Wayne, there was nothing “niche” about the Expedition.

Ford had intended to split the assembly line at the Michigan Truck Plant between the Expedition and the Ford F-150 pickup. But, when the first flood of orders started coming in for the Expedition, the factory was entirely given over to S.U.V.s. The orders kept mounting. Assembly-line workers were put on sixty- and seventy-hour weeks. Another night shift was added. The plant was now running twenty-four hours a day, six days a week. Ford executives decided to build a luxury version of the Expedition, the Lincoln Navigator. They bolted a new grille on the Expedition, changed a few body panels, added some sound insulation, took a deep breath, and charged forty-five thousand dollars—and soon Navigators were flying out the door nearly as fast as Expeditions. Before long, the Michigan Truck Plant was the most profitable of Ford’s fifty-three assembly plants. By the late nineteen-nineties, it had become the most profitable factory of any industry in the world. In 1998, the Michigan Truck Plant grossed eleven billion dollars, almost as much as McDonald’s made that year. Profits were $3.7 billion. Some factory workers, with overtime, were making two hundred thousand dollars a year. The demand for Expeditions and Navigators was so insatiable that even when a blizzard hit the Detroit region in January of 1999—burying the city in snow, paralyzing the airport, and stranding hundreds of cars on the freeway—Ford officials got on their radios and commandeered parts bound for other factories so that the Michigan Truck Plant assembly line wouldn’t slow for a moment. The factory that had begun as just another assembly plant had become the company’s crown jewel.

In the history of the automotive industry, few things have been quite as unexpected as the rise of the S.U.V. Detroit is a town of engineers, and engineers like to believe that there is some connection between the success of a vehicle and its technical merits. But the S.U.V. boom was like Apple’s bringing back the Macintosh, dressing it up in colorful plastic, and suddenly creating a new market. It made no sense to them. Consumers said they liked four-wheel drive. But the overwhelming majority of consumers don’t need four-wheel drive. S.U.V. buyers said they liked the elevated driving position. But when, in focus groups, industry marketers probed further, they heard things that left them rolling their eyes. As Keith Bradsher writes in “High and Mighty”—perhaps the most important book about Detroit since Ralph Nader’s “Unsafe at Any Speed”—what consumers said was “If the vehicle is up high, it’s easier to see if something is hiding underneath or lurking behind it.” Bradsher brilliantly captures the mixture of bafflement and contempt that many auto executives feel toward the customers who buy their S.U.V.s. Fred J. Schaafsma, a top engineer for General Motors, says, “Sport-utility owners tend to be more like ‘I wonder how people view me,’ and are more willing to trade off flexibility or functionality to get that.” According to Bradsher, internal industry market research concluded that S.U.V.s tend to be bought by people who are insecure, vain, self-centered, and self-absorbed, who are frequently nervous about their marriages, and who lack confidence in their driving skills. Ford’s S.U.V. designers took their cues from seeing “fashionably dressed women wearing hiking boots or even work boots while walking through expensive malls.” Toyota’s top marketing executive in the United States, Bradsher writes, loves to tell the story of how at a focus group in Los Angeles “an elegant woman in the group said that she needed her full-sized Lexus LX 470 to drive up over the curb and onto lawns to park at large parties in Beverly Hills.” One of Ford’s senior marketing executives was even blunter: “The only time those S.U.V.s are going to be off-road is when they miss the driveway at 3 a.m.”

The truth, underneath all the rationalizations, seemed to be that S.U.V. buyers thought of big, heavy vehicles as safe: they found comfort in being surrounded by so much rubber and steel. To the engineers, of course, that didn’t make any sense, either: if consumers really wanted something that was big and heavy and comforting, they ought to buy minivans, since minivans, with their unit-body construction, do much better in accidents than S.U.V.s. (In a thirty-five m.p.h. crash test, for instance, the driver of a Cadillac Escalade—the G.M. counterpart to the Lincoln Navigator—has a sixteen-per-cent chance of a life-threatening head injury, a twenty-per-cent chance of a life-threatening chest injury, and a thirty-five-per-cent chance of a leg injury. The same numbers in a Ford Windstar minivan—a vehicle engineered from the ground up, as opposed to simply being bolted onto a pickup-truck frame—are, respectively, two per cent, four per cent, and one per cent.) But this desire for safety wasn’t a rational calculation. It was a feeling. Over the past decade, a number of major automakers in America have relied on the services of a French-born cultural anthropologist, G. Clotaire Rapaille, whose speciality is getting beyond the rational—what he calls “cortex”—impressions of consumers and tapping into their deeper, “reptilian” responses. And what Rapaille concluded from countless, intensive sessions with car buyers was that when S.U.V. buyers thought about safety they were thinking about something that reached into their deepest unconscious. “The No. 1 feeling is that everything surrounding you should be round and soft, and should give,” Rapaille told me. “There should be air bags everywhere. Then there’s this notion that you need to be up high. That’s a contradiction, because the people who buy these S.U.V.s know at the cortex level that if you are high there is more chance of a rollover. But at the reptilian level they think that if I am bigger and taller I’m safer. You feel secure because you are higher and dominate and look down. That you can look down is psychologically a very powerful notion. And what was the key element of safety when you were a child? It was that your mother fed you, and there was warm liquid. That’s why cupholders are absolutely crucial for safety. If there is a car that has no cupholder, it is not safe. If I can put my coffee there, if I can have my food, if everything is round, if it’s soft, and if I’m high, then I feel safe. It’s amazing that intelligent, educated women will look at a car and the first thing they will look at is how many cupholders it has.” During the design of Chrysler’s PT Cruiser, one of the things Rapaille learned was that car buyers felt unsafe when they thought that an outsider could easily see inside their vehicles. So Chrysler made the back window of the PT Cruiser smaller. Of course, making windows smaller—and thereby reducing visibility—makes driving more dangerous, not less so. But that’s the puzzle of what has happened to the automobile world: feeling safe has become more important than actually being safe.


One day this fall, I visited the automobile-testing center of Consumers Union, the organization that publishes Consumer Reports. It is tucked away in the woods, in south-central Connecticut, on the site of the old Connecticut Speedway. The facility has two skid pads to measure cornering, a long straightaway for braking tests, a meandering “handling” course that winds around the back side of the track, and an accident-avoidance obstacle course made out of a row of orange cones. It is headed by a trim, white-haired Englishman named David Champion, who previously worked as an engineer with Land Rover and with Nissan. On the day of my visit, Champion set aside two vehicles: a silver 2003 Chevrolet TrailBlazer—an enormous five-thousand-pound S.U.V.—and a shiny blue two-seater Porsche Boxster convertible.

We started with the TrailBlazer. Champion warmed up the Chevrolet with a few quick circuits of the track, and then drove it hard through the twists and turns of the handling course. He sat in the bucket seat with his back straight and his arms almost fully extended, and drove with practiced grace: every movement smooth and relaxed and unhurried. Champion, as an engineer, did not much like the TrailBlazer. “Cheap interior, cheap plastic,” he said, batting the dashboard with his hand. “It’s a little bit heavy, cumbersome. Quiet. Bit wallowy, side to side. Doesn’t feel that secure. Accelerates heavily. Once it gets going, it’s got decent power. Brakes feel a bit spongy.” He turned onto the straightaway and stopped a few hundred yards from the obstacle course.

Measuring accident avoidance is a key part of the Consumers Union evaluation. It’s a simple setup. The driver has to navigate his vehicle through two rows of cones eight feet wide and sixty feet long. Then he has to steer hard to the left, guiding the vehicle through a gate set off to the side, and immediately swerve hard back to the right, and enter a second sixty-foot corridor of cones that are parallel to the first set. The idea is to see how fast you can drive through the course without knocking over any cones. “It’s like you’re driving down a road in suburbia,” Champion said. “Suddenly, a kid on a bicycle veers out in front of you. You have to do whatever it takes to avoid the kid. But there’s a tractor-trailer coming toward you in the other lane, so you’ve got to swing back into your own lane as quickly as possible. That’s the scenario.”

Champion and I put on helmets. He accelerated toward the entrance to the obstacle course. “We do the test without brakes or throttle, so we can just look at handling,” Champion said. “I actually take my foot right off the pedals.” The car was now moving at forty m.p.h. At that speed, on the smooth tarmac of the raceway, the TrailBlazer was very quiet, and we were seated so high that the road seemed somehow remote. Champion entered the first row of cones. His arms tensed. He jerked the car to the left. The TrailBlazer’s tires squealed. I was thrown toward the passenger-side door as the truck’s body rolled, then thrown toward Champion as he jerked the TrailBlazer back to the right. My tape recorder went skittering across the cabin. The whole maneuver had taken no more than a few seconds, but it felt as if we had been sailing into a squall. Champion brought the car to a stop. We both looked back: the TrailBlazer had hit the cone at the gate. The kid on the bicycle was probably dead. Champion shook his head. “It’s very rubbery. It slides a lot. I’m not getting much communication back from the steering wheel. It feels really ponderous, clumsy. I felt a little bit of tail swing.”

I drove the obstacle course next. I started at the conservative speed of thirty-five m.p.h. I got through cleanly. I tried again, this time at thirty-eight m.p.h., and that small increment of speed made a dramatic difference. I made the first left, avoiding the kid on the bicycle. But, when it came time to swerve back to avoid the hypothetical oncoming eighteen-wheeler, I found that I was wrestling with the car. The protests of the tires were jarring. I stopped, shaken. “It wasn’t going where you wanted it to go, was it?” Champion said. “Did you feel the weight pulling you sideways? That’s what the extra weight that S.U.V.s have tends to do. It pulls you in the wrong direction.” Behind us was a string of toppled cones. Getting the TrailBlazer to travel in a straight line, after that sudden diversion, hadn’t been easy. “I think you took out a few pedestrians,” Champion said with a faint smile.

Next up was the Boxster. The top was down. The sun was warm on my forehead. The car was low to the ground; I had the sense that if I dangled my arm out the window my knuckles would scrape on the tarmac. Standing still, the Boxster didn’t feel safe: I could have been sitting in a go-cart. But when I ran it through the handling course I felt that I was in perfect control. On the straightaway, I steadied the Boxster at forty-five m.p.h., and ran it through the obstacle course. I could have balanced a teacup on my knee. At fifty m.p.h., I navigated the left and right turns with what seemed like a twitch of the steering wheel. The tires didn’t squeal. The car stayed level. I pushed the Porsche up into the mid-fifties. Every cone was untouched. “Walk in the park!” Champion exclaimed as we pulled to a stop.

Most of us think that S.U.V.s are much safer than sports cars. If you asked the young parents of America whether they would rather strap their infant child in the back seat of the TrailBlazer or the passenger seat of the Boxster, they would choose the TrailBlazer. We feel that way because in the TrailBlazer our chances of surviving a collision with a hypothetical tractor-trailer in the other lane are greater than they are in the Porsche. What we forget, though, is that in the TrailBlazer you’re also much more likely to hit the tractor-trailer because you can’t get out of the way in time. In the parlance of the automobile world, the TrailBlazer is better at “passive safety.” The Boxster is better when it comes to “active safety,” which is every bit as important.

Consider the set of safety statistics compiled by Tom Wenzel, a scientist at Lawrence Berkeley National Laboratory, in California, and Marc Ross, a physicist at the University of Michigan. The numbers are expressed in fatalities per million cars, both for drivers of particular models and for the drivers of the cars they hit. (For example, for every million Toyota Avalons on the road, forty Avalon drivers die in car accidents every year, and twenty drivers of other vehicles die in accidents involving Toyota Avalons.) The models below are listed from the lowest combined fatality rate to the highest:

Toyota Avalon
Chrysler Town & Country
Toyota Camry
Volkswagen Jetta
Ford Windstar
Nissan Maxima
Honda Accord
Chevrolet Venture
Buick Century
Subaru Legacy/Outback
Mazda 626
Chevrolet Malibu
Chevrolet Suburban
Jeep Grand Cherokee
Honda Civic
Toyota Corolla
Ford Expedition
GMC Jimmy
Ford Taurus
Nissan Altima
Mercury Marquis
Nissan Sentra
Toyota 4Runner
Chevrolet Tahoe
Dodge Stratus
Lincoln Town Car
Ford Explorer
Pontiac Grand Am
Toyota Tacoma
Chevrolet Cavalier
Dodge Neon
Pontiac Sunfire
Ford F-Series

Are the best performers the biggest and heaviest vehicles on the road? Not at all. Among the safest cars are the midsize imports, like the Toyota Camry and the Honda Accord. Or consider the extraordinary performance of some subcompacts, like the Volkswagen Jetta. Drivers of the tiny Jetta die at a rate of just forty-seven per million, which is in the same range as drivers of the five-thousand-pound Chevrolet Suburban and almost half that of popular S.U.V. models like the Ford Explorer or the GMC Jimmy. In a head-on crash, an Explorer or a Suburban would crush a Jetta or a Camry. But, clearly, the drivers of Camrys and Jettas are finding a way to avoid head-on crashes with Explorers and Suburbans. The benefits of being nimble—of being in an automobile that’s capable of staying out of trouble—are in many cases greater than the benefits of being big.
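The per-million metric in Wenzel and Ross's statistics is simple to compute. Here is a minimal sketch; the Avalon rate comes from the example quoted above, while the fleet size of 500,000 is purely an illustrative assumption, not a figure from their study:

```python
# Fatalities normalized to a fleet of one million vehicles, the metric
# used in the Wenzel-Ross tables. The fleet size below is illustrative.

def deaths_per_million(deaths: int, fleet_size: int) -> float:
    """Annual deaths, scaled to a hypothetical fleet of one million."""
    return deaths * 1_000_000 / fleet_size

# With an assumed 500,000 Avalons on the road, 20 driver deaths a year
# corresponds to the rate of 40 per million cited in the text:
print(deaths_per_million(20, 500_000))  # 40.0
```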

I had another lesson in active safety at the test track when I got in the TrailBlazer with another Consumers Union engineer, and we did three emergency-stopping tests, taking the Chevrolet up to sixty m.p.h. and then slamming on the brakes. It was not a pleasant exercise. Bringing five thousand pounds of rubber and steel to a sudden stop involves lots of lurching, screeching, and protesting. The first time, the TrailBlazer took 146.2 feet to come to a halt, the second time 151.6 feet, and the third time 153.4 feet. The Boxster can come to a complete stop from sixty m.p.h. in about 124 feet. That’s a difference of about two car lengths, and it isn’t hard to imagine any number of scenarios where two car lengths could mean the difference between life and death.
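The two-car-length figure follows directly from the distances measured at the track; a quick check, assuming a typical car length of roughly thirteen feet:

```python
# Stopping distances from sixty m.p.h., in feet, as measured above.
trailblazer_runs = [146.2, 151.6, 153.4]
boxster = 124.0

average = sum(trailblazer_runs) / len(trailblazer_runs)
gap = average - boxster

print(round(average, 1))  # 150.4
print(round(gap, 1))      # 26.4 -- about two thirteen-foot car lengths
```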


The S.U.V. boom represents, then, a shift in how we conceive of safety—from active to passive. It’s what happens when a larger number of drivers conclude, consciously or otherwise, that the extra thirty feet that the TrailBlazer takes to come to a stop don’t really matter, that the tractor-trailer will hit them anyway, and that they are better off treating accidents as inevitable rather than avoidable. “The metric that people use is size,” says Stephen Popiel, a vice-president of Millward Brown Goldfarb, in Toronto, one of the leading automotive market-research firms. “The bigger something is, the safer it is. In the consumer’s mind, the basic equation is, If I were to take this vehicle and drive it into this brick wall, the more metal there is in front of me the better off I’ll be.”

This is a new idea, and one largely confined to North America. In Europe and Japan, people think of a safe car as a nimble car. That’s why they build cars like the Jetta and the Camry, which are designed to carry out the driver’s wishes as directly and efficiently as possible. In the Jetta, the engine is clearly audible. The steering is light and precise. The brakes are crisp. The wheelbase is short enough that the car picks up the undulations of the road. The car is so small and close to the ground, and so dwarfed by other cars on the road, that an intelligent driver is constantly reminded of the necessity of driving safely and defensively. An S.U.V. embodies the opposite logic. The driver is seated as high and far from the road as possible. The vehicle is designed to overcome its environment, not to respond to it. Even four-wheel drive, seemingly the most beneficial feature of the S.U.V., serves to reinforce this isolation. Having the engine provide power to all four wheels, safety experts point out, does nothing to improve braking, although many S.U.V. owners erroneously believe this to be the case. Nor does the feature necessarily make it safer to turn across a slippery surface: that is largely a function of how much friction is generated by the vehicle’s tires. All it really does is improve what engineers call tracking—that is, the ability to accelerate without slipping in perilous conditions or in deep snow or mud. Champion says that one of the occasions when he came closest to death was a snowy day, many years ago, just after he had bought a new Range Rover. “Everyone around me was slipping, and I was thinking, Yeahhh. And I came to a stop sign on a major road, and I was driving probably twice as fast as I should have been, because I could. I had traction. But I also weighed probably twice as much as most cars. And I still had only four brakes and four tires on the road. I slid right across a four-lane road.” Four-wheel drive robs the driver of feedback. “The car driver whose wheels spin once or twice while backing out of the driveway knows that the road is slippery,” Bradsher writes. “The SUV driver who navigates the driveway and street without difficulty until she tries to brake may not find out that the road is slippery until it is too late.” Jettas are safe because they make their drivers feel unsafe. S.U.V.s are unsafe because they make their drivers feel safe. That feeling of safety isn’t the solution; it’s the problem.


Perhaps the most troublesome aspect of S.U.V. culture is its attitude toward risk. “Safety, for most automotive consumers, has to do with the notion that they aren’t in complete control,” Popiel says. “There are unexpected events that at any moment in time can come out and impact them—an oil patch up ahead, an eighteen-wheeler turning over, something falling down. People feel that the elements of the world out of their control are the ones that are going to cause them distress.”

Of course, those things really aren’t outside a driver’s control: an alert driver, in the right kind of vehicle, can navigate the oil patch, avoid the truck, and swerve around the thing that’s falling down. Traffic-fatality rates vary strongly with driver behavior. Drunks are 7.6 times more likely to die in accidents than non-drinkers. People who wear their seat belts are almost half as likely to die as those who don’t buckle up. Forty-year-olds are ten times less likely to get into accidents than sixteen-year-olds. Drivers of minivans, Wenzel and Ross’s statistics tell us, die at a fraction of the rate of drivers of pickup trucks. That’s clearly because minivans are family cars, and parents with children in the back seat are less likely to get into accidents. Frank McKenna, a safety expert at the University of Reading, in England, has done experiments where he shows drivers a series of videotaped scenarios—a child running out the front door of his house and onto the street, for example, or a car approaching an intersection at too great a speed to stop at the red light—and asks people to press a button the minute they become aware of the potential for an accident. Experienced drivers press the button between half a second and a second faster than new drivers, which, given that car accidents are events measured in milliseconds, is a significant difference. McKenna’s work shows that, with experience, we all learn how to exert some degree of control over what might otherwise appear to be uncontrollable events. Any conception of safety that revolves entirely around the vehicle, then, is incomplete. Is the Boxster safer than the TrailBlazer? It depends on who’s behind the wheel. In the hands of, say, my very respectable and prudent middle-aged mother, the Boxster is by far the safer car. In my hands, it probably isn’t. On the open road, my reaction to the Porsche’s extraordinary road manners and the sweet, irresistible wail of its engine would be to drive much faster than I should. (At the end of my day at Consumers Union, I parked the Boxster, and immediately got into my own car to drive home. In my mind, I was still at the wheel of the Boxster. Within twenty minutes, I had a two-hundred-and-seventy-one-dollar speeding ticket.) The trouble with the S.U.V. ascendancy is that it excludes the really critical component of safety: the driver.

In psychology, there is a concept called learned helplessness, which arose from a series of animal experiments in the nineteen-sixties at the University of Pennsylvania. Dogs were restrained by a harness, so that they couldn’t move, and then repeatedly subjected to a series of electrical shocks. Then the same dogs were shocked again, only this time they could easily escape by jumping over a low hurdle. But most of them didn’t; they just huddled in the corner, no longer believing that there was anything they could do to influence their own fate. Learned helplessness is now thought to play a role in such phenomena as depression and the failure of battered women to leave their husbands, but one could easily apply it more widely. We live in an age, after all, that is strangely fixated on the idea of helplessness: we’re fascinated by hurricanes and terrorist acts and epidemics like SARS—situations in which we feel powerless to affect our own destiny. In fact, the risks posed to life and limb by forces outside our control are dwarfed by the factors we can control. Our fixation with helplessness distorts our perceptions of risk. “When you feel safe, you can be passive,” Rapaille says of the fundamental appeal of the S.U.V. “Safe means I can sleep. I can give up control. I can relax. I can take off my shoes. I can listen to music.” For years, we’ve all made fun of the middle-aged man who suddenly trades in his sedate family sedan for a shiny red sports car. That’s called a midlife crisis. But at least it involves some degree of engagement with the act of driving. The man who gives up his sedate family sedan for an S.U.V. is saying something far more troubling—that he finds the demands of the road to be overwhelming. Is acting out really worse than giving up?


On August 9, 2000, the Bridgestone Firestone tire company announced one of the largest product recalls in American history. Because of mounting concerns about safety, the company said, it was replacing some fourteen million tires that had been used primarily on the Ford Explorer S.U.V. The cost of the recall—and of a follow-up replacement program initiated by Ford a year later—ran into billions of dollars. Millions more were spent by both companies on fighting and settling lawsuits from Explorer owners, who alleged that their tires had come apart and caused their S.U.V.s to roll over. In the fall of that year, senior executives from both companies were called to Capitol Hill, where they were publicly berated. It was the biggest scandal to hit the automobile industry in years. It was also one of the strangest. According to federal records, the number of fatalities resulting from the failure of a Firestone tire on a Ford Explorer S.U.V., as of September, 2001, was two hundred and seventy-one. That sounds like a lot, until you remember that the total number of tires supplied by Firestone to the Explorer from the moment the S.U.V. was introduced by Ford, in 1990, was fourteen million, and that the average life span of a tire is forty-five thousand miles. The allegation against Firestone amounts to the claim that its tires failed, with fatal results, two hundred and seventy-one times in the course of six hundred and thirty billion vehicle miles. Manufacturers usually win prizes for failure rates that low. It’s also worth remembering that during that same ten-year span almost half a million Americans died in traffic accidents. In other words, during the nineteen-nineties hundreds of thousands of people were killed on the roads because they drove too fast or ran red lights or drank too much. And, of those, a fair proportion involved people in S.U.V.s who were lulled by their four-wheel drive into driving recklessly on slick roads, who drove aggressively because they felt invulnerable, who disproportionately killed those they hit because they chose to drive trucks with inflexible steel-frame architecture, and who crashed because they couldn’t bring their five-thousand-pound vehicles to a halt in time. Yet, out of all those fatalities, regulators, the legal profession, Congress, and the media chose to highlight the 0.05 per cent that could be linked to an alleged defect in the vehicle.
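The failure-rate arithmetic in this passage can be reproduced directly from the figures given in the text:

```python
# Figures from the text: fourteen million tires, an average tire life
# of forty-five thousand miles, 271 fatalities linked to tire failure,
# and roughly half a million total U.S. traffic deaths over the decade.
tires = 14_000_000
miles_per_tire = 45_000
tire_deaths = 271
total_deaths = 500_000  # "almost half a million" -- an approximation

vehicle_miles = tires * miles_per_tire
print(f"{vehicle_miles:,}")  # 630,000,000,000 -- six hundred and thirty billion

share = tire_deaths / total_deaths * 100
print(round(share, 2))  # 0.05 -- per cent of all traffic fatalities
```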

But should that come as a surprise? In the age of the S.U.V., this is what people worry about when they worry about safety—not risks, however commonplace, involving their own behavior but risks, however rare, involving some unexpected event. The Explorer was big and imposing. It was high above the ground. You could look down on other drivers. You could see if someone was lurking behind or beneath it. You could drive it up on someone’s lawn with impunity. Didn’t it seem like the safest vehicle in the world?
Fifty years ago, the mall was born. America would never be the same.


Victor Gruen was short, stout, and unstoppable, with a wild head of hair and eyebrows like unpruned hedgerows. According to a profile in Fortune (and people loved to profile Victor Gruen), he was a “torrential talker with eyes as bright as mica and a mind as fast as mercury.” In the office, he was famous for keeping two or three secretaries working full time, as he moved from one to the next, dictating non-stop in his thick Viennese accent. He grew up in the well-to-do world of prewar Jewish Vienna, studying architecture at the Vienna Academy of Fine Arts—the same school that, a few years previously, had turned down a fledgling artist named Adolf Hitler. At night, he performed satirical cabaret theatre in smoke-filled cafés. He emigrated in 1938, the same week as Freud, when one of his theatre friends dressed up as a Nazi Storm Trooper and drove him and his wife to the airport. They took the first plane they could catch to Zurich, made their way to England, and then boarded the S.S. Statendam for New York, landing, as Gruen later remembered, “with an architect’s degree, eight dollars, and no English.” On the voyage over, he was told by an American to set his sights high—“don’t try to wash dishes or be a waiter, we have millions of them”—but Gruen scarcely needed the advice. He got together with some other German émigrés and formed the Refugee Artists Group. George S. Kaufman’s wife was their biggest fan. Richard Rodgers and Al Jolson gave them money. Irving Berlin helped them with their music. Gruen got on the train to Princeton and came back with a letter of recommendation from Albert Einstein. By the summer of 1939, the group was on Broadway, playing eleven weeks at the Music Box. Then, as M. Jeffrey Hardwick recounts in “Mall Maker,” his new biography of Gruen, one day he went for a walk in midtown and ran into an old friend from Vienna, Ludwig Lederer, who wanted to open a leather-goods boutique on Fifth Avenue. Victor agreed to design it, and the result was a revolutionary storefront, with a kind of mini-arcade in the entranceway, roughly seventeen by fifteen feet: six exquisite glass cases, spotlights, and faux marble, with green corrugated glass on the ceiling. It was a “customer trap.” This was a brand-new idea in American retail design, particularly on Fifth Avenue, where all the carriage-trade storefronts were flush with the street. The critics raved. Gruen designed Ciro’s on Fifth Avenue, Steckler’s on Broadway, Paris Decorators on the Bronx Concourse, and eleven branches of the California clothing chain Grayson’s. In the early fifties, he designed an outdoor shopping center called Northland outside Detroit for J. L. Hudson’s. It covered a hundred and sixty-three acres and had nearly ten thousand parking spaces. This was little more than a decade and a half since he stepped off the boat, and when Gruen watched the bulldozers break ground he turned to his partner and said, “My God but we’ve got a lot of nerve.”

But Gruen’s most famous creation was his next project, in the town of Edina, just outside Minneapolis.  He began work on it almost exactly fifty years ago.  It was called Southdale.  It cost twenty million dollars, and had seventy-two stores and two anchor department-store tenants, Donaldson’s and Dayton’s.  Until then, most shopping centers had been what architects like to call “extroverted,” meaning that store windows and entrances faced both the parking area and the interior pedestrian walkways.  Southdale was introverted: the exterior walls were blank, and all the activity was focussed on the inside.  Suburban shopping centers had always been in the open, with stores connected by outdoor passageways.  Gruen had the idea of putting the whole complex under one roof, with air-conditioning for the summer and heat for the winter.  Almost every other major shopping center had been built on a single level, which made for punishingly long walks.  Gruen put stores on two levels, connected by escalators and fed by two-tiered parking.  In the middle he put a kind of town square, a “garden court” under a skylight, with a fishpond, enormous sculpted trees, a twenty-one-foot cage filled with bright-colored birds, balconies with hanging plants, and a café.  The result, Hardwick writes, was a sensation:

Journalists from all of the country’s top magazines came for the Minneapolis shopping center’s opening. Life, Fortune, Time, Women’s Wear Daily, the New York Times, Business Week and Newsweek all covered the event. The national and local press wore out superlatives attempting to capture the feeling of Southdale. “The Splashiest Center in the U.S.,” Life sang. The glossy weekly praised the incongruous combination of a “goldfish pond, birds, art and 10 acres of stores all. . . under one Minnesota roof.” A “pleasure-dome-with-parking,” Time cheered. One journalist announced that overnight Southdale had become an integral “part of the American Way.”

Southdale Mall still exists. It is situated off I-494, south of downtown Minneapolis and west of the airport—a big concrete box in a sea of parking. The anchor tenants are now J. C. Penney and Marshall Field’s, and there is an Ann Taylor and a Sunglass Hut and a Foot Locker and just about every other chain store that you’ve ever seen in a mall. It does not seem like a historic building, which is precisely why it is one. Fifty years ago, Victor Gruen designed a fully enclosed, introverted, multitiered, double-anchor-tenant shopping complex with a garden court under a skylight—and today virtually every regional shopping center in America is a fully enclosed, introverted, multitiered, double-anchor-tenant complex with a garden court under a skylight. Victor Gruen didn’t design a building; he designed an archetype. For a decade, he gave speeches about it and wrote books and met with one developer after another and waved his hands in the air excitedly, and over the past half century that archetype has been reproduced so faithfully on so many thousands of occasions that today virtually every suburban American goes shopping or wanders around or hangs out in a Southdale facsimile at least once or twice a month. Victor Gruen may well have been the most influential architect of the twentieth century. He invented the mall.


One of Gruen’s contemporaries in the early days of the mall was a man named A. Alfred Taubman, who also started out as a store designer. In 1950, when Taubman was still in his twenties, he borrowed five thousand dollars, founded his own development firm, and, three years later, put up a twenty-six-store open-air shopping center in Flint, Michigan. A few years after that, inspired by Gruen, he matched Southdale with an enclosed mall of his own in Hayward, California, and over the next half century Taubman put together what is widely considered one of the finest collections of shopping malls in the world. The average American mall has annual sales of around three hundred and forty dollars per square foot. Taubman’s malls average sales close to five hundred dollars per square foot. If Victor Gruen invented the mall, Alfred Taubman perfected it. One day not long ago, I asked Taubman to take me to one of his shopping centers and explain whatever it was that first drew people like him and Victor Gruen to the enclosed mall fifty years ago.

Taubman, who just turned eighty, is an imposing man with a wry sense of humor who wears bespoke three-piece suits and peers down at the world through half-closed eyes. He is the sort of old-fashioned man who refers to merchandise as “goods” and apparel as “soft goods” and who can glance at a couture gown from halfway across the room and come within a few dollars of its price. Recently, Taubman’s fortunes took a turn for the worse when Sotheby’s, which he bought in 1983, ran afoul of antitrust laws and he ended up serving a year-long prison sentence on price-fixing charges. Then his company had to fend off a hostile takeover bid led by Taubman’s archrival, the Indianapolis-based Simon Property Group. But, on a recent trip from his Manhattan offices to the Mall at Short Hills, a half hour’s drive away in New Jersey, Taubman was in high spirits. Short Hills holds a special place in his heart. “When I bought that property in 1980, there were only seven stores that were still in business,” Taubman said, sitting in the back of his limousine. “It was a disaster. It was done by a large commercial architect who didn’t understand what he was doing.” Turning it around took four renovations. Bonwit Teller and B. Altman—two of the original anchor tenants—were replaced by Neiman Marcus, Saks, Nordstrom, and Macy’s. Today, Short Hills has average sales of nearly eight hundred dollars per square foot; according to the Greenberg Group, it is the third-most-successful covered mall in the country. When Taubman and I approached the mall, the first thing he did was peer out at the parking garage. It was just before noon on a rainy Thursday. The garage was almost full. “Look at all the cars!” he said, happily.

Taubman directed the driver to stop in front of Bloomingdale’s, on the mall’s north side.  He walked through the short access corridor, paused, and pointed at the floor.  It was made up of small stone tiles.  “People used to use monolithic terrazzo in centers,” he said.  “But it cracked easily and was difficult to repair.  Women, especially, tend to have thin soles.  We found that they are very sensitive to the surface, and when they get on one of those terrazzo floors it’s like a skating rink.  They like to walk on the joints.  The only direct contact you have with the building is through the floor.  How you feel about it is very important.”   Then he looked up and pointed to the second floor of the mall.  The handrails were transparent.  “We don’t want anything to disrupt the view,” Taubman said.  If you’re walking on the first level, he explained, you have to be able, at all times, to have an unimpeded line of sight not just to the stores in front of you but also to the stores on the second level.  The idea is to overcome what Taubman likes to call “threshold resistance,” which is the physical and psychological barrier that stands between a shopper and the inside of a store.  “You buy something because it is available and attractive,” Taubman said.  “You can’t have any obstacles.  The goods have to be all there.”   When Taubman was designing stores in Detroit, in the nineteen-forties, he realized that even the best arcades, like those Gruen designed on Fifth Avenue, weren’t nearly as good at overcoming threshold resistance as an enclosed mall, because with an arcade you still had to get the customer through the door.  “People assume we enclose the space because of air-conditioning and the weather, and that’s important,” Taubman said.  “But the main reason is that it allows us to open up the store to the customer.”

Taubman began making his way down the mall.  He likes the main corridors of his shopping malls to be no more than a thousand feet long—the equivalent of about three city blocks—because he believes that three blocks is about as far as peak shopping interest can be sustained, and as he walked he explained the logic behind what retailers like to call “adjacencies.”   There was Brooks Brothers, where a man might buy a six-hundred-dollar suit, right across from Johnston & Murphy, where the same man might buy a two-hundred-dollar pair of shoes.  The Bose electronics store was next to Brookstone and across from the Sharper Image, so if you got excited about some electronic gizmo in one store you were steps away from getting even more excited by similar gizmos in two other stores.  Gucci, Versace, and Chanel were placed near the highest-end department stores, Neiman Marcus and Saks.  “Lots of developers just rent out their space like you’d cut a salami,” Taubman explained.  “They rent the space based on whether it fits, not necessarily on whether it makes any sense.”   Taubman shook his head.  He gestured to a Legal Sea Foods restaurant, where he wanted to stop for lunch.  It was off the main mall, at the far end of a short entry hallway, and it was down there for a reason.  A woman about to spend five thousand dollars at Versace doesn’t want to catch a whiff of sautéed grouper as she tries on an evening gown.  More to the point, people eat at Legal Sea Foods only during the lunch and dinner hours—which means that if you put the restaurant in the thick of things, you’d have a dead spot in the middle of your mall for most of the day.

At the far end of the mall is Neiman Marcus, and Taubman wandered in, exclaimed over a tray of men’s ties, and delicately examined the stitching in the women’s evening gowns in the designer department.  “Hi, my name is Alfred Taubman—I’m your landlord,” he said, bending over to greet a somewhat startled sales assistant.  Taubman plainly loves Neiman Marcus, and with good reason: well-run department stores are the engines of malls.  They have powerful brand names, advertise heavily, and carry extensive cosmetics lines (shopping malls are, at bottom, delivery systems for lipstick)—all of which generate enormous shopping traffic.  The point of a mall—the reason so many stores are clustered together in one building—is to allow smaller, less powerful retailers to share in that traffic.  A shopping center is an exercise in coöperative capitalism.  It is considered successful (and the mall owner makes the most money) when the maximum number of department-store customers are lured into the mall.

Why, for instance, are so many malls, like Short Hills, two stories?  Back at his office, on Fifth Avenue, Taubman took a piece of paper and drew a simple cross-section of a two-story building.  “You have two levels, all right?  You have an escalator here and an escalator here.”   He drew escalators at both ends of the floors.  “The customer comes into the mall, walks down the hall, gets on the escalator up to the second level.  Goes back along the second floor, down the escalator, and now she’s back where she started from.  She’s seen every store in the center, right?  Now you put on a third level.  Is there any reason to go up there?  No.”   A full circuit of a two-level mall takes you back to the beginning.  It encourages you to circulate through the whole building.  A full circuit of a three-level mall leaves you at the opposite end of the mall from your car.  Taubman was the first to put a ring road around the mall—which he did at his mall in Hayward—for the same reason: if you want to get shoppers into every part of the building, they should be distributed to as many different entry points as possible.  At Short Hills—and at most Taubman malls—the ring road rises gently as you drive around the building, so at least half of the mall entrances are on the second floor.  “We put fifteen per cent more parking on the upper level than on the first level, because people flow like water,” Taubman said.  “They go down much easier than they go up.  And we put our vertical transportation—the escalators—on the ends, so shoppers have to make the full loop.”

This is the insight that drove the enthusiasm for the mall fifty years ago—that by putting everything under one roof, the retailer and the developer gained, for the first time, complete control over their environment.  Taubman fusses about lighting, for instance: he believes that next to the skylights you have to put tiny lights that will go on when the natural light fades, so the dusk doesn’t send an unwelcome signal to shoppers that it is time to go home; and you have to recess the skylights so that sunlight never reflects off the storefront glass, obscuring merchandise.  Can you optimize lighting in a traditional downtown?  The same goes for parking.  Suppose that there was a downtown where the biggest draw was a major department store.  Ideally, you ought to put the garage across the street and two blocks away, so shoppers, on their way from their cars and to their destination, would pass by the stores in between—dramatically increasing the traffic for all the intervening merchants.  But in a downtown, obviously, you can’t put a parking garage just anywhere, and even if you could, you couldn’t insure that the stores in that high-traffic corridor had the optimal adjacencies, or that the sidewalk would feel right under the thin soles of women’s shoes.  And because the stores are arrayed along a road with cars on it, you don’t really have a mall where customers can wander from side to side.  And what happens when they get to the department store?  It’s four or five floors high, and shoppers are like water, remember: they flow downhill.  So it’s going to be hard to generate traffic on the upper levels.  There is a tendency in America to wax nostalgic for the traditional downtown, but those who first believed in the mall—and understood its potential—found it hard to look at the old downtown with anything but frustration.  “In Detroit, prior to the nineteen-fifties, the large department stores, like Hudson’s, controlled everything, like zoning,” Taubman said.  
“They were generous to local politicians. They had enormous clout, and that’s why when Sears wanted to locate in downtown Detroit they were told they couldn’t. So Sears put a store in Highland Park and on Oakland Boulevard, and built a store on the East Side, and it was able to get some other stores to come with them, and before long there were three mini-downtowns in the suburbs. They used to call them hot spots.” This happened more than half a century ago. But it was clear that Taubman had never quite got over how irrational the world outside the mall can be: downtown Detroit chased away traffic.


Planning and control were of even greater importance to Gruen. He was, after all, a socialist—and he was Viennese. In the middle of the nineteenth century, Vienna had demolished the walls and other fortifications that had ringed the city since medieval times, and in the resulting open space built the Ringstrasse—a meticulously articulated addition to the old city. Architects and urban planners solemnly laid out their ideas: there were apartment blocks, and public squares and government buildings, and shopping arcades, each executed in what was thought to be the historically appropriate style. The Rathaus was done in high Gothic; the Burgtheater in early Baroque; the University was pure Renaissance; and the Parliament was classical Greek. It was all part of the official Viennese response to the populist uprisings of 1848: if Austria was to remake itself as a liberal democracy, Vienna had to be physically remade along democratic lines. The Parliament now faced directly onto the street. The walls that had separated the élite of Vienna from the unwashed in the suburbs were torn down. And, most important, the Ringstrasse itself—a grand mall—circled the city, with wide sidewalks and expansive urban views, where Viennese of all backgrounds could mingle freely on their Sunday-afternoon stroll. To the Viennese reformers of the time, the quality of civic life was a function of the quality of the built environment, and Gruen thought that principle applied just as clearly to the American suburbs.

Not long after Southdale was built, Gruen gave the keynote address at a Progressive Architecture awards ceremony in New Orleans, and he took the occasion to lash out at American suburbia, whose roads, he said, were “avenues of horror,” “flanked by the greatest collection of vulgarity—billboards, motels, gas stations, shanties, car lots, miscellaneous industrial equipment, hot dog stands, wayside stores—ever collected by mankind.” American suburbia was chaos, and the only solution to chaos was planning. When Gruen first drew up the plans for Southdale, he placed the shopping center at the heart of a tidy four-hundred-and-sixty-three-acre development, complete with apartment buildings, houses, schools, a medical center, a park, and a lake. Southdale was not a suburban alternative to downtown Minneapolis. It was the Minneapolis downtown you would get if you started over and corrected all the mistakes that were made the first time around. “There is nothing suburban about Southdale except its location,” Architectural Record stated when it reviewed Gruen’s new creation:

It is an imaginative distillation of what makes downtown magnetic: the variety, the individuality, the lights, the color, even the crowds—for Southdale’s pedestrian-scale spaces insure a busyness and a bustle.

Added to this essence of existing downtowns are all kinds of things that ought to be there if downtown weren’t so noisy and dirty and chaotic—sidewalk cafés, art, islands of planting, pretty paving.  Other shopping centers, however pleasant, seem provincial in contrast with the real thing—the city downtown.  But in Minneapolis, it is the downtown that appears pokey and provincial in contrast with Southdale’s metropolitan character.

One person who wasn’t dazzled by Southdale was Frank Lloyd Wright.  “What is this, a railroad station or a bus station?” he asked, when he came for a tour.  “You’ve got a garden court that has all the evils of the village street and none of its charm.”   But no one much listened to Frank Lloyd Wright.  When it came to malls, it was only Victor Gruen’s vision that mattered.


Victor Gruen’s grand plan for Southdale was never realized.  There were no parks or schools or apartment buildings—just that big box in a sea of parking.  Nor, with a few exceptions, did anyone else plan the shopping mall as the centerpiece of a tidy, dense, multi-use development.  Gruen was right about the transformative effect of the mall on retailing.  But in thinking that he could reënact the lesson of the Ringstrasse in American suburbia he was wrong, and the reason was that in the mid-nineteen-fifties the economics of mall-building suddenly changed.

At the time of Southdale, big shopping centers were a delicate commercial proposition.  One of the first big postwar shopping centers was Shopper’s World, in Framingham, Massachusetts, designed by an old business partner of Gruen’s from his Fifth Avenue storefront days.  Shopper’s World was an open center covering seventy acres, with forty-four stores, six thousand parking spaces, and a two-hundred-and-fifty-thousand-square-foot Jordan Marsh department store—and within two years of its opening, in 1951, the developer was bankrupt.  A big shopping center simply cost too much money, and it took too long for a developer to make that money back.  Gruen thought of the mall as the centerpiece of a carefully planned new downtown because he felt that that was the only way malls would ever get built: you planned because you had to plan.  Then, in the mid-fifties, something happened that turned the dismal economics of the mall upside down: Congress made a radical change in the tax rules governing depreciation.

Under tax law, if you build an office building, or buy a piece of machinery for your factory, or make any capital purchase for your business, that investment is assumed to deteriorate and lose some part of its value from wear and tear every year. As a result, a business is allowed to set aside some of its income, tax-free, to pay for the eventual cost of replacing capital investments. For tax purposes, in the early fifties the useful life of a building was held to be forty years, so a developer could deduct one-fortieth of the value of his building from his income every year. A new forty-million-dollar mall, then, had an annual depreciation deduction of a million dollars. What Congress did in 1954, in an attempt to stimulate investment in manufacturing, was to “accelerate” the depreciation process for new construction. Now, using this and other tax loopholes, a mall developer could recoup the cost of his investment in a fraction of the time. As the historian Thomas Hanchett argues, in a groundbreaking paper in The American Historical Review, the result was a “bonanza” for developers. In the first few years after a shopping center was built, the depreciation deductions were so large that the mall was almost certainly losing money, at least on paper—which brought with it enormous tax benefits. For instance, in a front-page article in 1961 on the effect of the depreciation changes, the Wall Street Journal described the finances of a real-estate investment company called Kratter Corp. Kratter’s revenue from its real-estate operations in 1960 was $9,997,043. Deductions for operating expenses and mortgage interest came to $4,836,671, which left a healthy income of $5.16 million. Then came the depreciation deduction—$6.9 million—and Kratter’s healthy profit was magically turned into a “loss” of $1.76 million. Imagine that you were one of five investors in Kratter.
The company’s policy was to distribute nearly all of its pre-depreciation earnings to its investors, so your share would be roughly a million dollars. Ordinarily, you’d pay a good chunk of that in taxes. But that million dollars wasn’t income. After depreciation, Kratter didn’t make any money. That million dollars was a “return of capital,” and it was tax-free.
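The Kratter arithmetic is simple enough to check. Here is a minimal sketch of it, using only the figures quoted above; note that because the depreciation number is the rounded $6.9 million, the computed paper loss lands near, rather than exactly at, the reported $1.76 million.

```python
# Straight-line depreciation under the old forty-year rule:
mall_cost = 40_000_000
straight_line_deduction = mall_cost / 40   # one-fortieth a year: $1 million

# Kratter Corp., 1960, per the Wall Street Journal figures quoted above:
revenue = 9_997_043                        # real-estate operating revenue
deductions = 4_836_671                     # operating expenses and mortgage interest
income = revenue - deductions              # the "healthy income of $5.16 million"

depreciation = 6_900_000                   # accelerated depreciation (rounded)
taxable_income = income - depreciation     # negative: the paper "loss"

print(f"straight-line deduction: ${straight_line_deduction:,.0f}")
print(f"pre-depreciation income: ${income:,}")
print(f"after depreciation:      ${taxable_income:,}")
```

The point of the exercise is the sign flip on the last line: a company that threw off more than five million dollars in cash reported a loss to the tax authorities, so the cash it distributed was untaxed.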

Suddenly it was possible to make much more money investing in things like shopping centers than buying stocks, so money poured into real-estate investment companies.  Prices rose dramatically.  Investors were putting up buildings, taking out as much money from them as possible using accelerated depreciation, then selling them four or five years later at a huge profit—whereupon they built an even bigger building, because the more expensive the building was, the more the depreciation allowance was worth.

Under the circumstances, who cared whether the shopping center made economic sense for the venders? Shopping centers and strip malls became what urban planners call “catalytic,” meaning that developers weren’t building them to serve existing suburban communities; they were building them on the fringes of cities, beyond residential developments, where the land was cheapest. Hanchett points out, in fact, that in many cases the growth of malls appears to follow no demographic logic at all. Cortland, New York, for instance, barely grew at all between 1950 and 1970. Yet in those two decades Cortland gained six new shopping plazas, including the four-hundred-thousand-square-foot enclosed Cortlandville Mall. In the same twenty-year span, the Scranton area actually shrank by seventy-three thousand people while gaining thirty-one shopping centers, including three enclosed malls. In 1953, before accelerated depreciation was put in place, one major regional shopping center was built in the United States. Three years later, after the law was passed, that number was twenty-five. In 1953, new shopping-center construction of all kinds totalled six million square feet. By 1956, that figure had increased five hundred per cent. This was also the era when fast-food restaurants and Howard Johnsons and Holiday Inns and muffler shops and convenience stores began to multiply up and down the highways and boulevards of the American suburbs—and as these developments grew, others followed to share in the increased customer traffic. Malls led to malls, and in turn those malls led to the big stand-alone retailers like Wal-Mart and Target, and then the “power centers” of three or four big-box retailers, like Circuit City, Staples, and Barnes & Noble. Victor Gruen intended Southdale to be a dense, self-contained downtown.
Today, fifteen minutes down an “avenue of horror” from Southdale is the Mall of America, the largest mall in the country, with five hundred and twenty stores, fifty restaurants, and twelve thousand parking spaces—and one can easily imagine that one day it, too, may give way to something newer and bigger.


Once, in the mid-fifties, Victor Gruen sat down with a writer from The New Yorker’s Talk of the Town to give his thoughts on how to save New York City.  The interview took place in Gruen’s stylish offices on West Twelfth Street, in an old Stanford White building, and one can only imagine the reporter, rapt, as Gruen held forth, eyebrows bristling.  First, Gruen said, Manhattan had to get rid of its warehouses and its light manufacturing.  Then, all the surface traffic in midtown—the taxis, buses, and trucks—had to be directed into underground tunnels.  He wanted to put superhighways around the perimeter of the island, buttressed by huge double-decker parking garages.  The jumble of tenements and town houses and apartment blocks that make up Manhattan would be replaced by neat rows of hundred-and-fifty-story residential towers, arrayed along a ribbon of gardens, parks, walkways, theatres, and cafés.

Mr. G. lowered his brows and glared at us. “You are troubled by all those tunnels, are you not?” he inquired. “You wonder whether there is room for them in the present underground jungle of pipes and wires. Did you never think how absurd it is to bury beneath tons of solid pavement equipment that is bound to go on the blink from time to time?” He leaped from his chair and thrust an imaginary pneumatic drill against his polished study floor. “Rat-a-tat-tat!” he exclaimed. “Night and day! Tear up the streets! Then pave them! Then tear ’em up again!” Flinging aside the imaginary drill, he threw himself back in his chair. “In my New York of the future, all pipes and wires will be strung along the upper sides of those tunnels, above a catwalk, accessible to engineers and painted brilliant colors to delight rather than appall the eye.”

Postwar America was an intellectually insecure place, and there was something intoxicating about Gruen’s sophistication and confidence.  That was what took him, so dramatically, from standing at New York Harbor with eight dollars in his pocket to Broadway, to Fifth Avenue, and to the heights of Northland and Southdale.  He was a European intellectual, an émigré, and, in the popular mind, the European émigré represented vision, the gift of seeing something grand in the banality of postwar American life.  When the European visionary confronted a drab and congested urban landscape, he didn’t tinker and equivocate; he levelled warehouses and buried roadways and came up with a thrilling plan for making things right.  “The chief means of travel will be walking,” Gruen said, of his reimagined metropolis.  “Nothing like walking for peace of mind.”   At Northland, he said, thousands of people would show up, even when the stores were closed, just to walk around.  It was exactly like Sunday on the Ringstrasse.  With the building of the mall, Old World Europe had come to suburban Detroit.

What Gruen had, as well, was an unshakable faith in the American marketplace. Malls teach us, he once said, that “it’s the merchants who will save our urban civilization. ‘Planning’ isn’t a dirty word to them; good planning means good business.” He went on, “Sometimes self-interest has remarkable spiritual consequences.” Gruen needed to believe this, as did so many European intellectuals from that period, dubbed by the historian Daniel Horowitz “celebratory émigrés.” They had fled a place of chaos and anxiety, and in American consumer culture they sought a bulwark against the madness across the ocean. They wanted to find in the jumble of the American marketplace something as grand as the Vienna they had lost—the place where the unconscious was meticulously dissected by Dr. Freud on Berggasse, and where shrines to European civilization—to the Gothic, the Baroque, the Renaissance, and the ancient Greek traditions—were erected on the Ringstrasse. To Americans, nothing was more flattering than this. Who didn’t want to believe that the act of levelling warehouses and burying roadways had spiritual consequences? But it was, in the end, too good to be true. This wasn’t the way America worked at all.

A few months ago, Alfred Taubman gave a speech to a real-estate trade association in Detroit, about the prospects for the city’s downtown, and one of the things he talked about was Victor Gruen’s Northland.  It was simply too big, Taubman said.  Hudson’s, the Northland anchor tenant, already had a flagship store in downtown Detroit.  So why did Gruen build a six-hundred-thousand-square-foot satellite at Northland, just a twenty-minute drive away?  Satellites were best at a hundred and fifty thousand to two hundred thousand square feet.  But at six hundred thousand square feet they were large enough to carry every merchandise line that the flagship store carried, which meant no one had any reason to make the trek to the flagship anymore.  Victor Gruen said the lesson of Northland was that the merchants would save urban civilization.  He didn’t appreciate that it made a lot more sense, for his client, to save civilization at a hundred and fifty thousand square feet than at six hundred thousand square feet.  The lesson of America was that the grandest of visions could be derailed by the most banal of details, like the size of the retail footprint, or whether Congress set the depreciation allowance at forty years or twenty years.

When, late in life, Gruen came to realize this, it was a powerfully disillusioning experience.  He revisited one of his old shopping centers, and saw all the sprawling development around it, and pronounced himself in “severe emotional shock.”   Malls, he said, had been disfigured by “the ugliness and discomfort of the land-wasting seas of parking” around them.  Developers were interested only in profit.  “I refuse to pay alimony for those bastard developments,” he said in a speech in London, in 1978.  He turned away from his adopted country.  He had fixed up a country house outside of Vienna, and soon he moved back home for good.  But what did he find when he got there?  Just south of old Vienna, a mall had been built—in his anguished words, a “gigantic shopping machine.”   It was putting the beloved independent shopkeepers of Vienna out of business.  It was crushing the life of his city.  He was devastated.  Victor Gruen invented the shopping mall in order to make America more like Vienna.  He ended up making Vienna more like America.


Mustard now comes in dozens of varieties. Why has ketchup stayed the same?


Many years ago, one mustard dominated the supermarket shelves: French’s. It came in a plastic bottle. People used it on hot dogs and bologna. It was a yellow mustard, made from ground white mustard seed with turmeric and vinegar, which gave it a mild, slightly metallic taste. If you looked hard in the grocery store, you might find something in the specialty-foods section called Grey Poupon, which was Dijon mustard, made from the more pungent brown mustard seed. In the early seventies, Grey Poupon was no more than a hundred-thousand-dollar-a-year business. Few people knew what it was or how it tasted, or had any particular desire for an alternative to French’s or the runner-up, Gulden’s. Then one day the Heublein Company, which owned Grey Poupon, discovered something remarkable: if you gave people a mustard taste test, a significant number had only to try Grey Poupon once to switch from yellow mustard. In the food world that almost never happens; even among the most successful food brands, only about one in a hundred have that kind of conversion rate. Grey Poupon was magic.

So Heublein put Grey Poupon in a bigger glass jar, with an enamelled label and enough of a whiff of Frenchness to make it seem as if it were still being made in Europe (it was made in Hartford, Connecticut, from Canadian mustard seed and white wine).  The company ran tasteful print ads in upscale food magazines.  They put the mustard in little foil packets and distributed them with airplane meals—which was a brand-new idea at the time.  Then they hired the Manhattan ad agency Lowe Marschalk to do something, on a modest budget, for television.  The agency came back with an idea: A Rolls-Royce is driving down a country road.  There’s a man in the back seat in a suit with a plate of beef on a silver tray.  He nods to the chauffeur, who opens the glove compartment.  Then comes what is known in the business as the “reveal.”  The chauffeur hands back a jar of Grey Poupon.  Another Rolls-Royce pulls up alongside.  A man leans his head out the window.  “Pardon me.  Would you have any Grey Poupon?”

In the cities where the ads ran, sales of Grey Poupon leaped forty to fifty per cent, and whenever Heublein bought airtime in new cities sales jumped by forty to fifty per cent again.  Grocery stores put Grey Poupon next to French’s and Gulden’s.  By the end of the nineteen-eighties Grey Poupon was the most powerful brand in mustard.  “The tagline in the commercial was that this was one of life’s finer pleasures,” Larry Elegant, who wrote the original Grey Poupon spot, says, “and that, along with the Rolls-Royce, seemed to impart to people’s minds that this was something truly different and superior.”

The rise of Grey Poupon proved that the American supermarket shopper was willing to pay more—in this case, $3.99 instead of $1.49 for eight ounces—as long as what they were buying carried with it an air of sophistication and complex aromatics.  Its success showed, furthermore, that the boundaries of taste and custom were not fixed: that just because mustard had always been yellow didn’t mean that consumers would use only yellow mustard.  It is because of Grey Poupon that the standard American supermarket today has an entire mustard section.  And it is because of Grey Poupon that a man named Jim Wigon decided, four years ago, to enter the ketchup business.  Isn’t the ketchup business today exactly where mustard was thirty years ago? There is Heinz and, far behind, Hunt’s and Del Monte and a handful of private-label brands.  Jim Wigon wanted to create the Grey Poupon of ketchup.

Wigon is from Boston.  He’s a thickset man in his early fifties, with a full salt-and-pepper beard.  He runs his ketchup business—under the brand World’s Best Ketchup—out of the catering business of his partner, Nick Schiarizzi, in Norwood, Massachusetts, just off Route 1, in a low-slung building behind an industrial-equipment-rental shop.  He starts with red peppers, Spanish onions, garlic, and a high-end tomato paste.  Basil is chopped by hand, because the buffalo chopper bruises the leaves.  He uses maple syrup, not corn syrup, which gives him a quarter of the sugar of Heinz.  He pours his ketchup into a clear glass ten-ounce jar, and sells it for three times the price of Heinz, and for the past few years he has crisscrossed the country, peddling World’s Best in six flavors—regular, sweet, dill, garlic, caramelized onion, and basil—to specialty grocery stores and supermarkets.  If you were in Zabar’s on Manhattan’s Upper West Side a few months ago, you would have seen him at the front of the store, in a spot between the sushi and the gefilte fish.  He was wearing a World’s Best baseball cap, a white shirt, and a red-stained apron.  In front of him, on a small table, was a silver tureen filled with miniature chicken and beef meatballs, a box of toothpicks, and a dozen or so open jars of his ketchup.  “Try my ketchup!” Wigon said, over and over, to anyone who passed.  “If you don’t try it, you’re doomed to eat Heinz the rest of your life.”

In the same aisle at Zabar’s that day two other demonstrations were going on, so that people were starting at one end with free chicken sausage, sampling a slice of prosciutto, and then pausing at the World’s Best stand before heading for the cash register. They would look down at the array of open jars, and Wigon would impale a meatball on a toothpick, dip it in one of his ketchups, and hand it to them with a flourish. The ratio of tomato solids to liquid in World’s Best is much higher than in Heinz, and the maple syrup gives it an unmistakable sweet kick. Invariably, people would close their eyes, just for a moment, and do a subtle double take. Some of them would look slightly perplexed and walk away, and others would nod and pick up a jar. “You know why you like it so much?” he would say, in his broad Boston accent, to the customers who seemed most impressed. “Because you’ve been eating bad ketchup all your life!” Jim Wigon had a simple vision: build a better ketchup—the way Grey Poupon built a better mustard—and the world will beat a path to your door. If only it were that easy.


The story of World’s Best Ketchup cannot properly be told without a man from White Plains, New York, named Howard Moskowitz.  Moskowitz is sixty, short and round, with graying hair and huge gold-rimmed glasses.  When he talks, he favors the Socratic monologue—a series of questions that he poses to himself, then answers, punctuated by “ahhh” and much vigorous nodding.  He is a lineal descendant of the legendary eighteenth-century Hasidic rabbi known as the Seer of Lublin.  He keeps a parrot.  At Harvard, he wrote his doctoral dissertation on psychophysics, and all the rooms on the ground floor of his food-testing and market-research business are named after famous psychophysicists.  (“Have you ever heard of the name Rose Marie Pangborn? Ahhh.  She was a professor at Davis.  Very famous.  This is the Pangborn kitchen.”) Moskowitz is a man of uncommon exuberance and persuasiveness: if he had been your freshman statistics professor, you would today be a statistician.  “My favorite writer? Gibbon,” he burst out, when we met not long ago.  He had just been holding forth on the subject of sodium solutions.  “Right now I’m working my way through the Hales history of the Byzantine Empire.  Holy shit! Everything is easy until you get to the Byzantine Empire.  It’s impossible.  One emperor is always killing the others, and everyone has five wives or three husbands.  It’s very Byzantine.”

Moskowitz set up shop in the seventies, and one of his first clients was Pepsi.  The artificial sweetener aspartame had just become available, and Pepsi wanted Moskowitz to figure out the perfect amount of sweetener for a can of Diet Pepsi.  Pepsi knew that anything below eight per cent sweetness was not sweet enough and anything over twelve per cent was too sweet.  So Moskowitz did the logical thing.  He made up experimental batches of Diet Pepsi with every conceivable degree of sweetness—8 per cent, 8.25 per cent, 8.5, and on and on up to 12—gave them to hundreds of people, and looked for the concentration that people liked the most.  But the data were a mess—there wasn’t a pattern—and one day, sitting in a diner, Moskowitz realized why.  They had been asking the wrong question.  There was no such thing as the perfect Diet Pepsi.  They should have been looking for the perfect Diet Pepsis.
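Why did averaging over hundreds of tasters leave Moskowitz with "a mess"? A toy simulation (all numbers invented, not Moskowitz's data) makes the mechanism concrete: if the tasters split into two camps with different ideal sweetness levels, the averaged curve across the 8-to-12-per-cent range is nearly flat, even though each camp on its own has a sharp, obvious peak.

```python
# Invented model: two camps of tasters with different ideal sweetness
# levels flatten each other out when their scores are averaged.

def rating(sweetness, ideal):
    """One taster's 0-100 rating, peaking at that taster's ideal level."""
    return max(0.0, 100.0 - 40.0 * abs(sweetness - ideal))

camp_a = [8.5] * 50   # fifty tasters whose ideal is 8.5 per cent
camp_b = [11.5] * 50  # fifty whose ideal is 11.5 per cent

levels = [8.0 + 0.25 * i for i in range(17)]  # 8.00, 8.25, ... 12.00

# Averaged over the whole pool, the curve barely moves...
avg = {lvl: sum(rating(lvl, i) for i in camp_a + camp_b) / 100
       for lvl in levels}
spread = max(avg.values()) - min(avg.values())  # 10 points peak to trough

# ...but each camp, taken alone, has an unmistakable optimum.
best_a = max(levels, key=lambda l: sum(rating(l, i) for i in camp_a))
best_b = max(levels, key=lambda l: sum(rating(l, i) for i in camp_b))
print(spread, best_a, best_b)  # 10.0 8.5 11.5
```

Looking for one perfect Diet Pepsi averages the two peaks away; looking for perfect Diet Pepsis, plural, finds both.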

It took a long time for the food world to catch up with Howard Moskowitz.  He knocked on doors and tried to explain his idea about the plural nature of perfection, and no one answered.  He spoke at food-industry conferences, and audiences shrugged.  But he could think of nothing else.  “It’s like that Yiddish expression,” he says.  “Do you know it? To a worm in horseradish, the world is horseradish!” Then, in 1986, he got a call from the Campbell’s Soup Company.  They were in the spaghetti-sauce business, going up against Ragú with their Prego brand.  Prego was a little thicker than Ragú, with diced tomatoes as opposed to Ragú’s purée, and, Campbell’s thought, had better pasta adherence.  But, for all that, Prego was in a slump, and Campbell’s was desperate for new ideas.

Standard practice in the food industry would have been to convene a focus group and ask spaghetti eaters what they wanted.  But Moskowitz does not believe that consumers—even spaghetti lovers—know what they desire if what they desire does not yet exist.  “The mind,” as Moskowitz is fond of saying, “knows not what the tongue wants.”  Instead, working with the Campbell’s kitchens, he came up with forty-five varieties of spaghetti sauce.  These were designed to differ in every conceivable way: spiciness, sweetness, tartness, saltiness, thickness, aroma, mouth feel, cost of ingredients, and so forth.  He had a trained panel of food tasters analyze each of those varieties in depth.  Then he took the prototypes on the road—to New York, Chicago, Los Angeles, and Jacksonville—and asked people in groups of twenty-five to eat between eight and ten small bowls of different spaghetti sauces over two hours and rate them on a scale of one to a hundred.  When Moskowitz charted the results, he saw that everyone had a slightly different definition of what a perfect spaghetti sauce tasted like.  If you sifted carefully through the data, though, you could find patterns, and Moskowitz learned that most people’s preferences fell into one of three broad groups: plain, spicy, and extra-chunky, and of those three the last was the most important.  Why? Because at the time there was no extra-chunky spaghetti sauce in the supermarket.  Over the next decade, that new category proved to be worth hundreds of millions of dollars to Prego.  “We all said, ‘Wow!’ ” Monica Wood, who was then the head of market research for Campbell’s, recalls.  “Here there was this third segment—people who liked their spaghetti sauce with lots of stuff in it—and it was completely untapped.  So in about 1989-90 we launched Prego extra-chunky.  It was extraordinarily successful.”
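The pattern-finding step described above—sifting individual ratings until broad preference groups emerge—is, in modern terms, a clustering problem. Here is a minimal sketch using plain k-means on invented taster profiles (the attributes, numbers, and seeding are all illustrative assumptions, not Campbell's data):

```python
import random

random.seed(0)

def make_taster(center, spread=0.5):
    """One taster's preference profile, jittered around an archetype."""
    return [c + random.uniform(-spread, spread) for c in center]

# Three invented archetypes on (spiciness, sweetness, chunkiness) scales,
# standing in for the plain / spicy / extra-chunky segments.
centers = [(2.0, 5.0, 2.0), (8.0, 3.0, 2.0), (4.0, 5.0, 9.0)]
tasters = [make_taster(c) for c in centers for _ in range(25)]

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, seeds, iters=20):
    """Plain k-means: assign each point to its nearest mean, re-average."""
    means = [list(s) for s in seeds]
    for _ in range(iters):
        groups = [[] for _ in means]
        for p in points:
            nearest = min(range(len(means)), key=lambda i: dist2(p, means[i]))
            groups[nearest].append(p)
        means = [[sum(col) / len(g) for col in zip(*g)] if g else m
                 for m, g in zip(means, groups)]
    return groups

# Seed one mean inside each archetype's block (a simplification; real
# k-means would use random or farthest-point initialization).
groups = kmeans(tasters, [tasters[0], tasters[25], tasters[50]])
print([len(g) for g in groups])  # [25, 25, 25]
```

With well-separated preferences the three segments fall out cleanly; the hard part, as the Prego story suggests, was realizing there were segments to look for at all.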

It may be hard today, fifteen years later—when every brand seems to come in multiple varieties—to appreciate how much of a breakthrough this was.  In those years, people in the food industry carried around in their heads the notion of a platonic dish—the version of a dish that looked and tasted absolutely right.  At Ragú and Prego, they had been striving for the platonic spaghetti sauce, and the platonic spaghetti sauce was thin and blended because that’s the way they thought it was done in Italy.  Cooking, on the industrial level, was consumed with the search for human universals.  Once you start looking for the sources of human variability, though, the old orthodoxy goes out the window.  Howard Moskowitz stood up to the Platonists and said there are no universals.

Moskowitz still has a version of the computer model he used for Prego fifteen years ago.  It has all the coded results from the consumer taste tests and the expert tastings, split into the three categories (plain, spicy, and extra-chunky) and linked up with the actual ingredients list on a spreadsheet.  “You know how they have a computer model for building an aircraft,” Moskowitz said as he pulled up the program on his computer.  “This is a model for building spaghetti sauce.  Look, every variable is here.”  He pointed at column after column of ratings.  “So here are the ingredients.  I’m a brand manager for Prego.  I want to optimize one of the segments.  Let’s start with Segment 1.”  In Moskowitz’s program, the three spaghetti-sauce groups were labelled Segment 1, Segment 2, and Segment 3.  He typed in a few commands, instructing the computer to give him the formulation that would score the highest with those people in Segment 1.  The answer appeared almost immediately: a specific recipe that, according to Moskowitz’s data, produced a score of 78 from the people in Segment 1.  But that same formulation didn’t do nearly as well with those in Segment 2 and Segment 3.  They scored it 67 and 57, respectively.  Moskowitz started again, this time asking the computer to optimize for Segment 2.  This time the ratings came in at 82, but now Segment 1 had fallen ten points, to 68.  “See what happens?” he said.  “If I make one group happier, I piss off another group.  We did this for coffee with General Foods, and we found that if you create only one product the best you can get across all the segments is a 60—if you’re lucky.  That’s if you were to treat everybody as one big happy family.  But if I do the sensory segmentation, I can get 70, 71, 72.  Is that big? Ahhh.  It’s a very big difference.  In coffee, a 71 is something you’ll die for.”
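The arithmetic behind "if I make one group happier, I piss off another group" can be sketched in a few lines. The preference functions, grid, and scores below are made up for illustration; only the shape of the argument follows Moskowitz's model.

```python
# Invented illustration: a single recipe tuned for everyone scores worse
# than separate recipes tuned per segment.

def seg_score(recipe, ideal):
    """A segment's 0-100 score, falling off as the recipe leaves its ideal."""
    chunkiness, spiciness = recipe
    ic, isp = ideal
    return max(0.0, 100.0 - 8.0 * (abs(chunkiness - ic) + abs(spiciness - isp)))

ideals = {"plain": (1.0, 1.0), "spicy": (1.0, 6.0), "extra-chunky": (7.0, 2.0)}

# Candidate formulations on a coarse (chunkiness, spiciness) grid.
candidates = [(c / 2, s / 2) for c in range(17) for s in range(17)]

# One recipe for the whole "happy family": maximize the average score.
best_single = max(candidates,
                  key=lambda r: sum(seg_score(r, i) for i in ideals.values()))
avg_single = sum(seg_score(best_single, i) for i in ideals.values()) / 3

# Sensory segmentation: each segment gets its own optimized recipe.
avg_segmented = sum(max(seg_score(r, i) for r in candidates)
                    for i in ideals.values()) / 3

print(round(avg_single, 1), avg_segmented)  # 70.7 100.0
```

The compromise recipe lands in the middle of the preference space and satisfies no one fully, which is exactly the gap between Moskowitz's "60 if you're lucky" and his segmented 70s.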

When Jim Wigon set up shop that day in Zabar’s, then, his operating assumption was that there ought to be some segment of the population that preferred a ketchup made with Stanislaus tomato paste and hand-chopped basil and maple syrup.  That’s the Moskowitz theory.  But there is theory and there is practice.  By the end of that long day, Wigon had sold ninety jars.  But he’d also got two parking tickets and had to pay for a hotel room, so he wasn’t going home with money in his pocket.  For the year, Wigon estimates, he’ll sell fifty thousand jars—which, in the universe of condiments, is no more than a blip.  “I haven’t drawn a paycheck in five years,” Wigon said as he impaled another meatball on a toothpick.  “My wife is killing me.”  And it isn’t just World’s Best that is struggling.  In the gourmet-ketchup world, there is River Run and Uncle Dave’s, from Vermont, and Muir Glen Organic and Mrs.  Tomato Head Roasted Garlic Peppercorn Catsup, in California, and dozens of others—and every year Heinz’s overwhelming share of the ketchup market just grows.

It is possible, of course, that ketchup is waiting for its own version of that Rolls-Royce commercial, or the discovery of the ketchup equivalent of extra-chunky—the magic formula that will satisfy an unmet need.  It is also possible, however, that the rules of Howard Moskowitz, which apply to Grey Poupon and Prego spaghetti sauce and to olive oil and salad dressing and virtually everything else in the supermarket, don’t apply to ketchup.


Tomato ketchup is a nineteenth-century creation—the union of the English tradition of fruit and vegetable sauces and the growing American infatuation with the tomato.  But what we know today as ketchup emerged out of a debate that raged in the first years of the last century over benzoate, a preservative widely used in late-nineteenth-century condiments.  Harvey Washington Wiley, the chief of the Bureau of Chemistry in the Department of Agriculture from 1883 to 1912, came to believe that benzoates were not safe, and the result was an argument that split the ketchup world in half.  On one side was the ketchup establishment, which believed that it was impossible to make ketchup without benzoate and that benzoate was not harmful in the amounts used.  On the other side was a renegade band of ketchup manufacturers, who believed that the preservative puzzle could be solved with the application of culinary science.  The dominant nineteenth-century ketchups were thin and watery, in part because they were made from unripe tomatoes, which are low in the complex carbohydrates known as pectin, which add body to a sauce.  But what if you made ketchup from ripe tomatoes, giving it the density it needed to resist degradation? Nineteenth-century ketchups had a strong tomato taste, with just a light vinegar touch.  The renegades argued that by greatly increasing the amount of vinegar, in effect protecting the tomatoes by pickling them, they were making a superior ketchup: safer, purer, and better tasting.  They offered a money-back guarantee in the event of spoilage.  They charged more for their product, convinced that the public would pay more for a better ketchup, and they were right.  The benzoate ketchups disappeared.  The leader of the renegade band was an entrepreneur out of Pittsburgh named Henry J.  Heinz.

The world’s leading expert on ketchup’s early years is Andrew F. Smith, a substantial man, well over six feet, with a graying mustache and short wavy black hair. Smith is a scholar, trained as a political scientist, intent on bringing rigor to the world of food. When we met for lunch not long ago at the restaurant Savoy in SoHo (chosen because of the excellence of its hamburger and French fries, and because Savoy makes its own ketchup—a dark, peppery, and viscous variety served in a white porcelain saucer), Smith was in the throes of examining the origins of the croissant for the upcoming “Oxford Encyclopedia of Food and Drink in America,” of which he is the editor-in-chief. Was the croissant invented in 1683, by the Viennese, in celebration of their defeat of the invading Turks? Or in 1686, by the residents of Budapest, to celebrate their defeat of the Turks? Either story would account for its distinctive crescent shape—since it would make a certain cultural sense (particularly for the Viennese) to consecrate a battlefield triumph in the form of pastry. But the only reference Smith could find to either story was in the Larousse Gastronomique of 1938. “It just doesn’t check out,” he said, shaking his head wearily.

Smith’s specialty is the tomato, however, and over the course of many scholarly articles and books—“The History of Home-Made Anglo-American Tomato Ketchup,” for Petits Propos Culinaires, for example, and “The Great Tomato Pill War of the 1830’s,” for The Connecticut Historical Society Bulletin—Smith has argued that some critical portion of the history of culinary civilization could be told through this fruit. Cortez brought tomatoes to Europe from the New World, and they inexorably insinuated themselves into the world’s cuisines. The Italians substituted the tomato for eggplant. In northern India, it went into curries and chutneys. “The biggest tomato producer in the world today?” Smith paused, for dramatic effect. “China. You don’t think of tomato being a part of Chinese cuisine, and it wasn’t ten years ago. But it is now.” Smith dipped one of my French fries into the homemade sauce. “It has that raw taste,” he said, with a look of intense concentration. “It’s fresh ketchup. You can taste the tomato.” Ketchup was, to his mind, the most nearly perfect of all the tomato’s manifestations. It was inexpensive, which meant that it had a firm lock on the mass market, and it was a condiment, not an ingredient, which meant that it could be applied at the discretion of the food eater, not the food preparer. “There’s a quote from Elizabeth Rozin I’ve always loved,” he said. Rozin is the food theorist who wrote the essay “Ketchup and the Collective Unconscious,” and Smith used her conclusion as the epigraph of his ketchup book: ketchup may well be “the only true culinary expression of the melting pot, and . . . its special and unprecedented ability to provide something for everyone makes it the Esperanto of cuisine.” Here is where Henry Heinz and the benzoate battle were so important: in defeating the condiment Old Guard, he was the one who changed the flavor of ketchup in a way that made it universal.


There are five known fundamental tastes in the human palate: salty, sweet, sour, bitter, and umami.  Umami is the proteiny, full-bodied taste of chicken soup, or cured meat, or fish stock, or aged cheese, or mother’s milk, or soy sauce, or mushrooms, or seaweed, or cooked tomato.  “Umami adds body,” Gary Beauchamp, who heads the Monell Chemical Senses Center, in Philadelphia, says.  “If you add it to a soup, it makes the soup seem like it’s thicker—it gives it sensory heft.  It turns a soup from salt water into a food.”  When Heinz moved to ripe tomatoes and increased the percentage of tomato solids, he made ketchup, first and foremost, a potent source of umami.  Then he dramatically increased the concentration of vinegar, so that his ketchup had twice the acidity of most other ketchups; now ketchup was sour, another of the fundamental tastes.  The post-benzoate ketchups also doubled the concentration of sugar—so now ketchup was also sweet—and all along ketchup had been salty and bitter.  These are not trivial issues.  Give a baby soup, and then soup with MSG (an amino-acid salt that is pure umami), and the baby will go back for the MSG soup every time, the same way a baby will always prefer water with sugar to water alone.  Salt and sugar and umami are primal signals about the food we are eating—about how dense it is in calories, for example, or, in the case of umami, about the presence of proteins and amino acids.  What Heinz had done was come up with a condiment that pushed all five of these primal buttons.  The taste of Heinz’s ketchup began at the tip of the tongue, where our receptors for sweet and salty first appear, moved along the sides, where sour notes seem the strongest, then hit the back of the tongue, for umami and bitter, in one long crescendo.  How many things in the supermarket run the sensory spectrum like this?

A number of years ago, the H.  J.  Heinz Company did an extensive market-research project in which researchers went into people’s homes and watched the way they used ketchup.  “I remember sitting in one of those households,” Casey Keller, who was until recently the chief growth officer for Heinz, says.  “There was a three-year-old and a six-year-old, and what happened was that the kids asked for ketchup and Mom brought it out.  It was a forty-ounce bottle.  And the three-year-old went to grab it himself, and Mom intercepted the bottle and said, ‘No, you’re not going to do that.’ She physically took the bottle away and doled out a little dollop.  You could see that the whole thing was a bummer.”  For Heinz, Keller says, that moment was an epiphany.  A typical five-year-old consumes about sixty per cent more ketchup than a typical forty-year-old, and the company realized that it needed to put ketchup in a bottle that a toddler could control.  “If you are four—and I have a four-year-old—he doesn’t get to choose what he eats for dinner, in most cases,” Keller says.  “But the one thing he can control is ketchup.  It’s the one part of the food experience that he can customize and personalize.”  As a result, Heinz came out with the so-called EZ Squirt bottle, made out of soft plastic with a conical nozzle.  In homes where the EZ Squirt is used, ketchup consumption has grown by as much as twelve per cent.

There is another lesson in that household scene, though.  Small children tend to be neophobic: once they hit two or three, they shrink from new tastes.  That makes sense, evolutionarily, because through much of human history that is the age at which children would have first begun to gather and forage for themselves, and those who strayed from what was known and trusted would never have survived.  There the three-year-old was, confronted with something strange on his plate—tuna fish, perhaps, or Brussels sprouts—and he wanted to alter his food in some way that made the unfamiliar familiar.  He wanted to subdue the contents of his plate.  And so he turned to ketchup, because, alone among the condiments on the table, ketchup could deliver sweet and sour and salty and bitter and umami, all at once.


Last February, Edgar Chambers IV, who runs the sensory-analysis center at Kansas State University, conducted a joint assessment of World’s Best and Heinz.  He has seventeen trained tasters on his staff, and they work for academia and industry, answering the often difficult question of what a given substance tastes like.  It is demanding work.  Immediately after conducting the ketchup study, Chambers dispatched a team to Bangkok to do an analysis of fruit—bananas, mangoes, rose apples, and sweet tamarind.  Others were detailed to soy and kimchi in South Korea, and Chambers’s wife led a delegation to Italy to analyze ice cream.

The ketchup tasting took place over four hours, on two consecutive mornings.  Six tasters sat around a large, round table with a lazy Susan in the middle.  In front of each panelist were two one-ounce cups, one filled with Heinz ketchup and one filled with World’s Best.  They would work along fourteen dimensions of flavor and texture, in accordance with the standard fifteen-point scale used by the food world.  The flavor components would be divided two ways: elements picked up by the tongue and elements picked up by the nose.  A very ripe peach, for example, tastes sweet but it also smells sweet—which is a very different aspect of sweetness.  Vinegar has a sour taste but also a pungency, a vapor that rises up the back of the nose and fills the mouth when you breathe out.  To aid in the rating process, the tasters surrounded themselves with little bowls of sweet and sour and salty solutions, and portions of Contadina tomato paste, Hunt’s tomato sauce, and Campbell’s tomato juice, all of which represent different concentrations of tomato-ness.

After breaking the ketchup down into its component parts, the testers assessed the critical dimension of “amplitude,” the word sensory experts use to describe flavors that are well blended and balanced, that “bloom” in the mouth. “The difference between high and low amplitude is the difference between my son and a great pianist playing ‘Ode to Joy’ on the piano,” Chambers says. “They are playing the same notes, but they blend better with the great pianist.” Pepperidge Farm shortbread cookies are considered to have high amplitude. So are Hellmann’s mayonnaise and Sara Lee poundcake. When something is high in amplitude, all its constituent elements converge into a single gestalt. You can’t isolate the elements of an iconic, high-amplitude flavor like Coca-Cola or Pepsi. But you can with one of those private-label colas that you get in the supermarket. “The thing about Coke and Pepsi is that they are absolutely gorgeous,” Judy Heylmun, a vice-president of Sensory Spectrum, Inc., in Chatham, New Jersey, says. “They have beautiful notes—all flavors are in balance. It’s very hard to do that well. Usually, when you taste a store cola it’s”—and here she made a series of pik! pik! pik! sounds—“all the notes are kind of spiky, and usually the citrus is the first thing to spike out. And then the cinnamon. Citrus and brown spice notes are top notes and very volatile, as opposed to vanilla, which is very dark and deep. A really cheap store brand will have a big, fat cinnamon note sitting on top of everything.”

Some of the cheaper ketchups are the same way.  Ketchup aficionados say that there’s a disquieting unevenness to the tomato notes in Del Monte ketchup: Tomatoes vary, in acidity and sweetness and the ratio of solids to liquid, according to the seed variety used, the time of year they are harvested, the soil in which they are grown, and the weather during the growing season.  Unless all those variables are tightly controlled, one batch of ketchup can end up too watery and another can be too strong.  Or try one of the numerous private-label brands that make up the bottom of the ketchup market and pay attention to the spice mix; you may well find yourself conscious of the clove note or overwhelmed by a hit of garlic.  Generic colas and ketchups have what Moskowitz calls a hook—a sensory attribute that you can single out, and ultimately tire of.

The tasting began with a plastic spoon. Upon consideration, it was decided that the analysis would be helped if the ketchups were tasted on French fries, so a batch of fries was cooked up and distributed around the table. Each tester, according to protocol, took the fries one by one, dipped them into the cup—all the way, right to the bottom—bit off the portion covered in ketchup, and then contemplated the evidence of their senses. For Heinz, the critical flavor components—vinegar, salt, tomato I.D. (over-all tomato-ness), sweet, and bitter—were judged to be present in roughly equal concentrations, and those elements, in turn, were judged to be well blended. The World’s Best, though, “had a completely different view, a different profile, from the Heinz,” Chambers said. It had a much stronger hit of sweet aromatics—4.0 to 2.5—and outstripped Heinz on tomato I.D. by a resounding 9 to 5.5. But there was less salt, and no discernible vinegar. “The other comment from the panel was that these elements were really not blended at all,” Chambers went on. “The World’s Best product had really low amplitude.” According to Joyce Buchholz, one of the panelists, when the group judged aftertaste, “it seemed like a certain flavor would hang over longer in the case of World’s Best—that cooked-tomatoey flavor.”

But what was Jim Wigon to do? To compete against Heinz, he had to try something dramatic, like substituting maple syrup for corn syrup, ramping up the tomato solids.  That made for an unusual and daring flavor.  World’s Best Dill ketchup on fried catfish, for instance, is a marvellous thing.  But it also meant that his ketchup wasn’t as sensorily complete as Heinz, and he was paying a heavy price in amplitude.  “Our conclusion was mainly this,” Buchholz said.  “We felt that World’s Best seemed to be more like a sauce.”  She was trying to be helpful.

There is an exception, then, to the Moskowitz rule.  Today there are thirty-six varieties of Ragú spaghetti sauce, under six rubrics—Old World Style, Chunky Garden Style, Robusto, Light, Cheese Creations, and Rich & Meaty—which means that there is very nearly an optimal spaghetti sauce for every man, woman, and child in America.  Measured against the monotony that confronted Howard Moskowitz twenty years ago, this is progress.  Happiness, in one sense, is a function of how closely our world conforms to the infinite variety of human preference.  But that makes it easy to forget that sometimes happiness can be found in having what we’ve always had and everyone else is having.  “Back in the seventies, someone else—I think it was Ragú—tried to do an ‘Italian’-style ketchup,” Moskowitz said.  “They failed miserably.”  It was a conundrum: what was true about a yellow condiment that went on hot dogs was not true about a tomato condiment that went on hamburgers, and what was true about tomato sauce when you added visible solids and put it in a jar was somehow not true about tomato sauce when you added vinegar and sugar and put it in a bottle.  Moskowitz shrugged.  “I guess ketchup is ketchup.”
Employers love personality tests. But what do they really reveal?


When Alexander (Sandy) Nininger was twenty-three, and newly commissioned as a lieutenant in the United States Army, he was sent to the South Pacific to serve with the 57th Infantry of the Philippine Scouts. It was January, 1942. The Japanese had just seized Philippine ports at Vigan, Legazpi, Lamon Bay, and Lingayen, and forced the American and Philippine forces to retreat into Bataan, a rugged peninsula on the South China Sea. There, besieged and outnumbered, the Americans set to work building a defensive line, digging foxholes and constructing dikes and clearing underbrush to provide unobstructed sight lines for rifles and machine guns. Nininger’s men were on the line’s right flank. They labored day and night. The heat and the mosquitoes were nearly unbearable.

Quiet by nature, Nininger was tall and slender, with wavy blond hair. As Franklin M. Reck recounts in “Beyond the Call of Duty,” Nininger had graduated near the top of his class at West Point, where he chaired the lecture-and-entertainment committee. He had spent many hours with a friend, discussing everything from history to the theory of relativity. He loved the theatre. In the evenings, he could often be found sitting by the fireplace in the living room of his commanding officer, sipping tea and listening to Tchaikovsky. As a boy, he once saw his father kill a hawk and had been repulsed. When he went into active service, he wrote a friend to say that he had no feelings of hate, and did not think he could ever kill anyone out of hatred. He had none of the swagger of the natural warrior. He worked hard and had a strong sense of duty.

In the second week of January, the Japanese attacked, slipping hundreds of snipers through the American lines, climbing into trees, turning the battlefield into what Reck calls a “gigantic possum hunt.” On the morning of January 12th, Nininger went to his commanding officer. He wanted, he said, to be assigned to another company, one that was in the thick of the action, so he could go hunting for Japanese snipers.

He took several grenades and ammunition belts, slung a Garand rifle over his shoulder, and grabbed a submachine gun. Starting at the point where the fighting was heaviest—near the position of the battalion’s K Company—he crawled through the jungle and shot a Japanese soldier out of a tree. He shot and killed snipers. He threw grenades into enemy positions. He was wounded in the leg, but he kept going, clearing out Japanese positions for the other members of K Company, behind him. He soon ran out of grenades and switched to his rifle, and then, when he ran out of ammunition, used only his bayonet. He was wounded a second time, but when a medic crawled toward him to help bring him back behind the lines Nininger waved him off. He saw a Japanese bunker up ahead. As he leaped out of a shell hole, he was spun around by a bullet to the shoulder, but he kept charging at the bunker, where a Japanese officer and two enlisted men were dug in. He dispatched one soldier with a double thrust of his bayonet, clubbed down the other, and bayonetted the officer. Then, with outstretched arms, he collapsed face down. For his heroism, Nininger was posthumously awarded the Medal of Honor, the first American soldier so decorated in the Second World War.


Suppose that you were a senior Army officer in the early days of the Second World War and were trying to put together a crack team of fearless and ferocious fighters. Sandy Nininger, it now appears, had exactly the right kind of personality for that assignment, but is there any way you could have known this beforehand? It clearly wouldn’t have helped to ask Nininger if he was fearless and ferocious, because he didn’t know that he was fearless and ferocious. Nor would it have worked to talk to people who spent time with him. His friend would have told you only that Nininger was quiet and thoughtful and loved the theatre, and his commanding officer would have talked about the evenings of tea and Tchaikovsky. With the exception, perhaps, of the Scarlet Pimpernel, a love of music, theatre, and long afternoons in front of a teapot is not a known predictor of great valor. What you need is some kind of sophisticated psychological instrument, capable of getting to the heart of his personality.

Over the course of the past century, psychology has been consumed with the search for this kind of magical instrument. Hermann Rorschach proposed that great meaning lay in the way that people described inkblots. The creators of the Minnesota Multiphasic Personality Inventory believed in the revelatory power of true-false items such as “I have never had any black, tarry-looking bowel movements” or “If the money were right, I would like to work for a circus or a carnival.” Today, Annie Murphy Paul tells us in her fascinating new book, “Cult of Personality,” that there are twenty-five hundred kinds of personality tests. Testing is a four-hundred-million-dollar-a-year industry. A hefty percentage of American corporations use personality tests as part of the hiring and promotion process. The tests figure in custody battles and in sentencing and parole decisions. “Yet despite their prevalence—and the importance of the matters they are called upon to decide—personality tests have received surprisingly little scrutiny,” Paul writes. We can call in the psychologists. We can give Sandy Nininger a battery of tests. But will any of it help?

One of the most popular personality tests in the world is the Myers-Briggs Type Indicator (M.B.T.I.), a psychological-assessment system based on Carl Jung’s notion that people make sense of the world through a series of psychological frames. Some people are extroverts, some are introverts. Some process information through logical thought. Some are directed by their feelings. Some make sense of the world through intuitive leaps. Others collect data through their senses. To these three categories—(I)ntroversion/(E)xtroversion, i(N)tuition/(S)ensing, (T)hinking/(F)eeling—the Myers-Briggs test adds a fourth: (J)udging/(P)erceiving. Judgers “like to live in a planned, orderly way, seeking to regulate and manage their lives,” according to an M.B.T.I. guide, whereas Perceivers “like to live in a flexible, spontaneous way, seeking to experience and understand life, rather than control it.” The M.B.T.I. asks the test-taker to answer a series of “forced-choice” questions, where one choice identifies you as belonging to one of these paired traits. The basic test takes twenty minutes, and at the end you are presented with a precise, multidimensional summary of your personality—your type might be INTJ or ESFP, or some other combination. Two and a half million Americans a year take the Myers-Briggs. Eighty-nine companies out of the Fortune 100 make use of it, for things like hiring or training sessions to help employees “understand” themselves or their colleagues. Annie Murphy Paul says that at the eminent consulting firm McKinsey, “‘associates’ often know their colleagues’ four-letter M.B.T.I. types by heart,” the way they might know their own weight or (this being McKinsey) their S.A.T. scores.
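The combinatorics here are straightforward: four binary dichotomies yield exactly sixteen possible four-letter types. As a quick illustration (mine, not anything from the M.B.T.I. itself), the full set can be enumerated in a few lines:

```python
from itertools import product

# The four Myers-Briggs dichotomies, in the order the letters appear.
dichotomies = [("I", "E"), ("N", "S"), ("T", "F"), ("J", "P")]

# One letter from each pair yields a distinct four-letter type.
types = ["".join(combo) for combo in product(*dichotomies)]

print(len(types))   # 16 possible types
print(types[:4])    # ['INTJ', 'INTP', 'INFJ', 'INFP']
```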

It is tempting to think, then, that we could figure out the Myers-Briggs type that corresponds best to commando work, and then test to see whether Sandy Nininger fits the profile. Unfortunately, the notion of personality type is not nearly as straightforward as it appears. For example, the Myers-Briggs poses a series of items grouped around the issue of whether you—the test-taker—are someone who likes to plan your day or evening beforehand or someone who prefers to be spontaneous. The idea is obviously to determine whether you belong to the Judger or Perceiver camp, but the basic question here is surprisingly hard to answer. I think I’m someone who likes to be spontaneous. On the other hand, I have embarked on too many spontaneous evenings that ended up with my friends and me standing on the sidewalk, looking at each other and wondering what to do next. So I guess I’m a spontaneous person who recognizes that life usually goes more smoothly if I plan first, or, rather, I’m a person who prefers to be spontaneous only if there’s someone around me who isn’t. Does that make me spontaneous or not? I’m not sure. I suppose it means that I’m somewhere in the middle.

This is the first problem with the Myers-Briggs. It assumes that we are either one thing or another—Intuitive or Sensing, Introverted or Extroverted. But personality doesn’t fit into neat binary categories: we fall somewhere along a continuum.

Here’s another question: Would you rather work under a boss (or a teacher) who is good-natured but often inconsistent, or sharp-tongued but always logical?

On the Myers-Briggs, this is one of a series of questions intended to establish whether you are a Thinker or a Feeler. But I’m not sure I know how to answer this one, either. I once had a good-natured boss whose inconsistency bothered me, because he exerted a great deal of day-to-day control over my work. Then I had a boss who was quite consistent and very sharp-tongued—but at that point I was in a job where day-to-day dealings with my boss were minimal, so his sharp tongue didn’t matter that much. So what do I want in a boss? As far as I can tell, the only plausible answer is: It depends. The Myers-Briggs assumes that who we are is consistent from one situation to another. But surely what we want in a boss, and how we behave toward our boss, is affected by what kind of job we have.

This is the gist of the now famous critique that the psychologist Walter Mischel has made of personality testing. One of Mischel’s studies involved watching children interact with one another at a summer camp. Aggressiveness was among the traits that he was interested in, so he watched the children in five different situations: how they behaved when approached by a peer, when teased by a peer, when praised by an adult, when punished by an adult, and when warned by an adult. He found that how aggressively a child responded in one of those situations wasn’t a good predictor of how that same child responded in another situation. Just because a boy was aggressive in the face of being teased by another boy didn’t mean that he would be aggressive in the face of being warned by an adult. On the other hand, if a child responded aggressively to being teased by a peer one day, it was a pretty good indicator that he’d respond aggressively to being teased by a peer the next day. We have a personality in the sense that we have a consistent pattern of behavior. But that pattern is complex and that personality is contingent: it represents an interaction between our internal disposition and tendencies and the situations that we find ourselves in.

It’s not surprising, then, that the Myers-Briggs has a large problem with consistency: according to some studies, more than half of those who take the test a second time end up with a different score than when they took it the first time. Since personality is continuous, not dichotomous, clearly some people who are borderline Introverts or Feelers one week slide over to Extroversion or Thinking the next week. And since personality is contingent, not stable, how we answer is affected by which circumstances are foremost in our minds when we take the test. If I happen to remember my first boss, then I come out as a Thinker. If my mind is on my second boss, I come out as a Feeler. When I took the Myers-Briggs, I scored as an INTJ. But, if the odds are that I’m going to be something else if I take the test again, what good is it?
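The flip-flopping has a simple mechanical explanation: when a continuous trait is forced through a hard cutoff, anyone near the boundary will land on different sides on different days. A toy simulation (my own illustration, not from any study the article cites) makes the point. Give each test-taker a stable underlying preference plus day-to-day noise, label anyone above zero an Extrovert, and count how often the label flips on retest:

```python
import random

random.seed(0)

def classify(score):
    # Dichotomous scoring: any score above zero gets labeled "E", else "I".
    return "E" if score > 0 else "I"

flips = 0
trials = 10_000
for _ in range(trials):
    trait = random.gauss(0, 1)           # stable underlying preference
    day1 = trait + random.gauss(0, 1)    # test score = trait + situational noise
    day2 = trait + random.gauss(0, 1)    # retest, with fresh noise
    if classify(day1) != classify(day2):
        flips += 1

print(f"label changed on retest: {flips / trials:.0%}")
```

With noise as large as the trait itself (an arbitrary choice), roughly a third of labels flip, even though nobody’s underlying disposition changed at all.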

Once, for fun, a friend and I devised our own personality test. Like the M.B.T.I., it has four dimensions. The first is Canine/Feline. In romantic relationships, are you the pursuer, who runs happily to the door, tail wagging? Or are you the pursued? The second is More/Different. Is it your intellectual style to gather and master as much information as you can or to make imaginative use of a discrete amount of information? The third is Insider/Outsider. Do you get along with your parents or do you define yourself outside your relationship with your mother and father? And, finally, there is Nibbler/Gobbler. Do you work steadily, in small increments, or do everything at once, in a big gulp? I’m quite pleased with the personality inventory we devised. It directly touches on four aspects of life and temperament—romance, cognition, family, and work style—that are only hinted at by Myers-Briggs. And it can be completed in under a minute, nineteen minutes faster than Myers-Briggs, an advantage not to be dismissed in today’s fast-paced business environment. Of course, the four traits it measures are utterly arbitrary, based on what my friend and I came up with over the course of a phone call. But then again surely all universal dichotomous typing systems are arbitrary.

Where did the Myers-Briggs come from, after all? As Paul tells us, it began with a housewife from Washington, D.C., named Katharine Briggs, at the turn of the last century. Briggs had a daughter, Isabel, an only child for whom (as one relative put it) she did “everything but breathe.” When Isabel was still in her teens, Katharine wrote a book-length manuscript about her daughter’s remarkable childhood, calling her a “genius” and “a little Shakespeare.” When Isabel went off to Swarthmore College, in 1915, the two exchanged letters nearly every day. Then, one day, Isabel brought home her college boyfriend and announced that they were to be married. His name was Clarence (Chief) Myers. He was tall and handsome and studying to be a lawyer, and he could not have been more different from the Briggs women. Katharine and Isabel were bold and imaginative and intuitive. Myers was practical and logical and detail-oriented. Katharine could not understand her future son-in-law. “When the blissful young couple returned to Swarthmore,” Paul writes, “Katharine retreated to her study, intent on ‘figuring out Chief.’” She began to read widely in psychology and philosophy. Then, in 1923, she came across the first English translation of Carl Jung’s “Psychological Types.” “This is it!” Katharine told her daughter. Paul recounts, “In a dramatic display of conviction she burned all her own research and adopted Jung’s book as her ‘Bible,’ as she gushed in a letter to the man himself. His system explained it all: Lyman [Katharine’s husband], Katharine, Isabel, and Chief were introverts; the two men were thinkers, while the women were feelers; and of course the Briggses were intuitives, while Chief was a senser.” Encouraged by her mother, Isabel—who was living in Swarthmore and writing mystery novels—devised a paper-and-pencil test to help people identify which of the Jungian categories they belonged to, and then spent the rest of her life tirelessly and brilliantly promoting her creation.

The problem, as Paul points out, is that Myers and her mother did not actually understand Jung at all. Jung didn’t believe that types were easily identifiable, and he didn’t believe that people could be permanently slotted into one category or another. “Every individual is an exception to the rule,” he wrote; to “stick labels on people at first sight,” in his view, was “nothing but a childish parlor game.” Why is a parlor game based on my desire to entertain my friends any less valid than a parlor game based on Katharine Briggs’s obsession with her son-in-law?


The problems with the Myers-Briggs suggest that we need a test that is responsive to the complexity and variability of the human personality. And that is why, not long ago, I found myself in the office of a psychologist from New Jersey named Lon Gieser. He is among the country’s leading experts on what is called the Thematic Apperception Test (T.A.T.), an assessment tool developed in the nineteen-thirties by Henry Murray, one of the most influential psychologists of the twentieth century.

I sat in a chair facing Gieser, as if I were his patient. He had in his hand two dozen or so pictures—mostly black-and-white drawings—on legal-sized cards, all of which had been chosen by Murray years before. “These pictures present a series of scenes,” Gieser said to me. “What I want you to do with each scene is tell a story with a beginning, a middle, and an end.” He handed me the first card. It was of a young boy looking at a violin. I had imagined, as Gieser was describing the test to me, that it would be hard to come up with stories to match the pictures. As I quickly discovered, though, the exercise was relatively effortless: the stories just tumbled out.

“This is a young boy,” I began. “His parents want him to take up the violin, and they’ve been encouraging him. I think he is uncertain whether he wants to be a violin player, and maybe even resents the imposition of having to play this instrument, which doesn’t seem to have any appeal for him. He’s not excited or thrilled about this. He’d rather be somewhere else. He’s just sitting there looking at it, and dreading having to fulfill this parental obligation.”

I continued in that vein for a few more minutes. Gieser gave me another card, this one of a muscular man clinging to a rope and looking off into the distance. “He’s climbing up, not climbing down,” I said, and went on:

It’s out in public. It’s some kind of big square, in Europe, and there is some kind of spectacle going on. It’s the seventeenth or eighteenth century. The King is coming by in a carriage, and this man is shimmying up, so he can see over everyone else and get a better view of the King. I don’t get the sense that he’s any kind of highborn person. I think he aspires to be more than he is. And he’s kind of getting a glimpse of the King as a way of giving himself a sense of what he could be, or what his own future could be like.

We went on like this for the better part of an hour, as I responded to twelve cards—each of people in various kinds of ambiguous situations. One picture showed a woman slumped on the ground, with some small object next to her; another showed an attractive couple in a kind of angry embrace, apparently having an argument. (I said that the fight they were having was staged, that each was simply playing a role.) As I talked, Gieser took notes. Later, he called me and gave me his impressions. “What came out was the way you deal with emotion,” he said. “Even when you recognized the emotion, you distanced yourself from it. The underlying motive is this desire to avoid conflict. The other thing is that when there are opportunities to go to someone else and work stuff out, your character is always going off alone. There is a real avoidance of emotion and dealing with other people, and everyone goes to their own corners and works things out on their own.”

How could Gieser make such a confident reading of my personality after listening to me for such a short time? I was baffled by this, at first, because I felt that I had told a series of random and idiosyncratic stories. When I listened to the tape I had made of the session, though, I saw what Gieser had picked up on: my stories were exceedingly repetitive in just the way that he had identified. The final card that Gieser gave me was blank, and he asked me to imagine my own picture and tell a story about it. For some reason, what came to mind was Andrew Wyeth’s famous painting “Christina’s World,” of a woman alone in a field, her hair being blown by the wind. She was from the city, I said, and had come home to see her family in the country: “I think she is taking a walk. She is pondering some piece of important news. She has gone off from the rest of the people to think about it.” Only later did I realize that in the actual painting the woman is not strolling through the field. She is crawling, desperately, on her hands and knees. How obvious could my aversion to strong emotion be?

The T.A.T. has a number of cards that are used to assess achievement—that is, how interested someone is in getting ahead and succeeding in life. One is the card of the man on the rope; another is the boy looking at his violin. Gieser, in listening to my stories, concluded that I was very low in achievement:

Some people say this kid is dreaming about being a great violinist, and he’s going to make it. With you, it wasn’t what he wanted to do at all. His parents were making him do it. With the rope climbing, some people do this Tarzan thing. They climb the pole and get to the top and feel this great achievement. You have him going up the rope—and why is he feeling the pleasure? Because he’s seeing the King. He’s still a nobody in the public square, looking at the King.

Now, this is a little strange. I consider myself quite ambitious. On a questionnaire, if you asked me to rank how important getting ahead and being successful was to me, I’d check the “very important” box. But Gieser is suggesting that the T.A.T. allowed him to glimpse another dimension of my personality.

This idea—that our personality can hold contradictory elements—is at the heart of “Strangers to Ourselves,” by the social psychologist Timothy D. Wilson. He is one of the discipline’s most prominent researchers, and his book is what popular psychology ought to be (and rarely is): thoughtful, beautifully written, and full of unexpected insights. Wilson’s interest is in what he calls the “adaptive unconscious” (not to be confused with the Freudian unconscious). The adaptive unconscious, in Wilson’s description, is a big computer in our brain which sits below the surface and evaluates, filters, and looks for patterns in the mountain of data that come in through our senses. That system, Wilson argues, has a personality: it has a set of patterns and responses and tendencies that are laid down by our genes and our early-childhood experiences. These patterns are stable and hard to change, and we are only dimly aware of them. On top of that, in his schema we have another personality: it’s the conscious identity that we create for ourselves with the choices we make, the stories we tell about ourselves, and the formal reasons we come up with to explain our motives and feelings. Yet this “constructed self” has no particular connection with the personality of our adaptive unconscious. In fact, they could easily be at odds. Wilson writes:

The adaptive unconscious is more likely to influence people’s uncontrolled, implicit responses, whereas the constructed self is more likely to influence people’s deliberative, explicit responses. For example, the quick, spontaneous decision of whether to argue with a co-worker is likely to be under the control of one’s nonconscious needs for power and affiliation. A more thoughtful decision about whether to invite a co-worker over for dinner is more likely to be under the control of one’s conscious, self-attributed motives.

When Gieser said that he thought I was low in achievement, then, he presumably saw in my stories an unconscious ambivalence toward success. The T.A.T., he believes, allowed him to go beyond the way I viewed myself and arrive at a reading with greater depth and nuance.

Even if he’s right, though, does this help us pick commandos? I’m not so sure. Clearly, underneath Sandy Nininger’s peaceful façade there was another Nininger capable of great bravery and ferocity, and a T.A.T. of Nininger might have given us a glimpse of that part of who he was. But let’s not forget that he volunteered for the front lines: he made a conscious decision to put himself in the heat of the action. What we really need is an understanding of how those two sides of his personality interact in critical situations. When is Sandy Nininger’s commitment to peacefulness more, or less, important than some unconscious ferocity? The other problem with the T.A.T., of course, is that it’s a subjective instrument. You could say that my story about the man climbing the rope is evidence that I’m low in achievement or you could say that it shows a strong desire for social mobility. The climber wants to look down—not up—at the King in order to get a sense “of what he could be.” You could say that my interpretation that the couple’s fighting was staged was evidence of my aversion to strong emotion. Or you could say that it was evidence of my delight in deception and role-playing. This isn’t to question Gieser’s skill or experience as a diagnostician. The T.A.T. is supposed to do no more than identify themes and problem areas, and I’m sure Gieser would be happy to put me on the couch for a year to explore those themes and see which of his initial hypotheses had any validity. But the reason employers want a magical instrument for measuring personality is that they don’t have a year to work through the ambiguities. They need an answer now.


A larger limitation of both Myers-Briggs and the T.A.T. is that they are indirect. Tests of this kind require us first to identify a personality trait that corresponds to the behavior we’re interested in, and then to figure out how to measure that trait—but by then we’re two steps removed from what we’re after. And each of those steps represents an opportunity for error and distortion. Shouldn’t we try, instead, to test directly for the behavior we’re interested in? This is the idea that lies behind what’s known as the Assessment Center, and the leading practitioner of this approach is a company called Development Dimensions International, or D.D.I.

Companies trying to evaluate job applicants send them to D.D.I.’s headquarters, outside Pittsburgh, where they spend the day role-playing as business executives. When I contacted D.D.I., I was told that I was going to be Terry Turner, the head of the robotics division of a company called Global Solutions.

I arrived early in the morning, and was led to an office. On the desk was a computer, a phone, and a tape recorder. In the corner of the room was a video camera, and on my desk was an agenda for the day. I had a long telephone conversation with a business partner from France. There were labor difficulties at an overseas plant. A new product—a robot for the home—had run into a series of technical glitches. I answered e-mails. I prepared and recorded a talk for a product-launch meeting. I gave a live interview to a local television reporter. In the afternoon, I met with another senior Global Solutions manager, and presented a strategic plan for the future of the robotics division. It was a long, demanding day at the office, and when I left, a team of D.D.I. specialists combed through copies of my e-mails, the audiotapes of my phone calls and my speech, and the videotapes of my interviews, and analyzed me across four dimensions: interpersonal skills, leadership skills, business-management skills, and personal attributes. A few weeks later, I was given my report. Some of it was positive: I was a quick learner. I had good ideas. I expressed myself well, and—I was relieved to hear—wrote clearly. But, as the assessment of my performance made plain, I was something less than top management material:

Although you did a remarkable job addressing matters, you tended to handle issues from a fairly lofty perch, pitching good ideas somewhat unilaterally while lobbing supporting rationale down to the team below. . . . Had you brought your team closer to decisions by vesting them with greater accountability, responsibility and decision-making authority, they would have undoubtedly felt more engaged, satisfied and valued. . . . In a somewhat similar vein, but on a slightly more interpersonal level, while you seemed to recognize the value of collaboration and building positive working relationships with people, you tended to take a purely businesslike approach to forging partnerships. You spoke of win/win solutions from a business perspective and your rationale for partnering and collaboration seemed to be based solely on business logic. Additionally, at times you did not respond to some of the softer, subtler cues that spoke to people’s real frustrations, more personal feelings, or true point of view.

Ouch! Of course, when the D.D.I. analysts said that I did not respond to “some of the softer, subtler cues that spoke to people’s real frustrations, more personal feelings, or true point of view,” they didn’t mean that I was an insensitive person. They meant that I was insensitive in the role of manager. The T.A.T. and M.B.T.I. aimed to make global assessments of the different aspects of my personality. My day as Terry Turner was meant to find out only what I’m like when I’m the head of the robotics division of Global Solutions. That’s an important difference. It respects the role of situation and contingency in personality. It sidesteps the difficulty of integrating my unconscious self with my constructed self by looking at the way that my various selves interact in the real world. Most important, it offers the hope that with experience and attention I can construct a more appropriate executive “self.” The Assessment Center is probably the best method that employers have for evaluating personality.

But could an Assessment Center help us identify the Sandy Niningers of the world? The center makes a behavioral prediction, and, as solid and specific as that prediction is, people are least predictable at those critical moments when prediction would be most valuable. The answer to the question of whether my Terry Turner would be a good executive is, once again: It depends. It depends on what kind of company Global Solutions is, and on what kind of respect my co-workers have for me, and on how quickly I manage to correct my shortcomings, and on all kinds of other things that cannot be anticipated. The quality of being a good manager is, in the end, as irreducible as the quality of being a good friend. We think that a friend has to be loyal and nice and interesting—and that’s certainly a good start. But people whom we don’t find loyal, nice, or interesting have friends, too, because loyalty, niceness, and interestingness are emergent traits. They arise out of the interaction of two people, and all we really mean when we say that someone is interesting or nice is that they are interesting or nice to us.

All these difficulties do not mean that we should give up on the task of trying to understand and categorize one another. We could certainly send Sandy Nininger to an Assessment Center, and find out whether, in a make-believe battle, he plays the role of commando with verve and discipline. We could talk to his friends and discover his love of music and theatre. We could find out how he responded to the picture of the man on a rope. We could sit him down and have him do the Myers-Briggs and dutifully note that he is an Introverted, Intuitive, Thinking Judger, and, for good measure, take an extra minute to run him through my own favorite personality inventory and type him as a Canine, Different, Insider Gobbler. We will know all kinds of things about him then. His personnel file will be as thick as a phone book, and we can consult our findings whenever we make decisions about his future. We just have to acknowledge that his file will tell us little about the thing we’re most interested in. For that, we have to join him in the jungles of Bataan.
How to think about prescription drugs.


Ten years ago, the multinational pharmaceutical company AstraZeneca launched what was known inside the company as the Shark Fin Project. The team for the project was composed of lawyers, marketers, and scientists, and its focus was a prescription drug known as Prilosec, a heartburn medication that, in one five-year stretch of its extraordinary history, earned AstraZeneca twenty-six billion dollars. The patent on the drug was due to expire in April of 2001. The name Shark Fin was a reference to what Prilosec sales—and AstraZeneca’s profits—would look like if nothing was done to fend off the ensuing low-priced generic competition.

The Shark Fin team drew up a list of fifty options. One idea was to devise a Prilosec 2.0—a version that worked faster or longer, or was more effective. Another idea was to combine it with a different heartburn remedy, or to change the formulation, so that it came in a liquid gel or in an extended-release form. In the end, AstraZeneca decided on a subtle piece of chemical reëngineering. Prilosec, like many drugs, is composed of two “isomers”—a left-hand and a right-hand version of the molecule. In some cases, removing one of the isomers can reduce side effects or make a drug work a little bit better, and in all cases the Patent Office recognizes something with one isomer as a separate invention from something with two. So AstraZeneca cut Prilosec in half.

AstraZeneca then had to prove that the single-isomer version of the drug was better than regular Prilosec. It chose as its target something called erosive esophagitis, a condition in which stomach acid begins to bubble up and harm the lining of the esophagus. In one study, half the patients took Prilosec, and half took Son of Prilosec. After one month, the two drugs were dead even. But after two months, to the delight of the Shark Fin team, the single-isomer version edged ahead—with a ninety-per-cent healing rate versus Prilosec’s eighty-seven per cent. The new drug was called Nexium. A patent was filed, the F.D.A. gave its blessing, and, in March of 2001, Nexium hit the pharmacy shelves priced at a hundred and twenty dollars for a month’s worth of pills. To keep cheaper generics at bay, and persuade patients and doctors to think of Nexium as state of the art, AstraZeneca spent half a billion dollars in marketing and advertising in the year following the launch. It is now one of the half-dozen top-selling drugs in America.
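Whether a three-point gap like ninety per cent versus eighty-seven is more than statistical noise depends on how many patients were enrolled, a number the article doesn’t give. A rough two-proportion z-test, using the reported healing rates but a purely hypothetical per-arm sample size, sketches how the question would be checked:

```python
from math import sqrt

# Healing rates as reported in the trial; the per-arm sample size
# below is hypothetical -- the article does not give it.
p_nexium, p_prilosec = 0.90, 0.87
n = 1000  # assumed patients per arm

# Two-proportion z-test with a pooled standard error.
p_pool = (p_nexium + p_prilosec) / 2
se = sqrt(p_pool * (1 - p_pool) * (2 / n))
z = (p_nexium - p_prilosec) / se

print(f"z = {z:.2f}")
```

With a thousand patients per arm, z comes out a little above 2, just clearing the conventional significance threshold; with a few hundred per arm, the same three-point edge would be indistinguishable from chance.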

In the political uproar over prescription-drug costs, Nexium has become a symbol of everything that is wrong with the pharmaceutical industry. The big drug companies justify the high prices they charge—and the extraordinary profits they enjoy—by arguing that the search for innovative, life-saving medicines is risky and expensive. But Nexium is little more than a repackaged version of an old medicine. And the hundred and twenty dollars a month that AstraZeneca charges isn’t to recoup the costs of risky research and development; the costs were for a series of clinical trials that told us nothing we needed to know, and a half-billion-dollar marketing campaign selling the solution to a problem we’d already solved. “The Prilosec pattern, repeated across the pharmaceutical industry, goes a long way to explain why the nation’s prescription drug bill is rising an estimated 17% a year even as general inflation is quiescent,” the Wall Street Journal concluded, in a front-page article that first revealed the Shark Fin Project.

In “The Truth About the Drug Companies: How They Deceive Us and What to Do About It” (Random House; $24.95), Marcia Angell offers an even harsher assessment. Angell used to be the editor-in-chief of The New England Journal of Medicine, which is among the most powerful positions in American medicine, and in her view drug companies are troubled and corrupt. She thinks that they charge too much, engage in deceptive research, produce inferior products, borrow their best ideas from government-funded scientists, and buy the affections of physicians with trips and gifts. To her, the story of Nexium and drugs like it is proof that the pharmaceutical industry is “now primarily a marketing machine to sell drugs of dubious benefit.”

Of course, it is also the case that Nexium is a prescription drug: every person who takes Nexium was given the drug with the approval of a doctor—and doctors are professionals who ought to know that there are many cheaper ways to treat heartburn. If the patient was coming in for the first time, the doctor could have prescribed what’s known as an H2 antagonist, such as a generic version of Tagamet (cimetidine), which works perfectly well for many people and costs only about twenty-eight dollars a month. If the patient wasn’t responding to Tagamet, the doctor could have put him on the cheaper, generic form of Prilosec, omeprazole.

The patient’s insurance company could easily have stepped in as well. It could have picked up the tab for Nexium only if the patient had first tried generic Tagamet. Or it could have discouraged Nexium use, by requiring anyone who wanted the drug to pay the difference between it and generic omeprazole. Both the physician and the insurance company, meanwhile, could have sent the patient to any drugstore in America, where he or she would have found, next to the Maalox and the Pepcid, a package of over-the-counter Prilosec. O.T.C. Prilosec is identical to prescription Prilosec and effectively equivalent to prescription Nexium, and it costs only twenty dollars a month.

Throughout the current debate over prescription-drug costs—as seniors have gone on drug-buying bus trips to Canada, as state Medicaid programs and employers have become increasingly angry over rising health-care costs, and as John Kerry has made reining in the pharmaceutical industry a central theme of his Presidential campaign—the common assumption has been that the rise of drugs like Nexium is entirely the fault of the pharmaceutical industry. Is it? If doctors routinely prescribe drugs like Nexium and insurers routinely pay for them, after all, there is surely more than one culprit in the prescription-drug mess.


The problem with the way we think about prescription drugs begins with a basic misunderstanding about drug prices. The editorial board of the Times has pronounced them much too high; Marcia Angell calls them “intolerable.” The perception that the drug industry is profiteering at the expense of the American consumer has given pharmaceutical firms a reputation on a par with that of cigarette manufacturers.

In fact, the complaint is only half true. The “intolerable” prices that Angell writes about are confined to the brand-name sector of the American drug marketplace. As the economists Patricia Danzon and Michael Furukawa recently pointed out in the journal Health Affairs, drugs still under patent protection are anywhere from twenty-five to forty per cent more expensive in the United States than in places like England, France, and Canada. Generic drugs are another story. Because there are so many companies in the United States that step in to make drugs once their patents expire, and because the price competition among those firms is so fierce, generic drugs here are among the cheapest in the world. And, according to Danzon and Furukawa’s analysis, when prescription drugs are converted to over-the-counter status no other country even comes close to having prices as low as the United States.

It is not accurate to say, then, that the United States has higher prescription-drug prices than other countries. It is accurate to say only that the United States has a different pricing system from that of other countries. Americans pay more for drugs when they first come out and less as the drugs get older, while the rest of the world pays less in the beginning and more later. Whose pricing system is cheaper? It depends. If you are taking Mevacor for your cholesterol, the 20-mg. pill is two-twenty-five in America and less than two dollars if you buy it in Canada. But generic Mevacor (lovastatin) is about a dollar a pill in Canada and as low as sixty-five cents a pill in the United States. Of course, not every drug comes in a generic version. But so many important drugs have gone off-patent recently that the rate of increase in drug spending in the United States has fallen sharply for the past four years. And so many other drugs are going to go off-patent in the next few years—including the top-selling drug in this country, the anti-cholesterol medication Lipitor—that many Americans who now pay more for their drugs than their counterparts in other Western countries could soon be paying less.

The second misconception about prices has to do with their importance in driving up over-all drug costs. In one three-year period in the mid-nineteen-nineties, for example, the amount of money spent in the United States on asthma medication increased by almost a hundred per cent. But none of that was due to an increase in the price of asthma drugs. It was largely the result of an increase in the prevalence of usage—that is, in the number of people who were given a diagnosis of the disease and who then bought drugs to treat it. Part of that hundred-per-cent increase was also the result of a change in what’s known as the intensity of drug use: in the mid-nineties, doctors were becoming far more aggressive in their attempts to prevent asthma attacks, and in those three years people with asthma went from filling about nine prescriptions a year to filling fourteen prescriptions a year. Last year, asthma costs jumped again, by twenty-six per cent, and price inflation played a role. But, once again, the big factor was prevalence. And this time around there was also a change in what’s called the therapeutic mix; in an attempt to fight the disease more effectively, physicians are switching many of their patients to newer, better, and more expensive drugs, like Merck’s Singulair.

Asthma is not an isolated case. In 2003, the amount that Americans spent on cholesterol-lowering drugs rose 23.8 per cent, and similar increases are forecast for the next few years. Why the increase? Well, the baby boomers are aging, and so are at greater risk for heart attacks. The incidence of obesity is increasing. In 2002, the National Institutes of Health lowered the thresholds for when people with high cholesterol ought to start taking drugs like Lipitor and Mevacor. In combination, those factors are having an enormous impact on both the prevalence and the intensity of cholesterol treatment. All told, prescription-drug spending in the United States rose 9.1 per cent last year. Only three of those percentage points were due to price increases, however, which means that inflation was about the same in the drug sector as it was in the over-all economy. Angell’s book and almost every other account of the prescription-drug crisis take it for granted that cost increases are evidence of how we’ve been cheated by the industry. In fact, drug expenditures are rising rapidly in the United States not so much because we’re being charged more for prescription drugs but because more people are taking more medications in more expensive combinations. It’s not price that matters; it’s volume.
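The split between price and volume in those figures can be checked with back-of-the-envelope arithmetic. Here is a minimal sketch; the 9.1-per-cent total and the roughly three-point price contribution are the figures quoted above, and the multiplicative split is an assumption about how the reported rates combine:

```python
# Decomposing drug-spending growth into price and volume effects, using the
# figures quoted above (9.1% total growth, about three percentage points of
# it from price increases). Spending is price times volume, so the two
# growth rates compound multiplicatively.
total_growth = 0.091   # total rise in U.S. prescription-drug spending
price_growth = 0.03    # portion attributable to price inflation

volume_growth = (1 + total_growth) / (1 + price_growth) - 1
print(f"Implied growth in volume and mix: {volume_growth:.1%}")
```

The implied volume-and-mix growth of roughly six per cent dwarfs the price effect, which is the article’s point: most of the increase comes from more people taking more drugs.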


This is a critical fact, and it ought to fundamentally change the way we think about the problem of drug costs. Last year, hospital expenditures rose by the same amount as drug expenditures—nine per cent. Yet almost all of that (eight percentage points) was due to inflation. That’s something to be upset about: when it comes to hospital services, we’re spending more and getting less. When it comes to drugs, though, we’re spending more and we’re getting more, and that makes the question of how we ought to respond to rising drug costs a little more ambiguous.

Take CareSource, a nonprofit group that administers Medicaid for close to four hundred thousand patients in Ohio and Michigan. CareSource runs a tightly managed pharmacy program and substitutes generics for brand-name drugs whenever possible. Nonetheless, the group’s pharmacy managers are forecasting at least ten-per-cent increases in their prescription-drug spending in the upcoming year. The voters of Ohio and Michigan can hardly be happy with that news. Then again, it’s not as if that money were being wasted.

The drug that CareSource spends more money on than any other is Singulair, Merck’s new asthma pill. That’s because Medicaid covers a lot of young, lower-income families, where asthma is epidemic and Singulair is a highly effective drug. Isn’t the point of having a Medicaid program to give the poor and the ailing a chance to live a healthy life? This year, too, the number of patients covered by CareSource who are either blind or disabled or have received a diagnosis of AIDS grew from fifteen to eighteen per cent. The treatment of AIDS is one of the pharmaceutical industry’s great success stories: drugs are now available that can turn what was once a death sentence into a manageable chronic disease. The evidence suggests, furthermore, that aggressively treating diseases like AIDS and asthma saves money in the long term by preventing far more expensive hospital visits. But there is no way to treat these diseases in the short term—and make sick people healthy—without spending more on drugs.

The economist J. D. Kleinke points out that if all physicians followed the treatment guidelines laid down by the National Institutes of Health the number of Americans being treated for hypertension would rise from twenty million to forty-three million, the use of asthma medication would increase somewhere between twofold and tenfold, and the number of Americans on one of the so-called “statin” class of cholesterol-lowering medications would increase by at least a factor of ten. By these measures, it doesn’t seem that we are spending too much on prescription drugs. If the federal government’s own medical researchers are to be believed, we’re spending too little.


The fact that volume matters more than price also means that the emphasis of the prescription-drug debate is all wrong. We’ve been focussed on the drug manufacturers. But decisions about prevalence, therapeutic mix, and intensity aren’t made by the producers of drugs. They’re made by the consumers of drugs.

This is why increasing numbers of employers have in recent years made use of what are known as Pharmacy Benefit Managers, or P.B.M.s. The P.B.M.s draw up drug formularies—lists of preferred medications. They analyze clinical-trials data to find out which drugs are the most cost-effective. In a category in which there are many equivalent options, they bargain with drug firms, offering to deliver all their business to one company in exchange for a discount. They build incentives into prescription-drug plans to encourage intelligent patient behavior. If someone wants to take a brand-name oral contraceptive and there is a generic equivalent available, for example, a P.B.M. might require her to pay the price difference. In the case of something like heartburn, the P.B.M. might require patients to follow what’s called step therapy—to try the cheaper H2 antagonists first, and only if that fails to move to a proton-pump inhibitor like omeprazole. Employers who used two or more of these strategies last year saw a decrease of almost five per cent in their pharmacy spending.

There is no mention of these successes in “The Truth About the Drug Companies.” Though much of the book is concerned with the problem of rising drug costs, P.B.M.s, the principal tool that private health-care plans use to control those costs, are dismissed in a few paragraphs. Angell’s focus, instead, is on the behavior of the pharmaceutical industry. An entire chapter, for instance, centers on the fact that the majority of drugs produced by the pharmaceutical industry are either minor variations or duplicates of drugs already on the market. Merck pioneered the statin category with Mevacor. Now we have Pfizer’s Lipitor, Bristol-Myers Squibb’s Pravachol, Novartis’s Lescol, AstraZeneca’s Crestor, and Merck’s second entrant, Zocor—all of which do pretty much the same thing. Angell thinks that these “me-too” drugs are a waste of time and money, and that the industry should devote its resources to the development of truly innovative drugs instead. In one sense, she’s right: we need a cure for Alzheimer’s much more than we need a fourth or fifth statin. Yet me-too drugs are what drive prices down. The presence of more than one drug in a given category gives P.B.M.s their leverage when it comes time to bargain with pharmaceutical companies.

With the passage of the Medicare prescription-drug-insurance legislation, late last year, the competition created by me-toos has become even more important. The bill gives responsibility for managing the drug benefit to P.B.M.s. In each therapeutic category, Medicare will set guidelines for how many and what kinds of drugs the P.B.M.s will have to include, and then the P.B.M.s will negotiate directly with drug companies for lower prices. Some analysts predict that, as long as Medicare is smart about how it defines the terms of the benefit, the discounts—particularly in crowded therapeutic categories like the statins—could be considerable. Angell appears to understand none of this. “Medicare will have to pay whatever drug companies charge,” she writes, bafflingly, “and it will have to cover expensive me-too drugs as well as more cost-effective ones.”


The core problem in bringing drug spending under control, in other words, is persuading the users and buyers and prescribers of drugs to behave rationally, and the reason we’re in the mess we’re in is that, so far, we simply haven’t done a very good job of that. “The sensitivity on the part of employers is turned up pretty high on this,” Robert Nease, who heads applied decision analysis for one of the nation’s largest P.B.M.s, the St. Louis-based Express Scripts, says. “This is not an issue about how to cut costs without affecting quality. We know how to do that. We know that generics work as well as brands. We know that there are proven step therapies. The problem is that we haven’t communicated to members that we aren’t cheating them.”

Among the costliest drug categories, for instance, is the new class of anti-inflammatory drugs known as COX-2 inhibitors. The leading brand, Celebrex, has been heavily advertised, and many patients suffering from arthritis or similar conditions ask for Celebrex when they see their physician, believing that a COX-2 inhibitor is a superior alternative to the previous generation of nonsteroidal anti-inflammatories (known as NSAIDs), such as ibuprofen. (The second-leading COX-2 inhibitor, Merck’s Vioxx, has just been taken off the market because of links to an elevated risk of heart attacks and strokes.) The clinical evidence, however, suggests that the COX-2s aren’t any better at relieving pain than the NSAIDs. It’s just that in a very select group of patients they have a lower risk of side effects like ulcers or bleeding.

“There are patients at high risk—people who have or have had an ulcer in the past, who are on blood-thinning medication, or who are of an advanced age,” Nease says. “That specific group you would likely start immediately on a COX-2.” Anyone else, he says, should really be started on a generic NSAID first. “The savings here are enormous,” he went on. “The COX-2s are between a hundred and two hundred dollars a month, and the generic NSAIDs are pennies a day—and these are drugs that people take day in, day out, for years and years.” But that kind of change can’t be implemented unilaterally: the health plan and the employer have to explain to employees that in their case a brand-new, hundred-dollar drug may not be any better than an old, one-dollar drug.

Similarly, a P.B.M. might choose to favor one of the six available statins on its formulary—say, AstraZeneca’s Crestor—because AstraZeneca gave it the biggest discount. But that requires, once again, a conversation between the health plan and the employee: the person who has happily been taking Pfizer’s anti-cholesterol drug Lipitor for several years has to be convinced that Crestor is just as good, and the plan has to be very sure that Crestor is just as good.

The same debates are going on right now in Washington, as the Medicare program decides how to implement the new drug benefit. In practice, the P.B.M.s will be required to carry a choice of drugs in every therapeutic category. But how do you define a therapeutic category? Are drugs like Nexium and Prilosec and Prevacid—all technically known as proton-pump inhibitors—in one category, and the H2 antagonists in another? Or are they all in one big category? The first approach maximizes the choices available. The second approach maximizes the bargaining power of P.B.M.s. Deciding which option to take will have a big impact on how much we end up paying for prescription drugs—and it’s a decision that has nothing to do with the drug companies. It’s up to us; it requires physicians, insurers, patients, and government officials to reach some kind of consensus about what we want from our medical system, and how much we are willing to pay for it. AstraZeneca was able to do some chemical sleight of hand, spend half a billion on advertising, and get away with the “reinvention” of its heartburn drug only because that consensus hasn’t yet been reached. For sellers to behave responsibly, buyers must first behave intelligently. And if we want to create a system where millions of working and elderly Americans don’t have to struggle to pay for prescription drugs, that’s also up to us. We could find it in our hearts to provide all Americans with adequate health insurance. It is only by the most spectacular feat of cynicism that our political system’s moral negligence has become the fault of the pharmaceutical industry.

There is a second book out this fall on the prescription-drug crisis, called “Overdosed America” (HarperCollins; $24.95), by John Abramson, who teaches at Harvard Medical School. At one point, Abramson discusses a study that he found in a medical journal concluding that the statin Pravachol lowered the risk of stroke in patients with coronary heart disease by nineteen per cent. That sounds like a significant finding, but, as Abramson shows, it isn’t. In the six years of the study, 4.5 per cent of those taking a placebo had a stroke versus 3.7 per cent of those on Pravachol. In the real world, that means that for every thousand people you put on Pravachol you prevent one stroke—which, given how much the drug costs, comes to at least $1.2 million per stroke prevented. On top of that, the study’s participants had an average age of sixty-two and most of them were men. Stroke victims, however, are more likely to be female, and, on average, much older—and the patients older than seventy in the study who were taking Pravachol had more strokes than those who were on a placebo.
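Abramson’s cost-per-stroke figure can be reconstructed with simple arithmetic. The trial’s stroke rates are the ones given above; the yearly price of Pravachol is an assumption (about sixteen hundred dollars, roughly what a brand-name statin cost at the time) chosen to show how a number like $1.2 million arises:

```python
# Sketch of the cost-per-stroke-prevented arithmetic. The stroke rates come
# from the study described above; the annual drug cost is an illustrative
# assumption, not a figure from the article.
placebo_rate = 0.045      # six-year stroke rate on placebo
pravachol_rate = 0.037    # six-year stroke rate on Pravachol
years = 6
annual_cost = 1600        # assumed yearly cost of Pravachol, in dollars

risk_reduction = placebo_rate - pravachol_rate   # 0.008 over six years
patients_per_stroke = 1 / risk_reduction         # people treated for six years per stroke averted
cost_per_stroke = patients_per_stroke * years * annual_cost
print(f"Patients treated for six years per stroke prevented: {patients_per_stroke:.0f}")
print(f"Cost per stroke prevented: ${cost_per_stroke:,.0f}")
```

Treating a hundred and twenty-five people for six years to prevent one stroke works out to seven hundred and fifty patient-years of drug costs per stroke, which is how a marginal benefit turns into a seven-figure price tag.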

Here is a classic case of the kind of thing that bedevils the American health system—dubious findings that, without careful evaluation, have the potential to drive up costs. But whose fault is it? It’s hard to blame Pravachol’s manufacturer, Bristol-Myers Squibb. The study’s principal objective was to look at Pravachol’s effectiveness in fighting heart attacks; the company was simply using that patient population to make a secondary observation about strokes. In any case, Bristol-Myers didn’t write up the results. A group of cardiologists from New Zealand and Australia did, and they hardly tried to hide Pravachol’s shortcomings in women and older people. All those data are presented in a large chart on the study’s third page. What’s wrong is the context in which the study’s findings are presented. The abstract at the beginning ought to have been rewritten. The conclusion needs a much clearer explanation of how the findings add to our understanding of stroke prevention. There is no accompanying commentary that points out the extreme cost-ineffectiveness of Pravachol as a stroke medication—and all those are faults of the medical journal’s editorial staff. In the end, the fight to keep drug spending under control is principally a matter of information, of proper communication among everyone who prescribes and pays for and ultimately uses drugs about what works and what doesn’t, and what makes economic sense and what doesn’t—and medical journals play a critical role in this process. As Abramson writes:

When I finished analyzing the article and understood that the title didn’t tell the whole story, that the findings were not statistically significant, and that Pravachol appeared to cause more strokes in the population at greater risk, it felt like a violation of the trust that doctors (including me) place in the research published in respected medical journals.

The journal in which the Pravachol article appeared, incidentally, was the New England Journal of Medicine. And its editor at the time the paper was accepted for publication? Dr. Marcia Angell. Physician, heal thyself.
The Man in the Gray Flannel Suit put the war behind him. Why can’t we?


When Tom Rath, the hero of Sloan Wilson’s 1955 novel “The Man in the Gray Flannel Suit,” comes home to Connecticut each day from his job in Manhattan, his wife mixes him a Martini. If he misses the train, he’ll duck into the bar at Grand Central Terminal and have a highball, or perhaps a Scotch. On Sunday mornings, Rath and his wife lie around drinking Martinis. Once, Rath takes a tumbler of Martinis to bed, and after finishing it drifts off to sleep. Then his wife wakes him up in the middle of the night, wanting to talk. “I will if you get me a drink,” he says. She comes back with a glass half full of ice and gin. “On Greentree Avenue cocktail parties started at seven-thirty, when the men came home from New York, and they usually continued without any dinner until three or four o’clock in the morning,” Wilson writes of the tidy neighborhood in Westport where Rath and countless other young, middle-class families live. “Somewhere around nine-thirty in the evening, Martinis and Manhattans would give way to highballs, but the formality of eating anything but hors d’oeuvres in-between had been entirely omitted.”

“The Man in the Gray Flannel Suit” is about a public-relations specialist who lives in the suburbs, works for a media company in midtown, and worries about money, job security, and educating his children. It was an enormous best-seller. Gregory Peck played Tom Rath in the Hollywood version, and today, on the eve of the fiftieth anniversary of the book’s publication, many of the themes the novel addresses seem strikingly contemporary. But in other ways “The Man in the Gray Flannel Suit” is utterly dated. The details are all wrong. Tom Rath, despite an introspective streak, is supposed to be a figure of middle-class normalcy. But by our standards he and almost everyone else in the novel look like alcoholics. The book is supposed to be an argument for the importance of family over career. But Rath’s three children—the objects of his sacrifice—are so absent from the narrative and from Rath’s consciousness that these days he’d be called an absentee father.

The most discordant note, though, is struck by the account of Rath’s experience in the Second World War. He had, it becomes clear, a terrible war. As a paratrooper in Europe, he and his close friend Hank Mahoney find themselves trapped—starving and freezing—behind enemy lines, and end up killing two German sentries in order to take their sheepskin coats. But Rath doesn’t quite kill one of them, and Mahoney urges him to finish the job:

Tom had knelt beside the sentry. He had not thought it would be difficult, but the tendons of the boy’s neck had proved tough, and suddenly the sentry had started to sit up. In a rage Tom had plunged the knife repeatedly into his throat, ramming it home with all his strength until he had almost severed the head from the body.

At the end of the war, Rath and Mahoney are transferred to the Pacific theatre for the invasion of the island of Karkow. There Rath throws a hand grenade and inadvertently kills his friend. He crawls over to Hank’s body, calling out his name. “Tom had put his hand under Mahoney’s arm and turned him over,” Wilson writes. “Mahoney’s entire chest had been torn away, leaving the naked lungs and splintered ribs exposed.”

Rath picks up the body and runs back toward his own men, dodging enemy fire. Coming upon a group of Japanese firing from a cave, he props the body up, crawls within fifteen feet of the machine gun, tosses in two grenades, and then finishes off the lone survivor with a knife. He takes Hank’s body into a bombed-out pillbox and tries to resuscitate his friend’s corpse. The medics tell him that Hank has been dead for hours. He won’t listen. In a daze, he runs with the body toward the sea.

Wilson’s description of Mahoney’s death is as brutal and moving a description of the madness of combat as can be found in postwar fiction. But what happens to Rath as a result of that day in Karkow? Not much. It does not destroy him, or leave him permanently traumatized. The part of Rath’s war experience that leaves him truly guilt-ridden is the adulterous affair that he has with a woman named Maria while waiting for redeployment orders in Rome. In the elevator of his midtown office, he runs into a friend who knew Maria, and learns that he fathered a son. He obsessively goes over and over the affair in his mind, trying to square his feeling toward Maria with his love for his wife, and his marriage is fully restored only when he confesses to the existence of his Italian child. Killing his best friend, by contrast, is something that comes up and then gets tucked away. As Rath sat on the beach, and Mahoney’s body was finally taken away, Wilson writes:

A major, coming to squat beside him, said, “Some of these goddamn sailors got heads. They went ashore and got Jap heads, and they tried to boil them in the galley to get the skulls for souvenirs.”
Tom had shrugged and said nothing. The fact that he had been too quick to throw a hand grenade and had killed Mahoney, the fact that some young sailors had wanted skulls for souvenirs, and the fact that a few hundred men had lost their lives to take the island of Karkow—all these facts were simply incomprehensible and had to be forgotten. That, he had decided, was the final truth of the war, and he had greeted it with relief, greeted it eagerly, the simple fact that it was incomprehensible and had to be forgotten. Things just happen, he had decided; they happen and they happen again, and anybody who tries to make sense out of it goes out of his mind.

You couldn’t write that scene today, at least not without irony. No soldier, according to our contemporary understanding, could ever shrug off an experience like that. Today, it is Rath’s affair with Maria that would be rationalized and explained away. He was a soldier, after all, in the midst of war. Who knew if he would ever see his wife again? Tim O’Brien’s best-selling 1994 novel “In the Lake of the Woods” has a narrative structure almost identical to that of “The Man in the Gray Flannel Suit.” O’Brien’s hero, John Wade, is present at a massacre of civilians in the Vietnamese village of Thuan Yen. He kills a fellow-soldier—a man he loved like a brother. And, just like Rath, Wade sits down at the end of the long afternoon of the worst day of his war and tries to wish the memory away:

And then later still, snagged in the sunlight, he gave himself over to forgetfulness. “Go away,” he murmured. He waited a moment, then said it again, firmly, much louder, and the little village began to vanish inside its own rosy glow. Here, he reasoned, was the most majestic trick of all. In the months and years ahead, John Wade would remember Thuan Yen the way chemical nightmares are remembered, impossible combinations, impossible events, and over time the impossibility itself would become the richest and deepest and most profound memory.
This could not have happened. Therefore it did not.
Already he felt better.

But John Wade cannot forget. That’s the point of O’Brien’s book. “The Man in the Gray Flannel Suit” ends with Tom Rath stronger, and his marriage renewed. Wade falls apart, and when he returns home to the woman he left behind he wakes up screaming in his sleep. By the end of the novel, the past has come back and destroyed Wade, and one reason for the book’s power is the inevitability of that disaster. This is the difference between a novel written in the middle of the last century and a novel written at the end of the century. Somehow in the intervening decades our understanding of what it means to experience a traumatic event has changed. We believe in John Wade now, not Tom Rath, and half a century after the publication of “The Man in the Gray Flannel Suit” it’s worth wondering whether we’ve got it right.


Several years ago, three psychologists—Bruce Rind, Robert Bauserman, and Philip Tromovitch—published an article on childhood sexual abuse in Psychological Bulletin, one of academic psychology’s most prestigious journals. It was what psychologists call a meta-analysis. The three researchers collected fifty-nine studies that had been conducted over the years on the long-term psychological effects of childhood sexual abuse (C.S.A.), and combined the data, in order to get the most definitive and statistically powerful result possible.

What most studies of sexual abuse show is that if you gauge the psychological health of young adults—typically college students—using various measures of mental health (alcohol problems, depression, anxiety, eating disorders, obsessive-compulsive symptoms, social adjustment, sleeping problems, suicidal thoughts and behavior, and so on), those with a history of childhood sexual abuse will have more problems across the board than those who weren’t abused. That makes intuitive sense. But Rind and his colleagues wanted to answer that question more specifically: how much worse off were the sexually abused? The fifty-nine studies were run through a series of sophisticated statistical tests. Studies from different times and places were put on the same scale. The results were surprising. The difference between the psychological health of those who had been abused and those who hadn’t, they found, was marginal. It was two-tenths of a standard deviation. “That’s like the difference between someone with an I.Q. of 100 and someone with an I.Q. of 97,” Rind says. “Ninety-seven is statistically different from 100. But it’s a trivial difference.”
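Rind’s I.Q. comparison is just a rescaling of the effect size: I.Q. is conventionally normed to a mean of 100 with a standard deviation of 15, so a two-tenths-of-a-standard-deviation gap comes out to three points. A minimal sketch of the conversion:

```python
# Converting the meta-analysis effect size into I.Q. points, as in Rind's
# comparison. I.Q. is conventionally scaled to a mean of 100 and a
# standard deviation of 15.
effect_size_sd = 0.2   # gap between abused and non-abused groups, in SDs
iq_mean, iq_sd = 100, 15

gap_in_iq_points = effect_size_sd * iq_sd
print(f"Equivalent gap: {gap_in_iq_points:.0f} I.Q. points "
      f"({iq_mean} versus {iq_mean - gap_in_iq_points:.0f})")
```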

Then Rind and his colleagues went one step further. A significant percentage of people who were sexually abused as children grew up in families with a host of other problems, like violence, neglect, and verbal abuse. So, to the extent that the sexually abused were damaged, what caused the damage—the sexual abuse, or the violence and neglect that so often accompanied the abuse? The data suggested that it was the latter, and, if you account for such factors, that two-tenths of a standard deviation shrinks even more. “The real gap is probably smaller than 100 and 97,” Rind says. “It might be 98, or maybe it’s 99.” The studies analyzed by Rind and his colleagues show that some victims of sexual abuse don’t even regard themselves, in retrospect, as victims. Among the male college students surveyed, for instance, Rind and his colleagues found that “37 percent viewed their C.S.A. experiences as positive at the time they occurred,” while forty-two per cent viewed them as positive when reflecting back on them.

The Rind article was published in the summer of 1998, and almost immediately it was denounced by conservative groups and lambasted in the media. Laura Schlessinger—a popular radio talk-show host known as Dr. Laura—called it “junk science.” In Washington, Representative Matt Salmon called it “the Emancipation Proclamation for pedophiles,” while Representative Tom DeLay accused it of “normalizing pedophilia.” They held a press conference at which they demanded that the American Psychological Association censure the paper. In July of 1999, a year after its publication, both the House and the Senate overwhelmingly passed resolutions condemning the analysis. Few articles in the history of academic psychology have created such a stir.

But why? It’s not as if the authors said that C.S.A. was a good thing. They just suggested that it didn’t cause as many problems as we’d thought—and the question of whether C.S.A. is morally wrong doesn’t hinge on its long-term consequences. Nor did the study say that sexual abuse was harmless. On average, the researchers concluded, the long-term damage is small. But that average is made up of cases where the damage is hard to find (like C.S.A. involving adolescent boys) and cases where the damage is quite significant (like father-daughter incest). Rind was trying to help psychologists focus on what was truly harmful. And, when it came to the effects of things like physical abuse and neglect, he and his colleagues sounded the alarm. “What happens in physical abuse is that it doesn’t happen once,” Rind says. “It happens time and time again. And, when it comes to neglect, the research shows that is the most noxious factor of all—worse than physical abuse. Why? Because it’s not practiced for one week. It’s a persistent thing. It’s a permanent feature of the parent-child relationship. These are the kinds of things that cause problems in adulthood.”

All Rind and his colleagues were saying is that sexual abuse is often something that people eventually can get over, and one of the reasons that the Rind study was so unacceptable is that we no longer think that traumatic experiences are things we can get over. We believe that the child who is molested by an uncle or a priest, on two or three furtive occasions, has to be permanently scarred by the experience—just as the soldier who accidentally kills his best friend must do more than sit down on the beach and decide that sometimes things just “happen.”

In a recent history of the Rind controversy, the psychologist Scott Lilienfeld pointed out that when we find out that something we thought was very dangerous actually isn’t that dangerous after all we usually regard what we’ve learned as good news. To him, the controversy was a paradox, and he is quite right. This attachment we have to John Wade over Tom Rath is not merely a preference for one kind of war narrative over another. It is a shift in perception so profound that the United States Congress could be presented with evidence of the unexpected strength and resilience of the human spirit and reject it without a single dissenting vote.


In “The Man in the Gray Flannel Suit,” Tom Rath works for Ralph Hopkins, who is the president of the United Broadcasting Company. Hopkins has decided that he wants to play a civic role in the issue of mental health, and Rath’s job is to write his speeches and handle public relations connected to the project. “It all started when a group of doctors called on me a few months ago,” Hopkins tells Rath, when he hires him for the job. “They apparently felt that there is too little public understanding of the whole question of mental illness, and that a campaign like the fight against cancer or polio is needed.” Again and again, in the novel, the topic of mental health surfaces. Rath’s father, we learn, suffered a nervous breakdown after serving in the trenches of the First World War, and died in what may well have been a suicide. His grandmother, whose death sets the book’s plot in motion, wanders in and out of lucidity at the end of her life. Hopkins, in a hilarious scene, recalls his unsatisfactory experience with a psychiatrist. To Wilson’s readers, this preoccupation would not have seemed out of place. In 1955, the population of New York State’s twenty-seven psychiatric hospitals was nearly ninety-four thousand. (Today, largely because of anti-psychotic drugs, it is less than six thousand.) It was impossible to drive any distance from Manhattan and not be confronted with ominous, hulking reminders of psychiatric distress: the enormous complex across the Triborough Bridge, on Wards Island; Sagamore and Pilgrim Hospitals, on Long Island; Creedmoor, in Queens. Mental health mattered to the reader of the nineteen-fifties, in a way that, say, AIDS mattered in the novels of the late nineteen-eighties.

But Wilson draws a very clear line between the struggles of the Raths and the plight of those suffering from actual mental illness. At one point, for example, Rath’s wife, Betsy, wonders why nothing is fun anymore:

It probably would take a psychiatrist to answer that. Maybe Tom and I both ought to visit one, she thought. What’s the matter? the psychiatrist would say, and I would reply, I don’t know—nothing seems to be much fun any more. All of a sudden the music stopped, and it didn’t start again. Is that strange, or does it happen to everyone about the time when youth starts to go?
The psychiatrist would have an explanation, Betsy thought, but I don’t want to hear it. People rely too much on explanations these days, and not enough on courage and action. . . . Tom has a good job, and he’ll get his enthusiasm back, be a success at it. Everything’s going to be fine. It does no good to wallow in night thoughts. In God we trust, and that’s that.

This is not denial, much as it may sound like it. Betsy Rath is not saying that her husband doesn’t have problems. She’s just saying that, in all likelihood, Tom will get over his problems. This is precisely the idea that lies at the heart of the Rind meta-analysis. Once you’ve separated out the small number of seriously damaged people—the victims of father-daughter incest, or of prolonged neglect and physical abuse—the balance of C.S.A. survivors are pretty much going to be fine. The same is true, it turns out, of other kinds of trauma. The Columbia University psychologist George Bonanno, for instance, followed a large number of men and women who had recently lost a spouse. “In the bereavement area, the assumption has been that when people lose a loved one there is a kind of unitary process that everybody must go through,” Bonanno says. “That process has been called grief work. The grief must be processed. It must be examined. It must be fully understood, then finished. It was the same kind of assumption that dominated the trauma world. The idea was that everybody exposed to these kinds of events will have to go through the same kind of process if they are to recover. And if you don’t do this, if you have somehow inhibited or buried the experience, the assumption was that you would pay in the long run.”

Instead, Bonanno found a wide range of responses. Some people went through a long and painful grieving process; others a period of debilitating depression. But by far the most common response was resilience: the majority of those who had just suffered from one of the most painful experiences of their lives never lapsed into serious depression, experienced a relatively brief period of grief symptoms, and soon returned to normal functioning. These people were not necessarily the hardiest or the healthiest. They just managed, by one means or another, to muddle through.

“Most people just plain cope well,” Bonanno says. “The vast majority of people get over traumatic events, and get over them remarkably well. Only a small subset—five to fifteen per cent—struggle in a way that says they need help.”

What these patterns of resilience suggest is that human beings are naturally endowed with a kind of psychological immune system, which keeps us in balance and overcomes wild swings to either end of the emotional spectrum. Most of us aren’t resilient just in the wake of bad experiences, after all. We’re also resilient in the wake of wonderful experiences; the joy of a really good meal, or winning a tennis match, or getting praised by a boss doesn’t last that long, either. “One function of emotions is to signal to people quickly which things in their environments are dangerous and should be avoided and which are positive and should be approached,” Timothy Wilson, a psychologist at the University of Virginia, has said. “People have very fast emotional reactions to events that serve as signals, informing them what to do. A problem with prolonged emotional reactions to past events is that it might be more difficult for these signals to get through. If people are still in a state of bliss over yesterday’s success, today’s dangers and hazards might be more difficult to recognize.” (Wilson, incidentally, is Sloan Wilson’s nephew.)

Wilson and his longtime collaborator, Daniel T. Gilbert, argue that a distinctive feature of this resilience is that people don’t realize that they possess it. People are bad at forecasting their emotions—at appreciating how well, under most circumstances, they will recover. Not long ago, for instance, Gilbert, Wilson, and two other researchers—Carey Morewedge and Jane Risen—asked passengers at a subway station in Cambridge, Massachusetts, how much regret they thought they would feel if they arrived on the platform just as a train was pulling away. Then they approached passengers who really had arrived just as their train was leaving, and asked them how they felt. They found that the predictions of how bad it would feel to have just barely missed a train were on average greater than reports of how it actually felt to watch the train pull away. We suffer from what Wilson and Gilbert call an impact bias: we always assume that our emotional states will last much longer than they do. We forget that other experiences will compete for our attention and emotions. We forget that our psychological immune system will kick in and take away the sting of adversity. “When I talk about our research, I say to people, ‘I’m not telling you that bad things don’t hurt,'” Gilbert says. “Of course they do. It would be perverse to say that having a child or a spouse die is not a big deal. All I’m saying is that the reality doesn’t meet the expectation.”

This is the difference between our own era and the one of half a century ago—between “The Man in the Gray Flannel Suit” and “In the Lake of the Woods.” Sloan Wilson’s book came from a time and a culture that had the confidence and wisdom to understand this truth. “I love you more than I can tell,” Rath says to his wife at the end of the novel. It’s an ending that no one would write today, but only because we have become blind to the fact that the past—in all but the worst of cases—sooner or later fades away. Betsy turns back to her husband:

“I want you to be able to talk to me about the war. It might help us to understand each other. Did you really kill seventeen men?”
“Do you want to talk about it now?”
“No. It’s not that I want to and can’t—it’s just that I’d rather think about the future. About getting a new car and driving up to Vermont with you tomorrow.”
“That will be fun. It’s not an insane world. At least, our part of it doesn’t have to be.”

Should a charge of plagiarism ruin your life?


One day this spring, a psychiatrist named Dorothy Lewis got a call from her friend Betty, who works in New York City. Betty had just seen a Broadway play called “Frozen,” written by the British playwright Bryony Lavery. “She said, ‘Somehow it reminded me of you. You really ought to see it,'” Lewis recalled. Lewis asked Betty what the play was about, and Betty said that one of the characters was a psychiatrist who studied serial killers. “And I told her, ‘I need to see that as much as I need to go to the moon.'”

Lewis has studied serial killers for the past twenty-five years. With her collaborator, the neurologist Jonathan Pincus, she has published a great many research papers, showing that serial killers tend to suffer from predictable patterns of psychological, physical, and neurological dysfunction: that they were almost all the victims of harrowing physical and sexual abuse as children, and that almost all of them have suffered some kind of brain injury or mental illness. In 1998, she published a memoir of her life and work entitled “Guilty by Reason of Insanity.” She was the last person to visit Ted Bundy before he went to the electric chair. Few people in the world have spent as much time thinking about serial killers as Dorothy Lewis, so when her friend Betty told her that she needed to see “Frozen” it struck her as a busman’s holiday.

But the calls kept coming. “Frozen” was winning raves on Broadway, and it had been nominated for a Tony. Whenever someone who knew Dorothy Lewis saw it, they would tell her that she really ought to see it, too. In June, she got a call from a woman at the theatre where “Frozen” was playing. “She said she’d heard that I work in this field, and that I see murderers, and she was wondering if I would do a talk-back after the show,” Lewis said. “I had done that once before, and it was a delight, so I said sure. And I said, would you please send me the script, because I wanted to read the play.”

The script came, and Lewis sat down to read it. Early in the play, something caught her eye, a phrase: “it was one of those days.” One of the murderers Lewis had written about in her book had used that same expression. But she thought it was just a coincidence. “Then, there’s a scene of a woman on an airplane, typing away to her friend. Her name is Agnetha Gottmundsdottir. I read that she’s writing to her colleague, a neurologist called David Nabkus. And with that I realized that more was going on, and I realized as well why all these people had been telling me to see the play.”

Lewis began underlining line after line. She had worked at New York University School of Medicine. The psychiatrist in “Frozen” worked at New York School of Medicine. Lewis and Pincus did a study of brain injuries among fifteen death-row inmates. Gottmundsdottir and Nabkus did a study of brain injuries among fifteen death-row inmates. Once, while Lewis was examining the serial killer Joseph Franklin, he sniffed her, in a grotesque, sexual way. Gottmundsdottir is sniffed by the play’s serial killer, Ralph. Once, while Lewis was examining Ted Bundy, she kissed him on the cheek. Gottmundsdottir, in some productions of “Frozen,” kisses Ralph. “The whole thing was right there,” Lewis went on. “I was sitting at home reading the play, and I realized that it was I. I felt robbed and violated in some peculiar way. It was as if someone had stolen—I don’t believe in the soul, but, if there was such a thing, it was as if someone had stolen my essence.”

Lewis never did the talk-back. She hired a lawyer. And she came down from New Haven to see “Frozen.” “In my book,” she said, “I talk about where I rush out of the house with my black carry-on, and I have two black pocketbooks, and the play opens with her”—Agnetha—”with one big black bag and a carry-on, rushing out to do a lecture.” Lewis had written about biting her sister on the stomach as a child. Onstage, Agnetha fantasized out loud about attacking a stewardess on an airplane and “biting out her throat.” After the play was over, the cast came onstage and took questions from the audience. “Somebody in the audience said, ‘Where did Bryony Lavery get the idea for the psychiatrist?'” Lewis recounted. “And one of the cast members, the male lead, said, ‘Oh, she said that she read it in an English medical magazine.'” Lewis is a tiny woman, with enormous, childlike eyes, and they were wide open now with the memory. “I wouldn’t have cared if she did a play about a shrink who’s interested in the frontal lobe and the limbic system. That’s out there to do. I see things week after week on television, on ‘Law & Order’ or ‘C.S.I.,’ and I see that they are using material that Jonathan and I brought to light. And it’s wonderful. That would have been acceptable. But she did more than that. She took things about my own life, and that is the part that made me feel violated.”

At the request of her lawyer, Lewis sat down and made up a chart detailing what she felt were the questionable parts of Lavery’s play. The chart was fifteen pages long. The first part was devoted to thematic similarities between “Frozen” and Lewis’s book “Guilty by Reason of Insanity.” The other, more damning section listed twelve instances of almost verbatim similarities—totalling perhaps six hundred and seventy-five words—between passages from “Frozen” and passages from a 1997 magazine profile of Lewis. The profile was called “Damaged.” It appeared in the February 24, 1997, issue of The New Yorker. It was written by me.


Words belong to the person who wrote them. There are few simpler ethical notions than this one, particularly as society directs more and more energy and resources toward the creation of intellectual property. In the past thirty years, copyright laws have been strengthened. Courts have become more willing to grant intellectual-property protections. Fighting piracy has become an obsession with Hollywood and the recording industry, and, in the worlds of academia and publishing, plagiarism has gone from being bad literary manners to something much closer to a crime. When, two years ago, Doris Kearns Goodwin was found to have lifted passages from several other historians, she was asked to resign from the board of the Pulitzer Prize committee. And why not? If she had robbed a bank, she would have been fired the next day.

I’d worked on “Damaged” through the fall of 1996. I would visit Dorothy Lewis in her office at Bellevue Hospital, and watch the videotapes of her interviews with serial killers. At one point, I met up with her in Missouri. Lewis was testifying at the trial of Joseph Franklin, who claims responsibility for shooting, among others, the civil-rights leader Vernon Jordan and the pornographer Larry Flynt. In the trial, a videotape was shown of an interview that Franklin once gave to a television station. He was asked whether he felt any remorse. I wrote:

“I can’t say that I do,” he said. He paused again, then added, “The only thing I’m sorry about is that it’s not legal.”
“What’s not legal?”
Franklin answered as if he’d been asked the time of day: “Killing Jews.”

That exchange, almost to the word, was reproduced in “Frozen.”

Lewis, the article continued, didn’t feel that Franklin was fully responsible for his actions. She viewed him as a victim of neurological dysfunction and childhood physical abuse. “The difference between a crime of evil and a crime of illness,” I wrote, “is the difference between a sin and a symptom.” That line was in “Frozen,” too—not once but twice. I faxed Bryony Lavery a letter:

I am happy to be the source of inspiration for other writers, and had you asked for my permission to quote—even liberally—from my piece, I would have been delighted to oblige. But to lift material, without my approval, is theft.

Almost as soon as I’d sent the letter, though, I began to have second thoughts. The truth was that, although I said I’d been robbed, I didn’t feel that way. Nor did I feel particularly angry. One of the first things I had said to a friend after hearing about the echoes of my article in “Frozen” was that this was the only way I was ever going to get to Broadway—and I was only half joking. On some level, I considered Lavery’s borrowing to be a compliment. A savvier writer would have changed all those references to Lewis, and rewritten the quotes from me, so that their origin was no longer recognizable. But how would I have been better off if Lavery had disguised the source of her inspiration?

Dorothy Lewis, for her part, was understandably upset. She was considering a lawsuit. And, to increase her odds of success, she asked me to assign her the copyright to my article. I agreed, but then I changed my mind. Lewis had told me that she “wanted her life back.” Yet in order to get her life back, it appeared, she first had to acquire it from me. That seemed a little strange.

Then I got a copy of the script for “Frozen.” I found it breathtaking. I realize that this isn’t supposed to be a relevant consideration. And yet it was: instead of feeling that my words had been taken from me, I felt that they had become part of some grander cause. In late September, the story broke. The Times, the Observer in England, and the Associated Press all ran stories about Lavery’s alleged plagiarism, and the articles were picked up by newspapers around the world. Bryony Lavery had seen one of my articles, responded to what she read, and used it as she constructed a work of art. And now her reputation was in tatters. Something about that didn’t seem right.


In 1992, the Beastie Boys released a song called “Pass the Mic,” which begins with a six-second sample taken from the 1976 composition “Choir,” by the jazz flutist James Newton. The sample was an exercise in what is called multiphonics, where the flutist “overblows” into the instrument while simultaneously singing in a falsetto. In the case of “Choir,” Newton played a C on the flute, then sang C, D-flat, C—and the distortion of the overblown C, combined with his vocalizing, created a surprisingly complex and haunting sound. In “Pass the Mic,” the Beastie Boys repeated the Newton sample more than forty times. The effect was riveting.

In the world of music, copyrighted works fall into two categories—the recorded performance and the composition underlying that performance. If you write a rap song, and want to sample the chorus from Billy Joel’s “Piano Man,” you first have to get permission from the record label to use the “Piano Man” recording, and then get permission from Billy Joel (or whoever owns his music) to use the underlying composition. In the case of “Pass the Mic,” the Beastie Boys got the first kind of permission—the rights to use the recording of “Choir”—but not the second. Newton sued, and he lost—and the reason he lost serves as a useful introduction to how to think about intellectual property.

At issue in the case wasn’t the distinctiveness of Newton’s performance. The Beastie Boys, everyone agreed, had properly licensed Newton’s performance when they paid the copyright recording fee. And there was no question about whether they had copied the underlying music to the sample. At issue was simply whether the Beastie Boys were required to ask for that secondary permission: was the composition underneath those six seconds so distinctive and original that Newton could be said to own it? The court said that it wasn’t.

The chief expert witness for the Beastie Boys in the “Choir” case was Lawrence Ferrara, who is a professor of music at New York University, and when I asked him to explain the court’s ruling he walked over to the piano in the corner of his office and played those three notes: C, D-flat, C. “That’s it!” he shouted. “There ain’t nothing else! That’s what was used. You know what this is? It’s no more than a mordent, a turn. It’s been done thousands upon thousands of times. No one can say they own that.”

Ferrara then played the most famous four-note sequence in classical music, the opening of Beethoven’s Fifth: G, G, G, E-flat. This was unmistakably Beethoven. But was it original? “That’s a harder case,” Ferrara said. “Actually, though, other composers wrote that. Beethoven himself wrote that in a piano sonata, and you can find figures like that in composers who predate Beethoven. It’s one thing if you’re talking about da-da-da dummm, da-da-da dummm—those notes, with those durations. But just the four pitches, G, G, G, E-flat? Nobody owns those.”

Ferrara once served as an expert witness for Andrew Lloyd Webber, who was being sued by Ray Repp, a composer of Catholic folk music. Repp said that the opening few bars of Lloyd Webber’s 1984 “Phantom Song,” from “The Phantom of the Opera,” bore an overwhelming resemblance to his composition “Till You,” written six years earlier, in 1978. As Ferrara told the story, he sat down at the piano again and played the beginning of both songs, one after the other; sure enough, they sounded strikingly similar. “Here’s Lloyd Webber,” he said, calling out each note as he played it. “Here’s Repp. Same sequence. The only difference is that Andrew writes a perfect fourth and Repp writes a sixth.”

But Ferrara wasn’t quite finished. “I said, let me have everything Andrew Lloyd Webber wrote prior to 1978—‘Jesus Christ Superstar,’ ‘Joseph,’ ‘Evita.'” He combed through every score, and in “Joseph and the Amazing Technicolor Dreamcoat” he found what he was looking for. “It’s the song ‘Benjamin Calypso.'” Ferrara started playing it. It was immediately familiar. “It’s the first phrase of ‘Phantom Song.’ It’s even using the same notes. But wait—it gets better. Here’s ‘Close Every Door,’ from a 1969 concert performance of ‘Joseph.'” Ferrara is a dapper, animated man, with a thin, well-manicured mustache, and thinking about the Lloyd Webber case was almost enough to make him jump up and down. He began to play again. It was the second phrase of “Phantom Song.” “The first half of ‘Phantom’ is in ‘Benjamin Calypso.’ The second half is in ‘Close Every Door.’ They are identical. On the button. In the case of the first theme, in fact, ‘Benjamin Calypso’ is closer to the first half of the theme at issue than the plaintiff’s song. Lloyd Webber writes something in 1984, and he borrows from himself.”

In the “Choir” case, the Beastie Boys’ copying didn’t amount to theft because it was too trivial. In the “Phantom” case, what Lloyd Webber was alleged to have copied didn’t amount to theft because the material in question wasn’t original to his accuser. Under copyright law, what matters is not that you copied someone else’s work. What matters is what you copied, and how much you copied. Intellectual-property doctrine isn’t a straightforward application of the ethical principle “Thou shalt not steal.” At its core is the notion that there are certain situations where you can steal. The protections of copyright, for instance, are time-limited; once something passes into the public domain, anyone can copy it without restriction. Or suppose that you invented a cure for breast cancer in your basement lab. Any patent you received would protect your intellectual property for twenty years, but after that anyone could take your invention. You get an initial monopoly on your creation because we want to provide economic incentives for people to invent things like cancer drugs. But everyone gets to steal your breast-cancer cure—after a decent interval—because it is also in society’s interest to let as many people as possible copy your invention; only then can others learn from it, and build on it, and come up with better and cheaper alternatives. This balance between the protecting and the limiting of intellectual property is, in fact, enshrined in the Constitution: “Congress shall have the power to promote the Progress of Science and useful Arts, by securing for limited”—note that specification, limited—”Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”


So is it true that words belong to the person who wrote them, just as other kinds of property belong to their owners? Actually, no. As the Stanford law professor Lawrence Lessig argues in his new book “Free Culture”:

In ordinary language, to call a copyright a “property” right is a bit misleading, for the property of copyright is an odd kind of property. . . . I understand what I am taking when I take the picnic table you put in your backyard. I am taking a thing, the picnic table, and after I take it, you don’t have it. But what am I taking when I take the good idea you had to put a picnic table in the backyard—by, for example, going to Sears, buying a table, and putting it in my backyard? What is the thing that I am taking then?
The point is not just about the thingness of picnic tables versus ideas, though that is an important difference. The point instead is that in the ordinary case—indeed, in practically every case except for a narrow range of exceptions—ideas released to the world are free. I don’t take anything from you when I copy the way you dress—though I might seem weird if I do it every day. . . . Instead, as Thomas Jefferson said (and this is especially true when I copy the way someone dresses), “He who receives an idea from me, receives instruction himself without lessening mine; as he who lights his taper at mine, receives light without darkening me.”

Lessig argues that, when it comes to drawing this line between private interests and public interests in intellectual property, the courts and Congress have, in recent years, swung much too far in the direction of private interests. He writes, for instance, about the fight by some developing countries to get access to inexpensive versions of Western drugs through what is called “parallel importation”—buying drugs from another developing country that has been licensed to produce patented medicines. The move would save countless lives. But it has been opposed by the United States not on the ground that it would cut into the profits of Western pharmaceutical companies (they don’t sell that many patented drugs in developing countries anyway) but on the ground that it violates the sanctity of intellectual property. “We as a culture have lost this sense of balance,” Lessig writes. “A certain property fundamentalism, having no connection to our tradition, now reigns in this culture.”

Even what Lessig decries as intellectual-property extremism, however, acknowledges that intellectual property has its limits. The United States didn’t say that developing countries could never get access to cheap versions of American drugs. It said only that they would have to wait until the patents on those drugs expired. The arguments that Lessig has with the hard-core proponents of intellectual property are almost all arguments about where and when the line should be drawn between the right to copy and the right to protection from copying, not whether a line should be drawn.

But plagiarism is different, and that’s what’s so strange about it. The ethical rules that govern when it’s acceptable for one writer to copy another are even more extreme than the most extreme position of the intellectual-property crowd: when it comes to literature, we have somehow decided that copying is never acceptable. Not long ago, the Harvard law professor Laurence Tribe was accused of lifting material from the historian Henry Abraham for his 1985 book, “God Save This Honorable Court.” What did the charge amount to? In an exposé that appeared in the conservative publication The Weekly Standard, Joseph Bottum produced a number of examples of close paraphrasing, but his smoking gun was this one borrowed sentence: “Taft publicly pronounced Pitney to be a ‘weak member’ of the Court to whom he could not assign cases.” That’s it. Nineteen words.

Not long after I learned about “Frozen,” I went to see a friend of mine who works in the music industry. We sat in his living room on the Upper East Side, facing each other in easy chairs, as he worked his way through a mountain of CDs. He played “Angel,” by the reggae singer Shaggy, and then “The Joker,” by the Steve Miller Band, and told me to listen very carefully to the similarity in bass lines. He played Led Zeppelin’s “Whole Lotta Love” and then Muddy Waters’s “You Need Love,” to show the extent to which Led Zeppelin had mined the blues for inspiration. He played “Twice My Age,” by Shabba Ranks and Krystal, and then the saccharine seventies pop standard “Seasons in the Sun,” until I could hear the echoes of the second song in the first. He played “Last Christmas,” by Wham!, followed by Barry Manilow’s “Can’t Smile Without You” to explain why Manilow might have been startled when he first heard that song, and then “Joanna,” by Kool and the Gang, because, in a different way, “Last Christmas” was an homage to Kool and the Gang as well. “That sound you hear in Nirvana,” my friend said at one point, “that soft and then loud, kind of exploding thing, a lot of that was inspired by the Pixies. Yet Kurt Cobain”—Nirvana’s lead singer and songwriter—“was such a genius that he managed to make it his own. And ‘Smells Like Teen Spirit’?”—here he was referring to perhaps the best-known Nirvana song. “That’s Boston’s ‘More Than a Feeling.’” He began to hum the riff of the Boston hit, and said, “The first time I heard ‘Teen Spirit,’ I said, ‘That guitar lick is from “More Than a Feeling.”’ But it was different—it was urgent and brilliant and new.”

He played another CD. It was Rod Stewart’s “Do Ya Think I’m Sexy,” a huge hit from the nineteen-seventies. The chorus has a distinctive, catchy hook—the kind of tune that millions of Americans probably hummed in the shower the year it came out. Then he put on “Taj Mahal,” by the Brazilian artist Jorge Ben Jor, which was recorded several years before the Rod Stewart song. In his twenties, my friend was a d.j. at various downtown clubs, and at some point he’d become interested in world music. “I caught it back then,” he said. A small, sly smile spread across his face. The opening bars of “Taj Mahal” were very South American, a world away from what we had just listened to. And then I heard it. It was so obvious and unambiguous that I laughed out loud; virtually note for note, it was the hook from “Do Ya Think I’m Sexy.” It was possible that Rod Stewart had independently come up with that riff, because resemblance is not proof of influence. It was also possible that he’d been in Brazil, listened to some local music, and liked what he heard.

My friend had hundreds of these examples. We could have sat in his living room playing at musical genealogy for hours. Did the examples upset him? Of course not, because he knew enough about music to know that these patterns of influence—cribbing, tweaking, transforming—were at the very heart of the creative process. True, copying could go too far. There were times when one artist was simply replicating the work of another, and to let that pass inhibited true creativity. But it was equally dangerous to be overly vigilant in policing creative expression, because if Led Zeppelin hadn’t been free to mine the blues for inspiration we wouldn’t have got “Whole Lotta Love,” and if Kurt Cobain couldn’t listen to “More Than a Feeling” and pick out and transform the part he really liked we wouldn’t have “Smells Like Teen Spirit”—and, in the evolution of rock, “Smells Like Teen Spirit” was a real step forward from “More Than a Feeling.” A successful music executive has to understand the distinction between borrowing that is transformative and borrowing that is merely derivative, and that distinction, I realized, was what was missing from the discussion of Bryony Lavery’s borrowings. Yes, she had copied my work. But no one was asking why she had copied it, or what she had copied, or whether her copying served some larger purpose.


Bryony Lavery came to see me in early October. It was a beautiful Saturday afternoon, and we met at my apartment. She is in her fifties, with short tousled blond hair and pale-blue eyes, and was wearing jeans and a loose green shirt and clogs. There was something rugged and raw about her. In the Times the previous day, the theatre critic Ben Brantley had not been kind to her new play, “Last Easter.” This was supposed to be her moment of triumph. “Frozen” had been nominated for a Tony. “Last Easter” had opened Off Broadway. And now? She sat down heavily at my kitchen table. “I’ve had the absolute gamut of emotions,” she said, playing nervously with her hands as she spoke, as if she needed a cigarette. “I think when one’s working, one works between absolute confidence and absolute doubt, and I got a huge dollop of each. I was terribly confident that I could write well after ‘Frozen,’ and then this opened a chasm of doubt.” She looked up at me. “I’m terribly sorry,” she said.

Lavery began to explain: “What happens when I write is that I find that I’m somehow zoning on a number of things. I find that I’ve cut things out of newspapers because the story or something in them is interesting to me, and seems to me to have a place onstage. Then it starts coagulating. It’s like the soup starts thickening. And then a story, which is also a structure, starts emerging. I’d been reading thrillers like ‘The Silence of the Lambs,’ about fiendishly clever serial killers. I’d also seen a documentary of the victims of the Yorkshire killers, Myra Hindley and Ian Brady, who were called the Moors Murderers. They spirited away several children. It seemed to me that killing somehow wasn’t fiendishly clever. It was the opposite of clever. It was as banal and stupid and destructive as it could be. There are these interviews with the survivors, and what struck me was that they appeared to be frozen in time. And one of them said, ‘If that man was out now, I’m a forgiving man but I couldn’t forgive him. I’d kill him.’ That’s in ‘Frozen.’ I was thinking about that. Then my mother went into hospital for a very simple operation, and the surgeon punctured her womb, and therefore her intestine, and she got peritonitis and died.”

When Lavery started talking about her mother, she stopped, and had to collect herself. “She was seventy-four, and what occurred to me is that I utterly forgave him. I thought it was an honest mistake. I’m very sorry it happened to my mother, but it’s an honest mistake.” Lavery’s feelings confused her, though, because she could think of people in her own life whom she had held grudges against for years, for the most trivial of reasons. “In a lot of ways, ‘Frozen’ was an attempt to understand the nature of forgiveness,” she said.

Lavery settled, in the end, on a play with three characters. The first is a serial killer named Ralph, who kidnaps and murders a young girl. The second is the murdered girl’s mother, Nancy. The third is a psychiatrist from New York, Agnetha, who goes to England to examine Ralph. In the course of the play, the three lives slowly intersect—and the characters gradually change and become “unfrozen” as they come to terms with the idea of forgiveness. For the character of Ralph, Lavery says that she drew on a book about a serial killer titled “The Murder of Childhood,” by Ray Wyre and Tim Tate. For the character of Nancy, she drew on an article written in the Guardian by a woman named Marian Partington, whose sister had been murdered by the serial killers Frederick and Rosemary West. And, for the character of Agnetha, Lavery drew on a reprint of my article that she had read in a British publication. “I wanted a scientist who would understand,” Lavery said—a scientist who could explain how it was possible to forgive a man who had killed your daughter, who could explain that a serial killing was not a crime of evil but a crime of illness. “I wanted it to be accurate,” she added.

So why didn’t she credit me and Lewis? How could she have been so meticulous about accuracy but not about attribution? Lavery didn’t have an answer. “I thought it was O.K. to use it,” she said with an embarrassed shrug. “It never occurred to me to ask you. I thought it was news.”

She was aware of how hopelessly inadequate that sounded, and when she went on to say that my article had been in a big folder of source material that she had used in the writing of the play, and that the folder had got lost during the play’s initial run, in Birmingham, she was aware of how inadequate that sounded, too.

But then Lavery began to talk about Marian Partington, her other important inspiration, and her story became more complicated. While she was writing “Frozen,” Lavery said, she wrote to Partington to inform her of how much she was relying on Partington’s experiences. And when “Frozen” opened in London she and Partington met and talked. In reading through articles on Lavery in the British press, I found this, from the Guardian two years ago, long before the accusations of plagiarism surfaced:

Lavery is aware of the debt she owes to Partington’s writing and is eager to acknowledge it.
“I always mention it, because I am aware of the enormous debt that I owe to the generosity of Marian Partington’s piece . . . . You have to be hugely careful when writing something like this, because it touches on people’s shattered lives and you wouldn’t want them to come across it unawares.”

Lavery wasn’t indifferent to other people’s intellectual property, then; she was just indifferent to my intellectual property. That’s because, in her eyes, what she took from me was different. It was, as she put it, “news.” She copied my description of Dorothy Lewis’s collaborator, Jonathan Pincus, conducting a neurological examination. She copied the description of the disruptive neurological effects of prolonged periods of high stress. She copied my transcription of the television interview with Franklin. She reproduced a quote that I had taken from a study of abused children, and she copied a quotation from Lewis on the nature of evil. She didn’t copy my musings, or conclusions, or structure. She lifted sentences like “It is the function of the cortex—and, in particular, those parts of the cortex beneath the forehead, known as the frontal lobes—to modify the impulses that surge up from within the brain, to provide judgment, to organize behavior and decision-making, to learn and adhere to rules of everyday life.” It is difficult to have pride of authorship in a sentence like that. My guess is that it’s a reworked version of something I read in a textbook. Lavery knew that failing to credit Partington would have been wrong. Borrowing the personal story of a woman whose sister was murdered by a serial killer matters because that story has real emotional value to its owner. As Lavery put it, it touches on someone’s shattered life. Are boilerplate descriptions of physiological functions in the same league?

It also matters how Lavery chose to use my words. Borrowing crosses the line when it is used for a derivative work. It’s one thing if you’re writing a history of the Kennedys, like Doris Kearns Goodwin, and borrow, without attribution, from another history of the Kennedys. But Lavery wasn’t writing another profile of Dorothy Lewis. She was writing a play about something entirely new—about what would happen if a mother met the man who killed her daughter. And she used my descriptions of Lewis’s work and the outline of Lewis’s life as a building block in making that confrontation plausible. Isn’t that the way creativity is supposed to work? Old words in the service of a new idea aren’t the problem. What inhibits creativity is new words in the service of an old idea.

And this is the second problem with plagiarism. It is not merely extremist. It has also become disconnected from the broader question of what does and does not inhibit creativity. We accept the right of one writer to engage in a full-scale knockoff of another—think how many serial-killer novels have been cloned from “The Silence of the Lambs.” Yet, when Kathy Acker incorporated parts of a Harold Robbins sex scene verbatim in a satiric novel, she was denounced as a plagiarist (and threatened with a lawsuit). When I worked at a newspaper, we were routinely dispatched to “match” a story from the Times: to do a new version of someone else’s idea. But had we “matched” any of the Times’ words—even the most banal of phrases—it could have been a firing offense. The ethics of plagiarism have turned into the narcissism of small differences: because journalism cannot own up to its heavily derivative nature, it must enforce originality on the level of the sentence.

Dorothy Lewis says that one of the things that hurt her most about “Frozen” was that Agnetha turns out to have had an affair with her collaborator, David Nabkus. Lewis feared that people would think she had had an affair with her collaborator, Jonathan Pincus. “That’s slander,” Lewis told me. “I’m recognizable in that. Enough people have called me and said, ‘Dorothy, it’s about you,’ and if everything up to that point is true, then the affair becomes true in the mind. So that is another reason that I feel violated. If you are going to take the life of somebody, and make them absolutely identifiable, you don’t create an affair, and you certainly don’t have that as a climax of the play.”

It is easy to understand how shocking it must have been for Lewis to sit in the audience and see her “character” admit to that indiscretion. But the truth is that Lavery has every right to create an affair for Agnetha, because Agnetha is not Dorothy Lewis. She is a fictional character, drawn from Lewis’s life but endowed with a completely imaginary set of circumstances and actions. In real life, Lewis kissed Ted Bundy on the cheek, and in some versions of “Frozen” Agnetha kisses Ralph. But Lewis kissed Bundy only because he kissed her first, and there’s a big difference between responding to a kiss from a killer and initiating one. When we first see Agnetha, she’s rushing out of the house and thinking murderous thoughts on the airplane. Dorothy Lewis also charges out of her house and thinks murderous thoughts. But the dramatic function of that scene is to make us think, in that moment, that Agnetha is crazy. And the one inescapable fact about Lewis is that she is not crazy: she has helped get people to rethink their notions of criminality because of her unshakable command of herself and her work. Lewis is upset not just about how Lavery copied her life story, in other words, but about how Lavery changed her life story. She’s not merely upset about plagiarism. She’s upset about art—about the use of old words in the service of a new idea—and her feelings are perfectly understandable, because the alterations of art can be every bit as unsettling and hurtful as the thievery of plagiarism. It’s just that art is not a breach of ethics.

When I read the original reviews of “Frozen,” I noticed that time and again critics would use, without attribution, some version of the sentence “The difference between a crime of evil and a crime of illness is the difference between a sin and a symptom.” That’s my phrase, of course. I wrote it. Lavery borrowed it from me, and now the critics were borrowing it from her. The plagiarist was being plagiarized. In this case, there is no “art” defense: nothing new was being done with that line. And this was not “news.” Yet do I really own “sins and symptoms”? There is a quote by Gandhi, it turns out, using the same two words, and I’m sure that if I were to plow through the body of English literature I would find the path littered with crimes of evil and crimes of illness. The central fact about the “Phantom” case is that Ray Repp, if he was borrowing from Andrew Lloyd Webber, certainly didn’t realize it, and Andrew Lloyd Webber didn’t realize that he was borrowing from himself. Creative property, Lessig reminds us, has many lives—the newspaper arrives at our door, it becomes part of the archive of human knowledge, then it wraps fish. And, by the time ideas pass into their third and fourth lives, we lose track of where they came from, and we lose control of where they are going. The final dishonesty of the plagiarism fundamentalists is to encourage us to pretend that these chains of influence and evolution do not exist, and that a writer’s words have a virgin birth and an eternal life. I suppose that I could get upset about what happened to my words. I could also simply acknowledge that I had a good, long ride with that line—and let it go.

“It’s been absolutely bloody, really, because it attacks my own notion of my character,” Lavery said, sitting at my kitchen table. A bouquet of flowers she had brought was on the counter behind her. “It feels absolutely terrible. I’ve had to go through the pain for being careless. I’d like to repair what happened, and I don’t know how to do that. I just didn’t think I was doing the wrong thing . . . and then the article comes out in the New York Times and every continent in the world.” There was a long silence. She was heartbroken. But, more than that, she was confused, because she didn’t understand how six hundred and seventy-five rather ordinary words could bring the walls tumbling down. “It’s been horrible and bloody.” She began to cry. “I’m still composting what happened. It will be for a purpose . . . whatever that purpose is.”


Mammography, air power, and the limits of looking.


At the beginning of the first Gulf War, the United States Air Force dispatched two squadrons of F-15E Strike Eagle fighter jets to find and destroy the Scud missiles that Iraq was firing at Israel. The rockets were being launched, mostly at night, from the backs of modified flatbed tractor-trailers, moving stealthily around a four-hundred-square-mile “Scud box” in the western desert. The plan was for the fighter jets to patrol the box from sunset to sunrise. When a Scud was launched, it would light up the night sky. An F-15E pilot would fly toward the launch point, follow the roads that crisscrossed the desert, and then locate the target using a state-of-the-art, $4.6-million device called a LANTIRN navigation and targeting pod, capable of taking a high-resolution infrared photograph of a four-and-a-half-mile swath below the plane. How hard could it be to pick up a hulking tractor-trailer in the middle of an empty desert?

Almost immediately, reports of Scud kills began to come back from the field. The Desert Storm commanders were elated. “I remember going out to Nellis Air Force Base after the war,” Barry Watts, a former Air Force colonel, says. “They did a big static display, and they had all the Air Force jets that flew in Desert Storm, and they had little placards in front of them, with a box score, explaining what this plane did and that plane did in the war. And, when you added up how many Scud launchers they claimed each got, the total was about a hundred.” Air Force officials were not guessing at the number of Scud launchers hit; as far as they were concerned, they knew. They had a four-million-dollar camera, which took a nearly perfect picture, and there are few cultural reflexes more deeply ingrained than the idea that a picture has the weight of truth. “That photography not only does not, but cannot lie, is a matter of belief, an article of faith,” Charles Rosen and Henri Zerner have written. “We tend to trust the camera more than our own eyes.” Thus was victory declared in the Scud hunt–until hostilities ended and the Air Force appointed a team to determine the effectiveness of the air campaigns in Desert Storm. The actual number of definite Scud kills, the team said, was zero.

The problem was that the pilots were operating at night, when depth perception is impaired. LANTIRN could see in the dark, but the camera worked only when it was pointed in the right place, and the right place wasn’t obvious. Meanwhile, the pilot had only about five minutes to find his quarry, because after launch the Iraqis would immediately hide in one of the many culverts underneath the highway between Baghdad and Jordan, and the screen the pilot was using to scan all that desert measured just six inches by six inches. “It was like driving down an interstate looking through a soda straw,” Major General Mike DeCuir, who flew numerous Scud-hunt missions throughout the war, recalled. Nor was it clear what a Scud launcher looked like on that screen. “We had an intelligence photo of one on the ground. But you had to imagine what it would look like on a black-and-white screen from twenty thousand feet up and five or more miles away,” DeCuir went on. “With the resolution we had at the time, you could tell something was a big truck and that it had wheels, but at that altitude it was hard to tell much more than that.” The postwar analysis indicated that a number of the targets the pilots had hit were actually decoys, constructed by the Iraqis from old trucks and spare missile parts. Others were tanker trucks transporting oil on the highway to Jordan. A tanker truck, after all, is a tractor-trailer hauling a long, shiny cylindrical object, and, from twenty thousand feet up at four hundred miles an hour on a six-by-six-inch screen, a long, shiny cylindrical object can look a lot like a missile. “It’s a problem we’ve always had,” Watts, who served on the team that did the Gulf War analysis, said. “It’s night out. You think you’ve got something on the sensor. You roll out your weapons. Bombs go off. It’s really hard to tell what you did.”

You can build a high-tech camera, capable of taking pictures in the middle of the night, in other words, but the system works only if the camera is pointed in the right place, and even then the pictures are not self-explanatory. They need to be interpreted, and the human task of interpretation is often a bigger obstacle than the technical task of picture-taking. This was the lesson of the Scud hunt: pictures promise to clarify but often confuse. The Zapruder film intensified rather than dispelled the controversy surrounding John F. Kennedy’s assassination. The videotape of the beating of Rodney King led to widespread uproar about police brutality; it also served as the basis for a jury’s decision to acquit the officers charged with the assault. Perhaps nowhere have these issues been so apparent, however, as in the arena of mammography. Radiologists developed state-of-the-art X-ray cameras and used them to scan women’s breasts for tumors, reasoning that, if you can take a nearly perfect picture, you can find and destroy tumors before they go on to do serious damage. Yet there remains a great deal of confusion about the benefits of mammography. Is it possible that we place too much faith in pictures?


The head of breast imaging at Memorial Sloan-Kettering Cancer Center, in New York City, is a physician named David Dershaw, a youthful man in his fifties, who bears a striking resemblance to the actor Kevin Spacey. One morning not long ago, he sat down in his office at the back of the Sloan-Kettering Building and tried to explain how to read a mammogram.

Dershaw began by putting an X-ray on a light box behind his desk. “Cancer shows up as one of two patterns,” he said. “You look for lumps and bumps, and you look for calcium. And, if you find it, you have to make a determination: is it acceptable, or is it a pattern that might be due to cancer?” He pointed at the X-ray. “This woman has cancer. She has these tiny little calcifications. Can you see them? Can you see how small they are?” He took out a magnifying glass and placed it over a series of white flecks; as a cancer grows, it produces calcium deposits. “That’s the stuff we are looking for,” he said.

Then Dershaw added a series of slides to the light box and began to explain all the varieties that those white flecks came in. Some calcium deposits are oval and lucent. “They’re called eggshell calcifications,” Dershaw said. “And they’re basically benign.” Another kind of calcium runs like a railway track on either side of the breast’s many blood vessels–that’s benign, too. “Then there’s calcium that’s thick and heavy and looks like popcorn,” Dershaw went on. “That’s just dead tissue. That’s benign. There’s another calcification that’s little sacs of calcium floating in liquid. It’s called ‘milk of calcium.’ That’s another kind of calcium that’s always benign.” He put a new set of slides against the light. “Then we have calcium that looks like this–irregular. All of these are of different density and different sizes and different configurations. Those are usually benign, but sometimes they are due to cancer. Remember you saw those railway tracks? This is calcium laid down inside a tube as well, but you can see that the outside of the tube is irregular. That’s cancer.” Dershaw’s explanations were beginning to be confusing. “There are certain calcifications in benign tissues that are always benign,” he said. “There are certain kinds that are always associated with cancer. But those are the ends of the spectrum, and the vast amount of calcium is somewhere in the middle. And making that differentiation, between whether the calcium is acceptable or not, is not clear-cut.”

The same is true of lumps. Some lumps are simply benign clumps of cells. You can tell they are benign because the walls of the mass look round and smooth; in a cancer, cells proliferate so wildly that the walls of the tumor tend to be ragged and to intrude into the surrounding tissue. But sometimes benign lumps resemble tumors, and sometimes tumors look a lot like benign lumps. And sometimes you have lots of masses that, taken individually, would be suspicious but are so pervasive that the reasonable conclusion is that this is just how the woman’s breast looks. “If you have a CAT scan of the chest, the heart always looks like the heart, the aorta always looks like the aorta,” Dershaw said. “So when there’s a lump in the middle of that, it’s clearly abnormal. Looking at a mammogram is conceptually different from looking at images elsewhere in the body. Everything else has anatomy–anatomy that essentially looks the same from one person to the next. But we don’t have that kind of standardized information on the breast. The most difficult decision I think anybody needs to make when we’re confronted with a patient is: Is this person normal? And we have to decide that without a pattern that is reasonably stable from individual to individual, and sometimes even without a pattern that is the same from the left side to the right.”

Dershaw was saying that mammography doesn’t fit our normal expectations of pictures. In the days before the invention of photography, for instance, a horse in motion was represented in drawings and paintings according to the convention of ventre à terre, or “belly to the ground.” Horses were drawn with their front legs extended beyond their heads, and their hind legs stretched straight back, because that was the way, in the blur of movement, a horse seemed to gallop. Then, in the eighteen-seventies, came Eadweard Muybridge, with his famous sequential photographs of a galloping horse, and that was the end of ventre à terre. Now we knew how a horse galloped. The photograph promised that we would now be able to capture reality itself.

The situation with mammography is different. The way in which we ordinarily speak about calcium and lumps is clear and unambiguous. But the picture demonstrates how blurry those seemingly distinct categories actually are. Joann Elmore, a physician and epidemiologist at the University of Washington Harborview Medical Center, once asked ten board-certified radiologists to look at a hundred and fifty mammograms–of which twenty-seven had come from women who developed breast cancer, and a hundred and twenty-three from women who were known to have been healthy. One radiologist caught eighty-five per cent of the cancers the first time around. Another caught only thirty-seven per cent. One looked at the same X-rays and saw suspicious masses in seventy-eight per cent of the cases. Another doctor saw “focal asymmetric density” in half of the cancer cases; yet another saw no “focal asymmetric density” at all. There was one particularly perplexing mammogram that three radiologists thought was normal, two thought was abnormal but probably benign, four couldn’t make up their minds about, and one was convinced was cancer. (The patient was fine.) Some of these differences are a matter of skill, and there is good evidence that with more rigorous training and experience radiologists can become better at reading breast X-rays. But so much of what can be seen on an X-ray falls into a gray area that interpreting a mammogram is also, in part, a matter of temperament. Some radiologists see something ambiguous and are comfortable calling it normal. Others see something ambiguous and get suspicious.

Does that mean radiologists ought to be as suspicious as possible? You might think so, but caution simply creates another kind of problem. The radiologist in the Elmore study who caught the most cancers also recommended immediate workups–a biopsy, an ultrasound, or additional X-rays–on sixty-four per cent of the women who didn’t have cancer. In the real world, a radiologist who needlessly subjected such an extraordinary percentage of healthy patients to the time, expense, anxiety, and discomfort of biopsies and further testing would find himself seriously out of step with his profession. Mammography is not a form of medical treatment, where doctors are justified in going to heroic lengths on behalf of their patients. Mammography is a form of medical screening: it is supposed to exclude the healthy, so that more time and attention can be given to the sick. If screening doesn’t screen, it ceases to be useful.

Gilbert Welch, a medical-outcomes expert at Dartmouth, has pointed out that, given current breast-cancer mortality rates, nine out of every thousand sixty-year-old women will die of breast cancer in the next ten years. If every one of those women had a mammogram every year, that number would fall to six. The radiologist seeing those thousand women, in other words, would read ten thousand X-rays over a decade in order to save three lives–and that’s using the most generous possible estimate of mammography’s effectiveness. The reason a radiologist is required to assume that the overwhelming number of ambiguous things are normal, in other words, is that the overwhelming number of ambiguous things really are normal. Radiologists are, in this sense, a lot like baggage screeners at airports. The chances are that the dark mass in the middle of the suitcase isn’t a bomb, because you’ve seen a thousand dark masses like it in suitcases before, and none of those were bombs–and if you flagged every suitcase with something ambiguous in it no one would ever make his flight. But that doesn’t mean, of course, that it isn’t a bomb. All you have to go on is what it looks like on the X-ray screen–and the screen seldom gives you quite enough information.
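Welch’s arithmetic can be restated as a back-of-the-envelope calculation. The figures below are the ones quoted above (nine deaths per thousand without screening, six with annual mammograms, over ten years); the script is just a sketch that makes the ratio of X-rays read to lives saved explicit:

```python
# Gilbert Welch's screening arithmetic, using only the numbers in the text.
women = 1000           # sixty-year-old women followed for a decade
deaths_unscreened = 9  # expected breast-cancer deaths without mammography
deaths_screened = 6    # expected deaths with a mammogram every year
years = 10

xrays_read = women * years                     # one X-ray per woman per year
lives_saved = deaths_unscreened - deaths_screened

print(xrays_read)                # 10000 X-rays over the decade
print(lives_saved)               # 3 lives saved
print(xrays_read // lives_saved) # 3333 X-rays read per life saved
```

Roughly one life saved per three thousand three hundred readings, which is why the rational default for any single ambiguous image is “normal.”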


Dershaw picked up a new X-ray and put it on the light box. It belonged to a forty-eight-year-old woman. Mammograms indicate density in the breast: the denser the tissue is, the more the X-rays are absorbed, creating the variations in black and white that make up the picture. Fat hardly absorbs the beam at all, so it shows up as black. Breast tissue, particularly the thick breast tissue of younger women, shows up on an X-ray as shades of light gray or white. This woman’s breasts consisted of fat at the back of the breast and more dense, glandular tissue toward the front, so the X-ray was entirely black, with what looked like a large white, dense cloud behind the nipple. Clearly visible, in the black, fatty portion of the left breast, was a white spot. “Now, that looks like a cancer, that little smudgy, irregular, infiltrative thing,” Dershaw said. “It’s about five millimetres across.” He looked at the X-ray for a moment. This was mammography at its best: a clear picture of a problem that needed to be fixed. Then he took a pen and pointed to the thick cloud just to the right of the tumor. The cloud and the tumor were exactly the same color. “That cancer only shows up because it’s in the fatty part of the breast,” he said. “If you take that cancer and put it in the dense part of the breast, you’d never see it, because the whiteness of the mass is the same as the whiteness of normal tissue. If the tumor was over there, it could be four times as big and we still wouldn’t see it.”

What’s more, mammography is especially likely to miss the tumors that do the most harm. A team led by the research pathologist Peggy Porter analyzed four hundred and twenty-nine breast cancers that had been diagnosed over five years at the Group Health Cooperative of Puget Sound. Of those, two hundred and seventy-nine were picked up by mammography, and the bulk of them were detected very early, at what is called Stage One. (Cancer is classified into four stages, according to how far the tumor has spread from its original position.) Most of the tumors were small, less than two centimetres. Pathologists grade a tumor’s aggression according to such measures as the “mitotic count”–the rate at which the cells are dividing–and the screen-detected tumors were graded “low” in almost seventy per cent of the cases. These were the kinds of cancers that could probably be treated successfully. “Most tumors develop very, very slowly, and those tend to lay down calcium deposits–and what mammograms are doing is picking up those calcifications,” Leslie Laufman, a hematologist-oncologist in Ohio, who served on a recent National Institutes of Health breast-cancer advisory panel, said. “Almost by definition, mammograms are picking up slow-growing tumors.”

A hundred and fifty cancers in Porter’s study, however, were missed by mammography. Some of these were tumors the mammogram couldn’t see–that were, for instance, hiding in the dense part of the breast. The majority, though, simply didn’t exist at the time of the mammogram. These cancers were found in women who had had regular mammograms, and who were legitimately told that they showed no sign of cancer on their last visit. In the interval between X-rays, however, either they or their doctor had manually discovered a lump in their breast, and these “interval” cancers were twice as likely to be in Stage Three and three times as likely to have high mitotic counts; twenty-eight per cent had spread to the lymph nodes, as opposed to eighteen per cent of the screen-detected cancers. These tumors were so aggressive that they had gone from undetectable to detectable in the interval between two mammograms.

The problem of interval tumors explains why the overwhelming majority of breast-cancer experts insist that women in the critical fifty-to-sixty-nine age group get regular mammograms. In Porter’s study, the women were X-rayed at intervals as great as every three years, and that created a window large enough for interval cancers to emerge. Interval cancers also explain why many breast-cancer experts believe that mammograms must be supplemented by regular and thorough clinical breast exams. (“Thorough” is defined as palpation of the area from the collarbone to the bottom of the rib cage, one dime-size area at a time, at three levels of pressure–just below the skin, the mid-breast, and up against the chest wall–by a specially trained practitioner for a period not less than five minutes per breast.) In a major study of mammography’s effectiveness–one of a pair of Canadian trials conducted in the nineteen-eighties–women who were given regular, thorough breast exams but no mammograms were compared with those who had thorough breast exams and regular mammograms, and no difference was found in the death rates from breast cancer between the two groups. The Canadian studies are controversial, and some breast-cancer experts are convinced that they may have understated the benefits of mammography. But there is no denying the basic lessons of the Canadian trials: that a skilled pair of fingertips can find out an extraordinary amount about the health of a breast, and that we should not automatically value what we see in a picture over what we learn from our other senses.

“The finger has hundreds of sensors per square centimetre,” says Mark Goldstein, a sensory psychophysicist who co-founded MammaCare, a company devoted to training nurses and physicians in the art of the clinical exam. “There is nothing in science or technology that has even come close to the sensitivity of the human finger with respect to the range of stimuli it can pick up. It’s a brilliant instrument. But we simply don’t trust our tactile sense as much as our visual sense.”


On August 17, 1943, two hundred B-17 bombers from the United States Eighth Air Force set out from England for the German city of Schweinfurt. Two months later, two hundred and twenty-eight B-17s set out to strike Schweinfurt a second time. The two raids were among the costliest bombing missions of the war, and the Allied experience at Schweinfurt is an example of a more subtle–but in some cases more serious–problem with the picture paradigm.

The Schweinfurt raids grew out of the United States military’s commitment to bombing accuracy. As Stephen Budiansky writes in his wonderful recent book “Air Power,” the chief lesson of aerial bombardment in the First World War was that hitting a target from eight or ten thousand feet was a prohibitively difficult task. In the thick of battle, the bombardier had to adjust for the speed of the plane, the speed and direction of the prevailing winds, and the pitching and rolling of the plane, all while keeping the bombsight level with the ground. It was an impossible task, requiring complex trigonometric calculations. For a variety of reasons, including the technical challenges, the British simply abandoned the quest for precision: in both the First World War and the Second, the British military pursued a strategy of “morale” or “area” bombing, in which bombs were simply dropped, indiscriminately, on urban areas, with the intention of killing, dispossessing, and dispiriting the German civilian population.

But the American military believed that the problem of bombing accuracy was solvable, and a big part of the solution was something called the Norden bombsight. This breakthrough was the work of a solitary, cantankerous genius named Carl Norden, who operated out of a factory in New York City. Norden built a fifty-pound mechanical computer called the Mark XV, which used gears and wheels and gyroscopes to calculate airspeed, altitude, and crosswinds in order to determine the correct bomb-release point. The Mark XV, Norden’s business partner boasted, could put a bomb in a pickle barrel from twenty thousand feet. The United States spent $1.5 billion developing it, which, as Budiansky points out, was more than half the amount that was spent building the atomic bomb. “At air bases, the Nordens were kept under lock and key in secure vaults, escorted to their planes by armed guards, and shrouded in a canvas cover until after takeoff,” Budiansky recounts. The American military, convinced that its bombers could now hit whatever they could see, developed a strategic approach to bombing, identifying and selectively destroying targets that were critical to the Nazi war effort. In early 1943, General Henry (Hap) Arnold–the head of the Army Air Forces–assembled a group of prominent civilians to analyze the German economy and recommend critical targets. The Advisory Committee on Bombardment, as it was called, determined that the United States should target Germany’s ball-bearing factories, since ball bearings were critical to the manufacture of airplanes. And the center of the German ball-bearing industry was Schweinfurt. Allied losses from the two raids were staggering. Thirty-six B-17s were shot down in the August attack, sixty-two bombers were shot down in the October raid, and between the two operations a further hundred and thirty-eight planes were badly damaged. Yet, with the war in the balance, this was considered worth the price. 
When the damage reports came in, Arnold exulted, “Now we have got Schweinfurt!” He was wrong.

The problem was not, as in the case of the Scud hunt, that the target could not be found, or that what was thought to be the target was actually something else. The B-17s, aided by their Norden Mark XVs, hit the ball-bearing factories hard. The problem was that the picture Air Force officers had of their target didn’t tell them what they really needed to know. The Germans, it emerged, had ample stockpiles of ball bearings. They also had no difficulty increasing their imports from Sweden and Switzerland, and, through a few simple design changes, they were able to greatly reduce their need for ball bearings in aircraft production. What’s more, although the factory buildings were badly damaged by the bombing, the machinery inside wasn’t. Ball-bearing equipment turned out to be surprisingly hardy. “As it was, not a tank, plane, or other piece of weaponry failed to be produced because of lack of ball bearings,” Albert Speer, the Nazi production chief, wrote after the war. Seeing a problem and understanding it, then, are two different things.

In recent years, with the rise of highly accurate long-distance weaponry, the Schweinfurt problem has become even more acute. If you can aim at and hit the kitchen at the back of a house, after all, you don’t have to bomb the whole building. So your bomb can be two hundred pounds rather than a thousand. That means, in turn, that you can fit five times as many bombs on a single plane and hit five times as many targets in a single sortie, which sounds good–except that now you need to get intelligence on five times as many targets. And that intelligence has to be five times more specific, because if the target is in the bedroom and not the kitchen, you’ve missed him.

This is the issue that the United States command faced in the most recent Iraq war. Early in the campaign, the military mounted a series of air strikes against specific targets, where Saddam Hussein or other senior Baathist officials were thought to be hiding. There were fifty of these so-called “decapitation” attempts, each taking advantage of the fact that modern-day G.P.S.-guided bombs can be delivered from a fighter to within thirteen metres of their intended target. The strikes were dazzling in their precision. In one case, a restaurant was levelled. In another, a bomb burrowed down into a basement. But, in the end, every single strike failed. “The issue isn’t accuracy,” Watts, who has written extensively on the limitations of high-tech weaponry, says. “The issue is the quality of targeting information. The amount of information we need has gone up an order of magnitude or two in the last decade.”


Mammography has a Schweinfurt problem as well. Nowhere is that more evident than in the case of the breast lesion known as ductal carcinoma in situ, or DCIS, which shows up as a cluster of calcifications inside the ducts that carry milk to the nipple. It’s a tumor that hasn’t spread beyond those ducts, and it is so tiny that without mammography few women with DCIS would ever know they had it. In the past couple of decades, as more and more people have received regular breast X-rays and the resolution of mammography has increased, diagnoses of DCIS have soared. About fifty thousand new cases are now found every year in the United States, and virtually every DCIS lesion detected by mammography is promptly removed. But what has the targeting and destruction of DCIS meant for the battle against breast cancer? You’d expect that if we’ve been catching fifty thousand early-stage cancers every year, we should be seeing a corresponding decrease in the number of late-stage invasive cancers. It’s not clear whether we have. During the past twenty years, the incidence of invasive breast cancer has continued to rise by the same small, steady increment every year.

In 1987, pathologists in Denmark performed a series of autopsies of women in their forties who had not been known to have breast cancer when they died of other causes. The pathologists looked at an average of two hundred and seventy-five samples of breast tissue in each case, and found some evidence of cancer–usually DCIS–in nearly forty per cent of the women. Since breast cancer accounts for less than four per cent of female deaths, clearly the overwhelming majority of these women, had they lived longer, would never have died of breast cancer. “To me, that indicates that these kinds of genetic changes happen really frequently, and that they can happen without having an impact on women’s health,” Karla Kerlikowske, a breast-cancer expert at the University of California at San Francisco, says. “The body has this whole mechanism to repair things, and maybe that’s what happened with these tumors.” Gilbert Welch, the medical-outcomes expert, thinks that we fail to understand the hit-or-miss nature of cancerous growth, and assume it to be a process that, in the absence of intervention, will eventually kill us. “A pathologist from the International Agency for Research on Cancer once told me that the biggest mistake we ever made was attaching the word ‘carcinoma’ to DCIS,” Welch says. “The minute carcinoma got linked to it, it all of a sudden drove doctors to recommend therapy, because what was implied was that this was a lesion that would inexorably progress to invasive cancer. But we know that that’s not always the case.”

In some percentage of cases, however, DCIS does progress to something more serious. Some studies suggest that this happens very infrequently. Others suggest that it happens frequently enough to be of major concern. There is no definitive answer, and it’s all but impossible to tell, simply by looking at a mammogram, whether a given DCIS tumor is among those lesions which will grow out from the duct or part of the majority that will never amount to anything. That’s why some doctors feel that we have no choice but to treat every DCIS as life-threatening, and in thirty per cent of cases that means a mastectomy, and in another thirty-five per cent it means a lumpectomy and radiation. Would taking a better picture solve the problem? Not really, because the problem is that you don’t know for sure what you’re seeing, and as pictures have become better we have put ourselves in a position where we see more and more things that we don’t know how to interpret. When it comes to DCIS, the mammogram delivers information without true understanding. “Almost half a million women have been diagnosed and treated for DCIS since the early nineteen-eighties–a diagnosis virtually unknown before then,” Welch writes in his new book, “Should I Be Tested for Cancer?,” a brilliant account of the statistical and medical uncertainties surrounding cancer screening. “This increase is the direct result of looking harder–in this case with ‘better’ mammography equipment. But I think you can see why it is a diagnosis that some women might reasonably prefer not to know about.”


The disturbing thing about DCIS, of course, is that our approach to this tumor seems like a textbook example of how the battle against cancer is supposed to work. Use a powerful camera. Take a detailed picture. Spot the tumor as early as possible. Treat it immediately and aggressively. The campaign to promote regular mammograms has used this early-detection argument with great success, because it makes intuitive sense. The danger posed by a tumor is represented visually. Large is bad; small is better–less likely to have metastasized. But here, too, tumors defy our visual intuitions.

According to Donald Berry, who is the chairman of the Department of Biostatistics and Applied Mathematics at M. D. Anderson Cancer Center, in Houston, a woman’s risk of death increases only by about ten per cent for every additional centimetre in tumor length. “Suppose there is a tumor size above which the tumor is lethal, and below which it’s not,” Berry says. “The problem is that the threshold varies. When we find a tumor, we don’t know whether it has metastasized already. And we don’t know whether it’s tumor size that drives the metastatic process or whether all you need is a few million cells to start sloughing off to other parts of the body. We do observe that it’s worse to have a bigger tumor. But not amazingly worse. The relationship is not as great as you’d think.”
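Berry's figure can be made concrete with a short sketch. It assumes, purely for illustration, that the roughly ten-per-cent increase in risk compounds with each additional centimetre; the article does not specify the exact form of the relationship:

```python
# Illustration of Berry's point: if each extra centimetre of tumor
# length raises the relative risk of death by about ten per cent,
# size matters less than visual intuition suggests. The compounding
# assumption here is for illustration only.
for extra_cm in range(5):
    relative_risk = 1.10 ** extra_cm
    print(extra_cm, round(relative_risk, 2))
```

A tumor four centimetres larger carries roughly 1.46 times the risk under this assumption: worse, as Berry says, but not amazingly worse.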

In a recent genetic analysis of breast-cancer tumors, scientists selected women with breast cancer who had been followed for many years, and divided them into two groups–those whose cancer had gone into remission, and those whose cancer spread to the rest of their body. Then the scientists went back to the earliest moment that each cancer became apparent, and analyzed thousands of genes in order to determine whether it was possible to predict, at that moment, who was going to do well and who wasn’t. Early detection presumes that it isn’t possible to make that prediction: a tumor is removed before it becomes truly dangerous. But scientists discovered that even with tumors in the one-centimetre range–the range in which cancer is first picked up by a mammogram–the fate of the cancer seems already to have been set. “What we found is that there is biology that you can glean from the tumor, at the time you take it out, that is strongly predictive of whether or not it will go on to metastasize,” Stephen Friend, a member of the gene-expression team at Merck, says. “We like to think of a small tumor as an innocent. The reality is that in that innocent lump are a lot of behaviors that spell a potential poor or good prognosis.”

The good news here is that it might eventually be possible to screen breast cancers on a genetic level, using other kinds of tests–even blood tests–to look for the biological traces of those genes. This might also help with the chronic problem of overtreatment in breast cancer. If we can single out that small percentage of women whose tumors will metastasize, we can spare the rest the usual regimen of surgery, radiation, and chemotherapy. Gene-signature research is one of a number of reasons that many scientists are optimistic about the fight against breast cancer. But it is an advance that has nothing to do with taking more pictures, or taking better pictures. It has to do with going beyond the picture.

Under the circumstances, it is not hard to understand why mammography draws so much controversy. The picture promises certainty, and it cannot deliver on that promise. Even after forty years of research, there remains widespread disagreement over how much benefit women in the critical fifty-to-sixty-nine age bracket receive from breast X-rays, and further disagreement about whether there is enough evidence to justify regular mammography in women under fifty and over seventy. Is there any way to resolve the disagreement? Donald Berry says that there probably isn’t–that a clinical trial that could definitively answer the question of mammography’s precise benefits would have to be so large (involving more than five hundred thousand women) and so expensive (costing billions of dollars) as to be impractical. The resulting confusion has turned radiologists who do mammograms into one of the chief targets of malpractice litigation. “The problem is that mammographers–radiology groups–do hundreds of thousands of these mammograms, giving women the illusion that these things work and they are good, and if a lump is found and in most cases if it is found early, they tell women they have the probability of a higher survival rate,” says E. Clay Parker, a Florida plaintiff’s attorney, who recently won a $5.1 million judgment against an Orlando radiologist. “But then, when it comes to defending themselves, they tell you that the reality is that it doesn’t make a difference when you find it. So you scratch your head and say, ‘Well, why do you do mammography, then?'”

The answer is that mammograms do not have to be infallible to save lives. A modest estimate of mammography’s benefit is that it reduces the risk of dying from breast cancer by about ten per cent–which works out, for the average woman in her fifties, to about three extra days of life, or, to put it another way, a health benefit on a par with wearing a helmet on a ten-hour bicycle trip. That is not a trivial benefit. Multiplied across the millions of adult women in the United States, it amounts to thousands of lives saved every year, and, in combination with a medical regimen that includes radiation, surgery, and new and promising drugs, it has helped brighten the prognosis for women with breast cancer. Mammography isn’t as good as we’d like it to be. But we are still better off than we would be without it.
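The scale of that population-level benefit can be sketched from the per-thousand figures quoted earlier. The size of the screened population below is an assumed round number for illustration, not a figure from the article:

```python
# Rough scale of mammography's population-level benefit, using the
# three-lives-per-thousand-per-decade estimate quoted earlier.
# The screening-population figure is a hypothetical round number.
lives_saved_per_1000_per_decade = 3       # from Welch's estimate above
assumed_screened_women = 40_000_000       # illustrative U.S. screening population

lives_saved_per_year = (
    assumed_screened_women / 1000 * lives_saved_per_1000_per_decade / 10
)
print(round(lives_saved_per_year))        # 12000
```

Under these assumptions the benefit lands in the low tens of thousands of lives a decade: small for any one woman, substantial in the aggregate, which is exactly the shape of the argument the paragraph above makes.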

“There is increasingly an understanding among those of us who do this a lot that our efforts to sell mammography may have been over-vigorous,” Dershaw said, “and that although we didn’t intend to, the perception may have been that mammography accomplishes even more than it does.” He was looking, as he spoke, at the mammogram of the woman whose tumor would have been invisible had it been a few centimetres to the right. Did looking at an X-ray like that make him nervous? Dershaw shook his head. “You have to respect the limitations of the technology,” he said. “My job with the mammogram isn’t to find what I can’t find with a mammogram. It’s to find what I can find with a mammogram. If I’m not going to accept that, then I shouldn’t be reading mammograms.”


In February of last year, just before the start of the Iraq war, Secretary of State Colin Powell went before the United Nations to declare that Iraq was in defiance of international law. He presented transcripts of telephone conversations between senior Iraqi military officials, purportedly discussing attempts to conceal weapons of mass destruction. He told of eyewitness accounts of mobile biological-weapons facilities. And, most persuasively, he presented a series of images–carefully annotated, high-resolution satellite photographs of what he said was an Iraqi chemical-munitions facility at Taji.

“Let me say a word about satellite images before I show a couple,” Powell began. “The photos that I am about to show you are sometimes hard for the average person to interpret, hard for me. The painstaking work of photo analysis takes experts with years and years of experience, poring for hours and hours over light tables. But as I show you these images, I will try to capture and explain what they mean, what they indicate, to our imagery specialists.” The first photograph was dated November 10, 2002, just three months earlier, and years after the Iraqis were supposed to have rid themselves of all weapons of mass destruction. “Let me give you a closer look,” Powell said as he flipped to a closeup of the first photograph. It showed a rectangular building, with a vehicle parked next to it. “Look at the image on the left. On the left is a closeup of one of the four chemical bunkers. The two arrows indicate the presence of sure signs that the bunkers are storing chemical munitions. The arrow at the top that says ‘Security’ points to a facility that is a signature item for this kind of bunker. Inside that facility are special guards and special equipment to monitor any leakage that might come out of the bunker.” Then he moved to the vehicle next to the building. It was, he said, another signature item. “It’s a decontamination vehicle in case something goes wrong. . . . It is moving around those four and it moves as needed to move as people are working in the different bunkers.”

Powell’s analysis assumed, of course, that you could tell from the picture what kind of truck it was. But pictures of trucks, taken from above, are not always as clear as we would like; sometimes trucks hauling oil tanks look just like trucks hauling Scud launchers, and, while a picture is a good start, if you really want to know what you’re looking at you probably need more than a picture. I looked at the photographs with Patrick Eddington, who for many years was an imagery analyst with the C.I.A. Eddington examined them closely. “They’re trying to say that those are decontamination vehicles,” he told me. He had a photo up on his laptop, and he peered closer to get a better look. “But the resolution is sufficient for me to say that I don’t think it is–and I don’t see any other decontamination vehicles down there that I would recognize.” The standard decontamination vehicle was a Soviet-made box-body van, Eddington said. This truck was too long. For a second opinion, Eddington recommended Ray McGovern, a twenty-seven-year C.I.A. analyst, who had been one of George H. W. Bush’s personal intelligence briefers when he was Vice-President. “If you’re an expert, you can tell one hell of a lot from pictures like this,” McGovern said. He’d heard another interpretation. “I think,” he said, “that it’s a fire truck.”