Overdrive

How entrepreneurs really succeed.

1.

In 1969, Ted Turner wanted to buy a television station. He was thirty years old. He had inherited a billboard business from his father, which was doing well. But he was bored, and television seemed exciting. “He knew absolutely nothing about it,” one of Turner’s many biographers, Christian Williams, writes in “Lead, Follow or Get Out of the Way” (1981). “It would be fun to risk everything he had built, scare the hell out of everybody, and get back in the front seat of the roller coaster.”

The station in question was WJRJ, Channel 17, in Atlanta. It was an independent station on the UHF band, the lonely part of the television spectrum which viewers needed a special antenna to find. It was housed in a run-down cinder-block building near a funeral home, leading to the joke that it was at death’s door. The equipment was falling apart. The staff was incompetent. It had no decent programming to speak of, and it was losing more than half a million dollars a year. Turner’s lawyer, Tench Coxe, and his accountant, Irwin Mazo, were firmly opposed to the idea. “We tried to make it clear that—yes—this thing might work, but if it doesn’t everything will collapse,” Mazo said, years later. “Everything you’ve got will be gone. . . . It wasn’t just us, either. Everybody told him not to do it.”

Turner didn’t listen. He was Captain Courageous, the man with nerves of steel who went on to win the America’s Cup, take on the networks, marry a movie star, and become a billionaire. He dressed like a cowboy. He gave the impression of signing contracts without looking at them. He was a drinker, a yeller, a man of unstoppable urges and impulses, the embodiment of the entrepreneur as risk-taker. He bought the station, and so began one of the great broadcasting empires of the twentieth century.

What is sometimes forgotten amid the mythology, however, is that Turner wasn’t the proprietor of any old billboard company. He had inherited the largest outdoor-advertising firm in the South, and billboards, in the nineteen-sixties and seventies, were enormously lucrative. They benefitted from favorable tax-depreciation rules, they didn’t require much capital investment, and they produced rivers of cash. WJRJ’s losses could be used to offset the taxes on the profits of Turner’s billboard business. A television station, furthermore, fit very nicely into his existing business. Television was about selling ads, and Turner was very experienced at ad-selling. WJRJ may have been a virtual unknown in the Atlanta market, but Turner had billboards all over the city that were blank about fifteen per cent of the time. He could advertise his new station free. As for programming, Turner had a fix for that, too. In those days, the networks offered their local affiliates a full slate of shows, and whenever an affiliate wanted to broadcast local programming, such as sports or news, the national shows were preëmpted. Turner realized that he could persuade the networks in New York to let him have whatever programming their affiliates weren’t running. That’s exactly what happened. “When we reached the point of having four preempted NBC shows running in our daytime lineup,” Turner writes in his autobiography, “Call Me Ted” (2008), “I had our people put up some billboards saying ‘THE NBC NETWORK MOVES TO CHANNEL 17.’ ”

Williams writes that Turner was “attracted to the risk” of the deal, but it seems just as plausible to say that he was attracted by the deal’s lack of risk. “We don’t want to put it all on the line, because the result can’t possibly be worth the risk,” Mazo recalls warning Turner. Put it all on the line? The purchase price for WJRJ was $2.5 million. Similar properties in that era went for many times that, and Turner paid with a stock swap engineered in such a way that he didn’t have to put a penny down. Within two years, the station was breaking even. By 1973, it was making a million dollars in profit.

In a recent study, “From Predators to Icons,” the French scholars Michel Villette and Catherine Vuillermot set out to discover what successful entrepreneurs have in common. They present case histories of businessmen who built their own empires—ranging from Sam Walton, of Wal-Mart, to Bernard Arnault, of the luxury-goods conglomerate L.V.M.H.—and chart what they consider the typical course of a successful entrepreneur’s career. There is almost always, they conclude, a moment of great capital accumulation—a particular transaction that catapults him into prominence. The entrepreneur has access to that deal by virtue of occupying a “structural hole,” a niche that gives him a unique perspective on a particular market. Villette and Vuillermot go on, “The businessman looks for partners to a transaction who do not have the same definition as he of the value of the goods exchanged, that is, who undervalue what they sell to him or overvalue what they buy from him in comparison to his own evaluation.” He moves decisively. He repeats the good deal over and over again, until the opportunity closes, and—most crucially—his focus throughout that sequence is on hedging his bets and minimizing his chances of failure. The truly successful businessman, in Villette and Vuillermot’s telling, is anything but a risk-taker. He is a predator, and predators seek to incur the least risk possible while hunting.

Giovanni Agnelli, the founder of Fiat, financed his young company with the money of investors—who were “subsequently excluded from the company by a maneuver by Agnelli,” the authors point out. Bernard Arnault took over the Boussac group at a personal cost of forty million francs, which was a fraction of the “immediate resale value of the assets.” The French industrialist Vincent Bolloré “took charge of the failing family company for almost nothing with other people’s money.” George Eastman, the founder of Kodak, shifted the financial risk of his new enterprise to his family and to his wealthy friend Henry Strong. IKEA’s founder, Ingvar Kamprad, arranged to get his furniture made in Communist Poland for half of what it would cost him in Sweden. Marcel Dassault, the French aviation pioneer, did a study for the French Army that pointed out the value of propellers, and then took over a propeller manufacturer. When he started making planes for the military, he made sure he was paid in advance.

People like Dassault and Eastman and Arnault and Turner are all successful entrepreneurs, businessmen whose insights and decisions have transformed the economy, but their entrepreneurial spirit could not have less in common with that of the daring risk-taker of popular imagination. Would we so revere risk-taking if we realized that the people who are supposedly taking bold risks in the cause of entrepreneurship are actually doing no such thing?

2.

The most successful entrepreneur on Wall Street—certainly of the past decade and perhaps even of the postwar era—is a hedge-fund manager named John Paulson. He started a small money-management business in the nineteen-nineties and built it into a juggernaut, and Gregory Zuckerman’s recent account of Paulson’s triumph, “The Greatest Trade Ever,” offers a fascinating perspective on the predator thesis.

Paulson grew up in middle-class Queens, the child of an immigrant father. His career on Wall Street started relatively slowly. He launched his firm in 1994, when he was nearly forty years old, specializing in merger arbitrage. By 2004, Paulson was managing about two billion dollars of other people’s money, putting him in the middle ranks of hedge funds. He was, Zuckerman writes, a “solid investor, careful and decidedly unspectacular.” The particular kinds of deal he did were “among the safest forms of investing.” One of Paulson’s mentors was an investor named Marty Gruss, and, Zuckerman writes, “the ideal Gruss investment had limited risk but held the promise of a potential fortune. Marty Gruss drilled a maxim into Paulson: ‘Watch the downside; the upside will take care of itself.’ At his firm, he asked his analysts repeatedly, ‘How much can we lose on this trade?’ ” Long after he became wealthy, he would take the bus to his offices in midtown, and the train out to his summer house on Long Island. He was known for getting around the Hamptons on his bicycle.

By 2004 and 2005, Paulson had grown increasingly suspicious of the real-estate boom. He decided to short the mortgage market, using a financial tool known as the credit-default swap, or C.D.S. A credit-default swap is like an insurance policy. Wall Street banks combined hundreds of mortgages into bundles, and investors could buy insurance on any of the bundles they chose. Suppose I put together a bundle of ten mortgages totalling a million dollars. I could sell you a one-year C.D.S. policy on that bundle for, say, a hundred thousand dollars. If after the year was up the ten homeowners holding those mortgages were all making their monthly payments, I’d pocket your hundred thousand. If, however, those homeowners all defaulted, I’d owe you the full value of the bundle—a million dollars. Throughout the boom, countless banks and investment firms sold C.D.S. policies on securities backed by subprime loans, happily pocketing the annual premiums in the belief that there was little chance of ever having to make good on the contract. Paulson, as often as not, was the one on the other side of the trade. He bought C.D.S. contracts by the truckload, and, when he ran out of money, he found new investors, raising billions of new dollars so he could buy even more. By the time the crash came, he was holding insurance on some twenty-five billion dollars’ worth of subprime mortgages.

Was Paulson’s trade risky? Conventional wisdom said that it was. This kind of deal is known, in Wall Street parlance, as a “negative-carry” trade, and, as Zuckerman writes, negative-carry trades are a “maneuver that investment pros detest almost as much as high taxes and coach-class seating.” Their problem with negative-carry is that if the trade doesn’t pay off quickly it can become ruinously expensive. It’s one thing if I pay you a hundred thousand dollars for one year’s insurance on a million dollars’ worth of mortgages, and the mortgages go belly up after six months. But what if I pay premiums for two years, and the bubble still hasn’t burst? Then I’m out two hundred thousand dollars, with nothing to show for my efforts. And what if the bubble hasn’t burst after three years? Now I have a very nervous group of investors. To win at a negative-carry trade, you have to correctly predict not only the presence of a bubble but also when it will burst.
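
The timing problem is worth spelling out. Below is a minimal sketch, in Python, of the negative-carry arithmetic, using the illustrative figures from this passage; the function is hypothetical, and the point is the break-even logic, not the terms of any real contract.

```python
# A minimal sketch of the negative-carry arithmetic described above,
# using the article's illustrative figures (a hundred-thousand-dollar
# annual premium on a million-dollar mortgage bundle). The function
# name and structure are hypothetical, for illustration only.

BUNDLE_VALUE = 1_000_000    # face value of the insured mortgage bundle
ANNUAL_PREMIUM = 100_000    # cost of one year of C.D.S. protection

def net_outcome(years_of_premiums: int, bundle_defaults: bool) -> int:
    """Net result for the C.D.S. buyer: premiums are paid every year
    until the trade resolves; the payoff arrives only on default."""
    cost = ANNUAL_PREMIUM * years_of_premiums
    payoff = BUNDLE_VALUE if bundle_defaults else 0
    return payoff - cost

# The mortgages go belly up within the first year: a $900,000 gain.
print(net_outcome(1, True))    # 900000

# Two years of premiums and no burst: $200,000 out, nothing to show.
print(net_outcome(2, False))   # -200000

# Hence the timing problem: every premium year erodes the eventual payoff.
```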

At one point before the crash, Zuckerman writes, a trader at Morgan Stanley “hung up the phone after yet another Paulson order and turned to a colleague in disbelief. ‘This guy is nuts,’ he said with a chuckle, amazed that Paulson was agreeing to make so many annual insurance payments. ‘He’s just going to pay it all out?’ ” Wall Street thought that Paulson was crazy.

But Paulson wasn’t crazy at all. In 2006, he had his firm undertake a rigorous analysis of the housing market, led by Paulson’s associate Paolo Pellegrini. At that point, it was unclear whether rising housing prices represented a bubble or a legitimate phenomenon. Pellegrini concluded that housing prices had risen on average 1.4 per cent annually between 1975 and 2000, once inflation had been accounted for. In the next five years, though, they had risen seven per cent a year—to the point where they would have to fall by forty per cent to be back in line with historical trends. That fact left Paulson certain that he was looking at a bubble.

Paulson’s next concern was with the volatility of the housing market. Was this bubble resilient? Or was everything poised to come crashing down? Zuckerman tells how Pellegrini and another Paulson associate, Sihan Shu, “purchased enormous databases tracking the historic performance of more than six million mortgages in various parts of the country.” Thus equipped,

they crunched the numbers, tinkered with logarithms and logistic functions, and ran different scenarios, trying to figure out what would happen if housing prices stopped rising. Their findings seemed surprising: Even if prices just flatlined, homeowners would feel so much financial pressure that it would result in losses of 7 percent of the value of a typical pool of subprime mortgages. And if home prices fell 5 percent, it would lead to losses as high as 17 percent.

This was a crucial finding. Most people at the time believed that widespread defaults on mortgages were a function of some combination of structural economic factors such as unemployment rates, interest rates, and regional economic health. That’s why so many on Wall Street were happy to sell Paulson C.D.S. policies: they thought it would take a perfect storm to bring the market to its knees. But Pellegrini’s data showed that the bubble was being inflated by a single, rickety factor—rising home prices. It wouldn’t take much for the bubble to burst.

Paulson then looked at what buying disaster insurance on mortgages would cost. C.D.S. contracts can sometimes be prohibitively expensive. In the months leading up to General Motors’ recent bankruptcy, for example, a year’s insurance on a million dollars’ worth of the carmaker’s bonds sold for eight hundred thousand dollars. If Paulson had to pay anything like that amount, there wouldn’t be much room for error. To his amazement, though, he found that to insure a million dollars of mortgages would cost him just ten thousand dollars—and this was for some of the most dubious and high-risk subprime mortgages. Paulson didn’t even need a general housing-market collapse to make his money. He needed only the most vulnerable of all homeowners to start defaulting. It was a classic asymmetrical trade. If Paulson raised a billion dollars from investors, he could buy a year’s worth of insurance on twelve billion dollars of subprime loans for a hundred and twenty million. That’s an outlay of twelve per cent up front. But, Zuckerman explains,

because premiums on CDS contracts, like those on any other insurance product, are paid out over time, the new fund could keep most of its money in the bank until the CDS bills came due, and thereby earn about 5 percent a year. That would cut the annual cost to the fund to a more reasonable 7 percent. Since Paulson would charge 1 percent a year as a management fee, the most an investor could lose would be 8 percent a year. . . . And the upside? If Paulson purchased CDS contracts that fully protected $12 billion of subprime mortgage bonds and the bonds somehow became worthless, Paulson & Co. would make a cool $12 billion.
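
The arithmetic in that passage can be checked with a short sketch. It treats the one-per-cent premium, the five-per-cent interest on idle cash, and the one-per-cent management fee as flat annual rates, which is a simplification of how a real fund would account for them.

```python
# A sketch of the fund economics Zuckerman describes: a one-billion-dollar
# fund insuring twelve billion dollars of subprime bonds. All rates are
# treated as flat annual figures, a simplifying assumption.

FUND = 1_000_000_000         # investor capital
NOTIONAL = 12_000_000_000    # subprime bonds insured via C.D.S.
PREMIUM_RATE = 0.01          # annual C.D.S. premium on the notional
INTEREST_RATE = 0.05         # interest earned on cash held in the bank
FEE_RATE = 0.01              # annual management fee on the fund

premiums = NOTIONAL * PREMIUM_RATE   # $120 million: 12% of the fund
interest = FUND * INTEREST_RATE      # about $50 million earned on cash
fee = FUND * FEE_RATE                # $10 million to the manager

worst_case_loss = premiums - interest + fee
print(worst_case_loss / FUND)        # 0.08: the 8% maximum annual loss

# And the asymmetry: if the insured bonds became worthless, the fund
# would collect the full notional.
print(NOTIONAL / FUND)               # 12.0: a twelvefold return
```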

“There’s never been an opportunity like this,” Paulson gushed to a colleague, as he made one bet after another. By “never,” he meant never ever—not in his lifetime and not in anyone else’s, either. In one of the book’s many memorable scenes, Zuckerman describes how a five-point decline in what’s called the ABX index (a measure of mortgage health) once made Paulson $1.25 billion in one morning. In 2007 alone, Paulson & Co. took in fifteen billion dollars in profits, of which four billion went directly into Paulson’s pocket. In 2008, his firm made five billion dollars. Rarely in human history has anyone made so much money in so short a time.

What Paulson’s story makes clear is how different the predator is from our conventional notion of the successful businessman. The risk-taking model suggests that the entrepreneur’s chief advantage is one of temperament—he’s braver than the rest of us are. In the predator model, the entrepreneur’s advantage is analytical—he’s better at figuring out a sure thing than the rest of us. Paulson looked at the same marketplace as everyone else on Wall Street did. But he saw a different pattern. As an outsider, he had fresh eyes, and his line of investing made him a lot more comfortable with negative-carry trades than his competitors were. He looked for and found partners to the transaction who did not have the same definition as he of the value of the goods exchanged—that is, the banks selling credit-default swaps for a penny on the dollar—and he exploited that advantage ruthlessly. At one point, incredibly, Paulson got together with some investment banks to assemble bundles of the most absurdly toxic mortgages—which the banks then sold to some hapless investors and Paulson then promptly bet against. As Zuckerman points out, this is the equivalent of a game of football in which the defense calls the plays for the offense. It’s how a nerd would play football, not a jock.

This is exactly how Turner pulled off another of his legendary early deals—his 1976 acquisition of the Atlanta Braves baseball team. Turner’s Channel 17 was the Braves’ local broadcaster, having acquired the rights four years before—a brilliant move, as it turned out, because it forced every Braves fan in the region to go out and buy a UHF antenna. (Well before ESPN and Rupert Murdoch’s Sky TV, Turner had realized how important live sports programming could be in building a television brand.) The team was losing a million dollars a year, and the owners wanted ten million dollars to sell. That was four times the price of Channel 17. “I had no idea how I could afford it,” Turner told one of his biographers, although by this point the reader is wise to his aw-shucks modesty. First, he didn’t pay ten million dollars. He talked the Braves into taking a million down, and the rest over eight or so years. Second, he didn’t end up paying the million down. Somewhat mysteriously, Turner reports that he found a million dollars on the team’s books—money the previous owners somehow didn’t realize they had—and so, he says, “I bought it using its own money, which was quite a trick.” He now owed nine million dollars. But Turner had already been paying the Braves six hundred thousand dollars a year for the rights to broadcast sixty of the team’s games. What the deal consisted of, then, was his paying an additional six hundred thousand dollars or so a year, for eight years: in return, he would get the rights to all one hundred and sixty-two of the team’s games, plus the team itself.

You and I might not have made that deal. But that’s not because Turner is a risk-taker and we are cowards. It’s because Turner is a cold-blooded bargainer who could find a million dollars in someone’s back pocket that the person didn’t know he had. Once you get past the more flamboyant aspects of Turner’s personal and sporting life, in fact, there is little evidence that he had any real appetite for risk at all. In his memoir, Turner tells us that when he was starting out in the family business his father, Ed, bought another billboard firm, called General Outdoor. That was the acquisition that launched the Turner company as a major advertising player in the South, and it involved taking on a sizable amount of debt. Young Ted had no qualms, intellectually, about the decision. He could do the math. There were substantial economies of scale in the advertising business: the bigger you got, the lower your costs were, and paying off the debt from the General Outdoor purchase, Ted Turner realized, probably wasn’t going to be a problem. But Turner’s father did something that Turner, when he was building his empire, always went to extraordinary lengths to avoid: he put his own capital into the deal. In the highly unlikely event that it didn’t work out, Turner Advertising would be crippled. It was a good deal, not a perfect one, and that niggling imperfection, along with the toll that the uncertainty was taking on his father, left Turner worried sick. “During the first six months or so after the General Outdoor acquisition my weight dropped from 180 pounds to 135,” he writes. “I developed a pre-ulcerative condition and my doctor made me swear off coffee. I’d get so tired and agitated that one of my eyelids developed a twitch.”

Zuckerman profiles John Paulson alongside three others who made the same subprime bet—Greg Lippmann, a trader at Deutsche Bank; Jeffrey Greene, a real-estate mogul in Los Angeles; and Michael Burry, who ran a hedge fund in Silicon Valley—and finds the same pattern. All were supremely confident of their decision. All had done their homework. All had swooped down, like perfect predators, on a marketplace anomaly. But these were not men temperamentally suited to risk-taking. They worked so hard to find the sure thing because anything short of that gave them ulcers. Here is Zuckerman on Burry, as he waited for his trade to pan out:

In a tailspin, Burry withdrew from his friends, family, and employees. Each morning, Burry walked into his firm and made a beeline to his office, head down, locking the door behind him. He didn’t emerge all day, not even to eat or use the bathroom. His remaining employees, who were still pulling for Burry, turned worried. Sometimes he got into the office so early, and kept the door closed for so long, that when his staff left at the end of the day, they were unsure if their boss had ever come in. Other times, Burry pounded his fists on his desk, trying to release his tension, as heavy-metal music blasted from nearby speakers.

3.

Paulson’s story also casts a harsh light on the prevailing assumptions behind corporate compensation policies. One of the main arguments for the generous stock options that are so often given to C.E.O.s is that they are necessary to encourage risk-taking in the corporate suite. This notion comes from what is known as “agency theory,” which Freek Vermeulen, of the London Business School, calls “one of the few academic theories in management academia that has actually influenced the world of management practice.” Agency theory, Vermeulen observes, “says that managers are inherently risk-averse; much more risk-averse than shareholders would like them to be. And the theory prescribes that you should give them stock options, rather than stock, to stimulate them to take more risk.” Why do shareholders want managers to take more risks? Because they want stodgy companies to be more entrepreneurial, and taking risks is what everyone says that entrepreneurs do.

The result has been to turn executives into risk-takers. Paulson, for his part, was stunned at the reckless behavior of his Wall Street counterparts. Some of the mortgage bundles he was betting against—collections of some of the sketchiest subprime loans—were paying the investors who bought them six-per-cent interest. Treasury bonds, the safest investment in the world, were paying almost five per cent at that point. Nor could he comprehend why so many banks were willing to sell him C.D.S. insurance at such low prices. Why would someone, in the middle of a housing bubble, demand only one cent on the dollar? At the end of 2006, Merrill Lynch paid $1.3 billion for First Franklin Financial, one of the biggest subprime lenders in the country, bringing the total value of subprime mortgages on its books to eleven billion dollars. Paulson was so risk-averse that he didn’t so much as put a toe in the water of subprime-mortgage default swaps until Pellegrini had done months of analysis. But Merrill Lynch bought First Franklin even though the firm’s own economists were predicting that housing prices were about to drop by as much as five per cent. “It just doesn’t make sense,” an incredulous Paulson told his friend Howard Gurvitch. “These are supposedly the smart people.”

The economist Scott Shane, in his book “The Illusions of Entrepreneurship,” makes a similar argument. Yes, he says, many entrepreneurs take plenty of risks—but those are generally the failed entrepreneurs, not the success stories. The failures violate all kinds of established principles of new-business formation. New-business success is clearly correlated with the size of initial capitalization. But failed entrepreneurs tend to be wildly undercapitalized. The data show that organizing as a corporation is best. But failed entrepreneurs tend to organize as sole proprietorships. Writing a business plan is a must; failed entrepreneurs rarely take that step. Taking over an existing business is always the best bet; failed entrepreneurs prefer to start from scratch. Ninety per cent of the fastest-growing companies in the country sell to other businesses; failed entrepreneurs usually try selling to consumers, and, rather than serving customers that other businesses have missed, they chase the same people as their competitors do. The list goes on: they underemphasize marketing; they don’t understand the importance of financial controls; they try to compete on price. Shane concedes that some of these risks are unavoidable: would-be entrepreneurs take them because they have no choice. But a good many of these risks reflect a lack of preparation or foresight.

4.

Shane’s description of the pattern of entrepreneurial failure brings to mind the Harvard psychologist David McClelland’s famous experiment with kindergarten children in the nineteen-fifties. McClelland watched a group of kids play ringtoss—throwing a hoop over a pole. The children who played the game in the riskiest manner, who stood so far from the pole that success was unlikely, also scored lowest on what he called “achievement motive,” that is, the desire to succeed. (Another group of low scorers were at the other extreme, standing so close to the pole that the game ceased to be a game at all.) Taking excessive risks was, then, a psychologically protective strategy: if you stood far enough back from the pole, no one could possibly blame you if you failed. These children went out of their way to take a “professional” risk in order to avoid a personal risk. That’s what companies are buying with their bloated C.E.O. stock-options packages—gambles so wild that the gambler can lose without jeopardizing his social standing within the corporate world. “As long as the music is playing, you’ve got to get up and dance,” the now departed C.E.O. of Citigroup, Charles Prince, notoriously said, as his company continued to pile one dubious investment on another. He was more afraid of being a wallflower than he was of imperilling his firm.

The successful entrepreneur takes the opposite tack. Villette and Vuillermot point out that the predator is often quite happy to put his reputation on the line in the pursuit of the sure thing. Ingvar Kamprad, of IKEA, went to Poland in the nineteen-sixties to get his furniture manufactured. Since Polish labor was inexpensive, it gave Kamprad a huge price advantage. But doing business with a Communist country at the height of the Cold War was a scandal. Sam Walton financed his first retailing venture, in Newport, Arkansas, with money from his wealthy in-laws. That approach was safer than turning to a bank, especially since Walton was forced out of Newport and had to go back to his wife’s family for another round. But you can imagine that it made for some tense moments at family reunions for a while. Deutsche Bank’s Lippmann, meanwhile, was called Chicken Little and Bubble Boy to his face for his insistence that the mortgage market was going to burst.

Why are predators willing to endure this kind of personal abuse? Perhaps they are sufficiently secure and confident that they don’t need public approval. Or perhaps they are so caught up in their own calculations that they don’t notice. The simplest explanation, though, is that it’s just another manifestation of their relentlessly rational pursuit of the sure thing. If an awkward family reunion was the price Walton had to pay for a guaranteed line of credit, then so be it. He went out of his way to take a personal risk in order to avoid a professional risk. Reputation, after all, is a commodity that trades in the marketplace at a significant and often excessive premium. The predator shorts the dancers, and goes long on the wallflowers.

5.

When Pellegrini finally finished his research on the mortgage market—proving how profoundly inflated home prices had become—he rushed in to show his findings to his boss. Zuckerman writes:

“This is unbelievable!” Paulson said, unable to take his eyes off the chart. A mischievous smile formed on his face, as if Pellegrini had shared a secret no one else was privy to. Paulson sat back in his chair and turned to Pellegrini. “This is our bubble! This is proof. Now we can prove it!” Paulson said. Pellegrini grinned, unable to mask his pride. The chart was Paulson’s Rosetta stone, the key to making sense of the entire housing market. Years later, he would keep it atop a pile of papers on his desk, showing it off to his clients and updating it each month with new data, like a car collector gently waxing and caressing a prized antique auto. . . . “I still look at it. I love that chart,” Paulson says.

There are a number of moments like this in “The Greatest Trade Ever,” when it becomes clear just how much Paulson enjoyed his work. Yes, he wanted to make money. But he was fabulously wealthy long before he tackled the mortgage business. His real motivation was the challenge of figuring out a particularly knotty problem. He was a kid with a puzzle.

This is consistent with the one undisputed finding in all the research on entrepreneurship: people who work for themselves are far happier than the rest of us. Shane says that the average person would have to earn two and a half times as much to be as happy working for someone else as he would be working for himself. And people who like what they do are profoundly conservative. When the sociologists Hongwei Xu and Martin Ruef asked a large sample of entrepreneurs and non-entrepreneurs to choose among three alternatives—a business with a potential profit of five million dollars with a twenty-per-cent chance of success, or one with a profit of two million with a fifty-per-cent chance of success, or one with a profit of $1.25 million with an eighty-per-cent chance of success—it was the entrepreneurs who were more likely to go with the third, safe choice. They weren’t dazzled by the chance of making five million dollars. They were drawn to the eighty-per-cent chance of getting to do what they love doing. The predator is a supremely rational actor. But, deep down, he is also a romantic, motivated by the simple joy he finds in his work.
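
A detail the passage leaves implicit: the three alternatives have identical expected values, exactly a million dollars each, so the entrepreneurs’ preference for the safe option reflects taste, not arithmetic. A quick check, in Python:

```python
# The three alternatives from the Xu and Ruef study described above.
# Expected value = potential profit x probability of success.

options = [
    (5_000_000, 0.20),    # five million dollars at a 20% chance
    (2_000_000, 0.50),    # two million dollars at a 50% chance
    (1_250_000, 0.80),    # $1.25 million at an 80% chance
]

for profit, chance in options:
    print(f"${profit:,} at {chance:.0%}: expected value ${profit * chance:,.0f}")

# Each line prints an expected value of $1,000,000. The choices are
# statistically equivalent, so picking the 80% option is a pure
# expression of risk aversion.
```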

In “Call Me Ted,” Turner tells the story of one of his first great traumas. When Turner was twenty-four, his father committed suicide. He had been depressed and troubled for some months, and one day after breakfast he went upstairs and shot himself. After the funeral, it emerged that the day before his death Turner’s father had sold the crown jewels of the family business—the General Outdoor properties—to a man named Bob Naegele. Turner was grief-stricken. But he fought back. He hired away the General Outdoor leasing department. He began “jumping” the company’s leases—that is, persuading the people who owned the real estate on which the General Outdoor billboards sat to cancel the leases and sign up with Turner Advertising. Then he flew to Palm Springs and strong-armed Naegele into giving back the business. Turner the rational actor negotiated the deal. But it was Turner the romantic who had the will, at the moment of his greatest grief, to fight back. What Turner understood was that none of his grand ambitions were possible without the billboard cash machine. He had felt the joy that comes with figuring out a particularly knotty problem, and he couldn’t give that up. Naegele, by the way, asked for two hundred thousand dollars, which Turner didn’t have. But Turner realized that for someone in Naegele’s tax bracket a flat payment like that made no sense. He countered with two hundred thousand dollars in Turner Advertising stock. “So far so good,” Turner writes in his autobiography. “I had kept the company out of Naegele’s hands and it didn’t cost me a single dollar of cash.” Of course it didn’t. He’s a predator. Why on earth would he take a risk like that?

Drinking Games

How much people drink may matter less than how they drink it.

1.

In 1956, Dwight Heath, a graduate student in anthropology at Yale University, was preparing to do field work for his dissertation. He was interested in land reform and social change, and his first choice as a study site was Tibet. But six months before he was to go there he got a letter from the Chinese government rejecting his request for a visa. “I had to find a place where you can master the literature in four months, and that was accessible,” Heath says now. “It was a hustle.” Bolivia was the next best choice. He and his wife, Anna Cooper Heath, flew to Lima with their baby boy, and then waited for five hours while mechanics put boosters on the plane’s engines. “These were planes that the U.S. had dumped after World War II,” Heath recalls. “They weren’t supposed to go above ten thousand feet. But La Paz, where we were headed, was at twelve thousand feet.” As they flew into the Andes, Cooper Heath says, they looked down and saw the remnants of “all the planes where the boosters didn’t work.”

From La Paz, they travelled five hundred miles into the interior of eastern Bolivia, to a small frontier town called Montero. It was the part of Bolivia where the Amazon Basin meets the Chaco—vast stretches of jungle and lush prairie. The area was inhabited by the Camba, a mestizo people descended from the indigenous Indian populations and Spanish settlers. The Camba spoke a language that was a mixture of the local Indian languages and seventeenth-century Andalusian Spanish. “It was an empty spot on the map,” Heath says. “There was a railroad coming. There was a highway coming. There was a national government . . . coming.”

They lived in a tiny house just outside of town. “There was no pavement, no sidewalks,” Cooper Heath recalls. “If there was meat in town, they’d throw out the hide in front, so you’d know where it was, and you would bring banana leaves in your hand, so it was your dish. There were adobe houses with stucco and tile roofs, and the town plaza, with three palm trees. You heard the rumble of oxcarts. The padres had a jeep. Some of the women would serve a big pot of rice and some sauce. That was the restaurant. The guy who did the coffee was German. The year we came to Bolivia, a total of eighty-five foreigners came into the country. It wasn’t exactly a hot spot.”

In Montero, the Heaths engaged in old-fashioned ethnography—“vacuuming up everything,” Dwight says, “learning everything.” They convinced the Camba that they weren’t missionaries by openly smoking cigarettes. They took thousands of photographs. They walked around the town and talked to whomever they could, and then Dwight went home and spent the night typing up his notes. They had a Coleman lantern, which became a prized social commodity. Heath taught some of the locals how to build a split-rail fence. They sometimes shared a beer in the evenings with a Bolivian Air Force officer who had been exiled to Montero from La Paz. “He kept on saying, ‘Watch me, I will be somebody,’ ” Dwight says. (His name was René Barrientos; eight years later he became the President of Bolivia, and the Heaths were invited to his inauguration.) After a year and a half, the Heaths packed up their photographs and notes and returned to New Haven. There Dwight Heath sat down to write his dissertation—only to discover that he had nearly missed what was perhaps the most fascinating fact about the community he had been studying.

Today, the Heaths are in their late seventies. Dwight has neatly combed gray hair and thick tortoiseshell glasses, a reserved New Englander through and through. Anna is more outgoing. They live not far from the Brown University campus, in Providence, in a house filled with hundreds of African statues and sculptures, with books and papers piled high on tables, and they sat, in facing armchairs, and told the story of what happened half a century ago, finishing each other’s sentences.

“It was August or September of 1957,” Heath said. “We had just gotten back. She’s tanned. I’m tanned. I mean, really tanned, which you didn’t see a lot of in New Haven in those days.”

“I’m an architecture nut,” Anna said. “And I said I wanted to see the inside of this building near the campus. It was always closed. But Dwight says, ‘You never know,’ so he walked over and pulls on the door and it opens.” Anna looked over at her husband.

“So we go in,” Dwight went on, “and there was a couple of little white-haired guys there. And they said, ‘You’re tanned. Where have you been?’ And I said Bolivia. And one of them said, ‘Well, can you tell me how they drink?’ ” The building was Yale’s Center of Alcohol Studies. One of the white-haired men was E. M. Jellinek, perhaps the world’s leading expert on alcoholism at the time; the other was Mark Keller, the editor of the well-regarded Quarterly Journal of Studies on Alcohol. Keller stood up and grabbed Heath by the lapels: “I don’t know anyone who has ever been to Bolivia. Tell me about it!” He invited Heath to write up his alcohol-related observations for his journal.

After the Heaths went home that day, Anna said to Dwight, “Do you realize that every weekend we were in Bolivia we went out drinking?” The code he used for alcohol in his notebooks was 30A, and when he went over his notes he found 30A references everywhere. Still, nothing about the alcohol question struck him as particularly noteworthy. People drank every weekend in New Haven, too. His focus was on land reform. But who was he to say no to the Quarterly Journal of Studies on Alcohol? So he sat down and wrote up what he knew. Only after his article, “Drinking Patterns of the Bolivian Camba,” was published, in September of 1958, and the queries and reprint requests began flooding in from around the world, did he realize what he had found. “This is so often true in anthropology,” Anna said. “It is not anthropologists who recognize the value of what they’ve done. It’s everyone else. The anthropologist is just reporting.”

2.

The abuse of alcohol has, historically, been thought of as a moral failing. Muslims and Mormons and many kinds of fundamentalist Christians do not drink, because they consider alcohol an invitation to weakness and sin. Around the middle of the last century, alcoholism began to be widely considered a disease: it was recognized that some proportion of the population was genetically susceptible to the effects of drinking. Policymakers, meanwhile, have become increasingly interested in using economic and legal tools to control alcohol-related behavior: that’s why the drinking age has been raised from eighteen to twenty-one, why drunk-driving laws have been toughened, and why alcohol is taxed heavily. Today, our approach to the social burden of alcohol is best described as a mixture of all three: we moralize, medicalize, and legalize.

In the nineteen-fifties, however, the researchers at the Yale Center of Alcohol Studies found something lacking in this emerging approach, and the reason had to do with what they observed right in their own town. New Haven was a city of immigrants—Jewish, Irish, and, most of all, Italian. Recent Italian immigrants made up about a third of the population, and whenever the Yale researchers went into the Italian neighborhoods they found an astonishing thirst for alcohol. The overwhelming majority of Italian-American men in New Haven drank. A group led by the director of the Yale alcohol-treatment clinic, Giorgio Lolli, once interviewed a sixty-one-year-old father of four who consumed more than three thousand calories a day of food and beverages—of which a third was wine. “He usually has an 8-oz. glass of wine immediately following his breakfast every morning,” Lolli and his colleagues wrote. “He always takes wine with his noonday lunch—as much as 24 oz.” But he didn’t display the pathologies that typically accompany that kind of alcohol consumption. The man was successfully employed, and had been drunk only twice in his life. He was, Lolli concluded, “a healthy, happy individual who has made a satisfactory adjustment to life.”

By the late fifties, Lolli’s clinic had admitted twelve hundred alcoholics. Plenty of them were Irish. But just forty were Italians (all of whom were second- or third-generation immigrants). New Haven was a natural experiment. Here were two groups who practiced the same religion, who were subject to the same laws and constraints, and who, it seemed reasonable to suppose, should have the same assortment within their community of those genetically predisposed to alcoholism. Yet the heavy-drinking Italians had nothing like the problems that afflicted their Irish counterparts.

“That drinking must precede alcoholism is obvious,” Mark Keller once wrote. “Equally obvious, but not always sufficiently considered, is the fact that drinking is not necessarily followed by alcoholism.” This was the puzzle of New Haven, and why Keller demanded of Dwight Heath, that day on the Yale campus, Tell me how the Camba drink. The crucial ingredient, in Keller’s eyes, had to be cultural.

The Heaths had been invited to a party soon after arriving in Montero, and every weekend and holiday thereafter. It was their Coleman lantern. “Whatever the occasion, it didn’t matter,” Anna recalled. “As long as the party was at night, we were first on the list.”

The parties would have been more aptly described as drinking parties. The host would buy the first bottle and issue the invitations. A dozen or so people would show up on Saturday night, and the party would proceed—often until everyone went back to work on Monday morning. The composition of the group was informal: sometimes people passing by would be invited. But the structure of the party was heavily ritualized. The group would sit in a circle. Someone might play the drums or a guitar. A bottle of rum, from one of the sugar refineries in the area, and a small drinking glass were placed on a table. The host stood, filled the glass with rum, and then walked toward someone in the circle. He stood before the “toastee,” nodded, and raised the glass. The toastee smiled and nodded in return. The host then drank half the glass and handed it to the toastee, who would finish it. The toastee eventually stood, refilled the glass, and repeated the ritual with someone else in the circle. When people got too tired or too drunk, they curled up on the ground and passed out, rejoining the party when they awoke. The Camba did not drink alone. They did not drink on work nights. And they drank only within the structure of this elaborate ritual.

“The alcohol they drank was awful,” Anna recalled. “Literally, your eyes poured tears. The first time I had it, I thought, I wonder what will happen if I just vomit in the middle of the floor. Not even the Camba said they liked it. They say it tastes bad. It burns. The next day they are sweating this stuff. You can smell it.” But the Heaths gamely persevered. “The anthropology graduate student in the nineteen-fifties felt that he had to adapt,” Dwight Heath said. “You don’t want to offend anyone, you don’t want to decline anything. I gritted my teeth and accepted those drinks.”

“We didn’t get drunk that much,” Anna went on, “because we didn’t get toasted as much as the other folks around. We were strangers. But one night there was this really big party—sixty to eighty people. They’d drink. Then pass out. Then wake up and party for a while. And I found, in their drinking patterns, that I could turn my drink over to Dwight. The husband is obliged to drink for his wife. And Dwight is holding the Coleman lantern with his arm wrapped around it, and I said, ‘Dwight, you are burning your arm.’ ” She mimed her husband peeling his forearm off the hot surface of the lantern. “And he said—very deliberately—‘So I am.’ ”

When the Heaths came back to New Haven, they had a bottle of the Camba’s rum analyzed and learned that it was a hundred and eighty proof. It was laboratory alcohol—the concentration that scientists use to fix tissue. No one had ever heard of anyone drinking it. This was the first of the astonishing findings of the Heaths’ research—and, predictably, no one believed it at first.

“One of the world’s leading physiologists of alcohol was at the Yale center,” Heath recalled. “His name was Leon Greenberg. He said to me, ‘Hey, you spin a good yarn. But you couldn’t really have drunk that stuff.’ And he needled me just enough that he knew he would get a response. So I said, ‘You want me to drink it? I have a bottle.’ So one Saturday I drank some under controlled conditions. He was taking blood samples every twenty minutes, and, sure enough, I did drink it, the way I said I’d drunk it.”

Greenberg had an ambulance ready to take Heath home. But Heath decided to walk. Anna was waiting up for him in the third-floor walkup they rented, in an old fraternity house. “I was hanging out the window waiting for him, and there’s the ambulance driving along the street, very slowly, and next to it is Dwight. He waves, and he looks fine. Then he walks up the three flights of stairs and says, ‘Ahh, I’m drunk,’ and falls flat on his face. He was out for three hours.”

The bigger surprise was what happened when the Camba drank. The Camba had weekly benders with laboratory-proof alcohol, and, Dwight Heath said, “There was no social pathology—none. No arguments, no disputes, no sexual aggression, no verbal aggression. There was pleasant conversation or silence.” On the Brown University campus, a few blocks away, beer—which is to Camba rum approximately what a peashooter is to a bazooka—was known to reduce the student population to a raging hormonal frenzy on Friday nights. “The drinking didn’t interfere with work,” Heath went on. “It didn’t bring in the police. And there was no alcoholism, either.”

3.

What Heath found among the Camba is hard to believe. We regard alcohol’s behavioral effects as inevitable. Alcohol disinhibits, we assume, as reliably as caffeine enlivens. It gradually unlocks the set of psychological constraints that keep our behavior in check, and makes us do things that we would not ordinarily do. It’s a drug, after all.

But, after Heath’s work on the Camba, anthropologists began to take note of all the puzzling ways in which alcohol wasn’t reliable in its effects. In the classic 1969 work “Drunken Comportment,” for example, the anthropologists Craig MacAndrew and Robert B. Edgerton describe an encounter that Edgerton had while studying a tribe in central Kenya. One of the tribesmen, he was told, was “very dangerous” and “totally beyond control” after he had been drinking, and one day Edgerton ran across the man:

I heard a commotion, and saw people running past me. One young man stopped and urged me to flee because this dangerous drunk was coming down the path attacking all whom he met. As I was about to take this advice and leave, the drunk burst wildly into the clearing where I was sitting. I stood up, ready to run, but much to my surprise, the man calmed down, and as he walked slowly past me, he greeted me in polite, even deferential terms, before he turned and dashed away. I later learned that in the course of his “drunken rage” that day he had beaten two men, pushed down a small boy, and eviscerated a goat with a large knife.

The authors include a similar case from Ralph Beals’s work among the Mixe Indians of Oaxaca, Mexico:

The Mixe indulge in frequent fist fights, especially while drunk. Although I probably saw several hundred, I saw no weapons used, although nearly all men carried machetes and many carried rifles. Most fights start with a drunken quarrel. When the pitch of voices reaches a certain point, everyone expects a fight. The men hold out their weapons to the onlookers, and then begin to fight with their fists, swinging wildly until one falls down [at which point] the victor helps his opponent to his feet and usually they embrace each other.

The angry Kenyan tribesman was disinhibited toward his own people but inhibited toward Edgerton. Alcohol turned the Mixe into aggressive street fighters, but they retained the presence of mind to “hold out their weapons to the onlookers.” Something that truly disinhibits ought to be indiscriminate in its effects. That’s not the picture of alcohol that these anthropologists have given us. (MacAndrew and Edgerton, in one of their book’s many wry asides, point out that we are all acquainted with people who can hold their liquor. “In the absence of anything observably untoward in such a one’s drunken comportment,” they ask, “are we seriously to presume that he is devoid of inhibitions?”)

Psychologists have encountered the same kinds of perplexities when they have set out to investigate the effects of drunkenness. One common belief is that alcohol causes “self-inflation”: it makes us see ourselves through rose-tinted glasses. Oddly, though, it doesn’t make us view everything about ourselves through rose-tinted glasses. When the psychologists Claude Steele and Mahzarin Banaji gave a group of people a personality questionnaire while they were sober and then again when they were drunk, they found that the only personality aspects that were inflated by drinking were those where there was a gap between real and ideal states. If you are good-looking and the world agrees that you are good-looking, drinking doesn’t make you think you’re even better-looking. Drinking only makes you feel you’re better-looking if you think you’re good-looking and the world doesn’t agree.

Alcohol is also commonly believed to reduce anxiety. That’s what a disinhibiting agent should do: relax us and make the world go away. Yet this effect also turns out to be selective. Put a stressed-out drinker in front of an exciting football game and he’ll forget his troubles. But put him in a quiet bar somewhere, all by himself, and he’ll grow more anxious.

Steele and his colleague Robert Josephs’s explanation is that we’ve misread the effects of alcohol on the brain. Its principal effect is to narrow our emotional and mental field of vision. It causes, they write, “a state of shortsightedness in which superficially understood, immediate aspects of experience have a disproportionate influence on behavior and emotion.”

Alcohol makes the thing in the foreground even more salient and the thing in the background disappear. That’s why drinking makes you think you are attractive when the world thinks otherwise: the alcohol removes the little constraining voice from the outside world that normally keeps our self-assessments in check. Drinking relaxes the man watching football because the game is front and center, and alcohol makes every secondary consideration fade away. But in a quiet bar his problems are front and center—and every potentially comforting or mitigating thought recedes. Drunkenness is not disinhibition. Drunkenness is myopia.

Myopia theory changes how we understand drunkenness. Disinhibition suggests that the drinker is increasingly insensitive to his environment—that he is in the grip of an autonomous physiological process. Myopia theory, on the contrary, says that the drinker is, in some respects, increasingly sensitive to his environment: he is at the mercy of whatever is in front of him.

A group of Canadian psychologists led by Tara MacDonald recently went into a series of bars and made the patrons read a short vignette. They had to imagine that they had met an attractive person at a bar, walked him or her home, and ended up in bed—only to discover that neither of them had a condom. The subjects were then asked to respond on a scale of one (very unlikely) to nine (very likely) to the proposition: “If I were in this situation, I would have sex.” You’d think that the subjects who had been drinking heavily would be more likely to say that they would have sex—and that’s exactly what happened. The drunk people came in at 5.36, on average, on the nine-point scale. The sober people came in at 3.91. The drinkers couldn’t sort through the long-term consequences of unprotected sex. But then MacDonald went back to the bars and stamped the hands of some of the patrons with the phrase “AIDS kills.” Drinkers with the hand stamp were slightly less likely than the sober people to want to have sex in that situation: they couldn’t sort through the kinds of rationalizations necessary to set aside the risk of AIDS. Where norms and standards are clear and consistent, the drinker can become more rule-bound than his sober counterpart.

In other words, the frat boys drinking in a bar on a Friday night don’t have to be loud and rowdy. They are responding to the signals sent by their immediate environment—by the pulsing music, by the crush of people, by the dimmed light, by the countless movies and television shows and general cultural expectations that say that young men in a bar with pulsing music on a Friday night have permission to be loud and rowdy. “Persons learn about drunkenness what their societies import to them, and comporting themselves in consonance with these understandings, they become living confirmations of their society’s teachings,” MacAndrew and Edgerton conclude. “Since societies, like individuals, get the sorts of drunken comportment that they allow, they deserve what they get.”

4.

This is what connects the examples of Montero and New Haven. On the face of it, the towns are at opposite ends of the spectrum. The Camba got drunk every weekend on laboratory-grade alcohol. The Italians drank wine, in civil amounts, every day. The Italian example is healthy and laudable. The Camba’s fiestas were excessive and surely took a long-term physical toll. But both communities understood the importance of rules and structure. Camba society, Dwight Heath says, was marked by a singular lack of “communal expression.” They were itinerant farmworkers. Kinship ties were weak. Their daily labor tended to be solitary and the hours long. There were few neighborhood or civic groups. Those weekly drinking parties were not chaotic revels; they were the heart of Camba community life. They had a function, and the elaborate rituals—one bottle at a time, the toasting, the sitting in a circle—served to give the Camba’s drinking a clear structure.

In the late nineteen-forties, Phyllis Williams and Robert Straus, two sociologists at Yale, selected ten first- and second-generation Italian-Americans from New Haven to keep diaries detailing their drinking behavior, and their entries show how well that community understood this lesson as well. Here is one of their subjects, Philomena Sappio, a forty-year-old hairdresser from an island in the Bay of Naples, describing what she drank one week in October of 1948:

Fri.—Today for dinner 4 oz. of wine [noon]. In the evening, I had fish with 8 oz. of wine [6 P.M.].

Sat.—Today I did not feel like drinking at all. Neither beer nor any other alcohol. I drank coffee and water.

Sun.—For dinner I made lasagna at noon, and had 8 oz. of wine. In the evening, I had company and took one glass of liqueur [1 oz. strega] with my company. For supper—I did not have supper because I wasn’t hungry.

Mon.—At dinner I drank coffee, at supper 6 oz. of wine [5 P.M.].

Tues.—At dinner, 4 oz. wine [noon]. One of my friends and her husband took me and my daughter out this evening in a restaurant for supper. We had a splendid supper. I drank 1 oz. of vermouth [5:30 P.M.] and 12 oz. of wine [6 P.M.].

Wed.—For dinner, 4 oz. of wine [noon] and for supper 6 oz. of wine [6 P.M.].

Thurs.—At noon, coffee and at supper, 6 oz. of wine [6 P.M.].

Fri.—Today at noon I drank orange juice; at supper in the evening [6 P.M.] 8 oz. of wine.

Sappio drinks almost every day, unless she isn’t feeling well. She almost always drinks wine. She drinks only at mealtimes. She rarely has more than a glass—except on a special occasion, as when she and her daughter are out with friends at a restaurant.

Here is another of Williams and Straus’s subjects—Carmine Trotta, aged sixty, born in a village outside Salerno, married to a girl from his village, father of three, proprietor of a small grocery store, resident of an exclusively Italian neighborhood:

Fri.—I do not generally eat anything for breakfast if I have a heavy supper the night before. I leave out eggnog and only take coffee with whisky because I like to have a little in the morning with coffee or with eggnog or a few crackers.

Mon.—When I drink whisky before going to bed I always put it in a glass of water. . . .

Wed.—Today is my day off from business, so I [drank] some beer because it was very hot. I never drink beer when I am working because I don’t like the smell of beer on my breath for my customers.

Thurs.—Every time that I buy a bottle of whisky I always divide same. One half at home and one half in my shop.

Sappio and Trotta do not drink for the same purpose as the Camba: for them, alcohol has no larger social or emotional reward. It’s food, consumed according to the same quotidian rhythms as pasta or cheese. But the content of the rules matters less than the fact of the rule, the existence of a drinking regimen that both encourages and constrains alcohol’s use. “I went to visit one of my friends this evening,” Sappio writes. “We saw television and she offered me 6 oz. of wine to drink, and it was good [9 P.M.].” She does not say that her friend put the bottle on the table or offered her a second glass. Evidently, the friend brought out one glass of wine for each of them, and they drank together, because one glass is what you had, in the Italian neighborhoods of New Haven, at 9 P.M. while watching television.

5.

Why can’t we all drink like the Italians of New Haven? The flood of immigrants who came to the United States in the nineteenth century brought with them a wealth of cultural models, some of which were clearly superior to the patterns of their new host—and, in a perfect world, the rest of us would have adopted the best ways of the newcomers. It hasn’t worked out that way, though. Americans did not learn to drink like Italians. On the contrary, when researchers followed up on Italian-Americans, they found that by the third and fourth generations they were, increasingly, drinking like everyone else.

There is something about the cultural dimension of social problems that eludes us. When confronted with the rowdy youth in the bar, we are happy to raise his drinking age, to tax his beer, to punish him if he drives under the influence, and to push him into treatment if his habit becomes an addiction. But we are reluctant to provide him with a positive and constructive example of how to drink. The consequences of that failure are considerable, because, in the end, culture is a more powerful tool in dealing with drinking than medicine, economics, or the law. For all we know, Philomena Sappio could have had within her genome a grave susceptibility to alcohol. Because she lived in the protective world of New Haven’s immigrant Italian community, however, it would never have become a problem. Today, she would be at the mercy of her own inherent weaknesses. Nowhere in the multitude of messages and signals sent by popular culture and social institutions about drinking is there any consensus about what drinking is supposed to mean.

“Mind if I vent for a while?” a woman asks her husband, in one popular—and depressingly typical—beer ad. He is sitting on the couch. She has just come home from work. He replies, “Mind? I’d prefer it!” And he jumps up, goes to the refrigerator, and retrieves two cans of Coors Light—a brand that comes with a special vent intended to make pouring the beer easier. “Let’s vent!” he cries out. She looks at him oddly: “What are you talking about?” “I’m talking about venting!” he replies, as she turns away in disgust. The voice-over intones, “The vented wide-mouthed can from Coors Light. It lets in air for a smooth, refreshing pour.” Even the Camba, for all their excesses, would never have been so foolish as to pretend that you could have a conversation about drinking and talk only about the can.

It was a dazzling feat of wartime espionage. But does it argue for or against spying?

1.

On April 30, 1943, a fisherman came across a badly decomposed corpse floating in the water off the coast of Huelva, in southwestern Spain. The body was of an adult male dressed in a trenchcoat, a uniform, and boots, with a black attaché case chained to his waist. His wallet identified him as Major William Martin, of the Royal Marines. The Spanish authorities called in the local British vice-consul, Francis Haselden, and in his presence opened the attaché case, revealing an official-looking military envelope. The Spaniards offered the case and its contents to Haselden. But Haselden declined, requesting that the handover go through formal channels—an odd decision, in retrospect, since, in the days that followed, British authorities in London sent a series of increasingly frantic messages to Spain asking the whereabouts of Major Martin’s briefcase.

It did not take long for word of the downed officer to make its way to German intelligence agents in the region. Spain was a neutral country, but much of its military was pro-German, and the Nazis found an officer in the Spanish general staff who was willing to help. A thin metal rod was inserted into the envelope; the documents were then wound around it and slid out through a gap, without disturbing the envelope’s seals. What the officer discovered was astounding. Major Martin was a courier, carrying a personal letter from Lieutenant General Archibald Nye, the vice-chief of the Imperial General Staff, in London, to General Harold Alexander, the senior British officer under Eisenhower in Tunisia. Nye’s letter spelled out what Allied intentions were in southern Europe. American and British forces planned to cross the Mediterranean from their positions in North Africa, and launch an attack on German-held Greece and Sardinia. Hitler transferred a Panzer division from France to the Peloponnese, in Greece, and the German military command sent an urgent message to the head of its forces in the region: “The measures to be taken in Sardinia and the Peloponnese have priority over any others.”

The Germans did not realize—until it was too late—that “William Martin” was a fiction. The man they took to be a high-level courier was a mentally ill vagrant who had eaten rat poison; his body had been liberated from a London morgue and dressed up in officer’s clothing. The letter was a fake, and the frantic messages between London and Madrid a carefully choreographed act. When a hundred and sixty thousand Allied troops invaded Sicily on July 10, 1943, it became clear that the Germans had fallen victim to one of the most remarkable deceptions in modern military history.

The story of Major William Martin is the subject of the British journalist Ben Macintyre’s brilliant and almost absurdly entertaining “Operation Mincemeat” (Harmony; $25.99). The cast of characters involved in Mincemeat, as the caper was called, was extraordinary, and Macintyre tells their stories with gusto. The ringleader was Ewen Montagu, the son of a wealthy Jewish banker and the brother of Ivor Montagu, a pioneer of table tennis and also, in one of the many strange footnotes to the Mincemeat case, a Soviet spy. Ewen Montagu served on the so-called Twenty Committee of the British intelligence services, and carried a briefcase full of classified documents on his bicycle as he rode to work each morning.

His partner in the endeavor was a gawky giant named Charles Cholmondeley, who lifted the toes of his size-12 feet when he walked, and, Macintyre writes, “gazed at the world through thick round spectacles, from behind a remarkable moustache fully six inches long and waxed into magnificent points.” The two men coördinated with Dudley Clarke, the head of deception for all the Mediterranean, whom Macintyre describes as “unmarried, nocturnal and allergic to children.” In 1925, Clarke organized a pageant “depicting imperial artillery down the ages, which involved two elephants, thirty-seven guns and ‘fourteen of the biggest Nigerians he could find.’ He loved uniforms, disguises and dressing up.” In 1941, British authorities had to bail him out of a Spanish jail, dressed in “high heels, lipstick, pearls, and a chic cloche hat, his hands, in long opera gloves, demurely folded in his lap. He was not supposed to even be in Spain, but in Egypt.” Macintyre, who has perfect pitch when it comes to matters of British eccentricity, reassures us, “It did his career no long-term damage.”

To fashion the container that would keep the corpse “fresh,” before it was dumped off the coast of Spain, Mincemeat’s planners turned to Charles Fraser-Smith, whom Ian Fleming is thought to have used as the model for Q in the James Bond novels. Fraser-Smith was the inventor of, among other things, garlic-flavored chocolate intended to render authentic the breath of agents dropping into France and “a compass hidden in a button that unscrewed clockwise, based on the impeccable theory that the ‘unswerving logic of the German mind’ would never guess that something might unscrew the wrong way.” The job of transporting the container to the submarine that would take it to Spain was entrusted to one of England’s leading race-car drivers, St. John (Jock) Horsfall, who, Macintyre notes, “was short-sighted and astigmatic but declined to wear spectacles.” At one point during the journey, Horsfall nearly drove into a tram stop, and then “failed to see a roundabout until too late and shot over the grass circle in the middle.”

Each stage of the deception had to be worked out in advance. Martin’s personal effects needed to be detailed enough to suggest that he was a real person, but not so detailed as to suggest that someone was trying to make him look like a real person. Cholmondeley and Montagu filled Martin’s pockets with odds and ends, including angry letters from creditors and a bill from his tailor. “Hour after hour, in the Admiralty basement, they discussed and refined this imaginary person, his likes and dislikes, his habits and hobbies, his talents and weaknesses,” Macintyre writes. “In the evening, they repaired to the Gargoyle Club, a glamorous Soho dive of which Montagu was a member, to continue the odd process of creating a man from scratch.” Francis Haselden, for his part, had to look as if he desperately wanted the briefcase back. But he couldn’t be too diligent, because he had to make sure that the Germans had a look at it first. “Here lay an additional, but crucial, consideration,” Macintyre goes on. “The Germans must be made to believe that they had gained access to the documents undetected; they should be made to assume that the British believed the Spaniards had returned the documents unopened and unread. Operation Mincemeat would only work if the Germans could be fooled into believing that the British had been fooled.” It was an impossibly complex scheme, dependent on all manner of unknowns and contingencies. What if whoever found the body didn’t notify the authorities? What if the authorities disposed of the matter so efficiently that the Germans never caught wind of it? What if the Germans saw through the ruse?

In mid-May of 1943, when Winston Churchill was in Washington, D.C., for the Trident conference, he received a telegram from the code breakers back home, who had been monitoring German military transmissions: “MINCEMEAT SWALLOWED ROD, LINE AND SINKER.” Macintyre’s “Operation Mincemeat” is part of a long line of books celebrating the cleverness of Britain’s spies during the Second World War. It is equally instructive, though, to think about Mincemeat from the perspective of the spies who found the documents and forwarded them to their superiors. The things that spies do can help win battles that might otherwise have been lost. But they can also help lose battles that might otherwise have been won.

2.

In early 1943, long before Major Martin’s body washed up onshore, the German military had begun to think hard about Allied intentions in southern Europe. The Allies had won control of North Africa from the Germans, and were clearly intending to cross the Mediterranean. But where would they attack? One school of thought said Sardinia. It was lightly defended and difficult to reinforce. The Allies could mount an invasion of the island relatively quickly. It would be ideal for bombing operations against southern Germany, and Italy’s industrial hub in the Po Valley, but it didn’t have sufficient harbors or beaches to allow for a large number of ground troops to land. Sicily did. It was also close enough to North Africa to be within striking distance of Allied short-range fighter planes, and a successful invasion of Sicily had the potential to knock the Italians out of the war.

Mussolini was in the Sicily camp, as was Field Marshal Kesselring, who headed up all German forces in the Mediterranean. Most of the Italian Commando Supremo, however, picked Sardinia, as did a number of senior officers in the German Navy and Air Force. Meanwhile, Hitler and the Oberkommando der Wehrmacht—the German armed-forces High Command—had a third candidate. They thought that the Allies were most likely to strike at Greece and the Balkans, given the Balkans’ crucial role in supplying the German war effort with raw materials such as oil, bauxite, and copper. And Greece was far more vulnerable to attack than Italy. As the historians Samuel Mitcham and Friedrich von Stauffenberg have pointed out, “in Greece all Axis reinforcements and supplies would have to be shipped over a single rail line of limited capacity, running for 1,300 kilometers (more than 800 miles) through an area vulnerable to air and partisan attack.”

All these assessments were strategic inferences from an analysis of known facts. But this kind of analysis couldn’t point to a specific target. It could only provide a range of probabilities. The intelligence provided by Major Martin’s documents was in a different category. It was marvellously specific. It said: Greece and Sardinia. But because that information washed up onshore, as opposed to being derived from the rational analysis of known facts, it was difficult to know whether it was true. As the political scientist Richard Betts has argued, in intelligence analysis there tends to be an inverse relationship between accuracy and significance, and this is the dilemma posed by the Mincemeat case.

As Macintyre observes, the informational supply chain that carried the Mincemeat documents from Huelva to Berlin was heavily corrupted. The first great enthusiast for the Mincemeat find was the head of German intelligence in Madrid, Major Karl-Erich Kühlenthal. He personally flew the documents to Berlin, along with a report testifying to their significance. But, as Macintyre writes, Kühlenthal was “a one-man espionage disaster area.” One of his prized assets was a Spaniard named Juan Pujol García, who was actually a double agent. When British code breakers looked at Kühlenthal’s messages to Berlin, they found that he routinely embellished and fictionalized his reports. According to Macintyre, Kühlenthal was “frantically eager to please, ready to pass on anything that might consolidate his position,” in part because he had some Jewish ancestry and was desperate not to be posted back to Germany.

When the documents arrived in Berlin, they were handed over to one of Hitler’s top intelligence analysts, a man named Alexis Baron von Roenne. Von Roenne vouched for their veracity as well. But in some respects von Roenne was even less reliable than Kühlenthal. He hated Hitler and seemed to have done everything in his power to sabotage the Nazi war effort. Before D Day, Macintyre writes, “he faithfully passed on every deception ruse fed to him, accepted the existence of every bogus unit regardless of evidence, and inflated forty-four divisions in Britain to an astonishing eighty-nine.” It is entirely possible, Macintyre suggests, that von Roenne “did not believe the Mincemeat deception for an instant.”

These are two fine examples of why the proprietary kind of information that spies purvey is so much riskier than the products of rational analysis. Rational inferences can be debated openly and widely. Secrets belong to a small assortment of individuals, and inevitably become hostage to private agendas. Kühlenthal was an advocate of the documents because he needed them to be true; von Roenne was an advocate of the documents because he suspected them to be false. In neither case did the audiences for their assessments have an inkling about their private motivations. As Harold Wilensky wrote in his classic work “Organizational Intelligence” (1967), “The more secrecy, the smaller the intelligent audience, the less systematic the distribution and indexing of research, the greater the anonymity of authorship, and the more intolerant the attitude toward deviant views.” Wilensky had the Bay of Pigs debacle in mind when he wrote that. But it could just as easily have applied to any number of instances since, including the private channels of “intelligence” used by members of the Bush Administration to convince themselves that Saddam Hussein had weapons of mass destruction.

It was the requirement of secrecy that also prevented the Germans from properly investigating the Mincemeat story. They had to make it look as if they had no knowledge of Martin’s documents. So their hands were tied. The dated papers in Martin’s pockets indicated that he had been in the water for barely five days. Had the Germans seen the body, though, they would have realized that it was far too decomposed to have been in the water for less than a week. And, had they talked to the Spanish coroner who examined Martin, they would have discovered that he had noticed various red flags. The doctor had seen the bodies of many drowned fishermen in his time, and invariably there were fish and crab bites on the ears and other appendages. In this case, there were none. Hair, after being submerged for a week, becomes brittle and dull. Martin’s hair was not. Nor did his clothes appear to have been in the water very long. But the Germans couldn’t talk to the coroner without blowing their cover. Secrecy stood in the way of accuracy.

3.

Suppose that Kühlenthal had not been so eager to please Berlin, and that von Roenne had not loathed Hitler, and suppose that the Germans had properly debriefed the coroner and uncovered all the holes in the Mincemeat story. Would they then have seen through the British deception? Maybe so. Or maybe they would have found the flaws in Mincemeat a little too obvious, and concluded that the British were trying to deceive Germany into thinking that they were trying to deceive Germany into thinking that Greece and Sardinia were the real targets—in order to mask the fact that Greece and Sardinia were the real targets.

This is the second, and more serious, of the problems that surround the products of espionage. It is not just that secrets themselves are hard to fact-check; it’s that their interpretation is inherently ambiguous. Any party to an intelligence transaction is trapped in what the sociologist Erving Goffman called an “expression game.” I’m trying to fool you. You realize that I’m trying to fool you, and I—realizing that—try to fool you into thinking that I don’t realize that you have realized that I am trying to fool you. Goffman argues that at each turn in the game the parties seek out more and more specific and reliable cues to the other’s intentions. But that search for specificity and reliability only makes the problem worse. As Goffman writes in his 1969 book “Strategic Interaction”:

The more the observer relies on seeking out foolproof cues, the more vulnerable he should appreciate he has become to the exploitation of his efforts. For, after all, the most reliance-inspiring conduct on the subject’s part is exactly the conduct that it would be most advantageous for him to fake if he wanted to hoodwink the observer. The very fact that the observer finds himself looking to a particular bit of evidence as an incorruptible check on what is or might be corrupted is the very reason why he should be suspicious of this evidence; for the best evidence for him is also the best evidence for the subject to tamper with.

Macintyre argues that one of the reasons the Germans fell so hard for the Mincemeat ruse is that they really had to struggle to gain access to the documents. They tried—and failed—to find a Spanish accomplice when the briefcase was still in Huelva. A week passed, and the Germans grew more and more anxious. The briefcase was transferred to the Spanish Admiralty, in Madrid, where the Germans redoubled their efforts. Their assumption, Macintyre says, was that if Martin was a plant the British would have made their task much easier. But Goffman’s argument reminds us that the opposite is equally plausible. Knowing that a struggle would be a sign of authenticity, the Germans could just as easily have expected the British to provide one.

The absurdity of such expression games has been wittily explored in the spy novels of Robert Littell and, with particular brio, in Peter Ustinov’s 1956 play, “Romanoff and Juliet.” In the latter, a crafty general is the head of a tiny European country being squabbled over by the United States and the Soviet Union, and is determined to play one off against the other. He tells the U.S. Ambassador that the Soviets have broken the Americans’ secret code. “We know they know our code,” the Ambassador, Moulsworth, replies, beaming. “We only give them things we want them to know.” The general pauses, during which, the play’s stage directions say, “he tries to make head or tail of this intelligence.” Then he crosses the street to the Russian Embassy, where he tells the Soviet Ambassador, Romanoff, “They know you know their code.” Romanoff is unfazed: “We have known for some time that they knew we knew their code. We have acted accordingly—by pretending to be duped.” The general returns to the American Embassy and confronts Moulsworth: “They know you know they know you know.” Moulsworth (genuinely alarmed): “What? Are you sure?”

The genius of that parody is the final line, because spymasters have always prided themselves on knowing where they are on the “I-know-they-know-I-know-they-know” regress. Just before the Allied invasion of Sicily, a British officer, Colonel Knox, left a classified cable concerning the invasion plans on the terrace of Shepheard’s Hotel, in Cairo—and no one could find it for two days. “Dudley Clarke was confident, however, that if it had fallen into enemy hands through such an obvious and ‘gross breach of security’ then it would probably be dismissed as a plant, pointing to Sicily as the cover target in accordance with Mincemeat,” Macintyre writes. “He concluded that ‘Colonel Knox may well have assisted rather than hindered us.’ ” In the face of a serious security breach, that’s what a counter-intelligence officer would say. But, of course, there is no way for him to know how the Germans would choose to interpret that discovery—and no way for the Germans to know how to interpret that discovery, either.

At one point, the British discovered that a French officer in Algiers was spying for the Germans. They “turned” him, keeping him in place but feeding him a steady diet of false and misleading information. Then, before D Day—when the Allies were desperate to convince Germany that they would be invading the Calais sector in July—they used the French officer to tell the Germans that the real invasion would be in Normandy on June 5th, 6th, or 7th. The British theory was that using someone the Germans strongly suspected was a double agent to tell the truth was preferable to using someone the Germans didn’t realize was a double agent to tell a lie. Or perhaps there wasn’t any theory at all. Perhaps the spy game has such an inherent opacity that it doesn’t really matter what you tell your enemy so long as your enemy is aware that you are trying to tell him something.

At around the time that Montagu and Cholmondeley were cooking up Operation Mincemeat, the personal valet of the British Ambassador to Turkey approached the German Embassy in Ankara with what he said were photographed copies of his boss’s confidential papers. The valet’s name was Elyesa Bazna. The Germans called him Cicero, and in this case they performed due diligence. Intelligence that came in over the transom was always considered less trustworthy than the intelligence gathered formally, so Berlin pressed its agents in Ankara for more details. Who was Bazna? What was his background? What was his motivation?

“Given the extraordinary ease with which seemingly valuable documents were being obtained, however, there was widespread worry that the enemy had mounted some purposeful deception,” Richard Wires writes, in “The Cicero Spy Affair: German Access to British Secrets in World War II” (1999). Bazna was, for instance, highly adept with a camera, in a way that suggested professional training or some kind of assistance. Bazna claimed that he didn’t use a tripod but simply held each paper under a light with one hand and took the picture with the other. So why were the photographs so clear? Berlin sent a photography expert to investigate. The Germans tried to figure out how much English he knew—which would reveal whether he could read the documents he was photographing or was just being fed them. In the end, many German intelligence officials thought that Cicero was the real thing. But Joachim von Ribbentrop, the Foreign Minister, remained wary—and his doubts and political infighting among the German intelligence agencies meant that little of the intelligence provided by Cicero was ever acted upon.

Cicero, it turned out, was the real thing. At least, we think he was the real thing. The Americans had a spy in the German Embassy in Turkey who learned that a servant was spying in the British Embassy. She told her bosses, who told the British. Just before his death, Stewart Menzies, the head of the British Secret Intelligence Service during the war, told an interviewer, “Of course, Cicero was under our control,” meaning that the minute they learned about Cicero they began feeding him false documents. Menzies, it should be pointed out, was a man who spent much of his professional career deceiving other people, and if you had been the wartime head of M.I.6, giving an interview shortly before your death, you probably would say that Cicero was one of yours. Or perhaps, in interviews given shortly before death, people are finally free to tell the truth. Who knows?

In the case of Operation Mincemeat, Germany’s spies told their superiors that something false was actually true (even though, secretly, some of those spies might have known better), and Germany acted on it. In the case of Cicero, Germany’s spies told their superiors that something was true that may indeed have been true, though maybe wasn’t, or maybe was true for a while and not true for a while, depending on whether you believe the word of someone two decades after the war was over—and in this case Germany didn’t really act on it at all. Looking at that track record, you have to wonder if Germany would have been better off not having any spies at all.

4.

The idea for Operation Mincemeat, Macintyre tells us, had its roots in a mystery story written by Basil Thomson, a former head of Scotland Yard’s criminal-investigation unit. Thomson was the author of a dozen detective stories, and his 1937 book “The Milliner’s Hat Mystery” begins with the body of a dead man carrying a set of documents that turn out to be forged. “The Milliner’s Hat Mystery” was read by Ian Fleming, who worked for naval intelligence. Fleming helped create something called the Trout Memo, which contained a series of proposals for deceiving the Germans, including this idea of a dead man carrying forged documents. The memo was passed on to John Masterman, the head of the Twenty Committee—of which Montagu and Cholmondeley were members. Masterman, who also wrote mysteries on the side, starring an Oxford don and a Sherlock Holmes-like figure, loved the idea. Mincemeat, Macintyre writes, “began as fiction, a plot twist in a long-forgotten novel, picked up by another novelist, and approved by a committee presided over by yet another novelist.”

Then, there was the British naval attaché in Madrid, Alan Hillgarth, who stage-managed Mincemeat’s reception in Spain. He was a “spy, former gold prospector, and, perhaps inevitably, successful novelist,” Macintyre writes. “In his six novels, Alan Hillgarth hankered for a lost age of personal valor, chivalry, and self-reliance.” Unaccountably, neither Montagu nor Cholmondeley seems to have written mysteries of his own. But, then again, they had Mincemeat. “As if constructing a character in a novel, Montagu and Cholmondeley . . . set about creating a personality with which to clothe their dead body,” Macintyre observes. Martin didn’t have to have a fiancée. But, in a good spy thriller, the hero always has a beautiful lover. So they found a stunning young woman, Jean Leslie, to serve as Martin’s betrothed, and Montagu flirted with her shamelessly, as if standing in for his fictional creation. They put love letters from her among his personal effects. “Don’t please let them send you off into the blue the horrible way they do nowadays,” she wrote to her fiancé. “Now that we’ve found each other out of the whole world, I don’t think I could bear it.”

The British spymasters saw themselves as the authors of a mystery story, because it gave them the self-affirming sense that they were in full command of the narratives they were creating. They were not, of course. They were simply lucky that von Roenne and Kühlenthal had private agendas aligned with the Allied cause. The intelligence historian Ralph Bennett writes that one of the central principles of Dudley Clarke (he of the cross-dressing, the elephants, and the fourteen Nigerian giants) was that “deception could only be successful to the extent to which it played on existing hopes and fears.” That’s why the British chose to convince Hitler that the Allied focus was on Greece and the Balkans—Hitler, they knew, believed that the Allied focus was on Greece and the Balkans. But we are, at this point, reduced to a logical merry-go-round: Mincemeat fed Hitler what he already believed, and was judged by its authors to be a success because Hitler continued to believe what he already believed. How do we know the Germans wouldn’t have moved that Panzer division to the Peloponnese anyway? Bennett is more honest: “Even had there been no deception, [the Germans] would have taken precautions in the Balkans.” Bennett also points out that what the Germans truly feared, in the summer of 1943, was that the Italians would drop out of the Axis alliance. Soldiers washing up on beaches were of little account next to the broader strategic considerations of the southern Mediterranean. Mincemeat or no Mincemeat, Bennett writes, the Germans “would probably have refused to commit more troops to Sicily in support of the Italian Sixth Army lest they be lost in the aftermath of an Italian defection.” Perhaps the real genius of spymasters is found not in the stories they tell their enemies during the war but in the stories they tell in their memoirs once the war is over.

5.

It is helpful to compare the British spymasters’ attitudes toward deception with that of their postwar American counterpart James Jesus Angleton. Angleton was in London during the nineteen-forties, apprenticing with the same group that masterminded gambits such as Mincemeat. He then returned to Washington and rose to head the C.I.A.’s counter-intelligence division throughout the Cold War.

Angleton did not write detective stories. His nickname was the Poet. He corresponded with the likes of Ezra Pound, E. E. Cummings, T. S. Eliot, Archibald MacLeish, and William Carlos Williams, and he championed William Empson’s “Seven Types of Ambiguity.” He co-founded a literary journal at Yale called Furioso. What he brought to spycraft was the intellectual model of the New Criticism, which, as one contributor to Furioso put it, was propelled by “the discovery that it is possible and proper for a poet to mean two differing or even opposing things at the same time.” Angleton saw twists and turns where others saw only straight lines. To him, the spy game was not a story that marched to a predetermined conclusion. It was, in a phrase of Eliot’s that he loved to use, “a wilderness of mirrors.”

Angleton had a point. The deceptions of the intelligence world are not conventional mystery narratives that unfold at the discretion of the narrator. They are poems, capable of multiple interpretations. Kühlenthal and von Roenne, Mincemeat’s audience, contributed as much to the plan’s success as Mincemeat’s authors. A body that washes up onshore is either the real thing or a plant. The story told by the ambassador’s valet is either true or too good to be true. Mincemeat seems extraordinary proof of the cleverness of the British Secret Intelligence Service, until you remember that just a few years later the Secret Intelligence Service was staggered by the discovery that one of its most senior officials, Kim Philby, had been a Soviet spy for years. The deceivers ended up as the deceived.

But, if you cannot know what is true and what is not, how on earth do you run a spy agency? In the nineteen-sixties, Angleton turned the C.I.A. upside down in search of K.G.B. moles that he was sure were there. As a result of his mole hunt, the agency was paralyzed at the height of the Cold War. American intelligence officers who were entirely innocent were subjected to unfair accusations and scrutiny. By the end, Angleton himself came under suspicion of being a Soviet mole, on the ground that the damage he inflicted on the C.I.A. in the pursuit of his imagined Soviet moles was the sort of damage that a real mole would have sought to inflict on the C.I.A. in the pursuit of Soviet interests.

“The remedy he had proposed in 1954 was for the CIA to have what would amount to two separate mind-sets,” Edward Jay Epstein writes of Angleton, in his 1989 book “Deception.” “His counterintelligence staff would provide the alternative view of the picture. Whereas the Soviet division might see a Soviet diplomat as a possible CIA mole, the counterintelligence staff would view him as a possible disinformation agent. What division case officers would tend to look at as valid information, furnished by Soviet sources who risked their lives to cooperate with them, counterintelligence officers tended to question as disinformation, provided by KGB-controlled sources. This was, as Angleton put it, ‘a necessary duality.’”

Translation: the proper function of spies is to remind those who rely on spies that the kinds of thing found out by spies can’t be trusted. If this sounds like a lot of trouble, there’s a simpler alternative. The next time a briefcase washes up onshore, don’t open it.

5.

It is helpful to compare the British spymasters’ attitudes toward deception with those of their postwar American counterpart James Jesus Angleton. Angleton was in London during the nineteen-forties, apprenticing with the same group that masterminded gambits such as Mincemeat. He then returned to Washington and rose to head the C.I.A.’s counter-intelligence division for much of the Cold War.

Angleton did not write detective stories. His nickname was the Poet. He corresponded with the likes of Ezra Pound, E. E. Cummings, T. S. Eliot, Archibald MacLeish, and William Carlos Williams, and he championed William Empson’s “Seven Types of Ambiguity.” He co-founded a literary journal at Yale called Furioso. What he brought to spycraft was the intellectual model of the New Criticism, which, as one contributor to Furioso put it, was propelled by “the discovery that it is possible and proper for a poet to mean two differing or even opposing things at the same time.” Angleton saw twists and turns where others saw only straight lines. To him, the spy game was not a story that marched to a predetermined conclusion. It was, in a phrase of Eliot’s that he loved to use, “a wilderness of mirrors.”

Angleton had a point. The deceptions of the intelligence world are not conventional mystery narratives that unfold at the discretion of the narrator. They are poems, capable of multiple interpretations. Kühlenthal and von Roenne, Mincemeat’s audience, contributed as much to the plan’s success as Mincemeat’s authors. A body that washes up onshore is either the real thing or a plant. The story told by the ambassador’s valet is either true or too good to be true. Mincemeat seems extraordinary proof of the cleverness of the British Secret Intelligence Service, until you remember that just a few years later the Secret Intelligence Service was staggered by the discovery that one of its most senior officials, Kim Philby, had been a Soviet spy for years. The deceivers ended up as the deceived.

But, if you cannot know what is true and what is not, how on earth do you run a spy agency? In the nineteen-sixties, Angleton turned the C.I.A. upside down in search of K.G.B. moles that he was sure were there. As a result of his mole hunt, the agency was paralyzed at the height of the Cold War. American intelligence officers who were entirely innocent were subjected to unfair accusations and scrutiny. By the end, Angleton himself came under suspicion of being a Soviet mole, on the ground that the damage he inflicted on the C.I.A. in the pursuit of his imagined Soviet moles was the sort of damage that a real mole would have sought to inflict on the C.I.A. in the pursuit of Soviet interests.

“The remedy he had proposed in 1954 was for the CIA to have what would amount to two separate mind-sets,” Edward Jay Epstein writes of Angleton, in his 1989 book “Deception.” “His counterintelligence staff would provide the alternative view of the picture. Whereas the Soviet division might see a Soviet diplomat as a possible CIA mole, the counterintelligence staff would view him as a possible disinformation agent. What division case officers would tend to look at as valid information, furnished by Soviet sources who risked their lives to cooperate with them, counterintelligence officers tended to question as disinformation, provided by KGB-controlled sources. This was, as Angleton put it, ‘a necessary duality.’ ”

Translation: the proper function of spies is to remind those who rely on spies that the kinds of thing found out by spies can’t be trusted. If this sounds like a lot of trouble, there’s a simpler alternative. The next time a briefcase washes up onshore, don’t open it.
Why is it so difficult to develop drugs for cancer?

1.

In the world of cancer research, there is something called a Kaplan-Meier curve, which tracks the health of patients in the trial of an experimental drug. In its simplest version, it consists of two lines. The first follows the patients in the “control arm,” the second the patients in the “treatment arm.” In most cases, those two lines are virtually identical. That is the sad fact of cancer research: nine times out of ten, there is no difference in survival between those who were given the new drug and those who were not. But every now and again—after millions of dollars have been spent, and tens of thousands of pages of data collected, and patients followed, and toxicological issues examined, and safety issues resolved, and manufacturing processes fine-tuned—the patients in the treatment arm will live longer than the patients in the control arm, and the two lines on the Kaplan-Meier will start to diverge.
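For the statistically minded, the two lines are plots of what is known as the Kaplan-Meier estimator of survival. A minimal sketch of the standard formula, in LaTeX notation—this is textbook material, not anything specific to the trials described in this article:

\[
\hat{S}(t) = \prod_{i\,:\,t_i \le t} \Bigl(1 - \frac{d_i}{n_i}\Bigr)
\]

Here the $t_i$ are the times at which deaths were observed, $d_i$ is the number of deaths at time $t_i$, and $n_i$ is the number of patients still at risk just before $t_i$. Each arm of the trial gets its own curve, and the daylight researchers hope for is a persistent gap between the two.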

Seven years ago, for example, a team from Genentech presented the results of a colorectal-cancer drug trial at the annual meeting of the American Society of Clinical Oncology—a conference attended by virtually every major cancer researcher in the world. The lead Genentech researcher took the audience through one slide after another—click, click, click—laying out the design and scope of the study, until he came to the crucial moment: the Kaplan-Meier. At that point, what he said became irrelevant. The members of the audience saw daylight between the two lines, for a patient population in which that almost never happened, and they leaped to their feet and gave him an ovation. Every drug researcher in the world dreams of standing in front of thousands of people at ASCO and clicking on a Kaplan-Meier like that. “It is why we are in this business,” Safi Bahcall says. Once, he thought that this dream would come true for him. It was in the late summer of 2006, and is among the greatest moments of his life.

Bahcall is the C.E.O. of Synta Pharmaceuticals, a small biotechnology company. It occupies a one-story brick nineteen-seventies building outside Boston, just off Route 128, where many of the region’s high-tech companies have congregated, and that summer Synta had two compounds in development. One was a cancer drug called elesclomol. The other was an immune modulator called apilimod. Experimental drugs must pass through three phases of testing before they can be considered for government approval. Phase 1 is a small trial to determine at what dose the drug can be taken safely. Phase 2 is a larger trial to figure out if it has therapeutic potential, and Phase 3 is a definitive trial to see if it actually works, usually in comparison with standard treatments. Elesclomol had progressed to Phase 2 for soft-tissue sarcomas and for lung cancer, and had come up short in both cases. A Phase 2 trial for metastatic melanoma—a deadly form of skin cancer—was also under way. But that was a long shot: nothing ever worked well for melanoma. In the previous thirty-five years, there had been something like seventy large-scale Phase 2 trials for metastatic-melanoma drugs, and if you plotted all the results on a single Kaplan-Meier there wouldn’t be much more than a razor’s edge of difference between any two of the lines.

That left apilimod. In animal studies and early clinical trials for autoimmune disorders, it seemed promising. But when Synta went to Phase 2 with a trial for psoriasis, the results were underwhelming. “It was ugly,” Bahcall says. “We had lung cancer fail, sarcoma next, and then psoriasis. We had one more trial left, which was for Crohn’s disease. I remember my biostats guy coming into my office, saying, ‘I’ve got some good news and some bad news. The good news is that apilimod is safe. We have the data. No toxicity. The bad news is that it’s not effective.’ It was heartbreaking.”

Bahcall is a boyish man in his early forties, with a round face and dark, curly hair. He was sitting at the dining-room table in his sparsely furnished apartment in Manhattan, overlooking the Hudson River. Behind him, a bicycle was leaning against a bare wall, giving the room a post-college feel. Both his parents were astrophysicists, and he, too, was trained as a physicist, before leaving academia for the business world. He grew up in the realm of the abstract and the theoretical—with theorems and calculations and precise measurements. But drug development was different, and when he spoke about the failure of apilimod there was a slight catch in his voice.

Bahcall started to talk about one of the first patients ever treated with elesclomol: a twenty-four-year-old African-American man. He’d had Kaposi’s sarcoma; tumors covered his lower torso. He’d been at Beth Israel Deaconess Medical Center, in Boston, and Bahcall had flown up to see him. On a Monday in January of 2003, Bahcall sat by his bed and they talked. The patient was just out of college. He had an I.V. in his arm. You went to the hospital and you sat next to some kid whose only wish was not to die, and it was impossible not to get emotionally involved. In physics, failure was disappointing. In drug development, failure was heartbreaking. Elesclomol wasn’t much help against Kaposi’s sarcoma. And now apilimod didn’t work for Crohn’s. “I mean, we’d done charity work for the Crohn’s & Colitis Foundation,” Bahcall went on. “I have relatives and friends with Crohn’s disease, personal experience with Crohn’s disease. We had Crohn’s patients come in and talk in meetings and tell their stories. We’d raised money for five years from investors. I felt terrible. Here we were with our lead drug and it had failed. It was the end of the line.”

That summer of 2006, in one painful meeting after another, Synta began to downsize. “It was a Wednesday,” Bahcall said. “We were around a table, and we were talking about pruning the budget and how we’re going to contain costs, one in a series of tough discussions, and I noticed my chief medical officer, Eric Jacobson, at the end of the table, kind of looking a little unusually perky for one of those kinds of discussions.” After the meeting, Bahcall pulled Jacobson over: “Is something up?” Jacobson nodded. Half an hour before the meeting, he’d received some news. It was about the melanoma trial for elesclomol, the study everyone had given up on. “The consultant said she had never seen data this good,” Jacobson told him.

Bahcall called back the management team for a special meeting. He gave the floor to Jacobson. “Eric was, like, ‘Well, you know we’ve got this melanoma trial,’ ” Bahcall began, “and it took a moment to jog people’s memories, because we’d all been so focussed on Crohn’s disease and the psoriasis trials. And Eric said, ‘Well, we got the results. The drug worked! It was a positive trial!’ ” One person slammed the table, stood up, and hollered. Others peppered Eric with questions. “Eric said, ‘Well, the group analyzing the data is trying to disprove it, and they can’t disprove it.’ And he said, ‘The consultant handed me the data on Wednesday morning, and she said it was boinking good.’ And everyone said, ‘What?’ Because Eric is the sweetest guy, who never swears. A bad word cannot cross his lips. Everyone started yelling, ‘What? What? What did she say, Eric? Eric! Eric! Say it! Say it!’ ”

Bahcall contacted Synta’s board of directors. Two days later, he sent out a company-wide e-mail saying that there would be a meeting that afternoon. At four o’clock, all hundred and thirty employees trooped into the building’s lobby. Jacobson stood up. “So the lights go down,” Bahcall continued. “Clinical guys, when they present data, tend to do it in a very bottoms-up way: this is the disease population, this is the treatment, and this is the drug, and this is what was randomized, and this is the demographic, and this is the patient pool, and this is who had toenail fungus, and this is who was Jewish. They go on and on and on, and all anyone wants is, Show us the fucking Kaplan-Meier! Finally he said, ‘All right, now we can get to the efficacy.’ It gets really silent in the room. He clicks the slide. The two lines separate out beautifully—and a gasp goes out, across a hundred and thirty people. Eric starts to continue, and one person goes like this”—Bahcall started clapping slowly—”and then a couple of people joined in, and then soon the whole room is just going like this—clap, clap, clap. There were tears. We all realized that our lives had changed, the lives of patients had changed, the way of treating the disease had changed. In that moment, everyone realized that this little company of a hundred and thirty people had a chance to win. We had a drug that worked, in a disease where nothing worked. That was the single most moving five minutes of all my years at Synta.”

2.

In the winter of 1955, a young doctor named Emil Freireich arrived at the National Cancer Institute, in Bethesda, Maryland. He had been drafted into the Army, and had been sent to fulfill his military obligation in the public-health service. He went to see Gordon Zubrod, then the clinical director for the N.C.I. and later one of the major figures in cancer research. “I said, ‘I’m a hematologist,’ ” Freireich recalls. “He said, ‘I’ve got a good idea for you. Cure leukemia.’ It was a military assignment.” From that assignment came the first great breakthrough in the war against cancer.

Freireich’s focus was on the commonest form of childhood leukemia—acute lymphoblastic leukemia (ALL). The diagnosis was a death sentence. “The children would come in bleeding,” Freireich says. “They’d have infections. They would be in pain. Median survival was about eight weeks, and everyone was dead within the year.” At the time, three drugs were known to be useful against ALL. One was methotrexate, which, the pediatric pathologist Sidney Farber had shown seven years earlier, could push the disease into remission. Corticosteroids and 6-mercaptopurine (6-MP) had since proved useful. But even though methotrexate and 6-MP could kill a lot of cancer cells, they couldn’t kill them all, and those which survived would regroup and adjust and multiply and return with a vengeance. “These remissions were all temporary—two or three months,” Freireich, who now directs the adult-leukemia research program at the M. D. Anderson Cancer Center, in Houston, says. “The authorities in hematology didn’t even want to use them in children. They felt it just prolonged the agony, made them suffer, and gave them side effects. That was the landscape.”

In those years, the medical world had made great strides against tuberculosis, and treating T.B. ran into the same problem as treating cancer: if doctors went after it with one drug, the bacteria eventually developed resistance. The solution was to use several drugs simultaneously, each of which attacked the bacteria in a different way. Freireich wondered about applying that model to leukemia. Methotrexate worked by disrupting folic-acid uptake, which was crucial in the division of cells; 6-MP shut down the synthesis of purine, which was also critical in cell division. Putting the two together would be like hitting the cancer with a left hook and a right hook. Working with a group that eventually included Tom Frei, of the N.C.I., and James Holland, of the Roswell Park Cancer Institute, in Buffalo, Freireich started treating ALL patients with methotrexate and 6-MP in combination, each at two-thirds its regular dose to keep side effects in check. The remissions grew more frequent. Freireich then added the steroid prednisone, which worked by a mechanism different from that of either 6-MP or methotrexate; he could give it at full dose and not worry about the side effects getting out of control. Now he had a left hook, a right hook, and an uppercut.

“So things are looking good,” Freireich went on. “But still everyone dies. The remissions are short. And then out of the blue came the gift from Heaven”—another drug, derived from periwinkle, that had been discovered by Irving Johnson, a researcher at Eli Lilly. “In order to get two milligrams of drug, it took something like two train-car loads of periwinkle,” Freireich said. “It was expensive. But Johnson was persistent.” Lilly offered the new drug to Freireich. “Johnson had done work in mice, and he showed me the results. I said, ‘Gee whiz, I’ve got ten kids on the ward dying. I’ll give it to them tomorrow.’ So I went to Zubrod. He said, ‘I don’t think it’s a good idea.’ But I said, ‘These kids are dying. What’s the difference?’ He said, ‘O.K., I’ll let you do a few children.’ The response rate was fifty-five per cent. The kids jumped out of bed.” The drug was called vincristine, and, by itself, it was no wonder drug. Like the others, it worked only for a while. But the good news was that it had a unique mechanism of action—it interfered with cell division by binding to what is called the spindle protein—and its side effects were different from those of the other drugs. “So I sat down at my desk one day and I thought, Gee, if I can give 6-MP and meth at two-thirds dose and prednisone at full dose and vincristine has different limiting toxicities, I bet I can give a full dose of that, too. So I devised a trial where we would give all four in combination.” The trial was called VAMP. It was a left hook, a right hook, an uppercut, and a jab, and the hope was that if you hit leukemia with that barrage it would never get up off the canvas.

The first patient treated under the experimental regimen was a young girl. Freireich started her off with a dose that turned out to be too high, and she almost died. She was put on antibiotics and a respirator. Freireich saw her eight times a day, sitting at her bedside. She pulled through the chemo-induced crisis, only to die later of an infection. But Freireich was learning. He tinkered with his protocol and started again, with patient No. 2. Her name was Janice. She was fifteen, and her recovery was nothing short of miraculous. So was the recovery of the next patient and the next and the next, until nearly every child was in remission, without need of antibiotics or transfusions. In 1965, Frei and Freireich published one of the most famous articles in the history of oncology, “Progress and Perspective in the Chemotherapy of Acute Leukemia,” in Advances in Chemotherapy. Almost three decades later, a perfectly healthy Janice graced the cover of the journal Cancer Research.

What happened with ALL was a formative experience for an entire generation of cancer fighters. VAMP proved that medicine didn’t need a magic bullet—a superdrug that could stop all cancer in its tracks. A drug that worked a little bit could be combined with another that worked a little bit and another that worked a little bit, and, as long as all three worked in different ways and had different side effects, the combination could turn out to be spectacular. To be valuable, a cancer drug didn’t have to be especially effective on its own; it just had to be novel in the way it acted. And, from the beginning, this was what caused so much excitement about elesclomol.

3.

Safi Bahcall’s partner in the founding of Synta was a cell biologist at Harvard Medical School named Lan Bo Chen. Chen, who is in his mid-sixties, was born in Taiwan. He is a mischievous man, with short-cropped straight black hair and various quirks—including a willingness to say whatever is on his mind, a skepticism about all things Japanese (the Japanese occupied Taiwan during the war, after all), and a keen interest in the marital prospects of his unattached co-workers. Bahcall, who is Jewish, describes him affectionately as “the best and worst parts of a Jewish father and the best and worst parts of a Jewish mother rolled into one.” (Sample e-mail from Chen: “Safi is in Israel. Hope he finds wife.”)

Drug hunters like Chen fall into one of two broad schools. The first school, that of “rational design,” believes in starting with the disease and working backward—designing a customized solution based on the characteristics of the problem. Herceptin, one of the most important of the new generation of breast-cancer drugs, is a good example. It was based on genetic detective work showing that about a quarter of all breast cancers were caused by the overproduction of a protein called HER2. HER2 kept causing cells to divide and divide, and scientists set about designing a drug to turn HER2 off. The result is a drug that improved survival in twenty-five per cent of patients with advanced breast cancer. (When Herceptin’s Kaplan-Meier was shown at ASCO, there was stunned silence.) But working backward to a solution requires a precise understanding of the problem, and cancer remains so mysterious and complex that in most cases scientists don’t have that precise understanding. Or they think they do, and then, after they turn off one mechanism, they discover that the tumor has other deadly tricks in reserve.

The other approach is to start with a drug candidate and then hunt for diseases that it might attack. This strategy, known as “mass screening,” doesn’t involve a theory. Instead, it involves a random search for matches between treatments and diseases. This was the school to which Chen belonged. In fact, he felt that the main problem with mass screening was that it wasn’t mass enough. There were countless companies outside the drug business—from industrial research labs to photography giants like Kodak and Fujifilm—that had millions of chemicals sitting in their vaults. Yet most of these chemicals had never been tested to see if they had potential as drugs. Chen couldn’t understand why. If the goal of drug discovery was novelty, shouldn’t the hunt for new drugs go as far and wide as possible?

“In the early eighties, I looked into how Merck and Pfizer went about drug discovery,” Chen recalls. “How many compounds are they using? Are they doing the best they can? And I come up with an incredible number. It turns out that mankind had, at this point, made tens of millions of compounds. But Pfizer was screening only six hundred thousand compounds, and Merck even fewer, about five hundred thousand. How could they screen for drugs and use only five hundred thousand, when mankind has already made so many more?”

An early financial backer of Chen’s was Michael Milken, the junk-bond king of the nineteen-eighties who, after being treated for prostate cancer, became a major cancer philanthropist. “I told Milken my story,” Chen said, “and very quickly he said, ‘I’m going to give you four million dollars. Do whatever you want.’ Right away, Milken thought of Russia. Someone had told him that the Russians had had, for a long time, thousands of chemists in one city making compounds, and none of those compounds had been disclosed.” Chen’s first purchase was a batch of twenty-two thousand chemicals, gathered from all over Russia and Ukraine. They cost about ten dollars each, and came in tiny glass vials. With his money from Milken, Chen then bought a six-hundred-thousand-dollar state-of-the-art drug-screening machine. It was a big, automated Rube Goldberg contraption that could test ninety-six compounds at a time and do a hundred batches a day. A robotic arm would deposit a few drops of each chemical onto a plate, followed by a clump of cancer cells and a touch of blue dye. The mixture was left to sit for a week, and then reëxamined. If the cells were still alive, they would show as blue. If the chemical killed the cancer cells, the fluid would be clear.

Chen’s laboratory began by testing his compounds against prostate-cancer cells, since that was the disease Milken had. Later, he screened dozens of other cancer cells as well. In the first go-around, his batch of chemicals killed everything in sight. But plenty of compounds, including pesticides and other sorts of industrial poisons, will kill cancer cells. The trouble is that they’ll kill healthy cells as well. Chen was looking for something that was selective—that was more likely to kill malignant cells than normal cells. He was also interested in sensitivity—in a chemical’s ability to kill at low concentrations. Chen reduced the amount of each chemical on the plate a thousandfold, and tried again. Now just one chemical worked. He tried the same chemical on healthy cells. It left them alone. Chen lowered the dose another thousandfold. It still worked. The compound came from the National Taras Shevchenko University of Kiev. It was an odd little chemical, the laboratory equivalent of a jazz musician’s riff. “It was pure chemist’s joy,” Chen said. “Homemade, random, and clearly made for no particular purpose. It was the only one that worked on everything we tried.”
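In software terms, the selection rule Chen was applying is easy to state. Here is a minimal, hypothetical sketch in Python—every name and threshold below is invented for illustration, not drawn from Synta’s actual protocol:

```python
# Hypothetical sketch of the screening rule described above: a compound
# counts as a "hit" only if it kills cancer cells while sparing healthy
# ones, and keeps working as the dose is cut a thousandfold at a time.
# Function names and thresholds are invented for illustration.
DILUTIONS = [1.0, 1e-3, 1e-6]  # full strength, then thousandfold cuts

def is_hit(compound, assay):
    """assay(compound, cell_type, dose) -> fraction of cells left alive."""
    for dose in DILUTIONS:
        kills_cancer = assay(compound, "cancer", dose) < 0.1
        spares_healthy = assay(compound, "healthy", dose) > 0.9
        if not (kills_cancer and spares_healthy):
            return False
    return True

# Across a library of twenty-two thousand compounds, the hits are the
# rare survivors: hits = [c for c in library if is_hit(c, assay)]
```

The point of the sketch is how brutal the filter is: each extra dilution and each extra cell line multiplies the ways a compound can be eliminated, which is why one chemical out of twenty-two thousand surviving every test was so striking.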

Mass screening wasn’t as elegant or as efficient as rational drug design. But it provided a chance of stumbling across something by accident—something so novel and unexpected that no scientist would have dreamed it up. It provided for serendipity, and the history of drug discovery is full of stories of serendipity. Alexander Fleming was looking for something to fight bacteria, but didn’t think the answer would be provided by the mold that grew on a petri dish he accidentally left out on his bench. That’s where penicillin came from. Pfizer was looking for a new heart treatment and realized that a drug candidate’s unexpected side effect was more useful than its main effect. That’s where Viagra came from. “The end of surprise would be the end of science,” the historian Robert Friedel wrote in the 2001 essay “Serendipity Is No Accident.” “To this extent, the scientist must constantly seek and hope for surprises.” When Chen gathered chemical compounds from the farthest corners of the earth and tested them against one cancer-cell line after another, he was engineering surprise.

What he found was exactly what he’d hoped for when he started his hunt: something he could never have imagined on his own. When cancer cells came into contact with the chemical, they seemed to go into crisis mode: they acted as if they had been attacked with a blowtorch. The Ukrainian chemical, elesclomol, worked by gathering up copper from the bloodstream and bringing it into cells’ mitochondria, sparking an electrochemical reaction that drove up the levels of the toxic, oxygen-based compounds in the cell called ROS, reactive oxygen species. Normal cells keep ROS in check. Many kinds of cancer cells, though, generate so much ROS that the cell’s ability to keep functioning is stretched to the breaking point, and elesclomol cranked ROS up even further, to the point that the cancer cells went up in flames. Researchers had long known that heating up a cancer cell was a good way of killing it, and there had been plenty of interest over the years in achieving that effect with ROS. But the idea of using copper to set off an electrochemical reaction was so weird—and so unlike the way cancer drugs normally worked—that it’s not an approach anyone would have tried by design. That was the serendipity. It took a bit of “chemist’s joy,” constructed for no particular reason by some bench scientists in Kiev, to show the way. Elesclomol was wondrously novel. “I fell in love,” Chen said. “I can’t explain it. I just did.”

4.

When Freireich went to Zubrod with his idea for VAMP, Zubrod could easily have said no. Drug protocols are typically tested in advance for safety in animal models. This one wasn’t. Freireich freely admits that the whole idea of putting together poisonous drugs in such dosages was “insane,” and, of course, the first patient in the trial had nearly been killed by the toxic regimen. If she had died from it, the whole trial could have been derailed.

The ALL success story provided a hopeful road map for a generation of cancer fighters. But it also came with a warning: those who pursued the unexpected had to live with unexpected consequences. This was not the elegance of rational drug design, where scientists perfect their strategy in the laboratory before moving into the clinic. Working from the treatment to the disease was an exercise in uncertainty and trial and error.

If you’re trying to put together a combination of three or four drugs out of an available pool of dozens, how do you choose which to start with? The number of permutations is vast. And, once you’ve settled on a combination, how do you administer it? A child gets sick. You treat her. She goes into remission, and then she relapses. VAMP established that the best way to induce remission was to treat the child aggressively when she first showed up with leukemia. But do you treat during the remission as well, or only when the child relapses? And, if you treat during remission, do you treat as aggressively as you did during remission induction, or at a lower level? Do you use the same drugs in induction as you do in remission and as you do in relapse? How do you give the drugs, sequentially or in combination? At what dose? And how frequently—every day, or do you want to give the child’s body a few days to recover between bouts of chemo?

Oncologists compared daily 6-MP plus daily methotrexate with daily 6-MP plus methotrexate every four days. They compared methotrexate followed by 6-MP, 6-MP followed by methotrexate, and both together. They compared prednisone followed by full doses of 6-MP, methotrexate, and a new drug, cyclophosphamide (CTX), with prednisone followed by half doses of 6-MP, methotrexate, and CTX. It was endless: vincristine plus prednisone and then methotrexate every four days or vincristine plus prednisone and then methotrexate daily? They tried new drugs, and different combinations. They tweaked and refined, and gradually pushed the cure rate from forty per cent to eighty-five per cent. At St. Jude Children’s Research Hospital, in Memphis—which became a major center of ALL research—no fewer than sixteen clinical trials, enrolling 3,011 children, have been conducted in the past forty-eight years.

And this was just childhood leukemia. Beginning in the nineteen-seventies, Lawrence Einhorn, at Indiana University, pushed cure rates for testicular cancer above eighty per cent with a regimen called BEP: three to four rounds of bleomycin, etoposide, and cisplatin. In the nineteen-seventies, Vincent T. DeVita, at the N.C.I., came up with MOPP for advanced Hodgkin’s disease: Mustargen, Oncovin, procarbazine, and prednisone. DeVita went on to develop a combination therapy for breast cancer called CMF—cyclophosphamide, methotrexate, and 5-fluorouracil. Each combination was a variation on the combination that came before it, tailored to its target through a series of iterations. The often-asked question “When will we find a cure for cancer?” implies that there is some kind of master code behind the disease waiting to be cracked. But perhaps there isn’t a master code. Perhaps there is only what can be uncovered, one step at a time, through trial and error.

When elesclomol emerged from the laboratory, then, all that was known about it was that it did something novel to cancer cells in the laboratory. Nobody had any idea what its best target was. So Synta gave elesclomol to an oncologist at Beth Israel in Boston, who began randomly testing it out on his patients in combination with paclitaxel, a standard chemotherapy drug. The addition of elesclomol seemed to shrink the tumor of someone with melanoma. A patient whose advanced ovarian cancer had failed multiple rounds of previous treatment had some response. There was dramatic activity against Kaposi’s sarcoma. They could have gone on with Phase 1s indefinitely, of course. Chen wanted to combine elesclomol with radiation therapy, and another group at Synta would later lobby hard to study elesclomol’s effects on acute myeloid leukemia (AML), the commonest form of adult leukemia. But they had to draw the line somewhere. Phase 2 would be lung cancer, soft-tissue sarcomas, and melanoma.

Now Synta had its targets. But with this round of testing came an even more difficult question. What’s the best way to conduct a test of a drug you barely understand? To complicate matters further, melanoma, the disease that seemed to be the best of the three options, is among the most complicated of all cancers. Sometimes it confines itself to the surface of the skin. Sometimes it invades every organ in the body. Some kinds of melanoma have a mutation involving a gene called BRAF; others don’t. Some late-stage melanoma tumors pump out high levels of an enzyme called LDH. Sometimes they pump out only low levels of LDH, and patients with low-LDH tumors live so much longer that it is as if they had a different disease. Two patients can appear to have identical diagnoses, and then one will be dead in six months and the other will be fine. Tumors sometimes mysteriously disappear. How do you conduct a drug trial with a disease like this?

It was entirely possible that elesclomol would work in low-LDH patients and not in high-LDH patients, or in high-LDH patients and not in low-LDH ones. It might work well against the melanoma that confined itself to the skin and not against the kind that invaded the liver and other secondary organs; it might work in the early stages of metastasis and not in the later stages. Then, there was the prior-treatment question. Because of how quickly tumors become resistant to drugs, new treatments sometimes work better on “naïve” patients—those who haven’t been treated with other forms of chemotherapy. So elesclomol might work on chemo-naïve patients and not on prior-chemo patients. And, in any of these situations, elesclomol might work better or worse depending on which other drug or drugs it was combined with. There was no end to the possible combinations of patient populations and drugs that Synta could have explored.

At the same time, Synta had to make sure that whatever trial it ran was as big as possible. With a disease as variable as melanoma, there was always the risk in a small study that what you thought was a positive result was really a matter of spontaneous remissions, and that a negative result was just the bad luck of having patients with an unusually recalcitrant form of the disease. John Kirkwood, a melanoma specialist at the University of Pittsburgh, had done the math: in order to guard against some lucky or unlucky artifact, the treatment arm of a Phase 2 trial should have at least seventy patients.
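Kirkwood’s arithmetic is, at bottom, a statistical-power calculation. A minimal sketch in Python, using the standard statsmodels library—the 10-per-cent and 22-per-cent response rates below are illustrative assumptions chosen for this example, not figures from his analysis:

```python
# Hypothetical two-arm sample-size calculation, in the spirit of
# Kirkwood's estimate. The 10% vs. 22% response rates are invented
# for illustration, not taken from his analysis.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

effect = proportion_effectsize(0.22, 0.10)  # Cohen's h for the two rates

n_per_arm = NormalIndPower().solve_power(
    effect_size=effect,
    alpha=0.05,   # tolerated false-positive rate
    power=0.80,   # chance of detecting a real difference
)
print(round(n_per_arm))  # prints 71 -- roughly Kirkwood's seventy per arm
```

With numbers in that range, a real but modest improvement needs about seventy patients per arm before chance can be ruled out; smaller trials risk mistaking luck, good or bad, for an effect.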

Synta was faced with a dilemma. Given melanoma’s variability, the company would ideally have done half a dozen or more versions of its Phase 2 trial: low-LDH, high-LDH, early-stage, late-stage, prior-chemo, chemo-naïve, multi-drug, single-drug. There was no way, though, that they could afford to do that many trials with seventy patients in each treatment arm. The American biotech industry is made up of lots of companies like Synta, because small start-ups are believed to be more innovative and adventurous than big pharmaceutical houses. But not even big firms can do multiple Phase 2 trials on a single disease—not when trials cost more than a hundred thousand dollars per patient and not when, in the pursuit of serendipity, they are simultaneously testing that same experimental drug on two or three other kinds of cancer. So Synta compromised. The company settled on one melanoma trial: fifty-three patients were given elesclomol plus paclitaxel, and twenty-eight, in the control group, were given paclitaxel alone, representing every sort of LDH level, stage of disease, and prior-treatment status. That’s a long way from half a dozen trials of seventy each.

Synta then went to Phase 3: six hundred and fifty-one chemo-naïve patients, drawn from a hundred and fifty hospitals, in fifteen countries. The trial was dubbed SYMMETRY. It was funded by the pharmaceutical giant GlaxoSmithKline, which agreed to underwrite the cost of the next round of clinical trials and—should the drug be approved by the Food and Drug Administration—to split the revenues with Synta.

But was this the perfect trial? Not really. In the Phase 2 trial, elesclomol had been mixed with an organic solvent called Cremophor and then spun around in a sonicator, which is like a mini washing machine. Elesclomol, which is rock-hard in its crystalline form, needed to be completely dissolved if it was going to work as a drug. For SYMMETRY, though, sonicators couldn’t be used. “Many countries said that it would be difficult, and some hospitals even said, ‘We don’t allow sonication in the preparation room,’ ” Chen explained. “We got all kinds of unbelievable feedback. In the end, we came up with something that, after mixing, you use your hand to shake it.” Would hand shaking be a problem? No one knew.

Then a Synta chemist, Mitsunori Ono, figured out how to make a water-soluble version of elesclomol. When the head of Synta’s chemistry team presented the results, he “sang a Japanese drinking song,” Chen said, permitting himself a small smile at the eccentricities of the Japanese. “He was very happy.” It was a great accomplishment. The water-soluble version could be given in higher doses. Should they stop SYMMETRY and start again with elesclomol 2.0? They couldn’t. A new trial would cost many millions of dollars more, and set the whole effort back two or three years. So they went ahead with a drug that didn’t dissolve easily, against a difficult target, with an assortment of patients who may or may not have been ideal—and crossed their fingers.

SYMMETRY began in late 2007. It was a double-blind, randomized trial. No one had any idea who was getting elesclomol and who wasn’t, and no one would have any idea how well the patients on elesclomol were doing until the trial data were unblinded. Day-to-day management of the study was shared with a third-party contractor. The trial itself was supervised by an outside group, known as a data-monitoring committee. “We send them all the data in some database format, and they plug that into their software package and then they type in the code and press ‘Enter,’ ” Bahcall said. “And then this line”—he pointed at the Kaplan-Meier in front of him—”will, hopefully, separate into two lines. They will find out in thirty seconds. It’s, literally, those guys press a button and for the next five years, ten years, the life of the drug, that’s really the only bit of evidence that matters.” It was January, 2009, and the last of the six hundred and fifty-one patients were scheduled to be enrolled in the trial in the next few weeks. According to protocol, when the results began to come in, the data-monitoring committee would call Jacobson, and Jacobson would call Bahcall. “ASCO starts May 29th,” Bahcall said. “If we get our data by early May, we could present at ASCO this year.”

5.

In the course of the SYMMETRY trial, Bahcall’s dining-room-table talks grew more reflective. He drew Kaplan-Meiers on the back of napkins. He talked about the twists and turns that other biotech companies had encountered on the road to the marketplace. He told wry stories about Lan Bo Chen, the Jewish mother and Jewish father rolled into one—and, over and over, he brought up the name of Judah Folkman. Folkman died in 2008, and he was a legend. He was the father of angiogenesis—a wholly new way of attacking cancer tumors. Avastin, the drug that everyone cheered at ASCO seven years ago, was the result of Folkman’s work.

Folkman’s great breakthrough had come while he was working with mouse melanoma cells at the National Naval Medical Center: when the tumors couldn’t set up a network of blood vessels to feed themselves, they would stop growing. Folkman realized that the body must have its own system for promoting and halting blood-vessel formation, and that if he could find a substance that prevented vessels from being formed he would have a potentially powerful cancer drug. One of the researchers in Folkman’s laboratory, Michael O’Reilly, found what seemed to be a potent inhibitor: angiostatin. O’Reilly then assembled a group of mice with an aggressive lung cancer, and treated half with a saline solution and half with angiostatin. In the book “Dr. Folkman’s War” (2001), Robert Cooke describes the climactic moment when the results of the experiment came in:

With a horde of excited researchers jam-packed into a small laboratory room, Folkman euthanized all fifteen mice, then began handing them one by one to O’Reilly to dissect. O’Reilly took the first mouse, made an incision in its chest, and removed the lung. The organ was overwhelmed by cancer. Folkman checked a notebook to see which group the mouse had been in. It was one of those that had gotten only saline. O’Reilly cut into the next mouse and removed its lung. It was perfect. What treatment had it gotten? The notebook revealed it was angiostatin.

It wasn’t Folkman’s triumph that Bahcall kept coming back to, however. It was his struggle. Folkman’s great insight at the Naval Medical Center occurred in 1960. O’Reilly’s breakthrough experiment occurred in 1994. In the intervening years, Folkman’s work was dismissed and attacked, and confronted with every obstacle.

At times, Bahcall tried to convince himself that elesclomol’s path might be different. Synta had those exciting Phase 2 results, and the endorsement of the Glaxo deal. “For the results not to be real, you’d have to believe that it was just a statistical fluke that the patients who got drugs are getting better,” Bahcall said, in one of those dining-room-table moments. “You’d have to believe that the fact that there were more responses in the treatment group was also a statistical fluke, along with the fact that we’ve seen these signs of activity in Phase 1, and the fact that the underlying biology strongly says that we have an extremely active anti-cancer agent.”

But then he would remember Folkman. Angiostatin and a companion agent also identified by Folkman’s laboratory, endostatin, were licensed by a biotech company called EntreMed. And EntreMed never made a dime off either drug. The two drugs failed to show any clinical effect in either Phase 1 or Phase 2. Avastin was a completely different anti-angiogenesis agent, discovered and developed by another team entirely, and brought to market a decade after O’Reilly’s experiment. What’s more, Avastin’s colorectal-cancer trial—the one that received a standing ovation at ASCO—was the drug’s second go-around. A previous Phase 3 trial, for breast cancer, had been a crushing failure. Even Folkman’s beautifully elaborated theory about angiogenesis may not fully explain the way Avastin works. In addition to cutting off the tumor’s blood supply, Avastin seems also to work by repairing some of the blood vessels feeding the tumor, so that the drugs administered in combination with Avastin can get to the tumor more efficiently.

Bahcall followed the fortunes of other biotech companies the way a teen-age boy follows baseball statistics, and he knew that nothing ever went smoothly. He could list, one by one, all the breakthrough drugs that had failed their first Phase 3 or had failed multiple Phase 2s, or that turned out not to work the way they were supposed to work. In the world of serendipity and of trial and error, failure was a condition of discovery, because, when something was new and worked in ways that no one quite understood, every bit of knowledge had to be learned, one experiment at a time. You ended up with VAMP, which worked, but only after you compared daily 6-MP and daily methotrexate with daily 6-MP and methotrexate every four days, and so on, through a great many iterations, none of which worked very well at all. You had results that looked “boinking good,” but only after a trial with a hundred compromises.

Chen had the same combination of realism and idealism that Bahcall did. He was the in-house skeptic at Synta. He was the one who worried the most about the hand shaking of the drugs in the SYMMETRY trial. He had never been comfortable with the big push behind melanoma. “Everyone at Dana-Farber”—the cancer hospital at Harvard—“told me, ‘Don’t touch melanoma,’ ” Chen said. “ ‘It is so hard. Maybe you save it as the last, after you have already treated and tried everything else.’ ” The scientists at Synta were getting better and better at understanding just what it was that elesclomol did when it confronted a cancer cell. But he knew that there was always a gap between what could be learned in the laboratory and what happened in the clinic. “We just don’t know what happens in vivo,” he said. “That’s why drug development is still so hard and so expensive, because the human body is such a black box. We are totally shooting in the dark.” He shrugged. “You have to have good science, sure. But once you shoot the drug in humans you go home and pray.”

Chen was sitting in the room at Synta where Eric Jacobson had revealed the “boinking good” news about elesclomol’s Phase 2 melanoma study. Down the hall was a huge walk-in freezer, filled with thousands of chemicals from the Russian haul. In another room was the Rube Goldberg drug-screening machine, bought with Milken’s money. Chen began to talk about elesclomol’s earliest days, when he was still scavenging through the libraries of chemical companies for leads and Bahcall was still an ex-physicist looking to start a biotech company. “I could not convince anyone that elesclomol had potential,” Chen went on. “Everyone around me tried to stop it, including my research partner, who is a Nobel laureate. He just hated it.” At one point, Chen was working with Fujifilm. The people there hated elesclomol. He worked for a while for the Japanese chemical company Shionogi. The Japanese hated it. “But you know who I found who believed in it?” Chen’s eyes lit up: “Safi!”

6.

Last year, on February 25th, Bahcall and Chen were at a Synta board meeting in midtown Manhattan. It was five-thirty in the afternoon. As the meeting was breaking up, Bahcall got a call on his cell phone. “I have to take this,” he said to Chen. He ducked into a nearby conference room, and Chen waited for him, with the company’s chairman, Keith Gollust. Fifteen minutes passed, then twenty. “I tell Keith it must be the data-monitoring committee,” Chen recalls. “He says, ‘No way. Too soon. How could the D.M.C. have any news just yet?’ I said, ‘It has to be.’ So he stays with me and we wait. Another twenty minutes. Finally Safi comes out, and I looked at him and I knew. He didn’t have to say anything. It was the color of his face.”

The call had been from Eric Jacobson. He had just come back from Florida, where he had met with the D.M.C. on the SYMMETRY trial. The results of the trial had been unblinded. Jacobson had spent the last several days going over the data, trying to answer every question and double-check every conclusion. “I have some really bad news,” he told Bahcall. The trial would have to be halted: more people were dying in the treatment arm than in the control arm. “It took me about a half hour to come out of primary shock,” Bahcall said. “I didn’t go home. I just grabbed my bag, got into a cab, went straight to LaGuardia, took the next flight to Logan, drove straight to the office. The chief medical officer, the clinical guys, statistical guys, operational team were all there, and we essentially spent the rest of the night, until about one or two in the morning, reviewing the data.” It looked as if patients with high-LDH tumors were the problem: elesclomol seemed to fail them completely. It was heartbreaking. Glaxo, Bahcall knew, was certain to pull out of the deal. There would have to be many layoffs.

The next day, Bahcall called a meeting of the management team. They met in the Synta conference room. “Eric has some news,” Bahcall said. Jacobson stood up and began. But before he got very far he had to stop, because he was overcome with emotion, and soon everyone else in the room was, too.

On December 7, 2009, Synta released the following statement:

Synta Pharmaceuticals Corp. (NASDAQ: SNTA), a biopharmaceutical company focused on discovering, developing, and commercializing small molecule drugs to treat severe medical conditions, today announced the results of a study evaluating the activity of elesclomol against acute myeloid leukemia (AML) cell lines and primary leukemic blast cells from AML patients, presented at the Annual Meeting of the American Society of Hematology (ASH) in New Orleans. . . . “The experiments conducted at the University of Toronto showed elesclomol was highly active against AML cell lines and primary blast cells from AML patients at concentrations substantially lower than those already achieved in cancer patients in clinical trials,” said Vojo Vukovic, M.D., Ph.D., Senior Vice President and Chief Medical Officer, Synta. “Of particular interest were the ex vivo studies of primary AML blast cells from patients recently treated at Toronto, where all 10 samples of leukemic cells responded to exposure to elesclomol. These results provide a strong rationale for further exploring the potential of elesclomol in AML, a disease with high medical need and limited options for patients.”

“I will bet anything I have, with anybody, that this will be a drug one day,” Chen said. It was January. The early AML results had just come in. Glaxo was a memory. “Now, maybe we are crazy, we are romantic. But this kind of characteristic you have to have if you want to be a drug hunter. You have to be optimistic, you have to have supreme confidence, because the odds are so incredibly against you. I am a scientist. I just hope that I would be so romantic that I become deluded enough to keep hoping.”
Social media can’t provide what social change has always required.

1.

At four-thirty in the afternoon on Monday, February 1, 1960, four college students sat down at the lunch counter at the Woolworth’s in downtown Greensboro, North Carolina. They were freshmen at North Carolina A. & T., a black college a mile or so away.

“I’d like a cup of coffee, please,” one of the four, Ezell Blair, said to the waitress.

“We don’t serve Negroes here,” she replied.

The Woolworth’s lunch counter was a long L-shaped bar that could seat sixty-six people, with a standup snack bar at one end. The seats were for whites. The snack bar was for blacks. Another employee, a black woman who worked at the steam table, approached the students and tried to warn them away. “You’re acting stupid, ignorant!” she said. They didn’t move. Around five-thirty, the front doors to the store were locked. The four still didn’t move. Finally, they left by a side door. Outside, a small crowd had gathered, including a photographer from the Greensboro Record. “I’ll be back tomorrow with A. & T. College,” one of the students said.

By next morning, the protest had grown to twenty-seven men and four women, most from the same dormitory as the original four. The men were dressed in suits and ties. The students had brought their schoolwork, and studied as they sat at the counter. On Wednesday, students from Greensboro’s “Negro” secondary school, Dudley High, joined in, and the number of protesters swelled to eighty. By Thursday, the protesters numbered three hundred, including three white women, from the Greensboro campus of the University of North Carolina. By Saturday, the sit-in had reached six hundred. People spilled out onto the street. White teen-agers waved Confederate flags. Someone threw a firecracker. At noon, the A. & T. football team arrived. “Here comes the wrecking crew,” one of the white students shouted.

By the following Monday, sit-ins had spread to Winston-Salem, twenty-five miles away, and Durham, fifty miles away. The day after that, students at Fayetteville State Teachers College and at Johnson C. Smith College, in Charlotte, joined in, followed on Wednesday by students at St. Augustine’s College and Shaw University, in Raleigh. On Thursday and Friday, the protest crossed state lines, surfacing in Hampton and Portsmouth, Virginia, in Rock Hill, South Carolina, and in Chattanooga, Tennessee. By the end of the month, there were sit-ins throughout the South, as far west as Texas. “I asked every student I met what the first day of the sitdowns had been like on his campus,” the political theorist Michael Walzer wrote in Dissent. “The answer was always the same: ‘It was like a fever. Everyone wanted to go.’ ” Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

2.

The world, we are told, is in the midst of a revolution. The new tools of social media have reinvented social activism. With Facebook and Twitter and the like, the traditional relationship between political authority and popular will has been upended, making it easier for the powerless to collaborate, coördinate, and give voice to their concerns. When ten thousand protesters took to the streets in Moldova in the spring of 2009 to protest against their country’s Communist government, the action was dubbed the Twitter Revolution, because of the means by which the demonstrators had been brought together. A few months after that, when student protests rocked Tehran, the State Department took the unusual step of asking Twitter to suspend scheduled maintenance of its Web site, because the Administration didn’t want such a critical organizing tool out of service at the height of the demonstrations. “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy,” Mark Pfeifle, a former national-security adviser, later wrote, calling for Twitter to be nominated for the Nobel Peace Prize. Where activists were once defined by their causes, they are now defined by their tools. Facebook warriors go online to push for change. “You are the best hope for us all,” James K. Glassman, a former senior State Department official, told a crowd of cyber activists at a recent conference sponsored by Facebook, A. T. & T., Howcast, MTV, and Google. Sites like Facebook, Glassman said, “give the U.S. a significant competitive advantage over terrorists. Some time ago, I said that Al Qaeda was ‘eating our lunch on the Internet.’ That is no longer the case. Al Qaeda is stuck in Web 1.0. The Internet is now about interactivity and conversation.”

These are strong, and puzzling, claims. Why does it matter who is eating whose lunch on the Internet? Are people who log on to their Facebook page really the best hope for us all? As for Moldova’s so-called Twitter Revolution, Evgeny Morozov, a scholar at Stanford who has been the most persistent of digital evangelism’s critics, points out that Twitter had scant internal significance in Moldova, a country where very few Twitter accounts exist. Nor does it seem to have been a revolution, not least because the protests—as Anne Applebaum suggested in the Washington Post—may well have been a bit of stagecraft cooked up by the government. (In a country paranoid about Romanian revanchism, the protesters flew a Romanian flag over the Parliament building.) In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

Some of this grandiosity is to be expected. Innovators tend to be solipsists. They often want to cram every stray fact and experience into their new model. As the historian Robert Darnton has written, “The marvels of communication technology in the present have produced a false consciousness about the past—even a sense that communication has no history, or had nothing of importance to consider before the days of television and the Internet.” But there is something else at work here, in the outsized enthusiasm for social media. Fifty years after one of the most extraordinary episodes of social upheaval in American history, we seem to have forgotten what activism is.

3.

Greensboro in the early nineteen-sixties was the kind of place where racial insubordination was routinely met with violence. The four students who first sat down at the lunch counter were terrified. “I suppose if anyone had come up behind me and yelled ‘Boo,’ I think I would have fallen off my seat,” one of them said later. On the first day, the store manager notified the police chief, who immediately sent two officers to the store. On the third day, a gang of white toughs showed up at the lunch counter and stood ostentatiously behind the protesters, ominously muttering epithets such as “burr-head nigger.” A local Ku Klux Klan leader made an appearance. On Saturday, as tensions grew, someone called in a bomb threat, and the entire store had to be evacuated.

The dangers were even clearer in the Mississippi Freedom Summer Project of 1964, another of the sentinel campaigns of the civil-rights movement. The Student Nonviolent Coordinating Committee recruited hundreds of Northern, largely white unpaid volunteers to run Freedom Schools, register black voters, and raise civil-rights awareness in the Deep South. “No one should go anywhere alone, but certainly not in an automobile and certainly not at night,” they were instructed. Within days of arriving in Mississippi, three volunteers—Michael Schwerner, James Chaney, and Andrew Goodman—were kidnapped and killed, and, during the rest of the summer, thirty-seven black churches were set on fire and dozens of safe houses were bombed; volunteers were beaten, shot at, arrested, and trailed by pickup trucks full of armed men. A quarter of those in the program dropped out. Activism that challenges the status quo—that attacks deeply rooted problems—is not for the faint of heart.

What makes people capable of this kind of activism? The Stanford sociologist Doug McAdam compared the Freedom Summer dropouts with the participants who stayed, and discovered that the key difference wasn’t, as might be expected, ideological fervor. “All of the applicants—participants and withdrawals alike—emerge as highly committed, articulate supporters of the goals and values of the summer program,” he concluded. What mattered more was an applicant’s degree of personal connection to the civil-rights movement. All the volunteers were required to provide a list of personal contacts—the people they wanted kept apprised of their activities—and participants were far more likely than dropouts to have close friends who were also going to Mississippi. High-risk activism, McAdam concluded, is a “strong-tie” phenomenon.

This pattern shows up again and again. One study of the Red Brigades, the Italian terrorist group of the nineteen-seventies, found that seventy per cent of recruits had at least one good friend already in the organization. The same is true of the men who joined the mujahideen in Afghanistan. Even revolutionary actions that look spontaneous, like the demonstrations in East Germany that led to the fall of the Berlin Wall, are, at core, strong-tie phenomena. The opposition movement in East Germany consisted of several hundred groups, each with roughly a dozen members. Each group was in limited contact with the others: at the time, only thirteen per cent of East Germans even had a phone. All they knew was that on Monday nights, outside St. Nicholas Church in downtown Leipzig, people gathered to voice their anger at the state. And the primary determinant of who showed up was “critical friends”—the more friends you had who were critical of the regime, the more likely you were to join the protest.

So one crucial fact about the four freshmen at the Greensboro lunch counter—David Richmond, Franklin McCain, Ezell Blair, and Joseph McNeil—was their relationship with one another. McNeil was a roommate of Blair’s in A. & T.’s Scott Hall dormitory. Richmond roomed with McCain one floor up, and Blair, Richmond, and McCain had all gone to Dudley High School. The four would smuggle beer into the dorm and talk late into the night in Blair and McNeil’s room. They would all have remembered the murder of Emmett Till in 1955, the Montgomery bus boycott that same year, and the showdown in Little Rock in 1957. It was McNeil who brought up the idea of a sit-in at Woolworth’s. They’d discussed it for nearly a month. Then McNeil came into the dorm room and asked the others if they were ready. There was a pause, and McCain said, in a way that works only with people who talk late into the night with one another, “Are you guys chicken or not?” Ezell Blair worked up the courage the next day to ask for a cup of coffee because he was flanked by his roommate and two good friends from high school.

4.

The kind of activism associated with social media isn’t like this at all. The platforms of social media are built around weak ties. Twitter is a way of following (or being followed by) people you may never have met. Facebook is a tool for efficiently managing your acquaintances, for keeping up with the people you would not otherwise be able to stay in touch with. That’s why you can have a thousand “friends” on Facebook, as you never could in real life.

This is in many ways a wonderful thing. There is strength in weak ties, as the sociologist Mark Granovetter has observed. Our acquaintances—not our friends—are our greatest source of new ideas and information. The Internet lets us exploit the power of these kinds of distant connections with marvellous efficiency. It’s terrific at the diffusion of innovation, interdisciplinary collaboration, seamlessly matching up buyers and sellers, and the logistical functions of the dating world. But weak ties seldom lead to high-risk activism.

In a new book called “The Dragonfly Effect: Quick, Effective, and Powerful Ways to Use Social Media to Drive Social Change,” the business consultant Andy Smith and the Stanford Business School professor Jennifer Aaker tell the story of Sameer Bhatia, a young Silicon Valley entrepreneur who came down with acute myelogenous leukemia. It’s a perfect illustration of social media’s strengths. Bhatia needed a bone-marrow transplant, but he could not find a match among his relatives and friends. The odds were best with a donor of his ethnicity, and there were few South Asians in the national bone-marrow database. So Bhatia’s business partner sent out an e-mail explaining Bhatia’s plight to more than four hundred of their acquaintances, who forwarded the e-mail to their personal contacts; Facebook pages and YouTube videos were devoted to the Help Sameer campaign. Eventually, nearly twenty-five thousand new people were registered in the bone-marrow database, and Bhatia found a match.

But how did the campaign get so many people to sign up? By not asking too much of them. That’s the only way you can get someone you don’t really know to do something on your behalf. You can get thousands of people to sign up for a donor registry, because doing so is pretty easy. You have to send in a cheek swab and—in the highly unlikely event that your bone marrow is a good match for someone in need—spend a few hours at the hospital. Donating bone marrow isn’t a trivial matter. But it doesn’t involve financial or personal risk; it doesn’t mean spending a summer being chased by armed men in pickup trucks. It doesn’t require that you confront socially entrenched norms and practices. In fact, it’s the kind of commitment that will bring only social acknowledgment and praise.

The evangelists of social media don’t understand this distinction; they seem to believe that a Facebook friend is the same as a real friend and that signing up for a donor registry in Silicon Valley today is activism in the same sense as sitting at a segregated lunch counter in Greensboro in 1960. “Social networks are particularly effective at increasing motivation,” Aaker and Smith write. But that’s not true. Social networks are effective at increasing participation—by lessening the level of motivation that participation requires. The Facebook page of the Save Darfur Coalition has 1,282,339 members, who have donated an average of nine cents apiece. The next biggest Darfur charity on Facebook has 22,073 members, who have donated an average of thirty-five cents. Help Save Darfur has 2,797 members, who have given, on average, fifteen cents. A spokesperson for the Save Darfur Coalition told Newsweek, “We wouldn’t necessarily gauge someone’s value to the advocacy movement based on what they’ve given. This is a powerful mechanism to engage this critical population. They inform their community, attend events, volunteer. It’s not something you can measure by looking at a ledger.” In other words, Facebook activism succeeds not by motivating people to make a real sacrifice but by motivating them to do the things that people do when they are not motivated enough to make a real sacrifice. We are a long way from the lunch counters of Greensboro.

5.

The students who joined the sit-ins across the South during the winter of 1960 described the movement as a “fever.” But the civil-rights movement was more like a military campaign than like a contagion. In the late nineteen-fifties, there had been sixteen sit-ins in various cities throughout the South, fifteen of which were formally organized by civil-rights organizations like the N.A.A.C.P. and CORE. Possible locations for activism were scouted. Plans were drawn up. Movement activists held training sessions and retreats for would-be protesters. The Greensboro Four were a product of this groundwork: all were members of the N.A.A.C.P. Youth Council. They had close ties with the head of the local N.A.A.C.P. chapter. They had been briefed on the earlier wave of sit-ins in Durham, and had been part of a series of movement meetings in activist churches. When the sit-in movement spread from Greensboro throughout the South, it did not spread indiscriminately. It spread to those cities which had preëxisting “movement centers”—a core of dedicated and trained activists ready to turn the “fever” into action.

The civil-rights movement was high-risk activism. It was also, crucially, strategic activism: a challenge to the establishment mounted with precision and discipline. The N.A.A.C.P. was a centralized organization, run from New York according to highly formalized operating procedures. At the Southern Christian Leadership Conference, Martin Luther King, Jr., was the unquestioned authority. At the center of the movement was the black church, which had, as Aldon D. Morris points out in his superb 1984 study, “The Origins of the Civil Rights Movement,” a carefully demarcated division of labor, with various standing committees and disciplined groups. “Each group was task-oriented and coordinated its activities through authority,” Morris writes. “Individuals were held accountable for their assigned duties, and important conflicts were resolved by the minister, who usually exercised ultimate authority over the congregation.”

This is the second crucial distinction between traditional activism and its online variant: social media are not about this kind of hierarchical organization. Facebook and the like are tools for building networks, which are the opposite, in structure and character, of hierarchies. Unlike hierarchies, with their rules and procedures, networks aren’t controlled by a single central authority. Decisions are made through consensus, and the ties that bind people to the group are loose.

This structure makes networks enormously resilient and adaptable in low-risk situations. Wikipedia is a perfect example. It doesn’t have an editor, sitting in New York, who directs and corrects each entry. The effort of putting together each entry is self-organized. If every entry in Wikipedia were to be erased tomorrow, the content would swiftly be restored, because that’s what happens when a network of thousands spontaneously devote their time to a task.

There are many things, though, that networks don’t do well. Car companies sensibly use a network to organize their hundreds of suppliers, but not to design their cars. No one believes that the articulation of a coherent design philosophy is best handled by a sprawling, leaderless organizational system. Because networks don’t have a centralized leadership structure and clear lines of authority, they have real difficulty reaching consensus and setting goals. They can’t think strategically; they are chronically prone to conflict and error. How do you make difficult choices about tactics or strategy or philosophical direction when everyone has an equal say?

The Palestine Liberation Organization originated as a network, and the international-relations scholars Mette Eilstrup-Sangiovanni and Calvert Jones argue in a recent essay in International Security that this is why it ran into such trouble as it grew: “Structural features typical of networks—the absence of central authority, the unchecked autonomy of rival groups, and the inability to arbitrate quarrels through formal mechanisms—made the P.L.O. excessively vulnerable to outside manipulation and internal strife.”

In Germany in the nineteen-seventies, they go on, “the far more unified and successful left-wing terrorists tended to organize hierarchically, with professional management and clear divisions of labor. They were concentrated geographically in universities, where they could establish central leadership, trust, and camaraderie through regular, face-to-face meetings.” They seldom betrayed their comrades in arms during police interrogations. Their counterparts on the right were organized as decentralized networks, and had no such discipline. These groups were regularly infiltrated, and members, once arrested, easily gave up their comrades. Similarly, Al Qaeda was most dangerous when it was a unified hierarchy. Now that it has dissipated into a network, it has proved far less effective.

The drawbacks of networks scarcely matter if the network isn’t interested in systemic change—if it just wants to frighten or humiliate or make a splash—or if it doesn’t need to think strategically. But if you’re taking on a powerful and organized establishment you have to be a hierarchy. The Montgomery bus boycott required the participation of tens of thousands of people who depended on public transit to get to and from work each day. It lasted a year. In order to persuade those people to stay true to the cause, the boycott’s organizers tasked each local black church with maintaining morale, and put together a free alternative private carpool service, with forty-eight dispatchers and forty-two pickup stations. Even the White Citizens Council, King later said, conceded that the carpool system moved with “military precision.” By the time King came to Birmingham, for the climactic showdown with Police Commissioner Eugene (Bull) Connor, he had a budget of a million dollars, and a hundred full-time staff members on the ground, divided into operational units. The operation itself was divided into steadily escalating phases, mapped out in advance. Support was maintained through consecutive mass meetings rotating from church to church around the city.

Boycotts and sit-ins and nonviolent confrontations—which were the weapons of choice for the civil-rights movement—are high-risk strategies. They leave little room for conflict and error. The moment even one protester deviates from the script and responds to provocation, the moral legitimacy of the entire protest is compromised. Enthusiasts for social media would no doubt have us believe that King’s task in Birmingham would have been made infinitely easier had he been able to communicate with his followers through Facebook, and contented himself with tweets from a Birmingham jail. But networks are messy: think of the ceaseless pattern of correction and revision, amendment and debate, that characterizes Wikipedia. If Martin Luther King, Jr., had tried to do a wiki-boycott in Montgomery, he would have been steamrollered by the white power structure. And of what use would a digital communication tool be in a town where ninety-eight per cent of the black community could be reached every Sunday morning at church? The things that King needed in Birmingham—discipline and strategy—were things that online social media cannot provide.

6.

The bible of the social-media movement is Clay Shirky’s “Here Comes Everybody.” Shirky, who teaches at New York University, sets out to demonstrate the organizing power of the Internet, and he begins with the story of Evan, who worked on Wall Street, and his friend Ivanna, after she left her smart phone, an expensive Sidekick, on the back seat of a New York City taxicab. The telephone company transferred the data on Ivanna’s lost phone to a new phone, whereupon she and Evan discovered that the Sidekick was now in the hands of a teen-ager from Queens, who was using it to take photographs of herself and her friends.

When Evan e-mailed the teen-ager, Sasha, asking for the phone back, she replied that his “white ass” didn’t deserve to have it back. Miffed, he set up a Web page with her picture and a description of what had happened. He forwarded the link to his friends, and they forwarded it to their friends. Someone found the MySpace page of Sasha’s boyfriend, and a link to it found its way onto the site. Someone found her address online and took a video of her home while driving by; Evan posted the video on the site. The story was picked up by the news filter Digg. Evan was now up to ten e-mails a minute. He created a bulletin board for his readers to share their stories, but it crashed under the weight of responses. Evan and Ivanna went to the police, but the police filed the report under “lost,” rather than “stolen,” which essentially closed the case. “By this point millions of readers were watching,” Shirky writes, “and dozens of mainstream news outlets had covered the story.” Bowing to the pressure, the N.Y.P.D. reclassified the item as “stolen.” Sasha was arrested, and Evan got his friend’s Sidekick back.

Shirky’s argument is that this is the kind of thing that could never have happened in the pre-Internet age—and he’s right. Evan could never have tracked down Sasha. The story of the Sidekick would never have been publicized. An army of people could never have been assembled to wage this fight. The police wouldn’t have bowed to the pressure of a lone person who had misplaced something as trivial as a cell phone. The story, to Shirky, illustrates “the ease and speed with which a group can be mobilized for the right kind of cause” in the Internet age.

Shirky considers this model of activism an upgrade. But it is simply a form of organizing which favors the weak-tie connections that give us access to information over the strong-tie connections that help us persevere in the face of danger. It shifts our energies from organizations that promote strategic and disciplined activity and toward those which promote resilience and adaptability. It makes it easier for activists to express themselves, and harder for that expression to have any impact. The instruments of social media are well suited to making the existing social order more efficient. They are not a natural enemy of the status quo. If you are of the opinion that all the world needs is a little buffing around the edges, this should not trouble you. But if you think that there are still lunch counters out there that need integrating it ought to give you pause.

Shirky ends the story of the lost Sidekick by asking, portentously, “What happens next?”— no doubt imagining future waves of digital protesters. But he has already answered the question. What happens next is more of the same. A networked, weak-tie world is good at things like helping Wall Streeters get phones back from teen-age girls. Viva la revolución.

Why do we pay our stars so much money?

1.

When Marvin Miller took over as the head of the Major League Baseball Players Association, in 1966, he quickly realized that his members did not know what being in a union meant. He would talk to the players about how unfair their contracts were, about how the owners took an outsized portion of the profits, how pitiful their pensions and health-care benefits were, and how much better things would be if they organized themselves. But they weren’t listening. The players were young, and many came from small towns far from the centers of organized labor. They thought of themselves as privileged: they got to eat steak for dinner, and be cheered by thousands of fans. Even when Miller brought up something as seemingly straightforward as the baseball-card business, the players were oblivious of their worth. “The Topps bubble-gum company would go into the minor leagues,” Miller recalls, “and if the scouts told them someone was going to make it to the majors, they would sign those kids. You know what they would pay them? Five dollars. We’re talking about 1966, 1967. Five dollars to sign. And that meant they would own them for five years, and the players got no percentage of sales, the way you would with any other kind of deal like that. If you made it to the majors, you got a hundred and twenty-five dollars per year—for the company’s right to take your picture, use it in their advertising, put it on a card, use your name and your record on the back of it. I used to say to the players, ‘Why’d you sign?’ And they’d look sheepish and say, ‘When I was a kid, I used to collect cards, and now they want to put me on one!’”

One season when Miller was making his annual rounds of the spring-training sites, he decided to put his argument to the players as plainly as he could. He was visiting the San Francisco Giants, in Phoenix, Arizona. “The right fielder for the Giants was Bobby Bonds, a nice man,” Miller says. “I knew what Bonds’s salary was. And, considering that he was a really prime ballplayer—that he had hit more home runs as a lead-off man than anyone had ever hit in the major leagues, that he was a speedy base runner—the number shocked me. I was always shocked when I looked at salaries in those days. So I said, ‘I want to tell you something. Take any one of you—take Bobby Bonds. I’m going to make a prediction.’ ” The prediction was about Bobby Bonds’s son. The Giants’ owner encouraged his players to bring their families to spring training, so Miller knew Bonds’s son well. He was just a little kid, but already it was clear that he was something special. “I said, ‘If we can get rid of the system as we now know it, then Bobby Bonds’s son, if he makes it to the majors, will make more in one year than Bobby will in his whole career.’ And the eyebrows went up.” Bobby Bonds’s son, of course, was Barry Bonds, one of the greatest players of his generation. And Miller was absolutely right: he ended up making more in one year than all the members of his father’s San Francisco Giants team made in their entire careers, combined. That was Marvin Miller’s revolution—and, nearly half a century later, we are still dealing with its consequences.

2.

There was a time, not so long ago, when people at the very top of their profession—the “talent”—did not make a lot of money. In the postwar years, corporate lawyers, Wall Street investment bankers, Fortune 500 executives, all-star professional athletes, and the like made a fraction of what they earn today. In baseball, between the mid-nineteen-forties and the mid-nineteen-sixties, the game’s minimum and highest salaries both fell by more than a third, in constant dollars. In 1935, lawyers in the United States made, on average, four times the country’s per-capita income. By 1958, that number was 2.4. The president of DuPont, Crawford Greenewalt, testified before Congress in 1955 that he took home half what his predecessor had made thirty years earlier. (“Being an honest man,” Greenewalt added wryly, “I think I should say that when I pointed the discrepancy out to him he replied merely that he was easily twice as good as I and hence deserved it.”)

That era was an upside-down version of our own: when society gazed upon captains of industry and commerce, it marvelled at how ordinary their lives were. A Wall Street Journal profile of the C.E.O. of one of the country’s “top industrial concerns” in the late nineteen-forties began with a description of “Mr. C” jotting down the cost of a taxi ride in a little black book he used to track his expenses. His after-tax income, Mr. C said, was $36,611, and the year before it had been $21,032. He’d bought two cars the previous year, but had to dip into his savings to afford them. “Mr. C has never lived extravagantly or even elegantly,” the article reported. “He’s never owned a yacht or a string of race horses. His main relaxation is swimming. ‘My idea of fun would be to have a swimming pool behind the house so I could take a dip whenever I felt like it. But to build it I’d have to sell a couple of my Government bonds. I don’t like to do that, so I’m getting along without the pool.’ ” Getting along without the pool?

In 1959, the Ladies’ Home Journal dispatched a writer to the suburban Chicago home of “Mr. O’Rourke,” one of the country’s “most successful executives.” Since the Wall Street Journal’s visit with Mr. C, America had undergone extraordinary growth. But Mr. O’Rourke’s life was no more extravagant than that of his counterpart of a dozen years earlier. He lived in an ivy-covered Georgian, with ten rooms. Mrs. O’Rourke, “a slim blonde in a tweed suit and loafers,” gave the writer a tour. “For our neighborhood this is not a large place,” she said. “You can see that we’ve made do with rugs from our old home and that this room has never seen the services of an interior decorator. We’ve bought our furniture piece by piece over the years and I’ve never thrown anything away.” Their summer house was a small cottage on a lake. “I’m president of one of the larger companies in the U.S.,” Mr. O’Rourke said, “yet chances are I will never become a millionaire.”

The truly rich in the nineteen-fifties and sixties were people who had inherited money—the heirs of the great fortunes of the Gilded Age. Entrepreneurs who sold their own businesses could also become wealthy, because capital-gains taxes were relatively low. But the marketplace chose not to pay salaried professionals and managers a lot of money, and society chose not to let them keep much of what they made. On income above two hundred thousand dollars a year, the marginal tax rate was as high as ninety-one per cent. Formerly exclusive occupations, meanwhile, were opening themselves to new talent, as a result of the expansion of the public university system. Economists of the era were convinced, as one analysis put it, that there was a “connection between economic growth and the advance of democracy on the one hand and the worsening economic status of the intellectual and professional classes on the other.” In 1956, Roswell Magill, a partner at Cravath, Swaine & Moore, spoke for a generation of professionals when he wrote that law firms “can no longer honestly assure promising young men that if they become partners they can save money in substantial amounts, build country homes and gardens for themselves like their fathers and grandfathers did, and plan extensive European holidays.”

And then, suddenly, the world changed. Taxes began to fall. The salaries paid to high-level professionals—“talent”—started to rise. Baseball players became multimillionaires. C.E.O.s got private jets. The lawyers at Cravath, Swaine & Moore who once despaired of their economic future began saving money in substantial amounts, building country homes and gardens for themselves like their fathers and grandfathers did, and planning extensive European holidays. In the nineteen-seventies, against all expectations, the salaryman rose from the dead.

The story of how this transformation happened has been told in many different ways. Economists have pointed to the globalization of the world economy and the rise of what Robert Frank and Philip Cook call the “winner-take-all” economy. Political scientists speak of how the social consensus changed in favor of privilege: taxes came down, and the commitment to economic equality eroded. But there is one more crucial piece to the puzzle. As Roger Martin, the dean of the Rotman School of Management, at the University of Toronto, argued in the Harvard Business Review a few years ago, people who fell into the category of “Talent” came to realize that what they possessed was relatively scarce compared with what the class of owners, “Capital,” had at their disposal. People like O’Rourke and Mr. C and Roswell Magill “woke up”—in Martin’s phrase—to what they were really worth. And who woke them up? The Marvin Millers of the world.

3.

Marvin Miller is in his nineties now. He lives in a modest Manhattan high-rise on the Upper East Side, decorated with Japanese prints. He is slight and wiry: the ballplayers he represented for so long always loomed over him. His head is large and his features are aquiline. He has a wispy mustache—a slightly diminished version of the mustache he was asked to shave, by baseball’s traditionalists, upon his election to the union job. He said no to that request, just as he said no to the suggestion, floated by the players, that Richard Nixon serve as his general counsel, and no to virtually every scheme that baseball’s owners tried to foist upon him during his time in office. It was never wise to cross Marvin Miller. Bowie Kuhn, the commissioner of baseball, once accused Miller of failing to “reciprocate” his overtures of friendship. Miller responded that Kuhn was not trying to be friends: he was trying to “pick my brains,” and “there was scant possibility of reciprocity in that department.”

Miller came to baseball from the United Steelworkers union, where he was the chief economist. He was present during the epic eight-day White House negotiating session that narrowly averted a strike at the height of the Vietnam War, in 1965. He cut his teeth at the State, County and Municipal Workers of America and later served on the National War Labor Board. By temperament and experience, he is that increasingly rare species—the union man. Miller remembers going to Manhattan’s Lower East Side one Saturday morning as a child and seeing his father on the picket line for the Retail, Wholesale and Department Store Union. “My father sold ladies’ coats on Division Street,” Miller recalled, speaking in his apartment one muggy day this summer. “Management had been non-union for years and years and years, the Depression was on, and they were cutting the work year for everybody. The strike lasted a month. One day, my father came home later than usual. I was still up, and he had a document with him that he wanted me to read. It was a settlement. The workers got almost everything they were striking for.” Miller was describing a day in his life that happened more than seventy-five years ago. But there are some things that a union man never forgets. “They got a restoration of the work-year cuts,” he went on, ticking off the details as if the contract were in front of him. “They got affirmation that the management would obey the new wage law and pay time and a half for overtime and Sunday work—and a whole raft of small things. It was very impressive to me as a kid.”

The baseball union that Miller took over, however, was not a real union. It was a Players Association. Each team elected a representative, and the reps formed a loosely structured committee—headed by a part-time adviser who was selected and paid for by the owners. The owners did as they pleased. They required the players to abide by rules and regulations without even giving them copies of the rules and regulations they were to abide by. Every player signed what was called the Uniform Player’s Contract, a document so lopsided in its provisions, and so utterly without regard for the rights of the player, that it reminded Miller of the standard lease that the landlords’ association in New York draws up for prospective tenants. A few months after taking the job, Miller was invited to attend a meeting, in Chicago, of Major League Baseball’s Executive Council, a group consisting of the commissioner and a handful of owners, who served as the game’s governing body. There he discovered that the league had decided to terminate the agreement that had been in place for the previous ten years for funding the players’ pension system. The owners had been giving the players sixty per cent of the revenue from broadcasting the World Series. But, with a new, much larger television deal coming up, the owners had decided that sixty per cent would be too much.

Miller—the veteran of a hundred union negotiations—was stunned as he listened to the owners’ “decision.” In the world he had just come from, this did not happen. There had been no collective bargaining; the upcoming announcement was presented to him as a settled matter. Years later, in his memoirs, he recalled:

I looked across the room, hoping to find a sign that someone understood how blatantly illegal and offensive this all was. My eyes fell on Bowie, the only practicing lawyer in the room. I looked for a flicker of comprehension in his eyes, an awareness that his clients were about to display publicly their violations of law, demonstrating for all to see that they had engaged in a willful refusal to bargain. . . . Kuhn showed not the slightest sign of comprehension.

Miller had no staff at that point, and virtually no budget. He was up against a group of owners who were among the wealthiest men in America. In the past, whenever major battles between the owners and the players had been taken to the courts—including to the Supreme Court—the owners had invariably won. Miller’s own members barely understood what a union was for—and there he was, at a meeting of baseball’s governing committee, being treated like a potted plant.

Yet when Miller pushed back, the owners capitulated. He ended up winning the television-revenue battle. He rebuilt the players’ pension system. He got the owners to agree to collective bargaining—which meant that the players had a seat at the table on every issue affecting the game. He won binding arbitration for salary disputes and other grievances, a victory that he describes as the “difference between dictatorship and democracy”; no longer would players be forced to take whatever they were offered by their team. Then he won free agency, which gave veteran players the right to offer their services to any team they chose.

Not even Miller thought it would be that easy. At one point, he wanted the owners to use surplus income from the pension fund to pay for increased benefits. The owners drew a line in the sand. Reluctantly, Miller led the players out on strike—the first strike in the history of professional sports. This time, surely, the fight would be long and bloody. It was not. The owners folded after thirteen days. As Leonard Koppett, of the Times, memorably summed up the course of the negotiations:

PLAYERS: We want higher pensions.
OWNERS: We won’t give you one damn cent for that.
PLAYERS: You don’t have to—the money is already there. Just let us use it.
OWNERS: It would be imprudent.
PLAYERS: We did it before, and anyhow, we won’t play unless we can have some of it.
OWNERS: Okay.

This discovery that Capital was suddenly vulnerable swept across the professional classes in the mid-nineteen-seventies. At exactly the same time that Miller was leading the ballplayers out on strike, for example, a parallel revolution was taking place in the publishing world, as authors and their agents began to rewrite the terms of their relationship with publishers. One of the instigators of that revolution was Mort Janklow, a corporate lawyer who, in 1972, did a favor for his college friend William Safire, and sold Safire’s memoir of his years as a speechwriter in the Nixon Administration to William Morrow & Company. Here is how Janklow describes the earliest days of the uprising:

“So Bill delivers the book on September 1, 1973,” Janklow said, in his Park Avenue office, this past summer. “Ten or fifteen days go by, and Larry Hughes, his editor at Morrow, calls me and says, ‘This really doesn’t work for us. There’s no central theme, it seems too episodic—a bunch of the editors read it, and it’s really unacceptable. We feel bad about it, because we love Bill. But we’re going to return the book to you, and we want you to give back the advance.’”

Hughes was exercising the terms of what was then the standard publishing contract—the agreement that every author signed when he or she sold a book. Janklow knew nothing about the publishing world when he agreed to help his friend, and remembers looking at that contract for the first time and being aghast. “My first thought was, Jesus, does anybody sign this?” he said. “The analogy I’ve always made is, the old publishing agreement was to the writer what the New York apartment lease is to a tenant. Because, if you ever read your lease, the only thing that’s permanent is the obligation to pay rent. The building breaks down, you pay rent. It’s very weighted in favor of the landlord. That was the existing agreement in publishing.

“The author needed to deliver a book at a certain time at a certain quality of content, which had to be ‘acceptable’ to the publisher,” Janklow went on. “But there were no parameters on what acceptability meant. So all the publisher had to say was ‘It’s unacceptable,’ and he was out of the contract.”

To Janklow, the real reason for Morrow’s decision was obvious. It was about what had happened in the interval between when the company bought Safire’s book and when the manuscript was handed in: Watergate. Morrow just didn’t want to publish a pre-Watergate book in a post-Watergate world.

Janklow decided to fight. His friend’s reputation was on the line. Hughes referred Janklow to the publisher’s lawyer, Maurice Greenbaum, of Greenbaum, Wolff & Ernst. “It was considered a very literary, high-level firm,” Janklow recalled. “And Maury Greenbaum was the classic aristocratic fourth-generation German Jew, with a pince-nez. So I went to see him, and he said, ‘Let me tell you about how publishing works,’ and off he went in the most sanctimonious manner. I was a serious corporate lawyer, and he was lecturing me like I was a freshman in law school. He said, ‘You’re in a standards business. You can’t force a publisher to publish a book. If the publisher doesn’t want the book, you give the money back and you take back the book. That’s the way the business has worked for hundreds of years.’ When he was finished, I said, ‘Mr. Greenbaum, I’m not trying to force the publisher to publish the book. I’m just trying to force the publisher to pay for it. This acceptability clause was being fraudulently exercised, and I’m going to sue you.’ So Greenbaum’s jaw clenched, and the veins on his forehead popped, and he said, ‘You don’t understand. If you start a lawsuit, I will see to it that you never work in this business again.’”

The case went to arbitration. Janklow had uncovered a William Morrow memo written in the summer of 1973—before Safire handed in his manuscript—saying that because of the Watergate scandal the firm ought to back out of its deal with Safire. Humiliated, Morrow settled, and a jolt of electricity went through the literary world. The likes of Larry Hughes and Maury Greenbaum didn’t have all the power after all, and, as one author after another—Judith Krantz, Barbara Taylor Bradford, and Sidney Sheldon, among others—called Janklow asking him to represent them, he began steadily extracting concessions from publishers, revising the acceptability clause and the financial terms so that authors were no longer held hostage to the whims of their publishers. “The publisher would say, ‘Send back that contract or there’s no deal,’ ” Janklow went on. “And I would say, ‘Fine, there’s no deal,’ and hang up. They’d call back in an hour: ‘Whoa, what do you mean?’ The point I was making was that the author was more important than the publisher.”

Janklow and Miller have never met, and they occupy entirely different social universes. Miller is a class warrior. Janklow is a rich corporate lawyer. Miller organized the ballplayers. The only thing Janklow ever organized was his Columbia Law School reunion. But their stories are remarkably similar. The insurgent comes to a previously insular professional world. He studies the prevailing rules of engagement, and is aghast. (For New Yorkers of a certain age, apparently, nothing represents injustice quite like the landlord’s contract.) And when he mounts an attack on what everyone else had assumed was the impregnable fortress of Capital, Capital crumbles. Comrade Janklow, meet Comrade Miller.

4.

Why did Capital crumble? Maury Greenbaum had no doubt been glowering at upstart agents for years, and no one had ever challenged him before. Bobby Bonds was as deserving of a big contract as his son. So what changed to allow Talent’s value to be realized?

The economists Aya Chacar and William Hesterly offer an answer, in a recent issue of the journal Managerial and Decision Economics, by drawing on the work of Alan Page Fiske. Fiske is a U.C.L.A. anthropologist who argues that people use one of four models to guide the way they interact with one another: communal sharing, equality matching, market pricing, and authority ranking. Communal sharing is a group of roommates in a house who are free to read one another’s books and wear one another’s clothing. Equality matching is a car pool: if I drive your child to school today, you drive my child to school tomorrow. Market pricing is where the terms of exchange are open to negotiation, or subject to the laws of supply and demand. And authority ranking is paternalism: it is a hierarchical system in which “superiors appropriate or pre-empt what they wish,” as Fiske writes, and “have pastoral responsibility to provide for inferiors who are in need and to protect them.”

Fiske’s point isn’t that one of these paradigms is better than the rest. It is that, as human beings, we choose the relational form that’s most appropriate to a particular circumstance. Fiske gives the example of a dinner party. You buy the food at the store, paying more for those items which are considered more valuable. That’s market pricing. Some of the people who come may have been invited because they invited you to a dinner party in the past: that’s equality matching. At the party, everyone is asked to serve himself or herself (communal sharing), but, as the host, you tell your guests where to sit and they do as they are told (authority ranking). Suppose, though, you were to switch the models you were using for your dinner party. If you use equality matching to acquire the food, communal sharing for your invitations, authority ranking for the choice of what to serve, and market pricing for the seating, then you could have the same food, the same guests, and the same venue, but you wouldn’t have a dinner party anymore. You’d have a community fund-raiser. The model chosen in any situation has a profound effect on the nature of the interaction, and that, Chacar and Hesterly maintain, is what explains Talent’s transformation: across the professional world, relational models suddenly changed.

When Miller took over the players’ union, in 1966, the game was governed by the so-called reserve clause. It was a provision interpreted to mean that the club owned the rights to a player in perpetuity—that is, from the moment a player signed a minor-league contract, he belonged to his team, and no longer had any freedom of choice about where he played. Whenever Miller, in his early years as the union’s head, was speaking to players, he would pound away at how wrong that system was. “At every meeting,” he said, “I talked about how the reserve clause made them pieces of property. It took away all their dignity as human beings. It left them without any real bargaining power. You can’t bargain if the employer can simply say, ‘This is your salary and if you don’t like it you are barred from organized baseball.’”

But it wasn’t easy to convince the players that the reserve clause was an indignity. Miller remembers running into the Yankees’ ace Jim Bouton, and listening to Bouton argue, bafflingly, that the reserve clause somehow preserved the continuity and integrity of the game. “I ran into that kind of thinking over and over,” Miller says.

The attitude of the players was a textbook example of authority ranking. The players didn’t see themselves as exploited. Their salaries may not have been as high as they could have been, but they were higher than those of their peers back home. And the owners took care of them in the paternalistic way that superiors, in an authority-ranking system, take care of their subordinates: an owner might slip a player a fifty-dollar bill after a particularly good game, or pick up his medical bills if his child was sick. The players were content with the reserve clause because they had a model in their heads that said their best interests lay in letting the owners call the shots. “The implicit assumption in economics is that agents who find themselves on the short side of a bargaining regime will aggressively seek greater parity,” Chacar and Hesterly write. “This reasoning, however, presumes that agents view economic relationships through a market lens”—and until Miller worked his magic the players simply hadn’t made that leap.

Even on Wall Street, authority ranking held sway. In 1956, the head of Goldman Sachs, Sidney Weinberg, took the Ford Motor Company public, in what was at that point far and away the biggest initial public offering in history. As Charles Ellis, in “The Partnership,” describes the deal:

When Henry Ford had asked Weinberg at the outset what his fee would be, Weinberg had declined to get specific; he offered to work for a dollar a year until everything was over and then let the family decide what his efforts were really worth. Far more than the actual fee, Weinberg always said he appreciated an affectionate, handwritten letter he received from Ford, which says, along with other flattering things, “Without you, it could not have been accomplished.” Weinberg had the letter framed and hung in his office, where he would proudly direct visitors’ attention to it, saying: “That’s the big payoff as far as I’m concerned.” He was speaking more literally than his guests knew. The fee finally paid was estimated at the time to be as high as a million dollars. The actual fee was nowhere near that amount: For two years’ work and a dazzling success, the indispensable man was paid only $250,000. Deeply disappointed, Sidney Weinberg never mentioned the amount.

Let us enumerate all the surreal details of that transaction. A dollar a year? No banker today would so completely defer to a client on the details of his compensation, or suppress his anger at being tossed such a meagre bone, or point triumphantly at a thank-you letter as the real payoff. Weinberg was at that time the leading investment banker on Wall Street. He was so “indispensable” that the parties to the transaction (the Ford I.P.O. was an enormously complex deal involving both the Ford family and the Ford Foundation) fought over who would get to retain him. Weinberg had enormous leverage: a banker in the same position today would have held Ford for ransom. But he chose not to. He didn’t create a bidding war between family and foundation; he chose to swallow his pride and profess to be happy with a handwritten note. That’s authority ranking: Weinberg’s assumption was that he worked for Ford; that if the client gave him two hundred and fifty thousand dollars for two years’ work on the biggest deal of all time then he was obliged to accept it.

5.

What has changed today is not just that there is an extra zero or two on the end of that fee. What has changed is that the investment banker now perceives the social relations between client and banker as open to negotiation. When Roger Martin, of the University of Toronto, says that the talent revolution represented a change in sensibility, this is what he means. Maurice Greenbaum, sneering over his pince-nez at Janklow, represented the last gasp of the old order. He assumed that the social relations of publishing were settled. Janklow, on the other side of the table, was the new breed. He assumed that they were up in the air.

In the mid-nineteen-seventies, private-equity managers like Teddy Forstmann and Henry Kravis pioneered the practice of charging the now common sum of “two and twenty” for managing their clients’ money—that is, an annual service fee of one and a half or two per cent of assets under management, in addition to a twenty-per-cent share of any profits. Two and twenty was the basis on which the modern high-end money-management world was built: it is what has created billionaires where there were once only millionaires. Why did Forstmann do it? Because, he says now, “I wanted to be a principal and not an agent.” He was no longer satisfied with a social order that placed him below his investors. He wanted an order that placed him on the same plane with his investors.

Why did the Hollywood agent Tom Pollock demand, in 1975, that Twentieth Century Fox grant his client George Lucas full ownership of any potential sequels to “Star Wars”? Because Lucas didn’t want to work for the studio. In his apprenticeship with Francis Ford Coppola, he’d seen the frustrations that could lead to. He wanted to work with the studio. “That way, the project could never be buried in studio hell,” Pollock said. “The whole deal came from a fear that the studio wouldn’t agree to make the movie and wouldn’t let it go. What he was looking for was control.”

At that same moment, the world of modelling was transformed, when Lauren Hutton decided that she would no longer do piecework, the way every model had always done, and instead demanded that her biggest client, Revlon, sign her to a proper contract. Here is Hutton in an interview with her fellow-model Paulina Porizkova, in Vogue last year:

PAULINA PORIZKOVA: Can I ask a question? What was modeling like in 1975?
LAUREN: We modeled by the hour before 1974 or 1975. A good working model would have six jobs a day. You’d get a dollar a minute, $60 an hour. . . . So when the Revlon thing came, suddenly it was no longer about $60 an hour. I was getting $25,000 a day, and that was shocking.
PAULINA: How did that happen?
LAUREN: I read an article about a sports guy named Catfish Hunter on the bottom right-hand corner of the New York Times front page one day. It said he was going to get a million-dollar contract. . . . Veruschka had retired; Twiggy had retired; Jean Shrimpton had retired. All the stars were gone. Dick Avedon had no choice but to work with me continually. I yelled over to my boyfriend and asked, “How do you get a contract?” He didn’t even take a second to yell back: “Don’t tell them. Don’t do any makeup ads. Just refuse to do it. Tell all your photographers you want a contract.” Avedon got it like that, and after six months, that was it. . . . It took six months to work out a contract that had never been worked out before, and basically all contracts [after that] were based on that.
PAULINA: Lauren, I salute you. I got my house because of you.

Catfish Hunter, of course, pitched for the New York Yankees. He was the first baseball player Marvin Miller liberated from the tyranny of the reserve clause. The insurgent comes to a previously insular professional world. She studies the prevailing rules of engagement, and when she mounts an attack on what everyone had assumed was the impregnable fortress of Capital, Capital crumbles. Comrade Hutton, meet Comrade Miller.

6.

At one of the many meetings that Miller had with each baseball team when he first took over as union boss, a player stood up and hesitantly asked a question: “We know that you have been working for unions for most of your adult life, and we gather from what general managers and club presidents and owners and league presidents and the commissioner’s office are telling us that they don’t like you. So what we want to know is, can you get along with these people? Or is this going to be a perpetual conflict?”

Miller tried to answer as truthfully as he could: “I said, ‘I think I can get along with most people. But you have to remember that labor relations in this country are adversarial. The interests of the owners and your interests are diametrically opposed on many things, and you can’t hold up as a standard whether they like me.’ Then I said, ‘I’m going to go further. If at any time during my tenure here you find there’s a pattern of owners and owners’ officials singing my praises, you’d better fire me. I’m not kidding.’”

Miller knew firsthand, from his days with the United Steelworkers, how important the lessons of class solidarity and confrontation were. The men who worked in the mills had few financial or social resources. They could not threaten to go to the mill across the street, because their own mill could easily replace them. But a roller operator at U.S. Steel, by standing firm with his fellow-steelworkers, saw the $2.31 an hour he made in 1948 rise to $2.56 in 1950, $2.81 in 1952, up again the next year to $2.90, and then to $2.95, and on and on, rising steadily to $3.57, in 1958.

Miller’s goal was to get his ballplayers to think like steelworkers—to persuade members of the professional class to learn from members of the working class. His great insight was that if you brought trade unionism to the world of talent—to a class with great social and economic resources, whose abilities were exceptional and who couldn’t easily be replaced—you wouldn’t be measuring your success in fractions of a dollar anymore. The class struggle that characterized the conventional world of organized labor would turn into a rout. And so it did: the share of total baseball revenues paid to baseball players in salary went from ten per cent in the pre-Miller years to near fifty per cent by the beginning of the eighties. By 2003, the minimum salary in baseball was fifty times as high as it was when Miller took over, and the average salary was a hundred and thirty-four times as high. In 2005, Barry Bonds, the little boy playing at Miller’s feet, was paid twenty-two million dollars by the San Francisco Giants—which is not only more than what his father’s teammates collectively received in their lifetimes but more than a good number of baseball’s owners ever made in a single year.

Miller knew that the owners would never sing his praises. How could they, after he had so thoroughly trounced them? Twenty-eight years after his retirement, Miller is still not even a member of baseball’s Hall of Fame, despite the fact that no one outside of Babe Ruth and Jackie Robinson has had as great an impact on the modern game.

“My wife was alive and well one of the last times they voted—three years ago this December,” Miller said, shaking his head at the sheer pettiness of it all. “We had a long talk. She said she had come to the conclusion that some things had had their time, and that this is not funny anymore. She said, ‘You can do what you want, of course. But I think you would do well to ask them not to nominate you anymore.’ I thought about it, and she was right. So I wrote a letter to the group that did the nominating, and I thanked them, but I said we’d all be better off if they didn’t do it again.”

He shrugged. He hadn’t taken over the Players Association for the money. He lives in a plain vanilla high-rise on Manhattan’s Upper East Side. He doesn’t summer in the Hamptons. He was a union man—and in his world you measured your success by the size of the bite you took out of Capital. “I guess I have to remember what I said to the players originally,” he said, after a moment’s reflection. “If the owners praise me, fire me.”

But, if one side so thoroughly dominates another in the marketplace, is it really market pricing anymore? A negotiation in which a man can get paid twenty-two million dollars for hitting a baseball is not really a negotiation. It is a capitulation, and the lingering question left by Miller’s revolution is whether the scales ended up being tilted too far in the direction of Talent—whether what Talent did with its newfound power was simply create a new authority ranking, this time with itself at the top. A few years ago, a group of economists looked at more than a hundred Fortune 500 firms, trying to figure out what predicted how much money the C.E.O. made. Compensation, it turned out, was only weakly related to the size and profitability of the company. What really mattered was how much money the members of the compensation committee of the board of directors made in their jobs. Pay is not determined vertically, in other words, according to the characteristics of the organization an executive works for; it is determined horizontally, according to the characteristics of the executive’s peers. They decide, among themselves, what the right amount is. This is not a market.

Chacar and Hesterly observe that in the modern knowledge economy the most successful and profitable industries aren’t necessarily those with the most talented employees. That’s because Talent can demand so much in salary that there’s no money left over for shareholders. Retail banking—the branch down the street where you have your checking account—is a more reliable source of profits, Roger Martin says, than the billion-dollar deal-making of investment banks. What do you do when your branch manager threatens to walk if he doesn’t get a big raise? You let him walk. But you can’t do that if you’re Lazard. “The problem with investment banking is that you need investment bankers,” Martin says, “and they become superstars on their own, and, oops, you got a problem—which is that they take all the money.”

When, a few years ago, Robert Nardelli left Home Depot, he was given a severance package worth two hundred and ten million dollars, even though the board of the company was unhappy with his performance. He got that much because he wrote that number into his contract when he was hired, and the reason Home Depot agreed to that provision back then is that Capital has become accustomed to saying yes to Talent, even in cases where Talent does not end up being all that Talented. The problem with the old system of authority ranking was arrogance—the assumption that the world ought to be ordered according to the whims of Capital. The problem with the new order is greed—the assumption that Talent deserves whatever it can extort. As Martin says, the attitude of the professional class has gone from “how much is enough” to “how much can I get.”

A decade before Marvin Miller came to baseball, Stan Musial, one of the greatest players in the history of the game, had his worst season as a professional, hitting seventy-six points below his career average. Musial then went to the general manager of his team and asked for a twenty-per-cent pay cut from his salary of a hundred thousand dollars. Miller would be outraged by that story: even at his original salary, Musial was grossly underpaid. Miller would also point out that Musial’s team would not have unilaterally raised his salary by twenty per cent if he’d performed brilliantly that season. In both cases, Miller would have been absolutely right. But it is hard—in an era in which failed executives are rewarded like kings and hedge-fund managers live like the robber barons of the Gilded Age—not to be just a little nostalgic for the explanation that Musial gave for his decision: “There wasn’t anything noble about it. I had a lousy year. I didn’t deserve the money.”

Who really rescued General Motors?

1.

In February of 2009, Steven Rattner was selected by the Obama Administration to oversee the federal bailout of General Motors and Chrysler. It was not a popular choice. Rattner was a Wall Street financier with no expertise in the automobile business. The head of the United Auto Workers, the chairman of Ford, and a number of congressmen from Michigan all complained that the White House should have hired someone with an industry background. But, as Rattner makes clear in “Overhaul” (Houghton Mifflin Harcourt; $27), his account of the experience, the critics misunderstood his role. “This was not a managerial job,” he writes. “It was a restructuring and private-equity assignment,” and private equity was Rattner’s forte. He made his living buying troubled and mismanaged companies, turning them around, and then taking them public again—and that’s exactly what the Obama Administration wanted him to do in Detroit. “Overhaul” is not a Washington memoir, even though it is set in Washington, and it involves one of the most deeply politicized issues in recent memory. It is a Wall Street memoir, a book about one of the biggest private-equity deals in history. The result is fascinating—although perhaps not entirely in the ways that its author intended.

In the past twenty-five years, private equity has risen from obscurity to become one of the most powerful forces in the American economy. Private-equity firms collectively make hundreds of billions of dollars in investments every year. The industry’s most prominent player, K.K.R., was by 2007 the fourth-largest employer in the world. Traditional investors, like Warren Buffett, scout for companies that the market has overlooked or undervalued, and buy stakes in them with an eye to the long term. Private-equity investors are activists. They acquire firms outright. Then they bring in their own specialists to “fix” the company. Typically, a private-equity firm plans to take its acquisition public again in three to five years, and the theory behind the enterprise is that buying, fixing, and reselling companies can be far more profitable than Buffett-style “buy and hold” investing. In one of the deals that put private equity on the map, Forstmann Little acquired Dr Pepper for six hundred and fifty million dollars in 1984, cut costs and spun off the company’s less glamorous divisions—such as its textile business and its Canada Dry operation—and then took it public again within three years, at a reported gain to investors of more than eight hundred per cent. An investor like Warren Buffett has to think that he is smarter than the market. Private-equity managers aim higher. They see themselves as smarter than the managers of the companies they are buying. It is not a field for someone with any obvious deficits in self-confidence, and Rattner, a cofounder of the Quadrangle Group, was long considered one of the most intellectually able men on Wall Street.

“Team Auto,” as Rattner refers to the group that he assembled to help supervise the bailout, consisted of about a dozen people, some in their twenties and early thirties. They started work in March of 2009. One of the first major issues was whether to save Chrysler. To settle the question, Rattner tells us, Team Auto gathered in the office of Larry Summers, the President’s chief economic adviser. The case against Chrysler was that most of the jobs lost by letting the company fail would eventually be offset by gains made by Ford and General Motors, as those companies picked up Chrysler’s old customers. Letting Chrysler fail would make Ford and G.M. stronger. But did the team really want several hundred thousand jobs to disappear—even if the losses were short-term—in the middle of a severe recession? Chrysler’s failure would also mean that Michigan’s unemployment-insurance fund, for starters, would need to be bailed out. One of Rattner’s team members made a counter-argument: “Given the uncertainty in our economy, it was better to invest $6 billion for a meaningful chance that Chrysler would survive than to invest several billion dollars in its funeral.” Summers put the matter to a vote. The tally was 4-3 in favor of letting Chrysler die. When the vote came to Rattner, he said that it should live. Summers agreed. Chrysler lived.

Next up was General Motors. Team Auto’s idea was to bypass the traditional bankruptcy procedure, in which the entire company would be restructured through a protracted process of negotiation with creditors. Instead, the company would be divided into two. “Old G.M.” would contain the unwanted factories and debts and unused assets—all of which would be wound down and sold over time. The best parts of the automaker would be transferred to “New G.M.,” an entity funded and owned by the American taxpayer. The task of carving out the new entity was enormously complex, and involved rewriting countless contracts with unions, suppliers, and creditors. To minimize disruption to the company’s operations, Team Auto worked with lightning speed. Rattner would rise at five-thirty, be on the treadmill at the gym by six, and in the office by seven. Lunch was a tuna-fish sandwich at his desk. He wouldn’t be back at his rented condo in Foggy Bottom until eight or nine, catching up on the day’s e-mails before heading to bed. One of Rattner’s team members spent his first month on a friend’s couch in Virginia. Another worked around the clock during the week, and then made the five-hour drive every weekend to see his family, in Pittsburgh. None had any time for ceremony. At one point, two members of Team Auto, Brian Osias and Clay Calhoon, called for a sit-down with senior Chrysler executives at eleven on a Saturday morning. “The executives were almost all middle-aged industry veterans,” Rattner recounts. “Osias was thirty-two years old and Calhoon was twenty-six, and both looked younger than their years.” Calhoon announced to the room, “We’re going to sit at this table until we’re done.” They were there until 2 A.M. on Sunday. On another occasion, the Team Auto member Harry Wilson had a meeting with senior G.M. officials, who arrived with a hundred-and-fifty-page document. Rattner writes, “ ‘What’s this?’ Harry asked. ‘The agenda,’ came back the reply. Harry, almost laughing, said, ‘You can’t run a meeting with a 150-page agenda!’ ” He substituted his own. Rattner took the job as Auto Czar in February. He was back home in New York, mission accomplished, by July.

Rattner has since run into some trouble. Recently, an S.E.C. investigation into a “pay to play” scandal involving the New York state pension fund led to sanctions against Rattner, who has reportedly accepted a two-year ban from the securities business. But there is no question that the auto bailout represents one of the signature accomplishments both of his career and of the Obama Administration. In August, G.M. posted its second quarterly profit in a row, its best result in three years. Chrysler, for its part, is now safely in the hands of Fiat, at least for the time being. Two years ago, when the heads of G.M., Ford, and Chrysler came to the Senate in the hope of gaining relief, no one could have imagined such a favorable outcome. At the time, the Center for Automotive Research estimated that the collapse of the Big Three would result in as many as three million lost jobs. So soon after the Wall Street rescue, there seemed little public or political appetite for another taxpayer bailout. The reaction of Richard Shelby, the ranking Republican on the Senate banking committee, was typical. “I don’t believe they’ve got good management,” he said of G.M. “They don’t innovate. They are a dinosaur. . . . I don’t believe the twenty-five billion dollars they’re talking about will make them survive. It’s just postponing the inevitable.” The reason to bring in a private-equity expert is that he would never be so defeatist. To someone like Rattner, there is nothing wrong with giving a dinosaur money if you think you can fix the dinosaur. One might even say that the private-equity investor prefers the dinosaur, because dinosaurs are cheap, which increases the potential profit at the end. And then the world will look at him with awe and say, “Wow, you turned around a dinosaur”—even if, on closer examination, that wasn’t what happened at all.

2.

Steven Rattner never took to Rick Wagoner, the C.E.O. of General Motors. The problem started with Wagoner’s testimony before the Senate, on the day in November of 2008 when he and his fellow auto C.E.O.s flew their private jets down to Washington to ask for taxpayers’ money.

“I do not agree with those who say we are not doing enough to position G.M. for success,” Wagoner said, in his testimony. “What exposes us to failure now is not our product lineup, is not our business plan, is not our employees and their willingness to work hard, is not our long-term strategy. What exposes us to failure now is the global financial crisis, which has severely restricted credit availability and reduced industry sales to the lowest per-capita level since World War II.”

To Rattner, that comment summed up everything that was wrong with G.M. Its leaders were arrogant and out of touch. Their sales forecasts were bizarrely optimistic. Members of Team Auto had “surreal” conversations with the company’s C.F.O., Ray Young. Rattner looked in vain for a sense of urgency. In one of G.M.’s endless PowerPoint presentations, he saw a product-price chart that included no comparison data for G.M.’s competitors. “Why would G.M. present the data in such a useless manner?” he wonders. “Whom were they trying to fool?”

The culprit in all this, Rattner believed, was Wagoner. When the two men met, Rattner was struck by Wagoner’s combination of “amiability and remoteness.” The previous day, Team Auto had met with the Chrysler C.E.O., Robert Nardelli, and his engaged and direct manner had impressed Rattner. But Wagoner “gave listeners very little to grab onto.” Rattner writes:

He made a few opening comments and then turned over the floor to his lieutenants, occasionally interjecting a remark here and there but mostly presiding. While I respected the collegiality this implied, it left nearly everyone with the impression that he held himself aloof. If Rick had taken a more central role it would probably not have affected our assessment of the company, but might have affected our judgment of him.

Wagoner, in Rattner’s judgment, simply didn’t have what it would take:

Born and bred as an insider, Wagoner never displayed any fortitude for remaking GM’s hidebound corporate culture. He operated as an incrementalist, and a slow-moving one at that. His guiding star appeared to be an unshakable faith that GM was not like any other company; it was General Motors. Whatever happened to other companies couldn’t possibly happen to GM.

Wagoner’s testimony at the Senate hearing, to Rattner’s mind, had been typical: “He and his team seemed certain that virtually all of their problems could be laid at the feet of some combination of the financial crisis, oil prices, the yen-dollar exchange rate, and the UAW.” In fact, Wagoner’s corporate team was simply dysfunctional: “A top-down, hierarchical approach” afflicted G.M.’s upper management. Wagoner and his senior executives “involved themselves in decisions that should have been left to executives several layers beneath them.”

This is a perplexing bundle of criticisms. We learn that Wagoner is aloof and excessively collegial—and also a meddler. We learn that Wagoner is perhaps unreasonably partisan toward his own company. We learn that his testimony before Congress rubbed Rattner the wrong way, and we learn that his subordinates gave flawed PowerPoint presentations. What we don’t learn is whether Wagoner was any good at the job he was hired to do—that is, run General Motors—which is a critical omission, because by that criterion Wagoner actually comes off very well.

Wagoner was not a perfect manager, by any means. Unlike Alan Mulally, the C.E.O. at Ford, he failed to build up cash reserves in anticipation of the economic downturn, which might have kept his company out of bankruptcy. He can be faulted for riding the S.U.V. wave too long, and for being too slow to develop a credible small-car alternative. But, especially given the mess that Wagoner inherited when he took over, in 2000—and the inherent difficulty of running a company that had to pay pension and medical benefits to half a million retirees—he accomplished a tremendous amount during his eight-year tenure. He cut the workforce from three hundred and ninety thousand to two hundred and seventeen thousand. He built a hugely profitable business in China almost from scratch: a G.M. joint venture is the leading automaker in what is now the world’s largest automobile market. In 1995, it took forty-six man-hours to build the typical G.M. car, versus twenty-nine hours for the typical Toyota. Under Wagoner’s watch, the productivity gap closed almost entirely.

Most important, Wagoner—along with his counterparts at Ford and Chrysler—was responsible for a historic agreement with the United Auto Workers. Under that contract, which was concluded in 2007, new hires at G.M. receive between fourteen and seventeen dollars an hour—instead of the twenty-eight to thirty-three dollars an hour that preëxisting employees get—and give up all rights to the traditional retiree benefit package. The 2007 deal also transferred all responsibility for paying for the health care of G.M.’s retirees to a special fund, administered by the U.A.W. It is hard to overstate the importance of that second provision. G.M. has five hundred and seventeen thousand retirees. Between 1993 and 2007, the company paid out a hundred and three billion dollars to those former workers—a burden unimaginable to its foreign competitors. In the 2007 deal, G.M. agreed to make a series of lump-sum payments to the U.A.W. over ten years, worth some thirty-two billion dollars—at which point the company would be free of its outsized retiree health-care burden. It is estimated that, within a few years, G.M.’s labor costs—which were once almost fifty per cent higher than those of the domestic operations of Toyota, Nissan, and Honda—will be lower than its competitors’.

In the same period, G.M.’s product line was transformed. In 1989, to give one example, Chevrolet’s main midsize sedan had something like twice as many reported defects as its competitors at Honda and Toyota, according to the J. D. Power “initial quality” metrics. Those differences no longer exist. The first major new car built on Wagoner’s watch—the midsize Chevy Malibu—scores equal to or better than the Honda Accord and Toyota Camry. G.M. earned more than a billion dollars in profits in the last quarter because American consumers have started to buy the cars that Wagoner brought to market—the Buick Regal and LaCrosse, the Envoy, the Cadillac CTS, the Chevy Malibu and Cruze, and others. They represent the most competitive lineup that G.M. has fielded since the nineteen-sixties. (Both the CTS and the Malibu have been named to Car and Driver’s annual “10 Best Cars” list.)

What Wagoner meant in his testimony before the Senate, in other words, was something like this: “At G.M., we are finally producing world-class cars. We have brought our costs, quality, and productivity into line with those of our competitors. We have finally disposed of the crippling burden of our legacy retiree costs. We have expanded into the world’s fastest-growing markets more effectively than any other company in the United States. But the effort required to bring about that transformation has left our balance sheet thin—and, at the very moment that we need a couple of years of normal economic activity to refill our coffers, auto sales have fallen off a cliff. Do you mind giving us a hand until things get back to normal?” This is not arrogance. It happens to be something very close to the truth. And, when senators like Richard Shelby seem to have no idea what your company has accomplished in the past decade, forcefully making the case for your own company’s merits is probably a sound strategy.

Rattner was perfectly aware of the strides that G.M. had made. When members of Team Auto toured some of G.M.’s factories, he tells us, they came back marvelling at the “truly collaborative” relationship that existed between management and labor, and at how “consistent and disciplined” the manufacturing process was. One of his first major conclusions, after studying up on the auto industry, was that “U.S. automakers were no longer as pathetically inefficient as people thought.” What Rattner cannot seem to see, though, is that his contempt for G.M.’s leadership is contradicted by the evidence of the company’s accomplishments. How can Wagoner be a slow-moving incrementalist when, in less than a decade, he took the world’s largest company from an uncompetitive monolith to a worthy competitor of Toyota and Honda? “GM’s day-to-day workings were solid,” Rattner writes at one point. “It was the head that was rotting.” But, if the head was rotting, how did the day-to-day operations become solid?

It is not hard to understand what is going on here. Team Auto was engaged in an act of financial engineering: it used the power of the bankruptcy process to rid G.M. of some of the liabilities that had been holding it back. This was cleverly and swiftly done. It was badly needed. But, at the end of the day, cleaning up a balance sheet is cleaning up a balance sheet. Kristin Dziczek, of the Center for Automotive Research, estimates that the “new” G.M. is roughly eighty-five per cent the product of the work that Wagoner, in concert with the U.A.W., did in his eight years at the company and fifteen per cent the product of Team Auto’s efforts. That seems about right: car companies stand or fall, ultimately, on the strength of their product, and teaching a giant company how to build a quality car again is something that can’t be done on the private-equity timetable. The problem is that no private-equity manager wants to be thought of as a mere financial engineer. The mythology of the business is that the specialists who swoop in from Wall Street are not economic opportunists, buying, stripping, and selling companies in order to extract millions in fees, but architects of rebirth. Rattner wants us to think of this as his G.M. “As we drafted press statements and fact sheets,” he writes, “I would constantly force myself to write that ‘GM’ had done such and such. Just once I would have liked to write ‘we’ instead.”

So what did Rattner do with Wagoner? He fired him, of course: “Though I’d met Wagoner only once, to my mind there was no question but that he had to go.” (Wait: once?) If this was to be Rattner’s G.M., it needed to have Rattner’s man at the helm: “After nearly a decade of experience as a private equity manager, I believed in a bedrock principle of that business: put money behind only a bankable team.” Bankable does not mean a self-effacing C.E.O., with a heavy PowerPoint deck and a manner that leaves his listeners with nothing to hang on to. Bankable means a star. Rattner called Jack Welch for advice. He consulted with headhunters. He pondered the question during his morning runs on the treadmill until he found his leading man—Ed Whitacre, a former C.E.O. of A.T.&T. And, at this point, a book that began as a case study in twenty-first-century economic realities descends into schoolboy romance.

“His reputation was for toughness,” Rattner says of Whitacre. “I remembered having once read a Business Week story that described him killing rattlesnakes on his Texas ranch (he would pin down the snake with a stick and crush its head with a rock). His flinty image was reinforced by his lean, six-foot-four frame, his full head of gray hair, and his laconic speech. Ed believed that we are born with two ears and one mouth and we should use them in rough proportion.”

The two men have a date in Washington. Rattner chooses the steakhouse Bobby Van’s, “on the theory that Ed, being Texan, would want red meat.” Whitacre agrees to take the job, even though he has never worked for a manufacturing company in his life. Rattner follows Whitacre on his first trip to G.M. headquarters. “Having had no experience as a corporate executive, I was eager to observe a top-notch one at work,” Rattner explains. He trails along as Whitacre meets with G.M. management: “For all his reputation as a tough guy, I was fascinated to see him take the time to get to know the individuals as people. By the end of the day, he could talk knowledgeably about each executive’s background, personality, and aspirations.” The top company brass gather in the chairman’s conference room, and the lanky snake-killer rises to address the group:

The men and women listened intently as Ed explained in his measured Texas drawl that he had no interest in presiding over a second-rate company. He praised the people. He stressed the need to make decisions. . . . Then, looking straight into the eyes of one attendee after another, he said: “I’m used to winning and have no intention of seeing that change at GM.” The GM executives, unused to this sort of bluntness, were impressed, and so was I. It was superlative leadership as I had always imagined it.

Whitacre makes commercials for G.M., with himself as the star. He takes lunch in the food court, mingling with the rank and file. “Hi, I’m Ed. Who’re you?” he’ll say to some dumbstruck middle manager in the elevator. He walks into one meeting, listens for a while, says, “You are all smart guys, right? You know what to do,” then walks out. He flies back to Texas every weekend, just to keep things in perspective. He is the face of the new G.M., the man handpicked to lead one of America’s greatest companies through its time of gravest crisis. And then one day last August—just nine months into his term as C.E.O.—Rattner’s superlative leader suddenly and mysteriously quits. Does this make Rattner question his own judgment? That’s not the private-equity way. “I shared the board’s disappointment,” he writes briskly, and moves on with his narrative of triumph.

3.

Early on in his time in Washington, Rattner realized that Team Auto would have to make “at least one trip” to Detroit in order to

avoid more criticism from the heartland. By early March, we could delay no more. All the same, we were determined not to waste more than a day, and so arranged a packed itinerary that would touch all the right bases. To satisfy the futurists, we would visit GM’s Technical Center and drive its next-generation vehicles. For traditionalists, we would tour an old-line Chrysler assembly plant. And to acknowledge the importance of labor, we would visit with UAW leaders at their headquarters, Solidarity House.

One would have thought that a man as savvy as Rattner would have made the Detroit visit sound a little less of a burden. The Auto Czar should want to see the industry he is supposedly fixing, shouldn’t he? But this is what makes “Overhaul” so unexpectedly fascinating. It is the product of someone so convinced of the value of his contribution, and of the private-equity model, that he feels no need to hide his condescension.

Team Auto makes sure to rent a car at the airport with G.P.S., “since none of us knew our way around Detroit.” At the U.A.W., Rattner looks with pleasure at what he takes to be evidence of his own industry’s handiwork: “Far from browbeating us, they gave a thorough presentation that included as many details and figures as investment bankers would have used. (I later learned that my old firm Lazard had helped prepare it.)”

Then it was on to G.M. and finally to Chrysler. But not for long, because time was short and the real work of saving Detroit, of course, had nothing to do with Detroit. “We walked among the vehicles—sedans and trucks and even a Fiat 500—as the Chrysler people talked about advanced hybrid power trains and new, environmentally friendly diesels,” Rattner continues. “But by this point our goal was not to miss our flight back to the mountain of work that awaited us back in Washington.”