The Televisionary

Big business and the myth of the lone inventor

1.

Philo T. Farnsworth was born in 1906, and he looked the way an inventor of that era was supposed to look: slight and gaunt, with bright-blue exhausted eyes, and a mane of brown hair swept back from his forehead. He was nervous and tightly wound. He rarely slept. He veered between fits of exuberance and depression. At the age of three, he was making precise drawings of the internal mechanisms of locomotives. At six, he declared his intention to follow in the footsteps of Thomas Edison and Alexander Graham Bell. At fourteen, while tilling a potato field on his family’s farm in Idaho, he saw the neat, parallel lines of furrows in front of him, and it occurred to him–in a single, blinding moment–that a picture could be sent electronically through the airwaves in the same way, broken down into easily transmitted lines and then reassembled into a complete picture at the other end. He went to see his high-school science teacher, and covered the blackboard with drawings and equations. At nineteen, after dropping out of college, he impressed two local investors with his brilliance and his conviction. He moved to California and set up shop in a tiny laboratory. He got married on an impulse. On his wedding night, he seized his bride by the shoulders and looked at her with those bright-blue eyes. “Pemmie,” he said. “I have to tell you. There is another woman in my life–and her name is Television.”

Philo T. Farnsworth was the inventor of television. Through the nineteen-thirties and forties, he engaged in a heroic battle to perfect and commercialize his discovery, fending off creditors and predators, and working himself to the point of emotional and physical exhaustion. His nemesis was David Sarnoff, the head of RCA, then one of the most powerful American electronics companies. Sarnoff lived in an enormous Upper East Side mansion and smoked fat cigars and travelled by chauffeured limousine. His top television researcher was Vladimir Zworykin, the scion of a wealthy Russian family, who wore elegant three-piece suits and round spectacles, had a Ph.D. in physics, and apprenticed with the legendary Boris Rosing at the St. Petersburg Institute of Technology. Zworykin was never more than half a step behind Farnsworth: he filed for a patent on his own version of electronic television two years after Farnsworth had his potato-field vision. At one point, Sarnoff sent Zworykin to Farnsworth’s tiny laboratory, on Green Street in San Francisco, and he stayed for three days, asking suspiciously detailed questions. He had one of Farnsworth’s engineers build the heart of Farnsworth’s television system–the so-called image dissector–before his eyes, and then picked the tube up and turned it over in his hands and said, ominously, “This is a beautiful instrument. I wish I had invented it myself.” Soon Sarnoff himself came out to Green Street, swept imperially through the laboratory, and declared, “There’s nothing here we’ll need.” It was, of course, a lie. In the nineteen-thirties, television was not possible without Philo Farnsworth’s work. But in the end it didn’t much matter. Farnsworth’s company was forced out of the TV business. Farnsworth had a nervous breakdown, and Sarnoff used his wealth and power to declare himself the father of television.

The life of Philo Farnsworth is the subject of two new books, “The Last Lone Inventor,” by Evan I. Schwartz (HarperCollins; $24.95), and “The Boy Genius and the Mogul,” by Daniel Stashower (Broadway; $24.95). It is a wonderful tale, riveting and bittersweet. But its lessons, on closer examination, are less straightforward than the clichés of the doomed inventor and the villainous mogul might suggest. Philo Farnsworth’s travails make a rather strong case for big corporations, not against them.

2.

The idea of television arose from two fundamental discoveries. The first was photoconductivity. In 1872, Joseph May and Willoughby Smith discovered that the electrical resistance of certain metals varied according to their exposure to light. And, since everyone knew how to transmit electricity from one place to another, it made sense that images could be transmitted as well. The second discovery was what is called visual persistence. In 1880, the French engineer Maurice LeBlanc pointed out that, because the human eye retains an image for about a tenth of a second, if you wanted to transmit a picture you didn’t have to send it all at once. You could scan it, one line at a time, and, as long as you put all those lines back together at the other end within that fraction of a second, the human eye would be fooled into thinking that it was seeing a complete picture.
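
LeBlanc’s insight is easy to mimic in a few lines of code. What follows is a minimal sketch of my own (nothing from the period or from either book): the “picture” is just a small grid of brightness values, sent one horizontal line at a time and stitched back together at the receiving end.

```python
# A toy illustration of LeBlanc's scan-and-reassemble idea. The "picture" is
# a small grid of brightness values; nothing here models real hardware.

def scan_lines(image):
    """Yield the picture one horizontal line at a time, top to bottom."""
    for row in image:
        yield list(row)

def reassemble(lines, height):
    """Collect the transmitted lines back into a complete picture."""
    picture = [line for line in lines]
    assert len(picture) == height, "every line must arrive within one frame"
    return picture

if __name__ == "__main__":
    original = [
        [0, 0, 1, 0, 0],
        [0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
    ]
    received = reassemble(scan_lines(original), height=len(original))
    # If all the lines arrive within roughly a tenth of a second, persistence
    # of vision makes the reassembled rows read as one complete picture.
    print(received == original)  # prints: True
```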

The hard part was figuring out how to do the scanning. In 1883, the German engineer Paul Nipkow devised an elaborate and ultimately unworkable system using a spinning metal disk. The disk was punctured with a spiral of small holes, and, as it spun, one line of light after another was projected through the holes onto a photocell. In 1908, a British electrical engineer named A. A. Campbell Swinton suggested that it would make more sense to scan images electronically, using a cathode ray. Philo Farnsworth was the first to work out how to do that. His image dissector was a vacuum tube with a lens at one end, a photoelectric plate right in front of the lens to convert the image from light to electricity, and then an “anode finger” to scan the electrical image line by line. After setting up his laboratory, Farnsworth tinkered with his makeshift television camera day and night for months. Finally, on September 7, 1927, he was ready. His wife, Pem, was by his side. His tiny television screen was in front of him. His brother-in-law, Cliff Gardner, was manning the television camera in a room at the other end of the lab. Stashower writes:

Squaring his shoulders, Farnsworth took his place at the controls and flicked a series of switches. A small, bluish patch of light appeared at the end of the receiving tube. Farnsworth lifted his head and began calling out instructions to Gardner in the next room.

“Put in the slide, Cliff,” Farnsworth said.

“Okay, it’s in,” Gardner answered. “Can you see it?”

A faint but unmistakable line appeared across the receiving end of the tube. As Farnsworth made some adjustments, the line became more distinct.

“Turn the slide a quarter turn, Cliff,” Farnsworth called. Seconds later, the line on the receiving tube rotated ninety degrees. Farnsworth looked up from the tube. “That’s it, folks,” he announced with a tremor in his voice. “We’ve done it–there you have electronic television.”

Both Stashower and Schwartz talk about how much meaning Farnsworth attached to this moment. He was a romantic, and in the romance of invention the creative process consists of two discrete, euphoric episodes, linked by long years of grit and hard work. First is the magic moment of conception: Farnsworth in the potato field. Second is the moment of execution: the day in the lab. If you had the first of those moments and not the second, you were a visionary. But if you had both you were in a wholly different category. Farnsworth must have known the story of King Gillette, the bottle-cap salesman, who woke up one morning in the summer of 1895 to find his razor dull. Gillette had a sudden vision: if all he wanted was a sharp edge, then why should he have to refashion the whole razor? Gillette later recalled:

As I stood there with the razor in my hand, my eyes resting on it as lightly as a bird settling down on its nest, the Gillette razor was born–more with the rapidity of a dream than by a process of reasoning. In a moment I saw it all: the way the blade could be held in a holder; the idea of sharpening the two opposite edges on the thin piece of steel; the clamping plates for the blade, with a handle half-way between the two edges of the blade…I stood there before the mirror in a trance of joy. My wife was visiting Ohio and I hurriedly wrote to her: “I’ve got it! Our fortune is made!”

If you had the vision and you made the vision work, then the invention was yours–that was what Farnsworth believed. It belonged to you, just as the safety razor belonged to King Gillette.

But this was Farnsworth’s mistake, because television wasn’t at all like the safety razor. It didn’t belong to one person. May and Smith stumbled across photoconductivity, and inspired LeBlanc, who, in turn, inspired Swinton, and Swinton’s idea inspired inventors around the world. Then there was Zworykin, of course, and his mentor Boris Rosing, and the team of Max Dieckmann and Rudolf Hell, in Germany, who tried to patent something in the mid-twenties that was virtually identical to the image dissector. In 1931, when Zworykin perfected his own version of the television camera, called the Iconoscope, RCA did a worldwide patent search and found very similar patent applications from a Hungarian named Kolomon Tihany, a Canadian named François Henrouteau, a Japanese inventor named Kenjiro Takayanagi, two Englishmen, and a Russian. Everyone was working on television and everyone was reading everyone else’s patent applications, and, because television was such a complex technology, nearly everyone had something new to add. Farnsworth came up with the first camera. Zworykin had the best early picture tube. And when Zworykin finally came up with his own camera it was not as good as Farnsworth’s camera in some respects, but it was better in others. In September of 1939, when RCA finally licensed the rights to Farnsworth’s essential patents, it didn’t replace the Iconoscope with Farnsworth’s image dissector. It took the best parts of both.

It is instructive to compare the early history of television with the development, some seventy-five years earlier, of the sewing machine. As the historian Grace Rogers Cooper points out, a sewing machine is really six different mechanisms in one–a means of supporting the cloth, a needle and a combining device to form the stitch, a feeding mechanism to allow one stitch to follow another, a means of insuring the even delivery of thread, and a governing mechanism to insure that each of the previous five steps is performed in sequence. Cooper writes in her book “The Sewing Machine”:

Weisenthal had added a point to the eye-end of the needle. Saint supported the fabric by placing it in a horizontal position with a needle entering vertically, Duncan successfully completed a chainstitch for embroidery purposes, Chapman used a needle with an eye at its point and did not pass it completely through the fabric, Krems stitched circular caps with an eye-pointed needle used with a hook to form a chainstitch, Thimmonier used the hooked needle to form a chainstitch on a fabric laid horizontally, and Hunt created a new stitch that was more readily adapted to sewing by machine than the hand stitches had been.

The man generally credited with combining and perfecting these elements is Elias Howe, a machinist from Boston. But even Howe’s patents were quickly superseded by a new round of patents, each taking one of the principles of his design and either augmenting it or replacing it. The result was legal and commercial gridlock, broken only when, in 1856, Howe and three of the leading sewing-machine manufacturers (among them Isaac Merritt Singer, who gave the world the sewing-machine foot pedal) agreed to pool their patents and form a trust. It was then that the sewing-machine business took off. For the sewing machine to succeed, in other words, those who saw themselves as sewing-machine inventors had to swallow their pride and concede that the machine was larger than they were–that groups, not individuals, invent complex technologies. That was what Farnsworth could not do, and it explains the terrible turn that his life took.

3.

David Sarnoff’s RCA had a very strict policy on patents. If you worked for RCA and you invented something patentable, it belonged to RCA. Your name was on the patent, and you got credit for your work. But you had to sign over your rights for one dollar. In “The Last Lone Inventor,” Schwartz tells the story of an RCA engineer who thought the system was so absurd that he would paste his one-dollar checks to the wall of his office–until the accounting department, upset with the unresolved balance on its books, steamed them off and forced him to cash them. At the same time, Sarnoff was a patient and generous benefactor. When Zworykin and Sarnoff discussed television for the first time, in 1929, Zworykin promised the RCA chief that he would create a working system in two years, at a cost of a hundred thousand dollars. In fact, it took more than ten years and fifty million dollars, and through all those years–which just happened to coincide with the Depression–Sarnoff’s support never wavered. Sarnoff “hired the best engineers out of the best universities,” Schwartz writes. “He paid them competitive salaries, provided them with ample research budgets, and offered them a chance to join his crusade to change the world, working in the most dynamic industry the world had ever seen.” What Sarnoff presented was a compromise. In exchange for control over the fruits of invention, he gave his engineers the freedom to invent.

Farnsworth didn’t want to relinquish that control. Both RCA and General Electric offered him a chance to work on television in their laboratories. He turned them both down. He wanted to go it alone. This was the practical consequence of his conviction that television was his, and it was, in retrospect, a grievous error. It meant that Farnsworth was forced to work in a state of chronic insecurity. He never had enough money. He feuded constantly with his major investor, a man named Jesse McCargar, who didn’t have the resources to play the television game. At the time of what should have been one of Farnsworth’s greatest triumphs–the granting of his principal patents–McCargar showed up at the lab complaining about costs, and made Farnsworth fire his three star engineers. When, in 1928, the Green Street building burned down, a panicked Farnsworth didn’t know whether or not his laboratory was insured. It was, as it happened, but a second laboratory, in Maine, wasn’t, and when it burned down, years later, he lost everything. Twice, he testified before Congress. The first time, he rambled off on a tangent about transmission bandwidth, which left people scratching their heads. The second time, he passed up a perfect opportunity to register his complaints about RCA, and launched, instead, into a sentimental account of his humble origins. He simply did not understand how to play politics, just as he did not understand how to raise money or run a business or organize his life. All he really knew how to do was invent, which was something that, as a solo operator, he too seldom had time for.

This is the reason that so many of us work for big companies, of course: in a big company, there is always someone to do what we do not want to do or do not do well–someone to answer the phone, and set up our computer, and arrange our health insurance, and clean our office at night, and make sure the building is insured. In a famous 1937 essay, “The Nature of the Firm,” the economist Ronald Coase said that the reason we have corporations is to reduce the everyday transaction costs of doing business: a company puts an accountant on the staff so that if a staffer needs to check the books all he has to do is walk down the hall. It’s an obvious point, but one that is consistently overlooked, particularly by those who periodically rail, in the name of efficiency, against corporate bloat and superfluous middle managers. Yes, the middle manager does not always contribute directly to the bottom line. But he does contribute to those who contribute to the bottom line, and only an absurdly truncated account of human productivity–one that assumes real work to be somehow possible when phones are ringing, computers are crashing, and health insurance is expiring–does not see that secondary contribution as valuable.

In April, 1931, Sarnoff showed up at the Green Street laboratory to review Farnsworth’s work. This was, by any measure, an extraordinary event. Farnsworth was twenty-four, and working out of a ramshackle building. Sarnoff was one of the leading industrialists of his day. It was as if Bill Gates were to get in his private jet and visit a software startup in a garage across the country. But Farnsworth wasn’t there. He was in New York, trapped there by a court order resulting from a frivolous lawsuit filed by a shady would-be investor. Stashower calls this one of the great missed opportunities of Farnsworth’s career, because he almost certainly would have awed Sarnoff with his passion and brilliance, winning a lucrative licensing deal. Instead, an unimpressed Sarnoff made a token offer of a hundred thousand dollars for Farnsworth’s patents, and Farnsworth dismissed the offer out of hand. This, too, is a reason that inventors ought to work for big corporations: big corporations have legal departments to protect their employees against being kept away from their laboratories by frivolous lawsuits. A genius is a terrible thing to waste.

4.

In 1939, at the World’s Fair in New York City, David Sarnoff set up a nine-thousand-square-foot pavilion to showcase the new technology of television. The pavilion, shaped like a giant radio tube, was covered with RCA logos, and stood next to the Perisphere Theatre, the centerpiece of the fairgrounds. On opening day, thirty thousand people gathered to hear from President Roosevelt and Albert Einstein. The gala was televised by RCA, beamed across the New York City area from the top of the Empire State Building. As it happened, Farnsworth was in New York City that day, and he caught the opening ceremonies on a television in a department-store window. He saw Sarnoff introducing both Roosevelt and Einstein, and effectively claiming this wondrous new technology as his own. “Farnsworth’s entire existence seemed to be annulled in this moment,” Schwartz writes:

The dreams of a farm boy, the eureka moment in a potato field, the confession to a teacher, the confidence in him shown by businessmen and bankers and investors, the breakthroughs in the laboratory, all the years of work, the decisions of the official patent examiners, those hard-fought victories, all of those demonstrations that had come and gone, the entire vision of the future. All of it was being negated by Sarnoff’s performance at the World’s Fair. Would the public ever know the truth?… The agony of it set off sharp pains in his stomach.

Finally, later that summer, RCA settled with Farnsworth. It agreed to pay him a million dollars for the rights to his main patents, plus royalties on every television set sold. But it was too late. Something had died in him. “It’s come to the point of choosing whether I want to be a drunk or go crazy,” he told his wife. One doctor prescribed chloral hydrate, which destroyed his appetite and left him dangerously thin. Another doctor prescribed cigarettes, to soothe his nerves. A third prescribed uppers. He became addicted to the painkiller Pantopon. He committed himself to a sanitarium in Massachusetts, where he was given a course of shock therapy. After the war, his brother died in a plane crash. His patents expired, drying up his chief source of income. His company, unable to compete with RCA, was forced out of the television business. He convinced himself that he could unlock the secrets of nuclear fusion, and launched another private research project, mortgaging his home, selling his stock, and cashing in his life insurance to fund the project. But nothing came of it. He died in 1971–addicted to alcohol, deeply depressed, and all but forgotten. He was sixty-four.

In “Tube,” a history of television, David E. Fisher and Marshall Jon Fisher point out that Farnsworth was not the only television pioneer to die in misery. So did two others–John Logie Baird and Charles Francis Jenkins–who had tried and failed to produce mechanical television. This should not come as a surprise. The creative enterprise is a hazardous journey, and those who venture on it alone do so at their peril. Baird and Jenkins and Farnsworth risked their psychological and financial well-being on the romantic notion of the solitary inventor, and when that idea failed them what resources did they have left? Zworykin had his share of setbacks as well. He took on Farnsworth in court, and lost. He promised television in two years for a hundred thousand dollars and he came in eight years and fifty million dollars over budget. But he ended his life a prosperous and contented man, lauded and laurelled with awards and honorary degrees. He had the cocoon of RCA to protect him: a desk and a paycheck and a pension and a secretary and a boss with the means to rewrite history in his favor. This is perhaps a more important reason that we have companies–or, for that matter, that we have universities and tenure. Institutions are not just the best environment for success; they are also the safest environment for failure–and, much of the time, failure is what lies in store for innovators and visionaries. Philo Farnsworth should have gone to work for RCA. He would still have been the father of television, and he might have died a happy man.

The Talent Myth

Are smart people overrated?

1.

Five years ago, several executives at McKinsey & Company, America’s largest and most prestigious management-consulting firm, launched what they called the War for Talent. Thousands of questionnaires were sent to managers across the country. Eighteen companies were singled out for special attention, and the consultants spent up to three days at each firm, interviewing everyone from the C.E.O. down to the human-resources staff. McKinsey wanted to document how the top-performing companies in America differed from other firms in the way they handle matters like hiring and promotion. But, as the consultants sifted through the piles of reports and questionnaires and interview transcripts, they grew convinced that the difference between winners and losers was more profound than they had realized. “We looked at one another and suddenly the light bulb blinked on,” the three consultants who headed the project–Ed Michaels, Helen Handfield-Jones, and Beth Axelrod–write in their new book, also called “The War for Talent.” The very best companies, they concluded, had leaders who were obsessed with the talent issue. They recruited ceaselessly, finding and hiring as many top performers as possible. They singled out and segregated their stars, rewarding them disproportionately, and pushing them into ever more senior positions. “Bet on the natural athletes, the ones with the strongest intrinsic skills,” the authors approvingly quote one senior General Electric executive as saying. “Don’t be afraid to promote stars without specifically relevant experience, seemingly over their heads.” Success in the modern economy, according to Michaels, Handfield-Jones, and Axelrod, requires “the talent mind-set”: the “deep-seated belief that having better talent at all levels is how you outperform your competitors.”

This “talent mind-set” is the new orthodoxy of American management. It is the intellectual justification for why such a high premium is placed on degrees from first-tier business schools, and why the compensation packages for top executives have become so lavish. In the modern corporation, the system is considered only as strong as its stars, and, in the past few years, this message has been preached by consultants and management gurus all over the world. None, however, have spread the word quite so ardently as McKinsey, and, of all its clients, one firm took the talent mind-set closest to heart. It was a company where McKinsey conducted twenty separate projects, where McKinsey’s billings topped ten million dollars a year, where a McKinsey director regularly attended board meetings, and where the C.E.O. himself was a former McKinsey partner. The company, of course, was Enron.

The Enron scandal is now almost a year old. The reputations of Jeffrey Skilling and Kenneth Lay, the company’s two top executives, have been destroyed. Arthur Andersen, Enron’s auditor, has been driven out of business, and now investigators have turned their attention to Enron’s investment bankers. The one Enron partner that has escaped largely unscathed is McKinsey, which is odd, given that it essentially created the blueprint for the Enron culture. Enron was the ultimate “talent” company. When Skilling started the corporate division known as Enron Capital and Trade, in 1990, he “decided to bring in a steady stream of the very best college and M.B.A. graduates he could find to stock the company with talent,” Michaels, Handfield-Jones, and Axelrod tell us. During the nineties, Enron was bringing in two hundred and fifty newly minted M.B.A.s a year. “We had these things called Super Saturdays,” one former Enron manager recalls. “I’d interview some of these guys who were fresh out of Harvard, and these kids could blow me out of the water. They knew things I’d never heard of.” Once at Enron, the top performers were rewarded inordinately, and promoted without regard for seniority or experience. Enron was a star system. “The only thing that differentiates Enron from our competitors is our people, our talent,” Lay, Enron’s former chairman and C.E.O., told the McKinsey consultants when they came to the company’s headquarters, in Houston. Or, as another senior Enron executive put it to Richard Foster, a McKinsey partner who celebrated Enron in his 2001 book, “Creative Destruction,” “We hire very smart people and we pay them more than they think they are worth.”

The management of Enron, in other words, did exactly what the consultants at McKinsey said that companies ought to do in order to succeed in the modern economy. It hired and rewarded the very best and the very brightest–and it is now in bankruptcy. The reasons for its collapse are complex, needless to say. But what if Enron failed not in spite of its talent mind-set but because of it? What if smart people are overrated?

2.

At the heart of the McKinsey vision is a process that the War for Talent advocates refer to as “differentiation and affirmation.” Employers, they argue, need to sit down once or twice a year and hold a “candid, probing, no-holds-barred debate about each individual,” sorting employees into A, B, and C groups. The A’s must be challenged and disproportionately rewarded. The B’s need to be encouraged and affirmed. The C’s need to shape up or be shipped out. Enron followed this advice almost to the letter, setting up internal Performance Review Committees. The members got together twice a year, and graded each person in their section on ten separate criteria, using a scale of one to five. The process was called “rank and yank.” Those graded at the top of their unit received bonuses two-thirds higher than those in the next thirty per cent; those who ranked at the bottom received no bonuses and no extra stock options–and in some cases were pushed out.
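
As a rough sketch of how that arithmetic might have worked (the employee names, the base-bonus figure, and the size of the top tier are invented for illustration; only the one-to-five grades, the two-thirds bonus premium, and the zero bonus at the bottom come from the description above):

```python
# A hedged sketch of the "rank and yank" mechanics described above: each
# employee gets ten grades from one to five, everyone is ranked by average
# grade, and bonuses differ sharply by tier. The tier sizes and base bonus
# are assumptions made for this example, not Enron's actual parameters.

from statistics import mean

def rank_and_yank(scores, base_bonus=10_000):
    """scores maps employee -> list of ten grades (1-5); returns bonuses."""
    ranked = sorted(scores, key=lambda name: mean(scores[name]), reverse=True)
    n = len(ranked)
    top_cut = max(1, n // 10)                      # assumed: top ten per cent
    next_cut = top_cut + max(1, round(n * 0.3))    # the "next thirty per cent"
    bonuses = {}
    for i, name in enumerate(ranked):
        if i < top_cut:
            bonuses[name] = round(base_bonus * (1 + 2 / 3))  # two-thirds higher
        elif i < next_cut:
            bonuses[name] = base_bonus
        else:
            bonuses[name] = 0                      # "shape up or be shipped out"
    return bonuses

print(rank_and_yank({
    "Ames": [5] * 10, "Baker": [4] * 10, "Cole": [3] * 10, "Drew": [2] * 10,
}))
# {'Ames': 16667, 'Baker': 10000, 'Cole': 0, 'Drew': 0}
```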

How should that ranking be done? Unfortunately, the McKinsey consultants spend very little time discussing the matter. One possibility is simply to hire and reward the smartest people. But the link between, say, I.Q. and job performance is distinctly underwhelming. On a scale where 0.1 or below means virtually no correlation and 0.7 or above implies a strong correlation (your height, for example, has a 0.7 correlation with your parents’ height), the correlation between I.Q. and occupational success is between 0.2 and 0.3. “What I.Q. doesn’t pick up is effectiveness at common-sense sorts of things, especially working with people,” Richard Wagner, a psychologist at Florida State University, says. “In terms of how we evaluate schooling, everything is about working by yourself. If you work with someone else, it’s called cheating. Once you get out in the real world, everything you do involves working with other people.”
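
For a feel for what those numbers mean, here is a small, self-contained sketch; the data are synthetic, generated on the spot, and only the r values of roughly 0.25 and 0.7 echo the figures above.

```python
# Synthetic illustration of what correlations of ~0.25 and ~0.7 look like.
# Nothing here uses real I.Q. or job-performance data; the point is only
# that r ~ 0.25 explains about six per cent of the variance (r squared).

import random
import statistics

def correlated_pairs(r, n=20_000, seed=1):
    """Generate n (x, y) pairs whose true correlation is approximately r."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        y = r * x + (1 - r ** 2) ** 0.5 * rng.gauss(0, 1)
        pairs.append((x, y))
    return pairs

def pearson(pairs):
    """Sample Pearson correlation coefficient of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

weak = pearson(correlated_pairs(0.25))   # the I.Q./job-performance range
strong = pearson(correlated_pairs(0.7))  # the height example
print(round(weak, 2), round(strong, 2), round(weak ** 2, 2))
# Roughly: 0.25 0.7 0.06; at r ~ 0.25, knowing x tells you little about y.
```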

Wagner and Robert Sternberg, a psychologist at Yale University, have developed tests of this practical component, which they call “tacit knowledge.” Tacit knowledge involves things like knowing how to manage yourself and others, and how to navigate complicated social situations. Here is a question from one of their tests:

You have just been promoted to head of an important department in your organization. The previous head has been transferred to an equivalent position in a less important department. Your understanding of the reason for the move is that the performance of the department as a whole has been mediocre. There have not been any glaring deficiencies, just a perception of the department as so-so rather than very good. Your charge is to shape up the department. Results are expected quickly. Rate the quality of the following strategies for succeeding at your new position.

a) Always delegate to the most junior person who can be trusted with the task.
b) Give your superiors frequent progress reports.
c) Announce a major reorganization of the department that includes getting rid of whomever you believe to be “dead wood.”
d) Concentrate more on your people than on the tasks to be done.
e) Make people feel completely responsible for their work.

Wagner finds that how well people do on a test like this predicts how well they will do in the workplace: good managers pick (b) and (e); bad managers tend to pick (c). Yet there’s no clear connection between such tacit knowledge and other forms of knowledge and experience. The process of assessing ability in the workplace is a lot messier than it appears.

An employer really wants to assess not potential but performance. Yet that’s just as tricky. In “The War for Talent,” the authors talk about how the Royal Air Force used the A, B, and C ranking system for its pilots during the Battle of Britain. But ranking fighter pilots–for whom there are a limited and relatively objective set of performance criteria (enemy kills, for example, and the ability to get their formations safely home)–is a lot easier than assessing how the manager of a new unit is doing at, say, marketing or business development. And whom do you ask to rate the manager’s performance? Studies show that there is very little correlation between how someone’s peers rate him and how his boss rates him. The only rigorous way to assess performance, according to human-resources specialists, is to use criteria that are as specific as possible. Managers are supposed to take detailed notes on their employees throughout the year, in order to remove subjective personal reactions from the process of assessment. You can grade someone’s performance only if you know their performance. And, in the freewheeling culture of Enron, this was all but impossible. People deemed “talented” were constantly being pushed into new jobs and given new challenges. Annual turnover from promotions was close to twenty per cent. Lynda Clemmons, the so-called “weather babe” who started Enron’s weather derivatives business, jumped, in seven quick years, from trader to associate to manager to director and, finally, to head of her own business unit. How do you evaluate someone’s performance in a system where no one is in a job long enough to allow such evaluation?

The answer is that you end up doing performance evaluations that aren’t based on performance. Among the many glowing books about Enron written before its fall was the best-seller “Leading the Revolution,” by the management consultant Gary Hamel, which tells the story of Lou Pai, who launched Enron’s power-trading business. Pai’s group began with a disaster: it lost tens of millions of dollars trying to sell electricity to residential consumers in newly deregulated markets. The problem, Hamel explains, is that the markets weren’t truly deregulated: “The states that were opening their markets to competition were still setting rules designed to give their traditional utilities big advantages.” It doesn’t seem to have occurred to anyone that Pai ought to have looked into those rules more carefully before risking millions of dollars. He was promptly given the chance to build the commercial electricity-outsourcing business, where he ran up several more years of heavy losses before cashing out of Enron last year with two hundred and seventy million dollars. Because Pai had “talent,” he was given new opportunities, and when he failed at those new opportunities he was given still more opportunities . . . because he had “talent.” “At Enron, failure–even of the type that ends up on the front page of the Wall Street Journal–doesn’t necessarily sink a career,” Hamel writes, as if that were a good thing. Presumably, companies that want to encourage risk-taking must be willing to tolerate mistakes. Yet if talent is defined as something separate from an employee’s actual performance, what use is it, exactly?

3.

What the War for Talent amounts to is an argument for indulging A employees, for fawning over them. “You need to do everything you can to keep them engaged and satisfied–even delighted,” Michaels, Handfield-Jones, and Axelrod write. “Find out what they would most like to be doing, and shape their career and responsibilities in that direction. Solve any issues that might be pushing them out the door, such as a boss that frustrates them or travel demands that burden them.” No company was better at this than Enron. In one oft-told story, Louise Kitchin, a twenty-nine-year-old gas trader in Europe, became convinced that the company ought to develop an online-trading business. She told her boss, and she began working in her spare time on the project, until she had two hundred and fifty people throughout Enron helping her. After six months, Skilling was finally informed. “I was never asked for any capital,” Skilling said later. “I was never asked for any people. They had already purchased the servers. They had already started ripping apart the building. They had started legal reviews in twenty-two countries by the time I heard about it.” It was, Skilling went on approvingly, “exactly the kind of behavior that will continue to drive this company forward.”

Kitchin’s qualification for running EnronOnline, it should be pointed out, was not that she was good at it. It was that she wanted to do it, and Enron was a place where stars did whatever they wanted. “Fluid movement is absolutely necessary in our company. And the type of people we hire enforces that,” Skilling told the team from McKinsey. “Not only does this system help the excitement level for each manager, it shapes Enron’s business in the direction that its managers find most exciting.” Here is Skilling again: “If lots of [employees] are flocking to a new business unit, that’s a good sign that the opportunity is a good one. . . . If a business unit can’t attract people very easily, that’s a good sign that it’s a business Enron shouldn’t be in.” You might expect a C.E.O. to say that if a business unit can’t attract customers very easily that’s a good sign it’s a business the company shouldn’t be in. A company’s business is supposed to be shaped in the direction that its managers find most profitable. But at Enron the needs of the customers and the shareholders were secondary to the needs of its stars.

A dozen years ago, the psychologists Robert Hogan, Robert Raskin, and Dan Fazzini wrote a brilliant essay called “The Dark Side of Charisma.” It argued that flawed managers fall into three types. One is the High Likability Floater, who rises effortlessly in an organization because he never takes any difficult decisions or makes any enemies. Another is the Homme de Ressentiment, who seethes below the surface and plots against his enemies. The most interesting of the three is the Narcissist, whose energy and self-confidence and charm lead him inexorably up the corporate ladder. Narcissists are terrible managers. They resist accepting suggestions, thinking it will make them appear weak, and they don’t believe that others have anything useful to tell them. “Narcissists are biased to take more credit for success than is legitimate,” Hogan and his co-authors write, and “biased to avoid acknowledging responsibility for their failures and shortcomings for the same reasons that they claim more success than is their due.” Moreover:

Narcissists typically make judgments with greater confidence than other people . . . and, because their judgments are rendered with such conviction, other people tend to believe them and the narcissists become disproportionately more influential in group situations. Finally, because of their self-confidence and strong need for recognition, narcissists tend to “self-nominate”; consequently, when a leadership gap appears in a group or organization, the narcissists rush to fill it.

Tyco Corporation and WorldCom were the Greedy Corporations: they were purely interested in short-term financial gain. Enron was the Narcissistic Corporation–a company that took more credit for success than was legitimate, that did not acknowledge responsibility for its failures, that shrewdly sold the rest of us on its genius, and that substituted self-nomination for disciplined management. At one point in “Leading the Revolution,” Hamel tracks down a senior Enron executive, and what he breathlessly recounts–the braggadocio, the self-satisfaction–could be an epitaph for the talent mind-set:

“You cannot control the atoms within a nuclear fusion reaction,” said Ken Rice when he was head of Enron Capital and Trade Resources (ECT), America’s largest marketer of natural gas and largest buyer and seller of electricity. Adorned in a black T-shirt, blue jeans, and cowboy boots, Rice drew a box on an office whiteboard that pictured his business unit as a nuclear reactor. Little circles in the box represented its “contract originators,” the gunslingers charged with doing deals and creating new businesses. Attached to each circle was an arrow. In Rice’s diagram the arrows were pointing in all different directions. “We allow people to go in whichever direction that they want to go.”

The distinction between the Greedy Corporation and the Narcissistic Corporation matters, because the way we conceive our attainments helps determine how we behave. Carol Dweck, a psychologist at Columbia University, has found that people generally hold one of two fairly firm beliefs about their intelligence: they consider it either a fixed trait or something that is malleable and can be developed over time. Five years ago, Dweck did a study at the University of Hong Kong, where all classes are conducted in English. She and her colleagues approached a large group of social-sciences students, told them their English-proficiency scores, and asked them if they wanted to take a course to improve their language skills. One would expect all those who scored poorly to sign up for the remedial course. The University of Hong Kong is a demanding institution, and it is hard to do well in the social sciences without strong English skills. Curiously, however, only the ones who believed in malleable intelligence expressed interest in the class. The students who believed that their intelligence was a fixed trait were so concerned about appearing to be deficient that they preferred to stay home. “Students who hold a fixed view of their intelligence care so much about looking smart that they act dumb,” Dweck writes, “for what could be dumber than giving up a chance to learn something that is essential for your own success?”

In a similar experiment, Dweck gave a class of preadolescent students a test filled with challenging problems. After they were finished, one group was praised for its effort and another group was praised for its intelligence. Those praised for their intelligence were reluctant to tackle difficult tasks, and their performance on subsequent tests soon began to suffer. Then Dweck asked the children to write a letter to students at another school, describing their experience in the study. She discovered something remarkable: forty per cent of those students who were praised for their intelligence lied about how they had scored on the test, adjusting their grade upward. They weren’t naturally deceptive people, and they weren’t any less intelligent or self-confident than anyone else. They simply did what people do when they are immersed in an environment that celebrates them solely for their innate “talent.” They begin to define themselves by that description, and when times get tough and that self-image is threatened they have difficulty with the consequences. They will not take the remedial course. They will not stand up to investors and the public and admit that they were wrong. They’d sooner lie.

4.

The broader failing of McKinsey and its acolytes at Enron is their assumption that an organization’s intelligence is simply a function of the intelligence of its employees. They believe in stars, because they don’t believe in systems. In a way, that’s understandable, because our lives are so obviously enriched by individual brilliance. Groups don’t write great novels, and a committee didn’t come up with the theory of relativity. But companies work by different rules. They don’t just create; they execute and compete and coördinate the efforts of many different people, and the organizations that are most successful at that task are the ones where the system is the star.

There is a wonderful example of this in the story of the so-called Eastern Pearl Harbor of the Second World War. During the first nine months of 1942, the United States Navy suffered a catastrophe. German U-boats, operating just off the Atlantic coast and in the Caribbean, were sinking our merchant ships almost at will. U-boat captains marvelled at their good fortune. “Before this sea of light, against this footlight glare of a carefree new world were passing the silhouettes of ships recognizable in every detail and sharp as the outlines in a sales catalogue,” one U-boat commander wrote. “All we had to do was press the button.”

What made this such a puzzle is that, on the other side of the Atlantic, the British had much less trouble defending their ships against U-boat attacks. The British, furthermore, eagerly passed on to the Americans everything they knew about sonar and depth-charge throwers and the construction of destroyers. And still the Germans managed to paralyze America’s coastal zones.

You can imagine what the consultants at McKinsey would have concluded: they would have said that the Navy did not have a talent mind-set, that President Roosevelt needed to recruit and promote top performers into key positions in the Atlantic command. In fact, he had already done that. At the beginning of the war, he had pushed out the solid and unspectacular Admiral Harold R. Stark as Chief of Naval Operations and replaced him with the legendary Ernest Joseph King. “He was a supreme realist with the arrogance of genius,” Ladislas Farago writes in “The Tenth Fleet,” a history of the Navy’s U-boat battles in the Second World War. “He had unbounded faith in himself, in his vast knowledge of naval matters and in the soundness of his ideas. Unlike Stark, who tolerated incompetence all around him, King had no patience with fools.”

The Navy had plenty of talent at the top, in other words. What it didn’t have was the right kind of organization. As Eliot A. Cohen, a scholar of military strategy at Johns Hopkins, writes in his brilliant book “Military Misfortunes: The Anatomy of Failure in War”:

To wage the antisubmarine war well, analysts had to bring together fragments of information, direction-finding fixes, visual sightings, decrypts, and the “flaming datum” of a U-boat attack–for use by a commander to coordinate the efforts of warships, aircraft, and convoy commanders. Such synthesis had to occur in near “real time”–within hours, even minutes in some cases.

The British excelled at the task because they had a centralized operational system. The controllers moved the British ships around the Atlantic like chess pieces, in order to outsmart U-boat “wolf packs.” By contrast, Admiral King believed strongly in a decentralized management structure: he held that managers should never tell their subordinates ” ‘how’ as well as what to ‘do.’ ” In today’s jargon, we would say he was a believer in “loose-tight” management, of the kind celebrated by the McKinsey consultants Thomas J. Peters and Robert H. Waterman in their 1982 best-seller, “In Search of Excellence.” But “loose-tight” doesn’t help you find U-boats. Throughout most of 1942, the Navy kept trying to act smart by relying on technical know-how, and stubbornly refused to take operational lessons from the British. The Navy also lacked the organizational structure necessary to apply the technical knowledge it did have to the field. Only when the Navy set up the Tenth Fleet–a single unit to coördinate all anti-submarine warfare in the Atlantic–did the situation change. In the year and a half before the Tenth Fleet was formed, in May of 1943, the Navy sank thirty-six U-boats. In the six months afterward, it sank seventy-five. “The creation of the Tenth Fleet did not bring more talented individuals into the field of ASW”–anti-submarine warfare–“than had previous organizations,” Cohen writes. “What Tenth Fleet did allow, by virtue of its organization and mandate, was for these individuals to become far more effective than previously.” The talent myth assumes that people make organizations smart. More often than not, it’s the other way around.

5.

There is ample evidence of this principle among America’s most successful companies. Southwest Airlines hires very few M.B.A.s, pays its managers modestly, and gives raises according to seniority, not “rank and yank.” Yet it is by far the most successful of all United States airlines, because it has created a vastly more efficient organization than its competitors have. At Southwest, the time it takes to get a plane that has just landed ready for takeoff–a key index of productivity–is, on average, twenty minutes, and requires a ground crew of four, and two people at the gate. (At United Airlines, by contrast, turnaround time is closer to thirty-five minutes, and requires a ground crew of twelve and three agents at the gate.)

In the case of the giant retailer Wal-Mart, one of the most critical periods in its history came in 1976, when Sam Walton “unretired,” pushing out his handpicked successor, Ron Mayer. Mayer was just over forty. He was ambitious. He was charismatic. He was, in the words of one Walton biographer, “the boy-genius financial officer.” But Walton was convinced that Mayer was, as people at McKinsey would say, “differentiating and affirming” in the corporate suite, in defiance of Wal-Mart’s inclusive culture. Mayer left, and Wal-Mart survived. After all, Wal-Mart is an organization, not an all-star team. Walton brought in David Glass, late of the Army and Southern Missouri State University, as C.E.O.; the company is now ranked No. 1 on the Fortune 500 list.

Procter & Gamble doesn’t have a star system, either. How could it? Would the top M.B.A. graduates of Harvard and Stanford move to Cincinnati to work on detergent when they could make three times as much reinventing the world in Houston? Procter & Gamble isn’t glamorous. Its C.E.O. is a lifer–a former Navy officer who began his corporate career as an assistant brand manager for Joy dishwashing liquid–and, if Procter & Gamble’s best played Enron’s best at Trivial Pursuit, no doubt the team from Houston would win handily. But Procter & Gamble has dominated the consumer-products field for close to a century, because it has a carefully conceived managerial system, and a rigorous marketing methodology that has allowed it to win battles for brands like Crest and Tide decade after decade. In Procter & Gamble’s Navy, Admiral Stark would have stayed. But a cross-divisional management committee would have set the Tenth Fleet in place before the war ever started.

6.

Among the most damning facts about Enron, in the end, was something its managers were proudest of. They had what, in McKinsey terminology, is called an “open market” for hiring. In the open-market system–McKinsey’s assault on the very idea of a fixed organization–anyone could apply for any job that he or she wanted, and no manager was allowed to hold anyone back. Poaching was encouraged. When an Enron executive named Kevin Hannon started the company’s global broadband unit, he launched what he called Project Quick Hire. A hundred top performers from around the company were invited to the Houston Hyatt to hear Hannon give his pitch. Recruiting booths were set up outside the meeting room. “Hannon had his fifty top performers for the broadband unit by the end of the week,” Michaels, Handfield-Jones, and Axelrod write, “and his peers had fifty holes to fill.” Nobody, not even the consultants who were paid to think about the Enron culture, seemed worried that those fifty holes might disrupt the functioning of the affected departments, that stability in a firm’s existing businesses might be a good thing, that the self-fulfillment of Enron’s star employees might possibly be in conflict with the best interests of the firm as a whole.

These are the sort of concerns that management consultants ought to raise. But Enron’s management consultant was McKinsey, and McKinsey was as much a prisoner of the talent myth as its clients were. In 1998, Enron hired ten Wharton M.B.A.s; that same year, McKinsey hired forty. In 1999, Enron hired twelve from Wharton; McKinsey hired sixty-one. The consultants at McKinsey were preaching at Enron what they believed about themselves. “When we would hire them, it wouldn’t just be for a week,” one former Enron manager recalls, of the brilliant young men and women from McKinsey who wandered the hallways at the company’s headquarters. “It would be for two to four months. They were always around.” They were there looking for people who had the talent to think outside the box. It never occurred to them that, if everyone had to think outside the box, maybe it was the box that needed fixing.