Baby Steps

Do our first three years of life determine how we’ll turn out?

1.

In April of 1997, Hillary Clinton was the host of a daylong conference at the White House entitled “What New Research on the Brain Tells Us About Our Youngest Children.” In her opening remarks, which were beamed live by satellite to nearly a hundred hospitals, universities, and schools in thirty-seven states, Mrs. Clinton said, “Fifteen years ago, we thought that a baby’s brain structure was virtually complete at birth.” She went on:

Now we understand that it is a work in progress, and that everything we do with a child has some kind of potential physical influence on that rapidly forming brain. A child’s earliest experiences–their relationships with parents and caregivers, the sights and sounds and smells and feelings they encounter, the challenges they meet–determine how their brains are wired. . . . These experiences can determine whether children will grow up to be peaceful or violent citizens, focussed or undisciplined workers, attentive or detached parents themselves.

At the afternoon session of the conference, the keynote speech was given by the director turned children’s advocate Rob Reiner. His goal, Reiner told the assembled, was to get the public to “look through the prism” of the first three years of life “in terms of problem solving at every level of society”:

If we want to have a real significant impact, not only on children’s success in school and later on in life, healthy relationships, but also an impact on reduction in crime, teen pregnancy, drug abuse, child abuse, welfare, homelessness, and a variety of other social ills, we are going to have to address the first three years of life. There is no getting around it. All roads lead to Rome.

The message of the conference was at once hopeful and a little alarming. On the one hand, it suggested that the right kind of parenting during those first three years could have a lasting effect on a child’s life; on the other hand, it implied that if we missed this opportunity the resulting damage might well be permanent. Today, there is a zero-to-three movement, made up of advocacy groups and policymakers like Hillary Clinton, which uses the promise and the threat of this new brain science to push for better pediatric care, early childhood education, and day care. Reiner has started something called the I Am Your Child Foundation, devoted to this cause, and has enlisted the support of, among others, Tom Hanks, Robin Williams, Billy Crystal, Charlton Heston, and Rosie O’Donnell. Some lawmakers now wonder whether programs like Head Start ought to be drastically retooled, to focus on babies and toddlers rather than on preschoolers. The state of California recently approved a fifty-cent-per-pack tax on cigarettes to fund programs aimed at improving care for babies and toddlers up to the age of five. The state governments of Georgia and Tennessee send classical-music CDs home from the hospital with every baby, and Florida requires that day-care centers play classical music every day–all in the belief that Mozart will help babies build their minds in this critical window of development. “During the first part of the twentieth century, science built a strong foundation for the physical health of our children,” Mrs. Clinton said in her speech that morning. “The last years of this century are yielding similar breakthroughs for the brain. We are . . . coming closer to the day when we should be able to insure the well-being of children in every domain–physical, social, intellectual, and emotional.”

The First Lady took pains not to make the day’s message sound too extreme. “I hope that this does not create the impression that, once a child’s third birthday rolls around, the important work is over,” she said, adding that much of the brain’s emotional wiring isn’t completed until adolescence, and that children never stop needing the love and care of their parents. Still, there was something odd about the proceedings. This was supposed to be a meeting devoted to new findings in brain science, but hardly any of the brain science that was discussed was new. In fact, only a modest amount of brain science was discussed at all. Many of the speakers were from the worlds of education and policy. Then, there was Mrs. Clinton’s claim that the experiences of our first few years could “determine” whether we grow up to be peaceful or violent, focussed or undisciplined. We tend to think that the environmental influences upon the way we turn out are the sum of a lifetime of experiences–that someone is disciplined because he spent four years in the Marines, or because he got up every morning as a teen-ager to train with the swim team. But Hillary Clinton was proposing that we direct our attention instead to what happens to children in a very brief window early in life. The First Lady, now a candidate for the United States Senate, is associating herself with a curious theory of human development. Where did this idea come from? And is it true?

2.

John Bruer tackles both these questions in his new book, “The Myth of the First Three Years” (Free Press; $25). From its title, Bruer’s work sounds like a rant. It isn’t. Noting the cultural clout of the zero-to-three idea, Bruer, who heads a medical-research foundation in St. Louis, sets out to compare what people like Rob Reiner and Hillary Clinton are saying to what neuroscientists have actually concluded. The result is a superb book, clear and engaging, that serves as both popular science and intellectual history.

Mrs. Clinton and her allies, Bruer writes, are correct in their premise: the brain at birth is a work in progress. Relatively few connections among its billions of cells have yet been established. In the first few years of life, the brain begins to wire itself up at a furious pace, forming hundreds of thousands, even millions, of new synapses every second. Infants produce so many new neural connections, so quickly, that the brain of a two-year-old is actually far more dense with neural connections than the brain of an adult. After three, that burst of activity seems to slow down, and our brain begins the long task of rationalizing its communications network, finding those connections which seem to be the most important and getting rid of the rest.

During this brief initial period of synaptic “exuberance,” the brain is especially sensitive to its environment. David Hubel and Torsten Wiesel, in a famous experiment, sewed one of the eyes of a kitten shut for the first three months of its life, and when they opened it back up they found that the animal was permanently blind in that eye. There are critical periods early in life, then, when the brain will not develop properly unless it receives a certain amount of outside stimulation. In another series of experiments, begun in the early seventies, William Greenough, a psychologist at the University of Illinois, showed that a rat reared in a large, toy-filled cage with other rats ended up with a substantially more developed visual cortex than a rat that spent its first month alone in a small, barren cage: the brain, to use the word favored by neuroscientists, is plastic–that is, modifiable by experience. In other words, Hillary Clinton’s violent citizens and unfocussed workers might seem to be the human equivalents of kittens who’ve had an eye sewed shut, or rats who’ve been reared in a barren cage. If in the critical first three years of synapse formation we could give people the equivalent of a big cage full of toys, she was saying, we could make them healthier and smarter.

Put this way, these ideas sound quite reasonable, and it’s easy to see why they have attracted such excitement. But Bruer’s contribution is to show how, on several critical points, this account of child development exaggerates or misinterprets the available evidence.

Consider, he says, the matter of synapse formation. The zero-to-three activists are convinced that the number of synapses we form in our earliest years plays a big role in determining our mental capacity. But do we know that to be true? People with a form of mental retardation known as fragile-X syndrome, Bruer notes, have higher numbers of synapses in their brain than the rest of us. More important, the period in which humans gain real intellectual maturity is late adolescence, by which time the brain is aggressively pruning the number of connections. Is intelligence associated with how many synapses you have or with how efficiently you manage to sort out and make sense of those connections later in life? Nor do we know how dependent the initial burst of synapse formation is on environmental stimulation. Bruer writes of an experiment where the right hand of a monkey was restrained in a leather mitten from birth to four months, effectively limiting all sensory stimulation. That’s the same period when young monkeys form enormous numbers of connections in the somatosensory cortex, the area of the monkey brain responsible for size and texture discriminations, so you’d think that the restrained hand would be impaired. But it wasn’t: within a short time, it was functioning normally, which suggests that there is a lot more flexibility and resilience in some aspects of brain development than we might have imagined.

Bruer also takes up the question of early childhood as a developmental window. It makes sense that if children don’t hear language by the age of eleven or twelve they aren’t going to speak, and that children who are seriously neglected throughout their upbringing will suffer permanent emotional injury. But why, Bruer asks, did advocates arrive at three years of age as a cutoff point? Different parts of the brain develop at different speeds. The rate of synapse formation in our visual cortex peaks at around three or four months. The synapses in our prefrontal cortex–the parts of our brain involved in the most sophisticated cognitive tasks–peak perhaps as late as three years, and aren’t pruned back until middle-to-late adolescence. How can the same cutoff apply to both regions?

Greenough’s rat experiments are used to support the critical-window idea, because he showed that he could affect brain development in those early years by altering the environment of his animals. The implications of the experiment aren’t so straightforward, though. The experiments began when the rats were about three weeks old, which is already past rat “infancy,” and continued until they were fifty-five days old, which put them past puberty. So the experiment showed the neurological consequences of deprivation not during some critical window of infancy but during the creature’s entire period of maturation. In fact, when Greenough repeated his experiment with rats that were four hundred and fifty days old–well past middle age–he found that those kept in complex environments once again had significantly denser neural connections than those kept in isolation.

Even the meaning of the kitten with its eye sewn shut turns out to be far from obvious. When that work was repeated on monkeys, researchers found that if they deprived both eyes of early stimulation–rearing a monkey in darkness for its first six months–the animal could see (although not perfectly), and the binocularity of its vision, the ability of its left and right eyes to coördinate images, was normal. The experiment doesn’t show that more stimulation is better than less for binocular vision. It just suggests that whatever stimulation there is should be balanced, which is why closing one eye tilts the developmental process in favor of the open eye.

To say that the brain is plastic, then, is not to say that the brain is dependent on certain narrow windows of stimulation. Neuroscientists say instead that infant brains have “experience-expectant plasticity”–which means that they need only something that approximates a normal environment. Bruer writes:

The odds that our children will end up with appropriately fine-tuned brains are incredibly favorable, because the stimuli the brain expects during critical periods are the kinds of stimuli that occur everywhere and all the time within the normal developmental environment for our species. It is only when there are severe genetic or environmental aberrations from the normal that nature’s expectations are frustrated and neural development goes awry.

In the case of monkeys, the only way to destroy their binocular vision is to sew one eye shut for six months–an entirely contrived act that would almost never happen in the wild. Greenough points out that the “complex” environment he created for his rats–a large cage full of toys and other animals–is actually the closest equivalent of the environment that a rat would encounter naturally. When he created a super-enriched environment for his rats, one with even more stimulation than they would normally encounter, the rats weren’t any better off. The only way he could affect the neurological development of the animals was to put them in a barren cage by themselves–again, a situation that an animal would never encounter in the wild. Bruer quotes Steve Petersen, a neuroscientist at Washington University, in St. Louis, as saying that neurological development so badly wants to happen that his only advice to parents would be “Don’t raise your children in a closet, starve them, or hit them in the head with a frying pan.” Petersen was, of course, being flip. But the general conclusion of researchers seems to be that we human beings enjoy a fairly significant margin of error in our first few years of life. Studies done of Romanian orphans who spent their first year under conditions of severe deprivation suggest that most (but not all) can recover if adopted into a nurturing home. In another study, psychologists examined children from an overcrowded orphanage who had been badly neglected as infants and subsequently adopted into loving homes. Within two years of their adoption, one psychologist involved in their rehabilitation had concluded:

We had not anticipated the older children who had suffered deprivations for periods of 2½ to 4 years to show swift response to treatment. That they did so amazed us. These inarticulate, underdeveloped youngsters who had formed no relationships in their lives, who were aimless and without a capacity to concentrate on anything, had resembled a pack of animals more than a group of human beings. . . . As we worked with the children, it became apparent that their inadequacy was not the result of damage but, rather, was due to a dearth of normal experiences without which development of human qualities is impossible. After a year of treatment, many of these older children were showing a trusting dependency toward the staff of volunteers and . . . self-reliance in play and routines.

3.

Some years ago, the Berkeley psychology professor Alison Gopnik and one of her students, Betty Repacholi, conducted an experiment with a series of fourteen-month-old toddlers. Repacholi showed the babies two bowls of food, one filled with Goldfish crackers and one filled with raw broccoli. All the babies, naturally, preferred the crackers. Repacholi then tasted the two foods, saying “Yuck” and making a disgusted face at one and saying “Yum” and making a delighted face at the other. Then she pushed both bowls toward the babies, stretched out her hand, and said, “Could you give me some?”

When she liked the crackers, the babies gave her crackers. No surprise there. But when Repacholi liked the broccoli and hated the crackers, the babies were presented with a difficult philosophical issue–that different people may have different, even conflicting, desires. The fourteen-month-olds couldn’t grasp that. They thought that if they liked crackers everyone liked crackers, and so they gave Repacholi the crackers, despite her expressed preferences. Four months later, the babies had, by and large, figured this principle out, and when Repacholi made a face at the crackers they knew enough to give her the broccoli. “The Scientist in the Crib” (Morrow; $24), a fascinating new book that Gopnik has written with Patricia Kuhl and Andrew Meltzoff, both at the University of Washington, argues that the discovery of this principle–that different people have different desires–is the source of the so-called terrible twos. “What makes the terrible twos so terrible is not that the babies do things you don’t want them to do–one-year-olds are plenty good at that–but that they do things because you don’t want them to,” the authors write. And why is that? Not, as is commonly thought, because toddlers want to test parental authority, or because they’re just contrary. Instead, the book argues, the terrible twos represent a rational and engaged exploration of what is to two-year-olds a brand-new idea–a generalization of the insight that the fact that they hate broccoli and like crackers doesn’t mean that everyone hates broccoli and likes crackers. “Toddlers are systematically testing the dimensions on which their desires and the desires of others may be in conflict,” the authors write. Infancy is an experimental research program, in which “the child is the budding psychologist; we parents are the laboratory rats.”

These ideas about child development are, when you think about it, oddly complementary to the neurological arguments of John Bruer. The paradox of the zero-to-three movement is that, for all its emphasis on how alive children’s brains are during their early years, it views babies as profoundly passive–as hostage to the quality of the experiences provided for them by their parents and caregivers. “The Scientist in the Crib” shows us something quite different. Children are scientists, who develop theories and interpret evidence from the world around them in accordance with those theories. And when evidence starts to mount suggesting that the existing theory isn’t correct–wait a minute, just because I like crackers doesn’t mean Mommy likes crackers–they create a new theory to explain the world, just as a physicist would if confronted with new evidence on the relation of energy and matter. Gopnik, Meltzoff, and Kuhl play with this idea at some length. Science, they suggest, is actually a kind of institutionalized childhood, an attempt to harness abilities that evolved to be used by babies or young children. Ultimately, the argument suggests that child development is a rational process directed and propelled by the child himself. How does the child learn about different desires? By systematically and repeatedly provoking a response from adults. In the broccoli experiment, the adult provided the fourteen-month-old with the information (“I hate Goldfish crackers”) necessary to make the right decision. But the child ignored that information until he himself had developed a theory to interpret it. When “The Scientist in the Crib” describes children as budding psychologists and adults as laboratory rats, it’s more than a clever turn of phrase. Gopnik, Meltzoff, and Kuhl observe that our influence on infants “seems to work in concert with children’s own learning abilities.” Newborns will “imitate facial expressions” but not “complex actions they don’t understand themselves.” And the authors conclude, “Children won’t take in what you tell them until it makes sense to them. Other people don’t simply shape what children do; parents aren’t the programmers. Instead, they seem designed to provide just the right sort of information.”

It isn’t until you read “The Scientist in the Crib” alongside more conventional child-development books that you begin to appreciate the full implications of its argument. Here, for example, is a passage from “What’s Going On in There? How the Brain and Mind Develop in the First Five Years of Life,” by Lise Eliot, who teaches at the Chicago Medical School: “It’s important to avoid the kind of muddled baby-talk that turns a sentence like ‘Is she the cutest little baby in the world?’ into ‘Uz see da cooest wiwo baby inna wowud?’ Caregivers should try to enunciate clearly when speaking to babies and young children, giving them the cleanest, simplest model of speech possible.” Gopnik, Meltzoff, and Kuhl see things a little differently. First, they point out, by six or seven months babies are already highly adept at decoding the sounds they hear around them, using the same skills we do when we talk to someone with a thick foreign accent or a bad cold. If you say “Uz see da cooest wiwo baby inna wowud?” they hear something like “Is she the cutest little baby in the world?” Perhaps more important, this sort of Motherese–with its elongated vowels and repetitions and overpronounced syllables–is just the thing for babies to develop their language skills. And Motherese, the authors point out, seems to be innate. It’s found in every culture in the world, and anyone who speaks to a baby uses it, automatically, even without realizing it. Babies want Motherese, so they manage to elicit it from the rest of us. That’s a long way from the passive baby who thrives only because of the specialized, high-end parenting skills of the caregiver. “One thing that science tells us is that nature has designed us to teach babies, as much as it has designed babies to learn,” Gopnik, Meltzoff, and Kuhl write. “Almost all of the adult actions we’ve described”–actions that are critical for the cognitive development of babies–“are swift, spontaneous, automatic and unpremeditated.”

4.

Does it matter that Mrs. Clinton and her allies have misread the evidence on child development? In one sense, it doesn’t. The First Lady does not claim to be a neuroscientist. She is a politician, and she is interested in the brains of children only to further an entirely worthy agenda: improved day care, pediatric care, and early-childhood education. Sooner or later, however, bad justifications for social policy can start to make for bad social policy, and that is the real danger of the zero-to-three movement.

In Lise Eliot’s book, for instance, there’s a short passage in which she writes of the extraordinary powers of imitation that infants possess. A fifteen-month-old who watches an experimenter lean over and touch his forehead to the top of a box will, when presented with that same box four months later, do exactly the same thing. “The fact that these memories last so long is truly remarkable–and a little bit frightening,” Eliot writes, and she continues:

It goes a long way toward explaining why children, even decades later, are so prone to replicating their parents’ behavior. If toddlers can repeat, even several months later, actions they’ve seen only once or twice, just imagine how watching their parents’ daily activities must affect them. Everything they see and hear over time–work, play, fighting, smoking, drinking, reading, hitting, laughing, words, phrases, and gestures–is stored in ways that shape their later actions, and the more they see of a particular behavior, the likelier it is to reappear in their own conduct.

There is something to this. Why we act the way we do is obviously the result of all kinds of influences and experiences, including those cues we pick up unconsciously as babies. But this doesn’t mean, as Eliot seems to think it does, that you can draw a straight line between a concrete adult behavior and what little Suzie, at six months, saw her mother do. As far as we can tell, for instance, infant imitation has nothing to do with smoking. As the behavioral geneticist David Rowe has demonstrated, the children of smokers are more likely than others to take up the habit because of genetics: they have inherited the same genes that made their parents like, and be easily addicted to, nicotine. Once you account for heredity, there is little evidence that parental smoking habits influence children; the adopted children of smokers, for instance, are no more likely to smoke than the children of non-smokers. To the extent that social imitation is a factor in smoking, the psychologist Judith Rich Harris has observed, it is imitation that occurs in adolescence between a teen-ager and his or her peers. So if you were to use Eliot’s ideas to design an anti-smoking campaign you’d direct your efforts to stop parents from smoking around their children, and miss the social roots of smoking entirely.

This point–the distance between infant experience and grownup behavior–is made even more powerfully in Jerome Kagan’s marvellous new book, “Three Seductive Ideas” (Oxford; $27.50). Kagan, a professor of psychology at Harvard, offers a devastating critique of what he calls “infant determinism,” arguing that many of the truly critical moments of socialization–the moments that social policy properly concerns itself with–occur well after the age of three. As Kagan puts it, a person’s level of “anxiety, depression, apathy and anger” is linked to his or her “symbolic constructions of experience”–how the bare facts of any experience are combined with the context of that event, attitudes toward those involved, expectations and memories of past experience. “The Palestinian youths who throw stones at Israeli soldiers believe that the Israeli government has oppressed them unjustly,” Kagan writes. He goes on:

The causes of their violent actions are not traceable to the parental treatment they received in their first few years. Similarly, no happy African-American two-year-old knows about the pockets of racism in American society or the history of oppression blacks have suffered. The realization that there is prejudice will not take form until that child is five or six years old.

Infant determinism doesn’t just encourage the wrong kind of policy. Ultimately, it undermines the basis of social policy. Why bother spending money trying to help older children or adults if the patterns of a lifetime are already, irremediably, in place? Inevitably, some people will interpret the zero-to-three dogma to mean that our obligations to the disadvantaged expire by the time they reach the age of three. Kagan writes of a famous Hawaiian study of child development, in which almost seven hundred children, from a variety of ethnic and economic backgrounds, were followed from birth to adulthood. The best predictor of who would develop serious academic or behavioral problems in adolescence, he writes, was social class: more than eighty per cent of the children who got in trouble came from the poorest segment of the sample. This is the harsh reality of child development, from which the zero-to-three movement offers a convenient escape. Kagan writes, “It is considerably more expensive to improve the quality of housing, education and health of the approximately one million children living in poverty in America today than to urge their mothers to kiss, talk to, and play with them more consistently.” In his view, “to suggest to poor parents that playing with and talking to their infant will protect the child from future academic failure and guarantee life success” is an act of dishonesty. But that does not go far enough. It is also an unwitting act of reproach: it implies to disadvantaged parents that if their children do not turn out the way children of privilege do it is their fault–that they are likely to blame for the flawed wiring of their children’s brains.

5.

In 1973, when Hillary Clinton–then, of course, known as Hillary Rodham–was a young woman just out of law school, she wrote an essay for the Harvard Educational Review entitled “Children Under the Law.” The courts, she wrote, ought to reverse their long-standing presumption that children are legally incompetent. She urged, instead, that children’s interests be considered independently from those of their parents. Children ought to be deemed capable of making their own decisions and voicing their own interests, unless evidence could be found to the contrary. To her, the presumption of incompetence gave the courts too much discretion in deciding what was in the child’s best interests, and that discretion was most often abused in cases of children from poor minority families. “Children of these families,” she wrote, “are perceived as bearers of the sins and disabilities of their fathers.”

This is a liberal argument, because a central tenet of liberalism is that social mobility requires a release not merely from burdens imposed by poverty but also from those imposed by family–that absent or indifferent or incompetent parents should not be permitted to destroy a child’s prospects. What else was the classic Horatio Alger story about? In “Ragged Dick,” the most famous of Alger’s novels, Dick’s father runs off before his son’s birth, and his mother dies destitute while Dick is still a baby. He becomes a street urchin, before rising to the middle class through a combination of hard work, honesty, and luck. What made such tales so powerful was, in part, the hopeful notion that the circumstances of your birth need not be your destiny; and the modern liberal state has been an attempt to make good on that promise.

But Mrs. Clinton is now promoting a movement with a different message–that who you are and what you are capable of could be the result of how successful your mother and father were in rearing you. In her book “It Takes a Village,” she criticizes the harsh genetic determinism of “The Bell Curve.” But an ideology that holds that your future is largely decided at birth by your parents’ genes is no more dispiriting than one that holds that your future might be decided at three by your parents’ behavior. The unintended consequence of the zero-to-three movement is that, once again, it makes disadvantaged children the bearers of the sins and disabilities of their parents.

The truth is that the traditional aims of the liberal agenda find ample support in the arguments of John Bruer, of Jerome Kagan, of Judith Rich Harris, and of Gopnik, Meltzoff, and Kuhl. All of them offer considerable evidence that what the middle class perceives as inadequate parenting need not condemn a baby for life, and that institutions and interventions to help children as they approach maturity can make a big difference in how they turn out. It is, surely, a sad irony that, at the very moment when science has provided the intellectual reinforcement for modern liberalism, liberals themselves are giving up the fight.
John Rock’s Error

What the co-inventor of the Pill didn’t know about menstruation can endanger women’s health.

1.

John Rock was christened in 1890 at the Church of the Immaculate Conception in Marlborough, Massachusetts, and married by Cardinal William O’Connell, of Boston. He had five children and nineteen grandchildren. A crucifix hung above his desk, and nearly every day of his adult life he attended the 7 a.m. Mass at St. Mary’s in Brookline. Rock, his friends would say, was in love with his church. He was also one of the inventors of the birth-control pill, and it was his conviction that his faith and his vocation were perfectly compatible. To anyone who disagreed he would simply repeat the words spoken to him as a child by his home-town priest: “John, always stick to your conscience. Never let anyone else keep it for you. And I mean anyone else.” Even when Monsignor Francis W. Carney, of Cleveland, called him a “moral rapist,” and when Frederick Good, the longtime head of obstetrics at Boston City Hospital, went to Boston’s Cardinal Richard Cushing to have Rock excommunicated, Rock was unmoved. “You should be afraid to meet your Maker,” one angry woman wrote to him, soon after the Pill was approved. “My dear madam,” Rock wrote back, “in my faith, we are taught that the Lord is with us always. When my time comes, there will be no need for introductions.”

In the years immediately after the Pill was approved by the F.D.A., in 1960, Rock was everywhere. He appeared in interviews and documentaries on CBS and NBC, in Time, Newsweek, Life, The Saturday Evening Post. He toured the country tirelessly. He wrote a widely discussed book, “The Time Has Come: A Catholic Doctor’s Proposals to End the Battle Over Birth Control,” which was translated into French, German, and Dutch. Rock was six feet three and rail-thin, with impeccable manners; he held doors open for his patients and addressed them as “Mrs.” or “Miss.” His mere association with the Pill helped make it seem respectable. “He was a man of great dignity,” Dr. Sheldon J. Segal, of the Population Council, recalls. “Even if the occasion called for an open collar, you’d never find him without an ascot. He had the shock of white hair to go along with that. And posture, straight as an arrow, even to his last year.” At Harvard Medical School, he was a giant, teaching obstetrics for more than three decades. He was a pioneer in in-vitro fertilization and the freezing of sperm cells, and was the first to extract an intact fertilized egg. The Pill was his crowning achievement. His two collaborators, Gregory Pincus and Min-Chueh Chang, worked out the mechanism. He shepherded the drug through its clinical trials. “It was his name and his reputation that gave ultimate validity to the claims that the pill would protect women against unwanted pregnancy,” Loretta McLaughlin writes in her marvellous 1982 biography of Rock. Not long before the Pill’s approval, Rock travelled to Washington to testify before the F.D.A. about the drug’s safety. The agency examiner, Pasquale DeFelice, was a Catholic obstetrician from Georgetown University, and at one point, the story goes, DeFelice suggested the unthinkable–that the Catholic Church would never approve of the birth-control pill. “I can still see Rock standing there, his face composed, his eyes riveted on DeFelice,” a colleague recalled years later, “and then, in a voice that would congeal your soul, he said, ‘Young man, don’t you sell my church short.’ ”

In the end, of course, John Rock’s church disappointed him. In 1968, in the encyclical “Humanae Vitae,” Pope Paul VI outlawed oral contraceptives and all other “artificial” methods of birth control. The passion and urgency that animated the birth-control debates of the sixties are now a memory. John Rock still matters, though, for the simple reason that in the course of reconciling his church and his work he made an error. It was not a deliberate error. It became manifest only after his death, and through scientific advances he could not have anticipated. But because that mistake shaped the way he thought about the Pill–about what it was, and how it worked, and most of all what it meant–and because John Rock was one of those responsible for the way the Pill came into the world, his error has colored the way people have thought about contraception ever since.

John Rock believed that the Pill was a “natural” method of birth control. By that he didn’t mean that it felt natural, because it obviously didn’t for many women, particularly not in its earliest days, when the doses of hormone were many times as high as they are today. He meant that it worked by natural means. Women can get pregnant only during a certain interval each month, because after ovulation their bodies produce a surge of the hormone progesterone. Progesterone–one of a class of hormones known as progestin–prepares the uterus for implantation and stops the ovaries from releasing new eggs; it favors gestation. “It is progesterone, in the healthy woman, that prevents ovulation and establishes the pre- and post-menstrual ‘safe’ period,” Rock wrote. When a woman is pregnant, her body produces a stream of progestin in part for the same reason, so that another egg can’t be released and threaten the pregnancy already under way. Progestin, in other words, is nature’s contraceptive. And what was the Pill? Progestin in tablet form. When a woman was on the Pill, of course, these hormones weren’t coming in a sudden surge after ovulation and weren’t limited to certain times in her cycle. They were being given in a steady dose, so that ovulation was permanently shut down. They were also being given with an additional dose of estrogen, which holds the endometrium together and–as we’ve come to learn–helps maintain other tissues as well. But to Rock, the timing and combination of hormones wasn’t the issue. The key fact was that the Pill’s ingredients duplicated what could be found in the body naturally. And in that naturalness he saw enormous theological significance.

In 1951, for example, Pope Pius XII had sanctioned the rhythm method for Catholics because he deemed it a “natural” method of regulating procreation: it didn’t kill the sperm, like a spermicide, or frustrate the normal process of procreation, like a diaphragm, or mutilate the organs, like sterilization. Rock knew all about the rhythm method. In the nineteen-thirties, at the Free Hospital for Women, in Brookline, he had started the country’s first rhythm clinic for educating Catholic couples in natural contraception. But how did the rhythm method work? It worked by limiting sex to the safe period that progestin created. And how did the Pill work? It worked by using progestin to extend the safe period to the entire month. It didn’t mutilate the reproductive organs, or damage any natural process. “Indeed,” Rock wrote, oral contraceptives “may be characterized as a ‘pill-established safe period,’ and would seem to carry the same moral implications” as the rhythm method. The Pill was, to Rock, no more than “an adjunct to nature.”

In 1958, Pope Pius XII approved the Pill for Catholics, so long as its contraceptive effects were “indirect”–that is, so long as it was intended only as a remedy for conditions like painful menses or “a disease of the uterus.” That ruling emboldened Rock still further. Short-term use of the Pill, he knew, could regulate the cycle of women whose periods had previously been unpredictable. Since a regular menstrual cycle was necessary for the successful use of the rhythm method–and since the rhythm method was sanctioned by the Church–shouldn’t it be permissible for women with an irregular menstrual cycle to use the Pill in order to facilitate the use of rhythm? And if that was true why not take the logic one step further? As the federal judge John T. Noonan writes in “Contraception,” his history of the Catholic position on birth control:

If it was lawful to suppress ovulation to achieve a regularity necessary for successfully sterile intercourse, why was it not lawful to suppress ovulation without appeal to rhythm? If pregnancy could be prevented by pill plus rhythm, why not by pill alone? In each case suppression of ovulation was used as a means. How was a moral difference made by the addition of rhythm?

These arguments, as arcane as they may seem, were central to the development of oral contraception. It was John Rock and Gregory Pincus who decided that the Pill ought to be taken over a four-week cycle–a woman would spend three weeks on the Pill and the fourth week off the drug (or on a placebo), to allow for menstruation. There was and is no medical reason for this. A typical woman of childbearing age has a menstrual cycle of around twenty-eight days, determined by the cascades of hormones released by her ovaries. As first estrogen and then a combination of estrogen and progestin flood the uterus, its lining becomes thick and swollen, preparing for the implantation of a fertilized egg. If the egg is not fertilized, hormone levels plunge and cause the lining–the endometrium–to be sloughed off in a menstrual bleed. When a woman is on the Pill, however, no egg is released, because the Pill suppresses ovulation. The fluxes of estrogen and progestin that cause the lining of the uterus to grow are dramatically reduced, because the Pill slows down the ovaries. Pincus and Rock knew that the effect of the Pill’s hormones on the endometrium was so modest that women could conceivably go for months without having to menstruate. “In view of the ability of this compound to prevent menstrual bleeding as long as it is taken,” Pincus acknowledged in 1958, “a cycle of any desired length could presumably be produced.” But he and Rock decided to cut the hormones off after three weeks and trigger a menstrual period because they believed that women would find the continuation of their monthly bleeding reassuring. More to the point, if Rock wanted to demonstrate that the Pill was no more than a natural variant of the rhythm method, he couldn’t very well do away with the monthly menses. Rhythm required “regularity,” and so the Pill had to produce regularity as well.
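
(The regimen Rock and Pincus settled on is simple enough to write down. Here is a minimal Python sketch of the three-weeks-on, one-week-off cycle described above; the 21/7 split is from the text, while the function name and labels are illustrative.)

```python
# A sketch of the classic dial-pack regimen: twenty-one days of hormone
# pills, then seven days of placebo to trigger a withdrawal bleed.
# The 21/7 split is from the article; the labels are mine.
def pack_day(day: int) -> str:
    """Return what day N (1-28) of the pack contains."""
    return "hormone pill (estrogen + progestin)" if day <= 21 else "placebo (withdrawal bleed)"

for day in (1, 21, 22, 28):
    print(f"day {day:2d}: {pack_day(day)}")
```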

It has often been said of the Pill that no other drug has ever been so instantly recognizable by its packaging: that small, round plastic dial pack. But what was the dial pack if not the physical embodiment of the twenty-eight-day cycle? It was, in the words of its inventor, meant to fit into a case “indistinguishable” from a woman’s cosmetics compact, so that it might be carried “without giving a visual clue as to matters which are of no concern to others.” Today, the Pill is still often sold in dial packs and taken in twenty-eight-day cycles. It remains, in other words, a drug shaped by the dictates of the Catholic Church–by John Rock’s desire to make this new method of birth control seem as natural as possible. This was John Rock’s error. He was consumed by the idea of the natural. But what he thought was natural wasn’t so natural after all, and the Pill he ushered into the world turned out to be something other than what he thought it was. In John Rock’s mind the dictates of religion and the principles of science got mixed up, and only now are we beginning to untangle them.

2.

In 1986, a young scientist named Beverly Strassmann travelled to Africa to live with the Dogon tribe of Mali. Her research site was the village of Sangui in the Sahel, about a hundred and twenty miles south of Timbuktu. The Sahel is thorn savannah, green in the rainy season and semi-arid the rest of the year. The Dogon grow millet, sorghum, and onions, raise livestock, and live in adobe houses on the Bandiagara escarpment. They use no contraception. Many of them have held on to their ancestral customs and religious beliefs. Dogon farmers, in many respects, live much as people of that region have lived since antiquity. Strassmann wanted to construct a precise reproductive profile of the women in the tribe, in order to understand what female biology might have been like in the millennia that preceded the modern age. In a way, Strassmann was trying to answer the same question about female biology that John Rock and the Catholic Church had struggled with in the early sixties: what is natural? Only, her sense of “natural” was not theological but evolutionary. In the era during which natural selection established the basic patterns of human biology–the natural history of our species–how often did women have children? How often did they menstruate? When did they reach puberty and menopause? What impact did breast-feeding have on ovulation? These questions had been studied before, but never so thoroughly that anthropologists felt they knew the answers with any certainty.

Strassmann, who teaches at the University of Michigan at Ann Arbor, is a slender, soft-spoken woman with red hair, and she recalls her time in Mali with a certain wry humor. The house she stayed in while in Sangui had been used as a shelter for sheep before she came and was turned into a pigsty after she left. A small brown snake lived in her latrine, and would curl up in a camouflaged coil on the seat she sat on while bathing. The villagers, she says, were of two minds: was it a deadly snake–Kere me jongolo, literally, “My bite cannot be healed”–or a harmless mouse snake? (It turned out to be the latter.) Once, one of her neighbors and best friends in the tribe roasted her a rat as a special treat. “I told him that white people aren’t allowed to eat rat because rat is our totem,” Strassmann says. “I can still see it. Bloated and charred. Stretched by its paws. Whiskers singed. To say nothing of the tail.” Strassmann meant to live in Sangui for eighteen months, but her experiences there were so profound and exhilarating that she stayed for two and a half years. “I felt incredibly privileged,” she says. “I just couldn’t tear myself away.”

Part of Strassmann’s work focussed on the Dogon’s practice of segregating menstruating women in special huts on the fringes of the village. In Sangui, there were two menstrual huts–dark, cramped, one-room adobe structures, with boards for beds. Each accommodated three women, and when the rooms were full, latecomers were forced to stay outside on the rocks. “It’s not a place where people kick back and enjoy themselves,” Strassmann says. “It’s simply a nighttime hangout. They get there at dusk, and get up early in the morning and draw their water.” Strassmann took urine samples from the women using the hut, to confirm that they were menstruating. Then she made a list of all the women in the village, and for her entire time in Mali–seven hundred and thirty-six consecutive nights–she kept track of everyone who visited the hut. Among the Dogon, she found, a woman, on average, has her first period at the age of sixteen and gives birth eight or nine times. From menarche, the onset of menstruation, to the age of twenty, she averages seven periods a year. Over the next decade and a half, from the age of twenty to the age of thirty-four, she spends so much time either pregnant or breast-feeding (which, among the Dogon, suppresses ovulation for an average of twenty months) that she averages only slightly more than one period per year. Then, from the age of thirty-five until menopause, at around fifty, as her fertility rapidly declines, she averages four menses a year. All told, Dogon women menstruate about a hundred times in their lives. (Those who survive early childhood typically live into their seventh or eighth decade.) By contrast, the average for contemporary Western women is somewhere between three hundred and fifty and four hundred times.
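
(Strassmann’s arithmetic is easy to reproduce. A back-of-the-envelope sketch in Python, treating each of her age brackets as a flat annual rate, which is a simplification of mine; the brackets and rates themselves are hers.)

```python
# Lifetime menses for an average Dogon woman, using the rates above:
# ~7 menses a year from menarche (16) to twenty, slightly more than one
# a year during the childbearing years (20-34), and ~4 a year from 35
# to menopause at around fifty.
phases = [
    (16, 20, 7.0),   # menarche to twenty
    (20, 34, 1.2),   # childbearing years: pregnancy and nursing dominate
    (35, 50, 4.0),   # declining fertility to menopause
]
lifetime = sum((end - start) * rate for start, end, rate in phases)
print(f"Dogon lifetime menses: about {lifetime:.0f}")  # ~105, i.e. "about a hundred"
# versus roughly 350-400 for contemporary Western women
```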

Strassmann’s office is in the basement of a converted stable next to the Natural History Museum on the University of Michigan campus. Behind her desk is a row of battered filing cabinets, and as she was talking she turned and pulled out a series of yellowed charts. Each page listed, on the left, the first names and identification numbers of the Sangui women. Across the top was a time line, broken into thirty-day blocks. Every menses of every woman was marked with an X. In the village, Strassmann explained, there were two women who were sterile, and, because they couldn’t get pregnant, they were regulars at the menstrual hut. She flipped through the pages until she found them. “Look, she had twenty-nine menses over two years, and the other had twenty-three.” Next to each of their names was a solid line of X’s. “Here’s a woman approaching menopause,” Strassmann went on, running her finger down the page. “She’s cycling but is a little bit erratic. Here’s another woman of prime childbearing age. Two periods. Then pregnant. I never saw her again at the menstrual hut. This woman here didn’t go to the menstrual hut for twenty months after giving birth, because she was breast-feeding. Two periods. Got pregnant. Then she miscarried, had a few periods, then got pregnant again. This woman had three menses in the study period.” There weren’t a lot of X’s on Strassmann’s sheets. Most of the boxes were blank. She flipped back through her sheets to the two anomalous women who were menstruating every month. “If this were a menstrual chart of undergraduates here at the University of Michigan, all the rows would be like this.”

Strassmann does not claim that her statistics apply to every preindustrial society. But she believes–and other anthropological work backs her up–that the number of lifetime menses isn’t greatly affected by differences in diet or climate or method of subsistence (foraging versus agriculture, say). The more significant factors, Strassmann says, are things like the prevalence of wet-nursing or sterility. But over all she believes that the basic pattern of late menarche, many pregnancies, and long menstrual-free stretches caused by intensive breast-feeding was virtually universal up until the “demographic transition” of a hundred years ago from high to low fertility. In other words, what we think of as normal–frequent menses–is in evolutionary terms abnormal. “It’s a pity that gynecologists think that women have to menstruate every month,” Strassmann went on. “They just don’t understand the real biology of menstruation.”

To Strassmann and others in the field of evolutionary medicine, this shift from a hundred to four hundred lifetime menses is enormously significant. It means that women’s bodies are being subjected to changes and stresses that they were not necessarily designed by evolution to handle. In a brilliant and provocative book, “Is Menstruation Obsolete?,” Drs. Elsimar Coutinho and Sheldon J. Segal, two of the world’s most prominent contraceptive researchers, argue that this recent move to what they call “incessant ovulation” has become a serious problem for women’s health. This is not to say that women are always better off the less they menstruate. There are times–particularly in the context of certain medical conditions–when women ought to be concerned if they aren’t menstruating: in obese women, a failure to menstruate can signal an increased risk of uterine cancer; in female athletes, a failure to menstruate can signal an increased risk of osteoporosis. But for most women, Coutinho and Segal say, incessant ovulation serves no purpose except to increase the occurrence of abdominal pain, mood shifts, migraines, endometriosis, fibroids, and anemia–the last of which, they point out, is “one of the most serious health problems in the world.”

Most serious of all is the greatly increased risk of some cancers. Cancer, after all, occurs because as cells divide and reproduce they sometimes make mistakes that cripple the cells’ defenses against runaway growth. That’s one of the reasons that our risk of cancer generally increases as we age: our cells have more time to make mistakes. But this also means that any change promoting cell division has the potential to increase cancer risk, and ovulation appears to be one of those changes. Whenever a woman ovulates, an egg literally bursts through the walls of her ovaries. To heal that puncture, the cells of the ovary wall have to divide and reproduce. Every time a woman gets pregnant and bears a child, her lifetime risk of ovarian cancer drops ten per cent. Why? Possibly because, between nine months of pregnancy and the suppression of ovulation associated with breast-feeding, she stops ovulating for twelve months–and saves her ovarian walls from twelve bouts of cell division. The argument is similar for endometrial cancer. When a woman is menstruating, the estrogen that flows through her uterus stimulates the growth of the uterine lining, causing a flurry of potentially dangerous cell division. Women who do not menstruate frequently spare the endometrium that risk. Ovarian and endometrial cancer are characteristically modern diseases, consequences, in part, of a century in which women have come to menstruate four hundred times in a lifetime.
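
(The arithmetic behind the ovulation argument can be sketched simply. If each birth cut lifetime ovarian-cancer risk by the ten per cent the article cites, the effect across a Dogon-scale reproductive history would be substantial. Treating the per-child drops as compounding independently is an assumption of this sketch, not a claim in the text.)

```python
# Toy model: relative ovarian-cancer risk after n births, assuming the
# "ten per cent per child" figure compounds multiplicatively (my assumption;
# the article states only the per-child drop).
for births in (0, 1, 2, 4, 8):
    relative_risk = 0.9 ** births
    print(f"{births} births: relative risk ~ {relative_risk:.2f}")
# Eight births, a typical Dogon history, would leave ~43% of baseline risk
# on this toy model.
```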

In this sense, the Pill really does have a “natural” effect. By blocking the release of new eggs, the progestin in oral contraceptives reduces the rounds of ovarian cell division. Progestin also counters the surges of estrogen in the endometrium, restraining cell division there. A woman who takes the Pill for ten years cuts her ovarian-cancer risk by around seventy per cent and her endometrial-cancer risk by around sixty per cent. But here “natural” means something different from what Rock meant. He assumed that the Pill was natural because it was an unobtrusive variant of the body’s own processes. In fact, as more recent research suggests, the Pill is really only natural in so far as it’s radical–rescuing the ovaries and endometrium from modernity. That Rock insisted on a twenty-eight-day cycle for his pill is evidence of just how deep his misunderstanding was: the real promise of the Pill was not that it could preserve the menstrual rhythms of the twentieth century but that it could disrupt them.

Today, a growing movement of reproductive specialists has begun to campaign loudly against the standard twenty-eight-day pill regimen. The drug company Organon has come out with a new oral contraceptive, called Mircette, that cuts the seven-day placebo interval to two days. Patricia Sulak, a medical researcher at Texas A. & M. University, has shown that most women can probably stay on the Pill, straight through, for six to twelve weeks before they experience breakthrough bleeding or spotting. More recently, Sulak has documented precisely what the cost of the Pill’s monthly “off” week is. In a paper in the February issue of the journal Obstetrics and Gynecology, she and her colleagues documented something that will come as no surprise to most women on the Pill: during the placebo week, the number of users experiencing pelvic pain, bloating, and swelling more than triples, breast tenderness more than doubles, and headaches increase by almost fifty per cent. In other words, some women on the Pill continue to experience the kinds of side effects associated with normal menstruation. Sulak’s paper is a short, dry, academic work, of the sort intended for a narrow professional audience. But it is impossible to read it without being struck by the consequences of John Rock’s desire to please his church. In the past forty years, millions of women around the world have been given the Pill in such a way as to maximize their pain and suffering. And to what end? To pretend that the Pill was no more than a pharmaceutical version of the rhythm method?

3.

In 1980 and 1981, Malcolm Pike, a medical statistician at the University of Southern California, travelled to Japan for six months to study at the Atomic Bomb Casualty Commission. Pike wasn’t interested in the effects of the bomb. He wanted to examine the medical records that the commission had been painstakingly assembling on the survivors of Hiroshima and Nagasaki. He was investigating a question that would ultimately do as much to complicate our understanding of the Pill as Strassmann’s research would a decade later: why did Japanese women have breast-cancer rates six times lower than American women?

In the late forties, the World Health Organization began to collect and publish comparative health statistics from around the world, and the breast-cancer disparity between Japan and America had come to obsess cancer specialists. The obvious answer–that Japanese women were somehow genetically protected against breast cancer–didn’t make sense, because once Japanese women moved to the United States they began to get breast cancer almost as often as American women did. As a result, many experts at the time assumed that the culprit had to be some unknown toxic chemical or virus unique to the West. Brian Henderson, a colleague of Pike’s at U.S.C. and his regular collaborator, says that when he entered the field, in 1970, “the whole viral- and chemical-carcinogenesis idea was huge–it dominated the literature.” As he recalls, “Breast cancer fell into this large, unknown box that said it was something to do with the environment–and that word ‘environment’ meant a lot of different things to a lot of different people. They might be talking about diet or smoking or pesticides.”

Henderson and Pike, however, became fascinated by a number of statistical peculiarities. For one thing, the rate of increase in breast-cancer risk rises sharply throughout women’s thirties and forties and then, at menopause, it starts to slow down. If a cancer is caused by some toxic outside agent, you’d expect that rate to rise steadily with each advancing year, as the number of mutations and genetic mistakes steadily accumulates. Breast cancer, by contrast, looked as if it were being driven by something specific to a woman’s reproductive years. What was more, younger women who had had their ovaries removed had a markedly lower risk of breast cancer; when their bodies weren’t producing estrogen and progestin every month, they got far fewer tumors. Pike and Henderson became convinced that breast cancer was linked to a process of cell division similar to that of ovarian and endometrial cancer. The female breast, after all, is just as sensitive to the level of hormones in a woman’s body as the reproductive system. When the breast is exposed to estrogen, the cells of the terminal-duct lobular unit–where most breast cancer arises–undergo a flurry of division. And during the mid-to-late stage of the menstrual cycle, when the ovaries start producing large amounts of progestin, the pace of cell division in that region doubles.

It made intuitive sense, then, that a woman’s risk of breast cancer would be linked to the amount of estrogen and progestin her breasts have been exposed to during her lifetime. How old a woman is at menarche should make a big difference, because the beginning of puberty results in a hormonal surge through a woman’s body, and the breast cells of an adolescent appear to be highly susceptible to the errors that result in cancer. (For more complicated reasons, bearing children turns out to be protective against breast cancer, perhaps because in the last two trimesters of pregnancy the cells of the breast mature and become much more resistant to mutations.) How old a woman is at menopause should matter, and so should how much estrogen and progestin her ovaries actually produce, and even how much she weighs after menopause, because fat cells turn other hormones into estrogen.

Pike went to Hiroshima to test the cell-division theory. With other researchers at the medical archive, he looked first at the age when Japanese women got their period. A Japanese woman born at the turn of the century had her first period at sixteen and a half. American women born at the same time had their first period at fourteen. That difference alone, by their calculation, was sufficient to explain forty per cent of the gap between American and Japanese breast-cancer rates. “They had collected amazing records from the women of that area,” Pike said. “You could follow precisely the change in age of menarche over the century. You could even see the effects of the Second World War. The age of menarche of Japanese girls went up right at that point because of poor nutrition and other hardships. And then it started to go back down after the war. That’s what convinced me that the data were wonderful.”

Pike, Henderson, and their colleagues then folded in the other risk factors. Age at menopause, age at first pregnancy, and number of children weren’t sufficiently different between the two countries to matter. But weight was. The average postmenopausal Japanese woman weighed a hundred pounds; the average American woman weighed a hundred and forty-five pounds. That fact explained another twenty-five per cent of the difference. Finally, the researchers analyzed blood samples from women in rural Japan and China, and found that their ovaries–possibly because of their extremely low-fat diet–were producing only about seventy-five per cent as much estrogen as American women’s. Those three factors, added together, seemed to explain the breast-cancer gap. They also appeared to explain why the rates of breast cancer among Asian women began to increase when they came to America: on an American diet, they started to menstruate earlier, gained more weight, and produced more estrogen. The talk of chemicals and toxins and power lines and smog was set aside. “When people say that what we understand about breast cancer explains only a small amount of the problem, that it is somehow a mystery, it’s absolute nonsense,” Pike says flatly. He is a South African in his sixties, with graying hair and a salt-and-pepper beard. Along with Henderson, he is an eminent figure in cancer research, but no one would ever accuse him of being tentative in his pronouncements. “We understand breast cancer extraordinarily well. We understand it as well as we understand cigarettes and lung cancer.”

What Pike discovered in Japan led him to think about the Pill, because a tablet that suppressed ovulation–and the monthly tides of estrogen and progestin that come with it–obviously had the potential to be a powerful anti-breast-cancer drug. But the breast was a little different from the reproductive organs. Progestin prevented ovarian cancer because it suppressed ovulation. It was good for preventing endometrial cancer because it countered the stimulating effects of estrogen. But in breast cells, Pike believed, progestin wasn’t the solution; it was one of the hormones that caused cell division. This is one explanation for why, after years of studying the Pill, researchers have concluded that it has no effect one way or the other on breast cancer: whatever beneficial effect results from what the Pill does is cancelled out by how it does it. John Rock touted the fact that the Pill used progestin, because progestin was the body’s own contraceptive. But Pike saw nothing “natural” about subjecting the breast to that heavy a dose of progestin. In his view, the amount of progestin and estrogen needed to make an effective contraceptive was much greater than the amount needed to keep the reproductive system healthy–and that excess was unnecessarily raising the risk of breast cancer. A truly natural Pill might be one that found a way to suppress ovulation without using progestin. Throughout the nineteen-eighties, Pike recalls, this was his obsession. “We were all trying to work out how the hell we could fix the Pill. We thought about it day and night.”

4.

Pike’s proposed solution is a class of drugs known as GnRHAs (gonadotropin-releasing-hormone agonists), which have been around for many years. A GnRHA disrupts the signals that the pituitary gland sends when it is attempting to order the manufacture of sex hormones. It’s a circuit breaker. “We’ve got substantial experience with this drug,” Pike says. Men suffering from prostate cancer are sometimes given a GnRHA to temporarily halt the production of testosterone, which can exacerbate their tumors. Girls suffering from what’s called precocious puberty–puberty at seven or eight, or even younger–are sometimes given the drug to forestall sexual maturity. If you give a GnRHA to women of childbearing age, it stops their ovaries from producing estrogen and progestin. If the conventional Pill works by convincing the body that it is, well, a little bit pregnant, Pike’s pill would work by convincing the body that it was menopausal.

In the form Pike wants to use it, GnRHA will come in a clear glass bottle the size of a saltshaker, with a white plastic mister on top. It will be inhaled nasally. It breaks down in the body very quickly. A morning dose simply makes a woman menopausal for a while. Menopause, of course, has its risks. Women need estrogen to keep their hearts and bones strong. They also need progestin to keep the uterus healthy. So Pike intends to add back just enough of each hormone to solve these problems, but much less than women now receive on the Pill. Ideally, Pike says, the estrogen dose would be adjustable: women would try various levels until they found one that suited them. The progestin would come in four twelve-day stretches a year. When someone on Pike’s regimen stopped the progestin, she would have one of four annual menses.

Pike and an oncologist named Darcy Spicer have joined forces with another oncologist, John Daniels, in a startup called Balance Pharmaceuticals. The firm operates out of a small white industrial strip mall next to the freeway in Santa Monica. One of the tenants is a paint store, another looks like some sort of export company. Balance’s offices are housed in an oversized garage with a big overhead door and concrete floors. There is a tiny reception area, a little coffee table and a couch, and a warren of desks, bookshelves, filing cabinets, and computers. Balance is testing its formulation on a small group of women at high risk for breast cancer, and if the results continue to be encouraging, it will one day file for F.D.A. approval.

“When I met Darcy Spicer a couple of years ago,” Pike said recently, as he sat at a conference table deep in the Balance garage, “he said, ‘Why don’t we just try it out? By taking mammograms, we should be able to see changes in the breasts of women on this drug, even if we add back a little estrogen to avoid side effects.’ So we did a study, and we found that there were huge changes.” Pike pulled out a paper he and Spicer had published in the Journal of the National Cancer Institute, showing breast X-rays of three young women. “These are the mammograms of the women before they start,” he said. Amid the grainy black outlines of the breast were large white fibrous clumps–clumps that Pike and Spicer believe are indicators of the kind of relentless cell division that increases breast-cancer risk. Next to those X-rays were three mammograms of the same women taken after a year on the GnRHA regimen. The clumps were almost entirely gone. “This to us represents that we have actually stopped the activity inside the breasts,” Pike went on. “White is a proxy for cell proliferation. We’re slowing down the breast.”

Pike stood up from the table and turned to a sketch pad on an easel behind him. He quickly wrote a series of numbers on the paper. “Suppose a woman reaches menarche at fifteen and menopause at fifty. That’s thirty-five years of stimulating the breast. If you cut that time in half, you will change her risk not by half but by half raised to the power of 4.5.” He was working with a statistical model he had developed to calculate breast-cancer risk. “That’s one-twenty-third. Your risk of breast cancer will be one-twenty-third of what it would be otherwise. It won’t be zero. You can’t get to zero. If you use this for ten years, your risk will be cut by at least half. If you use it for five years, your risk will be cut by at least a third. It’s as if your breast were to be five years younger, or ten years younger–forever.” The regimen, he says, should also provide protection against ovarian cancer.
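Pike’s arithmetic is easy to check. The sketch below, in Python, takes the model just as he states it here–risk proportional to years of hormonal stimulation raised to the power 4.5–and treats each year on the regimen as a year of stimulation simply removed. The function, its default values, and that all-or-nothing assumption are illustrative readings of the passage, not Pike’s published model.

def relative_risk(years_of_stimulation, baseline_years=35.0, exponent=4.5):
    # Risk relative to a woman with `baseline_years` of hormonal stimulation,
    # under the power-law model described above.
    return (years_of_stimulation / baseline_years) ** exponent

# Menarche at fifteen, menopause at fifty: thirty-five years of stimulation.
print(relative_risk(17.5))  # exposure cut in half: ~0.044, about one-twenty-third
print(relative_risk(25.0))  # ten years removed: ~0.22 of baseline risk
print(relative_risk(30.0))  # five years removed: ~0.50 of baseline risk

On these assumptions, ten years on the regimen leaves roughly a fifth of baseline risk and five years roughly half–reductions at least as large as the conservative figures Pike quotes.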

Pike gave the sense that he had made this little speech many times before, to colleagues, to his family and friends–and to investors. He knew by now how strange and unbelievable what he was saying sounded. Here he was, in a cold, cramped garage in the industrial section of Santa Monica, arguing that he knew how to save the lives of hundreds of thousands of women around the world. And he wanted to do that by making young women menopausal through a chemical regimen sniffed every morning out of a bottle. This was, to say the least, a bold idea. Could he strike the right balance between the hormone levels women need to stay healthy and those that ultimately make them sick? Was progestin really so important in breast cancer? There are cancer specialists who remain skeptical. And, most of all, what would women think? John Rock, at least, had lent the cause of birth control his Old World manners and distinguished white hair and appeals from theology; he took pains to make the Pill seem like the least radical of interventions–nature’s contraceptive, something that could be slipped inside a woman’s purse and pass without notice. Pike was going to take the whole forty-year mythology of “natural” and sweep it aside. “Women are going to think, I’m being manipulated here. And it’s a perfectly reasonable thing to think.” Pike’s South African accent gets a little stronger as he becomes more animated. “But the modern way of living represents an extraordinary change in female biology. Women are going out and becoming lawyers, doctors, presidents of countries. They need to understand that what we are trying to do isn’t abnormal. It’s just as normal as when someone hundreds of years ago had menarche at seventeen and had five babies and had three hundred fewer menstrual cycles than most women have today. The world is not the world it was. And some of the risks that go with the benefits of a woman getting educated and not getting pregnant all the time are breast cancer and ovarian cancer, and we need to deal with it. I have three daughters. The earliest grandchild I had was when one of them was thirty-one. That’s the way many women are now. They ovulate from twelve or thirteen until their early thirties. Twenty years of uninterrupted ovulation before their first child! That’s a brand-new phenomenon!”

5.

John Rock’s long battle on behalf of his birth-control pill forced the Church to take notice. In the spring of 1963, just after Rock’s book was published, a meeting was held at the Vatican between high officials of the Catholic Church and Donald B. Straus, the chairman of Planned Parenthood. That summit was followed by another, on the campus of the University of Notre Dame. In the summer of 1964, on the eve of the feast of St. John the Baptist, Pope Paul VI announced that he would ask a committee of church officials to reëxamine the Vatican’s position on contraception. The group met first at the Collegio San Jose, in Rome, and it was clear that a majority of the committee were in favor of approving the Pill. Committee reports leaked to the National Catholic Register confirmed that Rock’s case appeared to be winning. Rock was elated. Newsweek put him on its cover, and ran a picture of the Pope inside. “Not since the Copernicans suggested in the sixteenth century that the sun was the center of the planetary system has the Roman Catholic Church found itself on such a perilous collision course with a new body of knowledge,” the article concluded. Paul VI, however, was unmoved. He stalled, delaying a verdict for months, and then years. Some said he fell under the sway of conservative elements within the Vatican. In the interim, theologians began exposing the holes in Rock’s arguments. The rhythm method “‘prevents’ conception by abstinence, that is, by the non-performance of the conjugal act during the fertile period,” the Catholic journal America concluded in a 1964 editorial. “The pill prevents conception by suppressing ovulation and by thus abolishing the fertile period. No amount of word juggling can make abstinence from sexual relations and the suppression of ovulation one and the same thing.” On July 29, 1968, in the “Humanae Vitae” encyclical, the Pope broke his silence, declaring all “artificial” methods of contraception to be against the teachings of the Church.

In hindsight, it is possible to see the opportunity that Rock missed. If he had known what we know now and had talked about the Pill not as a contraceptive but as a cancer drug–not as a drug to prevent life but as one that would save life–the church might well have said yes. Hadn’t Pius XII already approved the Pill for therapeutic purposes? Rock would only have had to think of the Pill as Pike thinks of it: as a drug whose contraceptive aspects are merely a means of attracting users, of getting, as Pike put it, “people who are young to take a lot of stuff they wouldn’t otherwise take.”

But Rock did not live long enough to understand how things might have been. What he witnessed, instead, was the terrible time at the end of the sixties when the Pill suddenly stood accused–wrongly–of causing blood clots, strokes, and heart attacks. Between the mid-seventies and the early eighties, the number of women in the United States using the Pill fell by half. Harvard Medical School, meanwhile, took over Rock’s Reproductive Clinic and pushed him out. His Harvard pension paid him only seventy-five dollars a year. He had almost no money in the bank and had to sell his house in Brookline. In 1971, Rock left Boston and retreated to a farmhouse in the hills of New Hampshire. He swam in the stream behind the house. He listened to John Philip Sousa marches. In the evening, he would sit in the living room with a pitcher of martinis. In 1983, he gave his last public interview, and it was as if the memory of his achievements was now so painful that he had blotted it out.

He was asked what the most gratifying time of his life was. “Right now,” the inventor of the Pill answered, incredibly. He was sitting by the fire in a crisp white shirt and tie, reading “The Origin,” Irving Stone’s fictional account of the life of Darwin. “It frequently occurs to me, gosh, what a lucky guy I am. I have no responsibilities, and I have everything I want. I take a dose of equanimity every twenty minutes. I will not be disturbed about things.”

Once, John Rock had gone to seven-o’clock Mass every morning and kept a crucifix above his desk. His interviewer, the writer Sara Davidson, moved her chair closer to his and asked him whether he still believed in an afterlife.

“Of course I don’t,” Rock answered abruptly. Though he didn’t explain why, his reasons aren’t hard to imagine. The church could not square the requirements of its faith with the results of his science, and if the church couldn’t reconcile them how could Rock be expected to? John Rock always stuck to his conscience, and in the end his conscience forced him away from the thing he loved most. This was not John Rock’s error. Nor was it his church’s. It was the fault of the haphazard nature of science, which all too often produces progress in advance of understanding. If the order of events in the discovery of what was natural had been reversed, his world, and our world, too, would have been a different place.

“Heaven and Hell, Rome, all the Church stuff–that’s for the solace of the multitude,” Rock said. He had only a year to live. “I was an ardent practicing Catholic for a long time, and I really believed it all then, you see.”