Imagine for a moment that the American Psychiatric Association were about to compile a new edition of its Diagnostic and Statistical Manual of Mental Disorders but that instead of 2019, the year is 1880.
Transported to the world of the late 19th century, our nation’s preeminent psychiatric body would have virtually no choice but to include hysteria in the pages of its new volume. At the turn of the last century, women by the tens of thousands, after all, displayed the distinctive signs: convulsive fits, facial tics, spinal irritation, sensitivity to touch, and leg paralysis. Not a doctor in the Western world at the time would have failed to recognize the presentation. “The illness of our age is hysteria,” a French journalist wrote. “Everywhere one rubs elbows with it.”
Hysteria would have had to be included in our hypothetical 1880 DSM for the exact same reasons that attention deficit hyperactivity disorder is included in the recently released DSM-5. The disorder clearly existed in a population and could be reliably distinguished, by experts and clinicians, from other constellations of symptoms. There were no reliable medical tests to distinguish hysteria from other illnesses then; the same is true of the disorders listed in the DSM-5 today. Practically speaking, the criteria by which something is declared a mental illness are virtually the same now as they were over a hundred years ago.
The DSM determines which mental disorders are worthy of insurance reimbursement, legal standing, and serious discussion in American life. That its diagnoses are not more scientific is, according to several prominent critics, a scandal.
In a major blow to the APA’s dominance over mental-health diagnoses, Thomas R. Insel, while director of the National Institute of Mental Health, declared that his organization would no longer rely on the DSM as a guide to funding research.
“The weakness is its lack of validity,” he wrote in 2013 about the newly released DSM-5. “Unlike our definitions of ischemic heart disease, lymphoma, or AIDS, the DSM diagnoses are based on a consensus about clusters of clinical symptoms, not any objective laboratory measure. In the rest of medicine, this would be equivalent to creating diagnostic systems based on the nature of chest pain or the quality of fever.”
As an alternative, Insel called for the creation of a new, rival classification system based on genomics and brain imaging. Unfortunately, as Insel himself would find out in subsequent years, the science just wasn’t there yet.
Insel, who went on from the NIMH to start a private healthcare and diagnostics company, now acknowledges that he had a “bias that we could fix the diagnostic problem with genomics and imaging.” “I was wrong,” he said recently. “We spent a lot of money on both of those efforts, and at the end of the day we found only a little evidence. It was not the transformative technology that we had hoped for. We just weren’t getting the specificity and the clarity that we thought we would get.”
Insel’s new company is exploring the idea that data from how we use our cell phones may provide a unique way to see a mental illness before we are suffering severe symptoms. “Because there is so much data every time we use our smartphones, this is beginning to give us critical insights into cognition, mood, and other aspects of behavior,” he says. “It is continuous and highly objective, and it may be actionable.”
Insel’s mistake in prematurely dismissing the DSM goes beyond the fact that the scientific tools weren’t ready. His more fundamental mistake may have been the assumption that it is even possible to strip away the influence of culture on mental-health symptoms. Indeed, from where I sit, the trouble with the DSM—both this one and previous editions—is not so much that it is insufficiently grounded in biology, but that it ignores the inescapable relationship between social cues and the shifting manifestations of mental illness.
PSYCHIATRY TENDS NOT TO learn from its past. With each new generation, psychiatric healers dismiss the enthusiasms of their predecessors by pointing out the unscientific biases and cultural trends on which their theories were based. Looking back at hysteria, we can see now that 19th-century doctors were operating amidst fanciful beliefs about female anatomy, an assumption of feminine weakness, and the Victorian-era weirdness surrounding female sexuality. And good riddance to bad old ideas. But the more important point to take away is this: There is little doubt that the symptoms expressed by those thousands of women were real.
The resounding lesson of the history of mental illness is that psychiatric theories and diagnostic categories shape the symptoms of patients. “As doctors’ own ideas about what constitutes ‘real’ disease change from time to time,” writes the medical historian Edward Shorter, “the symptoms that patients present will change as well.”
This is not to say that psychiatry wantonly creates sick people where there are none, as many critics fear the new DSM-5 will do. Allen Frances—a psychiatrist who, as it happens, was in charge of compiling the previous DSM, the DSM-IV—predicts in his new book, Saving Normal, that the DSM-5 will “mislabel normal people, promote diagnostic inflation, and encourage inappropriate medication use.” Big Pharma, he says, is intent on ironing out all psychological diversity to create a “human monoculture,” and the DSM-5 will facilitate that mission. In Frances’ dystopian post-DSM-5 future, there will be a psychoactive pill for every occasion, a diagnosis for every inconvenient feeling: “Disruptive mood dysregulation disorder” will turn temper tantrums into a mental illness and encourage a broadened use of antipsychotic drugs; new language describing attention deficit disorder that expands the diagnostic focus to adults will prompt a dramatic rise in the prescription of stimulants like Adderall and Ritalin; the removal of the bereavement exclusion from the diagnosis of major depressive disorder will stigmatize the human process of grieving. The list goes on.
Recently, a large study suggested that 46 percent of Americans will receive a mental-health diagnosis at some point in their lifetimes. Critics like Frances suggest that, with the new categories and loosened criteria in the DSM-5, the percentage of Americans thinking of themselves as mentally ill will rise far above that mark.
But recent history doesn’t support these fears. In 1994 the DSM-IV—the edition Frances oversaw—launched several new diagnostic categories that became hugely popular among clinicians and the public (bipolar II, attention deficit hyperactivity disorder, and social phobia, to name a few), but the number of people receiving a mental-health diagnosis did not go up between 1994 and 2005. In fact, as psychologist Gary Greenberg, author of The Book of Woe, recently pointed out to me, the prevalence of mental health diagnoses actually went down slightly. This suggests that the declarations of the APA don’t have the power to create legions of mentally ill people by fiat, but rather that the number of people who struggle with their own minds stays somewhat constant.
What changes, it seems, is that they get categorized differently depending on the cultural landscape of the moment. Those walking worried who would have accepted the ubiquitous label of “anxiety” in the 1970s would accept the label of depression that rose to prominence in the late 1980s and the 1990s, and many in the same group might today think of themselves as having social anxiety disorder or ADHD.
Viewed over history, mental health symptoms begin to look less like immutable biological facts and more like a kind of language. Someone in need of communicating his or her inchoate psychological pain has a limited vocabulary of symptoms to choose from. From a distance, we can see how the flawed certainties of Victorian-era healers created a sense of inevitability around the symptoms of hysteria. There is no reason to believe that the same isn’t happening today. Healers have theories about how the mind functions and then discover the symptoms that conform to those theories. Because patients usually seek help when they are in need of guidance about the workings of their minds, they are uniquely susceptible to being influenced by the psychiatric certainties of the moment. There is really no getting around this dynamic. Even Insel’s supposedly objective laboratory scientists would, no doubt, inadvertently define which symptoms our troubled minds gravitate toward. The human unconscious is adept at speaking the language of distress that will be understood.
WHY DO PSYCHIATRIC DIAGNOSES fade away only to be replaced by something new? The demise of hysteria may hold a clue. In the early part of the 20th century, the distinctive presentation of the disorder began to blur and then disappear. The symptoms began to lose their punch. In France this was called la petite hystérie. One doctor described patients who would “content themselves with a few gesticulatory movements, with a few spasms.” Hysteria had begun to suffer from a kind of diagnostic overload. By the 1930s or so, the dramatic and unmistakable symptoms of hysteria were vanishing from the cultural landscape because they were no longer recognized as a clear communication of psychological suffering by a new generation of women and their healers.
It is true that the DSM has a great deal of influence in modern America, but it may be more of a scapegoat than a villain. It is certainly not the only force at play in determining which symptoms become culturally salient. As Frances suggests, the marketing efforts of Big Pharma on TV and elsewhere have a huge influence over which diagnoses become fashionable. Some commentators have noted that shifts in diagnostic trends seem uncannily timed to coincide with the term lengths of the patents that pharmaceutical companies hold on drugs. Is it a coincidence that the diagnosis of anxiety diminished as the patents on tranquilizers ran out? Or that the diagnosis of depression rose as drug companies landed new exclusive rights to sell a new generation of antidepressants? Consider for a moment that the diagnosis of depression didn’t become popular in Japan until GlaxoSmithKline got approval to market Paxil in the country.
Journalists play a role as well: We love to broadcast new mental-health epidemics. The dramatic rise of bulimia in the United Kingdom neatly coincided with the media frenzy surrounding the rumors and subsequent revelation that Princess Di suffered from the condition. Similarly, an American form of anorexia hit Hong Kong in the mid-1990s just after a wave of local media coverage brought attention to the disorder.
The trick is not to scrub culture from the study of mental illness but to understand how the unconscious takes cues from its social settings. This knowledge won’t make mental illnesses vanish (Americans, for some reason, find it particularly difficult to grasp that mental illnesses are absolutely real and culturally shaped at the same time). But it might discourage healers from leaping from one trendy diagnosis to the next. As things stand, we have little defense against such enthusiasms. “We are always just one blockbuster movie and some weekend therapist’s workshops away from a new fad,” Frances writes. “Look for another epidemic beginning in a decade or two as a new generation of therapists forgets the lessons of the past.” Given all the players stirring these cultural currents, I’d make a sizable bet that we won’t have to wait nearly that long.