An ‘Epidemical Distemper’: Conversion and Disorder, then and now

On March 11, 2012, the New York Times Sunday Magazine ran a cover story about a small town in western New York where eighteen female high school students had recently begun twitching uncontrollably. One bruised her face with her own cell phone; others uttered involuntary cries in the school halls that teachers tried bravely to ignore. Fears of toxic contamination from an old train wreck brought social activist Erin Brockovich to the scene, while psychologists made the more widely persuasive diagnosis of conversion disorder, or—since multiple individuals were involved—mass psychogenic illness. Environmentally induced malady or psychosomatic pathology? In this debate, perhaps even more than in the girls’ behaviors alone, the contorted bodies of Le Roy, New York, powerfully suggest another group of twitching girls from almost three centuries before. There too, leading intellectuals used the events of a town few had heard of—Northampton, Massachusetts—to justify conflicting worldviews by making exemplary subjects of young women undergoing a shared affliction. Characterized by “outcries, faintings, convulsions and such like,” their behavior was described in terms eerily similar to those applied to the Le Roy girls, with their “facial tics, body twitches, vocal outbursts, seizures.” Spastic girls, writhing girls, girls in pain, girls out of control. All made for good reading, in 1737 as again in 2012.

Both events share key features, from the use of the term “conversion,” to the attention brought to bear on an otherwise unremarkable location, to public figures’ use of the phenomenon for self-promotion, to the predominance of women in their exposition. In all these arenas, we will see that the struggle to interpret these behaviors correctly was also a struggle about social order. The crisis in Le Roy—a town of fewer than 10,000 in New York’s rust belt, between Rochester and Buffalo—began with one person, a seventeen-year-old high school cheerleader who woke up from a nap in the fall of 2011 experiencing facial spasms. Within a couple of months, three other girls, two of them cheerleaders, had similar symptoms, including stuttering and uncontrollable tics. Eventually, at least eighteen members of the high school were afflicted, all teenage girls except for two. The seemingly inexplicable nature of this contagion made for widespread media coverage (including local TV news, live appearances on “Dr. Drew,” online reporting by well-established sites such as The Daily Beast, and profiles in some of the nation’s most august print outlets). As public attention has diminished (and with it the strange combination of stress and sudden fame that may have fueled the symptoms in the first place), many individuals have recovered. Slightly less than a year after that fateful nap, one local station reported that the girls’ physical condition had improved significantly, with many living “symptom-free.” As for the conditions that contributed to the malady—which could range from widespread economic decline, to the absence of fathers in most sufferers’ lives, to the sometimes brutal social hierarchies of high school—they are perhaps even more mysterious, and more recalcitrant, than the tics themselves.

This essay explores the connection between these two related historical phenomena. It asks why we continue to see shared involuntary behaviors among young women in such oppositional terms, and what the stakes are behind our focus on female adolescents as a barometer of the state of our communities. Why, in the words of New Hampshire minister Ethan Smith in 1815, must we make “instructive biography” out of female behavior that is extravagantly, pointedly not intended to be didactic? The more the experience itself seems to resist coherent interpretation (whether by being characterized by erratic behavior, or spreading from one to another through unknown means), the more various authorities invest in their own particular readings, each representative of a competing social viewpoint. As opposing groups fight to defend antagonistic beliefs, their accounts take on a life of their own, such that the women’s existence becomes most important not in and of itself but rather as a register of broader cultural struggles. Somehow, bouts of intense, shared, atypical experience among young women attract attention both on the basis of their particular dramatic appeal and as uniquely pliable discourses in the service of ideological debate.

 

1. “Portrait of Jonathan Edwards,” eng. A.B. Walter. Courtesy of the American Portrait Print Collection, the American Antiquarian Society, Worcester, Massachusetts.

And yet these efforts to turn female experience into a “teachable moment” meet with curious resistance in the socially specific and physically freighted details that lard the narratives. Instructive biography stumbles under the weight of its own evidence, which compels our attention for reasons less salubrious than we might think or wish. While we may not share recent New York Times letter-writer Mark Schreiner’s view, regarding the events of Le Roy, that “it is a crime that Americans living in other places would watch all this for their entertainment,” it is undeniable that in both centuries, the suffering of otherwise unexceptional individuals brought their towns to the attention of a far-flung public. No one would have known about either event had the curiosity of strangers not driven an outpouring of print on the subject. And it is this curiosity, even more than the opposing viewpoints it allowed to see the light of day, that makes seemingly passive bystanders—guilty of no more than buying a New Yorker—active partners in putting teenage girls to particular uses. Regardless of where we place our sympathies, as readers, TV-watchers, Internet surfers and more, we are complicit in what we choose to consume. And despite our recent self-consciousness about the potentially insidious nature of celebrity culture, whether for driving princesses (literally) to their death or for glorifying the salacious over the significant, the scandal of Northampton reminds us that the hunger for what Joseph Roach calls “intimate authentication” is nothing new. In fact, celebrity might be called the “enthusiasm” of our moment.

 

2. Title page of “Enthusiasm Described and Caution’d Against: A Sermon Preach’d at the Old Brick Meeting-House in Boston, The Lord’s Day after the Commencement, 1742,” by Charles Chauncy. Printed by J. Draper (Boston, 1742). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

 

Our story begins with Northampton and its most famous parson, the philosopher Jonathan Edwards (fig. 1). Edwards ministered to Northampton during what we now often refer to as the “Great Awakening.” This movement, which first swept the Anglo-American colonies during the mid-to-late 1730s, was characterized by a renewal of interest in religion, charismatic itinerant preaching, and new opportunities for religious participation on the part of the less privileged. Edwards is often identified as its greatest champion in colonial New England. While it would be an oversimplification to assume that he celebrated the religious activity of this period uncritically, it is true that he rejoiced in the evangelical conversions it enabled, whereby formerly complacent individuals became deeply concerned about their spiritual state and, after much agonizing, often underwent a sudden and overwhelming experience of God’s favor. It is also true that the movement attracted many enemies, especially among more established metropolitan congregations for whom religion was most important as a way of ensuring social stability—not as a means to intense personal experience that might “fill the world with contention and confusion.” Roughly speaking, these two groups divided into the “New Light” and “Old Light” Congregationalists, and Edwards’ most famous Old Light opponent, the so-called “[c]aptain of the antirevival forces,” was Charles Chauncy, co-pastor of the First Church in Boston. In a sermon published in 1742, “Enthusiasm Described and Caution’d Against,” Chauncy delivered his first scathing critique of the revival (fig. 2). Edwards, in turn, articulated his belief in the legitimacy of evangelical conversion in several publications, including the “Account of Abigail Hutchinson: a young woman, hopefully converted at Northampton, Mass, 1734” (fig. 3); A Faithful Narrative (fig. 4), first published in London in 1737 and in America in 1738; and 1742’s Some Thoughts Concerning the Present Revival of Religion in New England (fig. 5).

So nondescript a location that Edwards’ London publishers first placed it in New Hampshire (a misreading of “county” as “country” led to confusion between “Hampshire County” in Massachusetts, where the town was located, and the similarly named American colony), Northampton was, to Edwards’ great pride, a place where not much happened (much like Le Roy, New York, a town whose greatest claim to fame is that it is the birthplace of Jell-O). Its stolid character issued not so much from any inherent goodness on the part of its citizens as from its geographic location. As Edwards explains in the opening paragraphs of A Faithful Narrative: “Our being so far within the land, at a distance from seaports, and in a corner of the country, has doubtless been one reason why we have not been so much corrupted with vice, as most other parts.” In other words, Northampton’s relative freedom from corruption derived not from what the locale possessed but from what it lacked: a coastline. Seaports were sites of scandalous and irregular behavior, bred out of the promiscuous interchange of strangers from foreign places and with suspect opinions. By contrast, when generally sober inland townsfolk behaved oddly, these manifestations were worthy of attention, since they were not the product of strange ideas imported from abroad. For Edwards, Northampton’s relative isolation made it a veritable petri dish when dramatic happenings did arise.

 

3. Title page of “Account of Abigail Hutchinson: A Young Woman, Hopefully Converted at Northampton, Mass. 1734,” by Jonathan Edwards. Printed for the New England Tract Society by Flagg and Gould (Andover, Mass., 1816). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.
4. Title page of A Faithful Narrative of the Surprising Work of God in the Conversion of Many Hundred Souls in Northampton … Jonathan Edwards. Printed by Shepard Kollock (Elizabeth-town, N.J., 1790). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

 

And yet, despite Edwards’ strenuous insistence that inland towns were immune to foreign contagion, the very fact that he found this point necessary to make, and to make right away in A Faithful Narrative, suggests the defensiveness of his position. For there were many who saw corrupting influence aplenty in Northampton, proceeding not from recently arrived sailors and immigrants, but from longtime residents: ministers, lay preachers, and even fellow impressionable citizens. Had the term been available, these critics would no doubt have appreciated being able to append the term “disorder” to what Edwards called “conversion” (sadly for them, the Diagnostic and Statistical Manual-IV, which lists conversion disorder, was more than 250 years in the future). And while the idea of “mass psychogenic illness” might be somewhat anachronistic, theologians skeptical of Northampton’s transports had no trouble viewing the shared psychic complaint in the town as a form of pathology. In New Hampshire (the colony, not the county), John Caldwell referred to the many conversions of the period, especially among women, as an “epidemical distemper”—a contagious form of mental illness. Chauncy considered the phenomenon nothing less than “a disease, a sort of madness: And there are few; perhaps, none at all, but are subject to it.” What Edwards saw as spiritual proof of God’s blessing visited upon human vessels, Chauncy insisted was mere susceptibility to the manipulative wiles of unscrupulous cult leaders. Where Edwards celebrated a widespread dawning awareness of the saving power of Christ’s love, Chauncy used the word “enthusiasm”—a term with negative connotations of false excess not entirely absent today—to describe something closer to insanity than awakening. In “Enthusiasm Described and Caution’d Against,” he went so far as to suggest that the rapidly escalating outbursts of so-called converts were in fact symptomatic of inherent, previously unsuspected moral laxity and even sexual licentiousness.

To understand the significance of Chauncy’s opposition, it is necessary to let go of our contemporary associations with the word “enthusiasm.” While we tend to consider this staple of recommendation letters primarily a term of praise, eighteenth-century readers remained more attuned to the word’s implications of delusion. They also associated it with something familiar to us from the events of both 1737 and 2012: a susceptibility to contagion, or spreading from one person to another through unknown means. As early as 1708, the Third Earl of Shaftesbury wrote of “saving souls from the contagion of enthusiasm.” Clearly, Chauncy did not invent the stance against “enthusiasm”; a precedent existed for denouncing mass religious awakening by seeing it as a form of potentially “epidemical” mental illness. Chauncy built on this precedent to great effect, defining “enthusiasm” as follows:

an imaginary, not a real inspiration: according to which sense, the Enthusiast is one, who has a conceit of himself as a person favoured with the extraordinary presence of the Deity. He mistakes the workings of his own passions for divine communications, and fancies himself immediately inspired by the SPIRIT of GOD, when all the while, he is under no other influence than that of an over-heated imagination.

Enthusiasts, it would seem, were the sentimentalists of the day, too ready with their tears and embraces, unable to discern that which deserved sympathetic attention from that which merely approximated truth in order to trick susceptible bystanders. They were not evil so much as deluded. Simply put, they were dupes. The true culprits were those, such as Edwards and some of his associates, who led these vulnerable souls to false belief—who “overheated” their imaginations. Chauncy’s most convincing example was one James Davenport, who not only behaved rudely to the venerable minister himself but ended up rousing a mass of followers into a wig-burning mob on the wharf at New London, Connecticut. Chauncy addressed the letter introducing his published version of “Enthusiasm Described” to Davenport, as if to imply that he, not the far more famous Edwards, who had recently lectured at Yale, was behind the so-called spiritual transports of the day. Had Chauncy been around to witness the indignities visited upon the girls of Le Roy by their disorder, the hums and hallway outbursts and bruised shins, he would have found a perfect example of what to him remained pointless and humiliating self-abasement.

 

5. Title page of Some Thoughts Concerning the Present Revival of Religion in New-England … Jonathan Edwards (Boston, 1742). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

 

What Chauncy failed to note is that Edwards was no more enthusiastic about enthusiasm than were his colleagues in Boston, New Haven, and elsewhere in Connecticut. He lamented the “strange enthusiastic delusions” that characterized a period of crisis in Northampton in 1735, manifested by a spate of suicides and attempted suicides. And he deeply resented Davenport, who was generally recognized as an embarrassment to the revival. The only difference was that Chauncy considered all extreme displays of unseemly behavior to be forms of enthusiasm, whereas Edwards allowed for a certain leniency in the event of divine rapture. Even more galling to his detractors, Edwards claimed to be able to tell the difference between true and false inspiration, whereas Chauncy insisted that, since “we have no way of judging but by what is outward and visible,” the only way to determine a true Christian was by the degree to which he followed the rules laid out in that mother of all behavior manuals: the Bible.

Chauncy’s revulsion at the distempers on display in Northampton had many sources. Chief among them were the destabilizing effects of mass religious awakening on the established social order. When ordinary people who had once known their place in the social hierarchy and asked for no undue attention suddenly began acting out spiritual transports in public—when they began to consider their own felt experience as unique and important—far more than religious doctrine was at stake. Or, in Chauncy’s words: enthusiasm has “made strong attempts to destroy all property, to make all things common, wives as well as goods.”

Chauncy’s reference to “wives” as a form of property speaks to one important aspect of the social destabilization he found so abhorrent. Women were not the only citizens of Northampton to experience conversion during the period of Edwards’ time in the pulpit there, but they played an outsize role—as they do today, for reasons still poorly understood, in both conversion disorder and mass psychogenic illness. Not only did the so-called Great Awakening provide new opportunities for female religious participation, but the emotionality associated with it was also connected, in the minds of contemporary friends and foes alike, to forms of behavior (such as fainting) that had long been considered feminine. Chauncy’s bold rhetorical move here is to sexualize that affiliation, and with it evangelical conversion, by punning on the word “common.” To “make wives common” is both to share other men’s wives and to render them cheap. From equating sudden conversion with enthusiasm, Chauncy has here moved to equating female enthusiasm with sexual license, thereby gaining a particularly strong hold over a population steeped in an ideology (if not practice) of chaste maidenhood.

Chauncy did not require the women who participated in the movement to actually be sexually promiscuous in order to see them as having been cheapened. All it really took was speaking out in church, like the “boisterous female speaker” he saw making a fool of herself at a Quaker meeting. According to Chauncy, the evangelical movement championed by Edwards brought the extreme practices of populist religions such as Quakerism into Congregational churches. Thus he condemned an increasingly common practice within the Great Awakening, namely

the suffering, much more the encouraging WOMEN, yea, GIRLS, to speak in the assemblies for religious worship … ‘Tis a plain case, these FEMALE EXHORTERS are condemned by the apostle; and if ’tis the commandment of the LORD, that they should not speak, they are spiritual only in their own tho’ts, while they attempt to do so.

In the above passage, as throughout Chauncy’s diatribe, female religious experience is both disgusting in its own right and representative of the disorder and confusion of the entire “assembly.” The wild countenances, loose tongues, “convulsions and distortions,” and “freakish” conduct he observed may have looked particularly bad on women—but for that very reason, they served him well in communicating his disdain for men who would “set religion in such an ugly light by their unguarded conduct.”

Why, given the prevalence of attitudes such as Chauncy’s, did Edwards choose women as his representative converts when he wrote to defend the events taking place in Northampton against skeptics? Given Old Light Congregationalists’ evident distaste for female “boisterousness,” Edwards would seem only to have been adding fuel to the fire when he used Abigail Hutchinson, Phoebe Bartlet, and his own wife, Sarah Pierpont Edwards, as his exemplary converts. If women—whom Chauncy, following his own brand of biblical precedent, pronounced unfit even to speak in church—had every detail of their conversion published for a hungry public, surely the movement would be put at even greater risk of looking like public degeneracy to interested observers, from Boston to London.

Here again, as in his characterization of Northampton, Edwards cleverly anticipated such charges. Far from portraying his converts as sluts and prostitutes, he characterized them as especially sensitive to contemporary assumptions that virtuous women would not seek fame. In fact, his converts detested unwelcome intrusion. Whether by preferring the country to the town, as did his teenage convert Hutchinson, or withdrawing to her private chamber before watching Christ take her heart and put it at his feet, as did his wife, Edwards’ exemplary converts craved privacy, and allowed its violation only in the service of a higher truth. In sum, where opponents of the revival sexualized female participants, Edwards tried to desexualize them. He chose children; he chose women dying of frankly disgusting conditions; and, when writing about his wife, he omitted gender pronouns altogether.

Most of all, Edwards made sure to note the increasing physical discomfort that accompanied their approach to salvation. Like the girls in Le Roy, these eighteenth-century young women underwent “conversions” that occasioned great bodily pain and displays of physical duress. “It has been very common,” Edwards noted, “that the deep and fixed concern that has been on persons’ minds, has had a painful influence on the bodies and given disturbance to animal nature.” And yet these “disturbances” conveyed a message about human susceptibility to salvation that rendered such suffering redemptive.

Abigail Hutchinson’s story can be summed up briefly. She began to focus on the question of her own salvation for the very human reason that she envied another young woman’s greater religiosity. Already “long infirm of body,” Abigail was soon taken up with a passionate thirst to meet her maker, one that made her agonizing death by starvation and dehydration, the result of a painful throat obstruction, a reputedly joyous event for both her and Edwards. Her early awakening sounds more like adolescent backbiting (or Facebook rivalries among Le Roy inhabitants trying to distinguish the truly sick from the fakers) than anything holy. When she heard about the conversion of another young woman of the town,

This news wrought much upon her, and stirred up a spirit of envy in her towards this young woman, whom she thought very unworthy of being distinguished from others by such a mercy; but withal it engaged her in a firm resolution to do her utmost to obtain the same blessing.

In an instance of “be careful what you ask for,” Abigail did indeed catch up with her undeserving peer, and it did indeed take her utmost. As “her illness increased upon her” and her body grew weaker, she felt her connection with God grow stronger. Suffering became a sign of salvation:

Her illness in the latter part of it was seated much in her throat; and swelling inward, filled up the pipe so that she could swallow nothing but what was perfectly liquid, and but little of that, and with great and long strugglings and stranglings, that which she took in flying out at her nostrils till she at last could swallow nothing at all … Others were greatly moved to see what she underwent, and were filled with admiration at her unexampled patience.

It seems of particular significance that Abigail’s throat, crucial not only to eating but also to speaking, became her illness’s last stop before death. The quieter and more “patient” Abigail grew, the more her experience inspired others.

The other exemplar of Edwards’ narrative, Phoebe Bartlet, was also a woman of few words. Her inarticulacy owed not to illness, however, but rather to extreme youth. Phoebe was four years old when she experienced conversion. Like Abigail’s, it began in response to a social situation, this time “the talk of her brother.” And like Abigail, she found it challenging to speak of her spiritual difficulties. Instead, as any four-year-old might, she threw something of a tantrum: “exceedingly crying, and wreathing her body to and fro, like one in anguish of spirit.” At this point, as would any adult trying to coax sense out of a weeping child, her mother began to ask questions. Eventually, Phoebe answered “yes” to one about whether she loved God better than her family. But her chief form of expression remained the tear: she wept for her siblings as not being saved. In sum, like Abigail’s, Phoebe’s main medium of persuasion was her body—her tears proved the truth of her conversion.

For both Abigail and Phoebe, these intense but imprecise manifestations of spiritual torment not only highlighted a larger-than-life relationship with a divine entity who had no need for words, they also minimized the social components that had characterized the early stages of the conversion experience. Whether envying a neighbor or admiring a brother, both subjects of Edwards’ account began their conversions firmly embedded in the context of their everyday lives. By the time Edwards was done with them, however, their only vital relationship was vertical (Abigail’s more religious friend rapidly fades to insignificance). This lack of circumstantial context both made their conversion experiences easier to emulate and implicitly answered critics’ charges that the malevolent influence of an unstable community led to such delusions.

The move from horizontal to vertical relationships became most pronounced in Edwards’ account of his wife’s conversion in Some Thoughts Concerning the Revival. Edwards’ wife, Sarah, seemed at times to embody Chauncy’s own worst nightmare. Given Edwards’ distaste for “enthusiasm,” which he saw as a threat not only to individual salvation but also to the reputation of his community, it is not insignificant that he described Sarah as having once displayed an “enthusiastic disposition to follow impulses” (italics mine). Sarah did not always think much better of her husband, whom she described as capable of “ill will.” What he called her “enthusiastic … impulses,” she described as mere “conversation.” In fact, her own spiritual autobiography begins with her in a state of distress over Jonathan having told her “he thought I had failed in some measure in point of prudence, in some conversation I had with Mr. Williams … the day before.” Vexed because she and her husband had exchanged harsh words shortly before his departure, pacing the house alone with her anger and guilt, she found herself casting about for comfort. The importance of her relationships with those around her is clear. Early in her account she referred to, among others, three ministers (whom she is attempting not to compare unfavorably with her husband); a neighbor; a favorite author; and “the negro servants in the town.” Clearly, Sarah was a highly articulate woman for whom social life was crucial, if not always satisfactory.

Reading Edwards’ rewriting of his wife’s autobiographical narrative for publication as “An Example of Evangelical Piety” in 1742’s Some Thoughts, one could be forgiven for thinking he was describing a second, meeker wife. Like Abigail and Phoebe, this Sarah favors unconscious bodily expression over verbal intent. She doesn’t talk so much as experience “high and extraordinary transport.” These transports tend toward the involuntary, as when they cause “the person (wholly unavoidably) to leap with … mighty exultation of soul.” And they are debilitating, as “bodily strength” is repeatedly “overcome.” With at least sixteen references to his subject’s bodily weakness in four pages, Edwards, as he had with Abigail and Phoebe, refocused attention away from the social interactions Sarah herself found so integral to her spiritual journey and onto the body laid low. This silenced corporeal entity allowed him to represent her changing relationship with God as something both deeply private and broadly representative.

With bickering spouses, envious neighbors, and admiring little sisters put to rest, the world of Edwards’ female converts seemed to consist of souls purified by physical affliction. In his accounts, this affliction, while it often prevented legible speech, served as another kind of language. Sarah’s fainting body, Abigail’s obstructed airway, and Phoebe’s infantile tears became utterances that, precisely because of their inscrutability, spoke for God. As with the girls of Le Roy, then, physical suffering became a focal point for Edwards in order to establish meanings that had less to do with the pain itself than what that pain might represent. In countering Chauncy’s charges of sluttish self-abandonment among female converts, Edwards rendered his exemplary women nothing but air.

Charles Chauncy was probably never an easy man to like. One of the few existing biographies of him even has an index entry for “chief antagonists,” an entry that lists no fewer than twenty men. In any age, it is difficult to enjoy the company of someone so dedicated to hierarchy, order, and obedience. But the seemingly disproportionate rage Northampton’s conversion-fest inspired in him, while certainly an expression of his strict mindset, can also be seen as a response that many shared then—and still do today—to a historical phenomenon whose effects continue to unfold. For what Chauncy was witnessing with such horror was nothing less than the birth of celebrity: a new form of status that could claim (and discard) the humble as well as the mighty—and, in its way, make the humble mighty.

Whether they were for it or against it, the attention both men directed to the spiritual uprising of the 1730s and ’40s, like the TV cameras in Le Roy, only intensified the phenomenon. As the battle between New and Old Light Congregationalists played itself out in pulpits and in print, the most flamboyant characters in the drama placed ever-greater demands on the public imagination. Child prodigies, saintly female martyrs, unstable agitators, and even suicidal extremists whose deaths presaged the end of Northampton’s moment in the sun—all these individuals, whether honored or reviled, whether courting attention or buried before the fuss had even started, created more curiosity the more their stories became known. Whatever the true state of the converts themselves, international demand for news of their travails fixed to them a new kind of fame whose effects are still felt today.

What, we might ask, is our place in this business? It’s easy to condemn a crab like Chauncy, or hover fascinated over Edwards’ accounts of childhood wonder and teenage affliction. It is even easier to move from the New York Times to the New Yorker, respected bastions of journalistic excellence, in pursuit of more news about the events in Le Roy. But without what Joseph Roach calls our “probing fingers” and those of our colonial predecessors, none of these accounts would have had reason to exist in the first place, let alone thrive as they did. In essence, then, the reader herself becomes party to the debate over what, and how, ordinary women’s extraordinary experience means. In the face of our own complicity, instead of resorting to a familiar indictment of patriarchal discourse over the course of almost three centuries, we need to attend to the appetites that make such struggles marketable. Why do we want to watch young women suffer? If the religious revivals of mid-eighteenth century America reveal celebrity culture’s early outlines, the recent media frenzy over mysteriously afflicted schoolgirls suggests where this venerable American habit of mind may lead us, as mental aberration develops from an indicator of salvation or damnation to a holy state, or shameful blight, in its own right.

In her classic study Illness as Metaphor, Susan Sontag writes that “In the twentieth century, the repellent, harrowing disease that is made the index of a superior sensitivity, the vehicle of ‘spiritual’ feelings … is insanity.” If we accept this claim, the links between the kind of attention paid to Abigail, Phoebe, and Sarah and that paid to Lydia Parker, Katie Krautwurst, Chelsey Dumars, and the other twitching girls of Le Roy become more evident. To understand this connection, we need to distinguish between how the families of Le Roy tended to think about what was happening and how consumers of national media such as The New York Times, The New Yorker, The Atlantic, USA Today and National Public Radio interpreted the story. Most Le Roy parents were so reluctant to accept doctors’ prevailing diagnosis of a stress-related mental disorder that, once claims of environmental toxicity had been more or less ruled out, they flocked to a pediatric neurologist, Dr. Rosario Trifiletti, who was willing to diagnose a strep infection and walking pneumonia. Despite inconsistencies such as the fact that, unlike any other form of strep or pneumonia, this one seemed to affect young women almost exclusively, and despite the scorn this diagnosis occasioned among other doctors, many girls began taking the antibiotics Trifiletti prescribed. These families preferred “hard” science, whether in the form of environmental contamination or bacterial infection, to the uncertainties and potential stigma of psychiatric diagnosis.

In other words, the patients and their families tended, at least at first, to experience a psychological diagnosis as a shameful blight to be resisted by whatever means necessary. This resistance can be explained in part by the role that stress was said to play in the disorder. Stress implies familial and social failings in a way that can often seem to assign blame. In the case of these young women, causes could have ranged from the widely shared economic decline of a formerly thriving town whose closed factories left a working-class community at the edge of poverty, to parental abuse and neglect, to high school status rivalry, to chronic illness and familial loss. Reluctance to accept “conversion disorder” also may have had a lot to do with the inadequacy of the diagnosis itself, which gave little clear indication of how exactly internalized conditions, from inherited genetic patterns to childhood experience to familial dysfunction, became contagious—that is, of how individual sets of symptoms became “mass psychogenic illness,” which still does not have a specific listing in the DSM.

And yet while many residents of Le Roy remained skeptical and looked for more tangible explanations for the girls’ behavior, the mostly middle-class national audiences who heard about, and reported, their situation in prestigious news outlets from The Atlantic to NPR tended to interpret this reluctance as delusional in its own right, often implicitly attributing it to the lesser education, or even the diminished interpretive capacity, of a socio-economic stratum beneath their own. Here, we cannot but be reminded of Chauncy’s own disgust with rural townspeople who, lacking the metropolitan sophistication of his own congregation, fell for the charlatan antics of their leaders and their neighbors. The distinction is that, unlike Chauncy and his peers, today’s middling orders don’t feel comfortable condescending to those they consider beneath them.

Given this tension between pride and guilt, one reason that those far from Le Roy found conversion disorder—as opposed to a distant cousin of strep throat—by far the most satisfactory explanation of the Le Roy girls’ malady is that, given what Sontag calls our current “romanticizing of madness,” it raised these otherwise inconsequential individuals in the observers’ estimation. There are sick girls everywhere, but a group of girls who share a mental illness is something else entirely. Nowhere is this clearer than in the photos that accompany the New York Times story, in which the mundane and the exceptional coexist uncomfortably. First, one notices the oppressive ordinariness of the surroundings. In a kitchen photo, bare walls (not counting a dry-erase board) and boxes of Lucky Charms and Froot Loops set the scene. In another, a popular girl’s bedroom is painted in contrasting shades of pink. Polka-dot mugs, a bottle of Nestlé Quik, and plastic tubs of cosmetics litter its faux-antique furniture scrolled with craft-store appliqué flowers. Above a propped-up, framed poster of a smiling Barack Obama (recalling the “I heart Black People” bumper sticker pegged to the yellow, peace-sign-stenciled wall of another photo), a small group photo proclaims “Memories” in large black type. In all the photos (four in total), each room is shot to appear as small and crowded as possible.

Enter the human subjects, larger than life. While Lydia is humbly dressed—sitting next to the Froot Loops, with smudged eyeliner and French-manicured nails, she wears a terry-cloth bathrobe in the same pink as the cereal box—she is ennobled by the contusions that ring her eyes and darken her chin. At least one bruise, we are informed in the caption, happened “when an uncontrollable tic caused her to hit herself with her cellphone.” Something about the ordinariness of the cellphone in contrast with the extremity of the violence it caused raises her even further above her surroundings. From the wreckage of what to many sophisticated readers would seem an almost intolerably boring life, Lydia has become—well, interesting. Tragic, even. And a peer, with the direct gaze and solemn bearing of one who has endured perhaps more than the viewer can imagine. Katie, she of the cluttered pink bedroom, also almost shames us with her sad sideways gaze, her bravely mismatched socks poking out from under torn jeans. Pink as it is, her room only highlights its occupant’s absolute lack of girlish cheer. Finally, as single mom Chelsey stands by a rusted bridge over a brush-littered stream, holding an obliviously cute baby bundled in purple hearts, her black jacket, compressed lips and, again, direct stare dare us to read her as anything less than a Madonna.

In 1737, to accuse young women of a mental illness such as “enthusiasm” was to shame them, while to render their religious conversion in convincing terms was to raise them above the town, the colonies, and even the print culture that, in calling attention to their controversial condition, enshrined them in the public eye. In 2012, by contrast, a well-intentioned diagnosis out of the DSM-IV was meant to relieve concern by providing a coherent, if rather amorphous, account of a seemingly inexplicable phenomenon—and yet it had a not dissimilar effect to its predecessor. On the one hand, it wreaked havoc among suffering individuals whose attempts to deny their diagnosis only prolonged their anxiety and forestalled adequate treatment. On the other, it made stars out of ordinary women, whose “superior sensitivity” raised them above their generally dismissive readers. This ennobling did not help the patients much—in fact, more attention meant more stress, which meant an intensification of symptoms. But it did help readers work through their own class anxieties by finding common ground with individuals who previously had been as indistinguishable as their cereal boxes. And, as formerly unknown young women came to seem more and more like the reader in their vulnerability and tenacity, initially voyeuristic strangers found a mirror in which to consider the unnerving fact that minds—theirs included—might not always do right by their bodies.

Over the course of almost three centuries, the concerns have changed. We now seem to worry more about our current circumstances than our future estate. But the mechanism for exploring our inchoate anxieties through the vehicle of anonymous young women in the grip of an unnamed affliction remains nearly identical. Today, “conversion disorder” demands the respect that “conversion” itself did nearly three centuries prior. For some reason, we now bestow our collective favor on those who suffer delusion, whereas nearly three centuries ago, we saved it for those who knew a truth that others could only imagine. In this sense, the young women of Northampton and Le Roy are worlds, as well as centuries, apart. And yet the conditions they must satisfy to gain our undivided attention remain both similar and similarly cruel. Groups of young women seem to attract special notice when they are joined in a common affliction whose signal characteristic is that it speaks through the body, rendering verbal coherence erratic at best and irrelevant at worst. We continue to believe that actions speak louder than words (especially words that might come from young women). The difficulty is that some actions hurt more than others. Some actions pose no tangible benefit to the actor, and yet repeat themselves unaccountably. And it would seem that these are the behaviors most worthy of notice when it comes to groups of otherwise unexceptional young women. Thus do cultural debates of the moment, whether about the nature of proper religious observance or the mind’s capacity to unravel the body, make ordinary young women temporarily famous for something over which they have no say.

Further Reading

For a collection of the sermons and other writing from Jonathan Edwards’ experience in Northampton, see Jonathan Edwards, The Great Awakening, ed. C. C. Goen, vol. 4 in “The Works of Jonathan Edwards” (New Haven, Conn., 1972). Many of the details of Jonathan and Sarah Edwards’ spiritual lives can be found in Sereno Dwight, ed., The Works of President Edwards: With a Memoir of His Life in Ten Volumes, Volume 1 (New York, 1830). The reputation of Jonathan Edwards’ nemesis, Charles Chauncy, has suffered by comparison, with versions of his sermons harder to find. A sensitive biography of Chauncy is Edward Griffin, Old Brick: Charles Chauncy of Boston, 1705-1787 (Minneapolis, 1980).

A useful examination of the phenomenon of religious enthusiasm in young women in early America—and of the lessons that the culture drew from such manifestations—is Ann Taves, ed., Religion and Domestic Violence in Early New England: The Memoirs of Abigail Abbot Bailey (Bloomington, Ind., 1989). For an excellent discussion of the significance of emotion—including emotional religion—in eighteenth-century America, see Nicole Eustace, Passion is the Gale: Emotion, Power, and the Coming of the American Revolution (Chapel Hill, N.C., 2008). For a recent meditation on the phenomenon of celebrity, see Joseph Roach, “The Doubting-Thomas Effect,” PMLA (October 2011).

The mysterious group illness of the girls in Le Roy, N.Y., in 2011-12 generated coverage across a wide range of media outlets. Some of the more notable pieces to address the phenomenon include Nicholas Jackson, “It Could Just Be Stress: The Teens of Le Roy and Conversion Disorder,” The Atlantic (February 5, 2012); Susan Dominus, “What Happened to the Girls in Le Roy,” The New York Times Magazine (March 11, 2012); and Emily Eakin, “Le Roy Postcard: Hysterical,” The New Yorker (March 5, 2012).

 

This article originally appeared in issue 13.3 (Spring, 2013).

There Arose Such a Clatter: Who Really Wrote “The Night before Christmas”? (And Why Does It Matter?)

In a chapter of his just-published book, Author Unknown, Don Foster tries to prove an old claim that had never before been taken seriously: that Clement Clarke Moore did not write the poem commonly known as “The Night before Christmas” but that it was written instead by a man named Henry Livingston Jr. Livingston (1748-1828) never took credit for the poem himself, and there is, as Foster is quick to acknowledge, no actual historical evidence to back up this extraordinary claim. (Moore, on the other hand, did claim authorship of the poem, although not for two decades after its initial–and anonymous–publication in the Troy [N.Y.] Sentinel in 1823.) Meanwhile, the claim for Livingston’s authorship was first made in the late 1840s at the earliest (and possibly as late as the 1860s), by one of his daughters, who believed that her father had written the poem back in 1808.

Why revisit it now? In the summer of 1999, Foster reports, one of Livingston’s descendants pressed him to take up the case (the family has long been prominent in New York’s history). Foster had made a splash in recent years as a “literary detective” who could find in a piece of writing certain unique and telltale clues to its authorship, clues nearly as distinctive as a fingerprint or a sample of DNA. (He has even been called on to bring his skills to courts of law.) Foster also happens to live in Poughkeepsie, New York, where Henry Livingston himself had resided. Several members of the Livingston family eagerly provided the local detective with a plethora of unpublished and published material written by Livingston, including a number of poems written in the same meter as “The Night before Christmas” (known as anapestic tetrameter: two short syllables followed by an accented one, repeated four times per line–“da-da-DUM, da-da-DUM, da-da-DUM, da-da-DUM,” in Foster’s plain rendering). These anapestic poems struck Foster as quite similar to “The Night before Christmas” in both language and spirit, and, upon further investigation, he was also struck by telling bits of word usage and spelling in that poem, all of which pointed to Henry Livingston. On the other hand, Foster found no evidence of such word usage, language, or spirit in anything written by Clement Clarke Moore–except, of course, for “The Night before Christmas” itself. Foster therefore concluded that Livingston and not Moore was the real author. The literary gumshoe had tackled and solved another hard case.

 

The above illustration is from an 1869 edition of A Visit from Saint Nicholas.

Foster’s textual evidence is ingenious, and his essay is as entertaining as a lively lawyer’s argument to the jury. If he had limited himself to offering textual evidence about similarities between “The Night before Christmas” and poems known to have been written by Livingston, he might have made a provocative case for reconsidering the authorship of America’s most beloved poem–a poem that helped create the modern American Christmas. But Foster does not stop there; he goes on to argue that textual analysis, in tandem with biographical data, proves that Clement Clarke Moore could not have written “The Night before Christmas.” In the words of an article on Foster’s theory that appeared in the New York Times, “He marshals a battery of circumstantial evidence to conclude that the poem’s spirit and style are starkly at odds with the body of Moore’s other writings.” With that evidence and that conclusion I take strenuous exception.

By itself, of course, textual analysis doesn’t prove anything. And that’s especially true in the case of Clement Moore, inasmuch as Don Foster himself insists that Moore had no consistent poetic style but was a sort of literary sponge whose language in any given poem was a function of whichever author he had recently been reading. Moore “lifts his descriptive language from other poets,” Foster writes: “The Professor’s verse is highly derivative–so much so that his reading can be tracked . . . by the dozens of phrases borrowed and recycled by his sticky-fingered Muse.” Foster also suggests that Moore may even have read Livingston’s work–one of Moore’s poems “appears to have been modeled on the anapestic animal fables of Henry Livingston.” Taken together, these points should underline the particular inadequacy of textual evidence in the case of “The Night before Christmas.”

Nevertheless, Foster insists that for all Moore’s stylistic incoherence, one ongoing obsession can be detected in his verse (and in his temperament), and that is–noise. Foster makes much of Moore’s supposed obsession with noise, partly to show that Moore was a dour “curmudgeon,” a “sourpuss,” a “grouchy pedant” who was not especially fond of young children and who could not have written such a high-spirited poem as “The Night before Christmas.” Thus Foster tells us that Moore characteristically complained, in a particularly ill-tempered poem about his family’s visit to the spa town of Saratoga Springs, about noise of all kinds, from the steamboat’s hissing roar to the “Babylonish noise about my ears” made by his own children, a hullabaloo which “[c]onfounds my brain and nearly splits my head.”

Assume for the moment that Foster is correct, that Moore was indeed obsessed with noise. It is worth remembering in that case that this very motif also plays an important role in “The Night before Christmas.” The narrator of that poem, too, is startled by a loud noise out on his lawn: “[T]here arose such a clatter / I got up from my bed to see what was the matter.” The “matter” turns out to be an uninvited visitor–a household intruder whose appearance in the narrator’s private quarters not unreasonably proves unsettling, and the intruder must provide a lengthy set of silent visual cues before the narrator is reassured that he has “nothing to dread.”

“Dread” happens to be another term that Foster associates with Moore, again to convey the man’s dour temperament. “Clement Moore is big on dread,” Foster writes, “it’s his specialty: ‘holy dread,’ ‘secret dread,’ ‘need to dread,’ ‘dreaded shoal,’ ‘dread pestilence,’ ‘unwonted dread,’ ‘pleasures dread,’ ‘dread to look,’ ‘dreaded weight,’ ‘dreadful thought,’ ‘deeper dread,’ ‘dreadful harbingers of death,’ ‘dread futurity.'” Again, I’m not convinced that the frequent use of a word has terribly much significance–but Foster is convinced, and in his own terms the appearance of this word in “The Night before Christmas” (and at a key moment in its narrative) ought to constitute textual evidence of Moore’s authorship.

Then there’s the curmudgeon question. Foster presents Moore as a man temperamentally incapable of writing “The Night before Christmas.” According to Foster, Moore was a gloomy pedant, a narrow-minded prude who was offended by every pleasure from tobacco to light verse, and a fundamentalist Bible thumper to boot, a “Professor of Biblical Learning.” (When Foster, who is himself an academic, wishes to be utterly dismissive of Moore, he refers to him with a definitive modern putdown–as “the Professor.”)

But Clement Moore, born in 1779, was not the Victorian caricature that Foster draws for us; he was a late-eighteenth-century patrician, a landed gentleman so wealthy that he never needed to take a job (his part-time professorship–of Oriental and Greek literature, by the way, not “Biblical Learning”–provided him mainly with the opportunity to pursue his scholarly inclinations). Moore was socially and politically conservative, to be sure, but his conservatism was high Federalist, not low fundamentalist. He had the misfortune to come into adulthood at the turn of the nineteenth century, a time when old-style patricians were feeling profoundly out of place in Jeffersonian America. Moore’s early prose publications are all attacks on the vulgarities of the new bourgeois culture that was taking control of the nation’s political, economic, and social life, and which he (in tandem with others of his sort) liked to discredit with the term “plebeian.” It is this attitude that accounts for much of what Foster regards as mere curmudgeonliness.

 

Clement C. Moore.

Consider “A Trip to Saratoga,” the forty-nine-page account of Moore’s visit to that fashionable resort which Foster cites at length as evidence of its author’s sour temperament. The poem is in fact a satire, and written in a well-established satirical tradition of accounts of disappointing visits to that very place, America’s premier resort destination in the first half of the nineteenth century. These accounts were written by men who belonged to Moore’s own social class (or who aspired to do so), and they were all attempts to show that the majority of visitors to Saratoga were not authentic ladies and gentlemen but mere social climbers, bourgeois pretenders who merited only disdain. Foster calls Moore’s poem “serious,” but it was meant to be witty, and Moore’s intended readers (all of them members of his own class) would have understood that a poem about Saratoga could not be any more “serious” than a poem about Christmas. Surely not in Moore’s description of the beginning of the trip, on the steamboat that was taking him and his children up the Hudson River:

Dense with a living mass the vessel teem’d;
In search of pleasure, some, and some, of health;
Maids who of love and matrimony dream’d,
And speculators keen, in haste for wealth.

Or their entrance into the resort hotel:

Soon as arriv’d, like vultures on their prey,
The keen attendants on the baggage fell;
And trunks and bags were quickly caught away,
And in the destin’d dwelling thrown pell-mell.

Or the would-be sophisticates who tried to impress each other with their fashionable conversation:

And, now and then, might fall upon the ear
The voice of some conceited vulgar cit,
Who, while he would the well-bred man appear,
Mistakes low pleasantry for genuine wit.

Some of these barbs retain their punch even today (and the poem as a whole was plainly a parody of Lord Byron’s hugely popular travel romance, “Childe Harold’s Pilgrimage”). In any case, it is a mistake to confuse social satire with joyless prudery. Foster quotes Moore, writing in 1806 to condemn people who wrote or read light verse, but in the preface to his 1844 volume of poems, Moore denied that there was anything wrong with “harmless mirth and merriment,” and he insisted that “in spite of all the cares and sorrows of this life, . . . we are so constituted that a good honest hearty laugh . . . is healthful both to body and mind.”

Healthy too, he believed, was alcohol. One of Moore’s many satirical poems, “The Wine Drinker,” was a devastating critique of the temperance movement of the 1830s–another bourgeois reform that men of his class almost universally distrusted. (If Foster’s picture of the man is to be believed, Moore could not have written this poem, either.) It begins:

I’ll drink my glass of generous wine;
And what concern is it of thine,
Thou self-erected censor pale,
Forever watching to assail
Each honest, open-hearted fellow
Who takes his liquor ripe and mellow,
And feels delight, in moderate measure,
With chosen friends to share his pleasure?

This poem goes on to embrace the adage that “[t]here’s truth in wine” and to praise the capacity of alcohol to “impart / new warmth and feeling to the heart.” It culminates in a hearty invitation to the drink:

Come then, your glasses fill, my boys.
Few and constant are the joys
That come to cheer this world below;
But nowhere do they brighter flow
Than where kind friends convivial meet,
‘Mid harmless glee and converse sweet.

These lines would have done pleasure-loving Henry Livingston proud–and so too would many others to be found in Moore’s collected poems. “Old Dobbin” was a gently humorous poem about his horse. “Lines for Valentine’s Day” found Moore in a “sportive mood” that prompted him “to send / A mimic valentine, / To teaze awhile, my little friend / That merry heart of thine.” And “Canzonet” was Moore’s translation of a sprightly Italian poem written by his friend Lorenzo Da Ponte–the same man who had written the libretti to Mozart’s three great Italian comic operas, “The Marriage of Figaro,” “Don Giovanni,” and “Cosi Fan Tutte,” and who had immigrated to New York in 1805, where Moore later befriended him and helped win him a professorship at Columbia. The final stanza of this little poem could have referred to the finale of one of Da Ponte’s own operas: “Now, from your seats, all spring alert, / ‘Twere folly to delay, / In well-assorted pairs unite, / And nimbly trip away.”

Moore was neither the dull pedant nor the joy-hating prude that Don Foster makes him out to be. Of Henry Livingston himself I know only what Foster has written, but from that alone it is clear enough that he and Moore, whatever their political and even temperamental differences, were both members of the same patrician social class, and that the two men shared a fundamental cultural sensibility that comes through in the verses they produced. If anything, Livingston, born in 1748, was more a comfortable gentleman of the high eighteenth century, whereas Moore, born thirty-one years later in the midst of the American Revolution, and to loyalist parents at that, was marked from the beginning with a problem in coming to terms with the facts of life in republican America.

Don Foster also claims that Clement Clarke Moore loathed children, but from the 1820s on–after he was forty, and beginning at the very time “The Night before Christmas” was first published–Moore seems (like many other Americans) to have found satisfaction and something like serenity by taking emotional refuge in the ordinary pleasures of family life. His later poems show him as a doting father, a man who cherished domesticity and loved to spend what we would now call “quality time” with his six children. (His wife died in 1830, and it is clear that he cared to provide serious moral training along with lots of indulgence.) “Lines Written after a Snow-Storm” could almost be titled “The Morning after Christmas”:

Come children dear, and look around;
Behold how soft and light
The silent snow has clad the ground
In robes of purest white . . .
You wonder how the snows were made
That dance upon the air,
As if from purer worlds they stray’d,
So lightly and so fair.

(It is true that the poem concludes allegorically, by pointing out that the snow will soon melt. But that does not make it any less child-centered and affectionate.) In another later poem, Moore recalled his own childhood and his parents putting him to sleep:

Whene’er night’s shadows call’d to rest,
I sought my father, to request
His benediction mild:
A mother’s love more loud would speak,
With kiss on kiss she’d print my cheek,
And bless her darling child.

Moore actually based one of his poems on a homework assignment one of his own children had received at school. That poem, “The Pig and the Rooster,” was in anapestic tetrameter, the poetic meter of “The Night before Christmas.” (Don Foster makes the curiously self-defeating claim that “The Pig and the Rooster” was “modeled on the anapestic animal fables of Henry Livingston.”) But what is just as significant is that Moore took such an interest in his son’s homework that he would write a poem about it.

 

Nineteenth-century engraving of Santa Claus, courtesy the AAS.

Even in “The Wine Drinker,” Moore reserves what may be his deepest scorn for the fact that the temperance movement was willing to exploit innocent children for political ends. There is no ironic humor but only what Moore called “indignant feelings” in these lines (which bring to mind the tactics of modern anti-abortion organizers):

Children I see paraded round,
In badges deck’d, with ribbons bound,
And banners floating o’er their head,
Like victims to the slaughter led . . .
How can ye dare to fill a child,
Whose spirits should be free and wild,
And only love to run and romp,
With vanity and pride and pomp?

But it may be his long poem “A Trip to Saratoga” that shows Moore at his most child-centered. While this poem is social satire, even more fundamentally it is the story of a widowed father who, in the face of all his own feelings, allows his six children to persuade him to leave his beloved fireside–“the pure delights of their dear home”–and take them for the summer to a place he well knows will prove a vulgar disappointment. Foster says this poem shows Moore’s loathing of children, and especially of their “noise.” It is true that Moore begins the poem with his six children simultaneously begging their father, over breakfast, to take them on “a summer trip,” and that he responds by asking for a little order (Foster quotes only the last two of these lines):

“One at a time, for pity’s sake, my dears,”
Half laughing, half provok’d, at length he said,
“This babylonish din about my ears
Confounds my brain, and nearly splits my head.”

The Clement Moore whom Foster gives us would have simply ordered his children to shut up–but this father soon gives in to his children’s demands. And from this point on, for the remainder of the poem, he displays nothing but affection for them. When, as he reports, they get bored on the train out of New York City and “begin to pant for somewhat [i.e., something] new”–this is on the very first day of the trip–Moore reports what any modern parent will find easy to recognize, as the children begin

To ask the distance they still had to go;
At what abode they were to pass the night;
Their progress seems continually more slow;
They wish’d that Albany would come in sight.

Hardly the tone of a man who was incapable of tolerating children. And, let us not forget, this was a single parent dealing by himself with six of them.

In fact, Moore is tickled pink with his kids, with their behavior, their personalities, and even their physical beauty. Saratoga may have been filled with beautiful belles, Moore acknowledges–but his own eldest daughter was “the loveliest of them all.” Even when this same daughter argues with her father, and he rebuts her argument with a single dismissive word–this is at the very beginning of the poem–Moore lets us in on his real feelings when he tells us that his “brisk retort [was] made”

With half a smile, and twinkle of the eye
That spoke–“You are a darling saucy jade.”

In just the same voiceless fashion, in a far better-known poem generally attributed to the same author, it is with a smile–and, yes, with eyes that twinkle–that Santa Claus lets us know that he means well.

-

Clement Clarke Moore was capable of having fun, writing light verse, and loving his children. Was he also a liar?

Having attacked Moore’s personality, ideology, and parental style, in the end Foster challenges the man’s personal integrity as well. In a way, he needs to do so, since Moore did, after all, eventually have “The Night before Christmas” published under his own name, a circumstance that would seem to offer the most powerful evidence of his authorship. A man could be dour and child-hating without being a liar to boot–and a serious liar Moore must have been if he did not really write the poem.

At the end of his argument, Foster delivers a parting shot, proof positive that Moore falsely took credit for another work which was not his. Foster learned that Moore donated a book to the New-York Historical Society. The book, an 1811 treatise on the raising of Merino sheep, was originally written in French, and on the title page of the copy he donated, just beneath the words “translated from the French,” is a penned-in notation: “by Clement C. Moore, A.M.” But Foster found a copyright notice for this book, included only in a later bound-in appendix, showing that another man, one Francis Durand, “is also the book’s sole translator” (these are Foster’s words). Foster concludes, “Professor Moore does not just recycle a few borrowed phrases, as in his poetry–he lays claim to an entire book that was the work of another man.”

The charge will not stick. It is clear even to my own inexpert eye that the penned inscription “by Clement C. Moore, A.M.” is not written in Moore’s rather distinctive hand. Moreover, Moore was not in the habit of referring to his master’s degree when he signed his name. In all likelihood, the inscription was written by someone at the New-York Historical Society in recognition of Moore’s gift. It is no evidence that Moore tried to take credit for the translation. Charge dismissed.

 

The inscription on the title page of the copy of A Complete Treatise on Merinos and Other Sheep, which Foster cites as proof of plagiarism. Courtesy of New-York Historical Society

Still, why the apparently erroneous attribution? Strictly speaking, the question requires no answer here, but the most likely one happens to shed light on a larger question. I believe that the attribution was correct: Moore did do the translation, perhaps together with Durand, and he simply never chose to take public credit for it. The reason is simple, and revealing: men of high social position frequently published their work anonymously in the early nineteenth century (Moore often did so himself), because public anonymity was a sign of gentility. But it is easy to imagine that Moore was pleased with his work and did not object to letting word of it become known to the small elite group who were his fellow members of the New-York Historical Society. (In fact, the copyright notice does not show that Francis Durand translated this treatise but only that he claimed legal rights to it–rights that Moore could easily have assigned to him in a display of noblesse, perhaps for collaborating in the translation. Furthermore, while the title page of the book indicates that it was “translated from the French,” it does not name a translator. Had Francis Durand really done that job himself, he could easily have said so on the title page–and he did not.) The whole inconsequential affair shows, again, not that Moore was a liar but that he was just what we already know him to have been–a patrician.

A similar dynamic was probably at play with “The Night before Christmas.” The poem first appeared in 1823, anonymously, in a newspaper in Troy, New York (there is no clear evidence how it got there, though legend has it that one of Moore’s relatives was responsible for copying the poem down after hearing Moore say it aloud to his family the year before). In 1829 that same Troy newspaper reprinted the poem, which by now had already begun to circulate widely around the country. The 1829 printing was again anonymous, but this time the newspaper’s editor added some tantalizing hints about the identity of the poem’s author: he was a New York City man “by birth and residence,” and “a gentleman of more merit as a scholar and writer than many of more noisy pretensions.” While keeping up the aura of genteel anonymity, these words pointed pretty clearly to Moore (Henry Livingston, who had died two years before, was neither a scholar nor a New Yorker), and it seems rather likely that Moore’s name had been cropping up for some time among people in his own circle. Moore was almost certainly becoming privately proud of what was far and away the most famous thing he had ever done. Eight years later, in 1837, a member of Moore’s circle publicly named him as the author; Moore did not object. Finally, in 1844, Moore used his children’s urging that he publish his collected poetry as the occasion to include the poem and thereby openly acknowledge his authorship. (Moore’s children believed–and perhaps with very good reason–that their father had written “The Night before Christmas.”)

 

Moore’s signature from a manuscript for “A Visit from Saint Nicholas.” Courtesy Kaller’s America Gallery, Inc., N.Y.

Assume for a moment that Foster is correct after all in his assessment of Moore’s personality. In that case–if the man was so curmudgeonly, prudish, and moralistic, so profoundly offended by frivolous poetry, that he would not have written “The Night before Christmas”–why would he have chosen to take public credit for it? If there was anything less likely than his writing such a thing (and doing so for wholly private use), surely it was choosing to name himself in print as its author–in a handsomely printed collection of his own poetry, no less. From Moore’s own perspective–though crucially not from ours, and we should be sure to make this distinction–such a thing could have brought him only discredit. Foster’s claim that Moore was incapable of writing the poem is incompatible with the fact that Moore was capable of claiming its authorship.

Clement Moore was no child-hating, mendacious curmudgeon. But to say that he was capable of writing light domestic verse is not to say that “The Night before Christmas” is nothing but a light-hearted children’s poem, a mere esprit in which the real man is nowhere to be discerned. There is in fact no reason why humorous works written for children may not also contain the seeds of serious adult concerns. Alice in Wonderland comes quickly to mind, of course, not to mention virtually any “fairy tale.” “The Night before Christmas” too is just such a work, a fact which strengthens the case for Moore’s authorship. Understanding this requires understanding the New York social world in which Moore lived, a world in which St. Nicholas was emerging as a real cultural presence in the first two decades of the nineteenth century.

 

Thomas Nast, cover for Harper’s Weekly, 1863.

This was the world of self-dubbed “knickerbockers,” a group of men whose collective home was the New-York Historical Society, founded in 1804 by John Pintard. Pintard actually introduced St. Nicholas as the symbolic patron saint of the Historical Society, which held annual dinners on December 6, St. Nicholas Day. (According to the scholar who investigated this subject, before Pintard’s interventions there had been no evidence of Santa Claus rituals in the state of New York.) The most famous member of the New-York Historical Society was Washington Irving, who made much of St. Nicholas in his 1809 book Knickerbocker’s History of New York, which was actually published on St. Nicholas Day. It was Irving who popularized St. Nicholas in the 1810s. Clement Moore joined the New-York Historical Society in 1813.

For the Historical Society’s St. Nicholas Day dinner in 1810, John Pintard commissioned the publication of a broadside containing a picture of St. Nicholas in the form of a rather stern, magisterial bishop, bringing gifts for good children and punishments for bad ones. Two weeks later, and presumably in response to Pintard’s broadside, a New York newspaper printed a poem about St. Nicholas. Moore almost certainly knew of this poem; in fact, it is just barely possible that he wrote it. The poem is narrated by a child who is essentially offering a prayer to the stern saint.

The poem is in–what else?–anapestic tetrameter. It opens: “Oh good holy man! whom we Sancte Claus name, / The Nursery forever your praise shall proclaim.” It goes on to catalogue the presents St. Nicholas might be hoped to leave, followed by an entreaty that he not come for the purpose of punishment (“[I]f in your hurry one thing you mislay, / Let that be the Rod–and oh! keep it away.”) And it concludes with a promise of future good behavior:

Then holy St. Nicholas! all the year,
Our books we will love and our parents revere,
From naughty behavior we’ll always refrain,
In hopes that you’ll come and reward us again.

Like Clement Moore, the knickerbockers who brought St. Nicholas to New York were a deeply conservative group who loathed the democrats and the capitalists who were taking over their city and their nation. Washington Irving disdainfully summarized in the Knickerbocker History an episode which clearly represented to his readers the Jeffersonian Revolution of 1800: “[J]ust about this time the mob, since called the sovereign people . . . exhibited a strange desire of governing itself.” And in 1822 (a year before the first publication of “The Night before Christmas”), John Pintard explained to his daughter just why he was opposed to a new state constitution adopted that year, a constitution that gave men without property the right to vote: “All power,” Pintard wrote, “is to be given, by the right of universal suffrage, to a mass of people, especially in this city, which has no stake in society. It is easier to raise a mob than to quell it, and we shall hereafter be governed by rank democracy . . . Alas that the proud state of New York should be engulfed in the abyss of ruin.”

During these same years, Clement Moore’s large home estate (named Chelsea) was being systematically destroyed by the city of New York, divided up by right of eminent domain into a new series of numbered streets and avenues that were a product of the city’s rapid northward expansion. (Chelsea extended all the way from what is now called Eighteenth Street to Twenty-fourth Street, and from Eighth to Tenth Avenues–a large chunk of real estate indeed, and one that is known to this day as the Chelsea District.) In 1818, Moore published a tract protesting against New York’s relentless development. In that tract he expressed a fear that the city was in what he termed “destructive and ruthless hands,” the hands of men who did not “respect the rights of property.” He was pessimistic about the future: “We know not the amount nor the extent of oppression which may yet be reserved for us.”

“In the real world of New York, misrule came to a head at Christmastime.”

In short, both Moore himself and his fellow knickerbockers felt that they belonged to a patrician class whose authority was under siege. From that angle, the knickerbocker interest in St. Nicholas was part of a larger, ultimately quite serious cultural enterprise: forging a pseudo-Dutch identity for New York, a placid “folk” identity that could provide a cultural counterweight to the commercial bustle and democratic misrule of the early-nineteenth-century city. (Incidentally, Don Foster should be wary about taking Henry Livingston’s “Dutch” persona wholly at face value, as a lingering manifestation of traditional folk culture; I’m inclined to suspect it was highly self-conscious.) The best-known literary expression of this larger knickerbocker enterprise is Irving’s classic story “Rip Van Winkle” (published in 1819), the tale of a lazy but contented young Dutchman who falls asleep for twenty years and awakens to a world transformed, a topsy-turvy world in which he seems to have no place.

In the real world of New York, misrule came to a head at Christmastime. As I have shown in my book The Battle for Christmas, this season had traditionally been a time of carnival behavior, especially among those whom the knickerbockers considered “plebeians.” Bands of roving youths, lubricated by alcohol, went about town making merry, making noise, and sometimes making trouble. Ritual usage sanctioned their practice of stopping at the houses of the well-to-do and demanding gifts of food and especially drink–a form of trick-or-treat commonly known as “wassailing.” After 1800, this Christmas misrule took on a nastier tone, as young and alienated working-class New Yorkers began to use wassailing as a form of rambling riot, sometimes invading people’s homes and vandalizing their property. One particularly serious episode took place during the 1827 Christmas season; one newspaper reported it to have been the work of a mob that was not only “stimulated by drink” but also “enkindled by resentment.” The newspaper warned its readers not “to wink at such excesses, merely because they occur at a season of festivity. A license of this description will soon turn festivals of joy, into regular periods of fear to the inhabitants, and will end in scenes of riot, intemperance, and bloodshed.” (There is no evidence that Clement Moore’s Chelsea home was disturbed by roving gangs, despite the new cross-streeted vulnerability of the property, but in “A Trip to Saratoga” he noted that noisy drunken hotel guests often made “the sounds of strife or wassail, in the night.”)

Washington Irving and John Pintard were both nostalgic for the days when wassailing had been a more innocent practice, and both were concerned about the way Christmas had lately become a season of menace. Each, in his own way, engaged in an effort to reclaim the season. Irving wrote stories of idyllic English holiday celebrations (he did much of his research at the New-York Historical Society), and Pintard went about devising new seasonal rituals that were restricted to family and friends. His introduction of St. Nicholas at the Historical Society after 1804 was part of that effort.

 

Broadside of St. Nicholas, 1810, commissioned by John Pintard. Courtesy the New-York Historical Society.

And “The Night before Christmas,” published in 1823, became its apotheosis. What these enduring verses accomplished was to address all the problems of elite New Yorkers at Christmastime. Using the raw material already devised out of Dutch tradition by John Pintard and Washington Irving, the poem transformed stern and dignified St. Nicholas into a jolly old elf, Santa Claus, a magical figure who brought only gifts, no punishments or threats. Just as important, the poem provided a simple and effective ceremony that enabled its readers to restrict the holiday to their own family, and to place at its heart the presentation of gifts to their children–in a profoundly gratifying, ritual alternative to the rowdy street scene that was taking place outside. “The Night before Christmas” moved the Christmas gift exchange off the streets and into the house–a secure domestic space in which there really was “nothing to dread.” And don’t forget that in real life, prosperous people did have something to dread–after all, those wassailing plebeians might not be satisfied to remain outside.

“The Night before Christmas” contains a sly allusion to that possibility: for Santa Claus himself is a personage who breaks into people’s houses in the middle of the night at Christmastime. But of course this particular housebreaker comes not to take but to give–to wish goodwill without having received anything in return. “The Night before Christmas” raises the ever-present threat–the “dread”–but only in order to defuse it, to offer jolly assurance that the well-being of the household will not be disturbed but only enhanced by this nocturnal holiday visitor.

Did Clement Clarke Moore write “The Night before Christmas”? I believe he did, and I think I have marshaled an array of good evidence to prove, in any case, that Moore had the means, the opportunity, and even the motive to write the poem. Like Don Foster’s, my evidence must necessarily be circumstantial, but I believe mine is better than his. Some of my evidence is quite straightforward. All of it is based on the belief that historical circumstance helped make Clement Moore a figure of greater complexity than either his admirers or his detractors have recognized, and that he might well have revealed that complexity in a poem he almost certainly did regard as nothing more than a throwaway children’s piece. But, then again, what more likely occasion for a curmudgeonly patrician to confront his inner demon?

Especially when he could turn him into a jolly old elf.

 

Further Reading:

Don Foster’s essay appears as chapter 6 of his book Author Unknown: On the Trail of Anonymous (New York, 2000). Section 3 of the present essay is based on chapters 1 and 2 of my book The Battle for Christmas (New York, 1996); see also Charles W. Jones, “Knickerbocker Santa Claus,” The New-York Historical Society Quarterly 38 (1954): 356-83. The only biography of Clement Clarke Moore, albeit hagiographic, is Samuel W. Patterson, The Poet of Christmas Eve: A Life of Clement Clarke Moore, 1779-1863 (New York, 1956); see also Arthur N. Hosking, “The Life of Clement Clark Moore,” appended to a facsimile reprint of the 1848 edition of Moore’s “A Visit from St. Nicholas” (New York, 1934). Another satirical account of a visit to Saratoga is James K. Paulding, The New Mirror for Travelers; and Guide to the Springs (New York, 1828); himself a knickerbocker, Paulding also wrote The Book of Saint Nicholas (New York, 1836). The transformation of New York City can be followed in Paul A. Gilje, The Road to Mobocracy: Popular Disorder in New York City, 1763-1834 (Chapel Hill, 1989); Raymond A. Mohl, Poverty in New York, 1783-1825 (New York, 1971); Christine Stansell, City of Women: Sex and Class in New York, 1789-1860 (New York, 1986); and Sean Wilentz, Chants Democratic: New York City and the Rise of the American Working Class, 1788-1850 (New York, 1984); see also Susan G. Davis, Parades and Power: Street Theatre in Nineteenth-Century Philadelphia (Philadelphia, 1986). For the notion of “invented traditions” (such as St. Nicholas in New York), see Eric J. Hobsbawm and Terence Ranger, eds., The Invention of Tradition (Cambridge, 1983).

 

This article originally appeared in issue 1.2 (January, 2001).


Stephen Nissenbaum’s book The Battle for Christmas (1996) was a finalist for the Pulitzer Prize in history; he teaches history at the University of Massachusetts, Amherst.




George’s Story: Dolls and the Material Culture of Christmas

This dapper fellow is George (fig. 1). I met George last summer at his current home, the New-York Historical Society. He attracted my attention because he was described as a Christmas gift and I am working on a history of gift giving. According to the accession records, Elise Weidenman received the boy doll, which she named “George,” from Mary Brownell around 1880. I soon found that George had not been alone under the Weidenman Christmas tree. In fact, he had two brothers, Jakie (fig. 2) and Fredie (not pictured), which Brownell gave to Elise’s sisters, Marguerite and Anna. The records listed Brownell as the maker of the dolls. Marguerite, the youngest sister, donated them to the New-York Historical Society in 1946. The records do not tell us who named the dolls, but it is likely that the girls themselves did so.

The New-York Historical Society’s files contain detailed information about the physical construction of the dolls, which are nearly identical. The dolls are 14 inches tall and have wax over composition heads with attached shoulders, and curly blond hair inserted into the heads (fig. 3). They gaze, unblinking, out of lidless black glass eyes, and they have chubby pink cheeks and a closed (and rather small) bow mouth. Their bodies consist of stuffed cloth and kid leather (fig. 4), aside from the lower arms and hands, which are molded composition. Their feet are wooden and painted to look like boots; they are disproportionately small, which is a common characteristic of dolls of this era (fig. 5). The dolls have bellows in their torsos to make them squeak when squeezed. They are nattily dressed in little suits with trim on the jacket and pants, beading on the pants, a pleated shirt, and a ribbon bow tie. George wears brown, while his brothers wear black and green. Aside from some wear on the heads, the dolls are remarkably well preserved, and their clothing is in particularly good shape.

 

Fig. 1. Boy doll “George,” Christmas gift from Mary Brownell to Elise Weidenman, ca. 1880. Clothing consists of brown cloth suit with trim, metal beading at pants waist, white pleated shirt, and blue ribbon bow tie. Object No. 1946.104. Gift of Marguerite Weidenman. Courtesy of the Collection of the New-York Historical Society, New York.
Fig. 2. Boy doll “Jakie,” Christmas gift from Mary Brownell to Marguerite Weidenman, ca. 1880. Object No. 1946.105. Gift of Marguerite Weidenman. Courtesy of the Collection of the New-York Historical Society, New York.
Fig. 3. Head of boy doll “George.” Wax over composition with painted face, black glass eyes, and inserted blond hair. Photograph by author. Courtesy of the Collection of the New-York Historical Society, New York.

Given that sewing was a widespread and necessary skill for American women in that period, it is quite likely that Brownell made the dolls’ clothing (fig. 6). It is less plausible that she made the dolls themselves, however. By the time Brownell gave these gifts, manufactured dolls were widely available and had become popular Christmas gifts for middle-class children, particularly girls. This had not been the case in the antebellum era. Girls from wealthy families, such as the Sedgwick sisters of Massachusetts, received imported dolls for Christmas in the 1820s and 1830s, but such dolls were expensive and thus scarce in middle-class homes. Dolls became increasingly prevalent in Christmas advertisements at mid-century, however. A Pennsylvania merchant in 1851 advertised “an assortment of TOYS and FANCY GOODS for the coming holidays,” including “Dolls and doll Heads.” And the National Anti-Slavery Bazaar in 1853 listed among its gift items “Dolls of every kind and variety.” Depictions of domestic Christmas scenes often featured dolls among the presents, further promoting them as appropriate gifts. An 1869 version of A Visit from St. Nicholas, for instance, featured an illustration by Thomas Nast of children reaping the Christmas bounty (fig. 7). A young girl gazes happily at the doll she has received while one brother plays with his own version of a doll riding a wheeled horse, another grabs at his stocking, and adult members of the family watch from the hallway.

The growing popularity of dolls had both ideological and pragmatic roots in the emerging middle class. Scholars have pointed out that doll play helped to reinforce middle-class gender ideals and train girls in their future maternal roles. But dolls also served as substitute playmates for children (both girls and boys) with fewer siblings to look after, and, along with other toys, they filled the increasing time middle-class children had for play. While these factors contributed to the increasing purchase of dolls after the Civil War, of equal importance were the technological advances that made dolls more available and affordable.

Doll making was one of the crafts transformed by industrialization in this period, and wax over composition construction, like George’s, contributed to this revolution. Composition referred to a form of papier-mâché, which German doll makers began to use for dolls’ heads because it was easily molded, inexpensive, and durable. It did not produce the lifelike skin tones of the more expensive poured wax or porcelain dolls, however. The Germans solved this problem in a cost-effective manner by dipping the molded composition heads in wax to produce a more realistic skin tone and texture. Workers attached the doll heads, generally to cloth bodies, painted the faces, arranged the hair, and dressed the dolls (if they were to be sold clothed). The cheapest dolls had molded and painted hair, while others had wigs. That George and his brothers had inserted hair suggests they were a step up in quality and price. Noise or squeak boxes like those in the Weidenman dolls were also common in German wax dolls.

Dolls imported from Germany dominated the middle to lower range of the U.S. doll market before the First World War, and were joined by French and English imports in American stores. The nascent U.S. doll industry, led by German immigrants, could not compete with the flood of imports, although American inventors patented technologically advanced talking, walking, and creeping dolls (which drew few fans among children). The boom in manufactured dolls and the growth of Christmas present-making encouraged the new department stores to stock dolls and toys, and to open seasonal toy departments. Dolls and doll parts were widely available at a variety of price points by the 1870s, when the Weidenman sisters received their dolls. Emerson’s Grand Bazaar, a Massachusetts department store, claimed to have available for holiday shoppers in 1871 “50 dozen of Wax Dolls, of every description,” as well as “crying” and “floating” dolls. Macy’s 1877 catalogue offered German wax dolls in 10 sizes and at prices ranging from 56 cents to $8.66. In addition to wax and china dolls at varying prices, stores stocked doll parts, including bodies, arms, and heads of china, parian, rubber, and leather, as shown in an 1875 Emerson’s catalogue (fig. 8).

 

Fig. 4. Torso of boy doll “George.” Stuffed cloth and kid leather with squeak box. Photograph by author. Courtesy of the Collection of the New-York Historical Society, New York.
Fig. 5. Legs and feet, boy doll “George.” Stuffed kid upper legs, wooden lower legs and feet with painted shoes. Photograph by author. Courtesy of the Collection of the New-York Historical Society, New York.

Why did stores sell doll heads and parts? For one thing, American doll makers concentrated on making body parts rather than the more expensive heads, which they left to European manufacturers in the late nineteenth century. An article in Harper’s Bazar told women how to select doll heads, bodies, and clothing to make dolls, declaring that “Mothers who want to teach their children correct ideas select each part of the doll with care, and have each article of clothing well made.” Thus doll givers could blur the line between a handcrafted and purchased gift by building a doll from parts, in much the same way children build bears and dolls today at the mall. It is quite possible that this is how Mary Brownell made the dolls she gave the Weidenman sisters. Middle-class women such as Brownell were encouraged to supplement the manufactured items displayed in their parlors with their own “fancywork,” which encompassed such handicrafts as needlework, china painting, hair jewelry, and wax flowers. This fancywork also enabled women to transform a mundane purchased object into a sentimental keepsake by embellishing and personalizing it with their talents—painting a china plate or making a frame for an inexpensive chromolithograph, for instance. Combining manufactured doll parts and clothing the results in hand-sewn outfits was in keeping with this practice. Dolls sold in parts were so many commodities, as the apparently identical heads and bodies of the Weidenman dolls suggest, yet Brownell’s probable work in putting the pieces together and sewing the clothing transformed George and his brothers into objects imbued with Brownell’s affection for the sisters, if only by virtue of the different colors of their outfits.

 

Fig. 6. Cropped photograph of boy doll “George.” Clothing consists of brown cloth suit with trim, metal beading at pants waist, white pleated shirt, and blue ribbon bow tie. Photograph by author. Courtesy of the Collection of the New-York Historical Society, New York.
Fig. 7. “Girl receiving doll for Christmas from Santa Claus.” From A Visit from Saint Nicholas, by Clement Clark Moore and illustrated by Thomas Nast (1869?). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.
Fig. 8. This catalog page suggests the range of dolls and doll parts available at a Massachusetts department store. “Dolls, Etc.,” advertisement taken from page 20 of a trade catalogue titled Emersons’ Grand Bazaar Catalogue by Charles Emerson & Sons. Printed by Franklin P. Stiles, Haverhill, Massachusetts (1875). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

An 1883 letter from Alexander Graham Bell to his wife, Mabel, suggests another reason for the sale of parts: doll heads and bodies, particularly those made of bisque and porcelain, were fragile. Moreover, many children played roughly with their dolls, punishing them physically for bad behavior and even “killing” them and holding doll funerals. Whether through such rough play, accident, or carelessness, the Bells’ young daughters had broken the heads of the dolls they had received the previous Christmas. Bell believed it a bad idea simply to replace the dolls’ heads. In keeping with the view that dolls provided maternal training, he wrote that the girls should treat their dolls as if they were their own babies. He argued that, just as a child’s head could not be replaced, neither should a doll’s. Bell concluded that the dolls should be destroyed and his daughters told that “Santa Claus . . . has taken them back as they were not cared for properly.” He suggested “Santa might entrust them with another baby” in the future, should they prove themselves trustworthy mothers. There is no indication as to whether Mabel Bell agreed to this draconian plan, but the following year the girls demonstrated their father’s invention for a reporter by telephoning “Santa Claus,” each requesting a new doll for Christmas.

The sale of doll heads and body parts thus provided a practical way to deal with breakable products, as well as allowing for creativity and individual taste in selecting a doll. That the Weidenman sisters’ dolls survived intact suggests that, unlike the Bell sisters, they did not play roughly with their gifts. It is possible that these dolls were displayed rather than played with, particularly since the two older girls were over ten when they received them.

 

Fig. 9. Santa Claus surveying his handiwork. Gifts include boy and girl dolls. “Loading the Christmas Tree,” from Santa Claus and His Works (Snowflake Series No. 55), by George Webster (New York, 1888). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.
Fig. 10. Distributing Christmas gifts to children. “The Christmas-Tree,” engraved by Winslow Homer. Taken from Harper’s Weekly, December 25, 1858. Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

One curious aspect of the dolls is their gender. Although this illustration from an 1888 children’s book shows a Christmas tree featuring both female and male dolls (fig. 9), boy dolls were actually unusual among manufactured dolls, constituting perhaps 10 percent of output in the nineteenth century. Why did Brownell give the Weidenman sisters boy dolls? All we can do is guess. Brownell may have been responding to the girls’ wishes for boy dolls to add to their doll collection, or perhaps they were attracted to boy dolls because of their own lack of brothers. Scholars have suggested that dolls could substitute for the siblings missing from the smaller middle-class family. Alternatively, Brownell may have been trying to distinguish her gifts by making them unique by virtue of their gender as well as their clothing. Of course, it may have just been that they had a special on boy dolls when she went shopping for a gift, but this seems the least likely reason for her choice.

Ultimately, the questions of why Brownell chose boy dolls and whether she made them cannot be answered definitively. The growth of the doll industry, the wide availability of dolls and doll parts, and the similarities between German manufactured dolls and the Weidenman dolls all suggest that Mary Brownell did not make the dolls in their entirety. Given Marguerite Weidenman’s description of the dolls as made by Brownell, however, it seems reasonable to conclude that in at least some respect they were so. Would the sisters have preserved and treasured them as much if they thought the dolls were not handmade? Or did Brownell’s selecting boy dolls, putting the pieces together, and making the outfits transform them from commodities to personalized, “handcrafted” gifts?

An essayist for The Nation a few years earlier had claimed that such a transformation was possible, noting that a giver who did not have the time or talent to handcraft a Christmas present could still “buy cheap brown or buff earthen candlesticks and paint them with his own hands till they are more beautiful than the costliest porcelain.” Similarly, Brownell could transform imported German dolls or doll parts by sewing stylish little suits for them. Alternatively, according to The Nation, the giver could “keep a memorandum-book for the purpose of recording wishes” and purchase the item most desired by the recipient. Given the overwhelming dominance of the market by girl dolls, it seems likely that Brownell deliberately chose boy dolls in response to some desire of the Weidenman sisters. Even if Brownell did not handcraft the dolls, therefore, their preservation suggests that she had used her talents and her knowledge of the Weidenman sisters to transform these commodities into gifts.

Since gifts imply affective relationships, I sought information that might illuminate the connection between Mary Brownell and the three Weidenman sisters. Unfortunately, the documentary evidence is sparse and shows only that Brownell and the Weidenmans lived in Hartford, Connecticut, at the same time in the 1870s. The girls were the daughters of Jacob Weidenmann and Anna Schwager Weidenmann, who immigrated to New York from Switzerland and Germany respectively. (The daughters apparently dropped the final “n” from their surname.) Jacob was a prominent landscape architect, who worked with Frederick Law Olmsted. In 1860, when daughter Anna was a baby, the family moved to Hartford, where Elise and Marguerite were born, in 1861 and 1868 respectively. The Weidenmans moved back to New York in 1874, when the girls would have been 14, 13, and 6. Since the Weidenmans left Hartford in 1874, the girls probably received the dolls from Brownell sometime between 1871 and 1874, rather than the 1880 date estimated in the accession files.

Mary Brownell and her husband Franklin “Clinton” Brownell were living in New Jersey in 1870, but after their infant son’s death that year, they returned to Hartford, where they must have met the Weidenmans. Franklin died in 1871, leaving Mary Brownell to raise their four surviving children. The children were close in age to the Weidenman sisters, which suggests they may have been schoolmates or playmates in Hartford. There is no indication that Brownell was related to the Weidenmans; she may have been a neighbor or friend of the family during their mutual residence in Hartford. Perhaps Mary Brownell and Anna Weidenman visited as their children played together. Certainly Brownell must have been quite close to the Weidenman girls to have given them such a Christmas present.

Scholars have suggested that gift exchange in modern societies constitutes a social system for the transfer of affection and the establishment and maintenance of social ties. The domestic ideal, by which the new middle class defined itself in the mid-nineteenth century, produced what Elizabeth Pleck has called “the sentimental occasion,” which both created and reinforced family memories. Chief among those occasions was Christmas, which Americans transformed from a public carnival, marked by feasting and drinking, treating the poor, and boisterous recreation, to a private holiday centered on the middle-class nuclear family. The central ritual of the new, domesticated Christmas was gathering around a Christmas tree and giving children presents, many from the new incarnation of St. Nicholas, “Santa Claus.” Promoted through magazines such as Godey’s Lady’s Book and Harper’s Weekly, as well as department stores and a growing host of businesses eager to sell holiday gift items, this transformation ultimately shifted holiday gifting from New Year’s Day to Christmas and from the external poor to the family’s children. An 1858 illustration by Winslow Homer for Harper’s Weekly helped to naturalize the domestic Christmas by depicting members of an extended family distributing the “wonderful foliage and fruit” of the Christmas tree, including at least one doll, to the children, who are the central focus of the illustration, as they were of the transformed Christmas (fig. 10).

The exchange of gifts on Christmas symbolized the ties of affection that bound family and friends, in contrast to the pecuniary relationships of the market. But gifts were not only symbols. They were actual physical things given and received, treasured or detested, proudly displayed or furtively hidden, even regifted, and ultimately saved or discarded. That the Weidenman sisters named their dolls and preserved them for some seventy years suggests that they cherished these gifts, as does Marguerite’s inclusion of them among the few family items she donated to the New-York Historical Society.

Whatever the specifics of the connection between Brownell and the Weidenmans, the dolls suggest the relationship was a fond one. Nineteenth-century gift advisors defined sentiment as the essence of the gift, just as it was the essence of the family celebration of Christmas. They distinguished the presents exchanged on Christmas and other sentimental occasions from the commodity transactions of the marketplace by creating a Romantic ideal of the gift. In an 1844 essay, philosopher Ralph Waldo Emerson chided those who purchased gifts, asserting “it is a cold, lifeless business when you go to the shops to buy me something, which does not represent your life and talent, but a goldsmith’s.” Emerson instead declared that “[t]he only gift is a portion of thyself,” suggesting the handcrafted present as the ideal.

But the line between gift and commodity was not so easily drawn, as the example of the dolls demonstrates. Historian Stephen Nissenbaum has suggested that the domestication of Christmas, ironically, helped to commercialize the holiday, as the emphasis on gifts for children and other family members meshed with the commercial-industrial economy and its rising production of consumer goods. Indeed, Emerson’s disparagement of purchased presents reveals that there was already a thriving trade in these by the 1840s. Commercial gifts had been heavily promoted since the 1820s, and Americans selected holiday presents from among dozens of annual gift books, cakes and candies, toys (including dolls), and a growing array of jewelry, pens, and other “fancy goods” sold by local merchants.

A variety of critics wrestled with this intrusion of the marketplace into the intimate province of the domestic gift. In Godey’s, novelist and gift book editor Caroline Kirkland lamented the transformation of the gift into “something which can be bought with money,” concluding that presents “have almost lost their sweet meaning, and become a meaner sort of merchandize.” But Kirkland also valued gifts as “natural expressions of goodwill and affection.” Writers in magazines as different as the Methodist Ladies’ Repository and The Nation agreed that the “universal custom of giving presents on commemorative occasions” was “inevitable and necessary,” as well as “a pleasant and easy way of expressing one’s feelings.” Because they valued gift giving, these commentators formulated ways to blunt the force of commercialization. One way they did this was by endorsing the new Santa Claus, who “made” gifts in his workshop and gave them freely. An illustration from an 1888 children’s book, for instance, depicts Santa sitting tailor fashion and sewing doll’s clothing (fig. 11).

Marguerite Weidenman’s recollection of the dolls she and her sisters received as handmade by Mary Brownell places them within the Romantic ideal of the handcrafted gift and suggests that she particularly valued that aspect of the gift. It seems safe to conclude, then, that George and his brothers represented the Romantic ideal of the gift, which owed “all value to sentiment.” The ideal gift, according to the Ladies’ Repository, should reflect “some painstaking of the donor,” such as Brownell’s likely sewing of the doll clothes, rather than “the greedy eye of trade.” But the Weidenman girls’ dolls also suggest the role of the marketplace in Christmas gifting. George and his brothers are artifacts of the material culture of the domesticated Christmas, but they are also products of the factory and the marketplace. Despite the idealization of handcrafted presents, they show that Christmas gifts had become enmeshed in the developing economy of consumer goods, and so stood at the intersection of commerce and affection.

 

Fig. 11. Santa Claus making dolls. “Making the Doll’s Clothes,” from Santa Claus and His Works by George P. Webster (New York, 1888?). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

Further Reading

The most comprehensive scholarly study of the history of dolls and doll play in the United States is Miriam Formanek-Brunell, Made to Play House: Dolls and the Commercialization of American Girlhood, 1830-1930 (Baltimore, 1993). The vast majority of works that touch on doll history are those aimed at collectors. Many provide useful information on doll making and types. See, for example, Jean M. Burks, The Dolls of Shelburne Museum (Shelburne, Vt., 2004); Roger Baker, Dolls and Dolls’ Houses: A Collector’s Introduction (London, 1973); Eleanor St. George, Old Dolls (New York, 1950); Caroline Goodfellow, The Ultimate Doll Book (London, 1993).

The best source on the transformation of Christmas is Stephen Nissenbaum, The Battle for Christmas: A Cultural History of America’s Most Cherished Holiday (New York, 1996). For a perceptive discussion of the role of the market in the new Christmas, see Leigh Eric Schmidt, Consumer Rites: The Buying and Selling of American Holidays (Princeton, N.J., 1995). On the transition to purchased gifts, also see William B. Waits, The Modern Christmas in America (New York, 1993). For those interested in the interplay between sentiment and the market in the nineteenth century, two useful works are Elizabeth H. Pleck, Celebrating the Family: Ethnicity, Consumer Culture, and Family Rituals (Cambridge, Mass., 2000); Elizabeth White Nelson, Market Sentiments: Middle-Class Market Culture in Nineteenth-Century America (Washington, D.C., 2004). Nelson touches on the role of fancywork, as does Nancy Dunlap Bercaw, “Solid Objects/Mutable Meanings: Fancywork and the Construction of Bourgeois Culture, 1840-1880,” Winterthur Portfolio 26 (Winter 1991): 231-47.

Historians of gift giving must begin with Marcel Mauss, The Gift: The Form and Reason for Exchange in Archaic Societies, trans. W. D. Halls, foreword by Mary Douglas (1923; New York, 1990). The best historical discussion of the developing ideology of gifts in market-based societies is James Carrier, Gifts and Commodities: Exchange and Western Capitalism since 1700 (London and New York, 1995). Other key works examining the relationship between the social and economic meanings of gifts include David J. Cheal, The Gift Economy (London, 1988); Barry Schwartz, “The Social Psychology of the Gift,” American Journal of Sociology 73 (July 1967): 1-11; Theodore Caplow, “Christmas Gifts and Kin Networks,” American Sociological Review 47 (1982): 383-92; Aafke E. Komter, Social Solidarity and the Gift (Cambridge, 2005); Jacques T. Godbout, in collaboration with Alain Caillé, The World of the Gift, trans. Donald Winkler (Montreal, 1998).

The Romantic ideal of the gift was articulated by Ralph Waldo Emerson, “Gifts,” in The Collected Works of Ralph Waldo Emerson, vol. 3: Essays: Second Series (1844; Cambridge, Mass., 1983): 93-96. Two other useful formulations of this ideal are found in Caroline Kirkland, “Hints for an Essay on Presents,” Godey’s Lady’s Book (January 1845): 27-29; and “Festivals and Presents,” Ladies’ Repository (January 1871): 43-46.

Finally, for anyone interested in looking at nineteenth-century dolls, good digital collections are available online at the Strong National Museum of Play and the Wisconsin Historical Society, which even allows one to search for boy dolls.

 

This article originally appeared in issue 12.3 (April, 2012).


Ellen Litwicki teaches in the department of History at the State University of New York at Fredonia. Her research focuses on the history of social and cultural rituals, and her publications include America’s Public Holidays, 1876-1920 (2000). She is currently working on a cultural history of American gift giving.

 

 




Girls Just Want to Have Fun

This morning, with great fear and trembling, I entered my stepdaughter’s American Girl room. When my husband and I got married and the three of us moved in together, the room was something of a selling point, a private space of her own, much like a study, and I had promised I would never touch her dolls when she was away. She’d left Addie, the runaway slave, lounging in shorts and hiking boots. Molly was still in her 1940s pajamas but was in the process of climbing out of bed and reaching for her glasses. As for Kit, a child of the Great Depression, she had been dressed in a white silk ball gown and black felt cape and was arranged in a hammock intended for Beanie Babies. She looked a little like Sleeping Beauty.

 

Fig. 1: American Girl Dolls: Addie, Molly, Kit. © by the Pleasant Company.

These girls come from radically different historical circumstances, but as far as my stepdaughter is concerned, they are sisters. Addie, circa 1864, is, logically enough, the oldest. The girls share clothing, lunches, and accessories freely. For a while, Addie didn’t have a bed. Given her history, I took this to heart and got my stepdaughter–or, more honestly, Addie herself–the “official” Addie bed for Christmas, complete with African American story-quilt. I’ll admit that my compassion was misguided. As far as my stepdaughter is concerned, the three dolls are only incidentally connected to their official profiles. It doesn’t matter that Addie may never be reunited with her family, that Kit was thrown into jail with a group of hoboes, or that Molly’s father is off at war. These dolls are refugees from their own histories.

There are seven American Girl dolls in all, spanning a period from the American Revolution to the Second World War. The Pleasant Company, named after its founder, Pleasant T. Rowland, has sold five million dolls since 1986. The company’s staff includes a small team of historians and librarians, and clearly, they are having a terrific time. For anyone with a taste for the history of everyday life, the catalogue makes fascinating reading. Each of the girls is surrounded by meticulously researched doll-sized butter churns or snowshoes or school desks. Kit’s lunchbox is decorated with WPA-style heroic locomotives. Addie’s is an old milk pail big enough to hold a cold meat pie and a bunch of grapes.

In some ways, the dolls have the feel of successful market research. At ninety dollars, they’re hardly cheap, and their price tag is part of their appeal. These are not intended to be superficial toys. They’re a way to spark historical imagination, to make connections between the past and present, to help girls understand that, as the catalogue brightly notes, “You’re a part of history too.” Do American Girl dolls turn young girls into junior historians?
I can only speak from my own experience. My stepdaughter adores her dolls, but was initially ambivalent about their stories. Each doll comes with the first book of its series, and her father had gotten her a full set of the Molly books, but she never asked to read them. Her reaction to the Addie books was even stronger. She had started on the first, only to find the descriptions of slavery so intense that they gave her nightmares. But as time went on, particularly as she became an independent reader, she began to pick up some of the “short stories,” books that can be read in a single sitting, and slowly graduated to the chapter books. She now has most of the Molly series memorized.

 

Fig. 2: American Girl books. © by the Pleasant Company.

Fifty-six million American Girl books have been sold since the Pleasant Company was founded. At $5.95, the books are much less of an investment than the dolls; they are sold not only through the catalogue but also in bookstores, and are available in libraries. Although the books can be purchased independently, the American Girl stories and accessories are intended to have a symbiotic relationship. A dress that plays a key part in Addie’s Surprise is available in the catalogue for $22.00; Kit’s typewriter and rolltop desk, key features in several of her adventures as a budding journalist, can be purchased for $82.00. Still, the stories can be read by girls who have no interest in–or no money for–the dolls. My ten-year-old niece, who prefers gerbils, regularly plows through two American Girl books a day.

These books deserve their popularity. The writing is lively and, remarkably enough, rarely crosses into gross sentimentality. Admittedly, the stories are formulaic. Each of the dolls gets six books with interchangeable titles: Meet _____, _______ Learns a Lesson, ________ Saves the Day. This drives home the central message, that at bottom, all of these girls face the same problems: a family in transition, adjustments at school, a summer where they are thrown into unfamiliar circumstances. The girls themselves are also essentially identical: spirited and resourceful, surrounded by loving grown-ups, and often in a position to help those less fortunate.

As a writer of historical novels, I can appreciate the way the authors of these books use basic similarities to ease their readers into less familiar territory. Historical details are woven through the books: we learn, along the way, how school was taught in 1774, 1834, 1864, or 1934. The authors work not only with staff historians, but also with curators of historical museums in the towns where they set the stories.
Each of the chapter books ends with a section called “A Peek into the Past,” a few pages of illustrated historical background. At first, my stepdaughter had no interest in those pages, but as time went on, she began to suffer through them, and sometimes, she would even ask questions. At the end of one of the Molly books was a photograph of Hitler, and suddenly I had an opportunity to carefully broach the subject of the Holocaust.

 

Fig. 3: American Girl books. © by the Pleasant Company.

It would be easy to poke holes in the way the Pleasant Company presents history. Felicity, “a spunky, spritely girl growing up in Virginia in 1774,” visits a local plantation where there are clearly slaves; the issue never arises. Kirsten, a second-generation Scandinavian pioneer, has an entirely predictable friendship with a Native American girl named Singing Bird. The hardest to take, to my mind, is Samantha, a Victorian orphan who lives with her wealthy grandmother and has befriended an Irish servant girl named Nellie. I suspect, with a sinking heart, that Samantha is the most popular doll of the series.

Still, at times the books can take you by surprise. The Addie series, in particular, not only covers her escape from slavery, but moves on to deal with life in Civil War Philadelphia, Northern racism, freedmen’s mutual aid societies, class antagonisms, and gradual, moving reunions with members of her family, including a brother who lost an arm in the war. Josefina lives in 1824 in what is now New Mexico, and her stories don’t dismiss the cultural and historical complexities of that time and place. Even the Samantha books contain a suffragette or two.

One could wish for more. I have recurring fantasies of “Rosa, a strong-willed and clever girl growing up in 1914 in the tenements of New York.” On the third anniversary of losing her older sister in the Triangle Fire, Rosa is comforted by cheerful visits from the old family friend, Aunt Emma Goldman, who takes her to her first strike. I imagine the doll dressed in a rather stained shawl and kerchief, carrying a union card. I don’t know what my stepdaughter would make of her. Chances are, Rosa would simply join her sisters, Addie, Kit, and Molly, under a nine-year-old’s benign dictatorship. Even now, when the American Girl books have become staples in our household, the “official” stories have nothing to do with the world the actual dolls inhabit. My stepdaughter creates stories of her own.
Perhaps it would be too complex to have three histories coexist, or perhaps it is simply a tribute to the power of imagination. The careful research of the historians on the staff at the Pleasant Company is not relevant, and that is as it should be. In any event, I would probably have to buy Rosa a bed.

 

This article originally appeared in issue 2.2 (January, 2002).


Simone Zelitch is the author of The Confession of Jack Straw (Seattle: Black Heron, 1991), Louisa (New York: Putnam, 2000), and a third novel, Moses in Sinai (Seattle: Black Heron, forthcoming). She is currently writing a novel about the Civil Rights movement. She lives in Philadelphia with her husband, stepdaughter, and Addie, Molly, and Kit.




Defining A “Christian Nation”: or, A Case of Being Careful What You Wish For

Steven K. Green’s Inventing a Christian America is an unusual work: one that retells an important early American narrative while providing a methodological model for historical debate. Basing his study on an array of primary and secondary sources, Green—Fred H. Paulus Professor of Law and Affiliated Professor of History at Willamette University, author of works on separation of church and state, and frequent contributor to litigation on religious freedom—demonstrates how easily the historical inventions of one era may become the historical facts of another. This tendency is especially common for the history of the American founding given, as Green argues, the almost irresistible connection between myth-making and nation-building as intellectual constructs.

 

Steven K. Green, Inventing a Christian America: The Myth of the Religious Founding. Oxford: Oxford University Press, 2015. 295 pp., $29.95.

Green writes that he “seeks to unravel the myth of America’s religious foundings,” a process that is “crucial if our nation is to come to grips with its religious past and its pluralistic future” (vii, viii). In his introduction he sets out the three goals of his study: to examine the terms of the debate between what he calls the “religionists” and the “secularists”; to identify some of the errors of analysis and method by the former; and to explain that by the term “myth” he is referring to the efforts of early nineteenth-century writers “to forge a national identity, a process that sought to sanctify the recent past” (15).

Green addresses these points in a running critique of the work of historians who, to summarize an often complex position, have argued that the United States was intentionally established as a Christian nation, that first national documents are replete with Christian references demonstrating the intention of the Founders (both well known and “forgotten”) to establish a government based on God’s higher law (rather than Lockean contract theory alone), and that the Founders’ preferred form of Christianity was Protestantism. His work joins others exploring this “religionist” scholarship, especially two useful anthologies edited by Daniel L. Dreisbach, Mark David Hall, and Jeffry H. Morrison (The Founders on God and Government and The Forgotten Founders on Religion and Public Life).

In his four central chapters, Green expertly weaves together his take on the “Christian nationalist” argument with a narrative aiming to historicize major issues in early American religion and constitutional politics. These include the emergence of the idea of America as a religious haven; the transition from Puritan covenant to civil contract in American governance; the disconnect between the religious statements of the Founders and the foundations of civil government; and the emergence of the democratic consensus regarding the origins of constitutional authority, especially in the U.S. Constitution. In his culminating fifth chapter, Green surveys the influence of the writings of biographer Mason Locke Weems, clergyman Lyman Beecher, Supreme Court Justice Joseph Story, and novelist Nathaniel Hawthorne in promoting the myth that the United States was founded as an expressly Christian nation and that the Founding Fathers, especially George Washington, were unusually pious.

Green shows effectively how Antifederalists’ fears of the irreligious character of the U.S. Constitution reflected their understanding that the new government was not expressly Christian: just the opposite of what the “religionist” school has concluded. And while prominent clergy “believed that providence had been instrumental in creating the new nation, they acknowledged that the source of authority for the new government rested with the people” (191). Green deftly explores the impact of the Second Great Awakening on the changing character of American Christianity in subsequent decades, far removed from the ecclesiastical mainstream of the Revolutionary era: “By mid-nineteenth-century America, ‘Christian’ meant not only Protestant but evangelical in belief and practice. Washington [an Episcopal rationalist] had been ‘born again’ in his death” (208).

In the manner of a legal brief, Inventing a Christian America is elegantly written, densely argued, and persuasive. But like any “forensic” work—that is, work designed to prove a case—it has its limitations. For one thing, Green focuses exclusively on the religionists’ arguments, leaving aside those of the secularists. A prime example of the latter is Isaac Kramnick and R. Laurence Moore’s The Godless Constitution: A Moral Defense of the Secular State, itself a forensic-style work, originally subtitled “The Case Against Religious Correctness,” and distinctly less nuanced than Green’s. Indeed, one of the difficulties of writing about religion and the Founding is that secularists have their own founding myths: among these, that the patriots were indifferent to religion in the conflict with Great Britain; that the Founders were largely Deists; and that the absence of any references to God in the U.S. Constitution is the prima facie evidence that Americans aimed to create a nation free of religious content. Evidence in recent work on the Revolution, such as T.H. Breen’s American Insurgents, American Patriots: The Revolution of the People, suggests that these conclusions are wide of the mark.

For another thing, Green does not discuss the states’ constitutional provisions regarding religion before and after the ratification of the U.S. Constitution in 1788. In his short survey Church and State in America, James Hutson stresses that under federalism issues relating to religion and churches were left to the states, while separation of church and state did not become constitutional doctrine until the Supreme Court’s Everson v. Board of Education decision in 1947. An important new article by Vincent Phillip Muñoz (“Church and State in the Founding-Era State Constitutions”) sets out the specific provisions of the states’ founding-era declarations of rights and constitutions concerning religion, any number of which permitted government regulation of worship or conditions for office-holding (while, it should be noted, also emphasizing religious freedom and, in several cases, barring clergymen from holding political office).

In fact, as Green’s study suggests but does not explore, a key conceptual problem with the Christian-nation thesis centers on the Founders’ use of the term “Christian,” and, for that matter, “Protestant,” “religion,” and “church.” It’s fair to say that, in keeping with the tradition of kingdoms and nation-states, the creators of an explicitly Christian republic would have been precise about which religious institution they preferred to ally with. But instead, the sources are full of generalities. Several state constitutions, for example, required office-holders to adhere to belief in the Trinity, in the divine inspiration of the Scriptures, in “Christianity,” or the “Christian religion,” or to be Protestants, but made no reference to specific denominations or their clergy. Abstract and generic references pepper Green’s sources, comprising a body of discourse that historians used to call “civil religion” and that Jon Meacham has more recently called “public religion.” Of course, nearly all of the Founders, prominent and not so prominent, were Protestant, by identity if not church attendance, and many assumed that belief in Providence was a key ingredient of virtue, that piety was the glue that tied Americans together, and that God was on Americans’ side. But such sentiments are far removed from any intention to create a constitutional agreement between government and an organization called a “church,” or, for that matter, to elevate clergy to positions of influence within the government, as would be expected in a religious state.

In short, then as now, the United States was home to myriad churches, denominations, religious societies, and sects—the words were often used interchangeably—that adhered to widely diverging interpretations of the Scriptures and organized themselves in substantially different ways. In fact—the crux of the matter—there was no, and is no, such institution as the “Christian Church” or the “Protestant Church” with which the United States and individual states could, or can, make an official compact. Had such a compact been possible, there seems little doubt that it would have been between the government and orthodox churches. Under such a regime, controversial sects like the Methodists—the most popular denomination in America by the Civil War but stereotyped as Loyalists and charlatans in the founding era—along with the offshoots of the Second Great Awakening, whose descendants form the bulk of Protestant church membership today, likely would have been denied voting rights and access to political office.

Green concludes his book with the warning: “So long as proponents of America’s Christian origins fail to see the narrative as a myth, they will be unable to appreciate the true import of America’s religious heritage” (243). Something of a jeremiad, Green’s study is also an important exercise in historical method: a model of history-as-debate, based on a lucid and consistent reading of the sources and a clear conviction that getting the story right matters. Given his approach, he can’t be blamed that the story of religion and the founding continues to be only partially told.

 

This article originally appeared in issue 16.3 (Summer, 2016).


Dee E. Andrews is professor of history at California State University, East Bay. She is the author of The Methodists and Revolutionary America (2000) and is currently completing a book on Thomas Clarkson as abolitionist author.




Artificial Light

George Washington Plunkitt teaches a lesson

I don’t understand why they choose to sit in the dark.

It’s certainly not a matter of timidity. With my blessing as well as without it, these students are constantly opening up windows, moving around chairs, getting up in the middle of a discussion to grab a tissue or go to the bathroom, and so on. And yet if for some reason I’m not the first person to enter the classroom at the start of the day, they will sit in the liminal early morning light until I enter and flick the switch. Maybe it preserves some notion of receding freedom; as long as the lights are off, school hasn’t really begun.

Anyway, now I’m here, the lights are on, attendance has been taken, and we’re getting down to business. The topic at hand: a discussion of an excerpt from Plunkitt of Tammany Hall, a 1905 portrait of machine politics written—or, perhaps more accurately, filtered—by a young newspaper reporter and editor named William Riordan. Conceived as a response to muckraking journalist Lincoln Steffens’s 1903 exposé The Shame of the Cities, Plunkitt of Tammany Hall is a remarkably rich social document in which we hear—an auditory metaphor seems apt, because the voice is so striking—an indiscreet Irish politician named George Washington Plunkitt reveal far more about himself than he should for his own good. (The book, published at a moment when Plunkitt was seeking to recover his lost seat in the New York state senate, effectively sealed his doom, as Riordan may well have intended.) In one particularly rich passage, which I have assigned as homework and now proceed to read aloud, Plunkitt makes a famous distinction between what comes to be known as “honest graft” and “dishonest graft.”

 

Everybody is talkin’ these days about Tammany men growin’ rich on graft, but nobody thinks of drawin’ the distinction between honest graft and dishonest graft. There’s all the difference in the world between the two.

Yes, many of our men have grown rich in politics. I have myself. I’ve made a big fortune out of the game, and I’m gettin’ richer every day, but I’ve not gone in for dishonest graft—blackmailin’ gamblers, saloon-keepers, disorderly people, etc.—and neither has any of the men who have made big fortunes in politics.

There’s an honest graft, and I’m an example of how it works. I might sum up the whole thing by sayin’: “I seen my opportunities and I took ’em.”

Just let me explain by examples. My party’s in power in the city, and it’s goin’ to undertake a lot of public improvements. Well, I’m tipped off, say, that they’re going to lay out a new park at a certain place. I see my opportunity and I take it. I go to that place and I buy up all the land I can in the neighborhood. Then the board of this or that makes its plan public, and there is a rush to get my land, which nobody cared particular for before.

Ain’t it perfectly honest to charge a good price and make a profit on my investment and foresight? Of course, it is. Well, that’s honest graft.

Or, supposin’ it’s a new bridge they’re goin’ to build. I get tipped off and I buy as much property as I can that has to be taken for approaches. I sell at my own price later on and drop some more money in the bank.

Wouldn’t you? It’s just like lookin’ ahead in Wall Street or in the coffee or cotton market. It’s honest graft, and I’m lookin’ for it every day in the year. I will tell you frankly that I’ve got a good lot of it, too.

 

“Tammany Hall, 1830.” G. Hayward, lithographer (New York, 1865). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

All right, I tell the students. There you have it. Plunkitt is telling us that dishonest graft means things like running prostitution rings or selling liquor, while the kinds of examples he’s giving us constitute the legitimate practice of politics—whatever “goo-goos” like that Progressive Lincoln Steffens may say. You buy it?

There’s a pause. There often is. J.D. raises his hand first, as he typically does. No, I don’t buy it, he says. Plunkitt’s just trying to justify his corruption.

Well, okay, I say. But just what is it here that constitutes corruption?

He’s using his office to get rich, says Leah, one of my better students. A politician is not supposed to do that.

Oh no? I reply. Maybe it depends on your definition of politics.

She looks at me quizzically.

We’ve already talked about the role of political machines for immigrants earlier in the week, I note. If a guy like Plunkitt will get your brother-in-law a job or deliver a turkey at Thanksgiving, how much do you care if he cuts himself in for a piece of the action? Doesn’t he pretty much need to cut himself in for a piece of the action in order to maintain his position, and thus to help other people? (I launch into a brief discussion here on the role of party patronage as the financial lubricant of the nineteenth-century two-party system and the reformers who tried to break it.) What are you, Leah—one of those Progressive control freaks who uses morality as a cover for hatred of people she considers different from herself?

Leah smiles. She knows I’m only kidding. After all, her people were Jewish immigrants, and she knows I know this. Still, I’m hoping she’ll feel the poke under her ribs.

I kind of like Plunkitt, says Danielle. He’s funny.

Manuel raises his hand. He doesn’t do this often, so I’m eager to draw him into the conversation. I think the guy is right, he says. Let’s be real: this is what politics is all about. Look at Eliot Spitzer.

 

Courtesy of the author

We proceed to discuss the disgraced New York governor, who is in the middle of grappling with revelations of participation in a prostitution ring, over which he will resign shortly. For the next ten minutes or so, we assess how much or how little politics has changed. I ask how much of a difference there is between what Spitzer did and what Plunkitt is describing. (I don’t get into how much of Plunkitt’s persona is a collaboration between him and Riordan; that’s a conversation for another day.) At least Plunkitt is no hypocrite the way the crusading Spitzer is. But I also ask: if Plunkitt buys land at ten dollars an acre and sells it for one hundred dollars an acre, is he stealing ninety dollars an acre from the public?

Absolutely! says Laura.

But isn’t a Wall Street speculator pretty much doing the same thing?

Manuel nods approvingly. J.D. furrows his brow: he’s working this.

Well yes, says Alison. It’s just a different kind of theft, not that she’s particularly surprised by either. I find her sophistication a little unsettling in a sixteen-year-old.

Are you saying that we hold politicians to a different standard than other people? A higher standard than other people?

Alison shrugs, a gesture that says, “Yes I am saying that, but you shouldn’t take me any more seriously than I’m taking you.”

Laura has an incredulous expression on her face: She’s about as bright as Alison but wholly lacks Alison’s sense of irony. I observe to the class that Laura appears bemused by Alison’s position and ask her if she thinks politicians should in fact be held to a higher standard than other citizens.

Yes, she says, but immediately upon doing so she starts to backtrack, recognizing even as she says so that they’re just people too, before her improvised meditations descend into the incoherence of a mouth that moves more slowly than a mind. I might try to untangle her thoughts, but we’re just about out of time. I know this because people like Tess, who never say a word, have begun moving books into their backpacks, part of a growing rustle that’s my not-so-subtle cue to wrap this up.

By way of conclusion, I remind the class of something Leah said: “A politician is not supposed to do that.” Whether or not you agree, I say, a vision of the way the world works is embedded in that assertion. I ask the class to keep thinking about Plunkitt and the way his remarks might help clarify their own notion of what politics should be. (A rather large and amorphous request, to be sure, but you never know who will latch onto what.) And then I say I want them to graduate, go to college, and use that vision of politics to go and make the world a better place. I get some smiles as the desks and chairs slide around and the desperately bored make their exit. They know what I mean in a Jon Stewart kind of way. And they know, just like him, that I’m not entirely joking.

I feel a nagging sense of unease as I head upstairs to my desk to check my messages and grade some papers. I’ve done my job, by my lights anyway: to foster a spirit of thoughtful inquiry in good democratic fashion. As such, I would like to be able to see myself as furthering a long and honorable living tradition. And yet I know not only that this isn’t happening very frequently in thousands of high schools around the country but that it can’t. For one thing, there’s a prescribed curriculum, and someone like George Washington Plunkitt is not likely to show up on it. Insofar as he might, it’s likely to be as a series of facts (file under: Tammany Hall; nineteenth-century urban politics). And even if the students were inclined to have a discussion of the kind I did, any number of forces would militate against it, ranging from administrative procedures, to student apathy, to district requirements to teach to the test. Far from an honest laborer tilling the fields of democracy, I am a cosseted servant of the ruling class inviting the children of an elite to do what our society doesn’t want or can’t afford to do for the majority of its children: invite them to think for themselves. I’m reminded of Marie Antoinette strolling around Le Petit Hameau, her peasant cottage/garden complex at Versailles, fancying herself a farmer.

But that’s not the worst of it. For it might be one thing if, in fact, I knew that the education these students were getting from me and others would allow them to at least preserve the values I’m modeling, if not explicitly espousing. Yet I’m haunted by a fear that insofar as I succeed, their educations will be worse than useless—that, like Marie Antoinette, they will be confronted with a world in which the grooves of their minds make them less, not more, able to adapt to a coming dispensation, one in which a sentimental attachment to democracy will not receive the lip service it does now. I worry, in other words, that they will be old before their time.

It’s at this point that I seek cover in a sense of humility. Don’t kid yourself, I think. They’re not empty vessels, and you’re not a fountain that fills them. These students—the smart ones, anyway—will discard what they need to. Education: a process of figuring out what doesn’t matter. You’re just a teacher.

George Washington Plunkitt is chuckling, softly. Who’s the honest one now, he asks with an almost mirthless smile. Did you remember to turn off the lights before you left the room?

 

This article originally appeared in issue 9.2 (January, 2009).


Jim Cullen, Common-place column editor and regular contributor, teaches at the Ethical Culture Fieldston School in New York, where he serves on the Board of Trustees. He is the author of the newly published Essaying the Past: How to Read, Write and Think about History and other books. This essay is adapted from a work-in-progress about a year in the life of the U.S. History Survey. Names have been changed to protect the identities of his students.




Sex and Public Memory of Founder Aaron Burr

Historian Nancy Isenberg has analyzed the sexualized politics of the early Republic that gave rise to Burr’s reputation as an immoral, sexually dissipated man. As Isenberg explains, Burr became the target of sexually charged attacks in the press for fifteen years beginning with his becoming a U.S. senator for New York in 1792. This depiction of him as sexually corrupt in his private life contrasted sharply with his early pedigree and public accomplishments. The grandson of famed New England minister Jonathan Edwards, Burr was born in 1756 in Newark, New Jersey. He entered Princeton at the age of thirteen, eventually becoming a successful lawyer. He served as a U.S. senator, as the third vice president of the United States, and as a major figure in the development of the political party system in the new nation. In 1800 Aaron Burr stood a “hair’s breadth” away from becoming the third president of the United States, tying Thomas Jefferson in the Electoral College and losing only when the House of Representatives broke the deadlock. He married the widow Theodosia Prevost in 1782. Together they had one child, Theodosia. Both his wife and daughter perished tragically and prematurely: his wife died from cancer in 1794, and his daughter was lost at sea in the winter of 1812. Her son (Burr’s only grandchild) had died at the age of ten that same year. In 1833, after almost four decades as a widower, he married the widow Eliza Jumel; they separated just four months later. He lived until 1836, dying at the age of eighty—on the very day that their divorce was finalized.

Public memory of Aaron Burr contains fascinating threads that defend his reputation by asserting that his inner self conformed to normative, idealized standards, and thus that he could not have been guilty of the charges of immorality that were leveled against him. There has never been a shortage of negative depictions of Burr, but it has become a nearly two-centuries-old cliché that he “has always been out of favor,” that he has only enjoyed the reputation of “outright villain” among the founders. By tracing defenses of his personal life from the nineteenth century to the recent past, this essay shows that sex has long been used to define the character of the American founders; arguably it continues to be used in this capacity as a window to the nation’s soul.

Two Burrs, Burr the traitor and Burr the rake, were often co-conspirators. In the preface to an 1847 novel titled Burton: Or, the Sieges, the incredibly prolific popular novelist Joseph Holt Ingraham illustrated how negative depictions of Burr explicitly connected his private character and his political person: “In the page of history from which this romance is taken, we see the young aid-de-camp exhibiting the trophies of his conquests, drawn from the wreck of innocence and beauty. If we turn to a later page, we shall see the betrayer of female confidence, by a natural and easy transition, become the betrayer of the trust reposed in him by his country, and ready to sacrifice her dearest interests on the altar of youthful vanity, ripened into hoary ambition.”

His earliest biographer, Matthew L. Davis, stated that he had possession of virtually all of Burr’s letters and that he had met with Burr (at the latter’s request) to discuss them as he worked on the memoirs. Burr’s letters, according to Davis, indicated “no very strict morality in some of his female correspondents.” Acting with the chivalry that his subject supposedly lacked, Davis separated out and destroyed such letters to protect the reputations and virtue, not of Burr, but of the young women and their families. He claimed that Burr would not let the letters be destroyed in his lifetime, but when Burr died Davis burned them all so that no one else could publish them. In the absence of such sources, biographers have largely had only the accusations to work with.

 

“Portrait of Aaron Burr,” engraved by J.A. O’Neill, after portrait by John Vanderlyn (1802). Courtesy of the Portrait Prints Collection, the American Antiquarian Society, Worcester, Massachusetts.

Davis was criticized by numerous biographers for largely depicting Burr as his political enemies had done. Burr’s second biographer, James Parton, set the tone for future defensive accounts. Parton was the most popular biographer of nineteenth-century America. He complained: “Mr. Matthew L. Davis, to whom Colonel Burr left his papers and correspondence, and the care of his fame, prefaces his work with a statement that has, for twenty years, closed the ears of his countrymen against every word that may have been uttered in Burr’s praise or vindication.”

Parton’s mid-nineteenth-century account defended Burr from a host of negative depictions, beginning with those that centered on his youth and reputation as a college lothario. “It has been said … that he was dissipated at college; but his dissipation could scarcely have been of an immoral nature.” Burr, he explained, was not given to the immoral activities typically associated with sexual dissipation, such as gambling, drinking, and general excess.

One such rumor was that during the Revolution, he seduced and abandoned a young woman named Margaret Moncrieffe. Parton’s biography dismissed the story, and additionally cast aspersions on her character. Parton described Moncrieffe as a girl of fourteen, “but a woman in development and appetite, witty, vivacious, piquant and beautiful.” He attempted to discredit her by portraying her as immoral, stating the account had been “published after she had been the mistress of half a dozen of the notables of London.” And he lamented Burr’s legacy: “the man has enough to answer for without having the ruin of this girl of fourteen laid to his charge.”

Later defenders would echo Parton’s response. An 1899 biography of Burr by Henry Childs Merwin explained: “It is evident that, whatever may have been Burr’s conduct toward Margaret Moncrieffe, the lady herself, the person chiefly concerned, had no complaint to make of it.” And Merwin yoked Burr’s sexual reputation to broader character traits. “Burr was all his life an excessively busy, hard-working man; he was abstemious as respects food and drink; he was refined and fastidious in all his tastes; he preserved his constitution almost unimpaired to a great age. It is nearly incredible that such a man could have been the unmitigated profligate described by Mr. Davis.”

Burr’s defenders also trained their sights on his marriage. Similar to popular depictions of Hamilton, Washington, and Jefferson, in the hands of his biographers Burr appears to have experienced the perfect marital union. (And similar to the cases of Hamilton, Washington, and Jefferson, we have little to no documentation to support the characterization of this very personal relationship.) Virtually all of his defenders emphasize the idealized romantic bond that he shared with his wife. Parton insisted: “To the last, she was a happy wife, and he an attentive, fond husband. I assert this positively. The contrary has been recently declared on many platforms; but I pronounce the assertion to be one of the thousand calumnies with which the memory of his singular, amiable, and faulty being has been assailed. … I repeat, therefore, that Mrs. Burr lived and died a satisfied, a confiding, a beloved, a trusted wife.”

Parton made it clear that Burr could have won the hand of any young “maiden” he desired. But that he “should have chosen to marry a widow ten years older than himself, with two rollicking boys (one of them eleven years old), with precarious health, and no great estate,” revealed much about his character. And, indeed, for Parton the marriage countered much that had been written about Burr. “Upon the theory that Burr was the artful devil he has been said to be, all whose ends and aims were his own advancement, no man can explain such a marriage.”

Parton emphasized that Burr was not guilty of marrying for money: “Before the Revolution he had refused, point-blank, to address a young lady of fortune, whom his uncle, Thaddeus Burr, incessantly urged upon his attention.” And he could have married others for personal gain: “During the Revolution he was on terms of intimacy with all the great families of the State—the Clintons, the Livingstons, the Schuylers, the Van Rensselaers, and the rest; alliance with either of whom gave a young man of only average abilities, immense advantages in a State which was, to a singular extent, under the dominion of great families.”

No, it would be made clear that Burr married not for power but instead for love. Parton explained, “no considerations of this kind could break the spell which drew him, with mysterious power, to the cottage at remote and rural Paramus,” where his future wife lived.

Parton wrote in a decade that saw the emergence of a dedicated women’s rights movement, and he portrayed Burr as an early feminist, a view that would later be more fully developed: “He thought highly of the minds of women; he prized their writings. The rational part of the opinions now advocated by the Woman’s Rights Conventions, were his opinions fifty years before those Conventions began their useful and needed work,” Parton claimed. (At the time of the publication of his biography of Burr, James Parton was married to Sara Payson Willis, who had gained fame under her pseudonym Fanny Fern as the author of the proto-feminist novel Ruth Hall.) Parton’s depiction of Burr’s wife as friend supported the claim that Burr had a deep respect for women. “The lady was not beautiful. Besides being past her prime, she was slightly disfigured by a scar on her forehead. It was the graceful and winning manners of Mrs. Prevost that first captivated the mind of Colonel Burr.”

Burr’s defenders have long recognized the need to defend his personal life as part of the defense of his political life. Virtually all have recognized the significant role that his personal reputation played in his public standing. Parton insisted: “Burr never compromised a woman’s name, nor spoke lightly of a woman’s virtue, nor boasted of, nor mentioned any favors he may have received from a woman.” Indeed, he exclaimed, “he was the man least capable of such unutterable meanness!” Although Burr has remained a lesser-known founder, and one with a tarnished reputation, his ample supply of defenders has long built on Parton’s well-constructed foundation, one that relied on positive portrayals of his sexuality to shore up his battered political self.

Many of his early twentieth-century biographers decried the fact that his personal life overshadowed his public accomplishments, and they continued to highlight his intimate life as one of virtue. The alleged falsity of the tale of the seduction and abandonment of Margaret Moncrieffe, along with the supposed lies behind a story of the intentional “ruin” of one Miss Bullock, was repeatedly invoked to defend his character. A 1925 biography by Samuel Wandell and Meade Minnigerode prematurely stated that the legend about Bullock had been “finally laid to rest” by the reference librarian at Princeton, who had “showed conclusively, from evidence furnished by the unfortunate lady’s family” that she had died “quite virtuously.” Nathan Schachner wrote in his biography of Burr a decade later: “Another legend is not so innocuous. It was the forerunner of a whole battalion of similar tales, all purporting to prove Aaron Burr a rake, a seducer, a scoundrel, a man without morals and without principles, wholly unfit to be invited into any decent man’s home. Though, on analysis, not one of these infamous stories has emerged intact.” He then described the “canard” of Burr seducing and abandoning a “young lady of Princeton” who later in “despair committed suicide.” The author explained that the girl had in fact died of a “tubercular condition” twenty years after Burr graduated from Princeton.

Some accounts defended Burr as having exposed Moncrieffe as a spy for the British. A 1903 historical novel—Blennerhassett, by Charles Felton Pidgin—depicted the Moncrieffe story as a later burden for Burr, despite his having been, in this telling, a great patriot. In this regard, rumors about Burr’s sexual history were criticized for overshadowing the truth of his virtue and for hiding what was his true patriotism. As the character of Burr explains in the novel: “‘I became convinced that she was conveying intelligence to the enemy and I wrote a letter to General Washington informing him of my suspicions. By his orders, she was at once sent out of the city. The chain of circumstances was followed up and it was discovered that the mayor of the city, who was a Tory, and Governor Tryon, the British commander, who made his headquarters on board the Duchess of Gordon, a British man-of-war lying below here in the river, were implicated in the plot.’” The man he explained this to asked: “‘And were you publicly thanked by the commander-in-chief?’” “‘Not by name,’ said Burr, somewhat abruptly, and he thought of the manner in which his name had been coupled with that of the young lady in question.” Here Burr was portrayed as the victim of his own patriotism. For this author, dismissing the Moncrieffe story not only cleared Burr’s name—it made it possible to depict the true Aaron Burr, a patriot and war hero.

Another early twentieth-century account, by Alfred Henry Lewis, romanticized the incident, notably including only a vague reference to the young woman’s age: “On that day when the farmers of Concord turn their rifles upon King George, there dwells in Elizabeth a certain English Major Moncrieffe. With him is his daughter, just ceasing to be a girl and beginning to be a woman. Peggy Moncrieffe is a beauty, and, to tell a whole truth, confident thereof to the verge of brazen… . Young Aaron, selfish, gallant, pleased with a pretty face as with a poem, becomes flatteringly attentive to pretty Peggy Moncrieffe. She, for her side, turns restless when he leaves her, to glow like the sun when he returns. She forgets the spinning wheel for his conversation. The two walk under the trees in the Battery, or, from the quiet steps of St. Paul’s, watch the evening sun go down beyond the Jersey hills.” This account styled Moncrieffe as hardly a victim, but rather as “brazen,” welcoming the advances of the dashing young soldier. The defense of Burr in the case of Moncrieffe would continue through the twentieth century. A mid-century account by Herbert Parmet and Marie Hecht dismissed the story directly, stating that the “lady’s own words contradict this assumption” and calling it a “very good example of the propensity of his chroniclers to link Burr’s name with women, particularly notorious ones.” Milton Lomask’s two-volume biography included the story of Miss Bullock as a “typical example of the many half-factual, half-fanciful tales that have attached themselves to the memory of Aaron Burr.” It continued by explaining that, “fed by Burr’s then growing reputation as a ladies’ man, this macabre tale persisted in the face of evidence, unearthed by a Princeton librarian, that Miss Bullock had died in the home of an aunt, ‘quite virtuously,’ of tuberculosis.”

 

“President’s Row, Princeton Cemetery,” with Aaron Burr’s name on tombstone in foreground. Detroit Publishing Company (c. 1903). Courtesy of the Library of Congress Prints and Photographs Division, Washington, D.C.

In a similarly defensive move, Burr’s marriage was idealized by his twentieth-century biographers, as it had been by Parton a century earlier. Henry Childs Merwin wrote in his 1899 biography of Burr that “his family life was ideal,” and Charles Burr Todd, writing three years later, stated: “I think it should be mentioned here—because the opposite has been stated—that the marriage was conducive of great happiness to both, and that Colonel Burr was to the end the most faithful and devoted of husbands.” Todd’s biography went on to quote a lengthy passage from the Leader, which included the following: “His married life with Mrs. Prevost … was of the most affectionate character, and his fidelity never questioned.” Virtually all of the accounts read in a similar manner. Consider, for example, the following: “This marriage certainly gives no color to the popular belief that Colonel Burr was a cold, selfish, unprincipled schemer, with an eye always open to the main chance.” Similarly, Wandell and Minnigerode defended Burr’s marriage thusly: “It was a love marriage, that of Aaron Burr and Theodosia Prevost,” and “admirable in the last degree.”

The depiction of his marriage as spotless provides a powerful counterweight to the blemishes that mar both his public and private reputations. Biographers implicitly and explicitly use the bond of husband and wife to discredit those who challenge his personal character in the area of romantic relations. One 1930s author noted: “Between Burr and his wife ardent love had deepened to an abiding trust.” This depiction only deepened in the twentieth century. In the early 1970s, Laurence Kunstler described the marriage as “twelve wonderful, happy, and triumphant years,” and Jonathan Daniels lauded the union as “a faithful love which only the most austere historians and venomous critics have questioned.” Samuel Engel Burr Jr.—the founder of the Aaron Burr Association, a professor of American studies, and a sixth-generation descendant of Burr—wrote several books in the 1960s and 1970s defending his ancestor’s reputation, and all bolstered his character by defending his marriage. In Colonel Aaron Burr, he depicted the marriage as a “happy experience for both of them.” And in a Mother’s Day lecture delivered to the New York Schoolmasters’ Club, he focused on the “influence of [Burr’s] wife and his daughter” on his “life and career” to underscore his domestic bond, in contrast to the view of him as a vile seducer of women. (Burr Jr. also argued that Madame Jumel divorced Aaron on trumped-up charges of adultery, noting that adultery was the “only legal grounds for divorce” at the time, thus trying to further wipe the slate clean.) Virtually all authors agree with Jonathan Daniels, who argued that “Nothing is more clear in the record than Burr’s tenderness and concern for his wife.” Still others, including Milton Lomask, contended: “To trace Aaron Burr’s life as a husband and father … is to glimpse the man at his best. Domesticity became him.”

Of particular importance to Burr’s defenders was his choice of spouse. Virtually all biographers note that Mrs. Prevost was no “beauty,” underscoring that there was no superficial attraction that drew Burr to her. In a typical example, Nathan Schachner described her as “not beautiful,” “pious,” “well read and cultured.” This view continued through the twentieth century. Burr could have married “into any of those powerful prosperous dynasties,” wrote Laurence Kunstler, emphasizing that he had instead married for love. Charles Burr Todd (a descendant of the Burr family) made a similar point in his 1902 biography: “He was young, handsome, well born, a rising man in his profession, and might no doubt have formed an alliance with any one of the wealthy and powerful families that lent lustre to the annals of their State. This would have been the course of a politician. But Burr, disdaining these advantages, married the widow of a British officer, the most unpopular thing in the then state of public feeling that a man could do, a lady without wealth, position, or beauty, and at least ten years his senior, simply because he loved her; and he loved her, it is well to note, because she had the truest heart, the ripest intellect, and the most winning and graceful manners of any woman he had ever met.” Late in life, Aaron Burr would marry a second time. But as if to underscore the significance of his first marital bond, his biographers scarcely dwell on this second union.

Virtually all twentieth-century accounts point out that in contrast to the politicized depiction of Aaron Burr as a man who seduced and abandoned women, Burr “showed an understanding of women.” Such authors typically concede that Burr had numerous affairs with women but insist that the affairs were not exploitative. As Jonathan Daniels wrote, perhaps over-descriptively: “There was never anything in his life, however, to suggest the bestiality and brutality in sex which his enemies imputed to him. Concupiscent, he may have been, cruel he never was.”

Burr’s most recent biographer, Nancy Isenberg, the only academic historian to take on that task, highlights his support for early feminism as evidenced by the fact that his marriage was “based on a very modern idea of friendship between the sexes.” Calling Burr a “feminist,” she argues that he was alone among the Founding Fathers in this regard: “No other founder even came close to thinking in these terms.”

Today, much as in his own lifetime, the debate rages about the salience of his personal life for understanding the “true” Burr. Some contend that the “true biography” of Burr “must be disentangled” “from … a mass of legend about his lapses with the ladies.” Others revel in those stories as a way to bring to life the Burr they think existed. The view of Burr as unique—for better or worse—is an old one. James Parton, writing in direct response to the early account of Matthew Davis, set the tone for a defense of Burr’s personal life that would last until the present day. Parton could not have been more assertive:

Aaron Burr, then, was a man of gallantry. He was not a debauchee; not a corrupter of virgin innocence; not a despoiler of honest households; not a betrayer of tender confidences. He was a man of gallantry. It is beyond question that, in the course of his long life, he had many intrigues with women, some of which (not many, there is good reason to believe) were carried to the point of criminality. The grosser forms of licentiousness he utterly abhorred; such as the seduction of innocence, the keeping of mistresses, the wallowing in the worse than beastliness of prostitution.

This kind of defense continued through the end of the nineteenth century, with biographers outlining their case against his detractors and making a strong case for examining the public and private life of a man who clearly had intimate relationships outside the context of marriage and who raised questions in many minds about his allegiance to the nation.

Twentieth-century biographers wrote of Burr as a victim on many scores: of politics in the early republic, of a back-stabbing first biographer, and of later portrayals, as “one of history’s greatest losers,” as Donald Barr Chidsey put it. The novel Blennerhassett began with a similar note of Burr’s exceptional status: “For a hundred years, one of the most remarkable of Americans has borne a weight of obloquy and calumny such as has been heaped upon no other man, and, unlike any other man, during his lifetime he never by voice or pen made answer to charges made against him, or presented either to friends or foes any argument or evidence to refute them.” Nathan Schachner, writing in the 1930s, similarly captured the view of many biographers who have chronicled Burr. He wrote: “Probably of no one else in American history are there more unsupported, and unsupportable, tales in circulation.” And he ended his biography with a similar refrain: “Who in history has survived a more venomous brood of decriers?”

Burr’s legacy dramatically illustrates the various ways that sexual reputation informs public masculine character. Despite the complaints of his biographers who positioned themselves as solitary champions of history’s greatest victim, a man repeatedly “misinterpreted” and “misjudged,” there has never been a shortage of Burr defenders, then or now, and virtually all of them use sex as one means of shoring up his public standing. Our enduring interest in connecting personal with public selves will almost certainly keep competing Burrs alive in popular memory—and will no doubt prevent Aaron Burr from ever being either completely “rescued” or finally banished from the pantheon of great American founders.

Further Reading

This essay comes out of my research for my most recent book, Sex and the Founding Fathers: The American Quest for a Relatable Past (Philadelphia, 2014), which examines the ways in which we have (or haven’t) talked about the sex lives of the founders. The current depiction of Burr as sexually and morally bankrupt was perhaps most popularly captured by the 1973 historical novel Burr by Gore Vidal, in which Burr is gossiped to be the “lover of his own daughter”—a fictionalized rumor created by Vidal. Burr has been the subject of more straightforward biographies since shortly after his death. The earliest is Matthew L. Davis, Memoirs of Aaron Burr with Miscellaneous Selections from his Correspondence (New York, 1836), while the most enthusiastically pro-Burr may be James Parton, Life and Times of Aaron Burr (New York, 1858). There have been numerous biographies since, including Herbert S. Parmet and Marie B. Hecht, Aaron Burr: Portrait of an Ambitious Man (New York, 1967); Donald Barr Chidsey, The Great Conspirator: Aaron Burr and His Strange Doings in the West (New York, 1967); Jonathan Daniels, Ordeal of Ambition: Jefferson, Hamilton, Burr (New York, 1970); Laurence Kunstler, The Unpredictable Mr. Aaron Burr (New York, 1974); and Milton Lomask’s two-volume Aaron Burr (New York, 1979-82). The best modern biography is by Nancy Isenberg, Fallen Founder: The Life of Aaron Burr (New York, 2007). Her essay “The ‘Little Emperor’: Aaron Burr, Dandyism, and the Sexual Politics of Treason,” in Jeffrey L. Pasley, Andrew W. Robertson, and David Waldstreicher, eds., Beyond the Founders: New Approaches to the Political History of the Early American Republic (Chapel Hill, N.C., 2004) is also extremely valuable.

Burr is notable among the founders for the extent to which his descendants have taken up his cause. Charles Burr Todd, a historian of the Burr family from Connecticut, wrote The True Aaron Burr: A Biographical Sketch, in 1902 (New York). Samuel Engle Burr Jr. not only founded the Aaron Burr Association; he also wrote books loyal to his ancestor’s memory, including Colonel Aaron Burr: The American Phoenix (New York, 1961) and The Influence of his Wife and his Daughter on the Life and Career of Col. Aaron Burr (Linden, Va., 1975).

 

This article originally appeared in issue 15.1 (Fall, 2014).


Thomas A. Foster is professor of history at DePaul University. He is the author and editor of six books, including Sex and the Founding Fathers: The American Quest for a Relatable Past (2014). Foster tweets at @ThomasAFoster.




The Kingness of Mad George

The roots of the current debate over presidential power

The recent conflict over President Bush’s domestic surveillance program reflects one of the oldest recurring divisions in American politics, dating all the way to the 1790s. Bush’s Democratic critics have taken a stance that traces back to the Jeffersonian (or Democratic) Republicans, arguing that the U.S. government is rather flexibly bound, but still bound, by the values and rules embedded in our founding documents and, as such, is a government whose power is essentially limited. The Bush administration and its modern (anti-Democratic) Republican defenders have staked out a position that traces back to Alexander Hamilton and the Federalists, reasoning from the inherent nature of government and the overwhelming fearsomeness of the challenges the United States faces that the powers of its government must be essentially unlimited. The GOP-Federalist position applies especially to times of foreign crisis, a state that the Federalists saw as virtually perpetual in the early Republic and that modern Republicans have likewise been warning about ever since the outbreak of the cold war in 1946.

This recurring argument has often turned on the question of whether the norms and procedures of democracy and republicanism are adequate to national survival in a dangerous world of terrorists, Commies, and Frenchmen. Federalists and modern Republicans alike have often indicated their belief, expressed with varying degrees of regret, that the methods of democratic, accountable, transparent government are not strong enough to meet these challenges. Jeffersonian Republicans and modern Democrats, in turn, have tended to respond that they are. The essence of the frequently heard rightist refrain that America cannot fight the evildoers of the moment with democracy tying its hands or with one arm tied behind its back (fill in your Goldwaterish/Cheneyesque metaphor) can be found in a recent Wall Street Journal op-ed column about the Pentagon paying Iraqi journalists for favorable coverage. If the U.S. military had elected to “play by Marquess of Queensberry rules,” argued the WSJ, we would have had to “wait decades” for some good Arab press, and we would have created “a heady propaganda win for the terrorist/insurgents, a prolonged conflict, and more unnecessary violence and death”—as opposed to the speedy triumph the writer apparently believes we are experiencing in Iraq right now.

The key difference in the recurring party debate is not so much the government’s or military’s mere use of extraconstitutional powers and undemocratic methods. Those things have happened under many presidents of most of the major U.S. parties, especially during the cold war. The key is the further act of justifying such powers and methods in principle. George W. Bush and Dick Cheney have repeatedly gone out of their way to do this, asserting and exercising an alleged independent presidential authority to do things (like eavesdropping on suspected terrorists) the government was able to do just as swiftly and effectively under existing legal procedures. (A secret court was created in the 1970s with no other purpose than legally authorizing government eavesdropping when national security requires it.) In other cases, they have ordered up briefs to self-legalize obviously unconstitutional powers to have people tortured and to hold American citizens without charge or trial.

A similar tactic was recently used against Senator John McCain’s anti-torture resolution, a measure that Bush vehemently opposed but finally signed just before New Year’s Day. With the president’s signature, the administration included a “signing statement” explaining that it reserved the right to torture whomever it pleased no matter what the resolution said.

The executive branch shall construe [the provision], relating to detainees, in a manner consistent with the constitutional authority of the President to supervise the unitary executive branch and as Commander in Chief and consistent with the constitutional limitations on the judicial power, which will assist in achieving the shared objective of the Congress and the President . . . of protecting the American people from further terrorist attacks.

The recipe for this little writ of mandamus is two parts pure executive prerogative and one part the ends justify the means. The statement invokes the president’s “constitutional authority” but employs a concept not found in the Constitution: the idea that the president has the apparently sole and absolute power to supervise a “unitary executive branch.” Advise and consent this, Congress. The only constitutional limitation mentioned is on the judicial branch and any effort it might make to hold the “unitary executive” to any procedural standards when it decides to detain people. Capping things off we have a statement implying that any action the administration deems handy in the “shared objective” of “protecting the American people” is automatically legal and constitutional.

In the same vein, President Bush’s December radio address assured listeners that the National Security Agency’s warrantless domestic-spying program was “fully consistent with my constitutional responsibilities and authorities.” Not legally authorized by Congress, but “consistent” with the general ends of the president’s duties. Bush could not even cite which constitutional duties he might mean because those would actually be quite hard to find in the Constitution. Commander in chief of the armed forces is one thing, but Bush and Cheney clearly have some broader and frankly more king-like role in mind, something along the lines of the monarchical title that John Adams thought presidents should bear: “His Highness the President of the United States and Protector of the Rights of the Same.” Karl Rove might want to add the British monarchs’ tag, “Defender of the Faith,” for the religious Right’s benefit. Even closer to what Bush and Cheney seem to intend would be the title that Richard III used before he finally dealt with those pesky little congressmen, I mean princes, in the Tower: “Lord Protector of the Realm.”

Hamilton, Lincoln, and the Inherent-Powers Tradition

President Bush’s admirers will doubtless be heartened by the knowledge that he shares some aspects of this governing philosophy with the newly re-burnished “Business Class Hero” of the founding era, Alexander Hamilton. Confronted by Thomas Jefferson and James Madison with the fairly credible argument that the brand-new Constitution did not provide the government with the power to create his proposed national bank, Hamilton appealed by referring, not simply to the text of the Constitution itself, but more importantly to the “general principle . . . . inherent in the very definition of government.” The principle was “That every power vested in a government is in its nature sovereign, and includes, by force of the term, a right to employ all the means requisite and fairly applicable to the ends of such power.” While Hamilton recognized (unlike Bush) that a constitutional government could not legally engage in actions that its constitution specifically prohibited, his “definition of government” was in fact far older than the United States and its founding documents, and in truth it was not terribly respectful to those documents. Hamilton derided Jefferson and Madison’s arguments that the text of the Constitution might truly limit the government’s “sovereign power, as to its declared purposes and trusts,” writing that they presented “the singular spectacle of a political society without sovereignty, or of a people governed without government.” It barely dawned on Hamilton that such a spectacle, of a people governed without a traditional European form of government, was exactly what many Americans thought their revolution had sought.

 

Fig. 1

Abraham Lincoln fell back on a similarly ante-constitutional notion of the inherent powers of government in justifying his decision to restore the Union by force. As explained in his first inaugural address, Lincoln held “in contemplation of universal law” that “the Union of these States is perpetual.” Like Bush and Hamilton, Lincoln invoked the Constitution but based his position largely on concepts not mentioned in it. “Perpetuity is implied, if not expressed, in the fundamental law of all national governments. It is safe to assert that no government proper ever had a provision in its organic law for its own termination.” It may not have been as safe to assert this as Lincoln hoped because for many Americans, and not only the defenders of slavery, the U.S. experiment in liberal government had relatively little in common with the fundamental law of all other national governments. They did not see the United States as a “government proper” if that meant it existed in unconditional perpetuity, with the people losing forever the Lockean right of revolution described in the Declaration of Independence.

The Hamilton/Lincoln idea of the “definition of government” or “government proper” amounts, in the final exigency, to the very old and widely embraced idea of government as rulership, the repository of sovereign authority that has no superior within its ambit and cannot be lawfully overruled. Though not necessarily absolute or completely insulated from popular influence, this sort of government derives its authority from some transcendent and irresistible source, a divine source for most of the monarchs who practiced it and a natural source—the nature of government and the practical requirements of nation-building—for Hamilton and other American advocates of inherent powers.

The logic behind this view can seem beguilingly simple and practical. Government is coterminous with the community and the guarantor of its structure, values, and very existence—matters too basic to be left to the whims of political give-and-take. Government is charged with the fundamental tasks of preserving the community from internal disorder, external conquest, and other forces that threaten to destroy it. Burdened with such awesome responsibilities, it needs powers to match, powers limited only by what its subjects will accept as legitimate through their mere acquiescence.

Defenders of the inherent-powers position frequently and significantly direct attention to the necessity or desirability of the ends they seek to achieve: fighting the terrorists or Communists or (in Hamilton’s case) achieving national greatness and economic growth. While such goals may be worthy enough on their own, the move of loudly proclaiming their transcendent worthiness is a political tactic rather than a constitutional or substantive argument; its real function is to embarrass and silence critics by calling their patriotism or morals into question. At the same time, the tactic expresses a basic tenet of old-school governance, which is that law, procedure, and constitutionalism are minor matters as long as what Hamilton called “the essential ends of political society”—security and prosperity and whatever other states of being a community wants for itself—are being met. State this as a folksy modern politician might, say as “getting the job done,” and it sounds like practical good sense. State it a bit more clearly, and it makes a mockery of the very idea of limited, transparent, and democratic government by dismissing it as so much “red tape.”

Angels in the Form of George W. Bush?

As Reinhard Bendix points out in Kings or People (1978), one of the very first scholarly books I can remember buying, the old-school view of government as a mandate to rule, constrained only by such compromises as were necessary to allow the mandate’s continued existence, is one that any medieval king, pope, god-emperor, or caliph would have found perfectly familiar. A ruler had to do what a ruler had to do. And you knew his actions were legitimate if he got away with them and succeeded in his goals. 

While it has monarchical origins, the reliance on inherent powers does not by itself render a government monarchical. Early American nationalists like Hamilton, Lincoln, and Daniel Webster had made the modernizing transition that Bendix describes from God to “the people” or “the nation” as the inviolate source of governmental authority. By contrast, Bush and Cheney clearly hearken back to the older monarchical model in which everything rests with the supreme ruler and his supreme duties. The key difference lies in their approach toward law. While different societies tend to have very different legal traditions, in the crudest sense we may say that kings got to be kings by establishing themselves as the sole legitimate source of secular law within their realms. American government has long been celebrated as one of “laws, not men,” where law is created by following certain publicly known and set procedures and, in the process, obtains some form of popular consent.

This is where Bush and Cheney’s views and actions seem quite breathtakingly dangerous. There have likely been absolute monarchs whose lawmaking was more procedurally constrained than that of the present administration. “We have a system of law,” Senator Russ Feingold said of the NSA spying program. “He just can’t make up the law . . . It would turn George Bush not into President George Bush, but King George Bush.” While I hope and believe that George W. Bush has no intention of crowning himself, his mentor Cheney has been seeking “unimpaired” presidential power ever since his days as a junior aide in the Ford White House. Why should his president/boy-prince be forced to endure the insolent effrontery of pesky reporters and congressional investigating committees? For Cheney, the Imperial Presidency is a matter of personal and ideological conviction.

Despite my obvious preferences in present politics, the underlying philosophical question here is still an open one for me. All governments probably do have inherent powers they will have to exercise in times of crisis. Lincoln certainly faced one and probably made the most courageous and far-sighted choice. Yet we should be clear that we are doing just that—making a choice—when we endorse government action based on such thinking. Governing on the basis of inherent powers rather than clear legal-constitutional authority is a distinctly undemocratic, illiberal, and un-American approach to governance. As Lincoln recognized, it should be used sparingly and only when absolutely and indispensably necessary.

The problem comes when leaders manipulate the public sense of crisis to make extraconstitutional powers and presidential monarchy thinkable. The modern American Right has a long record of promoting phony or highly exaggerated crises for political effect, often as a way to attack aspects of democracy, especially the economic, cultural, and intellectual expressions of it that conservatives so dislike. Extensive freedom of expression, strict protections for the rights of the accused, and other civil liberties have never been popular with the dominant elements of the American Right, and strangely enough, the present crisis—whatever it is—always seems to demand that civil liberties be curtailed in some way. The 9/11 terrorist attacks only provided a more easily salable version of the ongoing crisis that the Right has been ringing alarm bells over for the past sixty years or more. The sudden salience of Islamic terrorism as an issue allowed Republicans to revive many of their old cold war themes and policies and provided the opportunity to apply them in Iraq.

There is pretty overwhelming evidence that the intelligence failures regarding al Qaeda and Iraq had more to do with incompetence and ideologically driven inattention and misperception—useful information had been gathered but was not acted on or reported correctly—than a lack of “tools” such as legalized torture and illegal mass eavesdropping. Given that situation, I will let Thomas Jefferson’s first inaugural address give the last word, for now, on my behalf.

I know, indeed, that some honest men fear that a republican government can not be strong, that this Government is not strong enough; but would the honest patriot, in the full tide of successful experiment, abandon a government which has so far kept us free and firm on the theoretic and visionary fear that this Government, the world’s best hope, may by possibility want energy to preserve itself? I trust not. I believe this, on the contrary, the strongest Government on earth. I believe it the only one where every man, at the call of the law, would fly to the standard of the law, and would meet invasions of the public order as his own personal concern. Sometimes it is said that man can not be trusted with the government of himself. Can he, then, be trusted with the government of others? Or have we found angels in the forms of kings to govern him? Let history answer this question.

Further Reading:

The sources for all the quotations above are linked at the point where a particular document or news item is first introduced.

The Bush administration’s working theory of the executive’s nearly absolute powers in matters relating to national security and foreign policy has been given its most developed form by University of California, Berkeley, law professor John Yoo (a former Department of Justice official) in his book The Powers of War and Peace: The Constitution and Foreign Affairs after 9/11 (Chicago, 2005). Simply put, the Constitution does not seem to have much to do with it, except through the most, er, tortured constructions imaginable. Yoo can be heard defending the presidential power to do just about anything here. (Link via Information Clearinghouse.)

For my money, the most incisive recent commentary on the president’s role in our current system is a chapter in Jon Stewart’s America (the Book): “The President: King of Democracy.”

I don’t claim great expertise on the history of kingship or its theoretical basis, but the remarks above are influenced by Martin Van Creveld, The Rise and Decline of the State (Cambridge, 1999); Richard L. Bushman, King and People in Provincial Massachusetts (Chapel Hill, 1992); Robert Filmer, Patriarcha and Other Writings, ed. Johann P. Somerville (Cambridge, 1991); the first part of Gordon S. Wood, The Radicalism of the American Revolution (New York, 1992); and especially Reinhard Bendix, Kings or People: Power and the Mandate to Rule (Berkeley, 1978). History Book Club dealt in some weighty tomes back in those days. Van Creveld, a military historian based in Israel, recently had some choice words on the Bush administration and the Iraq War in the Forward.

In expectation of the hate mail I will soon be receiving from Alexander Hamilton’s many fans, let me urge any present-day liberals tempted to imagine Hamilton and the Federalists as their guys in the 1790s—I know a lot of historians who incline this way—to first read Mike Wallace’s review essay “Business-Class Hero,” about the New-York Historical Society’s Hamilton exhibit. That said, Max Edling’s recent book, A Revolution in Favor of Government: Origins of the U.S. Constitution and the Making of the American State (New York, 2003), convinced me that Hamilton was a more measured statist than I once believed. A somewhat overdrawn reminder that the early presidents were no strangers to the perennial presidential yen for secrecy and covert action is Stephen F. Knott, Secret and Sanctioned: Covert Operations and the American Presidency (New York, 1996).

While Hamilton and the Federalists strike me as far more respectful of the law than the present administration, one thing that Bush and Cheney still seem to have in common with the Federalists is a largely imaginary sense of social superiority to the rabble engaged in democratic politics. This week’s Time magazine contains a remarkable quotation in which the White House uses frank social prejudice as a way of distancing themselves from disgraced House Majority Leader Tom Delay: “Of the former exterminator, a Republican close to the President’s inner circle says, ‘They have always seen him as beneath them, more blue collar. He’s seen as a useful servant, not someone you would want to vacation with.’”

I imagine this piece will have many detractors, and I hope they and any supporters will take advantage of the Common-place Coffeeshop in making their views known. Future plans call for a blog-like discussion space that will be more directly linked to this column.

 

This article originally appeared in issue 6.2 (January, 2006).


Jeffrey L. Pasley, a former journalist and speechwriter, is associate professor of history at the University of Missouri, Columbia. He is the author of “The Tyranny of Printers”: Newspaper Politics in the Early American Republic (Charlottesville, 2001) and the co-editor (with Andrew Robertson and David Waldstreicher) of Beyond the Founders: New Approaches to the Political History of the Early American Republic (Chapel Hill, 2004).




The Clinton Impeachment: Dr. Clio Goes to Washington

On the weekend after the midterm congressional elections of 1998, I flew east to appear with a score of constitutional scholars before the Judiciary Committee of the House of Representatives. The subject of the hearing was the background and history of impeachment, and the occasion was the impending impeachment of President William Jefferson Clinton. Prior to the hearing, Forrest McDonald and I spent a pleasant hour discussing the subject on C-SPAN’s morning Washington Journal program. The highlight of that hour came when Brian Lamb replayed an excerpt from an interview he had conducted with Forrest back in the mid-1980s. When Lamb asked what one would see if one ventured out to the McDonald farm outside Tuscaloosa to catch the historian in the act of writing (in flagrante delicto, as it were), Forrest’s eyes darted to the side, a smirk briefly crossed his face, and then came the disarming confession that he writes in the nude–at least in the summer. From the studios opposite Union Station, we grabbed a cab to the House office building on the far side of the Capitol, and checked in with the committee staff; then we went to the main hearing room and took our places.

Thinking of that moment ever since has reminded me of the scene in Larry McMurtry’s Lonesome Dove where Gus has to hang Jake Spoon, his Texas Ranger buddy gone bad, for throwing in his lot with the evil Suggs brothers. Gus says something like, “I’m sorry you crossed the line, Jake,” and Jake, distracted by the noose, replies something like, “I never seen no line to cross.” Walking into the committee room, I felt I had crossed a line as well. Testifying as an expert, and effectively taking sides in a highly charged political dispute, is not a role that historians assume readily, nor is it an opportunity that comes our way with any frequency. Forrest professed to be testifying only as an impartial scholar, but I, for one, wasn’t buying his line, nor was I so naive about my own sentiments as to claim to act in the same capacity. I am a native Cook County Democrat, with family ties to the old machine of the elder Richard J. Daley, and proud of it, and I thought then, as I do now, that Hillary Rodham Clinton (coincidentally the mother of one of my better-known students, though we had not yet been introduced) was close to the mark in her famous remark blaming a “vast right-wing conspiracy” for the impeachment.

 

Fig. 1. Alexander Hamilton, The Federalist, 1788, The Gilder Lehrman Collection, courtesy of the Gilder Lehrman Institute of American History, New York.

For many historians, becoming professionally involved in a partisan conflict (such as these hearings) or in litigation (which I have also done) risks crossing the line between scholar and advocate. Professional historians should have no problem in admitting ambiguity or uncertainty in our findings, but political and legal disputes leave little room for scholarly hemming and hawing. Of the nineteen “experts” testifying on November 9, only one, Michael Gerhardt of William and Mary, appeared in a neutral capacity. The others had been summoned by one party or another. My own invitation came through the assistance of my Stanford colleague, Deborah Rhode, then serving as a staff attorney for the committee’s Democratic minority.

For my part, I have to confess that I was not uncomfortable in this role. For one thing, I had already begun writing op-ed essays about the constitutional issues raised by impeachment, and had formed a position strongly critical of the theory upon which it was proceeding. For another, I felt, with characteristic immodesty, that my work on the origins of the Constitution offered a perspective on the Impeachment Clauses that only a handful of scholars were qualified to present. Legal scholars aplenty would be testifying, but they are used to adversarial argument, and cavalierly happy to deploy whatever materials serve the cause they favor without the historian’s due regard for the limits and ambiguities of the evidence. I had spent more than a decade developing a model or method for conducting inquiries into the original meaning of the Constitution, with due respect for the rules of using historical evidence, and this was too good an opportunity to pass up. Moreover, I felt then, and still believe now, that historians have a civic obligation to bring their knowledge to bear, even if it involves taking sides in a partisan dispute. Obviously it would be better to do so in a more balanced, less partisan forum. But if that is all that is available, why should we forego the opportunity, challenge, and obligation? The test has to be whether what one is prepared to argue in this public role is consistent with what one has written as a scholar. On this count, I had no qualms about my ability to present an originalist argument against the legitimacy of Clinton’s impeachment that would fully comport with the discussion of the presidency in my book, Original Meanings: Politics and Ideas in the Making of the Constitution (New York, 1996).

The hearings got off to a curious start. Although most of the full committee attended most of the day, the hearings were held under the auspices of the subcommittee on the Constitution, chaired by my fellow Haverford College alumnus, Charles Canady. I was naive enough to suppose that Canady might begin by thanking the witnesses for taking time out from their schedules to help enlighten the members on a truly difficult subject. Instead, his opening remarks seemed to amount to saying, we have a rope, yonder is the tree, and all we need is to catch the evildoer and string him up.

An even more curious interlude followed. Television monitors were turned on, and the members of the committee sat raptly watching a ten-minute video consisting primarily of earnest statements about the gravity of impeachment culled from the Watergate proceedings of 1974. This struck me as a strange way to get the members in the mood, but who was the historian to judge? Having been assigned to the afternoon panel, I sat back and prepared to watch the proceedings unfold.

One lesson became evident fairly quickly. The Judiciary Committee’s reputation as the most partisan and ideologically polarized committee on the Hill was well deserved. The tone of the questioning was sometimes amiable, especially when members were questioning friendly witnesses. But from Canady’s opening remarks on, it was difficult to ignore the intensely partisan character of the proceedings, or to resist the conclusion that the hearings were basically a sham because the members already knew (barring some unforeseen political contingency) how they would vote.

As a close student of the political debates of the Revolutionary era, I have always assumed that legislative debate must matter at some level–that there must be a point to deliberations. Yet it was sobering to observe and participate in a discussion where all the positions to be taken are preordained, where the rhetorical moves each side can make are sharply constrained by the structure of the dispute and the limits of the available evidence, where debate as such can therefore have little, if any, impact. Although “pre-commitments” were not possible for many of the issues that the revolutionaries faced in the 1770s and 1780s, the hearings were a useful reminder of the lesson that historians ignore context and circumstance only at their peril.

The second great lesson I took away from the impeachment proceedings came when I had to ask myself what, if anything, I had been able to add to the discussion–as nondeliberative as it turned out to be.

Here I have to begin by describing my substantive position on the merits of impeachment. In Original Meanings, I had argued that the establishment of the presidency proved to be the single most difficult and puzzling problem in institutional design that the Framers of the Constitution had confronted. There were no useful antecedents for the national republican executive the Framers contemplated and there were numerous perplexing uncertainties about the proper mode of election and the political dimensions of executive power. These considerations help to explain why the Framers literally looped around in their attempts to decide such interlocking questions as the mode of election, eligibility for re-election, length of term, and method of removal–including, of course, impeachment. The key decisions on the presidency emerged only during the final fortnight of debate, and even then, the key initiatives came out of the so-called Committee on Postponed Parts.

If any one factor best explained the eventual design of the presidency, I argued, it was the Framers’ desire to make the executive as politically independent of Congress as possible, while allowing Congress (or more specifically the House of Representatives) the residual right to elect the president should the electoral college fail to produce a majority. My testimony to the committee argued that that same principle should be applied to the interpretation of the Impeachment Clause.

In the case of President Clinton, the key problem was to determine whether the phrase “other high crimes and misdemeanors” should be read narrowly or expansively. A narrow reading would limit impeachment to offenses that amounted to a clear abuse of the public trust in the performance of official duties. A broad reading would leave much more to the discretion of Congress, and arguably embrace the kinds of nonofficial failings for which Clinton stood exposed. If one wanted to reason as an originalist, a narrow reading would be consistent with the idea that the Framers worried about leaving the president vulnerable to congressional pressure and manipulation–which is what the records of debate seemed to me to suggest. A broad reading of “high crimes and misdemeanors” carried the opposite implication–that the Framers wanted to make the president politically subservient to Congress–and that seemed incompatible with the evolution of the presidency through the course of the Federal Convention.

By the time my turn came to testify, the atmosphere in the committee room had eased considerably. The mood during the morning session had seemed quite charged, especially when Republican members took issue with Arthur Schlesinger Jr. for asserting that gentlemen always lied about sex. But by late afternoon, some of the committee members had absented themselves, and our circadian rhythms kicked in.

Reading my testimony and trying to gauge what sense the committee members could possibly make of it led to another insight. As much as members of Congress like to praise the Founders and cite useful passages from The Federalist to demonstrate their own learning, their sense of history is both underdeveloped, on the one hand, and distorted by their romantic attachments to the founding era, on the other. All of them, I am convinced, feel a deep kinship and sense of gratitude to the founding generation, for establishing the institutions and offices they now love to inhabit. All of them know how to wallow in the conventional trappings and expressions of American patriotism.

But few of them know much about how the Constitution was drafted, or have a good grasp of the political disputes and conceptual uncertainties of the Revolutionary era. They are much more inclined to think of the Framers imparting their collective wisdom to posterity than to realize that the decisions of 1787 were reached by processes not dissimilar to the ones in which they ordinarily engage. To offer, therefore, an account of the Impeachment Clause which emphasized George Mason’s idiosyncratic role in the debates, or the difficulty of defining “high crimes and misdemeanors,” or the deep uncertainty that clouded the entire discussion of the executive branch at Philadelphia in 1787, was (I sensed) to provide an unsettling lesson that they could neither assimilate nor easily apply. I believed that immersion in the historical evidence was essential to understanding the Impeachment Clause, but as I watched the bemused (or confused) look on Chairman Henry Hyde’s face, it occurred to me that I could just as well have been speaking Greek. My account of Mason’s and Madison’s respective concerns could hardly compete with his appeals to Thomas à Becket or the Normandy war dead.

But why should history matter at all to the members of the Judiciary Committee? With the sole exception of Mary Bono, whose presence on the committee was a bit of a mystery, they were all attorneys, and therein lay a clue to the workings and mentality of the committee. They might not know much about history, but they knew a lot about the workings of the legal system, the importance of witnesses telling the truth, and the likelihood that witnesses often try to shade their testimony in just the self-serving and indeed duplicitous way in which Clinton had his. Their own substantial legal experience, in other words, readily shaped the way in which they thought about impeachment; even the most compelling account of the true historical origins and ambiguities of the relevant constitutional language could only be a quaint distraction.

So the unpleasant and messy truth is that the sense of nuance that historians bring to their work cannot be readily translated into the political sphere, especially in a controversy as bitterly partisan as the impeachment imbroglio. Does that mean that historians should refrain from engaging in such controversies, in part because they have little chance to influence them, and in part because they risk compromising their objectivity? I have already been attacked twice by the distinguished jurist and overly prolific legal writer, Richard Posner, for having signed the historians’ “October surprise” advertisement challenging the House impeachment hearings before the 1998 congressional elections and otherwise participating in an avowedly political debate under the bare pretense, as he sees it, of being scholarly. I take comfort, however, from the belief that everything I wrote during the impeachment mess was consistent with my prior scholarly writings. I still believe that historians have an obligation to inform public discussions as best we can, when the opportunity arises. And as citizens, we have the same rights to exercise as anyone else. But as historians we should also understand why our contributions, which rely on the nuanced feel for the past that we have to develop to ply our trade, are likely to have little effect. I accordingly no longer believe, as I did then, that if I could just be given forty-five minutes of prime time to present the equivalent of an undergraduate lecture on the origins of the Impeachment Clause, the country could have been spared the year wasted on the whole sordid affair.

Since September 11, 2001, I have entertained one further reflection about the impeachment imbroglio. At the time, nothing was more common than to hear opinions expressed on either side as to how history would judge either the president’s behavior or the passion with which his detractors hounded him even after they knew that he would remain in office until January 20, 2001. We now know, I think, what the truer judgment of history will really be. While politics dictated that the national government be paralyzed for a year with partisan foolishness, our enemies elsewhere were making other plans for us–plans that we perhaps could have been better prepared to confront. But of course Monica was more important.

 

This article originally appeared in issue 2.4 (July, 2002).


Jack Rakove is the Coe Professor of History and American Studies and professor of political science at Stanford University.




The Clinton Impeachment: Clinton Hating

As the hot glow of 1998-99’s impeachment crisis fades, and the Clinton presidency recedes into the past, we now know far more than we could have wanted to know about the former president’s personal life. We have also learned much that we should have known earlier about the right-wing agitators and propagandists who discovered, publicized, fomented, and sometimes simply manufactured scandalous accusations against him. Yet with all the ink spilt, strikingly little attention has been paid to the nature of the political passions underlying the crisis–the outsized and persistent contempt and resentment that the president himself inspired among a vocal minority of the American electorate.

Why did so many conservatives see the president not simply as a detested opponent but as a cheater, a deceiver, a beguiler, and a rogue? Why did many left-liberals regard him as a self-serving betrayer of their principles? And, perhaps most perplexingly, why did so many members of the cosmopolitan middle, what we might call the supercilious center–people who actually come very close to sharing the former president’s politics–hold him in such disdain? It won’t do simply to say that the accusations are true and thus the opprobrium justified; for one must then contend with the fact that the man was not only twice elected president, but maintained historically high levels of public approval through most of his presidency. Clinton hating was more than ordinary disaffection; it was aggravated and embittered, a phenomenon as much personal as political, and one that simply confounds conventional political analysis.

 

Fig. 1. First printing of the second draft of the Constitution from the Committee of Style. September 12, 1787. The Gilder Lehrman Collection, courtesy of the Gilder Lehrman Institute of American History, New York.

So how are this phenomenon and the impeachment that was its logical culmination to be understood in the context of the American constitutional order?

While the United States Constitution is a table of rules and procedures for organizing and running the national government, it was also devised–perhaps principally devised–as a structure to channel and break the tides of passion and political enthusiasm that are common to, and recurrently threaten, the existence of popular government. Impeachment had a narrow constitutional focus in the sense that the trial and attempted removal of the president followed the prescribed constitutional procedures. But it is perhaps more fruitfully understood as the culmination of a process that has several times recurred in American political history and is in some sense intrinsic to the American constitutional order: periods of turbulent political transition wherein the Constitution’s separation of powers prevents the resolution of basic political questions for an extended period of time. Parliamentary systems avoid this problem, providing the possibility of unified control of the levers of legislative and executive power even when substantial divisions in the electorate remain. But the separation of powers at the heart of the American governmental structure–along with the additional divided authorities created by federalism–creates too many redoubts and recesses of authority where committed oppositions can retrench, regroup, and stymie majorities.

The pattern of a two-term president who is widely popular but also deeply reviled in a period of rapid political, economic, and social change is not unprecedented in our history. In their own times, Franklin Roosevelt (1933-45) and Andrew Jackson (1829-37) engendered similar political polarization, with embitterment and contempt on the one hand, and a deep, intuitive identification with a broad mass of the population on the other. (The only other presidential impeachment, that of Andrew Johnson, originated similarly in a disjunction between the forces controlling the executive and legislative branches.) Franklin Roosevelt’s enemies vilified him as “that man”–a demagogue and class traitor who had seduced voters through a kind of illicit, hypnotic mass spell. Jackson was, in his own time, similarly reviled. Part aristocrat and part rough-hewn soldier, Jackson represented a new kind of politics and a new conception of the presidency. He too had a deep, intuitive connection with the American people that terrified his enemies and convinced them that he was a demagogue who threatened the very institutions of American government.

Both men’s presidencies had a transformative character. Each, individually, had a unique ability to connect and communicate with ordinary citizens, an ability that their enemies saw as phony, perverse, opportunistic, and ultimately dangerous. In each case the president’s adversaries’ attacks upon him only deepened and intensified the support of his supporters, in a circular and mutually reinforcing fashion. The antagonism over the man echoed deeper cultural and political rifts that remained inchoate, latent, or simply unspoken. The impeachment crisis of 1998 and ’99 had similar origins in unresolved political stalemate and the unrelieved passions and antagonisms this generated.

Over the years observers have posited a number of possible explanations for the enmity that grew up around the forty-second president. Early in his presidency the disaffection was often chalked up to generational transition: Clinton was the first president since John Kennedy to be well under fifty years of age; he was also the first president to have been fully washed over, and in many ways compromised, by the upheavals and experimentation of the 1960s. His very person, in this reading, became a battleground for a newly intensified version of the culture war that had been playing itself out in one form or another since the late 1960s. Yet another theory sees Clinton hating as rooted in a sort of baby-boomer self-loathing, a contempt for the generation’s inability to reconcile its youthful indulgence with its middle-aged hypocrisy. Each of these explanations is partly true. But none is quite satisfactory.

To get a better purchase on these questions, let’s first distinguish between at least three kinds of Clinton hating: conservative, left-liberal, and cosmopolitan–varieties that share common roots and predilections but remain nevertheless distinct.

The rhetoric of conservative Clinton hating is immediately familiar. Clinton is a liar, a phony, an immoral man, a deceiver. He can’t be trusted. He has “stolen” their issues. The feelings became more tortured and embittered because again and again Clinton won when he shouldn’t have been able to win.

Conservative Clinton hating echoes the McCarthyism of the 1950s, only not necessarily in the sense some of his supporters have argued. The subtlest historical interpretations of McCarthyism describe the movement as a product of two quite distinct forces–one crassly political and opportunistic, another deeply rooted in the insecurities of the early Cold War. In 1946 the Republicans won back the Congress for the first time in fourteen years, only to lose it again two years later, and be defeated in a presidential election they seemed certain to win. From what seemed like an expected restoration after Franklin Roosevelt’s death, the GOP now faced a fifth straight presidential loss and what seemed like it might be a near permanent exclusion from power in the national government.

This reverse made Republicans resentful; it also made them feel cheated. And they retaliated with an attitude that held no tactic or charge as beyond the pale. As Robert Taft, the respected Republican Senate leader, famously told McCarthy early in his crusade, “[K]eep talking, and if one case doesn’t work–proceed with another.” But partisan warfare was only half the story. It was a necessary, but not a sufficient, cause for what happened in the early 1950s. Only in a climate of deep-seated political uncertainty and fear could such concerted political attacks have had the truly explosive results they did. The early 1990s were not the early 1950s, of course, but in many respects the times were equally unsettled. The end of the Cold War, though immeasurably more benign than its onset, nevertheless created a similar disequilibrium in the nation’s politics, shaking free a swirling hatred of government and a search for internal enemies that had not been seen in so virulent a form since the McCarthy era. Journalists have described the partisan campaigns–open and covert–against Clinton, but why these efforts struck such a profound chord among a minority of the population still needs to be explained.

One clear reason for the outsized opposition to Clinton was how thoroughly his election–and even more his subsequent success–scotched the paradigm of historical and ideological transformation Republicans had been crafting for themselves during their twelve-year hold over the executive branch from 1980 to 1992. For partisan Republicans these three successive presidential victories were not simply the result of favorable times or quality candidates–for many Republicans, in fact, quite the opposite in the case of the first President Bush. They were the result of an epochal shift in the ideological complexion of the American electorate–a wholesale shift away from liberalism and the New Deal. Clinton’s election in 1992 might have been either an accident or simply a time-out in the Republican hegemony–à la Jimmy Carter. But his eventual success created a dissonance among partisan Republicans that was in its own way as galling as Truman’s unexpected victory in 1948, which had seemed to doom them to permanent executive-branch oblivion.

Much less visible to the general public was the equally charged antipathy toward the president among many liberals. The left-liberal Clinton hater found the president phony and inauthentic, willing to sacrifice any principle or precept not simply for expedience but for self-interest. At the same time, however (and in a partly contradictory fashion), these Clinton haters saw the president as providing Democratic cover for a complete surrender to Reaganism, with balanced budgets, welfare reform, and tax cuts. Like conservative Clinton haters, they despised him because he was something their map of the world didn’t account for: a Democrat who played to win, a Democrat who wasn’t afraid to play political hardball, cut necessary deals, or generally get his hands dirty in the inevitable back and forth of political warfare. Other similarities exist. Part of the depth of disaffection with Clinton among many left-liberals was that he succeeded when he should not have been able to succeed. In many cases he accomplished goals these critics had long espoused, by means that shouldn’t have worked. And perhaps most galling, Clinton was able to gain the support of constituencies left-liberals had long considered very much their own (women and African Americans particularly), even while eschewing their policies.

The third group, the cosmopolitan Clinton haters, is the most paradoxical, because their displeasure was not obviously rooted in specific ideological disagreement. For many, in fact, the level of disgust and disdain for the president appeared to be inversely related to ideological proximity. Political commentators and prominent press figures Howell Raines, Michael Kelly, Maureen Dowd, Joe Klein, Christopher Matthews, and most of the rest of Clinton’s most vituperative elite media critics were centrists of a vaguely liberal hue. This group includes much of establishment Washington, but extends a good deal further, taking in an important slice of society up and down the Northeast corridor. With this group the element of class condescension and resentment ran most deeply, and what seemed to cause the greatest irritation was that Clinton was a “bubba” and a mandarin–two qualities that should not be able to coexist in the same person.

As in the cases of Roosevelt and Jackson, a group of journalists and intellectuals slipped into a pit of their own contempt for Clinton and somehow became unhinged by it. They became obsessed, and this obsession transformed them, in many cases leaving them damaged, certainly not the same. Some of the prime examples were Stuart Taylor, Michael Kelly, Maureen Dowd, Christopher Hitchens, Nat Hentoff, and even Kenneth Starr. Every president has critics, and most of these began in a conventional enough way. But Clinton’s unwillingness to be defeated by conventional political means–typified by his refusal to resign after being impeached–undid them. The failure of ordinary means pushed them to extraordinary means. Their failure to bring him down, paradoxically, magnified him in their eyes, leading these critics into an endlessly escalating series of polemics.

Clinton was different, of course, for at least two reasons. Jackson and Roosevelt each in their own way threatened important political and economic constituencies and interests. On the surface at least it is difficult to see how this can be said of Clinton. His policies were centrist and, after 1994 at least, cautious. His cabinets were liberally staffed with men and women who had made their careers on Wall Street. The stock market prospered mightily during his presidency. Many of the social pathologies that conservative politicians and social critics had railed against undeniably diminished during his tenure in office. Clinton’s policies significantly tacked against the conservatizing course of his predecessors and cut against the pure celebration of the market that so typified the decade. But his policies were still generally friendly toward business and the market, and surely not nearly so leftist in complexion as the intensity of the opposition would imply. So–and I hasten to say again, on the surface at least–it is not immediately clear why Clinton’s presidency should have been so contentious and polarizing.

Second, as the president’s critics never tired of pointing out, while his public approval numbers were high by historical standards, Clinton always enjoyed more support than respect. His political strength was rooted in a politics of empathy, a fact which polling data, if scrutinized closely, bear out. Besides the normal horse-race polls we usually see, pollsters ask a variety of other basic questions, one of which is: Does politician X care about the needs of people like you? On many other questions Clinton’s numbers fluctuated drastically. Thus, for instance, according to the Gallup poll, from February 1995 to January 1999, the percentage of Americans who believed Clinton could “get things done” rose from 45 to 82 percent. Less favorably, over precisely the same period, the percentage of Americans who believed Clinton was “honest and trustworthy” dropped from 46 to 24 percent. But on the question of whether Clinton “cares about the needs of people like you,” his numbers remained virtually unchanged over the entire course of his presidency, averaging just over 60 percent. Without too much facetiousness this might fairly be called the “feel your pain” index. And though he was roundly abused for that line, it was also the core of his political strength and resilience.

Many of those who opposed impeachment saw it at the time as an abuse of constitutional mechanisms provided for moments of extreme crisis and executive malfeasance. But the crisis itself was just as clearly rooted in the Constitution–particularly in the structure of government it prescribes, with its pronounced separation of powers, which frequently stalemates the resolution of major political divisions and questions. Separation of powers may benignly slow the workings of government and refine them through countless small revisions and the seasoning of radical reforms. But in an essentially democratic polity it also contains within itself the seeds of crises–crises for which the extreme solution of impeachment may have been virtually preordained.

 

This article originally appeared in issue 2.4 (July, 2002).


Joshua Micah Marshall, former Washington editor of the American Prospect, is author of Talking Points Memo. His articles have appeared in a wide variety of print and electronic publications including the American Prospect, the New Republic, the New York Times, Salon, Slate, Talk, and the Washington Monthly. He is currently finishing his doctoral dissertation in colonial American history at Brown University.