Slaves and Slavery in George Washington’s World

An interview with Henry Wiencek

Common-place: One of the striking revelations in Imperfect God is just how intertwined Washington’s life was with the institution of slavery. Everyone knows he owned slaves, but few recognize just how pervasive a part of his day-to-day existence slaves and slavery were. Was this a revelation to you as well? If so, how did it come about?

Henry Wiencek: Because Washington is chiefly known and studied as a political figure, historians have looked at Washington’s encounter with slavery through a political lens. Finding that Washington made no official statements about slavery during his presidency and that the issue did not arise in any dramatically significant way during his term, the political historians have relegated slavery to a footnote in studies of Washington. The story is almost the same for Washington as a military leader. General biographers of Washington have by and large been uninterested in slavery (Flexner is something of an exception), except as a narrative device for making Washington look good; so they have tended to cherry-pick anecdotes and statements that put Washington in a positive light, and they have tended to compartmentalize the discussion in a single, brief section or chapter. Reading these books would make one think that slavery was present in Washington’s life only as a kind of social/environmental wallpaper—African American figures hovering silently in the background in dining rooms and in fields—and that slavery never ruffled his Olympian conscience at all.

Because I came at Washington from the perspective of someone who studies plantation families, I knew before I had even begun that I would find slavery a pervasive presence in Washington’s life. How could it be otherwise? Before he was anything else he was a planter/farmer (two different things), and if you asked him while he was in office what his occupation was, he would have said “farmer.” He inherited slaves when he was still a child; he bought, sold, and rented slaves; he personally managed slaves, depended on slaves for his income for his entire life, negotiated with individual slaves, personally chose certain slaves for his household and for public appearances, and entered contracts regarding slaves; he married a woman whose wealth consisted very largely of slaves and who controlled more slaves than he did; he directly felt the effects of local and federal laws regarding slaves, etc., etc. Having slaves around all the time was part of his psychology—it was comforting; it validated his status as a person of substance and authority. There was no doubt at all that slavery pervaded his life, but the question was: ubiquitous or not, did slaves and slavery stand far in the background of his consciousness (as wallpaper), or did he have some direct, pressing awareness of moral and ethical issues regarding slavery? We look back and say slavery was evil by our standards; maybe he didn’t feel that way at all. I was acutely aware of the problem of “presentism”—judging a figure of the distant past by our standards. I wanted to discover what Washington’s own standards were, and my starting point was his last will and testament, in which he freed his slaves. In parsing the language of his will I found that, by the last year of his life, slavery had become a huge moral issue for him. Did this represent a change in his thinking? If so, what brought about that change? When? The will also suggested that Washington sharply disagreed with his wife and the rest of his extended family on the slavery issue.

CP: It seems clear from Imperfect God that you learned a great deal from genealogists. For historians working inside the academy, this might seem striking. How was it that you came to be interested in genealogy as a way of addressing larger historical questions about race and slavery?

 

William Costin (c. 1780-1842), the Washingtons’ mixed-race grandson/nephew. He was the son of Ann Dandridge, enslaved half sister of Martha Washington, and Jacky Custis, Martha’s son. Courtesy of the Library of Congress, Prints and Photographs Division.

HW: When I researched my previous book, The Hairstons: An American Family in Black and White, I could not avoid genealogy and genealogists. That book focused on one extended family with a black side, a white side, and family genealogists on every side trying to reconstruct a lost/hidden past. In several instances I came across documents indicating hidden or forgotten blood ties between the whites and blacks. You can’t avoid finding that kind of information if you’re studying plantation families. It happened everywhere and the evidence is thick on the ground—wills, gifts of land, odd emancipations, payments for education, favored treatment for particular people. I had so many of these stories from Hairston documents and oral history that I couldn’t put them all into the book. And after the book came out more people called or wrote to me about other instances. The other part of this is that you have to be careful in evaluating this information—not everything is as it seems.

When you encounter evidence of kinship between owners and slaves you have, first of all, learned something new about the complexity of their world, and next you are confronted with the question: did knowledge of his or her kinship to slaves influence the actions of an owner? Martha Washington’s first father-in-law, John Custis, all but acknowledged his mixed-race son, freed him, and gave him a very generous bequest. In contrast, Martha held her own half sister in slavery. The existence of this half sister, Ann Dandridge, was one of the great shocks of my research, and I discovered her only because genealogists had written to Mount Vernon about Dandridge and their letters were in the files. I pursued the leads in that correspondence and came up with additional evidence. So through the work of genealogists I came up with information that completely changed our view of what slavery was like at Mount Vernon.

Another case: The genealogist Anita Wills, who grew up in California, was told by her mother that the family originated in Virginia, and when Wills researched her past she found she was a descendant of mixed-race indentured servants held by Washington’s family. Her genealogical research opened a window on a facet of servitude I knew nothing about—the forced indenture of mixed-race children, a quasi-slavery fastened on people who fell afoul of the racial purity laws, which one might more accurately call the “how-to-get-free-servants-for-thirty-years statutes.” All of this told me a lot about the fabric of Washington’s world, and all of this came to light because a genealogist in California began tugging at a thread that led her all the way back to Westmoreland County, Virginia, and to George Washington.

Genealogy is much more than just birth certificates and gravestones. You are compelled to get deeply into wills, all manner of legal documents, county and state records, family letters and diaries. By chasing down every stray reference to a particular person, you find supremely important documents by accident, documents you certainly would not have found otherwise. By pursuing documents about one of Washington’s slaves you can encounter documents about Washington himself you might never have discovered through a strictly defined “George Washington” search.

Genealogy has special importance to the study of slavery because by studying the wills and inheritance practices of slave-owning families you begin to see that slave owning was often a collective enterprise. The simple declarative statement “George Washington owned slaves” is actually a vast oversimplification. Yes, he owned slaves, but some of his slaves intermarried with “Martha’s” slaves. I have to put Martha’s in quotation marks because she didn’t actually own them; they were owned by a legal entity—the estate of her deceased first husband—and they were earmarked for her son, Jacky, and after his death, for his heirs. You see how complicated it gets. The slaves were at Mount Vernon, under Washington’s control, but his ownership of his own slaves and his control of “Martha’s” were complicated by intermarriages among the slaves. Some of the offspring belonged to George; some, to the Custis heirs.

This phenomenon of collective family ownership of slaves turns out to be very important for understanding Washington. I was curious to find out if there was any direct connection between Washington’s private slaveholding and his public policy on slavery, and I found that there was such a connection. While he was in office, the revolting slave-trading practices of an in-law drove Washington to plan the emancipation of all the slaves at Mount Vernon—his own and “Martha’s”—an emancipation that Washington knew would have a strong political impact. I found unpublished letters from Washington’s negotiations with a Custis heir, in which a very detailed manumission plan is outlined. The Custis heir resisted Washington’s plan, thus preventing the president from taking a forceful stand on emancipation.

As to “genealogy as a way of addressing larger historical questions about race and slavery”—genealogy teaches us that many white colonial families had mixed-race kin. It would be fascinating to consult Virginia’s African American genealogists and see how many of them can trace their families back to leading white families such as the Carters, Lees, Byrds, Randolphs, et al. (Right now I can say “yes” to three of those names—I don’t know about the Byrds, but they’re related to the Custises, so I guess they’d be a “yes” too.) That would give us a sense of how closely entwined these leading families were with slaves. Reading the accounts of the very peculiar, very intense relationship between Landon Carter and his slave Nassau, I have wondered if they were half brothers. My point is that, in public statements, the white male leadership of colonial Virginia reviled miscegenation, and we have come to believe that they were genuinely revolted by race mixing. Then how could these same men so avidly practice it? If they were disgusted by mixed-race people, how could the masters and mistresses of the era staff their houses with mulattoes? Wouldn’t you expect mulattoes to be shunned, exiled? Jefferson is a prime example. He spoke forcefully against racial mixing, but his entire household staff consisted of mulatto and all-but-white slaves, many of whom were his relatives. My thinking is that, to some degree, this eighteenth-century racial-purity talk was smokescreen and rationalization for outsiders. It’s an extremely complex issue.

CP: To follow up, how do you think attention to genealogy might’ve altered the arguments of other books on similar topics? Ed Morgan’s classic American Slavery, American Freedom comes to mind here.

HW: Broad-scope books such as those of Edmund Morgan, Philip Morgan, David Davis, and the Genoveses really have little opportunity to take up genealogical issues, except in an anecdotal way. But if your anecdote is about Jefferson it packs a lot of wallop. Philip Morgan wrote a terrific article about shadow families, “Interracial Sex in the Chesapeake and the British Atlantic World, c. 1700-1820” (in Jan Ellen Lewis and Peter S. Onuf, eds., Sally Hemings & Thomas Jefferson: History, Memory, and Civic Culture [Charlottesville, 1999]), which delves into some specific genealogies. But I think you might be defining genealogy too narrowly—it can also mean the study of family histories, which is particularly revealing in looking at the South. To pick one example, the settlement of Mississippi was, to a great extent, a family enterprise—nuclear and extended families from Virginia and the Carolinas migrated as units to the new land, bringing with them enslaved African American families (sometimes intact, sometimes not). By studying the wills and other financial records of slave-owning families we see that the natural increase in a family’s slaves often provided the capital and the labor for the rising white generation to expand operations into new lands. At age eleven, Washington inherited ten slaves; he died owning 123, an enormous “dividend” in humans that his heirs could have converted to cash—but Washington freed his people. When you look into the histories of other planter families you see parents doling out the African American “increase” to their children to finance the creation of new plantations. When you read the wills you find that dowries for the daughters often consisted of slaves or of cash raised from the sale of slaves. So studying genealogy reveals how deeply slavery was woven into the lives of families, one generation after another, and you see how unusual it was for George Washington to deny this bonanza to his heirs.

CP: As we conduct this interview, African American genealogy is in the national spotlight as it has never been, thanks to Henry Louis Gates’s PBS series, African American Lives. The series explores the family histories of a group of famous African Americans, and for most of them, that history was hitherto unknown. Much like your work, the program raises profound questions about exactly who Americans are. Can you say something about the importance of those questions? How, for example, might they reshape America’s sense of itself?

HW: First off, presentations such as Gates’s PBS series explode the idea of racial purity. Genealogy shows that there has been a tremendous amount of racial mixing in our past. A great many white people have mixed blood, and a great many more white people have African American blood relatives. What does this do to our definition of family? If you have black kin, doesn’t that mean you belong to a black family? Our definition of family has tended to be legalistic, document based, and oriented toward property rights—it’s the courtroom approach. Don’t forget that one prime purpose of genealogy has been to establish the legal inheritance of property. Some genealogists are quite rigid on this point and insist that a family consists only of a string of legally documented heirs. It’s very interesting that many eighteenth- and nineteenth-century wills specified that property could be passed only to so-and-so and his or her “lawful issue.” They knew that there were many extra-legal children and grandchildren, and they wanted to exclude them from inheriting, but that didn’t make such “side families” less real. Many southern families, white and black, are going back into the records to find their blood kin because they want a truer picture of their families than the law has recognized. Jefferson descendants are wrestling with this right now. The Hemings descendants may not be legal heirs of any Jeffersons, but does that exclude them from the Jefferson family?

Another fascinating aspect of African American family history is that you encounter the tremendous achievements of the African American community under slavery and segregation—schools and churches built, businesses started, land bought and farmed, campaigns for civil rights waged. You see a whole hidden history emerge on a family, community, and regional scale. When I go to African American family reunions the people remember very well where they came from, and it serves as their bedrock today.

CP: Can you tell us about your current work? What will follow Imperfect God?

HW: I’m writing a book about Jefferson and his slaves. Hard going! I don’t have a title yet. When I was writing about Washington I wished I were writing about Jefferson because there’s so much more information about TJ and his slaves; now that I got my wish, I yearn for the old days working on GW. Watch what you wish for.

 

This article originally appeared in issue 6.4 (July, 2006).


Henry Wiencek is the author of the acclaimed An Imperfect God: George Washington, His Slaves, and the Creation of America (2003), winner of the Los Angeles Times Book Prize in history and the Best Book of 2003 award from the Society for Historians of the Early American Republic. He has also written The Hairstons: An American Family in Black and White (1999), which received the 1999 National Book Critics Circle Award. In the spirit of rereading, this issue’s Common Reading asks Wiencek to talk about his work on Washington and slavery and to reflect on some of the ways it revises received wisdom about the American past.




When Night was Dark

At Day’s Close: Night in Times Past

The lights go out a lot where I live. Whenever a storm blows through the woods of eastern Connecticut, falling trees knock down power lines—often including the line that reaches my end of a long, dead-end road. There’s a stunning moment of darkness and an abrupt halt to the evening routine. I’ve come to enjoy these little breaks from modern technology, the search for candles, the lighting of a fire in the fireplace. It’s like camping, but with the pleasure of eating the ice cream that would otherwise melt if the power failure lasted. A short power failure is, of course, the best. A full night without light and I begin to think about buying a generator.

What would it mean to live an entire life without electricity, in a world where eyesight faltered at each setting of the sun? A lot more than mere inconvenience, as Roger Ekirch reveals in his richly evocative history, At Day’s Close: Night in Times Past. Before the Industrial Revolution, Ekirch writes, the experience of darkness was so powerful that night can be said to have had its own “rich and vibrant culture very different from daily reality, an ‘alternate reign,’ as an English poet put it. More than that, darkness, for the greater part of humankind, afforded a sanctuary from ordinary existence, the chance, as shadows lengthened, for men and women to express inner impulses and realize repressed desires both in their waking hours and in their dreams, however innocent or sinister in nature” (xxvi).

This is an astonishingly ambitious work, based on two decades of research. Ekirch pays closest attention to early modern Britain, from about 1500 to 1750, but his study sprawls across the western world from Russia to the United States, and from antiquity to the nineteenth century. The experience of night, he suggests, has been shaped both by specific human cultures and by universal human physiology. “Such was the impact of this natural cycle that it frequently transcended differences in culture and time” (xxviii).

As differences in culture and time are the usual bread and butter of historians, Ekirch’s approach produces an unusual narrative. Sliding easily from country to country and from century to century, he organizes his twelve chapters around different aspects of the nocturnal experience. He illustrates his points with a wealth of evidence drawn from a wide range of diaries, letters, newspapers, memoirs, and other writings; occasionally he dips into the sciences to introduce modern knowledge of astronomy, eyesight, and circadian rhythms. Thus the study seems concerned not so much with a historically specific culture of night (say, seventeenth-century English night as opposed to night somewhere else or at an earlier time) as with how human beings have experienced the darkness. Historical change enters the story mainly at the very end when technology severs humans from the natural world.

But perhaps such comments should be left to the fluorescent glare of the seminar room. This book is best enjoyed under a warm reading lamp in the “dead of night” (midnight to 3:00 a.m., Ekirch explains), when the pleasures of its fascinating details and skillful writing are most seductive.

The opening chapters evoke a time when “night brutally robbed men and women of their vision, the most treasured of human senses” (8) and left them literally groping their way through a world thought to be rife with physical and supernatural menace. Strange noises, ominous lights in the sky, miasmatic vapors, and satanic spirits filled Europeans with terror. There were also many more prosaic dangers, such as stumbling over obstacles in the unlit streets or being doused by carelessly emptied chamber pots. Robbers, burglars, arsonists, and other nightwalkers were afoot in the dark of the moon, sometimes carrying human fingers as magic charms. It was, as a watchman would cry out, “time for all honest people to be in bed” (32).

Public authorities tried to control the night with curfews, night patrols, and half-hearted attempts at street lighting. Night watchmen patrolled the city, checking doors, looking for fire, and crying the hours (in part to keep people from sleeping too deeply to interrupt burglars). Watchmen were held in low esteem and were of little use against criminals. Families had to barricade themselves inside their homes with weapons in easy reach. Artificial lighting was woefully inadequate. Smoky flames “pulsed amid the shadows,” unable to illuminate corners or ceilings. People relied on memory and touch to help navigate their homes in the dark. Memory, hearing, touch, and even smell made it possible to travel in the countryside on moonless nights. City people would rely on a torch-bearing servant or would take their chances with a hired “linkboy” who might turn out to be in league with robbers.

For a surprising number of people, night was a time of continued work. Bakers, servants, midwives, scavengers, and iron workers were among those who continued to toil while others slept. Still, the work experience was quite different from the shift labor of the industrial age. Even when out of phase with the sun, preindustrial work continued to be shaped by natural rhythms of seasons and moonlight, particularly in the countryside or on the seacoast. Night work differed from day work in its lack of routine and its more playful spirit.

Ekirch’s chapters on the freedoms of night contain fewer surprises. People often had sex at night, it seems. Adolescents enjoyed getting away from parental supervision. Some people spent their evening hours reading, writing, or praying. The wealthy elite threw extravagant parties. Religious minorities, servants, and slaves grew bold and less inhibited in the night. This section of the book goes over some of the same ground previously explored by Bryan Palmer in Cultures of Darkness: Night Travels in the Histories of Transgression (2000).

The final section is a highly original study of preindustrial sleep, previously published in different form in the American Historical Review. In this brilliant piece of scholarship, Ekirch explores the rituals of bedtime, the meanings of communal beds, and most surprisingly, the very different rhythms of sleep. He shows that preindustrial Europeans “experienced two major intervals of sleep bridged by up to an hour or more of wakefulness.” People rose in the middle of the night to take a drink, to talk, to do a little work, to contemplate their dreams, to steal firewood from their neighbors, to make love, pray, or urinate. Whatever they chose to do, they routinely experienced a break between “first sleep” and “second sleep,” which Ekirch believes to be biologically determined. “There is every reason to believe that segmented sleep, such as many wild animals exhibit, had long been the natural pattern of our slumber before the modern age” (303).

In a brief conclusion, Ekirch asserts that the eighteenth century began a “nocturnal revolution.” Men and women, influenced by the Enlightenment, discarded the supernatural beliefs that had made the night a time of terror. Night work increased with industrialization. Law enforcement improved. But the most important change seems to have been the dramatic improvement of artificial illumination in the nineteenth century, fueled first by oil and coal gas and then by electricity. Ancient patterns of work, leisure, and sleep were radically disrupted. “It is not difficult to imagine a time when night, for all practical purposes, will have become day—truly a twenty-four/seven society in which traditional phases of time, from morning to midnight, have lost their original identities” (339).

Ekirch thus implies that the history of night can be roughly divided into preindustrial and industrial eras, much as Wolfgang Schivelbusch did in Disenchanted Night: The Industrialization of Light in the Nineteenth Century (first published in German in 1983). It appears from reading At Day’s Close that modern technology has cut humans off from the natural world and disrupted behavioral rhythms rooted in part in physiology. The ending reinforces the suggestion throughout the book that people from different countries and different centuries experienced preindustrial night in similar ways. Such a suggestion could be accepted with more complete confidence if the author had not relied so heavily on British evidence, or if he had more explicitly compared the British experience with the nocturnal habits of other cultures. But the book is ambitious enough as it is. Ekirch has done a remarkable job of showing that night in times past was profoundly different from what it has become.

Most effectively, Ekirch has revealed the extent to which natural rhythms once controlled daily life. It’s an appealing theme—the human animal in the grip of nature—and one that seems increasingly exotic in the twenty-first century. This story of the forgotten terrors and pleasures of the night reminds us of how much daily life has been transformed by technology. It also helps us appreciate the lamplight on the page, even before the power goes out.

 

This article originally appeared in issue 6.4 (July, 2006).


Peter Baldwin, associate professor of history at the University of Connecticut, is writing a social history of night in American cities.




Old Wealth, Common Wealth

Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America

By 2006, Virginia had not had an effective transportation plan for twenty years. Yet Norfolk and northern Virginia were among the fastest-growing regions in the nation, and every day an avalanche of cars and trucks engulfed their antiquated roadways. A new governor had called for an expansive program of transportation improvements to address the crisis. The legislature, however, was mired in a controversy between moderates and conservatives over funding the governor’s program with a permanent revenue stream from new, mostly indirect taxes. Conservatives refused to budge on raising taxes. What Sean Patrick Adams might call a political economy of state action—or inaction in this case—was crippling Virginia’s capacity to address one of the greatest threats to its economic growth and well-being.

In Old Dominion, Industrial Commonwealth, Adams demonstrates that the current impasse has deep historical roots. Explaining the troubled present is not Adams’s objective, but his book provides a powerful tool for doing just that. His focus lies on coal, not transportation. Although the book’s subtitle limits its scope to antebellum America, its concerns encompass the entire nineteenth century. And they include not only Virginia but also Pennsylvania and that area of Virginia that became West Virginia. Key to Adams’s thesis is the nineteenth-century transition from localized economies, manageable by state governments, to an emerging national economic order in which huge and highly capitalized corporations overwhelmed the states and the limited public sphere they represented.

In the decades before the Civil War, contrasts between Pennsylvania and Virginia demonstrate Adams’s central idea: the political economy of state institutions shaped the course of economic development. Simply put, Pennsylvania emerged as one of the nation’s industrial powerhouses and Virginia did not because of different approaches to exercising the economic powers of the state in the interest of commercial development and the general good. The difference was not the fault of natural endowment. Virginia, like Pennsylvania, possessed important coal reserves in its piedmont region within the immediate market reach of Atlantic port cities. And each state harbored vast reserves of bituminous coal in its western mountains. What separated Virginia from Pennsylvania and condemned the Old Dominion to second-rank industrial status was quite simply slavery. Slavery dominated the concerns of tobacco planters east of the state’s Blue Ridge, and these men dominated the legislature.

At this point the past begins to resonate with the present. Fearful that the costs of economic development and industrial growth would be paid for by property taxes—and slaves, of course, were property—easterners crippled development projects like the James River and Kanawha Company’s efforts to connect tidewater markets to the Ohio River through a series of canals, turnpikes, and navigation improvements. Because transportation development in the west did little to benefit the agricultural economy of the east, planter elites refused to establish a permanent revenue stream for the improvement of the interior. Conservative, antitax principles also limited the economic impact of other endeavors such as a state geological survey or a general incorporation act—the latter would have eased the incorporation process by freeing it from the political machinations of the state legislature.

The legislative environment and political economy of Pennsylvania seemed a world away, separated by the gulf of modernization. Differences emerged from more than the absence of slavery. They also came from the presence of a diverse population and a political culture shaped by varied interests and a more inclusive conception of the public will. Thus when colliers and farmers in the state’s central region needed canals to get goods to market, the state itself built them. Pennsylvania’s geological survey—not only conducted simultaneously with its southern neighbor’s but also by the brother of the Old Dominion’s state geologist—was put to the service of developing the commonwealth’s mineral and agricultural resources. Although interest-based politics often confounded attempts to reform the process of corporate chartering, Pennsylvania eventually established an effective program of fostering corporations as the most powerful, albeit private, means of industrial development.

The clarity of the Pennsylvania-Virginia comparison dissolved in the postbellum economic world. Both states succumbed to the forces of economic nationalization as large corporations with national political clout gained control of the American economy. The contrast was further complicated by the emergence of West Virginia on that side of Virginia’s sectional fault line containing the old state’s vast store of bituminous coal. But with the demise of the Radical Republican political movement that created the state and with the emergence of conservative local politicians eager to reestablish a political culture of deference, the state withered while national energy and transportation corporations ransacked its mineral resources.

Meanwhile, the fact that Virginia had a coalfield just west of Richmond disappeared from public memory. The triumph of conservative Democrats by the end of the 1880s, for all practical purposes, put a political economy of state action in the deep freeze. Although Adams’s story stops at about this point, the extent of this chill in economic development was evident again in the 1950s when the Virginia legislature, faced with one of the worst, most poorly funded educational systems in the nation, adopted a requirement that all surplus tax revenues go back to taxpayers instead of toward improving public life. The deadening effects of massive resistance to school desegregation and civil rights—especially when programs for the social and economic betterment of the state’s poor population depended upon tax revenues—were only further evidence of the persistent power of a political economy and political culture born in slavery. And needless to say, many Virginians in the state’s transportation-starved population centers might feel enslaved today to a political culture deeply suspicious of the public sphere and resentful of any public spending to support it.

 

This article originally appeared in issue 6.4 (July, 2006).


Warren R. Hofstra is professor of history at Shenandoah University in Winchester, Virginia, and writes on landscape, regional history, and Virginia broadly construed.




Whose Failure?

Bruce Ackerman, The Failure of the Founding Fathers: Jefferson, Marshall, and the Rise of Presidential Democracy.

 

Bruce Ackerman takes seriously a phenomenon Americans often seem to overlook: the rapid evolution of a political system radically different from the one anticipated by the framers of the Constitution. Ackerman argues, in The Failure of the Founding Fathers, that the Constitution’s procedures for electing a president were particularly ill suited to the realities of American politics and that the election of 1800 exposed these inadequacies. The constitutional mechanism failed because the framers failed to foresee the rise of political parties. The party system became a vehicle for the creation of a “plebiscitarian” presidency, a chief executive whose unexpected influence derived from his ability to claim a popular mandate. Less responsive to public opinion, the federal courts remained an obstacle to presidential government, but Ackerman sees a creative synthesis emerging from the conflict. The Supreme Court would assume the power of judicial review, but the Constitution would not be allowed to frustrate the will of a popular president. Ackerman’s argument echoes his earlier work: at critical moments in the nation’s history, unequivocal expressions of public sentiment at the ballot box have rewritten constitutional law.

Ackerman’s story begins with the results of the election of 1800. The Republican nominee for president, incumbent vice president Thomas Jefferson, defeated the Federalist incumbent, John Adams, but Jefferson found himself tied in the electoral college with his running mate, the enigmatic New Yorker Aaron Burr. Under the terms of Article II, Section 1 of the Constitution, each elector voted for two candidates, with the vice presidency going to the second-place finisher. Showing a level of party discipline the founders did not anticipate, the Republican electors failed to waste a single vote on a third candidate. The votes were perfectly divided between two Republican candidates.

Allowing individual electors to cast separate ballots was only one of the Founding Fathers’ blunders. Article II, Section 1 also authorized the vice president to count the electoral votes in the presence of both houses of Congress, which was at best an awkward business when the vice president was a candidate. To make matters worse, in 1800, Georgia’s electors reported their votes for the Republican candidates on an irregular ballot. Jefferson blithely counted the early republic’s equivalent of a hanging chad for himself and Burr. It is a fine story, but Ackerman makes too much of it; the Federalists knew they had lost Georgia.

The election then went to the House of Representatives where each state had one vote. The Constitution permitted the sitting of a lame-duck Congress, which had become customary by 1800. The Federalists had lost control of the House, but they would select the new president. The one-state, one-vote rule diluted Federalist voting strength, but after thirty-five ballots, Jefferson remained one vote short of a majority. Loath to elect Jefferson, the Federalists considered making a deal with Burr or appointing an interim president and calling a new election. Ackerman implies that Delaware’s one congressman, the Federalist James Bayard, single-handedly put Jefferson over the top on the thirty-sixth ballot. In reality, although Bayard was wavering, vote switches by Maryland and Vermont gave the Virginian a majority.

Ackerman then shifts his focus to the courts. As the Federalist era came to an end, the outgoing Congress passed the Judiciary Act of 1801 creating new federal circuit courts, and Adams busied himself appointing his notorious “midnight judges,” including John Marshall, the new chief justice. The Republican Congress promptly repealed the 1801 act and left the circuit judges unemployed, despite the constitutional provision guaranteeing them lifetime appointments. The repeal was part of a broader attack on the Federalist federal courts, which leads Ackerman to one of his most provocative points: Marshall’s landmark decision in Marbury v. Madison (1803), asserting the court’s power of judicial review, was less a magisterial assertion of authority than an exercise in judicial damage control. The Supreme Court justices had reluctantly resumed circuit-riding duties after the abolition of the circuit judgeships, and in Stuart v. Laird (1803) they grudgingly upheld the repeal of the Judiciary Act. Marbury supposedly allowed the court to salvage a slim measure of institutional integrity. But, according to Ackerman, Republican appointments eventually undermined Marshall’s Federalist jurisprudence. As evidence, he points to the court’s rejection in 1812 of federal jurisdiction over common-law crimes.

There is much to appreciate here. Ackerman rescues Stuart v. Laird from undeserved obscurity. He demystifies Marshall, who was in fact Adams’s third choice for the court, and raises some intriguing questions. If Adams had not settled the Quasi-War with France and demobilized the army, might the Federalists have attempted a coup? Ackerman has uncovered enough political miscues for a comic opera. Adams intended to appoint Rhode Island Senator Ray Greene to a district-court judgeship but inadvertently made him a circuit judge. Greene, accordingly, resigned his Senate seat, only to have the Republican Congress abolish his court.

But problems abound with Ackerman’s interpretation. His constant references to the stupidity of the founders are unconvincing. He criticizes the Constitution’s provisions for electing a president and disparages the Twelfth Amendment, providing for the election of the president and vice president on a single ticket, without suggesting any alternative. By making a few electoral votes potentially decisive, the electoral college invites controversy in close elections, yet Ackerman, for all his complaints, has little to say about this. He sheds little light on the great mysteries of 1800. Why did the Republicans allow a deadlock to develop, and why did Burr allow it to continue? Trying too hard to be provocative, Ackerman stumbles on minor points, such as the details on the final House vote in 1800, and misfires badly on major ones. He does not produce the evidence to demonstrate a politically expedient constitutional synthesis emerging from the partisan conflicts of the early 1800s. He seems insensitive to the constitutional scruples of early nineteenth-century politicians, as if they shared the legal nihilism of modern officeholders. Even conceding that Jefferson embodied a “presidential democracy,” perhaps Ackerman’s most critical point, the concept was an aberration until the twentieth century. There may have been an Age of Jackson, but there was no Age of Fillmore, Pierce, or Hayes.

Further Reading:

Bernard A. Weisberger’s America Afire: Jefferson, Adams, and the Revolutionary Election of 1800 (New York, 2000) provides a readable introduction to the politics of the 1790s. Susan Dunn’s Jefferson’s Second Revolution: The Election Crisis of 1800 and the Triumph of Republicanism (Boston, 2004) is unabashedly pro-Jefferson. The best balance of accessibility and careful scholarship is John Ferling, Adams v. Jefferson: The Tumultuous Election of 1800 (Oxford and New York, 2004). On the Marshall Court, see Herbert A. Johnson, The Chief Justiceship of John Marshall, 1801-1835 (Columbia, S.C., 1997). Marshall Smelser’s The Democratic Republic, 1801-1815 (New York, 1968) remains useful on the presidencies of Jefferson and Madison. For more on Bruce Ackerman’s view of the relationship between politics and legal change, see his We the People, 2 vols. (Cambridge, Mass., [1991] 1998).

 

This article originally appeared in issue 6.4 (July, 2006).


Jeff Broadwater is associate professor of history at Barton College in Wilson, North Carolina. His most recent book is George Mason: Forgotten Founder, forthcoming from the University of North Carolina Press.




Imperialists in Denial

Fred Anderson and Andrew Cayton, The Dominion of War: Empire and Liberty in North America, 1500-2000. New York: Viking, 2005. 424 pp., cloth, $27.95; paper, $16.00.

The Dominion of War is an important book that has been respectfully reviewed in all the right academic journals but still has not secured the public attention it deserves. This is the type of historical project that should be the subject of heated op-eds, TV news shows, the blogosphere, and numerous political campaigns. Dominion should be regarded as a thinking man’s Fahrenheit 9/11.

This is not to suggest that Anderson and Cayton have produced a partisan polemic dressed up as scholarship. Instead, they have attempted a nuanced history of American imperialism—or at least as nuanced a portrait as might be conceived in a work that spans five centuries and an entire continent. Also, remarkably, given the scope of their work, the authors have tried to tell their story through a complicated form of group biography, using the surprisingly interconnected lives of nine men to illustrate their thesis. Portraits of Samuel de Champlain, William Penn, George Washington, Andrew Jackson, Antonio Lopez de Santa Anna, Ulysses Grant, Arthur MacArthur, Douglas MacArthur, and Colin Powell provide their story’s core material.

Nonetheless, the provocative argument behind these interlocking portraits can be stated rather simply—most American policymakers have been (and remain) imperialists in denial who have vastly underestimated the degree to which wars of conquest have shaped their culture. This is the sort of bold and broadly conceived reinterpretation that we don’t see much anymore from academics. It is also the reason why this historically minded study has such contemporary relevance. If true, this argument provides a thoughtful context for the post-9/11 world and especially for the current, bitter debate over the war in Iraq. Unfortunately, much like Michael Moore’s documentary, this work is more likely to polarize existing opinions than to provoke thoughtful debate about America’s current interventionist moment.

There is little doubt, however, that at least the first half of this book offers a brilliant new synthesis of early American history. Anderson and Cayton explain in exquisite detail how the arrival of European explorers such as Champlain and even ostensibly peaceful settlers such as Penn set off a violent chain reaction among native peoples, creating a culture of warfare, which ensnared almost everyone. They help the reader navigate little-known but critical episodes such as the Beaver Wars of the seventeenth century when the Iroquois attacked over fifty other Indian nations during a thirty-year period in a desperate effort to stave off the collapse of their own empire. The authors demonstrate in convincing fashion that the unintended consequence of the initial Native-European contact in North America and the subsequent colonization effort by various westerners led to the dramatic “clash of empires” that became the Seven Years’ (or French and Indian) War—a pivotal conflict, which, according to Anderson and Cayton, “altered the whole landscape of empire in North America” (103).

In some ways, this is ground that Fred Anderson covered in his award-winning Crucible of War: The Seven Years’ War and the Fate of Empire in British North America, 1754-1766 (2000), but the wide-ranging and fast-paced synthesis presented in Dominion of War is still quite valuable and eminently teachable. Where the argument becomes strained, however, is when the authors attempt to rewrite the history of the American Revolution as a continuation of the imperial conflicts that shaped the Seven Years’ War rather than as an ideological struggle for liberty. They see the rising martial spirit, the “uncontrollable violence” (172) of “ethnic cleansing” (170) on the frontier, and the enticing opportunities for westward expansion as the deciding factors in the development of American nationalism, or what they’re calling “the making of an imperial republic” (160).

The evolution of George Washington’s attitudes provides the main framework for this analysis. “War made an Anglophile imperialist into a committed American nationalist,” write Anderson and Cayton about the father of our country (180). There is surely some truth in this statement, but Washington also seemed devoted to the coming revolution long before the hard years of the war itself—famously showing up in military uniform, as the authors themselves note, at the initial meeting of the Second Continental Congress. In addition, for someone so presumably affected by the military conflict and the promise of expansion, General Washington seemed remarkably deferential to the principles of the rule of law and civilian control within the new republic and quite sincere in his desire to keep the new nation out of foreign conflicts and entanglements. Nor is Washington the only example to consider. Figures such as John Adams, Thomas Jefferson, or Benjamin Franklin would have offered far less convincing material for the claim that American republicanism was “imperial” from the outset.

This suspicion—that selective biography can allow authors to cherry-pick their evidence and thus manipulate their narrative—begins to intrude more and more as The Dominion of War describes the rest of the story of U.S. expansionism. Almost inevitably, we read chapters about Andrew Jackson, “Butcher” Grant, and the Generals MacArthur instead of ones focusing on John Quincy Adams, Abraham Lincoln, or William Jennings Bryan. The latter figures would certainly have appeared less absorbed by “racial hatred” (246) or prone to the violence and the desire for conquest that Anderson and Cayton see as increasingly defining the American republic. Were they less representative? That is debatable but not really debated within these pages.

In truth, there isn’t much of any debate going on within the second half of this book. Time and again, the authors seem content to quote from the idealistic rhetoric of American decision makers without offering engaged analysis, incorrectly assuming that the hypocrisy is self-evident. They skip quickly past the “good” wars in American history, earnestly explaining why the Mexican War actually mattered more than the Civil War or why the Philippines occupation was more revealing than World War II. By the time they reach the Powell Doctrine and the end of the twentieth century, they seem almost frantic to reach a verdict that will serve to condemn interventionism in places such as Vietnam and Iraq. Ultimately, the book degenerates in its final chapters into little more than a thin textbook survey of modern U.S. military and diplomatic blunders. What began as a fascinating reconstruction of the American narrative thus sadly becomes a stale, uninspired critique of the misuses of modern U.S. military power.

This is why Anderson and Cayton’s climactic question lacks some punch. They write at the very end of the book: “At what point do the contradictions between the advocacy of liberty and the use of coercive means become overwhelming?” (424). The appropriate answer is that the “contradictions” will become “overwhelming” only when historians actually succeed in demonstrating that this disconnection is intentional or that the consequences of warfare are routinely catastrophic to freedom. Instead, even after a critique as rich and sometimes as powerful as The Dominion of War, we are still left to contemplate the meaning of a national history that includes both noble conflict essential to the preservation of freedom—by containing British imperialism, by destroying slavery, or by defeating fascism—and other more hellish confrontations, which seem, at best, paved only with good intentions.

 

This article originally appeared in issue 6.4 (July, 2006).


Matthew Pinsker is an associate professor and the Brian Pohanka Chair of Civil War History at Dickinson College in Carlisle, Pennsylvania.




The Capitalist Portrait

Collecting at the New York Chamber of Commerce

Though the collection of the New York Chamber of Commerce is now dispersed and the institution has faded into history, the extravagant Great Hall, located in the chamber’s headquarters on Liberty Street, was once the Olympus of American business (fig. 1). Looking at the thick installation of paintings hung in tiers on paneled walls, a viewer might simply isolate the portrait of an honored individual. But the overarching purpose of the Great Hall was instead to project onto the viewer the meta-impression of a cohesive family of professional fathers and brothers, the overall architecture of the display taking on the distinct look of a kinship, with all its suggestions of affinity, loyalty, obligation, power, and dynastic propagation. Matthew Pratt’s large “ancestral” portrait of Cadwallader Colden, the eighteenth-century New York agent for the chamber’s royal charter, always occupied the central position on the east wall, below which, on a dais, was positioned the chair and podium of the chamber’s current president. Flanking Colden on both sides were smaller portraits stacked in horizontal rows. Whatever the particular arrangement and density of the flanking pictures, which changed from time to time, they always had the look of being the “descendants” of Colden, giving the Great Hall a sense of both ancestry and succession.

Founded in 1768 and chartered by King George III in 1770, the chamber came to serve the small-scale antebellum merchants of the city. It did not begin to collect portraits, however, until after the Civil War, when powerful individuals dominated the economic landscape of New York. The portraits, eventually numbering more than three hundred (both commissioned and donated), were mostly installed in the Great Hall, a ninety-by-sixty-foot paneled room with a thirty-foot ceiling, set into James Barnes Baker’s beaux-arts building at 65 Liberty Street. The chamber’s collecting initiative coincided with the new bourgeois effort of other occupations that were also interested in their public reputations and wishing to use portraiture to project or stabilize a coherent new image. Doctors, for example, used portraiture to fashion themselves into men of letters or science as part of an effort to overcome the common stereotype of physicians as experimenters or butchers. Likewise, the chamber tried to rebut the negative stereotype of “robber baron” that dogged some businessmen by filling its spaces with images of thoughtful, polite, conventional, even noble men. Their inoffensive collective identity served the conformist—rather than transgressive—status aspirations of an ascendant professional group.

 

Fig. 1. James Barnes Baker, “Great Hall of the Chamber of Commerce, 1924.” Photograph from Catalogue of Portraits of the Chamber of Commerce of the State of New York, with Foreward, Biographical Sketches and List of Artists, published by the Chamber of Commerce of the State of New York (New York, 1924).

For the Great Hall to amount to an ersatz business family, the perception of individual identities had to yield to the visual network that conveyed the fiction of a unified fraternal community, its members positioned shoulder to shoulder. Though the portraits of merchants, capitalists, and financiers had been collected by or donated to the chamber over many years and painted by a variety of artists (notably Eastman Johnson and Daniel Huntington), most of the signs of individuation had been repressed and replaced by narrow typological templates. (One hundred and eighty of the pictures that are now in the New York State Library can be viewed digitally). Not surprisingly, then, most of the portraits downplay the sitter’s particular triumphs. Though there is the anomalous sight of merchant Peletiah Perit sitting between a stack of money and a view of a freighter loading up in New York harbor or that of railroad entrepreneur James Gore King (fig. 2) beside a clock held up by Atlas, over which is suspended a map of the world, the typology of mastery and power is uncommon in the chamber’s collection. Instead, the pictures concentrate, for the most part, on confidence, grace, intelligence, character, respectability, sincerity, some minor business activity, and rarely any personal life. They de-emphasize the specific attributes and behaviors associated with running a business or making money or even conspicuously giving it away and instead encourage viewers to focus on the generalized look of high character. These men are to be understood above all as gentlemen. For example, both banker Samuel Babcock and merchant Josiah Orne Low sit face forward and remove spectacles with their right hands in order to address the viewer with solemn earnestness. New York Stock Exchange president J. Edward Simmons (fig. 3), leather trader Jackson S. Schultz, real estate magnate Amos Eno, and Equitable president Henry B. Hyde stand like classical orators. 
Some, such as railroad financier John S. Kennedy and Equitable president Thomas I. Parkinson, handle letters and papers. A significant number, such as James DePeyster Ogden (fig. 2), hold or are engrossed in books or prints. And the great majority of pictures show even less. Mostly bust-length portraits (approximately 140 of them), they depict men emotionally uninflected and with few or no objects, even when the person portrayed was known to have led a high-achieving or flamboyant life.

In the Great Hall, there is such a sense of sameness in the men’s appearances that it does not take much effort to see that personality has been deliberately tamed for the sake of assembling an artificial society, a seamless superstructure open to being expanded, contracted, and reconfigured by house fiat. It was an aesthetic system that muted idiosyncrasy, as if all diversity could be checked like an overcoat at the chamber’s front doors.

 

Fig. 2. From Catalogue of Portraits of the Chamber of Commerce of the State of New York (New York, 1924).

Of course the impression that these businessmen were kin was, as Georg Simmel pointed out in his 1922 essay “The Web of Group-Affiliations,” a fiction. As a business organization, the chamber functioned well because disconnected individuals from “heterogeneous groups” (rural, urban, Protestant, Jewish, etc.) became newly “related” by a “similarity of talents, inclinations, activities, and so on.” These men, who might have been strangers or foes outside the affiliation, willingly and temporarily repressed cultural differences in order to form “a group whose cohesion was based on purpose.” Yet for all the solidarity pictured in the Great Hall, few of these men would have actually wished to mingle or to export their in-chamber alliances to what Simmel called “every nook and cranny” of their “natural relationships.” Their real-life affiliation was collapsible, as men only occasionally assembled at the chamber for a specific purpose and then disassembled, moving back to other, more constant and entrenched social structures. Some of these men, especially those owning railroads, were actually in mortal conflict with one another, vying for markets, stocks, shipping routes, track rights, and patents. Commodore Vanderbilt, for example, considered his railroad rival and fellow chamber member James J. Hill to be a dangerous threat to his New York Central empire. Meanwhile, Jacob Schiff battled against chamber colleague J. Pierpont Morgan (fig. 3) in advancing Edward H. Harriman’s hostile bid to take over Hill’s Northern Pacific Railroad.

The solidarity pictured in the Great Hall was ultimately, then, a rhetorical expression of the institutional aims of the chamber. All identities, however complex and conflicted they might have been on the street, now flatten out, so as to “make it appear to others as a unified group.” Though the pictures in the gallery promise professional congruity and mutual order, the real-life logic of competitive individualism modified the possible pleasures of fraternal exchange. This is to say, the competitiveness of businessmen with each other was temporarily interrupted on the walls of the chamber for the sake of a common purpose and replaced by the aggregate image of a placid network, a family tree, a professional genealogy. If a pan-business brotherhood existed as an ideal or goal of the New York Chamber of Commerce, it was mostly manifested in the pictorial unity on the chamber’s walls and not necessarily on Wall Street. In the headquarters of American capitalism, of all places, the discourse of professional and personal rivalry, in which many of the businessmen reveled, was converted into the quiet visual economy of prestige and honor. However individualistic or idiosyncratic a Morgan or a Carnegie might have been, in the realm of the gallery their images conform to and share in the official visual norms of the chamber group, which in turn provides them with a share of the institution’s collective symbolic value.

 

Fig. 3. From Catalogue of Portraits of the Chamber of Commerce of the State of New York (New York, 1924).

Further Reading:

For an overview of the chamber’s collecting, see George Wilson, Portrait Gallery of the Chamber of Commerce of the State of New York (New York, 1890). Many of the pictures can be viewed at the NYS Website. On the chamber in the city, see David Hammack, Power and Society: Greater New York at the Turn of the Century (New York, 1987). The negative image of the American businessman is thoroughly elaborated in Louis Galambos, The Public Image of Big Business in America, 1880-1940 (Baltimore, 1975); in Sigmund Diamond, The Reputation of the American Businessman (Cambridge, Mass., 1955); and in Matthew Josephson, The Robber Barons: The Great American Capitalists, 1861-1901 (New York, 1934). On the logic of collecting portraits of men in occupations, see Ludmilla Jordanova, Defining Features: Scientific and Medical Portraits, 1660-2000 (London, 2000) and her “Medical Men, 1780-1820,” in Joanna Woodall, ed., Portraiture: Facing the Subject (Manchester, UK, 1997): 101-15. Also see R. Burgess, Portraits of Doctors and Scientists in the Wellcome Institute of the History of Medicine (London, 1973). On masculinity and fraternal action, see Dana D. Nelson, National Manhood: Capitalist Citizenship and the Imagined Fraternity of White Men (Durham, N.C., 1998). Simmel’s essay is published in Conflict: The Web of Group Affiliations (1922; reprint, Glencoe, Ill., 1955). For an analysis of the social structure of fraternal organizations, see Mary Douglas, In the Active Voice (London, 1982): 183-254; David Knoke and James R. Wood, Organized for Action: Commitment in Voluntary Associations (New Brunswick, N.J., 1981); and Eric J. Hobsbawm, “Fraternity,” New Society, 16:4 (27 November 1975): 470-73.

 

This article originally appeared in issue 7.2 (January, 2007).


Paul Staiti is professor of art history at Mount Holyoke College.




First Person: The 38th Voyage

Large Stock

This is our inaugural installment in an ongoing series devoted to “First Person” experiences with early American culture. We envision “First Person” features as an exploration of what it means to inhabit history in extraordinary ways. What alchemy occurs when we set sail on a whaling ship, put on period clothing, perform the words and deeds of an earlier life? In short, what happens when we put our bodies in the very spaces we’ve spent so much time reading about? Of course, we all know that we can’t really feel what our subjects felt, but this feature thinks critically about our desire to do just that.

This issue’s First Person feature includes accounts from three of the voyagers on the Morgan’s 38th Voyage, a remarkable event that invited scholars, writers, and artists to voyage on a nineteenth-century whaler. The Mystic Seaport’s Website provides excellent material on the event, as described below:

 

In the summer of 2014 the 1841 whaleship Charles W. Morgan sailed for the first time in more than 80 years. During this 38th Voyage, 85 individuals from a wide range of disciplines and backgrounds sailed aboard the ship and participated in an unprecedented public-history project. This select group, which included artists, historians, scientists, journalists, teachers, musicians, scholars, and whaling descendants, used their own perspectives and talents to document and filter their experience and will produce a creative product for Mystic Seaport to share with the public. 

While rooted in history, the 38th Voyage was not a reenactment, but an opportunity to add to the Morgan’s story with contemporary perspectives. The 38th Voyagers sailed aboard one voyage leg (one night plus the following day) and worked alongside Museum staff, examining every aspect of the journey to better understand the past experiences of those who sailed this ship and others like her.

 

 

This article originally appeared in issue 15.1 (Fall, 2014).


 




Introducing the Commonplace politics issue

Around the time of the previous presidential election, David Waldstreicher, Andrew Robertson, and I published a volume called Beyond the Founders: New Approaches to the Political History of the Early American Republic. It was meant to showcase ways of thinking about the political history of the early American republic that were different from the warmed-over, artfully scuffed-up, but deeply conservative “great man” approach, which had become so prevalent during an era of not-so-great political leadership and worse political journalism. Our volume did little to slow the parade of so-called presidential historians through bookstores and across TV screens, peddling their anodyne anecdotes, but it did suggest alternative paths that scholars might follow to America’s political past.

At the risk of tarring my coeditors with my own motivations, Beyond the Founders was also intended as an announcement that political history was back in the academic house. Once upon a time, long before most of the volume’s authors had completed grade school, much less graduate school, political history had been the king of the discipline, the subject that defined what it was to do history. This earlier brand of political history left the vast majority of early Americans out of most histories altogether, so there was little reason to mourn when that particular monarch was deposed beginning in the 1960s. Reflecting a noticeable but largely unconscious return to political topics by younger scholars and an abiding belief that the lineaments of public power profoundly affect every person in every society whether they possess such power or not, Beyond the Founders tried to bring back political history with a humbler attitude and a much broader base. Neither a backlash against recent historical trends nor a kneejerk response to them, the book sought to foster a more capacious and also more accurate notion of precisely what activities comprised politics in early America. At the same time, we tried not to forget that traditional political history topics like elections and presidencies and diplomacy were too important to be left to the political scientists.

While this special politics issue of Common-place is not exactly a sequel to Beyond the Founders, our jokey sequel-ish title indicates its origin in the same broad project, this time made a bit more accessible to general readers. The emphasis of our special issue is two-fold: scholarly perspectives on political phenomena that connect the early American republic with the present and articles that aim, like Beyond the Founders, to expand readers’ definition of what counts as “political history.”

In the first category are most of the issue’s freestanding articles. Jonathan Sassi looks at how one influential evangelical minister learned that Christianity could be even more powerful in a secularly governed nation. Jim Cullen and Amy Greenberg, respectively, describe the way racism and manhood worked in the presidential politics of the 1850s and how distressingly little has changed in 150 years. Reeve Huston considers the fundamentally different models of democracy that operated in different quarters of nineteenth-century American politics. Huston finds the stark differences between the Workingmen’s Parties and their antagonists during the Jacksonian era echoed in divergent strategies of the Obama and McCain campaigns today.

In the second category, we have several groups of articles that delve into areas of public life typically overlooked by a traditional political history dominated by election campaigns and presidencies. Black churches are known as a political force in modern American politics but generally do not figure in political narratives of the slavery era. Richard Newman shows us how the political culture of electoral democracy filtered into African American communities legally barred from the formal political process, as free blacks voted within their churches. Likewise, though we are all familiar with modern political battles over Supreme Court nominations, constitutionalism itself rarely registers as a political problem, leading to uncritical acceptance of the relatively recent notion that the only appropriate venues in which constitutional rights can be defended are the courts. Ray Raphael takes us back to one of the many other, more popular forms of constitutionalism that existed earlier, in this case citizens in local communities formally instructing their legislators. In a bonus article hosted at the Common-place political blog, Publick Occurrences 2.0, legal historian Christian Fritz takes an even broader look at the almost-lost constitutional world of early America.

Then we have a package of articles on a political topic that even Beyond the Founders neglected, the history of the American state. At the time this introduction is being written (September 2008), the federal government has just bought 79.9 percent of the world’s largest insurance company, AIG (not long after purchasing 100 percent of two of the largest secondary mortgage lenders in the world, Fannie Mae and Freddie Mac), as part of a series of transactions that will effectively nationalize much of the U.S. financial industry. Thus it seems more obvious than ever that historians and history readers ignore the role of government institutions at their peril. While putting U.S. taxpayers into the insurance business, the mortgage business, and soon the investment management business contradicts the ideology of both present-day political parties, even the George W. Bush administration finally had to admit what has always been true: that government is the ultimate guarantor of the national weal. No matter how privatized basic public functions (such as shielding citizens from financial risk) appear to be, it is government that has to take responsibility when the chips are down and basic stability is at stake. Actually, government has always had that ultimate responsibility, but in recent times American leaders found it more politic and seemingly more efficient to handle such tasks through institutions defined as private businesses. Now we know better. Any notion of political history with even the slightest pretensions to accuracy and comprehensiveness cannot afford to leave the “American state” out of the picture. Our state package is introduced by the redoubtable Richard R. John, the scholar who almost single-handedly “brought the state back in” to the study of the early American republic.

Finally, bridging the two missions, we have an innovative set of articles on the material history of the American ballot. This was an issue that modern Americans rarely even considered before the epoch-making Florida chads and butterflies of 2000, but it turns out to be one that cultural historians and literary scholars are uniquely suited to illuminate. The benefits of a broader political history were never more clear.

It will also be noticed that, despite the variety on offer here, we haven’t attempted to be comprehensive or systematic about covering all possible facets of early American political history or even to reflect all the most prominent themes of current scholarship. We might have done more with recent research on women’s political activities in early America, building on the work of historians Rosemarie Zagarri, Susan Branson, Catherine Allgor, Elizabeth Varon, and others. In a different vein, readers who know my own work may be surprised to find little here focusing directly on the political press, another topic I obviously consider to be of great importance in understanding early American politics.

We also might have done much more in this special issue with electoral history, a traditional political topic that Common-place’s main sponsor, the American Antiquarian Society, is involved with in a highly untraditional way. The New Nation Votes project aims to make available to scholars and the public the life’s work of Philip Lampi, an AAS employee who has been collecting early American election returns for more than four decades, most of that in his spare time. Elections before 1828 were long considered the “lost Atlantis” of American political history because there was no complete set of election returns to study. Lampi set out to map those lost coastlines, amassing his collection by hand, from old newspaper reports and local records. In recent years, working with my Beyond the Founders coeditor Andrew Robertson, Krista Ferrante, and others, Lampi has also been trying to correct the cultural myths he believes have emerged about the politics of the founding era in the absence of real electoral data. To honor Phil Lampi’s work, further his larger project, and also take advantage of Common-place’s online format, the Publick Occurrences 2.0 blog will run an open-ended series entitled “Myths of the Lost Atlantis.” Beginning when this special issue is posted, the series will continue—with postings every few days—through late October at least. Joining me on the blog will be distinguished guests including Donald Ratcliffe, Rosemarie Zagarri, Matthew Mason, and Andrew Shankman, plus Robertson and Lampi themselves. Fellow Common-place readers and historians are urged to join in by commenting on the blog or sending their own contributions to me at PasleyJ@missouri.edu.

Finally, as Common-place moves toward an upgrade of the site’s interactive features in 2009, each article in the special politics issue will have a dedicated comments page on the blog, accessible through a link at the bottom of its page. I will be moderating these comments and trying to facilitate dialogue between authors and readers if humanly and technically possible. Screen names are allowed and all non-offensive, reasonably on-topic comments will be posted. Let the reactions and corrections begin!

 

This article originally appeared in issue 9.1 (October, 2008).


 




Blogging, with Pickles: Adventures (and misadventures) in the quest to capture the flavor of everyday school life

When, not for the first time, the ever-encouraging director of technology at my school suggested that I jump on the blogging bandwagon earlier this year, I was finally inclined to hear his message and take his advice. There were a few reasons. The first, of course, was simple curiosity. Like a lot of people, I’ve gradually become aware of what might be termed the blogging craze of the early twenty-first century, a cultural practice at the center of what we’ve come to call “Web 2.0.” I first learned of blogging in early 2001, when doing research for an updated edition of my book on the history of popular culture. It wasn’t until years later, however, that I became a regular reader of blogs, mostly those of major news outlets. The IT director at my school turned me from a reader to a writer by telling me about the existence of a series of idiot-proof software programs that allow even a technophobe like me to have his own blog. Because he happened to be most familiar with Blogger, a free software package hosted by Google, I took that route.

That’s the way I went but not quite why I went. Actually, the forces leading me in this direction were at least as much cultural as they were technological. Most of the professional ambition of my life has focused on books, and at the start of 2009 I’d just published my tenth. But I had serious doubts that I would ever produce another one. This was partly a matter of fatigue. At the same time, I’d long been a student of the publishing industry, which is, of course, in terrible shape (as it was in 1985, when I got my first job out of college working for Simon & Schuster; lamenting the state of publishing is the one constant of the business). The challenge now is not simply the usual one of the difficulty even established authors have getting books published—and as a mid-list author at best, my track record hurts more than helps—but that the Web is transforming both the form and pace of information distribution.

 


 

It doesn’t really matter what we teachers say: going to the library—even placing an order with Amazon.com—is simply not the way most of the young people I work with get their information anymore. Yes, of course, I take them to the library, have the librarians walk them through the stacks (along with the databases), and demand that they have print sources in the bibliographies of their research essays. But let’s face it: in real life, if it’s not online, it effectively doesn’t exist. The book business hasn’t gone the way of the record business, yet, but it’s only a matter of time. As I was writing this article, I bought my first e-book to be read on my iPod. I very much doubt it will be my last.

But while it’s one thing to have new means of communication and to believe they’re important, it’s another to have anything useful to say. We all know the blogosphere is littered with the detritus of failed experiments, dated information, and mindless drivel. Moreover, the mere existence of quality material doesn’t mean anyone will find it, much less read it. I have a few topics on which I consider myself a bona fide expert (e.g., the music of Bruce Springsteen). I can also knock out a halfway decent book review pretty quickly. But so what? Sure, I could post pieces along these lines (and have gone on to do so), along with millions of others. But was there anything I could talk about usefully in an ongoing way?

I decided the answer was yes: I could write about classroom teaching. From what I could tell, the professional discourse of pedagogy takes three forms. The first is empirical research that seeks to ascertain concrete answers to notoriously elusive questions about things like teacher quality, the impact of variables like school or class size, and the relationship between curricular and extracurricular dimensions of a child’s educational experience. The second is more theoretical work, sometimes informed by empirical research, about the nature of learning. And the third is what I’ll call “Try This” literature, educational recipes that take the form of specific techniques or content you simply pop into the classroom, season to taste, and serve. (We’ve published some work of this kind in Common-place.) Spread across a vast K-12 domain, dogged by questions of professional prestige that have plagued primary and secondary school teachers since the nineteenth century, this discourse is at best a sprawling bazaar in which you can occasionally find treasured information. At worst, it’s an impenetrable jungle.

I decided I wanted to try a different approach. Rather than describe educational issues, problems, or practices in an abstract way, I would depict real-life situations that were both concrete and resonant. Rather than tell, I would show. How can you have a real conversation about the cost of rail travel in the industrial revolution? What are some techniques for dealing with passive students? Over-assertive parents? The important point here is that I would not necessarily be showcasing best practices in a Try This kind of way. Instead, the goal would be to set up truly arguable scenarios that would further discussion and reflection rather than deliver information. Above all, I would try to capture the life of teaching in motion, the way teachers constantly make choices in real time, like actors or musicians practicing a craft. Rather than continue the century-long chase of educators seeking to derive their legitimacy by couching discussions of their work in terms of social science, I would locate the discourse of education where it belongs: in the realm of art.

 


 

In adopting this method, I would be emulating the work of my father-in-law, Ted Sizer, an education reformer who, in Horace’s Compromise (1984) and other books, periodically used the device of a composite teacher (later principal) named Horace Smith to illustrate the barriers to education reform. Ted, a former headmaster at Phillips Academy Andover and later a professor of education at Brown, had an eagle-eye view of the educational horizon formed over decades of close observation at hundreds of schools, though one in which the classroom teacher was always pivotally important. As a classroom teacher myself, I wanted to take his work to its logical conclusion: to ground conversations about education in the people who actually do the work of educating—which, to a great extent, of course, includes the students.

You see where this is going and where I would be likely to have problems. I dubbed my project “The Felix Chronicles,” in honor of Felix Adler, the Progressive-era founder of my school. My posts, typically about a thousand words, were set in recognizable locations and narrated in the first person: I was unmistakably me. Naturally, I merged, altered, and invented identities for my students and colleagues, whose privacy I took seriously. The fact that the focus of my posts was a U.S. history survey of which I had taught multiple sections going back many years made it easy for me to scramble people and topics. I was interested in the most ordinary of interactions and situations in everyday life, not personal secrets or institutional controversies. But I knew what I was doing was risky, and I knew that sooner or later I would make a mistake of one kind or another.

It took about two weeks. The blog post in question, called “Pickles,” described an interdepartmental meeting to discuss a joint English-History curriculum. There has been longstanding friction between the departments at my school (not unique, I’ve learned) stemming from a belief that such efforts seem to turn literature into a handmaiden of history, shoe-horning it, for example, into a chronological sequence that English teachers don’t particularly like. To literally dramatize the point, I invented dialogue in which an irritated English teacher notes the absence of Emerson from a proposed curriculum I’d drafted. When I note that the Sage of Concord is there, pointing to a line in the draft that specifies “Emerson and/or Thoreau,” this teacher replies irritably, “You’ve just illustrated my point. Don’t you see? It’s like pickles or coleslaw. One or the other. A side dish. And the burger is History.” Other English teachers rally to this argument, and I realize they’re right. That was the point of the piece.

However, the sympathetic but concerned chair of my department reported to me, that’s not what my readers were taking away from it. Instead, the focus was on the way my pickles character corresponded to colleague X.

“But it’s not X,” I objected.

“Well that’s who the students understand it to be.” My chair went on to note that the figure in question had a similar hairstyle and had generated impatience within the English department, two traits well known to students and faculty at the school. I had to concede the point; indeed, I had foreseen the possibility at the time I wrote the piece, which is why I went out of my way to give this character a very different personality than that of X, who in fact is quite a genial person with a different position on the issue in question than that of the character I described. But I could see now that I’d been careless. My intent was to raise pedagogical questions. Instead, I’d unwittingly written a gossip column.

Alerted to this reality, I went back and overhauled the post, determinedly altering the character in question even more. I considered going to X and offering an apology. But it seemed odd to apologize for a matter that was more about what others thought than what I said, and I feared it might only inflame the situation further. Whether or not X was aware of the situation—this is the kind of person who might well have laughed it off—there was no sign of distress, and indeed we remained as friendly as ever, notwithstanding the awkwardness I felt and the debit that remains on my moral ledger. A few days after our “pickles” conversation, the chair approached me again, noting that the buzz hadn’t gone away. So I deleted the post entirely, which appeared to put out that particular brushfire. Still, I decided that if the matter resurfaced or another like it came up, I would pull the plug on the project as a whole.

The experiment lasted another four months, effectively making it a semester long. In that time, I think I got better at framing issues and practicing discretion, and I got some positive feedback inside and outside the school along the way. But I began to see that the mere knowledge I was blogging could conceivably have an impact on what a student might or might not say in the classroom (one student wrote to tell me of reading the blog and hoping to surface on it). Ironically, the final shot across my bow was a function of an attempt to parry such issues. I took a snippet of classroom exchange and moved it to a different course in a different grade. Nevertheless, I heard third hand that the students in the latter class were certain I was writing about them, when no part of the conversation could accurately be attributed to them. Still, the facts were beside the point. I did not believe I had done anyone serious harm. But I knew that sooner or later I really could and that my primary obligation was to my institution and the people who comprise it, not my profession (or my aspirations). I wrapped the project up with a few more pieces, mostly personal reflection, and brought it to a conclusion.

Yet I haven’t been willing to give up on it entirely. This summer, I embarked on a new series. My protagonist is a wholly imaginary character, a wise Latina woman named Maria Bradstreet. Maria is a forty-nine-year-old recent divorcee who left her job in New Hampshire to take a position at the fictive Hudson High School, located somewhere in metropolitan New York. So far, Maria has dealt with situations like her ambivalence about getting help with her laptop from a sixteen-year-old, deciding how much homework to assign for a new elective she designed, and grappling with a crying student whom she encounters in the girls’ bathroom. Fiction gives me a legal firewall, but I will continue to have to navigate ethical issues in what I still consider, for the moment anyway, an intellectually legitimate enterprise.

Writing this piece has been a somewhat sobering experience because it has led me to reflect on a series of aspects of my project, some of which I could anticipate before I started and some of which I have only started to apprehend. I think we all know that technology, specifically that cluster of innovations we designate with the shorthand “Internet 2.0,” is transforming our social lives and the boundary between public and private, even as the implications remain far from clear. But until I was actually participating myself, I don’t think I understood the extent to which activities like blogging and social networks like Facebook (which I use to promote the blog to friends and former students) have invisibly reached into the traditionally entrenched space of the classroom, a reach that administrators, politicians, and reformers can only envy. These new developments haven’t necessarily subverted traditional teaching—or replaced old-fashioned forms of gossip that certainly require no Wi-Fi access—but we ignore them, or uncritically embrace them, at our peril.

Technology is also partially a factor in what I see as a broader epistemological shift in intellectual life, one anticipated by postmodern theory and now literally being played out on iPhone and other screens, in which the constructed nature of reality trumps the positivist foundations of intellectual inquiry long central to the educational enterprise. If you had told me a year ago that my principal pursuit of professional development would take the form of writing fiction, I would have found the idea laughable. And, of course, I’m positively old-fashioned in crafting what are in effect didactic short stories. Parents, administrators, and government officials may obsess about test scores, but the real frontier of learning these days is the documentary films, Websites, and games that good students are as apt to design as to watch or play. Anyone who thinks otherwise will be left behind.

I think my experience also shows the excitement, possibilities, and limits of the new world in which the barriers to publication are effectively removed. Having lived most of my professional life trying to win the approval of gatekeepers for periodicals and publishing houses, I now have an exhilarating freedom to broadcast whatever I want whenever I want. Of course, getting people to pay attention (forget about getting paid) is another matter. Moreover, an urge for an audience brings with it a great potential to lead one astray. And the chase for readers may also obscure the importance of another group of people who become more valuable as they become more rare: good editors. Much of the last year, I’ve been as nervous as I have been excited at walking a publishing tightrope without an editorial net. (My wife, God bless her, has pitched in with copyediting, criticism, and other thankless duties.)

Again, I still feel like I’m in the middle of a provisional experiment. In the spirit of interactivity widely hailed as the hallmark of the new media, I’ll end by saying that I’d be glad to hear what you might have to say.

Further Reading:

Theodore R. Sizer’s Horace’s Compromise: The Dilemma of the American High School (Boston, 1984), Horace’s School: Redesigning the American High School (Boston, 1992), and Horace’s Hope: What Works for the American High School (Boston, 1996) all use the device of a fictional character. So does Sizer’s book coauthored with his wife Nancy Faust Sizer, The Students Are Watching: Schools and the Moral Contract (Boston, 1999). Katherine Simon includes slices of actual classroom dialogue in her book Moral Questions in the Classroom: How to Get Kids to Think Deeply about Their Life and School Work (New Haven, Conn., 2001). Jim Cullen’s complete set of “Felix Chronicles,” along with the ongoing “Maria Chronicles,” can be accessed at his blog, American History Now.

 

This article originally appeared in issue 10.1 (October, 2009).


Jim Cullen teaches at the Ethical Culture Fieldston School in New York. His most recent book is Essaying the Past: How to Read, Write and Think about History. He blogs at American History Now, where examples of the work he describes here can be found.




“Doomed … to eat the bread of dependency”?: Insuring the middle-class against hard times

In January of 2009, a major car manufacturer announced a novel “Assurance” program which was designed to reduce the risk to consumers of taking on new debt during what has been dubbed “The Great Recession.” If the buyer became unemployed or disabled, went bankrupt, or accidentally died within a year of purchasing her shiny new crossover vehicle or his sporty coupe, he or she could return the car to the dealer and (metaphorically) walk away from any financial obligations with no further penalties. In crafting this program, the company cleverly homed in on the two essential elements of middle-class mentalité. On the one hand, consumption serves as a critical indicator of both current class status and confidence in one’s future; the economic and social aspirations of the middle class are best demonstrated by what one owns. On the other hand, the assumption of additional debt might easily undermine a middle-class existence dependent on the continuation of a regular income. Particularly during hard times, the fear of potential failure and loss of status causes the middle class to become extremely risk averse. The consumptive impulse (the devil on your left shoulder telling you to “buy! buy! buy!”) is now replaced by the contrasting virtues of frugality and self-restraint (the angel on your right shoulder telling you to “save! save! save!”). Hoping to clip the angel’s wings, this “Assurance” program aimed to remove the risks associated with conspicuous consumption, allowing middle-class consumers to focus on their aspirations rather than their fears.

While the crossover vehicles of the early American republic relied on a slightly more literal understanding of “horse power” and “air conditioning,” two hundred years of industrial progress has nevertheless left the competing elements of this mentalité virtually unchanged. Then, as now, this self-described class optimistically assumed that economic mobility was always possible, that talent and hard work would be recognized, that modest comforts in the present were the just rewards of such previous efforts, and that the ability to secure an even better economic future for one’s children was the ultimate payoff. Yet, while prosperous times encouraged this optimism, economic downturns revealed the fragility of such beliefs. As historians such as Edward Balleisen, Bruce Mann, and Scott Sandage have revealed, the potential for profit and prosperity needed to be carefully balanced against the perils of engagement with the marketplace. As Americans moved from agricultural regions into burgeoning, anonymous cities, as families became dependent on the cash income of the head of the household rather than a more holistic family economy, and as businessmen became aware that their economic fortunes were as much dependent on the booms and busts of the business cycle as on their own abilities, the long-term economic fate of middle-class families became increasingly uncertain and fraught with anxiety.

Most studies of the nineteenth-century middle class in America have focused on providing a group portrait, correctly emphasizing that it cannot solely be defined by income level. Rather, one must also consider their social and cultural conduct, including their movement into salaried, non-manual occupations, their embrace of reform movements, their assertion of control over family size, their segregation of public and private sectors in and outside the home, and their consumption patterns. Yet this group portrait neglects to explain how the middle class reacted to economic shocks. What did they actually do when facing hard times? How did they preserve their aspirations for the future—a critical element of their status—when facing economic struggles in the present? It is only when middle-class hopes and fears meet the realities of a modern economy that a fuller picture of its experiences can emerge.

Edward Balleisen addresses these questions in Navigating Failure, where he examines the actions of several hundred middle-class bankrupts during the 1840s and concludes that the desire to mitigate risk was the key lesson taken from their experience. While many embraced the risks inherent in the market by seeking to re-establish themselves as independent small proprietors after their initial failure, they now adopted a much more cautious approach in their commercial reincarnation by seeking to limit their use of credit and avoid the high risks (and thus high returns) of more speculative opportunities. Other bankrupts adopted a more extreme response, either rejecting the market outright by embracing a communal lifestyle in one of the many utopian communities of the period, or by seeking to reproduce the landed independence of previous generations in becoming a farmer out West (with or without market aspirations).

Many more sought a compromise between these two extremes: limiting their exposure to market risk by redefining ideal middle-class vocations. The economic independence of self-employment, once valued as the definitive attribute of a middling competence, was now re-conceptualized to signify an occupation that bound the proprietor in a life-long struggle with credit and debt, raising questions about the ostensible “independence” that position conferred. In contrast, salaried occupations were now reconceived as providing personal independence even as they reduced the once self-employed man to the role of employee. As Balleisen concludes, these risk-averse individuals “redefined autonomy in terms of security and freedom from the anxieties that so often beset the owners of business ventures.”

While hard times often occurred as the result of macroeconomic shocks that resulted in widespread suffering—such as the Panics of 1819 and 1837, the Great Depression, or our current “Great Recession”—individual families might also fall on hard times independent of the booms and busts of the business cycle. In particular, the death of the main breadwinner quickly exposed the precariousness of middle-class status. Remedies such as the short-lived Bankruptcy Act of 1841 were designed to cushion the impact of business failure by enabling many American debtors to break free from their overwhelming debts and start their economic lives anew. Yet even this solution left families vulnerable if the head of the household died. What if death intervened before the bankruptcy proceedings were completed, or before the person was able to reestablish himself in business? As one observer concluded in 1837, “The late and present pecuniary embarrassments of the mercantile world, and the consequent derangement in every thing connected with it, … show conclusively the necessity of making provision for dependents that shall be beyond the control of reverses in trade or commerce.” In particular, this provision needed to take into account the possibility of death, “a contingency which, when it happens is irremediable—beyond which no recovery of disastrous step can be made.” Whereas the negative economic impact of death had always been present, panics and depressions served to underscore middle-class fears of failure and socio-economic decline. “With these turns in the business cycle,” Mary Ryan contends in Womanhood in America, “many a loyal wife watched her economic security disintegrate in some financial wizardry that she scarcely understood.”

The negative impact of death was only compounded as urbanization removed the economic and social safety nets that had existed in rural societies. Several options were available for widowed and orphaned families in the countryside. Under the common law of dower, widows received a fixed share of the real property owned by their husbands; under the law of most states they received lifetime use of one-third of the husband’s landed property. Fatherless households could thus continue running the family economy in his absence—particularly when older sons were available to help on the farm or continue his trade. The assistance of family and neighbors was vital to this transition, while children and paid farmhands provided long-term stability. Even families lacking the economic, emotional, or human capital necessary to continue without a husband and father were not left out in the cold. Neighbors and relatives readily incorporated victims of loss into their own household economies. Historian Jack Larkin has pointed out that “the chances for early death made for many widows and widowers who frequently found places in the households of their children or of their married brothers and sisters. Kinfolk came into their relatives’ families as paid or unpaid domestic help, apprentices and employees, and even paying lodgers.” The very nature of the household economy in agrarian America helped to shield families from sudden economic dislocations when the family head died.

The situation for urban families was much different. While a solvent businessman could rest assured that his firm would provide for his family after his demise, either through continuance by another family member or by a profitable liquidation, the salaried man could take no such consolation since his money-earning power died with him. The transition of the home from a place of production (with responsibilities divided among all family members) to one exclusively of consumption put considerably more pressure on the role of the primary male breadwinner, who knew that the family could not survive without his income. Families who lost their fathers and husbands faced a dismal fate because the wages paid to women were not designed to support their survivors, and a decline in class status necessarily ensued. For many, then, remarriage—and a return to the dependent confines of the private sphere—was the only means of maintaining their middle-class status.

Nineteenth-century fathers similarly experienced increasing expectations to establish middle-class foundations for their children. Whereas the offspring of previous generations were likely to follow directly in the footsteps of their parents, a father now needed to ensure adequate education and training, if not capital investments, for his son’s future career. His daughter, as well, needed to acquire the appropriate literary and musical skills to be considered marriageable. For many fathers, the ability to provide a proper education for their children became both symbolic of their success as a middle-class parent and a critical sign of their continued class status. Yet a father’s untimely demise might force his children to enter the workforce prematurely, sacrificing the education that was increasingly critical to middle-class life.

Support from family or neighbors was much more limited in the cities as well. Urban residents were not heartless, but interpersonal obligations and connections were fluid in this highly mobile environment. Whereas the needy taken into rural households added productive units that at least partially compensated for their additional consumption, in the city they became extra mouths to feed, placing undue strain on household budgets. In the mushrooming towns and cities of antebellum America, the plight of the widowed and the orphaned thus emerged as a new concern, particularly as the waged piecework women could take home was not designed to guarantee survival.

During the early nineteenth century several charitable organizations such as the Society for the Relief of Poor Widows with Small Children, the New York Female Guardian Society, the Association for the Relief of Respectable or Aged Females, and the Association for Improving the Condition of the Poor were founded in order to provide aid for the “deserving” poor—among whom were included (according to the 1852 report of the latter) “females once in comfortable circumstances who have been reduced to poverty by the death or misfortune of their husbands and relatives.” While these charitable organizations certainly performed a crucial function for nineteenth-century Americans in need, no moderate-income father wanted his family to become “dependent upon the cold charities of the world” after his demise. He found it “indispensably necessary,” rather, “that some sure and unfailing provision should be made to those who are dear to him, a sufficient competency to place them beyond a miserable dependence upon public charity after his death.” Indeed, even though most charities targeted those considered most deserving, the idea of receiving charity remained stigmatized, and many widows who aspired to protect their middle-class status refused such handouts.

Advertisement, “Life Insurance Agency at Hartford, of the New-York Life Insurance and Trust Company,” Connecticut Courant, June 3, 1837, issue 3776. Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

To avoid subjecting one’s family to such a horrifying potential fate, middle-class fathers increasingly sought out the protection offered by the emerging life insurance industry. Starting in the 1810s and 1820s with the chartering of the Pennsylvania Company for Insurances on Lives and Granting Annuities and the Massachusetts Hospital Life Insurance Company, and then rapidly growing in popularity during the 1830s with the New York Life Insurance and Trust Company and the Baltimore Life Insurance Company, life insurance promised to maintain the economic wellbeing of a family after a breadwinner’s death. In particular, the industry understood that the growing urban middle class faced opportunities and anxieties that made them unique among Americans. As an 1858 New York Life Company brochure proclaimed, life insurance appealed not to their status consciousness—as people “who imagine themselves rich in this world’s goods”—but to their recognition of the fragility of that same status. By providing not just “a certainty against future want” but also the “comforts of life” for families (a contemporary concept signifying the consumables that marked a family’s class status), life insurance offered a hedge against the economic vicissitudes of middle-class life. As is evident in the marketing literature produced by the industry, all antebellum life insurance companies believed that their most lucrative business would come from fathers whose death would leave their families in “pecuniary distress,” “in want,” in a state of “poverty, in the hour of their distress,” suffering “sacrifice and loss,” or exposing them “to insult and poverty” or “the horrors of destitution, of want, and of misery.”

Reflecting the most basic anxieties of middle-class Americans, these bleak descriptions resonated throughout urban society. Letters to Baltimore Life from potential policyholders have survived, providing a rare (albeit brief) glimpse at people’s motives for insuring themselves during the 1830s. One lawyer from Sanford, Virginia, sought a $3,000 policy “to insure a living to my wife,” while another Virginian, in anticipation of making a marriage proposal, wanted “to secure to a Lady if she shall survive me, $10,000, if not then to my children.” A Richmond, Virginia, businessman—who would later become Baltimore Life’s local agent in that city—explained: “It has since occurred to me that having a Family of young Children dependent in a great measure on my exertions it would be a matter of prudence to effect a Life Ins[uranc]e. Say to the am[oun]t of $3000 for their benefit.”

Merchants and small proprietors, in particular, were well aware of the high rate of business failure in America, and thus sought insurance to protect their families against the risks inherent in their source of livelihood. “I have been raised to the mercantile business, and think I have the capacity and opportunity to employ capital advantageously,” explained one entrepreneur from Memphis, Tennessee, in 1845. “And in obtaining it,” he continued, “would be very glad to avail myself at the same time of the protection afforded by such an Institution as yours, against the vicissitudes of trade, and the sufferings to a young and helpless family which might result from my death in a state of poverty.” Middle-class inquirers sought life insurance in order to secure the economic future of their families, thus freeing them to pursue their professional aspirations with less fear of the consequences of failure.

Due to its location near the nation’s capital, Baltimore Life targeted its sales among Washington’s growing military and bureaucratic workforce. When the company officially opened a Washington agency in March of 1833, its new agent’s main objective was to sell insurance policies to government clerks. In his acceptance letter, this agent declared that he was “located in the midst of the public offices, & have an intimate acquaintance with nearly all the officers of the Departments, a class of persons of all others the best suited to the object of your Company,” due to their status as salaried professionals. As with the emerging pockets of “white-collar” workers throughout the eastern seaboard, he viewed these federal employees—many of whom were likely young clerks with higher professional aspirations—as a prime target for the life insurance industry. In an 1839 letter to the company president, the agent stressed that “Many of the clerks are notoriously improvident, most of them receive inadequate Salaries; and very many leave their families at their death in a most deplorable State of destitution.” These clerks lived on the cusp of middle-class existence. While their non-manual employment and long-term aspirations placed them firmly within a middle-class mentalité, their modest incomes left them particularly vulnerable to losing that precarious social status.

By midcentury, insurance advertisements increasingly reached beyond mere allusions to poverty or sacrifice and fully embraced the emotional turmoil that was by now a central part of middle-class life. A blatant example of this type of psychological marketing tactic is found in the 1848 brochure of the New York branch of Eagle Life Insurance Company of London. The firm painted a picture of one thousand young healthy males who dreamed of marrying and passing on their middle-class status as “successful independent operatives” to their offspring. During their lifetimes these men would easily be able to support their families in a comfortable, even refined, existence, yet Eagle Life estimated that half would die an early death: “the children of five hundred are doomed in some way, to eat the bread of dependency. There is no effort of ordinary economy which can save them from such a contingency,” which would leave them “a hunger-driven herd of shiftless individuals.”

The concluding paragraph of this description brought home the nightmare scenario dreaded by every self-respecting modern patriarch—that his children would fall out of the middle class and have to repeat their father’s and grandfather’s struggle up the ladder. Upon premature death, “his heirs and representatives must instantly descend many grades in the scale of comfort, if not of respectability; to feed on husks and breathe in corners, and find in scattered places, and among varied chances a vague hope of attaining in after years a snug hearthstone like their father’s.” These advertisements thus rehearsed a classic dictum by which the sins of the father (failing to adequately protect his family through life insurance) became a yoke borne by his children.

Distinct from the social obligations of rural society and the pity of urban charities, a life insurance policy was a breadwinner’s best investment in his family’s future. Indeed, this was a novel market solution to a pressing market problem. The growth of insurance over the course of the nineteenth century paralleled the development of a middle-class mentalité by providing a new safety net to protect middle-class widows and orphans from a loss of class status, by facilitating the aspirations and lifestyle they sought in the present, and by allowing them to continue educating their children for advancement up the socio-economic ladder in the future. It freed the middle class to take more risks during their lifetimes, since wives and children would now be protected from the risks of death in a modern economy.

Further reading

References to life insurance company brochures are found in Historic Corporate Reports, Baker Library Historical Collections, Harvard Business School. Correspondence with insuring customers is from the Baltimore Life Insurance Collection, MS 175, H. Furlong Baldwin Library, Maryland Historical Society.

For general studies of the nineteenth-century middle class, see Cindy Sondik Aron, Ladies and Gentlemen of the Civil Service (New York, 1987), Stuart M. Blumin, The Emergence of the Middle Class: Social Experience in the American City, 1760-1900 (New York, 1989), T. Walter Herbert, Dearest Beloved: The Hawthornes and the Making of the Middle-Class Family (Berkeley, 1995), Paul E. Johnson, A Shopkeeper’s Millennium: Society and Revivals in Rochester, New York, 1815-1837 (New York, 1978), or Mary P. Ryan, Cradle of the Middle Class: The Family in Oneida County, New York, 1790-1865 (Cambridge, 1981).

For studies of how early Americans functioned during economic downturns, see Edward J. Balleisen, Navigating Failure: Bankruptcy and Commercial Society in Antebellum America (Chapel Hill, 2001), Peter J. Coleman, Debtors and Creditors in America: Insolvency, Imprisonment for Debt, and Bankruptcy, 1607-1900 (Madison, 1974), Bruce Mann, Republic of Debtors: Bankruptcy in the Age of American Independence (Cambridge, 2002), or Scott A. Sandage, Born Losers: A History of Failure in America (Cambridge, 2005).

For shifting expectations within the household, see Stephen M. Frank, Life with Father: Parenthood and Masculinity in the Nineteenth-Century American North (Baltimore, 1998), Shawn Johansen, Family Men: Middle-Class Fatherhood in Early Industrializing America (New York, 2001), Jack Larkin, The Reshaping of Everyday Life: 1790-1840 (New York, 1988), or Mary P. Ryan, Womanhood in America: From Colonial Times to the Present (New York, 1983).

Finally, to contrast the urban experience with rural America, see Joan M. Jensen, Loosening the Bonds: Mid-Atlantic Farm Women, 1750-1850 (New Haven, 1986), or Nancy Grey Osterud, Bonds of Community: The Lives of Farm Women in Nineteenth-Century New York (Ithaca, 1991).

 

This article originally appeared in issue 10.3 (April, 2010).


Sharon Ann Murphy is an associate professor of history at Providence College. Her first book, Investing in Life: Insurance in Antebellum America (forthcoming in 2010 from Johns Hopkins University Press) reflects her interests in the complex interactions between financial institutions and their clientele. She plans to focus her second book on the public relations problems of antebellum commercial banks.