The Truth of the Picnic: Writing about American slavery

i

Writing about American slavery is no picnic. Hard to imagine anyone undertaking this for fun. Writing about slavery is more of a nightmare, although to this writer, this seems like stating the obvious. It is the obvious that rattles you, however.

Finding (or seeking) relief from resonance seems a fruitless pursuit. Images which to others seem simple or even banal rage and scream and writhe under the pen, threaten the writer with dual curses: stories overdone, tragedies understated.

Burning hot sun associates with a criminally violent South. Rope suggests the noose and lash, as tree branches recall strung bodies and intentionally broken necks. (An African American woman in the nineteenth century–Ida B. Wells–had to take lynching on as a personal, fevered, journalistic crusade in order to free us from the trees. And this was in the aftermath of slavery, death by lynching after ownership was outlawed.)

Thus, there are images we can’t hide from or avoid; images that don’t enrage people who haven’t known, emotionally, psychologically, in the cells. We live in a tragic vortex, the kind from which sweeping epic stories of a people are spun. We are neither flaxen, nor hidden in towers. We have been burdened with an evil aspect.

Ropes and trees and pyres and sunshine are not innocent, but grim. All the world shudders when you look up from the fields, when you are dodging the whip, when you research your personal history, when you make the effort to identify, to remember, to render.

ii

Gapers surrounded where niggers were burned. Being tied to a stake and exterminated seemed common, gory, routine. The stakes were high. America refuses to acknowledge this memory, this collective recollection African Americans can’t escape. Stinging stinking bleeding draining aching piercing truths. Willful abuse, intentional destruction, shackling, binding, reducing. Only our psychologies know this pain now.

Pain is part psychology; the memory of pain is wholly that. People are consequently crippled by failure to thrive. Failure to thrive can be congenital, inherited. Despair can pass through generations, invade the birth canal, infect the swimming seed. Disease in a public domain. Indignity persists–like cancer–especially shame and unchecked, unacknowledged rage. The plagues of history persist and make a future epidemiology. Blood flows from there. Slavery an infection oozing pus, and the wound unseen, untreated, underneath. From there the future rises. History persists.

iii

Slavery and its aftermath are human drama still unsettled. Administrators, timekeepers, civil servants, guardians of the state try to revise our understanding of the period and its outcomes. An effort to convince us that the drama is over rages. Some of us insist, and rightly so, that we are now in this drama’s second act, we have not moved beyond the raised curtain, we are still in shock at what we have finally seen.

We can only argue over whether this drama is done. Both the act (slavery) and its aftermath (now) seem like swirling and flailing and being sucked up. Odd similarities between bondage and its resulting “freedom.” To those of us who have inherited slavery as an experience of grand imposition, constraint and abuse, this swirling seizing vortex is unsettling, but deed- and lash- and fire-free. We flail but find groundlessness and chaos an improvement over tending ground for profit, for a separate, avaricious, maddeningly protected class.

iv

We have been maligned malappropriated and misused by people who misrepresented themselves–purported to be angelic, orderly, appropriate, blessed, sanctioned, right. For centuries they traded in us, worked our bodies and body fluids deep into the fertilized fields. Hid under brims of avarice, wore affluent hats. Meted and obscured deep pockets of knowledge, with dedicated, malicious intent. Used us as tools, and as currency. Wore us out, or passed us on. Sent us out to the dirt plains to die–chopping or cutting or seeding or weeding or turning ground. Ginning or starving or bleeding under a whimsical punishment, decaying under trumped-up law.

 

Broadside, "Great Sale of Land, Negroes, Corn & Other Property." Dated Charleston, 24 November 1860. Courtesy of the Gilder Lehrman Collection, Pierpont Morgan Library.
Broadside, “Great Sale of Land, Negroes, Corn & Other Property.” Dated Charleston, 24 November 1860. Courtesy of the Gilder Lehrman Collection, Pierpont Morgan Library.

 

v

This recalcitrant reality is hard to survive, or to revisit. In our time, and in that time, our psychologies suffer(ed) and writhe(d). Every writer knows psychology as artistic terrain.

When we invent or appropriate–narrative, characters, situations–and cast them into text, we writers own the stories we’ve made. When we choose to write about history, though, the hands that precede us have writ, and have cast some facts and some fictions into insistent, resistant, stone. There are only two choices really–write about now or then. Our time, or history. A bald and unyielding duality, this choice.

Some writers shy away from writing about the present. New years, new decades, new sunrises–incoming phone calls even–could change the course of events unfolding. The present is not fixed. And so, devoting time and sweat and angst to characters wrestling over fluid events seems improvident, daunting, a slippery slope.

Comparatively considered, history is fixed. Toni Morrison has asserted, by contrast, that history is fungible. Certainly, the facts and fictions that constitute “history as accepted” bear examination, study, sorting out. Those of us who have studied and written and lived long enough have learned an almost nasty truth about the past: we get the “power version” of the time that came before us. That is, history as we read it (and therefore know it) has been propagated–planted, tilled, nurtured, weeded, guarded, managed, harvested–by the holders of the pen, those who wear the badge, the self-proclaimed community of power, the perpetrators.

vi

Who owns history? Everyone and no one, or both. A more important question, though, at least for the writer-me, is who tells the truth? We are, by now, in this, our new century, familiar with liars and manipulators, and their strategies: racism by omission, disenfranchisement by law, genocide by lash and under-education, paint by number.

The spirit or the soul partners body and its instincts with mind and its labels. From this multiplicity personal psychology springs. What intellectual acrobat will you find stooped in the cotton fields with sore and bleeding hands and split and bleeding feet? How much mental energy greets those sore and swollen hands and feet? How can anyone’s thoughts possibly leap? What lessons of pain or grace will the stooped fieldworker teach the young ones? How much hope burns in the abject? Who can healthily remain conscious, awake?

vii

There is a flip side to this misery: the owner class revels. They controlled people. Underfed them. Overworked them. Raked in profit. And passed all this on to their descendants in the name of empire, inheritance, trust (an oxymoron), wealth. This horrid past of prejudice. Massive money made, unshared.

All this the writer has to grant her characters. Even though so many would rather not hear it, so many of the owner bunch vehemently decry, deny.

viii

In our country, there is real estate. This notion of what’s palpable, tactile, and perpetual. There was no real ownership for slaves, none of any kind. Where slaves lived was called the quarters, which suggests, in itself, places partial, subsidiary, undeeded, unkind.

Living was segregated, and it stayed that way, for an unarguably long time.

But worst, it seems, was the calculated refusal to teach the culture, to share the culture. No counting, no reading, no money. No sense of humanitarianism, no admission of worth.

There were the slaves, in the center of a commerce. Money flowed around them and through them, money was made and manufactured by them.

And they had none.

ix

We–writers, readers, regular people–cannot escape the language either. We are assaulted by word, by image, by historical fact.

There are right now, for example, arguments between legend (or folklore) and academia, about the definition or origin of the word “picnic.” American English defines the word, of course, as an outdoor meal, planned in advance, and usually for fun. Or, alternatively, colloquially, a pleasure.

Folklore has it that picnic derived from the slaveholding practice of playing games, planning amusements, with “picking a nigger” being a central part of the game. Sometimes the word picnic is reported as a straight derivation of “pick a nigger” and sometimes as a variant of “pick a pickaninny.” (This word, pickaninny, has a dictionary definition of a small Negro child. Of course, no one has used this word outside the context of slavery, and I continue to be surprised when I find it in the dictionary. I daresay I have looked this word up a hundred times in my life, and I continue to be amazed; I continue to be intrigued by the shades of difference in the definitions.) The fabulous new encyclopedia, Africana (Boulder, Colo., 1999), has a special missive that reports that there is no verifiable connection between the word picnic and the word pickaninny or the phrase “pick a nigger.” Of course, it isn’t the word or phrase that’s so unsettling: it’s the notion of games being played at the expense of a people. Fun games, evil games, war games–no difference. That the Africana missive exists seems to me to represent evidence. Oral history being impossible to argue against, inherited opinions being impossible to refute.

x

Occasionally, I feel trapped in this box as a Negro writer. The questions I’m asked to address, the issues I can worry over and write around, but certainly cannot resolve. I am concerned with broader ideas than race. On many mornings, race hardly seems relevant, since I often write about love and sex and intelligence, about human possibility. But to be who I am is to be a child of the slave institution. To be who I am is to be a writer whose awareness has three hundred years lost. To be who I am is to be angry and outraged and sometimes falsely humble.

To write about slavery is to face honestly what is denied, to wage war and not die.

 

This article originally appeared in issue 1.4 (July, 2001).


A. J. Verdelle is the author of The Good Negress (Chapel Hill, 1995), for which she was awarded a Whiting Writer’s Award, a Bunting Fellowship at Harvard University, a PEN/Faulkner Finalist’s Award, and an award from the American Academy of Arts and Letters for distinguished prose fiction. Her novel in progress, Meanwhile Back at the Ranch, follows a family of former slaves as they cross this wide country endeavoring to secure the promise of freedom. Verdelle teaches creative writing at Princeton University.




Harry Potter, My Daughter, Elihu Smith, and Me

Near the climax of his second book of adventures, Harry Potter and the Chamber of Secrets (New York, 1999), the world’s favorite British prepubescent nerd-turned-wizard nearly kills himself by reading a diary. What happens, without giving away too much, is this: At the Hogwarts School of Witchcraft and Wizardry, in an alternate dimension somewhere near London, Harry finds a diary someone has tried to flush down a toilet. “The little book lay on the floor, nondescript and soggy.” Curiously, it’s blank, but Harry keeps “picking it up and turning the pages as though it were a story he wanted to finish.” Eventually he discovers that when he writes in the diary, it writes back to him. “My name is Harry Potter,” he writes, dipping his quill in scarlet ink. “Oozing back out of the page, in his very own ink, came words Harry had never written. ‘Hello, Harry Potter. My name is Tom Riddle. How did you come by my diary?’”

 

J.K. Rowling. Harry Potter and the Chamber of Secrets. New York: Scholastic, 1999.

After they’ve exchanged pleasantries, the diary, once owned by Tom Riddle, a former Hogwarts student, promises to reveal secrets about certain mysterious events at the school. It makes this promise by offering to let Harry step inside Riddle’s memory to witness scenes from the school’s past, half a century earlier. “Let me show you,” the diary pleads. And then, “Harry saw that the little square for June thirteenth seemed to have turned into a miniscule television screen. His hands trembling slightly, he raised the book to press his eye against the little window, and before he knew what was happening, he was tilting forward; the window was widening, he felt his body leave his bed, and he was pitched headfirst through the opening in the page, into a whirl of color and shadow.”

Once inside, Harry is sort of like Jimmy Stewart in It’s a Wonderful Life: he enters rooms, overhears conversations, feels as if he’s there, but no one can see him. He gathers a certain amount of information about the past, thinks he has solved the mystery plaguing the school, then whirls back out of the diary to land on his dormitory bed.

A few chapters later, Harry encounters Tom Riddle again, only this time the diarist has escaped the volume in which he had been prisoner. He’s a little blurry around the edges. “Are you a ghost?” Harry asks. “A memory,” Riddle responds. “Preserved in a diary for fifty years.” It doesn’t take long for Harry to realize that Tom Riddle wants to kill him. Riddle has learned that Harry, in Riddle’s distant future, would foil the diarist’s most nefarious schemes. Harry also realizes that the “memories” he witnessed, when he had been sucked into the diary, were a sham. Riddle hadn’t let Harry see the whole truth; his distortions had led Harry to false conclusions in his detective work. After protracted verbal sparring followed by hand-to-hand combat, Harry defeats Riddle by stabbing the diary. Scarlet ink spurts out of it “in torrents, streaming over Harry’s hands, flooding the floor” like blood. Riddle’s “memory” vanishes.

For eighteen months or so, as my six-year-old daughter and I read nightly from Harry’s four volumes of adventures, we felt much the way he does when he plunges, unexpectedly, into Tom Riddle’s diary. Or even when he first boards the Hogwarts Express, from an invisible train platform at King’s Cross station, and heads to the wizarding academy. Harry’s awkwardness, his orphaned status, his cruel, nonmagical relatives, make him a sympathetic figure, particularly to preteens who feel the world’s injustice on a regular basis. But another aspect of the books’ appeal, I think, stems from the fundamental way the series is about the act of reading itself. In other words, my daughter and I, as we get lost in these stories, are supposed to feel somewhat like Harry stumbling into magical dimensions and enchanted diaries. His imaginative adventures mimic readers’ adventures of imagination. Harry doesn’t nod off like Alice or Dorothy. It’s almost as if Harry has plunged headlong into a good book, and we’ve simply gone along for the ride.

Perhaps it’s because I spend a good portion of my own time reading and writing about (and sometimes feeling lost in) diaries, but Harry’s encounter with Tom Riddle’s magical memoir is one of my favorite episodes in the series. The interactive diary has provided me with images that resurface every time I open a diary I’m studying or writing about. What would it be like to step inside the memory of the diarist? To experience a diary as if it were a video replay? What would I ask an enchanted diary if it one day spontaneously responded to my marginal annotations? Playing around with these questions has required me to think hard about my own encounters with diaries, and about the relationship more broadly between a reader in the present and a text that “preserves” a person from the past.

What Tom Riddle’s diary captures most of all, I think, is the fantasy of encountering a perfectly preserved personality, fully capable of conversing across time and space. Both in and out of the text, Harry encounters Riddle as a three-dimensional, if blurry, being, conscious of Harry’s presence as a reader. Inside the diary, Harry watches the past the way he would watch TV. But the fantasy of gaining this sort of access to a diarist’s experience is also related to an illicit tingle readers of diaries sometimes get. Reading a diary–even if its author is several hundred years dead–sometimes feels voyeuristic. I confess to having secretly read friends’ and roommates’ diaries when I was younger, hoping to find out what they really thought about me. It’s the same search for candor that gives diary readers the sense that we’re experiencing something that’s “truer” than other forms of literary writing or historical record. Tom Riddle uses this notion to draw Harry into his diary: “I always knew there would be those who would not want this diary read.” The knowledge he can offer Harry is dangerous, exclusive, and, he would have Harry believe, absolutely true.

Even as it suggests a reader’s wish to converse with someone from another time, Riddle’s diary serves as a perfect figure of what literary critics sometimes call “reader response” or “reception” criticism. These ways of reading involve a constant awareness that we can’t really encounter our subjects as three-dimensional beings, and that even if we could, it might not help us get at the “truth” any better. At the risk of crude oversimplification, reader response theorists hold that a text’s “meaning” is provided by a reader, not a writer. When we read, we create meaning based on our previous experiences and subjective views. Reception theorists, operating from a similar set of assumptions, like to think about books historically, to ask, for example, how the same work might have been differently read in different times and places. What Hamlet meant in nineteenth-century New York may not be what it meant to Elizabethan Londoners, because different viewers and readers assign the play different meanings in different contexts.

Harry Potter’s experience with Tom Riddle’s diary seems to bear out these notions of how meaning is made: the book is blank until Harry himself starts writing in it. And Tom Riddle only understands current events through his readers, through what they bring to their encounters with the text. Riddle gains clarity, takes shape, and eventually escapes the diary’s confines by feeding off his readers’ emotions.

As an undergraduate English major in the late 1980s and early 1990s, I was well trained by literary critics to discount any notion that we can recover an author’s intentions. The “death of the author,” we were informed, had been proclaimed and little mourned some twenty-odd years earlier. We were never to assume we could know what an author “really meant,” and most of all, we weren’t to attempt literary interpretation by recourse to an author’s biography. If an author seemed to be writing autobiographically, we were to understand such declarations of “selfhood” as fictions, as performances. Coherent “selfhood” was out; “subjectivity,” a more limited, perspectival notion, was in. With these principles in mind, we were to leave the author in the grave; the text would suggest its own meanings, or suggest to us how to create one. We were the authors. These cautionary tales have sometimes served me well. They remind me that any attempt to encounter the past is limited by fragmentary sources, if not by the inability of language to “represent” something perfectly in the first place.

Oddly, however, one result of these post-1970s currents in literary studies was a turn toward historical approaches to literature. Or toward a certain kind of history, anyway, since “literary history” still didn’t include that most slippery of all historical enterprises, the writing of biography. Letting go of the quest to reclaim the “true” past (including the relationship between author and text) has led literary scholars to treat all writing as literature, and to proclaim loudly that all history writing is akin to writing historical fiction. If diaries can’t be relied on for accurate representation of the past, as this argument would maintain, then we need to read them much the way we read novels. Even as fictions, they can still give us some sense of history–especially of the history of human subjectivity–without forcing us to rely on them for the truth about what “really happened” or for insight into an author’s intentions. If Harry Potter learned anything from his encounter with Tom Riddle’s diary, it should be that things didn’t necessarily happen the way the diarist would have you believe.

 

James Sharples. "Elihu Smith." Collection of The New-York Historical Society.
James Sharples. “Elihu Smith.” Collection of The New-York Historical Society.

This sort of advice doesn’t accord easily, though, with the kinds of (perverse) pleasures reading a diary can bring. For several years I’ve worked closely with one diary in particular, the diary of Elihu Hubbard Smith, a physician and poet who lived in New York City in the 1790s. Though I didn’t find Smith’s diary in a toilet, I did stumble across it (or into it?) in a way that felt like Harry’s headlong fall. As a literary critic I had been grappling with the novels of Smith’s sometime roommate and closest friend, Charles Brockden Brown, best known today as the author of Wieland (1798), Ormond (1799), Arthur Mervyn (1799-1800), and Edgar Huntly (1799). Readers of Brown’s gothic novels realize how frustrating his quirky writing can be: obscure, incomplete, dialogic, difficult to pin down in terms of its engagement with contemporary politics in particular. Reading criticism on Brown’s novels I realized that critics had never abandoned biographical claims as fully as they advised. Literary historians often read the novels, which were produced during the partisan rancor of the late 1790s, as advocating one political position or another, often by associating Brown with Smith and his other friends who formed an intellectual circle they called the Friendly Club. Those who recognized Smith’s conservative political bent assumed he must have won over Brown to Federalism. Those who knew that the club included the Jeffersonian politician Samuel Mitchill assumed the club–and therefore Brown and Brown’s novels–must have had Republican allegiances. Early on in my study of Brown, when a computer search alerted me to the 1974 publication of Smith’s diary, I headed into the library stacks to retrieve it, determined to settle the matter of Brown’s politics once and for all.

What I found in Smith’s diary–nearly four hundred thousand words written in just three years–was a text every bit as entrancing as Brown’s bizarre novels. At random I opened the enormous quarto volume to a page near the center. Smith was drafting a long letter to a childhood friend, Theodore Dwight, in which he defended his loss of faith in Christianity. Something about the letter–its passionate defense of intellectual freedom, the courage it took to face down a fiercely evangelical Dwight, whose older brother, Timothy, was known as “Pope” Dwight of Yale University–made me feel almost as if I had tumbled through a miniature TV screen and landed in a candlelit New York bachelor’s apartment two hundred years earlier. In addition to detailing these religious dilemmas, Smith’s diary was packed with fascinating narratives of city social life, literary endeavors, scientific enterprises, and two yellow fever epidemics, the second of which took Smith’s life when he was only twenty-seven. Before his early death, he had published the first anthology of American poems and founded the first American medical journal. In his diary he had drafted letters to famous Americans and to British literati; indulged crushes on married older women; fantasized about the notoriety he and his friends would gain if Brown’s fiction generated scandal. While the diary only complicated, for various reasons, the issue of Brown’s politics, it quickly raised for me questions that would alter my approach to literary history: What happens when the stories people tell about their own lives, the ways they narrate their everyday experiences, are every bit as interesting as the stories they publish as literature? What happens when the two sets of stories overlap?

Behind these questions was a recognition that I actually cared about the people who produced the texts I read. Sitting in the New-York Historical Society one summer afternoon, I held my breath, chills up my spine, as I read one of the last letters Smith had written before his death of yellow fever. The letter described the prevailing epidemic, but Smith had no way of knowing that within days he would be dead, just as he had no idea that the mosquitoes that kept him up at night were the carriers of the deadly disease. Was he already sick when he wrote this letter, now faded, yellow, brittle? What traces of his DNA might still be preserved in the document I was handling? Later I would walk the streets of lower Manhattan, feeling out the distances between Pine Street, where Smith had once lived, and the Battery, where he took his constitutional walks. Was this safe territory for a literary critic? After all, deep-rooted training taught me not to trust biographical approaches to interpreting texts, taught me that any biographical reconstruction was a “fiction.” I shouldn’t let my desire to piece together biographical information bear too heavily on my readings of the literature these people produced, including the diary.

Like most diaries, Smith’s offers readers the sense that they are receiving exclusive information. Yet just as Tom Riddle only let Harry see what he wanted him to see, Smith often wrote with particular audiences in mind. Much of the diary is made up of letters like the one to Dwight: here Smith’s audience awareness should be self-evident. His medical writing envisions its own set of readers, too. But the question of audience is more complicated the more one reads. Smith also frequently read aloud from his diary to friends, or swapped journals with them after long separations. He burned early volumes, unfortunately, to prevent their obsession with unrequited love from ever having an audience. And he intended, at some point, to publish his journals as a memorial not only of his own life and thinking, but as a record of the many illustrious people–Alexander Hamilton, Benjamin Rush, or John Adams, the father of his friend Charles–he knew and observed. “To those into whose hands my papers may come, when I cease to exist,” he wrote on one occasion, “they will be valuable: for my connections in many instances, have been with those, who either have been, or promise to be, in some good measure, distinguished actors in the scene around me.” Perhaps Tom Riddle would remind us that such theatrical language is telling. “Audience beware,” he might say. “The author of the diary you are about to read will likely come off as the hero of his story.”

A few weeks ago my daughter offered a similar warning, unintentionally, when she brought home a diary of her entire first grade school year. Most entries recount what happened the previous day or what was planned for that day at school. For 26 February 2001 the assignment, apparently, was to chronicle what had happened over the previous week’s school vacation. She writes: “On vacation I went too my friend’s house and I went to the empres new grove. and I had hot coco when I got home. and I had another sleepofer with a difrint kid’s house. and I went to disny on ice. and one time I stay’d up all night.” Now, chances are good that she saw The Emperor’s New Groove over vacation. I can’t remember. But I’m pretty sure she didn’t have a sleepover, and I know she has never been to Disney on Ice or–the ultimate fantasy of all–stayed up all night. Occasionally the diary includes other imagined events, such as swimming with the dolphins in Florida. Who’s to say what prompted her to record these things? Are they her own fantasies? Were they prompted by the diary of the kid sitting next to her? Will some future reader be fooled into thinking these things happened? Am I fooled into thinking they say something fundamental about her psyche?

I have a diary of my own from around age six, another that covers up to thirteen or so, and one more that runs from thirteen to nineteen. Since I don’t actually remember the events I narrated when I was my daughter’s age, who’s to say they really happened the way I said they did, or what prompted me to record some things and not others? I’m struck by the way a bound diary suggests a continuity of selfhood, that life is like a story with beginning, middle, end; rising action; character development. Writing in the same volume, the thirteen-year-old who recorded daily life as part of an eighth grade English class is apparently the same person who, at age nineteen, recorded the agonies of a college freshman’s love life. But are these entries really written by the same person? Did “I” write these entries? Do they have meaning outside what I assign them as an adult reader?

One of the most appealing aspects of Harry Potter’s encounter with Riddle’s text is that the diarist has been frozen in time. He remains sixteen years old and has no idea, until a reader informs him about his later life, what that future would hold. It’s a difficult if not impossible task for readers of a diary not to exploit their knowledge of the diarist’s final outcome. Reading Smith’s last few diary entries and letters makes me want to shout out warnings: “It’s the mosquitoes! The mosquitoes!” It’s impossible to read his long letter to Theodore Dwight, to watch their friendship erode due to their religious differences, without thinking of Smith’s impending death, less than two years away. But should Smith’s twenty-seven-year-old self necessarily be related to the Smith who published poems in the Gazette of the United States as a young man? Riddle would seem to suggest that we can attempt to let entries be informed by the past, but not necessarily by the future. My daughter warns that even the past’s information might be misleading.

Tom’s other warning–against privileging diaries as somehow more transparent than other historical records–is the same sort of warning my college English teachers had given me. But if this is the final truth about diary reading, if we’re unable to place any confidence in the “reality” of what a diary records, how can we explain what remains uniquely seductive about texts like Smith’s? Perhaps Riddle can offer answers here as well. Tom only knows what his readers have told him about life after the diary was written. In this way, as I’ve noted, he seems a perfect emblem for the relationship between a text and the reader who gives it meaning. But Tom also works against reader response theories in certain ways. He is not only a figure of audience response, but of an author’s audience awareness. This suggests his agency in shaping the ultimate meanings his text might take and the uses to which it will be put. Tom has anticipated his readers. He hopes to use them, to outsmart them, to remain immortal through them. The story of Harry’s encounter with Tom Riddle’s diary offers a modified notion of reader response. When Harry initially discovers how the diary works, he carries out conversations with it, writing in it and receiving its answers. When he meets Riddle outside the text, the interaction is more physical, but still suggests that meaning is made as a reader interacts with a text, grapples with it, fights with it. The power is on both sides. It’s in the constant negotiation with a diary, our awareness of its various audiences, our imagined conversations with its author, our suspicion of his or her motives, that we can begin to learn what such a text has to teach us.

If Tom and Harry warn that diaries, like other forms of writing, are misleading or limited in their representation of the past, they also remind us, perhaps, of why we constantly want to test those limits. Whether or not we want to admit it, we read diaries differently than we do novels because we want to believe they are letting us at least approach something that was once real. For years literary critics belied the death of the author in their occasional biographical references (even if only appealing to an author’s gender, race, or apparent partisan affiliation) as evidence for a text’s take on the culture that produced it, what it could tell us about “history.” Admitting that biography can matter, that we can learn from the stories people told to make sense of their own experiences, that these stories can even help us interpret more overt fictions from the past, shouldn’t threaten us, as long as we remain relatively skeptical about autobiographical truth claims. I know, for example, that Brown’s novels gain additional meanings when I bring to them my experience of reading Smith’s diaries.

But Riddle may be most useful as a reminder that beyond the academic and historical uses we make of diaries, the texts have personal meanings and motivations, both in their production and reception, then and now. Those meanings and motivations warrant respect. Like Tom Riddle, Elihu Smith had his own reasons for writing: my task isn’t simply to make meaning for his diary by reading and writing about it the way I do Brown’s novels. It’s to seek to recognize and understand his own acts of imagining, of meaning-making, the reasons why he wanted to preserve some version of himself, for whatever audience, and why he chose a diary as his vehicle. Addressing these issues can be dangerous for a literary critic. But only when we respect a diary’s dangers can we, like Harry, thumb through its pages and “finish” the story. Scarlet ink and all.

 

This article originally appeared in issue 2.1 (October, 2001).


Bryan Waterman, an assistant professor of English at New York University, is completing a book about Elihu Hubbard Smith, the Friendly Club, and late-eighteenth-century intellectual life in New York. He lives a few blocks from where Smith’s apartment once stood.




Doctrines of Discovery

I. Introduction

In a speech given on September 8, 2000, at a ceremony on the 175th anniversary of the Bureau of Indian Affairs (BIA), Kevin Gover, a Comanche tribal member and outgoing head of the Bureau, issued “a formal apology to Indian people for the historical conduct of this agency,” an agency whose first mission was the forced removal of the southeastern tribes from their homelands, along the notorious Trail of Tears. “Today I stand before you,” Gover continued, “as the leader of an institution that in the past has committed acts so terrible that they infect, diminish, and destroy the lives of Indian peoples decades later, generations later.”

The Bureau of Indian Affairs was established in 1824 by Secretary of War John C. Calhoun, as part of the Department of War. In 1832, Congress authorized the president to appoint a Commissioner of Indian Affairs, and, in 1834, enacted a bill to organize a Department of Indian Affairs. In 1849, the BIA was transferred to the newly created Interior Department. By the 1850s, overseeing Indian reservations had become its principal arena of activity.

In this context, the term colonialism has a precise meaning: the control by the federal government over what federal law terms “Indian country” (Title 18, U.S. Code, section 1151), which, in broadest terms, includes all federal reservation land; all “Indian allotments”; and all “dependent Indian communities,” whether they are residing within a reservation or not. In Indian country, reservation land is land used by federally recognized tribes, but titled to the federal government, which thus has legal ownership of it, keeping the lands “in trust” for the tribes, of which there are 330 today in the lower forty-eight states.

The “trust” relationship between the tribes and the federal government is at best a double-edged sword. Ostensibly guaranteeing federal protection of Indian assets, it also casts Indians in the role of perpetual minors, a barely veiled version of the classic European stereotype of the childlike “savage.” Indians, by definition legally incompetent to manage their own resources, find these resources placed in the hands of a federal bureaucracy, overseen by Congress, which has historically grossly mismanaged them. The BIA currently finds itself embroiled in an almost five-year-old class-action lawsuit filed by the Native American Rights Fund against the Bureau and the Department of the Interior for the mismanagement of an estimated ten billion dollars in Indian trust funds since the end of the nineteenth century. In February 1999, as reported in the Washington Post of August 17, 2000, Gover himself was held in contempt of court for not turning over records in this case, records he claimed “no longer existed.”

As it functions, the trust relationship contradicts what for the last thirty years has been the stated federal policy of increased “self-determination” for Indian tribes. Yet the tribes, rightfully, resist any congressional attempts to dissolve this relationship (and only Congress has the constitutional power to do so) because all such attempts have only offered the dismemberment of the tribes as an alternative.

As distinct from reservation lands, allotted lands are lands, on or off reservation, which the federal government has granted to individual Indians. In this case, the government may retain title to the individual lands, which is most often the case, or the individual may hold title. Approximately eighty percent of the Indian lands to which the federal government holds title (approximately fifty-five million acres) are reservation lands.

It is clear enough under the law that any tribe occupying reservation land is considered a “dependent Indian community” in relation to the federal government. There are also tribes, like the Oklahoma Cherokees, numerically the largest tribe in the United States, that, while not occupying a reservation per se, still come under federal superintendence with title to their tribal lands held by the federal government, and are thus considered a “dependent Indian community.” Ambiguities arise, however, in the case of Alaskan Native communities, which, except in one case, do not occupy reservations or other kinds of “trust” lands but hold title to their lands as corporate entities under the Alaska Native Claims Settlement Act of 1971 (43 U.S.C.A. sec. 1601-28), while still receiving federal benefits of various kinds because of their standing as Native Americans.

In 1998, in Alaska v. Native Village of Venetie Tribal Government, 118 S.Ct. 948, the Supreme Court tied the notion of “dependence” to the fact of lands held “in trust” by the federal government for Native Americans and thus ruled that the Alaskan village in question, and by extension all such corporate entities, were not in “Indian country,” while at the same time recognizing Congress’s constitutional authority to modify the legal definition of “Indian country.” At present, Indian country does not extend to include Native Hawaiians either, though they are a people historically colonized by the United States and are engaged in an ongoing struggle for their land rights.

In carrying out U.S. Indian policy today, the BIA has long counted on the collaboration of elected tribal councils, Western-styled governments first put into place under the auspices of the Indian Reorganization Act (IRA) of 1934. The decision whether or not to adopt IRA-sponsored constitutions was left up to the tribes. At the time, 181 tribes voted to adopt them and 77 voted to reject them. Nevertheless, all the tribes needed a governmental mechanism (tribal council) in order to deal with the BIA for the resources it controlled under congressional mandate, which included, principally, tribal lands. Thus, whether or not a tribe drafted a constitution as the BIA requested, it had to yield in one way or another to BIA pressure to form representative governments. As they do today, these councils faced various forms of resistance from the grassroots of their communities. Thus, one often finds in Indian country a democratically elected tribal government that is at the same time opposed by or alienated from the grassroots population precisely because it is perceived as an arm of U.S. colonial power. But, it is important to emphasize, a tribal council may also oppose U.S. power in certain instances and so claim the support of its constituency on certain issues, particularly those dealing with land and sovereignty. These kinds of divisions within the tribes have a long history, which is a direct result of European colonial policies in the Americas.

Because of reforms instituted by the IRA, the BIA is now administered from top to bottom largely by Indians. But, in spite of Kevin Gover’s optimism, the BIA continues to contribute to the general impoverishment of Indian people. Today the 1,698,483 tribally enrolled U.S. Indians (out of an approximate Native population of two million) are the poorest of the poor. The 1990 census reports a per capita income in Indian country of $4,478, compared to a national average of $14,420. According to BIA statistics, in 1999 unemployment among the labor force of the 556 federally recognized tribes (226 in Alaska) was forty-three percent, a one percent increase from 1997. Only nine percent of Native Americans twenty-five years and older have college degrees in comparison to thirteen percent of Latinos, fourteen percent of African Americans, and twenty-four percent of the total population.

In his September 2000 apology, Gover distinguished between the BIA “in the past” and the BIA now, “in this era of self-determination, when the Bureau of Indian Affairs is at long last serving as an advocate for Indian people in an atmosphere of mutual respect.” But this all too clean separation overlooks, for example, the mismanagement of trust funds previously mentioned. Meanwhile, his neat distinction between the old and the new BIA also ignores the ongoing Navajo-Hopi Land Dispute in which, beginning in 1977, the Bureau has overseen the displacement of twelve to fourteen thousand Navajos from their homelands. Despite Gover’s claims for “an atmosphere of mutual respect,” the colonial structure of federal Indian law, which the BIA administers, dooms the Bureau to be a certain kind of classic colonial bureaucracy.

 

II. Land and Property 

United States federal Indian law is grounded in the history of Western imperialism in the Americas, and in what were and remain the central issues in the conflict between Indian communities and European powers: land and sovereignty. It is not only that the Euro-Americas are built on stolen Indian land but also that the traditional Native relationship to land was radically opposed to early modern Europe’s increasingly capitalist relationship to it.

Traditionally, land was and is the absolute resource of the Native community. In Native America land mediates all relationships on a plane where the distinction between the sacred and the secular made by the West does not exist. Native land is not what the West understands as property–a decidedly secular institution. As a traditional value, land is the antithesis of property. Land, in this view, is the inalienable ground of the communal, defined exclusively in terms of extended kinship relations. I use traditional in this context not to denote unchanging cultural practices, the notion of which is in any case a fiction, but rather to signify an ongoing and adaptive force marshaled from the historical moment of the Columbian invasion of the Americas (1492) against the European exploitation of Native land. Such resistance is exemplified in the present by the continued refusal of the Sioux Nation to accept a monetary settlement, now with accumulated interest worth an estimated 350 million dollars, granted them in 1974 by the Indian Claims Commission for a wrongful taking in 1877 of the Black Hills, land central to their identity as a people. For the Sioux, the Black Hills are not fungible.

Whether in such different cultures as the Pueblos in what would become the southwestern U.S. (or the pueblos in Mexico), the Iroquois Confederacy in the territory that is now the northeastern U.S. and Canada, the Creek or Cherokee towns in what became the southeastern U.S., or the tiospaye of the Oceti Sakowin (Sioux) on the great plains of North America, the traditional Native community can be described as an extended family or system of interlocking extended families working in concert for mutual sustenance. But we should be careful not to conflate the Western nuclear family paradigm with the Native paradigm of family, or, as I prefer, kinship. The relational terms of the Western family (father, mother, brother, sister, aunt, uncle, cousin, etc.) do not translate into the terms of Native kinship. In comparison to the class and gender hierarchies of Western nation-states, Native communities were marked by egalitarian social and political structures, where group action was based on group consensus, precisely because (if one wants to take an economic perspective) the labor of all, female and male, was equally valuable for the sustenance of the group. Native kinship terms extend as well into that part of the world that the West has increasingly alienated, subordinated, and exploited as “nature.” Such extended kinship by folding nature into the Native community sets conservative limits to the use of natural resources.

In theory and practice, the indigenous conception of community does not exclude conflict either within or between communities, as indigenous oral traditions clearly attest. However, in societies where there were no class divisions, where every person’s contribution was valuable to the sustenance of the group, and where there were no systems of incarceration, solutions to intragroup conflict were conceived primarily in terms of restoring balance to social relations rather than, as in Western societies, isolating transgressors from these relations. So, for example, the killing of the member of one group (family or clan) by the member of another might be balanced by a single counter-killing or, alternatively, a payment of some kind, either of which, it was agreed by the aggrieved party, would close the circuit of violence. Interclan conflicts within the Hopi villages have been resolved historically by the formation of new villages, which nevertheless remain within the Hopi fold through clan ties that link village to village on the three mesas in northeastern Arizona. The last resort in maintaining balance in indigenous social systems was exile, for psychic and social survival outside the kinship community was precarious at best. As for intercommunity conflict, what the West terms war, it is enough to say here that whatever its function (ritual, territorial, raiding) it cannot be understood in terms of modern Western warfare, which is based in an imperial/colonial paradigm: the clash of nation-states over issues of property. Once capitalist economies disrupted Native economies, of course, Native kinship relations to land were disrupted by property relations, and were forced to come to terms with property relations, but have also managed to mount a continuing, if often divided, resistance to these relations. That is, the Western imperial invasions of Native America have brought with them the kinds of collaboration that such invasions bring, the kind, for example, instanced by the BIA at the present moment.

In Native kinship economies, land was not fungible, that is, marketable, or alienable by an individual, or group acting as an individual within the community. Thus, the treaty, signed by “chiefs” or other designated leaders, in which, centrally, the Indian “tribe” or “nation” alienated a portion of its land in exchange for payment of various kinds, is always, quite literally, the sign of the imposition of Western terms on indigenous communities: treaties were always written in Western languages employing Western legal vocabularies, grounded in the term property.

Property is the foundation of Western capitalist democracies; and land is in the history of these democracies the fundamental form of property. These democracies both as nations (ideas, or ideals, or ideologies) and states (political systems that mediate, or express, the nation) are particular articulations of property, which is not simply a material relation but implies in the very history of the word property a moral and social one (what is proper) and a metaphysical one as well: the particular properties that define what the West has come to understand as an individual. When the United States was founded, for example, only property-holding white males by and large had the franchise, were, that is, considered individuals in the political realm. Even today, not to hold some form of property in the West is to have one’s individuality bracketed, to find one’s recognition as a person seriously compromised. It has been the overriding thrust of U.S. federal Indian law from its constitutional inception to the present to translate Indian land into property, not for the purpose of entitling Indians to their land but for the purpose of legally entitling the federal government to it and thereby compromising the sovereignty of Indian communities.

 

III. Legal Fictions

The laws that govern the colonial space of Indian country today are for the most part codified in the twenty-fifth title of the U.S. Code. This makes Indians the only group of people in the United States who are governed by a distinct body of law. This body of law, which defines the colonial status of Indians, derives its ultimate authority from the Commerce Clause of the Constitution (Article I, Section VIII, Paragraph III), giving Congress the power: “To regulate commerce with foreign nations, and among the several States, and with the Indian tribes.” Using the Commerce Clause as a basis, Congress enacted a series of trade and intercourse laws between 1790 and 1834 that extended the definition of “regulating commerce” to include control over the buying and selling of Indian land, as had the British Royal Proclamation of 1763, which was the model for U.S. Indian policy in this regard. These laws became the constitutional rationale for the three major legal cases that to this day form the foundation of federal Indian law. These cases, known as the “Marshall Trilogy,” after John Marshall, the chief justice of the Supreme Court who wrote the defining opinion in each case, are, in the order in which they were decided, Johnson v. M’Intosh, 21 U.S. 543 (1823), Cherokee Nation v. the State of Georgia, 30 U.S. 1 (1831), and Worcester v. Georgia, 31 U.S. 515 (1832).

As with all bodies of knowledge that claim objectivity, the subjectivity of the law–its social, cultural, political, and economic biases–can be located in the narratives that underlie it but that are rarely brought into play in contemporary legal practice. The U.S. law upholding the constitutionality of the death penalty is exemplary in this respect. It seems at this point clearly driven by the social and economic biases of race and class (a radically disproportionate number of prisoners on death row are poor and black); it provides, that is, an historical narrative of race and class discrimination not only in the workings of the criminal justice system but in the nation as a whole, which in this case Congress and the Supreme Court have agreed to ignore so that the death sentence can continue to be administered (in the majority of states) as if it were being administered fairly. Federal Indian law works in precisely the same way; that is, it is grounded in a narrative of cultural and political bias, which the government ignores so that it can continue to administer this body of law as if the narrative of the imperial domination of culturally inferior peoples that drives it were a thing of the past. When Kevin Gover apologized to Indian people for the genocidal past of the BIA, he implicitly acknowledged this narrative. But when he uncoupled the present BIA from the past BIA, he repressed the persistence of this narrative in the present.

 

Chief Justice John Marshall. Courtesy AAS.

 

In Johnson v. M’Intosh, however, this narrative remains quite explicit and so it is only by the continuing noninspection of this foundation of federal Indian law by the majority (in Congress and hence of the voters in the U.S.) that the edifice of federal Indian law remains uncondemned. Public and official ignorance and indifference, as well as bureaucratic inertia, are the principal props for the colonial structure of Indian country.

Perhaps the principal irony in Johnson is that in a case that has determined the status of Indian land and sovereignty from 1823 to the present, there were no Indian parties to the suit, which was brought by the Anglo-American heirs of one of the parties to a land sale consummated in 1775 between the Piankeshaw Indians and a group of British investors in what would become after the Revolutionary War the state of Illinois (21 U.S. 543, 555). As the facts of the case reveal, this land was taken militarily from the British by the state of Virginia in 1778 and incorporated as “the county of Illinois,” which in 1783 at the conclusion of the Revolution Virginia ceded to the United States (558-59). Then in 1818, the U.S. sold 11,560 acres of this land in what was by then the state of Illinois to William M’Intosh, a citizen of the state (560). This sale provoked the suit by the lessee of Joshua Johnson and Thomas J. Graham, heirs of Thomas Johnson, one of the original purchasers of the Piankeshaw lands, and citizens of Maryland (561). The suit came to the Supreme Court on a writ of error from the District Court of Illinois brought by the plaintiff. The Supreme Court confirmed the verdict of the lower court in favor of the defendant, M’Intosh.

At stake in the suit was the status of Indian title to Indian lands, whether, that is, Indian title to the lands Indians inhabited superseded U.S. title to those same lands; and therefore, whether a sale of those lands by an Indian tribe to private individuals was legal. The Court found that such a sale was not legal, precisely because, in the opinion of the Court, the U.S. held absolute title to Indian lands. The principal problem in interpreting this case from Marshall’s opinion to the present has been that all of its commentators have assumed the naturalness or universality of the terms of Western property law, the terms of title, under which the case was argued and decided in favor of the federal government’s right to the title of Indian lands. This assumption is explicit in the stated facts of the case, including:

That from time immemorial, and always up to the present time, all the Indian tribes, or nations of North America . . . held their respective lands and territories each in common . . . there being among them [“the individuals of each tribe”] no separate property in the soil; and that their sole method of selling, granting, and conveying their lands, whether to governments or individuals, always has been, from time immemorial, and now is, for certain chiefs of the tribe selling, to represent the whole tribe in every part of the transaction; to make the contract, and execute the deed, on behalf of the whole tribe; to receive for it the consideration, whether in money or commodities, or both; and finally, to divide such consideration among the individuals of the tribe . . . (549-50)

The fact here stated is a fiction. This legal fiction at once recognizes Indian communities “from time immemorial” as communal landholders but then constructs these communal holdings on the model of a Western corporation or joint-stock company whose “individuals” own equal shares in a common “property,” which upon the vote of the stockholders, as communicated to a “chief” (a CEO of sorts), is fungible in terms of “money or commodities, or both.”

No one from Marshall to the present has commented on the fiction of this “fact,” though nine years later in Worcester, arguing for the sovereignty of both Indian tribes and the federal government over and against the states in Indian matters, Marshall would in passing allude to the imposition of Western legal terminology on Indian communities when he noted: “The words ‘treaty’ and ‘nation’ are words of our own language, selected in our diplomatic and legislative proceedings, by ourselves, having each a definite and well understood meaning. We have applied them to Indians, as we have applied them to the other nations of the earth. They are applied to all in the same sense” (31 U.S. at 559-60). In retrospect, this comment, which seems disingenuous in light of the usurpation of Native sovereignty accomplished by the Court in Johnson and Cherokee Nation, serves to highlight the fact that in order for Johnson to be argued in the first place, Indian communal conceptions of land were implicitly translated into the terms of property so that the issue of title could be raised.

Before Johnson ever came to court and in the absence of any indigenous input, Indian lands were translated into property and the Indians given title to those lands, not to recognize Indian sovereignty but, quite the contrary, so that the Court could alienate that title legally to the federal government thereby placing Indian sovereignty under U.S. control. Part of the summary of the argument for the defendants in Johnson captures succinctly the extent of that control: “Such, then, being the nature of the Indian title to lands, the extent of their right of alienation must depend upon the laws of the dominion under which they live. They are subject to the sovereignty of the United States . . . The statutes of Virginia, and of all the other colonies, and of the United States, treat them as an inferior race of people, without the privileges of citizens, and under the perpetual protection and pupilage of the government” (21 U.S. at 568-69).

The language of “protection and pupilage” used by the defense in Johnson to argue against Indian title points to the language that Marshall would employ eight years later in Cherokee Nation to compromise Indian sovereignty by defining Indian “nations,” or “tribes,” as “domestic dependent nations.” Following the logic of his opinion in Johnson, Marshall, in Cherokee Nation, reasoned of Indian tribes: “They occupy a territory to which we assert a title independent of their will . . . Meanwhile they are in a state of pupilage. Their relation to the United States resembles that of a ward to his guardian” (30 U.S. at 17). Marshall’s oxymoronic phrase “domestic dependent nations” (for by definition a nation is at once both foreign and independent) allowed the Court both to recognize that the Cherokees were “a distinct political society, separated from others, capable of managing its own affairs and governing itself” (16) and to deny that fact at the same time. Further, we recognize in Marshall’s metaphor of the Indian/government relation as one of “a ward to his guardian” the basis for the “trust” relationship discussed earlier. While all Indians were granted citizenship by an act of Congress in 1924, Indians living on reservations, as well as all tribally enrolled Indians, continued to be dominated by the colonial relationship of “trust.”

Among other issues, Johnson points to the culturally relative status of the term legal. Nowhere is this clearer than in the narrative Marshall constructs to validate his opinion in favor of U.S. title to Indian lands. This narrative is the European narrative of the conquest of the Americas from which Marshall derives the legal “doctrine of discovery”:

On the discovery of this immense continent, the great nations of Europe were eager to appropriate to themselves so much of it as they could respectively acquire. Its vast extent offered an ample field to the ambition and enterprise of all; and the character and religion of its inhabitants afforded an apology for considering them as a people over whom the superior genius of Europe might claim an ascendency [sic]. The potentates of the old world found no difficulty in convincing themselves that they made ample compensation to the inhabitants of the new, by bestowing on them civilization and Christianity, in exchange for unlimited independence. But, as they were all in pursuit of nearly the same object, it was necessary, in order to avoid conflicting settlements, and consequent war with each other, to establish a principle, which all should acknowledge as the law by which the right of acquisition, which they all asserted, should be regulated as between themselves. This principle was, that discovery gave title to the government by whose subjects, or by whose authority, it was made, against all other European governments, which title might be consummated by possession (21 U.S. at 572-73).

Perhaps Marshall inflects this expansive and expansionist narrative with a bit of irony in noting what he appears to recognize as the ethnocentric perspective of the imperial powers of the Old World who “found no difficulty in convincing themselves that they made ample compensation to the inhabitants of the new, by bestowing on them civilization and Christianity, in exchange for unlimited independence.” The implicit stereotype here is the one that Columbus records early in his journals, before he begins to be met with indigenous resistance: innocent savages willingly exchanging all their wealth for the blessings of Christian Europe. But if Marshall is to a certain extent being ironic in order to suggest his historical sophistication vis-à-vis 1492, he nevertheless ultimately justifies the taking of Indian land with the same stereotypical opposition between the savage and the civilized, hunters and cultivators, that mapped the ideological terrain for Columbus: “[T]ribes of Indians inhabiting this country were fierce savages, whose occupation was war, and whose subsistence was drawn chiefly from the forest. To leave them in possession of their country, was to leave the country a wilderness; to govern them as a distinct people, was impossible, because they were as brave and as high spirited as they were fierce, and were ready to repel by arms every attempt on their independence” (590).

The characterization ends with praise that damns, and it is also in bad faith because Marshall had available to him an abundance of published ethnographic material that contradicted the stereotype of savage Indian hunters (not to mention its implied counterpart: the stereotype of Europeans as peaceful farmers who did not hunt). Indeed, in the “history of America, from its discovery to the present day” (574), a history focused from the Anglo-American perspective that Marshall constructs as the context and rationale for his opinion in Johnson, he references the first two permanent British colonies in North America, those at Jamestown (1607) and Plymouth (1620), though in keeping with his savaging of the Indians he doesn’t mention what was paramount in the historical narratives written by the first colonists: these colonies couldn’t have survived without Indian agriculture, principally corn. Marshall’s narrative of Anglo-American conquest of North America necessarily mentions the French and Indian, or Seven Years’, War (1756-63), which effectively established British control of the continent and thus set the stage for the American Revolution. This narrative mentions as well the alliance of the British with the Iroquois Confederacy but it does not mention the crucial part this alliance played in assuring British victory. More importantly, in terms of propping up his figure of the Indian as an ungovernable savage, Marshall does not reflect in his narrative on the way the very act of alliance contradicts the stereotype, just as indigenous economies from the agricultural Pueblos in the Southwest to the mixed agricultural/hunting/fishing societies of the Northeast contradicted his stereotype of an Indian “wilderness.”

 

IV. Sovereignty

Ironically, within eight years of Johnson, Marshall would have the example of the Cherokee Nation before him in court, appearing in the persons of its attorneys, William Wirt (the former attorney general) and Thomas Sergeant (a renowned legal scholar), to petition the Court to recognize it, based on its treaty relationship with the United States, as a fully sovereign foreign nation, so that as a foreign nation it could bring suit against the state of Georgia in the Court for violating its treaties with the United States. The Cherokees, who had in 1821 acquired a written language through the syllabary developed by the Cherokee Sequoyah, adopted a written constitution in 1827 modeled on that of the U.S., and established in 1828 the first Indian newspaper in the U.S. (the Cherokee Phoenix circulated in a bilingual Cherokee and English edition), contradicted in the most obvious ways Marshall’s stereotype of Indians as unregenerate savages, as did the other four of the so-called “Five Civilized Tribes” (Creeks, Choctaws, Chickasaws, Seminoles), who were effectively dispossessed of their lands in the southeast by the Indian Removal Act of 1830 and forced between 1831 and 1838 out to “Indian territory” (present-day Oklahoma) on the Trail of Tears, for which we find Kevin Gover apologizing one hundred and seventy years later.


It is within Marshall’s imperial narrative in Johnson, a narrative loaded with the ideological charge of racial stereotyping, that Indian lands were silently translated into property, and that “the rights of the original inhabitants . . . were necessarily, to a considerable extent, impaired. They were admitted to be the rightful occupants of the soil, with a legal as well as just claim to retain possession of it, and to use it according to their own discretion; but their rights to complete sovereignty, as independent nations, were necessarily diminished, and their power to dispose of the soil at their own will, to whomsoever they pleased, was denied by the fundamental principle, that discovery gave exclusive title to those who made it” (574).

The language here may be deceiving because it only seems to restrict Indian “power to dispose of the soil at their own will.” But, as I have emphasized, Indian tribes had no interest in selling their land outside of a context of conquest, that is, of forced sales called treaties. And within this context of conquest, for which “discovery” is a euphemism, the forced translation of Indian land into property, accompanied by the forced transfer of “exclusive title” to the conquerors, gave to the conquerors ultimate control over that land in all respects, as the history of federal Indian law demonstrates. The legal scholar Robert A. Williams Jr. catches the force of Johnson at the time it was decided, when he notes: “While the tasks of conquest and colonization had not yet been fully actualized on the entire American continent, the original legal rules and principles of federal Indian law set down by Marshall in Johnson v. McIntosh and its discourse of conquest ensured that future acts of genocide would proceed on a rationalized, legal basis.” The fundamental decision in Johnson (the transfer of title of all Indian land to the federal government), which is the cornerstone of the colonial edifice of federal Indian law, remains intact, and thus so does the “legal” ground of this decision: the imperial “doctrine of discovery.” The United States continues to celebrate this doctrine every Columbus Day, a day of mourning and a continued call to resistance in Native American communities.

Marshall’s rulings in Johnson and Cherokee Nation helped solidify the geopolitical integrity of the expanding and expansionist United States as a nation during the age of Manifest Destiny. His ruling in Cherokee Nation–that the Cherokees were not a foreign nation, thus barring them from suing Georgia for their treaty rights before the Court–avoided as well a potential constitutional crisis: a confrontation between the Court and both the president and the state of Georgia. Andrew Jackson had refused the Cherokees’ request for U.S. troops to enforce their treaty rights (as a foreign nation) against Georgia, which was invading their lands and usurping their laws. And Georgia itself refused to appear before the Court in this case, as it would in Worcester, asserting that it was solely a state matter. But the Court’s expedient politics in Cherokee Nation were paid for by the Cherokees and ultimately all Indian communities. For the Marshall decision rationalized the breaking of Native treaty rights and thus paved the way for “removal,” or what Gover rightly calls “ethnic cleansing,” as the centerpiece of federal Indian policy. This policy of ethnic cleansing neither began nor ended with the infamous Trail of Tears, on which approximately four thousand Cherokees died from both murder and murderous exposure in 1838.

In the same month (March 1831) in which the Court’s decision in Cherokee Nation was handed down, the state of Georgia, in violation of Cherokee sovereignty guaranteed by treaty, arrested two missionaries, Samuel Worcester and Elizur Butler, who were living and working with the Cherokees on Cherokee lands at Cherokee behest. The two had broken a Georgia law interdicting “white persons” from “residing within the limits of the Cherokee nation without a license” (31 U.S. at 528). That law itself, the two argued through their attorneys Wirt and Sergeant, violated both the Cherokees’ sovereignty over their internal affairs and U.S. sovereignty over the Cherokees. To read Worcester v. Georgia, the final case in the “Marshall Trilogy,” which was decided only a year after Cherokee Nation and is based on the same set of legal issues regarding Indian sovereignty, is to experience the schizophrenia, the constant double takes, induced by an analysis of federal Indian law. For in it, Marshall echoes the arguments presented in the dissenting opinion of Justice Thompson in Cherokee Nation (arguments reasoning that the Cherokees are a fully sovereign, foreign nation) in order to reach the following conclusion: “The Cherokee nation, then, is a distinct community occupying its own territory, with boundaries accurately described, in which the laws of Georgia can have no force, and which the citizens of Georgia have no right to enter, but with the assent of the Cherokees themselves, or in conformity with treaties, and with the acts of congress [sic]. The whole intercourse between the United States and this nation, is, by our constitution [sic] and laws, vested in the government of the United States” (561).

Marshall’s reasoning in Worcester, if not his opinion, contradicts his reasoning both in Johnson, based on “the doctrine of discovery,” which in Worcester he is at pains to qualify in the most instrumental of political terms, and in Cherokee Nation. But as in the Freudian model of the psyche, such contradictions remain permanently in the unconscious so that consciousness, in this case the law, can appear coherent. While the language of Marshall’s opinion in Worcester seems inconsistent with the infantilizing language in Cherokee Nation that describes the Cherokees (and by extension all Indian communities) as the “ward” or “pupil” of the federal government, this language is nevertheless careful not to limit the federal government’s powers in Indian affairs but only to assert the limits of the power of the states in relation to both tribal autonomy and the authority of the federal government. Worcester makes clear that federal authority is preeminent in Indian affairs and that it is vested constitutionally in the first instance in Congress. Far from rocking the foundation of the colonial edifice of federal Indian law, Worcester puts the capstone on it by making clear what this law will come to define as the “plenary power” of Congress in governing Indian country, a power if not absolute then virtually absolute in its force, though, it must be emphasized, a power that Indian communities have historically resisted and continue to resist both in and outside the courts.

This “plenary power” allowed Congress in 1871 to pass a law prohibiting any further treaty making with Indian tribes, though treaties signed before this time retain the force of law and typically form the basis for ongoing tribal land claims. And in Lone Wolf v. Hitchcock, 187 U.S. 553 (1903), the Supreme Court ruled that the plenary power of Congress in Indian affairs allowed it “to abrogate the provisions of an Indian treaty, though presumably such power will be exercised only when circumstances arise which will not only justify the government in disregarding the stipulations of the treaty, but may demand, in the interest of the country and the Indian themselves, that it should do so” (566).

Given the history of U.S. Indian relations, the presumption of congressional good faith and the coincidence of Indian and federal interests in these matters is, to say the least, ironic. While subsequent Supreme Court decisions have qualified to some extent the congressional power to abrogate treaties–Congress must compensate tribes monetarily for land taken that was reserved by treaty, see United States v. Shoshone Tribe of Indians, 304 U.S. 111 (1938) and United States v. Sioux Nation of Indians, 448 U.S. 371 (1980)–“the holding [in Lone Wolf] remains a valid precedent,” according to Getches, Wilkinson, and Williams in Cases and Materials on Federal Indian Law (4th ed., St. Paul, Minn., 2000), one of the definitive textbooks in federal Indian law today.

Former BIA head Kevin Gover might wish that “in this era of self-determination, . . . the Bureau of Indian Affairs is at long last serving as an advocate for Indian people in an atmosphere of mutual respect,” but the continuing colonialism of federal Indian law subverts such advocacy at its source.

 

This article originally appeared in issue 2.1 (October, 2001).


Eric Cheyfitz teaches English and federal Indian law at the University of Pennsylvania and is the author of The Poetics of Imperialism (New York, 1991).




Notes of a Historyteller

Large Stock

Asked about the relationship between fact and fiction in his work, the historical novelist typically defends his enterprise by pointing to the perquisites of poetic license and urging respect for his application of tools such as conjecture, imagination, and intuition. At best, a defense on those terms leads to a reassessment of certain isolated facts, but rarely does it pose a challenge to the very categories of fact and fiction, categories which make praise available to the historical novel only insofar as it conforms to paradigms generally belonging to traditional historical writing. I like to think Last Refuge of Scoundrels, a novel about the relationship between fact and fiction in the telling of history, offers a new answer to the question, an answer consistent with its attempt to capture the American Revolution not as written, but as lived.

 

[Image: Screenwriter and novelist Paul Lussier.]

Notwithstanding valiant dissenting renditions by Howard Fast, Gore Vidal, Edward Bellamy, Kenneth Roberts (and scant few others), popular renderings of America’s founding have remained largely unchanged since Parson Weems first foisted his cherry-tree folktale upon our culture in 1806. And, like it or not, the Weems view of the Revolution still influences the American imagination: the hallowed light around Washington shines upon each of the Founding Fathers, revealing them as men of unalloyed virtue who, with selflessness, enlightenment, and an appreciation for all things fair and true, inspired an entire country to arms solely upon the strength of their great ideas.

Meanwhile, traditional military and political history, with its exclusive focus on battles, meetings, and writings of the Founding Fathers, embraces Weems-style protagonists: these histories require not human beings to people their narratives but symbols of righteousness. As a result, the common folk who actually fought the war with revolution–not independence–as their goal (the case with my protagonists John, a feminized aide-de-camp of Washington, and Deborah, a prostitute who fought disguised as a man) are either eliminated from the story altogether or branded enemies of the cause. And those who fought for independence without concern for democratic revolution don’t fare much better, rendered as mere mouthpieces for the Founding Fathers’ documented ideals. Consequently, when it comes to popular representations of the “lesser sort,” last year’s Hollywood film The Patriot is, sadly, about as good as it gets.

Certainly our narratives of the American Revolution have grown smart and plucky over the years (Jefferson in Paris, the History Channel series Founding Fathers); these days it seems Americans prefer their Founding Fathers warts and all: fresher, snappier, even naughty. John Adams’s fantasies included being crowned the next American king when the war was over. Samuel Adams was a slob and an embezzler. Ben Franklin had a girlfriend, one Madame Helvetius, who would kiss her lap dog on the lips and swab up its excretions with her chemise. While I, too, am admittedly guilty of having great fun with such details in Last Refuge of Scoundrels, one could argue that these touches–in making the Founding Fathers more identifiable, more fun–only reinforce their longstanding stranglehold on the story of the American Revolution. A seemingly patriotic hold that, in misrepresenting the Founding Fathers as front and center, marginalizes the efforts, thoughts, and ideas of common people (as detailed in sources such as diaries, letters, etc.), and, thus, creates powerful textual support for their continued marginalization across the board: from the history books we read to the political infrastructure we live.

To those who deem alternative sources (diaries, etc.) and the details they generate unreliable, I have only to say that I cannot see any reason why, for example, Samuel Adams’s accounts claiming widespread fervor in 1765 for the lawyer-driven, anti-Stamp Tax movement should be considered more reliable (particularly given his avowed mission to propagandize on behalf of the movement) than contemporaneous accounts of:

ordinary folks documenting protests for fair bread pricing;

women protesting a system that accorded them little or no authority in their male households;

indebted farmers holding the likes of John Hancock responsible for the prohibitive cost of land, and decrying the emerging shenanigan by propertied men to fence in land once deemed common and then exact a fee for farmers to graze their cows where once they had freely roamed;

indentured servants imagining a day when they would not be expected to step in the sewage sludge that ran in rivulets along the sidewalk so that a rich man could be afforded easy passage.

While some reviewers have called Last Refuge of Scoundrels groundbreaking for exposing such facts, these “revelations” are not what is new about the book. As anyone familiar with recent American historical scholarship is aware, perspectives which incorporate these kinds of details have been found in academic writing for decades. If the book is groundbreaking it is because it avails itself of these facts to spin a story–a novel–of the American Revolution unlike anything popular culture has, to date, produced.

Facts alone seldom change how people think about their past. Stories, however, go where facts cannot, speaking directly to myths that shape culture. Those of us who want to change collective memory are compelled, in my view, to take stock of how traditional historical narratives have managed to achieve such deep entrenchment in our culture. I hazard the hypothesis that were it not for two hundred years’ worth of storytellers supporting the single-minded apotheosis of the Founding Fathers (from Weems on), our popular myth of the American Revolution as merely a parade of the Founding Fathers would not be nearly as unevictable as it is.

I feel strongly that current historical scholarship, especially social history, must find its port of entry into the popular marketplace. Some recent works, such as Alfred Young’s excellent The Shoemaker and the Tea Party (Boston, 1999), make significant strides in this direction. Yet even this pioneering work alone is not sufficient. In my view, to successfully shake the popular hold of traditional “cherry-tree” mythology, social history must tell its own stories based on its own paradigms of history. We must become historytellers. I recognize fully that the exigencies of stories that appeal to a broad public–good vs. evil, pace, structure–require above all else an overarching viewpoint, and this might seem, at first blush, incongruous with the basic methods and principles of historical inquiry. But in answer to the concern that such consensus making must involve compromise, nostalgia, or false heroics, I offer Last Refuge of Scoundrels, which I like to think achieves consensus sufficient for good storytelling while avoiding these pitfalls. The novel is the dramatization of the American Revolution told from the vantage point of many people, from all ranks of society, coming to the Revolution for a myriad of purposes. Purposes coalescing in a story. A story of the Revolution as it was lived, breathed, experienced: as the struggle amongst a people to claim its meaning, to name it for themselves and call it their own.

So what is the relationship between fact and fiction in Last Refuge of Scoundrels?

It’s the story itself. Now is the time to begin a dialogue, the particular approach of Last Refuge of Scoundrels notwithstanding, about what historytelling is and could be. For I fear that, without embracing story, the noble endeavors of social history will remain as marginalized and ghettoized as the lives of ordinary men and women whose perspectives we must continue to hail.

 

This article originally appeared in issue 2.1 (October, 2001).


Common-place asks screenwriter and novelist Paul Lussier about the relationship between fact and fiction in his novel about the American Revolution, Last Refuge of Scoundrels (New York, 2001).




Searching for Self

During the opening years of the American Revolution, a young white Methodist convert from Maryland underwent an amazing transformation of self. Freeborn Garrettson experienced a “series of disturbing dreams and visions” that launched a process of radical self-transformation. Preaching to an African American audience, “he heard the voice of God telling him, ‘You must let the oppressed go free.'” He immediately freed his own slaves. At risk of his own safety, he opposed the War of Independence, feeling that he “should not ‘have any hand in shedding human blood.'” He was even guided by his dreams “to renege on a marriage proposal . . . so that he would be free to devote his life to preaching the word” (79-80). Garrettson “emerged from this time of inner turmoil and outer conflict with slave owners, Revolutionary mobs, and the temptation of a sexual and family life as a recognized leader in the new Methodist Church” dedicating “all his time and all of his self to oppressed people, black and white” (80). He had responded to God’s command “by freeing both his own slaves and himself.”

Garrettson’s remarkable experiences are but one of the many stories of self-fashioning that Teach Me Dreams explores. This important work looks at the changing relations between self and society in Anglo-America during the “Greater Revolutionary era” of 1740 to 1840. As in her other works, Mechal Sobel remains concerned with the interconnections between white and black–the hatred and affection, envy and appropriation that permeated relations between Euro-Americans and African Americans. Teach Me Dreams is based upon some two hundred life narratives written by Americans from all walks of life. Landon Carter, Elizabeth Ashbridge, Sojourner Truth, and Frederick Douglass find their place in these pages alongside William Grimes, Solomon Mack, Maria Stewart, and K. White.

Sobel draws upon several schools of psychoanalytic thought, including both object relations and self psychology theory, to explain the shift in self that she identifies with this pivotal period. From the permeable, communal “we-self” of the early modern era, Americans moved to a more “interior,” individuated sense of self. The autobiographies reflect this change, in which lives formerly narrated as “a random string of events” were now infused with “a dramatic pattern” (1-2, 18). Individuals internalized an “enemy” or “alien other” as an important part of this process of individuation. Disavowed attributes of the self might be split off and located in the other (defined respectively as black or white, female or male). Such projections might then be reincorporated or introjected into the self in a more acceptable form. Dreams offered an important venue in which individuals could experience these disavowed parts. Both the extraordinary attention to dreaming in Revolutionary America and the interest in autobiographical reflection would prove to be important arenas (Foucauldian “technologies of the self”) in which individuals might engage with their “alien others” in a transforming way (11-15). “Both Africans and Europeans began developing in opposition to each other–those whom they would ‘not be’–however, this process actually made them dependent on their oppositional others” (4). Sobel explores this process of othering, both consciously and (most originally) unconsciously, through a series of four chapters, each of which focuses on a different side of the two key dyadic relations she identifies: white/black, male/female. She ends her book with a lengthy coda in which she explores changes in the life cycle, new attitudes towards death, the development of gendered spheres for the nineteenth-century middle and upper classes, and, most important, an increasing rejection of dream experience. By the mid-nineteenth century, as Abraham Lincoln noted, dreams, dream interpretation, and the search for self in dreams were regarded as “very foolish,” and had all been relegated to the province of “old women” or youngsters “in love” (240-41).

Sobel’s book is notable in turning our attention back to a topic too long thought off limits to historians: the psychological origins of race and racism. She connects this hot-button issue to the formation of the modern, individuated self that would become the hallmark of modern capitalism, also a notable development of the early nineteenth century. As she nears the end of her story, she cautions, “by relating to their dreams . . . a wide spectrum of the population had become reconnected with disassociated parts of their selves. As dreams were increasingly ignored, these aspects became more alien and more dangerous, and selves began to develop in more polarized and menacing directions.” In a perceptive twist, she notes that this rejection of “‘the world of dreams’ . . . ostensibly because of the growth of rationality,” unleashed a sadly modern racist and sexist “irrational hatred of the other” (241).

Readers either unfamiliar with or skeptical of psychoanalytic interpretations may find Sobel’s work to be somewhat tough going. She makes few concessions to the lay reader, dropping analytic concepts (“projection,” “introjection,” “extractive introjection,” “repression,” “unconscious guilt”) into her narrative with the sparest explanations of the analytic thought that informs each one. She scants other, more familiar modes of inquiry in favor of her psychological model. I longed for a fuller exploration of the autobiographical genre and its literary precedents, as well as for an explanation of the commercial demand for and reader reception of the published life narratives. Such an exploration would have helped build a better historical context in which the reader could credit or discount these highly refined sources. Indeed, both the structure of the narratives and the dreams retold therein were products of extensive secondary revision (another basic psychoanalytic concept) through which the true intentions of the dreamer’s or narrator’s unconscious were disguised and distorted.

Psychoanalytically inclined readers may have different reactions to Sobel’s analysis. While her readings of dreams are often fruitful, why did she shy away from reading the complete narratives psychoanalytically? To do so would require massive in-depth research into the narrators’ lives, although it might have been better to trade breadth for depth in order to uncover the multi-determined nature of character formation known to clinicians. Puzzling too is her privileging of race and gender over the other dyad that permeates analytic literature: that of nurturer and infant. One could, and should, argue that relations between black and white, male and female are often intertwined with this defining self-other relationship in the infant’s experience. Yet Sobel privileges the relationship between black and white, declaring it “the defining self-other relationship for most of the narrators in this study and [one which] has remained central in American culture since that time” (6). Analysts and historians alike would doubtless agree that such an exclusive focus overlooks much critically important historical, regional, temporal, domestic, and cultural complexity in the American experience.

Still, despite its flaws, this is a brave and welcome addition to the literatures of both early American history and American cultural studies. Sobel has worked hard to teach us these dreams. She uses them as new and deeper sources of historical evidence, uncovering their pivotal role for Americans (especially evangelical Americans like Freeborn Garrettson) in the wake of the Revolution. Such texts served as sources of reassurance, edification, and inspiration for individual narrators and for their countless readers, and, in Sobel’s hands, they bring a fresh message to modern readers as well.

 

This article originally appeared in issue 2.1 (October, 2001).


Ann Marie Plane is associate professor of history at the University of California, Santa Barbara. She recently published Colonial Intimacies: Indian Marriage in Early New England (Ithaca, 2000) and is at work on an article on dream narration and dream interpretation among English colonists and Native Americans in seventeenth-century New England.




The Visible Public

City Reading: Written Words and Public Spaces in Antebellum New York

 

With City Reading: Written Words and Public Spaces in Antebellum New York, David M. Henkin has produced a brilliant and exhilarating book. It offers nothing less than an utterly original answer to the question: What is the nature of human experience in capitalist cities, and how might we speak historically of such experience? By extension, the book further takes as its subject the nature of human experience outside capitalist cities, in all those niches and regions where the spread of technological change, for nearly two centuries now, has facilitated the introduction of social conventions possessing urban origins. In short, City Reading is analytically expansive, sure to alter the way in which one perceives modernity itself.

Henkin’s interest lies in the public (that is, outdoor) spaces of antebellum New York. In particular, he seizes upon the presence of written words in the streets of Manhattan, identifying four kinds of “urban texts” that veritably colonized Gotham’s environment with printed words in the three or four decades leading up to the Civil War. “Fixed,” especially commercial, signs represent the first sort of text treated by Henkin. “Mobile” signs, such as handbills, represent the second; newspapers the third; and banknotes the fourth. Why should these signs, papers, and notes matter? Because Henkin shows that their proliferation in antebellum New York provided the material basis of a new type of social formation: “a new kind of public” (7). As Henkin says of his book, “At the thematic core of this tour of urban texts is the proposition that those forms of engagement and disengagement that characterize big-city living emerged fairly early in New York around the experience of written words posted, circulated, fixed, and flashed in public view” (x).

In other words, at the heart of Henkin’s study lie so many acts of reading, a practice usually understood apart from the category of urban experience. “In the historical imagination,” Henkin notes, “the nineteenth-century city appears as a place of cacophonous commotion, while the nineteenth-century reader sits in silent solitude, engrossed in the pleasures of a novel” (4). But City Reading makes it clear that to wend one’s way through the burgeoning world of antebellum New York, its streets increasingly filled with strangers, entailed the reading of one sign or poster or daily or dollar after another. To be sure, this was a markedly different experience from that of the solitary reader–precisely Henkin’s point. For the difference between the lone reader and the reader out on the street, Henkin argues, was the difference between one sort of public and another. It was the difference between an abstract, bourgeois public and a concrete, mass public. It was the difference between an invisible public of isolated readers and a visible public of anonymous ones.

City Reading thus offers a counterpoint to the notion of the public prominently advanced by Jürgen Habermas. And in contrast to those who simply lament the decline of bourgeois institutions of public reflection (for example, the coffeehouse), Henkin puts forward a story about a public that succeeded–and in some ways exceeded–those earlier institutions. In the vocabulary of this story, perhaps the single most important word is an adjective: “impersonal.” For Henkin rightly recognizes the impersonality of modern urban experience as one of its leading features. He recognizes other significant features of city life too, but Henkin speaks, most of all, about the “impersonal authority” of his urban texts, about the ways in which these texts garnered legitimacy before an undifferentiated public on the streets of antebellum New York.

To consider how the personal authority communicated in a world of familiar faces receded before the impersonal authority wielded in the modern city, let us regard the frenzy of commercial signs that came to mark the outdoor environment of New York in the 1830s, ’40s, and ’50s. Henkin provides ample, indeed striking, evidence of this frenzy. One is astonished, for instance, to look upon a photograph of lower Hudson Street in which a printer’s, a carpenter’s, a clothier’s, a painter’s, and a druggist’s signs subordinate all else within the physical environment, including its buildings and people (frontispiece, 50). Henkin establishes that the power of such signs–private signs–owed to their enormity and elevation, and that these traits enabled the anonymous circulation of people on the streets of Manhattan. “In a city,” he writes, “where, every day, strangers would arrive with few, if any, acquaintances and no point of entry into a world of face-to-face contact that might orient them, a sign on a building offered direction indiscriminately” (50-51). More than this, Henkin demonstrates that private signs moved toward “typographical regularity” in the antebellum period, toward an “image of uniformity” despite the competing commercial interests represented by these very same signs, so that they consequently gained “an aura of being official” (56, 57). In all, then, commercial signs achieved “an impersonal authority associated with the public sign” of today (65). And in this connection, Henkin introduces the case of New York’s Central Park, whose increasing number of signs in the years after 1860 “would have been neither conceivable nor effective without decades of previous sign use in which New Yorkers had grown familiar with words appearing in public” (67).

Today, we are so familiar with the public appearance of words that City Reading is thoroughly remarkable for questioning what others have taken for granted. “After all,” Henkin submits, “we have come to expect cityscapes to be legible, much as we expect consumer goods to come with labels, instructions, and promotional copy. But there was once something novel in the spectacle of so many words, and something radical in the notion that buildings and streets ought to be marked” (3). Henkin informs us that no small part of this novelty owed to the widespread introduction of cheap daily newspapers, themselves public objects that were sold, scattered, and read on the street. But Henkin is also interested in the papers’ “print spaces,” and here his analysis merits special attention for its originality (113). For, while much has been written about the penny press in antebellum America, nothing akin to Henkin’s interpretation has been proffered. In brief, he poses the daily as a public space in its own right. First Henkin establishes the lack, in the antebellum press, of any clear line distinguishing what we would call “news” and “advertising” from one another. Next he points to the daily’s “rectilinear layout” and “even columns,” analogizing them to Gotham’s grid (117). And finally he identifies the daily’s “typographical uniformity” (117). What, then, was the cumulative effect of these features? “Discrete news stories, tendentious political commentaries, competitive commercial claims, and ostensibly unrelated bits of information blended together in the print columns of the metropolitan press in a characteristically urban juxtaposition of unlikely neighbors that also imbued all of the texts with the appearance of sharing a single, impersonal authority” (119). Put differently (and ingeniously): “In many respects, perusing the columns of the major dailies resembled and paralleled walking down the city’s major thoroughfares . . . ” (125).

“Perusing” and “walking” seem the right note on which here to conclude. If the act of reading has come unmoored in recent decades, invoked metaphorically in the pursuit of sundry objects of inquiry, then David Henkin has alerted us to all those acts of reading that have literally taken the city as their subject; he has clarified the meaning of those acts; and he has rendered for those acts a history where previously none existed. Ultimately, as well, he has fashioned a book that is itself nothing if not urban in character, so rapid-fire in its delivery that the reader vicariously enters the energetic world of the modern city.

 

This article originally appeared in issue 2.1 (October, 2001).


James Kessenides is completing his dissertation for the department of history at Yale University. The dissertation is entitled “Before Hollywood: A Prehistory of Los Angeles.” He teaches history at the University of South Florida, St. Petersburg.




Investigating Patrollers

Slave Patrols: Law and Violence in Virginia and the Carolinas.

 

Sally E. Hadden brings sharp eyes and listening ears to the activities, significance, and composition of southern slave patrols in her important and stimulating study. The introduction, six chapters, and an epilogue span the colonial period through Reconstruction, treat Virginia and the Carolinas, and draw on evidence ranging from legal statutes to slave narratives. Hadden is the first to interrogate precisely and thoroughly those most responsible for surveilling and policing the Old South’s slave population.

She begins with an examination of the evolution of patrols in the seventeenth and eighteenth centuries. Fear of slave revolts helped establish the patrols, and the process of their formation followed a similar trajectory in the three colonies, with private efforts to control the slave population giving way, albeit in sometimes convoluted fashion, to state-sponsored patrols. By the end of the Revolutionary era, “except in urban areas, patrols served as separate groups, apart from militia, constables, and sheriffs” (40). Hadden then turns to the consolidation of the patrols after the Revolution, considers whether or not they were effective, and claims that “[p]atrolling had the same appeal of jury duty in the modern era: it might seem onerous, time-consuming, and people might try to avoid serving, but it was indubitably important” (69).

The chapter concludes with a bit of a red herring when Hadden argues the patrol’s obligation to protect the property of others “was repugnant to Southern white ideas of individual freedom and, indirectly, their sense of personal honor” (70). Patrols, contends Hadden, were entirely too communal, too suggestive of white fear of black revolt, and too intrusive on the slave-master relationship to sit comfortably with elite antebellum Southern men. Thus, efforts to “change and strengthen the slave patrols ran directly counter to Southern white notions of honor and self-sufficiency” (70). Hadden’s characterization of Southern whites as resolutely individualistic is an exaggeration. Certainly, many were fiercely independent but, as a good deal of work has shown, ties of kinship and economic reciprocity bound fairly disparate groups of white southerners. Perhaps failure to strengthen antebellum patrols indicated that although some found aspects of them unpalatable, many nonetheless considered them effective; or, perhaps men (women never patrolled) found the patrols to bolster their ties to community and notions of masculinity. Hadden herself later points out that patrols were “routinely composed of men who knew their fellow patrollers well” (85). In other words, serving in a patrol may have reaffirmed Southern notions of community, kinship, masculinity, and honor.

Hadden’s third chapter is noteworthy for several reasons. Here, she compares patrollers to slave catchers, plantation overseers, and urban constables [no one summarized the difference between the police and the patrollers more poignantly than a former slave: the police “were for white folks. Patteroles were for niggers” (84)]. Most significantly, Hadden shows that poor whites did not make up the bulk of southern patrollers. Based on her analysis of tax and tithe data for two eighteenth-century Virginia counties (Hadden’s use of difficult sources is exemplary), she finds that “slave patrollers were neither wealthy nor at the bottom among the landless and propertyless of their community” (98). Although this began to change after the 1820s, the importance of the patrol to antebellum slaveowning society ensured that patrolling was not left solely to poor whites. Men of some means had to be involved, argues Hadden, not only to protect their property but also to monitor poor white relations with slaves.

A powerful discussion of the patrol’s day-to-day functions and activities, its methods of surveillance, and slaves’ tactics of evasion constitutes the fourth chapter. Hadden is rightly sensitive to how patrollers used not only their eyes but also their ears to ferret out illicit slave activity, most frequently betrayed by noise emanating from slave cabins. Conversely, she also shows how slaves used sight and sound to try to evade and confuse the patrol. A discussion of how the patrols responded during times of slave revolt and war is the focus of chapters 5 and 6. Hadden’s examination of insurrection scares, the American Revolution, the War of 1812, and, in chapter 6, the Civil War, enables her to conclude that wars and rebellions resulted in the desire for tighter surveillance but that wars especially sometimes limited the patrol’s effectiveness. An epilogue, which examines the similarities between antebellum patrol and postbellum Klan–unsurprisingly, there were many–concludes the study.

What to make of Hadden’s fine survey of Southern surveillance? Beyond the important and immensely helpful data she presents about the nature and working of southern patrols, Hadden’s book is refreshing and suggestive for a couple of reasons. First, Hadden’s study invites and, indeed, facilitates comparative work. Future studies of the patrol would do well to compare Hadden’s portrait of the old southeast with patrolling practices in the southwest and, in fact, other slaveholding societies. The second is suggested by Hadden herself when she seems to anticipate criticism from scholars interested principally in how slaves perceived the patrols and how they resisted white surveillance. Of course, Hadden is not unmindful of slave perceptions of the patrol, as she demonstrates in chapter 4, but her focus is on the public regulation of slavery. She wants to move “beyond the worlds of slave and master to include a third party–the slave patrols” rather than dwell on strategies of subaltern resistance (2). In this context, there is much to recommend such an emphasis, for, above all else, this book is a healthy reminder and exploration of the authority, nature, and power of Southern slaveholding society. For Hadden, slaves had agency, but it was one often hedged by the stifling presence of the patrol. In this respect, her study goes some way toward answering what remains a critical question: why the relative absence of large-scale slave insurrections in the Old South? Hadden’s book suggests an answer and much more besides.

 

This article originally appeared in issue 2.1 (October, 2001).


Mark M. Smith is associate professor of history at the University of South Carolina and author of Mastered by the Clock: Time, Slavery, and Freedom in the American South (Chapel Hill, 1997), Debating Slavery: Economy and Society in the Antebellum American South (Cambridge, 1998), and editor of The Old South (Oxford, 2000). This fall he will publish Listening to Nineteenth-Century America with the University of North Carolina Press.




Midwife Tales

What is it about the Puritans? From The Scarlet Letter to The Crucible, black-hatted, steely-eyed Puritans have been every author’s favorite villains. Puritanism itself is portrayed as an intolerant, joyless, self-satisfied religion of mealy-mouthed, slave-owning protocapitalists. As a historian who studies these people, I spend much of my time battling these images. Indeed, over the years I have even developed a reluctant affection for the real Puritans, who fit some, but by no means all, of these literary stereotypes. Thus, it was with both hope and trepidation that I opened the first of Stephen Lewis’s three mystery novels set in seventeenth-century Massachusetts–hope that I would find a likable Puritan at last, and fear that I would not. I discovered that Lewis has written three decent detective stories, and has described the material conditions of life in early New England well enough. However, like his literary predecessors, Lewis has succumbed to the image of the villainous Puritan.

 

Stephen Lewis, The Dumb Shall Sing. New York: Berkley, 1999.

 

Catherine Williams, the main character and “detective” of these novels, is a sympathetic and likable figure. She is a wealthy widow in Lewis’s fictional town of Newbury, Massachusetts, and like many real seventeenth-century widows, she lives an independent life. Catherine occupies a prominent position in Newbury as both a high ranking woman and the town’s midwife.

Catherine is a strong-willed and courageous woman who is not afraid to stand up to authority when she has to. When we meet Catherine in the first scene of The Dumb Shall Sing, she demonstrates this aspect of her character. A group of Pequot war captives are about to be executed and Catherine is there to bear witness. However, she is also there to see to an obligation made to her recently deceased husband: that one of the captives be spared and his fate left up to her. The gathered authorities object to the idea of a single woman making a decision regarding an unpredictable “savage,” but Catherine stands firm. She chooses the bravest of the Indians–a man named Massaquoit, who stood up to torture and humiliation with dignity and courage. By doing so, Catherine acquires an important ally and friend. Massaquoit’s special knowledge of Indian culture and local woodcraft is crucial to solving the three mysteries.

This first scene also introduces us to three other characters: Joseph Woolsey, the local magistrate; Minister Davis; and Governor Peters. Joseph is a good-hearted man who has been Catherine’s close friend for many years. Unlike Catherine, he is weak-willed and unwilling to challenge others with whom he disagrees. Davis and Peters, on the other hand, are typical literary Puritans–self-righteous, greedy, racist, and cruel. It is the minister who gives the order to have Massaquoit’s fellow captives shot, and it is Governor Peters who objects the most strenuously to Catherine sparing one of the Indians. Both Peters and Davis act as Catherine’s nemeses, objecting to her independence and vigorously repressing all forms of dissent in Newbury.

 

Stephen Lewis, The Blind in Darkness. New York: Berkley, 2000.

 

This group of characters comes together in all three of the novels. True to the mystery genre, each plot revolves around a suspicious death. In the first book, The Dumb Shall Sing, Catherine’s role as midwife involves her in the case of Margaret Mary Donavan, an Irish servant girl accused of murdering a newborn baby placed in her care. The Blind in Darkness, number two in the series, begins with Catherine discovering the body of Isaac Powell, a recluse who lives only with a servant boy named Thomas. Massaquoit becomes the first and apparently most likely suspect. Finally, in The Sea Hath Spoken, Quakers arrive in Newbury. Roger and Jane Whitcomb immediately disturb services at the meeting house and soon become pariahs. Clouds of suspicion have already gathered around them when a local sailor drowns. Needless to say, the obvious suspects in each book are not the true villains, and Catherine plays a crucial role in bringing the real killers to justice. Each book provides enough plot twists and red herrings to keep a reader entertained until the end. One warning, however: anyone who has read either Mary Beth Norton’s Founding Mothers and Fathers (New York, 1996) or Kathleen Brown’s Good Wives, Nasty Wenches and Anxious Patriarchs (Chapel Hill, 1996) will immediately recognize the true story on which The Blind in Darkness is based.

 

Stephen Lewis, The Sea Hath Spoken. New York: Berkley, 2001.

 

As historical novels, the books work well on one level. The material culture, foodways, and most of the medical details of Catherine’s midwifery practice are accurate. I was particularly pleased with the dialogue. Lewis manages to capture the texture of seventeenth-century language in a way that does not sacrifice intelligibility. Lewis has also clearly done his homework on quotidian details of colonial Massachusetts life. His knowledge of seventeenth-century firearms is extensive–I think I could almost fire a matchlock myself from Lewis’s detailed descriptions. Lewis’s depictions of childbirth capture both the atmosphere of the birthing room and the extent of midwives’ medical skills. While the historical details are for the most part accurate, there are a few anachronisms that an alert reader will notice. In general these do not detract from the story.

However, I could not overlook the biggest anachronism of all–that of character. Minister Davis and Governor Peters are caricatures, behaving in ways that even the sternest of real Puritans would have rejected. Even more disturbingly, Lewis succumbs to the trap of making all sympathetic characters models of twenty-first century tolerance and understanding. When the minister refers to Indians as bloodthirsty savages, Catherine replies that they are “Men. . . no more nor less, only without the benefit of your condition” (The Dumb Shall Sing, 4). Catherine also constantly questions her society’s values and her religion’s most basic doctrine–so much so that she sometimes seems more like a New Age spiritual advisor than a devout Puritan. Early in the first book, Lewis states that Catherine “depended on her intuitive understanding of the deity’s intention in a particular circumstance rather than on the applications of the minister’s sermons” (The Dumb Shall Sing, 20). Passages such as these seem to come from the mistaken assumption that to be likable a character must share modern values, emotional reactions, and worldview.

And that is just what most disappointed me about these novels. I opened them hoping against hope that for once I would find a sympathetic Puritan character–one who accepts the values of his or her religion, who despises Quakers and fears Indians, but who is still understandable. While I did like Catherine Williams, she reminded me more of the nice nurse-midwife who lives down the street than of Anne Bradstreet or even Anne Hutchinson. Novelists have freedom that historians do not, and I long for one to create a Puritan that we can all appreciate and even love. I fear that I will just have to wait a while longer to find one.

 

This article originally appeared in issue 2.1 (October, 2001).


Rebecca J. Tannenbaum is an avid reader of both mysteries and historical novels. She is also the author of The Healer’s Calling: Women, Men, and Medicine in Early New England, forthcoming from Cornell University Press in 2002, and is a visiting assistant professor of history at Yale.




Losing One to the Gipper

While the Founders of the United States have seen their reputations downgraded in some quarters over the past few decades, they have usually been able to count on the support of our leading political and cultural conservatives. When George Washington got left out of the National History Standards, when Thomas Jefferson’s DNA patterns were found in the wrong bloodstream, well-financed defenders rushed to the rescue. Pity poor Alexander Hamilton, then, for the Founders’ friends have forsaken him. While other major Founders have monuments on the National Mall and their faces carved into the sides of mountains, Hamilton’s one prominent memorial, his portrait on the ten-dollar bill, is under attack from the very same forces that have so often rallied behind his former colleagues. A well-connected conservative organization has begun a drive to put Ronald Reagan’s face on the ten-spot instead.


Hamilton has long been the least loved of the Founders. A treasury secretary rather than a president, as supporters of the Reagan tenner point out, Hamilton made his major accomplishments in the arena of public finance rather than war or politics. And unlike his great rival Thomas Jefferson or fellow greenback illustration Benjamin Franklin, he did not leave behind a mass of quotable quotes suitable for use in everyday life and politics. Besides the Federalist essays, co-authored with Jefferson sidekick and historians’ darling James Madison, Hamilton’s major written legacy is a series of brilliant but highly technical reports on such matters as the public debt, banking, and manufacturing. As anyone who has assigned excerpts of them to an undergraduate class knows, those reports are more likely to inspire narcolepsy than patriotic reverence in the average modern reader.

Though Hamilton sought the status of national hero–“fame,” as he might have put it–his chosen route to that goal was “to plan and undertake extensive and arduous enterprises for the public benefit.” In other words, he sought fame as a lawgiver, statesman, and bureaucrat, rather than as a popular politician. Unlike the other figures emblazoned on our currency, and most other American leaders, Hamilton was notable for his lack of attention to personal popularity. He rarely minced words or obfuscated his views, and he showed a marked aversion to taking half measures in the pursuit of anything he considered a worthy goal. Popular unrest over taxes? The government “should appear like Hercules,” with troops, and show the punks who was boss. Wages too high for factories to be feasible? Hire women and children. The Constitution? A “frail and worthless fabric” that needed to remain as elastic as possible if it was to survive at all. Indeed, the record of Hamilton’s public career is a tissue of ideas and causes that were or became extremely unpopular: lenient treatment for Revolutionary Era Loyalists, life tenure for presidents and senators, the taxation of ordinary citizens, open subsidies to business corporations, the maintenance of a national debt, the restriction of free speech, and the reduction of the powers of the states, among others. To all of these other “PR” problems Hamilton added a sex scandal to which he, unlike Jefferson, freely and extensively confessed.

Hamilton’s poor public image befitted a political ideology that made little room for democracy and probably would have made little difference to the man himself. He retained the respect of the mercantile and legal elite whose opinions mattered to him, and his main objective in public life was to build a stable and prosperous national state that could govern effectively while thriving in a hostile, heavily armed world. At this Hamilton largely succeeded. But while Americans have always enjoyed feeling rich, powerful, and safe, they have also tended to chafe at the policies deemed necessary to make that so. Hamilton has had his political admirers over the years, but he never caught on with the culture the way that other Founders and political heroes did. Mount Vernon, Monticello, and Franklin Court became popular tourist destinations surrounded by impressive memorial infrastructures, while Hamilton’s home, the Grange, sits, sparsely furnished and little visited, literally tucked behind a church on a back street in the northern reaches of Manhattan. Builders of governments are hard to love.

Especially for those who hate government. The source of the effort to remove Hamilton from our wallets is the Ronald Reagan Legacy Project, the same people who got Reagan’s name added to Washington National Airport and Florida’s major turnpike. An arm of the anti-tax group Americans for Tax Reform, the Legacy Project’s self-described mission is to name “significant public landmarks” after Reagan in each of the nation’s fifty states and three thousand counties, “as well as in formerly communist countries across the world.” Alas, the drive to put a Gipper monument in every county seems to be going slowly. There are some schools, bridges, streets, parks, and museums in the Illinois towns where Reagan grew up, and in California where he was governor. But elsewhere the memorial pickings are slim: a building at a Mississippi Sheriffs’ Boys and Girls Ranch and an historical marker in Iowa are important enough triumphs to list on the “Legacy Project Website.” Also highly touted is the possibility that a new high school in Fishers, Indiana, set to open in 2006, might bear the Reagan name.

The campaign has been much more successful with sites directly controlled by the Big Government in Washington, where the inside-the-Beltway conservative activists who care most deeply about the issue are much more influential, despite their erstwhile commitment to shrinking government. The project’s board of advisors includes Attorney General John Ashcroft and numerous Republican congressional leaders, including House Majority Leader Dick Armey and Majority Whip Tom DeLay. Among the successful namings and renamings they count not only the Washington airport, but also the nuclear aircraft carrier USS Ronald Reagan, the Ronald Reagan Ballistic Missile Defense Test Site in the Marshall Islands, and a proposed Reagan memorial on the National Mall.

The head of both the ATR and its Reagan Legacy Project is lobbyist and “issues management strategist” Grover G. Norquist. An ally of Newt Gingrich, Norquist spent the past decade coordinating what he calls a “Leave-Us-Alone Coalition” of right-wing groups, including “property owners, anti-tax activists, gun owners, home- and private-schoolers, small business owners, religious conservatives, and libertarians.” Wielding much of his influence through Wednesday-morning strategy sessions at the ATR offices to which representatives of various conservative causes are invited, Norquist has been given credit for some of the signal conservative policy campaigns of the 1990s, including the scuttling of the Clinton health plan, the 1994 Republican takeover of Congress, and the numerous state-level drives for “paycheck protection” laws (bans on the automatic withholding of union dues) that have now percolated up into the Bush administration’s agenda. Norquist’s group is also responsible for the “Taxpayer Protection Pledge” (against raising taxes for any purpose or under any circumstances) that most Republican politicians have routinely signed since George H. W. Bush failed to read his own lips.

While Grover Norquist is not a household name, inside Washington he is known as one of the most influential of the conservative activists who currently set the terms of our political debates. Some conservatives regard him as nothing less than a hidden prime mover of modern history. His pal Newt Gingrich hails Norquist as “the most innovative, creative, courageous and entrepreneurial leader of the anti-tax efforts and of conservative grassroots activism in America . . . He has truly made a difference and truly changed American history.” The man himself boasts of “saving” the country from the outbreak of–quelle horreur!–“social democracy” represented by a national health plan. Like Richard Nixon but without the charm or social conscience, Norquist regards all government programs as nothing more than attempts to buy votes. And, like Charlton Heston and the Militia of Montana, he regards almost any effort to regulate or tax anything as a horrific infringement of human freedom. Norquist looks forward to cutting all forms of government in half over the next twenty-five years. This will be tough, he concedes. But if the full agenda of extreme economic conservatives were enacted–“if you privatize Social Security, if you voucherize education, if you sell the $270 billion worth of airports and wastewater treatment plants, eliminate welfare, and so on”–then it could be done.

On the surface, it is easy to see why movement conservatives like Norquist would be repelled by Alexander Hamilton, whose great concern was to firmly establish the government that Norquist hates so much, who virtually introduced national taxation in America, and who was not afraid to use force to collect the taxes that were laid. If the Whiskey Rebellion were held today, Norquist (an NRA “life member” and director) would certainly be out shooting up the revenue collector’s house, not marching west with Hamilton and Washington’s army. Still, the Reagan Legacy Project has thus far not cited any substantive objections to Hamilton, other than to state the obvious truths that he was not a president and that his political party no longer has any seats in Congress. (Washington and Franklin, take note.)

Yet there is clearly something deeper going on here. A Republican-led drive to erase Hamilton from the national memory represents a significant shift in American conservatives’ narrative of their own history, a tale in which Hamilton has long played a prominent role. Though Aaron Burr slew Hamilton decades before the modern GOP emerged, Hamilton’s relatively few political fans have almost always been Republicans. It seems to have been the Lincoln administration that first put Hamilton’s face on United States currency, and the Coolidge administration that granted him the prominent location on the ten. Theodore Roosevelt was one of Hamilton’s few vocal presidential supporters. In more recent times, such conservative writers as William Bennett, Karl-Friedrich Walling, and the National Review’s Richard Brookhiser have written admiringly about Hamilton.

More importantly, there are some substantive reasons for modern conservatives to admire the first treasury secretary. Hamilton was to some degree the original theorist of trickle-down economics, taking the view that the creation of large fortunes and intensive development were ultimately good for the nation, however unequally the benefits might be distributed. His financial system quite frankly aimed to lock the government into a tight embrace with its creditors and gave special benefits to business interests. The Bank of the United States was not the Federal Reserve but a private, profit-making corporation that got to use the government’s tax receipts as its capital. From the excise tax to the “funding” at face value of a devalued debt that speculators had bought up from cash-strapped citizens, the financial system amounted to a significant transfer of wealth to an entrepreneurial elite that Hamilton hoped would both support the new government (because they were profiting by it) and invest their profits in developing the still largely rural republic.

Hamilton is the legitimate ancestor of modern conservatives on other counts as well. One recent study, Walling’s Republican Empire, depicts Hamilton as the original Cold Warrior, an “American Churchill” whose vigilance and calls for military preparedness saved America from French aggression just as later conservative leaders saved the world from Nazism and communism. Hamilton can also be considered the inventor of the organized religious right in America. In the presidential campaigns of 1796, 1800, and 1804, Hamilton’s Federalists, assisted by many of the old-line clergy, stoked the fears of pious New Englanders regarding Jefferson’s liberal religious views and personal libertinism. Hamilton suggested taking these efforts a step further and creating a network of Christian Constitutional Societies that would instruct devout citizens in how to follow the inevitably Federalist dictates of their faith. In all of these ways, then, Hamilton offers a highly usable past to twenty-first-century conservatives.

Why, then, the drive to remove him from our wallets? The excision of Hamilton from the story of American conservatism meets a critical ideological need for antigovernment extremists of Grover Norquist’s ilk. A Hamilton in the national pantheon presents an uncomfortable contradiction for these people. Here was a great American white male, a Founding Father no less, of impeccably conservative, pro-business instincts and beliefs, who nonetheless recognized that there were public goods that only governments could accomplish. Hamilton would have found the simplistic opposition that the current Republican right depends upon–an opposition between government activity of any kind on the one hand and human liberty on the other–to be manifestly absurd. Hamilton understood that government exists to protect liberty, a fact that today’s conservatives prefer to sidestep–except, of course, when liberty is threatened in some faraway and potentially profitable place, such as a Kuwaiti oil field.

If Hamilton was too soft on government to suit today’s conservative tastes, he was also far too honest in announcing his purposes, having not yet grasped the essential strategic insight of modern reactionary politics. One of the most significant changes in the Republican party since the 1950s, a change that came in with Richard Nixon and Joe McCarthy and helped usher both Nixon and Ronald Reagan into the White House, was its embrace of an aggressive but basically dishonest populist rhetoric. While the party’s financial backers and preferred policies changed relatively little, its leaders began shaking their fists at evil liberal “elites” in the name of the common man. Yet even Nixon felt the need to give his “silent majority” rhetoric some substantive backing by courting labor leaders and continuing or expanding most New Deal and Great Society programs. In the 1980s and 90s, the Reagans, Bushes, and Norquists came to believe such substantive compromises were no longer necessary: that even policies baldly aimed at redistributing wealth away from ordinary Americans, or making their jobs less secure, or crippling the government services they depended on, could be sold with populist slogans about paycheck protection, death taxes, and tax cuts for “working Americans”–slogans sprinkled liberally throughout the ATR Website in justification of the new Bush administration’s plutocrat-oriented tax proposals. This kind of political inversion is the real legacy that Norquist means to memorialize on the Reagan ten-dollar bill.

Like his friend Newt, Norquist is one of those so-called conservatives who talks as though, and may actually believe, he is in fact some kind of social revolutionary. While there is reason to doubt that a former U.S. Chamber of Commerce employee really poses much threat to the social order, Norquist’s revolutionary self-image does seem accurate in at least one sense: he and his friends display a distinctly Soviet penchant for rewriting history to suit the present regime’s needs. By forcibly making Reagan into the towering figure of modern American history, today’s “movement” is trying to install a grandiose view of its own importance and heroism in our collective memory while it has the votes in Congress and a friend in the White House. The proposed Reagan-for-Hamilton switch may not be the most important of these efforts, but it manages to seem both chilling and stupid, like the famous doctored images of Stalin conferring with Lenin.

Though the idea of adding Reagan to Mount Rushmore has been rejected, there seems to be a serious intention of transforming Reagan into some kind of honorary Founding Father. These are exactly the terms in which one Legacy Project supporter justified a Reagan-naming proposal. “At a time when the names of our Founders are being stripped from schools in other areas of the nation,” said an Indiana conservative, with no thought for poor Hamilton, “let’s rekindle the spirit that respects those figures from our past who helped give our futures promise.” Perhaps next the Gipper can be painted over John Adams in Trumbull’s Signing of the Declaration on the Capitol walls.

Further Reading:

E. J. Dionne’s Washington Post column for March 23, 2001, commented on the Reagan Legacy Project’s designs to rewrite history and suggested the Soviet comparison elaborated on above. Dionne called Norquist “the Lenin of the contemporary right.” I would pick Stalin or Trotsky myself.

An alarming interview with Grover Norquist from the libertarian magazine Reason is the source of several of the quotations above and is available here.

Conservative appreciations of Hamilton can be found in Karl-Friedrich Walling, Republican Empire: Alexander Hamilton on War and Free Government (Lawrence, Kans., 1999); Richard Brookhiser, Alexander Hamilton, American (New York, 1999); and William J. Bennett, Our Sacred Honor: Words of Advice from the Founders in Stories, Letters, Poems, and Speeches (New York, 1997).

The whole history of currency portraits can be learned from “Dawn’s Virtual Currency Collection,” where U.S. paper money can be browsed by portrait. Who knew De Witt Clinton, Thomas Hart Benton, Albert Gallatin, Silas Wright, Daniel Manning (duh! treasury secretary under Cleveland!), and a bunch of generals and admirals were all once the faces of legal tender? Dawn’s site is light on analysis, but a distinct pattern of partisan portrait selection emerges.

For additional, late-breaking comments and information on Hamilton and the Gipper, including a belated response from the Reagan Legacy Project itself, click here.

 

This article originally appeared in issue 1.3 (March, 2001).


 




Playing Dress Up

This summer, public television stations across the U.S. aired The 1900 House, a four-part series billed on PBS’s website as “sci-fi drama of time travel meets true-life drama” but perhaps better explained as This Old House meets Upstairs, Downstairs. Originally broadcast in the U.K. in September of 1999, The 1900 House transported an ordinary middle-class English family, the Bowlers, back in time “to live in a house restored to the exact specifications of the late Victorian era.” For three months, the Bowlers not only lived without central heating or electricity, but also adhered (more or less) to a strict set of House Guidelines, the most burdensome of which turned out to be the rule requiring wearing period clothes–“including the relevant layers of undergarments”–at all times. Buttoning up said undergarments for the first time, the mother of the family, Joyce Bowler, asked, with charming candor, “So, um, how do I go to the loo?” (The answer: unbutton, and aim.)

Most of the four hundred English families who applied to live in The 1900 House expected the experience to be both thrilling and enlightening. Even the otherwise levelheaded Joyce enthused, “We might find we don’t need some of the things that we’ve come to depend on so much.” But actually living in The 1900 House turned out to be pretty miserable. By the third day, Joyce was reduced to tears when her nine-year-old son, Joe, refused to eat yet another (admittedly dubious) meal cooked with nineteenth-century ingredients using nineteenth-century appliances. And watching her husband, Eric, a Royal Marine, leave the house for work every day (in period uniform) while she was stuck at home cleaning the three-story house from top to bottom using only rags, soda crystals, and an utterly useless hand-pumped vacuum, left Joyce enraged. “I’m jealous,” she admitted to a camera concealed in her closet, “because he’s a man and whether it’s 1900 or 1999, he gets the better deal.” Beat. Smile. “But I get the frillier drawers.” By the fifth week even the frilly drawers had lost their appeal: the family was forced to call in a doctor after the corsets Joyce and her eldest daughter wore left them chronically short of breath. (“I hate it,” Joyce screamed at her whalebone-and-lace. “I hate the bloody thing. I absolutely hate it.”)

Just as The 1900 House ended, at the beginning of July, and the Bowlers gleefully chucked their petticoats and pinafores for Levi’s and Polarfleece, Columbia Pictures released The Patriot, a two-hour-and-forty-minute epic about the American Revolution starring Mel Gibson as Benjamin Martin, a South Carolina planter who, according to David Denby in the New Yorker, “is the kind of guy who would never wear a powdered wig.” But Gibson does manage to don a nicely tailored linen shirt and wool knee breeches. Indeed, the costumes for The Patriot, designed by Academy Award winner Deborah Scott with assistance from costume specialists at the Smithsonian, were painstakingly researched and produced. The Patriot’s website boasts that “83,900 military buttons were made with historic details in three different sizes” for the military uniforms worn by the thousands of redcoats and Continentals marching across the screen, while “[o]ver 1500 hats were created and aged to appear as though they had been worn for several months.”

Aging hats and casting brass button moulds is all well and good, and probably Oscar worthy, but what’s the point if no one bothers to research speechways or family life or race relations or colonial politics? Why bother getting the costumes right if the rest of the film suffers from scenes like the one in which Martin’s love interest replies to his question, “Can I sit here?” with a petulant, “It’s a free country–or at least it will be”? Meanwhile, the African Americans who labor on Martin’s plantation are free men and women devoted to serving their beloved master, to whom they later offer protection at an otherwise all-black seaside encampment where the Martin family happily dances to the music of African drums in a kind of multicultural Beach Blanket Bingo. Not for nothing did historian David Hackett Fischer conclude in the New York Times (July 1, 2000) that “‘The Patriot’ is to history as Godzilla was to biology.” The film may get its costumes right, but it gets just about everything else wrong. In a climactic speech before the South Carolina legislature, Martin explains his unwillingness to fight against the British by declaring, “I’m a parent. I can’t afford principles.” What could be more modern? Benjamin Martin is the Wayne Gretzky of the Revolution, the Joe Kennedy Jr. of the eighteenth century, a man who holds a press conference to announce his retirement from public life and declares, with tear-filled eyes, that he is tired of working hard and traveling too much, and eager to spend more time with his family.

If the past is a foreign country, traveling to Benjamin Martin’s world doesn’t even require crossing state lines. It only requires dressing up. And therein lies the brilliance of The 1900 House, which understands that dressing people up, and putting them in a house filled with props, doesn’t actually transport them to another era. “Remember me?” Joyce Bowler asks her children in the series’ final episode, after she’s changed back into her modern clothes for the first time in three months. “I was still here all along.”

And so, of course, was Mel Gibson, though no one seems to have minded (with the notable exception of historians, whose professional listservs have seethed with such abundant derision for the film that one posting finally urged, “Get over it!”). The Patriot has received generally favorable critical reviews, even if it’s lost out to The Perfect Storm at the box office. Meanwhile, The 1900 House has nearly drowned in a flood of crocodile tears offered up by media critics who have persistently compared it to this past summer’s other “voyeur TV” shows–Survivor and Big Brother–a new television genre that is always taken as evidence, in one form or another, of the decline of Western civilization. In truth, The 1900 House is better understood as a historical film stripped of pretense, or, perhaps, as a “costume documentary” that unveils the artifice of “costume dramas.” The Bowlers know, far better than The Patriot’s writers do, that the past–especially the remote past–is difficult to visit, even when you dress for the occasion. “It’s an original,” Joyce Bowler says of the antique range on which she cooks the family’s meals, “I’m the fake.”

Living in The 1900 House left Joyce Bowler keen to learn more about the past: “It’s made me want to dig deeper, to know what it was really like then, and not just to accept what’s between the pages of a history book.” Maybe what Joyce Bowler needs is Common-place, which seems, after all, quite a reasonable compromise between reading a history book and wearing a corset for three months. We hope it will take your breath away.

 

This article originally appeared in issue 1.1 (September, 2000).