Beyond Words: Sylvia’s Diary

I had no idea what I was getting into. I intended to take notes from one slender manuscript journal for a brief article or lecture. Yet somehow I’ve spent the last five years working with the thirty-year diary of Sylvia Lewis Tyler (1785-1851), an early nineteenth-century Everywoman of Connecticut and the Western Reserve of Ohio. It’s a research journey that has unearthed artifacts made by, used by, or known to her, and it has taken me from Washington, D.C., to Connecticut and Ohio, stumbling down riverbanks and puttering through graveyards, using research tools and finding aids from vital records to dowsing rods.

I first learned of the diary when organizing a 2001 exhibition on childhood for the Daughters of the American Revolution (DAR) Museum in Washington. I’d asked the archivist what the DAR’s Americana collection had that reflected teenagers’ activities in the pre-Victorian era. She suggested Sylvia’s diary, since it was begun when its author was fifteen and chronicled the many chores and social activities of a Federal-era girl. I scarcely got a glimpse of the pages before the diary went into its exhibition case, but I did notice many references to spinning and sewing. Years later, I thought I’d take another look, hoping for some useful raw data on textile and clothing production in early nineteenth-century New England. I was taken aback when the archivist deposited nineteen manila folders before me, each containing a small, slim, hand-made volume.

I could have abandoned the project. I could have stuck to my original plan of just noting Sylvia’s textile and costume-related activities. I did begin cautiously, focusing on the spinning and knitting that appeared in nearly every entry. I also noted the names of neighbors and nearby towns, since the title page offered the only clue to where she lived: “Sylvia Lewis of Bristol.”

Working upstairs from one of the nation’s best genealogy libraries does help. Having once lived in the town adjacent to Bristol, Connecticut, I recognized some town names as Connecticut ones, but several states have Farmingtons and Hartfords. Just a few minutes spent checking some names in the diary against the 1800 Bristol, Connecticut, census confirmed my theory, establishing Sylvia as a Connecticut girl. A few more minutes with Bristol’s bicentennial history told me that another Bristol girl, Candace Roberts, had started a diary the same year. This was preserved at the Bristol Public Library’s Bristol History Room, and in another few minutes I was on the phone with Jay Manewitz, the librarian in charge of the collection. “Did you know there’s another Bristol teenager’s diary, begun the same year as Candace’s, and that she mentions Candace on the first page?” I tantalized him. “No! Is there a transcript?” he asked eagerly. “No,” I replied, and knew I was crazy to add, “I’m working on it.”

 

Fig. 1. Inside front cover of the Sylvia Lewis Diary, vol. 1 (1801-2). Courtesy NSDAR Americana Collection.

Thus I began to transcribe Sylvia’s diary, begun at age fifteen in 1801 and ending at age forty-six in 1831. (Two years are missing, and there is a gap between 1822 and 1828; the last volume covers 1829-1831 sporadically.) I could get through about three months of the tiny, sometimes smudged, often cramped, erratically spelled entries in a two-hour spurt: that was all I could stand, between eye strain and the archive’s frigid, artifact-friendly temperatures.

Why did I leap into this project—and why did I stick to it? Having spent four happy years in the town next to Sylvia’s Bristol, I have an abiding fondness for the area and interest in its local history. Moreover, Sylvia’s records are richly informative as social history. Since she was compulsive about chronicling her daily doings, I have assembled nearly twenty years of records for spinning, knitting, sewing, quilting, and laundry, and many entries on foodways, socializing, and other aspects of life in federal-era New England (and pioneer Ohio). From these entries, I’ve compiled “activity tallies”—charts and graphs of seasonal patterns in her various chores. (Was wash day always Monday, I wondered, as common understanding has it? Overwhelmingly, but not exclusively, yes.) Her mundane chronicles, over time, gave me a record of an entire community, and every bit of background and context I’ve found in my research has filled in the picture. I’ve found her entries, whether on textile production or fashion, socializing or foodways, travel or churchgoing, immensely useful in my study of the objects and people I research at the museum.
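For readers curious how such a tally might be assembled, here is a minimal sketch in Python. The entry format is a hypothetical reconstruction for illustration only (each transcribed entry reduced to a date and a list of activity tags); it is not the diary’s actual encoding or the method used for the article.

```python
from collections import Counter
from datetime import date

# Hypothetical transcription format: each diary entry reduced to
# (date, [activity tags]). These sample entries are illustrative only.
entries = [
    (date(1813, 10, 4), ["washing", "spinning"]),   # a Monday
    (date(1813, 10, 5), ["knitting"]),
    (date(1813, 10, 11), ["washing", "knitting"]),  # the following Monday
    (date(1813, 10, 14), ["washing", "quilting"]),  # a Thursday
]

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]

def tally_by_weekday(entries, activity):
    """Count how often a given activity falls on each day of the week."""
    counts = Counter()
    for day, activities in entries:
        if activity in activities:
            counts[WEEKDAYS[day.weekday()]] += 1
    return counts

print(tally_by_weekday(entries, "washing"))
# Counter({'Mon': 2, 'Thu': 1}) -- overwhelmingly, but not exclusively, Monday
```

The same counts, grouped by month rather than weekday, would yield the seasonal charts described above.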

 

Fig. 2. Sylvia Lewis Diary, October 1813. Sylvia adopted this chart format late in 1801, and her handwriting (and quality of ink) varied in legibility. Courtesy NSDAR Americana Collection.

Most of all, Sylvia herself drew me in. Her occasional, restrained outbursts are endearing, from her distress at her father’s death to her raptures on reading her first Gothic novel. No surprise that she found The Children of the Abbey “the most entertaining book I ever read,” after the religious and moral tracts that formed her usual reading diet. Once I knew the cast of characters in the diary, the entries created a narrative, and I kept wanting to know what happens next. Would Sylvia marry Tracy Peck, who walked her home from a quilting? Or Abel Tyler, who nursed her through the spotted fever epidemic? Who would emigrate to Ohio next? Along the way, research became its own reward, and the challenge of finding more and more information to flesh out the spare entries of the diary became engrossing.

Five years later, I am still at it, nearing the end of the 1821 volume. I keep pausing for background research, and to take stock of Sylvia’s data. To go beyond the diary itself, however, I’ve embarked on a research journey that has drawn upon documents, artifacts, and field trips, in an effort to illuminate Sylvia’s written words and reconstruct her world and life.

My first research was biographical—to compile a “who’s who,” to understand who Sylvia was playing with, working for, and visiting. Sylvia’s father, Royce Lewis, was one of ten sons, most of whom grew to adulthood and lived near their father, Josiah, on what is still known as Lewis Street. I needed to sort out the relatives littering the diary. (Sylvia, not writing for an audience, seldom mentioned connections—but “Mrs. Hooker” would reveal herself as an aunt, for example, and Amy and Naomi as cousins as well as playmates.)

Gravestones were most helpful for my biographical summaries. I blessed New Englanders for their habit of inscribing “relict of Reuben,” “daughter of Eli,” etc., on gravestones, connecting children, parents, and spouses. I blessed Bristol’s Katherine Gaylord DAR Chapter for having transcribed Bristol’s gravestones in the 1920s, when they were still legible. Of course, some married daughters, and children who emigrated (whether to the next town or to New York or Ohio), weren’t included in the Bristol record; nonetheless, I had a pretty good Lewis family tree by the end of the first day’s digging, just in the DAR library. For other neighbors, family genealogy books at the DAR were helpful. As with the Lewises, there were often several related families of the same name, and it helped to know which branch of the Cowles or Ives family someone belonged to.

 

Fig. 3. Abel Lewis tavern sign, 1812. Sylvia attended many balls each year at her Uncle Abel’s tavern, a few hundred yards from her home, on what is now Maple Street, Bristol. Courtesy Connecticut Historical Society.

Casting my net a bit wider, I looked at vital records. Town vital records for Bristol are missing, but church records provided marriages and deaths. The Congregationalists’ baptismal habits frustrated me: only children of full members of the church were baptized, and not everyone believed in infant baptism (though Giles Hooker Cowles, Bristol’s pastor, published a treatise in favor of it). Thus, only the July 1805 death record for Sylvia’s niece Minerva, noting her age as 11 months, allows us to name the baby whose birth Sylvia reported in August of 1804.

Death records were often illuminating. Where Sylvia tended to note simply that “Mrs. Bartholomew” had died, the church clerk gave not only full names, but ages and causes of death. During the spotted fever epidemic that struck New England in 1808, Bristol fared better than some towns, but the emotional toll echoes through the clerk’s annotations: “May 3 Polly Wife of Luther Tuttle, aged 29 years Spottd fever Sick about four days very calm” reads a typical record. Sylvia, sick herself, afterwards wrote a day by day account, to the best of her recollection, of news she had heard during her illness. “I heard that Mr Luther Tuttle & his wife was very sick he says O my dear wife, she will die, what shall I do, she will go to heaven, he says—and at night she died and we trust sleep in Jesus,” she wrote for May 3. Luther died the next day. Read in tandem, the church death records and the diary portray a chilling two months in a small, interconnected community.

Despite abundant town and church records, I still found that Sylvia’s private notations were the only source for some biographical events, such as her brother Abraham’s two marriages and the death of his first wife. In October 1814, Sylvia recorded news of the death of “Abraham’s wife,” whose name she never once mentioned. No death record or gravestone survives in Trumbull County, Ohio, where Abraham and his wife had emigrated. And it’s only Sylvia who reported Abraham’s remarriage, on a trip home to Bristol in 1815, to the Widow Plumb. It takes piecing together several more entries’ clues, from 1801 to 1818, to reveal that Abraham’s first wife was Lois Lowry of Bristol and later Ohio. The Lowry family history confirms Abraham as Lois’s husband, but a newspaper obituary that recorded the death of “Mrs. Abraham Lewis” would put Lois’s death in 1837. Only Sylvia’s diary can explain that Lois died in 1814, and it is Rachel Plumb Lewis whose gravestone survives beside Abraham’s in the Vienna (Ohio) Center Cemetery.

Going beyond vital records, I hoped land records and maps would allow me to reconstruct a geography of Sylvia’s Bristol. This proved impossible. Did a single year go by without one or more Lewises buying or selling land, usually between brothers? I don’t believe so. I cursed, too, the New England metes and bounds system. Following English tradition, this method described a plot of land as if the people involved were walking around its boundary, using natural and man-made landmarks, the old chains-and-rods measurements, and references to neighbors’ plots. After 200 years, not one of these landmarks or neighbors’ holdings survived, not even the location of the “highways” first laid out in the 1740s. How was I supposed to diagram land that began at a heap of stones at the southwest corner of Widow Cowles’s orchard?

Despite these obstacles, piecing clues from several sources has enabled me to pinpoint some places and people in the diary. For example, Sylvia wrote in 1812 of watching the cotton factory being “raised.” Chris Bailey, then curator of the American Clock and Watch Museum in Bristol, had told me about Sylvia’s brother-in-law’s cotton factory, a relocated church building. But where had it been sited?

 

Fig. 4. Skein of silk. One of the artifacts preserved by Sylvia’s descendants, who believe it to have been spun by her daughter Sylvia Tyler Bushnell; with the diary’s documentation of the elder Sylvia’s spinning, it seems likely it was actually spun by Sylvia Lewis Tyler. Courtesy Sally Shell.

Fortunately for my quest, Bristol became a prosperous industrial town after Sylvia’s time and has been extremely well documented, starting with a church history by her old beau Tracy Peck. A 1907 history of the town stated that the cotton factory building had become part of the E. Ingraham clock factory, still in business in 1907. An 1896 map of Bristol showed me exactly where the Ingraham factory was: on a branch of the Pequabuck river that can no longer be seen, thanks to its diversion after a devastating 1955 flood. So there was the cotton factory: half a mile or less from Sylvia’s house.

Being a curator and therefore object-oriented, I hoped to find some material culture related to Sylvia or her family. Sylvia had attended numerous balls at Uncle Abel Lewis’s tavern, and I knew that the Connecticut Historical Society (CHS) in Hartford had a large collection of tavern signs. Hundreds of towns, thousands of taverns—what were the chances Abel’s had survived? Then again—what if? I was astounded to learn that the CHS not only preserved Abel’s tavern sign, but also a flamboyant red and gold silk dress worn about 1825 by the adopted daughter of Abel’s son Miles and his wife Isabinda Peck. (Sylvia had quilted with “Binda” in 1803 and 1805.)

A few other artifacts have surfaced through my inquiries on genealogy message boards, which reached descendants eager to hear about Sylvia’s diary. Descendants of her daughter, Sylvia Tyler Bushnell, still have Sylvia’s family Bible, our source for the names and birth dates of her younger children (born after the yearly volumes end), and for the name of her baby who died, age two days, in 1818. “Mr. Tyler held it when it Died which was about SunSet . . . my trials were new & not to be described to those who have not felt the same,” she heartbreakingly recorded in the diary, but with no mention of Susan’s name.

One descendant owns a skein of silk spun by Sylvia, a fantastic companion to a few diary entries in 1801-1802 in which Sylvia reported “pick[ing] silk balls” and spinning silk, and, more broadly, a rare artifact of Connecticut’s early attempts at sericulture. Two cousins treasure linen pillowcases possibly woven, but certainly embroidered, by Sylvia, dated 1828.

 

Fig. 5. Tintype of Sylvia’s daughter, Sylvia Tyler Bushnell, and one of her children, c.1850. It seems likely that Sylvia Lewis Tyler looked much like her daughter, based on analysis of photos of several Lewis family members with extremely similar features. Courtesy Sally Shell.

The third type of research I’ve conducted is geographical, and it’s been some of the most rewarding. Traveling to towns where Sylvia lived has allowed me to understand the layout of her communities and the distances between her and the neighbors and places she mentioned, and more subjectively, to get a “sense of place” for each locale she inhabited: Bristol before her marriage, nearby New Hartford, Connecticut, in 1809-1817, and the Western Reserve of Ohio from 1817 to her death in 1851.

Bristol was the obvious place to start. Jay Manewitz of the Bristol Public Library, who got me into this project, escorted me on my first visit, back in 2005, to Uncle Abel’s tavern (now an office). We stood in the probable ballroom, where Sylvia had danced at many a Thanksgiving and election-day ball, and then went to Cousin Miles Lewis’s house, now the Clock Museum. The current owners of Candace Roberts’s house, still a private home on the south side of town, welcomed us inside. Here, Sylvia and some other teens visited after church on March 1, 1801, and at least Sylvia and Candace decided to keep diaries, which both begin that same day. Finally, we saw the exterior of the houses of Uncles Roger and Eli, and across the street from them, the “Old North” or Lewis Street Cemetery, where so many Lewises are buried.

Cemeteries are addictive research libraries. I have returned often to both the Lewis Street and Down Street Cemeteries in Bristol. Making diagrams of the placement of Lewises in Lewis Street has shown how tightly knit this family was. In Down Street, I found the grave of Josiah’s first son, Roger, and also a small stone nearby inscribed only “d.L.”—probably David Lewis, the first of Josiah’s sons to die, aged only eight months, in 1752. I doubt anyone had thought to look for David in the tiny graveyard just south of the Pequabuck river, and I take not only a historian’s, but also a sentimental pleasure in finding Roger’s little brother, buried just “behind” him. I’m not sentimental about my own relatives’ graves—we are not a graveyard-visiting family—but locating the physical remains of these people I’ve come to “know” through the diary has been strangely compelling.

 

Fig. 6. Eli Lewis’s house, Lewis Street, Bristol. Of the ten Lewis brothers’ houses, only Roger’s, Eli’s, and Abel’s (and Abel’s son Miles’s) survive. Courtesy of the author.

The grave I most wanted to find, of course, was Sylvia’s, in Ohio. I’d never have found it without help. The regent of the nearest DAR chapter put me in touch with an incredibly talented genealogist and DAR member, Sally Mazer, who enthusiastically embraced the Sylvia project, doing advance research and insisting I stay with her and her husband during my visit to Trumbull County. We also connected with Rebecca Rogers, a historian who’d published a history of Trumbull County’s wooden clock industry, in which Sylvia’s husband, Abel, and her brothers Abraham and Levi, all worked. Rebecca took us to the riverbank site of the old iron foundry outside of town, where Abel often went for clock parts, which Sylvia sometimes helped assemble. She also introduced us to Chris and Diane Klingemeyer, who shared their collection of Trumbull County clocks, and while none made by Sylvia’s family survive, it was a thrill to see examples made by her neighbors.

Sylvia’s grave was recorded in the Vienna Center Cemetery in the 1920s, but the gravestone had long since disappeared. Sally, however, knew of a 1900 map of the graveyard in the county’s genealogy library, and had narrowed down the area where Sylvia ought to be. As soon as I arrived in Ohio, Sally took me to Vienna (always, inexplicably, pronounced Vye-enna), to the cemetery next to the Presbyterian church in the center of town. There she opened her trunk to reveal her specialized graveyard research equipment: gallons of water, a trowel, scrub brushes, and the most unlikely finding aid I’d ever encountered: dowsing rods.

 

Fig. 7. Lewis family graves, Lewis Street Cemetery, Bristol. The gravestones of Sylvia’s grandparents, Josiah and Phebe Lewis, are the two red sandstone markers at the center. Each son had his branch’s graves (often including married daughters and their husbands and children) in a row beside, before, or behind Josiah and Phebe. Courtesy of the author.

In addition to finding water, metal dowsing rods apparently will indicate the presence of a grave, by crossing into an X when you walk over one, holding the rods parallel before you. Even spookier, if you hold one rod perpendicular to the ground over a grave, it will move either clockwise or counter-clockwise depending on the sex of the body underneath. Unbelievably, this worked for me in a blind test. Having previously narrowed down the likely area, Sally dowsed for a grave where Sylvia ought to be, and in no time, her rods were magically crossing into an X. Gently using the trowel to pry aside the lawn, we found a flattened gravestone. We scrubbed off some mud, washed away the rest with the water, and a name emerged: SYLVIA ….

Finding Sylvia’s final resting place was a highlight of my research journey, although the Ohio trip yielded more conventional research results as well. Back in Washington, I keep working on the diary, filling in gaps both useful and incidental. Sylvia’s life was not remarkable, in any traditional sense; she accomplished no notable feats, she has no prominent offspring. But she is nonetheless both memorable and historically informative—an Everywoman of her time and place, no longer anonymous. By recording her everyday activities and interactions with the people in her communities, she created a rich body of primary evidence that illuminates our understanding of early republican America.

 

Fig. 8. Sylvia’s gravestone in Vienna Center Cemetery, Vienna, Ohio. Courtesy of the author.

Further reading:

Sylvia’s diary is unpublished, but transcripts are available to be shared with interested researchers. Sylvia’s diary informs Ann Buermann Wass’s study of Federal-era clothing; see Ann Buermann Wass and Michelle Webb Fandrich, Clothing through American History: The Federal Era through Antebellum, 1786-1860 (Santa Barbara, 2010). The diary is also being used in a compilation of names of workers in the American wooden clock industry (ongoing), and by Walter Woodward in his forthcoming study of the Connecticut diaspora of the early nineteenth century. For background reading on some aspects of Sylvia’s life, Bristol’s most recent history is Bruce Clouette and Matthew Roth’s Bristol, Connecticut: A Bicentennial History, 1785-1985 (Canaan, N.H., 1984). Sylvia’s husband, brothers, and other clockmakers and peddlers mentioned in the diary are discussed in Rebecca M. Rogers’s comprehensive study, The Trumbull County Clock Industry, 1812-1835 (privately printed, n.d.).

 

This article originally appeared in issue 11.2 (January, 2011).


Alden O’Brien is the curator of costume, textiles, and toys at the DAR Museum in Washington, D.C. Her exhibits have included “The Stuff of Childhood: Artifacts and Attitudes 1750-1900,” “Costume Myths and Mysteries,” and “Something Old, Something New: Inventing the American Wedding.” She is currently working on “Fashioning the New Woman,” an exhibit on women and fashions of the Progressive Era.

 



When Banks Fail: Creating money and risk in antebellum America

Bank failures bode ill. At least, that is how students of panics and depressions have generally seen it. Crisis is a whirlwind, emanating from the world of finance and spreading into the economy’s productive sectors. In other words, if a bank fails, run for it. And so historians, like most people, tend to think that bank insolvency is a particularly egregious instance of business failure: since banks link many different people and businesses they are especially likely to drag down many others when they themselves succumb to debt and become insolvent.

But what if bank failures were actually the least, rather than the most, problematic instances of bad credit? What if the demise of a bank were actually less troublesome to the economy than the failure of other kinds of businesses? I would like to pursue this admittedly provocative speculation in the context of the Panic of 1837, which, like many other panics, was characterized, and perhaps even triggered, by a chain of banking failures. I am not suggesting that bank failures are unproblematic. But a close examination of this problem can teach us some general lessons about how money was created and about the cultural resonance of credit in antebellum America.

In the wake of the Panic of 1837 anti-bank sentiment in the United States reached a fever pitch. Indeed, anti-bank polemics were perhaps one of the few industries that flourished amidst the general gloom. Banks were assailed for manipulating credit to their advantage and at the same time for allowing credit to dry up just when it was needed most. They were blamed for extracting large profits, but also for being insufficiently capitalized and thus exposed to instability, and eventually to insolvency, as a result of imprudent decisions. Nor were the ills caused by banking limited to the financial, or even the economic, sphere: banks were also seen as corrupters of morals, law, and order.

Of course, such attacks were not born of the Panic, at least not of this panic. Similar charges, that banks ruined the character of the people, animate William Gouge’s 1833 work, whose title tells much: The Curse of Paper Money and Banking, or a Short History of Banking in the United States, with an Account of its Ruinous Effects on Landowners, Farmers, Traders, and on All the Industrious Classes of the Community. Gouge compared banking and paper currency to feudalism, claiming that the former “divides the community into distinct classes, and impresses its stamp on morals and manners.” He thought that the creation of the paper system 140 years earlier had “affected the very structure of society, and, in a greater or less degree, the character of every member of the community. It may require one hundred and forty years more, fully to wear out its effects on manners and morals.” But these effects on character, while deep and most troubling, were not at the heart of Gouge’s diatribe. He focused, instead, on the “multitudes,” the “many thousands of families,” who had been ruined and “reduced to poverty by various Banking processes.”

Gouge’s critique, and with it the assumption that banks may drag hundreds or thousands down with them when they fail, was a persistent feature of antebellum popular writing (demonstrating a remarkable staying power right up to the present day as well). But by what agency do banks bring ruin to thousands? Through what mechanism do actual people suffer when a bank fails? To the modern ear, the very question sounds naïve, at best. Everyone, according to modern opinion, keeps their savings in the bank. Its failure would thus mean the disappearance of those savings. The modern solution to that problem, of course, is deposit insurance, which makes it safe for regular people to place savings with the bank, which the bank then invests, lending in support of productive economic activities. But we tend to forget that this is, for the most part, a modern solution for a modern problem. Antebellum banking in America was not, in fact, based on the numerous deposits of dispersed individuals. Deposits, the mainstay of today’s typical bank balance sheet, were insignificant in the early nineteenth century. This raises the question: if bank failure did not ruin people by destroying their savings, how did it affect them?

 
“The Golden Age Or How To Restore Pubic [i.e. Public] Credit,” lithograph (U.S., between 1832 and 1837?). Courtesy of the Political Cartoon Collection, American Antiquarian Society, Worcester, Massachusetts.

 

In order to answer this question, we need to expand our focus to the question of credit more generally and, in particular, bad credit. Let us imagine a typical credit relationship. Someone wishes to expand his business by purchasing a new machine. We can presume that credit will be forthcoming, either from the manufacturer or from the seller of the machine, who might agree to a deferred payment (perhaps in installments), or from a lender who will transfer the money to the businessman who will, in turn, pass it on to the seller. In either case, the income generated by the expanded production that results from using the machine is considered to be the source that will make repayment possible. It need not be an exclusive source, but its capability to generate returns is thought to be the rationale for giving credit. What has happened, from a social perspective, in this transaction? The borrower has realized a capacity for extending production in a way that was not possible without the credit. A writer on the “Principles of Credit” in Hunt’s Merchants’ Magazine in 1840 described this dynamic, noting that credit is “carried on upon a presumption that some positive benefit is to accrue, and some addition is about to be made to the resources of mankind. Whatever shape commercial credit may assume, it will always be found to rest upon some basis of value, real or supposed, at present existing, or to be created out of the application of labor. The object of loans is to realize a profit both to the lender and the borrower.” From an aggregate social perspective, the ensuing profit is just a sign of what has actually happened: an “addition … made to the resources of mankind.” Credit is the enabler of this particular addition.

Now let us turn to the unhappy case of bad credit. In this instance, the borrower uses the new machine to the same advantage, but for other reasons (say, uninsured healthcare expenses) he suffers a reversal of fortune and is unable to pay back the debt. The creditor is of course particularly unhappy. Aristotle thought that this situation—the half-completed exchange—presented the most urgent instance of intervention to assure justice. Not only had the creditor been damaged, but that damage was precisely the gain of the borrower. Each had moved from the initial status quo, but their movement was in opposite directions, resulting in a doubly wide gap that called out for remedy, for corrective justice. This is certainly the most common, intuitive response to a half-completed exchange, that is, to bad credit. But if we revisit the social perspective, we encounter a surprising result. Recall that the aggregate social perspective was interested in the addition to the resources of mankind. In the case of bad credit, as well, there is every reason to believe that such an addition was actually made. The problem is not that people did not act to use their capacities to increase production—that was the point of the credit, and that was also its result. The problem now is only one of distributing the addition, that is, the surplus derived from the increased production. The credit relationship determines a particular form of distribution which has now been upset. But since the distribution has no effect on the aggregate—on the “resources of mankind”—this only becomes a problem if we have some other reason to be concerned about the distribution, as opposed to the creation, of wealth or resources.

This dismissal of bad credit will not withstand generalization, however. For while a specific case of bad credit poses only distributive questions, the hint that bad credit is not an isolated incident but a widespread feature of interaction may hinder the advance of credit elsewhere. Returning to our “Principles” from Hunt’s we learn that “whatever the sum of capital may be, and the degree of credit which will necessarily attach to it in any community, they can never be made practically beneficial unless the general fidelity in the performance of engagements is fully complied with. This is the stimulus to all the active industry of modern society; for it creates the disposition to believe a promise of future labor equivalent to present capital, and hence promotes exchanges between the two.” It is only our trust in the principle and habit of fidelity to promises that allows people to engage in a credit system. “The disposition to perform promises is, then, as essential to the establishment of credit, as the ability. The two combine in every community to create that species of confidence which may be made the basis of action” (my emphasis).

There we have the key. What is credit but a species of confidence-inspiring action? And so, the problem with bad credit is not a particular failure to pay one’s debts. Instead, what is at issue is a type of game in which we encourage people to act without receiving any immediate gain, relying only on the solid expectation of future gain. The problem is that “credit may be most effectually destroyed, if the sense of the people can be demoralized,” which will bring them to disregard “all law, divine or human, but their own will.” If people lack confidence, if they believe that credit is too dangerous, they will stake nothing on the game, choosing to produce less than they would if they used credit.

With this general understanding of credit, then, let us return to the particular role of banks in the antebellum system of credit and production. Here again, the failure of any one particular bank—which happens when its liabilities prove to be cases of bad credit—seems much less problematic than our intuitive sense of the damage caused by bank failure. Credit was extended, borrowers engaged in production, and the resources of the community were increased. And what of the creditors? Again, recall that in antebellum banking the depositors who would be today’s paramount concern are not present in large numbers. Instead, there are two types of creditors, only one of which we are accustomed to thinking about today. First, there are the bank’s direct investors: its initial promoters (and in all probability, its directors) to the extent they contributed capital, together with stockholders if the bank “went public” at some point and sold stock on the open market. While this group technically classifies as a creditor of the bank, the bank’s successes were their successes, and so its failure is in great measure their failure. In other words, the distributive outcome has little or no meaning: the bank’s directors were in a sense their own creditors.

The second group of creditors comprises note-holders. Antebellum banks issued notes that were payable in specie on demand. When a bank failed, note-holders could apply to the receiver of the bank for payment, but the bank’s assets typically did not cover even a small fraction of its liabilities. As a result, small note-holders probably recovered only in rare instances. Happily, no one but those who had intimate business relations with a bank was likely to be left holding a large amount of the notes of any particular bank. And so, unlike today’s depositors whose holdings are highly concentrated in one bank, note-holders in antebellum America typically held notes from a wide array of banks, each for a small sum and for a relatively short duration. Or in modern parlance: the small note-holders were well diversified. They thus faced little risk from the demise of any particular bank.
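The diversification point can be put in simple arithmetic (a stylized illustration with invented numbers, not period data): a holder whose notes are spread evenly across $n$ banks loses only $1/n$ of the total holding if any one bank fails with nothing recovered,

$$\text{loss from one failure} = \frac{1}{n} \times \text{total holding},$$

so notes from twenty banks mean a 5 percent loss where a single-bank holder would lose everything.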

One further twist should be borne in mind: banks, whether of the antebellum variety or those doing business today, are not simply intermediaries, despite a common belief. Banks do more than concentrate dispersed money and funnel it towards specific uses. Banks inhabit a fractional reserve system, which means that they can lend far more than they have on deposit. In fact, bank lending in a fractional reserve system creates money. Today, banks create money in the lending process because they can lend more (usually ten times more) than they must hold as reserves. Antebellum banks created money in a somewhat similar fashion, but they had a tool that today’s banks lack: they could issue their own notes, holding limited reserves in order to be able to redeem them in specie. The mechanics are different but the effects in terms of money creation are similar. The upshot here is that antebellum banks actually loaned considerably more money than they ever collected from investors or depositors. That means that for every dollar risked by a bank, there was much less than a dollar risked by a depositor. We have one more reason, then, to think that bank failures might be less of a cause for worry than other business failures. Because banks have the privilege of creating money, they actually spur the system of production beyond its existing capacity. Bank credit allows the production of something for nothing, or nearly so. More accurately, the equation is something for a nothing that will turn (everyone hopes) into a something later on. As our teacher in Hunt’s understood, credit allows people to treat the future as if it were already here.
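The arithmetic behind “ten times more” is the standard reserve-multiplier calculation: a stylized textbook sketch, not a description of any actual antebellum ledger. With reserve ratio $r$, an initial reserve $D$ supports a loan of $(1-r)D$; when that loan is spent and redeposited, it supports a further loan, and so on, so the total money supported is the geometric series

$$M = D + (1-r)D + (1-r)^2 D + \cdots = \frac{D}{r},$$

and $r = 0.1$ gives $M = 10D$, the tenfold figure mentioned above.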

We thus begin to see how the advantages of banking and the optimism regarding the minimal impact of any single bank failure might begin to unravel, and why the critics of banking were, at least in broad terms, barking up precisely the right tree. They understood or at least intuited three related problems about banking and the ways it might in fact ruin the lives of multitudes.

First, the critics saw that the power of banking to enhance productive capacity beyond the immediate ability to pay for it was potentially valuable, but almost certainly dangerous. Each act of advancing credit was an act of faith. But faith is like lying: it becomes easier with each additional step. In this sense, lending against fractional reserves, which was still something of a financial innovation, had a cascading effect, both on the way up and on the way down. The availability of credit, and especially the expansion of the sources of credit (that is, the appearance of new banks; or the expanded lending of existing banks once federal deposits were distributed throughout the country), helped create a bubble that would eventually burst. When on the rise, credit generates competitive pressures on surrounding banks to engage in similar acts. This dynamic has recurred with every form of financial innovation since the early nineteenth century. Confidence has its price.

Second, the critics understood that when the system came under stress, that is, when too many people demanded their money, banks would exacerbate rather than alleviate that pressure. This was because lending against fractional reserves meant that banks could not respond to actual stringency by opening their vaults. When faced with demands for cash, the banks would have to call in their loans; their customers, in turn, would have to call in any liabilities owed to them. In other words, just when the demand for money became most intense, the banks would become an active force on the demand side rather than being able to act as a source of supply. Banks would not encounter this problem if they simply acted as intermediaries. It was their money-creating function which made banks a big part of the problem rather than a means of resolving the problem when pressure for money arose. What’s more, the system of lending money that did not actually exist until the loan was made ensured that more and more economic actors had their fortunes intertwined. Pressure on one point in a chain of loans could easily be transferred to many other points in many other chains through the channel of a bank.

Third, critics understood that the kind of confidence required for banking created a precarious web. When confidence in one bank was shaken, confidence in surrounding banks was nearly sure to founder as well. Once creditors got edgy, there was nothing standing in the way of the downward cascade described above. And because banks owed one another, and all businesses owed banks, a crisis of confidence would (almost) never be limited to one bank. This is how the ripple effects of a banking crisis would spread directly to the production sector: banks would call in loans, and businesses strapped for cash would have to cut expenditures, fire workers, and eventually close. The banks were instrumental in propping up more businesses than could ever have existed without the banks’ support. At the same time, the banks were a central reason why so many businesses came under pressure all at once in what Irving Fisher would eventually call the debt-deflation problem. When productive businesses close, people lose their jobs. To the extent that bad bank credit contributes to general decline, all the various observers concerned about bank failure are completely right to worry.

Perhaps the best way to understand the attacks on banking is not by focusing on the concern for the pecuniary safety of the average citizen. For Gouge and others like him, the soul rather than the body of the people was at stake. When the advocates of banking could write that “credit is a moral property as well as an economical instrument,” and that the habits of credit should “extend their influence over general conduct in all the relations of life,” critics bristled. This was not because they doubted whether there was a moral imperative to pay one’s debts. Rather, they were shocked to see the idea of bank credit, based as it was on getting something for nothing, vying for the moral high ground. Credit of this sort was a speculation. Allowing it to flourish was one thing; granting it not only legitimacy, but moral status was horrific. If people were taught to consider their relationship with their banker as analogous to their obligations toward family, community, and state, the multitudes would indeed have come to ruin.

Further reading

For competing interpretations of the role of banking in precipitating the Panic of 1837, see Bray Hammond, Banks and Politics in America, from the Revolution to the Civil War (Princeton, 1957) and Peter Temin, The Jacksonian Economy (New York, 1969). A new evaluation of the evidence appears in Jane Knodell, “Rethinking the Jacksonian Economy: The Impact of the 1832 Bank Veto on Commercial Banking,” The Journal of Economic History 66, no. 3 (2006): 541-74. On the history of banking in the United States, see Howard Bodenhorn, State Banking in Early America: A New Economic History (New York, 2003) and Naomi Lamoreaux, Insider Lending: Banks, Personal Connections, and Economic Development in Industrial New England (New York, 1994). On the economics and culture of failure, see Edward Balleisen, Navigating Failure: Bankruptcy and Commercial Society in Antebellum America (Chapel Hill, 2001). For an account of one particularly spectacular banking failure, see Jane Kamensky, The Exchange Artist (New York, 2008).

 

This article originally appeared in issue 10.3 (April, 2010).


Roy Kreitner is the author of Calculating Promises: The Emergence of Modern American Contract Doctrine (2007). He teaches in the faculty of law at Tel Aviv University and is currently the Lillian Gollay Knafel Fellow at the Radcliffe Institute for Advanced Study, Harvard University, and an American Council of Learned Societies Fellow.




On The American Jeremiad

Small Stock

The American Jeremiad passes at least one test of really important work: it forces you to reconsider beliefs that you took for granted. I’d like to focus here on the particular belief of my own that Bercovitch’s book most forced me to rethink, though not without resistance.

It’s the belief that, though the United States repeatedly betrays its founding ideals of justice and fairness, it is uniquely willing to measure its betrayals against those ideals and to try to change itself accordingly. For Bercovitch, the ideological function of this belief is to provide a kind of no-fault insurance for the American dream: if America does the right thing, we demonstrate our essential goodness; but if America does the wrong thing we show we are even better because we are so willing to condemn our own crimes. The greatness of America, then, is nowhere better reaffirmed than in our negative jeremiads—including our politically radical self-critiques—a process Bercovitch describes in the book as “the Americanization of dissent.” In his preface to the new anniversary edition of the book, he urges the need to “open the possibility of contemplating injustice and conceiving democracy outside the framework of the Real/True America” (emphasis mine).


It’s a brilliant demonstration of how for us all roads, however oppositional they may seem, lead inevitably back to a celebration of America’s unique greatness, from the Puritans’ confusion of godliness and Americanness on down to the present day. I admit, however, that at first I had my doubts about this argument. Why is it so bad to measure ourselves against an ideal America if doing so leads at times at least to correcting our evils?

What convinced me, however, that Bercovitch is indeed onto something important is an example he offers in the new preface, where he updates his argument by listing instances of jeremiad-like rhetoric on the current political scene. Bercovitch quotes a dissenting slogan from the 2012 Occupy Wall Street protests: “The American Dream has been stolen…” one protestor asserted. “The 1% has destroyed this nation and its values.”

On first encountering this example and Bercovitch’s critical attitude toward it, my thought was, wait a minute—what’s the matter with that? What’s wrong with accusing the one percent of hijacking the American Dream and betraying our nation’s values? I felt confirmed in this thought after the November 2012 U.S. elections, where it seemed that this sort of appeal to economic fairness had been very effective in deciding the outcome. For once, a populist critique of plutocracy had persuaded the American mainstream, and it was the appeal to American values that had helped turn the tide.

But on reflection, I saw Bercovitch’s point more clearly and felt its force: the American habit of acting as if we invented values that are not peculiar to us at all drapes a cloud of nationalist and exceptionalist mystification over our every public discussion—in this case, the swindling of the ninety-nine percent by the one percent. If that swindle is wrong, its wrongness has nothing to do with whether it is un-American or not. But here is Bercovitch’s biggest point: until the nation frees itself from this confusion, it is unlikely ever to wise up to such swindles.

 

This article originally appeared in issue 14.4 (Summer, 2014).


Gerald Graff is professor of English and education at the University of Illinois, Chicago, and the author of numerous works of literary and cultural criticism as well as critical histories and commentaries on higher education, including Professing Literature: An Institutional History (1987), Beyond the Culture Wars (1992), and Clueless in Academe (2003).




Unrecouped

Hey look, I am back. I could bemoan the insidious forces that have kept me from blogging this morning, but I seem to know so many people who have been sick or injured, or who have lost loved ones in recent months, that it really does not seem to become me to complain. And that was even without reading the paper this morning. Anyway, it’s a new year, a new semester, a new decade, so let’s get started.

Having been to more than my share of very sparsely attended indie rock shows and history conference panels in recent years, I have thought more than once that “mid-career” academic historians have much in common with a lot of the veteran indie musicians I go to see: well-known within a certain dispersed circle of cognoscenti, perhaps even established in a certain way, but doing something too particular in its appeal to ever achieve more than the most modest sort of popularity. Most historians, like most bands, still have to set up and load their own equipment, and while it saddens me that we historians don’t usually get to perform in dive bars, the bathrooms in conference hotels are usually cleaner.

Then there is the economics of our respective types of publication. My reminder of the similarities here, admittedly not too recent at this writing, was this very informative post by Tim Quirk of Too Much Joy, critiquing his band’s royalty statement.

From Tim Quirk, I learned a new term (new to me) that major record labels use to denote those never-hit-it-big back-catalog bands that they authorize themselves to ignore and abuse: “unrecouped” bands, whose sales, according to major-label accounting, never paid back their advance and promotional costs. (According to the statement, Too Much Joy’s account with Warner Brothers stood at $62.47 in royalties, with an unrecouped balance of $395,277.18.) Historians lucky enough to find teaching jobs and get tenure do enjoy some job security that bands who had a couple of songs on alt-rock radio in the early 90s might not, but we all live in danger of remaining “unrecouped” and thus powerless when it comes to dealing with publishers and their self-serving accounting practices.
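For anyone unfamiliar with the term, the arithmetic of recoupment is simple enough to sketch (a hypothetical reconstruction built around the two figures in Quirk’s post; real label statements are far more baroque):

```python
# Recoupment in its simplest form: royalties earned are credited against
# the advance and promotional costs the label charged to the band's account.
# The two figures come from Quirk's post; deriving total charges from them
# is a back-of-the-envelope reconstruction, not the actual Warner ledger.
royalties_earned = 62.47
unrecouped_balance = 395_277.18
total_charges = unrecouped_balance + royalties_earned

print(f"Charged to account: ${total_charges:,.2f}")      # $395,339.65
print(f"Still unrecouped:   ${unrecouped_balance:,.2f}")  # $395,277.18
# No royalty checks go out until the unrecouped balance reaches zero.
```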
—————-
Now playing: The Low Anthem – To the Ghosts Who Write History Books

 

This article originally appeared in issue 10.2 (January, 2010).


Jeffrey L. Pasley is associate professor of history at the University of Missouri and the author of “The Tyranny of Printers”: Newspaper Politics in the Early American Republic (2001), along with numerous articles and book chapters, most recently the entry on Philip Freneau in Greil Marcus’s forthcoming New Literary History of America. He is currently completing a book on the presidential election of 1796 for the University Press of Kansas and also writes the blog Publick Occurrences 2.0 for some Website called Common-place.




Curiosity Did/Did Not Kill the Cat: The Controversy Continues

What is curiosity? “Curiosity” shares etymological roots with “care” and “careful”; once, a curious man was a fastidious one, a curious object an object well-wrought. Now, to be curious is to seek knowledge, but that knowledge, because acquired through curiosity, can be seen as illicit. It is a virtue to be curious, but curiosity killed the cat, and left Curious George locked up at the zoo, on display for curious children and their curious parents.

Curiosity works like this, snaking its way among people and objects and animals, attaching itself first to one thing, then to another. In early America, curious men discussed curious things and displayed curious objects in cabinets of curiosity. Curiosity links a world of ideas with the social worlds in which men, women, and ideas circulated.

Consider this episode in the life of Benjamin Franklin. A young and ambitious Franklin arrived in London in 1724, only to discover that his Philadelphia patron had failed to send letters of introduction. Fortunately, young Ben had other means of introduction: “I had brought over a few curiosities,” he later recalled, “among which the principal was a purse made of the asbestos which purifies by fire. Sir Hans Sloane heard of it, came to see me, and invited me to his house in Bloomsbury Square, where he show’d me all his curiosities, and persuaded me to let him add that to the number, for which he paid me handsomely.”

In this neat little transaction, Franklin turned his curious purse to social connection and to cash, two things he very much needed at the time. He gives us a glimpse too of the gossip among learned men, cabinet keepers, and curiosity seekers; Sir Hans Sloane just happened to have heard of Franklin’s curiosity. How exactly he had heard, Franklin does not say, but someone must have been talking to Sir Hans of the curious young man from the colonies with the collection of curious objects he was willing to show and to sell.

Shared curiosity linked the men of the Enlightenment, in European capitals and the colonies. Curious men and the occasional curious woman exchanged peculiar objects that seemed to defy the categories they had devised to sort out the world: in this case, a purse fire could not destroy. While Franklin and Sir Hans likely had a genteel or learned exchange about the purse, on city streets sometimes more rough-and-tumble seekers turned out to see people with strange features or from strange places displayed as “Great Curiosities.”

In the 1830s, another curious American traveler headed west. In 1834, Richard Henry Dana dropped out of Harvard and worked his way to California as a common sailor. He picked up plenty of “curious and useful information” about his ship, and about California and its residents, and passed it on to readers in his travel narrative, Two Years Before the Mast. In his year working with cowhides on the California coast, Dana met up with some different kinds of curiosities. He remembered one acquaintance who was “considerably over six feet, and of a frame so large that he might have been shown for a curiosity.” The sailor’s feet “were so large,” Dana writes, “that he could not find a pair of shoes in California to fit him, and was obliged to send to Oahu for a pair; and when he got them, he was compelled to wear them down at the heel. He told me once, himself, that he was wrecked in an American brig on the Goodwin Sands, and was sent up to London, to the charge of the American consul, without clothing to his back or shoes to his feet, and was obliged to go about London streets in his stocking feet three or four days, in the month of January, until the consul could have a pair of shoes made for him.”

This story lets curiosity slip off the pages and set its hooks into us. Why would a man send to Oahu for large shoes? Was this barefooted paradise actually a source of shoes for big-footed men? Why the heels on Hawaiian-made shoes? And how often did American sailors wander wintry London streets in their stocking feet? In fact, why was this shoeless man “obliged” to go about the London streets at all? Was it common for the American consul to commission shoes for shipwrecked citizens? Did the consul have a clothing allowance? A shoemaker and a tailor on call?

Dana was curious about the large man; we are curious about Dana’s story. But there is more. Dana reported that a particular Hawaiian friend of his “was very curious about Boston (as they call the United States); asking many questions about the houses, the people, etc., and always wished to have the pictures in books explained to him.” And on the ship back to Boston he encountered one of his old Harvard professors who was very curious about California’s rocks and shells. The professor’s curiosity made the man himself an oddity, an object of curiosity to the sailors. “The Pilgrim’s crew christened Mr. N. ‘Old Curious,’ from his zeal for curiosities, and some of them said that he was crazy, and that his friends let him go about and amuse himself in this way. Why else a rich man (sailors call every man rich who does not work with his hands, and wears a long coat and cravat) should leave a Christian country, and come to such a place as California, to pick up shells and stones, they could not understand.”

For Franklin, Sir Hans, and sailor Dana, curiosity was largely a virtue, a good thing that spurred the inquiring minds of leading men. But as literary historian Barbara M. Benedict reminds us, some people were better at being curious than others. Snooping women got caught up, she writes, in “the seamy obverse of elite inquiry.” A woman’s desire to know flirted with transgression; so did a child’s curiosity to know the world of adults, a worker’s desire for information guarded by a boss, a slave’s interest in the doings of his master, and every human desire to know the ways of the gods.

In Benedict’s wonderful account, Curiosity: A Cultural History of Early Modern Inquiry (Chicago, 2001), we learn that even elite male curiosity has a checkered past. Questions that appeared to be disinterested matters of science to men of Franklin’s generation once seemed to ecclesiastical authorities to stem from a dangerous desire to know too much. “Flooded by new and curious men and women,” Benedict writes, “early modern culture characterizes curiosity as cultural ambition: the longing to know more. And this characterization, as both praise and blame, remains with us today.”

This special issue of Common-place takes up the uncommon history of curiosity. Our authors help us notice that men and women who are curious are themselves sometimes turned into curiosities. We are curious about a medical man in Worcester puzzling over the curious behavior of a sleep-walking servant and about a medical missionary displaying portraits of his patients to pique the curiosity and open the purses of would-be donors. We visit the medical museums where men and women curious about their own anatomy gazed on displays of preserved and sometimes grotesque body parts.

Body parts had no say in how they were perceived. But our authors recover stories of men and women who were displayed as curiosities but then turned the curiosity of customers to power or profit. We see explorers in a new world puzzled by strange plants and strange creatures and needing native knowledge to sort the dangerous from the benign in New World flora and fauna. We see merchants in New York, with the help of taste-making ladies, upgrading “Curiosity Shops” by calling them antique stores. We see students curious about a painter of Indians provoking the curiosity of their professor, who learns that the artist’s own curiosity about his subjects distinguished his paintings from more pedestrian images that were rendered to meet contemporary tastes and expectations. We encounter a historian caught up in his own curiosity about a portrait of Emily Dickinson. We see a globe maker in rural Vermont whose curiosity about the world beyond the borders of his small state inspires the curiosity of a historian. We watch men and women speculate on the odd things that don’t fit in easy categories. Why did the novel Uncle Tom’s Cabin have such a long and strange theatrical afterlife? How did mountain stones come to form the likeness of a human face? What kinds of creatures inhabited ancient America? What race of men inhabits contemporary America?

In the spirit of the old Yiddish proverb—“A man should go on living, if only to satisfy his curiosity”—we welcome readers to join the subjects and authors of this issue in exploring some of curiosity’s many entangled meanings and consequences.

 

This article originally appeared in issue 4.2 (January, 2004).


Ann Fabian teaches history and American studies at Rutgers University in New Brunswick, New Jersey. Her publications include Card Sharps, Dream Books, and Bucket Shops: Gambling in Nineteenth Century America (Ithaca, 1990) and The Unvarnished Truth (Berkeley, 2000). Fabian is currently working on skull collectors.

Joshua Brown is executive director of the American Social History Project at The Graduate Center, City University of New York. He is the author of Beyond the Lines: Pictorial Reporting, Everyday Life, and the Crisis of Gilded Age America (Berkeley, 2002) and co-author of the CD-ROM Who Built America?: From the Great War of 1914 to the Dawn of the Atomic Age in 1946 (New York, 2000).




Naming the Pacific: How Magellan’s relief came to stick, and what it stuck to

And God said, Let the waters under the heaven be gathered together unto one place, and let the dry land appear: and it was so. And God called the dry land Earth; and the gathering together of the waters called he Seas: and God saw that it was good . . . 

And out of the ground the Lord God formed every beast of the field, and every fowl of the air; and brought them unto Adam to see what he would call them: and whatsoever Adam called every living creature, that was the name thereof.

—Genesis 1:9-10, 2:19, King James Version

From the beginning, the relationship between naming and waters has been unsettled. The Book of Genesis begins as formless abstractions emerge out of chaos. Light comes out of darkness, and the firmament divides the waters from the waters. To these abstractions, a self-satisfied God assigns the first proper names (besides his own): “God called the light Day, and the darkness he called Night . . . and God called the firmament Heaven.” These names—Day, Night, Heaven—are singular, as is the name God gives the dry land—Earth. But even though the waters under Heaven are “gathered together unto one place” to let the dry land appear, the name they receive is plural, at least in the English translation offered by King James: “the gathering together of the waters called he Seas.” Later, God creates the first man and hands him the task of naming the earth’s creatures. The man’s name, Adam, is mentioned for the first time in the very act of bringing Adam those creatures “to see what he would call them.” God gave Adam dominion “over every living thing that moveth upon the earth,” so naturally it was in Adam’s gift to name them. But if Adam named the fish, it was God who named the waters. And in his infinite wisdom he thought of them as plural and called them Seas.

In these origin myths, we can identify two questions pertinent to the uneasy relationship between waters and names. First, what gives human beings the authority, or the gall, to assign names to such inaccessible and incomprehensible entities as the seas? How can limited and puny mortals name these mysterious givers of sustenance, from which even the authors of Genesis realized that life emerged? The dominion that comes with naming seems somehow inappropriate for eternal rivers and boundless seas. 

Second, even if it were appropriate for man to name the waters, what kind of name would the seas merit—one name or many, singular or plural? As the Genesis authors recognized, waters tend to “gather together unto one place,” which encourages us to think of them as singular. From earliest recorded times, some geographical theorists imagined the seas of the world to be one entity, the visible waters of the known world flowing into and surrounded by a vast ocean, a stream or river that encircled the world’s landmass.  They recognized that this aqueous body lacked the obvious boundaries that lend confidence and ease to the practice of naming—we might as well name the air that we breathe as the sea that surrounds us. 

Yet on the other hand, in the days when wind and sail provided the fastest means of travel, when continental and global circumnavigations lay far in the future, there was no obvious way to test whether widely separated seas, such as those that lay off the Arabian Peninsula and those of the Mediterranean, were in any way contiguous. Some thinkers consequently imagined them to be utterly separate entities. Ptolemy’s global geographical scheme envisioned the Indian Ocean as an enclosed sea, surrounded by Africa, Asia, and a great unknown southern land mass that connected southern Africa to Asia. From this perspective, it was quite natural to give these boundless yet separate bodies of water individual names. Even within bodies of water that the ancients could demonstrate were contiguous, different peoples gave different names to parts of the same waters—the Adriatic, Ionian, and Aegean Seas, for example.

I.

All these confusions still faced Europe’s fifteenth-century explorers who sailed westward out of the Mediterranean. Columbus drew encouragement for his voyages from his belief, one of several plausible geographical schemes of his era, that the Ocean Sea was very broad from north to south, but not very wide from west to east. He went to his grave believing that he had indeed sailed across it, and bearing the title that Ferdinand and Isabella bestowed upon him—Admiral of the Ocean Sea—for that was the name of the sea that he, before all other men, had mastered. 

Of course, the explorers who sailed westward in Columbus’s wake, most of them Iberians, quickly realized the Admiral’s error and began to think of the Americas as a continental landmass, somewhere between the western ocean of Europe and Africa and the eastern ocean of Asia. With this dawning realization, and with the dramatic increase in ocean-going traffic beyond the Pillars of Hercules, came an increased need to name and chart the seas that Columbus sailed. Before the great age of exploration, the western ocean had sometimes been called “Atlantic,” a name derived from the Atlas mountains of western Libya, from which the Ocean River could be seen, as well as from the Greek myth of a lost island civilization, Atlantis, lying somewhere vaguely to the west of the Hellenic world. But Atlantic was by no means a universal or exclusive name for the seas Columbus sailed. The coastal voyagers of northern Europe had their own names for these waters: the Baltic, the North Sea, the Irish Sea. Early maps of what we think of as the Atlantic basin extended this practice, naming the Sea of France, or Mare Gallicum, for instance. South of the equator, the waters navigated by the Portuguese as they made their way toward and then around the Cape of Good Hope were commonly called the “Ethiopian Sea,” a name that lasted well into the eighteenth century. And Columbus’s first discoveries were quickly added to the list as the Antillean Seas. In other words, although “Atlantic” was in play from the beginning, it was but one name among many for the waters that were now a highway to New World wonders.

 

Fig. 1. On this map of the Spanish Main the sea to the north is labeled the North Sea, Mar del Norte. “Novus Orbis sive America: Meridionalis et Septentrionalis,” 1734. Courtesy of the American Antiquarian Society.

Spain soon came to dominate both the wonders and the wealth of these new worlds, and the center of Spanish transatlantic operations lay in the maritime basin formed by the northern coast of South America and the islands of the Antillean chain, from Cuba in the northwest to Trinidad in the southeast. The northern coast of South America, stretching a thousand miles from the mouth of the Orinoco in the east to Darien in the west, the Spanish called Tierra Firme, the mainland, translated by the English as the “Spanish Main.” The sea to its north was called, quite naturally, the “North Sea,” Mar del Norte, by its Spanish masters. Over time, the name for this central sea of the Spanish Empire became the general name for the entire basin between Old Spain and New, so that maps from the sixteenth and seventeenth centuries commonly referred to the entire Atlantic as the North Sea—Mar del Norte. As late as the 1690s, even the southernmost regions of the Atlantic, the waters to the east of Argentina and Tierra del Fuego, were labeled as Mer de Nort in a French atlas.

II.

As Spanish conquistadors gradually took possession of Tierra Firme, the native peoples they encountered began to inform them of another body of water, a great sea comparable to the North Sea that Spain now controlled. In 1511, Vasco Núñez de Balboa, one of several Spanish overlords competing for control of the native caciques or chieftains of the Darien region, visited the domain of Cacique Comogre, where he heard Comogre’s eldest son Panquiaco tell of golden treasures that could be found in lands to the south, across the mountains of the Darien Isthmus and the sea that lay beyond. Based on these early reports of the riches of the Incas, Balboa organized an exploratory expedition that departed Darien on September 1, 1513. After nearly a month’s overland journey, they came to a hill overlooking the Gulf of San Miguel, whence Balboa could look out toward the Bay of Panama, large enough in its own right, but only a small coastal indentation of an ocean the vastness of which Balboa could not possibly have imagined.

What to call it? Balboa’s journey across the isthmus had roughly gone from north to south, and the sea he left behind him was the Mar del Norte. Balboa therefore called the waters beyond the Gulf of San Miguel the “South Sea.” Balboa did not so much name this new ocean as give utterance to what it was already called by necessity of the convergence of Spanish and Native American histories. From that moment onward, as Balboa’s fellow conquistadors seized the realm of the Incas, Spain neatly divided the waters that bounded its growing empire into the Mar del Norte and the Mar del Zur, the North Sea and the South Sea. 

Balboa’s expedition made it possible for Spain to double its New World empire into symmetrical northern and southern regions, but it did nothing to indicate whether this new South Sea bore any relation to the North Sea. Nor was there yet any certainty that Balboa’s South Sea was part of the Eastern Ocean of the Indies. The navigational triumph of Fernão de Magalhães, the Portuguese nobleman who had formerly served in India and Morocco, and who set sail for Charles I of Spain in 1519, would begin to provide such certainty. Ferdinand Magellan’s search for the southwest passage that might link the North Sea with the South Sea, the Western Ocean with the Eastern Ocean, came to fruition in the straits that he named for himself.

The waters of these narrows were so tempestuous that when, at last, Magellan’s fleet of three vessels broke free of the straits, beat their way off the rugged coast of Chile, and entered into the calmer waters of the open ocean, the captain in his relief named the sea “Pacifico.” Or so we learn from Antonio Pigafetta, the Italian who accompanied Magellan and survived the circumnavigation to write an account. It is Pigafetta who tells us that the open sea they crossed “was well named Pacific, for during this same time we met with no storm.” The only other surviving logbook from the expedition to name these waters, that of Francisco Albo, calls it the “South sea.”

Based on contemporary maps and globes that he might have seen, Magellan may have believed that he would cross this sea and arrive in the Indies in a matter of days, or perhaps a few weeks. On this score, he was wrong. But by sailing to the northwest for the next three months, reaching first the Marianas Islands and then later the Philippines, where he was killed, Magellan proved conclusively that Jehovah and the ancient Greeks were right. The seas had been gathered together unto one place, and ocean encircled the land masses of the world. 

As a navigational achievement, Magellan’s voyage rivaled that of Columbus, and he, perhaps more than Columbus, deserved the title of Admiral of the Ocean Sea. But the name he gave to the peaceful waters off the western coast of Chile did not yet become the name for the basin of the Eastern Ocean. For more than two centuries, mapmakers and navigators would continue to make the South Sea the common label for the waters west of the Americas and east of Asia. There are several reasons for this. First of all, Magellan’s name was not so apt; often the waters west of the straits were anything but peaceful. Francis Fletcher, who accompanied Francis Drake on his global circumnavigation in the 1570s, thought that “Mare furiosum” would have been a better name than “Mare pacificum.”

 

Fig. 2. Map showing the oceans listed as the North Sea and the South Sea. “America with Those Known Parts in That Unknowne Worlde,” 1626. Courtesy of the American Antiquarian Society.

Another explanation lies in the fact that it took more than two centuries for explorers to chart the limits of the Pacific, and until that time, South Sea remained a plausible and useful description. For Europeans to reach these waters, ships had to sail to the south for a tremendous distance. For the Spanish fleets that maintained a dominant presence in this region through the sixteenth, seventeenth, and eighteenth centuries, most of the maritime traffic connected the southern reaches of their empire, the Viceroyalty of Peru, to the central and northern regions of the Spanish Main and Mexico. Beyond this north-south Spanish axis, the rest of the coastal territories of the Pacific basin remained almost completely unconnected by any regular traffic. 

The sole exception to this rule was the annual voyage of one or two Manila galleons that struggled back and forth across the South Sea from Acapulco to the Philippines. The Manila galleons carried Peruvian and Mexican silver to the markets of Asia, and the spices and silks of the Indies back to New Spain, a journey lasting six months in each direction. But in the quarter millennium (1565-1815) that they traversed the Pacific, the crews of the Manila galleons never came upon the Hawaiian Islands, never charted the islands and coasts of the unknown Pacific reaches. Their solitary journeys created a fragile trading channel between East Asia and South America, an oceanic equivalent of the Silk Road, but they did not create a Pacific World that could readily be described, much less named. 

The discontinuity of the Pacific was maintained in part by the indifference of Asians to their far-eastern waters. Japan, perhaps the most quintessentially Pacific of the world’s modern nations, largely withdrew from international affairs during the Tokugawa Shogunate (1603-1868), and even before that time, its overseas connections were directed almost exclusively to the westward. Although heavily dependent on the surrounding waters for sustenance, the Japanese, unlike other island or coastal nations (Britain, the Netherlands, Portugal), were not great long-distance seafarers. The currents that swirled around the home islands tended to push wayward Japanese sailing ships into treacherous waters and then out into the open ocean, never to return. Early modern Japanese maps divided that ocean into the small eastern sea, the familiar waters that they fished and that linked them to the Asian continent, and the large eastern sea, a frightening, boundless, and uncharted maritime region with no people and no points of interest. Similarly, China, the Middle Kingdom of the Asian world, took a strong trading and imperial interest in southeast Asia, Indonesia, and the Philippines, where it encountered the tiny outpost of Spanish traders in Manila. But the Chinese played no active role in world exploration after the great fleets led by Admiral Cheng Ho in the early fifteenth century sailed west to the African coast of the Indian Ocean, but not east beyond Japan.

Finally, although Magellan’s global circumnavigation had proven that the world’s waters were one, he had not disproven the long-treasured geographical theory of a great southern continent, a landmass that would be the symmetrical parallel of the Eurasia-Africa ecumene. So long as Australia remained unexplored and the southern limits of the oceans remained unknown, belief in the existence of a Terra Australis held firm. Most of the naming of the waters throughout history involved the extension of the names of land regions onto the seas. So if there was a large and as yet uncharted continent at earth’s southern extremity, then it made sense to call the waters around it the “Southern Ocean” or the “South Seas.” Maps of the sixteenth and seventeenth centuries therefore often restricted names such as the “Indian Ocean” to the waters relatively near India, and called the waters between southern Africa and Indonesia the “Southern Ocean,” out of respect for this theory.

 

Fig. 3. A map showing Tierra del Fuego to be the tip of a vast southern continent, rather than a small island at the tip of South America. From Abraham Ortelius, Epitome Theatri Orteliani (Antwerp, 1601). Courtesy of the American Antiquarian Society.

As one European exploration party after another ventured out through the Straits of Magellan in the eighteenth century, they thought of the waters they sailed into as the South Seas. In 1740, British naval officer George Anson led a four-year expedition to sail around the world. In the official account of the voyage, probably ghost-written by Anson’s chaplain, Richard Walter, the South Seas were the primary destination. “Within the limits of the southern Ocean,” Walter expected to find the “celebrated tranquility of the Pacifick Ocean” just to the west of the straits where Magellan had found them, “but these were delusions which only served to render our disappointment more terrible.” A generation later, in 1767, Louis-Antoine de Bougainville, formerly an aide-de-camp to Montcalm in Quebec, commanded a major French exploratory party on a two-year mission through the Straits of Magellan and on to circumnavigate the globe, accompanied by Charles-Nicolas-Othon d’Orange, the Prince of Nassau. In their journals and accounts of the voyage, Bougainville and Nassau referred to the waters beyond the straits as “the South Sea.” Even as late as 1815, when Otto von Kotzebue led a Russian naval expedition on a voyage of discovery looking for a “north-east passage,” he and his crew thought of their destination as the South Sea. And as a literary convention, the name lasted well into the nineteenth century; Herman Melville, Edgar Allan Poe, and Robert Louis Stevenson all used “South Seas” rather than “Pacific” in the titles of their works.

The direction that Enlightenment geography was taking made it increasingly unlikely that “Pacific” would overtake “South Sea” as a common name for this ocean. Beginning in the 1690s and continuing through much of the eighteenth century, many European geographers took to naming the waters not by the vast ocean basins familiar to us today, but rather by what the modern geographer Martin W. Lewis calls the “ocean-arc” concept. In this scheme, oceans are thought of as the waters that wrap around the edges of integrated landmasses, rather than as the empty seas between continents. For example, the Ethiopian Ocean in this model wraps around southern Africa, and therefore includes parts of what we would call the Atlantic and Indian Oceans. In similar fashion, a 1719 French atlas described the Mer Magellanique as a single sea that encircles the entire southern tip of South America. The value of this ocean-arc concept lay in the fact that these arcs often corresponded to actual maritime pathways of human activity. If this theoretical trend had continued to prevail, there might never have been a reason to construct an oceanic basin to which the label “Pacific” would plausibly apply.

What challenged this model were the remarkable voyages in the 1770s of Captain James Cook, remarkable as much for what Cook failed to discover as for what he found. Part of Cook’s purpose in making his three voyages to the South Seas was to prove, once and for all, whether the great southern landmass about which geographers had speculated for millennia actually existed. Although Cook erred in thinking that the Antarctic he explored was all ice and no landmass, his explorations around Australia and New Zealand and his circumnavigation of Antarctica proved that Australia was not the northern edge of a great southern continent. There was no such landmass and therefore no obvious need to name a great southern ocean after it. 

In his third and final voyage, Cook explored the northern reaches of the Pacific, searching for the fabled Northwest Passage that would easily link Europe with the Far East. Before that time, Spanish explorers had been inching their way up the west coast of North America, and Russians had been slowly moving east and south from Alaska, but without linking the basin together as a connected and integrated whole. Cook’s careful mapping changed all that. He did not find a Northwest Passage, but he did chart the northern reaches of the Pacific before his untimely and much debated demise at the hands of Hawaiian islanders in 1779.

Over the course of Cook’s three voyages, something seems to have changed about European perceptions of the Pacific, or at least about what westerners were willing to call it. Cook himself, in his report to the secretary of the Admiralty after his first voyage, hoped that “this Voyage will be found as Compleat as any before made to the South Seas.” Similarly, Sydney Parkinson, who accompanied Joseph Banks as a “draughtsman” aboard the Endeavour on Cook’s first voyage, published a journal that he called A Voyage to the South Seas. But when Connecticut native John Ledyard sailed on board the Discovery for Cook’s last and fatal voyage in 1776, he called his account A Journal of Captain Cook’s Last Voyage to the Pacific Ocean.

After Cook’s death, subsequent maps much more frequently applied the word “Pacific” to the entire basin, and “South Seas” became increasingly literary, a romantic term most often used for the regions around the islands in the central and southern latitudes of the Pacific. Where Anson and his crew had expected to find “Pacifick” waters within the great Southern Ocean, now people began to think of the South Seas as an exotic portion of the well-charted Pacific Ocean. In short, Cook’s accomplishments made it possible (if not necessary) for everyone from geographers to ordinary folks to imagine the Pacific as we now know it: as the entire basin between the Americas on the east and Asia and Australia in the west. To this singular sea they increasingly gave a single definitive name that reshuffled and replaced older habits.

Consider, for instance, Lieutenant William Reynolds, a young naval officer from Lancaster, Pennsylvania, who took part in the U.S. exploring expedition of 1838-42 under Captain Charles Wilkes, the first international maritime survey conducted by the United States. Wilkes and his fleet of six sailing ships followed in Cook’s wake and charted for the first time large portions of the Antarctic coast and the islands of Polynesia. Lieutenant Reynolds wrote frequent letters home to his family in Pennsylvania, letters with real dramatic flair and style. His literary aspirations were aided by the fact that on ship, the crew had access to “the Histories of all the French and English expeditions to the Seas we are to visit.” Most of these histories of course labeled the ocean to which Reynolds was heading as “the South Seas.” But Reynolds had a different sensibility from many of his ocean-going forebears, formed in part by his reaction to their writings. Less than a month into his voyage, Reynolds described the inspirational beauty of these volumes, “published by the respective Governments in superb style, full of plates, colored and plain. I have been looking over them and only wish that we were now among the Islands: ah! we shall have a glorious time, wilder than the romance of imagination.” 

As the ships neared Cape Horn, the weather worsened but Reynolds’s excitement mounted as “those indefinable sensations that set one’s heart a throbbing while viewing new and striking scenes were dancing through my veins.” At long last, they rounded the Cape: “We had fairly doubled it!” In his ecstasy, Lieutenant Reynolds had no doubts about what name to call the waters he now entered, but his comments about the ocean’s appearance suggest a tinge of disappointment, a vague unease that cycles us back to the beginning of our story, to the fundamental ambiguities surrounding the naming of the seas. Quoth Reynolds, “[F]or the first time I found myself in the Pacific Ocean—it looked very like the Atlantic!” 

Further Reading:

This essay would not have been possible without the insightful work of Martin W. Lewis in “Dividing the Ocean Sea,” The Geographical Review 89, no. 2 (April 1999): 188-214. Lewis’s essay appears in a valuable special issue of The Geographical Review entitled “Oceans Connect,” which also includes Marcia Yonemoto’s “Maps and Metaphors of the ‘Small Eastern Sea’ in Tokugawa Japan,” 169-187, the source for information on Japanese and Chinese understandings of the Pacific in this article. J. H. Parry, The Discovery of the Sea (Berkeley, 1981), offers an introductory discussion of the evolving problem of knowledge of the seas. On the Spanish Main and the naming of the Atlantic, see Carl Ortwin Sauer, The Early Spanish Main (Berkeley, 1966), and for a general overview of Spanish expansion into the Pacific, including a discussion of the Manila galleons, see Henry Kamen, Empire: How Spain Became a World Power, 1492-1763 (New York, 2003). 

Balboa’s discovery of the South Sea is described in Charles L. G. Anderson, Life and Letters of Vasco Núñez de Balboa (1941; reprint, Westport, Conn., 1970). For Magellan’s voyage and Pigafetta’s commentaries, see The First Voyage Round the World by Magellan, translated from the Accounts of Pigafetta, trans. by Lord Stanley of Alderley (London, 1874), 65; as well as a recent popular narrative, Laurence Bergreen, Over the Edge of the World: Magellan’s Terrifying Circumnavigation of the Globe (New York, 2003). Francis Fletcher’s critique of Magellan’s “Pacific” is found in The World Encompassed by Sir Francis Drake (1628), ed. by W. S. W. Vaux (London, 1854), 82. Several recent accounts deal with the fleets led by the Chinese Admiral Cheng Ho, including the fanciful Gavin Menzies, 1421: The Year China Discovered America (New York, 2003). On George Anson’s circumnavigation, see Richard Walter and Benjamin Robins, A Voyage round the World . . . by George Anson (London, 1974). Bougainville’s expedition is described in John Dunmore, ed., The Pacific Journal of Louis-Antoine de Bougainville, 1767-1768 (London, 2002), and details on the Russian expedition of 1815-18 can be found in Otto von Kotzebue, A Voyage of Discovery into the South Sea and Beering’s Straits, 3 vols. (1821; reprint, New York, 1967). 

The literature on Cook’s voyages is voluminous, but see the handsome and lavishly illustrated J. C. Beaglehole, ed., The Journals of Captain James Cook on his Voyages of Discovery, 5 vols. (Rochester, N.Y., 1999). For the most recent analytical biography of Cook, see Nicholas Thomas, Cook: The Extraordinary Voyages of Captain James Cook (New York, 2003). Finally, William Reynolds’s charming letters and his profound description of the resemblance between the Atlantic and Pacific are found in Voyage to the Southern Ocean: The Letters of Lieutenant William Reynolds from the US Exploring Expedition, 1838-1842, ed. by Anne Hoffman Cleaver and E. Jeffrey Stann (Annapolis, 1988), 11-12, 55. For a general account of the U.S. exploratory expedition of 1838-42, see Nathaniel Philbrick, Sea of Glory: America’s Voyage of Discovery, The U. S. Exploring Expedition, 1838-1842 (New York, 2003).

 

This article originally appeared in issue 5.2 (January, 2005).


Mark Peterson teaches history at the University of Iowa and is the author of The Price of Redemption: The Spiritual Economy of Puritan New England (Stanford, 1997) and “Puritanism and Refinement in Early New England: Reflections on Communion Silver,” William and Mary Quarterly, 3d ser., 58 (April 2001): 307-46. He is working on a history of Boston in the Atlantic world that spills over into other oceans as well.




American History on Other Continents

On the trail of China traders in Africa and Asia

“In Persuance of an Act of this Commonwealth . . . Pasckal Nelson Smith Esqr. of Boston in the Countij of Suffolk and the Common-wealth Massachusetts maketh oath that the Sloop Harriett where of Allen Hallet is at present Master . . . of the Burthen of Thirty-five Tons or there about was built at piscataqua in the Year of Our Lord one Thousand Seven Hundred and Eighty Three.” Having copied this as best he could, the Dutch clerk added John Hancock’s John Hancock into the margin; Hancock had been governor of Massachusetts when the Harriett set sail.

The Harriett was the first American vessel to sail for China, and there were its papers copied and tucked away in The Hague, in the Nationaal Archief, part of the Dutch East India Company records, an enclosure to the daybook sent from Cape Town. And there, improbably, was I. It was a long way to come for just one ship.

Fortunately, there were others. The Cape Colony dagregisters, daybooks, opened up bit by bit: a few ship’s papers here, a quarterly list of shipping there; after a month I had put together a surprisingly complete record of American ships in Cape Town sailing to and from China in the 1780s and 1790s.

But the shipping lists told more. American merchants weren’t just sailing to China; they were sailing to Mauritius, to India, to Batavia, and to points in between. So I started to follow the ships.

The Dutch national archives are well organized, well run, and thoroughly digitized: a stereotype of national efficiency. Everything is catalogued. Documents are brought out within fifteen minutes, order numbers appearing all the while on an illuminated display overhead. I’ve stood longer at delis.

Other archives are less well endowed, but even the most unlikely of them had more American history than one historian could cover.

The tropical sun made the seat burn. There I was half lost, driving on the left on a moped in a country with French road signs, trying to find a national archive, which, unlike everything else, had no sign. I circled around the industrial park where the archive was improbably said to be located, past the Billabong factory, past the gutter running red with dye, until, next to a warehouse, over a door, I saw a small sign I couldn’t quite read from the road: Mauritius National Archives.

Mauritius is an Indian Ocean island about five hundred miles east of Madagascar. It was a French colony until 1810 and a British one until independence in 1968. It remains a developing African success story.

Home in Mauritius was the seaside town of Flic-en-Flac, which boasted a beach popular with European vacationers; numerous holiday villas were going up around town. Most Mauritians lived elsewhere, in settlements near the sugar plantations, which employ a large segment of the population, or in larger, industrial towns upland in the interior, near the archive.

The bus from town—for the moped was impractical for a laptop-paranoid graduate student—made its way up from the beach through the sugar fields, stopping here and there to take on passengers in the cane or to drop folks off at the next market town. I could get off at one such town, make my way down a back road, cross a couple of abandoned soccer fields and catch the back entrance to the industrial park (occasionally guarded by a wayward goat), which I did in an oxford button-down and khakis out of the very graduate-student fear that the archivist would look at me funny if I didn’t.

The afternoon bus home was full of schoolchildren and their books, and no matter how many times I saw it, the prospect of a high schooler walking back through the cane with an accounting textbook always made me smile—so too with the government-run clinics in every village. They conjured the sense of a nation going someplace. Mauritius may be poor, but it is a young democracy with low crime, high literacy, and a government doing its level best to educate its children and improve their lot. When you see government money being spent on clinics and schoolbooks, it’s hard to begrudge those rupees not being spent on archives.

The reading room was upstairs past the main desk: a room within the main storage area, lights low and slatted windows open, four wood tables and as many electrical outlets. A woman dressed in a sari came round from the record cage, which in the open floor plan receded back beyond sight. She pointed me to a small bookcase of finding aids, which I spent the next three months going through.

Unlike the Netherlands, Mauritius has limited funds for its archives. Electricity must be bought; plugging in your laptop costs until you get to know the archivist. There are no LED displays for document orders. There is no online catalog. The archive is small but so is the country, and because of that it remains manageable.

The entire archive is housed on a converted factory floor. Most of the other businesses in the building are warehousing firms; pallets and loading docks flank the front. Textile dust occasionally whips up into the air, a respirable fire hazard. Over the course of my stay there I spoke with several readers whose greatest fear was an industrial accident that might combine the Triangle Shirtwaist tragedy with cultural disaster.

Both humidity and bugs were doing their best to make cultural disaster before fire could. To provide some protection to their three-centuries-old volumes, the archivists had boxed the oldest and most loosely bound. While boxing shields books from light or page loss, it also shields them from view and, hence, casual inspection.

Some hadn’t been seen since they were boxed, and in the interval worms or insects had made their way in and in their own happy time consumed entire volumes. Some volumes turned out to be little more than cover and binding, others a maddening Swiss cheese of fragmentary layers, the shreds of one page intertwined with the remnants of the next, too fragile to disentangle, too jumbled to read.

Rust was just as bad; the iron in the ink—the gallnut-and-iron-salt blend was as common in colonial Mauritius as in colonial America—had eaten through page after page as it oxidized in the wet air. Not only did the ink cut through pages, it adhered each page to the next. For a researcher looking for long runs of data, this was troubling.

Perhaps just as maddening was how French officials had recorded their documents. Dutch, British, and American administrators drew up carefully proportioned tables of shipping to show the flow of vessels and to permit voyages to be tallied or thought of as part of a larger trade, often with tax revenues already counted out. Gallic administrators, on the other hand, recorded each vessel on its own sheet of paper (and subsequently lost many of them). They made little or no attempt to facilitate an accounting of overall trade. There was no printed form for ship arrivals, no one to go to the dockside and count the ships, no single body in charge of tracking trade and ensuring all the forms were saved (there were two, the police and admiralty keeping separate, incomplete compilations of captains’ déclarations d’arrivée, when and if the captain bothered to report to their offices). Only after the British conquest were such records kept with an eye to a long-term, complete, and more numerate record of trade.

Yet there were always glimmers of hope whenever these prospects made me glum, when the declarations—each written out just differently enough to forestall skimming—grew monotonous or when I grew fed up with the whole half-eaten, half-rusted mess.

For one, there was an ice-cream truck. The driver careened through the industrial park in the afternoon heat with a tinny version of “Jingle Bells” coming in and out of range with a sublime torture that made visions of sugar cones dance in our heads. Out we came—archivists, historians and workers—chasing, reverting, happy for sure.

“It’s one of the best archives in the Western Indian Ocean,” my professor had told me. Standing there in the shade with my cone, I wondered if he meant the ice cream.

Fortunately for my research, the gaps in the records were not insurmountable. Other scholars had already mined the French-era data but without comparing them to later, British-era records. So I reviewed my predecessors’ methodology, verified their findings as best I could with what French data survived, and began work in the fuller British sources. I felt as though I had pulled original research from the jaws of defeat.

I followed the Americans to other archives as well. The Cape Town Archives Repository in Cape Town, South Africa, is housed in a former prison at the foot of Table Mountain—and with stunningly unimprisoned views of the peak from the old prison courtyard. It held a cache of wonderfully preserved (and meticulously organized) shipping tables from the late-Dutch and early British periods. Between the Cape Town and Mauritius records, it became apparent that China traders were not the only Americans to round Africa. Slave ships from East Africa stopped on the long middle passage to the Caribbean and the United States. The Dutch authorities at the Cape noted “304 slaven” aboard the Horizon on its voyage from Mozambique to “America” in 1804.

Whaling and sealing vessels called at the Cape and Mauritius too. American debtors began appearing in South African court records, and American shipping made its way into official Mauritian debates on commercial policy. In each port, American ships proved the most important link to the wider world while the port remained in Napoleonic hands. U.S. merchants enmeshed themselves in sundry local trading ventures: buying ostrich feathers at the Cape (plumage for Europe’s fashionable set) in exchange for great wheels of Edam.

But, as every historian has experienced, not every archive yielded treasure. In Macau, the ex-Portuguese port on the China coast through which every China trader passed and the hub to all the trade I was catching at the spokes and rims, I fumbled through every finding aid the good-natured archivists could dredge up. The port may have been central to my research, but precious little shipping data survived.

On another trip, this one to Jakarta, I found data in the Dutch records but had to dig them out without knowing a word of Bahasa Indonesia. Communication was reduced, once I had found the finding aids (which were partly in Dutch), to writing call numbers on a slip of paper and smiling at the archivist until the record appeared.

Such troubles were mild compared to the rewards. But who rightly can claim to be the first to use some new archive? Certainly, I was no “first.” Yet digging away for American history in the Indonesian heat or the African winter, surrounded by scholars of local national histories, I felt as though I were still making discoveries. I was perhaps the first early Americanist to cross their threshold; I was certainly committing the oddity of looking for another country’s history in their national archive. Sometimes I thought I might be able to contribute to two countries’ histories at once. But even that might be a bar too high; if I could make the history of early America just a little more capacious, that would be reward enough.

 

This article originally appeared in issue 7.2 (January, 2007).


James Fichter is an assistant professor of U.S. and international economic history at Lingnan University, Hong Kong.




Post Transbellum?

What might a post-transbellum moment in American literary studies look like?

With this question, and the multiple, even contradictory temporal designations it contains, I mean not to raise doubts about the keyword at the heart of Cody Marrs’s wonderfully argued and beautifully written Nineteenth-Century American Literature and the Long Civil War. Nor do I want to call (already) for a conclusion to the provocations about periodization, literary history, and the legacy of internecine conflict that this study offers teachers and students of the nineteenth-century United States. To the contrary: in keeping with Marrs’s claim that the Civil War “continued to unfold long after 1865,” and perhaps “is still unfolding,” I want to ask how we—“the latter-day heirs of this struggle”—might respond to Nineteenth-Century American Literature and the Long Civil War. What are the possibilities that the book’s projects make available, and what might we do with them? Which is to ask: what happens if we pair “post” and “transbellum”?

Of course, there is a sense in which this very question disregards one of the central theses of Marrs’s monograph: his claim that the Civil War must be read as a “multilinear upheaval,” and that if we study the literary careers of writers such as Whitman and Dickinson with this frame in mind, “categories” like “antebellum” and “postbellum” both “crystallize and dissolve, yielding a literature that crosses through the conflict and far beyond it.” Here, Marrs makes a compelling case for rejecting the received designations for studying the nineteenth century; as he goes on to assert, the literature that forms the archive of his book “can only be called transbellum.” This is a crucial claim, and one that I fully accept.

But I am also interested in the way that Marrs formulates the “ante” and “post” here as obtaining dialectically (to deploy a term from his chapter on Whitman) in the “trans.” That is, if we read across the divide of 1865—or, better, “against 1865,” as Marrs and Christopher Hager put it in their productively polemical J19 essay—we realize that even as markers like “antebellum” and “postbellum” fade away, they also, importantly, solidify and clarify. Their functions come into relief.

In other words, in rejecting the standard periodization of nineteenth-century American literature that turns on ideas of ante and post, before and after, we might come to recognize not just the limitations of such prefixes but also their generative possibilities. It’s as if, in casting them away, Marrs allows us to see what these orthodox and somewhat staid labels might do if understood in a richer, more robust conceptual framework in which the Civil War does not end in 1865, and where “time” does not only signify movement along a “straight line.”

This insight is incisive and—to make my intellectual commitments explicit—much needed. Indeed, as someone whose own forthcoming book, Untimely Democracy, seeks to bring attention to the neglected literature of the post-Reconstruction epoch, I worry about the way the designation “nineteenth century” comes to stand primarily, even sometimes exclusively, for the “antebellum era.” My concern is less about coverage than about the values implicit in the practice. Letting “antebellum” and “nineteenth century” function as synonyms seems to me to imply that the aftermath of the Civil War and the period following the collapse of Reconstruction are somehow less instructive or illuminating for exploring questions of aesthetic experimentation and political activism than is the run-up to these events.

Marrs offers us a concise institutional history that explains this state of affairs. Pointing us first to the etymology of “antebellum” and “postbellum” within the field of “international law,” where they served to regulate claims of property and land transfer in the context of martial conflict, Marrs goes on to assert that the terms accordingly promoted “fictions of erasure that enabled both sides to pretend either that the war had never really happened, or that history began anew with its completion.” When “antebellum” was deployed after the Civil War in the American context, Marrs writes, it tended to “describe something that was both Southern and outmoded.”

It was not until the twentieth century and the founding of American literature as a field of study in the Cold War era that “the concept of a national antebellum literature” emerged. Indeed, as Marrs demonstrates in perhaps the most provocative portion of this meditation, “antebellum” gained traction as a result of the New Americanist critique of the narrow canon promulgated by F. O. Matthiessen and the other founders of the field. As Marrs puts it, “the New Americanists effectively replaced an authorial canon with a periodic canon, encapsulated by the terms ‘antebellum’ and ‘postbellum.’”

The legacy of this backstory is important, for it forms the present of our critical moment—and should bear on any prognostications about the “post.” Focusing on questions of race, gender, and sexuality, and troubling the consensus about what counts as a “text” worth reading, the New Americanists enlarged literary studies, making the field reflect the “devotion to the possibilities of democracy” that Matthiessen claimed as his Renaissance’s defining feature. Still, this critical movement has left unexplored the way that assumptions about periodization (and more broadly, temporality) inflect what are now its orthodox organizing rubrics and conceptual frames.

As an example, consider the books explicitly concerned with the nineteenth century published in the Duke University Press New Americanists series, where much of the most exciting and transformative work of this approach appeared. Among these titles, the period before 1865 holds a decisive influence, with the latter half of the epoch represented primarily in closing chapters. Whereas the transnational turn has been acknowledged as the necessary response to one of the limitations of the New Americanist paradigm and its retention of the nation-state as analytic unit, Marrs entreats us to consider whether “there are temporal as well as spatial borderlands” to which we must attend.

This question holds special force for African American writers working after the Civil War, in the era that Charles W. Chesnutt called the “postbellum, pre-Harlem” moment. Chesnutt created this designation to explain the neglect suffered by turn-of-the-twentieth-century authors like himself, whose project was problematically overshadowed by Harlem Renaissance luminaries. But it is worth asking, with Marrs, what “transbellum” might do for “postbellum, pre-Harlem.”

Marrs points us in this direction in his coda on “Other Nineteenth Centuries,” where he reflects on what it would mean to read Frances Ellen Watkins Harper’s 1892 Iola Leroy, Or Shadows Uplifted not as “a historical novel” about “passing and racial uplift”—familiar topics of the epoch’s literature—but rather as a “counterhistorical novel that pivots on emancipation’s longue durée.” I am not sure what to make of the opposition of “historical” and “counterhistorical” in this instance. Harper’s novel, with its commitment to racial progress, on the one hand, and to a vision of bondage as an intergenerational harm, on the other, seems better accounted for as a profound engagement with the “multilinear” history that Marrs explores in earlier pages. But it seems to me perfectly right that Iola Leroy is about the long—and hardly temporally progressive—afterlife of slavery. Indeed, the template that Marrs offers here for reading black writers working after the Civil War but still preoccupied by its unfulfilled promises and unfinished projects stands as one of the signal insights and implications of Nineteenth-Century American Literature and the Long Civil War.

I want to conclude with an example of one such implication: the case—or let’s say, the “career”—of Callie House. Born a slave in Rutherford County, Tennessee, in 1861, House was a child of the Civil War in more ways than one. As the historian Mary Frances Berry has suggested, House’s father probably fought for the Union Army, and the march of Grant’s soldiers through Tennessee would have constituted for her a sort of political primal scene.

But House’s most profound relationship to the war came after its ostensible conclusion, in the era that Rayford Logan has called the “nadir” of racial history. After the promises of Sherman’s Field Order No. 15 had faded away and the commitment to racial justice embodied institutionally by the Freedmen’s Bureau had been abandoned, Callie House continued to fight the war in her own way. She became a leader of the National Ex-Slave Mutual Relief, Bounty, and Pension Association of the United States of America (MRB&PA), an organization that built a campaign to redress slavery, taking the Union soldier pension program as its model. “We are organizing our selves together as a race of people who feels that they have been wronged,” she announced in 1899.

Though we might immediately note an affinity between Harper and House, I want to pursue another pairing made possible through Marrs’s powerful concept of the “career.” As Marrs defines them, literary “careers bridge the historical and the transhistorical, unfolding in ways that disclose the influence of particular events on given works and, at the same time, the broader imaginative connections with which those works are bound up.” Accordingly, “Careers … enable us to read multilinearly across eras and genres that are often kept quite separate from one another, and this perspective is utterly crucial when it comes to the Civil War.”

I would add that this multilinear perspective is utterly crucial when it comes to figures like House. For House’s organization pursued emancipation long after the war by asserting the right of slaves to seek reparation—and by using the very language that slaves deployed before the war. As she put it in a September 1899 letter, the MRB&PA’s objective is to get the government to “pay us…an indemnity for the work we and our fore parents was rob of from the Declaration of Independence down to the Emancipation of four + half million slaves who was turn loose ignorant, bare footed, and naked, without a dollar in their pockets, without a shelter to go under out of the falling rain.”

With Marrs’s sense of the “career” in mind, we can recognize House as literary kin not only to Harper but also to Dickinson. Consider the way both writers worked in forms that have made their output difficult to place within the institutional structures of literary study, which privilege published texts. In fact, we might take what Marrs says of Dickinson to illuminate House, for she, too, “reimagined the conflict … by creating alternative worlds and timescapes, many of which extend … far beyond the war’s chronological end-points.” That we can use Marrs’s account of one of the most canonical writers of the nineteenth century to begin to understand the career of Callie House stands as perhaps the greatest index of the contribution this study makes.      

And in this way, Nineteenth-Century American Literature and the Long Civil War points to a project for American literary studies post the New Americanists. For one of the reasons that House is largely unknown is that the whole of her writing is a continuation of the Civil War—that absent cause, supposedly the “defining event of the nineteenth century” that is “deemphasized by the periodizing practices that are specifically designed to acknowledge its impact.” In forcing us to focus on the “transbellum,” and in unsettling the ante/post divide, Marrs paradoxically offers us an occasion to better understand the post. That is, he invites us to attend to those authors and activists working after the war that perhaps never ended, and he gives us a way to account for their projects, which are inextricable from that conflict and its sources.

Or, more simply put: Nineteenth-Century American Literature and the Long Civil War offers an occasion to consider the careers of Callie House and many others whose names we still do not know.

Further Reading

For a companion piece to Nineteenth-Century American Literature and the Long Civil War, especially its arguments about periodization, see Cody Marrs and Christopher Hager, “Against 1865: Reperiodizing the Nineteenth Century,” J19: The Journal of Nineteenth-Century Americanists 1:2 (2013): 259-84. For the quote from F. O. Matthiessen, see his landmark American Renaissance: Art and Expression in the Age of Emerson and Whitman (1941; repr., New York, 1968), ix. The complete list of titles in Duke University Press’s New Americanists series, edited by Donald Pease, is available from the press. Russ Castronovo’s contribution, Necro Citizenship: Death, Eroticism, and the Public Sphere in the Nineteenth-Century United States (Durham, N.C., 2001), represents an important exception to the trend I describe, not simply for its attention to the post-Civil War moment and its more general emphasis on the boundaries between life and death—which in some ways anticipates the temporal turn—but also for its practice of placing literature within political and cultural contexts without also reducing the work of the former to the work of the latter. For a trenchant critique of the New Americanist paradigm, with a specific focus on the movement’s approach to identity and representation, see Johannes Voelz, Transcendental Resistance: The New Americanists and Emerson’s Challenge (Hanover, N.H., 2010). Voelz’s book inaugurated Pease’s post-New Americanist series, “Re-Mapping the Transnational.” For Charles W. Chesnutt’s pronouncement, see his “Post-Bellum—Pre-Harlem,” in Stories, Novels, and Essays, ed. Werner Sollors (New York, 2002), 906-12. Barbara McCaskill and Caroline Gebhard deploy this designation in their crucial edited volume Post-Bellum, Pre-Harlem: African American Literature and Culture, 1877-1919 (New York, 2006). On Callie House, see Mary Frances Berry, My Face Is Black Is True: Callie House and the Struggle for Ex-Slave Reparations (New York, 2005) and my Untimely Democracy: The Politics of Progress after Slavery (New York, 2017). “Nadir” comes from Rayford W. Logan, The Negro in American Life and Thought: The Nadir, 1877-1901 (New York, 1954). The quoted passages from House’s writings appear in Callie House to Harrison Barrett, Acting Assistant Attorney General of the Post Office Department, September 29, 1899, National Archives and Records Administration, Record Group 28, Records of the Post Office Department, Office of the Postmaster General, Office of the Solicitor, “Fraud Order” Case Files, 1894-1951, File 1321.

 

This article originally appeared in issue 17.1 (Fall, 2016).


Gregory Laski earned his PhD from Northwestern University, and currently is a civilian assistant professor of English at the United States Air Force Academy. He is co-founder of the Democratic Dialogue Project, a Mellon grant-funded initiative that seeks to invigorate citizenship by bridging the military-civilian divide. His work has appeared or is forthcoming in Callaloo, African American Review, J19, and Approaches to Teaching Charles W. Chesnutt. His book, Untimely Democracy: The Politics of Progress after Slavery, will be published by Oxford University Press in 2017. 

 




Pictures of Panic: Constructing hard times in words and images

In the late spring of 1837, Edward Williams Clay put grease to stone in the Manhattan shop of H. R. Robinson, a caricaturist and publisher. Clay was inspired. His move from Philadelphia to New York City in 1835 offered him a front-row seat to one of the most dramatic events of his lifetime, a financial crisis of unprecedented proportions.

Philadelphia had long been the home of the Second Bank of the United States (B.U.S.) and the center of the nation’s finances, but since Clay’s move to New York City, times had changed. President Andrew Jackson removed the federal government’s deposits from the B.U.S., depositing the funds in banks throughout the nation. Western land sales spurred by Indian removal and duties collected on imports generated more money than the federal government needed. Congress redistributed this surplus to the state governments, further decentralizing the nation’s finances. By the end of 1836, more than 700 banks were printing their own paper money in the United States. More than 100 of them had opened their doors for the first time just that year. Many of the new banks were located far away from the traditional ports of the Atlantic coast; their only tie to global trade existed on paper in the charters of not-yet-constructed canals and railroads. To build these “internal improvements,” companies raised capital by selling stocks and bonds in the nation’s largest financial market, located in lower Manhattan. Mere blocks from Clay’s desk, brokers with international connections managed the nation’s trade through a variety of commercial paper: stocks, bonds, bank notes, and bills of exchange. They sent accounts of imports, exports, debits, and credits across the Atlantic to the world’s central financial market in London, where English investors were eager to earn the highest interest rates available in America. During Clay’s first two years in New York City, his new neighborhood was the scene of not merely local but transnational and international economic excitement.

During the first few months of 1837, the times changed again. News from London arrived regarding investors’ doubts about America’s continued prosperity. British lenders raised interest rates. The confidence that had undergirded business trust evaporated. On both sides of the Atlantic, creditors demanded payment. Banks tightened their lending practices. Factors, brokers, and merchants failed to make payments. As one failure triggered another, the always precariously balanced system of credit collapsed. The prices of commodities, including land and slaves, plunged, pushing many to seek “safe” investments such as gold and silver coin (also known as specie). For although bank notes promised “to pay ten dollars on demand,” the banks themselves only held a small fraction of the value of their circulating paper in actual coins. Most of their assets were tied up in mortgages on property, bonds, and stocks that, like everything else, were now rapidly losing value. With anxieties rising, bank directors worried that too many holders of their notes might simultaneously demand specie. While few such bank runs had occurred by mid-May, banks throughout the nation sought to protect themselves from the possibility by preemptively suspending specie payments. This brought an end to the initial moment of panic as individuals now faced the certainty of their failure. But troubles kept mounting. Note holders and depositors, including the federal government, lost access to their assets. Foreign creditors sued their American debtors over unpaid obligations. Unable to pay their workers, buy raw materials, or sell their products, factories stopped manufacturing. Unemployed workers fled cities. Farmers could not sell their produce. Creditors, sheriffs, and customs collectors seized all forms of property in lieu of debt payments. Lawsuits multiplied. Trade ground to a halt, especially in New York City, whose financial district was particularly hard hit by the crisis. Desperate speculations and miscalculations provoked a second panic in 1839, which was followed by a period of general economic depression that lasted until the early 1840s. Although this entire period would be remembered as the Panic of 1837, contemporaries only used the term “panic” to describe their anxieties during the initial months.

 

Fig. 1. “H.R. Robinson, Lithographer, Publisher and Caricaturist. 52 Courtlandt Street, New York. Printed from Stone,” between 1836 and 1842. Courtesy of the American Antiquarian Society, Worcester, Massachusetts.
Fig. 2. “Ten Dollar Note—The New Orleans New Gas Light & Banking Company,” (New York, 1800?). Courtesy of the Bank Note Collection at the American Antiquarian Society, Worcester, Massachusetts.

Panic inspired Clay’s art. His productivity soared after the banks suspended specie payments, a turn of events that prompted several hand-painted lithographs. One of these pictures, “The Times” (fig. 3), staged the nation’s financial ills as if they were a theatrical production. In the foreground, the characters evoke sympathy or scorn. Shoeless tradesmen huddle beside overpriced commodities and broadsides advertising high prices for coins and credit, as well as schemes and frauds. A respectable widow and child, dressed in neat mourning black, beg for a handout from a fat mortgage holder. A dark-skinned soldier, stogie in his mouth, watches a drunk pass a bottle of gin to a young mother lying barefoot and spread-eagle on the dirty straw floor of a lean-to. The troubles of a commercial community in crisis fill the background. Crowds throng the liquor store, pawnbroker’s shop, sheriff’s office, and almshouse. Attorneys wait on clients emerging from luxurious carriages. Clerks sit idly by the Customs House windows above a sign demanding specie for payment of duties as ships (and their cargoes) rot in the harbor. Well-dressed men make a run on the “Mechanic’s Bank,” which announces to depositors that “no specie payments” will be forthcoming, while soldiers march upon the unarmed crowd. No billows of smoke emerge from the stacks of the railroad engine or steamboat. Signs on the city’s offices, hotel, and factory respectively read “to let,” “for sale,” and “closed for the present.” A woman draws the shutters closed above the pawnshop of “Shylock Graspall.” A fort named “Bridewell,” after an infamous English poorhouse and debtors’ prison, prepares to welcome a new inmate while a veteran tenant hangs from a gibbet. All the while, in an expression of visual gallows humor, the well-tended fields produce crops that have no hope of being transported to markets or of alleviating the hunger in the city.

“The Times” has become the iconic image of the Panic of 1837. It has graced the covers of monographs, collections of essays and conference programs, and appeared in textbooks as the definitive illustration of the financial crisis and the national depression that followed. But, in fact, this is a strange choice, for “The Times” looks nothing like most of the other images produced in America during and after the crisis. The latter blame the hard times on politicians and the political system, and are replete with monstrous figures and literary analogies, with little interest in conveying a realist account of the events of 1837. These images are rooted in a vision of economic life as inseparable from political and moral judgment. Nothing of the systemic neutrality and objectivity that we assign to the economy is in evidence here.

“The Times” also contains something of this earlier polemical convention. Clay dated his scene of bank runs two months after they actually occurred, for instance, in order to give it a more symbolic date: July 4, 1837. But his argument about the political causes of crisis remains at the margins. The suicides of several figures leaping out of a burning hot-air balloon labeled “Safety Fund” allude to problems with Democratic financial policy. Jacksonian emblems on the sun, together with several of Jackson’s famous quotations, or “popular sayings,” suggest that the former president had something to do with the current hardships. In general, however, Clay’s vision was a distinctive image of panic that abjured convention by reflecting uncertainty about the cause of crisis. The vague quasi-realism of “The Times” suggests that hard times have a recognizably timeless quality. Such a perspective was anathema to prevailing economic thought in the 1830s. At the same time, it allowed Clay to achieve immortality, for its generic imagery transcended time and place and so appealed to future generations using a vastly different lexicon to think about the economy.

 

Fig. 3. “The Times,” printed and published by H. R. Robinson (New York, 1837). Courtesy of the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.
Fig. 4. “Hard Times,” page 2, The American Comic Almanac for 1838 (Philadelphia, 1837). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

In 1837, Americans panicked. That, at least, is what they called their experience as the immediate financial crisis ended in May 1837 and they sought an explanation for the unfathomable, nearly universal failure of business. Before the crisis began, when the economy was still prospering, popular novels, domestic economy manuals, and even political economy textbooks taught contemporaries that individuals were responsible for their own economic successes or failures. When George Putnam preached his sermon “The Signs of the Times” on March 6, 1836, for example, he imagined “the figures and coloring of a picture, a painted canvass prefiguring the moral history of the coming prosperous year.” This “crowded canvas” included stories of successful individuals who made smart, safe, frugal, and industrious choices, as well as failures who had abandoned morality in pursuing or convincing others to follow “visions of sudden wealth.” True, impersonal forces such as “the swelling tide of prosperity” or the “stormy sea of speculation” are found in Putnam’s word painting. The end result, however, lay with the individual who “plunges into a raging sea that he has never sounded, to work like a drowning man for his life, to sink or swim amid the stormy and treacherous waves.” To Putnam, as well as innumerable other pre-Panic writers in a variety of genres, individual souls bore responsibility for individual fates.

Within a year, Putnam’s prediction of continued prosperity proved disastrously wrong. Indeed, the financial crisis challenged the prevailing theory of individual economic responsibility. With the outbreak of troubles in March, individuals throughout the nation and across party lines struggled to find the words in letters, diaries, and newspapers to describe their experience. “The agitation, the panic, I may call it,” a New Yorker wrote to the National Intelligencer, “no pen can properly describe.” But many pens and even more printing type tried. The editor of the New Orleans Bee fumbled about for the right language, writing about “the excitement, the terror, the panic, or whatever you please to term the state of public feeling.” “In one word, excitement, anxiety, terror, panic, pervades all classes and ranks,” a correspondent from New Orleans wrote to a northern newspaper, incapable of restricting himself to merely “one word.” These examples are revealing of more than just a national failure to find a vocabulary to convey the economic reality. They point to the absence of a conceptual apparatus that could explain the crisis. The terms “capitalism” and “the economy” had not yet been invented. The latter still referred, for instance, to a frugal management of resources, either those of a household or of a nation, and not to a societal structure.

As the crisis intensified, the notion that each man assumed responsibility for his economic fate seemed increasingly flawed. Everyone, that is, was ready to claim credit for prosperity; none were willing to confess to personal failure. Across the ideological spectrum, ornate metaphors, similes, and tropes exposed the general desire to blame everyone and everything but one’s own choices. William Leggett, the editor of the New York Plaindealer, mixed his metaphors with abandon. In one brief essay he drew on the entire lexicon of panic: He described the nation’s business as “unhealthy,” as a “machine” that had been “thrown out of repair, if not broken all to pieces,” as a fabled frog that had been “blown up to unnatural dimensions,” and as “the dreadful consequences of a deluge of bank credit” produced by “effusion from the fountain of evil.” Whether the crisis was imagined in terms of contagious disease, technological disaster, fantastical horror, unpredictable weather, or a divine (or satanic) test of human morality, it turned men into victims.

 

Fig. 5. “‘And if we fail—we fail!’—Macbeth,” page 7, Elton’s Comic All-my-nack 1838 1:5 (New York, 1837). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.
Fig. 6. “The Explosion,” lithograph by H. R. Robinson, between pages 30-31 of Vision of Judgment, Or A Present for the Whigs of ’76 and ’37, by Junius Jr. (New York, 1838). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

The less control observers exercised over their finances, the more they described their experience in terms that eliminated their personal economic agency and the more they used the word “panic.” According to the 1828 edition of Noah Webster’s American Dictionary, the term referred to “a sudden fright; particularly, a sudden fright without real cause, or terror inspired by a trifling cause or misapprehension of danger.” Those who now adopted the term were determined to discover forces larger than themselves that had provoked the crisis. Moses Taylor, a New York merchant, wrote to his correspondents in Cuba that “we are at the present moment enduring a panic greater than has yet been felt in the US.” The scale and acuteness of the crisis seemed unprecedented, unique, and incomprehensible. As Taylor wrote to another correspondent, “it is almost impossible to conceive of the disastrous state of affairs here.”

In fact, the panicked victims of 1837 lacked the concepts that would guide later attempts to identify the causes of crisis. Twentieth-century economic historians have explained the troubles of the late 1830s and early 1840s as the result of small changes in monetary policies in Great Britain, the United States, and China, exacerbated by the enormous, interlocking character of a global economy. In 1837 no one could see this as the cause of crisis, since no one had the statistical tools, the historical perspective, or the century of economic theory needed to make such an argument. James Gilbart, an English advocate of democratized banking, argued that “the science of statistics has received till lately but little attention in this country, and perhaps, the statistics of banking have received less attention than any other portion of that science.” If statistical data had not yet become a feature of English economic thought, America barely accepted the theories of political economy, in words or numbers. When one of the nation’s first and most popular political economy textbooks, Francis Wayland’s The Elements of Political Economy, reached readers mere months before the crisis, reviewers pointed out that “we meet with men grown grey in politics and legislation, who emphatically term the science of Political Economy a humbug, and its partisans a set of visionary schemers and theorists.” Macroeconomic monetary forces may have been at work in 1837, but contemporaries had no way of seeing them. Indeed, some contemporaries yearned for this type of economic overview. As Charles Francis Adams wrote in the summer of 1837, “one obstacle to the success of Political Economy as a science and consequent attention by practical men to its injunctions, is found in the difficulty of attaining a position elevated enough to look over the whole surface of action. Hence a danger of mistaking the relative importance of events, of giving to an exception the character of a rule, and of making a partial view weigh as much as if it was a general one.” The people of 1837 could not visualize the system they had not yet come to call “the economy,” let alone the crisis, from a macroeconomic bird’s-eye view.

While the present might have been difficult for contemporaries to see, the past proved no less difficult to bring into focus. They knew that economic history was full of “bubbles,” “revulsions,” “pressures,” and “panics.” Within a few decades, theorists would sketch the outlines of a business cycle. But in 1837 past crises were independent events whose causes could be easily misunderstood and barely compared. Richard Hildreth argued in The History of Banks (1837) that “the great pressure in the money market, produced by the high value of money, has been mistaken by practical men, whose experience does not extend beyond the panic of 1819.” The real parallel, Hildreth argued, was the “pressure” caused by the “scarcity of capital” during “the whole period from 1793 to 1808.” Few looked back that far. As recently as 1834, Americans had experienced a credit crunch when the B.U.S. contracted its loans after Jackson withdrew federal deposits. The cause of this much less severe crisis had been clearly political, namely, the bank war between President Jackson and B.U.S. president Nicholas Biddle.

 

Fig. 7. “Uncle Sam Sick with La Grippe,” printed and published by H. R. Robinson (New York, 1837). Courtesy of the Political Cartoon Collection at the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.
Fig. 8. “New Edition of Macbeth. Bank-oh’s! Ghost,” printed and published by H. R. Robinson (New York, 1837). Courtesy of the Political Cartoon Collection at the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.

Politics provided the most obvious explanation for individuals who wanted to think of themselves as victims of an event caused by forces beyond their control. As one senator fumed, “some have the hardihood … to call it a panic, and to assert that it is manufactured, as if the public were feigning distress to make an exhibition of itself.” Politicians of all varieties used the idea of a manufactured panic to attack their opponents. Commercially oriented Democrats who had supported state-chartered over federally chartered banks blamed the panic on speculating merchants who, in turn, sought to blame state chartering for causing their own “overtrading.” “Hard money” Democrats and working-class Locofocos blamed banks of all kinds for fomenting an unnecessary panic in order to prevent the establishment of a specie currency. Whigs argued that the crisis was caused by the Jackson administration’s policies and called the event a panic in order to emphasize the role the federal government could play in ending this “experiment” with the nation’s financial system. Before the crisis ended, a coalition of New York Whigs and commercial Democrats called on President Martin Van Buren to convene an emergency session of Congress to pass measures they believed necessary for restoring liquidity to financial markets and confidence to the nation’s foreign creditors. When Van Buren refused, they blamed his inaction for the panic’s severity. And when the bank suspensions forced Van Buren to call a “panic session” after all, he blamed the Whigs for manipulating the banks for political purposes.

Partisans of both sides therefore crafted a picture of national and impersonal panic that diverged from the local and psychological experience described by individuals in their letters and diaries. Only after the panic ended were references to meltdowns, tempests, epidemics, and biblical punishments replaced by arguments about victimization at the hands of the political system. The Bank of England was pushed far offstage. Instead, images of Jackson, Van Buren, Biddle, and the other politicians of the day came to dominate images of panic as well as the consequent writing of its history. The only figures who panicked in most of these accounts were the villains and heroes of national party politics.

 

Aside from a few crude engravings published in humorous almanacs in 1838, the only images of panic from 1837 are political cartoons. Most of these are single-page broadsides. One interesting exception is a bizarre pamphlet entitled Vision of Judgment, Or A Present for the Whigs of ’76 and ’37 which narrates the story of the Jackson and Van Buren administrations by means of beastly characters. Ironically, the words and the pictures in this allegory do not tell the same story about financial crisis. The text seeks to describe the panic as a human experience:

It happened, one cold morning, a little before sunrise, that an “awful explosion,” like an earthquake, was felt all over the plain, even to the farthest extremity. The whole nation were in the greatest consternation. Some thinking, from the rocking of the walls, that their houses were falling down on their heads, began to weep and lament in the most distressing and alarming manner. Others tore their hair in the agony and frenzy of the moment, running about and screaming in the most heart-rending tones; while others again gave themselves in sullenness to despair, and cursed the day of their birth. In short it would be impossible to describe half the distress and wretchedness produced on that dreadful and never-to-be-forgotten day.

The accompanying image, however, a lithograph produced in Robinson’s shop, displayed none of the feelings conveyed in the text (fig. 6). It focused, rather, on how “the golden ball,” a symbol of the policies of Hard Money Democrats, “was found to have been hollow within and only gilt without.” “From it, as from the fabled box of Pandora, issued every evil thing which could be imagined,” the description continued, “Poverty, Distress, and Famine came forth, followed by a ghostly train, bearing in their arms whole bundles of paper.” To Whigs, these were symbols of the hypocrisy of Van Buren’s Subtreasury plan, a federal department whose creation was proposed in the summer of 1837 to protect government deposits from the suspended banks but would, like them, become an issuer of paper notes (or Subtreasury “rags”). Rather than illustrate the “never-to-be-forgotten” human suffering described in the words, the artist rendered an exclusively political argument.

 

Fig. 9. “The Modern Balaam and His Ass,” printed and published by H. R. Robinson (New York, 1837). Courtesy of the Political Cartoon Collection at the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.
Fig. 10. “Balaam and His Ass,” Nuremberg Chronicle, folio XXX, Hartmann Schedel (Nuremberg, Germany, 1493). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

Most of the era’s images of economic crisis avoided these confusions because they had no separate text. Single-page political caricatures had long been popular in England, but such lithographs only became commercially successful in America in the mid-1820s, just in time to lambaste the escapades of a very colorful era of partisan politics. Although we have few details regarding the total circulation of these prints or how they were displayed, we can trace the increasing popularity of the medium by counting the number of images produced each year. Before 1831 the annual production of caricatures in the country never exceeded ten. From 1829 to 1852 the average reached almost thirty. Costing between twelve-and-a-half and twenty-five cents, caricatures constituted a significant expense for urban manufacturing laborers who earned about a dollar a day. For the same price they could buy any of Hannah Farnham Sawyer Lee’s bestselling novels, which instructed families how to survive a financial crisis. Not surprisingly, then, most of these cartoons presented the perspective of wealthy Whigs or commercially oriented Democrats. The interpretation of the financial crisis as a national panic provoked by the foolishness of the Democrats provided an ideal subject for cartoons. Clay, the era’s most prolific caricaturist, produced over 100 lithographs in his thirty-year career, a third of them during the years of economic upheaval from 1837 to 1840. They highlight his erudition but look nothing like the images we associate with economic history. There are no plummeting graphs and few depictions of bank runs. As historical sources, the caricatures of this period are very difficult to read because of their oblique references, suggesting the winks and nudges of inside jokes. Clay and the other artists who made these images developed four dominant motifs for depicting the crisis: animalistic allegories, national political figures experiencing a metaphor of panic, literary allusions, and fake bank notes.

Clay clearly included the metaphors of panic in his cartoon “Uncle Sam Sick with La Grippe” (fig. 7). Many commented on the need for a “remedy” for the “disease” which had overwhelmed the nation’s commerce in the spring of 1837. In Clay’s image, a sick Uncle Sam (dressed in moccasins and a liberty cap and holding a sheet of paper listing the dollar amounts of failures in New York, New Orleans, and Philadelphia) is tended to by a variety of medical practitioners attired in eighteenth-century styles but displaying the visages of contemporary Democratic Party politicians. Jackson diagnoses the illness as “overeating,” a reference to American “overconsumption” of imports. Through an oversize syringe, Thomas Hart Benton, the leader of the Hard Money Democrats, administers his “gold pills” and “mint drops” as a cure. Dressed like an eighteenth-century nurse, “Aunt Matty,” a nickname crafted by Davy Crockett for the refined and diminutive Van Buren, blames Uncle Sam’s sickness on “over issues”—a reference to the increased number of bank notes. Uncle Sam takes issue with “Dr. Hickory” and insists that he is not a glutton but “half starved.” He blames “Apothecary Benton” for “tying up my bowels” and reminds his nurse that he was once “as hearty an old cock as ever lived.” Outside the sickroom, Biddle arrives with his own set of remedies—a variety of paper money—and is greeted by a desperate Brother Jonathan, a symbol of America’s English creditors. To avoid starvation, the bald eagle suggests flying to the Republic of Texas, a popular destination for absconding from debts. The cartoon argues that the Democrats’ attempts to restructure American finance nearly killed the country. The individual experience of panic is nationalized and embodied in the character of Uncle Sam.

The ghost of a dead bank is the featured character in another Clay cartoon, “New Edition of Macbeth. Bank-oh’s! Ghost.” In this image (fig. 8), “the ghost of commerce” who has been strangled by the specie circular (a hard money policy) confronts a startled—one might say panicked—Van Buren. With pockets full of “bills protested” and “bills not negotiable,” the specter points to the paper at the heart of the crisis: a listing of millions of dollars in failures in New York, New Orleans, and Philadelphia, interest rates of six percent, etc. Quoting Shakespeare, Van Buren insists on his innocence while Jackson (dressed as Lady Macbeth, complete with Bowie knife), a mint julep-swilling cotton planter, and an unkempt Locofoco toast commerce’s death. Although Van Buren’s face provides some idea of what personal panic may have looked like, the point of this cartoon is that the Democrats killed the nation’s trade, which is again embodied in a single character representing the collective experience of merchants.

 

Fig. 11. “Great Locofoco Juggernaut, A New-Console-A-Tory Sub-Treasury Rag Monster” (1837). Courtesy of the Political Cartoon Collection at the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.
Fig. 12. “Specie Claws,” printed and published by H. R. Robinson (New York, 1837). Courtesy of the Political Cartoon Collection at the American Antiquarian Society, Worcester, Massachusetts. Click image to enlarge in a new window.

In addition to theatrical references, cartoonists mobilized phrenology, Don Quixote, and the Bible to depict the crisis. “The Modern Balaam and His Ass,” by H. R. Robinson (fig. 9), draws an analogy between national politics and the Old Testament story of a prophet who is reproached by an angel for beating his talking donkey when the animal stops to make way for a divine messenger. The image drew on iconography familiar since at least the fifteenth century, coupled with the new symbol of the Democratic party, the donkey. Jackson beats his ride here with a veto stick for delaying the delivery of the “specie currency.” Meanwhile, the animal has been frightened by a “protest,” a sign of the inability of the federal government to pay its bills because the funds are stuck in the suspended banks. Van Buren walks behind the ass, as he promised in his inaugural address to “tread in the footsteps of my illustrious predecessor.” Blaming a shortsighted Jackson and his unthinking follower, the cartoon illustrates the nation’s financial troubles with a sign posted on the door of the “Mechanic’s Bank” that reads: “No Specie payments made here!” In this reading, the president rather than individual investors has failed a divine morality test. But the voters are not entirely off the hook: in America, the politicians who induced the Panic held office only after winning the support of the voters, represented here by the donkey, who sought to stop the crisis but could not prevent the ensuing destruction. This notion of a system composed of individuals yet beyond the control of all but the most powerful offered a model for a new kind of economic thinking. Once Americans recognized “the economy” as a system, they would have an answer to the question of economic responsibility that combined agency and victimization. Despite their ancient and early modern references to literature, iconography, and the language of conspiracy, caricatures of panic actually offered nineteenth-century Americans a new conceptual model: the system.

With so many banks each issuing their own paper currency, the most tangible symbol of economic relationships—money—looked to be anything but systematic. Nevertheless, the thousands of varieties of bank notes all shared a common graphic organization, language, and iconography. Several political caricatures assumed the form of paper money, mocking both the form and the substance of these currencies and of the president’s Subtreasury plan. D. C. Johnston copied the style of bank notes in producing “The Great Locofoco Juggernaut, A New Console-A-Tory Sub-Treasury Rag Monster” (fig. 11). The title refers to Democratic party machine politics, British government securities, and dubious paper money, the paper itself physically derived from rags. This “shin plaster” (slang for fractional currency) represents a value of twelve-and-a-half cents, probably the price of the cartoon. Most bank notes and other paper promises contained icons of stability, economic growth, and patriotism to evoke confidence. Johnston, in contrast, depicts the lenders as laughable characters unworthy of trust. In a critique of the hypocrisy of Hard Money Democrats, the caricature lists “Locofoco” in the place normally reserved for the signature of a bank president and has been “accepted” for payment by Benton as if he were the cashier. Jackson, dressed in drag and trampling on the people’s rights, frames one side of the bill. The other side shows a small version of “A Modern Balaam and His Ass,” with Van Buren assuming monkey-form and Jackson’s head imposed on the ass carrying the deposits and advancing on the road “to ruin.” The accompanying seals argue that “officeholders” are paid in “yellowboys” (gold) while the “people’s pay” consists of “treasury rags.” The note is crowned by a monstrous Van Buren, the monster being the symbol Democrats had used in attacking the Bank of the United States during the Bank War, riding a carriage carrying the federal government’s deposits and pulled by his appointees to federal offices. Running on “jackass power,” the “railroad to perdition” crushes the people despite a well-dressed character’s declaration that the “experiment” (a nickname for Jacksonian financial policy) would “end the people’s sufferings.” Here is a metaphor of panic: an out-of-control machine. Instead of guaranteeing the paper’s worth, the text at the center of the bill ridicules Democratic politicos. The slogan “good for a shave” reflects the high fees charged by bill brokers who pocketed a portion of the face value of paper money for their own profits. Below, Benton appears in the shape of a bug—the same insect that appears in the image from Vision of Judgment—and flies around Van Buren who, like the ghost of a bank in the Macbeth cartoon, is being strangled by the “Specie Circular.” This portrait is called “Laocoon,” another literary reference, this time to Greek mythology. All this dense imagery mirrors the complexity of American finance, suggesting that if there was a system, it was a joke.

 

Fig. 13. “Migrant Mother,” photograph by Dorothea Lange. Courtesy of the Library of Congress, Washington, D.C.

 

Not all of the caricatures displayed such dark humor. “Specie Claws,” a homophone of the “specie clause” in Van Buren’s Subtreasury plan that allowed only coin to be used for the payment of federal debts, was the title of the only picture besides “The Times” to remove national political leaders from focus (fig. 12). Faint portraits of Jackson and Van Buren hang on the wall of a one-room attic apartment of a poor family in this image depicting the effects of national policies on domestic life. Seated at a table upon which rests an empty platter, a carpenter leans on a copy of the Locofoco newspaper, the New Era, while listening to his children’s requests for bread. “I’m so hungry,” exclaims the tallest boy. His mother, nursing the couple’s youngest child, says “My dear, cannot you contrive to get some food for the children? I don’t care for myself.” He replies that he has “no money and cannot get any work.” Meanwhile, two well-appointed men carrying a warrant and “distraint for rent” wonder “where we are to get our costs.” One of the most emotive images of the period, “Specie Claws” was meant to provoke sympathy with the family’s plight and the father’s emasculating experience of being unable to provide food and shelter for his family. Robinson’s image, however, is deceptive. Although it assigns an anonymous human face to the economic crisis, it does so for purposes of national politics. The key to this underlying agenda is uttered by the second son. “I say Father can’t you get some specie claws?” he asks. This Whig lithograph was designed to expose the mistaken allegiance of workers to the Democratic party, blaming them for supporting the Locofocos’ hard money policies. The father cannot get his “claws” into the specie hoarded by the federal government and the banks. But by having voted for such policies he must shoulder responsibility for their outcome, namely, his family’s poverty. All Locofoco supporters bore responsibility for the nation’s economic troubles. Like “The Modern Balaam and His Ass,” “Specie Claws” describes a combination of agency and victimhood that would eventually be moved from the political arena to a newly conceptualized economic system.

During the century separating the 1830s from the 1930s, proponents of laissez-faire were so successful in advocating an economy that purportedly operated independently of the political system that New Deal supporters had to convince voters that the government could (and should) intervene economically on behalf of suffering Americans. In the 1930s, Dorothea Lange used a technology unavailable in 1837 to photograph the plight of economic victims in her composition “Migrant Mother.” Shot in a California pea pickers’ camp during the Great Depression for the government’s Farm Security Administration, the photograph is strikingly similar to “Specie Claws.” The posture of the central characters is nearly identical. Both pictures appeal to emotion to make an argument about the effects of economic events on families. These images, however, make opposite arguments about the cause of economic disaster. “Specie Claws” blames the political system; “Migrant Mother” demands its intervention. The pictures might look similar, but the subject had changed; economics had replaced political economy as the discipline that offered tools for understanding panic. In a recent New York Times review of Linda Gordon’s biography of Lange, David Oshinsky praised the “timelessness” of Lange’s work. Pictures of panic and depression might seem timeless to us, but the political cartoons of the Jacksonian period provide evidence that our understanding of the economy has changed dramatically over the past two centuries. Up until the Great Depression, economists naturalized financial crises as part of a business cycle powered by individual choices yet beyond individual control. Since the 1930s, these experts (equipped with statistical indicators, complex models, and national or international financial institutions) have convinced us that specialized knowledge can be used to regulate the forces of the economy. We presently view our own financial crises as, at least in part, the result of a failure by regulatory agencies to flatten the business cycle and moderate those economic forces that victimize us. We recognize the hardship of “The Times” because, unlike George Putnam, who believed in complete individual economic agency, we believe in economic victimhood. Each small detail of “The Times” portrays economic disasters that resemble familiar photographs of the Great Depression as well as images of later nineteenth-century downturns. These visual commodities reinforce our twenty-first-century belief in the business cycle, for they demonstrate that economic crisis looks the same across time. By choosing “The Times” to represent the Panic of 1837, we validate our present-day theories of victimhood, though we do little to further our understanding of what happened in 1837.

Further Reading

The Panic of 1837 has inspired prize-winning books in both political and economic history. For competing political interpretations of the crisis, see: Reginald Charles McGrane, The Panic of 1837 (Chicago, 1924); Arthur M. Schlesinger Jr., The Age of Jackson (Boston, 1945); Bray Hammond, Banks and Politics in America (Princeton, 1957); Marvin Meyers, The Jacksonian Persuasion (Stanford, 1957); Sean Wilentz, The Rise of American Democracy (New York, 2005); and Daniel Walker Howe, What Hath God Wrought (New York, 2007).

For competing economic interpretations of the crisis, see: Leland Hamilton Jenks, The Migration of British Capital to 1875 (New York, 1927); Richard Timberlake, “The Specie Circular and the Distribution of the Surplus,” Journal of Political Economy 68 (1960): 109-117; Peter Temin, The Jacksonian Economy (New York, 1969); Peter Rousseau, “Jacksonian Monetary Policy, Specie Flows, and the Panic of 1837,” Journal of Economic History 62 (2002): 457-88; and John Wallis, “What Caused the Crisis of 1839?” NBER Historical Working Paper 133 (Cambridge, Mass., 2001).

For the social history of the depression, see Samuel Rezneck, “The Social History of an American Depression, 1837-1843,” American Historical Review 40, No. 4 (1935): 662-687. For a recent discussion of the effect of the panic on fiction, see Maria Carla Sanchez, Reforming the World (Iowa City, 2008).

For the rise of the concept of the economy, the business cycle, numeracy, economic individualism, and genres of economic writing, see: Margaret Schabas, The Natural Origins of Economics (Chicago, 2006); Harold Hagemann, ed., Business Cycle Theory (London, 2002); Ann Fabian, “Speculation on Distress: The Popular Discourse of the Panics of 1837 and 1857,” Yale Journal of Criticism 3, No. 1 (1989): 127-42; Patricia Cline Cohen, A Calculating People (Chicago, 1982); Jeffrey Sklansky, The Soul’s Economy (Chapel Hill, 2002); and Mary Poovey, Genres of the Credit Economy (Chicago, 2008).

Much recent scholarship has explored the culture of capitalism in the early American republic with a particular emphasis on failure, bankruptcy, and confidence. For examples, see: Scott Sandage, Born Losers (Cambridge, Mass., 2005); Stephen Mihm, A Nation of Counterfeiters (Cambridge, Mass., 2007); Jane Kamensky, The Exchange Artist (New York, 2008); Edward Balleisen, Navigating Failure (Chapel Hill, 2001); and Bruce Mann, Republic of Debtors (Cambridge, Mass., 2003).

Nancy Reynolds Davison’s University of Michigan doctoral dissertation, “Edward Williams Clay” (1980), is an invaluable source on this prolific artist and his lithographs. For a broader perspective on nineteenth-century American political caricature, see: Frank Weitenkampf, Political Caricature in the United States (New York, 1953); and Bernard F. Reilly Jr., American Political Prints, 1776-1876 (Boston, 1991).

 

This article originally appeared in issue 10.3 (April, 2010).


Jessica Lepler is an Assistant Professor of History at the University of New Hampshire. She is completing her forthcoming book, 1837: Anatomy of a Panic. Her dissertation of the same title won the 2008 Allan Nevins Dissertation Prize from the Society of American Historians.

 




Midget on Horseback

American Indians and the history of the American state

Modern American political culture has no greater shibboleth than Big Government, that un-American serpent who slithered into our garden around the time of FDR, wrapping American society in its coils. “Isn’t our choice really not one of left or right, but of up or down?” Ronald Reagan once asked. “Down through the welfare state to statism, to more and more government largesse accompanied always by more government authority, less individual liberty and, ultimately, totalitarianism.” Nothing was more certain to conservatives like Reagan than that their vision of a government that did little but plan and fight wars was the original American model, “the dream conceived by our Founding Fathers.” 

Though probably only a minority of professional American historians ever voted for Ronald Reagan or any politician like him, they have generally told a similar story about government’s role in the early American past. Despite the fact that this government was what the founders chiefly worked to create, the institution itself gets almost no play in typical historical narratives once the founding documents are signed. Typical historians’ attitudes are well summarized by Princeton historian John Murrin’s quip that the early U.S. government was “a midget institution in a giant land,” an insignificant force “with almost no internal functions” and no ability to effect major changes or drive historical trends. “Its role scarcely went beyond…the use of port duties and the revenue from land sales to meet its own expenses.” Murrin was building on a long tradition of scholarly riffing at the expense of the American state. Political scientist James Sterling Young’s Bancroft Prize-winning study of Jeffersonian Washington, The Washington Community, 1800-1828 (1966) cast a long shadow. “The early government was…a small institution, small almost beyond modern imagination,” Young wrote, and size mattered, he thought. “Small size indicated slightness of function.”

This “myth of statelessness,” as University of Chicago historian William Novak calls it, was a comforting and ideologically convenient interpretation for Cold War-era historians eager to turn American history into a story of expanding individual freedom that could be contrasted with Soviet authoritarianism. It proved equally convenient for so-called New Political Historians whose number-crunching approach treated voting statistics as social data and consistently concluded that antebellum politics was best understood in terms of competing religious values and ethnic identities rather than the policy debates and economic issues that previous scholars had emphasized. A weak or nearly nonexistent early American state also made an indispensable contrasting benchmark for political scientists eager to show a transformation in American governance at some later period. According to pioneering “new institutionalist” scholar Stephen Skowronek, the operations of “the early American state were all innocuous enough to make it seem as if there was no state in America at all.”

So everyone agrees. Yet for a political historian who has grown up, so to speak, after the great post-1960s expansion of mainstream history beyond its former social and racial boundaries, a question naturally occurs: how could any scholar claim to have seriously interpreted the history of the American state without foregrounding the experience of those peoples who were first, most frequently, and most punishingly targeted by government policy in the United States? Those peoples would be the American Indians, who were subjected to U.S. government policy before the United States even had an executive branch or the power to levy taxes.

Rising State, Vanishing Americans

The early American state, the “Great Father at Washington,” did not look much like a harmless little person to the continent’s indigenous population. Without suggesting any lack of pride in their own nations or discounting the many occasions when Uncle Sam’s representatives appeared to be weak and incompetent, Indian leaders found the United States an awesome, inexorable force. This impression registered most strongly when Indians were directly confronted with the government’s full scope and the nation’s true size and extent. One of the government’s most effective tactics against indigenous resistance was taking delegations from the tribes it was dealing with back east to tour U.S. cities and visit the president in Washington, D.C. European travelers and subsequent historians generally derided the early capital as a “city of magnificent distances,” but native visitors were usually impressed even if there were still cows grazing at the base of Capitol Hill. “So large and beautiful was the President’s House,” said a Winnebago chief visiting John Quincy Adams in 1828, “I thought I was in heaven, and the old man there [Adams], was the Great Spirit.” Diehard Sauk resistance leader Black Hawk was taken to Washington after his capture in 1832 and went home speaking respectfully of the U.S. “war chiefs” and open to political and cultural cooperation with whites. U.S. officials were well aware of the impact these tours had on the Indians. It was noted that, before Little Crow, the leader of the 1862 Dakota Sioux rebellion in Minnesota, no Indian who had been to Washington had ever again engaged in violence against whites. The tactic remained generally effective even long after 1862. Writing of a recalcitrant tribe in 1888, the commissioner of Indian affairs advised that “a visit east will…open their eyes to the power of the Government [and] ‘knock the fight out of them.’”

While the early American state may seem innocuous and ineffectual by European or twentieth-century American standards, it rarely seemed puny to Native Americans who had to face myriad, overlapping projections of government power into their territories, cultures, and lives. That midget came on horseback, booted, spurred, and heavily armed, with wagon trains of bureaucrats, government-contracted missionaries, and speculators in the “public lands”—fungible commodities government policy had created out of the Indians’ quasi-communal homelands—trailing behind it. This midget somehow managed to accomplish the political subjugation and economic expropriation of the interior North American Indians and their lands, spanning the entire continent, in less than a century.

It is important that we be clear on the magnitude of the U.S. government’s gruesome achievements in this area. It has long been popular among Americans of European descent to think of the decline of the Indians as the result of some inevitable “natural process.” Romanticized beginning in the late 1700s as “vanishing Americans” whose noble but primitive civilization was picturesquely doomed by progress, militarily defeated American Indians were subjected to a kind of cultural cannibalism in which their image, history, and very names were appropriated by artists, writers, and real estate developers to ennoble and Americanize their own productions. The general idea can be seen in such still-popular images as James Fraser’s much-copied statue, The End of the Trail, and Frederic Remington’s The Scout, which mournfully overlooks downtown Kansas City from a hilltop park and serves as the city’s unofficial logo and one-time hockey mascot. In the Midwest, settlers and developers slapped once panic-inducing names like Black Hawk, Sauk, Tecumseh, Shawnee, and Osceola onto towns, counties, streets, and schools seemingly within hours of these war leaders’ and their peoples’ defeats.

 

Fig. 1. The Scout, Frederic Remington. Photo courtesy of the author.

In recent times, it has become more common to blame the Indians’ alleged demise on European diseases and market forces, but from a certain angle that only amounts to a further naturalization of the conquest, which manages to subtly strengthen the European-American’s sense of having a genetically superior civilization while simultaneously allowing him to deplore the Native American holocaust and absolve present Americans of any sense of blame for it. Biological agents played a huge role in the conquest, to be sure, but within North America much of the native holocaust—a demographer’s term, not mine—was still ahead when the United States came on the scene. Demographic historian Russell Thornton estimates that the indigenous population of the present forty-eight contiguous states was approximately two million in 1492. By 1800, the first year for which Thornton finds reliable census figures, the numbers were much diminished, but there were still some six hundred thousand American Indians living across a vast territory that they still largely controlled. In certain eastern regions where first contact and virgin-soil epidemics were many generations past, notably the southeastern lands of the Creek and Cherokee nations, the Indian population was actually growing. That was a major reason Andrew Jackson and other southern leaders wanted to remove them. The rapid conquest and liquidation of this population—by force, forced relocation, and legal-economic evisceration—was the primary work of the nineteenth-century United States government, and any truly complete or coherent account of the history of the American state must take it into account. This conquest was the product of an interlocking system of government policies rather than something that happened “naturally.”

By calling attention to the awful “effectiveness” of U.S. Indian policy, I do not mean to deny the agency or resourcefulness of the peoples who were expropriated or the survival of their descendants today. Instead I use such strong language because doing otherwise risks making the struggle for the continent seem like a fairer fight, and thus more glorious to the victors, than it was. It would be terribly ironic if historians’ laudable desire to make indigenous peoples the protagonists of their own history led us to inadvertently take it easy on Uncle Sam and his policies.

Its Name Was Legion

A brief narrative of the U.S. conquest of native lands in the early republic shows just how prominent a role government played. Stalemated in the West at the end of the revolution, Alexander Hamilton’s better-financed United States government flexed its new “sinews of power” (see Max Edling’s article in this issue) against the still-unbowed Indians of the Old Northwest soon after 1789. After the disastrous defeats of Generals Josiah Harmar and Arthur St. Clair in 1790 and 1791, the budding fiscal-military state plowed its new revenues into a brand-new, much better-equipped and better-trained army, General Anthony Wayne’s United States Legion. The legion invaded the Indian confederacy’s heartland in present northwest Ohio and routed a native force at Fallen Timbers in 1794.

One of the legion’s major accomplishments in the campaign had more to do with public works than fighting. Like previous armies attacking Indians farther east in earlier wars, Wayne’s men built a road through the woods and swamps that was large and smooth enough to allow large numbers of soldiers and heavy military wagons to move north from the Ohio River and penetrate deep into Indian country. 

While barely mentioned in standard accounts minimizing the early American state, military road-building was in fact one of the more crucial and extensive activities undertaken by the federal government. Once they had served their original military purposes, military roads greatly facilitated the expansion of civilian commerce and the white population into the areas where they were built. The devastation wrought by those developments was much more permanent than any battlefield setback. From 1794 on, it was standard to insert language into Indian treaties granting the United States rights-of-way to build roads through Indian land, increasing the natives’ vulnerability in multiple ways. “The road or canal can scarcely be designated, which is highly useful for military operations,” wrote secretary of war John C. Calhoun, “that is not equally required for the industry or political prosperity of the community.” While historians have paid much more attention to the slow progress of civilian government road building in the form of the National Road, it was the War Department that really had charge of national transportation development, drawing up plans for a national road network (see fig. 2 below) that led to more planning through the Survey Act of 1824 and furious lobbying for new routes and improvements to old ones.

The Battle of Fallen Timbers began a string of significant military and diplomatic moves that, over the following two decades, would gradually crush the natives’ ability to offer large-scale resistance to the United States or its settlers, while incorporating gigantic swaths of Indian territory into the United States. Despite the small size of the antebellum standing army, the government showed a remarkable ability to meet episodes of Indian resistance or noncompliance with overwhelming force. Two examples were the battles won by future presidents William Henry Harrison, at Tippecanoe in 1811, and Andrew Jackson, at Horseshoe Bend in 1814, where almost nine hundred Creeks died in the single bloodiest day of U.S.-Indian warfare ever. (Jackson’s and Harrison’s armies also built roads, later improved by the War Department.) While the United States dismally failed to “liberate” Canada or defend the East Coast during the War of 1812, the aforementioned western victories (plus Jackson’s encore against the British in New Orleans) were remarkably effective in securing the continent’s midsection from further European incursions and in destroying the remaining military capabilities of the Eastern Woodlands tribes. The United States would face many frustrations at the hands of the Indians later in the nineteenth century, but the War of 1812 was the last time that Native Americans presented a serious military threat to U.S. sovereignty.

Cultural domination and actual occupation of Indian lands by U.S. citizens were another matter, but these too came very rapidly in the wake of military defeat. The speed and scale of this displacement are truly shocking to contemplate. As of the 1795 Treaty of Greeneville, the “frontier” line between acknowledged Indian land and the areas open to white settlement still reached only partway across present-day Ohio. Large chunks of territory, even in the eastern states behind that line, belonged to the natives in law and in culture. By the 1840s, the “line” would be far beyond the Mississippi and the small Indian population behind it largely ignored. By the early twentieth century, the United States had expropriated all the economically productive land from coast to coast, reduced the native population to barely more than one hundred thousand people, and successfully claimed total hegemony even over Indians who lived in their own homelands.

This massive cultural and demographic displacement was engineered by instrumentalities of the federal government just as much as the military conquests were. Though Indians were technically sovereign in their own lands until those lands were ceded by treaty, various policies and constitutional arrangements rendered Indian lands latent U.S. territory in practice and created tremendous incentives for U.S. citizens to encroach on them. As the historian and budding radio personality Peter Onuf has argued, the system of territorial government set up in the 1780s promoted a “dynamic, expansive conception of the union” that ensured the rapid incorporation of available territory into the United States. The territorial system automatically imbued frontier areas with the legal infrastructure necessary to legitimize U.S. private-property holding, law enforcement, and military action. The system of survey and sales administered by the General Land Office and its local branches facilitated the swift commodification of frontier land and the attendant destruction of native communities.

Though U.S. laws and treaties typically forbade whites from settling on unceded or reserved Indian land, the government’s incentives for expansion simply overwhelmed those relatively feeble protections. Both the first U.S. president and his secretary of war, Henry Knox, wanted to foster a more peaceful and stable relationship between Indians and whites and to fulfill the treaty commitments they had made to the natives. Yet both Washington and Knox were lifelong speculators in frontier lands, and their efforts to do the Indians justice were made in full awareness and tacit support of the fact that the United States would continue to expand at the Indians’ expense. This was the purpose of the land system they had created. In 1796, Washington approved the surveying of a boundary line between U.S. and Cherokee land with some distinctly fatalistic remarks about the permanence or meaningfulness of such a line. “The Indians urge this, the law requires it, and it ought to be done; but I believe scarcely anything short of a Chinese wall, or a line of troops, will restrain land jobbers, or the encroachment of the settlers upon the Indian territory.”

Both Federalist and Democratic-Republican presidents occasionally used troops to remove illegal settlers from Indian lands, but these moves are more accurately seen as blows for a more orderly and efficient expansion process (so that officially favored speculators and the government could make more money) than as real efforts to protect the Indians or curtail expansion. Such acts of due diligence also served to buttress the U.S. claim, made to the international community and to critics at home, that the aggressive expansion of white settlement and the expropriation of the Indians were lawful and legitimate activities.

The governmental machinery set up to facilitate expansion caught up the Indians far more often than it did the white squatters on Indian land. Any native effort to resist such encroachments, even encroachments by settlers’ livestock, immediately enveloped the Indians in the white political and legal machinery erected around their lands, frequently producing violence that might or might not progress to full-scale war but that almost always led to new pressures for land cessions or outright removal. Enforcement of the laws protecting Indians depended upon local courts dominated by Indian-hating white settlers who were little disposed to see any of their own punished to provide justice to the natives. Territorial militia like those Andrew Jackson led against the Red Sticks were the shock troops of this Indian removal process, and the frontier garrisons and federal officials charged with protecting the Indians on their lands often oversaw their liquidation instead.

The infamous Indian Removal Act, the first major legislation of Jackson’s presidency, only formalized, and reduced the level of violence associated with, a procedure that had existed since the time of Washington and Jefferson. At the same time, it was an ambitious government program that eventually required the negotiation of more than ninety separate treaties and the transportation and resettlement in the West of something on the order of one hundred thousand people, though often without getting them there alive or establishing permanent new communities.

The Welfare State on the Range

In fairness, it might be argued that military conquest, rudimentary land management, and the extension of the legal system do not really count as “the state” in the modern sense of a centralized, bureaucratic welfare state. Yet this later kind of state, generally supposed to have emerged in America only in the early twentieth century, appeared in Indian country more than a century earlier.

The Indians faced not only U.S. soldiers and forts but also a continental network of Indian agents and subagents, trading posts or “factories,” government-sponsored missionaries, and schools. U.S. Indian policies sought not only to control and displace Indians but also to radically reshape their cultures and lifestyles. The thrust of this “civilization policy” was precisely to transform the Indians into potential citizens of a modern liberal state: private-property-owning individuals who would disregard their tribal identities and communal economic practices. Once the natives had changed their way of life, it was hoped, they could live peacefully and perhaps invisibly among the European American population. In addition to the diplomatic and military efforts against them, Native Americans were the “beneficiaries” of what is arguably the first social program in U.S. history and the “clients” (to use a modern term from the world of social provision) of the government’s first regulatory agency and social welfare bureaucracy, the Bureau of Indian Affairs.

The scope of this social reengineering project was more or less total. Federal policymakers wanted to transform Indian agriculture, politics, religion, dress, living arrangements, gender roles, and, especially, their modes of subsistence and property holding. Begun slowly under Washington and Knox but greatly expanded by President Thomas Jefferson and his successors after 1800, the civilization program required the creation of numerous new public institutions and a new bureaucracy to manage them. A series of trade and intercourse laws created a chain of government stores (or “factories”) to supply the Indians and provide a market for the produce of their hunting. Following the practice of the British Indian Department, regional superintendents were appointed to manage U.S.-Indian relations in their areas. Beneath the superintendents, a network of Indian agents was created to work with individual tribes. Through the agents or missionary groups contracted for this purpose, economic and educational assistance was provided that ideally included European farming supplies and implements, craftsmen to service this equipment and teach their skills to the natives, model farms to demonstrate more commercial and intensive European farming methods, and schools to teach English and European household economy. Run by the secretary of war in its first few decades, the civilization program eventually grew complex enough to require the creation of the Bureau of Indian Affairs in 1824.

This was part of a highly ironic pattern: each new policy that looked forward to the disappearance of the “Indian problem,” whether by assimilation, removal, or some other means, ended up requiring more elaborate programs and more extensive bureaucracies to deal with the policy’s results. Those darn vanishing Americans just wouldn’t actually vanish. For instance, the removal of eastern Indians to the west was supposed to allow the agency system to be gradually phased out and military expenditures retrenched. Instead, troubles with the Plains tribes and unexpectedly rapid white expansion into the Plains and the Pacific Coast led to expensive wars and the reservation system, which involved the government not just in sending Indian nations west but in minutely controlling all of their movements and in greatly expanding certain aspects of a civilization program that policymakers had once hoped to let wither away. As early as 1842, the Bureau of Indian Affairs was running its own school system, with two thousand students enrolled in some fifty-two schools and more on the way.

Professionalism and reliance on expertise are two hallmarks of the modern governance systems generally thought to have emerged only in the later nineteenth century. While the early American state did not have access to university-trained professional administrators and social scientists, as they did not yet exist, the early Indian civilization program was notable for the experience and intellectual engagement of the men who designed and implemented it. President Washington had gained much direct experience with Native Americans during his time as a young frontier military officer, and Thomas Jefferson’s scientific interests are well known. Jefferson had included extensive anthropological material on the eastern Indians in his Notes on Virginia and instructed Lewis and Clark to bring back much more data on the Plains and Northwestern peoples. It was Jefferson who set out the civilization program’s full intellectual rationale and mode of operation, including the self-serving expectation that Indians in the process of civilizing would fall into debt and thus be motivated to sell more land.

While Indian agents and BIA officials later earned a terrible reputation for corruption and incompetence, the most prominent officials of the early U.S. Indian service were dedicated civil servants who eagerly sought to gain expertise on Native American culture and seem to have been sincerely and idealistically committed to the misguided cause of changing that culture. The first southern superintendent and longtime Creek agent, Benjamin Hawkins, gave up a political career in North Carolina to spend the rest of his life in the Creek country, far beyond the boundaries of white settlement. Hawkins conceived of himself as a kind of secular missionary, devoted to the cause of “bettering the [Indians’] condition.” We might usefully think of him as the civilization program’s caseworker in the Creek country.

Thus American Indians had to confront the U.S. government’s full “stateness” long before many other Americans did. As political scientist James C. Scott so vividly showed in his book Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (1998), statist social engineering projects in the twentieth century were frequently and often disastrously applied by elites who had little sympathy with or understanding of the social groups and conditions their plans affected. This is not to say that the state always fails or is inherently evil or is uniquely the tool of left or right. It is to suggest that governments tend to work in terms of abstractions far more readily imposed upon people whom the dominant culture regards as inferior and whose wishes officials are not bound to respect. Contrary to popular belief, Americans need not look to the history of “totalitarian” governments abroad to understand this truth. Their own government’s relations with indigenous peoples tell the same story eloquently enough.

Adapted from a paper presented at the Society for Historians of the Early American Republic annual meeting, Worcester, Massachusetts, July 2007, and at the Policy History Conference, Charlottesville, Virginia, June 2006.


Further Reading:

The “midget institution” quotation comes from John M. Murrin, “The Great Inversion, or Court versus Country: A Comparison of the Revolution Settlements in England (1688-1721) and America (1776-1816),” in J. G. A. Pocock, ed., Three British Revolutions: 1641, 1688, 1776 (Princeton, N.J., 1980), 386-453. The “myth of statelessness” is taken down by William J. Novak in The People’s Welfare: Law and Regulation in Nineteenth-Century America (Chapel Hill, N.C., 1996) and “The Myth of the ‘Weak’ American State,” American Historical Review 113 (2008): 752-772, but was the premise of Stephen Skowronek, Building a New American State: The Expansion of National Administrative Capacities, 1877-1920 (Cambridge, 1982).

On the “vanishing American” trope and other ideas that have guided political history away from taking much account of the American Indian experience, see Robert J. Berkhofer Jr., The White Man’s Indian: Images of the American Indian From Columbus to the Present (New York, 1979); Brian W. Dippie, The Vanishing American: White Attitudes and U.S. Indian Policy (Middletown, Conn., 1982); and Gordon M. Sayre, The Indian Chief As Tragic Hero: Native Resistance and the Literatures of America, From Moctezuma to Tecumseh (Chapel Hill, N.C., 2005).

The standard historical reference on the U.S. government policies and institutions charged with managing the indigenous population is Francis Paul Prucha, The Great Father: The United States Government and the American Indians (Lincoln, Neb., 1986). Other especially illuminating works, among many, are Reginald Horsman, Expansion and American Indian Policy, 1783-1812 (Norman, Okla., 1992); Peter S. Onuf, The Origins of the Federal Republic: Jurisdictional Controversies in the United States, 1775-1787 (Philadelphia, 1983); Jeffrey Ostler, The Plains Sioux and U.S. Colonialism From Lewis and Clark to Wounded Knee (New York, 2004); Claudio Saunt, A New Order of Things: Property, Power, and the Transformation of the Creek Indians, 1733-1816 (New York, 1999); and Wiley Sword, President Washington’s Indian War: The Struggle for the Old Northwest, 1790-1795 (Norman, Okla., 1985). On the particular topic of military road building, see Harold L. Nelson, “Military Roads for War and Peace—1791-1836,” Military Affairs 19 (1955): 1-14; Francis Paul Prucha, Broadax and Bayonet: The Role of the United States Army in the Development of the Northwest, 1815-1860 (Lincoln, Neb., 1995); and W. Turrentine Jackson, Wagon Roads West: A Study of Federal Road Surveys and Construction in the Trans-Mississippi West, 1846-1869 (Lincoln, Neb., 1979). On native leaders being brought to Washington for a reality check, see Herman J. Viola, Diplomats in Buckskins: A History of Indian Delegations in Washington, D.C. (Bluffton, S.C., 1995). The Indian population statistics above come from Russell Thornton, American Indian Holocaust and Survival: A Population History Since 1492 (Norman, Okla., 1987). Finally, Patrick Griffin makes a consonant but very different argument in American Leviathan: Empire, Nation, and Revolutionary Frontier (New York, 2008). Personally, I am not sure I can go all the way from “midget” to “leviathan.”


This article originally appeared in issue 9.1 (October, 2008).


Jeffrey L. Pasley is associate professor of history at the University of Missouri and the author of “The Tyranny of Printers”: Newspaper Politics in the Early American Republic (2001), along with numerous articles and book chapters, most recently the entry on Philip Freneau in Greil Marcus’s forthcoming New Literary History of America. He is currently completing a book on the presidential election of 1796 for the University Press of Kansas and also writes the blog Publick Occurrences 2.0 for some Website called Common-place.