Miniature Worlds

Fig. 1. Dr. Robert Hazard, dressed miniature. Courtesy the Connecticut Historical Society, Hartford, Connecticut.
Fig. 2. Betsey Way Champlain, unidentified portrait. Courtesy of R. MacMullen.
Fig. 3. Eliza Way Champlain, Iris. Courtesy of R. MacMullen.

Look closely at these images. However large they appear on your computer screen, in reality they are tiny indeed. The first, a dressed miniature portrait, probably depicting Dr. Robert Hazard, measures just under six inches from the top of the frame to the bottom; it was created by Mary Way around 1800. Hazard’s head is painted, his torso assembled from bits of cloth and lace. The second portrait, watercolor on ivory, is about half the size of Hazard’s portrait. The history of this unfinished painting of an unidentified woman isn’t clear. Perhaps it was rejected by the sitter. Perhaps it was rejected by the artist, Mary Way’s sister, Betsey Way Champlain. The final image, the allegorical Iris, measures only 1 1/2 inches in diameter—small enough to fit inside the case of an early-nineteenth-century pocket watch. Although we don’t know whose watch Iris once decorated, we do know that it was painted by Betsey Champlain’s daughter, Eliza.

Small. Quaint. Curious. And a far cry from that era’s most familiar paintings—the magisterial portraits of Founding Fathers and republican gentry that have long decorated history textbooks and the flat likenesses of rural New Englanders that have acquired new cachet as the relics of a lost “folk” culture. Consider that these images were created by women, a family of women, and they become both more and less peculiar. While it might surprise you to discover two sisters and a daughter trying to support themselves as miniaturists in the early republic, you might decide that the obscurity of the artists serves to explain the oddness of their work: after all, marginal painters can be expected to produce marginal paintings.

At least, that was the conclusion that Betsey Champlain’s son William reached in 1825. William was proud of his mother, who had worked as a miniaturist in Connecticut since the 1790s. Still, to his eye, Betsey’s best work signified only wasted potential. After watching her laboriously copying a portrait by celebrated New York City miniaturist Nathaniel Rogers, William wrote, “I wish it had been in her power to be instructed by some person like Rogers . . . But she labors under a thousand difficulties to prevent her progress in the art which others have not.” With better instruction for her eye, her hand, and her taste, he imagined, Betsey might have claimed a place among the nation’s most fashionable painters, gaining artistic and perhaps financial independence. Although he confined his remarks to his mother, he might just as well have extended them to his aunt and sister, who also attempted to fashion careers as painters in the early republic. Emphasizing his mother’s distance from cosmopolitan techniques and aesthetics and the material success that might reward them, William’s comments anticipated contemporary perspectives on women who painted for pleasure and profit in the eighteenth and nineteenth centuries. A focus on the painting exposes second-rate art; a focus on the artist reveals a second-class citizen. But this emphasis on insufficiency and exclusion obscures a more interesting story about the painters, their work, and their world.

Sisters Mary Way and Elizabeth Way (Champlain) were born into a New London, Connecticut, mercantile family just before the Revolution. By the 1790s, when they reached their early twenties, both were painting miniature portraits of neighbors and relatives. Betsey continued to paint miniatures and teach painting after her marriage to George Champlain in 1794, perhaps because his volatile fortunes as a ship captain often left the family in straitened circumstances. Between her husband’s retirement in the 1810s and his death in 1820, Betsey’s painting played a critical role in the family’s support. As a widow, she lived largely on her earnings as a painter, supplemented by irregular contributions from her sons. Over the course of her thirty-year career, Champlain attracted commissions from local notables as well as distinguished visitors, including Universalist minister John Murray in 1799 and the brother of Commodore Perry in 1822. But the flux of demand in a town like New London required her to hone her entrepreneurial talents along with her artistic ones. In the 1820s, Champlain expanded her business by capitalizing on the popularity of mourning tokens. It paid to take likenesses from corpses for, as she explained, “[Y]ou can ask what you please” from the bereaved.

Betsey Way’s sister, Mary Way, the best known of the three women, abandoned Connecticut for New York City in 1811 at the age of forty-two. There she quickly worked her way into the fringes of a coterie of successful painters including John Jarvis, Joseph Wood, and Anson Dickinson, who critiqued her style and loaned her paintings to copy. By 1818, Mary Way had attracted a significant clientele, drawing both from parishioners at her Universalist church and older New London connections; she advertised a “ladies drawing academy” in the New York papers and she had two miniatures on ivory included in the annual exhibition of the American Academy of Fine Arts. Mary Way was never a star in the city’s art scene. And she never attained financial security. Still, when blindness ended her career in 1820, the Academy sponsored a benefit to raise money on her behalf.

Not surprisingly, Eliza Way Champlain (Betsey’s daughter and Mary’s niece) benefited from an early introduction to art. As a teenager, she painted watch papers and copied engravings, graduating to ivory miniatures and teaching by the time she reached her twenties. She learned painting from her mother and especially her aunt, whose tireless criticism continued unabated long after the older woman had gone blind. Mary Way worked especially hard to position Eliza for a career as a painter. Over the course of several extended visits to New York City between 1815 and 1820, Eliza studied painting with her aunt, met many of Way’s artist-friends, and attended Academy exhibitions and art auctions to view examples of fine art. In the 1820s, she made a series of half-hearted attempts to support herself in New York City by giving art lessons to young ladies and painting portraits, allegorical “fancy-pieces,” and watch papers. But she found that she lacked the survival skills necessary to compete in an art market saturated with European émigrés and other women painter-cum-teachers. After her marriage in 1826 she painted sporadically before the demands of childcare and her own indifferent health induced her to “abandon [it] entirely.”

Like most other eighteenth- and early nineteenth-century American painters, the Way-Champlain women found their careers shaped partly by their distance from the protocols that dominated European, and especially English, academic painting. Anglo-American artists found their commissions confined largely to portraits rather than more prestigious historical or allegorical subjects. The majority found themselves executing those portraits with less—and less sophisticated—training than their English counterparts boasted. The general problems that confronted American painters were only compounded for women artists, much as William Champlain had remarked. Access to formal studio training was all but nonexistent, with the notable exception of James Peale’s daughters, who enjoyed lengthy apprenticeships under their famous father’s tutelage. Even after a woman mastered painting’s technical skills and the theory that stood behind them, she encountered the nagging problem of publicity. A successful artist had to exhibit and advertise. She necessarily exposed herself to strangers as she drummed up a clientele and again when she sat down to capture their likenesses. The training needed to transform raw talent into polished style was beyond most women’s reach just as the demands of professional painting were at odds with the standards of feminine decorum.

If the Way-Champlain women were perforce excluded from the nation’s most rarified artistic circles, neither can they be subsumed within the ranks of artisan-painters who scoured the countryside recording the likenesses of provincial New Englanders. None of the three women conformed to the pattern of accidental vocation and ad hoc itinerancy that marked the careers of men like James Guild or Chester Harding, who stumbled into careers as painters and developed both skills and reputation while traveling through rural America. Consider too the women’s chosen medium: Portrait miniatures on ivory demanded a high level of technical skill to ensure that the thin washes of color adhered to the surface; they also required a considerable investment in materials. Neither cosmopolitan elites nor artisanal itinerants, the Way-Champlain women followed a different path, one that owed less to the various strategies pursued by male painters than to the expansion and transformation of women’s education.

In the decades immediately preceding and following the Revolution, scores of newly opened academies and seminaries afforded young women unprecedented instruction in rhetoric, history, geography, philosophy, and the natural sciences. Most of these schools also offered training in the “accomplishments”: drawing, painting, embroidery, music, and dancing. Although parents paid extra for this genteel instruction, it proved enormously popular. In fact, it was the income generated from these classes that kept many schools afloat. The accomplishments were neither frivolous distractions from the serious business of education nor the distant forerunners of modern day home economics classes, preparing girls for careers as wives and mothers. Instead, accomplishments and book learning were imagined as complementary parts of a single, unified project. Both were calculated to inculcate and demonstrate the virtuousness of American women, to provide proof of their sensibility. As an ideal, sensibility married order and harmony, reason and feeling, and enshrined taste both as a register of virtue and a delineator of class. The interior qualities that comprised sensibility manifested themselves externally in graceful bearing, transparent coloring, and expressive eye. They acquired a social dimension through sympathetic conversation, through belles lettres, and especially for women, through the accomplishments. Far more than a collection of desirable personal characteristics, sensibility carried broad political significance: it was both the precondition for virtuous citizenry and the best evidence of it.

While no evidence survives to explain exactly how or when the Way sisters learned to paint, it seems probable that they encountered some sort of art instruction during stints at one of Connecticut’s many female academies. The titles of Mary Way’s early decorative pieces—Friendship and Amabilité, for example—were popular subjects both for school girls’ needlework pictures and for the “improving” prose and verse that filled their commonplace books. Or consider Way’s unique dressed miniatures, her earliest surviving work. Like the portraits of Dr. Hazard and Mrs. Smith (below), these portraits positioned tiny watercolor profiles, carefully cut out of paper, atop busts fashioned from cloth, braid, lace, and other trimmings. The combination of delicately painted faces, applique, embroidery, and fine decorative sewing recalls the elaborate needlework pictures that young ladies produced at the culmination of their schooling.


Fig. 4. Mrs. Smith (Sarah Raymond Smith), dressed miniature. Courtesy the Connecticut Historical Society, Hartford, Connecticut.

Regardless of whether female academies provided Mary and Betsey with their earliest training, the public’s appetite for accomplishment provided all three Way-Champlain women with a market for their skills. Each woman periodically accepted individual students and Mary and Eliza both operated formal schools or drawing academies that offered instruction to groups of young ladies. Mary and Eliza also accepted assignments to create images that could be incorporated into other women’s badges of accomplishment. Teenaged Eliza, for example, copied an engraving of the Marquis de Salvo for inclusion in a New London women’s school; her painting would have educated students in the rudiments of taste and provided a model for their own paintings and needlework pictures. And Mary Way once accepted a commission to copy an “eligant engraving” of Christ Healing the Blind onto a piece of silk stretching more than half a yard “for a young lady to embroider.”

If the paintings and careers of all three Way-Champlain women reveal the broad influence of the culture of female accomplishment, so too do they mark changes within it. Surviving portraits and literary evidence suggest that neither Mary Way nor Betsey Way Champlain paid much attention to the stylistic innovations that transformed portrait miniatures in the late 1810s and early 1820s: the use of larger, rectangular—rather than oval—ivories; a palette that included more and brighter colors; and the inclusion of the detailed backgrounds, elaborate props, and drapery swags that had long distinguished oil portraits. Instead, the older women relied upon oval and round ivories and soft, monochromatic color schemes, positioning their sitters’ heads high on the ivory. They marshaled their understanding of color theory and their technical skills to exploit ivory’s special qualities, using its translucence to create the luminous skin their sitters desired. In other words, they continued to rely upon the pictorial conventions that dominated miniature portraiture during the Federal Period, conventions that had given physical form to the abstractions of sensibility.

In their depiction of women, for example, the sisters adhered to the visual standards of virtuous sensibility, republican style—striving for “due proportion, symmetry, ease and grace.” Way’s portrayal of allegorical figures apparently recapitulated the conventions that shaped countless images of Liberty and Columbia. Critiquing an early version of the allegorical Fancy, painted by her niece, Mary Way observed that the figure’s stiffness did “not accord” with her “ideas of ease and elegance.” Instead, the form should be “light and airy, in a loose flowing robe . . . at least, not look like a stick with corsets on and a frock tied round it.” Similar conventions governed both sisters’ depiction of real women. Unornamented clothing and hair signaled a woman’s virtue, delicate coloring suggested her sensitivity, graceful posture embodied her gentility, and a plain background drew attention to the countenance that revealed her character. The final effect was one of “softness and harmony,” “simplicity and elegance.” According to Eliza Champlain, these portraits rivaled the work of “the ancients” in their unadorned beauty. Drawing both on the culture of female accomplishment and the visual conventions of virtue, the older women’s portraits captured and constructed the sensibility of their sitters.

Like her mother and her aunt, Eliza Champlain found her progress as an artist mediated by the culture of female accomplishment. But by the 1820s, the meanings and aesthetics of accomplishment had shifted. Eliza’s work marked a sharp departure from the standards of “simplicity and elegance” endorsed by her aunt and mother. She described dressing her female sitters in an abundance of color, pattern, and drapery; she situated them in rooms, against landscapes, and clutching volumes of Byron’s poetry. The stylistic differences are yet more pronounced on the watch papers and “fancy pieces” that Champlain produced for sale and as gifts for her patrons and that made up the bulk of her work. In these tiny images, ornamental flowers saturated with color crowd in on women whose heart-shaped faces and enormous eyes deviate from the neoclassical ideal. In short, these paintings owe far less to the sensibility of the eighteenth century than to the sentimentality of the nineteenth. At the same time, the culture of female accomplishment had become less accomplished and more female. The flowers and the allegorical “cupids and flying females” that were Eliza Champlain’s favorite subjects had become the special province of young women’s amateur art.


Fig. 5. Eliza Way Champlain, Fancy. Courtesy of R. MacMullen.

During Champlain’s short-lived career as a teacher, her pupils learned to paint by copying her own versions of floral wreaths, Fancy, and Cupid. These themes were deemed especially appropriate for women during a period when the art training available to upper- and especially middle-class women was rapidly expanding. By the early decades of the nineteenth century, this visual vocabulary, produced and reproduced by young women in an extraordinarily wide variety of media, spelled amateurism, and female amateurism at that. Even Mary Way criticized her niece’s work as “chaste, labour’d, [and] mincing”—terms that cast the young woman not simply as a mediocre artist but as a mediocre female artist. Champlain’s subjects and style, to say nothing of her metier—the watercolor miniature—served to underscore her artistic and economic dependence on a derivative amateurism while exaggerating the difference between her own paintings and the magisterial canvases that gave her such pleasure at exhibitions and auctions. Like her mother and her aunt, Eliza Champlain could claim that her brush was inspired by Fancy, but she did so in a culture that was increasingly preoccupied with genius.

Mary Way, Betsey Way Champlain, and Eliza Champlain never attained wealth or fame. All three scrambled to make ends meet, relying on the generosity of kin and friends to make up the difference when they fell short. Most of their work has long since disappeared, surviving only as references in their letters. Even Mary Way, the most successful of the three and the one best known in her day and ours, survives as a funky footnote, her place in American art history secured with her earliest work, the dressed miniatures. But the Way-Champlain women compel us to think about women’s role in the arts and in America’s emerging culture industry. Their extensive correspondence and their scattered paintings can illuminate the efforts of ordinary women and men to claim the mantle of sensibility for themselves. Just as important, they can help us to understand how that culture of sensibility gave way to the nineteenth-century culture of sentimentality and the gains and losses that shift spelled for women artists.

Scroll back up to the top of this story and look closely at those three images. Small? Very. Quaint? Maybe. Curious? Absolutely. But perhaps your curiosity now extends beyond the images to include their makers and their world, a world we have yet to recover.

Further Reading: The Way-Champlain family papers, which include letters, poetry, and imaginative writings, are deposited at the American Antiquarian Society. They may be accessed on the Web by entering a search for “Way-Champlain” into the AAS’s online catalog. Ramsay MacMullen has edited a splendid collection of the family’s letters, Sisters of the Brush: Their Family, Art, Lives, and Letters, 1797-1833 (New Haven, Conn., 1997). American miniature portraits get their fullest treatment to date in Robin Jaffee Frank’s Love and Loss: American Portrait and Mourning Miniatures (New Haven, Conn., 2000), a book whose distinctive size and design comes close to recapturing the charm of the paintings themselves. Mary Way’s early work is described in William Lamson Warren, “Mary Way’s Dressed Miniatures,” The Magazine Antiques (October 1992): 540-49. The shifting significance of miniature portraiture is traced by Anne Verplanck in “The Social Meanings of Portrait Miniatures in Philadelphia, 1760-1820,” in Ann Smart Martin and J. Ritchie Garrison, eds., American Material Culture: The Shape of the Field (Winterthur, Del., 1997). On the social barriers confronting women painters in the early republic, see Anne Sue Hirshorn, “Anna Claypoole, Margaretta, and Sarah Miriam Peale: Modes of Accomplishment and Fortune,” in Lillian B. Miller, ed., The Peale Family: Creation of a Legacy, 1770-1870 (Washington, D.C., 1996). David Lubin’s fine “Lily Martin Spencer’s Domestic Genre Painting in Antebellum America,” in David C. Miller, ed., American Iconology: New Approaches to Nineteenth-Century Art and Literature (New Haven, Conn., 1993) is an exemplary analysis of one woman’s career and painting. Ann Bermingham’s Learning to Draw: Studies in the Cultural History of a Polite and Useful Art (New Haven, Conn., 2000) includes a useful analysis of British women’s drawing and painting.  
Anonymous Was a Woman (New York, 1979), by Mirra Bank, offers an accessible survey of women painters in eighteenth- and nineteenth-century America. For general descriptions of the culture of accomplishment at academies and seminaries, see Lynne Templeton Brickley’s dissertation, “Sarah Pierce’s Litchfield Female Academy, 1792-1833” (Harvard University, 1985). Emily Noyes Vanderpoel, Chronicles of a Pioneer School: The Litchfield Female Academy, Litchfield, Connecticut, 1792-1833 (Cambridge, Mass., 1902), and William C. Reichel, A History of the Rise, Progress, and Present Condition of the Bethlehem Female Seminary with a Catalogue of Its Pupils, 1785-1858 (Philadelphia, 1858), offer extracts from commonplace books and describe students’ artwork. Betty Ring’s definitive studies of ornamental needlework, including Let Virtue be a Guide to Thee: Needlework in the Education of Rhode Island Women (Providence, R.I., 1983) and Girlhood Embroidery: American Samplers and Pictorial Needlework, 1690-1850 (New York, 1993), vols. 1-2, locate “schoolgirl art” in national and transatlantic literary and visual cultures. Catherine Keene Fields and Lisa C. Knightlinger, eds., “To Ornament Their Minds:” Sarah Pierce’s Litchfield Academy, 1792-1833 (Litchfield, Conn., 1993) and Suzanne L. Flynt, Ornamental and Useful Accomplishments: Schoolgirl Education and Deerfield Academy, 1800-1830 (Deerfield, Mass., 1988) explore students’ work at two academies. The careers of early American portrait artists who have attained canonical status have been creatively explored by Margaretta Lovell in several essays, especially “Painters and Their Customers: Aspects of Art and Money in Eighteenth-Century America” in Cary Carson, et al., eds., Consuming Interests: The Style of Life in the Eighteenth Century (Charlottesville, Va., 1994) and “Bodies of Illusion: Portraits, People, and the Construction of Memory,” in Robert Blair St.
George, ed., Possible Pasts: Becoming Colonial in Early America (Ithaca, 2000). Philadelphia’s famous Peale family has attracted the most attention from art historians, museum curators, and historians. The Peale Family: Creation of a Legacy, 1770-1870 provides an especially wide-ranging and beautifully illustrated overview of the Peales and their world. Of course, not all painters found themselves in the canon. For a discussion of the artisanal itinerants who produced “folk” or “primitive” portraits, see David Jaffee’s “One of the Primitive Sort: Portrait Makers of the Rural North, 1760-1860” in Hahn and Prude, eds., The Countryside in the Age of Capitalist Transformation (Chapel Hill, 1985): 103-38. Ellen Hickey Grayson’s work shifts our attention away from the artists to the aesthetics of the paintings themselves; see, for example, “Toward a New Understanding of the Aesthetics of ‘Folk’ Portraits,” in Peter Benes, ed., Painting and Portrait Making in the American Northeast (Boston, 1995): 217-34. David Jaffee, et al., eds., Meet Your Neighbors: New England Portraits, Painters, & Society, 1790-1850 (Sturbridge, Mass., 1992) provides an extensive description of New England’s itinerant artists and their clientele. On the broad connections between aesthetics, politics, and culture in the eighteenth and nineteenth centuries, see Jay Fliegelman, Declaring Independence: Jefferson, Natural Language, and the Culture of Performance (Stanford, Calif., 1993) and David S. Shields, Civil Tongues and Polite Letters in British America (Chapel Hill, 1997); these books also map the landscape of sensibility.
David Waldstreicher’s In the Midst of Perpetual Fêtes: The Making of American Nationalism, 1760-1820 (Chapel Hill, 1997) makes clear the role of sentiment and sensibility in early national political culture while David Steinberg’s essay “Charles Willson Peale Portrays the Body Politic,” in The Peale Family, analyzes the pictorial conventions of republican virtue, including female virtue. Neil Harris explores the connections between the social history of early American art and emergent nationalism in The Artist in American Society: The Formative Years (New York, 1966).


This article originally appeared in issue 3.2 (January, 2003).


Catherine Kelly teaches history at the University of Oklahoma. The author of In the New England Fashion: Reshaping Women’s Lives in the Early Nineteenth Century (Ithaca, N.Y., 1999), she is currently working on a study of gender and visual culture in the early republic.




Commemorating Concord

Concord, Massachusetts, is often portrayed as the quintessential New England town, and it is easy to understand why. Founded in 1635 as the first Puritan settlement above tidewater, the town appears connected to its past, even after nearly 370 years of growth and change. The historic center, which has evolved from the nucleated village planted by the original English settlers, still anchors the town. Colonial and early nineteenth-century houses line the same road that the king’s troops took into the village on the fateful nineteenth of April 1775. Visitors today pass many of the sights–the Greek Revival Unitarian Meetinghouse, the hill burial ground, the Wright Tavern, the Colonial Inn, the Town Hall, the cluster of shops and offices around the common and the milldam–that were familiar in the era of the Transcendentalists. Walk a mile or so in any direction, and you can enjoy the natural beauty of a landscape that seems miraculously to have escaped the ravages of suburban development. Early in the morning or in midwinter at Walden Pond, you can imagine yourself as solitary as Thoreau in his cabin. Concord encourages such illusions. It suggests rootedness, authenticity, an organic sense of place rarely found in the contemporary United States. No wonder the New York Times recently recommended the town for a weekend getaway: “Concord,” it declared, “is no Colonial Williamsburg.”

In these adulatory terms, Concord has been celebrated for a century and a half. It was, said the Boston Globe in 1909, “an ideal town,” which, in its tradition of “plain living and high thinking,” offered an alternative to an America dominated by the “commercial spirit.” Founded on Puritan rectitude, the town focused on “destiny rather than dollars,” cultivated a heritage of liberty and conscience, and brought forth two American revolutions. The first was the opening battle of the War of Independence, when minutemen confronted British regulars at the North Bridge on April 19, 1775; the second, the movement for intellectual independence associated with the Transcendentalist writers and residents Ralph Waldo Emerson and Henry David Thoreau. “It is a model of what a New England town should be,” observed the Globe. “Concord, one of the oldest towns in the commonwealth, has retained through all the stress and strain of 275 years much of her pristine purity and most of her Puritan ideals.” As the home of Puritans, minutemen, and Transcendentalists, Concord symbolized the New England tradition at its best.


Fig. 1. A view of Concord taken from The Massachusetts Magazine, July 1794. Courtesy of the American Antiquarian Society.

Few places enjoy so enviable an image. But such reputations do not arise spontaneously in a culture. They are consciously crafted by interested parties to shape the present and the future. Concord’s identity in the public mind was the work of several generations, inside and outside the town, and for all its apparent seamlessness, it gathers together strands of thought that were once incompatible. Who invented this pristine, revolutionary Concord, and why?

I.

Concord staked its claim to be the birthplace of Independence during the celebration of “America’s jubilee” on April 19, 1825, the fiftieth anniversary of Concord Fight. Concord was then an expansive town of nineteen hundred inhabitants, thriving with crafts and trade in the village and surrounded by farms prospering on demand from rising urban centers in the long boom that accompanied the opening phase of the Industrial Revolution in the Northeast. It also occupied a prominent place on the political landscape; as a shire town, where the county courts convened, it had risen into a leading center of Middlesex County, and its politicians were major players on that stage. Economic and political ambitions, as well as pride in the past, drove the insistence that Concord was the “first site of forcible resistance to British aggression.”

It may seem natural to us that in 1825 the children and grandchildren of minutemen would commemorate the fight at the North Bridge. It was not. Concord had, in fact, done little to mark the occasion since April 19, 1776, when the town minister, the Reverend William Emerson, preached an anniversary sermon in honor of the “memorable Day that . . . marked in plain though crimson Lines the Path of Duty for those to tread, that nobly scorned to wear the british Yoke.” Like his colleague, the Reverend Jonas Clark of Lexington, who also gave a public address that day “to commemorate the murder, blood-shed and commencement of hostilities, between Great-Britain and America” in his town, Emerson meant to shore up patriot morale. He was soon off to serve and die as chaplain to an ill-fated military expedition to Ticonderoga; the annual anniversary sermon ended with him. Still, civic pride remained strong, and townspeople never lost an opportunity to remind others of their indispensable role in the Revolution. Twice–once in 1792 and again in 1813-14–they sought aid from the state legislature to erect a monument to the battle, only to be foiled by alert representatives from Boston, jealous lest Concord gain greater prominence and thereby strengthen its recurrent bid to become the capital of Massachusetts. In 1798, as war with France appeared imminent, a group of fervent Federalists held a public meeting and vowed “in holy remembrance of those who bled” on “the memorable 19th of April” to “defend by our valor, what they won by their blood.” The beleaguered President Adams appreciated Concord’s support, but advised his supporters to drop all that talk about April 19. This was no time to stir up old resentments against British “cruelty”: “If Concord drank the first blood of martyred freemen, Concord should be the first to forget the injury when it is no longer useful to remember it.”

Whatever the reason, the inhabitants made few public displays of local patriotism. Training days for the militia were rarely scheduled for the nineteenth of April, and when the citizens assembled to celebrate American Independence, it was on the Fourth of July. Such commemorations were invariably held in the village, not at the battlefield. Back in 1792, the North Bridge had been torn down and the main road over the Concord River rerouted; the adjoining land fell into the hands of the Reverend Ezra Ripley, Emerson’s successor in the pulpit and the Old Manse, who incorporated it into his back pasture. It was no longer possible to travel to the site of “the shot heard ’round the world.” Then again, with so many veterans of Concord Fight still living in town, trading reminiscences in the taverns and telling tales of military glory to eager young boys, there was no special need to do so.

All that changed with the approach of the jubilee. On April 19, 1824, the two volunteer military companies, the Concord Artillery and the Light Infantry, drilled on the common, enjoyed a public dinner, then marched to the battle site, where their host, the Reverend Ripley, delivered “an instructive address.” Five months later, the aging Marquis de Lafayette came to town, near the start of his year-long procession through the republic as “the nation’s Guest.” That gala occasion certainly burnished local pride; George Washington’s old comrade in arms was delighted to be “at the place where the first resistance was made to British invasion in 1775,” regretting only that he could not personally visit the exact spot. But the affair was also something of a public relations disaster. The official reception was held in a tent on the common, which had room only for town officials, the welcoming committee, a few veterans, and the ladies who served the cake and punch; everybody else had to glimpse the festivities from behind the ropes that cordoned off the tent and were patrolled by soldiers. In their eagerness to see the general, many inhabitants pressed against the barriers, the guards pushed back, and tempers rose. Some people began to complain aloud at the favoritism: “although they were not as well dressed nor as educated in society . . . as those within . . . their fathers had served the country, some had fought with Lafayette in the battles of the Revolution, and they were as grateful for his services.” Luckily, the town escaped a riot. Ten years later, resentment was still simmering as Concord prepared to celebrate its bicentennial. One resident dubbed the birthday party “Another Lafayette Celebration!” and vowed to boycott the event. “Well do I remember the insulting treatment I received when, among others, I attempted to look at Lafayette; we had to stand back then at the point of the bayonet, whilst the great folks sat and drank at our expense.”

 

Fig. 2. Central Part of Concord, Massachusetts, taken from Historical Collections by John Warner Barber, 1839. Courtesy of the American Antiquarian Society.

Despite such complaints, Concord’s leaders moved forward with plans for a large-scale commemoration of the April 19 jubilee. At the initiative of ten inhabitants, including the father of Henry David Thoreau, the town meeting voted in March 1825 to hold a public celebration of the “Concord Battle, in which the enemies of freedom were first met and forcibly repulsed by brave Americans.” This was to be more than a local affair. Six months earlier, the newly formed Bunker Hill Monument Association had launched a public campaign to raise money for erecting its proposed memorial to the Charlestown battle that broke British military power in Massachusetts. In a bid to win support from Concord, the association pledged part of its funds to build a smaller monument in the town “where the first conflict was had.” Not surprisingly, Concord seized on the proposal and joined its commemoration to the Bunker Hill scheme. Two events dominated the ceremonies on April 19, 1825: the laying of a cornerstone for the monument in the village center and the delivery of a formal address at the meetinghouse by Edward Everett, the Harvard professor who had parlayed his role as secretary of the Bunker Hill Monument Association into a successful candidacy for the Middlesex County seat in Congress at the recent November 1824 elections. With these decisions, Concord highlighted the agenda of the BHMA, whose conservative leaders, drawn from Boston’s elite, aspired to impose their Federalist vision of society on New England. Commemorating the past was a key instrument of that purpose; through public observance of such landmark events as Forefathers’ Day, when the Pilgrims supposedly stepped onto Plymouth Rock, and of the Concord Fight, the elite hoped to gather a deferential populace behind its leadership, in shared “patriotic feelings” on “sacred ground.”

That agenda certainly suited the leading figures in Concord, such as the lawyer Samuel Hoar, who was designated “president of the day” by the committee of arrangements for the April 19 celebration. Back in 1820-21, Hoar had represented Concord at the state constitutional convention, and he had worked closely with Daniel Webster and other Federalist leaders to preserve those twin pillars of the social order: tax support for ministers and churches and property qualifications for suffrage and office. His Concord colleague at that conclave was the lawyer John Keyes, a Republican who pushed for the expansion of voting rights but had no objections to the establishment of religion. The two men were archrivals, who, within a decade, would end up as fellow travelers in the Whig Party. On April 19, 1825, both were overshadowed by Everett, who, as it happened, had soundly defeated Keyes for his congressional seat. Keyes was reduced to offering a toast at the public dinner following Everett’s address. Hoar, who was inexplicably replaced at the last minute as president of the day, made no mark in the official records.

Everett’s two-hour address to a “crowded audience” enhanced his reputation for eloquence and won him standing equal to Daniel Webster’s as a leading orator of New England’s “Age of Commemoration.” He surely flattered his listeners, who included veterans of the fight wearing special badges of honor, by lifting the events of April 19, 1775, to the plane of universal history. “It was one of those great days, one of those elemental occasions in the world’s affairs, when the people arise, and act for themselves.” In this rehearsal of events, it was not the murderous advance of British troops on Lexington Common or the two-minute skirmish at Concord Bridge that seized Everett’s attention. The longest part of the narrative recounted the rallying of “the indignant yeomanry” in response to the Concord alarm: “unprepared husbandmen, without concert, discipline, or leaders,” drove the “picked men” of the British army back to Boston in defeat. With this theme, Everett deftly got himself out of a sticky situation. In the months leading up to the jubilee, spokesmen for Lexington and Concord had conducted a public feud over which town deserved credit for mounting the first resistance to the British assault and thus for starting the Revolutionary War. Concord mocked the sudden effort by Lexington to turn a “massacre” into a “battle.” Lexington replied by charging Concord, a bigger and richer town, with trying to steal the laurels from “the little village . . . that reared this Spartan band.” This petty quarrel, which bemused outsiders, would occupy the champions of both towns for decades. Everett sidestepped the controversy: when visitors ask where “the first battle of that great and glorious contest was fought,” we can “with honest complacency” direct them “to the plains of Lexington and Concord.” He showered his praise on the yeomanry who poured out from every Middlesex village and farm to vindicate the character of American freemen. Conveniently, those same Middlesex citizens had just elected him to Congress.

If Everett neglected to flatter Concord’s ego sufficiently, the Bunker Hill Monument Association managed to ruffle a good many feathers. Its pledge of financial aid for Concord’s monument came with two strings attached: first, the structure had to be a smaller scale version of the obelisk designed by Solomon Willard to ornament Bunker Hill–a provincial chip off the metropolitan block; second, it had to be located in the village center. Nobody, to my knowledge, objected to the style requirement, but the site provoked disagreement. In principle, there was a good case for the village; the British had, after all, spent more time in the center on their search-and-destroy mission than at the North Bridge. The proposed location would also be good for business, attracting visitors to nearby taverns and shops. These arguments proved persuasive; by an overwhelming margin, 65 to 25, the citizens endorsed the site by the town pump. And so, at the start of the April 19 celebration, the local Corinthian Lodge of Masons laid the cornerstone of the monument with “great solemnity . . . calculated to make a deep impression on the mind.” Beneath that “huge granite block, some four feet cube,” they buried a lead box containing various documents, including newspapers of the day and descriptions of the government of the United States and of Massachusetts, and a plate inscribed with an unambiguous statement of Concord’s priority in the Revolution: “Here on the 19th April 1775, began the war of that Revolution which gave Independence to America.”

Who could complain? Evidently, a fair number of inhabitants remained unreconciled. One morning in the winter of 1825-26 the villagers awakened to discover an unusual formation atop the cornerstone: a pile of tar barrels and boards, twenty feet high, raised in mockery of the site. “This monument is erected here,” explained the inscription, “to commemorate the battle which took place at the North Bridge.” The satirical display didn’t last long. The following night “some of the rowdy element,” aggressively defending village honor, set the sham monument on fire. It was a “great illumination,” one witness recalled years later. Unluckily for the assailants, their action proved self-defeating. The cornerstone was ruined. No shaft ever rose above the base.

Nobody took credit for the mock monument or its destruction. Neither did the wits elaborate on their joke; for them, the absurdity of the village site was self-evident. How could anybody think to place a monument in the center, the very scene of the successful British occupation, rather than at the North Bridge, where the Concord and Acton militiamen had been the first to oppose that aggression? All those polemics on Concord’s behalf had done their work. But something greater was at stake than mere local pride. Time and again, commemorative speakers called attention not just to the armed resistance at the North Bridge but to the shedding of British and American blood. One toast at the celebratory banquet hailed “The town of Concord–Consecrated by the blood of the first martyrs to American liberty.” Another, given by the representative of Lexington, tactfully paid tribute to “The Genius of Liberty,” who “rose from the blood-stained field of Lexington, and waved her celestial banner over the land, the chains of tyranny were broken asunder, the nation was disenthraled.” It was, indeed, to preserve the memory of those sacrifices that Edward Everett dedicated his ceremonial address: “Above all, their blood calls to us from the soil which we tread; it beats in our veins; it cries to us . . . ‘My sons, forget not your fathers.’” In short, through the spilling of blood, the “embattled farmers” consecrated the ground, and only on the site of their martyrdom should a monument be raised. The farmers’ fields, bordering the ruins of the bridge, were sacred space.

II.

A decade later, by the mid-1830s, with over two thousand inhabitants, Concord was probably at its political and economic pinnacle. The central village hosted some nine stores, forty shops, four hotels and taverns, four doctors and four lawyers, a variety of county associations, a printing office and a post office. Manufacturing was humming, too, with a growing mill village in the west part of town, along the quick-running Assabet River, and rising producers of carriages and chaises, boots and shoes, bricks, guns, bellows, and pencils. But a good many people were left out of the prosperity. In what was still a farming town, 64 percent of adult males were landless, while the top tenth of taxpayers, some fifty men, controlled nearly half the wealth. Those who failed to obtain a stake in society, native and newcomer alike, quickly moved on. The ties that once joined neighbors together were fraying. On the farms, the old work customs–the huskings, roof-raisings, and apple bees–by which people cooperated to complete essential chores gave way to modern capitalist arrangements. When men needed help, they hired it, and paid the going rate, which no longer included the traditional ration of grog. With a new zeal for temperance, employers abandoned the custom of drinking with workers in what had been a ritual display of camaraderie. There was no point in pretending to common bonds.

With the loosening of familiar obligations came unprecedented opportunities for personal autonomy and voluntary choice. Massachusetts inaugurated a new era of religious pluralism in 1834, ending two centuries of mandatory support for local churches. Even in Concord, a slim majority approved the change, and as soon as it became law, townspeople deserted the two existing churches–the Unitarian flock of the Reverend Ripley and an orthodox Calvinist congregation started in 1826–in droves. The Sabbath no longer brought all ranks and orders together in obligatory devotion to the Word of God. Instead, townspeople gathered in an expanding array of voluntary associations–libraries, lyceums, charitable and missionary groups, Masonic lodges, antislavery and temperance societies, among others–to promote diverse projects for the common good. The privileged classes, particularly the village elite, were remarkably active in these campaigns. But even as they pulled back from customary roles and withdrew into private associations, they continued to exercise public power. Such pretensions were guaranteed to ignite political conflict.

The explosion came in the form of Anti-Masonry, which swept through Concord from 1833 to 1835 with as much intensity as it had in the “burned-over district” of New York state, where the movement got its start. It was propelled by the conviction that Freemasonry, once associated with Revolutionary heroes George Washington and Benjamin Franklin, posed an imminent threat to the republic. Bound together by secret oaths, conducting business behind closed doors, allegedly promoting one another’s interests through command over the levers of power, the Masons epitomized the contradictions of the emerging social order. In the link between private loyalties and public influence, opponents detected “an engine of conspiracy for any evil or selfish purpose.” Concord’s Masons were acutely vulnerable. They had taken a special role in the jubilee celebration. Their members occupied every level of power, from state senator John Keyes to the captain of the Concord Artillery to the editor of the local newspaper, who experienced a sudden change of heart in 1833 and defected to the enemy, converting his press into an organ of Anti-Masonry. The most prominent target was the Reverend Ripley, a Mason of thirty-five years’ standing and Grand Chaplain of the Most Worshipful Grand Lodge of Massachusetts.

In this atmosphere of conflict, the eighty-three-year-old Ripley turned to history as a means of reuniting the distracted town. In 1834, he proposed to donate the land behind the Old Manse for a monument to commemorate “the Great Events at Concord North Bridge on the 19th of April 1775.” Immediately, a few critics arose to denounce the scheme: Why should a monument be located in “the backside of Dr. Ripley’s house?” But the town snapped up the offer, in part because it cost the inhabitants nothing. The land was free; the costs of upkeep were paid by private donors; the fund set up in 1825 financed construction. All the town had to do was authorize a change of venue. By this reliance upon private money to facilitate public ends, Ripley and his allies cleverly removed the issue from democratic give-and-take.

At the same time, the parson’s offer was intended to forge a new basis for civic unity, as he made clear in a lecture to the Concord Lyceum on April 19, 1837. Taking stock of the “agitated and unsettled state of society,” Ripley reminded his listeners that “a well-regulated town or parish” is like “a swarm of bees, clinging together in one body, mutually sustaining and depending upon one another . . . If those in the centre let go their hold, the whole body fails; and if the surrounding multitude fly off, the whole swarm is broken up.” In what is a familiar theme today, the patriarch who had presided over Concord for six decades bewailed the loss of community. Neighbors used to know one another, share mutual interests, respect others’ views. Now, with so little in common, they exaggerated “differences in opinion, on religion and politics” and polarized the community. Ripley’s gift was designed to heal those rifts. It would pull Concord together in common reverence for the Revolution. It would highlight the blessings of Providence. It would crystallize a new civic identity and consecrate a sacred landscape.

 

Fig. 3. Letterhead taken from the invitation for the Grand Centennial Military and Civic Ball on April 19, 1875. Courtesy of the American Antiquarian Society.

As the aging parson was creating a lasting legacy, his step-grandson Ralph Waldo Emerson was on the threshold of the distinguished career as writer and lecturer that won him enduring fame as “the Sage of Concord.” The latest in a long line of New England clergy, Emerson had abandoned the pulpit in 1832 following the death of his first wife, traveled to Europe and Britain on a journey of self-discovery, and returned to write a little manifesto of his new vision, entitled Nature, while enjoying Ezra Ripley’s hospitality in the Old Manse. Like his grandfather, the erstwhile minister was troubled by the changes unsettling New England, especially the rising conflict between social classes and the unabashed pursuit of self-interest he had witnessed in his hometown of Boston. Sadly, he lamented in 1829, that was “a community composed of a thousand different interests, a thousand societies filled with competition in the arts, in trade, in politics, in private life” and united by no “common good.” Emerson’s solution for disharmony would ultimately take him far from Ripley’s social ethic. Rather than rely on elite leaders and established institutions, he discovered in nature the means to reconcile individual and society.

Out of this personal illumination Emerson forged a radical doctrine of self-trust that earned him a growing following among educated young people and angry denunciations from onetime colleagues in the Unitarian clergy. In the eyes of critics, the respectable renegade from the ministry was a dangerous disturber of social order. But that opinion did not hold in Concord. There Emerson was readily admitted into the village elite following his second marriage, to Lydia Jackson in 1835, and his purchase of a handsome house near the town center. In short order, he was elected to membership in the exclusive Social Circle, an organization of the town’s leading men. Emerson was apparently untroubled by charges that the group was a self-styled “aristocracy.” Compared to the great inequalities and social distances of Boston, Concord was a haven of small-town sociability. “Much of the best society I have ever known,” he told a friend in Boston in 1844, “is a club in Concord called the Social Circle, consisting always of twenty-five of our citizens, doctor, lawyer, farmer, trader, miller, mechanic, etc., solidest of men, who yield the solidest of gossip.”

In this benign mood, Emerson delivered the formal address for Concord’s bicentennial in 1835 and composed the hymn for the dedication of the monument at the bridge site on July 4, 1837. Nothing he said would have bothered Edward Everett in the least. The story of Concord, he declared in his ceremonial discourse, is the story of liberty. At their first settlement in the wilderness, the Puritan founders of the town established government and society upon an ideal plan. “The nature of man and his condition in the world, for the first time, . . . controlled the formation of the State.” For all his vaunted nonconformity, Emerson was as attached as his neighbors to the conventional wisdom regarding Concord’s decisive part in the events of April 19: the clash at the North Bridge was “the first organized resistance . . . to British arms.” Turning to the handful of veterans of that memorable day who were sitting in his audience, the thirty-two-year-old orator offered up an encomium that could have come from Webster or Everett: “If ever men in arms had a spotless cause, you had.”

Emerson never swerved from this serene prospect on the local past, which he rendered for posterity in the elegiac lines of the famous “Concord Hymn.” In fact, over the succeeding decades, as he developed into a leading critic of New England society and a powerful advocate of antislavery, he avoided the subject altogether. Having launched his literary career by bewailing the filiopietism of his contemporaries, he fastened his attention on “the signs of the times,” in hopes of discovering the transcendent meaning of passing events. This evasion of history is striking, for in the 1840s and 1850s, at the high tide of the crusade against slavery, Concord was astonishingly attentive to its heritage. Alarmed by the disarray of its records, gathering dust in the possession of the town clerk, Concord spent a remarkable seven hundred dollars to put its archives in order and installed a fireproof safe in its new Town Hall, built in 1852, for their protection. Its leaders, notably Samuel Hoar, played a leading part in winning passage of a state law in 1851 “for the Better Preservation of Municipal and Other Records.” Doubtless, Emerson observed and approved these initiatives, but they made little impact on his prose. Even when Emerson thundered at the knavery and the cowardice of Massachusetts’s leaders in the face of an aggressive slave power, he seldom contrasted them with the legendary figures of the Revolution. Instead, he derided the patriotic speeches gotten up for “the nineteenth of April” and the Fourth of July as “a great deal of nonsense” belied by New Englanders’ support for the Fugitive Slave Law. There was once a time, he observed in an 1855 “Lecture on Slavery,” when America’s leaders were its “foremost” men: “Washington, Adams, Jefferson, really embodied the ideas of Americans. But now we put obscure persons into the chairs, without character or representative force of any kind.” More often, he urged listeners to take action for themselves: “You must be citadels and warriors, yourselves Declarations of Independence.”

It was left to Thoreau, “the man of Concord,” as Emerson called him, to quarrel strenuously with his neighbors’ version of the past. Though he is famous for his blithe dismissal of his elders, Thoreau was actually remarkably attentive to local history. One of the wittiest sections of Walden is his mock-heroic account of the battle of the ants, whose combatants far outstripped the minutemen in “patriotism and heroism.” “For numbers and for carnage it was an Austerlitz or Dresden. Concord Fight! Two killed on the patriots’ side, and Luther Blanchard wounded! Why here every ant was a Buttrick.” But in the struggle against slavery, Concord’s Revolutionary heritage was no laughing matter. Though many inhabitants, especially women, were quick to enlist in the abolitionist movement–Thoreau’s mother and aunts and Emerson’s wife rallied early to William Lloyd Garrison’s cause–and though prominent politicians, such as Samuel Hoar and his son Ebenezer Rockwood Hoar, played key roles in the founding of the Free Soil and Republican Parties, the local elite contained a fair number of entrenched Old Whigs, who put “cotton” over “conscience.” (Rockwood Hoar coined those very terms.) In 1850, for the seventy-fifth anniversary of the Concord Fight, the town staged a great “Union” celebration, at a time of national crisis over slavery. The first choice for speaker was Senator Daniel Webster, who declined, citing his immersion in the desperate effort to find a national “compromise.” That was fortunate for Concord; after March 7, when the great orator endorsed the Fugitive Slave Law, Webster was execrated by many of his one-time worshipers. Emerson pronounced the judgment on Webster: “The fairest American fame ends in the filthy law.” The eventual speaker was Robert Rantoul Jr., an antislavery Democrat who would briefly succeed Webster in the Senate. On April 19, 1850, Rantoul was discreet. Not until the final sentence of his address, in the course of which he celebrated “the site of the old North Bridge” as “the pivot on which the history of the world turns,” did the speaker breathe a hint of the issue that was on everybody’s minds. Charging his listeners to safeguard “the beacon-fire of liberty whose flames our fathers kindled,” Rantoul invoked those in dire need of its “refulgent” light, including “the wanderers in the chill darkness of slavery, [whom] it guides, and cheers, and warms . . . ” In Emerson’s view, this was a paltry performance, noted only for its “wearisomeness” and “painfulness.” Thoreau ignored it altogether.

What Thoreau did not overlook was his neighbors’ reluctance to put their antislavery sentiments into action. In 1854, as the Fugitive Slave Law continued to be enforced in Massachusetts, he derided popular preoccupation with the fate of Kansas and Nebraska and indifference to oppression at home. “The inhabitants of Concord are not prepared to stand by one of their own bridges, but talk only of taking up a position on the highlands beyond the Yellowstone river. Our Buttricks, and Davises, and Hosmers are retreating thither, and I fear that they will have no Lexington Common between them and the enemy.” Rantoul’s “beacon-fire of liberty” was fast dimming out. Fortunately, in Thoreau’s view, it was rekindled by that revolutionary from out of the West, John Brown. In the simple grandeur of Brown, Thoreau found a way to reclaim the New England heritage. The man possessed the indomitable spirit of a Puritan soldier in Cromwell’s army. “He was like the best of those who stood at Concord Bridge once, on Lexington Common and on Bunker Hill, only he was firmer and higher principled than any that I have chanced to hear of as there.” Best of all, educated not at Harvard but “at the great university of the West, where he sedulously pursued the study of Liberty,” Brown devoted his entire self to a noble ideal. In the highest praise he could offer, Thoreau branded his hero “a transcendentalist above all, a man of ideas and principles,–that was what distinguished him. Not yielding to a whim or transient impulse, but carrying out the purpose of a life.”

Thoreau’s forceful rhetoric had an unintended effect. By embodying the New England heritage in a living individual, he meant to inspire others to heroic action. But conflating Puritans, minutemen, and Transcendentalists could foster complacency. New Englanders might consider themselves the nation’s conscience, even when they merely cultivated lofty thoughts in their gardens. By such literary means, the Concord philosophers were domesticated to their town and region. In 1853, the writer George William Curtis, who had resided in Concord for several years following a brief sojourn at Brook Farm, sketched the town of Emerson and Hawthorne in a volume aimed at literary tourists, entitled Homes of American Authors. Curtis conjured up Concord from Emerson’s and Hawthorne’s own texts. Emerson expressed the spirit of the place. “The imagination of the man who roams the solitary pastures of Concord, or floats, dreaming, down its river, will easily see its landscape upon Emerson’s pages.” Hawthorne evoked its legends in Mosses from an Old Manse. (Thoreau, who had not yet published Walden, received no mention.) In Curtis’s telling, Concord enjoyed a happy life as a writer’s retreat. Untainted by industry and trade, populated by plowmen and poets, associated with a fabulous past and eternal nature, the town belonged to the realm of the pastoral: a place apart from its own time, where an urban visitor might gain respite from the pressures of modern life. In Curtis’s pages, Transcendentalism and tourism merged. A trip to Concord was a spiritual experience.

That new identity took hold, in part because it refracted an undeniable reality. With the coming of the railroad in 1844 and the waning of the village as a vital economic and political center, Concord underwent a transformation from town into suburb. Though it continued to support numerous dairy farms and market gardens geared to demands from Boston, and its textile mill held on till the 1890s, an increasing number of residents began commuting regularly to jobs in the city. Many fewer people came to Concord for business. The regular stages stopped running; teamsters no longer carried country produce to local stores; eventually, the county courts decamped for the industrial city of Lowell. Devoid of its former liveliness, the village struck one short-term resident, the ex-urbanite Harriet Hanson Robinson, as something of a ghost town: “It is a dull place,” Robinson complained. “It is a narrow old place. It is a set old place. It is a snobbish old place . . . It is full of graveyards, and winters are endless. The women never go out, and the streets are full of stagnation.”

 

Fig. 4. Old North Bridge, Concord, Massachusetts, photograph taken by E.M. Perry, 1898. Courtesy of the American Antiquarian Society.

In such a placid setting, it is easy to see how Concord, with its rich heritage, attractive landscape, and literary associations, could become a retreat from the wider world. Local inhabitants were soon publishing tourist guides, which proliferated in the wake of Louisa May Alcott’s great success with Little Women and its successors and after Walden became a pilgrims’ Mecca. As early as 1862, a short-lived magazine entitled The Monitor was half-facetiously suggesting that visitors would be better off skipping the annual April 19 ceremonies and spending their time in the woods, where they might run into a local philosopher. “Leave business behind . . . Money, too, for there is nothing here that money will buy. Fashion as well, for it, alone, does not pass current here. Do not despise anyone you may meet in the woods, or up the river on account of their clothing.”

But nature did not displace history, nor did tourism eliminate activism. Little more than a year before that Monitor article, on April 19, 1861, a new generation of young men joined their military companies on the town common to answer Lincoln’s call for troops; six years later on that date, Concord raised its Soldiers’ Monument on the site. The nineteenth of April would continue to accrue meanings over the years, as its message of liberty and community was reinterpreted for new generations. In the Gilded Age, as Anglo-Saxon nativism surged in the face of mass immigration from southern and eastern Europe, it was often an occasion for narrow, ancestral pride. But the minutemen could also inspire a larger vision of freedom. On the very first Patriots’ Day in 1894, Rockwood Hoar, the former attorney general of the United States–who had watched the 1825 celebration as a schoolboy, served as president of the 1850 commemoration, and hosted President Grant at the centennial–spurned the parochialism and prejudice that had come to surround the anniversary. Son of the man who had touched off the feud with Lexington back in 1824, Hoar firmly declared that April 19 belonged to no single town. “It was Massachusetts up in arms that day . . . Whatever was done, Massachusetts did it.” But state pride was no better than town pride, if it expressed a bigoted spirit. In a bold challenge to his own class, Hoar turned to the representatives of the Sons of the American Revolution, who were sitting in the audience, and made a “modest suggestion”: shouldn’t the group end its restriction of membership to blood descendants of Revolutionary War soldiers? “The title to public consideration or leadership in public affairs by reason of descent, is not an American idea.” Surely, “every citizen of the Commonwealth who prefers honor and public service to selfishness and ease, who loves liberty, and will resist tyranny without counting the personal cost, wherever he was born and of whatever lineage . . . should have a right to call himself, and is a son of the American Revolution.”

That notion has enjoyed wide appeal in American culture. Daniel Chester French’s statue of the minuteman at the bridge–the patriotic farmer with a plow under one hand and a musket in the other–served as a popular emblem of the American fighting man in World War II. During the Cold War, “Minutemen” missiles stood guard against Soviet attack. But in recent years, the minuteman has become a favorite of the right wing. Participants in the militia movement of the 1990s seized upon the designation “minutemen” for their extralegal companies of weekend soldiers preparing to fend off an invasive federal government, deemed as dangerous to liberty as ever was the British Empire under George III. By coincidence, it was on April 19, 1993, that federal agents launched their catastrophic raid on the Branch Davidian compound in Waco, Texas, and confirmed the extremists’ worst fears. Alas, to avenge that attack, Timothy McVeigh chose April 19, 1995, to bomb the Alfred P. Murrah Federal Building in Oklahoma City. In the wake of his attack, the once “memorable” nineteenth of April now stands not only for the birth of independence but also for the worst episode of domestic terrorism in American history. Attorney General Hoar, who cared passionately about the rule of law, would have been shocked by the new connotations of an event he celebrated as a signal moment in the history of freedom.

To reclaim the day from the paramilitary Right requires more than the patriotic cant of those holiday orations Emerson and Thoreau despised. It calls both for history and for memory, in a continuing interplay between the urge to recapture the past in all its complexity and the impulse to appropriate it for the political and ideological ends of later times. That is a difficult balancing act, but without its discipline, the minutemen are in danger of becoming a symbol for any and every group purporting to be fighting in liberty’s defense. But we can find inspiration in that effort by pausing to reflect on Concord’s ongoing redefinition of itself.

Further Reading: For more on American jubilee celebrations, see Andrew Burstein, America’s Jubilee (New York, 2001). For Thoreau on the battle of the ants, see Walter Harding, The Days of Henry Thoreau: A Biography, rev. ed. (New York, 1982), 66; Henry D. Thoreau, Walden, ed. J. Lyndon Shanley (Princeton, 1971), 9, 230; John McWilliams, “Lexington, Concord, and the ‘Hinge of the Future,’” American Literary History 5 (Spring 1993): 1-29. See also Robert A. Gross, The Minutemen and Their World (New York, 1976) and “The Celestial Village: Transcendentalism and Tourism in Concord,” in Charles Capper and Conrad Edick Wright, eds., Transient and Permanent: The Transcendentalist Movement and Its Contexts (Boston, 1999); and Harlow W. Sheidley, Sectional Nationalism: Massachusetts Conservative Leaders and the Transformation of America, 1815-1836 (Boston, 1998).

 

This article originally appeared in issue 4.1 (October, 2003).


Robert A. Gross is James L. and Shirley A. Draper Professor of Early American History at the University of Connecticut in Storrs. He is the author of The Minutemen and Their World, 25th anniversary ed. (New York, 2001) and The Transcendentalists and Their World (forthcoming). In 2002-03 he was Mellon Distinguished Scholar in Residence at the American Antiquarian Society, where this essay was first presented.




Truth or Dare: On history and fiction

Suzanne Lebsock

On a sticky June evening in 1895 a fifty-six-year-old white woman named Lucy Jane Pollard was found murdered in her farmyard in Lunenburg County, Virginia. She had been bludgeoned to death with a meat ax. Almost $900 was missing from the house.

Despite Lunenburg’s remoteness from Richmond, the Pollard murder riveted reporters there. “A recital of its thrilling chapters,” one of them wrote, “sounds more like fiction than like reality.”

Distinctions between fiction and reality–or, with a century gone by, fiction and history–were much on my mind in 1995 when I began to write about the Lunenburg case. That fall I had the good fortune to visit a University of Virginia seminar, where I stated my intention to write in a fiction-like form, paying attention to dimensions like plot and pace. I planned to take advantage of the suspense inherent in the story, and to use some other conventions of detective novels. I would drop in clues calculated to lead readers to suspect first one possible perpetrator and then another. But I would not make things up: not make up events or change their order; not put thoughts in people’s heads or words in their mouths; not invent locations, objects, or the weather.

Some of the students surprised me by asking, Why not?

I wish I had asked them what lay beneath the question. Did the students approve of inventions, provided they were essentially authentic to the time and place? Or perhaps the students thought the opposite, that we should abandon the very concept of authenticity. History, fiction, it’s all a construction anyway, so fictionalize all you want. As my teenagers would say, what-ev-er.

Whatever, indeed: I couldn’t answer the question. In retrospect, my decision to write only what could be documented was made by 1992 or so, and it came in part from my despair at the increasing audacity of the “spin” emanating from the Reagan, and then first Bush White House. In my own corner of knowledge production I would stick with history.

The decision came, too, from my sense that what happened in Lunenburg was more interesting than anything I might make up. The alarm was sounded by Edward Pollard, Lucy’s seventy-two-year-old husband. He wept over his wife’s body, though some neighbors wondered whether he was crying over Lucy (wife number three) or the stolen money.

After four days, a posse arrested Solomon Marable, a young mulatto sawmill hand who had been spotted spending suspect twenty-dollar bills. Marable confessed his presence on the Pollard place at the time of the killing. But he claimed that the robbery and murder had been committed by women.

 

Mary Abernathy with nine-month-old Bessie Mitchell Abernathy. Photograph from the Richmond Planet, October 17, 1896. Courtesy of the Library of Virginia.

He named them: Mary Abernathy, Mary Barnes, and her grown daughter Pokey Barnes, black women all, and neighbors of the Pollards. They were swiftly put on trial, as was Marable himself. In separate trials that took a total of nine days, all four were convicted. Mary Barnes was convicted as an accessory and sentenced to ten years in prison. Solomon Marable, Mary Abernathy, and Pokey Barnes were found guilty of first-degree murder and condemned to death. The three were then hustled up to Richmond to be protected from mob violence until September, when they were scheduled to hang.

“And All the people say, ‘Amen.’” So wrote a Lunenburg diarist after each of the juries reached its verdict. Other observers had their doubts. None of the suspects had had counsel. None could read or write, and no physical evidence connected any of the women to the crime. The questions intensified four days after the final trial, when Solomon Marable told an electrifying new story. Marable now swore to reporters that the killer was a white man. The women were not “in it” at all.

In the weeks that followed, all sorts of people enlisted on one side or the other, the case attracting hundreds of regular folks along with an intriguing assortment of oddballs, saints, and scumbags. While the condemned women rapidly gained advocates–many of them black, some white–the county government of Lunenburg organized a campaign to try to make the convictions stick. They battled for eighteen months. At several junctures the threat of lynching was acute.

With history like this, I thought, who needs fiction?

That was before I tried to write scenes for which the evidence was thin. Novels, like movies, typically move from scene to specific scene. Characters speak, act, and use objects–props, as the novelist Charles Johnson would say–in very particular surroundings. Some episodes in the Lunenburg story were amply documented, but for others, although they were critical to the outcome, I was able to find only the sketchiest records. I drafted accounts of those incidents anyway, sitting with the sources a long time, trying to think of ways to recruit whatever telling details they offered. A few of these scenes worked; many did not, and with much advice from editors, I cut them later.

It would have been much speedier to make things up. But the Lunenburg story has a moral, and I believed that falling back on fiction could only sandbag that aspect of its value. Consider the known history of race relations in the late-nineteenth-century South. In the 1890s legislators moved to segregate by law almost everything that was not already segregated by custom. African-American men, who had voted in significant numbers since the close of the Civil War, were systematically disfranchised, the methods ranging from constitutional amendments to assassination. Lynching reached its all-time high.

Hence my surprise at finding that in the Lunenburg case, white men in several instances ran enormous risks to uphold the rule of law. The first such episode occurred the day Solomon Marable was caught. Captured in the next county, Marable was returned to the Pollard farm, taken into the house by local authorities, and subjected to a lengthy preliminary hearing. Mary Barnes and Pokey Barnes had been arrested as well. (Mary Abernathy was still at large.) They, too, were held in the Pollard house and, like Marable, given preliminary hearings.

In the front yard a crowd of several hundred gathered. They carried guns and ropes, and as darkness fell their mood turned ugly. The men conducting the hearings inside hatched a plan. First the constable stepped out to the front porch and launched into an oration. While he monopolized the attention of the crowd, the men inside the house motioned the suspects to the back door. On signal, they dashed across the back yard and down into a wooded ravine.

They hiked in silence all night–Marable, Mary Barnes, Pokey Barnes, and a dozen white farmers–staying in the woods, pausing breathlessly whenever they thought they heard voices or horses. Near dawn they emerged at Lunenburg Courthouse, sixteen miles from the Pollard farm, where the prisoners would be safe–but safe only until sundown. That afternoon the deputy sheriff ordered them into the bed of a horse-drawn wagon, the group by this time including a pregnant Mary Abernathy. Their destination was Petersburg, sixty miles away, the distance and the town’s more secure jail meant to protect the suspects while they awaited trial. Their escorts were again white men, armed and anxious.

Time and again, people in the Lunenburg story acted courageously and against stereotype. Placed on trial without attorneys (the accused women could not pay and no one volunteered), Pokey Barnes and the two Marys simply had to do the best they could. Pokey’s best was astounding. When the prosecution put Solomon on the witness stand, Pokey noticed he had changed a portion of his story. She rose to cross-examine him. He tried to duck her questions. Pokey persevered. “Did you or did you not tell the jailer at Petersburg that you saw me on Friday on the road near Fort Mitchell?” she asked. “I don’t recollect,” Solomon said. “Then you tell this jury whether you, when you kissed that Bible to tell the truth, told the grand jury and jailer a lie, or are you telling the lie now?” “What I said the first time was false,” Marable admitted. “I’m telling the truth now.” Pokey was brilliant. She had made her accuser confess to perjury.

Does it seem plausible that an illiterate, twenty-three-year-old washer and ironer could stand up in a packed courtroom and conduct herself like a seasoned trial attorney? Nope, but it happened in Lunenburg, as did many other transcendent events. What a waste, should they be dismissed as fiction.

Would it not be an option to invent some small details: the color of Pokey’s dress, for example, or the tilt of her hat? My sense is that if historians want to be believed on the big things, we should exercise care on the little ones. Had I concocted something unusual about the courthouse itself–say, the unique outdoor staircase that led to the courtroom on the second floor–why would a reader buy my account of Pokey Barnes’s performance in that same courtroom? I might have attempted to add drama by matching the weather to the mood, tossing in the occasional dark and stormy night, for instance. But then why should readers believe that men whose class and color qualified them as “rednecks” risked their lives to prevent their black neighbors from being lynched?

Retrospectively I have come around to the thought–news to me as a historian who privileges analysis, but probably not to crime writers–that in murder cases, both actual and fictional, detail has a special standing. Even at the nineteenth century’s end, jurors rarely had forensic science to draw upon, and so in countless cases, the decisive evidence was circumstantial. The amateur detectives who sleuthed about Lunenburg raised dozens of questions. What was Edward Pollard accused of just hours before his wife was murdered? (His stepson called him “a hog-stealer and a land-stealer and a thief in every degree.”) Exactly how many minutes did it take an experienced seamstress to sew the buttons on the fly of a man’s trousers? An entire case could ride on one or two such questions, spelling life or death for the accused.

One last time, why not fictionalize? In time I was able to articulate an answer. A fiction-like form gives this story its entertainment value. But it is the truth that gives it power.

As I wrap up this essay, it is a dark and stormy morning. Really.

 

This article originally appeared in issue 5.1 (October, 2004).


Common-place asks Suzanne Lebsock, Board of Governors Professor of History at Rutgers University and author of A Murder in Virginia: Southern Justice on Trial (New York, 2003), winner of the 2004 Francis Parkman Prize, what’s at stake in writing history that is meant to read like fiction.




9/11 and Acoma Pueblo: Homeland security in Indian Country

View from the top of the mesa, Acoma. Photograph courtesy of Mark Penzel.

In late December 2001, my husband Mark, kids Lily, Max, and Sam, and I were at the end of a tour of Acoma Pueblo, America’s oldest continuously inhabited village. About sixty miles west of Albuquerque, New Mexico, Acoma is a nearly thousand-year-old pueblo sitting atop a 367-foot-high mesa, famous for its sixteenth-century run-in with Spanish governor Juan de Oñate, its massive, seventeenth-century adobe mission church, and its contemporary potters. Eating sizzling hot frybread sprinkled with cinnamon and sugar, we huddled together amidst the dwellings in the sharp winter sun, gazing out at Mount Taylor and some of the most dramatic landscape in the American West.

Mouth full, I pivoted just slightly and noticed an American flag poster in the window of a nearby house. “Look at that,” I said to the kids, pointing. Instantly, all eyes settled on the decal. Chewing slowed noticeably. “Huh?” said Sam, steam escaping from between his lips. “What the . . .” said Max, hopping from foot to foot. “Interesting,” Lily said, raising an eyebrow.

 

U.S. flag in a home window, Acoma. Photograph courtesy of Mark Penzel.


My kids have been traveling from the Northeast to New Mexico to visit family since they were nine months old. They laugh when Boston friends ask if they need passports to visit Mark’s parents. “New Mexico!” they reply. “It’s a state, not a foreign country!” The kids are “good tourists.” They have come to know, after years of attending rodeos and powwows, of stopping at Taos Pueblo to see Christmas bonfires, of examining the smorgasbord of jewelry spread out along the walk in front of the Governor’s Palace in Santa Fe, that Indians are many things: smart and smart-alecky, proud, and more often than not getting the short end of the stick. They do not associate Indians with flag-waving American patriotism.

Which is why we had snickered the previous evening when we had seen a particular T-shirt at a Los Lobos concert. Bracketing a silk-screened reproduction of a nineteenth-century photograph of Geronimo, armed, alongside three Apache warriors, the writing on the shirt read “Homeland Security . . . Fighting Terrorism Since 1492.” Wildly popular in Indian Country ever since U.S. troops invaded Iraq, the shirt was only beginning to make the rounds in December 2001. Even then, we got the joke. Seen from Indian Country, the folks at the Department of Homeland Security are the hypocritical descendants of terrorists, themselves.

Thinking about the T-shirt and seeing that flag poster up at Acoma, I wondered what Indians were saying about 9/11. That question stuck with me. A few conversations and emails later, I have learned that, like many other minorities in America, the Indians I spoke to are struggling to negotiate multiple identities that leave them to work out their relationships with patriotism and oppression. I have also learned that there is something uniquely Indian in the quality of this struggle, something that other groups, no matter how disenchanted or disenfranchised, cannot share.

 

A T-shirt with a message: Geronimo and Chiricahua Apache warriors. Courtesy of Matthew Tafoya and www.nativesovereigntees.com.

It is hard to understand how Indians can simultaneously fly flags, said Robert Holden, Choctaw, and view the federal government as an occupying, terrorist agency. But that is just the way it is. “This is still our homeland,” said Holden, a specialist in radioactive waste disposal on Native land for the National Congress of American Indians in Washington, D.C. To illustrate Indians’ position, Holden reminded me that during World War II the Iroquois confederacy, seeing itself as a sovereign nation, declared war on Germany and Japan. Nowadays, even when they know that the U.S. government has contaminated their lands, “Indian people still go and fight for this country.” The National Congress of American Indians does not have figures yet for how many Native peoples are fighting in Iraq. It estimates that eight thousand Indians fought in World War I, twenty-five thousand fought in World War II, and forty-three thousand fought in Vietnam. Maybe the hard part for non-Indians to understand, Holden said, is that Indians do not entirely see the homeland they are defending as either American or Indian. “We are going to stand with our allies and protect our homeland.”

Matthew K. Tafoya, Navajo, who designed the original homeland security T-shirt and marketed it through his Albuquerque company, Tribal Sovereign Tees, is far more blunt. To Tafoya, Indians who fly American flags are “brainwashed” and “not thinking for themselves.” Indians do not join the U.S. military, Tafoya said, because they are flag-waving patriots. With unemployment on Indian reservations hovering between 60 and 70 percent, Tafoya said, “the military is the only sure way to get a paycheck.”

Tafoya came up with the design and slogan for his homeland security T-shirt a few weeks after terrorists flew jets into the Twin Towers. He recalls thinking, “That’s right. Now they know how it feels.” Tafoya said that the shirt has been extremely popular with Indian veterans of the wars in Korea, Vietnam, and the Persian Gulf, who–ironically–show up at his booth at flea markets wearing worn-out, government-issue combat fatigues. He suspects that when Indian vets see his shirt, they are thinking, “We’re completely screwed over by the government, and we’re also lucky to be alive.” 

“Traditional culture can promote entry to the U.S. military as an extension of the ‘warrior tradition,’” wrote Ben Winton, editor and publisher of The Native Press, which also markets a homeland security T-shirt. In an email responding to my questions about Indians, patriotism, 9/11, and military service, Winton wrote that young Indians “are protecting their families and their traditional homeland (what little of it remains under tribal control, anyway).” He mentioned the Navajo Code Talkers of World War II as a group that wanted to protect Indian Country and U.S. soil. “Assimilation and acculturation allow for many people to feel a sense of dual identity/citizenship,” Winton wrote. “They feel both proud as an ‘Indian person’ and proud as an ‘American’.”

Winton suspects it became easier for Indians to feel a dual sense of pride and a more conventional type of patriotism after the late 1970s. By then, he explained, younger Indians were not consumed by animosity associated with the “Termination Era” of the 1950s, when “the U.S. government bused thousands of Indian people off the reservations into the cities with the promises of ‘a better life’” while “seeking to terminate their legal status as sovereign domestic nations within the borders of America.” Some “assimilated into the larger society,” and their children, he concluded, may have lost some of their parents’ disappointment and bitterness–as well as their activism.

Up at Acoma, award-winning traditional potter Norma Jean Ortiz has been negotiating multiple identities all her life. With a white father and a Pueblo mother, Ortiz grew up at Acoma getting ribbed by her peers for not being “Indian enough,” even as her grandmother taught her to grind ancient potsherds into newly mined clay to strengthen the walls of her pots. Ortiz was selling her wares, which she had laid out on a cloth-covered folding table, when Mark, the kids, and I had finished our frybread. We were studying the tiny dwellings, none of which has electricity or running water, some of which still use thin sheets of mica instead of glass for windowpanes. We were also supervising the kids, to whom we had given pocket money to buy themselves each a souvenir. They stopped to examine Ortiz’s work, including potsherd magnets, mugs, inexpensive animal figurines, and fine, gourd-shaped pots.

 

Norma Jean Ortiz and her wares, Acoma. Photograph courtesy of Mark Penzel.

Ortiz talked openly, if elliptically, about why she had chosen to fly a small American flag from her family’s ancient home. She put up her flag, she said, in response to the attacks on the World Trade Center. What did she think about what had happened September 11? She shook her head. Unable to identify with unmuddied political or ethnic allegiances, Ortiz’s least common denominator was empathy. “I feel real bad,” she said. “All those people. I think about them a lot. I know they’re out there, though.”

I bought a potsherd magnet. Lily and Sam each chose small animal figurines. Max found a mug with his name on it. Pottery wrapped, ready to begin our trip back to Boston, we began our climb down the mesa. 

For examples of Norma Jean Ortiz’s pottery, visit akisofthesouthwest.com or pablos4corners.com.

 

This article originally appeared in issue 5.1 (October, 2004).


A nonresident fellow at the Charles Warren Center at Harvard University, Cathy Corman lives in Brookline, Massachusetts, with her husband and eleven-year-old triplets and is completing two book manuscripts, one about Indian literacy during the removal era, the other a series of profiles of successful adults with Attention Deficit Disorder.




Political Electricity: The occult mechanism of revolution

Here is a tableau–an object in fact–that offers historical lessons about empire, and a warning:

 

Fig. 1. Political Electricity; or, an Historical & Prophetical Print in the Year 1770. Anonymous, London, 1770. Courtesy, American Antiquarian Society.

This is Political Electricity, a copperplate engraving that circulated in London in 1770 as a large broadsheet measuring 27 1/2 x 16 1/2 inches, signed “Veridicus,” the nom de plume of Richard Whitworth, opposition M.P. for Stafford. Composed of thirty-one distinct representations, the design is political satire as complex colloquial art, a densely allusive lattice-work of the misfortunes of the imperial British polity on the eve of the American Revolution. The image insists on a single narrative connecting many separate events. Where the story begins or ends is unclear, but the viewer is offered a partial thread, or rather a chain, for guidance–an “electrical chain” that connects several of the print’s panels, and whose movement is described by an accompanying key.

In this broadsheet, politics are electric and electricity political. The electric chain emerges in the top right-hand corner of the print from the person of “Lord Bute on the Coast of France . . . his Body the Electrical Machine shaking Hands with the Principal Nobles in France” (frame 1). It then proceeds in two directions. In one it is “conveyed from the Electrical Tube to the Princess of Wales” (frame 3) who, head in the clouds, is poised atop a set of scales in which different groups of M.P.s are balanced. The key tells us that “the Electrical Chain in her Left Hand which is conveyed across the Water from France, touches the Middle of her Waist and passes to the Hand of the [king] standing in the same Cloud.” This end of the chain terminates at the person of George III, whose crown appears near his head. The other end, however, continues much further. From the right arm of the prime minister, the duke of Grafton, the chain passes to a group of proministry M.P.s “in the Left Hand Scale” (frame 5) of the “Ballance of Power,” and down through the heads of the secretary of state and the lord president of the Privy Council (frame 14), who are “Playing at Cards with the Public Money,” while the paymaster-general in England and master of the rolls in Ireland is “cajoling them with Wine.” The chain then moves down diagonally to the right through Arthur’s Club House, a “Gaming house in St. James’s Street where the Ministry are Playing at Cards regardless of the Nation’s Welfare” (frame 13). After continuing through a party of physicians examining the corpse of a man killed during riots after recent elections at Brentford, involving the controversial John Wilkes (frame 18), the chain finally terminates in a scene at King’s Bench Prison (frame 19).

 

Frame 19 from Political Electricity

Wilkes himself and a clergyman look on from the prison windows, as fire from the chain discharges a musket on a young man protesting Parliament’s unwillingness to admit Wilkes: he “touches the Barrel of his Musket to draw out the Electrical Fire, but the Force of the Shock is so great that it Kills him” (frame 20). The authority of George III terminates in the barrel of a gun turned on his own subjects.

Political Electricity sets in motion for the viewer a train of conceptions about the relationship between power and secrecy. The theme of conspiracy and hidden influence (equally prominent among Britons and Americans in this era) is signaled by the presence of figures like the earl of Bute, the king’s favorite, prime mover in the resurgence of the Tories after decades of Whig supremacy, and widely seen as the incarnation of ministerial corruption in the 1760s. Corruption, both financial and constitutional, is repeatedly emphasized in the print. Ministers gamble away the nation’s wealth, while the figure of Wilkes–Parliament’s most ardent critic and self-styled martyr for British civil liberties–is insistently invoked. 

To anyone aware of British politics in the 1760s, these figures would have been unmistakable; by their inclusion, Political Electricity immediately establishes a shared frame of reference with the contemporary viewer into which a specific narrative and moral can then be inserted. Recent political history converges with prophecy through a narrative chain (the electrical chain) of violent self-destruction: the suppression of British liberties, the rise of corruption and militarism, the wrecking of trade and commerce, the disintegration of the British state, and the rise of America. In the opposition scale (frame 6), outweighed by the ministerial, Edmund Burke M.P., champion of American claims to the “rights of Englishmen,” speaks with a scroll before him entitled “The Injured Ghost of Liberty.” In reference to the damage done to transatlantic trade by the Stamp Act and Townshend duties, the banks of the Thames are a wasteland where animals graze, while ships sit docked in disrepair, their masts turned into broomsticks (frame 25); the Royal Exchange is “turn’d into wilderness” (frame 11) and London itself erupts in flames (frame 21). At the bottom center, the great British Lion is about to be carved up, with Bute sitting at the head of the ministry’s table, the beast’s genitals already on his plate (frame 26).

 

Frame 24 from Political Electricity

Tellingly, the only scene of prosperity foretells the rise of the colonies at the mother country’s expense: the London skyline (including the dome of St. Paul’s) is labeled “Boston” (frame 24). These are “the Coasts of America where the Inhabitants are Industrious in every Art to provide themselves with the Manufactures that Great Britain used to furnish them with, being constrained and drove as it were to Industry, by the late Ministry.” The result: “The City of London [is] transferred to Boston.” 

That all these events are related is confirmed by the visual metaphor of the electrical chain. But why this particular metaphor? Electricity was one of the leading branches of experimental science in the Enlightenment, certainly the branch with the highest public profile owing to the proliferation of commercial entertainments after the 1740s, both in Europe and British America, where customers willingly paid for the novel experience of having the “electric fire” course through their bodies. Such performances combined demonstrations of the rational principles of natural philosophy with playful sensory disorientations, electrifying bodies in order to demonstrate and explain the behavior of electricity, but also diverting the unwary with surprising shocks and sparks. Unlike in our own time, electricity in the eighteenth century was thought to possess spiritual and moral qualities. Conceived of as an “active power,” a material yet weightless entity, electricity was a force that penetrated and animated passive matter; as such, it was thought to mediate between the immaterial world, God, and His material creation, nature. According to Franklin’s electrical theory, one of the most influential of the Enlightenment, electricity moved matter when a communication was established between physical bodies possessing different charges (positive/negative). Lacking such communication, electricity remained inert and imperceptible. As Franklin wrote in the late 1740s, “[T]he electrical fire is never visible but when in motion, and leaping from body to body.” Political Electricity astutely follows this logic in using the electric fire as a metaphor for invisible political power. The electric chain is a medium of communication that reveals power in its transmission from George III through ministerial bodies, terminating in the tyrannical suppression of dissent outside King’s Bench Prison. The electrical chain is the conspiracy narrative; it makes visible, if only for the instant of communication, the murderous force cloaked in honorable persons of state.

Beyond taking advantage of the logical conveniences of electricity as a metaphor for anxieties about occult political power, Political Electricity also participated in a broader politicization of science in the later eighteenth century. This was to culminate in a conservative critique of the radicalism (and Terror) of the French Revolution as the poisonous fruits of an atheistic rationalism, but in Britain, experimental science had already become politicized by the American Revolution. The key figure was, of course, Benjamin Franklin, invoked in the print through the seemingly innocuous figure flying a kite off the coast of France (frame 1).

 

Detail from frame 1 of Political Electricity

Having received the Royal Society’s Copley Medal in 1753 for the invention of the lightning rod, Franklin came to Britain in the late 1750s as a colonial agent. Although he remained loyal to the cause of reconciliation between Britain and the colonies well into the 1770s, Franklin had associated since the previous decade with a group of liberal and radical Whigs in London, men of science, religious Dissenters, and critics of Parliament, who were sympathetic to American grievances after 1763. When resistance turned into revolution in the 1770s, Franklin came to embody this conjunction of experimental science and liberal politics with unprecedented symbolic force. Heroic representations of Franklin as the experimenter-turned-republican-revolutionary abounded in America, Britain, and especially France, where Turgot famously wrote of him, eripuit fulmen coelo sceptrumque tirannis (he seized lightning from the heavens and the scepter from tyrants). Those who defended the Crown’s authority in America, however, lamented Franklin’s career as evidence of the dangerous results when men of lowly status got ideas above their station. In Six Letters on Electricity (1800), the Anglican minister William Jones of Nayland described Franklin’s lightning experiments as “an ominous prelude to the business he was soon afterwards to do in the world, in drawing down the fire of civil war upon his country, and spreading the confusion of anarchy over the earth.” 

Like laboratory electricity, political electricity was power that became evident in communication and circulation. The source of this power remained hidden but it could be glimpsed through its effects, through the bodies it moved and the explosions it caused. Political Electricity does, however, present one generating point. Often depicted by satiric cartoons and effigy-bearing mobs as a boot, Bute here appears in the remarkable form of an electrical machine, his faceless head made to resemble a glass cylinder, cranked by French allies as though he were an electrostatic generator.

 

Detail from frame 1 of Political Electricity

More than a conceit for generating political power, this figure of the human machine suggests that artificial manipulation, rather than accident or natural causes, underlay the larger pattern of imperial implosion. If laboratory electricity required the artful manipulation of machines by individual social actors, so too did the political electricity of revolution. 

Languages connecting electricity and politics were thus intimately linked with Enlightenment polemics about the competing moral authorities of art and nature. Tellingly, proministry commentators often invoked machines to deny legitimacy to American resistance. According to Loyalists, an elite American cabal engineered a kind of popular delusion in America, deliberately inflaming the colonial population with lies about British designs, rousing them to violence. The people of America “were like the Mobility of all Countries, perfect machines, wound up by any Hand who might first take the Winch,” wrote Peter Oliver in 1781. Self-serving mobocrats like Franklin and Samuel Adams in Massachusetts deceived the clergy, and the clergy deceived the people, who were “weak, and unversed in the Arts of Deception.” Thus, “the Wheel of Enthusiasm was set on going, and its constant Rotation set the People’s Brains on Whirling; and by a certain centrifugal Force, all the Understanding which the People had was whirled away.” American resistance was a machine of political madness set in motion by conspiracy and enthusiasm–the confusion of false causes for true. American republicanism was illegitimate because it had no basis in nature or reality–it was simply a work of conspiratorial art.

These rejections of resistance as the product of art and imagination contrasted sharply with patriotic American celebrations of republicanism as a natural and divine electrical force. The Janus face of political electricity (occult power as enthusiasm) was electrical politics: the sublime revelation of the electric fire of liberty through the movement of republican bodies. “The news [of an independent government in] . . . South Carolina has aroused and animated all the continent,” John Adams wrote to James Warren in April 1776; “it has spread like a visible joy, and . . . will spread through the rest of the colonies like electric fire.” Time and again, Patriots used electricity to conjure resistance not as a work of mechanical art, but as a spontaneous expression of divine will working through nature. Republican virtue, like the electric fire at an experimental demonstration, traveled effortlessly between feeling bodies. Looking back on the Revolution in his Autobiography (1821), Thomas Jefferson invoked the same metaphor. Resistance to the British in Virginia, he wrote, was like “a shock of electricity, arousing every man and placing him erect and solidly on his centre.”

The print Political Electricity was thus part of a larger discourse that revealed the art behind such natural and divine appearances. Where Patriots celebrated the agency of divine will in the electrical-political sparks of revolution, vexed members of the metropolitan establishment saw conspirators artfully turning political-electrical machines. Polemics of art versus nature became polemics of conspiracy versus revelation. Political Electricity was an object that visually materialized lessons about the dangers of the immaterial and invisible forces threatening the empire in 1770. But by the time these forces had materialized, it was too late for such lessons to be learnt. Its prophecy was now history.

 

Further Reading: 

For a full description of Political Electricity, see Frederic George Stephens, Catalogue of Prints and Drawings in the British Museum, Division One: Political and Personal Satires, Vol. IV (London, 1883), 649-60. For a fuller discussion of electricity and politics in the American Revolution, see James Delbourgo, “Electricity, Experiment and Enlightenment in Eighteenth-Century North America” (Ph.D. diss., Columbia University, 2003), chap. 4. General treatments of electricity in the Enlightenment are John L. Heilbron, Electricity in the Seventeenth and Eighteenth Centuries: A Study of Early Modern Physics (Mineola, N.Y., 1999), and I. Bernard Cohen, Benjamin Franklin’s Science (Cambridge, Mass., 1990); for electricity’s cultural history, see Simon Schaffer, “Natural Philosophy and Public Spectacle in the Eighteenth Century,” History of Science 21 (March 1983): 1-43, and “Self Evidence,” in James Chandler, Arnold I. Davidson, and Harry Harootunian, eds., Questions of Evidence: Proof, Practice, and Persuasion Across the Disciplines (Chicago, 1991), 56-91; on early American science, see Brooke Hindle, The Pursuit of Science in Revolutionary America, 1735-1789 (Chapel Hill, 1956), and Raymond P. Stearns, Science in the British Colonies of America (Urbana, Ill., 1970). On satirical political cartoons of the era, see Peter D. G. Thomas, The American Revolution (Cambridge, 1986); on Franklin’s political affiliations in prerevolutionary London, see Verner Crane, “The Club of Honest Whigs: Friends of Science and Liberty,” William and Mary Quarterly 23 (April 1966): 210-33. Peter Oliver’s Origin and Progress of the American Rebellion (1781) is reprinted in Douglass Adair and John A. Schutz, eds., Peter Oliver’s Origin and Progress of the American Rebellion: A Tory View (Stanford, 1961). On British fears of American conspiracy, see Ira D. Gruber, “The American Revolution as a Conspiracy: The British View,” William and Mary Quarterly 26 (July 1969): 360-72; on American fears of British conspiracy, see Bernard Bailyn, The Ideological Origins of the American Revolution (Cambridge, Mass., 1967), 144-59, and Gordon S. Wood, “Conspiracy and the Paranoid Style: Causality and Deceit in the Eighteenth Century,” William and Mary Quarterly 39 (July 1982): 401-41. On sensibility, polemics of art and nature, and the intersection of science and politics in the French Enlightenment, see Jessica Riskin, Science in the Age of Sensibility: The Sentimental Empiricists of the French Enlightenment (Chicago, 2002). For recent approaches to the relationship of things and ideas, see Bill Brown, ed., “Things,” special issue of Critical Inquiry 28 (Autumn 2001), and Lorraine Daston, ed., Things That Talk: Object Lessons from Art and Science (New York, 2004).

 

This article originally appeared in issue 5.1 (October, 2004).


James Delbourgo is assistant professor of history at McGill University, Montreal, where he teaches colonial American history and history of science. He is the author of a forthcoming cultural history of electricity in eighteenth-century America, and is co-organizer of an upcoming symposium at UCLA entitled Atlantic Knowledges: The Sciences and the Early Modern Atlantic World. His current research explores the history of science, travel, and empire in the British Atlantic world.




Slaves in Algiers, Captives in Iraq: The strange career of the Barbary captivity narrative

About midway through my undergraduate seminar on American captivity narratives last fall, we were discussing one of the earliest American literary works to deploy this essential historical genre: Susanna Haswell Rowson’s 1794 play Slaves in Algiers, or, A Struggle for Freedom, a comedy-melodrama focusing on a group of Americans held captive in Algiers, one of the Barbary States of North Africa. The play is not distinguished by great literary excellence or readability, but it is fascinating in its complex mix of political agendas. On the surface level, the play was part of a wide public effort in the early 1790s to stir sympathy for the real white captives of the time. But it is equally dedicated to serving the ongoing commitment of Rowson (best known as the author of the wildly popular seduction novel Charlotte Temple) to advocate for women’s rights in the new republic and maintain the importance of female virtue. On other political levels, Slaves in Algiers reveals uncomfortable strains of xenophobia and anti-Semitism and–most conspicuously to readers in the present political era–it makes evident the deep roots of America’s imperial fantasies concerning the Islamic world.

The galvanizing moment in our class discussion came as we reread the play’s conclusion. Its closing words are shared by the young American hero and heroine, Henry and Olivia, separated by their respective captivities and now reunited following the Americans’ victory over their Muslim captors. Henry speaks of returning to the United States, “where liberty has established her court–where the warlike Eagle extends his glittering pinions in the sunshine of prosperity.” And Olivia concludes, “Long, long may that prosperity continue–may Freedom spread her benign influence thro’ every nation, till the bright Eagle, united with the dove and the olive branch, waves high, the acknowledged standard of the world.” “Hang on,” I told my students, “Now listen to this–” and I read to them from the conclusion of President Bush’s 2003 State of the Union speech: “America is a strong nation and honorable in the use of our strength. We exercise power without conquest, and we sacrifice for the liberty of strangers. Americans are a free people, who know that freedom is the right of every person and the future of every nation.” Gratifyingly, I heard sucked-in breaths and exclamations at the echoes between early national and contemporary political rhetoric as we contemplated the continuing presence of the past. Bush’s speech was delivered less than two months before the tanks rolled into Iraq; Rowson’s dialogue, less than a decade before the United States’ invasion of Tripoli, the first war authorized under the U.S. Constitution and the country’s first military victory following the Revolution. What my students and I shook our heads over was how precisely for both Rowson’s characters and the current administration the dream is the same: that the world will become an empire of liberty under the leadership of the United States, a country that considers itself entitled to tell everyone else what freedom means and impose itself as “the standard of the world.”

 

Fig. 1. Frontispiece map, “A map of Barbary comprehending Morocco, Fez, Algiers, Tunis and Tripoli,” from Mathew Carey, A Short Account of Algiers and of its Several Wars. Philadelphia, 1794. Courtesy of the American Antiquarian Society.

In both the early republic and the present, this troubling dream is recurrently enmeshed with stories of American captives abroad. Reading Slaves in Algiers was not the first time in the captivity class we had had occasion to consider recent events in Iraq. From the opening day, a touch-point for our discussions was the story of the captivity and rescue of Jessica Lynch, taken captive in an ambush in Nasiriyah in March 2003 (two major versions of the narrative, the TV movie Saving Jessica Lynch and the book by journalist Rick Bragg, I Am a Soldier, Too: The Jessica Lynch Story, appeared during the course of the semester). The intense public fascination with Lynch’s captivity and rescue and the less-than-subtle spin the events received in the hands of the military and the media made it abundantly clear that, though we might dwell in class on texts written in the seventeenth and eighteenth centuries, the genre cannot be consigned to a dry past full of impossible religious beliefs and short on alternative narrative thrills. 

Recognition of the historic resonance of the Jessica Lynch story came early. In an op-ed piece in The New York Times on April 6, 2003, less than a week after the rescue, Melani McAlister, a professor of American studies at George Washington University, characterized the heavily mythologized version of the story that initially circulated as “the latest iteration of a classic American war fantasy: the captivity narrative.” McAlister brought up parallels between Lynch and the first, most famous captive in the American tradition, Mary Rowlandson, held by Algonquian Indians during King Philip’s War in 1676. “For more than two centuries,” McAlister explained, “our culture has made the liberation of captives into a trope for American righteousness.” This analysis is absolutely right. However, literary works such as Rowson’s play and the nonfictional (or purportedly nonfictional) narratives that became popular in America beginning in the 1790s remind us that, along with the better-known Indian captivity narratives, there is a second captivity tradition focused on white slavery in the Barbary States. These stories encouraged early Americans to see themselves not just as members of a community under God, as Rowlandson’s narrative emphasizes, but as part of a nation finding its way in a complex international scene. 

The story of Jessica Lynch’s nine days of confinement in the land of Saddam Hussein dovetails with the explicitly political concerns of the Barbary captivity tradition. Beyond offering the earliest American portraits of the Islamic world, captivity narratives and other writings about Barbary from the early republic contrast America as a land of liberty (“the greatest blessing human beings ever possessed,” wrote John Foss) with the tyrannies of Muslim rulers, a new set of oppressive masters to fill in for the British rulers the Revolution had dislodged. In his 1798 narrative, for example, Foss wrote that the Algerines’ “tenderest mercies towards the Christian captives, are the most extreme cruelties,” and quoted the Dey of Algiers, who declared, “[N]ow that I have got you, you Christian dogs, you shall eat stones.” Obviously, such polarizing accounts are politically expedient to this day. Yet at times, early accounts of Barbary provoked a challenge as well: stories of the horrors of white slavery (a term that from the outset writers used interchangeably with “captivity” and “imprisonment”) could become an unsettling mirror for the nation, one that forced Americans to confront their own hypocrisy by recognizing the similar or worse slavery they practiced at home. In 1790, Ben Franklin made the point through a satiric hoax entitled “On the Slave Trade,” which exposed readers to the fallacies of a proslavery argument as spoken by an imaginary Algerine advocate of Christian slavery. In 1799, William Eaton put it much more bluntly: “Barbary is hell–So, alas, is all America south of Pennsylvania.”

At present we are discovering, I think, the continuing power of “Muslim captivity” to effect such disquieting moral reversals. Only a year and a half after the fact, the rescue of Jessica Lynch has begun to seem like a quaint memory, as our attention has been turned from American captives to American captors, from Lynch’s blond ingenuousness to the scary hard eyes of Lynndie England and her comrades at Abu Ghraib. What of “power without conquest” now? The linking of captivity with a vision of “American righteousness” has never seemed more fraught.

 

Fig. 2. Title page, History of the Captivity and Sufferings of Mrs. Maria Martin. Courtesy of the American Antiquarian Society.

And, of course, in all of these events there is the question of gender. Captives or captors, the ones we remember most are the women. If the fate of Jessica Lynch, her friend Lori Piestewa (killed in the Nasiriyah attack), and Shoshanna Johnson (another captive who survived) spurred little overt discussion of their status as women or the issue of female soldiers, the behavior of Lynndie England and the other women cavorting in the photographs from Abu Ghraib has, at least for some, provoked not just a moral but a gender crisis. Barbara Ehrenreich has written, “A certain kind of feminism, or perhaps I should say a certain kind of feminist naiveté, died in Abu Ghraib.” What died, of course, is the idea that women would never do such things. The photos from the prison, Ehrenreich points out, display “everything that the Islamic fundamentalists believe characterizes Western culture, all nicely arranged in one hideous image: imperial arrogance, sexual depravity . . . and gender equality.” Gender equality: the very principle Susanna Rowson sought to promote through a tale of strong women confronting the Islamic world. Perhaps Slaves in Algiers can offer us as many insights as any modern commentary into the tangle of captivity, domination, and American self-definition that we are confronting, and what gender might have to do with it all.

The origins of Barbary captivity reach back to 1492, when the Moriscos (or “Moors”) were violently expelled from Catholic Spain, the culmination of centuries of crusading violence between the two religions. This expulsion generated intense hostility among Muslims toward Spain and other Christian countries. The Barbary States of Algiers, Tripoli, Tunis, and Morocco–where the Moriscos settled after being driven from their homes–began an extensive program of privateering designed to attack European shipping and take slaves. While this practice had economic benefits and political advantages, vengeance remained a central motive; the pirates’ raids have been memorably characterized as a “sea-borne jihad.” There is no way of telling how many Europeans were enslaved, but the historian Robert Davis estimates that roughly thirty-five thousand Christians were held captive at any given time in the century following 1580, a figure startlingly high in comparison to the number of whites taken captive by North American Indians, if insignificant in comparison to the number of Africans and their descendants later enslaved in America.

Americans’ experience of Barbary captivity came after the main European captivity period, when the new nation’s independence made it politically and economically vulnerable. After 1776, the U.S. was no longer under the protection of the British navy, and some believed Britain was encouraging the attacks as their own form of vengeance; meanwhile, the new nation lacked the funds and diplomatic muscle to readily negotiate the captives’ release. Barbary pirates captured several American ships in 1784-85, but the major public uproar came in 1793, when additional attacks ensued after Algerian vessels gained access to the Atlantic again, following a peace deal with Portugal brokered by Britain. Soon, up to 120 Americans were being held in Algiers. During this, the nation’s first hostage crisis, accounts of Barbary began to pour forth from the presses in every conceivable genre, including the print version of Slaves in Algiers. Some of these were published as part of efforts to raise money towards freeing the captives, a goal that would not be achieved until 1796, when, after prolonged struggle and negotiation, the New England poet and U.S. consul Joel Barlow managed to secure their release for the astonishing sum of one million dollars.

 

Fig. 3. Frontispiece, The Captivity and Sufferings of Mrs. Mary Velnet. Boston, 1828. Courtesy of the American Antiquarian Society. This narrative is a major source for the narrative of Maria Martin, which includes a similar, though fully clothed, frontispiece image.

There is little documentation of Anglo-American women’s experience of Barbary captivity, and the existing narratives that focus on female captives appear to be fictional. Yet women occupy an important place in the American imagination of the Muslim world. Early writers emphasized the enclosure and sexual objectification of Muslim women, and representations of female Christian captives in the Barbary States similarly dwelt on their confinement and sexual vulnerability to predatory masters. Such images were both sentimentally affecting and covertly titillating, but they had political meaning as well. Historian Robert J. Allison explains, “Westerners saw the eighteenth-century Muslim world as a wicked mix of political tyranny and wild sex . . . [S]exual tyranny became the ultimate form of Muslim political tyranny.” This tyranny was exemplified through the image of the seraglio, where beautiful women were kept as “slaves to the tyrant’s lust.” The popular though spurious History of the Captivity and Sufferings of Maria Martin (1807), for example, culminates in an account of Martin’s two years of “close confinement” loaded down with irons and chains in a specially built dungeon. The frontispiece image of a topless, chained woman hints at the situation’s erotic implications and the narrative’s emphasis on the female sufferer’s helplessness. Though Martin at first “glow[s] with the desire” to show her fortitude in suffering, she eventually finds herself struggling with illness and depression, until her liberty is purchased and she is freed. Such a figure is almost perfectly reincarnated in the images of Jessica Lynch as captive: alone, broken, and immobilized in a Nasiriyah hospital (and, according to Rick Bragg, a victim of anal rape). 

Such scenes of gendered oppression are alluded to throughout Slaves in Algiers, although Rowson’s emphasis is on moral and emotional pain rather than spectacles of physical suffering. Rowson dwells on the pain faced by both Muslim and captive American women, who are both in one way or another in bondage to male tyrants and their lusts. The play’s complicated plot involves American male captives who, as they seek freedom, are pursued by Muslim women, who wish to escape gender oppression by escaping with Christian lovers, and American female captives who teach and inspire the women of Algiers. The villains are two swaggering patriarchs: Muley Moloc, the Dey of Algiers, something of a Saddam figure characterized by his “tremendous whiskers” and “huge [scimitar],” and the rich and influential Ben Hassan, an English Jew turned Muslim renegade.

The American women, the older Rebecca and the younger Olivia, are pressured by Ben Hassan and Muley Moloc respectively to marry them. Yet they are given strength by their belief in liberty–both liberty as an American political principle, and liberty as it most directly affected women: freedom to live as a person rather than a sexual object and to choose a partner for love; freedom to be esteemed, as men are or should be, on the content of one’s character. Their strengths lie in their resistance to domination and their moral clarity, traits clearly exhibited by Rebecca when Ben Hassan argues that her commitment to liberty should extend to “liberty in love” (and hence encourage her to be one of his multiple wives): “Hold, Hassan; prostitute not the sacred word by applying it to licentiousness.” For her part, Olivia nobly plans to marry Muley Moloc to save her friends from death–then escape his power by “sink[ing] at once into the silent grave.”

 

Fig. 4. Title page, Susanna Rowson, Slaves in Algiers, or a Struggle for Freedom. Philadelphia, 1794. Courtesy of the American Antiquarian Society.

Rebecca and Olivia have subversively indoctrinated the Muslim women around them with their beliefs; the most eager student is Ben Hassan’s daughter Fetnah, who says of Rebecca, “It was she . . . who taught me, woman was never formed to be the abject slave of man . . . She came from that land, where virtue in either sex is the only mark of superiority–She was an American.” Such rhetoric is, of course, really directed at Americans themselves, not all of whom accepted Rowson’s version of women’s high standing. Her contemporary William Cobbett harshly criticized Rowson’s play as the harbinger of a social revolution. “Who knows,” he speculated with dismay, “but our present House of Representatives, for instance[,] may be succeeded by members of the other sex?” Clearly, the “struggle for freedom” of the play’s subtitle is ultimately not so much that of the white captives as it is women’s struggle, or more broadly America’s struggle to live up to its political ideals by recognizing women’s contribution to the emerging nation.

This feminist commitment does not mean, however, that Rowson rejected traditional gender roles. It is hard to know if her insistence on women’s equality would have extended to female soldiers. Conversely, though, it is hard to see her not approving of women like Jessica Lynch and her comrades, who express female virtue much as she and other patriotic women of the period envisioned it: love of country combined with devotion to family and other traditional female qualities. Like Rowson’s heroines, Jessica Lynch is plucky but ladylike, an aspiring kindergarten teacher, her tale of survival reassuringly linked to the story of her budding romance with a male soldier. It might not be going too far to see women such as Lynch as embodiments of the values of republican motherhood, the ideology through which the distinct but contained feminism of Rowson and other early national women was channeled.

Yet the vision of gender equality in Slaves in Algiers never escapes or contradicts the play’s imperialism. After all, emphasizing–indeed exaggerating–women’s freedom in America remains to this day one more way of insisting on America’s superiority in relation to Muslim countries.  Slaves may show women teaching their sisters the value of liberty, but even this contributes to the dynamic of American Christians deciding they know what is right for the people of Algiers. In the closing scene, the formerly tyrannical Muley Moloc responds to the former captives who have promised him mercy, “I fear from following the steps of my ancestors, I have greatly erred: teach me then, you who so well know how to practice what is right, how to amend my faults.” He is urged to “sink the name of subject in the endearing epithet of fellow-citizen.” Everyone in Algiers must learn to become, in effect, Americans.

Reading the play in 2004, as we struggle with the continuing mess in Iraq, Rowson’s fantasy of neatly transferring democratic values to grateful recipients is all too ironic. And, post-Abu Ghraib, some of the play’s images are just a little creepy.  I notice now, as I did not a year ago, how much the conclusion of Slaves in Algiers turns on the spectacle of abject Muslim men. Muley Moloc is stripped of his political and phallic power, reduced to thanking his former captives for their lenience. More obviously, Ben Hassan, who bears the weight of Rowson’s anti-Semitism as well as her anti-Muslim attitudes, is humiliated in implicitly sexual terms: unmanned by the slaves’ uprising, he seeks to escape by donning his wife’s clothes, and has a male Spanish slave fall in love with him. The Americans’ rather self-congratulatory declarations of mercy and might at the end are made possible by the very spectacle of disempowered prisoners before them.

 

Fig. 5. Rick Bragg, I am a Soldier Too: The Jessica Lynch Story. New York, 2003.

Even Rowson’s vision of women’s equality cannot, it seems, escape the images of domination and submission the play has set loose. In the epilogue, Rowson plays with reversing the hierarchy of gender subordination, claiming that women who understand the real power of their sex “hold in silken chains–the lordly tyrant man.” She imagines what a female viewer might take from her play: “‘Women were born for universal sway; / Men to adore, be silent, and obey.’” Such assertions are, Rowson reassures us, only meant flippantly. But her qualification cannot entirely erase the thought that the lines allow a moment of breathing room: just what would it feel like for women to be the dominant ones, with men at our mercy? Given the images of confinement and sexual vulnerability always close at hand in the Barbary context, I wonder if these lines may not hint at the rush Lynndie England and others must have felt in Abu Ghraib. Whether or not they consciously saw themselves reversing the all-too-familiar image of the woman as victim, they were getting to experience themselves not only as Americans, but as women, on top. 

Such images challenge us on every level. If once the theme of Barbary captivity forced Americans to confront their own investment in slavery, it now seems to demand we examine the very dynamics of domination as they permeate both gender and foreign relations. But as we confront the dilemmas of the present, the past remains illuminating. Rowson’s characters do not, after all, inflict corporeal vengeance or other cruelty upon their former captors. Once again the character Rebecca states it clearly: “Let us not throw on another’s neck, the chains we scorn to wear.” For all the signs of nascent American imperialism in her play, Rowson stood behind an idealistic vision of a world “where virtue in either sex is the only mark of superiority.” Without a virtuous citizenry, educated Americans of her era believed, the republic itself could not survive. And central to the period’s conception of virtue was a person’s capacity for feeling and sympathy for others–a capacity essential to understanding that the experience of captivity or imprisonment is truly terrible for all people. 

Further Reading:

Susanna Haswell Rowson, Slaves in Algiers, or, A Struggle for Freedom, eds. Jennifer Margulis and Karen M. Poremski (Acton, Mass., 2000); Paul Baepler, ed., White Slaves, African Masters: An Anthology of American Barbary Captivity Narratives (Chicago, 1999); Robert J. Allison, The Crescent Obscured: The United States and the Muslim World, 1776-1815 (Chicago, 1995); Robert C. Davis, Christian Slaves, Muslim Masters: White Slavery in the Mediterranean, the Barbary Coast, and Italy, 1500-1800 (New York, 2003); James R. Lewis, “Savages of the Seas: Barbary Captivity Tales and Images of Muslims in the Early Republic,” Journal of American Culture 13.2 (1990): 75-84; Paul Michael Baepler, “The Barbary Captivity Narrative and American Culture,” Early American Literature 39.2 (2004): 217-46; Malini Johar Schueller, U.S. Orientalisms: Nation, Race, and Gender in Literature, 1790-1890 (Ann Arbor, 1998).

 

This article originally appeared in issue 5.1 (October, 2004).


Anne G. Myles is assistant professor of English at the University of Northern Iowa. She has published numerous articles on various aspects of dissent and gender in early America.




The Physiognomy of Biometrics: The face of counterterrorism

Terror is not faceless.

—Joseph Atick, CEO Identix, 2002

I.

Susanna Rowson’s postrevolutionary novel The Inquisitor, or Invisible Rambler (1788, 1793) recounts the experiences of a wealthy gentleman who, after complaining about the amount of duplicity in the world, is mysteriously given a ring that can turn him invisible. With the power of invisibility, the gentleman boasts that now “I should find my real friends, and detect my enemies.” And that is more or less what happens. Over the next three volumes of the novel, the gentleman’s morning walks provide him with numerous occasions to use his invisibility for the benefit of mankind. He exposes rakes, protects the innocent, and saves lives from ruin. Sometimes the gentleman intervenes after witnessing an immoral act while invisible, but far more often he first suspects someone and then investigates the person’s behavior invisibly. His ability to follow the duplicitous before they execute their designs is integral to the novel’s imagination of social order and justice. Yet, if his invisibility is what enables him to spy on people unobserved, then how does he know whom to watch and whom to ignore?

He knows, we later learn, because he is a physiognomist. “I never cast my eye upon a stranger but I immediately form some idea of his or her dispositions by the turn of their eyes and cast of their features,” he explains, “and though my skill in physiognomy is not infallible, I seldom find myself deceived.” Indeed, nearly all of the people the invisible rambler suspects eventually behave as their faces predicted they would. Throughout The Inquisitor, faces reveal seducers, gamblers, idlers, dissimulators, and a variety of crooks and fortune hunters. For Rowson at least, a person’s face becomes the probable cause for the rambler’s surveillance.

The idea that a person’s face could belie his will and disclose his character can be traced to Johann Lavater’s enormously popular Essays on Physiognomy (1775-78). At least twenty editions of Lavater’s Essays were published in English, including two in America, before 1810. By 1825, American periodicals had featured no fewer than seventy articles on physiognomy. Lavater’s distinction between pathognomy (the study of man’s passions and his visible, but impermanent facial expressions) and physiognomy (the study of the correspondence between man’s moral character and his permanent and unalterable facial features) limited the power of people to manipulate the reception of their image in public, since it disassociated expression from character. Since Lavaterian physiognomy read moral character from unalterable and involuntary facial features, it created a visual system for discerning a person’s permanent moral character despite his or her social masks. Readers of the 1817 Pocket Lavater, for instance, learned how to look at the features of various white male faces in order to discriminate “the physiognomy of . . . a man of business” from that of “a rogue.”

 

The Man of Business, opposite page 63 in Johann Caspar Lavater, The Pocket Lavater, or, The Science of Physiognomy (New York, 1817). Courtesy of the American Antiquarian Society.

By turning to physiognomy as a way to detect vice, expose dissimulation, and undermine social mobility in their novels, Rowson and other postrevolutionary authors reproduced Lavater’s opposition between a model of character read from performance and one read from the structure of the face. In contrast to the revisable, performed, and voluntary self of the fortune-hunting seducer Cogdie, for instance, The Inquisitor posits the permanent, physiognomic, and involuntary one used by the invisible rambler to unmask him. This opposition was foundational, I would argue, to how the postrevolutionary novel in particular and early American culture in general imagined the structure of social relations. The physiognomic distinction of the face opposed the functional, almost incidental relation of a person’s body to genteel performance that texts such as Benjamin Franklin’s Autobiography promoted and, as a result, it challenged Franklin’s idea that the acquisition of his social and political power was as universally available as the acquisition of his conduct. With the rise of physiognomy, the sphere of agency from which a person’s moral character could be known shrank from the range and quality of his actions to the contour and shape of his face. 

I begin with The Inquisitor’s invocation of physiognomy and surveillance to “find my real friends, and detect my enemies” because its attention to the face, social goals, and underlying logic are similar to those now surrounding today’s science of biometrics (which includes but is not limited to facial recognition systems). This is not to say that biometrics and physiognomy are the same. When biometrics looks to a face, it is to identify a person; when physiognomy looks to a face, it is to identify that person’s permanent moral character. Yet, each attempts to control mobility and the instability it brings to the social order by turning to bodies in general and faces in particular. These two sciences, eighteenth-century and twenty-first, share, in other words, a commitment to the idea that the body does not change, and they seek to ground a person’s essential character or unique identity in that idea of the body’s permanence. In so doing, however, both insist on a false opposition between a model of character that is performed and one that is corporeal. The persistence of this opposition may help to explain why the failure of biometrics to provide security seems to have no bearing on the perception that they provide security nonetheless.

II.

Biometrics are often associated with the future. Facial recognition systems, fingerprint readers, and retinal scans are the stuff of science fiction films such as Total Recall and Minority Report. Yet, as you read this, they are becoming very much a part of the present. Next year, the Enhanced Border Security and Visa Reform Act of 2002 will require that all visas and other travel documents to the United States include biometric identifiers. A $10 billion border control contract has already been awarded and plans are underway to install biometric devices (most likely fingerprint and facial recognition systems) at all three hundred border entry points. Soon biometrics will also be used to identify some two million transportation workers. Last year, the Department of Homeland Security handed out nearly $11 billion for biometrics, and it seeks another $1.4 billion in 2005. Millions more have been spent by the Department of Defense. Earlier this year, the American Association of Motor Vehicle Administrators upped the ante by proposing to create the world’s largest database of biometric data: a North American ID card that would utilize approximately three hundred million DMV facial images. Most recently, the 9/11 Commission report urged the government to establish a comprehensive biometric screening program “as quickly as possible.” These are but a few examples of what can only be called a stampede of post-9/11 government legislation, projects, and contracts all looking to buy what the biometric industry is selling: security. With over two hundred vendors now offering biometric solutions, the International Biometrics Group predicts that global revenue from biometrics firms will climb to $4.64 billion by 2008.

So how do biometrics provide security? Most biometric technologies automate the identification of people by one or more of their distinct physical characteristics, matching a face or a fingerprint, for example. As Michigan State University engineering professor Anil Jain explains, biometrics rely on who you are as opposed to what you know (such as a password) or what you have (such as a passport). They transform a unique personal feature such as your face into a numerical code or template, store that template, and then compare your face to it each time thereafter. In short, biometrics turn your body into your password. Biometric systems either prove that you are who you say you are (verification) or they prove that you are not who you say you are not (identification). During verification, your face is matched with your template so that you are positively identified. During identification, your face is compared against every face in a database (such as a gallery of terrorists) to ensure that you are not on a watchlist. Since biometrics claim to be more difficult to copy, forge, share, lose, or forget than traditional credentials, they have been heralded as an almost infallible way to control access to secure areas.
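The template-matching logic described above can be reduced to a minimal sketch. This is an illustration, not any vendor’s implementation: the feature vectors, the similarity function, and the threshold value of 0.8 are all assumptions chosen for clarity.

```python
import math

def enroll(features):
    """Store a feature vector (e.g., distances between facial
    landmarks) as the person's template."""
    return list(features)

def similarity(a, b):
    """Inverse Euclidean distance: 1.0 for an exact match,
    approaching 0.0 as the faces diverge."""
    return 1.0 / (1.0 + math.dist(a, b))

def verify(probe, template, threshold=0.8):
    """One-to-one: is this face who it claims to be?"""
    return similarity(probe, template) >= threshold

def identify(probe, watchlist, threshold=0.8):
    """One-to-many: which watchlist templates does this face resemble?"""
    return [name for name, t in watchlist.items()
            if similarity(probe, t) >= threshold]
```

A probe close to an enrolled template passes verification; a distant one fails, and identification simply repeats that comparison against every entry in the gallery.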

Biometrics, however, can make mistakes. A false match happens when you are incorrectly matched to another person’s template (as would be the case if you were falsely identified for a terrorist). A false nonmatch occurs when a person is incorrectly not matched to a truly matching template (as would be the case if you were not identified as yourself). Now here is the rub: you cannot lower both error rates simultaneously. The more you try to reduce the chance of people being falsely identified as terrorists, the more likely they will not be identified as themselves, and vice versa. 
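The tradeoff between the two error rates comes down to a single decision threshold on the similarity score. The toy scores below are invented for illustration, but the structural point holds for any system: tightening the threshold drives down false matches while driving up false nonmatches, and vice versa.

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """False nonmatch rate: fraction of genuine comparisons rejected.
    False match rate: fraction of impostor comparisons accepted."""
    fnmr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    fmr = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return fnmr, fmr

genuine = [0.91, 0.84, 0.78, 0.95, 0.88]   # same-person scores (hypothetical)
impostor = [0.30, 0.55, 0.72, 0.81, 0.40]  # different-person scores (hypothetical)

strict = error_rates(genuine, impostor, 0.90)   # rejects impostors, but also genuine users
lenient = error_rates(genuine, impostor, 0.60)  # accepts genuine users, but also impostors
```

With the strict threshold no impostor gets through but most genuine users are turned away; with the lenient one the reverse holds. No threshold lowers both rates at once.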

This has proven to be quite a problem for the industry, since biometrics, especially facial recognition systems, have not performed well when tested. A recent National Institute of Standards and Technology study, for example, found that facial recognition technology failed to match people correctly 23 percent of the time. Last year, it failed to match employees at Boston’s Logan International Airport up to 38 percent of the time, and in 2002 it failed to match Palm Beach Airport employees 53 percent of the time. According to the Economist, the 2003 government-sponsored Face Recognition Vendor Test found that “none of the systems worked well . . . when shown a face and asked to identify the subject.” Martyn Gates, a facial recognition specialist, confessed to the Financial Times that “in some systems, the accuracy is almost random.”

 

The Rogue, opposite page 89 in Lavater, The Pocket Lavater. Courtesy of the American Antiquarian Society.

Part of the reason biometrics perform so poorly, as many industry experts admit, is that the technologies are still immature. Consequently, biometrics have been routinely fooled or “spoofed.” Magazine photographs and high-resolution images of faces have been enrolled into facial recognition systems, while cadaver, silicone, and gelatin fingers have fooled fingerprint scanners. As the Wall Street Journal reported last year, Tsutomu Matsumoto of Yokohama National University was able to fool eleven different fingerprint scanners roughly 80 percent of the time using $10 worth of gelatin. Researchers at West Virginia University, the Guardian noted, were able to enroll fourteen cadaver fingers into a biometric system and, once enrolled, were able to verify their identities 40-94 percent of the time. Yet, you do not have to try to “spoof” biometrics in order to generate errors. Head movement, skin color, lighting conditions, and camera angles all affect the accuracy of facial recognition systems. Similarly, finger placement, hand lotion, dust, humidity, and temperature can alter fingerprint scans.

III. 

Although biometrics does make forging credentials more difficult, a person’s biometric data can still be stolen. A 2003 National Academies of Sciences report, for example, recommended that “biometrics should not be sent over a network” because the transmission of templates to a remote database presents the risk of theft. Yet, “the biggest reason biometrics are vulnerable to misuse,” the NAS report warned, “is that, unlike computer passwords or bankcard PIN numbers, they’re not secret.” “Collecting the data needed to compromise a person’s bioprint,” David Hamilton observed in the Wall Street Journal, “may be no more complicated than spying on him for a day or two” before lifting a fingerprint from a glass. And “once someone steals your biometric,” security expert Bruce Schneier explains, “it remains stolen for life.” While the government can issue a new passport, or a bank a new PIN, a person has only one face and ten fingers.

Even if biometric technology were infallible, critics maintain that it violates a person’s right to privacy and compromises our ability to live in a free society. Stephen Kent, committee chairman for the NAS report on biometrics, warned, “The ability to remain anonymous and have a choice about when and to whom one’s identity is disclosed is an essential aspect of a democracy.” Others worry about what sociologists call “function creep,” the process by which information is used beyond its initial intended and limited use. The ease with which facial recognition systems have been integrated with closed circuit television cameras or other third-party databases has alarmed civil liberties and human rights activists, who are concerned that biometrics would lead to the creation of a global surveillance infrastructure. “Without social agreement and legal restrictions on how the system could be deployed,” George Washington law professor Jeffrey Rosen imagines, “it could create a kind of ubiquitous surveillance that the government could use to harass its political enemies or that citizens could use, with the help of subpoenas, to blackmail or embarrass each other.” 

If 9/11 sparked the biometric boom, there are doubts about how effectively the technology can identify future terrorists. As one critic put it in the New Scientist, “I could give you my fingerprint and you still wouldn’t know who I am. Biometrics says nothing about whether I’m a terrorist or not.” Indeed, all nineteen of the 9/11 hijackers entered the country using valid visas, on their own passports. “Verifying their identities using biometric visas,” the Economist recently argued, “would have made no difference.” Even though photographs of known terrorists can be enrolled into facial recognition systems, only a few terrorists have ever been identified, and those images are often blurry and unreliable. Others contend that terrorists could exploit human error during the nontechnological process of enrollment. As technology specialist Keith Rhodes warned Congress, “[B]iometrics cannot necessarily link a person to his or her true identity . . . People who are not on the watchlist cannot be flagged as someone who is not eligible to receive a credential.”

IV.

With the Wall Street Journal calling facial recognition technology “one of the most error-prone types of biometric devices available today” on the one hand, and the ACLU branding it “an over-hyped failure” on the other, how can the government’s continued appetite for biometrics and the public’s apparent indifference to its costs and problems be explained? “It is difficult to avoid the conclusion,” the Economist told its readers, “that the chief motivation for deploying biometrics is not so much to provide security, but to provide the appearance of security.” Yet, poll after poll reveals that a majority of Americans believe that biometric screening will increase security. Why do so many find an illusion sufficient for security?

Without debating the strategic merits of the deterrent value of biometrics in a post-9/11 world, the confidence displayed in biometric technologies might have something to do with how they recall familiar but ultimately unproven ideas about the body’s permanence and its capacity to communicate our essential moral character or our unique identity. Biometrics posits that there are unique, measurable, and permanent physical features, which is why this science—like physiognomy before it—has difficulty with the simple fact that people change. Aging, weight gain or loss, changes in hairstyle, illness, accident, and cosmetic surgery have all been found to alter presumably permanent biometric characteristics. “Biometric input is not always the same and the technology has difficulty adapting to input variations,” admits Valorie Valencia, CEO of the biometric firm Authenticorp. In fact, the problem of user change is significant enough that the euphemistically labeled “time decay” of each kind of biometric is now part of a $3.1 million NSF/DHS study. By insisting that there are permanent features of the face, biometrics reproduce the physiognomic fallacy: namely, that there is an opposition between a voluntary, revisable self knowable from behavior and an involuntary, permanent self knowable from the body. Moreover, just as physiognomy was imagined by postrevolutionary novelists such as Rowson to thwart the rapid social mobility of fortune-hunting seducers, biometrics imagine the permanence associated with the corporeal self as an instrument for identifying people and regulating their mobility. 

The disavowal of the physiognomic fallacy by the biometric industry perhaps can be most strongly felt in how it chooses the future rather than the past in order to confront questions about the social consequences of its technology. In general, the industry and the media covering it address the social effects of biometrics as they are imagined in blockbuster Hollywood films such as Minority Report, The Bourne Identity, or Enemy of the State. (Industry experts served as technical consultants to many of these films.) At last year’s Biometric Consortium Conference, for instance, Catherine Tilton blamed Hollywood depictions of biometrics for perpetuating a series of myths regarding the loss of privacy, the loss of freedom, constant surveillance, absurd costs, and inaccuracy of biometrics. Chris Winton of Biometrics Australia lodged a similar complaint this year to the Sydney Morning Herald, saying that “biometrics is suffering from bad PR as a result of Hollywood.”

 

Illustration from Johann Caspar Lavater, Essays on Physiognomy: For the Promotion of the Knowledge and the Love of Mankind (Boston, 1794). Courtesy of the American Antiquarian Society.

By pointing to Hollywood dramatizations of biometrics as the origin of “myths” regarding the technology’s violation of privacy and freedom, the industry denies the actual, relevant histories of identity and corporeality that have existed in the United States and elsewhere since at least the era of physiognomy. It puts biometrics in dialogue with futuristic fantasies—at times paranoid, at other times accurate—about its imagined social effects rather than with actual past histories of the social, cultural, and political consequences of identifying people by their bodies. When the past is invoked by biometrics, its official genealogy is a progressive, scientific one beginning in the late nineteenth century with the early biometric criminologists, Alphonse Bertillon (inventor of a body measurement system for identifying criminals) and Francis Galton (father of fingerprinting), and evolving to the technologically savvy and precise biometrics of today. On the one hand, biometrics desires a history, but on the other, it suppresses its own relationship to prejudicial scientific discourses such as physiognomy, phrenology, anthropology, Bertillonage, and eugenics and their histories of generating and naturalizing social types complicit with racism, discrimination, and social injustice.

These histories seem particularly important to consider given the nontechnological aspects of biometrics. The question of how to identify a terrorist without a picture of his face, for instance, remains unanswered by biometrics, and the mysterious notion of a “watchlist” only defers the issue to government intelligence. How the watchlist is constructed, who is on it, and for how long, are rarely addressed in the debate over biometrics. When asked if he knew, Raj Nanavati of the International Biometric Group told Newsweek, “I’m not sure myself . . . they’re comparing it against a watchlist of nondesirables.” While biometric boosters like Identix CEO Joseph Atick assure the public that “trusted identity . . . is not a class distinction,” his own description of how his company’s facial recognition system will be able to discern the untrustworthy few from the “trusted identity” of “the honest majority” sounds all too similar to the invisible rambler’s magical declaration to “find my real friends, and detect my enemies.”

 

Further Reading: 

The emerging field of biometrics has produced a large number of short, mostly informative Web, newspaper, and periodical sources, but only a few book-length examinations. For more information on biometrics, see Joseph Atick, “Biometric Consortium Keynote Speech,” Biometric Consortium, Washington, D.C., Feb. 2002; Ruud M. Bolle, Anil Jain, and Sharath Pankanti, “Biometrics: The Future of Identification,” Computer 33:2 (Feb. 2000): 46-49; Owen Bowcott, “Biometrics Helping the Fight Against Terror, Hindering the Hope for Privacy,” Guardian, 18 June 2004, 3; David P. Hamilton, “Workplace Security (A Special Report); Read My Lips: Are Biometric Systems The Security Solution of the Future? Maybe, But We’re Not There Yet,” Wall Street Journal, 29 Sept. 2003, R4; Anil Jain, Sharath Pankanti, and Salil Prabhakar, “Biometric Recognition: Security and Privacy Concerns,” IEEE Security and Privacy 1:2 (March/April 2003): 33-42; The 9/11 Commission Report (Washington, D.C., 2004); Christian Parenti, The Soft Cage: Surveillance in America: From Slavery to the War on Terror (New York, 2003); Jeffrey Rosen, The Naked Crowd (New York, 2004); Irma Van der Ploeg, “The Illegal Body: ‘Eurodac’ and the Politics of Biometric Identification,” Ethics and Information Technology 1 (1999): 295-302; and James Wayman, “Interview with Joe Palca,” Talk of the Nation, National Public Radio, 11 June 2004. For more on physiognomy at the end of the eighteenth century, see Johann Caspar Lavater, Essays on Physiognomy, trans. by Henry Hunter, with engravings by Thomas Holloway, 3 vols. in 5 (London, 1789-98); Johann Caspar Lavater, The Pocket Lavater or, The Science of Physiognomy (New York, 1817); and Susanna Rowson, The Inquisitor, or Invisible Rambler (1788; Philadelphia, 1793).

 

This article originally appeared in issue 5.1 (October, 2004).


Christopher Lukasik is an assistant professor of English and American studies at Boston University. He is currently completing a book manuscript entitled, Discerning Characters: Social Distinction and the Face in American Literary and Visual Culture, 1780-1850.




Captors to Captives to Christians to Calabar

The Two Princes of Calabar: An Eighteenth-Century Atlantic Odyssey

The largest forced migration in human history has left a powerfully silent documentary record for historians to work from. Given the long-lasting historical repercussions of the estimated eleven million African captives forced to cross the Atlantic from the fifteenth to the nineteenth century, we know amazingly little about the individual experiences of the horrific middle passage. Those who controlled and directed the trade were far more interested in quantifying the cargo to determine profits than in somehow accounting for the humanity of the individual captives.

Historian Randy Sparks’s slim but extremely informative book corrects this silence. It tells the remarkable story of two African princes enslaved at Old Calabar in the Bight of Biafra, taken first to the Caribbean and then shipped to Virginia. They then escaped to England where they sued for their freedom, and finally made their way back to Old Calabar. Sparks’s study began not with a research proposal and set of historical questions to answer, but with a chance encounter in the archives. While conducting research at the John Rylands Library in Manchester, England, on the topic of nineteenth-century American and British Methodism, Sparks encountered a series of letters by former slaves to Charles Wesley, the brother of the founder of Methodism, John Wesley. The letters were written by Little Ephraim Robin John and Ancona Robin John, natives of Old Calabar, a principal source for the Atlantic slave trade in the eighteenth century. The brothers Robin John called upon and received assistance from Charles Wesley to gain their freedom and guide their conversion to Methodism. Rather than setting the sources aside as part of that never-ending wish list historians tend to compile for future studies, Sparks studied the letters in detail and searched for other sources that could shed light on the Robin Johns’ odyssey. The result is a much-needed examination of the transatlantic slave trade centered on the lives of two individuals. In Sparks’s hands, the Robin Johns’ story allows us “to translate those statistics [of the slave trade] into people” (5).

Sparks’s book is a testament to the ongoing convergence over the last thirty years among the specialized fields of colonial North American history, Latin American and Caribbean history, and African history under the rather broad title of Atlantic history. The Robin Johns’ odyssey could not be told any other way. History conveniently divided by nation-state boundaries and imperial rule to conform to academic specialization has neglected how fluid these commercial, cultural, political, and social boundaries have been since the fifteenth century. While the increasing interest in the field of Atlantic history undoubtedly reflects contemporary concerns (as with all historiographical trends), namely globalization and the origins of transnationalism, scholars of once-specialized fields are situating their studies within multiple contexts to speak to multiple audiences. 

The Robin Johns’ enslavement and liberation resulted from their active roles as slave traders at the West African region of Old Calabar. Little Ephraim Robin John and Ancona Robin John were members of the elite Efik slave traders of Old Calabar and participated in the Ekpe secret society that governed the commercial relations with Atlantic traders. As Old Calabar grew from a small town in the late seventeenth century to one of the most important slave trading regions of the eighteenth century, Efik traders such as the Robin Johns came to dominate Old Calabar society. The Robin Johns’ ability to speak and write English and a pidgin trade language (even before they left Africa), and effectively move through the cultural milieus of Africa, America, and Europe, Sparks shows, was indicative of the increasing interconnectedness of the Atlantic littoral. 

The Robin Johns’ power and control of the trade, which often resulted in British traders being held captive until higher prices were agreed upon, ultimately created the conditions of their undoing. In 1767 British slave traders, aggravated by the exceedingly high prices demanded by Old Calabar Efik traders, directly assisted rivals at nearby New Town in a bloody massacre that resulted in the capture of the Robin Johns. Immediately upon enslavement in Old Calabar, the Robin Johns began to use their intimate knowledge and connections developed through years of participating in the Atlantic slave trade to scheme for their freedom. Sparks rightfully concludes that “[h]owever rare such cases may have been, the Robin Johns knew what most captives did not–that it was possible to make their way home” (73). 

The Robin Johns earned the title “Two Princes” upon enslavement because they clearly set themselves apart from other Africans. Their knowledge of the English language and well-known connections to merchants trading in the Atlantic served to keep them away from what Sidney Mintz aptly described as the “agro-industrial graveyards” of the plantations. Upon arrival on the British Caribbean island of Dominica, they spent seven months working for a French physician who undoubtedly found their English-language abilities a great asset. While on Dominica, the Robin Johns made their plight known through smuggling channels and promised a handsome reward upon their return to Old Calabar. William Sharp of Liverpool contacted the Robin Johns and told them if they could make their way to his ship he would return them to Africa. Sharp, however, was bound not for Africa, but for Virginia, where he sold the Robin Johns. The brothers proved equally determined to escape their Virginia enslavement. They contacted Captain Terence O’Neil who promised to return them to Africa after their return voyage to Bristol if they could escape to his ship. 

Although the Robin Johns had never been to Bristol, they certainly knew much more about the city than the average African trapped in the Atlantic slave trade. One of the great strengths of Sparks’s book is his examination of the numerous Atlantic connections between Bristol and Old Calabar. Merchants from Bristol and Liverpool dominated the trade from Old Calabar, and approximately 85 percent of the 1.2 million slaves exported from the area in the eighteenth century left on English ships. Several Old Calabar Efik traders sent sons to England to learn English and solidify commercial relations. 

Luckily, the Robin Johns landed in Bristol at a fortuitous moment. In 1772 Chief Justice Lord Mansfield ruled that James Somerset, who had been brought to England as a slave by his Virginia master but had escaped, could not be re-enslaved and forcibly sent outside the country against his will. The Robin Johns sued for their freedom on the basis that they would be sent back to Virginia and sold as slaves against their will. Unable to establish a “legitimate” account of the Robin Johns’ enslavement, Lord Mansfield declared them free in 1773. Shortly thereafter, they began their return journey back to Old Calabar.

In their seven-year odyssey crisscrossing the Atlantic the Robin Johns repeatedly drew upon their connections established as Efik slave traders, but also sought out new allies to assist them in their quest for freedom. During their stay in Bristol, the cradle of English Methodism, they sought out Charles Wesley and became associates of his family. Sparks warns that reading the Robin Johns’ conversion to Methodism as merely a strategy for freedom is far too simplistic as their personal letters attest to spiritual and emotional convictions, even though it undoubtedly helped their case. As traders conversant in multiple languages and cultures, the Efik were particularly receptive to other belief systems, molding them to their own values. For Sparks, their conversion to Methodism serves as another example of their ongoing process of creolization. In regard to the significance of their embrace of Methodism, he argues, “their conversion was an act of defiance, an effort to erase concepts of difference and inferiority based on race through religion” (115). 

While conversion allowed the Robin Johns to claim equality with other Methodists by demonstrating that, in matters of Christianity, they were the equals of any whites, we need a more complex discussion of what defiance specifically means. How conversion represented a form of resistance different from other strategies to end their own enslavement is not clear from Sparks’s analysis or from the Robin Johns’ subsequent actions. This is not a problem specific to Sparks’s analysis, but one that marks slave studies in general. While the emphasis on resistance has been necessary to destroy the “Sambo Myth” of slavery, the scholarly tendency to label any agency on the part of the slaves as resistance has severely dulled its effectiveness as an analytical tool.

The hardest lesson for modern readers of the Robin Johns’ extraordinary story will undoubtedly be that they never renounced the slave trade or slavery. Avoiding both disappointment and shock, Sparks concludes that they returned to slave trading. Here lies the tragic consequence of Atlantic slavery and the close relationship between slavery and freedom. Without their personal investment in the slave trade, the Robin Johns most likely would not have gained their freedom. 

In the slave societies bordering the early-modern Atlantic, whether they were connected by trade such as that between Old Calabar and Bristol or plantations in the Americas, the clearest indication of personal freedom was marked not by individual autonomy and economic independence, but by ownership of another human being. With great care, engaging prose, and appreciation for the complexities and contradictions of the human condition, Randy Sparks allows the Robin Johns’ story to vividly illustrate the few triumphs and numerous tragedies that marked the transatlantic slave trade.

 

This article originally appeared in issue 5.1 (October, 2004).


Matt D. Childs is an assistant professor in history at Florida State University and editor with Toyin Falola of The Yoruba Diaspora in the Atlantic World (Bloomington, 2004).




Race and Citizenship in Early New England

Bodies Politic: Negotiating Race in the American North, 1730-1830

John Wood Sweet’s Bodies Politic: Negotiating Race in the American North, 1730-1830 taps a range of new manuscript and printed sources to paint a fascinating picture of the interactions between English settlers, African slaves, and Native Americans in New England during the colonial era and early Republic. While reinforcing the fluidity-to-rigidity model of racial identity that other scholars have proposed, Bodies Politic fleshes out the process of encounter and exchange. Sweet argues that contests over who belonged to the developing American society–who could claim “citizenship”–were crucial to the formation of colonial New England, to meanings of the American Revolution, and to the development of democracy.

Part I, “Coming Together,” discusses the encroachment of English settlers onto Narragansett lands, the development of African slavery, and the negotiation of identity as blacks and Indians converted to Christianity, appropriated English ways of life, and sought to define their roles in colonial New England society. Sweet provides a nuanced understanding of the conflicts through which the Narragansetts were dispossessed of their land, the complicated constructions of slave resistance within the dominant culture, and the emergence of autonomous Christian traditions among native and African peoples toward the end of the colonial period. Part I argues that even as acculturation erased differences, it distanced African slaves and Native Americans as a whole from participation in the developing colonial society and “prompted increasingly vital senses of racial identity” (57).

Part II, “Living Together,” moves from narratives of acculturation that were integral to English ideologies of imperialism to other, more intimate forms of membership in New England society. Whether regulating marriage or illicit sex, grappling over military recruitment policies, or, ultimately, disputing abolition, the public played a significant role in determining the limits to native and black resistance; yet Sweet also highlights the ways in which these peripheral groups sought sexual respectability, exploited the destabilization of political order during the Revolution to gain concessions from white settlers, and sought aid from emerging networks of abolitionists to pursue manumission. If Part I stresses the agency of blacks and natives in the process of acculturation, Part II focuses on the ways in which living together spawned both the promise of greater equality and a white male fraternity that left free people of color without a clear place.

Part III then turns to the “problem of race” and the “problem of equality” in the early republic, looking at the meanings of the Revolution for African Americans, Native Americans, and whites in terms of their expectations for citizenship. Some Native Americans and free blacks abandoned hopes for equality and moved elsewhere, but many sought to establish themselves as members of the new nation. During the nineteenth century, free blacks used increasingly confrontational strategies to assert their citizenship, and whites responded by constructing rigid categories of racial difference that fueled widespread antiblack violence and brought to an end the “period of potential racial egalitarianism during the early years of the Republic” (355). These struggles over the symbolism of citizenship reflect not only the inheritance of slavery and exploitation, but also the ways in which the desire to construct a new republic created the problem of equality among whites. The founding of American democracy occurred through the rigorous exclusion of people of color from the new body politic; yet even as the book comes to a close, people of color are articulating alternative narratives of the origins of the new nation that would continue to challenge the myth that America had come into its own as a land of “heroic self-sacrifice, manly vigor, and republican virtue” (399) and to call for Americans to realize the promise inherent in the rhetoric of the Revolution.

To do justice to the subtlety of Sweet’s analysis or to the range of his source material would require a review of greater length. But I would like to point to two examples that are representative of the sensitivity that he brings to the encounters between the peoples of early New England. In chapter 1, Sweet shows that Indians and English settlers both forged analogies between English and native models of government, a process that produced ambiguities that each could exploit in their relations with one another. Chapters 2 and 6, on the other hand, do a wonderful job of decentering the master and stressing the role of the public in determining the contours of the master-slave relationship. Sweet provides rich evidence of slaves running away and thus directing their resistance toward individual masters rather than the slave system itself, but he broadens this view by arguing that the general threat of slave revolt was partly directed at a complicit public that failed to recognize the rights of slaves. In so doing, he points to the ways in which New England colonial identity was increasingly bound up in the settlers’ desire to keep slavery private and permanent in the face of an institution that, in practice, constantly undermined this vision.

Bodies Politic considers negotiations of citizenship in a range of different contexts, from acculturation to the formation of a post-Revolutionary democracy. The analysis would have benefited from a more dynamic definition of “citizenship” that made explicit what the move from subject to citizen meant for early New Englanders; as it is, Sweet’s static definition of citizenship–meaning, broadly, the rights enjoyed by white, property-holding men–reflects the perspective of the historian and loses sensitivity to change and to the fluidity of membership in early American society. At the same time, however, Sweet provides a much more detailed picture of common New Englanders’ day-to-day lives. His book comes as part of a long-term effort to balance the complicated histories that we have of white America with an equally nuanced understanding of the perceptions and strategies of people of color. That he does so with the American North is crucial to revising the still-too-common assumption that New England housed a relatively homogenous population shaped less by interactions–especially with African slaves–than by a shared religious errand. The more pluralistic society that Sweet reconstructs brings this region into line with the growing scholarship on the importance of encounter, exchange, and conflict in the development of American identities and reintegrates it into the accepted broad narratives of American history for this period. Perhaps the book’s greatest contribution lies in its portrayal of African and Native American identities as quickly coming to share the same cultural ground as whites’; in the persistence with which people of color sought equality in the new republic, and even in the English-style Christian settlements that those who chose to leave pursued elsewhere, lies the unfinished business of the American Revolution and a constant reminder of the origins of American democracy.

 

This article originally appeared in issue 5.1 (October, 2004).


Catherine Molineux is a Ph.D. candidate at Johns Hopkins University and is currently finishing a dissertation entitled, “The Peripheries within: Race, Slavery, and Empire in Early Modern England.”




Adding Food to Business History and Urban History

Public Markets and Civic Culture in Nineteenth-Century America

Helen Tangires’s Public Markets and Civic Culture in Nineteenth-Century America is the study of a building type but it is also an account of a profound ideological shift with implications for public-policy decisions today. Part business history, part urban history, part social history, Public Markets traces a century of architectural and urban-planning change as the market–its form, location, management, and ownership–expressed American foodways, relationships between rural and urban populations, and the responsibility of the polity toward its poorest members. The narrative focuses on Philadelphia and New York City but it includes important developments in other American cities as well as a brief account of how and why contemporary English and French markets differed.

Food is unlike other commodities in two ways: time (the window of use is, in most cases, quite brief) and universality (everyone must eat). Moreover, because it is necessarily produced at a distance from where it is consumed, its movement often involves key decisions on the part of transporters, middlemen, and politicians as well as producers and consumers. The focus of Tangires’s study is the major mid-nineteenth-century shift from one kind of market to another. The older public markets were low sheds with overhanging protective eaves and open sides, built by vernacular builders, owned by the city, and situated on public land (often in the middle of a purpose-built, extra-wide street); their trading hours and behaviors were regulated to support a moral economy of “fairness” between buyers and sellers, one understood to incorporate all classes and to accommodate a panoply of activities. Their successors were large, architect-designed, private, multistory enclosed buildings (often utilizing Renaissance Italian urban-building vocabularies) on private land, explicitly inviting middle-class patronage. The first was associated with face-to-face exchange between producer and consumer in a pedestrian city, in an arena fostering vernacular theatricals, loitering, and bargaining (and hostile to monopolies, hoarding, and greed); the second was aligned with the triumph of the capitalist market economy, in which food was a commodity like any other. Spatially, to effect this shift, the street at the center of town had to be reassigned meaning, from a meeting place to a conduit in which market sheds were no longer the pedestrian’s destination but a “nuisance” and an “obstruction” to the newly valorized railroad and streetcar, whose tracks frequently commandeered the public market-house site and gave that public space over to the ease and encouragement of rail-facilitated suburban development (125, 129).
The destruction of buildings was also a symbolic attempt at erasure of social groups that ascendant politicians and business leaders sought to excise from the “modernized” version of their cities. In the words of one apologist foreseeing the destruction of his city’s hospitable market sheds: “where would the unemployed street laborer, wood sawyer, coal heaver . . . and their successors forever . . . find a commodious place of shelter to spend a rainy day . . . [or] the respectable Tramp, in search of employment . . . find a comfortable place to lay their weary heads” (127)? The narrative of urban change, in other words, told from the perspective of food markets, lets us see a visible manifestation of an ideological shift. Unlike the courthouse, church, town hall or other purpose-built public structure, the public market of the first half of the nineteenth century welcomed everyone–the farmer with his cabbages to sell, the housewife in search of dinner, the indigent widow-huckster provisioning her basket, and those with nowhere else to go to find shelter, sociability, a job, or a cheap meal.

Those who sought to eliminate the public markets–businessmen, politicians, railroad barons–identified certain categories of things, people, and activities that one should not see in the modern city, at least not at its core; so, characteristically, the private markets had opaque (usually brick) walls with ventilation provided by wicker lattice-filled apertures. Tangires describes but does not comment on the architectural language in which the new regime encased the modernized version of this building type–mimicking explicitly urban, masonry, aristocratic, historicist European “palace” prototypes (122, 126). Simultaneously, European cities were modernizing their public markets in the second half of the nineteenth century by adopting the very different modular glass-and-iron translucency of exposition buildings and train sheds (187). The larger history of this American “enclosure” of markets–leading to the supermarkets and malls of today’s shopping landscape–was far-reaching, extremely contentious, and by no means a uniform development, with some unexpected twists. The residents of Pullman, Illinois, for instance, provisioned in a centrally located (fully enclosed) market house (built in 1881) with stalls rented by independent retailers, rather than in a company store as we might expect (174).

Tangires is an unobtrusive narrator. She draws on a wide array of verbal materials: public ordinances, court cases, newspaper articles, tracts, local histories, public reports, and private diaries are all richly invoked to set out her tale. Particularly apt are the visual resources–period maps, engravings, paintings, and especially period photographs–that she has marshaled to build her case. Although the reproduction quality is not high, these images contribute importantly to her evidence base. Unfortunately, we have no sense of her engagement with actual surviving buildings here. Perhaps none from the nineteenth century, even in modified form, have survived. However, there is at least one from the eighteenth century–the Brick Market in Newport, Rhode Island–that might have provided a baseline in terms of scale and integration with neighboring structures and public spaces. Fieldwork may not have changed the narrative, but it might have given depth, solidity, and immediacy to a tale that is important not just as a historical account but also as a comment on the practices and possibilities we see on the landscape today.

The second missed opportunity is commentary on American foodways; on this point Tangires is unnecessarily laconic. She makes clear in her account that butchers were the key players in the markets in question, meat the focus of massive infrastructure development and regulation, and Americans the global leaders in meat consumption (52, 61-88, 112, 115, 137, 156-57). Why this should be the case, and when Americans began to insist on fresh (rather than salted, smoked, or otherwise preserved) meat, is the engine driving much of this story, but this part of the chronicle remains obscure and unfortunately underinvestigated. Last, the intriguing tale Tangires tells concerns, chiefly, the eclipse of the public market in the interest of the evolution of both private shops and megastores; that is, the demise of the “walking” market in which a community of independent farmer-producers (or hoof-to-steak butchers) offered their wares for official inspection, regulation, and public purchase, and the rise of middleman food merchants. But Tangires also makes clear that the public market as an institution (if not as an architectural type) has survived and, in fact, as the usually meatless farmers’ market, is making a vigorous comeback. She notes that in 2000 there were 2,863 active farmers’ markets in operation in the U.S., in which, generally, only producers, rather than middlemen, could sell, only seasonal produce was available, and public space was used on a temporary but scheduled basis (xvi). Exploring this phenomenon, with its “un-American” emphasis on unbranded goods and producer-to-consumer contact, would have been an apt coda to this book. One might usefully have explored as well the new “moral economy” it represents in its embrace of taste and its refusal of the ideology and environmental degradation characteristic of the prevailing agribusiness-supermarket complex.
In terms of what can and should be seen in an urban environment, the farmers’ market offers the promise of farm-ripened foods: unboxed, unprocessed, and unbranded food, offered in an ad hoc space temporarily converted to the kind of pedestrian sociability and exchange so common at the core of our cities a century ago. Is this phenomenon a hopelessly quixotic gesture in the direction of a world we have lost, or a genuine revolt against a food system that maximizes resources to produce cheap food and large profit at the expense of our land, our health, and our eating pleasure, a trajectory we unwittingly embraced when we redefined the good use of public space in the mid-nineteenth century as trains rather than market sheds? In short, Tangires’s excellent book would have been even stronger had she pressed her investigation in the direction it so relentlessly points, that is, toward the present. As she puts it, the “market is society’s conscience–the place where we can evaluate our success or failures at organizing urban life” (xvi).

 

This article originally appeared in issue 5.1 (October, 2004).


Margaretta M. Lovell is professor of the history of art and director of the American studies program at the University of California, Berkeley. She publishes on American food, art, and architecture; her Art in a Season of Revolution (University of Pennsylvania Press) is due out in November 2004.