A Note on the Common-place Redesign

Large Stock

When I was asked to design Common-place back in 1999, the editors requested that it “look like a broadsheet.” Seventeenth- and eighteenth-century broadsheets–large sheets set in an unvariegated sea of tiny columns of text–were in many ways the opposite of Web pages. Broadsheet design reflected movable type’s strengths and weaknesses: the sheer amount of labor required in locking up a page of text and the costs involved in paper and ink meant that every pressing counted. On the other hand, the medium of Web pages is virtually inexhaustible, although the individual pages are confined to the size and resolution of the computer monitor.

The look upon which we settled used the techniques that were available in Web design five years ago. HTML (Hypertext Markup Language), the formatting language of the Web, was never intended as a flexible layout tool, but designers for the medium discovered that through careful use of tables and graphics it was possible to imitate some of the look of a printed page, and this was the technique I chose. However, these techniques are labor-intensive, prone to error and variation across machines, and less accessible to special-needs visitors, such as blind surfers using Braille browsers.

The last few years have seen the rise of new coding methods and practices for the Web. There has been a call for accessibility and standardization: pages should be readable by a variety of devices, including Web browsers, PDAs, cell phones, and television browsers, and should be searchable and easily indexed by automatic systems. To achieve these ends, designers and developers are turning away from traditional HTML with its table layouts and toward the newer markup language XHTML (eXtensible Hypertext Markup Language), styled with CSS (Cascading Style Sheets).

The basic idea behind all this technical stuff is to separate the content of a page (that is, the text, images, and other information) from the page’s format (columns, fonts, colors, etc.). If the same content is later reused in a venue other than a Web page–say, a printable version–no new coding is necessary to reformat it. If a redesign is required, changing the style sheet is all it takes to change the look of the entire site. These are some of the practical reasons that I have switched to XHTML/CSS in the design of Common-place.
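As a small illustration of the idea (the file names and class names here are invented for the example, not Common-place’s actual markup), the same article fragment can be marked up once in XHTML, by meaning rather than appearance, and take its entire look from a separate stylesheet:

```html
<!-- article.html: the content, marked up for what it is, not how it looks -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>A Note on the Redesign</title>
    <!-- Swapping this one line for a different stylesheet restyles the whole site -->
    <link rel="stylesheet" type="text/css" href="site.css" />
  </head>
  <body>
    <div class="article">
      <h1>A Note on the Redesign</h1>
      <p>Body text goes here.</p>
    </div>
  </body>
</html>
```

```css
/* site.css: the format -- columns, fonts, colors -- kept apart from the content */
body     { font-family: Georgia, serif; color: #222; }
.article { width: 30em; margin: 0 auto; }
h1       { font-variant: small-caps; }
```

A sitewide redesign then means editing site.css alone; none of the pages themselves need to be touched.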

But beyond utility, I have also used this redesign as an opportunity to rethink the way the site looks and to tweak it visually. While the overall look is very much of a whole with earlier editions of Common-place, the new page design reflects more closely the way people use the site.

There will probably be growing pains involved. Older browsers understand CSS only partially or not at all. Netscape 4.x will not render the new design correctly (and I apologize to the few of you still using it); Internet Explorer 5 for Windows renders it with a few oddities. Other visitors with older browsers may find the design working in unexpected ways, but traditional HTML had its own cross-browser issues. I suggest that you use this as an opportunity to upgrade to the latest version of Explorer, Netscape, Safari, Opera, or whatever your favorite browser may be.

In coming issues, I will continue working to make the new XHTML/CSS design more attractive and easy to use. I hope to format the printed version of the pages so that visitors who prefer to read the text in hard copy will be able to have a well-organized version without the “furniture” (navigational links, decorative graphics, etc.) of the online version. I also want to re-examine the applications such as the search function and the bulletin board to make them more attractive and easier to use.
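One common way to produce such a printable version (a sketch only; the class names below are invented, not the site’s actual ones) is a second stylesheet applied only to printed media, which hides the “furniture” while leaving the pages themselves untouched:

```html
<!-- Added alongside the screen stylesheet; used only when the page is printed -->
<link rel="stylesheet" type="text/css" media="print" href="print.css" />
```

```css
/* print.css: suppress the furniture and let the text fill the printed page */
.navigation, .decoration { display: none; }
.article                 { width: auto; margin: 0; }
body                     { font-size: 12pt; }
```

Because the content is already separated from its presentation, the print version costs one extra stylesheet rather than a parallel set of pages.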

I’m looking forward to, and hope you will enjoy, these changes.

 

This article originally appeared in issue 4.3 (April, 2004).


 




E-Abolitionists

Look at you: sitting there at your computer checking the weather reports, studying Buffy the Vampire Slayer episode guides, and reading online history magazines when you could be doing something worthwhile. “You can use the Internet to set people free,” reads the message across the bottom border of my browser. “And it takes only two minutes a week to help.”

The site that promises these powerful surfing opportunities is iAbolish.org, an “Anti-Slavery Web Portal” set up by the American Anti-Slavery Group. Founded by management consultant Charles Jacobs in 1993, the group has spearheaded a self-styled “new abolitionist” movement that claims to have freed tens of thousands of mostly African slaves, and has begun to get some traction in Congress and on college campuses. The immigration laws have already been modified at the new movement’s behest, and someday you may need to check candy bars and chocolate syrup bottles for a “slave-free” label that the group is pushing in its drive to call attention to forced child labor on the cocoa farms of West Africa. On its primary issue, the capture and enslavement of southern Sudanese by raiders from the dominant north, the group has generated enough publicity to push Sudan toward a pariah status similar to South Africa’s in the 1980s.

While the people it champions clearly are in dire need of help, historically minded Web surfers may find iAbolish.org rather jarring in its combination of Internet-era jargon with familiar terms and strategies from the nineteenth-century antislavery movement. The site has all the bells, whistles, and spending opportunities that netizens have come to expect: animated Flash presentations, on-demand multimedia content, interactive maps, and an online store where one can load up on antislavery books, t-shirts, posters, and videos.


Yet the substance of all this postmodernity would be quite familiar to William Lloyd Garrison and Frederick Douglass. There are personal narratives of freed slaves, horrifying accounts of the conditions that slaves endure with much emphasis on threats to women and children, prefabricated messages to send to elected officials, and inspiring stories of slaves who escaped from bondage. And then there are the constant references to the creators and users of the site as “abolitionists,” a usage boldly appropriating the name of perhaps the most radical reform movement in American history that actually achieved its stated goal.

Almost inevitably this borrowing leads iAbolish to the wince-worthy coinage “e-abolitionist,” which one can become by signing up. Along with that term goes a strong dose of the techno-messianism that has so infected recent American culture: “The Internet has changed your life,” the site announces. “Now, you can use the Internet to help liberate millions in bondage.”

There are many reasons why the people behind iAbolish.org seem justified in claiming the abolitionist mantle. The harnessing of cutting-edge persuasive techniques to a moral crusade was, of course, a hallmark of the old abolitionists. They and their colleagues in the other Jacksonian-era religious reform movements have been called the inventors of mass media politics (back when mass media meant newspapers, books, and pamphlets), and pioneered the use of direct mail solicitations. (In South Carolina, the slaveholders rioted to stop these abolitionist mass mailings.) One can easily see an analogy between Charles Jacobs and Lewis and Arthur Tappan, the dry-goods magnates who helped bankroll the original American abolitionist movement. In both cases, commercial sensibilities blend seamlessly with reforming zeal. Even if the new abolitionists were to go Hollywood and recruit some rock stars or commission a TV miniseries, they would only be following in the footsteps of the old abolitionists, whose arsenal included the popular Hutchinson Family Singers and Uncle Tom’s Cabin in multiple formats (including the best-selling novel, sheet music, and a special effects-laden theatrical production).

The new antislavery movement also represents a remarkable resurgence of the great nineteenth-century alliance of evangelical Christianity and social reform, the very nexus in which radical abolitionism once thrived. This alliance seems to have gone into hibernation somewhere around the time of the Scopes trial, and in my own lifetime politicized evangelical Christians have typically regarded liberals and radicals as godless communists and secular humanists, people they would no more team up with than Satan himself. So it’s interesting to see a movement that counts both Jesses–Helms and Jackson–among its supporters, and can get Johnnie Cochran and Ken Starr on the same side of a law case. Its biggest legislative success to date, the Victims of Trafficking Protection Act of 2000, was cosponsored by arguably the most liberal man in the Senate, Minnesota’s Paul Wellstone, and one of the most conservative, Sam Brownback of Kansas. (Brownback was a leader of the Christian Right takeover of Kansas Republican politics that has made the Alf Landons and Bob Doles of Kansas history look distinctly left wing.) It is almost unbelievable that a movement could be popular in the aggressively secular environs of Cambridge, Massachusetts, where the coffee shops are a lot more crowded on Sunday mornings than the local churches, yet have an educational director who publicly wished that God would add “another star” to Jesse Helms’s “crown.” Just this summer, the new abolitionists have gotten Republican fans of President Ronald “Constructive Engagement” Reagan to call for U.S. disinvestment in Sudan, a strategy most of them opposed regarding South Africa in the 1980s.

Startling as these feats are, there are other ways in which the new abolitionists fall far short of their role models. The most radical nineteenth-century abolitionists were stern critics of their own culture, ready to extirpate the evil of slavery “root and branch,” even if that meant destroying the Union and revolutionizing American society. They were up against an institution of basic importance to the American economy, South and North (as Yale University has recently discovered), one with strong defenders in the political structure of their times. William Lloyd Garrison came to believe that the Constitution itself was a “covenant with death and an agreement with Hell,” and famously burned it at a public meeting.

By contrast, the new abolitionists direct most of their outrage against foreign countries and certain immigrant cultures within the U.S. The most conspicuous villains in the new abolitionist literature are Arabs and other Muslims, one of the last remaining groups that American popular culture still freely demonizes. It is not insignificant for the movement’s appeal to conservatives that many of the enslaved peoples are Christians.

At the same time, the new abolitionists’ proposed policy solutions take the distinctly modern form of mostly cost-free miniproposals, like the cocoa labeling initiative and their new cause, a “Sudan Peace Act” that would have oil companies doing business in Sudan barred from the New York Stock Exchange. The foundations of the American economy will not be threatened by these blows–though the industries involved are still lobbying against them–and doubtless the new Republican abolitionists fully understand that.

Leaving aside the root and branch reformation of American race and labor policies, the new abolitionists seem to devote most of their money and energy to the controversial practice of “redeeming” slaves in Sudan. These redemptions involve paying armed men to take enslaved people away from their masters and deliver them to visiting foreigners for transport back to their villages or out of the country. Critics have argued that this practice actually creates a market in slaves. And while the new abolitionists take steps to avoid this result, they clearly are plugging into, rather than challenging, the existing cultures of raiding and captive taking that victimized the rescued people in the first place.

The new abolitionists have also identified a much more amorphous form of slavery to abolish than did the old. The “slavery” the e-abolitionists target is less a basic domestic institution than a disparate collection of bad social and economic situations: from debt peonage, to kidnapping, to child labor, to prostitution, to a few cases of outright chattel slavery. The miscellaneous nature of modern slavery does not make the situations iAbolish details any less evil or worthy of condemnation. But it does raise the question of whether “slavery” is the most appropriate label for dealing with them.

What do all these situations have in common with each other, and with antebellum American slavery? For one, almost all represent the social and political consequences of economic globalization, something that has been going on for centuries but clearly sped up during the 1990s. Forced labor is one of the things that can happen when very wealthy societies with highly developed market economies and power on the world stage get connected with much poorer, weaker societies that have things they need. Desire for the wealth to be gained overwhelms the fragile institutions protecting human rights in the poorer countries, as existing forms of conflict and oppression become tools for supplying what the world economy demands. So the fading, guilty-minded institution of slavery in the southern U.S. mushroomed and grew belligerent in response to the rise of northern and European textile manufacturing, and so the less lovely features of many poverty-stricken, starkly inegalitarian societies in Africa, Eastern Europe, Latin America, and Asia have been mobilized to supply the world’s wealthier nations with cheaply manufactured goods or scarce resources. Sometimes the new abolitionists apply the term “slavery” to the results, as with West African cocoa. And sometimes they don’t, as with the rush for coltan (a mineral ore used in the manufacture of advanced electronic devices such as portable phones and computers) that has fueled many of the horrors during Congo’s civil war. Either way, it sucks to be a poor person caught up in the ruthless efforts of local elites, militias, and “entrepreneurs” to get their cut of that foreign wealth.

Some new abolitionists may want to deny this. The iAbolish FAQ Page claims that slavery in Sudan is “not economic,” and while this may be true in terms of the labor that Sudanese slaves do, it leaves out the chief reason for fighting over southern Sudan or terrorizing the populace there, its oil resources.

The new abolitionism represents a rare effort to make the United States–or, rather, individual citizens–take some responsibility for a few of these problems, but it does so in a way that may not adequately acknowledge the complicity of our whole way of life in spawning them. Americans have always been against slavery, since the days of the Revolution when the term in politics usually referred not to human bondage but to a propertied white man being taxed by a government that did not allow him effective representation. With such absolutist notions of liberty, slavery eventually came to seem an intolerable evil to most Americans in whatever forms they found it.

The problem is, we find slavery such a uniquely monstrous evil that we have in the past forgotten to clean up after the monster is killed, much less to inquire into how it came into being. It was much easier for Civil War-era Americans to abolish the institution of slavery than face the egregious economic inequalities of Southern society, and so the poverty of Southern blacks and the ruthlessness of Southern white property owners combined quickly to create a new, less formal type of servitude, one that the self-congratulatory North left largely unmolested for a century.

Will labeling the many forms of exploitation and cruelty that are so prevalent around the world “slavery” prevent such an outcome? Or will it merely postpone our much needed reckoning with the brutal inequalities that make our discount shopping possible? I hope the answer is the former, but I fear it may be the latter. Our track record on pursuing the economic underpinnings of moral evils is not good. And now it is not a regional legal institution that needs abolition, but our whole manner of dealing with the rest of the world.

Further Reading:

On the old abolitionists as cutting-edge media politicians, see David Paul Nord, “The Evangelical Origins of Mass Media in America, 1815-1835,” Journalism Monographs 88 (1984): 1-30; and Richard R. John, Spreading the News: The American Postal System From Franklin to Morse (Cambridge, Mass., 1995). For visual examples of, and links to, some of their productions, see these sections of the Library of Congress online exhibits The African-American Odyssey and The African-American Mosaic.

For additional, late-breaking comments on this and other historical-political topics, visit “the Historical Punditry Page.”

 

This article originally appeared in issue 1.4 (July, 2001).


 




Grow Up, America: Choose Our Better History

I have long thought that now-President Obama’s reputation as an orator was a little inflated, more by a media and public starved for a leader who could speak in complete sentences and cogent thoughts than by the man himself. That is an observation, not a criticism. My short speech-writing period left me with a very lively sense of how hard and ill-advised it is for a real modern human being to write or speak like a JFK film clip. Lots of Democratic politicians have hurt themselves rhetorically by trying to channel JFK. When they try MLK, it is generally even worse.

Today’s inaugural address was much like Obama’s convention acceptance speech in wisely avoiding Sorensenian flights of inspirational rhetoric and preacherly flourishes, but instead presenting liberal values and a post-imperial world view in forms that Americans raised on decades of Reaganism might be able to accept. Here is a passage that struck me:

We remain a young nation, but in the words of Scripture, the time has come to set aside childish things. The time has come to reaffirm our enduring spirit; to choose our better history; to carry forward that precious gift, that noble idea, passed on from generation to generation: the God-given promise that all are equal, all are free, and all deserve a chance to pursue their full measure of happiness.

In reaffirming the greatness of our nation, we understand that greatness is never a given. It must be earned. Our journey has never been one of short-cuts or settling for less. It has not been the path for the faint-hearted – for those who prefer leisure over work, or seek only the pleasures of riches and fame. Rather, it has been the risk-takers, the doers, the makers of things – some celebrated but more often men and women obscure in their labor, who have carried us up the long, rugged path towards prosperity and freedom.

Nothing special there rhetorically — even the nice “better history” line turns out to be recycled from Obama’s late campaign stump speech. Yet what he was saying was rather noteworthy, coming from a U.S. president. Here and in other parts of the speech, the infantile exceptionalism that has become nearly our national creed was quietly but firmly rejected. Our freedom, wealth, and power relative to other nations do not exempt us from the exigencies of history or the rules of morality, Obama declared. Quite the contrary. We are not authorized to “do as we please” just because we are America; our activities have an impact on other peoples that must be taken into account, and that accounting must modify our behavior. Poverty, injustice, fear, evil, and incompetence all exist in modern America and as part of our tradition. We can and must choose our “better history,” and also choose not to dwell on the worst, but the worst is still there, some of it sitting on the inaugural dais, in a wheelchair.

As in the convention speech, there was also a distinctly liberal economic message in Obama’s inaugural address, but delivered in so mild and sensible a fashion as to be almost impossible for all but the most hardened ideologues to disagree with. The free market is a powerful tool for generating wealth, but it cannot work properly without the “watchful eye” of government. Otherwise the market will “spin out of control.” The last line quoted above, about “the risk-takers, the doers, the makers of things” was one that many listeners (including Fox’s Brit Hume) probably heard as a shout-out to capitalist entrepreneurs. What it really was, or perhaps simultaneously acted as, was a little restatement of the labor theory of value that can be linked back to the producerism that has been the heart of so many past radical movements in American history. True wealth was not created by amassing “riches,” Obama argued, but instead by making things through our labors.

I make no claim that there is anything radical about Obama, or even Populist, and I worry about the Wall Street/Ivy League establishmentarians he has guiding his economic policy here at the outset. Yet he does represent and express the better part of our historical political tradition. I am happy that we chose it and look forward to the day when it does not take a national crisis to bring some of those better angels out.

 

This article originally appeared in issue 9.2 (January, 2009).


Jeffrey L. Pasley is associate professor of history at the University of Missouri and the author of “The Tyranny of Printers”: Newspaper Politics in the Early American Republic (2001), along with numerous articles and book chapters, most recently the entry on Philip Freneau in Greil Marcus’s forthcoming New Literary History of America. He is currently completing a book on the presidential election of 1796 for the University Press of Kansas and also writes the blog Publick Occurrences 2.0 for some Website called Common-place.




Whose Great War for Empire? British America and the Problem of Imperial Agency

The world war that commenced on the banks of the Ohio in 1754 has never been an easy one to name. The French and Indian War–probably the war’s oldest designation and one still popular with many Americans–suffers from the obvious defect of only referring to the colonial dimensions of what was, ultimately, a global contest. European historians eventually settled on the Seven Years’ War as a more inclusive title; however, it, too, fails to account for the nine years that the war lasted in America, nor does it accurately describe the conflict in India, which ended some two years after the Peace of Paris (1763) concluded hostilities elsewhere. Despite its panoramic sweep, even Lawrence Henry Gipson’s Great War for Empire implicitly privileges the extra-European theaters over Germany, where the war had more to do with maintaining the balance of power among the Continent’s principal states. Although victors typically claim the right to bestow definitive names, the British themselves long referred to it simply as “the late war” or, when there seemed to be a need for greater clarity, “the late war with France.”

If the question of what to call the Seven Years’ War poses difficulties, it is largely because it touched so many people in so many different parts of the world. Among the more significant of the war’s legacies were the origins of the transatlantic movement to abolish slavery, the erosion of Mughal authority in India, and the beginning of the end of the ancien régime in France. In the war’s aftermath, even the most benighted of Europe’s rulers appeared to embrace the cause of Enlightenment and reform, with Catherine the Great taking the extraordinary step of offering to help finance the completion of Diderot’s Encyclopédie. In terms of sheer complexity, though, none of the war’s consequences can rival the tumultuous effects on Britain itself, both at home and among the outlying regions of its Atlantic empire. Although the victory heralded Britain’s apotheosis as the greatest imperial power since Rome, it also brought a host of related problems, including a crushing deficit, new territories in every region of the globe, and diplomatic isolation in Europe. An early sign of trouble was the outburst of anti-Scottish xenophobia in England centered on the cashiered militia officer John Wilkes; another was the Stamp Act (1765), Britain’s misguided attempt to force the Americans to help pay for their own defense. But the most surprising consequence of all was Britain’s apparent impotence in the face of the colonial protests that resulted, an incapacity signaled most clearly in Parliament’s humiliating repeal of the stamp tax during the spring of 1766. The British had triumphed in every quarter of the globe, inviting the admiration of friend and foe alike. As they pondered the fruits of victory, however, well might they have asked, whose great war for empire?

The answer to this question was–and is–anything but straightforward. As Fred Anderson’s magisterial new book demonstrates, British America alone contained at least three different groups, each with its own vision of what the war meant. In descending order of the power at their disposal, they were the American colonists, the Indians of the trans-Appalachian interior, and the British government. Of course, most people at the time would have placed the British first; however, the government’s ability to control events in the colonies was limited. Not only was the initial crisis in the Ohio Valley largely shaped by the machinations of colonial speculators and the Iroquois Confederacy’s desperate attempt to retain its authority over the region’s other tribes, but even after Whitehall committed some thirty thousand regulars to North America, the British repeatedly found themselves playing by someone else’s rules. In the continent’s interior, this meant recognizing that the Indians were allies, not subjects, and that British and provincial officers were powerless to prevent them from engaging in a range of “barbaric” practices, including taking women and children hostage, scalping French prisoners of war (officers as well as enlisted men), and insisting on lavish gifts as the price for accompanying the king’s troops into battle. Despite obvious cultural differences, the same dynamic was evident in Britain’s dealings with the colonists, who repeatedly refused to quarter British soldiers, who regarded military service–whether in provincial units or the regular army–as a strictly contractual undertaking, and whose assemblies invariably insisted on parliamentary subsidies to help them raise the troops necessary for their own defense. 
In the exasperated words of Lord Loudoun, the imperious Scot who spent two campaigns in the colonies as the British commander-in-chief, America seemed to be a maddeningly chaotic place, with no law but “the Rule every man pleases to lay down for himself” (148).

As long as the conquest of French Canada was in doubt, the British had no choice but to accept such limits on their authority. At no point, however, did they see their willingness to do so as more than a temporary expedient. Once the war was over, they accordingly attempted to impose new terms, ending the costly practices of Indian gift giving and taxing the colonists to help pay for the ten thousand regulars that remained in garrisons west of the Alleghenies. The results, of course, were disastrous, with Pontiac’s Rebellion crippling Indian relations in the interior, while the Stamp Act crisis threatened Westminster’s authority up and down the eastern seaboard. Anderson believes that neither irruption was inevitable and that Britain might have retained its North American empire, had George III’s ministers acted less precipitously. Yet, as Anderson also notes, it is not clear whether the persistence of imperial authority would have made much difference for any of the three parties involved. At most, the British government would have been left with a “hollow” empire, where the exercise of effective authority depended on the consent of the colonists and their representatives. Under such conditions, moreover, Britain would have been able to offer only limited protections to any of America’s other inhabitants, including, especially, the Indians, whose lands in the Ohio Valley were already being encroached upon by a steady influx of European settlers. In a sense, the Seven Years’ War ended up confirming the “American” character of Britain’s North American empire, an entity over which metropolitan authority had never been more than tenuous.

Without doubt, there is much to recommend this argument, and not just for the way it sets up a promised sequel in the American Revolution. Indeed, despite some important differences, Anderson’s interpretation of the Seven Years’ War in North America bears a striking resemblance to the one that Peter Marshall and Christopher Bayly have proposed for India during the same time period. As happened in Iroquoia, the Mughal Empire’s progressive collapse during the later 1740s and 1750s drew the British, who had been in India as traders since the early seventeenth century, ever more deeply into politics on the subcontinent, first as the auxiliaries of local grandees, eventually as political actors in their own right. When the East India Company assumed effective powers of government in Bengal (1765), however, it did so not through the imposition of British or European forms, but by acting as the Mughal Emperor’s diwani (a Muslim office roughly analogous to a European tax farmer). Despite the temptation to act unilaterally, moreover, the company’s officials never forgot that they owed their authority to the cooperation of local elites, who in turn accepted British rule because they assumed they could use it to their own advantage. Although there were undoubtedly vast differences between them, India’s experience of British rule during the eighteenth century points to the same devolution of imperial agency as in America, what Jack P. Greene has identified as a pattern of “negotiated authority,” whereby the unlimited powers claimed by officials at the empire’s center were subject to constant revision by indigenous and creole brokers on the periphery.

All this suggests that the Seven Years’ War was actually a war for several different empires–each shaped as much by provincial conditions as by metropolitan goals–with the one that culminated in the independence of the United States being only the most conspicuous. At the same time, though, it is important to remember that the war was also a “British” war for empire, whose chief effect was to impose an unprecedented degree of political unity on what had previously been a set of scattered, frequently unconnected regions. Despite the crucial part played by men and women on Britain’s periphery, the war’s meaning was no less dependent on the metropolitan public, especially the public “without doors,” whose bellicose patriotism transformed how the British viewed both themselves and their place in the wider world during the eighteenth century’s middle decades. Up to that point, the nation’s extra-European activities, whether in South Asia, the West Indies, or North America, had typically possessed a piratical, buccaneerish quality, with most Britons regarding their imperial project as an adventure “beyond the line,” to be embraced only when it did not affect their affairs in Europe. In the rapidly changing environment of the 1730s, 1740s, and 1750s, however, even minor colonial imbroglios began transmogrifying into international incidents of the first importance, causae belli like the unfortunate Captain Jenkins’ severed ear, which neither Parliament nor the king’s ministers dared ignore. Only with this shift in metropolitan attitudes could Washington’s ill-fated skirmish at the headwaters of the Ohio become the opening engagement in the first European war of truly global proportions, rather than an engagement of merely local significance.
Likewise, it was only because of this shift that the British people proved willing to make such extraordinary sacrifices during the Seven Years’ War, including escalating taxes, public borrowing on a scale never before seen, and a deeply unpopular militia reform, which prompted England’s worst rural riots of the eighteenth century.

This is not to discount the agency of either the colonists or the Indians; rather, it is to say that the metropolitan public’s imaginative capacity to connect events in North America, Europe, India, Africa, and the West Indies was equally decisive in shaping the fate of the British Empire in each of its outlying regions, including the Atlantic seaboard. On repeated occasions during the 1760s, the colonists were forced to respond to the British government’s imperial policies, not only in terms of their relevance to conditions in North America, but in ways that also acknowledged connections between their own situation and conditions elsewhere, including Britain’s own crushing tax burden, the annual £400,000 subsidy that the East India Company placed at Parliament’s disposal in 1767, and the metropolitan perception that the colonists were British subjects, who could be governed in the same manner as men and women in England, Scotland, and Wales. Even the British public’s mounting qualms over the slave trade affected the imperial crisis, making it difficult for colonial planters to complain of the figurative dangers of British slavery when they were personally responsible for far more insidious forms of bondage. In each instance, the integrated nature of the wider British world placed definite limits on the extent to which Americans could control the terms of debate, let alone their own political destiny, even when the issue involved something as apparently clear-cut as the English right not to be taxed without representation.

For this reason, the Seven Years’ War was both an essential prologue to the American Revolution and a key event in the integration of the wider British Empire. It would obviously be foolhardy to give one consequence priority over the other, not least because the British context continued to shape the course of American history, even after George III grudgingly recognized the independence of the United States in 1783. To borrow Richard White’s useful term, Britain’s history as an imperial power occurred on a “middle ground” where no one group could completely dominate the others. As early American historians shift the discussion of their own subject onto this embattled landscape, they, too, will need to accommodate the histories of many other groups and nations, whether European, African, or indigenous American. If Fred Anderson’s concern is largely with the Seven Years’ War as a founding moment in what became the United States, one of his book’s many strengths lies in the way it shows just how multiethnic and transnational the crucible of war that preceded the Revolution ultimately was.

Perhaps this is why the apparently minor question of what to call the Seven Years’ War refuses to go away. It was easily among the most decisive British conflicts ever, with the annus mirabilis of 1759–the remarkable string of victories with which Britain vanquished France in Europe, Africa, Asia, and the Americas–eclipsing any other comparable moment except, perhaps, the period between Trafalgar and Waterloo. Indeed, in a very real sense, the war inaugurated Britain’s two-hundred-year reign as the world’s leading imperial power. For all its stupendous scale, however, the Seven Years’ War was also a war whose first and most memorable name was coined by the colonists, using words that gave equal weight to Britain’s French and Indian adversaries. Perhaps on some unacknowledged level even the British recognized that theirs was a hollow victory and that they would not be the only ones to profit from it, still less its primary beneficiaries.

Bibliographic Essay

The British dimension of the Seven Years’ War is the subject, most recently, of Eliga H. Gould, The Persistence of Empire: British Political Culture in the Age of the American Revolution (Chapel Hill and London, 2000), especially chapters 2-4; see also Marie Peters, Pitt and Popularity: The Patriot Minister and London Opinion during the Seven Years’ War (Oxford, 1980); Linda Colley, Britons: Forging the Nation, 1707-1837 (New Haven, Conn., 1992); and Kathleen Wilson, The Sense of the People: Politics, Culture, and Imperialism in England, 1715-1785 (Cambridge, 1995). On the origins of British India, see P. J. Marshall, Bengal: The British Bridgehead, vol. 2 of The New Cambridge History of India (Cambridge, 1987); C. A. Bayly, Indian Society and the Making of the British Empire (Cambridge, 1988). Along with Anderson’s magnum opus, readers interested in the dynamics of European-Indian relations in North America should consult Richard White, The Middle Ground: Indians, Empires, and Republics in the Great Lakes Region, 1650-1815 (Cambridge, 1991). The war’s role in the origins of the Anglo-American movement to abolish slavery–a subject on which Anderson has little to say–has been treated by numerous scholars, most recently by Christopher L. Brown, “Empire without Slaves: British Concepts of Emancipation in the Age of the American Revolution,” William and Mary Quarterly, 3d ser., 56 (1999): 273-306. For the comparative problem of imperial authority both within the British Empire and elsewhere in the Atlantic world, see Jack P. Greene, Negotiated Authorities: Essays in Colonial Political and Constitutional History (Charlottesville, Va., 1994); there is also much of value in P. J. Marshall, ed., The Eighteenth Century, vol. 2 of The Oxford History of the British Empire (Oxford, 1998). The politics of naming military conflicts generally is, of course, a principal concern of Jill Lepore, The Name of War: King Philip’s War and the Origins of American Identity (New York, 1998).
Although the Seven Years’ War has been studied extensively from the standpoint of each of the fields mentioned here, there is no modern study that considers the war as a truly global, transnational conflict.

 

This article originally appeared in issue 1.1 (September, 2000).


Eliga Gould is Associate Professor of History at the University of New Hampshire, where he teaches early American, British, and Atlantic history. He is the author of The Persistence of Empire: British Political Culture in the Age of the American Revolution (2000) and America and the Atlantic World, 1670-1815 (2002, expected), and is co-editing with Peter S. Onuf Empire and Nation: The American Revolution in the Atlantic World (2001, expected). He is also currently at work on a book-length study of changing British and American conceptions of the world beyond Europe between 1750 and 1815.




The Louisiana Purchase

Small Stock

Roger G. Kennedy

The Louisiana Purchase, an achievement doubling the size of our country, not only should have been a better deal, but indeed could have been: better for the people—black, white, and Native American—then occupying the territory, better for those who came to occupy it thereafter by migration from the United States, and better, especially, for those who were driven into it as slaves. As Thomas Jefferson wrote to Albert Gallatin, “How much better to have every 160 acres settled by an able-bodied militia man, than by purchasers with their hordes of Negroes, to add weakness instead of strength.” Yet slave-owning “purchasers” after 1803 were enabled to bring “their hordes” of slaves into Louisiana because of the terms of the purchase agreement, as interpreted by the Congress during Jefferson’s own administration, and because of a series of decisions made both by his administration and by a Congress sympathetic to it.

Similar decisions had already brought slavery westward from the plantations of the Chesapeake and the Carolinas to the edge of the lands purchased. It could have been otherwise. Each of those decisions was narrowly made, commencing in 1784, when Jefferson, then a representative to Congress from Virginia, lamented that “the fate of millions unborn hung on the tongue of one man, and heaven was silent at that awful moment.” Language to which Jefferson gave his assent, prepared for congressional action by Timothy Pickering, representative from Massachusetts, would have prohibited slavery in all territories between the Appalachians and the Mississippi except Kentucky. It failed by one vote.

The “one man” whose tongue might have altered these outcomes was James Monroe. In 1786, as chairman of a committee to take up again the ordinance of 1784, he did nothing to restore the language of Pickering and Jefferson. We are told by Monroe’s biographer, Harry Ammon, that the committee produced “a report adhering closely to his [Monroe’s] views . . . [yet] the provision excluding slavery, struck out in 1784, was not restored . . . Jefferson made no comment about the omission . . . Monroe never explained why he did not incorporate this provision, to which Jefferson attached so much importance.” Nor did Jefferson. Slavery moved to the banks of the Mississippi, facing westward toward the empire purchased in 1803. Then Monroe presided over the final negotiations for that Purchase, in which was inserted the fatal language assuring, in the interpretation of the Jeffersonian Congress, the rights to hold and to import slaves into the vast dominion included in the Louisiana Purchase.

Under the leadership of seven evangelical clergymen, Kentucky’s constitutional conventions of the 1790s had almost succeeded in repairing the damage done in 1786 and bringing Kentucky into the Union as a free state. Just before the Louisiana Purchase, even in Mississippi Territory, the lower house of the legislature passed anti-slavery resolutions. Plantation slavery was in decline in Louisiana when it was purchased. Thereafter, Arkansas and Missouri came into the Union as slave states only by bare majorities. Thomas Jefferson’s Lost Cause, a republic of free and independent yeoman farmers, was lost in a series of insufficiently contested choices. That was a great loss, in economic, environmental, and moral terms.

And, of course, there were costs in money incurred in the purchase of a territory from Napoleon, who did not own it, at a time when his failed Haitian expedition demonstrated that he had not the means to wrest it away from Spain and hold it against a determined American administration. Alexander Hamilton, Aaron Burr, and Andrew Jackson all preferred either an inexpensive purchase from Spain or the acquisition of the territory by force of arms. Jefferson and Monroe did not. The planters in general were unlikely to rejoice in a military conquest of Louisiana headed by either Hamilton or Burr, both sworn enemies of the slave system. And the planters got their way.

As for the peoples present in Louisiana when it was purchased, the costs were obvious. Slavery gathered strength. A new and muscular power came on the scene, bent upon driving Indians westward, out of the arable plains. With astonishing dexterity, Jefferson was able to get the Indians living east of the Mississippi to pay for the Purchase itself. He explained to his old confidant John Dickinson that once “the lands held by the Indians on this side of the Mississippi” were obtained, “we may sell out our lands here and pay the whole debt contracted before it comes due.” That could be done by re-selling those Indian lands to the planters, those “purchasers with their hordes of Negroes” about whom he wrote Gallatin. Buying cheap, working the spread, and selling a little more expensively, the government he headed managed to achieve a remarkable transaction. The cost was low in cash, that is true, but high in other values.

Many Americans have since become the beneficiaries of Napoleon’s sale to Jefferson of an empire that did, indeed, become an Empire of Freedom after 1865. The Indians who inhabited nearly all of the territory he purported to sell; the Spanish empire, which had the superior claim to it among European powers; the Bonapartist empire that extorted it from Spain, held it for a twinkling, and sold it; the rising American empire that bought it; the white settlers who crowded into it; and the black slaves who worked much of it: all might make differing computations of its costs and benefits. At two hundred years’ distance, we may rejoice in the opportunity that sale and purchase have offered us. Until 1865, however, the Louisiana Purchase did not create an Empire of Freedom for many who lived within it—though it might have. The cost of “the deal” as it was made was very high. Indeed, the costs of the succession of “deals,” of which it was one, struck by the planters and those who acquiesced in their triumphal progress across the South, accumulated into a final terrible cost in Civil War.

Between 1776 and 1860, choices were made by those controlling the government of the United States, and the governments of its territories and states, determining whether or not slavery would be permitted within their boundaries. In 1803, the Louisiana Purchase did indeed double the extent of the territory conceded by the European powers to lie within the United States. (The Indians, of course, had other ideas.) After arrangements were made as part of that acquisition, slavery was given fresh encouragement in Louisiana and permitted to expand up the Mississippi Valley. A momentum of events began, eventuating in an attempted division of the Union by slave owners, slave sellers, and those they could convince to follow their lead. They so detested the prospect of restriction upon the continued spread of their system of forced labor that they sought to take the states they controlled out of the United States.

They had been threatening to do so since the 1780s. They had often raised the specter of disunion to convince a sufficient number of Northern senators and congressmen to permit them to have their way after the nation was placed under constitutional government in 1787: when the Southwest Territories were chartered in 1787-89, when Kentucky adopted its constitution in 1792, and when Mississippi Territory was organized in 1802. Yet all the while, from 1784 onward, as each new area was opened to slavery, eloquent men and women argued that keeping people in bondage was inconsistent with the nation’s founding documents. In 1805 the necessity to organize the Louisiana Purchase detonated a two-year debate as to how land use and labor use might determine civil society. The contention increased in ferocity as portions of the Purchase became the slave states of Louisiana, Arkansas, and Missouri.

Thomas Jefferson, the predominant political figure in the nation, had expressed in radiant language his aversion to slavery and his preference for a republic of free and independent farmers. In his early middle age, until 1784, he had offered proposals whereby a virtuous republic might wisely dispose of its public lands and encourage a benign labor system on those lands. In his later years he was fully informed of the choices being made, but interposed no public objection as his edifice of dreams was systematically reduced to rubble. He could not escape full knowledge of the consequences for the land itself of each decision. During his own presidency (1801-09) great plantations worked by slaves engrossed more and more of the choicest portions of a quarter of a continent. He was aware of that outcome. Therefore this is a tragic story.

The tragedy was, of course, larger than the disappointment of a single man. It was a national one: the nation as a whole had the power, over and over again, to stop its decline into civil war. As new domains were acquired by purchases and wars from the Indian nations, from France, and from Spain, the preferences most affecting the allocation of that land were those of owners of large plantations worked by many slaves. The great planters saw to it that the choicest property went into the hands of people such as themselves rather than to family farmers.

These were all political decisions made by narrow majorities. Each could have been tipped to another outcome. None was inevitable. Few political choices are when great moral questions are manifestly at stake. As these decisions were made, the contestants on both sides understood that the alternative labor system to slavery was family farming. And each of the choices between planters and family farmers left effects not only upon the nation, but upon the land itself, ordaining its future as well.

The land is where we live and where the consequences of our presence accumulate, determining what else we can do, and what we can no longer do. The land is thus the book of our lives. Each day we write upon it new pages, some splendid, some sordid, informing our progeny of the truth about us whatever we may write elsewhere. What we do is recorded upon it, indelibly, day after day. So it was between 1776 and 1861. So it is today.

 

This article originally appeared in issue 3.3 (April, 2003).


Common-place asks Roger G. Kennedy, director emeritus of the National Museum of American History; fourteenth director of the U.S. National Park Service; and the author, most recently, of Mr. Jefferson’s Lost Cause: Land, Farmers, Slavery, and the Louisiana Purchase (New York, 2002), whether the Louisiana Purchase could have been a better deal.




What Changed During the American Revolution?

"Six Nations Map," by Guy Johnson (1771), engraving, page 1090 (vol. IV) from The Documentary History of the State of New-York, by E.B. O'Callaghan (Albany, 1851). Courtesy of the American Antiquarian Society, Worcester, Massachusetts.

Time and again between the earliest period of colonization and the Civil War, North American people waged ferocious war over what kind of place “their” America ought to be. The Revolutionary era was one such time. The Civil War was another. Yet though Founding Father narratives abound, serious study of the Revolution seems at a low ebb. Where are its passion, fear, hope, triumph, transformation, gain, loss, and tragedy? To borrow from Lenin, this Revolution might as well be just a Tea Party.

The Seneca leader Chainbearer knew better when it was over and his people had lost. So, somewhat later, did Washington Irving (in “The Legend of Sleepy Hollow”), Nathaniel Hawthorne (in “My Kinsman, Major Molineux”), Frederick Douglass, and Elizabeth Cady Stanton. The Revolution’s course of human events overwhelmed existing institutions, beliefs, and practices. It provoked enormous creativity and it brought huge loss. All successful revolutions may ultimately be alike, in that they overthrow one order and institute another. But each successful revolution is successful in its own way. What, then, of the colonial order from which the American Revolution emerged? What did the Revolution transform? What did it leave unchanged? What did it render problematic that previously had been mere fact?


To comprehend such questions we need to reach beyond the British colonies and early United States. Colonial settlements the length and breadth of the hemisphere were neo-Europes, enmeshed in ocean-spanning imperial structures. From their own viewpoint, Scotland, the Pays d’Oc, and Vizcaya were peripheral in relation to London, Paris, and Madrid. But though distant from their capitals, such places were parts of metropolitan cores. New France, New England, and New Spain were otherwise. In the British case, that fact underpinned the ultimately irresolvable problem that the attempted reforms of the 1760s and 1770s provoked: What did it mean to “belong” to Britain outside the central British realm? The Revolution ended that whole problem with the entry of the United States into Europe’s Westphalian state system, able now to do all the “acts and things that independent states may of right do.” It would be sovereign in the same sense as Britain or Spain, dealing with them as a juridical equal, defining its boundaries, setting its terms of belonging, freedom, and obligation, and, internally, answering to no power higher than itself. Its new order was republican rather than monarchical, but Europe’s great theorists of sovereignty—Jean Bodin, Thomas Hobbes, and Emer de Vattel—had allowed for that possibility. In this sense, the American Revolution transformed a set of incomplete colonial neo-European polities into a single full participant in the European order, calling itself the United States.

Throughout the hemisphere, however, colonized America differed strikingly from Europe. Slavery—which did not exist in Britain, the Netherlands, and France, and which was of minor direct importance in Spain and Portugal—spread wherever colonizers went, engulfing both Native people and Africans. George Washington’s Mount Vernon only looked like an English gentleman’s estate; its enslaved labor force made it fundamentally different. Slaves did not do the productive work of London, Bordeaux, Amsterdam, Oporto, and Seville as they did of New York City, Cap Haitien, Willemstad, Recife, and Havana. Africans came to the Americas as captives, but in plantation quarters, on back streets of colonial towns, and in free “maroon” settlements from Virginia’s Dismal Swamp through Jamaica’s Blue Mountains to Brazil’s Quilombos, they created neo-African communities as well as they could.

 


Sticking just to the northern continent, “colonial” America reached far beyond the Neo-Europes and Neo-Africas, to wherever European power, diplomacy, war, trade, and non-human species could be felt. Unlike anywhere in Europe, colonial areas were contested rather than defined. Guillaume de L’Isle’s 1702 map of “La Floride” splashed color to distinguish British, French, and Spanish zones all across North America. But beneath his tints, in small print, were inscribed the names of the native peoples who actually were in control. Two decades later, a Chickasaw map comprehended much the same space, depicting Native communities from the Red River to the upper Ohio, without any recognition of European claims. John Mitchell’s supposedly definitive 1755 map of British America showed Virginia, the Carolinas, and Georgia stretching toward the Pacific. French and Spanish cartographers would have disagreed, and so would Cherokees and Creeks in the southern Appalachians, Choctaws, Chickasaws, Osage, and Quapaws in the Mississippi Valley, and Comanches on the High Plains.

Consider one late-colonial artifact. In 1771 cartographer Guy Johnson published a map “of the Country of the VI Nations [Iroquois].” Johnson rendered Iroquoia as beginning at a line that ran southward from just east of Oneida Lake to the Pennsylvania border and as including the whole northern country between Lake Champlain and Lake Ontario, “the boundary of New York not being closed.” Within the Six Nations he drew only three of the Finger Lakes; the remainder could not “be laid down with certainty.” The Iroquois guarded knowledge of what was theirs, even from Johnson, whom they knew well. They had bargained hard at the great Fort Stanwix treaty conference of 1768 for the line that separated them from New York. Mohawk country was already lost, and they wanted this new boundary to last. But playing the game of cartographic boasting as ruthlessly as any statesman, speculator, or settler, they also gave away a vast area that was not theirs at all. Delawares, Shawnees, and Cherokees were furious.

New York Governor William Tryon, the dedicatee of Johnson’s map, thought entirely differently from both the Iroquois and Johnson, reporting to the Lords of Trade in 1774 that New York extended all the way to Detroit. Iroquoia belonged to his province, not to its people. But provincial authorities had nothing to do with the boundary line on Johnson’s map. It had been drawn by Iroquois negotiators and Sir William Johnson, Guy’s uncle and Britain’s Superintendent of Northern Indian Affairs, who was the Crown’s direct agent. His power fitted with what colonials were coming to see as a London plan to control them “in all cases whatsoever.” Sir William lived as befitted a marcher lord, in neo-European gentlemanly style at Johnson Hall, a few dozen miles down the Mohawk River from Fort Stanwix. His life there, however, with his Mohawk wife Molly Brant and enough slaves to run a southern plantation, was entirely colonial American. Taken together, Guy Johnson’s map, the Fort Stanwix Treaty, and Sir William Johnson’s power and way of life brought all the themes of the colonial order into focus: contested space, imperial power, neo-European mimesis, and the prevalence of slavery.

 


Running through all these dimensions were the political problems of authority, power, and belonging on which the British Empire broke. Colonial settlers believed they had grown up and could run their world. Beginning in 1763, imperial reformers set out to teach them otherwise. Colonials wanted Indian land, but Indians knew how to defend themselves. Far from being the plantation South’s “peculiar institution,” slavery was everywhere, in both law and fact. In the midst of it all, only one power seemed absolute—that of masters over their slaves. Here, as unplanned, incoherent, and vibrant as Europe’s Ancien Régime, was the colonial old order.

The Revolution’s creation of a sovereign American People and of that People’s instruments of power resolved the imperial problem. With remarkable speed, it also settled the colonial era’s fundamental contestation about American space. Drawing the modern borders of the eastern states and creating the American system of western territories that could become states in their own right were part of the resolution, ending the problem of supposed settler inferiority. Just as important, if not more so, is that the new, self-conscious, empowered American People took rapid possession of all the land it could grasp, entirely on its own terms, achieving in mere decades what centuries of European empire builders had failed to do. Meanwhile, the colonial era’s other great legacy, slavery, changed from an unchallenged universal fact into the South’s “peculiar institution.” The problem that destroyed the colonial order emerged from a combination of contested imperial power and contested American space. The problem that nearly destroyed the United States emerged from contested national power over freedom and slavery, within space that the Republic called its own.

From the beginning, Europe’s children in America connected themselves with both Native people and Africans. The mature colonial order presented one set of such connections, turning ultimately on space; the young Republic presented another set, turning ultimately on slavery. Neither was a European problem at all. The Revolution replaced a colonial-era landscape of contested spaces with triumphalist notions about an Empire of Liberty, Manifest Destiny, and the Moving Frontier, in which Native people became mere “Indians Not Taxed” and, later, “domestic dependent nations.” It also turned slavery from an accepted, universal fact into a pressing issue, opening a breach into which Black Americans stepped, and raising the question of whether, should slavery end, they would belong to the Republic as citizens or, like Indians, be excluded from it. Appreciating such continuities and disruptions, such gains and losses, transformations and consequences of the Revolutionary era, may offer a way to bring the American Revolution back to life as a subject of compelling and deeply human interest.

Further reading:

The thirteen essays collected in Juliana Barr and Edward Countryman, eds., Contested Spaces of Early America (University of Pennsylvania Press, 2014) reach well beyond the American Revolution both in geographical and chronological terms. But taken as a whole they bring out important differences between the colonial/imperial order that began to take shape with the Columbian encounter, developed and flourished during the seventeenth and eighteenth centuries, and was radically transformed during the hemispheric era of national revolutions and state formation. Both Eliga Gould, Among the Powers of the Earth: The American Revolution and the Making of a New World Empire (Cambridge, Mass., 2012) and Leonard J. Sadosky, Revolutionary Negotiations: Indians, Empires, and Diplomats in the Founding of America (Charlottesville, Va., 2009) continue that theme, Gould in the sense of the United States joining Europe’s Westphalian state system and Sadosky showing how a colonial order structured around Native-European diplomacy gave way to a post-revolutionary order structured around national sovereignty for international purposes and state sovereignty for internal purposes. Finally, Edward Countryman, Enjoy the Same Liberty: Black Americans and the Revolutionary Era (Lanham, Md., 2012), addresses what its subjects did with the opportunities and the partial liberation of the Revolutionary era and how the problem of an American nation divided between slavery and freedom emerged from that era.

 

This article originally appeared in issue 14.3 (Spring, 2014).


Edward Countryman is University Distinguished Professor of History at Southern Methodist University in Dallas, Texas.




American Midrash

Every July Fourth, Americans celebrate their nation’s independence. In Washington, some four hundred thousand people crowd the Mall for a National Symphony Orchestra concert (this year featuring Chuck Berry and Aretha Franklin) followed by fireworks at the Washington Monument. Meanwhile, in Main Streets and backyards across the country, Americans watch parades and barbecue burgers.

But who celebrates Constitution Day? Who even knows that it comes every September 17, and that it commemorates the closing of the Constitutional Convention? There are no fireworks and no hot dogs. No one gets off from work or school. Instead, Constitution Day begins when the president of the United States (or, in a pinch, his wife) recites the Constitution’s Preamble (“We the People . . . “), not on national television but on a conference call with school children. And the day’s main celebration does not take place in Philadelphia, where the Constitution was written, but at an amusement park in Southern California, in Knott’s Berry Farm’s replica of Independence Hall (a building that also houses a copy of the Liberty Bell that, its Website oddly boasts, weighs in at “only five pounds less than the original”). All of which is brought to you, not by the National Park Service, which pays for Washington’s July Fourth celebration, but by Constitution, Inc., a nonprofit private organization.

The lowly status of Constitution Day is at least partly a consequence of the status of the Constitution itself. Yes, the Constitution is often publicly praised. But it is more often debated and argued about. As attorney general, John Ashcroft is charged with upholding the Constitution. Yet his six-year term in the Senate included seven different attempts to change it, among them one measure that would have made it even easier to amend in the future. Both as a statement of principles and a cause for celebration, the Constitution often seems to play second fiddle to the Declaration of Independence. Both documents were debated and accepted in the old Pennsylvania State House, a building since renamed not Constitution, but Independence Hall. The Declaration has been jubilantly celebrated since 1777, but in 1987 Congress refused to allow even a one-time, one-day holiday for the bicentennial of the Constitution.

Why does America’s Constitution Day fail to attract the attention that Constitution Days receive in countries like Norway (where it is a major holiday) and Japan (where it is an integral part of perhaps the greatest holiday of the year, Golden Week)? Americans’ lack of enthusiasm is partly due to the odd timing of the document itself. The Constitution was drafted some twelve years after the war with Britain started, eleven years after independence was declared, and four years after it was won. It was not even America’s first constitution, a distinction held by the ineffectual Articles of Confederation, which took almost as much time to ratify as to disintegrate afterwards. As a result, America ended up with two primary documents–and in the popular imagination, the Declaration often seems to overshadow the Constitution.

But Americans’ partiality for the Declaration has other roots. The Declaration contains inspirational phrases such as “all men are created equal” and “Life, Liberty, and the pursuit of Happiness.” The Constitution offers little more memorable than its most quoted three words: “We the People.” The document that describes our government does not sing. Perhaps this is not surprising; after all, the Constitution was written by a meeting and polished by a Committee on Style. The Declaration, on the other hand, had the advantage of being drafted by a single author, Thomas Jefferson, and being originally edited by both John Adams and Benjamin Franklin.

But the Constitution’s lack of stylistic flair cannot be attributed solely to writing staffs. The Declaration is a statement of principles and grievances. Since independence had actually been declared two days before, it served primarily as a propaganda piece. The Constitution needed to be accepted formally by the American people, and it needed to serve as a guide to practice. This collective and practical nature may be the key reason why Americans find it difficult to hold celebrations for the Constitution. The Declaration is advertising copy that seeks to close a sale. The Constitution is the owner’s manual that frustrates us and keeps us up late the night before Christmas.

 

In a different metaphor, that of the Hebrew Scriptures, the Constitution is the law (the Torah), and the Declaration the prophets. Prophetic language at its best (and the Declaration is surely that) recalls our moral commitments, our sense of rightness. Abraham Lincoln knew this. At a time of flagging zeal during the Civil War, his Gettysburg Address proclaimed that the nation was “dedicated to the proposition that all men are created equal.” David Walker, the son of a slave who helped inspire the abolitionist movement, put it more bluntly: “See your Declaration, Americans!!! Do you understand your own language?”

If the Declaration inspires us with lofty ideals, the Constitution vexes us with questions of interpretation. This disagreement, which began days after the release of the document by the secret convention that wrote it, is heightened by the distance that now exists between the Constitution’s statements and specific circumstances. Whatever James Madison meant by the right to “keep and bear Arms” in the Second Amendment, he wasn’t thinking of guns that fire four hundred bullets per minute; an extraordinary Revolutionary-era soldier would have needed a full minute to fire three or four rounds. Similarly, the Bill of Rights’ prohibition on unreasonable searches did not envision the use of an Agema Thermovision 210 thermal imager to see if a person was using heat lamps to grow marijuana in his house.

These problems have led to two primary schools of constitutional interpretation. One suggests that the problems of AK-47s and heat imaging can be resolved by simply looking more carefully at the Constitution. The words and intentions of the Founders provide all the necessary guidance. This idea has been called “originalism,” or, in Supreme Court Justice Antonin Scalia’s term, “textualism.” The other primary tradition of constitutional interpretation suggests that its words are less the end than the beginning of a discussion. Sometimes writers speak of a “living Constitution” or of traditions of interpretation.

Both schools of thought create further difficulties. Whose intentions or original meanings should prevail–those of Madison, of the Convention, or of the ratifiers? But, on the other hand, how can we call anything wrong if the conversation is everything? Jewish tradition again provides a helpful parallel, that of the Midrash. Midrashim (to use the plural) are interpretations, commentaries, and discussions. They respond to the difficulties of applying the written law of Moses in a setting where most Jews lived away from the Temple–and then when the Temple itself no longer existed. In attempting to create living connections between life and the law, the Midrash embraces the key insights of both schools of constitutional interpretation. Commentary, reinterpretation, and sometimes commentary upon commentary are the heart of Midrash. A Midrash recognizes that all possible questions have not been answered, that there are seeming inconsistencies in the texts, and that truths contained there might be understood even more deeply than the original author realized. But each Midrash also begins with the original text, warning against the danger of moving too far from it.

Seeing discussions about the Constitution as American Midrash helps us see the possibilities as well as the difficulties of celebrating the Constitution. In a society where the document can be contested in a courtroom rather than simply admired from afar, Americans may perhaps be too close, too deeply involved in using the Constitution to be able to celebrate it easily. A recent poll in Russia, where Constitution Day is a national holiday, showed that over half the respondents admitted that they didn’t know any specifics of their constitution. Of those who did, only a third thought the document was a good one. (Many Americans, by contrast, do not know much about what is in the Constitution, but virtually all approve of it.) Americans’ deep and abiding discussions and disagreements about the Constitution may not easily inspire the sort of celebrations that attract television cameras or the Park Service, but our continuing debates about the document should be cause for celebration anyway.

 

This article originally appeared in issue 2.4 (July, 2002).


Steven C. Bullock, professor of history at Worcester Polytechnic Institute, is author of Revolutionary Brotherhood (Chapel Hill, 1996) and guest editor of this issue’s roundtable on the uses and abuses of the Constitution in contemporary American life.




Benjamin Franklin, Slavery, and the Founders: On the dangers of reading backwards

Runaway America: Benjamin Franklin, Slavery, and the American Revolution by David Waldstreicher

Alan Taylor has remarked upon a certain trend in the recent profusion of books on the Founders. As the reputations of some, like John Adams, are raised, others are condemned. History becomes a parody of Wall Street: a bull market for Hamilton means it is time to sell your stock in Thomas Jefferson.

When the controversial matter of slavery in the nation’s past is added to the mix, the results can be still more dubious. Recently we have seen the emergence of Benjamin Franklin, champion of freedom and opponent of all forms of slavery. Or rather the reemergence, since this view was first advanced by the aging Franklin himself, spread vigorously by nineteenth-century abolitionists eager to ennoble their struggle by associating it with the Revolution, and kept alive by progressive and African American scholars, such as W. E. B. Du Bois, in the early twentieth century.

Oddly enough, the antislavery Franklin is claimed not only by both sides of the slavery-and-the-Founders debate, but also by those who, wisely enough, try to mediate between them. Joseph J. Ellis, for example, emphasized the bad faith of Thomas Jefferson and James Madison on slavery only to hold up Franklin’s antislavery credentials—his presidency of the Pennsylvania Abolition Society in 1787 and his prominent signature on a petition presented to the first Federal Congress—as the jewel in the Founders’ crown. Meanwhile, the most forthright recent critic of the Founders on the slavery question justified his harsh judgment of Jefferson in light of the fact that Franklin “believed in racial equality.” A prominent scholar of race and the law in U.S. history argued, in an op-ed piece, against the erasure of history involved in a New Orleans school’s decision to give up the name of George Washington because he owned slaves. It is important to remember, she wrote, that Washington had a better record on slavery than Jefferson, adding that “some contemporaries of Washington like Benjamin Franklin and John Quincy Adams were against slavery and did not own slaves.” 

When the views of Franklin of the 1780s, Washington of the 1790s, and John Quincy Adams of the 1830s are all conflated to oppose a timeless Jefferson on the question of slavery, the notion of Founders and foundings departs history and enters the realm of myth. Certainly the notion of a founding “generation” means very little if it stretches the entire fifty-nine years from the Declaration of Independence to the Amistad case. And, in what seems a curious sort of founding grandfather complex, what matters most is what great men did in their old age when they were already known to be great.

Beneath the mythologizing, however, the story of Franklin and slavery is considerably more complex. Indeed, one could argue that Jefferson did more to undermine slavery during the era of the American Revolution than did Franklin. While the Pennsylvanian was busy blaming the British for slavery, the Virginian pushed for the end of the international slave trade and gradual emancipation in Virginia and almost succeeded in closing the Northwest territories to slave owners. Insofar as they acted as contemporaries, Franklin and Jefferson converged in the writing of the original draft Declaration, with its simultaneous indictment of slavery, blame of England, and outrage at the king’s enlistment of slaves. 

Events after 1776, of course, do matter, as do the final acts of great lives. Franklin lived just long enough for his slaves to run away and die off, and for antislavery to become politically safe in his home state. By 1776, indeed, Franklin had become the point man defending the American patriots against accusations like those of Dr. Johnson, who asked pointedly, “How is it that we hear the loudest yelps for liberty among the drivers of negroes?” Franklin could hardly afford not to seem at least theoretically antislavery when he went to France and sought to depict the new United States as a land of freedom, charming the philosophes who were straining at the restrictions of the old regime. He did not so much experience a sea change in his attitudes as manage to deflect the blame, turning criticisms of the Americans as slavemongers into a critique of British-style colonialism and establishing a common ground in favor of liberty—and the American cause.

It is harder to see Franklin as part of an antislavery vanguard either before or after Independence when we realize how much he was responding to the initiatives of others, for other purposes. In this, actually, he was quite consistent. Antislavery gained real, if minority, support long before the Declaration of Independence or even the Stamp Act protests. In the colonies, it became a public issue during Franklin’s youth in Boston in the first quarter of the eighteenth century, and during his young adulthood in Philadelphia in the following decade. By the 1760s, Franklin’s contemporaries at home and in England and France were well aware of the similarities between colonists’ claims to liberty and those made by and on behalf of slaves. The slavery issue itself became inseparable from the debate over the governance, and liberties, of the American colonists. The American revolutionists and their leaders—most notably, Benjamin Franklin—often worked to stave off criticisms of the institution, for they rightly perceived criticisms of slavery as attacks on themselves, their way of life, and their campaigns for freedom.

Franklin’s antislavery credentials have been, at the very least, remembered backwards. At most, they have been greatly exaggerated. His debt to slavery, and his early, persistent engagement with controversies surrounding slaves, have been largely ignored. He profited from the domestic and international slave trade, complained about the ease with which slaves and servants ran off to the British army during the colonial wars of the 1740s and 1750s, and staunchly defended slaveholding rebels during the Revolution. He owned a series of slaves between about 1735 and 1781 and never systematically divested himself of them. After 1731 he wrote publicly and regularly on the topics of slavery and racial identity but almost never in a straightforwardly antislavery or antiracist fashion. He declined to bring the matter of slavery to the Constitutional Convention of 1787 when asked to do so by the abolition society he served as president. 

There are enough smoking guns, to be sure, to condemn Franklin as a hypocrite, Jefferson style, if one wishes to do so. But would another round of condemnation tell us what we need to know about the relationship of slavery to this country’s founding? We might ask, in other words, whether these debates about the relative virtues of Founders are doing anything besides increasing our obsession with the Founders and their personal traits. The very question has its biases toward smoking guns, moral judgments, individuals, and their last words. “Character” is said to explain Jefferson’s flaws, why his deeds did not match up to his words; we can proceed by celebrating Washington instead, even though Washington the politician did far less to challenge slavery than Jefferson. The problem, in other words, is as much in how we approach the past as in the facts themselves.

Neither defense, condemnation, nor the rating of different founders according to their “character” gets us very far in understanding the paradox of liberty and slavery in America. The most telling aspect of Franklin’s engagement with the problem of slavery is its continuous presence in his life, thought, and politics. This was inevitable given slavery’s importance in his world. Franklin was too much of an entrepreneur, too interested in his changing society, and too much of a statesman not to repeatedly deal with the problem of slavery. Franklin’s remarkable creativity, and his central role in crafting the stories that explained America and Americans, also made a tremendous difference. He had a talent for being present at precisely those moments when slavery was being challenged—and a knack for eloquently finessing the issue. 

Franklin’s importance to the history of slavery may lie less in his contribution to antislavery after 1787 than in his earlier mediation of slavery, freedom, and revolution. It took a Pennsylvanian, a printer, a cosmopolitan, a slaveholder with doubts about slavery, to explain the paradox of American slavery and American freedom to a skeptical world—and to America itself. The American Revolution may have pushed some Americans, like Franklin, toward a more explicit opposition to slavery. But it only did so after giving Americans the cultural tools of denial and forgetting, not to mention the political wherewithal to resist a national and international attack on the institution. Franklin, in other words, was a champion of freedom, but also the author of our greatest myths. We need to remember what Franklin helped Americans to forget, how he did so, and why. 

Does such treatment knock Franklin off his deserved pedestal? Or does it rather restore some measure of reality, not to mention humanity, to his fascinating and important life? The problem of slavery touched Franklin to so significant an extent that its investigation actually permits, rather than prevents, a deeper appreciation of the man and the revolution he led.

 

This article first appeared in issue 4.4 (July, 2004).


David Waldstreicher is professor of history at Temple University and the author of Runaway America: Benjamin Franklin, Slavery, and the American Revolution (New York, 2004).




Exploring the Known World

The Known World

 

Who would have thought that what may be the best novel ever written about American slavery would be about slaveholders who were black? Never more than a few percent of the antebellum South’s free-black population, or a few thousand people, this group included African Americans who owned their relatives–especially in states where manumission was prohibited. Moreover, their experiences are almost as sparsely documented as they are unrepresentative.

So it is a wonder to see Edward P. Jones conjure up a whole world around a black slave owner in The Known World, the winner of this year’s Pulitzer Prize for fiction. That world is Manchester County, Virginia, a fictitious place with its own history, mythology, and cast of characters–like William Faulkner’s Yoknapatawpha County. But unlike Yoknapatawpha County, where the power relations are organized around the color line, Manchester is a place where the relationship between race and power is further complicated by the fact that some of the slaveholders are black.

Most prominent among them is Henry Townsend, the richly imagined African American slaveholder whose life and early death are the center of this dazzling novel’s diachronic narrative. An exploration of the messy heart of slavery itself, The Known World does not just tell Henry’s story: it maps his whole social world, free and enslaved, black, white, and Indian. The novel makes little pretense of strict historical accuracy. Indeed, Jones claims to have done virtually no historical research. Instead, he uses the idea of black slaveholders, a group he first heard about in college, to explore the social and legal relationships that structured slavery. In doing so, he offers an unparalleled meditation on the master-slave relationship.

What better way to explore the bare essentials of the master-slave relationship than with the figure of the black slaveholder, who comes to slavery without the delusion of racial difference that divided white owners from their slaves? When the owner and the slave are the same race, slavery becomes a story about people and power rather than race relations–a story about what people can do to each other, and what kinds of social relations slavery fosters, rather than a story about blacks and whites. And so it is in The Known World.

Don’t get me wrong: this novel barely has a plot and cannot be reduced to anything as simple and didactic as a lesson. Beautifully written, it tells a series of interconnected stories about Manchester’s diverse inhabitants. The book’s almost endless cast of characters includes all Henry’s slaves; his ex-slave parents, who purchased themselves out of slavery and then worked to purchase their slave son, whom they finally redeemed from the plantation as a young adult; William Robbins, the richest man in Manchester County and the white slave owner on whose plantation Henry grows up long after his parents leave to earn his freedom; and other black slave-owning families in Manchester County, all of whom “knew one another’s business.” Additional characters range from Oden, the Cherokee patroller, who is known as the man to go to when you want to cut off a runaway slave’s ear without endangering the life of the property, to a murder victim whose background is evocative, if uncertain. From “Finland or Norway or Sweden” depending on his mood, the deceased always maintained that he was from Sweden when he was “in a foul mood . . . He was Swedish the day he died.”

But what ties all these tales together is the story of Henry Townsend and how he went from slave to slave owner. As a free man, Henry Townsend rejects the example of the ex-slave parents who bought his freedom. “Thou shall own no one, havin been owned once your own self,” was the principle by which they lived. “Don’t go back to Egypt after God done took you outa there.” When he buys his first slave, Henry tells his father, “I ain’t done nothing that any white man wouldn’t do. I ain’t broke no law.” And he is not wrong. Indeed, the most profound lesson that Henry learns about slavery as he becomes a slaveholder is that the slaveholder is a creature of the law.

Not a bad man, Henry aspires to be “a master different from any other, the kind of shepherd master God had intended,” but his former owner and life-long mentor William Robbins warns him early on that the law places certain limitations on what the master can be to the slave. “The law will protect you as a master to your slave, and it will not flinch when it protects you . . . it does not matter if you are not much more darker than your slave. The law is blind to that,” Robbins tells Henry after observing his young black protégé horsing around with his first slave purchase, a man named Moses. 

“But the law expects you to know what is master and what is slave . . . if you roll around and be a playmate to your property, and your property turns around and bites you, the law will come to you still, but it will not come with the full heart and all the deliberate speed you will need. You will have failed in your part of the bargain. You will have pointed to the line that separates you from your property and told your property that the line does not matter . . . You are rollin round now today, with property you have a slip of paper on. How will you act when you have ten slips of paper, fifty slips of paper? How will you act, Henry, when you have a hundred slips of paper? Will you still be rollin in the dirt with them?”

Henry does not live long enough to own a hundred slaves, but he does acquire thirty-three, and in so doing, Henry takes Robbins’s advice without ever grasping that the demands of slave ownership have foreclosed his dreams of being “a better master than any white man he had ever known.” “He did not understand,” as his free-born wife Caldonia observes, “that the kind of world he wanted to create was doomed before he had even spoken the first syllable of the word master.”

Meanwhile, slavery is even more mystifying to the slaves, especially when the slave owner is black. Henry’s slave Moses takes “more than two weeks to come to understand that someone wasn’t fiddling with him and that indeed a black man two shades darker than himself, owned him and any shadow he made . . . it was already a strange world that made him slave to a white man, but God had indeed set it twisting and twirling when he put black people up to owning their own kind. Was God even up there attending to business anymore?” 

A player in the crisis that unfolds on Henry Townsend’s plantation after he dies, the “world stupid” Moses never really understands the line that separates the slaveholder from the slave–and suffers tragic consequences as a result. But readers of The Known World have a chance to look at the line. They also get to know a place where the dividing line between slave and master is crisscrossed but never entirely redrawn by color and race, and obscured but never erased by love, sex, violence, and friendship. These messy and powerful human connections weave across both color and property lines in the antebellum South, as they do in Edward Jones’s fictional world. At bottom, Edward Jones’s book tells an America still obsessed with slavery as a racial problem that slavery’s many paradoxes begin with one that has nothing to do with race: the paradox of anybody “owning their own kind.”

 

This article first appeared in issue 4.3 (April, 2004).


Mia Bay is an associate professor of history at Rutgers University. She is the author of The White Image in the Black Mind: African-American Ideas About White People 1830-1925 (New York, 2000).




Revealing the Many Faces of the Woman behind the Mask

Masquerade: The Life and Times of Deborah Sampson, Continental Soldier

 

With an abiding interest in the ordinary person caught up in extraordinary circumstances, Alfred F. Young is well suited to tackle the opaque life of one of the American Revolution’s lesser-known figures, Deborah Sampson. Sampson, whose fame has undergone a resurgence of late, is best remembered for managing to hide her identity during a seventeen-month stint as a cross-dressing soldier in the Continental Army. At first consideration, her ability to pass as male defies credulity. Yet Young, as detective and storyteller, reconstructs her wartime duties in a way that makes the success of Sampson’s masquerade altogether plausible.

In this biography, Young offers a compelling portrait of a woman who transgressed boundaries, not just in her military disguise but in other realms as well. As a single woman who fell afoul of religious authorities, as a soldier who impressed her superiors with her vigilance, and as a married mother of four who left her family to go on a speaking tour, Sampson challenged social norms. As Young dissects these roles, he uncovers a figure alternately honored and maligned for her army exploits, subject to curiosity, speculation, and gossip. Sampson’s contemporaries and descendants alike were undecided as to whether they should celebrate or censure a woman who so clearly challenged gender constraints. 

Young invites the reader on a treasure hunt as he searches for clues of Sampson’s life in the records of the time and in the memories of her descendants, breaking up her history and her legacy into five parts. In Part One: “Deborah Sampson,” he traces her early years of hardship, with a father who deserted his family and a mother who lied about his absence, and the need for her to go into service at a young age. During a youth of dependence and poverty, Sampson struggled to acquire an education, learning to read and write, despite her master’s displeasure. As a young adult, she supported herself as a weaver and a teacher, occupations practiced at the time by both men and women.

The taste of independence Sampson experienced as a “masterless woman,” according to Young, may have contributed to her willingness to seek her fortunes as Robert Shurtliff in May 1782. Interestingly, that masquerade was not her first. Earlier that spring, the five-foot, seven-inch Sampson had donned men’s apparel, signed up for the army as Timothy Thayer, and received an enlistment bounty. Apparently, she had no intention of actually joining the troops. When her criminal act of cross-dressing and her fraudulent enlistment were discovered, Sampson found herself at odds with the Baptist congregation she had joined as a young adult. Soon thereafter, she left town dressed as a man, thereby getting around many of the inconveniences and scrutiny she would have encountered traveling as a woman.

A critical source for this period of Sampson’s life is Herman Mann’s The Female Review: or, Memoirs of an American Young Lady, published in 1797. Sampson collaborated with Mann, a young and inexperienced writer, who used her experiences and then embellished, fabricated, and plagiarized to create “‘a novel based on fact’” (14). To deal with the problems such a deeply flawed source presents, Young researched the religious, social, and economic landscape. Then, taking apart Mann’s highly fictionalized memoir, Young evaluates apparently fanciful adventures against other evidence and rates the episodes as likely, possible, or improbable. This weighing of the evidence reveals the scholar at work: imaginative in approach and meticulous in execution.

One of the more lurid episodes surrounds a sexually charged dream that Sampson told Mann she had in early 1775. Calling on God’s aid as a giant serpent approached her bed, followed by an ox that sought to gore him, Sampson bludgeoned the beasts with the violence and heroism of her biblical namesake. Young believes the dream to be Sampson’s own, rather than a product of Mann’s imagination, and interprets it as expressing fear of aggression as well as an early indication of the future soldier’s ability to fight for herself. 

Sampson’s career in the army receives close examination in Part Two: “Robert Shurtliff.” Young argues that Sampson was able to escape discovery partly because she excelled at her duties. A model soldier, prized for her height late in the war, Sampson was selected for the light infantry, a dangerous assignment that came with a special uniform. Her skills as a seamstress likely enabled her to alter it herself and thereby avoid a visit to the army’s tailor. She lowered her age when she enlisted to make her lack of facial hair less remarkable and refrained from drinking throughout her service, a wise strategy for one who needed to maintain control. Furthermore, as Young notes, “[A]rmy standards on sanitation worked in her favor” (107).

Detection was always a threat, however. Involved in several skirmishes, Sampson was wounded and may have carried a musket ball in her body for the rest of her life. When serious illness landed her in a military hospital and her identity was discovered, Sampson escaped punishment for her fraud, having earned her superiors’ appreciation for her zeal and skill in the light infantry and then as a general’s orderly. 

Marriage and children followed Sampson’s discharge, but the former soldier, now the wife of Benjamin Gannett Jr., remained restless. In Part Three: “The Celebrated Mrs. Gannett” and Part Four: “Old Soldier,” Young evokes the struggles that defined Sampson’s life after the war. Money was always short, and the trappings of gentility she sought remained elusive. Alternately angry, entrepreneurial, and supplicating, Sampson devoted much of her energy and time over the next few decades to seeking recognition and reward for her military service. In the 1790s, she successfully petitioned the state government for back pay and participated in the creation of her memoirs. In 1802-03, she traveled on her own in a path-breaking public-speaking tour. Subsequent efforts to secure a pension and compensation as an invalid veteran preoccupied her for many years. The ebb and flow of her fame is the subject of Part Five: “Passing into History.”

A brief review can only hint at Young’s dazzling scholarship and the range of subjects he addresses. There are really two stories here. Foregrounded is the painstaking reconstruction of the life of a woman whose decision to disguise herself to achieve her goals led her on a remarkable journey; her path, in Young’s hands, reveals the shifting economic, religious, political, and social contours of the late eighteenth and early nineteenth centuries. The constraints of gender and the politics of memory are equally intriguing. Simultaneously, this study works as a fascinating mystery, with Young as detective and the reader as collaborator. Sampson might have appreciated such an elegantly written tale, ambivalent about the exposure it brought to some corners of her life yet grateful for the recognition she felt she deserved.

 

This article first appeared in issue 4.3 (April, 2004).


Patricia Cleary, the author of Elizabeth Murray: A Woman’s Pursuit of Independence in Eighteenth-Century America (Amherst, 2000), teaches history at California State University, Long Beach. She is currently at work on a Website, “The Elizabeth Murray Project: A Resource Site for Early American History,” and a study of colonial St. Louis, “The World, the Flesh, and the Devil.”