                                    Trust and Diversity:

         Robert Putnam, George W. Bush and the American Idea

 

 

                                            Sheila Suess Kennedy    

                                    Professor of Law and Public Policy  

                            School of Public and Environmental Affairs

                          Indiana University Purdue University Indianapolis

                                    801 W. Michigan Street #4061

                                       Indianapolis, Indiana 46202

                                            shekenne@iupui.edu

 

 

 

 

 

 

 

 

    For presentation at the Annual Meeting of the Law and Society Association

                                         Montreal, Canada 2008

       Please do not cite without prior approval of the author

 

 

 

 

 

                                       I.  “Trust Me,” Said the Spider

 

“Can we all get along?” Those wistful words—spoken by Rodney King, whose savage beating by Los Angeles police in 1991 was videotaped by a bystander and repeatedly televised—have entered our national vocabulary. They seem to capture the current American mood. Why are so many of us so hostile? Why do we seem to have so much trouble communicating? Why are we so cynical about business practices, so suspicious of government at every level? Why don’t we trust each other?

The question of trust has become a hot topic, not just among the talking heads who increasingly dominate our airwaves, but in academia as well. Questions abound: what do we mean by trust? How does trusting your husband differ from “generalized social trust”? Why is the latter important? How necessary is it to effective governance? Do contemporary Americans really trust their neighbors less than our parents and grandparents did, and if so, why?

            Any serious exploration of these issues requires that we address the work—and outsized influence—of Robert Putnam, whose research has shaped opinion not only in academic circles, but among the so-called “chattering classes,” the pundits who increasingly frame public perceptions. His best-selling book, Bowling Alone (2000), was enormously influential—not only did it introduce the concept of “social capital”[1] to readers who had previously never heard the term, it influenced an entire cohort of scholars, foundation executives and public officials to target and address issues of civic engagement. The book argued that civic engagement in America had declined significantly since the post-War generation, and that the decline threatened social capital. The book inspired spirited responses, pro and con, and spawned a mini-industry of hand-wringing and doomsday predictions.

Much of the criticism engendered by Putnam’s thesis centered upon the argument that civic organizations overall had not declined, but had changed in character. Those bygone bowlers may be found coaching youth soccer; many of the women who abandoned the Ladies Garden Club have defected to professional associations and the local Chamber of Commerce. Robert Wuthnow has suggested that Americans are changing the definition of engagement, that we are experimenting with “looser, more sporadic, ad hoc connections, in place of the long-term memberships in hierarchical organizations of the past” (2002).

Whatever one’s conclusions about the existence and/or severity of the crisis in civic engagement posited by Putnam, Bowling Alone clearly struck a nerve. Perhaps because ours is—and has always been—a remarkably heterogeneous country, Americans have long been consumed by the question “what is it that holds us together?” Tocqueville’s admiration for our early tendency to form “civic associations” suggested one approach to answering that question. If we create common ground with our fellow-citizens by engaging in recreational, civic and political activities with them, data that seem to suggest a marked decline in such activities should be taken seriously. Right or wrong, by introducing an important issue to the broader public for consideration and debate, Putnam performed an important public service.

More recently, Putnam’s research has led him to an even more disconcerting conclusion: people who live in more diverse communities are less trusting. And they are less trusting of everyone, not just of those who are different. Opponents of immigration (legal or not), multiculturalism (even in its mildest forms), and interfaith dialogue have seized upon the research as vindication of their worst fears.

In E Pluribus Unum: Diversity and Community in the Twenty-First Century (2007), Putnam reported on a large-scale study in which he found a negative correlation between levels of ethnic diversity and generalized social trust. As he puts it, “In the short to medium run, immigration and ethnic diversity challenge social solidarity and inhibit social capital.” (In a memorable phrase, Putnam says that in the short term, diversity brings out the “turtle” in us, causing individuals to “hunker down” in their shells and withdraw from many kinds of social interaction.)

There are reasons to credit Putnam’s conclusions. Religious sociologist Peter Berger has suggested that people are most uncomfortable when they are forced to question the “taken for granted” nature of their worldviews. When we assume that most people think the way we do—because they look like us, or go to our church, or bowl in our league—we trust others more, because we take for granted that they see the world in pretty much the same way we do. Sometimes that’s an accurate perception; often it’s not.

            When we can’t simply take for granted the attitudes or likely behaviors of others, based upon skin color or religion or sexual orientation, relations with others may lose a measure of spontaneity. As a result, many of us will retreat a bit—or “turtle.”  This is rarely a conscious process; the only thing we are conscious of is that we no longer simply assume that we know what others will think and how they are likely to react. The “taken for granted” element of our interactions is an important component of generalized trust. (Of course, what we are trusting is as much our own ability to predict the behavior of others as it is a considered belief in the trustworthiness of those others.)

The three questions I will consider in this paper are: 1) whether this decline in generalized social trust, assuming it has occurred, is primarily an outcome of America’s increased diversity, or whether other aspects of our contemporary civic experience may be equally—or more—responsible; 2) whether the nature of the social trust America requires at this particular juncture in our national evolution is different from that needed in our simpler, more rural past, and if so, why and how; and 3) how America’s particular constitutional culture shapes our approach to the issue of social trust. 

In the sections that follow, I will argue that we need to conduct this discourse about trust with appropriate recognition of the magnitude and pace of social change and the multiplying complexities of contemporary American life. I will also argue that a certain amount and kind of distrust is not only healthy, it is actually the foundation of America’s constitutional architecture.

           

                                                II. Trust and Social Capital

 

Putnam believes that social trust—the belief that other people in your neighborhood and community are generally trustworthy—is essential to social capital. In order to understand the most recent furor over his research, it’s important to understand not only what social capital is, but also what role it is thought to play in society. Social capital is the name we give to our memberships in social networks, the variety of human relationships within which we are embedded. The term is thought to have been coined by Jane Jacobs in her seminal study of urban life, The Death and Life of Great American Cities. In order to describe the characteristics of good city neighborhoods, Jacobs drew on a financial analogy.

“To be sure, a good city neighborhood can absorb newcomers into itself, both newcomers by choice and immigrants settling by expediency, and it can protect a reasonable amount of transient population too. But these increments or displacements have to be gradual. If self-government in the place is to work, underlying any float of population must be a continuity of people who have forged neighborhood networks. These networks are a city’s irreplaceable social capital. Whenever the capital is lost, from whatever cause, the income from it disappears, never to return until and unless new capital is slowly and chancily accumulated.”

As one scholar of the concept puts it, “To have social capital, a person must be related to others, and it is those others, not himself, who are the actual sources of his or her advantage.” (Portes 1998, 7) Trust is an important component of social capital, but reciprocity is also an essential element. To explain the concept of reciprocity, Putnam has quoted the philosopher David Hume:

“Your corn is ripe today; mine will be so tomorrow. ‘Tis profitable for us both, that I should labour with you today, and that you should aid me tomorrow. I have no kindness for you, and know you have as little for me. I will not, therefore, take any pains upon your account; and should I labour with you upon my own account, in expectation of a return, I know I should be disappointed and that I should in vain depend upon your gratitude. Here then I leave you to labour alone; You treat me in the same manner. The seasons change; and both of us lose our harvests for want of mutual confidence and security.”

We require trust and reciprocity among participants in our social networks, because collaboration and collective action are at the heart of the concept of social capital. As Carles Boix and Daniel Posner have written, “social capital is, at its core, a set of institutionalized expectations that other social actors will reciprocate cooperative overtures” (1998).  If we fail to work together when such collective efforts are necessary, we all emerge the poorer. Without trust that our participation will be reciprocated, we are less willing to enter into communal enterprises.  Even government, with its monopoly on the legitimate use of coercive power, cannot implement programs effectively in the absence of social capital and voluntary compliance; there is a limit to how much can be accomplished only by the use of authority and control, as many an autocrat has discovered.

Closely allied to the concept of social capital is that of civil society—what Nancy Rosenblum has called the “chicken soup of political theory” (2002, 23). Civil society is composed of human networks that are neither governmental nor individual.[2] The sometimes dizzying array of voluntary and nonprofit associations that make up civil society is often referred to as a “buffer zone” between the large and frequently impersonal institutions of formal government, on the one hand, and the individual and his or her family, on the other. Francis Fukuyama has described the “left-wing” version of civil society as a community-wide mobilization to stop a new Wal-Mart. A “right-wing” version might be an anti-tax protest that defeats funding for construction of a new high school.

There are many benefits to communal activities and civil society; social capital, however, can be a two-edged sword. This is because there are two kinds of social capital: bonding and bridging. Bonding social capital encourages in-group solidarity—the bonds forged in kinship groups, church “families,” fraternal organizations and the like. It tends to reinforce exclusive identities and homogeneity; its negative uses are apparent in groups like the Neo-Nazis and the Ku Klux Klan. Bridging social capital, by contrast, promotes ties across group barriers; it refers to the sorts of relationships forged in service clubs, political organizations, and other venues where diverse Americans come together in order to accomplish a particular task, or to support a particular institution or cause. To borrow the language of political philosophy, we might say that bonding social capital is characterized by “thick” connections, and bridging capital by “thin” ones.

The “thick” networks that distinguish bonding social capital can be very useful; these are the kinds of networks that tend to reinforce discipline and provide moral and material support to individual members. On the other hand, bonding networks often lead members to exclude outsiders, promote conformity, and restrict individual liberty. Bridging social capital empowers individuals by extending the networks to which they have access and by encouraging social cooperation across lines of ethnicity, religion and other categories of personal identity. But bridging capital is weaker: the social ties thus formed involve lower levels of trust and reciprocity, and correspondingly less social support, than is true with the thicker, bonding forms of social capital.

Trust and reciprocity are both considered key to social capital, but there is a lively debate about which is more important to the bridging social capital needed in a diverse society. Marc Hooghe, a Belgian scholar, has argued that too much attention has been paid to the element of trust, and not enough to reciprocity, which is “better adapted than trust to function in divided, plural and increasingly diverse societies.” Hooghe points out that reciprocal relationships can encourage co-operative ventures, which in turn can generate an ongoing relationship founded upon a “process-based” form of trust. He also notes that reciprocity enjoys something of a competitive advantage over trust, since it can operate in conditions of uncertainty and diversity (2002).

Hooghe is also one of the many scholars who point out that there are many different kinds of trust, and that they are not equally pertinent to the question of social capital. Are we talking about interpersonal trust, which depends on knowing the character and previous behavior of a friend or colleague? Or are we talking about the sort of “generalized” trust that—as several students of the concept have pointed out—depends heavily on resemblance and homogeneity? That kind of trust “has to be achieved within a familiar world” (Luhmann 1988, 95).

“The central role of reputation and stereotyping also implies that closed networks will be much more conducive in developing trust than open and rapidly fluctuating networks (Coleman 1990). The closure of networks has a double impact on the decision to trust. First, it allows for a more effective sanctioning of behavior…Secondly, reputation (and gossip) travels faster in closed networks than in open environments” (Hooghe 2002, 7).

            What Hooghe calls “depersonalized trust” (i.e., trust of someone with whom we don’t have a prior relationship) is possible only with bonding capital, because it is inextricably based upon category and identity. If that sort of trust is essential to social capital, then increasing diversity by definition dooms social capital. And increasing diversity is an inevitable feature of modernity.

            In simpler societies, we depended upon reputation to decide who was trustworthy. As John Tierney recently noted (2007), gossip was valuable because it gave people information about who they could trust—and who they couldn’t. In more complex societies, however, trust itself becomes more complicated. For example, in a column about the crisis precipitated by subprime mortgage foreclosures, Paul Krugman noted that

“Today, when a bank makes a home loan, it doesn’t hold on to it. Instead, it quickly sells the mortgage off to financial engineers, who chop up, repackage and resell home loans pretty much the way supermarkets chop up, repackage and resell meat.

It’s a business model that depends on trust. You don’t know anything about the cows that contributed body parts to your package of ground beef, so you have to trust the supermarket when it assures you that the beef is U.S.D.A. prime. You don’t know anything about the subprime mortgage loans that were sliced, diced and pureed to produce that mortgage-backed security, so you have to trust the seller…” (2007; emphasis added).

In contemporary Western societies, this sort of trust—in the integrity of business enterprises and the government agencies that regulate them—is absolutely critical to our ability to engage in social and economic transactions.

Once again, it is Jane Jacobs who provides an incisive explanation of the sort of trust involved in such transactions, and its importance. In Systems of Survival (1994), she has a character named Armbruster convene a group of friends to help him consider and sort out the reasons for his growing concern over dishonesty in the workplace. As Jacobs has Armbruster explain,

“My worry dates from a euphoric moment in Hanover immediately after my retirement. I’d accepted a brief consulting engagement there… and was about to follow my exertions with a holiday in Switzerland. I took my fee to a local bank for transfer to my bank here. Commonplace sort of transaction, but this was one of those occasions when the commonplace suddenly seemed extraordinary. It hit me that I’d handed over my fee to a total stranger in a bank I knew nothing about in a city where I knew almost nobody…in exchange for nothing but a flimsy paper with a scribble in a language I didn’t understand. What I had going for me, I reflected, unworried, as I dashed to catch my train for Zurich, was a great web of trust in the honesty of business.” (5)

Most contemporary Americans operate on precisely the same assumptions that Armbruster has made explicit: we deposit our paychecks and take for granted that the funds will appear on our next bank statement. We put down a deposit with the electric utility without worrying whether the service will, in fact, be forthcoming. We mail checks to payees on the assumption that the envelopes will reach their destination, intact and unopened (if not always on time). We call the fire department and expect a prompt response. Even when engaging in internet transactions with merchants with whom we have had no prior contact, merchants who may be located halfway around the world, we increasingly rely on representations of third-party facilitators that their sites are secure and their merchandise will be shipped—the volume of business done in cyberspace multiplies month after month. That kind of trust not only allows necessary social mechanisms to function, it makes our lives immeasurably more convenient and comfortable.

Trust of this sort—what I call institutional trust—is built on reciprocity and reliability. When we engage in repeated transactions in which satisfactory performance is reciprocal, we are building and reinforcing institutional trust.

In urban communities and complex societies, we will never know most of our neighbors, even by sight. The informal mechanisms people employed in simpler social settings—reputation, gossip, identity—can no longer carry the information we require, cannot give us the guidance we need. We have no alternative but to put our trust in the web of institutions we have created—the police and other government agencies, Better Business Bureaus, watchdog industry groups and the like—to discharge their responsibility for maintaining the trustworthiness of our economic and social systems.

There is a consensus among scholars who study trust that people who live in larger cities and metropolitan areas are less trusting of other people than are inhabitants of small towns. This is variously attributed to the increased diversity of urban environments, the greater complexity of life in metropolitan areas and/or the stresses of urban life. All of those factors are undoubtedly important, but we should not lose sight of the fact that diminished generalized trust is an eminently reasonable response to the realities of contemporary urban life, where neither word-of-mouth nor signals of similarity are reliable shortcuts for making judgments about individuals with whom we do not have a prior relationship.

Contemporary communities have compensated for urban complexity and attempted to accommodate the realities of modern city life by creating trustworthy institutions.  As societies have become more complex, the repositories of social trust have shifted. But as valuable as trustworthy social institutions are, and as critical to the operation of modern life, they are different in kind from the trust repositories of simpler times, and they require different strategies to ensure that they remain trustworthy. Gossip and reputation will not alert investors to the machinations of an Enron or WorldCom.   

Government is the largest and most important—not to mention the most pervasive—of our collective social mechanisms. This nation’s Founders crafted governing institutions that were constrained by structural guarantors of good behavior. The monarchies with which they had experience had provided plenty of examples of unconstrained—and untrustworthy—behavior, and it was their explicit goal to provide safeguards against similar abuses by the new government they were creating. The Founders saw government’s role in classically libertarian terms: it was a necessary mechanism to deal with external threats, and to prevent some citizens from harming the persons or property of others. But they were well aware that a government powerful enough to provide security would be a government powerful enough to threaten that same security. They did not place their trust in the goodness of the people who would be elected to run that government—they placed it in the checks and balances they created.

Gradually, as America has grown larger and more complicated, government has assumed additional responsibilities. Many of these new duties came as a result of the Great Depression, and the recognition that citizens needed an “umpire,” a trustworthy institution to police and regulate a variety of business practices. Even the most ardent contemporary advocate of limited government is likely to concede the utility and propriety of FDA regulation of food quality, for example, in an era when few of us grow our own vegetables or slaughter our own animals. Americans today rely on government agencies to ensure that our water is drinkable, our aircraft flyable, our roads passable, and much more.

It would be difficult to overstate the importance of our being able to trust our government agencies to discharge these and similar functions properly. From time to time, America goes through periods when the failures of our governing institutions, and of those who manage them, are so manifest that knowledge of them is inescapable; at such times confidence is shaken, and our skepticism and distrust affect more than just the political system. We are in one such period as I write this. The national mood is sour, and we cannot overlook the effect of that mood on levels of generalized trust.

It isn’t only government, of course. Many other important institutions have been publicly compromised. Media figures who were supposed to be independent watchdogs have been found to be writing administration propaganda, and getting paid handsomely for it. Headlines report lawsuits against the Catholic Church for protecting priests accused of being sexual predators. Each day seems to bring a new round of business scandals—retirees losing their pensions, reports of predatory lending practices, the recent crisis caused by subprime mortgage foreclosures—and it is a rare day when some new systemic misconduct is not uncovered. The Bush administration’s open disdain for constitutional checks and balances and the rule of law has given rise to unprecedented—and justifiable—alarm. Even if one discounts the multiplying charges of illegal and unethical behaviors, the War in Iraq and the massive failure of federal, state and local government agencies to deal competently with the destruction caused by Hurricane Katrina have shaken confidence in government reliability at all levels.

In the face of so much evidence that we cannot trust our institutions to operate effectively, is it any wonder that people are wary, skeptical and “turtled”?

           

                                       III. Distrust, American Style

 

In a very real sense, the United States is a “manufactured” country. Unlike most other nation-states, we are not an outgrowth of kinship groups, and we do not trace our ancestry to a specific piece of territory, a common language or a common religion. Native Americans excepted, we have always been a nation of immigrants. Historian Alan Taylor has detailed the often-neglected pluralism of the American colonial period in American Colonies. As one reviewer put it, Taylor’s “underlying theme [is] that American distinctiveness lies not in any inherent uniqueness of the British colonial experience of creating new societies, but in the unprecedented mixing of radically different peoples…and in the intersection of such a variety of different colonial stories and their eventual convergence into a single national story.” (Hagedorn, 2003)

American history has been a continuation of this process of self-invention, a process of incorporating very unlike people into a single national narrative. Indeed, America is better understood as an idea than as a place. What Americans have in common is a constitutional culture, a particular view of how governments and free citizens should behave.[3] In that sense, we are a voluntary community, and voluntariness is a characteristic that leads to a measure of insecurity. As Todd Gitlin has written, “The United States is a nation that invites anxiety about what it means to belong, because the national boundary is ideological, hence disputable and porous.” Gitlin refers to this self-identification as “covenanted patriotism, as opposed to the blood and soil variety.” (Gitlin 2006, 131)

The forging of a distinctive “American” identity has always been messy and heavily contested. Relations between Native Americans and the early colonists were uneasy even at their best, and they were rarely at their best. Settlers from different countries looked askance at each other. Colonies routinely ejected religious dissenters. The slave trade eventually brought hundreds of thousands of Africans with different skin colors, religious beliefs and cultures to America’s shores. As the colonies became more populous, they developed distinctive political cultures of their own, and they were competitive with and suspicious of each other. It should thus be no surprise that trust and brotherly love were not the most noticeable features of colonial life.

Distrust of English rule led to the Revolutionary War, and once that war was over, distrust among the newly independent states led the new nation to adopt the largely aspirational Articles of Confederation rather than a constitution that would unequivocally bind those states together under a strong national government. It was only when the Confederation proved too weak to provide effective governance that the men we collectively call “the Founders” recognized the need to rectify the weakness and create a “more perfect union.” Furthermore, the delegates who gathered in Philadelphia to accomplish that task brought their distrust—of government in general and of the other states in particular—with them. They faced a difficult task: creating a national government that would be strong enough to govern, yet constrained enough to respect the considerable differences among the states—especially the growing differences over slavery, but also substantial religious and political differences.

Checks and balances grew out of the central preoccupation of those charged with creating that new government: limiting the exercise of government power, but without repeating the mistakes of the Articles of Confederation. They were all too aware of the conundrum they faced; a government strong enough to protect citizens’ property would be a government strong enough to expropriate that property. Their goal was a central government with enough power to be effective, but not enough to be dangerous. Rather than trusting those who would subsequently be elected to manage the new government, they placed their reliance on structural and institutional impediments to mischief. They divided the new government into three coequal branches: a legislative branch to make the laws, an executive branch to administer and enforce them, and a judicial branch to say “no” when either of the others overstepped its authority. This separation of powers is best understood as an institutionalized form of distrust, and it is absolutely basic to our constitutional system.

The separation of powers was not the only structural brake the Founders placed on the power of the state. Constitutional architects devised several different systems to prevent the new government from becoming autocratic: federalism, which further divided power by assigning authority over certain functions to the federal government and continuing to vest others in state and local authorities; representative (rather than democratic) government, in which voters elect representatives to actually debate and decide policies (initially, Senators were chosen by state legislatures rather than directly by the voters); and a bicameral legislature (a legislature with two houses, with bills required to pass both). All of these mechanisms were intended both to limit the power of the new central government and to temper the “passions” of popular majorities.

When the Constitution was submitted to the states for ratification, even the inclusion of all of these impediments to the use and abuse of power was not deemed adequate. The ratifying states also demanded—and got—a Bill of Rights. As a result—“power to the people” rhetoric to the contrary—American government does not operate by majority rule. While a great many of our public decisions are based upon majority preferences, the Bill of Rights is correctly understood as a “counter-majoritarian” instrument. It was grounded in distrust of majority passions and put in place to protect individuals against the misuse of government power even when government acts at the behest of those majorities.

The Bill of Rights grew out of the philosophy of the Enlightenment; it was an institutional recognition of human diversity. Its passage was an effort to protect dissent, individual autonomy, and the right of each individual to be different, to decide for himself or herself what to believe, what to read and what to think, free of the interference of even a freely-elected, majoritarian government. Like the structural checks and balances built into the fabric of the Constitution, it was an outgrowth of the colonists’ deep distrust of government, democratically elected or not. The men who drafted the Constitution deliberately chose to include mechanisms that would force deliberation, negotiation and compromise, that would serve to remind the new government that—as Locke had insisted—its primary purpose was to protect individual liberties.

When we revisit the founding era, we sometimes forget that the Founders didn’t protect our right to say what we think because they trusted we would all mouth non-offensive proprieties. They didn’t insist on our right to pray (or not) as we choose because they were confident we would all agree about the nature of Ultimate Truth. And they didn’t insist that government have a good reason to search or detain us because they were sure we wouldn’t ever have anything to hide. They protected individual rights because they believed those rights were part of the “natural law,” and intrinsically valuable—not because freedom was safe, and certainly not because they trusted their fellow-citizens to behave well. They protected individual liberty against the power of government, because they believed the misuse of government power was far more dangerous than the misbehavior of some individual citizens.[4] 

All of the devices the Founders employed—separation of powers, federalism, bicameral legislatures, representative government, a Bill of Rights—were put in place to protect individual liberty, and to limit the reach of official power. As important as many other governing innovations were, and have been, the real basis of our constitutional culture—what I am calling the American Idea—was this recognition that government should not be trusted with extensive power over the individual, and that a variety of institutional barriers—checks and balances—are needed in order to create trustworthy institutions.

           

                                    IV. Putnam, Bush, and The American Idea

 

What relevance does this excursion into constitutional history and political philosophy have to the work done by Putnam and others on diversity and distrust?

All research begins with a framework, a broad theoretical “lens” that shapes how scholars ask their questions and through which they analyze their data. In that sense, all questions—even the most seemingly scientific or factual—incorporate a political perspective. Barbara Arneil, a Canadian scholar, has analyzed the political theory implicit in concepts of social capital, particularly as articulated by Putnam, and has advanced one of the more intriguing critiques of that theory, a critique she has tied firmly to communitarianism.

“Social capital, as a concept, has had such a profound impact in such a short time for several reasons. First, it represents an important shift in focus, within Western political theory, away from either the state or citizen to the civic space in between. In this regard, the social capital thesis parallels two influential schools of thought within contemporary liberal democratic theory, namely communitarianism and ‘third way’ theory. In both cases, civic space or community is the starting point of analysis, rather than either the rights-bearing citizen of liberalism or the equality-bearing state of socialism or social democracy” (Arneil 2006, 1).       

Arneil finds social capital theory as embraced by Coleman and Putnam to be very different from the “European school” definition associated with Pierre Bourdieu. (Bourdieu concluded that social capital—in common with economic capital—“is an ideology of inclusion and exclusion: a means by which the powerful may protect and further their interests against the less powerful” (8).) Arneil sees Putnam’s formulation as an appeal for solidarity that goes well beyond the liberal democratic notion of civic participation. She identifies it with “civic republicanism”—an appeal for civic virtue and unity. Social capital, at least as Putnam has envisioned it, calls upon Americans to transcend their differences. As Arneil notes, however,

“[S]uch unity can represent an enormously threatening force for those groups that have historically been excluded from or assimilated to American society based on the values or attributes of the dominant cultural group, or that even today contest certain ostensibly ‘universal’ norms in the name of cultural diversity or justice.” (7)

Arneil locates what she calls the “emotive appeal” of Bowling Alone in the “fundamentally Christian narrative (paradise, the fall, the promise of redemption) that lies at the heart of Putnam’s thesis.” In that narrative, Americans were once enthusiastic “joiners,” but our original sense of community has declined, and can only be “redeemed” by a renewed commitment to civic participation, which will in turn generate renewed trust in our fellow citizens. Viewed in this way, it is a story of collapse and a promise of renewal, and such stories are both familiar and appealing. What this particular narrative omits, she suggests, are some inconvenient questions: could it be that the current divisions in civil society represent a positive phenomenon, namely the struggle for equality and inclusion of previously marginalized persons? Does Putnam’s implicit emphasis upon civic solidarity—like the communitarian emphasis upon social “embeddedness”—shortchange the classic liberal preference for justice? And what evidence is there to suggest that civic participation, in and of itself, generates trust?[5]

What Putnam’s theory does not explain—and Arneil’s does—are the persistent differences in levels of generalized trust among different social groups. The available data show that more privileged citizens are more trusting than are members of historically marginalized groups. (To a somewhat lesser extent, women are also less trusting than men.) This “trust gap” has held steady, even while overall levels of generalized trust have declined, lending support to the theory that lower absolute levels of trust reflect—at least to some extent—reductions in the trustworthiness of our social institutions, while the differences in those levels, the “trust gap,” reflect the experiences of people from different social backgrounds. As Robert Wuthnow has put it, “Any discussion focusing only on the decline in trust is missing the more essential fact that trust has been, and remains, quite differentially distributed across status groups” (2002, 86). Not surprisingly, the largest gap occurs between the races.

As Arneil concludes,

“Ultimately, what Putnam overlooks in his causal analysis of a decline in trust as opposed to participation is that the former, unlike the latter, is rooted in betrayal. It is necessary, therefore, in our alternative causal explanation to begin by looking for what might have led Americans to feel increasingly betrayed by their community or society over the last forty years (the decline question), as well as why some groups of Americans have a deeper sense of betrayal than others (the gap question).” (2006, 126; emphasis in the original)

Americans live in one of the most diverse countries in the world, a country that is inexorably becoming even more diverse. Putnam tells us that (1) social capital—our connection to our fellow-citizens—is eroding; and (2) the erosion is worse among those who live in the most diverse parts of the country. His evidence for this is his finding that generalized social trust is lower in more diverse environments. Other scholars point out that generalized trust in unnamed and unidentified “others” has never been uniformly distributed through the population, and in fact has always been lower in large urban areas, and among marginalized groups—women, racial and ethnic minorities and gays. They also dispute whether there has really been an overall decline in trust, and argue about whether trust is really central to social capital or whether reciprocity is more important. 

That said, something is making Americans in general “turtle” more than we used to. Something is contributing to our cynicism and suspicion. Something is preventing us from fully trusting either “them” or “the system.” The most obvious contributor to the current decline in generalized trust is the behavior of the federal government. America’s sour mood reflects the widespread belief that our institutions—our checks and balances—no longer work as they should. When we no longer trust the integrity of our governing institutions, that distrust infects everything else.

            Our country’s Founders understood that the creation of competent and trustworthy government agencies requires checks and balances. When we are talking about contemporary agencies like the FDA or the FCC or FEMA, however, the checks and balances required are not necessarily the same as those originally put in place to keep the federal government in line; in the case of such agencies, we depend much more heavily on Congressional and administrative oversight, conducted with the active assistance of a free press. That oversight function has failed frequently over the past decade. And thanks to a multitude of all-news television channels and the internet, large numbers of people have become aware of the failures. When government stops functioning at an adequate level, that failure affects us all. Ineffective oversight enables private players to break the rules; loss of government integrity allows special interests to “buy” special treatment. When we see evidence that government is abusing its power or breaking its own rules, we no longer know who or what we can trust.

In a complex society, it is simply not possible to depend upon good will, tribal norms, or social sanctions to enforce trustworthy behavior. The kind of trust that sustains the social capital contemporary American society requires is trust in the governing institutions we have established to police public, corporate and individual behavior. Unfortunately, the past decade has produced a litany of ineptitude and corruption emanating from virtually all the major sectors of American society.

We sustained a stunning attack on American soil, reminding us that the oceans no longer safeguard us from the hostility of others. We invaded another nation because of fears that it had weapons of mass destruction that made it an imminent threat, only to discover that no such weapons existed. News reports have brought daily warnings that our governing institutions are “off the track.” (In recent polls, 81% of Americans reported that the country is going in “the wrong direction.”) There has been visible, worrying erosion of our constitutional safeguards. Meanwhile, the imperatives of population growth and commerce, technology and transportation, as well as politics, have eroded local control and hollowed out “states’ rights,” leaving people powerless to change or even affect many aspects of their legal and political environments.

Old-fashioned corruption and greed have combined with political and regulatory dysfunction to undermine business ethics. Enron, WorldCom, Halliburton, the sub-prime housing market meltdown—these and so many others are the stuff of daily news reports. Newspapers report on the stratospheric salaries of corporate CEOs, often in articles running alongside stories about the latest layoffs, reductions in employer-funded health care and loss of pensions for thousands of retired workers. Throughout most of this time, business forecasters have insisted that the economy was doing well—a pronouncement met with disbelief from wage earners who hadn’t participated in any of the reported economic gains, and whose take-home pay in real terms had often declined. By 2007, the gap between rich and poor Americans rivaled that of the 1920s (Center on Budget and Policy Priorities, 2007). Many of the business scandals were tied to failures by—or incompetence of—federal regulatory agencies; others were traced back to K Street influence-peddlers of whom Jack Abramoff is only the most prominent example.[6] 

American religious institutions have also been the subject of controversy. Revelations ranging from misappropriation of funds to protection of pedophiles to the “outing” of stridently anti-gay clergy have discouraged believers and increased skepticism of organized religion. In that other American religion, major league sports, the news has been no better. High profile investigations confirmed widespread use of steroids by baseball players. At least one NBA referee was found guilty of taking bribes to “shade” close calls, and others have been accused of betting on games at which they officiate. Football players frequently make the front pages; Atlanta Falcon Michael Vick’s federal indictment and guilty plea on charges related to dog fighting was tabloid fodder for several weeks. Even charitable organizations have come under fire; in the early 1990s, United Way of America forced out its chief executive amid accusations that he had used contributions to finance a lavish lifestyle. Other charities have been accused of spending far more on overhead than on good works.

The constant drumbeat of scandal has played out against a background of gridlock and hyper-partisanship in Washington. And—more significantly, for purposes of the public mood—all of it has been endlessly recycled and debated by a newly pervasive media: all-news channels that operate twenty-four hours a day, talk radio, satellite radio, “alternative” newspapers, and literally millions of blogs (weblogs), in addition to the more traditional media outlets.[7] Political gaffes and irreverent commentaries find their way to YouTube, where they are viewed by millions; wildly popular political satirists like Jon Stewart, Bill Maher and Stephen Colbert have used cable television to engage a generational cohort that had not traditionally focused on political news. Everyone who leaves government service seems to write at least one book pointing an accusing finger or otherwise raising an alarm; these exposés join hundreds of other books (most of them alarmist) cranked out by pundits, political scientists and scolds playing to partisan passions. The political maneuvering, cozy cronyism and policy tradeoffs that used to be the stuff of “inside baseball,” of interest only to political players and policy wonks, are increasingly the stuff of everyday conversation.

When one adds to this constant din of revelations, charges and counter-charges the highly visible and widely reported ineptitude of the current administration’s handling of Hurricane Katrina, the drawn-out, inconclusive war in Iraq, the even more nebulous and worrisome conduct of the so-called “War on Terror,” and mounting questions about the nature and extent of government surveillance, is it any wonder American citizens have grown cynical?  Furthermore, all these miscues and misdeeds—and many more—are taking place in an environment characterized by economic uncertainty and polarization, as well as accelerating social, technological and cultural change (including but certainly not limited to the growth of diversity). Add in the so-called “culture wars,” and it’s not hard to understand why generalized trust has eroded. In fact, the better question might be why we haven’t had more social unrest and more generalized distrust.

            What are the attributes of trustworthy institutions? Those of us who teach public administration and public policy frequently use words like “transparency” and “accountability” when we describe the attributes of institutional integrity. The sheer size and scope of today’s government makes it more difficult to achieve the sort of transparency upon which accountability ultimately depends; it also makes that transparency more important. In 1998, Valerie Braithwaite and Margaret Levi edited Trust and Governance, a collection of essays exploring and debating the interrelationship between governance practices and public trust. The chapters grew out of conferences held by the Research School of Social Sciences at the Australian National University, and the contributors synthesized much of the available research.

In one chapter, “Trust in Government,” Russell Hardin (perhaps the preeminent scholar working to define and analyze issues of trust) points out that trust in government is a different animal than trust in one’s neighbor, spouse or child; as he notes, most citizens will lack the specific information needed to make a meaningful decision whether to trust government in the sense we trust individuals with whom we interact. That being the case, Hardin echoes the nation’s Founders in advocating institutional design that will safeguard citizens against abuses of power and other official malfeasance. Implicit in this recommendation is the notion that “trust” in government is necessarily trust in the integrity and continued effectiveness of that institutional design, rather than in the individuals who may be managing the government enterprise at any particular time.

In “Communal and Exchange Trust Norms: Their Value Base and Relevance to Institutional Trust,” Valerie Braithwaite argues that trust in government rests on shared social values, as well as the predictability and dependability of government action. In other words, a government that acts on behalf of “us,” and that does so in a manner consistent with “our” expectations, is not just instrumental. In an important sense, it is constitutive—that is, by behaving in accordance with our expectations, government becomes an expression of a set of values through which we express our unique national character. 

In a particularly acute analysis titled “A State of Trust,” Margaret Levi notes the often-overlooked importance of government’s contribution to the creation of social capital and generalized trust, and catalogues the institutional arrangements that make governments trustworthy. She finds the most important attributes of a trustworthy state to be “the capacity to monitor laws, bring sanctions against lawbreakers, and provide information and guarantees about those seeking to be trusted…If citizens doubt the state’s commitment to enforce the laws and if its information and guarantees are not credible, then the state’s capacity to generate interpersonal trust will diminish” (Braithwaite and Levi 1998, 85-86; emphasis mine). Levi goes on to warn that when citizens don’t consider their government trustworthy, the government cannot perform this function. An essential element of its trustworthiness is a belief in the government’s fairness—what lawyers tend to discuss in terms of due process, equal protection and the rule of law. A government that “plays favorites” or that refuses to follow its own rules loses its claim to be trustworthy.

In “Political Trust and the Roots of Devolution,” M. Kent Jennings marshals survey data spanning thirty-odd years to demonstrate that Americans’ trust in the federal government has steadily declined. He attributes much of that decline to a failure to meet performance expectations, and notes that trust in state and local government units has not ebbed to a similar degree during the period in question. Jennings suggests that this difference in the amount of trust placed in federal and state governments may explain some of the movement toward “devolution” that occurred during the past quarter-century. (It is worth noting that the book in which Jennings’ chapter appears was published well before the actions of the current administration further polarized and alienated Americans.) 

In still another thought-provoking analysis, “Trust and Democratic Governance,” Tom R. Tyler builds on a different series of studies to argue that governments that are widely regarded as procedurally fair, trustworthy, and respectful of their citizens generate social trust by establishing a shared identity. Citizens take pride in their identification with a “good government,” and that pride generates both higher levels of compliance with the laws and a belief in the legitimacy of the government.

There are, of course, a multitude of other studies—empirical and theoretical, philosophical, sociological—about the nature and effects of trust in government. Most of them reinforce or elaborate on the points sketched out above. The bottom line is that perceived government legitimacy engenders trust and social capital, and legitimacy is measured by the state’s compliance with its own constitutional rules. In America, that sort of legitimacy is particularly important, because fidelity to our constitutional values is what makes the otherwise disparate and diverse residents of this country Americans.

As noted, Trust and Governance was published in 1998, two years before the disputed election that brought George W. Bush and his administration to power. Bush’s predecessor, Bill Clinton, certainly had not enjoyed universal admiration and support; stories of his womanizing dogged him from early in his first campaign and persisted throughout his two terms in office. Allegations of improper business dealings led to the enormously expensive (and ultimately inconclusive) investigation known as “Whitewater,” which in turn led to revelations of sexual improprieties with a White House intern, to Clinton’s impeachment by the House of Representatives, and to his eventual acquittal by the Senate. Widespread use of the nickname “Slick Willie” testified to the fact that many Americans did not consider him trustworthy.

For purposes of a discussion of trust in government, however, there is an important difference between Americans’ distrust of Clinton and their distrust of George W. Bush and his administration. People who disliked Bill Clinton, who considered him untrustworthy, made that claim based upon his personal character. Their disapproval, in fact, was often rooted in a belief that he was not “worthy” of the Presidency, that he was “disgracing” the office. The fact that so many of his critics framed the Clinton trust issue in this way is telling, because it conveyed a clear distinction between the (still trusted) Office of President and its (distrusted) inhabitant. In contrast, complaints about the Bush Administration are largely, although certainly not entirely, aimed at the conduct of the office itself, at its perceived inability to govern competently and at efforts by the President and Vice-President to evade constitutional restrictions on the power of the Executive branch.

Every administration will pursue policies and make decisions that displease or anger various constituencies. Every officeholder has political and ideological opponents and personal enemies with a vested interest in bringing his or her transgressions and deficiencies to the public’s notice. Furthermore, many former Presidents and high-ranking officials have attempted to exercise powers more extensive than those granted by the constitution, and many have engaged in behaviors—both personal and institutional—that have been less than honorable. (The Nixon Administration is an obvious case in point.) What has distinguished the Bush Administration has been its incompetence, as well as the sheer number of ethical and legal transgressions, the magnitude of their consequences, and especially—if belatedly—the widespread public awareness and disapproval of them.  It is that heightened awareness that feeds the public’s fear that our governing institutions have been hijacked—that our most basic governing structures are no longer reliable. In that sense, whether any particular criticism is fair or unfair is irrelevant; so long as large majorities of Americans believe—as they clearly do—that the very structure of constitutional government has been compromised, that belief alone is enormously consequential for the public trust.  

George W. Bush entered office under a cloud: his opponent, Al Gore, had won the popular vote, and the legal wrangling over who would get Florida’s electoral votes lasted nearly a month. When the Supreme Court handed Bush the victory, many voters felt disenfranchised, if not robbed.[8] His first few months in office were lackluster, and his approval ratings hovered in the fifties. Then, of course, the attacks of 9-11 changed the political narrative.

In the wake of 9-11, the President’s approval ratings shot into the nineties, as Americans rallied around their Commander-in-Chief. Over the next six years, however, those numbers steadily declined, and during his final two years in office they have hovered in the high twenties or low thirties, depending upon the poll. However closely held and tightly run the Administration had been early in its tenure, events clearly beyond its control—and its (frequently ham-handed) reactions to those events—were reported on by a media undergoing profound changes. “News management” and “spin control” became steadily less possible, and attempts to exercise such control became more visible. Much of the public’s eventual disenchantment was due to old-fashioned incompetence and corruption that—thanks to the aforementioned internet, twenty-four hour news networks and an already polarized electorate—was aggressively publicized. More importantly, mounting evidence of cronyism and self-dealing made people more willing to distrust the Administration’s motives in what soon came to be seen as an all-out assault on America’s constitutional checks and balances. In the final analysis, the strength and breadth of that assault was what distinguished the Bush Administration from prior unpopular or inept administrations.

Most Americans do not follow political news carefully. The process through which average citizens form their opinions about the state of their governing institutions is complex, and the so-called conventional wisdom of any given moment is the product of many kinds and sources of information. Public consensus is notoriously slow to arrive (and even slower to be displaced). It wasn’t until the Bush Administration was in its second term that most Americans (at least, judging from available polls) believed it to be untrustworthy. Unfortunately, they had no trusted alternative to turn to. If Bush’s approval numbers were abysmal—and they were—Congress fared little better. Americans had returned the Democrats to control in 2006, but for a variety of reasons, some understandable, many not, Congress did little during the following year to confront the Administration or to make the major policy changes (most notably, setting a timetable for withdrawal from Iraq) that most people thought they had voted for. The perception that the beneficiaries of the 2006 vote failed to effect real change has added to the frustration and feelings of powerlessness feeding the public mood.

Many commentators have remarked upon the fearfulness that characterized much of the electorate in the wake of 9-11 and for several years thereafter. Those events had certainly bewildered and frightened many people, but the fears that have lingered are not just fears of terrorism (a fear that has been cynically exploited by politicians of both parties, although more frequently and effectively by the GOP). Other, equally destabilizing fears and uncertainties are rooted in systemic problems and challenges that long preceded the Bush Administration.

As increasing numbers of Americans lost health insurance, they worried about medical catastrophes; as outsourcing sent even white-collar jobs overseas, they worried about their own and their families’ future security. Major employers downsized as they lost their competitive edge and market share. As gas prices rose steadily, a country more dependent than most on the personal automobile and the availability of cheap energy felt the squeeze. As general economic conditions worsened, people who had not previously experienced financial insecurity found themselves in increasingly tenuous situations. Mortgage foreclosures moved from low-income neighborhoods into pricier precincts, and the boarded windows became a grim reminder that middle-class Americans are not always exempt from hard times.

If the business cycle and periodic economic downturns have long been a fact of economic life, the threat posed by global warming has not. Environmentalists and climate scientists may have been warning of the dangers for years, but widespread public understanding of the magnitude of the threat to the planet only began to emerge after 2000.  

All of these problems—terrorism, the environment, the global economy, the health care system, the energy crisis—have at least one thing in common: individuals cannot fix them. They require collective efforts, and our ability to take effective collective action depends upon the vitality and trustworthiness of our common institutions—primarily, government. But everything Americans have heard, read and been told for the better part of the last decade has added up to one message: our government is broken.

                                    V. Who Trusts, Who ‘Turtles,’ and Why?

 

Declining social trust is not simply a response to our disquieting external realities.  Putnam concluded that people who live in more diverse neighborhoods are less trusting than those who live in more homogeneous communities. That means that their levels of generalized trust are even lower than the already low levels that characterize Americans generally. And apparently, it doesn’t matter whether the residents of those diverse neighborhoods are members of the majority or a minority—they are all less trusting. Other research, on balance, supports that conclusion. So the question that prompted this inquiry remains: why do diverse neighborhoods produce lower levels of generalized trust than less diverse ones? Is it living with difference that makes us more wary? Or are the lower trust levels actually a function of the places where we live: in other words, of cities?

            If national events and institutional failures were the sole explanation for low levels of trust, the decline would be more uniform. And there is a puzzling political reality that further complicates analysis: the inhabitants of our most diverse cities—the very people who have been identified as least trusting and most ‘turtled’—are far more likely to support liberal candidates and expanded social welfare policies (policies that will clearly benefit the presumably distrusted “others”) than are residents of rural and suburban areas. If the inhabitants of diverse neighborhoods are really less engaged with their communities, less trusting of their neighbors, why are so many of them willing to pay higher taxes to improve those communities and assist those neighbors? The research on social trust does not provide us with answers to these questions; even a cursory examination of that literature uncovers multiple areas of debate and uncertainty.

Scholars disagree about the nature of social trust and its role in creating and sustaining social capital. They disagree on definitions of social capital. They disagree about how to measure either one. They debate varying theories about the relationship of trust and social capital to the performance of government and social institutions. And they propose multiple theories about where trust comes from in the first place. (There is even recent medical research linking trust in humans to levels of oxytocin, a neuropeptide that has been shown to play a key role in social attachment and affiliation in other mammals (Kosfeld, Heinrichs, Zak, Fischbacher & Fehr 2005).)

In 2002, Jan Delhey and Kenneth Newton reviewed most of the research that had previously been done on social trust, and asked a pertinent question: “What sorts of people express social trust and distrust, and under what sorts of social, economic and political circumstances do they do so?” (3) In other words, who trusts and why? They mined the (extensive) available literature and identified six main theories of social trust, which they divided into two individual and four societal theories, and which they then tested against survey data from seven nations. The individual theories were (1) Personality, a theory resting on social-psychological factors (this theory describes social trust as part of a particular personality type—as an attribute of people who are optimistic, who believe in co-operation, and who believe that reasonable people can sit down and resolve their differences); and (2) Success and Well-Being (the theory that social trust is associated with society’s “winners,” and that distrust is more common among the “losers”—those with poor educations, lower incomes and status, and those who are generally dissatisfied with their lives). Personality theory tends to emphasize the importance of childhood socialization, while success and well-being theory stresses adult life experience. In both categories, however, trust is identified as an individual characteristic.

In contrast to the two individual theories, societal theories begin with the assumption that generalized trust is a social property—an attribute of the culture within which the individual functions. Those who approach social trust from this perspective believe expressions of trust are based upon the individual’s estimation of the trustworthiness of the larger society. Delhey and Newton identify four types of societal theory: (1) the Voluntary Associations Theory (with which Putnam is identified, and which posits that we learn trust from participation in voluntary and communal organizations); (2) Network Theory (which sees trust as an outgrowth of participation in the informal networks of daily life—family, friends, co-workers and the like); (3) Community Theory (the belief that trust correlates with the demographic characteristics of the communities within which individuals reside—size, population density, etc.); and (4) Societal Theory (people who live in wealthier nations with democratic governments, greater income equality, more universal social welfare systems, independent courts and political controls over the power of politicians are more trusting than those who live in societies that lack these characteristics). Community theory is sometimes referred to as a “bottom up” explanation, while Societal Theory is “top down,” or institutional.

Delhey and Newton prefaced their description of the survey results with an appropriate caution:

“The study of trust is benighted by the problem of cause and effect. Do people become more trusting as a result of close and sustained interaction with others in voluntary organizations? Or is it, on the contrary, that trusting people join voluntary associations and get involved with their community, leaving distrusting ones at home to watch the television? Do people develop higher levels of trust because life has been kind to them, or is life kind to them because they are trusting?” (11)

When Delhey and Newton tested the six theories they had identified against survey data from seven societies—East and West Germany (surveyed separately), Hungary, Slovenia, South Korea, Spain and Switzerland—they found little support for the social-psychological theory that attributed social trust to early socialization or personality type. Putnam notwithstanding, they also found little or no association between levels of trust and membership in voluntary organizations, or between trust and city size, type of community, or neighborhood satisfaction. Trust was also unrelated to age, gender or even education (except to the extent that better education was a factor in success and well-being, which were related to trust).

Other theories, however, did prove to have explanatory power. First, societal conditions—particularly the presence or absence of social conflict and public safety—were robust predictors of trust; where there was social conflict and/or the absence of public safety and personal security, there was less social trust. As the authors pointed out, this finding is consistent with the theory that socially homogeneous societies—with shared social norms and low levels of social conflict—are likely to have higher levels of trust than societies with “deep social and economic cleavages.” (22)[9] It is also consistent with research that has identified fear as one of the most powerful—and generally detrimental—social motivators.

Second, although membership in voluntary associations was unrelated to levels of social trust, membership in informal social networks was positively related to trust. (This should be good news, since some research suggests that participation in these informal networks—in distinction to more formal associations—is growing.)

Third, the success and well-being theory performed well. As the authors noted, “there is, it seems, quite a lot in the suggestion that those who are successful in life can afford to trust more.” (22) They found that anxiety scores—highest among low-income and low-status groups and the unemployed—were predictive of higher levels of distrust. Interestingly, the two countries that registered the lowest levels of social trust in this study were the two that had most recently experienced significant political and social change. Despite their caution about assigning cause and effect, Delhey and Newton concluded that “Lack of trust is not the cause of social and political upheaval and conflict in these countries, but the expression of them.” (23)

The Delhey and Newton results are consistent with points raised by several critics of social capital research. Carles Boix and Daniel Posner have argued that “a community’s co-operative capacity is a function of the degree of social and political inequality that the community has experienced over the course of its historical development” (1998, 687). The results also lend support to the work of American scholars who criticize social capital theory for its failure to adequately acknowledge the impact of racism and the importance of social justice issues (e.g. Roberts 2000),[10] particularly the wide and growing differences between America’s haves and have-nots. There is a significant overlap between the two categories; poor people in America have historically been disproportionately African-American. 

Interestingly, Putnam’s most recent findings have been discussed almost entirely in the context of immigration, rather than race, and his work has been cited—to his obvious dismay—by those who argue that immigration (legal or illegal) is threatening American bonds of social solidarity. This is curious, since Putnam attributed the ‘turtle’ phenomenon to the extent of ethnic diversity, not to the identity of the people creating it. In much of the United States, “diversity” is most often used as a code word for differences in race, rather than national origin.

Past conflicts have certainly centered on differences of ethnicity, religion and immigrant status, but the central fault-line in America has been, and continues to be, race. It is not coincidental that in our current, highly charged and politicized arguments over immigration, the complaints focus first and foremost on immigrants from Latin America and, to a lesser extent, on those from Asia. It is hard to know the degree to which current anti-immigrant sentiment is masking racial hostility. Both America’s racial history and our tolerance for economic inequality set us apart from other Western democracies, and it should come as no surprise that comparative research confirms this unfortunate aspect of American “exceptionalism” (Hooghe, Reeskens, Stolle & Trappers, 2006).

            If Delhey and Newton are correct, the factors that reduce trust are the presence of social conflict, concerns about public safety, reduced participation in informal networks, and anxiety, especially economic anxiety. All of these factors tend to be present in America’s urban centers, where—despite the growing diversity of the nation’s suburbs—the bulk of our diverse neighborhoods can still be found. As a recent study by the Urban Institute concluded:

“More than half of all neighborhoods in America’s 100 largest metropolitan areas (56.6%) are home to significant numbers of whites, minorities and immigrants, with no single racial or ethnic group dominating the minority population. Six of ten (60.8%) are mixed-income—dominated neither by households in the highest income quintiles nor by those in the lowest. And about a third of all tracts (34.9%) exhibit substantial diversity with respect to age, ethnicity and income.” (Turner & Fenderson 2006).

If we look both at the factors that generate distrust and the characteristics of  neighborhoods where the greatest diversity is to be found, it isn’t difficult to confirm the overlap.

·         In central city neighborhoods characterized by diversity, social conflict is common. Conflicts may take the form of interest groups fighting over inadequate municipal resources; they may occur as different groups jockey for power; or they may arise from miscommunication, or lack of communication, caused by differences in language, lifestyle or culture. In some of the poorest such neighborhoods, in addition to genuine disagreements about goals or efforts to access resources, there are often “street fight” encounters triggered by rival gangs, by real or perceived slights, or simply by the multiple tensions that accompany poverty.

·         Metropolitan areas struggle to provide public safety. Larger cities must also contend with pervasive negative perceptions of urban life; even in relatively safe areas, and in periods of diminished criminal activity, so-called conventional wisdom reinforces the image of “mean streets” and lurking danger. In many inner-city areas, crime is a constant concern, from petty thefts (purse snatching, pick-pocketing) to carjackings, burglaries and violent crimes like murder and rape. (It doesn’t help that most metropolitan newspapers follow the “if it bleeds, it leads” school of journalism, and accordingly publicize the gorier crimes with banner headlines and front-page, “above the fold” placements; local broadcast news follows a similar pattern.) Criminologists who map the incidence of crime—particularly violent crimes like homicide—confirm that such crimes are more common in areas of greater population density, and more common still in areas combining density with substantial poverty. When people do not feel safe, they rarely feel trusting and neighborly.

·         In neighborhoods plagued by perceptions of crime and populated by people who look different, follow different customs, eat different foods and increasingly speak different languages, easy opportunities to engage in the sorts of informal social networking that encourages trusting attitudes are sharply limited.

·         Finally, the anxiety that is a predictor of distrust is a constant companion of people living in many central city venues. The United States has one of the least effective and least extensive social welfare systems in the West, and we actively stigmatize dependence on public assistance. We do not have national health care, or universal access to health care; currently, over forty-six million Americans are without any health insurance coverage. Add to the insecurity and anxiety that accompany this lack of a social safety net the steady loss of manufacturing and other jobs as industries downsize and outsource in order to remain competitive. Then herd large numbers of these vulnerable citizens (and even more vulnerable noncitizens) into crowded in-city neighborhoods. It shouldn’t be surprising that terms like “success and well-being” are not the first words that come to mind to describe such precincts, nor should it come as a surprise that the resulting levels of stress and anxiety are high.  These are frequently people who are without resources or power—and not just political and economic power, but even the power to substantially improve their own lives. Anxiety is a child of powerlessness.

Dealing with unfamiliarity that challenges our worldviews can be stressful under any circumstances, but when ever-increasing pluralism complicates the lives of those who have the fewest personal and fiscal resources for dealing with such challenges, we shouldn’t be surprised when the response is withdrawal. Turtles retreat into their shells when they’re threatened, and in many of America’s inner cities, people live under more or less constant threat.

There are, of course, many people living in cities—and in diverse neighborhoods—who are not without resources, just as there are many who are civically engaged. Widespread gentrification has brought people of means back to our central cities in large numbers. It would be a mistake, however, to assume that these more fortunate residents can entirely escape the consequences of municipal failures to secure public safety and provide city services. Urban areas are where institutional failures are most apparent, and cities are places where all people—rich and poor—are most dependent upon reliable and trustworthy services—not just police and fire departments, but public transportation, public schools, public parks and public works. When urban public institutions fail, everyone who lives in the city feels the effects.

Does diversity add to the stress of urban living? Undoubtedly—for the reasons suggested by Putnam and others, and alluded to earlier in this paper. But it is difficult—and arguably misleading—to attribute lower levels of trust to diversity alone, especially when we can’t rule out potential contributions by other elements of those same environments.

American diversity is not a new phenomenon. Optimists viewing the current state of affairs like to point out that the history of the United States is a history of making strangers into (more-or-less) members of the family.  Pessimists remind us that the process has been uneven, often unpleasant, and constantly contested. Historians have argued strenuously over the proper metaphor: is America a melting pot, where cultural differences are “cooked out,” or is it a stew, with different ingredients providing their unique flavors to the same dish? Perhaps we should think of the country as a symphony, where one group plays horns and another strings—different instruments harmonizing to create a single musical composition that is more than the sum of its parts.

Uplifting as the symphony metaphor may be, however, the pessimists are right to remind us that American history has rarely been harmonious. Particularity and identity have stubbornly resisted being amalgamated into a more featureless, more White Anglo-Saxon Protestant, more uniform citizenry. Furthermore, as a result of slavery, voluntary immigration, and internal migration, the face and character of the American majority have undergone constant change, as successive groups of newcomers have been first resisted and then grudgingly accommodated. The process has been anything but smooth, and Emma Lazarus to the contrary, Americans haven’t always lifted lamps “beside the Golden Door.”[11] Globalization, technology and terrorism may be raising the stakes, but the tensions caused by diversity and immigration are hardly new.

Short of retreating from the global community, American diversity will continue to increase. The research cited in this paper suggests that we can best ameliorate the tensions that accompany that diversity by devoting the resources necessary to reduce crime and repair and reinforce America’s tattered and inadequate social safety net.

References

Arneil, Barbara. Diverse Communities: The Problem with Social Capital. Cambridge University Press, 2006.

Boix, Carles and Daniel N. Posner. “Social Capital: Explaining Its Origins and Effects on Government Performance.” British Journal of Political Science 28(4): 686–693, 1998.

Braithwaite, Valerie and Margaret Levi. Trust and Governance (Volume I: Russell Sage Foundation Series on Trust), 1998.

Center on Budget and Policy Priorities. “Income Inequality Hits Record Levels, New CBO Data Show.” December 14, 2007.

Delhey, Jan and Kenneth Newton. “Who Trusts? The Origins of Social Trust in Seven Nations.” Social Science Research Center Berlin, 2002.

Gitlin, Todd. The Intellectuals and the Flag. Columbia University Press, 2006.

Hagedorn, Nancy L. Review of Alan Taylor, American Colonies: The Settling of North America (Penguin History of the United States. New York and London: Viking, 2001). H-Atlantic, November 2003.

Hooghe, Marc. “Is Reciprocity Sufficient? Trust and Reciprocity as Forms of Social Capital.” Paper presented at the 98th Annual Meeting of the American Political Science Association, Boston, August 29–September 1, 2002.

Hooghe, Marc, Tim Reeskens, Dietlind Stolle, and Ann Trappers. “Ethnic Diversity, Trust and Ethnocentrism in Europe: A Multilevel Analysis of 21 European Countries.” Paper presented at the 102d Annual Meeting of the American Political Science Association, Philadelphia, August 31–September 3, 2006.

Jacobs, Jane. The Death and Life of Great American Cities. Random House, 1961.

———— Systems of Survival: A Dialogue on the Moral Foundations of Commerce and Politics. Vintage Press, 1992.

Kosfeld, Michael, Markus Heinrichs, Paul J. Zak, Urs Fischbacher and Ernst Fehr. “Oxytocin Increases Trust in Humans.” Nature 435: 673–676, June 2005.

Krugman, Paul. “Gone Baby Gone.” New York Times, October 22, 2007.

Luhmann, Niklas. “Familiarity, Confidence, Trust: Problems and Alternatives.” In Diego Gambetta (ed.), Trust. Oxford: Blackwell, pp. 94–107, 1988.

Portes, Alejandro. “Social Capital: Its Origins and Applications in Modern Sociology.” Annual Review of Sociology 24: 1–24, 1998.

Putnam, Robert D. Bowling Alone: The Collapse and Revival of American Community. Simon and Schuster, 2000.

———— “E Pluribus Unum: Diversity and Community in the Twenty-First Century.” Scandinavian Political Studies, Vol. 30, No. 2, 2007.

Roberts, Dorothy E. “The Moral Exclusivity of the New Civil Society.” Chicago-Kent Law Review, 2000.

Rosenblum, Nancy L., and Robert C. Post. Civil Society and Government. Princeton University Press, 2002.

Tierney, John. “Facts Prove No Match for Gossip, It Seems.” New York Times, October 16, 2007.

Turner, Margery Austin and Julie Fenderson. “Understanding Diverse Neighborhoods in an Era of Demographic Change.” The Urban Institute, June 2006.

Urban Institute. Uninsured and Dying Because of It: Updating the Institute of Medicine Analysis on the Impact of Uninsurance on Mortality. January 2008.

Wuthnow, Robert. America and the Challenges of Religious Diversity. Princeton University Press, 2005.

———— Loose Connections: Joining Together in America’s Fragmented Communities. Harvard University Press, 2002.

[1] Social capital is the term given to human networks that engender mutual trust and reciprocity.

[2] Most students of the concept would argue that social capital networks are not exclusively those created by the organizations in civil society—that we can build social capital through workplace associations, for example. But civil society is where the great majority of such networks are found.

[3] I say “constitutional culture” because–although too few citizens are familiar with the explicit ideological bases of American  political philosophy—most of us have been socialized into a culture with values based upon that philosophy. To quote the immortal words of Superman, most Americans really do believe in Truth, Justice and the American Way.

 

[5] In fact, she cites several studies suggesting that greater trust leads to increased participation, rather than the other way around.

[6] K Street is shorthand for the proliferating number of lobbyists pleading their clients’ cases with members of Congress, named for the Washington street where many of their offices are located. Many of those lobbyists are themselves very recently departed elected officials, who make considerable amounts of money by persuading their former colleagues to vote for or against particular legislation.

[7] Media critics charge traditional media with providing superficial “infotainment,” rather than substance. While it is true that, for many of us, wall-to-wall coverage of the latest murder or most recent “bimbo eruption” occupies far too much of the media’s attention, the need to fill a vastly expanded “newshole” means that political news also gets reported, if not in the detail or depth some of us might like. Not only does it get reported, but thanks to the 24 hour nature of today’s news environment, those reports get endlessly recycled and repeated.

[8] Many legal scholars were particularly critical of the Court’s intervention into what had historically been categorized as a ‘political question’ not appropriate for judicial determination. As several pointed out at the time, had the Court allowed the process to unfold as the Constitution provided, the result would have been the same; the dispute would have been decided in the House of Representatives, which was firmly under Republican control at the time. Resolution of the contested election would have taken more time, but charges of illegitimacy would have had much less weight. The titles of books by scholars and commentators (who generally moderate their personal views) illustrate just how strongly many felt: see, e.g., Vincent Bugliosi, The Betrayal of America: How the Supreme Court Undermined the Constitution and Chose Our President (2001); Alan Dershowitz, Supreme Injustice: How the High Court Hijacked Election 2000 (2001); Douglas Kellner, Grand Theft 2000: Media Spectacle and a Stolen Election (2001).

[9] There is a good deal of research suggesting that high levels of social capital are associated with better government; the open question, of course, is which came first—does social capital lead to good, or at least better, government, or does good government create social capital?

[10] As Nicholas Lemann dryly noted (in a 1996 Atlantic article entitled “Kicking in Groups”), the opposite of Putnam’s theory would be that the decline in ‘civic virtue’ was largely confined to the decade between 1965 and 1975, when both crime and divorce rates rose dramatically, and that the “overwhelming social and moral problem in American life is instead the disastrous condition of poor neighborhoods…Rather than assume, with Putnam, that such essential public goods as safety, decent housing, and good education can be generated only from within a community, we could assume that they might be provided from without—by government.” (Vol. 277, No. 4, p. 26)

[11] Emma Lazarus is the poet who penned the famous words inscribed at the foot of the Statue of Liberty: “Give me your tired, your poor, your huddled masses yearning to breathe free… I lift my lamp beside the Golden Door.”

The Idea of Liberty

When I was initially asked to speak, I asked if there was a subject you wanted to hear about.  I was told “Oh, something about liberty.” As my older grandchildren might say, that seemed pretty specific—NOT.

 

When I thought about it, though, I decided that the subtext to the assignment was really something like: We read your columns in the Star, and we want to know where the hell you’re coming from. Which certainly seems fair enough. I get 500 words every two weeks to make my case on a given topic; that barely lets me outline an argument, let alone provide the context within which I’m operating.

 

I am definitely a person with a particular point of view (I know that comes as a huge surprise to those who read my columns…As my husband often says, “tell me what you REALLY think, Sheila”), and that point of view is grounded in my reading of the  U.S. Constitution, the philosophy that animated it, and its assumptions about the essential nature and importance of individual liberty.

 

It is absolutely staggering how little the general public knows about our constitution and Bill of Rights. There are many reasons for that—we don’t do a very good job of teaching civics, or high school government, for one thing, and the media doesn’t do a very good job of covering or explaining constitutional arguments, for another. 

 

I have been amazed, for example, how many of my undergraduate students cannot answer the very basic question “What is a constitution, and how do constitutions differ from statutes?”

 

As I try to explain to them, a constitution is a framework within which a government functions. It limits what government can do. Part of our constitution establishes mechanics and structure: assigning duties, determining who does what, establishing requirements for holding public office—that sort of thing. Some of it, however—especially the Bill of Rights—is a statement of our national values—our American moral code. The Constitution and Bill of Rights establish our national values, and our subsequent policy choices are constrained by those constitutional values, and are supposed to be consistent with them. (Village Pantry example.)

 

Original intent, properly understood, means fidelity to those original values. So, for example, James Madison/internet.

(Story about student who asked “Who’s J.M.?”)

 

The men who drafted our constitution were products of the Enlightenment, the 18th century philosophical movement that ushered in great changes in the way people thought about government, science, and the nature of reality. One of the most significant consequences of Enlightenment philosophy was the elevation of the value of individual rights, understood as personal autonomy, or “self-rule.” The Founders wanted to limit the extent to which government can prescribe how we live, or interfere with our right to decide for ourselves what beliefs and goals make our lives meaningful. They were profoundly respectful of the integrity of the individual conscience. They wanted to protect each individual’s right to decide how to live his or her life; that’s why the Bill of Rights is a list of things government is forbidden to do—it sets limits on the power of government, even when that government is acting in accordance with majority desires—perhaps especially when it is.

 

The Founders were especially leery of what they called the “passions of the majority,” and as a result, the Bill of Rights is what lawyers and scholars call a “counter-majoritarian” document. It is a libertarian “brake” on the power of the majority to direct government action.

 

What critics often fail to recognize is that the Bill of Rights is less concerned with outcomes than processes; less concerned with what we decide than it is with who decides it and how.  In our system, we decide for ourselves how to live, what to read, whether to pray and if so, to whom—and we get to make those decisions free of government interference or coercion, so long as we do not harm the person or property of a non-consenting other, and so long as we recognize and honor a similar right for our fellow-citizens.

 

Even when we are careful to define our terms, however, we really can’t understand the Bill of Rights without understanding American history. And if my students are any indication, our high schools  aren’t doing any better at teaching American history than they are at teaching American government.

 

The Founders we all talk about, those we do learn about in American history, were the men who gave us the Declaration of Independence, the Constitution, and the Bill of Rights—the men who created our legal system. But there was an earlier set of Founders: the Pilgrims and Puritans who originally populated the colonies. And those two sets of progenitors lived in very different conceptual universes.

 

The first set, the men that legal scholar Frank Lambert has called the Planting Fathers, came to the colonies for religious freedom, just as we learn in grade school. What we don’t learn is that religious freedom meant something rather different from the religious freedom most of us celebrate today. The Planting Fathers’ definition of religious liberty was “freedom to do the right thing.” (explain). Puritans like John Winthrop came to America to build a ‘Shining City on the Hill,’ a ‘new Israel.’

 

Religion was intensely important to those original settlers. Most of them believed that God not only wanted them to follow the “right way,” but that He also wanted them to make sure their neighbors were living in accordance with God’s rules as their particular church defined those rules. Most of them took for granted that government would impose religion on its citizens. Religious freedom meant a government that would establish the “correct” religion.

 

Not that there weren’t dissenters: Roger Williams was one ardent believer in freedom of conscience; the fiery Baptist preacher John Leland, more than a century later, was another. Williams was expelled from the Massachusetts Bay Colony for his dissenting views, and went on to found Rhode Island, where he insisted that there be no religious test for citizenship. He was also the first person we know of to use the phrase “a wall of separation between Church and State,” over 150 years before Thomas Jefferson would do so.

 

Those early colonists were products of the Protestant Reformation, the religious upheaval that began when Martin Luther nailed his Ninety-five Theses to the church door. The Reformation brought about a new emphasis upon the individual. The Catholic Church had taught that people needed a “go-between” between themselves and God, and an interpreter to tell them what the bible meant; Luther, however, insisted that each man could approach God—and read the bible—without an intermediary. Over time, that bit of heresy ended up undermining the whole idea of authority—the idea that priests—and Kings—held those positions because God had chosen them, and that disobedience to the King and other civil authorities was therefore the same thing as disobedience to God.

 

The cultural influence of the Puritans, and especially of their Calvinism, remains strong to this day. Calvin taught that God alone was sovereign, and that He had chosen certain people for salvation and others for damnation. Calvin also fashioned what has come to be called a “Presbyterian” model of church governance, in which individual congregations elected delegates to a presbytery, or governing body—a democratic organizational model that influenced later American attitudes toward the structures of authority. Calvinism in its various manifestations has been an important influence on American culture—and so has the Puritan view of liberty as the “freedom to do the right thing, and to make sure your neighbor does too.”

 

This is the way that one legal scholar has described the difference between these early Puritans, or “Planting Fathers” and the nation’s Founding Fathers:

 

“In 1639, a group of New England Puritans drafted a constitution affirming their faith in God and their intention to organize a Christian Nation. Delegates from the towns of Windsor, Hartford and Wethersfield drew up the Fundamental Orders of Connecticut, which made clear that their government rested on divine authority and pursued godly purposes….

 

One hundred and fifty years later, George Washington took another oath, swearing to “faithfully execute the office of President of the United States” and pledging, to the best of his ability, to “preserve, protect, and defend the Constitution of the United States.” The constitution that he swore to uphold was the work of another group of America’s progenitors, commonly known as the Founding Fathers, who in 1787 drafted a constitution for the new nation. But unlike the work of the Puritan Fathers, the federal constitution made no reference whatever to God or divine providence, citing as its sole authority ‘the people of the United States.’”

 

What had happened in that intervening 150 years was the philosophical movement called the Enlightenment. John Locke, Montesquieu, David Hume and other Enlightenment figures had become particularly influential in the colonies. The Enlightenment introduced a new emphasis on rationality, ushered in modern science, and—most important for our purposes today—fashioned a very different definition of liberty: to these philosophers, liberty did not mean “freedom to do the right thing.” It meant freedom to do your own thing, freedom to choose your own ends and live your own life free of government interference, so long as you did not thereby harm anyone else.

 

I don’t think it is an exaggeration to say that American history has involved a continuing tension between the Puritan and Enlightenment worldviews—and the very different concepts of liberty they represented. In my most recent book, God and Country: America in Red and Blue (only $16.45 on Amazon.com!!) I explored that history, and its relevance to our current public policy debates and the so-called “culture war.” Let me share just one example I used in the book:

 

Some of you may remember the incident in 2003, when the federal courts ordered the removal of the two-and-a-half-ton granite Ten Commandments monument that Chief Justice Roy Moore had installed in the rotunda of the Alabama Judicial Building, home of the state’s Supreme Court. When the stone was being removed, supporters of Judge Moore rallying in front of the courthouse were interviewed by television reporters. Virtually all of them said that the removal of the monument was an infringement of their “religious freedom.” Not surprisingly, lawyers and civil libertarians found this claim ludicrous; what they saw was a theocrat attempting to use the authority of the state to impose a particular religious perspective at the expense of all others—the absolute antithesis of “religious freedom.”

 

What we need to understand is that neither side was pandering or lying. They looked at the same basic “facts”—a massive granite monument carved with the Ten Commandments, installed at the seat of the State Supreme Court, and a federal court order to remove it—but they interpreted what they saw in radically different ways. Both sides genuinely believed the other side was willfully ignoring “plain” truths: Moore’s supporters were angry that federal courts would not recognize the “fact” that the United States was a Christian Nation. Civil libertarians found Moore’s position incredible in the face of the First Amendment’s “clear” prohibition of religious establishments—a clarity which, needless to say, eluded Moore’s defenders. The two sides to this conflict might as well have lived on different planets, given their inability to communicate.

 

The constitutional constraints on government don’t just protect religious liberty, of course. The Fourth Amendment requires government to have a good reason—an articulable reason, not just a hunch—for searching or seizing someone. Due process guarantees require certain procedures to be followed when government interferes with your liberty or your property. The Equal Protection doctrine requires government to treat all similarly situated citizens alike. We can discuss these and other specifics of the Bill of Rights during the Q and A, but one way to think about all of them is as pieces of an effort to level the playing field between relatively powerless individuals and the 500-pound gorilla that is government. (explain)

 

Clearly, I take sides in these debates. Here is my reasoning: the United States is among the most diverse nations on earth. We are also one of the most religious, and one of the most religiously diverse. If we are to live peacefully together, government cannot be allowed to play favorites. There are two arguments for preferring the vision of the Founding Fathers over that of the Planting Fathers: one is philosophical and the other is prudential.

 

Philosophically, I prefer what is sometimes called the “libertarian principle,” the principle that individuals should be able to live as they see fit, free of the interference of government, so long as they do not harm the persons or property of others. You might call it the “live and let live” principle. More authoritarian people will argue that certain private behaviors harm the whole community and that therefore the government should be able to prevent those behaviors. The difficulty is that there is no consensus on what behaviors those are.

 

Prudentially, there is a strong argument to be made that in a diverse country that is becoming steadily more diverse, there is no practical alternative to state neutrality in matters of conscience. All over the world, countries are fracturing along ethnic and religious lines, as contending factions try to seize power and to privilege their own ethnic group or tribe. Whatever the merits or demerits of the Founding Fathers’ choices, I would argue that their insistence upon limiting the power of the state and subjecting elected officials to the rule of law has kept us from turning into Bosnia or Northern Ireland or, more recently, Iraq.

 

As I have already noted, the Founding Fathers were products of the Enlightenment who understood that a government strong enough to protect a man’s property was a government strong enough to expropriate it. They were well aware that a government powerful enough to provide security would be a government powerful enough to threaten that same security. They understood that government needed sufficient authority to be effective, but they insisted on checks and balances to ensure that its authority would not be abused. They knew that the legitimacy of government is dependent upon its adherence to constitutional limits and the rule of law.

 

I used to work for Bill Hudnut, when he was Mayor of Indianapolis—I was his Corporation Counsel, or City Attorney. I think his favorite analogy said it best:  government should be the umpire, not one of the players. Or, as an ACLU friend of mine used to say, “Be careful before you empower the government to impose whatever rules the majority favors at any given time. Remember, poison gas is a great weapon, but only until the wind shifts.”

Our job as citizens is to protect liberty by keeping the poison gas out of the hands of the government.

 

Thank you.

 

 

Smart Government

Every four years, candidates for offices high and low attribute the problems of government to a distressing lack of bipartisanship, and promise that—if elected—they will “reach across the aisle” to “solve real problems.” These promises are so predictable, and so empty, that most of us simply tune them out.

Wonder of wonders, however, a genuinely bipartisan effort is being mounted right now, right here in Indiana, to address what most impartial observers agree is the most significant governance problem we Hoosiers face.

MySmartgov.org has been formed to press for adoption of the recommendations initially made by the Kernan-Shepard Commission, a bipartisan group of Indiana leaders that studied the structure of Indiana government and issued a report with numerous recommendations in December 2007. As its name suggests, the commission was led by former Governor Joe Kernan and Chief Justice Randall Shepard, who accepted the task at the request of Governor Mitch Daniels.

It is telling that the Commission’s recommendations closely mirrored those made by Gov. Paul McNutt—in 1936.  Never let it be said that Hoosiers rush into anything.

MySmartgov.org proves the old adage that politics makes strange bedfellows. Its most prominent member-supporters, other than the original Commission participants, are the Indiana Chamber of Commerce, the Central Indiana Corporate Community Council, the Indiana Realtors, and the Professional Firefighters Union. Its Executive Director is Marilyn Shultz, who served as State Budget Director during the Kernan Administration. Even the organization’s blogging is being done by a team consisting of one Republican and one Democrat.

Why is this a big deal? Because Indiana’s inefficient and bloated governing structure is strangling us, driving up property taxes while starving service delivery.

Governing arrangements adopted in 1816 and 1851 are still on the books, and as a result, Indiana citizens pay for, and are governed by, more than 10,300 local officials. The state “boasts” 3,086 separate governing bodies, hundreds of which have taxing authority. Compared with 11 other states of similar size, Indiana has more levels of government than all but two.

It is this bloated superstructure that makes it nearly impossible to follow through on the other perennial promise of political candidates—the promise to root out waste. Here in Indianapolis, for example, Mayor Ballard is belatedly realizing just how limited his options are. It’s easy to criticize incumbents and demand to know where our tax dollars are going; what too few of those critics understand is that most of the waste is in our governing structures, in overlapping and outmoded units of government. It’s certainly not in service delivery, which has been cut to the bone.

In Indiana, we don’t put tax revenues to work enhancing our quality of life. Instead, we use them to pay for 1,008 township trustees and other officeholders we no longer need.

In some contexts, bipartisanship is code for retaining the status quo. In this case, however, it is the only way Indiana can progress. Liberal or conservative, Republican or Democratic, we all deserve efficient, accountable government. Smart government.

 


2008 Election

The American economy has been strained to the breaking point by eight years of reckless fiscal policies. Our international stature has been compromised and diminished by arrogant and unilateral foreign policies. Our government has helped create a global energy crisis, and has done nothing about climate change. You could be forgiven for assuming that those issues are central to the upcoming elections, but I’m going to suggest that war and peace, economic prosperity and even national self-respect are in a very real sense subsidiary to what is truly at stake on November 4th. 

This election is a contest between the past and the future; its outcome will determine whether Enlightenment rationalism or religious fundamentalism prevails. In short, this is the election that will determine who wins the “culture wars.”

There are some arenas where the culture clash is front and center; even James Dobson has said that losing the referendum on same-sex marriage in California would mean that the Christian Right has unambiguously lost the culture war. But the conflict is more consequential than the future of same-sex marriage and gay rights, important as that is. This election will determine who gets to control what America will look like in the 21st century. It is a fight between absolutely incompatible worldviews.

I’d been convinced for some time that this election would be a fateful battle between culture warriors, but the choice of Sarah Palin as John McCain’s running mate confirmed my thesis.  I don’t say this simply because Palin represents everything that is wrong with social conservatives’ ideology, although she does. (She’s anti-choice even in cases of rape or incest, she opposes stem-cell research, she’s anti-gay, and she’s really anti-science—she’s an advocate of teaching creationism in the schools who does not believe that human activities contribute to global warming).

I also don’t say this simply because her social conservatism was more important to John McCain than her absolute lack of any qualification to be a heartbeat away from the Presidency.

I say this because her selection was part and parcel of the way in which culture warriors really see the issue of gender—and by extension, how they see every other issue of diversity, including but certainly not limited to gays and lesbians.

Think about it. Had McCain chosen a male running-mate with Sarah Palin’s resume, the choice would have been laughed off the national stage, dismissed as absolutely unserious. Tim Pawlenty, the equally socially conservative Minnesota governor who was on the McCain short list, was widely criticized for being too insubstantial, for having qualifications too likely to be dwarfed by Biden’s greater experience and gravitas. And Pawlenty looks like a seasoned elder statesman compared to Palin. What, then, did she bring to the table, other than (excuse me) a vagina? And just how cynical—and revealing—does that make this selection?

Here’s the calculus as I see McCain’s folks analyzing it: 1) a lot of women voted for Hillary; 2) social conservatives in the GOP base still aren’t excited by McCain. We can energize the base by choosing one of their own, and as a  bonus, we can pick up disappointed Hillary voters because she’s a woman, and women are interchangeable. Women just want to see someone who looks like them in office, bless their pretty little heads.  It seems genuinely never to have occurred to the McCain camp that for women voters to believe that a candidate “looks like them” might require more than shared secondary sexual characteristics.  At the very least, it means sharing a particular worldview, being a particular kind of woman.

The Christian Right approaches issues of gay equality the same way, by constructing a monolithic “gay agenda” that everyone in the gay community is assumed to share. It is also the way they see African-Americans—and in fact, as one friend of mine remarked, the choice of Palin is based on precisely the same worldview that put Clarence Thomas on the Supreme Court. He’s black, so the black folks should be happy. So what if everything Thomas stands for is in stark contrast to what the vast majority of African-Americans believe? So what if Sarah Palin’s positions are profoundly anti-woman? She’s female. Surely that’s all Hillary’s supporters—and by extension, other women—care about.

It is ironic that, as the Democratic party has moved past tokenism toward genuinely pluralist politics, the Republicans have bought into the worst kind of identity politics. Those differences between contemporary Republican and Democratic worldviews are consequential for all of us.

  • The emerging Democratic philosophy requires that we look at individuals—gay, straight, Christian, Jewish, black or white—and evaluate those individuals on their merits, their talents, their characters. It isn’t that race or religion or gender or orientation becomes irrelevant; it’s just that those markers of identity aren’t dispositive—they’re one aspect of this particular human being, and we grade this human being on the basis of everything he or she brings to the table. Everybody gets to compete on a level playing field, where being gay, female or purple is neither an asset nor a liability. It’s simply a description.
  • The worldview of the right-wingers who control today’s GOP, on the other hand, is paternalistic. It begins by assigning people to categories, by dividing the world into “us versus them.” Members of the group labeled “us” are the elect, the rightful rulers of the universe. Political considerations do, however, require some concessions to the fact that “they” have the right to vote, and so some tokenism is required. (It never seems to occur to those holding this worldview that tokenism is as insulting as outright bigotry. Tokenism assumes that members of those “other” groups are interchangeable, that unlike white Protestant straight males, they are not entitled to be accepted or rejected on the basis of their individual merits.) When you view the political landscape through this lens, you believe every debate must have winners and losers. There is no “win-win.” There is no “live and let live,” because allowing people to live their lives in accordance with any rules other than your own is—by definition—defeat.

At its base, this election is a choice between those two worldviews. It’s a choice between the past—where the color of your skin, the denomination of your church, your gender and/or your sexual orientation determined your place in the social order—and a future where behavior, and not identity, determines how far a person can go.

 


Constitutional Culture

As Americans prepare to go to the polls, the nation is teetering on the edge of an economic meltdown. If we are to avoid electing someone who will make things even worse—never mind beginning to turn things around—it behooves us to consider how and why we are in this mess.

Permit me to suggest that our current problems—including our economic problems—are rooted in the fact that for the past eight years, we have been governed by an administration that has operated far outside of what I call America’s constitutional culture. As we prepare to say “adios” to the Bush calamity and to choose a new President, we would be well advised to look closely at each candidate’s approach to the constitution, because a willingness to operate within its constraints will tell us much more than the issue papers and campaign promises that are the staples of electoral strategies.

A constitution does many things: in its more pedestrian provisions, it lays down the mechanics of governing—how old must a person be to run for President? How shall the legislature be selected? Those sorts of things. More fundamentally, however, constitutions provide a statement of national values—a moral code governing our necessary civic infrastructure. America’s constitution places a high premium on protecting individual rights by limiting the scope of government power, separating powers, and insisting on checks and balances and the rule of law.

For the past eight years, the Bush-Cheney Administration has shown nothing but contempt for those constitutional constraints, and the policies it has favored have been consistent with that contempt.

It’s not just the Patriot Act, NSA spying, or the establishment of the prison at Guantanamo, alarming as those and similar measures have been. It’s not just the careful selection of judges who can be expected to favor the prerogatives of government over the rights of citizens. It’s not just the use of signing statements to circumvent constitutionally prescribed policymaking processes. It can also be seen in the proliferation of no-bid contracts, privatization, cronyism, and lack of regulatory oversight that has precipitated our current financial crisis.  (Make no mistake—the administration’s anti-regulatory fervor is part and parcel of its general disdain for the rule of law, and has been a major contributor to our current economic crisis. Notwithstanding the florid rhetoric from self-proclaimed advocates of the free market, markets cannot function without clear ground rules and impartial umpires willing to enforce those rules.)

Fine, you may say. I agree the people we elect ought to be bound by the rule of law. But what do a Presidential candidate’s constitutional commitment and knowledge tell me about his or her policies most likely to affect me?

 

Consider the following:

·        A President who understands the First Amendment’s religion clauses will not try to change the laws to incorporate particularistic religious beliefs about abortion, homosexuality or science. That means supporting stem-cell research. It means no Terri Schiavo dramas, no “Defense of Marriage” acts, no creationism in the classroom.

·        An administration respectful of the Fourth Amendment will not read your email or eavesdrop on your telephone conversations.

·        A President who respects the rule of law, who enforces laws and regulations impartially (and thus prevents the wholesale looting of the treasury by the well-connected) is far less likely to preside over an economy where jobs are lost, homes foreclosed and retirement accounts devalued.

·        A President who understands the philosophy and intent of the Equal Protection Clause of the Fourteenth Amendment will respect diversity and insist upon equal rights for all Americans.

Barack Obama taught constitutional law. He and Joe Biden have given ample evidence that they understand, and are committed to, constitutional principles. John McCain’s embrace of constitutional limits has been spotty, at best; Sarah Palin has given no evidence of ever having read the constitution (or much else).

I am as aware as anyone that this country has often failed to live up to its highest aspirations or to honor its constitutional commitments. But the damage done by the Bush Administration has been both systemic and insidious, because it has called those very aspirations into question. It will not be easily repaired.

Political partisans always insist that “this election is the most important ever.” It’s easy to dismiss overheated pronouncements (like my own!) as predictable election-year rhetoric. But as the old sayings go, even paranoids have enemies and even stopped clocks are right twice a day. When Americans go to the polls November 4th, we will be voting for far more than a President. We will be voting to reclaim—or to jettison what is left of—America’s constitutional culture.