In a recent column for the New York Times, Charles Blow gave voice to a question with which I continue to struggle–a question that (I assume, albeit without evidence) bedevils most thoughtful people: what can I do? What difference can one person make?
Blow recounted his family’s history of poverty, and told of a trip back to visit a favorite–very poor–aunt. By the time of the visit, he had moved into a more favorable economic position, but was certainly not able to ameliorate the conditions of the impoverished folks in his family, let alone others similarly situated.
I sat there thinking about the great divide among us, about how far removed I now was from this life, but also about how very connected I was, spiritually, to it.
And I was conflicted. How much could I or should I help? I have had long talks with my mother about this. Other than a little money in greeting cards, there wasn’t much that I could do for all the people I knew in need.
Blow concluded–accurately–that the problem of poverty was not going to be solved by personal generosity. It would require public policy– and public indifference continued to impede passage of such policies. He decided that, given his particular skills and his position with the Times, the best thing he could do was advocate.
Blow’s column really resonated with me, not because of the specific issue he identified, but because that issue–poverty–shares an essential component of most of the issues Americans face right now. It is a problem that’s far too big for an individual to solve, or even substantially affect.
I don’t know about those of you who read this blog, or other people generally (it may simply be my own personality defect), but what depresses me are not the sorts of problems and challenges we all face in life. I can deal with those, because in most cases, if I work hard, I can do something about them. What depresses me is powerlessness–an inability to solve a problem, whether personal or social, or even make a dent in it.
Most of what I see around me these days reinforces that powerlessness.
Any reasonably well-informed person in today’s America cannot help but see what seems to be the disintegration of our society in the face of the truly massive threats we confront. Yes, some of those threats have been with us a long time, although (thanks to the fact that we currently marinate in media and social media) we have become much more aware of them. But others, like climate change, pose challenges that are new–and monumental.
And then there’s gerrymandering, and a global pandemic and the utter insanity of a significant portion of the American population.
If we are sentient and even remotely aware, each of us has to ask ourselves the question Charles Blow posed in his column: what can I do? What possible impact can an individual make on problems that are national or even global in scope?
I suppose one answer is to work for the election of reasonable, competent people who take these problems seriously, although gerrymandering frequently defeats that effort. Another is to model appropriate behaviors in our own lives–to work for equity and inclusion and rational public policies in our own communities. But–in the absence of widespread public participation in those activities or the emergence of effective social movements devoted to them– any rational evaluation of their efficacy will conclude that they have very little impact. (That doesn’t mean we shouldn’t do these things, but neither should we exaggerate their importance in the scheme of things…)
Charles Blow concluded that advocacy was the best thing he could do; as someone with a “bully pulpit” at a national newspaper, he is in a position to affect the national discussion. Most of us involved in advocacy don’t have that sort of audience. We are left feeling powerless–because in a very real sense, we are powerless.
Maybe that feeling–that acute awareness of a loss of agency–is why so many people are looking for someone to blame…
Every morning when I sit down at my computer, I’m confronted with headlines from the various news sources to which I subscribe: The Guardian, The New York Times, The Washington Post…and through the day, a mind-numbing number of others. I don’t know anyone with the time and/or inclination to carefully read all the available news and opinion, and I certainly don’t–like most consumers of media, I scan the headlines and click on those that promise some measure of enlightenment or moderately important/relevant information.
Theranos, as you probably know, was the much-hyped startup company founded by Elizabeth Holmes–young, very good-looking and evidently one really smooth talker. She claimed the company had invented a new kind of blood testing technology that was going to save both time and money. Lots of people invested in it.
The most generous interpretation of what came next was a discovery that the technology didn’t work; a less-generous interpretation is that Holmes intentionally perpetrated a fraud. A jury is currently hearing evidence on the latter interpretation.
So what–if anything–does this audacious scam (if that is, indeed, what it turns out to be) have to do with Afghanistan? Well, the article does point out that General Mattis, late of the Trump Administration and the Afghan war, was on the board of Theranos and a major cheerleader for the company.
But the real connection was a cultural one.
Like the Afghanistan debacle, Theranos is a horror story of wishful thinking, credulous media, and celebrity impunity. Whether or not intentional deception was involved, both episodes display the dishonesty and incompetence of interlocking tech, finance, media, and military elites.
Mattis’ role in both sorry spectacles–the war and Theranos–illustrates the moral rot that infects far too many of the figures lionized by a media chasing eyeballs and clicks rather than the information required by a democratic citizenry.
Mattis denies any wrongdoing, claiming he was taken in, too. Even if that’s true, his role is discreditable. Mattis’ association with the company began in 2011, when he met Holmes at a Marine Memorial event in San Francisco. According to author John Carreyrou and other journalists, he immediately began campaigning for military adoption of Theranos’ ostensibly innovative blood-testing technology. Mattis was not deterred by the lack of FDA approval and mounting doubts about whether the technology actually worked. After his retirement in 2013, Mattis also ignored legal advice that it would be improper to join the board while the company was seeking procurement of its products for use in Afghanistan.
It would be a mistake to single out a few “bad actors,” however. The problem is systemic–a widespread, “baked-in” disinclination to either provide or accept evidence that is contrary to what one wants to believe.
The article focuses on the impunity enjoyed by what it calls the American ruling class “until their conduct becomes literally criminal,” and it points out that the same people who make decisions in Washington sit on boards in Silicon Valley and appear on the same few cable channels. When the projects they promote go south, they continue to be celebrated and compensated as authors, management consultants, and respected pundits.
There’s a word for this governing hierarchy: kakistocracy, governance by the worst, least qualified, or most unscrupulous citizens.
Which gets us back to culture.
In today’s America, celebrity is more valued than competence. A loud voice commands far more attention than an expert opinion. Purveyors of ridiculous conspiracy theories overwhelm the conclusions and cautions of reputable scientists. This is the culture that in 2016 gave us an embarrassing, mentally ill buffoon for President, the culture that elects equally embarrassing crazies like Marjorie Taylor Greene. It’s the culture that leads thousands of people to ingest a horse de-wormer and reject the expertise of epidemiologists and medical professionals.
It’s a culture that threatens to overwhelm those of us who want to live in the reality-based community.
The problem with living at a time when there are so many problems–and so many truly major ones, at that–is that our focus gets splintered. Climate change. Vote suppression. White supremacists. Right-wing domestic terrorism. Guns. Government gridlock. The pandemic. Continual wars and the growth of the military-industrial complex… The list is endless.
But a recent report in the Washington Post reminded me of one of our longest-standing and most shameful problems–America’s perverse refusal to follow the lead of other wealthy (and plenty of non-wealthy) countries and provide universal access to health care. The negative consequences of our refusal to allow anyone to opt in to Medicare (“Medicare for those who want it”), or even just to broaden the scope of the Affordable Care Act, haven’t gone away; they’ve simply receded from public attention.
We may be distracted by other policy failures, but the problem remains–and it is as acute as ever, if not more so.
Researchers at the Commonwealth Fund compared the health-care systems of 11 high-income countries: Australia, Canada, France, Germany, the Netherlands, New Zealand, Norway, Sweden, Switzerland, the United Kingdom and the United States.
The United States has the worst health-care system overall among those 11 high-income countries, even though it spends the highest proportion of its gross domestic product on health care.
“We’ve set up a system where we spend quite a bit of money on health care but we have significant financial barriers, which tend to dissuade people from getting care,” said Eric Schneider, the lead author behind the findings and senior vice president for policy and research at the Commonwealth Fund, which conducts independent research on health-care issues.
The researchers identified five metrics of a well-functioning health care system: access to care, the care process itself, administrative efficiency, equity and overall health-care outcomes. Norway, the Netherlands and Australia were judged to be the top-performing countries overall.
The high performers stand apart from the United States in providing universal coverage and removing cost barriers, investing in primary care systems to reduce inequities, minimizing administrative burdens, and investing in social services among children and working-age adults, the Commonwealth Fund found.
The latter is particularly important for easing the burdens on health systems created by older populations, according to Schneider. “These sort of basic supports throughout younger age groups reduce, we think, the chronic disease burden that’s higher in the U.S.,” he said.
Since I have a son who lives in Amsterdam, I was particularly interested in the description of the Netherlands’ high-performing system. The researchers found that it was a “well-organized system of locally placed primary care doctors and nurses who provide care on a 24/7 basis”–a system that keeps minor problems from turning into major ones.
The U.S. doesn’t come close. (As one of my former graduate students, a hospital administrator, told me years ago, we don’t have a healthcare system in the U.S.; we have a healthcare Industry.)
The United States was rated last overall, researchers found, ranking “well below” the average of the other countries and “far below” Switzerland and Canada, the two countries ranked just above it. In particular, the United States came in at the bottom of the pack on access to care, administrative efficiency, equity and health-care outcomes.
The article noted that the inequities in America’s healthcare, together with our inadequate primary care, put the country in a much weaker position when it came to confronting the pandemic. That fact–together with the GOP’s advocacy of vaccine denial–may account for the fact that the U.S. has the second-highest COVID death rate among the eleven countries in the study.
America’s healthcare industry is costly in both lives and dollars.
Spending on health care as a share of GDP had grown in all of the countries the Commonwealth Fund surveyed, even before the pandemic. But the increase in the United States has “greatly exceeded” those of other nations. The United States spent 16.8 percent of its GDP on health care in 2019; the next highest country on the list was Switzerland, at 11.3 percent of GDP. The lowest was New Zealand, which spent roughly 9 percent of its GDP on health care in 2019.
Meanwhile, health care in the United States is the least affordable.
I hate sounding like a broken record, but this is what happens when racism drives decisions about the social safety net. Political scientists and sociologists confirm that–in addition to the profit motives/special interests of insurance companies and Big Pharma–the fact that White Americans don’t want “their” tax dollars spent on medical care or other social benefits for “those people” has prevented us from installing a less-costly and vastly more effective medical system.
At noon today, I’m speaking (via Zoom) to a Columbus, Indiana human rights organization. Here are my prepared remarks. (Long one–sorry.)
____________________________________________________
Over the past few years, Americans have begun to recognize how endangered our representative democracy has become.
Pundits and political scientists have their pet theories for how this has happened. Some of that analysis has been intriguing, and even illuminating. Until lately, however, none of it had attempted to answer the important question: what should we do to fix our problems, and why should we do it? As the causes of our dysfunctions have become more obvious, however—as it has become very clear that we are caught up in an obsolete system that facilitates the dominance of a clear minority of our voting population– scholars are urging reforms that focus on protecting voting rights, and restructuring America’s antiquated electoral processes.
First, some background.
You know, we humans don’t always appreciate the extent to which cultural or legal institutions—what we might call folkways, our longtime accepted ways of behaving and interacting—shape the way we understand the world around us. We rarely stop to consider things we simply take for granted—the conventions that constitute our daily lives. We drive on this side of the road, not that side; our marriages consist of two adults, not three or four; when our country holds elections we get to participate or abstain. Most of us accept these and multiple other conventions as givens, as “the way things are.” In some cases, however, institutions, systems and expectations that have worked well, or at least adequately, for a number of years simply outlive whatever original utility they may once have had, made obsolete by modern communications and transportation technologies, corrupt usages or cultural and demographic change.
I want to suggest that such obsolescence is a particularly acute element of American political life today. Let me share some of the more important examples that currently work in tandem to disenfranchise literally millions of Americans who are entitled to have their voices heard and their votes counted.
Perhaps the most significant problem of today’s electoral system is partisan gerrymandering. As you know, every ten years, after each census, state governments redraw state and federal district lines to reflect population changes. States—including Indiana– are engaged in that exercise as we speak. Except in the few states that have established nonpartisan redistricting commissions, the party in control of the state legislature when redistricting time rolls around controls the line-drawing process, and Republican or Democrat, they will all draw districts that maximize their own electoral prospects and minimize those of the opposing party.
Partisan redistricting goes all the way back to Elbridge Gerry, who gave Gerrymandering its name—and he signed the Declaration of Independence—but the process became far more sophisticated and precise with the advent of computers, leading to a situation which has been aptly described as legislators choosing their voters, rather than the other way around.
Academic researchers and political reformers alike blame gerrymandering for electoral non-competitiveness and political polarization. A 2008 book co-authored by Norman Ornstein and Thomas Mann argued that the decline in competition fostered by gerrymandering has entrenched partisan behavior and diminished incentives for compromise and bipartisanship.
Mann and Ornstein are political scientists who have written extensively about redistricting, and about “packing” (creating districts with supermajorities of the opposing party), “cracking” (distributing members of the opposing party among several districts to ensure that they don’t have a majority in any of them) and “tacking” (expanding the boundaries of a district to include a desirable group from a neighboring district). They have tied redistricting to the advantages of incumbency, and also point out that the reliance of House candidates upon maps drawn by state-level politicians operates to reinforce “partisan rigidity” and the increasing nationalization of the political parties.
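For anyone who wants to see how packing and cracking actually change outcomes, here is a minimal, purely illustrative sketch in Python. The district numbers are invented for the example–they are not drawn from Mann and Ornstein’s data–but they show how the same statewide vote share can yield dramatically different seat counts depending on where the lines fall.

```python
# Purely illustrative numbers: five equal-population districts,
# with Party A holding 55% support statewide under two different maps.

def seats_won(district_shares, threshold=0.5):
    """Count the districts in which Party A's vote share exceeds the threshold."""
    return sum(1 for share in district_shares if share > threshold)

# Map favoring Party A: Party B's voters are "cracked" thinly across every district.
map_cracked = [0.55, 0.55, 0.55, 0.55, 0.55]

# Map favoring Party B: Party A's voters are "packed" into one supermajority district.
map_packed = [0.95, 0.45, 0.45, 0.45, 0.45]

for name, shares in (("cracked", map_cracked), ("packed", map_packed)):
    statewide = sum(shares) / len(shares)
    print(f"{name}: statewide A share {statewide:.0%}, "
          f"A wins {seats_won(shares)} of {len(shares)} seats")
```

Under the “cracked” map, 55 percent of the statewide vote yields every seat; under the “packed” map, the identical 55 percent yields a single seat. Real gerrymanders are far more sophisticated, but the logic is the same.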
Interestingly, one study they cited investigated whether representatives elected from districts drawn by independent commissions become less partisan. Contrary to their initial expectations, they found that politically independent redistricting did reduce partisanship, and in statistically significant ways.
Perhaps the most pernicious effect of gerrymandering is the proliferation of safe seats. Safe districts breed voter apathy and reduce political participation. After all, why should citizens get involved if the result is foreordained? Why donate to a sure loser? (For that matter, unless you are trying to buy political influence for some reason, why donate to a sure winner?) What is the incentive to volunteer or vote when it obviously doesn’t matter? It isn’t only voters who lack incentives for participation, either: it becomes increasingly difficult for the “sure loser” party to recruit credible candidates. As a result, in many of these races, voters are left with no meaningful choice. Ironically, the anemic voter turnout that gerrymandering produces leads to handwringing about citizen apathy, usually characterized as a civic or moral deficiency. But voter apathy may instead be a highly rational response to noncompetitive politics. People save their efforts for places where those efforts count, and thanks to the increasing lack of competitiveness in our electoral system, those places often do not include the voting booth.
Worst of all, in safe districts, the only way to oppose an incumbent is in the primary–and that almost always means that the challenge will come from the “flank” or extreme. When the primary is, in effect, the general election, the battle takes place among the party faithful, who also tend to be the most ideological voters. So Republican incumbents will be challenged from the Right and Democratic incumbents will be attacked from the Left. Even where those challenges fail, they create a powerful incentive for incumbents to “toe the line”— to placate the most rigid elements of their respective parties. Instead of the system working as intended, with both parties nominating candidates they think will be most likely to appeal to the broader constituency, the system produces nominees who represent the most extreme voters on each side of the philosophical divide.
The consequence of this ever-more-precise state-level and Congressional district gerrymandering has been a growing philosophical gap between the parties and— especially but not exclusively in the Republican party— an empowered, rigidly ideological base intent on punishing any deviation from orthodoxy and/or any hint of compromise.
After the 2010 census, Republicans dominated state governments in a significant majority of states, and they proceeded to engage in one of the most thorough, most strategic, most competent gerrymanders in history. The 2011 gerrymander did two things: as intended, it gave Republicans control of the House of Representatives; in the first election held under the new maps, the GOP won 234 seats to the Democrats’ 201–despite the fact that nationally, Democratic House candidates had received over a million more votes than Republican House candidates. But that gerrymander also did something unintended; it destroyed Republican party discipline. It created and empowered the significant number of Republican Representatives who make up what has been called the “lunatic caucus” and made it virtually impossible for the Republicans to govern.
Then, of course, there’s the problem that pretty much everyone now recognizes: The Electoral College. In the 2016 election, Hillary Clinton won the popular vote by approximately 2.85 million votes. Donald Trump won in the Electoral College due to a total vote margin of fewer than 80,000 votes that translated into paper-thin victories in three states. Thanks to “winner take all” election laws, Trump received all of the electoral votes of those three states. “Winner take all” systems, in place in most states, award all of a state’s electoral votes to the winner of the popular vote, no matter how close the result; if a candidate wins a state 50.5% to 49.5% or 70% to 30%, the result is the same; votes cast for the losing candidate simply don’t count.
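For anyone who wants the arithmetic spelled out, here is a small illustrative sketch using the commonly reported–and approximate–2016 figures. Treat every number below as an estimate offered for illustration, not an official tally.

```python
# Approximate, commonly reported 2016 margins, used only to illustrate
# how winner-take-all allocation works; every figure below is an estimate.

swing_states = {
    # state: (approximate statewide margin in votes, electoral votes)
    "Michigan":     (10_704, 16),
    "Wisconsin":    (22_748, 10),
    "Pennsylvania": (44_292, 20),
}

combined_margin = sum(margin for margin, _ in swing_states.values())
electors_swung = sum(ev for _, ev in swing_states.values())

print(f"Combined three-state margin: about {combined_margin:,} votes")
print(f"Electoral votes awarded by those margins: {electors_swung}")
# The national popular vote ran the other way by roughly 2.85 million votes.
```

Roughly 78,000 votes, spread across three states, moved 46 electoral votes; the 2.85 million-vote national margin in the other direction moved none.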
Problems with the Electoral College are widely recognized. Among them are the outsized influence it gives swing states, the lack of an incentive to vote if you favor the minority party in a winner-take-all state, and the over-representation of rural voters and less populated states—what one scholar has called “extra votes for topsoil.” (Wyoming, for example, our least populous state, has roughly one-sixty-sixth of California’s population, but it has one-eighteenth of California’s electoral votes.) The Electoral College advantages rural voters over urban ones, and white voters over voters of color. (Of course, it isn’t only the Electoral College that conflicts with our professed belief in “one person, one vote”—the fact that each state gets two Senators means that the roughly 40 million people who live in the 22 smallest states get 44 senators to represent their views, while the 40 million people in California get two. We are unlikely to change that particular element of our system, but there’s no reason to add insult to injury by keeping the Electoral College.)
Akhil Reed Amar, who teaches Constitutional Law at Yale Law School, criticizes the justifications we often hear for the Electoral College. As he has pointed out, the framers put the Constitution itself to a popular vote of sorts, provided for direct election of House members and favored the direct election of governors. The Electoral College was actually a concession to the demands of Southern slave states. In a direct-election system, the South would have lost every time because a huge proportion of its population — slaves — couldn’t vote. The Electoral College enabled slave states to count their slaves in the apportionment of electors, albeit at a discount, under the Constitution’s three-fifths clause.
Americans pick mayors and governors by direct election, and there is no obvious reason that a system that works for the nation’s other chief executives can’t also work for President. Amar points out that no other country employs a similar mechanism.
As Representative Jamin Raskin points out, the Electoral College is an incentive to cheat:
“Every citizen’s vote should count equally in presidential elections, as in elections for governor or mayor. But the current regime makes votes in swing states hugely valuable while rendering votes in non-competitive states virtually meaningless. This weird lottery, as we have seen, dramatically increases incentives for strategic partisan mischief and electoral corruption in states like Florida and Ohio. You can swing a whole election by suppressing, deterring, rejecting and disqualifying just a few thousand votes.”
Gerrymandering and the Electoral College are the “big two,” but there are other changes that would reinvigorate American democracy. The way we administer elections is one of them.
State-level control over the conduct of elections made sense when difficulties in communication and transportation translated into significant isolation of populations; today, state-level control allows for all manner of mischief, including—as we’ve recently seen– significant and effective efforts at vote suppression, and what is especially worrisome, efforts to put partisans in charge of counting the votes. But even without intentional cheating, state-level control allows for wide variations from state to state in the hours polls are open, in provisions for early and absentee voting, and for the placement and accessibility of polling places. In states that have instituted “Voter ID” laws, documentation that satisfies those laws varies widely. (Voter ID measures are popular with the public, despite the fact that study after study has found in-person voter fraud to be virtually non-existent, and despite clear evidence that the impetus for these laws is a desire to suppress turnout among poor and minority populations likely to vote Democratic.)
State-level control of voting makes it difficult to implement measures that would encourage more citizen participation, like the effort to make election day a national holiday or at least move election day to a weekend. A uniform national system, overseen by a nonpartisan or bipartisan federal agency with the sole mission of administering fair, honest elections, would also facilitate consideration of other improvements proposed by good government organizations.
The entire registration system, for example, was designed when registrars needed weeks to receive registration changes in the mail and to produce hard-copy voter rolls for elections. We are in a very different time now, and making registration automatic, moving to same-day and online registration, adopting no-excuse absentee ballots or universal vote by mail, eliminating caucuses, and mandating at least 14-hour Election Day polling hours and one week of early voting would make for a better, more modern and much more user-friendly American election system.
I don’t need to belabor the next one: Campaign Finance/Money in Politics. Common Cause sums it up: “American political campaigns are now financed through a system of legalized bribery.” Other organizations, including the Brennan Center for Justice, the Center for Responsive Politics, and the National Institute for Money in State Politics, among others, have documented the outsized influence of campaign contributions on American public policy, but contributions to parties and candidates aren’t the only ways wealthier citizens influence policy. The ability to hire lobbyists, many of whom are former legislators, gives corporate interests considerable clout. Money doesn’t just give big spenders the chance to express a view or support a candidate; it gives them leverage to reshape the American economy in their favor.
Even worse, a system that privileges the speech of wealthy citizens by allowing them to use their greater resources to amplify their message in ways that average Americans cannot does great damage to notions of fundamental democratic fairness, ethical probity and civic equality.
Until recently, the role played by current use of the filibuster has been less well recognized, but it is no less destructive of genuine democracy.
Whatever the original purpose or former utility of the filibuster–back when its use was infrequent and it required a Senator to actually make a lengthy speech on the Senate floor–today it operates to require government by super-majority. It has become a weapon employed by extremists to hold the country hostage.
The original idea of a filibuster was that so long as a senator kept talking, the bill in question could not move forward. Once those opposed to the measure felt they had made their case, or at least exhausted their argument, they would leave the floor and allow a vote. In 1917, when filibustering Senators threatened President Wilson’s ability to respond to a perceived military threat, the Senate adopted a mechanism called cloture, allowing a super-majority to vote to end a filibuster.
Then in 1975, the Senate changed several of its rules and made it much easier to filibuster. The new rules effectively allowed “virtual” filibusters, by allowing other business to be conducted during the time a filibuster is theoretically taking place. Senators no longer are required to take to the Senate floor and argue their case. This “virtual” use, which has increased dramatically as partisan polarization has worsened, has effectively abolished the principle of majority rule: in effect, it now takes sixty votes (the number needed for cloture) to pass any legislation. This anti-democratic result isn’t just in direct conflict with the intent of those who crafted our constitutional system, it has brought normal government operation to a standstill, and allowed small numbers of senators to effortlessly place personal political agendas above the common good and suffer no consequence.
My final two targets aren’t part of our governing or electoral systems, but they have played massively important roles in producing America’s current dysfunctions. The first is substandard civic education. This civic deficit was a primary focus of my scholarship for a very long time. Let me just say that when significant segments of the population do not know the history, philosophy or contents of the Constitution or the legal system under which they live, they cannot engage productively in political activities or accurately evaluate the behavior of their elected officials. They cannot be the informed voters the country requires. We see this constitutional ignorance today when people claim that mask or vaccination mandates infringe their liberties. The Bill of Rights has never given Americans the “liberty” to endanger their neighbors.
The final institution that has massively failed us also doesn’t need much editorial comment from me: the current Media—including talk radio, Fox News, social media and the wild west that is the Internet.
Several studies have found that the greatest contributor to political polarization is the proliferation of news sources and increasing access to cable television. People engage in confirmation bias—they look for viewpoint validation rather than exposure to a common source of verified news.
The Pew Research Center published an extensive investigation into political polarization and media habits in 2014; among their findings, unsurprisingly, was that those categorized as “consistent conservatives” clustered around a single news source: 47% cited Fox News as their main source for news about government and politics, with no other source even close. Among consistent liberals, no outlet was named by more than 15%.
People who routinely consume sharply partisan news coverage are less likely to accept uncongenial facts even when they are accompanied by overwhelming evidence. Fox News and talk radio– with Rush Limbaugh and his imitators– were forerunners of the thousands of Internet sites offering spin, outright propaganda and fake news. Contemporary Americans can choose their preferred “realities” and simply insulate themselves from information that is inconsistent with their worldviews.
Americans are marinating in media, but we’re in danger of losing what used to be called the journalism of verification. The frantic competition for eyeballs and clicks has given us a 24/7 “news hole” that media outlets race to fill, far too often prioritizing speed over accuracy. That same competition has increased media attention to sports, celebrity gossip and opinion, and has greatly reduced coverage of government and policy. The scope and range of watchdog journalism that informs citizens about their government has dramatically declined, especially at the local level. We still have national coverage, but with the exception of niche media, we have lost local news. I should also point out that there is a rather obvious relationship between those low levels of civic literacy and the rise of propaganda and fake news.
In order for democracy to function, there must be widespread trust in the integrity of elections and the operation of government. The fundamental democratic idea is a fair fight, a contest between candidates with competing ideas and policy proposals, followed by a winner legitimized and authorized to implement his or her agenda. Increasingly, however, those democratic norms have been replaced by bare-knuckled power plays. The refusal of Mitch McConnell and the Republicans in the Senate to “advise and consent” to a sitting President’s nominee for the Supreme Court was a stunning and unprecedented breach of duty that elevated political advantage over the national interest. The dishonesty of that ploy was underlined by his rush to install an ideologically-acceptable replacement almost immediately after Ruth Bader Ginsburg died. No matter what one’s policy preferences or political party, we should all see such behaviors as shocking and damaging deviations from American norms—and as invitations to Democrats to do likewise when they are in charge.
If that invitation is accepted, we’ve lost the rule of law.
One outcome of these demonstrations of toxic partisanship has been a massive loss of trust in government and other social institutions. Without that trust—without a widespread public belief in an overarching political community to which all citizens belong and in which all citizens are valued—tribalism thrives. Especially in times of rapid social change, racial resentments grow. The divide between urban and rural Americans widens. Economic insecurity and social dysfunction grow in the absence of an adequate social safety net, adding to resentment of both government and “the Other.” It is a prescription for civic unrest and national decline.
If Americans do not engage civically in far greater numbers than we have previously—if we do not reform outdated institutions, protect the right to vote, improve civic education, and support legitimate journalism—that decline will be irreversible.
The good news is that there is evidence that such engagement is underway. We the People can do this.
There are fairly obvious reasons that posts and comments to this blog have increasingly centered on bigotry–well-meaning individuals are (reluctantly) facing up to the extent of the tribal animus that continues to fester in far too many of our fellow Americans.
Much of the reaction to that animus is expressed in moral or religious terms– the belief that racial and religious hatreds are immoral or sinful. Others point to the destabilizing, anti-democratic consequences of such bias, and still others point to the human costs to individuals who suffer from discrimination or may even be prevented from pursuing their life goals for no reason other than their religion, gender or the color of their skin.
Recently published research concludes that racial and ethnic disparities in the United States–disparities resulting from official and social discrimination–haven’t simply hurt the people who experience that discrimination. They’ve hurt us all, by depressing U.S. economic output by trillions of dollars over the past 30 years.
The researchers controlled for five variables: employment (the percentage of people with jobs); hours worked; educational attainment (the level of education completed); educational utilization (the extent to which people are in jobs that fully use their education); and earnings gaps not explained by those factors.
Then they calculated how much larger the U.S. economic pie would be if opportunities and outcomes had been more equally distributed by race and ethnicity. Their answer? $22.9 trillion over the 30-year period.
When we fail to utilize the talents of millions of people, we shouldn’t be surprised that the result is lower prosperity for everyone. (We began to recognize that reality when large numbers of women finally entered the workforce and we were no longer failing to use the smarts and talents of fifty percent of the population.)
As J.P. Morgan & Co.–hardly a socialist enterprise–has documented, Black people represent 12.7 percent of the U.S. population, but only 4.3 percent of the 22.2 million business owners in the country. A significant reason for that disparity is the difficulty minority businesspeople encounter when they are trying to raise capital; Black entrepreneurs are almost three times more likely to have business profits negatively affected by limited access to capital.
Furthermore, barely six percent of small businesses in majority-Black communities and 11 percent of small businesses in majority-Hispanic communities have more than 14 days’ worth of cash on hand, compared to 65 percent of businesses in majority-white communities. Similar disparities show up in first-year business revenues: Black-owned small businesses earned 59 percent less and Latino-owned small businesses earned 21 percent less in first-year revenues than their white-owned counterparts.
The J.P. Morgan report noted the effect of these disparities on the overall economy:
Closing this racial wealth gap could grow the U.S. gross domestic product (GDP) by an estimated four to six percent by 2028, adding an additional $1 to $1.5 trillion to the economy, according to McKinsey. An economy that works for more people could break down barriers to opportunity and improve how people live, from life earnings to life expectancy.
As I read the research documenting the various ways in which ostensibly neutral financial decisions reflect bias–the extent to which decisions by investors and banks are influenced by attitudes about race and gender– I keep coming back to the episode recounted by Heather McGhee, about the town that filled in its swimming pool rather than share it with Black people.
Hard as it is for me to get my head around, it’s obvious that there are a lot of Americans who choose to go without–who choose to be poor, or poorer than necessary–if the alternative is that some of their Black or Brown neighbors succeed.
If the “benefit” in that cost/benefit analysis is an outcome ensuring that Whites and people of color are equally denied an otherwise available asset, then the costs of bigotry are massively disproportionate to the benefits.