A Philosophical Big Sort

I have previously cited Bill Bishop’s excellent 2008 book, The Big Sort, which focused on physical “sorting”–the increasing geographical clustering of like-minded Americans choosing to live in areas populated by people who generally share their political worldviews.

A very thoughtful book review by Ronald Aronson in The Hedgehog Review centered on a different type of American division–what one might call “philosophical sorting.”

The book being reviewed was The Upswing, written by Robert Putnam (he of “Bowling Alone” fame) and Shaylyn Garrett. The book looked at what it called the “I-We-I arc” through the lens of the last 125 years of American economic, political, social, and cultural history.

A remarkable assemblage of data and a compelling story about American history, The Upswing begins with the Gilded Age, the period of disintegration, conflict, and aggressive individualism after the Civil War. It was followed by seventy-five years of growth in equality and national community, achieved first by the Progressive movement, then by the New Deal, and, under different conditions, by wartime solidarity. But then things went sour: “Between the mid-1960s and today—by scores of hard measures along multiple dimensions—we have been experiencing declining economic equality, the deterioration of compromise in the public square, a fraying social fabric, and a descent into cultural narcissism.” The last century’s upswing has been followed by a slide toward an unhappy collection of democratic ills: inequality, individualism, austerity, the domination of human needs by the “free market,” political polarization, and the blockage of economic and educational gains for African Americans.

According to the review, the book is replete with graphs that reveal a repeating arc: an inverted U. Until around 1970, the data shows an increasing sense of “community, equality, belongingness, and solidarity—a growing ‘we.’” After that, however, the graphs show a “sharp collapse into an individualistic and even conflictual assertion of ‘I’ in values and culture as well as politics and economics.”

This is a story that unfolds in four overlapping parts. First, the trend toward greater economic equality reversed sharply over the past fifty years. Second, political polarization, some of it rooted in the Civil War, gave way under the influence of the Progressive movement to a remarkable degree of political consensus by the 1930s. But then things turned in the other direction, as the Civil Rights Act of 1964, supported by substantial majorities of both Republicans and Democrats, led to bitter party polarization accompanied by a steep decline in trust in government and a rise in cynicism. Third, social life became anemic as membership in clubs and associations declined (a main theme of Putnam’s Bowling Alone) and the social and cultural force of labor unions dramatically weakened. Fourth, Google Ngrams, which track how frequently given words appear in print, tell a parallel story of the rise and fall of communal values: “association,” “cooperation,” “socialism,” and the “common man,” as well as “agreement,” “compromise,” and “unity,” all show the same inverted U-shaped curve, rising and then declining steeply to where we are today.

I was particularly intrigued by the observation that many whites came to champion the idea of individualism “…because it provides them with a principled and apparently neutral justification for opposing policies that favor Black Americans.” If racism is truly a major underpinning of the final “I” portion of that I-We-I arc, I’m afraid the “upswing” Putnam and Garrett believe is on the horizon will be a long time coming.

Aronson is equally dubious about the prospects of an upswing. As he points out, if anything should have prompted a return to “we,” it was the pandemic. It didn’t. Americans “sorted” philosophically and politically.

Survey research tells us that 36 percent of Republicans–as opposed to 4 percent of Democrats–thought the 2020 shutdowns were too restrictive. Prominent Republicans insisted that COVID-19 was a hoax and that the death toll was exaggerated. The U.S. has 5 percent of the world’s population–and 20 percent of COVID deaths. Twelve of the fifteen hardest-hit states are governed by Republicans.

The Upswing, published in 2020, was written before the pandemic and didn’t address it. Other omissions are less understandable.

Aronson points to the multiple social influences that are simply missing from the book’s analysis: the role played by American capitalism’s “outsourcing, deregulation, financialization, speculative bubbles, austerity, and neoliberalism”; globalization; the Vietnam War; “inflation, and American imperialism, including the Cold War and the post–Cold War military-industrial complex.” And as he says, “we must come back in the end to the crucial link between America’s coming apart and its deeply imbedded racism.”

I am very much afraid that the continued existence of a White Supremacy Party–and the philosophical gulf between Americans that is symbolized by that continued existence–is incompatible with an imminent upswing.

I hope I’m wrong.

Return On Investment

I tend to get testy when I hear people intone that government should “be run like a business.” (Granted–I’m testy a lot…) Government is very different from business–its purposes (which do not include a profit motive) are distinct. Not recognizing the substantial differences between governance and enterprise marks those making that facile comment as–at best–uninformed.

That said, there is one concept fundamental to both business plans and investment decisions that should also guide governmental decisions: return on investment. Interestingly, however, many of the same folks who want more businesslike governance routinely ignore that calculation.

If I’m purchasing stock in a company, I want evidence that the shares will appreciate in value–or at least pay dividends. If I am a savvy, mature investor rather than a gambler playing the market, I understand that such appreciation will likely not be immediate; I will invest “for the long haul.”

That same calculation ought to determine America’s investments in social spending. Although the returns on government investment will not and should not be primarily monetary, studies confirm that a surprising number of programs actually do turn a fiscal profit for taxpayers.

Children who have been fed thanks to food stamps grow up into healthier, more productive adults than those who didn’t get enough to eat. That greater productivity means that government eventually recoups much of what it spent on those food stamps–and also saves money due to reduced spending on things like disability payments.

A recent study by Harvard economists found that many programs — especially those focused on children and young adults — made money for taxpayers, when all costs and benefits were factored in.

That’s because they improved the health and education of enrollees and their families, who eventually earned more income, paid more taxes and needed less government assistance over all.

The study, published in The Quarterly Journal of Economics, analyzed 101 government programs. In one way, it was a standard cost/benefit analysis–it looked at what the government’s costs were, and the resulting benefits to the recipients. However, the researchers took an extra step–they calculated the “fiscal externalities”: the indirect ways that a program affected the government’s budget.

In other words, in addition to the upfront costs, they calculated the monetary return on taxpayers’ investment.
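To make that concrete, here is a minimal sketch–with purely hypothetical dollar amounts, not figures from the study–of how counting fiscal externalities changes a program’s bottom line:

```python
# A minimal sketch of fiscal-externality accounting.
# All dollar amounts are hypothetical, for illustration only.

upfront_cost = 1000.0  # direct government spending per enrollee

# Indirect effects on the government's budget ("fiscal externalities"):
uncompensated_care_savings = 150.0  # fewer unpaid hospital bills to reimburse
lost_payroll_taxes = -50.0          # some recipients stop working
future_tax_revenue = 1100.0         # healthier, better-educated adults earn more

fiscal_externalities = (
    uncompensated_care_savings + lost_payroll_taxes + future_tax_revenue
)

# A naive analysis stops at the upfront cost; the fuller accounting
# nets out the indirect effects on the budget.
net_cost = upfront_cost - fiscal_externalities
print(f"Net cost to government per enrollee: ${net_cost:,.2f}")
# A negative net cost means the program more than paid for itself.
```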

Consider one program: health insurance for pregnant women. In the mid-1980s, the federal government allowed states to expand Medicaid eligibility to more low-income pregnant women. Some, but not all, states took up the offer. Increased Medicaid coverage enabled women to receive prenatal care and better obstetric care, and to save on personal medical spending.

For the federal government, the most straightforward fiscal impact of this expanded coverage was increased spending on health insurance. The indirect fiscal effects were more complex, and could easily be overlooked, but they have been enormous.

First, newly eligible women had fewer uninsured medical costs. The federal government picks up part of the tab for the uninsured because it reimburses hospitals for “uncompensated care,” or unpaid bills, so expanded coverage saved the government some money. On the other hand, some of the women stopped working, probably because they no longer needed employer-provided private health insurance, and this cost the government money.

But the biggest indirect effects were not apparent until children born to the Medicaid-covered women became adults. As shown in a study by Sarah Miller at the University of Michigan and Laura Wherry at the University of California, Los Angeles, those second-generation beneficiaries were healthier in adulthood, with fewer hospitalizations. The government saved money because it would have paid for part of those hospital bills. The now-adult beneficiaries had more education and earned more money than people in similar situations whose mothers did not get Medicaid benefits. That meant higher tax revenue.

Data on other social programs yields similar results. Researchers have found that Medicaid expansion, for example, more than paid for itself, even after accounting for the fact that future benefits are “discounted”–i.e., worth less today. 
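To see what “discounting” does to a delayed payoff, consider a minimal sketch; the 3 percent rate and the dollar figures below are illustrative assumptions, not numbers from the research:

```python
# Present value of a fiscal payoff received years in the future:
# PV = payoff / (1 + r) ** years  (the standard discounting formula).
# The rate and amounts below are assumptions for illustration only.

discount_rate = 0.03  # assumed annual discount rate
payoff = 1500.0       # extra tax revenue collected 25 years from now
years = 25

present_value = payoff / (1 + discount_rate) ** years
print(f"${payoff:,.0f} received in {years} years is worth "
      f"${present_value:,.2f} today")

# A program "more than pays for itself" when the present value of its
# future fiscal returns exceeds its upfront cost.
```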

Businesspeople understand that it usually takes time to realize profit. With government social programs, too, the fiscal “payoff” generally is delayed. That doesn’t mean it is less substantial or less real. In the cited study, 16 of the social policies that the researchers examined either broke even or actually made a profit. 

I’m certainly not suggesting that government programs be limited to those with a positive financial return–government is most definitely not a business. I am suggesting, however, that we consider government social programs investments–and that the returns on those investments aren’t limited to improving the safety and security of the communities in which we all live, sufficient as that return would be. In many cases, taxpayers also get a positive monetary return on investment.

Just like well-run businesses.

This Isn’t Dunkirk

Longtime readers of this blog know that I rarely, if ever, post about foreign policy. There’s a reason for that–I am uninformed about most aspects of such policies, and I am deeply conflicted about America’s obligations vis-à-vis purely humanitarian concerns.

When it comes to warfare, I mostly agree with those who insist we should keep our cotton-pickin’ hands off unless there is a very clear American interest to be protected, or a humanitarian crisis of significant proportions that we are actually in a position to ameliorate. I will readily admit that the definition of American interests and the nature and extent of humanitarian crises are matters of considerable debate.

If I had been the person determining the parameters of America’s intervention in Afghanistan, I would have approved an initial intervention to root out al Qaida and “get” Osama bin Laden–but not the slog of the subsequent 18 years, during which we wasted trillions of dollars–not to mention the lives of thousands of soldiers and civilians.

But here we are.

President Biden has made what I consider the absolutely correct call–and the media and self-styled pundits, abetted by deeply dishonest Republicans sensing political advantage, are having a field day attacking him for, among other things, recognizing and admitting the obvious.

I think that Michael Moore, of all people, has it right in the following paragraphs. (I say “of all people” because I tend to find Moore tiresome–you usually know precisely what he’ll say because, like far too many people, he approaches all issues through an unshakable, pre-defined lens. Sometimes, of course, like that “stopped clock” he’s right; sometimes, not so much.)

In this case, I think he is “on point.” In his recent letter, Moore wrote about our departure from Afghanistan:

There is nothing here to celebrate. This should only be a monumental gut-check moment of serious reflection and a desire to seek redemption for ourselves. We don’t need to spend a single minute right now analyzing how Biden has or has not messed up while bravely handling the end of this mess he was handed — including his incredible private negotiations all this week with the Taliban leaders to ensure that not a single enemy combatant from the occupying force (that would be us; e.g., U.S. soldiers and spies and embassy staff), will be harmed. And Biden so far has gotten every American and foreign journalist out alive, plus a promise from the Taliban that those who stay to cover it will not be harmed. And not a single one has! Usually a force like the Taliban rushes in killing every enemy in sight. That has not happened! And we will learn that it was because of the negotiating skills and smarts of the Biden team that there was no mass slaughter. This is not Dunkirk.

Dozens of planes have safely taken off all week — and not one of them has been shot down. None of our troops in this chaotic situation have been killed. Despite the breathless shrieks of panic from maleducated journalists who think they’re covering the Taliban of the 1990s (Jake Tapper on CNN keeps making references to “beheadings“ and how girls might be “kidnapped” and “raped” and forced to become “child brides”), none of this seems to be happening. I do not want to hear how we “need to study” what went wrong with this Taliban victory and our evacuation because (switching to all caps because I can’t scream this loud enough): WE ARE NEVER GOING TO FIND OURSELVES IN A SITUATION LIKE THIS AGAIN BECAUSE OUR DAYS OF INVADING AND TAKING OVER COUNTRIES MUST END. RIGHT? RIGHT!!

Unfortunately, we probably will find ourselves in similar situations, because a substantial portion of our citizenry believes we have the right–indeed, the duty–to impose our will around the globe, irrespective of any threat to genuine American interests.

Is our exit from Afghanistan being accomplished smoothly? No. To the extent both the war and the exit were bungled, we’ll need sober analyses of those failures in order to inform future foreign policy decisions. But sober analyses are not what we’re getting–for that matter, even presumably straightforward eyewitness reports of what is occurring “on the ground” are wildly inconsistent. 

If people of good will are truly concerned about the fate of non-Taliban Afghans–especially Afghan women–under a fundamentalist religious regime, what they can and must do is extend a welcome to those who want to emigrate, and work to facilitate their speedy immigration and resettlement.

It is telling–but not surprising– that the monkeys throwing poo in hopes it sticks to the administration are unwilling to do that.

Memorializing History?

The angry battles over the propriety of statues commemorating Confederate soldiers, and the somewhat different arguments that regularly erupt over the design of war memorials and the like, are reminders that–as a species–we humans like to erect permanent (or at least, long-lasting) mementos of people and events we consider worth memorializing.

There are probably psychological studies probing the reasons for that evidently widespread impulse, and I thought about what that desire to commemorate might tell us when I read a request from Lester Levine, who regularly comments here. Lester asked me to post the following announcement/invitation to the community that converses (or lurks) on this site.

You are a creative bunch. So, as we all wallow in health and political turmoil, I would like to invite you and anyone you know to deeply immerse hands, minds and souls in an engaging project. It will require minimal artistic skills and “production” time.

In addition to my curmudgeonly comments on this blog, I am Lester Levine, the only person to read all 5,201 entries to the 2003 World Trade Center Memorial Competition. My 2016 book highlights the 100+ most innovative designs submitted. That research forever interested me in the role of memory/memorials in history and culture.

And so, as we approach 9/11/2021, I am struck by how the January 6, 2021 event has also been “named” by its date. Likewise, I can’t help but wonder about the artistic/intellectual challenge of imagining a physical marker for historical memory. It was not war; few were killed. Yet many think the event was a seminal moment for the United States and all that it stands for.
___________________________________________________________
Announcing the “Remembering 1/6 Design Competition”
Opens 8/17/21
Open to anyone, anywhere
Entry format, rules, questions – [email protected]
Entries due by midnight US Eastern time, 10/22/21 to [email protected]
Judged by creators of innovative 9/11 memorial entries
Winners announced on 12/6/21
____________________________________________________________

I will be interested to see what the people responding consider appropriate “markers” of that very troubling and ominous event.

I just hope that a hundred years or so from now there are still people around to see whatever monument is ultimately erected.

The Scales Of Justice

We are all familiar with “Lady Justice”–the statue of a blindfolded woman holding scales, intended to represent the dispassionate weighing of evidence leading to a just result.

The justice system has developed a number of rules governing how the “weighing and measuring” symbolized by those scales occurs. Rules against the consideration of hearsay, for example, are intended to exclude evidence that is essentially gossip–matters for which the person testifying cannot personally vouch.

Most people understand why courts disallow hearsay, or allow cross-examination. The reasons for other rules are less intuitive. As Paul Ogden has recently noted on his own blog, statutes of limitations fall into that latter category.

Ogden shares his concerns about a recent case brought by a woman against singer/songwriter Bob Dylan, alleging that he molested her when she was twelve years old–in 1965.

Let’s think about this.

The “Me Too” movement ushered in a long-deferred accounting of the exploitation of women by men who were often in positions of power and/or privilege. “Me Too” has done a lot of good–but like so many overdue movements, it has had its share of excesses. The state of New York, in a recent effort to protect abused children (male or female), passed the New York Child Victims Act. Among other things, it temporarily expanded by decades the statute of limitations for child victims to bring civil lawsuits.  It also protected the identity of those bringing such lawsuits from disclosure–presumably, even the identity of plaintiffs who are now adults.

On the surface, it might seem that allowing individuals much more time to bring a lawsuit would advance justice. But as Paul points out, there are sound reasons for statutes limiting the time periods within which suits can be filed. As he notes, in 1965 “Lyndon Johnson was President, man had not yet stepped on the moon (1969), and seat belts were not yet required in cars (1968).”

As Paul also notes, extending or eliminating statutes of limitations can put accused people at a distinct disadvantage. As time passes, memories fade, witnesses die, evidence gets lost, destroyed or simply buried by history. Statutes of limitations exist to ensure that claims are litigated while the evidence is relatively fresh and the evidence proving or disproving the claim is still available.

In his post, Paul lists the specific allegations of the complaint and details the monumental difficulty of proving or disproving those allegations over 50 years later.

We can certainly debate the ideal time period within which lawsuits should be commenced, but declaring “open season” for such suits not only makes the achievement of certainty virtually impossible, it invites all sorts of mischief. Let’s say you were on a date in college, had a bit more to drink than was wise (but not enough to render you insensible), and had consensual sex that you later regretted. As the years pass, you “remember” the incident a bit differently–perhaps as date rape. If your “assailant” comes into a lot of money later in life (perhaps through fame, perhaps through hard work, perhaps through inheritance–whatever), how tempting would it be to use the justice system to confirm your now-sincere but somewhat “adjusted” recollection of the event?

I am absolutely not suggesting that tardy allegations are all–or even mostly–manufactured. I’m sure they aren’t. And I have no idea whether the plaintiff accusing Dylan was actually abused or not. She may well have been.

The point is, after the passage of a certain amount of time, it is absolutely impossible to know. 

Achieving justice requires according fundamental fairness to both the accuser and the accused. The rules governing “due process of law” are meant to ensure that injured people get their day in court, and that unfairly accused people have the means to demonstrate their innocence. 

Being as fair as possible to both parties means requiring an aggrieved person to sue within a reasonable period of time. Admittedly, what constitutes a reasonable time is debatable.

Research has shown that memory can be unreliable at pretty much any time, but requiring that litigation be pursued while witnesses are still likely to be alive and probative evidence is still likely to be obtainable seems only fair.

It’s the process that is “due.”
