As we head into the year 2020, it’s hard to know whether to be fearful or hopeful. (Despite it being “20/20” I’m not seeing very clearly.)
So far, the 21st Century has left a lot to be desired.
When I was young–many years ago–I imagined we’d make great progress by the 21st Century. My anticipation had less to do with flying cars and computers and more to do with things like world peace; in any event, I wasn’t prepared for the renewed tribalism and various bigotries that have grown more intractable in the years since 2000. (I was definitely not prepared for a President reckless enough to Wag the Dog.)
It’s hard to know whether the problems we face are truly worse than they have been, or whether–thanks to vastly improved communication technologies–we are just much more aware of them. Whatever the case, as we turn the page on 2019, pundits and historians are proposing terms to describe the last decade.
“Unraveling” was the descriptor offered by Dana Milbank, one of the Post opinion writers who shared their perspectives on the last ten years. I think Milbank got the decade right.
It began with the tea party, a rebellion nominally against taxes and government but really a revolt against the first African American president. At mid-decade came the election of Donald Trump, a backlash against both the black president and the first woman on a major party ticket.
Milbank attributes much of the ugliness of our time to the fury of white Christian men who realized that they were losing their hegemony. He saved some opprobrium for social media:
It gave rise to demagoguery, gave an edge to authoritarianism and its primary weapon, disinformation, and gave legitimacy and power to the most extreme, hate-filled and paranoid elements of society.
Molly Roberts had a somewhat different take; she characterized the decade as one of (over) sharing. Facebook, Twitter, Instagram and the like have ushered in “full-frontal confessionalism to a country full of emotional voyeurs.” In the process of baring our souls, we also, inadvertently, shared a lot of private information.
To maximize our engagement, those platforms played on the preferences all our sharing revealed — which meant shoving inflammatory content in our faces and shoving us into silos. All that connection ended up dividing us.
Jennifer Rubin has been turning out a stream of perceptive columns the past couple of years, and her take on the decade didn’t disappoint: she dubbed it the Decade of Anxiety, “one in which we lost not simply a shared sense of purpose but a shared sense of reality.” Rubin, a classical conservative, is clear-eyed about what has happened to the GOP.
The Republican Party degenerated into a cult, converted cruelty into public policy and normalized racism. Internationally, U.S. retrenchment ushered in a heyday for authoritarian aggressors and a dismal period for international human rights and press freedom.
Christine Emba, with whom I am unfamiliar, characterized the period as a Decade of Dissonance–a period during which our reality and our expectations kept moving further and further apart.
For her part, Alexandra Petri called it the Decade of Ouroboros. I had to Google that one. Turns out it’s a serpent or dragon eating its own tail. (I’ll admit to some head-scratching; she either meant a time when we set about destroying–eating–ourselves, or a time when everything is ominous.)
The final offering, from the economist Robert Samuelson, struck me as appropriate, if depressing. He called it the Decade of Retreat.
It’s not just the end of the decade. It’s the end of the American century. When historians look back on the past 10 years, they may conclude this was the moment Americans tired of shaping the world order.
At my house, it has been a decade of civic disappointment–and exhaustion. (Persistent outrage really tires you out….)
How would you characterize the decade? And more to the point, where do you see America after another ten years?
Yesterday, we awoke to find that Trump had ordered an airstrike that killed an Iranian general. The general’s position was roughly equivalent to that of the chairman of our Joint Chiefs of Staff, or even the Vice President, and he was evidently revered in Iran.
Critics don’t dispute the administration’s contention that General Suleimani posed a threat to Americans (although absolutely no evidence supports claims that an attack was “imminent”). Both George W. Bush and Barack Obama had decided against efforts to target Suleimani, because they were convinced that such an action had a high probability of triggering a war.
They were correct. The assassination is being reported in both the U.S. and Iran as an Act of War.
Of course, both Bush and Obama listened to their diplomatic and military experts, and consulted with Congressional leaders–none of which Trump did. The strike violated a longstanding executive order forbidding U.S. involvement in the assassination of foreign officials, as well as the requirement that a President seek Congressional authorization for the use of military force. Then again, this is an administration that routinely ignores laws it dislikes.
I don’t think it is a coincidence that a military strike that allows Trump to brag about a “show of strength” comes at a time when his bungled and inept forays into foreign policy are being widely criticized.
Despite his much-hyped meetings with Kim Jong Un (meetings which gifted Kim with an unearned but long-desired legitimacy), North Korea has announced its intent to resume nuclear tests. Trump’s approach to Iran–actually, his approach to the entire Middle East–has been wildly contradictory, as spurts of belligerence have alternated with troop pullouts and inexplicable decisions have been “justified” by Trump’s usual word-salad tweets and statements.
North Korea’s announcement, coming just as the 2020 election campaign heats up, and the Iranian-backed attacks on the U.S. Embassy in Iraq have increased public attention to those failures and triggered renewed allegations that Trump doesn’t understand foreign policy and is incapable of developing a coherent strategy. Those criticisms have been leveled throughout his term in office, but they have become louder and more frequent in the wake of recent events.
So, like the child he is, Trump blindly struck out.
Since 2016, it has become abundantly clear that the Oval Office is occupied by a profoundly ignorant, mentally-unstable man-child who is utterly incapable of understanding the likely consequences of his actions. The damage he has done domestically is enormous; the threat he poses to world peace and hundreds of thousands of American lives is terrifying.
Yesterday’s media was full of analyses by Middle East and foreign policy experts; most of the people who read this blog have undoubtedly seen many of them. I don’t have any additional insights to offer.
A deadly opening attack. Nearly untraceable, ruthless proxies spreading chaos on multiple continents. Costly miscalculations. And thousands — perhaps hundreds of thousands — killed in a conflict that would dwarf the war in Iraq.
Welcome to the US-Iran war, which has the potential to be one of the worst conflicts in history.
The Thursday night killing of Maj. Gen. Qassem Suleimani, who led Iranian covert operations and intelligence and was one of the country’s most senior leaders, brought Washington and Tehran closer to fighting that war. Iran has every incentive to retaliate, experts say, using its proxies to target US commercial interests in the Middle East, American allies, or even American troops and diplomats hunkered down in regional bases and embassies.
It’s partly why the Eurasia Group, a prominent international consulting firm, now puts the chance of “a limited or major military confrontation” at 40 percent.
This is what happens when self-described “patriots” cast their votes for an unhinged buffoon with limited intellect and a monumental ignorance of the ways of the world. Those voters weren’t a majority, but there were enough of them to elect the candidate whose only “qualification” was a pathetic eagerness to validate their bigotries.
One of the lessons we should–but don’t–learn from history is that revolutions almost never succeed in replacing the systems being rejected with those that are more to the liking of the revolutionaries.
Revolutions can and do change the identity of the people in charge. The American Revolution got rid of King George and English authority, for example–but it didn’t change fundamental attitudes about individual rights, or a legal system based on common law, or accepted ways of doing business.
A lot of things we think of as being very left-wing are actually extremely popular — like higher taxes on rich people. But other things requiring ordinary middle class people to change aren’t ever easy to do.
Systems that are very different from our own on health care all have deep historical roots. There is enormous path dependence in policy. The systems that countries have on health care, retirement, and most other stuff have a lot to do with decisions that were made generations ago. And it’s very hard to shift to a radically different path. So incrementalism tends to rule everywhere.
Krugman points to polling showing that a voluntary public buy-in to Medicare is very popular, but a mandatory replacement of private insurance is not.
The international evidence is that it’s just very hard to make radical changes in social programs. The shape of them tends to be fixed for a really long time. US Social Security is widely held up as a role model of doing it right because we got it right at a time when things were still pretty amorphous and unformed. On the other hand, our health care system is a mess because of decisions we made around the same time that left us with bad stuff entrenched in the system.
The operative word is “entrenched.”
Wikipedia begins its discussion of “path dependency” thusly: “Path dependence explains how the set of decisions people face for any given circumstance is limited by the decisions they have made in the past or by the events that they experienced, even though past circumstances may no longer be relevant.”
Multiple studies of path dependence confirm that previous policy decisions that have since become “the way we do things” generate enormous inertia. Studies of welfare policies, especially, have concluded that significant changes can be made only in exceptional situations. (It isn’t only politics. Studies of how technologies become path-dependent demonstrate that so-called “externalities”–habits, really–resulting from established supplier and customer preferences can lead to the dominance of one technology over another, even if the technology that “loses” is clearly superior.)
It is one thing to compare the mess that is America’s health system with the far better systems elsewhere and acknowledge that we got it wrong. In an ideal world, we would start from scratch and devise something very different. But we don’t live in an ideal world; we live in a world and country where most people fear and resist change– even change to something that is clearly superior.
No president can wave a magic wand and effect overnight transformation. FDR and Truman both pushed for forms of national health insurance and failed. Nixon also favored it. President Kennedy supported Medicare and Johnson finally got that done in 1965–after the trauma of an assassination. All other efforts failed until 2010, when Obama and Pelosi (barely) managed to get the Affordable Care Act passed. Even that compromised legislation triggered ferocious opposition, including bills that weaken it and litigation that aims to overturn it.
People who think we just have to elect a candidate who recognizes what a better system would look like, and empower that person to wave his or her magic wand and give us a “do-over,” aren’t simply naive. They’re delusional.
The other day, during a political discussion (these days, pretty much every discussion gets political) my youngest son wondered aloud whether it had been a mistake to win the Civil War. The red states of the South have been an economic drag on the blue states for a long time–they send significantly fewer dollars to Washington than they receive courtesy of blue state largesse.
Democratic-leaning areas used to look similar to Republican-leaning areas in terms of productivity, income and education. But they have been rapidly diverging, with blue areas getting more productive, richer and better educated. In the close presidential election of 2000, counties that supported Al Gore over George W. Bush accounted for only a little over half the nation’s economic output. In the close election of 2016, counties that supported Hillary Clinton accounted for 64 percent of output, almost twice the share of Trump country.
Evidently, however, we don’t just live in different economies–lately, we also die differently.
Back in the Bush years I used to encounter people who insisted that the United States had the world’s longest life expectancy. They hadn’t looked at the data, they just assumed that America was No. 1 on everything. Even then it wasn’t true: U.S. life expectancy has been below that of other advanced countries for a long time.
The death gap has, however, widened considerably in recent years as a result of increased mortality among working-age Americans. This rise in mortality has, in turn, been largely a result of rising “deaths of despair”: drug overdoses, suicides and alcohol. And the rise in these deaths has led to declining overall life expectancy for the past few years.
What I haven’t seen emphasized is the divergence in life expectancy within the United States and its close correlation with political orientation. True, a recent Times article on the phenomenon noted that life expectancy in coastal metropolitan areas is still rising about as fast as life expectancy in other advanced countries. But the regional divide goes deeper than that.
It turns out that the “death divide” Krugman is addressing is closely correlated with political orientation.
I looked at states that voted for Donald Trump versus states that voted for Clinton in 2016, and calculated average life expectancy weighted by their 2016 population. In 1990, today’s red and blue states had almost the same life expectancy. Since then, however, life expectancy in Clinton states has risen more or less in line with other advanced countries, compared with almost no gain in Trump country. At this point, blue-state residents can expect to live more than four years longer than their red-state counterparts.
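The population-weighted average Krugman describes is simple to reproduce. Here is a minimal sketch in Python; the populations and life-expectancy figures below are hypothetical placeholders for illustration, not the actual data he used.

```python
# Minimal sketch of a population-weighted life-expectancy comparison.
# The numbers below are illustrative placeholders, not real data.
states = [
    # (2016 winner, 2016 population, life expectancy in years)
    ("Clinton", 39_250_000, 81.0),
    ("Clinton", 19_750_000, 80.4),
    ("Trump",   28_700_000, 78.5),
    ("Trump",   11_650_000, 77.1),
]

def weighted_life_expectancy(rows, winner):
    """Average life expectancy across states carried by `winner`, weighted by population."""
    selected = [(pop, le) for w, pop, le in rows if w == winner]
    total_pop = sum(pop for pop, _ in selected)
    return sum(pop * le for pop, le in selected) / total_pop

for candidate in ("Clinton", "Trump"):
    print(candidate, round(weighted_life_expectancy(states, candidate), 2))
```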
There are a number of possible explanations: blue states expanded Medicaid while most red states didn’t, for example. The gap in educational levels is probably implicated as well; better-educated people tend to be healthier than the less educated, for a number of reasons.
Krugman also notes differences in behavior and lifestyle that affect mortality. (Although obesity has dramatically increased all across America, obesity rates are significantly higher in red states.)
Krugman references–and debunks–conservative explanations for the death divide:
Conservative figures like William Barr, the attorney general, look at rising mortality in America and attribute it to the collapse of traditional values — a collapse they attribute, in turn, to the evil machinations of “militant secularists.” The secularist assault on traditional values, Barr claims, lies behind “soaring suicide rates,” rising violence and “a deadly drug epidemic.”
But European nations, which are far more secularist than we are, haven’t seen a comparable rise in deaths of despair and an American-style decline in life expectancy. And even within America these evils are concentrated in states that voted for Trump, and have largely bypassed the more secular blue states.
Although he doesn’t mention it, I’d also be interested in seeing a comparison of gun deaths in red and blue states.
Actually, conservatives like Barr inadvertently make a point: culture and values matter. Just not the way they think.
Even if you found yesterday’s post persuasive, a UBI may still seem politically impossible and cost-prohibitive.
Politically, shifting from a paternalistic and judgmental “welfare” system to one awarding benefits based upon membership in the polity would not only require a significant culture change, but would be vigorously opposed by the large number of companies and individuals whose interests are served by America’s current patchwork of programs, subsidies and policies.
Then there’s the issue of cost.
Americans’ deeply-ingrained belief that people are poor because they made bad choices or didn’t work hard enough continues to be a barrier to a more generous and equitable social safety net. But the most significant impediment to passage of a Universal Basic Income is the argument that has consistently been made to thwart universal healthcare: that America, rich as the country is, simply cannot afford such a Brave New World. This argument flies in the face of evidence from countries with far more robust safety nets: in 2012, the U.S. spent an estimated 19.4% of GDP on social expenditures, according to the Organization for Economic Co-operation and Development. Denmark spent 30.5%, Sweden 28.2% and Germany 26.3%. All of these countries have a lower central-government debt-to-GDP ratio than the United States.
While specific economic recommendations aren’t possible in the absence of concrete, “fleshed out” policy proposals, it’s possible to identify ways in which universal programs might be financed, and how they might affect economic growth. The short answer is that both the UBI and some version of Medicare-for-All could be funded by a combination of higher taxes, savings through cost containment, economies of scale, reduction of welfare bureaucracy, the elimination or reform of existing subsidies, and meaningful reductions in America’s bloated defense budget.
Debates over taxes rarely if ever consider the extent to which individual taxpayers actually save money when government relieves them of the expense of a service. Even now, if citizens had to pay out of pocket for such things as scavenger services (in lieu of municipal garbage collection), private police and fire protection, or schooling, those payments would vastly exceed the amounts individual households pay in taxes for the same services. Low-income citizens, of course, would be unable to afford them at all.
The American public is positively allergic to taxes, even when a majority financially benefits from them. If low- and middle-income American families did not have to pay out-of-pocket for health insurance, and could count on a stipend of $1,000 a month, most would personally be much better off, even if they experienced increases in their tax rates. They would likely see other savings as well: for example, if the U.S. had national health care, auto and homeowners’ insurance rates could be expected to decline, because insurance companies would no longer have to include the costs of medical care in the event of an accident or injury in their actuarial calculations. Research also predicts the country would see a decline in crime, child and spousal abuse, and similar behaviors that have been found to increase under the stresses associated with poverty. (The extent of such reductions and the cost savings attributable to them are speculative, but a substantial level of abatement seems likely.)
Most tax increases, obviously, would be levied against those capable of paying them. Americans used to believe in progressive taxation, and not simply to raise revenue. Taxes on the very wealthy were originally conceived as correctives, like tobacco taxes, that should be judged by their social impact as well as their ability to generate revenue. High tax rates on the rich were intended to reduce the vast accumulations of money that serve to give a handful of people a level of power deemed incompatible with democracy.
Right now they pay about 30% of their income in taxes. Increasing their overall average tax rate by about 10 percentage points would generate roughly $3tn in revenue over the next 10 years, while still leaving the 1% with an average post-tax annual income of more than $1.4m. (That new tax rate, by the way, would be about the same as the overall rate the richest 1% paid back in the 1940s and 1950s.)
As indicated, in addition to reducing inequality, progressive taxation does raise money, and there is widespread agreement that the very rich aren’t paying their share. At the 2019 Davos World Economic Forum, Dutch historian Rutger Bregman caused a mini-sensation by telling the uber-wealthy assembled there that the “real issue” in the battle for equality is tax avoidance and the failure of rich people to pay what they should. Momentum is clearly building for more progressive tax rates than the United States currently imposes.
There is also growing anger directed at the generosity of various tax credits and deductions, aka “loopholes,” that allow immensely profitable corporations to reduce their tax liabilities (or escape them completely). The use of offshore tax havens and other creative methods of eluding payment devised by sophisticated tax lawyers employed by the uber-wealthy is an ongoing scandal.
Real-world experiments like Governor Sam Brownback’s tax cuts in Kansas confirm that, contrary to the ideological arguments against imposing higher taxes on wealthy “makers,” high marginal rates don’t depress economic growth and cutting taxes doesn’t trigger an increase in either job creation or economic growth. In 1947, the top tax rate was 86.45% on income over $200,000; in 2015, it was 39.60% on income over $466,950. During that time span, researchers have found very little correlation between economic growth and higher or lower marginal rates. In 2012, the Congressional Research Service published a research study that rebutted the presumed inverse correlation between tax rates and economic growth.
Climate change is affecting America’s weather, increasing the urgency of efforts to reduce carbon emissions and increase the development and use of clean energy sources. Yet the United States spends $20 billion a year subsidizing fossil fuels, including $2.5 billion a year specifically earmarked for searching out new fossil fuel resources, at a time in human history when the development of those resources is contraindicated. According to Oil Change International, permanent tax breaks to the US fossil fuel industry are seven times larger than those for renewable energy. At current prices, the production of nearly half of all U.S. oil would not be economically viable but for federal and state subsidies.
During the 2015-2016 election cycle oil, gas, and coal companies spent $354 million in campaign contributions and lobbying, and received $29.4 billion in federal subsidies in total over those same years – an 8,200% return on investment. The OCI report concluded that: “Removing these highly inefficient [fossil fuel] subsidies – which waste billions of dollars propping up an industry incompatible with safe climate limits – should be the first priority of fiscally responsible climate, energy, and tax reform policies.” Not incidentally, eliminating these subsidies would free up funds for other uses, including the social safety net.
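That striking return-on-investment figure is easy to sanity-check from the two numbers quoted above; here is a quick sketch (the net-return formula is my assumption about how OCI did the arithmetic):

```python
# Rough check of the "8,200% return" claim using the figures quoted above.
spent_on_politics = 354e6       # campaign contributions + lobbying, 2015-2016 cycle
subsidies_received = 29.4e9     # federal subsidies over the same years

roi = (subsidies_received - spent_on_politics) / spent_on_politics
print(f"{roi:.0%}")             # prints 8205%, i.e. roughly an 8,200% return
```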
Then there are farm subsidies–another $20 billion annually. Arguments for and against terminating these subsidies are more complicated than for fossil fuel subsidies, but the case for means-testing them is strong. In 2017, the USDA released a report showing that approximately half the money went to farmers with household incomes over $150,000. As Tamar Haspel wrote in the Washington Post, “That means billions of dollars, every year, go to households with income nearly three times higher than the median U.S. household income, which was $55,775 that year.”
Farm subsidies were created during the Depression in order to keep family farms afloat and ensure a stable national food supply. Since 2008, however, the top 10 farm subsidy recipients have each received an average of $18.2 million – that’s $1.8 million annually, $150,000 per month, or $35,000 a week. These farmers received more than 30 times the average yearly income of U.S. families. Millionaires are benefitting from a program originally established to protect family farms during times of economic distress.
Most citizens understand why government should not be providing billions of dollars to support companies that make climate change worse, or adding to the bottom lines of already-profitable corporate farms. Efforts to cut the military budget, by contrast, encounter genuine anxieties about endangering national security, as well as more parochial concerns from lawmakers representing districts with economies heavily dependent upon military bases or contractors. Those concerns may explain why U.S. military spending in 2017 was over 30% higher in real terms than it was in 2000.
The United States budgeted $716 billion for defense in 2019, and annually spends more than twice what Russia, China, Iran and North Korea spend collectively.
Critics of the military budget make three basic arguments: the budget is much bigger than threats to U.S. security require; very little of the money appropriated supports efforts to fight terrorist groups that pose the real threat in today’s world; and the countries that might threaten America militarily are historically few and weak. (Russia, for example, has an energy-dependent economy roughly the size of Italy’s. According to America’s intelligence community, its efforts to destabilize the U.S. are made through social media, assaults by “bots,” and hacks into vulnerable data repositories, not military action.)
The massive amounts that America spends on its military are used to support bases and troops that are ill-suited to the conduct of modern-day defense. (Even the Pentagon has estimated that base capacity exceeds need by 20%.) The existence of this enormous military capacity also creates an incentive to substitute military intervention for the exercise of diplomacy and soft power (as the old saying goes, when the only tool you have is a hammer, every problem looks like a nail).
An argument can also be made that we are supporting a military establishment that is prepared to fight the last war, not the next one.
As one military expert has written, “counterterrorism is poorly served by manpower-intensive occupational wars, which rarely produce stability, let alone democracy.” He argues the U.S. could safely cut the military budget by 25%; even if he is wrong about the size of the savings that could be realized, knowledgeable observers suggest that modernizing military operations, restraining America’s all-too-frequent interventions into the affairs of other countries, and focusing on actual threats would translate into very significant savings.
The elimination of fossil fuel subsidies, and the reduction of farm subsidies and military expenditures would allow lawmakers to achieve substantial savings while pursuing important policy goals. The government ought not be abetting climate change or further enriching wealthy Americans, and it is past time to reconfigure national defense to meet the challenges of the 21st Century.
Andy Stern lists a number of ways a UBI might be funded, including “cashing out” all or most of the existing 126 welfare programs that currently cost taxpayers $1 trillion a year. The UBI would make many if not most of these programs unnecessary.
Stern also lists a number of targeted tax proposals, including a Value Added Tax (VAT), that have been suggested by economists supportive of a UBI. As he points out, these and other proposals constitute a “menu” of possibilities. (Another example: if a UBI allowed workers to cover basic essentials, taxpayers would be relieved of the need to supplement the wages of McDonald’s and Walmart workers, saving government some ten billion dollars annually.) If and when America has a Congress that is serious about reforming both our democratic decision-making structures and our social infrastructure, that menu provides a number of options from which to choose.
America’s problem is a lack of political will to confront the special interest groups that currently feed at the government trough, not a lack of realistic funding mechanisms.