Tag Archives: evidence

That Pesky Thing Called Evidence

The World’s Worst Legislature is barreling toward the session’s finish line, and the Republican super-majority shows no sign of moderating its war on public education, despite recently emerging evidence that several of the most enthusiastic proponents of vouchers have disturbing conflicts of interest, not to mention overwhelming evidence that privatizing schools leads to poorer educational outcomes.

Of course, Indiana’s lawmakers are impervious to evidence of all kinds. (Look at Indiana’s gun laws, its disregard of environmental impacts…the list goes on.)

I know my periodic posts on the subject are the equivalent of “whistling in the wind,” but as the research continues to pile up, I find it hard to restrain myself.

So…

In the Public Interest recently shared “a clear and concise breakdown of the problems of vouchers,” written by a Professor of Education Policy at Michigan State University, and titled “There is no Upside.”

Here’s the lede:

What if I told you there is a policy idea in education that, when implemented to its full extent, caused some of the largest academic drops ever measured in the research record?

What if I told you that 40 percent of schools funded under that policy closed their doors afterward, and that kids in those schools fled them at about a rate of 20 percent per year?

What if I told you that some of the largest financial backers of that idea also put their money behind election denial and voter suppression—groups still claiming Donald Trump won the 2020 election? Would you believe what those groups told you about their ideas for improving schools?

What if I told you that idea exists, that it’s called school vouchers, and despite all of the evidence against it the idea persists and is even expanding?

The article followed up with a compilation of independent analyses drawn from both the research community and “on the ground” reporting by journalists. You need to click through for the details, but here are the “top level” findings:

  • First, vouchers mostly fund children already in private school. Seventy to eighty percent of kids using vouchers were already in private school before taxpayers picked up the tab.
  • Among the relatively few kids who did use vouchers to leave public schools, test scores dropped by 0.15 to 0.50 standard deviations.
  • The typical private school accepting vouchers “isn’t one of the elite, private schools in popular narrative.” The typical voucher school is “small, often run out of a church property like its basement, often popping up specifically to get the voucher.”
  • Understandably, many kids leave those sub-prime schools. (In Wisconsin, about 20 percent of kids left their voucher school every year and most transferred to a public school.)

Then there is the issue of transparency and oversight.

All of the above evidence should already tell you why it’s critically important that states passing voucher laws also include strong academic and financial reporting requirements. If we’re going to use taxpayer funds on these private ventures, we need to know what the academic results are and what the return on government investment is.

And of course, we don’t.

Then, of course, there’s discrimination.

We know that in Indiana, where one of the largest and lowest-performing voucher programs exists, more than $16 million in taxpayer dollars went to schools discriminating against LGBTQ children. Similar story in Florida—and that includes kids whose parents are gay, regardless of how the children identify.

Given the fact that Indiana’s legislature is advancing other discriminatory measures aimed at the LGBTQ community–especially several ugly measures targeting trans children–I’m sure our lawmakers consider that documented bigotry to be a feature, not a bug.

The article also traces connections I’d not previously been aware of between the most active voucher proponents and far-right organizations engaging in efforts to suppress votes and reject the results of the 2020 presidential election.

Interestingly, the article doesn’t highlight one of my main concerns: that vouchers are an end-run around the First Amendment’s Separation of Church and State. Here in Indiana, over 90% of voucher students attend religious schools, a significant percentage of which are fundamentalist. The children who attend overwhelmingly come from the corresponding faith communities. Even the religious schools that don’t actively discriminate do not and cannot provide the diverse classroom environment that prepares children for citizenship in an increasingly diverse America. (Most don’t teach civics, either.)

It also doesn’t address how vouchers disproportionately hurt rural communities.

The article concludes:

So there you have it: catastrophic academic harm. A revolving door of private school failures. High turnover rates among at-risk children. Avoiding oversight and transparency. Overt, systematic discrimination against vulnerable kids and families. Deep and sustained ties to anti-democratic forces working in the United States today.

That’s school vouchers in 2023.

That’s the “system” Hoosier lawmakers want to greatly expand–with funds stolen from the state’s already under-resourced public schools.

It’s indefensible.

 

Misinformation Matters

A good friend of ours, originally from Canada, left his faculty position in Indianapolis and moved to Ottawa to assume a position as President and CEO of the Council of Canadian Academies, or CCA.

Knowing my preoccupation with media and misinformation, he has shared some intriguing research from an expert panel appointed by the CCA. That research delved into the effects of misinformation on science and health, going beyond the typical hand-wringing over the extent of misinformation and its potential harms, and looking instead at the nature and extent of quantifiable damage done by widespread dissemination of patently wrong information.

As a news release explained:

Considerable and mounting evidence shows that misinformation has led to illness and death from unsafe interventions and products, vaccine preventable diseases, and a lack of adherence to public health measures, with the most vulnerable populations bearing the greatest burden. The Expert Panel on the Socioeconomic Impacts of Science and Health Misinformation estimates that misinformation cost the Canadian healthcare system at least $300 million during nine months of the COVID-19 pandemic in 2021.

While combatting misinformation is a complex and long-term challenge, the report details several measures that have shown promise. Ensuring that accurate health and science information is widely accessible and is communicated honestly, understandably, and by trusted messengers can help insulate people from misinformation. Identifying, labelling, and debunking misinformation can also be effective, as are measures that better equip individuals to sort through the increasingly complex information environment, particularly the promotion of critical thinking and media and science literacy in school curricula.

You can access the entire report here. Some of the findings struck me as particularly significant, especially the description of when, why and how people come to accept what the panel calls “misinformation” and what I would probably label conspiracy theories and lies.

Misinformation is designed to appeal to emotion and–as the report notes–intended to exploit our “cognitive shortcuts.” We are all susceptible to it, especially in times of crisis.

Science and health misinformation damages our community well-being through otherwise preventable illnesses, deaths, and economic losses, and our social well-being through polarization and the erosion of public trust. These harms often fall most heavily on the most vulnerable.

The research found a number of outcomes directly attributable to the spread and acceptance of misinformation; they included: illness, poisoning, and death from unsafe health interventions and products; illness and death from communicable and vaccine-preventable diseases; money wasted on disproven products and services; susceptibility to further and potentially more insidious forms of misinformation; increased healthcare and societal costs; and inaction on or delay of public policy responses.

Misinformation contributes to a lack of adherence to public health measures and to vaccine hesitancy, which can result in vaccine-preventable disease outbreaks, increased healthcare costs, and elevated risk to the health and well-being of vulnerable populations. Misinformation also amplifies social divisions, which have resulted in overt conflict and violence, often directed at racialized communities. Furthermore, the consequences of science and health misinformation are not borne equally — for instance, negative health impacts during the COVID-19 pandemic have been found to disproportionately affect the well-being of racialized and other underserved communities, exacerbating existing inequalities.

Where possible, panel members put numbers to these generalized descriptions, estimating that widely circulated misinformation about COVID-19 had cost the Canadian healthcare system “at least $300 million in hospital and ICU visits between March 1 and November 30, 2021.” That number did not include the costs of outpatient medication, physician compensation, or long COVID.

And for obvious reasons, the panel was unable to estimate what it called “broader societal costs.” Those included such difficult-to-quantify effects as “delayed elective surgeries, social unrest, moral injury to healthcare workers, and the uneven distribution of harms borne by communities.”

The negative consequences of misinformation are–obviously–not confined to citizens of Canada. In the absence of credible information that is widely trusted and accepted, misinformation proliferates. In the U.S., political data confirms the harm: the MAGA folks who rejected vaccination (evidently believing it to be some sort of nefarious liberal plot) died of COVID in far larger numbers than the independents and Democrats who trusted the science.

The question is: what can be done to counter the confusion and reduce the damage sowed by purveyors of propaganda and inaccurate information? One answer is clearly education, especially science education. (That conclusion supports concerns over the metastasizing voucher programs that are sending students to private, predominantly religious schools–many of which have been found to teach creationism in lieu of science.)

When citizens don’t inhabit the same evidence-based reality, both individual and social health are compromised–sometimes fatally.

 

It’s The Culture…

Every morning when I sit down at my computer, I’m confronted with headlines from the various news sources to which I subscribe: The Guardian, The New York Times, The Washington Post…and through the day, a mind-numbing number of others. I don’t know anyone with the time and/or inclination to carefully read all the available news and opinion, and I certainly don’t–like most consumers of media, I scan the headlines and click on those that promise some measure of enlightenment or moderately important/relevant information.

But occasionally, a headline is so weird, I have to read the article. That’s what lured me to a report in The Week titled (no kidding) “Did Theranos Lose Afghanistan?”

Theranos, as you probably know, was the much-hyped startup company founded by Elizabeth Holmes–young, very good-looking and evidently one really smooth talker. She claimed the company had invented a new kind of blood testing technology that was going to save both time and money. Lots of people invested in it.

The most generous interpretation of what came next was a discovery that the technology didn’t work; a less-generous interpretation is that Holmes intentionally perpetrated a fraud. A jury is currently hearing evidence on the latter interpretation.

So what–if anything–does this audacious scam (if that is, indeed, what it turns out to be) have to do with Afghanistan? Well, the article does point out that General Mattis, late of the Trump Administration and the Afghan war, was on the board of Theranos and a major cheerleader for the company.

But the real connection was a cultural one.

Like the Afghanistan debacle, Theranos is a horror story of wishful thinking, credulous media, and celebrity impunity. Whether or not intentional deception was involved, both episodes display the dishonesty and incompetence of interlocking tech, finance, media, and military elites.

Mattis’ role in both sorry spectacles–the war and Theranos–illustrates the moral rot that infects far too many of the figures lionized by a media chasing eyeballs and clicks rather than the information required by a democratic citizenry.

Mattis denies any wrongdoing, claiming he was taken in, too. Even if that’s true, his role is discreditable. Mattis’ association with the company began in 2011, when he met Holmes at a Marine Memorial event in San Francisco. According to author John Carreyrou and other journalists, he immediately began campaigning for military adoption of Theranos’ ostensibly innovative blood-testing technology. Mattis was not deterred by the lack of FDA approval and mounting doubts about whether the technology actually worked. After his retirement in 2013, Mattis also ignored legal advice that it would be improper to join the board while the company was seeking procurement of its products for use in Afghanistan.

It would be a mistake to single out a few “bad actors,” however. The problem is systemic–a widespread, “baked-in” disinclination to either provide or accept evidence that is contrary to what one wants to believe.

The article focuses on the impunity enjoyed by what it calls the American ruling class “until their conduct becomes literally criminal,” and it points out that the same people who make decisions in Washington sit on boards in Silicon Valley and appear on the same few cable channels. When the projects they promote go south, they continue to be celebrated and compensated as authors, management consultants, and respected pundits.

There’s a word for this governing hierarchy: kakistocracy, governance by the worst, least qualified, or most unscrupulous citizens.

Which gets us back to culture.

In today’s America, celebrity is more valued than competence. A loud voice commands far more attention than an expert opinion. Purveyors of ridiculous conspiracy theories overwhelm the conclusions and cautions of reputable scientists. This is the culture that in 2016 gave us an embarrassing, mentally ill buffoon for President, the culture that elects equally embarrassing crazies like Marjorie Taylor Greene. It’s the culture that leads thousands of people to ingest a horse de-wormer and reject the expertise of epidemiologists and medical professionals.

It’s a culture that threatens to overwhelm those of us who want to live in the reality-based community.

About A UBI…

I’m speaking today to a women’s group about proposals for a Universal Basic Income. Here’s what I’ll say. WARNING: It’s a lot longer than my usual posts.

_______________________________

I’ve recently been obsessing about what an updated social contract might look like. How would the realities of modern life alter the framework that emerged, after all, from the 18th Century Enlightenment? Is it possible to craft a governing structure that both respects individual liberty and provides basic material security? Actually, is anyone truly free when they face a daily struggle just to survive? And most important, at a time when we are recognizing how polarized Americans are, can government safety-net policies help to unify a quarrelsome and diverse population?

Social scientists are just beginning to appreciate the multiplicity of ways in which America’s obsessive focus on individual responsibility and achievement has obscured recognition of the equally important role played by the communities within which we are all embedded. A much-cited remark made by Elizabeth Warren during her first Senate campaign reminded us of the important ways social infrastructure makes individual success and market economies possible:

“There is nobody in this country who got rich on their own. Nobody. You built a factory out there – good for you. But I want to be clear. You moved your goods to market on roads the rest of us paid for. You hired workers the rest of us paid to educate. You were safe in your factory because of police forces and fire forces that the rest of us paid for. You didn’t have to worry that marauding bands would come and seize everything at your factory… Now look. You built a factory and it turned into something terrific or a great idea – God bless! Keep a hunk of it. But part of the underlying social contract is you take a hunk of that and pay forward for the next kid who comes along.”

The fact that Warren’s observation garnered so much attention (it evidently triggered an epiphany in many people) suggests that Americans rarely see individual success stories as dependent upon the government’s ability to provide a physical and legal environment–an infrastructure–within which that success can occur. It was a pointed rebuke of our national tendency to discount the importance of effective and competent governance.

The importance of hard work and individual talent certainly shouldn’t be minimized, but neither should it be exaggerated. When the focus is entirely upon the individual, when successes of any sort are attributed solely to individual effort, we fail to see the effects of social and legal structures that privilege some groups and impede others. When marginalized groups call attention to additional barriers they face, members of more privileged groups cling even more strongly to the fiction that only individual merit explains success and failure.

The problem is, when we ignore the operation of systemic influences, we feed pernicious stereotypes. We harden our tribal affiliations. That’s why the first priority of a new social contract should be to nurture what scholars call “social solidarity,” the ability of diverse citizens to see ourselves as part of an over-arching, encompassing American community.

Here’s the thing: Public policies can either increase or reduce polarization and tensions between groups. Policies intended to help less fortunate citizens can be delivered in ways that stoke resentments, or in ways that encourage national cohesion. Think about widespread public attitudes about welfare programs aimed at poor people, and contrast those attitudes with the overwhelming majorities that approve of Social Security and Medicare. Polling data since 1938 shows growing numbers of Americans who believe laziness and lack of motivation to be the main causes of poverty, and who insist that government assistance—what we usually refer to as welfare—breeds dependence. These attitudes about poverty and welfare have remained largely unchanged despite overwhelming evidence that they are untrue.

Social Security and Medicare send a very different message. They are universal programs; virtually everyone contributes to them and everyone who lives long enough participates in their benefits. Just as we don’t generally hear accusations that “those people are driving on roads paid for by my taxes,” or sentiments begrudging a poor neighbor’s garbage pickup, beneficiaries of programs that include everyone (or almost everyone) are much more likely to escape stigma. In addition to the usual questions of efficacy and cost-effectiveness, policymakers should evaluate proposed programs by considering whether they are likely to unify or further divide Americans. Universal policies are far more likely to unify, an important and often overlooked argument favoring a Universal Basic Income.

Attention to the UBI–a universal basic income–has increased due to predictions that automation could eliminate up to 50% of current American jobs, and sooner than we think. Self-driving cars alone threaten the jobs of the over 4 million Americans who drive trucks, taxis and delivery vehicles for a living—and those middle-aged, displaced workers aren’t all going to become computer experts. A UBI could avert enormous social upheaval resulting from those job losses–but there are many other reasons to seriously consider it.

A workable social contract connects citizens to an overarching community in which they have equal membership and from which they receive equal support. The challenge is to achieve a healthy balance—to create a society that genuinely respects individual liberty within a renewed emphasis on the common good, a society that both rewards individual effort and talent, and nurtures the equal expression of those talents irrespective of tribal identity.

What if the United States embraced a new social contract, beginning with the premise that all citizens are valued members of the American community, and that (as the advertisement says) membership has its privileges? In my imagined “Brave New World,” government would create an environment within which humans could flourish, an environment within which members—citizens—would be guaranteed a basic livelihood, including access to health care, a substantive education and an equal place at the civic table. In return, members (aka citizens) would pay their “dues”: taxes, a year or two of civic service, and the consistent discharge of civic duties like voting and jury service.

In my Brave New World, government would provide both a physical and a social infrastructure. We’re all familiar with physical infrastructure: streets, roads, bridges, utilities, parks, museums, public transportation, and the like; we might even expand the definition to include common municipal services like police and fire protection, garbage collection and similar necessities and amenities of community life. Local governments across the country understand the importance of these assets and services, and struggle to provide them with the generally inadequate tax dollars collected from grudging but compliant citizens.

There is far less agreement on what a social infrastructure should look like and how it should be funded. The most consequential element of a new social infrastructure, and by far the most difficult to implement, would require significant changes to the deep-seated cultural assumptions on which our current economy rests. Its goals would be to ease economic insecurities, restore workers’ bargaining power and (not so incidentally) rescue market capitalism from its descent into plutocracy. The two major pillars of that ambitious effort would be a Universal Basic Income and single-payer health insurance.

The defects of existing American welfare policies are well-known. The nation has a patchwork of state and federal efforts and programs, with bureaucratic barriers and means testing that operate to exclude most of the working poor. Welfare recipients are routinely stigmatized by moralizing lawmakers pursuing punitive measures aimed at imagined “takers” and “Welfare Queens.” Current anti-poverty policies haven’t made an appreciable impact on poverty, but they have grown the bureaucracy and contributed significantly to racial stereotyping and socio-economic polarization; as a result, a number of economists and political thinkers now advocate replacing the existing patchwork with a Universal Basic Income.

A UBI is an amount of money that would be sent to every U.S. citizen, with no strings attached–no requirement to work, or to spend the money on certain items and not others. It’s a cash grant sufficient to ensure basic sustenance; most proponents advocate $1000 per month. As Andy Stern has written,

“A basic income is simple to administer, treats all people equally, rewards hard work and entrepreneurship, and trusts the poor to make their own decisions about what to do with their money. Because it only offers a floor, people are encouraged to make additional income through their own efforts: As I like to say, a UBI gives you enough to live on the first floor, but to get a better view—for example, a seventh-floor view of the park—you need to come up with more money. Welfare, on the other hand, discourages people from working because, if your income increases, you lose benefits.”

As Stern points out, with a UBI, in contrast to welfare, there’s no phase-out, no marriage penalties, no people falsifying information. Support for the concept is not limited to progressives. Milton Friedman famously proposed a “negative income tax,” and F.A. Hayek, the libertarian economist, wrote, “There is no reason why in a free society government should not assure to all, protection against severe deprivation in the form of an assured minimum income, or a floor below which nobody need descend.” In 2016, Samuel Hammond of the libertarian Niskanen Center noted the “ideal” features of a UBI: its unconditional structure avoids creating poverty traps; it sets a minimum income floor, which raises worker bargaining power without wage or price controls; it decouples benefits from a particular workplace or jurisdiction; since it’s cash, it respects a diversity of needs and values; and it simplifies and streamlines bureaucracy, eliminating rent seeking and other sources of inefficiency.

Hammond’s point about worker bargaining power is especially important. In today’s work world, with its dramatically-diminished unions and the growth of the “gig economy,” the erosion of employee bargaining power has been severe. Wages have been effectively stagnant for years, despite significant growth in productivity. In 2018, Pew Research reported that “today’s real average wage (that is, the wage after accounting for inflation) has about the same purchasing power it did 40 years ago. And what wage gains there have been have mostly flowed to the highest-paid tier of workers.” With a UBI and single payer health coverage, workers would have the freedom to leave abusive employers, unsafe work conditions, and uncompetitive pay scales. A UBI wouldn’t level the playing field, but it would sure reduce the tilt.

It is also worth noting that a UBI would have much the same positive effect on economic growth as a higher minimum wage. When poor people get money, they spend it, increasing demand—and increased demand is what fuels job creation and economic growth. If nobody is buying your widgets, you aren’t going to hire people to produce more of them.

Several countries have run pilot projects assessing the pros and cons of UBIs, and American pilot projects are currently underway in Stockton and Oakland, California, and Mississippi; Gary Mayor Jerome Prince just announced that Gary will be participating in one. A rigorous academic evaluation of an earlier experiment, in Kenya, found that—contrary to skeptics’ predictions—the money had primarily been spent on food, medicine and education, and that there was no increase in use or purchase of alcohol and tobacco. The study also identified “a significant positive spillover on female empowerment,” and “large increases in psychological well-being” of the recipients.

Psychologists have underscored the importance of that last finding. Families with few resources face barriers that can overwhelm cognitive capacities. The psychological impacts from scarcity are real and the outcomes are difficult to reverse. A 2017 article in Forbes reported that when Native Americans opened casinos along the Rio Grande and used the proceeds to deliver basic incomes to the tribal poor, child abuse and crime dropped drastically. Simply handing money to poor people was enormously helpful. Being trapped in poverty, with the stress and insecurities associated with that, is progressively debilitating.

Counter-intuitive as it may seem, a significant body of research supports the importance of a robust social safety net to market economies. As Will Wilkinson, vice-president for policy at the libertarian Niskanen Center, put it in the conservative National Review, contemporary arguments between self-defined capitalists and socialists both misunderstand economic reality. The left fails to appreciate the important role of capitalism and markets in producing abundance, and the right refuses to acknowledge the indispensable role safety nets play in buffering the socially destructive consequences of insecurity.

I may be a nerd, but I’m not delusional: Even if a UBI sounds good, the enormous barriers to its adoption are obvious: politically, shifting from a paternalistic and judgmental “welfare” system to one awarding benefits based upon membership in American society would require a significant culture change and would be vigorously opposed by the large number of companies and individuals whose interests are served by America’s dysfunctional patchwork of programs. State-level legislators would resist policy changes that moved decision-making from the state to either the federal or local level. And of course, voters are notoriously suspicious of change, even when it serves their interests. Nevertheless, if survey research is to be believed, public opinion is slowly moving in these directions. In time, and with sufficient moral and strategic leadership, change is possible. First, however, misconceptions must be confronted. (As the old saying goes, it isn’t what we don’t know that’s a problem, it’s what we know that isn’t so.)

Although Americans’ deeply-ingrained belief that people are poor because they made bad choices or didn’t work hard enough continues to be a barrier to a more generous and equitable social safety net, the most significant impediment to passage of a UBI is the same argument that has consistently and successfully thwarted universal healthcare: that America, rich as the country is, simply can’t afford it. This argument flies in the face of evidence from poorer countries with far more robust safety nets. Both the UBI and some version of Medicare-for-All could be funded by a combination of higher taxes, savings through cost containment, efficiencies and economies of scale, the elimination or reform of existing subsidies, and meaningful reductions in America’s bloated defense budget. (I should also note that government already pays some 70% of U.S. healthcare costs through a variety of programs and via coverage for government employees—and that’s without the substantial savings that a national system could achieve. According to one 2014 study, a single-payer system would save $375 billion per year just by removing inefficient administrative costs generated by multiple payers.) But back to UBI.

First, taxes. I know—dirty word.

Interestingly, public debates over taxes rarely if ever consider the extent to which individual taxpayers actually save money when government taxes them to supply a service. If citizens had to pay out-of-pocket for privatized police and fire protection or private schooling, the expense would vastly exceed the amounts individual households pay in taxes for those services. Low-income citizens, of course, would be unable to afford them.

There is a reason that debates about taxes rarely include consideration of the saving side of the ledger; the American public is positively allergic to taxes, even when a majority financially benefits from them. If low- and middle-income American families did not have to pay out-of-pocket for health insurance, and could count on receiving a stipend of $1000/month, most would personally be much better off, even if some of them experienced tax increases.

Tax increases, of course, are levied against people capable of paying them. Americans used to believe in progressive taxation, and not simply to raise revenue. Taxes on the very wealthy were originally conceived as correctives, like tobacco taxes, that should be judged by their societal impact as well as their ability to generate revenue. High tax rates on the rich were intended to reduce the vast accumulations of money that serve to give a handful of people a level of power deemed incompatible with democracy. Of course, in addition to reducing inequality, progressive taxation does raise money. Elizabeth Warren proposed taxing households with over $50 million in assets by levying a 2 percent tax on their net worth every year. The rate would rise to 3 percent on assets over $1 billion. Warren’s plan would affect a total of just 75,000 households, but would raise $2.75 trillion over 10 years. Representative Alexandria Ocasio-Cortez has called for raising the marginal federal tax rate on annual incomes over $10 million. Both proposals reflect a growing consensus that the very rich are not paying their fair share.

There’s also growing anger directed at the generosity of various tax “loopholes” that allow immensely profitable corporations to reduce their tax liabilities (or escape them completely). In 2018, Amazon, which reported $11.2 billion in profit, paid no tax and received a rebate of $129 million. The use of offshore tax havens and other creative methods of eluding payment devised by sophisticated tax lawyers employed by the uber-wealthy is an ongoing scandal.

Both economic research and real-world experiments like Governor Sam Brownback’s tax cuts in Kansas confirm that, contrary to the emotional and ideological arguments against imposing higher taxes on wealthy individuals, high marginal rates don’t depress economic growth and cutting taxes doesn’t trigger an increase in either job creation or economic growth. In 1947, the top tax rate was 86.45% on income over $200,000; in 2015, it was 39.60% on income over $466,950. During that time, research has found very little correlation between economic growth and higher or lower marginal rates. In 2012, the Congressional Research Service published a research study that rebutted the presumed correlation between tax rates and economic growth.

It isn’t just taxes that need to be adjusted. We need to significantly reduce fossil fuel subsidies, farm subsidies and our bloated military budget—and we need to stop subsidizing shareholders of immensely profitable companies like Walmart and McDonald’s. If a UBI allowed workers to cover basic essentials, taxpayers wouldn’t need to supplement the wages of low-wage workers. A Senate panel recently reported that nearly half of workers making less than $15 an hour currently rely on public assistance programs costing taxpayers $107 billion each year.

Climate change is already affecting America’s weather, increasing the urgency of efforts to reduce carbon emissions and increase the development and use of clean energy sources. Yet the United States spends twenty billion dollars a year subsidizing fossil fuels. That includes $2.5 billion per year specifically earmarked for searching out new fossil fuel resources, at a time when development of those resources is arguably suicidal. Permanent tax breaks to the U.S. fossil fuel industry are seven times larger than those for renewable energy. Research tells us that, at current prices, the production of nearly half of all U.S. oil would not be economically viable, but for federal and state subsidies.

The Obama administration proposed to eliminate 60% of federal fossil fuel subsidies. That proposal went nowhere–perhaps because during the 2015-2016 election cycle oil, gas, and coal companies spent $354 million on campaign contributions and lobbying. The industry received $29.4 billion in total federal subsidies those same years – an 8,200% return on investment. We waste billions of dollars propping up an industry that makes climate change worse. Eliminating these subsidies would free up funds for other uses, including a UBI.

Farm subsidies represent another $20 billion annually. Arguments for and against terminating these subsidies are more complicated than for fossil fuel subsidies, but the case for means-testing them is strong. In 2017, the USDA released a report showing that approximately half the money paid out went to farmers with household incomes over $150,000. That means billions of dollars, every year, go to households with incomes nearly three times the median U.S. household income, which was $55,775 that year.

Farm subsidies were created during the Depression to keep family farms afloat and to ensure a stable national food supply. Since 2008, however, the top 10 farm subsidy recipients have each received an average of $18.2 million – that’s $1.8 million annually, $150,000 per month, or $35,000 a week – more than 30 times the average yearly income of U.S. families. Surely the formula governing distribution of those subsidies could be changed to ensure that millionaires aren’t benefitting from a program established to protect family farms during times of economic distress. According to Forbes, since 2008, the top five recipients of farm subsidies took in between $18.6 million and $23.8 million apiece. Some of us are old enough to remember that Richard Lugar consistently criticized farm subsidies as wasteful and even counterproductive and offered legislation to limit them; his legislation also went nowhere.

Making the case for eliminating fossil fuel subsidies or limiting farm subsidies is much simpler than advocating for strategic cuts in America’s bloated military budget. Most citizens understand why government should not be providing billions of dollars to support companies that make climate change worse, or adding to the bottom lines of massively-profitable corporate farms. Efforts to cut the military budget, enormous though it is, encounter genuine anxieties about endangering national security, as well as more parochial concerns from lawmakers representing districts with economies heavily dependent upon military bases or contractors. That may explain why U.S. military spending in 2017 was over 30% higher in real terms than it was in 2000. The United States spent $716 billion in 2019; annually, we spend more than twice what Russia, China, Iran and North Korea spend collectively.

Critics of the military budget make three basic arguments: the budget is much bigger than threats to U.S. security require; very little of the money appropriated supports efforts to fight the terrorist groups that pose the real threat in today’s world; and the countries that might threaten American interests militarily are historically few and weak. (Russia, for example, has an energy-dependent economy roughly the size of Italy’s. According to America’s intelligence community, Russian efforts to destabilize us are made through social media, assaults by “bots,” and hacks into vulnerable data repositories, not military action.)

The massive amounts that America spends on its military go to support bases and troops that aren’t even suited to the conduct of modern-day defense. It would also be worth investigating whether the existence of this enormous military capacity creates an incentive to substitute military intervention for the exercise of diplomacy and soft power (as the old saying warns, when the tool you have is a hammer, every problem looks like a nail). We appear to be supporting a military establishment that’s prepared to fight the last war, not the next one. Several experts argue that the U.S. could safely cut the military budget by 25%.

We should address these subsidies in any event, but when it comes to paying for a UBI, there are a number of ways it might be funded, including “cashing out” all or most of the existing 126 welfare programs that currently cost taxpayers $1 trillion a year. The UBI would make most of these programs unnecessary.

America’s problem is a lack of political will to confront the special interest groups that currently feed at the government trough, not a lack of realistic funding mechanisms.

A girl can dream….

We Should See Clearly Now…

In the wake of the 2016 election, I was criticized by some very nice people for claiming that Trump’s win was all about racism. Those nice people–and they are nice people, I’m not being sarcastic here–were shocked that I would tar all Trump voters with such an accusation. But as my youngest son pointed out, Trump’s own racism was so obvious that the best thing you could say about his voters was that they didn’t find his bigotry disqualifying.

Conclusions of academic researchers following that election have been unambiguous. “Racial resentment” predicted support for Trump.

After the insurrection at the Capitol, Americans simply cannot pretend that the profound divisions in this country are about anything but White Christian supremacy. We are finally seeing recognition of that fact from previously circumspect sources.

Here’s what the staid number-crunchers at 538.com wrote:

Much will be said about the fact that these actions threaten the core of our democracy and undermine the rule of law. Commentators and political observers will rightly note that these actions are the result of disinformation and heightened political polarization in the United States. And there will be no shortage of debate and discussion about the role Trump played in giving rise to this kind of extreme behavior. As we have these discussions, however, we must take care to appreciate that this is not just about folks being angry about the outcome of one election. Nor should we believe for one second that this is a simple manifestation of the president’s lies about the integrity of his defeat. This is, like so much of American politics, about race, racism and white Americans’ stubborn commitment to white dominance, no matter the cost or the consequence. (emphasis mine)

How about Darren Walker, President of the Ford Foundation?

I have long believed that inequality is the greatest threat to justice—and, the corollary, that white supremacy is the greatest threat to democracy. But what has become clear during recent weeks—and all the more apparent yesterday—is that the converse is also true: Democracy is the greatest threat to white supremacy.

This explains the white backlash that has plagued American politics from its beginnings and throughout these last four years. It also casts a light on what we witnessed yesterday: A failed coup—an insurrection at the United States Capitol.

In his statement, Walker made a point that has been made repeatedly in the aftermath of that assault: If these had been protestors for racial justice–no matter how peaceful–rather than a violent and angry mob exhibiting “white pride” and grievance, the use of force by law enforcement would have been very different.

Walker is correct: democracy–the equal voice of all citizens expressed through the ballot box–threatens White supremacy. That’s why, as demographic change accelerates, the GOP– aka the new Confederacy– has frantically worked to suppress minority votes, why it has opposed vote-by-mail and other efforts to facilitate participation in democratic decision-making.

Like almost everyone I know, I’ve been glued to reporting and commentary that has tried to make sense of what we saw. One of the most insightful was an article from Psychology Today that explained epistemic knowing.

After noting that “claims that the 2020 U.S. presidential election was illegitimate are widespread in Trump’s party,” despite overwhelming evidence to the contrary, the author focused on why people who should know better nevertheless choose to believe those claims.

He noted that he’d recently re-read To Kill a Mockingbird. As he reminds us, the book is about a black man being tried for rape in a Southern town. It becomes obvious during the trial that the accused didn’t do it–in fact, the evidence of his innocence is overwhelming. Yet the jury convicts him.

 The jury convicts Robinson of rape because at the heart of the case is whose word is believed: that of a white woman or that of a black man. In Lee’s Maycomb, it is important to the population that the word of the white woman be upheld as a more respected source of knowledge, even when this goes against the facts. What was at stake was not just this one particular case, but a larger principle: whose claims need to be respected….

When interpretations differ, people need to understand who to trust. They may choose to only nominate certain people, or certain kinds of people, to be worthy of giving interpretations worth trusting.

This is an illustration of “epistemic entitlement”–the choice of who is entitled to occupy the role of “Knower.” Who gets to say what’s true and false, what’s real and fake? 

Far too many Americans choose to believe White people over facts, evidence, and their “lying eyes.”