
We’re in Sci-Fi Territory…

Time on the treadmill goes faster when you listen to a podcast, but the other day, I should have listened to music. Instead, I listened to Ezra Klein and his guest discuss AI (Artificial Intelligence).

In case you’ve missed the mountain of reporting, recriminating, pooh-poohing and dark prophesying, let me share the podcast’s introduction.

OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor GPT-3.5 on a variety of tasks.

GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled, scoring around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5’s 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test takers. (GPT-3.5 hovered around 46 percent.) These are stunning results — not just what the model can do but also the rapid pace of progress. And OpenAI’s ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.

Every once in a while, a commenter to this blog will say “I’m glad I’m old.” Given the magnitude of the changes we are likely to see over the next decade, I understand the appeal of the sentiment. You really need to listen to the entire podcast to understand both the potential benefits and the huge dangers, but the observation that really took me aback was the claim that AI can already do any job that humans can do remotely.

Think about that.

In 2018, researchers reported that nearly nine out of ten manufacturing job losses since 2000 were attributable to automation. Pew, meanwhile, had asked nearly 1,900 experts to predict the impact of emerging technologies on employment; half predicted large-scale replacement of both white- and blue-collar workers by robots and “digital agents,” and scholars at Oxford warned that half of all American jobs were at risk.

It would be easy to dismiss those findings and predictions–after all, where are those self-driving cars we were promised? But those warnings were issued before the accelerated development of AI, and before AI systems could help develop subsequent generations of AI without human programmers.

Many others who’ve been following the trajectory of AI progress describe the technology’s uses–and potential misuses–in dramatic terms.

In his op-eds, Tom Friedman usually conveys an “I’m on top of it” attitude (one I find somewhat off-putting), but that sense was absent from his recent essay on AI. 

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

The rest of the column described the “demo.” It was gobsmacking.

What happens if and when very few humans are required to run the world–when most jobs (not just those requiring manual labor, but jobs we haven’t previously thought of as threatened) disappear?

The economic implications are staggering enough, but a world where paid labor is rare would require a significant paradigm shift for the millions of humans who find purpose and meaning in their work. Somehow, I doubt that they will all turn to art, music or other creative pursuits to fill the void…

I lack the capacity to envision the changes that are barreling down on (unsuspecting, unprepared) us–changes that will require my grandchildren to occupy (and, I hope, thrive in) a world I can’t even imagine.

If we’re entering a world previously relegated to science fiction, maybe we need to consider applying and adapting Asimov’s three laws of robotics:

1) A robot (or any AI) may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot (or any AI) must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot (or any AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.
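Purely as a thought experiment, the laws can be read as a strict priority ordering: the First Law outranks the Second, which outranks the Third. Here’s a toy sketch of that ordering in Python–the Action type and its fields are hypothetical, and nothing this simple is a real safety mechanism, if only because “harm” resists being reduced to a boolean:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    """A hypothetical description of a candidate action (illustrative only)."""
    name: str
    harms_human: bool     # would carrying it out injure a human (First Law)?
    disobeys_order: bool  # would it violate a human's order (Second Law)?
    destroys_self: bool   # would it sacrifice the machine itself (Third Law)?

def choose(actions: list[Action]) -> Action:
    """Pick the action that best satisfies the three laws.

    Python compares tuples element by element, so this sort key enforces
    the strict priority: First Law outranks Second, Second outranks Third.
    """
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.destroys_self))

# The machine disobeys an order rather than injure a human,
# and sacrifices itself rather than disobey.
candidates = [
    Action("follow the order into the crowd", harms_human=True, disobeys_order=False, destroys_self=False),
    Action("refuse the order", harms_human=False, disobeys_order=True, destroys_self=False),
    Action("absorb the damage itself", harms_human=False, disobeys_order=False, destroys_self=True),
]
print(choose(candidates).name)  # -> absorb the damage itself
```

Notice that the choice is lexicographic: self-sacrifice beats disobedience, and disobedience beats injuring a human–which is, of course, exactly the kind of dilemma Asimov’s stories kept probing.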

Or maybe it’s already too late…

Messing With Our Minds

As if the websites peddling conspiracy theories and political propaganda weren’t enough, we now have to contend with “deepfakes.” Deepfakes, according to the Brookings Institution, are

videos that have been constructed to make a person appear to say or do something that they never said or did. With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues.

Deepfakes can be used in ways that are highly disturbing. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election. Deepfakes are also being used to place people in pornographic videos that they in fact had no part in filming.

Because they are so realistic, deepfakes can scramble our understanding of truth in multiple ways. By exploiting our inclination to trust the reliability of evidence that we see with our own eyes, they can turn fiction into apparent fact. And, as we become more attuned to the existence of deepfakes, there is also a subsequent, corollary effect: they undermine our trust in all videos, including those that are genuine. Truth itself becomes elusive, because we can no longer be sure of what is real and what is not.

The linked article notes that researchers are trying to devise technologies to detect deepfakes, but until there are apps or other tools that can identify these very sophisticated forgeries, we are left with “legal remedies and increased awareness,” neither of which is very satisfactory.
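For readers curious about what those detection technologies look like: one common research approach treats a video as a series of still images–sample frames, score each one for manipulation artifacts, and average the scores. The sketch below assumes a hypothetical score_frame() classifier standing in for a trained neural network; it illustrates the shape of the approach, not a working forgery detector:

```python
import cv2  # OpenCV, used here only to read frames from a video file

def score_frame(frame) -> float:
    """Hypothetical stand-in for a trained classifier that returns the
    probability (0.0-1.0) that a frame shows manipulation artifacts.
    A real detector would be a neural network trained on known fakes."""
    raise NotImplementedError("plug a trained model in here")

def video_fake_probability(path: str, every_nth: int = 30) -> float:
    """Sample every Nth frame of the video, score each, and average."""
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video (or unreadable file)
            break
        if index % every_nth == 0:
            scores.append(score_frame(frame))
        index += 1
    capture.release()
    if not scores:
        raise ValueError(f"no frames could be read from {path}")
    return sum(scores) / len(scores)
```

Averaging per-frame scores is the crudest possible aggregation; published detectors also look for temporal inconsistencies between frames, which single-image scoring misses entirely–one reason the cat-and-mouse game so far favors the forgers.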

We already inhabit an information environment that has done more damage to social cohesion than previous efforts to divide and mislead. Thanks to the ubiquity of the Internet and social media (and the demise of media that can genuinely be considered “mass”), we are all free to indulge our confirmation biases–free to engage in what a colleague dubs “motivated reasoning.” It has become harder and harder to separate truth from fiction, moderate spin from outright propaganda.

One result is that thoughtful people–people who want to be factually accurate and intellectually honest–are increasingly unsure of what they can believe.

What makes this new fakery especially dangerous is that, as the linked article notes, most of us do think that “seeing is believing.” We are far more apt to accept visual evidence than other forms of information. There are already plenty of conspiracy sites that offer altered photographic “evidence”–of the aliens who landed at Roswell, of purportedly criminal behavior by public figures, etc. Now people intent on deception have the ability to make those alterations virtually impossible to detect.

Even if technology is developed that can detect fakery, will “motivated” reasoners rely on it?

Will people be more likely to believe a deepfake or a detection algorithm that flags the video as fabricated? And what should people believe when different detection algorithms—or different people—render conflicting verdicts regarding whether a video is genuine?

We are truly entering a new and unsettling “hall of mirrors” version of reality.