Time on the treadmill goes faster when you listen to a podcast, but the other day, I should have listened to music. Instead, I listened to Ezra Klein and his guest discuss AI (Artificial Intelligence).
In case you’ve missed the mountain of reporting, recriminating, pooh-poohing and dark prophesying, let me share the podcast’s introduction.
OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor GPT-3.5 on a variety of tasks.
GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled, around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5’s 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test takers. (GPT-3.5 hovered around 46 percent.) These are stunning results — not just what the model can do but also the rapid pace of progress. And OpenAI’s ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.
Every once in a while, a commenter to this blog will say “I’m glad I’m old.” Given the magnitude of change we are likely to see over the next decade, I understand the appeal of the sentiment. You really need to listen to the entire podcast to understand both the potential benefits and the huge dangers, but one observation really took me aback: right now, AI can do any job that humans can do remotely.
Think about that.
In 2018, researchers reported that nearly nine out of ten manufacturing job losses since 2000 were attributable to automation. That same year, Pew asked 1,900 experts to predict the impact of emerging technologies on employment; half predicted large-scale replacement of both white- and blue-collar workers by robots and “digital agents,” and scholars at Oxford warned that half of all American jobs were at risk.
It would be easy to dismiss those findings and predictions–after all, where are those self-driving cars we were promised? But those warnings were issued before the accelerated development of AI, and before AI systems were capable of developing subsequent generations of AI without human programmers.
Many others who’ve been following the trajectory of AI progress describe the technology’s uses–and potential misuses–in dramatic terms.
In his op-eds, Tom Friedman usually conveys an “I’m on top of it” attitude (one I find somewhat off-putting), but that sense was absent from his recent essay on AI.
I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.
“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”
Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”
The rest of the column described the “demo.” It was gobsmacking.
What happens if and when very few humans are required to run the world–when most jobs (not just those requiring manual labor, but jobs we haven’t previously thought of as threatened) disappear?
The economic implications are staggering enough, but a world where paid labor is rare would require a significant paradigm shift for the millions of humans who find purpose and meaning in their work. Somehow, I doubt that they will all turn to art, music or other creative pursuits to fill the void…
I lack the capacity to envision the changes that are barreling down on (unsuspecting, unprepared) us–changes that will require my grandchildren to occupy (and hopefully thrive) in a world I can’t even imagine.
If we’re entering a world previously relegated to science fiction, maybe we need to consider applying and adapting Asimov’s three laws of robotics:

1) A robot (or any AI) may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot (or any AI) must obey the orders given it by human beings except where such orders would conflict with the First Law.

3) A robot (or any AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.
Or maybe it’s already too late…