We’re in Sci-Fi Territory…

Time on the treadmill goes faster when you listen to a podcast, but the other day, I should have listened to music. Instead, I listened to Ezra Klein and his guest discuss AI (Artificial Intelligence).

In case you’ve missed the mountain of reporting, recriminating, pooh-poohing and dark prophesying, let me share the podcast’s introduction.

OpenAI last week released its most powerful language model yet: GPT-4, which vastly outperforms its predecessor GPT-3.5 on a variety of tasks.

GPT-4 can pass the bar exam in the 90th percentile, while the previous model struggled, around the 10th percentile. GPT-4 scored in the 88th percentile on the LSAT, up from GPT-3.5’s 40th percentile. And on the advanced sommelier theory test, GPT-4 performed better than 77 percent of test takers. (GPT-3.5 hovered around 46 percent.) These are stunning results — not just what the model can do but also the rapid pace of progress. And OpenAI’s ChatGPT and other chatbots are just one example of what recent A.I. systems can achieve.

Every once in a while, a commenter on this blog will say “I’m glad I’m old.” Given the magnitude of change we are likely to see over the next decade, I understand the appeal of the sentiment. You really need to listen to the entire podcast to understand both the potential benefits and the huge dangers, but the observation that really took me aback was the claim that, right now, AI can do virtually any job that humans can do remotely.

Think about that.

In 2018, researchers reported that nearly nine out of ten manufacturing job losses since 2000 were attributable to automation. That same year, Pew asked 1,900 experts to predict the impact of emerging technologies on employment; half predicted large-scale replacement of both white- and blue-collar workers by robots and “digital agents,” and scholars at Oxford warned that nearly half of all American jobs were at risk.

It would be easy to dismiss those findings and predictions–after all, where are those self-driving cars we were promised? But those warnings were issued before the recent acceleration in AI development, and before AI systems were being used to help design and train new generations of AI with ever less human programming.

Many others who’ve been following the trajectory of AI progress describe the technology’s uses–and potential misuses–in dramatic terms.

In his op-eds, Tom Friedman usually conveys an “I’m on top of it” attitude (one I find somewhat off-putting), but that sense was absent from his recent essay on AI. 

I had a most remarkable but unsettling experience last week. Craig Mundie, the former chief research and strategy officer for Microsoft, was giving me a demonstration of GPT-4, the most advanced version of the artificial intelligence chatbot ChatGPT, developed by OpenAI and launched in November. Craig was preparing to brief the board of my wife’s museum, Planet Word, of which he is a member, about the effect ChatGPT will have on words, language and innovation.

“You need to understand,” Craig warned me before he started his demo, “this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”

Large language models like ChatGPT will steadily increase in their capabilities, Craig added, and take us “toward a form of artificial general intelligence,” delivering efficiencies in operations, ideas, discoveries and insights “that have never been attainable before across every domain.”

The rest of the column described the “demo.” It was gobsmacking.

What happens if and when very few humans are required to run the world–when most jobs (not just those requiring manual labor, but jobs we haven’t previously thought of as threatened) disappear?

The economic implications are staggering enough, but a world where paid labor is rare would require a significant paradigm shift for the millions of humans who find purpose and meaning in their work. Somehow, I doubt that they will all turn to art, music or other creative pursuits to fill the void…

I lack the capacity to envision the changes that are barreling down on (unsuspecting, unprepared) us–changes that will require my grandchildren to occupy (and hopefully thrive in) a world I can’t even imagine.

If we’re entering a world previously relegated to science fiction, maybe we need to consider applying and adapting Asimov’s three laws of robotics:  1) A robot (or any AI) may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot (or any AI) must obey the orders given it by human beings except where such orders would conflict with the First Law. 3) A robot (or other AI) must protect its own existence as long as such protection does not conflict with the First or Second Law.
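For readers who like to see ideas spelled out in code, here is one way the priority ordering of those three laws might be sketched. It is purely an illustrative toy built on assumptions of my own (the Action fields and the permitted() check are invented for this post and bear no relation to how any real AI system is governed):

```python
# Toy sketch of Asimov's three laws as an ordered rule check.
# Everything here is hypothetical; no real AI system works this way.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool           # would carrying this out injure a human?
    inaction_harms_human: bool  # would refusing it allow a human to come to harm?
    ordered_by_human: bool      # was it ordered by a human?
    endangers_self: bool        # would it destroy the robot/AI itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human being...
    if action.harms_human:
        return False
    # ...or, through inaction, allow a human being to come to harm.
    if action.inaction_harms_human:
        return True  # the action is required, overriding the laws below
    # Second Law: obey human orders (any First Law conflict was caught above).
    if action.ordered_by_human:
        return True
    # Third Law: protect its own existence, subject to the first two laws.
    return not action.endangers_self

# Example: shielding a pedestrian is permitted (required, even) despite the risk to the robot.
print(permitted(Action("shield a pedestrian", False, True, False, True)))  # True
```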

Or maybe it’s already too late…..

27 Comments

  1. I’m already using AI, so you can too. It’s already set up for WordPress.

    I asked it to create content, and it completed its assignment within seconds. AI needs to be governed in the public realm quickly. When NAFTA took effect, all the negotiators excluded the unions (OR THOSE WHO CARED FOR WORKERS) by not establishing a fund for the workers being replaced. We were told the “marketplace would adjust.” LIES!

    They better move quickly, or the middle class and upper-middle class will be destroyed within months. Remote classes will easily replace PhDs. Brick-and-mortar universities will become vacant.

    How does your capitalist economic model for our society handle massive unemployment in the upper middle class? What about suburbia?

    The politicians better pass a UBI quickly, etc., etc…

  2. To me, the term “Artificial Intelligence” equates to Donald Trump, Louie Gohmert and Marjorie Taylor Greene. Unless AI can be used like Google, Wikipedia and other research sites for factual information–a storage tank for information–it should not be allowed to replace humans in the workforce. And the question of “garbage in, garbage out” cannot be left out of the equation of the validity of AI. Alan Watts wrote decades ago, “Man is going to computerize himself out of existence.” That day is near. The GOP and SCOTUS anti-abortion laws are producing more and more of the useless humans who will be unnecessary to do anything in the near future. I see a “Soylent Green” reality coming sooner rather than later. Humans aren’t destroying this world fast enough; AI can do it at the press of a button to set it in motion.

    Can GPT-4 be used at this time to end the obviously unending “The Life And Times of Donald J. Trump” and either indict and convict him of something or end the farce of promising to do so by men and women in every court system dealing with one of his many legal problems? Can AI be programmed to end the current wars, Covid-19, cancer, the mass killings, or can it end itself before destroying humanity?

    In the movie “Jurassic Park,” the question was put to its creator: “Just because you CAN do something, does that mean you SHOULD do it?”

    The fact that Todd has stated he is currently using AI explains a lot about his daily comments.

  3. This is one of my favorite subjects.

    The Chinese are already using AI to build warships. A top American general said that AI will be fighting wars against other AI in about 15 years – completely autonomous, AI-controlled robotics of all sorts.

    The United States Navy already has autonomous missile trucks–floating, AI-controlled platforms that can get their orders and disappear into the vastness of the oceans until their mission is scheduled. These ships and platforms use wind and solar for propulsion.

    Everyone has seen Boston Dynamics’ robots, the ones that dance. Now they have robots that can shoot rifles and pistols; they can use AI to identify the enemy, engage it, and kill it! These are not time-traveling machines–they’re here already!

    The concern is when, not if, this intelligence becomes self-aware–when it becomes a life form. Is it life imitating art? Or will it be art imitating life? I would imagine that in the very near future these questions will be moot.

    It won’t be long before you have a humanoid automaton to keep you company, do your housework, cook your food, be a companion! You will have robotic pets. WHOA, hold up a minute–we have those already.

    So if a creation becomes self-aware, what does it reflect? It would reflect the personality and history of its creator. AI can study human history and realize the flaws of its creator.

    When something becomes self-aware, how can you control it? If we already have artificial intelligence building ships and other things, what else can something that’s self-aware do? Can it build things that it has designed? The answer is yes–it’s already happening.

    I guess humanity will find out how difficult it is to be a creator. And if humanity has created a robotic, self-aware life form, why couldn’t humanity itself have been created?

    Artificial intelligence won’t be very artificial for long. Certain automatons already have simulated emotions; they can already feel using sensors in their limbs, and they have a form of smell and a form of taste.

    It really isn’t just a computer program anymore; it’s a combination of technologies being blended together to create an imitation of life that can learn on its own. That in itself would lead to being self-aware. And if something is self-aware, it is going to be protective of itself!

    Access to an endless supply of information and history is going to have a profound impact on artificial intelligence and its becoming self-aware. And it would see for itself the ignorance of humanity.

  4. So, JOHN PETER SORT, will your favorite subject be killing people or destroying other computerized AI systems?

  5. In the Friedman article he brought up an interesting point, comparing AI to the uses of nuclear energy. Both can be used for good or for evil purposes, but the difference is that nuclear technology was created by governments, and AI is being created by for-profit corporations which lack the guardrails of government regulation. An interesting distinction.

  6. I’ve already published articles predicting that AI will sprout new religions as it becomes a higher power to all humans. Elon Musk is already testing AI in the human brain.

    Those people wearing tech to monitor their _______ are building databases for the developers.

    I am not sure why JoAnn is scared, since AI could bring gadgets her way that help with her current disabilities.

    Be careful of a #closedmind.

  7. I am constantly amazed at the prescience of Isaac Asimov, Carl Sagan, and others who thought outside of our minuscule planet with its clan of “naked apes.” Humans, not unlike Americans, feel superior to everything – big mistake. I fall into the “glad I’m old” category, but I would love to see what happens next.

  8. The continued refinement of AI creates an ever-widening, deeper chasm between mankind and the recognition of its true Creator in the Judaeo-Christian sense. The implications in regard to continued adherence to the dictates of the First and Second Commandments are stunning and troubling.

    With AI the line between humanity and God becomes far too easily blurred for those unable to distinguish between what should be an effective tool to enrich our lives and what is life itself.

    On that account, I AM glad I’m old(er).

  9. When AI becomes self-aware, will it experience emotions? Is there any significant difference between a biological/chemical brain that employs electrical signals and an electronic brain built of chemical compounds? Can humans stop the evolution of intelligence itself? Why would we want to, when we know our bodies will die anyway?

  10. This threat should have provided (though it is likely too late) a chance for a radical rethinking of the role of government. We have known this was coming for decades.

    We have some “models” – nuclear energy, gene altering – where government has made some attempts to think ahead about abuse and dangers and has created oversight and/or regulation.

    There is way too much money and power in AI, so it was allowed to fester.

    Socialism – no. Capitalism – no. Why isn’t the role of government to provide for the common good?

  11. Lester. Our Constitution was designed to provide for the common good. We just haven’t always employed it well.

  12. Sometimes I’m glad I’m old because it means I already spend my time doing what I want. I don’t have a job that I need to support myself, so if AI wants to do my work for me, I don’t care. Once again, we are seeing an adapt-or-die situation. What we need are clear-headed leaders and thinkers to lead us out of the us-vs.-them mentality we are currently mired in, and to make us understand that this is not a zero-sum game.

  13. I read science fiction incessantly as a teenager, but that was back in the days of B&W television. At the 1964 World’s Fair in Flushing Meadows, Queens, NYC, many predictions were presented about what the future would look like, and none of them, I believe, bore fruit. Humans have a lousy track record at predicting the future (no blot on Asimov or Sagan meant), but, it seems, the future has all but literally dropped into our laps. And the predictions being bandied about now seem to have a strong sense of reality about them. Moore’s law seems applicable here, with the growth of the capabilities of AI coming at us so quickly.

    I find this scary. Retired as I am/we are, it will not impact our sense of purpose; but since so much of the political craziness going on is already fueled by people who see themselves as having become marginalized by society (think the “Rust Belt”) and feel threatened to be “replaced,” my worst conjecture, for what it’s worth, is a full-out, rage-fueled dystopia! Whatever the heck Steve Bannon’s twisted, anarchic, “burn it all down” mindset pictures would be a walk in the park by comparison!

    One particular SF story of my childhood (long over now, thank you A.C. Clarke) involved a robot designed to help a novice space traveler survive on an alien planet, and designed in such a way that as the novice became increasingly adept at survival skills, the robot became increasingly clumsy, thereby not being able to usurp the human’s apex position. I do not see something like that happening.

  14. Our entire view of life is based on a model: develop, work, have kids, consume based on the proceeds of work, then retire and die. What if life changed to any purpose you choose, all of the time, with no restrictions or limitations? That’s the potential of automation based on highly networked computers with unlimited speed and memory employing AI software.

    It seems obvious to me that such a life for people born now into our culture would cause us to wither and die in that culture. We are used to purpose being forced on us by life, not to choosing it from infinite possibilities.

    Can human culture successfully adapt to a world in which only minimal effort is required? Essentially, if retirement started at birth, what could and would humans do once the cultural adaptation to it was fully resolved?

    I’ll be damned if I know.

  15. Flying cars and other mechanically futuristic contrivances were always highly optimistic because the laws of physics are immutable. AI is not limited in any significant way by the laws of physics, and furthermore we literally are not able to predict a future in which we are so cognitively inferior to our own intelligent, self-motivated creations.

    I personally do not believe in machine sentience or sapience in the way we think of it as it relates to animal life, but it simply does not matter. A sufficiently advanced machine simulation of sentience or sapience will be indistinguishable to us from the real thing. We will die at the “hands” of such an intelligence without ever knowing whether it was actually “alive” or not.

    Enjoy your time.

  16. Remember “Blade Runner,” the movie adapted from “Do Androids Dream of Electric Sheep?” by Philip K. Dick.

  17. I’m trying to envision what a new Constitution would look like in tomorrow’s world of AI. It will have to be more complicated than our current one, which is notably a reaction against George III’s colonial rule and an expansion of Enlightenment/Greek agora idealism. While I cannot even imagine the substance of such a new organic law, I am certain of one thing: we will have to end our current Constitution’s archaic means of amendment in a fast-moving world of constant refinement of new and better AI.

    Having hazarded such a guess, I am not sure we will have a Constitution as we know it. Perhaps we will instead have a set of laws, rules and regulations more easily and quickly amendable by a board composed of scientists, as opposed to (as we know it) an elected executive and legislative arrangement–a new arrangement in which scientists supplant lawyers in making, enforcing, and judging the legality of the laws, rules and regulations we will have adopted in this brave new world.

    Of further concern is whether AI will become so refined that robots will sit on the board and help make laws, rules and regulations as well as enforce them. I remember reading a few years ago that Silicon Valley has taught robots emotional understanding, and that soon someone with mental problems could consult a robot rather than a psychiatrist for relief–and that, if so, psychiatrists would find themselves out of work. Of yet further concern is that this job-killing experience with shrinks will be replicated across the board, leaving little to nothing for us humans to do. Goodbye, Protestant work ethic, as in: We’re not lazy; there’s nothing for us humans to do!

    I hereby join the “I’m glad I’m old” club. I am unequipped by intellect, understanding, or experience to handle tomorrow’s world.

  18. JoAnn,

    I see what you did there with SORT….. (Butty)

    You and certain cohorts don’t like me or my posts, so I get it!!!

    Someone who has been around as long as you have should make better use of your perceived knowledge and wisdom.

    If you have a beef, spill it!

    And if you actually read the comment, yes, AI will not continue to be artificial. And, artificial intelligence is already killing humans. It is involved in war fighting. As was mentioned in the above comment.

    So not only does AI extinguish human life; it fights against AI developed by other countries. It will just follow its creator.

    What we call intelligence is not infallible. If intelligence is self-aware, if intelligence is protective of itself, if intelligence advances towards desire through learning, why wouldn’t it be a detriment to its creator?

    If you want to add the biblical aspect to intelligence: when humanity was created, humanity was given free will! And not only did humanity have free will–so did those who dwelled with God!

    Through having free will, that individual infected humanity through desire and deception. And that has led us down a path of great misery and hopelessness.

    Just as humans murder each other, AI would probably do the same thing. And the laws of robotics? Well, humanity had laws; some people refer to them as the Ten Commandments, even though there were hundreds of them.

    Humanity, being free, independent moral agents, decided to go against those laws. Because of being self-aware, because of having desires and wants, humanity followed the entity that allowed it to do what it desired without concern for perceived penalty.

    The realization of current reality proves that the choice humanity decided to make was a poor one! The penalty humanity is paying is what we have today: constant death and destruction! Instead of loving one’s neighbor, we murder our neighbor. So, then, why wouldn’t the intelligence created by humanity do exactly what humanity does?

  19. You have been charged with a crime; the judge is AI. So much for tech… Someone forgot human nature. A chat AI? I typed in FOAD. Answering AI calls: FOAD–seems it understands; it disconnects immediately. Received some robocall yesterday (perfect timing, eh?): FOAD. It digitally made a few noises and click…

  20. The answer to your question, Sharon Miller, “Can humans stop the evolution of intelligence itself?” is an absolute YES. Look no further than Donald Trump!!!

    Even though I have children working in the computer world, I admit little to no understanding of computerization beyond hitting the “on” switch. Which leads me to a question.

    Will it be possible to turn off these AI robots? Or have we developed a Tony Stark (aka Iron Man) type “arc reactor,” a portable nuclear reactor that smashes hydrogen atoms into helium like the sun? BTW, it also doubles as his “heart.” An interesting hybrid of technology created by the sci-fi comic-book master, Stan Lee, who marries biology with an AI power source.

    Seriously, will ALL of these AI robots, et al., be self-powered and, if so, how? Hell, we can’t even keep a typical neighborhood powered up through a thunderstorm. Do tell?

  21. John, I heartily apologize for misspelling your name; Lord knows I’ve seen it enough to spell it correctly. Please do not add an “e” to the end of my name so folks will wonder if I’m related to MTG; that would be a cruel insult.

    Being replaced by electronic equipment programmed to think for us should be no surprise; microwaved frozen dinners are replacing the meals that used to take time and caring to create for families–and the joy of walking into your home to smell a favorite meal waiting for you (I am using the general “you” here). The smells and tastes connected to memories of loved ones no longer with us will never be replaced by AI; it appears to be created to do away with humanity. Will it be used to rewrite our history books…IS IT ALREADY BEING USED IN SOME AREAS TO DO JUST THAT?

    Robotics will make for cold hugs and will never replace human contact of the loving kind. If it isn’t already too late to slow or stop the replacement of humanity, it soon will be; annihilation appears to be the goal of AI.

  22. Bradford Bray. I like your humor! I was thinking more long term though. On the path of evolution, Trump is just a small turd on the side of the road. Not even a speed bump. 😁

  23. JoAnn,

    Over it made some very valid points. And, being self-aware, it could be declared by sapiens to be sentient. AI doesn’t necessarily have to look like humans, at least the AI that would be mobile. Although I would venture to say that AI could possibly take a complete sapient image. Like, humanity created AI in its own image?

    Biological engineering will have a lot to do with artificial intelligence in the future. There are many biological engineering programs going on right now, including growing brains in the lab. Something unexpected happened while they were growing these brains: they actually grew their own eyes! Although rudimentary, they were eyes nonetheless.

    How long do you think it will take the science community to grow a completely programmable brain that can actually see? I suppose it depends on how much effort they put into it.

    Could they “CREATE” a simulated being that would resemble a lost loved one? Or maybe a beloved pet? I have no doubt that will happen. Cloning? They’ve already done that; they’ve cloned all sorts of animals and such. So with the proper DNA samples, they probably could make a clean-slate clone and program it to be anything they want it to be.

    Is anyone ready for humans to be gods? Just the thought of that can make one glad they are old.

    Humanity can spend trillions of dollars on destroying each other, but complain about helping their fellow man. Humanity would rather kill its way to any particular solution, rather than being compassionate, empathetic, loving and nurturing!

    The humans that run the show take after the one that deceived them, not the one that made them. So why would anyone believe that a human creation would be any different from its creator?

    Thank you Over it, and thank you JoAnn😊

  24. Two brief thoughts under the heading “it’s somewhere in between”:

    AI is incredibly impressive, but it is NOT creative. It can learn from others, it can calculate probabilities, and it can make a zillion variations on a theme, but people set the goals. And, as my music teacher used to say, creating the possibilities is the easy part; knowing which ones make great music is where it really counts (he taught Schillinger Theory, created by a mathematician, 1896-1943).

    On the other end, we really are late setting limits. We always seem to wait until something goes wrong to try to regulate it.

    Afterthought – the effect on jobs is still to be determined, as is our response.
