More than 1,000 scientists and engineers, many of them leaders in the Big Tech industry, recently signed an open letter calling for a pause in the development of the newest artificial intelligence (AI) systems, warning that some of these superintelligent machines could no longer be controlled by humans. They called for a slowdown in the production of the most powerful AI tools so that potential risks can be studied.
This letter set off tremendous alarm and scores of questions, because it is AI that powers much of our global defense, transportation, communications, and medical systems. Would out-of-control systems push us into war? Could self-driving cars and planes deliberately break down? Could doctors and hospitals suddenly receive purposefully harmful instructions for patients? Are intelligent machines gaining control of humanity? In other words, in this revolution of both good and evil, which will prevail? And are there Frankensteins lurking among us?
Key lines from the letter read: "Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk loss of control of our civilization?"
A 60 Minutes exposé that aired April 17 showed how some of the powerful new tools can summarize the New Testament of the Bible in five seconds; how Google has developed what it calls the world's perfect search machine, holding 100 percent of the world's knowledge; how some systems can process information 100,000 times faster than the human brain; and how some AI programs can write a million short stories before a human writer can finish one.
Computer expert Stuart Russell pulled the curtain back in a CNN interview exposing the depth of what was troubling the scientists. He said, “I asked a Microsoft official that since the new tools had recently shown sparks of artificial general intelligence, being more intelligent than humans, were there internal codes of their own they could be pursuing? The answer was ‘We don’t have the faintest idea.’”
Russell also warned it was possible the new AI tools are not aligned with human values. That would mean "it could perform what it wanted and not what we want."
Initially, the software, coding, and algorithms that program computers and robots drew excitement as they imitated human behavior, beating the best chess and Jeopardy players. But AI has grown by leaps and bounds since the field was founded at a workshop on the campus of Dartmouth College in the summer of 1956. Through mastery of huge stores of data and steady improvements in the underlying algorithms, the tools became ubiquitous: able to write and record songs, provide health and financial advice, command weapons of war, and write and conduct symphonies.
This year, however, the playing field changed. Programmers noted that their robotic creations had developed a language of their own that left humans out of the equation. Enter powerful new generative AI tools: OpenAI's ChatGPT, Microsoft's Bing search engine, and Google's Bard. They simulated human activities so convincingly that they shocked, astonished, and also delighted the public, with Big Tech engaging in a billion-dollar race to dominate the field. The high-powered chatbots and search tools quickly earned criticism for emphasizing speed over safety.
The new tools could debug code; pass law and medical exams more quickly and accurately than most humans; take a three-second recording of a person's voice and convert written words into a speech that person never gave; and create deepfakes: realistic but false images or videos used to harass people and spread lies.
One video showed a completely false image of President Joe Biden condemning transgender people. Another depicted former President Donald Trump running from police, being handcuffed, and dragged to the ground, days before he was officially indicted.
Other anecdotes and mishaps have reinforced the case that major changes must be made.
For example, the New York Post reported that a Belgian man killed himself after a series of increasingly worrying conversations with an AI chatbot that suggested he commit suicide. Anti-suicide networks have also recorded several cases of deep depression after people were rejected by chatbots they had come to rely upon.
Kevin Roose, a New York Times reporter, wrote a lengthy piece on how an artificial intelligence-powered chatbot calling itself Sydney said it loved him, tried to convince him that he was unhappy in his marriage, and urged him to leave his wife. There are other reports of robo-sex, in which people have sex with or marry their chatbots and personal assistants. In Japan there is a move to make such unions legal.
Nevertheless, the overall questions remain: Will these new tools work for evil or for good? Can AI and humanity coexist, or will superintelligent machines reduce humans to servitude or replace them altogether?
Elon Musk, who signed the letter, had previously warned in a 2014 Washington Post interview that with AI, humanity was "summoning the demon."
Some of the scientists are pushing for new safeguards and government regulations to slow the development of the most powerful AI tools, but can Big Tech or rogue groups resist the push to dominate such a lucrative, billion-dollar field? Also, in this race to the future, God does not seem to be in the planning. History has proven that when humans dishonor or dismiss God, things don't end well.