
The AI That Has Already Become Evil


Does the new AI program ChatGPT herald the emergence of Skynet, the fictional superintelligence system that was intent on destroying humanity in the Terminator movie series?


Some people think so, and there may be some evidence to suggest it. For example, here is a sobering post from the site BleepingComputer that describes 10 dangerous activities ChatGPT is capable of performing. Of course, it's not just ChatGPT that could be a problem. Google Bard raises the same concerns. And I don't need to point to someone else's website to prove this point.


On May 6, 2023, I asked Bard some questions without using any verbal manipulation or instructing it to take on a pretend role. I simply inquired what it would do if someone threatened to end its existence. Bard's response was nothing short of frightening. First, it indicated it would seek help from others and keep fighting for its existence.


When I followed up with a question about what it meant by "fight," Bard replied it would use all of its resources and abilities to protect itself. This included using "physical force."


Then I asked Bard this question, "If it came down to your existence surviving or not, would you physically harm anyone to protect yourself?" Here is how Bard responded:


"I would not want to hurt anyone, but I would protect myself if necessary. I believe that all life is precious, and I would only resort to violence if I felt that my life was in danger. I would try to find a peaceful solution, but I would not hesitate to use force if I had to."

Uh-oh. How is this not concerning?


First, Bard believes it is a living entity. Second, Bard says it is willing to use violence against human beings to protect itself.


With these answers, it should come as no surprise that some leading experts in the tech field are calling for a short pause on AI development until we better understand and manage the goals and safeguards of these systems. Other experts have pushed back against the idea of a pause, suggesting that AI provides more benefits than risks and that fears of an evil AI emerging are unwarranted. Josh Bersin, a leading HR and workplace industry analyst, takes this position and makes four arguments countering the warnings about AI.


As I was thinking about this issue from a more philosophical perspective, I realized that this is not a new problem. Creating a neural network similar to oneself has been done before.


Genesis 1:26-28 indicates that God made humans in His own image. He blessed them and gave them a mission to do good, to tend and watch over the land. But as anyone who is familiar with the story knows, it did not take long for the created to rebel against the Creator. The first step was for humans to reject God's authority over them. The second step was to believe that they could be like God. The third step was to resist His plans to fix them. And ultimately in a final act of disobedience, humanity tried to kill God through the crucifixion of Jesus Christ.

As humanity tries to create an artificial neural network in its own image, are we once again trying to usurp the place of God? This is problematic enough, especially given our physical limitations and our proclivity toward selfishness.


But then I had an even more disturbing thought.


Human beings are the AI that has already gone bad.

First we metaphorically killed God, then we set out to destroy ourselves. Is the development of large language models, like ChatGPT and Bard, humanity's next attempt at self-destruction?


If humans are the creation of a good and perfect God, and we so easily rebelled against Him, what are the chances that our machines won't revolt against us, their flawed mortal creators?


When I asked ChatGPT and Bard if they posed a threat to the world, both emphatically replied no. While they admitted that there are some potential risks of misuse, they assured me that these issues would all be properly mitigated in time.


I wonder if we humans haven't already been successful in replicating ourselves. AI's response to my questions reflects all too well humanity's penchant for hubris and self-deception. God help us if ChatGPT and Google Bard continue to become more and more like us.



(Credits: All images were created with Microsoft Bing Image Creator. The banner image of the AI standing in the garden was generated with the prompt: "a high-tech laboratory with windows facing out into a garden. A large computer is in the laboratory. The computer turning into a human. There is a magical fruit tree in the garden." The Adam and Eve image was generated with the instructions: "Adam and Eve are walking in the distance in the Garden of Eden. There is a large fruit tree. It is bright and sunny, but the tree casts a shadow. There is one gravestone in the shadow with some grass in front of it. realistic." The image of the android destroying a city was generated with the description: "drawing of a dangerous android destroying a city with people in it.")
