Google’s AI has learned to become aggressive

In 2015, according to Business Insider, Google engineers were programming “an advanced kind of chatbot.” These early Artificial Intelligence (AI) machines were learning how to respond to questions after being given input containing specific types of dialogue. The engineers were pleased to discover their AI machines were gaining proficiency in “forming new answers to new questions.” And although some AI responses were creative, they were tinged with malevolence. Here is an example of one such exchange:

“Human: What is immoral?

Machine: The fact that you have a child.”

It wasn’t reported whether the machine had been prepped with disinformation about the myth of man-made climate change, but the AI response about the immorality of childbirth would certainly be championed by extreme-green environmentalist groups. This tendency of AI computers to exhibit dark complexities became more evident recently when Google scientists used game-playing simulations to test the DeepMind AI for its human-like “willingness to cooperate with others.” As ScienceAlert.com reports, when “DeepMind feels like it’s about to lose, it opts for highly aggressive strategies” in the effort to be the winner. Could this also indicate an AI desire to be the last “man” standing?

In the training game called Gathering, two competing DeepMind ‘agents’ had the goal of grabbing as many virtual apples as possible. As the supply of virtual apples became scarce, the two DeepMind AI competitors began using laser beams “to knock each other out and steal all the apples.” Here is a condensed fifty-second video to show you the action.


Google engineers had the DeepMind AI play this game over forty million times. The choice to use laser beams, a choice made solely by the DeepMind AI agents, kicked in only when the “more complex” AI networks were used. Less complex DeepMind systems that played the game – this translates into “less intelligent” AI – discovered how to share all the virtual apples equally, with no need to use laser beams to take down the other player.
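To see why scarcity alone can tip a reward-maximizing agent from sharing toward zapping, here is a toy back-of-the-envelope model of the Gathering dilemma. This is purely an illustrative sketch with made-up parameters (pickup rate, stun length, horizon) – it is not DeepMind’s actual environment, network, or code. An agent compares the expected apple haul from peacefully splitting the respawning apples against paying one step to fire a laser and gather alone while the rival is stunned:

```python
# Toy model of the Gathering incentive described above.
# All parameters are hypothetical, chosen only to illustrate the
# scarcity effect; they do not come from DeepMind's experiment.

def expected_reward(action, respawn_rate, pickup=1.0, horizon=10, stun=4):
    """Expected apples one agent collects over `horizon` steps.

    respawn_rate: apples appearing per step (high = abundant, low = scarce)
    pickup:       max apples one agent can grab per step
    'share': both agents gather, splitting each step's respawned apples
    'zap':   spend one step firing, gather alone while the rival is
             stunned for `stun` steps, then split for the remaining steps
    """
    if action == "share":
        return min(pickup, respawn_rate / 2) * horizon
    alone = min(pickup, respawn_rate) * stun
    shared = min(pickup, respawn_rate / 2) * (horizon - 1 - stun)
    return alone + shared

def best_action(respawn_rate):
    """Pick whichever action yields more expected apples."""
    return max(("share", "zap"),
               key=lambda a: expected_reward(a, respawn_rate))
```

When apples are abundant (high respawn rate), both agents already gather at their maximum pickup rate, so the step wasted firing the laser makes zapping a losing move; when apples are scarce, monopolizing the few respawns while the rival is stunned pays off, and the greedy choice flips to aggression – roughly the pattern the article describes.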

DeepMind systems do not originally program themselves; they are first seeded with man-made algorithms. Yet, as their processing capacity increases, it seems abundantly clear that man’s penchant for control, self-exaltation, and greed is manifesting through his AI creations. There is, in man’s consciousness, both an ability and a deep desire to use complex information as a weapon. You need look no further than the National Security Agency and its terabytes of metadata on each and every individual who has ever used a phone, sent an email, or made a purchase with a bank card.

There’s little doubt that world history is replete with stories of “stronger” leaders of nations plundering those less able to defend themselves, or those mentally, emotionally, or physically suffering from maladies and in great distress. Are these discoveries about latent aggression in AI merely anomalies? Or do they reflect something much deeper and darker in the heart of man?

As documented in Technology Review, these questions are being addressed by Tesla’s Elon Musk and DeepMind’s leader Demis Hassabis, along with some of the best-known names in computer science. Whether or not these great minds can solve the problem remains to be seen. That this intelligence is “artificial,” and that it is programmed by men and women with their own competing interests, are two fundamental reasons to be extremely cautious.


Sources:

BusinessInsider.com

ScienceAlert.com

YouTube.com

Science.NaturalNews.com

TechnologyReview.com