Google Bans Its AI Technology for Weapons Work


13 June 2018

Google says it will no longer permit its artificial intelligence, or AI, technology to be used in any activities involving weapons.

The company's chief executive officer, Sundar Pichai, announced the decision in an internet post. He wrote that the new policy was one of several newly launched "principles" aimed at guiding the company's AI work in the future.

The principles are a set of ethical guidelines covering the company's development and sale of AI technology and tools.

Google says it will no longer design or launch AI for weapons or other technologies whose main purpose is to cause harm to people. It will also not permit its AI technology to be used for surveillance activities that violate "internationally accepted norms."

"We believe these principles are the right foundation for our company and the future development of AI," Pichai wrote.

FILE - Google CEO Sundar Pichai speaks on stage during the annual Google I/O developers conference in Mountain View, California, May 8, 2018.

The principles were announced after more than 4,000 Google employees signed a document calling for the company to cancel an AI agreement with the U.S. Department of Defense. That agreement, known as Project Maven, involves the use of Google's AI technology to examine drone images for the U.S. military.

A Google official recently told employees Project Maven would not be extended after it ends next year. Google is expected to discuss with military officials how to complete the project without violating its new principles.

Kirk Hanson is director of the Markkula Center for Applied Ethics at Santa Clara University in California. The center examines how ethics can be used to guide technology development.

He told VOA the opposition by Google employees to the U.S. military agreement was based on fears that AI technology could lead to the creation of "autonomous weapons."

"If you have artificial intelligence which identifies targets and automatically launches weapons, you have what is known as an autonomous weapon -- there is no human decision to launch the weapon."

Hanson said other companies could also face pressure from employees or the public if their AI technology is used to develop autonomous weapons. He said that, just as with driverless vehicles, autonomous weapon systems may not be as safe as their supporters promise.

"We should be more concerned about how an autonomous weapon might make a mistake. Is that artificial intelligence targeting system as good as we think it is? And until we have trust that those systems will not make mistakes, we're going to have a lot of doubts about the use of artificial intelligence."

Hanson said that even though Project Maven does not directly use Google AI to power autonomous weapons, AI systems do help with military targeting.

"If you have better targeting, presumably that's a good thing. But the critics say if you have better targeting, it raises your level of confidence in the targeting, which may lead you to then apply independent autonomous decision making by machine, which will launch the weapons."

A top Department of Defense official was asked about the use of autonomous weapons during an event last year at the Center for Strategic and International Studies in Washington. Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff, said such systems should never be used to replace human commanders.

Google chief Pichai said the company does not plan to end all AI work with the military. He said Google will still seek government projects in areas such as military training, internet security, and search and rescue.

I'm Bryan Lynn.

Bryan Lynn wrote this story for VOA Learning English, based on information from Google, and reports from the Associated Press and VOA News reporter Michelle Quinn. Kelly Jean Kelly was the editor.

We want to hear from you. Write to us in the Comments section.

_____________________________________________________________

Words in This Story

artificial intelligence n. ability of a machine to reproduce human behavior

principle n. a rule or belief that influences behavior and which is based on what a person thinks is right

ethical adj. following accepted rules of behavior: morally right and good

surveillance n. intelligence gathering

autonomous adj. able to act or operate separately, without human control

doubt n. a feeling of not being sure about something

presumably adv. very likely

confidence n. a feeling of being sure of your ability to do things well

apply v. use something in a particular situation