Google says it won't build AI for weapons

Weeks after facing both internal and external blowback over its contract to sell AI technology to the Pentagon for drone video analysis, Google on Thursday published a set of principles that explicitly states it will not design or deploy AI for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people". CEO Sundar Pichai said the company will "continue our work with governments and the military in many other areas".

Google pledged not to use its powerful artificial intelligence for weapons, illegal surveillance or technologies that cause "overall harm", with Pichai noting that how AI is developed and used "will have a significant impact on society for many years to come". The guidelines come just days after Google announced it would not renew its contract with the U.S. Department of Defense to analyze drone footage. That contract could eventually have been worth up to $250 million a year, according to The Intercept, which saw internal emails.

While Google has always said this work was not for use in weapons, the project may have fallen afoul of the new restrictions: Google said it will not continue with Project Maven after its current contract ends. "In the absence of positive actions, such as publicly supporting a global ban on autonomous weapons, Google will have to offer more public transparency as to the systems they build".

Google also pledged to "avoid" creating "unfair biases", including political and religious biases, in its AI algorithms.

Among the published principles: AI applications should "be built and tested for safety", and the company will aspire "to high standards of scientific excellence" as it works to advance AI development. However, the principles do not make clear whether Google would be precluded from working on a project like Maven, which promised vast surveillance capabilities to the military but stopped short of enabling algorithmic drone strikes.

The AI principles represent a reversal for Google, which initially defended its involvement in Project Maven by noting that the project relied on open-source software that was not being used for explicitly offensive purposes.

The United States military, meanwhile, is increasing spending on a secret research effort to use artificial intelligence to help anticipate the launch of a nuclear-capable missile, as well as to track and target mobile launchers in North Korea and elsewhere. Google's principles, for their part, pledge that the company "will work to limit potentially harmful or abusive applications".

The issue wasn't the message itself, of course (any tech company publicly committing to a set of principles is a net good), but it glossed over the minor fact that the entity that crafted these rules and regulations for Google was Google itself, while the company's actions remain largely unregulated.

The principles also commit Google to incorporating its privacy principles in the development and use of its AI technologies. "A related question, then, is what is the ongoing responsibility of a technology's developer once its products are released into the world".

Google's continued collaboration with governments and the military will cover areas such as cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue.
