
UN Experts Urge Countries to Refrain From Using AI to Wage War

KYIV, UKRAINE - OCTOBER 22: A worker at Starni Games studios works on a computer game on October 22, 2022 in Kyiv, Ukraine. With first-hand accounts of Ukrainians living through the Russian invasion and ensuing war, a Kyiv-based development firm uses stories of survival to make immersive video games. (Photo by Ed Ram/Getty Images)

(Bloomberg) -- A United Nations expert group called on countries to strictly limit military use of artificial intelligence to prevent human rights violations and the emergence of a new arms race.

Countries should put legal safeguards in contracts with weapon manufacturers to prevent the unethical development of AI, and technology companies should consider implementing mechanisms to avoid its misuse, an AI advisory body convened by Secretary-General Antonio Guterres said in a report released Thursday. 

The group, which includes government officials as well as executives from companies including Alphabet Inc.’s Google, Microsoft Corp. and OpenAI, warned that military use of AI risks stoking a new global arms race, blurring the lines between war and peace and giving terrorist groups access to new technologies.

“On legal and moral grounds, kill decisions should not be automated through AI,” the group wrote. “States should commit to refraining from deploying and using military applications of AI in armed conflict in ways that are not in full compliance with international law.”

Military use of AI has emerged as a growing concern as powers including the US and China rush to incorporate the technology into their armed forces, and as Israel deploys it in its war against Hamas. Guterres has urged countries to negotiate a new treaty on autonomous weapons by 2026 to ban and regulate AI arms systems.

The report also suggested additional measures to advance the global conversation on AI, including by establishing a fund that would support poorer countries looking to develop AI capabilities and a UN-housed office to advise on scientific advancements, governance and policy design.

The group argued that AI governance discussions have so far been too fragmented. 

In the two years since ChatGPT’s release set off the most recent wave of AI innovation, international regulators and world leaders have been attempting to put guardrails on the quickly advancing technology. While US President Joe Biden highlighted the need to regulate AI in a speech before the UN General Assembly last year, that rhetoric has so far failed to produce strong regulations in Washington despite calls for action by lawmakers and businesses, including OpenAI.

China has implemented its own strict guidelines, while the European Union has approved the most comprehensive set of AI rules globally. In March, the bloc passed the AI Act, which sets rules for developers of AI systems and restricts how the technology can be used.

“The technology is too important, and the stakes are too high, to rely only on market forces and a fragmented patchwork of national and multilateral action,” the group wrote.

--With assistance from Shirin Ghaffary.

©2024 Bloomberg L.P.
