UN officials urge regulation of Artificial Intelligence
U.N. Secretary-General António Guterres warned that AI could ease the path for criminals, terrorists and other actors intent on causing “death and destruction, widespread trauma, and deep psychological damage on an unimaginable scale.”
The launch last year of ChatGPT – which can create text from prompts, mimic voices and generate photos, illustrations and videos – has raised alarms about disinformation and manipulation.
On Tuesday, diplomats and leading experts in the field of AI laid out for the Security Council the risks and threats – along with the scientific and social benefits – of the emerging technology. Much remains unknown about it even as its development speeds ahead, they said.
“It’s as though we are building engines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an AI safety research company. Private companies, he said, should not be the sole creators and regulators of AI.
Guterres said a U.N. watchdog should act as a governing body to monitor and enforce AI rules, much as other agencies oversee aviation, climate and nuclear energy.
The proposed agency would consist of experts in the field who would share their knowledge with governments and administrative agencies that might lack the technical know-how to address the threats of AI. But the prospect of a legally binding resolution on AI governance remains distant. The majority of diplomats did, however, endorse the idea of a global governing mechanism and a set of international rules.
“No country will be untouched by AI, so we must involve and engage the widest coalition of international actors from all sectors,” said Britain’s foreign secretary, James Cleverly, who presided over the meeting because Britain holds the rotating presidency of the council this month.
Russia, departing from the majority view of the council, expressed skepticism that enough was known about the risks of AI to treat it as a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, pushed back against the creation of a set of global laws, saying that international regulatory bodies must be flexible enough to allow countries to develop their own rules.
The Chinese ambassador did say, however, that his country opposed the use of AI as a “means to create military hegemony or undermine the sovereignty of a country.”
Also raised was the military use of autonomous weapons, whether on the battlefield or in another country for assassinations, such as the satellite-controlled AI robot that Israel dispatched to Iran to kill a top nuclear scientist, Mohsen Fakhrizadeh.
Guterres said that the United Nations must reach a legally binding agreement by 2026 banning the use of AI in autonomous weapons of war.
Rebecca Willett, director of AI at the Data Science Institute at the University of Chicago, said in an interview that in regulating the technology, it was important not to lose sight of the humans behind it.
The systems are not entirely autonomous, and the people who design them need to be held accountable, she said.
“This is one of the reasons that the U.N. is looking at this,” Willett said. “There really needs to be international repercussions so that a company based in one country can’t destroy another country without violating international agreements. Real enforceable regulation can make things better and safer.”