Artificial intelligence poses ‘risk of extinction,’ tech execs and experts warn | CBC News
Top artificial intelligence executives, including OpenAI CEO Sam Altman, on Tuesday joined other experts and professors in urging policymakers to treat the technology as one of the most serious future risks to humanity.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” more than 350 signatories wrote in a 22-word statement published by the nonprofit Center for AI Safety (CAIS).
Competition in the industry has led to a sort of “AI arms race,” CAIS Executive Director Dan Hendrycks told CBC News in an interview.
“That could escalate and, like the nuclear arms race, potentially bring us to the brink of catastrophe,” he said, suggesting humanity “could go the way of the Neanderthals.”
But he noted that certain jobs and companies may be more likely to vanish first, as AI-driven automation is chosen over human labour.
Recent developments in AI have created tools that supporters say can be used in applications from medical diagnostics to writing legal briefs, but this has sparked fears the technology could enable privacy violations, power misinformation campaigns and lead to problems with “smart machines” thinking for themselves.
“There are many ways that [AI] could go wrong,” said Hendrycks, who explained the need to examine both AI tools built for general purposes and those that could be used with malicious intent.
He also raised the concern of artificial intelligence developing autonomously.
“It would be difficult to tell if an AI had a goal different from our own because it could potentially conceal it. This is not completely out of the question,” he said.
‘Godfathers of AI’ among critics
Hendrycks and the signatories to the CAIS statement are calling for international co-operation and for AI to be treated as a “global priority” in order to address these risks.
“We were able to do the same with nuclear weapons,” he said. “It’s of course difficult to do, but I think it’s something we have to do, otherwise things could get very bad.”
The letter coincided with the U.S.-EU Trade and Technology Council meeting in Sweden, where politicians are expected to discuss regulating AI.
As well as Altman, signatories included the CEOs of AI firms DeepMind and Anthropic, and executives from Microsoft and Google.
Also among them were British-Canadian computer scientist Geoffrey Hinton and Université de Montréal computer science professor Yoshua Bengio — two of the three so-called “godfathers of AI” who received the 2018 Turing Award for their work on deep learning — and professors from institutions ranging from Harvard to China’s Tsinghua University.
Hinton earlier told Reuters that AI could pose a “more urgent” threat to humanity than climate change. He recently resigned from his position at Google in order to speak more freely about his concerns over the rapid development of AI.
In its statement, CAIS singled out Meta, the employer of the third “godfather of AI,” Yann LeCun, for not signing the letter.
Bengio and Elon Musk, along with more than 1,000 other experts and industry executives, had already cited potential risks to society in April.
Last week, Altman referred to the draft EU AI Act, the first attempt to create comprehensive regulation for AI, as over-regulation and threatened to leave Europe. He reversed his stance within days after criticism from politicians.
European Commission President Ursula von der Leyen will meet Altman on Thursday.