‘AI poses existential threat’, warn OpenAI CEO and top Microsoft, Google executives

Science and technology leaders, including OpenAI CEO Sam Altman and senior executives from Microsoft and Google, have issued a fresh warning that artificial intelligence could pose a risk of human extinction.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads the statement, which was posted on the Center for AI Safety’s website and shared on Twitter.

In March 2023, billionaire Elon Musk, Apple co-founder Steve Wozniak and author Yuval Noah Harari, along with approximately 1,120 researchers and scientists, signed an open letter titled ‘Pause Giant AI Experiments’, urging laboratories to pause their experiments on AI systems more powerful than GPT-4 for a minimum of six months.

The recent statement has been signed by renowned figures including Geoffrey Hinton, the computer scientist widely regarded as a pioneer of artificial intelligence; Demis Hassabis, CEO of Google DeepMind; Ilya Sutskever, co-founder and chief scientist of OpenAI; and numerous other prominent individuals in the field.

The statement noted that AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI.

“Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion,” it said.

The latest warning was intentionally succinct — just a single sentence — to encompass a broad coalition of scientists who might not agree on the most likely risks or the best solutions to prevent them, Dan Hendrycks, executive director of the San Francisco-based nonprofit Center for AI Safety, which organized the move, told the Associated Press.

As stated on the Center for AI Safety’s website, the statement is also meant to create common knowledge of the growing number of experts and public figures who take some of advanced AI’s most severe risks seriously.

Earlier this month, Musk said the letter was futile. “I knew it’d be futile. I just wanted to call it – it’s one of those things. Well, for the record, I have recommended that we pause. Did I think we would – there would be a pause? Absolutely not,” he told CNBC in an interview on May 16.

Unlike the previous warning, the recent statement does not put forward specific solutions or remedies. However, some individuals, including Sam Altman, have suggested the establishment of an international regulatory body similar to the United Nations nuclear agency as a potential approach.

Critics argue that such alarming predictions by AI creators exaggerate the technology’s capabilities and divert attention from the pressing need for immediate regulation of the tangible harms already arising from the use of AI systems.

(With inputs from agencies)

 

 


Updated: 31 May 2023, 06:58 PM IST

