AI Poses Major Societal Risks, Say Industry Leaders

Artificial intelligence industry leaders say they're concerned about the potential threats advanced AI systems pose to humanity. On Tuesday, several of them, including OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis, signed a statement warning of the risks of AI, alongside other scientists and notable figures.

The terse, one-sentence statement was posted on the website of the nonprofit Center for AI Safety. "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," the statement reads.

Nearly every major tech company has released an AI chatbot or other generative AI tool in recent months, following the launch of OpenAI's ChatGPT and DALL-E last year. The technology has begun to seep into everyday life and could change everything from how you search for information on the web to how you create a fitness routine. The rapid release of AI tools has also spurred scientists and industry experts to voice concerns about the technology's risks if development continues without regulation.

The statement is the latest in a series of recent warnings about the potential threats of the advanced technology. Last week, Microsoft, an industry leader in AI and an investor in OpenAI, released a 40-page report saying AI regulation is needed to stay ahead of bad actors and potential risks. In March, Tesla and Twitter CEO Elon Musk, Apple co-founder Steve Wozniak and a thousand other tech industry figures signed an open letter demanding that companies halt work on advanced AI projects for at least six months, or until industry standards and protocols have caught up.

"Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders," reads the letter, which was published March 22. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

Some critics have noted that the attention tech leaders are giving to the future risks of the technology fails to address current problems, like AI's tendency to "hallucinate," the unclear ways an AI chatbot arrives at an answer to a prompt, and data privacy and plagiarism concerns. There's also the possibility that some of these tech leaders are requesting a halt on their competitors' products to buy time to build an AI product of their own.

Editors’ note: CNET is using an AI engine to create some personal finance explainers that are edited and fact-checked by our editors. For more, see this post.
