‘Godfather of AI’ quits Google to warn of dangers of tech
WASHINGTON: A computer scientist often dubbed “the godfather of artificial intelligence” has quit his job at Google to speak out about the dangers of the technology, US media reported Monday.
Geoffrey Hinton, who created a foundational technology for AI systems, told The New York Times that advancements made in the field posed “profound risks to society and humanity.”
“Look at how it was five years ago and how it is now,” he was quoted as saying in the piece, which was published on Monday.
“Take the difference and propagate it forward. That’s scary.”
Hinton said that competition between tech giants was pushing companies to release new AI technologies at dangerous speeds, risking jobs and spreading misinformation.
“It is hard to see how you can prevent the bad actors from using it for bad things,” he told the Times.
In 2022, Google and OpenAI — the start-up behind the popular AI chatbot ChatGPT — started building systems using much larger amounts of data than before.
Hinton told the Times he believed that these systems were eclipsing human intelligence in some ways because of the amount of data they were analyzing.
“Maybe what is going on in these systems is actually a lot better than what is going on in the brain,” he told the paper.
While AI has been used to support human workers, the rapid expansion of chatbots like ChatGPT could put jobs at risk.
AI “takes away the drudge work” but “might take away more than that,” he told the Times.
The scientist also warned about the potential spread of misinformation created by AI, telling the Times that the average person will “not be able to know what is true anymore.”
Hinton notified Google of his resignation last month, the Times reported.
Jeff Dean, lead scientist for Google AI, thanked Hinton in a statement to US media.
“As one of the first companies to publish AI Principles, we remain committed to a responsible approach to AI,” the statement added.
“We’re continually learning to understand emerging risks while also innovating boldly.”
In March, tech billionaire Elon Musk and a range of experts called for a pause in the development of AI systems to allow time to make sure they are safe.
An open letter, signed by more than 1,000 people including Musk and Apple co-founder Steve Wozniak, was prompted by the release of GPT-4, a much more powerful version of the technology used by ChatGPT.
Hinton did not sign that letter at the time, but told The New York Times that scientists should not “scale this up more until they have understood whether they can control it.”