‘Current AI Models are very stupid’: Nick Clegg, Meta’s President of Global Affairs

As impressive as they are, current AI models, especially those available to the public, are not intelligent at all; they are only good at predicting words and numbers based on the data they have been trained on, says Meta’s President of Global Affairs, Nick Clegg

Nick Clegg, the president of global affairs at Meta (formerly Facebook), downplayed the risks of current Artificial Intelligence (AI) models, stating that they are “quite stupid” and that the hype surrounding AI has surpassed the technology’s capabilities. He added that current models are far from achieving true autonomy or the ability to think for themselves.

Meta recently announced that its large language model, LLaMA 2, will be available as an open-source tool for commercial businesses and researchers to use. This decision has sparked debates within the tech community due to concerns about the potential misuse of such a powerful tool.

The limitations of current AI models
Clegg acknowledged that large language models like GPT, on which ChatGPT is built, are essentially trained to predict the next word in a sequence from enormous datasets of text, which means they lack true understanding and independent thinking.
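To make that concrete, the sketch below is a toy, hypothetical illustration in Python of what “predicting the next word from training data” means: it simply counts which word most often follows each word in a tiny made-up corpus and returns that as its “prediction”. Models like GPT and LLaMA 2 use neural networks trained on vastly larger datasets, but the underlying objective (next-token prediction) is the same; the corpus and function names here are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction: tally which word follows each
# word in a tiny training corpus, then "predict" by returning the most
# frequent continuation. Real LLMs use neural networks over vast datasets,
# but the training objective -- predict the next token -- is the same idea.

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Build a lookup table: word -> counts of the words that follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training data."""
    candidates = following.get(word)
    if not candidates:
        return "<unknown>"  # no statistics for this word: the "model" is lost
    return candidates.most_common(1)[0][0]

if __name__ == "__main__":
    print(predict_next("the"))   # -> "cat": most frequent follower of "the"
    print(predict_next("sat"))   # -> "on"
    print(predict_next("moon"))  # -> "<unknown>": no understanding, only counts
```

The point of the toy example is the one Clegg makes: the “model” has no notion of what a cat or a mat is; it only reproduces statistical patterns seen during training.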

While opening the model up through open source gives Meta free user testing data and improvements, it also raises concerns about the need for strong guardrails to prevent misuse. Previous chatbot iterations have been manipulated into spreading hate speech and false information, raising questions about how Meta plans to address the potential misuse of LLaMA 2.

The collaboration with Microsoft to make LLaMA 2 available through Microsoft’s platforms like Azure indicates Meta’s ambitions in the AI field. With the deep pockets of companies like Microsoft investing in AI creators like OpenAI (the creator of ChatGPT), there are concerns about a consolidation of power in the AI industry, potentially limiting healthy competition.

Overall, the availability and use of LLaMA 2 raise important questions about the ethical use of AI and the need for robust measures to prevent its misuse.

The need to go open source
LLaMA 2, developed by Meta and released in partnership with Microsoft, is offered as an open-source tool, making it free for commercial businesses and researchers to use. In contrast, GPT-4 and Google’s LLM, which powers the Bard chatbot, are not available for free use in commercial or research applications.

Recently, US comedian Sarah Silverman filed a lawsuit against both OpenAI and Meta, alleging that her copyright has been violated in the training of their AI systems.

Dame Wendy Hall, a prominent computer science professor at the University of Southampton, expressed concerns about open-sourcing AI models, particularly in terms of legislation and regulation.

AI surrounded by hyperbole
Hall raised the question of whether the industry can be trusted to self-regulate or if there is a need for government involvement in regulation. She used strong language, comparing open-sourcing AI to providing a template for building a nuclear bomb.

In response, Clegg dismissed the comparison as “hyperbole,” clarifying that Meta’s open-sourced system, LLaMA 2, cannot generate images or help build harmful bioweapons. However, he agreed that AI does need to be regulated.

Sir Nick emphasized that open-sourcing AI models is already common practice, and that the real concern is how to do it responsibly and safely. He asserted that the LLMs (large language models) being open-sourced, including LLaMA 2, are safer than other open-sourced AI models.
