Google, Microsoft, Amazon, Meta Pledge to Make AI Safer and More Secure
The White House has secured “voluntary commitments” from tech companies that they’ll help reduce the risks involved in artificial intelligence.
US President Joe Biden met with Amazon, Microsoft, Meta, Google, OpenAI, Anthropic and Inflection on Friday at the White House, where they agreed to emphasize “safety, security and trust” when developing AI technologies. Here are some details in each of those categories.
- Safety: The companies committed to “testing the safety and capabilities of their AI systems, subjecting them to external testing, assessing their potential biological, cybersecurity, and societal risks and making the results of those assessments public.”
- Security: The companies also said they will safeguard their AI products “against cyber and insider threats” and share “best practices and standards to prevent misuse, reduce risks to society, and protect national security.”
- Trust: One of the biggest agreements secured was for these companies to make it easy for people to tell whether images are original, altered or generated by AI. The companies also pledged to ensure that AI doesn’t promote discrimination or bias, to protect children from harm, and to use AI to address challenges like climate change and cancer.
The arrival of OpenAI’s ChatGPT in late 2022 was the beginning of a stampede of major tech companies releasing generative AI tools to the masses. OpenAI’s GPT-4 launched in mid-March. It’s the latest version of the large language model that powers the ChatGPT AI chatbot, which among other things is advanced enough to pass the bar exam. Chatbots, however, are prone to spitting out incorrect answers and sometimes sources that don’t exist. As adoption of these tools has exploded, their potential problems have gained renewed attention — including spreading misinformation and deepening bias and inequality.
What the AI companies are saying and doing
Meta said it welcomed the White House agreement. Earlier this week, the company launched the second generation of its AI large language model, Llama 2, making it free and open source.
“As we develop new AI models, tech companies should be transparent about how their systems work and collaborate closely across industry, government, academia and civil society,” said Nick Clegg, Meta’s president of global affairs.
The White House agreement will “create a foundation to help ensure the promise of AI stays ahead of its risks,” Brad Smith, Microsoft vice chair and president, said in a blog post.
Microsoft is a partner on Meta’s Llama 2. Earlier this year, it also launched an AI-powered Bing search that makes use of ChatGPT, and it is bringing more AI tools to Microsoft 365 and its Edge browser.
The agreement with the White House is part of OpenAI’s “ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance,” said Anna Makanju, OpenAI vice president of global affairs. “Policymakers around the world are considering new laws for highly capable AI systems. Today’s commitments contribute specific and concrete practices to that ongoing discussion.”
Amazon supports the voluntary commitments “as one of the world’s leading developers and deployers of AI tools and services,” Tim Doyle, Amazon spokesperson, told CNET in an emailed statement. “We are dedicated to driving innovation on behalf of our customers while also establishing and implementing the necessary safeguards to protect consumers and customers.”
Amazon has leaned into AI for its podcast and music offerings, as well as on Amazon Web Services.
Anthropic said in an emailed statement that all AI companies “need to join in a race for AI safety.” The company said it will announce its plans in the coming weeks on “cybersecurity, red teaming and responsible scaling.”
“There’s a huge amount of safety work ahead. So far AI safety has been stuck in the space of ideas and meetings,” Mustafa Suleyman, co-founder and CEO of Inflection AI, wrote in a blog post Friday. “The amount of tangible progress versus hype and panic has been insufficient. At Inflection we find this both concerning and frustrating. That’s why safety is at the heart of our mission.”
What else?
Google didn’t immediately respond to a request for comment, but earlier this year it said it would watermark AI content. The company’s AI model Gemini will identify text, images and footage that have been generated by AI. It will check the metadata embedded in content to let you know what’s unaltered and what’s been created by AI.
Image software company Adobe similarly tags AI-generated images from its Firefly AI tools with metadata indicating they were created by an AI system.
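Provenance tagging of this kind generally works by embedding a machine-readable marker in a file’s metadata, which software can then scan for. As a rough illustration only (not Google’s or Adobe’s actual implementation, which relies on richer, cryptographically signed standards such as C2PA Content Credentials), here is a minimal sketch that scans an image’s raw bytes for an XMP metadata packet containing the IPTC `trainedAlgorithmicMedia` source-type value used to label AI-generated media:

```python
# Illustrative sketch: look for an embedded XMP metadata packet in a
# file's bytes and check whether it declares the content AI-generated.
# Real provenance systems (e.g. C2PA / Content Credentials) use signed
# manifests; this simple byte scan is for demonstration only.

XMP_START = b"<x:xmpmeta"
XMP_END = b"</x:xmpmeta>"
# Value from IPTC's DigitalSourceType vocabulary for AI-generated media.
AI_MARKER = b"trainedAlgorithmicMedia"

def has_ai_metadata(data: bytes) -> bool:
    """Return True if the bytes contain an XMP packet with the AI marker."""
    start = data.find(XMP_START)
    if start == -1:
        return False  # no XMP packet at all
    end = data.find(XMP_END, start)
    if end == -1:
        return False  # malformed packet; treat as untagged
    packet = data[start:end + len(XMP_END)]
    return AI_MARKER in packet

# Usage with a fabricated byte blob standing in for a tagged image file:
fake_image = (
    b"\xff\xd8\xff\xe1"  # JPEG start-of-image + APP1 marker bytes
    b'<x:xmpmeta xmlns:x="adobe:ns:meta/">'
    b"<rdf:Description DigitalSourceType='trainedAlgorithmicMedia'/>"
    b"</x:xmpmeta>"
)
print(has_ai_metadata(fake_image))               # True
print(has_ai_metadata(b"\xff\xd8no metadata"))   # False
```

A scheme like this is easy to strip (re-saving the file can discard the metadata), which is why the signed-manifest approaches the companies are adopting pair the metadata with watermarks baked into the content itself.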
You can read the entire voluntary agreement between the companies and the White House here.
The Biden-Harris administration is also developing an executive order and seeking bipartisan legislation “to keep Americans safe” from AI. The US Office of Management and Budget is additionally slated to release guidelines for any federal agencies that are procuring or using AI systems.
See also: ChatGPT vs. Bing vs. Google Bard: Which AI Is the Most Helpful?
Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.