Google, Meta, OpenAI among 7 companies commit to responsible AI development – Times of India

The top companies involved in the development of artificial intelligence (AI) tools and products have committed to protecting users from risks posed by the technology by voluntarily agreeing to a series of promises. These companies include Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.
According to a note by the White House, the Biden-Harris Administration has “secured voluntary commitments from these companies to help move toward safe, secure, and transparent development of AI technology.”
It also said that the companies have chosen to undertake these commitments immediately, and that they underscore three principles fundamental to developing responsible AI: safety, security, and trust.
What are the commitments?
The commitments by the tech giants are broadly divided under these three principles. First, the companies have committed to internal and external security testing of their AI systems before release, carried out in part by independent experts. Second, they will share information on managing AI risks across the industry and with governments, civil society and academia.
These seven tech companies will also invest in cybersecurity and facilitate third-party discovery and reporting of vulnerabilities in their AI systems. They have further committed to developing and deploying advanced AI systems to help address societal challenges, and to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use.
Tech CEOs take on AI development
Google and OpenAI, among others, have already been promoting responsible development of the technology. While Google CEO Sundar Pichai has spoken about it extensively in public forums and interviews, OpenAI chief executive Sam Altman recently concluded a global tour in which he visited multiple countries, including India, to talk about the need for responsible AI.
In June this year, Apple CEO Tim Cook also opened up about the potential and dangers that AI poses to humanity. He said that large language models (LLMs) show “great promise” but also the potential for “things like bias, things like misinformation [and] maybe worse in some cases.”
Emphasising the need for regulation and guardrails, Cook said, “If you look down the road, then it’s so powerful that companies have to employ their own ethical decisions.”

