
US companies commit to safe, transparent AI development

Seven US artificial intelligence (AI) giants – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have publicly committed to “help move toward safe, secure, and transparent development of AI technology.”

The commitments

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe. To make the most of AI’s potential, the Biden-Harris Administration is encouraging this industry to uphold the highest standards to ensure that innovation doesn’t come at the expense of Americans’ rights and safety,” the Biden-Harris Administration noted.

While the Administration is working on an executive order that will impose legal obligations on companies in the AI field, these seven companies are committing to:

  • Test the security of their AI systems before launch (the testing will be done both internally and by independent experts)
  • Share knowledge about AI risk management best practices among themselves and with the government
  • Protect proprietary and unreleased model weights – “the most essential part of an AI system” – by investing in cybersecurity and insider threat safeguards
  • Make it easy for third parties to detect and report vulnerabilities in their AI systems
  • Make sure users can unequivocally know when content (video, audio) is AI-generated (e.g., with watermarks)
  • Disclose the capabilities, limitations, and both appropriate and inappropriate uses of their AI systems (and the security and societal risks they carry)
  • Keep researching the potential societal risks (bias, discrimination) of AI use and protect privacy
  • Create advanced AI systems to tackle society’s most significant challenges (e.g., cancer prevention, climate change, combating cyberthreats)

Addressing the risks posed by AI

The official document outlining the eight commitments states that they “apply only to generative models that are overall more powerful than the current industry frontier (e.g., models that are overall more powerful than any currently released models, including GPT-4, Claude 2, PaLM 2, Titan and, in the case of image generation, DALL-E 2).”

That provision doesn’t sit right with non-profit research and advocacy organization AlgorithmWatch, which pointed out that currently available AI systems are already doing harm. “If companies agree that it’s a good idea to apply these precautions, should they not apply them to the stuff they’re selling globally at this moment? Of course they should!” they said.

“Public awareness around validation of information sourced from the Internet is a must,” notes James Campbell, CEO of Cado Security.

“An issue with LLMs in particular is that they deliver information in an authoritative manner which is often incorrect (described as ‘hallucinations’). This results in users of these LLMs believing they have intimate knowledge of a subject area even on occasions where they’ve been misled. Users of LLMs need to approach the results of their prompting with a large dose of scepticism and additional validation from an alternative source. Government guidance to users should emphasise this until the results are more reliable.”

“The ‘testing’ referred to in the release is likely around both the internal security of AI developers and the broader societal impact of the technologies themselves,” he added.

“There is a lot of potential for privacy issues arising from the use of AI technologies, especially around Large Language Models (LLMs) such as ChatGPT. OpenAI themselves disclosed a vulnerability in ChatGPT that inadvertently provided access to other users’ conversation titles. Clearly this has serious data security implications for users of these LLMs. More generally, companies may be asked to conduct a risk assessment from a societal impact perspective prior to releasing AI-enabled technologies.”

But, according to Abnormal Security CISO Mike Britton, when regulation is finally enacted, the most significant provisions will be around ethics, transparency and assurances in how the AI operates.

“Any good AI solution should also enable a human to make the final decision when it comes to executing (and potentially undoing) any actions taken by AI,” he told Help Net Security.

Working on safe, trustworthy and ethical AI systems

The Administration has previously published a Blueprint for an AI Bill of Rights, aimed at protecting US citizens from risks that AI systems may pose, such as bias and discrimination.

They identified five principles that guide the design, use, and deployment of such systems:

  • Protection from unsafe or ineffective systems
  • Protection from algorithmic discrimination
  • Data privacy protection
  • Notice and explanation about the systems used
  • Availability of a human alternative to automated systems

To protect the public from algorithmic discrimination, President Biden has signed an Executive Order urging federal agencies to combat bias in the development and implementation of emerging technologies such as AI.

To bolster AI research and development, the Administration has also invested $140 million through the National Science Foundation to launch seven new National AI Research Institutes (adding to the existing 18).

Source: https://www.helpnetsecurity.com/2023/07/24/us-safe-ai/
