[US] Leading AI companies make ‘voluntary’ safety commitments at the White House

25 Jul 2023

Substantive AI legislation has yet to be enacted in the US, but the industry is evolving fast, leaving many, including the White House, concerned that it may get carried away. In response, the Biden administration has collected “voluntary commitments” from seven of the biggest AI developers to pursue shared safety and transparency goals until a planned executive order is in place, TechCrunch reports.

The big-name companies taking part in this non-binding agreement are Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI. On July 21, they sent representatives to the White House to meet with President Biden.

The practices agreed are purely voluntary; no rule or enforcement mechanism is being proposed. However, while no government agency will hold a company to account if it falls short, any failure to follow through is reportedly likely to become a matter of public record.

The all-male White House attendees were:

  • Brad Smith, President, Microsoft
  • Kent Walker, President, Google
  • Dario Amodei, CEO, Anthropic
  • Mustafa Suleyman, CEO, Inflection AI
  • Nick Clegg, President, Meta
  • Greg Brockman, President, OpenAI
  • Adam Selipsky, CEO, Amazon Web Services

 

The seven companies - and potentially other tech businesses that want to follow their example - have committed to:

  • Conduct internal and external security testing of AI systems before release, including adversarial “red teaming” by experts outside the company.
  • Share information across government, academia and “civil society” on AI risks and mitigation techniques (such as preventing “jailbreaking”).
  • Invest in cybersecurity and “insider threat safeguards” to protect private model data such as weights - important not only to guard IP, but because premature wide release could present an opportunity to malicious actors.
  • Facilitate third-party discovery and reporting of vulnerabilities, e.g. via a bug bounty programme or domain expert analysis.
  • Develop robust watermarking or some other way of marking AI-generated content.
  • Report their AI systems’ “capabilities, limitations, and areas of appropriate and inappropriate use.”
  • Prioritise research on societal risks such as systematic bias and privacy issues.
  • Develop and deploy AI “to help address society’s greatest challenges” such as cancer prevention and climate change. (In a press call, it was reportedly noted that the carbon footprint of AI models is not being tracked.)

 

The commitments detailed above are voluntary, but the prospect of an executive order - the White House is “currently developing” one - is there to encourage compliance. For example, if some companies fail to allow external security testing of their models before release, the EO might include a provision directing the FTC to scrutinise AI products that claim robust security. One EO is already in force directing agencies to watch for bias in the development and use of AI.

The White House is reportedly keen to get out ahead of this next big wave of tech, having been caught on the back foot by social media’s disruptive capabilities. The president and vice president have met with industry leaders, sought advice on a national AI strategy and dedicated funding to new AI research centres and programmes. The national science and research apparatus is said to be rather more up to date, as demonstrated by the comprehensive research challenges and opportunities report from the DOE and National Labs.


Source: TechCrunch

(Links via original reporting) 
