Monday, July 24, 2023

From the New York Times: Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools

A look at how regulatory policies develop and evolve.


Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced on Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards for safety, security and trust at a meeting with President Biden at the White House on Friday afternoon.

. . . The voluntary safeguards are only an early, tentative step as Washington and governments across the world seek to put in place legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can spot A.I.-generated material.

But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with rapid innovation.

The White House offered no details of a forthcoming presidential executive order that aims to deal with another problem: how to control the ability of China and other competitors to get ahold of the new artificial intelligence programs, or the components used to develop them.

The order is expected to involve new restrictions on advanced semiconductors and on the export of large language models. Those are hard to secure; much of the software can fit, compressed, on a thumb drive.

An executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises will not restrain the plans of the A.I. companies nor hinder the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.

. . . As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges like climate change; and transparency measures to identify A.I.-generated material.

. . . But the rules on which they agreed are largely the lowest common denominator, and can be interpreted by every company differently. For example, the firms committed to strict cybersecurity measures around the data used to make the language models on which generative A.I. programs are developed. But there is no specificity about what that means, and the companies would have an interest in protecting their intellectual property anyway.

. . . “The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” Mr. Barrett said in a statement.

. . . Lawmakers have been grappling with how to address the ascent of A.I. technology, with some focused on risks to consumers and others acutely concerned about falling behind adversaries, particularly China, in the race for dominance in the field.

This week, the House committee on competition with China sent bipartisan letters to U.S.-based venture capital firms, demanding a reckoning over investments they had made in Chinese A.I. and semiconductor companies. For months, a variety of House and Senate panels have been questioning the A.I. industry’s most influential entrepreneurs and critics to determine what sort of legislative guardrails and incentives Congress ought to be exploring.

Many of those witnesses, including Sam Altman of OpenAI, have implored lawmakers to regulate the A.I. industry, pointing out the potential for the new technology to cause undue harm. But that regulation has been slow to get underway in Congress, where many lawmakers still struggle to grasp what exactly A.I. technology is.


For more:

- Blueprint for an AI Bill of Rights

- NCSL: Artificial Intelligence 2023 Legislation

- Texas Standard: Gov. Abbott signs bill to establish an Artificial Intelligence Advisory Council

- Wikipedia: Regulation of artificial intelligence