New legislation would cede AI to government control
Legislation introduced last week in the Senate aims to bring artificial intelligence under government control.
The Digital Platform Commission Act of 2023, sponsored by Senators Michael Bennet (D-CO) and Peter Welch (D-VT), would create a federal agency of “experts” with the power to govern artificial intelligence platforms down to their algorithms.
Without such regulation, says the bill, digital platforms produce “demonstrable harm” such as “abetting the collapse of trusted local journalism,” “disseminating disinformation and hate speech,” “radicalizing individuals to violence,” “perpetuating discriminatory treatment of communities of color and underserved populations,” “enabling addiction” and other maladies.
Bennet and Welch therefore seek to establish a Federal Digital Platform Commission to “protect the public interest” by managing major public-facing AI products like ChatGPT. The commission would create a Code Council to develop “behavioral codes, technical standards, or other policies for digital platforms.” Some of these algorithmic requirements would be voluntary; others would be enforceable.
The Code Council’s 18 members would be drawn from several sectors, including digital platforms, nonprofits, academia, AI, and “disinformation,” among others. The commission’s five members would be appointed by the president and confirmed by the Senate.
“The purpose of the Commission is to regulate digital platforms, consistent with the public interest, convenience, and necessity,” reads the legislation, adding that the public must be protected from “addicting design features or harmful algorithmic processes.” The commission would also require that age verification features be embedded in algorithms.
A day before the legislation was introduced, OpenAI CEO Sam Altman, whose Microsoft-backed company produced ChatGPT, urged lawmakers to regulate the AI sector with a licensing scheme.
“It is vital that AI companies — especially those working on the most powerful models — adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results,” said Altman in his opening statement to the Senate Judiciary Committee.
“To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.”
A licensing system would be a boon for Altman, whose ChatGPT is considered the fastest-growing app in history after picking up 100 million users in the two months following its launch. But it would likely create a barrier to entry for competitors, raising concerns that AI technology could be ruled by a powerful technocracy allied with the federal government. Altman appeared to acknowledge as much when he said that while there will be many machine-learning models, “there will be a relatively small number of providers that can make models at the true edge.”
But lawmakers were pleased with Altman’s proposal.
“We need to empower an agency that issues a license and can take it away,” agreed Senator Lindsey Graham (R-SC). “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”
“Clearly that should be part of what an agency can do,” Altman responded.