Threats like AI-aided bioweapons confound policymakers
The U.S. is looking at how to prevent systemic dangers from the emerging technology
In June, a group of students with no scientific background at the Massachusetts Institute of Technology and Harvard University showed that, in an hour, they could use chatbots powered by generative artificial intelligence models to identify pathogens capable of causing a deadly new pandemic.
Using OpenAI’s GPT-4, the model behind ChatGPT, along with Microsoft’s Bing, Google’s Bard and FreedomGPT, an open-source model, the students learned how to obtain samples of potential pandemic pathogens, including the smallpox virus, and how they might be reverse engineered, according to a study the students wrote about the effort.
“Our results demonstrate that artificial intelligence can exacerbate catastrophic biological risks,” warned a preprint of the study, titled “Can large language models democratize access to dual-use biotechnology?”
“Widely accessible artificial intelligence threatens to allow people without laboratory training to identify, acquire, and release viruses highlighted as pandemic threats in the scientific literature,” it said.
That kind of danger is among the risks that the White House, U.S. lawmakers and foreign officials are working furiously to head off. In June, the European Parliament adopted draft legislation, known as the EU AI Act, that would require companies developing generative AI technologies to label content created by such systems, design models to prevent the generation of illegal content, and publish summaries of the copyrighted data used to train the models.
But the measure largely sidesteps systemic threats such as bioweapons, according to some U.S. officials.
Broader approach in Congress
The AI effort in Congress, led by Senate Majority Leader Charles E. Schumer, is aiming for a broader regulatory approach that would encompass not only application-specific AI systems but also generative AI technologies that can be put to multiple uses, two congressional aides involved in the process said.
“The EU’s approach focuses on individual harms from AI tech and not on systemic harms to society, such as potential use in designing chemical and biological weapons, spread of disinformation, or election interference,” one of the aides said, speaking on condition of anonymity because the discussions are ongoing.
In the United States, top lawmakers involved in the effort “don’t want individual and social harms to be separated from each other,” the aide said. “Such decoupling makes it harder to address both.”
Schumer said in June that he would propose legislation addressing AI’s harms as well as ways to promote innovation. He has tasked a small group of lawmakers — including Sens. Martin Heinrich, D-N.M.; Todd Young, R-Ind.; and Mike Rounds, R-S.D. — with drawing up proposals.
While announcing his plans, Schumer said he would consult with the EU and other countries, but he added that none of the proposals, including the EU’s AI Act, had “really captured the imagination of the world.” Schumer said once the U.S. puts forth a comprehensive AI regulatory proposal, “I think the rest of the world will follow.”
Schumer, who has already held three briefings for senators, plans to host a series of as many as 10 forums featuring experts and civil society groups, starting Wednesday. In the House, Speaker Kevin McCarthy has tapped an informal group of lawmakers led by Rep. Jay Obernolte, R-Calif., a computer scientist by training, to brainstorm ideas.
The congressional aides said the U.S. approach is unlikely to lead to a new regulatory agency “because the goal is not to centralize authority on AI enforcement in the hands of one agency,” as one of them put it. “Instead, the idea is to empower existing agencies.”
That could mean giving the Food and Drug Administration, the Federal Trade Commission, the Federal Communications Commission and the Federal Aviation Administration tools to oversee AI applications in their respective areas, the aides said.
But some senators are leaning in a different direction.
On Friday, Sens. Richard Blumenthal and Josh Hawley — respectively, the Democratic chair and top Republican on the Senate Judiciary Subcommittee on Privacy, Technology, and the Law — offered a legislative outline that would create an independent oversight agency for AI and require companies creating high-risk applications to register with the new body.
“The oversight body should have the authority to conduct audits of companies seeking licenses and cooperate with other enforcers, including considering vesting concurrent enforcement authority in state Attorneys General,” Blumenthal, D-Conn., and Hawley, R-Mo., said in a fact sheet about their proposal.
The idea of a single AI enforcement agency has been backed by some experts, including Yoshua Bengio, a professor of computer science at the University of Montreal. “If there are 10 different agencies trying to regulate AI in its various forms, that could be useful, but also, we need to have a single voice that coordinates with the other countries,” Bengio told Blumenthal’s subcommittee during a hearing in July. “And having one agency that does that is going to be very important.”
In the EU, Dragos Tudorache, the European Parliament member who steered the bloc’s draft AI legislation, said he’s trying to get a central, Europe-wide regulatory agency for AI included in the final bill, instead of vesting powers in each national regulatory body.
“I have introduced the idea of a European AI board that recruits all of the national regulators” and can conduct “joint investigations, taking on enforcement for certain types of infringements that exceed national authorities or applications that affect users in different countries,” Tudorache said in an interview in Brussels. “That would also have a built-in mechanism for uniformity and coherence.”
Balancing safety, innovation
Lawmakers around the world also are struggling to strike the right balance between regulation and keeping doors open to innovation so that domestic companies don’t get squeezed out by heavy-handed rules.
The world’s top AI companies are all U.S.-based, “and there’s a reason for that,” Obernolte said in an interview. “It’s because we have been the crucible of entrepreneurialism and technology for a long time, and I don’t want to see us surrender that role to anyone.”
Obernolte pointed to the U.K.’s effort to distinguish its approach to AI regulation from the European Union’s, driven by the British government’s desire to “see more of the AI development occur in the U.K.”
The U.K. government issued a white paper, titled “A pro-innovation approach to AI regulation,” that calls for “getting regulation right so that innovators can thrive and the risks posed by AI can be addressed.” It favors empowering existing agencies rather than creating a central authority.
Irrespective of which path Washington chooses, the U.S. is likely to combine regulation with money to promote innovation and the development of new technologies, said Tony Samp, who heads the AI policy practice at the law firm DLA Piper in Washington. Samp worked for Heinrich when the senator helped launch the Senate Artificial Intelligence Caucus.
Even as it guards against risks, Congress may look for areas where private industry is not investing, “and maybe those are the areas where the federal government plays a role,” Samp said. Government funding, he said, could go toward developing safety-oriented technologies such as watermarking, which would reveal when text was generated by AI.
The EU’s approach has its critics, but the AI Act is one of several pieces of EU legislation with useful features, said Rumman Chowdhury, a Responsible AI fellow at the Berkman Klein Center for Internet & Society at Harvard University.
The EU’s Digital Services Act, which went into effect last month, is designed to combat hate speech and disinformation and applies to large online platforms and search engines. The law has also created a mechanism for auditing algorithms, Chowdhury said.
“If you look at what it is auditing for, it would be, for example, impact on democracy, democratic processes and free and fair elections, which would include something like disinformation,” Chowdhury said.
The European Center for Algorithmic Transparency is designing the audits, and Chowdhury said she is consulting on the effort.
The EU may be able to address the larger, society-wide problems posed by generative AI technologies through the auditing mechanism because such technologies ultimately would be embedded in search engines and social media platforms, Chowdhury said.
Note: This is the second in a series of stories examining the European Union’s technology regulations and how they contrast with the approaches being pursued in the United States. Reporting for this series was made possible in part through a trans-Atlantic media fellowship from the Heinrich Boell Foundation in Washington, D.C.