
US agency calls for audits of AI systems to ensure accountability

Report seeks consequences for AI developers who misrepresent how their systems work

A computer screen with the home page of the OpenAI website displays ChatGPT, a generative artificial intelligence model. (Marco Bertorello/AFP via Getty Images file photo)

The National Telecommunications and Information Administration on Wednesday issued a report calling for a system of audits for artificial intelligence systems that would ensure transparency and hold tech companies accountable for potential risks and harms.

The Artificial Intelligence Accountability Policy Report stemmed from more than 1,400 comments the agency, which is part of the Commerce Department, received last year from companies and advocacy groups about creating an accountability system for artificial intelligence technologies. 

“The report calls for improved transparency into AI systems, independent evaluations of those systems, and consequences for imposing new risks,” Alan Davidson, NTIA’s administrator and assistant secretary of Commerce, told reporters Tuesday. 

“The government ought to require independent audits of the highest risk AI systems, those that, for example, directly impact physical safety or health” of users, Davidson said. 

Davidson said such a system of audits would be similar to financial audits that public companies undertake to certify financial performance based on a broadly accepted set of accounting and compliance principles.

The NTIA recommendations would likely feed into decisions Congress and the executive branch make in the coming months on how to devise regulations and laws for artificial intelligence systems. 

Senate Majority Leader Charles E. Schumer, D-N.Y., has held a series of briefings for lawmakers on AI with the goal of drafting legislation. In February the House launched a bipartisan task force on AI led by Reps. Jay Obernolte, R-Calif., and Ted Lieu, D-Calif. 

A legislative framework proposed by Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., last year would hold AI systems accountable by creating a new federal oversight agency that “should have the authority to conduct audits” and issue licenses to companies developing AI systems used in high-risk situations such as facial recognition.

A bipartisan group of lawmakers led by Sens. John Thune, R-S.D., and Amy Klobuchar, D-Minn., unveiled legislation late last year that promises to bring greater transparency to the development of artificial intelligence systems and hold companies developing such systems accountable.

The bill would create an advisory body of industry experts to guide the Commerce Department on standards for AI systems in use at infrastructure facilities, within criminal justice systems, in the collection of biometric information and in other critical areas.

Not all of the recommendations in the NTIA report require legislation, Davidson said, adding that the agency is working with Congress on those issues. 

The report calls for supporting the U.S. AI Safety Institute at the National Institute of Standards and Technology, and for federal agencies to work with companies and advocacy groups to develop and design audits and liability standards, including who should be held responsible for harms produced by AI systems.

Davidson said building a well-functioning auditing system for AI could take years, and would include building a “workforce of AI auditors … so that we can make sure that there’s a level of independence” among auditors and auditing firms. 

Existing regulatory agencies that are already looking at how to regulate AI systems within their respective fields, including the Food and Drug Administration, the Consumer Financial Protection Bureau, the Equal Employment Opportunity Commission and the Federal Trade Commission, can incorporate auditing mechanisms, Davidson said.

The report also calls for consequences for AI developers who misrepresent how their systems work, and those would include both regulatory and marketplace consequences, Davidson said.  

Labels similar to Energy Star ratings and other seals of approval could help consumers figure out whether they can trust AI systems, Davidson said.
