AI and FOIA: Evolution, Devolution, or Revolution?

Alexander Howard
This Sunshine Week, open government advocates are reeling from an unprecedented and controversial shift in our government. We have seen a widespread dismissal of the officials and civil servants entrusted with upholding the public right to know, and ensuring accountability for fraud, waste, and abuse of power.
The American people are being informed of what is happening in agencies through the work of journalists and watchdogs using modern investigative tools to scrape and track changes to public websites and data, including backing up public records before they are censored.
News media, watchdogs, scientists, and librarians are all working to preserve public information as a curtain descends across our government of the people, complemented by what amounts to transparency theater on official social media feeds. Even at this moment, however, the potential of AI to improve agency operations remains real, including administration of the Freedom of Information Act (“FOIA”).
AI has potentially positive applications at every level and branch of government, starting with pilots at relatively low levels of risk. In Congress, that may include transcription, translation, or drafting letters and responses to constituents. In agencies, redacting personally identifiable information (“PII”) or witnesses from videos will be critical to the ethical disclosure of records generated by closed-circuit cameras, dash cams, police body cams, and smartphones.
When the last term of the US Freedom of Information Act Advisory Committee considered AI and government, however, we voted for further study instead of making substantive recommendations for specific actions or against specific use cases. In the interim, agencies have moved forward, with some laudable outcomes. Eric Stein, the State Department’s Chief FOIA officer, briefed our committee on the extraordinary work he and his colleagues did to improve public access through careful uses of technology and enlightened policy—particularly the “release to one, release to all” policy that I advocated for at the Sunlight Foundation in 2016 and over the decade since.
In a world where all government agencies had spent the past decade cleaning and structuring public records into open government data, artificial intelligence could play a key role in accelerating not just searches but also responses to FOIA requests and productions after court orders directing disclosure.
That is not the world we are living in, however, after the Trump and Biden Administrations neglected to fully implement the Open Government Data Act across the vast enterprises of the executive branch. Instead, public records remain scattered across print and digital formats, locked in proprietary systems, lost on personal devices, or consigned to obscurity in vaults.
AI depends on access to accurate open data. Garbage data will return garbage outcomes, from distorted insights to algorithmic delusions. Agencies must now dedicate more human capacity to the careful process of building data warehouses and disclosing records so that disclosures do not compromise personal information, ongoing investigations, or national security.
It is plausible that the efforts of the U.S. DOGE Agency to access and extract data could aid this, but the intentional shift to shield it from FOIA does not bode well for transparency. Firing civic technologists at 18F and the former US Digital Service removed precisely the people who could have helped FOIA officers most. Firing the Director of the Office of Information Policy at the Department of Justice and an unknown number of FOIA and privacy officials across agencies has removed the human capacity necessary to ensure public access is balanced with privacy rights and to sanction officials who blatantly violate the public’s right to know or outright conceal or destroy records.
It’s a historic mistake to eviscerate human FOIA capacity and hope generative AI can somehow make up the difference. The administration cannot fire its way out of the crisis in public access or try to bolt AI onto legacy systems. Instead, we need to see AI used to augment human capacities to move carefully and disclose things. The destruction of human capacity to administer FOIA has happened at the worst possible time, as we see the potential for AI-generated requests to drown agencies in demand.
This phenomenon will create a “distributed denial of democracy” across our nation in the name of transparency, preventing public interest requests from being fulfilled during the period when disclosure might provide the public and overseers with timely, actionable knowledge about what is happening.
A more enlightened policy would instead focus on hiring and retaining more FOIA officials, pairing them with technologists, and applying AI to their work. For instance, agencies might focus on helping first-party requesters seeking access to information about themselves or their loved ones, from people in the immigration system to veterans. Indeed, modernizing USCIS systems has been a long-standing recommendation from the FOIA Advisory Committee and the DHS Subcommittee on Open Government.
Under former Archivist of the United States Shogan and Deputy Archivist Bosanko, the National Archives and Records Administration was exploring how AI could identify specific forms in records to accelerate review and release. That work remains critical if the National Declassification Center is going to have any prayer of keeping up with the accelerating volume of classified records generated across a government that continues to overclassify public information in the name of national security.
It’s unclear whether that work continues, however, and that’s a huge problem. First principles of open government must be applied to AI across government, and particularly to FOIA: algorithmic transparency, accountability, and explainability.
Any government of the people must ensure all AI use cases are transparent to the people through inventories posted online in open formats on a weekly basis. Agencies must not be allowed to use AI tools that return inaccurate results and then claim to have conducted an effective search. Like public records and public databases, public sector code must be subject to FOIA in the 21st century, not shrouded in secrecy. Where AI models are “black boxes” that cannot be open sourced and subjected to public scrutiny, risk-limiting audits of their outcomes should be open by default.
Handicapping the capacity of our government to respond to FOIA requests and disclose responsive documents is an overt attack on the American public’s right to know and access information. The continued silence in Congress about these actions is a dereliction of duty that harms everyone who believes in government transparency and accountability. It’s time, once again, to let the sunshine in.
Alexander B. Howard is a writer, digital governance expert, and open government advocate based in Washington, DC. He edits Civic Texts, a publication focused on emerging technologies and governance. Previously, he was the director of the Digital Democracy Project at the Demand Progress Educational Fund, a nonprofit focused on improving democratic governance. He was also the deputy director of the Sunlight Foundation, the first senior editor for technology and society at the Huffington Post, and has held fellowships on networked transparency and data journalism at Harvard and Columbia University. Alex was a member of the U.S. Freedom of Information Act Advisory Committee at the U.S. National Archives.