Trump Administration Using AI to Speed Up Deregulatory Effort After Loper Bright 

July 30, 2025


The first several months of the Trump Administration have focused on executive orders, agency reorganization, and budget reconciliation.  But attention is now shifting to the meat of Executive Branch reform: deregulation.  The Washington Post reports that DOGE has built a deregulatory tool that harnesses AI to assist agencies in identifying and eliminating unnecessary or unlawful regulations. 

The July 1st DOGE presentation, obtained by the Post, highlights the AI tools and cites Loper Bright and Executive Order 14219 as the basis for identifying rules that exceed statutory authorization.  Any deregulatory effort will likely face swift legal challenge under the Administrative Procedure Act.  And the novel use of AI tools is sure to raise a host of new legal questions.

Courts often examine deregulatory efforts for reasoned decision-making under an arbitrary-and-capricious standard of review.  The DOGE presentation repeatedly stresses that agency policy staff and attorneys must be intimately involved in the decision-making process.  DOGE asserts the tool “enables agencies to comment and modify” the results, while it “automatically drafts all submission documents for attorneys to edit.”  Yet the “policy and legal teams [must] make all the decisions.”  That is a sage warning, but will agencies heed it in practice?  And how will courts react when analyzing the AI-driven processes behind particular deregulatory efforts?

One efficiency the tool boasts is the ability to “analyze & respond to 100,000+ [public regulatory] comments” in a fraction of the time it would take a staffer.  But this raises a question: will review, summaries, and responses drafted by an AI tool and only later reviewed by a staffer satisfy the APA’s requirement that agencies give meaningful consideration to commenters’ filings?

And how will agency staff and courts deal with so-called hallucinations?  Stories continue to proliferate about courts themselves issuing opinions that contain errors seemingly introduced by AI drafters.  Presumably those drafts were reviewed and given the judge’s stamp of approval before being issued.  Once the mistakes were identified, courts withdrew the opinions and corrected the errors.  But the federal rulemaking process is not so amenable to post hoc revisions on the fly.  Agencies are usually stuck with the reasoning and administrative record they publish alongside a final rule.  And it is certainly foreseeable that a court would find an error-laden final rule arbitrary and capricious, even if the error is a minor one that is easily corrected.

While the use of AI in federal rulemaking raises novel legal questions, using it to comb through the Code of Federal Regulations has enormous potential to identify areas of duplication or deviation from statutory mandates.  The sheer scale of an overgrown regulatory state not only burdens the regulated community but also hampers a reform-minded administration’s ability to assess where changes are most needed or mandated by changes in law.  An AI tool is well suited to help sift through such large amounts of data and pinpoint places where agency staff should focus their efforts.  New tools always require experimentation, refinement, and redeployment.  The use of AI in federal rulemaking will be no different.  But it’s heartening to see the Trump Administration’s willingness to experiment with something new.