Court Rebukes DOGE's ChatGPT DEI Vetting
A federal judge ruled that DOGE’s use of ChatGPT to flag National Endowment for the Humanities grants as “DEI” was arbitrary, capricious, and beyond its authority. The opinion holds that AI-generated rationales cannot substitute for an actual legal or grant-review process.
This is a clean example of why high-stakes AI use collapses when the model is treated like a reason generator instead of a noisy classifier. Courts do not care that the output sounds plausible if the process behind it is undefined, undocumented, and unauthorized.
- DOGE reportedly asked ChatGPT whether grant summaries related to DEI, but gave it vague prompts and no meaningful definitions
- The court found the resulting rationales were too thin to support a lawful termination decision
- The ruling also says DOGE lacked statutory authority to terminate NEH grants in the first place
- For builders, the lesson is simple: model outputs need constrained inputs, explicit criteria, and auditable human decision ownership
- This is a policy warning shot for any government or enterprise team trying to launder discretionary decisions through LLM prose
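The "constrained inputs, explicit criteria, auditable human ownership" point can be made concrete. The sketch below is purely illustrative and not based on any system described in the ruling: every name (`Classification`, `DecisionRecord`, `decide`, the criterion text) is a hypothetical stand-in, and a stubbed object replaces the actual model call. The idea is that the model's label is treated as evidence scored against a written definition, while the decision itself requires a named human owner.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical criteria registry: explicit, written definitions the
# classifier's output must cite. Wording here is invented for illustration.
CRITERIA = {
    "dei": "Grant's primary stated purpose is a diversity, equity, or "
           "inclusion program under the agency's published definition.",
}

@dataclass
class Classification:
    label: str          # model's raw guess, e.g. "dei" / "not_dei"
    confidence: float   # model-reported or calibrated score
    criterion: str      # key of the written definition applied

@dataclass
class DecisionRecord:
    grant_id: str
    model_output: Classification
    human_owner: str    # named official who owns the decision
    final_decision: str
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide(grant_id: str, model_output: Classification,
           human_owner: str, final_decision: str) -> DecisionRecord:
    """A decision is valid only with a named human owner and a citable
    criterion; the model output is an input, never the rationale itself."""
    if not human_owner:
        raise ValueError("model output alone cannot authorize a decision")
    if model_output.criterion not in CRITERIA:
        raise ValueError("classification must cite a written criterion")
    return DecisionRecord(grant_id, model_output, human_owner, final_decision)

# Usage with a stubbed model result (hypothetical grant ID and reviewer):
guess = Classification(label="dei", confidence=0.62, criterion="dei")
record = decide("GRANT-001", guess, human_owner="reviewer@agency.example",
                final_decision="retain")
print(record.final_decision)  # -> retain
```

Nothing here makes the classifier accurate; the point is that the audit trail records who decided, against which definition, with the model's output preserved as evidence rather than laundered into prose.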
DISCOVERED
2026-05-08
PUBLISHED
2026-05-08
AUTHOR
hn_acker