The Trump administration’s Department of Government Efficiency (DOGE) drastically reduced National Endowment for the Humanities (NEH) funding, cancelling over $100 million in projected grants. The initiative, which used ChatGPT to identify projects related to Diversity, Equity, and Inclusion (DEI), led to the rejection of numerous proposals. Notably, a roughly $349,000 grant to replace the High Point Museum’s HVAC system was cancelled after the AI flagged it as “#DEI,” despite its primary purpose being artifact preservation and energy efficiency. These actions, challenged in court as unconstitutional discrimination, appear to have extended beyond explicit DEI initiatives, even impacting projects deemed “harmless” by NEH officials.


It’s certainly a head-scratcher, isn’t it? Court documents are now revealing a rather peculiar situation where a significant grant, earmarked for a museum’s HVAC system, was reportedly cancelled because an AI tool, ChatGPT, flagged it as related to DEI – Diversity, Equity, and Inclusion. We’re talking about a substantial sum, around $349,000, that was essentially pulled because of an algorithm’s assessment, according to what’s surfacing in legal filings.

This whole affair stems from actions taken by the Department of Government Efficiency (DOGE) during the Trump administration. DOGE was looking to cut funding, and more than $100 million in grants distributed through the National Endowment for the Humanities (NEH) was put on the chopping block. The stated justification for these cuts was that the projects were tied to DEI initiatives.

The individuals tasked with identifying these projects were reportedly using ChatGPT as a tool to sift through proposals. It’s quite the modern approach to fiscal scrutiny, relying on artificial intelligence to determine if a project aligns with specific government priorities, or in this case, if it deviates from them in a way that warrants defunding. The employees mentioned in the filings are Justin Fox and Nate Cavanaugh, who were overseeing these funding reductions.

This decision didn’t go unnoticed or unchallenged, as you might imagine. Several prominent academic and literary organizations, including the American Council of Learned Societies, the American Historical Association, the Modern Language Association, and the Authors Guild, have collectively filed a motion. Their argument is that by cancelling grants based on DEI criteria, DOGE has, in effect, violated fundamental constitutional rights, specifically those protected by the First Amendment and the equal protection clause.

They contend that making funding decisions based on whether a project relates to DEI is inherently discriminatory. The basis for discrimination, as they see it, could encompass race, ethnicity, gender, and other protected characteristics. It raises a significant question about how these broad mandates are interpreted and applied, especially when an AI is the intermediary.

The situation highlights a fascinating, and perhaps concerning, intersection of technology, government policy, and civil liberties. The idea that a crucial infrastructure project for a museum – something as practical as maintaining its environment – could be deemed extraneous or even problematic due to its perceived association with DEI is certainly a point of discussion.

It also brings into sharp relief the evolving role of AI in decision-making processes. When an algorithm’s output, however sophisticated, can lead to consequences as tangible as defunding a cultural institution’s essential maintenance, it prompts a deeper look at the reliability and potential biases of these AI systems themselves.

The reliance on ChatGPT for such nuanced determinations is particularly striking. While AI can be incredibly powerful for data analysis and pattern recognition, its capacity to accurately interpret complex social and political concepts like DEI, especially in the context of artistic and cultural preservation, is a subject of ongoing debate. The very nature of how these prompts are phrased and how the AI interprets them could lead to unintended outcomes.
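To make the brittleness concrete, here is a purely hypothetical sketch of how crude text-based flagging can mislabel a proposal. This is not the actual process described in the court filings (which reportedly used ChatGPT, not keyword matching); it simply illustrates how a single incidental word in ordinary grant language can trip an automated flag.

```python
# Hypothetical illustration only: a naive keyword-based flagger.
# The real review reportedly used ChatGPT; keyword matching stands in
# here to show how surface-level text cues can misclassify a proposal.

DEI_KEYWORDS = {"diversity", "equity", "inclusion", "inclusive", "underrepresented"}

def flag_as_dei(proposal_text: str) -> bool:
    """Flag a proposal if any keyword appears anywhere in its text."""
    words = proposal_text.lower().split()
    return any(word.strip(".,;:") in DEI_KEYWORDS for word in words)

# A grant for HVAC replacement, phrased in ordinary grant language:
hvac_proposal = (
    "Replace the museum's aging HVAC system to preserve artifacts, "
    "improve energy efficiency, and keep galleries comfortable and "
    "inclusive for all visitors."
)

print(flag_as_dei(hvac_proposal))  # True: one incidental word trips the flag
```

The point of the sketch is that the flag fires on the word “inclusive” even though the proposal is about climate control, which mirrors the concern that an AI’s interpretation of a broad prompt can sweep in projects far outside the intended category.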

The legal challenge suggests a fundamental disagreement about the process and the criteria used for these funding decisions. If the court finds in favor of these organizations, it could have significant implications for how government agencies utilize AI in policy implementation and funding allocation. It also underscores the importance of human oversight and the potential pitfalls of automating judgment on matters that require deep contextual understanding.

Ultimately, this incident serves as a stark reminder of the challenges we face as we integrate artificial intelligence into critical governmental functions. It’s not just about the technology itself, but about how we choose to deploy it, the frameworks we build around it, and the potential for it to inadvertently amplify existing societal divisions or create new ones. The future of funding for cultural institutions and the very definition of what constitutes a legitimate government expenditure seem to be, in part, at stake in this unfolding legal narrative.