During a lengthy deposition, former investment banker Justin Fox, now associated with DOGE, struggled to define DEI. He admitted to using ChatGPT to scan government contracts for specific demographic terms, excluding others. Fox also made and then retracted a claim that a grant he helped cut was “not for the benefit of humankind.” These exchanges offer a glimpse into the operational methods of DOGE, an organization linked to significant damage and negative consequences despite failing to reduce the government deficit.


After six hours immersed in deposition testimony from individuals associated with DOGE, one comes away with a distinct impression. It's not just the duration; it's the content, or at times the lack of it, that stands out.

One individual, a former investment banker now involved with DOGE, faced a series of pointed questions during his deposition. Pressed on his understanding of Diversity, Equity, and Inclusion (DEI), he was notably evasive and offered no concrete definition. He admitted to using AI, specifically ChatGPT, to scan government contracts for terms like "Black" and "homosexual," yet he did not run the same search for terms like "white" or "caucasian." He also asserted that a grant he helped cut was "not for the benefit of humankind," though he later attempted to retract the statement.

Watching the full six-hour deposition, from clipped exchanges to meandering arguments and uncomfortable silences, was quite an experience. The videos, publicly released as part of a lawsuit filed by prominent academic organizations, offer a stark and unsettling look into the mindset of some within DOGE. Even allowing for the witness's struggle to answer seemingly straightforward questions, the testimony illustrates what appears to be a reckless, heavy-handed approach by a group of young and seemingly inexperienced individuals. Their actions have been linked to significant damage across the U.S. government, with consequences that extend well beyond it: DOGE's actions have reportedly been associated with a substantial number of deaths and multiple significant data breaches, all while failing to actually reduce the government's deficit.

Prior to his involvement with DOGE, this individual was an associate at a private equity firm; he is now a co-founder of a company focused on the senior care sector. By his account, the company acquires businesses, integrates technology, and aims to raise pay for nurses and caregivers so that the aging population has sufficient care. He also stated, under oath, that he had no experience in government or public grant administration before joining DOGE, a point that makes his subsequent actions all the more concerning.

The narrative that emerges from this testimony is one of a group that, despite their supposed drive for efficiency and cost-cutting, seems to lack a fundamental understanding of the complex systems they’ve influenced. Their approach to complex issues, like DEI, appears to be based on superficial keyword searches rather than any deeper comprehension of the underlying principles. The decision to use AI to identify specific terms without a corresponding effort to analyze broader categories raises questions about their objectivity and motivations.

The justification for cutting grants, like the one deemed “not for the benefit of humankind,” further highlights a potentially callous disregard for the impact of their decisions. While the retraction of such a statement might seem like a step towards correction, the initial utterance reveals a concerning perspective. The sheer scope of the damage attributed to DOGE, from data breaches to the tragic loss of life, underscores the profound consequences of decisions made by individuals who may lack the necessary experience and foresight.

The idea of using AI to streamline operations in sectors like senior care, while seemingly innovative, also raises red flags when paired with this testimony. The claim of "adapting technology to pay nurses and caregivers more" can easily be read as a smokescreen for a more calculated strategy: optimizing workflows to reduce the overall number of human staff, promising slightly higher wages for those who remain while drastically increasing their workload. In that scenario, the advertised benefits mask a more complex and potentially exploitative reality for both caregivers and the elderly they serve.

The disconnect between stated goals and apparent outcomes is a recurring theme. The inability to articulate a clear definition of DEI, coupled with a selective use of AI for contract analysis, suggests a superficial engagement with critical societal issues. That, in turn, casts doubt on the genuine intentions behind these initiatives, especially when set against the significant negative repercussions observed. The view that the ultimate goal was never saving money or lives, but breaking systems for personal gain and power, resonates strongly with the patterns in this testimony. It points to a mindset that prioritizes wealth accumulation and influence over the well-being of individuals and the integrity of public institutions.