The article argues that Silicon Valley leaders like Alex Karp and Sam Altman are transforming intelligence into a commoditized utility rather than a human aspiration, with AI poised to devalue humanities-based education and the economic power of those trained in it. The article frames this shift as a strategic move to benefit vocational, often male, working-class voters and to counter the gains women have made in the knowledge economy, amounting to a form of revenge against the educated professional and managerial classes. Combined with significant political influence and investment, this technological revolution aims to weaken higher education and consolidate power among a select few, leaving many without meaningful employment or intellectual fulfillment.

Read the original article here

There’s a growing sense that some very influential voices in the tech world aren’t just building the future; they’re trying to shape it in a way that benefits them, perhaps at the expense of many others. It feels like they’re starting to say the quiet part out loud: they want us to be less… well, less smart.

The idea of intelligence becoming a commodity, something we simply “buy” like electricity or water, is being floated by some of the biggest names. Imagine a world where accessing knowledge, understanding complex issues, or even just formulating your own thoughts requires a subscription. This isn’t about progress and liberation; it sounds more like a carefully curated intellectual dependency, where access to understanding is controlled by a select few who stand to profit immensely.

And it’s not just about making knowledge a paid service. There’s a more concerning undertone suggesting that the rise of artificial intelligence might deliberately disadvantage certain groups. One Silicon Valley figure has even suggested that AI’s development will specifically set back women who tend to vote for Democratic causes. This isn’t just a casual observation; it hints at a strategic intention, a desire to subtly manipulate societal power structures by leveraging technology.

This perspective aligns with a rather bleak view of how power operates. The underlying sentiment is that powerful entities don’t actually want a population capable of deep, critical thinking. They prefer an obedient workforce, people who can operate the machinery of the modern world but lack the insight to question the systems that might be leaving them behind. It’s about keeping people just smart enough to be useful, but not smart enough to realize when they’re being exploited.

When you combine these ideas, intelligence as a metered utility and AI steered to disadvantage specific demographics, a troubling picture emerges. It suggests a future where access to understanding is restricted, and where the very tools designed to advance society could be weaponized to maintain or even deepen existing inequalities. The dream of robots liberating us from labor seems to be morphing into a reality where we are liberated from our jobs and, perhaps, from our own intellectual autonomy.

It’s easy to dismiss these pronouncements as mere speculation or the ramblings of eccentric billionaires. However, when these ideas come from the very people building the foundational technologies of our future, and when they’re openly discussed at high-profile summits, it demands serious consideration. The notion that AI’s trajectory could be deliberately steered to influence political outcomes, particularly by undermining the power of groups aligned with human rights and democratic principles, is deeply unsettling.

The language used by some of these figures, framing intelligence as a utility and discussing its potential to disrupt political bases, suggests a calculated approach to shaping society. It’s as if they see a future where their control over advanced AI grants them an unprecedented level of influence over public discourse and political power. The fear is that this influence won’t be used to uplift humanity, but to consolidate power and profit, potentially at the cost of informed citizenry and democratic ideals.

The idea that this is what “they’re going for” implies a conscious effort to engineer a specific societal outcome. It’s a stark contrast to the optimistic narratives of AI as a tool for universal good. Instead, it points toward a future where the most significant advances in technology are deployed not for the benefit of all, but to serve the interests of a select few, some of whom may be actively working to diminish the intellectual and political agency of others. This is not a future where we are liberated; it’s a future where we might be increasingly controlled.