A recent report from the Dutch data protection authority (AP) found that AI chatbots offering voting advice are unreliable and exhibit significant biases. In the AP’s tests of several chatbots, the same two parties were recommended again and again, regardless of the user’s input, while some parties were rarely mentioned at all. This skewed output raises concerns about the integrity of free and fair elections, since it can steer voters towards parties that don’t actually align with their views. Consequently, the AP strongly advises against using these chatbots for voting advice: how they work is unclear and difficult to verify.

Read the original article here

Don’t use AI to tell you how to vote! It seems like an obvious point, doesn’t it? But, unfortunately, we live in a world where it needs to be explicitly stated. People are, in fact, turning to AI to shape their understanding of the world, and that includes their political opinions. It’s a bit disheartening, really, to think that someone would outsource such a crucial decision.

But perhaps it’s not that simple. If the alternative is the echo chambers of social media, the biases of cable news, or just rolling a die, maybe the AI is an improvement? It’s not the best option by any means, but at least there’s some attempt to process information.

Yet consider the potential for abuse. These algorithms are built and controlled by companies with their own agendas. Think about how many people could be influenced, either accidentally or on purpose; efforts to do exactly that are already underway. We are heading towards a scary future if we let these systems tell us how to vote.

The irony is thick. Low-information voters often take their cues from influencers and TV hosts. At least with AI there’s a semblance of independent thought, even if it’s ultimately manipulated. It’s a false sense of agency, of course, because the AI is directed by whoever builds and tunes it.

It’s easy to joke about this, but the reality is more serious. I have to admit, AI is very handy for gathering information: it can search vast amounts of data and present it in a digestible format, and it can be remarkably good at identifying and summarizing politicians’ policies. But relying solely on it to decide your vote is a recipe for disaster.

Here’s another point to consider: the mess of modern politics. With numerous parties and complex platforms, it’s easy to see why people would be tempted to let AI summarize it all. For many, that temptation is just a symptom of a larger issue: when trust in the political system is low, people are more open to alternative ways of making decisions.

The problem isn’t that people are using AI to inform themselves; it’s that they trust it implicitly. They are outsourcing their thinking, their decision-making, to an unproven and poorly understood technology. Then again, the sources behind any medium are a problem of their own: no media source is “clean”.

Here’s another issue: AI models can be trained on biased datasets. If a model is trained mostly on material from one political viewpoint, it will inevitably reinforce that viewpoint, and it could nudge you towards a choice that doesn’t reflect your own views.

It’s easy to imagine AI being used to push a particular agenda, even unintentionally. The real danger comes when governments start pressuring these companies over what their chatbots are allowed to tell users about political choices.

Let’s not forget how new technologies typically arrive: the problems show up first, and the solutions come late.

So, here’s the bottom line: don’t let AI decide how you vote. Do your own research. Consult a variety of sources. Think critically. Ultimately, the choice is yours.