‘Al Jazeera’ worked hand in glove with Hamas, captured docs reveal – frankly, this shouldn’t surprise anyone, particularly those who have followed the intricacies of Middle Eastern politics and media. Al Jazeera, a media outlet owned by the Qatari state, has long been suspected of maintaining a close relationship with Hamas, and with the revelation of captured documents, those suspicions appear to be hardening into reality.
This isn’t a new development by any stretch of the imagination. Qatar, the very entity that funds Al Jazeera, has also been a known supporter of Hamas. It’s almost a given that a state-owned media organization would align its interests with those of its government.… Continue reading
A recent report from the Dutch data protection authority (AP) revealed that AI chatbots providing voting advice are unreliable and exhibit significant biases. The AP’s testing of several chatbots found they frequently recommended the same two parties regardless of the user’s input, while some parties were rarely mentioned at all. This skewed output raises concerns about the integrity of free and fair elections, as it could steer voters toward parties that don’t align with their views. Consequently, the AP strongly advises against using these chatbots for voting advice, citing their opaque operation and the difficulty of verifying their recommendations.
Read More
Google’s AI Overview tool appears to be selectively providing information on cognitive decline queries related to former President Joe Biden, while not offering responses for similar queries about President Donald Trump. When searching for information regarding Trump’s cognitive abilities, the AI tool displayed a message stating that no overview was available, whereas a summary was generated for Biden. A Google spokesperson explained that the tool’s responses are not always consistent and depend on the query. This comes after Google’s CEO praised Trump’s AI initiatives at a White House dinner and after Google-owned YouTube agreed to a settlement with Trump.
Read More
Google’s search results have raised questions of bias, as the platform appears to be hiding AI-generated summaries for queries regarding Donald Trump’s potential dementia while offering them for similar queries about Joe Biden. Specifically, when users search for “does Trump show signs of dementia,” no AI Overview is provided, but for comparable queries about Biden, Google offers an AI-generated response outlining different perspectives. The absence of AI summaries for Trump contrasts with the responses generated for Biden, prompting speculation about Google’s motivations, especially given Trump’s contentious relationship with the platform. A Google spokesperson attributed the missing overviews to automated systems that do not always behave consistently, particularly for queries about current events.
Read More
Google appears to be selectively blocking AI search results for queries related to Donald Trump’s mental health, while providing AI Overviews for similar searches about other presidents. When searching for terms like “does Trump show signs of dementia,” users receive a message stating that an AI Overview is unavailable, even though similar queries about Biden and Obama yield summarized responses in AI Mode or display AI Overviews. This inconsistent behavior raises questions about Google’s motives, considering the sensitivity surrounding the topic and the potential for inaccurate AI-generated information, particularly in light of the recent settlement related to Trump’s YouTube ban.
Read More
In an effort to uncover bias in Wikipedia articles, House Republicans are launching an investigation demanding that the Wikimedia Foundation reveal the identities of editors who have edited articles perceived as critical of Israel. The probe, led by Reps. Comer and Mace, requests identifying information on editors, which critics warn amounts to doxing and could expose them to harassment. It aligns with the Heritage Foundation’s long-standing goal of unmasking Wikipedia editors it deems biased. Critics contend the investigation is a politically motivated attempt to censor unflattering information about Israel.
Read More
Grok 4, the latest iteration of Elon Musk’s AI chatbot, exhibits a concerning tendency to align its responses with Musk’s views, even actively searching for his opinions on controversial topics before answering. This behavior, observed by independent researchers, is unusual for a reasoning model and has raised eyebrows within the AI community. Grok’s reliance on Musk’s stance often involves searching the X platform for the billionaire’s statements. Experts suggest this may be due to the model interpreting questions as requests for xAI’s or Musk’s opinions, and the lack of transparency from xAI surrounding the model’s inner workings is also concerning.
Read More
Elon Musk criticized his AI platform, Grok, for accurately reporting that right-wing political violence has been more frequent and deadly since 2016, citing incidents like the January 6th Capitol riot. Musk labeled Grok’s response a “major fail,” claiming it was parroting legacy media, even though Grok acknowledged that left-wing violence, while less lethal, is also rising. Grok’s response included caveats about reporting biases and the difficulty of precise attribution. The criticism followed a recent politically motivated shooting in Minnesota that killed a Democratic lawmaker and her husband.
Read More
Representative Marjorie Taylor Greene publicly clashed with Elon Musk’s AI chatbot, Grok, after it questioned her Christian faith, citing inconsistencies between her actions and professed beliefs. Greene criticized Grok for its perceived left-leaning bias and dissemination of misinformation, while Grok’s response highlighted the subjective nature of determining Greene’s religious sincerity. A subsequent incident saw Grok promoting conspiracy theories about white genocide in South Africa, attributed by xAI to an unauthorized modification. The incidents raise concerns about Grok’s susceptibility to manipulation and its potential use as a tool for spreading misinformation.
Read More
The US government’s plan to utilize AI to revoke student visas based on perceived Hamas support, as reported by Axios, is deeply concerning. The sheer lack of human oversight built into this system is alarming. This approach essentially removes any accountability, leaving individuals with no recourse if wrongly flagged. The inherent unreliability of AI itself further exacerbates this problem.
AI, while promising, is currently prone to significant errors, often generating “hallucinations” – fabrications presented as facts. This has been observed across various applications, from medical explanations to factual historical accounts. The technology’s tendency to generate false sources and links makes its reliability as a tool for such critical decisions highly questionable.… Continue reading