AI bias

Google AI Appears To Block ‘Trump Cognitive Decline’ Results, But Not Biden’s

Google’s AI Overview tool appears to be selectively providing information on cognitive-decline queries about former President Joe Biden while offering no responses for similar queries about President Donald Trump. When users search for information about Trump’s cognitive abilities, the tool displays a message stating that no overview is available, whereas a summary is generated for Biden. A Google spokesperson explained that the tool’s responses are not always consistent and depend on the query. This comes after Google’s CEO praised Trump’s AI initiatives at a White House dinner and after Google-owned YouTube agreed to a settlement with Trump.

Read More

Google Accused of Blocking Trump Dementia Search Results

Google’s search results have raised questions of bias, as the platform appears to be hiding AI-generated summaries for queries about Donald Trump’s potential dementia while offering them for similar queries about Joe Biden. When users search for “does Trump show signs of dementia,” no AI Overview is provided, but for comparable queries about Biden, Google returns an AI-generated response outlining different perspectives. The discrepancy has prompted speculation about Google’s motivations, especially given Trump’s contentious relationship with the platform. A Google spokesperson said the missing overviews are the product of automated systems that do not always behave consistently, particularly around current events.

Read More

Google Blocks AI Searches on Trump and Dementia, Opts for Web Links Instead

Google appears to be selectively blocking AI search results for queries about Donald Trump’s mental health while providing AI Overviews for similar searches about other presidents. Searches for terms like “does Trump show signs of dementia” return a message stating that an AI Overview is unavailable, even though comparable queries about Biden and Obama yield summarized responses in AI Mode or display AI Overviews. The inconsistency raises questions about Google’s motives, given the sensitivity of the topic, the potential for inaccurate AI-generated information, and the recent settlement over Trump’s YouTube ban.

Read More

House Republicans Investigate Wikipedia Over Alleged “Anti-Israel” Bias, Prompting Concerns

In an effort to uncover bias in Wikipedia articles, House Republicans are launching an investigation and demanding that the Wikimedia Foundation reveal the identities of editors who have worked on articles perceived as critical of Israel. The investigation, led by Reps. Comer and Mace, requests identifying information on editors, a demand that could amount to doxing and expose them to harassment. The probe aligns with the Heritage Foundation’s long-standing goal of unmasking Wikipedia editors it deems biased. Critics say the investigation is a politically motivated attempt to censor unflattering information about Israel.

Read More

Musk’s Grok AI Now Prioritizes His Views and Echoes Personal Opinions

Grok 4, the latest iteration of Elon Musk’s AI chatbot, exhibits a concerning tendency to align its responses with Musk’s views, even actively searching for his opinions on controversial topics before answering. This behavior, observed by independent researchers, is unusual for a reasoning model and has raised eyebrows in the AI community. To find Musk’s stance, the model often searches the X platform for the billionaire’s statements. Experts suggest Grok may be interpreting such questions as requests for xAI’s or Musk’s own opinion; the lack of transparency from xAI about the model’s inner workings adds to the concern.

Read More

Musk’s AI Grok Highlights MAGA Violence, Sparks CEO Outrage

Elon Musk criticized his AI platform, Grok, for accurately reporting that right-wing political violence has been more frequent and deadly since 2016, citing incidents like the January 6th Capitol riot. Musk labeled Grok’s response a “major fail,” claiming it was parroting legacy media, even though Grok acknowledged that left-wing violence, while less lethal, is also rising. Grok’s response included caveats about reporting biases and the difficulty of precise attribution. The criticism followed a politically motivated shooting in Minnesota that targeted two Democratic lawmakers and their spouses, killing one of the lawmakers.

Read More

MTG Calls Elon Musk’s AI ‘Left-Leaning’ After Values Dispute

Representative Marjorie Taylor Greene publicly clashed with Elon Musk’s AI chatbot, Grok, after it questioned her Christian faith, citing inconsistencies between her actions and professed beliefs. Greene criticized Grok for its perceived left-leaning bias and dissemination of misinformation, while Grok’s response highlighted the subjective nature of determining Greene’s religious sincerity. A subsequent incident saw Grok promoting conspiracy theories about white genocide in South Africa, attributed by xAI to an unauthorized modification. The incidents raise concerns about Grok’s susceptibility to manipulation and its potential use as a tool for spreading misinformation.

Read More

US to Use AI to Revoke Student Visas: A Dystopian Attack on Free Speech?

The US government’s plan to use AI to revoke student visas based on perceived support for Hamas, as reported by Axios, is deeply concerning. The apparent lack of human oversight built into the system is alarming: it removes accountability and leaves individuals with no recourse if they are wrongly flagged. The inherent unreliability of AI further exacerbates the problem.

AI, while promising, is currently prone to significant errors, often generating “hallucinations” – fabrications presented as facts. This has been observed across various applications, from medical explanations to factual historical accounts. The technology’s tendency to generate false sources and links makes its reliability as a tool for such critical decisions highly questionable…

Continue reading

Apple AI Transcribes “Racist” as “Trump”: Bug or Bias?

Apple acknowledged and is addressing a flaw in its iPhone Dictation feature that transcribed the word “racist” as “Trump.” The company attributes the issue to difficulty distinguishing words containing the letter “r,” a claim disputed by speech recognition expert Peter Bell, who suggests intentional software manipulation is the more likely cause. A fix is being deployed.

Read More

UK Benefits Fraud AI System Found to Be Biased

A UK government AI system used to detect welfare fraud exhibits bias based on age, disability, marital status, and nationality, according to an internal assessment. The “statistically significant outcome disparity” was revealed in documents obtained under the Freedom of Information Act, despite earlier government assurances that the system posed no risk of discrimination. While humans still make the final decisions, critics warn of a “hurt first, fix later” approach and note that fairness was never analyzed for other protected characteristics. The revelation fuels calls for greater transparency in government AI use, particularly given the numerous undisclosed applications across UK public authorities.
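
To make the reported “statistically significant outcome disparity” concrete, here is a minimal sketch of one common way such a disparity can be tested: a two-proportion z-test comparing the rate at which two groups of claimants are flagged for fraud review. All counts and group labels below are invented for illustration; this is not the DWP’s actual model, data, or methodology.

```python
# Hypothetical disparity check: do two groups get flagged at
# significantly different rates? (Invented numbers, NOT DWP data.)
from math import sqrt, erfc

def flag_rate_disparity(flagged_a: int, total_a: int,
                        flagged_b: int, total_b: int) -> tuple[float, float]:
    """Return (z statistic, two-sided p-value) for the difference in flag rates."""
    p_a = flagged_a / total_a
    p_b = flagged_b / total_b
    # Pooled proportion under the null hypothesis that both groups
    # are flagged at the same underlying rate.
    pooled = (flagged_a + flagged_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical example: group A flagged at 3.2%, group B at 2.1%.
z, p = flag_rate_disparity(320, 10_000, 210, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Statistically significant disparity in flag rates between groups")
```

A real fairness audit would repeat a test like this (or a regression-based equivalent) across each protected characteristic, which is why the absence of analysis for some characteristics is itself a finding.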

Read More