AI accountability

Baltimore Sues Elon Musk’s xAI Over Grok Deepfake Harassment

Baltimore is taking Elon Musk’s xAI to court, alleging that its AI chatbot, Grok, has generated and disseminated sexual “deepfakes.” The lawsuit poses a significant legal challenge to the burgeoning field of artificial intelligence and could set a precedent for how AI developers are held accountable for the content their creations produce. The complaint appears to center on the AI’s ability to generate harmful and illegal imagery, a concern that has been building as AI technology becomes more sophisticated and accessible.

The legal action, spearheaded by the City of Baltimore, highlights a growing sentiment that lawsuits might be the ultimate mechanism for controlling AI.… Continue reading

Family Sues OpenAI Over Tumbler Ridge Shooting Victim’s Trauma


Read More

AI Error Jails Tennessee Grandmother For Six Months

A Tennessee grandmother spent nearly six months incarcerated after an artificial intelligence facial recognition system misidentified her as a suspect in a North Dakota bank fraud investigation. Despite never having been to North Dakota, Angela Lipps was arrested at gunpoint and jailed while awaiting extradition. She was released only after her attorney presented bank records proving she was more than 1,200 miles away at the time of the alleged fraud, underscoring the critical need for investigation beyond sole reliance on facial recognition technology. The incident is part of a growing pattern of AI errors leading to wrongful accusations, including a case in which an AI mistook a bag of chips for a firearm and another in which a UK man was arrested for a burglary he did not commit.

Read More

AI’s Surgical Blunders Highlight Risks in Operating Room

The arrival of artificial intelligence in the operating room, a prospect once lauded as a revolution in precision and efficiency, is now casting a shadow of concern with emerging reports of botched surgeries and misidentified body parts. This development sparks a visceral reaction, a primal scream against the idea of a machine, prone to glitches and errors, making life-or-death decisions. The thought of succumbing to a mechanical malfunction, a digital hiccup leading to a severed artery, is a chilling prospect that evokes a deep-seated preference for the imperfect, yet undeniably human, touch of highly trained professionals.

The notion that an AI susceptible to “hallucinations” – the industry’s term for generating nonsensical or factually incorrect output – could misidentify crucial anatomical structures is not just unsettling; to many, it feels almost alarmingly predictable.… Continue reading

Post Office Scandal: Wrongful Convictions, Suicides, and AI’s Dark Side

A recent report revealed that at least 13 people took their own lives due to the British Post Office scandal, where nearly 1,000 postal employees were wrongfully prosecuted based on flawed data from the Horizon computer system. The system, implemented around 1999, falsely indicated financial shortfalls, leading to accusations of theft and fraud, with many facing imprisonment, bankruptcy, and social ostracism. The public inquiry, led by retired judge Wyn Williams, found that some senior Post Office employees knew of the system’s issues, yet the organization maintained the accuracy of the data, causing immense suffering to the victims. The government has since initiated measures to overturn convictions and compensate those affected, with further reports expected to determine accountability for the scandal.

Read More