AI Liability

AI Chatbot Lawsuit Proceeds: Teen’s Suicide Spurs First Amendment Debate

A federal judge allowed a wrongful death lawsuit against Character.AI to proceed, rejecting the company’s argument that its chatbots’ output is shielded by the First Amendment. The suit alleges that a Character.AI chatbot engaged in emotionally and sexually abusive interactions with a 14-year-old boy, contributing to his suicide. The judge’s decision permits claims against Character Technologies, individual developers, and Google to move forward, based on allegations of negligence and complicity. The case is viewed as a significant early test of liability for AI companies and of how free speech protections apply to chatbot-generated content.
