The documents reportedly contained an alleged statement that the author intended to “lead by example” by committing crimes, to demonstrate the sincerity of his advocacy for others to do the same. The documents also purportedly listed the names and addresses of key individuals at AI companies, including board members, CEOs, and investors, suggesting a direct threat and an intent to target leadership in the artificial intelligence sector.
Read More
The home of OpenAI CEO Sam Altman was reportedly the target of a second incident early Sunday morning, with a car stopping and appearing to fire a shot at the Russian Hill property. This follows an alleged Molotov cocktail attack on Friday. Police have arrested two suspects, Amanda Tom and Muhamad Tarik Hussein, for negligent discharge, and a search of their residence yielded three firearms. The incidents occurred amid heightened concerns about AI, which Altman himself has acknowledged.
Read More
The Parents & Kids Safe AI Coalition contacted organizers of child safety groups about policy priorities for AI regulation, including age verification and parental controls. Many of those organizers, however, were unaware that the coalition was entirely funded by OpenAI, maker of the popular ChatGPT chatbot. Once OpenAI’s substantial role and funding became apparent, this lack of transparency led some groups to withdraw their support. The episode highlights concerns that AI companies may be attempting to unduly influence child safety legislation, with some advocates calling for them to step back from policy discussions.
Read More
OpenAI has announced the upcoming closure of its Sora AI video-generation app, a move that comes amid intense competition and a strategic refocusing of resources. This decision follows significant industry concern regarding the potential impact of Sora on creative professionals and the ethical implications of its realistic video generation capabilities. The company is expected to provide further details on timelines and data preservation shortly.
Read More
The recent resignation of OpenAI’s robotics head following a deal with the Pentagon has ignited a flurry of discussion, and frankly, it’s a situation that raises some significant ethical questions about the future of artificial intelligence. It appears this departure stems from deep-seated concerns about the direction OpenAI is heading, particularly regarding the potential misuse of AI for surveillance and autonomous weaponry.
The core of the disagreement seems to revolve around the ethical boundaries that were perhaps not adequately considered before entering into this partnership. The idea of “surveillance of Americans without judicial oversight and lethal autonomy without human authorization” is precisely the kind of scenario that triggers alarm bells for many.… Continue reading
OpenAI CEO Sam Altman stated that the company does not control the Pentagon’s operational decisions regarding their AI products, even as the military reportedly uses AI in operations like the seizure of Nicolás Maduro and targeting in the conflict with Iran. This comes amidst employee and public concern that OpenAI has crossed ethical lines that rival Anthropic refused to, particularly after the Pentagon declared Anthropic a “supply-chain risk” for refusing a deal. Despite Altman’s assurances of legal use and efforts at damage control, Anthropic’s CEO accused OpenAI of “safety theater” and political motivations behind their Pentagon agreement.
Read More
Following initial backlash over concerns of loopholes for domestic surveillance, OpenAI has announced a reworked agreement with the Pentagon. The revised terms explicitly state that the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals, and defense intelligence components are excluded from this contract. Despite these changes, some observers and legal experts remain skeptical, citing the lack of public release of the full contract and lingering concerns about broad interpretations of the terms. This development occurs amidst broader debates between AI companies and the military regarding ethical AI usage in national defense.
Read More
Following a Pentagon directive that federal agencies cease use of its AI tools, Anthropic now faces a “supply chain risk” designation. OpenAI, by contrast, has secured a Pentagon deal to deploy its AI tools within classified systems, contingent on safety restrictions similar to those Anthropic had sought. These restrictions reportedly include prohibitions on domestic mass surveillance and a requirement of human responsibility in the use of force, specifically with respect to autonomous weapon systems. OpenAI will embed engineers to ensure model safety and is advocating that the same terms be offered to all AI companies, encouraging the government to de-escalate from unilateral actions toward mutually agreed-upon terms.
Read More
OpenAI, the creator of ChatGPT, revealed that last June it had flagged the account of Jesse Van Rootselaar for activity related to the “furtherance of violent activities” and had considered alerting Canadian police. At the time, however, the company determined that the activity did not meet its threshold for referral to law enforcement, which requires an imminent and credible risk of serious physical harm. Following the school shooting in which Van Rootselaar killed eight people, OpenAI proactively shared information about the individual’s use of ChatGPT with the Royal Canadian Mounted Police to support their ongoing investigation. The RCMP confirmed receiving this information and is conducting a thorough review of the suspect’s digital and physical evidence.
Read More
Nvidia’s plan to invest up to $100 billion in OpenAI has stalled, according to reports from the Wall Street Journal. It seems like a massive deal, a staggering amount of money, but digging a little deeper reveals that this isn’t just about a straightforward investment. It was more like a plan to *give* OpenAI $100 billion, with the expectation that OpenAI would immediately turn around and order GPUs from Nvidia. This intricate arrangement was essentially designed to boost Nvidia’s sales projections, keeping those valuations sky-high, even while the CEO was selling off billions of dollars of stock and apparently funneling funds through a “charity.”… Continue reading