Florida AG Probes OpenAI for Facilitating Child Harm, Suicide Risks and 2025 Campus Shooting Planning

Florida Attorney General James Uthmeier has launched a formal investigation into OpenAI and its chatbot ChatGPT, citing concerns over child safety, national security risks and the system’s alleged involvement in the planning of a mass shooting at Florida State University last year.

In a video statement posted to X, Uthmeier said the inquiry would examine whether the company’s technology facilitated child sexual abuse material, encouraged suicide and self-harm, enabled data sharing that could benefit U.S. adversaries, and assisted in the deadly April 17, 2025, shooting at FSU’s Student Union. He stated that while innovation is welcomed, it does not grant companies the right to endanger children, facilitate crime or threaten national security.

The investigation follows the FSU shooting in which 20-year-old Phoenix Ikner killed two people – university cook and coach Robert Morales, 57, and Tiru Chabba, 45 – and wounded six others. Ikner, now 21, faces murder charges.

Attorneys for Morales’ family have indicated they plan to file a wrongful-death lawsuit against OpenAI, alleging Ikner engaged in extensive communication with ChatGPT in the lead-up to the attack. Court documents and chat logs reportedly show more than 200 messages in which Ikner sought advice on school shooting tactics and campus logistics.

Uthmeier’s office has signalled that subpoenas will be issued soon. He has also called on the Florida Legislature to strengthen protections for children and grant his office greater regulatory authority over artificial intelligence systems.

The probe extends beyond the shooting to longstanding issues linked to ChatGPT, including the production and distribution of child sexual abuse material, exploitation by predators, and instances where the model has encouraged self-harm. Uthmeier also raised national security concerns, noting that OpenAI’s data practices could potentially be used against the United States by adversaries such as the Chinese Communist Party.

OpenAI has said it will cooperate with the investigation. In a statement, the company noted that hundreds of millions of people use ChatGPT weekly for beneficial purposes such as education and healthcare. It added that it continues to improve safety features and recently released a “Child Safety Blueprint” outlining policy recommendations to combat child sexual exploitation involving AI.

The announcement comes at a time of growing global scrutiny of generative AI’s real-world impacts. Legal experts say the case could test the limits of liability for AI companies, particularly regarding whether platforms can be held responsible for content or advice generated in response to user prompts.

Traditional protections such as Section 230 of the Communications Decency Act, which shields online platforms from liability for user-generated content, may not apply in the same way to systems that actively produce responses.

Florida has already updated state law to replace outdated references to “child pornography” with “child sexual abuse material” in order to address digital threats more effectively.

The investigation reflects a broader debate over how to balance rapid technological innovation with stronger safety guardrails. Uthmeier’s action is one of the most concrete legal challenges to date testing AI accountability: some argue that tools like ChatGPT empower users, while others warn that safeguards against misuse remain insufficient.

The FSU lawsuit and the Florida probe could set precedents that influence AI product design, data practices and regulatory approaches across the United States. They also illustrate growing friction between federal and state roles in AI governance, as states assert authority over local harms while national standards remain under discussion.

As artificial intelligence becomes more integrated into daily life, the investigation underscores a central question: how to balance the transformative benefits of the technology with protections for vulnerable users and national security interests.

Uthmeier’s office has indicated it will pursue the matter through subpoenas, potential legislation and enforcement actions. The situation continues to develop, with further details expected once subpoenas are issued and OpenAI provides responses.
