OpenAI Flagged Shooter's Chats, Did Not Alert Police

OpenAI banned a ChatGPT account in June 2025 linked to Jesse Van Rootselaar but did not report it; the company later provided evidence to the RCMP after the Tumbler Ridge attack.

Overview

A summary of the key points of this story verified across multiple sources.

1. OpenAI identified and banned a ChatGPT account linked to Jesse Van Rootselaar in June 2025 but did not alert law enforcement at that time, according to reports and company statements.

2. The account belonged to the person suspected of killing eight people and injuring roughly 25 to 27 others in attacks in Tumbler Ridge, British Columbia, authorities said.

3. About a dozen OpenAI employees raised alarms months earlier and debated reporting the chats to police, but company leaders concluded the submissions did not show an 'imminent and credible' plan, according to OpenAI and news reports.

4. A provincial representative met with OpenAI employees on February 11 about a satellite office, and the company requested contact information for the Royal Canadian Mounted Police the following day, the province said.

5. OpenAI later proactively handed evidence from the banned account to the Royal Canadian Mounted Police and will continue to support the investigation, company spokespeople said.

Written using shared reports from 7 sources.

Analysis

Compare how each side frames the story — including which facts they emphasize or leave out.

Center-leaning sources frame the story as a cautionary tech-accountability narrative, emphasizing OpenAI staff debate, flagged chats, and related digital red flags. They use evaluative terms ("alarmed," "concerning"), prioritize institutional failures and legal risks, and highlight corroborating details (Roblox game, police reports) to suggest inadequate preventive action.

Sources (7)

Compare how different news outlets are covering this story.

FAQ

Dig deeper on this story with frequently asked questions.

OpenAI banned the account in June 2025 after its systems flagged it for violating usage policies, reportedly involving content related to violence or hate.

Company leaders determined the chats did not show an 'imminent and credible' plan for harm, despite internal debate among employees who favored reporting them.

OpenAI proactively provided evidence from the banned account to the Royal Canadian Mounted Police (RCMP) and committed to supporting their investigation.

OpenAI says it reviews cases involving planning to harm others and refers them to law enforcement only when there is an imminent threat of serious physical harm; cases involving self-harm are not reported, out of respect for user privacy.

OpenAI's usage policy prohibits terrorism, violence including hate-based violence, weapons development or use, and the promotion of suicide or self-harm.

History

See how this story has evolved over time.

This story does not have any previous versions.