OpenAI Flagged Shooter's Chats, Did Not Alert Police
OpenAI banned a ChatGPT account in June 2025 linked to Jesse Van Rootselaar but did not report it; the company later provided evidence to the RCMP after the Tumbler Ridge attack.
Overview
OpenAI identified and banned a ChatGPT account linked to Jesse Van Rootselaar in June 2025 but did not alert law enforcement at that time, according to reports and company statements.
The account belonged to the person suspected of killing eight people and injuring roughly 25 to 27 others in attacks in Tumbler Ridge, British Columbia, authorities said.
About a dozen OpenAI employees raised alarms months earlier and debated reporting the chats to police, but company leaders concluded the submissions did not show an 'imminent and credible' plan, according to OpenAI and news reports.
A provincial representative met with OpenAI employees on February 11 about a satellite office, and the company requested contact information for the Royal Canadian Mounted Police the following day, the province said.
OpenAI said it later proactively handed evidence from the banned account to the Royal Canadian Mounted Police and will continue to support their investigation.
Analysis
Center-leaning sources frame the story as a cautionary tech-accountability narrative, emphasizing OpenAI staff debate, flagged chats, and related digital red flags. They use evaluative terms ("alarmed," "concerning"), prioritize institutional failures and legal risks, and highlight corroborating details (Roblox game, police reports) to suggest inadequate preventive action.
FAQ
Why was the account banned?
OpenAI banned the account in June 2025 because it violated usage policies, likely involving content related to violence or hate, as flagged by the company's systems.
Why did OpenAI not alert police at the time?
Company leaders determined the chats did not show an 'imminent and credible' plan for harm, despite internal debates among employees.
What did OpenAI do after the attack?
OpenAI proactively provided evidence from the banned account to the Royal Canadian Mounted Police (RCMP) and committed to supporting their investigation.
When does OpenAI refer cases to law enforcement?
OpenAI reviews cases involving planning to harm others and refers them to law enforcement only if there is an imminent threat of serious physical harm; self-harm cases are not reported, to respect privacy.
What does OpenAI's usage policy prohibit?
The policy prohibits terrorism, violence including hate-based violence, weapons development or use, and promotion of suicide or self-harm.