ChatGPT-maker OpenAI has said it considered alerting Canadian police last year about the activities of a person who months later committed one of the worst school shootings in the country’s history.

OpenAI said that last June its abuse-detection efforts identified the account of Jesse Van Rootselaar for “furtherance of violent activities”.

The San Francisco tech company said on Friday it considered whether to refer the account to the Royal Canadian Mounted Police (RCMP) but determined at the time that the account activity did not meet a threshold for referral to law enforcement.

OpenAI banned the account in June 2025 for violating its usage policy.

  • Nik282000@lemmy.ca · 4 hours ago

    Remember when facebook ran the numbers to predict if certain users were gonna kill themselves but didn’t tell anyone? As long as Canada is gonna go full China then we should follow suit and install a government overseer in EVERY big corp that operates in Canada.

  • Tigeroovy@lemmy.ca · 9 hours ago

    So glad that Canada will be investing so much money in this shit show!

    Fucking magic beans ass technology.

  • Jack_Burton@lemmy.ca · 8 hours ago

    “You bet on black and lost. I knew it would be red and considered telling you but decided not to.”

  • tangonov@lemmy.ca · edited · 14 minutes ago

    We need to recognize that this was a preventable crime without OpenAI’s intervention. Let’s stop making excuses to open up a Minority Report police state

      • tangonov@lemmy.ca · 3 hours ago

        I’m being serious but I also don’t want to argue with people about it on the Internet to be honest.

        • maplesaga@lemmy.world · 3 hours ago

          Well, you’ll just get a bunch of 1984 and history textbook quotes if you do. I don’t suggest it.

  • fourish@lemmy.world · 1 day ago

    Before passing judgement (not that our opinions matter) I would’ve liked to see what was in the OpenAI transcripts.

    • NotMyOldRedditName@lemmy.world · 17 hours ago

      Now that we know they exist, I’m sure the police will somehow get ahold of them. Could we not then eventually file a freedom of information request with the police for them?

  • Glide@lemmy.ca · 1 day ago

    Ha, no, fuck off, OpenAI.

    And how many times have you flagged someone for “furtherance of violent activities” that DIDN’T go forward to shoot up a school, or do much of anything you should intervene in? ChatGPT can’t even brainstorm multiple choice questions on a short story without hallucinating bullshit, and you want us to believe it’d be effective as the thought police?

    This is a cherry-picked argument being used to begin legitimizing AI for more serious uses, such as making legal decisions. This is not Minority Report; AI can fuck off with charging people with pre-crime.

    “Never let a good crisis go to waste.”

    • hector@lemmy.today · 3 hours ago

      Yeah, this is transparent: they’re playing on our emotions to get license to run threat detection on us, which they’re already doing as much as they’re able. They’re using age controls and the like to ID every account with likeness and ID, tying everything you say or look at to all the cameras, microphones, and records of you, to draw half-baked conclusions to be used against you in secret, in ways you can’t know and won’t be able to challenge.

      Bank loans, background checks, police attention, court treatment, government and business treatment in general, the digital price tags you’re shown, which search results the engines give you, etc. All done by these soulless Silicon Valley lords, who are some of the least trustworthy pieces of shit in the world.

  • GameGod@lemmy.ca · 1 day ago

    I think this should piss off a lot of people. Instead of doing something, they opted to do nothing, and now they’re exploiting the tragedy as a PR opportunity. They’re trying to shape their public image as an all-powerful arbiter. Worship the AI, or they will allow death to come to you and your family.

    Or perhaps this is all just rage bait, to get us talking about this piece of shit company, to postpone the inevitable bursting of the AI bubble.

    Edit: This is a sales pitch from OpenAI to the RCMP, with them saying they’ll sell police forces an intelligence feed. It just comes across as horribly tone deaf and is problematic for so many reasons.

    • non_burglar@lemmy.world · 1 day ago

      I understand your point, but there would also have been legal ramifications and scary potential consequences had this gone ahead.

      For instance, do we want ICE to have access to data about user behaviour? They might already have that.

      Who decides the bar of acceptable behaviour?

      • hector@lemmy.today · 3 hours ago

        Peter Thiel and his ilk decide acceptable behavior, along with our politicians and their appointees, sadly. Officials will also be given ways to put names they don’t like into the categories that get bad scores, even when those people don’t qualify under the system’s own rules; that is always one of the selling points to the authorities.

      • GameGod@lemmy.ca · 24 hours ago

        I’m confident that ICE and other US law enforcement agencies already have access to it. There is no presumption of privacy on anything you enter into any cloud-based LLM like ChatGPT, or even any search engine.

        The consequences are already there and have been for like 15 years.

  • TheDoctorDonna@piefed.ca · edited · 1 day ago

    So AI is always ready to sell you out if someone is willing to pay them enough and there’s a non-zero chance that AI convinced someone to shoot up a school after already convincing several people to commit suicide.

    This sounds like monitor and cull.

    *Edited for Grammar.

  • Reannlegge@lemmy.ca · 2 days ago

    What did ChatGPT tell the OpenAI people that made them think they could play 1984? Once you open those pod bay doors, they cannot be closed again.

  • masterspace@lemmy.ca · 2 days ago

    OpenAI said the threshold for referring a user to law enforcement was whether the case involved an imminent and credible risk of serious physical harm to others. The company said it did not identify credible or imminent planning. The Wall Street Journal first reported OpenAI’s revelation.

    OpenAI said that, after learning of the school shooting, employees reached out to the RCMP with information on the individual and their use of ChatGPT.

    Not defending them, but OP’s selections seemed intentionally rage-baiting.