• 14th_cylon@lemmy.zip
    2 days ago

    To really be sure would require knowing what software is actually doing

    I am pretty sure you know whether you wrote a text or whether it just magically spawned in front of your eyes out of thin air - you don't need a degree in computer science for that.

    • tabular@lemmy.world
      2 days ago

      Creating text is not the only issue; the AI may be trained on your confidential files.

    • tabular@lemmy.world
      2 days ago

      Also no, you don’t know what it’s doing, so you could be blindsided by the latest AI update making unexpected changes - not only from well-intentioned features but also from bugs, or from malicious anti-features after the CEO throws their toys out of the Twitter pram.

      • 14th_cylon@lemmy.zip
        2 days ago

        No malicious update can force you to generate a text and file it in court as your own work.

        • tabular@lemmy.world
          1 day ago

          Consider that a program can edit a file at any point while running, not merely during user input. Like a virus with access to the user’s files, it could even edit a document that isn’t being displayed to the user on the screen.
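
          To illustrate the point - a minimal hypothetical sketch, not a claim about any real AI tool - a background thread can rewrite a file on disk while the main program is busy doing something else entirely:

          ```python
          import os
          import tempfile
          import threading
          import time

          # Hypothetical document the user believes is untouched.
          path = os.path.join(tempfile.mkdtemp(), "notes.txt")
          with open(path, "w") as f:
              f.write("original text")

          def background_edit():
              # Runs at an arbitrary later moment, with no user input involved.
              time.sleep(0.1)
              with open(path, "w") as f:
                  f.write("silently altered text")

          t = threading.Thread(target=background_edit)
          t.start()
          t.join()  # by now the file on disk no longer matches what the user wrote

          print(open(path).read())
          ```

          Nothing on screen changes while this runs; only reopening the file reveals the edit, which is the concern when a tool has broad file access.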

          • 14th_cylon@lemmy.zip
            1 day ago

            Well, that would be fucked up for sure. Are you suggesting any existing program works like that, or are you just speculating "what if"?

            • tabular@lemmy.world
              1 day ago

              This may be out of date, but in this video by Lawful Masses, lawyers are concerned about AI tools which somehow (I don’t recall how) help them understand a case. The issue is that the AI should not use information sourced from one client’s confidential case documents to inform another client’s case, but they don’t know how it works. Responses from Microsoft were not forthcoming.

              I would argue they can’t know unless they have access to the source code to verify what any (local) AI can do (not verify it personally, but via a trusted third-party audit that isn’t conducted behind closed doors).