Scary article

  • Joe@discuss.tchncs.de · 3 days ago

    If it’s run in a good sandbox, it’ll be safer than most of the code you run.

    Then you add in controlled interfaces/gateways to give it “just enough” power to do something interesting… and you audit the hell out of those.

    Risk is something that has to be managed, because it usually can’t be eliminated.
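The sandbox-plus-caps idea above can be sketched in its simplest form: run the generated code in a separate OS process with hard resource limits. This is a minimal, Unix-only illustration (the function name and limits are my own choices, not anything from the thread); a real sandbox would also cut off filesystem and network access, e.g. via namespaces, seccomp, or a container.

```python
import resource
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> str:
    """Run untrusted Python code in a child process with CPU and memory caps.

    This is only one layer of a sandbox; it does NOT restrict file or
    network access on its own.
    """
    def limit_resources():
        # Cap CPU time at 2 seconds and address space at 256 MiB.
        # These limits apply only to the child, before it execs Python.
        resource.setrlimit(resource.RLIMIT_CPU, (2, 2))
        resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))

    proc = subprocess.run(
        [sys.executable, "-c", code],
        capture_output=True,
        text=True,
        timeout=timeout,          # wall-clock kill switch
        preexec_fn=limit_resources,
    )
    return proc.stdout

print(run_sandboxed("print(2 + 2)"))  # prints "4"
```

The wall-clock timeout and the CPU limit cover different failure modes: a busy loop hits the CPU cap, while a process sleeping forever hits the timeout.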

    • MountingSuspicion@reddthat.com · 3 days ago

      If you sandbox anything, it'll be safer than otherwise, so I'm not really sure what you're suggesting. I would still want the code reviewed regardless of the safety measures in place.

      I wrote a program that basically auto-organizes my files for me. Even if an AI was sandboxed, only had access to the relevant files, and had no delete privileges, I would still want the code reviewed. Otherwise it could move a file into a nonsensical location and I would have to go through every possible folder to find it. Someone would have to build the interfaces/gateways and also review the code. There's no way to know how it's working, and so no way to know IF it's working, until the code is reviewed. Regardless of how detailed your prompt is, AI will generate something that possibly (currently, very likely) needs to be adjusted. I'm not going to take an AI's raw output and run it assuming the AI did it properly, regardless of the safety measures.
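The restricted setup described here (access to relevant files only, no delete privilege) is essentially a controlled gateway. A hypothetical sketch, assuming a single root directory the tool is confined to (the class and method names are invented for illustration): the only operation exposed is a move within that root, paths that resolve outside it are rejected, and every action is recorded so it can be audited afterwards.

```python
import shutil
from pathlib import Path

class MoveOnlyGateway:
    """A controlled interface for an untrusted file organizer:
    it can move files within one root directory, never delete
    anything, and every action is logged for audit."""

    def __init__(self, root: Path):
        self.root = root.resolve()
        self.audit_log: list[str] = []

    def _check(self, p: Path) -> Path:
        # Resolve the path and refuse anything that escapes the root
        # (e.g. via "..", symlinks, or an absolute path).
        resolved = (self.root / p).resolve()
        if not resolved.is_relative_to(self.root):
            raise PermissionError(f"{p} escapes the sandbox root")
        return resolved

    def move(self, src: str, dst: str) -> None:
        s, d = self._check(Path(src)), self._check(Path(dst))
        d.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(s), str(d))
        self.audit_log.append(f"moved {s} -> {d}")
```

Even with a gateway like this, the point of the comment stands: the gateway constrains *what* the code can do, but only a review tells you whether the moves it chooses make any sense.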

      • Joe@discuss.tchncs.de · 3 days ago

        Yeah, if you want it to write code that will act on important data or outside a sandbox, then a code review is still advised, even if it's only a sniff test.