• pixxelkick@lemmy.world
    8 days ago

    You: just the cheap ones

    I never said that. I just said that the cheap ones are especially shitty.

    People on this site really lack reading comprehension it seems.

    • Log in | Sign up@lemmy.world
      17 hours ago

      no its just the free models…

      You just have to be aware… when using a cheap model

      You: just the cheap ones

      I never said that.

      Ohhhhhhhhh, ok, yes, of course you never said or implied that. Not your repeated message at all. And yet you can’t keep away from addressing your criticism towards free or cheap LLMs! It’s like your subtext, your underlying belief, is that if you just pay big tech enough money to build a big enough set of server farms, it’ll be ok. No, it will not be ok, and the enshittification has begun from an already shitty base point.

      All LLMs are shit; the cheap and free ones are just easier to spot generating shit, if you ask them about things you know about. But you have to accept that they’re ALL shit, and STOP making get-out clauses for the expensive ones by firing your criticisms exclusively at the cheap or free ones.

      Giving ANY LLM executive power over your data is A BIG MISTAKE because you’re putting your data in the control of something which operates, at its heart, as a random number generator. They’re trained to sound right. People trust them because they sound right. This is a fundamental error.

      • pixxelkick@lemmy.world
        3 hours ago

        The only people who have these issues are people who are using the tools wrong, or poorly.

        Using these models in a modern tooling context is perfectly reasonable if you go beyond just guardrails and instead give them explicit access only to approved operations in a proper sandbox.

        Unfortunately that takes effort, know-how, skill, and an understanding of how these tools work.

        And unfortunately a lot of people are lazy and stupid, and take the “easy” way out and then (deservedly) get burned for it.

        But I would say yes, there are safe ways to grant an LLM “access” to data in a way where it does not even have the ability to muck it up.

        My typical approach is keeping it sandboxed inside a Docker environment, where even if it goes off the rails and deletes something important, the worst it can do is crash its own Docker instance.
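        A minimal sketch of that kind of launcher, assuming the agent’s shell commands are handed to a wrapper you control (the image name agent-sandbox:latest and the exact flag set are illustrative, not from the comment):

```python
import subprocess

def sandboxed_argv(command: list[str]) -> list[str]:
    """Build a docker invocation that boxes the command in: throwaway
    container, no network, read-only root filesystem, no host mounts."""
    return [
        "docker", "run",
        "--rm",                  # container is discarded afterwards
        "--network=none",        # no outbound access
        "--read-only",           # root filesystem cannot be modified
        "agent-sandbox:latest",  # hypothetical image holding the agent's tools
        *command,
    ]

def run_sandboxed(command: list[str]) -> subprocess.CompletedProcess:
    # Worst case: the command wrecks its own container, which is thrown away.
    return subprocess.run(sandboxed_argv(command), capture_output=True, text=True)
```

        Because nothing from the host is mounted in, a runaway delete only touches the container’s own filesystem.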

        And then setting things up via MCP tooling so that the commands and actions it can perform are an explicit opt-in whitelist. It can only run commands I give it access to.

        Example: I grant my LLMs access to git commit and status, but not rebase or checkout.

        Thus it can only commit stuff forward; it can’t even change branches, rebase, or push.
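        The opt-in whitelist above can be sketched as a gate the tool layer applies before executing anything (the names ALLOWED and is_allowed are illustrative, not part of MCP itself):

```python
import shlex
import subprocess

# Explicit opt-in whitelist: only these (program, subcommand) pairs may run.
# Matches the comment's example: git commit and git status, but not
# rebase, checkout, or push.
ALLOWED = {("git", "commit"), ("git", "status")}

def is_allowed(command: str) -> bool:
    argv = shlex.split(command)
    return tuple(argv[:2]) in ALLOWED

def run_model_command(command: str) -> str:
    if not is_allowed(command):
        raise PermissionError(f"blocked: {command!r}")
    return subprocess.run(shlex.split(command),
                          capture_output=True, text=True).stdout
```

        Anything not on the list is rejected before it ever reaches a shell, which is what makes the whitelist opt-in rather than opt-out.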

        This isn’t hard imo, but too many people just yolo it and raw-dog an LLM on their machine like a fuckin idiot.

        These people are playing with fire imo.