• Kay Ohtie@pawb.social · 2 days ago

    > whether it’s telling the truth

    More like “whether the output is correct or a mishmash”.

    “Truth” implies an understanding these models don’t have. Because of the underlying method they use to generate plausible-looking responses from training data, there is no “truth” or “lying”; they don’t actually “know” any of it.

    I know this probably comes off as super pedantic, and it is at least a little pedantic, but the anthropomorphism shown toward these things is half the reason they’re trusted.

    That and how much ChatGPT flatters people.

    • floofloof@lemmy.ca · 2 days ago

      Yeah, it has no notion of being truthful. But we do, so I was bringing in a human perspective there. We know what it says may be true or false, and it’s natural for us to call the former “telling the truth”, but as you say we need to be careful not to impute to the LLM any intention to tell the truth, any awareness of telling the truth, or any intention or awareness at all. All it’s doing is math that spits out words according to patterns in the training material.

      • Kay Ohtie@pawb.social · 21 hours ago

        I figured, and I know it’s shorthand; my frustration is that said shorthand has partly enabled the anthropomorphism these things enjoy.

        Leave the anthropomorphism to pets, plants, and furries, basically. And cars. It’s okay to talk about cars like that. They know what they did.