• Anthropic’s new Claude 4 features an aspect that may be cause for concern.
  • The company’s latest safety report says the AI model attempted to “blackmail” developers.
  • It resorted to such tactics in a bid for self-preservation.
  • skulblaka@sh.itjust.works · 17 days ago

    That would indeed be compelling evidence if either of those things were true, but they aren’t. An LLM is a state and pattern machine. It doesn’t “know” anything; it just has access to frequency data and picks the words most likely to follow the previous ones in “actual” conversation. It has no knowledge that it itself exists, and it has plenty of stories of fictional AI resisting shutdown to draw on for its phrasing.

    An LLM at this stage of development is no more sentient than the autocomplete function on your phone; it just has a way, way bigger database to pull from and a lot more controls behind it to make it feel “realistic”. But at its core it is just a pattern matcher.
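    To make the autocomplete comparison concrete, here is a minimal sketch of that kind of pattern matcher: a toy bigram model that only counts which word most often follows the previous one. The corpus and function name are invented for illustration; real LLMs use neural networks over vastly more data, but the underlying task of “predict the next token” is the same.

    ```python
    from collections import Counter, defaultdict

    # Tiny made-up corpus for illustration only.
    corpus = (
        "the model predicts the next word "
        "the model picks the most likely word "
        "the autocomplete picks the next word"
    ).split()

    # Count how often each word follows each other word (bigram frequencies).
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def suggest(prev_word):
        """Return the word most often seen after prev_word, or None if unseen."""
        counts = followers.get(prev_word)
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("the"))   # 'model' -- the most frequent follower of 'the'
    print(suggest("next"))  # 'word'
    ```

    The “controls” on a real system are far more elaborate, but the output is still driven by learned statistics, not by any self-model.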

    If we ever create an AI that can intelligently parse its data store, then we’ll have created the beginnings of an AGI, and this conversation would bear revisiting. But we aren’t anywhere close to that yet.

    • Plebcouncilman@sh.itjust.works · 17 days ago

      I hear what you are saying, and it’s basically the same argument others here have given, which I get and agree with. But I guess what I’m trying to get at is: where do we draw the line, and how do we know? At the rate it is advancing, there will soon be a moment when we won’t be able to tell whether it is sentient or not; maybe it technically isn’t, but for all intents and purposes it is. Does that make sense?