• HalfSalesman@lemmy.world · 2 days ago

    The thing is, he is referencing a specific model that recently demonstrated (to its developers, at least) the ability to self-improve without direct human input. But obviously there are caveats to what that actually means.

    That he is referencing something specific and recent, though, makes me think he’s being genuine here: he believes what he is saying.

    Obviously, he’s almost certainly jumping the gun. He’s demonstrated a lack of critical thinking when it comes to new technological developments. See: all the money he dumped into VR and the Metaverse. (I say this as much as I personally like VR; it’s not exactly a money maker.)

    I see some commenters here pointing out that he is a coder, but he probably hasn’t coded anything in more than a decade at this point. He is fully immersed in the Silicon Valley Kool-Aid; he just seems to express more public-facing optimism about AI than other CEOs.

    • DarkCloud@lemmy.world · 2 days ago

      A jabber bot that slightly improved (through pre-training) its words per minute when responding, but is still just mindlessly jabbering/hallucinating, is just as dumb as it was before. I mean, random chance is a not insignificant factor with LLMs… But also, it’s a pretty big assumption these days that “scientific” papers mean anything. There have already been so many fraudulent LLM papers, like LLMs “teaching themselves other languages” or LLMs “showing ability to reason,” when all of that was just a product of the training data.