• db2@lemmy.world · 56 points · 1 day ago

    Stupid people who can’t think will be vetting software that they believe thinks for them. What could possibly go wrong.

    • givesomefucks@lemmy.world · 16 points · 1 day ago

      It’s worse…

      I remember something about them preventing states from regulating them too.

      They’re gonna say only Grok-level chatbots are “real” because it’s constantly tweaked to stay right wing.

      They know the type of people who use AI are incredibly gullible and prone to being manipulated.

      So they’re going to force every American chatbot addict to use chatbots that only reinforce MAGA propaganda.

      This isn’t Trump making these decisions; they’re too logical. It’s likely Peter Thiel.

  • rslogix89@lemmy.world · 14 points · 1 day ago (edited)

    FIFY: White House Considers ~~Vetting~~ taking donations from A.I. ~~Models~~ companies Before They Are Released

  • terabyterex@lemmy.world · 11 points · 1 day ago (edited)

    This worries me with any tech.

    1. If a smaller company develops a competing product (OpenAI and Anthropic used to be small), will it hinder them and grant access only to the mainstream companies?

    2. How does this affect non-US models?

    3. Will they only be approved if they say wonderful things about Trump? Have you ever asked Grok about Elon?

    • Voroxpete@sh.itjust.works · 16 points · 1 day ago

      I really feel like you’re actually being too generous to this proposal.

      Let’s be clear: when this administration says they want to vet new models, what they mean is that they want to turn them into right-wing propaganda engines. This is “reprogram ChatGPT to say the 2020 election was stolen, white genocide is real, and trans people are all sex predators, or we won’t certify it.”

    • XLE@piefed.social · 7 points · 1 day ago

      This is some Cold War regulatory capture BS based on a Myth(os), something that didn’t happen:

      Anthropic did, in its Mythos system card, suggest a model had “broken containment and sent a message,” when it A) was instructed to do so and B) did not actually break out of any container.

    • partofthevoice@lemmy.zip · 2 points · 21 hours ago (edited)

      Training costs are still enormous for generative AI. It’s possible to moderate it by tracking power consumption, data centers, and known actors. Even DeepSeek 4 still cost about $6M to train. If they want to, they can impose regulation by watching for training runs. Of course, it won’t matter what China releases.

  • darthsundhaft@piefed.social · 6 points · 1 day ago

    The government is finally realizing that bleeding-edge software with no regulation applied to it needs said regulation after all.

    Of course, this is more like the government is making sure the models produce the content the government wants the public to know. Very much like China, Russia, et al. Ergo, controlling the narrative.