• bandwidthcrisis@lemmy.world · 4 days ago

      AI runs in the cloud because it needs a powerful server to run the biggest (i.e. “smartest”) models.

      The cloud servers aren’t doing anything special that any other powerful enough computer couldn’t do; it’s just a huge amount of data processing.

      You can run an AI chat on a Steam Deck or directly on a phone, if it’s not too demanding (“smarter” models are bigger data files, so they won’t fit in the memory of a small device).
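      As a rough sketch of what “running it locally” can look like, assuming the llama-cpp-python package and a small quantized GGUF model file (the model path below is a hypothetical placeholder):

      ```python
      # Minimal local chat sketch with llama-cpp-python (pip install llama-cpp-python).
      # The model path is a placeholder; any small quantized GGUF chat model will do.
      from llama_cpp import Llama

      llm = Llama(
          model_path="models/small-chat-model.Q4_K_M.gguf",  # hypothetical filename
          n_ctx=2048,  # modest context window to keep memory use low
      )

      # create_chat_completion follows the OpenAI-style chat message format.
      reply = llm.create_chat_completion(
          messages=[{"role": "user", "content": "What is a GGUF file?"}]
      )
      print(reply["choices"][0]["message"]["content"])
      ```

      The same code runs on anything from a Steam Deck to a desktop; the only real constraint is that the model file has to fit in the device’s memory.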

      Today, for instance, I had a phone call from “Spectrum Internet support” and part-way through the call my phone blared an alarm and said “possible scam” on screen.

      The phone itself interpreted the conversation as sus.

      https://support.google.com/phoneapp/answer/15654065?hl=en

      For Pixel 9 and later devices: Scam Detection is powered by Gemini Nano on-device

    • trackball_fetish@lemmy.wtf · 4 days ago

      The cloud is just a bunch of computational power (servers). A bunch of phones in a network could also be used for that computational power. Passing the savings on to you! ;)

      • T156@lemmy.world · 4 days ago

        It’s also cheaper if they can offload a portion of the work to the user’s computer.

        • Em Adespoton@lemmy.ca · 4 days ago

          Cheaper for them, that is.

          What I want to see is throttleable models, kind of like progressive JPEG: the default model is “nano”, and a watch function estimates whether a task might need more tokens and scales up as needed. If the resources are too much for the device, it offloads to the cloud (with explicit permission), but only if, and always if, needed. Over time, as the technology improves, larger models move to the endpoint.

          And then people could have a basic slider: on-device only, cloud only, or somewhere in between, based on their preferences. A toy version of that escalation logic is sketched below.
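          A minimal sketch of that idea, with everything hypothetical: run_local and run_cloud stand in for real on-device and cloud backends, and the confidence check is a placeholder heuristic:

          ```python
          # Toy escalation router: try the on-device "nano" model first and
          # offload to the cloud only with explicit permission. All names and
          # heuristics here are hypothetical stand-ins.
          from dataclasses import dataclass

          @dataclass
          class LocalAnswer:
              text: str
              confident: bool  # did the on-device model think it handled the task?

          def run_local(prompt: str) -> LocalAnswer:
              # Stand-in for a small on-device model; treats short prompts as easy.
              return LocalAnswer(text=f"[nano] {prompt[:40]}", confident=len(prompt) < 200)

          def run_cloud(prompt: str) -> str:
              # Stand-in for a large cloud-hosted model.
              return f"[cloud] detailed answer for: {prompt[:40]}"

          def ask(prompt: str, allow_cloud: bool) -> str:
              """Answer on-device if possible; escalate only if allowed and needed."""
              answer = run_local(prompt)
              if answer.confident:
                  return answer.text
              if allow_cloud:  # the user's slider: permission to leave the device
                  return run_cloud(prompt)
              return answer.text  # best effort, stays on-device

          print(ask("What's 2 + 2?", allow_cloud=False))
          print(ask("Summarise this long contract... " * 50, allow_cloud=True))
          ```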

          • T156@lemmy.world · 4 days ago

            That’s basically model routing, and it has existed for a while. OpenAI’s GPT-5 and llama-swap do that, for example. If the task is simple, it uses a smaller, less intensive model, and only uses the slower, larger one if the task is more complex.

            Though most tend to operate with models on the same device/service, rather than a model run elsewhere.

        • fiat_lux 🆕 🏠@lemmy.zip · 4 days ago

          Yeah, even there. A page loading is one thing, but browser features are somewhat independent of the content. There’s also a good chance this is being used as a hook for other Google products like Drive or Docs (which are basically websites under the hood) to allow offline file management, creation, etc.

          It’s a bad choice, but it wouldn’t be the first bad choice Google has made.

        • Blackmist@feddit.uk · 4 days ago

          Well, everything else is in it.

          Shit, Chrome supports the use of COM ports (via the Web Serial API). It’s an OS within an OS.