• boonhet@lemm.ee · 1 year ago

          Imagine a standardized API for an optional AI assistant that you could easily disable, where you plug in either your own LLM running locally, your own LLM running on your server (for enthusiasts or companies), or a third-party LLM service over the Internet.

          Regardless of your DE, you could choose whether you want an AI assistant and where the model runs.
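
          Purely as a sketch of what that could look like (all names here are hypothetical; the endpoint follows the widely copied OpenAI-style chat route that many local runners and hosted services expose):

          ```python
          # Hypothetical desktop-assistant backend API; every name is illustrative.
          from abc import ABC, abstractmethod
          import json
          import urllib.request


          class AssistantBackend(ABC):
              """Anything that can answer a prompt: a local process, a home server, or a SaaS."""

              @abstractmethod
              def complete(self, prompt: str) -> str:
                  ...


          class OpenAICompatibleBackend(AssistantBackend):
              """Same class whether base_url is http://localhost:8080 or a paid service."""

              def __init__(self, base_url: str, model: str, api_key: str | None = None):
                  self.base_url = base_url.rstrip("/")
                  self.model = model
                  self.api_key = api_key

              def complete(self, prompt: str) -> str:
                  headers = {"Content-Type": "application/json"}
                  if self.api_key:  # a local runner usually needs no key
                      headers["Authorization"] = f"Bearer {self.api_key}"
                  body = json.dumps({
                      "model": self.model,
                      "messages": [{"role": "user", "content": prompt}],
                  }).encode()
                  req = urllib.request.Request(
                      f"{self.base_url}/v1/chat/completions", data=body, headers=headers)
                  with urllib.request.urlopen(req) as resp:
                      return json.load(resp)["choices"][0]["message"]["content"]


          class DisabledBackend(AssistantBackend):
              """The 'easily disable' case: the DE swaps in this no-op."""

              def complete(self, prompt: str) -> str:
                  raise RuntimeError("Assistant is disabled in settings")
          ```

          The DE would only ever see AssistantBackend, so switching from a cloud service to a box in your closet would be a settings change, not a code change.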

          • hackris@lemmy.ml · 1 year ago

            I’ve had this idea for a long time now, but I don’t know shit about LLMs. GPT-style models can be run locally though, so I guess only the API part is needed.
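
            For the “run locally” part, a rough sketch with the llama-cpp-python bindings (the model file is a placeholder for whatever GGUF you download):

            ```python
            # Rough sketch: prompting an open-weights model locally via llama-cpp-python.
            # pip install llama-cpp-python; the model path is a placeholder.
            from llama_cpp import Llama

            llm = Llama(model_path="./mistral-7b-instruct.Q4_K_M.gguf")
            out = llm("Q: What is the capital of France?\nA:", max_tokens=32, stop=["Q:"])
            print(out["choices"][0]["text"])
            ```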

            • boonhet@lemm.ee · 1 year ago

              I’ve run LLMs locally before; it’s the unified API for digital assistants that would be interesting to me. Then we’d just need an easy way for laymen to acquire models, but any bigger DE or distro could probably ship a setup wizard.
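
              The wizard’s model-acquisition step could be tiny; a hypothetical sketch (the URL is a placeholder for wherever a DE or distro would host vetted models):

              ```python
              # Hypothetical wizard step: fetch a model file with a progress readout.
              import urllib.request

              MODEL_URL = "https://example.org/models/assistant.Q4_K_M.gguf"  # placeholder

              def report(blocks: int, block_size: int, total: int) -> None:
                  if total > 0:  # total is -1 when the server sends no Content-Length
                      pct = min(100, blocks * block_size * 100 // total)
                      print(f"\rDownloading model: {pct}%", end="")

              urllib.request.urlretrieve(MODEL_URL, "assistant.gguf", reporthook=report)
              print("\nDone. Point the assistant backend at assistant.gguf.")
              ```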

        • superguy@lemm.ee · 1 year ago

          Yeah. I’m really annoyed by this trend of programs that could function offline requiring a connection to a server.

        • RachelRodent@lemmy.dbzer0.com · 1 year ago

          Not just hypothetically but practically too. A FOSS program called KoboldAI lets you run LLMs locally on your computer, and the KoboldAssistant project takes advantage of this. You can essentially make your own Alexa, Cortana, or Siri that doesn’t collect your data and belongs to you.
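
          As a rough sketch of what talking to such a setup looks like (assuming the generate route that KoboldAI-compatible servers commonly document; the port and parameters here are illustrative):

          ```python
          # Rough sketch: querying a locally running KoboldAI-compatible server.
          import json
          import urllib.request

          req = urllib.request.Request(
              "http://localhost:5001/api/v1/generate",  # illustrative local port
              data=json.dumps({
                  "prompt": "User: Set a timer for ten minutes.\nAssistant:",
                  "max_length": 80,
              }).encode(),
              headers={"Content-Type": "application/json"},
          )
          with urllib.request.urlopen(req) as resp:
              print(json.load(resp)["results"][0]["text"])
          ```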

      • taanegl@beehaw.org · 1 year ago

        An open-source, locally run LLM that runs on a GPU or on dedicated open PCIe hardware and never touches the cloud…

    • PixxlMan@lemmy.world · 1 year ago

      To be fair, people don’t know what they want until they get it. In 2005, people would’ve asked for faster flip phones, not smartphones.

      I don’t have much faith in current-gen AI assistants actually being useful, but the fact that no one has asked for them doesn’t necessarily mean much.

      • nossaquesapao@lemmy.eco.br · 1 year ago

        To be fair, in 2005 a lot of people dreamed of “mini portable computers that could fit in their hands”. They just didn’t associate that with the form smartphones took, and when smartphones arrived, people were amazed. I don’t see the same level of reception when it comes to AI assistants.

      • superguy@lemm.ee · 1 year ago

        faster flip phones

        I don’t think speed was a complaint anyone had about phones right before smartphones launched.

        People were mostly concerned with cell phone plans. Talking used to be charged by the minute, texting was charged per text, and data was practically non-existent.

        Cell phones have come a long way, but I think a lot of people take for granted just how much cell service has improved. I pay $25/month for a single line that gives me unlimited talk, text, and data (Visible). Couldn’t be happier.

    • UnculturedSwine@lemmy.world · 1 year ago

      Would be a cool feature if it could be leveraged in a secure, private, efficient way that was more useful than 99% of the algorithmic monkey typewriter garbage that’s on the market these days. I don’t need a glorified Cleverbot rifling through my unspeakables.

      • ffhein@lemmy.world · 1 year ago

        Local LLMs are getting better at a very rapid pace. They’re still a bit too resource-hungry to have running in the background all the time, but Mistral-7B, for example, is quite competent for its size.
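
        As a back-of-the-envelope check on “resource hungry” (the overhead factor is a rough assumption):

        ```python
        # Rough memory estimate for a quantized 7B model.
        params = 7e9             # Mistral-7B parameter count, roughly
        bits_per_weight = 4      # e.g. a Q4 quantization
        overhead = 1.2           # rough allowance for KV cache and buffers (assumption)
        gib = params * bits_per_weight / 8 / 2**30 * overhead
        print(f"~{gib:.1f} GiB")  # about 3.9 GiB: fits in RAM on most modern machines
        ```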