Running AI is so expensive that Amazon will probably charge you to use Alexa in future, says outgoing exec
In an interview with Bloomberg, Dave Limp said that he “absolutely” believes that Amazon will soon start charging a subscription fee for Alexa.

    • stevedidwhat_infosec@infosec.pub · 1 year ago

      You shouldn’t need much of anything, really: all the components run as cloud services, so you just need a network connection.

      That’s why it’ll run just fine on a cheap Pi model.

      Essentially, the Python script just sends API requests directly to OpenAI and returns the AI response. I then pass that response to the ElevenLabs API and play the resulting binary audio stream with any library that supports audio playback.

      (That last bit is what I’ll have to toy around with on a Pi, but I’m not worried about finding a suitable option; there are lots of libraries out there. Something like the sketch below should work.)
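
      A rough sketch of that flow, hitting both REST APIs with plain requests. The voice ID is a placeholder you’d swap for one from your ElevenLabs account, and playback here shells out to mpg123 as just one option that happens to be easy to install on a Pi:

      ```python
      import os
      import subprocess
      import tempfile

      import requests

      OPENAI_KEY = os.environ["OPENAI_API_KEY"]
      ELEVEN_KEY = os.environ["ELEVENLABS_API_KEY"]
      VOICE_ID = "your-voice-id"  # placeholder: any voice ID from your ElevenLabs account


      def ask_openai(prompt: str) -> str:
          """Send the prompt to OpenAI's chat completions endpoint, return the reply text."""
          resp = requests.post(
              "https://api.openai.com/v1/chat/completions",
              headers={"Authorization": f"Bearer {OPENAI_KEY}"},
              json={
                  "model": "gpt-3.5-turbo",
                  "messages": [{"role": "user", "content": prompt}],
              },
              timeout=60,
          )
          resp.raise_for_status()
          return resp.json()["choices"][0]["message"]["content"]


      def speak(text: str) -> None:
          """Turn text into speech via ElevenLabs, then play the MP3 with mpg123."""
          resp = requests.post(
              f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
              headers={"xi-api-key": ELEVEN_KEY},
              json={"text": text},
              timeout=60,
          )
          resp.raise_for_status()
          # ElevenLabs returns the raw MP3 bytes in the response body
          with tempfile.NamedTemporaryFile(suffix=".mp3", delete=False) as f:
              f.write(resp.content)
              path = f.name
          subprocess.run(["mpg123", "-q", path], check=True)


      if __name__ == "__main__":
          speak(ask_openai("Say hello in one sentence."))
      ```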

      • Couch "Spud" Berner@feddit.nl
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 year ago

        Oh wait, I think I misunderstood. I thought you had local language models running on your computer. I’ve seen that discussed before, with varying results.

        The last time I tried running my own model was in the early days of the Llama release, on an RTX 3060. Delivery was much slower than OpenAI’s API, and the output was way off.

        It doesn’t have to be perfect, but I’d like the remote device to phone home and make API calls to my own server instead of to OpenAI’s. Using my own documents as a reference would be a plus too, just to keep my info private while still accessible to the LLM.
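
        For what it’s worth, self-hosted runners like Ollama or llama.cpp’s server expose an OpenAI-compatible chat endpoint, so the phone-home call can keep the same request shape. A minimal sketch, assuming an Ollama instance at a made-up home address with a model already pulled:

        ```python
        import requests

        # Hypothetical home-server address; Ollama listens on port 11434 by
        # default and serves an OpenAI-compatible /v1/chat/completions route.
        HOME_API = "http://192.168.1.50:11434/v1/chat/completions"

        resp = requests.post(
            HOME_API,
            json={
                "model": "llama2",  # whichever model you have pulled locally
                "messages": [{"role": "user", "content": "Hello from the living room"}],
            },
            timeout=120,
        )
        resp.raise_for_status()
        # Response body follows the same shape as OpenAI's API
        print(resp.json()["choices"][0]["message"]["content"])
        ```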

        Didn’t know about ElevenLabs. Checking them out soon.

        Edit because writing is hard.