• jsomae@lemmy.ml · 19 hours ago

    I don’t have a source for that, but the most power any locally-run program can draw is basically the sum of a few things: maxed-out GPU usage, maxed-out CPU usage, and maxed-out disk access. The GPU is by far the most power-hungry of these, and modern video games push the GPU about as hard as they can get away with.

    Running an LLM locally can at most max out the GPU, which puts it in the same ballpark as a video game. Typical LLM usage is also bursty: the model runs for a few seconds to answer a query, then sits idle until the next one, so it isn’t loading the GPU 100% of the time, unlike a game, which stays open and active for the whole session (GPU usage only dips when you’re in a menu, for instance). There’s a back-of-envelope sketch at the end of this comment.

    Data centers drain lots of power by running a very large number of machines at the same time.
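
    A back-of-envelope sketch of the duty-cycle point above. All wattages and usage patterns here are assumptions picked for illustration, not measurements:

        # Rough energy comparison; every number is an assumed ballpark.
        GPU_FULL_LOAD_W = 300   # assumed high-end desktop GPU at full load
        GPU_IDLE_W = 50         # assumed draw while the GPU sits mostly idle

        def session_kwh(hours_active, hours_idle):
            """Energy for a session split into full-load and idle time."""
            return (hours_active * GPU_FULL_LOAD_W + hours_idle * GPU_IDLE_W) / 1000

        # 3-hour gaming session: GPU pinned near full load almost the whole time.
        gaming = session_kwh(hours_active=3.0, hours_idle=0.0)

        # 3 hours of chatting with a local LLM: say 30 queries at ~20 s of
        # full-load generation each, with the GPU near idle in between.
        active = 30 * 20 / 3600
        llm = session_kwh(hours_active=active, hours_idle=3.0 - active)

        print(f"gaming: {gaming:.2f} kWh, local LLM: {llm:.2f} kWh")
        # Roughly 0.90 vs 0.19 kWh with these numbers: the game uses several
        # times the energy simply because its GPU is busy the whole time.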

    • msage@programming.dev · 12 hours ago

      From what I know, local LLMs take minutes to process a single prompt, not seconds, but I guess that depends on the use case (rough math at the bottom of this comment).

      As for games, I’m not sure most of them actually max out the GPU. I did max mine out for crypto mining, and that was power hungry, so I’d put LLMs closer to crypto than to games.

      Not to mention games will entertain you way more for the same amount of time.
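
      On the “minutes per prompt” point, the rough math (token rates here are just assumed examples and vary hugely by hardware and model):

          # Time for one prompt under assumed token rates.
          PROMPT_TOKENS = 1000       # long-ish prompt plus context
          OUTPUT_TOKENS = 500        # long answer
          PREFILL_TOK_PER_S = 100    # assumed prompt-processing speed
          DECODE_TOK_PER_S = 5       # assumed generation speed on modest hardware

          seconds = PROMPT_TOKENS / PREFILL_TOK_PER_S + OUTPUT_TOKENS / DECODE_TOK_PER_S
          print(f"~{seconds / 60:.1f} minutes per prompt")  # ~1.8 min with these numbers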

      • jsomae@lemmy.ml · 1 hour ago

        Obviously it depends on your GPU. A crypto miner you’ll leave running 24/7. On a recent MacBook, an LLM will generate several tokens per second, so yeah, a long response could take more than a minute. But most people aren’t going to run an LLM for hours on end, and even if they do, big deal: it’s a single GPU, which is negligible compared to running your dishwasher, using your oven, or heating your house. Rough numbers below.
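
        Rough numbers for that comparison; every figure below is an assumption picked to illustrate, not a measurement:

            # A month of 24/7 mining vs. occasional local LLM use vs. common appliances.
            GPU_W = 300                              # assumed full-load GPU draw

            mining_kwh = GPU_W * 24 * 30 / 1000      # pinned all month: ~216 kWh
            llm_kwh = GPU_W * 0.5 * 30 / 1000        # ~30 min of generation a day: ~4.5 kWh

            dishwasher_kwh = 1.2 * 30                # assumed 1.2 kWh cycle, once a day
            oven_kwh = 2.5 * 15                      # assumed 2.5 kW oven, 15 hours a month
            heating_kwh = 500                        # assumed electric heating in a cold month

            print(f"mining {mining_kwh:.0f}, LLM {llm_kwh:.1f}, dishwasher {dishwasher_kwh:.0f}, "
                  f"oven {oven_kwh:.1f}, heating {heating_kwh} (kWh/month)")
            # With these assumptions the local LLM is a rounding error next to heating,
            # while 24/7 mining is up there with the big household loads.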