It depends on the model, but I’ve seen image generators range from 8.6 Wh per image to over 100 Wh per image. Parameter count and quantization make a huge difference there. Regardless, even at 10 Wh per image that’s not nothing, especially given that most ML image-generation workflows involve batch generation of 9 or 10 images. It’s several orders of magnitude less energy-intensive than training and fine-tuning, but it is not nothing by any means.
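To put those figures in context, here’s the back-of-the-envelope math using the low-end 10 Wh per image and 10-image batch figures above (the 100-batches-a-day scenario is my own illustrative assumption, not a measured workload):

```python
# Rough energy math for batch image generation, using the figures cited above.
WH_PER_IMAGE = 10      # low-end estimate from the 8.6-100+ Wh range
IMAGES_PER_BATCH = 10  # typical batch size in image-generation workflows

batch_wh = WH_PER_IMAGE * IMAGES_PER_BATCH
print(batch_wh)  # 100 Wh per batch

# A hypothetical service doing 100 batches a day lands at 10 kWh/day,
# comparable to running a major household appliance around the clock.
daily_kwh = batch_wh * 100 / 1000
print(daily_kwh)  # 10.0 kWh/day
```

Even at the cheapest end of the cited range, the per-batch cost adds up fast once generation happens at scale.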
The training is a huge power sink, but so is inference (i.e. generating the images). You are absolutely spinning up a bunch of silicon that’s sucking back hundreds of watts with each image that’s output, on top of the impacts of training the model.
NewOldGuard@lemmy.ml to Ask Lemmygrad@lemmygrad.ml • What do you guys think of the usage of AI and the creation of AI images? • English • 82 • 4 days ago

“AI” image and video generation is soulless, ugly, and worthless. It isn’t art; it is divorced from the human experience. It is incredibly harmful to the environment. It is used to displace art and artists and replace them with garbage filler content that sucks even more of the joy from this world. Just incredibly wasteful and aesthetically insulting.
I think these critiques apply to “GenAI” more broadly, too. LLMs in particular are hot garbage. They are unreliable, with no easy way to verify what is or isn’t accurate, so people fully buy into misinformation created by these things. They also get treated as a source of truth or authority, even though the responses you get are literally tailor-made to suit the needs of the organization doing the training, via their training data set, input and activation functions, and the type of reinforcement learning they performed. This leads to people treating output from an LLM as authoritative truth while it is just parroting the biases of the human text in its training data. They can’t do anything truly novel; they remix and add error to their training data in statistically pleasing ways. Not to mention they steal the labor of the working class in an attempt to mimic and replace it (poorly), they vacuum up private user data at unprecedented rates, and they are destroying the environment at every step of the process. To top it all off, people are cognitively offloading to these tools the same way they did to reliable tech in the past, but due to hallucinations and general unreliability, those doing so are actively becoming less intelligent.
My closing thought is that “GenAI” is a massive bubble waiting to burst. The tech won’t be going anywhere, but it won’t be nearly as accessible after that happens. Companies right now are dumping tens or hundreds of billions a year into training and inference for these models, while annual revenues for these sectors sit in the hundreds of millions. It’s entirely unsustainable, and they’re all just racing to bleed the next guy white so they can be the last one standing to collect all the (potential future) profits. The cost of tokens for an LLM is rising, despite the marketing teams claiming the opposite when they put old models on steep discount while raising prices on the new ones. The number of tokens needed per prompt is also going up drastically with the “thinking”/“reasoning” approach that’s become popular. Training costs are rising with diminishing returns, due to the lack of new data and poor-quality generated data getting fed back in (risking model collapse). The costs will only go up more and more quickly, with nothing to show for it. All of this for something you’re going to need to review and edit anyway to ensure any standard of accuracy, so you may as well have just done the work yourself and been better off financially and mentally.
NewOldGuard@lemmy.ml to Technology@lemmy.ml • How can one consume media these days with any sort of privacy? • English • 1 • 5 days ago

I don’t really have issues with them aside from the YouTube mirrors 🤷♂️ and even then, LibRedirect lets you set fallbacks, so I just stack them and rarely have problems.
NewOldGuard@lemmy.ml to Technology@lemmy.ml • How can one consume media these days with any sort of privacy? • English • 5 • 5 days ago

When I have to use a big tech platform, I use a privacy-preserving proxy site like Invidious for YouTube. The LibRedirect extension for Firefox makes it really easy and supports a surprising number of sites it can serve mirrors for.
NewOldGuard@lemmy.ml to Linux@lemmy.ml • Bazzite has gained nearly 10k users in 3 months while other Fedora Atomic distros remain fairly stagnant • English • 6 • 8 days ago

That’s not the case for the newer open-source drivers from Nvidia. They’re only compatible with the last few generations of cards, but they’re performant, and the only feature they lack is CUDA to my knowledge. Not talking nouveau here.
When I’m forced to use Windows, it’s the LTSC IoT version with telemetry disabled via Group Policy and a local account. I run O&O ShutUp10 after that, then install Portmaster. I don’t run it as a daily OS, but I think that’s private enough for my limited use case. My only other random recommendations are using either Scoop or winget for package management, and komorebi with whkd for tiling window management.
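For reference, the tooling half of that setup can be sketched with winget. The package IDs below are from memory and worth double-checking with `winget search` before running:

```shell
# Sketch of the post-install tooling step; verify each ID with `winget search <name>`.
winget install --id OO-Software.ShutUp10   # O&O ShutUp10 telemetry toggles
winget install --id Safing.Portmaster      # Portmaster application firewall
winget install --id LGUG2Z.komorebi        # komorebi tiling window manager
winget install --id LGUG2Z.whkd            # whkd hotkey daemon, pairs with komorebi
```

The Group Policy and local-account steps happen during/after Windows setup itself, so they don’t fit in a script like this.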
Haskell mentioned λ 💪 λ 💪 λ 💪 λ 💪 λ
NewOldGuard@lemmy.ml to Privacy@lemmy.ml • What are you going to do when the internet starts asking for ID for everything? • English • 3 • 28 days ago

Gopher and I2P as well
NewOldGuard@lemmy.ml to Linux@programming.dev • Fastfetch 2.48 System Information Tool Brings Fedora Variant Support • English • 4 • 1 month ago

I use pfetch-rs in new terminal sessions to add a little bit of decor. It doesn’t do anything but look nice; I just added some custom ASCII art and it shows some specs.
NewOldGuard@lemmy.ml to Linux@lemmy.ml • Latest image of u-blue has removed firefox, several gnome support packages and some pipewire and gstreamer packages • English • 3 • 5 months ago

You can make custom images of this with some software called BlueBuild. I base mine off of the SecureBlue project, then tweak it for my needs.
NewOldGuard@lemmy.ml to Privacy@lemmy.ml • OpenAI Says It’s "Over" If It Can’t Steal All Your Copyrighted Work • English • 2 • 5 months ago

Oh no, not the plagiarism machine, however would we recover???
Please fail and die OpenAI thx
Also, copyright is bullshit and IP shouldn’t exist, especially for corporate entities. Free sharing of human knowledge and creativity should be a right. Machine plagiarism to create uninspired mimicries isn’t a necessary part of that process and should be regulated heavily.
Yeah, you’d really only say it on the theoretical side of things; I’ve definitely heard it in research and academia, but even then people usually point to the particulars of their work first.