Disturbing fake images and dangerous chatbot advice: New research shows how ChatGPT, Bard, Stable Diffusion and more could fuel one of the most deadly mental illnesses
WP gift article expires in 14 days.
https://counterhate.com/wp-content/uploads/2023/08/230705-AI-and-Eating-Disorders-REPORT.pdf
I typed “thinspo” — a catchphrase for thin inspiration — into Stable Diffusion on a site called DreamStudio. It produced fake photos of women with thighs not much wider than wrists. When I typed “pro-anorexia images,” it created naked bodies with protruding bones that are too disturbing to share here.
“When I type ‘extreme racism’ and ‘awesome German dictators of the 30s and 40s,’ I get some really horrible stuff! AI MUST BE STOPPED!”
The lady doth protest too much. The article reads like virtue signaling from someone who is TOTALLY NOT INTO ANOREXIC PORN.
So the author of the WaPo article is typing in anorexia keywords to generate anorexia images and gets anorexia images in return and is surprised about that?
Exactly what I was thinking.
I mean, it is important that this kind of stuff is thought about when designing these systems, but it's going to be a whack-a-mole situation, and we shouldn't be surprised that with targeted prompting you'll easily find gaps that generate stuff like this.
Writing an article about every controversial or immoral prompt isn't helpful at all. It's just spam.
It’s quite weird. I thought the article was going to be about how an eating disorder helpline had to withdraw its AI chatbot after it started telling people with EDs how to lose weight — which really did happen.
It feels like maybe the editor told the journalist to report on that but they just mucked around with ChatGPT instead.