Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


It might have already been posted here, but this Wikipedia guide to recognizing AI slop is such a good resource.
A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.
I particularly like this section:
I think it’s an excellent summary, and it connects with the “Barnum effect” of LLMs, which makes them appear smarter than they are. And it’s not the presence of certain words, but the absence of certain others (and, well, content) that is a good indicator of LLM-extruded garbage.
Also, this guide explains in one step why people with working bullshit detectors tend to immediately clock LLM output, while the executive class, whose whole existence is predicated on not discerning bullshit, are its greatest fans. A lot of us have seen A Guy In A Suit do exactly this: intentionally avoid specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this: never give specifics, always say “revolutionary technology, the future, here to stay”, and quickly run away if anyone tries to ask a question.
I’ve come to prefer business-executive empty over LLM empty; at least the former usually expresses some personality. It’s never entirely empty.
Although I never use LLMs for any serious purpose, I do sometimes give LLMs test questions in order to get firsthand experience on what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when the mistake could easily be remedied by searching the internet.
This is perfectly in line with the fact that LLMs do not have deep understanding, or the understanding is only in the mind of the user, such as with rubber duck debugging. I agree with the “Barnum-effect” comment (see this essay for what that refers to).
Doing a quick search, it doesn’t seem to have been posted here before - thanks for dropping it.
In a similar vein, there’s a guide to recognising AI-extruded music on Newgrounds, written by two of the site’s Audio Moderators. This has been posted here before, but having every “slop tell guide” in one place is more convenient.
archive link
https://web.archive.org/web/20250917164701/https://www.newgrounds.com/wiki/help-information/site-moderation/how-to-detect-ai-audio
Man, this is why human labour still reigns supreme. It’s such a small thing to consider the context in which these resources would be useful and to group together related resources as you have done here, but actions like this are how we can genuinely construct new meaning in the world. Even if we could completely eradicate hallucinations and nonspecific waffle in LLM output, they would still be woefully inept at this kind of task — they’re not good at making new stuff, for obvious reasons.
TL;DR: I appreciate you grouping these resources together for convenience. It’s the kind of mindful action that makes me think usefully about community building and positive online discourse.
It’s also the sort of thing that you wouldn’t actually think to ask for until it became quite hard to sort out. Creating this kind of list over time, as good resources are found, is much more practical, and it’s not the kind of thing that could easily be automated.
Exactly! It’s basically a form of social informational infrastructure building.