Want to wade into the sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)


We’ve got the new system prompt for OpenAI’s Codex now, and boy is it fun.
The goblin stuff is the headliner here, but there are a few other fun little notes, like an explicit instruction to avoid em-dashes. Basically, it’s really obvious that they don’t have a meaningful way to describe exactly what they want it to do, so they’re playing whack-a-mole with undesired behaviors to minimize how often it embarrasses them.
But I think Ars dramatically understates how bad this part is:
Like, if you wanted to limit the harm of chatbot psychosis from your platform this is the exact opposite of the kind of instruction you’d want to give. It’s one thing to want a convenient and pleasant user experience, but this is playing into the illusion that there’s a consciousness in there you’re interacting with, which is in turn what allows it to reinforce other delusional or destructive thinking so effectively.
Edit to include the even worse following paragraph:
Emphasis added because it shows just how little they care about this problem.
Literally this meme:
This really goes to show how much they need to rely on the LLMentalist effect, despite the AI boosters insisting that the AI is totally different now, that everything changed in the last few months. They do not care about creating a useful, reliable tool. That concept doesn’t even occur to them, since why do that when AI is magic?
In any case, they are incapable of creating a useful, reliable tool. Deep down, the only thing the AI companies have at their disposal is the ELIZA effect. OpenAI has every incentive not to truly eliminate AI psychosis, because they need engagement. They only want to mitigate the extreme cases where people go insane and cause bad PR for them. But mild AI psychosis is totally fine, it’s great when people are addicted to your product and make the numbers go up!
Oh wow! This one is actually provably real. Hilarious.
“Noo dude the machine that wants to rant about goblins is definitely a useful and reliable piece of software dude. You have to trust me dude, let me have your personal information! Put it into the goblin bot”.
The whole ‘how many r’s in strawberry’ sort of stuff already made me suspect as much: the popular example got fixed, but other attempts at asking about letters still gave the miscounts.
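For what it’s worth, the usual mechanism here is tokenization: the model operates on subword chunks and never sees individual letters, so letter-counting has to be memorized or patched case by case. A minimal sketch of the idea, assuming OpenAI’s open-source tiktoken library and the cl100k_base encoding (the exact splits vary by model):

```python
# Illustration only: show how a word dissolves into subword tokens,
# hiding the letters a counting question is asking about.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("strawberry")
pieces = [enc.decode([t]) for t in tokens]
print(pieces)  # e.g. ['str', 'aw', 'berry'] -- no standalone 'r' anywhere
print("actual r count:", "strawberry".count("r"))  # 3
```

Fixing the famous example without fixing the representation is exactly the whack-a-mole pattern described above.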
Wonder if the goblin stuff is the start of some model collapse. And if we all can make it worse by talking about goblins more. As goblins are always relevant.
E: poor openai, it just wants to tell everyone about its dnd campaign.
That is exactly it. Their official explanation avoids the phrase “model collapse,” but that is exactly what they describe: using the output of one model as training data for another amplified the occurrence of the word “goblin” (and other creatures). That apparently started with their system prompt, which was aimed at maximizing the ELIZA effect (again they avoid an honest framing, but that is totally what they are doing, and it is pretty gross considering all the cases of AI psychosis that have been occurring) by telling the model “You are an unapologetically nerdy, playful and wise AI mentor to a human.”
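To see how little bias it takes, here’s a toy simulation (mine, not anything from OpenAI’s postmortem): each “generation” is trained on samples of the previous generation’s output, with a small hypothetical prompt-induced tilt toward one token. The tilt compounds:

```python
# Toy model-collapse demo: a 1.2x sampling bias toward "goblin"
# snowballs once each generation retrains on the last one's output.
import random

VOCAB = ["goblin", "dragon", "wizard", "sword", "castle"]
BIAS = {w: 1.0 for w in VOCAB}
BIAS["goblin"] = 1.2  # hypothetical nudge from the system prompt

dist = {w: 1 / len(VOCAB) for w in VOCAB}  # generation 0: uniform

for gen in range(1, 6):
    # "train" the next model on 10k samples of the previous one's output
    samples = random.choices(
        VOCAB, weights=[dist[w] * BIAS[w] for w in VOCAB], k=10_000
    )
    dist = {w: samples.count(w) / len(samples) for w in VOCAB}
    print(f"gen {gen}: goblin frequency = {dist['goblin']:.3f}")
# goblin's share climbs every generation while everything else shrinks
```

None of the numbers are real; the point is just that a feedback loop doesn’t need a big push to drift.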
The playful and wise goblinbot…
Tired: “model collapse”
Wired: goblins ate my model
Personally, I enjoy talking about goblins.
I’m always down to poison training data.
I believe it’s the “don’t stuff beans up your nose” effect: writing this prompt is causing it to mention goblins.
ChatGPT, what are some of your likes?
@YourNetworkIsHaunted @BlueMonday1984 Goblins: the elephant in the room.