Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Regarding a project to translate several thousand ancient letters:
So, um… this is bad. Really bad. I looked at the letters that were translated by the AI, and the very first one I found was almost entirely hallucination.
A LWer is super-impressed by the time travel fantasy Illumine Lingao (an example of Chuanyue)
https://www.lesswrong.com/posts/YiRsCfkJ2ERGpRpen/leogao-s-shortform?commentId=J4YGrY26Ezt5oMsot
Listen to this pitch:
the vast majority of the book is devoted to discussing every single technical aspect in excruciating well-researched detail. you don’t simply have a paragraph about them deciding to buy guns, you get an entire chapter of different gun experts arguing back and forth about exactly which gun to buy based on maintainability, range, differences between civilian and military models, semi automatic vs fully automatic.
Apparently they’re quite unaware of the extensive number of works in Russian with similar themes:
https://en.wikipedia.org/wiki/Accidental_travel#In_Russian_fiction
followup, here’s a real substack interview with one of the originators of the collab novel
https://afraw.substack.com/p/first-dig-the-latrines
to be honest sounds like semi-fascist shit to me.
Look, I’ve read some long-ass web novels. I enjoyed Worm, A Practical Guide to Evil, and Katalepsis all start to finish. I have also spent more hours than I could count (even if I did care to) perusing excessively detailed fan wikis and reading interminable debates between nerds about minutiae. I have done all of this and enjoyed myself greatly.
But the way they’re describing this sounds absolutely exhausting and incredibly dull. If this isn’t the result of some kind of collaborative project where the debates are between different actual people, then it sounds like you’re just dumping your worldbuilding notes onto the page and throwing in a “he said” every so often.
And here I thought “people being easy to replace with a small shell script” was a joke…

fig. 1: @self hard at work keeping awful systems up and running

from Unix World 1985. enterprise computing was so much more fun in those days
Hey, we hailed our @self!
Thank you @self, love your haircut
if you use Rust enough it just grows like that
Man, if only I had enough optimism left to aspire to that level of silliness, as opposed to sliding further and further into the maw of computer stupidity.
heh. thiel’s sermons about antichrist annoyed vatican enough for its a.i. adviser to rhetorically ask “should we burn peter thiel?” in an article titled “american heresy: should we burn peter thiel”. as the children twenty years ago were saying: lol. lmao.
sources:
He almost certainly got the info in other places, but I find it profoundly amusing to think that the AI Advisor to the Pope may have stumbled into our corner of the internet.
I’m like 60% sure he will self-incinerate when exposed to sunlight
Dans cette vision, la démocratie entendue comme autogouvernement de citoyens égaux est déjà morte — et il ne reste plus que, dans l’obscurité d’un data center, la gestion clinique de son cadavre.
I’ll grant the Holy Inquisition guy something: he really knows how to turn a phrase. the English summaries of this piece really don’t do it justice.
For the non-French speakers among us:
In this vision, democracy understood as the self-governance of equal citizens is already dead — and all that remains, in the darkness of a data center, is the clinical administration of its corpse.
and frankly, i’m not a catholic anymore, and i did look around for matches.
maybe someone with a camera in SF can pop downtown and document this gathering
apply exposure comp b/c everyone will be pasty white
AI seems good at purple prose and metaphors that don’t exactly make sense. No, I do not give a fuck about the “triangle of calm” when it comes to, of all things, the narrator taking off her shoes. No, I am not interested in how long the narrator sets the timer on the microwave when she makes literally the blandest meal of all time.
Now I’m sure the techbros truly think this is good “literary” writing. After all, they only care that the writing sounds flowery, because they seem to be very good at missing the actual meaning of everything. I remember Saltman saying that the movie Oppenheimer needed to be more optimistic to inspire more kids to become physicists (while also saying that The Social Network did that for startup founders).
All I could think about is who has a microwave that beeps while it’s still cooking?
maybe it’s the carbon monoxide detector going off; that would make more sense
Mine does if I use the defrost setting. I assume it wants me to rearrange the contents, but when it beeps the contents are still one solid chunk of ice. It doesn’t make sense, especially for a device that claims to have a “smart” sensor.
It’s a bit like the excerpt. It feels like someone is trying to rewrite the American Psycho routine, but it hammers the obsessive compulsive tropes with all the subtlety of a brick to the face while simultaneously lacking an overall purpose. It’s just noise.
I had the thought that maybe the author could be intentionally trying to be mind-numbingly boring, but that just killed it. Into the slop jail!
I mean maybe it’s poorly worded and there’s only one set of beeps at the end. But then why would the protagonist be reminded multiple times?
Unless she’s remembering all the times in the past that microwaving bland chicken reminded her of the world being orderly?
But now I think I’m thinking too deeply about microwaves.
In other news, Cade Metz’ latest piece is actually pretty critical, especially by NYT standards, but you wouldn’t know it from the headline.
“A.I. Agents: They’re Fun. They’re Useful. But Don’t Give Them the Credit Card.”
Back and forth a few years ago on the SlateStarCodex subreddit, roughly:
Scott Alexander: Bay Area rationality is wonderful, we have foundations and group homes and jolly social activities and a Solstice ritual and even “Reciprocity and Propinquity: two different rationalist dating/matchmaking services”
Rando:
I don’t know, I live in a nice community in a different city where people I know have lots of Shabbat dinners, choirs, board game nights, discussions, etc. And zero people I know have joined a cult, and one person I know has developed psychosis, but she had a family history of psychosis, starting having symptoms in early adulthood, and pretty quickly went on antipsychotics and got a lot better.
Is it just that California attracts weird shit and if you put people in California, whatever they’re already doing will get culty?
Alexander: base rates! how do your demographics compare to ours?
Rando:
Probably similar size and age? Nearly everyone I knew has parents who are teachers/lawyers/doctors/therapists/etc, so I guess upper middle class according to that book you wrote about a while ago.
It’s not like everyone’s doing great, lots of people have depression and anxiety and probably smoke more weed than is good for them. Most of those people already had those problems from their adolescence.
But our rates of weird problems, like multiple people with overlapping psychoses tied to some guy, are low.
Suppose you’re a college grad who has to decide between the usual unpaid internships at dumb startups vs. getting to be a ‘research fellow’ for a group that says it’s going to solve philosophy and save the world, and the only catch is that the group is actually a cult. Still seems pretty tempting honestly.
Oh?
I mean, giving inflated titles and grandiose plans is part of the sales pitch. Y’know, for the cult.
Like, I think there’s a fundamental misunderstanding here. The problem isn’t that the people who want to be cult leaders are able to attract a lot of people who are preinclined to be cult followers and those people suffer the associated psychic damages. It’s that even the less culty parts of the rationalist subculture seem to produce a weirdly high number of wannabe cult leaders, even if they don’t conceptualize themselves that way.
a group that says it’s going to solve philosophy
This must hit really hard if you are Wittgenstein
My first degree was a professional degree, so after college I went out and got a paid job doing that, using the experience I had developed in paid summer jobs. Even when I was young I think I would have said no to Leverage Research.
Microsoft MIGHT be suing OpenAI
Strong might, since nothing is set in stone yet. There have been talks where Microsoft has threatened to sue but that’s it so far
For lawsuits that actually are happening, OpenAI is getting sued by dictionaries
5 Tools You Can Vibe Code For Your Business In Under An Hour: exactly the sort of slop that someone who has a hard-on for AI, no understanding of the risks of vibe coding core parts of your business’s infrastructure, and guest writes for Forbes would produce.
Starts with a sickening intro that leans into “pilled” to be “down with the kids”
If you haven’t joined the Claudepilled crowd, open an account and play.
Bright ideas include “copy and paste the source code from your home page into Claude” but overlooks the how to actually get those changes deployed part.
Wanna see my cool website. It’s at
http://localhost:1234/

take that, web developers!

Then she describes building a custom internal dashboard…
Open Claude Code and describe your business. List every software tool you use. Ask it to suggest the key metrics you’d want to see from each one. Go back and forth until the list feels right. Then give it your brand guidelines and ask it to build a dashboard that displays everything. Ask for it to be password protected.
Yes that sounds like a great idea and not a car crash waiting to happen
She also describes building a customer facing onboarding site
Build a custom client-facing dashboard instead. Tell Claude Code what your onboarding process looks like step by step. Describe what information you need to collect and what your clients need to access. Ask it to build a secure portal they can log into, with automations that send them what they need and follow up to collect what you need. This is a branded, professional experience that scales without you. The emotional design matters here too: you want clients to feel held, not herded. Tell Claude that.
Yes vibe coded customer facing tools are a fantastic idea and definitely not a vector for cyber attacks nuh-uh. I’m sure it will be fine if you ask for it to be “secure” right?
FML are we in the twilight zone here?
The software industry is experiencing a huge collective AI psychosis.
Ask for it to be password protected.
I think I’m having a stroke. Or at least I hope I’m having a stroke and that this unparodiably dumb piece isn’t any more real than it sounds.
guest writes for Forbes would produce.
I seriously think we can completely dismiss Forbes as a credible source at this point, even if it’s not something coming from, ahem, “contributors”
The Founder of Anthropic Says He Wants to Protect Humanity From AI. Just Don’t Ask How. another long article about the AI craze and in particular Anthropic. A snippet that stood out to me:
Reviewing my interview transcripts one night, I discover I’d left my recorder running when I excused myself to use the bathroom at Anthropic. On the tape, Kyle Fish, the AI researcher, and Danielle Ghiglieri, my tattooed guide, are laughing about some visitors to their headquarters the day before, what sounds like a documentary or TV crew.
“I sit right next to Trenton,” Fish says. “I went back and told him, ‘Dude, you really did something to those guys with your sunscreen stuff yesterday.’ He thought it was hilarious.”
They’re both cracking up.
Ghiglieri says Fish, too, had convincingly come off as a “different species of human,” adding: “They were very enamored with you.”
They’re inclined to cooperate with whatever project these people proposed, she says, and make everybody a star. I hadn’t heard Trenton’s sunscreen spiel yet. Only later, over lunch, would he tell me that he stopped protecting himself against skin cancer because AI was going to end the world in five years.
Crazy to me how people can so confidently predict AI doomsday, and then just keep working at an AI company
I’m more concerned that the writer could listen to this, presumably multiple times on his tape, and still wrote the rest of the piece like these guys are acting in good faith. Regardless of the unanswerable question of whether they believe their own hype, they are clearly saying things for a purpose of self-enrichment and self-aggrandizement rather than out of any concern for other people, and that is where the story should be. Even the guys most ostensibly interested in protecting humanity are still, when they think the mic is off and the journalist is out of the room, joking about how they’re manipulating the press into saying what they want.
I think it’s a specific genre of reportage where you objectively[1] report what you observe and let the reader draw their own conclusions.
[1] problematic term, engage!
Reading the article again, that definitely feels like the angle the author was going for
I will confess that my initial reaction was from a partial reading since I got derailed ranting about the silicon valley attitude towards neurodivergence and how much damage it’s doing to us, and basically right after that bit it starts taking a much more (appropriately imo) cynical tone that was honestly refreshing.
Let this be a lesson to those of us who must learn, I guess.
I mean there is a lot of crazy bullshit in there so I don’t blame anyone for getting derailed
Cannot be rid of this rancid sub-class of plutocrats quickly enough. “Gen ⍺ hanging the last squillionaire-tech-bro by the entrails of the last kleptocrat” should be this millennium’s Diderotian coda on their existence.
The AI-as-excuse layoffs have come for Meta:
@dgerard hey, I saw your bsky post and had an idea. Have you weighed in on Nscale and the UK’s sovereign scaffolding reserve? I hope that it gets noticed by the wider public, because that shit is 10/10 hilarious.
It gets better! According to Trashfuture, Nscale never even bothered to buy the scaffolding yard, which is still in operation.
https://trashfuturepodcast.podbean.com/e/unlocked-scaffold-to-heaven/
right here! https://awful.systems/post/7557155
Magnificent
Apologies for missing it first time around!
Don’t knock scaffolding, at least it has everyday uses.