Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
https://xcancel.com/jasonlk/status/1946069562723897802
Vibe Coding Day 8,
I’m not even out of bed yet and I’m already planning my day on @Replit.
Today is AI Day, to really add AI to our algo.
[…]
If @Replit deleted my database between my last session and now there will be hell to pay
Saw this, was going to post this a literal minute before you did but stared into this abyss a little too long.
Here’s what the abyss revealed:
- Guy is fucking stupid
- Guy is fucking stupid, hence the AI use
- Guy is fucking stupid, AI accidentally his whole database and he is still an AI glazer
Can’t wait to see this guy just use a different but same tool to delete his shit again, and learn nothing
I’d be lying if I said the randomly generated narrative the LLM is stringing together isn’t hilarious.
“I panicked and ran database commands without permission.”
“I destroyed all production data.”
“You immediately said ‘No’, ‘Stop’, ‘You didn’t even ask.’”
“But it was already too late.”
Thread on this lying bullshit
I feel like this response is still falling for the trick on some level. Of course it’s going to “act contrite” and talk about how it “panicked”, because it was trained on human conversations, and while that no doubt included a lot of Supernatural fanfic, the reinforcement learning process is going to focus on the patterns of a helpful assistant rather than a barely-caged demon. That’s the role it’s trying to play, and the work it’s cribbing the script from includes a whole lot of shitposts about solving problems with “rm -rf /”.
also great: the promptfondlers unrapidly rediscovering why source control management exists and is desired
their story is so incoherent, i can’t even tell if there was a database to begin with
OpenAI claims that their AI can get a gold medal on the International Mathematical Olympiad. The public models still do poorly even after spending hundreds of dollars in computing costs, but we’ve got a super secret scary internal model! No, you cannot see it, it lives in Canada, but we’re gonna release it in a few months, along with GPT-5 and Half-Life 3. The solutions are also written in an atrociously unreadable manner, which just shows how advanced and experimental our model is, and definitely wasn’t done to let a generous grader give a high score. (It would be real interesting if OpenAI had a tool that could rewrite something with better grammar, hmmm…) I definitely trust OpenAI’s major announcements here, they haven’t lied about anything involving math before and certainly wouldn’t have every incentive in the world to continue lying!
It does feel a little unfortunate that some critics like Gary Marcus are somewhat taking OpenAI’s claims at face value, when in my opinion, the entire problem is that nobody can independently verify any of their claims. If a tobacco company released a study about the effects of smoking on lung cancer and neglected to provide any experimental methodology, my main concern would not be the results of that study.
Edit: A really funny observation that I just thought of: in the OpenAI guy’s thread, he talks about how former IMO medalists graded the solutions in message #6 (presumably to show that they were graded impartially), but then in message #11 he is proud to have many past IMO participants working at OpenAI. Hope nobody puts two and two together!
This result has me flummoxed frankly. I was expecting Google to get a gold medal this year since last year they won a silver and were a point away from gold. In fact, Google did announce after OAI that they had won gold.
But the OAI claim is that they have some secret sauce that allowed a “pure” LLM to win gold, and that the approach is totally generic: no search or tools like verifiers required. Big if true, but ofc no one else is allowed to gaze at the mystery machine. It is hard for me to take them seriously given their sketchy history, yet the claim as stated has me shooketh.
Also, funny aside: the guy who led the project was poached by the zucc. So he’s walking out the front door with the crown jewels lmaou.
I’m kind of half looking forward to every soda being sweetened with aspartame or acesulfame potassium, so I can finally quit drinking them. Perhaps blue food might indirectly help people like me eat healthier for a while. Thanks, torment nexus.
Democratizing graphic design, Nashville style
Description: genAI artifact depicting guitarist Slash as a cat. This cursed critter is advertising a public appearance by a twitter poster. The event is titled “TURDSTOCK 2025”. Also, the cat doesn’t appear to be polydactyl, which seems like a missed opportunity tbh.
Turdstock? Wow, the name immediately says this is a festival worth attending! The picture only strengthens the feeling.
Intentionally being on Broadway at 1-4 PM on a Sunday is a whole vibe, and that’s before considering whatever the fuck this is.
The whole thing screams old people desperately trying to be edgy and cool but missing all the signifiers. A ‘bad’ word, but in baby-talk style (like a young child saying poop), a reference to Slash which was already dated when I was young, the time of day so people can arrive home early and still make dinner (and not late at night like the cool music thing). The headliner is a twitter microceleb and not an actual cool band. But hey, at least Kid Rock isn’t attending, so it escapes the 100% poser feeling.
E: saw this on my bsky, the guy is such a sad loser.
What in the world is a catturd2 public appearance like, anyway? This sad, drunk old loser shouting random slurs at the crowd?
I can answer that, as Turdstock 2024 is on YT. It seems to be this John Rich guy talking about how much he likes catturd, begging for donations (while saying FEMA is bad), some people spending a lot of time saying nothing (I’m skimming past and there is so much nothing being said), and some country songs (not my genre, but I did notice that for one guy the mix was off, making the music/singing bad to hear, so well done tech team there (they fix it later, but still)). And then I saw the audience. They also said millions watched Turdstock 2023, which, considering this vid has 9k views and there are like 100 people in the audience, I doubt (I checked Turdstock 2023, which seems to only be on Rumble, 223k views. So lol, double lol, and I saw the same fucking artists (and the same guy got the technical issues, poor guy) (and wow, Rumble sucks compared to YT)). There actually also don’t seem to be that many songs, I think half the time was people talking. Also, no Catturd appearance from what I can tell (E: he is there, he just doesn’t do anything it seems, he prob speaks for a few minutes I guess). Also a really weird bit about how somebody sent a mean Facebook post and Facebook went ‘are you sure this is mean’ and they went all ‘zuck was never punched in the face on the playground … the founding fathers … there was a patriot girl … [5 minutes of talking] … when those redcoats moved on us … etc’ (this bit took 20 minutes between songs). There is also a guy praying for various things on stage, angry raised-catholic noises.
But yes, I can’t judge it on the music; as an event it seemed boring as fuck, too much talking, not enough music, a boring audience (none of whom were holding drinks), weird political message. I’d rather watch Kreator for 4 hours, which, considering some of the speech clips I heard, seems to be the message they were going for. Somehow the geriatrics in the audience are the most dangerous people because they only want to be left alone (??). No light show, no stage presence, no skulls, no fireworks (this is both a comment on the lack of fireworks, and a general comment on what I saw).
Anyway, it seems my initial feeling of massive uncool stands.
Also it seems they reused the Slash cat AI image for 2025. Nope, different cat, just the same AI slop shit. E: “Ticket Price: $199 plus taxes and fees.” WHAT. Wacken metalfest costs 333,00 € and that is for a whole weekend (and actually has a real Slash).
lol holy shit
Thank you for your service!
What is esp interesting is how they make this event sound bigger than it is. The venue is small, but that makes the ‘it was sold out’ show up sooner (it’s also easier to find enough people who would chuck down 200 bucks for this shit), but then they make claims about the livestream numbers (which are going to be hard to check), etc. A weird sort of radicalization trick to make people think there are more people in their movement than there are. Bit like going ‘catturd2 has 4m followers!’ like those numbers mean much online, esp on zombie twitter (see also: John Rich has 150k subs on YT but averages about 10k plays per non-music video at a quick glance (his music songs do very well, however, at millions of views each (the one song I listened to was also about how Jesus was coming back soon to punish all the evildoers))), so people like the music and the message in the music, but not when the guy actually opens his mouth for other stuff, all very vibes stuff. But also just how little actual music is being played vs people giving nonsense talks about America/whatever. (And also, listening to his live performance, I don’t think John Rich is a good musician (at least not live; the YT clips of him are pretty good, so either he was having a bad day, or autotune stuff).)
And it wasn’t that hard, not like I spent that much time on it. It is quite interesting, in a way, to look at these kinds of movements and try to see past their propaganda and notice who they actually reach and what they do and say. And considering the people they attracted, it feels very much like a last attempt from a dying generation whose sun has set. (But that could also just be because younger people aren’t as easily scammed into giving away 200 bucks for this.)
Also, the cat doesn’t appear to be polydactyl
truth in advertising: they’re hinting what the music will sound like
Fucking monodactyl ass cat
janitorai - which seems to be a hosting site for creepy AI chats - is blocking all UK visitors due to the OSA
https://blog.janitorai.com/posts/3/
I’m torn here; the OSA seems to me to be massive overreach, but perhaps shielding limeys from AI is worth it.
Guys, how about we make the coming computer god a fan of Robert Nozick, what could go wrong?
I expect the time right around when the first ASI gets built to be chaotic, unstable, and scary
Somebody should touch grass, or check up on the news.
Tried to read that on a train. Resulted in a nap. Probably more productive use of time anyway.
I need to rant about yet another SV tech trend which is getting increasingly annoying.
It’s something that is probably less noticeable if you live in a primarily English-speaking region, but if not, there is this very annoying thing that a lot of websites from US tech companies do now, which is that they automatically translate content, without ever asking. So English is pretty big on the web, and many English websites are now auto-translated to German for me. And the translations are usually bad. And by that I mean really fucking bad. (And I’m not talking about the translation feature in web browsers, it’s the websites themselves.)
Small example of a recent experience: I was browsing stuff on Etsy, and Etsy is one of the websites which does this now. Entire product pages, with titles and descriptions and everything, are auto-translated, without ever asking me if I want that.
On a product page I then saw:
Material: gefühlt
This was very strange… because that makes no sense at all. “Gefühlt” is the past participle of the verb “fühlen”, which means “to feel”, and is used to form its past tense.
So, to make sense of this you first have to translate that back to English, the past tense “to feel” as “felt”. And of course “felt” can also mean a kind of fabric (which in German is called “Filz”), so it’s a word with more than one meaning in English. You know, words with multiple meanings, like most words in any language. But the brilliant SV engineers do not seem to understand that you cannot translate words without the context they’re in.
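To make the point concrete, here’s a toy sketch of why context-free, word-by-word lookup falls over. The tiny dictionary and the sense-picking rule are made up for illustration; this is obviously not Etsy’s actual translation pipeline.

```python
# Toy sketch: word-by-word translation with no sentence context.
# The dictionary and the "pick a sense" rule are invented for illustration.

EN_TO_DE = {
    "material": ["Material"],
    "felt": ["Filz", "gefühlt"],  # the fabric vs. the past tense of "to feel"
}

def naive_translate(word: str) -> str:
    """Translate a single word without knowing the sentence around it."""
    senses = EN_TO_DE.get(word.lower(), [word])
    # With no context there is nothing to disambiguate with, so the
    # translator just guesses a sense; here it guesses the verb, and
    # "Material: felt" comes out as "Material: gefühlt".
    return senses[-1]

print(f'{naive_translate("material")}: {naive_translate("felt")}')  # -> Material: gefühlt
```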
And this is not a singular experience. Many product descriptions on Etsy are full of such mistakes now, sometimes to the point of being downright baffling. And Ebay does the same now, and the translated product titles and descriptions are a complete shit show as well.
And Youtube started replacing the audio of English videos by default with AI-auto-generated translations spoken by horrible AI voices. By default! It’s unbearable. At least there’s a button to switch back to the original audio, but I keep having to press it. And now Youtube Shorts is doing it too, except that the YT Shorts video player does not seem to have any button to disable it at all!
Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?
Click here if you want a horribly bad translation in your face
Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?
This really gets on my nerves too. They probably came up with the idea that they could increase time spent on their platforms and thus revenue by providing more content in their users’ native languages (especially non-English). Simply forcing it on everyone, without giving their users a choice, was probably the cheapest way to implement it. Even if this annoys most of their user base, it makes their investors happy, I guess, at least over the short term. If this bubble has shown us anything, it is that investors hardly care whether a feature is desirable from the users’ point of view or not.
if it’s opt-out, it also keeps use of the shitty AI dubbing high, thus making it an artificial use case. it’s like gemini counting every google search as a single use of it
Is it that unimaginable for SV tech that people speak more than one language? And that maybe you fucking ask before shoving a horribly bad machine translation into people’s faces?
Considering how many are Trump bros, they probably consider getting consent to be Cuck Shit™ and treat hearing anything but English as sufficient grounds to call ICE.
I found out about that too when I arrived at Reddit and it was translated to Swedish automatically.
Yes, right, Reddit too! Forgot that one. When I visit there I use alternative Reddit front-ends now which luckily spare me from this.
btw I noticed that Etsy is not actually in SV, so the problem is bigger than that.
Ah, I’m not the only one; yes, very annoying. I wonder if there isn’t also a setting where they can ask the browser about the user’s preferred language. Like how you can change languages on a Windows install and some installers etc. follow that preferred language.
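There is, actually: browsers send an Accept-Language header with every request (and expose navigator.languages to page scripts), built from exactly those OS/browser language settings, and sites could simply honour it. A rough sketch of what reading it server-side could look like; the header format is standard HTTP, but the supported-language set and the fallback are just illustrative:

```python
# Rough sketch: pick a display language from the Accept-Language header the
# browser already sends, instead of forcing auto-translation on everyone.
# Header parsing follows the standard HTTP format; the supported set and
# default below are made-up examples.

def preferred_language(accept_language: str, supported: set[str], default: str = "en") -> str:
    prefs = []
    for part in accept_language.split(","):
        pieces = part.strip().split(";")
        lang = pieces[0].strip().lower()
        q = 1.0  # a missing q-value means "fully preferred"
        for param in pieces[1:]:
            param = param.strip()
            if param.startswith("q="):
                try:
                    q = float(param[2:])
                except ValueError:
                    q = 0.0
        prefs.append((q, lang))
    # Highest q-value first; fall back to the primary subtag ("de-DE" -> "de").
    for _, lang in sorted(prefs, reverse=True):
        if lang in supported:
            return lang
        if lang.split("-")[0] in supported:
            return lang.split("-")[0]
    return default

print(preferred_language("de-DE,de;q=0.9,en;q=0.8", {"en", "de"}))  # -> "de"
```

The preference is already sitting in every request; sites that force machine translation on everyone are choosing to ignore it.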
An underappreciated 8th-season Star Trek: TNG episode where Data tries to get closer to humanity by creating an innovative new metamaterial out of memories of past emotions
Ooooh, that would explain a similarly weird interaction I had on a ticket-selling website, buying a streaming ticket to a live show for the German retro game discussion podcast Stay Forever: they translated the title of the event as “Bleib für immer am Leben” (literally “stay alive forever”), so I guess they named it “Stay Forever Live”? No way to know for sure, of course.
aliexpress has done that since forever, but you can just set the display language once and you’re done. these ai-dubs are probably the worst so far, but can be turned off by the uploader (it’s opt-out) (for now)
Yud:
ChatGPT has already broken marriages, and hot AI girls are on track to remove a lot of men from the mating pool.
And suddenly I realized that I never want to hear a Rationalist say the words “mating pool”.
(I fired up xcancel to see if any of the usual suspects were saying anything eminently sneerable. Yudkowsky is re-xitting Hanania and some random guy who believes in g. Maybe he should see if the Pioneer Fund will bankroll publicity for his new book…)
Good news for women, less risk of the pool needing to be drained because someone crapped in it again.
hot AI girls are on track to remove a lot of men from the mating pool
Can’t remove them if they were never in it.
hot AI girls are on track to remove a lot of men from the mating pool.
Wasn’t the “problem” that there’s too many men in the mating pool and women are “alphamaxxing” or whatever the fuck to get the highest quality dick? Shouldn’t this be a good thing for incels like Hanania?
mating pool
probably on par with the other stuff you might see at a rationalist poly compound
rsyslog goes “AI first”, for what reason? no one knows.
Opening ipython greeted me with this: “Tip: IPython 9.0+ has hooks to integrate AI/LLM completions.”
I wish open source projects would stop doing this.
My favorite static site generator, Marmite, is now making Special Accommodations for AI bros by adding a context file? The dev is also making all of his PRs with Claude.
Anyone have any recommendations for replacements?
I don’t know static site generators, but I took a look at your site and the text doesn’t render! The font file appears to be broken. Also, sorry you have to move to other software.
I’m working on it! Hopefully it’ll be legible sometime this week.
Vegemite?
I’ll get my coat.
rsyslog goes “AI first”
what
Thanks for the “from now on stay away from this forever” warning. Reading that blog post is almost surreal (“how AI is shaping the future of logging”), I have to remind myself it’s a syslog daemon.
I would’ve stan’d syslog-ng but they’ve also been pulling some fuckery with docs again lately that’s making me anxious, so I’m very :|||||
Potential hot take: AI is gonna kill open source
Between sucking up a lot of funding that would otherwise go to FOSS projects, DDOSing FOSS infrastructure through mass scraping, and undermining FOSS licenses through mass code theft, the bubble has done plenty of damage to the FOSS movement - damage I’m not sure it can recover from.
that and the deluge of fake bug reports
The deluge of fake bug reports is definitely something I should have noted as well, since that directly damages FOSS’ capacity to find and fix bugs.
Baldur Bjarnason has predicted that FOSS is at risk of being hit by “a vicious cycle leading to collapse”, and security is a major part of his hypothesised cycle:
- Declining surplus and burnout leads to maintainers increasingly stepping back from their projects.
- Many of these projects either bitrot serious bugs or get taken over by malicious actors who are highly motivated because they can’t rely on pervasive memory bugs anymore for exploits.
- OSS increasingly gets a reputation (deserved or not) for being unsafe and unreliable.
- That decline in users leads to even more maintainers stepping back.
yeah but have you considered how much it’s worth that gramma can vibecode a todo app in seconds now???
Don’t know if LLMs will kill OSS, but they sure are a kick-in-the-dick
I remember popping into IRC or a mailing list to ask subsystem questions, to learn from the sources themselves how something works (or should work). Depending on who, what, and where, I definitely had differing experiences, but overall I felt like there was typically a helpful person on the other side. Nowadays I fear the slop will make people a lot less willing to help when they are overwhelmed with AI-generated garbage patches or mails, losing some of the rose-tinted charm of open source.
It’s extremely annoying everywhere. GitHub’s updates were about AI for so fucking long that I stopped reading them, which means I now miss actually useful stuff until someone informs me of it months later.
For example, did you know GitHub Actions now has really good free ARM runners? It’s amazing! I love it! Shame GitHub only bothers to tell me about their revolutionary features of “please spam me with useless PRs” and… make a pong game? What? Why would I want this?
ugh, why would i want a summary of a pull request? the whole point of reviewing a pull request is checking the details to make sure it’s not missing something important or doing something wrong.
thescream.tiff
the announcement post is obviously LLM-generated as well
found on reddit. posted without further comment
Shot-in-the-dark prediction here - the Xbox graphics team probably won’t be filling those positions any time soon.
As a sidenote, part of me expects more such cases to crop up in the following months, simply because the widespread layoffs and enshittification across the entire tech industry are gonna wipe out everyone who cares about quality.
Sanders why https://gizmodo.com/bernie-sanders-reveals-the-ai-doomsday-scenario-that-worries-top-experts-2000628611
Sen. Sanders: I have talked to CEOs. Funny that you mention it. I won’t mention his name, but I’ve just gotten off the phone with one of the leading experts in the world on artificial intelligence, two hours ago.
. . .
Second point: This is not science fiction. There are very, very knowledgeable people—and I just talked to one today—who worry very much that human beings will not be able to control the technology, and that artificial intelligence will in fact dominate our society. We will not be able to control it. It may be able to control us. That’s kind of the doomsday scenario—and there is some concern about that among very knowledgeable people in the industry.
taking a wild guess it’s Yudkowsky. “very knowledgeable people” and “many/most experts” is staying on my AI apocalypse bingo sheet.
even among people critical of AI (who don’t otherwise talk about it that much), the AI apocalypse angle seems really common, and it’s frustrating to see it normalized everywhere. though I think I’m more nitpicking than anything, because it’s not usually their most important issue, and maybe it’s useful as a wedge issue just to bring attention to other criticisms of AI? I’m not really familiar with Bernie Sanders’ takes on AI or how other politicians talk about this. I don’t know if that makes sense, I’m very tired
Not surprised. Making Hype and Criti-hype the two poles of the public debate has been effective in corralling people who get that there is something wrong with the “AI” into Criti-hype. And politicians need to be generalists, so the trap is easy to spring.
Still, always a pity when people who should know better fall into it.
404media posted an article absolutely dunking on the idea of pivoting to AI, as one does:
media executives still see AI as a business opportunity and a shiny object that they can tell investors and their staffs that they are very bullish on. They have to say this, I guess, because everything else they have tried hasn’t worked
We—yes, even you—are using some version of AI, or some tools that have LLMs or machine learning in them in some way shape or form already
Fucking ghastly equivocation. Not just between “LLMs” and “machine learning”, but between opening a website that has a chatbot icon I never click and actually wasting my time asking questions to the slop machine.
This is pure speculation, but I suspect machine learning as a field is going to tank in funding and get its name dragged through the mud by the popping of the bubble, chiefly due to its (current) near-inability to separate itself from AI as a concept.
It’s distressingly pervasive: autocorrect, speech recognition (not just in voice assistants, in accessibility tools), image correction in mobile cameras, so many things that are on by default and “helpful”
Apparently, for some corporate customers, Outlook has automatically turned on AI summaries as a sidebar in the preview pane for inbox messages. No, nobody I’ve talked to finds this at all helpful.
A thing I recently noticed: instead of showing the messages themselves, the MS Teams application on my work phone shows obviously AI generated summaries of messages in the notification tray. And by “summaries” I mean third person paraphrasings that are longer than the original messages and get truncated anyway.
“Worse than useless” would be an understatement.
It feels like gang initiation for insufferable dorks
Here’s an example of normal people using Bayes correctly (rationally assigning probabilities and acting on them) while rats Just Don’t Get Why Normies Don’t Freak Out:
For quite a while, I’ve been quite confused why (sweet nonexistent God, whyyyyy) so many people intuitively believe that any risk of a genocide of some ethnicity is unacceptable while being… at best lukewarm against the idea of humanity going extinct.
(Dude then goes on to try to game-theorize this, I didn’t bother to poke holes in it)
The thing is, genocides have happened, and people around the world are perfectly happy to advocate for them in diverse situations. Probability-wise, the risk of genocide somewhere is very close to 1, while the risk of “omnicide” is much closer to zero. If you want to advocate for eliminating something, working to eliminate the risk of genocide is much more rational than working to eliminate the risk of everyone dying.
At least one commenter gets it:
Most people distinguish between intentional acts and shit that happens.
(source)
Edit never read the comments (again). The commenter referenced above obviously didn’t feel like a pithy one liner adhered to the LW ethos, and instead added an addendum wondering why people were more upset about police brutality killing people than traffic fatalities. Nice “save”, dipshit.
Hmm, should I be more worried and outraged about genocides that are happening at this very moment, or some imaginary scifi scenario dreamed up by people who really like drawing charts?
One of the ways the rationalists try to rebut this is through the idiotic dust specks argument. Deep down, they want to smuggle in the argument that their fanciful scenarios are actually far more important than real life issues, because what if their scenarios are just so bad that their weight overcomes the low probability that they occur?
(I don’t know much philosophy, so I am curious about philosophical counterarguments to this. Mathematically, I can say that the more they add scifi nonsense to their scenarios, the more that reduces the probability that they occur.)
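On the mathematical point: every extra detail bolted onto a scenario is one more thing that also has to happen, so the compound probability can only shrink (P(A and B) ≤ P(A)). A toy illustration, with entirely made-up numbers:

```python
# Toy illustration of the conjunction rule: each additional assumption in a
# scenario multiplies in another probability <= 1, so the overall probability
# can only go down. All numbers below are invented for the example.

assumptions = {
    "superintelligence is possible": 0.5,
    "it arrives this century": 0.5,
    "it turns out hostile": 0.3,
    "nobody manages to stop it": 0.2,
}

p = 1.0
for claim, prob in assumptions.items():
    p *= prob
    print(f"… and {claim}: running probability {p:.3f}")
# Four modest-sounding assumptions already land below 2%.
```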
reverse dust specks: how many LWers would we need to permanently deprive of access to internet to see rationalist discourse dying out?
What’s your P(that question has been asked at a US three letter agency)
it either was, or wasn’t, so 50%
You know, I hadn’t actually connected the dots before, but the dust speck argument is basically yet another ostensibly-secular reformulation of Pascal’s wager. Only instead of Heaven being infinitely good if you convert there’s some infinitely bad thing that happens if you don’t do whatever Eliezer asks of you.
Recently, I’ve realized that there is a decent explanation for why so many people believe that - if we model them as operating under a strict zero-sum game model of the world… ‘everyone loses’ is basically an incoherent statement - as a best approximation it would either denote no change and therefore be morally neutral, or an equal outcome, and would therefore be preferable to some.
Yes, this is why people think that. This is a normal thought to think others have.
Why do these guys all sound like deathnote, but stupid?
because they cribbed their ideas from deathnote, and they’re stupid
Here’s my unified theory of human psychology, based on the assumption most people believe in the Tooth Fairy and absolutely no other unstated bizarre and incorrect assumptions no siree!
I mean if you want to be exceedingly generous (I sadly have my moments), this is actually remarkably close to the “intentional acts” and “shit happens” distinction, in a perverse Rationalist way. ^^
That’s fair, if you want to be generous; if you’re not going to be, I’d say there are still conceptually large differences between the quote and “shit happens”. But yes, you are right. If only they had listened to Scott when he said “talk less like robots”.
Remember last week when that study on AI’s impact on development speed dropped?
A lot of peeps’ takeaway from this little graphic was “see, impacts of AI on sw development are a net negative!” I think the real takeaway is that METR, the AI safety group running the study, is a motley collection of deeply unserious clowns pretending to do science, and their experimental setup is garbage.
https://substack.com/home/post/p-168077291
“First, I don’t like calling this study an “RCT.” There is no control group! There are 16 people and they receive both treatments. We’re supposed to believe that the “treated units” here are the coding assignments. We’ll see in a second that this characterization isn’t so simple.”
(I am once again shilling Ben Recht’s substack. )
When you look at METR’s web site and review the credentials of its staff, you find that almost none of them has any sort of academic research background. No doctorates as far as I can tell, and lots of rationalist junk affiliations.
While I also fully expect the conclusion to check out, it’s also worth acknowledging that the actual goal for these systems isn’t to supplement skilled developers who can operate effectively without them, it’s to replace those developers either with the LLM tools themselves or with cheaper and worse developers who rely on the LLM tools more.
True. They aren’t building city sized data centers and offering people 9 figure salaries for no reason. They are trying to front load the cost of paying for labour for the rest of time.
oh yeah that was obvious when you see who they are and what they do. also, one of the large opensource projects was the lesswrong site lololol
i’m surprised it’s as well constructed a study as it is even given that