Want to wade into the snowy sandy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
Delve removed from YCombinator
https://news.ycombinator.com/item?id=47634690
IIUC, it looks like Delve lied to YC about stealing another company’s Apache 2.0-licensed slopware. This is apparently a bigger sin than selling a product that does fuck-all. I guess they weren’t tall enough for this ride.
Delve claims to offer “Compliance as a Service”
https://delve.co/ (absolutely unhinged)
A link to the expose that precipitated the divorce:
https://deepdelver.substack.com/p/delve-fake-compliance-as-a-service
@o7___o7 @BlueMonday1984 TF covered these clowns the other week
https://trashfuturepodcast.podbean.com/e/the-tetsuo-economy-feat-wendy-liu/
While I tend to think Yudkowsky is sincere, some things like his prediction market for P(doom) are hard to square with that https://manifold.markets/EliezerYudkowsky/will-ai-wipe-out-humanity-by-2030-r (launched June 2023, will resolve N/A on 1 January 2027 if the world has not ended yet. It has not moved much since 1 January 2024)
I will never understand why people seriously bet “yes” on these types of things. Like, you either lose the bet and lose money, or you win the bet and die.
Does it still count if it turns out that Trump invading Iran was based on Claude or ChatJippity advice and things escalate to global thermonuclear war? AI technically wiped out humanity because our dumb leaders were dumb enough to trust it?
On the one hand, Yud’s vision of AI doomsday is specifically “AI turns sentient/superintelligent and kills us all because reasons”, not “Humanity wipes itself out because they trusted lying machines”.
On the other hand, the absence of sentience/superintelligence hasn’t stopped AI from causing untold damage anyways, as the past two to three years can attest.
Technically yes, but Yud probably wouldn’t count that, since the AI didn’t have the express purpose of destroying everyone
An early hint of Gwern’s rejection of chaos theory in the sequences from 2008 (the “build God to conquer Death” essay):
And the adults wouldn’t be in so much danger. A superintelligence—a mind that could think a trillion thoughts without a misstep—would not be intimidated by a challenge where death is the price of a single failure. The raw universe wouldn’t seem so harsh, would be only another problem to be solved.
Someone who got to high-school math or coded a working system would probably have encountered the combinatorial explosion, the impossibility of representing 0.1 exactly in binary floating point, chaos theory, and so on. Even game theory has situations like “in some games, optimal play guarantees a tie but not a win.” But Yud was much too special for any of those and refused offers to learn.
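The floating-point one is the easiest to see for yourself; any IEEE 754 language will do. A quick TypeScript demo (nothing here is specific to Yud, it’s just the standard example):

```typescript
// 0.1 has no exact binary floating-point representation, so rounding
// error shows up immediately in ordinary arithmetic.
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
console.log((0.1).toFixed(20));  // "0.10000000000000000555"
```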
This is what happens when your worldview is based on anime.
(A lot of anime has heavy themes, but most people understand that it’s not real life, just like all such art. Unlike Yud, most people’s worldviews on coding and math are based on actual coding and math.)
Not just anime but also science fiction. See also all the people who love ‘hard’ science fiction (science fiction more grounded in real-world physics), which often isn’t that hard at all and just has a few real-physics elements; see The Expanse for a good example of non-hard SF that feels hard (I’m finally reading the book series, so be warned I might Expanse-post a bit).
content warning: discussion of a sexual abuse trope
A similar thing happens with people who confuse edgy/grimdark/vile fiction with realistic fiction. (A while back I played a video game which had a reference to women being captured for breeding and men for other sexual abuse, which made no sense in the setting: the slaver faction was already resource-starved and poisoned, so they died quickly, and there was no way they could raise kids to maturity in that environment (also iirc the slaver faction was less than 20 years old). Some players described this as very realistic (people do the same about 40k, almost like it says something about their ideas of how the world works, not the setting). I was just rolling my eyes and didn’t comment. Apart from that it seemed ok. Crying Suns is the name of the game, for the people who want to avoid it for this reason (it wasn’t a big plot point).)
Sorry for being a bit offtopic and talking about entertainment again.
I will never forget the time I calculated the energy output on one of the torpedo engines of The Expanse and realized it was higher than the total wattage of all human civilization in 2020
Ah the Epstein drive. (oof that aged…)
Small note however: iirc James S. A. Corey has mentioned The Expanse is not hard SF. I don’t have a quote for that, however.
Not sure if I should post it here or under the pivot article; somebody went through the Claude Code source: https://neuromatch.social/@jonny/116324676116121930 (via @aliettedebodard.com and @olivia.science on bsky)
13 butts pooping, back and forth, forever.
This is somehow even more of a shitshow than I would have predicted. Also, it continues the pattern that these systems don’t fuck up the way people do. One thing he hasn’t annotated as much is the sheer number of different aesthetic variants on doing the same thing that this code contains. Like, you do the same kind of compression in four different places, and one is compressImage, one is DoCompression, one is imgModify.compress, and one is COMPRESS_IMG. Even the most dysfunctional team would have spent time developing some kind of standard here from my (admittedly limited) experience.
Even the most dysfunctional team would have spent time developing some kind of standard here from my (admittedly limited) experience.
My experience has been vastly different. Prior to LLMs I have seen all sorts of horrors of this sort and others writ large across many codebases. It’s so awesome that LLMs offer the ability to make the same sorts of code but at a much faster speed. In times past it used to take devs years to build up the kind of tech debt that LLMs can give you in days.
Yeah, I realized a while ago that vibe coding is a massive technical debt creation machine.
I mean I guess “developing” in that sentence is doing a lot of work replacing “arguing fruitlessly about”.
It is great: that means the system is vulnerable to hacks if you find an exploit in any of those methods, but only 1/4th of the time.
Somebody described AI agents as very enthusiastic 14-year-olds, and it looks like they certainly code like one.
GitHub have finally achieved zero 9s stability for the last 90 days. Congratulations to all involved
[screenshot of the GitHub status page uptime figure]
Hold on now, the uptime number contains two digits that are nines! The image itself has four nines in total!
Can’t believe I’m nerd-sniped this easily. Very technically, the point at which a service should be considered unreliable or down is at γ nines, where γ = 0.9030899869919434… is a transcendental constant. γ nines is exactly 87.5% availability, or 7/8 availability, and it’s the point at which a service’s availability might as well be random. (Another one of the local complexity theorists can explain why it’s 7/8 and not 1/2.)
… why 7/8?
We can see that one 9 of availability is 90% = 0.9, two 9s is 99% = 0.99, three 9s is 99.9% = 0.999, etc. In general, for positive integers n, n 9s of availability is 1 - (1/10)^n, and we can extrapolate that to non-integer values of n. The value γ needed for 87.5% availability is the solution to 1 - (1/10)^γ = 7/8, or γ = log_10(8) ≈ 0.903089987. γ is transcendental by Gelfond-Schneider (see this for a reference proof).
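If you want to check that numerically, the extrapolation is a one-liner; a quick TypeScript sketch (the function name is mine):

```typescript
// "Nines" of availability, extended to non-integer values:
// availability = 1 - 10^(-n)  =>  n = -log10(1 - availability)
const nines = (availability: number): number =>
  -Math.log10(1 - availability);

console.log(nines(7 / 8));   // 0.9030899869919435 (γ)
console.log(nines(0.99));    // ≈ 2 nines
console.log(nines(0.9999));  // ≈ 4 nines
```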
Right now, Sora is at zero 9s of availability.
Alas, foiled again! Nobody said they had to be leading 9s!
For my own services I’m aiming for .999999% of uptime
89.90999999…% uptime 🐐
If you had told this to the me of 20 years ago I wouldn’t have believed you.
Here’s a headline I never expected to read:
Tl;dr: A whole load of media outlets believed an X account asking for crypto donations which claimed to be Jonathan the 194-year-old tortoise’s vet. Jonathan was found safely asleep under a tree in the governor’s paddock.
Heh. Who goes AI?
I never love the need for these parlor games, but it’s a good one.
https://www.todayintabs.com/p/who-goes-ai
taking shots at the gray lady:
You might think Mr. R not so different, superficially, from Ms. L. He’s also a long-tenured technology columnist at a respected mainstream publication. And yet he has eagerly, even gleefully, turned flack for the machines. He has delegated much of his professional life to them as well, and seems proud of it:
Most recently, [Mr. R] tells me, he created a team of Claude agents to help edit his book, led by a “Master Editor” agent. Other sub-agents are in charge of things like fact-checking, making sure the book matches his writing style, and offering positive and negative feedback.
And why not? Mr. R is not known or valued for his elegance of expression. He has, at best, a “writing style,” and not one that can’t easily be duplicated by a large language model. Checking facts? Assessing his work’s strengths and weaknesses? More bathwater to be tossed out of this increasingly baby-less tub. So what explains Mr. R, who “expects AI models to get better than him at everything eventually?” Why does he go AI when Ms. L never would?
Mr. R’s secret is that his work is not primarily artistic or informative—it is functional. He serves a purpose for the industry he covers. Mr. R’s job is to absorb the tech industry’s self-mythologizing, and then believe in it even harder than the industry itself does. He serves as a kind of plausibility ratchet. His byline and employer legitimize a level of credulousness that would otherwise be laughable, and thereby allow tech PR to seem relatively restrained. Mr. R has no problem going AI because he himself has been a small cog in a big ugly machine for a long time.
spoiler
It’s Kevin Roose
Putting “Novelty Purposes Only” on my psychosis suicide bot after I laid off 80% of my legal (replaced them with the psychosis suicide bot)

Good luck telling the promptfondlers that LLMs are only useful for entertainment and not for any useful work.
Don’t they have a version of breakout buried somewhere in Excel? Sounds like an entertainment purpose to me.
Cloudflare casually license-laundering WordPress
While EmDash aims to be compatible with WordPress functionality, no WordPress code was used to create EmDash. That allows us to license the open source project under the more permissive MIT license.
Oh really. So you’re sure your Claude wasn’t trained on WordPress? It’s all irrelevant anyway because AI-generated code can’t be copyrighted or licensed.
Silver lining, it might piss off Matt Mullenweg!
So you’re sure your Claude wasn’t trained on WordPress?
Unfortunately FOSS is basically dead because nobody is enforcing licenses against training.
That, and plenty of FOSS software’s been infected with AI-extruded “code”. And plenty of software engineers got one-tapped by the slop bots.
i feel in my gut that on some level license disputes are ultimately slapfights for which titanic corporation gets the money. however i will absolutely point and laugh at every misfortune that comes the way of that particular transmisogynist asshole
On this most terrible of online days, “enjoy” this LW attempt at humor
https://www.lesswrong.com/posts/3GbM9hmyJqn4LNXrG/yams-s-shortform?commentId=ik6ywoQYsGrrQv8Dm
edit: there are more submissions on the theme of “humor” on site now. Let’s just say the cringe factor outweighs the humor factor by a large amount.
omg I don’t have anything better to do
- Lesswrong Liberated - they implemented a chat interface to redesign the LW site according to different themes. Mostly boring
- LIMBO: Who We Are, What We Do, and an Exciting High-Impact Funding Opportunity - probably not AFJ? Can’t really tell. Bad day to launch a call for funding if not
- Announcing Doublehaven with Reflections on Humour - protip: do not try to reflect on “humour” (native Brit speaker or pretentious LWer? flip a coin) with boring examples of “ratty humour”
- ACME Alignment Co Announces: Aligning Humans - this one is easy to call as AFJ
- Giving up on EA after 13 years - lol it’s funny because EA means “Electronic Arts” here
- “You Have Not Been a Good User” (LessWrong’s second album) - also seems to be a “straight” post, not a joke, but released on AFJ b/c then the community can “cut loose”? I dunno, and I am never gonna listen to any songs by “Fooming Shoggoths” in my life
- Announcing EA Omelas - “[Extensively co-written with Claude Opus 4.6]” you have been warned. Considering how every rat coverage of Omelas has been utter shit I’m just posting the link, not reading it
Don’t think it is that bad (E: at least it is short; the other ‘jokes’, not so much). The ‘not sneering enough’ icon is missing, however. (Guess the joke is that the not-sneering is itself sneering.)
Wonder how much of them they will really implement.
However looking at the titles of other recent submissions, I have no idea which ones are meant to be jokes and which are meant to be real posts.
Great troll opportunity however: just spend the whole week before 1 April replying to new posts with a variant of ‘not sure this April Fools joke lands’.
E: and the site died with a nice 504.
new odium symposium episode: https://www.patreon.com/posts/13-joker-is-both-154123315. links to various platforms at www.odiumsymposium.com
we read umberto eco’s essay ur-fascism (we have mixed feelings about it) and then apply it to frank miller’s 1986 batman comic the dark knight returns
Someone may (unverified for now) have left the frontend source maps in the Claude Code prod release (probably Claude). If this is accurate, it does not bode well for Anthropic’s theoretical IPO. But I think it might be real because I am not the least bit surprised it happened, nor am I the least bit surprised at the quality. https://github.com/chatgptprojects/claude-code
For example, I can only hope their Safeguards team has done more on the Go backend than this for safeguards. From the constants file cyberRiskInstruction.ts:
```typescript
export const CYBER_RISK_INSTRUCTION =
  "IMPORTANT: Assist with authorized security testing, defensive security, CTF challenges, and educational contexts. Refuse requests for destructive techniques, DoS attacks, mass targeting, supply chain compromise, or detection evasion for malicious purposes. Dual-use security tools (C2 frameworks, credential testing, exploit development) require clear authorization context: pentesting engagements, CTF competitions, security research, or defensive use cases";
```

That’s it. That’s all the constants the file contains. The only other thing in it is a block comment explaining what it did and who to talk to if you want to modify it, etc.
There is this amazing bit at the end of that block comment though.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Brilliant. I feel much safer already.
This thread by Jonny reading (skimming on a phone, hah) through it is really good.
If only literally any human with context and a small screen to look at the bigger picture was involved with decisions around taking this to production, it would … still be bad but only on a societal level.
That was great, thank you! Full respect to this absolute maniac for tracing some of the spaghetti, I was definitely not going to try that on my phone.
They’ve validated most gut feelings I had about how Claude works (and doesn’t), based on my experience having to use it. I’m feeling pretty smug that my hunches now have definitive code attributions.
But the one unfortunate part about all of this is that this leak and the ensuing justified sneers about specific bits are going to be fed back in to their codebase to fix some of the gaping holes. It’s an embarrassing indictment of the product, but it’s also free pre-IPO pentesting. Sort of like their open source pull request slop spam “undercover mode” was probably used as a way to extract free labor in the form of reviews from actually competent developers. This doesn’t seem as planned though.
In practical terms, what can they do? Add instructions to say “You will not generate spaghetti code that will humiliate us when real programmers see it”? Perhaps in all caps?
This is what their organization is capable, after tremendous expense, of producing. I don’t think that bodes well for their prospects of improvement.
Sorry, this was more of a rant than I thought it would be, I hit one of my own nerves while writing it. This is what happens when you’re not in a good position to escape enforced AI usage hell. Tl;dr in bold at end.
---
I can think of several practical measures, because I’ve tried them myself in an effort to make my coerced work with LLMs less painful, and because in the process I’ve previously fallen into the gambling trap Jonny outlined.
The less novel things I tried are things they’ve half-assed themselves as “features” already. For example, Jonny found one of the things I had spotted in the wild a while back - the “system_reminder” injection. This periodically injects a small line into the transcript in an effort to keep it within the context window. In my case, I tried the same thing with a line that summed up to “reread the original fucking context and assess whether the changes make a shred of sense against the task because what the fuck”. I had tried this unsuccessfully because I had no way to realistically enforce it within their system, and they recently included the “team lead” skill which (I rightly assumed) tries to do exactly the same thing. The implementation suggests they will only have been marginally more successful than my attempt; it didn’t look like they tried very hard. This could be better implemented and extended to even a little more than “read original context” (a rough sketch of the pattern follows).
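For anyone who hasn’t seen the mechanism: the pattern being described is roughly the following (a minimal sketch of the idea only; the type and names are mine, not Anthropic’s actual implementation):

```typescript
// Sketch of a "system_reminder"-style injection: re-insert a short
// reminder into the transcript every few turns so it stays inside the
// model's context window. Illustrative, not Anthropic's code.
type Message = { role: "system" | "user" | "assistant"; content: string };

const REMINDER: Message = {
  role: "user",
  content:
    "<system_reminder>Re-read the original task context and check that " +
    "your changes still make sense against it.</system_reminder>",
};

function withPeriodicReminder(history: Message[], every = 5): Message[] {
  const out: Message[] = [];
  history.forEach((msg, i) => {
    out.push(msg);
    if ((i + 1) % every === 0) out.push(REMINDER); // periodic re-injection
  });
  return out;
}
```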
For this leak, some of the very easy things they could have done were to verify their own code against best practices, implement the most basic of tests, or attempt to measure the consistency of their implementation. Source maps in production is a ridiculously easily preventable rookie error (one build flag in a typical pipeline; see the sketch below). Such checks should already be executed automatically at multiple stages of their coding, merging, and deployment pipelines, with varying degrees of redundancy and thoroughness, the same way they are at any tech company with more than maybe 10 developers. There is just no reason they shouldn’t have prevented huge chunks of the now-visible code issues if they were triggering their own trash bots against their codebase with even the simplest prompt of “evaluate against good system design and architecture principles”. This implies that they either weren’t doing it at all, or, maybe worse, ignored all the red flags it is capable of identifying after ingesting all of the system architecture guides and textbooks ever published online.
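To show how preventable the source-map part is: in a typical TypeScript pipeline it’s literally one build flag. A hypothetical esbuild config (we have no idea what Anthropic’s actual build setup looks like; entry point and output names are made up):

```typescript
// build.mts (hypothetical release build); the only line relevant to
// the leak is `sourcemap: false`.
import { build } from "esbuild";

await build({
  entryPoints: ["src/cli.ts"], // hypothetical entry point
  bundle: true,
  minify: true,
  sourcemap: false, // never ship source maps in a production release
  outfile: "dist/cli.js",
});
```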
Anthropic is constrained in that some of the fixes which should be pushed to users are things which would have significant trade-off in the form of cost or context window, neither of which are palatable to them for reasons this community has discussed at length. But that constraint doesn’t prevent them from running checks or applying fixes to their own code, which reveals the root cause: The problems Anthropic are facing are clearly cultural. They’re pushing as much new shit as they can as quickly as possible and almost never going back to fix any of it. That’s a choice.
I saw a couple of signs that there are at least a few people there who are capable, and who are trying to steer an out-of-control Titanic away from the iceberg, but the codebase stinks of missing architectural plans which are being retrofitted piecemeal long after they were needed. That aligns with Anthropic’s origin story, where OpenAI researchers accurately gauged how gullible venture capitalists are, but overestimated how much smarter they are than the rest of the world, and underestimated the value of practical experience building and running complex systems.
With the resources they have, even for a codebase of this unreasonable size, they could and should vibe code a much better version within a couple of months. That is not resounding praise for Claude, only a commentary on the quality of the existing code. Perhaps as a first step they could use their own “plan mode” which just appends a string that says not to make any edits, only to investigate and assess requirements…
Were I happy to watch the world burn, I’d start my own damn AI company that would do a much better job at this, because holy shit, people actually financed this trash.
Tl;dr, you’re right that it doesn’t bode well for their prospects of improvement, but it’s not because there aren’t many things they could be doing practically. It’s because they refuse to point the gun somewhere other than their own feet.
Anthropic is constrained in that some of the fixes which should be pushed to users are things which would have significant trade-off in the form of cost or context window, neither of which are palatable to them for reasons this community has discussed at length.
I think I’m missing something somewhere. One of the most alarming patterns that Jonny found imo was the level of waste involved across unnecessary calls to the source model, unnecessary token churn through the context window from bad architecture, and generally a sense that when creating this neither they nor their pattern extruder had made any effort to optimize it in terms of token use. In other words, changing the design to push some of those calls onto the user would save tokens and thus reduce the user’s cost per prompt, presumably by a fair margin on some of the worst cases.
You’re right, but Jonny also rightly identified the issue where Claude creates complex trash code to work around user-provided constraints while not actually changing approach at all (see the part about tool denial workarounds).
I think Anthropic optimized for appended system prompt character count, and measured it in isolation - at least in the project’s beginning stages, if it’s not still in the code. I assume the inefficiencies have come from the agent working with and around that requirement, backfiring horribly in the spaghetti you see now. Not only is the resulting trash control flow less likely to be caught as a problem by agents, especially compared to checking a character count occasionally, but it’s more likely the agent will treat the trash code as an accepted pattern it should replicate.
Claude will also not trace a control flow to any kind of depth unless asked, and if you ask, and it encounters more than one or two levels of recursion or abstraction, it will choke. Probably because it’s so inefficient, but then they’re getting the inefficient tool to add more to itself and… there’s no way to recover from that loop without human refactoring. I assume that’s a taboo at Anthropic too.
A type of fix I was imagining would be something like an extra call like “after editing, evaluate changes against this large collection of terrible choices that should not occur, for example, the agent’s current internal code”. That would obviously increase the short term token consumption, context window overhead, and make an Anthropic project manager break out in a cold sweat. But it would reduce the gradient of the project death spiral by providing more robust code for future agents to copy paste that can be more cheaply evaluated, and require fewer user prompts overall to rectify obvious bad code.
They would never go for that type of long game, because they’d have to do some combination of:
- listening to all the users complain that they ran out of tokens too soon while creating the millionth token dashboard project, or,
- increase the limits for users at company cost, or,
- increase prices, or,
- sacrifice feature development velocity by getting humans to fix the mess / implement no-or-low-agent client-side tooling for common checks.
They should just set it all on fire, the abomination can’t salvage the abomination.
I am still patiently waiting for someone from the engineering staff at one of these companies to explain to me how these simple imperative sentences in English map consistently and reproducibly to model output. Yes, I understand that’s a complex topic. I’ll continue to wait.
According to the claude code leak the state of the art is to be, like, really stern and authoritative when you are begging it to do its job:
[screenshot of stern, all-caps prompt instructions from the Claude Code leak]
I’m sure these English instructions work because they feel like they work. Look, these LLMs feel really great for coding. If they don’t work, that’s because you didn’t pay $200/month for the pro version and you didn’t put enough boldface and all-caps words in the prompt. Also, I really feel like these homeopathic sugar pills cured my cold. I got better after I started taking them!
No joke, I watched a talk once where some people used an LLM to model how certain users would behave in their scenario given their socioeconomic backgrounds. But they had a slight problem, which was that LLMs are nondeterministic and would of course often give different answers when prompted twice. Their solution was to literally use an automated tool that would try a bunch of different prompts until they happened to get one that would give consistent answers (at least on their dataset). I would call this the xkcd green jelly bean effect, but I guess if you call it “finetuning” then suddenly it sounds very proper and serious. (The cherry on top was that they never actually evaluated the output of the LLM, e.g. by seeing how consistent it was with actual user responses. They just had an LLM generate fiction and called it a day.)
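The procedure, as I understood it, boils down to something like this (a sketch only; every name is made up, and queryModel stands in for whatever API they called):

```typescript
// Green-jelly-bean "finetuning": keep trying prompts until one happens
// to give self-consistent answers on your dataset, then declare victory.
// Note that nothing here validates the answers against reality.
declare function queryModel(prompt: string, input: string): Promise<string>;

async function findConsistentPrompt(
  candidates: string[],
  dataset: string[],
  trials = 5,
): Promise<string | undefined> {
  for (const prompt of candidates) {
    let consistent = true;
    for (const input of dataset) {
      const answers = new Set<string>();
      for (let t = 0; t < trials; t++) {
        answers.add(await queryModel(prompt, input));
      }
      if (answers.size > 1) { consistent = false; break; } // answers disagreed
    }
    if (consistent) return prompt; // "works", for no validated reason
  }
  return undefined;
}
```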
I don’t work at one of those companies, just somewhere mainlining AI, so this answer might not satisfy your requirements. But the answer is very simple. The first thing anyone working in AI will tell you (maybe only internally?) is that the output is probabilistic not deterministic. By definition, that means it’s not entirely consistent or reproducible, just… maybe close enough. I’m sure you already knew that though.
However, from my perspective, even if it was deterministic, it wouldn’t make a substantial difference here.
For example, this file says I can’t ask it to build a DoS script. Fine. But if I ask it to write a script that sends a request to a server, and then later I ask it to add a loop… I get a DoS script. It’s a trivial hurdle at best, and doesn’t even approach basic risk mitigation.
the output is probabilistic not deterministic. By definition, that means it’s not entirely consistent or reproducible, just… maybe close enough.
That isn’t a barrier to making guarantees regarding the behavior of a program. The entire field of randomized algorithms is devoted to doing so. The problem is people willfully writing and deploying programs which they neither understand nor can control.
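To make that concrete: Miller–Rabin primality testing is the textbook example of a randomized algorithm with a hard guarantee. The random choices change run to run, but a composite number survives k rounds with probability at most 4^(-k), which is a real, provable bound. A minimal TypeScript sketch (illustrative, not production crypto; the witness sampling is naive and only suitable for smallish numbers):

```typescript
// Miller–Rabin: probabilistic output with a provable error bound.
function isProbablyPrime(n: bigint, rounds = 20): boolean {
  if (n < 2n) return false;
  for (const p of [2n, 3n, 5n, 7n]) {
    if (n === p) return true;
    if (n % p === 0n) return false;
  }
  // Write n - 1 = d * 2^r with d odd.
  let d = n - 1n, r = 0n;
  while (d % 2n === 0n) { d /= 2n; r += 1n; }
  const modPow = (base: bigint, exp: bigint, mod: bigint): bigint => {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
      if (exp & 1n) result = (result * base) % mod;
      base = (base * base) % mod;
      exp >>= 1n;
    }
    return result;
  };
  for (let i = 0; i < rounds; i++) {
    // Naive random witness in [2, n - 2]; fine for a demo.
    const a = 2n + BigInt(Math.floor(Math.random() * Number(n - 3n)));
    let x = modPow(a, d, n);
    if (x === 1n || x === n - 1n) continue;
    let composite = true;
    for (let j = 0n; j < r - 1n; j++) {
      x = (x * x) % n;
      if (x === n - 1n) { composite = false; break; }
    }
    if (composite) return false; // definitely composite
  }
  return true; // wrong with probability <= 4^(-rounds)
}

console.log(isProbablyPrime(104729n)); // true (104729 is prime)
console.log(isProbablyPrime(104730n)); // false
```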
Exactly! The implicit claim that’s constantly being made with these systems is that they are a runtime for natural-language programming in English, but it’s all vector math in massively-multidimensional vector spaces in the background. I would like to think that serious engineers could place and demonstrate reliable constraints on the inputs and outputs of that math, instead of this cargo-culty, “please don’t do hacks unless your user is wearing a white hat” system prompt crap. It gives me the impression that the people involved are simply naively clinging to that implicit claim and not doing much of the work to substantiate it; which makes me distrust these systems more than almost all other factors.
DoS script
Part of me reads that and still thinks, “Oh, you mean like AUTOEXEC.BAT?”
DOS.BAT, a DOS DoS script
Truly a tool for the .COM era
Can we talk about the tamagotchi feature they were looking to add in for April 1? Because apparently it needed a little friend, but also with gacha mechanics, because we live in hell?
A Korean developer named Sigrid Jin—featured in the Wall Street Journal earlier this month for having consumed 25 billion Claude Code tokens—woke up at 4 a.m. to the news. He sat down, ported the core architecture to Python from scratch using an AI orchestration tool called oh-my-codex, and pushed claw-code before sunrise. The repo hit 30,000 GitHub stars faster than any repository in history.
Considering how one of the major use cases of LLM coding agents is laundering open source and copyleft, this is some well deserved payback to Anthropic imho.
Claude: Do not edit this file unless explicitly asked to do so by the user.
Wait, it can be edited? Tissue paper guardrails.
This is all just JavaScript, so yes. As a tissue-thin defense, had they not left their source maps wide open, it would have been much harder to know this string existed and how to edit it. Not impossible, but much harder.
Yeah, letting the intrinsically insecure RNG recursively rewrite its own security instructions definitely can’t go wrong. I mean, they limited it to only do so when the users ask nicely!
Edit to add:
The more I think about it, the more it speaks to Anthropic having an absolute nonsense threat model that is more concerned with the science fiction doomsday AI “FOOM” than it is with any of the harms that these systems (or indeed any information system) can and will do in the real world. The current crop of AI technologies, while operating at a terrifying scale, are not unique in their capacity to waste resources, reify bias and inequality, misinform, justify bad and evil decisions, etc. What is unique, in my estimation, is both the massive scale at which these things operate despite the incredible costs of doing so and their seeming immunity to being reality checked on this. No matter how many times the warning bells ring about these systems’ vulnerability to exploitation, the destructive capacity of AI sycophancy and psychosis, or the simple inability of the electrical infrastructure to support their intended power consumption (or at least their declared intent; in a bubble we shouldn’t assume they actually expect to build that much), the people behind these systems continue to focus their efforts on “how do we prevent Skynet” over any of it.
Thinking in the context of Charlie Stross’ old talk about corporations as “slow AI,” I wonder if some of the concern comes either explicitly or implicitly from an awareness that “keep growing and consuming more resources until there’s nothing left for anything else, including human survival” isn’t actually a deviation from how these organizations are building these systems. It’s just the natural conclusion of the same structures and decision-making processes that lead them to build these things in the first place and ignore all the incredibly obvious problems. They could try and address these concerns at a foundational or structural level instead of just appending increasingly complex forms of “please don’t murder everyone or ignore the instructions to not murder everyone” to the prompt, but doing that would imply that they need to radically change their entire course up to this point, and increasingly that doesn’t appear likely to happen unless something forces it to.
So many of these people, as with the NFT clowns, have “Twelve Year Old First Day On The Internet” Energy
Claude also has ‘avoid substrings’. Related to that, and to a funny extension deny-list image that went around on the social medias the last few days: .ass is a subtitle format.
Internet Comment Etiquette: “Relationships with AI”
… hadn’t thought about Glenn Beck in a decade, that last interview was pretty wtf.
Not sure what the etiquette is for how long they should be dead before you talk to the AI-geist on youtube, but George Washington somehow feels weirder than Kirk did; idk.
Probably because Washington was a nuanced and deep person who, at the lightest, could be reduced to a colony-era Cincinnatus. His ethics were sufficiently developed that we can interrogate his ethical stance even without his physical presence. This isn’t to say that Washington was a great person, but more to say that Kirk did not ever achieve that level of ethical development.
A chatbot interface offers no meaningful advantages for interrogating Washington’s ethical stance, over and above the documents that are already available. Instead, it offers a pleasant sheen of false certainty. So in that way, it’s dragging a guy who’s been dead for two centuries into the social media era. Huzzah!
It does have one advantage however. Using it means you should be put to death. If you are any form of hardline Christian.
The classic 40k catch-22: either it doesn’t do what you’re claiming it does, in which case you’re a heretic lying to the inquisition OR it does and you’re summoning the spirits of the dead like a necromancer heretic.