

I was low-key hoping for a technical-philosophical article, one which argues that to find any of this shit useful you need a distinctly American understanding of reality.


Kind of wild that the guy who popularized “enshittification” as a term will die on the hill that the technology which drives the industrial enshittification of all human media is fine actually, because some people find the plugins useful.


Ah, thanks! My expectations of node aren’t much affected I guess. Bun.js maybe?


Eh, straight pip with venv and pip-tools for support worked fine anyway. Wrong uv!
As for systemd… time to look at the BSDs? Was Debian among the anti-slop projects? It would be nice if they took an interest in preventing the slopification of one of their core systems.
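For reference, the workflow I mean is roughly this (a minimal sketch; assumes a `requirements.in` file listing your top-level dependencies, which is the pip-tools convention):

```shell
# create and enter an isolated environment (stdlib venv, no uv needed)
python3 -m venv .venv
. .venv/bin/activate

# pip-tools provides pip-compile and pip-sync on top of plain pip
pip install pip-tools

# pin everything from requirements.in into a fully locked requirements.txt
pip-compile requirements.in

# install/remove packages until the venv matches the lock file exactly
pip-sync requirements.txt
```

Not as fast as uv, sure, but it's all boring, stable tooling that has worked for a decade.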


Man, that Harper piece is a full DnD alignment chart of the most online Bay Area weirdos you’ve ever seen.


Unfortunately the paper structure screams “AI senpai, notice me!”
AI coding agents still seem bad at this job, but if you optimize for our benchmark…


Zero content moderation from the looks of it. I was told to kys after 2 rounds.
Tech won’t save us, but we sure could make a great start by, say, submerging 20 carefully chosen server racks in pickle juice.


I can think of only one notable project where I ever saw one, and that’s Bookwyrm with its Anti-Capitalist Software License v1.4.
But this seems too vague-posty to refer to something that specific. Prolly just someone butthurt over copyleft.


I hadn’t heard of Square either. Are they the guys doing Squarespace? No idea.
EDIT: Okay, I have heard of CashApp, and it goes without saying that you need an entire lock-in ecosystem and a crypto-gimmick around a fintech product these days.


I would assume that you will only get across a very limited amount of information. If you pack them with details, they will zone out; if you focus on very few arguments, something might stick. If you have the background knowledge to bring up points as needed, that’s great of course.
If I were trying to sway some business people, I’d try this angle: AI intensification creates a dependence on your AI model vendor and endangers your human capital. Your AI vendor is knowingly selling you broken goods so they can satisfy their desperate bubble economics. Your people are (on average) dabbling with AI, but diving into it too much can cause mental health issues (an in-progress paper is trying to look at this [1]). And furthermore, you’re endangering the maintenance and transfer of critical know-how, because people are burying critical business processes in slop that sort of works but no one understands (a throwback to the 80s, when similar things happened with classical automation [2]).
[1] https://archive.is/20260212071631/https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it [2] https://www.sciencedirect.com/science/article/abs/pii/0005109883900468


Social media didn’t invent it. When I had art and music class in school, why the fuck did I have to get graded? And why does my bumfuck-nowhere local volleyball club have to have aspirations for some regional 17th league and do cardio and drills every meet? I just wanna throw ball sometimes to not get fat.
Regardless, I can’t imagine what that does to the mood on set when your director is being screwed by Markiplier.
How could he do this to Iron Lung’s wife?


I started to raise my eyebrows when the Second Brain got lumped into the AI wife pile.
Bro, I just write shit down. I am in fact taking responsibility for my schedule and handling my emotions without relying on external support. Am I turning to (checks notes…) the notebook industry for a technological replacement wife?
I mean, there are some valid points, and some of it might explain the gendered AI adoption gap, but it’s too much generalization.


deleted by creator


I poked around to see how far gone my main text editor is. They’re not about to join the Butlerian Jihad, but I think I can live with it.
We don’t care how you wrote the code, but we do care that you fully(!) understand it and how it solves the underlying issue. LLM coding assistants can help with tedious routine and investigation (such as constructing test cases), but they are not a replacement for understanding the problem as well as the code you touch. (Nvim’s codebase is full of… let’s say “history”, and generic models tend to do quite poorly here.)
What is not OK is to copy-paste responses from the LLM as your comments. We don’t want to play a game of telephone with the LLM (if it was smart enough to solve the problem, we would be doing that ourselves).
Except in special circumstances (and with explicit notes), all your comments and descriptions must be written by you yourself. (Use a translation tool if you must, but don’t let someone else put words in your mouth.)


The contributor actually was bullied into closing his PR, but the maintainers reopened and merged it, as the change was apparently fine. Lol


deleted by creator