

I have better thoughts I can waste time on
Me too. Unfortunately I don’t get to pick my intrusive thoughts.


I have better thoughts I can waste time on
Me too. Unfortunately I don’t get to pick my intrusive thoughts.


I have some thoughts about this goober (Simon Willison) that I need to get out of my head:
First the positives:
But, by his own admission:
Tl;dr: an experienced dev who uses clankers to churn out tons of technically functional hobby software and thinks this gives him right to speak for all software engineers.


OK, my bad. I was thinking about scenario like this: https://eneroutlook.enerdata.net/total-electricity-generation-projections.html
If you assume doubling of electricity production by 2050 (development + electrification) then 10% of that would mean more than double nuclear production.
5% would not really be a massive increase, my mistake, but would still mean more builds than retirement.


Could be, I don’t follow that closely. I’m not aware of any that come close to the level of shitshow of say, Hinckley Point C. That matters.


Yeah, I found his credulousness about "AI"quite amusing. Like when he went to a wrong station in Japan, because he asked ChatGPT. And posted about it, not realizing how much of a dumbass it made him look like. But it’s starting to wear off.
But hey, he at least admits there is a bubble.
And also, I haven’t unfollowed/muted/blocked Mike Masnick yet. And he’s at least twice as annoying about AI.
Unrelated: Did chrome just detect me writing about AI to shill Gemini to me? (puts on tinfoil hat)


If we’re talking about the general West, then there new nuclear is probably fucked. Rest of the world still builds for reasonable costs. Not nuclear bro amounts, but still.
I think we could see a future where nuclear makes 5-10% of the world’s electricity, which would technically make it a niche source of power, but it would also be a massive increase from today.


Sometimes, I get to peer into the AI pilled brain, and it is… uh… not good.
But now if I suffer from Imposter syndrome I can remind myself that some other people take advice from clankers.
https://bsky.app/profile/carnage4life.bsky.social/post/3m634z3r7kk2l

Transcript:
The fact I can login every morning and ask an AI to review all my emails and chats from yesterday then given what it knows about my goals and my role it should suggest what I could have done better is amazing.
Bubble or not, AI is huge for personal productivity and overall improvement.


AFAIK data warehouse = regular database data lake = place to keep various files that don’t fit into DB
Data lakehouse aims to integrate these two, I don’t think it’s a totally stupid idea.


This. Masayoshi son is selling furniture to YOLO more money into OpenAI.
I have no opinion, but I have to note that I keep reading “KeepAssXC …”


Pivot to bio-computing


Beff Jezos and friends have produced something other than tweets. Possibly. Maybe.


Looks like a hobby project of someone who has very particular views about computers.
I’m not sure what kind of neural network are they planning to run on a custom FPGA based GPU with 4GB RAM shared with CPU


“How dare you suggest that we pivoted to SlopTok and smut because of money if something that we totally cannot do right now is more lucrative?”


If you use your business to “do business” it’s nice to have good catering.


Simon Willison writes a fawning blog post about the new “Claude skills” (which are basically files with additional instructions for specific tasks for the bot to use)
How does he decide to demonstrate these awesome new capabilities?
By making a completely trash, seizure inducing GIF…
https://simonwillison.net/2025/Oct/16/claude-skills/
He even admits it’s garbage. How do you even get to the point that you think that’s something you want to advertise? Even the big slop monger companies manage to cherry pick their demos.
Just felt like I got an aneurysm there.
(in unrelated things, first)


They will all be just this one:


Slack CEO responded there that it was all a “billing mistake” and that they’ll do better in the future and people are having none of it.
A rare orange site W, surprisingly heartwarming.


Was jumpscared on my YouTube recommendations page by a video from AI safety peddler Rob Miles and decided to take a look.
It talked about how it’s almost impossible to detect whether a model was deliberately trained to output some “bad” output (like vulnerable code) for some specific set of inputs.
Pretty mild as cult stuff goes, mostly anthropomorphizing and referring to such LLM as a “sleeper agent”. But maybe some of y’all will find it interesting.
I think he’s just a true believer (or very good at lying).