

Bay Area rationalist Sam Kirchner, cofounder of the Berkeley “Stop AI” group, claims “nonviolence isn’t working anymore” and goes off the grid. Hasn’t been heard from in weeks.
Article has some quotes from Emile Torres.


When she’s not attending the weddings of people like Curtis Yarvin.


Alex, I’ll take “Things that never happened” for $1000.


Amusing to see him explaining to you the connection between Bay Area rationalists and AI safety people.


The “unhoused friend” story is about as likely to be true as the proverbial Canadian girlfriend story. “You wouldn’t know her.”


But he’s getting so much attention.


This one’s been making the rounds, so people have probably already seen it. But just in case…
Meta did a live “demo” of their new recording AI.


In fairness, not everything nVidia does is generative AI. I don’t know if this particular initiative has anything to do with GenAI, but a lot of digital artists depend on their graphics cards’ capabilities to create art that is very much human-derived.


Yud: “That’s not going to asymptote to a great final answer if you just run them for longer.”
Asymptote is a noun, you git. I know in the grand scheme of things this is a trivial thing to be annoyed by, but what is it with Yud’s weird tendency to verbify nouns? Most rationalists seem to emulate him on this. It’s like a cult signifier.


Now that his new book is out, Big Yud is on the interview circuit. I hope everyone is prepared for a lot of annoying articles in the next few weeks.
Today he was on the Hard Fork podcast with Kevin Roose and Casey Newton (didn’t listen to it yet). There’s also a milquetoast profile in the NYT written by Kevin Roose, where Roose admits his P(doom) is between 5 and 10 percent.


Make sure to click the “Apply Now” button at the bottom for a special treat.


I know it’s been said thousands of times before, but as a software developer I’ve never felt a greater sense of job security than I do right now. The amount of work it’s going to take to clean up all this slop is going to be monumental. Unfortunately, that kind of work is also soul-deadening.


Last year McDonald’s withdrew AI from its own drive-throughs as the tech misinterpreted customer orders - resulting in one person getting bacon added to their ice cream in error, and another having hundreds of dollars worth of chicken nuggets mistakenly added to their order.
Clearly artificial superintelligence has arrived, and instead of killing us all with diamondoid bacteria, it’s going to kill us by force-feeding us fast food.


It immediately made me wonder about his background. He’s quite young and looks to be just out of college. If I had to guess, I’d say he was probably a member of the EA club at Harvard.


In case you needed more evidence that the Atlantic is a shitty rag.



Clown world.
How many times will he need to revise his silly timeline before media figures like Kevin Roose stop treating him like some kind of respectable authority? Actually, I know the answer to that question. They’ll keep swallowing his garbage until the bubble finally bursts.


Is it a loop if it only executes once?


How much energy was used to produce that video?


Orange site mods retitled a post about a16z funding AI slop farms to remove the a16z part.
The mod tried to pretend the reason was that the title was just too damn long and clickbaity. His new title was 1 character shorter than the original.
https://news.ycombinator.com/item?id=46305113