

That post requires signing in to view; it’s a link to this: https://futurism.com/science-energy/trump-altman-plutonium-oklo


Level design, color palette, continuity, assets and physics by a bowl of salvia


Do you remember / Unending and boundless September / Flames were catchin’ the threads of the first-years


It is 1998, and I am reading Internet posts complaining about Mac users
It is 2005, and I am reading Internet posts complaining about Mac users
It is 2025, and I am reading Internet posts complaining about Mac users


All participants in the Stubsack, including awful.systems regulars and those joining from elsewhere, are reminded that this is not debate club. Anyone tempted by the possibility of debate-club behavior is encouraged to touch their nearest grass immediately. We are here to sneer, not to bicker: This is a place to mock the outside world, not to settle grand matters of ideology, unless the latter is done in an extraordinarily amusing way.


I believe those sentences can be paraphrased as, “The term entire function is only used in complex analysis. The function f(z) = z^2 + 1 is zero at z = i.”
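For anyone who wants to see the arithmetic spelled out, here's a two-line sanity check of that claim (my own sketch, in Python purely for convenience; the thread itself doesn't include code). Since i² = −1, we get i² + 1 = 0:

```python
# Check that f(z) = z^2 + 1 vanishes at z = i.
# In Python, the imaginary unit i is written 1j.
def f(z: complex) -> complex:
    return z**2 + 1

assert f(1j) == 0  # i^2 + 1 = -1 + 1 = 0
```

Being a polynomial, f is of course entire (holomorphic on all of ℂ), which is the point of the paraphrase.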


New research coordinated by the European Broadcasting Union (EBU) and led by the BBC has found that AI assistants – already a daily information gateway for millions of people – routinely misrepresent news content no matter which language, territory, or AI platform is tested. […] 45% of all AI answers had at least one significant issue.
31% of responses showed serious sourcing problems – missing, misleading, or incorrect attributions.
20% contained major accuracy issues, including hallucinated details and outdated information.
Gemini performed worst with significant issues in 76% of responses, more than double the other assistants, largely due to its poor sourcing performance.
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
And yet the BBC still has a Programme Director for “Generative AI” who gets trotted out to say “We want these tools to succeed”. No, we don’t, you blithering bellend.


(thinks) The Colossus of Chodes


The best pizza we had in college came from the place where the window by the front door had a spiderweb fracture and when you stood at the cash register you could see back into the kitchen where one old Italian guy was making pizza and six beefy Italian guys were standing around doing nothing.
General rules can have exceptions, is all I’m sayin’.


The idea that AI will be a boon for searching the mathematical literature is undermined somewhat by how it shits the bed there too.


I also found this Reddit comment that lays into Sokal and Bricmont’s treatment of Lacan, but not having read Lacan, I can’t vouch for it:
I’ll just note the sneerability of how Sokal contributed to sex pest Krauss’ War on Science book, right alongside Jordan Peterson, who has said plenty of things as batshit as Sokal accused Lacan of being.


Here’s a written review of that book which covers its problems fairly well, I think. (Which indirectly reminded me that last year I wrote a blog post about how Sokal and Bricmont’s Fashionable Nonsense wasn’t such hot stuff. I guess I hadn’t shared that here before.)


Good catch; thanks. I think I had too many awful.systems tabs open at once.


Highlight the space just after the abstract of my own most recent arXiv preprint for a surprise. :-)


Last week, we learned that area transphobe Sabine Hossenfelder is using her arXiv-posting privileges to shill Eric Weinstein’s bullshit. I have poked around the places where I’d expect to find technical discussion of a physics preprint, and I’ve come up with nothing. The Stubsack thread, as superficial as it was, has been the most substantive conversation about her post’s actual content.


NeurIPS is one of the big conferences for machine learning. Having your work accepted there is purportedly equivalent to getting a paper published in a top-notch journal in physics (a field that holds big conferences but treats journals as more the venues of record). Today I learned that NeurIPS endorses peer reviewers asking questions to chatbots during the review process. On their FAQ page for reviewers, they include the question
I often use LLMs to help me understand concepts and draft my writing. Can I use LLMs during the review process?
And their response is not shut the fuck up, the worms have reached your brain and we will have to operate. You know, the bare minimum that any decent person would ask for.
You can use resources (e.g. publications on Google Scholar, Wikipedia articles, interactions with LLMs and/or human experts without sharing the paper submissions) to enhance your understanding of certain concepts and to check the grammaticality and phrasing of your written review. Please exercise caution in these cases so you do not accidentally leak confidential information in the process.
“Yeah, go ahead, ask ‘Grok is this true’, but pretty please don’t use the exact words from the paper you are reviewing. We are confident that the same people who turn to a machine to paraphrase their own writing will do so by hand first this time.”
Please remember that you are responsible for the quality and accuracy of your submitted review regardless of any tools, resources, or other help you used to construct the final review.
“Having positioned yourself at the outlet pipe of the bullshit fountain and opened your mouth, please imbibe responsibly.”
Far be it from me to suggest that NeurIPS taking an actually ethical stance on bullshit-fountain technology would call into question the presentations being made there and thus imperil their funding stream. But, I mean, if the shoe fits…