• 0 Posts
  • 130 Comments
Joined 6 months ago
Cake day: August 23rd, 2025

  • Posting for archival and indexing purposes: u/GorillasAreForEating found an Urbit post titled “Quis cancellat ipsos cancellores?” (“Who cancels the cancellers themselves?”) which complains that Aella takes it upon herself to exclude people and movements from the broader LessWrong/Effective Altruist community. The poster says that Aella was the anonymous person who pushed CFAR to finally do something about Brent Dill, because she was roommates with “Persephone.” The poster does not quite say that any of the accusations were untrue, just that “an anonymous, unverified report” says that some details were changed by an editor, and that her Medium post was of “dramatically lower fidelity, but higher memetic virulence” than Brent’s buddies investigating him behind closed doors (Dill posted about domming a 16-year-old whom he met when she was 15). The poster also accuses Aella of using substances and BDSM games to blur the line of consent.

    The post names Joscha Bach as someone Aella tried to exclude. We recently talked about Bach’s attempt to get Jeffrey Epstein to fund an event where our friends would speak.

    Often, people in messed-up situations point at a very similar situation and say “at least we are not like that.” I hope that all of these people find friends who can give them the perspective that none of these communities are healthy or just. Whether you are into bull sessions or polyamory, there are healthy communities to explore in any medium-sized city!





  • News story from 2015:

    (Some people might have been concerned to read that) almost 3,000 “researchers, experts and entrepreneurs” have signed an open letter calling for a ban on developing artificial intelligence (AI) for “lethal autonomous weapons systems” (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

    The people who signed the letter included celebrities of the science and high-tech worlds like Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Demis Hassabis, chief executive of Google DeepMind, and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting that year in Buenos Aires.

    They were quite clear about what worried them: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

    “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.”

    The letter was issued by the Future of Life Institute, which is now Max Tegmark and Toby Walsh’s organization.

    People have worked on the general pop culture that inspired TESCREAL, and on the current hype, but less on earlier attempts to present machine minds as a clear and present danger. This letter already has the ‘arms race’ narrative and the proposed ‘research ban’ solution, but focuses on smaller dangers.



  • I like this reply on Reddit:

    I do my PhD in fair evaluation of ML algorithms, and I literally have enough work to go through until I die. So much mess, non-reproducible results, overfitting to benchmarks, and worst of all, this has become the norm. Lately, it took our team MONTHS to reproduce (or even just run) a bunch of methods just to embed inputs, not even train or finetune.

    I see maybe a solution, or at least some help, in closer research-business collaboration. Companies don’t really care about papers; they just want methods that work and make money. Maxing out a drug-design benchmark is useless if the algorithm fails to produce anything usable in a real-world lab. Anecdotally, I’ve seen much better and fairer results from PhDs and PhD students who work part-time in industry as ML engineers or applied researchers.

    This can go a good way (most of the field becomes a closed circle, like parapsychology) or a bad way (people assume the results are true and apply them, as with social priming or Reinhart and Rogoff’s economics paper with the Excel error).





  • I miss when Patrick McKenzie was just sharing an American’s view of Japanese culture, reminding devs that names are not always Firstname Lastname in the Latin alphabet (sketched in code after this comment), and noting that ‘just’ paying yourself twice the average local income from your business is not a failure. The following is deep Twitter-pundit brain from a rich white man in Chicago who has lived most of his adult life in Japan and SoCal, referring to social programs for poor brown people in Minnesota:

    I think journalism and civil society should do some genuine soul-searching on how we knew—knew—the state of that pond, but didn’t consider it particularly important or newsworthy until someone started fishing on camera.

    Edit: I also like the HN response which explains that private companies have few responses to fraud except refusing service, whereas the State of Minnesota can arrest fraudsters, command third parties to provide evidence about them, and send them to prison, so the People of Minnesota require strong evidence before the state uses those powers.
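
    Since the names point comes up constantly, here is a minimal sketch of the usual fix, in Python. The Person record and its field names are hypothetical illustrations, not anything McKenzie published: keep the name verbatim as one string instead of forcing a first/last split.

        # Hypothetical record illustrating "names are not Firstname Lastname":
        # store the name exactly as entered, plus an optional "address me as" field.
        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Person:
            full_name: str                        # verbatim; never split it
            preferred_name: Optional[str] = None  # how to address them, if given

        def greeting(p: Person) -> str:
            return f"Hello, {p.preferred_name or p.full_name}!"

        # Names a mandatory first/last split would mangle:
        print(greeting(Person("毛泽东")))        # family name first, no spaces
        print(greeting(Person("Björk")))         # mononym
        print(greeting(Person("María-Jose Carreño Quiñones", "María-Jose")))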







  • Also likely to be concentrated in the USA, whose government is helpfully screaming at the rest of the world “disconnect your economies and your IT systems from us!” Most of us are busy doing that as fast as possible, although it takes a while to get everyone on board.

    It’s a very useful skill, when reading the internet, to learn to see when “in the US and UK” or “in SoCal and London” should be appended to a sentence. Once you see it, you can’t unsee it.