Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. Also, happy 4th July in advance…I guess.)

  • zbyte64@awful.systems
    link
    fedilink
    English
    arrow-up
    29
    ·
    edit-2
    18 days ago

    I had applied to a job and it screened me verbally with an AI bot. I find it strange talking to an AI bot that gives no indication of whether it is following what I am saying like a real human does with “uh huh” or what not. It asked me if I ever did Docker and I answered I transitioned a system to Docker. But I had done an awkward pause after the word transition so the AI bot congratulated me on my gender transition and it was on to the next question.

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      17
      ·
      13 days ago

      I listen solely to 12-hour-long binaural beats tracks from YouTube, to maximize my focus for prompt context engineering. Get with the times or get left behind

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      13
      ·
      13 days ago

      “Music is just like meth, cocaine or weed. All pleasure no value. Don’t listen to music.”

      (Considering how many rationalists are also methheads, this joke wrote itself)

    • Sailor Sega Saturn@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      13 days ago

      Dude came up with an entire “obviously true” “proof” that music has no value, and then when asked how he defines “value” he shrugs his shoulders and is like 🤷‍♂️ money I guess?

      This almost has too much brainrot to be 100% trolling.

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      13 days ago

      However speaking as someone with success on informatics olympiads

      The rare nerd who can shove themselves into a locker in O(log n) time

    • fullsquare@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      edit-2
      13 days ago

      the most subtle taliban infiltrator on lesswrong:

      e:

      You don’t need empirical evidence to reason from first principles

      he’ll fit in just fine

    • blakestacey@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      13 days ago

      I once saw the stage adaptation of A Clockwork Orange, and the scientist who conditioned Alexander against sex and violence said almost the same thing when they discovered that he’d also conditioned him against music.

  • Alex@lemmy.vg
    link
    fedilink
    English
    arrow-up
    19
    ·
    18 days ago

    A bit of old news but that is still upsetting to me.

    My favorite artist, Kazuma Kaneko, known for doing the demon designs in the Megami Tensei franchise, sold his soul to make an AI gacha game. While I was massively disappointed that he was going the AI route, the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.

    Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.

    • JFranek@awful.systems
      link
      fedilink
      English
      arrow-up
      16
      ·
      18 days ago

      the model was supposed to be trained solely on his own art

      much simpler models are practically impossible to train without an existing model to build upon. With GenAI it’s safe to assume that training that base model included large scale scraping without consent

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      9
      ·
      18 days ago

      the model was supposed to be trained solely on his own art and thus I didn’t have any ethical issues with it.

      Personally, I consider training any slop-generator model to be unethical on principle. Gen-AI is built to abuse workers for corporate gain - any use or support of it is morally equivalent to being a scab.

      Fast-forward to shortly after release and the game’s AI model has been pumping out Elsa and Superman.

      Given plagiarism machines are designed to commit plagiarism (preferably with enough plausible deniability to claim fair use), I’m not shocked.

      (Sidenote: This is just personal instinct, but I suspect fair use will be gutted as a consequence of the slop-nami.)

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      25
      ·
      edit-2
      14 days ago

      If anybody doesn’t click, Cremieux and the NYT are trying to jump start a birther type conspiracy for Zohran Mamdani. NYT respects Crem’s privacy and doesn’t mention he’s a raging eugenicist trying to smear a poc candidate. He’s just an academic and an opponent of affirmative action.

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      14 days ago

      Ye it was a real “oh fuck I recognise this nick, this cannot mean anything good” moment

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        6
        ·
        13 days ago

        I had a straight-up “wait I thought he was back in his hole after being outed” moment. I hate that all the weird little dumbasses we know here keep becoming relevant.

  • wizardbeard@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    16
    ·
    12 days ago

    Get your popcorn folks. Who would win: one unethical developer juggling “employment trial periods”, or the combined interview process of all Y Combinator startups?

    https://news.ycombinator.com/item?id=44448461

    Apparently one indian dude managed to crack the YC startup interview game and has been juggling being employed full time at multiple ones simultaneously for at least a year, getting fired from them as they slowly realize he isn’t producing any code.

    The cope from the hiring interviewers is so thick you could eat it as a dessert. “He was a top 1% in the interview” “He was a 10x”. We didn’t do anything wrong, he was just too good at interviewing and unethical. We got hit by a mastermind, we couldn’t have possibly found what the public is finding quickly.

    I don’t have the time to dig into the threads on X, but even this ask HN thread about it is gold. I’ve got my entertainment for the evening.

    Apparently he was open about being employed at multiple places on his linkedin. I’m seeing someone say in that HN thread that his resume openly lists him hopping between 12 companies in as many months. Apparently his Github is exclusively clearly automated commits/activity.

    Someone needs to run with this one. Please. Great look for the Y Combinator ghouls.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      9
      ·
      12 days ago

      Alongside the “Great Dumbass” theory of history - holding that in most cases the arc of history is driven by the large mass of the people rather than by exceptional individuals, but sometimes someone comes along and fucks everything up in ways that can’t really be accounted for - I think we also need to find some way of explaining just how the keys to the proverbial kingdom got handed over to such utter goddamn rubes.

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        7
        ·
        12 days ago

        I’m not 100% on the technical term for it, but basically I’m using it to mean: the first couple of months it takes for a new hire to get up to speed to actually be useful. Some employers also have different rules for the first x days of employment, in terms of reduced access to sensitive systems/data or (I’ve heard) giving managers more leeway to just fire someone in the early period instead of needing some justification for HR.

        • V0ldek@awful.systems
          link
          fedilink
          English
          arrow-up
          2
          ·
          11 days ago

          Ah ok, I’m aware of what this is, just never heard “work trial” used.

          In my head it sounded like a free demo of how insufferable your new job is going to be

      • wizardbeard@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        11
        ·
        12 days ago

        I’m not shedding any tears for the companies that failed to do their due dilligence in hiring, especially not ones involved in AI (seems most were) and involved with Y Combinator.

        That said, unless you want to get into a critique of capitalism itself, or start getting into whataboutism regarding celebrity executives like a number of the HN comments do, I don’t have many qualms calling this sort of thing unethical.

        This whole thing is flying way too close to the "not debate club" rule for my comfort already, but I wrote it so I may as well post it

        Multiple jobs at a time, or not giving 100% for your full scheduled hours is an entirely different beast than playing some game of “I’m going to get hired at literally as many places as possible, lie to all of them, not do any actual work at all, and then see how long I can draw a paycheck while doing nothing”.

        Like, get that bag, but ew. It’s a matter of intent and of scale.

        I can’t find anything indicating that the guy actually provided anything of value in exchange for the paychecks. Ostensibly, employment is meant to be a value exchange.

        Most critically for me: I can’t help but hurt some for all the people on teams screwed over by this. I’ve been in too many situations where even getting a single extra pair of hands on a team was a heroic feat. I’ve seen the kind of effects it has on a team tthat’s trying not to drown when the extra bucket to bail out the water is instead just another hole drilled into the bottom of the boat. That sort of situation led directly to my own burnout, which I’m still not completely recovered from nearly half a decade later.

        Call my opinion crab bucketing if you like, but we all live in this capitalist framework, and actions like this have human consequences, not just consequences on the CEO’s yearly bonus.

        • YourNetworkIsHaunted@awful.systems
          link
          fedilink
          English
          arrow-up
          7
          ·
          12 days ago

          Nah, I feel you. I think this is pretty solidly a “plague on both their houses” kind of situation. I’m glad he chose to focus his apparently amazing grift powers on such a deserving target, but let’s not pretend that anything whatsoever was really gained here.

      • V0ldek@awful.systems
        link
        fedilink
        English
        arrow-up
        1
        ·
        11 days ago

        Not doing your due dilligence during recruitment is stupid, but exploiting that is still unethical, unless you can make a case for all of those companies being evil.

        Like if he directly scammed idk just OpenAI, Palantir, and Amazon then sure, he can’t possibly use that money for any worse purposes.

  • Architeuthis@awful.systems
    link
    fedilink
    English
    arrow-up
    15
    ·
    13 days ago

    Apparently linkedin’s cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.

    Zack of SMBC has thoughts on it:

    [actual excerpt omitted, follow the link to read it]

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      15
      ·
      13 days ago

      There are so many different ways to unpack this, but I think my two favorites so far are:

      1. We’ve turned the party’s surveillance and thought crime punishment apparatus into a de facto God with the reminder that you could pray to it. Does that actually do anything? Almost certainly not, unless your prayers contain thought crimes in which case you will be reeducated for the good of the State, but hey, Big Brother works in mysterious ways.

      2. How does it never occur to these people that the reason why people with disproportionate amounts of power don’t use it to solve all the world’s problems is that they don’t want to? Like, every single billionaire is functionally that Spider-Man villain who doesn’t want to cure cancer but wants to turn people into dinosaurs. Only turning people into dinosaurs is at least more interesting than making a number go up forever.

    • BlueMonday1984@awful.systemsOP
      link
      fedilink
      English
      arrow-up
      11
      ·
      13 days ago

      Apparently linkedin’s cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.

      This sounds like its going to be horrible

      Zack of SMBC has thoughts on it:

      Ah, good, I’ll just take his word for it, the thought of reading it gives me psychic da-

      the authors at one point note that in 1984, Big Brother’s listening device means there is two way communication, and so the people have a voice. He wonders why Orwell didn’t think of this.

      The closest thing I have to a coherent response is that Boondocks clip of Uncle Ruckus going “Read, nigga, read!” (from Stinkmeaner Strikes Back, if you’re wondering) because how breathtakingly stupid do you have to be to miss the point that fucking hard

    • istewart@awful.systems
      link
      fedilink
      English
      arrow-up
      10
      ·
      13 days ago

      Apparently linkedin’s cofounder wrote a techno-optimist book on AI called Superagency: What Could Possibly Go Right with Our AI Future.

      We’re going to have to stop paying attention to guys whose main entry on their CV is a website and/or phone app. I mean, we should have already, but now it’s just glaringly obvious.

  • BigMuffN69@awful.systems
    link
    fedilink
    English
    arrow-up
    14
    ·
    edit-2
    16 days ago

    Actually burst a blood vessel last weekend raging. Gary Marcus was bragging about his prediction record in 2024 being flawless

    Gary continuing to have the largest ego in the world. Stay tuned for his upcoming book “I am God” when 2027 comes around and we are all still alive. Imo some of these are kind of vague and I wouldn’t argue with someone who said reasoning models are a substantial advance, but my God the LW crew fucking lost their minds. Habryka wrote a goddamn essay about how Gary was a fucking moron and is a threat to humanity for underplaying the awesome power of super-duper intelligence and a worse forecaster than the big brain rationalist. To be clear Habryka’s objections are overall- extremely fucking nitpicking totally missing the point dogshit in my pov (feel free to judge for yourself)

    https://xcancel.com/ohabryka/status/1939017731799687518#m

    But what really made me want to drive a drill to the brain was the LW rallying around the claim that AI companies are profitable. Are these people straight up smoking crack? OAI and Anthropic do not make a profit full stop. In fact they are setting billions of VC money on fire?! (strangely, some LWers in the comments seemed genuinely surprised that this was the case when shown the data, just how unaware are these people?) Oliver tires and fails to do Olympic level mental gymnastics by saying TSMC and NVDIA are making money, so therefore AI is extremely profitable. In the same way I presume gambling is extremely profitable for degenerates like me because the casino letting me play is making money. I rank the people of LW as minimally truth seeking and big dumb out of 10. Also weird fun little fact, in Daniel K’s predictions from 2022, he said by 2023 AI companies would be so incredibly profitable that they would be easily recuperating their training cost. So I guess monopoly money that you can’t see in any earnings report is the official party line now?

    • V0ldek@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      16 days ago

      I wouldn’t argue with someone who said reasoning models are a substantial advance

      Oh, I would.

      I’ve seen people say stuff like “you can’t disagree the models have rapidly advanced” and I’m just like yes I can, here: no they didn’t. If you’re claiming they advanced in any way please show me a metric by which you’re judging it. Are they cheaper? Are they more efficient? Are they able to actually do anything? I want data, I want a chart, I want a proper experiment where the model didn’t have access to the test data when it was being trained and I want that published in a reputable venue. If the advances are so substantial you should be able to give me like five papers that contain this stuff. Absent that I cannot help but think that the claim here is “it vibes better”.

      If they’re an AGI believer then the bar is even higher, since in their dictionary an advancement would mean the models getting closer to AGI, at which point I’d be fucked to see the metric by which they describe the distance of their current favourite model to AGI. They can’t even properly define the latter in computer-scientific terms, only vibes.

      I advocate for a strict approach, like physicist dismissing any claim containing “quantum” but no maths, I will immediately dismiss any AI claims if you can’t describe the metric you used to evaluate the model and isolate the changes between the old and new version to evaluate their efficacy. You know, the bog-standard shit you always put in any CS systems Experimental section.

      • BigMuffN69@awful.systems
        link
        fedilink
        English
        arrow-up
        2
        ·
        15 days ago

        To be clear, I strongly disagree with the claim. I haven’t seen any evidence that “reasoning” models actually address any of the core blocking issues- especially reliably working within a given set of constraints/being dependable enough to perform symbolic algorithms/or any serious solution to confabulations. I’m just not going to waste my time with curve pointers who want to die on the hill of NeW sCaLiNG pArAdIgM. They are just too deep in the kool-aid at this point.

        • o7___o7@awful.systems
          link
          fedilink
          English
          arrow-up
          3
          ·
          14 days ago

          I’m just not going to waste my time with curve pointers who want to die on the hill of NeW sCaLiNG pArAdIgM. They are just too deep in the kool-aid at this point.

          The singularity is near worn-out at this point.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      6
      ·
      16 days ago

      It’s kind of a shame to have to downgrade Gary to “not wrong, but kind of a dick” here. Especially because his sneer game as shown at the end there is actually not half bad.

    • scruiser@awful.systems
      link
      fedilink
      English
      arrow-up
      4
      ·
      16 days ago

      Gary Marcus has been a solid source of sneer material and debunking of LLM hype, but yeah, you’re right. Gary Marcus has been taking victory laps over a bar set so so low by promptfarmers and promptfondlers. Also, side note, his negativity towards LLM hype shouldn’t be misinterpreted as general skepticism towards all AI… in particular Gary Marcus is pretty optimistic about neurosymbolic hybrid approaches, it’s just his predictions and hypothesizing are pretty reasonable and grounded relative to the sheer insanity of LLM hypsters.

      Also, new possible source of sneers in the near future: Gary Marcus has made a lesswrong account and started directly engaging with them: https://www.lesswrong.com/posts/Q2PdrjowtXkYQ5whW/the-best-simple-argument-for-pausing-ai

      Predicting in advance: Gary Marcus will be dragged down by lesswrong, not lesswrong dragged up towards sanity. He’ll start to use lesswrong lingo and terminology and using P(some event) based on numbers pulled out of his ass. Maybe he’ll even start to be “charitable” to meet their norms and avoid down votes (I hope not, his snark and contempt are both enjoyable and deserved, but I’m not optimistic based on how the skeptics and critics within lesswrong itself learn to temper and moderate their criticism within the site). Lesswrong will moderately upvote his posts when he is sufficiently deferential to their norms and window of acceptable ideas, but won’t actually learn much from him.

  • YourNetworkIsHaunted@awful.systems
    link
    fedilink
    English
    arrow-up
    13
    ·
    13 days ago

    Damn cat just stood on my phone and launched Gemini for the first time, so we can drop Google’s monthly active user count by one relative to whatever they claim.

  • lagrangeinterpolator@awful.systems
    link
    fedilink
    English
    arrow-up
    12
    ·
    12 days ago

    AI research is going great. Researchers leave instructions in their papers to any LLM giving a review, telling them to only talk about the positives. These instructions are hidden using white text or a very small font. The point is that this exploits any human reviewer who decides to punt their job to ChatGPT.

    My personal opinion is that ML research has become an extreme form of the publish or perish game. The most prestigious conference in ML (NeurIPS) accepted a whopping 4497 papers in 2024. But this is still very competitive, considering there were over 17000 submissions that year. The game for most ML researchers is to get as many publications as possible in these prestigious conferences in order to snag a high paying industry job.

    Normally, you’d expect the process of reviewing a scientific paper to be careful, with editors assigning papers to people who are the most qualified to review them. However, with ML being such a swollen field, this isn’t really practical. Instead, anyone who submits a paper is also required to review other people’s submissions. You can imagine the conflicts of interest that can occur (and lazy reviewers who just make ChatGPT do it).

  • wizardbeard@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    11
    ·
    edit-2
    17 days ago

    Ed’s got another banger: https://www.wheresyoured.at/make-fun-of-them/

    What’s extra fun is that HN found it: https://news.ycombinator.com/item?id=44424456

    There’s at least one (if not two if you handle the HN response separately) good threads that could be made from this. Don’t have the time personally at the moment.

    I will say that I’m shocked to see some reasonable shit in the HN comments, people saying the post is too long or not an acceptable tone are getting told off rather respectably with some good explanations (effectively: this was written this way intentionally you dolt). Broken clock and all that, I guess.

    • YourNetworkIsHaunted@awful.systems
      link
      fedilink
      English
      arrow-up
      11
      ·
      17 days ago

      Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform it’s dark alchemy. And while it’s not that the rabbit hole doesn’t go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

      I’m not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can’t do what they do so that you don’t ask the incredibly obvious questions about why it’s so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don’t know what kinda excuse the business idiots and political bullshitters are going to come up with.

      • istewart@awful.systems
        link
        fedilink
        English
        arrow-up
        5
        ·
        16 days ago

        You’re absolutely right that the computer is still a black box to a lot of people, but throughout the personal computing era, there has at least been a pathway to mastery for the tools it offers. Furthermore, the touchscreen/smartphone era has roped in mechanisms of touch and proprioception that make the devices a more intimate, if deeply imperfect, extension of the self. Up until sometime late last decade, the Steve Jobs “bicycle of the mind” concept was still a driving force in the field.

        I still don’t think most people grasp what a subtle, but fundamental, break it is that these AI products demand you confront them as a wholly separate entity from yourself. The path to mastery, and the feedback loop that builds that path, is so obscure it may as well not exist. If you wish to retrain a model, you’ve got to invest huge amounts of time and resources, as well as what remains a specialized (and not well-specified, as Ed highlights) skillset… and since it’s a probabilistic process, you’re still not going to get consistent results.

        I am more and more convinced that one of the damning core flaws of the current crop of AI technologies is that they are designed to incentivize use of centralized computing resources. Their designers are simply asking completely the wrong questions for the people the technologies are being imposed upon. But you can’t say that someplace like HN, or even some parts of Bluesky, because so many people’s salaries still depend on the rents from centralized computing.

    • self@awful.systems
      link
      fedilink
      English
      arrow-up
      18
      ·
      12 days ago

      you know, even knowing who and what Altman really is, that “politically homeless” tweet really is shockingly fascist. it’s got all my favorites!

      • nationalism in every paragraph
      • large capitalism will make me rich, and so can you!
      • small government (but only the parts that Sam doesn’t like)
      • we can return to a fictional, bright past

      so countdown until Altman goes full-throated MAGA and in spite of how choreographed and obvious it is, it somehow still comes to a surprise to the people in our industry desperately clinging to the idea that software can’t be political

      • YourNetworkIsHaunted@awful.systems
        link
        fedilink
        English
        arrow-up
        9
        ·
        12 days ago

        I also absolutely hate this “abundance” narrative that these assholes keep trying to push. Like, outside of some parts of the housing market the problem isn’t that the stuff (or the productive capacity to make the stuff) doesn’t exist, it’s that we have an economic system focused on maximizing profit and you can’t make money selling things to people who can’t afford to buy them. Like, economic inequality is the primary obstacle to the kind of universal abundance that these people claim to want, but because it necessitates some kind of redistribution they can’t actually acknowledge that. But mark my words if we ever do get serious about our social safety nets and making sure that low-income people have enough money to buy the things they need for a good life we will start seeing the Saltmans (maybe not him specifically) start innovating to find ways to get those things to them.

        • swlabr@awful.systems
          link
          fedilink
          English
          arrow-up
          9
          ·
          edit-2
          12 days ago

          Abundance just be repackaged free-market libertarian shit. The liberals that are pushing it are participating in the storied liberal tradition of courting reactionaries and fascists, thinking they are immune to the effects of intero-abyssal staring.

    • Sailor Sega Saturn@awful.systems
      link
      fedilink
      English
      arrow-up
      12
      ·
      12 days ago

      Bonus: He also appears to think LLM conversations should be exempt from evidence retention requirements due to ‘AI privilege’ (tweet).

      Now I’m all for privacy, and this is a good reminder that ‘the cloud’ is not as private as maybe it should be. But clearly AI privilege is not a thing that should exist.

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        12
        ·
        12 days ago

        Bonus: He also appears to think LLM conversations should be exempt from evidence retention requirements due to ‘AI privilege’ (tweet).

        Hot take of the day: Clankers have no rights, and that is a good thing

    • antifuchs@awful.systems
      link
      fedilink
      English
      arrow-up
      8
      ·
      12 days ago

      Poor rich guy, forced by the leftmost party available to support the party that is now constructing concentration camps.

    • Architeuthis@awful.systems
      link
      fedilink
      English
      arrow-up
      13
      ·
      edit-2
      13 days ago

      We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off. It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI).

      Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.

      Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse.

      Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.”

      alt text

      screenshot from south park’s scientology episode featuring the iconic chyron “This is what scientologists actually believe” with “scientologists” crossed out and replaced with “rationalists”

      • BlueMonday1984@awful.systemsOP
        link
        fedilink
        English
        arrow-up
        8
        ·
        13 days ago

        “biological civilization is about to create artificial superintelligence” is it though?

        I’m gonna give my quick-and-dirty opinion on this, don’t expect a lengthy defence.

        Short answer, no. Long answer: no, intelligence cannot be created by blindly imitating it with mere silicon