Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • Architeuthis@awful.systems · 16 hours ago

    OpenAI Declares ‘Code Red’ as Google Threatens AI Lead

    I just wanted to point out this tidbit:

    Altman said OpenAI would be pushing back work on other initiatives, such as advertising, AI agents for health and shopping, and a personal assistant called Pulse.

    Apparently a fortunate side effect of Google supposedly closing the gap is that it’s a great opportunity to give up on agents without looking like complete clowns. And also to make Pulse even more vapory.

    • istewart@awful.systems · 9 hours ago

      Is Pulse the Jony Ive device thing? I had half a suspicion that will never come to market anyway.

  • sc_griffith@awful.systems · 8 hours ago

    the fifth episode of odium symposium, “4chan: the french connection” is now up. roughly the first half of the episode is a dive into sartre’s theory of antisemitism. then we apply his theory to the style guide of a nazi news site and the life of its founder, andrew anglin

    EDIT: btw if you like the episode please tell people about it! frankly we have no idea how to market or otherwise promote a podcast sooo we’re kind of just hoping the listeners do it

    • fullsquare@awful.systems · 2 hours ago

      https://boomsupersonic.com/flyby/ai-needs-more-power-than-the-grid-can-deliver-supersonic-tech-can-fix-that

      okay, that’s the missing piece (? not the last): 1GW from GE, 1GW from proenergy, 1.2GW from this fuckass startup that nobody heard of, either missing 1.2GW of gas turbines or 1.2GW grid connection gets almost 4.5GW of power for crusoe
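
      The tally above can be sanity-checked with a quick sketch (the figures and the “almost 4.5GW” total are the comment’s own, not independently verified; the labels are just for illustration):

      ```python
      # Capacity figures as quoted in the comment above, in GW.
      # Labels are illustrative; none of these numbers are verified.
      capacities_gw = {
          "GE turbines": 1.0,
          "ProEnergy turbines": 1.0,
          "startup turbines": 1.2,
          "extra turbines or grid connection": 1.2,
      }

      total_gw = sum(capacities_gw.values())
      print(f"{total_gw:.1f} GW")  # 4.4 GW, i.e. "almost 4.5GW of power for crusoe"
      ```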

      also you don’t need supersonic jet engines for that, these will be actively worse in reality for stationary power generation. they do that because you can haul them in a truck

      Meanwhile China is adding power capacity at a wartime pace—coal, gas, nuclear, everything—while America struggles to get a single transmission line permitted.

      thank Jack Welch for deindustrialization then

      we built something no one else has built this century: a brand-new large engine core optimized for continuous, high‑temperature operation.

      Lockheed Martin: am i a joke to you? (also, lots of manufacturers for proper CCGT turbines do just that)

    • gerikson@awful.systems · 2 hours ago

      This is such a pivot, from “you can soon fly between capitals in half the time” to “this screaming jet engine will soon be disturbing your sleep 24/7”

  • fullsquare@awful.systems · 1 day ago

    regarding my take in the previous stubsack, it does seem like crusoe intends to use these gas turbines as backup, and as of 31.07.2025 they had five turbines installed, who knows if connected, with an obvious place for five more, with some pieces of them (smokestacks mostly) in place. it does make sense that as of the october announcement, they had the first tranche of 10 installed or at least delivered. there’s no obvious prepared place where they intend to put the next 19 of them, and that’s just the stuff from GE, with 21 more coming from proenergy (maybe that’s for a different site?). that said, it’s texas with the famously reliable ercot, which means that on top of using these for shortages, they might be paying market rates for electricity, which means that even with power available, they might turn the turbines on when electricity gets ridiculously expensive. i’m sure that dispatchers will love some random fuckass telling them “hey, we’re disconnecting 250MW of load in 15 minutes” when the grid is already unstable due to being overloaded

  • swlabr@awful.systems · 1 day ago

    After finding out about her here, I’ve been watching a lot of Angela Collier videos lately. Here’s the most recent one, which talks about our life-extending friends.

    E: just expressing my general appreciation for her vids. Things that I like:

    • low frequency of cuts/her speech isn’t broken up into 5 second clips
    • lack of kowtowing to algorithmic suggestion
    • subtle, dry humour

    Which I’m now realising is somewhat counter to current trends in content, which might be contributing to why I like these.

    • Soyweiser@awful.systems · 21 hours ago

      She also said she basically wants to focus less on the sort of ‘callout’ content which does well on yt and more on actual physics stuff. Which is great, and it’s also good she realized how slippery a slope that sort of content is for your channel.

      (I mentioned before how sad it is to see ‘angry gamer culture war’ channels be stuck in that sort of content, as when they do non-rage shit, nobody watches them. (I mean sad for them in an ‘if i was them’ way btw, don’t get me wrong, fuckem for choosing that path (and fuck the system for the fact that they are now financially stuck in that, and that it made this an available path anyway (while making it hard for lgbt people to make a channel about their experiences)), so many people hurt/radicalized for a few clicks and ad money))

  • BioMan@awful.systems · 2 days ago

    The Great Leader himself, on how he avoids going insane during the ongoing End of the World because, among other things, that’s not what an intelligent character would do in a story, but you might not be capable of that.

    • Soyweiser@awful.systems · 21 hours ago

      I forgot to mention it last week, but this is Scott Adams shit. The stuff which made him declare that Trump would win in a landslide in 2016 due to movie rules. Iirc he also claimed he was right about that, despite Trump not winning in a landslide, the sort of goalpost-moving bs which he judges others harshly for (even in situations where it doesn’t actually apply)

      So up next, Yud will claim some pro AI people want him dead and after that Yud will try to convince people he can bring people to orgasm by words alone. I mean those are the ‘adamslike’ genre tropes now.

    • swlabr@awful.systems · 1 day ago

      The first and oldest reason I stay sane is that I am an author, and above tropes.

      Nobody is above tropes. Tropes are just patterns you see in narratives. Everything you can describe is a trope. To say you are above tropes means you don’t live and exist.

      Going mad in the face of the oncoming end of the world is a trope.

      Not going mad as the world ends is also a trope, you fuck!

      This sense – which I might call, genre-savviness about the genre of real life – is historically where I began; it is where I began, somewhere around age nine, to choose not to become the boringly obvious dramatic version of Eliezer Yudkowsky that a cliche author would instantly pattern-complete about a literary character facing my experiences.

      We now have a canon mental age for Yud of drumroll nine.

      Just decide to be sane

      That isn’t how it works, idiot. You can’t “decide to be sane”, that’s like having a private language.

      Anyway, just to make the subtext of my other comments into text. Acting like you are a character in a story is a dissociative delusion and counter to reality. It is definitively not sane. Insane, if you will.

      • V0ldek@awful.systems · 22 hours ago

        To say you are above tropes means you don’t live and exist.

        To say you are above tropes is actually a trope

      • swlabr@awful.systems · 23 hours ago

        Followup:

        Look, the world is fucked. All kinds of paradigms we’ve been taught have been broken left and right. The world has ended many times over in this regard. In place of anything interesting or helpful to address this, Yud’s encoded a giant turd into a blog post. How to stay sane? Just stay sane, bro. Easy to say if the only thing threatening your worldview is a made-up robodemon that will never exist.

        Here’s Yud’s actually-quite-easy-to-understand suggestions:

        1. detach from reality by pretending you are a character in a story as a coping mechanism.
        2. assume no personal responsibility or agency.
        3. don’t go insane, i.e. make sure you try and fulfil society’s expectations of what sanity is.

        All of these are terrible. In general, you want to stay grounded in reality, be aware of the agency you have in the world, and not feel pressured to performatively participate in society, especially if that means doing arbitrary rituals to prove that you are “sane”.

        Here are my thoughts on “how to stay sane” and “how to cope”:

        It’s entirely reasonable to crash out. I don’t want anyone to go insane, but fucking look at all this shit. Datacenters are boiling the oceans. Liberalism is starting its endgame into fascism. All the fucking genocides! Dissociating is acceptable and expected as an emotional response. All of this has been happening in (modern) human history to a degree where crashing out has been reasonable. Yet, many people have been able to “stay sane” in the face of this. If you see someone who appears to be sane, either they’re fucked in the head, or they have some perspective or have built up some level of resilience. Whether or not those things can be helpful to someone else is not deterministic. If you are someone who has “stayed sane”, please remember to show some empathy and some awareness that it’s fine if someone is miserable, because again, everything is fucked.

        Putting the above together, I accept basically any reaction to the state of the world. It’s reasonable to go either way, and you shouldn’t feel bad either way. “Sanity” has different meanings depending on where you look. I think there’s a common, unspoken definition that basically boils down to “a sane person is someone who can productively participate in society.” This is not a standard you always need to hold yourself to. I think it’s helpful to introspect and, uh, “extrospect”, here. Like, figure out what you think it means to be sane, what you want it to mean, and what you want. And bounce these ideas off of someone else, because that usually helps.

        I think there is another common definition of sanity that might just be “mentally healthy”. To that end, things that have helped me, aside from therapy, that aren’t particularly insightful or unique:

        1. Talking to friends
        2. Finding places to talk about the world going to shit.
        3. Participating in community, online or irl.
        4. Basically just finding spaces where stupid shit gets dunked on.
        5. Leftist meme pages

        I mean, is that so fucking hard to say?

    • scruiser@awful.systems · 1 day ago

      One part in particular pissed me off for being blatantly the opposite of reality

      and remembering that it’s not about me.

      And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.

      Eliezer literally has multiple sequence posts about his foolish youth where he nearly destroyed the world trying to jump straight to inventing AI instead of figuring out “AI Friendliness” first!

      I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.

      Nah, you learned nothing from what you did wrong, and your sequence posts were the very sort of self-aggrandizing bullshit you’re mocking here.

      Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.

      Eliezer’s “AGI concerns to work on” was making a plan for him, personally, to lead a small team, which would solve meta-ethics and figure out how to implement these meta-ethics in a perfectly reliable way in an AI that didn’t exist yet (that a theoretical approach didn’t exist for yet, that an inkling of how to make traction on a theoretical approach for didn’t exist yet). The very plan Eliezer came up with was self aggrandizing bullshit that made everything about Eliezer.

    • zogwarg@awful.systems · 1 day ago

      Screaming at the void towards Chuunibyou (wiki) Eliezer: YOU ARE NOT A NOVEL CHARACTER, THINKING OF WHAT BENEFITS THE NOVELIST vs THE CHARACTER HAS NO BEARING ON REAL LIFE.

      Sorry for yelling.

      Minor notes:

      But <Employee> thinks I should say it, so I will say it. […] <Employee> asked me to speak them anyways, so I will.

      It’s quite petty of Yud to be so passive-aggressive towards his employee, who insisted he at least try to discuss coping. Name-dropping him not once but twice (although that is also likely just poor editing).

      “How are you coping with the end of the world?” […Blah…Blah…Spiel about going mad tropes…]

      Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

      Alternatively it’s also a question to gauge how full of shit you may be. (By gauging how emotionally invested you are)

      The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

      Emotional turmoil, and how characters cope or fail to cope, makes excellent literature! That all you can think of is “going mad” reflects only your poor imagination as both a writer and a reader.

      I predict, because to them I am the subject of the story and it has not occurred to them that there’s a whole planet out there too to be the story-subject.

      This is only true if they actually accept the premise of what you are trying to sell them.

      […] I was rolling my eyes about how they’d now found a new way of being the story’s subject.

      That is deeply ironic, coming from someone who makes choices based on being the main character of a novel.

      Besides being a thing I can just decide, my decision to stay sane is also something that I implement by not writing an expectation of future insanity into my internal script / pseudo-predictive sort-of-world-model that instead connects to motor output.

      If you are truly doing this, I would say that means you are expecting insanity wayyyyy too much. (also psychobabble)

      […Too painful to actually quote psychobabble about getting out of bed in the morning…]

      In which Yud goes into in-depth, self-aggrandizing, nonsensical detail about a very mundane trick for getting out of bed in the morning.

      • V0ldek@awful.systems · 21 hours ago

        The trope of somebody going insane as the world ends, does not appeal to me as an author, including in my role as the author of my own life. It seems obvious, cliche, predictable, and contrary to the ideals of writing intelligent characters. Nothing about it seems fresh or interesting. It doesn’t tempt me to write, and it doesn’t tempt me to be.

        When I read HPMOR, which was years ago, before I knew who tf Yud was and thought Harry was intentionally written as a deeply flawed character and not a fucking self-insert, my favourite part was Hermione’s death. Harry then goes into grief that he is unable to cope with, dissociating to such an insane degree that he stops viewing most other people as thinking and acting individuals. He quite literally goes insane as his world (his friend, and his illusion of being the smartest and always in control of the situation) ended.

        Of course now in hindsight I know this is just me inventing a much better character and story, and Yud is full of shit, but I find it funny that he inadvertently wrote a character behaving insanely and probably thought he was actually a turborational guy completely in control of his own feelings.

        • YourNetworkIsHaunted@awful.systems · 7 hours ago

          I feel like this is a really common experience with both HPMoR and HP itself, and explains a large part of the positive reputation they enjoy(ed).

      • scruiser@awful.systems · 1 day ago

        Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

        I think the way he reads the question is telling on himself. He knows he is sort of doing a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and continuing to follow the lib-centrist deep down inside himself and rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but it might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that include failing to empathize with the journalist and understand the question.

      • YourNetworkIsHaunted@awful.systems · 1 day ago

        Yud seems to have the same conception of insanity that Lovecraft did, where you learn too much and end up gibbering in a heap on the floor and needing to be fed through a tube in an asylum or whatever. Even beyond the absurdity of pretending that your authorial intent has some kind of ability to manifest reality as long as you don’t let yourself be the subject (this is what no postmodernism does to a person), the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us. I guess prophets of doom aren’t really known for being stable or immune to narcissistic flights of fancy.

        • gerikson@awful.systems · 1 day ago

          Having a SAN stat act like an INT (IQ) stat is very on brand for rationalists (except ofc the INT stat is immutable duh)

        • scruiser@awful.systems · 1 day ago

          the actual fear of “going mad” seems fundamentally disconnected from any real sense of failing to handle the stress of being famously certain that the end times are indeed upon us

          I think he actually is failing to handle the stress he has inflicted on himself, and that’s why his latest few lesswrong posts had really stilted, poor parables about Chess and about alien robots visiting earth that were much worse than classic sequences parables. And why he has basically given up trying to think of anything new and instead keeps playing the greatest lesswrong hits on repeat, as if that would convince anyone that isn’t already convinced.

      • fullsquare@awful.systems · 1 day ago

        They expect you to be answering how you are managing your emotions and your stress, or bar that give a message of hope or of some desperation, they are trying to engage with you as real human being, not as a novel character.

        does EY fail to get that the interview isn’t for him, but for the audience? if he wants to sway anyone, then he’d need to adjust what he talks about and how, otherwise it just turns into a circlejerk

      • blakestacey@awful.systems · 1 day ago

        “How do you keep yourself from going insane?”

        “I tell myself I’m a character from a book who comes to life and is also a robot!” (Hubert Farnsworth giggle)

  • YourNetworkIsHaunted@awful.systems · 2 days ago

    Patrick Boyle on YouTube has a breakdown of the breakdown of the Microstrategy flywheel scheme. Decent financial analysis of this nonsense combined with some of the driest humor on the internet.

    • zogwarg@awful.systems · 2 days ago

      A fairly good and nuanced guide. No magic silver-bullet shibboleths for us.

      I particularly like this section:

      Consequently, the LLM tends to omit specific, unusual, nuanced facts (which are statistically rare) and replace them with more generic, positive descriptions (which are statistically common). Thus the highly specific “inventor of the first train-coupling device” might become “a revolutionary titan of industry.” It is like shouting louder and louder that a portrait shows a uniquely important person, while the portrait itself is fading from a sharp photograph into a blurry, generic sketch. The subject becomes simultaneously less specific and more exaggerated.

      I think it’s an excellent summary, and it connects with the “Barnum effect” of LLMs, making them appear smarter than they are. And that it’s not the presence of certain words, but the absence of certain others (and, well, content) that is a good indicator of LLM-extruded garbage.

      • V0ldek@awful.systems · 23 hours ago

        Also, from this guide you can explain in one step why people with working bullshit detectors tend to immediately clock LLM output, while the executive class, whose whole existence is predicated on not discerning bullshit, are its greatest fans. A lot of us have seen A Guy In A Suit do this, intentionally avoiding specifics to make himself/his company/his product look superficially better. Hell, the AI hype itself (and the blockchain and metaverse nonsense before it) relies heavily on this: never say specifics, always say “revolutionary technology, future, here to stay”, quickly run away if anyone tries to ask a question.

        • zogwarg@awful.systems · 11 hours ago

          My feeling has become that I prefer the business-executive empty over the LLM empty; at least the former usually expresses some personality. It’s never entirely empty.

    • lagrangeinterpolator@awful.systems · 2 days ago

      Although I never use LLMs for any serious purpose, I do sometimes give LLMs test questions in order to get firsthand experience on what their responses are like. This guide tracks quite well with what I see. The language is flowery and full of unnecessary metaphors, and the formatting has excessive bullet points, boldface, and emoji. (Seeing emoji in what is supposed to be a serious text really pisses me off for some reason.) When I read the text carefully, I can almost always find mistakes or severe omissions, even when the mistake could easily be remedied by searching the internet.

      This is perfectly in line with the fact that LLMs do not have deep understanding, or the understanding is only in the mind of the user, such as with rubber duck debugging. I agree with the “Barnum-effect” comment (see this essay for what that refers to).

    • BlueMonday1984@awful.systems (OP) · 2 days ago

      Doing a quick search, it hasn’t been posted here until now - thanks for dropping it.

      In a similar vein, there’s a guide to recognising AI-extruded music on Newgrounds, written by two of the site’s Audio Moderators. This has been posted here before, but having every “slop tell guide” in one place is more convenient.

      • AnarchistArtificer@slrpnk.net · 2 days ago

        “This has been posted here before, but having every “slop tell guide” in one place is more convenient.”

        Man, this is why human labour still reigns supreme. It’s such a small thing to consider the context in which these resources would be useful and to group together related resources as you have done here, but actions like this are how we can genuinely construct new meaning in the world. Even if we could completely eradicate hallucinations and nonspecific waffle in LLM output, they would still be woefully inept at this kind of task — they’re not good at making new stuff, for obvious reasons.

        TL;DR: I appreciate you grouping these resources together for convenience. It’s the kind of mindful action that makes me think usefully about community building and positive online discourse.

        • YourNetworkIsHaunted@awful.systems · 2 days ago

          It’s also the sort of thing that you wouldn’t actually think to ask for until it became quite hard to sort out. Creating this kind of list over time as good resources are found is much more practical, and not the kind of thing that would likely be automated.