Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

  • rook@awful.systems · 6 days ago

    Haven’t read the source paper yet (apparently it came out two weeks ago, maybe it already got sneered?) but this might be fun: OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws.

    Full of little gems like

    Beyond proving hallucinations were inevitable, the OpenAI research revealed that industry evaluation methods actively encouraged the problem. Analysis of popular benchmarks, including GPQA, MMLU-Pro, and SWE-bench, found nine out of 10 major evaluations used binary grading that penalized “I don’t know” responses while rewarding incorrect but confident answers.

    I had assumed that the problem was solely technical, that the fundamental design of LLMs meant that they’d always generate bullshit, but it hadn’t occurred to me that the developers actively selected for bullshit generation.

    It seems kinda obvious in retrospect… slick bullshit extrusion is very much what is selling “AI” to upper management.
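    To make the quoted point concrete, here’s a toy sketch (mine, not the paper’s and not any real benchmark harness) of the incentive: under binary grading, “I don’t know” scores like a wrong answer, so a model that always guesses beats an honest one in expectation, no matter how bad its guesses are.

```python
# Toy model of binary benchmark grading; not any real eval harness.

def binary_score(answer: str, correct: str) -> int:
    """1 for an exact match, 0 for anything else -- including 'I don't know'."""
    return 1 if answer == correct else 0

def expected_score(p_correct: float, abstains: bool) -> float:
    """Expected score on one question: abstaining always yields 0,
    guessing yields p_correct on average."""
    return 0.0 if abstains else p_correct

# Even a model that's right only 10% of the time is better off bullshitting:
print(expected_score(0.10, abstains=False))  # 0.1 -> always guess
print(expected_score(0.10, abstains=True))   # 0.0 -> never admit ignorance
```

    Any grader shaped like binary_score selects for confident wrong answers over calibrated ones, which is the selection pressure the article describes.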

    • BlueMonday1984@awful.systems (OP) · 6 days ago

      Well, I’ll give them the text equivalent of a “you tried” sticker for finally admitting their automatic bullshit machines produce (gasp) bullshit, but the main sneerable thing I see is the ISO Standard OpenAI Anthropomo-

      the developers actively selected for bullshit generation

      every_tf2_class_laughing_at_once.wav

      (Maximising lies extruded per ocean boiled was definitely what they were going for in hindsight, but it genuinely cracks me up to see them come out and just say it)

  • corbin@awful.systems · 6 days ago

    There’s an ACX guest post rehashing the history of Project Xanadu, an important example of historical vaporware that influenced computing primarily through opinions and memes. This particular take is focused on Great Men and isn’t really up to the task of humanizing the participants, but they do put a good spotlight on the cults that affected some of those Great Men. They link to a 1995 article in Wired that tells the same story in a better way, including the “six months” joke. The orange site points out a key weakness that neither narrative quite gets around to admitting: Xanadu’s micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd. My choice sneer is to examine a comment from one of the ACX regulars:

    The details lie in the devil, for sure…you’d want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

    Ah yes, low enough to allow our heroic wiki-builders, wiki-citers, and wiki-correctors; and high enough to forbid their brutish wiki-pedants, wiki-lawyers, and wiki-deleters.

    Disclaimer: I know Miller and Tribble from the capability-theory community. My language Monte is literally a Python-flavored version of Miller’s E (WP, esolangs), which is itself a Java-flavored version of Tribble’s Joule. I’m in the minority of a community split over the concept of agoric programming, where a program can expand to use additional resources on demand. To me, an agoric program is flexible about the resources allocated to it and designed to dynamically reconfigure itself; to Miller and others, an agoric program is run on a blockchain and uses micropayments to expand. Maybe more pointedly, to me a smart contract is what a vending machine proffers (see How to interpret a vending machine: smart contracts and contract law for more words); to them, a smart contract is how a social network or augmented/virtual reality allows its inhabitants to construct non-primitive objects.
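    For the curious, a minimal Python sketch of the vending-machine reading of a smart contract (names and details are my own illustration, not anything from E, Joule, or Monte): the terms are enforced by the mechanism itself, with no blockchain and no court involved.

```python
# Toy "smart contract as vending machine": a deal whose terms are
# mechanically enforced by the object that proffers it. Hypothetical names.

class VendingMachine:
    def __init__(self, item: str, price: int):
        self.item = item        # what the machine offers
        self.price = price      # the posted terms of the deal
        self.inserted = 0       # payment received so far

    def insert_coin(self, amount: int) -> None:
        self.inserted += amount

    def vend(self) -> str:
        # The mechanism enforces the contract: no payment, no item.
        if self.inserted < self.price:
            raise ValueError("insufficient payment; no deal formed yet")
        self.inserted -= self.price
        return self.item

machine = VendingMachine("soda", price=3)
machine.insert_coin(3)
print(machine.vend())  # soda
```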

    • froztbyte@awful.systems · 5 days ago

      Xanadu’s micropayment-oriented transclusion-and-royalty system is impossible to correctly implement, due to a mismatch between information theory and copyright; given the ability to copy text, copyright is provably absurd

      it kept being funny to me that, even while xanadu had already shown the problems with content control, the entirety of the NFT craze just went on as if it was a full greenfields novel problem

      The details lie in the devil, for sure…you’d want the price [of making a change to a document] low enough (zero?) not to incur Trivial Inconvenience penalties for prosocial things like building wikis, yet high enough to make the David Gerards of the world think twice.

      some of these people just really don’t know their history very well, do they

      on a total tangent:

      while xanadu’s commercial-aspiration history is intimately tied up in why it never got much further, I do occasionally daydream about if we had, and if we could’ve combined it with more-modern signing and sourcing: daydream in the respect of “CA and cert chains, but for transcluded content”, esp in the face of all the fucking content mills used to push disinfo etc. not sure this would work ootb either, mind you, it’s got its own set of vulnerabilities and problems that you’d need to work through (and ofc you can’t solve social problems purely in the technical domain)
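      (a rough stdlib-only sketch of what “cert chains for transcluded content” might look like; hmac here is only a stand-in for the real asymmetric signatures an actual design would need, and every name is hypothetical)

```python
# Rough sketch: content-addressed chunks plus publisher attestations, so a
# transclusion can be verified back to its source. hmac stands in for real
# asymmetric signatures / cert chains; all names are hypothetical.

import hashlib
import hmac

def chunk_id(content: bytes) -> str:
    # Address a chunk by its hash, so a transclusion points at immutable bytes.
    return hashlib.sha256(content).hexdigest()

def attest(publisher_key: bytes, cid: str) -> str:
    # The source "signs" a chunk id, vouching for its provenance.
    return hmac.new(publisher_key, cid.encode(), hashlib.sha256).hexdigest()

def verify(publisher_key: bytes, cid: str, attestation: str) -> bool:
    return hmac.compare_digest(attest(publisher_key, cid), attestation)

key = b"hypothetical-publisher-key"
original = b"the quote as originally published"
cid = chunk_id(original)
sig = attest(key, cid)

# A transclusion carries (cid, sig); doctoring the content changes the cid
# and breaks verification -- the property you'd want against content mills.
print(verify(key, cid, sig))                          # True
print(verify(key, chunk_id(b"doctored quote"), sig))  # False
```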

      has there been any meaningful advancement or neat new research in agoric computing? haven’t really looked into it in a while, and the various blockchain nonsense took so much air out of the room for so long I haven’t had the spoons to look

      (separately I know there’s also been some developments in remote trusted compute, but afaict that’s also still quite early days)

    • Soyweiser@awful.systems · 6 days ago

      The 17 rules also seem to have abuse built in. Documents need to be stored redundantly (without any mention of how many copies that means), and it has a system where people are billed for the data they store. Combine these and storing your data anywhere runs the risk of a malicious actor emptying your accounts, in an ‘it costs ten bucks to store a file here’ ‘sorry, we had to securely store ten copies of your file, 100 bucks please’ sort of way. Weird set of rules. Feels a lot like it never figured out what it wants to be: a centralized or a distributed system, a system where writers can make money or one they need to pay to use. And a lot of technical solutions for social problems.
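      (The abuse in numbers, as a trivial sketch with made-up figures: per-copy billing times a redundancy factor the host gets to pick means the host, not the writer, decides the bill.)

```python
# Trivial model of the billing abuse described above; numbers are made up.

def storage_bill(price_per_copy: float, copies_host_decides: int) -> float:
    return price_per_copy * copies_host_decides

print(storage_bill(10.0, 1))   # 10.0  -- "it costs ten bucks to store a file"
print(storage_bill(10.0, 10))  # 100.0 -- "we had to store ten copies"
```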

      • froztbyte@awful.systems · 5 days ago

        much of the lore of the early/earlier internet being built is also full of some extremely, extremely unhinged stuff. I’ve had some first-hand in-the-trenches accounts from people I’ve known active from the early-mid 90s to middle 00s and holy shit there are some batshit things happening in places. often think of it when I see the kinds of shit thiel/musk/etc are all up to (a lot of it boils down to “they’re big mad that they have to even consider other people and can’t just do whatever they like”)

    • blakestacey@awful.systems · 6 days ago

      If you use physical force to stop me however, I will make it a priority to ensure you regret doing this when you are on your deathbed. You have probably never met an enemy as intelligent, creative and willing to play the decade-long game as I am.

      “When you were partying, I studied the blade.”

    • sus@programming.dev · 6 days ago

      Also the point is to get attention of broader public, not just those at the labs.

      The highest possible attainment: generating several popular memes of the “crazy cult member does something slightly odd to show his devotion, but isn’t brave enough to do it outside his own home” variety

      disclaimer

      memes often contain mild inaccuracies

      • scruiser@awful.systems · 6 days ago

        The way typical US educations (idk about other parts of the world) portray historical protests and activist movements has been disastrous to the ability of people to actually succeed in their activism. My cynical assumption is that is exactly as intended.

          • scruiser@awful.systems · 6 days ago

            So, to give the first example that comes to mind: in my education from elementary school to high school, the (US) Civil Rights movement of the 1950s and 1960s was taught with a lot of emphasis on passive nonviolent resistance, downplaying just how disruptive they had to make their protests to make them effective and completely ignoring armed movements like the Black Panthers. Martin Luther King Jr.’s interest in and advocacy for socialism is ignored. The level of organization and careful planning by some of the organizations isn’t properly explained. (For instance, Rosa Parks didn’t just spontaneously decide not to move her seat one day; they planned it and picked her in order to advance a test case, but I don’t think any of my school classes explained that until high school.) Some of the level of force the federal government had to bring in against the Southern states (e.g. Federal Marshals escorting Ruby Bridges) is properly explained, but the full scale is hard to visualize. So the overall misleading impression someone could develop or subconsciously perceive is that rights were given to Black people through democratic processes after they politely asked for them with just a touch of protests.

            Someone taking the way their education presents the Civil Rights protests at face value without further study will miss the role of armed resistance, miss the level of organization and planning going on behind pivotal acts, and miss just how disruptive protests had to get to be effective. If you are a capital owner benefiting from the current status quo (or well paid middle class that perceives themselves as more aligned with the capital owners than other people that work for a living), then you have a class interest in keeping protests orderly and quiet and harmless and non-disruptive. It vents off frustration in a way that ultimately doesn’t force any kind of change.

            This hunger strike and other rationalist attempts at protesting AI advancement seem to suffer from this kind of mentality. They aren’t organized on a large scale and they don’t have coherent demands they agree on (which is partly a symptom of the fact that the thing they are trying to stop is so speculative and uncertain). Key leaders like Eliezer have come out strongly against any form of (non-state) violence. (Which is a good thing, because their fears are unfounded, but if I actually thought we were doomed with p=.98 I would certainly be contemplating vigilante violence.) (Also, note from the ‘nuke the datacenters’ comments, Eliezer is okay with state-level violence.) Additionally, the rationalists often have financial and social ties to the very AI companies they are protesting, further weakening their ability to engage in effective activism.

            • V0ldek@awful.systems · 5 days ago

              That’s interesting, because in Poland 95% of all history you are taught is “and then they grabbed guns because they were just so fed up with their* shit”, and modern history is mostly anti-communist worker movements that were all about general strikes and loud, disruptive protests.

              *Russians’, Germans’, Austrians’, king’s, …

              • scruiser@awful.systems · 5 days ago

                So us Americans do get some of “grabbed guns and openly fought” in the history of our revolutionary war, but it’s taught in a way that doesn’t link it to any modern movements that armed themselves. And the people most willing to lean into guns and revolutionary-war imagery/iconography tend to be far right wing (and against movements for workers’ rights or minorities’ rights or such).

    • Soyweiser@awful.systems · 6 days ago

      I almost wanna use some reverse psychology to try and make him stop.

      ‘hey im from sneerclub and we are loving this please dont stop this strike’

      (I mean he clearly mentally prepped against arguments and even force (and billionaires), but not someone just making fun of him. Of course he prob doesn’t know about any of these places and hasn’t built us up to boogeyman status, but imagine it worked)

    • swlabr@awful.systems · 6 days ago

      Hmm, it’s still on the funny side of the graph for me. I think it could go on for at least another week.

  • YourNetworkIsHaunted@awful.systems · 12 days ago

    Sneer inspired by a thread on the preferred Tumblr aggregator subreddit.

    Rationalists found out that human behavior didn’t match their ideological model, then rather than abandon their model or change their ideology decided to replace humanity with AIs designed to behave the way they think humans should, just as soon as they can figure out a way to do that without them destroying all life in the universe.

    • scruiser@awful.systems · 11 days ago

      That thread gives me hope. A decade ago, a random internet discussion in which rationalists came up would probably mention “quirky Harry Potter fanfiction” with mixed reviews, whereas all the top comments on that thread are calling out the alt-right pipeline and the racism.

      • David Gerard@awful.systems (mod) · 11 days ago

        I have no hope. The guy who introduced me to LessWrong included what I later realised was a race science pitch. Yudkowsky was pushing this shit in 2007. This sucker just realised a coupla decades late.

  • flere-imsaho@awful.systems · 10 days ago

    david heinemeier hansson of ruby on rails fame decided to post a white supremacist screed with a side of transphobia, because now he doesn’t need to pretend anything anymore. it’s not surprising, he was heading this way for a while, but seeing the naked apologia for fascism is still shocking to me.

    any reasonable open source project he participates in should immediately cut ties with the fucker. (i’m not holding my breath waiting, though.)

  • BlueMonday1984@awful.systems (OP) · 13 days ago

    Starting things off with a newsletter by Jared White that caught my attention: Why “Normies” Hate Programmers and the End of the Playful Hacker Trope, which directly discusses how the public perception of programmers has changed for the worse, and how best to rehabilitate it.

    Adding my own two cents, the rise of gen-AI has definitely played a role here - I’m gonna quote Baldur Bjarnason directly here, since he said it better than I could:

    • istewart@awful.systems · 13 days ago

      This is an interesting crystallization that parallels a lot of thoughts I’ve been having, and it’s particularly hopeful that it seeks to discard the “hacker” moniker and instead specifically describe the subjects as programmers. Looking back, I was only becoming terminally online circa 1997, and back then it seemed like there was an across-the-spectrum effort to reclaim the term “hacker” into a positive connotation after the federal prosecutions of the early 90s. People from aspirant-executive types like Paul Graham to dirty hippies like RMS were insistent that being a “hacker” was a good thing, maybe the best possible thing. This was, of course, a dead letter as soon as Facebook set up at “One Hacker Way” in Menlo Park, but I’d say it’s definitely for the best to finally put a solid tombstone on top of that cultural impulse.

      As well, my understanding of the defining activity of the positive-good “hacker” is that it’s all too close to Zuckerberg’s “move fast and break things,” and I think Jared White would probably agree with me. Paul Graham was willing to embrace the term because he was used to the interactive development style of Lisp environments, but the mainstream tools have only fitfully evolved in that direction at best. When “hacking,” the “hacker” makes a series of short, small iterations with a mostly nebulous goal in mind, and the bulk of the effort may actually be what’s invested in the minimum viable product. The self-conception inherits from geek culture a slumped posture of almost permanent insufficiency, perhaps hiding a Straussian victimhood complex to justify maintaining one’s own otherness.

      In mentioning Jobs, the piece gestures towards the important cultural distinction that I still think is underexamined. If we’re going to reclaim and rehabilitate even homeopathic amounts of Jobs’ reputation, the thesis we’re trying to get at is that his conception of computers as human tools is directly at odds with the AI promoters’ (and, more broadly, most cloud vendors’) conception of computers as separate entities. The development of generative AI is only loosely connected with the sanitized smiley-face conception of “hacking.” The sheer amount of resources and time spent on training forecloses the possibility of a rapid development loop, and you’re still not guaranteed viable output at the end. Your “hacks” can devolve into a complete mess, and at eye-watering expense.

      I went and skimmed Graham’s Hackers and Painters again to see if I could find any choice quotes along these lines, since he spends that entire essay overdosing on the virtuosity of the “hacker.” And hoo boy:

      Measuring what hackers are actually trying to do, designing beautiful software, would be much more difficult. You need a good sense of design to judge good design. And there is no correlation, except possibly a negative one, between people’s ability to recognize good design and their confidence that they can.

      You think Graham will ever realize that we’re culminating a generation of his precious “hackers” who ultimately failed at all this?

      • mirrorwitch@awful.systems · 13 days ago

        re: last line: no, he never will admit or concede to a single damn thing, and that’s why every time I remember this article exists I have to reread dabblers & blowhards one more time purely for defensive catharsis

        • YourNetworkIsHaunted@awful.systems · 13 days ago

          I don’t even know the degree to which that’s the fault of the old hackers, though. I think we need to acknowledge the degree to which a CS degree became a good default like an MBA before it, only instead of “business” it was pitched as a ticket to a well-paying job in “computer”. I would argue that a large number of those graduates were never going to be particularly interested in the craft of programming beyond what was absolutely necessary to pull a paycheck.

      • Don Piano@feddit.org · 12 days ago

        Interesting, I’d go rhetorically more in this direction: a hack is not a solution, it’s the temporary fix (or… break?) until you get around to doing it properly. On the axis where hacks are on one end and solutions on the other, genAI shit is beyond the hack. It’s not even a temporary fix; it’s less, functionally and culturally.

        • Soyweiser@awful.systems · 12 days ago

          A hack can also just be a clever way to use a system in a way it wasn’t designed for.

          Say you put a Ring doorbell on a drone as a perimeter defense thing? A hack. See also the woman who makes bad robots.

          It also can be a certain playfulness with tech. Which is why hacker is dead. It cannot survive contact with capitalist forces.

    • CinnasVerses@awful.systems · 13 days ago

      AFAIK the USA is the only country where programmers make very high wages compared to other college-educated people in a profession anyone can enter. It’s a myth that so-called STEM majors earn much more than others, although people with a professional degree often launch their careers quicker than people without (but if you really want to launch your career quickly, learn a trade or work in an extractive industry somewhere remote). So I think for a long time programmers in the USA made peace with FAANG because they got a share of the booty.

      • k4rlos@awful.systems · 8 days ago

        Not the only one. The former USSR and Eastern Europe as well, and it’s way worse there. Typically, a SWE would earn several TIMES more than your average college-educated person. This leads to programmers being obnoxious libertarian nazi fucktards.

    • Soyweiser@awful.systems · 13 days ago

      Hackers is dead. (Apologies to punk)

      I’d say that for one reason alone: when Musk claimed grok was from the guide, nobody really turned on him.

      Unrelated to programmers or hackers: Elon’s father (CW: racism) went fully mask-off and claims Elon agrees with him. Which, considering his promotion of the UK racists, does not feel off the mark. (And he is spreading the dumb ‘[Africans] have an [average] IQ of 63’ shit, and claims it is all genetic. Sure man, the average African needs help understanding the business end of a hammer. As I said before, guess I met the smartest Africans in the world then, as my university had a few smart exchange students from an African country. If you look at his statements it is even dumber than normal, as he says population, so that means either non-Black Africans are not included, showing just how much he thinks of himself as the other, or they are, and the Black African average is even lower.)

  • blakestacey@awful.systems · 12 days ago

    Regarding occasional sneer target Lawrence Krauss and his co-conspirators:

    Months of waiting but my review copy of The War on Science has arrived.

    I read Krauss’ introduction. What the fuck happened to this man? He comes off as incapable of basic research, argument, basic scholarship. […] Um… I think I found the bibliography: it’s a pdf on Krauss’ website? And all the essays use different citation formats?

    Most of the essays don’t include any citations in the text but some have accompanying bibliographies?

    I think I’m going insane here.

    What the fuck?

    https://bsky.app/profile/nateo.bsky.social/post/3lyuzaaj76s2o

    • nightsky@awful.systems · 12 days ago

      Huh, I wonder who this Krauss guy is, haven’t heard of him.

      *open wikipedia*

      *entire subsection titled “Allegations of sexual misconduct”*

      *close wikipedia*

    • V0ldek@awful.systems · 12 days ago

      All of those people, Krauss, Dawkins, Harris (okay that one might’ve been unsalvageable from the start, I’m really not sure) are such a great reminder that you can be however smart/educated you want, the moment you believe you’re the smartest boi and stop learning and critically approaching your own output you get sucked into the black hole of your asshole, never to return.

      Like if I had a nickel. It’s hubris every time. All of those people need just a single good friend that, from time to time, would tell them “man, what you said was really fucking stupid just now” and they’d be saved.

      Clout is a proxy of power and power just absolutely rots your fucking brain. Every time a Guy emerges, becomes popular, clearly thinks “haha, but I am different, power will not rot MY brain”, five years later boom, he’s drinking with Jordan Benzo Peterson. Even Joe Fucking Rogan used to be significantly more lucid before someone gave him ten bazillion dollars for a podcast and he suffered severe clout poisoning.

    • V0ldek@awful.systems · 5 days ago

      Sorry but who the fuck is that? Not one of our common guests here, I need a primer on her

    • CinnasVerses@awful.systems · 10 days ago

      The commentator who thinks that USD 120k / year is a poor income for someone with a PhD makes me sad. That is what you earn if you become a professor of physics at a research university or get a good postdoc, but she aged out of all of those jobs and was stuck on poorly paid short-term contracts. There are lots of well-paid things that someone with a PhD in physics can do if she is willing to network and work for it, but she chose “rogue intellectual.”

      A German term to look up is WissZeitVG but many academic jobs in many countries are only offered to people no more than x years after receiving their PhD (yep, this discriminates against women and the disabled and those with sick spouses or parents).

    • froztbyte@awful.systems · 8 days ago

      thanks for linking this, was fun to watch

      hadn’t seen that saltman clip (been real busy running around pretty afk the last few weeks), but it’s a work of art. despite grokking the dynamics, it continues to be astounding just how vast the gulf between fact and market vibes is

      and as usual, Collier does a fantastic job ripping the whole idea a new one in a most comprehensive manner

  • David Gerard@awful.systems (mod) · 7 days ago

    the talking point about disparaging terms for AI users by choice (“I came up with a racist-sounding term for AI users, so if you say ‘clanker’ you must be a racist”) is so fucking stupid it’s gotta be some sort of op

    (esp when the made-up racist-sounding term turns out to have originated with Warren fucking Ellis)

    i am extremely disappointed that awful systems users have fallen for it for a moment

    • scruiser@awful.systems · 7 days ago

      Side note: The way I’ve seen clanker used has been for the AIs themselves, not their users. I’ve mostly seen the term in the context of star wars memers eager to put their anti-droid memes and jokes to IRL usage.

      • ShakingMyHead@awful.systems · 7 days ago

        Same here: I’ve never actually seen the term “clanker” used in reference to a person using the AI, only the AI itself. Which to me is analogous to going to an expensive bakery and accusing the bread of ripping you off instead of the baker (or whoever was setting prices, which wouldn’t be the bread).

        If there was any sort of op going on (which I don’t think there is), I’d guess it would be from the AI doomers who want people to think of these things as things with enough self-awareness that something like “clanker” would actually insult them (but, again, probably not, IMO).

    • Soyweiser@awful.systems · 7 days ago

      Slightly related to the ‘it is an op’ thing, did you look at the history of the wikipedia page for clanker? There were 3 edits to the page before 1 June 2025.

    • FredFig@awful.systems · 7 days ago

      The truth is that we feel shame to a much greater degree than the other side, which makes it pretty easy to divide us on these annoying trivialities.

      My personal hatred of tone policing is greater than my sense of shame, but I imagine that isn’t something to expect from most.

    • V0ldek@awful.systems · 12 days ago

      TIL Hank Green, the milquetoast BlueSky poster, also has some YouTube channel. How quaint.

      I think every time I learn That Guy From BlueSky also has some other gig different from posting silly memes I lose some respect for them.

      E.g. I thought Mark Cuban was just a dumb libertarian shitposter, but then it turned out he has a cuntillion dollars and also participated in a show unironically called “Shark Tank” that I still don’t 100% believe was a real thing because by god

      • bitofhope@awful.systems · 11 days ago

        I figured he’d be a lot better known for his YouTube career than for his bsky posting. I see his stuff all the time in my recommendations, though his style isn’t my cup of tea so I seldom watch any of them.

        • V0ldek@awful.systems · 11 days ago

          I haven’t seen the YouTube recommendation page in so long I wouldn’t know. Invidious my beloved <3

    • V0ldek@awful.systems · 12 days ago

      What’s up with all the websites that tell me “you’ve reached the limit of free articles for the month” even though I’ve literally never entered that site before in my life. Stop gaslighting me you cunts.

      Anyway, here’s the archive

  • PMMeYourJerkyRecipes@awful.systems · 11 days ago

    Getting pretty far afield here, but goddamn Matt Yglesias’s new magazine sucks:

    The case for affirmative action for conservatives

    “If we cave in and give the right exactly what they want on this issue, they’ll finally be nice to us! Sure, you might think based on the last 50,000 times we’ve tried this strategy that they’ll just move the goalposts and demand further concessions, but then they’ll totally look like hypocrites and we’ll win the moral victory, which is what actually matters!”

  • rook@awful.systems · 13 days ago

    Woke up to some hashtag spam this morning

    AI’s Biggest Security Threat May Be Quantum Decryption

    which appears to be one of those evolutionary “transitional forms” between grifts.

    The sad thing is the underlying point is almost sound (hoarding data puts you at risk of data breaches, and leaking sensitive data might be Very Bad Indeed) but it is wrapped up in so much overhyped nonsense it is barely visible. Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

    (it also appears to be a month-old story, but I guess there’s no reason for mastodon hashtag spammers to be current 🫤)

    • nightsky@awful.systems · 12 days ago

      Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

      Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

      • YourNetworkIsHaunted@awful.systems · 12 days ago

        I would actually contend that crypto and the metaverse both qualify as early precursors to the modern AI post-economic bubble. In both cases you had a (heavily politicized) story about technology attract investment money well in excess of anyone actually wanting the product. But crypto ran into a problem where the available products were fundamentally well-understood forms of financial fraud, significantly increasing the risk because of the inherent instability of that (even absent regulatory pressure the bezzle eventually runs out and everyone realizes that all the money in those ‘returns’ never existed). And the VR technology was embarrassingly unable to match the story that the pushers were trying to tell, to the point where the next question, whether anyone actually wanted this, never came up.

        GenAI is somewhat unique in that the LLMs can do something impressive in mimicking the form of actual language or photography or whatever it was trained on. And on top of that, you can get impressively close to doing a lot of useful things with that, but not close enough to trust it. That’s the part that limits genAI to being a neat party trick, generating bulk spam text that nobody was going to read anyways, and little more. The economics don’t work out when you need to hire someone skilled enough to do the work to take just as much time double-checking the untrustworthy robot output, and once new investment capital stops subsidizing their operating costs I expect this to become obvious, though with a lot of human suffering in the debris. The challenge of “is this useful enough to justify paying its costs” is the actual stumbling block here. Older bubbles were either blatantly absurd (tulips, crypto) or overinvestment as people tried to get their slice of a pie that anyone with eyes could see was going to be huge (railroad, dotcom). The combination of purely synthetic demand with an actual product is something I can’t think of other examples of, at this scale.

      • corbin@awful.systems · 11 days ago

        There are many such terms! Just look at the list of articles under “See Also” for “The Emperor’s New Clothes”. My favorite term, not listed there, is “coyote time”: “A brief delay between an action and the consequences of that action that has no physical cause and exists only for comedic or gameplay purposes.” Closely related is the fact that industries don’t collapse when the public opinion shifts, but have a stickiness to them; the guy who documented that stickiness is often quoted as saying, “Market[s] can remain irrational a lot longer than you [and I] can remain solvent.”

        • YourNetworkIsHaunted@awful.systems · 11 days ago

          I don’t know if it quite applies here since all the money is openly flowing to nVidia in exchange for very real silicon, but I’m partial to “the bezzle” - referring to the duration of time between a con artist taking your money and you realizing the money is gone. Some cons will stretch the bezzle out as long as possible by lying and faking returns to try and get the victims to give them even more money, but despite how excited the victims may be about this period the money is in fact already gone.

      • BlueMonday1984@awful.systems (OP) · 12 days ago

        Is there already a word for “an industry which has removed itself from reality and will collapse when the public’s suspension of disbelief fades away”?

        If there is, I haven’t heard of it. To try and preemptively coin one, “artificial industry” (“AI” for short) would be pretty fitting - far as I can tell, no industry has unmoored itself from reality like this until the tech industry pulled it off via the AI bubble.

        Calling this just “a bubble” doesn’t cut it anymore, they’re just peddling sci-fi ideas now. (Metaverse was a bubble, and it was stupid as hell, but at least those headsets and the legless avatars existed.)

        I genuinely forgot the metaverse existed until I read this.

    • froztbyte@awful.systems · 12 days ago

      linkedin thotleedir posts directly into your mailbox? gonna have to pour one out for you

      AI’s Biggest Security Threat May Be Quantum Decryption

      an absolutely wild grab-bag of words. the more you know about each piece, the more surreal the sentence becomes. unintentional art!

    • BlueMonday1984@awful.systems (OP) · 12 days ago

      Naturally, the best and most obvious fix — don’t hoard all that shit in the first place — wasn’t suggested.

      At this point, I’m gonna chalk the refusal to stop hoarding up to ideology more than anything else. The tech industry clearly sees data not as information to be taken sparingly, used carefully, and deleted when necessary, but as Objective Reality Units™ which are theirs to steal and theirs alone.

      • swlabr@awful.systems · 8 days ago

        All is forgiven. Hot Ones is an internet interview show. Its core premise is that the host and interviewee conduct their interview while eating increasingly spicier chicken wings. As with any interview platform, it’s a common stop for public figures to hit up, especially on PR tours.

        The show has a reputation for researching its guests well and asking insightful/deep questions. There’s also an element to it where, for some guests, as they experience spicier wings, they are unable to keep up whatever facade or persona they usually keep up in interviews.

        I wasn’t making any profound commentary; I want to see Yud in pain while trying to explain alignment.

      • istewart@awful.systems · 8 days ago

        YouTube interview show where the interviewee is fed hot-sauce coated chicken wings of escalating spiciness

        As an aside, my personal tolerance is such that if I ever go on there, I’m going to end up bankrupting the fuckers

  • corbin@awful.systems · 10 days ago

    Some of our younger readers might not be fully inoculated against high-control language. Fortunately, cult analyst Amanda Montell is on Crash Course this week with a 45-minute lecture introducing the dynamics of cult linguistics. For example, describing Synanon attack therapy, YouTube comments, doomscrolling, and maybe a familiar watering hole or two:

    You know when people can’t stop posting negative or conspiratorial comments, thinking they’re calling someone out for some moral infraction, when really they’re just aiming for clout and maybe catharsis?