Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up, and if I can’t escape them, I would love to sneer at them.

(December’s finally arrived, and the run-up to Christmas has begun. Credit and/or blame to David Gerard for starting this.)

  • Soyweiser@awful.systems · 3 days ago

    New preprint just dropped, and I noticed some seemingly pro-AI people talking about it and concluding that people who have more success with genAI have better empathy, are more social, and have theory of mind. (I won’t put those random people on blast; I also have not read the paper itself (aka, I didn’t do the minimum actually required research, so be warned), just wanted to give people a heads-up on it.)

    But yes, that does describe the AI pushers: social people who have good empathy and theory of mind. (Also, wow, genAI runs on fairy rules, you just gotta believe it is real. (I’m joking a bit here, it is prob fine; it helps if you understand where a model is coming from and realize its limitations, and the research seems to be talking about humans + genAI vs just genAI.))

    • ShakingMyHead@awful.systems · 2 days ago

      So, I’m not an expert study-reader or anything, but it looks like they took some questions from the MMLU, modified them in some unspecified way, and sorted them into 3 categories (AI, human, AI-human), and after accounting for skill, determined that people with higher theory of mind had a slightly better outcome than people with lower theory of mind. They determined this based on what the people being tested wrote to the AI, but what they wrote isn’t in the study.
      What they didn’t do is state that people with higher theory of mind are more likely to use AI or anything like that. The study also doesn’t mention empathy at all, though I guess it could be inferred.

      Not that any of that actually matters because how they determined how much “theory of mind” each person had was to ask Gemini 2.5 and GPT-4o.