• marcos@lemmy.world · 17 days ago

    We don’t. We just keep doing things, and good things keep happening afterwards.

    We don’t even know if those two facts are linked in any way.

    • degen@midwest.social · 17 days ago

      Nearly irrelevant xkcd

      At least in software we know where the linchpins are on some level.

    • Azuth@lemmy.today · 16 days ago

      Descartes said it best. The only thing I can know for sure is that I do, in fact, exist.

  • taiyang@lemmy.world · 17 days ago

    Frequentist statistics are really… silly in a way. And this is coming from someone who has to teach it. Sure, p is less than 5%, but you sampled 100,000 people; at that sample size, even an effect size of 0.05 comes out significant. “bUt ItS sIgNiFiCaNt”… Oy.
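
    To make that concrete, here is a minimal sketch (my own illustration, not something from the thread) of how a tiny effect clears p < 0.05 once the sample is huge. The two groups of 50,000 and the standardized effect of 0.05 are assumptions chosen to match the numbers above.

    ```python
    # Sketch: with ~100,000 people, a tiny effect easily reaches p < 0.05.
    # Assumes two groups of 50,000 and a standardized mean difference (Cohen's d) of 0.05.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_per_group = 50_000          # 100,000 people total
    effect_size = 0.05            # tiny standardized mean difference

    control = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    treated = rng.normal(loc=effect_size, scale=1.0, size=n_per_group)

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.2e}")   # p lands far below 0.05
    print(f"observed mean difference = {treated.mean() - control.mean():.3f}")
    ```

    The actual difference between the groups is a twentieth of a standard deviation, which is the point: the p-value alone says almost nothing about whether the effect matters.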

    • Contramuffin@lemmy.world · 17 days ago

      I get very suspicious if a paper compares multiple groups and still uses p. You would use q (the FDR-adjusted value) in that case, and the fact that they didn’t suggests that nothing came up positive.

      Still, in my opinion it’s generally OK if they only use the screen as a starting point and do follow-up experiments afterwards.
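
      For anyone unfamiliar with the p/q distinction: q-values are p-values adjusted for multiple testing to control the false discovery rate. A minimal sketch using the Benjamini-Hochberg correction from statsmodels (my choice of tool, not something from the comment); the p-values are made up purely for illustration.

      ```python
      # Sketch: raw p-values vs. FDR-adjusted q-values (Benjamini-Hochberg).
      import numpy as np
      from statsmodels.stats.multitest import multipletests

      raw_p = np.array([0.001, 0.009, 0.02, 0.04, 0.045, 0.30, 0.60, 0.85])

      reject, q_values, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")

      for p, q, r in zip(raw_p, q_values, reject):
          print(f"p = {p:.3f}  ->  q = {q:.3f}  significant after FDR: {r}")
      ```

      Five of the raw p-values sit below 0.05, but only the two smallest survive the adjustment, which is exactly why reporting raw p across many comparisons looks suspicious.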

      • taiyang@lemmy.world · 17 days ago

        Yeah, I used to work in a field with huge samples, so significance wasn’t really all that useful. I usually just report the significant coefficients and try to make clear what changes in each model. For instance, if a type of curriculum showed improvements on test scores, you simply say how much and, possibly, illustrate it by saying a student would go from the 50th percentile to the 55th percentile.
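
        That percentile framing is easy to compute. A minimal sketch, assuming test scores are roughly normal and the effect is expressed in standard deviations; the 0.126 is a made-up value that happens to produce the 50th-to-55th jump mentioned above.

        ```python
        # Sketch: translate a standardized effect on test scores into a
        # percentile shift for a median student. The effect size is hypothetical.
        from scipy.stats import norm

        effect_in_sd = 0.126                      # assumed effect, in standard deviations
        new_percentile = norm.cdf(effect_in_sd)   # where the 50th-percentile student lands

        print(f"50th percentile -> {100 * new_percentile:.0f}th percentile")
        # prints roughly "50th percentile -> 55th percentile"
        ```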

        Every field varies, though. I find it crazy how much the psychologists I’ve worked with cared about r-squared. To each their own, I guess.