ThisIsFine.gif

  • anachronist@midwest.social
    2 days ago

    Every time there’s an AI hype cycle the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?

    I was editing my comment down to the core argument when you responded. But fundamentally you can’t make a machine think without understanding thought. I believe it is easy to show that Watson or ChatGPT are not thinking, because you can prove it through counterexample; the reality, though, is that charlatans can always “but actually” those counterexamples aside by saying “it’s a different kind of thought.”

    What we do know, because this is at least the 6th time this has happened, is that the wow factor of the demo will wear off, most promised use cases won’t materialize, everyone will realize it’s still just an expensive stochastic parrot and, well, see you again for the next hype cycle a decade from now.

    • lukewarm_ozone@lemmy.today
      2 days ago

      Every time there’s an AI hype cycle the charlatans start accusing the naysayers of moving goalposts. Heck, that exact same thing was happening constantly during the Watson hype. Remember that? Or before that, the AlphaGo hype. Remember that?

      Not really. As far as I can see, the goalpost-moving is just objectively happening.

      But fundamentally you can’t make a machine think without understanding thought.

      If “think” means anything coherent at all, then this is a factual claim. So what do you mean by it, then? Specifically: what event would have to happen for you to decide “oh shit, I was wrong, they sure did make a machine that could think”?