• sturger@sh.itjust.works · 17 days ago
    • They string words together based on the probability of one word following another (a toy sketch of this follows below).
    • They are heavily promoted by people who don’t know what they’re doing.
    • They’re wrong 70% of the time but promote everything they say as truth.
    • Average people have a hard time telling when they’re wrong.

    In other words, AIs are automated BS artists… being promoted breathlessly by BS artists.
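
A toy sketch of the “next word by probability” idea from the first bullet above: a bigram Markov sampler in Python. This is not how any real LLM is implemented (those use neural networks over subword tokens rather than raw word counts); the corpus string and function names here are invented purely for illustration.

```python
# Toy bigram sampler: pick each next word in proportion to how often it
# followed the current word in the training text. Illustrative only; real
# LLMs are neural networks over subword tokens, not count tables.
import random
from collections import defaultdict, Counter

def build_bigram_model(text):
    """Count, for each word, how often each candidate next word follows it."""
    words = text.split()
    follow_counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follow_counts[current][nxt] += 1
    return follow_counts

def generate(model, start_word, length=10):
    """String words together by sampling each next word by its follow-count."""
    word = start_word
    output = [word]
    for _ in range(length):
        followers = model.get(word)
        if not followers:
            break
        candidates, counts = zip(*followers.items())
        word = random.choices(candidates, weights=counts, k=1)[0]
        output.append(word)
    return " ".join(output)

# Hypothetical miniature "corpus" just to make the example runnable.
corpus = "the model predicts the next word and the next word follows the last word"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```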

    • Honytawk@feddit.nl · 15 days ago

      LLMs have their flaws, but to claim they are wrong 70% of the time is just hate train bullshit.

      Sounds like you’re basing this on models like GPT-3. Have you tried any of the newer models?

      • ebu@awful.systems · 15 days ago

        ah, yes, i’m certain the reason the slop generator is generating slop is because we haven’t gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i’m certain this model, unlike literally every past model in the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm.