• Honytawk@feddit.nl
    14 days ago

    LLMs have their flaws, but to claim they are wrong 70% of the time is just hate train bullshit.

    Sounds like you’re basing this on models like GPT-3. Have you tried any newer models?

    • ebu@awful.systems
      13 days ago

      ah, yes, i’m certain the reason the slop generator is generating slop is because we haven’t gone to eggplant emoji dot indian ocean and downloaded Mistral-Deepseek-MMAcevedo_13.5B_Refined_final2_(copy). i’m certain this model, unlike literally every past model in the past several years, will definitely overcome the basic and obvious structural flaws in trying to build a knowledge engine on top of a stochastic text prediction algorithm