• Noxy@pawb.social · 2 months ago

      how does one even know and verify what an LLM’s “sources” are? wouldn’t it just vomit out whatever response and then find “sources” that happen to match its stupid output after the fact?
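      A toy sketch of the "cite after the fact" failure mode described here, in plain Python. Everything below is hypothetical (the corpus, the answer, the naive word-overlap ranking; no real product is being modeled): the point is just that the answer exists first, and "sources" are then picked purely by how well they resemble it, so they never constrained the output.

      ```python
      # Hypothetical post-hoc attribution: the answer is written first,
      # then documents are ranked by crude similarity to that answer.
      def post_hoc_sources(answer: str, corpus: list[str], k: int = 2) -> list[str]:
          """Rank documents by naive word overlap with an already-written answer."""
          answer_words = set(answer.lower().split())

          def overlap(doc: str) -> int:
              return len(answer_words & set(doc.lower().split()))

          return sorted(corpus, key=overlap, reverse=True)[:k]

      # Made-up corpus for illustration only.
      corpus = [
          "The mayor announced a new transit plan on Tuesday.",
          "Local bakery wins regional bread competition.",
          "Transit ridership fell sharply last quarter.",
      ]

      # The "LLM output" exists before any source is consulted.
      answer = "The mayor's transit plan will boost ridership."

      # The selected "sources" merely resemble the output; they look
      # supportive whether or not the claim is actually true.
      print(post_hoc_sources(answer, corpus))
      ```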

      • Pennomi@lemmy.world · 2 months ago

        Precisely my point. But if it is correct and can link to an authoritative source (e.g., a news article), that is relatively easy to verify.

        How much you can trust a news article is still up for debate.

        • Noxy@pawb.social · 2 months ago

          so then all the value it brings is exactly what search engines have already been doing for decades.

    • Bone@lemmy.world · 2 months ago

      Sure, but where are the sources saying that what OP stated is true? It seems harder to prove a rumor.

      • Pennomi@lemmy.world · 2 months ago

        Agreed, I’m more worried about people blindly trusting AI than I am about this particular situation.