• lugal@sopuli.xyz · 1 month ago

        Not sure what would frighten me more: the fact that this is training data, or that it was hallucinated.

        • EpeeGnome@lemm.ee · 1 month ago (edited)

          Neither; in this case it’s an accurate summary of one of the results, which happens to be a shitpost on Quora. See, LLM search results can work as intended and authoritatively repeat search results with zero critical analysis!

      • xavier666@lemm.ee · 1 month ago

        Pretty sure AI will start telling us “You should not believe everything you see on the internet as told by Abraham Lincoln”

    • kateA · 1 month ago

      Can’t even really blame the AI at that point.

      • TheFriar@lemm.ee · 1 month ago

        Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, it seems like the blame falls exactly at the feet of the AI.

        • kateA · 1 month ago

          Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that.