• 0 Posts
  • 6 Comments
Joined 1 year ago
Cake day: July 3rd, 2023



  • I know this shouldn’t be surprising, but I still cannot believe people really bounce questions off LLMs like they’re talking to a real person. https://ai.stackexchange.com/questions/47183/are-llms-unlikely-to-be-useful-to-generate-any-scientific-discovery

    I have just read this paper: Ziwei Xu, Sanjay Jain, Mohan Kankanhalli, “Hallucination is Inevitable: An Innate Limitation of Large Language Models”, submitted on 22 Jan 2024.

    It argues that there is an ideal ground-truth function giving the true output/fact for every possible input/question, and that no matter how you train your model, there is always room for misapproximations arising from missing data; the more complex the data, the larger the space in which the model can hallucinate.

    Then the asker immediately follows up with:

    Then I started to discuss with o1. [ . . . ] It says yes.

    Then I asked o1 [ . . . ], to which o1 says yes [ . . . ]. Then it says [ . . . ].

    Then I asked o1 [ . . . ], to which it says yes too.

  • I’m not a teacher, but I feel like my brain would explode if a student asked me to answer a question they arrived at after an LLM had misled them on something like 10 of their previous questions.




  • I’ve seen people say that /uj is essential to keeping communities healthy. If you only allow ‘reasonable discussion’, you let in all kinds of awful people as long as they’re not too obvious, while regular people get reprimanded for responding to them. But if you only allow shitposting and no genuine discussion, it’s going to become genuine whether you want it to or not (see: Gamers Rise Up or similar).

    On here, you can see people write earnestly on a bunch of different topics, but you can also see them just tell a promptfan “you can’t get it up unless the fingers are wrong, can you” and ban them. It’s great