• 1 Post
  • 11 Comments
Joined 1 year ago
Cake day: June 20th, 2023

  • True, GPT does not return a “yes” or “no” 100% of the time in either case, but that’s not the point. The point is that their test set makes it impossible to say whether GPT has actually gotten better or worse at identifying prime numbers. Since the test set is composed entirely of primes, we do not know whether GPT is more likely to call a number “prime” when it actually is prime than when it isn’t. All we know is that it was very likely to answer “yes” to the question “is this number prime?” in March, and very likely to answer “no” in July. We do not know whether the number itself makes any difference.
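    A toy sketch makes this concrete (the models and numbers below are hypothetical stand-ins, not GPT or the study’s actual data): two models that look identical on an all-prime test set can behave completely differently once composites are included.

    ```python
    def is_prime(n):
        """Trial-division primality check, fine for small n."""
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    # Two hypothetical models (illustrative stand-ins, not GPT itself):
    credulous = lambda n: "yes"                        # calls everything prime
    honest = lambda n: "yes" if is_prime(n) else "no"  # actually checks

    primes_only = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # all-prime, study-style set
    with_composites = primes_only + [4, 6, 8, 9, 10, 12, 14, 15, 16, 18]

    def yes_rate(model, numbers):
        """Fraction of numbers the model calls prime."""
        return sum(model(n) == "yes" for n in numbers) / len(numbers)

    # On primes alone the two models are indistinguishable:
    print(yes_rate(credulous, primes_only), yes_rate(honest, primes_only))  # 1.0 1.0
    # Only composites separate a real classifier from a yes-machine:
    print(yes_rate(credulous, with_composites), yes_rate(honest, with_composites))  # 1.0 0.5
    ```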



  • Damn, you’re right. The study has not been peer reviewed yet according to the article, and in my opinion, it really shows. For anyone who doesn’t want to actually read the study:

    They took the set of questions from a different study (which is fine). The original study took 500 randomly chosen prime numbers, asked ChatGPT whether each was prime, and asked it to explain its reasoning. The goal was to see whether, in cases where ChatGPT got the question wrong, it would defend its wrong answer with more faulty reasoning - and a dataset of only prime numbers is perfectly fine for that initial question.

    The study in the article appears to be trying to answer two questions: is there significant drift in the answers ChatGPT gives, and is ChatGPT getting better or worse at answering questions? The dataset is perfectly fine for answering the first question, but completely inadequate for answering the second, since an AI that simply thinks all numbers are prime would be judged as having perfect accuracy! Good peer review would never let that kind of thing slide.
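    To illustrate (again with hypothetical stand-in models, not the study’s actual setup): a degenerate “always yes” responder is graded as perfect on an all-prime test set, while the same trivial strategy collapses on a set that mixes in composites.

    ```python
    def is_prime(n):
        """Trial-division primality check, fine for small n."""
        return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

    def accuracy(model, numbers):
        """Fraction of answers matching ground truth on a yes/no primality quiz."""
        truth = lambda n: "yes" if is_prime(n) else "no"
        return sum(model(n) == truth(n) for n in numbers) / len(numbers)

    always_yes = lambda n: "yes"  # "everything is prime" -- zero real skill
    always_no = lambda n: "no"    # "nothing is prime"   -- also zero real skill

    primes_only = [n for n in range(2, 200) if is_prime(n)]  # all-prime, like the study's set
    mixed = list(range(2, 200))                              # primes and composites

    print(accuracy(always_yes, primes_only))  # 1.0  -- graded as perfect
    print(accuracy(always_no, primes_only))   # 0.0  -- graded as total failure
    print(accuracy(always_yes, mixed))        # ~0.23 -- a mixed set exposes the trick
    ```

    On this scoring, the reported March-to-July “accuracy drop” is equally consistent with the model simply shifting from one trivial strategy to the other.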