I’m not sure how to distill the point I’m trying to make down any further. The basics of what you’re saying are 100% accurate, yes, but look back at the two specific examples I gave. Are you asserting that an LLM inherently can’t process the second example, because it would all have to be done in one step, but at the same time it can process the first (in one step)? Can’t you see that the two examples are identical in the sense that the LLM needs to identify which part of the input sequence applies to its current position in the output sequence?
Edit: Actually, second counterpoint: How, if you’re saying that this is just an inherent limitation of LLMs, can GPT-4 do it?
The difference is that repeating a quote does not need new information; it’s all already in the text prompt. The current direction, on the other hand, is not in the text; it has to be derived from the instructions. If you ask GPT to break the problem down into steps, you shrink the size of the problem dramatically. One or two turns it can handle in one step; it’s only when you increase the number of turns that it gets it wrong and can’t answer it in one step.
It’s really not much different from humans here. If I read all those turn instructions, I have no idea where things will end up either. I have to break the problem down and keep track of the direction at each step.
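To make the "track the direction at each step" idea concrete, here is a minimal sketch, assuming the problem is a sequence of left/right turn instructions (a hypothetical example; the names and setup are my own, not from the thread):

```python
# Explicitly maintaining the "hidden" state (the current facing direction)
# after each turn, rather than deriving the final answer in one step.
DIRECTIONS = ["north", "east", "south", "west"]

def final_direction(start, turns):
    """Apply each turn one at a time, updating the current direction."""
    idx = DIRECTIONS.index(start)
    for turn in turns:
        # A right turn moves clockwise through the list, a left turn
        # counterclockwise; modulo keeps the index in range.
        idx = (idx + 1) % 4 if turn == "right" else (idx - 1) % 4
    return DIRECTIONS[idx]

print(final_direction("north", ["right", "right", "left"]))  # east
```

Asking the model to "break the problem down" amounts to having it write out each intermediate direction the way this loop does, instead of computing the whole composition of turns at once.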
How, if you’re saying that this is just an inherent limitation of LLMs, can GPT-4 do it?
GPT-4 is just bigger, meaning it can handle larger problems in one step. It should still fail on the same simple problem if you make it long enough.
Hm… yeah, I see what you’re saying. It’s not capable of maintaining “hidden” state as it goes step by step through the output, but if you have it talk its way through the hidden part of the state, it can do it. I can agree with that.