I think the problem is misguided attention. The phrase “glass of wine” and all the previous context are so strong that they “blow out” the “full glass of wine” as the actual intent. Also, LLMs are still pretty crap at multi-turn multimodal understanding; they are especially prone to repeating the previous conversation.
It should work better if you word it as something like “an overflowing glass with wine splashing out.” And clear the history first.
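If you want to test that idea, here is a minimal sketch of a prompt A/B harness. It assumes nothing about a specific SDK: generate_image() is a hypothetical hook you would wire to whatever image-generation API you actually use, and each call is a fresh, single-turn request, so no chat history can leak in.

```python
# Minimal prompt A/B harness: each call below is a fresh, single-turn request,
# so no prior conversation can influence the result.
# generate_image() is a hypothetical stub, not a real SDK call.

PROMPTS = [
    "a full glass of wine",                                    # the wording that keeps failing
    "a wine glass filled to the very brim",
    "an overflowing glass with wine splashing out over the rim",
]

def generate_image(prompt: str) -> bytes:
    """Stand-in for a real image-generation call (Gemini, DALL-E, SDXL, ...)."""
    raise NotImplementedError("plug your image API in here")

for i, prompt in enumerate(PROMPTS):
    image_bytes = generate_image(prompt)       # stateless: no history attached
    filename = f"wine_{i}.png"
    with open(filename, "wb") as f:
        f.write(image_bytes)
    print(f"saved {filename} for prompt: {prompt!r}")
```

The point is just to isolate wording from history: if the fuller-glass phrasings still fail on clean single-shot calls, the problem is not the context window.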
I hate to ramble, but this is what I hate most about the way big corpos present “AI.” They are narrow tools the user needs to learn how to operate, like Photoshop or something, not the magic genie lamps they are trying to sell.
There’s no previous context to speak of; each screenshot shows a self-contained “conversation”, with no earlier input or output. And there’s no history to clear, since Gemini Apps Activity is not even turned on.
And even with your suggested prompt, one of the issues is still there:
The other issue is not being tested in this screenshot, as it’s language-specific, but it is relevant here because it reinforces that the issue is in the training, not in the context window.
It was just a guess. The AI is still shitty, lol.

What I am trying to get at is the misconception: AI can generate novel content that is not in its training dataset. An astronaut riding a horse is the classic test case; it did not exist anywhere before diffusion models, so the model should also be able to extrapolate a fuller wine glass. It’s just too dumb to do it, lol.
This is a misconception. Sort of.