Tixanou@lemmy.world to Lemmy Shitpost@lemmy.world · 1 year ago: "AI is the future" (image post, 70 comments)
kateA · 1 year ago: Can’t even rly blame the AI at that point
TheFriar@lemm.ee · 1 year ago: Sure we can. If it gives you bad information because it can’t differentiate between a joke and good information… well, seems like the blame falls exactly at the feet of the AI.
kateA · 1 year ago: Should an LLM try to distinguish satire? Half of Lemmy users can’t even do that
KevonLooney@lemm.ee · 1 year ago: Do you just take what people say on here as fact? That’s the problem: people are taking LLM results as fact.
BakerBagel@midwest.social · 1 year ago: It should if you are gonna feed it satire to learn from
xavier666@lemm.ee · 1 year ago: Sarcasm detection is a very hard problem in NLP, to be fair
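(To illustrate the point above: a toy, purely hypothetical sketch of why surface cues fail. A naive keyword-based sentiment check treats positive words as sincere praise, so sarcastic text slips right through. This is not a real sarcasm detector or any particular library's approach.)

```python
# Toy example: naive lexical sentiment classification.
# Sarcasm uses positive words with negative intent, so keyword
# matching alone cannot tell sincere praise from sarcasm.

POSITIVE_WORDS = {"great", "love", "wonderful", "genius"}

def naive_sentiment(text: str) -> str:
    """Label text 'positive' if it contains any positive keyword."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "positive" if words & POSITIVE_WORDS else "neutral"

sincere = "I love this library, the docs are great."
sarcastic = "Oh great, another outage. I just love debugging at 3am."

print(naive_sentiment(sincere))    # positive (correct)
print(naive_sentiment(sarcastic))  # positive (wrong: it's sarcasm)
```

Real systems need context (the speaker, the situation, world knowledge) to catch the mismatch between literal sentiment and intent, which is exactly what makes the problem hard.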
ancap shark@lemmy.today · 1 year ago: If it’s being used to give the definitive answer to a search, then it should. If it can’t, then it shouldn’t be used for that