- cross-posted to:
- technology@lemmy.ml
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.
Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.
The trial involved evaluating several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on mentions of ASIC, recommendations and references to more regulation, and to include page references and context.
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then, a group of reviewers blindly assessed the summaries produced by both humans and AI for coherency, length, ASIC references, regulation references and for identifying recommendations. They were unaware that this exercise involved AI at all.
These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
The problem is that even the specific things they’re good at, they don’t do well enough to justify spending actual money on. And when I say “actual money”, I’m not talking about the hilariously discounted prices AI companies are offering in an effort to capture an audience.
A bot that can do a job reasonably well but still needs a human to check its work is, from an employment perspective, still an employee, just one now paired with some very expensive helper software. And because of the inherent unreliability of LLMs, a problem that many top figures in the industry are finally admitting may never be solved, they will always need a human to check their work. And that human has to be competent enough to do the job without the AI in order to figure out where and how it went wrong.
GenAI was supposed to put us all out of work, and maybe one day it will, but the current state of the technology isn’t remotely close to being good enough to do that. It turns out that while bots can very effectively look and sound like humans, they’re not remotely capable of thinking like humans, and that actually matters when your chatbot starts promising customers discounts that don’t actually exist, to name one real example. What was treated as being the last ten percent is actually looking more and more like ninety-nine percent of the work in terms of creating something that can effectively replace a human being.
(As an aside, I can’t help but feel that a big part of this epic faceplant arises from Silicon Valley fully ingesting the bullshit notion of “unskilled labour”. Turns out working the drive-thru at McDonald’s is a more complicated job than people think, including McDonald’s itself. We’ve so undervalued the skills of vast swathes of our population that we were easily deluded into thinking they could all be replaced by simple machines. While some of those tasks certainly can, and will, be automated, there are some human elements - especially in conflict resolution - that are really hard to replace.)