- cross-posted to:
- technology@lemmy.ml
Artificial intelligence is worse than humans in every way at summarising documents and might actually create additional work for people, a government trial of the technology has found.
Amazon conducted the test earlier this year for Australia’s corporate regulator, the Australian Securities and Investments Commission (ASIC), using submissions made to an inquiry. The outcome of the trial was revealed in an answer to a question on notice at the Senate select committee on adopting artificial intelligence.
The trial compared several generative AI models before selecting one to ingest five submissions from a parliamentary inquiry into audit and consultancy firms. The most promising model, Meta’s open-source Llama2-70B, was prompted to summarise the submissions with a focus on ASIC mentions, recommendations and references to more regulation, and to include page references and context.
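For context, here is a minimal sketch of what a prompt along those lines might look like when run against Llama2-70B via Hugging Face transformers. The trial’s actual prompt wording, model variant and decoding settings were not published, so the details below are assumptions for illustration only:

```python
# A hypothetical sketch, NOT ASIC's actual pipeline: prompting Llama-2-70B-chat
# to summarise one inquiry submission with the focus areas named in the article.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "meta-llama/Llama-2-70b-chat-hf"  # gated model; requires access approval

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, device_map="auto")

submission_text = "..."  # full text of one inquiry submission goes here

# Prompt mirroring the trial's stated focus areas; exact wording is assumed.
prompt = (
    "[INST] Summarise the following inquiry submission. Focus on: "
    "mentions of ASIC, recommendations made, and references to more "
    "regulation. Include page references and context for each point.\n\n"
    f"{submission_text} [/INST]"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024, do_sample=False)

# Print only the newly generated summary, not the echoed prompt.
summary = tokenizer.decode(
    output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(summary)
```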
Ten ASIC staff, of varying levels of seniority, were also given the same task with similar prompts. Then a group of reviewers blindly assessed the summaries produced by both humans and AI for coherence, length, ASIC references, regulation references and identification of recommendations. They were unaware that the exercise involved AI at all.
These reviewers overwhelmingly found that the human summaries beat their AI competitors on every criterion and on every submission, scoring 81% on an internal rubric compared with the machine’s 47%.
This is an old study; they tested university-level adults against the standard Llama2-70B.
Kinda obsolete now: the model has completely fallen out of use in favour of the newer and far better Llama 3 and 3.1 versions. It also wasn't fine-tuned for summarisation, and while base Llama2-70B was OK, it wasn't great at anything without fine-tuning.
This clickbait title also sounds like self-congratulation; the abysmal reading comprehension on the internet runs directly counter to it. The average human found on the internet doesn't approach the level of literacy that those ten human testers showed in the study.