AI hallucinations are getting worse - and they're here to stay (New Scientist)
https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

Earlier thread about the worsening accuracy: https://www.democraticunderground.com/100220267171
From New Scientist yesterday:
-snip-
These models work by "repeatedly answering the question of what is a likely next word" to formulate answers to prompts, and so they aren't "processing information in the usual sense of trying to understand what information is available in a body of text", says Bender. But many tech companies still frequently use the term "hallucinations" when describing output errors.

"Hallucination as a term is doubly problematic," says Bender. "On the one hand, it suggests that incorrect outputs are an aberration, perhaps one that can be mitigated, whereas the rest of the time the systems are grounded, reliable and trustworthy. On the other hand, it functions to anthropomorphise the machines: hallucination refers to perceiving something that is not there [and] large language models do not perceive anything."

Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn't necessarily helped.
The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.
The question, of course, is why we should find error-prone AI acceptable at all, especially considering everything harmful about generative AI beyond the fact that it can't be trusted to provide the right answer.
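To make Bender's point concrete, here is a tiny, purely illustrative Python sketch (the word table and probabilities are invented for the example; real models use neural networks over enormous vocabularies, not a hand-written lookup). The generator only knows which word tends to follow which, so it can fluently produce a sentence that happens to be false:

import random

# Toy "next word" model: for each word, a made-up distribution over plausible
# next words. It encodes what tends to follow what - nothing about what is true.
next_word = {
    "<start>":   {"The": 1.0},
    "The":       {"capital": 1.0},
    "capital":   {"of": 1.0},
    "of":        {"Australia": 1.0},
    "Australia": {"is": 1.0},
    "is":        {"Sydney": 0.7, "Canberra": 0.3},  # "Sydney" is more common in text, but wrong
    "Sydney":    {"<end>": 1.0},
    "Canberra":  {"<end>": 1.0},
}

def generate():
    word, out = "<start>", []
    while True:
        choices = next_word[word]
        word = random.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            return " ".join(out)
        out.append(word)

print(generate())  # most runs: "The capital of Australia is Sydney" - fluent, confident, false

Scale that up with billions of parameters and the output gets far more fluent, but the core operation, picking a statistically likely continuation, is the same. That's why these errors are baked in rather than a bug that can simply be patched out.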
21 replies
OP - highplainsdem (May 10): AI hallucinations are getting worse - and they're here to stay (New Scientist)
#12 - LiberalArkie (May 10): I kind of look at the historical precedent of the Democratic administrations that follow a hostile GOP.
#17 - SheltieLover (May 10): I would love complete fixes, too, including making them pay EVERY PENNY they grifted in office
#2 - SWBTATTReg (May 10): Maybe one day they'll discover what a 'true intelligence' is and use it as a map to imprint other AIs. Until they do,
#4 - highplainsdem (May 10): No, it's in our wheelhouse, for all of us. None of us have to accept the helplessness the AI bros want
#18 - eppur_se_muova (May 11): The very first time an "AI overview" popped up on Google, I knew it was completely wrong. Wish I had saved it.