Welcome to DU! The truly grassroots left-of-center political community where regular people, not algorithms, drive the discussions and set the standards.

General Discussion


highplainsdem

(55,974 posts)
Sat May 10, 2025, 04:49 PM

AI hallucinations are getting worse - and they're here to stay (New Scientist)

https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/

Earlier thread about the worsening accuracy: https://www.democraticunderground.com/100220267171

From New Scientist yesterday:

-snip-

These models work by repeatedly answering the question of “what is a likely next word” to formulate answers to prompts, and so they aren’t processing information in the usual sense of trying to understand what information is available in a body of text, says Bender. But many tech companies still frequently use the term “hallucinations” when describing output errors.
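Bender's description of "repeatedly answering the question of 'what is a likely next word'" can be sketched with a toy example. Everything below is invented for illustration: real LLMs learn billions of parameters over subword tokens, while this uses a tiny hand-written bigram table. The point it demonstrates is the one in the quote: the loop picks the *likeliest* next word, with no notion of whether the result is true.

```python
import random

# Toy bigram "model": each word maps to candidate next words with weights.
# The table and vocabulary are made up purely for illustration.
BIGRAMS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"cat": 0.5, "moon": 0.5},
    "a": {"cat": 0.7, "moon": 0.3},
    "cat": {"sat": 1.0},
    "moon": {"landing": 1.0},
    "sat": {"<end>": 1.0},
    "landing": {"<end>": 1.0},
}

def generate(max_words=10, seed=0):
    """Repeatedly sample 'a likely next word' until an end marker appears."""
    rng = random.Random(seed)
    word, out = "<start>", []
    for _ in range(max_words):
        choices = BIGRAMS[word]
        # Weighted random choice: likelihood, not truth, decides the next word.
        word = rng.choices(list(choices), weights=list(choices.values()))[0]
        if word == "<end>":
            break
        out.append(word)
    return " ".join(out)

print(generate(seed=1))
```

Note that nothing in the loop checks the output against the world; a fluent but false continuation ("the moon landing") is produced by exactly the same mechanism as a true one, which is why calling the errors "hallucinations" misleadingly suggests they are a separate failure mode.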

“‘Hallucination’ as a term is doubly problematic,” says Bender. “On the one hand, it suggests that incorrect outputs are an aberration, perhaps one that can be mitigated, whereas the rest of the time the systems are grounded, reliable and trustworthy. On the other hand, it functions to anthropomorphise the machines – hallucination refers to perceiving something that is not there [and] large language models do not perceive anything.”

Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn’t necessarily helped.

The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks when fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.


The question, of course, is why we should find error-prone AI acceptable at all, especially considering everything else harmful about generative AI besides the fact that it can't be trusted to provide the right answer.
21 replies
I am hopeful a future Dem admin will deem AI illegal SheltieLover May 10 #1
Will not happen LiberalArkie May 10 #7
Probably not as the dumbing down continues... SheltieLover May 10 #9
I kind of look at the historical precedence of the Democratic administrations that follow a hostile GOP. LiberalArkie May 10 #12
I would love complete fixes, too, including making them pay EVERY PENNY they grifted in office SheltieLover May 10 #17
Maybe one day they'll discover what a 'true intelligence' is and use it as a map to imprint other AIs. Until they do, SWBTATTReg May 10 #2
I'm upvoting you even though markodochartaigh May 10 #8
It is here and infiltrating. Fact checking is a pain, but cachukis May 10 #3
No, it's in our wheelhouse, for all of us. None of us have to accept the helplessness the AI bros want highplainsdem May 10 #4
Okay, i got you. cachukis May 10 #6
I google probably 20-30 things a day. markodochartaigh May 10 #13
My experience as well. cachukis May 10 #14
The very first time an "AI overview" popped up on Google I knew it was completely wrong. Wish I had saved it. eppur_se_muova May 11 #18
Thank you very much. markodochartaigh May 11 #20
Just wait for bad actors flooding the zone with bad info. Hassin Bin Sober May 10 #5
Can't wait for AI to "Hallucinate" a couple of triple 7's into each other at 35k feet (scarily kidding) Cheezoholic May 10 #10
"'Hallucination' as a term is doubly problematic," is an understatement Nigrum Cattus May 10 #11
Great clarification. cachukis May 10 #15
The learning experience. cachukis May 10 #16
"Fantasy" then, or "delusion" ?? nt eppur_se_muova May 11 #19
I like that you wrote "learning" in quotes Redleg May 11 #21