General Discussion
AI hallucinations are getting worse - and they're here to stay (New Scientist)
https://www.newscientist.com/article/2479545-ai-hallucinations-are-getting-worse-and-theyre-here-to-stay/
Earlier thread about the worsening accuracy: https://www.democraticunderground.com/100220267171
From New Scientist yesterday:
These models work by repeatedly answering the question of what is a likely next word to formulate answers to prompts, and so they aren't processing information in the usual sense of trying to understand what information is available in a body of text, says Bender. But many tech companies still frequently use the term hallucinations when describing output errors.
Hallucination as a term is doubly problematic, says Bender. On the one hand, it suggests that incorrect outputs are an aberration, perhaps one that can be mitigated, whereas the rest of the time the systems are grounded, reliable and trustworthy. On the other hand, it functions to anthropomorphise the machines: hallucination refers to perceiving something that is not there [and] large language models do not perceive anything.
Arvind Narayanan at Princeton University says that the issue goes beyond hallucination. Models also sometimes make other mistakes, such as drawing upon unreliable sources or using outdated information. And simply throwing more training data and computing power at AI hasn't necessarily helped.
The upshot is, we may have to live with error-prone AI. Narayanan said in a social media post that it may be best in some cases to only use such models for tasks where fact-checking the AI answer would still be faster than doing the research yourself. But the best move may be to completely avoid relying on AI chatbots to provide factual information, says Bender.
The question, of course, is why we should find error-prone AI in any way acceptable, especially considering everything else harmful about generative AI besides the fact that it can't be trusted to provide the right answer.
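The "likely next word" process Bender describes can be sketched with a toy example. The vocabulary and probabilities below are made up purely for illustration; a real LLM computes these probabilities with a neural network over tens of thousands of tokens, but the generation loop looks essentially like this:

```python
import random

# Toy next-word distributions (hypothetical numbers, not from any real model).
# A real LLM predicts these probabilities with a neural network; the
# generation loop itself is just repeated weighted sampling like this.
NEXT_WORD = {
    "the": [("cat", 0.5), ("dog", 0.3), ("answer", 0.2)],
    "cat": [("sat", 0.6), ("ran", 0.4)],
    "dog": [("barked", 0.7), ("sat", 0.3)],
}

def generate(prompt_word, max_words=5, rng=None):
    """Repeatedly sample a likely next word; no understanding, just statistics."""
    rng = rng or random.Random(0)
    words = [prompt_word]
    while len(words) < max_words and words[-1] in NEXT_WORD:
        candidates, weights = zip(*NEXT_WORD[words[-1]])
        words.append(rng.choices(candidates, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))
```

Nothing in this loop checks whether the emitted sentence is true, which is why fluent-but-false output is a built-in failure mode of the approach rather than a bug that can simply be patched out.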

SheltieLover
(67,730 posts)
🤞
LiberalArkie
(18,149 posts)
SheltieLover
(67,730 posts)
LiberalArkie
(18,149 posts)
Like the recall of the Reagan tax cuts, which I call the "create the billionaires" tax cuts. It did not happen.
Like the rollback of the Bush 2 reorganization of all the justice divisions into "Homeland Security" (definitely a Third Reich-sounding name).
Like the elimination of Gitmo
Those are just off the top of my head. Why don't we ever fix what they destroy?
SheltieLover
(67,730 posts)It will never happen because to do so they would have to break the real golden rule: rich, white people with penises are never to be held to account.
SWBTATTReg
(25,244 posts)
then I suspect that true AI won't be truly discovered or developed until all human biases are removed entirely from any development efforts of the initial AI.
markodochartaigh
(2,822 posts)
I doubt that true intelligence exists on our planet.
cachukis
(3,187 posts)
necessary. It is first on my searches, replacing Wikipedia. I worry most won't search farther down for reality.
My youngsters are accepting and say they are dealing with it.
Out of my wheelhouse.
highplainsdem
(55,880 posts)
us to feel.
Using AI search instead of Wikipedia - or Google AI Overview instead of the regular search results offered - is a really bad idea. That allows the AI companies to steal traffic and income from the sites they already stole the training data from. AI is killing the internet, between stealing traffic and adding AI slop almost everywhere.
If your youngsters are too accepting of AI, it may be harming them in all sorts of ways, including encouraging cheating. Which is apparently still the main thing ChatGPT is used for.
cachukis
(3,187 posts)
"Wheelhouse" meant forcing my ideology on my kids.
My son works for a company heavily involved with AI. They do pre-trial prep, assembling evidence and briefs. They catalogue documents using custom-created algorithmic search engines. It's the West Coast arm of a NYC-based firm.
He has to sift through lots of stuff managing his team. From upstairs to down.
These people are AI. How does one manage 30 million documents?
AI is here.
Modernity is the pace of nipping at your heels.
I converse with people who use AI answers to support their opinions rather than thinking their own thoughts out.
I'm not digging this. But I'm not stopping it.
The fact that AI hallucinates is almost to be expected as the mind is a terrible thing to waste.
Is AI going to falter like Tesla? Elon took on too much ego.
AI is a tool being refined by itself, THE FEAR.
HAL is rearing his terrifying voice.
Taming the tiger persists.
markodochartaigh
(2,822 posts)
Suddenly a few months ago, unbidden, "AI Overview" started to show up at the top of my search. I didn't click on "Show more", but the first sentences seemed accurate although not helpful to anyone wanting more than an extremely cursory overview. I occasionally read the entire AI Overview now, and several times I have found information which is obviously wrong. I understand the excitement of our oligarchs thinking that they can get rid of large numbers of employees; but the "customer experience" for the customers of those businesses is sure going to go downhill.
cachukis
(3,187 posts)
eppur_se_muova
(39,051 posts)
Because Google's AI consumes utterly inexcusable quantities of energy -- to the detriment of our climate and our future -- I encourage people to install the udm=14 add-on to avoid Google's AI:
https://addons.mozilla.org/en-US/firefox/addon/udm14/
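For the curious: the add-on works by appending Google's `udm=14` query parameter, which requests the plain "Web" results tab without the AI Overview. You can get the same effect without any add-on by building the URL yourself. A minimal sketch (the helper name below is mine, purely illustrative):

```python
from urllib.parse import urlencode

def web_only_google_url(query: str) -> str:
    """Build a Google search URL with udm=14, which selects the plain
    'Web' results tab and skips the AI Overview."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_google_url("ai hallucinations"))
# https://www.google.com/search?q=ai+hallucinations&udm=14
```

The same trick works in any browser that lets you define a custom search engine: set the search URL template to include `udm=14`.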
markodochartaigh
(2,822 posts)
I will try that and pass it along.
Hassin Bin Sober
(27,057 posts)
They already do it now. Used to be you could take a nap to a nice YouTube video - now you wake up in the middle of an hour-long Prager advertisement telling you how the Nazis were really left wing.
AI versus AI pumping out fake news.
Cheezoholic
(2,960 posts)
But then again, in my day hallucinating was a path to spiritual understanding.
Nigrum Cattus
(509 posts)
The term is a human perception of what A.I. is doing.
Did you ever, or any child you know, make shit up? Of course.
The A.I.'s are evolving without human input and just making shit up.
It's not a hallucination. It's testing, checking, recording - "learning."
If they don't regulate it, the corps will set it loose on all of mankind.
cachukis
(3,187 posts)
cachukis
(3,187 posts)
The library they are drawing on is thousands of years old.
But, the human pause of reflection has not taken charge.
Wonder of Social Darwinism sneaks in.
eppur_se_muova
(39,051 posts)
Redleg
(6,490 posts)
It seems ironic to me that the increasing use of AI by students in college courses may actually lead to less learning on the part of the students. I think that we in academia need to figure out how to use AI in such a way that it enhances student learning and avoids some of the pitfalls.