Not all searches get AI answers, but Google has been steadily expanding this feature since it debuted last year. One searcher on Reddit spotted a troubling confabulation when searching for crashes involving Airbus planes. AI Overviews, apparently overwhelmed with results reporting on the Air India crash, stated confidently (and incorrectly) that it was an Airbus A330 that fell out of the sky shortly after takeoff. We’ve run a few similar searches—some of the AI results say Boeing, some say Airbus, and some include a strange mashup blaming both Airbus and Boeing. It’s a mess.
Always remember that AI, or more accurately LLMs, are just glorified predictive text, like the kind on your phone. Don’t trust them. Maybe someday they will be reliable, but that day isn’t today.
“Lies” and even “fabricates” imply intent. “Makes shit up” is probably most accurate, but it also implies intent, which we can’t really apply to an LLM.
“Hallucination” is probably the most accurate term. There’s no intent: it’s something made up that the model expresses as true, not because it is trying to mislead, but because it’s just as “true” to the LLM as anything else it says.
Or because it was programmed with a bias to respond in a certain way. There may not be intent on the LLM’s part, but the same is not necessarily true of its developers.
Definitely! Journalists would have to be reasonably certain of the intent to be able to publish it that way, though.
There can’t be intent on the part of a non-sentient program. It has working code, flawed code, and probably intentionally biased code. Don’t think of it as a being that intends to do anything.