What is a good eli5 analogy for GenAI not "knowing" what they say?

I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around. Because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find that the emotional barrier is especially difficult to break. If an AI says something malicious, our brain immediately jumps to “it has intent”. How can we explain this away?

Aceticon,

Like parrots, LLMs learn to imitate language (only, unlike parrots, it’s done in a training mode rather than from mere exposure, and from billions or even trillions of examples) without ever understanding its primary meaning, much less its more subtle secondary meanings (such as how a person’s certainty and formal education shape their choice of words on a subject).

Since we humans tend to see patterns in everything, even when they’re not there (like spotting a train in the clouds or Christ in a piece of burnt toast), when confronted with the parroted output of an LLM we tend to “spot” subtle patterns and conclude characteristics of the writer of those words, just as we would if the writer were human.

Subconsciously we’re using a cognitive process meant to derive conclusions about other humans from their words, and applying it to words from non-humans. Of course, such a process only ever yields human characteristics, so the shortcut attributes human characteristics to non-humans. In logical terms it’s as if we were saying “assuming this is from a human, here are the human characteristics of the writer of these words” - only because it’s all subconscious, we don’t notice that we’re presuming humanity up front in order to conclude the presence of human traits, i.e. circular logic.

This kind of natural human cognitive shortcut is commonly and purposefully exploited by all good scammers, including politicians and propagandists, to lead people into reaching specific conclusions, since we’re much more wedded to conclusions we (think we) reached ourselves than to those others told us about.

JamesStallion,

The Chinese Room by Searle

kaffiene,

In the sense that the “argument” is an intuition pump. As an anti-AI argument it’s weak - you could replace the operator in the Chinese room with an operator inside an individual neuron and conclude that our brains don’t know anything, either.

themusicman,

Exactly. The brain is analogous to the room, not the person in it. Try removing a chunk of a brain and see how well it can “understand”

JamesStallion,

There is no intelligent operator in a neuron

kaffiene,

Yeah? Of course?

JackbyDev,

I think a good example would be finding similar prompts that reliably give contradictory information.

It’s sort of like autopilot. It just believes everything and follows everything as if it were instructions. Prompt injection and jailbreaking are examples of this. It’s almost exactly like the trope where you trick an AI into realizing it’s contradicted itself and it explodes.

Hamartiogonic, (edited )

All of this also touches on an interesting topic: what does it really mean to understand something? Just because you know stuff and may even be able to apply it in flexible ways, does that count as understanding? I’m not a philosopher, so I don’t even know how to approach a question like this.

Anyway, I think the main difference is the lack of personal experience about the real world. With LLMs, it’s all second hand knowledge. A human could memorize facts like how water circulates between rivers, lakes and clouds, and all of that information would be linked to personal experiences, which would shape the answer in many ways. An LLM doesn’t have such experiences.

Another thing would be reflecting on your experiences and knowledge. LLMs do none of that. They just say whatever “pops into their mind”, whereas humans usually think before speaking… well, at least we’re capable of doing that, even if we don’t always take advantage of this superpower. Granted, the output of an LLM can be monitored and abruptly deleted as soon as it crosses some line, which is sort of like mimicking the thought process you run inside your head before opening your mouth.

Example: explain what it feels like to have an MRI taken of your head. If you haven’t actually experienced that yourself, you’ll have to rely on second-hand information, and the explanation will probably be a bit flimsy. Now imagine you’ve also read all the books, blog posts and Reddit comments about it, and you’re able to reconstruct a fancy explanation regardless.

This lack of experience may hurt the explanation a bit, but an LLM doesn’t have any experience of anything in the real world. It only has second-hand descriptions of all those experiences, and that severely hurts all of its explanations and reasoning.

trashgirlfriend,

I feel like you’re already not getting it and therefore giving too much credit to the LLM.

With LLMs it’s not even about second-hand knowledge; the concept of knowledge doesn’t apply to LLMs at all. It’s literally just statistics, e.g. what is the most likely next output after this token.
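
As a toy sketch of what that means (the numbers here are invented purely for the example, nothing like a real model):

```python
# Toy illustration: the "knowledge" is just a table of next-token probabilities.
next_token_probs = {
    ("the", "sky", "is"): {"blue": 0.71, "falling": 0.12, "clear": 0.09, "green": 0.08},
}

context = ("the", "sky", "is")
# Pick whichever continuation is statistically most likely.
most_likely = max(next_token_probs[context], key=next_token_probs[context].get)
print(most_likely)  # -> "blue", not because anything "knows" what a sky is
```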

kaffiene,

You could argue that embeddings constitute some kind of stored knowledge. But I do agree with your larger point: LLMs are getting too much credit because of the language we use to describe them.

unreasonabro,

It’s like talking to a republican!

HorseRabbit,

Not an ELI5, sorry. I’m an AI PhD, and I want to push back against the premise a little bit.

Why do you assume they don’t know? Like what do you mean by “know”? Are you talking about conscious subjective experience? Or consistency of output? Or an internal world model?

There’s lots of evidence to indicate they are not conscious, although they can exhibit theory of mind. E.g.: arxiv.org/pdf/2308.08708.pdf

For consistency of output and internal world models, however, there is mounting evidence to suggest convergence on a shared representation of reality. E.g. this paper published two days ago: arxiv.org/abs/2405.07987

The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct, although it is often repeated online for some reason.

One piece of evidence that comes to mind is this paper showing models can understand rare English grammatical structures even if those structures are deliberately withheld during training: arxiv.org/abs/2403.19827

GamingChairModel,

The idea that these models are just stochastic parrots that only probabilistically repeat their training data isn’t correct

I would argue that it is quite obviously correct, but that the interesting question is whether humans are in the same category (I would argue yes).

HorseRabbit,

People sometimes act like the models can only reproduce their training data, which is what I’m saying is wrong. They do generalise.

During training the models learn to predict the next word, and after training the network is effectively always interpolating between the training examples it has memorised. But this interpolation doesn’t happen in text space; it happens in a very high-dimensional abstract semantic representation space, a ‘concept space’.

Now imagine that you have memorised two paragraphs that occupy two points in concept space, and then you interpolate between them. This gives you a new point, potentially unseen during training - a new concept that is in some ways analogous to the two paragraphs you memorised, but still fundamentally different, and potentially novel.
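
A rough sketch of that interpolation, with toy 3-dimensional vectors standing in for a real model’s very high-dimensional embeddings (all values invented):

```python
import numpy as np

# Toy stand-ins for two memorised paragraphs mapped into "concept space".
paragraph_a = np.array([0.9, 0.1, 0.3])
paragraph_b = np.array([0.2, 0.8, 0.5])

# Linear interpolation: a point "between" the two concepts,
# which may correspond to something never seen during training.
alpha = 0.5
new_concept = (1 - alpha) * paragraph_a + alpha * paragraph_b
print(new_concept)  # [0.55 0.45 0.4 ]
```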

Rhynoplaz,

You sound like a chatbot who’s offended by its intelligence being insulted.

trashgirlfriend,

Bro is lost in the sauce

HorseRabbit,

Maybe I misunderstood the OP? Idk

IzzyScissor,

It’s your phone’s ‘predictive text’, but if it were trained on the internet.

It can guess what the next word should be a lot of the time, but it’s also easy for it to go off the rails.
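
You can see the going-off-the-rails part with a toy predictor that just keeps chaining its own guesses (the suggestion table here is invented):

```python
import random

# Toy next-word table, like a phone keyboard's suggestions.
suggestions = {
    "I":    ["am", "think", "will"],
    "am":   ["not", "going", "the"],
    "not":  ["sure", "the", "a"],
    "the":  ["best", "one", "I"],
    "sure": ["the", "I", "about"],
}

word, sentence = "I", ["I"]
for _ in range(10):
    word = random.choice(suggestions.get(word, ["the"]))
    sentence.append(word)
print(" ".join(sentence))  # grammatical-ish, but it can loop or drift into nonsense
```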

kava,

It’s all just fancy statistics. It turns words into numbers. Then it finds patterns in those numbers. When you enter a prompt, it finds numbers that are similar and spits out an answer.

You can get into vectors and backpropagation and blah blah blah, but essentially it’s a math formula. We call it AI, but it’s not fundamentally different from solving 2x + 4 = 10 for x.
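
For the “turns words into numbers” part, a stripped-down sketch (the vector values are made up purely for illustration):

```python
import numpy as np

# Each word becomes a vector of numbers; similar words end up with similar vectors.
vectors = {
    "cat": np.array([0.9, 0.8, 0.1]),
    "dog": np.array([0.85, 0.75, 0.2]),
    "car": np.array([0.1, 0.2, 0.9]),
}

def similarity(a, b):
    # Cosine similarity: just arithmetic on the numbers, no meaning involved.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity(vectors["cat"], vectors["dog"]))  # high
print(similarity(vectors["cat"], vectors["car"]))  # much lower
```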

RizzRustbolt,

x = 3

Feathercrown,

The idea that “it makes convincing sentences, but it doesn’t know what it’s talking about” is a difficult concept to convey or wrap your head around.

I see the people you talk to aren’t familiar with politicians?

GamingChairModel,

Harry Frankfurt’s influential 2005 book On Bullshit (based on his 1986 essay) offered a description of what bullshit is.

When we say a speaker tells the truth, that speaker says something true that they know is true.

When we say a speaker tells a lie, that speaker says something false that they know is false.

But bullshit is when the speaker says something to persuade, not caring whether the underlying statement is true or false. The goal is to persuade the listener of that underlying fact.

The current generation of AI chat bots are basically optimized for bullshit. The underlying algorithms reward the models for sounding convincing, not necessarily for being right.

Chocrates,

A 5-year-old repeating daddy’s swear words without knowing what they mean.

rubin,

Imagine that you have a random group of people waiting in line at your desk. You have each one read the prompt and the response so far, and then add a single word themselves. Then they leave, and the next person in line comes up and does the same.

This is why “why did you say …?” questions are nonsensical to the AI. The code answering that question is not the code that wrote the earlier response, and there is no communication or coordination between the different word-answerers.
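
If it helps to make the analogy concrete, here’s a toy Python sketch of that line of people (the “people” are just a stateless function picking words at random, which is far cruder than a real model, but the point is that nothing is remembered between calls):

```python
import random

def next_person_adds_a_word(text_so_far):
    # Each "person" only sees the text so far; nothing persists between calls.
    options = ["and", "then", "the", "cat", "ran", "quickly", "."]
    return random.choice(options)

response = ["The", "cat"]
while len(response) < 12:
    response.append(next_person_adds_a_word(" ".join(response)))
print(" ".join(response))
# Asking the finished text "why did you say that?" makes no sense:
# no single "person" wrote it, and none of them remembers anything.
```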

relevants,

Ok, I actually like this description a lot; it’s a very quick and effective way to explain the effect of having no backtracking. A lot of the answers here are either too reductive or too technical to actually make this behavior understandable to a layman. “It just predicts the next word” is easy to forget when the thing makes it so easy to anthropomorphize it subconsciously.

Hegar,

Part of the problem is hyperactive agency detection - the same biological bug/feature that fuels belief in the divine.

If a twig snaps, it could be nothing or someone. If it's nothing and we react as if it was someone, no biggie. If it was someone and we react as if it was nothing, potential biggie. So our brains are bias towards assuming agency where there is none, to keep us alive.

mindbleach,

“Biased.”

CodeInvasion, (edited )

I am an LLM researcher at MIT, and hopefully this will help.

As others have answered, LLMs have only learned the ability to autocomplete, given some input known as the prompt. Functionally, the model is strictly predicting the probability of the next word^+^ (more precisely, the next token), with some randomness injected so the output isn’t exactly the same for any given prompt.

The probability of the next word comes from what was in the model’s training data, in combination with a very complex mathematical method, called self-attention, that computes the influence of every previous word on every other previous word and on the newly predicted word. You can think of this as a computed relatedness factor.
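
As a very rough sketch of that relatedness factor (toy two-dimensional vectors, and none of the learned query/key/value projections a real transformer uses):

```python
import numpy as np

# Toy token vectors; real models derive separate query/key/value vectors
# through learned weight matrices, which are omitted here.
tokens = {"the": np.array([0.1, 0.9]),
          "cat": np.array([0.8, 0.3]),
          "sat": np.array([0.7, 0.5])}

def attention_weights(query_word):
    # Dot products measure "relatedness"; softmax turns them into weights.
    scores = np.array([tokens[query_word] @ tokens[w] for w in tokens])
    weights = np.exp(scores) / np.exp(scores).sum()
    return dict(zip(tokens, weights))

print(attention_weights("sat"))  # how much "sat" attends to each previous word
```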

Computing this relatedness factor is very expensive, and the cost grows quadratically with the number of words, so models are limited in how many previous words they can use to compute relatedness. This limit is called the context window. The recent breakthroughs in LLMs come from the use of very large context windows to learn the relationships of as many words as possible.

This process of predicting the next word is repeated iteratively until a special stop token is generated, which tells the model to stop generating more words. So, literally, the model builds entire responses one word at a time, from left to right.
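
In simplified Python, the whole loop looks roughly like this (predict_next_token is just a stand-in for the actual neural network):

```python
def generate(prompt_tokens, predict_next_token, stop_token="<eos>", max_new_tokens=200):
    # The model only ever does one thing: guess the next token, append it,
    # and repeat, building the answer left to right until the stop token appears.
    output = list(prompt_tokens)
    for _ in range(max_new_tokens):
        token = predict_next_token(output)  # probabilities plus a bit of randomness
        if token == stop_token:
            break
        output.append(token)
    return output[len(prompt_tokens):]

# Dummy "model" that always answers the same few words, just to show the flow.
canned = iter(["Paris", "of", "course", "<eos>"])
print(generate(["Capital", "of", "France", "?"], lambda ctx: next(canned)))
# -> ['Paris', 'of', 'course']
```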

Because all future words are predicated on the previously stated words, either in the prompt or in the subsequently generated words, it becomes impossible to apply even the most basic logical concepts unless all the required components are present in the prompt or have somehow serendipitously been stated by the model in its generated response.

This is also why LLMs tend to work better when you ask them to work out all the steps of a problem instead of jumping to a conclusion, and why the best models tend to rely on extremely verbose answers to give you the simple piece of information you were looking for.

From this fundamental understanding, hopefully you can now reason about the LLM’s limitations in factual understanding as well. For instance, if a given fact was never mentioned in the training data, or an answer simply doesn’t exist, the model will make it up, inferring the next most likely word to create a plausible-sounding statement. Essentially, the model has been faking language understanding so well that even when it has no factual basis for an answer, it can easily trick an unwitting human into believing the answer to be correct.

—-

^+^more specifically, these words are tokens, which usually contain some smaller part of a word. For instance, understand and able would be represented as two tokens that, when put together, become the word understandable.
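
For example, with a made-up subword vocabulary and a greedy longest-match rule (real tokenizers such as BPE are more sophisticated, but the idea is similar):

```python
# Toy subword tokenizer: greedily match the longest known piece.
# The vocabulary is invented; real tokenizers learn tens of thousands of pieces.
vocab = ["understand", "able", "un", "der", "stand", "a", "b", "l", "e"]

def tokenize(word):
    pieces, i = [], 0
    while i < len(word):
        match = next(p for p in sorted(vocab, key=len, reverse=True)
                     if word.startswith(p, i))
        pieces.append(match)
        i += len(match)
    return pieces

print(tokenize("understandable"))  # ['understand', 'able']
```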

Sabata11792,

As some nerd playing with various AI models at home with no formal training, is there any wisdom you think is worth sharing?

BigMikeInAustin,

The only winning move is not to play.

Sabata11792,

But my therapist said she needs more VRam.

HamsterRage,

I think a good starting place for explaining the concept would be to describe a Travesty Generator. I remember playing with one of those back in the 1980s. If you fed it a snippet of Shakespeare, what it churned out sounded remarkably like Shakespeare, even when it created brand “new” words.

The results were goofy, but fun because it still almost made sense.

The most disappointing source text I ever put in was T.S. Eliot. The output was just about as much rubbish as the original text.
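
If anyone wants to play with the idea, a basic travesty generator only takes a few lines of Python (character-level, order 3, nothing fancy; feed it whatever source text you like):

```python
import random
from collections import defaultdict

def travesty(source_text, order=3, length=300):
    # Learn which character tends to follow each 'order'-length chunk,
    # then wander through those statistics to produce Shakespeare-ish noise.
    followers = defaultdict(list)
    for i in range(len(source_text) - order):
        followers[source_text[i:i + order]].append(source_text[i + order])

    chunk = source_text[:order]
    output = chunk
    for _ in range(length):
        nxt = random.choice(followers.get(chunk, [" "]))
        output += nxt
        chunk = output[-order:]
    return output

# print(travesty(open("hamlet.txt").read()))  # any plain-text file you have handy
```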

SwearingRobin,

The way I’ve explained it before is that it’s like the autocomplete on your phone. Your phone doesn’t know what you’re going to write, but it can predict that after word A, word B is likely to appear, so it suggests it. LLMs are just the same as that, but much more powerful and trained on the writing of thousands of people. The LLM predicts that after prompt X, the most likely set of characters to follow is set Y. No comprehension required, just prediction based on previous data.
