doctorambient, 14 days ago to LLMs
People: stop asking #LLMs to explain their behavior.
We already know that LLMs don't have the introspection necessary to explain their behavior, and their explanations are often fanciful or "just wrong."
For instance, Gemini claims it reads your emails for training; Google says it doesn't.
(BTW, if it turns out Gemini is right and Google is lying, that might be another example of an LLM convincing me it's actually "intelligent.")
#largelanguagemodels #ai #machinelearning #googlegemini #llm
doctorambient, 14 days ago
A Llama3 example:
>>> Do you happen to know what your context window length is?
Llama: I'm an AI model, and I don't have a fixed "context window" in the classical sense. My training data consists of a massive corpus of text, which I use to generate responses.
The correct answer is 8k.
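Rather than asking the model, you can read the context window from the model's published configuration. A minimal sketch, assuming the `max_position_embeddings` field from Llama 3 8B's `config.json` (the inline JSON below is an illustrative excerpt, not the full file, which ships alongside the model weights):

```python
import json

# Illustrative excerpt of a Llama 3 8B config.json; the real file
# contains many more fields (vocab size, layer counts, etc.).
config_json = '''
{
  "model_type": "llama",
  "max_position_embeddings": 8192
}
'''

config = json.loads(config_json)

# max_position_embeddings is the context window length in tokens: 8192 = 8k.
print(f"context window: {config['max_position_embeddings']} tokens")
```

The point being: the 8k figure lives in the model's static metadata, not in anything the model can introspect at inference time.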
#llama3 #lackOfIntrospection #introspection #LLMs #LLM