kellogh, this paper is going around. tbh the result is intuitive, at least to me, although the setup is also a bit contrived fwiw. i do think it's an important paper that helps us recognize the limitations of #LLMs, but i don't see why it would cause anyone to pause development of #LLM apps. Proving that hallucinations always exist is a bit like proving that the network will always drop packets: it doesn't stop you from building, it just changes how you approach it
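(to make the analogy concrete: network code doesn't assume delivery, it wraps the unreliable call in retries. a minimal sketch of that pattern, illustrative names only, not from the paper — the same shape applies to validating/re-prompting an LLM instead of trusting one answer:)

```python
def with_retries(op, attempts=3):
    """Call an unreliable operation, retrying on failure.

    Network code assumes packets drop; it retries instead of giving up.
    An LLM app can do the same: validate the output, re-ask on failure.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return op()
        except RuntimeError as e:  # stand-in for "packet dropped" / "bad output"
            last_error = e
    raise last_error

# toy unreliable operation: fails twice, then succeeds
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("dropped")
    return "ok"

print(with_retries(flaky))  # succeeds on the third attempt
```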