@simon I wonder what to make of an llm chat (local model) answer that abruptly stops mid-response. I get fairly consistent results by following up with "continue", but it stops again and an ensuing ping-pong match follows.
Do you know what's happening? Maybe it's related to the token limit?
(In this case it's Llama 3 8B Instruct.)
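For reference, this is roughly the kind of check I had in mind (a minimal sketch, assuming llama-cpp-python and a local GGUF of the model; the path and max_tokens value are placeholders, not my actual setup): if finish_reason comes back as "length", the reply was cut off by the max-tokens cap rather than by the model finishing on its own.

```python
# Minimal sketch: see whether truncation is caused by the max_tokens cap.
# Assumes llama-cpp-python and a local GGUF file (placeholder path below).
from llama_cpp import Llama

llm = Llama(
    model_path="./Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # placeholder path
    n_ctx=8192,  # context window; prompt + reply both count against it
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain how context windows work."}],
    max_tokens=256,  # deliberately small cap to illustrate truncation
)

choice = response["choices"][0]
print(choice["message"]["content"])
# "length" -> reply was cut off by the max_tokens cap
# "stop"   -> model emitted its end-of-turn token and finished on its own
print("finish_reason:", choice["finish_reason"])
```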