LLMs can be great for explaining things that have concrete solutions, like physics and math problems, when they have a separate computation tool bolted on, the way ChatGPT can hand the arithmetic off to a code interpreter instead of guessing at it. And usually you can check the answer in the back of the book anyway, so fact hallucinations are easy to catch.
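To make that concrete, here's a minimal sketch of the "bolted-on computation" pattern, under my own assumptions rather than ChatGPT's actual internals: the model's only job is to translate the problem into an expression, and a deterministic evaluator does the arithmetic, so the final number can't be hallucinated. `fake_llm` is a stand-in for a real model call, purely for illustration.

```python
import ast
import operator

# Allowed arithmetic operations; anything else is rejected.
OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Deterministically evaluate a plain arithmetic expression."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("expression uses disallowed syntax")
    return walk(ast.parse(expr, mode="eval"))

def fake_llm(problem: str) -> str:
    # Stand-in for the model: it returns an *expression*, not an answer.
    return "0.5 * 9.8 * 3**2"  # distance fallen in 3 s, from d = (1/2) g t^2

problem = "How far does a dropped object fall in 3 seconds? (g = 9.8 m/s^2)"
expr = fake_llm(problem)
print(f"{expr} = {safe_eval(expr):.1f} m")  # 44.1 m, checkable against the book
```

The point of the split is that the part prone to hallucination (free-form generation) never touches the part that has to be exact (the arithmetic), which is exactly why the back-of-the-book check works so well here.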
I wouldn’t worry about source hallucinations for this kind of use either; when the model is just explaining a worked problem, I don’t think citations would even come up.