blindsight:

LLMs can be great for explaining things that have concrete solutions, like physics and math problems, when they have a separate “computation” AI bolted on, the way ChatGPT does. Usually you can check the answer in the back of the book anyway, so fact hallucinations are very easy to catch.
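
Rough sketch of what I mean, in Python: the model only writes down the expression, a tiny deterministic evaluator does the actual arithmetic, and the back-of-the-book answer is the check. `ask_llm` here is a hypothetical stand-in for whatever model API you'd use, not a real library call.

```python
import ast
import operator

# Operators the evaluator will accept; everything else is rejected.
OPS = {
    ast.Add: operator.add, ast.Sub: operator.sub,
    ast.Mult: operator.mul, ast.Div: operator.truediv,
    ast.Pow: operator.pow, ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate plain arithmetic without eval(), so the model can't run code."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a real model API call.
    return "9.8 * 2.0**2 / 2"

def solve(problem: str, book_answer: float) -> bool:
    # The model only produces the expression; the math happens here.
    expr = ask_llm(f"Give just the arithmetic expression that solves: {problem}")
    result = safe_eval(expr)
    # Comparing against the book answer makes a hallucinated number obvious.
    return abs(result - book_answer) < 1e-6

print(solve("A ball drops from rest for 2.0 s; how far does it fall? (g = 9.8 m/s^2)", 19.6))
```
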

I wouldn’t worry about source hallucinations with this either. I don’t think it would even come up?
