green_light_stop

The concern with LLMs as any sort of source of truth is that they have no concept of facts or truth. They simply read training material and then pattern match to come up with a response to input. There is no concept of correct information, and unless you fact check it, you will not know whether it is correct or its reasoning is sound. Using this to teach is dangerous IMO. Using the word "reasoning" is anthropomorphising it too; it's just pattern matching.
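To make the "just pattern matching" point concrete, here's a deliberately tiny toy: a bigram model, nothing like a real LLM, but the same basic idea of sampling whatever tends to follow what in the training text. Notice that no step anywhere asks whether the output is true:

```python
import random
from collections import defaultdict

training_text = (
    "the sun rises in the east . the sun sets in the west . "
    "the moon rises at night ."
)

# Learn which word tends to follow which (the "pattern matching").
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # chosen by frequency, never by truth
    return " ".join(out)

print(generate("the"))  # can happily produce "the sun rises in the west ." - fluent, and false
```

Every statement in the training text is correct, yet the model can still stitch the patterns together into a confident falsehood.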

Could we develop some adversarial system that fact-checks it in the future? Possibly, but I don't know of one that's effective. Besides, good luck determining what is true when your training set is the internet, or getting it to account for advances in understanding.
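For concreteness, the kind of adversarial loop I mean would look something like the sketch below. Every function in it is a hypothetical placeholder with toy logic so it runs, not a real library: one model drafts an answer, a checker tries to falsify each claim against trusted sources, and the answer is only returned if nothing is contradicted. The hard, unsolved part is the checker itself, and deciding which sources count as trusted.

```python
# Hypothetical sketch of an adversarial fact-checking loop. No such library
# exists; every function is a stand-in with toy logic so the example runs.

def draft_answer(question: str) -> str:
    # Stand-in for the generator LLM: fluent output, no guarantee of truth.
    return "The sun rises in the west."

def extract_claims(answer: str) -> list[str]:
    # Stand-in for claim extraction: treat each sentence as one claim.
    return [s.strip() for s in answer.split(".") if s.strip()]

def contradicted_by_sources(claim: str, sources: list[str]) -> bool:
    # Stand-in for the verifier: flag a claim when a trusted source says
    # something different about the same subject. Doing this well is the
    # genuinely hard part.
    return any(claim.lower() != src.lower() and
               claim.split()[:3] == src.split()[:3] for src in sources)

def answer_with_checking(question: str, sources: list[str], max_tries: int = 3) -> str | None:
    for _ in range(max_tries):
        answer = draft_answer(question)
        claims = extract_claims(answer)
        if not any(contradicted_by_sources(c, sources) for c in claims):
            return answer  # nothing falsified (still not proof that it's true)
    return None  # refuse rather than hand back something unverified

trusted = ["The sun rises in the east"]
print(answer_with_checking("Where does the sun rise?", trusted))  # -> None (refused)
```

Even in this toy, the best the checker can do is refuse an answer it can falsify; "not contradicted by my sources" is still a long way from "correct".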

From the article you linked:

The incredible capabilities of large language models like ChatGPT are centered on how they have been trained on a vast corpus of knowledge. They provide us with an unparalleled resource for information and guidance. As your virtual professor, LLMs can guide you through the intricacies of each subject for deeper understanding of complex concepts.

That's a very naive take on LLMs. It assumes that because the training material is valid, the output is valid. It is not!

I worry about a future where LLMs become the basis of information exchange just because their outputs "look right".

Show me a system that can guarantee correct answers and I'm 100% on board.
