JohnEdwa,

Because large language models don’t actually understand what is true or what is real; they just know how humans usually string words together, so they can conjure plausible, readable text. If the training data contains falsehoods, the model will learn to write them.
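To make that concrete, here’s a minimal sketch in Python: a toy bigram model that only counts which word tends to follow which. It has no notion of truth, so a falsehood in its tiny corpus is just another pattern to reproduce. (This is a deliberately simplified stand-in for how an LLM predicts tokens, not a real implementation; the corpus and function names are made up for illustration.)

```python
import random
from collections import defaultdict, Counter

# Toy "training data" with a factual error baked in.
corpus = [
    "the sun rises in the east",
    "the sun rises in the west",  # falsehood the model will happily learn
    "the moon rises in the east",
]

# Count which word follows which (a bigram model).
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def generate(start, length=6):
    """Sample a plausible-looking continuation, with no concept of truth."""
    out = [start]
    for _ in range(length - 1):
        choices = follows[out[-1]]
        if not choices:
            break
        words, weights = zip(*choices.items())
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))  # may confidently claim the sun rises in the west
```

Run it a few times and it will sometimes emit the falsehood, because to the model the false sentence is exactly as "plausible" as the true one.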

To get something that would benefit from knowing both sides, we’d first need to create a proper AGI, an artificial general intelligence, with the ability to actually think.
