CarbonatedPastaSauce,

You’re illustrating the issue so many people have with this technology. Without a fundamental understanding of how it works, people will attempt to use it in ways it shouldn’t be used, and won’t understand why it isn’t giving them correct information. It simply doesn’t have the ability to do anything but put words in an order that statistically will resemble how a human might answer the question.

LLMs don’t know anything. They can’t tell fact from fiction (and are incapable of even trying), and they don’t understand concepts like verifying information when asked to. That’s the problem: they don’t ‘understand’ anything, including what they are telling you. But they do spit out words in a statistically probable order, even when the result is complete bullshit. They do it so well that they can fool most people into thinking the computer actually knows what it’s telling them.
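To make the “statistically probable order” point concrete, here’s a deliberately tiny toy sketch (nothing like a real LLM, which uses neural networks over billions of parameters): a table of made-up next-word probabilities and a sampler that chains them. Note it will happily emit “the sky is green” if the dice land that way, because it has no notion of truth, only of likelihood.

```python
import random

# Made-up next-word probabilities, purely for illustration.
# A real LLM learns something conceptually similar (but vastly richer)
# from its training data: which token tends to follow which context.
next_word_probs = {
    "the": {"cat": 0.5, "sky": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "is": 0.4},
    "sky": {"is": 1.0},
    "is": {"blue": 0.7, "green": 0.3},  # "green" is plausible-sounding, not true
    "sat": {"down": 1.0},
}

def generate(start, length=4, seed=0):
    """Chain words by sampling from the probability table, with no
    regard for whether the resulting sentence is actually correct."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        probs = next_word_probs.get(words[-1])
        if not probs:
            break  # no known continuation; stop
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every output is “statistically probable” by construction, yet the sampler never checks facts and couldn’t even if asked, which is the whole point of the comment above.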
