happyborg, to LLMs
@happyborg@fosstodon.org

By tuning LLMs to aim for plausibility rather than correctness, their makers have effectively tuned them for deceit.

They produce the most plausible response regardless of correctness, which makes it hard to spot incorrect and misleading output.

LLMs are inherently dangerous in the hands of humans because they are designed to bypass our critical faculties.

What could possibly go wrong?!

tomstoneham,
@tomstoneham@dair-community.social

@happyborg
This applies to public LLMs, not ones developed for business purposes. It isn't inevitable.

So why do they do it? Because giving the unsavvy user an answer they find really plausible, the answer they want, leaves them really impressed with the capabilities of the LLM. And that amplifies, and ultimately fuels, the narrative that this is an amazing technology we must all adopt ASAP.
