tomw
@tomw@mastodon.social

Every so often I see a post about how LLMs fail logic puzzles.

And... yes? Of course they do. The only way an LLM could solve such a puzzle is if it has seen it before, or a substantially similar one. (And in the latter case it may well give the answer to the similar puzzle rather than the correct one.)

Why is this even tested so often or considered surprising? It is, in essence, an autocomplete. It does not understand logic. It has no concept of a correct answer. It gives the most likely completion.
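As a minimal sketch of what "most likely completion" means in practice (this assumes the Hugging Face transformers library and the public gpt2 checkpoint, chosen purely for illustration and not mentioned in the post): the model is given a prompt, produces a score for every token in its vocabulary, and the highest-scoring token is taken as the continuation. There is no step where a "correct answer" is checked.

```python
# Illustrative sketch: greedy next-token prediction with a small pretrained model.
# Assumes `pip install torch transformers` and the public "gpt2" checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "A man has a fox, a goose, and a bag of grain. He must cross a river"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits          # shape: (1, seq_len, vocab_size)

# Pick the single most likely next token: no logic, no notion of correctness,
# just whichever continuation scored highest given the training data.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode([next_token_id]))
```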
