isaiah,
@isaiah@mastodon.social

every time i read an article about AI/LLMs it inevitably says something like, "AIs don't really understand _____" (fill in the blank with just about anything).

i really feel like some folks need to take a few philosophy courses before writing these think-pieces. not because they're wrong per se, but because it's the wrong framing.

even my tiny bit of philosophy coursework made it abundantly clear that humans don't possess the level of understanding that we believe we do.

isaiah,
@isaiah@mastodon.social

i'm not saying AIs do understand. or that their abilities are near ours.

just that framing this as a contest of understanding is… problematic.

humans are wired to believe our perception of reality is crystal clear and our understanding nearly infallible. it's why it's so difficult to change someone's mind -- even when, on some level, they know they're in the wrong.

when we're hard-wired to measure "understanding" badly, it's probably a poor metric to choose.

collin,
@collin@ruby.social

@isaiah This is a good point. Humans "hallucinate" all the time and start making things up when we don't fully know what we're talking about.
