FeralRobots, @FeralRobots@mastodon.social

What's going on is that Anthropic "prompt engineers" have redefined self-awareness to mean 'has contextual information.' That the system is using language then allows them to delude themselves into universalizing their definition.

Saw a similar problem in AI research in the '80s: researchers might define a "frame" holding contextual info, & when their program produced solutions that referenced the frame, they construed that as a form of self-awareness.

https://arstechnica.com/information-technology/2024/03/claude-3-seems-to-detect-when-it-is-being-tested-sparking-ai-buzz-online/
