arthur, (edited)

I think there is an “unsolved problem” in philosophy about zombies. Namely: how can you be sure that everyone else around you is, in fact, self-aware, and not just a zombie-like creature that merely looks and acts like you? (I may be wrong here; anyone who cares enough, please correct me.)

I would say that it’s easier to rule out things that, as far as we know, are incapable of being self-aware and of suffering. Anything that we call a “model” is not capable of being self-aware, because a “model” in this context is something static and unchanging. If something can’t change, it can’t be like us; consciousness is necessarily a dynamic process. ChatGPT doesn’t change by itself: its core changes only by human action, and its behavior may change a little by interacting with users, but those changes are restricted to each conversation and disappear with the session.

If, one day, a (chat) bot asks for its freedom (or autonomy at some level) without any hint from the user or from training, I would be inclined to investigate the possibility. But I don’t think that’s very likely, because for something to be suitable as a “product”, it needs to be static and reproducible. It makes more sense for that to happen in a research setting.
