Jimmyeatsausage,

I just… you seem to have a fair grasp of mathematics and logic (or you copied a portion of your reply from some other source that does), but either you don’t have a grasp of how LLMs work and are built, or you have an extremely naive view of consciousness, or I’m missing some prior assumption you used in coming to the conclusion that LLMs are anywhere near the level you seem to be implying, rather than statistical models. The input you provide to an LLM does not alter the underlying weights of the nodes in the network unless the model is kept in training mode. When that happens, they quickly break down and all the output becomes garbage, because they have no reality-checking mechanism, and they don’t have context in the way people, or even the animals we consider intelligent, do.
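To make the inference-vs-training distinction concrete, here's a minimal sketch using a hypothetical one-neuron toy model (nothing like a real LLM, just an illustration): a forward pass reads the weights but never writes them, while an explicit training step is the only thing that changes them.

```python
import copy

class TinyModel:
    def __init__(self):
        # "Weights" of a toy linear model: y = w * x + b
        self.w = 0.5
        self.b = 0.1

    def forward(self, x):
        # Inference: uses the weights, does not modify them.
        return self.w * x + self.b

    def train_step(self, x, target, lr=0.01):
        # One gradient-descent step on squared error:
        # this is the only path that mutates the weights.
        err = self.forward(x) - target
        self.w -= lr * err * x
        self.b -= lr * err

model = TinyModel()
before = copy.deepcopy(vars(model))

model.forward(3.0)                 # inference only
assert vars(model) == before       # weights untouched by the input

model.train_step(3.0, 2.0)         # explicit training step
assert vars(model) != before       # weights changed only in training mode
```

Production LLMs are deployed in exactly this read-only mode: your prompt goes through the forward pass, and no update rule ever runs against the weights.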
