Jimmyeatsausage,

It’s more than being limited in overall complexity. The locked node weights mean the LLM is fully deterministic: it has no will or goals, no opinion, no sense of self, no sense of its environment, and no sense of the separation between the two. It has no comprehension.
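
To make the "locked weights" point concrete, here's a toy sketch (the vocabulary, weight matrix, and function names are all made up for illustration, nothing like a real LLM's scale): once the weights are frozen and decoding is greedy, the mapping from input to output is a fixed function, so the same prompt produces the same output every single time. Real deployments add sampling randomness on top, but the weights themselves never change in response to anything the model "experiences."

```python
# Toy illustration: with frozen weights and greedy decoding,
# the same prompt always yields the same continuation.
# VOCAB and WEIGHTS are invented for this sketch.

VOCAB = ["the", "cat", "sat", "down"]
WEIGHTS = [  # frozen after "training"; row i = scores for the token after token i
    [0.1, 0.9, 0.0, 0.0],
    [0.0, 0.1, 0.8, 0.1],
    [0.0, 0.0, 0.1, 0.9],
    [0.7, 0.1, 0.1, 0.1],
]

def next_token(token_id):
    scores = WEIGHTS[token_id]  # "logits" for the next token
    return max(range(len(scores)), key=scores.__getitem__)  # greedy argmax

def generate(start_id, steps=3):
    out = [start_id]
    for _ in range(steps):
        out.append(next_token(out[-1]))
    return [VOCAB[i] for i in out]

# Nothing in generate() updates WEIGHTS, so repeated calls with the
# same input are guaranteed to produce the same output.
print(generate(0))  # ['the', 'cat', 'sat', 'down']
```

The model never learns from the interaction: no matter how many times you run it, the weights are identical afterward, which is the sense in which it can't accumulate experience.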

Iterative training cycles are already used with LLMs and don’t solve any of those issues.

From the standpoint of psychology, there’s no wholly agreed-upon definition of ‘intelligence,’ but most working definitions require the ability to learn from experience, to recognize problems, and to generalize and adapt that experience to solve them.

Theoretically, if an LLM had “intelligence,” you could ask it about a problem that was completely absent from its training data. An intelligent LLM would be able to comprehend that problem, generalize it to a level where it could be related to some previous experience, then use details from that prior experience to come up with potential solutions to the new problem. LLMs can’t achieve any of those things individually, never mind all together. If someone pulled that off, it wouldn’t convince me their model was worth the level of concern you articulated earlier, but it would get my attention and would be something I’d watch pretty closely.
