intensely_human,

Okay, that’s a good point. Without retraining, LLMs are limited in the overall amount of complexity they can successfully navigate.

Sort of like a human who isn’t allowed to sleep, in my opinion. A human may be capable of designing an airplane, but not without sleep: the complexity is beyond what anyone can manage in a single day before exhaustion sets in and the errors pile up.

Do you believe that a series of LLMs, where each one is trained on the previous model’s training data plus the input/output completions the previous iteration produced, would be a general intelligence?
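In case it helps, here’s a toy sketch of the loop I have in mind. Everything in it is a stand-in I made up for illustration (the ToyLLM class, train, and complete are not any real training API); the point is just the data flow, where each generation’s corpus is the previous corpus plus the previous model’s transcripts:

```python
# Toy sketch of a bootstrapped series of LLMs. All names here are
# hypothetical placeholders: "training" just stores the corpus, and
# "completions" are fake strings.

from dataclasses import dataclass


@dataclass
class ToyLLM:
    corpus: list[str]  # everything this generation was trained on

    def complete(self, prompt: str) -> str:
        # Placeholder for real inference.
        return f"completion of {prompt!r} using {len(self.corpus)} documents"


def train(corpus: list[str]) -> ToyLLM:
    # Placeholder for a real training run.
    return ToyLLM(corpus=list(corpus))


corpus = ["original training data"]  # generation 0's data
model = train(corpus)

for generation in range(3):  # a short series of successors
    prompts = [f"task {i}" for i in range(2)]
    transcripts = [f"{p} -> {model.complete(p)}" for p in prompts]
    corpus = corpus + transcripts  # previous data + previous model's work
    model = train(corpus)          # the next LLM in the series

print(len(model.corpus))  # grows by two transcripts per generation
```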

If I sound naive, it’s because I’m trying to apply Occam’s Razor to my own thinking and pare the conversation down to the minimum set of concepts needed to move it forward. I’ll consider anything you ask me to, but so far I haven’t seen a reason to involve consciousness in questions of general intelligence. Do you think they are linked?

By the way, if you have a better definition of “general intelligence” than whatever definition was implied by my original challenge, I’m all ears.
