Saganastic,

I agree, there's more going on in a human brain. But fundamentally both humans and LLMs use neural networks. The design of the neural network in an LLM is much simpler than the neural network in a human.

But they both "think" to come up with an answer. They both cross-reference learned information. They're both able to come up with an answer that is statistically likely to be correct based on their learned information.

There's a ton of potential to take the neural networks in LLMs beyond just language. To have them conceptualize abstract ideas the way a human would. To add specialized subsections to the model for math and logic. I think we're going to see a ton of development in this area.

And I think you're right, they're not exactly the same as humans. But fundamentally there is a lot of similarity. At the end of the day, they are modeled after human brains.
