wordshaper,
@wordshaper@weatherishappening.network

Layering extra things on top of the “making shit up at random” machine isn’t going to fundamentally make it stop being a “making shit up at random” machine. https://mstdn.social/@TechCrunch/112383184056923602

tartley,
@tartley@mastodon.social

@wordshaper That is true, but my layman speculation is that this is sorta partly how humans work. You have a deep subconscious level that makes lots of noisy associations, and a bunch of filters that prevent most of the unworkable or irrelevant or socially unacceptable ideas from making it up to your conscious thoughts.

A make-up-random-stuff machine with filters might be more creative and productive than the opposite: attempting to deduce true things from axioms by pure deductive reasoning.
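
As a toy sketch of that generate-then-filter shape (everything here, the word list, both filters, and the counts, is invented purely for illustration; it's not a model of cognition or of any real LLM pipeline):

```python
# Toy sketch of the "noisy generator + layered filters" idea: blurt out
# lots of random associations, then let filters decide what surfaces.
# The word list, the filters, and the counts are all invented for
# illustration; this is not a model of cognition or of any real LLM.
import random

WORDS = ["river", "loud", "teapot", "orbit", "velvet", "sneeze", "glass"]

def noisy_generator(n):
    """The subconscious layer: emit n random two-word associations."""
    return [(random.choice(WORDS), random.choice(WORDS)) for _ in range(n)]

def is_workable(idea):
    """Filter: a pairing of a word with itself isn't an idea at all."""
    return idea[0] != idea[1]

def is_relevant(idea, topic):
    """Filter: keep only associations that touch the current topic."""
    return topic in idea

def conscious_thoughts(topic, n=50):
    """Generate freely, then filter; only survivors reach 'awareness'."""
    return [i for i in noisy_generator(n)
            if is_workable(i) and is_relevant(i, topic)]

print(conscious_thoughts("river"))
```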

wordshaper,
@wordshaper@weatherishappening.network

@tartley I'm in the middle of learning French, and I'm also not a layman with neural networks. While this is one of the ways people learn, it's a way we learn only reflexive and mostly content-free things.

It's definitely not how people learn most things, like languages or skills. (Source: my brain, though take that with some caution.) We, or I at least, seem to start with a more abstract symbolic understanding and layer things on top of that. LLMs are entirely surface-level stuff.

wordshaper,
@wordshaper@weatherishappening.network

@tartley Which isn't to say that LLM-like activity isn't involved in cognition, but if it is, it's probably layered on top of deeper layers of symbol manipulation and relationships. A purely, or even mostly, LLM-style kind of "learning" would be one with basically no understanding of anything: purely reflexive, without any kind of depth.

tartley,
@tartley@mastodon.social

@wordshaper Absolutely agree. I didn't mean to imply anything about the process of human learning, only the subsequent exploitation of the resulting associations when we're being creative.

wordshaper,
@wordshaper@weatherishappening.network

@tartley Something LLM-like, or rather trained-neural-network-like, may be used as a boundary translation layer: an interface between language and whatever we do to build and manipulate meaning. It almost undoubtedly is, tbh.

It's hard to say how related they are to what brains do, partly because we don't quite understand neural networks and partly because we really don't understand brains. But if LLMs and things like them are involved, they're likely not involved at the conceptual level.
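
As a cartoon of that layering, with lookup tables standing in for the learned boundary translator (the Fact type, PARSE, and RENDER are all invented for illustration; this is not a claim about brains or any real system):

```python
# Cartoon of a "boundary translation layer": a learned, LLM-ish component
# only translates between surface language and an inner symbolic form,
# and the actual manipulation of meaning happens on that symbolic form.
# The lookup tables below are stand-ins for the learned translator;
# nothing here describes how brains or any real system actually work.
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    relation: str
    obj: str

# "Learned" surface -> symbol mapping, faked with a table.
PARSE = {"the cat sleeps": Fact("cat", "state", "sleeping")}
# "Learned" symbol -> surface mapping, also faked.
RENDER = {Fact("cat", "state", "sleeping"): "the cat is sleeping"}

def to_symbols(utterance: str) -> Fact:
    return PARSE[utterance]            # inbound boundary layer

def reason(fact: Fact) -> Fact:
    # The deeper, symbolic layers would operate here, on structure
    # rather than on strings; the identity function marks the seam.
    return fact

def to_language(fact: Fact) -> str:
    return RENDER[fact]                # outbound boundary layer

print(to_language(reason(to_symbols("the cat sleeps"))))
# -> "the cat is sleeping"
```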

tartley,
@tartley@mastodon.social

@wordshaper Very fair. My speculation about "how brains actually work" was crudely expressed. I should have said "Behaviourally, it seems to have some things in common."
