conciselyverbose,

Mathematically and algorithmically, it’s fundamentally very similar. That’s the point. Simulated annealing doesn’t need to understand the search space to find a pretty good answer to a problem. It just needs a way to score how good a candidate answer is, and a way to nudge candidates in the direction of better scores.
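To make that concrete, here's a minimal sketch of simulated annealing (not anything from the original comment, just a generic illustration): the algorithm only ever calls a black-box `score` function and a `neighbor` perturbation; it never inspects the structure of the problem itself. The toy objective and all names here are my own choices for illustration.

```python
import math
import random

def simulated_annealing(score, neighbor, start, steps=10_000, t0=10.0, cooling=0.999):
    """Needs only a score function and a way to perturb a candidate --
    no knowledge of the search space's structure at all."""
    current = start
    current_score = score(current)
    best, best_score = current, current_score
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        cand_score = score(candidate)
        # Always accept improvements; accept worse candidates with a
        # probability that shrinks as the temperature cools, which is
        # what lets the search escape local minima early on.
        if cand_score < current_score or random.random() < math.exp((current_score - cand_score) / t):
            current, current_score = candidate, cand_score
            if current_score < best_score:
                best, best_score = current, current_score
        t *= cooling
    return best, best_score

# Toy use: minimize a bumpy 1-D function with no gradient information,
# only blind local nudges and a score.
random.seed(0)
f = lambda x: x * x + 10 * math.sin(x)          # global minimum near x ≈ -1.3
step = lambda x: x + random.uniform(-0.5, 0.5)  # blind local perturbation
x, fx = simulated_annealing(f, step, start=8.0)
```

The point of the sketch is that `f` could be anything scorable; the annealing loop stays identical, which is the "doesn't need to understand the search space" property.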

What LLMs are doing is not mysterious at all. Why a specific parameter in a model ends up with the value it does is mysterious, but there’s no mystery in the algorithm itself. By contrast, we can’t even guess at most of the algorithms that make up the brain.
