dr2chase,
@dr2chase@ohai.social

@futurebird @baldur That device, meh. But in datacenters etc., the special AI hardware (whole chips, even) provides a bunch of operations on lower-than-usual-precision floating-point numbers. Floats used to be mostly 32 and 64 bits, sometimes 80, and occasionally 128. The machine-learning hardware works with floats that are only 16 or even 8 bits long: https://en.wikipedia.org/wiki/Tensor_Processing_Unit
That's new, not currently useful for much non-ML work, and it makes the hardware faster and more power-efficient. /
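
To make the trade-off concrete, here is a minimal sketch using plain NumPy's float16, not actual TPU code (the TPU formats such as bfloat16 and fp8 split the exponent/mantissa bits differently, but the idea is the same): fewer bits per number means fewer significant digits and a smaller range, in exchange for less memory and bandwidth per value.

```python
# A minimal sketch (plain NumPy, no TPU involved) of what "16-bit floats"
# give up compared to the familiar 32- and 64-bit kinds.
import numpy as np

x = 1.0 / 3.0
print(np.float64(x))   # 0.3333333333333333  (~16 significant digits)
print(np.float32(x))   # 0.33333334          (~7 significant digits)
print(np.float16(x))   # 0.3333              (~3 significant digits)

# The representable range shrinks too: float16 tops out around 65504,
# so values that are routine in float32 simply overflow.
print(np.finfo(np.float16).max)   # 65504.0
print(np.finfo(np.float32).max)   # ~3.4e38

# Half (or a quarter) of the bits means proportionally less memory and
# bandwidth per number, which is a big part of the hardware speedup.
print(np.dtype(np.float16).itemsize,   # 2 bytes
      np.dtype(np.float32).itemsize,   # 4 bytes
      np.dtype(np.float64).itemsize)   # 8 bytes
```

(For the record: IEEE float16 spends 5 bits on the exponent and 10 on the mantissa, while the bfloat16 used by TPUs keeps float32's 8 exponent bits and cuts the mantissa to 7, trading precision for range.)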
