MaxLaves (@MaxLaves@det.social)

@atpfm ML scientist here: The Neural Engine has always been fp16 only. When you can use it, it delivers incredible performance at very low power draw. However, there's no public API to program it directly. You're restricted to CoreML, which can't be used to develop and train models itself (you need PyTorch etc.), and model conversion to CoreML fails more often than not because it lacks support for many layer ops. I have high hopes for MLX.
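To illustrate the conversion path described above, here's a minimal sketch, assuming PyTorch and the coremltools package; the toy model, input shape, and output filename are placeholders:

import torch
import coremltools as ct

# Placeholder model; any traceable PyTorch module follows the same path.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, kernel_size=3),
    torch.nn.ReLU(),
)
model.eval()

# CoreML conversion starts from a traced (or scripted) module.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

# Convert to an ML Program with fp16 compute, letting CoreML schedule work
# on the Neural Engine where it can. Conversion fails here if the graph
# contains layer ops that coremltools does not support.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape)],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL,
)
mlmodel.save("model.mlpackage")

Note that even after a successful conversion, whether the Neural Engine actually runs the model is decided by CoreML at runtime, not by the developer, which is part of the restriction the post describes.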

siracusa (@siracusa@mastodon.social)

@MaxLaves So what do you think about the M4 neural engine’s supposed 38 trillion operations per second compared to 18 trillion in the M3? Does that seem plausible without some shenanigans in how those numbers are determined?
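One hypothetical way the jump could be partly bookkeeping rather than hardware: TOPS figures depend on the precision at which operations are counted. A back-of-envelope sketch, assuming (not confirmed here) that the 38-TOPS figure were counted at INT8 while the 18-TOPS figure was fp16:

# Hypothetical normalization of TOPS figures to a common precision.
m4_quoted_tops = 38   # assumption: counted at INT8
m3_quoted_tops = 18   # assumption: counted at fp16

# If INT8 throughput is 2x fp16 on the same hardware, the fp16-equivalent
# figure for the newer chip is roughly half its quoted number.
m4_fp16_equiv = m4_quoted_tops / 2
print(m4_fp16_equiv, m3_quoted_tops)  # 19.0 vs 18 -> only a modest gain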
