weekend_editor,
@weekend_editor@mathstodon.xyz

@inthehands @ajsadauskas

If your dataset contains biases, then ANYTHING you train on it will inherit those biases (absent specific corrective action).

Pretty much every textbook warns you that your models will sometimes pick up on non-obvious "details" in the training data. This applies to AI. It applies to machine learning. It applies to statistics, even plain old regression and classification.
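A minimal sketch of what that looks like (synthetic data, plain scikit-learn logistic regression; purely illustrative, not from the original post): even when the sensitive attribute is withheld from the model, a correlated proxy feature lets the model reconstruct the bias baked into the labels.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # A "protected" group membership the model never sees directly...
    group = rng.integers(0, 2, n)
    # ...and an innocuous-looking proxy correlated with it
    # (think zip code, school name, word choice).
    proxy = group + rng.normal(0, 0.3, n)
    skill = rng.normal(0, 1, n)

    # Biased historical labels: outcomes partly driven by group, not skill.
    label = (skill + 1.5 * group + rng.normal(0, 0.5, n) > 0.75).astype(int)

    X = np.column_stack([skill, proxy])   # group itself is NOT a feature
    model = LogisticRegression().fit(X, label)

    # The model recovers the bias through the proxy anyway.
    for g in (0, 1):
        rate = model.predict(X[group == g]).mean()
        print(f"group {g}: predicted positive rate = {rate:.2f}")

Running this prints noticeably different positive-prediction rates for the two groups, even though "group" was never a model input; that is the inheritance-through-the-data effect, and it holds for a deep network just as much as for this regression.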

If AI/statistics practitioners know this, why do we have to keep re-learning this lesson the hard way?

Perhaps management needs a couple of knocks to the side of the head to beat this fact into them?
