paco,

Any stats nerds out there want to offer me an opinion? One of the security programmes I run will be tracking time-to-decision (typically measured in calendar days). We make between 50 and 125 decisions a year, so there are only 5-10 data points in a typical month. As you can imagine, with any sort of human approval process there will be outliers: some decisions go very quickly ("no way in hell") and some go very slowly.

I want to report on time-to-decision while blunting the impact of outliers on our statistics. If one decision takes 6 months and the others take a couple of weeks, I don't want that single outlier making us look bad. The math question:

I was gonna use a trimmed mean, but reading about Winsorised means has also been interesting. My assumption was a trimmed mean excluding the bottom 5% and top 5%, then reporting the average of the remaining 90%.
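For concreteness, here's a rough Python sketch (scipy ships both estimators) on made-up decision times, not real data. One caveat worth knowing: scipy trims a whole number of points per side, so with only ~10 points a 5% cut trims nothing at all; the sketch uses 10% (one point off each end) so the two methods actually differ.

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

# Hypothetical time-to-decision values in calendar days:
# one ~6-month (180 day) outlier among otherwise ~2-week decisions.
days = np.array([9, 11, 12, 14, 14, 15, 16, 18, 21, 180])

# Trimmed mean: drop the bottom 10% and top 10%, average the rest.
trimmed = stats.trim_mean(days, proportiontocut=0.10)

# Winsorised mean: clamp the bottom/top 10% to the nearest kept
# value (here 180 -> 21), then average everything.
winsorised = winsorize(days, limits=(0.10, 0.10)).mean()

print(f"plain mean:      {days.mean():.1f} days")   # 31.0
print(f"trimmed mean:    {trimmed:.1f} days")       # 13.9
print(f"winsorised mean: {winsorised:.1f} days")    # 15.3
```

The practical difference: trimming discards the extreme values outright, while Winsorising replaces them with the nearest kept value, so every decision still counts in the n but the tails can't dominate.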

Anybody have better ideas? Anybody with opinions on trimmed vs. Winsorised means?
