A lot of discussions on why X is good are very abstract, which makes it difficult or impossible to evaluate the claims. It's good to see something that talks about some limitations with a specific, real world, use case.
It's funny how often, when I look into why something doesn't work, it's because it basically wasn't tested.
To pick a not (purely) software example, I was wondering why people report so many problems with non-Tesla EV charging networks, so I looked into this.
A convo with an Electrify America person (largest U.S. charging network) months after the release of the Mustang Mach-E (top selling non-Tesla EV) revealed that they hadn't tested a Mach-E on their network.
EA's excuse was that Ford hadn't given them a vehicle to test and Ford's excuse was that there was so much demand they couldn't give a car to EA since so many customers were waiting. Not only are these bad excuses, a single person with the will to fix the problem and the appropriate authority at either company could've fixed this for all Ford EVs.
At EA, they could've literally bought a Mach-E and then re-sold it (this is how Consumer Reports tests cars and they have much less funding than EA)
or rented cars on Turo. On Ford's end, they have no problem allocating cars to automotive journalists and easily could've temporarily allocated an EV they send to journalists to the biggest charging network in the U.S. (not that one is enough to ensure compatibility, but the plan was to test one, so it would've at least allowed EA and Ford to execute their plan).
From what I've seen of other big companies, I bet multiple people at EA and Ford had all of these ideas and were blocked from above.
I wonder when (if?) driver behavior will get back to normal. Even though I've barely driven since the pandemic started, most of the most reckless driving I've ever seen has been in that period.
You can see this in the data: 2019-2021 had the biggest (%) increase in U.S. per capita motor vehicle fatalities over a two year period since 1944-1946, which was due to people coming back from the war. Normalized for miles travelled, 2019-2021 had the largest increase over a two year period as far back as there's data (1921).
I also wonder if/when a bunch of other things will get back to normal, e.g., supply chain disruptions. It's a big news item that car availability and normalized pricing are the worst they've been for decades and are only slowly improving, but you see this with a bunch of non-news items as well, e.g., when I wanted to buy a 9/16" wrench last year, literally no store I called (10+) had one except as part of a set, but every set at every store had the 7/16" stolen out of it, so you couldn't buy a full set.
I talked to a CivE who works a lot with trades and they told me they'd never seen anything like it. Sometimes a job will need a lot of something and a crew will buy out a store or two, but a lot of wrench sizes (and other tools) weren't available for purchase in the city at all; they'd never seen so many things unavailable.
I don't think wrench availability by itself is particularly interesting, but I find it interesting how many supply chains were too fragile/intricate to be fixed in "only" 2-3 years.
@stuartmarks@aardvark179 Yeah, I've noticed that a number of gov systems I interact with (US and Canadian) are badly backed up. From a queuing standpoint, it seems like the arrival rate didn't change much during the pandemic but the processing rate went to zero or near zero for a while and has come back to a lower steady state, resulting in unbounded queue growth and/or drops.
This doesn't seem unfixable, but there are some areas where it doesn't seem like there's progress towards a fix.
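The queueing dynamic above can be sketched with a toy fluid model (all the rates below are invented for illustration): if the processing rate "recovers" to a level that's still below the arrival rate, the backlog never drains and grows without bound.

```python
def backlog_over_time(arrival_rate, service_rates, initial=0.0):
    """Fluid model of a single queue: each period the backlog changes by
    (arrivals - capacity processed) and can't go below zero."""
    backlog = initial
    history = []
    for service_rate in service_rates:
        backlog = max(0.0, backlog + arrival_rate - service_rate)
        history.append(backlog)
    return history

# Arrivals hold steady at 100/period; processing collapses to 10/period
# during the disruption, then comes back to only 90/period.
service = [100] * 10 + [10] * 10 + [90] * 30
history = backlog_over_time(100, service)
print(history[9], history[19], history[-1])  # 0.0 900.0 1200.0
```

Because the recovered service rate (90) is still below the arrival rate (100), the backlog keeps growing forever; the only ways out are raising capacity above the arrival rate (plus extra to work off the backlog) or shedding demand (drops).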
Anonymous HN commenter on Amazon's anti-piracy effort: "... I had to fight hard to get even a proof of concept project greenlit.
There was extreme organizational disinterest - partly for a bad but predictable reason (we made a lot of money off these fraudsters) and partly for a reason so bad it still makes me cringe (money recovered from identified fraudsters went into the balance sheet of a different SVP's org, so our org viewed it as a waste of time) ..."
When I was in school, in a couple different contexts, someone asked about binary classifiers with < 50% accuracy and the standard smart/clever response was that there's no such thing because you can just invert the classification if something is right < 50% of the time (and I think this was even in at least one textbook).
I nodded along at the time but, in retrospect, this seems quite wrong? It's often not trivial to tell if your classifier has > 50% accuracy or not.
As an example (which isn't binary but illustrates one core problem), many years ago, I was talking to someone at a startup trading firm that had been losing money for quite a while but thought their approach was fundamentally sound and should eventually make money.
It turns out they were right (or lucky); the firm still exists and is profitable, but if they were somewhat less lucky, the firm would've gone under or changed approaches and they never would've known if their approach was sound.
In the real world, you don't have an oracle that tells you what the "true" success rate of your approach is, and you often have long runs where your approach loses.
Even on binary problems, it's often correct to bet on an approach that's had < 50% success rate. Sometimes that will be the right call and sometimes it won't and then you'll actually be relying on a classifier that has < 50% accuracy. Not only will you not know when you're making the decision, you often can't tell in retrospect.
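A toy simulation (the accuracies and sample sizes here are made up) shows why "just invert it" isn't actionable: over a realistic number of observed decisions, a classifier with a true accuracy of 48% and one with 52% produce heavily overlapping results, so you often can't tell which side of 50% you're on, and inverting is itself a bet.

```python
import random

def observed_accuracy(true_acc, n_trials, rng):
    """Fraction of n_trials independent decisions a classifier with the
    given true accuracy happens to get right."""
    return sum(rng.random() < true_acc for _ in range(n_trials)) / n_trials

rng = random.Random(42)
# Two classifiers straddling the 50% line, each observed over runs of
# 200 decisions; count how often each *looks* better than a coin flip.
for true_acc in (0.48, 0.52):
    samples = [observed_accuracy(true_acc, 200, rng) for _ in range(1000)]
    above = sum(s > 0.5 for s in samples) / len(samples)
    print(f"true accuracy {true_acc}: looks above 50% in {above:.0%} of runs")
```

With 200 decisions, the standard error is about 3.5 percentage points, so the 48% classifier looks like a winner in a sizable minority of runs and the 52% classifier looks like a loser about as often; distinguishing them reliably takes far more outcomes than most real decisions ever get.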
In the former, KP suggests that people should make judgments (and not defer to experts) because people can tell if they have good judgment. If they notice they're making bad calls, they can choose to defer to experts.
In the latter, JG tells a story of what she says was a badly failed workshop idea where there was an exercise that was intended to get people to notice when they had clouded judgment.
A minority of people did notice, but most people decided, seemingly quite incorrectly, that they were unbiased and truth seeking.
I know quite a few people, both IRL and online, who consistently have truly bad judgment to the point where their belief is a good inverse signal and of course they don't notice.
I'm sad about the direction popular board games have gone. Until 2008-ish, there was a lot of (euro/ameritrash) board game innovation that increased the number of meaningful decisions you have per unit time and increased the impact of skill differences.
On average, since then, games have been increasing complexity and randomness and decreasing the impact of skill differences. You can see this in, e.g., BGG top games over time: games like Puerto Rico and Caylus have been replaced by more complex games that take longer to play and are also more random.
People often complain that games aren't complex enough, that a more complex game with more randomness per unit time is more fun, or that a game wasn't fun because it was decided before the last turn, but of course there's no way to have a non-random game with meaningful decisions if anyone can win on the last turn.
It's amazing how often, when I look into why something turned to junk (consumer products, tools, etc.), it turns out that it's because a PE firm or a PE-like software company acquired the thing and then made an extremely short-term optimized move that wiped out most of its value and potential revenue. For example, I was looking into why a formerly active automotive forum is now a ghost town, and it turned out it was bought by VerticalScope, which has apparently killed ~1k forums?