Indeed. For offline use, a good inscribed rock can still go a long way: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
Those examples are pretty bad, though.
The guard deters thieves by mere presence - just like how an apparently-locked door, even if unlocked, deters more thieves than a wide-open door.
The example doctor provides psychological support and real advice when things are _obviously_ wrong. Those are things that are way more useful than a rock (also, in real cases, patients do return after 2 weeks).
(the futurist provides no value whatsoever, here we do agree :P)
But I feel the entire underlying message is questionable. "Pretty good heuristics" are honestly pretty good! Sometimes (oftentimes, even) they're all you need, and they're much better than doing extensive research. You should only do the extensive research for the "volcano" scenario, where the consequences are dire - otherwise, you're probably wasting time.
> The guard deters thieves by mere presence - just like how an apparently-locked door, even if unlocked, deters more thieves than a wide-open door.
I don't really have any point to make, but it is fun to note that you can buy these.
https://www.amazon.co.uk/Secured-ADT-Alarm-Window-Stickers/d...
Stickers, and these! Solar powered! [0]
[0] https://www.amazon.com/BNT-Security-Simulated-Surveillance-R...
I loved that article :-) . The only thing I have to add is that the cult of the rock's main opposition is almost always another cult. People reading scientific papers and keeping track of the data don't stand a chance, they simply don't exude the same steely confidence.
What he describes is a very basic concept in data science, and the tradeoffs in making a binary classifier (which this essentially is) are very well explored via the Receiver Operating Characteristic (ROC) curve (https://en.wikipedia.org/wiki/Receiver_operating_characteris...).
This article is from 2022, and data science wasn't exactly novel by then. Considering the author appeals (successfully) to those big-brained Silicon Valley types, that leads me to throw some shade at the writer and his readership.
Designing detectors for rare events is a pretty common problem, dealt with extensively in statistics. After all, the linked methodology was devised for WW2 radar operators: the default state for a radar is "there isn't a German plane in range", and despite that they needed a mathematical approach to measure how good their radars were.
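To make the ROC tradeoff concrete, here's a minimal sketch with a toy classifier - the score distributions and the 1% positive rate are made up purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical detector scores: negatives (no plane) centered at 0, positives (plane) at 1.5
    neg_scores = rng.normal(0.0, 1.0, 10000)
    pos_scores = rng.normal(1.5, 1.0, 100)  # rare event: ~1% of samples

    for threshold in (0.5, 1.0, 1.5, 2.0):
        tpr = np.mean(pos_scores >= threshold)  # true positive rate (sensitivity)
        fpr = np.mean(neg_scores >= threshold)  # false positive rate
        # Even a small FPR means many false alarms when negatives vastly outnumber positives
        print(f"threshold={threshold:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}  "
              f"hits={tpr * len(pos_scores):.0f}  false alarms={fpr * len(neg_scores):.0f}")

Each (FPR, TPR) pair is one point on the ROC curve; sliding the threshold just moves you along it, trading misses for false alarms.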
It definitely tripped my "self-important bullshit author" detector, which doesn't happen to be a very useful heuristic since it requires me to read the whole article.
But anyway, the examples used are not in any way like one another. The "futurist" in particular fundamentally lacked the "they do things well and then get sloppy" pattern, which leads me to believe the author has a particular axe to grind about the topic of tech progress.
For a lot of the other examples, the entire point is that the person knows they are being sloppy; they know they're letting issues slip through the cracks. If you know you're doing a bad thing, what use is the analogy to a heuristic? It's not a heuristic to be sloppy, it's just being sloppy.
Once again, the article is lampshading the naive fallacy that if a classifier gets a large percentage of the input distribution right, that classifier is good. Basic literature about designing these classifiers recognizes these fallacies, and knows how to correct for them.
And part of that correction lies in the fact that different domains place different costs on false positives and false negatives.
He describes different scenarios, and the apparent contradiction is resolved by weighing the consequences of getting things wrong. The futurist can do whatever, since nothing happens whether he gets things right or wrong. In cases with real stakes, where getting things wrong has an immense cost, you just have to accept you will cry wolf a lot of the time.
In the doctor scenario, let's say the doctor is highly skilled: given a certain set of symptoms, they catch 99% of actual cancers and wrongly flag only 1% of healthy patients. Thing is, only 1 person in 1000 actually has cancer, which means even this amazing doctor will tell roughly 10 healthy people for every sick person that they might have cancer. Is the doctor bad at their job? No, they're excellent - the inconvenience caused to those people is dwarfed by the cost of letting a sick person go undiagnosed.
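A back-of-the-envelope check, assuming the numbers above (1-in-1000 prevalence, 99% sensitivity, 1% false positive rate):

    prevalence = 1 / 1000
    sensitivity = 0.99
    false_positive_rate = 0.01

    per_1000 = 1000
    sick = prevalence * per_1000                                # ~1 sick patient
    true_positives = sensitivity * sick                         # ~0.99 correctly flagged
    false_positives = false_positive_rate * (per_1000 - sick)   # ~10 healthy patients flagged
    ppv = true_positives / (true_positives + false_positives)

    print(f"false positives per true positive: {false_positives / true_positives:.1f}")
    print(f"probability a flagged patient is actually sick: {ppv:.2%}")

So a flagged patient is still only ~9% likely to be sick - and flagging them anyway is the right call, because of the asymmetry in costs.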
If a factory is making Happy Meal toys, and they determined that 1 out of every 1000 they produce is faulty, should they invest in a similar screening process? No - the cost of the process, plus the cost of handling false positives, far outweighs the minor problem of a child getting a broken toy once in a while.
Same numbers, different common sense actions.
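The "different common sense actions" part is just expected cost. A rough sketch, with cost figures that are entirely made up for illustration:

    # Made-up costs to show why identical error rates can lead to opposite decisions
    def expected_cost_per_item(miss_cost, false_alarm_cost, screen_cost,
                               prevalence=1/1000, sensitivity=0.99, fpr=0.01):
        """Expected cost per patient/toy, without screening vs. with screening."""
        without_screening = prevalence * miss_cost
        with_screening = (screen_cost
                          + prevalence * (1 - sensitivity) * miss_cost
                          + (1 - prevalence) * fpr * false_alarm_cost)
        return without_screening, with_screening

    # Doctor: missing a cancer is catastrophic, a false alarm is an unpleasant follow-up
    print(expected_cost_per_item(miss_cost=1_000_000, false_alarm_cost=500, screen_cost=50))
    # Toy factory: a broken toy is a cheap refund, and handling false alarms isn't free either
    print(expected_cost_per_item(miss_cost=5, false_alarm_cost=2, screen_cost=1))

With these (invented) numbers, screening is clearly worth it for the doctor and clearly not worth it for the factory, even though the error rates are identical.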
Once again, I think your analysis is an oversimplification. (Also, starting with "once again" is so condescending it makes my skin crawl.)
I am sorry, I did not mean to be rude, I apologize.
I just wanted to say that I still think the article doesn't really show anything surprising to people who took an undergrad stats/data science course, and the apparent "conundrums" are well understood.