I feel like I need a button on HN for, as another commenter put it, "folksy wisdom porn", where an article superficially touches all the right buttons to get it to the front page (hey, I always fail to reach my goals, I need a new framework!), but is just anecdotes and shows the results of the author's own Rorschach test.
The section on NASA made absolutely no sense to me:
> NASA had a fixed budget, fixed timeline, and a goal that bordered on the absurd: land a man on the moon before the decade was out. But what made it possible wasn’t the moonshot goal. It was the sheer range of constraints: weight, heat, vacuum, radio delay, computation. Each constraint forced creative workarounds. Slide rules and paper simulations gave us one of the most improbable technological feats in history.
Wut? The constraints are what made it a hard problem, but the only reason they were able to hit this goal in an impossibly short timeline is the huge amount of resources that they put toward a very clear goal (which was, honestly, less "let man explore the heavens" than "beat the Soviets").
This is why I love Hacker News. I often find myself falling for this stuff and thinking "yeah, that makes a lot of sense". It's always good to come back to the comments and get a good old-fashioned reality check.
Indeed. For offline use, a good inscribed rock can still go a long way: https://www.astralcodexten.com/p/heuristics-that-almost-alwa...
Those examples are pretty bad, though.
The guard deters thieves by mere presence - just like how an apparently-locked door, even if unlocked, deters more thieves than a wide-open door.
The example doctor provides psychological support and real advice when things are _obviously_ wrong. Those are things that are way more useful than a rock (also, in real cases, patients do return after 2 weeks).
(the futurist provides no value whatsoever, here we do agree :P)
But I feel the entire underlying message is questionable. "Pretty good heuristics" are honestly pretty good! Sometimes (oftentimes, even) they're all you need, and much better than doing the extensive research. You should only do the extensive research in the "volcano" scenario, where the consequences are dire; otherwise, you're probably wasting time.
> The guard deters thieves by mere presence - just like how an apparently-locked door, even if unlocked, deters more thieves than a wide-open door.
I don't really have any point to make, but it is fun to note that you can buy these.
https://www.amazon.co.uk/Secured-ADT-Alarm-Window-Stickers/d...
Stickers, and these! Solar powered! [0]
[0] https://www.amazon.com/BNT-Security-Simulated-Surveillance-R...
I loved that article :-) . The only thing I have to add is that the cult of the rock's main opposition is almost always another cult. People reading scientific papers and keeping track of the data don't stand a chance, they simply don't exude the same steely confidence.
What he describes is a very basic concept in data science, and the tradeoffs in making a binary classifier (which this essentially is) are very well explored in the Receiver Operating Characteristic (ROC) curve (https://en.wikipedia.org/wiki/Receiver_operating_characteris...).
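For anyone who hasn't met the curve before, the idea is simple enough to sketch in a few lines: sweep the classifier's decision threshold and record the (false positive rate, true positive rate) pair at each step. The scores and labels below are made-up toy data purely for illustration, not anything from the article:

```python
# Minimal sketch of tracing an ROC curve for a binary classifier:
# sweep the decision threshold and record (FPR, TPR) at each step.

def roc_points(scores, labels):
    """Return (false_positive_rate, true_positive_rate) pairs,
    one per candidate threshold, from highest threshold to lowest."""
    thresholds = sorted(set(scores), reverse=True)
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for t in thresholds:
        # everything scoring at or above the threshold is flagged "positive"
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / negatives, tp / positives))
    return points

# toy data: higher score = "more likely a German plane in range"
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   1,   0,   0]

for fpr, tpr in roc_points(scores, labels):
    print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")
```

A very permissive threshold catches every plane but also flags every flock of birds; a very strict one stays quiet almost always, which is exactly the rock-inscribed heuristic from the article.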
This article is from 2022, and data science wasn't exactly novel by then. Considering the author appeals (successfully) to those big-brained Silicon Valley types, that leads me to throw some shade at both the writer and his readership.
Designing detectors for rare events is a pretty common problem, dealt with extensively in statistics. After all, the linked methodology was devised for WW2 radar operators: the default mode for a radar is 'there isn't a German plane in range', yet they still needed a mathematical approach to measure how good their radars were.
It definitely tripped my "self-important bullshit author" detector, which doesn't happen to be a very useful heuristic since it requires me to read the whole article.
But anyway, the examples used are not in any way like one another. The "futurist" in particular fundamentally lacks the "they do things well and then get sloppy" pattern, which leads me to believe the author has a particular axe to grind about the topic of tech progress.
For a lot of the other examples, the entire point is that the person knows they are being sloppy, they know they're letting issues slip through the cracks. If you know you're doing a bad thing, what use is the analogy to a heuristic? It's not a heuristic to be sloppy, it's just being sloppy.
Once again, the article is lampshading the naive fallacy that if a classifier gets a large percentage of the input distribution right, that classifier is good. Basic literature about designing these classifiers recognizes these fallacies, and knows how to correct for them.
And part of that correction lies in the fact that different domains place different costs on false positives and false negatives.
He describes different scenarios, and the apparent contradiction is resolved by weighing the consequences of getting things wrong. The futurist can do whatever, since nothing happens whether he gets things wrong or right. In the case with real stakes, where getting things wrong has an immense cost, you just have to accept that you will cry wolf a lot of the time.
In the doctor scenario, let's say the doctor is highly skilled and can tell with 99% accuracy whether a patient with a certain set of symptoms has cancer. Thing is, only 1 person in 1000 actually does, which means even this amazing doctor will tell roughly ten healthy people, for every sick person, that they have cancer. Is the doctor bad at their job? No, he's excellent: the inconvenience caused to those people is dwarfed by the cost of letting a sick person go undiagnosed.
If a factory is making Happy Meal toys and has determined that 1 out of every 1000 it produces is faulty, should it invest in a similar screening process? No: the cost of the process, plus the cost of handling false positives, far outweighs the minor problem of a child occasionally getting a broken toy.
Same numbers, different common sense actions.
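That "same numbers, different actions" point is easy to check with a few lines of arithmetic. The figures below are the ones from the comment (a 1-in-1000 base rate and a 99%-accurate screen, i.e. a 1% false-positive rate); the function name and the assumption of near-perfect detection are mine, for illustration only:

```python
# Hedged sketch of the base-rate arithmetic shared by the doctor and the
# toy factory: same numbers in, same counts out, and only the cost attached
# to each kind of error changes the sensible action.

def screening_outcomes(population, base_rate, false_positive_rate,
                       detection_rate=1.0):
    """Expected true and false positives when screening a whole population."""
    sick = population * base_rate
    healthy = population - sick
    true_positives = sick * detection_rate          # real cases caught
    false_positives = healthy * false_positive_rate  # healthy people flagged
    return true_positives, false_positives

tp, fp = screening_outcomes(population=1000,
                            base_rate=1 / 1000,
                            false_positive_rate=0.01)
print(f"expected true positives: {tp:.2f}, expected false positives: {fp:.2f}")
```

Per 1000 screened you expect about one real case against roughly ten false alarms. Whether that ratio is a triumph or a waste of money depends entirely on whether a miss costs a life or a Happy Meal toy.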
Once again, I think your analysis is an oversimplification. (Also, starting with "once again" is so condescending it makes my skin crawl.)
I am sorry, I did not mean to be rude, I apologize.
I just wanted to say that I still think the article doesn't really show anything surprising to people who took an undergrad stats/data science course, and the apparent 'conundrums' are well understood.
Can't agree more. There are a lot of excellent comments that are better than the original post. I always think about a way to collect those pithy comments, but so far I haven't figured it out. I was wondering if anybody else has had the same idea.
Often I find myself going directly to the comments … harvesting the wisdom of the crowd…
If someone is bringing up John Boyd in a serious capacity to make a point, it's likely they have no idea what they're talking about.
Why is that? Is Boyd not considered a serious source?
Boyd's role (and that of the fighter mafia in general) in the development of 4th-generation fighters is very exaggerated. Bringing him up (and Feynman, honestly) is a sign you're about to read some shallow, self-help-esque schlock.
Virtually every blog post that makes it to the front page is full of shit. They're either folksy wisdom porn, or a novice who just discovered a minor technical detail that is superficially new information to the HN audience (and leads to wrong conclusions). But the real key is a clickbait title; it's the "shocked face thumbnail" of every YouTube video made today. We can't stop clicking on them.
The "wisdom of the crowd" is a combination of ignorance and mesmerization, and the result is a front page of dreck.
> Virtually every blog post that makes it to the front page is full of shit.
While I agree many posts are full of shit, I think it's important not to throw the baby out with the bathwater. There are tons of HN posts that I find incredibly insightful and informative. The ones I like usually fall into 2 categories:
1. They are a detailed description of something the author actually did, and show a really cool solution or implementation of something. They don't always have to be jaw-droppingly amazing (though some are), but they just have to show that the blog post is the outcome of the work, not the other way around.
2. The author has been thinking about a problem for a while and brings a clear, informative, well-argued insight to the problem space. E.g. this post, https://news.ycombinator.com/item?id=37509507, is one of my favorites that helped me understand phenomena I was definitely aware of but hadn't yet tied together.
For me, this "folksy wisdom porn" is a cheap, bad, superficial version of #2 (FWIW, I think what you describe as "a novice who just discovered a minor technical detail that is superficially new information" is the cheap, bad, superficial version of #1). It has the veneer of some sort of deep insight, but when you actually get to the details and try to understand it, it either just doesn't make sense or is essentially word salad.
In response to this, I was going to craft a comment that critiqued the critiques and began with the same wording as the critiques but instead I'll say this...
A nuanced critique! - excellent.
Please, nobody reply to this comment.
> 1. They are a detailed description of something the author actually did, and show a really cool solution or implementation of something
This is certainly entertaining, and feels insightful and informative. But usually it is inaccurate, subjective, or wrong, because it's an individual non-expert's experience.
> 2. The author has been thinking about a problem for a while and brings a clear, informative, well-argued insight to the problem space
Again, feels like wisdom, but an armchair expert is not an actual expert, and "I thought about it for a while" is not the same thing as "academics critically discuss at length and come to a consensus".
In almost all cases, actual experts have actually studied a thing for a long time, or practiced it for a long time, and have actual evidence to go on. Blog posts don't - because real experts tend to publish in books and journals first (which are peer reviewed), not blogs. If the blog post isn't showing its work with a lot of evidence, critical study, and consensus, it's extremely likely to be bullshit.
I say all this because in the 16 years I've been on this forum, I can count on one hand the number of front page blog posts that accurately portray my field. I'm guessing the real information is not clickbaity enough, or it doesn't validate the biases and expectations of readers. However, the number of posts full of bullshit has been endless.
Without tight constraints you get vague solutions governed by politics, which tends to fuck up the constraints in a vicious cycle. Without constraints you get a sea of solutions; if constraints are too tight, you get no solutions. What you want is constraints loose enough to let you explore a ridgeline or constellation with pretty clear local maxima, but tight enough not to admit uncountably many solutions that breed worthless rhetoric.
I kept skimming through the article, waiting for it to say something useful. I flamed out around the NASA example. The internet is overflowing with this kind of superficial nonsense.
At least this one was just a waste of time, instead of being actively harmful like some of it is.
right, the implication would be that if the goal was something easy like walking the President's dog NASA would never have been able to do it due to the lack of innovation fostering constraints.
For me the post just didn't make sense. Constraints are only constraints with respect to reaching something, and that something is a goal.
The author confused "constraints", which are rules or boundaries that define a situation or problem, with "strategy", which is a meta-solution or algorithm for achieving a goal. A strategy doesn't have to be Google Maps giving you turn-by-turn directions; a better strategy is a set of rules or an algorithm that guides you toward your goal. In all situations, situational awareness and common sense trump any algorithm or strategy. It's like Google Maps telling you to drive straight while your eyes are telling you the bridge isn't there and driving straight will send you into the river.
>but is just anecdotes
While there's a hefty dose of junk, most of what's worth in life advice is "just anecdotes" too.
"Research-driven" or "scientific" insight for such matters is a joke - and often more snake-oily and based on some current fad than any crude anecdotes.
That paragraph had me scratch my head as well, but overall the article is making a valid point: https://news.ycombinator.com/item?id=44236665
Thanks for this comment, I'm glad it's not only me. I came away feeling like I had enjoyed the article and yet it was a waste of time. Classic advice, whose only use is passing it on.
What actually made the moon mission possible was recruiting a bunch of Nazi rocket scientists[1] that gained their expertise making weapons.
This comment always comes up. Yes, their knowledge was a huge help to the space program and we conveniently forgot about their crimes in exchange for it. But it was THE sole thing that made the moon landing possible? Not our massive industrial and economic capacity? Not the droves of non-nazi scientists and engineers that were driven out of Europe and into the U.S. during the war? Not our academic institutions?
We obviously couldn't have gotten to the moon without any of those things, but someone always jumps in and credits the whole endeavor to Operation Paperclip as if it's a revelation. Gotta cash in on the modern trend of "erm, actually"-ing everything, I guess.
This is not like Intel poaching Jim Keller from AMD. The U.S. had only a nascent rocket program at the end of the war. They did more than recruit scientists; they also came back with loads of hardware. Within a year they were launching seized V-2s to altitudes that matched the Germans', and within 7-8 years they were manufacturing rockets that matched Germany's. The only rival to the U.S. rocket program after the war was the Soviet Union, which had its own program to recruit Nazi scientists[1].
Every field is "hobbyist level" before certain breakthroughs are made that allow it to take off. Look at computers before and after the invention of integrated circuits, transistors, or even vacuum tubes.
In this case those initial breakthroughs were made by the Nazis. Nobody is disputing that. But there is quite a leap to be made between lobbing explosives at London and putting live humans on the moon and then retrieving them, and many things besides scientists with dubious pasts were needed to make that leap. I do not understand what drives somebody to downplay those accomplishments every time the subject comes up. Your statement that those scientists were "what actually made the moon mission possible" is worded in a way that implies that they were the only thing that made it possible, rather than one factor among many, and that is objectively false. It's like saying that a spark plug is "what actually makes a car run".
We essentially imported the Nazi rocket program and continued investing in it. Obviously that continued investment was essential, but I don't think it's overstating it to say it's what made it possible. Many of the countries that have nukes have them as a direct result of espionage, and I don't think many people would object to saying that it was the espionage that made it possible, even though that is far less than importing large groups of scientists and equipment.
Constraints make the specific goal (moon landing) harder, but they force technological development. If landing on the moon had been 'easy' with existing tech and had not required that massive investment of resources, progress on everything else would have been delayed. Material science, engines, batteries, solar, radios, integrated circuits, even PCs and smartphones: sure, it would all have happened eventually, but the key innovations were made when they were because of constraints.
Does this imply the secret to success is making your life / business / product space artificially hard?
It seems like you'd do a better job "setting yourself up for success" than making your life as hard as possible, and hoping "that which doesn't kill you only makes you stronger" doesn't, in fact, "kill" you (metaphorically or literally speaking).
Yeah, I don't consider myself dyslexic, but I have trouble understanding what the heck those constraints are and in what relevant way they are different from the goal.
My goal is not to work for a salary. I'm constrained by an otherwise empty stomach to do it.
Kind of sounds like a long way of saying "use S.M.A.R.T. goals"
Specific, Measurable, Achievable, Realistic, and Time-bound
(or, you know, constraints).
The questions should be: If the Apollo 13 accident had instead happened during Apollo 8 while the astronauts were behind the moon, so that the mission was lost and the cause was not known, would Apollo 9, 10, and 11 have been delayed past the end of the decade? Should they have been? What if the cause was known? Was the Central Committee of the Communist Party of the Soviet Union correct when it denied Soviet cosmonauts the opportunity to beat Apollo 8 to the moon using a rocket that had dismally failed all its tests?
Fortune favors the bold, sometimes. Did the good luck of the preceding astronauts lead to the decision to launch a space shuttle under adverse conditions on the day Reagan was scheduled to deliver his SOTU, etc?