> That may have made sense in the days of < 100 MHz CPUs
You don't know how much C++ code is being written for 100-200 MHz CPUs every day:
https://github.com/search?q=esp8266+language%3AC%2B%2B&type=...
I have a codebase that is currently C++23 (and, I hope, soon C++26) targeting everything from the Teensy 3.2 (72 MHz) to the ESP32 (240 MHz). Let me tell you, I'm fighting for microseconds every time I work on it.
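To put a rough number on it (a hedged back-of-envelope, not a measurement from that codebase): at 72 MHz a cycle is about 14 ns, so even a cheap word-at-a-time zero fill of a 1 KiB local buffer burns a few microseconds, which is a big slice of a typical audio or sensor callback budget.

    // Hedged back-of-envelope; all numbers are illustrative.
    constexpr double cycle_ns    = 1e9 / 72e6;                    // ~13.9 ns per cycle at 72 MHz
    constexpr double fill_cycles = 1024.0 / 4.0;                  // 1 KiB zeroed at 4 bytes/cycle
    constexpr double fill_us     = fill_cycles * cycle_ns / 1e3;  // ~3.6 microseconds per call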
I bet even there you have only a few spots where it really makes a difference. It's good to have the option, but I think the default behavior should be safer.
I don't know, way too often my perf traces are evenly distributed across a few hundred functions (at best), without any clear outlier.
"how much code" =/= how many developers.
The people who care about clock ticks should be the ones inconvenienced, not the ordinary joes maintaining a FOSS package that ultimately gets struck by a 0-day. It still takes a Swiss-cheese lineup of holes to get there, for sure, but one of those holes is C++'s default behavior, still optimizing like it's 1994.
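A minimal sketch of that "inconvenience the clock-tick crowd, not everyone" idea, assuming Clang/GCC's -ftrivial-auto-var-init=zero flag and the per-variable opt-out attributes they ship today (C++26 is on track to spell the opt-out [[indeterminate]]); the function names here are mine, purely for illustration:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Safer-default path: built with -ftrivial-auto-var-init=zero, `buf` is
    // zero-filled, so reading past the copied bytes yields zeros instead of
    // whatever was left on the stack.
    std::uint32_t checksum_safe(const std::uint8_t* msg, std::size_t len) {
        std::uint8_t buf[256];
        std::memcpy(buf, msg, len < sizeof(buf) ? len : sizeof(buf));
        std::uint32_t sum = 0;
        for (std::uint8_t b : buf) sum += b;   // over-reads see 0, not stack garbage
        return sum;
    }

    // Hot path: opt out per variable instead of everyone getting indeterminate
    // values by default. Clang spells it [[clang::uninitialized]], GCC
    // [[gnu::uninitialized]]; C++26 is expected to spell it [[indeterminate]].
    std::uint32_t checksum_hot(const std::uint8_t* msg, std::size_t len) {
        [[clang::uninitialized]] std::uint8_t buf[256];  // deliberately not zeroed
        std::size_t n = len < sizeof(buf) ? len : sizeof(buf);
        std::memcpy(buf, msg, n);
        std::uint32_t sum = 0;
        for (std::size_t i = 0; i < n; ++i) sum += buf[i];  // only touches written bytes
        return sum;
    }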
> the people who care about clock ticks
I mean, that's pretty much the main reason for using C++, isn't it? Video games, real-time media processing, CPU AI inference, network middleware, embedded, desktop apps where you don't want startup to take more than a few milliseconds...
No, it's not a dichotomy between having uninitialized data and fast startup, or waiting several milliseconds for a JVM or interpreter to load a gigabyte of heap-allocated crap.
It's not about startup time. It's about computational bandwidth and latency once running.