That may have made sense in the days of < 100 MHz CPUs but today I wish they would amend the standard to reduce UB by default and only add risky optimizations with specific flags, after the programmer has analyzed them for each file.
> That may have made sense in the days of < 100 MHz CPUs
you don't know how much C++ code is being written for 100-200 MHz CPUs every day
https://github.com/search?q=esp8266+language%3AC%2B%2B&type=...
I have a codebase that is right now C++23, and soon I hope C++26, targeting everything from the Teensy 3.2 (72 MHz) to the ESP32 (240 MHz). Let me tell you, I'm fighting for microseconds every time I work with this.
I bet even there you have only a few spots where it really makes a difference. It’s good to have the option but I think the default behavior should be safer.
I don't know, way too often my perf traces are evenly distributed across a few hundred functions (at best), without any clear outlier.
"how much code" =/= how many developers.
the people who care about clock ticks should be the ones inconvenienced, not ordinary joes who are maintaining a FOSS package that is ultimately struck by a 0-day. It still takes a swiss-cheese lineup to get there, for sure. but one of the holes in the cheese is C++'s default behavior, trying to optimize like it's 1994.
> the people who care about clock ticks
I mean, that's pretty much the main reason for using C++, isn't it? Video games, real-time media processing, CPU AI inference, network middleware, embedded, desktop apps where you don't want startup time to take more than a few milliseconds...
No, it's not a dichotomy between having uninitialized data with fast startup and waiting several milliseconds for a JVM or interpreter to load a gigabyte of heap-allocated crap.
it's not about startup time. it's about computational bandwidth and latency once running.
They are doing what you want. It is a long, difficult process to figure out what is UB: most of it is cases where nothing was written down, so it is UB by default because it was never defined. Once UB is found and documented, they get to figure out what should be done about it. In some cases nothing, because realistically nobody does that; in the case in question they have defined what happens, but the article is 8 years old.
CPU speed is not memory bandwidth. Latency and contention always exist. Long lived processes are not always the norm.
In another era we would have just called this optimal. https://x.com/ID_AA_Carmack/status/1922100771392520710