The other issue is that people seem to just copy configure/autotools scripts over from older or other projects, because either they are lazy or don't understand them enough to do it themselves. The result is that even with relatively modern code bases that only target something like x86, ARM, and maybe MIPS, and only gcc/clang, you still get checks for the size of an int, or which header is needed for printf, or whether long long exists... And then the entire code base never checks the generated macros in a single place, uses int64_t, and never checks for stdint.h in the configure script...
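To make that concrete: the whole point of the checks is that the code then consumes the generated macros. A minimal sketch, assuming the conventional autoconf names (AC_CHECK_HEADERS([stdint.h]) in configure.ac emits HAVE_STDINT_H into config.h):

    #include "config.h"   /* generated by ./configure */

    #ifdef HAVE_STDINT_H
    #include <stdint.h>
    #else
    /* fallback for a hypothetical pre-C99 compiler without stdint.h */
    typedef long long          int64_t;
    typedef unsigned long long uint64_t;
    #endif

    int64_t counter;   /* now actually backed by the check above */

If the code just writes int64_t everywhere and the script never runs the stdint.h check, the whole configure dance bought you nothing.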
I don't think it's fair to say "because they are lazy or don't understand". Who would want to understand that mess? It isn't a virtue.
A fairer criticism would be that they lack the sense to use a saner build system. CMake is a mess, but even that is faaaaar saner than autotools, and probably more popular at this point.
I took the trouble (and even spent the money) to get to grips with autotools in a structured and detailed way by buying a book [1] about it and reading as much as possible. Yes, it's not trivial, but autotools is not witchcraft either; as written elsewhere, it's a masterpiece of engineering. I approached it without prejudice, and since then I have been more of a fan of autotools than a hater. Anyway, I highly recommend the book, and yes, after reading it I think autotools is better than its reputation.
Autotools uses M4 to meta-program a bash script that meta-programs a bunch of C(++) sources and generates C(++) sources that utilize meta-programming for different configurations; after which the meta-programmed script, again, meta-programs monolithic makefiles.
This is peak engineering.
Sounds like a headache. Is there a nice Python lib to generate all this M4-mumbo-jumbo?
autotools is the worst, except for all the others.
I'd like to think of myself as reasonable, so I'll just say that reasonable people may disagree with your assertion that cmake is in any way at all better than autotools.
Nope, autotools is actually the worst.
There is no way in hell anyone reasonable could say that Autotools is better than CMake.
I've seen programs replicate autotools in their Makefiles. That's actually worse. I've also used the old Visual Studio build tooling.
Autotools is terrible, but it's not the worst.
Configure-make is easier for the person building someone else's project; configuring a cmake-based project is slightly harder. In every other conceivable way I agree 100% (until someone can convince me otherwise).
And presumably the measure by which they are judged to be reasonable or not is if they prefer CMake over Autotools, correct? :D
Correct. I avoid autotools and cmake as much as I can; I'd rather write Makefiles by hand. But when I need to deal with them, I prefer cmake. I can modify a CMakeLists.txt in a meaningful way and get the results I want. I wouldn't touch an autotools build system because I was never able to figure out which of the files is the configuration meant to be edited by hand, and which are generated by scripts in other files. I tried to dig through the documentation, but I never made it.
My experience with cmake, though dated, is that it's simpler because it simply cannot do what autotools can do.
It really smelled of "oh, I can do this better": you rewrite it, and as part of rewriting it you realise, oh, this is why the previous solution was complicated. It's because the problem is actually more complex than I thought.
And then of course there's the problem where you need to install on an old release. But the thing you want to install requires a newer cmake (autotools doesn't have this problem because it's self-contained). But this is an old system that you cannot upgrade, because the vendor support contract for what the server runs would be invalidated. So now you're down a rabbit hole of trying to get a new version of cmake to build on an unsupported system. Sigh. It's less work to just try to construct `gcc` commands yourself, even for a medium-sized project. Either way, this is now your whole day, or your whole week.
If only the project had used autotools.
No, CMake can do everything Autotools does, but a hell of a lot simpler, and without checking for a gazillion flags and files that you don't actually need but are checking anyway because you copied the script from someone who copied the script from... all the way back to the 90s, when C compilers that didn't have stdint.h actually existed.
CMake is easy to upgrade. There are binary downloads. You can even install it with pip (although recently the Python people in their usual wisdom have broken that).
This.
Simple projects: just use plain C. This is dwm, the window manager that spawned a thousand forks. No ./configure in sight: <https://git.suckless.org/dwm/files.html>
If you run into platform-specific stuff, just write a ./configure in simple and plain shell: <https://git.suckless.org/utmp/file/configure.html>. Even if you keep adding more stuff, it shouldn't take more than 100ms.
If you're doing something really complex (like say, writing a compiler), take the approach from Plan 9 / Go. Make a conditionally included header file that takes care of platform differences for you. Check the $GOARCH/u.h files here:
<https://go.googlesource.com/go/+/refs/heads/release-branch.g...>
(There are also some simple OS-specific checks: <https://go.googlesource.com/go/+/refs/heads/release-branch.g...>)
This is the reference Go compiler; it can target any platform, from any host (modulo CGO); later versions are also self-hosting and reproducible.
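A rough sketch of the idea (file names and typedef spellings are illustrative, not the actual Plan 9/Go headers): one tiny header per architecture, and the build's include path picks which one gets used, so no test programs ever need to run at configure time:

    /* amd64/u.h -- per-arch typedefs; the mkfile/Makefile adds the right
       -I directory so that #include "u.h" picks up exactly one of these */
    typedef int                i32;
    typedef unsigned int       u32;
    typedef long long          i64;
    typedef unsigned long long u64;

    /* arm/u.h defines the same names with that architecture's types */

The rest of the tree just uses i32/u64 everywhere and never asks the compiler "how long is long?".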
I want to agree with you, but as someone who regularly packages software for multiple distributions I really would prefer people using autoconf.
Software with custom configure scripts is especially dreaded amongst packagers.
Why, again, does software in the Linux world have to be packaged for multiple distributions? On the Windows side, if you make an installer for Windows 7, it will still work on Windows 11. And to boot, you don't have to go through some Microsoft-approved package distribution platform and its approval process: you can, of course, but you don't have to; you can distribute your software by yourself.
Interesting that you would bring up Go. Go is probably the most head-desk language of all for writing portable code. Go will fight you the whole way.
Even plain C is easier.
You can have a whole file just for OpenBSD, to work around the fact that some standard library parts have different types on different platforms.
So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
You need a file for Linux 32-bit ARM (int32/int32), one for Linux 64-bit ARM (int64/int64), one for OpenBSD 32-bit ARM (int64/int32), etc. Maybe you can group them, but this is just one difference, so in the end you'll have to do one file per combination of OS and arch. And all you wanted was a pluggable "what's a Timeval?". Something that all build systems solved a long time ago.
And then maybe the next release of OpenBSD they've changed it, so now you cannot use Go's way to write portable code at all.
So between autotools, cmake, and the Go method, the Go method is by far the worst option for writing portable code.
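For contrast, a sketch of what the plain-C version looks like (assuming a Unix-like target, since gettimeofday is POSIX): the system header picks the field widths, so one source file covers every OS/arch combination:

    #include <sys/time.h>   /* struct timeval; tv_usec's width is libc's problem */
    #include <stdio.h>

    int main(void) {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        /* cast to a known width before any arithmetic or printing */
        printf("%lld.%06lld\n", (long long)tv.tv_sec, (long long)tv.tv_usec);
        return 0;
    }

No per-platform file list to maintain, and a new OS release changing a field's width doesn't break the build.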
I have specifically given an example of u.h defining types such as i32, u64, etc. to avoid running a hundred silly tests like "how long is long", "how long is long long", etc.
> So now you need one file for all platforms and architectures where Timeval.Usec is int32, and another file for where it is int64. And you need to enumerate in your code all GOOS/GOARCH combinations that Go supports or will ever support.
I assume you mean [syscall.Timeval]?
$ go doc syscall
[...]
Package syscall contains an interface to the low-level operating system
primitives. The details vary depending on the underlying system [...].
Do you have a specific use case for [syscall], where you cannot use [time]?

Yeah, I've had specific use cases where I needed to use syscall. I mean... if there weren't use cases for syscall, then it wouldn't exist.
But not only is syscall an example of portability done wrong at the API level; as I said, it's also an example of it being implemented in a dumb way, causing needless work and breakage.
Syscall as an implementation leads by bad example, because it's the only method Go supports.
Checking for GOARCH+GOOS tuple equality in portable code is a known anti-pattern, for the reasons I've given and others, yet Go still decided to go with it.
But yeah, autotools scripts often check for way more things than actually matter, often because people copy-paste configure.ac from another project without trimming it.
> either they are lazy or don't understand them enough to do it themselves.
Meh, I used to keep printed copies of autotools manuals. I sympathize with all of these people and acknowledge they are likely the sane ones.
I've had projects where I spent more time configuring autoconf than actually writing code.
That's what you get for wanting to use a glib function.
It's always wise to be specific about the sizes you want for your variables. You don't want your ancient 64-bit code to act differently on your grandkids' 128-bit laptops. Unless, of course, you want to let the compiler decide whether to leverage higher-precision types that become available after you retire.
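In C that boils down to reaching for stdint.h instead of the bare types, e.g.:

    #include <stdint.h>

    int64_t file_offset;   /* exactly 64 bits on every ABI, today and later */
    long    whatever;      /* whatever width tomorrow's compiler feels like */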