I am still seeing occasional, semi-random build failures, although it
mostly appears to work. This is on a dual-core Intel Mac with -j 3. I
haven’t run enough builds to spot a pattern, but sometimes it’s the
make that fails and sometimes the make check.
With all due respect, I don’t think the current approach of recursive
makefiles will ever give us a consistent, repeatable build with an
arbitrary number of processors or build machines (in the case of a
distributed build), with multiple developers, over long periods of
time. It is really hard to achieve that without
the top-level make being aware of all the dependencies throughout the
whole code base. It is so easy to subtly break a build by introducing a
dependency in a lower-level Makefile that the top-level make isn’t aware
of. Then multi-process builds will sometimes fail and sometimes succeed,
depending on the execution order, which is completely arbitrary.
Spotting and debugging the resulting inconsistencies is usually pretty
painful, as most of you will know.
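To make the failure mode concrete, here is a deliberately minimal
sketch; the directory and target names are invented for the example,
not taken from our tree:

# Top-level Makefile, recursive style.  With -j, make runs the two
# sub-makes concurrently, because it sees no edge between them.
SUBDIRS = lib app

all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

.PHONY: all $(SUBDIRS)

# app/Makefile: links against an artifact that lib/ produces, without
# declaring that anywhere.  If the lib sub-make hasn't finished yet the
# link fails; if it has, everything works -- hence failures that come
# and go with the scheduling.
app: main.o
	$(CC) -o $@ main.o ../lib/libfoo.a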
I know I might be starting a holy war here, but I believe that a better
way to do this reliably is for the top-level Makefile to include the
lower-level ones, which just state the dependencies for the particular
module. All the actual work is controlled by a single instance of make,
which has the whole picture of the dependencies and can schedule
accordingly. That doesn’t preclude building individual modules, as each
one can still be exposed as a separate target.
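Very roughly, and with module names invented purely for illustration,
the layout would look something like this (essentially the scheme the
paper referenced below describes; the two module.mk fragments are shown
inline here, but would live in their own directories):

# Top-level Makefile: a single make process reads every module's
# dependencies, so parallel scheduling always has the full picture.
all:    # declared first so it remains the default goal

MODULES := lib app
include $(patsubst %,%/module.mk,$(MODULES))

all: $(BINARIES)

# lib/module.mk: only states what this module builds.
lib/libfoo.a: lib/foo.o
	$(AR) rcs $@ $^

# app/module.mk: the cross-module dependency is stated explicitly,
# so it can never be scheduled out of order.
app/app: app/main.o lib/libfoo.a
	$(CC) -o $@ $^
BINARIES += app/app

# Individual modules stay buildable on their own, e.g. "make lib":
.PHONY: lib
lib: lib/libfoo.a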
A reference to the landmark paper “Recursive Make Considered Harmful”
may be in order
(http://www.pcug.org.au/~millerp/rmch/recu-make-cons-harm.html). It is
almost 10 years old, but still well worth reading.
I just wondered whether this was on anybody else’s radar.