One of the first comments made about RPMint concerned performance, specifically that RPM package builds should be done on a PC using a cross compiler. This is reasonably sound advice, but not all packages are cross-compiler ready. I have three solutions that, when combined, work pretty well, and I'll talk about each of them.

The first is the configure cache. I point the configure script at a static cache file. These options don't (generally) change, so there is no reason to recompute them over and over again.

The solution looks like this: ./configure --cache-file=/root/mysw-%{version}.cache.

The result is that all of the calculated configuration checks are cached into this file and reused on the next configure run instead of being recomputed. Part of building software is satisfying the configure script, and rechecking things you've already checked is time consuming. Once the cache file is built on the first run, further iterations fly through. You can even use this as part of the RPM build, but with some care, as you wouldn't want to corrupt the build with stale or incorrect cached settings.
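As a sketch of how this could look inside a spec file (the cache path and the --host triplet here are my example values, not anything RPMint mandates), the %build section might pass the cache file explicitly:

```spec
%build
# Reuse configure results between rebuilds. The cache path is an
# example; delete the file if the toolchain or build environment
# changes, or configure may pick up stale answers.
./configure --cache-file=/root/mysw-%{version}.cache \
            --host=m68k-atari-mint
make
```

The first build populates the cache; every rebuild after that skips straight past the checks it has already answered.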

The next solution is ccache, a neat tool that can also greatly benefit a package build. ccache hashes the complete output of the C preprocessor, which includes header files, macros, and other bits, and stores the resulting object file in an out-of-band cache directory. Unless something is seriously wrong with your system, identical preprocessed input should always produce the same object file, yet if you change anything at all it will be a cache miss and the object will be rebuilt. ccache greatly improves package build and rebuild performance. In theory, if you build a whole system fully cached and then rebuild to link in a new static library, you may not compile any new objects at all except for the changed library. The entire rebuild of a large number of packages can happen exceptionally fast, since the preprocessor stage itself is not slow at all.
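A minimal setup sketch, assuming ccache is installed (the cache directory and size cap below are my own example values):

```shell
# Keep the cache somewhere with room to grow; the path is an example.
export CCACHE_DIR=/var/cache/ccache
ccache -M 5G    # cap the cache size so it can't fill the disk
ccache -z       # zero the statistics before a test build
# ... run a package build here ...
ccache -s       # inspect the hit/miss ratio afterwards
```

The hit ratio from `ccache -s` is the quickest way to confirm the cache is actually being used by the build.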

The final solution is distcc. I simply set up a cross compiler, provided by Vincent Riviere, on a normal Linux PC. This compiler, running natively on a modern Linux machine, is much, much faster than even the fastest 68k-class machine. distcc takes the preprocessed output, sends it over the network to be compiled on the host PC, then sends the resulting object file back over the network. It sounds slow in theory, but in reality it results in a massive speed boost, and even the largest packages build rather quickly.
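Pointing the build machine at the helper is just a matter of listing hosts; this is standard distcc configuration, though the hostname and slot count below are placeholders of mine:

```shell
# Tell distcc where to send jobs; "buildhost" is a placeholder for
# the Linux PC running the cross compiler. The /12 limits how many
# jobs that host will accept at once; localhost is kept as a fallback.
mkdir -p ~/.distcc
echo 'buildhost/12 localhost' > ~/.distcc/hosts
```

The same list can alternatively be set per-session via the DISTCC_HOSTS environment variable.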

The trick to using ccache and distcc together is this: when running make, use a command like this: CCACHE_PREFIX='distcc' make V=1 CC="ccache m68k-atari-mint-gcc" -j4

Make sure that the gcc binary on your native system has the same name as the cross compiler on the distcc host. Use -jX, where X is the number of cores and parallel processes you allow as part of distcc. It is perfectly reasonable with my modern Xeon distcc host to run _24_ compile threads, as it has 12 physical cores plus 12 hyperthreads, presenting a total of 24 cores. With all of these pieces combined, the build farm instances absolutely scream through the software builds and performance becomes a non-issue. The current problem I face is that RPMs built with this setup will not successfully build on a system that does NOT have distcc set up. I need to add if statements to handle all conditions so that end users can rebuild source RPMs with their own customizations easily. Other developers, take note as well: these systems are very easy to set up and result in a massive productivity boost when building other software.
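The if statements mentioned above could be sketched with RPM's standard build-conditional macros; %bcond_without is stock rpmbuild machinery, though exactly how RPMint will wire this up is still open:

```spec
# Build with distcc by default; "rpmbuild --without distcc" disables
# it, so end users without a distcc farm can still rebuild the SRPM.
%bcond_without distcc

%build
%if %{with distcc}
CCACHE_PREFIX='distcc' make V=1 CC="ccache m68k-atari-mint-gcc" -j4
%else
make CC="m68k-atari-mint-gcc"
%endif
```

With this shape, the distcc-enabled path stays the default on the build farm while a plain rebuild works anywhere.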