show up correctly again.
split start_cores into a read_config/start_cores part, so that read_config
can happen right after handle_options, and so that we can put more options
in the hosts file (to be done)
as we should.
Use core methods to access the memory threshold.
UI simplification: -M can take a suffix (default is K), so you can just say
-M 520M or -M 2G now.
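A rough sketch of the suffix handling, just to illustrate the idea (parse_size
and the K base unit are assumptions, not necessarily what dpb actually does):

    # turn "520", "520M" or "2G" into a number of K; bare numbers default to K
    sub parse_size
    {
        my $v = shift;
        $v =~ m/^(\d+)([kmg]?)$/i or die "invalid size: $v\n";
        my %mult = (K => 1, M => 1024, G => 1024 * 1024);
        return $1 * $mult{uc($2) || 'K'};
    }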
elsewhere. Also use it for the Reporter, as it makes no sense to spend
THAT much time reporting quick changes, which actually slows the build.
($factor to tweak as needed).
also move the report to the main package, and show a prominent
STOPPED in the title bar if you forget you stopped it during a
previous run ('why is my dpb not building anything?')
- affinity info is similar to locks, but with a completely different
lifetime.
- streamline the main loop of the engine, so that it can do two passes:
the first pass shuns paths with the wrong affinity; if no good path is
found, those are considered during the second pass (see the sketch after
this entry).
- make the Core factory aware of what hosts might be running, so that
affinity info for machines removed from a config file will be ignored.
thanks to landry@ for a few tests.
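A minimal sketch of the two-pass selection mentioned above, assuming a queue of
candidate paths where each entry may carry an affinity host (names and fields
are illustrative, not the engine's actual API):

    # first pass skips paths tied to another host; if nothing acceptable
    # is found, fall back to the shunned candidates
    sub find_best
    {
        my ($queue, $host) = @_;
        for my $v (@$queue) {
            next if defined $v->{affinity} && $v->{affinity} ne $host;
            return $v;
        }
        return $queue->[0];
    }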
re-enable wait_timeout on localhost temporarily (should be done in another
way, most probably by checking whether the repo is on NFS; we can steal code
from VStat.pm)
pass umask through ssh. This took us long enough to figure out, and it's
considerably simpler than tweaking every login class once again.
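Roughly along these lines (variables are illustrative; the point is only to
prefix the remote command with an explicit umask instead of adjusting every
login class):

    # illustrative values; dpb builds its own remote command lines
    my $host = 'buildhost';
    my @remote_cmd = ('dpb-job');
    # propagate the local umask to the other side of the ssh connection
    my $umask = sprintf("%03o", umask);
    system('ssh', $host, "umask $umask && exec @remote_cmd");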
options on the command line now define *defaults* that host files can
override (for instance -j, stuck, -p, -J).
Add -p /n to mean "take number of jobs, if >1, divide by n, round up to 2,
and use that for parallel".
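One reading of that rule, as a sketch (parallel_from_jobs is a hypothetical
helper, not dpb's actual code):

    sub parallel_from_jobs
    {
        my ($jobs, $n) = @_;
        return 1 if $jobs <= 1;                 # a single job gains nothing
        my $p = int(($jobs + $n - 1) / $n);     # divide by n, rounding up
        return $p < 2 ? 2 : $p;                 # but never go below 2
    }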
Document -p.
Make junk be 'by host' (and it's a prop, so you can tweak it).
add a concurrent log that records how many jobs are running, each time that
number changes (see the sketch below).
tag parallel builds with *n in the time record.
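As a sketch, the concurrent log only needs one line per change (file format and
names are assumptions):

    # append "timestamp count" whenever the number of running jobs changes
    my $previous = -1;
    sub log_concurrency
    {
        my ($fh, $running) = @_;
        return if $running == $previous;
        $previous = $running;
        print $fh time(), " ", $running, "\n";
    }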
(it's easy to distinguish between a file and a directory under ports).
expand sequences for those files and hosts
when restarting dpb, kill locks that don't correspond to errors but were left
by a dpb on the same host that is no longer running.
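A rough sketch of the staleness test, assuming the lock records a host, a pid
and whether it marks an error (field names are guesses, not the actual lock
format):

    sub lock_is_stale
    {
        my ($lock, $hostname) = @_;
        return 0 if $lock->{error};                # genuine error, keep it
        return 0 if $lock->{host} ne $hostname;    # other machine, can't tell
        return !kill(0, $lock->{pid});             # owner process is gone
    }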
do __WARN__ like __DIE__
option -DDONT_BUILD_ONCE
option -DDONT_CLEAN_LOCKS
document some of the -D options
tree may want to weed distfiles too, so allow for a full scan of the tree
without building/fetching anything, just to update history:
dpb -DHISTORY_ONLY
(just requires making sure the right engines are created, and a much shortened
loop at the end waiting for the history to be updated).
- keep a stash indexed by checksum, so dpb can identify duplicate files
(see the sketch below).
- in a full bulk, if the scan has no errors, write to a ${DISTDIR}/history
file the files encountered in ${DISTDIR}/distinfo that seem to no longer
be needed (with full timestamp and checksum info).
Should be enough info to know when to expire old DISTDIR entries.
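A minimal sketch of both ideas, assuming each distinfo entry carries a name and
a checksum (the structure and the history line format are illustrative only):

    my %stash;      # checksum -> files seen with that checksum

    # duplicate files end up under the same key
    sub register
    {
        my ($file, $checksum) = @_;
        push @{$stash{$checksum}}, $file;
    }

    # record unneeded distfiles with enough info to expire them later
    sub write_history
    {
        my ($fh, $unneeded) = @_;
        for my $f (@$unneeded) {
            print $fh join(' ', time(), $f->{checksum}, $f->{name}), "\n";
        }
    }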