elsewhere. Also use it for the Reporter, as it makes no sense to spend
THAT much time reporting quick changes, which actually slows the build.
($factor to tweak as needed).
while dependencies are missing
Following landry's remark, also take packages to build out of the race
if some RUN_DEPENDS are going to be ignored. Only do it right before we
put the package in the queue, so that the test is run exactly once per
package instead of during every scan.
I was also worried about multi-packages, but this only takes one fullpkgpath
out of the loop, and "make package" is going to fail halfway through anyway.
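A minimal sketch of the timing, with made-up names (not dpb's code): the ignored-RUN_DEPENDS test happens once, at enqueue time, rather than during every scan.

    use strict;
    use warnings;

    my @queue;

    sub enqueue
    {
        my ($pkgpath, $has_ignored_run_depends) = @_;
        if ($has_ignored_run_depends) {
            # some of its run deps will never show up: keep it out of the race
            print "not queuing $pkgpath: ignored RUN_DEPENDS\n";
            return;
        }
        push @queue, $pkgpath;
    }

    # hypothetical paths, purely for illustration
    enqueue('shells/zsh', 0);
    enqueue('x11/example', 1);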
happening while some 'unjunk' ports are building.
this is a work-around for a known problem with cmake and qt4 include
dependency handlers...
Also, cache fullpkgpath while building a job, as this accounts for a large
part of the speed (or lack thereof) of the display when building lots of
ports.
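A tiny sketch of that caching, with invented fields (the real fullpkgpath computation lives elsewhere); the point is just that the value is computed once per job:

    use strict;
    use warnings;

    sub job_fullpkgpath
    {
        my $job = shift;
        # compute once, reuse for the rest of the job's life
        $job->{fullpkgpath} //= join(',', $job->{pkgpath}, @{$job->{extra}});
        return $job->{fullpkgpath};
    }

    my $job = {pkgpath => 'x11/qt4', extra => ['-main']};
    print job_fullpkgpath($job), "\n";    # x11/qt4,-main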
Yields more accurate stat output: we can see a staircase effect on the
queue when the engine doesn't run that often, and Built actually goes up
before going back down again.
duh. put "live" affinity markers while we're building stuff.
We don't re-read the on-disk markers outside of restart (should we?),
so they HAVE to be in the pkgpath struct proper.
tweak the algorithm slightly (since we forget the old check_interval).
In particular, never keep old times in reserve.
Makes for simpler and clearer reading.
Most things will move as a result of {install} changes.
Things that move because of EXTRA depends that are satisfied are unlikely to trigger
further changes.
So, stagger the changes: do the "normal" tobuild -> queue moves first, and the extra depends later.
ago, don't run it again right now.
This prevents the Engine from busy-checking the same data when it's building
lots of small packages very fast.
(50 and 30 may need some fine-tuning)
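A rough sketch of such a throttle (names and values are illustrative, the 50 and 30 above presumably being intervals of this kind):

    use strict;
    use warnings;

    my $check_interval = 30;    # seconds between expensive re-checks
    my $last_check = 0;

    sub may_check
    {
        my $now = time;
        return 0 if $now - $last_check < $check_interval;
        $last_check = $now;
        return 1;
    }

    # in the main loop: recompute the queue only when may_check() says so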
- affinity info is similar to locks, but with a completely different
lifetime.
- streamline the main loop of the engine, so that it can do two passes:
the first pass shuns paths with the wrong affinity; if no good path is
found, those are considered during the second pass (see the sketch after
this list).
- make the Core factory aware of what hosts might be running, so that
affinity info for machines removed from a config file will be ignored.
thanks to landry@ for a few tests.
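A rough sketch of the two-pass choice, with made-up fields (this is not the engine's actual code):

    use strict;
    use warnings;

    sub pick_for_core
    {
        my ($queue, $host) = @_;
        for my $pass (1, 2) {
            for my $v (@$queue) {
                # first pass shuns paths with the wrong affinity
                next if $pass == 1
                    && defined $v->{affinity}
                    && $v->{affinity} ne $host;
                return $v;
            }
        }
        return undef;
    }

    my @queue = (
        {pkgpath => 'lang/gcc', affinity => 'amd64-1'},
        {pkgpath => 'shells/zsh'},
    );
    my $v = pick_for_core(\@queue, 'amd64-2');
    print "$v->{pkgpath}\n";    # shells/zsh: gcc keeps its affinity to amd64-1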
zap all subpackages from the queue.
need to distinguish the normal case (is_done) from a simpler case
(is_done_quick) that's used when scanning the ports tree. Otherwise, we
would check multi-packages loads of times.
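Roughly the idea, with invented names and layout (not dpb's code): the quick check only looks at the path in hand, the full one walks every subpackage of a multi-package port.

    use strict;
    use warnings;

    sub is_done
    {
        my $v = shift;
        for my $pkgname (@{$v->{multi}}) {
            return 0 unless -f "$v->{repo}/$pkgname.tgz";
        }
        return 1;
    }

    sub is_done_quick
    {
        my $v = shift;
        # good enough while scanning the tree
        return -f "$v->{repo}/$v->{pkgname}.tgz";
    }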
was built already... so things will go straight to B/I instead of
getting through T/Q...
less noise in engine.log, and much more accurate information for choosing
queue items.
re-enable wait_timeout on localhost temporarily (this should be done in
another way, most probably by checking whether the repo is on nfs; we can
steal code from VStat.pm).
pass umask through ssh. This took us long enough to figure out, and it's
considerably simpler than tweaking every login class once again.
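For illustration, the ssh side of it boils down to wrapping the remote command (host, umask value and command here are examples):

    use strict;
    use warnings;

    my $host = 'build1';
    my $umask = '022';
    my @remote = ('make', 'package');
    # the remote shell applies the umask, whatever the login class says
    my @cmd = ('ssh', $host, join(' ', "umask $umask", '&&', 'exec', @remote));
    system(@cmd);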
options on the command line now define *defaults* that host files can
override (for instance -j, stuck, -p, -J).
Add -p /n to mean "take the number of jobs, if >1, divide by n, round up
to 2, and use that for parallel" (sketch below).
Document -p.
Make junk be 'by host' (and it's a prop, so you can tweak it).
concurrent log that records how many jobs are running each time it changes.
tag parallel builds *n in the time record.
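A sketch of the -p /n computation as worded above (the exact rounding in dpb may differ):

    use strict;
    use warnings;

    sub parallel_from_jobs
    {
        my ($jobs, $n) = @_;
        return 1 if $jobs <= 1;
        my $p = int($jobs / $n);
        $p = 2 if $p < 2;    # round up to 2
        return $p;
    }

    print parallel_from_jobs(8, 4), "\n";     # 2
    print parallel_from_jobs(16, 4), "\n";    # 4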
Reset info for a new path systematically, instead of only creating
new infos.
Part of handling erroring paths better: if a pkgpath errors out, then when
we remove the lock, the whole port will be rescanned at once, instead of
just one subpkgpath.
- during the scanning stage, we can rely on more than sizes. Specifically,
for files with cached sha values: detect problems early and zap the bad
files, so that new ones do get fetched (see the sketch after this entry).
- do not allow "negative" caching: if the cached file doesn't match, just
run the checksum again to make sure (manual download would tamper with that).
This should allow builders to forget about the existence of
/usr/ports/distfiles/distinfo again.
- remove bad files so that fetch has a chance to work (todo: log some more
info, yeah landry...)
- zap code from (checksum) proper that's no longer in use.
okay jasper@
(gets in because fixing the mirrors for the release is important, and dpb -F
would not do the right thing without manual intervention).
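A rough sketch of the per-file check, using core Digest::SHA instead of dpb's own checksum code (names are illustrative):

    use strict;
    use warnings;
    use Digest::SHA;

    sub distfile_ok
    {
        my ($path, $expected_b64) = @_;
        return 0 unless -f $path;
        my $sha = Digest::SHA->new(256);
        $sha->addfile($path);
        # b64digest omits the trailing padding that distinfo values carry
        (my $want = $expected_b64) =~ s/=+$//;
        if ($sha->b64digest ne $want) {
            # no negative caching: nothing is remembered, just zap the bad
            # file so fetch has a chance to grab a good copy
            unlink $path;
            return 0;
        }
        return 1;
    }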
a shell object that can chdir, setenv, and exec commands.
(note that this executes stuff after fork, so permanent changes are cheap
and okay)
Also create it from "host" objects, which simplifies parameter passing.
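A sketch of what such an object can look like (ShellSketch and its fields are invented); since it only ever runs in the child after fork, chdir and environment changes need no undoing:

    use strict;
    use warnings;

    package ShellSketch;

    sub new
    {
        my ($class, $host) = @_;
        # built from a host object in the real thing; a name is enough here
        return bless {host => $host, env => {}, dir => undef}, $class;
    }

    sub chdir
    {
        my ($self, $dir) = @_;
        $self->{dir} = $dir;
        return $self;
    }

    sub env
    {
        my ($self, %kv) = @_;
        @{$self->{env}}{keys %kv} = values %kv;
        return $self;
    }

    sub exec
    {
        my ($self, @argv) = @_;
        # we're in the child after fork, so permanent changes are cheap
        CORE::chdir($self->{dir}) if defined $self->{dir};
        while (my ($k, $v) = each %{$self->{env}}) {
            $ENV{$k} = $v;
        }
        CORE::exec(@argv);
    }

    package main;
    # typical use, in the child after fork:
    # ShellSketch->new('localhost')->chdir('/usr/ports/shells/zsh')
    #     ->env(SUBPACKAGE => '-main')->exec('make', 'package');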
(it's easy to distinguish between a file and a directory under ports).
expand sequences for those files and hosts
when restarting dpb, kill locks that don't correspond to errors, but to a dpb
that was running on the same host and is no longer there.
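A sketch of the stale-lock test, assuming the lock records a host and a pid (field names invented):

    use strict;
    use warnings;
    use Sys::Hostname;

    sub lock_is_stale
    {
        my $lock = shift;
        return 0 if $lock->{error};               # error locks are kept
        return 0 if $lock->{host} ne hostname();  # can't probe a remote dpb
        # kill 0 only probes for the process: no such pid means a stale lock
        return !kill(0, $lock->{dpb_pid});
    }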
do __WARN__ like __DIE__
option -DDONT_BUILD_ONCE
option -DDONT_CLEAN_LOCKS
document some
tree may want to weed distfiles too, so allow for a full scan of the tree
without building/fetching anything, just to update history:
dpb -DHISTORY_ONLY
(just requires making sure the right engines are created, and a very shortened
loop at the end that waits for the history to be updated).
remove a few explicit (and implicit) die calls from Fetch: missing or
out-of-sync distinfo no longer kills dpb; instead the affected paths are
properly reported as broken and things still go on (note that even a missing
SUPDISTFILE checksum *will* mark a path as broken; that's totally intentional).
the names of files in there are not totally trivial to figure out from the
normal filename + sha:
-> the filename will be stripped of its DIST_SUBDIR
-> b64 checksums interfere with filesystem semantics, e.g., /u8//ffg
will become u8/ffg
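One plausible reading of those two rules (not necessarily dpb's exact code):

    use strict;
    use warnings;

    sub strip_dist_subdir
    {
        my ($filename, $dist_subdir) = @_;
        $filename =~ s,^\Q$dist_subdir\E/,, if defined $dist_subdir;
        return $filename;
    }

    sub mangle_b64
    {
        my $b64 = shift;
        $b64 =~ s,^/+,,;        # no leading slash
        $b64 =~ s,/{2,},/,g;    # no empty path components
        return $b64;
    }

    print strip_dist_subdir('mysubdir/foo-1.0.tar.gz', 'mysubdir'), "\n";
    # foo-1.0.tar.gz
    print mangle_b64('/u8//ffg'), "\n";
    # u8/ffg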