elsewhere. Also use it for the Reporter, as it makes no sense to spend
THAT much time reporting quick changes, which actually slows the build.
($factor to tweak as needed).
while dependencies are missing
Following landry's remark, also take packages to build out of the race
if some RUN_DEPENDS are going to be ignored. Only do it right before we
put the package in the queue, so that the test is run exactly once per
package instead of during every scan.
I was also worried about multi-packages, but this only takes one fullpkgpath
out of the loop, and "make package" is going to fail half-way through anyways.
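For illustration only (would_ignore_run_depends and the queue below are
made-up names, not dpb's actual code), the placement of the check looks
roughly like this:

    use strict;
    use warnings;

    my @queue;

    # hypothetical predicate; the real test lives elsewhere in dpb
    sub would_ignore_run_depends
    {
        my $pkgpath = shift;
        return $pkgpath =~ /broken/;
    }

    sub enqueue
    {
        my $pkgpath = shift;
        # one check, right before queue insertion, instead of on every scan
        if (would_ignore_run_depends($pkgpath)) {
            warn "$pkgpath: out of the race, RUN_DEPENDS would be ignored\n";
            return;
        }
        push @queue, $pkgpath;
    }

    enqueue($_) for qw(editors/vim x11/broken-port devel/quirks);
    print "queued: @queue\n";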
happening while some 'unjunk' ports are building.
this is a work-around for a known problem with cmake and qt4 include
dependency handlers...
Also, cache fullpkgpath while building a job, as this contributes in
large part to the (lack of) speed of the display when building lots of
ports.
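A rough sketch of that caching (hypothetical Job/PkgPath classes, not the
real dpb objects): compute the string once when the job starts, then
reuse it on every refresh.

    use strict;
    use warnings;

    package Job;
    sub new
    {
        my ($class, $v) = @_;
        # cache the (expensive in real life) string once per job
        return bless { v => $v, fullpkgpath => $v->fullpkgpath }, $class;
    }
    sub status_line
    {
        my $self = shift;
        # no recomputation on every display refresh
        return "building $self->{fullpkgpath}";
    }

    package PkgPath;
    sub new { my ($class, $p) = @_; return bless { path => $p }, $class; }
    sub fullpkgpath
    {
        my $self = shift;
        return $self->{path};    # stands in for a costly lookup
    }

    package main;
    my $job = Job->new(PkgPath->new("devel/quirks"));
    print $job->status_line . "\n" for 1 .. 3;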
also move the report to the main package, and show a prominent
STOPPED in the title bar if you forget you stopped it during a
previous run ('why is my dpb not building anything?')
Yields more accurate stat output, where we can see a staircase effect on
the queue when the engine is not run that often, and Built actually goes
up until it goes back down again.
duh. put "live" affinity markers while we're building stuff.
We don't re-read the on-disk markers outside of restart (should we?)
so they HAVE to be in the pkgpath struct proper.
tweak the algorithm slightly (since we forget the old check_interval).
In particular, never keep old times in reserve.
Makes for simpler and clearer reading
Most things will move as a result of {install} changes.
Things that move because of EXTRA depends that are satisfied are unlikely to trigger
further changes.
So, stagger changes for "normal" tobuild -> queue first, and extra depends later.
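A sketch of the staggering with made-up structures (the real engine keeps
more state): 'normal' tobuild entries move to the queue first, and moves
caused by extra depends are handled in a later wave.

    use strict;
    use warnings;

    my @tobuild = (
        { path => 'x11/foo',   reason => 'install' },
        { path => 'devel/bar', reason => 'extra' },
        { path => 'lang/baz',  reason => 'install' },
    );
    my @queue;

    # first wave: entries unlocked by {install} changes
    push @queue, $_->{path} for grep { $_->{reason} eq 'install' } @tobuild;
    # second wave, run later: entries unlocked by satisfied EXTRA depends,
    # since those are unlikely to trigger further changes
    push @queue, $_->{path} for grep { $_->{reason} eq 'extra' } @tobuild;

    print "queue order: @queue\n";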
ago, don't run it again right now.
This prevents the Engine from busy-checking the same data when it's building
lots of small packages very fast.
(50 and 30 may need some fine-tuning)
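For illustration (the interval value and names are placeholders, not the
actual engine code), the throttle amounts to:

    use strict;
    use warnings;

    my $check_interval = 50;    # seconds; may need fine-tuning
    my $last_run = 0;

    sub maybe_run_engine
    {
        my $now = time();
        # ran less than $check_interval seconds ago: don't run it again now
        return 0 if $now - $last_run < $check_interval;
        $last_run = $now;
        # ... expensive queue recomputation would go here ...
        return 1;
    }

    printf "pass %d: %s\n", $_, maybe_run_engine() ? "ran" : "skipped"
        for 1 .. 3;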
- affinity info is similar to locks, but with a completely different
lifetime.
- streamline the main loop of the engine, so that it can do two passes:
first pass shuns paths with the wrong affinity. If no good path is found,
those are considered during the second pass (see the sketch after this
message).
- make the Core factory aware of what hosts might be running, so that
affinity info for machines removed from a config file will be ignored.
thanks to landry@ for a few tests.
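Roughly (hypothetical names, not the real engine code), the per-path
affinity marker and the two-pass selection look like this:

    use strict;
    use warnings;

    # each path carries an affinity marker naming the host it should build on
    my @paths = (
        { path => 'x11/foo',   affinity => 'buildhost2' },
        { path => 'devel/bar', affinity => 'buildhost1' },
        { path => 'lang/baz' },
    );

    sub pick_path
    {
        my ($host, @candidates) = @_;
        # first pass: shun paths with the wrong affinity
        for my $p (@candidates) {
            return $p if !defined $p->{affinity} || $p->{affinity} eq $host;
        }
        # second pass: no good path found, wrong-affinity paths are considered
        return $candidates[0];
    }

    my $chosen = pick_path('buildhost1', @paths);
    print "picked $chosen->{path}\n";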
zap all subpackages from the queue.
need to distinguish the normal case (is_done) from a simpler case
(is_done_quick) that's used when scanning the ports tree. Otherwise, we
would check multi-packages loads of times.
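A toy sketch of the distinction; the exact shortcut is an assumption
here, but the point is that the quick variant skips the per-subpackage
work during a full tree scan.

    use strict;
    use warnings;

    sub package_exists
    {
        my $pkgname = shift;
        # assumed repository layout, for illustration only
        return -f "/usr/ports/packages/amd64/all/$pkgname.tgz";
    }

    sub is_done
    {
        my $port = shift;
        # normal case: every subpackage must already be there
        for my $sub (@{$port->{subpackages}}) {
            return 0 unless package_exists($sub);
        }
        return 1;
    }

    sub is_done_quick
    {
        my $port = shift;
        # scanning case: only look at the main package, cheap enough to run
        # on every pkgpath during the scan
        return package_exists($port->{subpackages}[0]);
    }

    my $port = { subpackages => ['foo-1.0', 'foo-docs-1.0'] };
    print is_done_quick($port) ? "probably done\n" : "needs a build\n";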
was built already... so things will go straight to B/I instead of
getting through T/Q...
less noise in engine.log, and much more accurate information for choosing
queue items.
BULK=auto will invoke bulk behavior on dependencies, but not during
normal build.
(internally, deps have _SOLVING_DEP=yes, so we can distinguish them)
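The decision can be pictured like this (a Perl sketch of the logic, not
the actual make fragment):

    use strict;
    use warnings;

    sub effective_bulk
    {
        my ($bulk, $solving_dep) = @_;
        # BULK=auto: bulk behavior only when building as a dependency
        return $bulk eq 'auto' ? $solving_dep : $bulk eq 'Yes';
    }

    print effective_bulk('auto', 0) ? "bulk\n" : "no bulk\n";    # normal build
    print effective_bulk('auto', 1) ? "bulk\n" : "no bulk\n";    # _SOLVING_DEP=yes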
okay ajacoutot@
re-enable wait_timeout on localhost temporarily (should be done in another
way, most probably by checking whether repo is on nfs, we can steal code
from VStat.pm)
pass umask through ssh. This took us long enough to figure out, and it's
considerably simpler than tweaking every login class once again.
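The idea, sketched with a made-up helper (not dpb's exact invocation):
carry the umask inside the remote command instead of touching login.conf
on every build host.

    use strict;
    use warnings;

    sub remote_command
    {
        my ($host, $umask, @cmd) = @_;
        # the remote shell applies the umask before running the real command,
        # so no login class needs tweaking on the build host
        return ('ssh', $host, "umask $umask && @cmd");
    }

    my @cmd = remote_command('buildhost1', '022', 'make', 'package');
    print join(' ', @cmd), "\n";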
- resurrect USE_X11 in a smart way: auto-determine it correctly from
WANTLIB (accounts for most ports)
- define a BUILD_XENOCARA knob that builds fake based on mtree for
X11BASE.
- if BUILD_XENOCARA_TOO=Yes, prepare to hook to a xenocara "fake" meta
package.
All of this is off by default; the xenocara shadow tree is not in yet
anyway. Zero impact on regular builds.
options on the command line now define *defaults* that host files can
override (for instance -j, stuck, -p, -J).
Add -p /n to mean "take the number of jobs, if >1, divide by n, round up
to 2, and use that for parallel".
Document -p.
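Read literally ('round up to 2' taken as 'round up, never below 2'), the
-p /n rule amounts to the sketch below; the real option parsing in dpb
may differ in detail.

    use strict;
    use warnings;
    use POSIX qw(ceil);

    sub parallel_from_jobs
    {
        my ($jobs, $n) = @_;
        return undef if $jobs <= 1;    # a single job: no parallel build
        my $p = ceil($jobs / $n);      # divide by n, rounding up
        $p = 2 if $p < 2;              # never below 2
        return $p;
    }

    printf "-j %d -p /%d -> parallel %d\n", @$_, parallel_from_jobs(@$_)
        for [4, 2], [6, 4];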
Make junk be 'by host' (and it's a prop, so you can tweak it).
concurrent log that records how many jobs are running each time it changes.
tag parallel builds with *n in the time record.
say "def" for version number.
check that the pkgpath in a dependency did not change; if it did, the
dependent port should have been bumped.
problem experienced by aja@ on glib2...
patch tested and okay jasper@, sthen@
have a "default" SUBST_CMD that will substitute the non-subpackage version
of the variables.
SUBST_CMD = ${SUBST_CMD${SUBPACKAGE}}
is a bad idea, because SUBPACKAGE may vary in unexpected ways, like you
get the 'default' value when building manually, and you might get a
different subpackage when building with dpb, leading to weird errors.
So, existing users of SUBST_CMD during patch/configure/build can use the
base SUBST_CMD without much surprise.
- correct syntax for variable (Vadim Zhukov)
- both _DO_LOCK and _cache_fragment want to use traps.
Since that's the only place where the problem occurs, simply put the second
trap in a subshell...
and have do-install/do-build use them.
Replace pre-configure with a folded-in shell fragment.
Don't hardcode the perl location; we don't hardcode those things but rely
on PATH instead.
check that Makefile.PL actually produced a Makefile, since the way it
errors out does not exit 1, which leads to configure having failed while
ports thinks it succeeded...
okay sthen@
Reset info for a new path systematically, instead of only creating
new infos.
Part of handling erroring paths better: if a pkgpath errors out, when
we remove the lock, the whole port will be rescanned at once, instead of
doing one subpkgpath only.
README-sub (as noticed by aja@)
- stronger checks that X is correctly installed: if X11 is not there,
don't ignore ports, error out right away. Make sure
/usr/local/lib/X11/app-defaults
is a link, and that whatis.db is there (as should be fixed by release in
xenocara)
- during the scanning stage, we can rely on more than sizes. Specifically,
for files with cached sha values: detect problems early and zap the files,
so that new ones get fetched.
- do not allow "negative" caching: if the cached file doesn't match, just
run the checksum again to make sure (a manual download may have changed
things); see the sketch after this message.
This should allow builders to forget about the existence of
/usr/ports/distfiles/distinfo again.
- remove bad files so that fetch has a chance to work (todo: log some more
info, yeah landry...)
- zap code from (checksum) proper that's no longer in use.
okay jasper@
(gets in because fixing the mirrors for the release is important, and dpb -F
would not do the right thing without manual intervention).
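The intent of the checksum handling, sketched with invented helpers (the
real code lives in dpb's fetch/checksum modules):

    use strict;
    use warnings;
    use Digest::SHA;

    sub compute_sha
    {
        my $file = shift;
        # stand-in for the real digest computation
        return -f $file ? Digest::SHA->new(256)->addfile($file)->hexdigest : '';
    }

    sub check_distfile
    {
        my ($file, $expected, $cache) = @_;
        # positive caching only: a cached good value is trusted
        return 1 if defined $cache->{$file} && $cache->{$file} eq $expected;
        # no negative caching: always recompute, a manual download may have
        # replaced the file since the cache entry was written
        my $sum = compute_sha($file);
        if ($sum eq $expected) {
            $cache->{$file} = $sum;
            return 1;
        }
        # bad file: remove it so that a later fetch has a chance to work
        unlink $file;
        return 0;
    }

    my %cache;
    print check_distfile('/usr/ports/distfiles/example.tar.gz', 'deadbeef',
        \%cache) ? "ok\n" : "needs refetch\n";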
distfiles. MD5 is known to be insecure and RIPEMD-160 and SHA-1
are considered inferior to SHA-256.
Also, the concatenation of different hashes is not more secure than
its strongest component; see Antoine Joux, "Multicollisions in
iterated hash functions. Application to cascaded constructions"
http://www.iacr.org/cryptodb/archive/2004/CRYPTO/1472/1472.pdf
Discussed with many, ok sthen@
at downloads.sourceforge.net; all the FRS mirrors just redirect us back
there to look up the file, as ports don't have logical folder names in
the file paths.
add an XXX comment because we don't _really_ want to be relying on this:
to be revisited.
noticed after no-longer-existent mirrors pointed out by fgs@
a shell object that can chdir, setenv, and exec commands.
(note that this executes stuff after fork, so permanent changes are cheap
and okay)
Also create it from "host" objects, which simplifies parameter passing.
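A condensed sketch of such an object (simplified and hypothetical; the
real one knows about hosts, properties and more):

    use strict;
    use warnings;

    package Shell;
    sub new
    {
        my ($class, $host) = @_;
        # built from a host object, which simplifies parameter passing
        return bless { host => $host, env => {}, dir => undef }, $class;
    }
    sub chdir { my ($self, $dir) = @_; $self->{dir} = $dir; return $self; }
    sub setenv
    {
        my ($self, %env) = @_;
        @{$self->{env}}{keys %env} = values %env;
        return $self;
    }
    sub exec
    {
        my ($self, @cmd) = @_;
        # we run in a freshly forked child, so these changes are cheap and local
        CORE::chdir($self->{dir}) if defined $self->{dir};
        @ENV{keys %{$self->{env}}} = values %{$self->{env}};
        CORE::exec(@cmd) or die "exec @cmd: $!";
    }

    package main;
    my $shell = Shell->new({ name => 'localhost' });
    my $pid = fork();
    if (defined $pid && $pid == 0) {
        $shell->chdir('/tmp')->setenv(FLAVOR => '')->exec('pwd');
    }
    waitpid($pid, 0) if $pid;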