Each jobset now has a "scheduling share" that determines how much of
the build farm's time it is entitled to. For instance, if a jobset
has 100 shares and the total number of shares of all jobsets is 1000,
it's entitled to 10% of the build farm's time. When there is a free
build slot for a given system type, the queue runner will select the
jobset that is furthest below its scheduling share over a certain time
window (currently, the last day). Within that jobset, it will pick
the build with the highest priority.
So meta.schedulingPriority now only determines the order of builds
within a jobset, not between jobsets. This makes it much easier to
prioritise one jobset over another (e.g. nixpkgs:trunk over
nixpkgs:stdenv).
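For illustration only, the selection could be sketched like this
(shareUsed, queuedBuildsFor and the shares/priority accessors are
hypothetical, not Hydra's actual code):

  # Pick the jobset furthest below its share over the last day, then the
  # highest-priority queued build within it.
  my ($best, $bestDeficit);
  foreach my $jobset (@jobsets) {
      my $entitled = $jobset->shares / $totalShares;  # fraction of farm time it is entitled to
      my $used     = shareUsed($jobset);              # hypothetical: fraction actually used over the last day
      my $deficit  = $entitled - $used;
      ($best, $bestDeficit) = ($jobset, $deficit)
          if !defined($bestDeficit) || $deficit > $bestDeficit;
  }
  my ($build) = sort { $b->priority <=> $a->priority } queuedBuildsFor($best);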
In your hydra config, you can add an arbitrary number of <s3config>
sections, with the following options (an example follows the list):
* name (required): Bucket name
* jobs (required): A regex to match job names (in project:jobset:job
format) that should be backed up to this bucket
* compression_type: bzip2 (default), xz, or none
* prefix: String to prepend to all hydra-created s3 keys (if this is
meant to represent a directory, you should include the trailing slash,
e.g. "cache/"). Default "".
After each build with an output (i.e. successful or failed-with-output
builds), the output path and its closure are uploaded to the bucket as
.nar files, with corresponding .narinfos to enable use as a binary
cache.
This plugin requires that s3 credentials be available. It uses
Net::Amazon::S3; as of this commit, the version in nixpkgs can retrieve
s3 credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
environment variables, or from ec2 instance metadata when using an IAM
role.
This commit also adds a hydra-s3-backup-collect-garbage program, which
uses hydra's gc roots directory to determine which paths are live, and
then deletes all files except nix-cache-info and any .nar or .narinfo
files corresponding to live paths. hydra-s3-backup-collect-garbage
respects the prefix configuration option, so it won't delete anything
outside of the hierarchy you give it, and it has the same credential
requirements as the plugin. A timer unit that runs the garbage
collection periodically should probably be added to hydra-module.nix.
Note that two of the added tests fail, due to a bug in the interaction
between Net::Amazon::S3 and fake-s3. Those behaviors work against real
s3 though, so I'm committing this even with the broken tests.
Signed-off-by: Shea Levy <shea@shealevy.com>
We now keep *all* unfinished evaluations of a jobset, in addition to
the <keepnr> most recent finished evaluations.
The main motivation is to ensure that mirror-{nixos,nixpkgs} work
properly: if building an evaluation takes too long, some of its builds
may already have been garbage-collected by the time the others finish.
We had Postgres barfing with this error:
DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: stack depth limit exceeded
because the ‘drvpath => [ @dependentDrvs ]’ in failDependents can
cause a query of unbounded size. (In this specific case there was a
failure of Bison, which has > 10000 dependent derivations.) So now we
just get all scheduled builds from the DB.
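Roughly, the change is from a single search whose condition grows with
the number of dependent derivations to filtering in Perl, as in this
hedged sketch (resultset and column names are illustrative, not the
literal code in failDependents):

  # Before: the arrayref expands into one condition per dependent
  # derivation, so the query grows without bound.
  my @deps = $db->resultset('Builds')->search(
      { finished => 0, drvpath => [ @dependentDrvs ] });

  # After: fetch all scheduled builds and pick out the dependents in Perl.
  my %wanted = map { $_ => 1 } @dependentDrvs;
  my @dependentBuilds = grep { $wanted{$_->drvpath} }
      $db->resultset('Builds')->search({ finished => 0 });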
Aggregate constituents are derivations. However there can be multiple
builds in an evaluation that have the same derivation, i.e. they can
alias each other (e.g. "emacs", "emacs24" and "emacs24Packages.emacs"
in Nixpkgs). Previously we picked a build arbitrarily for the
AggregateConstituents table. Now we pick the one with the shortest
name (e.g. "emacs").
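A hedged sketch of the tie-break (the job accessor and @aliasedBuilds
are illustrative):

  # Among the builds in the eval that share one derivation, prefer the
  # one with the shortest job name.
  my ($constituent) =
      sort { length($a->job) <=> length($b->job) } @aliasedBuilds;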
We now keep all builds in the N most recent evaluations of a jobset,
rather than the N most recent builds of every job. Note that this
means that typically fewer builds will be kept (since jobs may be
unchanged across evaluations).
For presentation purposes, we need to know what builds are part of an
aggregate build. So at evaluation time, look at the "members"
attribute, find the corresponding builds in the eval, and create a
mapping in the AggregateMembers table.
The NrBuilds table tracks the value of ‘select count(*) from Builds
where finished = 0’, keeping it up to date via a trigger. This is
necessary to make the /all page fast, since otherwise it needs to do a
sequential scan on the Builds table.
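The idea, in a hedged sketch (the table, function and trigger names here
are illustrative, not necessarily Hydra's actual schema):

  create table NrBuilds (count bigint not null);

  create function nr_builds_update() returns trigger as $$
  begin
    -- keep count equal to 'select count(*) from Builds where finished = 0'
    if (tg_op = 'INSERT' and new.finished = 0) then
      update NrBuilds set count = count + 1;
    elsif (tg_op = 'UPDATE' and old.finished = 0 and new.finished != 0) then
      update NrBuilds set count = count - 1;
    elsif (tg_op = 'UPDATE' and old.finished != 0 and new.finished = 0) then
      update NrBuilds set count = count + 1;
    elsif (tg_op = 'DELETE' and old.finished = 0) then
      update NrBuilds set count = count - 1;
    end if;
    return null;
  end;
  $$ language plpgsql;

  create trigger NrBuildsUpdate after insert or update or delete on Builds
    for each row execute procedure nr_builds_update();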
HipChat notification messages now say which committers were
responsible, e.g.
Job patchelf:trunk:tarball: Failed, probably due to 2 commits by Eelco Dolstra
Restarted builds whose derivation has been garbage-collected in the
meantime caused hydra-queue-runner to get stuck in a loop saying:
Jun 14 11:54:25 lucifer hydra-queue-runner[31844]: system type `x86_64-darwin': 0 active, 2 allowed, started 2 builds
Jun 14 11:54:25 lucifer hydra-queue-runner[31844]: {UNKNOWN}: path `/nix/store/wcizsch2garjlvs4pswrar47i1hwjaia-inconsolata.drv' is not valid at
/nix/store/ypkdm4v13yrk941rvp8h0y425a5ww6nm-hydra-0.1pre1353-40debf1/bin/.hydra-queue-runner-wrapped line 51. at
/nix/store/kjpsc2zdaxnd44azxyw60f2px839m1cd-hydra-perl-deps/lib/perl5/site_perl/5.16.2/Catalyst/Model/DBIC/Schema.pm line 501
This happens if the previous iteration took more than 60 seconds.
Then the queue runner may think that builds failed to start properly
and unlock them, e.g.
build 5264936 pid 19248 died, unlocking
build 5264951 pid 19248 died, unlocking
build 5257073 pid 19248 died, unlocking
...
Because we don't start a build if a dependency is already building,
it's possible that some or all of the $extraAllowed highest-priority
builds in the queue are not eligible. E.g. with $extraAllowed = 32,
we might start only 3 builds even though there are thousands in the
queue. The fix is to try all queued builds until $extraAllowed have
been started.
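A hedged sketch of the new loop (dependencyIsBuilding and startBuild are
hypothetical helpers):

  my $started = 0;
  foreach my $build (@queuedBuilds) {         # ordered by priority
      last if $started >= $extraAllowed;
      next if dependencyIsBuilding($build);   # skip builds whose dependency is already building
      $started++ if startBuild($build);
  }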
Issue #99.
Previously, for scheduled builds, "timestamp" contained the time the
build was added to the queue, while for finished builds, it was the
time the build finished. Now it's always the former.
See e.g. http://hydra.nixos.org/build/4915744.
P.S. existing active build steps of finished builds can be marked as
aborted by running:
  update buildsteps set busy = 0, status = 4
  where (build, stepnr) in
    (select s.build, s.stepnr from buildsteps s
     join builds b on s.build = b.id
     where b.finished = 1 and s.busy = 1);
This is mostly so we don't have to pass around common parameters like
"db" and "config", and we don't have to check for the existence of
methods.
A plugin now looks like this:
  package Hydra::Plugin::TwitterNotification;

  use parent 'Hydra::Plugin';

  sub buildFinished {
      my ($self, $build, $dependents) = @_;
      print STDERR "tweeting about build ", $build->id, "\n";
      # Send tweet...
      # Hydra database is $self->{db}.
  }

  1;
You can now add plugins to Hydra by writing a module called
Hydra::Plugin::<whatever> and putting it in Perl's search path. The
only plugin operation currently supported is buildFinished, called
when hydra-build has finished doing a build.
For instance, a Twitter notification plugin would look like this:
  package Hydra::Plugin::TwitterNotification;

  sub buildFinished {
      my ($self, $db, $config, $build, $dependents) = @_;
      print STDERR "tweeting about build ", $build->id, "\n";
      # send tweet...
  }

  1;
Previously this function didn't actually have a lot of effect. If a
build A had a dependency B, Hydra would start B first. But on the
next scan through the queue, it would start A anyway, because of the
"busy => 0" restriction.
Now the queue runner won't start a build if a dependency is already
running. (This is not necessarily optimal, since the build may have
other dependencies that don't correspond to a build in the queue but
could run. One day we'll start all Hydra builds in parallel...)
Also, for performance, use computeFSClosure instead of "nix-store
-qR". And don't bother with topological sorting because it didn't
have an effect anyway since the database returns dependencies in
arbitrary order.
This allows checking a jobset (say) at most once a day. It's also
possible to disable polling by setting the interval to 0. This is
useful for jobsets that use push notification or are manually
evaluated.
Avoid the frequently printed
hydra-queue-runner[10293]: system type `x86_64-linux': 2 active, 2 allowed, starting 0 builds
message. That information is only interesting when some builds are
actually started.
This is a followup to commit
10882a1ffd ("Add multiple output
support").
* src/script/hydra-eval-guile-jobs.in (job-evaluations->sxml): Return
several `output' tags in the body, and remove the `outPath' attribute
of `job'.