We can just show the normal "edit jobset" page for the original jobset
and then do a PUT request to create a new jobset.
Also simplified updating the jobset inputs. We can just delete all of
them and recreate them from the user parameters. That's safe because
it's done in a transaction.
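Schematically, the update now reduces to this (a minimal sketch using
DBIx::Class; the relation and variable names are assumptions):

  # Atomically replace the inputs with the user-supplied ones
  # ($db is the DBIx::Class schema):
  $db->txn_do(sub {
      $jobset->jobsetinputs->delete;
      $jobset->jobsetinputs->create({ name => $_->{name}, type => $_->{type} })
          foreach @userInputs;
  });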
It's now a dropdown menu in the tabs thingy, which subsumes the
"Reproduce locally" button. This makes the actions in the menu a bit
more visible, IMHO.
This commit was generated with (zsh syntax):
sed -i 's|/static[^"]*|[% c.uri_for("&") %]|;s/\[% size %\]/${size}/' **/*.tt
The reason for this change is to make it easier to change the base
path using headers such as X-Request-Base, so that Hydra can be served
under a URI prefix, especially when running behind a reverse proxy.
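For example (the path is illustrative), the substitution turns a
template line like

  <link rel="stylesheet" href="/static/css/hydra.css" />

into

  <link rel="stylesheet" href="[% c.uri_for("/static/css/hydra.css") %]" />

so the URL is generated from whatever base URI Catalyst sees at
request time.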
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Pick the jobset that has used the smallest fraction of its share,
rather than the jobset furthest below its share in absolute terms.
This gives jobsets with a small share a quicker start (but they
will also run out of their share quicker).
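For example, a jobset that has used 10% of its 10 shares now starts
before a jobset that has used 50% of its 1000 shares, even though the
latter is further below its share in absolute terms.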
Each jobset now has a "scheduling share" that determines how much of
the build farm's time it is entitled to. For instance, if a jobset
has 100 shares and the total number of shares of all jobsets is 1000,
it's entitled to 10% of the build farm's time. When there is a free
build slot for a given system type, the queue runner will select the
jobset that is furthest below its scheduling share over a certain time
window (currently, the last day). Within that jobset, it will pick
the build with the highest priority.
So meta.schedulingPriority now only determines the order of builds
within a jobset, not between jobsets. This makes it much easier to
prioritise one jobset over another (e.g. nixpkgs:trunk over
nixpkgs:stdenv).
In your hydra config, you can add an arbitrary number of <s3config>
sections, with the following options:
* name (required): Bucket name
* jobs (required): A regex to match job names (in project:jobset:job
format) that should be backed up to this bucket
* compression_type: bzip2 (default), xz, or none
* prefix: String to prepend to all hydra-created s3 keys (if this is
meant to represent a directory, you should include the trailing slash,
e.g. "cache/"). Default "".
After each build with an output (i.e. successful or failed-with-output
builds), the output path and its closure are uploaded to the bucket as
.nar files, with corresponding .narinfos to enable use as a binary
cache.
This plugin requires that s3 credentials be available. It uses
Net::Amazon::S3, which (as of the version in nixpkgs at the time of
this commit) can retrieve s3 credentials from the AWS_ACCESS_KEY_ID
and AWS_SECRET_ACCESS_KEY environment variables, or from ec2 instance
metadata when using an IAM role.
This commit also adds a hydra-s3-backup-collect-garbage program, which
uses hydra's gc roots directory to determine which paths are live, and
then deletes all files except nix-cache-info and any .nar or .narinfo
files corresponding to live paths. hydra-s3-backup-collect-garbage
respects the prefix configuration option, so it won't delete anything
outside of the hierarchy you give it, and it has the same credential
requirements as the plugin. A timer unit that runs the garbage
collection periodically should probably be added to hydra-module.nix.
Note that two of the added tests fail, due to a bug in the interaction
between Net::Amazon::S3 and fake-s3. Those behaviors work against real
s3 though, so I'm committing this even with the broken tests.
Signed-off-by: Shea Levy <shea@shealevy.com>
We now keep *all* unfinished evaluations of a jobset, in addition to
the <keepnr> most recent finished evaluations.
The main motivation is to ensure that mirror-{nixos,nixpkgs} work
properly: if building an evaluation takes too long, some of its builds
may already have been garbage-collected by the time the others finish.
We had Postgres barfing with this error:
DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR: stack depth limit exceeded
because the ‘drvpath => [ @dependentDrvs ]’ in failDependents can
cause a query of unbounded size. (In this specific case there was a
failure of Bison, which has > 10000 dependent derivations.) So now we
just get all scheduled builds from the DB.
Due to the fixed-output derivation hashing scheme, there can be
multiple derivations of the same output path. But build logs are
indexed by derivation path. Thus, we may not be able to find the
log of a build or build step using its derivation. So as a fallback,
Hydra now looks for other derivations with the same output paths.
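Roughly, the fallback is a lookup like this (table and column names
are assumptions):

  -- Other derivations whose build steps produced the same output path:
  select distinct s.drvpath
  from buildsteps s
  join buildstepoutputs o on o.build = s.build and o.stepnr = s.stepnr
  where o.path = ?;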
They're mostly redundant since there is a faster "jobs" tab on
the jobset pages now. The only thing the latter lacks is the
ability to see status change times, but those are quite expensive
to compute, and are visible on build pages if you really need them.
PostgreSQL and Perl have different sort orders, in particular when
comparing job names such as "aspell.x86_64-linux" and
"aspellDicts.cs.i686-freebsd". This confused the evaluation
comparison code, causing some jobs to appear as "removed".
So now we do all the sorting in Perl.
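Concretely (a sketch; the column accessor is an assumption):

  # Perl's cmp compares byte-wise, so "aspell.x86_64-linux" sorts
  # before "aspellDicts.cs.i686-freebsd" ('.' < 'D'), whereas a
  # PostgreSQL locale collation may order them the other way round.
  my @sorted = sort { $a->get_column('job') cmp $b->get_column('job') } @builds;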
Fixes #105.
Aggregate constituents are derivations. However there can be multiple
builds in an evaluation that have the same derivation, i.e. they can
alias each other (e.g. "emacs", "emacs24" and "emacs24Packages.emacs"
in Nixpkgs). Previously we picked a build arbitrarily for the
AggregateConstituents table. Now we pick the one with the shortest
name (e.g. "emacs").
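In code, the pick boils down to something like this (a sketch; the
accessor name is an assumption):

  # Prefer the shortest alias, e.g. "emacs" over "emacs24Packages.emacs":
  my ($constituent) = sort { length($a->job) <=> length($b->job) } @aliases;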
We now keep all builds in the N most recent evaluations of a jobset,
rather than the N most recent builds of every job. Note that this
means that typically fewer builds will be kept (since jobs may be
unchanged across evaluations).
For presentation purposes, we need to know what builds are part of an
aggregate build. So at evaluation time, look at the "members"
attribute, find the corresponding builds in the eval, and create a
mapping in the AggregateMembers table.
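Schematically (a sketch; names are assumptions):

  # Map each member derivation to a build in the same eval:
  foreach my $drvPath (@{$aggregate->{members}}) {
      my ($member) = grep { $_->drvpath eq $drvPath } @buildsInEval;
      $db->resultset('AggregateMembers')->create(
          { aggregate => $aggregateBuild->id, member => $member->id })
          if defined $member;
  }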
It redirects to the latest successful build from a finished
evaluation. This is mostly useful for the Nixpkgs/NixOS mirroring
script, which needs the latest finished evaluation in which some
aggregate job (such as ‘tested’ in NixOS) succeeded.
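For example (the URL shape is illustrative), something like

  /job/nixos/trunk/tested/latest-finished

would redirect to the most recent successful build of 'tested' that
belongs to a finished evaluation.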
The NrBuilds table tracks the value of ‘select count(*) from Builds
where finished = 0’, keeping it up to date via a trigger. This is
necessary to make the /all page fast, since otherwise it needs to do a
sequential scan on the Builds table.
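The mechanics are roughly as follows (a sketch; the real names and
trigger logic may differ):

  create table NrBuilds ( what text primary key, count integer not null );
  insert into NrBuilds (what, count) values ('unfinished', 0);

  create function modifyNrBuilds() returns trigger as $$
  begin
    if tg_op = 'INSERT' then
      if new.finished = 0 then
        update NrBuilds set count = count + 1 where what = 'unfinished';
      end if;
    elsif tg_op = 'UPDATE' then
      if old.finished = 0 and new.finished = 1 then
        update NrBuilds set count = count - 1 where what = 'unfinished';
      elsif old.finished = 1 and new.finished = 0 then
        update NrBuilds set count = count + 1 where what = 'unfinished';
      end if;
    elsif tg_op = 'DELETE' then
      if old.finished = 0 then
        update NrBuilds set count = count - 1 where what = 'unfinished';
      end if;
    end if;
    return null;
  end;
  $$ language plpgsql;

  create trigger NrBuildsTrigger after insert or update or delete on Builds
    for each row execute procedure modifyNrBuilds();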
Doing a chdir in the parent is evil. For instance, we had Hydra core
dumps ending up in the cloned directory. Therefore, the function
‘run’ allows doing a chdir in the child. The function ‘grab’ returns
the child's stdout and throws an exception if the child fails.
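A minimal sketch of the two helpers (the signatures are assumptions):

  sub run {
      my (%args) = @_;
      my $pid = open(my $fh, "-|");   # fork; child's stdout -> $fh
      die "cannot fork: $!" unless defined $pid;
      if ($pid == 0) {                # child: chdir here, not in the parent
          chdir $args{dir} or die "cannot chdir to $args{dir}: $!"
              if defined $args{dir};
          exec @{$args{cmd}};
          die "cannot exec $args{cmd}->[0]: $!";
      }
      local $/; my $stdout = <$fh>;   # parent: slurp the child's output
      close $fh;
      return { status => $?, stdout => $stdout };
  }

  sub grab {
      my (%args) = @_;
      my $res = run(%args);
      die "command failed with exit code " . ($res->{status} >> 8) . "\n"
          if $res->{status} != 0;
      return $res->{stdout};
  }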
This allows users to sign in to Hydra using Mozilla Persona accounts.
When a user first signs in, a row in the Users table for the given
Persona identity (an email address) is created automatically.
To do: figure out how to deal with legacy accounts.
The catalyst-action-rest branch from shlevy/hydra was an exploration of
using Catalyst::Action::REST to create a JSON API for hydra. This commit
merges in the best bits from that experiment, with the goal that further
API endpoints can be added incrementally.
In addition to migrating more endpoints, there is potential for
improvement in what's already been done:
* The web interface can be updated to use the same non-GET endpoints as
the JSON interface (using x-tunneled-method; see the example after
this list) instead of having a separate endpoint
* The web rendering should use the $c->stash->{resource} data structure
where applicable rather than putting the same data in two places in
the stash
* Which columns to render for each endpoint is a completely debatable
question
* Hydra::Component::ToJSON should turn has_many relations that have
strings as their primary keys into objects instead of arrays
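For instance, a browser-style client that can only POST could reach a
PUT endpoint by tunnelling the method (the endpoint and fields are
illustrative):

  curl -d 'x-tunneled-method=PUT' -d 'displayname=Foo' \
    http://hydra.example.org/jobset/myproject/myjobset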
Fixes NixOS/hydra#98
Signed-off-by: Shea Levy <shea@shealevy.com>
HipChat notification messages now say which committers were
responsible, e.g.
Job patchelf:trunk:tarball: Failed, probably due to 2 commits by Eelco Dolstra
This plugin sends notification of build failure or success to a
HipChat room, if the status differs from the last build.
The plugin can be configured by adding one or more of these stanzas to
hydra.conf:
<hipchat>
  jobs = (patchelf|nixops):.*:.*
  room = 1234
  token = 39ab2198fe...
</hipchat>
Here "jobs" is a regular expression against which the fully qualified
job name of the build is matched (so for instance
"nixops:master:tarball" will match the stanza above).
Restarted builds whose derivation has been garbage-collected in the
meantime caused hydra-queue-runner to get stuck in a loop saying:
Jun 14 11:54:25 lucifer hydra-queue-runner[31844]: system type `x86_64-darwin': 0 active, 2 allowed, started 2 builds
Jun 14 11:54:25 lucifer hydra-queue-runner[31844]: {UNKNOWN}: path `/nix/store/wcizsch2garjlvs4pswrar47i1hwjaia-inconsolata.drv' is not valid at
/nix/store/ypkdm4v13yrk941rvp8h0y425a5ww6nm-hydra-0.1pre1353-40debf1/bin/.hydra-queue-runner-wrapped line 51. at
/nix/store/kjpsc2zdaxnd44azxyw60f2px839m1cd-hydra-perl-deps/lib/perl5/site_perl/5.16.2/Catalyst/Model/DBIC/Schema.pm line 501
This happens if the previous iteration took more than 60 seconds.
Then the queue runner may think that builds failed to start properly
and unlock them, e.g.
build 5264936 pid 19248 died, unlocking
build 5264951 pid 19248 died, unlocking
build 5257073 pid 19248 died, unlocking
...
Because we don't start a build if a dependency is already building,
it's possible that some or all of the $extraAllowed highest-priority
builds in the queue are not eligible. E.g. with $extraAllowed = 32,
we might start only 3 builds even though there are thousands in the
queue. The fix is to try all queued builds until $extraAllowed have
been started.
Issue #99.
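The fixed loop is roughly (a sketch; the helper names are
hypothetical):

  my $started = 0;
  foreach my $build (@queuedBuilds) {       # ordered by priority
      last if $started >= $extraAllowed;
      next if dependencyIsBuilding($build); # hypothetical helper
      $started++ if startBuild($build);     # hypothetical helper
  }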
For some reason, hg clone from a local (path-based) repo will fail if
the parent directory of the destination directory doesn't exist (though
it succeeds when cloning from an http repo).
Signed-off-by: Shea Levy <shea@shealevy.com>
Previously, for scheduled builds, "timestamp" contained the time the
build was added to the queue, while for finished builds, it was the
time the build finished. Now it's always the former.
The revision counting changes depending on which revision is cloned
initially, so clone the default branch first and then check out the
required revision to match hydra's revCount.
Signed-off-by: Shea Levy <shea@shealevy.com>
See e.g. http://hydra.nixos.org/build/4915744.
P.S. existing active build steps of finished builds can be marked as
aborted by running:
update buildsteps set busy = 0, status = 4
where (build, stepnr) in
  (select s.build, s.stepnr from buildsteps s
   join builds b on s.build = b.id
   where b.finished = 1 and s.busy = 1);
This is mostly so we don't have to pass around common parameters like
"db" and "config", and we don't have to check for the existence of
methods.
A plugin now looks like this:
package Hydra::Plugin::TwitterNotification;

use parent 'Hydra::Plugin';

sub buildFinished {
    my ($self, $build, $dependents) = @_;
    print STDERR "tweeting about build ", $build->id, "\n";
    # Send tweet...
    # Hydra database is $self->{db}.
}
You can now add plugins to Hydra by writing a module called
Hydra::Plugin::<whatever> and putting it in Perl's search path. The
only plugin operation currently supported is buildFinished, which is
called when hydra-build has finished doing a build.
For instance, a Twitter notification plugin would look like this:
package Hydra::Plugin::TwitterNotification;

sub buildFinished {
    my ($self, $db, $config, $build, $dependents) = @_;
    print STDERR "tweeting about build ", $build->id, "\n";
    # send tweet...
}

1;
Previously this function didn't actually have a lot of effect. If a
build A had a dependency B, Hydra would start B first. But on the
next scan through the queue, it would start A anyway, because of the
"busy => 0" restriction.
Now the queue runner won't start a build if a dependency is already
running. (This is not necessarily optimal, since the build may have
other dependencies that don't correspond to a build in the queue but
could run. One day we'll start all Hydra builds in parallel...)
Also, for performance, use computeFSClosure instead of "nix-store
-qR". And don't bother with topological sorting because it didn't
have an effect anyway since the database returns dependencies in
arbitrary order.
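With the Perl bindings this looks roughly like (the exact signature is
an assumption):

  use Nix::Store;
  # flip-direction and include-outputs flags, then the path:
  my @closure = computeFSClosure(0, 0, $build->drvpath);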
This allows checking a jobset (say) at most once a day. It's also
possible to disable polling by setting the interval to 0. This is
useful for jobsets that use push notification or are manually
evaluated.
This caused exceptions like:
Caught exception in Hydra::Controller::Build->view_build "writing to file: Broken pipe at /nix/store/ihdb3widsq1dk7sbl5vqjxfcxb5ypad4-hydra-0.1pre1297-8158093/libexec/hydra/lib/Hydra/Controller/Build.pm line 59."
because the connection to the Nix daemon would be terminated due to a
protocol violation (calling queryPathInfo with an empty string).