Include information about who changed the build status in notification
emails, and enable optional per-input notification of said committers.
Merge conflicts were due to two branches modifying the database schema.
Signed-off-by: Shea Levy <shea@shealevy.com>
Conflicts:
    src/lib/Hydra/Schema/Jobsets.pm
    src/sql/upgrade-23.sql
Currently the dashboard allows users to get a quick overview of the
status of jobs they're interested in, but more will be added,
e.g. viewing all your jobsets or all jobs of which you're a
maintainer.
There are jobsets that are evaluated only once, that is, after they've
been evaluated, they're disabled automatically. This is primarily
useful for doing releases: for instance, doing an evaluation with
"officialRelease" set to "true" should be done only once.
We can just show the normal "edit jobset" page for the original jobset
and then do a PUT request to create a new jobset.
Also simplified updating the jobset inputs. We can just delete all of
them and recreate them from the user parameters. That's safe because
it's done in a transaction.
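In sketch form (DBIx::Class assumed; all names here are hypothetical,
not the actual controller code):

    # Hypothetical sketch; $db (the DBIx::Class schema), $jobset, and
    # %inputs are assumed to be in scope.
    $db->txn_do(sub {
        # Dropping every input and recreating them from the request
        # parameters is safe: the transaction hides the intermediate state.
        $jobset->jobsetinputs->delete;
        $jobset->jobsetinputs->create({ name => $_, type => $inputs{$_}{type} })
            for keys %inputs;
    });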
It's now a dropdown menu in the tabs thingy, which subsumes the
"Reproduce locally" button. This makes the actions in the menu a bit
more visible, IMHO.
Each jobset now has a "scheduling share" that determines how much of
the build farm's time it is entitled to. For instance, if a jobset
has 100 shares and the total number of shares of all jobsets is 1000,
it's entitled to 10% of the build farm's time. When there is a free
build slot for a given system type, the queue runner will select the
jobset that is furthest below its scheduling share over a certain time
window (currently, the last day). Within that jobset, it will pick
the build with the highest priority.
So meta.schedulingPriority now only determines the order of builds
within a jobset, not between jobsets. This makes it much easier to
prioritise one jobset over another (e.g. nixpkgs:trunk over
nixpkgs:stdenv).
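The selection rule reduces to a share-deficit comparison; here is a
minimal Perl sketch, with all names hypothetical (the real queue
runner's code differs):

    # Hypothetical sketch of the share-deficit rule: compare the fraction
    # of recent build time each jobset used against the fraction of shares
    # it holds, and pick the jobset furthest below its entitlement.
    sub pick_jobset {
        my ($jobsets, $shares_total, $used_in_window) = @_;  # assumed inputs
        my $used_total = 0;
        $used_total += $used_in_window->{$_->id} // 0 for @$jobsets;
        my ($best, $best_deficit);
        for my $j (@$jobsets) {
            my $entitled = $j->shares / $shares_total;
            my $used = $used_total ? ($used_in_window->{$j->id} // 0) / $used_total : 0;
            my $deficit = $entitled - $used;
            ($best, $best_deficit) = ($j, $deficit)
                if !defined $best_deficit || $deficit > $best_deficit;
        }
        return $best;  # then run its highest-priority queued build
    }

For the example above, a jobset holding 100 of 1000 shares is entitled
to 10%; if it consumed only 2% of the last day's build time, its
8-point deficit makes it a likely next pick.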
In your hydra config, you can add an arbitrary number of <s3config>
sections, with the following options:
* name (required): Bucket name
* jobs (required): A regex to match job names (in project:jobset:job
format) that should be backed up to this bucket
* compression_type: bzip2 (default), xz, or none
* prefix: String to prepend to all Hydra-created S3 keys (if this is
meant to represent a directory, you should include the trailing slash,
e.g. "cache/"). Default "".
After each build with an output (i.e. successful or failed-with-output
builds), the output path and its closure are uploaded to the bucket as
.nar files, with corresponding .narinfos to enable use as a binary
cache.
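With a prefix of "cache/", the resulting key layout looks roughly like
this (store hash fabricated for illustration):

    cache/nix-cache-info
    cache/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z.narinfo
    cache/nar/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z.nar.bz2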
This plugin requires that S3 credentials be available. It uses
Net::Amazon::S3; as of this commit, the version in nixpkgs can
retrieve S3 credentials from the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables, or from EC2 instance
metadata when using an IAM role.
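For example, when running outside EC2 (placeholder values):

    export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
    export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx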
This commit also adds a hydra-s3-backup-collect-garbage program, which
uses hydra's gc roots directory to determine which paths are live, and
then deletes all files except nix-cache-info and any .nar or .narinfo
files corresponding to live paths. hydra-s3-backup-collect-garbage
respects the prefix configuration option, so it won't delete anything
outside of the hierarchy you give it, and it has the same credential
requirements as the plugin. A timer unit that runs the garbage
collection periodically should probably be added to hydra-module.nix.
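A hedged sketch of what that might look like (NixOS systemd options;
unit names and the package reference are assumptions, and this is not
part of this commit):

    # Hypothetical sketch, not part of this commit.
    systemd.services.hydra-s3-backup-gc = {
      description = "Hydra S3 backup garbage collection";
      serviceConfig.User = "hydra";
      serviceConfig.ExecStart = "${cfg.hydra}/bin/hydra-s3-backup-collect-garbage";
    };
    systemd.timers.hydra-s3-backup-gc = {
      wantedBy = [ "timers.target" ];
      timerConfig.OnCalendar = "daily";
    };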
Note that two of the added tests fail, due to a bug in the interaction
between Net::Amazon::S3 and fake-s3. Those behaviors work against real
S3, though, so I'm committing this even with the broken tests.
Signed-off-by: Shea Levy <shea@shealevy.com>
Due to the fixed-output derivation hashing scheme, there can be
multiple derivations of the same output path. But build logs are
indexed by derivation path. Thus, we may not be able to find the
log of a build or build step using its derivation. So as a fallback,
Hydra now looks for other derivations with the same output paths.
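The fallback boils down to a lookup along these lines (DBIx::Class
sketch; result set and column names are assumed):

    # Hypothetical sketch: find other derivations that produced the
    # same output path, then check those for a log.
    my @alternates = $db->resultset('Builds')->search(
        { 'buildoutputs.path' => $outputPath,
          'me.drvpath'        => { '!=' => $drvPath } },
        { join => 'buildoutputs', columns => [ 'drvpath' ] });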
They're mostly redundant since there is a faster "jobs" tab on
the jobset pages now. The only thing the latter lacks is the
ability to see status change times, but those are quite expensive
to compute, and are visible on build pages if you really need them.
PostgreSQL and Perl have different sort orders, in particular when
comparing job names such as "aspell.x86_64-linux" and
"aspellDicts.cs.i686-freebsd". This confused the evaluation
comparison code, causing some jobs to appear as "removed".
So now we do all the sorting in Perl.
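The mismatch is easy to see with the job names above; a small Perl
illustration:

    # Perl's cmp is byte-wise: "." (0x2E) sorts before "D" (0x44), so
    # "aspell.x86_64-linux" comes first.
    my @jobs = sort ("aspellDicts.cs.i686-freebsd", "aspell.x86_64-linux");
    # => ("aspell.x86_64-linux", "aspellDicts.cs.i686-freebsd")
    # A locale-aware PostgreSQL collation can weigh punctuation
    # differently and produce the opposite order.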
Fixes #105.
Aggregate constituents are derivations. However there can be multiple
builds in an evaluation that have the same derivation, i.e. they can
alias each other (e.g. "emacs", "emacs24" and "emacs24Packages.emacs"
in Nixpkgs). Previously we picked a build arbitrarily for the
AggregateConstituents table. Now we pick the one with the shortest
name (e.g. "emacs").
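Schematically (the job accessor name is assumed):

    # Hypothetical sketch: among builds sharing a derivation, prefer the
    # shortest job name, e.g. "emacs" over "emacs24Packages.emacs".
    my ($constituent) =
        sort { length($a->job) <=> length($b->job) || $a->job cmp $b->job }
        @buildsWithSameDrv;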
For presentation purposes, we need to know what builds are part of an
aggregate build. So at evaluation time, look at the "members"
attribute, find the corresponding builds in the eval, and create a
mapping in the AggregateMembers table.
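In outline (table and attribute names beyond AggregateMembers are
assumed):

    # Hypothetical sketch: map each "members" entry back to a build in
    # this eval and record the pair in AggregateMembers.
    for my $drvPath (@{$aggregate->{members}}) {
        my $member = $buildsByDrvPath{$drvPath};  # this eval's builds, by drvPath
        next unless defined $member;
        $db->resultset('AggregateMembers')->create(
            { aggregate => $aggregateBuild->id, member => $member->id });
    }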