This adds an `InfluxDBNotification` plugin, which is configured as:
```
<influxdb>
url = http://127.0.0.1:8086
db = hydra
</influxdb>
```
which writes a notification for every finished job to the configured
InfluxDB database, looking like:
```
hydra_build_status,cached=false,job=job,jobset=default,project=sample,repo=default,result=success,status=success,system=x86_64-linux build_id="1",build_status=0i,closure_size=584i,duration=0i,main_build_id="1",queued=0i,size=168i 1564156212
```
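For reference, a point in that line-protocol format can be pushed to an InfluxDB 1.x server with a plain POST to its /write endpoint. The sketch below is only an illustration of the protocol, not the plugin's actual (Perl) code; the URL and database name are taken from the example config above:
```python
# Minimal sketch: write one line-protocol point to the InfluxDB 1.x HTTP API.
# The URL and database name come from the example config above; this is an
# illustration of the protocol, not the plugin's own implementation.
import requests

line = ('hydra_build_status,cached=false,job=job,jobset=default,project=sample,'
        'repo=default,result=success,status=success,system=x86_64-linux '
        'build_id="1",build_status=0i,closure_size=584i,duration=0i,'
        'main_build_id="1",queued=0i,size=168i 1564156212')

resp = requests.post(
    'http://127.0.0.1:8086/write',
    params={'db': 'hydra', 'precision': 's'},  # the timestamp above is in seconds
    data=line,
)
resp.raise_for_status()  # InfluxDB replies 204 No Content on success
```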
Previously, the tests would all fail to build, producing roughly the
following error (roughly, because I added some debug log messages :)):
```
ok 68 - Evaluating jobs/build-products.nix should result in 2 builds
Queue runner stderr: using 4185024512 bytes for the NAR buffer
locking path '/build/source/tests/data/queue-runner/lock'
lock acquired on '/build/source/tests/data/queue-runner/lock.lock'
warning: unknown setting 'max-connection-age'
warning: unknown setting 'max-connections'
dispatcher woken up
dispatcher woken up
dispatcher sleeping for 7674380800s
adding new machine ‘localhost’
dispatcher woken up
checking the queue for builds > 0...
dispatcher sleeping for 7674380800s
sending notification about build 1
loading build 18 (tests:build-products:simple)
considering derivation ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’
sending notification about build 2
creating build step ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’
added build 18 (top-level step /build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv, 1 new steps)
got 1 new runnable steps from 1 new builds
step ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’ is now runnable
dispatcher woken up
dispatcher sleeping for 7674380800s
performing step ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’ 1 times on ‘localhost’ (needed by build 18
and 0 others)
sending closure of ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’ to ‘localhost’
building ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’ on ‘localhost’
killing process 10462
marking build 18 as failed
finishing build step ‘/build/source/tests/nix/store/24h0i450d4k00a4jhhk6r7qpqdvzskw6-build-product-simple.drv’
ok 69 - Build 'simple' from jobs/build-products.nix should exit with code 0
ok 70 - newbuild->finished was '1' instead of 1
not ok 71 - newbuild->buildstatus was '1' instead of 0
not ok 72 - Build 'simple' from jobs/build-products.nix should have buildstatus 0
Can't call method "name" on an undefined value at ./evaluation-tests.pl line 173.
FAIL: evaluation-tests.pl
```
This rewrites the top-level loop of hydra-evaluator in C++. The Perl
stuff is moved into hydra-eval-jobset. (Rewriting the entire evaluator
would be nice but is a bit too much work.) The new version has some
advantages:
* It can run multiple jobset evaluations in parallel.
* It uses PostgreSQL notifications instead of polling the database, so
if a jobset is triggered via the web interface or from a GitHub /
Bitbucket webhook, evaluation of the jobset starts almost
instantaneously (assuming the evaluator is not at its concurrency
limit); a rough sketch of this wakeup follows the list.
* It imposes a timeout on evaluations. So if e.g. hydra-eval-jobset
hangs connecting to a Mercurial server, it will eventually be
killed.
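A rough sketch of that notification-driven wakeup, written in Python with psycopg2 rather than the actual C++ code; the channel name and the fallback interval are assumptions:
```python
# Sketch of a LISTEN/NOTIFY-driven loop (a Python/psycopg2 stand-in for the C++
# evaluator). The channel name 'eval_trigger' and the 60-second fallback poll
# are assumptions, not values from the real implementation.
import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=hydra")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
conn.cursor().execute("LISTEN eval_trigger;")

while True:
    # Block until a notification arrives, or wake up anyway after 60 seconds.
    ready, _, _ = select.select([conn], [], [], 60)
    if ready:
        conn.poll()
        while conn.notifies:
            note = conn.notifies.pop(0)
            # Here the real evaluator would start evaluating the triggered
            # jobset (subject to its concurrency limit) and kill the child
            # process if it exceeds the evaluation timeout.
            print("jobset triggered:", note.payload)
    else:
        print("no notification; doing a periodic check instead")
```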
In your hydra config, you can add an arbitrary number of <s3config>
sections with the following options (an example stanza follows the
list):
* name (required): Bucket name
* jobs (required): A regex to match job names (in project:jobset:job
format) that should be backed up to this bucket
* compression_type: bzip2 (default), xz, or none
* prefix: String to prepend to all hydra-created s3 keys (if this is
meant to represent a directory, you should include the trailing slash,
e.g. "cache/"). Default "".
After each build with an output (i.e. successful or failed-with-output
builds), the output path and its closure are uploaded to the bucket as
.nar files, with corresponding .narinfos to enable use as a binary
cache.
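For context, a .narinfo is a small key-value file that tells a Nix client where to find the corresponding NAR and how to verify it. The example below uses placeholders throughout and assumes the default bzip2 compression:
```
StorePath: /nix/store/<hash>-build-product-simple
URL: nar/<hash>.nar.bz2
Compression: bzip2
FileHash: sha256:<hash of the compressed NAR>
FileSize: <size of the compressed NAR in bytes>
NarHash: sha256:<hash of the uncompressed NAR>
NarSize: <size of the uncompressed NAR in bytes>
References: <basenames of the store paths this output references>
Deriver: <hash>-build-product-simple.drv
```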
This plugin requires that S3 credentials be available. It uses
Net::Amazon::S3; as of this commit, the version in nixpkgs can retrieve
S3 credentials from the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
environment variables, or from EC2 instance metadata when an IAM role
is used.
This commit also adds a hydra-s3-backup-collect-garbage program, which
uses hydra's gc roots directory to determine which paths are live, and
then deletes all files except nix-cache-info and any .nar or .narinfo
files corresponding to live paths. hydra-s3-backup-collect-garbage
respects the prefix configuration option, so it won't delete anything
outside of the hierarchy you give it, and it has the same credential
requirements as the plugin. A timer unit that runs the garbage
collection periodically should probably be added to hydra-module.nix.
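A rough sketch of that garbage-collection logic, written in Python with boto3 for illustration (the actual program is part of Hydra and uses Net::Amazon::S3); the bucket, prefix, roots directory, and key layout are all assumptions:
```python
# Sketch of the collect-garbage idea: keep nix-cache-info plus any .nar/.narinfo
# belonging to a live store path, delete everything else under the prefix.
# Bucket name, prefix, roots directory, and key naming are assumptions.
import os
import boto3

BUCKET = "my-hydra-cache"                    # matches the placeholder config above
PREFIX = "cache/"                            # the configured prefix, if any
ROOTS_DIR = "/nix/var/nix/gcroots/hydra"     # assumed location of Hydra's GC roots

# Collect the hash part (the characters before the first '-') of every rooted
# path. The real program also accounts for the closure of each root.
live_hashes = set()
for root in os.listdir(ROOTS_DIR):
    target = os.path.realpath(os.path.join(ROOTS_DIR, root))
    name = os.path.basename(target)          # e.g. "<hash>-build-product-simple"
    live_hashes.add(name.split("-", 1)[0])

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        base = os.path.basename(key)
        if base == "nix-cache-info":
            continue                          # always keep the cache metadata
        hash_part = base.split(".", 1)[0].split("-", 1)[0]
        if hash_part not in live_hashes:
            s3.delete_object(Bucket=BUCKET, Key=key)   # dead .nar or .narinfo
```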
Note that two of the added tests fail due to a bug in the interaction
between Net::Amazon::S3 and fake-s3. Those behaviors work against real
S3, though, so I'm committing this even with the broken tests.
Signed-off-by: Shea Levy <shea@shealevy.com>
The NrBuilds table tracks the value of ‘select count(*) from Builds
where finished = 0’, keeping it up to date via a trigger. This is
necessary to make the /all page fast, since otherwise it needs to do a
sequential scan on the Builds table.
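A minimal sketch of such a trigger-maintained counter, written as a psycopg2 script; the table layout, the 'unfinished' row key, and the trigger names are assumptions for illustration, not Hydra's actual schema:
```python
# Sketch of a trigger-maintained counter for "builds with finished = 0".
# The table layout, the 'unfinished' row key and the trigger names are assumed.
import psycopg2

conn = psycopg2.connect("dbname=hydra")
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS NrBuilds (what text PRIMARY KEY, count int NOT NULL);

        INSERT INTO NrBuilds (what, count)
          SELECT 'unfinished', count(*) FROM Builds WHERE finished = 0
          ON CONFLICT (what) DO NOTHING;

        CREATE OR REPLACE FUNCTION modifyNrBuildsUnfinished() RETURNS trigger AS $$
        BEGIN
          IF    (TG_OP = 'INSERT' AND NEW.finished = 0) THEN
            UPDATE NrBuilds SET count = count + 1 WHERE what = 'unfinished';
          ELSIF (TG_OP = 'DELETE' AND OLD.finished = 0) THEN
            UPDATE NrBuilds SET count = count - 1 WHERE what = 'unfinished';
          ELSIF (TG_OP = 'UPDATE' AND OLD.finished = 0 AND NEW.finished <> 0) THEN
            UPDATE NrBuilds SET count = count - 1 WHERE what = 'unfinished';
          ELSIF (TG_OP = 'UPDATE' AND OLD.finished <> 0 AND NEW.finished = 0) THEN
            UPDATE NrBuilds SET count = count + 1 WHERE what = 'unfinished';
          END IF;
          RETURN NULL;
        END;
        $$ LANGUAGE plpgsql;

        DROP TRIGGER IF EXISTS NrBuildsUnfinished ON Builds;
        CREATE TRIGGER NrBuildsUnfinished AFTER INSERT OR UPDATE OR DELETE ON Builds
          FOR EACH ROW EXECUTE PROCEDURE modifyNrBuildsUnfinished();
    """)
```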
The catalyst-action-rest branch from shlevy/hydra was an exploration of
using Catalyst::Action::REST to create a JSON API for hydra. This commit
merges in the best bits from that experiment, with the goal that further
API endpoints can be added incrementally.
In addition to migrating more endpoints, there is potential for
improvement in what's already been done:
* The web interface can be updated to use the same non-GET endpoints as
the JSON interface (using x-tunneled-method; see the example after
this list) instead of having a separate endpoint
* The web rendering should use the $c->stash->{resource} data structure
where applicable rather than putting the same data in two places in
the stash
* Which columns to render for each endpoint is a completely debatable
question
* Hydra::Component::ToJSON should turn has_many relations that have
strings as their primary keys into objects instead of arrays
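As an illustration of the tunneling mentioned above, a client could drive a non-GET endpoint through POST like this; the host, endpoint path, and body are hypothetical, while the x-tunneled-method parameter and the Accept header follow the Catalyst::Action::REST convention:
```python
# Hypothetical client call: tunnel a PUT through POST and ask for JSON.
# The host, path and payload are made up for illustration only.
import requests

resp = requests.post(
    "http://hydra.example.org/jobset/sample/default",
    params={"x-tunneled-method": "PUT"},     # Catalyst::Action::REST method tunneling
    headers={"Accept": "application/json"},  # request the JSON representation
    json={"enabled": 1},
)
print(resp.status_code, resp.json())
```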
Fixes NixOS/hydra#98
Signed-off-by: Shea Levy <shea@shealevy.com>