Commit graph

1130 commits

Author SHA1 Message Date
Bas van Dijk
48678df8b6
updateDeclarativeJobset: only set the emailresponsible column when defined (#788) 2020-07-08 19:08:11 -04:00
Eelco Dolstra
b0163e9eae
Fix project creation by non-admin users 2020-07-08 12:26:46 +02:00
Eelco Dolstra
1831866a52
Merge pull request #692 from knl/emit-hostname-with-influxdb-metrics
Add host tag to InfluxDB metrics
2020-06-10 10:57:17 +02:00
Eelco Dolstra
56b1660c4d
Merge pull request #775 from knl/path-input-cache-validity-fix
Make PathInput plugin cache validity configurable
2020-06-05 17:22:07 +02:00
Nikola Knezevic
e34d40d4f8 Remove dead method from Nix.pm
This method was moved to hydra-eval-jobset a long time ago.
2020-06-05 15:01:36 +02:00
Nikola Knezevic
fceaed2b24 Make PathInput plugin cache validity configurable
The PathInput plugin keeps a cache of path evaluations. The cache is simple: a
path is not checked more than once every N seconds, where N=30. The caching is
there to avoid expensive calls to `nix-store --add`.

This change makes the validity period configurable. The main use case is
`api-test.pl`, which was implemented incorrectly for a while: the invocation of
`hydra-eval-jobset` would return the previous evaluation, claiming there were no
changes. The test has been fixed to check more carefully for a new evaluation.
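
A minimal sketch of the caching idea; the option name and field names below are
assumptions, not Hydra's actual code:

```perl
# Re-run `nix-store --add` for a path at most once per validity window
# (option name `path_input_cache_validity_seconds` is assumed; default 30).
my $validity = int($config->{path_input_cache_validity_seconds} // 30);
my $cached   = $self->{pathCache}->{$path};
if (!defined $cached || time() - $cached->{timestamp} > $validity) {
    my $storePath = `nix-store --add "$path"`;   # the expensive call being cached
    chomp $storePath;
    $cached = { storePath => $storePath, timestamp => time() };
    $self->{pathCache}->{$path} = $cached;
}
```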
2020-06-04 12:26:47 +02:00
Eelco Dolstra
8adb433e3b
Remove the Jobs table
This table has been superfluous for a long time.
2020-05-27 20:09:36 +02:00
Nikola Knezevic
f79810bac1 Improve handling of Perl's block eval errors
Taken from `Perl::Critic`:

A common idiom in perl for dealing with possible errors is to use `eval`
followed by a check of `$@`/`$EVAL_ERROR`:

    eval {
        ...
    };
    if ($EVAL_ERROR) {
        ...
    }

There's a problem with this: the value of `$EVAL_ERROR` (`$@`) can change
between the end of the `eval` and the `if` statement. The issue is object
destructors:

    package Foo;

    ...

    sub DESTROY {
        ...
        eval { ... };
        ...
    }

    package main;

    eval {
        my $foo = Foo->new();
        ...
    };
    if ($EVAL_ERROR) {
        ...
    }

Assuming there are no other references to `$foo` created, when the
`eval` block in `main` is exited, `Foo::DESTROY()` will be invoked,
regardless of whether the `eval` finished normally or not. If the `eval`
in `main` fails, but the `eval` in `Foo::DESTROY()` succeeds, then
`$EVAL_ERROR` will be empty by the time that the `if` is executed.
Additional issues arise if you depend upon the exact contents of
`$EVAL_ERROR` and both `eval`s fail, because the messages from both will
be concatenated.

Even if there isn't an `eval` directly in the `DESTROY()` method code,
it may invoke code that does use `eval` or otherwise affects
`$EVAL_ERROR`.

The solution is to ensure that, upon normal exit, an `eval` returns a
true value and to test that value:

    # Constructors are no problem.
    my $object = eval { Class->new() };

    # To cover the possibility that an operation may correctly return a
    # false value, end the block with "1":
    if ( eval { something(); 1 } ) {
        ...
    }

    eval {
        ...
        1;
    }
        or do {
            # Error handling here
        };

Unfortunately, you can't use the `defined` function to test the result;
`eval` returns an empty string on failure.

Various modules have been written to take some of the pain out of
properly localizing and checking `$@`/`$EVAL_ERROR`. For example:

    use Try::Tiny;
    try {
        ...
    } catch {
        # Error handling here;
        # The exception is in $_/$ARG, not $@/$EVAL_ERROR.
    };  # Note semicolon.

"But we don't use DESTROY() anywhere in our code!" you say. That may be
the case, but do any of the third-party modules you use have them? What
about any you may use in the future or updated versions of the ones you
already use?
2020-05-26 11:19:43 +02:00
Nikola Knezevic
575113396d Handle missing values in declarative jobsets
The current implementation passes all values to the `create_or_update` method.
The missing values end up as `undef` (or `NULL`) when assigned to `%update`.
Thus, for columns that are NOT NULL, this results in a horrible error when, for
example, flakes are not used:

    DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed:
    ERROR:  null value in column "type" violates not-null constraint

    DETAIL:  Failing row contains (.jobsets, 118, hydra, hydra jobsets, src, hydra/jobsets.nix, null,
    null, null, 1589536378, 1, 0, 0, , 3, 30, 100, null, null, 1589536379, null, null). [for Statement
    "UPDATE jobsets SET checkinterval = ?, description = ?, enableemail = ?, nixexprinput = ?,
    nixexprpath = ?, type = ? WHERE ( ( name = ? AND project = ? ) )" with ParamValues: 1='30',
    2='hydra jobsets', 3='0', 4='src', 5='hydra/jobsets.nix', 6=undef, 7='.jobsets', 8='hydra'] at
    /nix/store/lsf81ip9ybxihk5praf2n0nh14a6i9j0-hydra-0.1.19700101.DIRTY/libexec/hydra/lib/Hydra/Helper/AddBuilds.pm line 50

This change just omits adding such values to `%update`, which results in
PostgreSQL assigning the default values.
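
A minimal sketch of the fix's idea, assuming the declarative spec is a plain
hash reference (names illustrative):

```perl
# Copy only defined values into %update so that NOT NULL columns with
# defaults (e.g. "type") fall back to their PostgreSQL defaults.
my %update;
for my $column (keys %$declSpec) {
    $update{$column} = $declSpec->{$column} if defined $declSpec->{$column};
}
# %update is then passed to the create/update call as before.
```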
2020-05-15 20:33:54 +02:00
Graham Christensen
548fd8eadd
schema/Builds: use jobset_id instead of jobset name matches
This was clearly an error in the original part-2 of the diff, and
specifically breaks when two projects have a jobset of the same name.
2020-05-13 10:12:56 -04:00
Bas van Dijk
38122544ed GitInput: only convert integer option values to int
The previous code converted option values to ints when the value
contained a digit somewhere. This is too eager, since it also converts
strings like `release-0.2` to an int, which should not happen.

We now only convert to int when the entire value is an integer.
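
A sketch of the stricter check described above (variable names illustrative):

```perl
# Before: any value containing a digit was cast, turning "release-0.2" into an int.
# After: convert only values that consist entirely of digits.
my $value = $options->{$name};
$value = int($value) if $value =~ /^\d+$/;
```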
2020-05-13 11:41:52 +02:00
Bas van Dijk
f32a2a48d7
Merge pull request #740 from knl/add-githubrefs-plugin
Add GithubRefs plugin
2020-05-08 13:21:45 +02:00
Eelco Dolstra
96a514c169
Remove the "releases" feature
We haven't used this in many years (it was really only used for nix
and patchelf releases).
2020-05-06 12:39:21 +02:00
Emery Hemingway
e93c36aab1 SoTest: read credentials from file 2020-04-26 12:12:04 +05:30
Nikola Knezevic
f03e7ef800 Add GithubRefs plugin
This plugin is a counterpart to the GithubPulls plugin. Instead of fetching pull
requests, it fetches all references (branches and tags) that start with a
particular prefix.

The plugin is a copy of the GithubPulls plugin with the appropriate changes to
call the right API and parse the corresponding configuration.
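
A conceptual sketch of fetching refs by prefix; the matching-refs endpoint is
GitHub's, but the use of LWP and the values below are illustrative, not the
plugin's exact code:

```perl
use LWP::UserAgent;
use JSON;

# List all branch refs whose name starts with $prefix (values illustrative).
my ($owner, $repo, $prefix) = ('NixOS', 'hydra', 'release-');
my $ua  = LWP::UserAgent->new;
my $res = $ua->get("https://api.github.com/repos/$owner/$repo/git/matching-refs/heads/$prefix");
die "GitHub API request failed: " . $res->status_line unless $res->is_success;
for my $ref (@{ decode_json($res->decoded_content) }) {
    print "$ref->{ref} -> $ref->{object}{sha}\n";
}
```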
2020-04-23 10:45:37 +02:00
Emery Hemingway
a63e349476 Add SoTest plugin
https://opensource.sotest.io/
https://docs.sotest.io/
2020-04-21 15:25:44 +05:30
721c764951
Remove Hydra::Helper::nix::txn_do from the Perl code
To quote the function's comment:

  Awful hack to handle timeouts in SQLite: just retry the transaction.
  DBD::SQLite *has* a 30 second retry window, but apparently it
  doesn't work.

Since SQLite is now dropped entirely, this wrapper can be removed
completely.
2020-04-16 00:42:40 +02:00
efcbc08686
Get rid of dependency to SQLite
SQLite hasn't been properly supported by Hydra for a few years now[1], but
Hydra still depends on it. Apart from a slightly bigger closure, this can
confuse users, since Hydra picks up SQLite rather than PostgreSQL by default
if HYDRA_DBI isn't configured properly[2].

[1] 78974abb69
[2] https://logs.nix.samueldr.com/nixos-dev/2020-04-10#3297342;
2020-04-16 00:42:40 +02:00
Eelco Dolstra
4cabb37ebd
Merge pull request #730 from NixOS/flake
Flake support
2020-04-07 11:18:38 +02:00
Eelco Dolstra
2d092a6fbc
Merge pull request #702 from kquick/fix_api_push
Handle case where jobset has no defined errormsg for api/jobsets
2020-04-01 13:09:05 +02:00
Nikola Knezevic
1bee6e3d8a Add debug logging
This will help us track potential problems with the plugin.
2020-03-26 13:27:44 +01:00
Nikola Knezevic
986fde8888 Refactor code
Extract the conditions before the loop, as they do not depend on the channel
definition.
2020-03-26 13:27:44 +01:00
Nikola Knezevic
956f009672 Add documentation for SlackNotification plugin 2020-03-26 13:27:44 +01:00
Nikola Knezevic
01ae944c80 Add host tag to InfluxDB metrics
This should help us discern machines in environments with multiple Hydra deployments.
2020-03-12 14:23:07 +01:00
Eelco Dolstra
4b5bb4e760
Merge remote-tracking branch 'origin/master' into flake 2020-03-04 15:28:23 +01:00
Eelco Dolstra
3cc1deb125
Merge pull request #721 from grahamc/one-by-one
hydra-evaluator: add a 'ONE_AT_A_TIME' evaluator style
2020-03-04 08:44:43 +01:00
Graham Christensen
994430b94b
treewide: allow nix command 2020-03-03 22:52:20 -05:00
Graham Christensen
117b9ecef1
Nix.pm: readNixFile: pass «--experimental-features nix-command»
Declarative jobsets were broken by the Nix update, causing
nix cat-file to break silently.

This commit restores declarative jobsets, based on top of a commit
making it easier to see what broke.
2020-03-03 22:36:21 -05:00
Graham Christensen
113a312f67
handleDeclarativeJobsetBuild: handle errors from readNixFile 2020-03-03 22:32:13 -05:00
Graham Christensen
5fae9d96a2
hydra-evaluator: add a 'ONE_AT_A_TIME' evaluator style
Until now, jobsets which are automatically evaluated have been evaluated
regularly, on a schedule. This schedule means a new evaluation is
created every checkInterval seconds (assuming something changed).

This model works well for architectures where our build farm can
easily keep up with demand.

This commit adds a new type of evaluation, called ONE_AT_A_TIME, which
only schedules a new evaluation if the previous evaluation of the
jobset has no unfinished builds.

This model of evaluation lets us have 'low-tier' architectures.

For example, we could now have a jobset for ARMv7l builds, where
the buildfarm only has a single, underpowered ARMv7l builder.
Configuring that jobset as ONE_AT_A_TIME will create an evaluation
and then won't schedule another evaluation until every job of
the existing evaluation is complete.

This way, the cache will have a complete collection of pre-built
software for some commits, but the underpowered architecture will
never become backlogged in ancient revisions.
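
The gating condition amounts to something like the following DBIx::Class-style
sketch (the evaluator itself is not implemented this way, and the relationship
and helper names are assumptions):

```perl
# Only schedule a new evaluation when the jobset's latest evaluation has no
# unfinished builds (sketch only; scheduleEvaluation is hypothetical).
my $latestEval = $jobset->jobsetevals->search(
    {}, { order_by => { -desc => 'id' }, rows => 1 })->single;
my $unfinished = $latestEval
    ? $latestEval->builds->search({ finished => 0 })->count
    : 0;
scheduleEvaluation($jobset) if !$latestEval || $unfinished == 0;
```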
2020-03-03 19:28:44 -05:00
Eelco Dolstra
15187b059b
Remove hydra-eval-guile-jobs
This hasn't been used in a long time (Guix uses its own CI system),
and it probably doesn't work anymore.

(cherry picked from commit 23c9ca3e94)
2020-02-20 09:58:12 +01:00
Eelco Dolstra
881b7449fd Merge remote-tracking branch 'origin/master' into flake 2020-02-11 14:23:16 +01:00
Graham Christensen
f0f41eaaff
LatestSucceededForJob{,set}: Filter with jobset_id 2020-02-11 07:06:20 -05:00
Graham Christensen
66fbbd9692
Jobs.builds: Fetch via Jobs.jobset_id 2020-02-11 07:06:20 -05:00
Graham Christensen
7c71f9df28
Jobsets.builds: Fetch via Jobsets.id 2020-02-11 07:06:20 -05:00
Graham Christensen
3c392b8cd8
Jobsets.jobs: Fetch via Jobsets.id 2020-02-11 07:06:20 -05:00
Graham Christensen
8ef08f1385
Builds.jobset_id: make not-null 2020-02-11 07:06:20 -05:00
Graham Christensen
2cdcc7f188
Jobs.jobset_id: make not-null 2020-02-11 07:06:17 -05:00
Eelco Dolstra
100e09a5b3
Merge remote-tracking branch 'origin/master' into flake
Also update flake.lock
2020-02-10 17:58:10 +01:00
Graham Christensen
ddf00fa627
Builds: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "builds" accessor to the Jobsets
Schema object, which uses the project/jobset name.
2020-02-10 11:43:02 -05:00
Graham Christensen
efa1f1d4fb
Jobs: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "jobs" accessor to the Jobsets
Schema object, which uses the project/jobset name.
2020-02-10 11:43:02 -05:00
Graham Christensen
e00030563b
Jobsets: add a SERIAL, unique, non-null id column
A PostgreSQL column which is non-null and unique is treated with
the same optimisations as a primary key, so we have no need to
try to recreate the `id` as the primary key.

No read paths are impacted by this change, and the database will
automatically create an ID for each insert. Thus, no code needs to
change.
2020-02-10 11:42:59 -05:00
Graham Christensen
6fe57ab5fa
Copy the flake migration from the flake branch
hydra.nixos.org is already running this rev, and it should be safe to
apply to everyone else. If we make changes to this migration, we'll
need to write another migration anyway.
2020-02-09 15:21:28 -05:00
Graham Christensen
c2f932a7e3
sql: Generate models from postgresql
Lowercasing is due to PostgreSQL not having case-sensitive table names.
It always technically worked before, but those table names never
existed literally.

The switch to generating from PostgreSQL is to handle an upcoming
addition of an auto-incrementing ID to the Jobsets table. SQLite doesn't
seem to be able to handle a table having an auto-incrementing ID
field which isn't the primary key, but we can't change the primary
key trivially.

Since Hydra doesn't support SQLite, and hasn't for many years anyway,
it is easier to just generate from PostgreSQL directly.
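
One common way to regenerate DBIx::Class models from a live PostgreSQL database
is DBIx::Class::Schema::Loader's make_schema_at; this is a sketch only
(connection details illustrative), not necessarily the exact generator used here:

```perl
use DBIx::Class::Schema::Loader qw(make_schema_at);

# Dump Hydra::Schema::* result classes from the running PostgreSQL database
# into ./lib, so the Perl models match what PostgreSQL actually defines.
make_schema_at(
    'Hydra::Schema',
    { dump_directory => './lib' },
    [ 'dbi:Pg:dbname=hydra;host=localhost', 'hydra', '' ],
);
```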
2020-02-06 12:23:47 -05:00
Kevin Quick
cdd9d6e071
Update haserrormsg logic implementation. 2020-01-20 10:40:33 -08:00
Kevin Quick
de24771a8e
Handle case where jobset has no defined errormsg for api/jobsets 2020-01-11 17:01:44 -08:00
Graham Christensen
5cac40a438
Merge remote-tracking branch 'origin/master' into flake 2019-12-29 16:37:25 -05:00
Graham Christensen
d0f1bda0b8
job prometheus endpoint: drop nixname, too variable 2019-12-29 16:37:13 -05:00
Graham Christensen
d24de1b5de
Merge remote-tracking branch 'origin/master' into flake 2019-12-28 20:58:03 -05:00
Graham Christensen
64cdc3413c
job prometheus endpoint: d'oh 2019-12-28 20:57:27 -05:00
Graham Christensen
511c2db8aa
Merge remote-tracking branch 'origin/master' into flake 2019-12-28 20:22:54 -05:00
Graham Christensen
d5445bfc1d
job: create a prometheus endpoint
Export the most recent stop time and exit status in
a prometheus-friendly format.
2019-12-28 16:41:49 -05:00
Eelco Dolstra
55b0afa08f
Merge remote-tracking branch 'origin/master' into flake 2019-11-07 18:42:15 +01:00
Andreas Rammhold
841a47cabe
Add cancel-build role 2019-11-05 22:56:01 +01:00
Andreas Rammhold
ce1e10c116
Add bump-to-front role 2019-11-05 19:32:06 +01:00
Janne Heß
6f99d958bc Fix declarative flake builds 2019-10-27 14:57:53 +01:00
Graham Christensen
937e165328
export a /prometheus endpoint
Currently only shows per-machine build times
2019-09-24 16:50:52 -04:00
Eelco Dolstra
0ccf36ca3b
Merge remote-tracking branch 'origin/master' into flake 2019-09-24 19:03:18 +02:00
Antoine Eiche
c8983ca076 Add haserrormsg boolean attribute to jobset API response
This attribute makes it possible to know whether an error occurred: when an
error occurs, errormsg is not an empty string. Note we cannot use the
errormsg attribute itself because it can be arbitrarily long and is excluded
from the jobset API response.
2019-08-26 16:07:49 +02:00
Rob Vermaas
f10b2c2da8 Update Hydra schema, otherwise hydra-notify will not work. 2019-08-19 17:05:11 +02:00
tobias pflug
919195b04f Extend the jobset API response
This adds the following (pre-existing) attributes to the jobset response:

- nrtotal
- lastcheckedtime
- starttime
- checkinterval
- triggertime
- fetcherrormsg
- errortime
2019-08-16 16:04:04 +02:00
Eelco Dolstra
2de52d8538
Merge remote-tracking branch 'origin/master' into flake 2019-08-15 13:56:00 +02:00
Eelco Dolstra
c8a4030c5f
Fix error in GitlabStatus plugin
May 15 09:20:10 chef hydra-queue-runner[27523]: Hydra::Plugin::GitlabStatus=HASH(0x519a7b8)->buildFinished: Can't call method "value" on an undefined value at /nix/store/858hinflxcl2jd12wv1r3a8j11ybsf6w-hydra-0.1.2629.89fa829/libexec/hydra/lib/Hydra/Plugin/GitlabStatus.pm line 57.

(cherry picked from commit 438ddf5289)
2019-08-15 13:55:47 +02:00
Eelco Dolstra
92d8d6baa5
Avoid fetching Projects/Jobsets just to get the name column
In particular, doing a 'select * from Jobsets where ...' must be
avoided, because the 'errormsg' column can be very big.
2019-08-13 18:18:25 +02:00
Eelco Dolstra
16811d3e78
Plugins: Add isEnabled method
Plugins are now disabled at startup time unless there is some relevant
configuration in hydra.conf. This avoids hydra-notify having to do a
lot of redundant work (many plugins did a lot of database queries
*before* deciding they were disabled).

Note: BitBucketStatus users will need to add 'enable_bitbucket_status
= 1' to hydra.conf.
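
As an illustration of the pattern (the config key is the BitBucketStatus
setting mentioned above; the body is a sketch, not the actual plugin code):

```perl
# A plugin only reports itself as enabled when its hydra.conf setting is
# present, letting hydra-notify skip it without any database work (sketch).
sub isEnabled {
    my ($self) = @_;
    return defined $self->{config}->{enable_bitbucket_status};
}
```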
2019-08-13 18:18:25 +02:00
Eelco Dolstra
a74dec6fb1
Merge remote-tracking branch 'origin/master' into flake 2019-08-09 12:46:52 +02:00
Nikola Knezevic
06bdc8f85c Added the InfluxDBNotification plugin including a NixOS test
This adds an `InfluxDBNotification` plugin, which is configured as:

```
<influxdb>
  url = http://127.0.0.1:8086
  db = hydra
</influxdb>
```

which will write a notification for every finished job to the
configured InfluxDB database, looking like:

```
hydra_build_status,cached=false,job=job,jobset=default,project=sample,repo=default,result=success,status=success,system=x86_64-linux build_id="1",build_status=0i,closure_size=584i,duration=0i,main_build_id="1",queued=0i,size=168i 1564156212
```
2019-07-26 17:47:03 +02:00
Antoine Eiche
d1e590af1f hydra-server: add limit parameter to the search path
This allows a client to set a limit on the number of search results it wants to
get:

    http://hydra.nixos.org/search?query=emacs&limit=1

This returns only one result (while the default is 500).
2019-06-19 14:30:32 +02:00
Antoine Eiche
cb1fce21ba hydra-server: set a limit on builds and buildoutputs search
This patch adds a LIMIT clause to the PostgreSQL queries on the `builds`
and `buildoutputs` tables.
2019-06-19 14:28:55 +02:00
Antoine Eiche
778fc03570 Allow to search builds by hash
Currently, a full store path has to be provided to search in
builds. This patch makes it possible to search builds by an output path or
derivation hash.

Use case: we are building Docker images with Hydra. The tag of the
Docker image is the hash of the image's output path. This patch allows
us to find the build job from the tag of a running container image.
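
For example, searching by just the hash portion of an output path (hash value
illustrative):

    http://hydra.nixos.org/search?query=wdag3pznrvqk01byk989irg7rq3q2a2c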
2019-06-05 11:56:42 +02:00
Eelco Dolstra
438ddf5289
Fix error in GitlabStatus plugin
May 15 09:20:10 chef hydra-queue-runner[27523]: Hydra::Plugin::GitlabStatus=HASH(0x519a7b8)->buildFinished: Can't call method "value" on an undefined value at /nix/store/858hinflxcl2jd12wv1r3a8j11ybsf6w-hydra-0.1.2629.89fa829/libexec/hydra/lib/Hydra/Plugin/GitlabStatus.pm line 57.
2019-05-15 10:28:16 +02:00
Eelco Dolstra
ed00f0b25e
Cache flake-based jobset evaluations 2019-05-11 00:41:13 +02:00
Eelco Dolstra
f9f595cd21
Add flake configuration to the web interface 2019-05-11 00:11:38 +02:00
Robin Gloster
f4e7c104ff
Create a gitlab status plugin
This plugin expects as inputs to a jobset the following:
 - gitlab_status_repo => Name of the repository input that status updates
   should be POSTed to, e.g. if the jobset has a git input
   "nixexprs": "https://gitlab.example.com/project/nixexprs", then
   "gitlab_status_repo" would be "nixexprs".
 - gitlab_project_id => ID of the project in GitLab, i.e. in the above
   case the GitLab ID of "nixexprs".
2019-04-11 16:40:44 +02:00
Graham Christensen
0e337e6f9c
Merge pull request #644 from input-output-hk/declarative-jobset-error-message
improve the error messages when invalid declarative jobsets are defined
2019-03-21 14:25:23 -04:00
Michael Bishop
c741576563
improve the error messages when invalid declarative jobsets are defined
(cherry picked from commit 7568b89a1a9da3a58a0cdddc7b5bcea7bb6209d8)
2019-03-20 15:27:37 -04:00
Michael Bishop
3ad091faf3
allow using a shorter context and increase hydra-notify debug
(cherry picked from commit 1c76ad393669af2f728fd519a050f417319412a6)
2019-03-20 15:22:24 -04:00
Graham Christensen
8a41ea5f60
Merge pull request #571 from kquick/moreinfo
Additional helpful information in error messages.
2019-03-18 15:18:14 -05:00
Graham Christensen
215ca5da9c
Merge pull request #607 from nlewo/json-search
Add JSON search API endpoint
2019-03-18 15:08:32 -05:00
Kevin Quick
2e225ba7c8
Do not attempt to report dir for grab command failure if not specified. 2019-03-17 23:15:24 -07:00
Graham Christensen
88a92256e1
Merge pull request #636 from Ma27/serve-json-for-evals-and-machines
Serialize data as JSON with `Accept: application/json`
2019-03-17 17:17:17 -04:00
1cbbc6c52c
Serialize data as JSON with Accept: application/json
Similar to #607. According to the Catalyst[1] docs it's possible to
specify a data structure that is supposed to be serialized when
requesting e.g. a JSON response.

[1] https://metacpan.org/pod/Catalyst::Controller::REST#status_ok
2019-02-14 01:22:48 +01:00
04ff9e217b Add hydra-dev-server which uses the classic Catalyst server
This, in turn, allows:

 - Using --restart to reload the Perl code
 - Printing traces on error
2019-01-22 20:34:21 -05:00
Eelco Dolstra
e0d8dcfe2d
Merge pull request #619 from samueldr/feature/lazy_errors
Adds error messages to lazy tabs
2019-01-20 23:22:17 +01:00
Shea Levy
b298ba4bbe
Merge branch 'pr-gitlabpulls' of git://github.com/nlewo/hydra 2019-01-11 10:08:23 -05:00
ed85daf2ac User: jobs tab returns its error as a lazy error. 2018-12-01 13:40:41 -05:00
9986053e73 Controllers: allows lazy tabs to return custom errors. 2018-12-01 13:40:41 -05:00
Antoine Eiche
d9253543e4 plugin/GitLabPulls: support for using a personal access token (PAT)
This is needed in order to access protected or private repositories. Using the
target repository URL along with the merge-request ref, instead of the source
repository URL and branch, is necessary to avoid running into issues when the
source repository is not actually accessible to the user Hydra is
authenticating as.

Thanks Alexei Robyn for this patch.
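
As an illustration of the scheme described above: GitLab exposes each merge
request as a ref on the target repository, so the fetch can go through the
(accessible) target repo rather than the source repo. The repository URL and
merge-request IID below are illustrative:

```perl
# Fetch merge request !123 via the target repository's MR ref instead of the
# source repository's branch (sketch only).
my $targetUrl = 'https://gitlab.example.com/project/nixexprs.git';
my $mrRef     = 'refs/merge-requests/123/head';
system('git', 'fetch', $targetUrl, $mrRef) == 0
    or die "git fetch failed: $?";
```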
2018-11-20 16:27:40 +01:00
Rob Vermaas
14d5577bf8 Add duration to Slack notification. 2018-11-20 14:57:50 +01:00
Eelco Dolstra
cd234f6a14
Merge pull request #529 from bennofs/feat-all-builds
feat: add /eval/<id>/builds endpoint
2018-11-19 17:36:19 +01:00
Michael Bishop
7916c6b185 [DEVOPS-1126] throttle github status calls to remain under api ratelimits 2018-11-09 12:06:37 -04:00
Eelco Dolstra
1c44de1779
Merge pull request #567 from phile314/bitbucket_pulls
Add BitBucket pull request support
2018-11-06 10:48:37 +01:00
Antoine Eiche
0d2a2d8923 Add json output for the search API endpoint
This commit also adds a test of this feature.

Note the search JSON output doesn't contain any jobs because they cannot
be exported to JSON yet.

The JSON output on a search query matching a build looks like:

```
{
  "builds": [
    {
      "buildoutputs": {
        "out": {
          "path": "/nix/store/wdag3pznrvqk01byk989irg7rq3q2a2c-job"
        }
      },
      "finished": 0,
      "releasename": null,
      "starttime": null,
      "project": "sample",
      "buildproducts": {},
      "timestamp": 1541007629,
      "buildstatus": null,
      "nixname": "job",
      "drvpath": "/nix/store/n9zqndn7j7nyr6gg3bmxvw26cfmdwv2n-job.drv",
      "job": "job",
      "id": 1,
      "stoptime": null,
      "priority": 100,
      "system": "x86_64-linux",
      "jobsetevals": [
        1
      ],
      "jobset": "default",
      "buildmetrics": {}
    }
  ],
  "projects": [],
  "jobsets": [],
  "buildsdrv": []
}
```
2018-11-01 09:31:15 +01:00
Antoine Eiche
68aad22d19 Add GitlabPulls input plugin 2018-10-15 14:45:26 +02:00
Andreas Rammhold
63a294d4ca
allow users with 'restart-jobs' role to restart individual builds 2018-10-04 21:59:42 +02:00
Kevin Quick
35bcab74ed Fix darcs input to use darcs-specific SCM cache dir.
Currently it re-uses the git cache dir, which could cause overlap problems.
2018-09-09 22:04:32 -07:00
Nathan van Doorn
a77954be4d Allow for precisely one instance of RunCommand plugin 2018-08-21 15:52:41 +01:00
Eelco Dolstra
e122d3bef3
RunCommand: Return metrics as a float
Apparently, DBIx::Class doesn't handle columns with type 'double
precision' properly.
2018-08-02 12:31:28 +02:00
Eelco Dolstra
6db2cbf094
Add a plugin to execute arbitrary commands when a build finishes
The plugin can be configured using one or more <runcommand> sections
in hydra.conf, e.g.

<runcommand>
  command = echo Build finished
</runcommand>

Optionally, the command can be executed for specific
projects/jobsets/jobs:

  job = patchelf:master:tarball

or

  job = patchelf:*:*

The default is *:*:*.

The command is executed with the environment variable $HYDRA_JSON
pointing to a JSON file containing info about the build, e.g.

  {
    "build": 3772978,
    "buildStatus": 0,
    "drvPath": "/nix/store/9y4h1fyx9pl3ic08i2f09239b90x1lww-patchelf-tarball-0.8pre894_ed92f9f.drv",
    "event": "buildFinished",
    "finished": 1,
    "job": "tarball",
    "jobset": "master",
    "metrics": [
      {
        "name": "random1",
        "unit": null,
        "value": "20282"
      },
      {
        "name": "random2",
        "unit": "KiB",
        "value": "6664"
      }
    ],
    "outputs": [
      {
        "name": "out",
        "path": "/nix/store/39h5xciz5pnh1aypmr3rpdx0536y5s2w-patchelf-tarball-0.8pre894_ed92f9f"
      }
    ],
    "products": [
      {
        "defaultPath": "",
        "fileSize": 148216,
        "name": "patchelf-0.8pre894_ed92f9f.tar.gz",
        "path": "/nix/store/39h5xciz5pnh1aypmr3rpdx0536y5s2w-patchelf-tarball-0.8pre894_ed92f9f/tarballs/patchelf-0.8pre894_ed92f9f.tar.gz",
        "productNr": 4,
        "sha1hash": "9f27d18382436a7f743f6c2f6ad66e1b536ab4c8",
        "sha256hash": "b04faef2916c411f10711b58ea26965df7cb860ca33a87f1e868051b874c44b3",
        "subtype": "source-dist",
        "type": "file"
      },
      {
        "defaultPath": "",
        "fileSize": 121279,
        "name": "patchelf-0.8pre894_ed92f9f.tar.bz2",
        "path": "/nix/store/39h5xciz5pnh1aypmr3rpdx0536y5s2w-patchelf-tarball-0.8pre894_ed92f9f/tarballs/patchelf-0.8pre894_ed92f9f.tar.bz2",
        "productNr": 3,
        "sha1hash": "7a664841fb779dec19023be6a6121e0398067b7c",
        "sha256hash": "c81e36099893f541a11480f869fcdebd2fad3309900519065c8745f614dd024a",
        "subtype": "source-dist",
        "type": "file"
      },
      {
        "defaultPath": "README",
        "fileSize": null,
        "name": "",
        "path": "/nix/store/39h5xciz5pnh1aypmr3rpdx0536y5s2w-patchelf-tarball-0.8pre894_ed92f9f",
        "productNr": 2,
        "sha1hash": null,
        "sha256hash": null,
        "subtype": "readme",
        "type": "doc"
      },
      {
        "defaultPath": "",
        "fileSize": 6230,
        "name": "README",
        "path": "/nix/store/39h5xciz5pnh1aypmr3rpdx0536y5s2w-patchelf-tarball-0.8pre894_ed92f9f/README",
        "productNr": 1,
        "sha1hash": "dc6bb09093183ab52d7e6a35b72d179869bd6fbf",
        "sha256hash": "5371aee9de0216b3ea2d5ea869da9d5ee441b99156a99055e7e11e7a705f7920",
        "subtype": "readme",
        "type": "doc"
      }
    ],
    "project": "patchelf",
    "startTime": 1533137091,
    "stopTime": 1533137094,
    "timestamp": 1533136076
  }

So for example, the following command:

  command = echo Build $(jq -r .build $HYDRA_JSON) \($(jq -r .project $HYDRA_JSON):$(jq -r .jobset $HYDRA_JSON):$(jq -r .job $HYDRA_JSON)\) finished, metrics: $(jq -r '.metrics[].value' $HYDRA_JSON).

will print

  Build 3772978 (patchelf:master:tarball) finished, metrics: 20282 6664.
2018-08-01 19:43:50 +02:00
Kevin Quick
e523e8a643 Additional helpful information in error messages. 2018-06-29 17:41:23 -07:00