forked from lix-project/hydra
Commit graph

1073 commits

Author SHA1 Message Date
Eelco Dolstra 8adb433e3b
Remove the Jobs table
This table has been superfluous for a long time.
2020-05-27 20:09:36 +02:00
Nikola Knezevic f79810bac1 Improve handling of Perl's block eval errors
Taken from `Perl::Critic`:

A common idiom in perl for dealing with possible errors is to use `eval`
followed by a check of `$@`/`$EVAL_ERROR`:

    eval {
        ...
    };
    if ($EVAL_ERROR) {
        ...
    }

There's a problem with this: the value of `$EVAL_ERROR` (`$@`) can change
between the end of the `eval` and the `if` statement. The issue is object
destructors:

    package Foo;

    ...

    sub DESTROY {
        ...
        eval { ... };
        ...
    }

    package main;

    eval {
        my $foo = Foo->new();
        ...
    };
    if ($EVAL_ERROR) {
        ...
    }

Assuming there are no other references to `$foo` created, when the
`eval` block in `main` is exited, `Foo::DESTROY()` will be invoked,
regardless of whether the `eval` finished normally or not. If the `eval`
in `main` fails, but the `eval` in `Foo::DESTROY()` succeeds, then
`$EVAL_ERROR` will be empty by the time that the `if` is executed.
Additional issues arise if you depend upon the exact contents of
`$EVAL_ERROR` and both `eval`s fail, because the messages from both will
be concatenated.

Even if there isn't an `eval` directly in the `DESTROY()` method code,
it may invoke code that does use `eval` or otherwise affects
`$EVAL_ERROR`.

The solution is to ensure that, upon normal exit, an `eval` returns a
true value and to test that value:

    # Constructors are no problem.
    my $object = eval { Class->new() };

    # To cover the possibility that an operation may correctly return a
    # false value, end the block with "1":
    if ( eval { something(); 1 } ) {
        ...
    }

    eval {
        ...
        1;
    }
        or do {
            # Error handling here
        };

Unfortunately, you can't use the `defined` function to test the result;
`eval` returns an empty string on failure.

Various modules have been written to take some of the pain out of
properly localizing and checking `$@`/`$EVAL_ERROR`. For example:

    use Try::Tiny;
    try {
        ...
    } catch {
        # Error handling here;
        # The exception is in $_/$ARG, not $@/$EVAL_ERROR.
    };  # Note semicolon.

"But we don't use DESTROY() anywhere in our code!" you say. That may be
the case, but do any of the third-party modules you use have them? What
about any you may use in the future or updated versions of the ones you
already use?
2020-05-26 11:19:43 +02:00
Nikola Knezevic 575113396d Handle missing values in declarative jobsets
The current implementation passes all values to the `create_or_update` method.
Missing values end up as `undef` (or `NULL`) when assigned to `%update`.
Thus, for columns that are NOT NULL (when, for example, flakes are not used),
this results in a horrible:

    DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed:
    ERROR:  null value in column "type" violates not-null constraint

    DETAIL:  Failing row contains (.jobsets, 118, hydra, hydra jobsets, src, hydra/jobsets.nix, null,
    null, null, 1589536378, 1, 0, 0, , 3, 30, 100, null, null, 1589536379, null, null). [for Statement
    "UPDATE jobsets SET checkinterval = ?, description = ?, enableemail = ?, nixexprinput = ?,
    nixexprpath = ?, type = ? WHERE ( ( name = ? AND project = ? ) )" with ParamValues: 1='30',
    2='hydra jobsets', 3='0', 4='src', 5='hydra/jobsets.nix', 6=undef, 7='.jobsets', 8='hydra'] at
    /nix/store/lsf81ip9ybxihk5praf2n0nh14a6i9j0-hydra-0.1.19700101.DIRTY/libexec/hydra/lib/Hydra/Helper/AddBuilds.pm line 50

This change just omits adding such values to `%update`, which results in
PostgreSQL assigning the default values.
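
A minimal sketch of the idea, with hypothetical names (`%values` standing in for
the declarative jobset definition): only defined values are copied into
`%update`, so NOT NULL columns fall back to their PostgreSQL defaults.

    my %update;
    while (my ($column, $value) = each %values) {
        # Omit missing (undef) values entirely instead of writing NULL,
        # so NOT NULL columns such as "type" keep their database defaults.
        $update{$column} = $value if defined $value;
    }
    $jobsets->update_or_create(\%update);   # hypothetical DBIx::Class call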
2020-05-15 20:33:54 +02:00
Graham Christensen 548fd8eadd
schema/Builds: use jobset_id instead of jobset name matches
This was clearly an error in the original part-2 of the diff, and
specifically breaks when two projects have a jobset of the same name.
2020-05-13 10:12:56 -04:00
Bas van Dijk 38122544ed GitInput: only convert integer option values to int
The previous code converted option values to ints when the value
contained a digit somewhere. This is too eager, since it also converts
strings like `release-0.2` to an int, which should not happen.

We now only convert to int when the value is an integer.
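
An illustrative sketch of the stricter check (`%options` here is a hypothetical
stand-in for the input's option hash):

    foreach my $name (keys %options) {
        my $value = $options{$name};
        # The old, over-eager test ($value =~ /\d/) also matched "release-0.2".
        # Only treat the value as an integer when it consists solely of digits.
        $options{$name} = int($value) if $value =~ /^\d+$/;
    }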
2020-05-13 11:41:52 +02:00
Bas van Dijk f32a2a48d7
Merge pull request from knl/add-githubrefs-plugin
Add GithubRefs plugin
2020-05-08 13:21:45 +02:00
Eelco Dolstra 96a514c169
Remove the "releases" feature
We haven't used this in many years (it was really only used for nix
and patchelf releases).
2020-05-06 12:39:21 +02:00
Emery Hemingway e93c36aab1 SoTest: read credentials from file 2020-04-26 12:12:04 +05:30
Nikola Knezevic f03e7ef800 Add GithubRefs plugin
This plugin is a counterpart to the GithubPulls plugin. Instead of fetching pull
requests, it fetches all references (branches and tags) that start with a
particular prefix.

The plugin is a copy of the GithubPulls plugin, with appropriate changes to call
the right API and to parse the corresponding configuration.
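
As a rough illustration (not the plugin's actual code), fetching every branch
whose name starts with a prefix can go through GitHub's `git/matching-refs`
API; `$owner`, `$repo` and `$prefix` below are placeholders:

    use LWP::UserAgent;
    use JSON;

    my $ua  = LWP::UserAgent->new();
    my $url = "https://api.github.com/repos/$owner/$repo/git/matching-refs/heads/$prefix";
    my $res = $ua->get($url, Accept => 'application/vnd.github.v3+json');
    die "GitHub API request failed: " . $res->status_line unless $res->is_success;

    # Each entry describes one ref whose name starts with the given prefix.
    foreach my $ref (@{ decode_json($res->decoded_content) }) {
        print "$ref->{ref} -> $ref->{object}->{sha}\n";
    }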
2020-04-23 10:45:37 +02:00
Emery Hemingway a63e349476 Add SoTest plugin
https://opensource.sotest.io/
https://docs.sotest.io/
2020-04-21 15:25:44 +05:30
Maximilian Bosch 721c764951
Remove Hydra::Helper::nix::txn_do from the Perl code
To quote the function's comment:

  Awful hack to handle timeouts in SQLite: just retry the transaction.
  DBD::SQLite *has* a 30 second retry window, but apparently it
  doesn't work.

Since SQLite is now dropped entirely, this wrapper can be removed
completely.
2020-04-16 00:42:40 +02:00
Maximilian Bosch efcbc08686
Get rid of dependency to SQLite
SQLite hasn't been properly supported by Hydra for a few years now[1], but
Hydra still depends on it. Apart from a slightly bigger closure, this can
confuse users, since Hydra picks up SQLite rather than PostgreSQL by
default if HYDRA_DBI isn't configured properly[2]

[1] 78974abb69
[2] https://logs.nix.samueldr.com/nixos-dev/2020-04-10#3297342;
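
For reference, pointing Hydra at PostgreSQL via HYDRA_DBI uses a Perl DBI
connection string along these lines (a sketch; adjust database name, host and
user to your setup):

    # DBI data source name selecting the PostgreSQL driver instead of SQLite
    $ENV{HYDRA_DBI} = "dbi:Pg:dbname=hydra;host=localhost;user=hydra;";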
2020-04-16 00:42:40 +02:00
Eelco Dolstra 4cabb37ebd
Merge pull request from NixOS/flake
Flake support
2020-04-07 11:18:38 +02:00
Eelco Dolstra 2d092a6fbc
Merge pull request from kquick/fix_api_push
Handle case where jobset has no defined errormsg for api/jobsets
2020-04-01 13:09:05 +02:00
Nikola Knezevic 1bee6e3d8a Add debug logging
This will help us track potential problems with the plugin.
2020-03-26 13:27:44 +01:00
Nikola Knezevic 986fde8888 Refactor code
Extract the conditions before the loop, as they do not change due to channel
definition.
2020-03-26 13:27:44 +01:00
Nikola Knezevic 956f009672 Add documentation for SlackNotification plugin 2020-03-26 13:27:44 +01:00
Eelco Dolstra 4b5bb4e760
Merge remote-tracking branch 'origin/master' into flake 2020-03-04 15:28:23 +01:00
Eelco Dolstra 3cc1deb125
Merge pull request from grahamc/one-by-one
hydra-evaluator: add a 'ONE_AT_A_TIME' evaluator style
2020-03-04 08:44:43 +01:00
Graham Christensen 994430b94b
treewide: allow nix command 2020-03-03 22:52:20 -05:00
Graham Christensen 117b9ecef1
Nix.pm: readNixFile: pass «--experimental-features nix-command»
Declarative jobsets were broken by the Nix update, causing
nix cat-file to break silently.

This commit restores declarative jobsets, based on top of a commit
making it easier to see what broke.
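
A hedged sketch of the kind of call involved (illustrative only, not the actual
Hydra::Helper::Nix code; `$path` is a placeholder): the extra flag has to come
before the subcommand whenever the helper shells out to the `nix` CLI to read
the declarative jobset spec.

    # Run the nix CLI with the experimental-features flag so the (still
    # experimental) "nix" subcommands aren't rejected silently.
    open(my $fh, "-|", "nix", "--experimental-features", "nix-command",
         "eval", "--raw", "(builtins.readFile $path)")
        or die "failed to run nix: $!";
    my $contents = do { local $/; <$fh> };   # slurp the command's output
    close($fh) or die "reading $path via nix failed";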
2020-03-03 22:36:21 -05:00
Graham Christensen 113a312f67
handleDeclarativeJobsetBuild: handle errors from readNixFile 2020-03-03 22:32:13 -05:00
Graham Christensen 5fae9d96a2
hydra-evaluator: add a 'ONE_AT_A_TIME' evaluator style
In the past, jobsets which are automatically evaluated were evaluated
regularly, on a schedule. This schedule means a new evaluation is
created every checkInterval seconds (assuming something changed).

This model works well for architectures where our build farm can
easily keep up with demand.

This commit adds a new type of evaluation, called ONE_AT_A_TIME, which
only schedules a new evaluation if the previous evaluation of the
jobset has no unfinished builds.

This model of evaluation lets us have 'low-tier' architectures.

For example, we could now have a jobset for ARMv7l builds, where
the buildfarm only has a single, underpowered ARMv7l builder.
Configuring that jobset as ONE_AT_A_TIME will create an evaluation
and then won't schedule another evaluation until every job of
the existing evaluation is complete.

This way, the cache will have a complete collection of pre-built
software for some commits, but the underpowered architecture will
never become backlogged in ancient revisions.
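
A rough sketch of the scheduling condition (the real implementation lives in
the C++ hydra-evaluator; the accessors below are hypothetical): with
ONE_AT_A_TIME a jobset only becomes due again once its latest evaluation has
no unfinished builds left.

    # Hypothetical illustration of the ONE_AT_A_TIME policy.
    sub jobsetIsDue {
        my ($jobset) = @_;
        my $latestEval = $jobset->lastEval;            # hypothetical accessor
        return 1 unless defined $latestEval;           # never evaluated before
        return $latestEval->unfinishedBuilds == 0;     # hypothetical count
    }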
2020-03-03 19:28:44 -05:00
Eelco Dolstra 15187b059b
Remove hydra-eval-guile-jobs
This hasn't been used in a long time (Guix uses its own CI system),
and it probably doesn't work anymore.

(cherry picked from commit 23c9ca3e94)
2020-02-20 09:58:12 +01:00
Eelco Dolstra 881b7449fd Merge remote-tracking branch 'origin/master' into flake 2020-02-11 14:23:16 +01:00
Graham Christensen f0f41eaaff
LatestSucceededForJob{,set}: Filter with jobset_id 2020-02-11 07:06:20 -05:00
Graham Christensen 66fbbd9692
Jobs.builds: Fetch via Jobs.jobset_id 2020-02-11 07:06:20 -05:00
Graham Christensen 7c71f9df28
Jobsets.builds: Fetch via Jobsets.id 2020-02-11 07:06:20 -05:00
Graham Christensen 3c392b8cd8
Jobsets.jobs: Fetch via Jobsets.id 2020-02-11 07:06:20 -05:00
Graham Christensen 8ef08f1385
Builds.jobset_id: make not-null 2020-02-11 07:06:20 -05:00
Graham Christensen 2cdcc7f188
Jobs.jobset_id: make not-null 2020-02-11 07:06:17 -05:00
Eelco Dolstra 100e09a5b3
Merge remote-tracking branch 'origin/master' into flake
Also update flake.lock
2020-02-10 17:58:10 +01:00
Graham Christensen ddf00fa627
Builds: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "builds" accessor to the Jobsets
Schema object, which uses the project/jobset name.
2020-02-10 11:43:02 -05:00
Graham Christensen efa1f1d4fb
Jobs: add a nullable jobset_id foreign key to Jobsets.
Also, adds an explicitly named "jobs" accessor to the Jobsets
Schema object, which uses the project/jobset name.
2020-02-10 11:43:02 -05:00
Graham Christensen e00030563b
Jobsets: add a SERIAL, unique, non-null id column
A PostgreSQL column which is non-null and unique is treated with
the same optimisations as a primary key, so there is no need to
try to recreate the `id` column as the primary key.

No read paths are impacted by this change, and the database will
automatically create an ID for each insert. Thus, no code needs to
change.
2020-02-10 11:42:59 -05:00
Graham Christensen 6fe57ab5fa
Copy the flake migration from the flake branch
hydra.nixos.org is already running this rev, and it should be safe to
apply to everyone else. If we make changes to this migration, we'll
need to write another migration anyway.
2020-02-09 15:21:28 -05:00
Graham Christensen c2f932a7e3
sql: Generate models from postgresql
Lowercasing is due to PostgreSQL not having case-sensitive table names.
It always technically worked before, but those table names never
existed literally.

The switch to generating from PostgreSQL is to handle an upcoming
addition of an auto-incrementing ID to the Jobset table. SQLite doesn't
seem to be able to handle the table having an auto-incrementing ID
field which isn't the primary key, but we can't change the primary
key trivially.

Since Hydra doesn't support SQLite and hasn't for many years anyway,
it is easier to just generate from PostgreSQL directly.
2020-02-06 12:23:47 -05:00
Kevin Quick cdd9d6e071
Update haserrormsg logic implementation. 2020-01-20 10:40:33 -08:00
Kevin Quick de24771a8e
Handle case where jobset has no defined errormsg for api/jobsets 2020-01-11 17:01:44 -08:00
Graham Christensen 5cac40a438
Merge remote-tracking branch 'origin/master' into flake 2019-12-29 16:37:25 -05:00
Graham Christensen d0f1bda0b8
job prometheus endpoint: drop nixname, too variable 2019-12-29 16:37:13 -05:00
Graham Christensen d24de1b5de
Merge remote-tracking branch 'origin/master' into flake 2019-12-28 20:58:03 -05:00
Graham Christensen 64cdc3413c
job prometheus endpoint: d'oh 2019-12-28 20:57:27 -05:00
Graham Christensen 511c2db8aa
Merge remote-tracking branch 'origin/master' into flake 2019-12-28 20:22:54 -05:00
Graham Christensen d5445bfc1d
job: create a prometheus endpoint
Export the most recent stop time and exit status in
a prometheus-friendly format.
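
For illustration (hypothetical metric names, not necessarily those the
endpoint actually exposes), the Prometheus text format for such per-job data
could be emitted like this:

    # Render the latest build's stop time and exit status as Prometheus metrics.
    printf "hydra_job_completion_time %d\n", $latestBuild->stoptime;
    printf "hydra_job_failure %d\n", $latestBuild->buildstatus == 0 ? 0 : 1;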
2019-12-28 16:41:49 -05:00
Eelco Dolstra 55b0afa08f
Merge remote-tracking branch 'origin/master' into flake 2019-11-07 18:42:15 +01:00
Andreas Rammhold 841a47cabe
Add cancel-build role 2019-11-05 22:56:01 +01:00
Andreas Rammhold ce1e10c116
Add bump-to-front role 2019-11-05 19:32:06 +01:00
Janne Heß 6f99d958bc Fix declarative flake builds 2019-10-27 14:57:53 +01:00
Graham Christensen 937e165328
export a /prometheus endpoint
Currently only shows per-machine build times
2019-09-24 16:50:52 -04:00