Commit graph

2438 commits

Tyson Whitehead 4d881b59ad
Update bootbox to latest 5.2.0 2021-03-24 17:10:27 -04:00
Tyson Whitehead 230a0387d2
Update bootstrap to latest 4.3.1
Co-authored-by: Graham Christensen <graham@grahamc.com>
... but just fixing up merge conflicts from the introduction of flakes
and the removal of the Jobs table.
2021-03-24 17:10:27 -04:00
Tyson Whitehead 627af61abe
Update jquery to latest 3.4.1 (considered by some to be more secure) 2021-03-24 17:10:27 -04:00
Graham Christensen 425c7ff17f
hydra-send-stats: add a --once option for testing 2021-03-20 09:16:08 -04:00
Jörg Thalheim 6bb180a0f2
hydra-send-stats: fix imports 2021-03-20 09:16:04 -04:00
Eelco Dolstra aeb3d2f44c
Merge pull request #892 from grahamc/hydra-queue-runner-build-one
hydra-queue-runner: --build-one: correctly handle a cached build
2021-03-16 21:28:32 +01:00
Graham Christensen 87d46ad5d6
hydra-queue-runner: --build-one: correctly handle a cached build
Previously, the build ID would never flow through channels which
exited.

This patch tracks the buildOne state as part of State and exits,
avoiding waiting forever for new work.

The code around buildOne is a bit rough, which makes this a bit awkward
to implement, but since it is only used for testing, the value of
improving it on its own is questionable.
2021-03-16 16:13:38 -04:00
Janne Heß 3c86083d21
Fixup #717 "Add the project name to declarative inputs"
```
Mar 10 16:22:35 hydra-b hydra-evaluator[41419]: DBIx::Class::Storage::DBI::_dbh_execute(): DBI Exception: DBD::Pg::st execute failed: ERROR:  null value in column "type" violates not-null constraint
Mar 10 16:22:35 hydra-b hydra-evaluator[41419]: DETAIL:  Failing row contains (62358, projectName, 0, null, null, null, hackworthltd, null, , null). [for Statement "INSERT INTO jobsetevalinputs ( altnr, dependency, eval, name, path, revision, sha256hash, type, uri, value) VALUES ( ?, ?, ?, ?, ?, ?, ?, ?, ?, ? )" with ParamValues: 1='0', 2=undef, 3='62358', 4='projectName', 5='', 6=undef, 7=undef, 8=undef, 9=undef, 10='hackworthltd'] at /nix/store/cmqblv437mp57yz5lwvkzcqca4ldf3r5-hydra-0.1.20210308.ebf1cd2/bin/.hydra-eval-jobset-wrapped line 793
Mar 10 16:22:35 hydra-b hydra-evaluator[25828]: evaluation of jobset ‘hackworthltd:.jobsets (jobset#1)’ failed with exit code 1
```

Use the abstraction for creating inputs to simulate the project
name input.

Co-authored-by: Graham Christensen <graham@grahamc.com>
2021-03-16 09:52:36 -04:00
Shea Levy 930f05c38e
Bump Nix version 2021-03-10 12:53:03 -05:00
Graham Christensen b9fb66401b
Merge pull request #880 from grahamc/runcommand-finished-bool
RunCommand: emit the `finished` field as a boolean
2021-03-09 09:58:43 -05:00
Graham Christensen 2179b4b4b0
RunCommand: emit the finished field as a boolean 2021-03-08 12:11:20 -05:00
Janne Heß 9e018d5443
Add the project name to declarative inputs
This allows for more generic declarative configurations which can be
shared between projects.
2021-03-08 17:36:52 +01:00
Matej Cotman a551fba346
statsd: add an option to set hostname and port in hydra.conf
Co-authored-by: Graham Christensen <graham@grahamc.com>
2021-03-08 10:03:16 -05:00
Josh McSavaney e0d3a1c1a5
Make nix-build args copy-pastable via set -x
A reproduce script includes a log line that may resemble:

> using these flags: --arg nixpkgs { outPath = /tmp/build-137689173/nixpkgs/source; rev = "fdc872fa200a32456f12cc849d33b1fdbd6a933c"; shortRev = "fdc872f"; revCount = 273100; } -I nixpkgs=/tmp/build-137689173/nixpkgs/source --arg officialRelease false --option extra-binary-caches https://hydra.nixos.org/ --option system x86_64-linux /tmp/build-137689173/nixpkgs/source/pkgs/top-level/release.nix -A 

These are passed along to nix-build and that's fine and dandy, but you can't just copy-paste this as is, as the `{}` introduces a syntax error and the value accompanying `-A` is `''`.

A very naive approach is to just `printf "%q"` the individual args, which makes them safe to copy-paste. Unfortunately, this looks awful due to the liberal usage of backslashes:

```
$ printf "%q" '{ outPath = /tmp/build-137689173/nixpkgs/source; rev = "fdc872fa200a32456f12cc849d33b1fdbd6a933c"; shortRev = "fdc872f"; revCount = 273100; }'
\{\ outPath\ =\ /tmp/build-137689173/nixpkgs/source\;\ rev\ =\ \"fdc872fa200a32456f12cc849d33b1fdbd6a933c\"\;\ shortRev\ =\ \"fdc872f\"\;\ revCount\ =\ 273100\;\ \}
```

Alternatively, if we just use `set -x` before we execute nix-build, we'll get the whole invocation in a friendly, copy-pastable format that nicely displays `{}`-enclosed content and preserves the empty arg following `-A`:

```
running nix-build...
using this invocation: 
+ nix-build --arg nixpkgs '{ outPath = /tmp/build-138165173/nixpkgs/source; rev = "e0e4484f2c028d2269f5ebad0660a51bbe46caa4"; shortRev = "e0e4484"; revCount = 274008; }' -I nixpkgs=/tmp/build-138165173/nixpkgs/source --arg officialRelease false --option extra-binary-caches https://hydra.nixos.org/ --option system x86_64-linux /tmp/build-138165173/nixpkgs/source/pkgs/top-level/release.nix -A ''
```
2021-03-06 23:25:26 -05:00
Graham Christensen 68ac64dbd9
Merge pull request #832 from wizeman/fix-hash-mismatch
Fix persistent hash mismatch errors when importing
2021-03-02 16:04:23 -05:00
Graham Christensen a756614fa1
RunCommand: pass homepage, description, license, system, and nixname 2021-02-24 16:13:09 -05:00
Graham Christensen 3fda37f65a
RunCommand: Test 2021-02-24 13:43:25 -05:00
Graham Christensen e4cda87b5a
db.hh: use hasPrefix for prefix comparisons
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
2021-02-24 07:00:26 -05:00
Graham Christensen fe1f2f0806
Create an ephemeral PostgreSQL database per test 2021-02-23 21:12:06 -05:00
regnat f602ed0d86 Remove the sendDerivation logic from the builder
The queue runner used to special-case `localhost` as a remote builder:
rather than performing a normal remote build (using the
`cmdBuildDerivation` command), it used the (generally less efficient,
except when running against localhost) `cmdBuildPaths` command because
the latter didn't require a privileged Nix user, which made testing
easier (in particular, it allowed running Hydra in a container).

However:
1. this means that the build loop can follow two distinct code paths depending
   on the setup, the irony being that the most commonly used one in production
   (the “non-localhost” case) isn't the one used in the testsuite (because all
   the tests run against a local store);
2. it turns out that the “localhost” version is buggy in relatively obvious
   ways: in particular, a failure in a fixed-output derivation or a hash
   mismatch isn't reported properly;
3. if the “run in a container” use-case is indeed that important, it can be
   (partially) restored using a chroot store (which wouldn't behave exactly
   the same way of course, but would be more than good enough for testing)
2021-02-23 09:50:15 +01:00
Eelco Dolstra 107d60027f hydra-eval-jobs: Fix unexpected EOF when a top-level attr fails 2021-02-22 16:29:07 +01:00
Eelco Dolstra a7d8ee98da Fix build 2021-02-22 15:10:24 +01:00
Eelco Dolstra a39b479280
Merge pull request #866 from Infinisil/github-status-flakes
Fix Github status plugin for flakes
2021-02-16 17:00:46 +01:00
Silvan Mosberger 58dd7f9ed3
Fix Github status plugin for flakes
If the root flake is a github: one, github status notifications are sent
to it. The githubstatus->inputs configuration isn't used for flakes.
2021-02-06 00:02:30 +01:00
Ismaël Bouya 339a09f2e4
Fix check in jobsets
The current check happening in jobsets is incorrect.
The wanted constraint is stated as follows:
- If type is 0 (legacy), then the flake field should be null, and
  both nixExprInput and nixExprPath should be non-null
- If type is 1 (flake), then the flake field should be non-null, and
  both nixExprInput and nixExprPath should be null

The current version will not catch (i.e. it will accept) situations
where you have, for instance:
type = 1, nixExprPath null, nixExprInput non-null, flake non-null

This commit fixes that.

I split that into two constraints to make it more readable and
easier to extend if a new type appears in the future.

The complete query could instead be:
( type = 0
  AND nixExprInput IS NOT NULL AND nixExprPath IS NOT NULL AND flake IS NULL )
OR ( type = 1
  AND nixExprInput IS NULL AND nixExprPath IS NULL AND flake IS NOT NULL )

(but an "OR" cannot be split, hence the other formulation)
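
As a hedged sketch, the two split constraints might look something like this in SQL (constraint names and exact formulation here are assumptions, not the actual migration):

```
-- Illustrative only: names and exact formulation are assumed, not Hydra's.
-- Legacy jobsets (type = 0) must carry a Nix expression and no flake.
alter table jobsets add constraint jobsets_legacy_fields_check check (
  type <> 0 or (nixExprInput is not null and nixExprPath is not null and flake is null)
);
-- Flake jobsets (type = 1) must carry a flake and no Nix expression.
alter table jobsets add constraint jobsets_flake_fields_check check (
  type <> 1 or (nixExprInput is null and nixExprPath is null and flake is not null)
);
```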
2021-02-03 22:14:53 +01:00
Graham Christensen bc12fe19f9
Merge pull request #855 from grahamc/jobsetevals-fixups
JobsetEvals: fixup permission references
2021-02-02 11:04:18 -05:00
Graham Christensen 6de9c6540c
Merge pull request #858 from Infinisil/fix-declarative-flakes
Fix transition from declarative non-flake to flake jobset
2021-02-02 11:04:05 -05:00
Graham Christensen f1e75c8bff
Move evaluation errors from evaluations to EvaluationErrors, a new table
DBIx likes to eagerly select all columns, with no good way to tell it
not to. Therefore, this splits this one large column into its own
table.
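
As a rough, hypothetical sketch of the idea (column names and types are assumptions, not the actual Hydra schema):

```
-- Hypothetical sketch, not the shipped schema: the large error column
-- gets its own table so eager selects on the evals stay small.
create table EvaluationErrors (
    id        serial primary key,
    errorMsg  text,    -- the large column moved out of the evals table
    errorTime integer  -- epoch seconds
);
-- jobsetevals would then reference an error row (e.g. via an assumed
-- evaluationerror_id foreign key) instead of embedding the text.
```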

I'd also like to make "jobsets" use this table too, but that is on hold
to stop the bleeding caused by the extreme amount of traffic this is
causing.
2021-02-01 21:33:14 -05:00
Silvan Mosberger 1d45b63516
Fix transition from declarative non-flake to flake jobset
The database has these constraints:

    check ((type = 0) = (nixExprInput is not null and nixExprPath is not null)),
    check ((type = 1) = (flake is not null)),

which prevented switching to flakes in a declarative jobspec, since the
nixexpr{path,input} fields were not nulled in such an update.
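
The kind of update that is needed when a declarative jobset switches to flakes might look roughly like this (a sketch only; the identifiers and flake URI are made up, and the real fix lives in the declarative-jobset handling code):

```
-- Sketch: when switching a jobset to type 1 (flake), the legacy
-- expression fields must be nulled so the checks above pass.
update jobsets
   set type = 1,
       flake = 'github:example/repo',  -- hypothetical flake URI
       nixExprInput = null,
       nixExprPath = null
 where project = 'example-project'     -- hypothetical identifiers
   and name = 'example-jobset';
```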

Co-Authored-By: Graham Christensen <graham@grahamc.com>
2021-02-01 18:57:40 +01:00
Graham Christensen 8d7bfe1706
JobsetEvals: fixup permission references
Going from an eval to a project now requires hopping through the jobset.
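
Conceptually, the lookup now joins through jobsets; a hedged SQL approximation of the shape (the real code uses DBIx::Class relations, and the jobset_id column is the one introduced by the "refer to jobset by ID" commits further down):

```
-- Approximate shape of the eval -> jobset -> project hop (illustrative only):
select p.*
  from jobsetevals e
  join jobsets     j on j.id = e.jobset_id
  join projects    p on p.name = j.project;
```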
2021-02-01 10:31:05 -05:00
Graham Christensen 91e63fb7da
search: limit queries to 20s
Even 20s is really long, but it cuts off queries which today run for
500+ seconds.
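
At the PostgreSQL level such a cap can be expressed with a statement timeout; a minimal sketch of the idea (how Hydra actually applies the limit is not shown in this log):

```
-- Sketch: abort any statement on this connection after 20 seconds.
set statement_timeout = '20s';
```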
2021-01-30 11:51:20 -05:00
Graham Christensen 4f308b1f2f
search: limit results to 50, default to 10
This search query is pretty heavy. Defaulting to 500 has caused
Hydra's web UI to appear to be down. Since 500 can take it down, users
probably shouldn't be allowed to ask for that many.
2021-01-30 08:37:57 -05:00
Graham Christensen 5fcc2018db
hydra-evaluator: clean up names, clean up & / * spacing 2021-01-28 09:15:19 -05:00
Graham Christensen 091d58c128
hydra-dev-server: autoflush stderr/stdout 2021-01-26 13:51:39 -05:00
Graham Christensen 54b8cb188e
perl: jobsetevals -> jobset via jobset_id
Frankly, this was suspiciously little work.
2021-01-26 13:51:39 -05:00
Graham Christensen 54341cd9f6
hydra-evaluator: deal in jobset IDs 2021-01-26 13:51:31 -05:00
Graham Christensen cb01859718
hydra-evaluator: JobsetName -> JobsetIdentity 2021-01-26 11:50:38 -05:00
Graham Christensen 705a45df2b
hydra-evaluator: reformat readJobsets query 2021-01-26 11:50:37 -05:00
Graham Christensen ac3e8a4a59
jobsetevals: refer to jobset by ID 2021-01-26 11:50:37 -05:00
Graham Christensen 99e3c83358
JobsetEvals: noop: re-run the generator to update the order of fields 2021-01-26 11:50:36 -05:00
Graham Christensen bf674a9653
hydra.sql: embed some in-line docs about schema changes 2021-01-26 11:50:36 -05:00
Graham Christensen dc5a0d59c5
sql: Stop loading SQL if an error occurs
Otherwise we may go ahead and create DBIx classes for a half-loaded schema.
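
With psql, stopping at the first failed statement is a one-line setting; a hedged sketch of the idea (whether the fix uses exactly this mechanism isn't visible from the log):

```
-- Sketch: make psql abort the script on the first error instead of
-- continuing with a partially loaded schema.
\set ON_ERROR_STOP on
```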
2021-01-26 11:50:32 -05:00
Graham Christensen 9516b256f1
Normalize nixexpr{input,path} from builds to jobsetevals.
Duplicating this data on every record of the builds table cost
approximately 4G of redundant storage.

Note that the database migration included took about 4h45m on an
untuned server which uses very slow rotational disks in a RAID5 setup,
with not a lot of RAM. I imagine in production it might take an hour
or two, but not 4. If this should become a chunked migration, I can do
that.

Note: Because of the question about chunked migrations, I have NOT
YET tested this migration thoroughly enough for merge.
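
For illustration, the general shape of such a migration could be (a sketch only; the shipped migration and any chunking strategy may differ, and the jobsetevalmembers join table is an assumption here):

```
-- Illustrative sketch, not the shipped migration.
-- 1. Add the columns to jobsetevals.
alter table jobsetevals
    add column nixExprInput text,
    add column nixExprPath  text;

-- 2. Backfill each eval from one of its builds.
update jobsetevals e
   set nixExprInput = b.nixExprInput,
       nixExprPath  = b.nixExprPath
  from jobsetevalmembers m
  join builds b on b.id = m.build
 where m.eval = e.id;

-- 3. Drop the duplicated columns from builds.
alter table builds
    drop column nixExprInput,
    drop column nixExprPath;
```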
2021-01-22 09:10:18 -05:00
Graham Christensen c64c4aac4f
jobset page: render error labels per eval 2021-01-21 17:08:02 -05:00
Graham Christensen 805dd6e7ee
Evaluation page: render evaluation errors 2021-01-21 13:11:05 -05:00
Graham Christensen 086eed5147
hydra-eval-jobs: write evaluation errorMsg to the jobseteval table 2021-01-21 13:10:41 -05:00
Graham Christensen d9989b7fa1
Schema: add errorMsg, errorTime to JobsetEvals 2021-01-21 13:10:41 -05:00
Graham Christensen bc4b96d053
BuildOutputs: index path with HASH
Looking at AWS' Performance Insights for a Hydra instance, I found
the hydra-queue-runner's query:

    select id, buildStatus, releaseName, closureSize, size
    from Builds b
    join BuildOutputs o on b.id = o.build
    where
      finished = ?
      and (buildStatus = ? or buildStatus = ?)
      and path = $1

was the slowest query by at least 10x. Running an explain on this
showed why:

hydra=> explain select id, buildStatus, releaseName, closureSize, size
    from Builds b join BuildOutputs o on b.id = o.build where
    finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';

                                                     QUERY PLAN
    ------------------------------------------------------------------------
     Gather  (cost=1000.43..33718.98 rows=2 width=56)
       Workers Planned: 2
       ->  Nested Loop  (cost=0.43..32718.78 rows=1 width=56)
             ->  Parallel Seq Scan on buildoutputs o  (cost=0.00..32710.32
                                                       rows=1
                                                        width=4)
                   Filter: (path = '/nix/store/s93kh...snip...'::text)
             ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b
                                            (cost=0.43..8.45 rows=1 width=56)
                   Index Cond: ((id = o.build) AND (finished = 1))
                   Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (8 rows)

A parallel sequential scan is definitely better than a plain sequential scan, but the
cost ranging from 0 to 32,710 is not great. Looking at the table, I saw the `path`
column is completely unindexed:

    hydra=> \d buildoutputs
                Table "public.buildoutputs"
    Column |  Type   | Collation | Nullable | Default
    --------+---------+-----------+----------+---------
    build  | integer |           | not null |
    name   | text    |           | not null |
    path   | text    |           | not null |
    Indexes:
        "buildoutputs_pkey" PRIMARY KEY, btree (build, name)
    Foreign-key constraints:
        "buildoutputs_build_fkey" FOREIGN KEY (build) REFERENCES builds(id)
            ON DELETE CASCADE

Since we always do exact matches on the path and don't care about ordering,
and since the path column has very high cardinality, a `hash` index is a
good candidate. Note that I did test a btree index; it performed
similarly well, but slightly worse.

After creating the index (this took about 10 seconds) on a test database:

    create index IndexBuildOutputsPath on BuildOutputs using hash(path);

We get a *significantly* reduced cost:

    hydra=> explain select id, buildStatus, releaseName, closureSize, size
    hydra->     from Builds b join BuildOutputs o on b.id = o.build where
    hydra->     finished = 1 and (buildStatus = 0 or buildStatus = 6) and
    hydra->     path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1';
                                                QUERY PLAN
    -------------------------------------------------------------------------------------------------------
    Nested Loop  (cost=0.43..41.41 rows=2 width=56)
    ->  Index Scan using buildoutputs_path_hash on buildoutputs o  (cost=0.00..16.05 rows=3 width=4)
            Index Cond: (path = '/nix/store/s93khs2dncf2cy273mbyr4fb4ns3db20-MIDIVisualizer-5.1'::text)
    ->  Index Scan using indexbuildsonjobsetidfinishedid on builds b  (cost=0.43..8.45 rows=1 width=56)
            Index Cond: ((id = o.build) AND (finished = 1))
            Filter: ((buildstatus = 0) OR (buildstatus = 6))
    (6 rows)

For direct comparison, the overall query plan was changed:

    From: Gather      (cost=1000.43..33718.98 rows=2 width=56)
    To:   Nested Loop (cost=   0.43.....41.41 rows=2 width=56)

and the query plan for buildoutputs changed from a maximum cost of
32,710 down to 16.

In practical terms, the query's planning and execution time was reduced:

Before (ms) | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.898 |   0.416 |   0.383
Execution   | 138.644 | 172.331 | 375.585

After (ms)  | Try 1   | Try 2   | Try 3
------------+---------+---------+--------
Planning    |   0.298 |   0.290 |   0.296
Execution   | 219.625 |   0.035 |   0.034
2021-01-18 11:28:05 -05:00
Eelco Dolstra be0aa7eb85
Merge pull request #841 from pingiun/github-login
Implement GitHub logins
2021-01-05 14:51:51 +01:00
Jelle Besseling 43d662f63a
Don't use enable_github_login option after all
Instead, the github_client_id option is used to detect whether GitHub
logins should be enabled.
2021-01-04 18:09:49 +01:00