Compare commits

...

58 commits
main ... main

Author SHA1 Message Date
Maximilian Bosch eccf01d4fe Merge pull request 'flake: update nixpkgs to 24.11' (#15) from ma27/hydra:update-nixpkgs into main
Reviewed-on: lix-project/hydra#15
2024-12-06 16:37:25 +00:00
Maximilian Bosch 42c4a85ec8 Merge pull request 'README: document release branches' (#16) from ma27/hydra:readme-branches into main
Reviewed-on: lix-project/hydra#16
2024-12-06 16:34:52 +00:00
Maximilian Bosch 2b1c46730c
README: document release branches
Closes #12
2024-12-04 10:24:37 +01:00
Maximilian Bosch 51608e1ca1
flake: update nixpkgs to 24.11
I've been using the module on 24.11 and building Hydra from this repo
with 24.11 as well; so far it's looking good.

Making the upgrade since 24.05 is deprecated now.
2024-12-04 10:13:34 +01:00
leo60228 6285440304
Merge pull request 'chore: apply lix include-rearrangement to hydra' (#14) from jade/include-rearrangement into main
Reviewed-on: lix-project/hydra#14
Reviewed-by: leo60228 <leo@60228.dev>
2024-11-29 16:23:14 -05:00
jade 988554eb7a chore: apply lix include-rearrangement to hydra
This complies with the new include layout in Lix, which will eventually
replace the legacy one previously in use.
2024-11-25 23:08:32 -08:00
jade 4acb1959a1 fixup: use the correct nix-eval-jobs input URL; same commit 2024-11-25 23:02:52 -08:00
jade 453adb7f25 Merge pull request 'Update all flake inputs, fix build with latest Lix' (#13) from ma27/hydra:update-lix into main
Reviewed-on: lix-project/hydra#13
2024-11-26 06:50:41 +00:00
Maximilian Bosch acd54bfbd6
Update all flake inputs, fix build with latest Lix
Flake lock file updates:

• Updated input 'lix':
    'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=ed9b7f4f84fd60ad8618645cc1bae2d686ff0db6' (2024-10-05)
  → 'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=66f6dbda32959dd5cf3a9aaba15af72d037ab7ff' (2024-11-20)
• Updated input 'lix/nix2container':
    'github:nlewo/nix2container/3853e5caf9ad24103b13aa6e0e8bcebb47649fe4' (2024-07-10)
  → 'github:nlewo/nix2container/fa6bb0a1159f55d071ba99331355955ae30b3401' (2024-08-30)
• Updated input 'lix/pre-commit-hooks':
    'github:cachix/git-hooks.nix/f451c19376071a90d8c58ab1a953c6e9840527fd' (2024-07-15)
  → 'github:cachix/git-hooks.nix/4e743a6920eab45e8ba0fbe49dc459f1423a4b74' (2024-09-19)
• Updated input 'nix-eval-jobs':
    'git+https://git.lix.systems/lix-project/nix-eval-jobs?ref=refs/heads/main&rev=42a160bce2fd9ffebc3809746bc80cc7208f9b08' (2024-08-13)
  → 'git+https://git.lix.systems/lix-project/nix-eval-jobs?ref=refs/heads/main&rev=912a9d63319e71ca131e16eea3348145a255db2e' (2024-11-18)
• Updated input 'nix-eval-jobs/flake-parts':
    'github:hercules-ci/flake-parts/8471fe90ad337a8074e957b69ca4d0089218391d' (2024-08-01)
  → 'github:hercules-ci/flake-parts/506278e768c2a08bec68eb62932193e341f55c90' (2024-11-01)
• Updated input 'nix-eval-jobs/nix-github-actions':
    'github:nix-community/nix-github-actions/622f829f5fe69310a866c8a6cd07e747c44ef820' (2024-07-04)
  → 'github:nix-community/nix-github-actions/e04df33f62cdcf93d73e9a04142464753a16db67' (2024-10-24)
• Updated input 'nix-eval-jobs/treefmt-nix':
    'github:numtide/treefmt-nix/349de7bc435bdff37785c2466f054ed1766173be' (2024-08-12)
  → 'github:numtide/treefmt-nix/746901bb8dba96d154b66492a29f5db0693dbfcc' (2024-10-30)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/ecbc1ca8ffd6aea8372ad16be9ebbb39889e55b6' (2024-10-06)
  → 'github:NixOS/nixpkgs/e8c38b73aeb218e27163376a2d617e61a2ad9b59' (2024-11-16)
2024-11-23 10:58:14 +01:00
leo60228 a4b2b58e2b
update for lix header change 2024-11-17 19:57:36 -05:00
Maximilian Bosch ee1234c15c ignoreException has been split into two
The Finally part is a destructor, so using `ignoreExceptionInDestructor`
seems to be the correct choice here.
2024-10-07 19:22:32 +02:00
Maximilian Bosch 7c7078cccf Fix build with latest Lix
Since ca1dc3f70bf98e2424b7b2666ee2180675b67451, the NAR parser has moved
the preallocate & receive steps into the file handle class to remove the
assumption that only one file can be handled at a time.
2024-10-07 19:22:32 +02:00
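
To illustrate the shape of that refactoring, here is a hypothetical Python sketch (made-up names, not Lix's actual C++ API): the per-file preallocate and receive steps move off the parser onto a per-file handle, so nothing assumes a single current file.

```python
# Hypothetical sketch of the pattern described above; all names invented.
class RegularFileHandle:
    def __init__(self, path: str):
        self.out = open(path, "wb")

    def preallocate(self, size: int) -> None:
        self.out.truncate(size)  # reserve space before streaming contents

    def receive(self, chunk: bytes) -> None:
        self.out.write(chunk)    # append the next chunk of file data

class FileSink:
    # Each file gets its own handle, so the sink no longer assumes a
    # single "current" file the way the old interface did.
    def create_regular_file(self, path: str) -> RegularFileHandle:
        return RegularFileHandle(path)
```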
Maximilian Bosch a5099d9e80 flake.lock: Update
Flake lock file updates:

• Updated input 'lix':
    'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=02eb07cfd539c34c080cb1baf042e5e780c1fcc2' (2024-09-01)
  → 'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=31954b51367737defbae87fba053b068897416fb' (2024-09-26)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/6e99f2a27d600612004fbd2c3282d614bfee6421' (2024-08-30)
  → 'github:NixOS/nixpkgs/759537f06e6999e141588ff1c9be7f3a5c060106' (2024-09-25)
2024-10-07 19:22:32 +02:00
Yureka 799441dcf6 Revert "update lix"
This reverts commit e4d466ffcd.
2024-10-06 13:55:10 +02:00
Yureka e4d466ffcd update lix 2024-10-05 23:32:45 +02:00
raito d3257e4761 Merge pull request 'Add metric for builds waiting for download slot' (#9) from waiting-metrics into main
Reviewed-on: lix-project/hydra#9
Reviewed-by: raito <raito@noreply.git.lix.systems>
2024-10-01 16:16:24 +00:00
Ilya K f23ec71227 Add metric for builds waiting for download slot 2024-10-01 19:14:24 +03:00
leo60228 ac37e44982 Merge pull request 'Update Lix; fix build' (#7) from ma27/hydra:lix-update into main
Reviewed-on: lix-project/hydra#7
Reviewed-by: leo60228 <leo@60228.dev>
2024-09-02 22:56:03 +00:00
Maximilian Bosch 6a88e647e7
flake.lock: Update; fix build
Flake lock file updates:

• Updated input 'lix':
    'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=278fddc317cf0cf4d3602d0ec0f24d1dd281fadb' (2024-08-17)
  → 'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=02eb07cfd539c34c080cb1baf042e5e780c1fcc2' (2024-09-01)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/c3d4ac725177c030b1e289015989da2ad9d56af0' (2024-08-15)
  → 'github:NixOS/nixpkgs/6e99f2a27d600612004fbd2c3282d614bfee6421' (2024-08-30)
2024-09-02 10:53:46 +02:00
Pierre Bourdon 8d5d4942e1
queue-runner: remove unused method from State 2024-08-27 02:57:37 +02:00
Pierre Bourdon e5a8ee5c17
web: require permissions for /api/push 2024-08-27 02:57:16 +02:00
Pierre Bourdon fd7fd0ad65
treewide: clang-tidy modernize 2024-08-27 01:33:12 +02:00
Pierre Bourdon d3fcedbcf5
treewide: enable clang-tidy bugprone findings
Fix some trivial findings throughout the codebase, mostly making
implicit casts explicit.
2024-08-27 00:43:17 +02:00
Pierre Bourdon 3891ad77e3
queue-runner: change Machine object creation to work around clang bug
https://github.com/llvm/llvm-project/issues/106123
2024-08-26 22:34:48 +02:00
Pierre Bourdon 21fd1f8993
flake: add devShells, including a clang one for clang-tidy & more 2024-08-26 22:22:18 +02:00
emily ab6d81fad4
api: fix github webhook 2024-08-26 20:26:21 +02:00
Sandro 64df0cba47
Match URIs that don't end in .git
Co-authored-by: Charlotte <lotte@chir.rs>
2024-08-26 20:26:21 +02:00
Sandro Jäckel 6179b298cb
Add gitea push hook 2024-08-26 20:26:20 +02:00
Pierre Bourdon 44b9a7b95d
queue-runner: handle broken pg pool connections in builder code
Completes 9b62c52e5c with another location
that was initially missed.
2024-08-25 22:05:13 +02:00
Maximilian Bosch 3ee51dbe58 readIntoSocket: fix with store URIs containing an &
The third argument to `open()` in `-|` mode is passed to a shell if it's
a string. In my case the store URI contains
`?secret-key=${signingKey.directory}/secret&compression=zstd`

For the `nix store cat` case this means that

* everything up to the `&` is started as a background process; this
  fails immediately because no path to cat is specified.
* `compression=zstd` is parsed as a variable assignment.
* the `$path` argument to `store cat` is executed as yet another
  command.

Passing an argument list instead of a string solves the problem.
2024-08-18 21:41:54 +00:00
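
The string-vs-list distinction that bit here is the same one subprocess APIs in other languages make; a minimal Python sketch of the hazard (illustrative only; the store path is hypothetical):

```python
import subprocess

store_uri = "local?secret-key=/run/keys/secret&compression=zstd"
path = "/nix/store/aaaa-example"  # hypothetical path

# String form: a shell parses the line, so '&' backgrounds the first half
# (which fails, as no path to cat is given), 'compression=zstd' becomes a
# variable assignment, and the path is run as a separate command.
subprocess.run(f"nix store cat --store {store_uri} {path}", shell=True)

# List form: arguments reach the process verbatim, no shell involved.
subprocess.run(["nix", "store", "cat", "--store", store_uri, path])
```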
Maximilian Bosch e987f74954
doc: drop dev-notes & make update-dbix more discoverable
`dev-notes` are severely outdated. I dropped everything except one note
that I moved to hacking.md. The parts about creating users are also
covered elsewhere.

The `update-dbix` part got a `just` command to make it discoverable again.
2024-08-18 14:47:09 +02:00
Maximilian Bosch 1f802c008c
flake.lock: Update
Flake lock file updates:

• Updated input 'lix':
    'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=5137cea99044d54337e439510a647743110b2d7d' (2024-08-10)
  → 'git+https://git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=278fddc317cf0cf4d3602d0ec0f24d1dd281fadb' (2024-08-17)
• Updated input 'nix-eval-jobs':
    'git+https://git.lix.systems/lix-project/nix-eval-jobs?ref=refs/heads/main&rev=c057494450f2d1420726ddb0bab145a5ff4ddfdd' (2024-07-17)
  → 'git+https://git.lix.systems/lix-project/nix-eval-jobs?ref=refs/heads/main&rev=42a160bce2fd9ffebc3809746bc80cc7208f9b08' (2024-08-13)
• Updated input 'nix-eval-jobs/flake-parts':
    'github:hercules-ci/flake-parts/9227223f6d922fee3c7b190b2cc238a99527bbb7' (2024-07-03)
  → 'github:hercules-ci/flake-parts/8471fe90ad337a8074e957b69ca4d0089218391d' (2024-08-01)
• Updated input 'nix-eval-jobs/treefmt-nix':
    'github:numtide/treefmt-nix/0fb28f237f83295b4dd05e342f333b447c097398' (2024-07-15)
  → 'github:numtide/treefmt-nix/349de7bc435bdff37785c2466f054ed1766173be' (2024-08-12)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/a781ff33ae258bbcfd4ed6e673860c3e923bf2cc' (2024-08-10)
  → 'github:NixOS/nixpkgs/c3d4ac725177c030b1e289015989da2ad9d56af0' (2024-08-15)
2024-08-18 14:18:36 +02:00
Maximilian Bosch 3a4e0d4917
Get dev environment working again
* justfile, inspired by Lix.
* let foreman use the stuff from outputs, similar to what Lix does.
* mess around with PERL5LIB[1] and PATH to get tests running locally.

[1] I don't really know how `Setup` was found before tbh.
2024-08-18 14:18:36 +02:00
Maximilian Bosch 3517acc5ba
Add direnv & PLS to the dev setup 2024-08-18 14:18:35 +02:00
71rd 459aa0a598 Stream files from store instead of buffering them
When an artifact is requested from hydra the output is first copied
from the nix store into memory and then sent as a response, delaying
the download and taking up significant amounts of memory.

As reported in https://github.com/NixOS/hydra/issues/1357

Instead of calling a command and blocking while reading in the entire
output, this adds read_into_socket(). The function takes a command,
starts a subprocess with that command, and returns a file descriptor
attached to its stdout.
This file descriptor is then used by Catalyst's response builder to
stream the output directly.
2024-08-13 22:09:48 +02:00
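
The pattern, returning a readable end attached to the child's stdout and letting the web layer stream from it, can be sketched in a few lines of Python (a sketch of the idea, not the actual Perl/Catalyst code):

```python
import subprocess

def read_into_socket(cmd: list[str]):
    # Start cmd and return a file object on its stdout, so the caller
    # can stream the output instead of buffering all of it in memory.
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE)
    return proc.stdout

# Stream an artifact in fixed-size chunks (path is hypothetical).
out = read_into_socket(["nix", "store", "cat", "/nix/store/aaaa-example"])
for chunk in iter(lambda: out.read(64 * 1024), b""):
    pass  # hand each chunk to the HTTP response as it arrives
```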
eldritch horrors f1b552ecbf update flake locks, fix compile errors 2024-08-12 22:45:34 +02:00
Pierre Bourdon db8c2cc4a8
.github: remove
The primary host for this repo isn't GitHub, and this is causing a whole
bunch of false indications in the Lix Forgejo UI as it tries to run the
.github/workflows with non-existent runners.
2024-08-11 16:15:12 +02:00
Rick van Schijndel 8858abb1a6
t/test.pl: increase event-timeout, set qvf
Only log issues/failures when something's actually up.
It has irked me for a long time that so much output came
out of running the tests; this seems to silence it.
It does hide some warnings, but I think it makes the output
so much more readable that it's worth the tradeoff.

Helps for highly parallel running of jobs; sometimes they'd not give
output for a while, and setting this timeout higher appears to help.
Not completely sure if this is the right place to do it, but it works fine for me.
2024-08-11 16:08:35 +02:00
Rick van Schijndel ef619eca99
t: increase timeouts for slow commands with high load
We've seen many failures on ofborg; a lot of them ultimately appear to come down to
a timeout being hit, resulting in something like this:

Failure executing slapadd -F /<path>/slap.d -b dc=example -l /<path>/load.ldif.

Hopefully this resolves it for most cases.
I've done some endurance testing and this helps a lot.
Some other commands also regularly time out under high load:

- hydra-init
- hydra-create-user
- nix-store --delete

This should address most issues with tests randomly failing.

Used the following script for endurance testing:

```

import os
import subprocess

run_counter = 0
fail_counter = 0

while True:
    try:
        run_counter += 1
        print(f"Starting run {run_counter}")
        env = os.environ
        env["YATH_JOB_COUNT"] = "20"
        result = subprocess.run(["perl", "t/test.pl"], env=env)
        if (result.returncode != 0):
            fail_counter += 1
        print(f"Finish run {run_counter}, total fail count: {fail_counter}")
    except KeyboardInterrupt:
        print(f"Finished {run_counter} runs with {fail_counter} fails")
        break
```

In case someone else wants to do it on their system :).
Note that YATH_JOB_COUNT may need to be adjusted based on your core
count.
I only have 4 cores (8 threads), so for others higher numbers might
yield better results in shaking out unstable tests.
2024-08-11 16:08:09 +02:00
marius david 41dfa0e443
Document the default user and port in hacking.md
2024-08-11 16:06:08 +02:00
Pierre Bourdon 4b107e6ff3
hydra-eval-jobset: pass --workers and --max-memory-size to n-e-j
Lost in the h-e-j -> n-e-j migration, causing evaluation to always be
single threaded and limited to 4GiB RAM. Follow the config settings like
h-e-j used to do (via C++ code).
2024-07-22 23:16:29 +02:00
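
Schematically, the fix forwards the evaluator settings to the nix-eval-jobs command line; a Python sketch using the `evaluator_workers` and `evaluator_max_memory_size` option names (and old defaults) that hydra-eval-jobs used to read:

```python
# Sketch: build the nix-eval-jobs invocation from Hydra's config,
# mirroring what hydra-eval-jobs used to do internally in C++.
def nix_eval_jobs_cmd(config: dict) -> list[str]:
    workers = config.get("evaluator_workers", 1)
    max_memory = config.get("evaluator_max_memory_size", 4096)  # MiB
    return [
        "nix-eval-jobs",
        "--workers", str(workers),
        "--max-memory-size", str(max_memory),
    ]
```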
Pierre Bourdon 4b886d9c45
autotools -> meson
There are some known regressions regarding local testing setups, since
everything was kinda half-written with the expectation that build dir =
source dir (which should no longer be true). But everything builds and
the test suite runs fine, after several hours spent debugging random
crashes in libpqxx with MALLOC_PERTURB_...
2024-07-22 22:30:41 +02:00
Pierre Bourdon fbb894af4e
static: de-bundle vendored dependencies
The current way this whole build works is incompatible with having a
separate build dir. To be improved in the future - maybe minimize the
dependencies a bit. But this isn't so much data that we really have to
care.
2024-07-22 16:30:13 +02:00
Niklas Hambüchen 8a984efaef
renderInputDiff: Increase git hash length 8 -> 12
See investigation on lengths required to be conflict-free in practice:

https://github.com/NixOS/hydra/pull/1258#issuecomment-1321891677
2024-07-21 12:23:29 +02:00
Pierre Bourdon abc9f11417
queue runner: fix store URI args being written to the SSH hosts file
2024-07-20 16:09:07 +02:00
Pierre Bourdon 9a4a5dd624
jobset-eval: fix actions not showing up sometimes for new jobs
New jobs have their "new" status take precedence over them being
"failed" or "queued", which means actions that can act on "failed" or
"queued" jobs weren't shown to the user when they could only act on
"new" jobs.
2024-07-20 13:09:39 +02:00
Janik Haag ac406a9175
nixos-modules: hydra-queue-runner fix network-online.target eval warning
2024-07-19 09:13:32 +02:00
Pierre Bourdon 73616aa0d9
nixos-module: don't force Nix GC to keep outputs
This isn't actually needed (h.n.o even overrides it!).

Fix the use of deprecated `gc-keep-derivations` alias along the way.
2024-07-17 13:21:58 +02:00
Pierre Bourdon d33fc08341
nixos-module: fix trusted users
- Use extra-trusted-users to avoid overriding the default set of trusted
  users and causing permission issues.
- Add hydra and hydra-www users which also need permissions.
2024-07-17 13:20:37 +02:00
Pierre Bourdon b0e9b4b2f9
hydra-eval-jobset: incrementally ingest eval results
nix-eval-jobs streams output, unlike hydra-eval-jobs. Now that we've
migrated, we can use this to:

1. Use less RAM by avoiding buffering a whole eval's worth of metadata
   into a Perl string and an array of JSON objects.
2. Make eval latency a bit lower by allowing the queue runner to start
   ingesting builds faster.
2024-07-17 12:05:41 +02:00
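
nix-eval-jobs prints one JSON object per line as results become available, which is what makes incremental ingestion possible; a minimal Python sketch of consuming it that way:

```python
import json
import subprocess

# Sketch: consume nix-eval-jobs output line by line instead of
# buffering a whole evaluation's metadata first.
proc = subprocess.Popen(
    ["nix-eval-jobs", "--flake", ".#hydraJobs"],
    stdout=subprocess.PIPE, text=True,
)
for line in proc.stdout:
    job = json.loads(line)  # one job's metadata per line
    # ...ingest `job` into the queue right away
```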
Pierre Bourdon 370a4bf138
treewide: start removing tests related to constituents
The feature cannot easily be ported to nix-eval-jobs since it requires
deep integration into the evaluator, and h.n.o doesn't use it. Later
more of this will be ripped out.
2024-07-17 08:31:19 +02:00
Pierre Bourdon ed7c58708c
hydra-eval-jobs: remove, replaced by nix-eval-jobs 2024-07-17 08:31:19 +02:00
Pierre Bourdon 6d4ccff43c
hydra-eval-jobset: use nix-eval-jobs instead of hydra-eval-jobs 2024-07-17 08:31:19 +02:00
Pierre Bourdon 684cc50d86
flake: add nix-eval-jobs as input 2024-07-17 08:17:32 +02:00
Pierre Bourdon 6195cec6a3
hydra-queue-runner: adjust for Lix generators related changes 2024-07-16 04:35:44 +02:00
Pierre Bourdon 1fbfed8162
flake: rename 'nix' input to 'lix'
For consistency with other Lix forks of Nix ecosystem projects, e.g.
nix-eval-jobs.
2024-07-16 03:59:38 +02:00
Pierre Bourdon fb9e29d4d0
queue runner: fix nullptr deref on build exception after releasing a machine reservation
2024-07-13 06:12:35 +02:00
Pierre Bourdon 05d620a54f
flake.lock: Update
Flake lock file updates:

• Updated input 'nix':
    'git+https://git@git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=4c3d93611f2848c56ebc69c85f2b1e18001ed3c7' (2024-06-24)
  → 'git+https://git@git.lix.systems/lix-project/lix?ref=refs/heads/main&rev=4b109ec1a8fc4550150f56f0f46f2f41d844bda8' (2024-07-11)
• Updated input 'nixpkgs':
    'github:NixOS/nixpkgs/e4509b3a560c87a8d4cb6f9992b8915abf9e36d8' (2024-06-23)
  → 'github:NixOS/nixpkgs/a046c1202e11b62cbede5385ba64908feb7bfac4' (2024-07-11)
2024-07-13 03:08:50 +02:00
246 changed files with 89016 additions and 1769 deletions

.clang-tidy Normal file

@@ -0,0 +1,12 @@
UseColor: true
Checks:
- -*
- bugprone-*
# kind of nonsense
- -bugprone-easily-swappable-parameters
# many warnings due to not recognizing `assert` properly
- -bugprone-unchecked-optional-access
- modernize-*
- -modernize-use-trailing-return-type

.envrc Normal file

@@ -0,0 +1 @@
use flake .#clang


@@ -1,37 +0,0 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Hydra Server:**
Please fill out this data as well as you can, but don't worry if you can't -- just do your best.
- OS and version: [e.g. NixOS 22.05.20211203.ee3794c]
- Version of Hydra
- Version of Nix Hydra is built against
- Version of the Nix daemon
**Additional context**
Add any other context about the problem here.


@@ -1,20 +0,0 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -1,6 +0,0 @@
version: 2
updates:
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "weekly"


@@ -1,14 +0,0 @@
name: "Test"
on:
pull_request:
push:
jobs:
tests:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: cachix/install-nix-action@v17
#- run: nix flake check
- run: nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi

.gitignore vendored

@@ -1,48 +1,9 @@
/.pls_cache
*.o
*~
Makefile
Makefile.in
.deps
.hydra-data
/config.guess
/config.log
/config.status
/config.sub
/configure
/depcomp
/libtool
/ltmain.sh
/autom4te.cache
/aclocal.m4
/missing
/install-sh
.test_info.*
/src/sql/hydra-postgresql.sql
/src/sql/hydra-sqlite.sql
/src/sql/tmp.sqlite
/src/hydra-eval-jobs/hydra-eval-jobs
/src/root/static/bootstrap
/src/root/static/js/flot
/tests
/doc/manual/images
/doc/manual/manual.html
/doc/manual/manual.pdf
/t/.bzr*
/t/.git*
/t/.hg*
/t/nix
/t/data
/t/jobs/config.nix
t/jobs/declarative/project.json
/inst
hydra-config.h
hydra-config.h.in
result
result-*
.hydra-data
outputs
config
stamp-h1
src/hydra-evaluator/hydra-evaluator
src/hydra-queue-runner/hydra-queue-runner
src/root/static/fontawesome/
src/root/static/bootstrap*/


@@ -1,2 +0,0 @@
[test]
-I=rel(t/lib)


@@ -1,12 +0,0 @@
SUBDIRS = src doc
if CAN_DO_CHECK
SUBDIRS += t
endif
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
EXTRA_DIST = nixos-modules/hydra.nix
install-data-local: nixos-modules/hydra.nix
$(INSTALL) -d $(DESTDIR)$(datadir)/nix
$(INSTALL_DATA) nixos-modules/hydra.nix $(DESTDIR)$(datadir)/nix/hydra-module.nix


@@ -4,6 +4,17 @@
Hydra is a [Continuous Integration](https://en.wikipedia.org/wiki/Continuous_integration) service for [Nix](https://nixos.org/nix) based projects.
## Branches
| Branch | Lix release | Maintained |
| --- | --- | --- |
| `main` | [`main` branch of Lix](https://git.lix.systems/lix-project/lix/commits/branch/main) | ✅ |
| `lix-2.91` | [2.91](https://lix.systems/blog/2024-08-12-lix-2.91-release/) | ✅ |
Active development happens on `main` only.
Branches that track a Lix release are maintained as long as the Lix version is
maintained. These only receive critical bugfixes.
## Installation And Setup
**Note**: The instructions provided below are intended to enable new users to get a simple, local installation up and running. They are by no means sufficient for running a production server, let alone a public instance.
@@ -78,11 +89,11 @@ $ nix-build
### Development Environment
You can use the provided shell.nix to get a working development environment:
```
$ nix-shell
$ autoreconfPhase
$ configurePhase # NOTE: not ./configure
$ make
$ nix develop
[nix-shell]$ just setup
[nix-shell]$ just install
```
### Executing Hydra During Development
@@ -91,10 +102,9 @@ When working on new features or bug fixes you need to be able to run Hydra from
can be done using [foreman](https://github.com/ddollar/foreman):
```
$ nix-shell
$ # hack hack
$ make
$ foreman start
$ nix develop
[nix-shell]$ just install
[nix-shell]$ foreman start
```
Have a look at the [Procfile](./Procfile) if you want to see how the processes are being started. In order to avoid
@@ -115,22 +125,22 @@ Start by following the steps in [Development Environment](#development-environme
Then, you can run the tests and the perlcritic linter together with:
```console
$ nix-shell
$ make check
$ nix develop
[nix-shell]$ just test
```
You can run a single test with:
```
$ nix-shell
$ yath test ./t/foo/bar.t
$ nix develop
[nix-shell]$ yath test ./t/foo/bar.t
```
And you can run just perlcritic with:
```
$ nix-shell
$ make perlcritic
$ nix develop
[nix-shell]$ just perlcritic
```
### JSON API


@@ -1,91 +0,0 @@
AC_INIT([Hydra], [m4_esyscmd([echo -n $(cat ./version.txt)$VERSION_SUFFIX])])
AC_CONFIG_AUX_DIR(config)
AM_INIT_AUTOMAKE([foreign serial-tests])
AC_LANG([C++])
AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_LN_S
AC_PROG_LIBTOOL
AC_PROG_CXX
AC_PATH_PROG([XSLTPROC], [xsltproc])
AC_ARG_WITH([docbook-xsl],
[AS_HELP_STRING([--with-docbook-xsl=PATH],
[path of the DocBook XSL stylesheets])],
[docbookxsl="$withval"],
[docbookxsl="/docbook-xsl-missing"])
AC_SUBST([docbookxsl])
AC_DEFUN([NEED_PROG],
[
AC_PATH_PROG($1, $2)
if test -z "$$1"; then
AC_MSG_ERROR([$2 is required])
fi
])
NEED_PROG(perl, perl)
NEED_PROG([NIX_STORE_PROGRAM], [nix-store])
AC_MSG_CHECKING([whether $NIX_STORE_PROGRAM is recent enough])
if test -n "$NIX_STORE" -a -n "$TMPDIR"
then
# This may be executed from within a build chroot, so pacify
# `nix-store' instead of letting it choke while trying to mkdir
# /nix/var.
NIX_STATE_DIR="$TMPDIR"
export NIX_STATE_DIR
fi
if NIX_REMOTE=daemon PAGER=cat "$NIX_STORE_PROGRAM" --timeout 123 -q; then
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
AC_MSG_ERROR([`$NIX_STORE_PROGRAM' doesn't support `--timeout'; please use a newer version.])
fi
PKG_CHECK_MODULES([NIX], [lix-main lix-expr lix-store])
testPath="$(dirname $(type -p expr))"
AC_SUBST(testPath)
CXXFLAGS+=" -include lix/config.h -std=gnu++20"
AC_CONFIG_FILES([
Makefile
doc/Makefile
doc/manual/Makefile
src/Makefile
src/hydra-evaluator/Makefile
src/hydra-eval-jobs/Makefile
src/hydra-queue-runner/Makefile
src/sql/Makefile
src/ttf/Makefile
src/lib/Makefile
src/root/Makefile
src/script/Makefile
])
# Tests might be filtered out
AM_CONDITIONAL([CAN_DO_CHECK], [test -f "$srcdir/t/api-test.t"])
AM_COND_IF(
[CAN_DO_CHECK],
[
jobsPath="$(realpath ./t/jobs)"
AC_SUBST(jobsPath)
AC_CONFIG_FILES([
t/Makefile
t/jobs/config.nix
t/jobs/declarative/project.json
])
])
AC_CONFIG_COMMANDS([executable-scripts], [])
AC_CONFIG_HEADER([hydra-config.h])
AC_OUTPUT


@@ -1,4 +0,0 @@
SUBDIRS = manual
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)


@@ -1,122 +0,0 @@
* Recreating the schema bindings:
$ make -C src/sql update-dbix
* Running the test server:
$ DBIC_TRACE=1 ./script/hydra_server.pl
* Setting the maximum number of concurrent builds per system type:
$ psql -d hydra <<< "insert into SystemTypes(system, maxConcurrent) values('i686-linux', 3);"
* Creating a user:
$ hydra-create-user root --email-address 'e.dolstra@tudelft.nl' \
--password-prompt
(Replace "foobar" with the desired password.)
To make the user an admin:
$ hydra-create-user root --role admin
To enable a non-admin user to create projects:
$ hydra-create-user root --role create-projects
* Changing the priority of a scheduled build:
update buildschedulinginfo set priority = 200 where id = <ID>;
* Changing the priority of all builds for a jobset:
update buildschedulinginfo set priority = 20 where id in (select id from builds where finished = 0 and project = 'nixpkgs' and jobset = 'trunk');
* Steps to install:
- Install the Hydra closure.
- Set HYDRA_DATA to /somewhere.
- Run hydra_init.pl
- Start hydra_server
- Visit http://localhost:3000/
- Create a user (see above)
- Create a project, jobset etc.
- Start hydra_evaluator and hydra_queue_runner
* Job selection:
php-sat:build [system = "i686-linux"]
php-sat:build [same system]
tarball [same patchelfSrc]
--if system i686-linux --arg build {...}
* Restart all aborted builds in a given evaluation (e.g. 820909):
> update builds set finished = 0 where id in (select id from builds where finished = 1 and buildstatus = 3 and exists (select 1 from jobsetevalmembers where eval = 820909 and build = id));
* Restart all builds in a given evaluation that had a build step time out:
> update builds set finished = 0 where id in (select id from builds where finished = 1 and buildstatus != 0 and exists (select 1 from jobsetevalmembers where eval = 926992 and build = id) and exists (select 1 from buildsteps where build = id and status = 7));
* select * from (select project, jobset, job, system, max(timestamp) timestamp from builds where finished = 1 group by project, jobset, job, system) x join builds y on x.timestamp = y.timestamp and x.project = y.project and x.jobset = y.jobset and x.job = y.job and x.system = y.system;
select * from (select project, jobset, job, system, max(timestamp) timestamp from builds where finished = 1 group by project, jobset, job, system) natural join builds;
* Delete all scheduled builds that are not already building:
delete from builds where finished = 0 and not exists (select 1 from buildschedulinginfo s where s.id = builds.id and busy != 0);
* select x.project, x.jobset, x.job, x.system, x.id, x.timestamp, r.buildstatus, b.id, b.timestamp
from (select project, jobset, job, system, max(id) as id from Builds where finished = 1 group by project, jobset, job, system) as a_
natural join Builds x
natural join BuildResultInfo r
left join Builds b on b.id =
(select max(id) from builds c
natural join buildresultinfo r2
where x.project = c.project and x.jobset = c.jobset and x.job = c.job and x.system = c.system
and x.id > c.id and r.buildstatus != r2.buildstatus);
* Using PostgreSQL (version 9.2 or newer is required):
$ HYDRA_DBI="dbi:Pg:dbname=hydra;" hydra-server
* Find the builds with the highest number of build steps:
select id, (select count(*) from buildsteps where build = x.id) as n from builds x order by n desc;
* Evaluating the NixOS Hydra jobs:
$ ./hydra_eval_jobs ~/Dev/nixos-wc/release.nix --arg nixpkgs '{outPath = /home/eelco/Dev/nixpkgs-wc;}' --arg nixosSrc '{outPath = /home/eelco/Dev/nixos-wc; rev = 1234;}' --arg services '{outhPath = /home/eelco/services-wc;}' --argstr system i686-linux --argstr system x86_64-linux --arg officialRelease false
* Show all the failing jobs/systems in the nixpkgs:stdenv jobset that
succeed in the nixpkgs:trunk jobset:
select job, system from builds b natural join buildresultinfo where project = 'nixpkgs' and jobset = 'stdenv' and iscurrent = 1 and finished = 1 and buildstatus != 0 and exists (select 1 from builds natural join buildresultinfo where project = 'nixpkgs' and jobset = 'trunk' and job = b.job and system = b.system and iscurrent = 1 and finished = 1 and buildstatus = 0) order by job, system;
* Get all Nixpkgs jobs that have never built succesfully:
select project, jobset, job from builds b1
where project = 'nixpkgs' and jobset = 'trunk' and iscurrent = 1
group by project, jobset, job
having not exists
(select 1 from builds b2 where b1.project = b2.project and b1.jobset = b2.jobset and b1.job = b2.job and finished = 1 and buildstatus = 0)
order by project, jobset, job;


@@ -1,6 +0,0 @@
MD_FILES = src/*.md
EXTRA_DIST = $(MD_FILES)
install: $(MD_FILES)
mdbook build . -d $(docdir)

doc/manual/meson.build Normal file

@@ -0,0 +1,33 @@
srcs = files(
'src/SUMMARY.md',
'src/about.md',
'src/api.md',
'src/configuration.md',
'src/hacking.md',
'src/installation.md',
'src/introduction.md',
'src/jobs.md',
'src/monitoring/README.md',
'src/notifications.md',
'src/plugins/README.md',
'src/plugins/RunCommand.md',
'src/plugins/declarative-projects.md',
'src/projects.md',
'src/webhooks.md',
)
manual = custom_target(
'manual',
command: [
mdbook, 'build', '@SOURCE_ROOT@/doc/manual', '-d', meson.current_build_dir() / 'html'
],
depend_files: srcs,
output: ['html'],
build_by_default: true,
)
install_subdir(
manual.full_path(),
install_dir: get_option('datadir') / 'doc/hydra',
strip_directory: true,
)


@@ -12,15 +12,14 @@ To enter a shell in which all environment variables (such as `PERL5LIB`)
and dependencies can be found:
```console
$ nix-shell
$ nix develop
```
To build Hydra, you should then do:
```console
[nix-shell]$ autoreconfPhase
[nix-shell]$ configurePhase
[nix-shell]$ make
[nix-shell]$ just setup
[nix-shell]$ just install
```
You start a local database, the webserver, and other components with
@@ -30,6 +29,8 @@ foreman:
$ foreman start
```
The Hydra interface will be available on port 63333, with an admin user named "alice" with password "foobar"
You can run just the Hydra web server in your source tree as follows:
```console
@@ -39,18 +40,13 @@ $ ./src/script/hydra-server
You can run Hydra's test suite with the following:
```console
[nix-shell]$ make check
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ make check YATH_JOB_COUNT=$NIX_BUILD_CORES
[nix-shell]$ just test
[nix-shell]$ # or run yath directly:
[nix-shell]$ yath test
[nix-shell]$ # to run as many tests as you have cores:
[nix-shell]$ yath test -j $NIX_BUILD_CORES
```
When using `yath` instead of `make check`, ensure you have run `make`
in the root of the repository at least once.
**Warning**: Currently, the tests can fail
if run with high parallelism [due to an issue in
`Test::PostgreSQL`](https://github.com/TJC/Test-postgresql/issues/40)
@@ -101,3 +97,14 @@ Off NixOS, change `/etc/nix/nix.conf`:
```conf
trusted-users = root YOURUSERNAME
```
### Updating schema bindings
```
just update-dbix
```
### Find the builds with the highest number of build steps:
select id, (select count(*) from buildsteps where build = x.id) as n from builds x order by n desc;


@@ -1,9 +1,12 @@
# Webhooks
Hydra can be notified by github's webhook to trigger a new evaluation when a
Hydra can be notified by github or gitea with webhooks to trigger a new evaluation when a
jobset has a github repo in its input.
To set up a github webhook go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab
click on `Add webhook`.
## GitHub
To set up a webhook for a GitHub repository go to `https://github.com/<yourhandle>/<yourrepo>/settings`
and in the `Webhooks` tab click on `Add webhook`.
- In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
- In `Content type` switch to `application/json`.
@@ -11,3 +14,14 @@ click on `Add webhook`.
- For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`.
Then add the hook with `Add webhook`.
## Gitea
To set up a webhook for a Gitea repository go to the settings of the repository in your Gitea instance
and in the `Webhooks` tab click on `Add Webhook` and choose `Gitea` in the drop down.
- In `Target URL` fill in `https://<your-hydra-domain>/api/push-gitea`.
- Keep HTTP method `POST`, POST Content Type `application/json` and Trigger On `Push Events`.
- Change the branch filter to match the git branch hydra builds.
Then add the hook with `Add webhook`.


@@ -16,7 +16,28 @@
"type": "github"
}
},
"nix": {
"flake-parts": {
"inputs": {
"nixpkgs-lib": [
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1730504689,
"narHash": "sha256-hgmguH29K2fvs9szpq2r3pz2/8cJd2LPS+b4tfNFCwE=",
"owner": "hercules-ci",
"repo": "flake-parts",
"rev": "506278e768c2a08bec68eb62932193e341f55c90",
"type": "github"
},
"original": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
"lix": {
"inputs": {
"flake-compat": "flake-compat",
"nix2container": "nix2container",
@@ -27,27 +48,74 @@
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1719211568,
"narHash": "sha256-oIgmvhe3CV/36LC0KXgqWnKXma39wabks8U9JBMDfO4=",
"lastModified": 1732112222,
"narHash": "sha256-H7GN4++a4vE49SUNojZx+FSk4mmpb2ifJUtJMJHProI=",
"ref": "refs/heads/main",
"rev": "4c3d93611f2848c56ebc69c85f2b1e18001ed3c7",
"revCount": 15877,
"rev": "66f6dbda32959dd5cf3a9aaba15af72d037ab7ff",
"revCount": 16513,
"type": "git",
"url": "https://git@git.lix.systems/lix-project/lix"
"url": "https://git.lix.systems/lix-project/lix"
},
"original": {
"type": "git",
"url": "https://git@git.lix.systems/lix-project/lix"
"url": "https://git.lix.systems/lix-project/lix"
}
},
"nix-eval-jobs": {
"inputs": {
"flake-parts": "flake-parts",
"lix": [
"lix"
],
"nix-github-actions": "nix-github-actions",
"nixpkgs": [
"nixpkgs"
],
"treefmt-nix": "treefmt-nix"
},
"locked": {
"lastModified": 1732351635,
"narHash": "sha256-H94CcQ3yamG5+RMxtxXllR02YIlxQ5WD/8PcolO9yEA=",
"ref": "refs/heads/main",
"rev": "dfc286ca3dc49118c30d8d6205d6d6af76c62b7a",
"revCount": 617,
"type": "git",
"url": "https://git.lix.systems/lix-project/nix-eval-jobs"
},
"original": {
"type": "git",
"url": "https://git.lix.systems/lix-project/nix-eval-jobs"
}
},
"nix-github-actions": {
"inputs": {
"nixpkgs": [
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1731952509,
"narHash": "sha256-p4gB3Rhw8R6Ak4eMl8pqjCPOLCZRqaehZxdZ/mbFClM=",
"owner": "nix-community",
"repo": "nix-github-actions",
"rev": "7b5f051df789b6b20d259924d349a9ba3319b226",
"type": "github"
},
"original": {
"owner": "nix-community",
"repo": "nix-github-actions",
"type": "github"
}
},
"nix2container": {
"flake": false,
"locked": {
"lastModified": 1712990762,
"narHash": "sha256-hO9W3w7NcnYeX8u8cleHiSpK2YJo7ecarFTUlbybl7k=",
"lastModified": 1724996935,
"narHash": "sha256-njRK9vvZ1JJsP8oV2OgkBrpJhgQezI03S7gzskCcHos=",
"owner": "nlewo",
"repo": "nix2container",
"rev": "20aad300c925639d5d6cbe30013c8357ce9f2a2e",
"rev": "fa6bb0a1159f55d071ba99331355955ae30b3401",
"type": "github"
},
"original": {
@@ -58,16 +126,16 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1719145550,
"narHash": "sha256-K0i/coxxTEl30tgt4oALaylQfxqbotTSNb1/+g+mKMQ=",
"lastModified": 1733120037,
"narHash": "sha256-En+gSoVJ3iQKPDU1FHrR6zIxSLXKjzKY+pnh9tt+Yts=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "e4509b3a560c87a8d4cb6f9992b8915abf9e36d8",
"rev": "f9f0d5c5380be0a599b1fb54641fa99af8281539",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-24.05",
"ref": "nixos-24.11",
"repo": "nixpkgs",
"type": "github"
}
@@ -91,11 +159,11 @@
"pre-commit-hooks": {
"flake": false,
"locked": {
"lastModified": 1712055707,
"narHash": "sha256-4XLvuSIDZJGS17xEwSrNuJLL7UjDYKGJSbK1WWX2AK8=",
"lastModified": 1726745158,
"narHash": "sha256-D5AegvGoEjt4rkKedmxlSEmC+nNLMBPWFxvmYnVLhjk=",
"owner": "cachix",
"repo": "git-hooks.nix",
"rev": "e35aed5fda3cc79f88ed7f1795021e559582093a",
"rev": "4e743a6920eab45e8ba0fbe49dc459f1423a4b74",
"type": "github"
},
"original": {
@@ -106,9 +174,31 @@
},
"root": {
"inputs": {
"nix": "nix",
"lix": "lix",
"nix-eval-jobs": "nix-eval-jobs",
"nixpkgs": "nixpkgs"
}
},
"treefmt-nix": {
"inputs": {
"nixpkgs": [
"nix-eval-jobs",
"nixpkgs"
]
},
"locked": {
"lastModified": 1732292307,
"narHash": "sha256-5WSng844vXt8uytT5djmqBCkopyle6ciFgteuA9bJpw=",
"owner": "numtide",
"repo": "treefmt-nix",
"rev": "705df92694af7093dfbb27109ce16d828a79155f",
"type": "github"
},
"original": {
"owner": "numtide",
"repo": "treefmt-nix",
"type": "github"
}
}
},
"root": "root",


@@ -1,16 +1,21 @@
{
description = "A Nix-based continuous build system";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
inputs.nix.url = "git+https://git@git.lix.systems/lix-project/lix";
inputs.nix.inputs.nixpkgs.follows = "nixpkgs";
inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.11";
outputs = { self, nixpkgs, nix }:
inputs.lix.url = "git+https://git.lix.systems/lix-project/lix";
inputs.lix.inputs.nixpkgs.follows = "nixpkgs";
inputs.nix-eval-jobs.url = "git+https://git.lix.systems/lix-project/nix-eval-jobs";
inputs.nix-eval-jobs.inputs.nixpkgs.follows = "nixpkgs";
inputs.nix-eval-jobs.inputs.lix.follows = "lix";
outputs = { self, nix-eval-jobs, nixpkgs, lix }:
let
systems = [ "x86_64-linux" "aarch64-linux" ];
forEachSystem = nixpkgs.lib.genAttrs systems;
overlayList = [ self.overlays.default nix.overlays.default ];
overlayList = [ self.overlays.default lix.overlays.default ];
pkgsBySystem = forEachSystem (system: import nixpkgs {
inherit system;
@@ -24,6 +29,7 @@
overlays.default = final: prev: {
hydra = final.callPackage ./package.nix {
inherit (final.lib) fileset;
nix-eval-jobs = nix-eval-jobs.packages.${final.system}.default;
rawSrc = self;
};
};
@@ -67,6 +73,21 @@
default = pkgsBySystem.${system}.hydra;
});
devShells = forEachSystem (system: let
pkgs = pkgsBySystem.${system};
lib = pkgs.lib;
mkDevShell = stdenv: (pkgs.mkShell.override { inherit stdenv; }) {
inputsFrom = [ (self.packages.${system}.default.override { inherit stdenv; }) ];
packages =
lib.optional (stdenv.cc.isClang && stdenv.hostPlatform == stdenv.buildPlatform) pkgs.clang-tools;
};
in {
default = mkDevShell pkgs.stdenv;
clang = mkDevShell pkgs.clangStdenv;
});
nixosModules = import ./nixos-modules {
overlays = overlayList;
};


@@ -3,4 +3,4 @@
# wait for hydra-server to listen
while ! nc -z localhost 63333; do sleep 1; done
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-evaluator
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-evaluator


@@ -28,4 +28,4 @@ use-substitutes = true
</hydra_notify>
EOF
fi
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-dev-server --port 63333 --restart --debug
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-dev-server --port 63333 --restart --debug


@@ -3,4 +3,4 @@
# wait for hydra-server to listen
while ! nc -z localhost 63333; do sleep 1; done
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-notify
HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-notify


@@ -3,4 +3,4 @@
# wait until hydra is listening on port 63333
while ! nc -z localhost 63333; do sleep 1; done
NIX_REMOTE_SYSTEMS="" HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-queue-runner
NIX_REMOTE_SYSTEMS="" HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-queue-runner

justfile Normal file

@@ -0,0 +1,17 @@
setup *OPTIONS:
meson setup build --prefix="$PWD/outputs/out" $mesonFlags {{ OPTIONS }}
build *OPTIONS:
meson compile -C build {{ OPTIONS }}
install *OPTIONS: (build OPTIONS)
meson install -C build
test *OPTIONS:
meson test -C build --print-errorlogs {{ OPTIONS }}
update-dbix:
cd src/sql && ./update-dbix-harness.sh
perlcritic:
perlcritic .

meson.build Normal file

@@ -0,0 +1,36 @@
project('hydra', 'cpp',
version: files('version.txt'),
license: 'GPL-3.0',
default_options: [
'debug=true',
'optimization=2',
'cpp_std=c++20',
],
)
lix_expr_dep = dependency('lix-expr', required: true)
lix_main_dep = dependency('lix-main', required: true)
lix_store_dep = dependency('lix-store', required: true)
# Lix/Nix need extra flags not provided in its pkg-config files.
lix_dep = declare_dependency(
dependencies: [
lix_expr_dep,
lix_main_dep,
lix_store_dep,
],
compile_args: ['-include', 'lix/config.h'],
)
pqxx_dep = dependency('libpqxx', required: true)
prom_cpp_core_dep = dependency('prometheus-cpp-core', required: true)
prom_cpp_pull_dep = dependency('prometheus-cpp-pull', required: true)
mdbook = find_program('mdbook', native: true)
perl = find_program('perl', native: true)
subdir('doc/manual')
subdir('nixos-modules')
subdir('src')
subdir('t')


@@ -229,9 +229,8 @@ in
};
nix.settings = {
trusted-users = [ "hydra-queue-runner" ];
gc-keep-outputs = true;
gc-keep-derivations = true;
extra-trusted-users = [ "hydra" "hydra-queue-runner" "hydra-www" ];
keep-derivations = true;
};
services.hydra-dev.extraConfig =
@@ -340,6 +339,7 @@ in
systemd.services.hydra-queue-runner =
{ wantedBy = [ "multi-user.target" ];
requires = [ "hydra-init.service" ];
wants = [ "network-online.target" ];
after = [ "hydra-init.service" "network.target" "network-online.target" ];
path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
restartTriggers = [ hydraConf ];


@@ -0,0 +1,4 @@
install_data('hydra.nix',
install_dir: get_option('datadir') / 'nix',
rename: ['hydra-module.nix'],
)


@@ -12,7 +12,8 @@
, git
, makeWrapper
, autoreconfHook
, meson
, ninja
, nukeReferences
, pkg-config
, mdbook
@@ -36,6 +37,7 @@
, cacert
, foreman
, just
, glibcLocales
, libressl
, openldap
@@ -48,6 +50,7 @@
, xz
, gnutar
, gnused
, nix-eval-jobs
, rpm
, dpkg
@@ -91,6 +94,7 @@ let
DigestSHA1
EmailMIME
EmailSender
FileCopyRecursive
FileLibMagic
FileSlurper
FileWhich
@@ -138,20 +142,13 @@ stdenv.mkDerivation (finalAttrs: {
src = fileset.toSource {
root = ./.;
fileset = fileset.unions ([
./version.txt
./configure.ac
./Makefile.am
./src
./doc
./nixos-modules/hydra.nix
# These are always needed to appease Automake
./t/Makefile.am
./t/jobs/config.nix.in
./t/jobs/declarative/project.json.in
] ++ lib.optionals finalAttrs.doCheck [
./meson.build
./nixos-modules
./src
./t
./version.txt
./.perlcriticrc
./.yath.rc
]);
};
@@ -159,7 +156,8 @@ stdenv.mkDerivation (finalAttrs: {
nativeBuildInputs = [
makeWrapper
autoreconfHook
meson
ninja
nukeReferences
pkg-config
mdbook
@@ -192,6 +190,9 @@ stdenv.mkDerivation (finalAttrs: {
openldap
postgresql_13
pixz
nix-eval-jobs
perlPackages.PLS
just
];
checkInputs = [
@@ -220,16 +221,23 @@ stdenv.mkDerivation (finalAttrs: {
darcs
gnused
breezy
nix-eval-jobs
] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ]
);
OPENLDAP_ROOT = openldap;
mesonBuildType = "release";
postPatch = ''
patchShebangs .
'';
shellHook = ''
pushd $(git rev-parse --show-toplevel) >/dev/null
PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
PERL5LIB=$(pwd)/src/lib:$PERL5LIB
PATH=$(pwd)/outputs/out/bin:$PATH
PERL5LIB=$(pwd)/src/lib:$(pwd)/t/lib:$PERL5LIB
export HYDRA_HOME="$(pwd)/src/"
mkdir -p .hydra-data
export HYDRA_DATA="$(pwd)/.hydra-data"
@@ -238,14 +246,11 @@ stdenv.mkDerivation (finalAttrs: {
popd >/dev/null
'';
NIX_LDFLAGS = [ "-lpthread" ];
enableParallelBuilding = true;
doCheck = true;
mesonCheckFlags = [ "--verbose" ];
preCheck = ''
patchShebangs .
export LOGNAME=''${LOGNAME:-foo}
# set $HOME for bzr so it can create its trace file
export HOME=$(mktemp -d)


@@ -1,3 +0,0 @@
SUBDIRS = hydra-evaluator hydra-eval-jobs hydra-queue-runner sql script lib root ttf
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)


@@ -1,5 +0,0 @@
bin_PROGRAMS = hydra-eval-jobs
hydra_eval_jobs_SOURCES = hydra-eval-jobs.cc
hydra_eval_jobs_LDADD = $(NIX_LIBS) -llixcmd
hydra_eval_jobs_CXXFLAGS = $(NIX_CFLAGS) -I ../libhydra


@@ -1,577 +0,0 @@
#include <iostream>
#include <thread>
#include <optional>
#include <unordered_map>
#include "shared.hh"
#include "store-api.hh"
#include "eval.hh"
#include "eval-inline.hh"
#include "eval-settings.hh"
#include "signals.hh"
#include "terminal.hh"
#include "get-drvs.hh"
#include "globals.hh"
#include "lix/libcmd/common-eval-args.hh"
#include "flake/flakeref.hh"
#include "flake/flake.hh"
#include "attr-path.hh"
#include "derivations.hh"
#include "local-fs-store.hh"
#include "hydra-config.hh"
#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>
#include <nlohmann/json.hpp>
void check_pid_status_nonblocking(pid_t check_pid)
{
// Only check 'initialized' and known PID's
if (check_pid <= 0) { return; }
int wstatus = 0;
pid_t pid = waitpid(check_pid, &wstatus, WNOHANG);
// -1 = failure, WNOHANG: 0 = no change
if (pid <= 0) { return; }
std::cerr << "child process (" << pid << ") ";
if (WIFEXITED(wstatus)) {
std::cerr << "exited with status=" << WEXITSTATUS(wstatus) << std::endl;
} else if (WIFSIGNALED(wstatus)) {
std::cerr << "killed by signal=" << WTERMSIG(wstatus) << std::endl;
} else if (WIFSTOPPED(wstatus)) {
std::cerr << "stopped by signal=" << WSTOPSIG(wstatus) << std::endl;
} else if (WIFCONTINUED(wstatus)) {
std::cerr << "continued" << std::endl;
}
}
using namespace nix;
static Path gcRootsDir;
static size_t maxMemorySize;
struct MyArgs : MixEvalArgs, MixCommonArgs, RootArgs
{
Path releaseExpr;
bool flake = false;
bool dryRun = false;
MyArgs() : MixCommonArgs("hydra-eval-jobs")
{
addFlag({
.longName = "gc-roots-dir",
.description = "garbage collector roots directory",
.labels = {"path"},
.handler = {&gcRootsDir}
});
addFlag({
.longName = "dry-run",
.description = "don't create store derivations",
.handler = {&dryRun, true}
});
addFlag({
.longName = "flake",
.description = "build a flake",
.handler = {&flake, true}
});
expectArg("expr", &releaseExpr);
}
};
static MyArgs myArgs;
static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std::string & name, const std::string & subAttribute)
{
Strings res;
std::function<void(Value & v)> rec;
rec = [&](Value & v) {
state.forceValue(v, noPos);
if (v.type() == nString)
res.push_back(v.string.s);
else if (v.isList())
for (unsigned int n = 0; n < v.listSize(); ++n)
rec(*v.listElems()[n]);
else if (v.type() == nAttrs) {
auto a = v.attrs->find(state.symbols.create(subAttribute));
if (a != v.attrs->end())
res.push_back(std::string(state.forceString(*a->value, a->pos, "while evaluating meta attributes")));
}
};
Value * v = drv.queryMeta(name);
if (v) rec(*v);
return concatStringsSep(", ", res);
}
static void worker(
EvalState & state,
Bindings & autoArgs,
AutoCloseFD & to,
AutoCloseFD & from)
{
Value vTop;
if (myArgs.flake) {
using namespace flake;
auto flakeRef = parseFlakeRef(myArgs.releaseExpr);
auto vFlake = state.allocValue();
auto lockedFlake = lockFlake(state, flakeRef,
LockFlags {
.updateLockFile = false,
.useRegistries = false,
.allowUnlocked = false,
});
callFlake(state, lockedFlake, *vFlake);
auto vOutputs = vFlake->attrs->get(state.symbols.create("outputs"))->value;
state.forceValue(*vOutputs, noPos);
auto aHydraJobs = vOutputs->attrs->get(state.symbols.create("hydraJobs"));
if (!aHydraJobs)
aHydraJobs = vOutputs->attrs->get(state.symbols.create("checks"));
if (!aHydraJobs)
throw Error("flake '%s' does not provide any Hydra jobs or checks", flakeRef);
vTop = *aHydraJobs->value;
} else {
state.evalFile(lookupFileArg(state, myArgs.releaseExpr), vTop);
}
auto vRoot = state.allocValue();
state.autoCallFunction(autoArgs, vTop, *vRoot);
while (true) {
/* Wait for the master to send us a job name. */
writeLine(to.get(), "next");
auto s = readLine(from.get());
if (s == "exit") break;
if (!s.starts_with("do ")) abort();
std::string attrPath(s, 3);
debug("worker process %d at '%s'", getpid(), attrPath);
/* Evaluate it and send info back to the master. */
nlohmann::json reply;
try {
auto vTmp = findAlongAttrPath(state, attrPath, autoArgs, *vRoot).first;
auto v = state.allocValue();
state.autoCallFunction(autoArgs, *vTmp, *v);
if (auto drv = getDerivation(state, *v, false)) {
// CA derivations do not have static output paths, so we
// have to defensively not query output paths in case we
// encounter one.
DrvInfo::Outputs outputs = drv->queryOutputs(
!experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
if (drv->querySystem() == "unknown")
state.error<EvalError>("derivation must have a 'system' attribute").debugThrow();
auto drvPath = state.store->printStorePath(drv->requireDrvPath());
nlohmann::json job;
job["nixName"] = drv->queryName();
job["system"] =drv->querySystem();
job["drvPath"] = drvPath;
job["description"] = drv->queryMetaString("description");
job["license"] = queryMetaStrings(state, *drv, "license", "shortName");
job["homepage"] = drv->queryMetaString("homepage");
job["maintainers"] = queryMetaStrings(state, *drv, "maintainers", "email");
job["schedulingPriority"] = drv->queryMetaInt("schedulingPriority", 100);
job["timeout"] = drv->queryMetaInt("timeout", 36000);
job["maxSilent"] = drv->queryMetaInt("maxSilent", 7200);
job["isChannel"] = drv->queryMetaBool("isHydraChannel", false);
/* If this is an aggregate, then get its constituents. */
auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
if (a && state.forceBool(*a->value, a->pos, "while evaluating the `_hydraAggregate` attribute")) {
auto a = v->attrs->get(state.symbols.create("constituents"));
if (!a)
state.error<EvalError>("derivation must have a constituents attribute").debugThrow();
NixStringContext context;
state.coerceToString(a->pos, *a->value, context, "while evaluating the `constituents` attribute", true, false);
for (auto & c : context)
std::visit(overloaded {
[&](const NixStringContextElem::Built & b) {
job["constituents"].push_back(b.drvPath->to_string(*state.store));
},
[&](const NixStringContextElem::Opaque & o) {
},
[&](const NixStringContextElem::DrvDeep & d) {
},
}, c.raw);
state.forceList(*a->value, a->pos, "while evaluating the `constituents` attribute");
for (unsigned int n = 0; n < a->value->listSize(); ++n) {
auto v = a->value->listElems()[n];
state.forceValue(*v, noPos);
if (v->type() == nString)
job["namedConstituents"].push_back(v->str());
}
}
/* Register the derivation as a GC root. !!! This
registers roots for jobs that we may have already
done. */
auto localStore = state.store.dynamic_pointer_cast<LocalFSStore>();
if (gcRootsDir != "" && localStore) {
Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
if (!pathExists(root))
localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
}
nlohmann::json out;
for (auto & [outputName, optOutputPath] : outputs) {
if (optOutputPath) {
out[outputName] = state.store->printStorePath(*optOutputPath);
} else {
// See the `queryOutputs` call above; we should
// not encounter missing output paths otherwise.
assert(experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
out[outputName] = nullptr;
}
}
job["outputs"] = std::move(out);
reply["job"] = std::move(job);
}
else if (v->type() == nAttrs) {
auto attrs = nlohmann::json::array();
StringSet ss;
for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
std::string name(state.symbols[i->name]);
if (name.find(' ') != std::string::npos) {
printError("skipping job with illegal name '%s'", name);
continue;
}
attrs.push_back(name);
}
reply["attrs"] = std::move(attrs);
}
else if (v->type() == nNull)
;
else state.error<TypeError>("attribute '%s' is %s, which is not supported", attrPath, showType(*v)).debugThrow();
} catch (EvalError & e) {
auto msg = e.msg();
// Transmits the error we got from the previous evaluation
// in the JSON output.
reply["error"] = filterANSIEscapes(msg, true);
// Don't forget to print it into the STDERR log, this is
// what's shown in the Hydra UI.
printError(msg);
}
writeLine(to.get(), reply.dump());
/* If our RSS exceeds the maximum, exit. The master will
start a new process. */
struct rusage r;
getrusage(RUSAGE_SELF, &r);
if ((size_t) r.ru_maxrss > maxMemorySize * 1024) break;
}
writeLine(to.get(), "restart");
}
int main(int argc, char * * argv)
{
/* Prevent undeclared dependencies in the evaluation via
$NIX_PATH. */
unsetenv("NIX_PATH");
return handleExceptions(argv[0], [&]() {
auto config = std::make_unique<HydraConfig>();
auto nrWorkers = config->getIntOption("evaluator_workers", 1);
maxMemorySize = config->getIntOption("evaluator_max_memory_size", 4096);
initNix();
initGC();
myArgs.parseCmdline(argvToStrings(argc, argv));
auto pureEval = config->getBoolOption("evaluator_pure_eval", myArgs.flake);
/* FIXME: The build hook in conjunction with import-from-derivation is causing "unexpected EOF" during eval */
settings.builders = "";
/* Prevent access to paths outside of the Nix search path and
to the environment. */
evalSettings.restrictEval = true;
/* When building a flake, use pure evaluation (no access to
'getEnv', 'currentSystem' etc. */
evalSettings.pureEval = pureEval;
if (myArgs.dryRun) settings.readOnlyMode = true;
if (myArgs.releaseExpr == "") throw UsageError("no expression specified");
if (gcRootsDir == "") printMsg(lvlError, "warning: `--gc-roots-dir' not specified");
struct State
{
std::set<std::string> todo{""};
std::set<std::string> active;
nlohmann::json jobs;
std::exception_ptr exc;
};
std::condition_variable wakeup;
Sync<State> state_;
/* Start a handler thread per worker process. */
auto handler = [&]()
{
Pid pid;
try {
AutoCloseFD from, to;
while (true) {
/* Start a new worker process if necessary. */
if (!pid) {
Pipe toPipe, fromPipe;
toPipe.create();
fromPipe.create();
pid = startProcess(
[&,
to{std::make_shared<AutoCloseFD>(std::move(fromPipe.writeSide))},
from{std::make_shared<AutoCloseFD>(std::move(toPipe.readSide))}
]()
{
try {
EvalState state(myArgs.searchPath, openStore());
Bindings & autoArgs = *myArgs.getAutoArgs(state);
worker(state, autoArgs, *to, *from);
} catch (Error & e) {
nlohmann::json err;
auto msg = e.msg();
err["error"] = filterANSIEscapes(msg, true);
printError(msg);
writeLine(to->get(), err.dump());
// Don't forget to print it into the STDERR log, this is
// what's shown in the Hydra UI.
writeLine(to->get(), "restart");
}
});
from = std::move(fromPipe.readSide);
to = std::move(toPipe.writeSide);
debug("created worker process %d", pid.get());
}
/* Check whether the existing worker process is still there. */
auto s = readLine(from.get());
if (s == "restart") {
pid.wait();
continue;
} else if (s != "next") {
auto json = nlohmann::json::parse(s);
throw Error("worker error: %s", (std::string) json["error"]);
}
/* Wait for a job name to become available. */
std::string attrPath;
while (true) {
checkInterrupt();
auto state(state_.lock());
if ((state->todo.empty() && state->active.empty()) || state->exc) {
writeLine(to.get(), "exit");
return;
}
if (!state->todo.empty()) {
attrPath = *state->todo.begin();
state->todo.erase(state->todo.begin());
state->active.insert(attrPath);
break;
} else
state.wait(wakeup);
}
/* Tell the worker to evaluate it. */
writeLine(to.get(), "do " + attrPath);
/* Wait for the response. */
auto response = nlohmann::json::parse(readLine(from.get()));
/* Handle the response. */
StringSet newAttrs;
if (response.find("job") != response.end()) {
auto state(state_.lock());
state->jobs[attrPath] = response["job"];
}
if (response.find("attrs") != response.end()) {
for (auto & i : response["attrs"]) {
std::string path = i;
if (path.find(".") != std::string::npos){
path = "\"" + path + "\"";
}
auto s = (attrPath.empty() ? "" : attrPath + ".") + (std::string) path;
newAttrs.insert(s);
}
}
if (response.find("error") != response.end()) {
auto state(state_.lock());
state->jobs[attrPath]["error"] = response["error"];
}
/* Add newly discovered job names to the queue. */
{
auto state(state_.lock());
state->active.erase(attrPath);
for (auto & s : newAttrs)
state->todo.insert(s);
wakeup.notify_all();
}
}
} catch (...) {
check_pid_status_nonblocking(pid.release());
auto state(state_.lock());
state->exc = std::current_exception();
wakeup.notify_all();
}
};
std::vector<std::thread> threads;
for (size_t i = 0; i < nrWorkers; i++)
threads.emplace_back(std::thread(handler));
for (auto & thread : threads)
thread.join();
auto state(state_.lock());
if (state->exc)
std::rethrow_exception(state->exc);
/* For aggregate jobs that have named constituents
(i.e. constituents that are a job name rather than a
derivation), look up the referenced job and add it to the
dependencies of the aggregate derivation. */
auto store = openStore();
for (auto i = state->jobs.begin(); i != state->jobs.end(); ++i) {
auto jobName = i.key();
auto & job = i.value();
auto named = job.find("namedConstituents");
if (named == job.end()) continue;
std::unordered_map<std::string, std::string> brokenJobs;
auto getNonBrokenJobOrRecordError = [&brokenJobs, &jobName, &state](
const std::string & childJobName) -> std::optional<nlohmann::json> {
auto childJob = state->jobs.find(childJobName);
if (childJob == state->jobs.end()) {
printError("aggregate job '%s' references non-existent job '%s'", jobName, childJobName);
brokenJobs[childJobName] = "does not exist";
return std::nullopt;
}
if (childJob->find("error") != childJob->end()) {
std::string error = (*childJob)["error"];
printError("aggregate job '%s' references broken job '%s': %s", jobName, childJobName, error);
brokenJobs[childJobName] = error;
return std::nullopt;
}
return *childJob;
};
if (myArgs.dryRun) {
for (std::string jobName2 : *named) {
auto job2 = getNonBrokenJobOrRecordError(jobName2);
if (!job2) {
continue;
}
std::string drvPath2 = (*job2)["drvPath"];
job["constituents"].push_back(drvPath2);
}
} else {
auto drvPath = store->parseStorePath((std::string) job["drvPath"]);
auto drv = store->readDerivation(drvPath);
for (std::string jobName2 : *named) {
auto job2 = getNonBrokenJobOrRecordError(jobName2);
if (!job2) {
continue;
}
auto drvPath2 = store->parseStorePath((std::string) (*job2)["drvPath"]);
auto drv2 = store->readDerivation(drvPath2);
job["constituents"].push_back(store->printStorePath(drvPath2));
drv.inputDrvs.map[drvPath2].value = {drv2.outputs.begin()->first};
}
if (brokenJobs.empty()) {
std::string drvName(drvPath.name());
assert(drvName.ends_with(drvExtension));
drvName.resize(drvName.size() - drvExtension.size());
auto hashModulo = hashDerivationModulo(*store, drv, true);
if (hashModulo.kind != DrvHash::Kind::Regular) continue;
auto h = hashModulo.hashes.find("out");
if (h == hashModulo.hashes.end()) continue;
auto outPath = store->makeOutputPath("out", h->second, drvName);
drv.env["out"] = store->printStorePath(outPath);
drv.outputs.insert_or_assign("out", DerivationOutput::InputAddressed { .path = outPath });
auto newDrvPath = store->printStorePath(writeDerivation(*store, drv));
debug("rewrote aggregate derivation %s -> %s", store->printStorePath(drvPath), newDrvPath);
job["drvPath"] = newDrvPath;
job["outputs"]["out"] = store->printStorePath(outPath);
}
}
job.erase("namedConstituents");
/* Register the derivation as a GC root. !!! This
registers roots for jobs that we may have already
done. */
auto localStore = store.dynamic_pointer_cast<LocalFSStore>();
if (gcRootsDir != "" && localStore) {
auto drvPath = job["drvPath"].get<std::string>();
Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
if (!pathExists(root))
localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
}
if (!brokenJobs.empty()) {
std::stringstream ss;
for (const auto& [jobName, error] : brokenJobs) {
ss << jobName << ": " << error << "\n";
}
job["error"] = ss.str();
}
}
std::cout << state->jobs.dump(2) << "\n";
});
}
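
The handler threads coordinate through one mutex-guarded State (the todo/active sets) and a single condition variable; a handler may only tell its worker to exit when both sets are empty, since an active job can still discover new attribute names. A stripped-down sketch of that queue discipline, with std::mutex standing in for Lix's Sync<T>:

#include <condition_variable>
#include <mutex>
#include <optional>
#include <set>
#include <string>

struct WorkQueue
{
    std::mutex m;
    std::condition_variable wakeup;
    std::set<std::string> todo{""}; // seeded with the root attribute, as above
    std::set<std::string> active;

    // Claim the next attribute path, or nullopt once no work is left anywhere.
    std::optional<std::string> claim()
    {
        std::unique_lock lk(m);
        while (true) {
            if (todo.empty() && active.empty()) return std::nullopt;
            if (!todo.empty()) {
                auto attr = *todo.begin();
                todo.erase(todo.begin());
                active.insert(attr);
                return attr;
            }
            wakeup.wait(lk); // another handler's job may yield new attrs
        }
    }

    // Requeue newly discovered attrs and wake any waiting handlers.
    void finish(const std::string & attr, std::set<std::string> newAttrs)
    {
        {
            std::lock_guard lk(m);
            active.erase(attr);
            todo.merge(newAttrs);
        }
        wakeup.notify_all();
    }
};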

View file

@ -1,5 +0,0 @@
bin_PROGRAMS = hydra-evaluator
hydra_evaluator_SOURCES = hydra-evaluator.cc
hydra_evaluator_LDADD = $(NIX_LIBS) -lpqxx
hydra_evaluator_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations

View file

@ -1,9 +1,9 @@
#include "db.hh"
#include "hydra-config.hh"
#include "logging.hh"
#include "pool.hh"
#include "shared.hh"
#include "signals.hh"
#include "lix/libmain/shared.hh"
#include "lix/libutil/logging.hh"
#include "lix/libutil/pool.hh"
#include "lix/libutil/signals.hh"
#include <algorithm>
#include <thread>
@ -14,11 +14,12 @@
#include <sys/wait.h>
#include <boost/format.hpp>
#include <utility>
using namespace nix;
using boost::format;
typedef std::pair<std::string, std::string> JobsetName;
using JobsetName = std::pair<std::string, std::string>;
class JobsetId {
public:
@ -28,8 +29,8 @@ class JobsetId {
int id;
JobsetId(const std::string & project, const std::string & jobset, int id)
: project{ project }, jobset{ jobset }, id{ id }
JobsetId(std::string project, std::string jobset, int id)
: project{std::move( project )}, jobset{std::move( jobset )}, id{ id }
{
}
@ -41,7 +42,7 @@ class JobsetId {
friend bool operator== (const JobsetId & lhs, const JobsetName & rhs);
friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs);
std::string display() const {
[[nodiscard]] std::string display() const {
return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id);
}
};
@ -88,11 +89,11 @@ struct Evaluator
JobsetId name;
std::optional<EvaluationStyle> evaluation_style;
time_t lastCheckedTime, triggerTime;
int checkInterval;
time_t checkInterval;
Pid pid;
};
typedef std::map<JobsetId, Jobset> Jobsets;
using Jobsets = std::map<JobsetId, Jobset>;
std::optional<JobsetName> evalOne;
@ -138,13 +139,15 @@ struct Evaluator
if (evalOne && name != *evalOne) continue;
auto res = state->jobsets.try_emplace(name, Jobset{name});
auto res = state->jobsets.try_emplace(name, Jobset{.name=name});
auto & jobset = res.first->second;
jobset.lastCheckedTime = row["lastCheckedTime"].as<time_t>(0);
jobset.triggerTime = row["triggerTime"].as<time_t>(notTriggered);
jobset.checkInterval = row["checkInterval"].as<time_t>();
switch (row["jobset_enabled"].as<int>(0)) {
int eval_style = row["jobset_enabled"].as<int>(0);
switch (eval_style) {
case 1:
jobset.evaluation_style = EvaluationStyle::SCHEDULE;
break;
@ -154,6 +157,9 @@ struct Evaluator
case 3:
jobset.evaluation_style = EvaluationStyle::ONE_AT_A_TIME;
break;
default:
// Disabled or unknown. Leave as nullopt.
break;
}
seen.insert(name);
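
The new default: arm documents the fallthrough: values other than the known evaluation styles (including 0 for a disabled jobset) deliberately leave evaluation_style as nullopt. As a sketch (the case elided from the hunk above is omitted here as well):

#include <optional>

enum class EvaluationStyle { SCHEDULE = 1, ONE_AT_A_TIME = 3 };

std::optional<EvaluationStyle> parseEvalStyle(int v)
{
    switch (v) {
        case 1: return EvaluationStyle::SCHEDULE;
        case 3: return EvaluationStyle::ONE_AT_A_TIME;
        // (the middle case is elided from the excerpt above)
        default: return std::nullopt; // disabled or unknown: leave unset
    }
}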
@ -175,7 +181,7 @@ struct Evaluator
void startEval(State & state, Jobset & jobset)
{
time_t now = time(0);
time_t now = time(nullptr);
printInfo("starting evaluation of jobset %s (last checked %d s ago)",
jobset.name.display(),
@ -228,7 +234,7 @@ struct Evaluator
return false;
}
if (jobset.lastCheckedTime + jobset.checkInterval <= time(0)) {
if (jobset.lastCheckedTime + jobset.checkInterval <= time(nullptr)) {
// Time to schedule a fresh evaluation. If the jobset
// is a ONE_AT_A_TIME jobset, ensure the previous jobset
// has no remaining, unfinished work.
@ -301,7 +307,7 @@ struct Evaluator
/* Put jobsets in order of ascending trigger time, last checked
time, and name. */
std::sort(sorted.begin(), sorted.end(),
std::ranges::sort(sorted,
[](const Jobsets::iterator & a, const Jobsets::iterator & b) {
return
a->second.triggerTime != b->second.triggerTime
@ -324,7 +330,7 @@ struct Evaluator
while (true) {
time_t now = time(0);
time_t now = time(nullptr);
std::chrono::seconds sleepTime = std::chrono::seconds::max();
@ -411,7 +417,7 @@ struct Evaluator
printInfo("evaluation of jobset %s %s",
jobset.name.display(), statusToString(status));
auto now = time(0);
auto now = time(nullptr);
jobset.triggerTime = notTriggered;
jobset.lastCheckedTime = now;
@ -507,14 +513,14 @@ int main(int argc, char * * argv)
std::vector<std::string> args;
parseCmdLine(argc, argv, [&](Strings::iterator & arg, const Strings::iterator & end) {
LegacyArgs(argv[0], [&](Strings::iterator & arg, const Strings::iterator & end) {
if (*arg == "--unlock")
unlock = true;
else if (arg->starts_with("-"))
return false;
args.push_back(*arg);
return true;
});
}).parseCmdline(Strings(argv + 1, argv + argc));
if (unlock)

View file

@ -0,0 +1,9 @@
hydra_evaluator = executable('hydra-evaluator',
'hydra-evaluator.cc',
dependencies: [
libhydra_dep,
lix_dep,
pqxx_dep,
],
install: true,
)

View file

@ -1,8 +0,0 @@
bin_PROGRAMS = hydra-queue-runner
hydra_queue_runner_SOURCES = hydra-queue-runner.cc queue-monitor.cc dispatcher.cc \
builder.cc build-result.cc build-remote.cc \
hydra-build-result.hh counter.hh state.hh db.hh \
nar-extractor.cc nar-extractor.hh
hydra_queue_runner_LDADD = $(NIX_LIBS) -lpqxx -lprometheus-cpp-pull -lprometheus-cpp-core
hydra_queue_runner_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations

View file

@ -1,20 +1,22 @@
#include <algorithm>
#include <cmath>
#include <ranges>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include "build-result.hh"
#include "current-process.hh"
#include "path.hh"
#include "serve-protocol.hh"
#include "lix/libstore/build-result.hh"
#include "lix/libstore/path.hh"
#include "lix/libstore/serve-protocol-impl.hh"
#include "lix/libstore/serve-protocol.hh"
#include "lix/libstore/ssh.hh"
#include "lix/libutil/current-process.hh"
#include "lix/libutil/finally.hh"
#include "lix/libutil/url.hh"
#include "state.hh"
#include "serve-protocol.hh"
#include "serve-protocol-impl.hh"
#include "ssh.hh"
#include "finally.hh"
#include "url.hh"
#include "lix/libstore/temporary-dir.hh"
using namespace nix;
@ -41,6 +43,7 @@ static Strings extraStoreArgs(std::string & machine)
}
} catch (BadURL &) {
// We just try to continue with `machine->sshName` here for backwards compat.
printMsg(lvlWarn, "could not parse machine URL '%s', passing through to SSH", machine);
}
return result;
@ -65,8 +68,8 @@ static void openConnection(::Machine::ptr machine, Path tmpDir, int stderrFD, SS
if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
if (machine->sshPublicHostKey != "") {
Path fileName = tmpDir + "/host-key";
auto p = machine->sshName.find("@");
std::string host = p != std::string::npos ? std::string(machine->sshName, p + 1) : machine->sshName;
auto p = sshName.find("@");
std::string host = p != std::string::npos ? std::string(sshName, p + 1) : sshName;
writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
append(argv, {"-oUserKnownHostsFile=" + fileName});
}
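
Pinning the host key this way avoids trust-on-first-use prompts on freshly provisioned build machines: the expected key is written to a per-connection known_hosts file and ssh is pointed at exactly that file. A small sketch of the host extraction and file contents (the helper name is ours):

#include <string>

// Build the single known_hosts line for "user@host" or a bare "host";
// the result is written to a file passed via -oUserKnownHostsFile=<file>.
std::string knownHostsLine(const std::string & sshName, const std::string & pubKey)
{
    auto at = sshName.find('@');
    std::string host = at != std::string::npos ? sshName.substr(at + 1) : sshName;
    return host + " " + pubKey + "\n";
}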
@ -121,7 +124,7 @@ static void copyClosureTo(
the remote host to substitute missing paths. */
// FIXME: substitute output pollutes our build log
conn.to << ServeProto::Command::QueryValidPaths << 1 << useSubstitutes;
ServeProto::write(destStore, conn, closure);
conn.to << ServeProto::write(destStore, conn, closure);
conn.to.flush();
/* Get back the set of paths that are already valid on the remote
@ -133,8 +136,8 @@ static void copyClosureTo(
auto sorted = destStore.topoSortPaths(closure);
StorePathSet missing;
for (auto i = sorted.rbegin(); i != sorted.rend(); ++i)
if (!present.count(*i)) missing.insert(*i);
for (auto & i : std::ranges::reverse_view(sorted))
if (!present.count(i)) missing.insert(i);
printMsg(lvlDebug, "sending %d missing paths", missing.size());
@ -304,12 +307,12 @@ static BuildResult performBuild(
time_t startTime, stopTime;
startTime = time(0);
startTime = time(nullptr);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(0);
stopTime = time(nullptr);
if (!result.startTime) {
// If the builder gave `startTime = 0`, use our measurements
@ -338,10 +341,10 @@ static BuildResult performBuild(
// were known
assert(outputPath);
auto outputHash = outputHashes.at(outputName);
auto drvOutput = DrvOutput { outputHash, outputName };
auto drvOutput = DrvOutput { .drvHash=outputHash, .outputName=outputName };
result.builtOutputs.insert_or_assign(
std::move(outputName),
Realisation { drvOutput, *outputPath });
Realisation { .id=drvOutput, .outPath=*outputPath });
}
}
@ -359,7 +362,7 @@ static std::map<StorePath, ValidPathInfo> queryPathInfos(
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
conn.to << ServeProto::Command::QueryPathInfos;
ServeProto::write(localStore, conn, outputs);
conn.to << ServeProto::write(localStore, conn, outputs);
conn.to.flush();
while (true) {
auto storePathS = readString(conn.from);
@ -368,7 +371,7 @@ static std::map<StorePath, ValidPathInfo> queryPathInfos(
auto references = ServeProto::Serialise<StorePathSet>::read(localStore, conn);
readLongLong(conn.from); // download size
auto narSize = readLongLong(conn.from);
auto narHash = Hash::parseAny(readString(conn.from), htSHA256);
auto narHash = Hash::parseAny(readString(conn.from), HashType::SHA256);
auto ca = ContentAddress::parseOpt(readString(conn.from));
readStrings<StringSet>(conn.from); // sigs
ValidPathInfo info(localStore.parseStorePath(storePathS), narHash);
@ -397,8 +400,7 @@ static void copyPathFromRemote(
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto source2 = sinkToSource([&](Sink & sink)
{
auto coro = [&]() -> WireFormatGenerator {
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path
@ -409,11 +411,11 @@ static void copyPathFromRemote(
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});
co_yield extractNarDataFilter(conn.from, localStore.printStorePath(info.path), narMembers);
};
GeneratorSource source2{coro()};
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
destStore.addToStore(info, source2, NoRepair, NoCheckSigs);
}
static void copyPathsFromRemote(
@ -624,6 +626,7 @@ void State::buildRemote(ref<Store> destStore,
/* Throttle CPU-bound work. Opportunistically skip updating the current
* step, since this requires a DB roundtrip. */
if (!localWorkThrottler.try_acquire()) {
MaintainCount<counter> mc(nrStepsWaitingForDownloadSlot);
updateStep(ssWaitingForLocalSlot);
localWorkThrottler.acquire();
}
@ -635,7 +638,7 @@ void State::buildRemote(ref<Store> destStore,
* copying outputs and we end up building too many things that we
* haven't been able to allow copy slots for. */
assert(reservation.unique());
reservation = 0;
reservation = nullptr;
wakeDispatcher();
StorePathSet outputs;
@ -698,7 +701,7 @@ void State::buildRemote(ref<Store> destStore,
if (info->consecutiveFailures == 0 || info->lastFailure < now - std::chrono::seconds(30)) {
info->consecutiveFailures = std::min(info->consecutiveFailures + 1, (unsigned int) 4);
info->lastFailure = now;
int delta = retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30);
int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30));
printMsg(lvlInfo, "will disable machine %1% for %2%s", machine->sshName, delta);
info->disabledUntil = now + std::chrono::seconds(delta);
}
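
The disable window grows exponentially with consecutive failures and gets up to 30 s of random jitter, so a fleet of machines that fail together does not retry in lockstep. A hedged sketch of the formula; the concrete retryInterval and retryBackoff values are configuration and assumed here:

#include <algorithm>
#include <cmath>
#include <random>

int backoffSeconds(unsigned consecutiveFailures,
                   int retryInterval = 60, double retryBackoff = 2.0)
{
    consecutiveFailures = std::min(consecutiveFailures, 4u); // capped, as above
    static thread_local std::mt19937 rng{std::random_device{}()};
    std::uniform_int_distribution<int> jitter(0, 29);
    return static_cast<int>(
        retryInterval * std::pow(retryBackoff, consecutiveFailures - 1)) + jitter(rng);
}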

View file

@ -1,6 +1,7 @@
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "fs-accessor.hh"
#include "lix/libstore/fs-accessor.hh"
#include "lix/libstore/store-api.hh"
#include "lix/libutil/strings.hh"
#include <regex>
@ -34,11 +35,8 @@ BuildOutput getBuildOutput(
auto outputS = store->printStorePath(output);
if (!narMembers.count(outputS)) {
printInfo("fetching NAR contents of '%s'...", outputS);
auto source = sinkToSource([&](Sink & sink)
{
store->narFromPath(output, sink);
});
extractNarData(*source, outputS, narMembers);
GeneratorSource source{store->narFromPath(output)};
extractNarData(source, outputS, narMembers);
}
}

View file

@ -1,9 +1,10 @@
#include <cmath>
#include "state.hh"
#include "hydra-build-result.hh"
#include "finally.hh"
#include "binary-cache-store.hh"
#include "lix/libstore/binary-cache-store.hh"
#include "lix/libutil/error.hh"
#include "lix/libutil/finally.hh"
#include "state.hh"
using namespace nix;
@ -35,13 +36,22 @@ void State::builder(MachineReservation::ptr reservation)
activeSteps_.lock()->erase(activeStep);
});
auto conn(dbPool.get());
try {
auto destStore = getDestStore();
res = doBuildStep(destStore, reservation, activeStep);
// Might release the reservation.
res = doBuildStep(destStore, reservation, *conn, activeStep);
} catch (pqxx::broken_connection & e) {
printMsg(lvlError, "db lost while building %s on %s: %s (retriable)",
localStore->printStorePath(activeStep->step->drvPath),
reservation ? reservation->machine->sshName : std::string("(no machine)"),
e.what());
conn.markBad();
} catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building %s on %s: %s",
localStore->printStorePath(reservation->step->drvPath),
reservation->machine->sshName,
localStore->printStorePath(activeStep->step->drvPath),
reservation ? reservation->machine->sshName : std::string("(no machine)"),
e.what());
}
}
@ -49,7 +59,7 @@ void State::builder(MachineReservation::ptr reservation)
/* If the machine hasn't been released yet, release and wake up the dispatcher. */
if (reservation) {
assert(reservation.unique());
reservation = 0;
reservation = nullptr;
wakeDispatcher();
}
@ -63,7 +73,7 @@ void State::builder(MachineReservation::ptr reservation)
step_->tries++;
nrRetries++;
if (step_->tries > maxNrRetries) maxNrRetries = step_->tries; // yeah yeah, not atomic
int delta = retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10);
int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10));
printMsg(lvlInfo, "will retry %s after %ss", localStore->printStorePath(step->drvPath), delta);
step_->after = std::chrono::system_clock::now() + std::chrono::seconds(delta);
}
@ -75,6 +85,7 @@ void State::builder(MachineReservation::ptr reservation)
State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr & reservation,
Connection & conn,
std::shared_ptr<ActiveStep> activeStep)
{
auto step(reservation->step);
@ -105,8 +116,6 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic;
auto conn(dbPool.get());
{
std::set<Build::ptr> dependents;
std::set<Step::ptr> steps;
@ -131,7 +140,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
for (auto build2 : dependents) {
if (build2->drvPath == step->drvPath) {
build = build2;
pqxx::work txn(*conn);
pqxx::work txn(conn);
notifyBuildStarted(txn, build->id);
txn.commit();
}
@ -177,16 +186,16 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
unlink(result.logFile.c_str());
}
} catch (...) {
ignoreException();
ignoreExceptionInDestructor();
}
}
});
time_t stepStartTime = result.startTime = time(0);
time_t stepStartTime = result.startTime = time(nullptr);
/* If any of the outputs have previously failed, then don't bother
building again. */
if (checkCachedFailure(step, *conn))
if (checkCachedFailure(step, conn))
result.stepStatus = bsCachedFailure;
else {
@ -194,13 +203,13 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
building. */
{
auto mc = startDbUpdate();
pqxx::work txn(*conn);
pqxx::work txn(conn);
stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->sshName, bsBusy);
txn.commit();
}
auto updateStep = [&](StepState stepState) {
pqxx::work txn(*conn);
pqxx::work txn(conn);
updateBuildStep(txn, buildId, stepNr, stepState);
txn.commit();
};
@ -229,7 +238,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
}
}
time_t stepStopTime = time(0);
time_t stepStopTime = time(nullptr);
if (!result.stopTime) result.stopTime = stepStopTime;
/* For standard failures, we don't care about the error
@ -243,7 +252,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
auto step_(step->state.lock());
if (!step_->jobsets.empty()) {
// FIXME: loss of precision.
time_t charge = (result.stopTime - result.startTime) / step_->jobsets.size();
time_t charge = (result.stopTime - result.startTime) / static_cast<time_t>(step_->jobsets.size());
for (auto & jobset : step_->jobsets)
jobset->addStep(result.startTime, charge);
}
@ -251,7 +260,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Finish the step in the database. */
if (stepNr) {
pqxx::work txn(*conn);
pqxx::work txn(conn);
finishBuildStep(txn, result, buildId, stepNr, machine->sshName);
txn.commit();
}
@ -327,7 +336,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto mc = startDbUpdate();
pqxx::work txn(*conn);
pqxx::work txn(conn);
for (auto & b : direct) {
printInfo("marking build %1% as succeeded", b->id);
@ -355,7 +364,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Send notification about the builds that have this step as
the top-level. */
{
pqxx::work txn(*conn);
pqxx::work txn(conn);
for (auto id : buildIDs)
notifyBuildFinished(txn, id, {});
txn.commit();
@ -384,7 +393,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
}
} else
failStep(*conn, step, buildId, result, machine, stepFinished);
failStep(conn, step, buildId, result, machine, stepFinished);
// FIXME: keep stats about aborted steps?
nrStepsDone++;
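
Threading the pooled Connection through doBuildStep() (rather than letting it open its own) means pqxx::broken_connection can be caught in one place, the pooled handle marked bad, and the step treated as retriable, while any other exception is logged as a real failure. A compressed sketch of that split:

#include <pqxx/pqxx>
#include <iostream>

void runStep(pqxx::connection & conn)
{
    try {
        pqxx::work txn(conn); // transactions reuse the caller's connection
        (void) txn.exec1("select 1"); // round-trip to exercise the connection
        txn.commit();
    } catch (pqxx::broken_connection & e) {
        std::cerr << "db lost (retriable): " << e.what() << "\n";
        throw; // caller marks the pooled connection bad so it isn't reused
    } catch (std::exception & e) {
        std::cerr << "uncaught exception in step: " << e.what() << "\n";
    }
}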

View file

@ -46,7 +46,7 @@ void State::dispatcher()
auto t_after_work = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();
/* Sleep until we're woken up (either because a runnable build
@ -63,7 +63,7 @@ void State::dispatcher()
auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
} catch (std::exception & e) {
printError("dispatcher: %s", e.what());
@ -190,7 +190,7 @@ system_time State::doDispatch()
}
}
sort(runnableSorted.begin(), runnableSorted.end(),
std::ranges::sort(runnableSorted,
[](const StepInfo & a, const StepInfo & b)
{
return
@ -240,11 +240,11 @@ system_time State::doDispatch()
- Then by speed factor.
- Finally by load. */
sort(machinesSorted.begin(), machinesSorted.end(),
std::ranges::sort(machinesSorted,
[](const MachineInfo & a, const MachineInfo & b) -> bool
{
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat);
float ta = std::round(static_cast<float>(a.currentJobs) / a.machine->speedFactorFloat);
float tb = std::round(static_cast<float>(b.currentJobs) / b.machine->speedFactorFloat);
return
ta != tb ? ta < tb :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :
@ -345,7 +345,7 @@ void State::abortUnsupported()
auto machines2 = *machines.lock();
system_time now = std::chrono::system_clock::now();
auto now2 = time(0);
auto now2 = time(nullptr);
std::unordered_set<Step::ptr> aborted;
@ -436,7 +436,7 @@ void Jobset::addStep(time_t startTime, time_t duration)
void Jobset::pruneSteps()
{
time_t now = time(0);
time_t now = time(nullptr);
auto steps_(steps.lock());
while (!steps_->empty()) {
auto i = steps_->begin();
@ -464,7 +464,7 @@ State::MachineReservation::~MachineReservation()
auto prev = machine->state->currentJobs--;
assert(prev);
if (prev == 1)
machine->state->idleSince = time(0);
machine->state->idleSince = time(nullptr);
{
auto machineTypes_(state.machineTypes.lock());

View file

@ -2,9 +2,9 @@
#include <memory>
#include "hash.hh"
#include "derivations.hh"
#include "store-api.hh"
#include "lix/libstore/derivations.hh"
#include "lix/libstore/store-api.hh"
#include "lix/libutil/hash.hh"
#include "nar-extractor.hh"
struct BuildProduct
@ -14,7 +14,7 @@ struct BuildProduct
bool isRegular = false;
std::optional<nix::Hash> sha256hash;
std::optional<off_t> fileSize;
BuildProduct() { }
BuildProduct() = default;
};
struct BuildMetric

View file

@ -11,15 +11,16 @@
#include <nlohmann/json.hpp>
#include "signals.hh"
#include "state.hh"
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "lix/libstore/store-api.hh"
#include "lix/libutil/signals.hh"
#include "state.hh"
#include "globals.hh"
#include "hydra-config.hh"
#include "s3-binary-cache-store.hh"
#include "shared.hh"
#include "lix/libmain/shared.hh"
#include "lix/libstore/globals.hh"
#include "lix/libstore/s3-binary-cache-store.hh"
#include "lix/libutil/args.hh"
using namespace nix;
using nlohmann::json;
@ -105,7 +106,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2)))
, localWorkThrottler(static_cast<ptrdiff_t>(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2))))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
@ -138,7 +139,7 @@ nix::MaintainCount<counter> State::startDbUpdate()
{
if (nrActiveDbUpdates > 6)
printError("warning: %d concurrent database updates; PostgreSQL may be stalled", nrActiveDbUpdates.load());
return MaintainCount<counter>(nrActiveDbUpdates);
return {nrActiveDbUpdates};
}
@ -171,9 +172,9 @@ void State::parseMachines(const std::string & contents)
for (auto & f : mandatoryFeatures)
supportedFeatures.insert(f);
using MaxJobs = std::remove_const<decltype(nix::Machine::maxJobs)>::type;
using MaxJobs = std::remove_const_t<decltype(nix::Machine::maxJobs)>;
auto machine = std::make_shared<::Machine>(nix::Machine {
auto machine = std::make_shared<::Machine>(::Machine {{
// `storeUri`, not yet used
"",
// `systemTypes`, not yet used
@ -194,11 +195,11 @@ void State::parseMachines(const std::string & contents)
tokens[7] != "" && tokens[7] != "-"
? base64Decode(tokens[7])
: "",
});
}});
machine->sshName = tokens[0];
machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
machine->speedFactorFloat = atof(tokens[4].c_str());
machine->speedFactorFloat = static_cast<float>(atof(tokens[4].c_str()));
/* Re-use the State object of the previous machine with the
same name. */
@ -412,7 +413,7 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,
}
int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
unsigned int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const StorePath & storePath)
{
restart:
@ -534,7 +535,7 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
product.type,
product.subtype,
product.fileSize ? std::make_optional(*product.fileSize) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(Base16, false)) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(Base::Base16, false)) : std::nullopt,
product.path,
product.name,
product.defaultPath);
@ -594,7 +595,7 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()
createDirs(dirOf(lockPath));
auto lock = std::make_shared<PathLocks>();
if (!lock->lockPaths(PathSet({lockPath}), "", false)) return 0;
if (!lock->lockPaths(PathSet({lockPath}), "", false)) return nullptr;
return lock;
}
@ -602,10 +603,10 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()
void State::dumpStatus(Connection & conn)
{
time_t now = time(0);
time_t now = time(nullptr);
json statusJson = {
{"status", "up"},
{"time", time(0)},
{"time", time(nullptr)},
{"uptime", now - startedAt},
{"pid", getpid()},
@ -613,6 +614,7 @@ void State::dumpStatus(Connection & conn)
{"nrActiveSteps", activeSteps_.lock()->size()},
{"nrStepsBuilding", nrStepsBuilding.load()},
{"nrStepsCopyingTo", nrStepsCopyingTo.load()},
{"nrStepsWaitingForDownloadSlot", nrStepsWaitingForDownloadSlot.load()},
{"nrStepsCopyingFrom", nrStepsCopyingFrom.load()},
{"nrStepsWaiting", nrStepsWaiting.load()},
{"nrUnsupportedSteps", nrUnsupportedSteps.load()},
@ -620,7 +622,7 @@ void State::dumpStatus(Connection & conn)
{"bytesReceived", bytesReceived.load()},
{"nrBuildsRead", nrBuildsRead.load()},
{"buildReadTimeMs", buildReadTimeMs.load()},
{"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / nrBuildsRead},
{"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / (float) nrBuildsRead},
{"nrBuildsDone", nrBuildsDone.load()},
{"nrStepsStarted", nrStepsStarted.load()},
{"nrStepsDone", nrStepsDone.load()},
@ -629,7 +631,7 @@ void State::dumpStatus(Connection & conn)
{"nrQueueWakeups", nrQueueWakeups.load()},
{"nrDispatcherWakeups", nrDispatcherWakeups.load()},
{"dispatchTimeMs", dispatchTimeMs.load()},
{"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / nrDispatcherWakeups},
{"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / (float) nrDispatcherWakeups},
{"nrDbConnections", dbPool.count()},
{"nrActiveDbUpdates", nrActiveDbUpdates.load()},
};
@ -649,8 +651,8 @@ void State::dumpStatus(Connection & conn)
if (nrStepsDone) {
statusJson["totalStepTime"] = totalStepTime.load();
statusJson["totalStepBuildTime"] = totalStepBuildTime.load();
statusJson["avgStepTime"] = (float) totalStepTime / nrStepsDone;
statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / nrStepsDone;
statusJson["avgStepTime"] = (float) totalStepTime / (float) nrStepsDone;
statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / (float) nrStepsDone;
}
{
@ -677,8 +679,8 @@ void State::dumpStatus(Connection & conn)
if (m->state->nrStepsDone) {
machine["totalStepTime"] = s->totalStepTime.load();
machine["totalStepBuildTime"] = s->totalStepBuildTime.load();
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
machine["avgStepTime"] = (float) s->totalStepTime / (float) s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / (float) s->nrStepsDone;
}
statusJson["machines"][m->sshName] = machine;
}
@ -706,7 +708,7 @@ void State::dumpStatus(Connection & conn)
};
if (i.second.runnable > 0)
machineTypeJson["waitTime"] = i.second.waitTime.count() +
i.second.runnable * (time(0) - lastDispatcherCheck);
i.second.runnable * (time(nullptr) - lastDispatcherCheck);
if (i.second.running == 0)
machineTypeJson["lastActive"] = std::chrono::system_clock::to_time_t(i.second.lastActive);
}
@ -732,11 +734,11 @@ void State::dumpStatus(Connection & conn)
{"narWriteCompressionTimeMs", stats.narWriteCompressionTimeMs.load()},
{"narCompressionSavings",
stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / (double) stats.narWriteBytes
: 0.0},
{"narCompressionSpeed", // MiB/s
stats.narWriteCompressionTimeMs
? (double) stats.narWriteBytes / stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) stats.narWriteBytes / (double) stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
};
@ -749,20 +751,20 @@ void State::dumpStatus(Connection & conn)
{"putTimeMs", s3Stats.putTimeMs.load()},
{"putSpeed",
s3Stats.putTimeMs
? (double) s3Stats.putBytes / s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) s3Stats.putBytes / (double) s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"get", s3Stats.get.load()},
{"getBytes", s3Stats.getBytes.load()},
{"getTimeMs", s3Stats.getTimeMs.load()},
{"getSpeed",
s3Stats.getTimeMs
? (double) s3Stats.getBytes / s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) s3Stats.getBytes / (double) s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"head", s3Stats.head.load()},
{"costDollarApprox",
(s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ s3Stats.put / 1000.0 * 0.005 +
+ s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
(double) (s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ (double) s3Stats.put / 1000.0 * 0.005 +
+ (double) s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
};
}
}
@ -848,7 +850,7 @@ void State::run(BuildID buildOne)
/* Can't be bothered to shut down cleanly. Goodbye! */
auto callback = createInterruptCallback([&]() { std::_Exit(0); });
startedAt = time(0);
startedAt = time(nullptr);
this->buildOne = buildOne;
auto lock = acquireGlobalLock();
@ -867,7 +869,7 @@ void State::run(BuildID buildOne)
<< metricsAddr << "/metrics (port " << exposerPort << ")"
<< std::endl;
Store::Params localParams;
StoreConfig::Params localParams;
localParams["max-connections"] = "16";
localParams["max-connection-age"] = "600";
localStore = openStore(getEnv("NIX_REMOTE").value_or(""), localParams);
@ -972,7 +974,7 @@ int main(int argc, char * * argv)
BuildID buildOne = 0;
std::optional<std::string> metricsAddrOpt = std::nullopt;
parseCmdLine(argc, argv, [&](Strings::iterator & arg, const Strings::iterator & end) {
LegacyArgs(argv[0], [&](Strings::iterator & arg, const Strings::iterator & end) {
if (*arg == "--unlock")
unlock = true;
else if (*arg == "--status")
@ -987,7 +989,7 @@ int main(int argc, char * * argv)
} else
return false;
return true;
});
}).parseCmdline(Strings(argv + 1, argv + argc));
settings.verboseBuild = true;
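
The repeated (float) x / (float) y edits in dumpStatus() don't change any results (the left operand was already cast, so the divisions were floating-point all along), but they make each integer-to-float conversion explicit, which keeps conversion warnings quiet. The guarded-average shape used throughout, as a sketch:

double avgMs(unsigned long totalMs, unsigned long count)
{
    // Guard the zero case, then divide with both operands converted,
    // mirroring the buildReadTimeAvgMs / dispatchTimeAvgMs computations above.
    return count == 0 ? 0.0
                      : static_cast<double>(totalMs) / static_cast<double>(count);
}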

View file

@ -0,0 +1,22 @@
srcs = files(
'builder.cc',
'build-remote.cc',
'build-result.cc',
'dispatcher.cc',
'hydra-queue-runner.cc',
'nar-extractor.cc',
'queue-monitor.cc',
)
hydra_queue_runner = executable('hydra-queue-runner',
'hydra-queue-runner.cc',
srcs,
dependencies: [
libhydra_dep,
lix_dep,
pqxx_dep,
prom_cpp_core_dep,
prom_cpp_pull_dep,
],
install: true,
)

View file

@ -1,13 +1,43 @@
#include "nar-extractor.hh"
#include "archive.hh"
#include "lix/libutil/archive.hh"
#include <unordered_set>
#include <utility>
using namespace nix;
struct Extractor : ParseSink
struct Extractor : NARParseVisitor
{
class MyFileHandle : public FileHandle
{
NarMemberData & memberData;
uint64_t expectedSize;
std::unique_ptr<HashSink> hashSink;
public:
MyFileHandle(NarMemberData & memberData, uint64_t size) : memberData(memberData), expectedSize(size)
{
hashSink = std::make_unique<HashSink>(HashType::SHA256);
}
void receiveContents(std::string_view data) override
{
*memberData.fileSize += data.size();
(*hashSink)(data);
if (memberData.contents) {
memberData.contents->append(data);
}
assert(memberData.fileSize <= expectedSize);
if (memberData.fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(memberData.fileSize == len);
memberData.sha256 = hash;
hashSink.reset();
}
}
};
std::unordered_set<Path> filesToKeep {
"/nix-support/hydra-build-products",
"/nix-support/hydra-release-name",
@ -15,11 +45,10 @@ struct Extractor : ParseSink
};
NarMemberDatas & members;
NarMemberData * curMember = nullptr;
Path prefix;
Extractor(NarMemberDatas & members, const Path & prefix)
: members(members), prefix(prefix)
Extractor(NarMemberDatas & members, Path prefix)
: members(members), prefix(std::move(prefix))
{ }
void createDirectory(const Path & path) override
@ -27,41 +56,15 @@ struct Extractor : ParseSink
members.insert_or_assign(prefix + path, NarMemberData { .type = FSAccessor::Type::tDirectory });
}
void createRegularFile(const Path & path) override
std::unique_ptr<FileHandle> createRegularFile(const Path & path, uint64_t size, bool executable) override
{
curMember = &members.insert_or_assign(prefix + path, NarMemberData {
auto memberData = &members.insert_or_assign(prefix + path, NarMemberData {
.type = FSAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second;
}
std::optional<uint64_t> expectedSize;
std::unique_ptr<HashSink> hashSink;
void preallocateContents(uint64_t size) override
{
expectedSize = size;
hashSink = std::make_unique<HashSink>(htSHA256);
}
void receiveContents(std::string_view data) override
{
assert(expectedSize);
assert(curMember);
assert(hashSink);
*curMember->fileSize += data.size();
(*hashSink)(data);
if (curMember->contents) {
curMember->contents->append(data);
}
assert(curMember->fileSize <= expectedSize);
if (curMember->fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(curMember->fileSize == len);
curMember->sha256 = hash;
hashSink.reset();
}
return std::make_unique<MyFileHandle>(*memberData, size);
}
void createSymlink(const Path & path, const std::string & target) override
@ -76,7 +79,19 @@ void extractNarData(
const Path & prefix,
NarMemberDatas & members)
{
Extractor extractor(members, prefix);
parseDump(extractor, source);
// Note: this point may not be reached if we're in a coroutine.
auto parser = extractNarDataFilter(source, prefix, members);
while (parser.next()) {
// ignore raw data
}
}
nix::WireFormatGenerator extractNarDataFilter(
Source & source,
const Path & prefix,
NarMemberDatas & members)
{
return [](Source & source, const Path & prefix, NarMemberDatas & members) -> WireFormatGenerator {
Extractor extractor(members, prefix);
co_yield parseAndCopyDump(extractor, source);
}(source, prefix, members);
}
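
Note the shape of extractNarDataFilter: the parameters are passed into an immediately invoked coroutine lambda rather than captured. A coroutine lambda's captures live in the lambda object, which is gone by the time the coroutine resumes after its first suspension, so reference captures would dangle; parameters, by contrast, are copied into the coroutine frame. A minimal generator type to make the pattern concrete (Lix's WireFormatGenerator plays this role above):

#include <coroutine>
#include <utility>

template<typename T>
struct Generator
{
    struct promise_type
    {
        T value;
        Generator get_return_object()
        {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T v) { value = std::move(v); return {}; }
        void return_void() { }
        void unhandled_exception() { throw; }
    };

    std::coroutine_handle<promise_type> h;
    explicit Generator(std::coroutine_handle<promise_type> h) : h(h) { }
    Generator(Generator && o) noexcept : h(std::exchange(o.h, {})) { }
    ~Generator() { if (h) h.destroy(); }

    bool next() { h.resume(); return !h.done(); } // as in extractNarData()'s loop
    T & value() { return h.promise().value; }
};

// Parameters (not captures) are copied into the coroutine frame:
Generator<int> countTo(int n)
{
    for (int i = 0; i < n; i++)
        co_yield i;
}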

View file

@ -1,9 +1,9 @@
#pragma once
#include "fs-accessor.hh"
#include "types.hh"
#include "serialise.hh"
#include "hash.hh"
#include "lix/libstore/fs-accessor.hh"
#include "lix/libutil/hash.hh"
#include "lix/libutil/serialise.hh"
#include "lix/libutil/types.hh"
struct NarMemberData
{
@ -13,7 +13,7 @@ struct NarMemberData
std::optional<nix::Hash> sha256;
};
typedef std::map<nix::Path, NarMemberData> NarMemberDatas;
using NarMemberDatas = std::map<nix::Path, NarMemberData>;
/* Read a NAR from a source and get to some info about every file
inside the NAR. */
@ -21,3 +21,8 @@ void extractNarData(
nix::Source & source,
const nix::Path & prefix,
NarMemberDatas & members);
nix::WireFormatGenerator extractNarDataFilter(
nix::Source & source,
const nix::Path & prefix,
NarMemberDatas & members);

View file

@ -1,10 +1,11 @@
#include "state.hh"
#include "hydra-build-result.hh"
#include "globals.hh"
#include "thread-pool.hh"
#include "lix/libstore/globals.hh"
#include "lix/libutil/thread-pool.hh"
#include "state.hh"
#include <cstring>
#include <signal.h>
#include <utility>
#include <csignal>
using namespace nix;
@ -52,7 +53,7 @@ void State::queueMonitorLoop(Connection & conn)
auto t_after_work = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_running.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));
/* Sleep until we get notification from the database about an
event. */
@ -79,7 +80,7 @@ void State::queueMonitorLoop(Connection & conn)
auto t_after_sleep = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_waiting.Increment(
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
}
exit(0);
@ -88,7 +89,7 @@ void State::queueMonitorLoop(Connection & conn)
struct PreviousFailure : public std::exception {
Step::ptr step;
PreviousFailure(Step::ptr step) : step(step) { }
PreviousFailure(Step::ptr step) : step(std::move(step)) { }
};
@ -117,7 +118,7 @@ bool State::getQueuedBuilds(Connection & conn,
for (auto const & row : res) {
auto builds_(builds.lock());
BuildID id = row["id"].as<BuildID>();
auto id = row["id"].as<BuildID>();
if (buildOne && id != buildOne) continue;
if (builds_->count(id)) continue;
@ -137,7 +138,7 @@ bool State::getQueuedBuilds(Connection & conn,
newIDs.push_back(id);
newBuildsByID[id] = build;
newBuildsByPath.emplace(std::make_pair(build->drvPath, id));
newBuildsByPath.emplace(build->drvPath, id);
}
}
@ -162,7 +163,7 @@ bool State::getQueuedBuilds(Connection & conn,
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3 where id = $1 and finished = 0",
build->id,
(int) bsAborted,
time(0));
time(nullptr));
txn.commit();
build->finishedInDB = true;
nrBuildsDone++;
@ -176,7 +177,7 @@ bool State::getQueuedBuilds(Connection & conn,
/* Create steps for this derivation and its dependencies. */
try {
step = createStep(destStore, conn, build, build->drvPath,
build, 0, finishedDrvs, newSteps, newRunnable);
build, nullptr, finishedDrvs, newSteps, newRunnable);
} catch (PreviousFailure & ex) {
/* Some step previously failed, so mark the build as
@ -221,7 +222,7 @@ bool State::getQueuedBuilds(Connection & conn,
"where id = $1 and finished = 0",
build->id,
(int) (ex.step->drvPath == build->drvPath ? bsFailed : bsDepFailed),
time(0));
time(nullptr));
notifyBuildFinished(txn, build->id, {});
txn.commit();
build->finishedInDB = true;
@ -254,7 +255,7 @@ bool State::getQueuedBuilds(Connection & conn,
{
auto mc = startDbUpdate();
pqxx::work txn(conn);
time_t now = time(0);
time_t now = time(nullptr);
if (!buildOneDone && build->id == buildOne) buildOneDone = true;
printMsg(lvlInfo, "marking build %1% as succeeded (cached)", build->id);
markSucceededBuild(txn, build, res, true, now, now);
@ -355,7 +356,7 @@ void State::processQueueChange(Connection & conn)
pqxx::work txn(conn);
auto res = txn.exec("select id, globalPriority from Builds where finished = 0");
for (auto const & row : res)
currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<BuildID>();
currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<int>();
}
{
@ -410,7 +411,7 @@ std::map<DrvOutput, std::optional<StorePath>> State::getMissingRemotePaths(
const std::map<DrvOutput, std::optional<StorePath>> & paths)
{
Sync<std::map<DrvOutput, std::optional<StorePath>>> missing_;
ThreadPool tp;
ThreadPool tp("hydra-getMissingRemotePaths");
for (auto & [output, maybeOutputPath] : paths) {
if (!maybeOutputPath) {
@ -438,7 +439,7 @@ Step::ptr State::createStep(ref<Store> destStore,
Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
std::set<Step::ptr> & newSteps, std::set<Step::ptr> & newRunnable)
{
if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return 0;
if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return nullptr;
/* Check if the requested step already exists. If not, create a
new step. In any case, make the step reachable from
@ -516,7 +517,7 @@ Step::ptr State::createStep(ref<Store> destStore,
std::map<DrvOutput, std::optional<StorePath>> paths;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
paths.insert({{outputHash, outputName}, maybeOutputPath});
paths.insert({{.drvHash=outputHash, .outputName=outputName}, maybeOutputPath});
}
auto missing = getMissingRemotePaths(destStore, paths);
@ -560,7 +561,7 @@ Step::ptr State::createStep(ref<Store> destStore,
auto & path = *pathOpt;
try {
time_t startTime = time(0);
time_t startTime = time(nullptr);
if (localStore->isValidPath(path))
printInfo("copying output %1% of %2% from local store",
@ -578,7 +579,7 @@ Step::ptr State::createStep(ref<Store> destStore,
StorePathSet { path },
NoRepair, CheckSigs, NoSubstitute);
time_t stopTime = time(0);
time_t stopTime = time(nullptr);
{
auto mc = startDbUpdate();
@ -602,7 +603,7 @@ Step::ptr State::createStep(ref<Store> destStore,
// FIXME: check whether all outputs are in the binary cache.
if (valid) {
finishedDrvs.insert(drvPath);
return 0;
return nullptr;
}
/* No, we need to build. */
@ -610,7 +611,7 @@ Step::ptr State::createStep(ref<Store> destStore,
/* Create steps for the dependencies. */
for (auto & i : step->drv->inputDrvs.map) {
auto dep = createStep(destStore, conn, build, i.first, 0, step, finishedDrvs, newSteps, newRunnable);
auto dep = createStep(destStore, conn, build, i.first, nullptr, step, finishedDrvs, newSteps, newRunnable);
if (dep) {
auto step_(step->state.lock());
step_->deps.insert(dep);
@ -658,11 +659,11 @@ Jobset::ptr State::createJobset(pqxx::work & txn,
auto res2 = txn.exec_params
("select s.startTime, s.stopTime from BuildSteps s join Builds b on build = id "
"where s.startTime is not null and s.stopTime > $1 and jobset_id = $2",
time(0) - Jobset::schedulingWindow * 10,
time(nullptr) - Jobset::schedulingWindow * 10,
jobsetID);
for (auto const & row : res2) {
time_t startTime = row["startTime"].as<time_t>();
time_t stopTime = row["stopTime"].as<time_t>();
auto startTime = row["startTime"].as<time_t>();
auto stopTime = row["stopTime"].as<time_t>();
jobset->addStep(startTime, stopTime - startTime);
}
@ -702,7 +703,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
"where finished = 1 and (buildStatus = 0 or buildStatus = 6) and path = $1",
localStore->printStorePath(output));
if (r.empty()) continue;
BuildID id = r[0][0].as<BuildID>();
auto id = r[0][0].as<BuildID>();
printInfo("reusing build %d", id);
@ -727,7 +728,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
product.fileSize = row[2].as<off_t>();
}
if (!row[3].is_null())
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), htSHA256);
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), HashType::SHA256);
if (!row[4].is_null())
product.path = row[4].as<std::string>();
product.name = row[5].as<std::string>();
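
getMissingRemotePaths() fans the per-output isValidPath checks out across a thread pool and collects the misses under a lock; a serial round-trip per output would dominate queue-monitor latency on large closures. A simplified sketch, with std::thread and std::mutex standing in for Lix's ThreadPool and Sync:

#include <functional>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

std::vector<std::string> findMissing(
    const std::vector<std::string> & paths,
    const std::function<bool(const std::string &)> & existsRemotely)
{
    std::mutex m;
    std::vector<std::string> missing;
    std::vector<std::thread> workers;
    workers.reserve(paths.size());
    for (auto & p : paths)
        workers.emplace_back([&, p] { // one thread per path, for brevity
            if (!existsRemotely(p)) {
                std::lock_guard lk(m);
                missing.push_back(p);
            }
        });
    for (auto & t : workers)
        t.join();
    return missing;
}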

View file

@ -8,6 +8,7 @@
#include <queue>
#include <regex>
#include <semaphore>
#include <utility>
#include <prometheus/counter.h>
#include <prometheus/gauge.h>
@ -15,27 +16,27 @@
#include "db.hh"
#include "parsed-derivations.hh"
#include "pathlocks.hh"
#include "pool.hh"
#include "build-result.hh"
#include "store-api.hh"
#include "sync.hh"
#include "lix/libstore/build-result.hh"
#include "lix/libstore/machines.hh"
#include "lix/libstore/parsed-derivations.hh"
#include "lix/libstore/pathlocks.hh"
#include "lix/libstore/serve-protocol.hh"
#include "lix/libstore/store-api.hh"
#include "lix/libutil/pool.hh"
#include "lix/libutil/sync.hh"
#include "nar-extractor.hh"
#include "serve-protocol.hh"
#include "machines.hh"
typedef unsigned int BuildID;
using BuildID = unsigned int;
typedef unsigned int JobsetID;
using JobsetID = unsigned int;
typedef std::chrono::time_point<std::chrono::system_clock> system_time;
using system_time = std::chrono::time_point<std::chrono::system_clock>;
typedef std::atomic<unsigned long> counter;
using counter = std::atomic<unsigned long>;
typedef enum {
enum BuildStatus {
bsSuccess = 0,
bsFailed = 1,
bsDepFailed = 2, // builds only
@ -49,10 +50,10 @@ typedef enum {
bsNarSizeLimitExceeded = 11,
bsNotDeterministic = 12,
bsBusy = 100, // not stored
} BuildStatus;
};
typedef enum {
enum StepState {
ssPreparing = 1,
ssConnecting = 10,
ssSendingInputs = 20,
@ -60,7 +61,7 @@ typedef enum {
ssWaitingForLocalSlot = 35,
ssReceivingOutputs = 40,
ssPostProcessing = 50,
} StepState;
};
struct RemoteResult
@ -78,7 +79,7 @@ struct RemoteResult
unsigned int overhead = 0;
nix::Path logFile;
BuildStatus buildStatus() const
[[nodiscard]] BuildStatus buildStatus() const
{
return stepStatus == bsCachedFailure ? bsFailed : stepStatus;
}
@ -95,10 +96,10 @@ class Jobset
{
public:
typedef std::shared_ptr<Jobset> ptr;
typedef std::weak_ptr<Jobset> wptr;
using ptr = std::shared_ptr<Jobset>;
using wptr = std::weak_ptr<Jobset>;
static const time_t schedulingWindow = 24 * 60 * 60;
static const time_t schedulingWindow = static_cast<time_t>(24 * 60 * 60);
private:
@ -115,7 +116,7 @@ public:
return (double) seconds / shares;
}
void setShares(int shares_)
void setShares(unsigned int shares_)
{
assert(shares_ > 0);
shares = shares_;
@ -131,8 +132,8 @@ public:
struct Build
{
typedef std::shared_ptr<Build> ptr;
typedef std::weak_ptr<Build> wptr;
using ptr = std::shared_ptr<Build>;
using wptr = std::weak_ptr<Build>;
BuildID id;
nix::StorePath drvPath;
@ -163,8 +164,8 @@ struct Build
struct Step
{
typedef std::shared_ptr<Step> ptr;
typedef std::weak_ptr<Step> wptr;
using ptr = std::shared_ptr<Step>;
using wptr = std::weak_ptr<Step>;
nix::StorePath drvPath;
std::unique_ptr<nix::Derivation> drv;
@ -221,13 +222,8 @@ struct Step
nix::Sync<State> state;
Step(const nix::StorePath & drvPath) : drvPath(drvPath)
Step(nix::StorePath drvPath) : drvPath(std::move(drvPath))
{ }
~Step()
{
//printMsg(lvlError, format("destroying step %1%") % drvPath);
}
};
@ -239,7 +235,7 @@ void visitDependencies(std::function<void(Step::ptr)> visitor, Step::ptr step);
struct Machine : nix::Machine
{
typedef std::shared_ptr<Machine> ptr;
using ptr = std::shared_ptr<Machine>;
/* TODO Get rid of: `nix::Machine::storeUri` is normalized in a way
we are not yet used to, but once we are, we don't need this. */
@ -254,7 +250,7 @@ struct Machine : nix::Machine
float speedFactorFloat = 1.0;
struct State {
typedef std::shared_ptr<State> ptr;
using ptr = std::shared_ptr<State>;
counter currentJobs{0};
counter nrStepsDone{0};
counter totalStepTime{0}; // total time for steps, including closure copying
@ -328,7 +324,6 @@ struct Machine : nix::Machine
operator nix::ServeProto::WriteConn ()
{
return {
.to = to,
.version = remoteVersion,
};
}
@ -359,22 +354,22 @@ private:
bool useSubstitutes = false;
/* The queued builds. */
typedef std::map<BuildID, Build::ptr> Builds;
using Builds = std::map<BuildID, Build::ptr>;
nix::Sync<Builds> builds;
/* The jobsets. */
typedef std::map<std::pair<std::string, std::string>, Jobset::ptr> Jobsets;
using Jobsets = std::map<std::pair<std::string, std::string>, Jobset::ptr>;
nix::Sync<Jobsets> jobsets;
/* All active or pending build steps (i.e. dependencies of the
queued builds). Note that these are weak pointers. Steps are
kept alive by being reachable from Builds or by being in
progress. */
typedef std::map<nix::StorePath, Step::wptr> Steps;
using Steps = std::map<nix::StorePath, Step::wptr>;
nix::Sync<Steps> steps;
/* Build steps that have no unbuilt dependencies. */
typedef std::list<Step::wptr> Runnable;
using Runnable = std::list<Step::wptr>;
nix::Sync<Runnable> runnable;
/* CV for waking up the dispatcher. */
@ -386,7 +381,7 @@ private:
/* The build machines. */
std::mutex machinesReadyLock;
typedef std::map<std::string, Machine::ptr> Machines;
using Machines = std::map<std::string, Machine::ptr>;
nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr
/* Throttler for CPU-bound local work. */
@ -402,6 +397,7 @@ private:
counter nrStepsDone{0};
counter nrStepsBuilding{0};
counter nrStepsCopyingTo{0};
counter nrStepsWaitingForDownloadSlot{0};
counter nrStepsCopyingFrom{0};
counter nrStepsWaiting{0};
counter nrUnsupportedSteps{0};
@ -432,7 +428,7 @@ private:
struct MachineReservation
{
typedef std::shared_ptr<MachineReservation> ptr;
using ptr = std::shared_ptr<MachineReservation>;
State & state;
Step::ptr step;
Machine::ptr machine;
@ -535,7 +531,7 @@ private:
void finishBuildStep(pqxx::work & txn, const RemoteResult & result, BuildID buildId, unsigned int stepNr,
const std::string & machine);
int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
unsigned int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const nix::StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const nix::StorePath & storePath);
void updateBuild(pqxx::work & txn, Build::ptr build, BuildStatus status);
@ -595,6 +591,7 @@ private:
enum StepResult { sDone, sRetry, sMaybeCancelled };
StepResult doBuildStep(nix::ref<nix::Store> destStore,
MachineReservation::ptr & reservation,
Connection & conn,
std::shared_ptr<ActiveStep> activeStep);
void buildRemote(nix::ref<nix::Store> destStore,
@ -623,8 +620,6 @@ private:
void addRoot(const nix::StorePath & storePath);
void runMetricsExporter();
public:
void showStatus();
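
The header-wide typedef → using rewrites are purely syntactic, but the alias form reads left-to-right and, unlike typedef, extends to templates; dropping typedef enum { ... } X for enum X { ... } likewise removes a C-ism with no ABI effect. For instance:

#include <map>
#include <string>

using BuildID = unsigned int;            // was: typedef unsigned int BuildID;

template<typename T>
using ByName = std::map<std::string, T>; // alias templates require `using`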

View file

@ -242,23 +242,35 @@ sub push : Chained('api') PathPart('push') Args(0) {
$c->{stash}->{json}->{jobsetsTriggered} = [];
my $force = exists $c->request->query_params->{force};
my @jobsets = split /,/, ($c->request->query_params->{jobsets} // "");
foreach my $s (@jobsets) {
my @jobsetNames = split /,/, ($c->request->query_params->{jobsets} // "");
my @jobsets;
foreach my $s (@jobsetNames) {
my ($p, $j) = parseJobsetName($s);
my $jobset = $c->model('DB::Jobsets')->find($p, $j);
next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
triggerJobset($self, $c, $jobset, $force);
push @jobsets, $jobset if defined $jobset;
}
my @repos = split /,/, ($c->request->query_params->{repos} // "");
foreach my $r (@repos) {
triggerJobset($self, $c, $_, $force) foreach $c->model('DB::Jobsets')->search(
foreach ($c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{
join => 'project',
where => \ [ 'exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value = ?)', [ 'value', $r ] ],
order_by => 'me.id DESC'
});
})) {
push @jobsets, $_;
}
}
foreach my $jobset (@jobsets) {
requireRestartPrivileges($c, $jobset->project);
}
foreach my $jobset (@jobsets) {
next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
triggerJobset($self, $c, $jobset, $force);
}
$self->status_ok(
@ -273,7 +285,7 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->{stash}->{json}->{jobsetsTriggered} = [];
my $in = $c->request->{data};
my $owner = $in->{repository}->{owner}->{name} or die;
my $owner = $in->{repository}->{owner}->{login} or die;
my $repo = $in->{repository}->{name} or die;
print STDERR "got push from GitHub repository $owner/$repo\n";
@ -285,6 +297,23 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->response->body("");
}
sub push_gitea : Chained('api') PathPart('push-gitea') Args(0) {
my ($self, $c) = @_;
$c->{stash}->{json}->{jobsetsTriggered} = [];
my $in = $c->request->{data};
my $url = $in->{repository}->{clone_url} or die;
$url =~ s/.git$//;
print STDERR "got push from Gitea repository $url\n";
triggerJobset($self, $c, $_, 0) foreach $c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{ join => 'project'
, where => \ [ 'me.flake like ? or exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value like ?)', [ 'flake', "%$url%"], [ 'value', "%$url%" ] ]
});
$c->response->body("");
}
1;
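
The reworked push handler resolves every requested jobset first, then requires restart privileges on all of them, and only then triggers, so an unauthorized request fails before any jobset has been touched. The check-then-act shape, sketched in C++ for brevity:

#include <vector>

// authorize() throws on missing privileges; act() performs the trigger.
template<typename T, typename Auth, typename Act>
void triggerAll(const std::vector<T> & targets, Auth authorize, Act act)
{
    for (auto & t : targets) authorize(t); // all-or-nothing: no side effects yet
    for (auto & t : targets) act(t);
}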

View file

@ -240,7 +240,7 @@ sub serveFile {
# XSS hole.
$c->response->header('Content-Security-Policy' => 'sandbox allow-scripts');
$c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
$c->stash->{'plain'} = { data => readIntoSocket(cmd => ["nix", "--experimental-features", "nix-command",
"store", "cat", "--store", getStoreUri(), "$path"]) };
# Detect MIME type.

View file

@ -76,7 +76,9 @@ sub view_GET {
$c->stash->{removed} = $diff->{removed};
$c->stash->{unfinished} = $diff->{unfinished};
$c->stash->{aborted} = $diff->{aborted};
$c->stash->{failed} = $diff->{failed};
$c->stash->{totalAborted} = $diff->{totalAborted};
$c->stash->{totalFailed} = $diff->{totalFailed};
$c->stash->{totalQueued} = $diff->{totalQueued};
$c->stash->{full} = ($c->req->params->{full} || "0") eq "1";

View file

@ -35,6 +35,7 @@ sub noLoginNeeded {
return $whitelisted ||
$c->request->path eq "api/push-github" ||
$c->request->path eq "api/push-gitea" ||
$c->request->path eq "google-login" ||
$c->request->path eq "github-redirect" ||
$c->request->path eq "github-login" ||
@@ -80,7 +81,7 @@ sub begin :Private {
     $_->supportedInputTypes($c->stash->{inputTypes}) foreach @{$c->hydra_plugins};

     # XSRF protection: require POST requests to have the same origin.
-    if ($c->req->method eq "POST" && $c->req->path ne "api/push-github") {
+    if ($c->req->method eq "POST" && $c->req->path ne "api/push-github" && $c->req->path ne "api/push-gitea") {
        my $referer = $c->req->header('Referer');
        $referer //= $c->req->header('Origin');
        my $base = $c->req->base;
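
The same-origin check trusts the Referer header and falls back to Origin, so exempting api/push-gitea (like api/push-github before it) is what lets an external Gitea instance POST to the hook at all. A condensed sketch of the shape of the origin test, not Hydra's exact comparison:

    use strict;
    use warnings;

    sub same_origin {
        my ($referer, $origin, $base) = @_;
        my $src = $referer // $origin // return 0;
        return index($src, $base) == 0;    # request must come from our own base URL
    }

    print same_origin('https://hydra.example/jobset/foo', undef, 'https://hydra.example/')
        ? "allowed\n" : "rejected\n";
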

@@ -32,7 +32,12 @@ sub buildDiff {
         removed => [],
         unfinished => [],
         aborted => [],
-        failed => [],
+
+        # These summary counters cut across the categories to determine whether
+        # actions such as "Restart all failed" or "Bump queue" are available.
+        totalAborted => 0,
+        totalFailed => 0,
+        totalQueued => 0,
     };

     my $n = 0;
@@ -80,8 +85,15 @@ sub buildDiff {
             } else {
                 push @{$ret->{new}}, $build if !$found;
             }
-            if (defined $build->buildstatus && $build->buildstatus != 0) {
-                push @{$ret->{failed}}, $build;
+
+            if ($build->finished != 0 && $build->buildstatus != 0) {
+                if ($aborted) {
+                    ++$ret->{totalAborted};
+                } else {
+                    ++$ret->{totalFailed};
+                }
+            } elsif ($build->finished == 0) {
+                ++$ret->{totalQueued};
             }
         }
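
The classification above replaces the old materialized failed list with three counters: a finished build with a non-zero buildstatus counts as aborted or failed (depending on an $aborted flag computed earlier in buildDiff), and an unfinished build counts as queued. The same rules as a small standalone function, with plain hashes standing in for build result objects:

    use strict;
    use warnings;

    sub bump_counters {
        my ($ret, $build, $aborted) = @_;
        if ($build->{finished} != 0 && $build->{buildstatus} != 0) {
            ++$ret->{ $aborted ? 'totalAborted' : 'totalFailed' };
        } elsif ($build->{finished} == 0) {
            ++$ret->{totalQueued};
        }
    }

    my $ret = { totalAborted => 0, totalFailed => 0, totalQueued => 0 };
    bump_counters($ret, { finished => 1, buildstatus => 1 }, 0);  # a failed build
    bump_counters($ret, { finished => 0, buildstatus => 0 }, 0);  # still queued
    print "failed=$ret->{totalFailed} queued=$ret->{totalQueued}\n";
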

@@ -36,6 +36,7 @@ our @EXPORT = qw(
     jobsetOverview
     jobsetOverview_
     pathIsInsidePrefix
+    readIntoSocket
     readNixFile
     registerRoot
     restartBuilds
@@ -406,6 +407,16 @@ sub pathIsInsidePrefix {
     return $cur;
 }

+sub readIntoSocket{
+    my (%args) = @_;
+    my $sock;
+
+    eval {
+        open($sock, "-|", @{$args{cmd}}) or die q(failed to open socket from command:\n $x);
+    };
+
+    return $sock;
+}
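
Note that readIntoSocket returns whatever handle open produced, or undef when open dies inside the eval; also, the die message uses q(), which does not interpolate, so the \n and $x in it would be emitted literally. A usage sketch, with the helper copied from the hunk above but the die message tidied to report $! instead:

    use strict;
    use warnings;

    sub readIntoSocket {
        my (%args) = @_;
        my $sock;
        eval {
            open($sock, "-|", @{$args{cmd}}) or die "failed to open socket from command: $!";
        };
        return $sock;
    }

    # Callers should check for undef before reading from the handle.
    my $sock = readIntoSocket(cmd => ['echo', 'hello']);
    if (defined $sock) {
        print while <$sock>;
        close $sock;
    }
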

@ -1,22 +0,0 @@
PERL_MODULES = \
$(wildcard *.pm) \
$(wildcard Hydra/*.pm) \
$(wildcard Hydra/Helper/*.pm) \
$(wildcard Hydra/Model/*.pm) \
$(wildcard Hydra/View/*.pm) \
$(wildcard Hydra/Schema/*.pm) \
$(wildcard Hydra/Schema/Result/*.pm) \
$(wildcard Hydra/Schema/ResultSet/*.pm) \
$(wildcard Hydra/Controller/*.pm) \
$(wildcard Hydra/Base/*.pm) \
$(wildcard Hydra/Base/Controller/*.pm) \
$(wildcard Hydra/Script/*.pm) \
$(wildcard Hydra/Component/*.pm) \
$(wildcard Hydra/Event/*.pm) \
$(wildcard Hydra/Plugin/*.pm)
EXTRA_DIST = \
$(PERL_MODULES)
hydradir = $(libexecdir)/hydra/lib
nobase_hydra_DATA = $(PERL_MODULES)

@@ -2,8 +2,8 @@
 #include <pqxx/pqxx>

-#include "strings.hh"
-#include "environment-variables.hh"
+#include "lix/libutil/environment-variables.hh"
+#include "lix/libutil/strings.hh"

 struct Connection : pqxx::connection

@@ -2,9 +2,9 @@
 #include <map>

-#include "environment-variables.hh"
-#include "file-system.hh"
-#include "strings.hh"
+#include "lix/libutil/environment-variables.hh"
+#include "lix/libutil/file-system.hh"
+#include "lix/libutil/strings.hh"

 struct HydraConfig
 {

src/libhydra/meson.build Normal file

@@ -0,0 +1,5 @@
libhydra_inc = include_directories('.')
libhydra_dep = declare_dependency(
include_directories: [libhydra_inc],
)

src/meson.build Normal file

@@ -0,0 +1,16 @@
# Native code
subdir('libhydra')
subdir('hydra-evaluator')
subdir('hydra-queue-runner')
# Data and interpreted
foreach dir : ['lib', 'root', 'sql', 'ttf']
install_subdir(dir,
install_dir: get_option('libexecdir') / 'hydra',
)
endforeach
install_subdir('script',
install_dir: get_option('bindir'),
install_mode: 'rwxr-xr-x',
strip_directory: true,
)

@ -1,39 +0,0 @@
TEMPLATES = $(wildcard *.tt)
STATIC = \
$(wildcard static/images/*) \
$(wildcard static/css/*) \
static/js/bootbox.min.js \
static/js/popper.min.js \
static/js/common.js \
static/js/jquery/jquery-3.4.1.min.js \
static/js/jquery/jquery-ui-1.10.4.min.js
FLOT = flot-0.8.3.zip
BOOTSTRAP = bootstrap-4.3.1-dist.zip
FONTAWESOME = fontawesome-free-5.10.2-web.zip
ZIPS = $(FLOT) $(BOOTSTRAP) $(FONTAWESOME)
EXTRA_DIST = $(TEMPLATES) $(STATIC) $(ZIPS)
hydradir = $(libexecdir)/hydra/root
nobase_hydra_DATA = $(EXTRA_DIST)
all:
mkdir -p $(srcdir)/static/js
unzip -u -d $(srcdir)/static $(BOOTSTRAP)
rm -rf $(srcdir)/static/bootstrap
mv $(srcdir)/static/$(basename $(BOOTSTRAP)) $(srcdir)/static/bootstrap
unzip -u -d $(srcdir)/static/js $(FLOT)
unzip -u -d $(srcdir)/static $(FONTAWESOME)
rm -rf $(srcdir)/static/fontawesome
mv $(srcdir)/static/$(basename $(FONTAWESOME)) $(srcdir)/static/fontawesome
install-data-local: $(ZIPS)
mkdir -p $(hydradir)/static/js
cp -prvd $(srcdir)/static/js/* $(hydradir)/static/js
mkdir -p $(hydradir)/static/bootstrap
cp -prvd $(srcdir)/static/bootstrap/* $(hydradir)/static/bootstrap
mkdir -p $(hydradir)/static/fontawesome/{css,webfonts}
cp -prvd $(srcdir)/static/fontawesome/css/* $(hydradir)/static/fontawesome/css
cp -prvd $(srcdir)/static/fontawesome/webfonts/* $(hydradir)/static/fontawesome/webfonts

Binary file not shown.

@@ -411,7 +411,7 @@ BLOCK renderInputDiff; %]
       [% ELSIF bi1.uri == bi2.uri && bi1.revision != bi2.revision %]
         [% IF bi1.type == "git" %]
           <tr><td>
-            <b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 8) _ ' to ' _ bi2.revision.substr(0, 8)) %]</tt>
+            <b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 12) _ ' to ' _ bi2.revision.substr(0, 12)) %]</tt>
          </td></tr>
        [% ELSE %]
          <tr><td>

Binary file not shown.

@@ -48,16 +48,16 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
         <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#">Actions</a>
         <div class="dropdown-menu">
           <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('create_jobset'), [eval.id]) %]">Create a jobset from this evaluation</a>
-          [% IF unfinished.size > 0 %]
+          [% IF totalQueued > 0 %]
            <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('cancel'), [eval.id]) %]">Cancel all scheduled builds</a>
          [% END %]
-          [% IF aborted.size > 0 || stillFail.size > 0 || nowFail.size > 0 || failed.size > 0 %]
+          [% IF totalFailed > 0 %]
            <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_failed'), [eval.id]) %]">Restart all failed builds</a>
          [% END %]
-          [% IF aborted.size > 0 %]
+          [% IF totalAborted > 0 %]
            <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_aborted'), [eval.id]) %]">Restart all aborted builds</a>
          [% END %]
-          [% IF unfinished.size > 0 %]
+          [% IF totalQueued > 0 %]
            <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('bump'), [eval.id]) %]">Bump builds to front of queue</a>
          [% END %]
         </div>
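
On the template side the dropdown conditions now test plain counters instead of array sizes, so the eval page no longer needs every failed or queued build materialized just to decide whether an action is available. In Perl terms the counter is simply grep in scalar context:

    use strict;
    use warnings;

    my @builds = ({ buildstatus => 1 }, { buildstatus => 0 }, { buildstatus => 2 });

    # grep in scalar context returns the match count; no list is kept around.
    my $totalFailed = grep { $_->{buildstatus} != 0 } @builds;
    print "show 'Restart all failed builds'\n" if $totalFailed > 0;
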

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -0,0 +1,331 @@
/*!
* Bootstrap Reboot v4.3.1 (https://getbootstrap.com/)
* Copyright 2011-2019 The Bootstrap Authors
* Copyright 2011-2019 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* Forked from Normalize.css, licensed MIT (https://github.com/necolas/normalize.css/blob/master/LICENSE.md)
*/
*,
*::before,
*::after {
box-sizing: border-box;
}
html {
font-family: sans-serif;
line-height: 1.15;
-webkit-text-size-adjust: 100%;
-webkit-tap-highlight-color: rgba(0, 0, 0, 0);
}
article, aside, figcaption, figure, footer, header, hgroup, main, nav, section {
display: block;
}
body {
margin: 0;
font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
font-size: 1rem;
font-weight: 400;
line-height: 1.5;
color: #212529;
text-align: left;
background-color: #fff;
}
[tabindex="-1"]:focus {
outline: 0 !important;
}
hr {
box-sizing: content-box;
height: 0;
overflow: visible;
}
h1, h2, h3, h4, h5, h6 {
margin-top: 0;
margin-bottom: 0.5rem;
}
p {
margin-top: 0;
margin-bottom: 1rem;
}
abbr[title],
abbr[data-original-title] {
text-decoration: underline;
-webkit-text-decoration: underline dotted;
text-decoration: underline dotted;
cursor: help;
border-bottom: 0;
-webkit-text-decoration-skip-ink: none;
text-decoration-skip-ink: none;
}
address {
margin-bottom: 1rem;
font-style: normal;
line-height: inherit;
}
ol,
ul,
dl {
margin-top: 0;
margin-bottom: 1rem;
}
ol ol,
ul ul,
ol ul,
ul ol {
margin-bottom: 0;
}
dt {
font-weight: 700;
}
dd {
margin-bottom: .5rem;
margin-left: 0;
}
blockquote {
margin: 0 0 1rem;
}
b,
strong {
font-weight: bolder;
}
small {
font-size: 80%;
}
sub,
sup {
position: relative;
font-size: 75%;
line-height: 0;
vertical-align: baseline;
}
sub {
bottom: -.25em;
}
sup {
top: -.5em;
}
a {
color: #007bff;
text-decoration: none;
background-color: transparent;
}
a:hover {
color: #0056b3;
text-decoration: underline;
}
a:not([href]):not([tabindex]) {
color: inherit;
text-decoration: none;
}
a:not([href]):not([tabindex]):hover, a:not([href]):not([tabindex]):focus {
color: inherit;
text-decoration: none;
}
a:not([href]):not([tabindex]):focus {
outline: 0;
}
pre,
code,
kbd,
samp {
font-family: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
font-size: 1em;
}
pre {
margin-top: 0;
margin-bottom: 1rem;
overflow: auto;
}
figure {
margin: 0 0 1rem;
}
img {
vertical-align: middle;
border-style: none;
}
svg {
overflow: hidden;
vertical-align: middle;
}
table {
border-collapse: collapse;
}
caption {
padding-top: 0.75rem;
padding-bottom: 0.75rem;
color: #6c757d;
text-align: left;
caption-side: bottom;
}
th {
text-align: inherit;
}
label {
display: inline-block;
margin-bottom: 0.5rem;
}
button {
border-radius: 0;
}
button:focus {
outline: 1px dotted;
outline: 5px auto -webkit-focus-ring-color;
}
input,
button,
select,
optgroup,
textarea {
margin: 0;
font-family: inherit;
font-size: inherit;
line-height: inherit;
}
button,
input {
overflow: visible;
}
button,
select {
text-transform: none;
}
select {
word-wrap: normal;
}
button,
[type="button"],
[type="reset"],
[type="submit"] {
-webkit-appearance: button;
}
button:not(:disabled),
[type="button"]:not(:disabled),
[type="reset"]:not(:disabled),
[type="submit"]:not(:disabled) {
cursor: pointer;
}
button::-moz-focus-inner,
[type="button"]::-moz-focus-inner,
[type="reset"]::-moz-focus-inner,
[type="submit"]::-moz-focus-inner {
padding: 0;
border-style: none;
}
input[type="radio"],
input[type="checkbox"] {
box-sizing: border-box;
padding: 0;
}
input[type="date"],
input[type="time"],
input[type="datetime-local"],
input[type="month"] {
-webkit-appearance: listbox;
}
textarea {
overflow: auto;
resize: vertical;
}
fieldset {
min-width: 0;
padding: 0;
margin: 0;
border: 0;
}
legend {
display: block;
width: 100%;
max-width: 100%;
padding: 0;
margin-bottom: .5rem;
font-size: 1.5rem;
line-height: inherit;
color: inherit;
white-space: normal;
}
progress {
vertical-align: baseline;
}
[type="number"]::-webkit-inner-spin-button,
[type="number"]::-webkit-outer-spin-button {
height: auto;
}
[type="search"] {
outline-offset: -2px;
-webkit-appearance: none;
}
[type="search"]::-webkit-search-decoration {
-webkit-appearance: none;
}
::-webkit-file-upload-button {
font: inherit;
-webkit-appearance: button;
}
output {
display: inline-block;
}
summary {
display: list-item;
cursor: pointer;
}
template {
display: none;
}
[hidden] {
display: none !important;
}
/*# sourceMappingURL=bootstrap-reboot.css.map */

File diff suppressed because one or more lines are too long

@@ -0,0 +1,8 @@
/*!
* Bootstrap Reboot v4.3.1 (https://getbootstrap.com/)
* Copyright 2011-2019 The Bootstrap Authors
* Copyright 2011-2019 Twitter, Inc.
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
* Forked from Normalize.css, licensed MIT (https://github.com/necolas/normalize.css/blob/master/LICENSE.md)
*/*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus{outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]):not([tabindex]){color:inherit;text-decoration:none}a:not([href]):not([tabindex]):focus,a:not([href]):not([tabindex]):hover{color:inherit;text-decoration:none}a:not([href]):not([tabindex]):focus{outline:0}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto 
-webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}input[type=date],input[type=datetime-local],input[type=month],input[type=time]{-webkit-appearance:listbox}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}
/*# sourceMappingURL=bootstrap-reboot.min.css.map */

File diff suppressed because one or more lines are too long

src/root/static/bootstrap/css/bootstrap.css vendored Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

src/root/static/bootstrap/js/bootstrap.js vendored Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

@@ -0,0 +1,34 @@
Font Awesome Free License
-------------------------
Font Awesome Free is free, open source, and GPL friendly. You can use it for
commercial projects, open source projects, or really almost whatever you want.
Full Font Awesome Free license: https://fontawesome.com/license/free.
# Icons: CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/)
In the Font Awesome Free download, the CC BY 4.0 license applies to all icons
packaged as SVG and JS file types.
# Fonts: SIL OFL 1.1 License (https://scripts.sil.org/OFL)
In the Font Awesome Free download, the SIL OFL license applies to all icons
packaged as web and desktop font files.
# Code: MIT License (https://opensource.org/licenses/MIT)
In the Font Awesome Free download, the MIT license applies to all non-font and
non-icon files.
# Attribution
Attribution is required by MIT, SIL OFL, and CC BY licenses. Downloaded Font
Awesome Free files already contain embedded comments with sufficient
attribution, so you shouldn't need to do anything additional when using these
files normally.
We've kept attribution comments terse, so we ask that you do not actively work
to remove them from files, especially code. They're a great way for folks to
learn about Font Awesome.
# Brand Icons
All brand icons are trademarks of their respective owners. The use of these
trademarks does not indicate endorsement of the trademark holder by Font
Awesome, nor vice versa. **Please do not use brand logos for any purpose except
to represent the company, product, or service to which they refer.**

src/root/static/fontawesome/css/all.css vendored Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

@@ -0,0 +1,14 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face {
font-family: 'Font Awesome 5 Brands';
font-style: normal;
font-weight: normal;
font-display: auto;
src: url("../webfonts/fa-brands-400.eot");
src: url("../webfonts/fa-brands-400.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-brands-400.woff2") format("woff2"), url("../webfonts/fa-brands-400.woff") format("woff"), url("../webfonts/fa-brands-400.ttf") format("truetype"), url("../webfonts/fa-brands-400.svg#fontawesome") format("svg"); }
.fab {
font-family: 'Font Awesome 5 Brands'; }

@@ -0,0 +1,5 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face{font-family:"Font Awesome 5 Brands";font-style:normal;font-weight:normal;font-display:auto;src:url(../webfonts/fa-brands-400.eot);src:url(../webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-brands-400.woff2) format("woff2"),url(../webfonts/fa-brands-400.woff) format("woff"),url(../webfonts/fa-brands-400.ttf) format("truetype"),url(../webfonts/fa-brands-400.svg#fontawesome) format("svg")}.fab{font-family:"Font Awesome 5 Brands"}

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

@@ -0,0 +1,15 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face {
font-family: 'Font Awesome 5 Free';
font-style: normal;
font-weight: 400;
font-display: auto;
src: url("../webfonts/fa-regular-400.eot");
src: url("../webfonts/fa-regular-400.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-regular-400.woff2") format("woff2"), url("../webfonts/fa-regular-400.woff") format("woff"), url("../webfonts/fa-regular-400.ttf") format("truetype"), url("../webfonts/fa-regular-400.svg#fontawesome") format("svg"); }
.far {
font-family: 'Font Awesome 5 Free';
font-weight: 400; }

@@ -0,0 +1,5 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:400;font-display:auto;src:url(../webfonts/fa-regular-400.eot);src:url(../webfonts/fa-regular-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-regular-400.woff2) format("woff2"),url(../webfonts/fa-regular-400.woff) format("woff"),url(../webfonts/fa-regular-400.ttf) format("truetype"),url(../webfonts/fa-regular-400.svg#fontawesome) format("svg")}.far{font-family:"Font Awesome 5 Free";font-weight:400}

@@ -0,0 +1,16 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face {
font-family: 'Font Awesome 5 Free';
font-style: normal;
font-weight: 900;
font-display: auto;
src: url("../webfonts/fa-solid-900.eot");
src: url("../webfonts/fa-solid-900.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-solid-900.woff2") format("woff2"), url("../webfonts/fa-solid-900.woff") format("woff"), url("../webfonts/fa-solid-900.ttf") format("truetype"), url("../webfonts/fa-solid-900.svg#fontawesome") format("svg"); }
.fa,
.fas {
font-family: 'Font Awesome 5 Free';
font-weight: 900; }

@@ -0,0 +1,5 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:900;font-display:auto;src:url(../webfonts/fa-solid-900.eot);src:url(../webfonts/fa-solid-900.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-solid-900.woff2) format("woff2"),url(../webfonts/fa-solid-900.woff) format("woff"),url(../webfonts/fa-solid-900.ttf) format("truetype"),url(../webfonts/fa-solid-900.svg#fontawesome) format("svg")}.fa,.fas{font-family:"Font Awesome 5 Free";font-weight:900}

@@ -0,0 +1,371 @@
/*!
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
*/
svg:not(:root).svg-inline--fa {
overflow: visible; }
.svg-inline--fa {
display: inline-block;
font-size: inherit;
height: 1em;
overflow: visible;
vertical-align: -.125em; }
.svg-inline--fa.fa-lg {
vertical-align: -.225em; }
.svg-inline--fa.fa-w-1 {
width: 0.0625em; }
.svg-inline--fa.fa-w-2 {
width: 0.125em; }
.svg-inline--fa.fa-w-3 {
width: 0.1875em; }
.svg-inline--fa.fa-w-4 {
width: 0.25em; }
.svg-inline--fa.fa-w-5 {
width: 0.3125em; }
.svg-inline--fa.fa-w-6 {
width: 0.375em; }
.svg-inline--fa.fa-w-7 {
width: 0.4375em; }
.svg-inline--fa.fa-w-8 {
width: 0.5em; }
.svg-inline--fa.fa-w-9 {
width: 0.5625em; }
.svg-inline--fa.fa-w-10 {
width: 0.625em; }
.svg-inline--fa.fa-w-11 {
width: 0.6875em; }
.svg-inline--fa.fa-w-12 {
width: 0.75em; }
.svg-inline--fa.fa-w-13 {
width: 0.8125em; }
.svg-inline--fa.fa-w-14 {
width: 0.875em; }
.svg-inline--fa.fa-w-15 {
width: 0.9375em; }
.svg-inline--fa.fa-w-16 {
width: 1em; }
.svg-inline--fa.fa-w-17 {
width: 1.0625em; }
.svg-inline--fa.fa-w-18 {
width: 1.125em; }
.svg-inline--fa.fa-w-19 {
width: 1.1875em; }
.svg-inline--fa.fa-w-20 {
width: 1.25em; }
.svg-inline--fa.fa-pull-left {
margin-right: .3em;
width: auto; }
.svg-inline--fa.fa-pull-right {
margin-left: .3em;
width: auto; }
.svg-inline--fa.fa-border {
height: 1.5em; }
.svg-inline--fa.fa-li {
width: 2em; }
.svg-inline--fa.fa-fw {
width: 1.25em; }
.fa-layers svg.svg-inline--fa {
bottom: 0;
left: 0;
margin: auto;
position: absolute;
right: 0;
top: 0; }
.fa-layers {
display: inline-block;
height: 1em;
position: relative;
text-align: center;
vertical-align: -.125em;
width: 1em; }
.fa-layers svg.svg-inline--fa {
-webkit-transform-origin: center center;
transform-origin: center center; }
.fa-layers-text, .fa-layers-counter {
display: inline-block;
position: absolute;
text-align: center; }
.fa-layers-text {
left: 50%;
top: 50%;
-webkit-transform: translate(-50%, -50%);
transform: translate(-50%, -50%);
-webkit-transform-origin: center center;
transform-origin: center center; }
.fa-layers-counter {
background-color: #ff253a;
border-radius: 1em;
-webkit-box-sizing: border-box;
box-sizing: border-box;
color: #fff;
height: 1.5em;
line-height: 1;
max-width: 5em;
min-width: 1.5em;
overflow: hidden;
padding: .25em;
right: 0;
text-overflow: ellipsis;
top: 0;
-webkit-transform: scale(0.25);
transform: scale(0.25);
-webkit-transform-origin: top right;
transform-origin: top right; }
.fa-layers-bottom-right {
bottom: 0;
right: 0;
top: auto;
-webkit-transform: scale(0.25);
transform: scale(0.25);
-webkit-transform-origin: bottom right;
transform-origin: bottom right; }
.fa-layers-bottom-left {
bottom: 0;
left: 0;
right: auto;
top: auto;
-webkit-transform: scale(0.25);
transform: scale(0.25);
-webkit-transform-origin: bottom left;
transform-origin: bottom left; }
.fa-layers-top-right {
right: 0;
top: 0;
-webkit-transform: scale(0.25);
transform: scale(0.25);
-webkit-transform-origin: top right;
transform-origin: top right; }
.fa-layers-top-left {
left: 0;
right: auto;
top: 0;
-webkit-transform: scale(0.25);
transform: scale(0.25);
-webkit-transform-origin: top left;
transform-origin: top left; }
.fa-lg {
font-size: 1.33333em;
line-height: 0.75em;
vertical-align: -.0667em; }
.fa-xs {
font-size: .75em; }
.fa-sm {
font-size: .875em; }
.fa-1x {
font-size: 1em; }
.fa-2x {
font-size: 2em; }
.fa-3x {
font-size: 3em; }
.fa-4x {
font-size: 4em; }
.fa-5x {
font-size: 5em; }
.fa-6x {
font-size: 6em; }
.fa-7x {
font-size: 7em; }
.fa-8x {
font-size: 8em; }
.fa-9x {
font-size: 9em; }
.fa-10x {
font-size: 10em; }
.fa-fw {
text-align: center;
width: 1.25em; }
.fa-ul {
list-style-type: none;
margin-left: 2.5em;
padding-left: 0; }
.fa-ul > li {
position: relative; }
.fa-li {
left: -2em;
position: absolute;
text-align: center;
width: 2em;
line-height: inherit; }
.fa-border {
border: solid 0.08em #eee;
border-radius: .1em;
padding: .2em .25em .15em; }
.fa-pull-left {
float: left; }
.fa-pull-right {
float: right; }
.fa.fa-pull-left,
.fas.fa-pull-left,
.far.fa-pull-left,
.fal.fa-pull-left,
.fab.fa-pull-left {
margin-right: .3em; }
.fa.fa-pull-right,
.fas.fa-pull-right,
.far.fa-pull-right,
.fal.fa-pull-right,
.fab.fa-pull-right {
margin-left: .3em; }
.fa-spin {
-webkit-animation: fa-spin 2s infinite linear;
animation: fa-spin 2s infinite linear; }
.fa-pulse {
-webkit-animation: fa-spin 1s infinite steps(8);
animation: fa-spin 1s infinite steps(8); }
@-webkit-keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
@keyframes fa-spin {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
.fa-rotate-90 {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";
-webkit-transform: rotate(90deg);
transform: rotate(90deg); }
.fa-rotate-180 {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";
-webkit-transform: rotate(180deg);
transform: rotate(180deg); }
.fa-rotate-270 {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";
-webkit-transform: rotate(270deg);
transform: rotate(270deg); }
.fa-flip-horizontal {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";
-webkit-transform: scale(-1, 1);
transform: scale(-1, 1); }
.fa-flip-vertical {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";
-webkit-transform: scale(1, -1);
transform: scale(1, -1); }
.fa-flip-both, .fa-flip-horizontal.fa-flip-vertical {
-ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";
-webkit-transform: scale(-1, -1);
transform: scale(-1, -1); }
:root .fa-rotate-90,
:root .fa-rotate-180,
:root .fa-rotate-270,
:root .fa-flip-horizontal,
:root .fa-flip-vertical,
:root .fa-flip-both {
-webkit-filter: none;
filter: none; }
.fa-stack {
display: inline-block;
height: 2em;
position: relative;
width: 2.5em; }
.fa-stack-1x,
.fa-stack-2x {
bottom: 0;
left: 0;
margin: auto;
position: absolute;
right: 0;
top: 0; }
.svg-inline--fa.fa-stack-1x {
height: 1em;
width: 1.25em; }
.svg-inline--fa.fa-stack-2x {
height: 2em;
width: 2.5em; }
.fa-inverse {
color: #fff; }
.sr-only {
border: 0;
clip: rect(0, 0, 0, 0);
height: 1px;
margin: -1px;
overflow: hidden;
padding: 0;
position: absolute;
width: 1px; }
.sr-only-focusable:active, .sr-only-focusable:focus {
clip: auto;
height: auto;
margin: 0;
overflow: visible;
position: static;
width: auto; }
.svg-inline--fa .fa-primary {
fill: var(--fa-primary-color, currentColor);
opacity: 1;
opacity: var(--fa-primary-opacity, 1); }
.svg-inline--fa .fa-secondary {
fill: var(--fa-secondary-color, currentColor);
opacity: 0.4;
opacity: var(--fa-secondary-opacity, 0.4); }
.svg-inline--fa.fa-swap-opacity .fa-primary {
opacity: 0.4;
opacity: var(--fa-secondary-opacity, 0.4); }
.svg-inline--fa.fa-swap-opacity .fa-secondary {
opacity: 1;
opacity: var(--fa-primary-opacity, 1); }
.svg-inline--fa mask .fa-primary,
.svg-inline--fa mask .fa-secondary {
fill: black; }
.fad.fa-inverse {
color: #fff; }

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

Binary file not shown.

File diff suppressed because it is too large

New binary image file not shown (675 KiB)

Some files were not shown because too many files have changed in this diff