Compare commits
No commits in common. "main" and "main" have entirely different histories.
12  .clang-tidy (deleted)
@@ -1,12 +0,0 @@
UseColor: true
Checks:
    - -*

    - bugprone-*
    # kind of nonsense
    - -bugprone-easily-swappable-parameters
    # many warnings due to not recognizing `assert` properly
    - -bugprone-unchecked-optional-access

    - modernize-*
    - -modernize-use-trailing-return-type
37  .github/ISSUE_TEMPLATE/bug_report.md (new file)
@@ -0,0 +1,37 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: bug
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Hydra Server:**

Please fill out this data as well as you can, but don't worry if you can't -- just do your best.

- OS and version: [e.g. NixOS 22.05.20211203.ee3794c]
- Version of Hydra
- Version of Nix Hydra is built against
- Version of the Nix daemon

**Additional context**
Add any other context about the problem here.
20  .github/ISSUE_TEMPLATE/feature_request.md (new file)
@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.
6  .github/dependabot.yml (new file)
@@ -0,0 +1,6 @@
version: 2
updates:
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
14  .github/workflows/test.yml (new file)
@@ -0,0 +1,14 @@
name: "Test"
on:
  pull_request:
  push:
jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: cachix/install-nix-action@v17
    #- run: nix flake check
    - run: nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi
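The workflow's build step is a plain `nix-build`, so the CI job can be reproduced locally with the same attribute paths (this assumes Nix is installed on an x86_64-linux machine):

```console
$ nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi
```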
43  .gitignore
@@ -1,9 +1,48 @@
/.pls_cache
*.o
*~
.test_info.*
Makefile
Makefile.in
.deps
.hydra-data
/config.guess
/config.log
/config.status
/config.sub
/configure
/depcomp
/libtool
/ltmain.sh
/autom4te.cache
/aclocal.m4
/missing
/install-sh
/src/sql/hydra-postgresql.sql
/src/sql/hydra-sqlite.sql
/src/sql/tmp.sqlite
/src/hydra-eval-jobs/hydra-eval-jobs
/src/root/static/bootstrap
/src/root/static/js/flot
/tests
/doc/manual/images
/doc/manual/manual.html
/doc/manual/manual.pdf
/t/.bzr*
/t/.git*
/t/.hg*
/t/nix
/t/data
/t/jobs/config.nix
t/jobs/declarative/project.json
/inst
hydra-config.h
hydra-config.h.in
result
result-*
.hydra-data
outputs
config
stamp-h1
src/hydra-evaluator/hydra-evaluator
src/hydra-queue-runner/hydra-queue-runner
src/root/static/fontawesome/
src/root/static/bootstrap*/
12  Makefile.am (new file)
@@ -0,0 +1,12 @@
SUBDIRS = src doc
if CAN_DO_CHECK
SUBDIRS += t
endif

BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
EXTRA_DIST = nixos-modules/hydra.nix

install-data-local: nixos-modules/hydra.nix
	$(INSTALL) -d $(DESTDIR)$(datadir)/nix
	$(INSTALL_DATA) nixos-modules/hydra.nix $(DESTDIR)$(datadir)/nix/hydra-module.nix
27  README.md
@@ -78,11 +78,11 @@ $ nix-build
 ### Development Environment
 
 You can use the provided shell.nix to get a working development environment:
 
 ```
-$ nix develop
-[nix-shell]$ just setup
-[nix-shell]$ just install
+$ nix-shell
+$ autoreconfPhase
+$ configurePhase   # NOTE: not ./configure
+$ make
 ```
 
 ### Executing Hydra During Development
@@ -91,9 +91,10 @@ When working on new features or bug fixes you need to be able to run Hydra from
 can be done using [foreman](https://github.com/ddollar/foreman):
 
 ```
-$ nix develop
-[nix-shell]$ just install
-[nix-shell]$ foreman start
+$ nix-shell
+$ # hack hack
+$ make
+$ foreman start
 ```
 
 Have a look at the [Procfile](./Procfile) if you want to see how the processes are being started. In order to avoid
@@ -114,22 +115,22 @@ Start by following the steps in [Development Environment](#development-environment).
 Then, you can run the tests and the perlcritic linter together with:
 
 ```console
-$ nix develop
-[nix-shell]$ just test
+$ nix-shell
+$ make check
 ```
 
 You can run a single test with:
 
 ```
-$ nix develop
-[nix-shell]$ yath test ./t/foo/bar.t
+$ nix-shell
+$ yath test ./t/foo/bar.t
 ```
 
 And you can run just perlcritic with:
 
 ```
-$ nix develop
-[nix-shell]$ just perlcritic
+$ nix-shell
+$ make perlcritic
 ```
 
 ### JSON API
91  configure.ac (new file)
@@ -0,0 +1,91 @@
AC_INIT([Hydra], [m4_esyscmd([echo -n $(cat ./version.txt)$VERSION_SUFFIX])])
AC_CONFIG_AUX_DIR(config)
AM_INIT_AUTOMAKE([foreign serial-tests])

AC_LANG([C++])

AC_PROG_CC
AC_PROG_INSTALL
AC_PROG_LN_S
AC_PROG_LIBTOOL
AC_PROG_CXX

AC_PATH_PROG([XSLTPROC], [xsltproc])

AC_ARG_WITH([docbook-xsl],
  [AS_HELP_STRING([--with-docbook-xsl=PATH],
    [path of the DocBook XSL stylesheets])],
  [docbookxsl="$withval"],
  [docbookxsl="/docbook-xsl-missing"])
AC_SUBST([docbookxsl])


AC_DEFUN([NEED_PROG],
[
AC_PATH_PROG($1, $2)
if test -z "$$1"; then
  AC_MSG_ERROR([$2 is required])
fi
])

NEED_PROG(perl, perl)

NEED_PROG([NIX_STORE_PROGRAM], [nix-store])

AC_MSG_CHECKING([whether $NIX_STORE_PROGRAM is recent enough])
if test -n "$NIX_STORE" -a -n "$TMPDIR"
then
  # This may be executed from within a build chroot, so pacify
  # `nix-store' instead of letting it choke while trying to mkdir
  # /nix/var.
  NIX_STATE_DIR="$TMPDIR"
  export NIX_STATE_DIR
fi
if NIX_REMOTE=daemon PAGER=cat "$NIX_STORE_PROGRAM" --timeout 123 -q; then
  AC_MSG_RESULT([yes])
else
  AC_MSG_RESULT([no])
  AC_MSG_ERROR([`$NIX_STORE_PROGRAM' doesn't support `--timeout'; please use a newer version.])
fi

PKG_CHECK_MODULES([NIX], [lix-main lix-expr lix-store])

testPath="$(dirname $(type -p expr))"
AC_SUBST(testPath)

CXXFLAGS+=" -include lix/config.h -std=gnu++20"

AC_CONFIG_FILES([
  Makefile
  doc/Makefile
  doc/manual/Makefile
  src/Makefile
  src/hydra-evaluator/Makefile
  src/hydra-eval-jobs/Makefile
  src/hydra-queue-runner/Makefile
  src/sql/Makefile
  src/ttf/Makefile
  src/lib/Makefile
  src/root/Makefile
  src/script/Makefile
])

# Tests might be filtered out
AM_CONDITIONAL([CAN_DO_CHECK], [test -f "$srcdir/t/api-test.t"])
AM_COND_IF(
  [CAN_DO_CHECK],
  [
    jobsPath="$(realpath ./t/jobs)"
    AC_SUBST(jobsPath)
    AC_CONFIG_FILES([
      t/Makefile
      t/jobs/config.nix
      t/jobs/declarative/project.json
    ])
  ])

AC_CONFIG_COMMANDS([executable-scripts], [])

AC_CONFIG_HEADER([hydra-config.h])

AC_OUTPUT
4  doc/Makefile.am (new file)
@@ -0,0 +1,4 @@
SUBDIRS = manual
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
122  doc/dev-notes.txt (new file)
@@ -0,0 +1,122 @@
* Recreating the schema bindings:

  $ make -C src/sql update-dbix

* Running the test server:

  $ DBIC_TRACE=1 ./script/hydra_server.pl

* Setting the maximum number of concurrent builds per system type:

  $ psql -d hydra <<< "insert into SystemTypes(system, maxConcurrent) values('i686-linux', 3);"

* Creating a user:

  $ hydra-create-user root --email-address 'e.dolstra@tudelft.nl' \
      --password-prompt

  (Replace "foobar" with the desired password.)

  To make the user an admin:

  $ hydra-create-user root --role admin

  To enable a non-admin user to create projects:

  $ hydra-create-user root --role create-projects

* Changing the priority of a scheduled build:

  update buildschedulinginfo set priority = 200 where id = <ID>;

* Changing the priority of all builds for a jobset:

  update buildschedulinginfo set priority = 20 where id in (select id from builds where finished = 0 and project = 'nixpkgs' and jobset = 'trunk');


* Steps to install:

  - Install the Hydra closure.

  - Set HYDRA_DATA to /somewhere.

  - Run hydra_init.pl

  - Start hydra_server

  - Visit http://localhost:3000/

  - Create a user (see above)

  - Create a project, jobset etc.

  - Start hydra_evaluator and hydra_queue_runner


* Job selection:

  php-sat:build [system = "i686-linux"]
  php-sat:build [same system]
  tarball [same patchelfSrc]
  --if system i686-linux --arg build {...}


* Restart all aborted builds in a given evaluation (e.g. 820909):

  > update builds set finished = 0 where id in (select id from builds where finished = 1 and buildstatus = 3 and exists (select 1 from jobsetevalmembers where eval = 820909 and build = id));


* Restart all builds in a given evaluation that had a build step time out:

  > update builds set finished = 0 where id in (select id from builds where finished = 1 and buildstatus != 0 and exists (select 1 from jobsetevalmembers where eval = 926992 and build = id) and exists (select 1 from buildsteps where build = id and status = 7));


* select * from (select project, jobset, job, system, max(timestamp) timestamp from builds where finished = 1 group by project, jobset, job, system) x join builds y on x.timestamp = y.timestamp and x.project = y.project and x.jobset = y.jobset and x.job = y.job and x.system = y.system;

  select * from (select project, jobset, job, system, max(timestamp) timestamp from builds where finished = 1 group by project, jobset, job, system) natural join builds;


* Delete all scheduled builds that are not already building:

  delete from builds where finished = 0 and not exists (select 1 from buildschedulinginfo s where s.id = builds.id and busy != 0);


* select x.project, x.jobset, x.job, x.system, x.id, x.timestamp, r.buildstatus, b.id, b.timestamp
  from (select project, jobset, job, system, max(id) as id from Builds where finished = 1 group by project, jobset, job, system) as a_
  natural join Builds x
  natural join BuildResultInfo r
  left join Builds b on b.id =
    (select max(id) from builds c
     natural join buildresultinfo r2
     where x.project = c.project and x.jobset = c.jobset and x.job = c.job and x.system = c.system
       and x.id > c.id and r.buildstatus != r2.buildstatus);

* Using PostgreSQL (version 9.2 or newer is required):

  $ HYDRA_DBI="dbi:Pg:dbname=hydra;" hydra-server


* Find the builds with the highest number of build steps:

  select id, (select count(*) from buildsteps where build = x.id) as n from builds x order by n desc;


* Evaluating the NixOS Hydra jobs:

  $ ./hydra_eval_jobs ~/Dev/nixos-wc/release.nix --arg nixpkgs '{outPath = /home/eelco/Dev/nixpkgs-wc;}' --arg nixosSrc '{outPath = /home/eelco/Dev/nixos-wc; rev = 1234;}' --arg services '{outhPath = /home/eelco/services-wc;}' --argstr system i686-linux --argstr system x86_64-linux --arg officialRelease false


* Show all the failing jobs/systems in the nixpkgs:stdenv jobset that
  succeed in the nixpkgs:trunk jobset:

  select job, system from builds b natural join buildresultinfo where project = 'nixpkgs' and jobset = 'stdenv' and iscurrent = 1 and finished = 1 and buildstatus != 0 and exists (select 1 from builds natural join buildresultinfo where project = 'nixpkgs' and jobset = 'trunk' and job = b.job and system = b.system and iscurrent = 1 and finished = 1 and buildstatus = 0) order by job, system;


* Get all Nixpkgs jobs that have never built succesfully:

  select project, jobset, job from builds b1
    where project = 'nixpkgs' and jobset = 'trunk' and iscurrent = 1
    group by project, jobset, job
    having not exists
      (select 1 from builds b2 where b1.project = b2.project and b1.jobset = b2.jobset and b1.job = b2.job and finished = 1 and buildstatus = 0)
    order by project, jobset, job;
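Any of the bare SQL statements in these notes can be fed to `psql` non-interactively, the same way as the SystemTypes example above; for instance, the aborted-build restart for evaluation 820909 (query copied verbatim from the note):

```console
$ psql -d hydra <<< "update builds set finished = 0 where id in (select id from builds where finished = 1 and buildstatus = 3 and exists (select 1 from jobsetevalmembers where eval = 820909 and build = id));"
```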
6  doc/manual/Makefile.am (new file)
@@ -0,0 +1,6 @@
MD_FILES = src/*.md

EXTRA_DIST = $(MD_FILES)

install: $(MD_FILES)
	mdbook build . -d $(docdir)
33  doc/manual/meson.build (deleted)
@@ -1,33 +0,0 @@
srcs = files(
  'src/SUMMARY.md',
  'src/about.md',
  'src/api.md',
  'src/configuration.md',
  'src/hacking.md',
  'src/installation.md',
  'src/introduction.md',
  'src/jobs.md',
  'src/monitoring/README.md',
  'src/notifications.md',
  'src/plugins/README.md',
  'src/plugins/RunCommand.md',
  'src/plugins/declarative-projects.md',
  'src/projects.md',
  'src/webhooks.md',
)

manual = custom_target(
  'manual',
  command: [
    mdbook, 'build', '@SOURCE_ROOT@/doc/manual', '-d', meson.current_build_dir() / 'html'
  ],
  depend_files: srcs,
  output: ['html'],
  build_by_default: true,
)

install_subdir(
  manual.full_path(),
  install_dir: get_option('datadir') / 'doc/hydra',
  strip_directory: true,
)
doc/manual/src/hacking.md
@@ -12,14 +12,15 @@ To enter a shell in which all environment variables (such as `PERL5LIB`)
 and dependencies can be found:
 
 ```console
-$ nix develop
+$ nix-shell
 ```
 
 To build Hydra, you should then do:
 
 ```console
-[nix-shell]$ just setup
-[nix-shell]$ just install
+[nix-shell]$ autoreconfPhase
+[nix-shell]$ configurePhase
+[nix-shell]$ make
 ```
 
 You start a local database, the webserver, and other components with
@@ -29,8 +30,6 @@ foreman:
 $ foreman start
 ```
 
-The Hydra interface will be available on port 63333, with an admin user named "alice" with password "foobar"
-
 You can run just the Hydra web server in your source tree as follows:
 
 ```console
@@ -40,13 +39,18 @@ $ ./src/script/hydra-server
 You can run Hydra's test suite with the following:
 
 ```console
-[nix-shell]$ just test
+[nix-shell]$ make check
+[nix-shell]$ # to run as many tests as you have cores:
+[nix-shell]$ make check YATH_JOB_COUNT=$NIX_BUILD_CORES
+[nix-shell]$ # or run yath directly:
 [nix-shell]$ yath test
 [nix-shell]$ # to run as many tests as you have cores:
 [nix-shell]$ yath test -j $NIX_BUILD_CORES
 ```
 
+When using `yath` instead of `make check`, ensure you have run `make`
+in the root of the repository at least once.
+
 **Warning**: Currently, the tests can fail
 if run with high parallelism [due to an issue in
 `Test::PostgreSQL`](https://github.com/TJC/Test-postgresql/issues/40)
@@ -97,14 +101,3 @@ Off NixOS, change `/etc/nix/nix.conf`:
 ```conf
 trusted-users = root YOURUSERNAME
 ```
-
-### Updating schema bindings
-
-```
-just update-dbix
-```
-
-### Find the builds with the highest number of build steps:
-
-select id, (select count(*) from buildsteps where build = x.id) as n from builds x order by n desc;
doc/manual/src/webhooks.md
@@ -1,12 +1,9 @@
 # Webhooks
 
-Hydra can be notified by github or gitea with webhooks to trigger a new evaluation when a
+Hydra can be notified by github's webhook to trigger a new evaluation when a
 jobset has a github repo in its input.
 
-## GitHub
-
-To set up a webhook for a GitHub repository go to `https://github.com/<yourhandle>/<yourrepo>/settings`
-and in the `Webhooks` tab click on `Add webhook`.
+To set up a github webhook go to `https://github.com/<yourhandle>/<yourrepo>/settings` and in the `Webhooks` tab
+click on `Add webhook`.
 
 - In `Payload URL` fill in `https://<your-hydra-domain>/api/push-github`.
 - In `Content type` switch to `application/json`.
@@ -14,14 +11,3 @@ and in the `Webhooks` tab click on `Add webhook`.
 - For `Which events would you like to trigger this webhook?` keep the default option for events on `Just the push event.`.
 
 Then add the hook with `Add webhook`.
-
-## Gitea
-
-To set up a webhook for a Gitea repository go to the settings of the repository in your Gitea instance
-and in the `Webhooks` tab click on `Add Webhook` and choose `Gitea` in the drop down.
-
-- In `Target URL` fill in `https://<your-hydra-domain>/api/push-gitea`.
-- Keep HTTP method `POST`, POST Content Type `application/json` and Trigger On `Push Events`.
-- Change the branch filter to match the git branch hydra builds.
-
-Then add the hook with `Add webhook`.
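Once the hook is saved, the endpoint can be smoke-tested from a shell. A minimal sketch — the payload below is an assumption (a trimmed stand-in for the much larger document GitHub actually posts), so adapt it to whatever fields your Hydra version inspects:

```console
$ curl -X POST "https://<your-hydra-domain>/api/push-github" \
    -H 'Content-Type: application/json' \
    -H 'X-GitHub-Event: push' \
    -d '{"repository": {"owner": {"name": "<yourhandle>"}, "name": "<yourrepo>"}}'
```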
124  flake.lock
@@ -16,28 +16,7 @@
       "type": "github"
     }
   },
-  "flake-parts": {
-    "inputs": {
-      "nixpkgs-lib": [
-        "nix-eval-jobs",
-        "nixpkgs"
-      ]
-    },
-    "locked": {
-      "lastModified": 1722555600,
-      "narHash": "sha256-XOQkdLafnb/p9ij77byFQjDf5m5QYl9b2REiVClC+x4=",
-      "owner": "hercules-ci",
-      "repo": "flake-parts",
-      "rev": "8471fe90ad337a8074e957b69ca4d0089218391d",
-      "type": "github"
-    },
-    "original": {
-      "owner": "hercules-ci",
-      "repo": "flake-parts",
-      "type": "github"
-    }
-  },
-  "lix": {
+  "nix": {
     "inputs": {
       "flake-compat": "flake-compat",
       "nix2container": "nix2container",
@@ -48,74 +27,27 @@
       "pre-commit-hooks": "pre-commit-hooks"
     },
     "locked": {
-      "lastModified": 1728163191,
-      "narHash": "sha256-SW0IEBsPN1EysqzvfDT+8Kimtzy03O1BxQQm7ZB6fRY=",
+      "lastModified": 1719211568,
+      "narHash": "sha256-oIgmvhe3CV/36LC0KXgqWnKXma39wabks8U9JBMDfO4=",
       "ref": "refs/heads/main",
-      "rev": "ed9b7f4f84fd60ad8618645cc1bae2d686ff0db6",
-      "revCount": 16323,
+      "rev": "4c3d93611f2848c56ebc69c85f2b1e18001ed3c7",
+      "revCount": 15877,
       "type": "git",
-      "url": "https://git.lix.systems/lix-project/lix"
+      "url": "https://git@git.lix.systems/lix-project/lix"
     },
     "original": {
       "type": "git",
-      "url": "https://git.lix.systems/lix-project/lix"
-    }
-  },
-  "nix-eval-jobs": {
-    "inputs": {
-      "flake-parts": "flake-parts",
-      "lix": [
-        "lix"
-      ],
-      "nix-github-actions": "nix-github-actions",
-      "nixpkgs": [
-        "nixpkgs"
-      ],
-      "treefmt-nix": "treefmt-nix"
-    },
-    "locked": {
-      "lastModified": 1723579251,
-      "narHash": "sha256-xnHtfw0gRhV+2S9U7hQwvp2klTy1Iv7FlMMO0/WiMVc=",
-      "ref": "refs/heads/main",
-      "rev": "42a160bce2fd9ffebc3809746bc80cc7208f9b08",
-      "revCount": 609,
-      "type": "git",
-      "url": "https://git.lix.systems/lix-project/nix-eval-jobs"
-    },
-    "original": {
-      "type": "git",
-      "url": "https://git.lix.systems/lix-project/nix-eval-jobs"
-    }
-  },
-  "nix-github-actions": {
-    "inputs": {
-      "nixpkgs": [
-        "nix-eval-jobs",
-        "nixpkgs"
-      ]
-    },
-    "locked": {
-      "lastModified": 1720066371,
-      "narHash": "sha256-uPlLYH2S0ACj0IcgaK9Lsf4spmJoGejR9DotXiXSBZQ=",
-      "owner": "nix-community",
-      "repo": "nix-github-actions",
-      "rev": "622f829f5fe69310a866c8a6cd07e747c44ef820",
-      "type": "github"
-    },
-    "original": {
-      "owner": "nix-community",
-      "repo": "nix-github-actions",
-      "type": "github"
+      "url": "https://git@git.lix.systems/lix-project/lix"
     }
   },
   "nix2container": {
     "flake": false,
     "locked": {
-      "lastModified": 1720642556,
-      "narHash": "sha256-qsnqk13UmREKmRT7c8hEnz26X3GFFyIQrqx4EaRc1Is=",
+      "lastModified": 1712990762,
+      "narHash": "sha256-hO9W3w7NcnYeX8u8cleHiSpK2YJo7ecarFTUlbybl7k=",
       "owner": "nlewo",
       "repo": "nix2container",
-      "rev": "3853e5caf9ad24103b13aa6e0e8bcebb47649fe4",
+      "rev": "20aad300c925639d5d6cbe30013c8357ce9f2a2e",
       "type": "github"
     },
     "original": {
@@ -126,11 +58,11 @@
   },
   "nixpkgs": {
     "locked": {
-      "lastModified": 1728193676,
-      "narHash": "sha256-PbDWAIjKJdlVg+qQRhzdSor04bAPApDqIv2DofTyynk=",
+      "lastModified": 1719145550,
+      "narHash": "sha256-K0i/coxxTEl30tgt4oALaylQfxqbotTSNb1/+g+mKMQ=",
       "owner": "NixOS",
       "repo": "nixpkgs",
-      "rev": "ecbc1ca8ffd6aea8372ad16be9ebbb39889e55b6",
+      "rev": "e4509b3a560c87a8d4cb6f9992b8915abf9e36d8",
       "type": "github"
     },
     "original": {
@@ -159,11 +91,11 @@
   "pre-commit-hooks": {
     "flake": false,
     "locked": {
-      "lastModified": 1721042469,
-      "narHash": "sha256-6FPUl7HVtvRHCCBQne7Ylp4p+dpP3P/OYuzjztZ4s70=",
+      "lastModified": 1712055707,
+      "narHash": "sha256-4XLvuSIDZJGS17xEwSrNuJLL7UjDYKGJSbK1WWX2AK8=",
       "owner": "cachix",
       "repo": "git-hooks.nix",
-      "rev": "f451c19376071a90d8c58ab1a953c6e9840527fd",
+      "rev": "e35aed5fda3cc79f88ed7f1795021e559582093a",
       "type": "github"
     },
     "original": {
@@ -174,31 +106,9 @@
   },
   "root": {
     "inputs": {
-      "lix": "lix",
-      "nix-eval-jobs": "nix-eval-jobs",
+      "nix": "nix",
       "nixpkgs": "nixpkgs"
     }
   },
-  "treefmt-nix": {
-    "inputs": {
-      "nixpkgs": [
-        "nix-eval-jobs",
-        "nixpkgs"
-      ]
-    },
-    "locked": {
-      "lastModified": 1723454642,
-      "narHash": "sha256-S0Gvsenh0II7EAaoc9158ZB4vYyuycvMGKGxIbERNAM=",
-      "owner": "numtide",
-      "repo": "treefmt-nix",
-      "rev": "349de7bc435bdff37785c2466f054ed1766173be",
-      "type": "github"
-    },
-    "original": {
-      "owner": "numtide",
-      "repo": "treefmt-nix",
-      "type": "github"
-    }
-  }
   },
 "root": "root",
29  flake.nix
@@ -2,20 +2,15 @@
   description = "A Nix-based continuous build system";
 
   inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05";
+  inputs.nix.url = "git+https://git@git.lix.systems/lix-project/lix";
+  inputs.nix.inputs.nixpkgs.follows = "nixpkgs";
 
-  inputs.lix.url = "git+https://git.lix.systems/lix-project/lix";
-  inputs.lix.inputs.nixpkgs.follows = "nixpkgs";
-
-  inputs.nix-eval-jobs.url = "git+https://git.lix.systems/lix-project/nix-eval-jobs";
-  inputs.nix-eval-jobs.inputs.nixpkgs.follows = "nixpkgs";
-  inputs.nix-eval-jobs.inputs.lix.follows = "lix";
-
-  outputs = { self, nix-eval-jobs, nixpkgs, lix }:
+  outputs = { self, nixpkgs, nix }:
     let
       systems = [ "x86_64-linux" "aarch64-linux" ];
       forEachSystem = nixpkgs.lib.genAttrs systems;
 
-      overlayList = [ self.overlays.default lix.overlays.default ];
+      overlayList = [ self.overlays.default nix.overlays.default ];
 
       pkgsBySystem = forEachSystem (system: import nixpkgs {
        inherit system;
@@ -29,7 +24,6 @@
     overlays.default = final: prev: {
       hydra = final.callPackage ./package.nix {
         inherit (final.lib) fileset;
-        nix-eval-jobs = nix-eval-jobs.packages.${final.system}.default;
         rawSrc = self;
       };
     };
@@ -73,21 +67,6 @@
         default = pkgsBySystem.${system}.hydra;
       });
 
-      devShells = forEachSystem (system: let
-        pkgs = pkgsBySystem.${system};
-        lib = pkgs.lib;
-
-        mkDevShell = stdenv: (pkgs.mkShell.override { inherit stdenv; }) {
-          inputsFrom = [ (self.packages.${system}.default.override { inherit stdenv; }) ];
-
-          packages =
-            lib.optional (stdenv.cc.isClang && stdenv.hostPlatform == stdenv.buildPlatform) pkgs.clang-tools;
-        };
-      in {
-        default = mkDevShell pkgs.stdenv;
-        clang = mkDevShell pkgs.clangStdenv;
-      });
-
       nixosModules = import ./nixos-modules {
         overlays = overlayList;
       };
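For context, the `devShells` block deleted above defined one shell per stdenv; under the old flake they would have been entered like this (a sketch — the `.#clang` output name comes directly from the deleted code):

```console
$ nix develop           # default (gcc) stdenv shell
$ nix develop .#clang   # clangStdenv shell, with clang-tools when host == build platform
```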
@@ -3,4 +3,4 @@
 # wait for hydra-server to listen
 while ! nc -z localhost 63333; do sleep 1; done
 
-HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-evaluator
+HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-evaluator

@@ -28,4 +28,4 @@ use-substitutes = true
 </hydra_notify>
 EOF
 fi
-HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-dev-server --port 63333 --restart --debug
+HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-dev-server --port 63333 --restart --debug

@@ -3,4 +3,4 @@
 # wait for hydra-server to listen
 while ! nc -z localhost 63333; do sleep 1; done
 
-HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-notify
+HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-notify

@@ -3,4 +3,4 @@
 # wait until hydra is listening on port 63333
 while ! nc -z localhost 63333; do sleep 1; done
 
-NIX_REMOTE_SYSTEMS="" HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec $(pwd)/outputs/out/bin/hydra-queue-runner
+NIX_REMOTE_SYSTEMS="" HYDRA_CONFIG=$(pwd)/.hydra-data/hydra.conf exec hydra-queue-runner
17  justfile (deleted)
@@ -1,17 +0,0 @@
setup *OPTIONS:
    meson setup build --prefix="$PWD/outputs/out" $mesonFlags {{ OPTIONS }}

build *OPTIONS:
    meson compile -C build {{ OPTIONS }}

install *OPTIONS: (build OPTIONS)
    meson install -C build

test *OPTIONS:
    meson test -C build --print-errorlogs {{ OPTIONS }}

update-dbix:
    cd src/sql && ./update-dbix-harness.sh

perlcritic:
    perlcritic .
36  meson.build (deleted)
@@ -1,36 +0,0 @@
project('hydra', 'cpp',
  version: files('version.txt'),
  license: 'GPL-3.0',
  default_options: [
    'debug=true',
    'optimization=2',
    'cpp_std=c++20',
  ],
)

lix_expr_dep = dependency('lix-expr', required: true)
lix_main_dep = dependency('lix-main', required: true)
lix_store_dep = dependency('lix-store', required: true)

# Lix/Nix need extra flags not provided in its pkg-config files.
lix_dep = declare_dependency(
  dependencies: [
    lix_expr_dep,
    lix_main_dep,
    lix_store_dep,
  ],
  compile_args: ['-include', 'lix/config.h'],
)

pqxx_dep = dependency('libpqxx', required: true)

prom_cpp_core_dep = dependency('prometheus-cpp-core', required: true)
prom_cpp_pull_dep = dependency('prometheus-cpp-pull', required: true)

mdbook = find_program('mdbook', native: true)
perl = find_program('perl', native: true)

subdir('doc/manual')
subdir('nixos-modules')
subdir('src')
subdir('t')
nixos-modules/hydra.nix
@@ -229,8 +229,9 @@ in
       };
 
       nix.settings = {
-        extra-trusted-users = [ "hydra" "hydra-queue-runner" "hydra-www" ];
-        keep-derivations = true;
+        trusted-users = [ "hydra-queue-runner" ];
+        gc-keep-outputs = true;
+        gc-keep-derivations = true;
       };
 
       services.hydra-dev.extraConfig =
@@ -339,7 +340,6 @@ in
     systemd.services.hydra-queue-runner =
       { wantedBy = [ "multi-user.target" ];
         requires = [ "hydra-init.service" ];
-        wants = [ "network-online.target" ];
         after = [ "hydra-init.service" "network.target" "network-online.target" ];
         path = [ cfg.package pkgs.nettools pkgs.openssh pkgs.bzip2 config.nix.package ];
         restartTriggers = [ hydraConf ];

4  nixos-modules/meson.build (deleted)
@@ -1,4 +0,0 @@
install_data('hydra.nix',
  install_dir: get_option('datadir') / 'nix',
  rename: ['hydra-module.nix'],
)
47  package.nix
@@ -12,8 +12,7 @@
 , git
 
 , makeWrapper
-, meson
-, ninja
+, autoreconfHook
 , nukeReferences
 , pkg-config
 , mdbook
@@ -37,7 +36,6 @@
 
 , cacert
 , foreman
-, just
 , glibcLocales
 , libressl
 , openldap
@@ -50,7 +48,6 @@
 , xz
 , gnutar
 , gnused
-, nix-eval-jobs
 
 , rpm
 , dpkg
@@ -94,7 +91,6 @@ let
       DigestSHA1
       EmailMIME
       EmailSender
-      FileCopyRecursive
       FileLibMagic
       FileSlurper
       FileWhich
@@ -142,13 +138,20 @@ stdenv.mkDerivation (finalAttrs: {
   src = fileset.toSource {
     root = ./.;
     fileset = fileset.unions ([
-      ./doc
-      ./meson.build
-      ./nixos-modules
-      ./src
-      ./t
       ./version.txt
+      ./configure.ac
+      ./Makefile.am
+      ./src
+      ./doc
+      ./nixos-modules/hydra.nix
+      # These are always needed to appease Automake
+      ./t/Makefile.am
+      ./t/jobs/config.nix.in
+      ./t/jobs/declarative/project.json.in
+    ] ++ lib.optionals finalAttrs.doCheck [
+      ./t
       ./.perlcriticrc
       ./.yath.rc
     ]);
   };
@@ -156,8 +159,7 @@ stdenv.mkDerivation (finalAttrs: {
 
   nativeBuildInputs = [
     makeWrapper
-    meson
-    ninja
+    autoreconfHook
     nukeReferences
     pkg-config
     mdbook
@@ -190,9 +192,6 @@ stdenv.mkDerivation (finalAttrs: {
     openldap
     postgresql_13
     pixz
-    nix-eval-jobs
-    perlPackages.PLS
-    just
   ];
 
   checkInputs = [
@@ -221,23 +220,16 @@ stdenv.mkDerivation (finalAttrs: {
       darcs
       gnused
       breezy
-      nix-eval-jobs
     ] ++ lib.optionals stdenv.isLinux [ rpm dpkg cdrkit ]
   );
 
   OPENLDAP_ROOT = openldap;
 
-  mesonBuildType = "release";
-
-  postPatch = ''
-    patchShebangs .
-  '';
-
   shellHook = ''
     pushd $(git rev-parse --show-toplevel) >/dev/null
 
-    PATH=$(pwd)/outputs/out/bin:$PATH
-    PERL5LIB=$(pwd)/src/lib:$(pwd)/t/lib:$PERL5LIB
+    PATH=$(pwd)/src/hydra-evaluator:$(pwd)/src/script:$(pwd)/src/hydra-eval-jobs:$(pwd)/src/hydra-queue-runner:$PATH
+    PERL5LIB=$(pwd)/src/lib:$PERL5LIB
     export HYDRA_HOME="$(pwd)/src/"
     mkdir -p .hydra-data
     export HYDRA_DATA="$(pwd)/.hydra-data"
@@ -246,11 +238,14 @@ stdenv.mkDerivation (finalAttrs: {
     popd >/dev/null
   '';
 
+  NIX_LDFLAGS = [ "-lpthread" ];
+
+  enableParallelBuilding = true;
+
   doCheck = true;
 
-  mesonCheckFlags = [ "--verbose" ];
-
   preCheck = ''
+    patchShebangs .
     export LOGNAME=''${LOGNAME:-foo}
     # set $HOME for bzr so it can create its trace file
    export HOME=$(mktemp -d)
3  src/Makefile.am (new file)
@@ -0,0 +1,3 @@
SUBDIRS = hydra-evaluator hydra-eval-jobs hydra-queue-runner sql script lib root ttf
BOOTCLEAN_SUBDIRS = $(SUBDIRS)
DIST_SUBDIRS = $(SUBDIRS)
5  src/hydra-eval-jobs/Makefile.am (new file)
@@ -0,0 +1,5 @@
bin_PROGRAMS = hydra-eval-jobs

hydra_eval_jobs_SOURCES = hydra-eval-jobs.cc
hydra_eval_jobs_LDADD = $(NIX_LIBS) -llixcmd
hydra_eval_jobs_CXXFLAGS = $(NIX_CFLAGS) -I ../libhydra
577  src/hydra-eval-jobs/hydra-eval-jobs.cc (new file)
@@ -0,0 +1,577 @@
#include <iostream>
#include <thread>
#include <optional>
#include <unordered_map>

#include "shared.hh"
#include "store-api.hh"
#include "eval.hh"
#include "eval-inline.hh"
#include "eval-settings.hh"
#include "signals.hh"
#include "terminal.hh"
#include "get-drvs.hh"
#include "globals.hh"
#include "lix/libcmd/common-eval-args.hh"
#include "flake/flakeref.hh"
#include "flake/flake.hh"
#include "attr-path.hh"
#include "derivations.hh"
#include "local-fs-store.hh"

#include "hydra-config.hh"

#include <sys/types.h>
#include <sys/wait.h>
#include <sys/resource.h>

#include <nlohmann/json.hpp>

void check_pid_status_nonblocking(pid_t check_pid)
{
    // Only check 'initialized' and known PID's
    if (check_pid <= 0) { return; }

    int wstatus = 0;
    pid_t pid = waitpid(check_pid, &wstatus, WNOHANG);
    // -1 = failure, WNOHANG: 0 = no change
    if (pid <= 0) { return; }

    std::cerr << "child process (" << pid << ") ";

    if (WIFEXITED(wstatus)) {
        std::cerr << "exited with status=" << WEXITSTATUS(wstatus) << std::endl;
    } else if (WIFSIGNALED(wstatus)) {
        std::cerr << "killed by signal=" << WTERMSIG(wstatus) << std::endl;
    } else if (WIFSTOPPED(wstatus)) {
        std::cerr << "stopped by signal=" << WSTOPSIG(wstatus) << std::endl;
    } else if (WIFCONTINUED(wstatus)) {
        std::cerr << "continued" << std::endl;
    }
}

using namespace nix;

static Path gcRootsDir;
static size_t maxMemorySize;

struct MyArgs : MixEvalArgs, MixCommonArgs, RootArgs
{
    Path releaseExpr;
    bool flake = false;
    bool dryRun = false;

    MyArgs() : MixCommonArgs("hydra-eval-jobs")
    {
        addFlag({
            .longName = "gc-roots-dir",
            .description = "garbage collector roots directory",
            .labels = {"path"},
            .handler = {&gcRootsDir}
        });

        addFlag({
            .longName = "dry-run",
            .description = "don't create store derivations",
            .handler = {&dryRun, true}
        });

        addFlag({
            .longName = "flake",
            .description = "build a flake",
            .handler = {&flake, true}
        });

        expectArg("expr", &releaseExpr);
    }
};

static MyArgs myArgs;

static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std::string & name, const std::string & subAttribute)
{
    Strings res;
    std::function<void(Value & v)> rec;

    rec = [&](Value & v) {
        state.forceValue(v, noPos);
        if (v.type() == nString)
            res.push_back(v.string.s);
        else if (v.isList())
            for (unsigned int n = 0; n < v.listSize(); ++n)
                rec(*v.listElems()[n]);
        else if (v.type() == nAttrs) {
            auto a = v.attrs->find(state.symbols.create(subAttribute));
            if (a != v.attrs->end())
                res.push_back(std::string(state.forceString(*a->value, a->pos, "while evaluating meta attributes")));
        }
    };

    Value * v = drv.queryMeta(name);
    if (v) rec(*v);

    return concatStringsSep(", ", res);
}

static void worker(
    EvalState & state,
    Bindings & autoArgs,
    AutoCloseFD & to,
    AutoCloseFD & from)
{
    Value vTop;

    if (myArgs.flake) {
        using namespace flake;

        auto flakeRef = parseFlakeRef(myArgs.releaseExpr);

        auto vFlake = state.allocValue();

        auto lockedFlake = lockFlake(state, flakeRef,
            LockFlags {
                .updateLockFile = false,
                .useRegistries = false,
                .allowUnlocked = false,
            });

        callFlake(state, lockedFlake, *vFlake);

        auto vOutputs = vFlake->attrs->get(state.symbols.create("outputs"))->value;
        state.forceValue(*vOutputs, noPos);

        auto aHydraJobs = vOutputs->attrs->get(state.symbols.create("hydraJobs"));
        if (!aHydraJobs)
            aHydraJobs = vOutputs->attrs->get(state.symbols.create("checks"));
        if (!aHydraJobs)
            throw Error("flake '%s' does not provide any Hydra jobs or checks", flakeRef);

        vTop = *aHydraJobs->value;

    } else {
        state.evalFile(lookupFileArg(state, myArgs.releaseExpr), vTop);
    }

    auto vRoot = state.allocValue();
    state.autoCallFunction(autoArgs, vTop, *vRoot);

    while (true) {
        /* Wait for the master to send us a job name. */
        writeLine(to.get(), "next");

        auto s = readLine(from.get());
        if (s == "exit") break;
        if (!s.starts_with("do ")) abort();
        std::string attrPath(s, 3);

        debug("worker process %d at '%s'", getpid(), attrPath);

        /* Evaluate it and send info back to the master. */
        nlohmann::json reply;

        try {
            auto vTmp = findAlongAttrPath(state, attrPath, autoArgs, *vRoot).first;

            auto v = state.allocValue();
            state.autoCallFunction(autoArgs, *vTmp, *v);

            if (auto drv = getDerivation(state, *v, false)) {

                // CA derivations do not have static output paths, so we
                // have to defensively not query output paths in case we
                // encounter one.
                DrvInfo::Outputs outputs = drv->queryOutputs(
                    !experimentalFeatureSettings.isEnabled(Xp::CaDerivations));

                if (drv->querySystem() == "unknown")
                    state.error<EvalError>("derivation must have a 'system' attribute").debugThrow();

                auto drvPath = state.store->printStorePath(drv->requireDrvPath());

                nlohmann::json job;

                job["nixName"] = drv->queryName();
                job["system"] = drv->querySystem();
                job["drvPath"] = drvPath;
                job["description"] = drv->queryMetaString("description");
                job["license"] = queryMetaStrings(state, *drv, "license", "shortName");
                job["homepage"] = drv->queryMetaString("homepage");
                job["maintainers"] = queryMetaStrings(state, *drv, "maintainers", "email");
                job["schedulingPriority"] = drv->queryMetaInt("schedulingPriority", 100);
                job["timeout"] = drv->queryMetaInt("timeout", 36000);
                job["maxSilent"] = drv->queryMetaInt("maxSilent", 7200);
                job["isChannel"] = drv->queryMetaBool("isHydraChannel", false);

                /* If this is an aggregate, then get its constituents. */
                auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
                if (a && state.forceBool(*a->value, a->pos, "while evaluating the `_hydraAggregate` attribute")) {
                    auto a = v->attrs->get(state.symbols.create("constituents"));
                    if (!a)
                        state.error<EvalError>("derivation must have a ‘constituents’ attribute").debugThrow();

                    NixStringContext context;
                    state.coerceToString(a->pos, *a->value, context, "while evaluating the `constituents` attribute", true, false);
                    for (auto & c : context)
                        std::visit(overloaded {
                            [&](const NixStringContextElem::Built & b) {
                                job["constituents"].push_back(b.drvPath->to_string(*state.store));
                            },
                            [&](const NixStringContextElem::Opaque & o) {
                            },
                            [&](const NixStringContextElem::DrvDeep & d) {
                            },
                        }, c.raw);

                    state.forceList(*a->value, a->pos, "while evaluating the `constituents` attribute");
                    for (unsigned int n = 0; n < a->value->listSize(); ++n) {
                        auto v = a->value->listElems()[n];
                        state.forceValue(*v, noPos);
                        if (v->type() == nString)
                            job["namedConstituents"].push_back(v->str());
                    }
                }

                /* Register the derivation as a GC root.  !!! This
                   registers roots for jobs that we may have already
                   done. */
                auto localStore = state.store.dynamic_pointer_cast<LocalFSStore>();
                if (gcRootsDir != "" && localStore) {
                    Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
                    if (!pathExists(root))
                        localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
                }

                nlohmann::json out;
                for (auto & [outputName, optOutputPath] : outputs) {
                    if (optOutputPath) {
                        out[outputName] = state.store->printStorePath(*optOutputPath);
                    } else {
                        // See the `queryOutputs` call above; we should
                        // not encounter missing output paths otherwise.
                        assert(experimentalFeatureSettings.isEnabled(Xp::CaDerivations));
                        out[outputName] = nullptr;
                    }
                }
                job["outputs"] = std::move(out);
                reply["job"] = std::move(job);
            }

            else if (v->type() == nAttrs) {
                auto attrs = nlohmann::json::array();
                StringSet ss;
                for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
                    std::string name(state.symbols[i->name]);
                    if (name.find(' ') != std::string::npos) {
                        printError("skipping job with illegal name '%s'", name);
                        continue;
                    }
                    attrs.push_back(name);
                }
                reply["attrs"] = std::move(attrs);
            }

            else if (v->type() == nNull)
                ;

            else state.error<TypeError>("attribute '%s' is %s, which is not supported", attrPath, showType(*v)).debugThrow();

        } catch (EvalError & e) {
            auto msg = e.msg();
            // Transmits the error we got from the previous evaluation
            // in the JSON output.
            reply["error"] = filterANSIEscapes(msg, true);
            // Don't forget to print it into the STDERR log, this is
            // what's shown in the Hydra UI.
            printError(msg);
        }

        writeLine(to.get(), reply.dump());

        /* If our RSS exceeds the maximum, exit. The master will
           start a new process. */
        struct rusage r;
        getrusage(RUSAGE_SELF, &r);
        if ((size_t) r.ru_maxrss > maxMemorySize * 1024) break;
    }

    writeLine(to.get(), "restart");
}

int main(int argc, char * * argv)
{
    /* Prevent undeclared dependencies in the evaluation via
       $NIX_PATH. */
    unsetenv("NIX_PATH");

    return handleExceptions(argv[0], [&]() {

        auto config = std::make_unique<HydraConfig>();

        auto nrWorkers = config->getIntOption("evaluator_workers", 1);
        maxMemorySize = config->getIntOption("evaluator_max_memory_size", 4096);

        initNix();
        initGC();

        myArgs.parseCmdline(argvToStrings(argc, argv));

        auto pureEval = config->getBoolOption("evaluator_pure_eval", myArgs.flake);

        /* FIXME: The build hook in conjunction with import-from-derivation is causing "unexpected EOF" during eval */
        settings.builders = "";

        /* Prevent access to paths outside of the Nix search path and
           to the environment. */
        evalSettings.restrictEval = true;

        /* When building a flake, use pure evaluation (no access to
           'getEnv', 'currentSystem' etc. */
        evalSettings.pureEval = pureEval;

        if (myArgs.dryRun) settings.readOnlyMode = true;

        if (myArgs.releaseExpr == "") throw UsageError("no expression specified");

        if (gcRootsDir == "") printMsg(lvlError, "warning: `--gc-roots-dir' not specified");

        struct State
        {
            std::set<std::string> todo{""};
            std::set<std::string> active;
            nlohmann::json jobs;
            std::exception_ptr exc;
        };

        std::condition_variable wakeup;

        Sync<State> state_;

        /* Start a handler thread per worker process. */
        auto handler = [&]()
        {
            Pid pid;
            try {
                AutoCloseFD from, to;

                while (true) {

                    /* Start a new worker process if necessary. */
                    if (!pid) {
                        Pipe toPipe, fromPipe;
                        toPipe.create();
                        fromPipe.create();
                        pid = startProcess(
                            [&,
                             to{std::make_shared<AutoCloseFD>(std::move(fromPipe.writeSide))},
                             from{std::make_shared<AutoCloseFD>(std::move(toPipe.readSide))}
                            ]()
                            {
                                try {
                                    EvalState state(myArgs.searchPath, openStore());
                                    Bindings & autoArgs = *myArgs.getAutoArgs(state);
                                    worker(state, autoArgs, *to, *from);
                                } catch (Error & e) {
                                    nlohmann::json err;
                                    auto msg = e.msg();
                                    err["error"] = filterANSIEscapes(msg, true);
                                    printError(msg);
                                    writeLine(to->get(), err.dump());
                                    // Don't forget to print it into the STDERR log, this is
                                    // what's shown in the Hydra UI.
                                    writeLine(to->get(), "restart");
                                }
                            });
                        from = std::move(fromPipe.readSide);
                        to = std::move(toPipe.writeSide);
                        debug("created worker process %d", pid.get());
                    }

                    /* Check whether the existing worker process is still there. */
                    auto s = readLine(from.get());
                    if (s == "restart") {
                        pid.wait();
                        continue;
                    } else if (s != "next") {
                        auto json = nlohmann::json::parse(s);
                        throw Error("worker error: %s", (std::string) json["error"]);
                    }

                    /* Wait for a job name to become available. */
                    std::string attrPath;

                    while (true) {
                        checkInterrupt();
                        auto state(state_.lock());
                        if ((state->todo.empty() && state->active.empty()) || state->exc) {
                            writeLine(to.get(), "exit");
                            return;
                        }
                        if (!state->todo.empty()) {
                            attrPath = *state->todo.begin();
                            state->todo.erase(state->todo.begin());
                            state->active.insert(attrPath);
                            break;
                        } else
                            state.wait(wakeup);
                    }

                    /* Tell the worker to evaluate it. */
                    writeLine(to.get(), "do " + attrPath);

                    /* Wait for the response. */
                    auto response = nlohmann::json::parse(readLine(from.get()));

                    /* Handle the response. */
                    StringSet newAttrs;

                    if (response.find("job") != response.end()) {
                        auto state(state_.lock());
                        state->jobs[attrPath] = response["job"];
                    }

                    if (response.find("attrs") != response.end()) {
                        for (auto & i : response["attrs"]) {
                            std::string path = i;
                            if (path.find(".") != std::string::npos){
                                path = "\"" + path + "\"";
                            }
                            auto s = (attrPath.empty() ? "" : attrPath + ".") + (std::string) path;
                            newAttrs.insert(s);
                        }
                    }

                    if (response.find("error") != response.end()) {
                        auto state(state_.lock());
                        state->jobs[attrPath]["error"] = response["error"];
                    }

                    /* Add newly discovered job names to the queue. */
                    {
                        auto state(state_.lock());
                        state->active.erase(attrPath);
                        for (auto & s : newAttrs)
                            state->todo.insert(s);
                        wakeup.notify_all();
                    }
                }
            } catch (...) {
                check_pid_status_nonblocking(pid.release());
                auto state(state_.lock());
                state->exc = std::current_exception();
                wakeup.notify_all();
            }
        };

        std::vector<std::thread> threads;
        for (size_t i = 0; i < nrWorkers; i++)
            threads.emplace_back(std::thread(handler));

        for (auto & thread : threads)
            thread.join();

        auto state(state_.lock());

        if (state->exc)
            std::rethrow_exception(state->exc);

        /* For aggregate jobs that have named consistuents
           (i.e. constituents that are a job name rather than a
           derivation), look up the referenced job and add it to the
           dependencies of the aggregate derivation. */
        auto store = openStore();

        for (auto i = state->jobs.begin(); i != state->jobs.end(); ++i) {
            auto jobName = i.key();
            auto & job = i.value();

            auto named = job.find("namedConstituents");
            if (named == job.end()) continue;

            std::unordered_map<std::string, std::string> brokenJobs;
            auto getNonBrokenJobOrRecordError = [&brokenJobs, &jobName, &state](
                const std::string & childJobName) -> std::optional<nlohmann::json> {
                auto childJob = state->jobs.find(childJobName);
                if (childJob == state->jobs.end()) {
                    printError("aggregate job '%s' references non-existent job '%s'", jobName, childJobName);
                    brokenJobs[childJobName] = "does not exist";
                    return std::nullopt;
                }
                if (childJob->find("error") != childJob->end()) {
                    std::string error = (*childJob)["error"];
                    printError("aggregate job '%s' references broken job '%s': %s", jobName, childJobName, error);
                    brokenJobs[childJobName] = error;
                    return std::nullopt;
                }
                return *childJob;
            };

            if (myArgs.dryRun) {
                for (std::string jobName2 : *named) {
                    auto job2 = getNonBrokenJobOrRecordError(jobName2);
                    if (!job2) {
                        continue;
                    }
                    std::string drvPath2 = (*job2)["drvPath"];
                    job["constituents"].push_back(drvPath2);
                }
            } else {
                auto drvPath = store->parseStorePath((std::string) job["drvPath"]);
                auto drv = store->readDerivation(drvPath);

                for (std::string jobName2 : *named) {
                    auto job2 = getNonBrokenJobOrRecordError(jobName2);
                    if (!job2) {
                        continue;
                    }
                    auto drvPath2 = store->parseStorePath((std::string) (*job2)["drvPath"]);
                    auto drv2 = store->readDerivation(drvPath2);
                    job["constituents"].push_back(store->printStorePath(drvPath2));
                    drv.inputDrvs.map[drvPath2].value = {drv2.outputs.begin()->first};
                }

                if (brokenJobs.empty()) {
                    std::string drvName(drvPath.name());
                    assert(drvName.ends_with(drvExtension));
                    drvName.resize(drvName.size() - drvExtension.size());

                    auto hashModulo = hashDerivationModulo(*store, drv, true);
                    if (hashModulo.kind != DrvHash::Kind::Regular) continue;
                    auto h = hashModulo.hashes.find("out");
                    if (h == hashModulo.hashes.end()) continue;
                    auto outPath = store->makeOutputPath("out", h->second, drvName);
                    drv.env["out"] = store->printStorePath(outPath);
                    drv.outputs.insert_or_assign("out", DerivationOutput::InputAddressed { .path = outPath });
                    auto newDrvPath = store->printStorePath(writeDerivation(*store, drv));

                    debug("rewrote aggregate derivation %s -> %s", store->printStorePath(drvPath), newDrvPath);

                    job["drvPath"] = newDrvPath;
                    job["outputs"]["out"] = store->printStorePath(outPath);
                }
            }

            job.erase("namedConstituents");

            /* Register the derivation as a GC root.  !!! This
               registers roots for jobs that we may have already
               done. */
            auto localStore = store.dynamic_pointer_cast<LocalFSStore>();
            if (gcRootsDir != "" && localStore) {
                auto drvPath = job["drvPath"].get<std::string>();
                Path root = gcRootsDir + "/" + std::string(baseNameOf(drvPath));
                if (!pathExists(root))
                    localStore->addPermRoot(localStore->parseStorePath(drvPath), root);
            }

            if (!brokenJobs.empty()) {
                std::stringstream ss;
                for (const auto& [jobName, error] : brokenJobs) {
                    ss << jobName << ": " << error << "\n";
                }
                job["error"] = ss.str();
            }
        }

        std::cout << state->jobs.dump(2) << "\n";
    });
}
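For reference, a sketch of invoking the program above, based on the flags registered in `MyArgs` (the `--arg`/`--argstr` options come from the inherited `MixEvalArgs`); the expression path, flake URL, and attribute values are placeholders:

```console
$ hydra-eval-jobs ./release.nix \
    --gc-roots-dir /tmp/hydra-roots \
    --argstr system x86_64-linux
$ # or evaluate a flake's hydraJobs (falling back to checks):
$ hydra-eval-jobs --flake 'git+https://example.org/some/repo' --gc-roots-dir /tmp/hydra-roots
```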
5  src/hydra-evaluator/Makefile.am (new file)
@@ -0,0 +1,5 @@
bin_PROGRAMS = hydra-evaluator

hydra_evaluator_SOURCES = hydra-evaluator.cc
hydra_evaluator_LDADD = $(NIX_LIBS) -lpqxx
hydra_evaluator_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations
src/hydra-evaluator/hydra-evaluator.cc
@@ -14,12 +14,11 @@
 #include <sys/wait.h>
 
 #include <boost/format.hpp>
-#include <utility>
 
 using namespace nix;
 using boost::format;
 
-using JobsetName = std::pair<std::string, std::string>;
+typedef std::pair<std::string, std::string> JobsetName;
 
 class JobsetId {
     public:
@@ -29,8 +28,8 @@ class JobsetId {
     int id;
 
 
-    JobsetId(std::string project, std::string jobset, int id)
-        : project{std::move( project )}, jobset{std::move( jobset )}, id{ id }
+    JobsetId(const std::string & project, const std::string & jobset, int id)
+        : project{ project }, jobset{ jobset }, id{ id }
     {
     }
@@ -42,7 +41,7 @@ class JobsetId {
     friend bool operator== (const JobsetId & lhs, const JobsetName & rhs);
     friend bool operator!= (const JobsetId & lhs, const JobsetName & rhs);
 
-    [[nodiscard]] std::string display() const {
+    std::string display() const {
         return str(format("%1%:%2% (jobset#%3%)") % project % jobset % id);
     }
 };
@@ -89,11 +88,11 @@ struct Evaluator
         JobsetId name;
         std::optional<EvaluationStyle> evaluation_style;
         time_t lastCheckedTime, triggerTime;
-        time_t checkInterval;
+        int checkInterval;
         Pid pid;
     };
 
-    using Jobsets = std::map<JobsetId, Jobset>;
+    typedef std::map<JobsetId, Jobset> Jobsets;
 
    std::optional<JobsetName> evalOne;
@@ -139,15 +138,13 @@ struct Evaluator
 
             if (evalOne && name != *evalOne) continue;
 
-            auto res = state->jobsets.try_emplace(name, Jobset{.name=name});
+            auto res = state->jobsets.try_emplace(name, Jobset{name});
 
             auto & jobset = res.first->second;
             jobset.lastCheckedTime = row["lastCheckedTime"].as<time_t>(0);
             jobset.triggerTime = row["triggerTime"].as<time_t>(notTriggered);
             jobset.checkInterval = row["checkInterval"].as<time_t>();
 
-            int eval_style = row["jobset_enabled"].as<int>(0);
-            switch (eval_style) {
+            switch (row["jobset_enabled"].as<int>(0)) {
             case 1:
                 jobset.evaluation_style = EvaluationStyle::SCHEDULE;
                 break;
@@ -157,9 +154,6 @@ struct Evaluator
             case 3:
                 jobset.evaluation_style = EvaluationStyle::ONE_AT_A_TIME;
                 break;
-            default:
-                // Disabled or unknown. Leave as nullopt.
-                break;
             }
 
             seen.insert(name);
@@ -181,7 +175,7 @@ struct Evaluator
 
     void startEval(State & state, Jobset & jobset)
     {
-        time_t now = time(nullptr);
+        time_t now = time(0);
 
         printInfo("starting evaluation of jobset ‘%s’ (last checked %d s ago)",
             jobset.name.display(),
@@ -234,7 +228,7 @@ struct Evaluator
             return false;
         }
 
-        if (jobset.lastCheckedTime + jobset.checkInterval <= time(nullptr)) {
+        if (jobset.lastCheckedTime + jobset.checkInterval <= time(0)) {
             // Time to schedule a fresh evaluation. If the jobset
             // is a ONE_AT_A_TIME jobset, ensure the previous jobset
             // has no remaining, unfinished work.
@@ -307,7 +301,7 @@ struct Evaluator
 
         /* Put jobsets in order of ascending trigger time, last checked
           time, and name. */
-        std::ranges::sort(sorted,
+        std::sort(sorted.begin(), sorted.end(),
             [](const Jobsets::iterator & a, const Jobsets::iterator & b) {
                 return
                     a->second.triggerTime != b->second.triggerTime
@@ -330,7 +324,7 @@ struct Evaluator
 
         while (true) {
 
-            time_t now = time(nullptr);
+            time_t now = time(0);
 
             std::chrono::seconds sleepTime = std::chrono::seconds::max();
@@ -417,7 +411,7 @@ struct Evaluator
         printInfo("evaluation of jobset ‘%s’ %s",
             jobset.name.display(), statusToString(status));
 
-        auto now = time(nullptr);
+        auto now = time(0);
 
         jobset.triggerTime = notTriggered;
         jobset.lastCheckedTime = now;
|
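The evaluator hunks above all orbit one scheduling rule: a jobset is due when a trigger is pending, or when its last check is at least checkInterval seconds old (a non-positive interval disables periodic evaluation). A self-contained sketch of that predicate, with field names that only mirror the diff; the real code additionally consults EvaluationStyle (SCHEDULE vs. ONE_AT_A_TIME):

#include <ctime>

struct JobsetTimes {
    time_t lastCheckedTime;
    time_t triggerTime;   // set when an evaluation was requested explicitly
    time_t checkInterval; // <= 0 disables periodic evaluation
};

// Sketch of the "time to evaluate?" test from Evaluator above.
bool dueForEvaluation(const JobsetTimes & j, time_t notTriggered, time_t now)
{
    if (j.triggerTime != notTriggered) return true;    // forced trigger wins
    if (j.checkInterval <= 0) return false;            // periodic checks disabled
    return j.lastCheckedTime + j.checkInterval <= now; // interval elapsed
}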
@@ -1,9 +0,0 @@
hydra_evaluator = executable('hydra-evaluator',
'hydra-evaluator.cc',
dependencies: [
libhydra_dep,
lix_dep,
pqxx_dep,
],
install: true,
)
8
src/hydra-queue-runner/Makefile.am
Normal file

@@ -0,0 +1,8 @@
bin_PROGRAMS = hydra-queue-runner

hydra_queue_runner_SOURCES = hydra-queue-runner.cc queue-monitor.cc dispatcher.cc \
builder.cc build-result.cc build-remote.cc \
hydra-build-result.hh counter.hh state.hh db.hh \
nar-extractor.cc nar-extractor.hh
hydra_queue_runner_LDADD = $(NIX_LIBS) -lpqxx -lprometheus-cpp-pull -lprometheus-cpp-core
hydra_queue_runner_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations
@@ -1,6 +1,5 @@
#include <algorithm>
#include <cmath>
#include <ranges>

#include <sys/types.h>
#include <sys/stat.h>

@@ -42,7 +41,6 @@ static Strings extraStoreArgs(std::string & machine)
}
} catch (BadURL &) {
// We just try to continue with `machine->sshName` here for backwards compat.
printMsg(lvlWarn, "could not parse machine URL '%s', passing through to SSH", machine);
}

return result;

@@ -67,8 +65,8 @@ static void openConnection(::Machine::ptr machine, Path tmpDir, int stderrFD, SS
if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
if (machine->sshPublicHostKey != "") {
Path fileName = tmpDir + "/host-key";
auto p = sshName.find("@");
std::string host = p != std::string::npos ? std::string(sshName, p + 1) : sshName;
auto p = machine->sshName.find("@");
std::string host = p != std::string::npos ? std::string(machine->sshName, p + 1) : machine->sshName;
writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
append(argv, {"-oUserKnownHostsFile=" + fileName});
}

@@ -123,7 +121,7 @@ static void copyClosureTo(
the remote host to substitute missing paths. */
// FIXME: substitute output pollutes our build log
conn.to << ServeProto::Command::QueryValidPaths << 1 << useSubstitutes;
conn.to << ServeProto::write(destStore, conn, closure);
ServeProto::write(destStore, conn, closure);
conn.to.flush();

/* Get back the set of paths that are already valid on the remote

@@ -135,8 +133,8 @@ static void copyClosureTo(
auto sorted = destStore.topoSortPaths(closure);

StorePathSet missing;
for (auto & i : std::ranges::reverse_view(sorted))
if (!present.count(i)) missing.insert(i);
for (auto i = sorted.rbegin(); i != sorted.rend(); ++i)
if (!present.count(*i)) missing.insert(*i);

printMsg(lvlDebug, "sending %d missing paths", missing.size());

@@ -306,12 +304,12 @@ static BuildResult performBuild(

time_t startTime, stopTime;

startTime = time(nullptr);
startTime = time(0);
{
MaintainCount<counter> mc(nrStepsBuilding);
result = ServeProto::Serialise<BuildResult>::read(localStore, conn);
}
stopTime = time(nullptr);
stopTime = time(0);

if (!result.startTime) {
// If the builder gave `startTime = 0`, use our measurements

@@ -340,10 +338,10 @@ static BuildResult performBuild(
// were known
assert(outputPath);
auto outputHash = outputHashes.at(outputName);
auto drvOutput = DrvOutput { .drvHash=outputHash, .outputName=outputName };
auto drvOutput = DrvOutput { outputHash, outputName };
result.builtOutputs.insert_or_assign(
std::move(outputName),
Realisation { .id=drvOutput, .outPath=*outputPath });
Realisation { drvOutput, *outputPath });
}
}

@@ -361,7 +359,7 @@ static std::map<StorePath, ValidPathInfo> queryPathInfos(
/* Get info about each output path. */
std::map<StorePath, ValidPathInfo> infos;
conn.to << ServeProto::Command::QueryPathInfos;
conn.to << ServeProto::write(localStore, conn, outputs);
ServeProto::write(localStore, conn, outputs);
conn.to.flush();
while (true) {
auto storePathS = readString(conn.from);

@@ -370,7 +368,7 @@ static std::map<StorePath, ValidPathInfo> queryPathInfos(
auto references = ServeProto::Serialise<StorePathSet>::read(localStore, conn);
readLongLong(conn.from); // download size
auto narSize = readLongLong(conn.from);
auto narHash = Hash::parseAny(readString(conn.from), HashType::SHA256);
auto narHash = Hash::parseAny(readString(conn.from), htSHA256);
auto ca = ContentAddress::parseOpt(readString(conn.from));
readStrings<StringSet>(conn.from); // sigs
ValidPathInfo info(localStore.parseStorePath(storePathS), narHash);

@@ -399,7 +397,8 @@ static void copyPathFromRemote(
/* Receive the NAR from the remote and add it to the
destination store. Meanwhile, extract all the info from the
NAR that getBuildOutput() needs. */
auto coro = [&]() -> WireFormatGenerator {
auto source2 = sinkToSource([&](Sink & sink)
{
/* Note: we should only send the command to dump the store
path to the remote if the NAR is actually going to get read
by the destination store, which won't happen if this path

@@ -410,11 +409,11 @@ static void copyPathFromRemote(
conn.to << ServeProto::Command::DumpStorePath << localStore.printStorePath(info.path);
conn.to.flush();

co_yield extractNarDataFilter(conn.from, localStore.printStorePath(info.path), narMembers);
};
GeneratorSource source2{coro()};
TeeSource tee(conn.from, sink);
extractNarData(tee, localStore.printStorePath(info.path), narMembers);
});

destStore.addToStore(info, source2, NoRepair, NoCheckSigs);
destStore.addToStore(info, *source2, NoRepair, NoCheckSigs);
}

static void copyPathsFromRemote(

@@ -625,7 +624,6 @@ void State::buildRemote(ref<Store> destStore,
/* Throttle CPU-bound work. Opportunistically skip updating the current
 * step, since this requires a DB roundtrip. */
if (!localWorkThrottler.try_acquire()) {
MaintainCount<counter> mc(nrStepsWaitingForDownloadSlot);
updateStep(ssWaitingForLocalSlot);
localWorkThrottler.acquire();
}

@@ -637,7 +635,7 @@ void State::buildRemote(ref<Store> destStore,
 * copying outputs and we end up building too many things that we
 * haven't been able to allow copy slots for. */
assert(reservation.unique());
reservation = nullptr;
reservation = 0;
wakeDispatcher();

StorePathSet outputs;

@@ -700,7 +698,7 @@ void State::buildRemote(ref<Store> destStore,
if (info->consecutiveFailures == 0 || info->lastFailure < now - std::chrono::seconds(30)) {
info->consecutiveFailures = std::min(info->consecutiveFailures + 1, (unsigned int) 4);
info->lastFailure = now;
int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30));
int delta = retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30);
printMsg(lvlInfo, "will disable machine ‘%1%’ for %2%s", machine->sshName, delta);
info->disabledUntil = now + std::chrono::seconds(delta);
}
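The failure handling at the end of buildRemote() disables a flaky machine for retryInterval * retryBackoff^(n-1) seconds plus up to 30 s of random jitter, with the failure count capped at 4. A minimal sketch of that backoff curve; the default constants here are placeholders, not Hydra's configured retry_interval/retry_backoff values:

#include <algorithm>
#include <cmath>
#include <cstdlib>

// Sketch of the machine-disable backoff from buildRemote() above.
int disableDelaySeconds(unsigned int consecutiveFailures,
                        double retryInterval = 60.0, double retryBackoff = 2.0)
{
    // Cap the exponent as the diff does (and avoid n == 0 underflow).
    unsigned int n = std::clamp(consecutiveFailures, 1u, 4u);
    double delta = retryInterval * std::pow(retryBackoff, n - 1);
    return static_cast<int>(delta) + (std::rand() % 30); // jitter avoids thundering herds
}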
@@ -1,7 +1,6 @@
#include "hydra-build-result.hh"
#include "store-api.hh"
#include "fs-accessor.hh"
#include "strings.hh"

#include <regex>

@@ -35,8 +34,11 @@ BuildOutput getBuildOutput(
auto outputS = store->printStorePath(output);
if (!narMembers.count(outputS)) {
printInfo("fetching NAR contents of '%s'...", outputS);
GeneratorSource source{store->narFromPath(output)};
extractNarData(source, outputS, narMembers);
auto source = sinkToSource([&](Sink & sink)
{
store->narFromPath(output, sink);
});
extractNarData(*source, outputS, narMembers);
}
}
@@ -1,6 +1,5 @@
#include <cmath>

#include "error.hh"
#include "state.hh"
#include "hydra-build-result.hh"
#include "finally.hh"

@@ -36,22 +35,13 @@ void State::builder(MachineReservation::ptr reservation)
activeSteps_.lock()->erase(activeStep);
});

auto conn(dbPool.get());

try {
auto destStore = getDestStore();
// Might release the reservation.
res = doBuildStep(destStore, reservation, *conn, activeStep);
} catch (pqxx::broken_connection & e) {
printMsg(lvlError, "db lost while building ‘%s’ on ‘%s’: %s (retriable)",
localStore->printStorePath(activeStep->step->drvPath),
reservation ? reservation->machine->sshName : std::string("(no machine)"),
e.what());
conn.markBad();
res = doBuildStep(destStore, reservation, activeStep);
} catch (std::exception & e) {
printMsg(lvlError, "uncaught exception building ‘%s’ on ‘%s’: %s",
localStore->printStorePath(activeStep->step->drvPath),
reservation ? reservation->machine->sshName : std::string("(no machine)"),
localStore->printStorePath(reservation->step->drvPath),
reservation->machine->sshName,
e.what());
}
}

@@ -59,7 +49,7 @@ void State::builder(MachineReservation::ptr reservation)
/* If the machine hasn't been released yet, release and wake up the dispatcher. */
if (reservation) {
assert(reservation.unique());
reservation = nullptr;
reservation = 0;
wakeDispatcher();
}

@@ -73,7 +63,7 @@ void State::builder(MachineReservation::ptr reservation)
step_->tries++;
nrRetries++;
if (step_->tries > maxNrRetries) maxNrRetries = step_->tries; // yeah yeah, not atomic
int delta = static_cast<int>(retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10));
int delta = retryInterval * std::pow(retryBackoff, step_->tries - 1) + (rand() % 10);
printMsg(lvlInfo, "will retry ‘%s’ after %ss", localStore->printStorePath(step->drvPath), delta);
step_->after = std::chrono::system_clock::now() + std::chrono::seconds(delta);
}

@@ -85,7 +75,6 @@ void State::builder(MachineReservation::ptr reservation)

State::StepResult State::doBuildStep(nix::ref<Store> destStore,
MachineReservation::ptr & reservation,
Connection & conn,
std::shared_ptr<ActiveStep> activeStep)
{
auto step(reservation->step);

@@ -116,6 +105,8 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
buildOptions.maxLogSize = maxLogSize;
buildOptions.enforceDeterminism = step->isDeterministic;

auto conn(dbPool.get());

{
std::set<Build::ptr> dependents;
std::set<Step::ptr> steps;

@@ -140,7 +131,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
for (auto build2 : dependents) {
if (build2->drvPath == step->drvPath) {
build = build2;
pqxx::work txn(conn);
pqxx::work txn(*conn);
notifyBuildStarted(txn, build->id);
txn.commit();
}

@@ -186,16 +177,16 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
unlink(result.logFile.c_str());
}
} catch (...) {
ignoreExceptionInDestructor();
ignoreException();
}
}
});

time_t stepStartTime = result.startTime = time(nullptr);
time_t stepStartTime = result.startTime = time(0);

/* If any of the outputs have previously failed, then don't bother
   building again. */
if (checkCachedFailure(step, conn))
if (checkCachedFailure(step, *conn))
result.stepStatus = bsCachedFailure;
else {

@@ -203,13 +194,13 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
building. */
{
auto mc = startDbUpdate();
pqxx::work txn(conn);
pqxx::work txn(*conn);
stepNr = createBuildStep(txn, result.startTime, buildId, step, machine->sshName, bsBusy);
txn.commit();
}

auto updateStep = [&](StepState stepState) {
pqxx::work txn(conn);
pqxx::work txn(*conn);
updateBuildStep(txn, buildId, stepNr, stepState);
txn.commit();
};

@@ -238,7 +229,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
}
}

time_t stepStopTime = time(nullptr);
time_t stepStopTime = time(0);
if (!result.stopTime) result.stopTime = stepStopTime;

/* For standard failures, we don't care about the error

@@ -252,7 +243,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
auto step_(step->state.lock());
if (!step_->jobsets.empty()) {
// FIXME: loss of precision.
time_t charge = (result.stopTime - result.startTime) / static_cast<time_t>(step_->jobsets.size());
time_t charge = (result.stopTime - result.startTime) / step_->jobsets.size();
for (auto & jobset : step_->jobsets)
jobset->addStep(result.startTime, charge);
}

@@ -260,7 +251,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,

/* Finish the step in the database. */
if (stepNr) {
pqxx::work txn(conn);
pqxx::work txn(*conn);
finishBuildStep(txn, result, buildId, stepNr, machine->sshName);
txn.commit();
}

@@ -336,7 +327,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
{
auto mc = startDbUpdate();

pqxx::work txn(conn);
pqxx::work txn(*conn);

for (auto & b : direct) {
printInfo("marking build %1% as succeeded", b->id);

@@ -364,7 +355,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
/* Send notification about the builds that have this step as
   the top-level. */
{
pqxx::work txn(conn);
pqxx::work txn(*conn);
for (auto id : buildIDs)
notifyBuildFinished(txn, id, {});
txn.commit();

@@ -393,7 +384,7 @@ State::StepResult State::doBuildStep(nix::ref<Store> destStore,
}

} else
failStep(conn, step, buildId, result, machine, stepFinished);
failStep(*conn, step, buildId, result, machine, stepFinished);

// FIXME: keep stats about aborted steps?
nrStepsDone++;
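For fair scheduling, doBuildStep() charges a step's wall-clock time to every jobset that shares it, split evenly; the hunk above only drops an explicit cast in that division. A toy version of the accounting, with names that mirror the diff rather than Hydra's API:

#include <ctime>
#include <vector>

struct JobsetAccount { time_t secondsCharged = 0; };

// Sketch: split a step's duration evenly across all jobsets that share it.
// Integer division discards up to size-1 seconds, which is the "loss of
// precision" the FIXME in the diff refers to.
void chargeStep(std::vector<JobsetAccount *> & jobsets, time_t start, time_t stop)
{
    if (jobsets.empty()) return;
    time_t charge = (stop - start) / static_cast<time_t>(jobsets.size());
    for (auto * j : jobsets)
        j->secondsCharged += charge;
}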
@@ -46,7 +46,7 @@ void State::dispatcher()
auto t_after_work = std::chrono::steady_clock::now();

prom.dispatcher_time_spent_running.Increment(
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());
dispatchTimeMs += std::chrono::duration_cast<std::chrono::milliseconds>(t_after_work - t_before_work).count();

/* Sleep until we're woken up (either because a runnable build

@@ -63,7 +63,7 @@ void State::dispatcher()

auto t_after_sleep = std::chrono::steady_clock::now();
prom.dispatcher_time_spent_waiting.Increment(
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());

} catch (std::exception & e) {
printError("dispatcher: %s", e.what());

@@ -190,7 +190,7 @@ system_time State::doDispatch()
}
}

std::ranges::sort(runnableSorted,
sort(runnableSorted.begin(), runnableSorted.end(),
[](const StepInfo & a, const StepInfo & b)
{
return

@@ -240,11 +240,11 @@ system_time State::doDispatch()
- Then by speed factor.

- Finally by load. */
std::ranges::sort(machinesSorted,
sort(machinesSorted.begin(), machinesSorted.end(),
[](const MachineInfo & a, const MachineInfo & b) -> bool
{
float ta = std::round(static_cast<float>(a.currentJobs) / a.machine->speedFactorFloat);
float tb = std::round(static_cast<float>(b.currentJobs) / b.machine->speedFactorFloat);
float ta = std::round(a.currentJobs / a.machine->speedFactorFloat);
float tb = std::round(b.currentJobs / b.machine->speedFactorFloat);
return
ta != tb ? ta < tb :
a.machine->speedFactorFloat != b.machine->speedFactorFloat ? a.machine->speedFactorFloat > b.machine->speedFactorFloat :

@@ -345,7 +345,7 @@ void State::abortUnsupported()
auto machines2 = *machines.lock();

system_time now = std::chrono::system_clock::now();
auto now2 = time(nullptr);
auto now2 = time(0);

std::unordered_set<Step::ptr> aborted;

@@ -436,7 +436,7 @@ void Jobset::addStep(time_t startTime, time_t duration)

void Jobset::pruneSteps()
{
time_t now = time(nullptr);
time_t now = time(0);
auto steps_(steps.lock());
while (!steps_->empty()) {
auto i = steps_->begin();

@@ -464,7 +464,7 @@ State::MachineReservation::~MachineReservation()
auto prev = machine->state->currentJobs--;
assert(prev);
if (prev == 1)
machine->state->idleSince = time(nullptr);
machine->state->idleSince = time(0);

{
auto machineTypes_(state.machineTypes.lock());
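Both sides of the dispatcher hunks keep the same machine-ordering policy: lowest load normalized by speed factor first, then the faster machine as a tie-breaker. A standalone sketch of that comparator over plain structs (field names only mirror the diff):

#include <algorithm>
#include <cmath>
#include <vector>

struct MachineInfo {
    unsigned int currentJobs;
    float speedFactor;
};

// Sketch of the sort key used in doDispatch() above.
void sortMachines(std::vector<MachineInfo> & ms)
{
    std::sort(ms.begin(), ms.end(), [](const MachineInfo & a, const MachineInfo & b) {
        float ta = std::round(a.currentJobs / a.speedFactor); // normalized load
        float tb = std::round(b.currentJobs / b.speedFactor);
        if (ta != tb) return ta < tb;       // less loaded machine first
        return a.speedFactor > b.speedFactor; // then the faster machine
    });
}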
@@ -14,7 +14,7 @@ struct BuildProduct
bool isRegular = false;
std::optional<nix::Hash> sha256hash;
std::optional<off_t> fileSize;
BuildProduct() = default;
BuildProduct() { }
};

struct BuildMetric
@@ -105,7 +105,7 @@ State::State(std::optional<std::string> metricsAddrOpt)
: config(std::make_unique<HydraConfig>())
, maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
, dbPool(config->getIntOption("max_db_connections", 128))
, localWorkThrottler(static_cast<ptrdiff_t>(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2))))
, localWorkThrottler(config->getIntOption("max_local_worker_threads", std::min(maxSupportedLocalWorkers, std::max(4u, std::thread::hardware_concurrency()) - 2)))
, maxOutputSize(config->getIntOption("max_output_size", 2ULL << 30))
, maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
, uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))

@@ -138,7 +138,7 @@ nix::MaintainCount<counter> State::startDbUpdate()
{
if (nrActiveDbUpdates > 6)
printError("warning: %d concurrent database updates; PostgreSQL may be stalled", nrActiveDbUpdates.load());
return {nrActiveDbUpdates};
return MaintainCount<counter>(nrActiveDbUpdates);
}

@@ -171,9 +171,9 @@ void State::parseMachines(const std::string & contents)
for (auto & f : mandatoryFeatures)
supportedFeatures.insert(f);

using MaxJobs = std::remove_const_t<decltype(nix::Machine::maxJobs)>;
using MaxJobs = std::remove_const<decltype(nix::Machine::maxJobs)>::type;

auto machine = std::make_shared<::Machine>(::Machine {{
auto machine = std::make_shared<::Machine>(nix::Machine {
// `storeUri`, not yet used
"",
// `systemTypes`, not yet used

@@ -194,11 +194,11 @@ void State::parseMachines(const std::string & contents)
tokens[7] != "" && tokens[7] != "-"
? base64Decode(tokens[7])
: "",
}});
});

machine->sshName = tokens[0];
machine->systemTypesSet = tokenizeString<StringSet>(tokens[1], ",");
machine->speedFactorFloat = static_cast<float>(atof(tokens[4].c_str()));
machine->speedFactorFloat = atof(tokens[4].c_str());

/* Re-use the State object of the previous machine with the
   same name. */

@@ -412,7 +412,7 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,
}

unsigned int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const StorePath & storePath)
{
restart:

@@ -534,7 +534,7 @@ void State::markSucceededBuild(pqxx::work & txn, Build::ptr build,
product.type,
product.subtype,
product.fileSize ? std::make_optional(*product.fileSize) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(Base::Base16, false)) : std::nullopt,
product.sha256hash ? std::make_optional(product.sha256hash->to_string(Base16, false)) : std::nullopt,
product.path,
product.name,
product.defaultPath);

@@ -594,7 +594,7 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()
createDirs(dirOf(lockPath));

auto lock = std::make_shared<PathLocks>();
if (!lock->lockPaths(PathSet({lockPath}), "", false)) return nullptr;
if (!lock->lockPaths(PathSet({lockPath}), "", false)) return 0;

return lock;
}

@@ -602,10 +602,10 @@ std::shared_ptr<PathLocks> State::acquireGlobalLock()

void State::dumpStatus(Connection & conn)
{
time_t now = time(nullptr);
time_t now = time(0);
json statusJson = {
{"status", "up"},
{"time", time(nullptr)},
{"time", time(0)},
{"uptime", now - startedAt},
{"pid", getpid()},

@@ -613,7 +613,6 @@ void State::dumpStatus(Connection & conn)
{"nrActiveSteps", activeSteps_.lock()->size()},
{"nrStepsBuilding", nrStepsBuilding.load()},
{"nrStepsCopyingTo", nrStepsCopyingTo.load()},
{"nrStepsWaitingForDownloadSlot", nrStepsWaitingForDownloadSlot.load()},
{"nrStepsCopyingFrom", nrStepsCopyingFrom.load()},
{"nrStepsWaiting", nrStepsWaiting.load()},
{"nrUnsupportedSteps", nrUnsupportedSteps.load()},

@@ -621,7 +620,7 @@ void State::dumpStatus(Connection & conn)
{"bytesReceived", bytesReceived.load()},
{"nrBuildsRead", nrBuildsRead.load()},
{"buildReadTimeMs", buildReadTimeMs.load()},
{"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / (float) nrBuildsRead},
{"buildReadTimeAvgMs", nrBuildsRead == 0 ? 0.0 : (float) buildReadTimeMs / nrBuildsRead},
{"nrBuildsDone", nrBuildsDone.load()},
{"nrStepsStarted", nrStepsStarted.load()},
{"nrStepsDone", nrStepsDone.load()},

@@ -630,7 +629,7 @@ void State::dumpStatus(Connection & conn)
{"nrQueueWakeups", nrQueueWakeups.load()},
{"nrDispatcherWakeups", nrDispatcherWakeups.load()},
{"dispatchTimeMs", dispatchTimeMs.load()},
{"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / (float) nrDispatcherWakeups},
{"dispatchTimeAvgMs", nrDispatcherWakeups == 0 ? 0.0 : (float) dispatchTimeMs / nrDispatcherWakeups},
{"nrDbConnections", dbPool.count()},
{"nrActiveDbUpdates", nrActiveDbUpdates.load()},
};

@@ -650,8 +649,8 @@ void State::dumpStatus(Connection & conn)
if (nrStepsDone) {
statusJson["totalStepTime"] = totalStepTime.load();
statusJson["totalStepBuildTime"] = totalStepBuildTime.load();
statusJson["avgStepTime"] = (float) totalStepTime / (float) nrStepsDone;
statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / (float) nrStepsDone;
statusJson["avgStepTime"] = (float) totalStepTime / nrStepsDone;
statusJson["avgStepBuildTime"] = (float) totalStepBuildTime / nrStepsDone;
}

{

@@ -678,8 +677,8 @@ void State::dumpStatus(Connection & conn)
if (m->state->nrStepsDone) {
machine["totalStepTime"] = s->totalStepTime.load();
machine["totalStepBuildTime"] = s->totalStepBuildTime.load();
machine["avgStepTime"] = (float) s->totalStepTime / (float) s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / (float) s->nrStepsDone;
machine["avgStepTime"] = (float) s->totalStepTime / s->nrStepsDone;
machine["avgStepBuildTime"] = (float) s->totalStepBuildTime / s->nrStepsDone;
}
statusJson["machines"][m->sshName] = machine;
}

@@ -707,7 +706,7 @@ void State::dumpStatus(Connection & conn)
};
if (i.second.runnable > 0)
machineTypeJson["waitTime"] = i.second.waitTime.count() +
i.second.runnable * (time(nullptr) - lastDispatcherCheck);
i.second.runnable * (time(0) - lastDispatcherCheck);
if (i.second.running == 0)
machineTypeJson["lastActive"] = std::chrono::system_clock::to_time_t(i.second.lastActive);
}

@@ -733,11 +732,11 @@ void State::dumpStatus(Connection & conn)
{"narWriteCompressionTimeMs", stats.narWriteCompressionTimeMs.load()},
{"narCompressionSavings",
stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / (double) stats.narWriteBytes
? 1.0 - (double) stats.narWriteCompressedBytes / stats.narWriteBytes
: 0.0},
{"narCompressionSpeed", // MiB/s
stats.narWriteCompressionTimeMs
? (double) stats.narWriteBytes / (double) stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) stats.narWriteBytes / stats.narWriteCompressionTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
};

@@ -750,20 +749,20 @@ void State::dumpStatus(Connection & conn)
{"putTimeMs", s3Stats.putTimeMs.load()},
{"putSpeed",
s3Stats.putTimeMs
? (double) s3Stats.putBytes / (double) s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) s3Stats.putBytes / s3Stats.putTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"get", s3Stats.get.load()},
{"getBytes", s3Stats.getBytes.load()},
{"getTimeMs", s3Stats.getTimeMs.load()},
{"getSpeed",
s3Stats.getTimeMs
? (double) s3Stats.getBytes / (double) s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
? (double) s3Stats.getBytes / s3Stats.getTimeMs * 1000.0 / (1024.0 * 1024.0)
: 0.0},
{"head", s3Stats.head.load()},
{"costDollarApprox",
(double) (s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ (double) s3Stats.put / 1000.0 * 0.005 +
+ (double) s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
(s3Stats.get + s3Stats.head) / 10000.0 * 0.004
+ s3Stats.put / 1000.0 * 0.005 +
+ s3Stats.getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09},
};
}
}

@@ -849,7 +848,7 @@ void State::run(BuildID buildOne)
/* Can't be bothered to shut down cleanly. Goodbye! */
auto callback = createInterruptCallback([&]() { std::_Exit(0); });

startedAt = time(nullptr);
startedAt = time(0);
this->buildOne = buildOne;

auto lock = acquireGlobalLock();
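The costDollarApprox entry in dumpStatus() is a back-of-the-envelope AWS bill: GET/HEAD requests, PUT requests, and egress priced separately. Factored out as a function for clarity; the unit prices are the ones hard-coded in the diff and may not match current S3 pricing:

// Sketch of the S3 cost estimate above: $0.004 per 10k GET/HEAD requests,
// $0.005 per 1k PUTs, $0.09 per GiB downloaded (constants from the diff).
double s3CostDollarApprox(unsigned long gets, unsigned long heads,
                          unsigned long puts, unsigned long getBytes)
{
    return (double) (gets + heads) / 10000.0 * 0.004
         + (double) puts / 1000.0 * 0.005
         + (double) getBytes / (1024.0 * 1024.0 * 1024.0) * 0.09;
}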
@@ -1,22 +0,0 @@
srcs = files(
'builder.cc',
'build-remote.cc',
'build-result.cc',
'dispatcher.cc',
'hydra-queue-runner.cc',
'nar-extractor.cc',
'queue-monitor.cc',
)

hydra_queue_runner = executable('hydra-queue-runner',
'hydra-queue-runner.cc',
srcs,
dependencies: [
libhydra_dep,
lix_dep,
pqxx_dep,
prom_cpp_core_dep,
prom_cpp_pull_dep,
],
install: true,
)
@@ -3,41 +3,11 @@
#include "archive.hh"

#include <unordered_set>
#include <utility>

using namespace nix;

struct Extractor : NARParseVisitor
struct Extractor : ParseSink
{
class MyFileHandle : public FileHandle
{
NarMemberData & memberData;
uint64_t expectedSize;
std::unique_ptr<HashSink> hashSink;

public:
MyFileHandle(NarMemberData & memberData, uint64_t size) : memberData(memberData), expectedSize(size)
{
hashSink = std::make_unique<HashSink>(HashType::SHA256);
}

void receiveContents(std::string_view data) override
{
*memberData.fileSize += data.size();
(*hashSink)(data);
if (memberData.contents) {
memberData.contents->append(data);
}
assert(memberData.fileSize <= expectedSize);
if (memberData.fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(memberData.fileSize == len);
memberData.sha256 = hash;
hashSink.reset();
}
}
};

std::unordered_set<Path> filesToKeep {
"/nix-support/hydra-build-products",
"/nix-support/hydra-release-name",

@@ -45,10 +15,11 @@ struct Extractor : NARParseVisitor
};

NarMemberDatas & members;
NarMemberData * curMember = nullptr;
Path prefix;

Extractor(NarMemberDatas & members, Path prefix)
: members(members), prefix(std::move(prefix))
Extractor(NarMemberDatas & members, const Path & prefix)
: members(members), prefix(prefix)
{ }

void createDirectory(const Path & path) override

@@ -56,15 +27,41 @@ struct Extractor : NARParseVisitor
members.insert_or_assign(prefix + path, NarMemberData { .type = FSAccessor::Type::tDirectory });
}

std::unique_ptr<FileHandle> createRegularFile(const Path & path, uint64_t size, bool executable) override
void createRegularFile(const Path & path) override
{
auto memberData = &members.insert_or_assign(prefix + path, NarMemberData {
curMember = &members.insert_or_assign(prefix + path, NarMemberData {
.type = FSAccessor::Type::tRegular,
.fileSize = 0,
.contents = filesToKeep.count(path) ? std::optional("") : std::nullopt,
}).first->second;
}

return std::make_unique<MyFileHandle>(*memberData, size);
std::optional<uint64_t> expectedSize;
std::unique_ptr<HashSink> hashSink;

void preallocateContents(uint64_t size) override
{
expectedSize = size;
hashSink = std::make_unique<HashSink>(htSHA256);
}

void receiveContents(std::string_view data) override
{
assert(expectedSize);
assert(curMember);
assert(hashSink);
*curMember->fileSize += data.size();
(*hashSink)(data);
if (curMember->contents) {
curMember->contents->append(data);
}
assert(curMember->fileSize <= expectedSize);
if (curMember->fileSize == expectedSize) {
auto [hash, len] = hashSink->finish();
assert(curMember->fileSize == len);
curMember->sha256 = hash;
hashSink.reset();
}
}

void createSymlink(const Path & path, const std::string & target) override

@@ -79,19 +76,7 @@ void extractNarData(
const Path & prefix,
NarMemberDatas & members)
{
auto parser = extractNarDataFilter(source, prefix, members);
while (parser.next()) {
// ignore raw data
}
}

nix::WireFormatGenerator extractNarDataFilter(
Source & source,
const Path & prefix,
NarMemberDatas & members)
{
return [](Source & source, const Path & prefix, NarMemberDatas & members) -> WireFormatGenerator {
Extractor extractor(members, prefix);
co_yield parseAndCopyDump(extractor, source);
}(source, prefix, members);
parseDump(extractor, source);
// Note: this point may not be reached if we're in a coroutine.
}
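Both versions of the extractor implement the same pattern: feed each chunk of a regular file into an incremental hasher and a byte counter, and finalize once the size declared by the NAR has arrived. A library-free sketch of that bookkeeping; the update/finish callbacks stand in for nix's HashSink interface:

#include <cstdint>
#include <functional>
#include <string_view>

// Sketch of the chunk-wise accumulation in Extractor above; only the
// size bookkeeping is shown, the hashing itself is behind `update`.
struct StreamingFileHash {
    uint64_t expectedSize;
    uint64_t seen = 0;
    std::function<void(std::string_view)> update; // hash one chunk
    std::function<void()> finish;                 // finalize and record sha256

    void receiveContents(std::string_view data)
    {
        seen += data.size();
        update(data);
        if (seen == expectedSize) // all declared bytes have arrived
            finish();
    }
};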
@@ -13,7 +13,7 @@ struct NarMemberData
std::optional<nix::Hash> sha256;
};

using NarMemberDatas = std::map<nix::Path, NarMemberData>;
typedef std::map<nix::Path, NarMemberData> NarMemberDatas;

/* Read a NAR from a source and get to some info about every file
   inside the NAR. */

@@ -21,8 +21,3 @@ void extractNarData(
nix::Source & source,
const nix::Path & prefix,
NarMemberDatas & members);

nix::WireFormatGenerator extractNarDataFilter(
nix::Source & source,
const nix::Path & prefix,
NarMemberDatas & members);
@@ -4,8 +4,7 @@
#include "thread-pool.hh"

#include <cstring>
#include <utility>
#include <csignal>
#include <signal.h>

using namespace nix;

@@ -53,7 +52,7 @@ void State::queueMonitorLoop(Connection & conn)
auto t_after_work = std::chrono::steady_clock::now();

prom.queue_monitor_time_spent_running.Increment(
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count()));
std::chrono::duration_cast<std::chrono::microseconds>(t_after_work - t_before_work).count());

/* Sleep until we get notification from the database about an
   event. */

@@ -80,7 +79,7 @@ void State::queueMonitorLoop(Connection & conn)

auto t_after_sleep = std::chrono::steady_clock::now();
prom.queue_monitor_time_spent_waiting.Increment(
static_cast<double>(std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count()));
std::chrono::duration_cast<std::chrono::microseconds>(t_after_sleep - t_after_work).count());
}

exit(0);

@@ -89,7 +88,7 @@ void State::queueMonitorLoop(Connection & conn)

struct PreviousFailure : public std::exception {
Step::ptr step;
PreviousFailure(Step::ptr step) : step(std::move(step)) { }
PreviousFailure(Step::ptr step) : step(step) { }
};

@@ -118,7 +117,7 @@ bool State::getQueuedBuilds(Connection & conn,

for (auto const & row : res) {
auto builds_(builds.lock());
auto id = row["id"].as<BuildID>();
BuildID id = row["id"].as<BuildID>();
if (buildOne && id != buildOne) continue;
if (builds_->count(id)) continue;

@@ -138,7 +137,7 @@ bool State::getQueuedBuilds(Connection & conn,

newIDs.push_back(id);
newBuildsByID[id] = build;
newBuildsByPath.emplace(build->drvPath, id);
newBuildsByPath.emplace(std::make_pair(build->drvPath, id));
}
}

@@ -163,7 +162,7 @@ bool State::getQueuedBuilds(Connection & conn,
("update Builds set finished = 1, buildStatus = $2, startTime = $3, stopTime = $3 where id = $1 and finished = 0",
build->id,
(int) bsAborted,
time(nullptr));
time(0));
txn.commit();
build->finishedInDB = true;
nrBuildsDone++;

@@ -177,7 +176,7 @@ bool State::getQueuedBuilds(Connection & conn,
/* Create steps for this derivation and its dependencies. */
try {
step = createStep(destStore, conn, build, build->drvPath,
build, nullptr, finishedDrvs, newSteps, newRunnable);
build, 0, finishedDrvs, newSteps, newRunnable);
} catch (PreviousFailure & ex) {

/* Some step previously failed, so mark the build as

@@ -222,7 +221,7 @@ bool State::getQueuedBuilds(Connection & conn,
"where id = $1 and finished = 0",
build->id,
(int) (ex.step->drvPath == build->drvPath ? bsFailed : bsDepFailed),
time(nullptr));
time(0));
notifyBuildFinished(txn, build->id, {});
txn.commit();
build->finishedInDB = true;

@@ -255,7 +254,7 @@ bool State::getQueuedBuilds(Connection & conn,
{
auto mc = startDbUpdate();
pqxx::work txn(conn);
time_t now = time(nullptr);
time_t now = time(0);
if (!buildOneDone && build->id == buildOne) buildOneDone = true;
printMsg(lvlInfo, "marking build %1% as succeeded (cached)", build->id);
markSucceededBuild(txn, build, res, true, now, now);

@@ -356,7 +355,7 @@ void State::processQueueChange(Connection & conn)
pqxx::work txn(conn);
auto res = txn.exec("select id, globalPriority from Builds where finished = 0");
for (auto const & row : res)
currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<int>();
currentIds[row["id"].as<BuildID>()] = row["globalPriority"].as<BuildID>();
}

{

@@ -439,7 +438,7 @@ Step::ptr State::createStep(ref<Store> destStore,
Build::ptr referringBuild, Step::ptr referringStep, std::set<StorePath> & finishedDrvs,
std::set<Step::ptr> & newSteps, std::set<Step::ptr> & newRunnable)
{
if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return nullptr;
if (finishedDrvs.find(drvPath) != finishedDrvs.end()) return 0;

/* Check if the requested step already exists. If not, create a
   new step. In any case, make the step reachable from

@@ -517,7 +516,7 @@ Step::ptr State::createStep(ref<Store> destStore,
std::map<DrvOutput, std::optional<StorePath>> paths;
for (auto & [outputName, maybeOutputPath] : destStore->queryPartialDerivationOutputMap(drvPath, &*localStore)) {
auto outputHash = outputHashes.at(outputName);
paths.insert({{.drvHash=outputHash, .outputName=outputName}, maybeOutputPath});
paths.insert({{outputHash, outputName}, maybeOutputPath});
}

auto missing = getMissingRemotePaths(destStore, paths);

@@ -561,7 +560,7 @@ Step::ptr State::createStep(ref<Store> destStore,
auto & path = *pathOpt;

try {
time_t startTime = time(nullptr);
time_t startTime = time(0);

if (localStore->isValidPath(path))
printInfo("copying output ‘%1%’ of ‘%2%’ from local store",

@@ -579,7 +578,7 @@ Step::ptr State::createStep(ref<Store> destStore,
StorePathSet { path },
NoRepair, CheckSigs, NoSubstitute);

time_t stopTime = time(nullptr);
time_t stopTime = time(0);

{
auto mc = startDbUpdate();

@@ -603,7 +602,7 @@ Step::ptr State::createStep(ref<Store> destStore,
// FIXME: check whether all outputs are in the binary cache.
if (valid) {
finishedDrvs.insert(drvPath);
return nullptr;
return 0;
}

/* No, we need to build. */

@@ -611,7 +610,7 @@ Step::ptr State::createStep(ref<Store> destStore,

/* Create steps for the dependencies. */
for (auto & i : step->drv->inputDrvs.map) {
auto dep = createStep(destStore, conn, build, i.first, nullptr, step, finishedDrvs, newSteps, newRunnable);
auto dep = createStep(destStore, conn, build, i.first, 0, step, finishedDrvs, newSteps, newRunnable);
if (dep) {
auto step_(step->state.lock());
step_->deps.insert(dep);

@@ -659,11 +658,11 @@ Jobset::ptr State::createJobset(pqxx::work & txn,
auto res2 = txn.exec_params
("select s.startTime, s.stopTime from BuildSteps s join Builds b on build = id "
"where s.startTime is not null and s.stopTime > $1 and jobset_id = $2",
time(nullptr) - Jobset::schedulingWindow * 10,
time(0) - Jobset::schedulingWindow * 10,
jobsetID);
for (auto const & row : res2) {
auto startTime = row["startTime"].as<time_t>();
auto stopTime = row["stopTime"].as<time_t>();
time_t startTime = row["startTime"].as<time_t>();
time_t stopTime = row["stopTime"].as<time_t>();
jobset->addStep(startTime, stopTime - startTime);
}

@@ -703,7 +702,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
"where finished = 1 and (buildStatus = 0 or buildStatus = 6) and path = $1",
localStore->printStorePath(output));
if (r.empty()) continue;
auto id = r[0][0].as<BuildID>();
BuildID id = r[0][0].as<BuildID>();

printInfo("reusing build %d", id);

@@ -728,7 +727,7 @@ BuildOutput State::getBuildOutputCached(Connection & conn, nix::ref<nix::Store>
product.fileSize = row[2].as<off_t>();
}
if (!row[3].is_null())
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), HashType::SHA256);
product.sha256hash = Hash::parseAny(row[3].as<std::string>(), htSHA256);
if (!row[4].is_null())
product.path = row[4].as<std::string>();
product.name = row[5].as<std::string>();
@@ -8,7 +8,6 @@
#include <queue>
#include <regex>
#include <semaphore>
#include <utility>

#include <prometheus/counter.h>
#include <prometheus/gauge.h>

@@ -27,16 +26,16 @@
#include "machines.hh"

using BuildID = unsigned int;
typedef unsigned int BuildID;

using JobsetID = unsigned int;
typedef unsigned int JobsetID;

using system_time = std::chrono::time_point<std::chrono::system_clock>;
typedef std::chrono::time_point<std::chrono::system_clock> system_time;

using counter = std::atomic<unsigned long>;
typedef std::atomic<unsigned long> counter;

enum BuildStatus {
typedef enum {
bsSuccess = 0,
bsFailed = 1,
bsDepFailed = 2, // builds only

@@ -50,10 +49,10 @@ enum BuildStatus {
bsNarSizeLimitExceeded = 11,
bsNotDeterministic = 12,
bsBusy = 100, // not stored
};
} BuildStatus;

enum StepState {
typedef enum {
ssPreparing = 1,
ssConnecting = 10,
ssSendingInputs = 20,

@@ -61,7 +60,7 @@ enum StepState {
ssWaitingForLocalSlot = 35,
ssReceivingOutputs = 40,
ssPostProcessing = 50,
};
} StepState;

struct RemoteResult

@@ -79,7 +78,7 @@ struct RemoteResult
unsigned int overhead = 0;
nix::Path logFile;

[[nodiscard]] BuildStatus buildStatus() const
BuildStatus buildStatus() const
{
return stepStatus == bsCachedFailure ? bsFailed : stepStatus;
}

@@ -96,10 +95,10 @@ class Jobset
{
public:

using ptr = std::shared_ptr<Jobset>;
using wptr = std::weak_ptr<Jobset>;
typedef std::shared_ptr<Jobset> ptr;
typedef std::weak_ptr<Jobset> wptr;

static const time_t schedulingWindow = static_cast<time_t>(24 * 60 * 60);
static const time_t schedulingWindow = 24 * 60 * 60;

private:

@@ -116,7 +115,7 @@ public:
return (double) seconds / shares;
}

void setShares(unsigned int shares_)
void setShares(int shares_)
{
assert(shares_ > 0);
shares = shares_;

@@ -132,8 +131,8 @@ public:

struct Build
{
using ptr = std::shared_ptr<Build>;
using wptr = std::weak_ptr<Build>;
typedef std::shared_ptr<Build> ptr;
typedef std::weak_ptr<Build> wptr;

BuildID id;
nix::StorePath drvPath;

@@ -164,8 +163,8 @@ struct Build

struct Step
{
using ptr = std::shared_ptr<Step>;
using wptr = std::weak_ptr<Step>;
typedef std::shared_ptr<Step> ptr;
typedef std::weak_ptr<Step> wptr;

nix::StorePath drvPath;
std::unique_ptr<nix::Derivation> drv;

@@ -222,8 +221,13 @@ struct Step

nix::Sync<State> state;

Step(nix::StorePath drvPath) : drvPath(std::move(drvPath))
Step(const nix::StorePath & drvPath) : drvPath(drvPath)
{ }

~Step()
{
//printMsg(lvlError, format("destroying step %1%") % drvPath);
}
};

@@ -235,7 +239,7 @@ void visitDependencies(std::function<void(Step::ptr)> visitor, Step::ptr step);

struct Machine : nix::Machine
{
using ptr = std::shared_ptr<Machine>;
typedef std::shared_ptr<Machine> ptr;

/* TODO Get rid of: `nix::Machine::storeUri` is normalized in a way
   we are not yet used to, but once we are, we don't need this. */

@@ -250,7 +254,7 @@ struct Machine : nix::Machine
float speedFactorFloat = 1.0;

struct State {
using ptr = std::shared_ptr<State>;
typedef std::shared_ptr<State> ptr;
counter currentJobs{0};
counter nrStepsDone{0};
counter totalStepTime{0}; // total time for steps, including closure copying

@@ -324,6 +328,7 @@ struct Machine : nix::Machine
operator nix::ServeProto::WriteConn ()
{
return {
.to = to,
.version = remoteVersion,
};
}

@@ -354,22 +359,22 @@ private:
bool useSubstitutes = false;

/* The queued builds. */
using Builds = std::map<BuildID, Build::ptr>;
typedef std::map<BuildID, Build::ptr> Builds;
nix::Sync<Builds> builds;

/* The jobsets. */
using Jobsets = std::map<std::pair<std::string, std::string>, Jobset::ptr>;
typedef std::map<std::pair<std::string, std::string>, Jobset::ptr> Jobsets;
nix::Sync<Jobsets> jobsets;

/* All active or pending build steps (i.e. dependencies of the
   queued builds). Note that these are weak pointers. Steps are
   kept alive by being reachable from Builds or by being in
   progress. */
using Steps = std::map<nix::StorePath, Step::wptr>;
typedef std::map<nix::StorePath, Step::wptr> Steps;
nix::Sync<Steps> steps;

/* Build steps that have no unbuilt dependencies. */
using Runnable = std::list<Step::wptr>;
typedef std::list<Step::wptr> Runnable;
nix::Sync<Runnable> runnable;

/* CV for waking up the dispatcher. */

@@ -381,7 +386,7 @@ private:

/* The build machines. */
std::mutex machinesReadyLock;
using Machines = std::map<std::string, Machine::ptr>;
typedef std::map<std::string, Machine::ptr> Machines;
nix::Sync<Machines> machines; // FIXME: use atomic_shared_ptr

/* Throttler for CPU-bound local work. */

@@ -397,7 +402,6 @@ private:
counter nrStepsDone{0};
counter nrStepsBuilding{0};
counter nrStepsCopyingTo{0};
counter nrStepsWaitingForDownloadSlot{0};
counter nrStepsCopyingFrom{0};
counter nrStepsWaiting{0};
counter nrUnsupportedSteps{0};

@@ -428,7 +432,7 @@ private:

struct MachineReservation
{
using ptr = std::shared_ptr<MachineReservation>;
typedef std::shared_ptr<MachineReservation> ptr;
State & state;
Step::ptr step;
Machine::ptr machine;

@@ -531,7 +535,7 @@ private:
void finishBuildStep(pqxx::work & txn, const RemoteResult & result, BuildID buildId, unsigned int stepNr,
const std::string & machine);

unsigned int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
int createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
Build::ptr build, const nix::StorePath & drvPath, const nix::Derivation drv, const std::string & outputName, const nix::StorePath & storePath);

void updateBuild(pqxx::work & txn, Build::ptr build, BuildStatus status);

@@ -591,7 +595,6 @@ private:
enum StepResult { sDone, sRetry, sMaybeCancelled };
StepResult doBuildStep(nix::ref<nix::Store> destStore,
MachineReservation::ptr & reservation,
Connection & conn,
std::shared_ptr<ActiveStep> activeStep);

void buildRemote(nix::ref<nix::Store> destStore,

@@ -620,6 +623,8 @@ private:

void addRoot(const nix::StorePath & storePath);

void runMetricsExporter();

public:

void showStatus();
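state.hh defines counter as std::atomic<unsigned long>, and the status report divides two such counters to get averages. The cast the diff removes matters less than it looks: one float operand is enough to force floating-point division. A tiny sketch of the pattern:

#include <atomic>

using counter = std::atomic<unsigned long>;

// Sketch: averaging two monotonically increasing counters as dumpStatus()
// does. Casting either operand to float suffices; casting both (as on the
// removed side of the diff) is merely more explicit.
float avgMs(const counter & totalMs, const counter & events)
{
    unsigned long n = events.load();
    return n == 0 ? 0.0f : (float) totalMs.load() / n;
}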
@@ -242,35 +242,23 @@ sub push : Chained('api') PathPart('push') Args(0) {
$c->{stash}->{json}->{jobsetsTriggered} = [];

my $force = exists $c->request->query_params->{force};
my @jobsetNames = split /,/, ($c->request->query_params->{jobsets} // "");
my @jobsets;

foreach my $s (@jobsetNames) {
my @jobsets = split /,/, ($c->request->query_params->{jobsets} // "");
foreach my $s (@jobsets) {
my ($p, $j) = parseJobsetName($s);
my $jobset = $c->model('DB::Jobsets')->find($p, $j);
push @jobsets, $jobset if defined $jobset;
next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
triggerJobset($self, $c, $jobset, $force);
}

my @repos = split /,/, ($c->request->query_params->{repos} // "");
foreach my $r (@repos) {
foreach ($c->model('DB::Jobsets')->search(
triggerJobset($self, $c, $_, $force) foreach $c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{
join => 'project',
where => \ [ 'exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value = ?)', [ 'value', $r ] ],
order_by => 'me.id DESC'
})) {
push @jobsets, $_;
}
}

foreach my $jobset (@jobsets) {
requireRestartPrivileges($c, $jobset->project);
}

foreach my $jobset (@jobsets) {
next unless defined $jobset && ($force || ($jobset->project->enabled && $jobset->enabled));
triggerJobset($self, $c, $jobset, $force);
});
}

$self->status_ok(

@@ -285,7 +273,7 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->{stash}->{json}->{jobsetsTriggered} = [];

my $in = $c->request->{data};
my $owner = $in->{repository}->{owner}->{login} or die;
my $owner = $in->{repository}->{owner}->{name} or die;
my $repo = $in->{repository}->{name} or die;
print STDERR "got push from GitHub repository $owner/$repo\n";

@@ -297,23 +285,6 @@ sub push_github : Chained('api') PathPart('push-github') Args(0) {
$c->response->body("");
}

sub push_gitea : Chained('api') PathPart('push-gitea') Args(0) {
my ($self, $c) = @_;

$c->{stash}->{json}->{jobsetsTriggered} = [];

my $in = $c->request->{data};
my $url = $in->{repository}->{clone_url} or die;
$url =~ s/.git$//;
print STDERR "got push from Gitea repository $url\n";

triggerJobset($self, $c, $_, 0) foreach $c->model('DB::Jobsets')->search(
{ 'project.enabled' => 1, 'me.enabled' => 1 },
{ join => 'project'
, where => \ [ 'me.flake like ? or exists (select 1 from JobsetInputAlts where project = me.project and jobset = me.name and value like ?)', [ 'flake', "%$url%"], [ 'value', "%$url%" ] ]
});
$c->response->body("");
}

1;
@@ -240,7 +240,7 @@ sub serveFile {
# XSS hole.
$c->response->header('Content-Security-Policy' => 'sandbox allow-scripts');

$c->stash->{'plain'} = { data => readIntoSocket(cmd => ["nix", "--experimental-features", "nix-command",
$c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
"store", "cat", "--store", getStoreUri(), "$path"]) };

# Detect MIME type.
@@ -76,9 +76,7 @@ sub view_GET {
$c->stash->{removed} = $diff->{removed};
$c->stash->{unfinished} = $diff->{unfinished};
$c->stash->{aborted} = $diff->{aborted};
$c->stash->{totalAborted} = $diff->{totalAborted};
$c->stash->{totalFailed} = $diff->{totalFailed};
$c->stash->{totalQueued} = $diff->{totalQueued};
$c->stash->{failed} = $diff->{failed};

$c->stash->{full} = ($c->req->params->{full} || "0") eq "1";
@@ -35,7 +35,6 @@ sub noLoginNeeded {

return $whitelisted ||
$c->request->path eq "api/push-github" ||
$c->request->path eq "api/push-gitea" ||
$c->request->path eq "google-login" ||
$c->request->path eq "github-redirect" ||
$c->request->path eq "github-login" ||

@@ -81,7 +80,7 @@ sub begin :Private {
$_->supportedInputTypes($c->stash->{inputTypes}) foreach @{$c->hydra_plugins};

# XSRF protection: require POST requests to have the same origin.
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github" && $c->req->path ne "api/push-gitea") {
if ($c->req->method eq "POST" && $c->req->path ne "api/push-github") {
my $referer = $c->req->header('Referer');
$referer //= $c->req->header('Origin');
my $base = $c->req->base;
@@ -32,12 +32,7 @@ sub buildDiff {
removed => [],
unfinished => [],
aborted => [],

# These summary counters cut across the categories to determine whether
# actions such as "Restart all failed" or "Bump queue" are available.
totalAborted => 0,
totalFailed => 0,
totalQueued => 0,
failed => [],
};

my $n = 0;

@@ -85,15 +80,8 @@ sub buildDiff {
} else {
push @{$ret->{new}}, $build if !$found;
}

if ($build->finished != 0 && $build->buildstatus != 0) {
if ($aborted) {
++$ret->{totalAborted};
} else {
++$ret->{totalFailed};
}
} elsif ($build->finished == 0) {
++$ret->{totalQueued};
if (defined $build->buildstatus && $build->buildstatus != 0) {
push @{$ret->{failed}}, $build;
}
}
|
|||
jobsetOverview
|
||||
jobsetOverview_
|
||||
pathIsInsidePrefix
|
||||
readIntoSocket
|
||||
readNixFile
|
||||
registerRoot
|
||||
restartBuilds
|
||||
|
@@ -407,16 +406,6 @@ sub pathIsInsidePrefix {
    return $cur;
}

sub readIntoSocket{
    my (%args) = @_;
    my $sock;

    eval {
        open($sock, "-|", @{$args{cmd}}) or die q(failed to open socket from command:\n $x);
    };

    return $sock;
}

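One quirk in the removed helper is worth flagging: q(...) is Perl's single-quote operator, so the \n and $x in its die message would have been emitted literally rather than interpolated (and $x is never declared anyway). A two-line illustration:

    my $x = "oops";
    print q(failed:\n $x), "\n";     # prints the literal characters \n and $x
    print qq(failed:\n $x), "\n";    # qq interpolates: a real newline and "oops"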
src/lib/Makefile.am (new file, 22 lines)
@@ -0,0 +1,22 @@
PERL_MODULES = \
    $(wildcard *.pm) \
    $(wildcard Hydra/*.pm) \
    $(wildcard Hydra/Helper/*.pm) \
    $(wildcard Hydra/Model/*.pm) \
    $(wildcard Hydra/View/*.pm) \
    $(wildcard Hydra/Schema/*.pm) \
    $(wildcard Hydra/Schema/Result/*.pm) \
    $(wildcard Hydra/Schema/ResultSet/*.pm) \
    $(wildcard Hydra/Controller/*.pm) \
    $(wildcard Hydra/Base/*.pm) \
    $(wildcard Hydra/Base/Controller/*.pm) \
    $(wildcard Hydra/Script/*.pm) \
    $(wildcard Hydra/Component/*.pm) \
    $(wildcard Hydra/Event/*.pm) \
    $(wildcard Hydra/Plugin/*.pm)

EXTRA_DIST = \
    $(PERL_MODULES)

hydradir = $(libexecdir)/hydra/lib
nobase_hydra_DATA = $(PERL_MODULES)
@@ -1,5 +0,0 @@
libhydra_inc = include_directories('.')

libhydra_dep = declare_dependency(
  include_directories: [libhydra_inc],
)
@@ -1,16 +0,0 @@
# Native code
subdir('libhydra')
subdir('hydra-evaluator')
subdir('hydra-queue-runner')

# Data and interpreted
foreach dir : ['lib', 'root', 'sql', 'ttf']
  install_subdir(dir,
    install_dir: get_option('libexecdir') / 'hydra',
  )
endforeach
install_subdir('script',
  install_dir: get_option('bindir'),
  install_mode: 'rwxr-xr-x',
  strip_directory: true,
)
src/root/Makefile.am (new file, 39 lines)
@@ -0,0 +1,39 @@
TEMPLATES = $(wildcard *.tt)
STATIC = \
  $(wildcard static/images/*) \
  $(wildcard static/css/*) \
  static/js/bootbox.min.js \
  static/js/popper.min.js \
  static/js/common.js \
  static/js/jquery/jquery-3.4.1.min.js \
  static/js/jquery/jquery-ui-1.10.4.min.js

FLOT = flot-0.8.3.zip
BOOTSTRAP = bootstrap-4.3.1-dist.zip
FONTAWESOME = fontawesome-free-5.10.2-web.zip

ZIPS = $(FLOT) $(BOOTSTRAP) $(FONTAWESOME)

EXTRA_DIST = $(TEMPLATES) $(STATIC) $(ZIPS)

hydradir = $(libexecdir)/hydra/root
nobase_hydra_DATA = $(EXTRA_DIST)

all:
	mkdir -p $(srcdir)/static/js
	unzip -u -d $(srcdir)/static $(BOOTSTRAP)
	rm -rf $(srcdir)/static/bootstrap
	mv $(srcdir)/static/$(basename $(BOOTSTRAP)) $(srcdir)/static/bootstrap
	unzip -u -d $(srcdir)/static/js $(FLOT)
	unzip -u -d $(srcdir)/static $(FONTAWESOME)
	rm -rf $(srcdir)/static/fontawesome
	mv $(srcdir)/static/$(basename $(FONTAWESOME)) $(srcdir)/static/fontawesome

install-data-local: $(ZIPS)
	mkdir -p $(hydradir)/static/js
	cp -prvd $(srcdir)/static/js/* $(hydradir)/static/js
	mkdir -p $(hydradir)/static/bootstrap
	cp -prvd $(srcdir)/static/bootstrap/* $(hydradir)/static/bootstrap
	mkdir -p $(hydradir)/static/fontawesome/{css,webfonts}
	cp -prvd $(srcdir)/static/fontawesome/css/* $(hydradir)/static/fontawesome/css
	cp -prvd $(srcdir)/static/fontawesome/webfonts/* $(hydradir)/static/fontawesome/webfonts
BIN src/root/bootstrap-4.3.1-dist.zip (new binary file, not shown)
@@ -411,7 +411,7 @@ BLOCK renderInputDiff; %]
      [% ELSIF bi1.uri == bi2.uri && bi1.revision != bi2.revision %]
        [% IF bi1.type == "git" %]
          <tr><td>
            <b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 12) _ ' to ' _ bi2.revision.substr(0, 12)) %]</tt>
            <b>[% bi1.name %]</b></td><td><tt>[% INCLUDE renderDiffUri contents=(bi1.revision.substr(0, 8) _ ' to ' _ bi2.revision.substr(0, 8)) %]</tt>
          </td></tr>
        [% ELSE %]
          <tr><td>
BIN src/root/flot-0.8.3.zip (new binary file, not shown)
BIN src/root/fontawesome-free-5.10.2-web.zip (new binary file, not shown)
@@ -48,16 +48,16 @@ c.uri_for(c.controller('JobsetEval').action_for('view'),
            <a class="nav-link dropdown-toggle" data-toggle="dropdown" href="#">Actions</a>
            <div class="dropdown-menu">
              <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('create_jobset'), [eval.id]) %]">Create a jobset from this evaluation</a>
              [% IF totalQueued > 0 %]
              [% IF unfinished.size > 0 %]
                <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('cancel'), [eval.id]) %]">Cancel all scheduled builds</a>
              [% END %]
              [% IF totalFailed > 0 %]
              [% IF aborted.size > 0 || stillFail.size > 0 || nowFail.size > 0 || failed.size > 0 %]
                <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_failed'), [eval.id]) %]">Restart all failed builds</a>
              [% END %]
              [% IF totalAborted > 0 %]
              [% IF aborted.size > 0 %]
                <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('restart_aborted'), [eval.id]) %]">Restart all aborted builds</a>
              [% END %]
              [% IF totalQueued > 0 %]
              [% IF unfinished.size > 0 %]
                <a class="dropdown-item" href="[% c.uri_for(c.controller('JobsetEval').action_for('bump'), [eval.id]) %]">Bump builds to front of queue</a>
              [% END %]
            </div>
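The two sides gate the same menu items on different stash data: one uses the total* counters set in view_GET and buildDiff above, the other derives availability from the sizes of the per-category arrays. A sketch of the stash each side expects (illustrative values only):

    # Counter-based side: integers computed once in buildDiff.
    $c->stash->{totalQueued}  = 3;   # shows "Cancel all scheduled builds" and "Bump"
    $c->stash->{totalFailed}  = 1;   # shows "Restart all failed builds"
    $c->stash->{totalAborted} = 0;   # hides "Restart all aborted builds"

    # Size-based side: the template tests e.g. unfinished.size > 0 instead,
    # so only the arrays themselves need to be stashed.
    $c->stash->{unfinished} = [ $build1, $build2, $build3 ];

The counter approach lets buildDiff count failures across its categories (stillFail, nowFail, failed, aborted) without the template having to enumerate them.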
src/root/static/bootstrap/css/bootstrap-grid.css (vendored, 3719 lines): diff suppressed because it is too large
Three further file diffs suppressed because one or more lines are too long.
src/root/static/bootstrap/css/bootstrap-reboot.css (vendored, 331 lines)
@@ -1,331 +0,0 @@
/*!
 * Bootstrap Reboot v4.3.1 (https://getbootstrap.com/)
 * Copyright 2011-2019 The Bootstrap Authors
 * Copyright 2011-2019 Twitter, Inc.
 * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
 * Forked from Normalize.css, licensed MIT (https://github.com/necolas/normalize.css/blob/master/LICENSE.md)
 */
*,
*::before,
*::after {
  box-sizing: border-box;
}

html {
  font-family: sans-serif;
  line-height: 1.15;
  -webkit-text-size-adjust: 100%;
  -webkit-tap-highlight-color: rgba(0, 0, 0, 0);
}

article, aside, figcaption, figure, footer, header, hgroup, main, nav, section {
  display: block;
}

body {
  margin: 0;
  font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, "Noto Sans", sans-serif, "Apple Color Emoji", "Segoe UI Emoji", "Segoe UI Symbol", "Noto Color Emoji";
  font-size: 1rem;
  font-weight: 400;
  line-height: 1.5;
  color: #212529;
  text-align: left;
  background-color: #fff;
}

[tabindex="-1"]:focus {
  outline: 0 !important;
}

hr {
  box-sizing: content-box;
  height: 0;
  overflow: visible;
}

h1, h2, h3, h4, h5, h6 {
  margin-top: 0;
  margin-bottom: 0.5rem;
}

p {
  margin-top: 0;
  margin-bottom: 1rem;
}

abbr[title],
abbr[data-original-title] {
  text-decoration: underline;
  -webkit-text-decoration: underline dotted;
  text-decoration: underline dotted;
  cursor: help;
  border-bottom: 0;
  -webkit-text-decoration-skip-ink: none;
  text-decoration-skip-ink: none;
}

address {
  margin-bottom: 1rem;
  font-style: normal;
  line-height: inherit;
}

ol,
ul,
dl {
  margin-top: 0;
  margin-bottom: 1rem;
}

ol ol,
ul ul,
ol ul,
ul ol {
  margin-bottom: 0;
}

dt {
  font-weight: 700;
}

dd {
  margin-bottom: .5rem;
  margin-left: 0;
}

blockquote {
  margin: 0 0 1rem;
}

b,
strong {
  font-weight: bolder;
}

small {
  font-size: 80%;
}

sub,
sup {
  position: relative;
  font-size: 75%;
  line-height: 0;
  vertical-align: baseline;
}

sub {
  bottom: -.25em;
}

sup {
  top: -.5em;
}

a {
  color: #007bff;
  text-decoration: none;
  background-color: transparent;
}

a:hover {
  color: #0056b3;
  text-decoration: underline;
}

a:not([href]):not([tabindex]) {
  color: inherit;
  text-decoration: none;
}

a:not([href]):not([tabindex]):hover, a:not([href]):not([tabindex]):focus {
  color: inherit;
  text-decoration: none;
}

a:not([href]):not([tabindex]):focus {
  outline: 0;
}

pre,
code,
kbd,
samp {
  font-family: SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
  font-size: 1em;
}

pre {
  margin-top: 0;
  margin-bottom: 1rem;
  overflow: auto;
}

figure {
  margin: 0 0 1rem;
}

img {
  vertical-align: middle;
  border-style: none;
}

svg {
  overflow: hidden;
  vertical-align: middle;
}

table {
  border-collapse: collapse;
}

caption {
  padding-top: 0.75rem;
  padding-bottom: 0.75rem;
  color: #6c757d;
  text-align: left;
  caption-side: bottom;
}

th {
  text-align: inherit;
}

label {
  display: inline-block;
  margin-bottom: 0.5rem;
}

button {
  border-radius: 0;
}

button:focus {
  outline: 1px dotted;
  outline: 5px auto -webkit-focus-ring-color;
}

input,
button,
select,
optgroup,
textarea {
  margin: 0;
  font-family: inherit;
  font-size: inherit;
  line-height: inherit;
}

button,
input {
  overflow: visible;
}

button,
select {
  text-transform: none;
}

select {
  word-wrap: normal;
}

button,
[type="button"],
[type="reset"],
[type="submit"] {
  -webkit-appearance: button;
}

button:not(:disabled),
[type="button"]:not(:disabled),
[type="reset"]:not(:disabled),
[type="submit"]:not(:disabled) {
  cursor: pointer;
}

button::-moz-focus-inner,
[type="button"]::-moz-focus-inner,
[type="reset"]::-moz-focus-inner,
[type="submit"]::-moz-focus-inner {
  padding: 0;
  border-style: none;
}

input[type="radio"],
input[type="checkbox"] {
  box-sizing: border-box;
  padding: 0;
}

input[type="date"],
input[type="time"],
input[type="datetime-local"],
input[type="month"] {
  -webkit-appearance: listbox;
}

textarea {
  overflow: auto;
  resize: vertical;
}

fieldset {
  min-width: 0;
  padding: 0;
  margin: 0;
  border: 0;
}

legend {
  display: block;
  width: 100%;
  max-width: 100%;
  padding: 0;
  margin-bottom: .5rem;
  font-size: 1.5rem;
  line-height: inherit;
  color: inherit;
  white-space: normal;
}

progress {
  vertical-align: baseline;
}

[type="number"]::-webkit-inner-spin-button,
[type="number"]::-webkit-outer-spin-button {
  height: auto;
}

[type="search"] {
  outline-offset: -2px;
  -webkit-appearance: none;
}

[type="search"]::-webkit-search-decoration {
  -webkit-appearance: none;
}

::-webkit-file-upload-button {
  font: inherit;
  -webkit-appearance: button;
}

output {
  display: inline-block;
}

summary {
  display: list-item;
  cursor: pointer;
}

template {
  display: none;
}

[hidden] {
  display: none !important;
}
/*# sourceMappingURL=bootstrap-reboot.css.map */
File diff suppressed because one or more lines are too long
@@ -1,8 +0,0 @@
/*!
 * Bootstrap Reboot v4.3.1 (https://getbootstrap.com/)
 * Copyright 2011-2019 The Bootstrap Authors
 * Copyright 2011-2019 Twitter, Inc.
 * Licensed under MIT (https://github.com/twbs/bootstrap/blob/master/LICENSE)
 * Forked from Normalize.css, licensed MIT (https://github.com/necolas/normalize.css/blob/master/LICENSE.md)
 */*,::after,::before{box-sizing:border-box}html{font-family:sans-serif;line-height:1.15;-webkit-text-size-adjust:100%;-webkit-tap-highlight-color:transparent}article,aside,figcaption,figure,footer,header,hgroup,main,nav,section{display:block}body{margin:0;font-family:-apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,"Helvetica Neue",Arial,"Noto Sans",sans-serif,"Apple Color Emoji","Segoe UI Emoji","Segoe UI Symbol","Noto Color Emoji";font-size:1rem;font-weight:400;line-height:1.5;color:#212529;text-align:left;background-color:#fff}[tabindex="-1"]:focus{outline:0!important}hr{box-sizing:content-box;height:0;overflow:visible}h1,h2,h3,h4,h5,h6{margin-top:0;margin-bottom:.5rem}p{margin-top:0;margin-bottom:1rem}abbr[data-original-title],abbr[title]{text-decoration:underline;-webkit-text-decoration:underline dotted;text-decoration:underline dotted;cursor:help;border-bottom:0;-webkit-text-decoration-skip-ink:none;text-decoration-skip-ink:none}address{margin-bottom:1rem;font-style:normal;line-height:inherit}dl,ol,ul{margin-top:0;margin-bottom:1rem}ol ol,ol ul,ul ol,ul ul{margin-bottom:0}dt{font-weight:700}dd{margin-bottom:.5rem;margin-left:0}blockquote{margin:0 0 1rem}b,strong{font-weight:bolder}small{font-size:80%}sub,sup{position:relative;font-size:75%;line-height:0;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}a{color:#007bff;text-decoration:none;background-color:transparent}a:hover{color:#0056b3;text-decoration:underline}a:not([href]):not([tabindex]){color:inherit;text-decoration:none}a:not([href]):not([tabindex]):focus,a:not([href]):not([tabindex]):hover{color:inherit;text-decoration:none}a:not([href]):not([tabindex]):focus{outline:0}code,kbd,pre,samp{font-family:SFMono-Regular,Menlo,Monaco,Consolas,"Liberation Mono","Courier New",monospace;font-size:1em}pre{margin-top:0;margin-bottom:1rem;overflow:auto}figure{margin:0 0 1rem}img{vertical-align:middle;border-style:none}svg{overflow:hidden;vertical-align:middle}table{border-collapse:collapse}caption{padding-top:.75rem;padding-bottom:.75rem;color:#6c757d;text-align:left;caption-side:bottom}th{text-align:inherit}label{display:inline-block;margin-bottom:.5rem}button{border-radius:0}button:focus{outline:1px dotted;outline:5px auto -webkit-focus-ring-color}button,input,optgroup,select,textarea{margin:0;font-family:inherit;font-size:inherit;line-height:inherit}button,input{overflow:visible}button,select{text-transform:none}select{word-wrap:normal}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button}[type=button]:not(:disabled),[type=reset]:not(:disabled),[type=submit]:not(:disabled),button:not(:disabled){cursor:pointer}[type=button]::-moz-focus-inner,[type=reset]::-moz-focus-inner,[type=submit]::-moz-focus-inner,button::-moz-focus-inner{padding:0;border-style:none}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}input[type=date],input[type=datetime-local],input[type=month],input[type=time]{-webkit-appearance:listbox}textarea{overflow:auto;resize:vertical}fieldset{min-width:0;padding:0;margin:0;border:0}legend{display:block;width:100%;max-width:100%;padding:0;margin-bottom:.5rem;font-size:1.5rem;line-height:inherit;color:inherit;white-space:normal}progress{vertical-align:baseline}[type=number]::-webkit-inner-spin-button,[type=number]::-webkit-outer-spin-button{height:auto}[type=search]{outline-offset:-2px;-webkit-appearance:none}[type=search]::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{font:inherit;-webkit-appearance:button}output{display:inline-block}summary{display:list-item;cursor:pointer}template{display:none}[hidden]{display:none!important}
/*# sourceMappingURL=bootstrap-reboot.min.css.map */
File diff suppressed because one or more lines are too long
src/root/static/bootstrap/css/bootstrap.css (vendored, 10038 lines): diff suppressed because it is too large
Three further file diffs suppressed because one or more lines are too long.
src/root/static/bootstrap/js/bootstrap.bundle.js (vendored, 7013 lines): diff suppressed because it is too large
Three further file diffs suppressed because one or more lines are too long.
src/root/static/bootstrap/js/bootstrap.js (vendored, 4435 lines): diff suppressed because it is too large
Three further file diffs suppressed because one or more lines are too long.
@@ -1,34 +0,0 @@
Font Awesome Free License
-------------------------

Font Awesome Free is free, open source, and GPL friendly. You can use it for
commercial projects, open source projects, or really almost whatever you want.
Full Font Awesome Free license: https://fontawesome.com/license/free.

# Icons: CC BY 4.0 License (https://creativecommons.org/licenses/by/4.0/)
In the Font Awesome Free download, the CC BY 4.0 license applies to all icons
packaged as SVG and JS file types.

# Fonts: SIL OFL 1.1 License (https://scripts.sil.org/OFL)
In the Font Awesome Free download, the SIL OFL license applies to all icons
packaged as web and desktop font files.

# Code: MIT License (https://opensource.org/licenses/MIT)
In the Font Awesome Free download, the MIT license applies to all non-font and
non-icon files.

# Attribution
Attribution is required by MIT, SIL OFL, and CC BY licenses. Downloaded Font
Awesome Free files already contain embedded comments with sufficient
attribution, so you shouldn't need to do anything additional when using these
files normally.

We've kept attribution comments terse, so we ask that you do not actively work
to remove them from files, especially code. They're a great way for folks to
learn about Font Awesome.

# Brand Icons
All brand icons are trademarks of their respective owners. The use of these
trademarks does not indicate endorsement of the trademark holder by Font
Awesome, nor vice versa. **Please do not use brand logos for any purpose except
to represent the company, product, or service to which they refer.**
src/root/static/fontawesome/css/all.css (vendored, 4396 lines): diff suppressed because it is too large
src/root/static/fontawesome/css/all.min.css (vendored, 5 lines): diff suppressed because one or more lines are too long
src/root/static/fontawesome/css/brands.css (vendored, 14 lines)
@ -1,14 +0,0 @@
|
|||
/*!
|
||||
* Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
|
||||
* License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
|
||||
*/
|
||||
@font-face {
|
||||
font-family: 'Font Awesome 5 Brands';
|
||||
font-style: normal;
|
||||
font-weight: normal;
|
||||
font-display: auto;
|
||||
src: url("../webfonts/fa-brands-400.eot");
|
||||
src: url("../webfonts/fa-brands-400.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-brands-400.woff2") format("woff2"), url("../webfonts/fa-brands-400.woff") format("woff"), url("../webfonts/fa-brands-400.ttf") format("truetype"), url("../webfonts/fa-brands-400.svg#fontawesome") format("svg"); }
|
||||
|
||||
.fab {
|
||||
font-family: 'Font Awesome 5 Brands'; }
|
|
@@ -1,5 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
@font-face{font-family:"Font Awesome 5 Brands";font-style:normal;font-weight:normal;font-display:auto;src:url(../webfonts/fa-brands-400.eot);src:url(../webfonts/fa-brands-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-brands-400.woff2) format("woff2"),url(../webfonts/fa-brands-400.woff) format("woff"),url(../webfonts/fa-brands-400.ttf) format("truetype"),url(../webfonts/fa-brands-400.svg#fontawesome) format("svg")}.fab{font-family:"Font Awesome 5 Brands"}
src/root/static/fontawesome/css/fontawesome.css (vendored, 4363 lines): diff suppressed because it is too large
One further file diff suppressed because one or more lines are too long.
src/root/static/fontawesome/css/regular.css (vendored, 15 lines)
@@ -1,15 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
@font-face {
  font-family: 'Font Awesome 5 Free';
  font-style: normal;
  font-weight: 400;
  font-display: auto;
  src: url("../webfonts/fa-regular-400.eot");
  src: url("../webfonts/fa-regular-400.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-regular-400.woff2") format("woff2"), url("../webfonts/fa-regular-400.woff") format("woff"), url("../webfonts/fa-regular-400.ttf") format("truetype"), url("../webfonts/fa-regular-400.svg#fontawesome") format("svg"); }

.far {
  font-family: 'Font Awesome 5 Free';
  font-weight: 400; }
@@ -1,5 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:400;font-display:auto;src:url(../webfonts/fa-regular-400.eot);src:url(../webfonts/fa-regular-400.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-regular-400.woff2) format("woff2"),url(../webfonts/fa-regular-400.woff) format("woff"),url(../webfonts/fa-regular-400.ttf) format("truetype"),url(../webfonts/fa-regular-400.svg#fontawesome) format("svg")}.far{font-family:"Font Awesome 5 Free";font-weight:400}
src/root/static/fontawesome/css/solid.css (vendored, 16 lines)
@@ -1,16 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
@font-face {
  font-family: 'Font Awesome 5 Free';
  font-style: normal;
  font-weight: 900;
  font-display: auto;
  src: url("../webfonts/fa-solid-900.eot");
  src: url("../webfonts/fa-solid-900.eot?#iefix") format("embedded-opentype"), url("../webfonts/fa-solid-900.woff2") format("woff2"), url("../webfonts/fa-solid-900.woff") format("woff"), url("../webfonts/fa-solid-900.ttf") format("truetype"), url("../webfonts/fa-solid-900.svg#fontawesome") format("svg"); }

.fa,
.fas {
  font-family: 'Font Awesome 5 Free';
  font-weight: 900; }
@@ -1,5 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
@font-face{font-family:"Font Awesome 5 Free";font-style:normal;font-weight:900;font-display:auto;src:url(../webfonts/fa-solid-900.eot);src:url(../webfonts/fa-solid-900.eot?#iefix) format("embedded-opentype"),url(../webfonts/fa-solid-900.woff2) format("woff2"),url(../webfonts/fa-solid-900.woff) format("woff"),url(../webfonts/fa-solid-900.ttf) format("truetype"),url(../webfonts/fa-solid-900.svg#fontawesome) format("svg")}.fa,.fas{font-family:"Font Awesome 5 Free";font-weight:900}
src/root/static/fontawesome/css/svg-with-js.css (vendored, 371 lines)
@@ -1,371 +0,0 @@
/*!
 * Font Awesome Free 5.10.2 by @fontawesome - https://fontawesome.com
 * License - https://fontawesome.com/license/free (Icons: CC BY 4.0, Fonts: SIL OFL 1.1, Code: MIT License)
 */
svg:not(:root).svg-inline--fa {
  overflow: visible; }

.svg-inline--fa {
  display: inline-block;
  font-size: inherit;
  height: 1em;
  overflow: visible;
  vertical-align: -.125em; }
  .svg-inline--fa.fa-lg {
    vertical-align: -.225em; }
  .svg-inline--fa.fa-w-1 {
    width: 0.0625em; }
  .svg-inline--fa.fa-w-2 {
    width: 0.125em; }
  .svg-inline--fa.fa-w-3 {
    width: 0.1875em; }
  .svg-inline--fa.fa-w-4 {
    width: 0.25em; }
  .svg-inline--fa.fa-w-5 {
    width: 0.3125em; }
  .svg-inline--fa.fa-w-6 {
    width: 0.375em; }
  .svg-inline--fa.fa-w-7 {
    width: 0.4375em; }
  .svg-inline--fa.fa-w-8 {
    width: 0.5em; }
  .svg-inline--fa.fa-w-9 {
    width: 0.5625em; }
  .svg-inline--fa.fa-w-10 {
    width: 0.625em; }
  .svg-inline--fa.fa-w-11 {
    width: 0.6875em; }
  .svg-inline--fa.fa-w-12 {
    width: 0.75em; }
  .svg-inline--fa.fa-w-13 {
    width: 0.8125em; }
  .svg-inline--fa.fa-w-14 {
    width: 0.875em; }
  .svg-inline--fa.fa-w-15 {
    width: 0.9375em; }
  .svg-inline--fa.fa-w-16 {
    width: 1em; }
  .svg-inline--fa.fa-w-17 {
    width: 1.0625em; }
  .svg-inline--fa.fa-w-18 {
    width: 1.125em; }
  .svg-inline--fa.fa-w-19 {
    width: 1.1875em; }
  .svg-inline--fa.fa-w-20 {
    width: 1.25em; }
  .svg-inline--fa.fa-pull-left {
    margin-right: .3em;
    width: auto; }
  .svg-inline--fa.fa-pull-right {
    margin-left: .3em;
    width: auto; }
  .svg-inline--fa.fa-border {
    height: 1.5em; }
  .svg-inline--fa.fa-li {
    width: 2em; }
  .svg-inline--fa.fa-fw {
    width: 1.25em; }

.fa-layers svg.svg-inline--fa {
  bottom: 0;
  left: 0;
  margin: auto;
  position: absolute;
  right: 0;
  top: 0; }

.fa-layers {
  display: inline-block;
  height: 1em;
  position: relative;
  text-align: center;
  vertical-align: -.125em;
  width: 1em; }
  .fa-layers svg.svg-inline--fa {
    -webkit-transform-origin: center center;
    transform-origin: center center; }

.fa-layers-text, .fa-layers-counter {
  display: inline-block;
  position: absolute;
  text-align: center; }

.fa-layers-text {
  left: 50%;
  top: 50%;
  -webkit-transform: translate(-50%, -50%);
  transform: translate(-50%, -50%);
  -webkit-transform-origin: center center;
  transform-origin: center center; }

.fa-layers-counter {
  background-color: #ff253a;
  border-radius: 1em;
  -webkit-box-sizing: border-box;
  box-sizing: border-box;
  color: #fff;
  height: 1.5em;
  line-height: 1;
  max-width: 5em;
  min-width: 1.5em;
  overflow: hidden;
  padding: .25em;
  right: 0;
  text-overflow: ellipsis;
  top: 0;
  -webkit-transform: scale(0.25);
  transform: scale(0.25);
  -webkit-transform-origin: top right;
  transform-origin: top right; }

.fa-layers-bottom-right {
  bottom: 0;
  right: 0;
  top: auto;
  -webkit-transform: scale(0.25);
  transform: scale(0.25);
  -webkit-transform-origin: bottom right;
  transform-origin: bottom right; }

.fa-layers-bottom-left {
  bottom: 0;
  left: 0;
  right: auto;
  top: auto;
  -webkit-transform: scale(0.25);
  transform: scale(0.25);
  -webkit-transform-origin: bottom left;
  transform-origin: bottom left; }

.fa-layers-top-right {
  right: 0;
  top: 0;
  -webkit-transform: scale(0.25);
  transform: scale(0.25);
  -webkit-transform-origin: top right;
  transform-origin: top right; }

.fa-layers-top-left {
  left: 0;
  right: auto;
  top: 0;
  -webkit-transform: scale(0.25);
  transform: scale(0.25);
  -webkit-transform-origin: top left;
  transform-origin: top left; }

.fa-lg {
  font-size: 1.33333em;
  line-height: 0.75em;
  vertical-align: -.0667em; }

.fa-xs {
  font-size: .75em; }

.fa-sm {
  font-size: .875em; }

.fa-1x {
  font-size: 1em; }

.fa-2x {
  font-size: 2em; }

.fa-3x {
  font-size: 3em; }

.fa-4x {
  font-size: 4em; }

.fa-5x {
  font-size: 5em; }

.fa-6x {
  font-size: 6em; }

.fa-7x {
  font-size: 7em; }

.fa-8x {
  font-size: 8em; }

.fa-9x {
  font-size: 9em; }

.fa-10x {
  font-size: 10em; }

.fa-fw {
  text-align: center;
  width: 1.25em; }

.fa-ul {
  list-style-type: none;
  margin-left: 2.5em;
  padding-left: 0; }
  .fa-ul > li {
    position: relative; }

.fa-li {
  left: -2em;
  position: absolute;
  text-align: center;
  width: 2em;
  line-height: inherit; }

.fa-border {
  border: solid 0.08em #eee;
  border-radius: .1em;
  padding: .2em .25em .15em; }

.fa-pull-left {
  float: left; }

.fa-pull-right {
  float: right; }

.fa.fa-pull-left,
.fas.fa-pull-left,
.far.fa-pull-left,
.fal.fa-pull-left,
.fab.fa-pull-left {
  margin-right: .3em; }

.fa.fa-pull-right,
.fas.fa-pull-right,
.far.fa-pull-right,
.fal.fa-pull-right,
.fab.fa-pull-right {
  margin-left: .3em; }

.fa-spin {
  -webkit-animation: fa-spin 2s infinite linear;
  animation: fa-spin 2s infinite linear; }

.fa-pulse {
  -webkit-animation: fa-spin 1s infinite steps(8);
  animation: fa-spin 1s infinite steps(8); }

@-webkit-keyframes fa-spin {
  0% {
    -webkit-transform: rotate(0deg);
    transform: rotate(0deg); }
  100% {
    -webkit-transform: rotate(360deg);
    transform: rotate(360deg); } }

@keyframes fa-spin {
  0% {
    -webkit-transform: rotate(0deg);
    transform: rotate(0deg); }
  100% {
    -webkit-transform: rotate(360deg);
    transform: rotate(360deg); } }

.fa-rotate-90 {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=1)";
  -webkit-transform: rotate(90deg);
  transform: rotate(90deg); }

.fa-rotate-180 {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2)";
  -webkit-transform: rotate(180deg);
  transform: rotate(180deg); }

.fa-rotate-270 {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=3)";
  -webkit-transform: rotate(270deg);
  transform: rotate(270deg); }

.fa-flip-horizontal {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=0, mirror=1)";
  -webkit-transform: scale(-1, 1);
  transform: scale(-1, 1); }

.fa-flip-vertical {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";
  -webkit-transform: scale(1, -1);
  transform: scale(1, -1); }

.fa-flip-both, .fa-flip-horizontal.fa-flip-vertical {
  -ms-filter: "progid:DXImageTransform.Microsoft.BasicImage(rotation=2, mirror=1)";
  -webkit-transform: scale(-1, -1);
  transform: scale(-1, -1); }

:root .fa-rotate-90,
:root .fa-rotate-180,
:root .fa-rotate-270,
:root .fa-flip-horizontal,
:root .fa-flip-vertical,
:root .fa-flip-both {
  -webkit-filter: none;
  filter: none; }

.fa-stack {
  display: inline-block;
  height: 2em;
  position: relative;
  width: 2.5em; }

.fa-stack-1x,
.fa-stack-2x {
  bottom: 0;
  left: 0;
  margin: auto;
  position: absolute;
  right: 0;
  top: 0; }

.svg-inline--fa.fa-stack-1x {
  height: 1em;
  width: 1.25em; }

.svg-inline--fa.fa-stack-2x {
  height: 2em;
  width: 2.5em; }

.fa-inverse {
  color: #fff; }

.sr-only {
  border: 0;
  clip: rect(0, 0, 0, 0);
  height: 1px;
  margin: -1px;
  overflow: hidden;
  padding: 0;
  position: absolute;
  width: 1px; }

.sr-only-focusable:active, .sr-only-focusable:focus {
  clip: auto;
  height: auto;
  margin: 0;
  overflow: visible;
  position: static;
  width: auto; }

.svg-inline--fa .fa-primary {
  fill: var(--fa-primary-color, currentColor);
  opacity: 1;
  opacity: var(--fa-primary-opacity, 1); }

.svg-inline--fa .fa-secondary {
  fill: var(--fa-secondary-color, currentColor);
  opacity: 0.4;
  opacity: var(--fa-secondary-opacity, 0.4); }

.svg-inline--fa.fa-swap-opacity .fa-primary {
  opacity: 0.4;
  opacity: var(--fa-secondary-opacity, 0.4); }

.svg-inline--fa.fa-swap-opacity .fa-secondary {
  opacity: 1;
  opacity: var(--fa-primary-opacity, 1); }

.svg-inline--fa mask .fa-primary,
.svg-inline--fa mask .fa-secondary {
  fill: black; }

.fad.fa-inverse {
  color: #fff; }
File diff suppressed because one or more lines are too long
src/root/static/fontawesome/css/v4-shims.css (vendored, 2166 lines): diff suppressed because it is too large
One further file diff suppressed because one or more lines are too long.
(Remaining files: several binary files not shown; one further diff suppressed because it is too large; one image, 675 KiB, not shown.)
Some files were not shown because too many files have changed in this diff.