forked from lix-project/hydra

commit 3526d61ff2: Merge remote-tracking branch 'upstream/master' into split-buildRemote

.github/workflows/test.yml (vendored, 2 lines changed)
@@ -9,6 +9,6 @@ jobs:
       - uses: actions/checkout@v3
         with:
           fetch-depth: 0
-      - uses: cachix/install-nix-action@v16
+      - uses: cachix/install-nix-action@v17
       #- run: nix flake check
       - run: nix-build -A checks.x86_64-linux.build -A checks.x86_64-linux.validate-openapi
doc/architecture.md (new file, 129 lines)

This is a rough overview from informal discussions and explanations of the inner workings of Hydra.
You can use it as a guide to navigate the codebase or to ask questions.

## Architecture

### Components

- Postgres database
  - configuration
  - build queue
    - what is already built
    - what is going to be built
- `hydra-server`
  - Perl, Catalyst
  - web frontend
- `hydra-evaluator`
  - Perl, C++
  - fetches repositories
  - evaluates job sets
    - pointers to a repository
  - adds builds to the queue
- `hydra-queue-runner`
  - C++
  - monitors the queue
  - executes build steps
  - uploads build results
    - copy to a Nix store
- Nix store
  - contains `.drv`s
  - populated by `hydra-evaluator`
  - read by `hydra-queue-runner`
- destination Nix store
  - can be a binary cache
  - e.g. [cache.nixos.org](http://cache.nixos.org) or the same store again (for small Hydra instances)
- plugin architecture
  - extend evaluator for new kinds of repositories
    - e.g. fetch from `git`

### Database Schema

[https://github.com/NixOS/hydra/blob/master/src/sql/hydra.sql](https://github.com/NixOS/hydra/blob/master/src/sql/hydra.sql)

- `Jobsets`
  - populated by calling the Nix evaluator
  - every Nix derivation in `release.nix` is a Job
  - `flake`
    - URL to the flake, if the job is from a flake
    - single point of configuration for flake builds
    - the flake itself contains pointers to dependencies
    - for other builds we need more configuration data
- `JobsetInputs`
  - more configuration for a Job
- `JobsetInputAlts`
  - historical, where you could have more than one alternative for each input
  - it would have done the cross product of all possibilities
  - not used any more, as now every input is unique
  - originally that was to have alternative values for the system parameter
    - `x86_64-linux`, `x86_64-darwin`
    - turned out not to be a good idea, as job set names did not uniquely identify output
- `Builds`
  - queue: scheduled and finished builds
  - instance of a Job
  - corresponds to a top-level derivation
    - can have many dependencies that don't have a corresponding build
    - dependencies represented as `BuildSteps`
  - a Job is all the builds with a particular name, e.g.
    - `git.x86_64-linux` is a job
    - there may be multiple builds for that job
  - build ID: just an auto-increment number
  - building one thing can actually cause many (hundreds of) derivations to be built
  - for queued builds, the `drv` has to be present in the store
    - otherwise the build will fail, e.g. after garbage collection
- `BuildSteps`
  - corresponds to a derivation or substitution
  - are reused through the Nix store
  - may be duplicated for unique derivations due to how they relate to `Jobs`
- `BuildStepOutputs`
  - corresponds directly to derivation outputs
    - `out`, `dev`, ...
- `BuildProducts`
  - not a Nix concept
  - populated from a special file `$out/nix-support/hydra-build-products`
  - used to scrape parts of build results out to the web frontend
    - e.g. manuals, ISO images, etc.
- `BuildMetrics`
  - scrapes data from a magic location, similar to `BuildProducts`, to show fancy graphs
    - e.g. test coverage, build times, CPU utilization for a build
    - `$out/nix-support/hydra-metrics`
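To make the two special files concrete, here is a hedged sketch of a derivation that populates them; the package name is invented, and the row formats shown ("type subtype path" for products, "name value [unit]" for metrics) follow the convention described above:

```nix
{ pkgs ? import <nixpkgs> { } }:

# Hypothetical build exposing one product and one metric to Hydra.
pkgs.runCommand "example-with-products" { } ''
  mkdir -p $out/nix-support
  echo hello > $out/greeting.txt

  # BuildProducts rows: "type subtype path"
  echo "file txt $out/greeting.txt" > $out/nix-support/hydra-build-products

  # BuildMetrics rows: "name value [unit]"
  echo "greetingSize 6 bytes" > $out/nix-support/hydra-metrics
''
```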
- `BuildInputs`
  - probably obsolete
- `JobsetEvalMembers`
  - joins evaluations with jobs
  - huge table, 10k's of entries for one `nixpkgs` evaluation
  - can be imagined as a subset of the eval cache
    - could in principle use the eval cache

### `release.nix`

- Hydra-specific convention to describe the build
- should evaluate to an attribute set that contains derivations
- Hydra considers every attribute in that set a job
- every job needs a unique name
  - if you want to build for multiple platforms, you need to reflect that in the name
- Hydra does a deep traversal of the attribute set
  - just evaluating the names may take half an hour
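A minimal sketch of that convention (the `hello` package and attribute names are illustrative): each leaf derivation becomes a job, with the platform reflected in the job name:

```nix
{ nixpkgs ? <nixpkgs> }:

let
  pkgsFor = system: import nixpkgs { inherit system; };
in
{
  # Deep traversal makes these the jobs "hello.x86_64-linux"
  # and "hello.x86_64-darwin"; each leaf must be a derivation.
  hello.x86_64-linux = (pkgsFor "x86_64-linux").hello;
  hello.x86_64-darwin = (pkgsFor "x86_64-darwin").hello;
}
```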
## FAQ

Can we imagine Hydra to be a persistence layer for the build graph?

- partially, it lacks a lot of information
  - does not keep edges of the build graph

How does Hydra relate to `nix build`?

- reimplements the top-level Nix build loop, scheduling, etc.
- Hydra has to persist build results
- Hydra has more sophisticated remote build execution and scheduling than Nix

Is it conceptually possible to unify Hydra's capabilities with regular Nix?

- Nix does not have any scheduling, it just traverses the build graph
- Hydra has scheduling in terms of job set priorities, and tracks how much of a job set it has worked on
  - makes sure jobs don't starve each other
- Nix cannot dynamically add build jobs at runtime
  - [RFC 92](https://github.com/NixOS/rfcs/blob/master/rfcs/0092-plan-dynamism.md) should enable that
  - internally it is already possible, but there is no interface to do that
- the Hydra queue runner is a long-running process
  - Nix takes a static set of jobs, working it off at once
@@ -7,6 +7,7 @@
 - [Hydra jobs](./jobs.md)
 - [Plugins](./plugins/README.md)
   - [Declarative Projects](./plugins/declarative-projects.md)
+  - [RunCommand](./plugins/RunCommand.md)
 - [Using the external API](api.md)
 - [Webhooks](webhooks.md)
 - [Monitoring Hydra](./monitoring/README.md)
@@ -102,6 +102,26 @@ in the hydra configuration file, as below:
 </hydra_notify>
 ```
 
+hydra-queue-runner's Prometheus service
+---------------------------------------
+
+hydra-queue-runner supports running a Prometheus webserver for metrics. The
+exporter's address defaults to exposing on `127.0.0.1:9198`, but is also
+configurable through the hydra configuration file and a command line argument,
+as below. A port of `:0` will make the exporter choose a random, available port.
+
+```conf
+queue_runner_metrics_address = 127.0.0.1:9198
+# or
+queue_runner_metrics_address = [::]:9198
+```
+
+```shell
+$ hydra-queue-runner --prometheus-address 127.0.0.1:9198
+# or
+$ hydra-queue-runner --prometheus-address [::]:9198
+```
+
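For context, one way to consume that endpoint is a NixOS Prometheus scrape job pointed at the configured address; the job name and target below are illustrative assumptions, not part of the change:

```nix
{
  # Scrape the hydra-queue-runner exporter at its default address.
  services.prometheus.scrapeConfigs = [
    {
      job_name = "hydra-queue-runner";
      static_configs = [ { targets = [ "127.0.0.1:9198" ]; } ];
    }
  ];
}
```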
 Using LDAP as authentication backend (optional)
 -----------------------------------------------
@@ -165,7 +185,7 @@ Example configuration:
     hydra_admin = admin
     # Allow all users in the dev group to restart jobs and cancel builds
     dev = restart-jobs
-    dev = cancel-builds
+    dev = cancel-build
   </role_mapping>
 </ldap>
 ```
@@ -92,7 +92,7 @@ On NixOS:
 
 ```nix
 {
-  nix.trustedUsers = [ "YOURUSER" ];
+  nix.settings.trusted-users = [ "YOURUSER" ];
 }
 ```
@@ -172,17 +172,6 @@ Sets Gitlab CI status.
 
 - `gitlab_authorization.<projectId>`
 
-## HipChat notification
-
-Sends hipchat chat notifications when a build finish.
-
-### Configuration options
-
-- `hipchat.[].jobs`
-- `hipchat.[].builds`
-- `hipchat.[].token`
-- `hipchat.[].notify`
-
 ## InfluxDB notification
 
 Writes InfluxDB events when a builds finished.
 
@@ -192,10 +181,12 @@ Writes InfluxDB events when a builds finished.
 - `influxdb.url`
 - `influxdb.db`
 
-## Run command
+## RunCommand
 
 Runs a shell command when the build is finished.
 
+See [The RunCommand Plugin](./RunCommand.md) for more information.
+
 ### Configuration options:
 
 - `runcommand.[].job`
doc/manual/src/plugins/RunCommand.md (new file, 83 lines)

## The RunCommand Plugin

Hydra supports executing a program after certain builds finish.
This behavior is disabled by default.

Hydra executes these commands under the `hydra-notify` service.

### Static Commands

Configure specific commands to execute after the specified matching job finishes.

#### Configuration

- `runcommand.[].job`

  A matcher for jobs to match in the format `project:jobset:job`. Defaults to `*:*:*`.

  **Note:** This matcher format is not a regular expression.
  The `*` is a wildcard for that entire section; partial matches are not supported.

- `runcommand.[].command`

  Command to run. Can use the `$HYDRA_JSON` environment variable to access information about the build.

### Example

```xml
<runcommand>
  job = myProject:*:*
  command = cat $HYDRA_JSON > /tmp/hydra-output
</runcommand>
```

### Dynamic Commands

Hydra can optionally run RunCommand hooks defined dynamically by the jobset. In
order to enable dynamic commands, you must enable this feature in your
`hydra.conf`, *as well as* in the parent project and jobset configuration.

#### Behavior

Hydra will execute any program defined under the `runCommandHook` attribute set. These jobs must have a single output named `out`, and that output must be an executable file located directly at `$out`.

#### Security Properties

Safely deploying dynamic commands requires careful design of your Hydra jobs. Allowing arbitrary users to define attributes in your top-level attribute set will allow that user to execute code on your Hydra.

If a jobset has dynamic commands enabled, you must ensure only trusted users can define top-level attributes.

#### Configuration

- `dynamicruncommand.enable`

  Set to 1 to enable dynamic RunCommand program execution.

#### Example

In your Hydra configuration, specify:

```xml
<dynamicruncommand>
  enable = 1
</dynamicruncommand>
```

Then create a job named `runCommandHook.example` in your jobset:

```
{ pkgs, ... }: {
  runCommandHook = {
    recurseForDerivations = true;

    example = pkgs.writeScript "run-me" ''
      #!${pkgs.runtimeShell}

      ${pkgs.jq}/bin/jq . "$HYDRA_JSON"
    '';
  };
}
```

After the `runCommandHook.example` build finishes, that script will execute.
@@ -34,6 +34,7 @@ To configure a static declarative project, take the following steps:
         "checkinterval": 300,
         "schedulingshares": 100,
         "enableemail": false,
+        "enable_dynamic_run_command": false,
         "emailoverride": "",
         "keepnr": 3,
         "inputs": {

@@ -53,6 +54,7 @@ To configure a static declarative project, take the following steps:
         "checkinterval": 300,
         "schedulingshares": 100,
         "enableemail": false,
+        "enable_dynamic_run_command": false,
         "emailoverride": "",
         "keepnr": 3,
         "inputs": {

@@ -92,6 +94,7 @@ containing the configuration of the jobset, for example:
         "checkinterval": 300,
         "schedulingshares": 100,
         "enableemail": false,
+        "enable_dynamic_run_command": false,
         "emailoverride": "",
         "keepnr": 3,
         "inputs": {
|
||||||
analogous:
|
analogous:
|
||||||
|
|
||||||
* [Obtain an API token for your user](https://docs.gitea.io/en-us/api-usage/#authentication)
|
* [Obtain an API token for your user](https://docs.gitea.io/en-us/api-usage/#authentication)
|
||||||
* Add it to your `hydra.conf` like this:
|
* Add it to a file which only users in the hydra group can read like this: see [including files](configuration.md#including-files) for more information
|
||||||
|
```
|
||||||
|
<gitea_authorization>
|
||||||
|
your_username=your_token
|
||||||
|
</gitea_authorization>
|
||||||
|
```
|
||||||
|
|
||||||
|
* Include the file in your `hydra.conf` like this:
|
||||||
``` nix
|
``` nix
|
||||||
{
|
{
|
||||||
services.hydra-dev.extraConfig = ''
|
services.hydra-dev.extraConfig = ''
|
||||||
<gitea_authorization>
|
Include /path/to/secret/file
|
||||||
your_username=your_token
|
|
||||||
</gitea_authorization>
|
|
||||||
'';
|
'';
|
||||||
}
|
}
|
||||||
```
|
```
|
||||||
|
|
flake.lock (51 lines changed)

@@ -3,16 +3,15 @@
     "lowdown-src": {
       "flake": false,
       "locked": {
-        "lastModified": 1617481909,
-        "narHash": "sha256-SqnfOFuLuVRRNeVJr1yeEPJue/qWoCp5N6o5Kr///p4=",
+        "lastModified": 1633514407,
+        "narHash": "sha256-Dw32tiMjdK9t3ETl5fzGrutQTzh2rufgZV4A/BbxuD4=",
         "owner": "kristapsdz",
         "repo": "lowdown",
-        "rev": "148f9b2f586c41b7e36e73009db43ea68c7a1a4d",
+        "rev": "d2c2b44ff6c27b936ec27358a2653caaef8f73b8",
         "type": "github"
       },
       "original": {
         "owner": "kristapsdz",
-        "ref": "VERSION_0_8_4",
         "repo": "lowdown",
         "type": "github"
       }

@@ -20,34 +19,54 @@
     "nix": {
       "inputs": {
         "lowdown-src": "lowdown-src",
-        "nixpkgs": "nixpkgs"
+        "nixpkgs": "nixpkgs",
+        "nixpkgs-regression": "nixpkgs-regression"
       },
       "locked": {
-        "lastModified": 1628586117,
-        "narHash": "sha256-8hS4xy7fq3z9XZIMYm4sQi9SzhcYqEJfdbwgDePoWuc=",
+        "lastModified": 1661606874,
+        "narHash": "sha256-9+rpYzI+SmxJn+EbYxjGv68Ucp22bdFUSy/4LkHkkDQ=",
         "owner": "NixOS",
         "repo": "nix",
-        "rev": "a6ba313a0aac3b6e2fef434cb42d190a0849238e",
+        "rev": "11e45768b34fdafdcf019ddbd337afa16127ff0f",
         "type": "github"
       },
       "original": {
-        "id": "nix",
-        "type": "indirect"
+        "owner": "NixOS",
+        "ref": "2.11.0",
+        "repo": "nix",
+        "type": "github"
       }
     },
     "nixpkgs": {
       "locked": {
-        "lastModified": 1624862269,
-        "narHash": "sha256-JFcsh2+7QtfKdJFoPibLFPLgIW6Ycnv8Bts9a7RYme0=",
+        "lastModified": 1657693803,
+        "narHash": "sha256-G++2CJ9u0E7NNTAi9n5G8TdDmGJXcIjkJ3NF8cetQB8=",
         "owner": "NixOS",
         "repo": "nixpkgs",
-        "rev": "f77036342e2b690c61c97202bf48f2ce13acc022",
+        "rev": "365e1b3a859281cf11b94f87231adeabbdd878a2",
         "type": "github"
       },
       "original": {
-        "id": "nixpkgs",
-        "ref": "nixos-21.05-small",
-        "type": "indirect"
+        "owner": "NixOS",
+        "ref": "nixos-22.05-small",
+        "repo": "nixpkgs",
+        "type": "github"
       }
     },
+    "nixpkgs-regression": {
+      "locked": {
+        "lastModified": 1643052045,
+        "narHash": "sha256-uGJ0VXIhWKGXxkeNnq4TvV3CIOkUJ3PAoLZ3HMzNVMw=",
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
+        "type": "github"
+      },
+      "original": {
+        "owner": "NixOS",
+        "repo": "nixpkgs",
+        "rev": "215d4d0fd80ca5163643b03a33fde804a29cc1e2",
+        "type": "github"
+      }
+    },
     "root": {
flake.nix (432 lines changed)

@@ -2,15 +2,16 @@
   description = "A Nix-based continuous build system";
 
   inputs.nixpkgs.follows = "nix/nixpkgs";
+  inputs.nix.url = "github:NixOS/nix/2.11.0";
 
   outputs = { self, nixpkgs, nix }:
     let
 
-      version = "${builtins.readFile ./version.txt}.${builtins.substring 0 8 self.lastModifiedDate}.${self.shortRev or "DIRTY"}";
+      version = "${builtins.readFile ./version.txt}.${builtins.substring 0 8 (self.lastModifiedDate or "19700101")}.${self.shortRev or "DIRTY"}";
 
       pkgs = import nixpkgs {
         system = "x86_64-linux";
-        overlays = [ self.overlay nix.overlay ];
+        overlays = [ self.overlays.default nix.overlays.default ];
       };
 
       # NixOS configuration used for VM tests.

@@ -35,144 +36,12 @@
     rec {
 
       # A Nixpkgs overlay that provides a 'hydra' package.
-      overlay = final: prev: {
+      overlays.default = final: prev: {
 
         # Add LDAP dependencies that aren't currently found within nixpkgs.
         perlPackages = prev.perlPackages // {
-          TestPostgreSQL = final.perlPackages.buildPerlModule {
-            pname = "Test-PostgreSQL";
-            version = "1.28-1";
-            src = final.fetchFromGitHub {
-              owner = "grahamc";
-              repo = "Test-postgresql";
-              rev = "release-1.28-1";
-              hash = "sha256-SFC1C3q3dbcBos18CYd/s0TIcfJW4g04ld0+XQXVToQ=";
-            };
-            buildInputs = with final.perlPackages; [ ModuleBuildTiny TestSharedFork pkgs.postgresql ];
-            propagatedBuildInputs = with final.perlPackages; [ DBDPg DBI FileWhich FunctionParameters Moo TieHashMethod TryTiny TypeTiny ];
-
-            makeMakerFlags = "POSTGRES_HOME=${final.postgresql}";
-
-            meta = {
-              homepage = "https://github.com/grahamc/Test-postgresql/releases/tag/release-1.28-1";
-              description = "PostgreSQL runner for tests";
-              license = with final.lib.licenses; [ artistic2 ];
-            };
-          };
-
-          FunctionParameters = final.buildPerlPackage {
-            pname = "Function-Parameters";
-            version = "2.001003";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/M/MA/MAUKE/Function-Parameters-2.001003.tar.gz";
-              sha256 = "eaa22c6b43c02499ec7db0758c2dd218a3b2ab47a714b2bdf8010b5ee113c242";
-            };
-            buildInputs = with final.perlPackages; [ DirSelf TestFatal ];
-            meta = {
-              description = "Define functions and methods with parameter lists (\"subroutine signatures\")";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          CatalystPluginPrometheusTiny = final.buildPerlPackage {
-            pname = "Catalyst-Plugin-PrometheusTiny";
-            version = "0.005";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/S/SY/SYSPETE/Catalyst-Plugin-PrometheusTiny-0.005.tar.gz";
-              sha256 = "a42ef09efdc3053899ae007c41220d3ed7207582cc86e491b4f534539c992c5a";
-            };
-            buildInputs = with final.perlPackages; [ HTTPMessage Plack SubOverride TestDeep ];
-            propagatedBuildInputs = with final.perlPackages; [ CatalystRuntime Moose PrometheusTiny PrometheusTinyShared ];
-            meta = {
-              description = "Prometheus metrics for Catalyst";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          CryptArgon2 = final.perlPackages.buildPerlModule {
-            pname = "Crypt-Argon2";
-            version = "0.010";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/L/LE/LEONT/Crypt-Argon2-0.010.tar.gz";
-              sha256 = "3ea1c006f10ef66fd417e502a569df15c4cc1c776b084e35639751c41ce6671a";
-            };
-            nativeBuildInputs = [ pkgs.ld-is-cc-hook ];
-            meta = {
-              description = "Perl interface to the Argon2 key derivation functions";
-              license = final.lib.licenses.cc0;
-            };
-          };
-
-          CryptPassphrase = final.buildPerlPackage {
-            pname = "Crypt-Passphrase";
-            version = "0.003";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/L/LE/LEONT/Crypt-Passphrase-0.003.tar.gz";
-              sha256 = "685aa090f8179a86d6896212ccf8ccfde7a79cce857199bb14e2277a10d240ad";
-            };
-            meta = {
-              description = "A module for managing passwords in a cryptographically agile manner";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          CryptPassphraseArgon2 = final.buildPerlPackage {
-            pname = "Crypt-Passphrase-Argon2";
-            version = "0.002";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/L/LE/LEONT/Crypt-Passphrase-Argon2-0.002.tar.gz";
-              sha256 = "3906ff81697d13804ee21bd5ab78ffb1c4408b4822ce020e92ecf4737ba1f3a8";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ CryptArgon2 CryptPassphrase ];
-            meta = {
-              description = "An Argon2 encoder for Crypt::Passphrase";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          DataRandom = final.buildPerlPackage {
-            pname = "Data-Random";
-            version = "0.13";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/B/BA/BAREFOOT/Data-Random-0.13.tar.gz";
-              sha256 = "eb590184a8db28a7e49eab09e25f8650c33f1f668b6a472829de74a53256bfc0";
-            };
-            buildInputs = with final.perlPackages; [ FileShareDirInstall TestMockTime ];
-            meta = {
-              description = "Perl module to generate random data";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          DirSelf = final.buildPerlPackage {
-            pname = "Dir-Self";
-            version = "0.11";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/M/MA/MAUKE/Dir-Self-0.11.tar.gz";
-              sha256 = "e251a51abc7d9ba3e708f73c2aa208e09d47a0c528d6254710fa78cc8d6885b5";
-            };
-            meta = {
-              homepage = "https://github.com/mauke/Dir-Self";
-              description = "A __DIR__ constant for the directory your source file is in";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          HashSharedMem = final.perlPackages.buildPerlModule {
-            pname = "Hash-SharedMem";
-            version = "0.005";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/Z/ZE/ZEFRAM/Hash-SharedMem-0.005.tar.gz";
-              sha256 = "324776808602f7bdc44adaa937895365454029a926fa611f321c9bf6b940bb5e";
-            };
-            buildInputs = with final.perlPackages; [ ScalarString ];
-            meta = {
-              description = "Efficient shared mutable hash";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          PrometheusTiny = final.buildPerlPackage {
+          PrometheusTiny = final.perlPackages.buildPerlPackage {
             pname = "Prometheus-Tiny";
             version = "0.007";
             src = final.fetchurl {

@@ -187,269 +56,6 @@
             };
           };
-
-          PrometheusTinyShared = final.buildPerlPackage {
-            pname = "Prometheus-Tiny-Shared";
-            version = "0.023";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/R/RO/ROBN/Prometheus-Tiny-Shared-0.023.tar.gz";
-              sha256 = "7c2c72397be5d8e4839d1bf4033c1800f467f2509689673c6419df48794f2abe";
-            };
-            buildInputs = with final.perlPackages; [ DataRandom HTTPMessage Plack TestDifferences TestException ];
-            propagatedBuildInputs = with final.perlPackages; [ HashSharedMem JSONXS PrometheusTiny ];
-            meta = {
-              homepage = "https://github.com/robn/Prometheus-Tiny-Shared";
-              description = "A tiny Prometheus client with a shared database behind it";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          ReadonlyX = final.perlPackages.buildPerlModule {
-            pname = "ReadonlyX";
-            version = "1.04";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/S/SA/SANKO/ReadonlyX-1.04.tar.gz";
-              sha256 = "81bb97dba93ac6b5ccbce04a42c3590eb04557d75018773ee18d5a30fcf48188";
-            };
-            buildInputs = with final.perlPackages; [ ModuleBuildTiny TestFatal ];
-            meta = {
-              homepage = "https://github.com/sanko/readonly";
-              description = "Faster facility for creating read-only scalars, arrays, hashes";
-              license = final.lib.licenses.artistic2;
-            };
-          };
-
-          TieHashMethod = final.buildPerlPackage {
-            pname = "Tie-Hash-Method";
-            version = "0.02";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/Y/YV/YVES/Tie-Hash-Method-0.02.tar.gz";
-              sha256 = "d513fbb51413f7ca1e64a1bdce6194df7ec6076dea55066d67b950191eec32a9";
-            };
-            meta = {
-              description = "Tied hash with specific methods overriden by callbacks";
-              license = with final.lib.licenses; [ artistic1 ];
-            };
-          };
-
-          Test2Harness = final.buildPerlPackage {
-            pname = "Test2-Harness";
-            version = "1.000042";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/E/EX/EXODIST/Test2-Harness-1.000042.tar.gz";
-              sha256 = "aaf231a68af1a6ffd6a11188875fcf572e373e43c8285945227b9d687b43db2d";
-            };
-
-            checkPhase = ''
-              patchShebangs ./t ./scripts/yath
-              ./scripts/yath test -j $NIX_BUILD_CORES
-            '';
-
-            propagatedBuildInputs = with final.perlPackages; [ DataUUID Importer LongJump ScopeGuard TermTable Test2PluginMemUsage Test2PluginUUID Test2Suite gotofile ];
-            meta = {
-              description = "A new and improved test harness with better Test2 integration";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          Test2PluginMemUsage = prev.perlPackages.buildPerlPackage {
-            pname = "Test2-Plugin-MemUsage";
-            version = "0.002003";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/E/EX/EXODIST/Test2-Plugin-MemUsage-0.002003.tar.gz";
-              sha256 = "5e0662d5a823ae081641f5ce82843111eec1831cd31f883a6c6de54afdf87c25";
-            };
-            buildInputs = with final.perlPackages; [ Test2Suite ];
-            meta = {
-              description = "Collect and display memory usage information";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          Test2PluginUUID = prev.perlPackages.buildPerlPackage {
-            pname = "Test2-Plugin-UUID";
-            version = "0.002001";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/E/EX/EXODIST/Test2-Plugin-UUID-0.002001.tar.gz";
-              sha256 = "4c6c8d484d7153d8779dc155a992b203095b5c5aa1cfb1ee8bcedcd0601878c9";
-            };
-            buildInputs = with final.perlPackages;[ Test2Suite ];
-            propagatedBuildInputs = with final.perlPackages; [ DataUUID ];
-            meta = {
-              description = "Use REAL UUIDs in Test2";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          LongJump = final.buildPerlPackage {
-            pname = "Long-Jump";
-            version = "0.000001";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/E/EX/EXODIST/Long-Jump-0.000001.tar.gz";
-              sha256 = "d5d6456d86992b559d8f66fc90960f919292cd3803c13403faac575762c77af4";
-            };
-            buildInputs = with final.perlPackages; [ Test2Suite ];
-            meta = {
-              description = "Mechanism for returning to a specific point from a deeply nested stack";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          gotofile = final.buildPerlPackage {
-            pname = "goto-file";
-            version = "0.005";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/E/EX/EXODIST/goto-file-0.005.tar.gz";
-              sha256 = "c6cdd5ee4a6cdcbdbf314d92a4f9985dbcdf9e4258048cae76125c052aa31f77";
-            };
-            buildInputs = with final.perlPackages; [ Test2Suite ];
-            meta = {
-              description = "Stop parsing the current file and move on to a different one";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          NetLDAPServer = prev.perlPackages.buildPerlPackage {
-            pname = "Net-LDAP-Server";
-            version = "0.43";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/A/AA/AAR/Net-LDAP-Server-0.43.tar.gz";
-              sha256 = "0qmh3cri3fpccmwz6bhwp78yskrb3qmalzvqn0a23hqbsfs4qv6x";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ NetLDAP ConvertASN1 ];
-            meta = {
-              description = "LDAP server side protocol handling";
-              license = with final.lib.licenses; [ artistic1 ];
-            };
-          };
-
-          NetLDAPSID = prev.perlPackages.buildPerlPackage {
-            pname = "Net-LDAP-SID";
-            version = "0.0001";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/K/KA/KARMAN/Net-LDAP-SID-0.001.tar.gz";
-              sha256 = "1mnnpkmj8kpb7qw50sm8h4sd8py37ssy2xi5hhxzr5whcx0cvhm8";
-            };
-            meta = {
-              description = "Active Directory Security Identifier manipulation";
-              license = with final.lib.licenses; [ artistic2 ];
-            };
-          };
-
-          NetLDAPServerTest = prev.perlPackages.buildPerlPackage {
-            pname = "Net-LDAP-Server-Test";
-            version = "0.22";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/K/KA/KARMAN/Net-LDAP-Server-Test-0.22.tar.gz";
-              sha256 = "13idip7jky92v4adw60jn2gcc3zf339gsdqlnc9nnvqzbxxp285i";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ NetLDAP NetLDAPServer TestMore DataDump NetLDAPSID ];
-            meta = {
-              description = "test Net::LDAP code";
-              license = with final.lib.licenses; [ artistic1 ];
-            };
-          };
-
-          CatalystAuthenticationStoreLDAP = prev.perlPackages.buildPerlPackage {
-            pname = "Catalyst-Authentication-Store-LDAP";
-            version = "1.016";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/I/IL/ILMARI/Catalyst-Authentication-Store-LDAP-1.016.tar.gz";
-              sha256 = "0cm399vxqqf05cjgs1j5v3sk4qc6nmws5nfhf52qvpbwc4m82mq8";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ NetLDAP CatalystPluginAuthentication ClassAccessorFast ];
-            buildInputs = with final.perlPackages; [ TestMore TestMockObject TestException NetLDAPServerTest ];
-            meta = {
-              description = "Authentication from an LDAP Directory";
-              license = with final.lib.licenses; [ artistic1 ];
-            };
-          };
-
-          PerlCriticCommunity = prev.perlPackages.buildPerlModule {
-            pname = "Perl-Critic-Community";
-            version = "1.0.0";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/D/DB/DBOOK/Perl-Critic-Community-v1.0.0.tar.gz";
-              sha256 = "311b775da4193e9de94cf5225e993cc54dd096ae1e7ef60738cdae1d9b8854e7";
-            };
-            buildInputs = with final.perlPackages; [ ModuleBuildTiny ];
-            propagatedBuildInputs = with final.perlPackages; [ PPI PathTiny PerlCritic PerlCriticPolicyVariablesProhibitLoopOnHash PerlCriticPulp ];
-            meta = {
-              homepage = "https://github.com/Grinnz/Perl-Critic-Freenode";
-              description = "Community-inspired Perl::Critic policies";
-              license = final.lib.licenses.artistic2;
-            };
-          };
-
-          PerlCriticPolicyVariablesProhibitLoopOnHash = prev.perlPackages.buildPerlPackage {
-            pname = "Perl-Critic-Policy-Variables-ProhibitLoopOnHash";
-            version = "0.008";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/X/XS/XSAWYERX/Perl-Critic-Policy-Variables-ProhibitLoopOnHash-0.008.tar.gz";
-              sha256 = "12f5f0be96ea1bdc7828058577bd1c5c63ca23c17fac9c3709452b3dff5b84e0";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ PerlCritic ];
-            meta = {
-              description = "Don't write loops on hashes, only on keys and values of hashes";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          PerlCriticPulp = prev.perlPackages.buildPerlPackage {
-            pname = "Perl-Critic-Pulp";
-            version = "99";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/K/KR/KRYDE/Perl-Critic-Pulp-99.tar.gz";
-              sha256 = "b8fda842fcbed74d210257c0a284b6dc7b1d0554a47a3de5d97e7d542e23e7fe";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ IOString ListMoreUtils PPI PerlCritic PodMinimumVersion ];
-            meta = {
-              homepage = "http://user42.tuxfamily.org/perl-critic-pulp/index.html";
-              description = "Some add-on policies for Perl::Critic";
-              license = final.lib.licenses.gpl3Plus;
-            };
-          };
-
-          PodMinimumVersion = prev.perlPackages.buildPerlPackage {
-            pname = "Pod-MinimumVersion";
-            version = "50";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/K/KR/KRYDE/Pod-MinimumVersion-50.tar.gz";
-              sha256 = "0bd2812d9aacbd99bb71fa103a4bb129e955c138ba7598734207dc9fb67b5a6f";
-            };
-            propagatedBuildInputs = with final.perlPackages; [ IOString PodParser ];
-            meta = {
-              homepage = "http://user42.tuxfamily.org/pod-minimumversion/index.html";
-              description = "Determine minimum Perl version of POD directives";
-              license = final.lib.licenses.free;
-            };
-          };
-
-          StringCompareConstantTime = final.buildPerlPackage {
-            pname = "String-Compare-ConstantTime";
-            version = "0.321";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/F/FR/FRACTAL/String-Compare-ConstantTime-0.321.tar.gz";
-              sha256 = "0b26ba2b121d8004425d4485d1d46f59001c83763aa26624dff6220d7735d7f7";
-            };
-            meta = {
-              description = "Timing side-channel protected string compare";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
-          UUID4Tiny = final.buildPerlPackage {
-            pname = "UUID4-Tiny";
-            version = "0.002";
-            src = final.fetchurl {
-              url = "mirror://cpan/authors/id/C/CV/CVLIBRARY/UUID4-Tiny-0.002.tar.gz";
-              sha256 = "e7535b31e386d432dec7adde214348389e1d5cf753e7ed07f1ae04c4360840cf";
-            };
-            meta = {
-              description = "Cryptographically secure v4 UUIDs for Linux x64";
-              license = with final.lib.licenses; [ artistic1 gpl1Plus ];
-            };
-          };
-
         };

        hydra = with final; let

@@ -469,7 +75,6 @@
             CatalystPluginSessionStateCookie
             CatalystPluginSessionStoreFastMmap
             CatalystPluginStackTrace
-            CatalystPluginUnicodeEncoding
             CatalystTraitForRequestProxyBase
             CatalystViewDownload
             CatalystViewJSON

@@ -486,6 +91,7 @@
             DigestSHA1
             EmailMIME
             EmailSender
+            FileLibMagic
             FileSlurper
             FileWhich
             final.nix.perl-bindings

@@ -516,7 +122,6 @@
             TermSizeAny
             TermReadKey
             Test2Harness
-            TestMore
             TestPostgreSQL
             TextDiff
             TextTable

@@ -541,9 +146,9 @@
             libtool
             unzip
             nukeReferences
-            pkgconfig
+            pkg-config
             libpqxx
-            gitAndTools.topGit
+            top-git
             mercurial
             darcs
             subversion

@@ -561,13 +166,14 @@
             (if lib.versionAtLeast lib.version "20.03pre"
              then nlohmann_json
              else nlohmann_json.override { multipleHeaders = true; })
+            prometheus-cpp
           ];
 
           checkInputs = [
             cacert
             foreman
             glibcLocales
-            netcat-openbsd
+            libressl.nc
             openldap
             python3
           ];

@@ -582,11 +188,11 @@
             pixz
             gzip
             bzip2
-            lzma
+            xz
             gnutar
             unzip
             git
-            gitAndTools.topGit
+            top-git
             mercurial
             darcs
             gnused

@@ -641,7 +247,7 @@
 
           dontStrip = true;
 
-          meta.description = "Build of Hydra on ${system}";
+          meta.description = "Build of Hydra on ${final.stdenv.system}";
           passthru = { inherit perlDeps; inherit (final) nix; };
         };
       };

@@ -663,7 +269,7 @@
       tests.install.x86_64-linux =
         with import (nixpkgs + "/nixos/lib/testing-python.nix") { system = "x86_64-linux"; };
         simpleTest {
-          machine = hydraServer;
+          nodes.machine = hydraServer;
           testScript =
             ''
               machine.wait_for_job("hydra-init")

@@ -678,7 +284,7 @@
       tests.notifications.x86_64-linux =
         with import (nixpkgs + "/nixos/lib/testing-python.nix") { system = "x86_64-linux"; };
         simpleTest {
-          machine = { pkgs, ... }: {
+          nodes.machine = { pkgs, ... }: {
            imports = [ hydraServer ];
            services.hydra-dev.extraConfig = ''
              <influxdb>

@@ -735,7 +341,7 @@
      tests.gitea.x86_64-linux =
        with import (nixpkgs + "/nixos/lib/testing-python.nix") { system = "x86_64-linux"; };
        makeTest {
-          machine = { pkgs, ... }: {
+          nodes.machine = { pkgs, ... }: {
            imports = [ hydraServer ];
            services.hydra-dev.extraConfig = ''
              <gitea_authorization>

@@ -942,11 +548,11 @@
      checks.x86_64-linux.validate-openapi = hydraJobs.tests.validate-openapi;
 
      packages.x86_64-linux.hydra = pkgs.hydra;
-      defaultPackage.x86_64-linux = pkgs.hydra;
+      packages.x86_64-linux.default = pkgs.hydra;
 
      nixosModules.hydra = {
        imports = [ ./hydra-module.nix ];
-        nixpkgs.overlays = [ self.overlay nix.overlay ];
+        nixpkgs.overlays = [ self.overlays.default nix.overlays.default ];
      };
 
      nixosModules.hydraTest = {

@@ -997,7 +603,7 @@
          self.nixosModules.hydraTest
          self.nixosModules.hydraProxy
          {
-            system.configurationRevision = self.rev;
+            system.configurationRevision = self.lastModifiedDate;
 
            boot.isContainer = true;
            networking.useDHCP = false;
@@ -178,6 +178,9 @@ paths:
                 enabled:
                   description: when set to true the project gets scheduled for evaluation
                   type: boolean
+                enable_dynamic_run_command:
+                  description: when true the project's jobsets support executing dynamically defined RunCommand hooks. Requires the server and project's configuration to also enable dynamic RunCommand.
+                  type: boolean
                 visible:
                   description: when set to true the project is displayed in the web interface
                   type: boolean

@@ -607,6 +610,9 @@ components:
         enabled:
           description: when set to true the project gets scheduled for evaluation
           type: boolean
+        enable_dynamic_run_command:
+          description: when true the project's jobsets support executing dynamically defined RunCommand hooks. Requires the server and project's configuration to also enable dynamic RunCommand.
+          type: boolean
         declarative:
           description: declarative input configured for this project
           type: object

@@ -689,6 +695,9 @@ components:
         enableemail:
           description: when true the jobset sends emails when previously-successful builds fail
           type: boolean
+        enable_dynamic_run_command:
+          description: when true the jobset supports executing dynamically defined RunCommand hooks. Requires the server and project's configuration to also enable dynamic RunCommand.
+          type: boolean
         visible:
           description: when true the jobset is visible in the web frontend
           type: boolean
@@ -69,6 +69,7 @@ in
     package = mkOption {
       type = types.path;
       default = pkgs.hydra;
+      defaultText = literalExpression "pkgs.hydra";
       description = "The Hydra package.";
     };

@@ -171,6 +172,7 @@ in
     buildMachinesFiles = mkOption {
       type = types.listOf types.path;
       default = optional (config.nix.buildMachines != []) "/etc/nix/machines";
+      defaultText = literalExpression ''optional (config.nix.buildMachines != []) "/etc/nix/machines"'';
       example = [ "/etc/nix/machines" "/var/lib/hydra/provisioner/machines" ];
       description = "List of files containing build machines.";
     };

@@ -226,8 +228,12 @@ in
       useDefaultShell = true;
     };
 
-    nix.trustedUsers = [ "hydra-queue-runner" ];
+    nix.settings = {
+      trusted-users = [ "hydra-queue-runner" ];
+      gc-keep-outputs = true;
+      gc-keep-derivations = true;
+    };
 
    services.hydra-dev.extraConfig =
      ''
        using_frontend_proxy = 1

@@ -254,11 +260,6 @@ in
 
    environment.variables = hydraEnv;
 
-    nix.extraOptions = ''
-      gc-keep-outputs = true
-      gc-keep-derivations = true
-    '';
-
    systemd.services.hydra-init =
      { wantedBy = [ "multi-user.target" ];
        requires = optional haveLocalDB "postgresql.service";

@@ -266,17 +267,17 @@ in
        environment = env // {
          HYDRA_DBI = "${env.HYDRA_DBI};application_name=hydra-init";
        };
-        path = [ pkgs.utillinux ];
+        path = [ pkgs.util-linux ];
        preStart = ''
          ln -sf ${hydraConf} ${baseDir}/hydra.conf
 
          mkdir -m 0700 -p ${baseDir}/www
-          chown hydra-www.hydra ${baseDir}/www
+          chown hydra-www:hydra ${baseDir}/www
 
          mkdir -m 0700 -p ${baseDir}/queue-runner
          mkdir -m 0750 -p ${baseDir}/build-logs
          mkdir -m 0750 -p ${baseDir}/runcommand-logs
-          chown hydra-queue-runner.hydra \
+          chown hydra-queue-runner:hydra \
            ${baseDir}/queue-runner \
            ${baseDir}/build-logs \
            ${baseDir}/runcommand-logs

@@ -307,7 +308,7 @@ in
            rmdir /nix/var/nix/gcroots/per-user/hydra-www/hydra-roots
          fi
 
-          chown hydra.hydra ${cfg.gcRootsDir}
+          chown hydra:hydra ${cfg.gcRootsDir}
          chmod 2775 ${cfg.gcRootsDir}
        '';
        serviceConfig.ExecStart = "${cfg.package}/bin/hydra-init";
@@ -1,6 +1,6 @@
 # The `default.nix` in flake-compat reads `flake.nix` and `flake.lock` from `src` and
 # returns an attribute set of the shape `{ defaultNix, shellNix }`
 
-(import (fetchTarball https://github.com/edolstra/flake-compat/archive/master.tar.gz) {
+(import (fetchTarball "https://github.com/edolstra/flake-compat/archive/master.tar.gz") {
   src = ./.;
 }).shellNix
@@ -1,5 +1,5 @@
 bin_PROGRAMS = hydra-eval-jobs
 
 hydra_eval_jobs_SOURCES = hydra-eval-jobs.cc
-hydra_eval_jobs_LDADD = $(NIX_LIBS)
+hydra_eval_jobs_LDADD = $(NIX_LIBS) -lnixcmd
 hydra_eval_jobs_CXXFLAGS = $(NIX_CFLAGS) -I ../libhydra
@ -25,6 +25,28 @@
|
||||||
|
|
||||||
#include <nlohmann/json.hpp>
|
#include <nlohmann/json.hpp>
|
||||||
|
|
||||||
|
void check_pid_status_nonblocking(pid_t check_pid) {
|
||||||
|
// Only check 'initialized' and known PID's
|
||||||
|
if (check_pid <= 0) { return; }
|
||||||
|
|
||||||
|
int wstatus = 0;
|
||||||
|
pid_t pid = waitpid(check_pid, &wstatus, WNOHANG);
|
||||||
|
// -1 = failure, WNOHANG: 0 = no change
|
||||||
|
if (pid <= 0) { return; }
|
||||||
|
|
||||||
|
std::cerr << "child process (" << pid << ") ";
|
||||||
|
|
||||||
|
if (WIFEXITED(wstatus)) {
|
||||||
|
std::cerr << "exited with status=" << WEXITSTATUS(wstatus) << std::endl;
|
||||||
|
} else if (WIFSIGNALED(wstatus)) {
|
||||||
|
std::cerr << "killed by signal=" << WTERMSIG(wstatus) << std::endl;
|
||||||
|
} else if (WIFSTOPPED(wstatus)) {
|
||||||
|
std::cerr << "stopped by signal=" << WSTOPSIG(wstatus) << std::endl;
|
||||||
|
} else if (WIFCONTINUED(wstatus)) {
|
||||||
|
std::cerr << "continued" << std::endl;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
using namespace nix;
|
using namespace nix;
|
||||||
|
|
||||||
static Path gcRootsDir;
|
static Path gcRootsDir;
|
||||||
|
@ -63,13 +85,13 @@ struct MyArgs : MixEvalArgs, MixCommonArgs
|
||||||
|
|
||||||
static MyArgs myArgs;
|
static MyArgs myArgs;
|
||||||
|
|
||||||
-static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const string & name, const string & subAttribute)
+static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const std::string & name, const std::string & subAttribute)
 {
     Strings res;
     std::function<void(Value & v)> rec;

     rec = [&](Value & v) {
-        state.forceValue(v);
+        state.forceValue(v, noPos);
         if (v.type() == nString)
             res.push_back(v.string.s);
         else if (v.isList())

@@ -78,7 +100,7 @@ static std::string queryMetaStrings(EvalState & state, DrvInfo & drv, const stri
         else if (v.type() == nAttrs) {
             auto a = v.attrs->find(state.symbols.create(subAttribute));
             if (a != v.attrs->end())
-                res.push_back(state.forceString(*a->value));
+                res.push_back(std::string(state.forceString(*a->value)));
         }
     };

@@ -113,7 +135,7 @@ static void worker(
             callFlake(state, lockedFlake, *vFlake);

             auto vOutputs = vFlake->attrs->get(state.symbols.create("outputs"))->value;
-            state.forceValue(*vOutputs);
+            state.forceValue(*vOutputs, noPos);

             auto aHydraJobs = vOutputs->attrs->get(state.symbols.create("hydraJobs"));
             if (!aHydraJobs)

@@ -157,7 +179,7 @@ static void worker(
                 if (drv->querySystem() == "unknown")
                     throw EvalError("derivation must have a 'system' attribute");

-                auto drvPath = drv->queryDrvPath();
+                auto drvPath = state.store->printStorePath(drv->requireDrvPath());

                 nlohmann::json job;

@@ -175,24 +197,24 @@ static void worker(

                 /* If this is an aggregate, then get its constituents. */
                 auto a = v->attrs->get(state.symbols.create("_hydraAggregate"));
-                if (a && state.forceBool(*a->value, *a->pos)) {
+                if (a && state.forceBool(*a->value, a->pos)) {
                     auto a = v->attrs->get(state.symbols.create("constituents"));
                     if (!a)
                         throw EvalError("derivation must have a ‘constituents’ attribute");

                     PathSet context;
-                    state.coerceToString(*a->pos, *a->value, context, true, false);
+                    state.coerceToString(a->pos, *a->value, context, true, false);
                     for (auto & i : context)
                         if (i.at(0) == '!') {
                             size_t index = i.find("!", 1);
-                            job["constituents"].push_back(string(i, index + 1));
+                            job["constituents"].push_back(std::string(i, index + 1));
                         }

-                    state.forceList(*a->value, *a->pos);
+                    state.forceList(*a->value, a->pos);
                     for (unsigned int n = 0; n < a->value->listSize(); ++n) {
                         auto v = a->value->listElems()[n];
-                        state.forceValue(*v);
+                        state.forceValue(*v, noPos);
                         if (v->type() == nString)
                             job["namedConstituents"].push_back(state.forceStringNoCtx(*v));
                     }

@@ -210,7 +232,9 @@ static void worker(

                 nlohmann::json out;
                 for (auto & j : outputs)
-                    out[j.first] = j.second;
+                    // FIXME: handle CA/impure builds.
+                    if (j.second)
+                        out[j.first] = state.store->printStorePath(*j.second);
                 job["outputs"] = std::move(out);

                 reply["job"] = std::move(job);

@@ -219,8 +243,8 @@ static void worker(
             else if (v->type() == nAttrs) {
                 auto attrs = nlohmann::json::array();
                 StringSet ss;
-                for (auto & i : v->attrs->lexicographicOrder()) {
-                    std::string name(i->name);
+                for (auto & i : v->attrs->lexicographicOrder(state.symbols)) {
+                    std::string name(state.symbols[i->name]);
                     if (name.find('.') != std::string::npos || name.find(' ') != std::string::npos) {
                         printError("skipping job with illegal name '%s'", name);
                         continue;

@@ -309,8 +333,8 @@ int main(int argc, char * * argv)
        /* Start a handler thread per worker process. */
        auto handler = [&]()
        {
+           pid_t pid = -1;
            try {
-               pid_t pid = -1;
                AutoCloseFD from, to;

                while (true) {

@@ -412,6 +436,7 @@ int main(int argc, char * * argv)
                    }
                }
            } catch (...) {
+               check_pid_status_nonblocking(pid);
                auto state(state_.lock());
                state->exc = std::current_exception();
                wakeup.notify_all();

@@ -489,10 +514,14 @@ int main(int argc, char * * argv)
                std::string drvName(drvPath.name());
                assert(hasSuffix(drvName, drvExtension));
                drvName.resize(drvName.size() - drvExtension.size());
-               auto h = std::get<Hash>(hashDerivationModulo(*store, drv, true));
-               auto outPath = store->makeOutputPath("out", h, drvName);
+               auto hashModulo = hashDerivationModulo(*store, drv, true);
+               if (hashModulo.kind != DrvHash::Kind::Regular) continue;
+               auto h = hashModulo.hashes.find("out");
+               if (h == hashModulo.hashes.end()) continue;
+               auto outPath = store->makeOutputPath("out", h->second, drvName);
                drv.env["out"] = store->printStorePath(outPath);
-               drv.outputs.insert_or_assign("out", DerivationOutput { .output = DerivationOutputInputAddressed { .path = outPath } });
+               drv.outputs.insert_or_assign("out", DerivationOutput::InputAddressed { .path = outPath });
                auto newDrvPath = store->printStorePath(writeDerivation(*store, drv));

                debug("rewrote aggregate derivation %s -> %s", store->printStorePath(drvPath), newDrvPath);
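Two related API changes run through these hunks. The outputs hunk (`@@ -210,7 +232,9 @@`) shows that each entry in the outputs map now carries an optional store path, because with content-addressed or impure derivations the path is unknown before the build; the aggregate hunk shows `hashDerivationModulo` now returning a `DrvHash` with a kind plus a per-output hash map. A small self-contained sketch of consuming such an optional-valued outputs map — the map type is inferred from the hunk, and `StorePath` is a hypothetical stand-in here:

```cpp
#include <map>
#include <optional>
#include <string>

#include <nlohmann/json.hpp>

// Stand-in for nix::StorePath, just to make the loop shape concrete.
using StorePath = std::string;

nlohmann::json renderOutputs(
    const std::map<std::string, std::optional<StorePath>> & outputs)
{
    nlohmann::json out;
    for (auto & [name, maybePath] : outputs)
        // As the FIXME in the hunk notes: CA/impure outputs have no
        // statically known path yet, so they are skipped for now.
        if (maybePath)
            out[name] = *maybePath;
    return out;
}
```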
@@ -2,7 +2,7 @@ bin_PROGRAMS = hydra-queue-runner

 hydra_queue_runner_SOURCES = hydra-queue-runner.cc queue-monitor.cc dispatcher.cc \
     builder.cc build-result.cc build-remote.cc \
-    build-result.hh counter.hh state.hh db.hh \
+    hydra-build-result.hh counter.hh state.hh db.hh \
     nar-extractor.cc nar-extractor.hh
-hydra_queue_runner_LDADD = $(NIX_LIBS) -lpqxx
+hydra_queue_runner_LDADD = $(NIX_LIBS) -lpqxx -lprometheus-cpp-pull -lprometheus-cpp-core
 hydra_queue_runner_CXXFLAGS = $(NIX_CFLAGS) -Wall -I ../libhydra -Wno-deprecated-declarations

@@ -5,6 +5,7 @@
 #include <sys/stat.h>
 #include <fcntl.h>

+#include "build-result.hh"
 #include "serve-protocol.hh"
 #include "state.hh"
 #include "util.hh"

@@ -51,13 +52,35 @@ static Strings extraStoreArgs(std::string & machine)

 static void openConnection(Machine::ptr machine, Path tmpDir, int stderrFD, Child & child)
 {
-    string pgmName;
+    std::string pgmName;
     Pipe to, from;
     to.create();
     from.create();

-    child.pid = startProcess([&]() {
+    Strings argv;
+    if (machine->isLocalhost()) {
+        pgmName = "nix-store";
+        argv = {"nix-store", "--builders", "", "--serve", "--write"};
+    } else {
+        pgmName = "ssh";
+        auto sshName = machine->sshName;
+        Strings extraArgs = extraStoreArgs(sshName);
+        argv = {"ssh", sshName};
+        if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
+        if (machine->sshPublicHostKey != "") {
+            Path fileName = tmpDir + "/host-key";
+            auto p = machine->sshName.find("@");
+            std::string host = p != std::string::npos ? std::string(machine->sshName, p + 1) : machine->sshName;
+            writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
+            append(argv, {"-oUserKnownHostsFile=" + fileName});
+        }
+        append(argv,
+            { "-x", "-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
+            , "--", "nix-store", "--serve", "--write" });
+        append(argv, extraArgs);
+    }
+
+    child.pid = startProcess([&]() {
         restoreProcessContext();

         if (dup2(to.readSide.get(), STDIN_FILENO) == -1)

@@ -69,30 +92,6 @@ static void openConnection(Machine::ptr machine, Path tmpDir, int stderrFD, Chil
         if (dup2(stderrFD, STDERR_FILENO) == -1)
             throw SysError("cannot dup stderr");

-        Strings argv;
-        if (machine->isLocalhost()) {
-            pgmName = "nix-store";
-            argv = {"nix-store", "--builders", "", "--serve", "--write"};
-        }
-        else {
-            pgmName = "ssh";
-            auto sshName = machine->sshName;
-            Strings extraArgs = extraStoreArgs(sshName);
-            argv = {"ssh", sshName};
-            if (machine->sshKey != "") append(argv, {"-i", machine->sshKey});
-            if (machine->sshPublicHostKey != "") {
-                Path fileName = tmpDir + "/host-key";
-                auto p = machine->sshName.find("@");
-                string host = p != string::npos ? string(machine->sshName, p + 1) : machine->sshName;
-                writeFile(fileName, host + " " + machine->sshPublicHostKey + "\n");
-                append(argv, {"-oUserKnownHostsFile=" + fileName});
-            }
-            append(argv,
-                { "-x", "-a", "-oBatchMode=yes", "-oConnectTimeout=60", "-oTCPKeepAlive=yes"
-                , "--", "nix-store", "--serve", "--write" });
-            append(argv, extraArgs);
-        }

         execvp(argv.front().c_str(), (char * *) stringsToCharPtrs(argv).data()); // FIXME: remove cast

         throw SysError("cannot start %s", pgmName);
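The reorganisation above moves building `argv` (and writing the known-hosts file) out of the `startProcess` lambda into the parent, so the forked child only redirects file descriptors and execs — a hedged reading, since the commit message isn't shown, is that this keeps the post-fork path minimal. A generic, self-contained restatement of the pattern in plain POSIX; names here are illustrative, not Hydra's:

```cpp
#include <string>
#include <sys/types.h>
#include <unistd.h>
#include <vector>

// Build the command line in the parent; the child only dup2s and execs.
pid_t spawnWithStdin(const std::vector<std::string> & args, int stdinFd)
{
    std::vector<char *> argv;
    for (auto & a : args) argv.push_back(const_cast<char *>(a.c_str()));
    argv.push_back(nullptr);

    pid_t pid = fork();
    if (pid == 0) {
        if (dup2(stdinFd, STDIN_FILENO) == -1) _exit(126);
        execvp(argv[0], argv.data());
        _exit(127); // exec failed
    }
    return pid;
}
```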
@@ -179,7 +178,7 @@ StorePaths reverseTopoSortPaths(const std::map<StorePath, ValidPathInfo> & paths

 std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, const StorePath & drvPath)
 {
-    string base(drvPath.to_string());
+    std::string base(drvPath.to_string());
     auto logFile = logDir + "/" + string(base, 0, 2) + "/" + string(base, 2);

     createDirs(dirOf(logFile));

@@ -192,7 +191,7 @@ std::pair<Path, AutoCloseFD> openLogFile(const std::string & logDir, const Store

 void handshake(Machine::Connection & conn, unsigned int repeats)
 {
-    conn.to << SERVE_MAGIC_1 << 0x204;
+    conn.to << SERVE_MAGIC_1 << 0x206;
     conn.to.flush();

     unsigned int magic = readInt(conn.from);
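The handshake hunk bumps the advertised `nix-store --serve` protocol version from 0x204 to 0x206. Assuming the usual Nix encoding (major version in the high byte, minor in the low byte, as in the `GET_PROTOCOL_MAJOR`/`GET_PROTOCOL_MINOR` macros of `serve-protocol.hh`), 0x206 means serve protocol 2.6:

```cpp
#include <cstdio>

// Assumed serve-protocol version word layout: 0xMMmm.
int main()
{
    constexpr unsigned int version = 0x206;
    std::printf("major=0x%x minor=%u\n", version & 0xff00, version & 0x00ff);
    // prints: major=0x200 minor=6  -> serve protocol 2.6
}
```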
@@ -232,10 +231,10 @@ BasicDerivation sendInputs(
        a no-op for regular stores, but for the binary cache store,
        this will copy the inputs to the binary cache from the local
        store. */
-    if (localStore.getUri() != destStore.getUri()) {
-        StorePathSet closure;
-        localStore.computeFSClosure(step.drv->inputSrcs, closure);
-        copyPaths(localStore, destStore, closure, NoRepair, NoCheckSigs, NoSubstitute);
+    if (localStore != destStore) {
+        copyClosure(localStore, destStore,
+            step.drv->inputSrcs,
+            NoRepair, NoCheckSigs, NoSubstitute);
    }

    {
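This hunk (and a matching one in `createStep` further down) folds the hand-rolled compute-closure-then-copy sequence into nix's `copyClosure`, which computes the closure of the given paths and copies it in one call; the guard also changes from comparing store URIs to comparing the stores themselves. Restating the two shapes, excerpted from the hunk rather than standalone code:

```cpp
// Before: compute the closure by hand, then copy it.
StorePathSet closure;
localStore.computeFSClosure(step.drv->inputSrcs, closure);
copyPaths(localStore, destStore, closure, NoRepair, NoCheckSigs, NoSubstitute);

// After: copyClosure computes the closure of inputSrcs internally and
// copies the same set of paths — behaviour should be identical, minus
// the explicit temporary set.
copyClosure(localStore, destStore, step.drv->inputSrcs,
    NoRepair, NoCheckSigs, NoSubstitute);
```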
@@ -1,4 +1,4 @@
-#include "build-result.hh"
+#include "hydra-build-result.hh"
 #include "store-api.hh"
 #include "util.hh"
 #include "fs-accessor.hh"

@@ -78,7 +78,7 @@ BuildOutput getBuildOutput(
                 product.type = match[1];
                 product.subtype = match[2];
                 std::string s(match[3]);
-                product.path = s[0] == '"' ? string(s, 1, s.size() - 2) : s;
+                product.path = s[0] == '"' ? std::string(s, 1, s.size() - 2) : s;
                 product.defaultPath = match[5];

                 /* Ensure that the path exists and points into the Nix

@@ -1,7 +1,7 @@
 #include <cmath>

 #include "state.hh"
-#include "build-result.hh"
+#include "hydra-build-result.hh"
 #include "finally.hh"
 #include "binary-cache-store.hh"

@@ -6,8 +6,10 @@
 #include <sys/stat.h>
 #include <fcntl.h>

+#include <prometheus/exposer.h>
+
 #include "state.hh"
-#include "build-result.hh"
+#include "hydra-build-result.hh"
 #include "store-api.hh"
 #include "remote-store.hh"

@@ -36,8 +38,55 @@ std::string getEnvOrDie(const std::string & key)
     return *value;
 }

-State::State()
+State::PromMetrics::PromMetrics()
+    : registry(std::make_shared<prometheus::Registry>())
+    , queue_checks_started(
+        prometheus::BuildCounter()
+            .Name("hydraqueuerunner_queue_checks_started_total")
+            .Help("Number of times State::getQueuedBuilds() was started")
+            .Register(*registry)
+            .Add({})
+    )
+    , queue_build_loads(
+        prometheus::BuildCounter()
+            .Name("hydraqueuerunner_queue_build_loads_total")
+            .Help("Number of builds loaded")
+            .Register(*registry)
+            .Add({})
+    )
+    , queue_steps_created(
+        prometheus::BuildCounter()
+            .Name("hydraqueuerunner_queue_steps_created_total")
+            .Help("Number of steps created")
+            .Register(*registry)
+            .Add({})
+    )
+    , queue_checks_early_exits(
+        prometheus::BuildCounter()
+            .Name("hydraqueuerunner_queue_checks_early_exits_total")
+            .Help("Number of times State::getQueuedBuilds() yielded to potential bumps")
+            .Register(*registry)
+            .Add({})
+    )
+    , queue_checks_finished(
+        prometheus::BuildCounter()
+            .Name("hydraqueuerunner_queue_checks_finished_total")
+            .Help("Number of times State::getQueuedBuilds() was completed")
+            .Register(*registry)
+            .Add({})
+    )
+    , queue_max_id(
+        prometheus::BuildGauge()
+            .Name("hydraqueuerunner_queue_max_build_id_info")
+            .Help("Maximum build record ID in the queue")
+            .Register(*registry)
+            .Add({})
+    )
+{
+}
+
+State::State(std::optional<std::string> metricsAddrOpt)
     : config(std::make_unique<HydraConfig>())
     , maxUnsupportedTime(config->getIntOption("max_unsupported_time", 0))
     , dbPool(config->getIntOption("max_db_connections", 128))

@@ -45,11 +94,16 @@ State::State()
     , maxLogSize(config->getIntOption("max_log_size", 64ULL << 20))
     , uploadLogsToBinaryCache(config->getBoolOption("upload_logs_to_binary_cache", false))
     , rootsDir(config->getStrOption("gc_roots_dir", fmt("%s/gcroots/per-user/%s/hydra-roots", settings.nixStateDir, getEnvOrDie("LOGNAME"))))
+    , metricsAddr(config->getStrOption("queue_runner_metrics_address", std::string{"127.0.0.1:9198"}))
 {
     hydraData = getEnvOrDie("HYDRA_DATA");

     logDir = canonPath(hydraData + "/build-logs");

+    if (metricsAddrOpt.has_value()) {
+        metricsAddr = metricsAddrOpt.value();
+    }

     /* handle deprecated store specification */
     if (config->getStrOption("store_mode") != "")
         throw Error("store_mode in hydra.conf is deprecated, please use store_uri");
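The metrics plumbing above uses prometheus-cpp's builder API: `BuildCounter()`/`BuildGauge()` return builders, `Register(registry)` yields a `Family`, and `Add({labels})` yields the actual time series — which is why `State::PromMetrics` can hold plain `Counter&`/`Gauge&` references. A self-contained sketch of the same pattern outside Hydra; the endpoint and metric name here are made-up example values:

```cpp
#include <memory>
#include <prometheus/counter.h>
#include <prometheus/exposer.h>
#include <prometheus/registry.h>

int main()
{
    // Serve /metrics over HTTP on a local port (example address).
    prometheus::Exposer exposer{"127.0.0.1:9198"};

    auto registry = std::make_shared<prometheus::Registry>();

    // The Family owns the series; Add({}) returns a Counter& that stays
    // valid for the registry's lifetime.
    auto & checks = prometheus::BuildCounter()
        .Name("example_queue_checks_started_total")
        .Help("Number of queue checks started")
        .Register(*registry)
        .Add({});

    exposer.RegisterCollectable(registry);

    checks.Increment(); // now visible at http://127.0.0.1:9198/metrics
}
```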
@@ -87,7 +141,7 @@ void State::parseMachines(const std::string & contents)
     }

     for (auto line : tokenizeString<Strings>(contents, "\n")) {
-        line = trim(string(line, 0, line.find('#')));
+        line = trim(std::string(line, 0, line.find('#')));
         auto tokens = tokenizeString<std::vector<std::string>>(line);
         if (tokens.size() < 3) continue;
         tokens.resize(8);

@@ -95,7 +149,7 @@ void State::parseMachines(const std::string & contents)
         auto machine = std::make_shared<Machine>();
         machine->sshName = tokens[0];
         machine->systemTypes = tokenizeString<StringSet>(tokens[1], ",");
-        machine->sshKey = tokens[2] == "-" ? string("") : tokens[2];
+        machine->sshKey = tokens[2] == "-" ? std::string("") : tokens[2];
         if (tokens[3] != "")
             machine->maxJobs = string2Int<decltype(machine->maxJobs)>(tokens[3]).value();
         else

@@ -149,7 +203,7 @@ void State::parseMachines(const std::string & contents)

 void State::monitorMachinesFile()
 {
-    string defaultMachinesFile = "/etc/nix/machines";
+    std::string defaultMachinesFile = "/etc/nix/machines";
     auto machinesFiles = tokenizeString<std::vector<Path>>(
         getEnv("NIX_REMOTE_SYSTEMS").value_or(pathExists(defaultMachinesFile) ? defaultMachinesFile : ""), ":");

@@ -191,7 +245,7 @@ void State::monitorMachinesFile()

         debug("reloading machines files");

-        string contents;
+        std::string contents;
         for (auto & machinesFile : machinesFiles) {
             try {
                 contents += readFile(machinesFile);

@@ -308,7 +362,7 @@ void State::finishBuildStep(pqxx::work & txn, const RemoteResult & result,


 int State::createSubstitutionStep(pqxx::work & txn, time_t startTime, time_t stopTime,
-    Build::ptr build, const StorePath & drvPath, const string & outputName, const StorePath & storePath)
+    Build::ptr build, const StorePath & drvPath, const std::string & outputName, const StorePath & storePath)
 {
 restart:
     auto stepNr = allocBuildStep(txn, build->id);

@@ -683,14 +737,14 @@ void State::showStatus()
     auto conn(dbPool.get());
     receiver statusDumped(*conn, "status_dumped");

-    string status;
+    std::string status;
     bool barf = false;

     /* Get the last JSON status dump from the database. */
     {
         pqxx::work txn(*conn);
         auto res = txn.exec("select status from SystemStatus where what = 'queue-runner'");
-        if (res.size()) status = res[0][0].as<string>();
+        if (res.size()) status = res[0][0].as<std::string>();
     }

     if (status != "") {

@@ -710,7 +764,7 @@ void State::showStatus()
         {
             pqxx::work txn(*conn);
             auto res = txn.exec("select status from SystemStatus where what = 'queue-runner'");
-            if (res.size()) status = res[0][0].as<string>();
+            if (res.size()) status = res[0][0].as<std::string>();
         }

     }

@@ -754,6 +808,18 @@ void State::run(BuildID buildOne)
     if (!lock)
         throw Error("hydra-queue-runner is already running");

+    std::cout << "Starting the Prometheus exporter on " << metricsAddr << std::endl;
+
+    /* Set up simple exporter, to show that we're still alive. */
+    prometheus::Exposer promExposer{metricsAddr};
+    auto exposerPort = promExposer.GetListeningPorts().front();
+
+    promExposer.RegisterCollectable(prom.registry);
+
+    std::cout << "Started the Prometheus exporter, listening on "
+        << metricsAddr << "/metrics (port " << exposerPort << ")"
+        << std::endl;
+
     Store::Params localParams;
     localParams["max-connections"] = "16";
     localParams["max-connection-age"] = "600";
@@ -864,6 +930,7 @@ int main(int argc, char * * argv)
     bool unlock = false;
     bool status = false;
     BuildID buildOne = 0;
+    std::optional<std::string> metricsAddrOpt = std::nullopt;

     parseCmdLine(argc, argv, [&](Strings::iterator & arg, const Strings::iterator & end) {
         if (*arg == "--unlock")

@@ -875,6 +942,8 @@ int main(int argc, char * * argv)
                 buildOne = *b;
             else
                 throw Error("‘--build-one’ requires a build ID");
+        } else if (*arg == "--prometheus-address") {
+            metricsAddrOpt = getArg(*arg, arg, end);
         } else
             return false;
         return true;

@@ -883,7 +952,7 @@ int main(int argc, char * * argv)
     settings.verboseBuild = true;
     settings.lockCPU = false;

-    State state;
+    State state{metricsAddrOpt};
     if (status)
         state.showStatus();
     else if (unlock)
@@ -64,7 +64,7 @@ struct Extractor : ParseSink
         }
     }

-    void createSymlink(const Path & path, const string & target) override
+    void createSymlink(const Path & path, const std::string & target) override
     {
         members.insert_or_assign(prefix + path, NarMemberData { .type = FSAccessor::Type::tSymlink });
     }

@@ -1,5 +1,5 @@
 #include "state.hh"
-#include "build-result.hh"
+#include "hydra-build-result.hh"
 #include "globals.hh"

 #include <cstring>

@@ -82,6 +82,8 @@ struct PreviousFailure : public std::exception {
 bool State::getQueuedBuilds(Connection & conn,
     ref<Store> destStore, unsigned int & lastBuildId)
 {
+    prom.queue_checks_started.Increment();
+
     printInfo("checking the queue for builds > %d...", lastBuildId);

     /* Grab the queued builds from the database, but don't process

@@ -107,16 +109,19 @@ bool State::getQueuedBuilds(Connection & conn,
             auto builds_(builds.lock());
             BuildID id = row["id"].as<BuildID>();
             if (buildOne && id != buildOne) continue;
-            if (id > newLastBuildId) newLastBuildId = id;
+            if (id > newLastBuildId) {
+                newLastBuildId = id;
+                prom.queue_max_id.Set(id);
+            }
             if (builds_->count(id)) continue;

             auto build = std::make_shared<Build>(
-                localStore->parseStorePath(row["drvPath"].as<string>()));
+                localStore->parseStorePath(row["drvPath"].as<std::string>()));
             build->id = id;
             build->jobsetId = row["jobset_id"].as<JobsetID>();
-            build->projectName = row["project"].as<string>();
-            build->jobsetName = row["jobset"].as<string>();
-            build->jobName = row["job"].as<string>();
+            build->projectName = row["project"].as<std::string>();
+            build->jobsetName = row["jobset"].as<std::string>();
+            build->jobName = row["job"].as<std::string>();
             build->maxSilentTime = row["maxsilent"].as<int>();
             build->buildTimeout = row["timeout"].as<int>();
             build->timestamp = row["timestamp"].as<time_t>();

@@ -136,6 +141,7 @@ bool State::getQueuedBuilds(Connection & conn,
     std::set<StorePath> finishedDrvs;

     createBuild = [&](Build::ptr build) {
+        prom.queue_build_loads.Increment();
         printMsg(lvlTalkative, format("loading build %1% (%2%)") % build->id % build->fullJobName());
         nrAdded++;
         newBuildsByID.erase(build->id);

@@ -306,9 +312,14 @@ bool State::getQueuedBuilds(Connection & conn,

         /* Stop after a certain time to allow priority bumps to be
            processed. */
-        if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) break;
+        if (std::chrono::system_clock::now() > start + std::chrono::seconds(600)) {
+            prom.queue_checks_early_exits.Increment();
+            break;
+        }
     }

+    prom.queue_checks_finished.Increment();
+
     lastBuildId = newBuildsByID.empty() ? newLastBuildId : newBuildsByID.begin()->first - 1;
     return newBuildsByID.empty();
 }
@@ -437,6 +448,8 @@ Step::ptr State::createStep(ref<Store> destStore,

     if (!isNew) return step;

+    prom.queue_steps_created.Increment();
+
     printMsg(lvlDebug, "considering derivation ‘%1%’", localStore->printStorePath(drvPath));

     /* Initialize the step. Note that the step may be visible in

@@ -447,7 +460,7 @@ Step::ptr State::createStep(ref<Store> destStore,
     step->parsedDrv = std::make_unique<ParsedDerivation>(drvPath, *step->drv);

     step->preferLocalBuild = step->parsedDrv->willBuildLocally(*localStore);
-    step->isDeterministic = get(step->drv->env, "isDetermistic").value_or("0") == "1";
+    step->isDeterministic = getOr(step->drv->env, "isDetermistic", "0") == "1";

     step->systemType = step->drv->platform;
     {

@@ -513,9 +526,9 @@ Step::ptr State::createStep(ref<Store> destStore,
             // FIXME: should copy directly from substituter to destStore.
         }

-        StorePathSet closure;
-        localStore->computeFSClosure({*path}, closure);
-        copyPaths(*localStore, *destStore, closure, NoRepair, CheckSigs, NoSubstitute);
+        copyClosure(*localStore, *destStore,
+            StorePathSet { *path },
+            NoRepair, CheckSigs, NoSubstitute);

         time_t stopTime = time(0);

@@ -620,7 +633,7 @@ void State::processJobsetSharesChange(Connection & conn)
     auto res = txn.exec("select project, name, schedulingShares from Jobsets");
     for (auto const & row : res) {
         auto jobsets_(jobsets.lock());
-        auto i = jobsets_->find(std::make_pair(row["project"].as<string>(), row["name"].as<string>()));
+        auto i = jobsets_->find(std::make_pair(row["project"].as<std::string>(), row["name"].as<std::string>()));
         if (i == jobsets_->end()) continue;
         i->second->setShares(row["schedulingShares"].as<unsigned int>());
     }
@@ -6,12 +6,18 @@
 #include <map>
 #include <memory>
 #include <queue>
+#include <regex>
+
+#include <prometheus/counter.h>
+#include <prometheus/gauge.h>
+#include <prometheus/registry.h>

 #include "db.hh"

 #include "parsed-derivations.hh"
 #include "pathlocks.hh"
 #include "pool.hh"
+#include "build-result.hh"
 #include "store-api.hh"
 #include "sync.hh"
 #include "nar-extractor.hh"

@@ -290,7 +296,8 @@ struct Machine

     bool isLocalhost()
     {
-        return sshName == "localhost";
+        std::regex r("^(ssh://|ssh-ng://)?localhost$");
+        return std::regex_search(sshName, r);
     }

     // A connection to a machine
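`isLocalhost()` previously matched only the bare string "localhost", so a machine written with a store-URI scheme was treated as remote. The new pattern accepts an optional `ssh://` or `ssh-ng://` prefix, and because it is anchored at both ends, names that merely contain "localhost" still count as remote. A quick runnable check of that behaviour:

```cpp
#include <cassert>
#include <regex>
#include <string>

int main()
{
    std::regex r("^(ssh://|ssh-ng://)?localhost$");
    assert(std::regex_search(std::string("localhost"), r));
    assert(std::regex_search(std::string("ssh://localhost"), r));
    assert(std::regex_search(std::string("ssh-ng://localhost"), r));
    assert(!std::regex_search(std::string("localhost.example"), r));    // anchored
    assert(!std::regex_search(std::string("ssh://builder.example"), r));
}
```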
@@ -444,8 +451,25 @@ private:
        via gc_roots_dir. */
     nix::Path rootsDir;

+    std::string metricsAddr;
+
+    struct PromMetrics
+    {
+        std::shared_ptr<prometheus::Registry> registry;
+
+        prometheus::Counter& queue_checks_started;
+        prometheus::Counter& queue_build_loads;
+        prometheus::Counter& queue_steps_created;
+        prometheus::Counter& queue_checks_early_exits;
+        prometheus::Counter& queue_checks_finished;
+        prometheus::Gauge& queue_max_id;
+
+        PromMetrics();
+    };
+    PromMetrics prom;
+
 public:
-    State();
+    State(std::optional<std::string> metricsAddrOpt);

     struct BuildOptions {
         unsigned int maxSilentTime, buildTimeout, repeats;

@@ -560,6 +584,8 @@ private:

     void addRoot(const nix::StorePath & storePath);

+    void runMetricsExporter();
+
 public:

     void showStatus();
@@ -7,15 +7,16 @@ use base 'Hydra::Base::Controller::NixChannel';
 use Hydra::Helper::Nix;
 use Hydra::Helper::CatalystUtils;
 use File::Basename;
+use File::LibMagic;
 use File::stat;
 use Data::Dump qw(dump);
 use Nix::Store;
 use Nix::Config;
 use List::SomeUtils qw(all);
 use Encode;
-use MIME::Types;
 use JSON::PP;

+use feature 'state';

 sub buildChain :Chained('/') :PathPart('build') :CaptureArgs(1) {
     my ($self, $c, $id) = @_;

@@ -38,6 +39,17 @@ sub buildChain :Chained('/') :PathPart('build') :CaptureArgs(1) {
     $c->stash->{jobset} = $c->stash->{build}->jobset;
     $c->stash->{job} = $c->stash->{build}->job;
     $c->stash->{runcommandlogs} = [$c->stash->{build}->runcommandlogs->search({}, {order_by => ["id DESC"]})];
+
+    $c->stash->{runcommandlogProblem} = undef;
+    if ($c->stash->{job} =~ qr/^runCommandHook\..*/) {
+        if (!$c->config->{dynamicruncommand}->{enable}) {
+            $c->stash->{runcommandlogProblem} = "disabled-server";
+        } elsif (!$c->stash->{project}->enable_dynamic_run_command) {
+            $c->stash->{runcommandlogProblem} = "disabled-project";
+        } elsif (!$c->stash->{jobset}->enable_dynamic_run_command) {
+            $c->stash->{runcommandlogProblem} = "disabled-jobset";
+        }
+    }
 }


@@ -223,16 +235,12 @@ sub serveFile {
     elsif ($ls->{type} eq "regular") {

         $c->stash->{'plain'} = { data => grab(cmd => ["nix", "--experimental-features", "nix-command",
-            "cat-store", "--store", getStoreUri(), "$path"]) };
+            "store", "cat", "--store", getStoreUri(), "$path"]) };

-        # Detect MIME type. Borrowed from Catalyst::Plugin::Static::Simple.
-        my $type = "text/plain";
-        if ($path =~ /.*\.(\S{1,})$/xms) {
-            my $ext = $1;
-            my $mimeTypes = MIME::Types->new(only_complete => 1);
-            my $t = $mimeTypes->mimeTypeOf($ext);
-            $type = ref $t ? $t->type : $t if $t;
-        }
+        # Detect MIME type.
+        state $magic = File::LibMagic->new(follow_symlinks => 1);
+        my $info = $magic->info_from_filename($path);
+        my $type = $info->{mime_with_encoding};
         $c->response->content_type($type);
         $c->forward('Hydra::View::Plain');
     }
@@ -277,29 +285,7 @@ sub download : Chained('buildChain') PathPart {
     my $path = $product->path;
     $path .= "/" . join("/", @path) if scalar @path > 0;

-    if (isLocalStore) {
-
-        notFound($c, "File '" . $product->path . "' does not exist.") unless -e $product->path;
-
-        # Make sure the file is in the Nix store.
-        $path = checkPath($self, $c, $path);
-
-        # If this is a directory but no "/" is attached, then redirect.
-        if (-d $path && substr($c->request->uri, -1) ne "/") {
-            return $c->res->redirect($c->request->uri . "/");
-        }
-
-        $path = "$path/index.html" if -d $path && -e "$path/index.html";
-
-        notFound($c, "File '$path' does not exist.") if !-e $path;
-
-        notFound($c, "Path '$path' is a directory.") if -d $path;
-
-        $c->serve_static_file($path);
-
-    } else {
-        serveFile($c, $path);
-    }
+    serveFile($c, $path);

     $c->response->headers->last_modified($c->stash->{build}->stoptime);
 }

@@ -355,7 +341,7 @@ sub contents : Chained('buildChain') PathPart Args(1) {

     # FIXME: don't use shell invocations below.

-    # FIXME: use nix cat-store
+    # FIXME: use nix store cat

     my $res;

@@ -69,7 +69,7 @@ sub prometheus : Chained('job') PathPart('prometheus') Args(0) {

     my $lastBuild = $c->stash->{jobset}->builds->find(
         { job => $c->stash->{job}, finished => 1 },
-        { order_by => 'id DESC', rows => 1, columns => [@buildListColumns] }
+        { order_by => 'id DESC', rows => 1, columns => ["stoptime", "buildstatus", "closuresize", "size"] }
     );

     $prometheus->new_counter(

@@ -92,6 +92,26 @@ sub prometheus : Chained('job') PathPart('prometheus') Args(0) {
         $c->stash->{job},
     )->inc($lastBuild->buildstatus > 0);

+    $prometheus->new_gauge(
+        name => "hydra_build_closure_size",
+        help => "Closure size of the last job's build in bytes",
+        labels => [ "project", "jobset", "job" ]
+    )->labels(
+        $c->stash->{project}->name,
+        $c->stash->{jobset}->name,
+        $c->stash->{job},
+    )->inc($lastBuild->closuresize);
+
+    $prometheus->new_gauge(
+        name => "hydra_build_output_size",
+        help => "Output size of the last job's build in bytes",
+        labels => [ "project", "jobset", "job" ]
+    )->labels(
+        $c->stash->{project}->name,
+        $c->stash->{jobset}->name,
+        $c->stash->{job},
+    )->inc($lastBuild->size);
+
     $c->stash->{'plain'} = { data => $prometheus->render };
     $c->forward('Hydra::View::Plain');
 }
@@ -261,6 +261,14 @@ sub updateJobset {

     my $checkinterval = int(trim($c->stash->{params}->{checkinterval}));

+    my $enable_dynamic_run_command = defined $c->stash->{params}->{enable_dynamic_run_command} ? 1 : 0;
+    if ($enable_dynamic_run_command
+        && !($c->config->{dynamicruncommand}->{enable}
+            && $jobset->project->enable_dynamic_run_command))
+    {
+        badRequest($c, "Dynamic RunCommand is not enabled by the server or the parent project.");
+    }
+
     $jobset->update(
         { name => $jobsetName
         , description => trim($c->stash->{params}->{"description"})

@@ -268,6 +276,7 @@ sub updateJobset {
         , nixexprinput => $nixExprInput
         , enabled => $enabled
         , enableemail => defined $c->stash->{params}->{enableemail} ? 1 : 0
+        , enable_dynamic_run_command => $enable_dynamic_run_command
         , emailoverride => trim($c->stash->{params}->{emailoverride}) || ""
         , hidden => defined $c->stash->{params}->{visible} ? 0 : 1
         , keepnr => int(trim($c->stash->{params}->{keepnr} // "0"))
@@ -149,6 +149,11 @@ sub updateProject {
     my $displayName = trim $c->stash->{params}->{displayname};
     error($c, "You must specify a display name.") if $displayName eq "";

+    my $enable_dynamic_run_command = defined $c->stash->{params}->{enable_dynamic_run_command} ? 1 : 0;
+    if ($enable_dynamic_run_command && !$c->config->{dynamicruncommand}->{enable}) {
+        badRequest($c, "Dynamic RunCommand is not enabled by the server.");
+    }
+
     $project->update(
         { name => $projectName
         , displayname => $displayName

@@ -157,6 +162,7 @@ sub updateProject {
         , enabled => defined $c->stash->{params}->{enabled} ? 1 : 0
         , hidden => defined $c->stash->{params}->{visible} ? 0 : 1
         , owner => $owner
+        , enable_dynamic_run_command => $enable_dynamic_run_command
         , declfile => trim($c->stash->{params}->{declarative}->{file})
         , decltype => trim($c->stash->{params}->{declarative}->{type})
         , declvalue => trim($c->stash->{params}->{declarative}->{value})
@@ -19,14 +19,16 @@ use Hydra::Helper::CatalystUtils;

 our @ISA = qw(Exporter);
 our @EXPORT = qw(
+    validateDeclarativeJobset
+    createJobsetInputsRowAndData
     updateDeclarativeJobset
     handleDeclarativeJobsetBuild
     handleDeclarativeJobsetJson
 );


-sub updateDeclarativeJobset {
-    my ($db, $project, $jobsetName, $declSpec) = @_;
+sub validateDeclarativeJobset {
+    my ($config, $project, $jobsetName, $declSpec) = @_;

     my @allowed_keys = qw(
         enabled

@@ -39,6 +41,7 @@ sub updateDeclarativeJobset {
         checkinterval
         schedulingshares
         enableemail
+        enable_dynamic_run_command
         emailoverride
         keepnr
     );

@@ -61,16 +64,39 @@ sub updateDeclarativeJobset {
         }
     }

+    my $enable_dynamic_run_command = defined $update{enable_dynamic_run_command} ? 1 : 0;
+    if ($enable_dynamic_run_command
+        && !($config->{dynamicruncommand}->{enable}
+            && $project->enable_dynamic_run_command))
+    {
+        die "Dynamic RunCommand is not enabled by the server or the parent project.";
+    }
+
+    return %update;
+}
+
+sub createJobsetInputsRowAndData {
+    my ($name, $declSpec) = @_;
+    my $data = $declSpec->{"inputs"}->{$name};
+    my $row = {
+        name => $name,
+        type => $data->{type}
+    };
+    $row->{emailresponsible} = $data->{emailresponsible} // 0;
+
+    return ($row, $data);
+}
+
+sub updateDeclarativeJobset {
+    my ($config, $db, $project, $jobsetName, $declSpec) = @_;
+
+    my %update = validateDeclarativeJobset($config, $project, $jobsetName, $declSpec);
+
     $db->txn_do(sub {
         my $jobset = $project->jobsets->update_or_create(\%update);
         $jobset->jobsetinputs->delete;
         foreach my $name (keys %{$declSpec->{"inputs"}}) {
-            my $data = $declSpec->{"inputs"}->{$name};
-            my $row = {
-                name => $name,
-                type => $data->{type}
-            };
-            $row->{emailresponsible} = $data->{emailresponsible} // 0;
+            my ($row, $data) = createJobsetInputsRowAndData($name, $declSpec);
             my $input = $jobset->jobsetinputs->create($row);
             $input->jobsetinputalts->create({altnr => 0, value => $data->{value}});
         }

@@ -81,6 +107,7 @@ sub updateDeclarativeJobset {

 sub handleDeclarativeJobsetJson {
     my ($db, $project, $declSpec) = @_;
+    my $config = getHydraConfig();
     $db->txn_do(sub {
         my @kept = keys %$declSpec;
         push @kept, ".jobsets";

@@ -88,7 +115,7 @@ sub handleDeclarativeJobsetJson {
         foreach my $jobsetName (keys %$declSpec) {
             my $spec = $declSpec->{$jobsetName};
             eval {
-                updateDeclarativeJobset($db, $project, $jobsetName, $spec);
+                updateDeclarativeJobset($config, $db, $project, $jobsetName, $spec);
                 1;
             } or do {
                 print STDERR "ERROR: failed to process declarative jobset ", $project->name, ":${jobsetName}, ", $@, "\n";
@@ -537,7 +537,7 @@ sub getStoreUri {
 sub readNixFile {
     my ($path) = @_;
     return grab(cmd => ["nix", "--experimental-features", "nix-command",
-        "cat-store", "--store", getStoreUri(), "$path"]);
+        "store", "cat", "--store", getStoreUri(), "$path"]);
 }


@@ -261,7 +261,7 @@ sub getCommits {

     my $clonePath = getSCMCacheDir . "/git/" . sha256_hex($uri);

-    my $out = grab(cmd => ["git", "log", "--pretty=format:%H%x09%an%x09%ae%x09%at", "$rev1..$rev2"], dir => $clonePath);
+    my $out = grab(cmd => ["git", "--git-dir=.git", "log", "--pretty=format:%H%x09%an%x09%ae%x09%at", "$rev1..$rev2"], dir => $clonePath);

     my $res = [];
     foreach my $line (split /\n/, $out) {
@@ -30,7 +30,7 @@ sub _iterate {
         $pulls->{$pull->{number}} = $pull;
     }
     # TODO Make Link header parsing more robust!!!
-    my @links = split ',', $res->header("Link");
+    my @links = split ',', ($res->header("Link") // "");
     my $next = "";
     foreach my $link (@links) {
         my ($url, $rel) = split ";", $link;
@@ -1,89 +0,0 @@
-package Hydra::Plugin::HipChatNotification;
-
-use strict;
-use warnings;
-use parent 'Hydra::Plugin';
-use LWP::UserAgent;
-use Hydra::Helper::CatalystUtils;
-
-sub isEnabled {
-    my ($self) = @_;
-    return defined $self->{config}->{hipchat};
-}
-
-sub buildFinished {
-    my ($self, $topbuild, $dependents) = @_;
-
-    my $cfg = $self->{config}->{hipchat};
-    my @config = defined $cfg ? ref $cfg eq "ARRAY" ? @$cfg : ($cfg) : ();
-
-    my $baseurl = $self->{config}->{'base_uri'} || "http://localhost:3000";
-
-    # Figure out to which rooms to send notification. For each email
-    # room, we send one aggregate message.
-    my %rooms;
-    foreach my $build ($topbuild, @{$dependents}) {
-        my $prevBuild = getPreviousBuild($build);
-        my $jobName = showJobName $build;
-
-        foreach my $room (@config) {
-            my $force = $room->{force};
-            next unless $jobName =~ /^$room->{jobs}$/;
-
-            # If build is cancelled or aborted, do not send email.
-            next if ! $force && ($build->buildstatus == 4 || $build->buildstatus == 3);
-
-            # If there is a previous (that is not cancelled or aborted) build
-            # with same buildstatus, do not send email.
-            next if ! $force && defined $prevBuild && ($build->buildstatus == $prevBuild->buildstatus);
-
-            $rooms{$room->{room}} //= { room => $room, builds => [] };
-            push @{$rooms{$room->{room}}->{builds}}, $build;
-        }
-    }
-
-    return if scalar keys %rooms == 0;
-
-    my ($authors, $nrCommits) = getResponsibleAuthors($topbuild, $self->{plugins});
-
-    # Send a message to each room.
-    foreach my $roomId (keys %rooms) {
-        my $room = $rooms{$roomId};
-        my @deps = grep { $_->id != $topbuild->id } @{$room->{builds}};
-
-        my $img =
-            $topbuild->buildstatus == 0 ? "$baseurl/static/images/checkmark_16.png" :
-            $topbuild->buildstatus == 2 ? "$baseurl/static/images/dependency_16.png" :
-            $topbuild->buildstatus == 4 ? "$baseurl/static/images/cancelled_16.png" :
-            "$baseurl/static/images/error_16.png";
-
-        my $msg = "";
-        $msg .= "<img src='$img'/> ";
-        $msg .= "Job <a href='$baseurl/job/${\$topbuild->jobset->get_column('project')}/${\$topbuild->jobset->get_column('name')}/${\$topbuild->get_column('job')}'>${\showJobName($topbuild)}</a>";
-        $msg .= " (and ${\scalar @deps} others)" if scalar @deps > 0;
-        $msg .= ": <a href='$baseurl/build/${\$topbuild->id}'>" . showStatus($topbuild) . "</a>";
-
-        if (scalar keys %{$authors} > 0) {
-            # FIXME: HTML escaping
-            my @x = map { "<a href='mailto:$authors->{$_}'>$_</a>" } (sort keys %{$authors});
-            $msg .= ", likely due to ";
-            $msg .= "$nrCommits commits by " if $nrCommits > 1;
-            $msg .= join(" or ", scalar @x > 1 ? join(", ", @x[0..scalar @x - 2]) : (), $x[-1]);
-        }
-
-        print STDERR "sending hipchat notification to room $roomId: $msg\n";
-
-        my $ua = LWP::UserAgent->new();
-        my $resp = $ua->post('https://api.hipchat.com/v1/rooms/message?format=json&auth_token=' . $room->{room}->{token}, {
-            room_id => $roomId,
-            from => 'Hydra',
-            message => $msg,
-            message_format => 'html',
-            notify => $room->{room}->{notify} || 0,
-            color => $topbuild->buildstatus == 0 ? 'green' : 'red' });
-
-        print STDERR $resp->status_line, ": ", $resp->decoded_content,"\n" if !$resp->is_success;
-    }
-}
-
-1;
@@ -12,7 +12,74 @@ use Try::Tiny;

 sub isEnabled {
     my ($self) = @_;
-    return defined $self->{config}->{runcommand};
+
+    return areStaticCommandsEnabled($self->{config}) || areDynamicCommandsEnabled($self->{config});
+}
+
+sub areStaticCommandsEnabled {
+    my ($config) = @_;
+
+    if (defined $config->{runcommand}) {
+        return 1;
+    }
+
+    return 0;
+}
+
+sub areDynamicCommandsEnabled {
+    my ($config) = @_;
+
+    if ((defined $config->{dynamicruncommand})
+        && $config->{dynamicruncommand}->{enable}) {
+        return 1;
+    }
+
+    return 0;
+}
+
+sub isBuildEligibleForDynamicRunCommand {
+    my ($build) = @_;
+
+    if ($build->get_column("buildstatus") != 0) {
+        return 0;
+    }
+
+    if ($build->get_column("job") =~ "^runCommandHook\..+") {
+        my $out = $build->buildoutputs->find({name => "out"});
+        if (!defined $out) {
+            warn "DynamicRunCommand hook on " . $build->job . " (" . $build->id . ") rejected: no output named 'out'.";
+            return 0;
+        }
+
+        my $path = $out->path;
+        if (-l $path) {
+            $path = readlink($path);
+        }
+
+        if (! -e $path) {
+            warn "DynamicRunCommand hook on " . $build->job . " (" . $build->id . ") rejected: The 'out' output doesn't exist locally. This is a bug.";
+            return 0;
+        }
+
+        if (! -x $path) {
+            warn "DynamicRunCommand hook on " . $build->job . " (" . $build->id . ") rejected: The 'out' output is not executable.";
+            return 0;
+        }
+
+        if (! -f $path) {
+            warn "DynamicRunCommand hook on " . $build->job . " (" . $build->id . ") rejected: The 'out' output is not a regular file or symlink.";
+            return 0;
+        }
+
+        if (! $build->jobset->supportsDynamicRunCommand()) {
+            warn "DynamicRunCommand hook on " . $build->job . " (" . $build->id . ") rejected: The project or jobset don't have dynamic runcommand enabled.";
+            return 0;
+        }
+
+        return 1;
+    }
+
+    return 0;
 }

 sub configSectionMatches {
@@ -43,10 +110,11 @@ sub eventMatches {
 }

 sub fanoutToCommands {
-    my ($config, $event, $project, $jobset, $job) = @_;
+    my ($config, $event, $build) = @_;

     my @commands;

+    # Calculate all the statically defined commands to execute
     my $cfg = $config->{runcommand};
     my @config = defined $cfg ? ref $cfg eq "ARRAY" ? @$cfg : ($cfg) : ();

@@ -55,9 +123,10 @@ sub fanoutToCommands {
         next unless eventMatches($conf, $event);
         next unless configSectionMatches(
             $matcher,
-            $project,
-            $jobset,
-            $job);
+            $build->jobset->get_column('project'),
+            $build->jobset->get_column('name'),
+            $build->get_column('job')
+        );

         if (!defined($conf->{command})) {
             warn "<runcommand> section for '$matcher' lacks a 'command' option";
@@ -70,6 +139,18 @@ sub fanoutToCommands {
         })
     }

+    # Calculate all dynamically defined commands to execute
+    if (areDynamicCommandsEnabled($config)) {
+        if (isBuildEligibleForDynamicRunCommand($build)) {
+            my $job = $build->get_column('job');
+            my $out = $build->buildoutputs->find({name => "out"});
+            push(@commands, {
+                matcher => "DynamicRunCommand($job)",
+                command => $out->path
+            })
+        }
+    }
+
     return \@commands;
 }

@@ -138,9 +219,7 @@ sub buildFinished {
     my $commandsToRun = fanoutToCommands(
         $self->{config},
         $event,
-        $build->project->get_column('name'),
-        $build->jobset->get_column('name'),
-        $build->get_column('job')
+        $build
     );

     if (@$commandsToRun == 0) {
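Taken together, the plugin now distinguishes statically configured commands from dynamic ones. A minimal hydra.conf sketch that would exercise both paths (the block and option names match this diff and the tests further below; the script path is hypothetical):

    <runcommand>
      job = *:*:*
      command = /path/to/notify.sh
    </runcommand>

    <dynamicruncommand>
      enable = 1
    </dynamicruncommand>

With only the first block present, areStaticCommandsEnabled() keeps the plugin active; the second block is what areDynamicCommandsEnabled() checks before any runCommandHook.* build output is even considered.
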
@@ -155,6 +155,12 @@ __PACKAGE__->table("jobsets");

   data_type: 'text'
   is_nullable: 1

+=head2 enable_dynamic_run_command
+
+  data_type: 'boolean'
+  default_value: false
+  is_nullable: 0
+
 =cut

 __PACKAGE__->add_columns(
@@ -207,6 +213,8 @@ __PACKAGE__->add_columns(
   { data_type => "integer", default_value => 0, is_nullable => 0 },
   "flake",
   { data_type => "text", is_nullable => 1 },
+  "enable_dynamic_run_command",
+  { data_type => "boolean", default_value => \"false", is_nullable => 0 },
 );

 =head1 PRIMARY KEY
@@ -354,8 +362,8 @@ __PACKAGE__->has_many(
 );


-# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-01-08 22:24:10
-# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:cQOnMitrWGMoJX6kZGNW+w
+# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-01-24 14:17:33
+# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:7wPE5ebeVTkenMCWG9Sgcg

 use JSON::MaybeXS;

@@ -378,6 +386,13 @@ __PACKAGE__->add_column(
     "+id" => { retrieve_on_insert => 1 }
 );

+sub supportsDynamicRunCommand {
+    my ($self) = @_;
+
+    return $self->get_column('enable_dynamic_run_command') == 1
+        && $self->project->supportsDynamicRunCommand();
+}
+
 sub as_json {
     my $self = shift;

@@ -406,6 +421,7 @@ sub as_json {

         # boolean_columns
         "enableemail" => $self->get_column("enableemail") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
+        "enable_dynamic_run_command" => $self->get_column("enable_dynamic_run_command") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
         "visible" => $self->get_column("hidden") ? JSON::MaybeXS::false : JSON::MaybeXS::true,

         "inputs" => { map { $_->name => $_ } $self->jobsetinputs }
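The schema methods above encode a strict cascade: a jobset only supports dynamic RunCommand if its parent project does too. A minimal sketch of the full gate (the helper name is hypothetical; the methods it calls are the ones added in this diff):

    sub dynamicRunCommandAllowedFor {
        my ($config, $build) = @_;

        # Server-level switch first; the jobset check itself requires the
        # project-level flag via Projects::supportsDynamicRunCommand().
        return Hydra::Plugin::RunCommand::areDynamicCommandsEnabled($config)
            && $build->jobset->supportsDynamicRunCommand();
    }
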
@@ -88,6 +88,12 @@ __PACKAGE__->table("projects");

   data_type: 'text'
   is_nullable: 1

+=head2 enable_dynamic_run_command
+
+  data_type: 'boolean'
+  default_value: false
+  is_nullable: 0
+
 =cut

 __PACKAGE__->add_columns(
@@ -111,6 +117,8 @@ __PACKAGE__->add_columns(
   { data_type => "text", is_nullable => 1 },
   "declvalue",
   { data_type => "text", is_nullable => 1 },
+  "enable_dynamic_run_command",
+  { data_type => "boolean", default_value => \"false", is_nullable => 0 },
 );

 =head1 PRIMARY KEY
@@ -228,8 +236,8 @@ Composing rels: L</projectmembers> -> username
 __PACKAGE__->many_to_many("usernames", "projectmembers", "username");


-# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-01-08 22:24:10
-# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:r/wbX3FAm5/OFrrwOQL5fA
+# Created by DBIx::Class::Schema::Loader v0.07049 @ 2022-01-24 14:20:32
+# DO NOT MODIFY THIS OR ANYTHING ABOVE! md5sum:PtXDyT8Pc7LYhhdEG39EKQ

 use JSON::MaybeXS;

@@ -238,6 +246,12 @@ sub builds {
     return $self->jobsets->related_resultset('builds');
 };

+sub supportsDynamicRunCommand {
+    my ($self) = @_;
+
+    return $self->get_column('enable_dynamic_run_command') == 1;
+}
+
 sub as_json {
     my $self = shift;

@@ -251,6 +265,7 @@ sub as_json {

         # boolean_columns
         "enabled" => $self->get_column("enabled") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
+        "enable_dynamic_run_command" => $self->get_column("enable_dynamic_run_command") ? JSON::MaybeXS::true : JSON::MaybeXS::false,
         "hidden" => $self->get_column("hidden") ? JSON::MaybeXS::true : JSON::MaybeXS::false,

         "jobsets" => [ map { $_->name } $self->jobsets ]
@@ -18,7 +18,7 @@ struct Connection : pqxx::connection
     std::string upper_prefix = "DBI:Pg:";

     if (hasPrefix(s, lower_prefix) || hasPrefix(s, upper_prefix)) {
-        return concatStringsSep(" ", tokenizeString<Strings>(string(s, lower_prefix.size()), ";"));
+        return concatStringsSep(" ", tokenizeString<Strings>(std::string(s, lower_prefix.size()), ";"));
     }

     throw Error("$HYDRA_DBI does not denote a PostgreSQL database");
@@ -17,7 +17,7 @@ struct HydraConfig
     if (hydraConfigFile && pathExists(*hydraConfigFile)) {

         for (auto line : tokenizeString<Strings>(readFile(*hydraConfigFile), "\n")) {
-            line = trim(string(line, 0, line.find('#')));
+            line = trim(std::string(line, 0, line.find('#')));

             auto eq = line.find('=');
             if (eq == std::string::npos) continue;
@@ -4,6 +4,6 @@

 <div class="dep-tree">
   <ul class="tree">
-    [% INCLUDE renderNode node=buildTimeGraph %]
+    [% INCLUDE renderNode node=buildTimeGraph isRoot=1 %]
   </ul>
 </div>
@@ -149,7 +149,7 @@ END;
       [% IF build.dependents %]<li class="nav-item"><a class="nav-link" href="#tabs-usedby" data-toggle="tab">Used By</a></li>[% END%]
       [% IF drvAvailable %]<li class="nav-item"><a class="nav-link" href="#tabs-build-deps" data-toggle="tab">Build Dependencies</a></li>[% END %]
       [% IF localStore && available %]<li class="nav-item"><a class="nav-link" href="#tabs-runtime-deps" data-toggle="tab">Runtime Dependencies</a></li>[% END %]
-      [% IF runcommandlogs.size() > 0 %]<li class="nav-item"><a class="nav-link" href="#tabs-runcommandlogs" data-toggle="tab">RunCommand Logs</a></li>[% END %]
+      [% IF runcommandlogProblem || runcommandlogs.size() > 0 %]<li class="nav-item"><a class="nav-link" href="#tabs-runcommandlogs" data-toggle="tab">RunCommand Logs[% IF runcommandlogProblem %] <span class="badge badge-warning">Disabled</span>[% END %]</a></li>[% END %]
     </ul>

     <div id="generic-tabs" class="tab-content">
@@ -481,14 +481,27 @@ END;
     [% END %]

     [% IF drvAvailable %]
-      [% INCLUDE makeLazyTab tabName="tabs-build-deps" uri=c.uri_for('/build' build.id 'build-deps') %]
+      [% INCLUDE makeLazyTab tabName="tabs-build-deps" uri=c.uri_for('/build' build.id 'build-deps') callback="makeTreeCollapsible" %]
     [% END %]

     [% IF available %]
-      [% INCLUDE makeLazyTab tabName="tabs-runtime-deps" uri=c.uri_for('/build' build.id 'runtime-deps') %]
+      [% INCLUDE makeLazyTab tabName="tabs-runtime-deps" uri=c.uri_for('/build' build.id 'runtime-deps') callback="makeTreeCollapsible" %]
     [% END %]

     <div id="tabs-runcommandlogs" class="tab-pane">
+      [% IF runcommandlogProblem %]
+        <div class="alert alert-warning" role="alert">
+          [% IF runcommandlogProblem == "disabled-server" %]
+            This server does not enable Dynamic RunCommand support.
+          [% ELSIF runcommandlogProblem == "disabled-project" %]
+            This project does not enable Dynamic RunCommand support.
+          [% ELSIF runcommandlogProblem == "disabled-jobset" %]
+            This jobset does not enable Dynamic RunCommand support.
+          [% ELSE %]
+            Dynamic RunCommand is not enabled: [% runcommandlogProblem %].
+          [% END %]
+        </div>
+      [% END %]
       <div class="d-flex flex-column">
         [% FOREACH runcommandlog IN runcommandlogs %]
           <div class="p-2 border-bottom">
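The template reads runcommandlogProblem from the stash; one plausible way the controller side could populate it, mirroring the three-level cascade, is sketched below (this is an assumption for illustration, not code from this diff):

    # Hypothetical controller snippet: report the most specific reason
    # why dynamic RunCommand logs are unavailable for this build.
    if (!$c->config->{dynamicruncommand}->{enable}) {
        $c->stash->{runcommandlogProblem} = "disabled-server";
    } elsif (!$project->enable_dynamic_run_command) {
        $c->stash->{runcommandlogProblem} = "disabled-project";
    } elsif (!$jobset->enable_dynamic_run_command) {
        $c->stash->{runcommandlogProblem} = "disabled-jobset";
    }
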
@@ -520,7 +520,11 @@ BLOCK makeLazyTab %]
     <center><span class="spinner-border spinner-border-sm"/></center>
   </div>
   <script>
-    $(function() { makeLazyTab("[% tabName %]", "[% uri %]"); });
+    [% IF callback.defined %]
+      $(function() { makeLazyTab("[% tabName %]", "[% uri %]", [% callback %] ); });
+    [% ELSE %]
+      $(function() { makeLazyTab("[% tabName %]", "[% uri %]", null ); });
+    [% END %]
   </script>
 [% END;
@@ -19,9 +19,16 @@
       <tt>[% node.name %]</tt> (<em>no info</em>)
     [% END %]
   </span></span>
+  [% IF isRoot %]
+    <span class="dep-tree-buttons">
+      (<a href="#" class="tree-collapse-all">collapse all</a>
+      –
+      <a href="#" class="tree-expand-all">expand all</a>)
+    </span>
+  [% END %]
   [% IF node.refs.size > 0 %]
     <ul class="subtree">
-      [% FOREACH ref IN node.refs; INCLUDE renderNode node=ref; END %]
+      [% FOREACH ref IN node.refs; INCLUDE renderNode node=ref isRoot=0; END %]
     </ul>
   [% END %]
 [% END %]
@@ -157,6 +157,21 @@
     </div>
   </div>

+  <div class="form-group row">
+    <label class="col-sm-3" for="editjobsetenable_dynamic_run_command">Enable Dynamic RunCommand Hooks</label>
+    <div class="col-sm-9">
+      <input type="checkbox" id="editjobsetenable_dynamic_run_command" name="enable_dynamic_run_command"
+        [% IF !c.config.dynamicruncommand.enable %]
+          title="The server has not enabled dynamic RunCommands" disabled
+        [% ELSIF !project.enable_dynamic_run_command %]
+          title="The parent project has not enabled dynamic RunCommands" disabled
+        [% ELSIF jobset.enable_dynamic_run_command %]
+          checked
+        [% END %]
+      />
+    </div>
+  </div>
+
   <div class="form-group row">
     <label class="col-sm-3" for="editjobsetenableemail">Email notification</label>
     <div class="col-sm-9">
@@ -52,6 +52,20 @@
     </div>
   </div>

+  <div class="form-group row">
+    <label class="col-sm-3" for="editprojectenable_dynamic_run_command">Enable Dynamic RunCommand Hooks for Jobsets</label>
+    <div class="col-sm-9">
+      <input type="checkbox" id="editprojectenable_dynamic_run_command" name="enable_dynamic_run_command"
+        [% IF !c.config.dynamicruncommand.enable %]
+          title="The server has not enabled dynamic RunCommands" disabled
+        [% ELSIF project.enable_dynamic_run_command %]
+          checked
+        [% END %]
+      />
+    </div>
+  </div>
+
   <div class="form-group row">
     <label class="col-sm-3" for="editprojectdeclfile">
       Declarative spec file
@@ -160,6 +160,10 @@
       <th>Scheduling shares:</th>
       <td>[% jobset.schedulingshares %] [% IF totalShares %] ([% f = format("%.2f"); f(jobset.schedulingshares / totalShares * 100) %]% out of [% totalShares %] shares)[% END %]</td>
     </tr>
+    <tr>
+      <th>Enable Dynamic RunCommand Hooks:</th>
+      <td>[% c.config.dynamicruncommand.enable ? project.enable_dynamic_run_command ? jobset.enable_dynamic_run_command ? "Yes" : "No (not enabled by jobset)" : "No (not enabled by project)" : "No (not enabled by server)" %]</td>
+    </tr>
     [% IF emailNotification %]
     <tr>
       <th>Enable email notification:</th>
@@ -93,7 +93,7 @@
   <footer class="navbar">
     <hr />
     <small>
-      <em><a href="http://nixos.org/hydra" target="_blank">Hydra</a> [% HTML.escape(version) %] (using [% HTML.escape(nixVersion) %]).</em>
+      <em><a href="http://nixos.org/hydra" target="_blank" class="squiggle">Hydra</a> [% HTML.escape(version) %] (using [% HTML.escape(nixVersion) %]).</em>
       [% IF c.user_exists %]
         You are signed in as <tt>[% HTML.escape(c.user.username) %]</tt>
         [%- IF c.user.type == 'google' %] via Google[% END %].
@@ -92,6 +92,10 @@
       <th>Enabled:</th>
       <td>[% project.enabled ? "Yes" : "No" %]</td>
     </tr>
+    <tr>
+      <th>Enable Dynamic RunCommand Hooks:</th>
+      <td>[% c.config.dynamicruncommand.enable ? project.enable_dynamic_run_command ? "Yes" : "No (not enabled by project)" : "No (not enabled by server)" %]</td>
+    </tr>
   </table>
 </div>
@@ -1,5 +1,5 @@
 div.skip-topbar {
-  padding-top: 40px;
+  padding-top: 20px;
   margin-bottom: 1.5em;
 }

@@ -33,6 +33,11 @@ span:target > span.dep-tree-line {
   font-weight: bold;
 }

+span.dep-tree-buttons {
+  font-style: italic;
+  padding-left: 10px;
+}
+
 span.disabled-project, span.disabled-jobset, span.disabled-job {
   text-decoration: line-through;
 }
@@ -146,6 +151,36 @@ td.step-status span.warn {
   padding-top: 1.5rem;
 }

+.container {
+  max-width: 80%;
+}
+
+.tab-content {
+  margin-right: 0 !important;
+}
+
+body {
+  line-height: 1;
+}
+
+.navbar-nav {
+  line-height: 1.5;
+}
+
+.dropdown-item {
+  line-height: 1.5;
+}
+
+a.squiggle:hover {
+  background-image: url("data:image/svg+xml;charset=utf8,%3Csvg id='squiggle-link' xmlns='http://www.w3.org/2000/svg' xmlns:xlink='http://www.w3.org/1999/xlink' xmlns:ev='http://www.w3.org/2001/xml-events' viewBox='0 0 10 18'%3E%3Cstyle type='text/css'%3E.squiggle{animation:shift .5s linear infinite;}@keyframes shift {from {transform:translateX(-10px);}to {transform:translateX(0);}}%3C/style%3E%3Cpath fill='none' stroke='%230056b3' stroke-width='0.65' class='squiggle' d='M0,17.5 c 2.5,0,2.5,-1.5,5,-1.5 s 2.5,1.5,5,1.5 c 2.5,0,2.5,-1.5,5,-1.5 s 2.5,1.5,5,1.5' /%3E%3C/svg%3E");
+  background-position: 0 100%;
+  background-size: auto 24px;
+  background-repeat: repeat;
+  text-decoration: none;
+  border-bottom: none;
+  padding-bottom: 1px;
+}
+
 @media (prefers-color-scheme: dark) {
   /* Prevent some flickering */
   html {
@@ -9,6 +9,7 @@ ul.tree, ul.subtree {
 ul.subtree > li {
   position: relative;
   padding-left: 2.0em;
+  line-height: 140%;
   border-left: 0.1em solid #6185a0;
 }
@@ -1,10 +1,9 @@
-$(document).ready(function() {
-
+function makeTreeCollapsible(tab) {
   /*** Tree toggles in logfiles. ***/

   /* Set the appearance of the toggle depending on whether the
      corresponding subtree is initially shown or hidden. */
-  $(".tree-toggle").map(function() {
+  tab.find(".tree-toggle").map(function() {
     if ($(this).siblings("ul:hidden").length == 0) {
       $(this).text("-");
     } else {
@@ -13,7 +12,7 @@ $(document).ready(function() {
   });

   /* When a toggle is clicked, show or hide the subtree. */
-  $(".tree-toggle").click(function() {
+  tab.find(".tree-toggle").click(function() {
     if ($(this).siblings("ul:hidden").length != 0) {
       $(this).siblings("ul").show();
       $(this).text("-");
@@ -24,21 +23,23 @@ $(document).ready(function() {
   });

   /* Implementation of the expand all link. */
-  $(".tree-expand-all").click(function() {
-    $(".tree-toggle", $(this).parent().siblings(".tree")).map(function() {
+  tab.find(".tree-expand-all").click(function() {
+    tab.find(".tree-toggle", $(this).parent().siblings(".tree")).map(function() {
       $(this).siblings("ul").show();
       $(this).text("-");
     });
   });

   /* Implementation of the collapse all link. */
-  $(".tree-collapse-all").click(function() {
-    $(".tree-toggle", $(this).parent().siblings(".tree")).map(function() {
+  tab.find(".tree-collapse-all").click(function() {
+    tab.find(".tree-toggle", $(this).parent().siblings(".tree")).map(function() {
       $(this).siblings("ul").hide();
       $(this).text("+");
     });
   });
+}

+$(document).ready(function() {
   $("table.clickable-rows").click(function(event) {
     if ($(event.target).closest("a").length) return;
     link = $(event.target).parents("tr").find("a.row-link");
@@ -132,7 +133,7 @@ $(document).ready(function() {

 var tabsLoaded = {};

-function makeLazyTab(tabName, uri) {
+function makeLazyTab(tabName, uri, callback) {
   $('.nav-tabs').bind('show.bs.tab', function(e) {
     var pattern = /#.+/gi;
     var id = e.target.toString().match(pattern)[0];
@@ -140,11 +141,15 @@ function makeLazyTab(tabName, uri) {
       tabsLoaded[id] = 1;
       $('#' + tabName).load(uri, function(response, status, xhr) {
         var lazy = xhr.getResponseHeader("X-Hydra-Lazy") === "Yes";
+        var tab = $('#' + tabName);
         if (status == "error" && !lazy) {
-          $('#' + tabName).html("<div class='alert alert-error'>Error loading tab: " + xhr.status + " " + xhr.statusText + "</div>");
+          tab.html("<div class='alert alert-error'>Error loading tab: " + xhr.status + " " + xhr.statusText + "</div>");
         }
         else {
-          $('#' + tabName).html(response);
+          tab.html(response);
+          if (callback) {
+            callback(tab);
+          }
         }
       });
     }
@@ -619,7 +619,7 @@ sub checkJobsetWrapped {
     } else {
         # Update the jobset with the spec's inputs, and the continue
         # evaluating the .jobsets jobset.
-        updateDeclarativeJobset($db, $project, ".jobsets", $declSpec);
+        updateDeclarativeJobset($config, $db, $project, ".jobsets", $declSpec);
         $jobset->discard_changes;
         $inputInfo->{"declInput"} = [ $declInput ];
         $inputInfo->{"projectName"} = [ fetchInput($plugins, $db, $project, $jobset, "projectName", "string", $project->name, 0) ];
@@ -640,8 +640,8 @@ sub checkJobsetWrapped {
     my $flakeRef = $jobset->flake;
     if (defined $flakeRef) {
         (my $res, my $json, my $stderr) = captureStdoutStderr(
-            600, "nix", "flake", "info", "--tarball-ttl", 0, "--json", "--", $flakeRef);
-        die "'nix flake info' returned " . ($res & 127 ? "signal $res" : "exit code " . ($res >> 8))
+            600, "nix", "flake", "metadata", "--refresh", "--json", "--", $flakeRef);
+        die "'nix flake metadata' returned " . ($res & 127 ? "signal $res" : "exit code " . ($res >> 8))
             . ":\n" . ($stderr ? decode("utf-8", $stderr) : "(no output)\n")
             if $res;
         $flakeRef = decode_json($json)->{'url'};
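`nix flake metadata` is the newer name for the deprecated `nix flake info`; the evaluator only relies on the locked "url" field of its JSON output, as the diff shows. A minimal sketch of that contract (the example flakeref and the exact shape of the locked URL are assumptions):

    use JSON::MaybeXS qw(decode_json);

    # Output is JSON such as {"url":"github:nixos/nix/<rev>", ...};
    # only the 'url' field is consumed here.
    my $json = qx(nix flake metadata --refresh --json -- github:nixos/nix);
    my $lockedRef = decode_json($json)->{'url'};
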
@@ -49,6 +49,7 @@ create table Projects (
     declfile text, -- File containing declarative jobset specification
     decltype text, -- Type of the input containing declarative jobset specification
     declvalue text, -- Value of the input containing declarative jobset specification
+    enable_dynamic_run_command boolean not null default false,
     foreign key (owner) references Users(userName) on update cascade
 );

@@ -88,6 +89,7 @@ create table Jobsets (
     startTime integer, -- if jobset is currently running
     type integer not null default 0, -- 0 == legacy, 1 == flake
     flake text,
+    enable_dynamic_run_command boolean not null default false,
     constraint jobsets_schedulingshares_nonzero_check check (schedulingShares > 0),
     constraint jobsets_type_known_check check (type = 0 or type = 1),
     -- If the type is 0, then nixExprInput and nixExprPath should be non-null and other type-specific fields should be null
4
src/sql/upgrade-82.sql
Normal file
@@ -0,0 +1,4 @@
+ALTER TABLE Jobsets
+    ADD COLUMN enable_dynamic_run_command boolean not null default false;
+ALTER TABLE Projects
+    ADD COLUMN enable_dynamic_run_command boolean not null default false;
107
t/Helper/AddBuilds/dynamic-disabled.t
Normal file
@@ -0,0 +1,107 @@
+use strict;
+use warnings;
+use Setup;
+use Test2::V0;
+
+require Catalyst::Test;
+use HTTP::Request::Common qw(POST PUT GET DELETE);
+use JSON::MaybeXS qw(decode_json encode_json);
+
+my $ctx = test_context();
+
+Catalyst::Test->import('Hydra');
+
+my $db = Hydra::Model::DB->new;
+hydra_setup($db);
+
+my $user = $db->resultset('Users')->create({ username => 'alice', emailaddress => 'root@invalid.org', password => '!' });
+$user->setPassword('foobar');
+$user->userroles->update_or_create({ role => 'admin' });
+
+my $project_with_dynamic_run_command = $db->resultset('Projects')->create({
+    name => 'tests_with_dynamic_runcommand',
+    displayname => 'Tests with dynamic runcommand',
+    owner => 'alice',
+    enable_dynamic_run_command => 1,
+});
+my $project_without_dynamic_run_command = $db->resultset('Projects')->create({
+    name => 'tests_without_dynamic_runcommand',
+    displayname => 'Tests without dynamic runcommand',
+    owner => 'alice',
+    enable_dynamic_run_command => 0,
+});
+
+sub makeJobsetSpec {
+    my ($dynamic) = @_;
+
+    return {
+        enabled => 2,
+        enable_dynamic_run_command => $dynamic ? JSON::MaybeXS::true : undef,
+        visible => JSON::MaybeXS::true,
+        name => "job",
+        type => 1,
+        description => "test jobset",
+        flake => "github:nixos/nix",
+        checkinterval => 0,
+        schedulingshares => 100,
+        keepnr => 3
+    };
+};
+
+subtest "validate declarative jobset with dynamic RunCommand disabled by server" => sub {
+    my $config = Hydra::Helper::Nix->getHydraConfig();
+    require Hydra::Helper::AddBuilds;
+    Hydra::Helper::AddBuilds->import( qw(validateDeclarativeJobset) );
+
+    subtest "project enabled dynamic runcommand, declarative jobset enabled dynamic runcommand" => sub {
+        like(
+            dies {
+                validateDeclarativeJobset(
+                    $config,
+                    $project_with_dynamic_run_command,
+                    "test-jobset",
+                    makeJobsetSpec(JSON::MaybeXS::true),
+                ),
+            },
+            qr/Dynamic RunCommand is not enabled/,
+        );
+    };
+
+    subtest "project enabled dynamic runcommand, declarative jobset disabled dynamic runcommand" => sub {
+        ok(
+            validateDeclarativeJobset(
+                $config,
+                $project_with_dynamic_run_command,
+                "test-jobset",
+                makeJobsetSpec(JSON::MaybeXS::false)
+            ),
+        );
+    };
+
+    subtest "project disabled dynamic runcommand, declarative jobset enabled dynamic runcommand" => sub {
+        like(
+            dies {
+                validateDeclarativeJobset(
+                    $config,
+                    $project_without_dynamic_run_command,
+                    "test-jobset",
+                    makeJobsetSpec(JSON::MaybeXS::true),
+                ),
+            },
+            qr/Dynamic RunCommand is not enabled/,
+        );
+    };
+
+    subtest "project disabled dynamic runcommand, declarative jobset disabled dynamic runcommand" => sub {
+        ok(
+            validateDeclarativeJobset(
+                $config,
+                $project_without_dynamic_run_command,
+                "test-jobset",
+                makeJobsetSpec(JSON::MaybeXS::false)
+            ),
+        );
+    };
+};
+
+done_testing;
110
t/Helper/AddBuilds/dynamic-enabled.t
Normal file
@@ -0,0 +1,110 @@
+use strict;
+use warnings;
+use Setup;
+use Test2::V0;
+
+require Catalyst::Test;
+use HTTP::Request::Common qw(POST PUT GET DELETE);
+use JSON::MaybeXS qw(decode_json encode_json);
+
+my $ctx = test_context(
+    hydra_config => q|
+    <dynamicruncommand>
+      enable = 1
+    </dynamicruncommand>
+    |
+);
+
+Catalyst::Test->import('Hydra');
+
+my $db = Hydra::Model::DB->new;
+hydra_setup($db);
+
+my $user = $db->resultset('Users')->create({ username => 'alice', emailaddress => 'root@invalid.org', password => '!' });
+$user->setPassword('foobar');
+$user->userroles->update_or_create({ role => 'admin' });
+
+my $project_with_dynamic_run_command = $db->resultset('Projects')->create({
+    name => 'tests_with_dynamic_runcommand',
+    displayname => 'Tests with dynamic runcommand',
+    owner => 'alice',
+    enable_dynamic_run_command => 1,
+});
+my $project_without_dynamic_run_command = $db->resultset('Projects')->create({
+    name => 'tests_without_dynamic_runcommand',
+    displayname => 'Tests without dynamic runcommand',
+    owner => 'alice',
+    enable_dynamic_run_command => 0,
+});
+
+sub makeJobsetSpec {
+    my ($dynamic) = @_;
+
+    return {
+        enabled => 2,
+        enable_dynamic_run_command => $dynamic ? JSON::MaybeXS::true : undef,
+        visible => JSON::MaybeXS::true,
+        name => "job",
+        type => 1,
+        description => "test jobset",
+        flake => "github:nixos/nix",
+        checkinterval => 0,
+        schedulingshares => 100,
+        keepnr => 3
+    };
+};
+
+subtest "validate declarative jobset with dynamic RunCommand enabled by server" => sub {
+    my $config = Hydra::Helper::Nix->getHydraConfig();
+    require Hydra::Helper::AddBuilds;
+    Hydra::Helper::AddBuilds->import( qw(validateDeclarativeJobset) );
+
+    subtest "project enabled dynamic runcommand, declarative jobset enabled dynamic runcommand" => sub {
+        ok(
+            validateDeclarativeJobset(
+                $config,
+                $project_with_dynamic_run_command,
+                "test-jobset",
+                makeJobsetSpec(JSON::MaybeXS::true)
+            ),
+        );
+    };
+
+    subtest "project enabled dynamic runcommand, declarative jobset disabled dynamic runcommand" => sub {
+        ok(
+            validateDeclarativeJobset(
+                $config,
+                $project_with_dynamic_run_command,
+                "test-jobset",
+                makeJobsetSpec(JSON::MaybeXS::false)
+            ),
+        );
+    };
+
+    subtest "project disabled dynamic runcommand, declarative jobset enabled dynamic runcommand" => sub {
+        like(
+            dies {
+                validateDeclarativeJobset(
+                    $config,
+                    $project_without_dynamic_run_command,
+                    "test-jobset",
+                    makeJobsetSpec(JSON::MaybeXS::true),
+                ),
+            },
+            qr/Dynamic RunCommand is not enabled/,
+        );
+    };
+
+    subtest "project disabled dynamic runcommand, declarative jobset disabled dynamic runcommand" => sub {
+        ok(
+            validateDeclarativeJobset(
+                $config,
+                $project_without_dynamic_run_command,
+                "test-jobset",
+                makeJobsetSpec(JSON::MaybeXS::false)
+            ),
+        );
+    };
+};
+
+done_testing;
@@ -20,6 +20,7 @@ write_file($ctx{'tmpdir'} . "/bar.conf", q|
 |);

 is(getHydraConfig(), {
+    queue_runner_metrics_address => "127.0.0.1:0",
     foo => { bar => "baz" }
 }, "Nested includes work.");
@@ -55,6 +55,12 @@ subtest "/job/PROJECT/JOBSET/JOB/shield" => sub {
 subtest "/job/PROJECT/JOBSET/JOB/prometheus" => sub {
     my $response = request(GET '/job/' . $project->name . '/' . $jobset->name . '/' . $build->job . '/prometheus');
     ok($response->is_success, "The page showing the job's prometheus data returns 200.");
+
+    my $metrics = $response->content;
+    ok($metrics =~ m/hydra_job_failed\{.*\} 0/);
+    ok($metrics =~ m/hydra_job_completion_time\{.*\} [\d]+/);
+    ok($metrics =~ m/hydra_build_closure_size\{.*\} 96/);
+    ok($metrics =~ m/hydra_build_output_size\{.*\} 96/);
 };

 done_testing;
@@ -73,6 +73,7 @@ subtest 'Read newly-created jobset "job"' => sub {
         emailoverride => "",
         enabled => 2,
         enableemail => JSON::MaybeXS::false,
+        enable_dynamic_run_command => JSON::MaybeXS::false,
         errortime => undef,
         errormsg => "",
         fetcherrormsg => "",
@@ -131,6 +132,7 @@ subtest 'Update jobset "job" to legacy type' => sub {
         emailoverride => "",
         enabled => 3,
         enableemail => JSON::MaybeXS::false,
+        enable_dynamic_run_command => JSON::MaybeXS::false,
         errortime => undef,
         errormsg => "",
         fetcherrormsg => "",
@@ -46,6 +46,7 @@ subtest "Read project 'tests'" => sub {
         description => "",
         displayname => "Tests",
         enabled => JSON::MaybeXS::true,
+        enable_dynamic_run_command => JSON::MaybeXS::false,
         hidden => JSON::MaybeXS::false,
         homepage => "",
         jobsets => [],
@@ -85,6 +86,7 @@ subtest "Transitioning from declarative project to normal" => sub {
         description => "",
         displayname => "Tests",
         enabled => JSON::MaybeXS::true,
+        enable_dynamic_run_command => JSON::MaybeXS::false,
         hidden => JSON::MaybeXS::false,
         homepage => "",
         jobsets => [".jobsets"],
@@ -128,6 +130,7 @@ subtest "Transitioning from declarative project to normal" => sub {
         description => "",
         displayname => "Tests",
         enabled => JSON::MaybeXS::true,
+        enable_dynamic_run_command => JSON::MaybeXS::false,
         hidden => JSON::MaybeXS::false,
         homepage => "",
         jobsets => [],
110
t/Hydra/Plugin/RunCommand/dynamic-disabled.t
Normal file
@@ -0,0 +1,110 @@
+use strict;
+use warnings;
+use Setup;
+use Test2::V0;
+
+require Catalyst::Test;
+use HTTP::Request::Common qw(POST PUT GET DELETE);
+use JSON::MaybeXS qw(decode_json encode_json);
+
+my $ctx = test_context();
+Catalyst::Test->import('Hydra');
+
+# Create a user to log in to
+my $user = $ctx->db->resultset('Users')->create({ username => 'alice', emailaddress => 'root@invalid.org', password => '!' });
+$user->setPassword('foobar');
+$user->userroles->update_or_create({ role => 'admin' });
+
+subtest "can't enable dynamic RunCommand when disabled by server" => sub {
+    my $builds = $ctx->makeAndEvaluateJobset(
+        expression => "runcommand-dynamic.nix",
+        build => 1
+    );
+
+    my $build = $builds->{"runCommandHook.example"};
+    my $project = $build->project;
+    my $project_name = $project->name;
+    my $jobset = $build->jobset;
+    my $jobset_name = $jobset->name;
+
+    is($project->enable_dynamic_run_command, 0, "dynamic RunCommand is disabled on projects by default");
+    is($jobset->enable_dynamic_run_command, 0, "dynamic RunCommand is disabled on jobsets by default");
+
+    my $req = request(POST '/login',
+        Referer => 'http://localhost/',
+        Content => {
+            username => 'alice',
+            password => 'foobar'
+        }
+    );
+    is($req->code, 302, "logged in successfully");
+    my $cookie = $req->header("set-cookie");
+
+    subtest "can't enable dynamic RunCommand on project" => sub {
+        my $projectresponse = request(GET "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+
+        my $projectjson = decode_json($projectresponse->content);
+        $projectjson->{enable_dynamic_run_command} = 1;
+
+        my $projectupdate = request(PUT "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+            Content => encode_json($projectjson)
+        );
+
+        $projectresponse = request(GET "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+        $projectjson = decode_json($projectresponse->content);
+
+        is($projectupdate->code, 400);
+        like(
+            $projectupdate->content,
+            qr/Dynamic RunCommand is not/,
+            "failed to change enable_dynamic_run_command, not any other error"
+        );
+        is($projectjson->{enable_dynamic_run_command}, JSON::MaybeXS::false);
+    };
+
+    subtest "can't enable dynamic RunCommand on jobset" => sub {
+        my $jobsetresponse = request(GET "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+
+        my $jobsetjson = decode_json($jobsetresponse->content);
+        $jobsetjson->{enable_dynamic_run_command} = 1;
+
+        my $jobsetupdate = request(PUT "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+            Content => encode_json($jobsetjson)
+        );
+
+        $jobsetresponse = request(GET "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+        $jobsetjson = decode_json($jobsetresponse->content);
+
+        is($jobsetupdate->code, 400);
+        like(
+            $jobsetupdate->content,
+            qr/Dynamic RunCommand is not/,
+            "failed to change enable_dynamic_run_command, not any other error"
+        );
+        is($jobsetjson->{enable_dynamic_run_command}, JSON::MaybeXS::false);
+    };
+};
+
+done_testing;
106
t/Hydra/Plugin/RunCommand/dynamic-enabled.t
Normal file
@@ -0,0 +1,106 @@
+use strict;
+use warnings;
+use Setup;
+use Test2::V0;
+
+require Catalyst::Test;
+use HTTP::Request::Common qw(POST PUT GET DELETE);
+use JSON::MaybeXS qw(decode_json encode_json);
+
+my $ctx = test_context(
+    hydra_config => q|
+    <dynamicruncommand>
+      enable = 1
+    </dynamicruncommand>
+    |
+);
+Catalyst::Test->import('Hydra');
+
+# Create a user to log in to
+my $user = $ctx->db->resultset('Users')->create({ username => 'alice', emailaddress => 'root@invalid.org', password => '!' });
+$user->setPassword('foobar');
+$user->userroles->update_or_create({ role => 'admin' });
+
+subtest "can enable dynamic RunCommand when enabled by server" => sub {
+    my $builds = $ctx->makeAndEvaluateJobset(
+        expression => "runcommand-dynamic.nix",
+        build => 1
+    );
+
+    my $build = $builds->{"runCommandHook.example"};
+    my $project = $build->project;
+    my $project_name = $project->name;
+    my $jobset = $build->jobset;
+    my $jobset_name = $jobset->name;
+
+    is($project->enable_dynamic_run_command, 0, "dynamic RunCommand is disabled on projects by default");
+    is($jobset->enable_dynamic_run_command, 0, "dynamic RunCommand is disabled on jobsets by default");
+
+    my $req = request(POST '/login',
+        Referer => 'http://localhost/',
+        Content => {
+            username => 'alice',
+            password => 'foobar'
+        }
+    );
+    is($req->code, 302, "logged in successfully");
+    my $cookie = $req->header("set-cookie");
+
+    subtest "can enable dynamic RunCommand on project" => sub {
+        my $projectresponse = request(GET "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+
+        my $projectjson = decode_json($projectresponse->content);
+        $projectjson->{enable_dynamic_run_command} = 1;
+
+        my $projectupdate = request(PUT "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+            Content => encode_json($projectjson)
+        );
+
+        $projectresponse = request(GET "/project/$project_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+        $projectjson = decode_json($projectresponse->content);
+
+        is($projectupdate->code, 200);
+        is($projectjson->{enable_dynamic_run_command}, JSON::MaybeXS::true);
+    };
+
+    subtest "can enable dynamic RunCommand on jobset" => sub {
+        my $jobsetresponse = request(GET "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+
+        my $jobsetjson = decode_json($jobsetresponse->content);
+        $jobsetjson->{enable_dynamic_run_command} = 1;
+
+        my $jobsetupdate = request(PUT "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+            Content => encode_json($jobsetjson)
+        );
+
+        $jobsetresponse = request(GET "/jobset/$project_name/$jobset_name",
+            Accept => 'application/json',
+            Content_Type => 'application/json',
+            Cookie => $cookie,
+        );
+        $jobsetjson = decode_json($jobsetresponse->content);
+
+        is($jobsetupdate->code, 200);
+        is($jobsetjson->{enable_dynamic_run_command}, JSON::MaybeXS::true);
+    };
+};
+
+done_testing;
233
t/Hydra/Plugin/RunCommand/fanout.t
Normal file
233
t/Hydra/Plugin/RunCommand/fanout.t
Normal file
|
@ -0,0 +1,233 @@
|
||||||
|
use strict;
|
||||||
|
use warnings;
|
||||||
|
use Setup;
|
||||||
|
use Test2::V0;
|
||||||
|
use Hydra::Plugin::RunCommand;
|
||||||
|
|
||||||
|
my $ctx = test_context();
|
||||||
|
|
||||||
|
my $builds = $ctx->makeAndEvaluateJobset(
|
||||||
|
expression => "runcommand-dynamic.nix",
|
||||||
|
build => 1
|
||||||
|
);
|
||||||
|
|
||||||
|
my $build = $builds->{"runCommandHook.example"};
|
||||||
|
|
||||||
|
# Enable dynamic runcommand on the project and jobset
|
||||||
|
$build->project->update({enable_dynamic_run_command => 1});
|
||||||
|
$build->jobset->update({enable_dynamic_run_command => 1});
|
||||||
|
|
||||||
|
is($build->job, "runCommandHook.example", "The only job should be runCommandHook.example");
|
||||||
|
is($build->finished, 1, "Build should be finished.");
|
||||||
|
is($build->buildstatus, 0, "Build should have buildstatus 0.");
|
||||||
|
|
||||||
|
subtest "fanoutToCommands" => sub {
|
||||||
|
my $config = {
|
||||||
|
runcommand => [
|
||||||
|
{
|
||||||
|
job => "",
|
||||||
|
command => "foo"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
job => "*:*:*",
|
||||||
|
command => "bar"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
job => "tests:basic:nomatch",
|
||||||
|
command => "baz"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
};
|
||||||
|
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::fanoutToCommands(
|
||||||
|
$config,
|
||||||
|
"buildFinished",
|
||||||
|
$build
|
||||||
|
),
|
||||||
|
[
|
||||||
|
{
|
||||||
|
matcher => "",
|
||||||
|
command => "foo"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
matcher => "*:*:*",
|
||||||
|
command => "bar"
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"fanoutToCommands returns a command per matching job"
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
subtest "fanoutToCommandsWithDynamicRunCommandSupport" => sub {
|
||||||
|
like(
|
||||||
|
$build->buildoutputs->find({name => "out"})->path,
|
||||||
|
qr/my-build-product$/,
|
||||||
|
"The way we find the out path is reasonable"
|
||||||
|
);
|
||||||
|
|
||||||
|
my $config = {
|
||||||
|
dynamicruncommand => { enable => 1 },
|
||||||
|
runcommand => [
|
||||||
|
{
|
||||||
|
job => "*:*:*",
|
||||||
|
command => "baz"
|
||||||
|
}
|
||||||
|
]
|
||||||
|
};
|
||||||
|
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::fanoutToCommands(
|
||||||
|
$config,
|
||||||
|
"buildFinished",
|
||||||
|
$build
|
||||||
|
),
|
||||||
|
[
|
||||||
|
{
|
||||||
|
matcher => "*:*:*",
|
||||||
|
command => "baz"
|
||||||
|
},
|
||||||
|
{
|
||||||
|
matcher => "DynamicRunCommand(runCommandHook.example)",
|
||||||
|
command => $build->buildoutputs->find({name => "out"})->path
|
||||||
|
}
|
||||||
|
],
|
||||||
|
"fanoutToCommands returns a command per matching job"
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
subtest "isBuildEligibleForDynamicRunCommand" => sub {
|
||||||
|
subtest "Non-matches based on name alone ..." => sub {
|
||||||
|
my $build = $builds->{"foo-bar-baz"};
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($build),
|
||||||
|
0,
|
||||||
|
"The job name does not match"
|
||||||
|
);
|
||||||
|
|
||||||
|
$build->set_column("job", "runCommandHook");
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($build),
|
||||||
|
0,
|
||||||
|
"The job name does not match"
|
||||||
|
);
|
||||||
|
|
||||||
|
$build->set_column("job", "runCommandHook.");
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($build),
|
||||||
|
0,
|
||||||
|
"The job name does not match"
|
||||||
|
);
|
||||||
|
};
|
||||||
|
|
||||||
|
subtest "On outputs ..." => sub {
|
||||||
|
ok(!warns {
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.example"}),
|
||||||
|
1,
|
||||||
|
"out is an executable file"
|
||||||
|
);
|
||||||
|
}, "No warnings for an executable file.");
|
||||||
|
|
||||||
|
ok(!warns {
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.symlink"}),
|
||||||
|
1,
|
||||||
|
"out is a symlink to an executable file"
|
||||||
|
);
|
||||||
|
}, "No warnings for a symlink to an executable file.");
|
||||||
|
|
||||||
|
like(warning {
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.no-out"}),
|
||||||
|
0,
|
||||||
|
"No output named out"
|
||||||
|
);
|
||||||
|
}, qr/rejected: no output named 'out'/, "A relevant warning is provided for a missing output");
|
||||||
|
|
||||||
|
like(warning {
|
||||||
|
is(
|
||||||
|
Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.out-is-directory"}),
|
||||||
|
0,
|
||||||
|
"out is a directory"
|
||||||
|
);
|
        }, qr/output is not a regular file or symlink/, "A relevant warning is provided for a directory output");

        like(warning {
            is(
                Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.out-is-not-executable-file"}),
                0,
                "out is a file which is not a regular file or symlink"
            );
        }, qr/output is not executable/, "A relevant warning is provided if the file isn't executable");

        like(warning {
            is(
                Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.symlink-non-executable"}),
                0,
                "out is a symlink to a non-executable file"
            );
        }, qr/output is not executable/, "A relevant warning is provided for symlinks to non-executables");

        like(warning {
            is(
                Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.symlink-directory"}),
                0,
                "out is a symlink to a directory"
            );
        }, qr/output is not a regular file or symlink/, "A relevant warning is provided for symlinks to directories");
    };

    subtest "On build status ..." => sub {
        is(
            Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.failed"}),
            0,
            "Failed builds don't get run"
        );
    };

    subtest "With dynamic runcommand disabled ..." => sub {
        subtest "disabled on the project, enabled on the jobset" => sub {
            $build->project->update({enable_dynamic_run_command => 0});
            $build->jobset->update({enable_dynamic_run_command => 1});

            like(warning {
                is(
                    Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.example"}),
                    0,
                    "Builds don't run from a jobset with disabled dynamic runcommand"
                );
            }, qr/project or jobset don't have dynamic runcommand enabled./, "A relevant warning is provided when dynamic runcommand support is disabled");
        };

        subtest "enabled on the project, disabled on the jobset" => sub {
            $build->project->update({enable_dynamic_run_command => 1});
            $build->jobset->update({enable_dynamic_run_command => 0});

            like(warning {
                is(
                    Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.example"}),
                    0,
                    "Builds don't run from a jobset with disabled dynamic runcommand"
                );
            }, qr/project or jobset don't have dynamic runcommand enabled./, "A relevant warning is provided when dynamic runcommand support is disabled");
        };

        subtest "disabled on the project, disabled on the jobset" => sub {
            $build->project->update({enable_dynamic_run_command => 0});
            $build->jobset->update({enable_dynamic_run_command => 0});

            like(warning {
                is(
                    Hydra::Plugin::RunCommand::isBuildEligibleForDynamicRunCommand($builds->{"runCommandHook.example"}),
                    0,
                    "Builds don't run from a jobset with disabled dynamic runcommand"
                );
            }, qr/project or jobset don't have dynamic runcommand enabled./, "A relevant warning is provided when dynamic runcommand support is disabled");
        };
    };
};

done_testing;
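The warnings asserted above pin down the validation order for the hook's "out" output: it must resolve to a regular file, and that file must be executable. A minimal sketch of such a check, written here only to summarize the tests (it is not the plugin's actual code):

use strict;
use warnings;

# Hypothetical sketch of the validation the tests above imply. Perl's -f
# follows symlinks, so a symlink to a directory fails the first check, and a
# symlink to a non-executable file passes it but fails the second.
sub check_hook_output {
    my ($path) = @_;
    return "output is not a regular file or symlink" unless -f $path;
    return "output is not executable" unless -x $path;
    return undef; # eligible
}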
@@ -7,13 +7,13 @@ use Hydra::Plugin::RunCommand;

subtest "isEnabled" => sub {
    is(
        Hydra::Plugin::RunCommand::isEnabled({}),
-        "",
+        0,
        "Disabled by default."
    );

    is(
        Hydra::Plugin::RunCommand::isEnabled({ config => {}}),
-        "",
+        0,
        "Disabled by default."
    );

@@ -22,6 +22,121 @@ subtest "isEnabled" => sub {
        1,
        "Enabled if any runcommand blocks exist."
    );
+
+    is(
+        Hydra::Plugin::RunCommand::isEnabled({ config => { dynamicruncommand => {}}}),
+        0,
+        "Not enabled if an empty dynamicruncommand block exists."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::isEnabled({ config => { dynamicruncommand => { enable => 0 }}}),
+        0,
+        "Not enabled if a dynamicruncommand block exists without enable being set to 1."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::isEnabled({ config => { dynamicruncommand => { enable => 1 }}}),
+        1,
+        "Enabled if a dynamicruncommand block exists with enable set to 1."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::isEnabled({ config => {
+            runcommand => {},
+            dynamicruncommand => { enable => 0 }
+        }}),
+        1,
+        "Enabled if a runcommand config block exists, even if dynamicruncommand is explicitly disabled."
+    );
+};
+
+subtest "areStaticCommandsEnabled" => sub {
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({}),
+        0,
+        "Disabled by default."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({ runcommand => {}}),
+        1,
+        "Enabled if any runcommand blocks exist."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({ dynamicruncommand => {}}),
+        0,
+        "Not enabled by dynamicruncommand blocks."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({ dynamicruncommand => { enable => 0 }}),
+        0,
+        "Not enabled by dynamicruncommand blocks."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({ dynamicruncommand => { enable => 1 }}),
+        0,
+        "Not enabled by dynamicruncommand blocks."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areStaticCommandsEnabled({
+            runcommand => {},
+            dynamicruncommand => { enable => 0 }
+        }),
+        1,
+        "Enabled if a runcommand config block exists, even if dynamicruncommand is explicitly disabled."
+    );
+};
+
+subtest "areDynamicCommandsEnabled" => sub {
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({}),
+        0,
+        "Disabled by default."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({ runcommand => {}}),
+        0,
+        "Disabled even if any runcommand blocks exist."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({ dynamicruncommand => {}}),
+        0,
+        "Not enabled if an empty dynamicruncommand block exists."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({ dynamicruncommand => { enable => 0 }}),
+        0,
+        "Not enabled if a dynamicruncommand block exists without enable being set to 1."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({ dynamicruncommand => { enable => 1 }}),
+        1,
+        "Enabled if a dynamicruncommand block exists with enable set to 1."
+    );
+
+    is(
+        Hydra::Plugin::RunCommand::areDynamicCommandsEnabled({
+            runcommand => {},
+            dynamicruncommand => { enable => 0 }
+        }),
+        0,
+        "Disabled if dynamicruncommand is explicitly disabled."
+    );
};

subtest "configSectionMatches" => sub {

@@ -134,44 +249,4 @@ subtest "eventMatches" => sub {
    );
};

-subtest "fanoutToCommands" => sub {
-    my $config = {
-        runcommand => [
-            {
-                job => "",
-                command => "foo"
-            },
-            {
-                job => "project:*:*",
-                command => "bar"
-            },
-            {
-                job => "project:jobset:nomatch",
-                command => "baz"
-            }
-        ]
-    };
-
-    is(
-        Hydra::Plugin::RunCommand::fanoutToCommands(
-            $config,
-            "buildFinished",
-            "project",
-            "jobset",
-            "job"
-        ),
-        [
-            {
-                matcher => "",
-                command => "foo"
-            },
-            {
-                matcher => "project:*:*",
-                command => "bar"
-            }
-        ],
-        "fanoutToCommands returns a command per matching job"
-    );
-};
-
done_testing;
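Taken together, these assertions define the enablement matrix: static commands are on whenever a runcommand block exists, dynamic commands require dynamicruncommand.enable = 1, and the plugin as a whole is enabled if either holds. A sketch of predicates consistent with the assertions, with bodies assumed (only the names and argument shapes appear in the tests):

use strict;
use warnings;

# Assumed implementations, derived solely from the assertions above.
sub areStaticCommandsEnabled {
    my ($config) = @_;
    return defined($config->{runcommand}) ? 1 : 0;   # any <runcommand> block
}

sub areDynamicCommandsEnabled {
    my ($config) = @_;
    return (($config->{dynamicruncommand} // {})->{enable} // 0) == 1 ? 1 : 0;
}

sub isEnabled {
    my ($self) = @_;
    my $config = $self->{config} // {};
    return (areStaticCommandsEnabled($config) || areDynamicCommandsEnabled($config)) ? 1 : 0;
}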
63
t/evaluator/evaluate-oom-job.t
Normal file

@@ -0,0 +1,63 @@
use strict;
use warnings;
use Setup;
use Test2::V0;
use Hydra::Helper::Exec;

# Ensure that `systemd-run` is
# - Available in the PATH/environment
# - Accessible to the user executing it
# - Capable of using the command switches we use in our test
my $sd_res;
eval {
    ($sd_res) = captureStdoutStderr(3, (
        "systemd-run",
        "--user",
        "--collect",
        "--scope",
        "--property",
        "MemoryMax=25M",
        "--",
        "true"
    ));
} or do {
    # The command failed to execute, likely because `systemd-run` is not present
    # in `PATH`
    skip_all("`systemd-run` failed when invoked in this environment");
};
if ($sd_res != 0) {
    # `systemd-run` executed but failed to call `true` and return successfully
    skip_all("`systemd-run` returned non-zero when executing `true` (expected 0)");
}

my $ctx = test_context();

# Contain the memory usage to 25 megabytes using `systemd-run`.
# Run `hydra-eval-jobs` on a test job that will purposefully consume all
# available memory.
my ($res, $stdout, $stderr) = captureStdoutStderr(60, (
    "systemd-run",
    "--user",
    "--collect",
    "--scope",
    "--property",
    "MemoryMax=25M",
    "--",
    "hydra-eval-jobs",
    "-I", "/dev/zero",
    "-I", $ctx->jobsdir,
    ($ctx->jobsdir . "/oom.nix")
));

isnt($res, 0, "`hydra-eval-jobs` exits non-zero");
ok(utf8::decode($stderr), "Stderr output is UTF8-clean");
like(
    $stderr,
    # Assert the error log contains the message added in PR
    # https://github.com/NixOS/hydra/pull/1203
    qr/^child process \(\d+?\) killed by signal=9$/m,
    "The stderr record includes a relevant error message"
);

done_testing;
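The same constrained evaluation can be reproduced by hand outside the test harness. A sketch, assuming a checkout of this repository (the "./t/jobs" paths are placeholders for wherever the fixtures live):

use strict;
use warnings;

# Reproduce the OOM scenario directly: cap memory at 25M and evaluate the
# deliberately memory-hungry job below.
my @cmd = (
    "systemd-run", "--user", "--collect", "--scope",
    "--property", "MemoryMax=25M", "--",
    "hydra-eval-jobs", "-I", "./t/jobs", "./t/jobs/oom.nix",
);
# Expected: a non-zero exit status, with a "killed by signal=9" message on stderr.
system(@cmd);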
3
t/jobs/oom.nix
Normal file

@@ -0,0 +1,3 @@
{
  oom = builtins.readFile "/dev/zero";
}
148
t/jobs/runcommand-dynamic.nix
Normal file

@@ -0,0 +1,148 @@
with import ./config.nix;
rec {
  foo-bar-baz = mkDerivation {
    name = "foo-bar-baz";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          touch $out
        ''
      )
    ];
  };

  runCommandHook.example = mkDerivation {
    name = "my-build-product";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          touch $out
          chmod +x $out
          # ... dunno ...
        ''
      )
    ];
  };

  runCommandHook.symlink = mkDerivation {
    name = "symlink-out";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          ln -s $1 $out
        ''
      )

      runCommandHook.example
    ];
  };

  runCommandHook.no-out = mkDerivation {
    name = "no-out";
    builder = "/bin/sh";
    outputs = [ "bin" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh
          mkdir $bin
        ''
      )
    ];
  };

  runCommandHook.out-is-directory = mkDerivation {
    name = "out-is-directory";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          mkdir $out
        ''
      )
    ];
  };

  runCommandHook.out-is-not-executable-file = mkDerivation {
    name = "out-is-not-executable-file";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          touch $out
        ''
      )
    ];
  };

  runCommandHook.symlink-non-executable = mkDerivation {
    name = "symlink-non-executable";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          ln -s $1 $out
        ''
      )

      runCommandHook.out-is-not-executable-file
    ];
  };

  runCommandHook.symlink-directory = mkDerivation {
    name = "symlink-directory";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          ln -s $1 $out
        ''
      )

      runCommandHook.out-is-directory
    ];
  };

  runCommandHook.failed = mkDerivation {
    name = "failed";
    builder = "/bin/sh";
    outputs = [ "out" ];
    args = [
      (
        builtins.toFile "builder.sh" ''
          #! /bin/sh

          touch $out
          chmod +x $out

          exit 1
        ''
      )
    ];
  };

}
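Each runCommandHook.* fixture above exercises one branch of the eligibility check. As a reference map in Perl: the zero entries come from the test file earlier in this diff, while the two positive entries are only implied by how the fixtures are constructed, not shown in these hunks.

# Fixture => expected isBuildEligibleForDynamicRunCommand result
my %expected = (
    "runCommandHook.example"                    => 1,  # executable regular file (implied)
    "runCommandHook.symlink"                    => 1,  # symlink to an executable file (implied)
    "runCommandHook.no-out"                     => 0,  # only a "bin" output, no "out"
    "runCommandHook.out-is-directory"           => 0,  # out is a directory
    "runCommandHook.out-is-not-executable-file" => 0,  # regular file, not executable
    "runCommandHook.symlink-non-executable"     => 0,  # symlink to a non-executable file
    "runCommandHook.symlink-directory"          => 0,  # symlink to a directory
    "runCommandHook.failed"                     => 0,  # the build itself failed
);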
@@ -51,17 +51,21 @@ sub new {
    $ENV{'HYDRA_CONFIG'} = "$dir/hydra.conf";

    my $hydra_config = $opts{'hydra_config'} || "";
+    $hydra_config = "queue_runner_metrics_address = 127.0.0.1:0\n" . $hydra_config;
    if ($opts{'use_external_destination_store'} // 1) {
-        $hydra_config = "store_uri = file:$dir/nix/dest-store\n" . $hydra_config;
+        $hydra_config = "store_uri = file://$dir/nix/dest-store\n" . $hydra_config;
    }

    write_file($ENV{'HYDRA_CONFIG'}, $hydra_config);

-    $ENV{'NIX_LOG_DIR'} = "$dir/nix/var/log/nix";
+    my $nix_store_dir = "$dir/nix/store";
+    my $nix_state_dir = "$dir/nix/var/nix";
+    my $nix_log_dir = "$dir/nix/var/log/nix";
+
    $ENV{'NIX_REMOTE_SYSTEMS'} = '';
-    $ENV{'NIX_REMOTE'} = '';
-    $ENV{'NIX_STATE_DIR'} = "$dir/nix/var/nix";
-    $ENV{'NIX_STORE_DIR'} = "$dir/nix/store";
+    $ENV{'NIX_REMOTE'} = "local?store=$nix_store_dir&state=$nix_state_dir&log=$nix_log_dir";
+    $ENV{'NIX_STATE_DIR'} = $nix_state_dir; # FIXME: remove
+    $ENV{'NIX_STORE_DIR'} = $nix_store_dir; # FIXME: remove

    my $pgsql = Test::PostgreSQL->new(
        extra_initdb_args => "--locale C.UTF-8"

@@ -72,7 +76,8 @@ sub new {
        _db => undef,
        db_handle => $pgsql,
        tmpdir => $dir,
-        nix_state_dir => "$dir/nix/var/nix",
+        nix_state_dir => $nix_state_dir,
+        nix_log_dir => $nix_log_dir,
        testdir => abs_path(dirname(__FILE__) . "/.."),
        jobsdir => abs_path(dirname(__FILE__) . "/../jobs")
    }, $class;
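The substance of this change: instead of an empty NIX_REMOTE plus separate NIX_STORE_DIR, NIX_STATE_DIR, and NIX_LOG_DIR variables, the test context now encodes the whole per-test store layout in a single local store URI, keeping the old variables only as a FIXME fallback. Spelled out, with a placeholder temp dir:

my $dir = "/tmp/hydra-test-XXXX";   # placeholder for the per-test tmpdir
my $nix_store_dir = "$dir/nix/store";
my $nix_state_dir = "$dir/nix/var/nix";
my $nix_log_dir   = "$dir/nix/var/log/nix";

# One URI now carries what the individual environment variables used to:
$ENV{'NIX_REMOTE'} = "local?store=$nix_store_dir&state=$nix_state_dir&log=$nix_log_dir";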
@@ -33,9 +33,6 @@ my $ctx = test_context(
# the build locally.

subtest "Pre-build the job, upload to the cache, and then delete locally" => sub {
-    my $scratchlogdir = File::Temp->newdir();
-    $ENV{'NIX_LOG_DIR'} = "$scratchlogdir";
-
    my $outlink = $ctx->tmpdir . "/basic-canbesubstituted";
    is(system('nix-build', $ctx->jobsdir . '/notifications.nix', '-A', 'canbesubstituted', '--out-link', $outlink), 0, "Building notifications.nix succeeded");
    is(system('nix', 'copy', '--to', "file://${binarycachedir}", $outlink), 0, "Copying the closure to the binary cache succeeded");

@@ -46,6 +43,7 @@ subtest "Pre-build the job, upload to the cache, and then delete locally" => sub
    is(system('nix', 'log', $outpath), 0, "Reading the output's log succeeds");
    is(system('nix-store', '--delete', $outpath), 0, "Deleting the notifications.nix output succeeded");
    is(system("nix-collect-garbage"), 0, "Delete all the system's garbage");
+    File::Path::rmtree($ctx->{nix_log_dir});
};

subtest "Ensure substituting the job works, but reading the log fails" => sub {