At the moment, aggregate jobs can easily break and cause the entire
evaluation to fail, which is not ideal. For Nixpkgs, we do have some
important aggregate jobs (like `tested`), but for debugging and building
purposes it's still useful to get a partial result even if the channel
won't actually advance.
This commit changes the behaviour of hydra-eval-jobs so that it
collects any errors found during the construction of an aggregate and,
instead of failing the whole evaluation, annotates the job with the
evaluation failure so that it shows up in a "cleaner" way.
There are really two types of failure that we care about: one where
the attribute ends up missing altogether from the final output, and
one where the attribute is present in the output but fails to
evaluate. Both are handled here.
Note that this does mean that the same error message may be output
multiple times, but this aids debuggability because it'll be much
clearer what's blocking the job from being created.
At the moment, the jobset object is unlikely to actually retrieve the
evaluation error output, because it isn't refreshed after
hydra-eval-jobsets is run.
Explicitly calling DBIx::Class::Row->discard_changes re-reads the row
from the database, so any updated data is picked up, at the cost of
losing any not-yet-committed changes to the row.
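
For illustration, a minimal sketch of the refresh (assuming the usual
Jobsets resultset and its errormsg column; the jobset name and the
surrounding test plumbing are made up):

    # Look up the jobset row before evaluation.
    my $jobset = $db->resultset('Jobsets')->find({ name => 'broken-aggregate' });

    # ... run the evaluation here ...

    # discard_changes re-selects the row from the database; any unsaved
    # in-memory changes on $jobset are thrown away in the process.
    $jobset->discard_changes;

    print $jobset->errormsg, "\n";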
Fix parsing breakage from #1003: assigning the lines to $lines broke chomp and the filters.
This test validates that the parsing works as expected, and also
fixes a minor bug where a '-' in the features list isn't pruned, as
it is in the C++ repo.
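
As a rough illustration only (not the actual parser; the example line
and field positions follow the usual machines-file layout), pruning the
'-' placeholder might look like:

    # "-" stands in for an empty field in a machines file line; drop it
    # from the features list rather than treating it as a feature named "-".
    my $line = 'root@builder x86_64-linux /key 8 1 kvm,benchmark - -';
    chomp $line;

    my @fields   = split /\s+/, $line;
    my $features = $fields[5] // '-';

    my @features = grep { $_ ne '' && $_ ne '-' } split /,/, $features;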
This is causing CI to fail now that #1026 has merged. #1026 had a
clean bill of health when it was checked, but #1003 raised perlcritic
to level 4, and since #1003 was not part of #1026, #1026 was never
checked at perlcritic level 4.
Yet again, manual testing is proving to be insufficient. I'm pretty
sure I wrote this code but lost it in a rebase, or perhaps during the
switch to result classes.
At any rate, this implements the actual "fetch a retry row and run it"
for the hydra-notify daemon.
Tested by hand.
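
Roughly, the loop does something like the following sketch -- the
TaskRetries resultset, its columns, and the dispatch_task helper are
stand-ins for illustration, not the real names:

    sub run_one_retry {
        my ($db) = @_;

        # Pick the oldest task whose retry time has passed.
        my $row = $db->resultset('TaskRetries')->search(
            { retry_at => { '<=' => time() } },
            { order_by => { -asc => 'retry_at' }, rows => 1 },
        )->first;

        return unless defined $row;

        # Re-run the task; on failure push the row further into the
        # future, otherwise it's done and the row can go away.
        if (eval { dispatch_task($row); 1 }) {
            $row->delete;
        } else {
            $row->update({
                attempts => $row->attempts + 1,
                retry_at => time() + 2 ** $row->attempts,
            });
        }
    }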
Get the number of seconds before the next retriable task is ready.
This number is specifically intended to be used as a timeout, where
`undef` means never time out.
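
A minimal sketch of such a helper, again assuming a TaskRetries
resultset with a retry_at column (names are illustrative):

    # Returns the number of seconds until the next retriable task is due,
    # or undef when there are no pending retries (block indefinitely).
    sub seconds_to_next_retry {
        my ($db) = @_;

        my $next = $db->resultset('TaskRetries')->get_column('retry_at')->min;
        return undef unless defined $next;

        my $delta = $next - time();
        return $delta > 0 ? $delta : 0;
    }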
Exposes metrics:
* http_request_duration_seconds_bucket
* http_request_size_bytes_bucket
* http_response_size_bytes_bucket
* http_requests_total
with labels of action and controller to help identify popular
endpoints and their performance characteristics.
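
For illustration, recording one finished request with Prometheus::Tiny
might look roughly like this (the module choice, bucket boundaries, and
label values are assumptions; the Catalyst wiring is omitted):

    use Prometheus::Tiny;

    my $prom = Prometheus::Tiny->new;
    $prom->declare('http_requests_total',
        help => 'Total number of HTTP requests', type => 'counter');
    $prom->declare('http_request_duration_seconds',
        help => 'Request duration in seconds', type => 'histogram',
        buckets => [ 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5 ]);

    # Labels identify the endpoint that handled the request.
    my %labels = (controller => 'Jobset', action => 'jobset');

    $prom->inc('http_requests_total', \%labels);
    $prom->histogram_observe('http_request_duration_seconds', 0.042, \%labels);

    print $prom->format;   # exposition format, including the *_bucket series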
If the project isn't declarative, who cares about it in the response?
After setting the `declfile` to an empty string, everything related to
declarativeness is wiped out anyway.
To further align with the API, we return custom JSON in order to display a
`visible` field rather than `hidden` -- a `PUT` request expects `visible`, while
a `GET` request returns `hidden`.
This also allows us to rename the `jobsetinputs` field to `inputs` for the same
reason: `PUT` expects `inputs`, while `GET` returns `jobsetinputs`.
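
Sketching the renaming ($jobset is the Jobsets row; build_inputs is a
hypothetical helper over the jobsetinputs relationship, shown only to
illustrate the field names):

    my %response = (
        name    => $jobset->name,
        # The column is "hidden", but PUT takes "visible", so invert it
        # here; \1 and \0 serialise as JSON true/false.
        visible => $jobset->hidden ? \0 : \1,
        # Expose the inputs under the name PUT expects, not "jobsetinputs".
        inputs  => build_inputs($jobset),
    );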