We now take into account the memory necessary for compressing the NAR
being exported to the binary cache, plus xz compression overhead.
Also, we now release the memory tokens for the NAR accessor *after*
releasing the NAR accessor. Previously the memory for the NAR accessor
might still be in use while another thread does an allocation, causing
the maximum to be exceeded temporarily.
Also, use notify_all instead of notify_one to wake up memory token
waiters. This is not very nice, but since not every waiter is
requesting the same number of tokens, some might be able to proceed
while others can't.
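The mechanism, as a minimal self-contained sketch (illustrative names
and structure, not the actual TokenServer code):

    #include <condition_variable>
    #include <cstddef>
    #include <mutex>

    struct MemoryTokens
    {
        std::mutex m;
        std::condition_variable cv;
        size_t inUse = 0;
        const size_t maxTokens = 4ULL << 30; // 4 GiB

        // Blocks until the requested number of tokens fits under the limit.
        void acquire(size_t n)
        {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [&] { return inUse + n <= maxTokens; });
            inUse += n;
        }

        void release(size_t n)
        {
            {
                std::lock_guard<std::mutex> lock(m);
                inUse -= n;
            }
            // notify_all, not notify_one: the one woken waiter might
            // request more tokens than were just freed, while another
            // waiter's smaller request would fit.
            cv.notify_all();
        }
    };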
If a step is cancelled just as its builder step is starting,
doBuildStep() will return sRetry. This causes builder() to make the
step runnable again, since the queue monitor may have added new builds
referencing it. The idea is that if the latter condition is not true,
the step's reference count will drop to zero and it will be
deleted. However, if the dispatcher thread sees and locks the step
before the reference count can drop to zero in the builder thread, the
dispatcher thread will start a new builder thread for the step. Thus
the step can be kept alive for an indefinite amount of time.
The fix is for State::builder() to use a weak pointer to the step, to
ensure that the step's reference count can drop to zero *before* it's
added to the runnable queue.
This was a bad idea because pthread_cancel() is unsalvageably broken
in C++. Destructors are not allowed to throw exceptions (especially in
C++11), but pthread_cancel() can cause a __cxxabiv1::__forced_unwind
exception inside any destructor that invokes a cancellation
point. (This exception can be caught but *must* be rethrown.) So let's
just kill the builder process instead.
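The hazard, sketched (with GCC's libstdc++, which declares
abi::__forced_unwind in <cxxabi.h> via <bits/cxxabi_forced.h>):

    #include <cxxabi.h>
    #include <unistd.h>

    void threadFunc()
    {
        try {
            char c;
            (void) read(0, &c, 1); // a cancellation point
        } catch (abi::__forced_unwind &) {
            // cleanup may happen here, but the exception *must* be
            // rethrown, otherwise the runtime aborts the process
            throw;
        }
    }

And since C++11 destructors are noexcept by default, an unwind that
starts inside a destructor terminates the process even if the
exception is dutifully rethrown. Hence: kill the process, don't
cancel the thread.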
It was hitting
assert(reservation.unique());
Since we do want the machine reservation to be released before calling
wakeDispatcher(), let's use a different object for keeping track of
active steps.
We now kill active build steps when there are no more referring
builds. This is useful e.g. for preventing cancelled multi-hour TPC-H
benchmark runs from hogging build machines.
If two active steps of the same build failed, then the first would be
marked as "failed", but the second would end up as "orphaned", causing
it to be marked as "aborted" later on. Now it's correctly marked as
"failed".
Without this, if (failed or aborted) derivations have been
garbage-collected, there is no way to restart them, which is very
annoying. Now we set a forceEval flag in the jobset to cause it to be
re-evaluated even if none of the inputs have changed.
‘basicDrv.inputSrcs’ also contains the outputs of inputDrvs. These
don't necessarily exist in the local store, so copying them may cause
an exception. We should only copy the real inputSrcs.
Some Hydra API requests were vulnerable to XSRF attacks, e.g. you
could have a form on another website using http://hydra/logout as the
form action. So we now require POST requests to come from the same
origin.
Reported by Hans-Christian Esperer.
This rewrites the top-level loop of hydra-evaluator in C++. The Perl
stuff is moved into hydra-eval-jobset. (Rewriting the entire evaluator
would be nice but is a bit too much work.) The new version has some
advantages:
* It can run multiple jobset evaluations in parallel.
* It uses PostgreSQL notifications so it doesn't have to poll the
database. So if a jobset is triggered via the web interface or from
a GitHub / Bitbucket webhook, evaluation of the jobset will start
almost instantaneously (assuming the evaluator is not at its
concurrency limit); see the sketch after this list.
* It imposes a timeout on evaluations. So if e.g. hydra-eval-jobset
hangs connecting to a Mercurial server, it will eventually be
killed.
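The notification-based wake-up from the second bullet, as a minimal
libpq sketch (the channel name is hypothetical; the evaluator's
actual channel names may differ):

    #include <libpq-fe.h>
    #include <sys/select.h>
    #include <cstdio>

    int main()
    {
        PGconn * conn = PQconnectdb("dbname=hydra");
        if (PQstatus(conn) != CONNECTION_OK) return 1;
        PQclear(PQexec(conn, "LISTEN jobset_changed")); // hypothetical channel

        for (;;) {
            // Sleep on the connection's socket until the server pushes
            // a notification; no polling of the database.
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(PQsocket(conn), &fds);
            if (select(PQsocket(conn) + 1, &fds, nullptr, nullptr, nullptr) < 0)
                break;
            PQconsumeInput(conn);
            while (PGnotify * n = PQnotifies(conn)) {
                printf("wake up: %s\n", n->relname);
                PQfreemem(n);
            }
        }
        PQfinish(conn);
        return 0;
    }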
This prevents the server from gradually filling up due to store paths
fetched by hydra-server that then get turned into a GC root by
hydra-update-gc-roots.
Dashboards can now be marked as publicly visible in the user
preferences. The dashboard URL has changed from /user/<name>/dashboard
to /dashboard/<name> because /user/<name> requires being logged in as
<name> or as an admin.
This allows fully declarative project specifications. This is best
illustrated by example:
* I create a new project, setting the declarative spec file to
"spec.json" and the declarative input to a git repo pointing
at git://github.com/shlevy/declarative-hydra-example.git
* hydra creates a special ".jobsets" jobset alongside the project
* Just before evaluating the ".jobsets" jobset, hydra fetches
declarative-hydra-example.git, reads spec.json as a jobset spec,
and updates the jobset's configuration accordingly:
  {
    "enabled": 1,
    "hidden": false,
    "description": "Jobsets",
    "nixexprinput": "src",
    "nixexprpath": "default.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
      "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
      "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
    }
  }
* When the "jobsets" job of the ".jobsets" jobset completes, hydra
reads its output as a JSON representation of a dictionary of
jobset specs and creates a jobset named "master" configured
accordingly (in this example, the same configuration as .jobsets
itself, except that it uses release.nix instead of default.nix):
  {
    "enabled": 1,
    "hidden": false,
    "description": "js",
    "nixexprinput": "src",
    "nixexprpath": "release.nix",
    "checkinterval": 300,
    "schedulingshares": 100,
    "enableemail": false,
    "emailoverride": "",
    "keepnr": 3,
    "inputs": {
      "src": { "type": "git", "value": "git://github.com/shlevy/declarative-hydra-example.git", "emailresponsible": false },
      "nixpkgs": { "type": "git", "value": "git://github.com/NixOS/nixpkgs.git release-16.03", "emailresponsible": false }
    }
  }
Currently, the hydra.nixos.org queue contains thousands of Darwin
builds that all depend on a stdenv-darwin that previously
failed. Previously, createStep() would first construct a dependency
graph for each build, and only then would getQueuedBuilds() discover
that one of the steps had failed before and discard all those
steps. Since the graph construction involves a lot of uncached calls
to isValidPath(), this took several seconds per build.
Now createStep() detects the previous failure right away and bails
out.
These are build steps that remain "busy" in the database even though
they have finished, because they couldn't be updated (e.g. due to a
PostgreSQL connection problem). To prevent them from showing up as
busy in the "Machine status" page, we now periodically purge them.
Previously, if the queue monitor thread encountered a build that Hydra
had built before, it downloaded the output paths from the binary
cache just to determine the build products and metrics. This is very
inefficient. In particular, when doing something like merging
nixpkgs:staging into nixpkgs:master, the queue monitor thread will be
locked up for a long time fetching files from S3, causing the build
farm to be mostly idle.
Of course this is entirely unnecessary, since the build
products/metrics are already in the Hydra database. So now we just
look up a previous build with the same output path, and copy the
products/metrics.
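Conceptually the lookup is something like this (simplified; the exact
table and column names here are assumptions about Hydra's schema):

    select b.id from Builds b join BuildOutputs o on o.build = b.id
    where o.path = ? and b.finished = 1
    order by b.id desc limit 1;

after which the products/metrics rows of that build are copied,
without touching the binary cache.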
Multiple <githubstatus> sections are possible:
* jobs: regexp for jobs to match
* inputs: the input which corresponds to the github repo/rev whose
status we want to report. Can be repeated
* authorization: Verbatim contents of the Authorization header. See
https://developer.github.com/v3/#authentication.
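For example (all values are placeholders):

    <githubstatus>
      jobs = myProject:.*:release.*
      inputs = src
      authorization = token 0123456789abcdef
    </githubstatus>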
Otherwise, the browser may mix up HTML and JSON responses if it has
requested both. For example, hitting the back button to return to a
job metric page will show a JSON response, because that was the last
thing the browser fetched for that URL.
This requires Catalyst::Action::REST >= 1.20.
The previous query
select count(*) from builds b left join buildsteps s on s.build = b.id where busy = 1 and finished = 0
is suddenly taking several minutes. Probably PostgreSQL decided to use
a suboptimal query plan.
The maximum output size per build step (as the sum of the NARs of each
output) can be set via hydra.conf, e.g.
max-output-size = 1000000000
The default is 2 GiB.
Also refactored the build error / status handling a bit.
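To be explicit about the semantics of the limit above, it applies to
the sum of the output NAR sizes, roughly (illustrative sketch, not the
actual code):

    #include <cstdint>
    #include <vector>

    // The limit is on the sum of the NAR sizes of all outputs of a
    // build step, not on each output individually.
    bool exceedsMaxOutputSize(const std::vector<uint64_t> & outputNarSizes,
                              uint64_t maxOutputSize /* default 2 GiB */)
    {
        uint64_t total = 0;
        for (auto size : outputNarSizes) total += size;
        return total > maxOutputSize;
    }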
When using a binary cache store, the queue runner receives NARs from
the build machines, compresses them, and uploads them to the
cache. However, keeping multiple large NARs in memory can cause the
queue runner to run out of memory. This can happen for instance when
it's processing multiple ISO images concurrently.
The fix is to use a TokenServer to prevent the builder threads from
storing more than a certain total size of NARs concurrently (at the
moment, this is hard-coded at 4 GiB). Builder threads that cause the
limit to be exceeded will block until other threads have finished.
The 4 GiB limit does not include certain other allocations, such as
for xz compression or for FSAccessor::readFile(). But since these are
unlikely to be more than the size of the NARs and hydra.nixos.org has
32 GiB RAM, it should be fine.
The old page didn't scale very well when there were 150K builds in
the queue; in fact it tended to make browsers hang. The new one just
shows, for each jobset, the number of queued builds. The actual builds
can be seen by going to the corresponding jobset page and looking at
the evals.
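The per-jobset counts boil down to a query along these lines (column
names are assumptions, based on the query quoted elsewhere in this
log):

    select project, jobset, count(*) from Builds
    where finished = 0 group by project, jobset;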
Same problem as d744362e4a.
(Truncated GDB backtrace: a crash inside std::sort(), via
bits/predefined_ops.h:166 and bits/stl_algo.h:1827/4717.)
Respects <slack> blocks in the hydra config, with attributes:
* jobs: a regexp matching the job name (in the format project:jobset:job)
* url: The URL to a slack incoming webhook
* force: If true, always send messages. Otherwise, only when the build status changes
Multiple <slack> blocks are allowed
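For example (placeholder values):

    <slack>
      jobs = myProject:myJobset:.*
      url = https://hooks.slack.com/services/XXX/YYY/ZZZZ
      force = 0
    </slack>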
To use the local Nix store (default):
store_mode = direct
To use a local binary cache:
store_mode = local-binary-cache
binary_cache_dir = /var/lib/hydra/binary-cache
To use an S3 bucket:
store_mode = s3-binary-cache
binary_cache_s3_bucket = my-nix-bucket
Also, respect binary_cache_{secret,public}_key_file for signing the
binary cache.
The queue runner no longer uses this field, and it doesn't provide
very interesting historical data (mostly SSH failures), but it takes
up a lot of space. Also, it contained some bad UTF-8 which was
preventing an upgrade to Postgres 9.5, so this was a good occasion to
get rid of it.
The required configuration in hydra.conf:
enable_google_login = 1
google_client_id = 238429sdjkds....apps.googleusercontent.com
and optionally persona_allowed_domains to restrict to one or more
domains.
This is necessary given the current size of the Nixpkgs/NixOS
jobsets. Once we have a Nix store + Postgres on SSD, we can reduce
this again.
Should really make this configurable...
The uid split a while back caused the web interface to create GC roots
in /nix/var/nix/gcroots/per-user/hydra-www, where they wouldn't be
purged by hydra-update-gc-roots. Thus restarted builds would
accumulate forever. The fix is to keep the roots in a shared directory
with gid=hydra.
Regression introduced by 1fdc258de0.
The commit introduced a channel/custom PathPart which uses the new
custom channel expressions, but I forgot to remove CaptureArgs, so the
URL really is channel/latest/ignored-value.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Reported-by: Peter Simons <simons@cryp.to>
This removes the "busy", "locker" and "logfile" columns, which are no
longer used by the queue runner. The "Running builds" page now only
shows builds that have an active build step.
Previously, priority bumps could take a long time to get noticed if
getQueuedBuilds() was busy processing zillions of queue
additions. (This was made worse by the reintroduction of substitute
checking.)
We have this set in upgrade-42.sql, so it's better to stay consistent
with the basic SQL file to avoid problems with new Hydra installations.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Reported-by: Eelco Dolstra <eelco.dolstra@logicblox.com>
There is still a tiny window between the calls to nix-prefetch-* and
addTempRoot. This could be eliminated by adding a "-o" option to
nix-prefetch-*, or by not using those scripts at all (and use
addToStore directly).
This allows Hydra to use binaries from available binary caches. It
makes the queue monitor thread quite a bit slower, so if you don't
want to use binary caches, it's better to add "--option
build-use-substitutes false" to the hydra-queue-runner invocation.
Fixed #243.
The last paragraph talks about package installation for the
"following" jobs, but it only applies to generic channels, so let's
display it only there.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
So this is the final part needed in order to deliver custom channels;
everything else is now just polishing.
We do this by simply redirecting to the build product download URL and
we use binary_cache_url the same way as in NixChannel.pm.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We should now get an overview and help text on how to add a particular
channel and also a bit of information about the builds that are required
for a channel to get upgraded.
Right now we only select the latest successful build in the latest
successful evaluation, so if someone wants to have more information about
which channel has failed, (s)he still has to look at the "Channels" tab
of the jobset.
We can make this more fancy at some later point if this is really
needed, because right now we're only interested in the latest build,
because it's the only thing necessary to deliver the channel contents.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's actually lower-case _despite_ the spelling in the SQL file(s),
because the schema auto-generator from DBIx::Class doesn't take the
case into account: it works on SQLite, which seems to ignore case.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We want to have the contents and details of channel expressions as
well, and we already have that for product.type == file, so why not
reuse the same for the channel expression?
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now have a searchBuildsAndEvalsForJobset, which creates such a
mapping for us, so we don't need to duplicate code in jobs_tab and
channels_tab.
Also, we're going to use this for the overview of a particular channel
as well, so it makes sense to put it in CatalystUtils instead of
directly in Jobset.pm.
Instead of eval->jobs, it's now eval->builds, because it's really an
aggregate over the builds schema, rather than the job schema.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We only allow channel/latest anyway, so it really doesn't make sense
to specify this explicitly in the PathPart; we can provide other
dispatchers once we have more than just "latest". This greatly
simplifies the dispatch tree.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now have a column for that, so there's no need to count rows, which
was a bit inefficient anyway, because we only needed the first row of
the result.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now that we have our dedicated "Channels" tab, there is no need anymore
to show redundant information.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now no longer need that additional join with the build outputs and
can rely solely on the isChannel column of the Builds table to
determine whether it's a channel build.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is to properly separate channels from regular jobs and also make
sure that we can always iterate on them, no matter whether the build has
failed. The reason we were not able to do this until now is that we
were iterating over the build products, and whenever some constituent
of a channel job had failed, we didn't get a build output.
So whenever there is a meta.isHydraChannel, we can now properly
distinguish it from the other jobs.
I still don't have any clue why "make -C src/sql update-dbix" without
*any* modifications tries to create additional schema definitions. But
I've checked the md5sums of the existing schema definitions and they
don't seem to match, so it seems that they have already been tampered
with.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now we can provide different channel expressions for one particular
channel build. Not sure yet how this would be useful, but I found it
more appropriate to use a type instead of a subtype of "file".
This should keep us consistent with the previous commit.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is to get a bit more consistency among channel builds but doesn't
do a radical change on the display. Ideally we may want to have a
channel overview with all the constituents and a small help showing how
the user can add the channel.
Unfortunately, this also introduces an inconsistency: We previously used
the *subtype* "channel", but now we're expecting "channel" as the type
of the product, so we need to change this for the channels overview as
well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's very similar to "jobs" and the code is pretty much the same, except
that we don't do filtering on it. At least it doesn't waste space on a
filter option when there are usually far fewer channel jobs than
ordinary jobs.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Currently I'm using a (rather poorly) downscaled version of the NixOS
logo, so we want to replace it with a proper image ASAP.
Other than that, the idea is to have something like this in
hydra-build-products:
file channel $out/channel.tar.bz2
Right now, of course, it's only displayed on the corresponding builds,
so we might want to have aggregates over all channels for a project, a
jobset, or maybe even single jobs?
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
They will show up in machineTypes as (e.g.) x86_64-linux:local instead
of x86_64-linux. This is to prevent the Hydra provisioner from
creating machines for steps that are supposed to be executed locally.
It's easier for the Hydra provisioner to put host public keys in the
machines file than to separately manage the known_hosts file
(especially when the provisioner runs on a different machine).
This is necessary because the required system type can become
available later (e.g. by being provisioned by the
auto-scaler). However, in the future, we may want to fail steps if
they have been unsupported for more than a certain amount of time.
For example, steps that require the "kvm" feature may require a
different kind of machine to be provisioned. This can also be used to
require performance-sensitive tests to run on a particular kind of
machine, e.g., by setting requiredSystemFeatures to something like
"ec2-i2.8xlarge".
"hydra-queue-runner --status" now prints how many runnable and running
build steps exist for each machine type. This allows additional
machines to be provisioned based on the Hydra load.
If there is no input named 'inputs', hydra-eval-jobs now passes an
argument named 'inputs': a set of lists, where each attribute
corresponds to an input defined in the jobset specification and each
list element is a different alt of that input.
Among other things, this allows for generic hydra expressions to be
shared amongst projects with similar structures but different sets of
specific inputs.
Builds can now emit metrics that Hydra will store in its database and
render as time series via flot charts. Typical applications are to
keep track of performance indicators, coverage percentages, artifact
sizes, and so on.
For example, a coverage build can emit the coverage percentage as
follows:
echo "lineCoverage $pct %" > $out/nix-support/hydra-metrics
Graphs of all metrics for a job can be seen at
http://.../job/<project>/<jobset>/<job>#tabs-charts
Specific metrics are also visible at
http://.../job/<project>/<jobset>/<job>/metric/<metric>
The latter URL also allows getting the data in JSON format (e.g. via
"curl -H 'Accept: application/json'").
If Hydra isn't hosted at https://example.com/ but at something like
https://example.com/hydra/, the URL for /api/scmdiff would have ended
up as /api/scmdiff rather than /hydra/api/scmdiff.
This is because we didn't use the URI resolver from the controller;
we're now using it to build up the whole URL, including the query
string.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Without an index on (machine, stoptime desc), this requires a
sequential scan. And adding a whole index for this seems
overkill. (Possibly the queue runner could maintain this info more
efficiently.)