Having a hundred threads doing I/O at the same time is bad on
magnetic disks because of excessive disk seeks. So allow only 4
threads to copy closures in parallel.
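A minimal sketch of how such a cap can be enforced, assuming a
counting semaphore (C++20); the names are illustrative, not the
actual hydra-queue-runner code:

    #include <semaphore>

    // At most 4 closure copies may be in flight at any time.
    std::counting_semaphore<4> copySlots(4);

    void copyClosureToMachine(/* store paths, target machine, ... */)
    {
        copySlots.acquire();   // blocks while 4 copies are already running
        // ... perform the actual closure copy here ...
        copySlots.release();
    }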
While sorting machines by load, the load of a machine
(machine->currentJobs) can be changed by other threads. If that
happens, the comparator no longer defines a strict weak ordering, in
which case std::sort() can segfault. So we now make a copy of
currentJobs before sorting.
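A sketch of the fix, assuming the load lives in an atomic counter
(types and names illustrative): sorting on a copied key means the
comparator only ever sees immutable values, so it stays a strict weak
ordering no matter what other threads do.

    #include <algorithm>
    #include <atomic>
    #include <memory>
    #include <utility>
    #include <vector>

    struct Machine {
        std::atomic<unsigned int> currentJobs{0};
        // ...
    };

    void sortMachinesByLoad(std::vector<std::shared_ptr<Machine>> & machines)
    {
        // Snapshot each machine's load; concurrent updates to
        // currentJobs can no longer affect the comparator.
        std::vector<std::pair<unsigned int, std::shared_ptr<Machine>>> byLoad;
        for (auto & m : machines)
            byLoad.emplace_back(m->currentJobs.load(), m);
        std::sort(byLoad.begin(), byLoad.end(),
            [](auto & a, auto & b) { return a.first < b.first; });
        for (size_t i = 0; i < machines.size(); ++i)
            machines[i] = byLoad[i].second;
    }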
There is a slight possibility that the queue monitor and a builder
thread simultaneously decide to mark a build as finished. That's fine,
as long as we ensure the DB update is idempotent (as ensured by doing
"update Builds set finished = 1 ... where finished = 0").
If a build A depends on a derivation that is the top-level derivation
of some build B, then we should process B before A (meaning we
shouldn't make the derivation runnable before B has been
added). Otherwise, the derivation will be "accounted" to A rather than
B (so the build step will show up in the wrong build).
Aborted builds are now put back on the runnable queue and retried
after a certain time interval (currently 60 seconds for the first
retry, then tripled on each subsequent retry).
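So the delay before the n-th retry is 60 * 3^(n-1) seconds; as a
sketch:

    // Delay before the n-th retry of an aborted build:
    // 60s for the first retry, tripled on each subsequent one.
    unsigned int retryDelay(unsigned int nrRetries)
    {
        unsigned int delay = 60;
        for (unsigned int i = 1; i < nrRetries; ++i) delay *= 3;
        return delay;  // 60, 180, 540, ...
    }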
hydra-queue-runner no longer polls the queue periodically. Instead,
it sleeps until it receives a notification from PostgreSQL about a
change to the queue (build added, build cancelled or build
restarted).
Also, for the "build added" case, we now only check for builds with an
ID greater than the previous greatest ID. This is much more efficient
if the queue is large.
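A sketch of such a notification loop against the libpq C API; the
channel names and the dispatcher are illustrative, not necessarily
what hydra-queue-runner actually uses:

    #include <libpq-fe.h>
    #include <sys/select.h>

    void handleQueueEvent(const char * channel);  // hypothetical dispatcher

    void waitForQueueEvents(PGconn * conn)
    {
        PQclear(PQexec(conn, "listen builds_added"));
        PQclear(PQexec(conn, "listen builds_cancelled"));
        PQclear(PQexec(conn, "listen builds_restarted"));

        while (true) {
            // Sleep until PostgreSQL writes to our socket.
            int sock = PQsocket(conn);
            fd_set fds;
            FD_ZERO(&fds);
            FD_SET(sock, &fds);
            select(sock + 1, &fds, nullptr, nullptr, nullptr);

            PQconsumeInput(conn);
            while (PGnotify * n = PQnotifies(conn)) {
                // For "builds_added", fetch only the new rows, e.g.
                //   select id from Builds where finished = 0 and id > $lastId
                handleQueueEvent(n->relname);
                PQfreemem(n);
            }
        }
    }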
Cleaning up on exit just makes things unnecessarily complicated. We
can just exit without cleaning anything up, since the only thing to do
is unmark builds and build steps as busy. And we can do that by having
systemd call "hydra-queue-runner --unlock" from ExecStopPost.
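For example, in the unit file:

    [Service]
    ExecStopPost = hydra-queue-runner --unlock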
If multiple threads create a step for the same build, they could get
the same "max(stepnr)" and allocate conflicting new step numbers. So
lock the BuildSteps table while doing this. We could use a different
isolation level, but this is easier.
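A sketch of that allocation pattern against the libpq C API (the
build ID is illustrative):

    #include <libpq-fe.h>
    #include <cstdlib>

    int allocateStepNr(PGconn * conn)
    {
        PQclear(PQexec(conn, "begin"));
        // Serialise step-number allocation across all threads.
        PQclear(PQexec(conn, "lock table BuildSteps in exclusive mode"));
        PGresult * r = PQexec(conn,
            "select coalesce(max(stepnr), 0) + 1 from BuildSteps where build = 1234");
        int stepNr = std::atoi(PQgetvalue(r, 0, 0));
        PQclear(r);
        // ... insert the new build step with stepNr ...
        PQclear(PQexec(conn, "commit"));
        return stepNr;
    }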
This removes the need for Nix's build-remote.pl.
Build logs are now written to $HYDRA_DATA/build-logs because
hydra-queue-runner doesn't have write permission to /nix/var/log.
When visiting the tail-reload page, the "unscrolled" version is shown
for a short time. To circumvent that, let's scroll down immediately at
the first opportunity, to bridge the gap between the loading of the
document and the first AJAX request coming in.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Quite a lot of build outputs have lines exceeding the width of the
taillog <pre/>, which thus visually produce more than 50 lines. This
causes the tail "box" to change height frequently, and you have to
scroll down to get to the bottom.
We now set a fixed line-height of 120% of the font size and cap the
maximum height based on that value (50 * 1.2 = 60). It's probably not
nice to override the line-height, but max-lines is currently only
available through browser-specific property names. After all, it's
just for the tail output; if people complain about the line-height,
we can still change it :-)
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now escape the tail content implicitly: instead of using .load(),
we set the text content explicitly using .text(), so no manual
escaping is needed on our side.
This should get rid of a few formatting errors and possibly XSS, if
someone manages to place JS code in the tail of a build and to lure a
user to that tail output.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Like eval IDs, build IDs don't convey useful information.
Also, make the job name link to the build rather than the job. When
people click on a build, they expect to go to the build page, not the
job page.
Scheduling is mostly based on jobset shares these days. So showing and
sorting by priority just wastes space and gives the incorrect
impression that Hydra executes builds in the order shown on the queue
page.
These give warnings in Perl >= 5.18:
given is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 241.
when is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 242.
...
There is no point in indexing rows with common column values like
"finished = 1", since those are the majority of the table. Only the
exceptions ("finished = 0") are interesting, so we index just those
(i.e. partial indexes with a "where finished = 0" clause). Having
smaller indexes should make updates/insertions faster.
This incorporates the following two commits from <nixpkgs>:
NixOS/nixpkgs@f83af95f8a
NixOS/nixpkgs@5e7a1cf955
Hydra was the original reason why I was fixing tempdir creation in the
first place. Seeing that Hydra ships its own versions of these scripts,
we need to patch them here as well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
If hydra-eval-jobs creates a new root, and hydra-update-gc-roots runs
before hydra-evaluator has had a chance to add the corresponding build
to the database, then hydra-update-gc-roots will remove the root. If
subsequently the Nix garbage collector kicks in, it may remove the
build's .drv file before the build is performed. Since evaluation of
the Nixpkgs and NixOS jobsets nowadays takes a lot of time (e.g. an
hour), the probability of this happening is fairly high.
The quick fix is not to delete roots that are less than a day old. So
long as evaluation doesn't take longer than a day, this should be fine
;-)
Fixes #166.
This adds a Hydra plugin for users to submit their open source projects
to the Coverity Scan system for analysis.
First, add a <coverityscan> section to your Hydra config, including the
access token, project name, and email, and a regex specifying jobs to
upload:
<coverityscan>
project = testrix
jobs = foobar:.*:coverity.*
email = aseipp@pobox.com
token = ${builtins.readFile ./coverity-token}
</coverityscan>
This will upload the scan results for any job whose name matches
'coverity.*' in any jobset in the Hydra 'foobar' project, for the
Coverity Scan project named 'testrix'.
Note that one upload will occur per job matched by the regular
expression - so be careful with how many builds you upload.
The jobs which are matched by the jobs specification must have a file in
their output path of the form:
$out/tarballs/...-cov-int.(xz|lzma|zip|bz2|tgz)
The file must have the 'cov-int' directory produced by `cov-build` in
the root.
(You can also output something into
$out/nix-support/hydra-build-products for the Hydra UI.)
This file will be found in the store, and uploaded to the service
directly using your access credentials. Note the exact extension: don't
use .tar.xz, only use .xz specifically.
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Fixes errors like:
Caught exception in engine "Wide character in syswrite at /nix/store/498lwsrn5kkdh1q8kn3vcpd3457w6m7a-hydra-perl-deps/lib/perl5/site_perl/5.16.3/Starman/Server.pm line 547."
Note that these errors didn't happen if the database encoding was set
to SQL_ASCII (which was the case for hydra.nixos.org, explaining why
it didn't get these errors). However, now the encoding must be
UTF8. To change it, do:
update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'hydra';
This gets rid of the warning:
DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /home/eelco/Dev/hydra/src/script/../lib/Hydra/Controller/Project.pm line 15
In the dashboard and on the job page, indicate whether the job appears
in the latest jobset eval. That way, the user gets some indication if
a job has accidentally disappeared (e.g. due to an evaluation error).
Use the following in your hydra.conf to make your instance a
private Hydra instance (public is the default):
private 1
Currently, this will not allow you to use the API, channels or the
binary cache when running in private mode. We will add solutions for
these features later.
This requires adding the following to hydra.conf:
binary_cache_key_name = <key-name>
binary_cache_private_key_file = <path-to-private-key>
e.g.
binary_cache_key_name = hydra.nixos.org-1
binary_cache_private_key_file = /home/hydra/cache-key.sec
All successful, non-garbage-collected builds in the evaluation are
passed in an attribute set. So if you declare a Hydra input named
‘foo’ of type ‘eval’, you get a set with members ‘foo.<jobname>’. For
instance, if you passed a Nixpkgs eval as an input named ‘nixpkgs’,
then you could get the Firefox build for x86_64-linux as
‘nixpkgs.firefox.x86_64-linux’.
Inputs of type ‘eval’ can be specified in three ways:
* As the number of the evaluation.
* As a jobset identifier (‘<project>:<jobset>’), which will yield the
latest finished evaluation of that jobset. Note that there is no
guarantee that any job in that evaluation has succeeded, so it might
not be very useful.
* As a job identifier (‘<project>:<jobset>:<job>’), which will yield
the latest finished evaluation of that jobset in which <job>
succeeded. In conjunction with aggregate jobs, this allows you to
make sure that the evaluation contains the desired builds.