The required configuration in hydra.conf:
enable_google_login = 1
google_client_id = 238429sdjkds....apps.googleusercontent.com
and optionally persona_allowed_domains to restrict to one or more
domains.
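For example, to restrict logins to a single domain (the domain name is
purely illustrative):
persona_allowed_domains = example.org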
This is necessary given the current size of the Nixpkgs/NixOS
jobsets. Once we have a Nix store + Postgres on SSD, we can reduce
this again.
Should really make this configurable...
The uid split a while back caused the web interface to create GC roots
in /nix/var/nix/gcroots/per-user/hydra-www, where they wouldn't be
purged by hydra-update-gc-roots. Thus the roots of restarted builds
would accumulate forever. The fix is to keep the roots in a shared
directory with gid=hydra.
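A minimal sketch of creating such a directory (the path, mode and exact
command are illustrative, not necessarily what the fix uses):
install -d -m 2775 -g hydra /nix/var/nix/gcroots/hydra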
Regression introduced by 1fdc258de0.
The commit introduced a channel/custom PathPart which uses the new
custom channel expressions, but I forgot to remove CaptureArgs, so the
URL really is channel/latest/ignored-value.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Reported-by: Peter Simons <simons@cryp.to>
This removes the "busy", "locker" and "logfile" columns, which are no
longer used by the queue runner. The "Running builds" page now only
shows builds that have an active build step.
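A sketch of the corresponding schema change, assuming these columns
live on the Builds table:
alter table Builds drop column busy, drop column locker, drop column logfile;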
There is still a tiny window between the calls to nix-prefetch-* and
addTempRoot. This could be eliminated by adding a "-o" option to
nix-prefetch-*, or by not using those scripts at all (and using
addToStore directly).
So this is the final part needed to be able to deliver custom channels;
everything else is now just polishing.
We do this by simply redirecting to the build product download URL,
using binary_cache_url the same way as in NixChannel.pm.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We should now get an overview and help text on how to add a particular
channel and also a bit of information about the builds that are required
for a channel to get upgraded.
Right now we only select the latest successful build in the latest
successful evaluation, so if someone wants to have more information about
which channel has failed, (s)he still has to look at the "Channels" tab
of the jobset.
We can make this fancier at some later point if it's really needed;
right now we're only interested in the latest build, because that's all
we need in order to deliver the channel contents.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now have a searchBuildsAndEvalsForJobset, which creates such a
mapping for us, so we don't need to duplicate code in jobs_tab and
channels_tab.
We're also going to use this for the overview of a particular channel,
so it makes sense to put it in CatalystUtils instead of directly in
Jobset.pm.
Instead of eval->jobs, it's now eval->builds, because it's really an
aggregate over the builds schema, rather than the job schema.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We only allow channel/latest anyway, so it really doesn't make sense to
specify this explicitly in the PathPart; we can provide other
dispatchers once we have more than just "latest". This also greatly
simplifies the dispatch tree.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We now have a column for that, so there is no need to count rows, which
was a bit inefficient anyway, because we would only have needed the
first row of the result.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now that we have our dedicated "Channels" tab, there is no longer any
need to show redundant information.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We no longer need that additional join with the build outputs and can
use the isChannel column of the Builds table alone to determine whether
it's a channel build.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is to properly separate channels from regular jobs and also make
sure that we can always iterate over them, regardless of whether the
build has failed. The reason we were not able to do this until now is
that we were iterating over the build products, and whenever some
constituent of a channel job failed, we didn't get a build output.
So whenever a job has meta.isHydraChannel, we can now properly
distinguish it from the other jobs.
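For illustration, such a job sets the attribute in its meta set in the
Nix expression (attribute name taken from above):
meta.isHydraChannel = true;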
I still don't have any clue why "make -C src/sql update-dbix" without
*any* modifications tries to create additional schema definitions. But
I've checked the md5sums of the existing schema definitions and they
don't seem to match, so it seems that they have already been tampered
with.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Now we can provide different channel expressions for one particular
channel build. Not sure yet how this would be useful, but I found it
more appropriate to use a type instead of a subtype of "file".
This should make us consistent with the previous commit.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This is to get a bit more consistency among channel builds without
radically changing the display. Ideally we may want to have a
channel overview with all the constituents and a small help showing how
the user can add the channel.
Unfortunately, this also introduces an inconsistency: We previously used
the *subtype* "channel", but now we're expecting "channel" as the type
of the product, so we need to change this for the channels overview as
well.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
It's very similar to "jobs" and the code is pretty much the same, except
that we don't do filtering on it. If nothing else, this avoids wasting
space on a filter option when there are usually far fewer channel jobs
than ordinary jobs.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Builds can now emit metrics that Hydra will store in its database and
render as time series via flot charts. Typical applications are to
keep track of performance indicators, coverage percentages, artifact
sizes, and so on.
For example, a coverage build can emit the coverage percentage as
follows:
echo "lineCoverage $pct %" > $out/nix-support/hydra-metrics
Graphs of all metrics for a job can be seen at
http://.../job/<project>/<jobset>/<job>#tabs-charts
Specific metrics are also visible at
http://.../job/<project>/<jobset>/<job>/metric/<metric>
The latter URL also allows getting the data in JSON format (e.g. via
"curl -H 'Accept: application/json'").
Without an index on (machine, stoptime desc), this requires a
sequential scan. And adding a whole index for this seems
overkill. (Possibly the queue runner could maintain this info more
efficiently.)
Hydra-queue-runner now no longer polls the queue periodically, but
instead sleeps until it receives a notification from PostgreSQL about
a change to the queue (build added, build cancelled or build
restarted).
Also, for the "build added" case, we now only check for builds with an
ID greater than the previous greatest ID. This is much more efficient
if the queue is large.
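The underlying mechanism is PostgreSQL's LISTEN/NOTIFY, roughly (the
channel name is illustrative):
-- in hydra-queue-runner:
LISTEN builds_added;
-- in whatever adds a build to the queue:
NOTIFY builds_added;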
This removes the need for Nix's build-remote.pl.
Build logs are now written to $HYDRA_DATA/build-logs because
hydra-queue-runner doesn't have write permission to /nix/var/log.
Scheduling is mostly based on jobset shares these days. So showing and
sorting by priority just wastes space and gives the incorrect
impression that Hydra executes builds in the order shown on the queue
page.
These give warnings in Perl >= 5.18:
given is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 241.
when is experimental at /home/hydra/src/hydra/src/lib/Hydra/Helper/CatalystUtils.pm line 242.
...
This adds a Hydra plugin for users to submit their open source projects
to the Coverity Scan system for analysis.
First, add a <coverityscan> section to your Hydra config, including the
access token, project name, and email, and a regex specifying jobs to
upload:
<coverityscan>
project = testrix
jobs = foobar:.*:coverity.*
email = aseipp@pobox.com
token = ${builtins.readFile ./coverity-token}
</coverityscan>
This will upload the scan results for any job whose name matches
'coverity.*' in any jobset in the Hydra 'foobar' project, for the
Coverity Scan project named 'testrix'.
Note that one upload will occur per job matched by the regular
expression - so be careful with how many builds you upload.
The jobs which are matched by the jobs specification must have a file in
their output path of the form:
$out/tarballs/...-cov-int.(xz|lzma|zip|bz2|tgz)
The file must have the 'cov-int' directory produced by `cov-build` in
the root.
(You can also output something into
$out/nix-support/hydra-build-products for the Hydra UI.)
This file will be found in the store, and uploaded to the service
directly using your access credentials. Note the exact extension: don't
use .tar.xz, only use .xz specifically.
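A hedged sketch of producing such a file in a build script (tool
invocation and file names illustrative):
cov-build --dir cov-int make
mkdir -p $out/tarballs
tar -cJf $out/tarballs/$name-cov-int.xz cov-int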
Signed-off-by: Austin Seipp <aseipp@pobox.com>
Fixes errors like:
Caught exception in engine "Wide character in syswrite at /nix/store/498lwsrn5kkdh1q8kn3vcpd3457w6m7a-hydra-perl-deps/lib/perl5/site_perl/5.16.3/Starman/Server.pm line 547."
Note that these errors didn't happen if the database encoding was set
to SQL_ASCII (which was the case for hydra.nixos.org, explaining why
it didn't get these errors). However, now the encoding must be
UTF8. To change it, do:
update pg_database set encoding = pg_char_to_encoding('UTF8') where datname = 'hydra';
This gets rid of the warning:
DBIx::Class::Storage::DBI::select_single(): Query returned more than one row. SQL that returns multiple rows is DEPRECATED for ->find and ->single at /home/eelco/Dev/hydra/src/script/../lib/Hydra/Controller/Project.pm line 15
In the dashboard and on the job page, indicate whether the job appears
in the latest jobset eval. That way, the user gets some indication if
a job has accidentally disappeared (e.g. due to an evaluation error).
Use the following in your hydra.conf to make your instance a
private Hydra instance (public is the default):
private 1
Currently, this will not allow you to use the API, channels
or the binary cache when running in private mode. We will add
support for these features later.
This requires adding the following to hydra.conf:
binary_cache_key_name = <key-name>
binary_cache_private_key_file = <path-to-private-key>
e.g.
binary_cache_key_name = hydra.nixos.org-1
binary_cache_private_key_file = /home/hydra/cache-key.sec
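If you don't have a signing key yet, recent versions of Nix can
generate one (the public key path is illustrative):
nix-store --generate-binary-cache-key hydra.nixos.org-1 /home/hydra/cache-key.sec /home/hydra/cache-key.pub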
All successful, non-garbage-collected builds in the evaluation are
passed in an attribute set. So if you declare a Hydra input named
‘foo’ of type ‘eval’, you get a set with members ‘foo.<jobname>’. For
instance, if you passed a Nixpkgs eval as an input named ‘nixpkgs’,
then you could get the Firefox build for x86_64-linux as
‘nixpkgs.firefox.x86_64-linux’.
Inputs of type ‘eval’ can be specified in three ways:
* As the number of the evaluation.
* As a jobset identifier (‘<project>:<jobset>’), which will yield the
latest finished evaluation of that jobset. Note that there is no
guarantee that any job in that evaluation has succeeded, so it might
not be very useful.
* As a job identifier (‘<project>:<jobset>:<job>’), which will yield
the latest finished evaluation of that jobset in which <job>
succeeded. In conjunction with aggregate jobs, this allows you to
make sure that the evaluation contains the desired builds.
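For instance (evaluation number and project/jobset/job names
illustrative), the input value could be any of:
123
nixpkgs:trunk
nixos:trunk:tested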
This reverts commit 2d7e106d29.
Unfortunately some jobsets still depend on this behaviour. They could
probably do something like "assert system == input.system; ..." but
changing them all is undesirable.
Include information about who changed the build status in notification
emails, and enable optional per-input notification of said committers.
Conflicts due to two branches modifying the database schema.
Signed-off-by: Shea Levy <shea@shealevy.com>
Conflicts:
src/lib/Hydra/Schema/Jobsets.pm
src/sql/upgrade-23.sql
Currently the dashboard allows users to get a quick overview of the
status of jobs they're interested in, but more will be added,
e.g. viewing all your jobsets or all jobs of which you're a
maintainer.
There are jobsets that are evaluated only once, that is, after they've
been evaluated, they're disabled automatically. This is primarily
useful for doing releases: for instance, doing an evaluation with
"officialRelease" set to "true" should be done only once.
We can just show the normal "edit jobset" page for the original jobset
and then do a PUT request to create a new jobset.
Also simplified updating the jobset inputs. We can just delete all of
them and recreate them from the user parameters. That's safe because
it's done in a transaction.
It's now a dropdown menu in the tabs thingy, which subsumes the
"Reproduce locally" button. This makes the actions in the menu a bit
more visible, IMHO.
Each jobset now has a "scheduling share" that determines how much of
the build farm's time it is entitled to. For instance, if a jobset
has 100 shares and the total number of shares of all jobsets is 1000,
it's entitled to 10% of the build farm's time. When there is a free
build slot for a given system type, the queue runner will select the
jobset that is furthest below its scheduling share over a certain time
window (currently, the last day). Within that jobset, it will pick
the build with the highest priority.
So meta.schedulingPriority now only determines the order of builds
within a jobset, not between jobsets. This makes it much easier to
prioritise one jobset over another (e.g. nixpkgs:trunk over
nixpkgs:stdenv).
In your hydra config, you can add an arbitrary number of <s3config>
sections, with the following options:
* name (required): Bucket name
* jobs (required): A regex to match job names (in project:jobset:job
format) that should be backed up to this bucket
* compression_type: bzip2 (default), xz, or none
* prefix: String to prepend to all hydra-created s3 keys (if this is
meant to represent a directory, you should include the trailing slash,
e.g. "cache/"). Default "".
After each build with an output (i.e. successful or failed-with-output
builds), the output path and its closure are uploaded to the bucket as
.nar files, with corresponding .narinfos to enable use as a binary
cache.
This plugin requires that s3 credentials be available. It uses
Net::Amazon::S3, which (as of the version in nixpkgs at the time of
this commit) can retrieve s3 credentials from the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables, or from EC2 instance
metadata when using an IAM role.
This commit also adds a hydra-s3-backup-collect-garbage program, which
uses hydra's gc roots directory to determine which paths are live, and
then deletes all files except nix-cache-info and any .nar or .narinfo
files corresponding to live paths. hydra-s3-backup-collect-garbage
respects the prefix configuration option, so it won't delete anything
outside of the hierarchy you give it, and it has the same credential
requirements as the plugin. A timer unit running the garbage collection
periodically should probably be added to hydra-module.nix.
Note that two of the added tests fail, due to a bug in the interaction
between Net::Amazon::S3 and fake-s3. Those behaviors work against real
s3 though, so I'm committing this even with the broken tests.
Signed-off-by: Shea Levy <shea@shealevy.com>
Due to the fixed-output derivation hashing scheme, there can be
multiple derivations of the same output path. But build logs are
indexed by derivation path. Thus, we may not be able to find the
log of a build or build step using its derivation. So as a fallback,
Hydra now looks for other derivations with the same output paths.
They're mostly redundant since there is a faster "jobs" tab on
the jobset pages now. The only thing the latter lacks is the
ability to see status change times, but those are quite expensive
to compute, and are visible on build pages if you really need them.
PostgreSQL and Perl have different sort orders, in particular when
comparing job names such as "aspell.x86_64-linux" and
"aspellDicts.cs.i686-freebsd". This confused the evaluation
comparison code, causing some jobs to appear as "removed".
So now we do all the sorting in Perl.
Fixes #105.
Aggregate constituents are derivations. However there can be multiple
builds in an evaluation that have the same derivation, i.e. they can
alias each other (e.g. "emacs", "emacs24" and "emacs24Packages.emacs"
in Nixpkgs). Previously we picked a build arbitrarily for the
AggregateConstituents table. Now we pick the one with the shortest
name (e.g. "emacs").
For presentation purposes, we need to know what builds are part of an
aggregate build. So at evaluation time, look at the "members"
attribute, find the corresponding builds in the eval, and create a
mapping in the AggregateMembers table.
It redirects to the latest successful build from a finished
evaluation. This is mostly useful for the Nixpkgs/NixOS mirroring
script, which needs the latest finished evaluation in which some
aggregate job (such as ‘tested’ in NixOS) succeeded.
The NrBuilds table tracks the value of ‘select count(*) from Builds
where finished = 0’, keeping it up to date via a trigger. This is
necessary to make the /all page fast, since otherwise it needs to do a
sequential scan on the Builds table.