Previously, for scheduled builds, "timestamp" contained the time the
build was added to the queue, while for finished builds, it was the
time the build finished. Now it's always the former.
This is mostly so we don't have to pass around common parameters like
"db" and "config", and we don't have to check for the existence of
methods.
A plugin now looks like this:
package Hydra::Plugin::TwitterNotification;

use parent 'Hydra::Plugin';

sub buildFinished {
    my ($self, $build, $dependents) = @_;
    print STDERR "tweeting about build ", $build->id, "\n";
    # Send tweet...
    # The Hydra database handle is available as $self->{db}.
}

1;
You can now add plugins to Hydra by writing a module called
Hydra::Plugin::<whatever> and putting it in Perl's search path. The
only plugin operation currently supported is buildFinished, called
when hydra-build has finished doing a build.
For instance, a Twitter notification plugin would look like this:
package Hydra::Plugin::TwitterNotification;

sub buildFinished {
    my ($self, $db, $config, $build, $dependents) = @_;
    print STDERR "tweeting about build ", $build->id, "\n";
    # send tweet...
}

1;
This allows checking a jobset (say) at most once a day. It's also
possible to disable polling by setting the interval to 0. This is
useful for jobsets that use push notification or are manually
evaluated.
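A minimal sketch of the corresponding guard in the evaluator, assuming ‘checkinterval’ and ‘lastcheckedtime’ fields on the jobset (illustrative, not the exact code):

foreach my $jobset (@jobsets) {
    my $interval = $jobset->checkinterval // 0;

    # An interval of 0 disables polling (push or manual evaluation only).
    next if $interval == 0;

    # Skip the jobset if it was checked less than $interval seconds ago.
    next if defined $jobset->lastcheckedtime
        && time() - $jobset->lastcheckedtime < $interval;

    # ... evaluate $jobset ...
}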
This caused exceptions like:
Caught exception in Hydra::Controller::Build->view_build "writing to file: Broken pipe at /nix/store/ihdb3widsq1dk7sbl5vqjxfcxb5ypad4-hydra-0.1pre1297-8158093/libexec/hydra/lib/Hydra/Controller/Build.pm line 59."
because the connection to the Nix daemon would be terminated due to a
protocol violation (calling queryPathInfo with an empty string).
Returning only the first 20 results can cause NixOS/Nixpkgs channel
generation to fail, if the first 20 view results correspond to
evaluations that haven't finished yet. Then URLs like
/view/nixos/tested/latest-finished will return 500 rather than the
latest finished view.
You can now do:
bash <(curl http://hydra-server/build/1238757/reproduce)
to download and execute a script that reproduces a Hydra build
locally. This script fetches all inputs (e.g. Git repositories) and
then invokes nix-build.
The downloaded sources are stored in /tmp/build-<buildid> and reused
between invocations of the script.
Any additional command line options are passed to nix-build. So
bash <(curl http://hydra-server/build/1238757/reproduce) --run-env
will drop you in a shell where you can interactively hack on the
build, e.g.
$ source $stdenv/setup
$ set +e
$ unpackPhase
$ cd $sourceRoot
$ configurePhase
$ emacs foo.c &
$ make
and so on.
Build product paths cannot reference locations outside of the Nix
store. We previously disallowed paths from being symlinks, but this
didn't take into account that parent path elements can be symlinks as
well. So a build product /nix/store/bla.../foo/passwd, with
/nix/store/bla.../foo being a symlink to /etc, would still work.
So now we check all paths encountered during path resolution.
Symlinks are allowed again so long as they point to the Nix store.
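A minimal sketch of such a check, resolving every prefix of the product path and requiring it to stay inside the store (the helper name and details are illustrative):

use Cwd 'realpath';
use File::Spec;

# Accept $path only if every prefix, after symlink resolution, still
# lives inside $prefix (e.g. "/nix/store").
sub pathIsInsidePrefix {
    my ($path, $prefix) = @_;
    my $cur = '';
    foreach my $comp (grep { $_ ne '' } File::Spec->splitdir($path)) {
        $cur .= "/$comp";
        my $resolved = realpath($cur);
        return 0 unless defined $resolved;  # dangling symlink
        return 0 unless $resolved eq $prefix
            || rindex($resolved, "$prefix/", 0) == 0;
    }
    return 1;
}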
Chaining paths only works properly when PathPart is used. Before this
fix, the affected URIs bypassed the top-level 'admin' sub.
This reverts commit 71d020735b.
Unfortunately there are still some cases where we need to set Hydra's
concurrency separately. (Ideally, Hydra would start *all* queued
builds in parallel and let Nix take care of everything...)
You can use the URL
http://<hydra-server>/api/push-github
as GitHub's WebHook URL. Hydra will automatically trigger an
evaluation of all affected jobsets.
External machines can now notify Hydra that it should check a
repository by sending a GET or POST request to /api/push, providing a
list of jobsets to be checked and/or a list of repository URLs. In
the latter case, all jobsets that have any of the specified
repositories as an input will be checked.
For instance, you can configure GitHub or BitBucket to send a request
to the URL
http://hydra.example.org/api/push?repos=git://github.com/NixOS/nixpkgs.git
to trigger evaluation of all jobsets that have
git://github.com/NixOS/nixpkgs.git as an input, or to the URL
http://hydra.example.org/api/push?jobsets=patchelf:trunk,nixpkgs:trunk
to trigger evaluation of just the specified jobsets.
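For example, to trigger a check by hand (illustrative, using LWP):

use LWP::UserAgent;

# Ask Hydra to check the specified jobsets right away.
my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://hydra.example.org/api/push?jobsets=patchelf:trunk');
print $res->status_line, "\n";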
Otherwise you can do
ln -s /etc/passwd $out/foo
echo "file misc $out/foo" >> $out/nix-support/hydra-build-products
and get Hydra to serve its /etc/passwd file.
You can now just click on the evaluation link on the first tab to see
all builds in the same jobset. This also makes rendering build pages
quite a bit faster for jobsets like Nixpkgs.
It's pointless to store these, since Nix knows where the logs are.
Also handle (in fact require) Nix's new log storage scheme. Also some
cleanups in the build page.
The check to see whether a build had been scheduled in a previous
evaluation took about 200 ms for the nixpkgs:trunk jobset. Given
that it has more than 15000 builds, this added up to a lot. Now
it takes 0.2 ms per build.
The action .../jobset/<project>/<jobset>/latest-eval redirects to the
latest evaluation of the jobset that has no unfinished builds. Thus,
for instance,
http://hydra.nixos.org/jobset/nixpkgs/trunk/latest-eval/channel
is the channel containing the latest consistent set of Nixpkgs builds.
The URI parameter "compare=..." can denote either an arbitrary
evaluation ID, or the name of a jobset in the same project. In the
latter case, the comparison is made against the latest completed
evaluation of the specified jobset.
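For example, a URL along these lines (illustrative evaluation ID and jobset name):

http://hydra.example.org/eval/1234?compare=trunk

would compare evaluation 1234 against the latest completed evaluation of the ‘trunk’ jobset in the same project.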
This happened in a pathological case in Nixpkgs: the "grub" job is
evaluated for i686-linux and x86_64-linux, but in the latter case it
returns the same derivation as in the former case. So only one build
should be added.
This gets rid of the openHydraDB function and ensures that we
open the database in a consistent way.
Also drop the PostgreSQL sequence hacks. They don't seem to be
necessary anymore.
When checking whether the jobset is unchanged, we need to compare with
the previous JobsetEval regardless of whether it had new builds.
Otherwise we'll keep adding new JobsetEval rows.
Because of the way DBIx::Class does prepared statements, even
innocuous queries such as
$c->model('DB::Builds')->search({finished => 0})
can be extremely slow. This is because DBIx::Class prepares a
PostgreSQL statement
select ... from Builds where finished = ?
and since Builds is very large and there is a large fraction of rows
with "finished = 1", the PostgreSQL query planner decides to implement
this query with a sequential scan of the Builds table (despite the
existence of an index on "finished"), which is extremely slow. It
would be nice if we could tell DBIx::Class that constants should be
part of the prepared statement, i.e.
select ... from Builds where finished = 0
but AFAIK we can't.
In particular the /pkg action is now O(lg n) instead of O(n) in the
number of packages in the channel, and listing the channel contents
no longer requires calling isValidPath() on all packages.
Derivations (and thus build time dependencies) are no longer included
in the channel, because they're not GC roots. Thus they could
disappear unexpectedly.
* Don't use isCurrent anymore; instead look up builds in the previous
jobset evaluation. (The isCurrent field is still maintained because
it's still used in some other places.)
* To determine whether to perform an evaluation, compare the hash of
the current inputs with the inputs of the previous jobset
evaluation, rather than checking if there was ever an evaluation
with those inputs. This way, if the inputs of an evaluation change
back to a previous state, we get a new jobset evaluation in the
database (and thus the latest jobset evaluation correctly represents
the latest state of the jobset); see the sketch after this list.
* Improve performance by removing some unnecessary operations and
adding an index.
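A minimal sketch of that hash comparison (relation and column names are illustrative):

use Digest::SHA qw(sha256_hex);

# Fingerprint the current inputs.
my $hash = sha256_hex(join ';', map { "$_=$inputs{$_}" } sort keys %inputs);

# Look up the most recent evaluation of this jobset.
my $prevEval = $jobset->jobsetevals->search(
    {}, { order_by => { -desc => 'id' }, rows => 1 })->single;

# Create a new JobsetEval only if the inputs actually changed.
if (!defined $prevEval || $prevEval->hash ne $hash) {
    # ... evaluate and record a new JobsetEval row ...
}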
Prepared statements are sometimes much slower than unprepared
statements, because the planner doesn't have access to the query
parameters. This is the case for the active build steps query (in
/status), where a prepared statement is three orders of magnitude
slower. So disable the use of prepared statements in this case.
(Since the query parameters are constant here, it would be nicer if we
could tell DBIx::Class to prepare a statement with those parameters
fixed. But I don't know an easy way to do so.)
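One way to run a single query without a server-side plan, assuming PostgreSQL via DBD::Pg (a sketch; the query and handle names are assumptions, not Hydra's exact code):

my $steps = $db->storage->dbh_do(sub {
    my ($storage, $dbh) = @_;
    # DBD::Pg: don't use a server-side prepared statement for this
    # query, so the planner sees the constant "busy = 1".
    local $dbh->{pg_server_prepare} = 0;
    $dbh->selectall_arrayref(
        'select * from BuildSteps where busy = 1', { Slice => {} });
});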
For schema upgrades, hydra-init executes the files
src/sql/upgrade-<N>.sql, each of which upgrades the schema from
version N-1 to N. The upgrades are wrapped in a transaction.
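Schematically, the upgrade loop looks something like this (illustrative Perl, not hydra-init verbatim):

use File::Slurp qw(read_file);

my $v = $db->resultset('SchemaVersion')->single->version;
while (-e (my $file = "src/sql/upgrade-" . ($v + 1) . ".sql")) {
    # Apply upgrade N and bump the recorded version in one transaction.
    $db->txn_do(sub {
        $db->storage->dbh->do(read_file($file));
        $db->resultset('SchemaVersion')->update({ version => ++$v });
    });
}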
Since a build may be a member of multiple jobset evaluations, we need
to do a "select distinct" here. But maybe we should only show builds
from a single evaluation (e.g. the most recent), since showing builds
from several may be confusing.
The abbreviated Git revision hash (e.g. "267480b") is typically
contained in ‘gitTag’ as well, but the latter can also contain other
elements, e.g. the delta to the closest tag. That may
be undesirable in version strings, so this is an alternative.
The ‘revCount’ attribute is the number of commits in the history
of the revision. This is useful if you need a monotonically
increasing version number.
The ‘gitTag’ attribute is the output of ‘git describe’, e.g.
‘v1.0.4-14-g2414721’ to indicate that the current revision is 14
commits after the tag ‘v1.0.4’.
The singleton table SchemaVersion contains the current version
of the Hydra database schema. This can be used to upgrade the
schema on the fly.
Also reran the DBIx::Class schema loader.
In hydra-evaluator, reuse an SVN working copy between runs (similar to
what we do with Git and other input types). This reduces network
traffic in the common case.
Also, don't use nix-prefetch-svn. It doesn't do anything useful.
The "all" channel fundamentally doesn't scale, because it needs
to fetch N builds from the database (where N is potentially a very
large number), then check whether they are still valid. And it's
not very useful anyway.