This happened in a pathological case in Nixpkgs: the "grub" job is
evaluated for i686-linux and x86_64-linux, but in the latter case it
returns the same derivation as in the former case. So only one build
should be added.
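
A minimal sketch of the deduplication, assuming each job is a hash with a
drvPath field (all names here are illustrative, not Hydra's actual code):

    use strict;
    use warnings;

    my @jobs = (
        { name => "grub", system => "i686-linux",   drvPath => "/nix/store/aaa-grub.drv" },
        { name => "grub", system => "x86_64-linux", drvPath => "/nix/store/aaa-grub.drv" },
    );

    my (%seen, @buildsToAdd);
    foreach my $job (@jobs) {
        # Two (job, system) pairs can evaluate to the same derivation;
        # schedule only one build per distinct .drv path.
        next if $seen{$job->{drvPath}}++;
        push @buildsToAdd, $job;
    }
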
This gets rid of the openHydraDB function and ensures that we
open the database in a consistent way.
Also drop the PostgreSQL sequence hacks. They don't seem to be
necessary anymore.

When checking whether the jobset is unchanged, we need to compare with
the previous JobsetEval regardless of whether it had new builds.
Otherwise we'll keep adding new JobsetEval rows.
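
A hypothetical DBIx::Class sketch of the lookup (the jobsetevals relation
and the column names are assumptions): always fetch the most recent
evaluation, with no filter on whether it produced new builds.

    my $prevEval = $jobset->jobsetevals->search(
        {},                                  # note: no "has new builds" filter
        { order_by => "id DESC", rows => 1 })->first;
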
This isn't perfect because it doesn't handle the case where a
previous build hasn't finished yet. But at least it won't send mail
for old builds that fail while a newer build has already succeeded.

* Don't use isCurrent anymore; instead look up builds in the previous
jobset evaluation. (The isCurrent field is still maintained because
it's still used in some other places.)
* To determine whether to perform an evaluation, compare the hash of
the current inputs with the inputs of the previous jobset
evaluation, rather than checking whether there was ever an evaluation
with those inputs. This way, if the inputs of an evaluation change
back to a previous state, we get a new jobset evaluation in the
database (and thus the latest jobset evaluation correctly represents
the latest state of the jobset). See the sketch after this list.
* Improve performance by removing some unnecessary operations and
adding an index.
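
A sketch of the hash comparison (the input representation and the stored
hash column are illustrative assumptions):

    use strict;
    use warnings;
    use Digest::SHA qw(sha256_hex);

    # Hash a stable string representation of the evaluation inputs.
    sub inputsHash {
        my @inputs = @_;
        return sha256_hex(join "|",
            map { "$_->{name}:$_->{type}:$_->{value}" }
            sort { $a->{name} cmp $b->{name} } @inputs);
    }

    my @currentInputs = (
        { name => "nixpkgs",         type => "svn",     value => "r12345" },
        { name => "officialRelease", type => "boolean", value => "false" },
    );

    # Evaluate only if this differs from the hash stored with the
    # previous JobsetEval row.
    my $hash = inputsHash(@currentInputs);
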
Since it read the actual roots after determining the set of desired
roots, there was a possibility that it would delete roots added by
hydra-evaluator or hydra-build while hydra-update-gc-roots was
running. This could cause a derivation to be garbage-collected before
the build was performed, for instance. Now the actual roots are read
first, so any root added after that time is not deleted.
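
The ordering, sketched with illustrative paths and a stub standing in for
the database query:

    my $rootsDir = "/nix/var/nix/gcroots/hydra";

    # Stand-in for the query that computes the roots we want to keep.
    sub computeDesiredRoots { return ("root-a", "root-b") }

    # 1. Snapshot the actual roots *first*.
    opendir(my $dh, $rootsDir) or die "cannot open $rootsDir: $!";
    my @actualRoots = grep { $_ ne "." && $_ ne ".." } readdir($dh);
    closedir($dh);

    # 2. Only then compute the desired roots.
    my %desired = map { $_ => 1 } computeDesiredRoots();

    # 3. Delete only roots present in the snapshot; anything registered
    #    by hydra-evaluator or hydra-build after step 1 is left alone.
    foreach my $root (@actualRoots) {
        unlink("$rootsDir/$root") unless $desired{$root};
    }
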
The hydra-update-gc-roots script is taking around 95 minutes on our
Hydra instance (though a lot of that is I/O wait). This patch
significantly reduces the number of database queries. In particular,
the N most recent successful builds for each job in a jobset are now
determined in a single query. Also, it removes the calls to
readlink().
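
One way to fetch the N most recent successful builds for every job in a
single query is a window function. This is a sketch against assumed table
and column names (buildStatus = 0 taken to mean success), not necessarily
the query Hydra uses:

    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=hydra", "", "", { RaiseError => 1 });
    my ($project, $jobset, $n) = ("nixos", "trunk", 10);

    my $builds = $dbh->selectall_arrayref(<<~'SQL', { Slice => {} }, $project, $jobset, $n);
        SELECT * FROM (
            SELECT *, ROW_NUMBER() OVER (PARTITION BY job ORDER BY id DESC) AS rn
            FROM Builds
            WHERE project = ? AND jobset = ? AND finished = 1 AND buildStatus = 0
        ) b
        WHERE rn <= ?
        SQL
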
Prepared statements are sometimes much slower than unprepared
statements, because the planner doesn't have access to the query
parameters. This is the case for the active build steps query (in
/status), where a prepared statement is three orders of magnitude
slower. So disable the use of prepared statements in this case.
(Since the query parameters are constant here, it would be nicer if we
could tell DBIx::Class to prepare a statement with those parameters
fixed. But I don't know an easy way to do so.)
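
With DBD::Pg this can be done per statement by turning off server-side
preparation, so the planner sees the literal values. A sketch (the query
shown is illustrative):

    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=hydra", "", "", { RaiseError => 1 });

    # pg_server_prepare => 0 makes DBD::Pg interpolate the values
    # client-side, so PostgreSQL plans the query with them known.
    my $sth = $dbh->prepare(
        "select * from BuildSteps where busy = 1",
        { pg_server_prepare => 0 });
    $sth->execute;
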
For schema upgrades, hydra-init executes the files
src/sql/upgrade-<N>.sql, each of which upgrades the schema from
version N-1 to N. The upgrades are wrapped in a transaction.
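
The shape of the upgrade loop, sketched with an assumed SchemaVersion
table (the Perl around it is illustrative):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=hydra", "", "", { RaiseError => 1 });
    my ($current, $latest) = (3, 5);   # illustrative schema versions

    for my $n ($current + 1 .. $latest) {
        open(my $fh, "<", "src/sql/upgrade-$n.sql") or die $!;
        my $sql = do { local $/; <$fh> };
        close $fh;

        # Run the upgrade and the version bump in a single transaction.
        $dbh->begin_work;
        $dbh->do($sql);
        $dbh->do("update SchemaVersion set version = ?", undef, $n);
        $dbh->commit;
    }
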
In hydra-evaluator, reuse an SVN working copy between runs (similar to
what we do with Git and other input types). This reduces network
traffic in the common case.
Also, don't use nix-prefetch-svn. It doesn't do anything useful.
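
A sketch of the reuse logic, with illustrative paths:

    my ($url, $revision) = ("https://example.org/repo/trunk", "12345");
    my $wcPath = "/var/cache/hydra/svn-checkout";

    if (-d $wcPath) {
        # Reuse the existing working copy; only changed files are fetched.
        system("svn", "update", "-r", $revision, $wcPath) == 0
            or die "svn update failed";
    } else {
        system("svn", "checkout", "-r", $revision, $url, $wcPath) == 0
            or die "svn checkout failed";
    }
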
The underscores are ugly and the .pl extension is an implementation
detail that shouldn't be visible to the outside.
Also, get rid of the *.in files. It's not really necessary to
generate them. And I was always modifying the wrong file.

Record the output and closure size of each build, e.g. to predict how
much disk space a package will require.
* Compute the output / closure size using the info stored in the
Nix database (rather than doing a slow "du"); see the sketch after
this list.
* Store the system type in the BuildSteps table.
* Don't query the queue size when serving static pages. This avoids
two unnecessary database queries per request.
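
A sketch of the size computation via nix-store, which answers these
queries from the Nix database rather than walking the file system
($path is a placeholder):

    my $path = "/nix/store/...-some-output";   # placeholder store path

    # Size of the output itself.
    chomp(my $outputSize = `nix-store --query --size $path`);

    # Closure size: sum the sizes of everything in the closure.
    my $closureSize = 0;
    foreach my $p (split /\n/, `nix-store --query --requisites $path`) {
        chomp(my $size = `nix-store --query --size $p`);
        $closureSize += $size;
    }
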
Record the builds that are part of a jobset evaluation. We need
this to be able to answer queries such as "return the latest NixOS
ISO for which the installation test succeeded". This wasn't previously
possible because the database didn't record which builds of (say)
the `isoMinimal' job and the `tests.installer.simple' job came from
the same evaluation of the nixos:trunk jobset.
Keeping a record of evaluations is also useful for logging purposes.
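
The kind of query this enables, sketched against assumed table names
(JobsetEvals, JobsetEvalMembers; buildStatus = 0 taken to mean success):

    use DBI;

    my $dbh = DBI->connect("dbi:Pg:dbname=hydra", "", "", { RaiseError => 1 });

    my ($evalId) = $dbh->selectrow_array(<<~'SQL');
        SELECT e.id
        FROM JobsetEvals e
        JOIN JobsetEvalMembers mi ON mi.eval = e.id
        JOIN Builds iso ON iso.id = mi.build AND iso.job = 'isoMinimal'
        JOIN JobsetEvalMembers mt ON mt.eval = e.id
        JOIN Builds tst ON tst.id = mt.build AND tst.job = 'tests.installer.simple'
        WHERE e.project = 'nixos' AND e.jobset = 'trunk'
          AND tst.finished = 1 AND tst.buildStatus = 0
        ORDER BY e.id DESC
        LIMIT 1
        SQL
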
Three environment variables are relevant:
TWITTER_USER
TWITTER_PASS
HYDRA_BUILD_BASEURL
- Twitter notification is disabled when TWITTER_USER and TWITTER_PASS
are not defined.
- If HYDRA_BUILD_BASEURL is not defined, no URL is included in the
Twitter messages.
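
The gating logic, sketched (the build variables are illustrative):

    if (defined $ENV{TWITTER_USER} && defined $ENV{TWITTER_PASS}) {
        my ($buildId, $jobName) = (1234, "nixos:trunk:isoMinimal");
        my $msg = "Build $buildId of $jobName finished";
        $msg .= ": $ENV{HYDRA_BUILD_BASEURL}/build/$buildId"
            if defined $ENV{HYDRA_BUILD_BASEURL};
        # ... authenticate with TWITTER_USER / TWITTER_PASS and post $msg ...
    }
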
Check each build against the set of current builds for the job. This
ensures that the builds with the highest ID are what we want in the
channel, even in case of reverts.

Keep track of the derivations that the jobset currently contains. This
is necessary to allow the "latest" channel to contain the correct
builds when the sources of a jobset are reverted.

mainly to prevent all those Nixpkgs builds named "kde*" from
building at the same time. Since they all have the same slow
dependencies (qt, kdelibs), this tends to block the build farm.

Allow constraints on the input build to be specified, as well as
constraints on the inputs of the input build. For instance, you can
require that a build has input `system = "i686-linux"'.

This is important when one binary build serves as an input to
another binary build. Obviously, we shouldn't pass a build on
i686-linux as an input to another on i686-darwin. Hence the
necessity for constraints.

The constraints are currently quite limited. What you really want to
say is that the "system" input of the other build has to match the
"system" input of this build. But such constraints require a bit more
work, since they introduce dependencies between inputs.
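
A sketch of how such a constraint could be checked, with a made-up
representation of the inputs:

    # Does the candidate input build satisfy all required inputs?
    sub satisfiesConstraints {
        my ($buildInputs, $constraints) = @_;
        foreach my $name (keys %$constraints) {
            return 0 unless defined $buildInputs->{$name}
                && $buildInputs->{$name} eq $constraints->{$name};
        }
        return 1;
    }

    # E.g. require that the input build had input system = "i686-linux".
    my $ok = satisfiesConstraints(
        { system => "i686-linux", officialRelease => "false" },
        { system => "i686-linux" });
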
Distinguish between jobs with the same name in different jobsets
(e.g. "trunk" vs. "stdenv-branch" for Nixpkgs).
* Renamed the "attrName" field of Builds to "job".
* Renamed the "id" field of BuildSteps to "build".

Don't perform a build if one of its dependencies already failed in a
previous build. This is essential for Nixpkgs: we don't want to keep
rebuilding the same failed dependency (say, Glibc) over and over again
for a few hundred jobs.
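
A sketch of the check (FailedDerivations is a made-up stand-in for
however the failures are recorded):

    # Return the first dependency .drv already known to have failed.
    sub failedDependency {
        my ($dbh, @depDrvPaths) = @_;
        foreach my $drv (@depDrvPaths) {
            my ($n) = $dbh->selectrow_array(
                "select count(*) from FailedDerivations where drvPath = ?",
                undef, $drv);
            return $drv if $n;
        }
        return undef;
    }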