* Don't use isCurrent anymore; instead, look up builds in the previous
  jobset evaluation (see the sketch after this list). (The isCurrent
  field is still maintained because it's still used in some other
  places.)
* To determine whether to perform an evaluation, compare the hash of
  the current inputs with the hash of the inputs of the previous
  jobset evaluation, rather than checking whether there was ever an
  evaluation with those inputs. This way, if the inputs of an
  evaluation change back to a previous state, we get a new jobset
  evaluation in the database (and thus the latest jobset evaluation
  correctly represents the latest state of the jobset).
* Improve performance by removing some unnecessary operations and
adding an index.
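
In SQL terms, a sketch of the new lookups, assuming the JobsetEvals
and JobsetEvalMembers tables described further down (the exact column
names are an assumption):

    -- Builds of the latest evaluation of a jobset, instead of
    -- filtering on isCurrent:
    SELECT b.* FROM Builds b
    JOIN JobsetEvalMembers m ON m.build = b.id
    WHERE m.eval =
        (SELECT max(id) FROM JobsetEvals
         WHERE project = 'nixos' AND jobset = 'trunk');

    -- Input hash of the previous evaluation, to compare against the
    -- hash of the current inputs:
    SELECT hash FROM JobsetEvals
    WHERE project = 'nixos' AND jobset = 'trunk'
    ORDER BY id DESC LIMIT 1;
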
The hydra-update-gc-roots script takes around 95 minutes on our
Hydra instance (though a lot of that is I/O wait). This patch
significantly reduces the number of database queries. In particular,
the N most recent successful builds for each job in a jobset are now
determined in a single query. Also, it removes the calls to
readlink().
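
For illustration, the per-job queries can be collapsed like this (a
sketch, not the literal query; finished = 1 and buildStatus = 0
denote a finished, successful build, and N = 3 here):

    SELECT id, project, jobset, job FROM
        (SELECT id, project, jobset, job,
                row_number() OVER (PARTITION BY project, jobset, job
                                   ORDER BY id DESC) AS rownum
         FROM Builds
         WHERE finished = 1 AND buildStatus = 0) t
    WHERE t.rownum <= 3;
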
Prepared statements are sometimes much slower than unprepared
statements, because the planner doesn't have access to the query
parameters. This is the case for the active build steps query (in
/status), where a prepared statement is three orders of magnitude
slower. So disable the use of prepared statements in this case.
(Since the query parameters are constant here, it would be nicer if we
could tell DBIx::Class to prepare a statement with those parameters
fixed. But I don't know an easy way to do so.)
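
The effect is easy to reproduce in psql with a simplified stand-in
for the actual query:

    PREPARE active(int) AS
        SELECT * FROM BuildSteps WHERE busy = $1;
    EXPLAIN EXECUTE active(1);          -- generic plan
    EXPLAIN SELECT * FROM BuildSteps
        WHERE busy = 1;                 -- parameter-specific plan
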
For schema upgrades, hydra-init executes the files
src/sql/upgrade-<N>.sql, each of which upgrades the schema from
version N-1 to N. The upgrades are wrapped in a transaction.
The singleton table SchemaVersion contains the current version
of the Hydra database schema. This can be used to upgrade the
schema on the fly.
Also reran the DBIx::Class schema loader.
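
Sketched out, the mechanism looks like this (the DDL and the psql
invocation are illustrative, not the literal code):

    CREATE TABLE SchemaVersion (
        version integer not null
    );

    -- For each N above the stored version, hydra-init roughly does:
    BEGIN;
    \i src/sql/upgrade-2.sql            -- schema version 1 -> 2
    UPDATE SchemaVersion SET version = 2;
    COMMIT;
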
* Store the system type in the BuildSteps table (a sketch of this
  change follows below).
* Don't query the queue size when serving static pages. This avoids
  two unnecessary database queries per request.
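
The first change is essentially a one-column addition (assumed DDL):

    ALTER TABLE BuildSteps ADD COLUMN system text;
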
Added a table recording the builds that are part of a jobset
evaluation. We need
this to be able to answer queries such as "return the latest NixOS
ISO for which the installation test succeeded". This wasn't previously
possible because the database didn't record which builds of (say)
the `isoMinimal' job and the `tests.installer.simple' job came from
the same evaluation of the nixos:trunk jobset.
Keeping a record of evaluations is also useful for logging purposes.
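
For example, the ISO query above becomes a join through the members
table (a sketch; buildStatus = 0 means success):

    SELECT iso.id FROM Builds iso
    JOIN JobsetEvalMembers m1 ON m1.build = iso.id
    JOIN JobsetEvalMembers m2 ON m2.eval = m1.eval
    JOIN Builds test ON test.id = m2.build
    WHERE iso.project = 'nixos' AND iso.jobset = 'trunk'
      AND iso.job = 'isoMinimal'
      AND test.job = 'tests.installer.simple'
      AND iso.finished = 1 AND iso.buildStatus = 0
      AND test.finished = 1 AND test.buildStatus = 0
    ORDER BY m1.eval DESC
    LIMIT 1;
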
Made the channel query faster, from about 4.5s to 1.0s for the
global "latest" channel.
Note that the query is only fast if the "IndexBuildsOnJob" and
"IndexBuildsOnJobAndIsCurrent" indices are dropped - if they exist,
PostgreSQL will use those instead of the more efficient
"IndexBuildsOnJobFinishedId" index. Looks like a bug in the planner
to me...
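
In DDL terms (the column list of the new index is an assumption):

    DROP INDEX IndexBuildsOnJob;
    DROP INDEX IndexBuildsOnJobAndIsCurrent;
    CREATE INDEX IndexBuildsOnJobFinishedId
        ON Builds(project, jobset, job, finished, id DESC);
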
Implementing releases as a dynamic view on the database was
misguided, since doing things like adding a new job to a release set
will invalidate
all old releases. So we rename release sets to views, and we'll
reintroduce releases as separate, static entities in the database.
Keep a record of the derivations that the jobset currently contains.
This is
necessary to allow the "latest" channel to contain the correct
builds when the sources of a jobset are reverted.
Distinguish between jobs with the same name in different jobsets
(e.g. "trunk" vs "stdenv-branch" for Nixpkgs).
* Renamed the "attrName" field of Builds to "job".
* Renamed the "id" field of BuildSteps to "build".
Note: to upgrade old databases, do a dump with an old SQLite first;
dumping with a new SQLite will silently discard (!) the contents of
the ReleaseSetJobs table.
Don't rebuild a derivation if it failed in a previous build. This is
essential for Nixpkgs: we don't want to keep rebuilding the same
failed dependency (say, Glibc) over and over again for a few hundred
jobs.
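
A sketch of the check this enables, assuming Hydra's BuildSteps
columns (outPath, status, with a non-zero status meaning failure; the
store path is a hypothetical example):

    SELECT 1 FROM BuildSteps
    WHERE outPath = '/nix/store/example-glibc'
      AND busy = 0 AND status != 0
    LIMIT 1;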