Mandatory features are features that MUST be present in a derivation's
requiredSystemFeatures attribute in order for the build to be dispatched
to a machine that declares them. One application is performance
testing, where we have a dedicated machine to run performance tests
(and nothing else). Then we would add the label "perf" to the
machine's mandatory features and to the performance testing
derivations.
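Concretely, the performance-testing derivations would carry

requiredSystemFeatures = [ "perf" ];

while the dedicated machine's entry in the remote-machines list names
"perf" among its mandatory features. Assuming the columns are host,
system type, SSH key, maximum jobs, speed factor, supported features and
mandatory features (check the build-remote.pl shipped with your
installation), such an entry might look like

root@perfbox.example.org x86_64-linux /root/.ssh/id_buildfarm 1 1 perf perf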
"nix-channel --add" now accepts a second argument: the channel name.
This allows channels to have a nicer name than (say) nixpkgs_unstable.
If no name is given, it defaults to the last component of the URL
(with "-unstable" or "-stable" removed).
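For example, a channel that used to show up under the unwieldy name
nixpkgs_unstable can now be added as plain "nixpkgs" (URL illustrative):

$ nix-channel --add http://example.org/channels/nixpkgs_unstable nixpkgs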
Also, channels are now stored in a profile
(/nix/var/nix/profiles/per-user/$USER/channels). One advantage of
this is that it allows rollbacks (e.g. if "nix-channel --update" gives
an undesirable update).
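For instance, to undo a bad "nix-channel --update", the channels profile
can be rolled back with nix-env (a sketch using the profile path
mentioned above):

$ nix-env --profile /nix/var/nix/profiles/per-user/$USER/channels --rollback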
Sometimes when doing "nix-build --run-env" you don't want all
dependencies to be built. For instance, if we want to do "--run-env"
on the "build" attribute in Hydra's release.nix (to get Hydra's build
environment), we don't want its "tarball" dependency to be built. So
we can do:
$ nix-build --run-env release.nix -A build --exclude 'hydra-tarball'
This will skip the dependency whose name matches the "hydra-tarball"
regular expression. The "--exclude" option can be repeated any number
of times.
This command builds or fetches all dependencies of the given
derivation, then starts a shell with the environment variables from
the derivation. This shell also sources $stdenv/setup to initialise
the environment further.
The current directory is not changed. Thus this is a convenient way
to reproduce a build environment in an existing working tree.
Existing environment variables are left untouched (unless the
derivation overrides them). As a special hack, the original value of
$PATH is appended to the $PATH produced by $stdenv/setup.
Example session:
$ nix-build --run-env '<nixpkgs>' -A xterm
(the dependencies of xterm are built/fetched...)
$ tar xf $src
$ ./configure
$ make
$ emacs
(... hack source ...)
$ make
$ ./xterm
directory. Previously in this situation we did add the Nix
expressions from the channel to allow installation from source, but
this doesn't work for binary-only channels and leads to confusing
error messages.
scripts.
* Include the version and architecture in the -I flag so that there is
at least a chance that a Nix binary built for one Perl version will
run on another version.
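For illustration only (the store path and Perl layout here are made up),
the resulting shebang of such a script looks roughly like

#! /usr/bin/perl -I /nix/store/...-nix/lib/perl5/site_perl/5.12.4/x86_64-linux-thread-multi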
other simplifications.
* Use <nix/...> to locate the corepkgs. This allows them to be
overridden through $NIX_PATH.
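For example, to test a modified copy of the corepkgs (path illustrative;
this assumes the name=path form of search-path entries, otherwise a plain
entry pointing at a directory that contains a nix/ subdirectory works
too):

$ NIX_PATH=nix=/path/to/my/corepkgs nix-build ...

After that, references such as <nix/fetchurl.nix> resolve under
/path/to/my/corepkgs.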
* Use bash's pipefail option in the NAR builder so that we don't need
to create a temporary file.
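The idea, as a sketch rather than the actual builder script: with
pipefail the pipeline's exit status reflects a failure of
"nix-store --dump", so its output can be streamed straight into the
compressor instead of going through a temporary file first.

set -o pipefail
# $path is the store path being packed, $out the derivation's output
nix-store --dump "$path" | bzip2 > "$out"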
closure to a given machine at the same time. This prevents the case
where multiple instances try to copy the same missing store path to
the target machine, which is very wasteful.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
read the manifest just to check the version and print the number of
paths. This makes nix-pull very fast for the cached case (speeding
up nixos-rebuild without the ‘--no-pull’ or ‘--fast’ options).
brackets, e.g.
import <nixpkgs/pkgs/lib>
are resolved by looking them up relative to the elements listed in
the search path. This allows us to get rid of hacks like
import "${builtins.getEnv "NIXPKGS_ALL"}/pkgs/lib"
The search path can be specified through the ‘-I’ command-line flag
and through the colon-separated ‘NIX_PATH’ environment variable,
e.g.,
$ nix-build -I /etc/nixos ...
If a file is not found in the search path, an error is thrown lazily,
i.e. only if and when the offending expression is actually evaluated.
SQLite manifest cache. The DBI AutoCommit feature caused every
process to have an active transaction at all times, which could
indefinitely block processes wanting to update the manifest cache.
* Disable fsync() in the manifest cache because we don't need
integrity (the cache can always be recreated if it gets corrupted).
them into memory. This brings memory use down to (more or less)
O(1). For instance, on my test case, the maximum resident size of
download-using-manifests while filling the DB went from 142 MiB to
11 MiB.
This significantly speeds up the download-using-manifests
substituter, especially if manifests are very large. For instance,
one "nix-build -A geeqie" operation that updated four packages using
binary patches went from 18.5s to 1.6s. It also significantly
reduces memory use.
The cache is kept in /nix/var/nix/manifests/cache.sqlite. It's
updated automatically when manifests are added to or removed from
/nix/var/nix/manifests. It might be interesting to have nix-pull
store manifests directly in the DB, rather than storing them as
separate flat files, but then we would need a command line interface
to delete manifests from the DB.
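Since the cache only mirrors the flat manifest files, it can be thrown
away at any time and will simply be rebuilt on the next use, e.g. (as
root):

$ rm /nix/var/nix/manifests/cache.sqlite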
`+' and `?' in filenames. This is very slow if /nix/store is very
large. (This is a quick hack - a cleaner solution would be to
bypass the shell entirely.)
`nix-store -q --hash' to get the hash of the base path rather than
`nix-hash'. However, only do this for estimating the size of a
download, not for the actual substitution, because sometimes the
contents of store paths are modified (which they shouldn't be, of
course).
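For comparison (store path illustrative), the fast query reads the hash
recorded in the Nix database, while nix-hash recomputes it from the
actual contents on disk; the two agree (modulo output format) unless the
contents of the path have been modified:

$ nix-store -q --hash /nix/store/bla
$ nix-hash --type sha256 --base32 /nix/store/bla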
hook script proper, and the stdout/stderr of the builder. Only the
latter should be saved in /nix/var/log/nix/drvs.
* Allow the verbosity to be set through an option.
* Add a flag --quiet to lower the verbosity level.
it requires a certain feature on the build machine, e.g.
requiredSystemFeatures = [ "kvm" ];
We need this in Hydra to make sure that builds that require KVM
support are forwarded to machines that have KVM support. Probably
this should also be enforced for local builds.
the hook every time we want to ask whether we can run a remote build
(which can be very often), we now reuse a hook process for answering
those queries until it accepts a build. So if there are N
derivations to be built, at most N hooks will be started.
load on the Hydra build farm (where it's unnecessary anyway because
it has a fast connection to the build machines). In any case,
compression can be enabled by using the `-C' option to ssh.
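If compression is wanted for a particular (slow) link, one way to get
the effect of `-C' without touching Nix itself is the ssh client
configuration, e.g. in ~/.ssh/config (host name illustrative):

Host slow-machine.example.org
  Compression yes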
doesn't exist. The Debian packages don't include the manifests
directory, so nix-channel would silently skip doing a nix-pull,
resulting in everything being built from source. Thanks to Juan
Pedro Bolívar Puente.
giving jobs to the first machine until it hits its job limit, then
the second machine and so on. This should improve utilisation of
the Hydra build farm a lot. Also take an optional speed factor
into account to cause fast machines to be preferred over slower
machines with a similar load.
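As a sketch (assuming the optional speed factor is the fifth column of
the machine list, after host, system type, SSH key and maximum jobs),
the second machine below is preferred over the first whenever both are
similarly loaded:

root@slow.example.org x86_64-linux /root/.ssh/id_buildfarm 2 1
root@fast.example.org x86_64-linux /root/.ssh/id_buildfarm 8 4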
(that is, call the build hook with a certain interval until it
accepts the build).
* build-remote.pl was totally broken: for all system types other than
the local system type, it would send all builds to the *first*
machine of the appropriate type.
sure that it works as expected when you pass it a derivation. That
is, we have to make sure that all build-time dependencies are built,
and that they are all in the input closure (otherwise remote builds
might fail, for example). This is ensured at instantiation time by
adding all derivations and their sources to inputDrvs and inputSrcs.
downloaded files; rather, check the hash of the unpacked store
path.
When the server produces bzipped NAR archives on demand (like Hydra
does), the hash of the file is not known in advance; it's streamed
from the server. Thus the manifest doesn't contain a hash for the
bzipped NAR archive. However, the server does know the hash of the
*uncompressed* NAR archive (the "NarHash" field), since it's stored
in the Nix database (nix-store -q --hash /nix/store/bla). So we use
that instead for checking the integrity of the download.
scan for runtime dependencies (i.e. the local machine shouldn't do a
scan that the remote machine has already done). Also pipe directly
into `nix-store --import': don't use a temporary file.
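The underlying export/import pipe, spelled out manually (host and store
path purely illustrative):

$ ssh root@example.org 'nix-store --export $(nix-store -qR /nix/store/bla)' | nix-store --import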
(e.g. an SSH connection problem) and permanent failures (i.e. the
builder failed). This matters to Hydra (it wants to know whether it
makes sense to retry a build).
makes more sense for the build farm, otherwise every nix-store
invocation will lead to at least one local build. Will come up with
a better solution later...
necessary that at least one build hook doesn't return "postpone",
otherwise nix-store will barf ("waiting for a build slot, yet there
are no running children"). So inform the build hook when this is
the case, so that it can start a build even when that would exceed
the maximum load on a machine.
list like
root@example.org x86_64-linux /root/.ssh/id_buildfarm 1
root@example.org i686-darwin /root/.ssh/id_buildfarm 1
This is possible when the Nix installation on example.org itself has
remote builds enabled.