downloaded files; rather, check the hash of the unpacked store
path.
When the server produces bzipped NAR archives on demand (like Hydra
does), the hash of the file is not known in advance; the archive is
generated on the fly and streamed straight to the client. Thus the
manifest doesn't contain a hash for the
bzipped NAR archive. However, the server does know the hash of the
*uncompressed* NAR archive (the "NarHash" field), since it's stored
in the Nix database (nix-store -q --hash /nix/store/bla). So we use
that instead for checking the integrity of the download.
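Roughly, the check amounts to comparing the NarHash from the manifest
against a hash recomputed over the unpacked path's NAR serialisation
(the store path and hash below are placeholders):

  $ nix-store -q --hash /nix/store/<hash>-foo    # NarHash from the Nix database
  sha256:<base-32 hash>
  $ nix-hash --type sha256 --base32 /nix/store/<hash>-foo   # recomputed over the NAR dump
  <base-32 hash>

The two values must match, or the download is rejected.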
the DerivationGoal runs. Otherwise, if a goal is a top-level goal,
then the lock won't be released until nix-store finishes. With
--keep-going and lots of top-level goals, it's possible to run out
of file descriptors (this happened sometimes in the build farm for
Nixpkgs). Also, a failed derivation can't be built again until its
lock is released.
* Idem for locks on build users: these weren't released in a timely
manner for failed top-level derivation goals. So if there were more
than (say) 10 such failed builds, you would get an error about
having run out of build users.
scan for runtime dependencies (i.e. the local machine shouldn't do a
scan that the remote machine has already done). Also pipe directly
into `nix-store --import': don't use a temporary file.
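The transfer then boils down to a pipeline along these lines (host and
store path are placeholders, not the hook's literal command):

  $ ssh root@example.org "nix-store --export /nix/store/<hash>-foo" \
      | nix-store --import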
(e.g. an SSH connection problem) and permanent failures (i.e. the
builder failed). This matters to Hydra (it wants to know whether it
makes sense to retry a build).
allocate memory, which is verboten in signal handlers. This caused
random failures in the test suite on Mac OS X (triggered by the
spurious SIGPOLL signals that platform delivers, which should also be
fixed).
closure of the inputs. This really enforces that there can't be any
undeclared dependencies on paths in the store. This is done by
creating a fake Nix store and creating bind-mounts or hard-links in
the fake store for all paths in the closure. After the build, the
build output is moved from the fake store to the real store. TODO:
the chroot has to be on the same filesystem as the Nix store for
this to work, but this isn't enforced yet. (I.e. it currently only
works if /tmp is on the same FS as /nix/store.)
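In shell terms, the setup is roughly the following sketch ($chroot and
$inputs are placeholders; the real code does this inside the C++
builder):

  # $inputs: the closure of the build's inputs
  mkdir -p $chroot/nix/store
  for p in $inputs; do
      mkdir -p $chroot$p
      mount --bind $p $chroot$p    # or a hard-link, for regular files
  done
  # ... run the builder chrooted into $chroot ...
  # finally, move the outputs from the fake store into the real store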
bind-mounts we do are only visible to the builder process and its
children. So accidentally doing "rm -rf" on the chroot directory
won't wipe out /nix/store and other bind-mounted directories
anymore. Also, the bind-mounts in the private namespace disappear
automatically when the builder exits.
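The same effect can be observed from a shell (as root) with
util-linux's unshare(1); this is just an illustration, Nix itself uses
the underlying system calls:

  # mounts made inside the new namespace are invisible outside of it
  $ unshare -m sh -c 'mount --bind /nix/store /mnt && ls /mnt'
  # back in the parent namespace, /mnt is empty again
  $ ls /mnt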
makes more sense for the build farm, otherwise every nix-store
invocation will lead to at least one local build. Will come up with
a better solution later...
necessary that at least one build hook doesn't return "postpone",
otherwise nix-store will barf ("waiting for a build slot, yet there
are no running children"). Therefore, inform the build hook when this
is the case, so that it can start a build even if that would exceed
the maximum load on a machine.
nix-store -r (or some other operation) is started via ssh, it will
at least have a chance of terminating quickly when the connection is
killed. Right now it just runs to completion, because it never
notices that stderr is no longer connected to anything. Of course
it would be better if sshd would just send a SIGHUP, but it doesn't
(https://bugzilla.mindrot.org/show_bug.cgi?id=396).
list like

  root@example.org x86_64-linux /root/.ssh/id_buildfarm 1
  root@example.org i686-darwin /root/.ssh/id_buildfarm 1

This is possible when the Nix installation on example.org itself has
remote builds enabled.
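The four fields are the SSH user and host, the platform type, the SSH
identity file, and the maximum number of parallel builds. A typical
setup using the build-remote.pl hook shipped with Nix would look
something like this (the exact paths are illustrative):

  $ export NIX_BUILD_HOOK=/nix/libexec/nix/build-remote.pl
  $ export NIX_REMOTE_SYSTEMS=/etc/nix/remote-systems.conf
  $ export NIX_CURRENT_LOAD=/var/run/nix/current-load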