There is a long-standing race condition when copying a closure to a
remote machine, particularly affecting build-remote.pl: the client
first asks the remote machine which paths it already has, then copies
over the missing paths. If the garbage collector kicks in on the
remote machine between the first and second step, the already-present
paths may be deleted. The missing paths may then refer to deleted
paths, causing nix-copy-closure to fail. The client now performs both
steps using a single remote Nix call (using ‘nix-store --serve’),
locking all paths in the closure while querying.
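A rough sketch of the client side of this (hypothetical; $host is
assumed, and the actual serve-protocol framing is omitted):

  use IPC::Open2;

  # One SSH connection runs ‘nix-store --serve’ on the remote machine.
  # The same connection is used both to query which paths are already
  # valid (the remote side locks them) and to import the missing ones,
  # so the GC cannot delete anything in between.
  my ($from, $to);
  my $pid = open2($from, $to, "ssh", $host,
                  "nix-store", "--serve", "--write");
  # ... speak the serve protocol on $to/$from: send the closure, read
  # back the set of paths the remote lacks, then stream those as NARs ...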
I changed the --serve protocol a bit (getting rid of QueryCommand), so
this breaks the SSH substituter from older versions. But it was marked
experimental anyway.
Fixes #141.
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all is well, then when Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like:
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of
the key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
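For illustration, verifying such a signature could look roughly like
this (Crypt::OpenSSL::RSA and MIME::Base64 here are stand-ins, not
necessarily the exact scheme the client implements):

  use Crypt::OpenSSL::RSA;
  use MIME::Base64;

  sub checkSignature {
      my ($narInfo, $publicKeyPem) = @_;
      # Everything before the Signature line is what was signed.
      $narInfo =~ /^(.*)^Signature: (\d+);([^;]+);(\S+)\s*$/ms or return 0;
      my ($signedData, $version, $keyName, $sig64) = ($1, $2, $3, $4);
      return 0 unless $version eq "1";
      my $rsa = Crypt::OpenSSL::RSA->new_public_key($publicKeyPem);
      $rsa->use_sha256_hash();  # verify() hashes $signedData with SHA-256
      return $rsa->verify($signedData, decode_base64($sig64));
  }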
Issue #75.
If the database is opened through the Perl bindings (even though
nix.conf has use-sqlite-wal set to false), the database is
automatically converted to WAL mode. This causes the next Nix process
that accesses the database to convert it back to "truncate" mode. If
the Perl program still has the database open in WAL mode at that
point, the conversion fails, crashing the Nix process performing the
WAL -> truncate conversion.
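In other words, whatever opens the database needs to pick the journal
mode from the setting instead of unconditionally switching to WAL. A
minimal sketch of that idea via DBI/DBD::SQLite (illustrative only;
the bindings really go through libstore, and $useSqliteWal is an
assumed variable holding the value of ‘use-sqlite-wal’):

  use DBI;

  my $dbh = DBI->connect("dbi:SQLite:dbname=/nix/var/nix/db/db.sqlite",
                         "", "", { RaiseError => 1 });
  my $mode = $useSqliteWal ? "wal" : "truncate";
  $dbh->do("pragma main.journal_mode = $mode;");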
Ever since SQLite in Nixpkgs was updated to 3.8.0.2, Nix has randomly
segfaulted on Darwin:
http://hydra.nixos.org/build/6175515
http://hydra.nixos.org/build/6611038
It turns out that this is because the binary cache substituter somehow
ends up loading two versions of SQLite: the one in Nixpkgs and the
other from /usr/lib/libsqlite3.dylib. It's not exactly clear why the
latter is loaded, but it appears to be because WWW::Curl indirectly loads
/System/Library/Frameworks/CoreFoundation.framework/Versions/A/CoreFoundation,
which in turn seems to load /usr/lib/libsqlite3.dylib. This leads to
a segfault when Perl exits:
#0 0x00000001010375f4 in sqlite3_finalize ()
#1 0x000000010125806e in sqlite_st_destroy ()
#2 0x000000010124bc30 in XS_DBD__SQLite__st_DESTROY ()
#3 0x00000001001c8155 in XS_DBI_dispatch ()
...
#14 0x0000000100023224 in perl_destruct ()
#15 0x0000000100000d6a in main ()
...
The workaround is to explicitly load DBD::SQLite before WWW::Curl.
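Concretely, the fix is just an ordering of imports (sketch):

  # Bind the Nixpkgs libsqlite3 first...
  use DBD::SQLite;
  # ...before WWW::Curl drags in CoreFoundation and, with it,
  # /usr/lib/libsqlite3.dylib.
  use WWW::Curl::Easy;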
As discovered by Todd Veldhuizen, the shell started by nix-shell has
its affinity set to a single CPU. This is because nix-shell connects
to the Nix daemon, which causes the affinity hack to be applied. So
we turn this off for Perl programs.
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
I.e.
Subroutine Nix::Store::isValidPath redefined at /nix/store/clfzsf6gi7qh5i9c0vks1ifjam47rijn-perl-5.16.2/lib/perl5/5.16.2/XSLoader.pm line 92.
and so on.
To implement binary caches efficiently, Hydra needs to be able to map
the hash part of a store path (e.g. "gbg...zr7") to the full store
path (e.g. "/nix/store/gbg...zr7-subversion-1.7.5"). (The binary
cache mechanism uses hash parts as a key for looking up store paths to
ensure privacy.) However, doing a search in the Nix store for
/nix/store/<hash>* is expensive since it requires reading the entire
directory. queryPathFromHashPart() prevents this by doing a cheap
database lookup.
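The lookup can be done with a single indexed query, along these lines
(a sketch against the SQLite schema, not the actual libstore code):

  use DBI;

  my $dbh = DBI->connect("dbi:SQLite:dbname=/nix/var/nix/db/db.sqlite",
                         "", "", { RaiseError => 1 });

  sub queryPathFromHashPart {
      my ($hashPart) = @_;
      my $prefix = "/nix/store/$hashPart";
      # Store paths sort lexicographically, so the first path >= the
      # prefix is the only candidate: one B-tree lookup instead of a
      # full directory read.
      my ($path) = $dbh->selectrow_array(
          "select path from ValidPaths where path >= ? limit 1",
          {}, $prefix);
      return (defined $path && rindex($path, $prefix, 0) == 0)
          ? $path : undef;
  }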
XZ compresses significantly better than bzip2. Here are the
compression ratios and execution times (using 4 cores in parallel) on
my /var/run/current-system (3.1 GiB):
bzip2: total compressed size 849.56 MiB, 30.8% [2m08]
xz -6: total compressed size 641.84 MiB, 23.4% [6m53]
xz -7: total compressed size 621.82 MiB, 22.6% [7m19]
xz -8: total compressed size 599.33 MiB, 21.8% [7m18]
xz -9: total compressed size 588.18 MiB, 21.4% [7m40]
Note that compression takes much longer. More importantly, however,
decompression is much faster:
bzip2: 1m47.274s
xz -6: 0m55.446s
xz -7: 0m54.119s
xz -8: 0m52.388s
xz -9: 0m51.842s
The only downside to using -9 is that decompression takes a fair
amount (~65 MB) of memory.
This command builds or fetches all dependencies of the given
derivation, then starts a shell with the environment variables from
the derivation. This shell also sources $stdenv/setup to initialise
the environment further.
The current directory is not changed. Thus this is a convenient way
to reproduce a build environment in an existing working tree.
Existing environment variables are left untouched (unless the
derivation overrides them). As a special hack, the original value of
$PATH is appended to the $PATH produced by $stdenv/setup.
Example session:
$ nix-build --run-env '<nixpkgs>' -A xterm
(the dependencies of xterm are built/fetched...)
$ tar xf $src
$ ./configure
$ make
$ emacs
(... hack source ...)
$ make
$ ./xterm
Without these, Nix fails on 32-bit Linux with Perl 5.14, with a
rather unhelpful error message:
Not a CODE reference at /nix/store/n6kpbacn6nn7i3i735v8j3di8aqyl07v-perl-5.14.2/lib/perl5/5.14.2/i686-linux-thread-multi/DynaLoader.pm
This is likely because the lack of -D_FILE_OFFSET_BITS=64 causes
various Perl structures to not match what the Perl interpreter
expects.
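Illustratively, the flag just has to reach the compiler when the XS
glue is built. Nix's own build uses Automake, but the equivalent
ExtUtils::MakeMaker incantation makes the point:

  use Config;
  use ExtUtils::MakeMaker;

  WriteMakefile(
      NAME    => 'Nix::Store',
      # Append the flag so Perl's struct layouts match the interpreter's.
      CCFLAGS => "$Config{ccflags} -D_FILE_OFFSET_BITS=64",
  );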
scripts.
* Include the version and architecture in the -I flag so that there is
at least a chance that a Nix binary built for one Perl version will
run on another version.
‘nix-store --export’.
* Add a Perl module that provides the functionality of
‘nix-copy-closure --to’. This is used by build-remote.pl so it no
longer needs to start a separate nix-copy-closure process. Also, it
uses the Perl API to do the export, so it doesn't need to start a
separate nix-store process either. As a result, nix-copy-closure
and build-remote.pl should no longer fail on very large closures due
to an "Argument list too long" error. (Note that having very many
dependencies in a single derivation can still fail because the
environment can become too large. Can't be helped though.)
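A hypothetical sketch of how such a module might be used from
build-remote.pl (the exact signature here is an assumption):

  use Nix::CopyClosure;

  # Copy the closure of some store paths to a remote machine over SSH,
  # without forking nix-copy-closure or nix-store.
  Nix::CopyClosure::copyTo('root@builder', [ @storePaths ]);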
libstore so that the Perl bindings can use it as well. It's vital
that the Perl bindings use the configuration file, because otherwise
nix-copy-closure will fail with a ‘database locked’ message if the
value of ‘use-sqlite-wal’ is changed from the default.
read the manifest just to check the version and print the number of
paths. This makes nix-pull very fast for the cached case (speeding
up nixos-rebuild without the ‘--no-pull’ or ‘--fast’ options).
bindings to be used in Nix's own Perl scripts.
The only downside is that Perl XS and Automake/libtool don't really
like each other, so building is a bit tricky.