Before these changes, building the whole system with
`contentAddressedByDefault = true;` produced many uninformative messages:
$ nix build -f nixos system --keep-going
...
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-python'; cross fingers
error: 2 dependencies of derivation '/nix/store/...-hub-2.14.2.drv' failed to build
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-man'; cross fingers
...
Let's downgrade these messages to debug().
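The change itself is mechanical; roughly (the variable name and exact
call site are illustrative, not copied from the Nix sources):

    // Before: emitted at warning level for every rewritten output.
    warn("rewriting hashes in '%s'; cross fingers", outputPath);

    // After: only shown at debug verbosity, so normal builds stay quiet.
    debug("rewriting hashes in '%s'; cross fingers", outputPath);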
I had started the trend of doing `std::visit` by value (because a type
error once misled me into thinking that was the only form that
existed). While the optimizer should in principle be able to deal with
the extra copying or extra indirection once the lambdas are inlined,
sticking with by-reference is the conventional default. I hope this
might even improve performance.
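A small, self-contained illustration of the difference, using the usual
`overloaded` visitor helper and hypothetical types (not taken from the
Nix sources):

    #include <iostream>
    #include <string>
    #include <variant>

    // The usual 'overloaded' helper for building a visitor from lambdas.
    template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
    template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

    using Value = std::variant<int, std::string>;

    void printValue(const Value & v)
    {
        std::visit(overloaded {
            // Taking the alternatives by (const) reference avoids copying
            // the std::string; the by-value form would copy it on every visit.
            [](int i) { std::cout << "int: " << i << "\n"; },
            [](const std::string & s) { std::cout << "string: " << s << "\n"; },
        }, v);
    }

    int main()
    {
        printValue(Value{42});
        printValue(Value{std::string("hello")});
    }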
This actually bit me quite recently in `nixpkgs` because I assumed that
`nix-build --check` would also error out if hashes no longer match[1],
and so I wrongly assumed that I couldn't reproduce the mismatch error.
The fix is rather simple: during output registration, a so-called
`delayedException` is instantiated, e.g. if a FOD hash mismatch occurs.
However, in the case of `nix-build --check` (or `--rebuild` in the case
of `nix build`), the code path where this exception is thrown is never
reached.
By adding that check to the if-clause that causes an early exit in the
`bmCheck` case, the issue is gone. Also added a (previously failing)
test case to demonstrate the problem.
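A minimal, self-contained demonstration of the pattern (hypothetical
stand-ins, not the actual Nix code):

    #include <exception>
    #include <iostream>
    #include <stdexcept>

    int main()
    {
        // During output registration the error is only *recorded*.
        std::exception_ptr delayedException;
        try {
            throw std::runtime_error("hash mismatch in fixed-output derivation");
        } catch (...) {
            delayedException = std::current_exception();
        }

        const bool buildModeCheck = true;  // think `nix-build --check`

        try {
            // Before the fix, the bmCheck path exited early without ever
            // looking at delayedException, so --check reported success
            // despite the mismatch; now the exception is rethrown here.
            if (buildModeCheck && delayedException)
                std::rethrow_exception(delayedException);
        } catch (const std::exception & e) {
            std::cerr << "error: " << e.what() << "\n";
            return 1;
        }
    }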
[1] https://github.com/NixOS/nixpkgs/pull/139238; the underlying issue
was that `nix-prefetch-git` returns different hashes than `fetchgit`
because the latter fetches submodules by default.
This is important if the remote side *does* execute
nix-store/nix-daemon successfully, but stdout is polluted
(e.g. because the remote user's bashrc script prints something to
stdout). In that case we have to shut down the write side to force the
remote nix process to exit.
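Illustrative only (not the Nix code): in POSIX terms, "shutting down the
write side" just means half-closing our end of the connection so the
remote process sees EOF on stdin:

    #include <sys/socket.h>
    #include <unistd.h>

    // Once the handshake is known to be garbage, stop feeding the remote
    // process; EOF on its stdin makes it exit instead of blocking forever.
    void closeWriteSide(int toRemoteFd)
    {
        // Half-close for sockets; for a plain pipe fd, close() does the job.
        if (shutdown(toRemoteFd, SHUT_WR) == -1)
            close(toRemoteFd);
    }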
Instead of
error: serialised integer 7161674624452356180 is too large for type 'j'
we now get
error: 'nix-store --serve' protocol mismatch from 'sshtest@localhost', got 'This account is currently not available.'
Fixes https://github.com/NixOS/nixpkgs/issues/37287.
Before this commit, the DNS lookup in preloadNSS would still go through
nscd. This did not have the effect of loading nss_dns.so as expected
(nss_dns.so being out of reach from within the sandbox).
Should the LOCALDOMAIN environment variable be defined, NSS will
completely avoid nscd and do its DNS resolution on its own.
By temporarily setting the LOCALDOMAIN variable before calling into
NSS, we can force NSS to load the shared libraries as expected.
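A minimal sketch of the approach (the real preloadNSS in the Nix
sources differs in the details, and the hostname below is just a
placeholder):

    #include <cstdlib>
    #include <netdb.h>
    #include <string>
    #include <sys/socket.h>
    #include <sys/types.h>

    // Force glibc to dlopen() its NSS modules (including nss_dns.so) in the
    // parent process, before the build sandbox makes them unreachable.
    static void preloadNSS()
    {
        // With LOCALDOMAIN set, NSS bypasses nscd and resolves on its own,
        // which is what actually loads the shared libraries.
        const char * old = getenv("LOCALDOMAIN");
        std::string saved = old ? old : "";
        bool hadOld = old != nullptr;

        setenv("LOCALDOMAIN", "invalid", 1 /* overwrite */);

        struct addrinfo * res = nullptr;
        // The lookup itself is expected to fail; only the side effect matters.
        if (getaddrinfo("this.name.does.not.need.to.resolve.invalid.", nullptr,
                        nullptr, &res) == 0)
            freeaddrinfo(res);

        // Restore the caller's environment.
        if (hadOld) setenv("LOCALDOMAIN", saved.c_str(), 1);
        else unsetenv("LOCALDOMAIN");
    }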
Fixes #5089
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
Store paths are only allowed to contain a limited subset of the
alphabet, which doesn’t include `!`. So don’t create lockfiles that
contain this `!` character, as that would otherwise confuse (and break)
the GC.
Fix #5176
When doing e.g.
nix-build -A package --keep-failed --option \
builders \
'ssh://mfhydra?remote-store=/home/bosch/store x86_64-linux - 10 4 big-parallel'
this doesn't work properly because the `keep-failed` build setting is
ignored by the remote builder.
I changed this behavior by passing `settings.keepFailed` through the
serve protocol to remote machines to make sure that I can inspect the
build directory (which is particularly helpful when I have to look at a
`config.log` from a failed build, for instance).
Using `$(libdir)` when installing .pc files looks like a more generic
solution. For example, it will work for distributions like RHEL or
Fedora where .pc files are installed in `/usr/lib64/pkgconfig`.
This replaces the O(n) search in our insert code with an O(log n)
lookup. It also makes removing waitees easier, as we can use the
extract method provided by the set class.
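A small standalone illustration, assuming the waitees are kept in a
`std::set` (the `Goal` type here is a hypothetical stand-in):

    #include <cassert>
    #include <memory>
    #include <set>

    struct Goal { int id; };
    using GoalPtr = std::shared_ptr<Goal>;

    // std::set gives O(log n) insert/lookup instead of scanning a list.
    using Goals = std::set<GoalPtr>;

    int main()
    {
        Goals waitees;
        auto g = std::make_shared<Goal>(Goal{1});

        waitees.insert(g);              // O(log n), duplicates rejected
        assert(waitees.count(g) == 1);  // O(log n) lookup

        // Removing a waitee: extract() unlinks the node and hands it back,
        // so it can be reused or simply dropped.
        auto node = waitees.extract(g);
        assert(!node.empty() && waitees.empty());
    }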
Previously the code only ensured that the isBase32 array would be
initialised once in a single-threaded context. If two threads happened
to call the function before the initialisation was completed, both of
them would perform the initialisation step. This allowed for a race
condition where one thread might already be done with the
initialisation while the other thread sets all the fields to false
again. For a brief moment the base32 detection would then produce
false negatives.
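One way to make this race-free is to rely on C++11's guarantee that a
function-local static is initialised exactly once, even with concurrent
callers; a sketch of that approach (not necessarily the exact fix that
was applied):

    #include <array>
    #include <string_view>

    // Nix's base-32 alphabet (digits plus lowercase letters minus e, o, t, u).
    constexpr std::string_view base32Chars = "0123456789abcdfghijklmnpqrsvwxyz";

    static bool isBase32Char(char c)
    {
        // The lambda runs exactly once, under the compiler-generated
        // initialisation guard; no thread can observe a half-built table.
        static const std::array<bool, 256> table = [] {
            std::array<bool, 256> chars{};  // value-initialised to all false
            for (char ch : base32Chars)
                chars[(unsigned char) ch] = true;
            return chars;
        }();
        return table[(unsigned char) c];
    }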
The experimental features are, well, experimental, and shouldn’t be
carelessly and transparently enabled.
Besides, some (`ca-derivations` at least) need to be enabled at startup
in order to work properly.
So it’s better to just require that the daemon be started with the
right `experimental-features` option.
Fix #5017
This adds a new store operation 'addMultipleToStore' that reads a
number of NARs and ValidPathInfos from a Source, allowing any number
of store paths to be copied in a single call. This is much faster on
high-latency links when copying a lot of small files, like .drv
closures.
For example, on a connection with a 50 ms delay:
Before:
$ nix copy --to 'unix:///tmp/proxy-socket?root=/tmp/dest-chroot' \
/nix/store/90jjw94xiyg5drj70whm9yll6xjj0ca9-hello-2.10.drv \
--derivation --no-check-sigs
real 0m57.868s
user 0m0.103s
sys 0m0.056s
After:
real 0m0.690s
user 0m0.017s
sys 0m0.011s
With this, we don't have to copy the entire .drv closure to the
destination store ahead of time (or at all). Instead, buildPaths()
reads .drv files from the eval store and copies inputSrcs to the
destination store if it needs to build a derivation.
Issue #5025.
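A rough sketch of what the new operation's interface could look like on
the Store class (the signature is an approximation, not necessarily the
exact one in the Nix sources):

    // 'source' yields a path count followed by (ValidPathInfo, NAR) pairs,
    // so the whole transfer happens in a single request instead of one
    // round trip per store path.
    virtual void addMultipleToStore(
        Source & source,
        RepairFlag repair = NoRepair,
        CheckSigsFlag checkSigs = CheckSigs);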