This ensures any started processes can't write to /nix/store (except
during builds). This partially reverts 01d07b1e, which happened because
of #2646.
The problem only happened after Nix had downloaded anything, which made
me suspect the download thread. The problem turns out to be:
"A process can't join a new mount namespace if it is sharing
filesystem-related attributes with another process"; in this case, the
other "process" is the curl thread.
Ideally, we might kill it before spawning the shell process, but it's
inside a static variable in the getFileTransfer() function. So
instead, stop it from sharing FS state using unshare(). A strategy
such as the one from #5057 (single-threaded chroot helper binary) is
also very much on the table.
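For illustration, a minimal sketch (assuming Linux/glibc; not the actual Nix call site) of a thread detaching its filesystem attributes with unshare(2):
```cpp
// Minimal sketch, assuming Linux/glibc (g++ defines _GNU_SOURCE by default).
// A worker thread detaches its filesystem attributes so that other threads
// can still enter new mount namespaces later on.
#include <sched.h>      // unshare, CLONE_FS
#include <cstdio>
#include <thread>

int main()
{
    std::thread downloader([] {
        if (unshare(CLONE_FS) == -1)
            perror("unshare(CLONE_FS)");
        // ... long-running transfer work (e.g. the curl loop) would go here ...
    });
    downloader.join();
}
```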
Fixes #4337.
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let
x = builtins.fetchMercurial x;
in
x
```
- Before:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
at /data/github/kamadorueda/test/default.nix:2:7:
1| let
2| x = builtins.fetchMercurial x;
| ^
3| in
```
Mentions: #3505
We can actually just load NSS ourselves and call into it to configure it;
we don't need to run a dummy query just to have NSS load nss_dns
as a side effect.
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
This fixes a bug in the garbage collector where, if a path
/nix/store/abcd-foo is valid but we call
isValidPath("/nix/store/abcd-foo.lock") first, a negative entry
for /nix/store/abcd is added to pathInfoCache, so /nix/store/abcd-foo
is subsequently considered invalid and deleted.
(where "referrers" includes the reverse of derivation outputs and
derivers). Now we do a full traversal to see whether we can reach any
root. If not, all the paths reached can be deleted.
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.
This reverts some parts of commit
8430a8f086 which was trying to rethrow
some exceptions while we weren’t in the context of a `catch` block,
causing some weird “terminate called without an active exception”
errors.
Fix #5368
In https://github.com/NixOS/nix/pull/5350 we noticed link failures in
pkgsStatic.nixUnstable. Adding an explicit dependency on libutil fixes
the libstore-tests linking.
When I stop a download with Ctrl-C in a `nix repl` of a flake, the REPL
refuses to do any other downloads:
nix-repl> builtins.getFlake "nix-serve"
[0.0 MiB DL] downloading 'https://api.github.com/repos/edolstra/nix-serve/tarball/e9828a9e01a14297d15ca41 error: download of 'e9828a9e01' was interrupted
[0.0 MiB DL]
nix-repl> builtins.getFlake "nix-serve"
error: interrupted by the user
[0.0 MiB DL]
To fix this issue, two changes were necessary:
* Reset the global `_isInterrupted` variable: even though a single
operation was aborted, it should still be possible to continue the
session.
* Recreate a `fileTransfer`-instance if the current one was shut down by
an abort.
We now build the context (so this has the side-effect of making
builtins.{path,filterSource} work on derivation outputs, if IFD is
enabled) and then check that the path has no references (which is what
we really care about).
The boolean is only used to determine whether the formals are set to a
non-null pointer in all our cases. We can get rid of that allocation and
instead just compare the pointer value with NULL, saving up to
sizeof(bool) plus platform-specific alignment per ExprLambda instance.
Probably not a lot of memory, but perhaps a few kilobytes with nixpkgs?
This also gets rid of a potential issue with dereferencing formals based
on the value of the boolean, which didn't have to be consistent with the
formals pointer but was in all our cases.
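A minimal sketch of the idea, with simplified names rather than the actual Nix AST types:
```cpp
// Sketch only: treat a null `formals` pointer as "no formals" instead of
// keeping a separate bool next to it.
struct Formals { /* ... */ };

struct ExprLambda
{
    Formals * formals = nullptr;   // previously: Formals * formals; bool hasFormals;

    bool hasFormals() const { return formals != nullptr; }
};
```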
9c766a40cb broke logging from the
daemon, because commonChildInit is called when starting the build hook
in a vfork, so it ends up resetting the parent's logger. So don't
vfork.
It might be best to get rid of vfork altogether, but that may cause
problems, e.g. when we call an external program like git from the
evaluator.
Before this change, when building the whole system with
`contentAddressedByDefault = true;` we get many uninformative messages:
$ nix build -f nixos system --keep-going
...
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-python'; cross fingers
error: 2 dependencies of derivation '/nix/store/...-hub-2.14.2.drv' failed to build
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-man'; cross fingers
...
Let's downgrade these messages to debug().
I had started the trend of doing `std::visit` by value (because a type
error once misled me into thinking that was the only form that
existed). While the optimizer should in principle be able to deal with
the extra copying or extra indirection once the lambdas are inlined,
sticking with by-reference is the conventional default. I hope this
might even improve performance.
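For illustration, a generic sketch (not Nix code) of visiting a variant with the lambda parameter taken by reference:
```cpp
// Sketch: std::visit with the visited value taken by (const) reference,
// avoiding a copy of potentially large alternatives.
#include <iostream>
#include <string>
#include <variant>

int main()
{
    std::variant<int, std::string> v = std::string("hello");

    std::visit([](const auto & x) {     // by reference instead of by value
        std::cout << x << "\n";
    }, v);
}
```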
I found it somewhat confusing to have an error like
error: attribute 'getFlake' missing
if the required experimental feature (`flakes`) is not enabled. Instead,
I'd expect Nix to throw an error just as it does when using e.g. `nix
flake` without `flakes` being enabled.
With this change, the error looks like this:
$ nix-instantiate -E 'builtins.getFlake "nixpkgs"'
error: Cannot call 'builtins.getFlake' because experimental Nix feature 'flakes' is disabled. You can enable it via '--extra-experimental-features flakes'.
at «string»:1:1:
1| builtins.getFlake "nixpkgs"
| ^
I didn't use `settings.requireExperimentalFeature` here on purpose
because this doesn't contain a position. Also, it doesn't seem as if we
need to catch the error and check for the missing feature here since
this already happens at evaluation time.
This actually bit me quite recently in `nixpkgs` because I assumed that
`nix-build --check` would also error out if hashes no longer match[1],
and so I wrongly concluded that I couldn't reproduce the mismatch error.
The fix is rather simple: during the output registration a so-called
`delayedException` is instantiated, e.g. if a FOD hash mismatch occurs.
However, in case of `nix-build --check` (or `--rebuild` in case of `nix
build`), the code-path where this exception is thrown will never be
reached.
By adding that check to the if-clause that causes an early exit in case
of `bmCheck`, the issue is gone. Also added a (previously failing)
test-case to demonstrate the problem.
[1] https://github.com/NixOS/nixpkgs/pull/139238, the underlying issue
was that `nix-prefetch-git` returns different hashes than `fetchgit`
because the latter one fetches submodules by default.
This fixes
$ nix path-info -r $(type -P ls)
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
/nix/store/5d821pjgzb90lw4zbg6xwxs7llm335wr-libunistring-0.9.10
...
/nix/store/mrv4y369nw6hg4pw8d9p9bfdxj9pjw0x-acl-2.3.0
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
Also, output the paths in topologically sorted order like we used to.
This is important if the remote side *does* execute
nix-store/nix-daemon successfully, but stdout is polluted
(e.g. because the remote user's bashrc script prints something to
stdout). In that case we have to shut down the write side to force the
remote nix process to exit.
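A rough sketch of the idea, using a hypothetical helper rather than the actual SSH store code:
```cpp
// Sketch: `toFd` is a hypothetical descriptor for the remote process's stdin.
// Half-close a socket with shutdown(), or just close a pipe's write end, so
// the remote read() returns 0 (EOF) and the process exits.
#include <sys/socket.h>
#include <unistd.h>

void closeWriteSide(int toFd, bool isSocket)
{
    if (isSocket)
        shutdown(toFd, SHUT_WR);
    else
        close(toFd);
}
```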
Instead of
error: serialised integer 7161674624452356180 is too large for type 'j'
we now get
error: 'nix-store --serve' protocol mismatch from 'sshtest@localhost', got 'This account is currently not available.'
Fixes https://github.com/NixOS/nixpkgs/issues/37287.
Previously, type or coercion errors for string interpolation, path
interpolation, and plus expressions were always reported at the
beginning of the outer expression. This leads to confusing evaluation
error messages, making it hard to accurately diagnose and then fix the
error.
For example, errors were reported as follows.
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
This commit changes the ExprConcatStrings expression vector to store a
sequence of expressions *and* their expansion locations so that error
locations can be reported accurately. For interpolation, the error is
reported at the beginning of the entire `${foo}`, not at the beginning
of `foo` because I thought this was slightly clearer. The previous
errors are now reported as:
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
The error is reported at this kind of precise location even for
multi-line indented strings.
This probably helps with at least some of the cases mentioned in #561
It's now disabled by default for the following:
* 'nix search' (this was already implied by read-only mode)
* 'nix flake show'
* 'nix flake check', but only on the hydraJobs output
Without this, flakes within the same tree and same lock data will have
the same fingerprint and the eval cache for one flake will be
incorrectly used for another.
Alternative to #4639. You can still read flake.lock, but at least in
reproducible workflows like NixOS configurations where you require a
non-dirty tree, evaluation will fail because there is no rev.
With --no-write-lock-file, it's possible that flake.lock is out of
sync with the actual inputs used by the evaluation. So doing fromJSON
(readFile ./flake.lock) will give wrong results.
Fixes #4639.
Before this commit, the dns lookup in preloadNSS would still go through
nscd. This did not have the effect of loading nss_dns.so as expected
(nss_dns.so being out of reach from within the sandbox).
Should the LOCALDOMAIN environment variable be defined, NSS will completely
avoid nscd and will do its DNS resolution on its own.
By temporarily setting the LOCALDOMAIN variable before calling into NSS, we
can force NSS to load the shared libraries as expected.
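A sketch of the trick with a hypothetical helper (the real preloadNSS in Nix is more involved, and the dummy hostname here is made up):
```cpp
// Sketch only. Setting LOCALDOMAIN makes glibc's NSS bypass nscd, so the
// dummy lookup forces nss_dns.so to be loaded into this process.
#include <cstdlib>
#include <netdb.h>
#include <string>

void preloadNssDns()
{
    const char * old = getenv("LOCALDOMAIN");
    std::string saved = old ? old : "";
    bool hadOld = old != nullptr;

    setenv("LOCALDOMAIN", "invalid", 1);

    addrinfo * res = nullptr;
    if (getaddrinfo("dummy.invalid.", nullptr, nullptr, &res) == 0)  // hypothetical dummy name
        freeaddrinfo(res);

    if (hadOld) setenv("LOCALDOMAIN", saved.c_str(), 1);
    else unsetenv("LOCALDOMAIN");
}
```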
Fixes #5089
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
In dry-run mode, new derivations can't be created, so running the command on anything that has not been evaluated before results in an error message of the form `don't know how to build these paths (may be caused by read-only store access)`.
For comparison, the classic `nix-build --dry-run` doesn't use read-only mode.
Closes #1795
(cherry picked from commit 54525682df707742e58311c32e9c9cb18de1e31f)
If the store path contains a flake, this means that a command like
"nix path-info /path" will show info about /path, not about the
default output of the flake in /path. If you want the latter, you can
explicitly ask for it by doing "nix path-info path:/path".
Fixes #4568.
Store paths are only allowed to contain a limited subset of the
alphabet, which doesn’t include `!`. So don’t create lockfiles that
contain this `!` character as that would otherwise confuse (and break)
the gc.
Fix #5176
When doing e.g.
nix-build -A package --keep-failed --option \
builders \
'ssh://mfhydra?remote-store=/home/bosch/store x86_64-linux - 10 4 big-parallel'
this doesn't work properly because the `keep-failed` setting is ignored.
I changed this behavior by passing `settings.keepFailed` through the
serve protocol to remote machines to make sure that I can inspect the
build directory (which is particularly helpful when I have to look at a
`config.log` from a failed build, for instance).
This fixes a use-after-free bug:
1. s = new EvalState();
2. callFlake()
3. static vCallFlake now references s
4. delete s;
5. s2 = new EvalState();
6. callFlake()
7. static vCallFlake still references s
8. crash
Nix 2.3 did not have a problem with recreating EvalState.
This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
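A minimal sketch of what such a never-null wrapper could look like (illustrative only; the actual ptr<T> in Nix may differ):
```cpp
// Sketch: a non-owning pointer wrapper that is checked to be non-null on
// construction, so later dereferences can't silently be null.
#include <stdexcept>

template<typename T>
class ptr
{
    T * p;
public:
    explicit ptr(T * p) : p(p)
    {
        if (!p) throw std::invalid_argument("null pointer not allowed");
    }
    T & operator*() const { return *p; }
    T * operator->() const { return p; }
};
```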
Closes #4895, closes #4893, closes #5127, closes #5113.
“packages” was probably meant to be “packages.${system}.” but that
is already listed in `getDefaultFlakeAttrPathPrefixes` in `installables`,
which is probably why no one noticed it was broken.
It currently fails with the following error:
error: flake 'git+file://…' does not provide attribute 'devShells.x86_64-linuxhaskell', 'packages.x86_64-linux.haskell', 'legacyPackages.x86_64-linux.haskell' or 'haskell'
For git+file and path flakes, chdir to the flake directory so that phases
that expect to be in the flake directory can run.
Fixes https://github.com/NixOS/nix/issues/3976
Using `$(libdir)` when installing .pc files looks like a more generic
solution. For example, it will work for distributions like RHEL or
Fedora where .pc files are installed in `/usr/lib64/pkgconfig`.
This replaces the O(n) search in our insert code with an O(log n)
lookup. It also makes removing waitees easier, as we can use the
extract method provided by the set class.
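For illustration, the relevant std::set operations in a generic sketch (not the actual goal/waitee types):
```cpp
// Sketch: a std::set gives O(log n) insertion and lookup, and extract()
// removes a node without copying the stored element.
#include <cassert>
#include <set>

int main()
{
    std::set<int> waitees;

    waitees.insert(42);               // O(log n), no linear scan
    assert(waitees.count(42) == 1);   // O(log n) lookup

    auto node = waitees.extract(42);  // unlinks the node; element is moved, not copied
    assert(!node.empty());
    assert(waitees.empty());
}
```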
Previously the code only ensured that the isBase32 array would be
initialised once in a single-threaded context. If two threads happened to
call the function before the initialisation was complete, both of them
would perform the initialisation. This allowed a race condition where
one thread might already be done with the initialisation while the other
thread was still setting all the fields to false again. For a brief
moment the base32 detection would then produce false negatives.
`nix-shell --pure`, when applied to a non-stdenv derivation, doesn't seem
to clear PATH; it expects the stdenv/setup file to do so.
This adds an explicit `unset PATH` in nix-build.cc (nix-shell) itself, so
that it's not reliant on stdenv/setup anymore.
This does not break impure nix-shell, since PATH is saved in the
variable `p` earlier in the bash rc file.
fixes #5092
Previously, despite having a boolean that tracked initialization, the
decode characters were "calculated" every single time a base64 string
was decoded.
With this change we only initialize the decode array once in a
thread-safe manner.
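A sketch of one way to do that one-time, thread-safe initialisation (illustrative; the table here uses the standard base64 alphabet and is not necessarily identical to Nix's internal one):
```cpp
// Sketch: the local static is initialised exactly once, and C++11 guarantees
// that this initialisation is thread-safe.
#include <array>
#include <string>

static const std::array<int, 256> & base64DecodeChars()
{
    static const std::array<int, 256> table = [] {
        std::array<int, 256> t;
        t.fill(-1);
        const std::string chars =
            "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
        for (size_t i = 0; i < chars.size(); ++i)
            t[(unsigned char) chars[i]] = (int) i;
        return t;
    }();
    return table;
}
```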
The experimental features are, well, experimental, and shouldn’t be
carelessly and transparently enabled.
Besides, some (`ca-derivations` at least) need to be enabled at startup
in order to work properly.
So it’s better to just require that the daemon be started with the right
`experimental-features` option.
Fix #5017
This extends https://github.com/NixOS/nix/pull/4978 with support for
SCP-like URLs in expressions like
```nix
builtins.fetchGit {
url = "git@github.com:NixOS/nix.git";
ref = "master";
}
```
This adds a new store operation 'addMultipleToStore' that reads a
number of NARs and ValidPathInfos from a Source, allowing any number
of store paths to be copied in a single call. This is much faster on
high-latency links when copying a lot of small files, like .drv
closures.
For example, on a connection with a 50 ms delay:
Before:
$ nix copy --to 'unix:///tmp/proxy-socket?root=/tmp/dest-chroot' \
/nix/store/90jjw94xiyg5drj70whm9yll6xjj0ca9-hello-2.10.drv \
--derivation --no-check-sigs
real 0m57.868s
user 0m0.103s
sys 0m0.056s
After:
real 0m0.690s
user 0m0.017s
sys 0m0.011s
Otherwise I get a compiler error when building for NetBSD:
src/libutil/util.cc: In function 'void nix::_deletePath(const Path&, uint64_t&)':
src/libutil/util.cc:438:17: error: base operand of '->' is not a pointer
438 | AutoCloseFD dirfd(open(dir.c_str(), O_RDONLY));
| ^~~~~
src/libutil/util.cc:439:10: error: 'dirfd' was not declared in this scope
439 | if (!dirfd) {
| ^~~~~
src/libutil/util.cc:444:17: error: 'dirfd' was not declared in this scope
444 | _deletePath(dirfd.get(), path, bytesFreed);
| ^~~~~
With this, we don't have to copy the entire .drv closure to the
destination store ahead of time (or at all). Instead, buildPaths()
reads .drv files from the eval store and copies inputSrcs to the
destination store if it needs to build a derivation.
Issue #5025.
In particular, this now works:
$ nix path-info --eval-store auto --store https://cache.nixos.org nixpkgs#hello
Previously this would fail as it would try to upload the hello .drv to
cache.nixos.org. Now the .drv is instantiated in the local store, and
then we check for the existence of the outputs in cache.nixos.org.
It supports functions as well. Also change `package` to
`derivation` because it operates at the language level and does
not open the derivation (which would be useful but not nearly
as much).
It does not operate on a derivation and does not return a
derivation path. Instead it works at the language level,
where a distinct term "package" is more appropriate to
distinguish the parent object of `meta.position`; an
attribute which doesn't even make it into the derivation.
Some people want to avoid using registries at all on their system; instead
of having to add --no-registries to every command, this commit allows
setting use-registries = false in the config. --no-registries is still allowed
everywhere it was allowed previously, but is now deprecated.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
- This can legitimately happen (for example because of non-determinism
causing a build-time dependency to be kept or not as a runtime
reference)
- Because of older Nix versions, it can happen that we encounter a
realisation with an (erroneously) empty set of dependencies, in which
case we don’t want to fail, but just warn the user and try to fix it.
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the nix subprocesses in the repl
That way the whole configuration (including any
`experimental-features`, a possible `--store` option, and so on) will
be made available.
This is required, for example, to make `nix repl` work with a custom
`--store`.
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the post-build-hook.
That way the whole configuration (including any
`experimental-features`, a possible `--store` option, and so on) will
be made available to the hook.
Previously, '--delete-older-than 10' deleted the generations older than a single day, and '--delete-older-than 12m' deleted all generations older than 12 days.
This change makes it throw on those invalid inputs, and gives an example of a valid input.
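A sketch of what stricter parsing could look like, using a hypothetical helper (the actual nix-env option handling differs in detail):
```cpp
// Sketch: accept only "<positive integer>d" and throw on anything else.
#include <stdexcept>
#include <string>

int parseOlderThanDays(const std::string & s)
{
    std::size_t pos = 0;
    int days = 0;
    try {
        days = std::stoi(s, &pos);
    } catch (...) {
        pos = 0;
    }
    if (pos == 0 || pos != s.size() - 1 || s.back() != 'd' || days < 1)
        throw std::invalid_argument(
            "invalid number of days specifier '" + s + "', expected something like '14d'");
    return days;
}
```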
We need to support it for the “old” fetch* functions for backwards
compatibility, but we don’t need it for fetchTree (as it’s a new
function).
Given that changing the `name` messes up the content hashing, we can
just forbid passing a custom `name` argument to it.
Before this commit, nixConfig.flake-registry didn't have any real effect
on the evaluation, since the config was applied after the inputs were
evaluated. Change this behavior: apply the config at the beginning of
flake::lockFile.
Pass the current experimental features using `NIX_CONFIG` to the various
Nix subprocesses that `nix repl` invokes.
This is quite a hack, but having `nix repl` call Nix with a subprocess
is a hack already, so I guess that’s fine.
`nix develop` is getting bash from an (assumed existing) `nixpkgs`
flake. However, when doing so, it reuses the `lockFlags` passed to the
current flake, including the `--input-overrides` and `--input-update`
which generally don’t make sense anymore at that point (and trigger a
warning because of that).
Clear these overrides before getting the nixpkgs flake to get rid of the
warning.
Add an access-control list to the realisations in recursive-nix (similar
to the already existing one for store paths), so that we can build
content-addressed derivations in the restricted store.
Fix #4353
Previously, the build system used uname(1) output when it wanted to
check the operating system it was being built for, which meant that it
didn't take into-account cross-compilation when the build and host
operating systems were different.
To fix this, instead of consulting uname output, we consult the host
triple, specifically the third "kernel" part.
For "kernel"s with stable ABIs, like Linux or Cygwin, we can use a
simple ifeq to test whether we're compiling for that system, but for
other platforms, like Darwin, FreeBSD, or Solaris, we have to use a
more complicated check to take into account the version numbers at the
end of the "kernel"s. I couldn't find a way to just strip these
version numbers in GNU Make without shelling out, which would be even
more ugly IMO. Because these checks differ between kernels, and the
patsubst ones are quite fiddly, I've added variables for each host OS
we might want to check to make them easier to reuse.
This way no derivation has to expect that these files are in the `cwd`
during the build. This matters for `nix-shell`, where these files
would have to be inserted into the nix-shell's `cwd`, which can become
problematic with e.g. recursive `nix-shell`.
To remain backwards-compatible, the location inside the build sandbox
will be kept, however using these files directly should be deprecated
from now on.
This is needed to push the adoption of structured attrs[1] forward. It's
now checked whether a `__json` entry exists in the environment map of the
derivation to be opened in a `nix-shell`.
Derivations with structured attributes enabled also make use of a file
named `.attrs.json` containing every environment variable represented as
JSON which is useful for e.g. `exportReferencesGraph`[2]. To
provide an environment similar to the build sandbox, `nix-shell` now
adds a `.attrs.json` to `cwd` (which is mostly equal to the one in the
build sandbox) and removes it using an exit hook when closing the shell.
To avoid leaking internals of the build-process to the `nix-shell`, the
entire logic to generate JSON and shell code for structured attrs was
moved into the `ParsedDerivation` class.
[1] https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/
[2] https://nixos.org/manual/nix/unstable/expressions/advanced-attributes.html#advanced-attributes
Make sure that the derivation we send to the remote builder is exactly
the one that we want to build locally so that the output ids are exactly
the same
Fix #4845
Useful when we're using a daemon with a chroot store, e.g.
$ NIX_DAEMON_SOCKET_PATH=/tmp/chroot/nix/var/nix/daemon-socket/socket nix-daemon --store /tmp/chroot
Then the client can now connect with
$ nix build --store unix:///tmp/chroot/nix/var/nix/daemon-socket/socket?root=/tmp/chroot nixpkgs#hello
That doesn’t really make sense with CA derivations (and wasn’t even
really correct before because of FO derivations, though that probably
didn’t matter much in practice)
Resolve the derivation before trying to load its environment
(essentially reproducing what the build loop does) so that we can
actually access our dependencies (and not just their placeholders).
Fix #4821
Make ca-derivations require a `ca-derivations` machine feature, and make
ca-aware builders expose it.
That way, a network of builders can mix ca-aware and non-ca-aware
machines, and the scheduler will send them to the right place.
When the `keep-going` option is set to `true`, make `nix flake check`
continue as much as it can before failing.
The UI isn’t perfect as it is, since all the lines currently start with a
mostly useless `error (ignored): error:` prefix, but I’m not sure what
the best output would be, so I’ll leave it as it is for the time being.
(This is a bit of a hijack of the `keep-going` flag, as it’s supposed to be
a build-time-only thing. But I think it’s fair to reuse it here.)
Fix https://github.com/NixOS/nix/issues/4450
When adding a path to the local store (via `LocalStore::addToStore`),
ensure that the `ca` field of the provided `ValidPathInfo` does indeed
correspond to the content of the path.
Otherwise any untrusted user (or any binary cache) can add arbitrary
content-addressed paths to the store (as content-addressed paths don’t
need a signature).
Linux is (as far as I know) the only mainstream operating system that
requires linking with libdl for dlopen. On BSD, libdl doesn't exist,
so on non-FreeBSD BSDs linking will currently fail. On macOS, it's
apparently just a symlink to libSystem (macOS libc), presumably
present for compatibility with things that assume Linux.
So the right thing to do here is to only add -ldl on Linux, not to add
it for everything that isn't FreeBSD.
Only considers the closure in terms of `Realisation` and ignores all the
opaque inputs.
Dunno whether that’s the nicest solution; need to think it through a bit.
Align all of the worker protocol with `buildDerivation`, which inlines the
realisations as one opaque JSON blob.
That way we don’t have to bother changing the remote store protocol
when the definition of `Realisation` changes, as long as we keep the
JSON backwards-compatible.
Move the `closure` logic of `computeFSClosure` to its own (templated) function.
This doesn’t bring much by itself (except for the ability to properly
test the “closure” functionality independently from the rest), but it
allows reusing it (in particular for the realisations which will require
a very similar closure computation)
When you have a symlink like:
/tmp -> ./private/tmp
you need to resolve ./private/tmp relative to /tmp’s dir: ‘/’. Unlike
any other path output by dirOf, / ends with a slash. We don’t want
trailing slashes here, since we will append another slash in the next
component, so clear s as we would if it were a symlink to an absolute
path.
This should fix at least part of the issue in
https://github.com/NixOS/nix/issues/4822, but we will need confirmation
that it actually fixes the problem before closing it.
Introduced in f3f228700a.
Accidentally removed in ca96f52194. This
caused `nix run` to systematically fail with
```
error: app program '/nix/store/…' is not in the Nix store
```
~/.bashrc should be sourced first in the rc script so that PATH and
other env vars take precedence over the bashrc PATH.
Also, in my bashrc I alias rm as:
alias rm='rm -Iv'
To avoid running this alias (which prints ‘removed '/tmp/nix-shell.*'’),
we can just prefix rm with `command`.
For whatever reason, many programs trying to access SystemVersion.plist
also open SystemVersionCompat.plist; this includes Python code and
coreutils’ `cat(1)` (but not the native macOS `/bin/cat`). Illustrative
`dtruss(1m)` output:
open("/System/Library/CoreServices/SystemVersion.plist\0", 0x0, 0x0) = 3 0
open("/System/Library/CoreServices/SystemVersionCompat.plist\0", 0x0, 0x0) = 4 0
I assume this is a Big Sur change relating to the 10.16.x/11.x
version compatibility divide and that it’s something along the lines of
a hook inside libSystem.
Fixes a lot of sandboxed package builds under Big Sur.
When we don’t have enough free job slots to run a goal, we put it in
the waitForBuildSlot list & unlock its output locks. This will
continue from where we left off (tryLocalBuild). However, we need the
locks to be reacquired when/if the goal ever restarts. So we need to
send it back through tryToBuild to reacquire those locks.
I think this bug was introduced in
https://github.com/NixOS/nix/pull/4570. It leads to some builds
starting without proper locks.
Similar to the nar-info disk cache (and using the same db).
This makes rebuilds much faster.
- This works regardless of the ca-derivations experimental feature.
I could modify the logic to not touch the db if the flag isn’t there,
but given that this is a trash-able local cache, it doesn’t seem to be
really worth it.
- We could unify the `NARs` and `Realisation` tables to only have one
generic kv table. This is left as an exercise to the reader.
- I didn’t update the cache db version number as the new schema just
adds a new table to the previous one, so the db will be transparently
migrated and is backwards-compatible.
Fix #4746
This was previously done in https://github.com/NixOS/nix/pull/4515 but
got clobbered away in https://github.com/NixOS/nix/pull/4594.
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because, while derivations produce every output when
built, you might not have them all if you didn't build the derivation
yourself (for instance, if the store path was fetched from a binary cache).
This uses the outputName provided by DerivationInfo, which appears to
match the first output of the derivation.
I guess I misunderstood John's initial explanation about why wildcards
for outputs are sent to older stores[1]. My `nix-daemon` from 2021-03-26
also has version 1.29, but misses the wildcard[2]. So bumping seems to
be the right call.
[1] https://github.com/NixOS/nix/pull/4759#issuecomment-830812464
[2] 255d145ba7
Starting in macOS 11, the on-disk dylib bundles are no longer available,
but nixpkgs needs to be able to keep compatibility with older versions
that require `/usr/lib/libSystem.B.dylib` in `__impureHostDeps`. Allow
it to keep backwards compatibility with these versions by marking these
dependencies as optional.
Fixes #4658.
As described in #4745 it's otherwise fairly hard to understand where
this is coming from. Say you have an expression which uses e.g.
`types.package`:
``` nix
{ outputs = { self, nixpkgs }: {
packages.x86_64-linux.hello = let
foo = nixpkgs.lib.evalModules {
modules = [
{
options.foo.bar = with nixpkgs.lib; mkOption { type = types.package; };
}
{
foo.bar = ./.;
}
];
};
in builtins.trace foo.config.foo.bar.outPath nixpkgs.legacyPackages.x86_64-linux.hello;
defaultPackage.x86_64-linux = self.packages.x86_64-linux.hello;
};
}
```
Then you'll get an error trace like this:
```
error: 'builtins.storePath' is not allowed in pure evaluation mode
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:328:15:
327| let
328| path' = builtins.storePath path;
| ^
329| res =
… while evaluating the attribute 'config.foo.bar.outPath'
at /nix/store/p4h2x6r80njkb0j2rc1xjhhl99yri3zb-source/lib/attrsets.nix:332:11:
331| name = sanitizeDerivationName (builtins.substring 33 (-1) (baseNameOf path'));
332| outPath = path';
| ^
333| outputs = [ "out" ];
… while evaluating the attribute 'packages.x86_64-linux.hello'
at /nix/store/6c1rfsqzrhjw1235palzjmf5vihcpci7-source/flake.nix:3:5:
2| { outputs = { self, nixpkgs }: {
3| packages.x86_64-linux.hello = let
| ^
4| foo = nixpkgs.lib.evalModules {
```
Fixes #4745
They are equivalent according to
<https://spec.commonmark.org/0.29/#hard-line-breaks>,
and the trailing spaces tend to be a pain (because they make git
complain, editors tend to want to remove them, the `.editorconfig`
actually specifies that, etc.).
This function doesn't support all compression methods (i.e. 'none' and
'br') so it shouldn't be exposed.
Also restore the original decompress() as a wrapper around
makeDecompressionSink().
The S3 store relies on the ability to be able to decompress things with
an empty method, because it just passes the value of the Content-Encoding
directly to decompress.
If the file is not compressed, then this will cause the compression
routine to get confused.
This caused NixOS/nixpkgs#120120.
This makes Nix look up the derivations of paths when they are passed as
store paths. So:
$ nix path-info --derivation /nix/store/0pisd259nldh8yfjvw663mspm60cn5v8-hello-2.10
now gives
/nix/store/qp724w90516n4bk5r9gfb37vzmqdh3z7-hello-2.10.drv
instead of "".
If no deriver is available, Nix now errors instead of silently
ignoring that argument.
In case of pure input-addressed derivations, the build loop doesn't
guarantee that the realisations are stored in the db (if the output paths
are already there or can be substituted, the realisations won't be
registered). This caused `nix shell` to fail in some cases because it
was assuming that the realisations always existed.
A better (but more involved) fix would probably be to ensure that we
always register the realisations, but in the meantime this patches over
the surface issue.
Fix #4721
I think that it's not very helpful to get "cached failures" in a wrong
`flake.nix`. This can be very confusing when debugging a Nix expression.
See for instance NixOS/nixpkgs#118115.
In fact, the eval cache allows a forced reevaluation which is used for
e.g. `nix eval`.
This change makes sure that this is the case for `nix build` as well. So
rather than
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: cached failure of attribute 'defaultPackage.x86_64-linux'
the evaluation of already-evaluated (and failed) attributes looks like
this now:
λ ma27 [~/Projects/exp] → ../nix/outputs/out/bin/nix build -L --rebuild --experimental-features 'nix-command flakes'
error: attribute 'hell' missing
at /nix/store/mrnvi9ss8zn5wj6gpn4bcd68vbh42mfh-source/flake.nix:6:35:
5|
6| packages.x86_64-linux.hello = nixpkgs.legacyPackages.x86_64-linux.hell;
| ^
7|
(use '--show-trace' to show detailed location information)
When working on some more complex Nix code, there are sometimes rather
unhelpful or misleading error messages, especially if coercion errors are
thrown.
This patch is a first steps towards improving that. I'm happy to file
more changes after that, but I'd like to gather some feedback first.
To summarize, this patch does the following things:
* Attrsets (a.k.a. `Bindings` in `libexpr`) now have a `Pos`. This is
helpful e.g. to identify which attribute-set in `listToAttrs` is
invalid.
* The `Value`-struct has a new method named `determinePos` which tries
to guess the position of a value and falls back to a default if that's
not possible.
This can be used to provide better messages if a coercion fails.
* The new `determinePos`-API is used by `builtins.concatMap` now. With
that change, Nix shows the exact position in the error where a wrong
value was returned by the lambda.
To make sure it's still obvious that `concatMap` is the problem,
another stack-frame was added.
* The changes described above can be added to every other `primop`, but
first I'd like to get some feedback about the overall approach.
* The position of the `name`-attribute appears in the trace.
* If e.g. `meta` has no `outPath`-attribute, a `cannot coerce set to
string` error will be thrown where `pos` points to `name =` which is
highly misleading.
If there were many top-level goals (which are not destroyed until the
very end), commands like
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' \
/run/current-system --no-check-sigs --substitute-on-destination
could fail with "Too many open files". So now we do some explicit
cleanup from amDone(). It would be cleaner to separate goals from
their temporary internal state, but that would be a bigger refactor.
This avoids an ambiguity where `StorePathWithOutputs { drvPath, {} }`
could mean "build `drvPath`" or "substitute `drvPath`" depending on
context.
It also brings the internals closer in line with the new CLI by
generalizing the `Buildable` type that is used there and already makes
that distinction.
In doing so, relegate `StorePathWithOutputs` to being a type just for
backwards compatibility (CLI and RPC).
These are by no means part of the notion of a store, but rather are
things that happen to use stores. (Or put another way, there's no way
we'd make them virtual methods any time soon.) It's better to move them
out of that too-big class then.
Also, this helps us remove StorePathWithOutputs from the Store interface
altogether next commit.
This fixes builtins.fetchGit { url = ...; ref = "HEAD"; }, that works in
stable nix (v2.3.10), but is broken in nix master:
$ ./result/bin/nix repl
Welcome to Nix version 2.4pre19700101_dd77f71. Type :? for help.
nix-repl> builtins.fetchGit { url = "https://github.com/NixOS/nix"; ref = "HEAD"; }
fetching Git repository 'https://github.com/NixOS/nix'fatal: couldn't find remote ref refs/heads/HEAD
error: program 'git' failed with exit code 128
The documentation for builtins.fetchGit says ref = "HEAD" is the
default, so it should also be supported to explicitly pass it.
I came across this issue because poetry2nix can use ref = "HEAD" in some
situations.
Fixes #4674.
A few versioning mistakes were corrected:
- In 27b5747ca7, Daemon protocol had some
version `>= 0xc` that should have been `>= 0x1c`, or `28` since the
other conditions used decimal.
- In a2b69660a9, legacy SSH gated new CAS
info on version 6, but version 5 in the server. It is now 6
everywhere.
Additionally, legacy ssh was sending over more metadata than the daemon
one was. The daemon now sends that data too.
CC @regnat
Co-authored-by: Cole Helbling <cole.e.helbling@outlook.com>
According to RFC4007[1], IPv6 addresses can have a so-called zone_id
separated from the actual address with `%` as delimiter. In contrast to
Nix 2.3, the version on `master` doesn't recognize it as such:
$ nix ping-store --store ssh://root@fe80::1%18 --experimental-features nix-command
warning: 'ping-store' is a deprecated alias for 'store ping'
error: --- Error ----------------------------------------------------------------- nix
don't know how to open Nix store 'ssh://root@fe80::1%18'
I modified the IPv6 match-regex accordingly to optionally detect this
part of the address. As we don't seem to do anything special with it, I
decided to leave it as part of the URL for now.
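A simplified sketch of such a pattern (not the exact regex used in Nix's URL parser):
```cpp
// Sketch: an IPv6 literal optionally followed by a "%<zone-id>" suffix,
// as allowed by RFC 4007.
#include <cassert>
#include <regex>
#include <string>

int main()
{
    std::regex ipv6("([0-9a-fA-F:]+)(%[0-9A-Za-z]+)?");

    assert(std::regex_match(std::string("fe80::1"), ipv6));
    assert(std::regex_match(std::string("fe80::1%18"), ipv6));   // with zone id
    assert(!std::regex_match(std::string("not an address"), ipv6));
}
```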
Fixes #4490
[1] https://tools.ietf.org/html/rfc4007
I guess the rationale behind the old name was that
`pathInfoIsTrusted(info)` returns `true` iff we would need to blindly
trust the path (because it has no valid signature and `requireSigs` is
set), but I find it to be a really confusing footgun because it's quite
natural to give it the opposite meaning.
When starting a nix-shell with `-i` it was previously possible for it to
silently fail in the scenario where the specified interpreter didn't
exist. This happened due to the `exec` call masking the issue.
With this change we enable `execfail`, which causes the script using
`nix-shell` as interpreter to correctly exit with code 127.
Fixes: #4598
Basically, if a tarball URL is used as a flake input and the URL leads
to a redirect, the final redirect destination is now recorded as the
locked URL.
This allows tarballs under https://nixos.org/channels to be used as
flake inputs. If we, as before, lock on to the original URL it would
break every time the channel updates.
Local git repositories are normally used directly instead of being
cloned. This commit checks whether a repo is bare and, if so, forces a
clone.
Co-authored-by: Théophane Hufschmitt <regnat@users.noreply.github.com>
What happened was that Nix was trying to unconditionally mount these
paths in fixed-output derivations, but since the outer derivation was
pure, those paths did not exist. The solution is to only mount those
paths when they exist.
This separates the scheduling logic (including simple hook pathway) from
the local-store needing code.
This should be the final split for now. I'm reasonably happy with how
it's turning out, even before I'm done moving code into
`local-derivation-goal`. Benefits:
1. This will help "witness" that the hook case is indeed a lot simpler,
and also compensate for the increased complexity that comes from
content-addressed derivation outputs.
2. It also moves us ever so slightly towards a world where we could use
off-the-shelf storage or sandboxing, since `local-derivation-goal`
would be gutted in those cases, but `derivation-goal` should remain
nearly the same.
The new `#if 0` in the new files will be deleted in the following
commit. I keep it here so if it turns out more stuff can be moved over,
it's easy to do so in a way that preserves ordering --- and thus
prevents conflicts.
N.B.
```sh
git diff HEAD^^ --color-moved --find-copies-harder --patience --stat
```
makes nicer output.
This is probably what most people expect it to do. Fixes#3781.
There is a new command 'nix flake lock' that has the old behaviour of
'nix flake update', i.e. it just adds missing lock file entries unless
overridden using --update-input.
This is already used by Hydra, and is very useful when materializing
a remote builder list from service discovery. This allows the service
discovery tool to only sync one file instead of two.
This is technically a breaking change, since attempting to set plugin
files after the first non-flag argument will now throw an error. This
is acceptable given the relative lack of stability in a plugin
interface and the need to tie the knot somewhere once plugins can
actually define new subcommands.
This field used to be a `BasicDerivation`, but this `BasicDerivation`
was downcast to a `Derivation` when needed (implicitly or not), so we
might as well make it a full `Derivation` and upcast it when needed.
This also allows getting rid of a weird duplication in the way we
compute the static output hashes for the derivation. We had to
do it differently and in a different place depending on whether the
derivation was a full derivation or just a basic drv, but we can now do
it unconditionally on the full derivation.
Fix #4559
- Pass it the name of the outputs rather than their output paths (as
these don't exist for ca derivations)
- Get the built output paths from the remote builder
- Register the new received realisations
PR #4240 changed the message of the error that was thrown when an auto-called
function was missing an argument.
However this change also changed the type of the error, from `EvalError`
to a new `MissingArgumentError`. This broke Hydra, which was relying on
an `EvalError` being thrown.
Make `MissingArgumentError` a subclass of `EvalError` to un-break Hydra.