Unless `--precise` is passed, make `nix why-depends` only show the
dependencies between the store paths, without introspecting them to
find the actual references.
This also makes it ~3x faster.
it can be replaced with StringToken if we add another bit of information to
StringToken, namely whether this string should take part in indentation scanning
or not. since all escaping terminates indentation scanning we need to set this
bit only for the non-escaped IND_STRING rule.
this improves performance by about 1%.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.880 s ± 0.048 s [User: 6.809 s, System: 1.643 s]
Range (min … max): 8.781 s … 8.993 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.0 ms ± 2.2 ms [User: 339.8 ms, System: 35.2 ms]
Range (min … max): 371.5 ms … 379.3 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.831 s ± 0.040 s [User: 2.536 s, System: 0.225 s]
Range (min … max): 2.769 s … 2.912 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.832 s ± 0.048 s [User: 6.757 s, System: 1.657 s]
Range (min … max): 8.743 s … 8.921 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.2 ms [User: 332.7 ms, System: 34.7 ms]
Range (min … max): 364.6 ms … 374.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
This is needed to get the path of a derivation that might not exist
(e.g. for 'nix store copy-log').
InstallableStorePath::toDerivedPaths() cannot be used for this because
it calls readDerivation(), so it fails if the store doesn't have the
derivation.
When stderr is not connected to a tty, show "building" and
"substituting" messages, a-la nix-build et al.
Closes https://github.com/NixOS/nix/issues/4402
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
gives about 1% improvement on system eval, a bit less on nix search.
# before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.419 s ± 0.045 s [User: 6.362 s, System: 0.794 s]
Range (min … max): 7.335 s … 7.517 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.921 s ± 0.023 s [User: 2.626 s, System: 0.210 s]
Range (min … max): 2.883 s … 2.957 s 20 runs
# after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.370 s ± 0.059 s [User: 6.333 s, System: 0.791 s]
Range (min … max): 7.286 s … 7.541 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.891 s ± 0.033 s [User: 2.606 s, System: 0.210 s]
Range (min … max): 2.823 s … 2.958 s 20 runs
`nix why-depends` pipes its output into a pager by default.
However, the pager was only started after the first path was printed,
causing that path to be excluded from the pager output.
(Actually the pager was started *inside* the recursive function that was
printing the dependency chain, so a new instance was started at each
level. It’s a little miracle that it worked at all).
Fix #5911
mainly to avoid an allocation and a copy of a string that can be
modified in place (ever since EvalState holds on to the buffer, not the
generated parser itself).
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 570.4 ms ± 2.8 ms [User: 561.3 ms, System: 8.6 ms]
Range (min … max): 564.6 ms … 578.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.4 ms ± 1.3 ms [User: 343.2 ms, System: 31.7 ms]
Range (min … max): 373.4 ms … 378.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.925 s ± 0.006 s [User: 2.704 s, System: 0.219 s]
Range (min … max): 2.910 s … 2.942 s 50 runs
when given a string yacc will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
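as a rough sketch of that contract (names made up, not the actual Nix internals): the caller guarantees two trailing NULs, and the parser merely asserts that they are there instead of copying the input.
```cpp
#include <cassert>
#include <cstddef>
#include <string>

// The buffer handed to the parser must end in two NUL bytes; `size` includes them.
void runParser(const char * buf, size_t size) {
    assert(size >= 2 && buf[size - 1] == '\0' && buf[size - 2] == '\0');
    // ... hand (buf, size) to the generated scanner without copying ...
}

void parse(std::string & text) {
    // EvalState owns the buffer, so appending the two NULs in place is always possible.
    text.append(2, '\0');
    runParser(text.data(), text.size());
}
```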
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
speeds up parsing by ~3%, system builds by a bit more than 1%
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
every stringy token the lexer returns is turned into a Symbol and not
used further, so we don't have to strdup. using a string_view is
sufficient, but due to limitations of the current parser we have to use
a POD type that holds the same information.
gives ~2% on system build, 6% on search, 8% on parsing alone
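a minimal sketch of such a POD token (field names are illustrative): it carries the same pointer/length pair as a string_view, but stays a plain aggregate that the parser's semantic value type can hold.
```cpp
#include <cstddef>
#include <string_view>

// Plain-old-data stand-in for std::string_view, suitable for the parser's
// union-like semantic value type.
struct StringToken {
    const char * p;
    size_t l;
    operator std::string_view() const { return {p, l}; }
};
```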
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 610.6 ms ± 2.4 ms [User: 602.5 ms, System: 7.8 ms]
Range (min … max): 606.6 ms … 617.3 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 430.1 ms ± 1.4 ms [User: 393.1 ms, System: 36.7 ms]
Range (min … max): 428.2 ms … 434.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 3.032 s ± 0.005 s [User: 2.808 s, System: 0.223 s]
Range (min … max): 3.023 s … 3.041 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
there are a few symbols in primops we can create once and pick them out of
EvalState afterwards instead of creating them every time we need them. this
gives almost 1% speedup to an uncached nix search.
there are a couple of places that can easily be converted from using strings to using
string_views instead. gives a slight (~1%) boost to system eval.
# before
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.946 s ± 0.026 s [User: 2.655 s, System: 0.209 s]
Range (min … max): 2.905 s … 2.995 s 20 runs
# after
Time (mean ± σ): 2.928 s ± 0.024 s [User: 2.638 s, System: 0.211 s]
Range (min … max): 2.893 s … 2.970 s 20 runs
this avoids one copy from `s` into `str`, and possibly another copy needed to
construct `s` at the call site. lexical_cast is also more efficient in general.
constructing an ostringstream for non-string concats (like integer addition) is
a small constant cost that we can avoid. for string concats we can keep all the
string temporaries we get from coerceToString and concatenate them in one go,
which saves a lot of intermediate temporaries and copies in ostringstream. we
can also avoid copying the concatenated string again by directly allocating it
in GC memory and moving ownership of the concatenated string into the target
value.
saves about 2% on system eval.
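a simplified sketch of the single-pass concatenation (the GC allocation and Value plumbing are left out; this only shows the shape of the idea):
```cpp
#include <string>
#include <vector>

std::string concatParts(const std::vector<std::string> & parts) {
    size_t total = 0;
    for (auto & p : parts) total += p.size();   // one pass to size the result
    std::string result;
    result.reserve(total);                      // a single allocation, no ostringstream
    for (auto & p : parts) result += p;
    return result;
}
```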
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.837 s ± 0.031 s [User: 2.562 s, System: 0.191 s]
Range (min … max): 2.796 s … 2.892 s 20 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.790 s ± 0.035 s [User: 2.532 s, System: 0.187 s]
Range (min … max): 2.722 s … 2.836 s 20 runs
There already existed a smoke test for the link content length,
but it appears that some corruption is pernicious enough
to replace the file content with zeros while keeping the same length.
--repair-path now goes as far as checking the content of the link,
making it true to its name and actually repairing the path in such
corruption cases.
we don't have to create an ostream sentry object for every character of a JSON
string we write. format a bunch of characters and flush them to the stream all
at once instead.
this doesn't affect small numbers of string characters, but larger numbers of
total JSON string characters written gain a lot. at 1MB of total string written
we gain almost 30%, at 16MB it's almost a factor of 3x. large numbers of JSON
string characters do occur naturally in a nixos system evaluation to generate
documentation (though this is now somewhat mitigated by caching the largest part
of nixos option docs).
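a rough sketch of the buffering idea, assuming a plain std::ostream and only a subset of the JSON escape rules:
```cpp
#include <ostream>
#include <string>
#include <string_view>

void writeJsonString(std::ostream & out, std::string_view s) {
    std::string buf;
    buf.reserve(s.size() + 2);
    buf += '"';
    for (char c : s) {
        switch (c) {
            case '"':  buf += "\\\""; break;
            case '\\': buf += "\\\\"; break;
            case '\n': buf += "\\n";  break;
            default:   buf += c;      break;   // other control chars omitted for brevity
        }
        // flush in big chunks so the stream machinery runs once per chunk,
        // not once per character
        if (buf.size() >= 4096) { out.write(buf.data(), buf.size()); buf.clear(); }
    }
    buf += '"';
    out.write(buf.data(), buf.size());
}
```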
benchmarked with
hyperfine 'nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) {e})"' --warmup 1 -L e 1,4,256,4096,65536
before:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.2 ms, System: 4.0 ms]
Range (min … max): 11.9 ms … 13.1 ms 223 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.5 ms ± 0.2 ms [User: 9.3 ms, System: 3.8 ms]
Range (min … max): 11.9 ms … 13.2 ms 220 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 13.2 ms ± 0.3 ms [User: 9.8 ms, System: 4.0 ms]
Range (min … max): 12.6 ms … 14.3 ms 205 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 24.0 ms ± 0.4 ms [User: 19.4 ms, System: 5.2 ms]
Range (min … max): 22.7 ms … 25.8 ms 119 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 196.0 ms ± 3.7 ms [User: 171.2 ms, System: 25.8 ms]
Range (min … max): 190.6 ms … 201.5 ms 14 runs
after:
Benchmark 1: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 1)"
Time (mean ± σ): 12.4 ms ± 0.3 ms [User: 9.1 ms, System: 4.0 ms]
Range (min … max): 11.7 ms … 13.3 ms 204 runs
Benchmark 2: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4)"
Time (mean ± σ): 12.4 ms ± 0.2 ms [User: 9.2 ms, System: 3.9 ms]
Range (min … max): 11.8 ms … 13.0 ms 214 runs
Benchmark 3: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 256)"
Time (mean ± σ): 12.6 ms ± 0.2 ms [User: 9.5 ms, System: 3.8 ms]
Range (min … max): 12.1 ms … 13.3 ms 209 runs
Benchmark 4: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 4096)"
Time (mean ± σ): 15.9 ms ± 0.2 ms [User: 11.4 ms, System: 5.1 ms]
Range (min … max): 15.2 ms … 16.4 ms 171 runs
Benchmark 5: nix eval --raw --expr "let s = __concatStringsSep \"\" (__genList (_: \"c\") 256); in __toJSON (__genList (_: s) 65536)"
Time (mean ± σ): 69.0 ms ± 0.9 ms [User: 44.3 ms, System: 25.3 ms]
Range (min … max): 67.2 ms … 70.9 ms 42 runs
Previously you had to remember to call value->attrs->sort() after
populating value->attrs. Now there is a BindingsBuilder helper that
wraps Bindings and ensures that sort() is called before you can use
it.
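Not the actual Nix classes, just a minimal sketch of the pattern: the builder is the only way to fill the attribute set, and the only way to get it back out sorts it first.
```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

struct Attr { std::string name; int value; };

class BindingsBuilder {
    std::vector<Attr> attrs;
public:
    void insert(std::string name, int value) {
        attrs.push_back({std::move(name), value});
    }
    // Sorting is guaranteed to have happened by the time anyone sees the result.
    std::vector<Attr> finish() {
        std::sort(attrs.begin(), attrs.end(),
                  [](const Attr & a, const Attr & b) { return a.name < b.name; });
        return std::move(attrs);
    }
};
```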
nixpkgs can save a good bit of eval memory with this primop. zipAttrsWith is
used quite a bit around nixpkgs (e.g. in the form of recursiveUpdate), but the
most costly application for this primop is in the module system. it improves
on the implementation of zipAttrsWith from nixpkgs by not checking an attribute
multiple times if it occurs more than once in the input list, allocating fewer
values and set elements, and generally avoiding temporary objects.
nixpkgs has a more generic version of this operation, zipAttrsWithNames, but
this version is only used once so isn't suitable for being the base of a new
primop. if it were to be used more we should add a second primop instead.
When we check for disappeared overrides, we can get "false positives"
for follows and overrides which are defined in the dependencies of the
flake we are locking, since they are not parsed by
parseFlakeInputs. However, at that point we already know that the
overrides couldn't possibly have been changed if the input itself
hasn't changed (since we check that oldLock->originalRef == *input.ref
for the input's parent). So, to prevent this, only perform this check
when it was possible that the flake changed (e.g. the flake we're
locking, or a new input, or the input has changed and mustRefetch ==
true).
This makes sure that values parsed from TOML have a proper size. Using
e.g. `double` caused issues on i686 where the size of `double` (32bit)
was too small to accommodate some values.
This was already accidentally disabled in ba87b08. It also no longer
appears to be beneficial, and in fact slows things down, e.g. when
evaluating a NixOS system configuration:
elapsed time: median = 3.8170 mean = 3.8202 stddev = 0.0195 min = 3.7894 max = 3.8600 [rejected, p=0.00000, Δ=0.36929±0.02513]
calling GC_malloc for each value is significantly more expensive than
allocating a bunch of values at once with GC_malloc_many. "a bunch" here
is a GC block size, i.e. 16KiB or less.
this gives a 1.5% performance boost when evaluating our nixos system.
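a sketch of the batched allocation, assuming a Boehm GC build that provides GC_malloc_many/GC_NEXT; the real code keeps the free list per EvalState rather than in a global.
```cpp
#include <cstddef>
#include <gc/gc.h>
#include <new>

static void * freeList = nullptr;  // objects returned by GC_malloc_many, linked via GC_NEXT

void * allocValue(size_t size) {
    if (!freeList) {
        // one call hands back a whole list of objects (roughly a GC block's worth)
        freeList = GC_malloc_many(size);
        if (!freeList) throw std::bad_alloc();
    }
    void * p = freeList;
    freeList = GC_NEXT(p);   // unlink the first object from the list
    GC_NEXT(p) = nullptr;    // clear the link word before handing the object out
    return p;
}
```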
tested with
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
# on master
Time (mean ± σ): 3.335 s ± 0.007 s [User: 2.774 s, System: 0.293 s]
Range (min … max): 3.315 s … 3.347 s 50 runs
# with this change
Time (mean ± σ): 3.288 s ± 0.006 s [User: 2.728 s, System: 0.292 s]
Range (min … max): 3.274 s … 3.307 s 50 runs
Add a `_NIX_TRACE_BUILT_OUTPUTS` environment variable that can be set to
a filename in which the result of each build will be logged.
This is intentionally crude and undocumented as it’s only meant to be a
temporary thing to assess the usefulness of CA derivations.
Any other use would need a cleaner re-implementation first.
Make the build of unresolved derivations return the same status as the
resolved one, except in the case of an `AlreadyValid` in which case it
will return `ResolvesToAlreadyValid` to mean that the outputs of the unresolved
derivation weren’t known, but the resolved one is.
When a variable is assigned in the REPL, make sure to remove any possible reference to the old one so that we correctly pick the new one afterwards.
Fix #5706
I downloaded Nix tonight, and immediately broke it by accidentally removing the default binary caching.
After figuring this out, I also failed to fix it properly, due to using the wrong key for Nix's default binary cache
If the diagnostic message would have been clearer about what/where a "signature" for a "substituter" is + comes from, it probably would have saved me a few hours.
Maybe we can save other noobs the same pain?
Before this change, stdout was closed after the pager exits. This is
fine for non-interactive commands where we want to exit right after
the pager exits anyways, but for interactive things (e.g. nix repl)
this breaks the output after we quit the pager.
Keep the initial stdout fd as part of RunPager, and restore it in
RunPager::~RunPager using dup2.
This function is very useful in nixpkgs, but its implementation in Nix
itself is rather slow due to it requiring a lot of attribute set and
list appends.
Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we have passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. This caused
issues when we tried to reuse oldLock but couldn't for some
reason (read: mustRefetch is true), in that case the follows were
resolved incorrectly.
Fix this by passing the correct parent, and adding some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
- Prior to this commit the boundary was exclusive of the
top level flake.
- This is wrong since the top level flake is still a valid
relative reference.
- Now, the check boundary is inclusive of the top level flake.
Signed-off-by: Timothy DeHerrera <tim.deh@pm.me>
Due to missing <atomic> declaration the build fails as:
src/libutil/util.hh:350:24: error: no match for 'operator||' (operand types are 'std::atomic<bool>' and 'bool')
350 | if (_isInterrupted || (interruptCheck && interruptCheck()))
| ~~~~~~~~~~~~~~ ^~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| std::atomic<bool> bool
The manual is generated from default values which are themselves
derived from various sources (cpuid, BIOS settings (KVM), number of
cores), so they are not reproducible. This commit therefore hides
non-reproducible settings from the manual output.
No matter what, we need to resize the buffer to not have any scratch
space after we do the `read`. In the end-of-file case, `got` will be 0
from its initial value.
Before, we forgot to resize in the EOF case with the break. Yes, we know
we didn't receive any data in that case, but we still have the scratch
space to undo.
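Illustrative sketch of the loop shape (names made up); the scratch space is trimmed on every iteration, including the EOF one:
```cpp
#include <cerrno>
#include <string>
#include <system_error>
#include <unistd.h>

void readAll(int fd, std::string & out) {
    constexpr size_t chunk = 64 * 1024;
    while (true) {
        size_t oldSize = out.size();
        out.resize(oldSize + chunk);                       // scratch space for read()
        ssize_t got = read(fd, out.data() + oldSize, chunk);
        if (got < 0) {
            out.resize(oldSize);
            throw std::system_error(errno, std::generic_category(), "read");
        }
        out.resize(oldSize + static_cast<size_t>(got));    // also undoes the scratch when got == 0
        if (got == 0) break;                               // end of file
    }
}
```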
Co-Authored-By: Will Fancher <Will.Fancher@Obsidian.Systems>
This doesn't fix the bug, but makes the code less difficult to read.
Also improve the comments, now that it is clear what part is needed in
each code path.
Moving arguments of the primOp into the registration structure makes it
impossible to initialize a second EvalState with the correct primOp
registration. It will end up registering all those "RegisterPrimOp"'s
with an arity of zero on all but the 2nd instance of the EvalState.
Not moving the memory will add a tiny bit of memory overhead during the
eval since we need a copy of all the argument lists of all the primOp's.
The overhead shouldn't be too bad as it is static (based on the amount
of registered operations) and only occurs once during the interpreter
startup.
If we’re in pure eval mode, then say so in the error message rather
than (wrongly) speaking about restricted mode.
Fix https://github.com/NixOS/nix/issues/5611
We let DISPLAY (X11) through, so we should let the Wayland equivalents
through as well. Similarly, we let HOME through, so it should be okay
to allow XDG_RUNTIME_DIR (which is needed for connecting to Wayland
with WAYLAND_DISPLAY) through as well. Otherwise graphical
applications will either fall back to X11 (if they support it), or
just not work (if they don't).
This is not really useful on its own, but it does recover the
'infinite recursion' error message for '{ __functor = x: x; } 1', and
is more efficient in conjunction with #3718.
Fixes #5515.
- This change applies to builtins.toXML and inner workings
- Proof of concept:
```nix
let e = builtins.toXML e; in e
```
- Before:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
```
- After:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
at /data/github/kamadorueda/nix/poc.nix:1:9:
1| let e = builtins.toXML e; in e
|
```
These headers are included by the libexpr, libfetchers, libstore
and libutil headers.
Considering that these are vendored sources, Nix should expose them,
as it is not a good idea for reverse dependencies to rely on a
potentially different source that can go out of sync.
When an input follows disappears, we can't just reuse the old lock
file entries since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
For a typical desktop system (~2K packages) we can easily get 100K
entries in RealisationsRefs. Without indices, a query against RealisationsRefs
requires a linear scan.
RealisationsRefs(referrer)
--------------------------
Inefficiency is seen as a 100% CPU load of nix-daemon for the following
scenario:
$ nix edit -f . bash # add unused environment variable, like FOO="1"
# populate RealisationsRefs, build fresh system
$ nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
$ nix edit -f . bash # add unused environment variable, like FOO="2"
$ time nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
In this case `bash` will be rebuilt a few times and then the rest of the CPU
time is spent on scanning the RealisationsRefs table (about 5 CPU-minutes
on my machine).
Before the change:
$ time nix build -f nixos system ... # step 4 above
real 34m3,613s
user 0m5,232s
sys 0m0,758s
Of all this time about 29.5 minutes are taken by nix-daemon's CPU time.
After the change:
$ time nix build -f nixos system ... # step 4 above
real 4m50,061s
user 0m5,038s
sys 0m0,677s
Of all this time about 1 minute is taken by nix-daemon's CPU time.
Most of the time is spent polling for non-existent realisations on
cache.nixos.org.
Realisations(outputPath)
------------------------
After running a CA system for two weeks I got ~1M entries in the Realisations
table. `nix-collect-garbage` became very slow (seemingly 100 path deletions
per second). It happens due to a slow cascading delete from Realisations
triggered by deletion from ValidPaths.
The fix is to add an index on primary key from ValidPaths(id) that
triggers cascading deletions.
Before the change:
$ time nix-collect-garbage -d --max-freed 100G
<interrupted before finish, took too long>
real 23m32.411s
user 17m49.679s
sys 4m50.609s
Most of the time was spent re-scanning the Realisations table on each path deletion.
After the change:
$ time nix-collect-garbage -d --max-freed 100G
real 8m43.226s
user 6m16.317s
sys 1m40.188s
Time is spent scanning sqlite indices and in kernel when unlinking directories.
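The gist of the fix, as a hedged sketch: create indices on the two columns named in the section headers above (index names here are made up; the real schema statements live in the Nix source).
```cpp
#include <sqlite3.h>
#include <stdexcept>
#include <string>

void addRealisationIndices(sqlite3 * db) {
    const char * sql =
        "create index if not exists IndexRealisationsRefsOnReferrer"
        "  on RealisationsRefs(referrer);"
        "create index if not exists IndexRealisationsOnOutputPath"
        "  on Realisations(outputPath);";
    char * err = nullptr;
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::string msg = err ? err : "unknown SQLite error";
        sqlite3_free(err);
        throw std::runtime_error("creating realisation indices: " + msg);
    }
}
```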
Doing it as a side-effect of calling LocalStore::makeStoreWritable()
is very ugly.
Also, make sure that stopping the progress bar joins the update
thread, otherwise that thread should be unshared as well.
Since 4806f2f6b0, we can't have paths with
references passed to builtins.{path,filterSource}. This prevents many cases
of those functions called on IFD outputs from working. Resolve this by
passing the references found in the original path to the added path.
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
Fix <343239fc8a (r44356843)>
When running a `:b` command in the repl, after building the derivations
query the store for its outputs rather than just assuming that they are
known in the derivation itself (which isn’t true for CA derivations)
Fix #5328
We now parse function applications as a vector of arguments rather
than as a chain of binary applications, e.g. 'substring 1 2 "foo"' is
parsed as
ExprCall { .fun = <substring>, .args = [ <1>, <2>, <"foo"> ] }
rather than
ExprApp (ExprApp (ExprApp <substring> <1>) <2>) <"foo">
This allows primops to be called immediately (if enough arguments are
supplied) without having to allocate intermediate tPrimOpApp values.
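Sketched shape of the new AST node (not the exact class from the Nix source):
```cpp
#include <vector>

struct Expr { virtual ~Expr() = default; };

// 'substring 1 2 "foo"' becomes a single ExprCall with three args instead of
// a chain of three binary ExprApp nodes.
struct ExprCall : Expr {
    Expr * fun = nullptr;
    std::vector<Expr *> args;
};
```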
On
$ nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this gives a substantial performance improvement:
user CPU time: median = 0.9209 mean = 0.9218 stddev = 0.0073 min = 0.9086 max = 0.9340 [rejected, p=0.00000, Δ=-0.21433±0.00677]
elapsed time: median = 1.0585 mean = 1.0584 stddev = 0.0024 min = 1.0523 max = 1.0623 [rejected, p=0.00000, Δ=-0.20594±0.00236]
because it reduces the number of tPrimOpApp allocations from 551990 to
42534 (i.e. only small minority of primop calls are partially
applied) which in turn reduces time spent in the garbage collector.
Rather than having them as plain strings scattered throughout the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much easier to spot typos and deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field for the
enum and letting the compiler point us to all the now invalid usages
of it.
Currently the machine specification (`/etc/nix/machine`) parser fails
with a vague exception if the file has an incorrect format.
This commit adds verbose exceptions and unit-tests for the parser.
Fixed a bug in the initialization of the 'base64DecodeChars' variable.
Currently the decoder does not fail on invalid Base64 strings.
Added a test case to verify the fix.
Also made 'base64DecodeChars' be computed at compile time.
And added a test case to encode/decode a string with non-printable characters.
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let x = builtins.fetchTree x;
in x
```
- Before:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
at /data/github/kamadorueda/nix/test.nix:1:9:
1| let x = builtins.fetchTree x;
| ^
2| in x
```
Mentions: #3505
This ensures any started processes can't write to /nix/store (except
during builds). This partially reverts 01d07b1e, which happened because
of #2646.
The problem only happened after Nix downloaded anything, which caused
me to suspect the download thread. The problem turns out to be:
"A process can't join a new mount namespace if it is sharing
filesystem-related attributes with another process"; in this case that
other process is the curl thread.
Ideally, we might kill it before spawning the shell process, but it's
inside a static variable in the getFileTransfer() function. So
instead, stop it from sharing FS state using unshare(). A strategy
such as the one from #5057 (single-threaded chroot helper binary) is
also very much on the table.
Fixes #4337.
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let
x = builtins.fetchMercurial x;
in
x
```
- Before:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
at /data/github/kamadorueda/test/default.nix:2:7:
1| let
2| x = builtins.fetchMercurial x;
| ^
3| in
```
Mentions: #3505
We can actually just load nss ourselves and call into nss to configure it,
so we don't need to run a dummy query just to have nss load nss_dns
as a side effect.
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
This fixes a bug in the garbage collector where if a path
/nix/store/abcd-foo is valid, but we do a
isValidPath("/nix/store/abcd-foo.lock") first, then a negative entry
for /nix/store/abcd is added to pathInfoCache, so /nix/store/abcd-foo
is subsequently considered invalid and deleted.
(where "referrers" includes the reverse of derivation outputs and
derivers). Now we do a full traversal to see whether we can reach any
root. If not, all paths reached can be deleted.
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.
This reverts some parts of commit
8430a8f086 which was trying to rethrow
some exceptions while we weren’t in the context of a `catch` block,
causing some weird “terminate called without an active exception”
errors.
Fix #5368
In https://github.com/NixOS/nix/pull/5350 we noticed link failures in
pkgsStatic.nixUnstable. Adding an explicit dependency on libutil fixes
the linking of libstore-tests.
When I stop a download with Ctrl-C in a `nix repl` of a flake, the REPL
refuses to do any other downloads:
nix-repl> builtins.getFlake "nix-serve"
[0.0 MiB DL] downloading 'https://api.github.com/repos/edolstra/nix-serve/tarball/e9828a9e01a14297d15ca41 error: download of 'e9828a9e01' was interrupted
[0.0 MiB DL]
nix-repl> builtins.getFlake "nix-serve"
error: interrupted by the user
[0.0 MiB DL]
To fix this issue, two changes were necessary:
* Reset the global `_isInterrupted` variable: the fact that a single
operation was aborted shouldn't prevent the rest of the session from
continuing.
* Recreate a `fileTransfer`-instance if the current one was shut down by
an abort.
We now build the context (so this has the side-effect of making
builtins.{path,filterSource} work on derivations outputs, if IFD is
enabled) and then check that the path has no references (which is what
we really care about).
The boolean is only used to determine if the formals are set to a
non-null pointer in all our cases. We can get rid of that allocation and
instead just compare the pointer value with NULL. Saving up to
sizeof(bool) + platform-specific alignment per ExprLambda instance.
Probably not a lot of memory, but perhaps a few kilobytes with nixpkgs?
This also gets rid of a potential issue with dereferencing formals based on
the value of the boolean, which didn't have to agree with the formals
pointer but did in all our cases.
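In sketch form (member names illustrative), the flag disappears and the pointer itself becomes the single source of truth:
```cpp
struct Formals;   // the `{ a, b ? 0, ... }` argument pattern

struct ExprLambda {
    // before: bool hasFormals; Formals * formals;
    Formals * formals = nullptr;   // nullptr now means "no formals"
};

inline bool hasFormals(const ExprLambda & lambda) {
    return lambda.formals != nullptr;   // no separate flag that could disagree
}
```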
9c766a40cb broke logging from the
daemon, because commonChildInit is called when starting the build hook
in a vfork, so it ends up resetting the parent's logger. So don't
vfork.
It might be best to get rid of vfork altogether, but that may cause
problems, e.g. when we call an external program like git from the
evaluator.
Before the changes when building the whole system with
`contentAddressedByDefault = true;` we get many noninformative messages:
$ nix build -f nixos system --keep-going
...
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-clang-11.1.0.drv.chroot/nix/store/...-11.1.0-python'; cross fingers
error: 2 dependencies of derivation '/nix/store/...-hub-2.14.2.drv' failed to build
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-dev'; cross fingers
warning: rewriting hashes in '/nix/store/...-subversion-1.14.1.drv.chroot/nix/store/...-subversion-1.14.1-man'; cross fingers
...
Let's downgrade these messages down to debug().
I had started the trend of doing `std::visit` by value (because a type
error once misled me into thinking that was the only form that
existed). While the optimizer in principle should be able to deal with
extra copying or extra indirection once the lambdas are inlined, sticking
with by-reference is the conventional default. I hope this might even
improve performance.
I found it somewhat confusing to have an error like
error: attribute 'getFlake' missing
if the required experimental-feature (`flakes`) is not enabled. Instead,
I'd expect Nix to throw an error just as it does when using e.g. `nix
flake` without `flakes` being enabled.
With this change, the error looks like this:
$ nix-instantiate -E 'builtins.getFlake "nixpkgs"'
error: Cannot call 'builtins.getFlake' because experimental Nix feature 'flakes' is disabled. You can enable it via '--extra-experimental-features flakes'.
at «string»:1:1:
1| builtins.getFlake "nixpkgs"
| ^
I didn't use `settings.requireExperimentalFeature` here on purpose
because this doesn't contain a position. Also, it doesn't seem as if we
need to catch the error and check for the missing feature here since
this already happens at evaluation time.
This actually bit me quite recently in `nixpkgs` because I assumed that
`nix-build --check` would also error out if hashes don't match anymore[1]
and so I wrongly assumed that I couldn't reproduce the mismatch error.
The fix is rather simple, during the output registration a so-called
`delayedException` is instantiated e.g. if a FOD hash-mismatch occurs.
However, in case of `nix-build --check` (or `--rebuild` in case of `nix
build`), the code-path where this exception is thrown will never be
reached.
By adding that check to the if-clause that causes an early exit in case
of `bmCheck`, the issue is gone. Also added a (previously failing)
test-case to demonstrate the problem.
[1] https://github.com/NixOS/nixpkgs/pull/139238, the underlying issue
was that `nix-prefetch-git` returns different hashes than `fetchgit`
because the latter one fetches submodules by default.
This fixes
$ nix path-info -r $(type -P ls)
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
/nix/store/5d821pjgzb90lw4zbg6xwxs7llm335wr-libunistring-0.9.10
...
/nix/store/mrv4y369nw6hg4pw8d9p9bfdxj9pjw0x-acl-2.3.0
/nix/store/vfilzcp8a467w3p0mp54ybq6bdzb8w49-coreutils-8.32
Also, output the paths in topologically sorted order like we used to.
This is important if the remote side *does* execute
nix-store/nix-daemon successfully, but stdout is polluted
(e.g. because the remote user's bashrc script prints something to
stdout). In that case we have to shutdown the write side to force the
remote nix process to exit.
Instead of
error: serialised integer 7161674624452356180 is too large for type 'j'
we now get
error: 'nix-store --serve' protocol mismatch from 'sshtest@localhost', got 'This account is currently not available.'
Fixes https://github.com/NixOS/nixpkgs/issues/37287.
Previously, type or coercion errors for string interpolation, path
interpolation, and plus expressions were always reported at the
beginning of the outer expression. This leads to confusing evaluation
error messages making it hard to accurately diagnose and then fix the
error.
For example, errors were reported as follows.
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
This commit changes the ExprConcatStrings expression vector to store a
sequence of expressions *and* their expansion locations so that error
locations can be reported accurately. For interpolation, the error is
reported at the beginning of the entire `${foo}`, not at the beginning
of `foo` because I thought this was slightly clearer. The previous
errors are now reported as:
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
| ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
| ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
| ^
```
The error is reported at this kind of precise location even for
multi-line indented strings.
This probably helps with at least some of the cases mentioned in #561
It's now disabled by default for the following:
* 'nix search' (this was already implied by read-only mode)
* 'nix flake show'
* 'nix flake check', but only on the hydraJobs output
Without this, flakes within the same tree and same lock data will have
the same fingerprint and the eval cache for one flake will be
incorrectly used for another.
Alternative to #4639. You can still read flake.lock, but at least in
reproducible workflows like NixOS configurations where you require a
non-dirty tree, evaluation will fail because there is no rev.
With --no-write-lock-file, it's possible that flake.lock is out of
sync with the actual inputs used by the evaluation. So doing fromJSON
(readFile ./flake.lock) will give wrong results.
Fixes #4639.
Before this commit, the dns lookup in preloadNSS would still go through
nscd. This did not have the effect of loading the nss_dns.so as expected
(nss_dns.so being out of reach from within the sandbox).
Should LOCALDOMAIN environment variable be defined, nss will completely
avoid nscd and will do its dns resolution on its own.
By temporarily setting the LOCALDOMAIN variable before calling into NSS, we can
force NSS to load the shared libraries as expected.
Fixes #5089
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>
In dry-run mode, new derivations can't be created, so running the command on anything that has not been evaluated before results in an error message of the form `don't know how to build these paths (may be caused by read-only store access)`.
For comparison, the classical `nix-build --dry-run` doesn't use read-only mode.
Closes #1795
(cherry picked from commit 54525682df707742e58311c32e9c9cb18de1e31f)
If the store path contains a flake, this means that a command like
"nix path-info /path" will show info about /path, not about the
default output of the flake in /path. If you want the latter, you can
explicitly ask for it by doing "nix path-info path:/path".
Fixes #4568.
Store paths are only allowed to contain a limited subset of the
alphabet, which doesn’t include `!`. So don’t create lockfiles that
contain this `!` character as that would otherwise confuse (and break)
the gc.
Fix #5176
When doing e.g.
nix-build -A package --keep-failed --option \
builders \
'ssh://mfhydra?remote-store=/home/bosch/store x86_64-linux - 10 4 big-parallel'
this doesn't work properly because this build-setting is ignored.
I changed this behavior by passing the `settings.keepFailed` through the
serve-protocol to remote machines to make sure that I can introspect the
build-directory (which is particularly helpful when I have to look at a
`config.log` from a failed build for instance).
This fixes a use-after-free bug:
1. s = new EvalState();
2. callFlake()
3. static vCallFlake now references s
4. delete s;
5. s2 = new EvalState();
6. callFlake()
7. static vCallFlake still references s
8. crash
Nix 2.3 did not have a problem with recreating EvalState.
This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
Closes #4895. Closes #4893. Closes #5127. Closes #5113.
“packages” was probably meant to be “packages.${system}.” but that
is already listed in `getDefaultFlakeAttrPathPrefixes` in `installables`,
which is probably why no one noticed it was broken.
It currently fails with the following error:
error: flake 'git+file://…' does not provide attribute 'devShells.x86_64-linuxhaskell', 'packages.x86_64-linux.haskell', 'legacyPackages.x86_64-linux.haskell' or 'haskell'
For git+file and path flakes, chdir to flake directory so that phases
that expect to be in the flake directory can run
Fixes https://github.com/NixOS/nix/issues/3976
Using `$(libdir)` when installing .pc files looks like a more generic
solution. For example, it will work for distributions like RHEL or
Fedora where .pc files are installed in `/usr/lib64/pkgconfig`.
This replaces the O(n) search complexity in our insert code with a
lookup of O(log n). It also makes removing waitees easier as we can use
the extract method provided by the set class.
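A sketch of the container change (Goal is just a placeholder type here): std::set gives O(log n) insertion and lets us detach entries with extract().
```cpp
#include <set>

struct Goal {};

// ordered by pointer value; insertion and lookup are O(log n) instead of a
// linear scan over a sequence container
std::set<Goal *> waitees;

void addWaitee(Goal * goal)    { waitees.insert(goal); }
void removeWaitee(Goal * goal) { waitees.extract(goal); }  // C++17: detaches the node if present
```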
Previously the code ensured that the isBase32 array would only be
initialised once in a single-threaded context. If two threads happened to
call the function before the initialisation was completed, both of them
would have completed the initialization step. This allowed for a
race condition where one thread might be done with the initialization
while the other thread set all the fields to false again. For a brief
moment the base32 detection would then produce false negatives.
`nix-shell --pure` when applied to a non stdenv derivation doesn't seem
to clear the PATH. It expects the stdenv/setup file to do so.
This adds an explicit `unset PATH` in nix-build.cc (nix-shell) itself so
that it's not reliant on stdenv/setup anymore.
This does not break impure nix-shell since the PATH is persisted in the
variable `p` earlier in the bash rcfile.
Fixes #5092
Previously, despite having a boolean that tracked initialization, the
decode characters were "calculated" every single time a base64
string was decoded.
With this change we only initialize the decode array once in a
thread-safe manner.
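One way to get the "initialized exactly once, thread-safely" behaviour is a function-local static, which C++11 guarantees is initialized once; this is a sketch, not necessarily the mechanism used in the actual code.
```cpp
#include <array>
#include <string_view>

static constexpr std::string_view base64Chars =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

const std::array<signed char, 256> & base64DecodeChars() {
    static const std::array<signed char, 256> table = [] {
        std::array<signed char, 256> t{};
        t.fill(-1);                                  // -1 marks invalid input bytes
        for (int i = 0; i < 64; ++i)
            t[static_cast<unsigned char>(base64Chars[i])] = static_cast<signed char>(i);
        return t;
    }();
    return table;
}
```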
The experimental features are, well, experimental, and shouldn’t be
carelessly and transparently enabled.
Besides, some (`ca-derivations` at least) need to be enabled at startup
in order to work properly.
So it’s better to just require that the daemon be started with the right
`experimental-features` option.
Fix #5017
This extends https://github.com/NixOS/nix/pull/4978 with
support for SCP-like URLs in expressions like
```nix
builtins.fetchGit {
url = "git@github.com:NixOS/nix.git";
ref = "master";
}
```
This adds a new store operation 'addMultipleToStore' that reads a
number of NARs and ValidPathInfos from a Source, allowing any number
of store paths to be copied in a single call. This is much faster on
high-latency links when copying a lot of small files, like .drv
closures.
For example, on a connection with an 50 ms delay:
Before:
$ nix copy --to 'unix:///tmp/proxy-socket?root=/tmp/dest-chroot' \
/nix/store/90jjw94xiyg5drj70whm9yll6xjj0ca9-hello-2.10.drv \
--derivation --no-check-sigs
real 0m57.868s
user 0m0.103s
sys 0m0.056s
After:
real 0m0.690s
user 0m0.017s
sys 0m0.011s
Otherwise I get a compiler error when building for NetBSD:
src/libutil/util.cc: In function 'void nix::_deletePath(const Path&, uint64_t&)':
src/libutil/util.cc:438:17: error: base operand of '->' is not a pointer
438 | AutoCloseFD dirfd(open(dir.c_str(), O_RDONLY));
| ^~~~~
src/libutil/util.cc:439:10: error: 'dirfd' was not declared in this scope
439 | if (!dirfd) {
| ^~~~~
src/libutil/util.cc:444:17: error: 'dirfd' was not declared in this scope
444 | _deletePath(dirfd.get(), path, bytesFreed);
| ^~~~~
With this, we don't have to copy the entire .drv closure to the
destination store ahead of time (or at all). Instead, buildPaths()
reads .drv files from the eval store and copies inputSrcs to the
destination store if it needs to build a derivation.
Issue #5025.
In particular, this now works:
$ nix path-info --eval-store auto --store https://cache.nixos.org nixpkgs#hello
Previously this would fail as it would try to upload the hello .drv to
cache.nixos.org. Now the .drv is instantiated in the local store, and
then we check for the existence of the outputs in cache.nixos.org.
It supports functions as well. Also change `package` to
`derivation` because it operates at the language level and does
not open the derivation (which would be useful but not nearly
as much).
It does not operate on a derivation and does not return a
derivation path. Instead it works at the language level,
where a distinct term "package" is more appropriate to
distinguish the parent object of `meta.position`; an
attribute which doesn't even make it into the derivation.
Some people want to avoid using registries at all on their system; instead
of having to add --no-registries to every command, this commit allows
setting use-registries = false in the config. --no-registries is still allowed
everywhere it was allowed previously, but is now deprecated.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
- This can legitimately happen (for example because of non-determinism
causing a build-time dependency to be kept, or not, as a runtime
reference)
- Because of older Nix versions, it can happen that we encounter a
realisation with an (erroneously) empty set of dependencies, in which
case we don’t want to fail, but just warn the user and try to fix it.
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the nix subprocesses in the repl
That way the whole configuration (including the possible
`experimental-features`, a possibly `--store` option or whatever) will
be made available.
This is required for example to make `nix repl` work with a custom
`--store`
Fill `NIX_CONFIG` with the value of the current Nix configuration before
calling the post-build-hook.
That way the whole configuration (including the possible
`experimental-features`, a possibly `--store` option or whatever) will
be made available to the hook
Previously, '--delete-older-than 10' deleted the generations older than a single day, and '--delete-older-than 12m' deleted all generations older than 12 days.
This change makes it throw on those invalid inputs, and gives an example of a valid input.
We need to support it for the “old” fetch* functions for backwards
compatibility, but we don’t need it for fetchTree (as it’s a new
function).
Given that changing the `name` messes up the content hashing, we can
just forbid passing a custom `name` argument to it
Before this commit, nixConfig.flake-registry didn't have any real effect
on the evaluation, since config was applied after inputs were evaluated.
Change this behavior: apply the config at the beginning of flake::lockFile.
Pass the current experimental features using `NIX_CONFIG` to the various
Nix subprocesses that `nix repl` invokes.
This is quite a hack, but having `nix repl` call Nix with a subprocess
is a hack already, so I guess that’s fine.
`nix develop` gets bash from an (assumed existing) `nixpkgs`
flake. However, when doing so, it reuses the `lockFlags` passed to the
current flake, including the `--input-overrides` and `--input-update`,
which generally don’t make sense anymore at that point (and trigger a
warning because of that).
Clear these overrides before getting the nixpkgs flake to get rid of the
warning.
Add an access-control list to the realisations in recursive-nix (similar
to the already existing one for store paths), so that we can build
content-addressed derivations in the restricted store.
Fix #4353
Previously, the build system used uname(1) output when it wanted to
check the operating system it was being built for, which meant that it
didn't take into account cross-compilation when the build and host
operating systems were different.
To fix this, instead of consulting uname output, we consult the host
triple, specifically the third "kernel" part.
For "kernel"s with stable ABIs, like Linux or Cygwin, we can use a
simple ifeq to test whether we're compiling for that system, but for
other platforms, like Darwin, FreeBSD, or Solaris, we have to use a
more complicated check to take into account the version numbers at the
end of the "kernel"s. I couldn't find a way to just strip these
version numbers in GNU Make without shelling out, which would be even
more ugly IMO. Because these checks differ between kernels, and the
patsubst ones are quite fiddly, I've added variables for each host OS
we might want to check to make them easier to reuse.
This way no derivation has to expect that these files are in the `cwd`
during the build. This is problematic for `nix-shell` where these files
would have to be inserted into the nix-shell's `cwd` which can become
problematic with e.g. recursive `nix-shell`.
To remain backwards-compatible, the location inside the build sandbox
will be kept, however using these files directly should be deprecated
from now on.
This is needed to push the adoption of structured attrs[1] forward. It's
now checked whether a `__json` attribute exists in the environment map of the
derivation to be opened in a `nix-shell`.
Derivations with structured attributes enabled also make use of a file
named `.attrs.json` containing every environment variable represented as
JSON which is useful for e.g. `exportReferencesGraph`[2]. To
provide an environment similar to the build sandbox, `nix-shell` now
adds a `.attrs.json` to `cwd` (which is mostly equal to the one in the
build sandbox) and removes it using an exit hook when closing the shell.
To avoid leaking internals of the build-process to the `nix-shell`, the
entire logic to generate JSON and shell code for structured attrs was
moved into the `ParsedDerivation` class.
[1] https://nixos.mayflower.consulting/blog/2020/01/20/structured-attrs/
[2] https://nixos.org/manual/nix/unstable/expressions/advanced-attributes.html#advanced-attributes
Make sure that the derivation we send to the remote builder is exactly
the one that we want to build locally so that the output ids are exactly
the same
Fix #4845
Useful when we're using a daemon with a chroot store, e.g.
$ NIX_DAEMON_SOCKET_PATH=/tmp/chroot/nix/var/nix/daemon-socket/socket nix-daemon --store /tmp/chroot
Then the client can now connect with
$ nix build --store unix:///tmp/chroot/nix/var/nix/daemon-socket/socket?root=/tmp/chroot nixpkgs#hello
That doesn’t really make sense with CA derivations (and wasn’t even
really correct before because of FO derivations, though that probably
didn’t matter much in practice)
Resolve the derivation before trying to load its environment −
essentially reproducing what the build loop does − so that we can
effectively access our dependencies (and not just their placeholders).
Fix #4821
Make ca-derivations require a `ca-derivations` machine feature, and
ca-aware builders expose it.
That way, a network of builders can mix ca-aware and non-ca-aware
machines, and the scheduler will send them to the right place.
When the `keep-going` option is set to `true`, make `nix flake check`
continue as much as it can before failing.
The UI isn’t perfect as-it-is as all the lines currently start with a
mostly useless `error (ignored): error:` prefix, but I’m not sure what
the best output would be, so I’ll leave it as-it-is for the time being
(This is a bit hijacking the `keep-going` flag as it’s supposed to be a
build-time only thing. But I think it’s fair to reuse it here).
Fix https://github.com/NixOS/nix/issues/4450
When adding a path to the local store (via `LocalStore::addToStore`),
ensure that the `ca` field of the provided `ValidPathInfo` does indeed
correspond to the content of the path.
Otherwise any untrusted user (or any binary cache) can add arbitrary
content-addressed paths to the store (as content-addressed paths don’t
need a signature).