Changes:
- CPP variable is now `USE_READLINE` not `READLINE`
- `configure.ac` supports this with a new CLI flag
- `package.nix` supports this with a new configuration option
- `flake.nix` covers this configuration in CI (along with the no-markdown build)
Also remove the old Ubuntu 16.04 stop-gap, as that release is now quite old.
Motivation:
- editline does not build for Windows, but readline *should*. (I am
still working on this in Nixpkgs at this time, however. So there will
be a follow-up Nix PR removing the windows-only skipping of the
readline library once I am done.)
- Per
https://salsa.debian.org/debian/nix/-/blob/master/debian/rules?ref_type=heads#L27
and #2551, Debian builds Nix with readline. Now we better support that
build configuration and exercise it in CI.
This picks up where #2551 left off, ensuring that we actually test a few
more of these configurations rather than merely having CPP conditionals for them.
Co-authored-by: Weijia Wang <9713184+wegank@users.noreply.github.com>
Also move `SourcePath` into `libutil`.
These changes allow `error.hh` and `error.cc` to access source path and
position information, which we can use to produce better error messages
(for example, we could consider omitting filenames when two or more
consecutive stack frames originate from the same file).
This sets up infrastructure in libutil to allow signing by means other
than a secret key held in memory. #9076 uses this to implement remote signing.
(Split from that PR to allow reviewing in smaller chunks.)
Co-Authored-By: Raito Bezarius <masterancpp@gmail.com>
This avoids a Value allocation for empty list constants. During a `nix
search nixpkgs`, about 82% of all thunked lists are empty, so this
removes about 3 million Value allocations.
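A minimal sketch of the idea (the type and field names here are illustrative, not the actual Nix internals): every empty-list constant can point at one shared, immutable value instead of getting its own allocation.

    // Illustrative sketch only, not the actual Nix Value layout: share one
    // immutable Value for every empty list constant instead of allocating
    // a fresh one per occurrence.
    #include <cstddef>

    struct Value {
        enum Type { tList } type;
        size_t size;
        Value ** elems;
    };

    // A single statically allocated empty list; sharing is safe because an
    // empty list has no elements that could ever be mutated.
    static Value emptyList { Value::tList, 0, nullptr };

    Value * mkList(size_t size) {
        if (size == 0) return &emptyList;   // no allocation for []
        return new Value { Value::tList, size, new Value *[size] };
    }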
Performance comparison on `nix search github:NixOS/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 --no-eval-cache`:
maximum RSS: median = 3845432.0000 mean = 3845432.0000 stddev = 0.0000 min = 3845432.0000 max = 3845432.0000 [rejected?, p=0.00000, Δ=-70084.00000±0.00000]
soft page faults: median = 965395.0000 mean = 965394.6667 stddev = 1.1181 min = 965392.0000 max = 965396.0000 [rejected?, p=0.00000, Δ=-17929.77778±38.59610]
system CPU time: median = 1.8029 mean = 1.7702 stddev = 0.0621 min = 1.6749 max = 1.8417 [rejected, p=0.00064, Δ=-0.12873±0.09905]
user CPU time: median = 14.1022 mean = 14.0633 stddev = 0.1869 min = 13.8118 max = 14.3190 [not rejected, p=0.03006, Δ=-0.18248±0.24928]
elapsed time: median = 15.8205 mean = 15.8618 stddev = 0.2312 min = 15.5033 max = 16.1670 [not rejected, p=0.00558, Δ=-0.28963±0.29434]
since `up` and `values` are both pointer-aligned the type field will
also be pointer-aligned, wasting 48 bits of space on most machines. we
can get away with removing the type field altogether by encoding some
information into the `with` expr that created the env to begin with,
reducing the GC load for the absolutely massive number of single-entry
envs we create for lambdas. this reduces memory usage of system eval by
quite a bit (heap size of our system eval drops from 8.4GB to 8.23GB)
and gives similar savings in eval time.
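roughly, the before/after shapes look like this (approximations for illustration only, not the exact declarations in the Nix source tree):

    struct Value;

    // before: the small type field still gets padded out to pointer
    // alignment, so every Env carries ~48 wasted bits.
    struct EnvBefore {
        EnvBefore * up;
        enum { Plain, HasWithExpr, HasWithAttrs } type;
        Value * values[1];    // flexible-array-style tail in the real code
    };

    // after: no type field at all; whether an env came from a `with` is
    // recorded on the ExprWith node that created it instead.
    struct EnvAfter {
        EnvAfter * up;
        Value * values[1];
    };

    static_assert(sizeof(EnvAfter) < sizeof(EnvBefore), "one pointer saved per env");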
running `nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'`
before:
Time (mean ± σ): 5.576 s ± 0.003 s [User: 5.197 s, System: 0.378 s]
Range (min … max): 5.572 s … 5.581 s 10 runs
after:
Time (mean ± σ): 5.408 s ± 0.002 s [User: 5.019 s, System: 0.388 s]
Range (min … max): 5.405 s … 5.411 s 10 runs
many paths need not be heap-allocated, and derivation env name/value
pairs can be moved into the map.
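the gist, as a generic illustration rather than the actual call sites:

    #include <map>
    #include <string>
    #include <utility>

    // illustration only: moving the name/value pair into the map reuses
    // the strings' existing heap buffers instead of copying them.
    void addEnvVar(std::map<std::string, std::string> & env,
                   std::string name, std::string value)
    {
        env.insert_or_assign(std::move(name), std::move(value));
    }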
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.883 s ± 0.016 s [User: 5.250 s, System: 1.424 s]
Range (min … max): 6.860 s … 6.905 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.868 s ± 0.027 s [User: 5.194 s, System: 1.466 s]
Range (min … max): 6.828 s … 6.913 s 10 runs
the table is very small compared to cache sizes and a single indexed
load is faster than three comparisons.
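an illustration of the technique (the characters and table contents in the real code differ):

    #include <array>

    // before: a chain of comparisons per character
    inline bool isSpecialCmp(char c) {
        return c == '"' || c == '\\' || c == '$';
    }

    // after: one indexed load per character from a 256-byte table that
    // trivially fits in L1 cache.
    static const std::array<bool, 256> specialTable = [] {
        std::array<bool, 256> t{};
        t['"'] = t['\\'] = t['$'] = true;
        return t;
    }();

    inline bool isSpecialTable(unsigned char c) {
        return specialTable[c];
    }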
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.907 s ± 0.012 s [User: 5.272 s, System: 1.429 s]
Range (min … max): 6.893 s … 6.926 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.883 s ± 0.016 s [User: 5.250 s, System: 1.424 s]
Range (min … max): 6.860 s … 6.905 s 10 runs
a bunch of derivation strings contain no escape sequences. we can
optimize for this fact by first scanning for the end of a derivation
string and simply returning the contents unmodified if no escape
sequences were found. to make this even more efficient we can also use
BackedStringViews to avoid copies, avoiding heap allocations for
transient data.
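the fast path boils down to something like this (simplified sketch; per the above, the real code hands back a BackedStringView so the borrowed case stays allocation-free):

    #include <cstddef>
    #include <string_view>

    // simplified sketch: scan for the closing quote first; only when a
    // backslash shows up does the caller need the slow path that
    // unescapes into a freshly allocated string.
    bool scanQuoted(std::string_view in, size_t & pos, std::string_view & out)
    {
        size_t start = pos;
        bool sawEscape = false;
        while (pos < in.size() && in[pos] != '"') {
            if (in[pos] == '\\') { sawEscape = true; ++pos; }  // skip the escape marker
            ++pos;
        }
        out = in.substr(start, pos - start);
        return !sawEscape;   // true: `out` can be used as-is, no copy needed
    }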
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.952 s ± 0.015 s [User: 5.294 s, System: 1.452 s]
Range (min … max): 6.926 s … 6.974 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.907 s ± 0.012 s [User: 5.272 s, System: 1.429 s]
Range (min … max): 6.893 s … 6.926 s 10 runs
This fixes a segfault on infinite function call recursion (rather than
infinite thunk recursion) by tracking the function call depth in
`EvalState`.
Additionally, to avoid printing extremely long stack traces, stack
frames are now deduplicated, with a `(19997 duplicate traces omitted)`
message. This should only really be triggered in infinite recursion
scenarios.
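The depth check itself is conceptually simple (a sketch under assumed names, not the actual `EvalState` code):

    #include <cstddef>
    #include <stdexcept>

    // Sketch only: a call-depth counter, bumped on every function
    // application and checked against a limit, turns what used to be a
    // C-stack overflow (SIGSEGV) into a catchable evaluation error.
    struct EvalState {
        size_t callDepth = 0;
        static constexpr size_t maxCallDepth = 10000;   // illustrative limit

        void callFunction(/* Value & fun, Value & arg, Value & result */) {
            if (callDepth >= maxCallDepth)
                throw std::runtime_error("stack overflow");
            ++callDepth;
            // ... apply the function; recursive calls re-enter callFunction ...
            --callDepth;   // the real code would use an RAII guard for exception safety
        }
    };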
Before:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
Segmentation fault: 11
After:
$ nix-instantiate --eval --expr '(x: x x) (x: x x)'
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
$ nix-instantiate --eval --expr '(x: x x) (x: x x)' --show-trace
error:
… from call site
at «string»:1:1:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:2:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:5:
1| (x: x x) (x: x x)
| ^
… while calling anonymous lambda
at «string»:1:11:
1| (x: x x) (x: x x)
| ^
… from call site
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
(19997 duplicate traces omitted)
error: stack overflow
at «string»:1:14:
1| (x: x x) (x: x x)
| ^
more buffers that can be uninitialized and on the stack. small
difference, but still worth doing.
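the pattern in question, as a generic illustration rather than the actual call sites:

    #include <unistd.h>

    // illustration only: for small, short-lived scratch space, an
    // uninitialized stack array avoids both the heap allocation and the
    // zero-initialization that a std::vector<char>(n) buffer pays for.
    ssize_t readSome(int fd)
    {
        // before (heap allocation plus zeroing):
        //   std::vector<char> buf(4096);
        //   return read(fd, buf.data(), buf.size());

        // after (stack, left uninitialized until read() fills it):
        char buf[4096];
        return read(fd, buf, sizeof(buf));
    }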
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.963 s ± 0.011 s [User: 5.330 s, System: 1.421 s]
Range (min … max): 6.943 s … 6.974 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.952 s ± 0.015 s [User: 5.294 s, System: 1.452 s]
Range (min … max): 6.926 s … 6.974 s 10 runs
istream sentry objects are very expensive for single-character
operations, and since we don't configure exception masks for the
istreams used here they don't even do anything. all we need is
end-of-string checks and an advancing position in an immutable memory
buffer, both of which can be had for much cheaper than istreams allow.
the effect of this change is most apparent on empty stores.
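the replacement boils down to a plain cursor over an immutable buffer (sketch only, not the actual serialization code):

    #include <cstddef>
    #include <stdexcept>
    #include <string_view>

    // everything the hot path needs from an istream is "are we at the
    // end?" and "give me the next character", which a position into an
    // immutable buffer provides without any sentry or locale machinery.
    struct StringCursor {
        std::string_view data;
        size_t pos = 0;

        bool eof() const { return pos >= data.size(); }

        char next() {
            if (eof()) throw std::runtime_error("unexpected end of input");
            return data[pos++];
        }
    };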
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 7.167 s ± 0.013 s [User: 5.528 s, System: 1.431 s]
Range (min … max): 7.147 s … 7.182 s 10 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 6.963 s ± 0.011 s [User: 5.330 s, System: 1.421 s]
Range (min … max): 6.943 s … 6.974 s 10 runs
Previously, IFDs would be built within the eval store, even though one
is typically using `--eval-store` precisely to *avoid* local builds.
Because the resulting Nix expression must be copied back to the eval
store in order to be imported, this requires the eval store to trust
the build store's signatures.
Prior to this change, Nix would prepend every installable to the PATH
list in order to ensure that installables appeared before the current
PATH from the ambient environment.
With this change, all the installables are still prepended to the PATH,
but in the same order as they appear on the command line. This means
that the first of two packages that expose an executable `hello` would
appear in the PATH first, and thus be executed first.
See the test in the prior commit for a more concrete example.
There's no good reason to deprecate it:
- For consistency reasons it should continue to exist, such that all
primitive types have a corresponding `builtins.is*` primop.
- There's no implementation cost to continuing to have this function.
- It costs users time to try to migrate away from it, e.g.
https://github.com/NixOS/nixpkgs/pull/219747 and https://github.com/NixOS/nixpkgs/pull/275548
- Using it can give easier-to-read code like `all isNull list`.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
this also reduces forceValue code size and removes the need for
hideInDiagnostics. coopting thunk forcing like this has the additional
benefit of clarifying how these errors can happen in the first place.
forceValue is extremely hot. interestingly adding likeliness annotations
to the branches does not seem to make a difference.
before:
Time (mean ± σ): 4.224 s ± 0.005 s [User: 3.711 s, System: 0.512 s]
Range (min … max): 4.218 s … 4.234 s 10 runs
after:
Time (mean ± σ): 4.140 s ± 0.009 s [User: 3.647 s, System: 0.492 s]
Range (min … max): 4.130 s … 4.152 s 10 runs
almost all uses of this are interactive, except for deepSeq. deepSeq is
going to be expensive and rare enough to not care much about, and
Value::determinePos should usually be cheap enough to not be too much of
a burden in any case.
~1% parser speedup from not using TLS indirections, less on system eval.
this could have also gone in flex yyextra data, but that's significantly
slower for some reason (albeit still faster than thread locals).
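the shape of the change, as an illustration (names assumed, not the actual parser code):

    // before: every access pays a TLS indirection
    //   thread_local ParseState * curState;

    struct ParseState { /* symbol table, current origin, ... */ };

    // after: the state travels with the call (bison's %parse-param /
    // %lex-param are the usual way to thread it through generated code),
    // so each access is a plain pointer dereference.
    void onRule(ParseState & state /*, semantic values ... */)
    {
        // use `state` directly; no TLS lookup involved
    }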
before:
Time (mean ± σ): 4.231 s ± 0.004 s [User: 3.725 s, System: 0.504 s]
Range (min … max): 4.226 s … 4.240 s 10 runs
after:
Time (mean ± σ): 4.224 s ± 0.005 s [User: 3.711 s, System: 0.512 s]
Range (min … max): 4.218 s … 4.234 s 10 runs
~2% speedup on parsing without eval, less (but still significant) on
system eval. having flex generate faster parsers leads to very strange
misparses. maybe re2c is worth investigating.
before:
Time (mean ± σ): 4.260 s ± 0.003 s [User: 3.754 s, System: 0.505 s]
Range (min … max): 4.257 s … 4.266 s 10 runs
after:
Time (mean ± σ): 4.231 s ± 0.004 s [User: 3.725 s, System: 0.504 s]
Range (min … max): 4.226 s … 4.240 s 10 runs