The current definition of `intersectAttrs` is incorrect:
> Return a set consisting of the attributes in the set e2 that also exist in the
> set e1.
Recall that (Nix manual, section 5.1):
> An attribute set is a collection of name-value-pairs (called attributes)
According to the existing description of `intersectAttrs`, the following should
evaluate to the empty set, since no key-value *pair* (i.e. attribute) exists in
both sets:
```
builtins.intersectAttrs { x=3; } {x="foo";}
```
And yet:
```
nix-repl> builtins.intersectAttrs { x=3; } {x="foo";}
{ x = "foo"; }
```
Clearly the intent here was for the *names* of the resulting attribute set to be
the intersection of the *names* of the two arguments, and for the values of the
resulting attribute set to be the values from the second argument.
This commit corrects the definition, making it match the implementation and intent.
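For example, with the corrected reading (names intersected, values taken from the second set):
```nix
builtins.intersectAttrs { a = 1; b = 2; } { b = "x"; c = "y"; }
# => { b = "x"; }
```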
Make sure that people who run Nix in non-interactive mode (and so don't have the possibility to interactively accept the individual flake configuration settings) are aware of this flag.
Fix #7086
This commit adds an optional `__impure` parameter to fetchurl.nix, which allows
the caller to use `libfetcher`'s fetcher in an impure derivation. This allows
nixpkgs' patch-normalizing fetcher (fetchpatch) to be rewritten to use nix's
internal fetchurl, thereby eliminating the awkward "you can't use fetchpatch
here" banners scattered all over the place.
See also: https://github.com/NixOS/nixpkgs/pull/188587
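A rough sketch of the intended call site (illustrative only; argument names other than `__impure` are assumed from the usual <nix/fetchurl.nix> interface):
```nix
# Illustrative sketch, not the actual fetchpatch implementation.
import <nix/fetchurl.nix> {
  url = "https://example.org/some.patch";  # hypothetical URL
  __impure = true;  # run as an impure derivation, so no fixed output hash is needed
}
```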
98e361ad4c introduced a regression where
previously stored attributes were replaced by placeholders. As a
result, a command like 'nix build nixpkgs#hello' had to be executed at
least twice to get caching.
This code does not seem necessary for suggestions to work.
Makes `printValueAsJSON` not copy paths to the store for `nix eval
--json`, `nix-instantiate --eval --json` and `nix-env --json`.
Fixes https://github.com/NixOS/nix/issues/5612
I recently got fairly confused why the following expression didn't have
any effect
{
  description = "Foobar";
  inputs.sops-nix = {
    url = github:mic92/sops-nix;
    inputs.nixpkgs_22_05.follows = "nixpkgs";
  };
}
until I found out that the input was called `nixpkgs-22_05` (please note
the dash vs. underscore).
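With the actual input name, the override would have been:
```nix
inputs.sops-nix = {
  url = github:mic92/sops-nix;
  inputs.nixpkgs-22_05.follows = "nixpkgs";
};
```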
IMHO it's not a good idea to silently ignore that case and probably leave
end-users rather confused, so I implemented a small check for that which
basically checks whether `follows`-declarations from overrides actually
have corresponding inputs in the flake's transitive inputs.
In fact this was done by accident already in our own test-suite where
the removal of a `follows` was apparently forgotten[1].
Since the key of the `std::map` that holds the `overrides` is a vector
and we have to find the last element of each vector (i.e. the override)
this has to be done with a for loop in O(n) complexity with `n` being
the total amount of overrides (which shouldn't be that large though).
Please note that this doesn't work with nested expressions, i.e.
inputs.fenix.inputs.nixpkgs.follows = "...";
which is a known problem[2].
For the expression demonstrated above, an error like this will be
thrown:
error: sops-nix has a `follows'-declaration for a non-existant input nixpkgs_22_05!
[1] 2664a216e5
[2] https://github.com/NixOS/nix/issues/5790
Prevents errors when running with UBSan:
/nix/store/j5vhrywqmz1ixwhsmmjjxa85fpwryzh0-gcc-11.3.0/include/c++/11.3.0/bits/stl_pair.h:353:4: runtime error: load of value 229, which is not a valid value for type 'AttrType'
Overrides for inputs with flake=false were non-sticky, since they
changed the `original` in `flake.lock`. This fixes it, by using the same
locked original for both flake and non-flake inputs.
Don’t explicitly give it a constructor, but use aggregate
initialization instead (also prevents having an implicit coercion, which
is probably good here)
* libexpr: fix builtins.split example
The example was previously indicating that multiple whitespaces would be
collapsed into a single captured whitespace. That isn't true and was
likely a mistake when being documented initially.
* Fix segfault on uninitialized list when printing value
Since lists are just chunks of memory the individual elements in the
list might be uninitialized when a programming error happens within Nix.
In this case the values are null-initialized (at least with Boehm GC)
and we can avoid a nullptr deref when printing them.
I ran into this issue while ensuring that new expression tests would
show the actual value on an assertion failure.
This is unlikely to cause any runtime performance regressions as
printing values is not really in the hot path (unless the repl is the
primary use case).
* Add operator<< for ValueTypes
* Add libexpr tests
This introduces tests for libexpr that evaluate various trivial Nix
language expressions and primop invocations that should be good smoke
tests for whether or not the implementation is behaving as expected.
'nix profile install' will now install all outputs listed in the
package's meta.outputsToInstall attribute, or all outputs if that
attribute doesn't exist. This makes it behave consistently with
nix-env. Fixes #6385.
Furthermore, for consistency, all other 'nix' commands do this as
well. E.g. 'nix build' will build and symlink the outputs in
meta.outputsToInstall, defaulting to all outputs. Previously, it only
built/symlinked the first output. Note that this means that selecting
a specific output using attrpath selection (e.g. 'nix build
nixpkgs#libxml2.dev') no longer works. A subsequent PR will add a way
to specify the desired outputs explicitly.
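For reference, a package can declare which outputs get installed roughly like this (illustrative nixpkgs-style sketch; the attribute values are made up):
```nix
{ stdenv }:

stdenv.mkDerivation {
  pname = "example";   # hypothetical package
  version = "1.0";
  outputs = [ "out" "dev" "man" ];
  # 'nix profile install' and 'nix build' now honour this list,
  # falling back to all outputs when it is absent.
  meta.outputsToInstall = [ "out" "man" ];
  # ... src, build phases, etc.
}
```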
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps, it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message into eg `formatSymbol()` is error-prone and
inconvenient.
The produced path is then allowed to be imported or utilized elsewhere:
```
assert (43 == import (builtins.toFile "source" "43")); "good"
```
This will still fail on write-only stores.
with position and symbol tables in place we can now shrink Attr by a full
pointer with some simple field reordering. since Attr is a very hot struct this
has substantial impact on memory use, decreasing GC allocations and heap size by
10-15% each. we also get a ~15% performance improvement due to reduced GC
loading.
pure parsing has taken a hit over the branch base because positions are now
slightly more expensive to create, but overall we get a noticeable improvement.
before (on memory-friendliness):
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.960 s ± 0.028 s [User: 5.832 s, System: 0.897 s]
Range (min … max): 6.886 s … 7.005 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 328.1 ms ± 1.7 ms [User: 295.8 ms, System: 32.2 ms]
Range (min … max): 324.9 ms … 331.2 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.688 s ± 0.029 s [User: 2.365 s, System: 0.238 s]
Range (min … max): 2.642 s … 2.742 s 20 runs
after:
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.902 s ± 0.039 s [User: 5.844 s, System: 0.783 s]
Range (min … max): 6.820 s … 6.956 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 330.7 ms ± 2.2 ms [User: 300.6 ms, System: 30.0 ms]
Range (min … max): 327.5 ms … 334.5 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.330 s ± 0.027 s [User: 2.040 s, System: 0.234 s]
Range (min … max): 2.272 s … 2.383 s 20 runs
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
only file and line of the returned position were ever used, it wasn't actually
used as a position. as such we may as well use a path+int pair for only those two
values and remove a use of Pos that would not work well with a position table.
a future commit will remove the ability to convert the symbol type used in
bindings to strings. since we only have two users we can inline the error check.
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.
we don't *need* symbols here. the only advantage they have over strings is
making call-counting slightly faster, but that's a diagnostic feature and thus
needn't be optimized.
this also fixes a move bug that previously didn't show up: PrimOp structs were
accessed after being moved from, which technically invalidates them. previously
the names remained valid because Symbol copies on move, but strings are
invalidated. we now copy the entire primop struct instead of moving since primop
registration happens once and is not performance-sensitive.
In particular, this means that 'nix eval' (which uses toValue()) no
longer auto-calls functions or functors (because
AttrCursor::findAlongAttrPath() doesn't).
Fixes #6152.
Also use ref<> in a few places, and don't return attrpaths from
getCursor() because cursors already have a getAttrPath() method.
Impure derivations are derivations that can produce a different result
every time they're built. Example:
stdenv.mkDerivation {
  name = "impure";
  __impure = true; # marks this derivation as impure
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  buildCommand = "date > $out";
};
Some important characteristics:
* This requires the 'impure-derivations' experimental feature.
* Impure derivations are not "cached". Thus, running "nix-build" on
the example above multiple times will cause a rebuild every time.
* They are implemented similarly to CA derivations, i.e. the output is
moved to a content-addressed path in the store. The difference is
that we don't register a realisation in the Nix database.
* Pure derivations are not allowed to depend on impure derivations. In
the future fixed-output derivations will be allowed to depend on
impure derivations, thus forming an "impurity barrier" in the
dependency graph.
* When sandboxing is enabled, impure derivations can access the
network in the same way as fixed-output derivations. In relaxed
sandboxing mode, they can access the local filesystem.
Rather than having four different but very similar types of hashes, make
only one, with a tag indicating whether it corresponds to a regular or
deferred derivation.
This implies a slight logical change: The original Nix+multiple-outputs
model assumed only one hash-modulo per derivation. Adding
multiple-outputs CA derivations changed this as these have one
hash-modulo per output. This change is now treating each derivation as
having one hash modulo per output.
This obviously means that we internally lose the guarantee that
all the outputs of input-addressed derivations have the same hash
modulo. But it turns out that it doesn’t matter because there’s nothing
in the code taking advantage of that fact (and it probably shouldn’t
anyways).
The upside is that it is now much easier to work with these hashes, and
we can get rid of a lot of useless `std::visit{ overloaded`.
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
This allows closures to be imported at evaluation time, without
requiring the user to configure substituters. E.g.
builtins.fetchClosure {
  storePath = /nix/store/f89g6yi63m1ywfxj96whv5sxsm74w5ka-python3.9-sqlparse-0.4.2;
  from = "https://cache.ngi0.nixos.org";
}
Before the change, lexer errors did not report the location:
$ nix build -f. mc
error: path has a trailing slash
(use '--show-trace' to show detailed location information)
Note that it's not clear what file generates the error.
After the change the location is reported:
$ src/nix/nix --extra-experimental-features nix-command build -f ~/nm mc
error: path has a trailing slash
at .../pkgs/development/libraries/glib/default.nix:54:18:
53|   };
54|   src = /tmp/foo/;
  |                  ^
55|
(use '--show-trace' to show detailed location information)
Here we see both the problematic file and the offending string.
1. `DerivationOutput` now has the `std::variant` as a base class. And the
variants are given hierarchical names under `DerivationOutput`.
In 8e0d0689be @matthewbauer and I
didn't know a better idiom, and so we made it a field. But this sort
of "newtype" is anoying for literals downstream.
Since then we have learned the base-class-plus-inherited-constructors trick,
e.g. as used in `DerivedPath`. Switching to that makes this more
ergonomic, and consistent.
2. `store-api.hh` and `derivations.hh` are now independent.
In bcde5456cc I swapped the dependency,
but I now know it is better to just keep on using incomplete types as
much as possible for faster compilation and good separation of
concerns.
The current `--out-path` flag has two disadvantages when one is only
concerned with querying the names of outputs:
- it requires evaluating every output's `outPath`, which takes
significantly more resources and runs into more failures
- it destroys the information of the order of outputs so we can't tell
which one is the main output
This patch makes the output names always present (replacing paths with
`null` in JSON if `--out-path` isn't given), and adds an `outputName`
field.
This change was taken from dynamic derivation (#4628). It somewhat
undoes the refactors I first did for floating CA derivations, as the
benefit of hindsight + requirements of dynamic derivations made me
reconsider some things.
They aren't too consequential, but I figured they might be good to land
first, before the more profound changes @thufschmitt has in the works.
reduces peak heap memory use on eval of our test system from 264.4MB to 242.3MB,
possibly also a slight performance boost.
theoretically memory use could be cut down by another eight bytes per Pos on
average by turning it into a tuple containing an index into a global base
position table with row and column offsets, but that doesn't seem worth the
effort at this point.
```console
$ nix eval --expr '({ foo ? 1 }: foo) { fob = 2; }'
error: anonymous function at (string):1:2 called with unexpected argument 'fob'
at «string»:1:1:
1| ({ foo ? 1 }: foo) { fob = 2; }
 | ^
Did you mean foo?
```
Note that because Nix will first check for _missing_ arguments before
checking for extra arguments, `({ foo }: foo) { fob = 1; }` will
complain about the missing `foo` argument (rather than extra `fob`) and
so won’t display a suggestion.
Make the evaluator show some suggestions when trying to access an
invalid field from an attrset.
```console
$ nix eval --expr '{ foo = 1; }.foa'
error: attribute 'foa' missing
at «string»:1:1:
1| { foo = 1; }.foa
 | ^
Did you mean foo?
```
No real need for keeping a separate header for such a simple class.
This requires changing `OrSuggestions<T>::operator*` a bit so that it doesn't
throw an `Error`, to prevent a cyclic dependency. But since this error is only
thrown on programmer error, we can replace the whole method by a direct
call to `std::get`, which will raise its own assertion if need be.
Refactor the `size == 0` logic into a new helper function that
replaces dupStringWithLen.
The name had to change, because unlike a `dup`-function, it does
not always allocate a new string.
We now memoize on Bindings / list element vectors rather than Values,
so that e.g. two Values that point to the same Bindings will be
printed only once.
This is useful whenever we want to evaluate something to a store path
(e.g. in get-drvs.cc).
Extracted from the lazy-trees branch (where we can require that a
store path must come from a store source tree accessor).
This was introduced in #6174. However fetch{url,Tarball} are legacy
and we shouldn't have an undocumented attribute that does the same
thing as one that already exists ('sha256').
Starting work on #5638
The exact boundary between `FetchSettings` and `EvalSettings` is not
clear to me, but that's fine. First lets clean out `libstore`, and then
worry about what, if anything, should be the separation between those
two.
previously :a would override old bindings of a name with new values if the added
set contained names that were already bound. in nix 2.6 this doesn't happen any
more, which is potentially confusing.
fixes #6041
we'll retain the old coerceToString interface that returns a string, but callers
that don't need the returned value to outlive the Value it came from can save
copies by using the new interface instead. for values that weren't stringy we'll
pass a new buffer argument that'll be used for storage and shouldn't be
inspected.
It’s totally valid to have entries in `NIX_PATH` that aren’t valid paths
(they can even be arbitrary urls or `channel:<channel-name>`).
Fix #5998 and #5980
keeping it as a simple data member means it won't be scanned by the GC, so
eventually the GC will collect a cache that is still referenced (resulting in
use-after-free of cache elements).
fixes #5962
- Make passing the position to `forceValue` mandatory,
this way we remind people that the position is
important for better error messages
- Add pos to all `forceValue` calls
This no longer worked correctly because 'path' is uninitialised when
an exception occurs, leading to errors like
… while importing ''
at /nix/store/rrzz5b1pshvzh1437ac9nkl06br81lkv-source/flake.nix:352:13:
So move the adding of the error context into realisePath().
if we defer the duplicate argument check for lambda formals we can use more
efficient data structures for the formals set, and we can get rid of the
duplication of formals names to boot. instead of a list of formals we've seen
and a set of names we'll keep a vector instead and run a sort+dupcheck step
before moving the parsed formals into a newly created lambda. this improves
performance on search and rebuild by ~1%, pure parsing gains more (about 4%).
this does reorder lambda arguments in the xml output, but the output is still
stable. this shouldn't be a problem since argument order is not semantically
important anyway.
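user-visible behaviour is unchanged; duplicate formals are still rejected once the
collected formals are checked, e.g. (sketch; exact error wording may differ):
```nix
# still rejected, with an error along the lines of
# "duplicate formal function argument 'a'"
{ a, a }: a
```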
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.550 s ± 0.060 s [User: 6.470 s, System: 1.664 s]
Range (min … max): 8.435 s … 8.666 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 346.7 ms ± 2.1 ms [User: 312.4 ms, System: 34.2 ms]
Range (min … max): 343.8 ms … 353.4 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.720 s ± 0.031 s [User: 2.415 s, System: 0.231 s]
Range (min … max): 2.662 s … 2.780 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.462 s ± 0.063 s [User: 6.398 s, System: 1.661 s]
Range (min … max): 8.339 s … 8.542 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.1 ms ± 1.4 ms [User: 296.8 ms, System: 32.3 ms]
Range (min … max): 326.1 ms … 330.8 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.687 s ± 0.035 s [User: 2.392 s, System: 0.228 s]
Range (min … max): 2.626 s … 2.754 s 20 runs
string expressions by and large do not need the benefits a Symbol gives us,
instead they pollute the symbol table and cause unnecessary overhead for almost
all strings. the one place we can think of that benefits from them (attrpaths
with expressions) extracts the benefit in the parser, which we'll have to touch
anyway when changing ExprString to hold strings.
this gives a sizeable improvement of 3-5% on all benchmarks we've run.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.844 s ± 0.045 s [User: 6.750 s, System: 1.663 s]
Range (min … max): 8.758 s … 8.922 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.3 ms [User: 332.3 ms, System: 35.2 ms]
Range (min … max): 364.0 ms … 375.2 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.533 s ± 0.068 s [User: 6.485 s, System: 1.642 s]
Range (min … max): 8.404 s … 8.657 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 347.6 ms ± 3.1 ms [User: 313.1 ms, System: 34.5 ms]
Range (min … max): 343.3 ms … 354.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.709 s ± 0.032 s [User: 2.414 s, System: 0.232 s]
Range (min … max): 2.655 s … 2.788 s 20 runs
it can be replaced with StringToken if we add another bit of information to
StringToken, namely whether this string should take part in indentation scanning
or not. since all escaping terminates indentation scanning we need to set this
bit only for the non-escaped IND_STRING rule.
this improves performance by about 1%.
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.880 s ± 0.048 s [User: 6.809 s, System: 1.643 s]
Range (min … max): 8.781 s … 8.993 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.0 ms ± 2.2 ms [User: 339.8 ms, System: 35.2 ms]
Range (min … max): 371.5 ms … 379.3 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.831 s ± 0.040 s [User: 2.536 s, System: 0.225 s]
Range (min … max): 2.769 s … 2.912 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.832 s ± 0.048 s [User: 6.757 s, System: 1.657 s]
Range (min … max): 8.743 s … 8.921 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 367.4 ms ± 3.2 ms [User: 332.7 ms, System: 34.7 ms]
Range (min … max): 364.6 ms … 374.6 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.810 s ± 0.030 s [User: 2.517 s, System: 0.225 s]
Range (min … max): 2.742 s … 2.854 s 20 runs
gives about 1% improvement on system eval, a bit less on nix search.
# before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.419 s ± 0.045 s [User: 6.362 s, System: 0.794 s]
Range (min … max): 7.335 s … 7.517 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.921 s ± 0.023 s [User: 2.626 s, System: 0.210 s]
Range (min … max): 2.883 s … 2.957 s 20 runs
# after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 7.370 s ± 0.059 s [User: 6.333 s, System: 0.791 s]
Range (min … max): 7.286 s … 7.541 s 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.891 s ± 0.033 s [User: 2.606 s, System: 0.210 s]
Range (min … max): 2.823 s … 2.958 s 20 runs
mainly to avoid an allocation and a copy of a string that can be
modified in place (ever since EvalState holds on to the buffer, not the
generated parser itself).
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 570.4 ms ± 2.8 ms [User: 561.3 ms, System: 8.6 ms]
Range (min … max): 564.6 ms … 578.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 375.4 ms ± 1.3 ms [User: 343.2 ms, System: 31.7 ms]
Range (min … max): 373.4 ms … 378.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.925 s ± 0.006 s [User: 2.704 s, System: 0.219 s]
Range (min … max): 2.910 s … 2.942 s 50 runs
when given a string yacc will copy the entire input to a newly allocated
location so that it can add a second terminating NUL byte. since the
parser is a very internal thing to EvalState we can ensure that having
two terminating NUL bytes is always possible without copying, and have
the parser itself merely check that the expected NULs are present.
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 571.7 ms ± 2.4 ms [User: 563.3 ms, System: 8.0 ms]
Range (min … max): 566.7 ms … 579.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 376.6 ms ± 1.0 ms [User: 345.8 ms, System: 30.5 ms]
Range (min … max): 374.5 ms … 379.1 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.922 s ± 0.006 s [User: 2.707 s, System: 0.215 s]
Range (min … max): 2.906 s … 2.934 s 50 runs
speeds up parsing by ~3%, system builds by a bit more than 1%
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 572.4 ms ± 2.3 ms [User: 563.4 ms, System: 8.6 ms]
Range (min … max): 566.9 ms … 579.1 ms 50 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 381.7 ms ± 1.0 ms [User: 348.3 ms, System: 33.1 ms]
Range (min … max): 380.2 ms … 387.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.936 s ± 0.005 s [User: 2.715 s, System: 0.221 s]
Range (min … max): 2.923 s … 2.946 s 50 runs
every stringy token the lexer returns is turned into a Symbol and not
used further, so we don't have to strdup. using a string_view is
sufficient, but due to limitations of the current parser we have to use
a POD type that holds the same information.
gives ~2% on system build, 6% on search, 8% on parsing alone
# before
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 610.6 ms ± 2.4 ms [User: 602.5 ms, System: 7.8 ms]
Range (min … max): 606.6 ms … 617.3 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 430.1 ms ± 1.4 ms [User: 393.1 ms, System: 36.7 ms]
Range (min … max): 428.2 ms … 434.2 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 3.032 s ± 0.005 s [User: 2.808 s, System: 0.223 s]
Range (min … max): 3.023 s … 3.041 s 50 runs
# after
Benchmark 1: nix search --offline nixpkgs hello
Time (mean ± σ): 574.7 ms ± 2.8 ms [User: 566.3 ms, System: 8.0 ms]
Range (min … max): 569.2 ms … 580.7 ms 50 runs
Benchmark 2: nix eval -f hackage-packages.nix
Time (mean ± σ): 394.4 ms ± 0.8 ms [User: 361.8 ms, System: 32.3 ms]
Range (min … max): 392.7 ms … 395.7 ms 50 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.976 s ± 0.005 s [User: 2.757 s, System: 0.218 s]
Range (min … max): 2.966 s … 2.990 s 50 runs
there's a few symbols in primops we can create once and pick them out of
EvalState afterwards instead of creating them every time we need them. this
gives almost 1% speedup to an uncached nix search.
constructing an ostringstream for non-string concats (like integer addition) is
a small constant cost that we can avoid. for string concats we can keep all the
string temporaries we get from coerceToString and concatenate them in one go,
which saves a lot of intermediate temporaries and copies in ostringstream. we
can also avoid copying the concatenated string again by directly allocating it
in GC memory and moving ownership of the concatenated string into the target
value.
saves about 2% on system eval.
before:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.837 s ± 0.031 s [User: 2.562 s, System: 0.191 s]
Range (min … max): 2.796 s … 2.892 s 20 runs
after:
Benchmark 1: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.790 s ± 0.035 s [User: 2.532 s, System: 0.187 s]
Range (min … max): 2.722 s … 2.836 s 20 runs
Previously you had to remember to call value->attrs->sort() after
populating value->attrs. Now there is a BindingsBuilder helper that
wraps Bindings and ensures that sort() is called before you can use
it.
nixpkgs can save a good bit of eval memory with this primop. zipAttrsWith is
used quite a bit around nixpkgs (eg in the form of recursiveUpdate), but the
most costly application for this primop is in the module system. it improves
the implementation of zipAttrsWith from nixpkgs by not checking an attribute
multiple times if it occurs more than once in the input list, allocates fewer
values and set elements, and just avoids many a temporary object in general.
nixpkgs has a more generic version of this operation, zipAttrsWithNames, but
this version is only used once so isn't suitable for being the base of a new
primop. if it were to be used more we should add a second primop instead.
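for reference, the primop behaves like the nixpkgs function it speeds up:
```nix
builtins.zipAttrsWith (name: values: values) [
  { a = 1; }
  { a = 2; b = 3; }
]
# => { a = [ 1 2 ]; b = [ 3 ]; }
```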
When we check for disappeared overrides, we can get "false positives"
for follows and overrides which are defined in the dependencies of the
flake we are locking, since they are not parsed by
parseFlakeInputs. However, at that point we already know that the
overrides couldn't possibly have been changed if the input itself
hasn't changed (since we check that oldLock->originalRef == *input.ref
for the input's parent). So, to prevent this, only perform this check
when it was possible that the flake changed (e.g. the flake we're
locking, or a new input, or the input has changed and mustRefetch ==
true).
This makes sure that values parsed from TOML have a proper size. Using
e.g. `double` caused issues on i686 where the size of `double` (32bit)
was too small to accommodate some values.
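For example, an integer that needs the full 64 bits has to round-trip unchanged (a small sanity check of my own, not taken from the test suite):
```nix
builtins.fromTOML ''
  answer = 4611686018427387904
''
# => { answer = 4611686018427387904; }
```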
calling GC_malloc for each value is significantly more expensive than
allocating a bunch of values at once with GC_malloc_many. "a bunch" here
is a GC block size, ie 16KiB or less.
this gives a 1.5% performance boost when evaluating our nixos system.
tested with
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
# on master
Time (mean ± σ): 3.335 s ± 0.007 s [User: 2.774 s, System: 0.293 s]
Range (min … max): 3.315 s … 3.347 s 50 runs
# with this change
Time (mean ± σ): 3.288 s ± 0.006 s [User: 2.728 s, System: 0.292 s]
Range (min … max): 3.274 s … 3.307 s 50 runs
This function is very useful in nixpkgs, but its implementation in Nix
itself is rather slow due to it requiring a lot of attribute set and
list appends.
Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we have passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. This caused
issues when we tried to reuse oldLock but couldn't for some
reason (read: mustRefetch is true), in that case the follows were
resolved incorrectly.
Fix this by passing the correct parent, and adding some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
Moving arguments of the primOp into the registration structure makes it
impossible to initialize a second EvalState with the correct primOp
registration. It will end up registering all those "RegisterPrimOp"'s
with an arity of zero on all but the 2nd instance of the EvalState.
Not moving the memory will add a tiny bit of memory overhead during the
eval since we need a copy of all the argument lists of all the primOp's.
The overhead shouldn't be too bad as it is static (based on the amount
of registered operations) and only occurs once during the interpreter
startup.
If we’re in pure eval mode, then tell that in the error message rather
than (wrongly) speaking about restricted mode.
Fix https://github.com/NixOS/nix/issues/5611
This is not really useful on its own, but it does recover the
'infinite recursion' error message for '{ __functor = x: x; } 1', and
is more efficient in conjunction with #3718.
Fixes #5515.
- This change applies to builtins.toXML and its inner workings
- Proof of concept:
```nix
let e = builtins.toXML e; in e
```
- Before:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
```
- After:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
at /data/github/kamadorueda/nix/poc.nix:1:9:
1| let e = builtins.toXML e; in e
 |         ^
```
When an input follows disappears, we can't just reuse the old lock
file entries since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
Since 4806f2f6b0, we can't have paths with
references passed to builtins.{path,filterSource}. This prevents many cases
of those functions called on IFD outputs from working. Resolve this by
passing the references found in the original path to the added path.
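A rough sketch of the kind of expression this unblocks (assuming `drv` is some hypothetical derivation built at evaluation time via IFD, so its output is a store path with references):
```nix
let
  drv = import ./some-ifd-producer.nix;  # hypothetical IFD-producing derivation
in
builtins.path {
  path = "${drv}";                        # a store path whose references are now propagated
  name = "filtered";
  filter = path: type: baseNameOf path != ".git";
}
```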
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
Fix <343239fc8a (r44356843)>
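For example (sketch; the setting values are made up), a flake can now usefully carry:
```nix
{
  nixConfig.binary-caches = [ "https://example.cachix.org" ];  # hypothetical cache URL
  nixConfig.post-build-hook = "./upload-to-cache.sh";          # hypothetical hook

  outputs = { self }: { };
}
```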
We now parse function applications as a vector of arguments rather
than as a chain of binary applications, e.g. 'substring 1 2 "foo"' is
parsed as
ExprCall { .fun = <substring>, .args = [ <1>, <2>, <"foo"> ] }
rather than
ExprApp (ExprApp (ExprApp <substring> <1>) <2>) <"foo">
This allows primops to be called immediately (if enough arguments are
supplied) without having to allocate intermediate tPrimOpApp values.
On
$ nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this gives a substantial performance improvement:
user CPU time: median = 0.9209 mean = 0.9218 stddev = 0.0073 min = 0.9086 max = 0.9340 [rejected, p=0.00000, Δ=-0.21433±0.00677]
elapsed time: median = 1.0585 mean = 1.0584 stddev = 0.0024 min = 1.0523 max = 1.0623 [rejected, p=0.00000, Δ=-0.20594±0.00236]
because it reduces the number of tPrimOpApp allocations from 551990 to
42534 (i.e. only small minority of primop calls are partially
applied) which in turn reduces time spent in the garbage collector.
Rather than having them as plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much nicer to spot typos and spot deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field for the
enum and letting the compiler point us to all the now invalid usages
of it.
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let x = builtins.fetchTree x;
in x
```
- Before:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
at /data/github/kamadorueda/nix/test.nix:1:9:
1| let x = builtins.fetchTree x;
 |         ^
2| in x
```
Mentions: #3505
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let
  x = builtins.fetchMercurial x;
in
x
```
- Before:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
```
- After:
```bash
nix-instantiate --show-trace --strict
error: infinite recursion encountered
at /data/github/kamadorueda/test/default.nix:2:7:
1| let
2|   x = builtins.fetchMercurial x;
 |       ^
3| in
```
Mentions: #3505
This reverts some parts of commit
8430a8f086 which was trying to rethrow
some exceptions while we weren’t in the context of a `catch` block,
causing some weird “terminate called without an active exception”
errors.
Fix #5368
We now build the context (so this has the side-effect of making
builtins.{path,filterSource} work on derivations outputs, if IFD is
enabled) and then check that the path has no references (which is what
we really care about).
The boolean is only used to determine if the formals are set to a
non-null pointer in all our cases. We can get rid of that allocation and
instead just compare the pointer value with NULL. Saving up to
sizeof(bool) + platform specific alignment per ExprLambda instance.
Probably not a lot of memory but perhaps a few kilobyte with nixpkgs?
This also gets rid of a potential issue with dereferencing formals based on
the value of the boolean that didn't have to be aligned with the formals
pointer but was in all our cases.
I had started the trend of doing `std::visit` by value (because a type
error once misled me into thinking that was the only form that
existed). While the optimizer in principle should be able to deal with
the extra copying or extra indirection once the lambdas are inlined, sticking
with by reference is the conventional default. I hope this might even
improve performance.
I found it somewhat confusing to have an error like
error: attribute 'getFlake' missing
if the required experimental-feature (`flakes`) is not enabled. Instead,
I'd expect Nix to throw an error just like it does when using e.g. `nix
flake` without `flakes` being enabled.
With this change, the error looks like this:
$ nix-instantiate -E 'builtins.getFlake "nixpkgs"'
error: Cannot call 'builtins.getFlake' because experimental Nix feature 'flakes' is disabled. You can enable it via '--extra-experimental-features flakes'.
at «string»:1:1:
1| builtins.getFlake "nixpkgs"
 | ^
I didn't use `settings.requireExperimentalFeature` here on purpose
because this doesn't contain a position. Also, it doesn't seem as if we
need to catch the error and check for the missing feature here since
this already happens at evaluation time.
Previously, type or coercion errors for string interpolation, path
interpolation, and plus expressions were always reported at the
beginning of the outer expression. This leads to confusing evaluation
error messages making it hard to accurately diagnose and then fix the
error.
For example, errors were reported as follows.
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
 |                 ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
 |                     ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
 |                 ^
```
This commit changes the ExprConcatStrings expression vector to store a
sequence of expressions *and* their expansion locations so that error
locations can be reported accurately. For interpolation, the error is
reported at the beginning of the entire `${foo}`, not at the beginning
of `foo` because I thought this was slightly clearer. The previous
errors are now reported as:
```
cannot coerce an integer to a string
1| let foo = 7; in "bar" + foo
 |                         ^
cannot add a string to an integer
1| let foo = "bar"; in 4 + foo
 |                         ^
cannot coerce an integer to a string
1| let foo = 7; in "x${foo}"
 |                   ^
```
The error is reported at this kind of precise location even for
multi-line indented strings.
This probably helps with at least some of the cases mentioned in #561
Without this, flakes within the same tree and same lock data will have
the same fingerprint and the eval cache for one flake will be
incorrectly used for another.
Alternative to #4639. You can still read flake.lock, but at least in
reproducible workflows like NixOS configurations where you require a
non-dirty tree, evaluation will fail because there is no rev.
With --no-write-lock-file, it's possible that flake.lock is out of
sync with the actual inputs used by the evaluation. So doing fromJSON
(readFile ./flake.lock) will give wrong results.
Fixes#4639.
This fixes a use-after-free bug:
1. s = new EvalState();
2. callFlake()
3. static vCallFlake now references s
4. delete s;
5. s2 = new EvalState();
6. callFlake()
7. static vCallFlake still references s
8. crash
Nix 2.3 did not have a problem with recreating EvalState.
This fixes a class of crashes and introduces ptr<T> to make the
code robust against this failure mode going forward.
Thanks regnat for the idea of a ref<T> without overhead!
Closes #4895. Closes #4893. Closes #5127. Closes #5113.
Using `$(libdir)` when installing .pc files looks like a more generic
solution. For example, it will work for distributions like RHEL or
Fedora where .pc files are installed in `/usr/lib64/pkgconfig`.
This extends https://github.com/NixOS/nix/pull/4978 with
support for SCP-like URLs in expressions like
```nix
builtins.fetchGit {
  url = "git@github.com:NixOS/nix.git";
  ref = "master";
}
```
It does not operate on a derivation and does not return a
derivation path. Instead it works at the language level,
where a distinct term "package" is more appropriate to
distinguish the parent object of `meta.position`; an
attribute which doesn't even make it into the derivation.
Some people want to avoid using registries at all on their system; instead
of having to add --no-registries to every command, this commit allows setting
use-registries = false in the config. --no-registries is still allowed
everywhere it was allowed previously, but is now deprecated.
Co-authored-by: Eelco Dolstra <edolstra@gmail.com>
We need to support it for the “old” fetch* functions for backwards
compatibility, but we don’t need it for fetchTree (as it’s a new
function).
Given that changing the `name` messes up the content hashing, we can
just forbid passing a custom `name` argument to it
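So a sketch of an accepted call now looks like this, with the previously possible `name` override rejected:
```nix
builtins.fetchTree {
  type = "github";
  owner = "NixOS";
  repo = "nixpkgs";
  ref = "nixos-unstable";
  # name = "my-sources";  # now forbidden: a custom name would change the content hash
}
```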