A few notes:
* The `echo hi` is needed to make sure that a file readable by
`nix log` is actually created (i.e. the build needs to produce some
output). This is known and to be fixed in #6051.
* We explicitly ignore the floating-CA case here: the `$out` of `input3`
depends on the `$out` of `input2`. This means that there are actually two
derivations; I assume that this is because at eval time (i.e.
`nix-instantiate -A`) the hash of `input2` isn't known yet, and the
other .drv is created as soon as `input2` has been built. This is a
separate issue on its own, so we explicitly ignore that case here.
- Make sure that the daemon starts even without the `nix-command` experimental feature
- Fail if it doesn’t manage to start
This fixes a 30s wait for every test in `init.sh`: the daemon couldn’t
start, but the code just waited 30s and then continued as if everything
was all right.
Polling every second means that even the simplest test takes at least
2 seconds. We can reasonably poll at 1/10 of that interval to make things much
quicker (especially given that, most of the time, 0.1s is enough for the
daemon to start or stop).
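As a rough sketch of the kind of polling loop this implies (the variable name and the 30s cap are illustrative, not the actual test code):

# wait up to 30s for the daemon socket to appear, checking every 0.1s
for ((i = 0; i < 300; i++)); do
    [[ -S "$NIX_DAEMON_SOCKET_PATH" ]] && break
    sleep 0.1
done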
The tests are scheduled in the order they appear, so running the long
ones first slightly improves the scheduling.
On my machine, this decreases the time of `make install` from 40s to 36s.
diff-index operates on the view that git has of the working tree,
which might be outdated; the higher-level diff command refreshes that
view automatically. This change also adds handling for submodules.
fixes #4140
Alternative fixes would be invoking update-index before diff-index or
matching more closely what require_clean_work_tree from git-sh-setup.sh
does, but both those options make it more difficult to reason about
correctness.
The .git/refs/heads directory might be empty for a valid, usable git
repository. This often happens in CI environments, which might only
fetch commits, not branches.
So instead, we let git itself check whether HEAD points to something
that looks like a commit.
fixes #5302
previously :a would override old bindings of a name with new values if the added
set contained names that were already bound. in nix 2.6 this doesn't happen any
more, which is potentially confusing.
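a sketch of the behaviour this restores (values are just examples; with the fix, the most recently added binding should win again):

nix-repl> :a { x = 1; }
nix-repl> :a { x = 2; }
nix-repl> x
2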
fixes #6041
Bundlers are now responsible for correctly handling their inputs, which
are no longer constrained to be (Drv->Drv)->Drv->Drv, but can be of
type (attrset->Drv)->attrset->Drv.
It’s totally valid to have entries in `NIX_PATH` that aren’t valid paths
(they can even be arbitrary URLs or `channel:<channel-name>`).
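For example (the entry below is just the documented `channel:` form; the exact value doesn’t matter), evaluation that doesn’t even use the search path must not fail because of it:

$ NIX_PATH='nixpkgs=channel:nixos-unstable' nix-instantiate --eval --expr '1 + 1'
2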
Fix #5998 and #5980
if we defer the duplicate argument check for lambda formals we can use more
efficient data structures for the formals set, and we can get rid of the
duplication of formal names to boot. instead of a list of formals we've seen
and a set of names, we keep a vector and run a sort+dupcheck step
before moving the parsed formals into a newly created lambda. this improves
performance on search and rebuild by ~1%; pure parsing gains more (about 4%).
this does reorder lambda arguments in the xml output, but the output is still
stable. this shouldn't be a problem since argument order is not semantically
important anyway.
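the duplicate check itself still happens at parse time, e.g. (exact error wording and formatting may differ):

$ nix-instantiate --parse --expr '{ a, a }: a'
error: duplicate formal function argument 'a'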
before
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.550 s ± 0.060 s [User: 6.470 s, System: 1.664 s]
Range (min … max): 8.435 s … 8.666 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 346.7 ms ± 2.1 ms [User: 312.4 ms, System: 34.2 ms]
Range (min … max): 343.8 ms … 353.4 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.720 s ± 0.031 s [User: 2.415 s, System: 0.231 s]
Range (min … max): 2.662 s … 2.780 s 20 runs
after
nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 8.462 s ± 0.063 s [User: 6.398 s, System: 1.661 s]
Range (min … max): 8.339 s … 8.542 s 20 runs
nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 329.1 ms ± 1.4 ms [User: 296.8 ms, System: 32.3 ms]
Range (min … max): 326.1 ms … 330.8 ms 20 runs
nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.687 s ± 0.035 s [User: 2.392 s, System: 0.228 s]
Range (min … max): 2.626 s … 2.754 s 20 runs
This removes a dynamic stack allocation, making the derivation
unparsing logic robust against overflows when large strings are
added to a derivation.
Overflow behavior depends on the platform and stack configuration.
For instance, x86_64-linux/glibc behaves as (somewhat) expected:
$ (ulimit -s 20000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: stack overflow (possible infinite recursion)
$ (ulimit -s 40000; nix-instantiate tests/lang/eval-okay-big-derivation-attr.nix)
error: expression does not evaluate to a derivation (or a set or list of those)
However, on aarch64-darwin:
$ nix-instantiate big-attr.nix ~
zsh: segmentation fault nix-instantiate big-attr.nix
This indicates a slight flaw in the single stack protection page
approach that is not encountered with normal stack frames.
Unless `--precise` is passed, make `nix why-depends` only show the
dependencies between the store paths, without introspecting them to
find the actual references.
This also makes it ~3x faster.
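Sketch of the two modes (the installables are just examples):

$ nix why-depends nixpkgs#hello nixpkgs#glibc           # store-path level only, fast
$ nix why-depends --precise nixpkgs#hello nixpkgs#glibc # also introspects the paths to show the actual references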
There already existed a smoke test for the link content length,
but it appears that some corruptions are pernicious enough
to replace the file content with zeros while keeping the same length.
--repair-path now goes as far as checking the content of the link,
making it true to its name and actually repairing the path in such
corruption cases.
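The `--repair-path` usage itself is unchanged; e.g. (the store path is a placeholder):

$ nix-store --repair-path /nix/store/<hash>-example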
nixpkgs can save a good bit of eval memory with this primop. zipAttrsWith is
used quite a bit around nixpkgs (e.g. in the form of recursiveUpdate), but the
most costly application of this primop is in the module system. it improves on
the implementation of zipAttrsWith from nixpkgs by not checking an attribute
multiple times if it occurs more than once in the input list, by allocating
fewer values and set elements, and by avoiding many temporary objects in general.
nixpkgs has a more generic version of this operation, zipAttrsWithNames, but
this version is only used once so isn't suitable for being the base of a new
primop. if it were to be used more we should add a second primop instead.
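for illustration, the primop should behave like the nixpkgs function it replaces, roughly:

$ nix eval --expr 'builtins.zipAttrsWith (name: values: values) [ { a = 1; } { a = 2; b = 3; } ]'
{ a = [ 1 2 ]; b = [ 3 ]; }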
When we check for disappeared overrides, we can get "false positives"
for follows and overrides which are defined in the dependencies of the
flake we are locking, since they are not parsed by
parseFlakeInputs. However, at that point we already know that the
overrides couldn't possibly have changed if the input itself hasn't
changed (since we check that oldLock->originalRef == *input.ref for the
input's parent). So, to prevent this, only perform this check when the
flake could actually have changed (e.g. it's the flake we're locking, a
new input, or an input that has changed and has mustRefetch == true).
When a variable is assigned in the REPL, make sure to remove any possible
reference to the old binding so that we correctly pick up the new one
afterwards.
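A sketch of the fixed behaviour:

nix-repl> x = 1
nix-repl> x = 2
nix-repl> x
2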
Fix #5706
Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. This caused
issues when we tried to reuse oldLock but couldn't for some
reason (read: mustRefetch is true); in that case the follows were
resolved incorrectly.
Fix this by passing the correct parent, and adding some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
If we’re in pure eval mode, say so in the error message rather
than (wrongly) speaking about restricted mode.
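For instance, an error in pure eval mode should now mention pure evaluation rather than restricted mode, e.g. something like (exact wording may differ):

$ nix eval --expr '<nixpkgs>'
error: cannot look up '<nixpkgs>' in pure evaluation mode (use '--impure' to override)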
Fix https://github.com/NixOS/nix/issues/5611
When an input's `follows` disappears, we can't just reuse the old lock
file entries, since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
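For instance, with a flake-local setting like the following (the hook path is purely illustrative), the option should now also take effect when building through the daemon:

$ cat flake.nix
{
  nixConfig.post-build-hook = "/path/to/upload-to-cache.sh";
  outputs = { self }: { };
}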
Fix <343239fc8a (r44356843)>
Having the `post-build-hook` use `nix` from the client package can lead
to a deadlock when there’s a db migration to be done between the two
versions: a `nix` command running inside the hook runs as root (and as
such bypasses the daemon), so it might trigger a db migration, which
will then get stuck trying to acquire a global lock on the DB (as the
daemon that ran the hook already holds a lock on it).
When running a `:b` command in the repl, query the store for the
derivation’s outputs after building it, rather than just assuming that
they are known in the derivation itself (which isn’t true for CA
derivations).
Fix #5328
Rather than having them as plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much nicer to spot typos and deprecated features; see the
example below)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field from the
enum and letting the compiler point us to all the now-invalid usages
of it.
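For example, a typo in the feature list should now produce a warning roughly like:

$ nix --extra-experimental-features 'nix-command flaks' eval --expr '1 + 1'
warning: unknown experimental feature 'flaks'
2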
The min bound written corresponds to the date of the commit that
introduced the change, but it only got merged on master some weeks
later. Since the version is essentially the commit date, that means that
there’s a whole range of commits on master (including the current
`nixUnstable`) that have a higher version but don’t contain the required
change.
This fixes a bug in the garbage collector where if a path
/nix/store/abcd-foo is valid, but we do an
isValidPath("/nix/store/abcd-foo.lock") first, then a negative entry
for /nix/store/abcd is added to pathInfoCache (which is keyed on the
hash part of the store path, shared by both paths), so /nix/store/abcd-foo
is subsequently considered invalid and deleted.
The garbage collector no longer blocks other processes from
adding/building store paths or adding GC roots. To prevent the
collector from deleting store paths just added by another process,
processes need to connect to the garbage collector via a Unix domain
socket to register new temporary roots.