Previously, when we were attempting to reuse the old lockfile
information in the computeLocks function, we passed the parent of
the current input to the next computeLocks call. This was incorrect,
since the follows are resolved relative to the parent. It caused
issues when we tried to reuse oldLock but couldn't for some reason
(read: mustRefetch is true); in that case the follows were resolved
incorrectly.
Fix this by passing the correct parent, and add some tests to
prevent this particular regression from happening again.
Closes https://github.com/NixOS/nix/issues/5697
- Prior to this commit the boundary was exclusive of the
top-level flake.
- This is wrong since the top-level flake is still a valid
relative reference.
- Now, the check boundary is inclusive of the top-level flake.
Signed-off-by: Timothy DeHerrera <tim.deh@pm.me>
Due to a missing <atomic> include, the build fails with:
src/libutil/util.hh:350:24: error: no match for 'operator||' (operand types are 'std::atomic<bool>' and 'bool')
350 | if (_isInterrupted || (interruptCheck && interruptCheck()))
| ~~~~~~~~~~~~~~ ^~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| | |
| std::atomic<bool> bool
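The fix is simply to include <atomic> so that std::atomic<bool> is fully
declared. A minimal reduction of the situation (hypothetical, not the
actual util.hh):
```cpp
// With <atomic> included, std::atomic<bool>'s implicit conversion to
// bool makes the `||` expression below well-formed.
#include <atomic>
#include <functional>

std::atomic<bool> _isInterrupted{false};
std::function<bool()> interruptCheck;

bool shouldInterrupt()
{
    return _isInterrupted || (interruptCheck && interruptCheck());
}
```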
The manual is generated from default values, which are themselves
derived from various sources (cpuid, bios settings (kvm), number of
cores). This commit therefore hides non-reproducible settings from the
manual output.
No matter what, we need to resize the buffer so there is no scratch
space left after we do the `read`. In the end-of-file case, `got` will
be 0 from its initial value.
Before, we forgot to resize in the EOF case with the break. Yes, we know
we didn't receive any data in that case, but we still have the scratch
space to undo.
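A minimal sketch of the pattern described above (hypothetical helper,
not the actual Nix source):
```cpp
// Grow the buffer to provide scratch space for read(), then shrink it
// back to the bytes actually received -- including on EOF, where `got`
// keeps its initial value of 0.
#include <string>
#include <unistd.h>

bool readChunk(int fd, std::string & buf, size_t chunkSize = 8192)
{
    size_t oldSize = buf.size();
    buf.resize(oldSize + chunkSize);          // scratch space

    ssize_t got = read(fd, buf.data() + oldSize, chunkSize);
    if (got < 0) got = 0;                     // error handling elided

    buf.resize(oldSize + got);                // undo unused scratch space
    return got > 0;                           // false on EOF
}
```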
Co-Authored-By: Will Fancher <Will.Fancher@Obsidian.Systems>
This doesn't fix the bug, but makes the code less difficult to read.
Also improve the comments, now that it is clear what part is needed in
each code path.
Moving the arguments of the primOp into the registration structure makes
it impossible to initialize a second EvalState with the correct primOp
registrations: all those "RegisterPrimOp"s would end up registered with
an arity of zero on every EvalState instance but the first.
Not moving the memory adds a tiny bit of memory overhead during the
eval since we need a copy of all the argument lists of all the primOps.
The overhead shouldn't be too bad as it is static (based on the amount
of registered operations) and only occurs once during the interpreter
startup.
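A hedged reduction of the problem (names and shapes simplified, not the
actual Nix code):
```cpp
#include <iostream>
#include <string>
#include <vector>

struct Registration { std::string name; std::vector<std::string> args; };

// Static registration list, comparable to the RegisterPrimOp machinery.
static std::vector<Registration> registrations = {
    { "substring", { "start", "len", "s" } },
};

struct EvalState
{
    std::vector<Registration> primOps;
    EvalState()
    {
        for (auto & r : registrations)
            primOps.push_back(r);   // copy; std::move(r) would leave the
                                    // static list empty for later instances
    }
};

int main()
{
    EvalState first, second;
    // Prints 3 with the copy; with a move the second instance would see 0.
    std::cout << second.primOps[0].args.size() << "\n";
}
```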
If we’re in pure eval mode, say so in the error message rather than
(wrongly) speaking about restricted mode.
Fix https://github.com/NixOS/nix/issues/5611
We let DISPLAY (X11) through, so we should let the Wayland equivalents
through as well. Similarly, we let HOME through, so it should be okay
to allow XDG_RUNTIME_DIR (which is needed for connecting to Wayland
with WAYLAND_DISPLAY) through as well. Otherwise graphical
applications will either fall back to X11 (if they support it), or
just not work (if they don't).
This is not really useful on its own, but it does recover the
'infinite recursion' error message for '{ __functor = x: x; } 1', and
is more efficient in conjunction with #3718.
Fixes #5515.
- This change applies to builtins.toXML and its inner workings
- Proof of concept:
```nix
let e = builtins.toXML e; in e
```
- Before:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
```
- After:
```
$ nix-instantiate --eval poc.nix
error: infinite recursion encountered
at /data/github/kamadorueda/nix/poc.nix:1:9:
1| let e = builtins.toXML e; in e
|
```
These headers are included by the libexpr, libfetchers, libstore
and libutil headers.
Considering that these are vendored sources, Nix should expose them,
as it is not a good idea for reverse dependencies to rely on a
potentially different source that can go out of sync.
When an input follows disappears, we can't just reuse the old lock
file entries since we may be missing some required ones. Refetch the
input when this happens.
Closes https://github.com/NixOS/nix/issues/5289
For a typical desktop system (~2K packages) we can easily get 100K
entries in RealisationsRefs. Without indices, queries against
RealisationsRefs require a linear scan.
RealisationsRefs(referrer)
--------------------------
The inefficiency shows up as 100% CPU load in nix-daemon for the
following scenario:
1. $ nix edit -f . bash # add unused environment variable, like FOO="1"
2. # populate RealisationsRefs, build fresh system
   $ nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
3. $ nix edit -f . bash # add unused environment variable, like FOO="2"
4. $ time nix build -f nixos system --arg config '{ contentAddressedByDefault = true; }'
In this case `bash` will be rebuilt a few times and then the rest of the
CPU time is spent scanning the RealisationsRefs table (about 5
CPU-minutes on my machine).
Before the change:
$ time nix build -f nixos system ... # step 4 above
real 34m3,613s
user 0m5,232s
sys 0m0,758s
Of all this time about 29.5 minutes are taken by nix-daemon's CPU time.
After the change:
$ time nix build -f nixos system ... # step 4 above
real 4m50,061s
user 0m5,038s
sys 0m0,677s
Of all this time about 1 minute is taken by nix-daemon's CPU time.
Most of the time is spent polling for non-existent realisations on
cache.nixos.org.
Realisations(outputPath)
------------------------
After running a CA system for two weeks I got ~1M entries in the
Realisations table. `nix-collect-garbage` became very slow (seemingly
100 path deletions per second). This happens due to a slow cascading
delete from Realisations triggered by deletions from ValidPaths.
The fix is to add an index on Realisations(outputPath), the foreign key
referencing ValidPaths(id), so the cascading delete no longer has to
scan the whole table.
Before the change:
$ time nix-collect-garbage -d --max-freed 100G
<interrupted before finish, took too long>
real 23m32.411s
user 17m49.679s
sys 4m50.609s
Most of the time was spent re-scanning the Realisations table on each path deletion.
After the change:
$ time nix-collect-garbage -d --max-freed 100G
real 8m43.226s
user 6m16.317s
sys 1m40.188s
Time is spent scanning sqlite indices and in kernel when unlinking directories.
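A sketch of the resulting schema change (index names assumed, not copied
from the actual migration):
```cpp
#include <sqlite3.h>
#include <stdexcept>
#include <string>

// One index for the referrer lookups in RealisationsRefs, one for the
// cascading delete from ValidPaths into Realisations.
void addRealisationIndices(sqlite3 * db)
{
    const char * sql =
        "create index if not exists IndexRealisationsRefs"
        "    on RealisationsRefs(referrer);"
        "create index if not exists IndexRealisations"
        "    on Realisations(outputPath);";
    char * err = nullptr;
    if (sqlite3_exec(db, sql, nullptr, nullptr, &err) != SQLITE_OK) {
        std::string msg = err ? err : "unknown SQLite error";
        sqlite3_free(err);
        throw std::runtime_error(msg);
    }
}
```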
Doing it as a side-effect of calling LocalStore::makeStoreWritable()
is very ugly.
Also, make sure that stopping the progress bar joins the update
thread; otherwise that thread would have to be unshared as well.
Since 4806f2f6b0, we can't have paths with
references passed to builtins.{path,filterSource}. This prevents many cases
of those functions called on IFD outputs from working. Resolve this by
passing the references found in the original path to the added path.
When setting flake-local options (with the `nixConfig` field), forward
these options to the daemon in case we’re using one.
This is necessary in particular for options like `binary-caches` or
`post-build-hook` to make sense.
Fix <343239fc8a (r44356843)>
When running a `:b` command in the repl, query the store for the
outputs of the derivations after building them, rather than just
assuming that they are known in the derivation itself (which isn’t true
for CA derivations).
Fix #5328
We now parse function applications as a vector of arguments rather
than as a chain of binary applications, e.g. 'substring 1 2 "foo"' is
parsed as
ExprCall { .fun = <substring>, .args = [ <1>, <2>, <"foo"> ] }
rather than
ExprApp (ExprApp (ExprApp <substring> <1>) <2>) <"foo">
This allows primops to be called immediately (if enough arguments are
supplied) without having to allocate intermediate tPrimOpApp values.
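A hypothetical reduction of the new call node (names simplified from the
actual classes):
```cpp
#include <vector>

struct Expr { virtual ~Expr() = default; };

// 'substring 1 2 "foo"' becomes one node with fun = <substring> and
// args = [ <1>, <2>, <"foo"> ], so an n-ary primop can be invoked in a
// single step once all arguments are present.
struct ExprCall : Expr
{
    Expr * fun;
    std::vector<Expr *> args;
};
```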
On
$ nix-instantiate --dry-run '<nixpkgs/nixos/release-combined.nix>' -A nixos.tests.simple.x86_64-linux
this gives a substantial performance improvement:
user CPU time: median = 0.9209 mean = 0.9218 stddev = 0.0073 min = 0.9086 max = 0.9340 [rejected, p=0.00000, Δ=-0.21433±0.00677]
elapsed time: median = 1.0585 mean = 1.0584 stddev = 0.0024 min = 1.0523 max = 1.0623 [rejected, p=0.00000, Δ=-0.20594±0.00236]
because it reduces the number of tPrimOpApp allocations from 551990 to
42534 (i.e. only a small minority of primop calls are partially
applied), which in turn reduces time spent in the garbage collector.
Rather than having them as plain strings scattered through the whole
codebase, create an enum containing all the known experimental features.
This means that
- Nix can now `warn` when an unknown experimental feature is passed
(making it much nicer to spot typos and deprecated features)
- It’s now easy to remove a feature altogether (once the feature isn’t
experimental anymore or is dropped) by just removing the field from the
enum and letting the compiler point us to all the now-invalid usages of
it.
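A hedged sketch of the idea (the feature names listed here are only
examples, not the full set):
```cpp
#include <iostream>
#include <map>
#include <optional>
#include <string>

enum class ExperimentalFeature { CaDerivations, Flakes, NixCommand };

// Parsing an unknown name can now warn instead of being silently accepted.
std::optional<ExperimentalFeature> parseExperimentalFeature(const std::string & name)
{
    static const std::map<std::string, ExperimentalFeature> known = {
        { "ca-derivations", ExperimentalFeature::CaDerivations },
        { "flakes",         ExperimentalFeature::Flakes },
        { "nix-command",    ExperimentalFeature::NixCommand },
    };
    auto it = known.find(name);
    if (it == known.end()) {
        std::cerr << "warning: unknown experimental feature '" << name << "'\n";
        return std::nullopt;
    }
    return it->second;
}
```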
Currently the machine specification (`/etc/nix/machines`) parser fails
with a vague exception if the file has an incorrect format.
This commit adds verbose exceptions and unit tests for the parser.
Fixed a bug in the initialization of the 'base64DecodeChars' variable:
previously the decoder did not fail on invalid Base64 strings. Added a
test case to verify the fix.
Also made 'base64DecodeChars' be computed at compile time, and added a
test case that encodes/decodes a string with non-printable characters.
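A sketch of a compile-time decode table with an invalid-input check
(assumed shape, not the actual implementation):
```cpp
#include <array>
#include <cstdint>
#include <stdexcept>

constexpr char base64Chars[] =
    "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

// Built at compile time; -1 marks characters that are not valid Base64,
// which lets the decoder reject bad input instead of producing garbage.
constexpr std::array<std::int8_t, 256> makeBase64DecodeChars()
{
    std::array<std::int8_t, 256> t{};
    for (int i = 0; i < 256; i++) t[i] = -1;
    for (int i = 0; i < 64; i++) t[(unsigned char) base64Chars[i]] = i;
    return t;
}

constexpr auto base64DecodeChars = makeBase64DecodeChars();

void checkBase64Char(char c)
{
    if (base64DecodeChars[(unsigned char) c] == -1)
        throw std::runtime_error("invalid character in Base64 string");
}
```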
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let x = builtins.fetchTree x;
in x
```
- Before:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --extra-experimental-features flakes --strict
error: infinite recursion encountered
at /data/github/kamadorueda/nix/test.nix:1:9:
1| let x = builtins.fetchTree x;
| ^
2| in x
```
Mentions: #3505
This ensures any started processes can't write to /nix/store (except
during builds). This partially reverts 01d07b1e, which happened because
of #2646.
The problem only happened after Nix downloaded anything, which made me
suspect the download thread. The problem turns out to be:
"A process can't join a new mount namespace if it is sharing
filesystem-related attributes with another process", and in this case
that other process is the curl thread.
Ideally, we might kill it before spawning the shell process, but it's
inside a static variable in the getFileTransfer() function. So
instead, stop it from sharing FS state using unshare(). A strategy
such as the one from #5057 (single-threaded chroot helper binary) is
also very much on the table.
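A minimal sketch of the workaround (placement hypothetical; Linux/glibc
specific):
```cpp
#include <sched.h>
#include <cstdio>

// Called from the download (curl) thread: after unshare(CLONE_FS) it no
// longer shares filesystem attributes (cwd, root, umask) with the rest
// of the process, so other code can enter a new mount namespace.
void unshareFilesystemState()
{
    if (unshare(CLONE_FS) == -1)
        perror("unshare(CLONE_FS)");
}
```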
Fixes #4337.
- This way we improve error messages
on infinite recursion
- Demo:
```nix
let
x = builtins.fetchMercurial x;
in
x
```
- Before:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
```
- After:
```bash
$ nix-instantiate --show-trace --strict
error: infinite recursion encountered
at /data/github/kamadorueda/test/default.nix:2:7:
1| let
2| x = builtins.fetchMercurial x;
| ^
3| in
```
Mentions: #3505
We can actually just load nss ourselves and call into nss to configure
it; we don't need to run a dummy query just to have nss load nss_dns as
a side effect.
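A hedged sketch of the idea (glibc-specific; the exact calls are assumed,
not copied from the patch):
```cpp
#include <dlfcn.h>
#include <nss.h>

void preloadNSS()
{
    // Configure the "hosts" database explicitly ...
    __nss_configure_lookup("hosts", "files dns");

    // ... and load the dns plugin eagerly, rather than relying on a dummy
    // query to pull it in as a side effect. Failure is non-fatal here.
    dlopen("libnss_dns.so.2", RTLD_NOW);
}
```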
Signed-off-by: Arthur Gautier <baloo@superbaloo.net>