nixos/nix#6290 introduced a regex pattern to account for tags when
resolving sourcehut refs. nixos/nix#4638 refactored the code,
accidentally treating the pattern as a regular string, causing all
non-HEAD ref resolution to break.
This fixes the regression and adds more test cases to avoid future
breakage.
The manpage for `getgrouplist` says:
> If the number of groups of which user is a member is less than or
> equal to *ngroups, then the value *ngroups is returned.
>
> If the user is a member of more than *ngroups groups, then
> getgrouplist() returns -1. In this case, the value returned in
> *ngroups can be used to resize the buffer passed to a further
> call getgrouplist().
In our original code, however, we allocated a list of size `10` and, if
`getgrouplist` returned `-1`, threw an exception. In practice, this
caused the code to fail for any user belonging to more than 10 groups.
While this is unusual on single-user systems, users at large companies
commonly belong to a huge number of POSIX groups, so this issue crops up
and makes multi-user Nix unusable in such settings.
The fix is relatively simple: when `getgrouplist` fails, it stores the
real number of GIDs in `ngroups`, so we must resize our list and retry.
Only if it fails once more do we raise an exception.
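A minimal sketch of the resize-and-retry pattern (Linux prototype; the buffer size, function name and error type are illustrative, not the exact Nix code):

```cpp
#include <grp.h>
#include <stdexcept>
#include <vector>

std::vector<gid_t> getSupplementaryGroups(const char * user, gid_t primaryGroup)
{
    int ngroups = 10;                      // initial guess, as in the old code
    std::vector<gid_t> groups(ngroups);
    if (getgrouplist(user, primaryGroup, groups.data(), &ngroups) == -1) {
        // getgrouplist stored the real group count in ngroups: resize and retry.
        groups.resize(ngroups);
        if (getgrouplist(user, primaryGroup, groups.data(), &ngroups) == -1)
            throw std::runtime_error("failed to get list of supplementary groups");
    }
    groups.resize(ngroups);                // ngroups now holds the actual count
    return groups;
}
```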
This should be backported to, at least, 2.9.x.
Bring back the possibility to copy CA paths with no references (like the
outputs of FO derivations or stuff imported at eval time) between stores
that have different prefixes.
If a package's attribute path, description or name contains a match for any of the
regexes specified via `-e` or `--exclude`, that package is excluded from
the final output.
Currently nix-build prints the "printMissing" information by default,
while nix build doesn't.
People generally don't notice this because the standard log format of
nix build would not display the printMissing output long enough for the
information to be perceived.
This addresses https://github.com/NixOS/nix/issues/6561
To quote Eelco in #5867:
> Unfortunately we can't do
>
> evalSettings.pureEval.setDefault(false);
>
> because then we have to do the same in main.cc (where
> pureEval is set to true), and that would allow pure-eval
> to be disabled globally from nix.conf.
Instead, a command should specify that it should be impure by
default. Then, `evalSettings.pureEval` will be set to `false` unless
it's overridden by e.g. a CLI flag.
In that case it's IMHO OK to be (theoretically) able to override
`pure-eval` via `nix.conf` because it doesn't have an effect on commands
where `forceImpureByDefault` returns `false` (i.e. everything where pure
eval actually matters).
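A rough sketch of that opt-in, under the assumption that `forceImpureByDefault` is a virtual method on the eval command base class (the class and command names here are simplified/hypothetical):

```cpp
struct EvalCommand
{
    // Commands override this to request impure evaluation by default;
    // CLI flags can still override the resulting default.
    virtual bool forceImpureByDefault() { return false; }
};

struct CmdSomethingImpure : EvalCommand   // hypothetical command
{
    bool forceImpureByDefault() override { return true; }
};

// During setup (simplified):
//   if (cmd.forceImpureByDefault())
//       evalSettings.pureEval.setDefault(false);
```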
Closes #5867
The git fetcher code used to dereference the (potentially empty) `ref`
input attribute. This was magically working, probably because the
compiler somehow outsmarted us, but is now blowing up with newer nixpkgs
versions.
Fix that by not trying to access this field when we don't know for sure
that it has been defined.
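A small illustration of the guard, assuming the `ref` attribute is held in an `std::optional` (the field and fallback are illustrative):

```cpp
#include <optional>
#include <string>

std::optional<std::string> ref;   // the input's `ref` attribute, possibly unset

std::string effectiveRef()
{
    // Before: *ref was dereferenced unconditionally, which is undefined
    // behaviour when the attribute was never set.
    if (ref) return *ref;
    return "HEAD";                // illustrative fallback only
}
```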
Fix #6554
Without the change the build fails on this week's gcc-13 snapshot as:
src/libutil/json.cc: In function 'void nix::toJSON(std::ostream&, const char*, const char*)':
src/libutil/json.cc:33:22: error: 'uint16_t' was not declared in this scope
33 | put(hex[(uint16_t(*i) >> 12) & 0xf]);
| ^~~~~~~~
src/libutil/json.cc:5:1: note: 'uint16_t' is defined in header '<cstdint>'; did you forget to '#include <cstdint>'?
4 | #include <cstring>
+++ |+#include <cstdint>
5 |
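The fix is exactly what the diagnostic suggests, adding the missing header to `json.cc`:

```cpp
#include <cstdint>   // declares uint16_t and friends
```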
This solves the error
error: cannot connect to socket at '/nix/var/nix/daemon-socket/socket': Connection refused
on build farm systems that are loaded but operating normally.
I've seen this happen on an M1 mac running a loaded hercules-ci-agent.
Hercules CI uses multiple worker processes, which may connect to
the Nix daemon around the same time. It's not unthinkable that
the Nix daemon listening process isn't scheduled until after 6
workers try to connect, especially on a system under load with
many workers.
Is the increase safe?
The number is the maximum number of connections that the kernel will
buffer while the listening process hasn't `accept`-ed them yet.
It did not, and will not, restrict the total number of daemon
forks that a client can create.
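For reference, the number in question is the backlog argument to `listen(2)`; a minimal sketch (the value shown below is purely illustrative, the commit's actual new value is not restated here):

```cpp
#include <cerrno>
#include <sys/socket.h>
#include <system_error>

void startListening(int fdSocket)
{
    // The kernel queues at most `backlog` connections that have not been
    // accept()ed yet; with the old value of 5, a burst of simultaneous
    // clients could be refused on a loaded system.
    int backlog = 64;   // illustrative value only
    if (listen(fdSocket, backlog) == -1)
        throw std::system_error(errno, std::generic_category(), "listen");
}
```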
History
The number 5 has remained unchanged since it was introduced in
nix-worker in 0130ef88ea in 2006.
Add a new `file` fetcher type, which will fetch a plain file over
http(s), or from the local filesystem.
Because plain `http(s)://` or `file://` urls can already correspond to
`tarball` inputs (if the path ends up with a known archive extension),
the URL parsing logic is a bit convoluted in that:
- {http,https,file}:// urls will be interpreted as either a tarball or a
file input, depending on the extension of the path part (so
`https://foo.com/bar` will be a `file` input and
`https://foo.com/bar.tar.gz` a `tarball` input)
- `file+{something}://` urls will be interpreted as `file` urls (with
the `file+` part removed)
- `tarball+{something}://` urls will be interpreted as `tarball` urls (with
the `tarball+` part removed)
Fix #3785
Co-Authored-By: Tony Olagbaiye <me@fron.io>
If experimental feature "flakes" is enabled, args passed to `nix repl`
will now be considered flake refs and imported using the existing
`:load-flake` machinery.
In addition, `:load-flake` now supports loading flake fragments.
Don't explicitly give it a constructor, but use aggregate
initialization instead (this also prevents an implicit coercion, which
is probably good here).
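To illustrate the difference with made-up types (not the actual struct touched by this change):

```cpp
struct WithCtor
{
    int level;
    WithCtor(int l) : level(l) {}   // single-argument constructor: also acts
};                                  // as an implicit conversion from int

struct Aggregate
{
    int level;                      // no constructor: built with Aggregate{1},
};                                  // no implicit conversion from int

void f(WithCtor v);
void g(Aggregate v);

// f(1);            // compiles via the implicit conversion
// g(1);            // rejected; g(Aggregate{1}) must be spelled out
```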
Ensures the logger is stopped on exit in legacy commands. Without this,
when using `nix-build --log-format bar` and stopping nix with CTRL+C,
the bar is not cleared from the screen.
* libexpr: fix builtins.split example
The example previously indicated that multiple whitespace characters
would be collapsed into a single captured whitespace. That isn't true
and was likely a mistake when this was initially documented.
* Fix segfault on uninitialized list when printing value
Since lists are just chunks of memory, the individual elements in the
list might be uninitialized when a programming error happens within Nix.
In this case the values are null-initialized (at least with Boehm GC)
and we can avoid a nullptr deref when printing them (see the sketch
after this list of changes).
I ran into this issue while ensuring that new expression tests would
show the actual value on an assertion failure.
This is unlikely to cause any runtime performance regressions as
printing values is not really in the hot path (unless the repl is the
primary use case).
* Add operator<< for ValueTypes
* Add libexpr tests
This introduces tests for libexpr that evaluate various trivial Nix
language expressions and primop invocations, which should be good smoke
tests for whether or not the implementation is behaving as expected.
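A sketch of the null guard mentioned in the segfault fix above (the type and function names are simplified stand-ins for the libexpr internals):

```cpp
#include <ostream>

struct Value;                                  // stand-in for nix::Value
void printValue(std::ostream & out, const Value & v);

void printListElement(std::ostream & out, const Value * elem)
{
    if (!elem) {
        // Uninitialized list slot: null with the Boehm GC allocator.
        out << "«uninitialized»";
        return;
    }
    printValue(out, *elem);
}
```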
Since a26be9f3b8, the same parser is used
to parse the result of sourcehut’s `HEAD` endpoint (coming from [git
dumb protocol]) and the output of `git ls-remote`. However, they are very
slightly different (the former doesn’t specify the current reference
since it’s implied to be `HEAD`).
Unify both, and make the parser a bit more robust and understandable (by
making it more strongly typed and adding tests for it).
[git dumb protocol]: https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols#_the_dumb_protocol
I'm afraid I missed a few problematic `git(1)`-calls while implementing
PR #6440, sorry for that! Upon investigating what went wrong, I realized
that I only tested the "cached" case by accident, because the git
checkout containing my system's flake was apparently already cached
during my debugging.
I managed to trigger the original issue again by running:
$ git commit --allow-empty -m "tmp"
$ sudo nixos-rebuild switch --flake .# -L --builders ''
Since `repoDir` points to the checkout that's potentially owned by
another user, I decided to add `--git-dir` to each call affecting
`repoDir`.
Since the `tmpDir` for the temporary submodule-checkout is created by
Nix itself, it doesn't seem to be an issue.
Sorry for that, it should be fine now.
The previous head caching implementation stored two paths in the local
cache: one for the cached git repo and another for a text file containing
the resolved HEAD ref. This commit instead stores the resolved HEAD by
setting the HEAD ref in the local cache appropriately.
'nix profile install' will now install all outputs listed in the
package's meta.outputsToInstall attribute, or all outputs if that
attribute doesn't exist. This makes it behave consistently with
nix-env. Fixes #6385.
Furthermore, for consistency, all other 'nix' commands do this as
well. E.g. 'nix build' will build and symlink the outputs in
meta.outputsToInstall, defaulting to all outputs. Previously, it only
built/symlinked the first output. Note that this means that selecting
a specific output using attrpath selection (e.g. 'nix build
nixpkgs#libxml2.dev') no longer works. A subsequent PR will add a way
to specify the desired outputs explicitly.
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps, it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message in eg `formatSymbol()` is error-prone and
inconvenient.
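roughly, the resulting split of responsibilities looks like this (heavily simplified, not the actual definitions):

```cpp
#include <cstdint>
#include <ostream>
#include <string>

struct Symbol                     // cheap handle, previously called SymbolIdx
{
    uint32_t id;                  // index into the symbol table
    bool operator==(Symbol other) const { return id == other.id; }
};

struct SymbolStr                  // returned by lookups into the symbol table
{
    const std::string * s;        // string owned by the symbol table
    // knows how to quote/escape the symbol when printed
    friend std::ostream & operator<<(std::ostream & os, SymbolStr sym);
};
```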
The `--git-dir` option must be `.` in some cases (for cached repos that are
"bare" repos in `~/.cache/nix/gitv3`). With this fix we can add
`--git-dir` to each `git` invocation needed for `nixos-rebuild`.
To demonstrate the problem:
* You need a `git` at 2.33.3 in your $PATH
* An expression like this in a git repository:
``` nix
{
  outputs = { self, nixpkgs }: {
    packages.foo.x86_64-linux = with nixpkgs.legacyPackages.x86_64-linux;
      runCommand "snens" { } ''
        echo ${(builtins.fetchGit ./.).lastModifiedDate} > $out
      '';
  };
}
```
Now, when instantiating the package via `builtins.getFlake`, it fails on
Nix 2.7 like this:
$ nix-instantiate -E '(builtins.getFlake "'"$(pwd)"'").packages.foo.x86_64-linux'
fatal: unsafe repository ('/nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory /nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source
error: program 'git' failed with exit code 128
(use '--show-trace' to show detailed location information)
This breaks e.g. `nixops`-deployments using flakes with similar
expressions as shown above.
The cause for this is that `git(1)` tries to find the highest
`.git` directory in the directory tree and, if it finds such a
directory that is owned by another user (root vs. the user who evaluates
the expression), it fails as above. This was changed recently to fix
CVE-2022-24765[1].
By explicitly specifying `--git-dir`, Git assumes it is in the top-level
directory and doesn't attempt to look for a `.git` directory in the
parent directories, so the code path leading to said error is never
reached.
[1] https://lore.kernel.org/git/xmqqv8veb5i6.fsf@gitster.g/
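Schematically, the affected git invocations now pin the repository location explicitly (the argument list below is illustrative; the real code goes through Nix's process-spawning helper):

```cpp
#include <string>
#include <vector>

std::vector<std::string> gitArgs(const std::string & repoDir, const std::string & gitDir)
{
    // With --git-dir given, git never walks up the directory tree looking for
    // a .git directory, so the safe.directory ownership check is not hit.
    return { "git", "-C", repoDir, "--git-dir", gitDir, "rev-parse", "HEAD" };
}
```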
The produced path is then allowed to be imported or used elsewhere:
```
assert (43 == import (builtins.toFile "source" "43")); "good"
```
This will still fail on write-only stores.
with position and symbol tables in place we can now shrink Attr by a full
pointer with some simple field reordering. since Attr is a very hot struct this
has substantial impact on memory use, decreasing GC allocations and heap size by
10-15% each. we also get a ~15% performance improvement due to reduced GC
loading.
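the idea, schematically (field names and exact layout illustrative): with 32-bit symbol and position indices, the two small fields can share one 8-byte slot, so the struct needs one fewer pointer-sized slot on 64-bit targets.

```cpp
#include <cstdint>

struct Value;

// before reordering: the 32-bit fields are separated by the pointer, leaving
// padding (typically 4 + pad + 8 + 4 + pad = 24 bytes on 64-bit targets)
struct AttrBefore { uint32_t name; Value * value; uint32_t pos; };

// after reordering: the two 32-bit indices sit next to each other (16 bytes)
struct AttrAfter { uint32_t name; uint32_t pos; Value * value; };

static_assert(sizeof(AttrAfter) < sizeof(AttrBefore), "one pointer smaller");
```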
pure parsing has taken a hit over the branch base because positions are now
slightly more expensive to create, but overall we get a noticeable improvement.
before (on memory-friendliness):
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.960 s ± 0.028 s [User: 5.832 s, System: 0.897 s]
Range (min … max): 6.886 s … 7.005 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 328.1 ms ± 1.7 ms [User: 295.8 ms, System: 32.2 ms]
Range (min … max): 324.9 ms … 331.2 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.688 s ± 0.029 s [User: 2.365 s, System: 0.238 s]
Range (min … max): 2.642 s … 2.742 s 20 runs
after:
Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
Time (mean ± σ): 6.902 s ± 0.039 s [User: 5.844 s, System: 0.783 s]
Range (min … max): 6.820 s … 6.956 s 20 runs
Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
Time (mean ± σ): 330.7 ms ± 2.2 ms [User: 300.6 ms, System: 30.0 ms]
Range (min … max): 327.5 ms … 334.5 ms 20 runs
Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
Time (mean ± σ): 2.330 s ± 0.027 s [User: 2.040 s, System: 0.234 s]
Range (min … max): 2.272 s … 2.383 s 20 runs
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
only file and line of the returned position were ever used, and it wasn't
actually used as a position. as such we may as well use a path+int pair for
only those two
values and remove a use of Pos that would not work well with a position table.
a future commit will remove the ability to convert the symbol type used in
bindings to strings. since we only have two users we can inline the error check.
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.
we don't *need* symbols here. the only advantage they have over strings is
making call-counting slightly faster, but that's a diagnostic feature and thus
needn't be optimized.
this also fixes a move bug that previously didn't show up: PrimOp structs were
accessed after being moved from, which technically invalidates them. previously
the names remained valid because Symbol copies on move, but strings are
invalidated. we now copy the entire primop struct instead of moving it, since
primop registration happens only once and is not performance-sensitive.
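roughly the shape of the hazard and of the fix (simplified; the real registration code differs):

```cpp
#include <map>
#include <string>
#include <utility>

struct PrimOp { std::string name; int arity = 0; };

std::map<std::string, PrimOp> primOps;

void registerBuggy(PrimOp p)
{
    auto & slot = primOps[p.name];
    slot = std::move(p);
    // p is now moved-from; reading p.name after this point happened to work
    // with interned Symbols (which copy on move) but not with plain strings.
}

void registerFixed(PrimOp p)
{
    primOps[p.name] = p;   // copy instead of move: registration happens once,
                           // so the extra copy is not performance-sensitive
}
```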
Without the change any CA deletion triggers a linear scan on the large
RealisationsRefs table:
sqlite> .eqp full
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SCAN RealisationsRefs
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
With the change it gets turned into a lookup:
sqlite> CREATE INDEX IndexRealisationsRefsRealisationReference on RealisationsRefs(realisationReference);
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SEARCH RealisationsRefs USING INDEX IndexRealisationsRefsRealisationReference (realisationReference=?)
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
If the derivation `foo` depends on `bar`, and they both have the same
output path (because they are CA derivations), then this output path
will depend both on the realisation of `foo` and of `bar`, which
themselves depend on each other.
This confuses SQLite, which isn't able to automatically solve this
diamond dependency scheme.
Help it by adding a trigger to delete all the references between the
relevant realisations.
Fix #5320
Otherwise the clang builds fail: `SQLiteBusy` inherits the constructor,
`SQLiteError::_throw` tries to call it, and that call fails.
Strangely, gcc works fine with it. Not sure what the correct behavior is
and which compiler is buggy here, but either way, making it public is at
worst a reasonable workaround.
Don't say that the derivation is CA, as this might happen for a non-CA
derivation too.
Technically we could always recover _something_ for a purely
input-addressed derivation (like we already do when the `ca-derivations`
experimental feature isn't enabled), but it seems better to consistently
fail, since the end result wouldn't really make sense in most cases anyway.
On the one hand, this ensures that use sites properly trigger new
monomorphisations; on the other, it keeps the main `sqlite.hh` clean and
interface-only. I think that is good practice in general, but in this
particular situation we do indeed have `sqlite.hh` users that don't
need the `throw_` function.