Add a new `file` fetcher type, which will fetch a plain file over
http(s), or from the local filesystem.
Because plain `http(s)://` or `file://` urls can already correspond to
`tarball` inputs (if the path ends with a known archive extension),
the URL parsing logic is a bit convoluted in that:
- {http,https,file}:// urls will be interpreted as either a tarball or a
file input, depending on the extension of the path part (so
`https://foo.com/bar` will be a `file` input and
`https://foo.com/bar.tar.gz` a `tarball` input; see the sketch after
this list)
- `file+{something}://` urls will be interpreted as `file` urls (with
the `file+` part removed)
- `tarball+{something}://` urls will be interpreted as `tarball` urls (with
the `tarball+` part removed)
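For illustration, a sketch of how these rules could surface in flake inputs
(hypothetical URLs; `flake = false` marks the inputs as non-flake sources):
``` nix
{
  inputs = {
    # no archive extension → fetched as a `file` input
    script = { url = "https://example.org/hello.txt"; flake = false; };
    # known archive extension → fetched as a `tarball` input
    src = { url = "https://example.org/hello-1.0.tar.gz"; flake = false; };
    # `file+` prefix forces `file` semantics despite the extension
    raw = { url = "file+https://example.org/hello-1.0.tar.gz"; flake = false; };
  };
  outputs = { ... }: { };
}
```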
Fix #3785
Co-Authored-By: Tony Olagbaiye <me@fron.io>
If experimental feature "flakes" is enabled, args passed to `nix repl`
will now be considered flake refs and imported using the existing
`:load-flake` machinery.
In addition, `:load-flake` now supports loading flake fragments.
Don’t explicitly give it a constructor, but use aggregate
initialization instead (also prevents having an implicit coercion, which
is probably good here)
Ensures the logger is stopped on exit in legacy commands. Without this,
when using `nix-build --log-format bar` and stopping nix with CTRL+C,
the bar is not cleared from the screen.
* libexpr: fix builtins.split example
The example previously indicated that multiple whitespaces would be
collapsed into a single captured whitespace. That isn't true and was
likely a mistake in the initial documentation (see the example after this
list).
* Fix segfault on uninitialized list when printing value
Since lists are just chunks of memory the individual elements in the
list might be uninitialized when a programming error happens within Nix.
In this case the values are null-initialized (at least with Boehm GC)
and we can avoid a nullptr deref when printing them.
I ran into this issue while ensuring that new expression tests would
show the actual value on an assertion failure.
This is unlikely to cause any runtime performance regressions as
printing values is not really in the hot path (unless the repl is the
primary use case).
* Add operator<< for ValueTypes
* Add libexpr tests
This introduces tests for libexpr that evaluate various trivial Nix
language expressions and primop invocations that should be good smoke
tests for whether or not the implementation is behaving as expected.
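To illustrate the corrected `builtins.split` behaviour from the first point
(a minimal example, not the manual's exact wording): a run of whitespace is
captured verbatim, not collapsed:
``` nix
builtins.split "([[:space:]]+)" "two   words"
# evaluates to [ "two" [ "   " ] "words" ]; the capture keeps all three spaces
```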
Since a26be9f3b8, the same parser is used
to parse the result of sourcehut’s `HEAD` endpoint (coming from [git
dumb protocol]) and the output of `git ls-remote`. However, they are very
slightly different (the former doesn’t specify the current reference
since it’s implied to be `HEAD`).
Unify both, and make the parser a bit more robust and understandable (by
making it more typed and adding tests for it)
[git dumb protocol]: https://git-scm.com/book/en/v2/Git-Internals-Transfer-Protocols#_the_dumb_protocol
I'm afraid I missed a few problematic `git(1)`-calls while implementing
PR #6440, sorry for that! Upon investigating what went wrong, I realized
that I only tested against the "cached"-case by accident because my
git-checkout with my system's flake was apparently cached during my
debugging.
I managed to trigger the original issue again by running:
$ git commit --allow-empty -m "tmp"
$ sudo nixos-rebuild switch --flake .# -L --builders ''
Since `repoDir` points to the checkout that's potentially owned by
another user, I decided to add `--git-dir` to each call affecting
`repoDir`.
Since the `tmpDir` for the temporary submodule-checkout is created by
Nix itself, it doesn't seem to be an issue.
Sorry for that, it should be fine now.
The previous head caching implementation stored two paths in the local
cache: one for the cached git repo and another for a text file containing
the resolved HEAD ref. This commit instead stores the resolved HEAD by
setting the HEAD ref in the local cache appropriately.
'nix profile install' will now install all outputs listed in the
package's meta.outputsToInstall attribute, or all outputs if that
attribute doesn't exist. This makes it behave consistently with
nix-env. Fixes #6385.
Furthermore, for consistency, all other 'nix' commands do this as
well. E.g. 'nix build' will build and symlink the outputs in
meta.outputsToInstall, defaulting to all outputs. Previously, it only
built/symlinked the first output. Note that this means that selecting
a specific output using attrpath selection (e.g. 'nix build
nixpkgs#libxml2.dev') no longer works. A subsequent PR will add a way
to specify the desired outputs explicitly.
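For reference, a package advertises its default outputs through
`meta.outputsToInstall`; a minimal sketch with a hypothetical package:
``` nix
stdenv.mkDerivation {
  pname = "libfoo";   # hypothetical package
  version = "1.0";
  src = ./.;
  outputs = [ "out" "dev" "man" ];
  # 'nix profile install' and 'nix build' now install/symlink exactly these
  # outputs; if the attribute is missing, all outputs are used.
  meta.outputsToInstall = [ "out" "man" ];
}
```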
after #6218 `Symbol` no longer confers a uniqueness invariant on the
string it wraps; it is now possible to create multiple symbols that
compare equal but whose string contents have different addresses. this
guarantee is now only provided by `SymbolIdx`, leaving `Symbol` only as
a string wrapper that knows about the intricacies of how symbols need to
be formatted for output.
this change renames `SymbolIdx` to `Symbol` to restore the previous
semantics of `Symbol` to that name. we also keep the wrapper type and
rename it to `SymbolStr` instead of returning plain strings from lookups
into the symbol table because symbols are formatted for output in many
places. theoretically we do not need `SymbolStr`, only a function that
formats a string for output as a symbol, but having to wrap every symbol
that appears in a message into eg `formatSymbol()` is error-prone and
inconvenient.
The `--git-dir=` must be `.` in some cases (for cached repos that are
"bare" repos in `~/.cache/nix/gitv3`). With this fix we can add
`--git-dir` to each `git`-invocation needed for `nixos-rebuild`.
To demonstrate the problem:
* You need a `git` at 2.33.3 in your $PATH
* An expression like this in a git repository:
``` nix
{
  outputs = { self, nixpkgs }: {
    packages.foo.x86_64-linux = with nixpkgs.legacyPackages.x86_64-linux;
      runCommand "snens" { } ''
        echo ${(builtins.fetchGit ./.).lastModifiedDate} > $out
      '';
  };
}
```
Now, when instantiating the package via `builtins.getFlake`, it fails on
Nix 2.7 like this:
$ nix-instantiate -E '(builtins.getFlake "'"$(pwd)"'").packages.foo.x86_64-linux'
fatal: unsafe repository ('/nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source' is owned by someone else)
To add an exception for this directory, call:
git config --global --add safe.directory /nix/store/a7j3125km4h8l0p71q6ssfkxamfh5d61-source
error: program 'git' failed with exit code 128
(use '--show-trace' to show detailed location information)
This breaks e.g. `nixops`-deployments using flakes with similar
expressions as shown above.
The cause for this is that `git(1)` tries to find the highest
`.git`-directory in the directory tree and if it finds such a
directory, but with another owning user (root vs. the user who evaluates
the expression), it fails as above. This was changed recently to fix
CVE-2022-24765[1].
By explicitly specifying `--git-dir`, Git assumes it is in the top-level
directory and doesn't attempt to look for a `.git`-directory in the
parent directories and thus the code-path leading to said error is never
reached.
[1] https://lore.kernel.org/git/xmqqv8veb5i6.fsf@gitster.g/
The produced path is then allowed to be imported or utilized elsewhere:
```
assert (43 == import (builtins.toFile "source" "43")); "good"
```
This will still fail on write-only stores.
with position and symbol tables in place we can now shrink Attr by a full
pointer with some simple field reordering. since Attr is a very hot struct this
has substantial impact on memory use, decreasing GC allocations and heap size by
10-15% each. we also get a ~15% performance improvement due to reduced GC
loading.
pure parsing has taken a hit over the branch base because positions are now
slightly more expensive to create, but overall we get a noticeable improvement.
before (on memory-friendliness):
  Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
    Time (mean ± σ):     6.960 s ± 0.028 s    [User: 5.832 s, System: 0.897 s]
    Range (min … max):   6.886 s … 7.005 s    20 runs

  Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
    Time (mean ± σ):     328.1 ms ± 1.7 ms    [User: 295.8 ms, System: 32.2 ms]
    Range (min … max):   324.9 ms … 331.2 ms    20 runs

  Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):     2.688 s ± 0.029 s    [User: 2.365 s, System: 0.238 s]
    Range (min … max):   2.642 s … 2.742 s    20 runs

after:

  Benchmark 1: nix search --no-eval-cache --offline ../nixpkgs hello
    Time (mean ± σ):     6.902 s ± 0.039 s    [User: 5.844 s, System: 0.783 s]
    Range (min … max):   6.820 s … 6.956 s    20 runs

  Benchmark 2: nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
    Time (mean ± σ):     330.7 ms ± 2.2 ms    [User: 300.6 ms, System: 30.0 ms]
    Range (min … max):   327.5 ms … 334.5 ms    20 runs

  Benchmark 3: nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):     2.330 s ± 0.027 s    [User: 2.040 s, System: 0.234 s]
    Range (min … max):   2.272 s … 2.383 s    20 runs
this slightly increases the amount of memory used for any given symbol, but this
increase is more than made up for if the symbol is referenced more than once in
the EvalState that holds it. on average every symbol should be referenced at
least twice (once to introduce a binding, once to use it), so we expect no
increase in memory on average.
symbol tables are limited to 2³² entries like position tables, and similar
arguments apply to why overflow is not likely: 2³² symbols would require as many
string instances (at 24 bytes each) and map entries (at 24 bytes or more each,
assuming that the map holds on average at most one item per bucket as the docs
say). a full symbol table would require at least 192GB of memory just for
symbols, which is well out of reach. (an ofborg eval of nixpkgs today creates
less than a million symbols!)
PosTable deduplicates origin information, so using symbols for paths is no
longer necessary. moving away from path Symbols also reduces the usage of
symbols for things that are not keys in attribute sets, which will become
important in the future when we turn symbols into indices as well.
Pos objects are somewhat wasteful as they duplicate the origin file name and
input type for each object. on files that produce more than one Pos when parsed
this is a sizeable waste of memory (one pointer per Pos). the same goes for
ptr<Pos> on 64 bit machines: parsing enough source to require 8 bytes to locate
a position would need at least 8GB of input and 64GB of expression memory. it's
not likely that we'll hit that any time soon, so we can use a uint32_t index to
locate positions instead.
when we introduce position and symbol tables we'll need to do lookups to turn
indices into those tables into actual positions/symbols. having the error
functions as members of EvalState will avoid a lot of churn for adding lookups
into the tables for each caller.
only file and line of the returned position were ever used, it wasn't actually
used as a position. as such we may as well use a path+int pair for only those two
values and remove a use of Pos that would not work well with a position table.
a future commit will remove the ability to convert the symbol type used in
bindings to strings. since we only have two users we can inline the error check.
the only use of this function is to determine whether a lambda has a non-set
formal, but this use is arguably better served by Symbol::set and using a
non-Symbol instead of an empty symbol in the parser when no such formal is present.
we don't *need* symbols here. the only advantage they have over strings is
making call-counting slightly faster, but that's a diagnostic feature and thus
needn't be optimized.
this also fixes a move bug that previously didn't show up: PrimOp structs were
accessed after being moved from, which technically invalidates them. previously
the names remained valid because Symbol copies on move, but strings are
invalidated. we now copy the entire primop struct instead of moving since primop
registration happens only once and is not performance-sensitive.
Without the change any CA deletion triggers a linear scan on a large
RealisationsRefs table:
sqlite> .eqp full
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SCAN RealisationsRefs
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
With the change it gets turned into a lookup:
sqlite> CREATE INDEX IndexRealisationsRefsRealisationReference on RealisationsRefs(realisationReference);
sqlite> delete from RealisationsRefs where realisationReference IN ( select id from Realisations where outputPath = 1234567890 );
QUERY PLAN
|--SEARCH RealisationsRefs USING INDEX IndexRealisationsRefsRealisationReference (realisationReference=?)
`--LIST SUBQUERY 1
`--SEARCH Realisations USING COVERING INDEX IndexRealisationsRefsOnOutputPath (outputPath=?)
If the derivation `foo` depends on `bar`, and they both have the same
output path (because they are CA derivations), then this output path
will depend both on the realisation of `foo` and of `bar`, which
themselves depend on each other.
This confuses SQLite which isn’t able to automatically solve this
diamond dependency scheme.
Help it by adding a trigger to delete all the references between the
relevant realisations.
Fix #5320
Otherwise the clang builds fail: the constructor of `SQLiteBusy`
inherits it, and `SQLiteError::_throw` tries to call it, which fails.
Strangely, gcc works fine with it. Not sure what the correct behavior is
and who is buggy here, but either way, making it public is at worst
a reasonable workaround.
Don’t say that the derivation is CA, as this can also happen for a
non-CA derivation.
Technically we could always recover _something_ for a purely
input-addressed derivation (like we already do when the `ca-derivations`
xp feature isn’t enabled), but it seems better to consistently fail −
the end-result wouldn’t really make sense anyways in most cases.
This ensures that use-sites properly trigger new monomorphisations on
one hand, and on the other hand keeps the main `sqlite.hh` clean and
interface-only. I think that is good practice in general, but in this
situation in particular we do indeed have `sqlite.hh` users that don't
need the `throw_` function.
nix show-config --json was serializing experimental features as ints.
nlohmann::json will automatically use these definitions to serialize
and deserialize ExperimentalFeatures.
Strictly, we don't use the from_json instance yet; it's provided for
completeness and hopefully future use.
Requested by ppepino on the Matrix:
https://matrix.to/#/!KqkRjyTEzAGRiZFBYT:nixos.org/$Tb32BS3rVE2BSULAX4sPm0h6CDewX2hClOTGzTC7gwM?via=nixos.org&via=matrix.org&via=nixos.dev
This adds a new command, :bl, which works like :b but also creates
a GC root symlink to the various derivation outputs.
ckie@cookiemonster ~/git/nix -> ./outputs/out/bin/nix repl
Welcome to Nix 2.6.0. Type :? for help.
nix-repl> :l <nixpkgs>
Added 16118 variables.
nix-repl> :b runCommand "hello" {} "echo hi > $out"
This derivation produced the following outputs:
./repl-result-out -> /nix/store/kidqq2acdpi05c4a9mlbg2baikmzik44-hello
[1 built, 0.0 MiB DL]
ckie@cookiemonster ~/git/nix -> cat ./repl-result-out
hi
In particular, this means that 'nix eval` (which uses toValue()) no
longer auto-calls functions or functors (because
AttrCursor::findAlongAttrPath() doesn't).
Fixes #6152.
Also use ref<> in a few places, and don't return attrpaths from
getCursor() because cursors already have a getAttrPath() method.
Previously it only logged the builder's path; this changes it to log the
arguments at the same log level, and the environment variables at the
vomit level.
This helped me debug https://github.com/svanderburg/node2nix/issues/75
This was a problem when writing a fetcher that uses e.g. sha256 hashes
for revisions. This doesn't actually do anything new, but allows for
creating such fetchers in the future (perhaps when support for Git's
SHA256 object format gains more popularity).
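For context, a hedged sketch of the kind of call this prepares for
(hypothetical URL; today `builtins.fetchGit` still expects a 40-character
SHA-1 `rev`):
``` nix
builtins.fetchGit {
  url = "https://example.org/repo.git";  # hypothetical repository
  # currently a 40-character SHA-1 object name; the change described above
  # only relaxes the internal check, so that a future fetcher could accept
  # e.g. a 64-character SHA-256 object name here
  rev = "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12";
}
```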
The filter expects all paths to have a prefix of the raw `actualUrl`, but
`Store::addToStore(...)` provides absolute canonicalized paths.
To fix this, create an absolute and canonicalized path from the `actualUrl` and
use it instead.
Fixes #6195.
This was caused by SubstitutionGoal not setting the errorMsg field in
its BuildResult. We now get a more descriptive message than in 2.7.0, e.g.
error: path '/nix/store/13mh...' is required, but there is no substituter that can build it
instead of the misleading (since there was no build)
error: build of '/nix/store/13mh...' failed
Fixes #6295.
Saving the cwd fd didn't actually work well -- prior to this commit, the
following would happen:
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' run nixpkgs#coreutils -- --coreutils-prog=pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
: ~/w/vc/nix ; doas outputs/out/bin/nix --experimental-features 'nix-command flakes' develop -c pwd
pwd: couldn't find directory entry in ‘../../../..’ with matching i-node
I regularly pass around simple scripts by using nix-shell as the script
interpreter, e.g. like this:
#!/usr/bin/env nix-shell
#!nix-shell -p dd_rescue coreutils bash -i bash
While this works most of the time, I recently had one occasion where it
would not and the above would result in the following:
$ sudo ./myscript.sh
bash: ./myscript.sh: No such file or directory
Note the "sudo" here, because this error only occurs if we're root.
The reason for the latter is that running Nix as root means that we
can directly access the store, in which case a filesystem namespace is
used to make the store writable.
So when stracing the process, I stumbled on the following sequence:
openat(AT_FDCWD, "/proc/self/ns/mnt", O_RDONLY) = 3
unshare(CLONE_NEWNS) = 0
... later ...
getcwd("/the/real/cwd", 4096) = 14
setns(3, CLONE_NEWNS) = 0
getcwd("/", 4096) = 2
In the whole strace output there are no calls to chdir() whatsoever, so
I decided to look into the kernel source to see what else could change
directories and found this[1]:
/* Update the pwd and root */
set_fs_pwd(fs, &root);
set_fs_root(fs, &root);
The set_fs_pwd() call is roughly equivalent to a chdir() syscall and
this is called when the setns() syscall is invoked[2].
[1]: b14ffae378/fs/namespace.c (L4659)
[2]: b14ffae378/kernel/nsproxy.c (L346)
Impure derivations are derivations that can produce a different result
every time they're built. Example:
  stdenv.mkDerivation {
    name = "impure";
    __impure = true; # marks this derivation as impure
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    buildCommand = "date > $out";
  };
Some important characteristics:
* This requires the 'impure-derivations' experimental feature.
* Impure derivations are not "cached". Thus, running "nix-build" on
the example above multiple times will cause a rebuild every time.
* They are implemented similarly to CA derivations, i.e. the output is
moved to a content-addressed path in the store. The difference is
that we don't register a realisation in the Nix database.
* Pure derivations are not allowed to depend on impure derivations (see
  the sketch after this list). In the future fixed-output derivations
  will be allowed to depend on impure derivations, thus forming an
  "impurity barrier" in the dependency graph.
* When sandboxing is enabled, impure derivations can access the
network in the same way as fixed-output derivations. In relaxed
sandboxing mode, they can access the local filesystem.
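As a hedged sketch of the impurity barrier described above (hypothetical
names; the exact error message may differ):
``` nix
let
  impureDep = stdenv.mkDerivation {
    name = "impure-dep";
    __impure = true;
    outputHashAlgo = "sha256";
    outputHashMode = "recursive";
    buildCommand = "date > $out";
  };
in
stdenv.mkDerivation {
  name = "pure-consumer";
  # rejected: a pure derivation may not depend on an impure one
  buildCommand = "cat ${impureDep} > $out";
}
```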
The return value of BaseError::addTrace(...) is never used and
error-prone as subclasses calling it will return a BaseError instead of
the subclass.
This commit changes its return value to be void.
Rather than having four different but very similar types of hashes, make
only one, with a tag indicating whether it corresponds to a regular or
deferred derivation.
This implies a slight logical change: The original Nix+multiple-outputs
model assumed only one hash-modulo per derivation. Adding
multiple-outputs CA derivations changed this as these have one
hash-modulo per output. This change now treats each derivation as
having one hash modulo per output.
This obviously means that we internally lose the guarantee that
all the outputs of input-addressed derivations have the same hash
modulo. But it turns out that it doesn’t matter because there’s nothing
in the code taking advantage of that fact (and it probably shouldn’t
anyways).
The upside is that it is now much easier to work with these hashes, and
we can get rid of a lot of useless `std::visit{ overloaded`.
Co-authored-by: John Ericson <John.Ericson@Obsidian.Systems>
Before this change, processLine always used the first character
as the start of the line. This caused whitespace to matter at the
beginning of the line whereas it does not matter anywhere else.
This commit trims leading whitespace of the string line so that
subsequent operations can be performed on the string without explicitly
tracking starting and ending indices of the string.
This avoids an infinite loop in the final test in
tests/binary-cache.sh. I think this previously went untriggered only by
accident (because we were clearing wantedOutputs in between).