The `allow-flake-configuration` option allows the user to control whether to
accept configuration options supplied by flakes. Unfortunately, setting this
to false really meant "ask each time" (with an option to remember the choice
for each specific option encountered). Let no mean no, and introduce (and
default to) a separate value for the "ask each time" behaviour.
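A minimal sketch of the resulting shape, assuming a simple tri-state
enum (the names here are illustrative, not the actual Lix settings
code):

```cpp
// Illustrative only: the real option lives in the settings machinery and is
// spelled differently there. The point is that "false" and "ask" become
// distinct values, with "ask" as the default.
#include <iostream>

enum class AcceptFlakeConfig { False, Ask, True };

// Hypothetical policy helper: only the explicit "ask" state prompts the user.
bool shouldPrompt(AcceptFlakeConfig setting)
{
    return setting == AcceptFlakeConfig::Ask;
}

int main()
{
    auto setting = AcceptFlakeConfig::Ask; // the new default
    std::cout << (shouldPrompt(setting)
        ? "ask about each flake-supplied setting\n"
        : "apply the configured policy silently\n");
}
```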
Co-Authored-By: Jade Lovelace <lix@jade.fyi>
Change-Id: I7ccd67a95bfc92cffc1ebdc972d243f5191cc1b4
this most notably affects `nix eval`: if there is no progress bar to be
shown and no activities going on we should not print anything at all. a
progress bar with no activities would print a bunch of terminal escapes
*and a space*, which is not helpful in simple cases like `nix eval -E 1`.
notably this does *not* affect `nix eval` when its output is not a terminal,
but it is slightly confusing nevertheless (and not difficult to avoid).
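a hedged sketch of the intended behaviour (not the actual ProgressBar
code, which is far more involved):

```cpp
// illustrative only: when there are no activities and no status text, emit
// nothing at all instead of cursor-control escapes followed by a stray space.
#include <iostream>
#include <string>
#include <vector>

struct FakeProgressState
{
    std::vector<std::string> activities;
    std::string statusLine;
};

void draw(const FakeProgressState & state, std::ostream & out)
{
    if (state.activities.empty() && state.statusLine.empty())
        return; // nothing to show, so `nix eval -E 1` output stays clean

    out << "\r\033[K" << state.statusLine;
    for (auto & activity : state.activities)
        out << ' ' << activity;
    out.flush();
}
```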
fixes lix-project/lix#424
Change-Id: Iee793c79ba5a485d6606e0d292ed2eae6dfb7216
this gives about a 20% performance improvement on pure parsing. obviously
it will be less on full eval, but depending on how much parsing is to be
done (e.g. including hackage-packages.nix or not) it's more like 4%-10%.
this has been tested (with thousands of core hours of fuzzing) to ensure
that the ASTs produced by the new parser are exactly the same as the old
one would have produced. error messages will change (sometimes by a lot)
and are not yet perfect, but we would rather leave this as is for later.
test results for running only the parser (excluding the variable binding
code) in a tight loop with inputs and parameters as given are promising:
- 40% faster on lix's package.nix at 10000 iterations
- 1.3% faster on nixpkgs all-packages.nix at 1000 iterations
- equivalent on all of nixpkgs concatenated at 100 iterations
(excluding invalid files, each file surrounded with parens)
more realistic benchmarks are somewhere in between the extremes, parsing
once again getting the largest uplift. other realistic workloads improve
by a few percentage points as well; notably, system builds are 4% faster.
Benchmarks summary (from ./bench/summarize.jq bench/bench-*.json)
old/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.408s ± 0.025s
user: 0.355s | system: 0.033s
median: 0.389s
range: 0.388s ... 0.442s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.332s ± 0.024s
user: 0.279s | system: 0.033s
median: 0.314s
range: 0.313s ... 0.361s
relative: 0.814
---
old/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 6.133s ± 0.022s
user: 5.395s | system: 0.437s
median: 6.128s
range: 6.099s ... 6.183s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 5.925s ± 0.025s
user: 5.176s | system: 0.456s
median: 5.934s
range: 5.861s ... 5.943s
relative: 0.966
---
GC_INITIAL_HEAP_SIZE=10g old/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.503s ± 0.027s
user: 3.731s | system: 0.547s
median: 4.499s
range: 4.478s ... 4.541s
relative: 1
GC_INITIAL_HEAP_SIZE=10g new/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.285s ± 0.031s
user: 3.504s | system: 0.571s
median: 4.281s
range: 4.221s ... 4.328s
relative: 0.951
---
old/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 16.475s ± 0.07s
user: 14.088s | system: 1.572s
median: 16.495s
range: 16.351s ... 16.536s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 15.973s ± 0.013s
user: 13.558s | system: 1.615s
median: 15.973s
range: 15.946s ... 15.99s
relative: 0.97
---
Change-Id: Ie66ec2d045dec964632c6541e25f8f0797319ee2
This reverts commit 35eec921af.
Reason for revert: Regressed nix-eval-jobs, and it appears that this change is buggy or missing a case. It just needs another pass.
Code causing the problem in n-e-j, when invoked with `nix-eval-jobs --flake '.#hydraJobs'`:
```
n-e-j/tests/assets » ../../build/src/nix-eval-jobs --meta --workers 1 --flake .#hydraJobs
warning: unknown setting 'trusted-users'
warning: `--gc-roots-dir' not specified
error: unsupported Git input attribute 'dir'
error: worker error: error: unsupported Git input attribute 'dir'
```
```
nix::Value *vRoot = [&]() {
if (args.flake) {
auto [flakeRef, fragment, outputSpec] =
nix::parseFlakeRefWithFragmentAndExtendedOutputsSpec(
args.releaseExpr, nix::absPath("."));
nix::InstallableFlake flake{
{}, state, std::move(flakeRef), fragment, outputSpec,
{}, {}, args.lockFlags};
return flake.toValue(*state).first;
} else {
return releaseExprTopLevelValue(*state, autoArgs, args);
}
}();
```
Inspecting the program behaviour reveals that `dir` was in fact set in the URL going into the fetcher. This is because, unlike in the case changed in this commit, it was not erased before the URL was handed to libfetchers, which is probably just a mistake.
```
(rr) up
3 0x00007ffff60262ae in nix::fetchers::Input::fromURL (url=..., requireTree=requireTree@entry=true) at src/libfetchers/fetchers.cc:39
warning: Source file is more recent than executable.
39 auto res = inputScheme->inputFromURL(url, requireTree);
(rr) p url
$1 = (const nix::ParsedURL &) @0x7fffdc874190: {url = "git+file:///home/jade/lix/nix-eval-jobs",
base = "git+file:///home/jade/lix/nix-eval-jobs", scheme = "git+file", authority = std::optional<std::string> = {[contained value] = ""},
path = "/home/jade/lix/nix-eval-jobs", query = std::map with 1 element = {["dir"] = "tests/assets"}, fragment = ""}
(rr) up
4 0x00007ffff789d904 in nix::parseFlakeRefWithFragment (url=".#hydraJobs", baseDir=std::optional<std::string> = {...},
allowMissing=allowMissing@entry=false, isFlake=isFlake@entry=true) at src/libexpr/flake/flakeref.cc:179
warning: Source file is more recent than executable.
179 FlakeRef(Input::fromURL(parsedURL, isFlake), getOr(parsedURL.query, "dir", "")),
(rr) p parsedURL
$2 = {url = "git+file:///home/jade/lix/nix-eval-jobs", base = "git+file:///home/jade/lix/nix-eval-jobs", scheme = "git+file",
authority = std::optional<std::string> = {[contained value] = ""}, path = "/home/jade/lix/nix-eval-jobs", query = std::map with 1 element = {
["dir"] = "tests/assets"}, fragment = ""}
(rr) list
174
175 if (pathExists(flakeRoot + "/.git/shallow"))
176 parsedURL.query.insert_or_assign("shallow", "1");
177
178 return std::make_pair(
179 FlakeRef(Input::fromURL(parsedURL, isFlake), getOr(parsedURL.query, "dir", "")),
180 fragment);
181 }
```
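For comparison, a hedged sketch of the step the debugging session above
points at (illustrative types, not a proposed patch): the flake-level
`dir` query parameter has to be split off before the parsed URL reaches
libfetchers.

```cpp
// Illustrative only: the real code operates on nix::ParsedURL. The point is
// that `dir` is removed from the query (and kept only as the flake subdir) so
// that libfetchers never sees an attribute it does not support.
#include <iostream>
#include <map>
#include <string>

using Query = std::map<std::string, std::string>;

std::string takeSubdir(Query & query)
{
    std::string subdir;
    if (auto it = query.find("dir"); it != query.end()) {
        subdir = it->second;
        query.erase(it); // libfetchers never sees `dir`
    }
    return subdir;
}

int main()
{
    Query query{{"dir", "tests/assets"}};
    std::cout << "subdir: " << takeSubdir(query)
              << ", remaining query params: " << query.size() << "\n";
}
```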
Change-Id: Ib55a882eaeb3e59228857761dc1e3b2e366b0f5e
This is a squash of upstream PRs #10303, #10312 and #10883.
fix: Treat empty TMPDIR as unset
Fixes an instance of
nix: src/libutil/util.cc:139: nix::Path nix::canonPath(PathView, bool): Assertion `path != ""' failed.
... which I've been getting in one of my shells for some reason.
I have yet to find out why TMPDIR was empty, but it's no reason for
Nix to break.
(cherry picked from commit c3fb2aa1f9d1fa756dac38d3588c836c5a5395dc)
fix: Treat empty XDG_RUNTIME_DIR as unset
See preceding commit. Not observed in the wild, but is sensible
and consistent with TMPDIR behavior.
(cherry picked from commit b9e7f5aa2df3f0e223f5c44b8089cbf9b81be691)
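A minimal sketch of the idea behind these two fixes (helper name and
fallback are illustrative, not necessarily what the cherry-picked
commits use): an empty variable is treated exactly like an unset one
before any path canonicalisation happens.

```cpp
// Illustrative only: treat "" the same as "not set", so canonPath never sees
// an empty path.
#include <cstdlib>
#include <iostream>
#include <optional>
#include <string>

std::optional<std::string> getEnvNonEmpty(const char * name)
{
    const char * value = std::getenv(name);
    if (!value || *value == '\0') return std::nullopt;
    return std::string(value);
}

std::string defaultTempDir()
{
    return getEnvNonEmpty("TMPDIR").value_or("/tmp");
}

int main()
{
    std::cout << defaultTempDir() << "\n";
}
```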
local-derivation-goal.cc: Reuse defaultTempDir()
(cherry picked from commit fd31945742710984de22805ee8d97fbd83c3f8eb)
fix: remove usage of XDG_RUNTIME_DIR for TMP
(cherry picked from commit 1363f51bcb24ab9948b7b5093490a009947f7453)
tests/functional: Add count()
(cherry picked from commit 6221770c9de4d28137206bdcd1a67eea12e1e499)
Remove uncalled for message
(cherry picked from commit b1fe388d33530f0157dcf9f461348b61eda13228)
Add build-dir setting
(cherry picked from commit 8b16cced18925aa612049d08d5e78eccbf0530e4)
Change-Id: Ic7b75ff0b6a3b19e50a4ac8ff2d70f15c683c16a
Following the latest hacking.md currently fails because of a missing
include in upstream editline. This patch fixes the build by adding
that missing include.
Fixes #410.
Change-Id: Iefd4cb687ed3da71ccda9fe9624f38e6ef4623e5
this was only used in one place, and that place has been rewritten to
use a temporary file instead. keeping this around is not very helpful
at this time, and in any case we'd be better off rewriting subprocess
handling in rust, where we not only have a much safer library for such
things but also have the async frameworks necessary for this readily
available.
Change-Id: I6f8641b756857c84ae2602cdf41f74ee7a1fda02
we want to remove runProgram's ability to provide stdin to a process
because the concurrency issues of handling both stdin and stdout are
much more pronounced once runProgram returns a source rather than its
collected output. this is possible in the current c++ framework, but
it isn't necessary in almost all cases (as demonstrated by only this
single user existing) and is much better handled with a proper async
concurrency model that lets the caller handle both at the same time.
Change-Id: I29da1e1ad898d45e2e33a7320b246d5003e7712b
This is primarily for readability, but IIRC chaining std::string's
operator+ is also pretty scuffed performance-wise, and this was doing a
lot of operator+ chaining.
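As a hedged illustration of the performance point (not the code this
change touches): a chain of `+` builds the result through temporaries
that may reallocate as they grow, whereas appending into a single
pre-reserved buffer allocates once.

```cpp
#include <string>

// Chained operator+: the result is built through temporaries that may
// reallocate several times as the string grows.
std::string describeChained(const std::string & drv, const std::string & out)
{
    return "building '" + drv + "' produced output '" + out + "'";
}

// Single buffer: one reserve, then in-place appends.
std::string describeAppended(const std::string & drv, const std::string & out)
{
    std::string msg;
    msg.reserve(40 + drv.size() + out.size());
    msg.append("building '").append(drv)
       .append("' produced output '").append(out).append("'");
    return msg;
}
```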
Change-Id: I9f25235df153eb2bbb491f1ff7ebbeed9a8ec985
copy-constructing or assigning from pid_t can easily lead to duplicate
Pid instances for the same process if a pid_t was used carelessly, and
Pid itself was copy-constructible. both could cause surprising results
such as killing processes twice (which could become very problematic,
but luckily modern systems don't reuse PIDs all that quickly), or more
than one piece of the code believing it owns a process when neither does.
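a condensed sketch of the resulting shape (not the actual lix Pid
class, which carries more state): ownership of the process becomes
move-only, so two Pid objects can never end up responsible for the
same process.

```cpp
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>

class Pid
{
    pid_t pid = -1;

public:
    Pid() = default;
    explicit Pid(pid_t pid) : pid(pid) {}

    Pid(const Pid &) = delete;            // no accidental duplicate owners
    Pid & operator=(const Pid &) = delete;

    Pid(Pid && other) noexcept : pid(other.pid) { other.pid = -1; }

    Pid & operator=(Pid && other) noexcept
    {
        if (this != &other) {
            kill();
            pid = other.pid;
            other.pid = -1;
        }
        return *this;
    }

    ~Pid() { kill(); }

    void kill()
    {
        if (pid == -1) return;
        ::kill(pid, SIGKILL); // the real class lets callers pick the signal
        waitpid(pid, nullptr, 0);
        pid = -1;
    }
};
```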
Change-Id: Ifea7445f84200b34c1a1d0acc2cdffe0f01e20c6
this is only used in one place, and only to set a nicer error message on
EndOfFile. the only caller that actually *catches* this exception should
provide an error message in that catch block rather than forcing support
for setting an error message so deep into the stack. copyStorePath is never
called outside of PathSubstitutionGoal anyway, which catches everything.
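a schematic sketch of the pattern, with stand-in names rather than the
actual goal code: the one caller that cares about EndOfFile attaches
its own message in the catch block.

```cpp
#include <stdexcept>
#include <string>

struct EndOfFile : std::runtime_error
{
    using std::runtime_error::runtime_error;
};

// stand-in for the copy operation; assume it throws EndOfFile when the
// peer hangs up mid-transfer
void copyFromSubstituter(const std::string &)
{
    throw EndOfFile("unexpected end of file");
}

void substitute(const std::string & storePath)
{
    try {
        copyFromSubstituter(storePath);
    } catch (EndOfFile &) {
        // the nicer message lives at the catch site, not deep in the stack
        throw std::runtime_error(
            "substituter disappeared while copying '" + storePath + "'");
    }
}
```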
Change-Id: Ifbae8706d781c388737706faf4c8a8b7917ca278
LocalDerivationGoal includes a large number of low-level sandboxing
primitives for Darwin and Linux, intermingled with ifdefs.
Start creating platform-specific classes to make it easier to add new
platforms and review platform-specific code.
This change only creates support infrastructure and moves two functions;
more functions will be moved in future changes.
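Roughly the structure being introduced, heavily simplified and with
illustrative names (the real classes live in libstore and carry much
more state):

```cpp
// Platform-specific sandbox code moves out of #ifdef blocks into subclasses,
// with the right one selected per platform.
struct SandboxSupport                   // stand-in for the new base class
{
    virtual ~SandboxSupport() = default;
    virtual void prepareSandbox() = 0;  // formerly inline #ifdef __linux__ code
    virtual void killSandbox() = 0;     // formerly inline #ifdef __APPLE__ code
};

struct LinuxSandboxSupport : SandboxSupport
{
    void prepareSandbox() override { /* namespaces, bind mounts, ... */ }
    void killSandbox() override { /* tear down the sandboxed process tree */ }
};

struct DarwinSandboxSupport : SandboxSupport
{
    void prepareSandbox() override { /* sandbox-exec profile, ... */ }
    void killSandbox() override { /* ... */ }
};
```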
Change-Id: I9fc29fa2a7345107d4fc96c46fa90b4eabf6bb89
This comes up quite often when the available job slots on all remote
builders are exhausted, and it is pretty spammy.
This isn't really an issue, but expected behavior.
A better way to display this, IMHO, would be a nom-like approach where
all scheduled builds are shown in a tree and pending builds are marked
as such.
Change-Id: I6bc14e6054f84e3eb0768127b490e263d8cdcf89
The original idea was to fix lix#174, but for a user-friendly solution,
I figured that we'd need more consistency:
* Invalid query params will cause an error, just like invalid
attributes. This has the following two consequences:
* The `?dir=`-param from flakes will be removed before the URL to be
fetched is passed to libfetchers.
* The tarball fetcher doesn't allow URLs with custom query params
anymore. I think this was questionable anyways given that an
arbitrary set of query params was silently removed from the URL you
wanted to fetch. The correct way is to use an attribute-set
with a key `url` that contains the tarball URL to fetch.
* Same for the git & mercurial fetchers. In that case it doesn't even
  matter, though: both fetchers added unused query params to the URL
  that's passed from the input scheme to the fetcher (`url2` in the code).
  It turns out that this was never used, since the query parameters were
  erased again in `getActualUrl`.
* Validation happens for both attributes and URLs. Previously, a lot of
  fetchers validated e.g. refs/revs only when specified in a URL and
  the validity of attribute names only in `inputFromAttrs`.
  Now, all the validation is done in `inputFromAttrs`, and `inputFromURL`
  constructs attributes that will be passed to `inputFromAttrs` (see the
  sketch after this list).
* Accept all attributes as URL query parameters. That also includes
  lesser-used ones such as `narHash`.
  And "output" attributes like `lastModified`: these could already be
  declared when specifying inputs as attributes rather than as a URL. Now
  the behavior is at least consistent.
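A schematic sketch of that flow with heavily simplified types (the real
code is in libfetchers and uses its own Attrs and ParsedURL types):
`inputFromURL` only converts query parameters into attributes, and
validation happens in exactly one place.

```cpp
#include <map>
#include <set>
#include <stdexcept>
#include <string>

using Attrs = std::map<std::string, std::string>;

Attrs inputFromAttrs(const Attrs & attrs)
{
    // the single place where attribute names (and, in the real code, their
    // values) are validated
    static const std::set<std::string> allowed{
        "type", "url", "ref", "rev", "narHash", "lastModified"};
    for (const auto & attr : attrs)
        if (!allowed.count(attr.first))
            throw std::invalid_argument(
                "unsupported input attribute '" + attr.first + "'");
    return attrs;
}

Attrs inputFromURL(const std::string & url, const Attrs & query)
{
    Attrs attrs = query;          // every query parameter becomes an attribute...
    attrs["type"] = "git";        // (inferred from the URL scheme in real code)
    attrs["url"] = url;
    return inputFromAttrs(attrs); // ...and is validated exactly once, here
}
```

In the real change the per-value checks (refs, revs, hashes) also live
behind `inputFromAttrs`, which is what makes URL and attribute inputs
behave the same.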
Personally, I think that in the future we should differentiate earlier
between a "fetched input" (basically the attr-set that ends up in the
lock-file) and an "unfetched input": both inputFrom{Attrs,URL} entrypoints
are probably fine for unfetched inputs, but for locked/fetched inputs
a custom entrypoint should be used. Then, the current entrypoints
wouldn't have to allow these attributes anymore.
Change-Id: I1be1992249f7af8287cfc37891ab505ddaa2e8cd
Add the log formats `multiline` and `multiline-with-logs`, which show
multiple currently active build status lines.
Change-Id: Idd8afe62f8591b5d8b70e258c5cefa09be4cab03
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
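A hedged illustration of the principle, using zlib as a stand-in for
whatever decompressor the affected sink wraps: the loop terminates when
the decompressor reports end-of-stream, not when the input has been
consumed.

```cpp
#include <stdexcept>
#include <string>
#include <zlib.h>

std::string gunzip(const std::string & compressed)
{
    z_stream strm{};
    if (inflateInit2(&strm, 15 + 32) != Z_OK) // auto-detect zlib/gzip headers
        throw std::runtime_error("inflateInit2 failed");

    strm.next_in = (Bytef *) compressed.data();
    strm.avail_in = (uInt) compressed.size();

    std::string out;
    char buf[16384];
    int ret;
    do {
        strm.next_out = (Bytef *) buf;
        strm.avail_out = sizeof(buf);
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) {
            inflateEnd(&strm);
            throw std::runtime_error("inflate failed");
        }
        out.append(buf, sizeof(buf) - strm.avail_out);
        // looping on `strm.avail_in > 0` instead would stop while the
        // decompressor still has buffered output: the bug described above
    } while (ret != Z_STREAM_END);

    inflateEnd(&strm);
    return out;
}
```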
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
I did a whole bunch of `git log -S` to find out exactly when all these
things were obsoleted and found the commit in which their usage was
removed, which I have added in either the error message or a comment.
I've also made *some* of the version checks into static asserts for when
we update the minimum supported protocol version.
In the end this is not a lot of code we are deleting, but it's code that
we will never have to support into the future when we build a protocol
bridge, which is why I did it. It is not in the support baseline.
Change-Id: Iea3c80795c75ea74f328cf7ede7cbedf8c41926b
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations)
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27
don't consume a sink, return a source instead. the only reason to not do
this is a very slight reduction in dynamic allocations, but since we are
going to *at least* do disk io that will not be a lot of overhead anyway
Change-Id: Iae2f879ec64c3c3ac1d5310eeb6a85e696d4614a
if we want to have getFile return a source instead of consuming a sink
we'll have to disambiguate this overload another way, eg like this.
Change-Id: Ia26de2020c309a37e7ccc3775c1ad1f32e0a778b