This *should* be sound, plus or minus however messed up the existing
terminal-escape-eating code already is.
This is useful for testing CLI output, because it strips the escapes well
enough that the expected output can just be shoved in a file.
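For illustration, the kind of stripping involved might boil down to a regex pass like this (a sketch only; the actual test helper in Lix may handle more or different sequences):
```
#include <iostream>
#include <regex>
#include <string>

// Sketch of the idea: remove CSI sequences like "\e[1;32m" so that coloured
// CLI output can be compared against a plain-text expectation file.
// This is illustrative, not the actual Lix test-support code.
std::string stripAnsiEscapes(const std::string & s)
{
    static const std::regex csi("\x1b\\[[0-9;?]*[@-~]");
    return std::regex_replace(s, csi, "");
}

int main()
{
    // prints "hello world" with the colour codes removed
    std::cout << stripAnsiEscapes("\x1b[1;32mhello\x1b[0m world\n");
}
```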
Change-Id: I8a9b58fafb918466ac76e9ab585fc32fb9294819
The original attempt at this introduced a regression; this commit
reverts the revert and fixes the regression.
This reverts commit 3e151d4d77.
Fix to the regression:
flakeref: fix handling of `?dir=` param for flakes in subdirs
As reported in #419[1], with the previous commit[2] applied, accessing a
flake in a subdirectory of a Git repository fails with the error
error: unsupported Git input attribute 'dir'
The problem is that the `dir` param is inserted into the parsed URL when a
flake is fetched from the subdirectory of a Git repository, even though the
fetching code doesn't need it at all. The fix is to simply pass `subdir` as
the second argument to `FlakeRef` (which takes a `basedir` that may be
empty) and leave the parsed URL as-is.
Added a regression test to make sure we don't run into this again.
[1] #419
[2] e22172aaf6b6a366cecd3c025590e68fa2b91bcc,
originally 3e151d4d77
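To make the shape of the fix concrete, here is a toy before/after model (the real `ParsedURL`, `FlakeRef` and input-scheme types live in libutil, libexpr and libfetchers; everything below is illustrative only):
```
#include <map>
#include <string>
#include <utility>

// Toy model of the fix described above, not the actual Lix types.
struct ParsedURL {
    std::string url;
    std::map<std::string, std::string> query;
};

struct FlakeRef {
    ParsedURL fetcherInput;  // what eventually reaches libfetchers
    std::string subdir;      // kept on the FlakeRef, never sent to the fetcher
};

// Before: the subdirectory was smuggled into the fetcher URL, which the Git
// input scheme rejects with "unsupported Git input attribute 'dir'".
FlakeRef makeFlakeRefBuggy(ParsedURL parsedURL, const std::string & subdir)
{
    parsedURL.query.insert_or_assign("dir", subdir);
    return FlakeRef{std::move(parsedURL), ""};
}

// After: leave the parsed URL as-is and hand the subdir to the FlakeRef
// as its second argument instead.
FlakeRef makeFlakeRefFixed(ParsedURL parsedURL, const std::string & subdir)
{
    return FlakeRef{std::move(parsedURL), subdir};
}
```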
Change-Id: I2c72d5a32e406a7ca308e271730bd0af01c5d18b
What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but at this point
it's blocked on the removal of boost coroutines, tbh.
Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
If `:edit`ing a store path, don't reload the repl afterwards,
to avoid losing local variables: the store is immutable,
so "editing" a store path is always just viewing it.
Resolves: #341
Change-Id: I3747f75ce26e0595e953069c39ddc3ee80699718
Unfortunately, io_uring is totally opaque to seccomp, and while currently there
are no dangerous operations implemented, there is no guarantee that it remains
this way. This means that io_uring should be blocked entirely to ensure that
the sandbox is future-proof. This has not been observed to cause issues in
practice.
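For illustration, blocking io_uring with libseccomp looks roughly like this (a sketch, not the literal filter-construction code in Lix):
```
#include <cerrno>
#include <stdexcept>
#include <seccomp.h>

// Illustrative only: deny the io_uring entry points so that builders cannot
// submit operations the rest of the syscall filter never gets to see.
void blockIoUring(scmp_filter_ctx ctx)
{
    for (int sc : {SCMP_SYS(io_uring_setup),
                   SCMP_SYS(io_uring_enter),
                   SCMP_SYS(io_uring_register)}) {
        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOSYS), sc, 0) != 0)
            throw std::runtime_error("unable to add seccomp rule");
    }
}
```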
Change-Id: I45d3895f95abe1bc103a63969f444c334dbbf50d
Previously, system call filtering (to prevent builders from storing files with
setuid/setgid permission bits or extended attributes) was performed using a
blocklist. While this looks simple at first, it actually carries significant
security and maintainability risks: after all, the kernel may add new syscalls
to achieve the same functionality one is trying to block, and it can even be
hard to actually add the syscall to the blocklist when building against a C
library that doesn't know about it yet. For a recent demonstration of this
happening in practice to Nix, see the introduction of fchmodat2 [0] [1].
The allowlist approach does not share the same drawback. While it does require
a rather large list of harmless syscalls to be maintained in the codebase,
failing to update this list (and roll out the update to all users) in time has
rather benign effects; at worst, very recent programs that already rely on new
syscalls will fail with an error the same way they would on a slightly older
kernel that doesn't support them yet. Most importantly, no unintended new ways
of performing dangerous operations will be silently allowed.
Another possible drawback is reduced system call performance due to the larger
filter created by the allowlist requiring more computation [2]. However, this
issue has not convincingly been demonstrated yet in practice, for example in
systemd or various browsers. On the contrary, it has been measured that the
actual filter constructed here has approximately the same overhead as a very
simple filter blocking only one system call.
This commit tries to keep the behavior as close to unchanged as possible. The
system call list is in line with libseccomp 2.5.5 and glibc 2.39, the latest
versions at the time of writing. Since libseccomp 2.5.5 is already a
requirement, and distributions shipping it together with older versions of
glibc are mostly not a thing any more, this should not cause any additional
build failures.
[0] https://github.com/NixOS/nixpkgs/issues/300635
[1] https://github.com/NixOS/nix/issues/10424
[2] https://github.com/flatpak/flatpak/pull/4462#issuecomment-1061690607
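A heavily abridged sketch of the allowlist approach described above (the real list in Lix is far longer, and the targeted EPERM rules for setuid/setgid chmod and xattr syscalls are kept on top of it):
```
#include <cerrno>
#include <stdexcept>
#include <seccomp.h>

// Illustrative sketch of a default-deny filter: anything not explicitly
// allowed fails with ENOSYS, just as it would on an older kernel that does
// not know the syscall.
scmp_filter_ctx makeAllowlistFilter()
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(ENOSYS));
    if (!ctx)
        throw std::runtime_error("unable to initialize seccomp");

    // Tiny excerpt of the harmless syscalls a builder needs.
    for (int sc : {
        SCMP_SYS(read), SCMP_SYS(write), SCMP_SYS(openat), SCMP_SYS(close),
        SCMP_SYS(mmap), SCMP_SYS(clock_gettime), SCMP_SYS(exit_group)
    }) {
        if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, sc, 0) != 0)
            throw std::runtime_error("unable to add seccomp rule");
    }
    return ctx;
}
```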
Change-Id: I541be3ea9b249bcceddfed6a5a13ac10b11e16ad
(cherry picked from commit 8cd1d02f90eb9915e640c5d370d919fad9833c65)
nix flake show: Only print up to the first newline, if it exists.
(cherry picked from commit 5281a44927bdb51bfe6e5de12262d815c98f6fe7)
add tests
(cherry picked from commit 74ae0fbdc70a5079a527fe143c4832d1357011f7)
Handle long strings, embedded newlines and empty descriptions
(cherry picked from commit 2ca7b3afdbbd983173a17fa0a822cf7623601367)
Account for total length of 80
(cherry picked from commit 1cc808c18cbaaf26aaae42bb1d7f7223f25dd364)
docs: add nix flake show description release note
fix: remove white space
nix flake show: trim length based on terminal size
test: account for terminal size
docs(flake-description): before and after commands; add myself to credits
Upstream-PR: https://github.com/NixOS/nix/pull/10980
Change-Id: Ie1c667dc816b3dd81e65a1f5395e57ea48ee0362
This removes a *whole load* of variables from scope and enforces thread
boundaries with the type system.
There is not much change of significance in here; the main thing to watch
out for while reviewing is that the destructor ordering may have changed
inadvertently, I think.
Change-Id: I3cd87e6d5a08dfcf368637407251db22a8906316
When the configured maximum depth has been reached, attribute sets and lists
are printed with an ellipsis to indicate the elision of nested items. Previously,
this happened even when the structure being printed is empty, in which case
there are no nested items to elide at all. This is confusing, so stop doing it.
Change-Id: I0016970dad3e42625e085dc896e6f476b21226c9
The repeated value detection logic exists so that the occurrence of large
common substructures does not fill up the screen or the computer's memory.
However, empty attribute sets and derivations (when their detection is enabled)
are always cheap to print, and in practice I have observed them to make up a
significant majority of the cases where I was annoyed by the repeated value
detection kicking in. Furthermore, `nix-instantiate --eval` already disables
this logic for empty attribute sets, and empty lists are already exempted
everywhere. For these reasons, always print empty attribute sets and
derivations as what they are.
Change-Id: I5dac8e7739f9d726b76fd0521ec46f38af94463f
When pretty-printing is enabled, an unforced thunk would previously trigger
indentation even when it subsequently did not evaluate to a nested structure.
The resulting output looked inconsistent, and furthermore pretty-printing was
not idempotent (pretty-printing the same value again, now fully evaluated,
would not trigger indentation).
When strict evaluation is enabled, force the item before inspecting its type,
so that it is properly known whether it contains a nested structure.
Furthermore, there is no need to cause indentation for unforced thunks, since
the very next operation will be printing them as `«thunk»`.
This is mostly a port of https://github.com/NixOS/nix/pull/11100, but we only
force the item when it's going to be forced anyway due to strict
pretty-printing, and a new test was written since the REPL testing framework in
Lix is different.
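A toy model of the resulting ordering, only to illustrate the description above (not the actual Lix printer code):
```
#include <iostream>
#include <vector>

// Toy model; not Lix code.
struct Value;
using Thunk = Value (*)();            // stand-in for an unevaluated thunk

struct Value {
    Thunk thunk = nullptr;            // non-null while unevaluated
    std::vector<Value> list;          // nested items once evaluated
};

void printItem(Value v, bool strict)
{
    // When strict printing would force the value anyway, force it *before*
    // deciding whether to indent, so that the inspected type is the real one.
    if (strict && v.thunk)
        v = v.thunk();

    if (v.thunk) {
        // Unforced thunks never trigger indentation; they are printed as «thunk».
        std::cout << "«thunk»";
        return;
    }

    bool nested = !v.list.empty();
    std::cout << (nested ? "[ /* indented items */ ]" : "[ ]");
}

int main()
{
    Value lazyEmpty{[]() -> Value { return {}; }, {}};
    printItem(lazyEmpty, true);       // forced first: prints "[ ]", no spurious indentation
    std::cout << "\n";
    printItem(lazyEmpty, false);      // left alone: prints "«thunk»"
    std::cout << "\n";
}
```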
Co-Authored-By: Robert Hensing <robert@roberthensing.nl>
Change-Id: Ib7560fe531d09e05ca6b2037a523fe21a26d9d58
The previous test implementation assumed that grep supports newlines
in patterns. It doesn't, so tests spuriously passed even though
some tests' outputs were broken.
This patches the output (and expected output) before grepping,
so that there are no newlines in the pattern.
Change-Id: Ie6561f9f2e18b83d976f162269d20136e2595141
this is cursed. deeply and profoundly cursed. under NO CIRCUMSTANCES
must protocol serializer helpers be applied to temporaries! doing so
will inevitably produce dangling references and crash the entire thing.
we need to do this even so to get rid of boost coroutines, and likewise
to encapsulate the serializers we suffer today at least a little bit,
allowing a gradual migration to an actual IPC protocol.
(this isn't a problem that's unique to generators. c++ coroutines in
general cannot safely take references to arbitrary temporaries since
c++ does not have a lifetime system that can make this safe. -sigh-)
Change-Id: I2921ba451e04d86798752d140885d3c5cc08e146
This also bans various sneaking of negative numbers from the language
into unsuspecting builtins as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
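Purely as an illustration of what such a checked-narrowing helper could look like (hypothetical, not something this change adds):
```
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <string>
#include <utility>

// Hypothetical sketch: convert a Nix language integer (int64_t) to a narrower
// or unsigned C++ type, throwing instead of silently wrapping or truncating.
template<typename To>
To checkedNarrow(int64_t value)
{
    if (std::cmp_less(value, std::numeric_limits<To>::min())
        || std::cmp_greater(value, std::numeric_limits<To>::max()))
        throw std::range_error(
            "integer " + std::to_string(value) + " is out of range for the target type");
    return static_cast<To>(value);
}
```
A builtin expecting, say, a non-negative list index could then call `checkedNarrow<size_t>(n)` and surface a proper eval error instead of relying on wraparound.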
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway, because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice that we
had done this at all until it was reported as a bug a couple of months
later (which is, to be fair, that flag working as intended). It has had
enough production time that, aside from code that is IMHO buggy
(and which is, in any case, not in nixpkgs) such as
#445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
This change will be ported to CppNix as well, to maintain language
consistency.
Fixes: #423
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
the rewriting sink was just broken. when given a rewrite set that
contained a key that is also a proper infix of another key, it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this would duplicate rewriteStrings,
so to avoid that we'll rewrite rewriteStrings to use RewritingSource
in a new mode that allows rewrites we had previously forbidden.
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0
`nix-collect-garbage --dry-run` previously elided the entire garbage
collection check, meaning that it would just exit the script without
printing anything.
This change makes the dry run flag instead set the GC action to
`gcReturnDead` rather than `gcDeleteDead`, and then continue with the
script. So if you set `--dry-run`, it will print the paths it *would*
have garbage collected, but not actually delete them.
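In sketch form, the change amounts to something like this fragment against the store API (simplified; `gcReturnDead` and `gcDeleteDead` are the pre-existing `GCOptions` actions, and how the GC-capable store object is obtained is elided):
```
// Fragment for illustration, not the literal patch.
GCOptions options;
options.action = dryRun ? GCOptions::gcReturnDead : GCOptions::gcDeleteDead;

GCResults results;
store.collectGarbage(options, results);   // "store" being the GC-capable store

if (dryRun) {
    // with gcReturnDead, results.paths holds what *would* have been deleted
    for (auto & path : results.paths)
        std::cout << path << std::endl;
}
```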
I filed a bug for this (#432), but then realised I could give fixing it a go myself.
Change-Id: I062dbf1a80bbab192b5fd0b3a453a0b555ad16f2
Turns errors like this:
let
  throwMsg = a: throw (a + " invalid bar");
in throwMsg "bullshit"
error:
… from call site
at «string»:3:4:
2| throwMsg = a: throw (a + " invalid bar");
3| in throwMsg "bullshit"
| ^
… while calling 'throwMsg'
at «string»:2:14:
1| let
2| throwMsg = a: throw (a + " invalid bar");
| ^
3| in throwMsg "bullshit"
… while calling the 'throw' builtin
at «string»:2:17:
1| let
2| throwMsg = a: throw (a + " invalid bar");
| ^
3| in throwMsg "bullshit"
error: bullshit invalid bar
into errors like this:
let
  throwMsg = a: throw (a + " invalid bar");
in throwMsg "bullshit"
error:
… from call site
at «string»:3:4:
2| throwMsg = a: throw (a + " invalid bar");
3| in throwMsg "bullshit"
| ^
… while calling 'throwMsg'
at «string»:2:14:
1| let
2| throwMsg = a: throw (a + " invalid bar");
| ^
3| in throwMsg "bullshit"
… caused by explicit throw
at «string»:2:17:
1| let
2| throwMsg = a: throw (a + " invalid bar");
| ^
3| in throwMsg "bullshit"
error: bullshit invalid bar
Change-Id: I593688928ece20f97999d1bf03b2b46d9ac338cb
Turns errors like:
let
  errpkg = throw "invalid foobar";
in errpkg.meta
error:
… while calling the 'throw' builtin
at «string»:2:12:
1| let
2| errpkg = throw "invalid foobar";
| ^
3| in errpkg.meta
error: invalid foobar
into errors like:
let
  errpkg = throw "invalid foobar";
in errpkg.meta
error:
… while evaluating 'errpkg' to select 'meta' on it
at «string»:3:4:
2| errpkg = throw "invalid foobar";
3| in errpkg.meta
| ^
… while calling the 'throw' builtin
at «string»:2:12:
1| let
2| errpkg = throw "invalid foobar";
| ^
3| in errpkg.meta
error: invalid foobar
For the low price of one try/catch, you too can have the incorrect line
of code actually show up in the trace!
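The mechanism is just catch, annotate, rethrow. A self-contained model of the idea (the real change uses libexpr's Error::addTrace around the evaluation of the expression being selected on):
```
#include <iostream>
#include <stdexcept>
#include <string>
#include <vector>

// Toy model, not libexpr: catch the error, attach one more frame of context
// pointing at the selection site, and rethrow the same object.
struct TracedError : std::runtime_error {
    std::vector<std::string> trace;
    using std::runtime_error::runtime_error;
};

int evaluateBase() { throw TracedError("invalid foobar"); }

int selectAttr(const std::string & exprName, const std::string & attrName)
{
    try {
        return evaluateBase();
    } catch (TracedError & e) {
        e.trace.push_back("while evaluating '" + exprName + "' to select '" + attrName + "' on it");
        throw;   // rethrow the *same* object, with the extra frame attached
    }
}

int main()
{
    try {
        selectAttr("errpkg", "meta");
    } catch (TracedError & e) {
        for (auto it = e.trace.rbegin(); it != e.trace.rend(); ++it)
            std::cout << "… " << *it << "\n";
        std::cout << "error: " << e.what() << "\n";
    }
}
```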
Change-Id: If8d6200ec1567706669d405c34adcd7e2d2cd29d
generators are a better basis for serializers than streaming into sinks
as we do currently, for many reasons: they are usable as sources if one
wishes (without requiring an intermediate sink to serialize full data
sets into memory, or boost coroutines to turn sinks into sources); they
compose more naturally (one can just yield a sub-generator instead of
being forced to wrap entire substreams into clunky functions, or even
clunkier custom types implementing operator<<); they allow wrappers to
transform data with clear ownership semantics (removing the need for
explicit memory allocations and Source wrappers); and many other things.
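To make the composition point concrete, here is a minimal toy generator and two serializers built on it (purely illustrative; Lix's actual Generator type in libutil is richer and spelled differently, and a real serializer would yield byte spans rather than strings):
```
#include <coroutine>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// Minimal toy generator, only to illustrate the composition argument above.
template<typename T>
struct Generator
{
    struct promise_type
    {
        T current;
        Generator get_return_object()
        {
            return Generator{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        std::suspend_always yield_value(T value)
        {
            current = std::move(value);
            return {};
        }
        void return_void() {}
        void unhandled_exception() { throw; }
    };

    explicit Generator(std::coroutine_handle<promise_type> h) : handle(h) {}
    Generator(Generator && other) noexcept : handle(std::exchange(other.handle, {})) {}
    ~Generator() { if (handle) handle.destroy(); }

    // Pull the next chunk; returns false once the coroutine has finished.
    bool next(T & out)
    {
        handle.resume();
        if (handle.done()) return false;
        out = std::move(handle.promise().current);
        return true;
    }

private:
    std::coroutine_handle<promise_type> handle;
};

// A leaf serializer: length-prefixed string.
Generator<std::string> serializeString(std::string s)
{
    co_yield std::to_string(s.size()) + ":";
    co_yield std::move(s);
}

// A composite serializer: with a richer generator type one would simply
// co_yield the sub-generator; the toy type makes us drain it by hand, but
// no intermediate buffer or Sink adapter is ever needed.
Generator<std::string> serializeList(std::vector<std::string> items)
{
    co_yield std::to_string(items.size()) + ";";
    for (auto & item : items) {
        auto sub = serializeString(std::move(item));
        std::string chunk;
        while (sub.next(chunk))
            co_yield chunk;
    }
}

int main()
{
    auto gen = serializeList({"foo", "barbaz"});
    std::string chunk;
    while (gen.next(chunk))
        std::cout << chunk;           // the generator doubles as a Source
    std::cout << "\n";                // prints: 2;3:foo6:barbaz
}
```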
Change-Id: I361d89ff556354f6930d9204f55117565f2f7f20
this will be the basis of non-boost coroutines in lix. anything that is
a boost coroutine *should* be representable with a Generator coroutine,
and many things that are not currently boost coroutines but behave much
like one (such as, notably, serializers) should be as well. this allows
us to greatly simplify many things that look like iteration but aren't.
Change-Id: I2cebcefa0148b631fb30df4c8cfa92167a407e34
Previously, the progress bar had two subtly different states in which the bar
would not actually render, both with their own shortcomings: inactive (which
was irreversible) and paused (reversible, but swallowing logs). Furthermore,
there was no way of resetting the statistics, so a very bad solution was
implemented (243c0f18da) that would create a new
logger for each line of the repl, leaking the previous one and discarding the
value of printBuildLogs. Finally, if stderr was not attached to a TTY, the
update thread was started even though the logger was not active, violating the
invariant required by the destructor (which is not observed because the logger
is leaked).
In this commit, the two aforementioned states are unified into a single one,
which can be exited again, correctly upholds the invariant that the update
thread is only running while the progress bar is active, and does not swallow
logs. The latter change in behavior is not expected to be a problem in the
rare cases where the paused state was used before, since other loggers (like
the simple one) don't exhibit it anyway. The startProgressBar/stopProgressBar
API is removed due to being a footgun, and a new method for properly resetting
the progress is added.
Co-Authored-By: Qyriad <qyriad@qyriad.me>
Change-Id: I2b7c3eb17d439cd0c16f7b896cfb61239ac7ff3a
The `allow-flake-configuration` option allows the user to control whether to
accept configuration options supplied by flakes. Unfortunately, setting this
to false really meant "ask each time" (with an option to remember the choice
for each specific option encountered). Let no mean no, and introduce (and
default to) a separate value for the "ask each time" behaviour.
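Conceptually the option becomes a tri-state instead of a boolean, along these lines (a sketch; the concrete enum and value names in the settings code may differ):
```
// Sketch: "false" now really means no; asking is its own state (and the default).
enum class AcceptFlakeConfig {
    False,  // never apply flake-provided settings
    Ask,    // prompt for each setting, with an option to remember the choice
    True,   // always apply them
};
```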
Co-Authored-By: Jade Lovelace <lix@jade.fyi>
Change-Id: I7ccd67a95bfc92cffc1ebdc972d243f5191cc1b4
We previously allowed you to map any flake URL to any other flake URL,
including shorthand flakerefs, indirect flake URLs like `flake:nixpkgs`,
direct flake URLs like `github:NixOS/nixpkgs`, or local paths.
But flake registry entries mapping from direct flake URLs often come
from swapping the 'from' and 'to' arguments by accident, and even when
created intentionally, they may not actually work correctly.
This patch rejects those URLs (and fully-qualified flake: URLs), making
it harder to swap the arguments by accident.
Fixes: #181
Change-Id: I24713643a534166c052719b8770a4edfcfdb8cf3
This is a shameless layering violation in favour of UX. It falls back
trivially to "unknown", so it's purely a UX feature.
Diagnostic sample:
```
error: hash mismatch in fixed-output derivation '/nix/store/sjfw324j4533lwnpmr5z4icpb85r63ai-x1.drv':
likely URL: https://meow.puppy.forge/puppy.tar.gz
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-a1Qvp3FOOkWpL9kFHgugU1ok5UtRPSu+NwCZKbbaEro=
```
Change-Id: I873eedcf7984ab23f57a6754be00232b5cb5b02c
this gives about 20% performance improvements on pure parsing. obviously
it will be less on full eval, but depending on how much parsing is to be
done (e.g. including hackage-packages.nix or not) it's more like 4%-10%.
this has been tested (with thousands of core hours of fuzzing) to ensure
that the ASTs produced by the new parser are exactly the same as the old
one would have produced. error messages will change (sometimes by a lot)
and are not yet perfect, but we would rather leave this as is for later.
test results for running only the parser (excluding the variable binding
code) in a tight loop with inputs and parameters as given are promising:
- 40% faster on lix's package.nix at 10000 iterations
- 1.3% faster on nixpkgs all-packages.nix at 1000 iterations
- equivalent on all of nixpkgs concatenated at 100 iterations
(excluding invalid files, each file surrounded with parens)
more realistic benchmarks are somewhere in between the extremes, parsing
once again getting the largest uplift. other realistic workloads improve
by a few percentage points as well, notably system builds are 4% faster.
Benchmarks summary (from ./bench/summarize.jq bench/bench-*.json)
old/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.408s ± 0.025s
user: 0.355s | system: 0.033s
median: 0.389s
range: 0.388s ... 0.442s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.332s ± 0.024s
user: 0.279s | system: 0.033s
median: 0.314s
range: 0.313s ... 0.361s
relative: 0.814
---
old/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 6.133s ± 0.022s
user: 5.395s | system: 0.437s
median: 6.128s
range: 6.099s ... 6.183s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 5.925s ± 0.025s
user: 5.176s | system: 0.456s
median: 5.934s
range: 5.861s ... 5.943s
relative: 0.966
---
GC_INITIAL_HEAP_SIZE=10g old/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.503s ± 0.027s
user: 3.731s | system: 0.547s
median: 4.499s
range: 4.478s ... 4.541s
relative: 1
GC_INITIAL_HEAP_SIZE=10g new/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.285s ± 0.031s
user: 3.504s | system: 0.571s
median: 4.281s
range: 4.221s ... 4.328s
relative: 0.951
---
old/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 16.475s ± 0.07s
user: 14.088s | system: 1.572s
median: 16.495s
range: 16.351s ... 16.536s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 15.973s ± 0.013s
user: 13.558s | system: 1.615s
median: 15.973s
range: 15.946s ... 15.99s
relative: 0.97
---
Change-Id: Ie66ec2d045dec964632c6541e25f8f0797319ee2
This reverts commit 35eec921af.
Reason for revert: Regressed nix-eval-jobs, and it appears that this change is buggy/missing a case. It just needs another pass.
Code causing the problem in n-e-j, when invoked with `nix-eval-jobs --flake '.#hydraJobs'`:
```
n-e-j/tests/assets » ../../build/src/nix-eval-jobs --meta --workers 1 --flake .#hydraJobs
warning: unknown setting 'trusted-users'
warning: `--gc-roots-dir' not specified
error: unsupported Git input attribute 'dir'
error: worker error: error: unsupported Git input attribute 'dir'
```
```
nix::Value *vRoot = [&]() {
    if (args.flake) {
        auto [flakeRef, fragment, outputSpec] =
            nix::parseFlakeRefWithFragmentAndExtendedOutputsSpec(
                args.releaseExpr, nix::absPath("."));
        nix::InstallableFlake flake{
            {}, state, std::move(flakeRef), fragment, outputSpec,
            {}, {}, args.lockFlags};
        return flake.toValue(*state).first;
    } else {
        return releaseExprTopLevelValue(*state, autoArgs, args);
    }
}();
```
Inspecting the program behaviour reveals that `dir` was in fact set in the URL going into the fetcher. This is because, unlike in the case changed in this commit, it was not erased before handing the URL to libfetchers, which is probably just a mistake.
```
(rr) up
3 0x00007ffff60262ae in nix::fetchers::Input::fromURL (url=..., requireTree=requireTree@entry=true) at src/libfetchers/fetchers.cc:39
warning: Source file is more recent than executable.
39 auto res = inputScheme->inputFromURL(url, requireTree);
(rr) p url
$1 = (const nix::ParsedURL &) @0x7fffdc874190: {url = "git+file:///home/jade/lix/nix-eval-jobs",
base = "git+file:///home/jade/lix/nix-eval-jobs", scheme = "git+file", authority = std::optional<std::string> = {[contained value] = ""},
path = "/home/jade/lix/nix-eval-jobs", query = std::map with 1 element = {["dir"] = "tests/assets"}, fragment = ""}
(rr) up
4 0x00007ffff789d904 in nix::parseFlakeRefWithFragment (url=".#hydraJobs", baseDir=std::optional<std::string> = {...},
allowMissing=allowMissing@entry=false, isFlake=isFlake@entry=true) at src/libexpr/flake/flakeref.cc:179
warning: Source file is more recent than executable.
179 FlakeRef(Input::fromURL(parsedURL, isFlake), getOr(parsedURL.query, "dir", "")),
(rr) p parsedURL
$2 = {url = "git+file:///home/jade/lix/nix-eval-jobs", base = "git+file:///home/jade/lix/nix-eval-jobs", scheme = "git+file",
authority = std::optional<std::string> = {[contained value] = ""}, path = "/home/jade/lix/nix-eval-jobs", query = std::map with 1 element = {
["dir"] = "tests/assets"}, fragment = ""}
(rr) list
174
175 if (pathExists(flakeRoot + "/.git/shallow"))
176 parsedURL.query.insert_or_assign("shallow", "1");
177
178 return std::make_pair(
179 FlakeRef(Input::fromURL(parsedURL, isFlake), getOr(parsedURL.query, "dir", "")),
180 fragment);
181 }
```
Change-Id: Ib55a882eaeb3e59228857761dc1e3b2e366b0f5e
On operating systems where /bin/sh is not Bash, some scripts are invalid
because of bashisms, and building Lix fails with errors like this:
`render-manpage.sh: 3: set: Illegal option -o pipefail`
This changes every script that uses a `/bin/sh` shebang to use `/usr/bin/env
bash` instead, including currently POSIX-compliant ones, to prevent any future
confusion.
Change-Id: Ia074cc6db42d40fc59a63726f6194ea0149ea5e0
This is a squash of upstream PRs #10303, #10312 and #10883.
fix: Treat empty TMPDIR as unset
Fixes an instance of
nix: src/libutil/util.cc:139: nix::Path nix::canonPath(PathView, bool): Assertion `path != ""' failed.
... which I've been getting in one of my shells for some reason.
I have yet to find out why TMPDIR was empty, but it's no reason for
Nix to break.
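The gist of the TMPDIR part, as a self-contained sketch (helper names are modelled on libutil's, but treat the details as approximate):
```
#include <cstdlib>
#include <optional>
#include <string>

// Sketch: an *empty* TMPDIR is treated the same as an unset one, so the
// empty string can never reach canonPath() and trip the assertion above.
static std::optional<std::string> getEnvNonEmpty(const char * name)
{
    const char * value = std::getenv(name);
    if (!value || *value == '\0') return std::nullopt;
    return std::string(value);
}

std::string defaultTempDir()
{
    return getEnvNonEmpty("TMPDIR").value_or("/tmp");
}
```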
(cherry picked from commit c3fb2aa1f9d1fa756dac38d3588c836c5a5395dc)
fix: Treat empty XDG_RUNTIME_DIR as unset
See preceding commit. Not observed in the wild, but is sensible
and consistent with TMPDIR behavior.
(cherry picked from commit b9e7f5aa2df3f0e223f5c44b8089cbf9b81be691)
local-derivation-goal.cc: Reuse defaultTempDir()
(cherry picked from commit fd31945742710984de22805ee8d97fbd83c3f8eb)
fix: remove usage of XDG_RUNTIME_DIR for TMP
(cherry picked from commit 1363f51bcb24ab9948b7b5093490a009947f7453)
tests/functional: Add count()
(cherry picked from commit 6221770c9de4d28137206bdcd1a67eea12e1e499)
Remove uncalled for message
(cherry picked from commit b1fe388d33530f0157dcf9f461348b61eda13228)
Add build-dir setting
(cherry picked from commit 8b16cced18925aa612049d08d5e78eccbf0530e4)
Change-Id: Ic7b75ff0b6a3b19e50a4ac8ff2d70f15c683c16a
this is only used in one place, and only to set a nicer error message on
EndOfFile. the only caller that actually *catches* this exception should
provide an error message in that catch block, rather than forcing support
for setting the error message so deep into the stack. copyStorePath is never
called outside of PathSubstitutionGoal anyway, which catches everything.
Change-Id: Ifbae8706d781c388737706faf4c8a8b7917ca278
The original idea was to fix lix#174, but for a user-friendly solution,
I figured that we'd need more consistency:
* Invalid query params will cause an error, just like invalid
  attributes. This has the following two consequences:
  * The `?dir=`-param from flakes will be removed before the URL to be
    fetched is passed to libfetchers.
  * The tarball fetcher doesn't allow URLs with custom query params
    anymore. I think this was questionable anyway, given that an
    arbitrary set of query params was silently removed from the URL you
    wanted to fetch. The correct way is to use an attribute set with a
    key `url` that contains the tarball URL to fetch.
  * Same for the git & mercurial fetchers; in that case it doesn't even
    matter though: both fetchers added unused query params to the URL
    that's passed from the input scheme to the fetcher (`url2` in the
    code). It turns out that this was never used, since the query
    parameters were erased again in `getActualUrl`.
* Validation happens for both attributes and URLs. Previously, a lot of
  fetchers validated e.g. refs/revs only when specified in a URL, and
  the validity of attribute names only in `inputFromAttrs`. Now, all the
  validation is done in `inputFromAttrs`, and `inputFromURL` constructs
  attributes that will be passed to `inputFromAttrs`.
* Accept all attributes as URL query parameters. That also includes
  lesser-used ones such as `narHash`, and "output" attributes like
  `lastModified`: these could already be declared when specifying inputs
  as an attribute set rather than a URL. Now the behavior is at least
  consistent.
Personally, I think we should differentiate in the future between
"fetched input" (basically the attr-set that ends up in the lock-file)
and "unfetched input" earlier: both inputFrom{Attrs,URL} entrypoints
are probably OK for unfetched inputs, but for locked/fetched inputs
a custom entrypoint should be used. Then, the current entrypoints
wouldn't have to allow these attributes anymore.
Change-Id: I1be1992249f7af8287cfc37891ab505ddaa2e8cd
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
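The failure mode is easiest to see with a concrete decompression loop; here sketched with zlib for illustration (Lix's decompression sinks wrap other libraries, but the loop-condition point is the same):
```
#include <stdexcept>
#include <string>
#include <string_view>
#include <zlib.h>

// Illustration: keep calling inflate() until it reports Z_STREAM_END, not
// merely until the input buffer has been consumed; output can still be
// pending even when avail_in is 0.
std::string decompressGzip(std::string_view compressed)
{
    z_stream strm{};
    if (inflateInit2(&strm, 15 + 32) != Z_OK)   // accept gzip or zlib streams
        throw std::runtime_error("inflateInit2 failed");

    strm.next_in = (Bytef *) compressed.data();
    strm.avail_in = compressed.size();

    std::string out;
    char buf[8192];
    int ret;
    do {
        strm.next_out = (Bytef *) buf;
        strm.avail_out = sizeof(buf);
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) {
            inflateEnd(&strm);
            throw std::runtime_error("inflate failed or input truncated");
        }
        out.append(buf, sizeof(buf) - strm.avail_out);
        // the loop condition is end of *stream*, not "no input left"
    } while (ret != Z_STREAM_END);

    inflateEnd(&strm);
    return out;
}
```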
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations)
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27