I tested a trivial program that called kill(-1, SIGKILL), which was
run as the only process for an unprivileged user, on Linux and
FreeBSD. On Linux, kill reported success, while on FreeBSD it failed
with EPERM.
POSIX says:
> If pid is -1, sig shall be sent to all processes (excluding an
> unspecified set of system processes) for which the process has
> permission to send that signal.
and
> The kill() function is successful if the process has permission to
> send sig to any of the processes specified by pid. If kill() fails,
> no signal shall be sent.
and
> [EPERM]
> The process does not have permission to send the signal to any
> receiving process.
My reading of this is that kill(-1, ...) may fail with EPERM when
there are no other processes to kill (since the current process is
ignored). Since kill(-1, ...) only attempts to kill processes the
user has permission to kill, EPERM can't mean that we tried to do
something we didn't have permission to do, so it should be fine to
interpret EPERM the same as success here for any POSIX-compliant
system.
This fixes an issue that Mic92 encountered[1] when he tried to review a
Nixpkgs PR on FreeBSD.
[1]: https://github.com/NixOS/nixpkgs/pull/81459#issuecomment-606073668
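For reference, the resulting check is tiny; a hedged sketch (not the actual Nix code, the helper name is illustrative):
```cpp
#include <errno.h>
#include <signal.h>
#include <system_error>

/* Send SIGKILL to every process the calling user may signal, treating EPERM
   like success, since it can only mean that there was no other process left
   to kill (as on FreeBSD above). */
static void killUserProcesses()
{
    if (kill(-1, SIGKILL) == 0) return;
    if (errno == EPERM) return; /* no other process to signal; that's fine */
    throw std::system_error(errno, std::generic_category(), "kill(-1, SIGKILL)");
}
```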
We upgrade to lowdown 0.8.0 [1], which contains a fix/improvement to a
behavior mentioned in this issue thread [2]: a big part of
lowdown's API would just call exit(1) on allocation errors, since that
is a satisfactory behavior for the lowdown binary.
Now lowdown_term_rndr returns 0 if an allocation error occurred, which we
check for in libcmd/markdown.cc.
Also the extern "C" { } wrapper around lowdown.h has been removed as it
is not necessary.
[1]: 6ca7c855a0/versions.xml (L987-L1006)
[2]: https://github.com/kristapsdz/lowdown/issues/45#issuecomment-756681153
In addition to being some ugly template trickery, it was also totally
useless, as it was used in only one place where I could replace it with
just a few extra characters.
This makes nix search always go through the first level of an
attribute set, even if it's not a top level attribute. For instance,
you can now list all GHC compilers with:
$ nix search nixpkgs#haskell.compiler
...
This is similar to how nix-env works when you pass in -A.
This fixes an issue where derivations with a primary output that is
not "out" would fail with:
$ nix profile install nixpkgs#sqlite
error: opening directory '/nix/store/2a2ydlgyydly5czcc8lg12n6qqkfz863-sqlite-3.34.1-bin': No such file or directory
This happens because while derivations produce every output when
built, you might not have them if you didn't build the derivation
yourself (for instance, the store path was fetched from a binary cache).
This uses outputName provided by DerivationInfo, which appears to
match the first output of the derivation.
Where a `RealisedPath` is a store path with its history, meaning either
an opaque path for stuff that has been directly added to the store, or a
`Realisation` for stuff that has been built by a derivation.
This is a low-level refactoring that doesn't bring anything by itself
(except a few dozen extra lines of code :/ ), but raising the
abstraction level a bit is important on a number of levels:
- Commands like `nix build` have to query for the realisations after the
build is finished which is fragile (see
27905f12e4a7207450abe37c9ed78e31603b67e1 for example). Having them
operate directly at the realisation level would avoid that
- Others like `nix copy` currently operate directly on (built) store
paths, but need a bit more information as they will need to register
the realisations on the remote side
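Roughly, the new type is a tagged union over those two cases; a simplified sketch (not the exact definitions):
```cpp
#include <string>
#include <variant>

// Rough stand-ins for the real Nix types; names and fields are simplified.
struct StorePath { std::string path; };
struct Realisation { std::string drvOutputId; StorePath outPath; };

// A store path together with its history: either an opaque path that was
// added directly to the store, or the realisation of a derivation output.
struct RealisedPath
{
    std::variant<StorePath, Realisation> raw;

    // Whatever the history, a RealisedPath denotes one concrete store path.
    const StorePath & path() const
    {
        if (auto opaque = std::get_if<StorePath>(&raw)) return *opaque;
        return std::get<Realisation>(raw).outPath;
    }
};
```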
Example:
error: builder for '/nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv' failed with exit code 1;
last 1 log lines:
> FAIL
For full logs, run 'nix log /nix/store/9ysqfidhipyzfiy54mh77iqn29j6cpsb-failing.drv'.
… while importing '/nix/store/pfp4a4bjh642ylxyipncqs03z6kkgfvy-failing'
at /nix/store/25wgzr2qrqqiqfbdb1chpiry221cjglc-source/flake.nix:58:15:
57|
58| ifd = import self.hydraJobs.broken;
| ^
59|
Fix a mismatch in the errors thrown when a needed output was missing
from an input derivation, which was leading to a wrong and quite
misleading error message.
Don't only show the name of the output, but also the derivation to which
this output belongs (as otherwise it's very hard to track back what went
wrong)
It's now
at /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix:7:7:
instead of
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
The new format is more standard and clickable.
Changes:
* The divider lines are gone. These were in practice a bit confusing,
in particular with --show-trace or --keep-going, since then there
were multiple lines, suggesting a start/end which wasn't the case.
* Instead, multi-line error messages are now indented to align with
the prefix (e.g. "error: ").
* The 'description' field is gone since we weren't really using it.
* 'hint' is renamed to 'msg' since it really wasn't a hint.
* The error is now printed *before* the location info.
* The 'name' field is no longer printed since most of the time it
  wasn't very useful, as it was just the name of the exception (like
  EvalError). Ideally in the future this would be a unique, easily
googleable error ID (like rustc).
* "trace:" is now just "…". This assumes error contexts start with
something like "while doing X".
Example before:
error: --- AssertionError ---------------------------------------------------------------------------------------- nix
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
assertion 'false' failed
----------------------------------------------------- show-trace -----------------------------------------------------
trace: while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
Example after:
error: assertion 'false' failed
at: (7:7) in file: /home/eelco/Dev/nixpkgs/pkgs/applications/misc/hello/default.nix
6|
7| x = assert false; 1;
| ^
8|
… while evaluating the attribute 'x' of the derivation 'hello-2.10'
at: (192:11) in file: /home/eelco/Dev/nixpkgs/pkgs/stdenv/generic/make-derivation.nix
191| // (lib.optionalAttrs (!(attrs ? name) && attrs ? pname && attrs ? version)) {
192| name = "${attrs.pname}-${attrs.version}";
| ^
193| } // (lib.optionalAttrs (stdenv.hostPlatform != stdenv.buildPlatform && !dontAddHostSuffix && (attrs ? name || (attrs ? pname && attrs ? version)))) {
This change is to simplify [Trustix](https://github.com/tweag/trustix) indexing and makes it possible to reconstruct this URL regardless of the compression used.
In particular this means that 7c2e9ca597/contrib/nix/nar/nar.go (L61-L71) can be removed and only the bits that are required to establish trust need to be published in the Trustix build logs.
With the `ca-derivations` experimental feature, non-ca derivations used
to have their output paths returned as unknown as long as they weren't
built (because of a mistake in the code that systematically erased the
previous value)
Thanks @regnat and @edolstra for catching this and coming up with the
solution.
The way I had generalized those is wrong, because local settings for
non-local stores are a confusing default. And due to the nature of C++
inheritance, fixing the defaults is more annoying than it should be.
Additionally, I thought we might just drop the check in the substitution
logic since `Store::addToStore` is now streaming, but @regnat rightfully
pointed out that as it downloads dependencies first, that would still be
too late, and also waste effort on possibly unneeded/unwanted
dependencies.
The simple and correct thing to do is just make a store method for the
boolean logic, keeping all the setting and key stuff the way it was
before. That new method is both used by `LocalStore::addToStore` and the
substitution goal check. Perhaps we might eventually make it fancier,
e.g. sending the ValidPathInfo to remote stores for them to validate,
but this is good enough for now.
By default, once you enter x86_64 Rosetta 2, macOS will try to run
everything in x86_64. So an x86_64 Nix will still try to use x86_64
even when system = aarch64-darwin. To avoid this we can set the
kern.curproc_arch_affinity sysctl. With kern.curproc_arch_affinity=0,
we ignore this preference.
This is based on how
https://opensource.apple.com/source/system_cmds/system_cmds-880.40.5/arch.tproj/arch.c.auto.html
works. Completely undocumented, but seems to work!
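The gist of the change, as a hedged sketch (simplified from what the build code actually does):
```cpp
#include <stdio.h>
#include <sys/sysctl.h>

// Clear the architecture affinity inherited from a Rosetta 2 parent before
// spawning builders, so an x86_64 Nix can still run aarch64-darwin builds
// natively.
static void clearArchAffinity()
{
    int affinity = 0;
    if (sysctlbyname("kern.curproc_arch_affinity", nullptr, nullptr,
                     &affinity, sizeof(affinity)) == -1)
        perror("sysctlbyname(kern.curproc_arch_affinity)"); // non-fatal
}
```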
Note, you can verify this works with this impure Nix expression:
```
{
  a = derivation {
    name = "a";
    system = "aarch64-darwin";
    builder = "/bin/sh";
    args = [ "-e" (builtins.toFile "builder" ''
      [ "$(/usr/bin/arch)" = arm64 ]
      [ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  b = derivation {
    name = "b";
    system = "x86_64-darwin";
    builder = "/bin/sh";
    args = [ "-e" (builtins.toFile "builder" ''
      [ "$(/usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch x86_64 /bin/sh -c /usr/bin/arch)" = i386 ]
      [ "$(/usr/bin/arch -arch arm64 /bin/sh -c /usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
}
```
This resolves #3810 by changing the behavior of `max-jobs = 0`, so
that specifying the option also avoids local building of derivations
with the attribute `preferLocalBuild = true`.
It appears as though the fetch attribute, which
is simply a variant with 3 elements, implicitly
converts boolean arguments to integers. One must
use Explicit<bool> to correctly populate it with
a boolean. This was missing from the implementation,
and resulted in clearly boolean JSON fields being
treated as numbers.
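Here is the pitfall in isolation, as a small standalone example (the variant is a stand-in for the real fetch-attribute type, and this `Explicit` is written out just for the example):
```cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <variant>

// Wrapper that only accepts an actual T, defeating the bool -> integer
// promotion.
template<typename T>
struct Explicit { T t; };

using FetchAttr = std::variant<std::string, uint64_t, Explicit<bool>>;

int main()
{
    FetchAttr a = true;                 // silently stored as uint64_t{1}
    FetchAttr b = Explicit<bool>{true}; // unambiguously a boolean

    std::cout << std::holds_alternative<uint64_t>(a) << '\n';        // 1
    std::cout << std::holds_alternative<Explicit<bool>>(b) << '\n';  // 1
}
```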
libc++10 seems to be stricter on what it allows in variant conversion.
I'm not sure what the rules are here, but this is the minimal change
needed to get through the compilation errors.
Sometimes it's necessary to fetch a git repository at a revision and
it's unknown which ref contains the revision in question. An example
would be a Cargo.lock which only provides the URL and the revision when
using a git repository as build input.
However it's considered a bad practice to perform a full checkout of a
repository since this may take a lot of time and can eat up a lot of
disk space. This patch makes a full checkout explicit by adding an
`allRefs` argument to `builtins.fetchGit` which fetches all refs if
explicitly set to true.
Closes #2409
A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object`-error when trying to fetch a revision of a git-repository
that isn't on the `master` branch and no `ref` is specified.
In order to make clear what the problem is, I added a simple check of
whether the revision in question exists, and if it doesn't, a more
meaningful error message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
Closes #2431
We embrace virtual the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add a note that the static mk* functions should be removed eventually
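For illustration, a greatly simplified sketch of the renamed interface (this is not the real Value definition):
```cpp
#include <cstdint>

// Greatly simplified stand-in for libexpr's Value, just to show the shape of
// the renamed API; the real type has many more variants and fields.
enum class ValueType { tInt, tBool, tNull /* ... */ };

struct Value
{
    ValueType type;
    union { int64_t integer; bool boolean; };

    void mkInt(int64_t n) { clearValue(); type = ValueType::tInt; integer = n; }
    void mkBool(bool b)   { clearValue(); type = ValueType::tBool; boolean = b; }
    void mkNull()         { clearValue(); type = ValueType::tNull; }

private:
    // clearValue now lives inside Value as well.
    void clearValue() { integer = 0; }
};

// Usage: Value v; v.mkInt(42);   // previously v.setInt(42)
```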
PRs #4370 and #4348 had a bad interaction in that the second broke the first
one in a non-trivial way.
The issue was that since #4348 the logic for detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens though that most of this logic could be upstreamed to any `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
The use-case 2. required it to be able to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, this use-case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It could be possible to
extend this to also work without it, but it would add quite a bit of
complexity, and it's not used without it anyways.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
Because of an overly eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid nar.
Partially revert that refactoring to wrap the text into a proper nar
(using `dumpString`) to make this method work again
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name) it isn't really a good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve the information for a
derivation output
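Schematically, the registered data looks something like this (simplified stand-in types, not the exact declarations):
```cpp
#include <optional>
#include <string>

// Identifier of a single derivation output, e.g. "<drv hash>!out".
struct DrvOutput
{
    std::string drvHash;     // hash identifying the derivation
    std::string outputName;  // e.g. "out", "dev", ...

    std::string to_string() const { return drvHash + "!" + outputName; }
};

// What a store has to remember for each known realisation.
struct Realisation
{
    DrvOutput id;
    std::string outPath;     // the store path this output was realised to
};

// The new Store-level API, in spirit:
//   void registerDrvOutput(const Realisation &);
//   std::optional<Realisation> queryRealisation(const DrvOutput &);
```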
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However we might need to keep both (regardless of backwards compat)
because we sometimes need to get information for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it − because listing all the outputs of a
derivation isn't really possible for binary caches where the server
doesn't allow listing a directory.
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch now allows both ways of specifying a store (`root@2001:db8::1`) and
also `root@[2001:db8::1]` since the latter one is useful to pass query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)`:
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
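A self-contained sketch of the bracket-stripping done by `extractConnStr` (the regex here is illustrative, not necessarily the exact one in the patch):
```cpp
#include <iostream>
#include <regex>
#include <string>

// Turn "root@[2001:db8::1]" (or "[2001:db8::1]") into something ssh(1)
// accepts, i.e. "root@2001:db8::1". Anything that doesn't match is returned
// untouched.
static std::string extractConnStr(const std::string & authority)
{
    static const std::regex bracketed{R"(^(.*@)?\[([0-9a-fA-F:]+)\]$)"};
    std::smatch m;
    if (std::regex_match(authority, m, bracketed))
        return m[1].str() + m[2].str();
    return authority;
}

int main()
{
    std::cout << extractConnStr("root@[2001:db8::1]") << '\n'; // root@2001:db8::1
    std::cout << extractConnStr("[2001:db8::1]") << '\n';      // 2001:db8::1
    std::cout << extractConnStr("root@2001:db8::1") << '\n';   // unchanged
}
```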
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document includes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname; however, it seems that this is not really supported by the URL
parser anyway. Hence, this is probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However this variable wasn't set when the
build-hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix Store Dir. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note that this does not prevent the other architecture from
being run, just advises which to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
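A hedged sketch of that spawn path (simplified; not the actual builder code):
```cpp
#include <mach/machine.h>
#include <spawn.h>
#include <sys/types.h>

extern char **environ;

// Spawn the builder, advising XNU which slice of a universal binary to pick.
// `wantArm64` would come from the derivation's `system` attribute.
static pid_t spawnBuilder(const char * path, char * const argv[], bool wantArm64)
{
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);

    cpu_type_t pref = wantArm64 ? CPU_TYPE_ARM64 : CPU_TYPE_X86_64;
    size_t ocount = 0;
    // Only a preference: the other architecture can still be executed.
    posix_spawnattr_setbinpref_np(&attr, 1, &pref, &ocount);

    pid_t pid = -1;
    posix_spawn(&pid, path, nullptr, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    return pid;
}
```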
For example:
{
  arm = derivation {
    name = "test";
    system = "aarch64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  rosetta = derivation {
    name = "test";
    system = "x86_64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = i386 ]
      echo It works!
      /usr/bin/touch $out
    '') ];
  };
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and adds it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
If the build closure contains some CA derivations, then we can't know
ahead-of-time that we won't build anything as early cutoff might come in
at a later stage
using fallocate() to preallocate file space does more harm than good:
- breaks compression on btrfs
- has been called "not the right thing to do" by xfs developers
(because the delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem has to do if we fallocate files up front)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this in that
it's intended for use by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
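Stripped down to its essence, the deadlock looks like this (a toy single-slot pool, not the real classes):
```cpp
#include <condition_variable>
#include <mutex>

// Toy single-slot "connection pool" (max-connections=1), only to illustrate
// the deadlock; the real code uses a proper connection pool class.
struct Pool
{
    std::mutex m;
    std::condition_variable cv;
    int available = 1;

    void acquire() {
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return available > 0; });
        --available;
    }
    void release() {
        { std::lock_guard<std::mutex> lk(m); ++available; }
        cv.notify_one();
    }
};

void queryPartialDerivationOutputMap(Pool & pool)
{
    pool.acquire();   // blocks forever: the caller still holds the only slot
    /* ... talk to the daemon ... */
    pool.release();
}

void queryDerivationOutputs(Pool & pool)
{
    pool.acquire();   // take the only connection to check the daemon version
    queryPartialDerivationOutputMap(pool);   // deadlock
    pool.release();
}
```
The fix amounts to releasing the connection (as `getProtocol` does) before making the nested call.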
Fixes:
$ nix build --store /tmp/nix /home/eelco/Dev/patchelf#hydraJobs.build.x86_64-linux
warning: Git tree '/home/eelco/Dev/patchelf' is dirty
error: --- RestrictedPathError ------------------------------------------------------------------------------------------- nix
access to path '/tmp/nix/nix/store/xmkvfmffk7xfnazykb5kx999aika8an4-source/flake.nix' is forbidden in restricted mode
(use '--show-trace' to show detailed location information)
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon-side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths` which gets currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solution for #4178, #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
nix files. Some may think that "wanted" and "got" are obvious, but:
"got" could mean "got in nix file" and "wanted" could mean "want to see in nix file".
This makes it possible to have per-project configuration in flake.nix,
e.g. binary caches and other stuff:
nixConfig.bash-prompt-suffix = "[1;35mngi# [0m";
nixConfig.substituters = [ "https://cache.ngi0.nixos.org/" ];
This is primarily useful if you're hacking simultaneously on a package
and one of its dependencies. E.g. if you're hacking on Hydra and Nix,
you would start a dev shell for Nix, and then a dev shell for Hydra as
follows:
$ nix develop \
--redirect .#hydraJobs.build.x86_64-linux.nix ~/Dev/nix/outputs/out \
--redirect .#hydraJobs.build.x86_64-linux.nix.dev ~/Dev/nix/outputs/dev
(This assumes hydraJobs.build.x86_64-linux has a passthru.nix
attribute. You can also use a store path.)
This causes all references in the environment to those store paths to
be rewritten to ~/Dev/nix/outputs/{out,dev}. Note: unfortunately, you
may need to set LD_LIBRARY_PATH=~/Dev/nix/outputs/out/lib because
Nixpkgs' ld-wrapper only adds -rpath entries for -L flags that point
to the Nix store.
Fix #3975: Currently if Ctrl-C is pressed during a phase, the interactive subshell
is not exited. Removing --rcfile when --phase is present makes bash
non-interactive
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (afaik), but other shells like zsh or fish
can display it alongside the completion choices
Observed on Centos 7 when user namespaces are disabled:
DerivationGoal::startBuilder() throws an exception, ~DerivationGoal()
waits for the child process to exit, but the child process hangs
forever in drainFD(userNamespaceSync.readSide.get()) in
DerivationGoal::runChild(). Not sure why the SIGKILL doesn't get
through.
Issue #4092.
When running `nix build -L` it can be fairly hard to read the output if
the build program intentionally renders whitespace on the left. A
typical example is `g++` displaying compilation errors.
With this patch, the whitespace on the left is retained to make the log
more readable:
```
foo> no configure script, doing nothing
foo> building
foo> foobar.cc: In function 'int main()':
foo> foobar.cc:5:5: error: 'wrong_func' was not declared in this scope
foo> 5 | wrong_func(1);
foo> | ^~~~~~~~~~
error: --- Error ------------------------------------------------------------------------------------- nix
error: --- Error --- nix-daemon
builder for '/nix/store/i1q76cw6cyh91raaqg5p5isd1l2x6rx2-foo-1.0.drv' failed with exit code 1
```
The registry targets generally follow a URL formatting schema with
support for a query parameter of "?dir=subpath" to specify a sub-path
location below the URL root.
Alternatively, an absolute path can be specified. This specification
mode accepts the query parameter but ignores/drops it. It would
probably be better to either (a) disallow the query parameter for the
path form, or (b) recognize the query parameter and add to the path.
This patch implements (b) for consistency, and to make it easier for
tooling that might switch between a remote git reference and a local
path reference.
See also issue #4050.
This change provides support for using access tokens with other
instances of GitHub and GitLab beyond just github.com and
gitlab.com (especially company-specific or foundation-specific
instances).
This change also provides the ability to specify the type of access
token being used, where different types may have different handling,
based on the forge type.
std::optional had redundant checks for whether it had a value.
An object is emplaced either way so it can be dereferenced
without repeating a value check
After 0ed946aa61, max-jobs setting (-j/--max-jobs)
stopped working.
The reason was that nrLocalBuilds (which is compared to maxBuildJobs to figure
out whether the limit is reached or not) is not incremented yet when tryBuild
is started; So, the solution is to move the check to tryLocalBuild.
Closes https://github.com/nixos/nix/issues/3763
We don't need it yet, but we could/should in the future, and it's a
cost-free change since we already have the reference. I like it.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Rather than showing an integer as the default, instead show the boolean
referenced in the description.
The nix.conf.5 manpage used to show "default: 0", which is unnecessarily
opaque and confusing (doesn't 0 mean false, even though the default is
true?); now it properly shows that the default is true.
pkgs.fetchurl supports an executable argument, which is especially nice
when downloading a large executable. This patch adds the same option to
nix-prefetch-url.
I have tested this to work on the simple case of prefetching a little
executable:
1. nix-prefetch-url --executable https://my/little/script
2. Paste the hash into a pkgs.fetchurl-based package, script-pkg.nix
3. Delete the output from the store to avoid any misidentified artifacts
4. Realise the package script-pkg.nix
5. Run the executable
I repeated the above while using --name, as well.
I suspect --executable would have no meaningful effect if combined with
--unpack, but I have not tried it.
Since 108debef6f we allow a
`url`-attribute for the `github`-fetcher to fetch tarballs from
self-hosted `gitlab`/`github` instances.
However it's not used when defining e.g. a flake-input
foobar = {
  type = "github";
  url = "gitlab.myserver";
  /* ... */
}
and breaks with an evaluation-error:
error: --- Error --------------------------------------nix
unsupported input attribute 'url'
(use '--show-trace' to show detailed location information)
This patch allows flake-inputs to be fetched from self-hosted instances
as well.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method is recently deprecated and will be disallowed
in November 2020. Using them today triggers a warning email.
It is apparently required for using `toJSONObject()`, which we do inside
the header file (because it's in a template).
This was accidentally working when building Nix itself (presumably because
`config.hh` was always included after `nlohmann/json.hpp`) but caused a
(pretty dirty) build failure in the perl bindings package.
Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
| |
v v
SubStoreConfig----->SubStore
| |
v v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent DDD, i.e. the diamond inheritance problem).
The advantage of this architecture is that we can now introspect the configuration of a store without having to instantiate the store itself
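In C++ terms the pattern is roughly the following (placeholder members, not the real declarations):
```cpp
#include <string>

struct StoreConfig
{
    // Settings live here, so they can be inspected without opening a store.
    std::string storeDir = "/nix/store";
    virtual ~StoreConfig() = default;
};

struct Store : virtual StoreConfig
{
    virtual void doSomething() = 0;
};

struct SubStoreConfig : virtual StoreConfig
{
    std::string subSetting;
};

// Virtual inheritance keeps a single StoreConfig subobject in SubStore, and
// SubStoreConfig alone can be instantiated to introspect the settings
// without constructing (and connecting) a SubStore.
struct SubStore : virtual SubStoreConfig, virtual Store
{
    void doSomething() override {}
};
```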
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialize the
C++ data structure.
The goal behind that is that we can create “dummy” instances of each
store to query static properties about it (the parameters it accepts for
example)
Directly register the store classes rather than a function to build an
instance of them.
This gives the possibility to introspect static members of the class or
choose different ways of instantiating them.
Add a fallback path in `queryPartialDerivationOutputMap` for daemons
that don't support it.
Also upstreams a couple methods from `SSHStore` to `RemoteStore` as this
is needed to handle the fallback path.
Otherwise the result of the printing can't be parsed back correctly by
Nix (because the unescaped `${` will be parsed as the beginning of an
anti-quotation).
Fix #3989
When deploying a Hydra instance with current Nix master, most builds
would not run because of errors like this:
queue monitor: error: --- Error --- hydra-queue-runner
error: --- UsageError --- nix-daemon
not a content address because it is not in the form '<prefix>:<rest>': /nix/store/...-somedrv
The last error message is from parseContentAddress, which expects a
colon-separated string, however what we got here is a store path.
Looking at the worker protocol, the following message sent to the Nix
daemon caused the error above:
0x1E -> wopQuerySubstitutablePathInfos
0x01 -> Number of paths
0x16 -> Length of string
"/nix/store/...-somedrv"
0x00 -> Length of string
""
Looking at writeStorePathCAMap, the store path is indeed the first field
that's transmitted. However, readStorePathCAMap expects it to be the
*second* field *on my machine*, since expression evaluation order is a
classic form of unspecified behaviour[1] in C++.
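The same pitfall in standalone form (nothing Nix-specific):
```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <utility>

// Stand-in for reading one field off the wire.
static std::string readField(std::istream & in)
{
    std::string s;
    in >> s;
    return s;
}

int main()
{
    std::istringstream wire("<store-path> <content-address>");
    // UNSPECIFIED: the compiler may evaluate either call first, so the
    // "store path" and "content address" fields can end up swapped.
    auto entry = std::make_pair(readField(wire), readField(wire));
    std::cout << "path='" << entry.first << "' ca='" << entry.second << "'\n";
}
```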
This has been introduced in https://github.com/NixOS/nix/pull/3689,
specifically in commit 66a62b3189.
[1]: https://en.wikipedia.org/wiki/Unspecified_behavior#Order_of_evaluation_of_subexpressions
Signed-off-by: aszlig <aszlig@nix.build>
The change in 626200713b didn't account
for when the number of auto arguments is bigger than the number of
formal arguments. This causes the following:
$ nix-instantiate --eval -E '{ ... }@args: args.foo' --argstr foo foo
nix-instantiate: src/libexpr/attr-set.hh:55: void nix::Bindings::push_back(const nix::Attr&): Assertion `size_ < capacity_' failed.
Aborted (core dumped)
If we resolve using the known path of a derivation whose output we
didn't have, we previously blew up. Now we just fail gracefully,
returning a map with all the outputs unknown.
This means profiles outside of /nix/var/nix/profiles don't get
garbage-collected. It also means we don't need to scan
/nix/var/nix/profiles for GC roots anymore, except for compatibility
with previously existing generations.
This was broken in 50f13b06fb. Once
again it turns out that putting a bool in a std::variant is a bad
idea, since pointers get silently cast to them...
The command line options --arg and --argstr that are used by a bunch of
CLI commands to pass arguments to top-level functions in files go
through the same code path as auto-calling top-level functions with
their default arguments. This, however, was only passing the arguments
that were *explicitly* mentioned in the formals of the function; in the
case of an as-pattern with an ellipsis (e.g. args @ { ... }) extra passed
arguments would get omitted. This fixes that to instead pass *all*
specified auto args in the case that our function has an ellipsis.
Fixes #598
When the log.showSignature git setting is enabled, the output of
"git log" contains signature verification information in addition to the
timestamp GitInputScheme::fetch wants:
$ git log -1 --format=%ct
gpg: Signature made Sat 07 Sep 2019 02:02:03 PM PDT
gpg: using RSA key 0123456789ABCDEF0123456789ABCDEF01234567
gpg: issuer "user@example.com"
gpg: Good signature from "User <user@example.com>" [ultimate] 1567890123
1567890123
For folks that had log.showSignature set, this caused all nix operations
on flakes to fail:
$ nix build
error: stoull
Evidently this was never implemented because Nix switched to using
`buildDerivation` exclusively before `build-remote.pl` was rewritten.
The `nix-copy-ssh` test (already) tests this.
Include a long comment explaining the policy. Perhaps this can be moved
to the manual at some point in the future.
Also bump the daemon protocol minor version, so clients can tell whether
`wopBuildDerivation` supports trustless CA derivation building. I hope
to take advantage of this in a follow-up PR to support trustless remote
building with the minimal sending of derivation closures.
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
Thanks @regnat for catching one of them. The other follows for many of
the same reasons. I'm fine fixing others on a need-to-fix basis,
provided there are no regressions.
When having a message like `waiting for a machine to build X` and
building with `nix build -L`, the log-prefix is always colored yellow[1]
on a small terminal-width as everything (including the ANSI color-reset) is
stripped away.
To work around that problem, this patch explicitly adds an `ANSI_NORMAL`
to the end of the line.
[1] https://imgur.com/a/FjtJOk3
Fixes #3872.
This is a bit hacky. Ideally we would automatically re-evaluate the
failed attribute iff we need to print the error message (so in
commands like 'nix search' we wouldn't re-evaluate because we're
suppressing errors).
Some users have their own hashed-mirrors setup, that is used to mirror
things in addition to what’s available on tarballs.nixos.org. Although
this should be feasible to do with a Binary Cache, it’s not always
easy, since you have to remember what "name" each of the tarballs has.
Continuing to support hashed-mirrors is cheap, so it’s best to leave
support in Nix. Note that NIX_HASHED_MIRRORS is also supported in
Nixpkgs through fetchurl.nix.
Note that this excludes tarballs.nixos.org from the default, as in
#3689. All of these are available on cache.nixos.org.
Occasionally, `nix-build --check` is fairly helpful and I'd like to be
able to use this feature for flakes that need to be built with `nix
build` as well.
This fixes an error found in builtins.path that looks like:
store path mismatch in (possibly filtered) path added from '/private/tmp/nix-shell.CyXViH/nix-test/filter-source/filterin'
when no hash is specified
This adds a ‘nix export’ command which hooks into nix-bundle. It can
be used in a similar way as nix-bundle, with the benefit of hooking
into the new “app” functionality. For instance,
$ nix export nixpkgs#jq
$ ./jq --help
jq - commandline JSON processor [version 1.6]
...
$ scp jq machine-without-nix:
$ ssh machine-without-nix ./jq
jq - commandline JSON processor [version 1.6]
...
Note that nix-bundle currently requires Linux to run. Other exporters
might not have that requirement.
“exporters” are meant to be reusable, so that other repos can
implement their own bundling.
Fixes #3705
match_continuous limits the search to the current start position,
instead of searching the entire file.
On libc++, this improves performance dramatically:
$ time /nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/nix print-dev-env >/dev/null
/nix/store/70ai68dfm6xbzwn26j5n4li9di52ylia-nix-3.0pre20200728_c159f48/bin/ni 2.39s user 0.19s system 64% cpu 4.032 total
$ time /nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix print-dev-env >/dev/null
/nix/store/cwjfxxlp83zln4mfyy1d2dbsx7f6s962-nix-3.0pre20200728_dirty/bin/nix 0.09s user 0.05s system 65% cpu 0.204 total
Fixes #3874
Since 6185d25e52, this was very
latency-bound since it required a round-trip for every 32 KiB. So for
example copying a 514 MiB closure over a virtual ethernet device with
an artificial delay of just 1 ms took 343s. Now it takes 2.7s.
Fixes #3372.
If a repo is dirty, it used to return a `rev` object with an "empty"
sha1 (0000000000000000000000000000000000000000). Please note that this
only applies for `builtins.fetchGit` and *not* for `builtins.fetchTree{
type = "git"; }`.
The new interface we offer provides a way of getting all the
DerivationOutputs with the storePaths directly, based on the observation
that it's the most common use case.
istream->tellg() returns -1 so we can't get the number of bytes
written.
Fixes 'uploaded 's3://nix-cache/nar/00819r9lp5kajr6baxfw5dhhc0cx8ndxaz43qmd2f0gn1hk1ynlp.nar.xz' (-1 bytes) in 11620 ms' messages.
The original idea was to implement a git-fetcher in Nix's core that
supports content hashes[1]. In #3549[2] it has been suggested to
actually use `fetchTree` for this since it's a fairly generic wrapper
over the new fetcher-API[3] and already supports content-hashes.
This patch implements a new git-fetcher based on `fetchTree` by
incorporating the following changes:
* Removed the original `fetchGit`-implementation and replaced it with an
alias on the `fetchTree` implementation.
* Ensured that the `git`-fetcher from `libfetchers` always computes a
content-hash and returns an "empty" revision on dirty trees (the
latter one is needed to retain backwards-compatibility).
* The hash-mismatch error in the fetcher-API exits with code 102 as it
usually happens whenever a hash-mismatch is detected by Nix.
* Removed the `flakes`-feature-flag: I didn't see a reason why this API
is so tightly coupled to the flakes-API and at least `fetchGit` should
remain usable without any feature-flags.
* It's only possible to specify a `narHash` for a `git`-tree if either a
`ref` or a `rev` is given[4].
* It's now possible to specify a URL without a protocol. If it's missing,
`file://` is automatically added as it was the case in the original
`fetchGit`-implementation.
[1] https://github.com/NixOS/nix/pull/3216
[2] https://github.com/NixOS/nix/pull/3549#issuecomment-625194383
[3] https://github.com/NixOS/nix/pull/3459
[4] https://github.com/NixOS/nix/pull/3216#issuecomment-553956703
The new error-format is pretty nice from a UX point-of-view, however
it's fairly hard to parse the output e.g. for editor plugins such as
vim-ale[1] that use `nix-instantiate --parse` to determine syntax errors in
Nix expression files.
This patch extends the `internal-json` logger by adding the fields
`line`, `column` and `file` to easily locate an error in a file and the
field `raw_msg` which contains the error-message itself without
code-lines and additional helpers.
An exemplary output may look like this:
```
[nix-shell]$ ./inst/bin/nix-instantiate ~/test.nix --log-format minimal
{"action":"msg","column":1,"file":"/home/ma27/test.nix","level":0,"line":4,"raw_msg":"syntax error, unexpected IF, expecting $end","msg":"<full error-msg with code-lines etc>"}
```
[1] https://github.com/dense-analysis/ale
This assumption is broken by CA derivations. Making a PR now to do the
breaking daemon change as soon as possible (if it is already too late,
we can bump protocol instead).
I think this better captures the intent of what's going on: we either
have an opaque store path, or a drv path with some outputs.
Having this structure will also help us support CA derivations: we'll
have to allow the output paths to be optional, so the structure we gain
now makes up for the structure we lose then.
It's a tiny function which is:
- hardly worth abstracting over, and also only used once.
- doesn't work once we get CA drvs
I rewrote the one call site to be forwards compatible with CA
derivations, and also potentially more performant: instead of reading in
the derivation it can just consult the SQLite DB in the common case.
to each Store implementation. The generic regStore implementation will
only be for the ambiguous shorthands, like "" and "auto".
This also could get us close to simplifying the daemon command.
Currently resizing of the terminal doesn't play nicely with
nix edit when using kakoune as the editor, as it relies on the
SIGWINCH signal which is trapped by nix. How this is not a problem
with e.g. vim is beyond me.
Virtually all other exec* calls are following a call to
restoreSignals(). This commit adds this behavior to nix edit
as well.
I got it to just become `LocalStore::addToStoreFromDump`, cleanly taking
a store and then doing nothing too fancy with it.
`LocalStore::addToStore(...Path...)` is now just a simple wrapper with a
bare-bones sinkToSource of the right dump command.
That is, the commands 'nix path-info nixpkgs#hello' and 'nix path-info
/nix/store/00ls0qi49qkqpqblmvz5s1ajl3gc63lr-hello-2.10.drv' now do the
same thing (i.e. build the derivation and operate on the output store
path, rather than the .drv path).
This reverts commit a2c27022e9. See
addToStoreSlow(); we don't need to handle this case efficiently
anymore. In fact, we can almost remove the method/hashAlgo arguments
since the non-recursive and/or non-SHA256 cases are almost never used anymore.
We were calculating the nar hash wrong when the file ingestion method
was flat. I don't think there's anything we can do in that case but dump
the file again, so that's what I do.
As an optimization, we again could reuse the original dump for just the
recursive and non-sha256 case, but I'd rather do that after this fix, and
after my other PRs which deduplicate this code.
Until now, the `gitlab`-fetcher determined the source's rev by checking
the latest commit of the given `ref` using the
`/repository/branches`-API.
This breaks however when trying to fetch a gitlab-repo by its tag:
```
$ nix repl
nix-repl> builtins.fetchTree gitlab:Ma27/nvim.nix/0.2.0
error: --- Error ------------------------------------------------------------------------------------- nix
unable to download 'https://gitlab.com/api/v4/projects/Ma27%2Fnvim.nix/repository/branches/0.2.0': HTTP error 404 ('')
```
When using the `/commits?ref_name`-endpoint[1] you can pass any kind of
valid ref to the `gitlab`-fetcher.
Please note that this fetches only the first 20 commits on a ref;
unfortunately there's currently no endpoint which only retrieves the
latest commit of any kind of `ref`.
[1] https://docs.gitlab.com/ee/api/commits.html#list-repository-commits
We've added the variant to `DerivationOutput` to support them, but made
`DerivationOutput::path` partial to avoid actually implementing them.
With this change, we can all collaborate on "just" removing
`DerivationOutput::path` calls to implement CA derivations.
The `m` acts as the termination symbol when declaring graphics. Because
of this, the `;1m` doesn't have any effect and is directly printed to
the console:
```
$ nix repl
> builtins.fetchGit { /* ... */ }
{ outPath = "/nix/store/s0f0iz4a41cxx2h055lmh6p2d5k5bc6r-source"; rev = "e73e45b723a9a6eecb98bd5f3df395d9ab3633b6"; revCount = ;1m428; shortRev = "e73e45b"; submodules = ;1mfalse; }
```
Introduced by 6403508f5a.
This reduces memory consumption of
nix-instantiate \
-E 'with import <nixpkgs> {}; runCommand "foo" { src = ./blender; } "echo foo"' \
--option nar-buffer-size 10000
(where ./blender is a 1.1 GiB tree) from 1716 to 36 MiB, while still
ensuring that we don't do any write I/O for small source paths (up to
'nar-buffer-size' bytes). The downside is that large paths are now
always written to a temporary location in the store, even if they
produce an already valid store path. Thus, adding large paths might be
slower and run out of disk space. ¯\_(ツ)_/¯ Of course, you can always
restore the old behaviour by setting 'nar-buffer-size' to a very high
value.
This allows you to refer to an input from another flake. For example,
$ nix run --inputs-from /path/to/hydra nixpkgs#hello
runs 'hello' from the 'nixpkgs' inputs of the 'hydra' flake.
Fixes #3769.
'nix run' will try to run $out/bin/<name>, where <name> is the
derivation name (excluding the version). This often works well:
$ nix run nixpkgs#hello
Hello, world!
$ nix run nix -- --version
nix (Nix) 2.4pre20200626_adf2fbb
$ nix run patchelf -- --version
patchelf 0.11.20200623.e61654b
$ nix run nixpkgs#firefox -- --version
Mozilla Firefox 77.0.1
$ nix run nixpkgs#gimp -- --version
GNU Image Manipulation Program version 2.10.14
though not always:
$ nix run nixpkgs#git
error: unable to execute '/nix/store/kp7wp760l4gryq9s36x481b2x4rfklcy-git-2.25.4/bin/git-minimal': No such file or directory
Generalize `queryDerivationOutputNames` and `queryDerivationOutputs` by
adding a `queryDerivationOutputMap` that returns the map
`outputName=>outputPath`
(note that this is not equivalent to merging the results of
`queryDerivationOutputs` and `queryDerivationOutputNames` as sets don't
preserve the order, so we would end up with an incorrect mapping).
squash! Add a way to get all the outputs of a derivation with their label
Rename StorePathMap to OutputPathMap
Now that the derivation outputs are parsed up front, we can avoid a reparse
by doing it. Also, this just feels a bit better as the `output*` env
vars are more of a `libnixexpr` interface than `libnixstore` interface:
ultimately, it's the derivation outputs that decide whether the
derivation is fixed-output.
Yes, hashed mirrors might go away with #3689, but this bit of code would
be moved rather than deleted, so it's worth doing a cleanup anyways I
think.
When we merge with master, the new lack of string types makes this case
impossible (after parsing). Later, when we actually implement
CA-derivations, we'll change the types to allow that.
We have a larger problem that passing computed strings to the first
variable argument of many exception constructors is unsafe because that
first variable argument is interpreted not as a plain string, but format
string, and if it contains '%' boost::format will abort, since there are
no arguments to the format string.
In this particular instance '%' was used as part of an escape code in a
URL, which, when the download failed, caused Nix to abort displaying the
`FileTransferError`.
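The failure mode in isolation (a standalone example, not the Nix `Error` class itself):
```cpp
#include <boost/format.hpp>
#include <iostream>
#include <string>

int main()
{
    std::string url = "https://example.org/foo%2Fbar"; // '%' from URL escaping

    try {
        // Unsafe: the computed string becomes the *format string*, so the
        // "%2F" is treated as a directive and boost::format throws.
        std::cout << boost::format(url) << std::endl;
    } catch (const boost::io::format_error & e) {
        std::cout << "boost::format threw: " << e.what() << std::endl;
    }

    // Safe: fixed format string, computed string passed as an argument.
    std::cout << boost::format("error fetching '%1%'") % url << std::endl;
}
```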
This is a bit complex because we want to expose extra functionality the
wrapped class has. Perhaps there is some inheritance trickery to do this
nicer, but I don't know it, and this is the first thing we tried after a
series of attempts that did build.
This design is kind of like that of Rust's Writer, Reader, or Iter
adapters, which implement more traits based on what the inner type
implements.
This was introduced in fa125b9b28, and
then "reverted" in 1cf4801108, except that
revert left the struct around doing nothing useful.
We're removing it all the way now because we want to make a new
`TeeSink` complementing the already-existing `TeeSource`, that is
actually a completely different concept as far as the class hierarchy is
concerned.
I’m not 100% sure this is wanted since it kind of makes everything
have to know about ca even if they don’t really want to. But it also
makes things easier in dealing with looking up ca.
E.g.
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'apps.x86_64-linux.hello' or 'hello'
instead of
$ nix run nixpkgs#hello
error: --- Error ---------- nix
flake 'flake:nixpkgs' does not provide attribute 'hello'
Not implementing anything here, just throwing an error if a derivation
sets `__contentAddressed = true` without
`--experimental-features content-addressed-paths`
(and also with it as there's nothing implemented yet)
This further continues with the dependency inversion. Also I just went
ahead and exposed `parseDerivation`: it seems like the more proper
building block, and not a bad thing to expose if we are trying to be
less wedded to drv files on disk anyway.
On nix-env -qa -f '<nixpkgs>', this reduces maximum RSS by 20970 KiB
and runtime by 0.8%. This is mostly because we're not parsing the hash
part as a hash anymore (just validating that it consists of base-32
characters).
Also, replace storePathToHash() by StorePath::hashPart().
E.g. instead of
error: --- BuildError ----------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
error: --- Error ---------------------------------------------------- nix
build of '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed
we now get
error: --- Error ---------------------------------------------------- nix
builder for '/nix/store/03nk0a3n8h2948k4lqfgnnmym7knkcma-foo.drv' failed with exit code 1
Originally, the test was only checking for different “real” storeDir.
That’s an easy case to handle, but the much harder one is if different
virtual store dirs are used. To do this, we need the SubstitutionGoal
to know about the ca, so it can recalculate the path to copy it over.
An important note here is that the store path passed to copyStorePath
needs to be one for srcStore - so that queryPathInfo works properly.
This also adds an error message when the store path from queryPathInfo
is different from the one we requested.
Substituters can substitute from one store dir to another with a
little bit of help. The store api just needs to have a CA so it can
recompute the store path based on the new store dir. We can only do
this for fixed output derivations with no references, though.
This function was used in only one place, where it could easily be
replaced by readDerivation() since it's not
performance-critical. (This function appears to have been modelled
after queryDerivationOutputs(), which exists only to make the garbage
collector faster.)
This fixes an issue where lockfile generation was not idempotent:
after updating a lockfile, a "follows" node would end up pointing to a
new copy of the node, rather than to the original node.
fetchTarball, fetchTree, and fetchGit all have *optional* hash attrs.
This means that we need to be careful with what we allow to avoid
accidentally making these defaults. When ‘hash = ""’ we assume the
empty hash is wanted.
follow up of https://github.com/NixOS/nix/pull/3544
This allows hash="" so that it can be used for debugging purposes. For
instance, this gives you an error message like:
warning: found empty hash, assuming you wanted 'sha256:0000000000000000000000000000000000000000000000000000'
hash mismatch in fixed-output derivation '/nix/store/asx6qw1r1xk6iak6y6jph4n58h4hdmbm-nix':
wanted: sha256:0000000000000000000000000000000000000000000000000000
got: sha256:0fpfhipl9v1mfzw2ffmxiyyzqwlkvww22bh9wcy4qrfslb4jm429
Needed so that we can include it as a logger in loggers.cc without
adding a dependency on nix
This also requires moving names.hh to libutil to prevent a circular
dependency between libmain and libexpr
Make the printing of the build logs systematically go through the
logger, and replicate the behavior of `no-build-output` by having two
different loggers (one that prints the build logs and one that doesn't)
Add a new `--log-format` cli argument to change the format of the logs.
The possible values are
- raw (the default one for old-style commands)
- bar (the default one for new-style commands)
- bar-with-logs (equivalent to `--print-build-logs`)
- internal-json (the internal machine-readable json format)
The initial contents of the flake is specified by the
'templates.<name>' or 'defaultTemplate' output of another flake. E.g.
outputs = { self }: {
  templates = {
    nixos-container = {
      path = ./nixos-container;
      description = "An example of a NixOS container";
    };
  };
};
allows
$ nix flake init -t templates#nixos-container
Also add a command 'nix flake new', which is identical to 'nix flake
init' except that it initializes a specified directory rather than the
current directory.
bool coerces anything >0 to true, but in the future we may have other
file ingestion methods. This shows a better error message when the
“recursive” byte isn’t 1.
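A sketch of the stricter check (names are illustrative, not the exact parsing code):
```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Instead of coercing the wire-level byte with `!= 0`, reject anything that
// isn't a known ingestion method so unknown values give a clear error.
enum struct FileIngestionMethod : uint8_t { Flat = 0, Recursive = 1 };

FileIngestionMethod parseFileIngestionMethod(uint64_t raw)
{
    if (raw > 1)
        throw std::runtime_error(
            "unsupported file ingestion method: " + std::to_string(raw));
    return static_cast<FileIngestionMethod>(raw);
}
```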