There is a long-standing race condition when copying a closure to a
remote machine, particularly affecting build-remote.pl: the client
first asks the remote machine which paths it already has, then copies
over the missing paths. If the garbage collector kicks in on the
remote machine between the first and second step, the already-present
paths may be deleted. The missing paths may then refer to deleted
paths, causing nix-copy-closure to fail. The client now performs both
steps using a single remote Nix call (using ‘nix-store --serve’),
locking all paths in the closure while querying.
I changed the --serve protocol a bit (getting rid of QueryCommand), so
this breaks the SSH substituter from older versions. But it was marked
experimental anyway.
Fixes #141.
This can be used to import a dynamic shared object and return an
arbitrary value, including new primops. This is useful both for testing
new primops without having to recompile Nix every time, and for building
specialized primops that probably don't belong upstream (e.g. a function
that calls out to gpg to decrypt a NixOps secret as needed).
The imported function should initialize the passed Value as needed. A single
import can define multiple values by creating an attrset or list, of
course.
An example initialization function might look like:
extern "C" void initialize(nix::EvalState & state, nix::Value & v)
{
v.type = nix::tPrimOp;
v.primOp = NEW nix::PrimOp(myFun, 1, state.symbols.create("myFun"));
}
Then `builtins.importNative ./example.so "initialize"` will evaluate to
the primop defined in the myFun function.
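For illustration, applying the resulting primop might then look like this
(the argument is made up; the primop above takes one argument):
  let
    myFun = builtins.importNative ./example.so "initialize";
  in
    myFun "some-argument"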
When copying a large path causes the daemon to run out of memory, you
now get:
error: Nix daemon out of memory
instead of:
error: writing to file: Broken pipe
ExprBuiltin is slower than ExprVar since it doesn't compute a static
displacement. Since we're not using the throw primop in the
implementation of <...> anymore, it's also not really needed.
Nix search path lookups like <nixpkgs> are now desugared to ‘findFile
nixPath "nixpkgs"’, where ‘findFile’ is a new primop. Thus you can
override the search path simply by saying
let
  nixPath = [ { prefix = "nixpkgs"; path = "/my-nixpkgs"; } ];
in ... <nixpkgs> ...
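Concretely, under this desugaring a lookup like <nixpkgs> behaves roughly
like the following (calling the primop through ‘builtins’ here is an
assumption):
  builtins.findFile builtins.nixPath "nixpkgs"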
In conjunction with ‘scopedImport’ (commit
c273c15cb1), the Nix search path can be
propagated across imports, e.g.
let
  overrides = {
    nixPath = [ ... ] ++ builtins.nixPath;
    import = fn: scopedImport overrides fn;
    scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
    builtins = builtins // overrides;
  };
in scopedImport overrides ./nixos
‘scopedImport’ works like ‘import’, except that it takes a set of
attributes to be added to the lexical scope of the expression,
essentially extending or overriding the builtin variables. For
instance, the expression
scopedImport { x = 1; } ./foo.nix
where foo.nix contains ‘x’, will evaluate to 1.
This has a few applications:
* It allows getting rid of function argument specifications in package
expressions. For instance, a package expression like:
{ stdenv, fetchurl, libfoo }:
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
can now be written as just
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
and imported in all-packages.nix as:
bar = scopedImport pkgs ./bar.nix;
So whereas we once had dependencies listed in three places
(buildInputs, the function, and the call site), they now only need
to appear in one place.
* It allows overriding builtin functions. For instance, to trace all
calls to ‘map’:
let
  overrides = {
    map = f: xs: builtins.trace "map called!" (map f xs);
    # Ensure that our override gets propagated by calls to
    # import/scopedImport.
    import = fn: scopedImport overrides fn;
    scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
    # Also update ‘builtins’.
    builtins = builtins // overrides;
  };
in scopedImport overrides ./bla.nix
* Similarly, it allows extending the set of builtin functions. For
instance, during Nixpkgs/NixOS evaluation, the Nixpkgs library
functions could be added to the default scope.
There is a downside: calls to scopedImport are not memoized, unlike
import. So importing a file multiple times leads to multiple parsings
/ evaluations. It would be possible to construct the AST only once,
but that would require careful handling of variables/environments.
If a build log is not available locally, then ‘nix-store -l’ will now
try to download it from the servers listed in the ‘log-servers’ option
in nix.conf. For instance, if you have:
log-servers = http://hydra.nixos.org/log
then it will try to get logs from http://hydra.nixos.org/log/<base
name of the store path>. So you can do things like:
$ nix-store -l $(which xterm)
and get a log even if xterm wasn't built locally.
By preloading all inodes in the /nix/store/.links directory, we can
quickly determine whether a hard-linked file is already linked to the
hashed links.
This is tolerant of the .links directory being removed: in that case, all
hashes in the store are simply recalculated.
If an inode in the Nix store has more than one link, it probably means
that it was linked into .links/ by us; if so, skip it. There's a
possibility that something else hard-linked the file, so it would be nice
to be able to override this.
Also, by looking at the number of hard links for each of the files in
.links/, you can get deduplication numbers and space savings.
The option '--delete-generations Nd' deletes all generations older than N
days. However, most likely the user does not want to delete the
generation that was active N days ago.
For example, say that you have these 3 generations:
1: <30 days ago>
2: <15 days ago>
3: <1 hour ago>
If you do --delete-generations 7d (say, as part of a cron job), most
likely you still want to keep generation 2, i.e. the generation that was
active 7 days ago (and for most of the past 7 days, in fact).
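For instance, a cron job running the following (nix-env is shown here
purely for illustration) should now keep generation 2:
  $ nix-env --delete-generations 7d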
This patch fixes this issue. Note that this also affects
'nix-collect-garbage --delete-older-than Nd'.
Thanks to @roconnor for noticing the issue!
This allows error messages like:
error: the anonymous function at `/etc/nixos/configuration.nix:1:1'
called without required argument `foo', at
`/nix/var/nix/profiles/per-user/root/channels/nixos/nixpkgs/lib/modules.nix:77:59'
While running Python 3’s test suite, we noticed that on some systems
/dev/pts/ptmx is created with permissions 0 (that’s the case with my
Nixpkgs-originating 3.0.43 kernel, but someone with a Debian-originating
3.10-3 reported not having this problem).
There’s still the problem that people without
CONFIG_DEVPTS_MULTIPLE_INSTANCES=y are screwed (as noted in build.cc),
but I don’t see how we could work around it.
Since the addition of build-max-log-size, a call to
handleChildOutput() can result in cancellation of a goal. This
invalidated the "j" iterator in the waitForInput() loop, even though
it was still used afterwards. Likewise for the maxSilentTime
handling.
Probably fixes #231. At least it gets rid of the valgrind warnings.
Ludo reported this error:
unexpected Nix daemon error: boost::too_few_args: format-string refered to more arguments than were passed
coming from this line:
printMsg(lvlError, run.program + ": " + string(err, 0, p));
The problem here is that the string ends up implicitly converted to a
Boost format() object, so % characters are treated specially. I
always assumed (wrongly) that strings are converted to a format object
that outputs the string as-is.
Since this assumption appears in several places that may be hard to
grep for, I've added some C++ type hackery to ensure that the right
thing happens. So you don't have to worry about % in statements like
printMsg(lvlError, "foo: " + s);
or
throw Error("foo: " + s);
The daemon now creates /dev deterministically (thanks!). However, it
expects /dev/kvm to be present.
The patch below restricts that requirement (1) to Linux-based systems,
and (2) to systems where /dev/kvm already exists.
I’m not sure about the way to handle (2). We could special-case
/dev/kvm and create it (instead of bind-mounting it) in the chroot, so
it’s always available; however, it wouldn’t help much since most likely,
if /dev/kvm is missing, then KVM support is missing.
Currently, clients cannot recover from an isValidPath RPC with an
invalid path parameter because the daemon closes the connection when
that happens.
More precisely:
1. in performOp, wopIsValidPath case, ‘readStorePath’ raises an
‘Error’ exception;
2. that exception is caught by the handler in ‘processConnection’;
3. the handler determines errorAllowed == false, and thus exits after
sending the message.
This last part is fixed by calling ‘startWork’ early on, as in the patch
below.
The same reasoning could be applied to all the RPCs that take one or
more store paths as inputs, but isValidPath is, by definition, likely to
be passed invalid paths in the first place, so it’s important for this
one to allow recovery.
Since the meta attributes were not sorted, attribute lookup could
fail, leading to package priorities and active flags not working
correctly.
Broken since 0f24400d90.
If we're evaluating some application ‘v = f x’, we can't store ‘f’
temporarily in ‘v’, because if ‘f x’ refers to ‘v’, it will get ‘f’
rather than an infinite recursion error.
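A minimal sketch of the kind of expression affected (the binding names
are illustrative):
  let
    f = x: x;
    v = f v;
  in v
This must report an infinite recursion rather than evaluate to the
function ‘f’.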
Unfortunately, this breaks the tail call optimisation introduced in
c897bac549.
Fixes #217.
We were relying on SubstitutionGoal's destructor releasing the lock,
but if a goal is a top-level goal, the destructor won't run in a
timely manner since its reference count won't drop to zero. So
release it explicitly.
Fixes #178.
Fixes #121. Note that we don't warn about missing $NIX_PATH entries
because it's intended that some may be missing (cf. the default
$NIX_PATH on NixOS, which includes paths like /etc/nixos/nixpkgs for
backward compatibility).
The flag ‘--check’ to ‘nix-store -r’ or ‘nix-build’ will cause Nix to
redo the build of a derivation whose output paths are already valid.
If the new output differs from the original output, an error is
printed. This makes it easier to test if a build is deterministic.
(Obviously this cannot catch all sources of non-determinism, but it
catches the most common one, namely the current time.)
For example:
$ nix-build '<nixpkgs>' -A patchelf
...
$ nix-build '<nixpkgs>' -A patchelf --check
error: derivation `/nix/store/1ipvxsdnbhl1rw6siz6x92s7sc8nwkkb-patchelf-0.6' may not be deterministic: hash mismatch in output `/nix/store/4pc1dmw5xkwmc6q3gdc9i5nbjl4dkjpp-patchelf-0.6.drv'
The --check build fails if not all outputs are valid. Thus the first
call to nix-build is necessary to ensure that all outputs are valid.
The current outputs are left untouched: the new outputs are either put
in a chroot or diverted to a different location in the store using
hash rewriting.
This substituter connects to a remote host, runs nix-store --serve
there, and then forwards substituter commands on to the remote host and
sends their results to the calling program. The ssh-substituter-hosts
option can be specified as a list of hosts to try.
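For example, in nix.conf (the host name is illustrative):
  ssh-substituter-hosts = nix-ssh@example.org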
This is an initial implementation and, while it works, it has some
limitations:
* Only the first host is used
* There is no caching of query results (all queries are sent to the
remote machine)
* There is no informative output (such as progress bars)
* Some failure modes may cause unhelpful error messages
* There is no concept of trusted-ssh-substituter-hosts
Signed-off-by: Shea Levy <shea@shealevy.com>
nix-store --export takes a tmproot, which can only be released by exiting.
Substituters don't currently work in a way that could take advantage of
the looping, anyway.
Signed-off-by: Shea Levy <shea@shealevy.com>
This is essentially the substituter API operating on the local store,
which will be used by the ssh substituter. It runs in a loop rather than
just taking one command so that in the future nix will be able to keep
one connection open for multiple instances of the substituter.
Signed-off-by: Shea Levy <shea@shealevy.com>
This allows running nix-instantiate --eval-only without performing the
evaluation in readonly mode, letting features like import from
derivation and automatic substitution of builtins.storePath paths work.
Signed-off-by: Shea Levy <shea@shealevy.com>
Namely:
nix-store: derivations.cc:242: nix::Hash nix::hashDerivationModulo(nix::StoreAPI&, nix::Derivation): Assertion `store.isValidPath(i->first)' failed.
This happened because of the derivation output correctness check being
applied before the references of a derivation are valid.
Now, in addition to a."${b}".c, you can write a.${b}.c (applicable
wherever dynamic attributes are valid).
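A small illustration (the attribute names are made up):
  let
    b = "bar";
    a = { bar = { c = 42; }; };
  in a.${b}.c
This evaluates to 42, just like a."${b}".c.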
Signed-off-by: Shea Levy <shea@shealevy.com>
*headdesk*
*headdesk*
*headdesk*
So since commit 22144afa8d, Nix hasn't
actually checked whether the content of a downloaded NAR matches the
hash specified in the manifest / NAR info file. Urghhh...
This doesn't change any functionality but moves some behavior out of the
parser and into the evaluator in order to simplify the code.
Signed-off-by: Shea Levy <shea@shealevy.com>
Since addAttr has to iterate through the AttrPath we pass it, it makes
more sense to just iterate through the AttrNames in addAttr instead. As
an added bonus, this allows attrsets where two dynamic attribute paths
have the same static leading part (see added test case for an example
that failed previously).
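For reference, a sketch of the shape that previously failed (the concrete
names are made up):
  {
    a."${"b"}" = 1;
    a."${"c"}" = 2;
  }
Both dynamic paths share the static leading part ‘a’ and should now merge
into { a = { b = 1; c = 2; }; }.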
Signed-off-by: Shea Levy <shea@shealevy.com>
This adds new syntax for attribute names:
* attrs."${name}" => getAttr name attrs
* attrs ? "${name}" => isAttrs attrs && hasAttr attrs name
* attrs."${name}" or def => if attrs ? "${name}" then attrs."${name}" else def
* { "${name}" = value; } => listToAttrs [{ inherit name value; }]
Of course, it's a bit more complicated than that. The attribute chains
can be arbitrarily long and contain combinations of static and dynamic
parts (e.g. attrs."${foo}".bar."${baz}" or qux), which is relatively
straightforward for the getAttrs/hasAttrs cases but is more complex for
the listToAttrs case due to rules about duplicate attribute definitions.
For attribute sets with dynamic attribute names, duplicate static
attributes are detected at parse time while duplicate dynamic attributes
are detected when the attribute set is forced. So, for example, { a =
null; a.b = null; "${"c"}" = true; } will be a parse-time error, while
{ a = {}; "${"a"}".b = null; c = true; } will be an eval-time error
(technically that case could theoretically be detected at parse time,
but the general case would require full evaluation). Moreover, duplicate
dynamic attributes are not allowed even in cases where they would be
with static attributes ({ a.b.d = true; a.b.c = false; } is legal, but {
a."${"b"}".d = true; a."${"b"}".c = false; } is not). This restriction
might be relaxed in the future in cases where the static variant would
not be an error, but it is not obvious that that is desirable.
Finally, recursive attribute sets with dynamic attributes have the
static attributes in scope but not the dynamic ones. So rec { a = true;
"${"b"}" = a; } is equivalent to { a = true; b = true; } but rec {
"${"a"}" = true; b = a; } would be an error or use a from the
surrounding scope if it exists.
Note that the getAttr, getAttr-or-default, and hasAttr forms are all
implemented purely in the parser as syntactic sugar, while attribute
sets with dynamic attribute names required changes to the AST to be
implemented cleanly.
This is an alternative solution to, and closes, #167.
Signed-off-by: Shea Levy <shea@shealevy.com>
Certain desugaring schemes may require the parser to use some builtin
function to do some of the work (e.g. currently `throw` is used to
lazily cause an error if a `<>`-style path is not in the search path).
Unfortunately, these names are not reserved keywords, so an expression
that uses such syntactic sugar will not see the expected behavior
(see tests/lang/eval-okay-redefine-builtin.nix for an example).
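A sketch of the problem (the binding and path are made up): without this
change, an expression such as
  let throw = msg: "not the real throw"; in <nonexistent>
could end up calling the user's ‘throw’ instead of the builtin when the
path is not found in the search path.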
This adds the ExprBuiltin AST type, which when evaluated uses the value
from the rootmost variable scope (which of course is initialized
internally and can't shadow any of the builtins).
Signed-off-by: Shea Levy <shea@shealevy.com>
This will allow e.g. channel expressions to use builtins.storePath IFF
it is safe to do so without knowing if the path is valid yet.
Signed-off-by: Shea Levy <shea@shealevy.com>
In particular "libutil" was always a problem because it collides with
Glibc's libutil. Even if we install into $(libdir)/nix, the linker
sometimes got confused (e.g. if a program links against libstore but
not libutil, then ld would report undefined symbols in libstore
because it was looking at Glibc's libutil).
This is required if you have attribute names with dots in them. So
you can now say:
$ nix-instantiate '<nixos>' -A 'config.systemd.units."postgresql.service".text' --eval-only
Fixes #151.
Note that adding --show-trace prevents function calls from being
tail-recursive, so an expression that evaluates without --show-trace
may fail with a stack overflow if --show-trace is given.
The genericClosure primop kept temporary data in STL containers that
were not scanned by Boehm GC, so Nix programs using it could randomly
crash if the garbage collector kicked in at a bad time.
Also make it a bit more efficient by copying pointers to values rather
than the values themselves.
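For context, a small usage sketch of genericClosure (the values are
arbitrary):
  builtins.genericClosure {
    startSet = [ { key = 0; } ];
    operator = item:
      if item.key < 3 then [ { key = item.key + 1; } ] else [ ];
  }
This evaluates to [ { key = 0; } { key = 1; } { key = 2; } { key = 3; } ].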
We already have some primops for determining the type of a value, such
as isString, but they're incomplete: for instance, there is no isPath.
Rather than adding more isBla functions, the generic typeOf function
returns a string representing the type of the argument (e.g. "int").
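For example (the exact strings returned for types other than "int" are an
assumption here):
  builtins.typeOf 1      # "int"
  builtins.typeOf "foo"  # "string"
  builtins.typeOf ./foo  # "path"
  builtins.typeOf { }    # "set"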
I.e. "nix-store -q --roots" will now show (for example)
/home/eelco/Dev/nixpkgs/result
rather than
/nix/var/nix/gcroots/auto/53222qsppi12s2hkap8dm2lg8xhhyk6v