Sandboxes cannot be nested, so if Nix's build runs inside a sandbox,
it cannot use a sandbox itself. I don't see a clean way to detect
whether we're in a sandbox, so use a test-specific hack.
https://github.com/NixOS/nix/issues/1413
This is to simplify remote build configuration. These environment
variables predate nix.conf.
The build hook now has a sensible default (namely build-remote).
The current load is kept in the Nix state directory now.
Timeout tests rely on failed build to determine success,
so make sure these derivations (silent in particular)
don't fail regardless of timeout behavior.
The nix-shell fix in 668fef2e4f revealed
that we had some --pure tests that incorrectly depended on PATH from
config.nix's mkDerivation being overwritten by the caller's PATH.
http://hydra.nixos.org/build/49242478
This closes a long-time bug that allowed builds to hang Nix
indefinitely (regardless of timeouts) simply by doing
exec > /dev/null 2>&1; while true; do true; done
Now, on EOF, we just send SIGKILL to the child to make sure it's
really gone.
Commands such as "cp -p" also use fsetxattr() in addition to fchown(),
so we need to make sure these syscalls always succeed as well, in
order to avoid nasty "Invalid value" errors.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Right now it only tests whether seccomp correctly forges the return
value of chown, but the long-term goal is to test the full sandboxing
functionality at some point in the future.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The implementation of "partition" in Nixpkgs is O(n^2) (because of the
use of ++), and for some reason was causing stack overflows in
multi-threaded evaluation (not sure why).
This reduces "nix-env -qa --drv-path" runtime by 0.197s and memory
usage by 298 MiB (in non-Boehm mode).
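Assuming the replacement is a partition primop with the same
{ right, wrong } result as the Nixpkgs function, usage would look
roughly like:
  builtins.partition (x: x > 2) [ 1 5 2 8 ]
  => { right = [ 5 8 ]; wrong = [ 1 2 ]; }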
For example, you can now say:
configureFlags = "--prefix=${placeholder "out"} --includedir=${placeholder "dev"}";
The strings returned by the ‘placeholder’ builtin are replaced at
build time by the actual store paths corresponding to the specified
outputs.
Previously, you had to work around the inability to self-reference by doing stuff like:
preConfigure = ''
configureFlags+=" --prefix $out --includedir=$dev"
'';
or rely on ad-hoc variable interpolation semantics in Autoconf or Make
(e.g. --prefix=\$(out)), which doesn't always work.
Thus, -I / $NIX_PATH entries are now downloaded only when they are
needed for evaluation. A failure to download an entry is a non-fatal
warning (just like non-existent paths).
This does change the semantics of builtins.nixPath, which now returns
the original, rather than resulting path. E.g., before we had
[ { path = "/nix/store/hgm3yxf1lrrwa3z14zpqaj5p9vs0qklk-nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
but now
[ { path = "https://nixos.org/channels/nixos-16.03/nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
Fixes #792.
This removes the need to have multiple downloads in the stdenv
bootstrap process (like a separate busybox binary for Linux, or
curl/mkdir/sh/bzip2 for Darwin). Now all those files can be combined
into a single NAR.
The $channelName variable passed to the channel builder is the last
portion of the URL. While that works in the previous test for channels
prior to #519, it doesn't work if the last portion is
nixexprs.tar.bz2.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
The function ‘builtins.match’ takes a POSIX extended regular
expression and an arbitrary string. It returns ‘null’ if the string
does not match the regular expression. Otherwise, it returns a list
containing substring matches corresponding to parenthesis groups in
the regex. The regex must match the entire string (i.e. there is an
implied "^<pat>$" around the regex). For example:
match "foo" "foobar" => null
match "foo" "foo" => []
match "f(o+)(.*)" "foooobar" => ["oooo" "bar"]
match "(.*/)?([^/]*)" "/dir/file.nix" => ["/dir/" "file.nix"]
match "(.*/)?([^/]*)" "file.nix" => [null "file.nix"]
The following example finds all regular files with extension .nix or
.patch underneath the current directory:
let
findFiles = pat: dir: concatLists (mapAttrsToList (name: type:
if type == "directory" then
findFiles pat (dir + "/" + name)
else if type == "regular" && match pat name != null then
[(dir + "/" + name)]
else []) (readDir dir));
in findFiles ".*\\.(nix|patch)" (toString ./.)
With this, attribute sets with a `__functor` attribute can be applied
just like normal functions. This can be used to attach arbitrary
metadata to a function without callers needing to treat it specially.
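For example (a minimal sketch, not taken from the commit itself):
  let add = { x = 2; __functor = self: y: self.x + y; };
  in add 3
evaluates to 5, because applying the set calls its __functor with the
set itself followed by the actual argument.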
For the "stdenv accidentally referring to bootstrap-tools", it seems
easier to specify the path that we don't want to depend on, e.g.
disallowedRequisites = [ bootstrapTools ];
So all these years I was totally deluded about the meaning of "set
-e". You might think that it causes statements like "false && true" or
"! true" to fail, but it doesn't...
When running NixOps under Mac OS X, we need to be able to import store
paths built on Linux into the local Nix store. However, HFS+ is
usually case-insensitive, so if there are directories with file names
that differ only in case, then importing will fail.
The solution is to add a suffix ("~nix~case~hack~<integer>") to
colliding files. For instance, if we have a directory containing
xt_CONNMARK.h and xt_connmark.h, then the latter will be renamed to
"xt_connmark.h~nix~case~hack~1". If a store path is dumped as a NAR,
the suffixes are removed. Thus, importing and exporting via a
case-insensitive Nix store round-trips. So when NixOps calls
nix-copy-closure to copy the path to a Linux machine, you get the
original file names back.
Closes #119.
Nix search path lookups like <nixpkgs> are now desugared to ‘findFile
nixPath <nixpkgs>’, where ‘findFile’ is a new primop. Thus you can
override the search path simply by saying
let
nixPath = [ { prefix = "nixpkgs"; path = "/my-nixpkgs"; } ];
in ... <nixpkgs> ...
In conjunction with ‘scopedImport’ (commit
c273c15cb1), the Nix search path can be
propagated across imports, e.g.
let
overrides = {
nixPath = [ ... ] ++ builtins.nixPath;
import = fn: scopedImport overrides fn;
scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
builtins = builtins // overrides;
};
in scopedImport overrides ./nixos
‘scopedImport’ works like ‘import’, except that it takes a set of
attributes to be added to the lexical scope of the expression,
essentially extending or overriding the builtin variables. For
instance, the expression
scopedImport { x = 1; } ./foo.nix
where foo.nix contains ‘x’, will evaluate to 1.
This has a few applications:
* It allows getting rid of function argument specifications in package
expressions. For instance, a package expression like:
{ stdenv, fetchurl, libfoo }:
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
can now be written as just
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
and imported in all-packages.nix as:
bar = scopedImport pkgs ./bar.nix;
So whereas we once had dependencies listed in three places
(buildInputs, the function, and the call site), they now only need
to appear in one place.
* It allows overriding builtin functions. For instance, to trace all
calls to ‘map’:
let
overrides = {
map = f: xs: builtins.trace "map called!" (map f xs);
# Ensure that our override gets propagated by calls to
# import/scopedImport.
import = fn: scopedImport overrides fn;
scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
# Also update ‘builtins’.
builtins = builtins // overrides;
};
in scopedImport overrides ./bla.nix
* Similarly, it allows extending the set of builtin functions. For
instance, during Nixpkgs/NixOS evaluation, the Nixpkgs library
functions could be added to the default scope.
There is a downside: calls to scopedImport are not memoized, unlike
import. So importing a file multiple times leads to multiple parsings
/ evaluations. It would be possible to construct the AST only once,
but that would require careful handling of variables/environments.
Now, in addition to a."${b}".c, you can write a.${b}.c (applicable
wherever dynamic attributes are valid).
Signed-off-by: Shea Levy <shea@shealevy.com>
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all works well, then if Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of
the key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
Issue #75.
Since addAttr has to iterate through the AttrPath we pass it, it makes
more sense to just iterate through the AttrNames in addAttr instead. As
an added bonus, this allows attrsets where two dynamic attribute paths
have the same static leading part (see added test case for an example
that failed previously).
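For illustration (the actual added test case may differ), an
attribute set along the lines of
  { a."${"b"}".x = 1; a."${"c"}".y = 2; }
has two dynamic attribute paths sharing the static leading part `a',
which is the kind of expression that failed previously.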
Signed-off-by: Shea Levy <shea@shealevy.com>
This adds new syntax for attribute names:
* attrs."${name}" => getAttr name attrs
* attrs ? "${name}" => isAttrs attrs && hasAttr attrs name
* attrs."${name}" or def => if attrs ? "${name}" then attrs."${name}" else def
* { "${name}" = value; } => listToAttrs [{ inherit name value; }]
Of course, it's a bit more complicated than that. The attribute chains
can be arbitrarily long and contain combinations of static and dynamic
parts (e.g. attrs."${foo}".bar."${baz}" or qux), which is relatively
straightforward for the getAttrs/hasAttrs cases but is more complex for
the listToAttrs case due to rules about duplicate attribute definitions.
For attribute sets with dynamic attribute names, duplicate static
attributes are detected at parse time while duplicate dynamic attributes
are detected when the attribute set is forced. So, for example, { a =
null; a.b = null; "${"c"}" = true; } will be a parse-time error, while
{ a = {}; "${"a"}".b = null; c = true; } will be an eval-time error
(technically that case could theoretically be detected at parse time,
but the general case would require full evaluation). Moreover, duplicate
dynamic attributes are not allowed even in cases where they would be
with static attributes ({ a.b.d = true; a.b.c = false; } is legal, but {
a."${"b"}".d = true; a."${"b"}".c = false; } is not). This restriction
might be relaxed in the future in cases where the static variant would
not be an error, but it is not obvious that that is desirable.
Finally, recursive attribute sets with dynamic attributes have the
static attributes in scope but not the dynamic ones. So rec { a = true;
"${"b"}" = a; } is equivalent to { a = true; b = true; } but rec {
"${"a"}" = true; b = a; } would be an error or use a from the
surrounding scope if it exists.
Note that the getAttr, getAttr-or-default, and hasAttr cases are all
implemented purely in the parser as syntactic sugar, while attribute
sets with dynamic attribute names required changes to the AST to be
implemented cleanly.
This is an alternative solution to, and closes, #167.
Signed-off-by: Shea Levy <shea@shealevy.com>
Certain desugaring schemes may require the parser to use some builtin
function to do some of the work (e.g. currently `throw` is used to
lazily cause an error if a `<>`-style path is not in the search path).
Unfortunately, these names are not reserved keywords, so an expression
that uses such a syntactic sugar will not see the expected behavior
(see tests/lang/eval-okay-redefine-builtin.nix for an example).
This adds the ExprBuiltin AST type, which when evaluated uses the value
from the rootmost variable scope (which of course is initialized
internally and can't shadow any of the builtins).
Signed-off-by: Shea Levy <shea@shealevy.com>
This is required if you have attribute names with dots in them. So
you can now say:
$ nix-instantiate '<nixos>' -A 'config.systemd.units."postgresql.service".text' --eval-only
Fixes #151.
Antiquotes should evaluate to strings or paths. This is usually
checked, except in the case where the antiquote makes up the entire
string, as in "${expr}". This is optimised to expr, which discards
the runtime type checks / coercions.
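For illustration (these cases are assumptions, not the commit's own
tests):
  "${1}"          # should now be rejected with a type error instead of
                  # silently evaluating to the integer 1
  "${./foo.nix}"  # a path antiquote is still coerced to a string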
This reduces the difference between inherited and non-inherited
attribute handling to the choice of which env to use (in recs and lets)
by setting the AttrDef::e to a new ExprVar in the parser rather than
carrying a separate AttrDef::v VarRef member.
As an added bonus, this allows inherited attributes that inherit from a
with to delay forcing evaluation of the with's attributes.
Signed-off-by: Shea Levy <shea@shealevy.com>
Evaluation of attribute sets is strict in the attribute names, which
means immediate evaluation of `with` attribute sets rules out some
potentially interesting use cases (e.g. where the attribute names of one
set depend in some way on another but we want to bring those names into
scope for some values in the second set).
The major example of this is overridable self-referential package sets
(e.g. all-packages.nix). With immediate `with` evaluation, the only
options for such sets are to either make them non-recursive and
explicitly use the name of the overridden set in non-overridden one
every time you want to reference another package, or make the set
recursive and use the `__overrides` hack. As shown in the test case that
comes with this commit, though, delayed `with` evaluation allows a nicer
third alternative.
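A minimal illustration (not the actual test case): with delayed
evaluation,
  with (throw "this attribute set is never needed"); 1
now evaluates to 1, because the `with' expression is only forced when
a variable lookup actually falls through to it.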
Signed-off-by: Shea Levy <shea@shealevy.com>
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
In Nixpkgs, the attribute in all-packages.nix corresponding to a
package is usually equal to the package name. However, this doesn't
work if the package contains a dash, which is fairly common. The
convention is to replace the dash with an underscore (e.g. "dbus-glib"
becomes "dbus_glib"), but that's annoying. So now dashes are valid in
variable / attribute names, allowing you to write:
dbus-glib = callPackage ../development/libraries/dbus-glib { };
and
buildInputs = [ dbus-glib ];
Since we don't have a negation or subtraction operation in Nix, this
is unambiguous.
If the options gc-keep-outputs and gc-keep-derivations are both
enabled, you can get a cycle in the liveness graph. There was a hack
to handle this, but it didn't work with multiple-output derivations,
causing the garbage collector to fail with errors like ‘error: cannot
delete path `...' because it is in use by `...'’. The garbage
collector now handles strongly connected components in the liveness
graph as a unit and decides whether to delete all or none of the paths
in an SCC.
Apparently our DBD::SQLite links against /usr/lib/libsqlite3.dylib,
which is an old version that doesn't respect foreign key constraints.
So manifests/cache.sqlite doesn't get updated properly when a manifest
disappears. We should fix our DBD::SQLite, but in the meantime this
will fix the test.
http://hydra.nixos.org/build/3017959
Querying all substitutable paths via "nix-env -qas" is potentially
hard on a server, since it involves sending thousands of HEAD
requests. So a binary cache must now have a meta-info file named
"nix-cache-info" that specifies whether the server wants this. It
also specifies the store prefix so that we don't send useless queries
to a binary cache for a different store prefix.
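Such a meta-info file looks something like this (illustrative):
  StoreDir: /nix/store
  WantMassQuery: 1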
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix the expected hash, which Nix can
then verify.
"nix-channel --add" now accepts a second argument: the channel name.
This allows channels to have a nicer name than (say) nixpkgs_unstable.
If no name is given, it defaults to the last component of the URL
(with "-unstable" or "-stable" removed).
Also, channels are now stored in a profile
(/nix/var/nix/profiles/per-user/$USER/channels). One advantage of
this is that it allows rollbacks (e.g. if "nix-channel --update" gives
an undesirable update).
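Presumably a rollback then works like for any other profile, e.g.
something like:
  $ nix-env --profile /nix/var/nix/profiles/per-user/$USER/channels --rollback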
Ensuring that the tests work from the build tree requires a growing
number of nasty hacks. The tests also don't verify that the installed
Nix actually works. Thus, the tests now require "make install" to
have been run.
Nix now requires SQLite and bzip2 to be pre-installed. SQLite is
detected using pkg-config. We required DBD::SQLite anyway, so
depending on SQLite is not a big problem.
The --with-bzip2, --with-openssl and --with-sqlite flags are gone.
Other simplifications:
* Use <nix/...> to locate the corepkgs. This allows them to be
overridden through $NIX_PATH.
* Use bash's pipefail option in the NAR builder so that we don't need
to create a temporary file.
For example, if you have a directory
/home/eelco/src/stdenv-updates
that you want to be used for <nixpkgs> in an import such as
with (import <nixpkgs> { });
then you can say
$ nix-build -I nixpkgs=/home/eelco/src/stdenv-updates
Paths enclosed in angle brackets, e.g.
import <nixpkgs/pkgs/lib>
are resolved by looking them up relative to the elements listed in
the search path. This allows us to get rid of hacks like
import "${builtins.getEnv "NIXPKGS_ALL"}/pkgs/lib"
The search path can be specified through the ‘-I’ command-line flag
and through the colon-separated ‘NIX_PATH’ environment variable,
e.g.,
$ nix-build -I /etc/nixos ...
If a file is not found in the search path, an error message is
lazily thrown.
Instead of starting the build hook every time we want to ask whether
we can run a remote build
(which can be very often), we now reuse a hook process for answering
those queries until it accepts a build. So if there are N
derivations to be built, at most N hooks will be started.
The /bin/sh interpreter on Solaris doesn't understand $(...) syntax for running
sub-shells. Consequently, this test fails on Solaris. To remedy the
situation, the script either needs to be run by /bin/bash (which is
non-standard), or it needs to use the ancient but portable `...` syntax.
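For illustration (the actual command in the test may differ):
  now=`date`     # understood by Solaris /bin/sh
  now=$(date)    # not understood by Solaris /bin/sh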
* `make check' now succeeds :-)
* An attribute set such as `{ foo = { enable = true; };
foo.port = 23; }' now parses. It was previously rejected, but I'm
too lazy to implement the check. (The only reason to reject it is
that the reverse, `{ foo.port = 23; foo = { enable = true; }; }', is
rejected, which is kind of ugly.)
* src/libexpr/expr-to-xml.cc (nix::showAttrs): Add `location'
parameter. Provide location XML attributes when it's true. Update
callers.
(nix::printTermAsXML): Likewise.
* src/libexpr/expr-to-xml.hh (nix::printTermAsXML): Update prototype;
have `location' default to `false'.
* src/nix-instantiate/nix-instantiate.cc (printResult, processExpr): Add
`location' parameter; update callers.
(run): Add support for `--no-location'.
* src/nix-instantiate/help.txt: Update accordingly.
* tests/lang.sh: Invoke `nix-instantiate' with `--no-location' for the
XML tests.
* tests/lang/eval-okay-toxml.exp, tests/lang/eval-okay-to-xml.nix: New
files.
An @-pattern must now consist of a single variable and an attribute
set pattern, so `name1@name2', `{attrs1}@{attrs2}' and so on are no
longer legal. This is no big loss because they were not useful
anyway.
This also changes the output of builtins.toXML for @-patterns
slightly.
Create the manifests directory if it doesn't exist. The Debian
packages don't include the manifests
directory, so nix-channel would silently skip doing a nix-pull,
resulting in everything being built from source. Thanks to Juan
Pedro Bolívar Puente.
intersectAttrs returns the (right-biased) intersection between two
attribute sets, i.e. every attribute from the second set that also
exists in the first. functionArgs returns the set of attributes
expected by a function.
The main goal of these is to allow the elimination of most of
all-packages.nix. Most package instantiations in all-packages.nix
have this form:
foo = import ./foo.nix {
inherit a b c;
};
With intersectAttrs and functionArgs, this can be written as:
foo = callPackage (import ./foo.nix) { };
where
callPackage = f: args:
f ((builtins.intersectAttrs (builtins.functionArgs f) pkgs) // args);
I.e., foo.nix is called with all attributes from "pkgs" that it
actually needs (e.g., pkgs.a, pkgs.b and pkgs.c). (callPackage can
do any other generic package-level stuff we might want, such as
applying makeOverridable.) Of course, the automatically supplied
arguments can be overridden if needed, e.g.
foo = callPackage (import ./foo.nix) {
c = c_version_2;
};
but for the vast majority of packages, this won't be needed.
The advantages are to reduce the amount of typing needed to add a
dependency (from three sites to two), and to reduce the number of
trivial commits to all-packages.nix. For the former, there have
been two previous attempts:
- Use "args: with args;" in the package's function definition.
This however obscures the actual expected arguments of a
function, which is very bad.
- Use "{ arg1, arg2, ... }:" in the package's function definition
(i.e. use the ellipsis "..." to allow arbitrary additional
arguments), and then call the function with all of "pkgs" as an
argument. But this inhibits error detection if you call it with
a misspelled (or obsolete) argument.
In an `inherit (e) ...;' inside a rec, the attributes of the rec are
now in scope of `e'. This is useful in
expressions such as
rec {
lib = import ./lib;
inherit (lib) concatStrings;
}
It does change the semantics of expressions such as
let x = {y = 1;}; in rec { x = {y = 2;}; inherit (x) y; }.y
This now returns 2 instead of 1. However, no code in Nixpkgs or
NixOS seems to rely on the old behaviour.
The syntax `{x.y.z = ...;}' is now allowed as a
shorthand for {x = {y = {z = ...;};};}. This is especially useful
for NixOS configuration files, e.g.
{
services = {
sshd = {
enable = true;
port = 2022;
};
};
}
can now be written as
{
services.sshd.enable = true;
services.sshd.port = 2022;
}
However, it is currently not permitted to write
{
services.sshd = {enable = true;};
services.sshd.port = 2022;
}
as this is considered a duplicate definition of `services.sshd'.
sure that it works as expected when you pass it a derivation. That
is, we have to make sure that all build-time dependencies are built,
and that they are all in the input closure (otherwise remote builds
might fail, for example). This is ensured at instantiation time by
adding all derivations and their sources to inputDrvs and inputSrcs.
hook. This fixes a problem with log files being partially or
completely filled with 0's because another nix-store process
truncates the log file. It should also be more efficient.