This is to simplify remote build configuration. These environment
variables predate nix.conf.
The build hook now has a sensible default (namely build-remote).
The current load is kept in the Nix state directory now.
Timeout tests rely on a failed build to determine success,
so make sure these derivations (silent in particular)
don't fail regardless of timeout behavior.
The nix-shell fix in 668fef2e4f revealed
that we had some --pure tests that incorrectly depended on PATH from
config.nix's mkDerivation being overwritten by the caller's PATH.
http://hydra.nixos.org/build/49242478
This closes a long-time bug that allowed builds to hang Nix
indefinitely (regardless of timeouts) simply by doing
exec > /dev/null 2>&1; while true; do true; done
Now, on EOF, we just send SIGKILL to the child to make sure it's
really gone.
Commands such as "cp -p" also use fsetxattr() in addition to fchown(),
so we need to make sure these syscalls always return successfully as well
in order to avoid nasty "Invalid value" errors.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Right now it only tests whether seccomp correctly forges the return
value of chown, but the long-term goal is to test the full sandboxing
functionality at some point in the future.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
The implementation of "partition" in Nixpkgs is O(n^2) (because of the
use of ++), and for some reason was causing stack overflows in
multi-threaded evaluation (not sure why).
This reduces "nix-env -qa --drv-path" runtime by 0.197s and memory
usage by 298 MiB (in non-Boehm mode).
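Assuming this adds a `partition' primop mirroring the Nixpkgs function, a
usage sketch with the usual { right, wrong } result format:
  partition (x: x > 2) [ 1 2 3 4 ] => { right = [ 3 4 ]; wrong = [ 1 2 ]; }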
For example, you can now say:
configureFlags = "--prefix=${placeholder "out"} --includedir=${placeholder "dev"}";
The strings returned by the ‘placeholder’ builtin are replaced at
build time by the actual store paths corresponding to the specified
outputs.
Previously, you had to work around the inability to self-reference by doing stuff like:
preConfigure = ''
configureFlags+=" --prefix $out --includedir=$dev"
'';
or rely on ad-hoc variable interpolation semantics in Autoconf or Make
(e.g. --prefix=\$(out)), which doesn't always work.
Thus, -I / $NIX_PATH entries are now downloaded only when they are
needed for evaluation. A failure to download an entry results in a non-fatal
warning (just like non-existent paths).
This does change the semantics of builtins.nixPath, which now returns
the original rather than the resulting path. E.g., before we had
[ { path = "/nix/store/hgm3yxf1lrrwa3z14zpqaj5p9vs0qklk-nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
but now
[ { path = "https://nixos.org/channels/nixos-16.03/nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
Fixes #792.
This removes the need to have multiple downloads in the stdenv
bootstrap process (like a separate busybox binary for Linux, or
curl/mkdir/sh/bzip2 for Darwin). Now all those files can be combined
into a single NAR.
The $channelName variable passed to the channel builder is the last
portion of the URL, and while that works in the previous test for
channels prior to #519, it doesn't work if the last portion is
nixexprs.tar.bz2.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
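For illustration (hypothetical key name, placeholder key material), a
corresponding nix.conf entry looks something like:
  binary-cache-public-keys = cache.example.org-1:<base64-encoded public key>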
The function ‘builtins.match’ takes a POSIX extended regular
expression and an arbitrary string. It returns ‘null’ if the string
does not match the regular expression. Otherwise, it returns a list
containing substring matches corresponding to parenthesis groups in
the regex. The regex must match the entire string (i.e. there is an
implied "^<pat>$" around the regex). For example:
match "foo" "foobar" => null
match "foo" "foo" => []
match "f(o+)(.*)" "foooobar" => ["oooo" "bar"]
match "(.*/)?([^/]*)" "/dir/file.nix" => ["/dir/" "file.nix"]
match "(.*/)?([^/]*)" "file.nix" => [null "file.nix"]
The following example finds all regular files with extension .nix or
.patch underneath the current directory:
let
findFiles = pat: dir: concatLists (mapAttrsToList (name: type:
if type == "directory" then
findFiles pat (dir + "/" + name)
else if type == "regular" && match pat name != null then
[(dir + "/" + name)]
else []) (readDir dir));
in findFiles ".*\\.(nix|patch)" (toString ./.)
With this, attribute sets with a `__functor` attribute can be applied
just like normal functions. This can be used to attach arbitrary
metadata to a function without callers needing to treat it specially.
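A minimal sketch (the attribute set and its metadata are hypothetical):
  let
    add = {
      meta = "adds two numbers";       # arbitrary metadata
      __functor = self: x: y: x + y;   # invoked when the set is applied
    };
  in add 1 2                           # evaluates to 3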
For the "stdenv accidentally referring to bootstrap-tools", it seems
easier to specify the path that we don't want to depend on, e.g.
disallowedRequisites = [ bootstrapTools ];
So all these years I was totally deluded about the meaning of "set
-e". You might think that it causes statements like "false && true" or
"! true" to fail, but it doesn't...
When running NixOps under Mac OS X, we need to be able to import store
paths built on Linux into the local Nix store. However, HFS+ is
usually case-insensitive, so if there are directories with file names
that differ only in case, then importing will fail.
The solution is to add a suffix ("~nix~case~hack~<integer>") to
colliding files. For instance, if we have a directory containing
xt_CONNMARK.h and xt_connmark.h, then the latter will be renamed to
"xt_connmark.h~nix~case~hack~1". If a store path is dumped as a NAR,
the suffixes are removed. Thus, importing and exporting via a
case-insensitive Nix store is round-tripping. So when NixOps calls
nix-copy-closure to copy the path to a Linux machine, you get the
original file names back.
Closes #119.
Nix search path lookups like <nixpkgs> are now desugared to ‘findFile
nixPath <nixpkgs>’, where ‘findFile’ is a new primop. Thus you can
override the search path simply by saying
let
nixPath = [ { prefix = "nixpkgs"; path = "/my-nixpkgs"; } ];
in ... <nixpkgs> ...
In conjunction with ‘scopedImport’ (commit
c273c15cb1), the Nix search path can be
propagated across imports, e.g.
let
overrides = {
nixPath = [ ... ] ++ builtins.nixPath;
import = fn: scopedImport overrides fn;
scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
builtins = builtins // overrides;
};
in scopedImport overrides ./nixos
‘scopedImport’ works like ‘import’, except that it takes a set of
attributes to be added to the lexical scope of the expression,
essentially extending or overriding the builtin variables. For
instance, the expression
scopedImport { x = 1; } ./foo.nix
where foo.nix contains ‘x’, will evaluate to 1.
This has a few applications:
* It allows getting rid of function argument specifications in package
expressions. For instance, a package expression like:
{ stdenv, fetchurl, libfoo }:
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
can now be written as just
stdenv.mkDerivation { ... buildInputs = [ libfoo ]; }
and imported in all-packages.nix as:
bar = scopedImport pkgs ./bar.nix;
So whereas we once had dependencies listed in three places
(buildInputs, the function, and the call site), they now only need
to appear in one place.
* It allows overriding builtin functions. For instance, to trace all
calls to ‘map’:
let
overrides = {
map = f: xs: builtins.trace "map called!" (map f xs);
# Ensure that our override gets propagated by calls to
# import/scopedImport.
import = fn: scopedImport overrides fn;
scopedImport = attrs: fn: scopedImport (overrides // attrs) fn;
# Also update ‘builtins’.
builtins = builtins // overrides;
};
in scopedImport overrides ./bla.nix
* Similarly, it allows extending the set of builtin functions. For
instance, during Nixpkgs/NixOS evaluation, the Nixpkgs library
functions could be added to the default scope.
There is a downside: calls to scopedImport are not memoized, unlike
import. So importing a file multiple times leads to multiple parsings
/ evaluations. It would be possible to construct the AST only once,
but that would require careful handling of variables/environments.
Now, in addition to a."${b}".c, you can write a.${b}.c (applicable
wherever dynamic attributes are valid).
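For example (a minimal sketch):
  let b = "foo"; a = { foo = { c = 1; }; }; in a.${b}.c   # evaluates to 1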
Signed-off-by: Shea Levy <shea@shealevy.com>
NAR info files in binary caches can now have a cryptographic signature
that Nix will verify before using the corresponding NAR file.
To create a private/public key pair for signing and verifying a binary
cache, do:
$ openssl genrsa -out ./cache-key.sec 2048
$ openssl rsa -in ./cache-key.sec -pubout > ./cache-key.pub
You should also come up with a symbolic name for the key, such as
"cache.example.org-1". This will be used by clients to look up the
public key. (It's a good idea to number keys, in case you ever need
to revoke/replace one.)
To create a binary cache signed with the private key:
$ nix-push --dest /path/to/binary-cache --key ./cache-key.sec --key-name cache.example.org-1
The public key (cache-key.pub) should be distributed to the clients.
Their nix.conf should contain something like:
signed-binary-caches = *
binary-cache-public-key-cache.example.org-1 = /path/to/cache-key.pub
If all works well, then if Nix fetches something from the signed
binary cache, you will see a message like:
*** Downloading ‘http://cache.example.org/nar/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’ (signed by ‘cache.example.org-1’) to ‘/nix/store/7dppcj5sc1nda7l54rjc0g5l1hamj09j-subversion-1.7.11’...
On the other hand, if the signature is wrong, you get a message like
NAR info file `http://cache.example.org/7dppcj5sc1nda7l54rjc0g5l1hamj09j.narinfo' has an invalid signature; ignoring
Signatures are implemented as a single line appended to the NAR info
file, which looks like this:
Signature: 1;cache.example.org-1;HQ9Xzyanq9iV...muQ==
Thus the signature has 3 fields: a version (currently "1"), the ID of the
key, and the base64-encoded signature of the SHA-256 hash of the
contents of the NAR info file up to but not including the Signature
line.
Issue #75.
Since addAttr has to iterate through the AttrPath we pass it, it makes
more sense to just iterate through the AttrNames in addAttr instead. As
an added bonus, this allows attrsets where two dynamic attribute paths
have the same static leading part (see added test case for an example
that failed previously).
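A sketch of the kind of expression this enables (presumably similar to
the added test case):
  { a."${"b"}" = 1; a."${"c"}" = 2; }   # => { a = { b = 1; c = 2; }; }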
Signed-off-by: Shea Levy <shea@shealevy.com>
This adds new syntax for attribute names:
* attrs."${name}" => getAttr name attrs
* attrs ? "${name}" => isAttrs attrs && hasAttr name attrs
* attrs."${name}" or def => if attrs ? "${name}" then attrs."${name}" else def
* { "${name}" = value; } => listToAttrs [{ inherit name value; }]
Of course, it's a bit more complicated than that. The attribute chains
can be arbitrarily long and contain combinations of static and dynamic
parts (e.g. attrs."${foo}".bar."${baz}" or qux), which is relatively
straightforward for the getAttr/hasAttr cases but is more complex for
the listToAttrs case due to rules about duplicate attribute definitions.
For attribute sets with dynamic attribute names, duplicate static
attributes are detected at parse time while duplicate dynamic attributes
are detected when the attribute set is forced. So, for example, { a =
null; a.b = null; "${"c"}" = true; } will be a parse-time error, while
{ a = {}; "${"a"}".b = null; c = true; } will be an eval-time error
(technically that case could theoretically be detected at parse time,
but the general case would require full evaluation). Moreover, duplicate
dynamic attributes are not allowed even in cases where they would be
with static attributes ({ a.b.d = true; a.b.c = false; } is legal, but {
a."${"b"}".d = true; a."${"b"}".c = false; } is not). This restriction
might be relaxed in the future in cases where the static variant would
not be an error, but it is not obvious that that is desirable.
Finally, recursive attribute sets with dynamic attributes have the
static attributes in scope but not the dynamic ones. So rec { a = true;
"${"b"}" = a; } is equivalent to { a = true; b = true; } but rec {
"${"a"}" = true; b = a; } would be an error or use a from the
surrounding scope if it exists.
Note that the getAttr, getAttr-with-default, and hasAttr forms are all
implemented purely in the parser as syntactic sugar, while attribute
sets with dynamic attribute names required changes to the AST to be
implemented cleanly.
This is an alternative solution to and closes #167
Signed-off-by: Shea Levy <shea@shealevy.com>
Certain desugaring schemes may require the parser to use some builtin
function to do some of the work (e.g. currently `throw` is used to
lazily cause an error if a `<>`-style path is not in the search path)
Unfortunately, these names are not reserved keywords, so an expression
that uses such a syntactic sugar will not see the expected behavior
(see tests/lang/eval-okay-redefine-builtin.nix for an example).
This adds the ExprBuiltin AST type, which when evaluated uses the value
from the rootmost variable scope (which of course is initialized
internally and can't shadow any of the builtins).
Signed-off-by: Shea Levy <shea@shealevy.com>
This is required if you have attribute names with dots in them. So
you can now say:
$ nix-instantiate '<nixos>' -A 'config.systemd.units."postgresql.service".text' --eval-only
Fixes #151.
Antiquotes should evaluate to strings or paths. This is usually
checked, except in the case where the antiquote makes up the entire
string, as in "${expr}". This is optimised to expr, which discards
the runtime type checks / coercions.
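For instance (illustrative; the exact error message may differ),
  "${1}"
previously evaluated to the integer 1 because of that optimisation; with
the check in place it is rejected, since an integer cannot be coerced to
a string.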
This reduces the difference between inherited and non-inherited
attribute handling to the choice of which env to use (in recs and lets)
by setting the AttrDef::e to a new ExprVar in the parser rather than
carrying a separate AttrDef::v VarRef member.
As an added bonus, this allows inherited attributes that inherit from a
with to delay forcing evaluation of the with's attributes.
Signed-off-by: Shea Levy <shea@shealevy.com>
Evaluation of attribute sets is strict in the attribute names, which
means immediate evaluation of `with` attribute sets rules out some
potentially interesting use cases (e.g. where the attribute names of one
set depend in some way on another but we want to bring those names into
scope for some values in the second set).
The major example of this is overridable self-referential package sets
(e.g. all-packages.nix). With immediate `with` evaluation, the only
options for such sets are to either make them non-recursive and
explicitly use the name of the overridden set in non-overridden one
every time you want to reference another package, or make the set
recursive and use the `__overrides` hack. As shown in the test case that
comes with this commit, though, delayed `with` evaluation allows a nicer
third alternative.
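A minimal sketch of that third alternative (not necessarily the exact
test case added here):
  let
    overrides = { b = "overridden"; };
    pkgs = (with pkgs; {
      a = "a";
      b = "b";
      c = b;        # `b' is resolved through the with, i.e. via the final pkgs
    }) // overrides;
  in pkgs.c         # evaluates to "overridden"
With immediate `with' evaluation this diverges, because evaluating the
`with' forces pkgs before the attribute set has been constructed.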
Signed-off-by: Shea Levy <shea@shealevy.com>
For instance, it's pointless to keep copy-from-other-stores running if
there are no other stores, or download-using-manifests if there are no
manifests. This also speeds things up because we don't send queries
to those substituters.
In Nixpkgs, the attribute in all-packages.nix corresponding to a
package is usually equal to the package name. However, this doesn't
work if the package contains a dash, which is fairly common. The
convention is to replace the dash with an underscore (e.g. "dbus-glib"
becomes "dbus_glib"), but that's annoying. So now dashes are valid in
variable / attribute names, allowing you to write:
dbus-glib = callPackage ../development/libraries/dbus-glib { };
and
buildInputs = [ dbus-glib ];
Since we don't have a negation or subtraction operation in Nix, this
is unambiguous.
If the options gc-keep-outputs and gc-keep-derivations are both
enabled, you can get a cycle in the liveness graph. There was a hack
to handle this, but it didn't work with multiple-output derivations,
causing the garbage collector to fail with errors like ‘error: cannot
delete path `...' because it is in use by `...'’. The garbage
collector now handles strongly connected components in the liveness
graph as a unit and decides whether to delete all or none of the paths
in an SCC.
Apparently our DBD::SQLite links against /usr/lib/libsqlite3.dylib,
which is an old version that doesn't respect foreign key constraints.
So manifests/cache.sqlite doesn't get updated properly when a manifest
disappears. We should fix our DBD::SQLite, but in the meantime this
will fix the test.
http://hydra.nixos.org/build/3017959
Querying all substitutable paths via "nix-env -qas" is potentially
hard on a server, since it involves sending thousands of HEAD
requests. So a binary cache must now have a meta-info file named
"nix-cache-info" that specifies whether the server wants this. It
also specifies the store prefix so that we don't send useless queries
to a binary cache for a different store prefix.
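A sketch of such a file (field names as assumed here; treat as illustrative):
  StoreDir: /nix/store
  WantMassQuery: 1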
Since SubstitutionGoal::finished() in build.cc computes the hash
anyway, we can prevent the inefficiency of computing the hash twice by
letting the substituter tell Nix about the expected hash, which Nix
can then verify.
"nix-channel --add" now accepts a second argument: the channel name.
This allows channels to have a nicer name than (say) nixpkgs_unstable.
If no name is given, it defaults to the last component of the URL
(with "-unstable" or "-stable" removed).
Also, channels are now stored in a profile
(/nix/var/nix/profiles/per-user/$USER/channels). One advantage of
this is that it allows rollbacks (e.g. if "nix-channel --update" gives
an undesirable update).
Ensuring that the tests work from the build tree requires a growing
number of nasty hacks. The tests also don't verify that the installed
Nix actually works. Thus, the tests now require "make install" to
have been run.
Nix now requires SQLite and bzip2 to be pre-installed. SQLite is
detected using pkg-config. We required DBD::SQLite anyway, so
depending on SQLite is not a big problem.
The --with-bzip2, --with-openssl and --with-sqlite flags are gone.
other simplifications.
* Use <nix/...> to locate the corepkgs. This allows them to be
overridden through $NIX_PATH.
* Use bash's pipefail option in the NAR builder so that we don't need
to create a temporary file.
directory
/home/eelco/src/stdenv-updates
that you want to use as the directory for import such as
with (import <nixpkgs> { });
then you can say
$ nix-build -I nixpkgs=/home/eelco/src/stdenv-updates
brackets, e.g.
import <nixpkgs/pkgs/lib>
are resolved by looking them up relative to the elements listed in
the search path. This allows us to get rid of hacks like
import "${builtins.getEnv "NIXPKGS_ALL"}/pkgs/lib"
The search path can be specified through the ‘-I’ command-line flag
and through the colon-separated ‘NIX_PATH’ environment variable,
e.g.,
$ nix-build -I /etc/nixos ...
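The environment variable form is analogous, e.g. (a sketch combining a
prefixed entry with a plain directory):
  $ export NIX_PATH=nixpkgs=/home/eelco/src/stdenv-updates:/etc/nixos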
If a file is not found in the search path, an error message is
lazily thrown.
the hook every time we want to ask whether we can run a remote build
(which can be very often), we now reuse a hook process for answering
those queries until it accepts a build. So if there are N
derivations to be built, at most N hooks will be started.
The /bin/sh interpreter on Solaris doesn't understand $(...) syntax for running
sub-shells. Consequently, this test fails on Solaris. To remedy the situation,
the script either needs to be run by /bin/bash (which is non-standard), or
it needs to use the ancient but portable `...` syntax.
check' now succeeds :-)
* An attribute set such as `{ foo = { enable = true; };
foo.port = 23; }' now parses. It was previously rejected, but I'm
too lazy to implement the check. (The only reason to reject it is
that the reverse, `{ foo.port = 23; foo = { enable = true; }; }', is
rejected, which is kind of ugly.)
* src/libexpr/expr-to-xml.cc (nix::showAttrs): Add `location'
parameter. Provide location XML attributes when it's true. Update
callers.
(nix::printTermAsXML): Likewise.
* src/libexpr/expr-to-xml.hh (nix::printTermAsXML): Update prototype;
have `location' default to `false'.
* src/nix-instantiate/nix-instantiate.cc (printResult, processExpr): Add
`location' parameter; update callers.
(run): Add support for `--no-location'.
* src/nix-instantiate/help.txt: Update accordingly.
* tests/lang.sh: Invoke `nix-instantiate' with `--no-location' for the
XML tests.
* tests/lang/eval-okay-toxml.exp, tests/lang/eval-okay-to-xml.nix: New
files.
allowed. So `name1@name2', `{attrs1}@{attrs2}' and so on are now no
longer legal. This is no big loss because they were not useful
anyway.
This also changes the output of builtins.toXML for @-patterns
slightly.
doesn't exist. The Debian packages don't include the manifests
directory, so nix-channel would silently skip doing a nix-pull,
resulting in everything being built from source. Thanks to Juan
Pedro Bolívar Puente.
intersectAttrs returns the (right-biased) intersection between two
attribute sets, i.e. every attribute from the second set that also
exists in the first. functionArgs returns the set of attributes
expected by a function.
The main goal of these is to allow the elimination of most of
all-packages.nix. Most package instantiations in all-packages.nix
have this form:
foo = import ./foo.nix {
inherit a b c;
};
With intersectAttrs and functionArgs, this can be written as:
foo = callPackage (import ./foo.nix) { };
where
callPackage = f: args:
f ((builtins.intersectAttrs (builtins.functionArgs f) pkgs) // args);
I.e., foo.nix is called with all attributes from "pkgs" that it
actually needs (e.g., pkgs.a, pkgs.b and pkgs.c). (callPackage can
do any other generic package-level stuff we might want, such as
applying makeOverridable.) Of course, the automatically supplied
arguments can be overridden if needed, e.g.
foo = callPackage (import ./foo.nix) {
c = c_version_2;
};
but for the vast majority of packages, this won't be needed.
The advantages are to reduce the amount of typing needed to add a
dependency (from three sites to two), and to reduce the number of
trivial commits to all-packages.nix. For the former, there have
been two previous attempts:
- Use "args: with args;" in the package's function definition.
This however obscures the actual expected arguments of a
function, which is very bad.
- Use "{ arg1, arg2, ... }:" in the package's function definition
(i.e. use the ellipsis "..." to allow arbitrary additional
arguments), and then call the function with all of "pkgs" as an
argument. But this inhibits error detection if you call it with
a misspelled (or obsolete) argument.
attributes of the rec are in scope of `e'. This is useful in
expressions such as
rec {
lib = import ./lib;
inherit (lib) concatStrings;
}
It does change the semantics of expressions such as
let x = {y = 1;}; in rec { x = {y = 2;}; inherit (x) y; }.y
This now returns 2 instead of 1. However, no code in Nixpkgs or
NixOS seems to rely on the old behaviour.
shorthand for {x = {y = {z = ...;};};}. This is especially useful
for NixOS configuration files, e.g.
{
services = {
sshd = {
enable = true;
port = 2022;
};
};
}
can now be written as
{
services.sshd.enable = true;
services.sshd.port = 2022;
}
However, it is currently not permitted to write
{
services.sshd = {enable = true;};
services.sshd.port = 2022;
}
as this is considered a duplicate definition of `services.sshd'.
sure that it works as expected when you pass it a derivation. That
is, we have to make sure that all build-time dependencies are built,
and that they are all in the input closure (otherwise remote builds
might fail, for example). This is ensured at instantiation time by
adding all derivations and their sources to inputDrvs and inputSrcs.
hook. This fixes a problem with log files being partially or
completely filled with 0's because another nix-store process
truncates the log file. It should also be more efficient.
SHA-256 outputs of fixed-output derivations. I.e. they now produce
the same store path:
$ nix-store --add x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
$ nix-store --add-fixed --recursive sha256 x
/nix/store/j2fq9qxvvxgqymvpszhs773ncci45xsj-x
the latter being the same as the path that a derivation
derivation {
name = "x";
outputHashAlgo = "sha256";
outputHashMode = "recursive";
outputHash = "...";
...
};
produces.
This does change the output path for such fixed-output derivations.
Fortunately they are quite rare. The most common use is fetchsvn
calls with SHA-256 hashes. (There are a handful of those in
Nixpkgs, mostly unstable development packages.)
* Documented the computation of store paths (in store-api.cc).
in attribute set pattern matches. This allows defining a function
that takes *at least* the listed attributes, while ignoring
additional attributes. For instance,
{stdenv, fetchurl, fuse, ...}:
stdenv.mkDerivation {
...
};
defines a function that requires an attribute set that contains the
specified attributes but ignores others. The main advantage is that
we can then write in all-packages.nix
aefs = import ../bla/aefs pkgs;
instead of
aefs = import ../bla/aefs {
inherit stdenv fetchurl fuse;
};
This saves a lot of typing (not to mention not having to update
all-packages.nix with purely mechanical changes). It saves as much
typing as the "args: with args;" style, but has the advantage that
the function arguments are properly declared (not implicit in what
the body of the "with" uses).
functions that take a single argument (plain lambdas) into one AST
node (Function) that contains a Pattern node describing the
arguments. Current patterns are single lazy arguments (VarPat) and
matching against an attribute set (AttrsPat).
This refactoring allows other kinds of patterns to be added easily,
such as Haskell-style @-patterns, or list pattern matching.
again. (After the previous substituter mechanism refactoring I
didn't update the code that obtains the references of substitutable
paths.) This required some refactoring: the substituter programs
are now kept running and receive/respond to info requests via
stdin/stdout.
logic through the `parseDrvName' and `compareVersions' primops.
This will allow expressions to easily check whether some dependency
is a specific needed version or falls in some version range. See
tests/lang/eval-okay-versions.nix for examples.
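For illustration (examples not taken from that test file):
  parseDrvName "nix-0.12pre12876" => { name = "nix"; version = "0.12pre12876"; }
  compareVersions "1.0" "1.1" => -1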
single quotes. Example (from NixOS):
job = ''
start on network-interfaces
start script
rm -f /var/run/opengl-driver
${if videoDriver == "nvidia"
then "ln -sf ${nvidiaDrivers} /var/run/opengl-driver"
else if cfg.driSupport
then "ln -sf ${mesa} /var/run/opengl-driver"
else ""
}
rm -f /var/log/slim.log
end script
'';
This style has two big advantages:
- \, ' and " aren't special, only '' and ${. So you get a lot less
escaping in shell scripts / configuration files in Nixpkgs/NixOS.
The delimiter '' is rare in scripts (and can usually be written as
""). ${ is also fairly rare.
Other delimiters such as <<...>>, {{...}} and <|...|> were also
considered but this one appears to have the fewest drawbacks
(thanks Martin).
- Indentation is intelligently stripped so that multi-line strings
can follow the nesting structure of the containing Nix
expression. E.g. in the example above 6 spaces are stripped from
the start of each line. This prevents unnecessary indentation in
generated files (which sometimes even breaks things).
See tests/lang/eval-okay-ind-string.nix for some examples.
$ nix-env -e $(which firefox)
or
$ nix-env -e /nix/store/nywzlygrkfcgz7dfmhm5xixlx1l0m60v-pan-0.132
* nix-env -i: if an argument contains a slash anywhere, treat it as a
path and follow it through symlinks into the Nix store. This allows
things like
$ nix-build -A firefox
$ nix-env -i ./result
* nix-env -q/-i/-e: don't complain when the `*' selector doesn't match
anything. In particular, `nix-env -q \*' doesn't fail anymore on an
empty profile.
derivations that produce the same output path don't work properly
wrt locking. This happens a lot in the build farm when fetchurl
derivations downloading the same file on different platforms are
executed in parallel and then copied back to the main machine.
* `sub' to subtract two numbers.
* `stringLength' to get the length of a string.
* `substring' to get a substring of a string. These should be enough
to allow most string operations to be expressed.
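For illustration:
  sub 5 3 => 2
  stringLength "foo" => 3
  substring 1 2 "foobar" => "oo"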
from a source directory. All files for which a predicate function
returns true are copied to the store. Typical example is to leave
out the .svn directory:
stdenv.mkDerivation {
...
src = builtins.filterSource
(path: baseNameOf (toString path) != ".svn")
./source-dir;
# as opposed to
# src = ./source-dir;
}
This is important because the .svn directory influences the hash in
a rather unpredictable and variable way.
attribute existence and to return an attribute from an attribute
set, respectively. Example: `hasAttr "foo" {foo = 1;}'. They
differ from the `?' and `.' operators in that the attribute name is
an arbitrary expression. (NIX-61)
* `nix-install-package --help' (NIX-9).
* `nix-install-package --non-interactive': don't prompt or pause.
* Tests for nix-install-package.
* Security fixes: filter the values obtained from the nixpkg.
and returns its path. This can be used to (for instance) write
builders inside a Nix expression, e.g.,
stdenv.mkDerivation {
builder = "
source $stdenv/setup
...
";
...
}
all the primops. This allows Nix expressions to test for new
primops and take appropriate action if they're not available. For
instance, rather than calling a primop `foo' directly, they could
say `if builtins ? foo then builtins.foo ... else ...'.
argument has a valid value, i.e., is in a certain domain. E.g.,
{ foo : [true false]
, bar : ["a" "b" "c"]
}: ...
This previously could be done using assertions, but domain checks
will allow the buildfarm to automatically extract the configuration
space from functions.
"--with-freetype2-library=" + freetype + "/lib"
can now be written as
"--with-freetype2-library=${freetype}/lib"
An arbitrary expression can be enclosed within ${...}, not just
identifiers.
* Escaping in string literals: \n, \r, \t interpreted as in C, any
other character following \ is interpreted as-is.
* Newlines are now allowed in string literals.
externals directory. This is particularly useful because, though
most systems have bzip2/bunzip2, they don't always have libbz2,
which we need for bsdiff/bspatch.
packages (provided that they have a `meta.description' attribute).
E.g.,
$ ./src/nix-env/nix-env -qa --description gcc
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for sparc-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for mips-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for arm-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x
to be queried, e.g., `nix-env -qa firefox'. This does require the
argument '*' to be passed if one wants information about all
derivations, so the old `nix-env -qa' now is `nix-env -qa "*"'.
nix-store query options `--referer' and `--referer-closure' have
been changed to `--referrer' and `--referrer-closure' (but the old
ones are still accepted for compatibility).
`removeAttrs attrs ["x", "y"]' returns the set `attrs' with the
attributes named `x' and `y' removed. It is not an error for the
named attributes to be missing from the input set.
* Removed some dead code (successor stuff) from nix-push.
* Updated terminology in the tests (store expr -> drv path).
* Check that the deriver is set properly in the tests.
being created after the garbage collector has read the temproots
directory. This blocks the creation of new processes, but the
garbage collector could periodically release the GC lock to allow
them to run.
roots to a per-process temporary file in /nix/var/nix/temproots
while holding a write lock on that file. The garbage collector
acquires read locks on all those files, thus blocking further
progress in other Nix processes, and reads the sets of temporary
roots.
closure of the referers relation rather than the references
relation, i.e., the set of all paths that directly or indirectly
refer to the given path. Note that contrary to the references
closure this set is not fixed; it can change as paths are added to
or removed from the store.
Whenever Nix attempts to realise a derivation for which a closure is
already known, but this closure cannot be realised, fall back on
normalising the derivation.
The most common scenario in which this is useful is when we have
registered substitutes in order to perform binary distribution from,
say, a network repository. If the repository is down, the
realisation of the derivation will fail. When this option is
specified, Nix will build the derivation instead. Thus, binary
installation falls back on a source installation. This option is
not the default since it is generally not desirable for a transient
failure in obtaining the substitutes to lead to a full build from
source (with the related consumption of resources).
much as possible. (This is similar to GNU Make's `-k' flag.)
* Refactoring to implement this: previously we just bombed out when
a build failed, but now we have to clean up. In particular this
means that goals must be freed quickly --- they shouldn't hang
around until the worker exits. So the worker now maintains weak
pointers in order not to prevent garbage collection.
* Documented the `-k' and `-j' flags.
* A better substitute mechanism.
Instead of generating a store expression for each store path for
which we have a substitute, we can have a single store expression
that builds a generic program that is invoked to build the desired
store path, which is passed as an argument.
This means that operations like `nix-pull' only produce O(1) files
instead of O(N) files in the store when registering N substitutes.
(It consumes O(N) database storage, of course, but that's not a
performance problem).
* Added a test for the substitute mechanism.
* `nix-store --substitute' reads the substitutes from standard input,
instead of from the command line. This prevents us from running
into the kernel's limit on command line length.
in parallel. Hooks are more efficient: locks on output paths are
only acquired when the hook says that it is willing to accept a
build job. Hooks now work in two phases. First, they should first
tell Nix whether they are willing to accept a job. Nix guarantuees
that no two hooks will ever be in the first phase at the same time
(this simplifies the implementation of hooks, since they don't have
to perform locking (?)). Second, if they accept a job, they are
then responsible for building it (on the remote system), and copying
the result back. These can be run in parallel with other hooks and
locally executed jobs.
The implementation is a bit messy right now, though.
* The directory `distributed' shows a (hacky) example of a hook that
distributes build jobs over a set of machines listed in a
configuration file.
parallel as possible (similar to GNU Make's `-j' switch). This is
useful on SMP systems, but it is especially useful for doing builds
on multiple machines. The idea is that a large derivation is
initiated on one master machine, which then distributes
sub-derivations to any number of slave machines. This should not
happen synchronously or in lock-step, so the master must be capable
of dealing with multiple parallel build jobs. We now have the
infrastructure to support this.
TODO: substitutes are currently broken.