packages (provided that they have a `meta.description' attribute).
E.g.,
$ ./src/nix-env/nix-env -qa --description gcc
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for sparc-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for mips-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x (cross-compiler for arm-linux)
gcc-4.0.2 GNU Compiler Collection, 4.0.x
to be queried, e.g., `nix-env -qa firefox'. This does require the
argument `*' to be passed if one wants information about all
derivations, so the old `nix-env -qa' is now `nix-env -qa "*"'.
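E.g. (output shown is illustrative):

$ nix-env -qa firefox
firefox-1.0.7

$ nix-env -qa "*"
...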
instantiation, e.g. "nix-env -i" and "nix-env -qas" (but not
"nix-env -qa"). It turns out that many redundant calls to
addToStore(path) were made, each of which reads and hashes the entire
path. For instance, the bash bootstrap binary in Nixpkgs would be read
and hashed many times. As a result, nix-env would spend around 92% of
its time in the function sha256_block (according to callgrind).
Some simple memoization fixes this.
expressions that cause an assertion failure (like `assert system ==
"i686-linux"'). This allows all-packages.nix in Nixpkgs to be used
on all platforms, even if some Nix expressions don't work on all
platforms.
Not sure if this is a good idea; it's a bit hacky. In particular,
due to laziness some derivations might appear in `nix-env -qa' but
disappear in `nix-env -qas' or `nix-env -i'.
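E.g., an expression such as the following (names illustrative) is now
skipped on other platforms instead of aborting the whole evaluation:

  { system }:

  assert system == "i686-linux";

  derivation {
    name = "foo-1.0";
    builder = ./builder.sh;
    inherit system;
  }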
Commit 5000!
with the same name *and* version number, and pick the first one
(this means that the order in which channels appear in
~/.nix-channels matters). E.g.:
$ nix-env -i aterm
warning: there are multiple derivations named `aterm-2.4.2'; using the first one
installing `aterm-2.4.2'
the disk is full (because to delete something from the Nix store, we
need a Berkeley DB transaction, which takes up disk space). Under
normal operation, we make sure that there exists a file
/nix/var/nix/db/reserved of 1 MB. When running the garbage
collector, we delete that file before we open the Berkeley DB
environment.
implementations of MD5, SHA-1 and SHA-256. The main benefit is that
we get assembler-optimised implementations of MD5 and SHA-1 (though
unfortunately not SHA-256, at least on x86). OpenSSL's SHA-1
implementation on Intel is twice as fast as ours.
derivation(s) we're interested in, e.g.,
$ nix-instantiate ./all-packages.nix --attr xlibs.libX11
List elements can also be selected:
$ nix-instantiate ./build-for-release.nix --attr 0.subversion
This allows an unambiguous specification of a derivation. Of
course, this should also be added to nix-env and nix-build.
creates a new process group but also a new session. New sessions
have no controlling tty, so child processes like ssh cannot open
/dev/tty (which is bad).
intended). This ensures that any ssh child processes to remote
machines are also killed, and thus the Nix process on the remote
machine also exits. Without this, the remote Nix process will
continue until it exits or until its stdout buffer gets full and it
locks up. (Partially fixes NIX-35.)
deletes a path even if it is reachable from a root. However, it
won't delete a path that still has referrers (since that would
violate store invariants).
Don't try this at home. It's a useful hack for recovering from
certain situations in a somewhat clean way (e.g., holes in closures
due to disk corruption).
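E.g., something like (store path illustrative):

$ nix-store --delete --ignore-liveness /nix/store/abcd...-foo-1.0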
nix-store query options `--referer' and `--referer-closure' have
been changed to `--referrer' and `--referrer-closure' (but the old
ones are still accepted for compatibility).
mapping. The referer table is replaced by a referrer table (note
spelling fix) that stores each referrer separately. That is,
instead of having
referer[P] = {Q_1, Q_2, Q_3, ...}
we store
referrer[(P, Q_1)] = ""
referrer[(P, Q_2)] = ""
referrer[(P, Q_3)] = ""
...
To find the referrers of P, we enumerate over the keys
lexicographically greater than P. This requires the referrer table
to be stored as a B-Tree rather than a hash table.
(The tuples (P, Q) are stored as P + null-byte + Q.)
Old Nix databases are upgraded automatically to the new schema.
Nix is properly shut down when it receives those signals. In
particular this ensures that killing the garbage collector doesn't
cause a subsequent database recovery.
builder. Instead, require that the Nix store has sticky permission
(S_ISVTX); everyone can create files in the Nix store, but they
cannot delete, rename or modify files created by others.
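E.g., like /tmp, the store would typically be set up world-writable
with the sticky bit:

$ chmod 1777 /nix/store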
root (or setuid root), then builds will be performed under one of
the users listed in the `build-users' configuration variable. This
is to make it impossible to influence build results externally,
allowing locally built derivations to be shared safely between
users (see ASE-2005 paper).
To do: only one builder should be active per build user.
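E.g., a line in the Nix configuration file along the lines of (user
names illustrative):

  build-users = nix-builder-1 nix-builder-2 nix-builder-3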
versions to available versions, or vice versa.
For example, the following compares installed versions to available
versions:
$ nix-env -qc
autoconf-2.59 = 2.59
automake-1.9.4 < 1.9.6
f-spot-0.0.10 - ?
firefox-1.0.4 < 1.0.7
...
I.e., there are newer versions available (in the current default Nix
expression) for Automake and Firefox, but not for Autoconf, and
F-Spot is missing altogether.
Conversely, the available versions can be compared to the installed
versions:
$ nix-env -qac
autoconf-2.59 = 2.59
automake-1.9.6 > 1.9.4
bash-3.0 - ?
firefox-1.0.7 > 1.0.4
...
Note that bash is available but no version of it is installed.
If multiple versions are available for comparison, then the highest
is used. E.g., if Subversion 1.2.0 is installed, and Subversion
1.1.4 and 1.2.3 are available, then `nix-env -qc' will print `<
1.2.3', not `> 1.1.4'.
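In that case, something like:

$ nix-env -qc subversion
subversion-1.2.0 < 1.2.3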
If higher versions are available, the version column is printed in
red (using ANSI escape codes).
dependencyClosure { ... searchPath = [ ../foo ../bar ]; ... }
* Primop `dirOf' to return the directory part of a path (e.g., dirOf
/a/b/c == /a/b).
* Primop `relativise' (according to Webster that's a real word!) that,
given paths A and B, returns a string representing path B relative
to path A; e.g., relativise /a/b/c /a/b/x/y => "../x/y".
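E.g., in an expression (illustrative):

  let p = /a/b/c; in
  { dir = dirOf p;                # /a/b
    rel = relativise p /a/b/x/y;  # "../x/y"
  }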
determination (e.g., finding the header file dependencies of a C
file) in Nix low-level builds automatically.
For instance, in the function `compileC' in make/lib/default.nix, we
find the header file dependencies of C file `main' as follows:
localIncludes =
dependencyClosure {
scanner = file:
import (findIncludes {
inherit file;
});
startSet = [main];
};
The function works by "growing" the set of dependencies, starting
with the set `startSet', and calling the function `scanner' for each
file to get its dependencies (which should yield a list of strings
representing relative paths). For instance, when `scanner' is
called on a file `foo.c' that includes the line
#include "../bar/fnord.h"
then `scanner' should yield ["../bar/fnord.h"]. This list of
dependencies is absolutised relative to the including file and added
to the set of dependencies. The process continues until no more
dependencies are found (hence it's a closure).
`dependencyClosure' yields a list that contains in alternation a
dependency, and its relative path to the directory of the start
file, e.g.,
[ /bla/bla/foo.c
"foo.c"
/bla/bar/fnord.h
"../bar/fnord.h"
]
These relative paths are necessary for the builder that compiles
foo.c to reconstruct the relative directory structure expected by
foo.c.
The advantage of `dependencyClosure' over the old approach (using
the impure `__currentTime') is that it's completely pure, and more
efficient because it only rescans for dependencies (i.e., by
building the derivations yielded by `scanner') if sources have
actually changed. The old approach rescanned every time.
(closed(closed(closed(...)))) since this reduces performance by
producing bigger terms and killing caching (which incidentally also
prevents useful infinite recursion detection).