This will allow bind and connect to 127.0.0.1, which can reduce purity/
security (if you're running a vulnerable service on localhost) but is
also needed for a ton of test suites, so I'm leaving it turned off by
default but allowing certain derivations to turn it on as needed.
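For illustration, a derivation could opt in roughly like this (a sketch; the attribute name __darwinAllowLocalNetworking is an assumption, not stated above):
  runCommand "localhost-test" { __darwinAllowLocalNetworking = true; } ''
    # test suite that needs to bind/connect to 127.0.0.1 goes here
    touch $out
  ''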
It also allows DNS resolution of arbitrary hostnames but I haven't found
a way to avoid that. In principle I'd just want to allow resolving
localhost but that doesn't seem to be possible.
I don't think this belongs under `build-use-sandbox = relaxed` because we
want it on Hydra and I don't think it's the end of the world.
We used to determine the symlink size with stat() and read its value
with readlink(). This could technically result in garbage if the
symlink changed between the two calls. The new approach also gets
around the broken stat() implementation in our network filesystem
(which returns size + 1, giving a byte of garbage).
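A minimal sketch of reading a symlink without trusting st_size (not the actual Nix implementation; it just illustrates the retry-with-a-larger-buffer idea):
  #include <string>
  #include <vector>
  #include <stdexcept>
  #include <unistd.h>

  std::string readLinkTarget(const std::string & path)
  {
      std::vector<char> buf(64);
      while (true) {
          ssize_t n = readlink(path.c_str(), buf.data(), buf.size());
          if (n < 0) throw std::runtime_error("readlink failed on " + path);
          // readlink() filled the whole buffer, so the target may be
          // truncated: grow the buffer and try again
          if ((size_t) n == buf.size()) { buf.resize(buf.size() * 2); continue; }
          return std::string(buf.data(), n);
      }
  }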
The computation of urlHash didn't take the name into account, so
subsequent fetchurl calls with the same URL but a different name would
resolve to the same cached store path.
The "name" attribute defaults to "source", which we should use for all
similar functions (e.g. fetchTarball and in Hydra) to ensure that we
get a consistent store path regardless of how the tree is fetched.
"source" is not necessarily a correct label, but using an empty name
is problematic: you get an ugly store path ending in a dash, and it's
impossible to have a fixed-output derivation that produces that path
because ".drv" is not a valid store name.
Fixes #904.
You can now include files via the "builders" option, using the syntax
"@<filename>". Having only one option makes it easier to override
builders completely.
For backward compatibility, the default is "@/etc/nix/machines", or
"@<filename>" for each file name in NIX_REMOTE_SYSTEMS.
This makes it slightly more manageable to see at a glance what in a
build's sandbox profile is unique to the build and what is standard. Also
a first step to factoring more of our Darwin logic into scheme functions
that will allow us a bit more flexibility. And of course less of that
nasty codegen in C++! 😀
This speeds up commands like "nix cat-store". For example:
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m4.336s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m0.045s
The primary motivation is to allow hydra-server to serve files from S3
binary caches. Previously Hydra had a hack to do "nix-store -r
<path>", but that fetches the entire closure so is prohibitively
expensive.
There is no garbage collection of the NAR cache yet. Also, the entire
NAR is read when accessing a single member file. We could generate the
NAR listing to provide random access.
Note: the NAR cache is indexed by the store path hash, not the content
hash, so NAR caches should not be shared between binary caches, unless
you're sure that all your builds are binary-reproducible.
Probably as a result of a bad merge in
4b8f1b0ec0, we had both a
BinaryCacheStoreAccessor and a
RemoteFSAccessor. BinaryCacheStore::getFSAccessor() returned the
latter, but BinaryCacheStore::addToStore() checked for the
former. This probably caused hydra-queue-runner to download paths that
it just uploaded.
This check spuriously fails for e.g. git@github.com:NixOS/nixpkgs.git,
and even for ssh://git@github.com/NixOS/nixpkgs.git, and is made
redundant by the checks git itself will do when fetching the repo. We
instead pass a -- before passing the URI to git to avoid injection.
I needed this to test ACL/xattr removal in
canonicalisePathMetaData(). Might also be useful if you need to build
old Nixpkgs that doesn't have the required patches to remove
setuid/setgid creation.
The worker threads could exit prematurely if they finished processing
all items while the main thread was still adding items. In particular,
this caused hanging nix-store --serve processes in the build farm.
Also, process items from the main thread.
It was getting too much like whac-a-mole listing all the retriable error
conditions, so we now retry by default and list the cases where retrying
is almost certainly hopeless.
I find the error message suggesting 'nix-env --set-flag priority NUMBER
PKGNAME' not as helpful as it could be:
- it doesn't show the current priorities;
- it doesn't say that the command must be run on the already-installed
PKGNAME (which is confusing the first time);
- the documentation needs careful reading:
"If there are multiple derivations matching a name in args that have the same name (e.g., gcc-3.3.6 and gcc-4.1.1), then the derivation with the highest priority is used."
If you stop reading there, you get it backwards. Salvation comes with the next sentences: "A derivation can define a priority by declaring the meta.priority attribute. This attribute should be a number, with a higher value denoting a lower priority. The default priority is 0."
To sum it up, the lower number wins. I tried to convey this idea in the
message too.
This is a hack to make hydra-queue-runner free its temproots
periodically, thereby ensuring that garbage collection of the
corresponding paths is not blocked until the queue runner is
restarted.
It would be better if temproots could be released earlier than at
process exit. I started working on a RAII object returned by functions
like addToStore() that releases temproots. However, this would be a
pretty massive change so I gave up on it for now.
For example,
$ nix-store -q --roots /nix/store/7phd2sav7068nivgvmj2vpm3v47fd27l-patchelf-0.8pre845_0315148
{temp:1}
denotes that the path is only being kept alive by a temporary root
(i.e. /nix/var/nix/temproots/). Similarly,
$ nix-store --gc --print-roots
...
{memory:9} -> /nix/store/094gpjn9f15ip17wzxhma4r51nvsj17p-curl-7.53.1
shows that curl is being used by some process.
This command shows why a package has another package in its runtime
closure. For example, to see why VLC has libdrm.dev in its closure:
$ nix why-depends nixpkgs.vlc nixpkgs.libdrm.dev
/nix/store/g901z9pcj0n5yy5n6ykxk3qm4ina1d6z-vlc-2.2.5.1:
lib/libvlccore.so.8.0.0: …nfig:/nix/store/405lmx6jl8lp0ad1vrr6j498chrqhz8g-libdrm-2.4.75-d…
/nix/store/s3nm7kd8hlcg0facn2q1ff2n7wrwdi2l-mesa-noglu-17.0.7-dev:
nix-support/propagated-native-build-inputs: …-dev /nix/store/405lmx6jl8lp0ad1vrr6j498chrqhz8g-libdrm-2.4.75-d…
Thus, VLC's lib/libvlccore.so.8.0.0 as well as mesa-noglu's
nix-support/propagated-native-build-inputs cause the dependency.
In particular, process() won't return as long as there are active
items. This prevents work item lambdas from referring to stack frames
that no longer exist.
Since we may use a dedicated file descriptor in the future, this
allows us to change it. So builders can do
if [[ -n $NIX_LOG_FD ]]; then
echo "@nix { message... }" >&$NIX_LOG_FD
fi
Nix can now automatically run the garbage collector during builds or
while adding paths to the store. The option "min-free = <bytes>"
specifies that Nix should run the garbage collector whenever free
space in the Nix store drops below <bytes>. It will then delete
garbage until "max-free" bytes are available.
Garbage collection during builds is asynchronous; running builds are
not paused and new builds are not blocked. However, there also is a
synchronous GC run prior to the first build/substitution.
Currently, no old GC roots are deleted (as in "nix-collect-garbage
-d").
Since file locks are per-process rather than per-file-descriptor, the
garbage collector would always acquire a lock on its own temproots
file and conclude that it's stale.
Without this, substitute info is fetched sequentially, which is
extremely slow. In the old UI (e.g. nix-build), we call printMissing(),
which calls queryMissing(), thereby pre-warming the binary cache
lookup cache. But the new UI doesn't do that.
In particular, drop the "build-" and "gc-" prefixes which are
pointless. So now you can say
nix build --no-sandbox
instead of
nix build --no-build-use-sandbox
This is useful for testing commands in isolation.
For example,
$ nix run nixpkgs.geeqie -i -k DISPLAY -k XAUTHORITY -c geeqie
runs geeqie in an empty environment, except for $DISPLAY and
$XAUTHORITY.
E.g.
nix run nixpkgs.hello -c hello --greeting Hallo
Note that unlike "nix-shell --command", no quoting of arguments is
necessary.
"-c" (short for "--command") cannot be combined with "--" because they
both consume all remaining arguments. But since installables shouldn't
start with a dash, this is unlikely to cause problems.
Running "nix run" with a diverted store, e.g.
$ nix run --store local?root=/tmp/nix nixpkgs.hello
stopped working when Nix became multithreaded, because
unshare(CLONE_NEWUSER) doesn't work in multithreaded processes. The
obvious solution is to terminate all other threads first, but 1) there
is no way to terminate Boehm GC marker threads; and 2) it appears that
the kernel has a race where unshare(CLONE_NEWUSER) will still fail for
some indeterminate amount of time after joining other threads.
So instead, "nix run" will now exec() a single-threaded helper ("nix
__run_in_chroot") that performs the actual unshare()/chroot()/exec().
Now that we use threads in lots of places, it's possible for
TunnelLogger::log() to be called asynchronously from other threads
than the main loop. So we need to ensure that STDERR_NEXT messages
don't clobber other messages.
Besides being unused, this function has a bug: it incorrectly decodes
the path component Ubuntu\04016.04.2\040LTS\040amd64 as
"Ubuntu.04.2 LTS amd64" instead of "Ubuntu 16.04.2 LTS amd64".
This adds an argument "rev" specififying the Git commit hash. The
existing argument "rev" is renamed to "ref". The default value for
"ref" is "master". When specifying a hash, it's necessary to specify a
ref since we're not cloning the entire repository but only fetching a
specific ref.
Example usage:
builtins.fetchgit {
url = https://github.com/NixOS/nixpkgs.git;
ref = "release-16.03";
rev = "c1c0484041ab6f9c6858c8ade80a8477c9ae4442";
};
The package list is now cached in
~/.cache/nix/package-search.json. This gives a substantial speedup to
"nix search" queries. For example (on an SSD):
First run: (no package search cache, cold page cache)
$ time nix search blender
Attribute name: nixpkgs.blender
Package name: blender
Version: 2.78c
Description: 3D Creation/Animation/Publishing System
real 0m6.516s
Second run: (package search cache populated)
$ time nix search blender
Attribute name: nixpkgs.blender
Package name: blender
Version: 2.78c
Description: 3D Creation/Animation/Publishing System
real 0m0.143s
In particular, don't use base-64, which we don't support. (We do have
base-32 redirects for hysterical reasons.)
Also, add a test for the hashed mirror feature.
This doesn't work in read-only mode, ensuring that operations like
nix path-info --store https://cache.nixos.org -S nixpkgs.hello
(asking for the closure size of nixpkgs.hello in cache.nixos.org) work
when nixpkgs.hello doesn't exist in the local store.
On second thought this was annoying. E.g. "nix log nixpkgs.hello" would
build/download Hello first, even though the log can be fetched
directly from the binary cache.
May need to revisit this.
This allows builds to call setuid binaries. This was previously
possible until we started using seccomp. Turns out that seccomp by
default disallows processes from acquiring new privileges. Generally,
any use of setuid binaries (except those created by the builder
itself) is by definition impure, but some people were relying on this
ability for certain tests.
Example:
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --no-allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 2 log lines:
cannot raise the capability into the Ambient set
: Operation not permitted
$ nix build '(with import <nixpkgs> {}; runCommand "foo" {} "/run/wrappers/bin/ping -c 1 8.8.8.8; exit 1")' --allow-new-privileges
builder for ‘/nix/store/j0nd8kv85hd6r4kxgnwzvr0k65ykf6fv-foo.drv’ failed with exit code 1; last 6 log lines:
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=15.2 ms
Fixes #1429.
Functions like copyClosure() had 3 bool arguments, which creates a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
Cygwin sqlite3 is patched to call SetDllDirectory("/usr/bin") on init, which
affects the current process and is inherited by child processes. It causes
DLLs to be loaded from /usr/bin/ before $PATH, which breaks all sorts of
things. Typical failures would be header/lib version mismatches (e.g.
openssl when running checkPhase on openssh). We'll just set it back to the
default value.
Note that this is a problem with the cygwin version of sqlite3 (currently
3.18.0). nixpkgs doesn't have the problematic patch.
There's no reason to restrict this to Error exceptions. This shouldn't
matter to #1407 since the repl doesn't catch non-Error exceptions
anyway, but you never know...
Recently aws-sdk-cpp quietly switched to using S3 virtual host URIs
(https://github.com/aws/aws-sdk-cpp/commit/69d9c53882), i.e. it sends
requests to http://<bucket>.<region>.s3.amazonaws.com rather than
http://<region>.s3.amazonaws.com/<bucket>. However this interacts
badly with curl connection reuse. For example, if we do the following:
1) Check whether a bucket exists using GetBucketLocation.
2) If it doesn't, create it using CreateBucket.
3) Do operations on the bucket.
then 3) will fail for a minute or so with a NoSuchBucket exception,
presumably because the server being hit is a fallback for cases when
buckets don't exist.
Disabling the use of virtual hosts ensures that 3) succeeds
immediately. (I don't know what S3's consistency guarantees are for
bucket creation, but in practice buckets appear to be available
immediately.)
Newer versions of aws-sdk-cpp call CalculateDelayBeforeNextRetry()
even for non-retriable errors (like NoSuchKey), which causes log spam in
hydra-queue-runner.
Sandboxes cannot be nested, so if Nix's build runs inside a sandbox,
it cannot use a sandbox itself. I don't see a clean way to detect
whether we're in a sandbox, so use a test-specific hack.
https://github.com/NixOS/nix/issues/1413
In particular, UF_IMMUTABLE (uchg) needs to be cleared to allow the
path to be garbage-collected or optimised.
See https://github.com/NixOS/nixpkgs/issues/25819.
Thus, instead of ‘--option <name> <value>’, you can write ‘--<name>
<value>’. So
--option http-connections 100
becomes
--http-connections 100
Apart from brevity, the difference is that it's not an error to set a
non-existent option via --option, but unrecognized arguments are
fatal.
Boolean options have special treatment: they're mapped to the
argument-less flags ‘--<name>’ and ‘--no-<name>’. E.g.
--option auto-optimise-store false
becomes
--no-auto-optimise-store
Even with "build-use-sandbox = false", we now use sandboxing with a
permissive profile that allows everything except the creation of
setuid/setgid binaries.
Also, add rules to allow fixed-output derivations to access the
network.
These rules are sufficient to build stdenvDarwin without any
__sandboxProfile magic.
The filename used was not unique and owned by the build user, so
builds could fail with
error: while setting up the build environment: cannot unlink ‘/nix/store/99i210ihnsjacajaw8r33fmgjvzpg6nr-bison-3.0.4.drv.sb’: Permission denied
runResolver() was barfing on directories like
/System/Library/Frameworks/Security.framework/Versions/Current/PlugIns. It
should probably do something sophisticated for frameworks, but let's
ignore them for now.
This fixes
error: getting attributes of path ‘Versions/Current/CoreFoundation’: No such file or directory
when /System/Library/Frameworks/CoreFoundation.framework/CoreFoundation is a symlink.
Also fixes a segfault when encountering a file that is not a Mach-O binary (such
as /dev/null, which is included in __impureHostDeps in Nixpkgs).
Possibly fixes #786.
Fixes
src/libstore/build.cc:2321:45: error: non-constant-expression cannot be narrowed from type 'int' to 'scmp_datum_t' (aka 'unsigned long') in initializer list [-Wc++11-narrowing]
EAs/ACLs are not part of the NAR canonicalisation. Worse, setting an
ACL allows a builder to create writable files in the Nix store. So get
rid of them.
Closes #185.
This prevents builders from setting the S_ISUID or S_ISGID bits,
preventing users from using a nixbld* user to create a setuid/setgid
binary to interfere with subsequent builds under the same nixbld* uid.
This is based on aszlig's seccomp code
(47f587700d).
Reported by Linus Heckemann.
Fixes
client# error: size mismatch importing path ‘/nix/store/ywf5fihjlxwijm6ygh6s0a353b5yvq4d-libidn2-0.16’; expected 0, got 120264
This is mostly an artifact of the NixOS VM test environment, where the
Nix database doesn't contain hashes/sizes.
http://hydra.nixos.org/build/53537471
And add a 116 KiB ash shell from busybox to the release build. This
helps to make sandbox builds work out of the box on non-NixOS systems
and with diverted stores.
This is useful when we're using a diverted store (e.g. "--store
local?root=/tmp/nix") in conjunction with a statically-linked sh from
the host store (e.g. "sandbox-paths =/bin/sh=/nix/store/.../bin/busybox").
Previously, if a directory `foo` existed together with a file `foo-` (where `-` is any character that sorts before `/`), then `readDirectory` would return an empty list.
To fix this, we now use a tree where we can just access the children of the node, and do not need to rely on sorting behavior to list the contents of a directory.
It now means "paths that were built locally". It no longer includes
paths that were added locally. For those we don't need info.ultimate,
since we have the content-addressability assertion (info.ca).
This is a little convenience command that opens the Nix expression of
the specified package. For example,
nix edit nixpkgs.perlPackages.Moose
opens <nixpkgs/pkgs/top-level/perl-packages.nix> in $EDITOR (at the
right line number for some editors).
This requires the package to have a meta.position attribute.
There is a security issue when a build accidentally stores its $TMPDIR
in some critical place, such as an RPATH. If
TMPDIR=/tmp/nix-build-..., then any user on the system can recreate
that directory and inject libraries into the RPATH of programs
executed by other users. Since /build probably doesn't exist (or isn't
world-writable), this mitigates the issue.
Opening an SSHStore or LegacySSHStore does not actually establish a
connection, so the try/catch block here did nothing. Added a
Store::connect() method to test whether a connection can be
established.
This is useful for one-off situations where you want to specify a
builder on the command line instead of having to mess with
nix.machines. E.g.
$ nix-build -A hello --argstr system x86_64-darwin \
--option builders 'root@macstadium1 x86_64-darwin'
will perform the specified build on "macstadium1".
It also removes the need for a separate nix.machines file since you
can specify builders in nix.conf directly. (In fact nix.machines is
yet another hack that predates the general nix.conf configuration
file, IIRC.)
Note: this option is supported by the daemon for trusted users. The
fact that this allows trusted users to specify paths to SSH keys to
which they don't normally have access is maybe a bit too much trust...
For backwards compatibility, if the URI is just a hostname, ssh://
(i.e. LegacySSHStore) is prepended automatically.
Also, all fields except the URI are now optional. For example, this is
a valid nix.machines file:
local?root=/tmp/nix
This is useful for testing the remote build machinery since you don't
have to mess around with ssh.
This is to simplify remote build configuration. These environment
variables predate nix.conf.
The build hook now has a sensible default (namely build-remote).
The current load is kept in the Nix state directory now.
Since build-remote uses buildDerivation() now, we don't need to copy
the .drv file anymore. This greatly reduces the set of input paths
copied to the remote side (e.g. from 392 to 51 store paths for GNU
hello on x86_64-darwin).
This default implementation of buildPaths() does nothing if all
requested paths are already valid, and throws an "unsupported
operation" error otherwise. This fixes a regression introduced by
c30330df6f in binary cache and legacy
SSH stores.
With catch-all rules, we hide potential errors.
It turns out that a4744254 made one catch-all useless; Flex detected
that it was impossible to reach.
The other is more subtle, as it can only trigger on unfinished escapes
in unfinished strings, which only occurs at EOF.
This caused "nix-store --import" to compute an incorrect hash on NARs
that don't fit in an unsigned int. The import would succeed, but
"nix-store --verify-path" or subsequent exports would detect an
incorrect hash.
A deeper issue is that the export/import format does not contain a
hash, so we can't detect such issues early.
Also, I learned that -Wall does not warn about this.
So for instance "nix copy --to ... nixpkgs.hello" will build
nixpkgs.hello first. It's debatable whether this is a good idea. It
seems desirable for commands like "nix copy" but maybe not for
commands like "nix path-info".
Thus
$ nix build -f foo.nix
will build foo.nix.
And
$ nix build
will build default.nix. However, this may not be a good idea because
it's kind of inconsistent, given that "nix build foo" will build the
"foo" attribute from the default installation source (i.e. the
synthesis of $NIX_PATH), rather than ./default.nix. So I may revert
this.
This allows commands like 'nix path-info', 'nix copy', 'nix verify'
etc. to work on arbitrary installables. E.g. to copy geeqie to a
binary cache:
$ nix copy -r --to file:///tmp/binary-cache nixpkgs.geeqie
Or to get the closure size of thunderbird:
$ nix path-info -S nixpkgs.thunderbird
In particular, this disallows attribute names containing dots or
starting with dots. Hydra already disallowed these. This affects the
following packages in Nixpkgs master:
2048-in-terminal
2bwm
389-ds-base
90secondportraits
lispPackages.3bmd
lispPackages.hu.dwim.asdf
lispPackages.hu.dwim.def
Closes #1342.
The typical use is to inherit Config and add Setting<T> members:
  class MyClass : private Config
  {
      Setting<int> foo{this, 123, "foo", "the number of foos to use"};
      Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};

      MyClass() : Config(readConfigFile("/etc/my-app.conf"))
      {
          std::cout << foo << "\n"; // will print 123 unless overridden
      }
  };
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
This provides a significant speedup, e.g. 64 s -> 12 s for
nix-build --dry-run -I nixpkgs=channel:nixos-16.03 '<nixpkgs/nixos/tests/misc.nix>' -A test
on a cold local and CloudFront cache.
The alternative is to use lots of concurrent daemon connections but
that seems wasteful.
This is useless because the client also caches path info, and can
cause problems for long-running clients like hydra-queue-runner
(i.e. it may return cached info about paths that have been
garbage-collected).
This fixes "No such file or directory" when opening /dev/ptmx
(e.g. http://hydra.nixos.org/build/51094249).
The reason appears to be some changes to /dev/ptmx / /dev/pts handling
between Linux 4.4 and 4.9. See
https://patchwork.kernel.org/patch/7832531/.
The fix is to go back to mounting a proper /dev/pts instance inside
the sandbox. Happily, this now works inside user namespaces, even for
unprivileged users. So
NIX_REMOTE=local?root=/tmp/nix nix-build \
'<nixpkgs/nixos/tests/misc.nix>' -A test
works for non-root users.
The downside is that the fix breaks sandbox builds on older kernels
(probably pre-4.6), since mounting a devpts fails inside user
namespaces for some reason I've never been able to figure out. Builds
on those systems will fail with
error: while setting up the build environment: mounting /dev/pts: Invalid argument
Ah well.
Execute a given program with the (optional) given arguments as the
user running the evaluation, parsing stdout as an expression to be
evaluated.
There are many use cases for nix that would benefit from being able to
run arbitrary code during evaluation, including but not limited to:
* Automatic git fetching to get a sha256 from a git revision
* git rev-parse HEAD
* Automatic extraction of information from build specifications from
other tools, particularly language-specific package managers like
cabal or npm
* Secrets decryption (e.g. with nixops)
* Private repository fetching
Ideally, we would add this functionality in a more principled way to
nix, but in the meantime 'builtins.exec' can be used to get these
tasks done.
The primop is only available when the
'allow-unsafe-native-code-during-evaluation' nix option is true. That
flag also enables the 'importNative' primop, which is strictly more
powerful but less convenient (since it requires compiling a plugin
against the running version of nix).
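A hedged usage sketch (the decryption helper is hypothetical; whatever the command prints on stdout must be a valid Nix expression, e.g. an attribute set):
  # requires allow-unsafe-native-code-during-evaluation = true
  let
    secrets = builtins.exec [ "./decrypt-secrets.sh" "production" ];
  in secrets.dbPassword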
So if "text-compression=br", the .ls file in S3 will get a
Content-Encoding of "br". Brotli appears to compress better than xz
for this kind of file and is natively supported by browsers.
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
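For example (a hedged illustration; store parameters are given in the store URI, as elsewhere in these notes):
  $ nix copy --to 's3://my-cache?text-compression=br' /nix/store/abcd...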
Build logs on cache.nixos.org are compressed using Brotli (since this
allows them to be decompressed automatically by Chrome and Firefox),
so it's handy if "nix log" can decompress them.
This allows various Store implementations to provide different ways to
get build logs. For example, BinaryCacheStore can get the build logs
from the binary cache.
Also, remove the log-servers option since we can use substituters for
this.
* Unify SSH code in SSHStore and LegacySSHStore.
* Fix a race starting the SSH master. We now wait synchronously for
the SSH master to finish starting. This prevents the SSH clients
from starting their own connections.
* Don't use a master if max-connections == 1.
* Add a "max-connections" store parameter.
* Add a "compress" store parameter.
"build-max-jobs" and the "-j" option can now be set to "auto" to use
the number of CPUs in the system. (Unlike build-cores, it doesn't use
0 to imply auto-configuration, because a) magic values are a bad idea
in general; b) 0 is a legitimate value used to disable local
building.)
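For example:
  $ nix-build -j auto '<nixpkgs>' -A hello
or, in nix.conf:
  build-max-jobs = auto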
Fixes #1198.
Need to remember that std::map::insert() and emplace() don't overwrite
existing entries...
This fixes a regression relative to 1.11 that in particular triggers
in nested nix-shells.
Before:
$ nativeBuildInputs=/foo nix-shell -p hello --run 'hello'
build input /foo does not exist
After:
$ nativeBuildInputs=/foo nix-shell -p hello --run 'hello'
Hello, world!
Previously, the Settings class allowed other code to query for string
properties, which led to a proliferation of code all over the place making
up new options without any sort of central registry of valid options. This
commit pulls all those options back into the central Settings class and
removes the public get() methods, to discourage future abuses like that.
Furthermore, because we know the full set of options ahead of time, we
now fail loudly if someone enters an unrecognized option, thus preventing
subtle typos. With some template fun, we could probably also dump the full
set of options (with documentation, defaults, etc.) to the command line,
but I'm not doing that yet here.
... and use this in Downloader::downloadCached(). This fixes
$ nix-build https://nixos.org/channels/nixos-16.09-small/nixexprs.tar.xz -A hello
error: cannot import path ‘/nix/store/csfbp1s60dkgmk9f8g0zk0mwb7hzgabd-nixexprs.tar.xz’ because it lacks a valid signature
This allows <nix/fetchurl.nix> to fetch private Git/Mercurial
repositories, e.g.
import <nix/fetchurl.nix> {
url = https://edolstra@bitbucket.org/edolstra/my-private-repo/get/80a14018daed.tar.bz2;
sha256 = "1mgqzn7biqkq3hf2697b0jc4wabkqhmzq2srdymjfa6sb9zb6qs7";
}
where /etc/nix/netrc contains:
machine bitbucket.org
login edolstra
password blabla...
This works even when sandboxing is enabled.
To do: add unpacking support (i.e. fetchzip functionality).
Some sites (e.g. BitBucket) give a helpful 401 error when trying to
download a private archive if the User-Agent contains "curl", but give
a redirect to a login page otherwise (so for instance
"nix-prefetch-url" will succeed but produce useless output).
This adds support for s3:// URIs in all places where Nix allows URIs,
e.g. in builtins.fetchurl, builtins.fetchTarball, <nix/fetchurl.nix>
and NIX_PATH. It allows fetching resources from private S3 buckets,
using credentials obtained from the standard places (i.e. AWS_*
environment variables, ~/.aws/credentials and the EC2 metadata
server). This may not be super-useful in general, but since we already
depend on aws-sdk-cpp, it's a cheap feature to add.
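For example (hedged; the bucket and file name are hypothetical):
  builtins.fetchurl "s3://my-private-bucket/source.tar.gz"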
Currently, 'nix-daemon --stdio' is always failing for me, due to the
splice call always failing with (on a 32-bit host):
splice(0, NULL, 3, NULL, 4294967295, SPLICE_F_MOVE) = -1 EINVAL (Invalid argument)
With a bit of ftracing (and luck) the problem seems to be that splice()
always fails with EINVAL if the len cast as ssize_t is negative:
http://lxr.free-electrons.com/source/fs/read_write.c?v=4.4#L384
So use SSIZE_MAX instead of SIZE_MAX.
Because config.h can #define things like _FILE_OFFSET_BITS=64 and not
every compilation unit includes config.h, we currently compile half of
Nix with _FILE_OFFSET_BITS=64 and other half with _FILE_OFFSET_BITS
unset. This causes major havoc with the Settings class on e.g. 32-bit ARM,
where different compilation units disagree with the struct layout.
E.g.:
diff --git a/src/libstore/globals.cc b/src/libstore/globals.cc
@@ -166,6 +166,8 @@ void Settings::update()
_get(useSubstitutes, "build-use-substitutes");
+ fprintf(stderr, "at Settings::update(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
_get(buildUsersGroup, "build-users-group");
diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc
+++ b/src/libstore/remote-store.cc
@@ -138,6 +138,8 @@ void RemoteStore::initConnection(Connection & conn)
void RemoteStore::setOptions(Connection & conn)
{
+ fprintf(stderr, "at RemoteStore::setOptions(): &useSubstitutes = %p\n", &nix::settings.useSubstitutes);
conn.to << wopSetOptions
Gave me:
at Settings::update(): &useSubstitutes = 0xb6e5c5cb
at RemoteStore::setOptions(): &useSubstitutes = 0xb6e5c5c7
That was not a fun one to debug!
This writes info about every path in the closure in the same format as
‘nix path-info --json’. Thus it also includes NAR hashes and sizes.
Example:
  [
    {
      "path": "/nix/store/10h6li26i7g6z3mdpvra09yyf10mmzdr-hello-2.10",
      "narHash": "sha256:0ckdc4z20kkmpqdilx0wl6cricxv90lh85xpv2qljppcmz6vzcxl",
      "narSize": 197648,
      "references": [
        "/nix/store/10h6li26i7g6z3mdpvra09yyf10mmzdr-hello-2.10",
        "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24"
      ],
      "closureSize": 20939776
    },
    {
      "path": "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24",
      "narHash": "sha256:1nfn3m3p98y1c0kd0brp80dn9n5mycwgrk183j17rajya0h7gax3",
      "narSize": 20742128,
      "references": [
        "/nix/store/27binbdy296qvjycdgr1535v8872vz3z-glibc-2.24"
      ],
      "closureSize": 20742128
    }
  ]
Fixes #1134.
Previously, all derivation attributes had to be coerced into strings
so that they could be passed via the environment. This is lossy
(e.g. lists get flattened, necessitating configureFlags
vs. configureFlagsArray, of which the latter cannot be specified as an
attribute), doesn't support attribute sets at all, and has size
limitations (necessitating hacks like passAsFile).
This patch adds a new mode for passing attributes to builders, namely
encoded as a JSON file ".attrs.json" in the current directory of the
builder. This mode is activated via the special attribute
__structuredAttrs = true;
(The idea is that one day we can set this in stdenv.mkDerivation.)
For example,
  stdenv.mkDerivation {
    __structuredAttrs = true;
    name = "foo";
    buildInputs = [ pkgs.hello pkgs.cowsay ];
    doCheck = true;
    hardening.format = false;
  }
results in a ".attrs.json" file containing (sans the indentation):
  {
    "buildInputs": [],
    "builder": "/nix/store/ygl61ycpr2vjqrx775l1r2mw1g2rb754-bash-4.3-p48/bin/bash",
    "configureFlags": [
      "--with-foo",
      "--with-bar=1 2"
    ],
    "doCheck": true,
    "hardening": {
      "format": false
    },
    "name": "foo",
    "nativeBuildInputs": [
      "/nix/store/10h6li26i7g6z3mdpvra09yyf10mmzdr-hello-2.10",
      "/nix/store/4jnvjin0r6wp6cv1hdm5jbkx3vinlcvk-cowsay-3.03"
    ],
    "propagatedBuildInputs": [],
    "propagatedNativeBuildInputs": [],
    "stdenv": "/nix/store/f3hw3p8armnzy6xhd4h8s7anfjrs15n2-stdenv",
    "system": "x86_64-linux"
  }
"passAsFile" is ignored in this mode because it's not needed - large
strings are included directly in the JSON representation.
It is up to the builder to do something with the JSON
representation. For example, in bash-based builders, lists/attrsets of
string values could be mapped to bash (associative) arrays.
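A hedged sketch of the latter idea (assuming a JSON tool such as jq is available to the builder, which Nix itself does not provide):
  # read a scalar and a nested attribute back out of .attrs.json
  name=$(jq -r '.name' .attrs.json)
  formatHardening=$(jq -r '.hardening.format' .attrs.json)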
This closes a long-time bug that allowed builds to hang Nix
indefinitely (regardless of timeouts) simply by doing
exec > /dev/null 2>&1; while true; do true; done
Now, on EOF, we just send SIGKILL to the child to make sure it's
really gone.
This allows other threads to install callbacks that run in a regular,
non-signal context. In particular, we can use this to signal the
downloader thread to quit.
Closes #1183.
Regression from a5f2750e ("Fix early removal of rc-file for nix-shell").
The removal of BASH_ENV causes nothing to be executed by bash if it
detects itself in a non-interactive context. Instead, just
use the same condition used by bash to launch bash differently.
According to the bash sources, the condition (stdin and stderr both
must be TTYs) is specified by POSIX so this should be pretty
safe to rely on.
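Roughly, the test amounts to the following (a shell-level sketch of the condition, not the actual C code in bash):
  # interactive only if stdin (fd 0) and stderr (fd 2) are both terminals
  if [ -t 0 ] && [ -t 2 ]; then
      echo "interactive"
  else
      echo "non-interactive"
  fi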
Fixes #1171 on master, needs a backport to the Perl code in 1.11.
I had observed that 'bash --rcfile' would do nothing in a
non-interactive context and cause nothing to be executed if a script
using nix-shell shebangs were run in a non-interactive context.
The 'args' variable here is shadowing one in the outer scope and its
contents end up unused. This causes any '#! nix-shell' lines to
effectively be ignored. The intention here was to clear the args vector,
as far as I can tell (and it seems to work).
It failed with
AWS error uploading ‘6gaxphsyhg66mz0a00qghf9nqf7majs2.ls.xz’: Unable to parse ExceptionName: MissingContentLength Message: You must provide the Content-Length HTTP header.
possibly because the istringstream_nocopy introduced in
0d2ebb4373 doesn't supply the seek
method that the AWS library expects. So bring back the old version,
but only for S3BinaryCacheStore.
That is, when build-repeat > 0, and the output of two rounds differ,
then print a warning rather than fail the build. This is primarily to
let Hydra check reproducibility of all packages.
These syscalls are only available on 32-bit architectures, but libseccomp
should handle them correctly even if we're on a native architecture that
does not have these syscalls.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Commands such as "cp -p" also use fsetxattr() in addition to fchown(),
so we need to make sure these syscalls always return successful as well
in order to avoid nasty "Invalid value" errors.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
What we basically want is a seccomp mode 2 BPF program like this but for
every architecture:
BPF_STMT(BPF_LD+BPF_W+BPF_ABS, offsetof(struct seccomp_data, nr)),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_chown, 4, 0),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_fchown, 3, 0),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_fchownat, 2, 0),
BPF_JUMP(BPF_JMP+BPF_JEQ+BPF_K, __NR_lchown, 1, 0),
BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ALLOW),
BPF_STMT(BPF_RET+BPF_K, SECCOMP_RET_ERRNO)
However, on 32-bit architectures we do have chown32, lchown32 and
fchown32, so we'd need to add all the architecture blurb which
libseccomp handles for us.
So we only need to make sure that we add the 32-bit seccomp arch while
we're on x86_64; otherwise we just stay at the native architecture
that was set during seccomp_init(), which more or less replicates
setting the 32-bit personality during runChild().
The FORCE_SUCCESS() macro here could be a bit less ugly but I think
repeating the seccomp_rule_add() all over the place is way uglier.
Another way would have been to create a vector of syscalls to iterate
over, but that would make error messages uglier because we can either
only print the (libseccomp-internal) syscall number or use
seccomp_syscall_resolve_num_arch() to get the name or even make the
vector a pair number/name, essentially duplicating everything again.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
We're going to use libseccomp instead of creating the raw BPF program,
because we have different syscall numbers on different architectures.
Although our initial seccomp rules will be quite small, it really doesn't
make sense to generate the raw BPF program, because we'd need to duplicate
it and/or add branches for every single architecture we want to support.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This reverts commit ff0c0b645c.
We're going to use seccomp to allow "cp -p" and force chown-related
syscalls to always return 0.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
This solves a problem whereby if /gnu/store/.links had enough entries,
ext4's directory index would be full, leading to link(2) returning
ENOSPC.
* nix/libstore/optimise-store.cc (LocalStore::optimisePath_): Upon
ENOSPC from link(2), print a message and return instead of throwing a
'SysError'.
The SSHStore PR adds this functionality to the daemon, but we have to
handle the case where the Nix daemon is 1.11.
Also, don't require signatures for trusted users. This restores 1.11
behaviour.
Fixes https://github.com/NixOS/hydra/issues/398.
For example, you can now set
build-sandbox-paths = /dev/nvidiactl?
to specify that /dev/nvidiactl should only be mounted in the sandbox
if it exists in the host filesystem. This is useful e.g. for EC2
images that should support both CUDA and non-CUDA instances.
On some architectures (like x86_64 or i686, but not ARM for example)
overflow during integer division causes a crash due to SIGFPE.
Reproduces on a 64-bit system with:
nix-instantiate --eval -E '(-9223372036854775807 - 1) / -1'
The only way this can happen is when the smallest possible integer is
divided by -1, so just special-case that.
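A minimal sketch of the guard, independent of Nix's actual evaluator code (how the special case is reported here is an assumption):
  #include <cstdint>
  #include <limits>
  #include <stdexcept>

  int64_t safeDiv(int64_t a, int64_t b)
  {
      if (b == 0) throw std::runtime_error("division by zero");
      // INT64_MIN / -1 does not fit in int64_t and raises SIGFPE on x86
      if (a == std::numeric_limits<int64_t>::min() && b == -1)
          throw std::runtime_error("overflow in integer division");
      return a / b;
  }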
The removal of CachedFailure caused the value of TimedOut to change,
which broke timed-out handling in Hydra (so timed-out builds would
show up as "aborted" and would be retried, e.g. at
http://hydra.nixos.org/build/42537427).
The store parameter "write-nar-listing=1" will cause BinaryCacheStore
to write a file ‘<store-hash>.ls.xz’ for each ‘<store-hash>.narinfo’
added to the binary cache. This file contains an XZ-compressed JSON
file describing the contents of the NAR, excluding the contents of
regular files.
E.g.
  {
    "version": 1,
    "root": {
      "type": "directory",
      "entries": {
        "lib": {
          "type": "directory",
          "entries": {
            "Mcrt1.o": {
              "type": "regular",
              "size": 1288
            },
            "Scrt1.o": {
              "type": "regular",
              "size": 3920
            },
            ...
          }
        }
      }
    }
  }
(The actual file has no indentation.)
This is intended to speed up the NixOS channels programs index
generator [1], since fetching gazillions of large NARs from
cache.nixos.org is currently a bottleneck for updating the regular
(non-small) channel.
[1] https://github.com/NixOS/nixos-channel-scripts/blob/master/generate-programs-index.cc
We can now write
throw Error("file '%s' not found", path);
instead of
throw Error(format("file '%s' not found") % path);
and similarly
printError("file '%s' not found", path);
instead of
printMsg(lvlError, format("file '%s' not found") % path);
We were passing "p=$PATH" rather than "p=$PATH;", resulting in some
invalid shell code.
Also, construct a separate environment for the child rather than
overwriting the parent's.
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
It's a slight misnomer now because it actually limits *all* downloads,
not just binary cache lookups.
Also add a "enable-http2" option to allow disabling use of HTTP/2
(enabled by default).
The binary cache store can now use HTTP/2 to do lookups. This is much
more efficient than HTTP/1.1 due to multiplexing: we can issue many
requests in parallel over a single TCP connection. Thus it's no longer
necessary to use a bunch of concurrent TCP connections (25 by
default).
For example, downloading 802 .narinfo files from
https://cache.nixos.org/, using a single TCP connection, takes 11.8s
with HTTP/1.1, but only 0.61s with HTTP/2.
This did require a fairly substantial rewrite of the Downloader class
to use the curl multi interface, because otherwise curl wouldn't be
able to do multiplexing for us. As a bonus, we get connection reuse
even with HTTP/1.1. All downloads are now handled by a single worker
thread. Clients call Downloader::enqueueDownload() to tell the worker
thread to start the download, getting a std::future to the result.
This largely reverts c68e5913c7. Running
builds as root breaks "cp -p", since when running as root, "cp -p"
assumes that it can successfully chown() files. But that's not actually
the case since the user namespace doesn't provide a complete uid
mapping. So it barfs with a fatal error message ("cp: failed to
preserve ownership for 'foo': Invalid argument").
BASH_ENV causes all non-interactive shells (e.g. those invoked via
/etc/bashrc) to remove the rc file before the main shell gets to run it.
Completion scripts will often do this. Fixes #976.
Adapted from and fixes #1034.
This fixes an assertion failure in "assert(goal);" in
Worker::waitForInput() after a substitution goal is cancelled by the
termination of another goal. The problem was the line
//worker.childTerminated(shared_from_this()); // FIXME
in the SubstitutionGoal destructor. This was disabled because
shared_from_this() obviously doesn't work from a destructor. So we now
use a real pointer for object identity.
The implementation of "partition" in Nixpkgs is O(n^2) (because of the
use of ++), and for some reason was causing stack overflows in
multi-threaded evaluation (not sure why).
This reduces "nix-env -qa --drv-path" runtime by 0.197s and memory
usage by 298 MiB (in non-Boehm mode).
Normally it's impossible to take a reference to the function passed to
callFunction, so some callers (e.g. ExprApp::eval) allocate that value
on the stack. For functors, a reference to the functor itself may be
kept, so we need to have it on the heap.
Fixes #1045.
The inner lambda was returning a SQLite-internal char * rather than a
std::string, leading to Hydra errors like
Caught exception in Hydra::Controller::Root->narinfo "path âø£â is not in the Nix store at /nix/store/6mvvyb8fgwj23miyal5mdr8ik4ixk15w-hydra-0.1.1234.abcdef/libexec/hydra/lib/Hydra/Controller/Root.pm line 352."
That is, unless --file is specified, the Nix search path is
synthesized into an attribute set. Thus you can say
$ nix build nixpkgs.hello
assuming $NIX_PATH contains an entry of the form "nixpkgs=...". This
is more verbose than
$ nix build hello
but is less ambiguous.
For example, you can now say:
configureFlags = "--prefix=${placeholder "out"} --includedir=${placeholder "dev"}";
The strings returned by the ‘placeholder’ builtin are replaced at
build time by the actual store paths corresponding to the specified
outputs.
Previously, you had to work around the inability to self-reference by doing stuff like:
preConfigure = ''
configureFlags+=" --prefix $out --includedir=$dev"
'';
or rely on ad-hoc variable interpolation semantics in Autoconf or Make
(e.g. --prefix=\$(out)), which doesn't always work.
This makes it easier to create a diverted store, i.e.
NIX_REMOTE="local?root=/tmp/root"
instead of
NIX_REMOTE="local?real=/tmp/root/nix/store&state=/tmp/root/nix/var/nix" NIX_LOG_DIR=/tmp/root/nix/var/log
This fixes the instantiation of pythonPackages.pytest, which produces a
directory with restricted permissions during one of its tests, leading to
a Nix error like:
error: opening directory ‘/tmp/nix-build-python2.7-pytest-2.9.2.drv-0/pytest-of-user/pytest-0/testdir/test_cache_failure_warns0/.cache’: Permission denied
For one particular NixOS configuration, this cut the runtime of
"nix-store -r --dry-run" from 6m51s to 3.4s. It also fixes a bug in
the size calculation that was causing certain paths to be counted
twice, e.g. before:
these paths will be fetched (1249.98 MiB download, 2995.74 MiB unpacked):
and after:
these paths will be fetched (1219.56 MiB download, 2862.17 MiB unpacked):
This way, all builds appear to have a uid/gid of 0 inside the
chroot. In the future, this may allow using programs like
systemd-nspawn inside builds, but that will require assigning a larger
UID/GID map to the build.
Issue #625.
This allows an unprivileged user to perform builds on a diverted store
(i.e. where the physical store location differs from the logical
location).
Example:
$ NIX_LOG_DIR=/tmp/log NIX_REMOTE="local?real=/tmp/store&state=/tmp/var" nix-build -E \
'with import <nixpkgs> {}; runCommand "foo" { buildInputs = [procps nettools]; } "id; ps; ifconfig; echo $out > $out"'
will do a build in the Nix store physically in /tmp/store but
logically in /nix/store (and thus using substituters for the latter).
This is a convenience command to allow users who are not privileged to
create /nix/store to use Nix with regular binary caches. For example,
$ NIX_REMOTE="local?state=$HOME/nix/var&real=/$HOME/nix/store" nix run firefox bashInteractive
will download Firefox and bash from cache.nixos.org, then start a
shell in which $HOME/nix/store is mounted on /nix/store.
This is primarily to subsume the functionality of the
copy-from-other-stores substituter. For example, in the NixOS
installer, we can now do (assuming we're in the target chroot, and the
Nix store of the installation CD is bind-mounted on /tmp/nix):
$ nix-build ... --option substituters 'local?state=/tmp/nix/var&real=/tmp/nix/store'
However, unlike copy-from-other-stores, this also allows write access
to such a store. One application might be fetching substitutes for
/nix/store in a situation where the user doesn't have sufficient
privileges to create /nix, e.g.:
$ NIX_REMOTE="local?state=/home/alice/nix/var&real=/home/alice/nix/store" nix-build ...
E.g.
$ nix-build -I nixpkgs=git://github.com/NixOS/nixpkgs '<nixpkgs>' -A hello
This is not extremely useful yet because you can't specify a
branch/revision.
The function builtins.fetchgit fetches Git repositories at evaluation
time, similar to builtins.fetchTarball. (Perhaps the name should be
changed, since it is easily confused with Nixpkgs's fetchgit function,
which works at build time.)
Example:
(import (builtins.fetchgit git://github.com/NixOS/nixpkgs) {}).hello
or
(import (builtins.fetchgit {
url = git://github.com/NixOS/nixpkgs-channels;
rev = "nixos-16.03";
}) {}).hello
Note that the result does not contain a .git directory.
If --no-build-output is given (which will become the default for the
"nix" command at least), show the last 10 lines of the build output if
the build fails.
This replaces nix-push. For example,
$ nix copy --to file:///tmp/cache -r $(type -p firefox)
copies the closure of firefox to the specified binary cache. And
$ nix copy --from file:///tmp/cache --to s3://my-cache /nix/store/abcd...
copies between two binary caches.
It will also replace nix-copy-closure, once we have an SSHStore class,
e.g.
$ nix copy --from ssh://alice@machine /nix/store/abcd...
This allows commands like "nix verify --all" or "nix path-info --all"
to work on S3 caches.
Unfortunately, this requires some ugly hackery: when querying the
contents of the bucket, we don't want to have to read every .narinfo
file. But the S3 bucket keys only include the hash part of each store
path, not the name part. So as a special exception
queryAllValidPaths() can now return store paths *without* the name
part, and queryPathInfo() accepts such store paths (returning a
ValidPathInfo object containing the full name).
Caching path info is generally useful. For instance, it speeds up "nix
path-info -rS /run/current-system" (i.e. showing the closure sizes of
all paths in the closure of the current system) from 5.6s to 0.15s.
This also eliminates some APIs like Store::queryDeriver() and
Store::queryReferences().
"verify-store" is now simply an "--all" flag to "nix verify". This
flag can be used for any other store path command as well (e.g. "nix
path-info", "nix copy-sigs", ...).
For convenience, you can now say
$ nix-env -f channel:nixos-16.03 -iA hello
instead of
$ nix-env -f https://nixos.org/channels/nixos-16.03/nixexprs.tar.xz -iA hello
Similarly,
$ nix-shell -I channel:nixpkgs-unstable -p hello
$ nix-build channel:nixos-15.09 -A hello
Abstracting over the NixOS/Nixpkgs channels location also allows us to
use a more efficient transport (e.g. Git) in the future.
Thus, -I / $NIX_PATH entries are now downloaded only when they are
needed for evaluation. A failure to download an entry is a non-fatal
warning (just like non-existent paths).
This does change the semantics of builtins.nixPath, which now returns
the original, rather than resulting path. E.g., before we had
[ { path = "/nix/store/hgm3yxf1lrrwa3z14zpqaj5p9vs0qklk-nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
but now
[ { path = "https://nixos.org/channels/nixos-16.03/nixexprs.tar.xz"; prefix = "nixpkgs"; } ... ]
Fixes #792.
This specifies the number of distinct signatures required to consider
each path "trusted".
Also renamed ‘--no-sigs’ to ‘--no-trust’ for the flag that disables
verifying whether a path is trusted (since a path can also be trusted
if it has no signatures, but was built locally).
These are content-addressed paths or outputs of locally performed
builds. They are trusted even if they don't have signatures, so "nix
verify-paths" won't complain about them.
Typical usage is to check local paths using the signatures from a
binary cache:
$ nix verify-paths -r /run/current-system -s https://cache.nixos.org
path ‘/nix/store/c1k4zqfb74wba5sn4yflb044gvap0x6k-nixos-system-mandark-16.03.git.fc2d7a5M’ is untrusted
...
checked 844 paths, 119 untrusted
The flag remembering whether an Interrupted exception was thrown is
now thread-local. Thus, all threads will (eventually) throw
Interrupted. Previously, one thread would throw Interrupted, and then
the other threads wouldn't see that they were supposed to quit.
Unlike "nix-store --verify-path", this command verifies signatures in
addition to store path contents, is multi-threaded (especially useful
when verifying binary caches), and has a progress indicator.
Example use:
$ nix verify-paths --store https://cache.nixos.org -r $(type -p thunderbird)
...
[17/132 checked] checking ‘/nix/store/rawakphadqrqxr6zri2rmnxh03gqkrl3-autogen-5.18.6’
Doing a chdir() is a bad idea in multi-threaded programs, leading to
failures such as
error: cannot connect to daemon at ‘/nix/var/nix/daemon-socket/socket’: No such file or directory
Since Linux doesn't have a connectat() syscall like FreeBSD, there is
no way we can support this in a race-free way.
This enables an optimisation in hydra-queue-runner, preventing a
download of a NAR it just uploaded to the cache when reading files
like hydra-build-products.
For example,
$ NIX_REMOTE=file:///my-cache nix ls-store -lR /nix/store/f4kbgl8shhyy76rkk3nbxr0lz8d2ip7q-binutils-2.23.1
dr-xr-xr-x 0 ./bin
-r-xr-xr-x 30748 ./bin/addr2line
-r-xr-xr-x 66973 ./bin/ar
...
Similarly, "nix ls-nar" lists the contents of a NAR file, "nix
cat-nar" extracts a file from a NAR file, and "nix cat-store" extract
a file from a Nix store.
This allows a RemoteStore object to be used safely from multiple
threads concurrently. It will make multiple daemon connections if
necessary.
Note: pool.hh and sync.hh have been copied from the Hydra source tree.
This is currently only used by the Hydra queue runner rework, but like
eff5021eaa it presumably will be useful
for the C++ rewrite of nix-push and
download-from-binary-cache. (@shlevy)
Also, move a few free-standing functions into StoreAPI and Derivation.
Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
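A minimal sketch of the idea (the real ref<T> in Nix has more conveniences; this just shows the non-null invariant):
  #include <memory>
  #include <stdexcept>

  template<typename T>
  class ref
  {
      std::shared_ptr<T> p;
  public:
      explicit ref(const std::shared_ptr<T> & p) : p(p)
      {
          if (!p) throw std::invalid_argument("null shared_ptr passed to ref<T>");
      }
      T & operator *() const { return *p; }
      T * operator ->() const { return p.get(); }
      operator std::shared_ptr<T>() const { return p; }
  };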
For example,
$ nix-build --hash -A nix-repl.src
will build the fixed-output derivation nix-repl.src (a fetchFromGitHub
call), but instead of *verifying* the hash given in the Nix
expression, it prints out the resulting hash, and then moves the
result to its content-addressed location in the Nix store. E.g
build produced path ‘/nix/store/504a4k6zi69dq0yjc0bm12pa65bccxam-nix-repl-8a2f5f0607540ffe56b56d52db544373e1efb980-src’ with sha256 hash ‘0cjablz01i0g9smnavhf86imwx1f9mnh5flax75i615ml71gsr88’
The goal of this is to make all nix-prefetch-* scripts unnecessary: we
can just let Nix run the real thing (i.e., the corresponding fetch*
derivation).
Another example:
$ nix-build --hash -E 'with import <nixpkgs> {}; fetchgit { url = "https://github.com/NixOS/nix.git"; sha256 = "ffffffffffffffffffffffffffffffffffffffffffffffffffff"; }'
...
git revision is 9e7c1a4bbd
...
build produced path ‘/nix/store/gmsnh9i7x4mb7pyd2ns7n3c9l90jfsi1-nix’ with sha256 hash ‘1188xb621diw89n25rifqg9lxnzpz7nj5bfh4i1y3dnis0dmc0zp’
(Having to specify a fake sha256 hash is a bit annoying...)
Previously files in the Nix store were owned by root or by nixbld,
depending on whether they were created by a substituter or by a
builder. This doesn't matter much, but causes spurious diffoscope
differences. So use root everywhere.
E.g.
$ nix-build pkgs/stdenv/linux/ -A stage1.pkgs.perl --check
nix-store: src/libstore/build.cc:1323: void nix::DerivationGoal::tryToBuild(): Assertion `buildMode != bmCheck || validPaths.size() == drv->outputs.size()' failed.
when perl.out exists but perl.man doesn't. The fix is to only check
the outputs that exist. Note that "nix-build -A stage1.pkgs.all
--check" will still give a (proper) error in this case.
This was observed in the deb_debian7x86_64 build:
http://hydra.nixos.org/build/29973215
Calling c_str() on a temporary should be fine because the temporary
shouldn't be destroyed until after the execl() call, but who knows...
If repair found a corrupted/missing path that depended on a
multiple-output derivation, and some of the outputs of the latter were
not present, it failed with a message like
error: path ‘/nix/store/cnfn9d5fjys1y93cz9shld2xwaibd7nn-bash-4.3-p42-doc’ is not valid
Also show the types when Nix cannot compare values of different types.
This is more consistent, since types are already shown when comparing values of the same non-comparable type.
For example, "${{ foo = "bar"; __toString = x: x.foo; }}" evaluates
to "bar".
With this, we can delay calling functions like mkDerivation,
buildPythonPackage, etc. until we actually need a derivation, enabling
overrides and other modifications to happen by simple attribute set
update.
This makes Darwin consistent with Linux: Nix expressions can't break
out of the sandbox unless relaxed sandbox mode is enabled.
For the normal sandbox mode this will require fixing #759 however.
Otherwise, since the call to write a "d" character to the lock file
can fail with ENOSPC, we can get an unhandled exception resulting in a
call to terminate().
Caused by 8063fc497a. If tmpDir !=
tmpDirInSandbox (typically when there are multiple concurrent builds
with the same name), the *Path attribute would not point to an
existing file. This caused Nixpkgs' writeTextFile to write an empty
file. In particular this showed up as hanging VM builds (because it
would run an empty run-nixos-vm script and then wait for it to finish
booting).
This is arguably nitpicky, but I think this new formulation is even
clearer. My thinking is that it's easier to comprehend when the
calculated hash value is displayed close to the output path. (I think it
is somewhat similar to eliminating double negatives in logic
statements.)
The formulation is inspired / copied from the OpenEmbedded build tool,
bitbake.
Rather than using $<host-TMPDIR>/nix-build-<drvname>-<number>, the
temporary directory is now always /tmp/nix-build-<drvname>-0. This
improves bitwise-exact reproducibility for builds that store $TMPDIR
in their build output. (Of course, those should still be fixed...)
edolstra:
“…since callers of readDirectory have to handle the possibility of
DT_UNKNOWN anyway, and we don't want to do a stat call for every
directory entry unless it's really needed.”
The nixpkgs manual prescribes the use of values from stdenv.lib.licenses
for the meta.license attribute. Those values are attribute sets and
currently skipped when running nix-env with '--xml --meta'. As a
consequence, nixpkgs-lint also reports missing licenses.
With this commit, nix-env with '--xml --meta' prints all attributes
of an attribute set that are of type tString. For example, the output for
the package nixpkgs.hello is
<meta name="license" type="strings">
<string type="url" value="http://spdx.org/licenses/GPL-3.0+" />
<string type="shortName" value="gpl3Plus" />
<string type="fullName" value="GNU General Public License v3.0 or later" />
<string type="spdxId" value="GPL-3.0+" />
</meta>
This commit fixes nixpkgs-lint, too.
Temporarily allow derivations to describe their full sandbox profile.
This will be eventually scaled back to a more secure setup, see the
discussion at #695
Nix reports a hash mismatch saying:
output path ‘foo’ should have sha256 hash ‘abc’, instead has ‘xyz’
That message is slightly ambiguous and some people read that statement
to mean the exact opposite of what it is supposed to mean. After this
patch, the message will be:
Nix expects output path ‘foo’ to have sha256 hash ‘abc’, instead it has ‘xyz’
- rename options but leave old names as lower-priority aliases,
also "-dirs" -> "-paths" to get closer to the meaning
- update docs to reflect the new names (old aliases are not documented),
including a new file with release notes
- tests need an update after corresponding changes to nixpkgs
- __noChroot is left as it is (after discussion on the PR)
Passing "--option build-repeat <N>" will cause every build to be
repeated N times. If the build output differs between any two rounds, the
build is rejected, and the output paths are not registered as
valid. This is primarily useful to verify build determinism. (We
already had a --check option to repeat a previously successful
build. However, with --check, non-deterministic builds are registered
in the DB. Preventing that is useful for Hydra to ensure that
non-deterministic builds don't end up getting published at all.)
This reverts commit 79ca503332. Ouch,
never noticed this. We definitely don't want to allow builds to have
arbitrary access to /bin and /usr/bin, because then they can (for
instance) bring in a bunch of setuid programs. Also, we shouldn't be
encouraging the use of impurities in the default configuration.
If automatic store optimisation is enabled, and a hard-linked file in
the store gets corrupted, then the corresponding .links entry will
also be corrupted. In that case, trying to repair with --repair or
--repair-path won't work, because the new "good" file will be replaced
by a hard link to the corrupted file. We can catch most of these cases
by doing a sanity-check on the file sizes.
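A minimal sketch of what such a sanity check could look like (names are illustrative, not the actual repair code):
#include <string>
#include <sys/stat.h>

// Only hard-link the freshly repaired file to an existing .links entry if the
// entry looks intact; a size mismatch means the entry itself (and every path
// hard-linked to it) is corrupt, so don't deduplicate against it.
bool linkEntryLooksIntact(const std::string & linkPath, const struct stat & stGood)
{
    struct stat stLink;
    if (lstat(linkPath.c_str(), &stLink) == -1) return false;  // no entry yet
    return stLink.st_size == stGood.st_size;
}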
> I made this change for two reasons:
> 1. Darwin's locale data doesn't appear to be open source
> 2. Privileged processes will always use /usr/share/locale regardless of environment variables
This removes the need to have multiple downloads in the stdenv
bootstrap process (like a separate busybox binary for Linux, or
curl/mkdir/sh/bzip2 for Darwin). Now all those files can be combined
into a single NAR.
This makes it consistent with the Nixpkgs fetchurl and makes it work
in chroots. We don't need verification because the hash of the result
is checked anyway.
The stack allocated for the builder was way too small (32 KB). This is
sufficient for normal derivations, because they just do some setup and
then exec() the actual builder. But for the fetchurl builtin
derivation it's not enough. Also, allocating the stack on the caller's
stack was fishy business.
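The fix amounts to something like the following sketch (sizes and flags are illustrative): give the clone()d child a heap-allocated stack that is large enough for the builtin fetchurl.
#include <sched.h>
#include <signal.h>
#include <unistd.h>
#include <vector>

static int childEntry(void * arg)
{
    // ... set up the sandbox and run the builder, or the builtin fetchurl ...
    return 0;
}

pid_t startBuilderChild()
{
    // 1 MiB instead of a 32 KiB buffer on the parent's stack; 'static' keeps
    // the stack alive for as long as the child may be running.
    static std::vector<char> stack(1024 * 1024);
    return clone(childEntry, stack.data() + stack.size(), SIGCHLD, nullptr);
}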
This allows overriding the name component of the resulting Nix store
path, which is necessary if the base name of the URI contains
"illegal" characters.
This is in particular useful for fetchFromGitHub et al., ensuring that
the store path produced by nix-prefetch-url corresponds to what those
functions expect.
This allows nix-prefetch-url to prefetch the output of fetchzip and
its wrappers (like fetchFromGitHub). For example:
$ nix-prefetch-url --unpack https://github.com/NixOS/patchelf/archive/0.8.tar.gz
or from a Nix expression:
$ nix-prefetch-url -A nix-repl.src
In the latter case, --unpack can be omitted because nix-repl.src is a
fetchFromGitHub derivation and thus has "outputHashMode" set to
"recursive".
Some evidence that defining it to be 0 is right:
* OS X headers define it to be 0.
* Other code uses 0 instead of SOL_LOCAL to check for peer credentials
(e.g. FreeBSD's implementation of getpeereid).
Previously, pkg-config was already queried for libsqlite3's and
libcurl's link flags. However, they were not used; the flags were hardcoded
instead. This commit replaces the hardcoded LDFLAGS with the ones
provided by pkg-config in a similar pattern as already used for
libsodium.
For example,
$ nix-prefetch-url -A hello.src
will prefetch the file specified by the fetchurl call in the attribute
‘hello.src’ from the Nix expression in the current directory. This
differs from ‘nix-build -A hello.src’ in that it doesn't verify the
hash.
You can also specify a path to the Nix expression:
$ nix-prefetch-url ~/Dev/nixpkgs -A hello.src
List elements (typically used in ‘patches’ attributes) also work:
$ nix-prefetch-url -A portmidi.patches.0
It was strange to show "upgrading" when the version was getting lower.
This is left on "upgrading" when the versions are the same,
as I can't see any better wording.
Until now, if one explicitly installed a low-priority version,
nix-env --upgrade would downgrade it by default and even with --leq.
Let's never accept an upgrade with version not matching the upgradeType.
Additionally, let's never decrease the priority of an installed package;
you can use --install to force that.
Also refactor to use a variable bestVersion instead of bestName,
as only the version was used from it.
Fixes https://github.com/NixOS/nixpkgs/issues/9504.
Note that this means we may have a non-functional /bin/sh in the
chroot while rebuilding Bash or one of its dependencies. Ideally those
packages don't rely on /bin/sh though.
The value pointers of lists with 1 or 2 elements are now stored in the
list value itself. In particular, this makes the "concatMap (x: if
cond then [(f x)] else [])" idiom cheaper.
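A simplified sketch of the idea (not the exact Value layout): short lists reuse space inside the Value union instead of pointing to a separately allocated array.
#include <cstddef>

enum ValueType { tList1, tList2, tListN /* , ... other types ... */ };

struct Value
{
    ValueType type;
    union {
        struct { size_t size; Value * * elems; } bigList;  // lists of 3+ elements
        Value * smallList[2];                               // lists of 1 or 2 elements
    };
    size_t listSize() const
    {
        return type == tList1 ? 1 : type == tList2 ? 2 : bigList.size;
    }
    Value * * listElems()
    {
        return type == tListN ? bigList.elems : smallList;
    }
};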
This ensures that 1) the derivation doesn't change when Nix changes;
2) the derivation closure doesn't contain Nix and its dependencies; 3)
we don't have to rely on ugly chroot hacks.
In particular, hydra-queue-runner can now distinguish between remote
build / substitution / already-valid. For instance, if a path already
existed on the remote side, we don't want to store a log file.
Previously, to build a derivation remotely, we had to copy the entire
closure of the .drv file to the remote machine, even though we only
need the top-level derivation. This is very wasteful: the closure can
contain thousands of store paths, and in some Hydra use cases, include
source paths that are very large (e.g. Git/Mercurial checkouts).
So now there is a new operation, StoreAPI::buildDerivation(), that
performs a build from an in-memory representation of a derivation
(BasicDerivation) rather than from an on-disk .drv file. The only files
that need to be in the Nix store are the sources of the derivation
(drv.inputSrcs), and the needed output paths of the dependencies (as
described by drv.inputDrvs). "nix-store --serve" exposes this
interface.
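A simplified sketch of the shape of that interface (the types and the exact signature here are assumptions, not the real headers):
#include <map>
#include <set>
#include <string>
#include <vector>

using Path = std::string;

struct BasicDerivation
{
    std::map<std::string, Path> outputs;           // output name -> store path
    std::set<Path> inputSrcs;                      // sources that must be in the store
    Path platform, builder;
    std::vector<std::string> args;
    std::map<std::string, std::string> env;
};

struct BuildResult { bool success = false; std::string errorMsg; };

struct StoreAPI
{
    // Build from the in-memory representation; the .drv closure is never copied.
    virtual BuildResult buildDerivation(const Path & drvPath, const BasicDerivation & drv) = 0;
    virtual ~StoreAPI() { }
};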
Note that this is a privileged operation, because you can construct a
derivation that builds any store path whatsoever. Fixing this will
require changing the hashing scheme (i.e., the output paths should be
computed from the other fields in BasicDerivation, allowing them to be
verified without access to other derivations). However, this would be
quite nice because it would allow .drv-free building (e.g. "nix-env
-i" wouldn't have to write any .drv files to disk).
Fixes #173.
This modification moves Attr and Bindings structures into their own header
file which is dedicated to the attribute set representation. The goal is to
isolate the pieces of code which are related to the attribute set
representation. Thus future modifications of the attribute set
representation will only have to modify these files, and not every other
file across the evaluator.
The following patch is an attempt to address this bug (see
<http://bugs.gnu.org/18994>) by preserving the supplementary groups of
build users in the build environment.
In practice, I would expect that supplementary groups would contain only
one or two groups: the build users group, and possibly the “kvm” group.
[Changed &at(0) to data() and removed tabs - Eelco]
Not substituting builds with "preferLocalBuild = true" was a bad idea,
because it didn't take the cost of dependencies into account. For
instance, if we can't substitute a fetchgit call, then we have to
download/build git and all its dependencies.
Partially reverts 5558652709 and adds a
new derivation attribute "allowSubstitutes" to specify whether a
derivation may be substituted.
Nixpkgs' writeTextAsFile does this:
mv "$textPath" "$n"
Since $textPath was owned by root, if $textPath is on the same
filesystem as $n, $n will be owned by root. As a result, the build
result was rejected as having suspicious ownership.
http://hydra.nixos.org/build/22836807
Hello!
The patch below adds a ‘verifyStore’ RPC with the same signature as the
current LocalStore::verifyStore method.
Thanks,
Ludo’.
From aef46c03ca77eb6344f4892672eb6d9d06432041 Mon Sep 17 00:00:00 2001
From: Ludovic Courtès <ludo@gnu.org>
Date: Mon, 1 Jun 2015 23:17:10 +0200
Subject: [PATCH] Add a 'verifyStore' remote procedure call.
This relaxes restricted mode to allow access to anything in the
store. In the future, it would be better to allow access to only paths
that have been constructed in the current evaluation (so a hard-coded
/nix/store/blabla in a Nix expression would still be
rejected). However, note that reading /nix/store itself is still
rejected, so you can't use this to get access to things you don't know
about.
The call to nix-env expects a string which represents how old the
derivations are, or just "old", which means any generation other than
the current one in use. Currently nix-collect-garbage passes an empty
string to nix-env when using the -d option. This patch corrects the call
to nix-env such that it follows the old behavior.
This hook can be used to set system-specific per-derivation build
settings that don't fit into the derivation model and are too complex or
volatile to be hard-coded into nix. Currently, the pre-build hook can
only add chroot dirs/files through the interface, but it also has full
access to the chroot root.
The specific use case for this is systems where the operating system ABI
is more complex than just the kernel-supported system calls. For example,
on OS X there is a set of system-provided frameworks that can reliably
be accessed by any program linked to them, no matter the version the
program is running on. Unfortunately, those frameworks do not
necessarily live in the same locations on each version of OS X, nor do
their dependencies, and thus nix needs to know the specific version of
OS X currently running in order to make those frameworks available. The
pre-build hook is a perfect mechanism for doing just that.
This is because we don't want to do HTTP requests on every evaluation,
even though we can prevent a full redownload via the cached ETag. The
default is one hour.
This was causing NixOS VM tests to fail mysteriously since
5ce50cd99e. Nscd could (sometimes) no
longer read /etc/hosts:
open("/etc/hosts", O_RDONLY|O_CLOEXEC) = -1 EACCES (Permission denied)
Probably there was some wacky interaction between the guest kernel and
the 9pfs implementation in QEMU.
This function downloads and unpacks the given URL at evaluation
time. This is primarily intended to make it easier to deal with Nix
expressions that have external dependencies. For instance, to fetch
Nixpkgs 14.12:
with import (fetchTarball https://github.com/NixOS/nixpkgs-channels/archive/nixos-14.12.tar.gz) {};
Or to fetch a specific revision:
with import (fetchTarball 2766a4b44e.tar.gz) {};
This patch also adds a ‘fetchurl’ builtin that downloads but doesn't
unpack its argument. Not sure if it's useful though.
Thus, for example, to get /bin/sh in a chroot, you only need to
specify /bin/sh=${pkgs.bash}/bin/sh in build-chroot-dirs. The
dependencies of sh will be added automatically.
This doesn't work anymore if the "strict" chroot mode is
enabled. Instead, add Nix's store path as a dependency. This ensures
that its closure is present in the chroot.
I'm seeing hangs in Glibc's setxid_mark_thread() again. This is
probably because the use of an intermediate process to make clone()
safe from a multi-threaded program (see
524f89f139) is defeated by the use of
vfork(), since the intermediate process will have a copy of Glibc's
threading data structures due to the vfork(). So use a regular fork()
again.
If ‘build-use-chroot’ is set to ‘true’, fixed-output derivations are
now also chrooted. However, unlike normal derivations, they don't get
a private network namespace, so they can still access the
network. Also, the use of the ‘__noChroot’ derivation attribute is
no longer allowed.
Setting ‘build-use-chroot’ to ‘relaxed’ gives the old behaviour.
If ‘--option restrict-eval true’ is given, the evaluator will throw an
exception if an attempt is made to access any file outside of the Nix
search path. This is primarily intended for Hydra, where we don't want
people doing ‘builtins.readFile ~/.ssh/id_dsa’ or stuff like that.
chroot only changes the process root directory, not the mount namespace root
directory, and it is well-known that any process with chroot capability can
break out of a chroot "jail". By using pivot_root as well, and unmounting the
original mount namespace root directory, breaking out becomes impossible.
Non-root processes typically have no ability to use chroot() anyway, but they
can gain that capability through the use of clone() or unshare(). For security
reasons, these syscalls are limited in functionality when used inside a normal
chroot environment. Using pivot_root() this way does allow those syscalls to be
put to their full use.
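A minimal sketch of the technique (assumed and simplified, not the actual sandbox setup code) for a privileged child that has already created its chroot directory:
#include <sched.h>
#include <sys/mount.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <unistd.h>

static int pivotRoot(const char * newRoot, const char * putOld)
{
    return syscall(SYS_pivot_root, newRoot, putOld);   // no glibc wrapper
}

int enterSandboxRoot(const char * chrootRootDir)        // path name is illustrative
{
    if (unshare(CLONE_NEWNS) == -1) return -1;          // private mount namespace
    if (mount(nullptr, "/", nullptr, MS_PRIVATE | MS_REC, nullptr) == -1) return -1;
    if (mount(chrootRootDir, chrootRootDir, nullptr, MS_BIND, nullptr) == -1) return -1;  // pivot_root needs a mount point
    if (chdir(chrootRootDir) == -1) return -1;
    if (mkdir("real-root", 0700) == -1) return -1;      // temporary home for the old root
    if (pivotRoot(".", "real-root") == -1) return -1;
    if (chroot(".") == -1) return -1;
    if (umount2("real-root", MNT_DETACH) == -1) return -1;  // nothing can escape back into it
    return rmdir("real-root");
}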
Sodium's Ed25519 signatures are much shorter than OpenSSL's RSA
signatures. Public keys are also much shorter, so they're now
specified directly in the nix.conf option ‘binary-cache-public-keys’.
The new command ‘nix-store --generate-binary-cache-key’ generates and
prints a public and secret key.
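For example (the key name and file names here are only illustrative):
$ nix-store --generate-binary-cache-key cache.example.org-1 key.sec key.pub
The contents of key.pub can then be added to ‘binary-cache-public-keys’ in nix.conf, while key.sec stays on the signing machine.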
The DT_UNKNOWN fallback code was getting the type of the wrong path,
causing readDir to report "directory" as the type of every file.
Reported by deepfire on IRC.
I.e., not readable to the nixbld group. This improves purity a bit for
non-chroot builds, because it prevents a builder from enumerating
store paths (i.e. it can only access paths it knows about).
Let's not just improve the error message itself, but also the behaviour
to actually work around the ntfs-3g symlink bug. If the readlink() call
returns a smaller size than the stat() call, this really isn't a problem
even if the symlink target really has changed between the calls.
So if stat() reports the size for the absolute path, it's most likely
that the relative path is smaller and thus it should also work for file
system bugs as mentioned in 93002d69fc58c2b71e2dfad202139230c630c53a.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Tested-by: John Ericson <Ericson2314@Yahoo.com>
A message like "error: reading symbolic link `...' : Success" really is
quite confusing, so let's not indicate "success" but rather point out
the real issue.
We could also relax this check to only require a non-negative value,
but this would introduce a race condition between stat() and
readlink() if the link target changes between those two calls, thus
leading to a buffer overflow vulnerability.
Reported by @Ericson2314 on IRC. Happened due to a possible ntfs-3g bug
where a relative symlink returned the absolute path (st_)size in stat()
while readlink() returned the relative size.
Signed-off-by: aszlig <aszlig@redmoonstudios.org>
Tested-by: John Ericson <Ericson2314@Yahoo.com>
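Putting both commits together, the symlink-reading logic ends up looking roughly like this sketch (simplified, not the exact code):
#include <algorithm>
#include <cerrno>
#include <cstring>
#include <stdexcept>
#include <string>
#include <vector>
#include <sys/stat.h>
#include <unistd.h>

std::string readLink(const std::string & path)
{
    struct stat st;
    if (lstat(path.c_str(), &st) == -1)
        throw std::runtime_error("getting status of '" + path + "': " + std::strerror(errno));
    // One spare byte so that growth of the target can be detected.
    std::vector<char> buf(std::max<long long>(st.st_size, 0) + 1);
    ssize_t n = readlink(path.c_str(), buf.data(), buf.size());
    if (n == -1)
        // errno is meaningful here, unlike the old "...: Success" message.
        throw std::runtime_error("reading symbolic link '" + path + "': " + std::strerror(errno));
    if ((size_t) n == buf.size())
        throw std::runtime_error("symbolic link '" + path + "' changed between stat() and readlink()");
    return std::string(buf.data(), n);   // n < st.st_size (e.g. the ntfs-3g case) is accepted
}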
In low memory environments, "nix-env -qa" failed because the fork to
run the pager hit the kernel's overcommit limits. Using posix_spawn
gets around this. (Actually, you have to use posix_spawn with the
undocumented POSIX_SPAWN_USEVFORK flag, otherwise it just uses
fork/exec...)
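A sketch of the pattern (the pager command and the exact flag handling are illustrative):
#include <spawn.h>
#include <unistd.h>

extern char * * environ;

pid_t startPager()
{
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);
#ifdef POSIX_SPAWN_USEVFORK
    // glibc-specific: actually use vfork() underneath; otherwise some glibc
    // versions fall back to fork() and the overcommit problem remains.
    posix_spawnattr_setflags(&attr, POSIX_SPAWN_USEVFORK);
#endif
    char * const argv[] = { (char *) "less", nullptr };
    pid_t pid;
    int err = posix_spawnp(&pid, "less", nullptr, &attr, argv, environ);
    posix_spawnattr_destroy(&attr);
    return err == 0 ? pid : -1;
}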
Code that links to libnixexpr (e.g. plugins loaded with importNative, or
nix-exec) may want to provide custom value types and operations on
values of those types. For example, nix-exec is currently using sets
where a custom IO value type would be more appropriate. This commit
provides a generic hook for such types in the form of tExternal and the
ExternalBase virtual class, which contains all functions necessary for
libnixexpr's type-polymorphic functions (e.g. `showType`) to be
implemented.
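A rough sketch of the shape of such a hook (the method names are illustrative, not the exact interface):
#include <ostream>
#include <string>
#include <utility>

struct ExternalBase
{
    virtual std::string showType() const = 0;     // used by type-polymorphic printing
    virtual std::string typeOf() const = 0;       // used by builtins.typeOf
    virtual std::ostream & print(std::ostream & s) const = 0;
    virtual bool operator ==(const ExternalBase &) const { return false; }
    virtual ~ExternalBase() { }
};

// A plugin such as nix-exec could then wrap an IO action in its own type:
struct IOValue : ExternalBase
{
    std::string name;
    explicit IOValue(std::string n) : name(std::move(n)) { }
    std::string showType() const override { return "an IO action"; }
    std::string typeOf() const override { return "io"; }
    std::ostream & print(std::ostream & s) const override { return s << "<io " << name << ">"; }
};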
The function ‘builtins.match’ takes a POSIX extended regular
expression and an arbitrary string. It returns ‘null’ if the string
does not match the regular expression. Otherwise, it returns a list
containing substring matches corresponding to parenthesis groups in
the regex. The regex must match the entire string (i.e. there is an
implied "^<pat>$" around the regex). For example:
match "foo" "foobar" => null
match "foo" "foo" => []
match "f(o+)(.*)" "foooobar" => ["oooo" "bar"]
match "(.*/)?([^/]*)" "/dir/file.nix" => ["/dir/" "file.nix"]
match "(.*/)?([^/]*)" "file.nix" => [null "file.nix"]
The following example finds all regular files with extension .nix or
.patch underneath the current directory:
let
findFiles = pat: dir: concatLists (mapAttrsToList (name: type:
if type == "directory" then
findFiles pat (dir + "/" + name)
else if type == "regular" && match pat name != null then
[(dir + "/" + name)]
else []) (readDir dir));
in findFiles ".*\\.(nix|patch)" (toString ./.)
Before this there was a bug where a `find` was being called on a
not-yet-sorted set. The code was just a mess before anyway, so I cleaned
it up while fixing it.
Especially in WAL mode on a highly loaded machine, this is not a good
idea because it results in a WAL file of approximately the same size
as the database, which apparently cannot be deleted while anybody is
accessing it.
This was preventing destructors from running. In particular, it was
preventing the deletion of the temproot file for each worker
process. It may also have been responsible for the excessive WAL
growth on Hydra (due to the SQLite database not being closed
properly).
Apparently broken by accident in
8e9140cfde.
With this, attribute sets with a `__functor` attribute can be applied
just like normal functions. This can be used to attach arbitrary
metadata to a function without callers needing to treat it specially.
In particular, gcc 4.6's std::exception::~exception has an exception
specification in c++0x mode, which requires us to use that deprecated
feature in nix (and led to breakage after some recent changes that were
valid c++11).
nix already uses several c++11 features and gcc 4.7 has been around for
over 2 years.
Clearing v.app.right was not enough, because the length field of a
list only takes 32 bits, so the most significant 32 bits of v.app.left
(a.k.a. v.thunk.env) would remain. This could cause Boehm GC to
interpret it as a valid pointer.
This change reduces maximum RSS for evaluating the ‘tested’ job in
nixos/release-small.nix from 1.33 GiB to 0.80 GiB, and runtime by
about 8%.