This prints the references graph of the store paths in the GraphML
format [1]. The GraphML format is supported by several graph tools
such as the Python NetworkX library or the Apache TinkerPop project.
[1] http://graphml.graphdrawing.org
Since its superclass RemoteStore::Connection contains 'to' and 'from'
fields that refer to the file descriptor maintained in the subclass,
it was possible for the flush() call in Connection::~Connection() to
write to a closed file descriptor (or worse, a file descriptor now
referencing another file). So make sure that the file descriptor
survives 'to' and 'from'.
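In C++, data members are destroyed in reverse declaration order, so
the fix amounts to declaring the descriptor before the streams. A
minimal sketch (field layout illustrative, not the exact Nix classes):

    struct Connection
    {
        AutoCloseFD fd;  // declared first, hence destroyed last
        FdSink to;       // flushed/closed by their destructors while
        FdSource from;   // fd is still open
        virtual ~Connection() { to.flush(); }  // safe: fd outlives 'to'
    };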
This is primarily because Derivation::{can,will}BuildLocally() depends
on attributes like preferLocalBuild and requiredSystemFeatures, but it
can't handle them properly because it doesn't have access to the
structured attributes.
E.g. __noChroot and allowedReferences now work correctly. We also now
check that the attribute type is correct. For instance, instead of
allowedReferences = "out";
you have to write
allowedReferences = [ "out" ];
Fixes #2453.
This meant that making a typo in an s3:// URI would cause a bucket to
be created. Also it didn't handle eventual consistency very well. Now
it's up to the user to create the bucket.
Tools which re-exec `$SHELL` or `$0` or `basename $SHELL` or even just
`bash` will otherwise get the non-interactive bash, providing a
broken shell for the same reasons described in
https://github.com/NixOS/nixpkgs/issues/27493.
Extends c94f3d5575
Calculating roots seems significantly slower on darwin compared to
linux. Checking for /profile/ links could show some false positives but
should still catch most issues.
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread; a sketch of the locking pattern follows below.
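Roughly, the second point comes down to this pattern (a sketch using
plain std::mutex; the real code uses Nix's Sync<> wrapper, and the
sink may be a coroutine):

    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <string>

    std::mutex mutex;
    std::condition_variable cv;
    std::string buffer;   // unbounded, as noted above
    bool quit = false;

    void drain(std::function<void(const std::string &)> sink)
    {
        while (true) {
            std::string chunk;
            {
                std::unique_lock<std::mutex> lock(mutex);
                cv.wait(lock, [] { return !buffer.empty() || quit; });
                if (buffer.empty() && quit) return;
                chunk = std::move(buffer);
                buffer.clear();
            } // state lock released here...
            sink(chunk); // ...so a slow sink can't block the download thread
        }
    }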
In particular this causes copyStorePath() from HttpBinaryCacheStore to
only start a download if needed. E.g. if the destination LocalStore
goes to sleep waiting for the path lock and another process creates
the path, then LocalStore::addToStore() will never read from the
source so we don't have to do the download.
‘geteuid’ gives us the user that the command is being run as,
including in setuid modes. By using geteuid to determine the user ID,
we can
avoid the ‘sudo -i’ hack when upgrading Nix. So now, upgrading Nix on
macOS is as simple as:
$ sudo nix-channel --update
$ sudo nix-env -u
$ sudo launchctl stop org.nixos.nix-daemon
$ sudo launchctl start org.nixos.nix-daemon
or
$ sudo systemctl restart nix-daemon
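For reference, the distinction looks like this (a stand-alone
illustration, not Nix code): in a setuid binary the real UID is still
the invoking user, while the effective UID reflects the privileges the
command actually runs with.

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        printf("real uid: %d, effective uid: %d\n",
            (int) getuid(), (int) geteuid());
        return 0;
    }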
It's pretty easy to unintentionally install a second version of nix
into the user profile when using a daemon install. In this case it
looks like nix was upgraded while the nix-daemon is probably still
running an older version.
A protocol mismatch can sometimes cause problems when using specific
features with an older daemon. For example:
Nix 2.0 changed the way files are copied to the store. The daemon is
backwards compatible and can still handle older clients, however a 1.11
nix-daemon isn't forwards compatible.
This is already done by coerceToString(), provided that the argument
is a path (e.g. 'fetchGit ./bla'). It fixes the handling of URLs like
git@github.com:owner/repo.git. It breaks 'fetchGit "./bla"', but that
was never intended to work anyway and is inconsistent with other
builtin functions (e.g. 'readFile "./bla"' fails).
E.g.
$ nix upgrade-nix
error: directory '/home/eelco/Dev/nix/inst/bin' does not appear to be part of a Nix profile
instead of
$ nix upgrade-nix
error: '/home/eelco/Dev/nix/inst' is not a symlink
Fix a 32-bit overflow that resulted in negative numbers being printed;
use fmt() instead of boost::format(); change -H to -h for consistency
with 'ls' and 'du'; make the columns narrower (since they can't be
bigger than 1024.0).
If the user has an object greater than 1024 yottabytes, it'll just display it as
N yottabytes instead of overflowing.
Swaps to use boost::format strings instead of std::setw and std::setprecision.
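A hedged sketch of the resulting logic (renderSize here is
illustrative, not the exact Nix code): doing the arithmetic on a
64-bit value avoids the overflow, and the unit table simply stops at
yottabytes.

    #include <boost/format.hpp>
    #include <cstdint>
    #include <string>

    std::string renderSize(uint64_t value)
    {
        static const char units[] = "KMGTPEZY";
        double res = value / 1024.0;
        unsigned int power = 0;
        // Stop at 'Y': anything larger is shown as N yottabytes.
        while (res >= 1024.0 && power + 1 < sizeof(units) - 1) {
            res /= 1024.0;
            ++power;
        }
        return (boost::format("%.1f %ciB") % res % units[power]).str();
    }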
Using a 64bit integer on 32bit systems will come with a bit of a
performance overhead, but given that Nix doesn't use a lot of integers
compared to other types, I think the overhead is negligible, also
considering that 32bit systems are in decline.
The biggest advantage however is that when we use a consistent integer
size across all platforms, it's less likely that we miss breakage
caused by differing integer sizes. One example would be:
https://github.com/NixOS/nixpkgs/pull/44233
On Hydra it will evaluate, because the evaluator runs on a 64bit
machine, but when evaluating the same on a 32bit machine it will fail,
so using 64bit integers should make that consistent.
While the change of the type in value.hh is rather easy to do, we have a
few more options available for doing the conversion in the lexer:
* Via an #ifdef on the architecture and using strtol() or strtoll()
accordingly, depending on which architecture we're on. For the #ifdef
we would need another AX_COMPILE_CHECK_SIZEOF in configure.ac.
* Using istringstream, which would involve copying the value.
* As we're already using boost, lexical_cast might be a good idea.
Spoiler: I went for the latter, first because lexical_cast has an
overload for const char* and second because it doesn't involve copying
the input string. Also, istringstream seems to come with a bigger
overhead than boost::lexical_cast:
https://www.boost.org/doc/libs/release/doc/html/boost_lexical_cast/performance.html
The first method (still using strtol/strtoll) also wasn't something I
pursued further, because it is locale-aware, which I doubt is what we
want, given that the regex for int is [0-9]+.
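The conversion then is essentially this (parseNixInt is a hypothetical
wrapper around what the lexer's int rule does):

    #include <boost/lexical_cast.hpp>
    #include <cstdint>

    typedef int64_t NixInt;  // the widened type from value.hh

    NixInt parseNixInt(const char * s)
    {
        // Works directly on const char * without an intermediate
        // string copy; throws boost::bad_lexical_cast on overflow.
        return boost::lexical_cast<NixInt>(s);
    }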
Signed-off-by: aszlig <aszlig@nix.build>
Fixes: #2339
The profile present in PATH is not necessarily the actual profile
location. User profiles are generally added as $HOME/.nix-profile
in which case the indirect profile link needs to be resolved first.
/home/user/.nix-profile -> /nix/var/nix/profiles/per-user/user/profile
/nix/var/nix/profiles/per-user/user/profile -> profile-15-link
/nix/var/nix/profiles/per-user/user/profile-14-link -> /nix/store/hyi4kkjh3bwi2z3wfljrkfymz9904h62-user-environment
/nix/var/nix/profiles/per-user/user/profile-15-link -> /nix/store/6njpl3qvihz46vj911pwx7hfcvwhifl9-user-environment
To upgrade nix here we want /nix/var/nix/profiles/per-user/user/profile-16-link
instead of /home/user/.nix-profile-1-link. The latter is not a gcroot
and would be garbage collected, resulting in a broken profile.
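A sketch of the extra resolution step (resolveOneLevel is a
hypothetical helper; the real logic differs in its details):

    #include <limits.h>
    #include <string>
    #include <unistd.h>

    // Follow a single symlink level, e.g. turning
    // /home/user/.nix-profile into
    // /nix/var/nix/profiles/per-user/user/profile.
    std::string resolveOneLevel(const std::string & path)
    {
        char buf[PATH_MAX];
        ssize_t n = readlink(path.c_str(), buf, sizeof(buf) - 1);
        if (n < 0) return path;  // not a symlink: use the path as-is
        buf[n] = '\0';
        return std::string(buf);
    }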
Fixes #2175
The current usage technically works by putting multiple different
repos into the same git directory. However, it is very slow as
Git tries very hard to find common commits between the two
repositories. If the two repositories are large (like Nixpkgs and
another long-running project), it is maddeningly slow.
This change busts the cache for existing deployments, but users
will be promptly repaid in per-repository performance.
TransferManager allocates a lot of memory (50 MiB by default), and it
might leak but I'm not sure about that. In any case it was causing
OOMs in hydra-queue-runner. So allocate only one TransferManager per
S3BinaryCacheStore.
Hopefully fixes https://github.com/NixOS/hydra/issues/586.
This callback is executed on a different thread, so exceptions thrown
from the callback are not caught:
Aug 08 16:25:48 chef hydra-queue-runner[11967]: terminate called after throwing an instance of 'nix::Error'
Aug 08 16:25:48 chef hydra-queue-runner[11967]: what(): AWS error: failed to upload 's3://nix-cache/19dbddlfb0vp68g68y19p9fswrgl0bg7.ls'
Therefore, just check the transfer status after it completes. Also
include the S3 error message in the exception.
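The upload path then looks roughly like this (a sketch against the AWS
C++ SDK transfer API; error wrapping simplified):

    #include <aws/transfer/TransferManager.h>
    #include <memory>
    #include <stdexcept>
    #include <string>

    void uploadChecked(
        const std::shared_ptr<Aws::Transfer::TransferManager> & transferManager,
        const Aws::String & bucket, const Aws::String & key,
        const Aws::String & localFile)
    {
        auto handle = transferManager->UploadFile(localFile, bucket, key,
            "application/octet-stream", Aws::Map<Aws::String, Aws::String>());
        handle->WaitUntilFinished();  // wait on our own thread, not in a callback
        if (handle->GetStatus() != Aws::Transfer::TransferStatus::COMPLETED)
            throw std::runtime_error("AWS error: failed to upload 's3://"
                + std::string(bucket.c_str()) + "/" + std::string(key.c_str())
                + "': " + std::string(handle->GetLastError().GetMessage().c_str()));
    }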
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
    CompressionError("error while compressing bzip2 file");
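Presumably the corrected code is simply:

if (ret != BZ_RUN_OK)
    throw CompressionError("error while compressing bzip2 file");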
It adds a new operation, cmdAddToStoreNar, that does the same thing as
the corresponding nix-daemon operation, i.e. call addToStore(). This
replaces cmdImportPaths, which has the major issue that it sends the
NAR first and the store path second, thus requiring us to store the
incoming NAR either in memory or on disk until we decide what to do
with it.
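Schematically, the receiving side can now stream (readValidPathInfo
and the surrounding names are illustrative, not the exact wire
protocol):

    // The store path and metadata arrive first, so we can decide what
    // to do before a single NAR byte is consumed...
    static void opAddToStoreNar(Store & store, Source & from)
    {
        ValidPathInfo info = readValidPathInfo(from);
        // ...and then the NAR is streamed straight into the store,
        // without buffering it in memory or on disk.
        store.addToStore(info, from);
    }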
For example, this reduces the memory usage of
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 267 MiB to 12 MiB.
Probably fixes #1988.
In EvalState::checkSourcePath, the path is checked against the list of
allowed paths first and later it's checked again *after* resolving
symlinks.
The resolving of the symlinks is done via canonPath, which also strips
out "../" and "./". However, after the canonicalisation the error
pointing out that the path is not allowed prints the resolved symlink
target in its message.
Even if we'd suppress the message, symlink targets could still be leaked
if the symlink target doesn't exist (in this case the error is thrown in
canonPath).
So instead, we now do canonPath() without symlink resolving first,
before even checking against the list of allowed paths, and only later
do the symlink resolving and check the allowed paths again.
The first call to canonPath() should get rid of all the "../" and "./",
so in theory the only way to leak a symlink is if the attacker is able
to put a symlink in one of the paths allowed by restricted evaluation
mode.
For the latter I don't think this is part of the threat model, because
if the attacker can write to that path, the attack vector is even
larger.
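The resulting flow, sketched (checkAllowed is a hypothetical helper;
canonPath's second argument enables symlink resolution):

    Path EvalState::checkSourcePath(const Path & path_)
    {
        // Phase 1: strip "./" and "../" *without* following symlinks,
        // then check against the allowed paths. Any error here can
        // only print the path as the user wrote it.
        Path abspath = canonPath(path_);
        checkAllowed(abspath);

        // Phase 2: resolve symlinks and check the result again, so a
        // symlink inside an allowed path cannot escape it.
        Path resolved = canonPath(abspath, true);
        checkAllowed(resolved);
        return resolved;
    }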
Signed-off-by: aszlig <aszlig@nix.build>
This particular `shell` variable wasn't used, since a new one was
declared in the only branch of the `if` that used a `shell` variable.
It could realistically confuse developers into thinking it could use
`$SHELL` in some situations.