Scanning of /proc/<pid>/{exe,cwd} was broken because '{memory:' was
prepended twice. Also, get rid of the whole '{memory:...}' thing
because it's unnecessary: we can just list the file in /proc directly.
This new structure makes more sense as there may be many sources rooting
the same store path. Many profiles can reference the same path but this
is even more true with /proc/<pid>/maps where distinct pids can and
often do map the same store path.
This implementation is also more efficient as the `Roots` map contains
only one entry per rooted store path.
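A minimal sketch of the resulting shape (type names are illustrative,
not the actual declaration):

  #include <map>
  #include <set>
  #include <string>

  using Path = std::string;

  // One entry per rooted store path; the value collects every source
  // (profile link, /proc/<pid>/exe, /proc/<pid>/maps entry, ...) that
  // roots it.
  using Roots = std::map<Path, std::set<std::string>>;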
It could happen that the local builder matched the system but lacked
some required features; previously this resulted in a failure.
The fix gracefully excludes the local builder from the set of available
builders for a derivation that requires features the local builder
lacks, so the derivation is built on remote builders only (as though it
had an incompatible system, like `aarch64-linux` when the local machine
is x86).
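For instance, a derivation can demand a feature via
requiredSystemFeatures; a minimal sketch (the feature name and builder
are chosen for illustration):

  derivation {
    name = "needs-kvm";
    system = "x86_64-linux";
    builder = "/bin/sh";
    args = [ "-c" "echo ok > $out" ];
    # if the local x86_64-linux builder doesn't advertise 'kvm', the
    # build is now dispatched to a remote builder that does
    requiredSystemFeatures = [ "kvm" ];
  }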
This allows using an arbitrary "provides" attribute from the specified
flake. For example:
nix build --flake nixpkgs packages.hello
(Maybe provides.packages should be used for consistency...)
We want to encourage a brave new world of hermetic evaluation for
source-level reproducibility, so flakes should not poke around in the
filesystem outside of their explicit dependencies.
Note that the default installation source remains impure in that it
can refer to mutable flakes, so "nix build nixpkgs.hello" still works
(and fetches the latest nixpkgs, unless it has been pinned by the
user).
A problem with pure evaluation is that builtins.currentSystem is
unavailable. For the moment, I've hard-coded "x86_64-linux" in the
nixpkgs flake. Eventually, "system" should be a flake function
argument.
For example, github:edolstra/dwarffs is more-or-less equivalent to
https://github.com/edolstra/dwarffs.git. It's a much faster way to get
GitHub repositories: it fetches tarballs rather than entire Git
repositories. It also allows fetching specific revisions by hash
without specifying a ref (e.g. a branch name):
github:edolstra/dwarffs/41c0c1bf292ea3ac3858ff393b49ca1123dbd553
This reverts commit a0ef21262f. This
doesn't work in 'nix run' and nix-shell because setns() fails in
multithreaded programs, and Boehm GC mark threads are uncancellable.
Fixes #2646.
Inside a derivation, exportReferencesGraph already provides a way to
dump the Nix database for a specific closure. On the command line,
--dump-db gave us the same information, but only for the entire Nix
database at once.
With this change, one can now pass a list of paths to --dump-db to get
the Nix database dumped for just those paths. (The user is responsible
for ensuring this is a closure, like for --export).
Among other things, this is useful for deploying a closure to a new
host without using --import/--export; one can use tar to transfer the
store paths, and --dump-db/--load-db to transfer the validity
information. This is useful if the new host doesn't actually have Nix
yet, and the closure that is being deployed itself contains Nix.
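A sketch of such a deployment (the closure root is hypothetical; as
noted above, the path list must form a closure):

  # on the source host
  $ root=$(nix-build '<nixpkgs>' -A hello)
  $ nix-store --query --requisites "$root" > paths
  $ tar cf closure.tar $(cat paths)
  $ nix-store --dump-db $(cat paths) > db.dump

  # on the target host, which need not have a populated store yet
  $ tar xf closure.tar -C /
  $ nix-store --load-db < db.dump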
Previously, plain derivation paths in the string context (e.g. those
that arose from builtins.storePath on a drv file, not those that arose
from accessing .drvPath of a derivation) were treated somewhat like
derivation paths derived from .drvPath, except their dependencies
weren't recursively added to the input set. With this change, such
plain derivation paths are simply treated as paths and added to the
source inputs set accordingly, simplifying context handling code and
removing the inconsistency. If drvPath-like behavior is desired, the
.drv file can be imported and then .drvPath can be accessed.
This is a backwards-incompatibility, but storePath is never used on
drv files within nixpkgs and almost never used elsewhere.
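To illustrate (the .drv path is hypothetical):

  # builtins.storePath on a .drv file now just adds the file itself as
  # a source input; for the old behaviour, import it and use .drvPath:
  let
    drv = import /nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-example.drv;
  in drv.drvPath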
SRI hashes (https://www.w3.org/TR/SRI/) combine the hash algorithm and
a base-64 hash. This allows more concise and standard hash
specifications. For example, instead of
  import <nix/fetchurl.nix> {
    url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
    sha256 = "5d22dad058d5c800d65a115f919da22938c50dd6ba98c5e3a183172d149840a4";
  };
you can write
  import <nix/fetchurl.nix> {
    url = https://nixos.org/releases/nix/nix-2.1.3/nix-2.1.3.tar.xz;
    hash = "sha256-XSLa0FjVyADWWhFfkZ2iKTjFDda6mMXjoYMXLRSYQKQ=";
  };
In fixed-output derivations, the outputHashAlgo is no longer mandatory
if outputHash specifies the hash (either as an SRI or in the old
"<type>:<hash>" format).
'nix hash-{file,path}' now print hashes in SRI format by default. I
also reverted them to use SHA-256 by default because that's what we're
using most of the time in Nixpkgs.
Suggested by @zimbatm.
Use the same output ordering and format everywhere.
This is such a common issue that we trade the single-line error message for
more readability.
Old message:
```
fixed-output derivation produced path '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com' with sha256 hash '08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm' instead of the expected hash '1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m'
```
New message:
```
hash mismatch in fixed-output derivation '/nix/store/d4nw9x2sy9q3r32f3g5l5h1k833c01vq-example.com':
wanted: sha256:1xzwnipjd54wl8g93vpw6hxnpmdabq0wqywriiwmh7x8k0lvpq5m
got: sha256:08y4734bm2zahw75b16bcmcg587vvyvh0n11gwiyir70divwp1rm
```
Without this information, the content-addressable state and hashes are
lost after the first request. This causes signatures to be required for
everything, even though the path could be verified without signing.
This enables using http for S3 requests, for debugging or for
implementations that don't have https configured. This is not a problem
for binary caches since they should not contain sensitive information.
Both package signatures and AWS auth already protect against tampering.
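For example (the bucket and endpoint are hypothetical; this assumes
the store URI exposes 'scheme' and 'endpoint' parameters):

  $ nix copy --to 's3://my-cache?endpoint=minio.example.com:9000&scheme=http' \
      /nix/store/aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa-hello-2.10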
The goal is to support libeditline AND libreadline and let the user
decide at compile time which one to use.
Add a compile-time option to use libreadline instead of
libeditline. If compiled against libreadline, completion functionality
is lost because of an incompatibility between libeditline's and
libreadline's completion functions. Completion with libreadline is
possible and can be added later.
To use libreadline instead of libeditline, the environment
variables 'EDITLINE_LIBS' and 'EDITLINE_CFLAGS' have to be set
during the ./configure step.
Example:
EDITLINE_LIBS="/usr/lib/x86_64-linux-gnu/libhistory.so /usr/lib/x86_64-linux-gnu/libreadline.so"
EDITLINE_CFLAGS="-DREADLINE"
The reason for this change is that, for example, on Debian three
different editline libraries already exist, but none of them is
compatible with the flavor used by Nix. My hope is that with this
change it will be easier to port Nix to systems that already have
libreadline available.
Previously, config would only be read from XDG_CONFIG_HOME. This change
allows reading config from additional directories, which enables e.g.
per-project binary caches or chroot stores with the help of direnv.
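A sketch of the direnv use case (assuming Nix now also consults
nix/nix.conf under each directory in XDG_CONFIG_DIRS; names are
illustrative):

  # .envrc
  export XDG_CONFIG_DIRS="$PWD/.config${XDG_CONFIG_DIRS:+:$XDG_CONFIG_DIRS}"

  # .config/nix/nix.conf
  substituters = https://cache.example.org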
The use of TransferManager has several issues, including that it
doesn't allow setting a Content-Encoding without a patch, and it
doesn't handle exceptions in worker threads (causing termination on
memory allocation failure).
Fixes #2493.
This commit partially reverts 48662d151b. When
copying from an older store (in my case a store running Nix 1.11.7), nix would
throw errors about there being no hash. This is fixed by recalculating the hash.
In structured-attributes derivations, you can now specify per-output
checks such as:
outputChecks."out" = {
# The closure of 'out' must not be larger than 256 MiB.
maxClosureSize = 256 * 1024 * 1024;
# It must not refer to C compiler or to the 'dev' output.
disallowedRequisites = [ stdenv.cc "dev" ];
};
outputChecks."dev" = {
# The 'dev' output must not be larger than 128 KiB.
maxSize = 128 * 1024;
};
Also fixed a bug in allowedRequisites that caused it to ignore
self-references.
This prints the references graph of the store paths in the GraphML
format [1]. The GraphML format is supported by several graph tools,
such as the Python NetworkX library or the Apache TinkerPop project.
[1] http://graphml.graphdrawing.org
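For example, assuming the new output is exposed through a --graphml
query flag:

  $ nix-store --query --graphml $(nix-build '<nixpkgs>' -A hello) > hello.graphml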
Since its superclass RemoteStore::Connection contains 'to' and 'from'
fields that refer to the file descriptor maintained in the subclass,
it was possible for the flush() call in Connection::~Connection() to
write to a closed file descriptor (or worse, a file descriptor now
referencing another file). So make sure that the file descriptor
survives 'to' and 'from'.
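The underlying C++ rule: a derived class's members are destroyed
before its base class's members, so an fd owned by the subclass dies
before the base's buffered streams flush. A self-contained
illustration (member names mirror the commit):

  #include <iostream>

  struct Member {
      const char *name;
      ~Member() { std::cout << name << " destroyed\n"; }
  };

  struct Connection { Member to{"to"}, from{"from"}; };
  struct ConnectionImpl : Connection { Member fd{"fd"}; };

  int main() { ConnectionImpl c; }
  // prints "fd destroyed" first, then "from"/"to": the fd is already
  // gone by the time 'to' and 'from' are torn down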
This is primarily because Derivation::{can,will}BuildLocally() depends
on attributes like preferLocalBuild and requiredSystemFeatures, but it
can't handle them properly because it doesn't have access to the
structured attributes.
E.g. __noChroot and allowedReferences now work correctly. We also now
check that the attribute type is correct. For instance, instead of
allowedReferences = "out";
you have to write
allowedReferences = [ "out" ];
Fixes #2453.
This meant that making a typo in an s3:// URI would cause a bucket to
be created. Also it didn't handle eventual consistency very well. Now
it's up to the user to create the bucket.
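For example, with the AWS CLI (the bucket name is hypothetical):

  $ aws s3 mb s3://my-nix-cache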
Tools which re-exec `$SHELL` or `$0` or `basename $SHELL` or even just
`bash` will otherwise get the non-interactive bash, providing a
broken shell for the same reasons described in
https://github.com/NixOS/nixpkgs/issues/27493.
Extends c94f3d5575
Calculating roots seems significantly slower on darwin compared to
linux. Checking for /profile/ links could show some false positives but
should still catch most issues.
* Don't wait forever for the client to remove data from the
buffer. This does mean that the buffer can grow without bounds
(e.g. when downloading is faster than writing to disk), but meh.
* Don't hold the state lock while calling the sink. The sink could
take any amount of time to process the data (in particular when it's
actually a coroutine), so we don't want to block the download
thread.
In particular this causes copyStorePath() from HttpBinaryCacheStore to
only start a download if needed. E.g. if the destination LocalStore
goes to sleep waiting for the path lock and another process creates
the path, then LocalStore::addToStore() will never read from the
source so we don't have to do the download.
‘geteuid’ gives us the user that the command is being run as,
including in setuid modes. By using geteuid to determine the user ID,
we can avoid the ‘sudo -i’ hack when upgrading Nix. So now, upgrading
Nix on macOS is as simple as:
$ sudo nix-channel --update
$ sudo nix-env -u
$ sudo launchctl stop org.nixos.nix-daemon
$ sudo launchctl start org.nixos.nix-daemon
or
$ sudo systemctl restart nix-daemon
It's pretty easy to unintentionally install a second version of nix
into the user profile when using a daemon install. In this case it
looks like nix was upgraded while the nix-daemon is probably still
running an older version.
A protocol mismatch can sometimes cause problems when using specific
features with an older daemon. For example:
Nix 2.0 changed the way files are copied to the store. The daemon is
backwards compatible and can still handle older clients; however, a
1.11 nix-daemon isn't forwards compatible.
This is already done by coerceToString(), provided that the argument
is a path (e.g. 'fetchGit ./bla'). It fixes the handling of URLs like
git@github.com:owner/repo.git. It breaks 'fetchGit "./bla"', but that
was never intended to work anyway and is inconsistent with other
builtin functions (e.g. 'readFile "./bla"' fails).
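To illustrate the resulting behaviour (paths and URLs are
placeholders):

  builtins.fetchGit ./bla                            # path: coerced, works
  builtins.fetchGit "git@github.com:owner/repo.git"  # URL string: works
  builtins.fetchGit "./bla"                          # fails, like readFile "./bla"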
E.g.
$ nix upgrade-nix
error: directory '/home/eelco/Dev/nix/inst/bin' does not appear to be part of a Nix profile
instead of
$ nix upgrade-nix
error: '/home/eelco/Dev/nix/inst' is not a symlink
Fix a 32-bit overflow that resulted in negative numbers being printed;
use fmt() instead of boost::format(); change -H to -h for consistency
with 'ls' and 'du'; make the columns narrower (since they can't be
bigger than 1024.0).
If the user has an object greater than 1024 yottabytes, it'll just display it as
N yottabytes instead of overflowing.
Swaps to use boost::format strings instead of std::setw and std::setprecision.
Using a 64-bit integer on 32-bit systems will come with a bit of a
performance overhead, but given that Nix doesn't use a lot of integers
compared to other types, I think the overhead is negligible, also
considering that 32-bit systems are in decline.
The biggest advantage, however, is that with a consistent integer size
across all platforms we are less likely to miss breakage caused by
platform-dependent integer widths. One example would be:
https://github.com/NixOS/nixpkgs/pull/44233
On Hydra it will evaluate, because the evaluator runs on a 64bit
machine, but when evaluating the same on a 32bit machine it will fail,
so using 64bit integers should make that consistent.
While the change of the type in value.hh is rather easy to do, we have a
few more options available for doing the conversion in the lexer:
* Via an #ifdef on the architecture and using strtol() or strtoll()
accordingly depending on which architecture we are. For the #ifdef
we would need another AX_COMPILE_CHECK_SIZEOF in configure.ac.
* Using istringstream, which would involve copying the value.
* As we're already using boost, lexical_cast might be a good idea.
Spoiler: I went for the last option, first of all because lexical_cast does
have an overload for const char* and second of all, because it doesn't
involve copying around the input string. Also, because istringstream
seems to come with a bigger overhead than boost::lexical_cast:
https://www.boost.org/doc/libs/release/doc/html/boost_lexical_cast/performance.html
The first method (still using strtol/strtoll) also wasn't something I
pursued further, because it is also locale-aware, which I doubt is what
we want, given that the regex for int is [0-9]+.
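A minimal sketch of the chosen conversion:

  #include <boost/lexical_cast.hpp>
  #include <cstdint>
  #include <iostream>

  int main()
  {
      // const char* overload: no intermediate std::string copy, and
      // lexical_cast is locale-independent, matching the [0-9]+ rule
      const char *yytext = "9223372036854775807";
      std::cout << boost::lexical_cast<int64_t>(yytext) << "\n";
  }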
Signed-off-by: aszlig <aszlig@nix.build>
Fixes: #2339
The profile present in PATH is not necessarily the actual profile
location. User profiles are generally added as $HOME/.nix-profile
in which case the indirect profile link needs to be resolved first.
/home/user/.nix-profile -> /nix/var/nix/profiles/per-user/user/profile
/nix/var/nix/profiles/per-user/user/profile -> profile-15-link
/nix/var/nix/profiles/per-user/user/profile-14-link -> /nix/store/hyi4kkjh3bwi2z3wfljrkfymz9904h62-user-environment
/nix/var/nix/profiles/per-user/user/profile-15-link -> /nix/store/6njpl3qvihz46vj911pwx7hfcvwhifl9-user-environment
To upgrade nix here we want /nix/var/nix/profiles/per-user/user/profile-16-link
instead of /home/user/.nix-profile-1-link. The latter is not a gcroot
and would be garbage collected, resulting in a broken profile.
Fixes #2175
The current usage technically works by putting multiple different
repos into the same git directory. However, it is very slow as
Git tries very hard to find common commits between the two
repositories. If the two repositories are large (like Nixpkgs and
another long-running project), it is maddeningly slow.
This change busts the cache for existing deployments, but users
will be promptly repaid in per-repository performance.
TransferManager allocates a lot of memory (50 MiB by default), and it
might leak but I'm not sure about that. In any case it was causing
OOMs in hydra-queue-runner. So allocate only one TransferManager per
S3BinaryCacheStore.
Hopefully fixes https://github.com/NixOS/hydra/issues/586.
This callback is executed on a different thread, so exceptions thrown
from the callback are not caught:
Aug 08 16:25:48 chef hydra-queue-runner[11967]: terminate called after throwing an instance of 'nix::Error'
Aug 08 16:25:48 chef hydra-queue-runner[11967]: what(): AWS error: failed to upload 's3://nix-cache/19dbddlfb0vp68g68y19p9fswrgl0bg7.ls'
Therefore, just check the transfer status after it completes. Also
include the S3 error message in the exception.
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
  if (ret != BZ_OK)
      CompressionError("error while compressing bzip2 file");
It adds a new operation, cmdAddToStoreNar, that does the same thing as
the corresponding nix-daemon operation, i.e. call addToStore(). This
replaces cmdImportPaths, which has the major issue that it sends the
NAR first and the store path second, thus requiring us to store the
incoming NAR either in memory or on disk until we decide what to do
with it.
For example, this reduces the memory usage of
$ nix copy --to 'ssh://localhost?remote-store=/tmp/nix' /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 267 MiB to 12 MiB.
Probably fixes #1988.
In EvalState::checkSourcePath, the path is checked against the list of
allowed paths first and later it's checked again *after* resolving
symlinks.
The resolving of the symlinks is done via canonPath, which also strips
out "../" and "./". However, after the canonicalisation, the error
pointing out that the path is not allowed prints the symlink target.
Even if we'd suppress the message, symlink targets could still be leaked
if the symlink target doesn't exist (in this case the error is thrown in
canonPath).
So instead, we now do canonPath() without symlink resolving first,
before even checking against the list of allowed paths, and only then
resolve symlinks and check the allowed paths again.
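A sketch of the new ordering (canonPath's second argument enables
symlink resolution; checkAllowed() stands in for the allowed-paths
test):

  Path p = canonPath(path);        // strip ./ and ../, keep symlinks
  checkAllowed(p);                 // can't leak a symlink target here
  Path q = canonPath(path, true);  // only now resolve symlinks
  checkAllowed(q);                 // and check the resolved path again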
The first call to canonPath() should get rid of all the "../" and "./",
so in theory the only way to leak a symlink is if the attacker is able
to put a symlink in one of the paths allowed by restricted evaluation
mode.
For the latter I don't think this is part of the threat model, because
if the attacker can write to that path, the attack vector is even
larger.
Signed-off-by: aszlig <aszlig@nix.build>
This particular `shell` variable wasn't used, since a new one was
declared in the only branch of the `if` statement that used a `shell`
variable.
It could realistically confuse developers thinking it could use `$SHELL`
under some situations.
forceValue() was called after a value had been copied, so only one of
the copies was forced while the other remained unevaluated.
This resulted in the same lazy value being evaluated more than once
(though the number of such hits is not big).
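The fix, in sketch form (variable names are illustrative): evaluate
once, then copy, so both destinations share the forced value:

  state.forceValue(*src);  // evaluate the thunk once...
  *dst = *src;             // ...then copy the already-forced value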
Not ready for this yet, causes the prompt to disappear in nix repl
and more generally can overwrite non-progress-bar messages.
This reverts commit 44de71a396.
Before:
$ command time nix-prefetch-url https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.17.6.tar.xz
1.19user 1.02system 0:41.96elapsed 5%CPU (0avgtext+0avgdata 182720maxresident)k
After:
1.38user 1.05system 0:39.73elapsed 6%CPU (0avgtext+0avgdata 16204maxresident)k
Note however that addToStore() can still take a lot of memory
(e.g. RemoteStore::addToStore() is constant space, but
LocalStore::addToStore() isn't; that's fixed by
c94b4fc7ee
though).
Fixes #1400.
For example, this allows you to run nix-daemon as a non-privileged
user:
eelco$ NIX_STATE_DIR=~/my-nix/nix/var nix-daemon --store ~/my-nix/
The NIX_STATE_DIR is still needed because settings.nixDaemonSocketFile
is not derived from settings.storeUri (and we can't derive it from the
store's state directory because we don't want to open the store in the
parent process).
As proposed in #1634 the `nix search` command could use some
improvements. Initially 0413aeb35d added
some basic sorting behavior using `std::map`, a next step would be an
improvement of the output.
This patch includes the following changes:
* Use `$PAGER` for outputs with `RunPager` from `shared.hh`:
The same behavior is defined for `nix-env --query`; furthermore, it
makes searching huge results way easier.
* Simplified result blocks:
The new output is heavily inspired by the output from `nox`, the first
line shows the attribute path and the derivation name
(`attribute path (derivation name)`) and the description in the second
line.
Slightly nicer behavior when updates are somewhat far apart
(during a long linking step, perhaps), ensuring things
don't appear unresponsive.
If we wait the maximum amount for the update,
don't bother waiting another 50ms (for rate-limiting purposes)
and just check if we should quit.
This also ensures we'll notice the request to quit within 1s
if quit is signalled but there is no update.
(I'm not sure if this happens or not)
I hate to make this such a large check, but the lack of documentation means we really have no idea what's allowed. All of the paths reported so far have been within ".app/Contents" directories. That appears to be a safe starting point. However, I would not be surprised to also find more paths that are disallowed, for instance in .framework or .bundle directories.
Fixes #2031. Fixes #2229.
This makes 'nix copy' and 'nix path-info' work on .drv store
paths. Removing special treatment of .drv files seems the most
future-proof approach given the possible removal of .drv files in the
future.
Note that 'nix build' will still build (rather than substitute) .drv
paths due to the unfortunate overloading in Store::buildPaths().
EvalState contains a few counters (e.g. nrValues) that increase
quickly enough that they end up being interpreted as pointers by the
garbage collector. Moving EvalState to the heap makes these counters
invisible to the garbage collector.
This reduces the max RSS doing 100 evaluations of
nixos.tests.firefox.x86_64-linux.drvPath from 455 MiB to 292 MiB.
Note: ideally, allocations would be much further up in the 64-bit
address space to reduce the odds of an integer being misinterpreted as
a pointer. Maybe we can use some linker magic to move the .bss segment
to a higher address.
This reduces the risk of object liveness misdetection. For example,
Glibc has an internal variable "mp_" that often points to a Boehm
object, keeping it alive unnecessarily. Since we don't store any
actual roots in global variables, we can just disable data segment
scanning.
With this, the max RSS doing 100 evaluations of
nixos.tests.firefox.x86_64-linux.drvPath went from 718 MiB to 455 MiB.
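In Boehm GC terms this amounts to something like the following sketch
(GC_set_no_dls must be called before GC_INIT):

  #include <gc/gc.h>

  int main()
  {
      GC_set_no_dls(1);  // don't register data (static) segments as roots
      GC_INIT();
      // allocations via GC_MALLOC() behave as before
  }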