Sometimes it's necessary to fetch a git repository at a specific revision
without knowing which ref contains the revision in question. An example
would be a Cargo.lock which only provides the URL and the revision when
using a git repository as build input.
However, it's considered bad practice to perform a full checkout of a
repository since this may take a lot of time and eat up a lot of disk
space. This patch makes a full checkout explicit by adding an `allRefs`
argument to `builtins.fetchGit` which fetches all refs when explicitly
set to `true`.
Closes #2409
A common pitfall when using e.g. `builtins.fetchGit` is the `fatal: not
a tree object` error when trying to fetch a revision of a git repository
that isn't on the `master` branch while no `ref` is specified.
To make the problem clear, I added a simple check for whether the
revision in question exists; if it doesn't, a more meaningful error
message is displayed:
```
nix-repl> builtins.fetchGit { url = "https://github.com/owner/myrepo"; rev = "<commit not on master>"; }
error: --- Error -------------------------------------------------------------------- nix
Cannot find Git revision 'bf1cc5c648e6aed7360448a3745bb2fe4fbbf0e9' in ref 'master' of repository 'https://gitlab.com/Ma27/nvim.nix'! Please make sure that the rev exists on the ref you've specified or add allRefs = true; to fetchGit.
```
Closes #2431
We embrace virtual inheritance the rest of the way, and get rid of the
`assert(false)` 0-param constructors.
We also list config base classes first, so the constructor order is
always:
1. all the configs
2. all the stores
Each in the same order
Move clearValue inside Value
mkInt instead of setInt
mkBool instead of setBool
mkString instead of setString
mkPath instead of setPath
mkNull instead of setNull
mkAttrs instead of setAttrs
mkList instead of setList*
mkThunk instead of setThunk
mkApp instead of setApp
mkLambda instead of setLambda
mkBlackhole instead of setBlackhole
mkPrimOp instead of setPrimOp
mkPrimOpApp instead of setPrimOpApp
mkExternal instead of setExternal
mkFloat instead of setFloat
Add a note that the static mk* functions should be removed eventually
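In use, the renamed constructors look roughly like this (a hedged sketch; the exact argument types and the old names in Nix may differ slightly):
```cpp
// Hedged sketch, not the exact Nix code.
Value v;
v.mkInt(42);            // was v.setInt(42)
v.mkBool(true);         // was v.setBool(true)
v.mkString("example");  // was v.setString("example")
v.mkNull();             // was v.setNull()
```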
PRs #4370 and #4348 had a bad interaction in that the second broke the first
one in a non-trivial way.
The issue was that since #4348 the logic for detecting whether a
derivation output is already built requires some logic that was specific
to the `LocalStore`.
It happens though that most of this logic could be upstreamed to any `Store`,
which is what this commit does.
This ignore was here because `queryPartialDrvOutputMap` was used both
1. as a cache to avoid having to re-read the derivation (when gc-ing for
example), and
2. as the source of truth for ca realisations
Use case 2 required it to work even when the derivation
wasn't there anymore (see https://github.com/NixOS/nix/issues/4138).
However, this use case is now handled by `queryRealisation`, meaning
that we can safely error out if the derivation isn't there anymore.
`buildPaths` can be called even for stores where it's not defined in case it's
bound to be a no-op.
The “no-op detection” mechanism was only detecting the case where `buildPaths`
was called on a set of (non-drv) paths that were already present on the store.
This commit extends this mechanism to also detect the case where `buildPaths`
is called on a set of derivation outputs which are already built on the store.
This only works with the ca-derivations flag. It would be possible to
extend this to also work without it, but that would add quite a bit of
complexity, and it isn't used without it anyway.
Extend `FSAccessor::readFile` to allow not checking that the path is a
valid one, and rewrite `readInvalidDerivation` using this extended
`readFile`.
Several places in the code use `readInvalidDerivation`, either because
they need to read a derivation that has been written in the store but
not registered yet, or more generally to prevent a deadlock because
`readDerivation` tries to lock the state, so can't be called from a
place where the lock is already held.
However, `readInvalidDerivation` implicitly assumes that the store is a
`LocalFSStore`, which isn't always the case.
The concrete motivation for this is that it's required for `nix copy
--from someBinaryCache` to work, which is tremendously useful for the
tests.
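A rough sketch of the extended interface (the flag name and default value are assumptions, not necessarily the exact Nix signature):
```cpp
#include <string>

// Hedged sketch, not the exact Nix declaration: readFile() gains a flag
// that lets callers skip the "is this a valid store path?" check.
struct FSAccessor
{
    virtual ~FSAccessor() = default;
    virtual std::string readFile(const std::string & path,
        bool requireValidPath = true) = 0;
};

// readInvalidDerivation() can then read a .drv that has been written to the
// store but not registered yet, on any store that provides an accessor:
//
//     auto contents = getFSAccessor()->readFile(path, /* requireValidPath */ false);
```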
Because of an over-eager refactoring, `addTextToStore` used to throw an
error because the input wasn't a valid NAR.
Partially revert that refactoring and wrap the text into a proper NAR
(using `dumpString`) to make this method work again.
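Roughly, the wrapping amounts to something like the fragment below (hedged; the wrapper name is illustrative and this only builds inside the Nix tree):
```cpp
#include "archive.hh"    // dumpString()
#include "serialise.hh"  // Sink
#include <string>

// Serialise `text` as a NAR containing a single regular file and feed it to
// the code path that expects NAR input.
void addTextAsNar(const std::string & text, nix::Sink & narSink)
{
    nix::dumpString(text, narSink);
}
```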
Rather than storing the derivation outputs as `drvPath!outputName` internally,
store them as `drvHashModulo!outputName` (or `outputHash!outputName` for
fixed-output derivations).
This makes the storage slightly more opaque, but enables an earlier
cutoff in cases where a fixed-output dependency changes (but keeps the
same output hash) − same as what we already do for input-addressed
derivations.
Add a new table for tracking the derivation output mappings.
We used to hijack the `DerivationOutputs` table for that, but (despite its
name) it isn't really a good fit:
- Its entries depend on the drv being a valid path, making it play badly with
garbage collection and preventing us from copying a drv output without copying
the whole drv closure too;
- It doesn't guarantee that the output path exists;
By using a different table, we can experiment with a different schema better
suited for tracking the output mappings of CA derivations.
(incidentally, this also fixes #4138)
For each known realisation, store:
- its output
- its output path
This comes with a set of needed changes:
- New `realisations` module declaring the types needed for describing
these mappings
- New `Store::registerDrvOutput` method registering all the needed information
about a derivation output (also replaces `LocalStore::linkDeriverToPath`)
- New `Store::queryRealisation` method to retrieve this information for a
given derivation output (a rough sketch of these additions follows this list)
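As referenced above, the rough shape of these additions (a hedged sketch; the exact field names and method signatures in Nix may differ):
```cpp
// Hedged sketch, not the exact Nix definitions.
struct DrvOutput
{
    Hash drvHash;             // or the output hash for fixed-output derivations
    std::string outputName;
};

struct Realisation
{
    DrvOutput id;
    StorePath outPath;        // the store path this output maps to
};

// On the Store interface, roughly:
//   virtual void registerDrvOutput(const Realisation & info);
//   virtual std::optional<const Realisation> queryRealisation(const DrvOutput & id);
```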
This introduces some redundancy on the remote-store side between
`wopQueryDerivationOutputMap` and `wopQueryRealisation`.
However, we might need to keep both (regardless of backwards compat)
because we sometimes need the information for all the outputs of a
derivation (where `wopQueryDerivationOutputMap` is handy), but not all
stores can implement it, because listing all the outputs of a derivation
isn't really possible for binary caches where the server doesn't allow
listing a directory.
In `nixStable` (2.3.7 to be precise) it's possible to connect to stores
using an IPv6 address:
nix ping-store --store ssh://root@2001:db8::1
This is also useful for `nixops(1)` where you could specify an IPv6
address in `deployment.targetHost`.
However, this behavior is broken on `nixUnstable` and fails with the
following error:
$ nix store ping --store ssh://root@2001:db8::1
don't know how to open Nix store 'ssh://root@2001:db8::1'
This happened because `openStore` from `libstore` uses the `parseURL`
function from `libfetchers` which expects a valid URL as defined in
RFC2732. However, this is unsupported by `ssh(1)`:
$ nix store ping --store 'ssh://root@[2001:db8::1]'
cannot connect to 'root@[2001:db8::1]'
This patch allows both ways of specifying a store (`root@2001:db8::1` and
also `root@[2001:db8::1]`), since the latter is useful for passing query
parameters to the remote store.
In order to achieve this, the following changes were made:
* The URL regex from `url-parts.hh` now allows an IPv6 address in the
form `2001:db8::1` and also `[2001:db8::1]`.
* In `libstore`, a new function named `extractConnStr` ensures that a
proper URL is passed to e.g. `ssh(1)` (see the sketch after this list):
* If a URL looks like either `[2001:db8::1]` or `root@[2001:db8::1]`,
the brackets will be removed using a regex. No additional validation
is done here as only strings parsed by `parseURL` are expected.
* In any other case, the string will be left untouched.
* The rules above only apply for `LegacySSHStore` and `SSHStore` (a.k.a
`ssh://` and `ssh-ng://`).
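For illustration only (not the actual `extractConnStr` implementation), the bracket-stripping idea can be sketched like this:
```cpp
#include <iostream>
#include <regex>
#include <string>

// Remove one pair of square brackets around an IPv6 literal so that the
// resulting string can be handed to ssh(1) as-is.
std::string stripBrackets(const std::string & host)
{
    static const std::regex bracketed(R"(\[([0-9a-fA-F:]+)\])");
    return std::regex_replace(host, bracketed, "$1");
}

int main()
{
    std::cout << stripBrackets("root@[2001:db8::1]") << "\n"; // root@2001:db8::1
    std::cout << stripBrackets("root@2001:db8::1") << "\n";   // unchanged
}
```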
Unresolved questions:
* I'm not really sure whether we want to allow both variants of IPv6
addresses in the URL parser. However it should be noted that both seem
to be possible according to RFC2732:
> This document incudes an update to the generic syntax for Uniform
> Resource Identifiers defined in RFC 2396 [URL]. It defines a syntax
> for IPv6 addresses and allows the use of "[" and "]" within a URI
> explicitly for this reserved purpose.
* Currently, it's not possible to specify a port number after the
hostname; however, this doesn't seem to be supported by the URL parser
either. Hence, it's probably out of scope here.
The `DerivationGoal` has a variable storing the “final” derivation
output paths that is used (amongst other things) to fill the environment
for the post build hook. However this variable wasn't set when the
build-hook is used, causing a crash when both hooks are used together.
Fix this by setting this variable (from the information in the db) after a run
of the post build hook.
This reverts commit 1b1e076033.
Using `queryPartialDerivationOutputMap` assumes that the derivation
exists locally which isn't the case for remote builders.
Since 0744f7f, it is now useful to have cache.nixos.org in substituters
even if /nix/store is not the Nix store directory. This can always be
overridden via configuration, though.
When running universal binaries like /bin/bash, Darwin XNU will choose
which architecture of the binary to use based on "binary preferences".
This change sets that to the current platform for aarch64 and x86_64
builds. In addition it now uses posix_spawn instead of the usual
execve. Note, that this does not prevent the other architecture from
being run, just advises which to use.
Unfortunately, posix_spawnattr_setbinpref_np does not appear to be
inherited by child processes in x86_64 Rosetta 2 translations, meaning
that this will not always work as expected.
For example:
{
  arm = derivation {
    name = "test";
    system = "aarch64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = arm64 ]
      /usr/bin/touch $out
    '') ];
  };
  rosetta = derivation {
    name = "test";
    system = "x86_64-darwin";
    builder = "/bin/bash";
    args = [ "-e" (builtins.toFile "test" ''
      set -x
      /usr/sbin/sysctl sysctl.proc_translated
      /usr/sbin/sysctl sysctl.proc_native
      [ "$(/usr/bin/arch)" = i386 ]
      echo It works!
      /usr/bin/touch $out
    '') ];
  };
}
`arm' fails on x86_64-compiled Nix, but `arm' and `rosetta' succeed on
aarch64-compiled Nix. I suspect there is a way to fix this since:
$ /usr/bin/arch -arch x86_64 /bin/bash \
-c '/usr/bin/arch -arch arm64e /bin/bash -c /usr/bin/arch'
arm64
seems to work correctly. We may need to wait for Apple to update
system_cmds in opensource.apple.com to find out how though.
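For reference, a minimal sketch of setting a binary preference with posix_spawn on macOS (simplified and hedged; the real change lives in Nix's spawning code, handles errors, and chooses the CPU type from the derivation's system):
```cpp
#include <spawn.h>
#include <mach/machine.h>

extern char **environ;

// Simplified sketch (macOS only, minimal error handling): advise XNU to
// prefer the arm64 slice of a universal binary; use CPU_TYPE_X86_64 for
// x86_64 builds instead. The preference is advisory, not enforced.
static pid_t spawnPreferringArm64(const char * path, char * const argv[])
{
    posix_spawnattr_t attr;
    posix_spawnattr_init(&attr);

    cpu_type_t pref = CPU_TYPE_ARM64;
    size_t count = 0;
    posix_spawnattr_setbinpref_np(&attr, 1, &pref, &count);

    pid_t pid = -1;
    posix_spawn(&pid, path, nullptr, &attr, argv, environ);

    posix_spawnattr_destroy(&attr);
    return pid;
}
```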
macOS systems with ARM64 can utilize a translation layer at
/Library/Apple/usr/libexec/oah to run x86_64 binaries. This change
makes Nix recognize that and adds it to "extra-platforms". Note that there
are two cases here since Nix could be built for either x86_64 or
aarch64. In either case, we can switch to the other architecture.
Unfortunately there is not a good way to prevent aarch64 binaries from
being run in x86_64 contexts or vice versa - programs can always
execute programs for the other architecture.
If the build closure contains some CA derivations, then we can't know
ahead of time that we won't build anything, as early cutoff might come in
at a later stage.
Using fallocate() to preallocate file space does more harm than good:
- it breaks compression on btrfs
- it has been called "not the right thing to do" by xfs developers
(because the delayed allocation that most filesystems implement leads to smarter
allocation than what the filesystem has to do if we fallocate files upfront)
Without setting HGPLAIN, the user's environment leaks into
hg invocations, which means that the output may not be in the
expected format.
HGPLAIN is the Mercurial-recommended solution for this, in that
it's intended for use by scripts and programs which are looking
to parse Mercurial's output in a consistent manner.
This fixes a bug I encountered where `nix-store -qR` will deadlock when
the `--include-outputs` flag is passed and `max-connections=1`.
The deadlock occurs because `RemoteStore::queryDerivationOutputs` takes
the only connection from the connection pool and uses it to check the
daemon version. If the version is new enough, it calls
`Store::queryDerivationOutputs`, which eventually calls
`RemoteStore::queryPartialDerivationOutputMap`, where we take another
connection from the connection pool to check the version again. Because
we still haven't released the connection from the caller, this waits for
a connection to be available, causing a deadlock.
This diff solves the issue by using `getProtocol` to check the protocol
version in the caller `RemoteStore::queryDerivationOutputs`, which
immediately frees the connection back to the pool before returning the
protocol version. That way we've already freed the connection by the
time we call `RemoteStore::queryPartialDerivationOutputMap`.
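A self-contained illustration of the deadlock pattern and of the fix (generic, not the actual Nix code; the size-one connection pool is modelled as a plain mutex):
```cpp
#include <iostream>
#include <mutex>

std::mutex connectionPool;   // stands in for a connection pool with max-connections=1

int daemonVersion()
{
    std::lock_guard<std::mutex> conn(connectionPool);  // borrow a connection
    return 0x16;
}

int queryOutputsBuggy()
{
    std::lock_guard<std::mutex> conn(connectionPool);  // caller keeps the connection...
    return daemonVersion();                            // ...and asks for another: deadlock
}

int queryOutputsFixed()
{
    int version = daemonVersion();   // like getProtocol(): borrow and give back immediately
    std::lock_guard<std::mutex> conn(connectionPool);   // now this can succeed
    return version;
}

int main()
{
    std::cout << std::hex << queryOutputsFixed() << "\n";
    // queryOutputsBuggy() would hang forever here.
}
```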
Fixes:
$ nix build --store /tmp/nix /home/eelco/Dev/patchelf#hydraJobs.build.x86_64-linux
warning: Git tree '/home/eelco/Dev/patchelf' is dirty
error: --- RestrictedPathError ------------------------------------------------------------------------------------------- nix
access to path '/tmp/nix/nix/store/xmkvfmffk7xfnazykb5kx999aika8an4-source/flake.nix' is forbidden in restricted mode
(use '--show-trace' to show detailed location information)
Until now, it was not possible to substitute missing paths from e.g.
`https://cache.nixos.org` on a remote server when building on it using
the new `ssh-ng` protocol.
This is because every store implementation except legacy `ssh://`
ignores the substitution flag passed to `Store::queryValidPaths` while
the `legacy-ssh-store` substitutes the remote store using
`cmdQueryValidPaths` when the remote store is opened with `nix-store
--serve`.
This patch slightly modifies the daemon protocol to allow passing an
integer value suggesting whether to substitute missing paths during
`wopQueryValidPaths`. To implement this on the daemon-side, the
substitution logic from `nix-store --serve` has been moved into a
protected method named `Store::substitutePaths` which gets currently
called from `LocalStore::queryValidPaths` and `Store::queryValidPaths`
if `maybeSubstitute` is `true`.
Fixes #2770
Crucially this introduces BoehmGCStackAllocator, but it also
adds a bunch of wiring to avoid making libutil depend on bdw-gc.
Part of the solutions for #4178, #4200
This removes the extra-substituters and extra-sandbox-paths settings
and instead makes every array setting extensible by setting
"extra-<name> = <value>" in the configuration file or passing
"--<name> <value>" on the command line.
This makes it even clearer which of the two hashes was specified in the
Nix files. Some may think that "wanted" and "got" are obvious, but
"got" could mean "got in the Nix file" and "wanted" could mean "want to see in the Nix file".
This makes it possible to have per-project configuration in flake.nix,
e.g. binary caches and other stuff:
nixConfig.bash-prompt-suffix = "[1;35mngi# [0m";
nixConfig.substituters = [ "https://cache.ngi0.nixos.org/" ];
This is primarily useful if you're hacking simultaneously on a package
and one of its dependencies. E.g. if you're hacking on Hydra and Nix,
you would start a dev shell for Nix, and then a dev shell for Hydra as
follows:
$ nix develop \
--redirect .#hydraJobs.build.x86_64-linux.nix ~/Dev/nix/outputs/out \
--redirect .#hydraJobs.build.x86_64-linux.nix.dev ~/Dev/nix/outputs/dev
(This assumes hydraJobs.build.x86_64-linux has a passthru.nix
attribute. You can also use a store path.)
This causes all references in the environment to those store paths to
be rewritten to ~/Dev/nix/outputs/{out,dev}. Note: unfortunately, you
may need to set LD_LIBRARY_PATH=~/Dev/nix/outputs/out/lib because
Nixpkgs' ld-wrapper only adds -rpath entries for -L flags that point
to the Nix store.
Fix #3975: currently, if Ctrl-C is pressed during a phase, the interactive subshell
is not exited. Removing --rcfile when --phase is present makes bash
non-interactive.
Make nix output completions in the form `completion\tdescription`.
This can't be used by bash (afaik), but other shells like zsh or fish
can display it alongside the completion choices.
Observed on Centos 7 when user namespaces are disabled:
DerivationGoal::startBuilder() throws an exception, ~DerivationGoal()
waits for the child process to exit, but the child process hangs
forever in drainFD(userNamespaceSync.readSide.get()) in
DerivationGoal::runChild(). Not sure why the SIGKILL doesn't get
through.
Issue #4092.
When running `nix build -L` it can be fairly hard to read the output if
the build program intentionally renders whitespace on the left. A
typical example is `g++` displaying compilation errors.
With this patch, the whitespace on the left is retained to make the log
more readable:
```
foo> no configure script, doing nothing
foo> building
foo> foobar.cc: In function 'int main()':
foo> foobar.cc:5:5: error: 'wrong_func' was not declared in this scope
foo> 5 | wrong_func(1);
foo> | ^~~~~~~~~~
error: --- Error ------------------------------------------------------------------------------------- nix
error: --- Error --- nix-daemon
builder for '/nix/store/i1q76cw6cyh91raaqg5p5isd1l2x6rx2-foo-1.0.drv' failed with exit code 1
```
The registry targets generally follow a URL formatting schema with
support for a query parameter of "?dir=subpath" to specify a sub-path
location below the URL root.
Alternatively, an absolute path can be specified. This specification
mode accepts the query parameter but ignores/drops it. It would
probably be better to either (a) disallow the query parameter for the
path form, or (b) recognize the query parameter and add to the path.
This patch implements (b) for consistency, and to make it easier for
tooling that might switch between a remote git reference and a local
path reference.
See also issue #4050.
This change provides support for using access tokens with other
instances of GitHub and GitLab beyond just github.com and
gitlab.com (especially company-specific or foundation-specific
instances).
This change also provides the ability to specify the type of access
token being used, where different types may have different handling,
based on the forge type.
The std::optional had redundant checks for whether it had a value.
An object is emplaced either way, so it can be dereferenced
without repeating the value check.
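A generic illustration of the simplification (not the specific Nix call site):
```cpp
#include <iostream>
#include <optional>
#include <string>

int main()
{
    std::optional<std::string> opt;
    bool useFallback = true;

    // Before (sketch): both branches emplace, yet the code still checked:
    //   if (useFallback) opt.emplace("fallback"); else opt.emplace("primary");
    //   if (opt) std::cout << *opt << "\n";   // the check can never fail here

    // After: an object is emplaced either way, so just dereference directly.
    if (useFallback) opt.emplace("fallback"); else opt.emplace("primary");
    std::cout << *opt << "\n";
}
```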
After 0ed946aa61, the max-jobs setting (-j/--max-jobs)
stopped working.
The reason was that nrLocalBuilds (which is compared against maxBuildJobs to figure
out whether the limit is reached) is not yet incremented when tryBuild
is started; so the solution is to move the check to tryLocalBuild.
Closes https://github.com/nixos/nix/issues/3763
We don't need it yet, but we could/should in the future, and it's a
cost-free change since we already have the reference. I like it.
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
Rather than showing an integer as the default, instead show the boolean
referenced in the description.
The nix.conf.5 manpage used to show "default: 0", which is unnecessarily
opaque and confusing (doesn't 0 mean false, even though the default is
true?); now it properly shows that the default is true.
pkgs.fetchurl supports an executable argument, which is especially nice
when downloading a large executable. This patch adds the same option to
nix-prefetch-url.
I have tested this to work on the simple case of prefetching a little
executable:
1. nix-prefetch-url --executable https://my/little/script
2. Paste the hash into a pkgs.fetchurl-based package, script-pkg.nix
3. Delete the output from the store to avoid any misidentified artifacts
4. Realise the package script-pkg.nix
5. Run the executable
I repeated the above while using --name, as well.
I suspect --executable would have no meaningful effect if combined with
--unpack, but I have not tried it.
Since 108debef6f we allow a
`url`-attribute for the `github`-fetcher to fetch tarballs from
self-hosted `gitlab`/`github` instances.
However, it's not used when defining e.g. a flake input
foobar = {
  type = "github";
  url = "gitlab.myserver";
  /* ... */
}
and breaks with an evaluation-error:
error: --- Error --------------------------------------nix
unsupported input attribute 'url'
(use '--show-trace' to show detailed location information)
This patch allows flake-inputs to be fetched from self-hosted instances
as well.
`nix flake info` calls the github 'commits' API, which requires
authorization when the repository is private. Currently this request
fails with a 404.
This commit adds an authorization header when calling the 'commits' API.
It also changes the way that the 'tarball' API authenticates, moving the
user's token from a query parameter into the Authorization header.
The query parameter method was recently deprecated and will be disallowed
in November 2020. Using it today triggers a warning email.
It is apparently required for using `toJSONObject()`, which we do inside
the header file (because it's in a template).
This was accidentally working when building Nix itself (presumably because
`config.hh` was always included after `nlohmann/json.hpp`) but caused a
(pretty dirty) build failure in the perl bindings package.
Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
| |
v v
SubStoreConfig----->SubStore
| |
v v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent the dreaded diamond problem).
The advantage of this architecture is that we can now introspect the configuration of a store without having to instantiate the store itself.
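A condensed sketch of that shape in C++ (names follow the diagram above; illustrative only, not the actual Nix declarations):
```cpp
// Illustrative sketch following the diagram above, not the actual Nix code.
struct StoreConfig
{
    virtual ~StoreConfig() = default;
    // settings and their documentation live here
};

struct Store : public virtual StoreConfig
{
    // effectful store operations live here
};

struct SubStoreConfig : virtual StoreConfig { /* extra settings */ };
struct SubStore : virtual SubStoreConfig, virtual Store { /* implementation */ };

struct SubSubStoreConfig : virtual SubStoreConfig { };
struct SubSubStore : virtual SubSubStoreConfig, virtual SubStore { };

// Because the configs form their own hierarchy, a SubSubStoreConfig can be
// instantiated on its own to introspect its settings without ever
// constructing (and connecting) a SubSubStore.
```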
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialize the
C++ data structures.
The goal behind this is that we can create “dummy” instances of each
store to query static properties about it (the parameters it accepts,
for example).
Directly register the store classes rather than a function to build an
instance of them.
This makes it possible to introspect static members of the class or to
choose different ways of instantiating them.
Add a fallback path in `queryPartialDerivationOutputMap` for daemons
that don't support it.
Also upstreams a couple of methods from `SSHStore` to `RemoteStore` as this
is needed to handle the fallback path.
Otherwise the result of the printing can't be parsed back correctly by
Nix (because the unescaped `${` will be parsed as the beginning of an
antiquotation).
Fix #3989
When deploying a Hydra instance with current Nix master, most builds
would not run because of errors like this:
queue monitor: error: --- Error --- hydra-queue-runner
error: --- UsageError --- nix-daemon
not a content address because it is not in the form '<prefix>:<rest>': /nix/store/...-somedrv
The last error message is from parseContentAddress, which expects a
colon-separated string, however what we got here is a store path.
Looking at the worker protocol, the following message sent to the Nix
daemon caused the error above:
0x1E -> wopQuerySubstitutablePathInfos
0x01 -> Number of paths
0x16 -> Length of string
"/nix/store/...-somedrv"
0x00 -> Length of string
""
Looking at writeStorePathCAMap, the store path is indeed the first field
that's transmitted. However, readStorePathCAMap expects it to be the
*second* field *on my machine*, since expression evaluation order is a
classic form of unspecified behaviour[1] in C++.
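A self-contained illustration of the pitfall (generic; not the actual Nix code): the order in which the arguments of a single call are evaluated is unspecified, so two reads from the same stream passed as arguments may happen in either order.
```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>

// Reads the next whitespace-separated token from the stream,
// standing in for "read the next field from the wire".
static std::string readField(std::istream & in)
{
    std::string s;
    in >> s;
    return s;
}

int main()
{
    std::istringstream wire("/nix/store/...-somedrv \"\"");
    std::map<std::string, std::string> m;

    // Unspecified which readField() runs first: the key may end up being the
    // store path or the (empty) content address, depending on the compiler.
    m.emplace(readField(wire), readField(wire));

    for (auto & [k, v] : m)
        std::cout << "key=" << k << " value=" << v << "\n";
}
```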
This has been introduced in https://github.com/NixOS/nix/pull/3689,
specifically in commit 66a62b3189.
[1]: https://en.wikipedia.org/wiki/Unspecified_behavior#Order_of_evaluation_of_subexpressions
Signed-off-by: aszlig <aszlig@nix.build>
The change in 626200713b didn't account
for when the number of auto arguments is bigger than the number of
formal arguments. This causes the following:
$ nix-instantiate --eval -E '{ ... }@args: args.foo' --argstr foo foo
nix-instantiate: src/libexpr/attr-set.hh:55: void nix::Bindings::push_back(const nix::Attr&): Assertion `size_ < capacity_' failed.
Aborted (core dumped)
If we resolve using the known path of a derivation whose output we
didn't have, we previously blew up. Now we just fail gracefully,
returning the map of all outputs unknown.
This means profiles outside of /nix/var/nix/profiles don't get
garbage-collected. It also means we don't need to scan
/nix/var/nix/profiles for GC roots anymore, except for compatibility
with previously existing generations.
This was broken in 50f13b06fb. Once
again it turns out that putting a bool in a std::variant is a bad
idea, since pointers get silently cast to them...
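A small self-contained example of the hazard (generic, not the exact variant used in Nix; the exact overload-resolution outcome depends on the language and library version):
```cpp
#include <iostream>
#include <string>
#include <variant>

int main()
{
    // The root of the problem: any pointer converts to bool implicitly.
    const char * s = "definitely not a bool";
    bool oops = s;          // compiles without complaint, oops == true

    // With a bool alternative in a variant the same conversion can kick in
    // during overload resolution; depending on the compiler/standard mode
    // this may pick bool (index 1) instead of std::string (index 0).
    std::variant<std::string, bool> v = s;
    std::cout << oops << " " << v.index() << "\n";
}
```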
The command-line options --arg and --argstr, which a number of CLI
commands use to pass arguments to top-level functions in files, go
through the same code path as auto-calling top-level functions with
their default arguments. However, that code path only passed the arguments
that were *explicitly* mentioned in the formals of the function, so in the
case of an as-pattern with an ellipsis (e.g. args @ { ... }) extra passed
arguments would get omitted. This fixes that by instead passing *all*
specified auto args when the function has an ellipsis.
Fixes #598
When the log.showSignature git setting is enabled, the output of
"git log" contains signature verification information in addition to the
timestamp GitInputScheme::fetch wants:
$ git log -1 --format=%ct
gpg: Signature made Sat 07 Sep 2019 02:02:03 PM PDT
gpg: using RSA key 0123456789ABCDEF0123456789ABCDEF01234567
gpg: issuer "user@example.com"
gpg: Good signature from "User <user@example.com>" [ultimate] 1567890123
1567890123
For folks that had log.showSignature set, this caused all nix operations
on flakes to fail:
$ nix build
error: stoull
Evidently this was never implemented because Nix switched to using
`buildDerivation` exclusively before `build-remote.pl` was rewritten.
The `nix-copy-ssh` test (already) tests this.
Include a long comment explaining the policy. Perhaps this can be moved
to the manual at some point in the future.
Also bump the daemon protocol minor version, so clients can tell whether
`wopBuildDerivation` supports trustless CA derivation building. I hope
to take advantage of this in a follow-up PR to support trustless remote
building with the minimal sending of derivation closures.
This seems more correct. It also means one can specify the features a
store should support with --store and remote-store=..., which is useful.
I use this to clean up the build remotes test.
Before, processConnection wanted to know a user name and user id, and
`nix-daemon --stdio`, when it isn't proxying to an underlying daemon,
would just assume "root" and 0. But `nix-daemon --stdio` (no proxying)
shouldn't make guesses about who holds the other end of its standard
streams.
Now processConnection takes an "auth hook", so `nix-daemon` can provide
the appropriate policy and daemon.cc doesn't need to know or care what
it is.
Thanks @regnat for catching one of them. The other follows for many of
the same reasons. I'm fine with fixing others on a need-to-fix basis,
provided there are no regressions.
When having a message like `waiting for a machine to build X` and
building with `nix build -L`, the log prefix is always colored yellow[1]
on small terminal widths, as everything (including the ANSI color reset) is
stripped away.
To work around that problem, this patch explicitly adds an `ANSI_NORMAL`
to the end of the line.
[1] https://imgur.com/a/FjtJOk3
Fixes #3872.
This is a bit hacky. Ideally we would automatically re-evaluate the
failed attribute iff we need to print the error message (so in
commands like 'nix search' we wouldn't re-evaluate because we're
suppressing errors).