The new interface we offer provides a way of getting all the
DerivationOutputs with their storePaths directly, based on the
observation that this is the most common use case.
It's a tiny function which:
- is hardly worth abstracting over, and is only used once;
- won't work once we get CA drvs.
I rewrote the one call site to be forward compatible with CA
derivations, and also potentially more performant: instead of reading in
the derivation, it can just consult the SQLite DB in the common case.
We've added the variant to `DerivationOutput` to support them, but made
`DerivationOutput::path` partial to avoid actually implementing them.
With this change, we can all collaborate on "just" removing
`DerivationOutput::path` calls to implement CA derivations.
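For illustration, a minimal sketch of the shape this takes; the type and
member names below are simplified stand-ins, not the exact API:

  #include <optional>
  #include <stdexcept>
  #include <string>
  #include <variant>

  using StorePath = std::string;  // stand-in for the real StorePath type

  // An input-addressed output knows its store path up front.
  struct OutputInputAddressed { StorePath path; };

  // A floating content-addressed output's path is only known after the
  // build, so there is no path to hand out here.
  struct OutputCAFloating { std::string hashAlgo; };

  struct DerivationOutput {
      std::variant<OutputInputAddressed, OutputCAFloating> raw;

      // Partial: throws for CA outputs instead of pretending to know the path.
      StorePath path() const {
          if (auto p = std::get_if<OutputInputAddressed>(&raw))
              return p->path;
          throw std::logic_error("output path is not known for CA derivations");
      }

      // What the new "outputs with store paths" interface can build on:
      // callers that can cope with "not known yet" get an optional instead.
      std::optional<StorePath> pathOpt() const {
          if (auto p = std::get_if<OutputInputAddressed>(&raw))
              return p->path;
          return std::nullopt;
      }
  };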
This further continues the dependency inversion. Also, I just went
ahead and exposed `parseDerivation`: it seems like the more proper
building block, and not a bad thing to expose if we are trying to be
less wedded to drv files on disk anyway.
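Roughly, the split looks like this (a sketch with simplified signatures;
the real functions also take a Store and deal with the derivation's
ATerm representation):

  #include <fstream>
  #include <sstream>
  #include <string>

  struct Derivation { /* outputs, inputDrvs, builder, ... */ };

  // Building block: parse a derivation from its textual contents,
  // independent of where those contents came from.
  Derivation parseDerivation(const std::string & contents)
  {
      Derivation drv;
      // ... parse the textual representation into drv ...
      return drv;
  }

  // Convenience wrapper: read a .drv file from disk, then parse it.
  Derivation readDerivation(const std::string & drvPath)
  {
      std::ifstream file(drvPath);
      std::ostringstream contents;
      contents << file.rdbuf();
      return parseDerivation(contents.str());
  }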
This function was used in only one place, where it could easily be
replaced by readDerivation() since it's not
performance-critical. (This function appears to have been modelled
after queryDerivationOutputs(), which exists only to make the garbage
collector faster.)
See documentation in header and comments in implementation for details.
This is actually done in preparation for floating CA derivations, not
multi-output fixed CA derivations, but the distinction doesn't yet
matter.
Thanks @cole-h for finding and fixing a bunch of typos.
Today's fixed output derivations and regular derivations differ in a few
ways which are largely orthogonal. This replaces `isFixedOutput` with a
`type` that returns an enum of possible combinations.
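A sketch of the idea (the enum values shown are illustrative; only the
first two combinations exist today):

  #include <cassert>

  // The single bool conflated properties that are really orthogonal; an
  // enum of combinations leaves room for content-addressed derivations.
  enum class DerivationType {
      InputAddressed,   // today's regular derivations
      CAFixed,          // today's fixed-output derivations
      CAFloating,       // future: floating content-addressed derivations
  };

  bool derivationIsFixed(DerivationType t) { return t == DerivationType::CAFixed; }
  bool derivationIsCA(DerivationType t)    { return t != DerivationType::InputAddressed; }

  int main() {
      assert(derivationIsFixed(DerivationType::CAFixed));
      assert(derivationIsCA(DerivationType::CAFloating));
      assert(!derivationIsCA(DerivationType::InputAddressed));
  }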
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
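As a rough sketch of what the new types buy us over raw strings
(simplified; the real checks are stricter):

  #include <set>
  #include <stdexcept>
  #include <string>
  #include <utility>

  struct StorePath {
      std::string baseName;  // "<base32-hash>-<name>"

      explicit StorePath(std::string s) : baseName(std::move(s)) {
          // Reject anything that doesn't look like "<32-char hash>-<name>".
          if (baseName.size() < 34 || baseName[32] != '-')
              throw std::invalid_argument("invalid store path: " + baseName);
      }
  };

  // Instead of abusing Path with a "!<outputs>" suffix, keep the outputs
  // structured alongside the path.
  struct StorePathWithOutputs {
      StorePath path;
      std::set<std::string> outputs;  // empty means the default output(s)
  };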
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
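A sketch of such a helper type, assuming a Rust-side drop function
exported over the C ABI (the function name and size here are
illustrative):

  #include <cstddef>
  #include <cstring>

  extern "C" void ffi_string_drop(void * self);  // provided by the Rust side

  struct RustString {
      // Opaque storage matching the Rust type's size and alignment.
      alignas(8) unsigned char raw[24];

      RustString(const RustString &) = delete;

      RustString(RustString && other) noexcept {
          std::memcpy(raw, other.raw, sizeof(raw));
          // Zero the moved-from value so its destructor becomes a no-op.
          std::memset(other.raw, 0, sizeof(raw));
      }

      bool isZero() const {
          for (std::size_t i = 0; i < sizeof(raw); ++i)
              if (raw[i]) return false;
          return true;
      }

      ~RustString() {
          // Only call drop() on values that haven't been moved away.
          if (!isZero()) ffi_string_drop(raw);
      }
  };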
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
This is primarily because Derivation::{can,will}BuildLocally() depends
on attributes like preferLocalBuild and requiredSystemFeatures, but it
can't handle them properly because it doesn't have access to the
structured attributes.
Functions like copyClosure() had 3 bool arguments, which creates a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
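One way to do this, sketched below: give each former bool its own
two-value enum, so call sites read like keyword arguments and swapped
arguments no longer compile (flag names illustrative):

  #include <cstdio>

  enum RepairFlag : bool { NoRepair = false, Repair = true };
  enum CheckSigsFlag : bool { NoCheckSigs = false, CheckSigs = true };
  enum SubstituteFlag : bool { NoSubstitute = false, Substitute = true };

  // Formerly: copyClosure(..., bool repair, bool checkSigs, bool substitute)
  void copyClosure(RepairFlag repair, CheckSigsFlag checkSigs, SubstituteFlag substitute)
  {
      std::printf("repair=%d checkSigs=%d substitute=%d\n", repair, checkSigs, substitute);
  }

  int main() {
      // copyClosure(true, false, true);   // which bool was which?
      copyClosure(NoRepair, CheckSigs, Substitute);  // self-documenting, type-checked
  }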
For example, you can now say:
configureFlags = "--prefix=${placeholder "out"} --includedir=${placeholder "dev"}";
The strings returned by the ‘placeholder’ builtin are replaced at
build time by the actual store paths corresponding to the specified
outputs.
Previously, you had to work around the inability to self-reference by doing stuff like:
preConfigure = ''
configureFlags+=" --prefix $out --includedir=$dev"
'';
or rely on ad-hoc variable interpolation semantics in Autoconf or Make
(e.g. --prefix=\$(out)), which doesn't always work.
Also, move a few free-standing functions into StoreAPI and Derivation.
Also, introduce a non-nullable smart pointer, ref<T>, which is just a
wrapper around std::shared_ptr ensuring that the pointer is never
null. (For reference-counted values, this is better than passing a
"T&", because the latter doesn't maintain the refcount. Usually, the
caller will have a shared_ptr keeping the value alive, but that's not
always the case, e.g., when passing a reference to a std::thread via
std::bind.)
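A minimal sketch of such a ref<T>:

  #include <memory>
  #include <stdexcept>
  #include <utility>

  template<typename T>
  class ref {
      std::shared_ptr<T> p;
  public:
      explicit ref(const std::shared_ptr<T> & p) : p(p) {
          if (!p) throw std::invalid_argument("null pointer passed to ref<T>");
      }

      T * operator->() const { return p.get(); }
      T & operator*() const { return *p; }

      // Still usable where a shared_ptr is expected.
      operator std::shared_ptr<T>() const { return p; }
  };

  // Typical construction: the refcount travels with the value, unlike a
  // plain T&.
  template<typename T, typename... Args>
  ref<T> make_ref(Args &&... args) {
      return ref<T>(std::make_shared<T>(std::forward<Args>(args)...));
  }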
Previously, to build a derivation remotely, we had to copy the entire
closure of the .drv file to the remote machine, even though we only
need the top-level derivation. This is very wasteful: the closure can
contain thousands of store paths, and in some Hydra use cases, include
source paths that are very large (e.g. Git/Mercurial checkouts).
So now there is a new operation, StoreAPI::buildDerivation(), that
performs a build from an in-memory representation of a derivation
(BasicDerivation) rather than from an on-disk .drv file. The only files
that need to be in the Nix store are the sources of the derivation
(drv.inputSrcs), and the needed output paths of the dependencies (as
described by drv.inputDrvs). "nix-store --serve" exposes this
interface.
Note that this is a privileged operation, because you can construct a
derivation that builds any store path whatsoever. Fixing this will
require changing the hashing scheme (i.e., the output paths should be
computed from the other fields in BasicDerivation, allowing them to be
verified without access to other derivations). However, this would be
quite nice because it would allow .drv-free building (e.g. "nix-env
-i" wouldn't have to write any .drv files to disk).
Fixes #173.
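A sketch of the new operation, with heavily simplified stand-in types:

  #include <map>
  #include <set>
  #include <string>
  #include <vector>

  using StorePath = std::string;            // stand-ins for the real types
  using StorePathSet = std::set<StorePath>;

  // A derivation that doesn't reference other .drv files: enough to build,
  // provided inputSrcs and the inputs' outputs are already present.
  struct BasicDerivation {
      std::map<std::string, StorePath> outputs;  // output name -> store path
      StorePathSet inputSrcs;                    // sources that must be in the store
      std::string platform;
      std::string builder;
      std::vector<std::string> args;
      std::map<std::string, std::string> env;
  };

  struct BuildResult { bool success = false; std::string errorMsg; };

  struct StoreAPI {
      // Build from an in-memory derivation instead of an on-disk .drv file.
      virtual BuildResult buildDerivation(const StorePath & drvPath,
          const BasicDerivation & drv) = 0;
      virtual ~StoreAPI() = default;
  };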
If a derivation has multiple outputs, then we only want to download
those outputs that are actually needed. So if we do "nix-build -A
openssl.man", then only the "man" output should be downloaded.
Likewise if another package depends on ${openssl.man}.
The tricky part is that different derivations can depend on different
outputs of a given derivation, so we may need to restart the
corresponding derivation goal if that happens.
For example, given a derivation with outputs "out", "man" and "bin":
$ nix-build -A pkg
produces ./result pointing to the "out" output;
$ nix-build -A pkg.man
produces ./result-man pointing to the "man" output;
$ nix-build -A pkg.all
produces ./result, ./result-man and ./result-bin;
$ nix-build -A pkg.all -A pkg2
produces ./result, ./result-man, ./result-bin and ./result-2.
"config.h" must be included first, because otherwise the compiler
might not see the right value of _FILE_OFFSET_BITS. We've had this
before; see 705868a8a9. In this case,
GCC would compute a different address for ‘settings.useSubstitutes’ in
misc.cc because of the off_t in ‘settings’.
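For illustration, a self-contained sketch of the mechanism (field names
partly made up): on a 32-bit platform the struct layout depends on
whether _FILE_OFFSET_BITS was set before <sys/types.h> was included.

  // What "config.h" effectively does; it must come before any system header.
  #define _FILE_OFFSET_BITS 64
  #include <sys/types.h>

  #include <cstddef>
  #include <cstdio>

  struct Settings {
      off_t maxLogSize;      // 64-bit iff _FILE_OFFSET_BITS=64 on 32-bit systems
      bool useSubstitutes;   // its offset shifts when off_t changes size
  };

  int main() {
      std::printf("sizeof(off_t) = %zu, offsetof(useSubstitutes) = %zu\n",
          sizeof(off_t), offsetof(Settings, useSubstitutes));
  }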
Reverts 3854fc9b42.
http://hydra.nixos.org/build/3016700
This should also fix:
nix-instantiate: ./../boost/shared_ptr.hpp:254: T* boost::shared_ptr<T>::operator->() const [with T = nix::StoreAPI]: Assertion `px != 0' failed.
which was caused by hashDerivationModulo() calling the ‘store’
object (during store upgrades) before openStore() assigned it.
Previously, the Nix daemon didn't check that derivations added to the
store by clients have "correct" output paths (meaning that the output
paths are computed by hashing the derivation according to a certain
algorithm). This means that a
malicious user could craft a special .drv file to build *any*
desired path in the store with any desired contents (so long as the
path doesn't already exist). Then the attacker just needs to wait
for a victim to come along and install the compromised path.
For instance, if Alice (the attacker) knows that the latest Firefox
derivation in Nixpkgs produces the path
/nix/store/1a5nyfd4ajxbyy97r1fslhgrv70gj8a7-firefox-5.0.1
then (provided this path doesn't already exist) she can craft a .drv
file that creates that path (i.e., has it as one of its outputs),
add it to the store using "nix-store --add", and build it with
"nix-store -r". So the fake .drv could write a Trojan to the
Firefox path. Then, if user Bob (the victim) comes along and does
$ nix-env -i firefox
$ firefox
he executes the Trojan injected by Alice.
The fix is to have the Nix daemon verify that derivation outputs are
correct (in addValidPath()). This required some refactoring to move
the hash computation code to libstore.
promise :-) This allows derivations to specify on *what* output
paths of input derivations they are dependent. This helps to
prevent unnecessary downloads. For instance, a build might be
dependent on the `devel' and `lib' outputs of some library
component, but not the `docs' output.
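Sketched with simplified types, the data structure this amounts to:

  #include <map>
  #include <set>
  #include <string>

  using Path = std::string;
  using StringSet = std::set<std::string>;

  // For each input derivation, record the names of the outputs we depend
  // on, instead of just the .drv path.
  using DerivationInputs = std::map<Path, StringSet>;

  int main() {
      DerivationInputs inputDrvs;
      // Depend only on the `devel' and `lib' outputs of the library, so the
      // `docs' output never needs to be downloaded.
      inputDrvs["/nix/store/...-libfoo.drv"] = {"devel", "lib"};
  }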
`derivations.cc', etc.
* Store the SHA-256 content hash of store paths in the database after
they have been built/added. This is so that we can check whether
the store has been messed with (a la `rpm --verify').
* When registering path validity, verify that the closure property
holds.
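A minimal sketch of the closure-property check (simplified types and
names): a path may only be registered as valid if every path it
references is already valid, or is being registered in the same batch.

  #include <map>
  #include <set>
  #include <stdexcept>
  #include <string>

  using Path = std::string;
  using PathSet = std::set<Path>;

  struct ValidPathInfo {
      std::string sha256;   // content hash kept for later `rpm --verify'-style checks
      PathSet references;
  };

  void registerValidPaths(PathSet & validPaths,
      const std::map<Path, ValidPathInfo> & infos)
  {
      // Closure property: every reference must be valid already, or be
      // registered as part of this same batch.
      for (auto & [path, info] : infos)
          for (auto & ref : info.references)
              if (!validPaths.count(ref) && !infos.count(ref))
                  throw std::runtime_error(
                      "cannot register '" + path + "': reference '" + ref + "' is not valid");
      for (auto & i : infos)
          validPaths.insert(i.first);
  }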