Rework the `Store` hierarchy so that there's now one hierarchy for the
store configs and one for the implementations (where each implementation
extends the corresponding config). So a class hierarchy like
```
StoreConfig-------->Store
    |                 |
    v                 v
SubStoreConfig----->SubStore
    |                 |
    v                 v
SubSubStoreConfig-->SubSubStore
```
(with virtual inheritance to prevent the deadly diamond of death (DDD)).
The advantage of this architecture is that we can now introspect the
configuration of a store without having to instantiate the store itself.
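A rough sketch of the shape of this split (illustrative class bodies and names, not the actual Nix declarations):
```
// Sketch only: the config hierarchy carries the introspectable settings,
// the implementation hierarchy carries the effectful machinery, and
// virtual inheritance keeps the shared StoreConfig base unique.
#include <string>

struct StoreConfig
{
    virtual ~StoreConfig() { }
    virtual std::string name() const { return "Store"; }
};

struct SubStoreConfig : virtual StoreConfig
{
    std::string name() const override { return "SubStore"; }
};

struct Store : public virtual StoreConfig
{
    // effectful store operations live on this side
};

struct SubStore : public virtual SubStoreConfig, public virtual Store
{
    // exactly one StoreConfig subobject thanks to the virtual bases
};

// A SubStoreConfig can be constructed and queried (e.g. name()) without
// ever building a SubStore, which may open connections or databases.
```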
Add a new `init()` method to the `Store` class that is supposed to
handle all the effectful initialisation needed to set up the store.
The constructor should remain side-effect free and just initialise the
C++ data structures.
The goal behind that is that we can create “dummy” instances of each
store to query static properties about it (for example, the parameters
it accepts).
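Continuing the sketch above, the constructor/`init()` split might look roughly like this (hypothetical member and helper names):
```
// Sketch: the constructor only records plain data, so a "dummy" instance
// is cheap and safe to create; init() does the effectful work when the
// store is actually going to be used.
#include <string>
#include <utility>

struct LocalStoreSketch
{
    std::string stateDir;

    // Side-effect free: just initialise C++ members.
    explicit LocalStoreSketch(std::string stateDir)
        : stateDir(std::move(stateDir))
    { }

    // Effectful set-up: create directories, open the database, etc.
    void init()
    {
        // e.g. createDirs(stateDir); openDB(stateDir + "/db.sqlite");
    }
};
```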
The idea is that it's always more flexible to consume a `Source` than a
plain string, and it might even reduce memory consumption.
I also looked at `addToStoreFromDump` with its `// FIXME: remove?`, but
the work needed for that is far more up for interpretation, so I
punted on it for now.
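For illustration, here is a hedged sketch of why a pull-based `Source` is more flexible than a `std::string` argument (the `Source`/`StringSource` shapes below are simplified stand-ins, not the exact nix interfaces):
```
#include <algorithm>
#include <cstring>
#include <string>

// Simplified stand-in for a pull-based byte stream: callers can wrap a
// string, a file descriptor or a decompressor without materialising the
// whole payload in memory.
struct Source
{
    virtual size_t read(char * data, size_t len) = 0;
    virtual ~Source() { }
};

// Adapter so existing string-based callers keep working.
struct StringSource : Source
{
    const std::string & s;
    size_t pos = 0;
    explicit StringSource(const std::string & s) : s(s) { }
    size_t read(char * data, size_t len) override
    {
        size_t n = std::min(len, s.size() - pos);
        std::memcpy(data, s.data() + pos, n);
        pos += n;
        return n;
    }
};

// A consumer such as addToStore() can then process the data in chunks.
void consumeExample(Source & source)
{
    char buf[8192];
    while (size_t n = source.read(buf, sizeof(buf))) {
        (void) n; // hash or write the chunk here
    }
}
```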
This allows overriding the priority of substituters, e.g.
$ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
--substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'
Fixes #3264.
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
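A minimal sketch of the idea (simplified validation only; as noted below, the real `StorePath` is backed by Rust):
```
#include <set>
#include <stdexcept>
#include <string>
#include <utility>

// Simplified: a dedicated type that validates its contents once at
// construction, instead of passing bare strings around.
struct StorePath
{
    std::string baseName; // "<base32-hash>-<name>", without the store dir

    explicit StorePath(std::string baseName) : baseName(std::move(baseName))
    {
        if (this->baseName.find('-') == std::string::npos)
            throw std::invalid_argument("invalid store path: " + this->baseName);
    }
};

// Outputs are carried explicitly rather than encoded as a "!<outputs>" suffix.
struct StorePathWithOutputs
{
    StorePath path;
    std::set<std::string> outputs; // e.g. {"out", "dev"}
};
```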
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
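The move/drop dance described above might look roughly like the following (a hedged sketch: `rust_drop` and the 16-byte payload are hypothetical, not the actual FFI names or layout):
```
#include <cstring>

extern "C" void rust_drop(void * value); // hypothetical Rust-exported drop()

struct RustValue
{
    alignas(void *) char raw[16]; // opaque storage matching the Rust type

    RustValue() { std::memset(raw, 0, sizeof(raw)); }

    RustValue(const RustValue &) = delete;
    RustValue & operator =(const RustValue &) = delete;

    // Move: take over the bytes and zero the source, so the source's
    // destructor becomes a no-op.
    RustValue(RustValue && other) noexcept
    {
        std::memcpy(raw, other.raw, sizeof(raw));
        std::memset(other.raw, 0, sizeof(other.raw));
    }

    ~RustValue()
    {
        // Only call drop() if the value is not bitwise zero, i.e. it has
        // not been moved from.
        char zero[sizeof(raw)] = {};
        if (std::memcmp(raw, zero, sizeof(raw)) != 0)
            rust_drop(raw);
    }
};
```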
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
This integrates the functionality of the index-debuginfo program in
nixos-channel-scripts to maintain an index of DWARF debuginfo files in
a format usable by dwarffs. Thus the debug info index is updated by
Hydra rather than by the channel mirroring script.
Example usage:
$ nix copy --to 'file:///tmp/binary-cache?index-debug-info=true' /nix/store/vr9mhcch3fljzzkjld3kvkggvpq38cva-nix-2.2.2-debug
$ cat /tmp/binary-cache/debuginfo/036b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug
{"archive":"../nar/0313h2kdhk4v73xna9ysiksp2v8xrsk5xsw79mmwr3rg7byb4ka8.nar.xz","member":"lib/debug/.build-id/03/6b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug"}
Fixes #3083.
This reduces memory consumption of
nix copy --from file://... --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 514 MiB to 18 MiB for an uncompressed binary cache, and from 192
MiB to 53 MiB for a bzipped binary cache. It may also be faster
because fetching can happen concurrently with decompression/writing.
Continuation of 48662d151b.
Issue https://github.com/NixOS/nix/issues/1681.
This speeds up commands like "nix cat-store". For example:
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m4.336s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m0.045s
The primary motivation is to allow hydra-server to serve files from S3
binary caches. Previously Hydra had a hack to do "nix-store -r
<path>", but that fetches the entire closure so is prohibitively
expensive.
There is no garbage collection of the NAR cache yet. Also, the entire
NAR is read when accessing a single member file. We could generate the
NAR listing to provide random access.
Note: the NAR cache is indexed by the store path hash, not the content
hash, so NAR caches should not be shared between binary caches, unless
you're sure that all your builds are binary-reproducible.
Functions like copyClosure() had 3 bool arguments, which created a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
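A hedged sketch of both changes (stand-in types and helper definitions; the real code operates on the Store API and store path sets):
```
#include <set>
#include <string>

using PathSet = std::set<std::string>; // stand-in for the real path set type

// Named boolean-like flags make call sites self-describing.
enum RepairFlag : bool { NoRepair = false, Repair = true };
enum CheckSigsFlag : bool { NoCheckSigs = false, CheckSigs = true };
enum SubstituteFlag : bool { NoSubstitute = false, Substitute = true };

struct Store { }; // stand-in

// Stand-in helpers for the sketch.
void computeFSClosure(Store &, const PathSet & roots, PathSet & out) { out = roots; }
void copyPaths(Store &, Store &, const PathSet &, RepairFlag, CheckSigsFlag, SubstituteFlag) { }

// copyClosure() becomes a thin wrapper: compute the closure, then delegate.
void copyClosure(Store & from, Store & to, const PathSet & roots,
    RepairFlag repair = NoRepair,
    CheckSigsFlag checkSigs = CheckSigs,
    SubstituteFlag substitute = NoSubstitute)
{
    PathSet closure;
    computeFSClosure(from, roots, closure);
    copyPaths(from, to, closure, repair, checkSigs, substitute);
}

// A call site now reads unambiguously, e.g.:
//   copyClosure(src, dst, roots, NoRepair, CheckSigs, NoSubstitute);
```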
This default implementation of buildPaths() does nothing if all
requested paths are already valid, and throws an "unsupported
operation" error otherwise. This fixes a regression introduced by
c30330df6f in binary cache and legacy
SSH stores.
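Roughly, the default behaviour described above, in sketch form (simplified signatures; the real method operates on StorePathWithOutputs and uses Nix's own error type):
```
#include <set>
#include <stdexcept>
#include <string>

struct Store
{
    virtual ~Store() { }

    virtual bool isValidPath(const std::string & path) = 0;

    // Default: nothing to do if everything is already valid; otherwise this
    // store (e.g. a binary cache or legacy SSH store) cannot build, so fail.
    virtual void buildPaths(const std::set<std::string> & paths)
    {
        for (auto & path : paths)
            if (!isValidPath(path))
                throw std::runtime_error("unsupported operation 'buildPaths'");
    }
};
```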
The typical use is to inherit Config and add Setting<T> members:
  class MyClass : private Config
  {
      Setting<int> foo{this, 123, "foo", "the number of foos to use"};
      Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};

      MyClass() : Config(readConfigFile("/etc/my-app.conf"))
      {
          std::cout << foo << "\n"; // will print 123 unless overridden
      }
  };
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
The store parameter "write-nar-listing=1" will cause BinaryCacheStore
to write a file ‘<store-hash>.ls.xz’ for each ‘<store-hash>.narinfo’
added to the binary cache. This file contains XZ-compressed JSON
describing the contents of the NAR, excluding the contents of regular
files.
E.g.
{
  "version": 1,
  "root": {
    "type": "directory",
    "entries": {
      "lib": {
        "type": "directory",
        "entries": {
          "Mcrt1.o": {
            "type": "regular",
            "size": 1288
          },
          "Scrt1.o": {
            "type": "regular",
            "size": 3920
          },
          ...
        }
      }
    }
  }
}
(The actual file has no indentation.)
This is intended to speed up the NixOS channels programs index
generator [1], since fetching gazillions of large NARs from
cache.nixos.org is currently a bottleneck for updating the regular
(non-small) channel.
[1] https://github.com/NixOS/nixos-channel-scripts/blob/master/generate-programs-index.cc
The fact that queryPathInfo() is synchronous meant that we needed a
thread for every concurrent binary cache lookup, even though they end
up being handled by the same download thread. Requiring hundreds of
threads is not a good idea. So now there is an asynchronous version of
queryPathInfo() that takes a callback function to process the
result. Similarly, enqueueDownload() now takes a callback rather than
returning a future.
Thus, a command like
nix path-info --store https://cache.nixos.org/ -r /nix/store/slljrzwmpygy1daay14kjszsr9xix063-nixos-16.09beta231.dccf8c5
that returns 4941 paths now takes 1.87s using only 2 threads (the main
thread and the downloader thread). (This is with a prewarmed
CloudFront.)
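In sketch form, the callback-based lookup looks something like this (simplified types and signatures; the real API uses nix's Callback and ref<const ValidPathInfo>):
```
#include <functional>
#include <memory>
#include <string>
#include <vector>

struct ValidPathInfo { /* deriver, references, narSize, ... */ };

struct Store
{
    virtual ~Store() { }

    // Asynchronous: the result is delivered via the callback on the
    // download thread, so no dedicated thread per in-flight lookup is
    // needed.
    virtual void queryPathInfo(const std::string & path,
        std::function<void(std::shared_ptr<const ValidPathInfo>)> callback) = 0;
};

// Usage sketch: enqueue many lookups, then handle results as they arrive.
void queryMany(Store & store, const std::vector<std::string> & paths)
{
    for (auto & path : paths)
        store.queryPathInfo(path, [path](std::shared_ptr<const ValidPathInfo> info) {
            // handle (or record) the result; 'info' may be null on failure
            (void) info;
        });
}
```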
Caching path info is generally useful. For instance, it speeds up "nix
path-info -rS /run/current-system" (i.e. showing the closure sizes of
all paths in the closure of the current system) from 5.6s to 0.15s.
This also eliminates some APIs like Store::queryDeriver() and
Store::queryReferences().