On nix-env -qa -f '<nixpkgs>', this reduces maximum RSS by 20970 KiB
and runtime by 0.8%. This is mostly because we're not parsing the hash
part as a hash anymore (just validating that it consists of base-32
characters).
Also, replace storePathToHash() by StorePath::hashPart().
The idea is that it's always more flexible to consume a `Source` than a
plain string, and it might even reduce memory consumption.
I also looked at `addToStoreFromDump` with its `// FIXME: remove?`, but
the work needed for that is far more open to interpretation, so I
punted for now.
This allows overriding the priority of substituters, e.g.
$ nix-store --store ~/my-nix/ -r /nix/store/df3m4da96d84ljzxx4mygfshm1p0r2n3-geeqie-1.4 \
--substituters 'http://cache.nixos.org?priority=100 daemon?priority=10'
Fixes #3264.
Most functions now take a StorePath argument rather than a Path (which
is just an alias for std::string). The StorePath constructor ensures
that the path is syntactically correct (i.e. it looks like
<store-dir>/<base32-hash>-<name>). Similarly, functions like
buildPaths() now take a StorePathWithOutputs, rather than abusing Path
by adding a '!<outputs>' suffix.
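Conceptually, StorePathWithOutputs is just a store path paired with a set
of output names. A minimal sketch (not necessarily the exact declaration
in the tree) looks like this:

    #include <set>
    #include <string>

    // 'StorePath' is the syntactically validated path type described above.
    struct StorePathWithOutputs
    {
        StorePath path;
        std::set<std::string> outputs; // e.g. {"out", "dev"}
    };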
Note that the StorePath type is implemented in Rust. This involves
some hackery to allow Rust values to be used directly in C++, via a
helper type whose destructor calls the Rust type's drop()
function. The main issue is the dynamic nature of C++ move semantics:
after we have moved a Rust value, we should not call the drop function
on the original value. So when we move a value, we set the original
value to bitwise zero, and the destructor only calls drop() if the
value is not bitwise zero. This should be sufficient for most types.
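As a rough illustration (the type name, the value size, and the FFI
function below are made up, not the actual helper in the tree), the C++
side can look like this:

    #include <cstring>

    extern "C" void ffi_store_path_drop(void * self); // assumed Rust-side drop function

    struct RustStorePath
    {
        unsigned char raw[24] = {}; // assumed size of the Rust value

        RustStorePath(RustStorePath && other)
        {
            std::memcpy(raw, other.raw, sizeof(raw));
            // Zero out the moved-from value so its destructor won't call drop().
            std::memset(other.raw, 0, sizeof(other.raw));
        }

        ~RustStorePath()
        {
            // Only call drop() if the value is not bitwise zero,
            // i.e. it hasn't been moved from.
            for (auto b : raw)
                if (b != 0) { ffi_store_path_drop(raw); return; }
        }
    };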
Also lots of minor cleanups to the C++ API to make it more modern
(e.g. using std::optional and std::string_view in some places).
This integrates the functionality of the index-debuginfo program in
nixos-channel-scripts to maintain an index of DWARF debuginfo files in
a format usable by dwarffs. Thus the debug info index is updated by
Hydra rather than by the channel mirroring script.
Example usage:
$ nix copy --to 'file:///tmp/binary-cache?index-debug-info=true' /nix/store/vr9mhcch3fljzzkjld3kvkggvpq38cva-nix-2.2.2-debug
$ cat /tmp/binary-cache/debuginfo/036b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug
{"archive":"../nar/0313h2kdhk4v73xna9ysiksp2v8xrsk5xsw79mmwr3rg7byb4ka8.nar.xz","member":"lib/debug/.build-id/03/6b210b03bad75ab2d8fc80b7a146f98e7f1ecf.debug"}
Fixes #3083.
Once we've started writing data to a Sink, we can't restart a download
request, because then we end up writing duplicate data to the
Sink. Therefore we shouldn't handle retries in Downloader but at a
higher level (in particular, in copyStorePath()).
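The higher-level retry then just re-runs the whole copy, so that every
attempt starts a fresh request and writes into a fresh Sink. A sketch
(the real copyStorePath() obviously does more):

    #include <exception>
    #include <functional>

    void withRetries(std::function<void()> copyOnce, unsigned int maxAttempts = 5)
    {
        for (unsigned int attempt = 1; ; ++attempt) {
            try {
                copyOnce(); // each attempt creates its own download request and Sink
                return;
            } catch (std::exception &) {
                if (attempt >= maxAttempts) throw;
            }
        }
    }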
Fixes #2952.
(cherry picked from commit a67cf5a358)
This didn't work anymore since decompression was only done in the
non-coroutine case.
Decompressors are now sinks, just like compressors.
Also fixed a bug in bzip2 API handling (we have to handle BZ_RUN_OK
rather than BZ_OK), which we didn't notice because there was a missing
'throw':
if (ret != BZ_OK)
CompressionError("error while compressing bzip2 file");
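With the missing 'throw' added and the right status code checked, the fix
is simply:

    if (ret != BZ_RUN_OK)
        throw CompressionError("error while compressing bzip2 file");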
This reduces memory consumption of
nix copy --from file://... --to ~/my-nix /nix/store/95cwv4q54dc6giaqv6q6p4r02ia2km35-blender-2.79
from 514 MiB to 18 MiB for an uncompressed binary cache, and from 192
MiB to 53 MiB for a bzipped binary cache. It may also be faster
because fetching can happen concurrently with decompression/writing.
Continuation of 48662d151b.
Issue https://github.com/NixOS/nix/issues/1681.
E.g.
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m4.139s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nars \
/nix/store/b0w2hafndl09h64fhb86kw6bmhbmnpm1-blender-2.79/share/icons/hicolor/scalable/apps/blender.svg > /dev/null
real 0m0.024s
(Before, the second call took ~0.220s.)
This will use a NAR listing in
/tmp/nars/b0w2hafndl09h64fhb86kw6bmhbmnpm1.ls containing all metadata,
including the offsets of regular files inside the NAR. Thus, we don't
need to read the entire NAR. (We do read the entire listing, but
that's generally pretty small. We could use a SQLite DB by borrowing
some more code from nixos-channel-scripts/file-cache.hh.)
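Conceptually (types and names below are made up; the real listing is JSON
with more metadata), serving a single member then only needs the offset
and size from the listing plus a ranged read of the NAR:

    #include <cstdint>
    #include <fstream>
    #include <string>

    struct ListingEntry
    {
        uint64_t narOffset; // where the member's contents start inside the NAR
        uint64_t size;      // length of the member's contents
    };

    std::string readMember(const std::string & narPath, const ListingEntry & entry)
    {
        std::ifstream nar(narPath, std::ios::binary);
        nar.seekg(entry.narOffset); // skip straight to the member
        std::string data(entry.size, '\0');
        nar.read(&data[0], static_cast<std::streamsize>(entry.size));
        return data;
    }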
This is primarily useful when Hydra is serving files from an S3 binary
cache, in particular when you have giant NARs. E.g. we had some 12 GiB
NARs, so accessing individual files was pretty slow.
This speeds up commands like "nix cat-store". For example:
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m4.336s
$ time nix cat-store --store https://cache.nixos.org?local-nar-cache=/tmp/nar-cache /nix/store/i60yncmq6w9dyv37zd2k454g0fkl3arl-systemd-234/etc/udev/udev.conf
real 0m0.045s
The primary motivation is to allow hydra-server to serve files from S3
binary caches. Previously Hydra had a hack to do "nix-store -r
<path>", but that fetches the entire closure so is prohibitively
expensive.
There is no garbage collection of the NAR cache yet. Also, the entire
NAR is read when accessing a single member file. We could generate the
NAR listing to provide random access.
Note: the NAR cache is indexed by the store path hash, not the content
hash, so NAR caches should not be shared between binary caches, unless
you're sure that all your builds are binary-reproducible.
Probably as a result of a bad merge in
4b8f1b0ec0, we had both a
BinaryCacheStoreAccessor and a
RemoteFSAccessor. BinaryCacheStore::getFSAccessor() returned the
latter, but BinaryCacheStore::addToStore() checked for the
former. This probably caused hydra-queue-runner to download paths that
it just uploaded.
Functions like copyClosure() had 3 bool arguments, which created a
severe risk of mixing up arguments.
Also, implement copyClosure() using copyPaths().
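A sketch of the pattern (the enum names and the exact signature here are
illustrative, not necessarily the ones used in the tree):

    #include <set>
    #include <string>

    class Store;
    typedef std::set<std::string> PathSet;

    // Each former bool gets its own named type, so call sites document
    // themselves and arguments can't be swapped silently.
    enum RepairFlag : bool { NoRepair = false, Repair = true };
    enum CheckSigsFlag : bool { NoCheckSigs = false, CheckSigs = true };
    enum SubstituteFlag : bool { NoSubstitute = false, Substitute = true };

    void copyClosure(Store & srcStore, Store & dstStore, const PathSet & storePaths,
        RepairFlag repair = NoRepair,
        CheckSigsFlag checkSigs = CheckSigs,
        SubstituteFlag substitute = NoSubstitute);

Calling copyClosure(src, dst, paths, NoRepair, CheckSigs, NoSubstitute) is
much harder to get wrong than copyClosure(src, dst, paths, false, true,
false).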
This default implementation of buildPaths() does nothing if all
requested paths are already valid, and throws an "unsupported
operation" error otherwise. This fixes a regression introduced by
c30330df6f in binary cache and legacy
SSH stores.
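Roughly, with a simplified signature and illustrative names:

    #include <set>
    #include <stdexcept>
    #include <string>

    typedef std::set<std::string> PathSet;

    struct Unsupported : std::runtime_error
    {
        using std::runtime_error::runtime_error;
    };

    bool isValidPath(const std::string & path); // stand-in for Store::isValidPath()

    // Do nothing if every requested path is already valid; otherwise this
    // store can't build anything, so report an unsupported operation.
    void buildPaths(const PathSet & paths)
    {
        for (auto & path : paths)
            if (!isValidPath(path))
                throw Unsupported("store does not support building paths");
    }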
The typical use is to inherit Config and add Setting<T> members:
class MyClass : private Config
{
    Setting<int> foo{this, 123, "foo", "the number of foos to use"};
    Setting<std::string> bar{this, "blabla", "bar", "the name of the bar"};

    MyClass() : Config(readConfigFile("/etc/my-app.conf"))
    {
        std::cout << foo << "\n"; // will print 123 unless overridden
    }
};
Currently, this is used by Store and its subclasses for store
parameters. You now get a warning if you specify a non-existent store
parameter in a store URI.
So if "text-compression=br", the .ls file in S3 will get a
Content-Encoding of "br". Brotli appears to compress better than xz
for this kind of file and is natively supported by browsers.
You can now set the store parameter "text-compression=br" to compress
textual files in the binary cache (i.e. narinfo and logs) using
Brotli. This sets the Content-Encoding header; the extension of
compressed files is unchanged.
You can separately specify the compression of log files using
"log-compression=br". This is useful when you don't want to compress
narinfo files for backward compatibility.
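For example (the store URI here is made up), the parameter goes on the
binary cache's store URI, just like the other store parameters above:

$ nix copy --to 's3://example-cache?text-compression=br' /nix/store/...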