diff --git a/.version b/.version index 7208c2182..f398a2061 100644 --- a/.version +++ b/.version @@ -1 +1 @@ -2.4 \ No newline at end of file +3.0 \ No newline at end of file diff --git a/Makefile.config.in b/Makefile.config.in index b632444e8..5c245b8e9 100644 --- a/Makefile.config.in +++ b/Makefile.config.in @@ -19,6 +19,7 @@ LIBLZMA_LIBS = @LIBLZMA_LIBS@ OPENSSL_LIBS = @OPENSSL_LIBS@ PACKAGE_NAME = @PACKAGE_NAME@ PACKAGE_VERSION = @PACKAGE_VERSION@ +SHELL = @bash@ SODIUM_LIBS = @SODIUM_LIBS@ SQLITE3_LIBS = @SQLITE3_LIBS@ bash = @bash@ diff --git a/README.md b/README.md index a1588284d..03c5deb7b 100644 --- a/README.md +++ b/README.md @@ -12,7 +12,7 @@ for more details. On Linux and macOS the easiest way to Install Nix is to run the following shell command (as a user other than root): -``` +```console $ curl -L https://nixos.org/nix/install | sh ``` @@ -20,27 +20,8 @@ Information on additional installation methods is available on the [Nix download ## Building And Developing -### Building Nix - -You can build Nix using one of the targets provided by [release.nix](./release.nix): - -``` -$ nix-build ./release.nix -A build.aarch64-linux -$ nix-build ./release.nix -A build.x86_64-darwin -$ nix-build ./release.nix -A build.i686-linux -$ nix-build ./release.nix -A build.x86_64-linux -``` - -### Development Environment - -You can use the provided `shell.nix` to get a working development environment: - -``` -$ nix-shell -$ ./bootstrap.sh -$ ./configure -$ make -``` +See our [Hacking guide](https://hydra.nixos.org/job/nix/master/build.x86_64-linux/latest/download-by-type/doc/manual#chap-hacking) in our manual for instruction on how to +build nix from source with nix-build or how to get a development environment. ## Additional Resources diff --git a/doc/manual/command-ref/conf-file.xml b/doc/manual/command-ref/conf-file.xml index 1fa74a143..d0f1b09ca 100644 --- a/doc/manual/command-ref/conf-file.xml +++ b/doc/manual/command-ref/conf-file.xml @@ -373,16 +373,15 @@ false. hashed-mirrors A list of web servers used by - builtins.fetchurl to obtain files by - hash. The default is - http://tarballs.nixos.org/. Given a hash type - ht and a base-16 hash + builtins.fetchurl to obtain files by hash. + Given a hash type ht and a base-16 hash h, Nix will try to download the file from hashed-mirror/ht/h. This allows files to be downloaded even if they have disappeared - from their original URI. For example, given the default mirror - http://tarballs.nixos.org/, when building the derivation + from their original URI. For example, given the hashed mirror + http://tarballs.example.com/, when building the + derivation builtins.fetchurl { @@ -392,7 +391,7 @@ builtins.fetchurl { Nix will attempt to download this file from - http://tarballs.nixos.org/sha256/2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae + http://tarballs.example.com/sha256/2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae first. If it is not available there, if will try the original URI. diff --git a/doc/manual/expressions/builder-syntax.xml b/doc/manual/expressions/builder-syntax.xml deleted file mode 100644 index e51bade44..000000000 --- a/doc/manual/expressions/builder-syntax.xml +++ /dev/null @@ -1,119 +0,0 @@ -
- -Builder Syntax - -Build script for GNU Hello -(<filename>builder.sh</filename>) - -source $stdenv/setup - -PATH=$perl/bin:$PATH - -tar xvfz $src -cd hello-* -./configure --prefix=$out -make -make install - - - shows the builder referenced -from Hello's Nix expression (stored in -pkgs/applications/misc/hello/ex-1/builder.sh). -The builder can actually be made a lot shorter by using the -generic builder functions provided by -stdenv, but here we write out the build steps to -elucidate what a builder does. It performs the following -steps: - - - - - - When Nix runs a builder, it initially completely clears the - environment (except for the attributes declared in the - derivation). For instance, the PATH variable is - emptyActually, it's initialised to - /path-not-set to prevent Bash from setting it - to a default value.. This is done to prevent - undeclared inputs from being used in the build process. If for - example the PATH contained - /usr/bin, then you might accidentally use - /usr/bin/gcc. - - So the first step is to set up the environment. This is - done by calling the setup script of the - standard environment. The environment variable - stdenv points to the location of the standard - environment being used. (It wasn't specified explicitly as an - attribute in , but - mkDerivation adds it automatically.) - - - - - - Since Hello needs Perl, we have to make sure that Perl is in - the PATH. The perl environment - variable points to the location of the Perl package (since it - was passed in as an attribute to the derivation), so - $perl/bin is the - directory containing the Perl interpreter. - - - - - - Now we have to unpack the sources. The - src attribute was bound to the result of - fetching the Hello source tarball from the network, so the - src environment variable points to the location in - the Nix store to which the tarball was downloaded. After - unpacking, we cd to the resulting source - directory. - - The whole build is performed in a temporary directory - created in /tmp, by the way. This directory is - removed after the builder finishes, so there is no need to clean - up the sources afterwards. Also, the temporary directory is - always newly created, so you don't have to worry about files from - previous builds interfering with the current build. - - - - - - GNU Hello is a typical Autoconf-based package, so we first - have to run its configure script. In Nix - every package is stored in a separate location in the Nix store, - for instance - /nix/store/9a54ba97fb71b65fda531012d0443ce2-hello-2.1.1. - Nix computes this path by cryptographically hashing all attributes - of the derivation. The path is passed to the builder through the - out environment variable. So here we give - configure the parameter - --prefix=$out to cause Hello to be installed in - the expected location. - - - - - - Finally we build Hello (make) and install - it into the location specified by out - (make install). - - - - - -If you are wondering about the absence of error checking on the -result of various commands called in the builder: this is because the -shell script is evaluated with Bash's option, -which causes the script to be aborted if any command fails without an -error check. - -
\ No newline at end of file diff --git a/doc/manual/hacking.xml b/doc/manual/hacking.xml index b671811d3..d25d4b84a 100644 --- a/doc/manual/hacking.xml +++ b/doc/manual/hacking.xml @@ -4,18 +4,37 @@ Hacking -This section provides some notes on how to hack on Nix. To get +This section provides some notes on how to hack on Nix. To get the latest version of Nix from GitHub: -$ git clone git://github.com/NixOS/nix.git +$ git clone https://github.com/NixOS/nix.git $ cd nix -To build it and its dependencies: +To build Nix for the current operating system/architecture use + -$ nix-build release.nix -A build.x86_64-linux +$ nix-build + +or if you have a flakes-enabled nix: + + +$ nix build + + +This will build defaultPackage attribute defined in the flake.nix file. + +To build for other platforms add one of the following suffixes to it: aarch64-linux, +i686-linux, x86_64-darwin, x86_64-linux. + +i.e. + + +nix-build -A defaultPackage.x86_64-linux + + To build all dependencies and start a shell in which all @@ -27,13 +46,27 @@ $ nix-shell To build Nix itself in this shell: [nix-shell]$ ./bootstrap.sh -[nix-shell]$ configurePhase -[nix-shell]$ make +[nix-shell]$ ./configure $configureFlags +[nix-shell]$ make -j $NIX_BUILD_CORES To install it in $(pwd)/inst and test it: [nix-shell]$ make install [nix-shell]$ make installcheck +[nix-shell]$ ./inst/bin/nix --version +nix (Nix) 2.4 + + +If you have a flakes-enabled nix you can replace: + + +$ nix-shell + + +by: + + +$ nix develop diff --git a/mk/precompiled-headers.mk b/mk/precompiled-headers.mk index 1c0452dc2..500c99e4a 100644 --- a/mk/precompiled-headers.mk +++ b/mk/precompiled-headers.mk @@ -21,13 +21,13 @@ clean-files += $(GCH) $(PCH) ifeq ($(PRECOMPILE_HEADERS), 1) - ifeq ($(CXX), g++) + ifeq ($(findstring g++,$(CXX)), g++) GLOBAL_CXXFLAGS_PCH += -include $(buildprefix)precompiled-headers.h -Winvalid-pch GLOBAL_ORDER_AFTER += $(GCH) - else ifeq ($(CXX), clang++) + else ifeq ($(findstring clang++,$(CXX)), clang++) GLOBAL_CXXFLAGS_PCH += -include-pch $(PCH) -Winvalid-pch diff --git a/perl/lib/Nix/Store.xs b/perl/lib/Nix/Store.xs index f14c3f73f..4e038866e 100644 --- a/perl/lib/Nix/Store.xs +++ b/perl/lib/Nix/Store.xs @@ -224,7 +224,7 @@ SV * hashString(char * algo, int base32, char * s) SV * convertHash(char * algo, char * s, int toBase32) PPCODE: try { - Hash h(s, parseHashType(algo)); + auto h = Hash::parseAny(s, parseHashType(algo)); string s = h.to_string(toBase32 ? Base32 : Base16, false); XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0))); } catch (Error & e) { @@ -285,7 +285,7 @@ SV * addToStore(char * srcPath, int recursive, char * algo) SV * makeFixedOutputPath(int recursive, char * algo, char * hash, char * name) PPCODE: try { - Hash h(hash, parseHashType(algo)); + auto h = Hash::parseAny(hash, parseHashType(algo)); auto method = recursive ? 
FileIngestionMethod::Recursive : FileIngestionMethod::Flat; auto path = store()->makeFixedOutputPath(method, h, name); XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(path).c_str(), 0))); @@ -303,8 +303,11 @@ SV * derivationFromPath(char * drvPath) hash = newHV(); HV * outputs = newHV(); - for (auto & i : drv.outputs) - hv_store(outputs, i.first.c_str(), i.first.size(), newSVpv(store()->printStorePath(i.second.path).c_str(), 0), 0); + for (auto & i : drv.outputsAndPaths(*store())) + hv_store( + outputs, i.first.c_str(), i.first.size(), + newSVpv(store()->printStorePath(i.second.second).c_str(), 0), + 0); hv_stores(hash, "outputs", newRV((SV *) outputs)); AV * inputDrvs = newAV(); diff --git a/scripts/install-multi-user.sh b/scripts/install-multi-user.sh index 00c9d540b..e5cc4d7ed 100644 --- a/scripts/install-multi-user.sh +++ b/scripts/install-multi-user.sh @@ -37,6 +37,8 @@ readonly PROFILE_NIX_FILE="$NIX_ROOT/var/nix/profiles/default/etc/profile.d/nix- readonly NIX_INSTALLED_NIX="@nix@" readonly NIX_INSTALLED_CACERT="@cacert@" +#readonly NIX_INSTALLED_NIX="/nix/store/j8dbv5w6jl34caywh2ygdy88knx1mdf7-nix-2.3.6" +#readonly NIX_INSTALLED_CACERT="/nix/store/7dxhzymvy330i28ii676fl1pqwcahv2f-nss-cacert-3.49.2" readonly EXTRACTED_NIX_PATH="$(dirname "$0")" readonly ROOT_HOME=$(echo ~root) @@ -69,9 +71,11 @@ uninstall_directions() { subheader "Uninstalling nix:" local step=0 - if poly_service_installed_check; then + if [ -e /run/systemd/system ] && poly_service_installed_check; then step=$((step + 1)) poly_service_uninstall_directions "$step" + else + step=$((step + 1)) fi for profile_target in "${PROFILE_TARGETS[@]}"; do @@ -250,7 +254,9 @@ function finish_success { echo "But fetching the nixpkgs channel failed. (Are you offline?)" echo "To try again later, run \"sudo -i nix-channel --update nixpkgs\"." fi - cat <&2 -elif [ "$(uname -s)" = "Linux" ] && [ -e /run/systemd/system ]; then +elif [ "$(uname -s)" = "Linux" ]; then echo "Note: a multi-user installation is possible. See https://nixos.org/nix/manual/#sect-multi-user-installation" >&2 fi @@ -122,7 +122,7 @@ if [ "$(uname -s)" = "Darwin" ]; then fi if [ "$INSTALL_MODE" = "daemon" ]; then - printf '\e[1;31mSwitching to the Daemon-based Installer\e[0m\n' + printf '\e[1;31mSwitching to the Multi-user Installer\e[0m\n' exec "$self/install-multi-user" exit 0 fi @@ -207,7 +207,7 @@ if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then if [ -w "$fn" ]; then if ! grep -q "$p" "$fn"; then echo "modifying $fn..." >&2 - echo "if [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn" + echo -e "\nif [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn" fi added=1 break @@ -218,7 +218,7 @@ if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then if [ -w "$fn" ]; then if ! grep -q "$p" "$fn"; then echo "modifying $fn..." >&2 - echo "if [ -e $p ]; then . $p; fi # added by Nix installer" >> "$fn" + echo -e "\nif [ -e $p ]; then . 
$p; fi # added by Nix installer" >> "$fn" fi added=1 break diff --git a/src/build-remote/build-remote.cc b/src/build-remote/build-remote.cc index e07117496..ce5127113 100644 --- a/src/build-remote/build-remote.cc +++ b/src/build-remote/build-remote.cc @@ -33,14 +33,14 @@ std::string escapeUri(std::string uri) static string currentLoad; -static AutoCloseFD openSlotLock(const Machine & m, unsigned long long slot) +static AutoCloseFD openSlotLock(const Machine & m, uint64_t slot) { return openLockFile(fmt("%s/%s-%d", currentLoad, escapeUri(m.storeUri), slot), true); } -static bool allSupportedLocally(const std::set<std::string>& requiredFeatures) { +static bool allSupportedLocally(Store & store, const std::set<std::string>& requiredFeatures) { for (auto & feature : requiredFeatures) - if (!settings.systemFeatures.get().count(feature)) return false; + if (!store.systemFeatures.get().count(feature)) return false; return true; } @@ -103,10 +103,10 @@ static int _main(int argc, char * * argv) drvPath = store->parseStorePath(readString(source)); auto requiredFeatures = readStrings<std::set<std::string>>(source); - auto canBuildLocally = amWilling + auto canBuildLocally = amWilling && ( neededSystem == settings.thisSystem || settings.extraPlatforms.get().count(neededSystem) > 0) - && allSupportedLocally(requiredFeatures); + && allSupportedLocally(*store, requiredFeatures); /* Error ignored here, will be caught later */ mkdir(currentLoad.c_str(), 0777); @@ -119,7 +119,7 @@ static int _main(int argc, char * * argv) bool rightType = false; Machine * bestMachine = nullptr; - unsigned long long bestLoad = 0; + uint64_t bestLoad = 0; for (auto & m : machines) { debug("considering building on remote machine '%s'", m.storeUri); @@ -130,8 +130,8 @@ static int _main(int argc, char * * argv) m.mandatoryMet(requiredFeatures)) { rightType = true; AutoCloseFD free; - unsigned long long load = 0; - for (unsigned long long slot = 0; slot < m.maxJobs; ++slot) { + uint64_t load = 0; + for (uint64_t slot = 0; slot < m.maxJobs; ++slot) { auto slotLock = openSlotLock(m, slot); if (lockFile(slotLock.get(), ltWrite, false)) { if (!free) { @@ -170,7 +170,45 @@ static int _main(int argc, char * * argv) if (rightType && !canBuildLocally) std::cerr << "# postpone\n"; else + { + // build the hint template. string hintstring = "derivation: %s\nrequired (system, features): (%s, %s)"; hintstring += "\n%s available machines:"; hintstring += "\n(systems, maxjobs, supportedFeatures, mandatoryFeatures)"; + + for (unsigned int i = 0; i < machines.size(); ++i) { + hintstring += "\n(%s, %s, %s, %s)"; + } + + // add the template values. 
+ string drvstr; + if (drvPath.has_value()) + drvstr = drvPath->to_string(); + else + drvstr = ""; + + auto hint = hintformat(hintstring); + hint + % drvstr + % neededSystem + % concatStringsSep(", ", requiredFeatures) + % machines.size(); + + for (auto & m : machines) { + hint % concatStringsSep>(", ", m.systemTypes) + % m.maxJobs + % concatStringsSep(", ", m.supportedFeatures) + % concatStringsSep(", ", m.mandatoryFeatures); + } + + logErrorInfo(lvlInfo, { + .name = "Remote build", + .description = "Failed to find a machine for remote build!", + .hint = hint + }); + std::cerr << "# decline\n"; + } break; } @@ -186,15 +224,7 @@ static int _main(int argc, char * * argv) Activity act(*logger, lvlTalkative, actUnknown, fmt("connecting to '%s'", bestMachine->storeUri)); - Store::Params storeParams; - if (hasPrefix(bestMachine->storeUri, "ssh://")) { - storeParams["max-connections"] = "1"; - storeParams["log-fd"] = "4"; - if (bestMachine->sshKey != "") - storeParams["ssh-key"] = bestMachine->sshKey; - } - - sshStore = openStore(bestMachine->storeUri, storeParams); + sshStore = bestMachine->openStore(); sshStore->connect(); storeUri = bestMachine->storeUri; diff --git a/src/libexpr/eval-cache.cc b/src/libexpr/eval-cache.cc index 919de8a4e..46177a0a4 100644 --- a/src/libexpr/eval-cache.cc +++ b/src/libexpr/eval-cache.cc @@ -285,11 +285,10 @@ static std::shared_ptr makeAttrDb(const Hash & fingerprint) } EvalCache::EvalCache( - bool useCache, - const Hash & fingerprint, + std::optional> useCache, EvalState & state, RootLoader rootLoader) - : db(useCache ? makeAttrDb(fingerprint) : nullptr) + : db(useCache ? makeAttrDb(*useCache) : nullptr) , state(state) , rootLoader(rootLoader) { @@ -406,7 +405,7 @@ Value & AttrCursor::forceValue() return v; } -std::shared_ptr AttrCursor::maybeGetAttr(Symbol name) +std::shared_ptr AttrCursor::maybeGetAttr(Symbol name, bool forceErrors) { if (root->db) { if (!cachedValue) @@ -423,9 +422,12 @@ std::shared_ptr AttrCursor::maybeGetAttr(Symbol name) if (attr) { if (std::get_if(&attr->second)) return nullptr; - else if (std::get_if(&attr->second)) - throw EvalError("cached failure of attribute '%s'", getAttrPathStr(name)); - else + else if (std::get_if(&attr->second)) { + if (forceErrors) + debug("reevaluating failed cached attribute '%s'"); + else + throw CachedEvalError("cached failure of attribute '%s'", getAttrPathStr(name)); + } else return std::make_shared(root, std::make_pair(shared_from_this(), name), nullptr, std::move(attr)); } @@ -470,9 +472,9 @@ std::shared_ptr AttrCursor::maybeGetAttr(std::string_view name) return maybeGetAttr(root->state.symbols.create(name)); } -std::shared_ptr AttrCursor::getAttr(Symbol name) +std::shared_ptr AttrCursor::getAttr(Symbol name, bool forceErrors) { - auto p = maybeGetAttr(name); + auto p = maybeGetAttr(name, forceErrors); if (!p) throw Error("attribute '%s' does not exist", getAttrPathStr(name)); return p; @@ -601,7 +603,7 @@ bool AttrCursor::isDerivation() StorePath AttrCursor::forceDerivation() { - auto aDrvPath = getAttr(root->state.sDrvPath); + auto aDrvPath = getAttr(root->state.sDrvPath, true); auto drvPath = root->state.store->parseStorePath(aDrvPath->getString()); if (!root->state.store->isValidPath(drvPath) && !settings.readOnlyMode) { /* The eval cache contains 'drvPath', but the actual path has diff --git a/src/libexpr/eval-cache.hh b/src/libexpr/eval-cache.hh index 674bb03c1..8ffffc0ed 100644 --- a/src/libexpr/eval-cache.hh +++ b/src/libexpr/eval-cache.hh @@ -4,10 +4,13 @@ #include "hash.hh" #include "eval.hh" 
+#include #include namespace nix::eval_cache { +MakeError(CachedEvalError, EvalError); + class AttrDb; class AttrCursor; @@ -26,8 +29,7 @@ class EvalCache : public std::enable_shared_from_this public: EvalCache( - bool useCache, - const Hash & fingerprint, + std::optional> useCache, EvalState & state, RootLoader rootLoader); @@ -92,11 +94,11 @@ public: std::string getAttrPathStr(Symbol name) const; - std::shared_ptr maybeGetAttr(Symbol name); + std::shared_ptr maybeGetAttr(Symbol name, bool forceErrors = false); std::shared_ptr maybeGetAttr(std::string_view name); - std::shared_ptr getAttr(Symbol name); + std::shared_ptr getAttr(Symbol name, bool forceErrors = false); std::shared_ptr getAttr(std::string_view name); diff --git a/src/libexpr/eval.cc b/src/libexpr/eval.cc index 7a2f55504..0123070d1 100644 --- a/src/libexpr/eval.cc +++ b/src/libexpr/eval.cc @@ -345,6 +345,7 @@ EvalState::EvalState(const Strings & _searchPath, ref store) , sStructuredAttrs(symbols.create("__structuredAttrs")) , sBuilder(symbols.create("builder")) , sArgs(symbols.create("args")) + , sContentAddressed(symbols.create("__contentAddressed")) , sOutputHash(symbols.create("outputHash")) , sOutputHashAlgo(symbols.create("outputHashAlgo")) , sOutputHashMode(symbols.create("outputHashMode")) @@ -1256,10 +1257,10 @@ void EvalState::callFunction(Value & fun, Value & arg, Value & v, const Pos & po try { lambda.body->eval(*this, env2, v); } catch (Error & e) { - addErrorTrace(e, lambda.pos, "while evaluating %s", - (lambda.name.set() - ? "'" + (string) lambda.name + "'" - : "anonymous lambdaction")); + addErrorTrace(e, lambda.pos, "while evaluating %s", + (lambda.name.set() + ? "'" + (string) lambda.name + "'" + : "anonymous lambda")); addErrorTrace(e, pos, "from call site%s", ""); throw; } diff --git a/src/libexpr/eval.hh b/src/libexpr/eval.hh index 8986952e3..5855b4ef2 100644 --- a/src/libexpr/eval.hh +++ b/src/libexpr/eval.hh @@ -74,6 +74,7 @@ public: sSystem, sOverrides, sOutputs, sOutputName, sIgnoreNulls, sFile, sLine, sColumn, sFunctor, sToString, sRight, sWrong, sStructuredAttrs, sBuilder, sArgs, + sContentAddressed, sOutputHash, sOutputHashAlgo, sOutputHashMode, sRecurseForDerivations, sDescription, sSelf, sEpsilon; @@ -374,6 +375,9 @@ struct EvalSettings : Config Setting traceFunctionCalls{this, false, "trace-function-calls", "Emit log messages for each function entry and exit at the 'vomit' log level (-vvvv)."}; + + Setting useEvalCache{this, true, "eval-cache", + "Whether to use the flake evaluation cache."}; }; extern EvalSettings evalSettings; diff --git a/src/libexpr/flake/flake.hh b/src/libexpr/flake/flake.hh index 77f3abdeb..c2bb2888b 100644 --- a/src/libexpr/flake/flake.hh +++ b/src/libexpr/flake/flake.hh @@ -106,6 +106,6 @@ void emitTreeAttrs( EvalState & state, const fetchers::Tree & tree, const fetchers::Input & input, - Value & v); + Value & v, bool emptyRevFallback = false); } diff --git a/src/libexpr/get-drvs.cc b/src/libexpr/get-drvs.cc index 9055f59a1..5d6e39aa0 100644 --- a/src/libexpr/get-drvs.cc +++ b/src/libexpr/get-drvs.cc @@ -39,7 +39,7 @@ DrvInfo::DrvInfo(EvalState & state, ref store, const std::string & drvPat if (i == drv.outputs.end()) throw Error("derivation '%s' does not have output '%s'", store->printStorePath(drvPath), outputName); - outPath = store->printStorePath(i->second.path); + outPath = store->printStorePath(i->second.path(*store, drv.name)); } diff --git a/src/libexpr/primops.cc b/src/libexpr/primops.cc index 9f877f765..30f4c3529 100644 --- a/src/libexpr/primops.cc +++ 
b/src/libexpr/primops.cc @@ -52,7 +52,7 @@ void EvalState::realiseContext(const PathSet & context) DerivationOutputs::iterator i = drv.outputs.find(outputName); if (i == drv.outputs.end()) throw Error("derivation '%s' does not have an output named '%s'", ctxS, outputName); - allowedPaths->insert(store->printStorePath(i->second.path)); + allowedPaths->insert(store->printStorePath(i->second.path(*store, drv.name))); } } } @@ -65,7 +65,7 @@ void EvalState::realiseContext(const PathSet & context) /* For performance, prefetch all substitute info. */ StorePathSet willBuild, willSubstitute, unknown; - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; store->queryMissing(drvs, willBuild, willSubstitute, unknown, downloadSize, narSize); store->buildPaths(drvs); @@ -91,8 +91,17 @@ static void prim_scopedImport(EvalState & state, const Pos & pos, Value * * args Path realPath = state.checkSourcePath(state.toRealPath(path, context)); // FIXME - if (state.store->isStorePath(path) && state.store->isValidPath(state.store->parseStorePath(path)) && isDerivation(path)) { - Derivation drv = readDerivation(*state.store, realPath); + auto isValidDerivationInStore = [&]() -> std::optional { + if (!state.store->isStorePath(path)) + return std::nullopt; + auto storePath = state.store->parseStorePath(path); + if (!(state.store->isValidPath(storePath) && isDerivation(path))) + return std::nullopt; + return storePath; + }; + if (auto optStorePath = isValidDerivationInStore()) { + auto storePath = *optStorePath; + Derivation drv = readDerivation(*state.store, realPath, Derivation::nameFromPath(storePath)); Value & w = *state.allocValue(); state.mkAttrs(w, 3 + drv.outputs.size()); Value * v2 = state.allocAttr(w, state.sDrvPath); @@ -104,9 +113,9 @@ static void prim_scopedImport(EvalState & state, const Pos & pos, Value * * args state.mkList(*outputsVal, drv.outputs.size()); unsigned int outputs_index = 0; - for (const auto & o : drv.outputs) { + for (const auto & o : drv.outputsAndPaths(*state.store)) { v2 = state.allocAttr(w, state.symbols.create(o.first)); - mkString(*v2, state.store->printStorePath(o.second.path), {"!" + o.first + "!" + path}); + mkString(*v2, state.store->printStorePath(o.second.second), {"!" + o.first + "!" + path}); outputsVal->listElems()[outputs_index] = state.allocValue(); mkString(*(outputsVal->listElems()[outputs_index++]), o.first); } @@ -570,9 +579,11 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * /* Build the derivation expression by processing the attributes. */ Derivation drv; + drv.name = drvName; PathSet context; + bool contentAddressed = false; std::optional outputHash; std::string outputHashAlgo; auto ingestionMethod = FileIngestionMethod::Flat; @@ -629,9 +640,14 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * if (i->value->type == tNull) continue; } + if (i->name == state.sContentAddressed) { + settings.requireExperimentalFeature("ca-derivations"); + contentAddressed = state.forceBool(*i->value, pos); + } + /* The `args' attribute is special: it supplies the command-line arguments to the builder. 
*/ - if (i->name == state.sArgs) { + else if (i->name == state.sArgs) { state.forceList(*i->value, pos); for (unsigned int n = 0; n < i->value->listSize(); ++n) { string s = state.coerceToString(posDrvName, *i->value->listElems()[n], context, true); @@ -684,7 +700,7 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * } } catch (Error & e) { - e.addTrace(posDrvName, + e.addTrace(posDrvName, "while evaluating the attribute '%1%' of the derivation '%2%'", key, drvName); throw; @@ -751,7 +767,10 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * }); if (outputHash) { - /* Handle fixed-output derivations. */ + /* Handle fixed-output derivations. + + Ignore `__contentAddressed` because fixed output derivations are + already content addressed. */ if (outputs.size() != 1 || *(outputs.begin()) != "out") throw Error({ .hint = hintfmt("multiple outputs are not supported in fixed-output derivations"), @@ -762,16 +781,30 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * Hash h = newHashAllowEmpty(*outputHash, ht); auto outPath = state.store->makeFixedOutputPath(ingestionMethod, h, drvName); - if (!jsonObject) drv.env["out"] = state.store->printStorePath(outPath); + drv.env["out"] = state.store->printStorePath(outPath); drv.outputs.insert_or_assign("out", DerivationOutput { - .path = std::move(outPath), - .hash = FixedOutputHash { - .method = ingestionMethod, - .hash = std::move(h), - }, + .output = DerivationOutputCAFixed { + .hash = FixedOutputHash { + .method = ingestionMethod, + .hash = std::move(h), + }, + }, }); } + else if (contentAddressed) { + HashType ht = parseHashType(outputHashAlgo); + for (auto & i : outputs) { + drv.env[i] = hashPlaceholder(i); + drv.outputs.insert_or_assign(i, DerivationOutput { + .output = DerivationOutputCAFloating { + .method = ingestionMethod, + .hashType = std::move(ht), + }, + }); + } + } + else { /* Compute a hash over the "masked" store derivation, which is the final one except that in the list of outputs, the @@ -780,29 +813,33 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * that changes in the set of output names do get reflected in the hash. */ for (auto & i : outputs) { - if (!jsonObject) drv.env[i] = ""; + drv.env[i] = ""; drv.outputs.insert_or_assign(i, DerivationOutput { - .path = StorePath::dummy, - .hash = std::optional {}, + .output = DerivationOutputInputAddressed { + .path = StorePath::dummy, + }, }); } - Hash h = hashDerivationModulo(*state.store, Derivation(drv), true); + // Regular, non-CA derivation should always return a single hash and not + // hash per output. + Hash h = std::get<0>(hashDerivationModulo(*state.store, Derivation(drv), true)); for (auto & i : outputs) { auto outPath = state.store->makeOutputPath(i, h, drvName); - if (!jsonObject) drv.env[i] = state.store->printStorePath(outPath); + drv.env[i] = state.store->printStorePath(outPath); drv.outputs.insert_or_assign(i, DerivationOutput { - .path = std::move(outPath), - .hash = std::optional(), + .output = DerivationOutputInputAddressed { + .path = std::move(outPath), + }, }); } } /* Write the resulting term into the Nix store directory. 
*/ - auto drvPath = writeDerivation(state.store, drv, drvName, state.repair); + auto drvPath = writeDerivation(state.store, drv, state.repair); auto drvPathS = state.store->printStorePath(drvPath); printMsg(lvlChatty, "instantiated '%1%' -> '%2%'", drvName, drvPathS); @@ -815,9 +852,9 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * * state.mkAttrs(v, 1 + drv.outputs.size()); mkString(*state.allocAttr(v, state.sDrvPath), drvPathS, {"=" + drvPathS}); - for (auto & i : drv.outputs) { + for (auto & i : drv.outputsAndPaths(*state.store)) { mkString(*state.allocAttr(v, state.symbols.create(i.first)), - state.store->printStorePath(i.second.path), {"!" + i.first + "!" + drvPathS}); + state.store->printStorePath(i.second.second), {"!" + i.first + "!" + drvPathS}); } v.attrs->sort(); } @@ -1111,7 +1148,7 @@ static void prim_toFile(EvalState & state, const Pos & pos, Value * * args, Valu static void addPath(EvalState & state, const Pos & pos, const string & name, const Path & path_, - Value * filterFun, FileIngestionMethod method, const Hash & expectedHash, Value & v) + Value * filterFun, FileIngestionMethod method, const std::optional expectedHash, Value & v) { const auto path = evalSettings.pureEval && expectedHash ? path_ : @@ -1142,7 +1179,7 @@ static void addPath(EvalState & state, const Pos & pos, const string & name, con std::optional expectedStorePath; if (expectedHash) - expectedStorePath = state.store->makeFixedOutputPath(method, expectedHash, name); + expectedStorePath = state.store->makeFixedOutputPath(method, *expectedHash, name); Path dstPath; if (!expectedHash || !state.store->isValidPath(*expectedStorePath)) { dstPath = state.store->printStorePath(settings.readOnlyMode @@ -1176,7 +1213,7 @@ static void prim_filterSource(EvalState & state, const Pos & pos, Value * * args .errPos = pos }); - addPath(state, pos, std::string(baseNameOf(path)), path, args[0], FileIngestionMethod::Recursive, Hash(), v); + addPath(state, pos, std::string(baseNameOf(path)), path, args[0], FileIngestionMethod::Recursive, std::nullopt, v); } static void prim_path(EvalState & state, const Pos & pos, Value * * args, Value & v) @@ -1186,7 +1223,7 @@ static void prim_path(EvalState & state, const Pos & pos, Value * * args, Value string name; Value * filterFun = nullptr; auto method = FileIngestionMethod::Recursive; - Hash expectedHash; + std::optional expectedHash; for (auto & attr : *args[0]->attrs) { const string & n(attr.name); diff --git a/src/libexpr/primops/fetchGit.cc b/src/libexpr/primops/fetchGit.cc deleted file mode 100644 index 5013e74f0..000000000 --- a/src/libexpr/primops/fetchGit.cc +++ /dev/null @@ -1,91 +0,0 @@ -#include "primops.hh" -#include "eval-inline.hh" -#include "store-api.hh" -#include "hash.hh" -#include "fetchers.hh" -#include "url.hh" - -namespace nix { - -static void prim_fetchGit(EvalState & state, const Pos & pos, Value * * args, Value & v) -{ - std::string url; - std::optional ref; - std::optional rev; - std::string name = "source"; - bool fetchSubmodules = false; - PathSet context; - - state.forceValue(*args[0]); - - if (args[0]->type == tAttrs) { - - state.forceAttrs(*args[0], pos); - - for (auto & attr : *args[0]->attrs) { - string n(attr.name); - if (n == "url") - url = state.coerceToString(*attr.pos, *attr.value, context, false, false); - else if (n == "ref") - ref = state.forceStringNoCtx(*attr.value, *attr.pos); - else if (n == "rev") - rev = Hash(state.forceStringNoCtx(*attr.value, *attr.pos), htSHA1); - else if (n == "name") - name = 
state.forceStringNoCtx(*attr.value, *attr.pos); - else if (n == "submodules") - fetchSubmodules = state.forceBool(*attr.value, *attr.pos); - else - throw EvalError({ - .hint = hintfmt("unsupported argument '%s' to 'fetchGit'", attr.name), - .errPos = *attr.pos - }); - } - - if (url.empty()) - throw EvalError({ - .hint = hintfmt("'url' argument required"), - .errPos = pos - }); - - } else - url = state.coerceToString(pos, *args[0], context, false, false); - - // FIXME: git externals probably can be used to bypass the URI - // whitelist. Ah well. - state.checkURI(url); - - if (evalSettings.pureEval && !rev) - throw Error("in pure evaluation mode, 'fetchGit' requires a Git revision"); - - fetchers::Attrs attrs; - attrs.insert_or_assign("type", "git"); - attrs.insert_or_assign("url", url.find("://") != std::string::npos ? url : "file://" + url); - if (ref) attrs.insert_or_assign("ref", *ref); - if (rev) attrs.insert_or_assign("rev", rev->gitRev()); - if (fetchSubmodules) attrs.insert_or_assign("submodules", fetchers::Explicit{true}); - auto input = fetchers::Input::fromAttrs(std::move(attrs)); - - // FIXME: use name? - auto [tree, input2] = input.fetch(state.store); - - state.mkAttrs(v, 8); - auto storePath = state.store->printStorePath(tree.storePath); - mkString(*state.allocAttr(v, state.sOutPath), storePath, PathSet({storePath})); - // Backward compatibility: set 'rev' to - // 0000000000000000000000000000000000000000 for a dirty tree. - auto rev2 = input2.getRev().value_or(Hash(htSHA1)); - mkString(*state.allocAttr(v, state.symbols.create("rev")), rev2.gitRev()); - mkString(*state.allocAttr(v, state.symbols.create("shortRev")), rev2.gitShortRev()); - // Backward compatibility: set 'revCount' to 0 for a dirty tree. - mkInt(*state.allocAttr(v, state.symbols.create("revCount")), - input2.getRevCount().value_or(0)); - mkBool(*state.allocAttr(v, state.symbols.create("submodules")), fetchSubmodules); - v.attrs->sort(); - - if (state.allowedPaths) - state.allowedPaths->insert(tree.actualPath); -} - -static RegisterPrimOp r("fetchGit", 1, prim_fetchGit); - -} diff --git a/src/libexpr/primops/fetchMercurial.cc b/src/libexpr/primops/fetchMercurial.cc index fc2a6a1c2..cef85cfef 100644 --- a/src/libexpr/primops/fetchMercurial.cc +++ b/src/libexpr/primops/fetchMercurial.cc @@ -31,7 +31,7 @@ static void prim_fetchMercurial(EvalState & state, const Pos & pos, Value * * ar // be both a revision or a branch/tag name. 
auto value = state.forceStringNoCtx(*attr.value, *attr.pos); if (std::regex_match(value, revRegex)) - rev = Hash(value, htSHA1); + rev = Hash::parseAny(value, htSHA1); else ref = value; } diff --git a/src/libexpr/primops/fetchTree.cc b/src/libexpr/primops/fetchTree.cc index 6a796f3d3..0dbf4ae1d 100644 --- a/src/libexpr/primops/fetchTree.cc +++ b/src/libexpr/primops/fetchTree.cc @@ -14,7 +14,8 @@ void emitTreeAttrs( EvalState & state, const fetchers::Tree & tree, const fetchers::Input & input, - Value & v) + Value & v, + bool emptyRevFallback) { assert(input.isImmutable()); @@ -34,10 +35,20 @@ void emitTreeAttrs( if (auto rev = input.getRev()) { mkString(*state.allocAttr(v, state.symbols.create("rev")), rev->gitRev()); mkString(*state.allocAttr(v, state.symbols.create("shortRev")), rev->gitShortRev()); + } else if (emptyRevFallback) { + // Backwards compat for `builtins.fetchGit`: dirty repos return an empty sha1 as rev + auto emptyHash = Hash(htSHA1); + mkString(*state.allocAttr(v, state.symbols.create("rev")), emptyHash.gitRev()); + mkString(*state.allocAttr(v, state.symbols.create("shortRev")), emptyHash.gitRev()); } + if (input.getType() == "git") + mkBool(*state.allocAttr(v, state.symbols.create("submodules")), maybeGetBoolAttr(input.attrs, "submodules").value_or(false)); + if (auto revCount = input.getRevCount()) mkInt(*state.allocAttr(v, state.symbols.create("revCount")), *revCount); + else if (emptyRevFallback) + mkInt(*state.allocAttr(v, state.symbols.create("revCount")), 0); if (auto lastModified = input.getLastModified()) { mkInt(*state.allocAttr(v, state.symbols.create("lastModified")), *lastModified); @@ -48,10 +59,26 @@ void emitTreeAttrs( v.attrs->sort(); } -static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, Value & v) +std::string fixURI(std::string uri, EvalState &state) { - settings.requireExperimentalFeature("flakes"); + state.checkURI(uri); + return uri.find("://") != std::string::npos ? uri : "file://" + uri; +} +void addURI(EvalState &state, fetchers::Attrs &attrs, Symbol name, std::string v) +{ + string n(name); + attrs.emplace(name, n == "url" ? 
fixURI(v, state) : v); +} + +static void fetchTree( + EvalState &state, + const Pos &pos, + Value **args, + Value &v, + const std::optional type, + bool emptyRevFallback = false +) { fetchers::Input input; PathSet context; @@ -64,8 +91,15 @@ static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, V for (auto & attr : *args[0]->attrs) { state.forceValue(*attr.value); - if (attr.value->type == tString) - attrs.emplace(attr.name, attr.value->string.s); + if (attr.value->type == tPath || attr.value->type == tString) + addURI( + state, + attrs, + attr.name, + state.coerceToString(*attr.pos, *attr.value, context, false, false) + ); + else if (attr.value->type == tString) + addURI(state, attrs, attr.name, attr.value->string.s); else if (attr.value->type == tBool) attrs.emplace(attr.name, fetchers::Explicit{attr.value->boolean}); else if (attr.value->type == tInt) @@ -75,6 +109,9 @@ static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, V attr.name, showType(*attr.value)); } + if (type) + attrs.emplace("type", type.value()); + if (!attrs.count("type")) throw Error({ .hint = hintfmt("attribute 'type' is missing in call to 'fetchTree'"), @@ -82,8 +119,18 @@ static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, V }); input = fetchers::Input::fromAttrs(std::move(attrs)); - } else - input = fetchers::Input::fromURL(state.coerceToString(pos, *args[0], context, false, false)); + } else { + auto url = fixURI(state.coerceToString(pos, *args[0], context, false, false), state); + + if (type == "git") { + fetchers::Attrs attrs; + attrs.emplace("type", "git"); + attrs.emplace("url", url); + input = fetchers::Input::fromAttrs(std::move(attrs)); + } else { + input = fetchers::Input::fromURL(url); + } + } if (!evalSettings.pureEval && !input.isDirect()) input = lookupInRegistries(state.store, input).first; @@ -96,7 +143,13 @@ static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, V if (state.allowedPaths) state.allowedPaths->insert(tree.actualPath); - emitTreeAttrs(state, tree, input2, v); + emitTreeAttrs(state, tree, input2, v, emptyRevFallback); +} + +static void prim_fetchTree(EvalState & state, const Pos & pos, Value * * args, Value & v) +{ + settings.requireExperimentalFeature("flakes"); + fetchTree(state, pos, args, v, std::nullopt); } static RegisterPrimOp r("fetchTree", 1, prim_fetchTree); @@ -178,7 +231,13 @@ static void prim_fetchTarball(EvalState & state, const Pos & pos, Value * * args fetch(state, pos, args, v, "fetchTarball", true, "source"); } +static void prim_fetchGit(EvalState &state, const Pos &pos, Value **args, Value &v) +{ + fetchTree(state, pos, args, v, "git", true); +} + static RegisterPrimOp r2("__fetchurl", 1, prim_fetchurl); static RegisterPrimOp r3("fetchTarball", 1, prim_fetchTarball); +static RegisterPrimOp r4("fetchGit", 1, prim_fetchGit); } diff --git a/src/libfetchers/fetchers.cc b/src/libfetchers/fetchers.cc index 2b6173df9..eaa635595 100644 --- a/src/libfetchers/fetchers.cc +++ b/src/libfetchers/fetchers.cc @@ -134,7 +134,7 @@ std::pair Input::fetch(ref store) const if (auto prevNarHash = getNarHash()) { if (narHash != *prevNarHash) - throw Error("NAR hash mismatch in input '%s' (%s), expected '%s', got '%s'", + throw Error((unsigned int) 102, "NAR hash mismatch in input '%s' (%s), expected '%s', got '%s'", to_string(), tree.actualPath, prevNarHash->to_string(SRI, true), narHash.to_string(SRI, true)); } @@ -200,9 +200,12 @@ std::string Input::getType() const std::optional 
Input::getNarHash() const { - if (auto s = maybeGetStrAttr(attrs, "narHash")) - // FIXME: require SRI hash. - return newHashAllowEmpty(*s, htSHA256); + if (auto s = maybeGetStrAttr(attrs, "narHash")) { + auto hash = s->empty() ? Hash(htSHA256) : Hash::parseSRI(*s); + if (hash.type != htSHA256) + throw UsageError("narHash must use SHA-256"); + return hash; + } return {}; } @@ -216,7 +219,7 @@ std::optional Input::getRef() const std::optional Input::getRev() const { if (auto s = maybeGetStrAttr(attrs, "rev")) - return Hash(*s, htSHA1); + return Hash::parseAny(*s, htSHA1); return {}; } diff --git a/src/libfetchers/git.cc b/src/libfetchers/git.cc index 5d38e0c2b..5ca0f8521 100644 --- a/src/libfetchers/git.cc +++ b/src/libfetchers/git.cc @@ -121,7 +121,7 @@ struct GitInputScheme : InputScheme args.push_back(*ref); } - if (input.getRev()) throw Error("cloning a specific revision is not implemented"); + if (input.getRev()) throw UnimplementedError("cloning a specific revision is not implemented"); args.push_back(destDir); @@ -269,7 +269,7 @@ struct GitInputScheme : InputScheme // modified dirty file? input.attrs.insert_or_assign( "lastModified", - haveCommits ? std::stoull(runProgram("git", true, { "-C", actualUrl, "log", "-1", "--format=%ct", "HEAD" })) : 0); + haveCommits ? std::stoull(runProgram("git", true, { "-C", actualUrl, "log", "-1", "--format=%ct", "--no-show-signature", "HEAD" })) : 0); return { Tree(store->printStorePath(storePath), std::move(storePath)), @@ -293,14 +293,14 @@ struct GitInputScheme : InputScheme if (!input.getRev()) input.attrs.insert_or_assign("rev", - Hash(chomp(runProgram("git", true, { "-C", actualUrl, "rev-parse", *input.getRef() })), htSHA1).gitRev()); + Hash::parseAny(chomp(runProgram("git", true, { "-C", actualUrl, "rev-parse", *input.getRef() })), htSHA1).gitRev()); repoDir = actualUrl; } else { if (auto res = getCache()->lookup(store, mutableAttrs)) { - auto rev2 = Hash(getStrAttr(res->first, "rev"), htSHA1); + auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), htSHA1); if (!input.getRev() || input.getRev() == rev2) { input.attrs.insert_or_assign("rev", rev2.gitRev()); return makeResult(res->first, std::move(res->second)); @@ -370,7 +370,7 @@ struct GitInputScheme : InputScheme } if (!input.getRev()) - input.attrs.insert_or_assign("rev", Hash(chomp(readFile(localRefFile)), htSHA1).gitRev()); + input.attrs.insert_or_assign("rev", Hash::parseAny(chomp(readFile(localRefFile)), htSHA1).gitRev()); } bool isShallow = chomp(runProgram("git", true, { "-C", repoDir, "rev-parse", "--is-shallow-repository" })) == "true"; @@ -421,7 +421,7 @@ struct GitInputScheme : InputScheme auto storePath = store->addToStore(name, tmpDir, FileIngestionMethod::Recursive, htSHA256, filter); - auto lastModified = std::stoull(runProgram("git", true, { "-C", repoDir, "log", "-1", "--format=%ct", input.getRev()->gitRev() })); + auto lastModified = std::stoull(runProgram("git", true, { "-C", repoDir, "log", "-1", "--format=%ct", "--no-show-signature", input.getRev()->gitRev() })); Attrs infoAttrs({ {"rev", input.getRev()->gitRev()}, diff --git a/src/libfetchers/github.cc b/src/libfetchers/github.cc index 8bb7c2c1d..9f84ffb68 100644 --- a/src/libfetchers/github.cc +++ b/src/libfetchers/github.cc @@ -29,7 +29,7 @@ struct GitArchiveInputScheme : InputScheme if (path.size() == 2) { } else if (path.size() == 3) { if (std::regex_match(path[2], revRegex)) - rev = Hash(path[2], htSHA1); + rev = Hash::parseAny(path[2], htSHA1); else if (std::regex_match(path[2], refRegex)) ref = path[2]; else 
@@ -41,7 +41,7 @@ struct GitArchiveInputScheme : InputScheme if (name == "rev") { if (rev) throw BadURL("URL '%s' contains multiple commit hashes", url.url); - rev = Hash(value, htSHA1); + rev = Hash::parseAny(value, htSHA1); } else if (name == "ref") { if (!std::regex_match(value, refRegex)) @@ -191,7 +191,7 @@ struct GitHubInputScheme : GitArchiveInputScheme readFile( store->toRealPath( downloadFile(store, url, "source", false).storePath))); - auto rev = Hash(std::string { json["sha"] }, htSHA1); + auto rev = Hash::parseAny(std::string { json["sha"] }, htSHA1); debug("HEAD revision for '%s' is %s", url, rev.gitRev()); return rev; } @@ -235,7 +235,7 @@ struct GitLabInputScheme : GitArchiveInputScheme readFile( store->toRealPath( downloadFile(store, url, "source", false).storePath))); - auto rev = Hash(std::string(json[0]["id"]), htSHA1); + auto rev = Hash::parseAny(std::string(json[0]["id"]), htSHA1); debug("HEAD revision for '%s' is %s", url, rev.gitRev()); return rev; } diff --git a/src/libfetchers/indirect.cc b/src/libfetchers/indirect.cc index 91dc83740..b981d4d8e 100644 --- a/src/libfetchers/indirect.cc +++ b/src/libfetchers/indirect.cc @@ -18,7 +18,7 @@ struct IndirectInputScheme : InputScheme if (path.size() == 1) { } else if (path.size() == 2) { if (std::regex_match(path[1], revRegex)) - rev = Hash(path[1], htSHA1); + rev = Hash::parseAny(path[1], htSHA1); else if (std::regex_match(path[1], refRegex)) ref = path[1]; else @@ -29,7 +29,7 @@ struct IndirectInputScheme : InputScheme ref = path[1]; if (!std::regex_match(path[2], revRegex)) throw BadURL("in flake URL '%s', '%s' is not a commit hash", url.url, path[2]); - rev = Hash(path[2], htSHA1); + rev = Hash::parseAny(path[2], htSHA1); } else throw BadURL("GitHub URL '%s' is invalid", url.url); diff --git a/src/libfetchers/mercurial.cc b/src/libfetchers/mercurial.cc index c48cb6fd1..3e76ffc4d 100644 --- a/src/libfetchers/mercurial.cc +++ b/src/libfetchers/mercurial.cc @@ -209,7 +209,7 @@ struct MercurialInputScheme : InputScheme }); if (auto res = getCache()->lookup(store, mutableAttrs)) { - auto rev2 = Hash(getStrAttr(res->first, "rev"), htSHA1); + auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), htSHA1); if (!input.getRev() || input.getRev() == rev2) { input.attrs.insert_or_assign("rev", rev2.gitRev()); return makeResult(res->first, std::move(res->second)); @@ -252,7 +252,7 @@ struct MercurialInputScheme : InputScheme runProgram("hg", true, { "log", "-R", cacheDir, "-r", revOrRef, "--template", "{node} {rev} {branch}" })); assert(tokens.size() == 3); - input.attrs.insert_or_assign("rev", Hash(tokens[0], htSHA1).gitRev()); + input.attrs.insert_or_assign("rev", Hash::parseAny(tokens[0], htSHA1).gitRev()); auto revCount = std::stoull(tokens[1]); input.attrs.insert_or_assign("ref", tokens[2]); diff --git a/src/libfetchers/tarball.cc b/src/libfetchers/tarball.cc index 55158cece..a2d16365e 100644 --- a/src/libfetchers/tarball.cc +++ b/src/libfetchers/tarball.cc @@ -67,8 +67,10 @@ DownloadFileResult downloadFile( StringSink sink; dumpString(*res.data, sink); auto hash = hashString(htSHA256, *res.data); - ValidPathInfo info(store->makeFixedOutputPath(FileIngestionMethod::Flat, hash, name)); - info.narHash = hashString(htSHA256, *sink.s); + ValidPathInfo info { + store->makeFixedOutputPath(FileIngestionMethod::Flat, hash, name), + hashString(htSHA256, *sink.s), + }; info.narSize = sink.s->size(); info.ca = FixedOutputHash { .method = FileIngestionMethod::Flat, diff --git a/src/libmain/progress-bar.cc b/src/libmain/progress-bar.cc 
index 3f7d99a1d..be3c06a38 100644 --- a/src/libmain/progress-bar.cc +++ b/src/libmain/progress-bar.cc @@ -362,7 +362,7 @@ public: auto width = getWindowSize().second; if (width <= 0) width = std::numeric_limits<decltype(width)>::max(); - writeToStderr("\r" + filterANSIEscapes(line, false, width) + "\e[K"); + writeToStderr("\r" + filterANSIEscapes(line, false, width) + ANSI_NORMAL + "\e[K"); } std::string getStatus(State & state) diff --git a/src/libmain/shared.cc b/src/libmain/shared.cc index 52718c231..2b1f25ca3 100644 --- a/src/libmain/shared.cc +++ b/src/libmain/shared.cc @@ -36,7 +36,7 @@ void printGCWarning() void printMissing(ref<Store> store, const std::vector<StorePathWithOutputs> & paths, Verbosity lvl) { - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; StorePathSet willBuild, willSubstitute, unknown; store->queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize); printMissing(store, willBuild, willSubstitute, unknown, downloadSize, narSize, lvl); @@ -45,7 +45,7 @@ void printMissing(ref<Store> store, const std::vector<StorePathWithOutputs> & pa void printMissing(ref<Store> store, const StorePathSet & willBuild, const StorePathSet & willSubstitute, const StorePathSet & unknown, - unsigned long long downloadSize, unsigned long long narSize, Verbosity lvl) + uint64_t downloadSize, uint64_t narSize, Verbosity lvl) { if (!willBuild.empty()) { if (willBuild.size() == 1) @@ -384,7 +384,7 @@ RunPager::~RunPager() } -string showBytes(unsigned long long bytes) +string showBytes(uint64_t bytes) { return (format("%.2f MiB") % (bytes / (1024.0 * 1024.0))).str(); } diff --git a/src/libmain/shared.hh b/src/libmain/shared.hh index f558247c0..ffae5d796 100644 --- a/src/libmain/shared.hh +++ b/src/libmain/shared.hh @@ -47,7 +47,7 @@ void printMissing( void printMissing(ref<Store> store, const StorePathSet & willBuild, const StorePathSet & willSubstitute, const StorePathSet & unknown, - unsigned long long downloadSize, unsigned long long narSize, Verbosity lvl = lvlInfo); + uint64_t downloadSize, uint64_t narSize, Verbosity lvl = lvlInfo); string getArg(const string & opt, Strings::iterator & i, const Strings::iterator & end); @@ -110,7 +110,7 @@ extern volatile ::sig_atomic_t blockInt; /* GC helpers. 
*/ -string showBytes(unsigned long long bytes); +string showBytes(uint64_t bytes); struct GCResults; diff --git a/src/libstore/binary-cache-store.cc b/src/libstore/binary-cache-store.cc index b791c125b..5433fe50d 100644 --- a/src/libstore/binary-cache-store.cc +++ b/src/libstore/binary-cache-store.cc @@ -143,7 +143,7 @@ struct FileSource : FdSource void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource, RepairFlag repair, CheckSigsFlag checkSigs) { - assert(info.narHash && info.narSize); + assert(info.narSize); if (!repair && isValidPath(info.path)) { // FIXME: copyNAR -> null sink @@ -153,6 +153,8 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource auto [fdTemp, fnTemp] = createTempFile(); + AutoDelete autoDelete(fnTemp); + auto now1 = std::chrono::steady_clock::now(); /* Read the NAR simultaneously into a CompressionSink+FileSink (to @@ -167,6 +169,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource TeeSource teeSource(narSource, *compressionSink); narAccessor = makeNarAccessor(teeSource); compressionSink->finish(); + fileSink.flush(); } auto now2 = std::chrono::steady_clock::now(); @@ -178,7 +181,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource auto [fileHash, fileSize] = fileHashSink.finish(); narInfo->fileHash = fileHash; narInfo->fileSize = fileSize; - narInfo->url = "nar/" + narInfo->fileHash.to_string(Base32, false) + ".nar" + narInfo->url = "nar/" + narInfo->fileHash->to_string(Base32, false) + ".nar" + (compression == "xz" ? ".xz" : compression == "bzip2" ? ".bz2" : compression == "br" ? ".br" : @@ -216,7 +219,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource } } - upsertFile(std::string(info.path.to_string()) + ".ls", jsonOut.str(), "application/json"); + upsertFile(std::string(info.path.hashPart()) + ".ls", jsonOut.str(), "application/json"); } /* Optionally maintain an index of DWARF debug info files @@ -280,7 +283,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource if (repair || !fileExists(narInfo->url)) { stats.narWrite++; upsertFile(narInfo->url, - std::make_shared(fnTemp, std::ios_base::in), + std::make_shared(fnTemp, std::ios_base::in | std::ios_base::binary), "application/x-nix-nar"); } else stats.narWriteAverted++; @@ -309,14 +312,10 @@ void BinaryCacheStore::narFromPath(const StorePath & storePath, Sink & sink) { auto info = queryPathInfo(storePath).cast(); - uint64_t narSize = 0; + LengthSink narSize; + TeeSink tee { sink, narSize }; - LambdaSink wrapperSink([&](const unsigned char * data, size_t len) { - sink(data, len); - narSize += len; - }); - - auto decompressor = makeDecompressionSink(info->compression, wrapperSink); + auto decompressor = makeDecompressionSink(info->compression, tee); try { getFile(info->url, *decompressor); @@ -328,7 +327,7 @@ void BinaryCacheStore::narFromPath(const StorePath & storePath, Sink & sink) stats.narRead++; //stats.narReadCompressedBytes += nar->size(); // FIXME - stats.narReadBytes += narSize; + stats.narReadBytes += narSize.length; } void BinaryCacheStore::queryPathInfoUncached(const StorePath & storePath, @@ -372,7 +371,7 @@ StorePath BinaryCacheStore::addToStore(const string & name, const Path & srcPath method for very large paths, but `copyPath' is mainly used for small files. 
*/ StringSink sink; - Hash h; + std::optional h; if (method == FileIngestionMethod::Recursive) { dumpPath(srcPath, sink, filter); h = hashString(hashAlgo, *sink.s); @@ -382,7 +381,10 @@ StorePath BinaryCacheStore::addToStore(const string & name, const Path & srcPath h = hashString(hashAlgo, s); } - ValidPathInfo info(makeFixedOutputPath(method, h, name)); + ValidPathInfo info { + makeFixedOutputPath(method, *h, name), + Hash::dummy, // Will be fixed in addToStore, which recomputes nar hash + }; auto source = StringSource { *sink.s }; addToStore(info, source, repair, CheckSigs); @@ -393,7 +395,10 @@ StorePath BinaryCacheStore::addToStore(const string & name, const Path & srcPath StorePath BinaryCacheStore::addTextToStore(const string & name, const string & s, const StorePathSet & references, RepairFlag repair) { - ValidPathInfo info(computeStorePathForText(name, s, references)); + ValidPathInfo info { + computeStorePathForText(name, s, references), + Hash::dummy, // Will be fixed in addToStore, which recomputes nar hash + }; info.references = references; if (repair || !isValidPath(info.path)) { diff --git a/src/libstore/build.cc b/src/libstore/build.cc index ac2e67574..afb2bb096 100644 --- a/src/libstore/build.cc +++ b/src/libstore/build.cc @@ -297,7 +297,7 @@ public: GoalPtr makeDerivationGoal(const StorePath & drvPath, const StringSet & wantedOutputs, BuildMode buildMode = bmNormal); std::shared_ptr makeBasicDerivationGoal(const StorePath & drvPath, const BasicDerivation & drv, BuildMode buildMode = bmNormal); - GoalPtr makeSubstitutionGoal(const StorePath & storePath, RepairFlag repair = NoRepair); + GoalPtr makeSubstitutionGoal(const StorePath & storePath, RepairFlag repair = NoRepair, std::optional ca = std::nullopt); /* Remove a dead goal. */ void removeGoal(GoalPtr goal); @@ -806,8 +806,8 @@ private: /* RAII object to delete the chroot directory. */ std::shared_ptr autoDelChroot; - /* Whether this is a fixed-output derivation. */ - bool fixedOutput; + /* The sort of derivation we are building. */ + DerivationType derivationType; /* Whether to run the build in a private network namespace. */ bool privateNetwork = false; @@ -1047,7 +1047,7 @@ DerivationGoal::DerivationGoal(const StorePath & drvPath, const BasicDerivation { this->drv = std::make_unique(BasicDerivation(drv)); state = &DerivationGoal::haveDerivation; - name = fmt("building of %s", worker.store.showPaths(drv.outputPaths())); + name = fmt("building of %s", worker.store.showPaths(drv.outputPaths(worker.store))); trace("created"); mcExpectedBuilds = std::make_unique>(worker.expectedBuilds); @@ -1181,8 +1181,8 @@ void DerivationGoal::haveDerivation() retrySubstitution = false; - for (auto & i : drv->outputs) - worker.store.addTempRoot(i.second.path); + for (auto & i : drv->outputsAndPaths(worker.store)) + worker.store.addTempRoot(i.second.second); /* Check what outputs paths are not already valid. */ auto invalidOutputs = checkPathValidity(false, buildMode == bmRepair); @@ -1195,9 +1195,9 @@ void DerivationGoal::haveDerivation() parsedDrv = std::make_unique(drvPath, *drv); - if (parsedDrv->contentAddressed()) { + if (drv->type() == DerivationType::CAFloating) { settings.requireExperimentalFeature("ca-derivations"); - throw Error("ca-derivations isn't implemented yet"); + throw UnimplementedError("ca-derivations isn't implemented yet"); } @@ -1206,7 +1206,7 @@ void DerivationGoal::haveDerivation() them. 
*/ if (settings.useSubstitutes && parsedDrv->substitutesAllowed()) for (auto & i : invalidOutputs) - addWaitee(worker.makeSubstitutionGoal(i, buildMode == bmRepair ? Repair : NoRepair)); + addWaitee(worker.makeSubstitutionGoal(i, buildMode == bmRepair ? Repair : NoRepair, getDerivationCA(*drv))); if (waitees.empty()) /* to prevent hang (no wake-up event) */ outputsSubstituted(); @@ -1288,14 +1288,14 @@ void DerivationGoal::repairClosure() /* Get the output closure. */ StorePathSet outputClosure; - for (auto & i : drv->outputs) { + for (auto & i : drv->outputsAndPaths(worker.store)) { if (!wantOutput(i.first, wantedOutputs)) continue; - worker.store.computeFSClosure(i.second.path, outputClosure); + worker.store.computeFSClosure(i.second.second, outputClosure); } /* Filter out our own outputs (which we have already checked). */ - for (auto & i : drv->outputs) - outputClosure.erase(i.second.path); + for (auto & i : drv->outputsAndPaths(worker.store)) + outputClosure.erase(i.second.second); /* Get all dependencies of this derivation so that we know which derivation is responsible for which path in the output @@ -1306,8 +1306,8 @@ void DerivationGoal::repairClosure() for (auto & i : inputClosure) if (i.isDerivation()) { Derivation drv = worker.store.derivationFromPath(i); - for (auto & j : drv.outputs) - outputsToDrv.insert_or_assign(j.second.path, i); + for (auto & j : drv.outputsAndPaths(worker.store)) + outputsToDrv.insert_or_assign(j.second.second, i); } /* Check each path (slow!). */ @@ -1379,7 +1379,7 @@ void DerivationGoal::inputsRealised() for (auto & j : i.second) { auto k = inDrv.outputs.find(j); if (k != inDrv.outputs.end()) - worker.store.computeFSClosure(k->second.path, inputPaths); + worker.store.computeFSClosure(k->second.path(worker.store, inDrv.name), inputPaths); else throw Error( "derivation '%s' requires non-existent output '%s' from input derivation '%s'", @@ -1392,12 +1392,12 @@ void DerivationGoal::inputsRealised() debug("added input paths %s", worker.store.showPaths(inputPaths)); - /* Is this a fixed-output derivation? */ - fixedOutput = drv->isFixedOutput(); + /* What type of derivation are we building? */ + derivationType = drv->type(); /* Don't repeat fixed-output derivations since they're already verified by their output hash.*/ - nrRounds = fixedOutput ? 1 : settings.buildRepeat + 1; + nrRounds = derivationIsFixed(derivationType) ? 1 : settings.buildRepeat + 1; /* Okay, try to build. Note that here we don't wait for a build slot to become available, since we don't need one if there is a @@ -1432,7 +1432,7 @@ void DerivationGoal::tryToBuild() goal can start a build, and if not, the main loop will sleep a few seconds and then retry this goal. */ PathSet lockFiles; - for (auto & outPath : drv->outputPaths()) + for (auto & outPath : drv->outputPaths(worker.store)) lockFiles.insert(worker.store.Store::toRealPath(outPath)); if (!outputLocks.lockPaths(lockFiles, "", false)) { @@ -1460,22 +1460,22 @@ void DerivationGoal::tryToBuild() return; } - missingPaths = drv->outputPaths(); + missingPaths = drv->outputPaths(worker.store); if (buildMode != bmCheck) for (auto & i : validPaths) missingPaths.erase(i); /* If any of the outputs already exist but are not valid, delete them. 
*/ - for (auto & i : drv->outputs) { - if (worker.store.isValidPath(i.second.path)) continue; - debug("removing invalid path '%s'", worker.store.printStorePath(i.second.path)); - deletePath(worker.store.Store::toRealPath(i.second.path)); + for (auto & i : drv->outputsAndPaths(worker.store)) { + if (worker.store.isValidPath(i.second.second)) continue; + debug("removing invalid path '%s'", worker.store.printStorePath(i.second.second)); + deletePath(worker.store.Store::toRealPath(i.second.second)); } /* Don't do a remote build if the derivation has the attribute `preferLocalBuild' set. Also, check and repair modes are only supported for local builds. */ - bool buildLocally = buildMode != bmNormal || parsedDrv->willBuildLocally(); + bool buildLocally = buildMode != bmNormal || parsedDrv->willBuildLocally(worker.store); /* Is the build hook willing to accept this job? */ if (!buildLocally) { @@ -1646,13 +1646,13 @@ void DerivationGoal::buildDone() So instead, check if the disk is (nearly) full now. If so, we don't mark this build as a permanent failure. */ #if HAVE_STATVFS - unsigned long long required = 8ULL * 1024 * 1024; // FIXME: make configurable + uint64_t required = 8ULL * 1024 * 1024; // FIXME: make configurable struct statvfs st; if (statvfs(worker.store.realStoreDir.c_str(), &st) == 0 && - (unsigned long long) st.f_bavail * st.f_bsize < required) + (uint64_t) st.f_bavail * st.f_bsize < required) diskFull = true; if (statvfs(tmpDir.c_str(), &st) == 0 && - (unsigned long long) st.f_bavail * st.f_bsize < required) + (uint64_t) st.f_bavail * st.f_bsize < required) diskFull = true; #endif @@ -1692,7 +1692,7 @@ void DerivationGoal::buildDone() fmt("running post-build-hook '%s'", settings.postBuildHook), Logger::Fields{worker.store.printStorePath(drvPath)}); PushActivity pact(act.id); - auto outputPaths = drv->outputPaths(); + auto outputPaths = drv->outputPaths(worker.store); std::map hookEnvironment = getEnv(); hookEnvironment.emplace("DRV_PATH", worker.store.printStorePath(drvPath)); @@ -1783,7 +1783,7 @@ void DerivationGoal::buildDone() st = dynamic_cast(&e) ? BuildResult::NotDeterministic : statusOk(status) ? BuildResult::OutputRejected : - fixedOutput || diskFull ? BuildResult::TransientFailure : + derivationIsImpure(derivationType) || diskFull ? BuildResult::TransientFailure : BuildResult::PermanentFailure; } @@ -1919,8 +1919,8 @@ StorePathSet DerivationGoal::exportReferences(const StorePathSet & storePaths) for (auto & j : paths2) { if (j.isDerivation()) { Derivation drv = worker.store.derivationFromPath(j); - for (auto & k : drv.outputs) - worker.store.computeFSClosure(k.second.path, paths); + for (auto & k : drv.outputsAndPaths(worker.store)) + worker.store.computeFSClosure(k.second.second, paths); } } @@ -1964,13 +1964,13 @@ void linkOrCopy(const Path & from, const Path & to) void DerivationGoal::startBuilder() { /* Right platform? 
*/ - if (!parsedDrv->canBuildLocally()) + if (!parsedDrv->canBuildLocally(worker.store)) throw Error("a '%s' with features {%s} is required to build '%s', but I am a '%s' with features {%s}", drv->platform, concatStringsSep(", ", parsedDrv->getRequiredSystemFeatures()), worker.store.printStorePath(drvPath), settings.thisSystem, - concatStringsSep(", ", settings.systemFeatures)); + concatStringsSep(", ", worker.store.systemFeatures)); if (drv->isBuiltin()) preloadNSS(); @@ -1996,7 +1996,7 @@ void DerivationGoal::startBuilder() else if (settings.sandboxMode == smDisabled) useChroot = false; else if (settings.sandboxMode == smRelaxed) - useChroot = !fixedOutput && !noChroot; + useChroot = !(derivationIsImpure(derivationType)) && !noChroot; } if (worker.store.storeDir != worker.store.realStoreDir) { @@ -2014,8 +2014,8 @@ void DerivationGoal::startBuilder() chownToBuilder(tmpDir); /* Substitute output placeholders with the actual output paths. */ - for (auto & output : drv->outputs) - inputRewrites[hashPlaceholder(output.first)] = worker.store.printStorePath(output.second.path); + for (auto & output : drv->outputsAndPaths(worker.store)) + inputRewrites[hashPlaceholder(output.first)] = worker.store.printStorePath(output.second.second); /* Construct the environment passed to the builder. */ initEnv(); @@ -2165,7 +2165,7 @@ void DerivationGoal::startBuilder() "nogroup:x:65534:\n") % sandboxGid).str()); /* Create /etc/hosts with localhost entry. */ - if (!fixedOutput) + if (!(derivationIsImpure(derivationType))) writeFile(chrootRootDir + "/etc/hosts", "127.0.0.1 localhost\n::1 localhost\n"); /* Make the closure of the inputs available in the chroot, @@ -2199,8 +2199,8 @@ void DerivationGoal::startBuilder() rebuilding a path that is in settings.dirsInChroot (typically the dependencies of /bin/sh). Throw them out. */ - for (auto & i : drv->outputs) - dirsInChroot.erase(worker.store.printStorePath(i.second.path)); + for (auto & i : drv->outputsAndPaths(worker.store)) + dirsInChroot.erase(worker.store.printStorePath(i.second.second)); #elif __APPLE__ /* We don't really have any parent prep work to do (yet?) @@ -2373,7 +2373,7 @@ void DerivationGoal::startBuilder() us. */ - if (!fixedOutput) + if (!(derivationIsImpure(derivationType))) privateNetwork = true; userNamespaceSync.create(); @@ -2574,7 +2574,7 @@ void DerivationGoal::initEnv() derivation, tell the builder, so that for instance `fetchurl' can skip checking the output. On older Nixes, this environment variable won't be set, so `fetchurl' will do the check. */ - if (fixedOutput) env["NIX_OUTPUT_CHECKED"] = "1"; + if (derivationIsFixed(derivationType)) env["NIX_OUTPUT_CHECKED"] = "1"; /* *Only* if this is a fixed-output derivation, propagate the values of the environment variables specified in the @@ -2585,7 +2585,7 @@ void DerivationGoal::initEnv() to the builder is generally impure, but the output of fixed-output derivations is by definition pure (since we already know the cryptographic hash of the output). */ - if (fixedOutput) { + if (derivationIsImpure(derivationType)) { for (auto & i : parsedDrv->getStringsAttr("impureEnvVars").value_or(Strings())) env[i] = getEnv(i).value_or(""); } @@ -2612,8 +2612,8 @@ void DerivationGoal::writeStructuredAttrs() /* Add an "outputs" object containing the output paths. 
*/ nlohmann::json outputs; - for (auto & i : drv->outputs) - outputs[i.first] = rewriteStrings(worker.store.printStorePath(i.second.path), inputRewrites); + for (auto & i : drv->outputsAndPaths(worker.store)) + outputs[i.first] = rewriteStrings(worker.store.printStorePath(i.second.second), inputRewrites); json["outputs"] = outputs; /* Handle exportReferencesGraph. */ @@ -2774,7 +2774,7 @@ struct RestrictedStore : public LocalFSStore goal.addDependency(info.path); } - StorePath addToStoreFromDump(const string & dump, const string & name, + StorePath addToStoreFromDump(Source & dump, const string & name, FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair) override { auto path = next->addToStoreFromDump(dump, name, method, hashAlgo, repair); @@ -2815,9 +2815,9 @@ struct RestrictedStore : public LocalFSStore if (!goal.isAllowed(path.path)) throw InvalidPath("cannot build unknown path '%s' in recursive Nix", printStorePath(path.path)); auto drv = derivationFromPath(path.path); - for (auto & output : drv.outputs) + for (auto & output : drv.outputsAndPaths(*this)) if (wantOutput(output.first, path.outputs)) - newPaths.insert(output.second.path); + newPaths.insert(output.second.second); } else if (!goal.isAllowed(path.path)) throw InvalidPath("cannot build unknown path '%s' in recursive Nix", printStorePath(path.path)); } @@ -2851,7 +2851,7 @@ struct RestrictedStore : public LocalFSStore void queryMissing(const std::vector & targets, StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - unsigned long long & downloadSize, unsigned long long & narSize) override + uint64_t & downloadSize, uint64_t & narSize) override { /* This is slightly impure since it leaks information to the client about what paths will be built/substituted or are @@ -2920,7 +2920,8 @@ void DerivationGoal::startDaemon() FdSink to(remote.get()); try { daemon::processConnection(store, from, to, - daemon::NotTrusted, daemon::Recursive, "nobody", 65535); + daemon::NotTrusted, daemon::Recursive, + [&](Store & store) { store.createUser("nobody", 65535); }); debug("terminated daemon connection"); } catch (SysError &) { ignoreException(); @@ -3179,7 +3180,7 @@ void DerivationGoal::runChild() createDirs(chrootRootDir + "/dev/shm"); createDirs(chrootRootDir + "/dev/pts"); ss.push_back("/dev/full"); - if (settings.systemFeatures.get().count("kvm") && pathExists("/dev/kvm")) + if (worker.store.systemFeatures.get().count("kvm") && pathExists("/dev/kvm")) ss.push_back("/dev/kvm"); ss.push_back("/dev/null"); ss.push_back("/dev/random"); @@ -3195,7 +3196,7 @@ void DerivationGoal::runChild() /* Fixed-output derivations typically need to access the network, so give them access to /etc/resolv.conf and so on. 
*/ - if (fixedOutput) { + if (derivationIsImpure(derivationType)) { ss.push_back("/etc/resolv.conf"); // Only use nss functions to resolve hosts and @@ -3436,7 +3437,7 @@ void DerivationGoal::runChild() sandboxProfile += "(import \"sandbox-defaults.sb\")\n"; - if (fixedOutput) + if (derivationIsImpure(derivationType)) sandboxProfile += "(import \"sandbox-network.sb\")\n"; /* Our rwx outputs */ @@ -3579,7 +3580,7 @@ StorePathSet parseReferenceSpecifiers(Store & store, const BasicDerivation & drv if (store.isStorePath(i)) result.insert(store.parseStorePath(i)); else if (drv.outputs.count(i)) - result.insert(drv.outputs.find(i)->second.path); + result.insert(drv.outputs.find(i)->second.path(store, drv.name)); else throw BuildError("derivation contains an illegal reference specifier '%s'", i); } return result; @@ -3616,8 +3617,8 @@ void DerivationGoal::registerOutputs() to do anything here. */ if (hook) { bool allValid = true; - for (auto & i : drv->outputs) - if (!worker.store.isValidPath(i.second.path)) allValid = false; + for (auto & i : drv->outputsAndPaths(worker.store)) + if (!worker.store.isValidPath(i.second.second)) allValid = false; if (allValid) return; } @@ -3638,23 +3639,23 @@ void DerivationGoal::registerOutputs() Nix calls. */ StorePathSet referenceablePaths; for (auto & p : inputPaths) referenceablePaths.insert(p); - for (auto & i : drv->outputs) referenceablePaths.insert(i.second.path); + for (auto & i : drv->outputsAndPaths(worker.store)) referenceablePaths.insert(i.second.second); for (auto & p : addedPaths) referenceablePaths.insert(p); /* Check whether the output paths were created, and grep each output path to determine what other paths it references. Also make all output paths read-only. */ - for (auto & i : drv->outputs) { - auto path = worker.store.printStorePath(i.second.path); - if (!missingPaths.count(i.second.path)) continue; + for (auto & i : drv->outputsAndPaths(worker.store)) { + auto path = worker.store.printStorePath(i.second.second); + if (!missingPaths.count(i.second.second)) continue; Path actualPath = path; if (needsHashRewrite()) { - auto r = redirectedOutputs.find(i.second.path); + auto r = redirectedOutputs.find(i.second.second); if (r != redirectedOutputs.end()) { auto redirected = worker.store.Store::toRealPath(r->second); if (buildMode == bmRepair - && redirectedBadOutputs.count(i.second.path) + && redirectedBadOutputs.count(i.second.second) && pathExists(redirected)) replaceValidPath(path, redirected); if (buildMode == bmCheck) @@ -3721,9 +3722,24 @@ void DerivationGoal::registerOutputs() hash). */ std::optional ca; - if (fixedOutput) { + if (! std::holds_alternative(i.second.first.output)) { + DerivationOutputCAFloating outputHash; + std::visit(overloaded { + [&](DerivationOutputInputAddressed doi) { + assert(false); // Enclosing `if` handles this case in other branch + }, + [&](DerivationOutputCAFixed dof) { + outputHash = DerivationOutputCAFloating { + .method = dof.hash.method, + .hashType = dof.hash.hash.type, + }; + }, + [&](DerivationOutputCAFloating dof) { + outputHash = dof; + }, + }, i.second.first.output); - if (i.second.hash->method == FileIngestionMethod::Flat) { + if (outputHash.method == FileIngestionMethod::Flat) { /* The output path should be a regular file without execute permission. */ if (!S_ISREG(st.st_mode) || (st.st_mode & S_IXUSR) != 0) throw BuildError( @@ -3734,13 +3750,18 @@ void DerivationGoal::registerOutputs() /* Check the hash. In hash mode, move the path produced by the derivation to its content-addressed location. 
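
The std::visit over an overloaded lambda set, as used above to recover the content-addressing information from the DerivationOutput variant, is the dispatch pattern this patch relies on throughout. Below is a sketch under the assumption that the overloaded helper is available from the shared utility headers; the function name is illustrative.

    #include "derivations.hh"
    #include <optional>
    #include <variant>

    using namespace nix;

    // Returns the ingestion method for content-addressed outputs, nothing for
    // input-addressed ones.
    std::optional<FileIngestionMethod> ingestionMethodOf(const DerivationOutput & out)
    {
        return std::visit(overloaded {
            [](DerivationOutputInputAddressed) -> std::optional<FileIngestionMethod> {
                return std::nullopt;        // path derived from inputs, no CA method
            },
            [](DerivationOutputCAFixed dof) -> std::optional<FileIngestionMethod> {
                return dof.hash.method;     // method recorded inside the fixed output hash
            },
            [](DerivationOutputCAFloating dof) -> std::optional<FileIngestionMethod> {
                return dof.method;          // method recorded directly on the output
            },
        }, out.output);
    }
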
*/ - Hash h2 = i.second.hash->method == FileIngestionMethod::Recursive - ? hashPath(*i.second.hash->hash.type, actualPath).first - : hashFile(*i.second.hash->hash.type, actualPath); + Hash h2 = outputHash.method == FileIngestionMethod::Recursive + ? hashPath(outputHash.hashType, actualPath).first + : hashFile(outputHash.hashType, actualPath); - auto dest = worker.store.makeFixedOutputPath(i.second.hash->method, h2, i.second.path.name()); + auto dest = worker.store.makeFixedOutputPath(outputHash.method, h2, i.second.second.name()); - if (i.second.hash->hash != h2) { + // true if either floating CA, or incorrect fixed hash. + bool needsMove = true; + + if (auto p = std::get_if(& i.second.first.output)) { + Hash & h = p->hash.hash; + if (h != h2) { /* Throw an error after registering the path as valid. */ @@ -3748,9 +3769,15 @@ void DerivationGoal::registerOutputs() delayedException = std::make_exception_ptr( BuildError("hash mismatch in fixed-output derivation '%s':\n wanted: %s\n got: %s", worker.store.printStorePath(dest), - i.second.hash->hash.to_string(SRI, true), + h.to_string(SRI, true), h2.to_string(SRI, true))); + } else { + // matched the fixed hash, so no move needed. + needsMove = false; + } + } + if (needsMove) { Path actualDest = worker.store.Store::toRealPath(dest); if (worker.store.isValidPath(dest)) @@ -3770,7 +3797,7 @@ void DerivationGoal::registerOutputs() assert(worker.store.parseStorePath(path) == dest); ca = FixedOutputHash { - .method = i.second.hash->method, + .method = outputHash.method, .hash = h2, }; } @@ -3785,8 +3812,10 @@ void DerivationGoal::registerOutputs() time. The hash is stored in the database so that we can verify later on whether nobody has messed with the store. */ debug("scanning for references inside '%1%'", path); - HashResult hash; - auto references = worker.store.parseStorePathSet(scanForReferences(actualPath, worker.store.printStorePathSet(referenceablePaths), hash)); + // HashResult hash; + auto pathSetAndHash = scanForReferences(actualPath, worker.store.printStorePathSet(referenceablePaths)); + auto references = worker.store.parseStorePathSet(pathSetAndHash.first); + HashResult hash = pathSetAndHash.second; if (buildMode == bmCheck) { if (!worker.store.isValidPath(worker.store.parseStorePath(path))) continue; @@ -3836,8 +3865,10 @@ void DerivationGoal::registerOutputs() worker.markContentsGood(worker.store.parseStorePath(path)); } - ValidPathInfo info(worker.store.parseStorePath(path)); - info.narHash = hash.first; + ValidPathInfo info { + worker.store.parseStorePath(path), + hash.first, + }; info.narSize = hash.second; info.references = std::move(references); info.deriver = drvPath; @@ -3893,8 +3924,8 @@ void DerivationGoal::registerOutputs() /* If this is the first round of several, then move the output out of the way. */ if (nrRounds > 1 && curRound == 1 && curRound < nrRounds && keepPreviousRound) { - for (auto & i : drv->outputs) { - auto path = worker.store.printStorePath(i.second.path); + for (auto & i : drv->outputsAndPaths(worker.store)) { + auto path = worker.store.printStorePath(i.second.second); Path prev = path + checkSuffix; deletePath(prev); Path dst = path + checkSuffix; @@ -3911,8 +3942,8 @@ void DerivationGoal::registerOutputs() /* Remove the .check directories if we're done. FIXME: keep them if the result was not determistic? 
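
registerOutputs() above re-hashes whatever the builder produced, derives the content-addressed destination from that hash, and only skips the move when a declared fixed hash matches. A sketch of the hash-then-derive step, assuming the Store API from this patch; the function and parameter names are illustrative.

    #include "store-api.hh"
    #include "hash.hh"
    #include <string_view>

    nix::StorePath contentAddressedDest(
        nix::Store & store, const nix::Path & actualPath,
        nix::FileIngestionMethod method, nix::HashType ht, std::string_view name)
    {
        nix::Hash got = method == nix::FileIngestionMethod::Recursive
            ? nix::hashPath(ht, actualPath).first    // hash of the NAR serialisation
            : nix::hashFile(ht, actualPath);         // flat hash of a regular file
        return store.makeFixedOutputPath(method, got, name);
    }
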
*/ if (curRound == nrRounds) { - for (auto & i : drv->outputs) { - Path prev = worker.store.printStorePath(i.second.path) + checkSuffix; + for (auto & i : drv->outputsAndPaths(worker.store)) { + Path prev = worker.store.printStorePath(i.second.second) + checkSuffix; deletePath(prev); } } @@ -4210,12 +4241,12 @@ void DerivationGoal::flushLine() StorePathSet DerivationGoal::checkPathValidity(bool returnValid, bool checkHash) { StorePathSet result; - for (auto & i : drv->outputs) { + for (auto & i : drv->outputsAndPaths(worker.store)) { if (!wantOutput(i.first, wantedOutputs)) continue; bool good = - worker.store.isValidPath(i.second.path) && - (!checkHash || worker.pathContentsGood(i.second.path)); - if (good == returnValid) result.insert(i.second.path); + worker.store.isValidPath(i.second.second) && + (!checkHash || worker.pathContentsGood(i.second.second)); + if (good == returnValid) result.insert(i.second.second); } return result; } @@ -4272,6 +4303,10 @@ private: /* The store path that should be realised through a substitute. */ StorePath storePath; + /* The path the substituter refers to the path as. This will be + * different when the stores have different names. */ + std::optional subPath; + /* The remaining substituters. */ std::list> subs; @@ -4305,8 +4340,11 @@ private: typedef void (SubstitutionGoal::*GoalState)(); GoalState state; + /* Content address for recomputing store path */ + std::optional ca; + public: - SubstitutionGoal(const StorePath & storePath, Worker & worker, RepairFlag repair = NoRepair); + SubstitutionGoal(const StorePath & storePath, Worker & worker, RepairFlag repair = NoRepair, std::optional ca = std::nullopt); ~SubstitutionGoal(); void timedOut(Error && ex) override { abort(); }; @@ -4336,10 +4374,11 @@ public: }; -SubstitutionGoal::SubstitutionGoal(const StorePath & storePath, Worker & worker, RepairFlag repair) +SubstitutionGoal::SubstitutionGoal(const StorePath & storePath, Worker & worker, RepairFlag repair, std::optional ca) : Goal(worker) , storePath(storePath) , repair(repair) + , ca(ca) { state = &SubstitutionGoal::init; name = fmt("substitution of '%s'", worker.store.printStorePath(this->storePath)); @@ -4414,14 +4453,18 @@ void SubstitutionGoal::tryNext() sub = subs.front(); subs.pop_front(); - if (sub->storeDir != worker.store.storeDir) { + if (ca) { + subPath = sub->makeFixedOutputPathFromCA(storePath.name(), *ca); + if (sub->storeDir == worker.store.storeDir) + assert(subPath == storePath); + } else if (sub->storeDir != worker.store.storeDir) { tryNext(); return; } try { // FIXME: make async - info = sub->queryPathInfo(storePath); + info = sub->queryPathInfo(subPath ? *subPath : storePath); } catch (InvalidPath &) { tryNext(); return; @@ -4440,6 +4483,19 @@ void SubstitutionGoal::tryNext() throw; } + if (info->path != storePath) { + if (info->isContentAddressed(*sub) && info->references.empty()) { + auto info2 = std::make_shared(*info); + info2->path = storePath; + info = info2; + } else { + printError("asked '%s' for '%s' but got '%s'", + sub->getUri(), worker.store.printStorePath(storePath), sub->printStorePath(info->path)); + tryNext(); + return; + } + } + /* Update the total expected download size. */ auto narInfo = std::dynamic_pointer_cast(info); @@ -4529,7 +4585,7 @@ void SubstitutionGoal::tryToRun() PushActivity pact(act.id); copyStorePath(ref(sub), ref(worker.store.shared_from_this()), - storePath, repair, sub->isTrusted ? NoCheckSigs : CheckSigs); + subPath ? *subPath : storePath, repair, sub->isTrusted ? 
NoCheckSigs : CheckSigs); promise.set_value(); } catch (...) { @@ -4662,11 +4718,11 @@ std::shared_ptr Worker::makeBasicDerivationGoal(const StorePath } -GoalPtr Worker::makeSubstitutionGoal(const StorePath & path, RepairFlag repair) +GoalPtr Worker::makeSubstitutionGoal(const StorePath & path, RepairFlag repair, std::optional ca) { GoalPtr goal = substitutionGoals[path].lock(); // FIXME if (!goal) { - goal = std::make_shared(path, *this, repair); + goal = std::make_shared(path, *this, repair, ca); substitutionGoals.insert_or_assign(path, goal); wakeUp(goal); } @@ -4823,8 +4879,17 @@ void Worker::run(const Goals & _topGoals) waitForInput(); else { if (awake.empty() && 0 == settings.maxBuildJobs) - throw Error("unable to start any build; either increase '--max-jobs' " - "or enable remote builds"); + { + if (getMachines().empty()) + throw Error("unable to start any build; either increase '--max-jobs' " + "or enable remote builds." + "\nhttps://nixos.org/nix/manual/#chap-distributed-builds"); + else + throw Error("unable to start any build; remote machines may not have " + "all required system features." + "\nhttps://nixos.org/nix/manual/#chap-distributed-builds"); + + } assert(!awake.empty()); } } @@ -5008,7 +5073,7 @@ bool Worker::pathContentsGood(const StorePath & path) if (!pathExists(store.printStorePath(path))) res = false; else { - HashResult current = hashPath(*info->narHash.type, store.printStorePath(path)); + HashResult current = hashPath(info->narHash.type, store.printStorePath(path)); Hash nullHash(htSHA256); res = info->narHash == nullHash || info->narHash == current.first; } @@ -5034,7 +5099,7 @@ void Worker::markContentsGood(const StorePath & path) static void primeCache(Store & store, const std::vector & paths) { StorePathSet willBuild, willSubstitute, unknown; - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; store.queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize); if (!willBuild.empty() && 0 == settings.maxBuildJobs && getMachines().empty()) diff --git a/src/libstore/builtins/fetchurl.cc b/src/libstore/builtins/fetchurl.cc index e630cf6f1..4fb5d8a06 100644 --- a/src/libstore/builtins/fetchurl.cc +++ b/src/libstore/builtins/fetchurl.cc @@ -58,17 +58,14 @@ void builtinFetchurl(const BasicDerivation & drv, const std::string & netrcData) } }; - /* We always have one output, and if it's a fixed-output derivation (as - checked below) it must be the only output */ - auto & output = drv.outputs.begin()->second; - /* Try the hashed mirrors first. 
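
The substituter changes above hinge on recomputing the store path from the content address under the substituter's own store directory, which may differ from the local one. A minimal sketch of that idea, assuming the Store API from this patch; the function name is illustrative.

    #include "store-api.hh"
    #include "content-address.hh"
    #include <cassert>

    nix::StorePath pathInSubstituter(
        nix::Store & local, nix::Store & sub,
        const nix::StorePath & storePath, const nix::ContentAddress & ca)
    {
        // Recompute the name under the substituter's store directory.
        auto subPath = sub.makeFixedOutputPathFromCA(storePath.name(), ca);
        if (sub.storeDir == local.storeDir)
            assert(subPath == storePath);   // same storeDir must yield the same path
        return subPath;
    }
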
*/ - if (output.hash && output.hash->method == FileIngestionMethod::Flat) + if (getAttr("outputHashMode") == "flat") for (auto hashedMirror : settings.hashedMirrors.get()) try { if (!hasSuffix(hashedMirror, "/")) hashedMirror += '/'; - auto & h = output.hash->hash; - fetch(hashedMirror + printHashType(*h.type) + "/" + h.to_string(Base16, false)); + std::optional ht = parseHashTypeOpt(getAttr("outputHashAlgo")); + Hash h = newHashAllowEmpty(getAttr("outputHash"), ht); + fetch(hashedMirror + printHashType(h.type) + "/" + h.to_string(Base16, false)); return; } catch (Error & e) { debug(e.what()); diff --git a/src/libstore/content-address.cc b/src/libstore/content-address.cc index 6cb69d0a9..0885c3d0e 100644 --- a/src/libstore/content-address.cc +++ b/src/libstore/content-address.cc @@ -1,9 +1,11 @@ +#include "args.hh" #include "content-address.hh" +#include "split.hh" namespace nix { std::string FixedOutputHash::printMethodAlgo() const { - return makeFileIngestionPrefix(method) + printHashType(*hash.type); + return makeFileIngestionPrefix(method) + printHashType(hash.type); } std::string makeFileIngestionPrefix(const FileIngestionMethod m) { @@ -24,10 +26,6 @@ std::string makeFixedOutputCA(FileIngestionMethod method, const Hash & hash) + hash.to_string(Base32, true); } -// FIXME Put this somewhere? -template struct overloaded : Ts... { using Ts::operator()...; }; -template overloaded(Ts...) -> overloaded; - std::string renderContentAddress(ContentAddress ca) { return std::visit(overloaded { [](TextHash th) { @@ -40,38 +38,46 @@ std::string renderContentAddress(ContentAddress ca) { } ContentAddress parseContentAddress(std::string_view rawCa) { - auto prefixSeparator = rawCa.find(':'); - if (prefixSeparator != string::npos) { - auto prefix = string(rawCa, 0, prefixSeparator); - if (prefix == "text") { - auto hashTypeAndHash = rawCa.substr(prefixSeparator+1, string::npos); - Hash hash = Hash(string(hashTypeAndHash)); - if (*hash.type != htSHA256) { - throw Error("parseContentAddress: the text hash should have type SHA256"); - } - return TextHash { hash }; - } else if (prefix == "fixed") { - // This has to be an inverse of makeFixedOutputCA - auto methodAndHash = rawCa.substr(prefixSeparator+1, string::npos); - if (methodAndHash.substr(0,2) == "r:") { - std::string_view hashRaw = methodAndHash.substr(2,string::npos); - return FixedOutputHash { - .method = FileIngestionMethod::Recursive, - .hash = Hash(string(hashRaw)), - }; - } else { - std::string_view hashRaw = methodAndHash; - return FixedOutputHash { - .method = FileIngestionMethod::Flat, - .hash = Hash(string(hashRaw)), - }; - } - } else { - throw Error("parseContentAddress: format not recognized; has to be text or fixed"); - } - } else { - throw Error("Not a content address because it lacks an appropriate prefix"); + auto rest = rawCa; + + std::string_view prefix; + { + auto optPrefix = splitPrefixTo(rest, ':'); + if (!optPrefix) + throw UsageError("not a content address because it is not in the form ':': %s", rawCa); + prefix = *optPrefix; } + + auto parseHashType_ = [&](){ + auto hashTypeRaw = splitPrefixTo(rest, ':'); + if (!hashTypeRaw) + throw UsageError("content address hash must be in form ':', but found: %s", rawCa); + HashType hashType = parseHashType(*hashTypeRaw); + return std::move(hashType); + }; + + // Switch on prefix + if (prefix == "text") { + // No parsing of the method, "text" only support flat. 
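
The rewritten parser accepts "text:sha256:<hash>" and "fixed:[r:]<algo>:<hash>", where the "r:" marker selects recursive (NAR) ingestion and its absence means flat. A small usage sketch with an all-zero placeholder hash:

    #include "content-address.hh"
    #include <cassert>
    #include <string>
    #include <variant>

    void contentAddressExample()
    {
        using namespace nix;
        // The all-zero base-16 hash is a placeholder, not a real store object.
        auto ca = parseContentAddress("fixed:r:sha256:" + std::string(64, '0'));
        assert(std::get<FixedOutputHash>(ca).method == FileIngestionMethod::Recursive);
    }
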
+ HashType hashType = parseHashType_(); + if (hashType != htSHA256) + throw Error("text content address hash should use %s, but instead uses %s", + printHashType(htSHA256), printHashType(hashType)); + return TextHash { + .hash = Hash::parseNonSRIUnprefixed(rest, std::move(hashType)), + }; + } else if (prefix == "fixed") { + // Parse method + auto method = FileIngestionMethod::Flat; + if (splitPrefix(rest, "r:")) + method = FileIngestionMethod::Recursive; + HashType hashType = parseHashType_(); + return FixedOutputHash { + .method = method, + .hash = Hash::parseNonSRIUnprefixed(rest, std::move(hashType)), + }; + } else + throw UsageError("content address prefix '%s' is unrecognized. Recogonized prefixes are 'text' or 'fixed'", prefix); }; std::optional parseContentAddressOpt(std::string_view rawCaOpt) { diff --git a/src/libstore/daemon.cc b/src/libstore/daemon.cc index db7139374..ad3fe1847 100644 --- a/src/libstore/daemon.cc +++ b/src/libstore/daemon.cc @@ -86,7 +86,7 @@ struct TunnelLogger : public Logger } /* startWork() means that we're starting an operation for which we - want to send out stderr to the client. */ + want to send out stderr to the client. */ void startWork() { auto state(state_.lock()); @@ -173,31 +173,6 @@ struct TunnelSource : BufferedSource } }; -/* If the NAR archive contains a single file at top-level, then save - the contents of the file to `s'. Otherwise barf. */ -struct RetrieveRegularNARSink : ParseSink -{ - bool regular; - string s; - - RetrieveRegularNARSink() : regular(true) { } - - void createDirectory(const Path & path) - { - regular = false; - } - - void receiveContents(unsigned char * data, unsigned int len) - { - s.append((const char *) data, len); - } - - void createSymlink(const Path & path, const string & target) - { - regular = false; - } -}; - struct ClientSettings { bool keepFailed; @@ -375,25 +350,28 @@ static void performOp(TunnelLogger * logger, ref store, } case wopAddToStore: { - std::string s, baseName; + HashType hashAlgo; + std::string baseName; FileIngestionMethod method; { - bool fixed; uint8_t recursive; - from >> baseName >> fixed /* obsolete */ >> recursive >> s; + bool fixed; + uint8_t recursive; + std::string hashAlgoRaw; + from >> baseName >> fixed /* obsolete */ >> recursive >> hashAlgoRaw; if (recursive > (uint8_t) FileIngestionMethod::Recursive) throw Error("unsupported FileIngestionMethod with value of %i; you may need to upgrade nix-daemon", recursive); method = FileIngestionMethod { recursive }; /* Compatibility hack. */ if (!fixed) { - s = "sha256"; + hashAlgoRaw = "sha256"; method = FileIngestionMethod::Recursive; } + hashAlgo = parseHashType(hashAlgoRaw); } - HashType hashAlgo = parseHashType(s); - StringSink savedNAR; - TeeSource savedNARSource(from, savedNAR); - RetrieveRegularNARSink savedRegular; + StringSink saved; + TeeSource savedNARSource(from, saved); + RetrieveRegularNARSink savedRegular { saved }; if (method == FileIngestionMethod::Recursive) { /* Get the entire NAR dump from the client and save it to @@ -407,11 +385,9 @@ static void performOp(TunnelLogger * logger, ref store, logger->startWork(); if (!savedRegular.regular) throw Error("regular file expected"); - auto path = store->addToStoreFromDump( - method == FileIngestionMethod::Recursive ? *savedNAR.s : savedRegular.s, - baseName, - method, - hashAlgo); + // FIXME: try to stream directly from `from`. 
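
With the dump captured into a string, the daemon now hands a StringSource to the Source-based addToStoreFromDump(). A minimal sketch of that call shape, assuming a NAR or flat-file dump is already in memory; the helper name and the chosen defaults are illustrative.

    #include "store-api.hh"
    #include "serialise.hh"

    // `dump` is assumed to already hold a NAR (recursive) or flat-file dump.
    nix::StorePath addDump(nix::Store & store, const std::string & dump, const std::string & name)
    {
        nix::StringSource source { dump };
        // Same argument shape as the daemon handler above.
        return store.addToStoreFromDump(source, name,
            nix::FileIngestionMethod::Recursive, nix::htSHA256, nix::NoRepair);
    }
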
+ StringSource dumpSource { *saved.s }; + auto path = store->addToStoreFromDump(dumpSource, baseName, method, hashAlgo); logger->stopWork(); to << store->printStorePath(path); @@ -475,11 +451,49 @@ static void performOp(TunnelLogger * logger, ref store, case wopBuildDerivation: { auto drvPath = store->parseStorePath(readString(from)); BasicDerivation drv; - readDerivation(from, *store, drv); + readDerivation(from, *store, drv, Derivation::nameFromPath(drvPath)); BuildMode buildMode = (BuildMode) readInt(from); logger->startWork(); - if (!trusted) - throw Error("you are not privileged to build derivations"); + + /* Content-addressed derivations are trustless because their output paths + are verified by their content alone, so any derivation is free to + try to produce such a path. + + Input-addressed derivation output paths, however, are calculated + from the derivation closure that produced them---even knowing the + root derivation is not enough. That the output data actually came + from those derivations is fundamentally unverifiable, but the daemon + trusts itself on that matter. The question instead is whether the + submitted plan has rights to the output paths it wants to fill, and + at least the derivation closure proves that. + + It would have been nice if input-address algorithm merely depended + on the build time closure, rather than depending on the derivation + closure. That would mean input-addressed paths used at build time + would just be trusted and not need their own evidence. This is in + fact fine as the same guarantees would hold *inductively*: either + the remote builder has those paths and already trusts them, or it + needs to build them too and thus their evidence must be provided in + turn. The advantage of this variant algorithm is that the evidence + for input-addressed paths which the remote builder already has + doesn't need to be sent again. + + That said, now that we have floating CA derivations, it is better + that people just migrate to those which also solve this problem, and + others. It's the same migration difficulty with strictly more + benefit. + + Lastly, do note that when we parse fixed-output content-addressed + derivations, we throw out the precomputed output paths and just + store the hashes, so there aren't two competing sources of truth an + attacker could exploit. */ + if (drv.type() == DerivationType::InputAddressed && !trusted) + throw Error("you are not privileged to build input-addressed derivations"); + + /* Make sure that the non-input-addressed derivations that got this far + are in fact content-addressed if we don't trust them. 
*/ + assert(derivationIsCA(drv.type()) || trusted); + auto res = store->buildDerivation(drvPath, drv, buildMode); logger->stopWork(); to << res.status << res.errorMsg; @@ -603,7 +617,7 @@ static void performOp(TunnelLogger * logger, ref store, auto path = store->parseStorePath(readString(from)); logger->startWork(); SubstitutablePathInfos infos; - store->querySubstitutablePathInfos({path}, infos); + store->querySubstitutablePathInfos({{path, std::nullopt}}, infos); logger->stopWork(); auto i = infos.find(path); if (i == infos.end()) @@ -619,10 +633,16 @@ static void performOp(TunnelLogger * logger, ref store, } case wopQuerySubstitutablePathInfos: { - auto paths = readStorePaths(*store, from); - logger->startWork(); SubstitutablePathInfos infos; - store->querySubstitutablePathInfos(paths, infos); + StorePathCAMap pathsMap = {}; + if (GET_PROTOCOL_MINOR(clientVersion) < 22) { + auto paths = readStorePaths(*store, from); + for (auto & path : paths) + pathsMap.emplace(path, std::nullopt); + } else + pathsMap = readStorePathCAMap(*store, from); + logger->startWork(); + store->querySubstitutablePathInfos(pathsMap, infos); logger->stopWork(); to << infos.size(); for (auto & i : infos) { @@ -706,17 +726,18 @@ static void performOp(TunnelLogger * logger, ref store, auto path = store->parseStorePath(readString(from)); logger->startWork(); logger->stopWork(); - dumpPath(store->printStorePath(path), to); + dumpPath(store->toRealPath(path), to); break; } case wopAddToStoreNar: { bool repair, dontCheckSigs; - ValidPathInfo info(store->parseStorePath(readString(from))); + auto path = store->parseStorePath(readString(from)); auto deriver = readString(from); + auto narHash = Hash::parseAny(readString(from), htSHA256); + ValidPathInfo info { path, narHash }; if (deriver != "") info.deriver = store->parseStorePath(deriver); - info.narHash = Hash(readString(from), htSHA256); info.references = readStorePaths(*store, from); from >> info.registrationTime >> info.narSize >> info.ultimate; info.sigs = readStrings(from); @@ -727,24 +748,84 @@ static void performOp(TunnelLogger * logger, ref store, if (!trusted) info.ultimate = false; - std::string saved; - std::unique_ptr source; - if (GET_PROTOCOL_MINOR(clientVersion) >= 21) - source = std::make_unique(from, to); - else { - TeeParseSink tee(from); - parseDump(tee, tee.source); - saved = std::move(*tee.saved.s); - source = std::make_unique(saved); + if (GET_PROTOCOL_MINOR(clientVersion) >= 23) { + + struct FramedSource : Source + { + Source & from; + bool eof = false; + std::vector pending; + size_t pos = 0; + + FramedSource(Source & from) : from(from) + { } + + ~FramedSource() + { + if (!eof) { + while (true) { + auto n = readInt(from); + if (!n) break; + std::vector data(n); + from(data.data(), n); + } + } + } + + size_t read(unsigned char * data, size_t len) override + { + if (eof) throw EndOfFile("reached end of FramedSource"); + + if (pos >= pending.size()) { + size_t len = readInt(from); + if (!len) { + eof = true; + return 0; + } + pending = std::vector(len); + pos = 0; + from(pending.data(), len); + } + + auto n = std::min(len, pending.size() - pos); + memcpy(data, pending.data() + pos, n); + pos += n; + return n; + } + }; + + logger->startWork(); + + { + FramedSource source(from); + store->addToStore(info, source, (RepairFlag) repair, + dontCheckSigs ? 
NoCheckSigs : CheckSigs); + } + + logger->stopWork(); } - logger->startWork(); + else { + std::unique_ptr source; + if (GET_PROTOCOL_MINOR(clientVersion) >= 21) + source = std::make_unique(from, to); + else { + StringSink saved; + TeeSource tee { from, saved }; + ParseSink ether; + parseDump(ether, tee); + source = std::make_unique(std::move(*saved.s)); + } - // FIXME: race if addToStore doesn't read source? - store->addToStore(info, *source, (RepairFlag) repair, - dontCheckSigs ? NoCheckSigs : CheckSigs); + logger->startWork(); + + // FIXME: race if addToStore doesn't read source? + store->addToStore(info, *source, (RepairFlag) repair, + dontCheckSigs ? NoCheckSigs : CheckSigs); + + logger->stopWork(); + } - logger->stopWork(); break; } @@ -754,7 +835,7 @@ static void performOp(TunnelLogger * logger, ref store, targets.push_back(store->parsePathWithOutputs(s)); logger->startWork(); StorePathSet willBuild, willSubstitute, unknown; - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; store->queryMissing(targets, willBuild, willSubstitute, unknown, downloadSize, narSize); logger->stopWork(); writeStorePaths(*store, to, willBuild); @@ -775,8 +856,7 @@ void processConnection( FdSink & to, TrustedFlag trusted, RecursiveFlag recursive, - const std::string & userName, - uid_t userId) + std::function authHook) { auto monitor = !recursive ? std::make_unique(from.fd) : nullptr; @@ -817,15 +897,7 @@ void processConnection( /* If we can't accept clientVersion, then throw an error *here* (not above). */ - -#if 0 - /* Prevent users from doing something very dangerous. */ - if (geteuid() == 0 && - querySetting("build-users-group", "") == "") - throw Error("if you run 'nix-daemon' as root, then you MUST set 'build-users-group'!"); -#endif - - store->createUser(userName, userId); + authHook(*store); tunnelLogger->stopWork(); to.flush(); diff --git a/src/libstore/daemon.hh b/src/libstore/daemon.hh index 266932013..841ace316 100644 --- a/src/libstore/daemon.hh +++ b/src/libstore/daemon.hh @@ -12,7 +12,10 @@ void processConnection( FdSink & to, TrustedFlag trusted, RecursiveFlag recursive, - const std::string & userName, - uid_t userId); + /* Arbitrary hook to check authorization / initialize user data / whatever + after the protocol has been negotiated. The idea is that this function + and everything it calls doesn't know about this stuff, and the + `nix-daemon` handles that instead. 
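
FramedSource above consumes a stream of length-prefixed chunks terminated by a zero-length frame. The sending side is not shown in this hunk; the writer below is only a sketch of what such a peer could look like under that framing assumption.

    #include "serialise.hh"
    #include <algorithm>
    #include <cstdint>
    #include <string_view>

    // Writes `data` as length-prefixed frames followed by a terminating zero length.
    void writeFramed(nix::Sink & to, std::string_view data, size_t chunkSize = 32 * 1024)
    {
        while (!data.empty()) {
            auto n = std::min(chunkSize, data.size());
            to << (uint64_t) n;                              // frame header: payload length
            to((const unsigned char *) data.data(), n);      // payload bytes
            data.remove_prefix(n);
        }
        to << (uint64_t) 0;                                  // zero-length frame ends the stream
    }
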
*/ + std::function authHook); } diff --git a/src/libstore/derivations.cc b/src/libstore/derivations.cc index f325e511a..a9fed2564 100644 --- a/src/libstore/derivations.cc +++ b/src/libstore/derivations.cc @@ -7,12 +7,51 @@ namespace nix { -const StorePath & BasicDerivation::findOutput(const string & id) const +std::optional DerivationOutput::pathOpt(const Store & store, std::string_view drvName) const { - auto i = outputs.find(id); - if (i == outputs.end()) - throw Error("derivation has no output '%s'", id); - return i->second.path; + return std::visit(overloaded { + [](DerivationOutputInputAddressed doi) -> std::optional { + return { doi.path }; + }, + [&](DerivationOutputCAFixed dof) -> std::optional { + return { + store.makeFixedOutputPath(dof.hash.method, dof.hash.hash, drvName) + }; + }, + [](DerivationOutputCAFloating dof) -> std::optional { + return std::nullopt; + }, + }, output); +} + + +bool derivationIsCA(DerivationType dt) { + switch (dt) { + case DerivationType::InputAddressed: return false; + case DerivationType::CAFixed: return true; + case DerivationType::CAFloating: return true; + }; + // Since enums can have non-variant values, but making a `default:` would + // disable exhaustiveness warnings. + assert(false); +} + +bool derivationIsFixed(DerivationType dt) { + switch (dt) { + case DerivationType::InputAddressed: return false; + case DerivationType::CAFixed: return true; + case DerivationType::CAFloating: return false; + }; + assert(false); +} + +bool derivationIsImpure(DerivationType dt) { + switch (dt) { + case DerivationType::InputAddressed: return false; + case DerivationType::CAFixed: return true; + case DerivationType::CAFloating: return false; + }; + assert(false); } @@ -23,7 +62,7 @@ bool BasicDerivation::isBuiltin() const StorePath writeDerivation(ref store, - const Derivation & drv, std::string_view name, RepairFlag repair) + const Derivation & drv, RepairFlag repair) { auto references = drv.inputSrcs; for (auto & i : drv.inputDrvs) @@ -31,7 +70,7 @@ StorePath writeDerivation(ref store, /* Note that the outputs of a derivation are *not* references (that can be missing (of course) and should not necessarily be held during a garbage collection). */ - auto suffix = std::string(name) + drvExtension; + auto suffix = std::string(drv.name) + drvExtension; auto contents = drv.unparse(*store, false); return settings.readOnlyMode ? store->computeStorePathForText(suffix, contents, references) @@ -100,37 +139,57 @@ static StringSet parseStrings(std::istream & str, bool arePaths) } -static DerivationOutput parseDerivationOutput(const Store & store, std::istringstream & str) +static DerivationOutput parseDerivationOutput(const Store & store, + StorePath path, std::string_view hashAlgo, std::string_view hash) { - expect(str, ","); auto path = store.parseStorePath(parsePath(str)); - expect(str, ","); auto hashAlgo = parseString(str); - expect(str, ","); const auto hash = parseString(str); - expect(str, ")"); - - std::optional fsh; if (hashAlgo != "") { auto method = FileIngestionMethod::Flat; if (string(hashAlgo, 0, 2) == "r:") { method = FileIngestionMethod::Recursive; - hashAlgo = string(hashAlgo, 2); + hashAlgo = hashAlgo.substr(2); } const HashType hashType = parseHashType(hashAlgo); - fsh = FixedOutputHash { - .method = std::move(method), - .hash = Hash(hash, hashType), - }; - } - return DerivationOutput { - .path = std::move(path), - .hash = std::move(fsh), - }; + return hash != "" + ? 
DerivationOutput { + .output = DerivationOutputCAFixed { + .hash = FixedOutputHash { + .method = std::move(method), + .hash = Hash::parseNonSRIUnprefixed(hash, hashType), + }, + } + } + : (settings.requireExperimentalFeature("ca-derivations"), + DerivationOutput { + .output = DerivationOutputCAFloating { + .method = std::move(method), + .hashType = std::move(hashType), + }, + }); + } else + return DerivationOutput { + .output = DerivationOutputInputAddressed { + .path = std::move(path), + } + }; +} + +static DerivationOutput parseDerivationOutput(const Store & store, std::istringstream & str) +{ + expect(str, ","); auto path = store.parseStorePath(parsePath(str)); + expect(str, ","); const auto hashAlgo = parseString(str); + expect(str, ","); const auto hash = parseString(str); + expect(str, ")"); + + return parseDerivationOutput(store, std::move(path), hashAlgo, hash); } -static Derivation parseDerivation(const Store & store, std::string && s) +static Derivation parseDerivation(const Store & store, std::string && s, std::string_view name) { Derivation drv; + drv.name = name; + std::istringstream str(std::move(s)); expect(str, "Derive(["); @@ -174,10 +233,10 @@ static Derivation parseDerivation(const Store & store, std::string && s) } -Derivation readDerivation(const Store & store, const Path & drvPath) +Derivation readDerivation(const Store & store, const Path & drvPath, std::string_view name) { try { - return parseDerivation(store, readFile(drvPath)); + return parseDerivation(store, readFile(drvPath), name); } catch (FormatError & e) { throw Error("error parsing derivation '%1%': %2%", drvPath, e.msg()); } @@ -195,7 +254,7 @@ Derivation Store::readDerivation(const StorePath & drvPath) { auto accessor = getFSAccessor(); try { - return parseDerivation(*this, accessor->readFile(printStorePath(drvPath))); + return parseDerivation(*this, accessor->readFile(printStorePath(drvPath)), Derivation::nameFromPath(drvPath)); } catch (FormatError & e) { throw Error("error parsing derivation '%s': %s", printStorePath(drvPath), e.msg()); } @@ -263,10 +322,21 @@ string Derivation::unparse(const Store & store, bool maskOutputs, for (auto & i : outputs) { if (first) first = false; else s += ','; s += '('; printUnquotedString(s, i.first); - s += ','; printUnquotedString(s, maskOutputs ? "" : store.printStorePath(i.second.path)); - s += ','; printUnquotedString(s, i.second.hash ? i.second.hash->printMethodAlgo() : ""); - s += ','; printUnquotedString(s, - i.second.hash ? i.second.hash->hash.to_string(Base16, false) : ""); + s += ','; printUnquotedString(s, maskOutputs ? 
"" : store.printStorePath(i.second.path(store, name))); + std::visit(overloaded { + [&](DerivationOutputInputAddressed doi) { + s += ','; printUnquotedString(s, ""); + s += ','; printUnquotedString(s, ""); + }, + [&](DerivationOutputCAFixed dof) { + s += ','; printUnquotedString(s, dof.hash.printMethodAlgo()); + s += ','; printUnquotedString(s, dof.hash.hash.to_string(Base16, false)); + }, + [&](DerivationOutputCAFloating dof) { + s += ','; printUnquotedString(s, makeFileIngestionPrefix(dof.method) + printHashType(dof.hashType)); + s += ','; printUnquotedString(s, ""); + }, + }, i.second.output); s += ')'; } @@ -318,59 +388,134 @@ bool isDerivation(const string & fileName) } -bool BasicDerivation::isFixedOutput() const +DerivationType BasicDerivation::type() const { - return outputs.size() == 1 && - outputs.begin()->first == "out" && - outputs.begin()->second.hash; + std::set inputAddressedOutputs, fixedCAOutputs, floatingCAOutputs; + std::optional floatingHashType; + for (auto & i : outputs) { + std::visit(overloaded { + [&](DerivationOutputInputAddressed _) { + inputAddressedOutputs.insert(i.first); + }, + [&](DerivationOutputCAFixed _) { + fixedCAOutputs.insert(i.first); + }, + [&](DerivationOutputCAFloating dof) { + floatingCAOutputs.insert(i.first); + if (!floatingHashType) { + floatingHashType = dof.hashType; + } else { + if (*floatingHashType != dof.hashType) + throw Error("All floating outputs must use the same hash type"); + } + }, + }, i.second.output); + } + + if (inputAddressedOutputs.empty() && fixedCAOutputs.empty() && floatingCAOutputs.empty()) { + throw Error("Must have at least one output"); + } else if (! inputAddressedOutputs.empty() && fixedCAOutputs.empty() && floatingCAOutputs.empty()) { + return DerivationType::InputAddressed; + } else if (inputAddressedOutputs.empty() && ! fixedCAOutputs.empty() && floatingCAOutputs.empty()) { + if (fixedCAOutputs.size() > 1) + // FIXME: Experimental feature? + throw Error("Only one fixed output is allowed for now"); + if (*fixedCAOutputs.begin() != "out") + throw Error("Single fixed output must be named \"out\""); + return DerivationType::CAFixed; + } else if (inputAddressedOutputs.empty() && fixedCAOutputs.empty() && ! floatingCAOutputs.empty()) { + return DerivationType::CAFloating; + } else { + throw Error("Can't mix derivation output types"); + } } DrvHashes drvHashes; +/* pathDerivationModulo and hashDerivationModulo are mutually recursive + */ -/* Returns the hash of a derivation modulo fixed-output - subderivations. A fixed-output derivation is a derivation with one - output (`out') for which an expected hash and hash algorithm are - specified (using the `outputHash' and `outputHashAlgo' - attributes). We don't want changes to such derivations to - propagate upwards through the dependency graph, changing output - paths everywhere. +/* Look up the derivation by value and memoize the + `hashDerivationModulo` call. + */ +static const DrvHashModulo & pathDerivationModulo(Store & store, const StorePath & drvPath) +{ + auto h = drvHashes.find(drvPath); + if (h == drvHashes.end()) { + assert(store.isValidPath(drvPath)); + // Cache it + h = drvHashes.insert_or_assign( + drvPath, + hashDerivationModulo( + store, + store.readDerivation(drvPath), + false)).first; + } + return h->second; +} - For instance, if we change the url in a call to the `fetchurl' - function, we do not want to rebuild everything depending on it - (after all, (the hash of) the file being downloaded is unchanged). - So the *output paths* should not change. 
On the other hand, the - *derivation paths* should change to reflect the new dependency - graph. +/* See the header for interface details. These are the implementation details. - That's what this function does: it returns a hash which is just the - hash of the derivation ATerm, except that any input derivation - paths have been replaced by the result of a recursive call to this - function, and that for fixed-output derivations we return a hash of - its output path. */ -Hash hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs) + For fixed-output derivations, each hash in the map is not the + corresponding output's content hash, but a hash of that hash along + with other constant data. The key point is that the value is a pure + function of the output's contents, and there are no preimage attacks + either spoofing an output's contents for a derivation, or + spoofing a derivation for an output's contents. + + For regular derivations, it looks up each subderivation from its hash + and recurs. If the subderivation is also regular, it simply + substitutes the derivation path with its hash. If the subderivation + is fixed-output, however, it takes each output hash and pretends it + is a derivation hash producing a single "out" output. This is so we + don't leak the provenance of fixed outputs, reducing pointless cache + misses as the build itself won't know this. + */ +DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs) { /* Return a fixed hash for fixed-output derivations. */ - if (drv.isFixedOutput()) { - DerivationOutputs::const_iterator i = drv.outputs.begin(); - return hashString(htSHA256, "fixed:out:" - + i->second.hash->printMethodAlgo() + ":" - + i->second.hash->hash.to_string(Base16, false) + ":" - + store.printStorePath(i->second.path)); + switch (drv.type()) { + case DerivationType::CAFloating: + throw Error("Regular input-addressed derivations are not yet allowed to depend on CA derivations"); + case DerivationType::CAFixed: { + std::map outputHashes; + for (const auto & i : drv.outputsAndPaths(store)) { + auto & dof = std::get(i.second.first.output); + auto hash = hashString(htSHA256, "fixed:out:" + + dof.hash.printMethodAlgo() + ":" + + dof.hash.hash.to_string(Base16, false) + ":" + + store.printStorePath(i.second.second)); + outputHashes.insert_or_assign(i.first, std::move(hash)); + } + return outputHashes; + } + case DerivationType::InputAddressed: + break; } /* For other derivations, replace the inputs paths with recursive - calls to this function.*/ + calls to this function. */ std::map inputs2; for (auto & i : drv.inputDrvs) { - auto h = drvHashes.find(i.first); - if (h == drvHashes.end()) { - assert(store.isValidPath(i.first)); - h = drvHashes.insert_or_assign(i.first, hashDerivationModulo(store, - store.readDerivation(i.first), false)).first; - } - inputs2.insert_or_assign(h->second.to_string(Base16, false), i.second); + const auto & res = pathDerivationModulo(store, i.first); + std::visit(overloaded { + // Regular non-CA derivation, replace derivation + [&](Hash drvHash) { + inputs2.insert_or_assign(drvHash.to_string(Base16, false), i.second); + }, + // CA derivation's output hashes + [&](CaOutputHashes outputHashes) { + std::set justOut = { "out" }; + for (auto & output : i.second) { + /* Put each one in with a single "out" output.. 
*/ + const auto h = outputHashes.at(output); + inputs2.insert_or_assign( + h.to_string(Base16, false), + justOut); + } + }, + }, res); } return hashString(htSHA256, drv.unparse(store, maskOutputs, &inputs2)); @@ -391,38 +536,21 @@ bool wantOutput(const string & output, const std::set & wanted) } -StorePathSet BasicDerivation::outputPaths() const +StorePathSet BasicDerivation::outputPaths(const Store & store) const { StorePathSet paths; - for (auto & i : outputs) - paths.insert(i.second.path); + for (auto & i : outputsAndPaths(store)) + paths.insert(i.second.second); return paths; } static DerivationOutput readDerivationOutput(Source & in, const Store & store) { auto path = store.parseStorePath(readString(in)); - auto hashAlgo = readString(in); - auto hash = readString(in); + const auto hashAlgo = readString(in); + const auto hash = readString(in); - std::optional fsh; - if (hashAlgo != "") { - auto method = FileIngestionMethod::Flat; - if (string(hashAlgo, 0, 2) == "r:") { - method = FileIngestionMethod::Recursive; - hashAlgo = string(hashAlgo, 2); - } - auto hashType = parseHashType(hashAlgo); - fsh = FixedOutputHash { - .method = std::move(method), - .hash = Hash(hash, hashType), - }; - } - - return DerivationOutput { - .path = std::move(path), - .hash = std::move(fsh), - }; + return parseDerivationOutput(store, std::move(path), hashAlgo, hash); } StringSet BasicDerivation::outputNames() const @@ -433,9 +561,41 @@ StringSet BasicDerivation::outputNames() const return names; } +DerivationOutputsAndPaths BasicDerivation::outputsAndPaths(const Store & store) const { + DerivationOutputsAndPaths outsAndPaths; + for (auto output : outputs) + outsAndPaths.insert(std::make_pair( + output.first, + std::make_pair(output.second, output.second.path(store, name)) + ) + ); + return outsAndPaths; +} -Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv) +DerivationOutputsAndOptPaths BasicDerivation::outputsAndOptPaths(const Store & store) const { + DerivationOutputsAndOptPaths outsAndOptPaths; + for (auto output : outputs) + outsAndOptPaths.insert(std::make_pair( + output.first, + std::make_pair(output.second, output.second.pathOpt(store, output.first)) + ) + ); + return outsAndOptPaths; +} + +std::string_view BasicDerivation::nameFromPath(const StorePath & drvPath) { + auto nameWithSuffix = drvPath.name(); + constexpr std::string_view extension = ".drv"; + assert(hasSuffix(nameWithSuffix, extension)); + nameWithSuffix.remove_suffix(extension.size()); + return nameWithSuffix; +} + + +Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv, std::string_view name) { + drv.name = name; + drv.outputs.clear(); auto nr = readNum(in); for (size_t n = 0; n < nr; n++) { @@ -462,15 +622,22 @@ Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv) void writeDerivation(Sink & out, const Store & store, const BasicDerivation & drv) { out << drv.outputs.size(); - for (auto & i : drv.outputs) { + for (auto & i : drv.outputsAndPaths(store)) { out << i.first - << store.printStorePath(i.second.path); - if (i.second.hash) { - out << i.second.hash->printMethodAlgo() - << i.second.hash->hash.to_string(Base16, false); - } else { - out << "" << ""; - } + << store.printStorePath(i.second.second); + std::visit(overloaded { + [&](DerivationOutputInputAddressed doi) { + out << "" << ""; + }, + [&](DerivationOutputCAFixed dof) { + out << dof.hash.printMethodAlgo() + << dof.hash.hash.to_string(Base16, false); + }, + [&](DerivationOutputCAFloating dof) { + 
out << (makeFileIngestionPrefix(dof.method) + printHashType(dof.hashType)) + << ""; + }, + }, i.second.first.output); } writeStorePaths(store, out, drv.inputSrcs); out << drv.platform << drv.builder << drv.args; diff --git a/src/libstore/derivations.hh b/src/libstore/derivations.hh index 68c53c1ff..3aae30ab2 100644 --- a/src/libstore/derivations.hh +++ b/src/libstore/derivations.hh @@ -6,6 +6,7 @@ #include "content-address.hh" #include +#include namespace nix { @@ -13,20 +14,87 @@ namespace nix { /* Abstract syntax of derivations. */ +/* The traditional non-fixed-output derivation type. */ +struct DerivationOutputInputAddressed +{ + /* Will need to become `std::optional` once input-addressed + derivations are allowed to depend on cont-addressed derivations */ + StorePath path; +}; + +/* Fixed-output derivations, whose output paths are content addressed + according to that fixed output. */ +struct DerivationOutputCAFixed +{ + FixedOutputHash hash; /* hash used for expected hash computation */ +}; + +/* Floating-output derivations, whose output paths are content addressed, but + not fixed, and so are dynamically calculated from whatever the output ends + up being. */ +struct DerivationOutputCAFloating +{ + /* information used for expected hash computation */ + FileIngestionMethod method; + HashType hashType; +}; + struct DerivationOutput { - StorePath path; - std::optional hash; /* hash used for expected hash computation */ + std::variant< + DerivationOutputInputAddressed, + DerivationOutputCAFixed, + DerivationOutputCAFloating + > output; + std::optional hashAlgoOpt(const Store & store) const; + /* Note, when you use this function you should make sure that you're passing + the right derivation name. When in doubt, you should use the safer + interface provided by BasicDerivation::outputsAndPaths */ + std::optional pathOpt(const Store & store, std::string_view drvName) const; + /* DEPRECATED: Remove after CA drvs are fully implemented */ + StorePath path(const Store & store, std::string_view drvName) const { + auto p = pathOpt(store, drvName); + if (!p) throw UnimplementedError("floating content-addressed derivations are not yet implemented"); + return *p; + } }; typedef std::map DerivationOutputs; +/* These are analogues to the previous DerivationOutputs data type, but they + also contains, for each output, the (optional) store path in which it would + be written. To calculate values of these types, see the corresponding + functions in BasicDerivation */ +typedef std::map> + DerivationOutputsAndPaths; +typedef std::map>> + DerivationOutputsAndOptPaths; + /* For inputs that are sub-derivations, we specify exactly which output IDs we are interested in. */ typedef std::map DerivationInputs; typedef std::map StringPairs; +enum struct DerivationType : uint8_t { + InputAddressed, + CAFixed, + CAFloating, +}; + +/* Do the outputs of the derivation have paths calculated from their content, + or from the derivation itself? */ +bool derivationIsCA(DerivationType); + +/* Is the content of the outputs fixed a-priori via a hash? Never true for + non-CA derivations. */ +bool derivationIsFixed(DerivationType); + +/* Is the derivation impure and needs to access non-deterministic resources, or + pure and can be sandboxed? Note that whether or not we actually sandbox the + derivation is controlled separately. Never true for non-CA derivations. 
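
These predicates are what the earlier build.cc hunks switch to: impure derivations may reach the network, and fixed-output builds are never repeated because their hash already verifies them. A small sketch of that kind of consumer; the policy struct is illustrative and not part of the patch.

    #include "derivations.hh"
    #include <cstddef>

    // Illustrative only.
    struct BuildPolicy
    {
        bool allowNetwork;
        size_t rounds;
    };

    BuildPolicy policyFor(nix::DerivationType type, size_t buildRepeat)
    {
        return BuildPolicy {
            // Impure (today: fixed-output) derivations may access the network.
            .allowNetwork = nix::derivationIsImpure(type),
            // Fixed-output builds are verified by their hash, so never repeat them.
            .rounds = nix::derivationIsFixed(type) ? size_t(1) : buildRepeat + 1,
        };
    }
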
*/ +bool derivationIsImpure(DerivationType); + struct BasicDerivation { DerivationOutputs outputs; /* keyed on symbolic IDs */ @@ -35,24 +103,30 @@ struct BasicDerivation Path builder; Strings args; StringPairs env; + std::string name; BasicDerivation() { } virtual ~BasicDerivation() { }; - /* Return the path corresponding to the output identifier `id' in - the given derivation. */ - const StorePath & findOutput(const std::string & id) const; - bool isBuiltin() const; /* Return the type of this derivation (input-addressed, fixed or floating content-addressed). */ - bool isFixedOutput() const; + DerivationType type() const; /* Return the output paths of a derivation. */ - StorePathSet outputPaths() const; + StorePathSet outputPaths(const Store & store) const; /* Return the output names of a derivation. */ StringSet outputNames() const; + + /* Calculates the maps that contain all the DerivationOutputs, but + augmented with knowledge of the Store paths they would be written into. + The first one of these functions will be removed when the CA work is + completed */ + DerivationOutputsAndPaths outputsAndPaths(const Store & store) const; + DerivationOutputsAndOptPaths outputsAndOptPaths(const Store & store) const; + + static std::string_view nameFromPath(const StorePath & storePath); }; struct Derivation : BasicDerivation @@ -73,18 +147,50 @@ enum RepairFlag : bool { NoRepair = false, Repair = true }; /* Write a derivation to the Nix store, and return its path. */ StorePath writeDerivation(ref store, - const Derivation & drv, std::string_view name, RepairFlag repair = NoRepair); + const Derivation & drv, RepairFlag repair = NoRepair); /* Read a derivation from a file. */ -Derivation readDerivation(const Store & store, const Path & drvPath); +Derivation readDerivation(const Store & store, const Path & drvPath, std::string_view name); // FIXME: remove bool isDerivation(const string & fileName); -Hash hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs); +// known CA drv's output hashes, currently just for fixed-output derivations +// whose output hashes are always known since they are fixed up-front. +typedef std::map CaOutputHashes; + +typedef std::variant< + Hash, // regular DRV normalized hash + CaOutputHashes +> DrvHashModulo; + +/* Returns hashes with the details of fixed-output subderivations + expunged. + + A fixed-output derivation is a derivation whose outputs have a + specified content hash and hash algorithm. (Currently they must have + exactly one output (`out'), which is specified using the `outputHash' + and `outputHashAlgo' attributes, but the algorithm doesn't assume + this.) We don't want changes to such derivations to propagate upwards + through the dependency graph, changing output paths everywhere. + + For instance, if we change the url in a call to the `fetchurl' + function, we do not want to rebuild everything depending on it---after + all, (the hash of) the file being downloaded is unchanged. So the + *output paths* should not change. On the other hand, the *derivation + paths* should change to reflect the new dependency graph. + + For fixed-output derivations, this returns a map from the name of + each output to its hash, unique up to the output's contents. + + For regular derivations, it returns a single hash of the derivation + ATerm, after subderivations have been likewise expunged from that + derivation. + */ +DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs); /* Memoisation of hashDerivationModulo.
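For illustration, an assumed call-site sketch of consuming the DrvHashModulo variant returned by hashDerivationModulo above (and memoised in drvHashes below). Only the names declared in this header are real; the surrounding variables are hypothetical:

    // Assumed usage sketch; 'overloaded' is the visitor helper used elsewhere in this change.
    DrvHashModulo mod = hashDerivationModulo(store, drv, true); // maskOutputs = true
    std::visit(overloaded {
        [&](const Hash & drvAtermHash) {
            // regular derivation: a single hash of the masked ATerm
        },
        [&](const CaOutputHashes & outputHashes) {
            for (auto & [outputName, outputHash] : outputHashes) {
                // fixed-output derivation: one hash per output
            }
        },
    }, mod);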
*/ -typedef std::map DrvHashes; +typedef std::map DrvHashes; extern DrvHashes drvHashes; // FIXME: global, not thread-safe @@ -93,7 +199,7 @@ bool wantOutput(const string & output, const std::set & wanted); struct Source; struct Sink; -Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv); +Source & readDerivation(Source & in, const Store & store, BasicDerivation & drv, std::string_view name); void writeDerivation(Sink & out, const Store & store, const BasicDerivation & drv); std::string hashPlaceholder(const std::string & outputName); diff --git a/src/libstore/export-import.cc b/src/libstore/export-import.cc index 082d0f1d1..ccd466d09 100644 --- a/src/libstore/export-import.cc +++ b/src/libstore/export-import.cc @@ -38,7 +38,7 @@ void Store::exportPath(const StorePath & path, Sink & sink) filesystem corruption from spreading to other machines. Don't complain if the stored hash is zero (unknown). */ Hash hash = hashSink.currentHash().first; - if (hash != info->narHash && info->narHash != Hash(*info->narHash.type)) + if (hash != info->narHash && info->narHash != Hash(info->narHash.type)) throw Error("hash of path '%s' has changed from '%s' to '%s'!", printStorePath(path), info->narHash.to_string(Base32, true), hash.to_string(Base32, true)); @@ -60,32 +60,35 @@ StorePaths Store::importPaths(Source & source, CheckSigsFlag checkSigs) if (n != 1) throw Error("input doesn't look like something created by 'nix-store --export'"); /* Extract the NAR from the source. */ - TeeParseSink tee(source); - parseDump(tee, tee.source); + StringSink saved; + TeeSource tee { source, saved }; + ParseSink ether; + parseDump(ether, tee); uint32_t magic = readInt(source); if (magic != exportMagic) throw Error("Nix archive cannot be imported; wrong format"); - ValidPathInfo info(parseStorePath(readString(source))); + auto path = parseStorePath(readString(source)); //Activity act(*logger, lvlInfo, format("importing path '%s'") % info.path); - info.references = readStorePaths(*this, source); - + auto references = readStorePaths(*this, source); auto deriver = readString(source); + auto narHash = hashString(htSHA256, *saved.s); + + ValidPathInfo info { path, narHash }; if (deriver != "") info.deriver = parseStorePath(deriver); - - info.narHash = hashString(htSHA256, *tee.saved.s); - info.narSize = tee.saved.s->size(); + info.references = references; + info.narSize = saved.s->size(); // Ignore optional legacy signature. if (readInt(source) == 1) readString(source); // Can't use underlying source, which would have been exhausted - auto source = StringSource { *tee.saved.s }; + auto source = StringSource { *saved.s }; addToStore(info, source, NoRepair, checkSigs); res.push_back(info.path); diff --git a/src/libstore/filetransfer.cc b/src/libstore/filetransfer.cc index beb508e67..4149f8155 100644 --- a/src/libstore/filetransfer.cc +++ b/src/libstore/filetransfer.cc @@ -124,7 +124,7 @@ struct curlFileTransfer : public FileTransfer if (requestHeaders) curl_slist_free_all(requestHeaders); try { if (!done) - fail(FileTransferError(Interrupted, "download of '%s' was interrupted", request.uri)); + fail(FileTransferError(Interrupted, nullptr, "download of '%s' was interrupted", request.uri)); } catch (...) 
{ ignoreException(); } @@ -145,6 +145,7 @@ struct curlFileTransfer : public FileTransfer LambdaSink finalSink; std::shared_ptr decompressionSink; + std::optional errorSink; std::exception_ptr writeException; @@ -154,9 +155,19 @@ struct curlFileTransfer : public FileTransfer size_t realSize = size * nmemb; result.bodySize += realSize; - if (!decompressionSink) + if (!decompressionSink) { decompressionSink = makeDecompressionSink(encoding, finalSink); + if (! successfulStatuses.count(getHTTPStatus())) { + // In this case we want to construct a TeeSink, to keep + // the response around (which we figure won't be big + // like an actual download should be) to improve error + // messages. + errorSink = StringSink { }; + } + } + if (errorSink) + (*errorSink)((unsigned char *) contents, realSize); (*decompressionSink)((unsigned char *) contents, realSize); return realSize; @@ -412,16 +423,21 @@ struct curlFileTransfer : public FileTransfer attempt++; + std::shared_ptr response; + if (errorSink) + response = errorSink->s; auto exc = code == CURLE_ABORTED_BY_CALLBACK && _isInterrupted - ? FileTransferError(Interrupted, fmt("%s of '%s' was interrupted", request.verb(), request.uri)) + ? FileTransferError(Interrupted, response, "%s of '%s' was interrupted", request.verb(), request.uri) : httpStatus != 0 ? FileTransferError(err, + response, fmt("unable to %s '%s': HTTP error %d ('%s')", request.verb(), request.uri, httpStatus, statusMsg) + (code == CURLE_OK ? "" : fmt(" (curl error: %s)", curl_easy_strerror(code))) ) : FileTransferError(err, + response, fmt("unable to %s '%s': %s (%d)", request.verb(), request.uri, curl_easy_strerror(code), code)); @@ -679,7 +695,7 @@ struct curlFileTransfer : public FileTransfer auto s3Res = s3Helper.getObject(bucketName, key); FileTransferResult res; if (!s3Res.data) - throw FileTransferError(NotFound, fmt("S3 object '%s' does not exist", request.uri)); + throw FileTransferError(NotFound, nullptr, "S3 object '%s' does not exist", request.uri); res.data = s3Res.data; callback(std::move(res)); #else @@ -824,6 +840,21 @@ void FileTransfer::download(FileTransferRequest && request, Sink & sink) } } +template +FileTransferError::FileTransferError(FileTransfer::Error error, std::shared_ptr response, const Args & ... args) + : Error(args...), error(error), response(response) +{ + const auto hf = hintfmt(args...); + // FIXME: Due to https://github.com/NixOS/nix/issues/3841 we don't know how + // to print different messages for different verbosity levels. For now + // we add some heuristics for detecting when we want to show the response. + if (response && (response->size() < 1024 || response->find("") != string::npos)) { + err.hint = hintfmt("%1%\n\nresponse body:\n\n%2%", normaltxt(hf.str()), *response); + } else { + err.hint = hf; + } +} + bool isUri(const string & s) { if (s.compare(0, 8, "channel:") == 0) return true; diff --git a/src/libstore/filetransfer.hh b/src/libstore/filetransfer.hh index 11dca2fe0..25ade0add 100644 --- a/src/libstore/filetransfer.hh +++ b/src/libstore/filetransfer.hh @@ -103,10 +103,12 @@ class FileTransferError : public Error { public: FileTransfer::Error error; + std::shared_ptr response; // intentionally optional + template - FileTransferError(FileTransfer::Error error, const Args & ... args) - : Error(args...), error(error) - { } + FileTransferError(FileTransfer::Error error, std::shared_ptr response, const Args & ... 
args); + + virtual const char* sname() const override { return "FileTransferError"; } }; bool isUri(const string & s); diff --git a/src/libstore/gc.cc b/src/libstore/gc.cc index aaed5c218..e74382ed2 100644 --- a/src/libstore/gc.cc +++ b/src/libstore/gc.cc @@ -500,7 +500,7 @@ struct LocalStore::GCState StorePathSet alive; bool gcKeepOutputs; bool gcKeepDerivations; - unsigned long long bytesInvalidated; + uint64_t bytesInvalidated; bool moveToTrash = true; bool shouldDelete; GCState(const GCOptions & options, GCResults & results) @@ -518,7 +518,7 @@ bool LocalStore::isActiveTempFile(const GCState & state, void LocalStore::deleteGarbage(GCState & state, const Path & path) { - unsigned long long bytesFreed; + uint64_t bytesFreed; deletePath(path, bytesFreed); state.results.bytesFreed += bytesFreed; } @@ -528,7 +528,7 @@ void LocalStore::deletePathRecursive(GCState & state, const Path & path) { checkInterrupt(); - unsigned long long size = 0; + uint64_t size = 0; auto storePath = maybeParseStorePath(path); if (storePath && isValidPath(*storePath)) { @@ -687,7 +687,7 @@ void LocalStore::removeUnusedLinks(const GCState & state) AutoCloseDir dir(opendir(linksDir.c_str())); if (!dir) throw SysError("opening directory '%1%'", linksDir); - long long actualSize = 0, unsharedSize = 0; + int64_t actualSize = 0, unsharedSize = 0; struct dirent * dirent; while (errno = 0, dirent = readdir(dir.get())) { @@ -717,10 +717,10 @@ void LocalStore::removeUnusedLinks(const GCState & state) struct stat st; if (stat(linksDir.c_str(), &st) == -1) throw SysError("statting '%1%'", linksDir); - long long overhead = st.st_blocks * 512ULL; + auto overhead = st.st_blocks * 512ULL; - printInfo(format("note: currently hard linking saves %.2f MiB") - % ((unsharedSize - actualSize - overhead) / (1024.0 * 1024.0))); + printInfo("note: currently hard linking saves %.2f MiB", + ((unsharedSize - actualSize - overhead) / (1024.0 * 1024.0))); } diff --git a/src/libstore/globals.hh b/src/libstore/globals.hh index d47e0b6b5..e3bb4cf84 100644 --- a/src/libstore/globals.hh +++ b/src/libstore/globals.hh @@ -335,7 +335,7 @@ public: "setuid/setgid bits or with file capabilities."}; #endif - Setting hashedMirrors{this, {"http://tarballs.nixos.org/"}, "hashed-mirrors", + Setting hashedMirrors{this, {}, "hashed-mirrors", "A list of servers used by builtins.fetchurl to fetch files by hash."}; Setting minFree{this, 0, "min-free", diff --git a/src/libstore/legacy-ssh-store.cc b/src/libstore/legacy-ssh-store.cc index a8bd8a972..dc03313f0 100644 --- a/src/libstore/legacy-ssh-store.cc +++ b/src/libstore/legacy-ssh-store.cc @@ -93,6 +93,9 @@ struct LegacySSHStore : public Store try { auto conn(connections->get()); + /* No longer support missing NAR hash */ + assert(GET_PROTOCOL_MINOR(conn->remoteVersion) >= 4); + debug("querying remote host '%s' for info on '%s'", host, printStorePath(path)); conn->to << cmdQueryPathInfos << PathSet{printStorePath(path)}; @@ -100,8 +103,10 @@ struct LegacySSHStore : public Store auto p = readString(conn->from); if (p.empty()) return callback(nullptr); - auto info = std::make_shared(parseStorePath(p)); - assert(path == info->path); + auto path2 = parseStorePath(p); + assert(path == path2); + /* Hash will be set below. FIXME construct ValidPathInfo at end. 
*/ + auto info = std::make_shared(path, Hash::dummy); PathSet references; auto deriver = readString(conn->from); @@ -111,12 +116,14 @@ struct LegacySSHStore : public Store readLongLong(conn->from); // download size info->narSize = readLongLong(conn->from); - if (GET_PROTOCOL_MINOR(conn->remoteVersion) >= 4) { + { auto s = readString(conn->from); - info->narHash = s.empty() ? Hash() : Hash(s); - info->ca = parseContentAddressOpt(readString(conn->from)); - info->sigs = readStrings(conn->from); + if (s == "") + throw Error("NAR hash is now mandatory"); + info->narHash = Hash::parseAnyPrefixed(s); } + info->ca = parseContentAddressOpt(readString(conn->from)); + info->sigs = readStrings(conn->from); auto s = readString(conn->from); assert(s == ""); @@ -202,6 +209,24 @@ struct LegacySSHStore : public Store const StorePathSet & references, RepairFlag repair) override { unsupported("addTextToStore"); } +private: + + void putBuildSettings(Connection & conn) + { + conn.to + << settings.maxSilentTime + << settings.buildTimeout; + if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 2) + conn.to + << settings.maxLogSize; + if (GET_PROTOCOL_MINOR(conn.remoteVersion) >= 3) + conn.to + << settings.buildRepeat + << settings.enforceDeterminism; + } + +public: + BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv, BuildMode buildMode) override { @@ -211,16 +236,8 @@ struct LegacySSHStore : public Store << cmdBuildDerivation << printStorePath(drvPath); writeDerivation(conn->to, *this, drv); - conn->to - << settings.maxSilentTime - << settings.buildTimeout; - if (GET_PROTOCOL_MINOR(conn->remoteVersion) >= 2) - conn->to - << settings.maxLogSize; - if (GET_PROTOCOL_MINOR(conn->remoteVersion) >= 3) - conn->to - << settings.buildRepeat - << settings.enforceDeterminism; + + putBuildSettings(*conn); conn->to.flush(); @@ -234,6 +251,29 @@ struct LegacySSHStore : public Store return status; } + void buildPaths(const std::vector & drvPaths, BuildMode buildMode) override + { + auto conn(connections->get()); + + conn->to << cmdBuildPaths; + Strings ss; + for (auto & p : drvPaths) + ss.push_back(p.to_string(*this)); + conn->to << ss; + + putBuildSettings(*conn); + + conn->to.flush(); + + BuildResult result; + result.status = (BuildResult::Status) readInt(conn->from); + + if (!result.success()) { + conn->from >> result.errorMsg; + throw Error(result.status, result.errorMsg); + } + } + void ensurePath(const StorePath & path) override { unsupported("ensurePath"); } diff --git a/src/libstore/local-store.cc b/src/libstore/local-store.cc index 88ed0dec3..990810b0e 100644 --- a/src/libstore/local-store.cc +++ b/src/libstore/local-store.cc @@ -544,11 +544,8 @@ void LocalStore::checkDerivationOutputs(const StorePath & drvPath, const Derivat std::string drvName(drvPath.name()); drvName = string(drvName, 0, drvName.size() - drvExtension.size()); - auto check = [&](const StorePath & expected, const StorePath & actual, const std::string & varName) + auto envHasRightPath = [&](const StorePath & actual, const std::string & varName) { - if (actual != expected) - throw Error("derivation '%s' has incorrect output '%s', should be '%s'", - printStorePath(drvPath), printStorePath(actual), printStorePath(expected)); auto j = drv.env.find(varName); if (j == drv.env.end() || parseStorePath(j->second) != actual) throw Error("derivation '%s' has incorrect environment variable '%s', should be '%s'", @@ -556,23 +553,34 @@ void LocalStore::checkDerivationOutputs(const StorePath & drvPath, const Derivat }; - if 
(drv.isFixedOutput()) { - DerivationOutputs::const_iterator out = drv.outputs.find("out"); - if (out == drv.outputs.end()) - throw Error("derivation '%s' does not have an output named 'out'", printStorePath(drvPath)); + // Don't need the answer, but do this anyway to assert it is a proper + // combination. The code below is more general and naturally allows + // combinations that are currently prohibited. + drv.type(); - check( - makeFixedOutputPath( - out->second.hash->method, - out->second.hash->hash, - drvName), - out->second.path, "out"); - } - - else { - Hash h = hashDerivationModulo(*this, drv, true); - for (auto & i : drv.outputs) - check(makeOutputPath(i.first, h, drvName), i.second.path, i.first); + std::optional h; + for (auto & i : drv.outputs) { + std::visit(overloaded { + [&](DerivationOutputInputAddressed doia) { + if (!h) { + // somewhat expensive, so we do it lazily + auto temp = hashDerivationModulo(*this, drv, true); + h = std::get(temp); + } + StorePath recomputed = makeOutputPath(i.first, *h, drvName); + if (doia.path != recomputed) + throw Error("derivation '%s' has incorrect output '%s', should be '%s'", + printStorePath(drvPath), printStorePath(doia.path), printStorePath(recomputed)); + envHasRightPath(doia.path, i.first); + }, + [&](DerivationOutputCAFixed dof) { + StorePath path = makeFixedOutputPath(dof.hash.method, dof.hash.hash, drvName); + envHasRightPath(path, i.first); + }, + [&](DerivationOutputCAFloating _) { + throw UnimplementedError("floating CA output derivations are not yet implemented"); + }, + }, i.second.output); } } @@ -610,11 +618,11 @@ uint64_t LocalStore::addValidPath(State & state, registration above is undone. */ if (checkOutputs) checkDerivationOutputs(info.path, drv); - for (auto & i : drv.outputs) { + for (auto & i : drv.outputsAndPaths(*this)) { state.stmtAddDerivationOutput.use() (id) (i.first) - (printStorePath(i.second.path)) + (printStorePath(i.second.second)) .exec(); } } @@ -633,25 +641,28 @@ void LocalStore::queryPathInfoUncached(const StorePath & path, Callback> callback) noexcept { try { - auto info = std::make_shared(path); - callback(retrySQLite>([&]() { auto state(_state.lock()); /* Get the path info.
*/ - auto useQueryPathInfo(state->stmtQueryPathInfo.use()(printStorePath(info->path))); + auto useQueryPathInfo(state->stmtQueryPathInfo.use()(printStorePath(path))); if (!useQueryPathInfo.next()) return std::shared_ptr(); - info->id = useQueryPathInfo.getInt(0); + auto id = useQueryPathInfo.getInt(0); + auto narHash = Hash::dummy; try { - info->narHash = Hash(useQueryPathInfo.getStr(1)); + narHash = Hash::parseAnyPrefixed(useQueryPathInfo.getStr(1)); } catch (BadHash & e) { - throw Error("in valid-path entry for '%s': %s", printStorePath(path), e.what()); + throw Error("invalid-path entry for '%s': %s", printStorePath(path), e.what()); } + auto info = std::make_shared(path, narHash); + + info->id = id; + info->registrationTime = useQueryPathInfo.getInt(2); auto s = (const char *) sqlite3_column_text(state->stmtQueryPathInfo, 3); @@ -846,20 +857,32 @@ StorePathSet LocalStore::querySubstitutablePaths(const StorePathSet & paths) } -void LocalStore::querySubstitutablePathInfos(const StorePathSet & paths, - SubstitutablePathInfos & infos) +void LocalStore::querySubstitutablePathInfos(const StorePathCAMap & paths, SubstitutablePathInfos & infos) { if (!settings.useSubstitutes) return; for (auto & sub : getDefaultSubstituters()) { - if (sub->storeDir != storeDir) continue; for (auto & path : paths) { - if (infos.count(path)) continue; - debug("checking substituter '%s' for path '%s'", sub->getUri(), printStorePath(path)); + auto subPath(path.first); + + // recompute store path so that we can use a different store root + if (path.second) { + subPath = makeFixedOutputPathFromCA(path.first.name(), *path.second); + if (sub->storeDir == storeDir) + assert(subPath == path.first); + if (subPath != path.first) + debug("replaced path '%s' with '%s' for substituter '%s'", printStorePath(path.first), sub->printStorePath(subPath), sub->getUri()); + } else if (sub->storeDir != storeDir) continue; + + debug("checking substituter '%s' for path '%s'", sub->getUri(), sub->printStorePath(subPath)); try { - auto info = sub->queryPathInfo(path); + auto info = sub->queryPathInfo(subPath); + + if (sub->storeDir != storeDir && !(info->isContentAddressed(*sub) && info->references.empty())) + continue; + auto narInfo = std::dynamic_pointer_cast( std::shared_ptr(info)); - infos.insert_or_assign(path, SubstitutablePathInfo{ + infos.insert_or_assign(path.first, SubstitutablePathInfo{ info->deriver, info->references, narInfo ? 
narInfo->fileSize : 0, @@ -964,9 +987,6 @@ const PublicKeys & LocalStore::getPublicKeys() void LocalStore::addToStore(const ValidPathInfo & info, Source & source, RepairFlag repair, CheckSigsFlag checkSigs) { - if (!info.narHash) - throw Error("cannot add path '%s' because it lacks a hash", printStorePath(info.path)); - if (requireSigs && checkSigs && !info.checkSignatures(*this, getPublicKeys())) throw Error("cannot add path '%s' because it lacks a valid signature", printStorePath(info.path)); @@ -1001,11 +1021,7 @@ void LocalStore::addToStore(const ValidPathInfo & info, Source & source, else hashSink = std::make_unique(htSHA256, std::string(info.path.hashPart())); - LambdaSource wrapperSource([&](unsigned char * data, size_t len) -> size_t { - size_t n = source.read(data, len); - (*hashSink)(data, n); - return n; - }); + TeeSource wrapperSource { source, *hashSink }; restorePath(realPath, wrapperSource); @@ -1033,82 +1049,12 @@ void LocalStore::addToStore(const ValidPathInfo & info, Source & source, } -StorePath LocalStore::addToStoreFromDump(const string & dump, const string & name, +StorePath LocalStore::addToStoreFromDump(Source & source0, const string & name, FileIngestionMethod method, HashType hashAlgo, RepairFlag repair) { - Hash h = hashString(hashAlgo, dump); - - auto dstPath = makeFixedOutputPath(method, h, name); - - addTempRoot(dstPath); - - if (repair || !isValidPath(dstPath)) { - - /* The first check above is an optimisation to prevent - unnecessary lock acquisition. */ - - auto realPath = Store::toRealPath(dstPath); - - PathLocks outputLock({realPath}); - - if (repair || !isValidPath(dstPath)) { - - deletePath(realPath); - - autoGC(); - - if (method == FileIngestionMethod::Recursive) { - StringSource source(dump); - restorePath(realPath, source); - } else - writeFile(realPath, dump); - - canonicalisePathMetaData(realPath, -1); - - /* Register the SHA-256 hash of the NAR serialisation of - the path in the database. We may just have computed it - above (if called with recursive == true and hashAlgo == - sha256); otherwise, compute it here. */ - HashResult hash; - if (method == FileIngestionMethod::Recursive) { - hash.first = hashAlgo == htSHA256 ? h : hashString(htSHA256, dump); - hash.second = dump.size(); - } else - hash = hashPath(htSHA256, realPath); - - optimisePath(realPath); // FIXME: combine with hashPath() - - ValidPathInfo info(dstPath); - info.narHash = hash.first; - info.narSize = hash.second; - info.ca = FixedOutputHash { .method = method, .hash = h }; - registerValidPath(info); - } - - outputLock.setDeletion(true); - } - - return dstPath; -} - - -StorePath LocalStore::addToStore(const string & name, const Path & _srcPath, - FileIngestionMethod method, HashType hashAlgo, PathFilter & filter, RepairFlag repair) -{ - Path srcPath(absPath(_srcPath)); - - if (method != FileIngestionMethod::Recursive) - return addToStoreFromDump(readFile(srcPath), name, method, hashAlgo, repair); - - /* For computing the NAR hash. */ - auto sha256Sink = std::make_unique(htSHA256); - - /* For computing the store path. In recursive SHA-256 mode, this - is the same as the NAR hash, so no need to do it again. */ - std::unique_ptr hashSink = - hashAlgo == htSHA256 - ? nullptr - : std::make_unique(hashAlgo); + /* For computing the store path. */ + auto hashSink = std::make_unique(hashAlgo); + TeeSource source { source0, *hashSink }; /* Read the source path into memory, but only if it's up to narBufferSize bytes. 
If it's larger, write it to a temporary @@ -1116,55 +1062,49 @@ StorePath LocalStore::addToStore(const string & name, const Path & _srcPath, destination store path is already valid, we just delete the temporary path. Otherwise, we move it to the destination store path. */ - bool inMemory = true; - std::string nar; + bool inMemory = false; - auto source = sinkToSource([&](Sink & sink) { + std::string dump; - LambdaSink sink2([&](const unsigned char * buf, size_t len) { - (*sha256Sink)(buf, len); - if (hashSink) (*hashSink)(buf, len); - - if (inMemory) { - if (nar.size() + len > settings.narBufferSize) { - inMemory = false; - sink << 1; - sink((const unsigned char *) nar.data(), nar.size()); - nar.clear(); - } else { - nar.append((const char *) buf, len); - } - } - - if (!inMemory) sink(buf, len); - }); - - dumpPath(srcPath, sink2, filter); - }); + /* Fill out buffer, and decide whether we are working strictly in + memory based on whether we break out because the buffer is full + or the original source is empty */ + while (dump.size() < settings.narBufferSize) { + auto oldSize = dump.size(); + constexpr size_t chunkSize = 65536; + auto want = std::min(chunkSize, settings.narBufferSize - oldSize); + dump.resize(oldSize + want); + auto got = 0; + try { + got = source.read((uint8_t *) dump.data() + oldSize, want); + } catch (EndOfFile &) { + inMemory = true; + break; + } + dump.resize(oldSize + got); + } std::unique_ptr delTempDir; Path tempPath; - try { - /* Wait for the source coroutine to give us some dummy - data. This is so that we don't create the temporary - directory if the NAR fits in memory. */ - readInt(*source); + if (!inMemory) { + /* Drain what we pulled so far, and then keep on pulling */ + StringSource dumpSource { dump }; + ChainSource bothSource { dumpSource, source }; auto tempDir = createTempDir(realStoreDir, "add"); delTempDir = std::make_unique(tempDir); tempPath = tempDir + "/x"; - restorePath(tempPath, *source); + if (method == FileIngestionMethod::Recursive) + restorePath(tempPath, bothSource); + else + writeFile(tempPath, bothSource); - } catch (EndOfFile &) { - if (!inMemory) throw; - /* The NAR fits in memory, so we didn't do restorePath(). */ + dump.clear(); } - auto sha256 = sha256Sink->finish(); - - Hash hash = hashSink ? hashSink->finish().first : sha256.first; + auto [hash, size] = hashSink->finish(); auto dstPath = makeFixedOutputPath(method, hash, name); @@ -1186,22 +1126,33 @@ StorePath LocalStore::addToStore(const string & name, const Path & _srcPath, autoGC(); if (inMemory) { + StringSource dumpSource { dump }; /* Restore from the NAR in memory. */ - StringSource source(nar); - restorePath(realPath, source); + if (method == FileIngestionMethod::Recursive) + restorePath(realPath, dumpSource); + else + writeFile(realPath, dumpSource); } else { /* Move the temporary path we restored above. */ if (rename(tempPath.c_str(), realPath.c_str())) throw Error("renaming '%s' to '%s'", tempPath, realPath); } + /* For computing the nar hash. In recursive SHA-256 mode, this + is the same as the store hash, so no need to do it again. 
*/ + auto narHash = std::pair { hash, size }; + if (method != FileIngestionMethod::Recursive || hashAlgo != htSHA256) { + HashSink narSink { htSHA256 }; + dumpPath(realPath, narSink); + narHash = narSink.finish(); + } + canonicalisePathMetaData(realPath, -1); // FIXME: merge into restorePath optimisePath(realPath); - ValidPathInfo info(dstPath); - info.narHash = sha256.first; - info.narSize = sha256.second; + ValidPathInfo info { dstPath, narHash.first }; + info.narSize = narHash.second; info.ca = FixedOutputHash { .method = method, .hash = hash }; registerValidPath(info); } @@ -1243,8 +1194,7 @@ StorePath LocalStore::addTextToStore(const string & name, const string & s, optimisePath(realPath); - ValidPathInfo info(dstPath); - info.narHash = narHash; + ValidPathInfo info { dstPath, narHash }; info.narSize = sink.s->size(); info.references = references; info.ca = TextHash { .hash = hash }; @@ -1359,9 +1309,9 @@ bool LocalStore::verifyStore(bool checkContents, RepairFlag repair) std::unique_ptr hashSink; if (!info->ca || !info->references.count(info->path)) - hashSink = std::make_unique(*info->narHash.type); + hashSink = std::make_unique(info->narHash.type); else - hashSink = std::make_unique(*info->narHash.type, std::string(info->path.hashPart())); + hashSink = std::make_unique(info->narHash.type, std::string(info->path.hashPart())); dumpPath(Store::toRealPath(i), *hashSink); auto current = hashSink->finish(); diff --git a/src/libstore/local-store.hh b/src/libstore/local-store.hh index c0e5d0286..31e6587ac 100644 --- a/src/libstore/local-store.hh +++ b/src/libstore/local-store.hh @@ -29,8 +29,8 @@ struct Derivation; struct OptimiseStats { unsigned long filesLinked = 0; - unsigned long long bytesFreed = 0; - unsigned long long blocksFreed = 0; + uint64_t bytesFreed = 0; + uint64_t blocksFreed = 0; }; @@ -139,22 +139,14 @@ public: StorePathSet querySubstitutablePaths(const StorePathSet & paths) override; - void querySubstitutablePathInfos(const StorePathSet & paths, + void querySubstitutablePathInfos(const StorePathCAMap & paths, SubstitutablePathInfos & infos) override; void addToStore(const ValidPathInfo & info, Source & source, RepairFlag repair, CheckSigsFlag checkSigs) override; - StorePath addToStore(const string & name, const Path & srcPath, - FileIngestionMethod method, HashType hashAlgo, - PathFilter & filter, RepairFlag repair) override; - - /* Like addToStore(), but the contents of the path are contained - in `dump', which is either a NAR serialisation (if recursive == - true) or simply the contents of a regular file (if recursive == - false). 
*/ - StorePath addToStoreFromDump(const string & dump, const string & name, - FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair) override; + StorePath addToStoreFromDump(Source & dump, const string & name, + FileIngestionMethod method, HashType hashAlgo, RepairFlag repair) override; StorePath addTextToStore(const string & name, const string & s, const StorePathSet & references, RepairFlag repair) override; diff --git a/src/libstore/machines.cc b/src/libstore/machines.cc index f848582da..7db2556f4 100644 --- a/src/libstore/machines.cc +++ b/src/libstore/machines.cc @@ -1,6 +1,7 @@ #include "machines.hh" #include "util.hh" #include "globals.hh" +#include "store-api.hh" #include @@ -48,6 +49,29 @@ bool Machine::mandatoryMet(const std::set & features) const { }); } +ref Machine::openStore() const { + Store::Params storeParams; + if (hasPrefix(storeUri, "ssh://")) { + storeParams["max-connections"] = "1"; + storeParams["log-fd"] = "4"; + if (sshKey != "") + storeParams["ssh-key"] = sshKey; + } + { + auto & fs = storeParams["system-features"]; + auto append = [&](auto feats) { + for (auto & f : feats) { + if (fs.size() > 0) fs += ' '; + fs += f; + } + }; + append(supportedFeatures); + append(mandatoryFeatures); + } + + return nix::openStore(storeUri, storeParams); +} + void parseMachines(const std::string & s, Machines & machines) { for (auto line : tokenizeString>(s, "\n;")) { diff --git a/src/libstore/machines.hh b/src/libstore/machines.hh index de92eb924..341d9bd97 100644 --- a/src/libstore/machines.hh +++ b/src/libstore/machines.hh @@ -4,6 +4,8 @@ namespace nix { +class Store; + struct Machine { const string storeUri; @@ -28,6 +30,8 @@ struct Machine { decltype(supportedFeatures) supportedFeatures, decltype(mandatoryFeatures) mandatoryFeatures, decltype(sshPublicHostKey) sshPublicHostKey); + + ref openStore() const; }; typedef std::vector Machines; diff --git a/src/libstore/misc.cc b/src/libstore/misc.cc index e68edb38c..f6aa570bb 100644 --- a/src/libstore/misc.cc +++ b/src/libstore/misc.cc @@ -4,6 +4,7 @@ #include "local-store.hh" #include "store-api.hh" #include "thread-pool.hh" +#include "topo-sort.hh" namespace nix { @@ -108,9 +109,19 @@ void Store::computeFSClosure(const StorePath & startPath, } +std::optional getDerivationCA(const BasicDerivation & drv) +{ + auto out = drv.outputs.find("out"); + if (out != drv.outputs.end()) { + if (auto v = std::get_if(&out->second.output)) + return v->hash; + } + return std::nullopt; +} + void Store::queryMissing(const std::vector & targets, StorePathSet & willBuild_, StorePathSet & willSubstitute_, StorePathSet & unknown_, - unsigned long long & downloadSize_, unsigned long long & narSize_) + uint64_t & downloadSize_, uint64_t & narSize_) { Activity act(*logger, lvlDebug, actUnknown, "querying info about missing paths"); @@ -122,8 +133,8 @@ void Store::queryMissing(const std::vector & targets, { std::unordered_set done; StorePathSet & unknown, & willSubstitute, & willBuild; - unsigned long long & downloadSize; - unsigned long long & narSize; + uint64_t & downloadSize; + uint64_t & narSize; }; struct DrvState @@ -157,7 +168,7 @@ void Store::queryMissing(const std::vector & targets, auto outPath = parseStorePath(outPathS); SubstitutablePathInfos infos; - querySubstitutablePathInfos({outPath}, infos); + querySubstitutablePathInfos({{outPath, getDerivationCA(*drv)}}, infos); if (infos.empty()) { drvState_->lock()->done = true; @@ -196,10 +207,10 @@ void Store::queryMissing(const 
std::vector & targets, ParsedDerivation parsedDrv(StorePath(path.path), *drv); PathSet invalid; - for (auto & j : drv->outputs) + for (auto & j : drv->outputsAndPaths(*this)) if (wantOutput(j.first, path.outputs) - && !isValidPath(j.second.path)) - invalid.insert(printStorePath(j.second.path)); + && !isValidPath(j.second.second)) + invalid.insert(printStorePath(j.second.second)); if (invalid.empty()) return; if (settings.useSubstitutes && parsedDrv.substitutesAllowed()) { @@ -214,7 +225,7 @@ void Store::queryMissing(const std::vector & targets, if (isValidPath(path.path)) return; SubstitutablePathInfos infos; - querySubstitutablePathInfos({path.path}, infos); + querySubstitutablePathInfos({{path.path, std::nullopt}}, infos); if (infos.empty()) { auto state(state_.lock()); @@ -246,41 +257,21 @@ void Store::queryMissing(const std::vector & targets, StorePaths Store::topoSortPaths(const StorePathSet & paths) { - StorePaths sorted; - StorePathSet visited, parents; - - std::function dfsVisit; - - dfsVisit = [&](const StorePath & path, const StorePath * parent) { - if (parents.count(path)) - throw BuildError("cycle detected in the references of '%s' from '%s'", - printStorePath(path), printStorePath(*parent)); - - if (!visited.insert(path).second) return; - parents.insert(path); - - StorePathSet references; - try { - references = queryPathInfo(path)->references; - } catch (InvalidPath &) { - } - - for (auto & i : references) - /* Don't traverse into paths that don't exist. That can - happen due to substitutes for non-existent paths. */ - if (i != path && paths.count(i)) - dfsVisit(i, &path); - - sorted.push_back(path); - parents.erase(path); - }; - - for (auto & i : paths) - dfsVisit(i, nullptr); - - std::reverse(sorted.begin(), sorted.end()); - - return sorted; + return topoSort(paths, + {[&](const StorePath & path) { + StorePathSet references; + try { + references = queryPathInfo(path)->references; + } catch (InvalidPath &) { + } + return references; + }}, + {[&](const StorePath & path, const StorePath & parent) { + return BuildError( + "cycle detected in the references of '%s' from '%s'", + printStorePath(path), + printStorePath(parent)); + }}); } diff --git a/src/libstore/nar-accessor.cc b/src/libstore/nar-accessor.cc index d884a131e..59ec164b6 100644 --- a/src/libstore/nar-accessor.cc +++ b/src/libstore/nar-accessor.cc @@ -79,14 +79,14 @@ struct NarAccessor : public FSAccessor parents.top()->isExecutable = true; } - void preallocateContents(unsigned long long size) override + void preallocateContents(uint64_t size) override { assert(size <= std::numeric_limits::max()); parents.top()->size = (uint64_t) size; parents.top()->start = pos; } - void receiveContents(unsigned char * data, unsigned int len) override + void receiveContents(unsigned char * data, size_t len) override { } void createSymlink(const Path & path, const string & target) override diff --git a/src/libstore/nar-info-disk-cache.cc b/src/libstore/nar-info-disk-cache.cc index 012dea6ea..8541cc51f 100644 --- a/src/libstore/nar-info-disk-cache.cc +++ b/src/libstore/nar-info-disk-cache.cc @@ -189,13 +189,14 @@ public: return {oInvalid, 0}; auto namePart = queryNAR.getStr(1); - auto narInfo = make_ref(StorePath(hashPart + "-" + namePart)); + auto narInfo = make_ref( + StorePath(hashPart + "-" + namePart), + Hash::parseAnyPrefixed(queryNAR.getStr(6))); narInfo->url = queryNAR.getStr(2); narInfo->compression = queryNAR.getStr(3); if (!queryNAR.isNull(4)) - narInfo->fileHash = Hash(queryNAR.getStr(4)); + narInfo->fileHash = 
Hash::parseAnyPrefixed(queryNAR.getStr(4)); narInfo->fileSize = queryNAR.getInt(5); - narInfo->narHash = Hash(queryNAR.getStr(6)); narInfo->narSize = queryNAR.getInt(7); for (auto & r : tokenizeString(queryNAR.getStr(8), " ")) narInfo->references.insert(StorePath(r)); @@ -230,7 +231,7 @@ public: (std::string(info->path.name())) (narInfo ? narInfo->url : "", narInfo != 0) (narInfo ? narInfo->compression : "", narInfo != 0) - (narInfo && narInfo->fileHash ? narInfo->fileHash.to_string(Base32, true) : "", narInfo && narInfo->fileHash) + (narInfo && narInfo->fileHash ? narInfo->fileHash->to_string(Base32, true) : "", narInfo && narInfo->fileHash) (narInfo ? narInfo->fileSize : 0, narInfo != 0 && narInfo->fileSize) (info->narHash.to_string(Base32, true)) (info->narSize) diff --git a/src/libstore/nar-info.cc b/src/libstore/nar-info.cc index 04550ed97..3454f34bb 100644 --- a/src/libstore/nar-info.cc +++ b/src/libstore/nar-info.cc @@ -1,36 +1,37 @@ #include "globals.hh" #include "nar-info.hh" +#include "store-api.hh" namespace nix { NarInfo::NarInfo(const Store & store, const std::string & s, const std::string & whence) - : ValidPathInfo(StorePath(StorePath::dummy)) // FIXME: hack + : ValidPathInfo(StorePath(StorePath::dummy), Hash(Hash::dummy)) // FIXME: hack { auto corrupt = [&]() { - throw Error("NAR info file '%1%' is corrupt", whence); + return Error("NAR info file '%1%' is corrupt", whence); }; auto parseHashField = [&](const string & s) { try { - return Hash(s); + return Hash::parseAnyPrefixed(s); } catch (BadHash &) { - corrupt(); - return Hash(); // never reached + throw corrupt(); } }; bool havePath = false; + bool haveNarHash = false; size_t pos = 0; while (pos < s.size()) { size_t colon = s.find(':', pos); - if (colon == std::string::npos) corrupt(); + if (colon == std::string::npos) throw corrupt(); std::string name(s, pos, colon - pos); size_t eol = s.find('\n', colon + 2); - if (eol == std::string::npos) corrupt(); + if (eol == std::string::npos) throw corrupt(); std::string value(s, colon + 2, eol - colon - 2); @@ -45,16 +46,18 @@ NarInfo::NarInfo(const Store & store, const std::string & s, const std::string & else if (name == "FileHash") fileHash = parseHashField(value); else if (name == "FileSize") { - if (!string2Int(value, fileSize)) corrupt(); + if (!string2Int(value, fileSize)) throw corrupt(); } - else if (name == "NarHash") + else if (name == "NarHash") { narHash = parseHashField(value); + haveNarHash = true; + } else if (name == "NarSize") { - if (!string2Int(value, narSize)) corrupt(); + if (!string2Int(value, narSize)) throw corrupt(); } else if (name == "References") { auto refs = tokenizeString(value, " "); - if (!references.empty()) corrupt(); + if (!references.empty()) throw corrupt(); for (auto & r : refs) references.insert(StorePath(r)); } @@ -67,7 +70,7 @@ NarInfo::NarInfo(const Store & store, const std::string & s, const std::string & else if (name == "Sig") sigs.insert(value); else if (name == "CA") { - if (ca) corrupt(); + if (ca) throw corrupt(); // FIXME: allow blank ca or require skipping field? 
ca = parseContentAddressOpt(value); } @@ -77,7 +80,7 @@ NarInfo::NarInfo(const Store & store, const std::string & s, const std::string & if (compression == "") compression = "bzip2"; - if (!havePath || url.empty() || narSize == 0 || !narHash) corrupt(); + if (!havePath || !haveNarHash || url.empty() || narSize == 0) throw corrupt(); } std::string NarInfo::to_string(const Store & store) const @@ -87,8 +90,8 @@ std::string NarInfo::to_string(const Store & store) const res += "URL: " + url + "\n"; assert(compression != ""); res += "Compression: " + compression + "\n"; - assert(fileHash.type == htSHA256); - res += "FileHash: " + fileHash.to_string(Base32, true) + "\n"; + assert(fileHash && fileHash->type == htSHA256); + res += "FileHash: " + fileHash->to_string(Base32, true) + "\n"; res += "FileSize: " + std::to_string(fileSize) + "\n"; assert(narHash.type == htSHA256); res += "NarHash: " + narHash.to_string(Base32, true) + "\n"; diff --git a/src/libstore/nar-info.hh b/src/libstore/nar-info.hh index 373c33427..39ced76e5 100644 --- a/src/libstore/nar-info.hh +++ b/src/libstore/nar-info.hh @@ -2,20 +2,22 @@ #include "types.hh" #include "hash.hh" -#include "store-api.hh" +#include "path-info.hh" namespace nix { +class Store; + struct NarInfo : ValidPathInfo { std::string url; std::string compression; - Hash fileHash; + std::optional fileHash; uint64_t fileSize = 0; std::string system; NarInfo() = delete; - NarInfo(StorePath && path) : ValidPathInfo(std::move(path)) { } + NarInfo(StorePath && path, Hash narHash) : ValidPathInfo(std::move(path), narHash) { } NarInfo(const ValidPathInfo & info) : ValidPathInfo(info) { } NarInfo(const Store & store, const std::string & s, const std::string & whence); diff --git a/src/libstore/optimise-store.cc b/src/libstore/optimise-store.cc index b2b2412a3..e4b4b6213 100644 --- a/src/libstore/optimise-store.cc +++ b/src/libstore/optimise-store.cc @@ -282,7 +282,7 @@ void LocalStore::optimiseStore(OptimiseStats & stats) } } -static string showBytes(unsigned long long bytes) +static string showBytes(uint64_t bytes) { return (format("%.2f MiB") % (bytes / (1024.0 * 1024.0))).str(); } diff --git a/src/libstore/parsed-derivations.cc b/src/libstore/parsed-derivations.cc index c7797b730..e7b7202d4 100644 --- a/src/libstore/parsed-derivations.cc +++ b/src/libstore/parsed-derivations.cc @@ -94,7 +94,7 @@ StringSet ParsedDerivation::getRequiredSystemFeatures() const return res; } -bool ParsedDerivation::canBuildLocally() const +bool ParsedDerivation::canBuildLocally(Store & localStore) const { if (drv.platform != settings.thisSystem.get() && !settings.extraPlatforms.get().count(drv.platform) @@ -102,14 +102,14 @@ bool ParsedDerivation::canBuildLocally() const return false; for (auto & feature : getRequiredSystemFeatures()) - if (!settings.systemFeatures.get().count(feature)) return false; + if (!localStore.systemFeatures.get().count(feature)) return false; return true; } -bool ParsedDerivation::willBuildLocally() const +bool ParsedDerivation::willBuildLocally(Store & localStore) const { - return getBoolAttr("preferLocalBuild") && canBuildLocally(); + return getBoolAttr("preferLocalBuild") && canBuildLocally(localStore); } bool ParsedDerivation::substitutesAllowed() const @@ -117,9 +117,4 @@ bool ParsedDerivation::substitutesAllowed() const return getBoolAttr("allowSubstitutes", true); } -bool ParsedDerivation::contentAddressed() const -{ - return getBoolAttr("__contentAddressed", false); -} - } diff --git a/src/libstore/parsed-derivations.hh 
b/src/libstore/parsed-derivations.hh index 0b8e8d031..3fa09f34f 100644 --- a/src/libstore/parsed-derivations.hh +++ b/src/libstore/parsed-derivations.hh @@ -29,13 +29,11 @@ public: StringSet getRequiredSystemFeatures() const; - bool canBuildLocally() const; + bool canBuildLocally(Store & localStore) const; - bool willBuildLocally() const; + bool willBuildLocally(Store & localStore) const; bool substitutesAllowed() const; - - bool contentAddressed() const; }; } diff --git a/src/libstore/path-info.hh b/src/libstore/path-info.hh new file mode 100644 index 000000000..8ff5c466e --- /dev/null +++ b/src/libstore/path-info.hh @@ -0,0 +1,112 @@ +#pragma once + +#include "crypto.hh" +#include "path.hh" +#include "hash.hh" +#include "content-address.hh" + +#include +#include + +namespace nix { + + +class Store; + + +struct SubstitutablePathInfo +{ + std::optional deriver; + StorePathSet references; + uint64_t downloadSize; /* 0 = unknown or inapplicable */ + uint64_t narSize; /* 0 = unknown */ +}; + +typedef std::map SubstitutablePathInfos; + + +struct ValidPathInfo +{ + StorePath path; + std::optional deriver; + // TODO document this + Hash narHash; + StorePathSet references; + time_t registrationTime = 0; + uint64_t narSize = 0; // 0 = unknown + uint64_t id; // internal use only + + /* Whether the path is ultimately trusted, that is, it's a + derivation output that was built locally. */ + bool ultimate = false; + + StringSet sigs; // note: not necessarily verified + + /* If non-empty, an assertion that the path is content-addressed, + i.e., that the store path is computed from a cryptographic hash + of the contents of the path, plus some other bits of data like + the "name" part of the path. Such a path doesn't need + signatures, since we don't have to trust anybody's claim that + the path is the output of a particular derivation. (In the + extensional store model, we have to trust that the *contents* + of an output path of a derivation were actually produced by + that derivation. In the intensional model, we have to trust + that a particular output path was produced by a derivation; the + path then implies the contents.) + + Ideally, the content-addressability assertion would just be a Boolean, + and the store path would be computed from the name component, ‘narHash’ + and ‘references’. However, we support many types of content addresses. + */ + std::optional ca; + + bool operator == (const ValidPathInfo & i) const + { + return + path == i.path + && narHash == i.narHash + && references == i.references; + } + + /* Return a fingerprint of the store path to be used in binary + cache signatures. It contains the store path, the base-32 + SHA-256 hash of the NAR serialisation of the path, the size of + the NAR, and the sorted references. The size field is strictly + speaking superfluous, but might prevent endless/excessive data + attacks. */ + std::string fingerprint(const Store & store) const; + + void sign(const Store & store, const SecretKey & secretKey); + + /* Return true iff the path is verifiably content-addressed. */ + bool isContentAddressed(const Store & store) const; + + /* Functions to view references + hasSelfReference as one set, mainly for + compatibility's sake. 
*/ + StorePathSet referencesPossiblyToSelf() const; + void insertReferencePossiblyToSelf(StorePath && ref); + void setReferencesPossiblyToSelf(StorePathSet && refs); + + static const size_t maxSigs = std::numeric_limits::max(); + + /* Return the number of signatures on this .narinfo that were + produced by one of the specified keys, or maxSigs if the path + is content-addressed. */ + size_t checkSignatures(const Store & store, const PublicKeys & publicKeys) const; + + /* Verify a single signature. */ + bool checkSignature(const Store & store, const PublicKeys & publicKeys, const std::string & sig) const; + + Strings shortRefs() const; + + ValidPathInfo(const ValidPathInfo & other) = default; + + ValidPathInfo(StorePath && path, Hash narHash) : path(std::move(path)), narHash(narHash) { }; + ValidPathInfo(const StorePath & path, Hash narHash) : path(path), narHash(narHash) { }; + + virtual ~ValidPathInfo() { } +}; + +typedef list ValidPathInfos; + +} diff --git a/src/libstore/path.hh b/src/libstore/path.hh index e43a8b50c..b03a0f69d 100644 --- a/src/libstore/path.hh +++ b/src/libstore/path.hh @@ -64,6 +64,8 @@ typedef std::set StorePathSet; typedef std::vector StorePaths; typedef std::map OutputPathMap; +typedef std::map> StorePathCAMap; + /* Extension of derivations in the Nix store. */ const std::string drvExtension = ".drv"; diff --git a/src/libstore/references.cc b/src/libstore/references.cc index a10d536a3..62a3cda61 100644 --- a/src/libstore/references.cc +++ b/src/libstore/references.cc @@ -48,13 +48,12 @@ static void search(const unsigned char * s, size_t len, struct RefScanSink : Sink { - HashSink hashSink; StringSet hashes; StringSet seen; string tail; - RefScanSink() : hashSink(htSHA256) { } + RefScanSink() { } void operator () (const unsigned char * data, size_t len); }; @@ -62,8 +61,6 @@ struct RefScanSink : Sink void RefScanSink::operator () (const unsigned char * data, size_t len) { - hashSink(data, len); - /* It's possible that a reference spans the previous and current fragment, so search in the concatenation of the tail of the previous fragment and the start of the current fragment. */ @@ -79,10 +76,12 @@ void RefScanSink::operator () (const unsigned char * data, size_t len) } -PathSet scanForReferences(const string & path, - const PathSet & refs, HashResult & hash) +std::pair scanForReferences(const string & path, + const PathSet & refs) { - RefScanSink sink; + RefScanSink refsSink; + HashSink hashSink { htSHA256 }; + TeeSink sink { refsSink, hashSink }; std::map backMap; /* For efficiency (and a higher hit rate), just search for the @@ -97,7 +96,7 @@ PathSet scanForReferences(const string & path, assert(s.size() == refLength); assert(backMap.find(s) == backMap.end()); // parseHash(htSHA256, s); - sink.hashes.insert(s); + refsSink.hashes.insert(s); backMap[s] = i; } @@ -106,15 +105,15 @@ PathSet scanForReferences(const string & path, /* Map the hashes found back to their store paths. 
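As an aside on the reworked scanForReferences signature above: a caller that previously received the NAR hash through an out-parameter would now unpack the returned pair, roughly as in this assumed sketch (the variable names are illustrative, not from this change):

    // Assumed call site for the new pair-returning API.
    auto [foundReferences, narHashResult] = scanForReferences(outputPath, candidatePaths);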
*/ PathSet found; - for (auto & i : sink.seen) { + for (auto & i : refsSink.seen) { std::map::iterator j; if ((j = backMap.find(i)) == backMap.end()) abort(); found.insert(j->second); } - hash = sink.hashSink.finish(); + auto hash = hashSink.finish(); - return found; + return std::pair(found, hash); } diff --git a/src/libstore/references.hh b/src/libstore/references.hh index c38bdd720..598a3203a 100644 --- a/src/libstore/references.hh +++ b/src/libstore/references.hh @@ -5,8 +5,7 @@ namespace nix { -PathSet scanForReferences(const Path & path, const PathSet & refs, - HashResult & hash); +std::pair scanForReferences(const Path & path, const PathSet & refs); struct RewritingSink : Sink { diff --git a/src/libstore/remote-store.cc b/src/libstore/remote-store.cc index a9fbf9f82..553069b89 100644 --- a/src/libstore/remote-store.cc +++ b/src/libstore/remote-store.cc @@ -39,6 +39,24 @@ void writeStorePaths(const Store & store, Sink & out, const StorePathSet & paths out << store.printStorePath(i); } +StorePathCAMap readStorePathCAMap(const Store & store, Source & from) +{ + StorePathCAMap paths; + auto count = readNum(from); + while (count--) + paths.insert_or_assign(store.parseStorePath(readString(from)), parseContentAddressOpt(readString(from))); + return paths; +} + +void writeStorePathCAMap(const Store & store, Sink & out, const StorePathCAMap & paths) +{ + out << paths.size(); + for (auto & i : paths) { + out << store.printStorePath(i.first); + out << renderContentAddress(i.second); + } +} + std::map readOutputPathMap(const Store & store, Source & from) { std::map pathMap; @@ -332,18 +350,17 @@ StorePathSet RemoteStore::querySubstitutablePaths(const StorePathSet & paths) } -void RemoteStore::querySubstitutablePathInfos(const StorePathSet & paths, - SubstitutablePathInfos & infos) +void RemoteStore::querySubstitutablePathInfos(const StorePathCAMap & pathsMap, SubstitutablePathInfos & infos) { - if (paths.empty()) return; + if (pathsMap.empty()) return; auto conn(getConnection()); if (GET_PROTOCOL_MINOR(conn->daemonVersion) < 12) { - for (auto & i : paths) { + for (auto & i : pathsMap) { SubstitutablePathInfo info; - conn->to << wopQuerySubstitutablePathInfo << printStorePath(i); + conn->to << wopQuerySubstitutablePathInfo << printStorePath(i.first); conn.processStderr(); unsigned int reply = readInt(conn->from); if (reply == 0) continue; @@ -353,13 +370,19 @@ void RemoteStore::querySubstitutablePathInfos(const StorePathSet & paths, info.references = readStorePaths(*this, conn->from); info.downloadSize = readLongLong(conn->from); info.narSize = readLongLong(conn->from); - infos.insert_or_assign(i, std::move(info)); + infos.insert_or_assign(i.first, std::move(info)); } } else { conn->to << wopQuerySubstitutablePathInfos; - writeStorePaths(*this, conn->to, paths); + if (GET_PROTOCOL_MINOR(conn->daemonVersion) < 22) { + StorePathSet paths; + for (auto & path : pathsMap) + paths.insert(path.first); + writeStorePaths(*this, conn->to, paths); + } else + writeStorePathCAMap(*this, conn->to, pathsMap); conn.processStderr(); size_t count = readNum(conn->from); for (size_t n = 0; n < count; n++) { @@ -396,10 +419,10 @@ void RemoteStore::queryPathInfoUncached(const StorePath & path, bool valid; conn->from >> valid; if (!valid) throw InvalidPath("path '%s' is not valid", printStorePath(path)); } - info = std::make_shared(StorePath(path)); auto deriver = readString(conn->from); + auto narHash = Hash::parseAny(readString(conn->from), htSHA256); + info = std::make_shared(path, narHash); if (deriver != "") 
info->deriver = parseStorePath(deriver); - info->narHash = Hash(readString(conn->from), htSHA256); info->references = readStorePaths(*this, conn->from); conn->from >> info->registrationTime >> info->narSize; if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 16) { @@ -503,9 +526,84 @@ void RemoteStore::addToStore(const ValidPathInfo & info, Source & source, conn->to << info.registrationTime << info.narSize << info.ultimate << info.sigs << renderContentAddress(info.ca) << repair << !checkSigs; - bool tunnel = GET_PROTOCOL_MINOR(conn->daemonVersion) >= 21; - if (!tunnel) copyNAR(source, conn->to); - conn.processStderr(0, tunnel ? &source : nullptr); + + if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 23) { + + std::exception_ptr ex; + + struct FramedSink : BufferedSink + { + ConnectionHandle & conn; + std::exception_ptr & ex; + + FramedSink(ConnectionHandle & conn, std::exception_ptr & ex) : conn(conn), ex(ex) + { } + + ~FramedSink() + { + try { + conn->to << 0; + conn->to.flush(); + } catch (...) { + ignoreException(); + } + } + + void write(const unsigned char * data, size_t len) override + { + /* Don't send more data if the remote has + encountered an error. */ + if (ex) { + auto ex2 = ex; + ex = nullptr; + std::rethrow_exception(ex2); + } + conn->to << len; + conn->to(data, len); + }; + }; + + /* Handle log messages / exceptions from the remote on a + separate thread. */ + std::thread stderrThread([&]() + { + try { + conn.processStderr(); + } catch (...) { + ex = std::current_exception(); + } + }); + + Finally joinStderrThread([&]() + { + if (stderrThread.joinable()) { + stderrThread.join(); + if (ex) { + try { + std::rethrow_exception(ex); + } catch (...) { + ignoreException(); + } + } + } + }); + + { + FramedSink sink(conn, ex); + copyNAR(source, sink); + sink.flush(); + } + + stderrThread.join(); + if (ex) + std::rethrow_exception(ex); + + } else if (GET_PROTOCOL_MINOR(conn->daemonVersion) >= 21) { + conn.processStderr(0, &source); + } else { + copyNAR(source, conn->to); + conn.processStderr(0, nullptr); + } } } @@ -707,7 +805,7 @@ void RemoteStore::addSignatures(const StorePath & storePath, const StringSet & s void RemoteStore::queryMissing(const std::vector & targets, StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - unsigned long long & downloadSize, unsigned long long & narSize) + uint64_t & downloadSize, uint64_t & narSize) { { auto conn(getConnection()); diff --git a/src/libstore/remote-store.hh b/src/libstore/remote-store.hh index 3c1b78b6a..72d2a6689 100644 --- a/src/libstore/remote-store.hh +++ b/src/libstore/remote-store.hh @@ -56,7 +56,7 @@ public: StorePathSet querySubstitutablePaths(const StorePathSet & paths) override; - void querySubstitutablePathInfos(const StorePathSet & paths, + void querySubstitutablePathInfos(const StorePathCAMap & paths, SubstitutablePathInfos & infos) override; void addToStore(const ValidPathInfo & info, Source & nar, @@ -94,7 +94,7 @@ public: void queryMissing(const std::vector & targets, StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - unsigned long long & downloadSize, unsigned long long & narSize) override; + uint64_t & downloadSize, uint64_t & narSize) override; void connect() override; diff --git a/src/libstore/s3-binary-cache-store.cc b/src/libstore/s3-binary-cache-store.cc index 1b7dff085..a0a446bd3 100644 --- a/src/libstore/s3-binary-cache-store.cc +++ b/src/libstore/s3-binary-cache-store.cc @@ -266,6 +266,10 @@ struct S3BinaryCacheStoreImpl : public S3BinaryCacheStore const 
std::string & mimeType, const std::string & contentEncoding) { + istream->seekg(0, istream->end); + auto size = istream->tellg(); + istream->seekg(0, istream->beg); + auto maxThreads = std::thread::hardware_concurrency(); static std::shared_ptr @@ -343,13 +347,11 @@ struct S3BinaryCacheStoreImpl : public S3BinaryCacheStore std::chrono::duration_cast(now2 - now1) .count(); - auto size = istream->tellg(); - printInfo("uploaded 's3://%s/%s' (%d bytes) in %d ms", bucketName, path, size, duration); stats.putTimeMs += duration; - stats.putBytes += size; + stats.putBytes += std::max(size, (decltype(size)) 0); stats.put++; } diff --git a/src/libstore/store-api.cc b/src/libstore/store-api.cc index 7a380f127..6fd0fdfda 100644 --- a/src/libstore/store-api.cc +++ b/src/libstore/store-api.cc @@ -193,6 +193,19 @@ StorePath Store::makeFixedOutputPath( } } +StorePath Store::makeFixedOutputPathFromCA(std::string_view name, ContentAddress ca, + const StorePathSet & references, bool hasSelfReference) const +{ + // New template + return std::visit(overloaded { + [&](TextHash th) { + return makeTextPath(name, th.hash, references); + }, + [&](FixedOutputHash fsh) { + return makeFixedOutputPath(fsh.method, fsh.hash, name, references, hasSelfReference); + } + }, ca); +} StorePath Store::makeTextPath(std::string_view name, const Hash & hash, const StorePathSet & references) const @@ -222,32 +235,101 @@ StorePath Store::computeStorePathForText(const string & name, const string & s, } +StorePath Store::addToStore(const string & name, const Path & _srcPath, + FileIngestionMethod method, HashType hashAlgo, PathFilter & filter, RepairFlag repair) +{ + Path srcPath(absPath(_srcPath)); + auto source = sinkToSource([&](Sink & sink) { + if (method == FileIngestionMethod::Recursive) + dumpPath(srcPath, sink, filter); + else + readFile(srcPath, sink); + }); + return addToStoreFromDump(*source, name, method, hashAlgo, repair); +} + + +/* +The aim of this function is to compute in one pass the correct ValidPathInfo for +the files that we are trying to add to the store. To accomplish that in one +pass, given the different kind of inputs that we can take (normal nar archives, +nar archives with non SHA-256 hashes, and flat files), we set up a net of sinks +and aliases. Also, since the dataflow is obfuscated by this, we include here a +graphviz diagram: + +digraph graphname { + node [shape=box] + fileSource -> narSink + narSink [style=dashed] + narSink -> unsualHashTee [style = dashed, label = "Recursive && !SHA-256"] + narSink -> narHashSink [style = dashed, label = "else"] + unsualHashTee -> narHashSink + unsualHashTee -> caHashSink + fileSource -> parseSink + parseSink [style=dashed] + parseSink-> fileSink [style = dashed, label = "Flat"] + parseSink -> blank [style = dashed, label = "Recursive"] + fileSink -> caHashSink +} +*/ ValidPathInfo Store::addToStoreSlow(std::string_view name, const Path & srcPath, FileIngestionMethod method, HashType hashAlgo, std::optional expectedCAHash) { - /* FIXME: inefficient: we're reading/hashing 'tmpFile' three - times. */ + HashSink narHashSink { htSHA256 }; + HashSink caHashSink { hashAlgo }; - auto [narHash, narSize] = hashPath(htSHA256, srcPath); + /* Note that fileSink and unusualHashTee must be mutually exclusive, since + they both write to caHashSink. Note that that requisite is currently true + because the former is only used in the flat case. 
*/ + RetrieveRegularNARSink fileSink { caHashSink }; + TeeSink unusualHashTee { narHashSink, caHashSink }; - auto hash = method == FileIngestionMethod::Recursive - ? hashAlgo == htSHA256 - ? narHash - : hashPath(hashAlgo, srcPath).first - : hashFile(hashAlgo, srcPath); + auto & narSink = method == FileIngestionMethod::Recursive && hashAlgo != htSHA256 + ? static_cast(unusualHashTee) + : narHashSink; + + /* Functionally, this means that fileSource will yield the content of + srcPath. The fact that we use scratchpadSink as a temporary buffer here + is an implementation detail. */ + auto fileSource = sinkToSource([&](Sink & scratchpadSink) { + dumpPath(srcPath, scratchpadSink); + }); + + /* tapped provides the same data as fileSource, but we also write all the + information to narSink. */ + TeeSource tapped { *fileSource, narSink }; + + ParseSink blank; + auto & parseSink = method == FileIngestionMethod::Flat + ? fileSink + : blank; + + /* The information that flows from tapped (besides being replicated in + narSink), is now put in parseSink. */ + parseDump(parseSink, tapped); + + /* We extract the result of the computation from the sink by calling + finish. */ + auto [narHash, narSize] = narHashSink.finish(); + + auto hash = method == FileIngestionMethod::Recursive && hashAlgo == htSHA256 + ? narHash + : caHashSink.finish().first; if (expectedCAHash && expectedCAHash != hash) throw Error("hash mismatch for '%s'", srcPath); - ValidPathInfo info(makeFixedOutputPath(method, hash, name)); - info.narHash = narHash; + ValidPathInfo info { + makeFixedOutputPath(method, hash, name), + narHash, + }; info.narSize = narSize; info.ca = FixedOutputHash { .method = method, .hash = hash }; if (!isValidPath(info.path)) { - auto source = sinkToSource([&](Sink & sink) { - dumpPath(srcPath, sink); + auto source = sinkToSource([&](Sink & scratchpadSink) { + dumpPath(srcPath, scratchpadSink); }); addToStore(info, *source); } @@ -560,7 +642,7 @@ void Store::pathInfoToJSON(JSONPlaceholder & jsonOut, const StorePathSet & store if (!narInfo->url.empty()) jsonPath.attr("url", narInfo->url); if (narInfo->fileHash) - jsonPath.attr("downloadHash", narInfo->fileHash.to_string(hashBase, true)); + jsonPath.attr("downloadHash", narInfo->fileHash->to_string(hashBase, true)); if (narInfo->fileSize) jsonPath.attr("downloadSize", narInfo->fileSize); if (showClosureSize) @@ -636,18 +718,13 @@ void copyStorePath(ref srcStore, ref dstStore, uint64_t total = 0; - if (!info->narHash) { - StringSink sink; - srcStore->narFromPath({storePath}, sink); + // recompute store path on the chance dstStore does it differently + if (info->ca && info->references.empty()) { auto info2 = make_ref(*info); - info2->narHash = hashString(htSHA256, *sink.s); - if (!info->narSize) info2->narSize = sink.s->size(); - if (info->ultimate) info2->ultimate = false; + info2->path = dstStore->makeFixedOutputPathFromCA(info->path.name(), *info->ca); + if (dstStore->storeDir == srcStore->storeDir) + assert(info->path == info2->path); info = info2; - - StringSource source(*sink.s); - dstStore->addToStore(*info, source, repair, checkSigs); - return; } if (info->ultimate) { @@ -657,12 +734,12 @@ void copyStorePath(ref srcStore, ref dstStore, } auto source = sinkToSource([&](Sink & sink) { - LambdaSink wrapperSink([&](const unsigned char * data, size_t len) { - sink(data, len); + LambdaSink progressSink([&](const unsigned char * data, size_t len) { total += len; act.progress(total, info->narSize); }); - srcStore->narFromPath(storePath, wrapperSink); + TeeSink tee { 
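The net of sinks above boils down to one idea: serialise the path once and let every interested hash sink observe the same byte stream. A stripped-down sketch of that single-pass, two-hash pattern, mirroring only the recursive, non-SHA-256 branch and assuming the in-tree headers:

```cpp
#include "archive.hh"
#include "hash.hh"
#include "serialise.hh"

using namespace nix;

/* Dump `path` once; the TeeSink feeds every byte of the NAR to both hash
   sinks, so the SHA-256 NAR hash and the content-address hash are computed
   in a single traversal. */
static std::pair<Hash, Hash> hashPathTwoWays(const Path & path, HashType caType)
{
    HashSink narHashSink { htSHA256 };
    HashSink caHashSink { caType };

    TeeSink tee { narHashSink, caHashSink };
    dumpPath(path, tee);

    return { narHashSink.finish().first, caHashSink.finish().first };
}
```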
sink, progressSink }; + srcStore->narFromPath(storePath, tee); }, [&]() { throw EndOfFile("NAR for '%s' fetched from '%s' is incomplete", srcStore->printStorePath(storePath), srcStore->getUri()); }); @@ -671,16 +748,20 @@ void copyStorePath(ref srcStore, ref dstStore, } -void copyPaths(ref srcStore, ref dstStore, const StorePathSet & storePaths, +std::map copyPaths(ref srcStore, ref dstStore, const StorePathSet & storePaths, RepairFlag repair, CheckSigsFlag checkSigs, SubstituteFlag substitute) { auto valid = dstStore->queryValidPaths(storePaths, substitute); - PathSet missing; + StorePathSet missing; for (auto & path : storePaths) - if (!valid.count(path)) missing.insert(srcStore->printStorePath(path)); + if (!valid.count(path)) missing.insert(path); - if (missing.empty()) return; + std::map pathsMap; + for (auto & path : storePaths) + pathsMap.insert_or_assign(path, path); + + if (missing.empty()) return pathsMap; Activity act(*logger, lvlInfo, actCopyPaths, fmt("copying %d paths", missing.size())); @@ -695,30 +776,49 @@ void copyPaths(ref srcStore, ref dstStore, const StorePathSet & st ThreadPool pool; - processGraph(pool, - PathSet(missing.begin(), missing.end()), + processGraph(pool, + StorePathSet(missing.begin(), missing.end()), - [&](const Path & storePath) { - if (dstStore->isValidPath(dstStore->parseStorePath(storePath))) { + [&](const StorePath & storePath) { + auto info = srcStore->queryPathInfo(storePath); + auto storePathForDst = storePath; + if (info->ca && info->references.empty()) { + storePathForDst = dstStore->makeFixedOutputPathFromCA(storePath.name(), *info->ca); + if (dstStore->storeDir == srcStore->storeDir) + assert(storePathForDst == storePath); + if (storePathForDst != storePath) + debug("replaced path '%s' to '%s' for substituter '%s'", srcStore->printStorePath(storePath), dstStore->printStorePath(storePathForDst), dstStore->getUri()); + } + pathsMap.insert_or_assign(storePath, storePathForDst); + + if (dstStore->isValidPath(storePath)) { nrDone++; showProgress(); - return PathSet(); + return StorePathSet(); } - auto info = srcStore->queryPathInfo(srcStore->parseStorePath(storePath)); - bytesExpected += info->narSize; act.setExpected(actCopyPath, bytesExpected); - return srcStore->printStorePathSet(info->references); + return info->references; }, - [&](const Path & storePathS) { + [&](const StorePath & storePath) { checkInterrupt(); - auto storePath = dstStore->parseStorePath(storePathS); + auto info = srcStore->queryPathInfo(storePath); - if (!dstStore->isValidPath(storePath)) { + auto storePathForDst = storePath; + if (info->ca && info->references.empty()) { + storePathForDst = dstStore->makeFixedOutputPathFromCA(storePath.name(), *info->ca); + if (dstStore->storeDir == srcStore->storeDir) + assert(storePathForDst == storePath); + if (storePathForDst != storePath) + debug("replaced path '%s' to '%s' for substituter '%s'", srcStore->printStorePath(storePath), dstStore->printStorePath(storePathForDst), dstStore->getUri()); + } + pathsMap.insert_or_assign(storePath, storePathForDst); + + if (!dstStore->isValidPath(storePathForDst)) { MaintainCount mc(nrRunning); showProgress(); try { @@ -727,7 +827,7 @@ void copyPaths(ref srcStore, ref dstStore, const StorePathSet & st nrFailed++; if (!settings.keepGoing) throw e; - logger->log(lvlError, fmt("could not copy %s: %s", storePathS, e.what())); + logger->log(lvlError, fmt("could not copy %s: %s", dstStore->printStorePath(storePath), e.what())); showProgress(); return; } @@ -736,6 +836,8 @@ void copyPaths(ref srcStore, 
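`copyPaths` now reports where each requested path ended up, which matters when a content-addressed path without references is recomputed for a destination store with a different store directory. A sketch of a caller using the returned map (assumes `srcStore`, `dstStore` and `paths` already exist):

```cpp
// Copy, then refer to the paths by their names in the *destination* store.
auto copied = copyPaths(srcStore, dstStore, paths);
for (auto & p : paths)
    printInfo("'%s' is available as '%s'",
        srcStore->printStorePath(p),
        dstStore->printStorePath(copied.at(p)));
```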
ref dstStore, const StorePathSet & st nrDone++; showProgress(); }); + + return pathsMap; } @@ -749,19 +851,22 @@ void copyClosure(ref srcStore, ref dstStore, } -std::optional decodeValidPathInfo(const Store & store, std::istream & str, bool hashGiven) +std::optional decodeValidPathInfo(const Store & store, std::istream & str, std::optional hashGiven) { std::string path; getline(str, path); if (str.eof()) { return {}; } - ValidPathInfo info(store.parseStorePath(path)); - if (hashGiven) { + if (!hashGiven) { string s; getline(str, s); - info.narHash = Hash(s, htSHA256); + auto narHash = Hash::parseAny(s, htSHA256); getline(str, s); - if (!string2Int(s, info.narSize)) throw Error("number expected"); + uint64_t narSize; + if (!string2Int(s, narSize)) throw Error("number expected"); + hashGiven = { narHash, narSize }; } + ValidPathInfo info(store.parseStorePath(path), hashGiven->first); + info.narSize = hashGiven->second; std::string deriver; getline(str, deriver); if (deriver != "") info.deriver = store.parseStorePath(deriver); @@ -796,8 +901,8 @@ string showPaths(const PathSet & paths) std::string ValidPathInfo::fingerprint(const Store & store) const { - if (narSize == 0 || !narHash) - throw Error("cannot calculate fingerprint of path '%s' because its size/hash is not known", + if (narSize == 0) + throw Error("cannot calculate fingerprint of path '%s' because its size is not known", store.printStorePath(path)); return "1;" + store.printStorePath(path) + ";" @@ -812,10 +917,6 @@ void ValidPathInfo::sign(const Store & store, const SecretKey & secretKey) sigs.insert(secretKey.signDetached(fingerprint(store))); } -// FIXME Put this somewhere? -template struct overloaded : Ts... { using Ts::operator()...; }; -template overloaded(Ts...) -> overloaded; - bool ValidPathInfo::isContentAddressed(const Store & store) const { if (! ca) return false; diff --git a/src/libstore/store-api.hh b/src/libstore/store-api.hh index c38290add..1680065f3 100644 --- a/src/libstore/store-api.hh +++ b/src/libstore/store-api.hh @@ -4,12 +4,12 @@ #include "hash.hh" #include "content-address.hh" #include "serialise.hh" -#include "crypto.hh" #include "lru-cache.hh" #include "sync.hh" #include "globals.hh" #include "config.hh" #include "derivations.hh" +#include "path-info.hh" #include #include @@ -85,7 +85,7 @@ struct GCOptions StorePathSet pathsToDelete; /* Stop after at least `maxFreed' bytes have been freed. */ - unsigned long long maxFreed{std::numeric_limits::max()}; + uint64_t maxFreed{std::numeric_limits::max()}; }; @@ -97,98 +97,10 @@ struct GCResults /* For `gcReturnDead', `gcDeleteDead' and `gcDeleteSpecific', the number of bytes that would be or was freed. */ - unsigned long long bytesFreed = 0; + uint64_t bytesFreed = 0; }; -struct SubstitutablePathInfo -{ - std::optional deriver; - StorePathSet references; - unsigned long long downloadSize; /* 0 = unknown or inapplicable */ - unsigned long long narSize; /* 0 = unknown */ -}; - -typedef std::map SubstitutablePathInfos; - -struct ValidPathInfo -{ - StorePath path; - std::optional deriver; - Hash narHash; - StorePathSet references; - time_t registrationTime = 0; - uint64_t narSize = 0; // 0 = unknown - uint64_t id; // internal use only - - /* Whether the path is ultimately trusted, that is, it's a - derivation output that was built locally. 
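`decodeValidPathInfo` now takes an optional, pre-known NAR hash/size pair instead of a bare boolean. A sketch of the two calling modes; the variables and stream contents are illustrative and `store` is assumed to exist:

```cpp
// 1. The textual record itself carries the NAR hash and size:
auto info1 = decodeValidPathInfo(*store, std::cin, std::nullopt);

// 2. The caller already knows them (e.g. from a .narinfo) and passes a
//    HashResult, so those two lines are not expected in the stream:
std::optional<HashResult> known = HashResult { narHash, narSize };
auto info2 = decodeValidPathInfo(*store, std::cin, known);
```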
*/ - bool ultimate = false; - - StringSet sigs; // note: not necessarily verified - - /* If non-empty, an assertion that the path is content-addressed, - i.e., that the store path is computed from a cryptographic hash - of the contents of the path, plus some other bits of data like - the "name" part of the path. Such a path doesn't need - signatures, since we don't have to trust anybody's claim that - the path is the output of a particular derivation. (In the - extensional store model, we have to trust that the *contents* - of an output path of a derivation were actually produced by - that derivation. In the intensional model, we have to trust - that a particular output path was produced by a derivation; the - path then implies the contents.) - - Ideally, the content-addressability assertion would just be a Boolean, - and the store path would be computed from the name component, ‘narHash’ - and ‘references’. However, we support many types of content addresses. - */ - std::optional ca; - - bool operator == (const ValidPathInfo & i) const - { - return - path == i.path - && narHash == i.narHash - && references == i.references; - } - - /* Return a fingerprint of the store path to be used in binary - cache signatures. It contains the store path, the base-32 - SHA-256 hash of the NAR serialisation of the path, the size of - the NAR, and the sorted references. The size field is strictly - speaking superfluous, but might prevent endless/excessive data - attacks. */ - std::string fingerprint(const Store & store) const; - - void sign(const Store & store, const SecretKey & secretKey); - - /* Return true iff the path is verifiably content-addressed. */ - bool isContentAddressed(const Store & store) const; - - static const size_t maxSigs = std::numeric_limits::max(); - - /* Return the number of signatures on this .narinfo that were - produced by one of the specified keys, or maxSigs if the path - is content-addressed. */ - size_t checkSignatures(const Store & store, const PublicKeys & publicKeys) const; - - /* Verify a single signature. */ - bool checkSignature(const Store & store, const PublicKeys & publicKeys, const std::string & sig) const; - - Strings shortRefs() const; - - ValidPathInfo(const ValidPathInfo & other) = default; - - ValidPathInfo(StorePath && path) : path(std::move(path)) { }; - ValidPathInfo(const StorePath & path) : path(path) { }; - - virtual ~ValidPathInfo() { } -}; - -typedef list ValidPathInfos; - - enum BuildMode { bmNormal, bmRepair, bmCheck }; @@ -251,6 +163,10 @@ public: Setting wantMassQuery{this, false, "want-mass-query", "whether this substituter can be queried efficiently for path validity"}; + Setting systemFeatures{this, settings.systemFeatures, + "system-features", + "Optional features that the system this store builds on implements (like \"kvm\")."}; + protected: struct PathInfoCacheValue { @@ -343,7 +259,11 @@ public: bool hasSelfReference = false) const; StorePath makeTextPath(std::string_view name, const Hash & hash, - const StorePathSet & references) const; + const StorePathSet & references = {}) const; + + StorePath makeFixedOutputPathFromCA(std::string_view name, ContentAddress ca, + const StorePathSet & references = {}, + bool hasSelfReference = false) const; /* This is the preparatory part of addToStore(); it computes the store path to which srcPath is to be copied. Returns the store @@ -435,9 +355,10 @@ public: virtual StorePathSet querySubstitutablePaths(const StorePathSet & paths) { return {}; }; /* Query substitute info (i.e. 
references, derivers and download - sizes) of a set of paths. If a path does not have substitute - info, it's omitted from the resulting ‘infos’ map. */ - virtual void querySubstitutablePathInfos(const StorePathSet & paths, + sizes) of a map of paths to their optional ca values. If a path + does not have substitute info, it's omitted from the resulting + ‘infos’ map. */ + virtual void querySubstitutablePathInfos(const StorePathCAMap & paths, SubstitutablePathInfos & infos) { return; }; /* Import a path into the store. */ @@ -450,7 +371,7 @@ public: libutil/archive.hh). */ virtual StorePath addToStore(const string & name, const Path & srcPath, FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, - PathFilter & filter = defaultPathFilter, RepairFlag repair = NoRepair) = 0; + PathFilter & filter = defaultPathFilter, RepairFlag repair = NoRepair); /* Copy the contents of a path to the store and register the validity the resulting path, using a constant amount of @@ -459,8 +380,12 @@ public: FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, std::optional expectedCAHash = {}); + /* Like addToStore(), but the contents of the path are contained + in `dump', which is either a NAR serialisation (if recursive == + true) or simply the contents of a regular file (if recursive == + false). */ // FIXME: remove? - virtual StorePath addToStoreFromDump(const string & dump, const string & name, + virtual StorePath addToStoreFromDump(Source & dump, const string & name, FileIngestionMethod method = FileIngestionMethod::Recursive, HashType hashAlgo = htSHA256, RepairFlag repair = NoRepair) { throw Error("addToStoreFromDump() is not supported by this store"); @@ -609,7 +534,7 @@ public: that will be substituted. */ virtual void queryMissing(const std::vector & targets, StorePathSet & willBuild, StorePathSet & willSubstitute, StorePathSet & unknown, - unsigned long long & downloadSize, unsigned long long & narSize); + uint64_t & downloadSize, uint64_t & narSize); /* Sort a set of paths topologically under the references relation. If p refers to q, then p precedes q in this list. */ @@ -739,11 +664,13 @@ void copyStorePath(ref srcStore, ref dstStore, /* Copy store paths from one store to another. The paths may be copied - in parallel. They are copied in a topologically sorted order - (i.e. if A is a reference of B, then A is copied before B), but - the set of store paths is not automatically closed; use - copyClosure() for that. */ -void copyPaths(ref srcStore, ref dstStore, const StorePathSet & storePaths, + in parallel. They are copied in a topologically sorted order (i.e. + if A is a reference of B, then A is copied before B), but the set + of store paths is not automatically closed; use copyClosure() for + that. Returns a map of what each path was copied to the dstStore + as. */ +std::map copyPaths(ref srcStore, ref dstStore, + const StorePathSet & storePaths, RepairFlag repair = NoRepair, CheckSigsFlag checkSigs = CheckSigs, SubstituteFlag substitute = NoSubstitute); @@ -827,9 +754,11 @@ string showPaths(const PathSet & paths); std::optional decodeValidPathInfo( const Store & store, std::istream & str, - bool hashGiven = false); + std::optional hashGiven = std::nullopt); /* Split URI into protocol+hierarchy part and its parameter set. 
*/ std::pair splitUriAndParams(const std::string & uri); +std::optional getDerivationCA(const BasicDerivation & drv); + } diff --git a/src/libstore/worker-protocol.hh b/src/libstore/worker-protocol.hh index 8b538f6da..5eddaff56 100644 --- a/src/libstore/worker-protocol.hh +++ b/src/libstore/worker-protocol.hh @@ -6,7 +6,7 @@ namespace nix { #define WORKER_MAGIC_1 0x6e697863 #define WORKER_MAGIC_2 0x6478696f -#define PROTOCOL_VERSION 0x116 +#define PROTOCOL_VERSION 0x118 #define GET_PROTOCOL_MAJOR(x) ((x) & 0xff00) #define GET_PROTOCOL_MINOR(x) ((x) & 0x00ff) @@ -70,6 +70,10 @@ template T readStorePaths(const Store & store, Source & from); void writeStorePaths(const Store & store, Sink & out, const StorePathSet & paths); +StorePathCAMap readStorePathCAMap(const Store & store, Source & from); + +void writeStorePathCAMap(const Store & store, Sink & out, const StorePathCAMap & paths); + void writeOutputPathMap(const Store & store, Sink & out, const OutputPathMap & paths); } diff --git a/src/libutil/archive.cc b/src/libutil/archive.cc index 51c88537e..14399dea3 100644 --- a/src/libutil/archive.cc +++ b/src/libutil/archive.cc @@ -150,17 +150,17 @@ static void skipGeneric(Source & source) static void parseContents(ParseSink & sink, Source & source, const Path & path) { - unsigned long long size = readLongLong(source); + uint64_t size = readLongLong(source); sink.preallocateContents(size); - unsigned long long left = size; + uint64_t left = size; std::vector buf(65536); while (left) { checkInterrupt(); auto n = buf.size(); - if ((unsigned long long)n > left) n = left; + if ((uint64_t)n > left) n = left; source(buf.data(), n); sink.receiveContents(buf.data(), n); left -= n; @@ -323,7 +323,7 @@ struct RestoreSink : ParseSink throw SysError("fchmod"); } - void preallocateContents(unsigned long long len) + void preallocateContents(uint64_t len) { #if HAVE_POSIX_FALLOCATE if (len) { @@ -338,7 +338,7 @@ struct RestoreSink : ParseSink #endif } - void receiveContents(unsigned char * data, unsigned int len) + void receiveContents(unsigned char * data, size_t len) { writeFull(fd.get(), data, len); } @@ -366,11 +366,7 @@ void copyNAR(Source & source, Sink & sink) ParseSink parseSink; /* null sink; just parse the NAR */ - LambdaSource wrapper([&](unsigned char * data, size_t len) { - auto n = source.read(data, len); - sink(data, n); - return n; - }); + TeeSource wrapper { source, sink }; parseDump(parseSink, wrapper); } diff --git a/src/libutil/archive.hh b/src/libutil/archive.hh index 302b1bb18..5665732d2 100644 --- a/src/libutil/archive.hh +++ b/src/libutil/archive.hh @@ -57,18 +57,35 @@ struct ParseSink virtual void createRegularFile(const Path & path) { }; virtual void isExecutable() { }; - virtual void preallocateContents(unsigned long long size) { }; - virtual void receiveContents(unsigned char * data, unsigned int len) { }; + virtual void preallocateContents(uint64_t size) { }; + virtual void receiveContents(unsigned char * data, size_t len) { }; virtual void createSymlink(const Path & path, const string & target) { }; }; -struct TeeParseSink : ParseSink +/* If the NAR archive contains a single file at top-level, then save + the contents of the file to `s'. Otherwise barf. 
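`copyNAR` now tees the incoming stream through a null `ParseSink`, so a NAR is syntax-checked while it is being copied rather than blindly forwarded. A hypothetical use, assuming the in-tree headers — forwarding a NAR between two file descriptors while rejecting malformed input:

```cpp
#include "archive.hh"
#include "serialise.hh"

using namespace nix;

static void forwardNAR(int inFd, int outFd)
{
    FdSource from(inFd);
    FdSink to(outFd);
    copyNAR(from, to);   // throws if the byte stream is not a well-formed NAR
    to.flush();
}
```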
*/ +struct RetrieveRegularNARSink : ParseSink { - StringSink saved; - TeeSource source; + bool regular = true; + Sink & sink; - TeeParseSink(Source & source) : source(source, saved) { } + RetrieveRegularNARSink(Sink & sink) : sink(sink) { } + + void createDirectory(const Path & path) + { + regular = false; + } + + void receiveContents(unsigned char * data, size_t len) + { + sink(data, len); + } + + void createSymlink(const Path & path, const string & target) + { + regular = false; + } }; void parseDump(ParseSink & sink, Source & source); diff --git a/src/libutil/error.hh b/src/libutil/error.hh index 0daaf3be2..f3babcbde 100644 --- a/src/libutil/error.hh +++ b/src/libutil/error.hh @@ -192,6 +192,7 @@ public: MakeError(Error, BaseError); MakeError(UsageError, Error); +MakeError(UnimplementedError, Error); class SysError : public Error { diff --git a/src/libutil/hash.cc b/src/libutil/hash.cc index 5578a618e..4a94f0dfd 100644 --- a/src/libutil/hash.cc +++ b/src/libutil/hash.cc @@ -7,6 +7,7 @@ #include "args.hh" #include "hash.hh" #include "archive.hh" +#include "split.hh" #include "util.hh" #include @@ -16,18 +17,23 @@ namespace nix { +static size_t regularHashSize(HashType type) { + switch (type) { + case htMD5: return md5HashSize; + case htSHA1: return sha1HashSize; + case htSHA256: return sha256HashSize; + case htSHA512: return sha512HashSize; + } + abort(); +} + + std::set hashTypes = { "md5", "sha1", "sha256", "sha512" }; -void Hash::init() +Hash::Hash(HashType type) : type(type) { - assert(type); - switch (*type) { - case htMD5: hashSize = md5HashSize; break; - case htSHA1: hashSize = sha1HashSize; break; - case htSHA256: hashSize = sha256HashSize; break; - case htSHA512: hashSize = sha512HashSize; break; - } + hashSize = regularHashSize(type); assert(hashSize <= maxHashSize); memset(hash, 0, maxHashSize); } @@ -108,17 +114,11 @@ string printHash16or32(const Hash & hash) } -HashType assertInitHashType(const Hash & h) -{ - assert(h.type); - return *h.type; -} - std::string Hash::to_string(Base base, bool includeType) const { std::string s; if (base == SRI || includeType) { - s += printHashType(assertInitHashType(*this)); + s += printHashType(type); s += base == SRI ? '-' : ':'; } switch (base) { @@ -136,63 +136,103 @@ std::string Hash::to_string(Base base, bool includeType) const return s; } -Hash::Hash(std::string_view s, HashType type) : Hash(s, std::optional { type }) { } -Hash::Hash(std::string_view s) : Hash(s, std::optional{}) { } +Hash Hash::dummy(htSHA256); -Hash::Hash(std::string_view s, std::optional type) - : type(type) -{ - size_t pos = 0; +Hash Hash::parseSRI(std::string_view original) { + auto rest = original; + + // Parse the has type before the separater, if there was one. + auto hashRaw = splitPrefixTo(rest, '-'); + if (!hashRaw) + throw BadHash("hash '%s' is not SRI", original); + HashType parsedType = parseHashType(*hashRaw); + + return Hash(rest, parsedType, true); +} + +// Mutates the string to eliminate the prefixes when found +static std::pair, bool> getParsedTypeAndSRI(std::string_view & rest) { bool isSRI = false; - auto sep = s.find(':'); - if (sep == string::npos) { - sep = s.find('-'); - if (sep != string::npos) { - isSRI = true; - } else if (! type) - throw BadHash("hash '%s' does not include a type", s); + // Parse the has type before the separater, if there was one. 
+ std::optional optParsedType; + { + auto hashRaw = splitPrefixTo(rest, ':'); + + if (!hashRaw) { + hashRaw = splitPrefixTo(rest, '-'); + if (hashRaw) + isSRI = true; + } + if (hashRaw) + optParsedType = parseHashType(*hashRaw); } - if (sep != string::npos) { - string hts = string(s, 0, sep); - this->type = parseHashType(hts); - if (!this->type) - throw BadHash("unknown hash type '%s'", hts); - if (type && type != this->type) - throw BadHash("hash '%s' should have type '%s'", s, printHashType(*type)); - pos = sep + 1; - } + return {optParsedType, isSRI}; +} - init(); +Hash Hash::parseAnyPrefixed(std::string_view original) +{ + auto rest = original; + auto [optParsedType, isSRI] = getParsedTypeAndSRI(rest); - size_t size = s.size() - pos; + // Either the string or user must provide the type, if they both do they + // must agree. + if (!optParsedType) + throw BadHash("hash '%s' does not include a type", rest); - if (!isSRI && size == base16Len()) { + return Hash(rest, *optParsedType, isSRI); +} + +Hash Hash::parseAny(std::string_view original, std::optional optType) +{ + auto rest = original; + auto [optParsedType, isSRI] = getParsedTypeAndSRI(rest); + + // Either the string or user must provide the type, if they both do they + // must agree. + if (!optParsedType && !optType) + throw BadHash("hash '%s' does not include a type, nor is the type otherwise known from context.", rest); + else if (optParsedType && optType && *optParsedType != *optType) + throw BadHash("hash '%s' should have type '%s'", original, printHashType(*optType)); + + HashType hashType = optParsedType ? *optParsedType : *optType; + return Hash(rest, hashType, isSRI); +} + +Hash Hash::parseNonSRIUnprefixed(std::string_view s, HashType type) +{ + return Hash(s, type, false); +} + +Hash::Hash(std::string_view rest, HashType type, bool isSRI) + : Hash(type) +{ + if (!isSRI && rest.size() == base16Len()) { auto parseHexDigit = [&](char c) { if (c >= '0' && c <= '9') return c - '0'; if (c >= 'A' && c <= 'F') return c - 'A' + 10; if (c >= 'a' && c <= 'f') return c - 'a' + 10; - throw BadHash("invalid base-16 hash '%s'", s); + throw BadHash("invalid base-16 hash '%s'", rest); }; for (unsigned int i = 0; i < hashSize; i++) { hash[i] = - parseHexDigit(s[pos + i * 2]) << 4 - | parseHexDigit(s[pos + i * 2 + 1]); + parseHexDigit(rest[i * 2]) << 4 + | parseHexDigit(rest[i * 2 + 1]); } } - else if (!isSRI && size == base32Len()) { + else if (!isSRI && rest.size() == base32Len()) { - for (unsigned int n = 0; n < size; ++n) { - char c = s[pos + size - n - 1]; + for (unsigned int n = 0; n < rest.size(); ++n) { + char c = rest[rest.size() - n - 1]; unsigned char digit; for (digit = 0; digit < base32Chars.size(); ++digit) /* !!! slow */ if (base32Chars[digit] == c) break; if (digit >= 32) - throw BadHash("invalid base-32 hash '%s'", s); + throw BadHash("invalid base-32 hash '%s'", rest); unsigned int b = n * 5; unsigned int i = b / 8; unsigned int j = b % 8; @@ -202,21 +242,21 @@ Hash::Hash(std::string_view s, std::optional type) hash[i + 1] |= digit >> (8 - j); } else { if (digit >> (8 - j)) - throw BadHash("invalid base-32 hash '%s'", s); + throw BadHash("invalid base-32 hash '%s'", rest); } } } - else if (isSRI || size == base64Len()) { - auto d = base64Decode(s.substr(pos)); + else if (isSRI || rest.size() == base64Len()) { + auto d = base64Decode(rest); if (d.size() != hashSize) - throw BadHash("invalid %s hash '%s'", isSRI ? "SRI" : "base-64", s); + throw BadHash("invalid %s hash '%s'", isSRI ? 
"SRI" : "base-64", rest); assert(hashSize); memcpy(hash, d.data(), hashSize); } else - throw BadHash("hash '%s' has wrong length for hash type '%s'", s, printHashType(*type)); + throw BadHash("hash '%s' has wrong length for hash type '%s'", rest, printHashType(this->type)); } Hash newHashAllowEmpty(std::string hashStr, std::optional ht) @@ -228,7 +268,7 @@ Hash newHashAllowEmpty(std::string hashStr, std::optional ht) warn("found empty hash, assuming '%s'", h.to_string(SRI, true)); return h; } else - return Hash(hashStr, ht); + return Hash::parseAny(hashStr, ht); } @@ -269,7 +309,7 @@ static void finish(HashType ht, Ctx & ctx, unsigned char * hash) } -Hash hashString(HashType ht, const string & s) +Hash hashString(HashType ht, std::string_view s) { Ctx ctx; Hash hash(ht); @@ -336,7 +376,7 @@ HashResult hashPath( Hash compressHash(const Hash & hash, unsigned int newSize) { - Hash h; + Hash h(hash.type); h.hashSize = newSize; for (unsigned int i = 0; i < hash.hashSize; ++i) h.hash[i % newSize] ^= hash.hash[i]; @@ -344,7 +384,7 @@ Hash compressHash(const Hash & hash, unsigned int newSize) } -std::optional parseHashTypeOpt(const string & s) +std::optional parseHashTypeOpt(std::string_view s) { if (s == "md5") return htMD5; else if (s == "sha1") return htSHA1; @@ -353,7 +393,7 @@ std::optional parseHashTypeOpt(const string & s) else return std::optional {}; } -HashType parseHashType(const string & s) +HashType parseHashType(std::string_view s) { auto opt_h = parseHashTypeOpt(s); if (opt_h) diff --git a/src/libutil/hash.hh b/src/libutil/hash.hh index ad6093fca..6d6eb70ca 100644 --- a/src/libutil/hash.hh +++ b/src/libutil/hash.hh @@ -27,34 +27,38 @@ enum Base : int { Base64, Base32, Base16, SRI }; struct Hash { - static const unsigned int maxHashSize = 64; - unsigned int hashSize = 0; - unsigned char hash[maxHashSize] = {}; + constexpr static size_t maxHashSize = 64; + size_t hashSize = 0; + uint8_t hash[maxHashSize] = {}; - std::optional type = {}; - - /* Create an unset hash object. */ - Hash() { }; + HashType type; /* Create a zero-filled hash object. */ - Hash(HashType type) : type(type) { init(); }; + Hash(HashType type); - /* Initialize the hash from a string representation, in the format + /* Parse the hash from a string representation in the format "[:]" or "-" (a Subresource Integrity hash expression). If the 'type' argument is not present, then the hash type must be specified in the string. */ - Hash(std::string_view s, std::optional type); - // type must be provided - Hash(std::string_view s, HashType type); - // hash type must be part of string - Hash(std::string_view s); + static Hash parseAny(std::string_view s, std::optional type); - void init(); + /* Parse a hash from a string representation like the above, except the + type prefix is mandatory is there is no separate arguement. */ + static Hash parseAnyPrefixed(std::string_view s); - /* Check whether a hash is set. */ - operator bool () const { return (bool) type; } + /* Parse a plain hash that musst not have any prefix indicating the type. + The type is passed in to disambiguate. */ + static Hash parseNonSRIUnprefixed(std::string_view s, HashType type); + static Hash parseSRI(std::string_view original); + +private: + /* The type must be provided, the string view must not include + prefix. `isSRI` helps disambigate the various base-* encodings. */ + Hash(std::string_view s, HashType type, bool isSRI); + +public: /* Check whether two hash are equal. 
*/ bool operator == (const Hash & h2) const; @@ -98,6 +102,8 @@ struct Hash assert(type == htSHA1); return std::string(to_string(Base16, false), 0, 7); } + + static Hash dummy; }; /* Helper that defaults empty hashes to the 0 hash. */ @@ -107,14 +113,14 @@ Hash newHashAllowEmpty(std::string hashStr, std::optional ht); string printHash16or32(const Hash & hash); /* Compute the hash of the given string. */ -Hash hashString(HashType ht, const string & s); +Hash hashString(HashType ht, std::string_view s); /* Compute the hash of the given file. */ Hash hashFile(HashType ht, const Path & path); /* Compute the hash of the given path. The hash is defined as (essentially) hashString(ht, dumpPath(path)). */ -typedef std::pair HashResult; +typedef std::pair HashResult; HashResult hashPath(HashType ht, const Path & path, PathFilter & filter = defaultPathFilter); @@ -123,10 +129,10 @@ HashResult hashPath(HashType ht, const Path & path, Hash compressHash(const Hash & hash, unsigned int newSize); /* Parse a string representing a hash type. */ -HashType parseHashType(const string & s); +HashType parseHashType(std::string_view s); /* Will return nothing on parse error */ -std::optional parseHashTypeOpt(const string & s); +std::optional parseHashTypeOpt(std::string_view s); /* And the reverse. */ string printHashType(HashType ht); @@ -144,7 +150,7 @@ class HashSink : public BufferedSink, public AbstractHashSink private: HashType ht; Ctx * ctx; - unsigned long long bytes; + uint64_t bytes; public: HashSink(HashType ht); diff --git a/src/libutil/serialise.cc b/src/libutil/serialise.cc index c8b71188f..00c945113 100644 --- a/src/libutil/serialise.cc +++ b/src/libutil/serialise.cc @@ -322,5 +322,18 @@ void StringSink::operator () (const unsigned char * data, size_t len) s->append((const char *) data, len); } +size_t ChainSource::read(unsigned char * data, size_t len) +{ + if (useSecond) { + return source2.read(data, len); + } else { + try { + return source1.read(data, len); + } catch (EndOfFile &) { + useSecond = true; + return this->read(data, len); + } + } +} } diff --git a/src/libutil/serialise.hh b/src/libutil/serialise.hh index 8386a4991..69ae0874a 100644 --- a/src/libutil/serialise.hh +++ b/src/libutil/serialise.hh @@ -189,7 +189,7 @@ struct TeeSource : Source size_t read(unsigned char * data, size_t len) { size_t n = orig.read(data, len); - sink(data, len); + sink(data, n); return n; } }; @@ -225,6 +225,17 @@ struct SizedSource : Source } }; +/* A sink that that just counts the number of bytes given to it */ +struct LengthSink : Sink +{ + uint64_t length = 0; + + virtual void operator () (const unsigned char * _, size_t len) + { + length += len; + } +}; + /* Convert a function into a sink. */ struct LambdaSink : Sink { @@ -256,6 +267,19 @@ struct LambdaSource : Source } }; +/* Chain two sources together so after the first is exhausted, the second is + used */ +struct ChainSource : Source +{ + Source & source1, & source2; + bool useSecond = false; + ChainSource(Source & s1, Source & s2) + : source1(s1), source2(s2) + { } + + size_t read(unsigned char * data, size_t len) override; +}; + /* Convert a function that feeds data into a Sink into a Source. The Source executes the function as a coroutine. 
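`ChainSource` concatenates two sources, switching to the second once the first throws `EndOfFile`. A hypothetical use, assuming the in-tree headers — treating an already-buffered prefix plus the rest of a file descriptor as one stream:

```cpp
#include "serialise.hh"
#include <string>
#include <vector>

using namespace nix;

static void drainWithPrefix(const std::string & alreadyRead, int fd, Sink & out)
{
    StringSource prefix(alreadyRead);
    FdSource rest(fd);
    ChainSource chained(prefix, rest);

    std::vector<unsigned char> buf(4096);
    try {
        while (true) {
            auto n = chained.read(buf.data(), buf.size());  // prefix first, then the fd
            out(buf.data(), n);
        }
    } catch (EndOfFile &) {
        // both sources exhausted
    }
}
```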
*/ @@ -299,14 +323,14 @@ T readNum(Source & source) source(buf, sizeof(buf)); uint64_t n = - ((unsigned long long) buf[0]) | - ((unsigned long long) buf[1] << 8) | - ((unsigned long long) buf[2] << 16) | - ((unsigned long long) buf[3] << 24) | - ((unsigned long long) buf[4] << 32) | - ((unsigned long long) buf[5] << 40) | - ((unsigned long long) buf[6] << 48) | - ((unsigned long long) buf[7] << 56); + ((uint64_t) buf[0]) | + ((uint64_t) buf[1] << 8) | + ((uint64_t) buf[2] << 16) | + ((uint64_t) buf[3] << 24) | + ((uint64_t) buf[4] << 32) | + ((uint64_t) buf[5] << 40) | + ((uint64_t) buf[6] << 48) | + ((uint64_t) buf[7] << 56); if (n > std::numeric_limits::max()) throw SerialisationError("serialised integer %d is too large for type '%s'", n, typeid(T).name()); diff --git a/src/libutil/split.hh b/src/libutil/split.hh new file mode 100644 index 000000000..d19d7d8ed --- /dev/null +++ b/src/libutil/split.hh @@ -0,0 +1,33 @@ +#pragma once + +#include +#include + +#include "util.hh" + +namespace nix { + +// If `separator` is found, we return the portion of the string before the +// separator, and modify the string argument to contain only the part after the +// separator. Otherwise, wer return `std::nullopt`, and we leave the argument +// string alone. +static inline std::optional splitPrefixTo(std::string_view & string, char separator) { + auto sepInstance = string.find(separator); + + if (sepInstance != std::string_view::npos) { + auto prefix = string.substr(0, sepInstance); + string.remove_prefix(sepInstance+1); + return prefix; + } + + return std::nullopt; +} + +static inline bool splitPrefix(std::string_view & string, std::string_view prefix) { + bool res = hasPrefix(string, prefix); + if (res) + string.remove_prefix(prefix.length()); + return res; +} + +} diff --git a/src/libutil/topo-sort.hh b/src/libutil/topo-sort.hh new file mode 100644 index 000000000..7a68ff169 --- /dev/null +++ b/src/libutil/topo-sort.hh @@ -0,0 +1,40 @@ +#include "error.hh" + +namespace nix { + +template +std::vector topoSort(std::set items, + std::function(const T &)> getChildren, + std::function makeCycleError) +{ + std::vector sorted; + std::set visited, parents; + + std::function dfsVisit; + + dfsVisit = [&](const T & path, const T * parent) { + if (parents.count(path)) throw makeCycleError(path, *parent); + + if (!visited.insert(path).second) return; + parents.insert(path); + + std::set references = getChildren(path); + + for (auto & i : references) + /* Don't traverse into items that don't exist in our starting set. 
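The new `split.hh` helpers mutate a `std::string_view` in place: `splitPrefixTo` consumes everything up to a separator and returns it, `splitPrefix` strips a known literal prefix. A tiny usage sketch with illustrative values:

```cpp
#include "split.hh"
#include <cassert>
#include <string_view>

using namespace nix;

static void splitExamples()
{
    std::string_view s = "sha256:abcdef";
    auto type = splitPrefixTo(s, ':');
    assert(type && *type == "sha256");
    assert(s == "abcdef");                  // only the remainder is left in `s`

    std::string_view t = "refs/heads/master";
    assert(splitPrefix(t, "refs/heads/"));
    assert(t == "master");
}
```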
*/ + if (i != path && items.count(i)) + dfsVisit(i, &path); + + sorted.push_back(path); + parents.erase(path); + }; + + for (auto & i : items) + dfsVisit(i, nullptr); + + std::reverse(sorted.begin(), sorted.end()); + + return sorted; +} + +} diff --git a/src/libutil/util.cc b/src/libutil/util.cc index 93798a765..c0b9698ee 100644 --- a/src/libutil/util.cc +++ b/src/libutil/util.cc @@ -374,7 +374,7 @@ void writeLine(int fd, string s) } -static void _deletePath(int parentfd, const Path & path, unsigned long long & bytesFreed) +static void _deletePath(int parentfd, const Path & path, uint64_t & bytesFreed) { checkInterrupt(); @@ -414,7 +414,7 @@ static void _deletePath(int parentfd, const Path & path, unsigned long long & by } } -static void _deletePath(const Path & path, unsigned long long & bytesFreed) +static void _deletePath(const Path & path, uint64_t & bytesFreed) { Path dir = dirOf(path); if (dir == "") @@ -435,12 +435,12 @@ static void _deletePath(const Path & path, unsigned long long & bytesFreed) void deletePath(const Path & path) { - unsigned long long dummy; + uint64_t dummy; deletePath(path, dummy); } -void deletePath(const Path & path, unsigned long long & bytesFreed) +void deletePath(const Path & path, uint64_t & bytesFreed) { //Activity act(*logger, lvlDebug, format("recursively deleting path '%1%'") % path); bytesFreed = 0; @@ -494,6 +494,7 @@ std::pair createTempFile(const Path & prefix) { Path tmpl(getEnv("TMPDIR").value_or("/tmp") + "/" + prefix + ".XXXXXX"); // Strictly speaking, this is UB, but who cares... + // FIXME: use O_TMPFILE. AutoCloseFD fd(mkstemp((char *) tmpl.c_str())); if (!fd) throw SysError("creating temporary file '%s'", tmpl); @@ -1449,7 +1450,7 @@ string base64Decode(std::string_view s) char digit = decode[(unsigned char) c]; if (digit == -1) - throw Error("invalid character in Base64 string"); + throw Error("invalid character in Base64 string: '%c'", c); bits += 6; d = d << 6 | digit; @@ -1581,7 +1582,7 @@ AutoCloseFD createUnixDomainSocket(const Path & path, mode_t mode) struct sockaddr_un addr; addr.sun_family = AF_UNIX; - if (path.size() >= sizeof(addr.sun_path)) + if (path.size() + 1 >= sizeof(addr.sun_path)) throw Error("socket path '%1%' is too long", path); strcpy(addr.sun_path, path.c_str()); diff --git a/src/libutil/util.hh b/src/libutil/util.hh index 42130f6dc..3a20679a8 100644 --- a/src/libutil/util.hh +++ b/src/libutil/util.hh @@ -125,7 +125,7 @@ void writeLine(int fd, string s); second variant returns the number of bytes and blocks freed. */ void deletePath(const Path & path); -void deletePath(const Path & path, unsigned long long & bytesFreed); +void deletePath(const Path & path, uint64_t & bytesFreed); std::string getUserName(); @@ -601,4 +601,9 @@ constexpr auto enumerate(T && iterable) } +// C++17 std::visit boilerplate +template struct overloaded : Ts... { using Ts::operator()...; }; +template overloaded(Ts...) 
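`topoSort` generalises the store's path sorting into a reusable template: children are whatever `getChildren` returns, and cycles are reported through a caller-supplied error factory. A small usage sketch on plain strings; the data is illustrative, and the full template parameters (elided in the hunk above) are assumed to match `topo-sort.hh`:

```cpp
#include "topo-sort.hh"
#include <map>
#include <set>
#include <string>
#include <vector>

using namespace nix;

static std::vector<std::string> orderExample()
{
    std::map<std::string, std::set<std::string>> refs {
        { "app",  { "lib" } },
        { "lib",  { "libc" } },
        { "libc", { } },
    };

    // Result: every item precedes the items it refers to (app, lib, libc).
    return topoSort<std::string>(
        { "app", "lib", "libc" },
        [&](const std::string & n) { return refs[n]; },
        [](const std::string & a, const std::string & b) {
            return Error("cycle detected between '%s' and '%s'", a, b);
        });
}
```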
-> overloaded; + + } diff --git a/src/nix-build/nix-build.cc b/src/nix-build/nix-build.cc index f77de56ea..94412042f 100755 --- a/src/nix-build/nix-build.cc +++ b/src/nix-build/nix-build.cc @@ -174,7 +174,7 @@ static void _main(int argc, char * * argv) else if (*arg == "--run-env") // obsolete runEnv = true; - else if (*arg == "--command" || *arg == "--run") { + else if (runEnv && (*arg == "--command" || *arg == "--run")) { if (*arg == "--run") interactive = false; envCommand = getArg(*arg, arg, end) + "\nexit"; @@ -192,7 +192,7 @@ static void _main(int argc, char * * argv) else if (*arg == "--pure") pure = true; else if (*arg == "--impure") pure = false; - else if (*arg == "--packages" || *arg == "-p") + else if (runEnv && (*arg == "--packages" || *arg == "-p")) packages = true; else if (inShebang && *arg == "-i") { @@ -325,7 +325,7 @@ static void _main(int argc, char * * argv) auto buildPaths = [&](const std::vector & paths) { /* Note: we do this even when !printMissing to efficiently fetch binary cache data. */ - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; StorePathSet willBuild, willSubstitute, unknown; store->queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize); diff --git a/src/nix-collect-garbage/nix-collect-garbage.cc b/src/nix-collect-garbage/nix-collect-garbage.cc index aa5ada3a6..bcf1d8518 100644 --- a/src/nix-collect-garbage/nix-collect-garbage.cc +++ b/src/nix-collect-garbage/nix-collect-garbage.cc @@ -67,10 +67,8 @@ static int _main(int argc, char * * argv) deleteOlderThan = getArg(*arg, arg, end); } else if (*arg == "--dry-run") dryRun = true; - else if (*arg == "--max-freed") { - long long maxFreed = getIntArg(*arg, arg, end, true); - options.maxFreed = maxFreed >= 0 ? maxFreed : 0; - } + else if (*arg == "--max-freed") + options.maxFreed = std::max(getIntArg(*arg, arg, end, true), (int64_t) 0); else return false; return true; diff --git a/src/nix-daemon/nix-daemon.cc b/src/nix-daemon/nix-daemon.cc index b52cd7989..9613cb7d3 100644 --- a/src/nix-daemon/nix-daemon.cc +++ b/src/nix-daemon/nix-daemon.cc @@ -240,7 +240,15 @@ static void daemonLoop(char * * argv) // Handle the connection. FdSource from(remote.get()); FdSink to(remote.get()); - processConnection(openUncachedStore(), from, to, trusted, NotRecursive, user, peer.uid); + processConnection(openUncachedStore(), from, to, trusted, NotRecursive, [&](Store & store) { +#if 0 + /* Prevent users from doing something very dangerous. */ + if (geteuid() == 0 && + querySetting("build-users-group", "") == "") + throw Error("if you run 'nix-daemon' as root, then you MUST set 'build-users-group'!"); +#endif + store.createUser(user, peer.uid); + }); exit(0); }, options); @@ -327,7 +335,10 @@ static int _main(int argc, char * * argv) } else { FdSource from(STDIN_FILENO); FdSink to(STDOUT_FILENO); - processConnection(openUncachedStore(), from, to, Trusted, NotRecursive, "root", 0); + /* Auth hook is empty because in this mode we blindly trust the + standard streams. Limitting access to thoses is explicitly + not `nix-daemon`'s responsibility. 
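The `overloaded` helper that several call sites in this change rely on (e.g. `makeFixedOutputPathFromCA` and the new `Buildable` handling) is the usual C++17 trick for assembling a `std::visit` visitor out of lambdas. A self-contained sketch on an illustrative variant:

```cpp
#include "util.hh"
#include <iostream>
#include <string>
#include <variant>

using namespace nix;

static void describe(const std::variant<int, std::string> & v)
{
    std::visit(overloaded {
        [](int n)                 { std::cout << "number: " << n << "\n"; },
        [](const std::string & s) { std::cout << "string: " << s << "\n"; },
    }, v);
}
```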
*/ + processConnection(openUncachedStore(), from, to, Trusted, NotRecursive, [&](Store & _){}); } } else { daemonLoop(argv); diff --git a/src/nix-env/nix-env.cc b/src/nix-env/nix-env.cc index 5795c2c09..ddd036070 100644 --- a/src/nix-env/nix-env.cc +++ b/src/nix-env/nix-env.cc @@ -381,7 +381,8 @@ static void queryInstSources(EvalState & state, if (path.isDerivation()) { elem.setDrvPath(state.store->printStorePath(path)); - elem.setOutPath(state.store->printStorePath(state.store->derivationFromPath(path).findOutput("out"))); + auto outputs = state.store->queryDerivationOutputMap(path); + elem.setOutPath(state.store->printStorePath(outputs.at("out"))); if (name.size() >= drvExtension.size() && string(name, name.size() - drvExtension.size()) == drvExtension) name = string(name, 0, name.size() - drvExtension.size()); diff --git a/src/nix-prefetch-url/nix-prefetch-url.cc b/src/nix-prefetch-url/nix-prefetch-url.cc index 961e7fb6d..1001f27af 100644 --- a/src/nix-prefetch-url/nix-prefetch-url.cc +++ b/src/nix-prefetch-url/nix-prefetch-url.cc @@ -154,10 +154,10 @@ static int _main(int argc, char * * argv) /* If an expected hash is given, the file may already exist in the store. */ std::optional expectedHash; - Hash hash; + Hash hash(ht); std::optional storePath; if (args.size() == 2) { - expectedHash = Hash(args[1], ht); + expectedHash = Hash::parseAny(args[1], ht); const auto recursive = unpack ? FileIngestionMethod::Recursive : FileIngestionMethod::Flat; storePath = store->makeFixedOutputPath(recursive, *expectedHash, name); if (store->isValidPath(*storePath)) diff --git a/src/nix-store/nix-store.cc b/src/nix-store/nix-store.cc index 23bb48d88..a58edff57 100644 --- a/src/nix-store/nix-store.cc +++ b/src/nix-store/nix-store.cc @@ -77,7 +77,7 @@ static PathSet realisePath(StorePathWithOutputs path, bool build = true) if (i == drv.outputs.end()) throw Error("derivation '%s' does not have an output named '%s'", store2->printStorePath(path.path), j); - auto outPath = store2->printStorePath(i->second.path); + auto outPath = store2->printStorePath(i->second.path(*store, drv.name)); if (store2) { if (gcRoot == "") printGCWarning(); @@ -130,7 +130,7 @@ static void opRealise(Strings opFlags, Strings opArgs) for (auto & i : opArgs) paths.push_back(store->followLinksToStorePathWithOutputs(i)); - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; StorePathSet willBuild, willSubstitute, unknown; store->queryMissing(paths, willBuild, willSubstitute, unknown, downloadSize, narSize); @@ -208,7 +208,7 @@ static void opPrintFixedPath(Strings opFlags, Strings opArgs) string hash = *i++; string name = *i++; - cout << fmt("%s\n", store->printStorePath(store->makeFixedOutputPath(recursive, Hash(hash, hashAlgo), name))); + cout << fmt("%s\n", store->printStorePath(store->makeFixedOutputPath(recursive, Hash::parseAny(hash, hashAlgo), name))); } @@ -218,8 +218,8 @@ static StorePathSet maybeUseOutputs(const StorePath & storePath, bool useOutput, if (useOutput && storePath.isDerivation()) { auto drv = store->derivationFromPath(storePath); StorePathSet outputs; - for (auto & i : drv.outputs) - outputs.insert(i.second.path); + for (auto & i : drv.outputsAndPaths(*store)) + outputs.insert(i.second.second); return outputs; } else return {storePath}; @@ -312,8 +312,8 @@ static void opQuery(Strings opFlags, Strings opArgs) auto i2 = store->followLinksToStorePath(i); if (forceRealise) realisePath({i2}); Derivation drv = store->derivationFromPath(i2); - for (auto & j : drv.outputs) - cout << fmt("%1%\n", 
store->printStorePath(j.second.path)); + for (auto & j : drv.outputsAndPaths(*store)) + cout << fmt("%1%\n", store->printStorePath(j.second.second)); } break; } @@ -495,7 +495,10 @@ static void registerValidity(bool reregister, bool hashGiven, bool canonicalise) ValidPathInfos infos; while (1) { - auto info = decodeValidPathInfo(*store, cin, hashGiven); + // We use a dummy value because we'll set it below. FIXME be correct by + // construction and avoid dummy value. + auto hashResultOpt = !hashGiven ? std::optional { {Hash::dummy, -1} } : std::nullopt; + auto info = decodeValidPathInfo(*store, cin, hashResultOpt); if (!info) break; if (!store->isValidPath(info->path) || reregister) { /* !!! races */ @@ -572,10 +575,8 @@ static void opGC(Strings opFlags, Strings opArgs) if (*i == "--print-roots") printRoots = true; else if (*i == "--print-live") options.action = GCOptions::gcReturnLive; else if (*i == "--print-dead") options.action = GCOptions::gcReturnDead; - else if (*i == "--max-freed") { - long long maxFreed = getIntArg(*i, i, opFlags.end(), true); - options.maxFreed = maxFreed >= 0 ? maxFreed : 0; - } + else if (*i == "--max-freed") + options.maxFreed = std::max(getIntArg(*i, i, opFlags.end(), true), (int64_t) 0); else throw UsageError("bad sub-operation '%1%' in GC", *i); if (!opArgs.empty()) throw UsageError("no arguments expected"); @@ -725,7 +726,7 @@ static void opVerifyPath(Strings opFlags, Strings opArgs) auto path = store->followLinksToStorePath(i); printMsg(lvlTalkative, "checking path '%s'...", store->printStorePath(path)); auto info = store->queryPathInfo(path); - HashSink sink(*info->narHash.type); + HashSink sink(info->narHash.type); store->narFromPath(path, sink); auto current = sink.finish(); if (current.first != info->narHash) { @@ -831,7 +832,7 @@ static void opServe(Strings opFlags, Strings opArgs) for (auto & path : paths) if (!path.isDerivation()) paths2.push_back({path}); - unsigned long long downloadSize, narSize; + uint64_t downloadSize, narSize; StorePathSet willBuild, willSubstitute, unknown; store->queryMissing(paths2, willBuild, willSubstitute, unknown, downloadSize, narSize); @@ -864,7 +865,9 @@ static void opServe(Strings opFlags, Strings opArgs) out << info->narSize // downloadSize << info->narSize; if (GET_PROTOCOL_MINOR(clientVersion) >= 4) - out << (info->narHash ? 
info->narHash.to_string(Base32, true) : "") << renderContentAddress(info->ca) << info->sigs; + out << info->narHash.to_string(Base32, true) + << renderContentAddress(info->ca) + << info->sigs; } catch (InvalidPath &) { } } @@ -914,9 +917,9 @@ static void opServe(Strings opFlags, Strings opArgs) if (!writeAllowed) throw Error("building paths is not allowed"); - auto drvPath = store->parseStorePath(readString(in)); // informational only + auto drvPath = store->parseStorePath(readString(in)); BasicDerivation drv; - readDerivation(in, *store, drv); + readDerivation(in, *store, drv, Derivation::nameFromPath(drvPath)); getBuildSettings(); @@ -944,11 +947,13 @@ static void opServe(Strings opFlags, Strings opArgs) if (!writeAllowed) throw Error("importing paths is not allowed"); auto path = readString(in); - ValidPathInfo info(store->parseStorePath(path)); auto deriver = readString(in); + ValidPathInfo info { + store->parseStorePath(path), + Hash::parseAny(readString(in), htSHA256), + }; if (deriver != "") info.deriver = store->parseStorePath(deriver); - info.narHash = Hash(readString(in), htSHA256); info.references = readStorePaths(*store, in); in >> info.registrationTime >> info.narSize >> info.ultimate; info.sigs = readStrings(in); diff --git a/src/nix/add-to-store.cc b/src/nix/add-to-store.cc index f9d6de16e..713155840 100644 --- a/src/nix/add-to-store.cc +++ b/src/nix/add-to-store.cc @@ -9,6 +9,7 @@ struct CmdAddToStore : MixDryRun, StoreCommand { Path path; std::optional namePart; + FileIngestionMethod ingestionMethod = FileIngestionMethod::Recursive; CmdAddToStore() { @@ -21,6 +22,13 @@ struct CmdAddToStore : MixDryRun, StoreCommand .labels = {"name"}, .handler = {&namePart}, }); + + addFlag({ + .longName = "flat", + .shortName = 0, + .description = "add flat file to the Nix store", + .handler = {&ingestionMethod, FileIngestionMethod::Flat}, + }); } std::string description() override @@ -45,12 +53,21 @@ struct CmdAddToStore : MixDryRun, StoreCommand auto narHash = hashString(htSHA256, *sink.s); - ValidPathInfo info(store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, *namePart)); - info.narHash = narHash; + Hash hash = narHash; + if (ingestionMethod == FileIngestionMethod::Flat) { + HashSink hsink(htSHA256); + readFile(path, hsink); + hash = hsink.finish().first; + } + + ValidPathInfo info { + store->makeFixedOutputPath(ingestionMethod, hash, *namePart), + narHash, + }; info.narSize = sink.s->size(); info.ca = std::optional { FixedOutputHash { - .method = FileIngestionMethod::Recursive, - .hash = info.narHash, + .method = ingestionMethod, + .hash = hash, } }; if (!dryRun) { diff --git a/src/nix/app.cc b/src/nix/app.cc index 3935297cf..80acbf658 100644 --- a/src/nix/app.cc +++ b/src/nix/app.cc @@ -8,7 +8,7 @@ namespace nix { App Installable::toApp(EvalState & state) { - auto [cursor, attrPath] = getCursor(state, true); + auto [cursor, attrPath] = getCursor(state); auto type = cursor->getAttr("type")->getString(); diff --git a/src/nix/build.cc b/src/nix/build.cc index 0f7e0e123..13d14a7fb 100644 --- a/src/nix/build.cc +++ b/src/nix/build.cc @@ -9,6 +9,7 @@ using namespace nix; struct CmdBuild : InstallablesCommand, MixDryRun, MixProfile { Path outLink = "result"; + BuildMode buildMode = bmNormal; CmdBuild() { @@ -26,6 +27,12 @@ struct CmdBuild : InstallablesCommand, MixDryRun, MixProfile .description = "do not create a symlink to the build result", .handler = {&outLink, Path("")}, }); + + addFlag({ + .longName = "rebuild", + .description = "rebuild an already built package and 
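With the new `--flat` flag the store path is derived from the plain file hash instead of the NAR hash, while `narHash`/`narSize` are still recorded from the NAR serialisation. A sketch of how the two path computations differ (mirrors `CmdAddToStore::run`; `store`, `path` and `name` are assumed to exist):

```cpp
StringSink sink;
dumpPath(path, sink);                       // NAR serialisation; always needed for narHash
auto narHash = hashString(htSHA256, *sink.s);

// Recursive (default): the content address is the NAR hash itself.
auto recursivePath = store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, name);

// Flat: the content address is the hash of the file's raw contents.
HashSink hsink(htSHA256);
readFile(path, hsink);
auto flatHash = hsink.finish().first;
auto flatPath = store->makeFixedOutputPath(FileIngestionMethod::Flat, flatHash, name);
```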
compare the result to the existing store paths", + .handler = {&buildMode, bmCheck}, + }); } std::string description() override @@ -53,21 +60,28 @@ struct CmdBuild : InstallablesCommand, MixDryRun, MixProfile void run(ref store) override { - auto buildables = build(store, dryRun ? Realise::Nothing : Realise::Outputs, installables); + auto buildables = build(store, dryRun ? Realise::Nothing : Realise::Outputs, installables, buildMode); if (dryRun) return; - if (outLink != "") { - for (size_t i = 0; i < buildables.size(); ++i) { - for (auto & output : buildables[i].outputs) - if (auto store2 = store.dynamic_pointer_cast()) { - std::string symlink = outLink; - if (i) symlink += fmt("-%d", i); - if (output.first != "out") symlink += fmt("-%s", output.first); - store2->addPermRoot(output.second, absPath(symlink), true); - } - } - } + if (outLink != "") + if (auto store2 = store.dynamic_pointer_cast()) + for (size_t i = 0; i < buildables.size(); ++i) + std::visit(overloaded { + [&](BuildableOpaque bo) { + std::string symlink = outLink; + if (i) symlink += fmt("-%d", i); + store2->addPermRoot(bo.path, absPath(symlink), true); + }, + [&](BuildableFromDrv bfd) { + for (auto & output : bfd.outputs) { + std::string symlink = outLink; + if (i) symlink += fmt("-%d", i); + if (output.first != "out") symlink += fmt("-%s", output.first); + store2->addPermRoot(output.second, absPath(symlink), true); + } + }, + }, buildables[i]); updateProfile(buildables); } diff --git a/src/nix/bundle.cc b/src/nix/bundle.cc new file mode 100644 index 000000000..eb3339f5d --- /dev/null +++ b/src/nix/bundle.cc @@ -0,0 +1,129 @@ +#include "command.hh" +#include "common-args.hh" +#include "shared.hh" +#include "store-api.hh" +#include "fs-accessor.hh" + +using namespace nix; + +struct CmdBundle : InstallableCommand +{ + std::string bundler = "github:matthewbauer/nix-bundle"; + std::optional outLink; + + CmdBundle() + { + addFlag({ + .longName = "bundler", + .description = "use custom bundler", + .labels = {"flake-url"}, + .handler = {&bundler}, + .completer = {[&](size_t, std::string_view prefix) { + completeFlakeRef(getStore(), prefix); + }} + }); + + addFlag({ + .longName = "out-link", + .shortName = 'o', + .description = "path of the symlink to the build result", + .labels = {"path"}, + .handler = {&outLink}, + .completer = completePath + }); + } + + std::string description() override + { + return "bundle an application so that it works outside of the Nix store"; + } + + Examples examples() override + { + return { + Example{ + "To bundle Hello:", + "nix bundle hello" + }, + }; + } + + Category category() override { return catSecondary; } + + Strings getDefaultFlakeAttrPaths() override + { + Strings res{"defaultApp." + settings.thisSystem.get()}; + for (auto & s : SourceExprCommand::getDefaultFlakeAttrPaths()) + res.push_back(s); + return res; + } + + Strings getDefaultFlakeAttrPathPrefixes() override + { + Strings res{"apps." + settings.thisSystem.get() + ".", "packages"}; + for (auto & s : SourceExprCommand::getDefaultFlakeAttrPathPrefixes()) + res.push_back(s); + return res; + } + + void run(ref store) override + { + auto evalState = getEvalState(); + + auto app = installable->toApp(*evalState); + store->buildPaths(app.context); + + auto [bundlerFlakeRef, bundlerName] = parseFlakeRefWithFragment(bundler, absPath(".")); + const flake::LockFlags lockFlags{ .writeLockFile = false }; + auto bundler = InstallableFlake( + evalState, std::move(bundlerFlakeRef), + Strings{bundlerName == "" ? 
"defaultBundler" : bundlerName}, + Strings({"bundlers."}), lockFlags); + + Value * arg = evalState->allocValue(); + evalState->mkAttrs(*arg, 2); + + PathSet context; + for (auto & i : app.context) + context.insert("=" + store->printStorePath(i.path)); + mkString(*evalState->allocAttr(*arg, evalState->symbols.create("program")), app.program, context); + + mkString(*evalState->allocAttr(*arg, evalState->symbols.create("system")), settings.thisSystem.get()); + + arg->attrs->sort(); + + auto vRes = evalState->allocValue(); + evalState->callFunction(*bundler.toValue(*evalState).first, *arg, *vRes, noPos); + + if (!evalState->isDerivation(*vRes)) + throw Error("the bundler '%s' does not produce a derivation", bundler.what()); + + auto attr1 = vRes->attrs->find(evalState->sDrvPath); + if (!attr1) + throw Error("the bundler '%s' does not produce a derivation", bundler.what()); + + PathSet context2; + StorePath drvPath = store->parseStorePath(evalState->coerceToPath(*attr1->pos, *attr1->value, context2)); + + auto attr2 = vRes->attrs->find(evalState->sOutPath); + if (!attr2) + throw Error("the bundler '%s' does not produce a derivation", bundler.what()); + + StorePath outPath = store->parseStorePath(evalState->coerceToPath(*attr2->pos, *attr2->value, context2)); + + store->buildPaths({{drvPath}}); + + auto outPathS = store->printStorePath(outPath); + + auto info = store->queryPathInfo(outPath); + if (!info->references.empty()) + throw Error("'%s' has references; a bundler must not leave any references", outPathS); + + if (!outLink) + outLink = baseNameOf(app.program); + + store.dynamic_pointer_cast()->addPermRoot(outPath, absPath(*outLink), true); + } +}; + +static auto r2 = registerCommand("bundle"); diff --git a/src/nix/command.cc b/src/nix/command.cc index af36dda89..da32819da 100644 --- a/src/nix/command.cc +++ b/src/nix/command.cc @@ -128,20 +128,25 @@ void MixProfile::updateProfile(const Buildables & buildables) { if (!profile) return; - std::optional result; + std::vector result; for (auto & buildable : buildables) { - for (auto & output : buildable.outputs) { - if (result) - throw Error("'--profile' requires that the arguments produce a single store path, but there are multiple"); - result = output.second; - } + std::visit(overloaded { + [&](BuildableOpaque bo) { + result.push_back(bo.path); + }, + [&](BuildableFromDrv bfd) { + for (auto & output : bfd.outputs) { + result.push_back(output.second); + } + }, + }, buildable); } - if (!result) - throw Error("'--profile' requires that the arguments produce a single store path, but there are none"); + if (result.size() != 1) + throw Error("'--profile' requires that the arguments produce a single store path, but there are %d", result.size()); - updateProfile(*result); + updateProfile(result[0]); } MixDefaultProfile::MixDefaultProfile() diff --git a/src/nix/command.hh b/src/nix/command.hh index 856721ebf..bc46a2028 100644 --- a/src/nix/command.hh +++ b/src/nix/command.hh @@ -5,6 +5,7 @@ #include "common-eval-args.hh" #include "path.hh" #include "flake/lockfile.hh" +#include "store-api.hh" #include @@ -185,7 +186,7 @@ static RegisterCommand registerCommand(const std::string & name) } Buildables build(ref store, Realise mode, - std::vector> installables); + std::vector> installables, BuildMode bMode = bmNormal); std::set toStorePaths(ref store, Realise mode, OperateOn operateOn, diff --git a/src/nix/develop.cc b/src/nix/develop.cc index a6d7d6add..434088da7 100644 --- a/src/nix/develop.cc +++ b/src/nix/develop.cc @@ -68,22 +68,22 @@ BuildEnvironment 
readEnvironment(const Path & path) std::smatch match; - if (std::regex_search(pos, file.cend(), match, declareRegex)) { + if (std::regex_search(pos, file.cend(), match, declareRegex, std::regex_constants::match_continuous)) { pos = match[0].second; exported.insert(match[1]); } - else if (std::regex_search(pos, file.cend(), match, varRegex)) { + else if (std::regex_search(pos, file.cend(), match, varRegex, std::regex_constants::match_continuous)) { pos = match[0].second; res.env.insert({match[1], Var { .exported = exported.count(match[1]) > 0, .value = match[2] }}); } - else if (std::regex_search(pos, file.cend(), match, assocArrayRegex)) { + else if (std::regex_search(pos, file.cend(), match, assocArrayRegex, std::regex_constants::match_continuous)) { pos = match[0].second; res.env.insert({match[1], Var { .associative = true, .value = match[2] }}); } - else if (std::regex_search(pos, file.cend(), match, functionRegex)) { + else if (std::regex_search(pos, file.cend(), match, functionRegex, std::regex_constants::match_continuous)) { res.bashFunctions = std::string(pos, file.cend()); break; } @@ -124,22 +124,21 @@ StorePath getDerivationEnvironment(ref store, const StorePath & drvPath) /* Rehash and write the derivation. FIXME: would be nice to use 'buildDerivation', but that's privileged. */ - auto drvName = std::string(drvPath.name()); - assert(hasSuffix(drvName, ".drv")); - drvName.resize(drvName.size() - 4); - drvName += "-env"; + drv.name += "-env"; for (auto & output : drv.outputs) drv.env.erase(output.first); - drv.outputs = {{"out", DerivationOutput { .path = StorePath::dummy }}}; + drv.outputs = {{"out", DerivationOutput { .output = DerivationOutputInputAddressed { .path = StorePath::dummy }}}}; drv.env["out"] = ""; drv.env["_outputs_saved"] = drv.env["outputs"]; drv.env["outputs"] = "out"; drv.inputSrcs.insert(std::move(getEnvShPath)); - Hash h = hashDerivationModulo(*store, drv, true); - auto shellOutPath = store->makeOutputPath("out", h, drvName); - drv.outputs.insert_or_assign("out", DerivationOutput { .path = shellOutPath }); + Hash h = std::get<0>(hashDerivationModulo(*store, drv, true)); + auto shellOutPath = store->makeOutputPath("out", h, drv.name); + drv.outputs.insert_or_assign("out", DerivationOutput { .output = DerivationOutputInputAddressed { + .path = shellOutPath + } }); drv.env["out"] = store->printStorePath(shellOutPath); - auto shellDrvPath2 = writeDerivation(store, drv, drvName); + auto shellDrvPath2 = writeDerivation(store, drv); /* Build the derivation. 
*/ store->buildPaths({{shellDrvPath2}}); diff --git a/src/nix/edit.cc b/src/nix/edit.cc index dc9775635..378a3739c 100644 --- a/src/nix/edit.cc +++ b/src/nix/edit.cc @@ -45,6 +45,7 @@ struct CmdEdit : InstallableCommand auto args = editorFor(pos); + restoreSignals(); execvp(args.front().c_str(), stringsToCharPtrs(args).data()); std::string command; diff --git a/src/nix/flake.cc b/src/nix/flake.cc index 027a9871e..653f8db1b 100644 --- a/src/nix/flake.cc +++ b/src/nix/flake.cc @@ -368,6 +368,21 @@ struct CmdFlakeCheck : FlakeCommand } }; + auto checkBundler = [&](const std::string & attrPath, Value & v, const Pos & pos) { + try { + state->forceValue(v, pos); + if (v.type != tLambda) + throw Error("bundler must be a function"); + if (!v.lambda.fun->formals || + v.lambda.fun->formals->argNames.find(state->symbols.create("program")) == v.lambda.fun->formals->argNames.end() || + v.lambda.fun->formals->argNames.find(state->symbols.create("system")) == v.lambda.fun->formals->argNames.end()) + throw Error("bundler must take formal arguments 'program' and 'system'"); + } catch (Error & e) { + e.addTrace(pos, hintfmt("while checking the template '%s'", attrPath)); + throw; + } + }; + { Activity act(*logger, lvlInfo, actUnknown, "evaluating flake"); @@ -490,6 +505,16 @@ struct CmdFlakeCheck : FlakeCommand *attr.value, *attr.pos); } + else if (name == "defaultBundler") + checkBundler(name, vOutput, pos); + + else if (name == "bundlers") { + state->forceAttrs(vOutput, pos); + for (auto & attr : *vOutput.attrs) + checkBundler(fmt("%s.%s", name, attr.name), + *attr.value, *attr.pos); + } + else warn("unknown flake output '%s'", name); @@ -547,7 +572,7 @@ struct CmdFlakeInitCommon : virtual Args, EvalCommand Strings{templateName == "" ? "defaultTemplate" : templateName}, Strings(attrsPathPrefixes), lockFlags); - auto [cursor, attrPath] = installable.getCursor(*evalState, true); + auto [cursor, attrPath] = installable.getCursor(*evalState); auto templateDir = cursor->getAttr("path")->getString(); @@ -757,7 +782,6 @@ struct CmdFlakeArchive : FlakeCommand, MixJSON, MixDryRun struct CmdFlakeShow : FlakeCommand { bool showLegacy = false; - bool useEvalCache = true; CmdFlakeShow() { @@ -766,12 +790,6 @@ struct CmdFlakeShow : FlakeCommand .description = "show the contents of the 'legacyPackages' output", .handler = {&showLegacy, true} }); - - addFlag({ - .longName = "no-eval-cache", - .description = "do not use the flake evaluation cache", - .handler = {[&]() { useEvalCache = false; }} - }); } std::string description() override @@ -909,7 +927,7 @@ struct CmdFlakeShow : FlakeCommand } }; - auto cache = openEvalCache(*state, flake, useEvalCache); + auto cache = openEvalCache(*state, flake); visit(*cache->getRoot(), {}, fmt(ANSI_BOLD "%s" ANSI_NORMAL, flake->flake.lockedRef), ""); } diff --git a/src/nix/hash.cc b/src/nix/hash.cc index b94751e45..0eca4f8ea 100644 --- a/src/nix/hash.cc +++ b/src/nix/hash.cc @@ -107,7 +107,7 @@ struct CmdToBase : Command void run() override { for (auto s : args) - logger->stdout(Hash(s, ht).to_string(base, base == SRI)); + logger->stdout(Hash::parseAny(s, ht).to_string(base, base == SRI)); } }; diff --git a/src/nix/installables.cc b/src/nix/installables.cc index a13e5a3df..d34f87982 100644 --- a/src/nix/installables.cc +++ b/src/nix/installables.cc @@ -183,8 +183,7 @@ void completeFlakeRefWithFragment( auto flakeRef = parseFlakeRef(flakeRefS, absPath(".")); auto evalCache = openEvalCache(*evalState, - std::make_shared(lockFlake(*evalState, flakeRef, lockFlags)), - true); + 
std::make_shared(lockFlake(*evalState, flakeRef, lockFlags))); auto root = evalCache->getRoot(); @@ -273,18 +272,18 @@ Buildable Installable::toBuildable() } std::vector, std::string>> -Installable::getCursors(EvalState & state, bool useEvalCache) +Installable::getCursors(EvalState & state) { auto evalCache = - std::make_shared(false, Hash(), state, + std::make_shared(std::nullopt, state, [&]() { return toValue(state).first; }); return {{evalCache->getRoot(), ""}}; } std::pair, std::string> -Installable::getCursor(EvalState & state, bool useEvalCache) +Installable::getCursor(EvalState & state) { - auto cursors = getCursors(state, useEvalCache); + auto cursors = getCursors(state); if (cursors.empty()) throw Error("cannot find flake attribute '%s'", what()); return cursors[0]; @@ -304,19 +303,19 @@ struct InstallableStorePath : Installable { if (storePath.isDerivation()) { std::map outputs; - for (auto & [name, output] : store->readDerivation(storePath).outputs) - outputs.emplace(name, output.path); + auto drv = store->readDerivation(storePath); + for (auto & i : drv.outputsAndPaths(*store)) + outputs.emplace(i.first, i.second.second); return { - Buildable { + BuildableFromDrv { .drvPath = storePath, .outputs = std::move(outputs) } }; } else { return { - Buildable { - .drvPath = {}, - .outputs = {{"out", storePath}} + BuildableOpaque { + .path = storePath, } }; } @@ -332,33 +331,20 @@ Buildables InstallableValue::toBuildables() { Buildables res; - StorePathSet drvPaths; + std::map drvsToOutputs; + // Group by derivation, helps with .all in particular for (auto & drv : toDerivations()) { - Buildable b{.drvPath = drv.drvPath}; - drvPaths.insert(drv.drvPath); - auto outputName = drv.outputName; if (outputName == "") - throw Error("derivation '%s' lacks an 'outputName' attribute", state->store->printStorePath(*b.drvPath)); - - b.outputs.emplace(outputName, drv.outPath); - - res.push_back(std::move(b)); + throw Error("derivation '%s' lacks an 'outputName' attribute", state->store->printStorePath(drv.drvPath)); + drvsToOutputs[drv.drvPath].insert_or_assign(outputName, drv.outPath); } - // Hack to recognize .all: if all drvs have the same drvPath, - // merge the buildables. - if (drvPaths.size() == 1) { - Buildable b{.drvPath = *drvPaths.begin()}; - for (auto & b2 : res) - for (auto & output : b2.outputs) - b.outputs.insert_or_assign(output.first, output.second); - Buildables bs; - bs.push_back(std::move(b)); - return bs; - } else - return res; + for (auto & i : drvsToOutputs) + res.push_back(BuildableFromDrv { i.first, i.second }); + + return res; } struct InstallableAttrPath : InstallableValue @@ -433,12 +419,13 @@ Value * InstallableFlake::getFlakeOutputs(EvalState & state, const flake::Locked ref openEvalCache( EvalState & state, - std::shared_ptr lockedFlake, - bool useEvalCache) + std::shared_ptr lockedFlake) { - return ref(std::make_shared( - useEvalCache && evalSettings.pureEval, - lockedFlake->getFingerprint(), + auto fingerprint = lockedFlake->getFingerprint(); + return make_ref( + evalSettings.useEvalCache && evalSettings.pureEval + ? 
std::optional { std::cref(fingerprint) } + : std::nullopt, state, [&state, lockedFlake]() { @@ -456,7 +443,7 @@ ref openEvalCache( assert(aOutputs); return aOutputs->value; - })); + }); } static std::string showAttrPaths(const std::vector & paths) @@ -471,10 +458,9 @@ static std::string showAttrPaths(const std::vector & paths) std::tuple InstallableFlake::toDerivation() { - auto lockedFlake = getLockedFlake(); - auto cache = openEvalCache(*state, lockedFlake, true); + auto cache = openEvalCache(*state, lockedFlake); auto root = cache->getRoot(); for (auto & attrPath : getActualAttrPaths()) { @@ -528,11 +514,10 @@ std::pair InstallableFlake::toValue(EvalState & state) } std::vector, std::string>> -InstallableFlake::getCursors(EvalState & state, bool useEvalCache) +InstallableFlake::getCursors(EvalState & state) { auto evalCache = openEvalCache(state, - std::make_shared(lockFlake(state, flakeRef, lockFlags)), - useEvalCache); + std::make_shared(lockFlake(state, flakeRef, lockFlags))); auto root = evalCache->getRoot(); @@ -642,7 +627,7 @@ std::shared_ptr SourceExprCommand::parseInstallable( } Buildables build(ref store, Realise mode, - std::vector> installables) + std::vector> installables, BuildMode bMode) { if (mode == Realise::Nothing) settings.readOnlyMode = true; @@ -653,14 +638,17 @@ Buildables build(ref store, Realise mode, for (auto & i : installables) { for (auto & b : i->toBuildables()) { - if (b.drvPath) { - StringSet outputNames; - for (auto & output : b.outputs) - outputNames.insert(output.first); - pathsToBuild.push_back({*b.drvPath, outputNames}); - } else - for (auto & output : b.outputs) - pathsToBuild.push_back({output.second}); + std::visit(overloaded { + [&](BuildableOpaque bo) { + pathsToBuild.push_back({bo.path}); + }, + [&](BuildableFromDrv bfd) { + StringSet outputNames; + for (auto & output : bfd.outputs) + outputNames.insert(output.first); + pathsToBuild.push_back({bfd.drvPath, outputNames}); + }, + }, b); buildables.push_back(std::move(b)); } } @@ -668,7 +656,7 @@ Buildables build(ref store, Realise mode, if (mode == Realise::Nothing) printMissing(store, pathsToBuild, lvlError); else if (mode == Realise::Outputs) - store->buildPaths(pathsToBuild); + store->buildPaths(pathsToBuild, bMode); return buildables; } @@ -681,16 +669,23 @@ StorePathSet toStorePaths(ref store, if (operateOn == OperateOn::Output) { for (auto & b : build(store, mode, installables)) - for (auto & output : b.outputs) - outPaths.insert(output.second); + std::visit(overloaded { + [&](BuildableOpaque bo) { + outPaths.insert(bo.path); + }, + [&](BuildableFromDrv bfd) { + for (auto & output : bfd.outputs) + outPaths.insert(output.second); + }, + }, b); } else { if (mode == Realise::Nothing) settings.readOnlyMode = true; for (auto & i : installables) for (auto & b : i->toBuildables()) - if (b.drvPath) - outPaths.insert(*b.drvPath); + if (auto bfd = std::get_if(&b)) + outPaths.insert(bfd->drvPath); } return outPaths; @@ -714,20 +709,21 @@ StorePathSet toDerivations(ref store, StorePathSet drvPaths; for (auto & i : installables) - for (auto & b : i->toBuildables()) { - if (!b.drvPath) { - if (!useDeriver) - throw Error("argument '%s' did not evaluate to a derivation", i->what()); - for (auto & output : b.outputs) { - auto derivers = store->queryValidDerivers(output.second); + for (auto & b : i->toBuildables()) + std::visit(overloaded { + [&](BuildableOpaque bo) { + if (!useDeriver) + throw Error("argument '%s' did not evaluate to a derivation", i->what()); + auto derivers = 
store->queryValidDerivers(bo.path); if (derivers.empty()) throw Error("'%s' does not have a known deriver", i->what()); // FIXME: use all derivers? drvPaths.insert(*derivers.begin()); - } - } else - drvPaths.insert(*b.drvPath); - } + }, + [&](BuildableFromDrv bfd) { + drvPaths.insert(bfd.drvPath); + }, + }, b); return drvPaths; } diff --git a/src/nix/installables.hh b/src/nix/installables.hh index eb34365d4..26e87ee3a 100644 --- a/src/nix/installables.hh +++ b/src/nix/installables.hh @@ -14,12 +14,20 @@ struct SourceExprCommand; namespace eval_cache { class EvalCache; class AttrCursor; } -struct Buildable -{ - std::optional drvPath; +struct BuildableOpaque { + StorePath path; +}; + +struct BuildableFromDrv { + StorePath drvPath; std::map outputs; }; +typedef std::variant< + BuildableOpaque, + BuildableFromDrv +> Buildable; + typedef std::vector Buildables; struct App @@ -54,10 +62,10 @@ struct Installable } virtual std::vector, std::string>> - getCursors(EvalState & state, bool useEvalCache); + getCursors(EvalState & state); std::pair, std::string> - getCursor(EvalState & state, bool useEvalCache); + getCursor(EvalState & state); virtual FlakeRef nixpkgsFlakeRef() const { @@ -110,7 +118,7 @@ struct InstallableFlake : InstallableValue std::pair toValue(EvalState & state) override; std::vector, std::string>> - getCursors(EvalState & state, bool useEvalCache) override; + getCursors(EvalState & state) override; std::shared_ptr getLockedFlake() const; @@ -119,7 +127,6 @@ struct InstallableFlake : InstallableValue ref openEvalCache( EvalState & state, - std::shared_ptr lockedFlake, - bool useEvalCache); + std::shared_ptr lockedFlake); } diff --git a/src/nix/log.cc b/src/nix/log.cc index 7e10d373a..33380dcf5 100644 --- a/src/nix/log.cc +++ b/src/nix/log.cc @@ -45,11 +45,14 @@ struct CmdLog : InstallableCommand RunPager pager; for (auto & sub : subs) { - auto log = b.drvPath ? 
sub->getBuildLog(*b.drvPath) : nullptr; - for (auto & output : b.outputs) { - if (log) break; - log = sub->getBuildLog(output.second); - } + auto log = std::visit(overloaded { + [&](BuildableOpaque bo) { + return sub->getBuildLog(bo.path); + }, + [&](BuildableFromDrv bfd) { + return sub->getBuildLog(bfd.drvPath); + }, + }, b); if (!log) continue; stopProgressBar(); printInfo("got build log for '%s' from '%s'", installable->what(), sub->getUri()); diff --git a/src/nix/make-content-addressable.cc b/src/nix/make-content-addressable.cc index 712043978..38b60fc38 100644 --- a/src/nix/make-content-addressable.cc +++ b/src/nix/make-content-addressable.cc @@ -77,10 +77,12 @@ struct CmdMakeContentAddressable : StorePathsCommand, MixJSON auto narHash = hashModuloSink.finish().first; - ValidPathInfo info(store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, path.name(), references, hasSelfReference)); + ValidPathInfo info { + store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, path.name(), references, hasSelfReference), + narHash, + }; info.references = std::move(references); if (hasSelfReference) info.references.insert(info.path); - info.narHash = narHash; info.narSize = sink.s->size(); info.ca = FixedOutputHash { .method = FileIngestionMethod::Recursive, diff --git a/src/nix/path-info.cc b/src/nix/path-info.cc index 65f73cd94..0c12efaf0 100644 --- a/src/nix/path-info.cc +++ b/src/nix/path-info.cc @@ -61,7 +61,7 @@ struct CmdPathInfo : StorePathsCommand, MixJSON }; } - void printSize(unsigned long long value) + void printSize(uint64_t value) { if (!humanReadable) { std::cout << fmt("\t%11d", value); diff --git a/src/nix/profile.cc b/src/nix/profile.cc index c6cd88c49..cffc9ee44 100644 --- a/src/nix/profile.cc +++ b/src/nix/profile.cc @@ -129,9 +129,11 @@ struct ProfileManifest auto narHash = hashString(htSHA256, *sink.s); - ValidPathInfo info(store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, "profile", references)); + ValidPathInfo info { + store->makeFixedOutputPath(FileIngestionMethod::Recursive, narHash, "profile", references), + narHash, + }; info.references = std::move(references); - info.narHash = narHash; info.narSize = sink.s->size(); info.ca = FixedOutputHash { .method = FileIngestionMethod::Recursive, .hash = info.narHash }; diff --git a/src/nix/registry.cc b/src/nix/registry.cc index 16d7e511f..ebee4545c 100644 --- a/src/nix/registry.cc +++ b/src/nix/registry.cc @@ -111,6 +111,7 @@ struct CmdRegistryPin : virtual Args, EvalCommand fetchers::Attrs extraAttrs; if (ref.subdir != "") extraAttrs["dir"] = ref.subdir; userRegistry->add(ref.input, resolved, extraAttrs); + userRegistry->write(fetchers::getUserRegistryPath()); } }; diff --git a/src/nix/repl.cc b/src/nix/repl.cc index 8eb58f62a..a74655200 100644 --- a/src/nix/repl.cc +++ b/src/nix/repl.cc @@ -33,12 +33,17 @@ extern "C" { #include "command.hh" #include "finally.hh" +#if HAVE_BOEHMGC #define GC_INCLUDE_NEW #include +#endif namespace nix { -struct NixRepl : gc +struct NixRepl + #if HAVE_BOEHMGC + : gc + #endif { string curDir; std::unique_ptr state; @@ -483,10 +488,10 @@ bool NixRepl::processLine(string line) but doing it in a child makes it easier to recover from problems / SIGINT. 
*/ if (runProgram(settings.nixBinDir + "/nix", Strings{"build", "--no-link", drvPath}) == 0) { - auto drv = readDerivation(*state->store, drvPath); + auto drv = readDerivation(*state->store, drvPath, Derivation::nameFromPath(state->store->parseStorePath(drvPath))); std::cout << std::endl << "this derivation produced the following outputs:" << std::endl; - for (auto & i : drv.outputs) - std::cout << fmt(" %s -> %s\n", i.first, state->store->printStorePath(i.second.path)); + for (auto & i : drv.outputsAndPaths(*state->store)) + std::cout << fmt(" %s -> %s\n", i.first, state->store->printStorePath(i.second.second)); } } else if (command == ":i") { runProgram(settings.nixBinDir + "/nix-env", Strings{"-i", drvPath}); diff --git a/src/nix/search.cc b/src/nix/search.cc index 65a1e1818..430979274 100644 --- a/src/nix/search.cc +++ b/src/nix/search.cc @@ -177,7 +177,7 @@ struct CmdSearch : InstallableCommand, MixJSON } }; - for (auto & [cursor, prefix] : installable->getCursors(*state, true)) + for (auto & [cursor, prefix] : installable->getCursors(*state)) visit(*cursor, parseAttrPath(*state, prefix)); if (!json && !results) diff --git a/src/nix/show-derivation.cc b/src/nix/show-derivation.cc index b5434f982..8c4bfb03e 100644 --- a/src/nix/show-derivation.cc +++ b/src/nix/show-derivation.cc @@ -67,13 +67,21 @@ struct CmdShowDerivation : InstallablesCommand { auto outputsObj(drvObj.object("outputs")); - for (auto & output : drv.outputs) { + for (auto & output : drv.outputsAndPaths(*store)) { auto outputObj(outputsObj.object(output.first)); - outputObj.attr("path", store->printStorePath(output.second.path)); - if (output.second.hash) { - outputObj.attr("hashAlgo", output.second.hash->printMethodAlgo()); - outputObj.attr("hash", output.second.hash->hash.to_string(Base16, false)); - } + outputObj.attr("path", store->printStorePath(output.second.second)); + + std::visit(overloaded { + [&](DerivationOutputInputAddressed doi) { + }, + [&](DerivationOutputCAFixed dof) { + outputObj.attr("hashAlgo", dof.hash.printMethodAlgo()); + outputObj.attr("hash", dof.hash.hash.to_string(Base16, false)); + }, + [&](DerivationOutputCAFloating dof) { + outputObj.attr("hashAlgo", makeFileIngestionPrefix(dof.method) + printHashType(dof.hashType)); + }, + }, output.second.first.output); } } diff --git a/src/nix/verify.cc b/src/nix/verify.cc index ce90b0f6d..26f755fd9 100644 --- a/src/nix/verify.cc +++ b/src/nix/verify.cc @@ -91,9 +91,9 @@ struct CmdVerify : StorePathsCommand std::unique_ptr hashSink; if (!info->ca) - hashSink = std::make_unique(*info->narHash.type); + hashSink = std::make_unique(info->narHash.type); else - hashSink = std::make_unique(*info->narHash.type, std::string(info->path.hashPart())); + hashSink = std::make_unique(info->narHash.type, std::string(info->path.hashPart())); store->narFromPath(info->path, *hashSink); diff --git a/tests/binary-cache.sh b/tests/binary-cache.sh index 40f1a4f76..fe4ddec8d 100644 --- a/tests/binary-cache.sh +++ b/tests/binary-cache.sh @@ -218,7 +218,9 @@ outPath=$(nix-build --no-out-link -E ' nix copy --to file://$cacheDir?write-nar-listing=1 $outPath -[[ $(cat $cacheDir/$(basename $outPath).ls) = '{"version":1,"root":{"type":"directory","entries":{"bar":{"type":"regular","size":4,"narOffset":232},"link":{"type":"symlink","target":"xyzzy"}}}}' ]] +diff -u \ + <(jq -S < $cacheDir/$(basename $outPath | cut -c1-32).ls) \ + <(echo '{"version":1,"root":{"type":"directory","entries":{"bar":{"type":"regular","size":4,"narOffset":232},"link":{"type":"symlink","target":"xyzzy"}}}}' | 
jq -S) # Test debug info index generation. @@ -234,4 +236,6 @@ outPath=$(nix-build --no-out-link -E ' nix copy --to "file://$cacheDir?index-debug-info=1&compression=none" $outPath -[[ $(cat $cacheDir/debuginfo/02623eda209c26a59b1a8638ff7752f6b945c26b.debug) = '{"archive":"../nar/100vxs724qr46phz8m24iswmg9p3785hsyagz0kchf6q6gf06sw6.nar","member":"lib/debug/.build-id/02/623eda209c26a59b1a8638ff7752f6b945c26b.debug"}' ]] +diff -u \ + <(cat $cacheDir/debuginfo/02623eda209c26a59b1a8638ff7752f6b945c26b.debug | jq -S) \ + <(echo '{"archive":"../nar/100vxs724qr46phz8m24iswmg9p3785hsyagz0kchf6q6gf06sw6.nar","member":"lib/debug/.build-id/02/623eda209c26a59b1a8638ff7752f6b945c26b.debug"}' | jq -S) diff --git a/tests/build-hook-ca.nix b/tests/build-hook-ca.nix new file mode 100644 index 000000000..98db473fc --- /dev/null +++ b/tests/build-hook-ca.nix @@ -0,0 +1,45 @@ +{ busybox }: + +with import ./config.nix; + +let + + mkDerivation = args: + derivation ({ + inherit system; + builder = busybox; + args = ["sh" "-e" args.builder or (builtins.toFile "builder-${args.name}.sh" "if [ -e .attrs.sh ]; then source .attrs.sh; fi; eval \"$buildCommand\"")]; + outputHashMode = "recursive"; + outputHashAlgo = "sha256"; + } // removeAttrs args ["builder" "meta"]) + // { meta = args.meta or {}; }; + + input1 = mkDerivation { + shell = busybox; + name = "build-remote-input-1"; + buildCommand = "echo FOO > $out"; + requiredSystemFeatures = ["foo"]; + outputHash = "sha256-FePFYIlMuycIXPZbWi7LGEiMmZSX9FMbaQenWBzm1Sc="; + }; + + input2 = mkDerivation { + shell = busybox; + name = "build-remote-input-2"; + buildCommand = "echo BAR > $out"; + requiredSystemFeatures = ["bar"]; + outputHash = "sha256-XArauVH91AVwP9hBBQNlkX9ccuPpSYx9o0zeIHb6e+Q="; + }; + +in + + mkDerivation { + shell = busybox; + name = "build-remote"; + buildCommand = + '' + read x < ${input1} + read y < ${input2} + echo "$x $y" > $out + ''; + outputHash = "sha256-3YGhlOfbGUm9hiPn2teXXTT8M1NEpDFvfXkxMaJRld0="; + } diff --git a/tests/build-hook.nix b/tests/build-hook.nix index a19c10dde..eb16676f0 100644 --- a/tests/build-hook.nix +++ b/tests/build-hook.nix @@ -23,6 +23,17 @@ let shell = busybox; name = "build-remote-input-2"; buildCommand = "echo BAR > $out"; + requiredSystemFeatures = ["bar"]; + }; + + input3 = mkDerivation { + shell = busybox; + name = "build-remote-input-3"; + buildCommand = '' + read x < ${input2} + echo $x BAZ > $out + ''; + requiredSystemFeatures = ["baz"]; }; in @@ -33,7 +44,7 @@ in buildCommand = '' read x < ${input1} - read y < ${input2} - echo $x$y > $out + read y < ${input3} + echo "$x $y" > $out ''; } diff --git a/tests/build-remote-content-addressed-fixed.sh b/tests/build-remote-content-addressed-fixed.sh new file mode 100644 index 000000000..1408a19d5 --- /dev/null +++ b/tests/build-remote-content-addressed-fixed.sh @@ -0,0 +1,5 @@ +source common.sh + +file=build-hook-ca.nix + +source build-remote.sh diff --git a/tests/build-remote-input-addressed.sh b/tests/build-remote-input-addressed.sh new file mode 100644 index 000000000..b34caa061 --- /dev/null +++ b/tests/build-remote-input-addressed.sh @@ -0,0 +1,5 @@ +source common.sh + +file=build-hook.nix + +source build-remote.sh diff --git a/tests/build-remote.sh b/tests/build-remote.sh index 4dfb753e1..ca6d1de09 100644 --- a/tests/build-remote.sh +++ b/tests/build-remote.sh @@ -1,31 +1,47 @@ -source common.sh - -clearStore - if ! canUseSandbox; then exit; fi if ! 
[[ $busybox =~ busybox ]]; then exit; fi -chmod -R u+w $TEST_ROOT/machine0 || true -chmod -R u+w $TEST_ROOT/machine1 || true -chmod -R u+w $TEST_ROOT/machine2 || true -rm -rf $TEST_ROOT/machine0 $TEST_ROOT/machine1 $TEST_ROOT/machine2 -rm -f $TEST_ROOT/result - unset NIX_STORE_DIR unset NIX_STATE_DIR +function join_by { local d=$1; shift; echo -n "$1"; shift; printf "%s" "${@/#/$d}"; } + +builders=( + # system-features will automatically be added to the outer URL, but not inner + # remote-store URL. + "ssh://localhost?remote-store=$TEST_ROOT/machine1?system-features=foo - - 1 1 foo" + "$TEST_ROOT/machine2 - - 1 1 bar" + "ssh-ng://localhost?remote-store=$TEST_ROOT/machine3?system-features=baz - - 1 1 baz" +) + # Note: ssh://localhost bypasses ssh, directly invoking nix-store as a # child process. This allows us to test LegacySSHStore::buildDerivation(). -nix build -L -v -f build-hook.nix -o $TEST_ROOT/result --max-jobs 0 \ +# ssh-ng://... likewise allows us to test RemoteStore::buildDerivation(). +nix build -L -v -f $file -o $TEST_ROOT/result --max-jobs 0 \ --arg busybox $busybox \ --store $TEST_ROOT/machine0 \ - --builders "ssh://localhost?remote-store=$TEST_ROOT/machine1; $TEST_ROOT/machine2 - - 1 1 foo" \ - --system-features foo + --builders "$(join_by '; ' "${builders[@]}")" outPath=$(readlink -f $TEST_ROOT/result) -cat $TEST_ROOT/machine0/$outPath | grep FOOBAR +grep 'FOO BAR BAZ' $TEST_ROOT/machine0/$outPath -# Ensure that input1 was built on store2 due to the required feature. -(! nix path-info --store $TEST_ROOT/machine1 --all | grep builder-build-remote-input-1.sh) -nix path-info --store $TEST_ROOT/machine2 --all | grep builder-build-remote-input-1.sh +set -o pipefail + +# Ensure that input1 was built on store1 due to the required feature. +nix path-info --store $TEST_ROOT/machine1 --all \ + | grep builder-build-remote-input-1.sh \ + | grep -v builder-build-remote-input-2.sh \ + | grep -v builder-build-remote-input-3.sh + +# Ensure that input2 was built on store2 due to the required feature. +nix path-info --store $TEST_ROOT/machine2 --all \ + | grep -v builder-build-remote-input-1.sh \ + | grep builder-build-remote-input-2.sh \ + | grep -v builder-build-remote-input-3.sh + +# Ensure that input3 was built on store3 due to the required feature. +nix path-info --store $TEST_ROOT/machine3 --all \ + | grep -v builder-build-remote-input-1.sh \ + | grep -v builder-build-remote-input-2.sh \ + | grep builder-build-remote-input-3.sh diff --git a/tests/check.sh b/tests/check.sh index 5f25d04cb..5f4997e28 100644 --- a/tests/check.sh +++ b/tests/check.sh @@ -61,30 +61,30 @@ nix-build check.nix -A nondeterministic --no-out-link --repeat 1 2> $TEST_ROOT/l [ "$status" = "1" ] grep 'differs from previous round' $TEST_ROOT/log -path=$(nix-build check.nix -A fetchurl --no-out-link --hashed-mirrors '') +path=$(nix-build check.nix -A fetchurl --no-out-link) chmod +w $path echo foo > $path chmod -w $path -nix-build check.nix -A fetchurl --no-out-link --check --hashed-mirrors '' +nix-build check.nix -A fetchurl --no-out-link --check # Note: "check" doesn't repair anything, it just compares to the hash stored in the database. [[ $(cat $path) = foo ]] -nix-build check.nix -A fetchurl --no-out-link --repair --hashed-mirrors '' +nix-build check.nix -A fetchurl --no-out-link --repair [[ $(cat $path) != foo ]] -nix-build check.nix -A hashmismatch --no-out-link --hashed-mirrors '' || status=$? +nix-build check.nix -A hashmismatch --no-out-link || status=$? 
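For context, each entry in the `builders` array above follows Nix's machine-specification format: store URI, platform types, SSH identity file, maximum jobs, speed factor, and supported system features, with `-` leaving a field at its default. A hedged sketch of an equivalent standalone invocation, using illustrative `/tmp/machineN` paths in place of the test's `$TEST_ROOT` directories:

```console
$ nix build -L -f build-hook.nix --arg busybox "$busybox" \
    --store /tmp/machine0 --max-jobs 0 \
    --builders 'ssh://localhost?remote-store=/tmp/machine1?system-features=foo - - 1 1 foo; /tmp/machine2 - - 1 1 bar'
```

With `--max-jobs 0` the local store cannot build anything itself, so every derivation must be dispatched to a builder whose features match its `requiredSystemFeatures`.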
[ "$status" = "102" ] echo -n > ./dummy -nix-build check.nix -A hashmismatch --no-out-link --hashed-mirrors '' +nix-build check.nix -A hashmismatch --no-out-link echo 'Hello World' > ./dummy -nix-build check.nix -A hashmismatch --no-out-link --check --hashed-mirrors '' || status=$? +nix-build check.nix -A hashmismatch --no-out-link --check || status=$? [ "$status" = "102" ] # Multiple failures with --keep-going nix-build check.nix -A nondeterministic --no-out-link -nix-build check.nix -A nondeterministic -A hashmismatch --no-out-link --check --keep-going --hashed-mirrors '' || status=$? +nix-build check.nix -A nondeterministic -A hashmismatch --no-out-link --check --keep-going || status=$? [ "$status" = "110" ] diff --git a/tests/fetchGit.sh b/tests/fetchGit.sh index 9faa5d9f6..cedd796f7 100644 --- a/tests/fetchGit.sh +++ b/tests/fetchGit.sh @@ -32,6 +32,8 @@ rev2=$(git -C $repo rev-parse HEAD) # Fetch a worktree unset _NIX_FORCE_HTTP path0=$(nix eval --impure --raw --expr "(builtins.fetchGit file://$TEST_ROOT/worktree).outPath") +path0_=$(nix eval --impure --raw --expr "(builtins.fetchTree { type = \"git\"; url = file://$TEST_ROOT/worktree; }).outPath") +[[ $path0 = $path0_ ]] export _NIX_FORCE_HTTP=1 [[ $(tail -n 1 $path0/hello) = "hello" ]] @@ -102,6 +104,12 @@ git -C $repo commit -m 'Bla3' -a path4=$(nix eval --impure --refresh --raw --expr "(builtins.fetchGit file://$repo).outPath") [[ $path2 = $path4 ]] +nix eval --impure --raw --expr "(builtins.fetchGit { url = $repo; rev = \"$rev2\"; narHash = \"sha256-B5yIPHhEm0eysJKEsO7nqxprh9vcblFxpJG11gXJus1=\"; }).outPath" || status=$? +[[ "$status" = "102" ]] + +path5=$(nix eval --impure --raw --expr "(builtins.fetchGit { url = $repo; rev = \"$rev2\"; narHash = \"sha256-Hr8g6AqANb3xqX28eu1XnjK/3ab8Gv6TJSnkb1LezG9=\"; }).outPath") +[[ $path = $path5 ]] + # tarball-ttl should be ignored if we specify a rev echo delft > $repo/hello git -C $repo add hello diff --git a/tests/fetchurl.sh b/tests/fetchurl.sh index 2535651b0..0f2044342 100644 --- a/tests/fetchurl.sh +++ b/tests/fetchurl.sh @@ -5,7 +5,7 @@ clearStore # Test fetching a flat file. hash=$(nix-hash --flat --type sha256 ./fetchurl.sh) -outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr sha256 $hash --no-out-link --hashed-mirrors '') +outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr sha256 $hash --no-out-link) cmp $outPath fetchurl.sh @@ -14,7 +14,7 @@ clearStore hash=$(nix hash-file --type sha512 --base64 ./fetchurl.sh) -outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr sha512 $hash --no-out-link --hashed-mirrors '') +outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr sha512 $hash --no-out-link) cmp $outPath fetchurl.sh @@ -25,26 +25,24 @@ hash=$(nix hash-file ./fetchurl.sh) [[ $hash =~ ^sha256- ]] -outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr hash $hash --no-out-link --hashed-mirrors '') +outPath=$(nix-build '' --argstr url file://$(pwd)/fetchurl.sh --argstr hash $hash --no-out-link) cmp $outPath fetchurl.sh -# Test the hashed mirror feature. +# Test that we can substitute from a different store dir. 
clearStore -hash=$(nix hash-file --type sha512 --base64 ./fetchurl.sh) -hash32=$(nix hash-file --type sha512 --base16 ./fetchurl.sh) +other_store=file://$TEST_ROOT/other_store?store=/fnord/store -mirror=$TEST_ROOT/hashed-mirror -rm -rf $mirror -mkdir -p $mirror/sha512 -ln -s $(pwd)/fetchurl.sh $mirror/sha512/$hash32 +hash=$(nix hash-file --type sha256 --base16 ./fetchurl.sh) -outPath=$(nix-build '' --argstr url file:///no-such-dir/fetchurl.sh --argstr sha512 $hash --no-out-link --hashed-mirrors "file://$mirror") +storePath=$(nix --store $other_store add-to-store --flat ./fetchurl.sh) + +outPath=$(nix-build '' --argstr url file:///no-such-dir/fetchurl.sh --argstr sha256 $hash --no-out-link --substituters $other_store) # Test hashed mirrors with an SRI hash. -nix-build '' --argstr url file:///no-such-dir/fetchurl.sh --argstr hash $(nix to-sri --type sha512 $hash) \ - --argstr name bla --no-out-link --hashed-mirrors "file://$mirror" +nix-build '' --argstr url file:///no-such-dir/fetchurl.sh --argstr hash $(nix to-sri --type sha256 $hash) \ + --no-out-link --substituters $other_store # Test unpacking a NAR. rm -rf $TEST_ROOT/archive diff --git a/tests/filter-source.sh b/tests/filter-source.sh index 1f8dceee5..ba34d2eac 100644 --- a/tests/filter-source.sh +++ b/tests/filter-source.sh @@ -10,10 +10,16 @@ touch $TEST_ROOT/filterin/bak touch $TEST_ROOT/filterin/bla.c.bak ln -s xyzzy $TEST_ROOT/filterin/link -nix-build ./filter-source.nix -o $TEST_ROOT/filterout +checkFilter() { + test ! -e $1/foo/bar + test -e $1/xyzzy + test -e $1/bak + test ! -e $1/bla.c.bak + test ! -L $1/link +} -test ! -e $TEST_ROOT/filterout/foo/bar -test -e $TEST_ROOT/filterout/xyzzy -test -e $TEST_ROOT/filterout/bak -test ! -e $TEST_ROOT/filterout/bla.c.bak -test ! -L $TEST_ROOT/filterout/link +nix-build ./filter-source.nix -o $TEST_ROOT/filterout1 +checkFilter $TEST_ROOT/filterout1 + +nix-build ./path.nix -o $TEST_ROOT/filterout2 +checkFilter $TEST_ROOT/filterout2 diff --git a/tests/local.mk b/tests/local.mk index 0f3bfe606..53035da41 100644 --- a/tests/local.mk +++ b/tests/local.mk @@ -1,5 +1,5 @@ nix_tests = \ - init.sh hash.sh lang.sh add.sh simple.sh dependencies.sh \ + hash.sh lang.sh add.sh simple.sh dependencies.sh \ config.sh \ gc.sh \ gc-concurrent.sh \ @@ -14,7 +14,7 @@ nix_tests = \ placeholders.sh nix-shell.sh \ linux-sandbox.sh \ build-dry.sh \ - build-remote.sh \ + build-remote-input-addressed.sh \ nar-access.sh \ structured-attrs.sh \ fetchGit.sh \ @@ -34,6 +34,7 @@ nix_tests = \ recursive.sh \ flakes.sh # parallel.sh + # build-remote-content-addressed-fixed.sh \ install-tests += $(foreach x, $(nix_tests), tests/$(x)) diff --git a/tests/nar-access.sh b/tests/nar-access.sh index 553d6ca89..88b997ca6 100644 --- a/tests/nar-access.sh +++ b/tests/nar-access.sh @@ -26,12 +26,24 @@ nix cat-store $storePath/foo/baz > baz.cat-nar diff -u baz.cat-nar $storePath/foo/baz # Test --json. 
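The assertions below are rewritten to compare canonicalised JSON instead of raw strings: both the actual and the expected document go through `jq -S` (sort object keys) and are compared with `diff -u`, so the checks no longer depend on the key order Nix happens to emit, and failures show a readable diff. The pattern, sketched with an illustrative NAR file and expected document:

```console
$ diff -u \
    <(nix ls-nar --json ./dependencies.nar / | jq -S) \
    <(echo '{"type":"directory","entries":{"foo":{},"qux":{}}}' | jq -S)
```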
-[[ $(nix ls-nar --json $narFile /) = '{"type":"directory","entries":{"foo":{},"foo-x":{},"qux":{},"zyx":{}}}' ]] -[[ $(nix ls-nar --json -R $narFile /foo) = '{"type":"directory","entries":{"bar":{"type":"regular","size":0,"narOffset":368},"baz":{"type":"regular","size":0,"narOffset":552},"data":{"type":"regular","size":58,"narOffset":736}}}' ]] -[[ $(nix ls-nar --json -R $narFile /foo/bar) = '{"type":"regular","size":0,"narOffset":368}' ]] -[[ $(nix ls-store --json $storePath) = '{"type":"directory","entries":{"foo":{},"foo-x":{},"qux":{},"zyx":{}}}' ]] -[[ $(nix ls-store --json -R $storePath/foo) = '{"type":"directory","entries":{"bar":{"type":"regular","size":0},"baz":{"type":"regular","size":0},"data":{"type":"regular","size":58}}}' ]] -[[ $(nix ls-store --json -R $storePath/foo/bar) = '{"type":"regular","size":0}' ]] +diff -u \ + <(nix ls-nar --json $narFile / | jq -S) \ + <(echo '{"type":"directory","entries":{"foo":{},"foo-x":{},"qux":{},"zyx":{}}}' | jq -S) +diff -u \ + <(nix ls-nar --json -R $narFile /foo | jq -S) \ + <(echo '{"type":"directory","entries":{"bar":{"type":"regular","size":0,"narOffset":368},"baz":{"type":"regular","size":0,"narOffset":552},"data":{"type":"regular","size":58,"narOffset":736}}}' | jq -S) +diff -u \ + <(nix ls-nar --json -R $narFile /foo/bar | jq -S) \ + <(echo '{"type":"regular","size":0,"narOffset":368}' | jq -S) +diff -u \ + <(nix ls-store --json $storePath | jq -S) \ + <(echo '{"type":"directory","entries":{"foo":{},"foo-x":{},"qux":{},"zyx":{}}}' | jq -S) +diff -u \ + <(nix ls-store --json -R $storePath/foo | jq -S) \ + <(echo '{"type":"directory","entries":{"bar":{"type":"regular","size":0},"baz":{"type":"regular","size":0},"data":{"type":"regular","size":58}}}' | jq -S) +diff -u \ + <(nix ls-store --json -R $storePath/foo/bar| jq -S) \ + <(echo '{"type":"regular","size":0}' | jq -S) # Test missing files. nix ls-store --json -R $storePath/xyzzy 2>&1 | grep 'does not exist in NAR' diff --git a/tests/path.nix b/tests/path.nix new file mode 100644 index 000000000..883c3c41b --- /dev/null +++ b/tests/path.nix @@ -0,0 +1,14 @@ +with import ./config.nix; + +mkDerivation { + name = "filter"; + builder = builtins.toFile "builder" "ln -s $input $out"; + input = + builtins.path { + path = ((builtins.getEnv "TEST_ROOT") + "/filterin"); + filter = path: type: + type != "symlink" + && baseNameOf path != "foo" + && !((import ./lang/lib.nix).hasSuffix ".bak" (baseNameOf path)); + }; +} diff --git a/tests/remote-store.sh b/tests/remote-store.sh index 4cc73465a..3a61946f9 100644 --- a/tests/remote-store.sh +++ b/tests/remote-store.sh @@ -2,6 +2,9 @@ source common.sh clearStore +# Ensure "fake ssh" remote store works just as legacy fake ssh would. 
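As the comment above notes, an `ssh-ng://` store speaks the full daemon protocol to the remote side (exercising `RemoteStore`), whereas legacy `ssh://` goes through `nix-store --serve` (`LegacySSHStore`); the `remote-store` query parameter points the remote end at a chroot store. A sketch of the same smoke test against an arbitrary directory:

```console
$ nix --store 'ssh-ng://localhost?remote-store=/tmp/other-store' doctor
```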
+nix --store ssh-ng://localhost?remote-store=$TEST_ROOT/other-store doctor + startDaemon storeCleared=1 NIX_REMOTE_=$NIX_REMOTE $SHELL ./user-envs.sh diff --git a/tests/tarball.sh b/tests/tarball.sh index b3ec16d40..88a1a07a0 100644 --- a/tests/tarball.sh +++ b/tests/tarball.sh @@ -27,10 +27,13 @@ test_tarball() { nix-build -o $TEST_ROOT/result -E "import (fetchTarball file://$tarball)" - nix-build --experimental-features flakes -o $TEST_ROOT/result -E "import (fetchTree file://$tarball)" - nix-build --experimental-features flakes -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; })" - nix-build --experimental-features flakes -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"$hash\"; })" - nix-build --experimental-features flakes -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"sha256-xdKv2pq/IiwLSnBBJXW8hNowI4MrdZfW+SYqDQs7Tzc=\"; })" 2>&1 | grep 'NAR hash mismatch in input' + nix-build -o $TEST_ROOT/result -E "import (fetchTree file://$tarball)" + nix-build -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; })" + nix-build -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"$hash\"; })" + nix-build -o $TEST_ROOT/result -E "import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"sha256-xdKv2pq/IiwLSnBBJXW8hNowI4MrdZfW+SYqDQs7Tzc=\"; })" 2>&1 | grep 'NAR hash mismatch in input' + + nix-instantiate --strict --eval -E "!((import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"$hash\"; })) ? submodules)" >&2 + nix-instantiate --strict --eval -E "!((import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"$hash\"; })) ? submodules)" 2>&1 | grep 'true' nix-instantiate --eval -E '1 + 2' -I fnord=file://no-such-tarball.tar$ext nix-instantiate --eval -E 'with ; 1 + 2' -I fnord=file://no-such-tarball$ext
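Finally, the tarball tests above now exercise `fetchTree` without the `flakes` experimental-features flag and pin the fetched tree with `narHash`. A hedged sketch of the success case outside the harness, where `$tarball` is an archive whose unpacked tree contains a `default.nix` and `$hash` is its SRI NAR hash as computed earlier in the test:

```console
$ nix-build -o result -E \
    "import (fetchTree { type = \"tarball\"; url = file://$tarball; narHash = \"$hash\"; })"
```

Passing a wrong `narHash` instead makes the fetch fail with a 'NAR hash mismatch in input' error, which one of the assertions above checks explicitly.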