Merge pull request #2904 from NixOS/flake-terminology

Rename requires -> inputs, provides -> outputs
Eelco Dolstra, 2019-05-31 10:00:51 +02:00 (committed by GitHub)
commit 134942f56a
8 changed files with 112 additions and 111 deletions
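
In short: the flake-level attributes `requires`, `nonFlakeRequires`, and `provides` become `inputs`, `nonFlakeInputs`, and `outputs`, and the lock-file key `contentHash` becomes `narHash`. A minimal sketch of a downstream `flake.nix` after the rename (the flake name and `./hello.nix` are hypothetical):

```nix
{
  name = "hello";
  epoch = 2019;
  description = "A hypothetical flake, written with the new attribute names";

  # was: requires = [ "nixpkgs" ];
  inputs = [ "nixpkgs" ];

  # was: nonFlakeRequires = { ... };
  nonFlakeInputs = {};

  # was: provides = deps: { ... };
  outputs = inputs: rec {
    packages.hello = import ./hello.nix { nixpkgs = inputs.nixpkgs; };
    defaultPackage = packages.hello;
  };
}
```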

View file

@@ -103,12 +103,12 @@ module.
 # A list of flake references denoting the flakes that this flake
 # depends on. Nix will resolve and fetch these flakes and pass them
-# as a function argument to `provides` below.
+# as a function argument to `outputs` below.
 #
 # `flake:nixpkgs` denotes a flake named `nixpkgs` which is looked up
 # in the flake registry, or in `flake.lock` inside this flake, if it
 # exists.
-requires = [ flake:nixpkgs ];
+inputs = [ flake:nixpkgs ];
 # The stuff provided by this flake. Flakes can provide whatever they
 # want (convention over configuration), but some attributes have
@@ -117,9 +117,9 @@ module.
 # `nixosModules` is used by NixOS to automatically pull in the
 # modules provided by a flake.
 #
-# `provides` takes a single argument named `deps` that contains
+# `outputs` takes a single argument named `deps` that contains
 # the resolved set of flakes. (See below.)
-provides = deps: {
+outputs = deps: {
 # This is searched by `nix`, so something like `nix install
 # dwarffs.dwarffs` resolves to this `packages.dwarffs`.
@@ -168,7 +168,7 @@ Similarly, a minimal `flake.nix` for Nixpkgs:
 description = "A collection of packages for the Nix package manager";
-provides = deps:
+outputs = deps:
 let pkgs = import ./. {}; in
 {
 lib = import ./lib;
@@ -310,9 +310,9 @@ Example:
 ```
-## `provides`
+## `outputs`
-The flake attribute `provides` is a function that takes an argument
+The flake attribute `outputs` is a function that takes an argument
 named `deps` and returns a (mostly) arbitrary attrset of values. Some
 of the standard result attributes:
@@ -329,13 +329,13 @@ of the standard result attributes:
 we need to avoid a situation where `nixos-rebuild` needs to fetch
 its own `nixpkgs` just to do `evalModules`.)
-* `shell`: A specification of a development environment in some TBD
+* `devShell`: A specification of a development environment in some TBD
 format.
 The function argument `flakes` is an attrset that contains an
-attribute for each dependency specified in `requires`. (Should it
+attribute for each dependency specified in `inputs`. (Should it
 contain transitive dependencies? Probably not.) Each attribute is an
-attrset containing the `provides` of the dependency, in addition to
+attrset containing the `outputs` of the dependency, in addition to
 the following attributes:
 * `path`: The path to the flake's source code. Useful when you want to
@@ -366,13 +366,13 @@ It may be useful to pull in repositories that are not flakes
 (i.e. don't contain a `flake.nix`). This could be done in two ways:
 * Allow flakes not to have a `flake.nix` file, in which case it's a
-flake with no requires and no provides. The downside of this
+flake with no inputs and no outputs. The downside of this
 approach is that we can't detect accidental use of a non-flake
 repository. (Also, we need to conjure up an identifier somehow.)
 * Add a flake attribute to specifiy non-flake dependencies, e.g.
-> nonFlakeRequires.foobar = github:foo/bar;
+> nonFlakeInputs.foobar = github:foo/bar;
 ## Flake registry
@@ -454,7 +454,7 @@ The default installation source in `nix` is the `packages` from all
 flakes in the registry, that is:
 ```
 builtins.mapAttrs (flakeName: flakeInfo:
-(getFlake flakeInfo.uri).${flakeName}.provides.packages or {})
+(getFlake flakeInfo.uri).${flakeName}.outputs.packages or {})
 builtins.flakeRegistry
 ```
 (where `builtins.flakeRegistry` is the global registry with user
@@ -476,10 +476,11 @@ in the registry named `hello`.
 Maybe the command
-> nix shell
+> nix dev-shell
-should do something like use `provides.shell` to initialize the shell,
-but probably we should ditch `nix shell` / `nix-shell` for direnv.
+should do something like use `outputs.devShell` to initialize the
+shell, but probably we should ditch `nix shell` / `nix-shell` for
+direnv.
 ## Pure evaluation and caching
@@ -535,7 +536,7 @@ repositories.
 ```nix
 {
-provides = flakes: {
+outputs = flakes: {
 nixosSystems.default =
 flakes.nixpkgs.lib.evalModules {
 modules =
@@ -549,7 +550,7 @@ repositories.
 };
 };
-requires =
+inputs =
 [ "nixpkgs/nixos-18.09"
 "dwarffs"
 "hydra"

View file

@@ -1,10 +1,10 @@
 {
-"nonFlakeRequires": {},
-"requires": {
+"inputs": {
 "nixpkgs": {
-"contentHash": "sha256-vy2UmXQM66aS/Kn2tCtjt9RwxfBvV+nQVb5tJQFwi8E=",
+"narHash": "sha256-rMiWaLXkhizEEMEeMDutUl0Y/c+VEjfjvMkvBwvuQJU=",
-"uri": "github:edolstra/nixpkgs/a4d896e89932e873c4117908d558db6210fa3b56"
+"uri": "github:edolstra/nixpkgs/eeeffd24cd7e407cfaa99e98cfbb8f93bf4cc033"
 }
 },
+"nonFlakeInputs": {},
 "version": 1
 }
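
The lock file keeps its shape; only the keys change (`requires` to `inputs`, `nonFlakeRequires` to `nonFlakeInputs`, `contentHash` to `narHash`). A rough sketch of reading the locked data back from Nix, assuming the lock file sits next to the expression:

```nix
let
  lock = builtins.fromJSON (builtins.readFile ./flake.lock);
in {
  # "github:edolstra/nixpkgs/eeeffd24..." in the example above.
  nixpkgsUri = lock.inputs.nixpkgs.uri;

  # SRI hash of the input, stored under "narHash" rather than "contentHash".
  nixpkgsNarHash = lock.inputs.nixpkgs.narHash;

  lockFileVersion = lock.version;
}
```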

View file

@@ -5,13 +5,13 @@
 epoch = 2019;
-requires = [ "nixpkgs" ];
+inputs = [ "nixpkgs" ];
-provides = deps: rec {
+outputs = inputs: rec {
 hydraJobs = import ./release.nix {
-nix = deps.self;
+nix = inputs.self;
-nixpkgs = deps.nixpkgs;
+nixpkgs = inputs.nixpkgs;
 };
 checks = {
@@ -29,7 +29,7 @@
 defaultPackage = packages.nix;
 devShell = import ./shell.nix {
-nixpkgs = deps.nixpkgs;
+nixpkgs = inputs.nixpkgs;
 };
 };
 }

View file

@@ -56,21 +56,21 @@ LockFile::FlakeEntry readFlakeEntry(nlohmann::json json)
 if (!flakeRef.isImmutable())
 throw Error("cannot use mutable flake '%s' in pure mode", flakeRef);
-LockFile::FlakeEntry entry(flakeRef, Hash((std::string) json["contentHash"]));
+LockFile::FlakeEntry entry(flakeRef, Hash((std::string) json["narHash"]));
-auto nonFlakeRequires = json["nonFlakeRequires"];
+auto nonFlakeInputs = json["nonFlakeInputs"];
-for (auto i = nonFlakeRequires.begin(); i != nonFlakeRequires.end(); ++i) {
+for (auto i = nonFlakeInputs.begin(); i != nonFlakeInputs.end(); ++i) {
 FlakeRef flakeRef(i->value("uri", ""));
 if (!flakeRef.isImmutable())
 throw Error("requested to fetch FlakeRef '%s' purely, which is mutable", flakeRef);
-LockFile::NonFlakeEntry nonEntry(flakeRef, Hash(i->value("contentHash", "")));
+LockFile::NonFlakeEntry nonEntry(flakeRef, Hash(i->value("narHash", "")));
 entry.nonFlakeEntries.insert_or_assign(i.key(), nonEntry);
 }
-auto requires = json["requires"];
+auto inputs = json["inputs"];
-for (auto i = requires.begin(); i != requires.end(); ++i)
+for (auto i = inputs.begin(); i != inputs.end(); ++i)
 entry.flakeEntries.insert_or_assign(i.key(), readFlakeEntry(*i));
 return entry;
@@ -89,19 +89,19 @@ LockFile readLockFile(const Path & path)
 if (version != 1)
 throw Error("lock file '%s' has unsupported version %d", path, version);
-auto nonFlakeRequires = json["nonFlakeRequires"];
+auto nonFlakeInputs = json["nonFlakeInputs"];
-for (auto i = nonFlakeRequires.begin(); i != nonFlakeRequires.end(); ++i) {
+for (auto i = nonFlakeInputs.begin(); i != nonFlakeInputs.end(); ++i) {
 FlakeRef flakeRef(i->value("uri", ""));
-LockFile::NonFlakeEntry nonEntry(flakeRef, Hash(i->value("contentHash", "")));
+LockFile::NonFlakeEntry nonEntry(flakeRef, Hash(i->value("narHash", "")));
 if (!flakeRef.isImmutable())
 throw Error("found mutable FlakeRef '%s' in lockfile at path %s", flakeRef, path);
 lockFile.nonFlakeEntries.insert_or_assign(i.key(), nonEntry);
 }
-auto requires = json["requires"];
+auto inputs = json["inputs"];
-for (auto i = requires.begin(); i != requires.end(); ++i)
+for (auto i = inputs.begin(); i != inputs.end(); ++i)
 lockFile.flakeEntries.insert_or_assign(i.key(), readFlakeEntry(*i));
 return lockFile;
@@ -111,13 +111,13 @@ nlohmann::json flakeEntryToJson(const LockFile::FlakeEntry & entry)
 {
 nlohmann::json json;
 json["uri"] = entry.ref.to_string();
-json["contentHash"] = entry.narHash.to_string(SRI);
+json["narHash"] = entry.narHash.to_string(SRI);
 for (auto & x : entry.nonFlakeEntries) {
-json["nonFlakeRequires"][x.first]["uri"] = x.second.ref.to_string();
+json["nonFlakeInputs"][x.first]["uri"] = x.second.ref.to_string();
-json["nonFlakeRequires"][x.first]["contentHash"] = x.second.narHash.to_string(SRI);
+json["nonFlakeInputs"][x.first]["narHash"] = x.second.narHash.to_string(SRI);
 }
 for (auto & x : entry.flakeEntries)
-json["requires"][x.first.to_string()] = flakeEntryToJson(x.second);
+json["inputs"][x.first.to_string()] = flakeEntryToJson(x.second);
 return json;
 }
@@ -125,14 +125,14 @@ void writeLockFile(const LockFile & lockFile, const Path & path)
 {
 nlohmann::json json;
 json["version"] = 1;
-json["nonFlakeRequires"] = nlohmann::json::object();
+json["nonFlakeInputs"] = nlohmann::json::object();
 for (auto & x : lockFile.nonFlakeEntries) {
-json["nonFlakeRequires"][x.first]["uri"] = x.second.ref.to_string();
+json["nonFlakeInputs"][x.first]["uri"] = x.second.ref.to_string();
-json["nonFlakeRequires"][x.first]["contentHash"] = x.second.narHash.to_string(SRI);
+json["nonFlakeInputs"][x.first]["narHash"] = x.second.narHash.to_string(SRI);
 }
-json["requires"] = nlohmann::json::object();
+json["inputs"] = nlohmann::json::object();
 for (auto & x : lockFile.flakeEntries)
-json["requires"][x.first.to_string()] = flakeEntryToJson(x.second);
+json["inputs"][x.first.to_string()] = flakeEntryToJson(x.second);
 createDirs(dirOf(path));
 writeFile(path, json.dump(4) + "\n"); // '4' = indentation in json file
 }
@@ -319,41 +319,41 @@ Flake getFlake(EvalState & state, const FlakeRef & flakeRef, bool impureIsAllowe
 if (auto description = vInfo.attrs->get(state.sDescription))
 flake.description = state.forceStringNoCtx(*(**description).value, *(**description).pos);
-auto sRequires = state.symbols.create("requires");
+auto sInputs = state.symbols.create("inputs");
-if (auto requires = vInfo.attrs->get(sRequires)) {
+if (auto inputs = vInfo.attrs->get(sInputs)) {
-state.forceList(*(**requires).value, *(**requires).pos);
+state.forceList(*(**inputs).value, *(**inputs).pos);
-for (unsigned int n = 0; n < (**requires).value->listSize(); ++n)
+for (unsigned int n = 0; n < (**inputs).value->listSize(); ++n)
-flake.requires.push_back(FlakeRef(state.forceStringNoCtx(
+flake.inputs.push_back(FlakeRef(state.forceStringNoCtx(
-*(**requires).value->listElems()[n], *(**requires).pos)));
+*(**inputs).value->listElems()[n], *(**inputs).pos)));
 }
-auto sNonFlakeRequires = state.symbols.create("nonFlakeRequires");
+auto sNonFlakeInputs = state.symbols.create("nonFlakeInputs");
-if (std::optional<Attr *> nonFlakeRequires = vInfo.attrs->get(sNonFlakeRequires)) {
+if (std::optional<Attr *> nonFlakeInputs = vInfo.attrs->get(sNonFlakeInputs)) {
-state.forceAttrs(*(**nonFlakeRequires).value, *(**nonFlakeRequires).pos);
+state.forceAttrs(*(**nonFlakeInputs).value, *(**nonFlakeInputs).pos);
-for (Attr attr : *(*(**nonFlakeRequires).value).attrs) {
+for (Attr attr : *(*(**nonFlakeInputs).value).attrs) {
 std::string myNonFlakeUri = state.forceStringNoCtx(*attr.value, *attr.pos);
 FlakeRef nonFlakeRef = FlakeRef(myNonFlakeUri);
-flake.nonFlakeRequires.insert_or_assign(attr.name, nonFlakeRef);
+flake.nonFlakeInputs.insert_or_assign(attr.name, nonFlakeRef);
 }
 }
-auto sProvides = state.symbols.create("provides");
+auto sOutputs = state.symbols.create("outputs");
-if (auto provides = vInfo.attrs->get(sProvides)) {
+if (auto outputs = vInfo.attrs->get(sOutputs)) {
-state.forceFunction(*(**provides).value, *(**provides).pos);
+state.forceFunction(*(**outputs).value, *(**outputs).pos);
-flake.vProvides = (**provides).value;
+flake.vOutputs = (**outputs).value;
 } else
-throw Error("flake '%s' lacks attribute 'provides'", flakeRef);
+throw Error("flake '%s' lacks attribute 'outputs'", flakeRef);
 for (auto & attr : *vInfo.attrs) {
 if (attr.name != sEpoch &&
 attr.name != state.sName &&
 attr.name != state.sDescription &&
-attr.name != sRequires &&
+attr.name != sInputs &&
-attr.name != sNonFlakeRequires &&
+attr.name != sNonFlakeInputs &&
-attr.name != sProvides)
+attr.name != sOutputs)
 throw Error("flake '%s' has an unsupported attribute '%s', at %s",
 flakeRef, attr.name, *attr.pos);
 }
@@ -436,7 +436,7 @@ ResolvedFlake resolveFlakeFromLockFile(EvalState & state, const FlakeRef & flake
 ResolvedFlake deps(flake);
-for (auto & nonFlakeInfo : flake.nonFlakeRequires) {
+for (auto & nonFlakeInfo : flake.nonFlakeInputs) {
 FlakeRef ref = nonFlakeInfo.second;
 auto i = lockFile.nonFlakeEntries.find(nonFlakeInfo.first);
 if (i != lockFile.nonFlakeEntries.end()) {
@@ -451,7 +451,7 @@ ResolvedFlake resolveFlakeFromLockFile(EvalState & state, const FlakeRef & flake
 }
 }
-for (auto newFlakeRef : flake.requires) {
+for (auto newFlakeRef : flake.inputs) {
 auto i = lockFile.flakeEntries.find(newFlakeRef);
 if (i != lockFile.flakeEntries.end()) { // Propagate lockFile downwards if possible
 ResolvedFlake newResFlake = resolveFlakeFromLockFile(state, i->second.ref, handleLockFile, entryToLockFile(i->second));
@@ -532,8 +532,8 @@ static void emitSourceInfoAttrs(EvalState & state, const SourceInfo & sourceInfo
 void callFlake(EvalState & state, const ResolvedFlake & resFlake, Value & v)
 {
-// Construct the resulting attrset '{description, provides,
+// Construct the resulting attrset '{description, outputs,
-// ...}'. This attrset is passed lazily as an argument to 'provides'.
+// ...}'. This attrset is passed lazily as an argument to 'outputs'.
 state.mkAttrs(v, resFlake.flakeDeps.size() + resFlake.nonFlakeDeps.size() + 8);
@@ -558,8 +558,8 @@ void callFlake(EvalState & state, const ResolvedFlake & resFlake, Value & v)
 emitSourceInfoAttrs(state, resFlake.flake.sourceInfo, v);
-auto vProvides = state.allocAttr(v, state.symbols.create("provides"));
+auto vOutputs = state.allocAttr(v, state.symbols.create("outputs"));
-mkApp(*vProvides, *resFlake.flake.vProvides, v);
+mkApp(*vOutputs, *resFlake.flake.vOutputs, v);
 v.attrs->push_back(Attr(state.symbols.create("self"), &v));
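
With the attribute check above, a `flake.nix` may only contain `epoch`, `name`, `description`, `inputs`, `nonFlakeInputs`, and `outputs`, and `outputs` is mandatory. A sketch exercising every accepted attribute (the non-flake reference and file names are placeholders):

```nix
{
  epoch = 2019;
  name = "attribute-demo";
  description = "Uses every attribute the evaluator accepts after this change";

  inputs = [ "nixpkgs" ];

  nonFlakeInputs = {
    # A repository without a flake.nix, fetched as-is (placeholder reference).
    someRepo = "github:example/some-repo";
  };

  # Omitting this now fails with "flake '...' lacks attribute 'outputs'";
  # anything outside the list above is reported as unsupported.
  outputs = inputs: {
    packages.demo = import ./demo.nix { nixpkgs = inputs.nixpkgs; };
  };
}
```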

View file

@@ -106,9 +106,9 @@ struct Flake
 FlakeRef originalRef;
 std::string description;
 SourceInfo sourceInfo;
-std::vector<FlakeRef> requires;
+std::vector<FlakeRef> inputs;
-std::map<FlakeAlias, FlakeRef> nonFlakeRequires;
+std::map<FlakeAlias, FlakeRef> nonFlakeInputs;
-Value * vProvides; // FIXME: gc
+Value * vOutputs; // FIXME: gc
 unsigned int epoch;
 Flake(const FlakeRef & origRef, const SourceInfo & sourceInfo)

View file

@@ -199,16 +199,16 @@ struct CmdFlakeUpdate : FlakeCommand
 }
 };
-static void enumerateProvides(EvalState & state, Value & vFlake,
+static void enumerateOutputs(EvalState & state, Value & vFlake,
 std::function<void(const std::string & name, Value & vProvide)> callback)
 {
 state.forceAttrs(vFlake);
-auto vProvides = (*vFlake.attrs->get(state.symbols.create("provides")))->value;
+auto vOutputs = (*vFlake.attrs->get(state.symbols.create("outputs")))->value;
-state.forceAttrs(*vProvides);
+state.forceAttrs(*vOutputs);
-for (auto & attr : *vProvides->attrs)
+for (auto & attr : *vOutputs->attrs)
 callback(attr.name, *attr.value);
 }
@@ -237,9 +237,9 @@ struct CmdFlakeInfo : FlakeCommand, MixJSON
 auto vFlake = state->allocValue();
 flake::callFlake(*state, flake, *vFlake);
-auto provides = nlohmann::json::object();
+auto outputs = nlohmann::json::object();
-enumerateProvides(*state,
+enumerateOutputs(*state,
 *vFlake,
 [&](const std::string & name, Value & vProvide) {
 auto provide = nlohmann::json::object();
@@ -250,10 +250,10 @@ struct CmdFlakeInfo : FlakeCommand, MixJSON
 provide[aCheck.name] = nlohmann::json::object();
 }
-provides[name] = provide;
+outputs[name] = provide;
 });
-json["provides"] = std::move(provides);
+json["outputs"] = std::move(outputs);
 std::cout << json.dump() << std::endl;
 } else
@@ -298,7 +298,7 @@ struct CmdFlakeCheck : FlakeCommand, MixJSON
 // FIXME: check meta attributes
 return drvInfo->queryDrvPath();
 } catch (Error & e) {
-e.addPrefix(fmt("while checking flake attribute '" ANSI_BOLD "%s" ANSI_NORMAL "':\n", attrPath));
+e.addPrefix(fmt("while checking flake output attribute '" ANSI_BOLD "%s" ANSI_NORMAL "':\n", attrPath));
 throw;
 }
 };
@@ -311,7 +311,7 @@ struct CmdFlakeCheck : FlakeCommand, MixJSON
 auto vFlake = state->allocValue();
 flake::callFlake(*state, flake, *vFlake);
-enumerateProvides(*state,
+enumerateOutputs(*state,
 *vFlake,
 [&](const std::string & name, Value & vProvide) {
 Activity act(*logger, lvlChatty, actUnknown,

View file

@@ -230,16 +230,16 @@ struct InstallableFlake : InstallableValue
 makeFlakeClosureGCRoot(*state.store, flakeRef, resFlake);
-auto vProvides = (*vFlake->attrs->get(state.symbols.create("provides")))->value;
+auto vOutputs = (*vFlake->attrs->get(state.symbols.create("outputs")))->value;
-state.forceValue(*vProvides);
+state.forceValue(*vOutputs);
 auto emptyArgs = state.allocBindings(0);
 if (searchPackages) {
 // As a convenience, look for the attribute in
-// 'provides.packages'.
+// 'outputs.packages'.
-if (auto aPackages = *vProvides->attrs->get(state.symbols.create("packages"))) {
+if (auto aPackages = *vOutputs->attrs->get(state.symbols.create("packages"))) {
 try {
 auto * v = findAlongAttrPath(state, *attrPaths.begin(), *emptyArgs, *aPackages->value);
 state.forceValue(*v);
@@ -250,7 +250,7 @@ struct InstallableFlake : InstallableValue
 // As a temporary hack until Nixpkgs is properly converted
 // to provide a clean 'packages' set, look in 'legacyPackages'.
-if (auto aPackages = *vProvides->attrs->get(state.symbols.create("legacyPackages"))) {
+if (auto aPackages = *vOutputs->attrs->get(state.symbols.create("legacyPackages"))) {
 try {
 auto * v = findAlongAttrPath(state, *attrPaths.begin(), *emptyArgs, *aPackages->value);
 state.forceValue(*v);
@@ -260,10 +260,10 @@ struct InstallableFlake : InstallableValue
 }
 }
-// Otherwise, look for it in 'provides'.
+// Otherwise, look for it in 'outputs'.
 for (auto & attrPath : attrPaths) {
 try {
-auto * v = findAlongAttrPath(state, attrPath, *emptyArgs, *vProvides);
+auto * v = findAlongAttrPath(state, attrPath, *emptyArgs, *vOutputs);
 state.forceValue(*v);
 return v;
 } catch (AttrPathNotFound & e) {
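
The lookup order in `InstallableFlake` is therefore: `outputs.packages` first, then `outputs.legacyPackages` (the temporary Nixpkgs escape hatch), and finally the raw attribute path under `outputs`. A sketch of a flake where all three paths exist (attribute names and files are illustrative):

```nix
{
  name = "lookup-demo";
  epoch = 2019;
  inputs = [ "nixpkgs" ];

  outputs = inputs: rec {
    # 1. Checked first: something like `nix install lookup-demo.hello`
    #    resolves to `outputs.packages.hello`.
    packages.hello = import ./hello.nix { nixpkgs = inputs.nixpkgs; };

    # 2. Fallback for not-yet-converted package sets (the Nixpkgs hack above).
    legacyPackages.oldHello = import ./old-hello.nix { nixpkgs = inputs.nixpkgs; };

    # 3. Finally, any attribute path is looked up directly under `outputs`.
    hydraJobs.build = packages.hello;
  };
}
```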

View file

@@ -33,7 +33,7 @@ cat > $flake1Dir/flake.nix <<EOF
 description = "Bla bla";
-provides = deps: rec {
+outputs = inputs: rec {
 packages.foo = import ./simple.nix;
 defaultPackage = packages.foo;
 };
@@ -50,12 +50,12 @@ cat > $flake2Dir/flake.nix <<EOF
 epoch = 2019;
-requires = [ "flake1" ];
+inputs = [ "flake1" ];
 description = "Fnord";
-provides = deps: rec {
+outputs = inputs: rec {
-packages.bar = deps.flake1.provides.packages.foo;
+packages.bar = inputs.flake1.outputs.packages.foo;
 };
 }
 EOF
@@ -69,12 +69,12 @@ cat > $flake3Dir/flake.nix <<EOF
 epoch = 2019;
-requires = [ "flake2" ];
+inputs = [ "flake2" ];
 description = "Fnord";
-provides = deps: rec {
+outputs = inputs: rec {
-packages.xyzzy = deps.flake2.provides.packages.bar;
+packages.xyzzy = inputs.flake2.outputs.packages.bar;
 };
 }
 EOF
@@ -168,13 +168,13 @@ cat > $flake3Dir/flake.nix <<EOF
 epoch = 2019;
-requires = [ "flake1" "flake2" ];
+inputs = [ "flake1" "flake2" ];
 description = "Fnord";
-provides = deps: rec {
+outputs = inputs: rec {
-packages.xyzzy = deps.flake2.provides.packages.bar;
+packages.xyzzy = inputs.flake2.outputs.packages.bar;
-packages.sth = deps.flake1.provides.packages.foo;
+packages.sth = inputs.flake1.outputs.packages.foo;
 };
 }
 EOF
@@ -209,7 +209,7 @@ nix build -o $TEST_ROOT/result --flake-registry file://$registry file://$flake2D
 mv $flake1Dir.tmp $flake1Dir
 mv $flake2Dir.tmp $flake2Dir
-# Add nonFlakeRequires to flake3.
+# Add nonFlakeInputs to flake3.
 rm $flake3Dir/flake.nix
 cat > $flake3Dir/flake.nix <<EOF
@@ -218,23 +218,23 @@ cat > $flake3Dir/flake.nix <<EOF
 epoch = 2019;
-requires = [ "flake1" "flake2" ];
+inputs = [ "flake1" "flake2" ];
-nonFlakeRequires = {
+nonFlakeInputs = {
 nonFlake = "$nonFlakeDir";
 };
 description = "Fnord";
-provides = deps: rec {
+outputs = inputs: rec {
-packages.xyzzy = deps.flake2.provides.packages.bar;
+packages.xyzzy = inputs.flake2.outputs.packages.bar;
-packages.sth = deps.flake1.provides.packages.foo;
+packages.sth = inputs.flake1.outputs.packages.foo;
 };
 }
 EOF
 git -C $flake3Dir add flake.nix
-git -C $flake3Dir commit -m 'Add nonFlakeRequires'
+git -C $flake3Dir commit -m 'Add nonFlakeInputs'
-# Check whether `nix build` works with a lockfile which is missing a nonFlakeRequires
+# Check whether `nix build` works with a lockfile which is missing a nonFlakeInputs
 nix build -o $TEST_ROOT/result --flake-registry $registry $flake3Dir:sth