Merge branch 'master' into support-libcxx10

Matthew Bauer 2020-12-24 14:16:09 -06:00 committed by GitHub
commit ede534a3a1
No known key found for this signature in database
GPG key ID: 4AEE18F83AFDEB23
145 changed files with 4971 additions and 1133 deletions


@ -10,7 +10,6 @@ makefiles = \
src/nix/local.mk \ src/nix/local.mk \
src/resolve-system-dependencies/local.mk \ src/resolve-system-dependencies/local.mk \
scripts/local.mk \ scripts/local.mk \
corepkgs/local.mk \
misc/bash/local.mk \ misc/bash/local.mk \
misc/systemd/local.mk \ misc/systemd/local.mk \
misc/launchd/local.mk \ misc/launchd/local.mk \


@ -1,4 +0,0 @@
corepkgs_FILES = \
fetchurl.nix
$(foreach file,$(corepkgs_FILES),$(eval $(call install-data-in,$(d)/$(file),$(datadir)/nix/corepkgs)))


@ -20,11 +20,6 @@ let
(attrNames def.commands)) (attrNames def.commands))
+ "\n" + "\n"
else "") else "")
+ (if def.examples or [] != []
then
"# Examples\n\n"
+ concatStrings (map ({ description, command }: "${description}\n\n```console\n${command}\n```\n\n") def.examples)
else "")
+ (if def ? doc + (if def ? doc
then def.doc + "\n\n" then def.doc + "\n\n"
else "") else "")
@ -43,7 +38,7 @@ let
if flag.category or "" != "config" if flag.category or "" != "config"
then then
" - `--${longName}`" " - `--${longName}`"
+ (if flag ? shortName then " / `${flag.shortName}`" else "") + (if flag ? shortName then " / `-${flag.shortName}`" else "")
+ (if flag ? labels then " " + (concatStringsSep " " (map (s: "*${s}*") flag.labels)) else "") + (if flag ? labels then " " + (concatStringsSep " " (map (s: "*${s}*") flag.labels)) else "")
+ " \n" + " \n"
+ " " + flag.description + "\n\n" + " " + flag.description + "\n\n"


@ -66,7 +66,9 @@
- [Files](command-ref/files.md) - [Files](command-ref/files.md)
- [nix.conf](command-ref/conf-file.md) - [nix.conf](command-ref/conf-file.md)
- [Glossary](glossary.md) - [Glossary](glossary.md)
- [Hacking](hacking.md) - [Contributing](contributing/contributing.md)
- [Hacking](contributing/hacking.md)
- [CLI guideline](contributing/cli-guideline.md)
- [Release Notes](release-notes/release-notes.md) - [Release Notes](release-notes/release-notes.md)
- [Release 2.3 (2019-09-04)](release-notes/rl-2.3.md) - [Release 2.3 (2019-09-04)](release-notes/rl-2.3.md)
- [Release 2.2 (2019-01-11)](release-notes/rl-2.2.md) - [Release 2.2 (2019-01-11)](release-notes/rl-2.2.md)


@ -0,0 +1,589 @@
# CLI guideline
## Goals
The purpose of this document is to provide a clear direction to **help design a
delightful command line** experience. This document contains guidelines to
follow to ensure a consistent and approachable user experience.
## Overview
The `nix` command provides a single entry point to a number of sub-commands that help
**developers and system administrators** in the life-cycle of a software
project. We particularly need to pay attention to helping and assisting new
users of Nix.
# Naming the `COMMANDS`
Words matter. Naming is an important part of usability. Users will be
interacting with Nix on a regular basis, so we should **name things for ease of
understanding**.
We recommend following the [Principle of Least
Astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment).
This means that you should **never use acronyms or abbreviations** unless they
are commonly used in other tools (e.g. `nix init`). And if the command name is
too long (> 10-12 characters) then shortening it makes sense (e.g.
“prioritization” → “priority”).
Commands should **follow a noun-verb dialogue**. Although noun-verb formatting
seems backwards from a speaking perspective (i.e. `nix store copy` vs. `nix
copy store`) it allows us to organize commands the same way users think about
completing an action (the group first, then the command).
## Naming rules
Rules are there to guide you by limiting your options. But not everything can
fit the rules all the time. In those cases, document the exceptions in [Appendix
1: Commands naming exceptions](#appendix-1-commands-naming-exceptions) and
provide a reason. The rules are meant to force a Nix developer to look not just at
the command at hand, but also at the command in its full context alongside other
`nix` commands.
```shell
$ nix [<GROUP>] <COMMAND> [<ARGUMENTS>] [<OPTIONS>]
```
- `GROUP`, `COMMAND`, `ARGUMENTS` and `OPTIONS` should be lowercase and in a
singular form.
- `GROUP` should be a **NOUN**.
- `COMMAND` should be a **VERB**.
- `ARGUMENTS` and `OPTIONS` are discussed in [*Input* section](#input).
## Classification
Some commands are more important, some less. While we want all of our commands
to be perfect, we can only spend a limited amount of time testing and improving
them.
This classification tries to separate commands into 3 categories in terms of
their importance to new users, who are likely to be
impacted the most by a bad user experience.
- **Main commands**
Commands used for our main use cases and most likely used by new users. We
expect attention to detail, such as:
- Proper use of [colors](#colors), [emojis](#special-unicode-characters)
and [aligning of text](#text-alignment).
- [Autocomplete](#shell-completion) of options.
- Show [next possible steps](#next-steps).
- Showing some [“tips”](#educate-the-user) when running long-running tasks
(eg. building / downloading) in order to teach users interesting bits of
the Nix ecosystem.
- [Help pages](#help-is-essential) that are as good as we can write them,
pointing to external documentation and tutorials for more.
Examples of such commands: `nix init`, `nix develop`, `nix build`, `nix run`,
...
- **Infrequently used commands**
For infrequently used commands we expect less attention to detail, but
still some:
- Proper use of [colors](#colors), [emojis](#special-unicode-characters)
and [aligning of text](#text-alignment).
- [Autocomplete](#shell-completion) of options.
Examples of such commands: `nix doctor`, `nix edit`, `nix eval`, ...
- **Utility and scripting commands**
Commands that expose certain internal functionality of `nix`, mostly used by
other scripts.
- [Autocomplete](#shell-completion) of options.
Examples of such commands: `nix store copy`, `nix hash base16`, `nix store
ping`, ...
# Help is essential
Help should be built into your command line so that new users can gradually
discover new features when they need them.
## Looking for help
Since there is no standard way users look for help, we rely on the ways help
is provided by commonly used tools. As a guide for this we took `git`; whenever
in doubt, look to it as the preferred direction.
The rules are:
- Help is shown by using the `--help` option or the `help` command (eg `nix --help` or
`nix help`).
- For non-COMMANDs (eg. `nix --help` and `nix store --help`) we **show
a summary** of the most common use cases. The summary is printed to STDOUT
without any use of a PAGER.
- For COMMANDs (eg. `nix init --help` or `nix help init`) we display the
man page of that command. By default the PAGER is used (as in `git`).
- At the end of either the summary or the man page there should be a URL pointing to
an online version of more detailed documentation.
- The structure of summaries and man pages should be the same as in `git`.
## Anticipate where help is needed
Even better than requiring the user to search for help is to anticipate and
predict when the user might need it, whether because of a lack of discoverability,
a typo in the input, or simply to take the opportunity to teach the user about
interesting - but less visible - details.
### Shell completion
This type of help is most common and almost expected by users. We need to
**provide the best shell completion** for `bash`, `zsh` and `fish`.
Completion needs to be **context aware**; this means that when a user types:
```shell
$ nix build n<TAB>
```
we need to display a list of flakes starting with `n`.
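As a rough illustration of that context-aware behaviour - a sketch only, not Nix's actual completion machinery - the snippet below filters a hypothetical list of known flake names by the prefix the user has typed; a real implementation would query the flake registry and be driven by the shell's completion hook. The `completeFlakeNames` helper and the candidate list are made up for the example.
```cpp
// Sketch only: prefix-filter candidate completions for "nix build n<TAB>".
// The candidate list and helper name are hypothetical, not Nix internals.
#include <iostream>
#include <string>
#include <vector>

std::vector<std::string> completeFlakeNames(const std::string & prefix)
{
    // A real implementation would ask the flake registry for candidates.
    std::vector<std::string> known = { "nixpkgs", "nix", "nickel", "home-manager" };
    std::vector<std::string> matches;
    for (auto & name : known)
        if (name.compare(0, prefix.size(), prefix) == 0)
            matches.push_back(name);
    return matches;
}

int main(int argc, char * * argv)
{
    std::string prefix = argc > 1 ? argv[1] : "";
    // One candidate per line, so a shell completion hook can consume the output.
    for (auto & m : completeFlakeNames(prefix))
        std::cout << m << "\n";
}
```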
### Wrong input
As we all know, we humans make mistakes, all the time. When a typo - intentional
or unintentional - is made, we should prompt for the closest possible options or
point to the documentation which would educate the user not to make the same
errors. Here are a few examples:
In the first example we prompt the user after they type a wrong command name:
```shell
$ nix int
------------------------------------------------------------------------
Error! Command `int` not found.
------------------------------------------------------------------------
Did you mean:
|> nix init
|> nix input
```
Sometimes users will make a mistake either because of a typo or simply because of
a lack of discoverability. Our handling of these cases needs to be context
sensitive.
```shell
$ nix init --template=template#pyton
------------------------------------------------------------------------
Error! Template `template#pyton` not found.
------------------------------------------------------------------------
Initializing Nix project at `/path/to/here`.
Select a template for your new project:
|> template#pyton
template#python-pip
template#python-poetry
```
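One common way to produce such "Did you mean" suggestions - shown here as an illustrative sketch, not necessarily how Nix implements it - is to rank the known commands by edit distance from the typo and print the closest few on STDERR:
```cpp
// Sketch: suggest the closest known commands for a mistyped one using
// Levenshtein distance. The command list is illustrative only.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

size_t editDistance(const std::string & a, const std::string & b)
{
    std::vector<std::vector<size_t>> d(a.size() + 1, std::vector<size_t>(b.size() + 1));
    for (size_t i = 0; i <= a.size(); ++i) d[i][0] = i;
    for (size_t j = 0; j <= b.size(); ++j) d[0][j] = j;
    for (size_t i = 1; i <= a.size(); ++i)
        for (size_t j = 1; j <= b.size(); ++j)
            d[i][j] = std::min({ d[i - 1][j] + 1, d[i][j - 1] + 1,
                                 d[i - 1][j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1) });
    return d[a.size()][b.size()];
}

int main()
{
    std::string typo = "int";
    std::vector<std::string> commands = { "init", "input", "build", "develop", "run" };
    // Rank the known commands by closeness to the typo and keep the best two.
    std::sort(commands.begin(), commands.end(), [&](auto & x, auto & y) {
        return editDistance(typo, x) < editDistance(typo, y);
    });
    std::cerr << "Error! Command `" << typo << "` not found.\nDid you mean:\n";
    for (size_t i = 0; i < 2 && i < commands.size(); ++i)
        std::cerr << " |> nix " << commands[i] << "\n";
}
```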
### Next steps
It can be invaluable to newcomers to show what the possible next steps are and what
the usual development workflow with Nix looks like. For example:
```shell
$ nix init --template=template#python
Initializing project `template#python`
in `/home/USER/dev/new-project`
Next steps
|> nix develop -- to enter development environment
|> nix build -- to build your project
```
### Educate the user
We should take any opportunity to **educate users**, but at the same time we
must **be very careful not to annoy users**. There is a thin line between
being helpful and being annoying.
An example of educating users might be to provide *Tips* in places where they
are waiting.
```shell
$ nix build
Started building my-project 1.2.3
Downloaded python3.8-poetry 1.2.3 in 5.3 seconds
Downloaded python3.8-requests 1.2.3 in 5.3 seconds
------------------------------------------------------------------------
Press `v` to increase logs verbosity
|> `?` to see other options
------------------------------------------------------------------------
Learn something new with every build...
|> See last logs of a build with `nix log --last` command.
------------------------------------------------------------------------
Evaluated my-project 1.2.3 in 14.43 seconds
Downloading [12 / 200]
|> firefox 1.2.3 [#########> ] 10Mb/s | 2min left
Building [2 / 20]
|> glibc 1.2.3 -> buildPhase: <last log line>
------------------------------------------------------------------------
```
The **Learn** part of the output is where you educate users. You should only
show it when you know that a build will take some time, and not annoy users of
builds that take only a few seconds.
Every feature like this should go through intensive review and testing to
collect as much feedback as possible and to fine-tune every little detail. If
done right this can be an awesome feature that beginners and advanced users will
love, but if not done perfectly it will annoy users and leave a bad impression.
# Input
Input to a command is provided via `ARGUMENTS` and `OPTIONS`.
`ARGUMENTS` represent a required input for a function. When choosing to use
`ARGUMENTS` over `OPTIONS`, please be aware of the downsides that come with them:
- The user will need to remember the order of `ARGUMENTS`. This is not a problem if
there is only one `ARGUMENT`.
- With `OPTIONS` it is possible to provide much better auto completion.
- With `OPTIONS` it is possible to provide much better error messages.
- Using `OPTIONS` means there is a little bit more typing.
We don't discourage the use of `ARGUMENTS`, but simply want to make every
developer consider the downsides and choose wisely.
## Naming the `OPTIONS`
The only naming convention - apart from the ones mentioned in the Naming the
`COMMANDS` section - is how flags are named.
Flags are a type of `OPTION` that represent an option that can be turned ON or
OFF. We can say **flags are a boolean type of `OPTION`**.
Here are a few examples of flag `OPTIONS`:
- `--colors` vs. `--no-colors` (showing colors in the output)
- `--emojis` vs. `--no-emojis` (showing emojis in the output)
## Prompt when input not provided
For *main commands* (as [per classification](#classification)) we want the command
to improve the discoverability of possible input. A new user will most likely
not know which `ARGUMENTS` and `OPTIONS` are required or which values are
possible for those options.
In cases where the user does not provide the input, or provides wrong input,
rather than showing the error, prompt the user with an option to find and select the
correct input (see examples).
Prompting is of course not required when a TTY is not attached to STDIN. This
means that scripts won't need to handle prompts, but rather handle errors.
A place to use a prompt and provide the user with an interactive selection:
```shell
$ nix init
Initializing Nix project at `/path/to/here`.
Select a template for your new project:
|> py
template#python-pip
template#python-poetry
[ Showing 2 templates from 1345 templates ]
```
Another great place to add prompts is **confirmation dialogues for dangerous
actions**. For example, when adding a new substitutor via `OPTIONS` or via
`flake.nix`, we should prompt - the first time - and let the user review what is
going to happen.
```shell
$ nix build --option substitutors https://cache.example.org
------------------------------------------------------------------------
Warning! A security-related question needs to be answered.
------------------------------------------------------------------------
The following substitutors will be used in `my-project`:
- https://cache.example.org
Do you allow `my-project` to use the above-mentioned substitutors?
[y/N] |> y
```
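Below is a minimal sketch of such a confirmation prompt, assuming POSIX `isatty`: it defaults to "No", and when STDIN is not a terminal it skips the prompt entirely and reports failure, so scripts get an error instead of a hang. The `confirm` helper and the exact wording are invented for the example.
```cpp
// Sketch: a yes/no question that defaults to "No" and never blocks scripts.
#include <iostream>
#include <string>
#include <unistd.h>   // isatty

bool confirm(const std::string & question)
{
    // Non-interactive use (pipes, CI): do not prompt, treat the answer as "No".
    if (!isatty(STDIN_FILENO)) return false;
    std::cerr << question << "\n[y/N] ";
    std::string answer;
    if (!std::getline(std::cin, answer)) return false;
    return answer == "y" || answer == "Y" || answer == "yes";
}

int main()
{
    if (confirm("Do you allow `my-project` to use the substitutor https://cache.example.org?"))
        std::cerr << "Substitutor accepted.\n";
    else {
        std::cerr << "Substitutor rejected.\n";
        return 1;
    }
}
```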
# Output
Terminal output can be quite limiting in many ways, which should force us to
think about the experience even more. As with every design, the output is a
compromise between being terse and being verbose, between showing help to
beginners and annoying advanced users. For this it is important that we know
what the priorities are.
The Nix command line should first and foremost be written with beginners in mind.
But users won't stay beginners for long, and what was once useful might quickly
become annoying. There is no golden rule that we can give in this guideline
that would make it easier to draw the line and find the best compromise.
What we would encourage is to **build prototypes**, do some **user testing**
and collect **feedback**. Then repeat the cycle a few times.
First design the *happy path* and only after you iron it out, continue to work
on **edge cases** (handling and displaying errors, changes to the output caused by
certain `OPTIONS`, etc.).
## Follow best practices
Needless to say, Nix must be a good citizen and follow command line best
practices.
In short: **STDOUT is for output, STDERR is for (human) messaging.**
STDOUT and STDERR provide a way for you to output messages to the user while
also allowing them to redirect content to a file. For example:
```shell
$ nix build > build.txt
------------------------------------------------------------------------
Error! Attribute `bin` missing at (1:94) from string.
------------------------------------------------------------------------
1| with import <nixpkgs> { }; (pkgs.runCommandCC or pkgs.runCommand) "shell" { buildInputs = [ (surge.bin) ]; } ""
```
Because this message is on STDERR, it doesn't end up in the file.
Not everything on STDERR is an error though. For example, you can run `nix
build` and collect the logs in a file while still seeing the progress.
```
$ nix build > build.txt
Evaluated 1234 files in 1.2 seconds
Downloaded python3.8-poetry 1.2.3 in 5.3 seconds
Downloaded python3.8-requests 1.2.3 in 5.3 seconds
------------------------------------------------------------------------
Press `v` to increase logs verbosity
|> `?` to see other options
------------------------------------------------------------------------
Learn something new with every build...
|> See last logs of a build with `nix log --last` command.
------------------------------------------------------------------------
Evaluated my-project 1.2.3 in 14.43 seconds
Downloading [12 / 200]
|> firefox 1.2.3 [#########> ] 10Mb/s | 2min left
Building [2 / 20]
|> glibc 1.2.3 -> buildPhase: <last log line>
------------------------------------------------------------------------
```
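A tiny sketch of that separation: the result of the command (something another tool might consume) goes to STDOUT, while progress and other human-oriented messages go to STDERR, so `nix build > build.txt` captures only the result. The store path below is a made-up placeholder.
```cpp
// Sketch: results on STDOUT, human-oriented messages on STDERR.
#include <iostream>

int main()
{
    // Progress and status belong on STDERR so they stay visible after redirection.
    std::cerr << "Started building my-project 1.2.3\n";
    std::cerr << "Downloaded python3.8-poetry 1.2.3 in 5.3 seconds\n";

    // The actual result belongs on STDOUT so it ends up in the redirected file.
    std::cout << "/nix/store/<hash>-my-project-1.2.3\n";
}
```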
## Errors (WIP)
**TODO**: Once we have an implementation for the *happy path*, we will think
about how to present errors.
## Not only for humans
Terse, machine-readable output formats can also be useful but shouldn't get in
the way of making beautiful CLI output. When needed, commands should offer a
`--json` flag to allow users to easily parse and script the CLI.
When a TTY is not detected on STDOUT we should remove all design elements (no
colors, no emojis and ASCII instead of Unicode symbols). The same should
happen when a TTY is not detected on STDERR: we should not display the progress /
status section, but only print warnings and errors.
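A sketch of how such TTY detection could look, using POSIX `isatty`; the `OutputStyle` structure and field names are assumptions for illustration, not Nix's actual implementation:
```cpp
// Sketch: drop colors, emojis and the progress section when no TTY is attached.
#include <cstdio>
#include <unistd.h>   // isatty

struct OutputStyle
{
    bool colors = true;    // ANSI colors on STDOUT
    bool unicode = true;   // Unicode symbols / emojis
    bool progress = true;  // progress / status section on STDERR
};

OutputStyle detectStyle()
{
    OutputStyle style;
    if (!isatty(STDOUT_FILENO)) {
        // Plain output for pipes and files: ASCII only, no colors.
        style.colors = false;
        style.unicode = false;
    }
    if (!isatty(STDERR_FILENO))
        style.progress = false;   // only warnings and errors remain
    return style;
}

int main()
{
    auto style = detectStyle();
    std::printf("colors=%d unicode=%d progress=%d\n",
                style.colors, style.unicode, style.progress);
}
```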
## Dialog with the user
CLIs don't always make it clear when an action has taken place. For every
action a user performs, your CLI should provide an equal and appropriate
reaction, clearly highlighting what just happened. For example:
```shell
$ nix build
Downloaded python3.8-poetry 1.2.3 in 5.3 seconds
Downloaded python3.8-requests 1.2.3 in 5.3 seconds
...
Success! You have successfully built my-project.
$
```
The above command clearly states that it completed successfully. And in the case
of `nix build`, which is a command that might take some time to complete, it is
equally important to also show that the command started.
## Text alignment
Text alignment is the number one design element that will present all of the
Nix commands as a family and not as separate tools glued together.
The format we should follow is:
```shell
$ nix COMMAND
VERB_1 NOUN and other words
VERB__1 NOUN and other words
|> Some details
```
A few rules that we can extract from the above example:
- Each line should start with at least one space.
- The first word should be a VERB and must be aligned to the right.
- The second word should be a NOUN and must be aligned to the left.
- If you cannot find a good VERB / NOUN pair, don't worry - make it as
understandable to the user as possible.
- More details for each line can be provided after the `|>` marker, which serves
as the first word when aligning the text.
Don't forget that you should also test your terminal output with colors and emojis
off (`--no-colors --no-emojis`).
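The sketch below shows one way to produce that layout, with a right-aligned VERB column, a left-aligned remainder, and `|>` detail lines; the column width and helper names are assumptions, not the actual Nix formatter:
```cpp
// Sketch: right-align the VERB, left-align the NOUN, indent details with "|>".
#include <iomanip>
#include <iostream>
#include <string>

const int verbWidth = 12;   // width of the right-aligned verb column (assumed)

void statusLine(const std::string & verb, const std::string & rest)
{
    std::cerr << " " << std::setw(verbWidth) << std::right << verb
              << " " << rest << "\n";
}

void detailLine(const std::string & detail)
{
    // "|>" takes the place of the verb so details line up with the nouns above.
    std::cerr << " " << std::setw(verbWidth) << std::right << "|>"
              << " " << detail << "\n";
}

int main()
{
    statusLine("Downloaded", "python3.8-poetry 1.2.3 in 5.3 seconds");
    statusLine("Building", "my-project 1.2.3");
    detailLine("glibc 1.2.3 -> buildPhase: <last log line>");
}
```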
## Dim / Bright
After comparing a few terminals with different color schemes we **recommend
avoiding dimmed text**. The difference from the rest of the text is very
small in many terminal and color scheme combinations. Sometimes the difference
is not even noticeable, so relying on it wouldn't make much sense.
**Bright text is much better supported** across terminals and color
schemes. Most of the time the difference is perceived as if the bright text
were bold.
## Colors
Humans are already conditioned by society to attach certain meaning to certain
colors. While the meaning is not universal, a simple collection of colors is
used to represent basic emotions.
Colors that can be used in output:
- Red = error, danger, stop
- Green = success, good
- Yellow/Orange = proceed with caution, warning, in progress
- Blue/Magenta = stability, calm
While colors are nice, when the command line is used by machines (in automation
scripts) you want to remove the colors. There should be a global `--no-colors`
option that removes them.
## Special (Unicode) characters
Most terminals have good support for Unicode characters and you should
use them in your output by default. But always have a fallback that is
implemented only with ASCII characters and is used when the `--ascii` option
is passed in. Please make sure that you also test your output
without Unicode characters.
More than showing off all the different Unicode characters, it is important to
**establish a common set of characters** that we use for certain situations.
## Emojis
Emojis help channel emotions even better than text, colors and special
characters.
We recommend **keeping the set of emojis to a minimum**. This will enable each
emoji to stand out more.
As not everybody is happy about emojis, we should provide a `--no-emojis`
option to disable them. Please make sure that you also test your output without
emojis.
## Tables
All commands that list certain data can present it in some sort of
table. It's important that each row of your output is a single entry of data.
Never output table borders. They are noisy and a huge pain to parse with other
tools such as `grep`.
Be mindful of the screen width. Only show a few columns by default, with the
table header; for more, the table can be manipulated with the following options
(a rendering sketch follows this list):
- `--no-headers`: Show column headers by default but allow hiding them.
- `--columns`: Comma-separated list of column names to add.
- `--sort`: Allow sorting by column. Allow inverse and multi-column sort as well.
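A borderless table can be rendered by measuring each column and padding with spaces; the sketch below is illustrative only, with `printTable` and its data made up, and `showHeaders` standing in for the effect of `--no-headers`:
```cpp
// Sketch: a borderless, space-padded table with an optional header row.
#include <algorithm>
#include <iomanip>
#include <iostream>
#include <string>
#include <vector>

using Row = std::vector<std::string>;

void printTable(const Row & header, const std::vector<Row> & rows, bool showHeaders)
{
    // Compute the widest cell of every column.
    std::vector<size_t> widths(header.size(), 0);
    auto measure = [&](const Row & row) {
        for (size_t c = 0; c < row.size(); ++c)
            widths[c] = std::max(widths[c], row[c].size());
    };
    measure(header);
    for (auto & row : rows) measure(row);

    auto printRow = [&](const Row & row) {
        for (size_t c = 0; c < row.size(); ++c)
            std::cout << std::left << std::setw(widths[c] + 2) << row[c];
        std::cout << "\n";
    };
    if (showHeaders) printRow(header);   // hidden when --no-headers is passed
    for (auto & row : rows) printRow(row);
}

int main()
{
    printTable({ "NAME", "VERSION" },
               { { "python3.8-poetry", "1.2.3" },
                 { "python3.8-requests", "1.2.3" } },
               /* showHeaders */ true);
}
```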
## Interactive output
Interactive output was selected to strike a balance between
beginners and advanced users. While the default output will target beginners, it
can, with a few keystrokes, be changed into an advanced introspection tool.
### Progress
For longer running commands we should provide an overview of the progress.
This is best shown in the `nix build` example:
```shell
$ nix build
Started building my-project 1.2.3
Downloaded python3.8-poetry 1.2.3 in 5.3 seconds
Downloaded python3.8-requests 1.2.3 in 5.3 seconds
------------------------------------------------------------------------
Press `v` to increase logs verbosity
|> `?` to see other options
------------------------------------------------------------------------
Learn something new with every build...
|> See last logs of a build with `nix log --last` command.
------------------------------------------------------------------------
Evaluated my-project 1.2.3 in 14.43 seconds
Downloading [12 / 200]
|> firefox 1.2.3 [#########> ] 10Mb/s | 2min left
Building [2 / 20]
|> glibc 1.2.3 -> buildPhase: <last log line>
------------------------------------------------------------------------
```
### Search
Use `fzf`-like fuzzy search when there are multiple options to choose from.
```shell
$ nix init
Initializing Nix project at `/path/to/here`.
Select a template for your new project:
|> py
template#python-pip
template#python-poetry
[ Showing 2 templates from 1345 templates ]
```
### Prompt
In some situations we need to prompt the user and inform them about what is
going to happen.
```shell
$ nix build --option substitutors https://cache.example.org
------------------------------------------------------------------------
Warning! A security-related question needs to be answered.
------------------------------------------------------------------------
The following substitutors will be used in `my-project`:
- https://cache.example.org
Do you allow `my-project` to use the above-mentioned substitutors?
[y/N] |> y
```
## Verbosity
There are many ways that you can control verbosity.
Verbosity levels are:
- `ERROR` (level 0)
- `WARN` (level 1)
- `NOTICE` (level 2)
- `INFO` (level 3)
- `TALKATIVE` (level 4)
- `CHATTY` (level 5)
- `DEBUG` (level 6)
- `VOMIT` (level 7)
The default level that the command starts at is `ERROR`. The simplest way to
increase the verbosity is by stacking the `-v` option (eg: `-vvv == level 3 == INFO`).
There are also two shortcuts, `--debug` to run at the `DEBUG` verbosity level and
`--quiet` to run at the `ERROR` verbosity level.
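A minimal sketch of how the stacked `-v` option and the two shortcuts could map to these levels; the `lvl*` names simply mirror the list above and the parsing is illustrative, not Nix's actual option handling:
```cpp
// Sketch: map -v / -vv / ..., --debug and --quiet to the verbosity levels above.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

enum Verbosity { lvlError = 0, lvlWarn, lvlNotice, lvlInfo,
                 lvlTalkative, lvlChatty, lvlDebug, lvlVomit };

Verbosity parseVerbosity(const std::vector<std::string> & args)
{
    int level = lvlError;   // default level
    for (auto & arg : args) {
        if (arg == "--debug") level = lvlDebug;
        else if (arg == "--quiet") level = lvlError;
        else if (arg.size() >= 2 && arg[0] == '-'
                 && arg.find_first_not_of('v', 1) == std::string::npos)
            level += arg.size() - 1;   // each 'v' in "-v", "-vv", ... adds one level
    }
    return static_cast<Verbosity>(std::min(level, int(lvlVomit)));
}

int main()
{
    std::cout << parseVerbosity({ "-vvv" }) << "\n";    // prints 3 (lvlInfo)
    std::cout << parseVerbosity({ "--debug" }) << "\n"; // prints 6 (lvlDebug)
}
```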
----------
# Appendix 1: Commands naming exceptions
`nix init` and `nix repl` are well established.


@ -0,0 +1 @@
# Contributing


@ -67,7 +67,7 @@ std::pair<Value *, Pos> findAlongAttrPath(EvalState & state, const string & attr
if (apType == apAttr) { if (apType == apAttr) {
if (v->type != tAttrs) if (v->type() != nAttrs)
throw TypeError( throw TypeError(
"the expression selected by the selection path '%1%' should be a set but is %2%", "the expression selected by the selection path '%1%' should be a set but is %2%",
attrPath, attrPath,


@ -24,9 +24,7 @@ void EvalState::mkAttrs(Value & v, size_t capacity)
v = vEmptySet; v = vEmptySet;
return; return;
} }
clearValue(v); v.mkAttrs(allocBindings(capacity));
v.type = tAttrs;
v.attrs = allocBindings(capacity);
nrAttrsets++; nrAttrsets++;
nrAttrsInAttrsets += capacity; nrAttrsInAttrsets += capacity;
} }


@ -390,14 +390,14 @@ Value & AttrCursor::forceValue()
} }
if (root->db && (!cachedValue || std::get_if<placeholder_t>(&cachedValue->second))) { if (root->db && (!cachedValue || std::get_if<placeholder_t>(&cachedValue->second))) {
if (v.type == tString) if (v.type() == nString)
cachedValue = {root->db->setString(getKey(), v.string.s, v.string.context), cachedValue = {root->db->setString(getKey(), v.string.s, v.string.context),
string_t{v.string.s, {}}}; string_t{v.string.s, {}}};
else if (v.type == tPath) else if (v.type == nPath)
cachedValue = {root->db->setString(getKey(), v.path), string_t{v.path, {}}}; cachedValue = {root->db->setString(getKey(), v.path), string_t{v.path, {}}};
else if (v.type == tBool) else if (v.type == nBool)
cachedValue = {root->db->setBool(getKey(), v.boolean), v.boolean}; cachedValue = {root->db->setBool(getKey(), v.boolean), v.boolean};
else if (v.type == tAttrs) else if (v.type() == nAttrs)
; // FIXME: do something? ; // FIXME: do something?
else else
cachedValue = {root->db->setMisc(getKey()), misc_t()}; cachedValue = {root->db->setMisc(getKey()), misc_t()};
@ -442,7 +442,7 @@ std::shared_ptr<AttrCursor> AttrCursor::maybeGetAttr(Symbol name, bool forceErro
auto & v = forceValue(); auto & v = forceValue();
if (v.type != tAttrs) if (v.type() != nAttrs)
return nullptr; return nullptr;
//throw TypeError("'%s' is not an attribute set", getAttrPathStr()); //throw TypeError("'%s' is not an attribute set", getAttrPathStr());
@ -512,10 +512,10 @@ std::string AttrCursor::getString()
auto & v = forceValue(); auto & v = forceValue();
if (v.type != tString && v.type != tPath) if (v.type() != nString && v.type() != nPath)
throw TypeError("'%s' is not a string but %s", getAttrPathStr(), showType(v.type)); throw TypeError("'%s' is not a string but %s", getAttrPathStr(), showType(v.type()));
return v.type == tString ? v.string.s : v.path; return v.type() == nString ? v.string.s : v.path;
} }
string_t AttrCursor::getStringWithContext() string_t AttrCursor::getStringWithContext()
@ -543,12 +543,12 @@ string_t AttrCursor::getStringWithContext()
auto & v = forceValue(); auto & v = forceValue();
if (v.type == tString) if (v.type() == nString)
return {v.string.s, v.getContext()}; return {v.string.s, v.getContext()};
else if (v.type == tPath) else if (v.type() == nPath)
return {v.path, {}}; return {v.path, {}};
else else
throw TypeError("'%s' is not a string but %s", getAttrPathStr(), showType(v.type)); throw TypeError("'%s' is not a string but %s", getAttrPathStr(), showType(v.type()));
} }
bool AttrCursor::getBool() bool AttrCursor::getBool()
@ -567,7 +567,7 @@ bool AttrCursor::getBool()
auto & v = forceValue(); auto & v = forceValue();
if (v.type != tBool) if (v.type() != nBool)
throw TypeError("'%s' is not a Boolean", getAttrPathStr()); throw TypeError("'%s' is not a Boolean", getAttrPathStr());
return v.boolean; return v.boolean;
@ -589,7 +589,7 @@ std::vector<Symbol> AttrCursor::getAttrs()
auto & v = forceValue(); auto & v = forceValue();
if (v.type != tAttrs) if (v.type() != nAttrs)
throw TypeError("'%s' is not an attribute set", getAttrPathStr()); throw TypeError("'%s' is not an attribute set", getAttrPathStr());
std::vector<Symbol> attrs; std::vector<Symbol> attrs;


@ -32,23 +32,21 @@ LocalNoInlineNoReturn(void throwTypeError(const Pos & pos, const char * s, const
void EvalState::forceValue(Value & v, const Pos & pos) void EvalState::forceValue(Value & v, const Pos & pos)
{ {
if (v.type == tThunk) { if (v.isThunk()) {
Env * env = v.thunk.env; Env * env = v.thunk.env;
Expr * expr = v.thunk.expr; Expr * expr = v.thunk.expr;
try { try {
v.type = tBlackhole; v.mkBlackhole();
//checkInterrupt(); //checkInterrupt();
expr->eval(*this, *env, v); expr->eval(*this, *env, v);
} catch (...) { } catch (...) {
v.type = tThunk; v.mkThunk(env, expr);
v.thunk.env = env;
v.thunk.expr = expr;
throw; throw;
} }
} }
else if (v.type == tApp) else if (v.isApp())
callFunction(*v.app.left, *v.app.right, v, noPos); callFunction(*v.app.left, *v.app.right, v, noPos);
else if (v.type == tBlackhole) else if (v.isBlackhole())
throwEvalError(pos, "infinite recursion encountered"); throwEvalError(pos, "infinite recursion encountered");
} }
@ -56,7 +54,7 @@ void EvalState::forceValue(Value & v, const Pos & pos)
inline void EvalState::forceAttrs(Value & v) inline void EvalState::forceAttrs(Value & v)
{ {
forceValue(v); forceValue(v);
if (v.type != tAttrs) if (v.type() != nAttrs)
throwTypeError("value is %1% while a set was expected", v); throwTypeError("value is %1% while a set was expected", v);
} }
@ -64,7 +62,7 @@ inline void EvalState::forceAttrs(Value & v)
inline void EvalState::forceAttrs(Value & v, const Pos & pos) inline void EvalState::forceAttrs(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type != tAttrs) if (v.type() != nAttrs)
throwTypeError(pos, "value is %1% while a set was expected", v); throwTypeError(pos, "value is %1% while a set was expected", v);
} }


@ -68,7 +68,7 @@ RootValue allocRootValue(Value * v)
} }
static void printValue(std::ostream & str, std::set<const Value *> & active, const Value & v) void printValue(std::ostream & str, std::set<const Value *> & active, const Value & v)
{ {
checkInterrupt(); checkInterrupt();
@ -77,7 +77,7 @@ static void printValue(std::ostream & str, std::set<const Value *> & active, con
return; return;
} }
switch (v.type) { switch (v.internalType) {
case tInt: case tInt:
str << v.integer; str << v.integer;
break; break;
@ -158,32 +158,27 @@ std::ostream & operator << (std::ostream & str, const Value & v)
const Value *getPrimOp(const Value &v) { const Value *getPrimOp(const Value &v) {
const Value * primOp = &v; const Value * primOp = &v;
while (primOp->type == tPrimOpApp) { while (primOp->isPrimOpApp()) {
primOp = primOp->primOpApp.left; primOp = primOp->primOpApp.left;
} }
assert(primOp->type == tPrimOp); assert(primOp->isPrimOp());
return primOp; return primOp;
} }
string showType(ValueType type) string showType(ValueType type)
{ {
switch (type) { switch (type) {
case tInt: return "an integer"; case nInt: return "an integer";
case tBool: return "a Boolean"; case nBool: return "a Boolean";
case tString: return "a string"; case nString: return "a string";
case tPath: return "a path"; case nPath: return "a path";
case tNull: return "null"; case nNull: return "null";
case tAttrs: return "a set"; case nAttrs: return "a set";
case tList1: case tList2: case tListN: return "a list"; case nList: return "a list";
case tThunk: return "a thunk"; case nFunction: return "a function";
case tApp: return "a function application"; case nExternal: return "an external value";
case tLambda: return "a function"; case nFloat: return "a float";
case tBlackhole: return "a black hole"; case nThunk: return "a thunk";
case tPrimOp: return "a built-in function";
case tPrimOpApp: return "a partially applied built-in function";
case tExternal: return "an external value";
case tFloat: return "a float";
} }
abort(); abort();
} }
@ -191,15 +186,18 @@ string showType(ValueType type)
string showType(const Value & v) string showType(const Value & v)
{ {
switch (v.type) { switch (v.internalType) {
case tString: return v.string.context ? "a string with context" : "a string"; case tString: return v.string.context ? "a string with context" : "a string";
case tPrimOp: case tPrimOp:
return fmt("the built-in function '%s'", string(v.primOp->name)); return fmt("the built-in function '%s'", string(v.primOp->name));
case tPrimOpApp: case tPrimOpApp:
return fmt("the partially applied built-in function '%s'", string(getPrimOp(v)->primOp->name)); return fmt("the partially applied built-in function '%s'", string(getPrimOp(v)->primOp->name));
case tExternal: return v.external->showType(); case tExternal: return v.external->showType();
case tThunk: return "a thunk";
case tApp: return "a function application";
case tBlackhole: return "a black hole";
default: default:
return showType(v.type); return showType(v.type());
} }
} }
@ -207,9 +205,9 @@ string showType(const Value & v)
bool Value::isTrivial() const bool Value::isTrivial() const
{ {
return return
type != tApp internalType != tApp
&& type != tPrimOpApp && internalType != tPrimOpApp
&& (type != tThunk && (internalType != tThunk
|| (dynamic_cast<ExprAttrs *>(thunk.expr) || (dynamic_cast<ExprAttrs *>(thunk.expr)
&& ((ExprAttrs *) thunk.expr)->dynamicAttrs.empty()) && ((ExprAttrs *) thunk.expr)->dynamicAttrs.empty())
|| dynamic_cast<ExprLambda *>(thunk.expr) || dynamic_cast<ExprLambda *>(thunk.expr)
@ -404,11 +402,6 @@ EvalState::EvalState(const Strings & _searchPath, ref<Store> store)
for (auto & i : evalSettings.nixPath.get()) addToSearchPath(i); for (auto & i : evalSettings.nixPath.get()) addToSearchPath(i);
} }
try {
addToSearchPath("nix=" + canonPath(settings.nixDataDir + "/nix/corepkgs", true));
} catch (Error &) {
}
if (evalSettings.restrictEval || evalSettings.pureEval) { if (evalSettings.restrictEval || evalSettings.pureEval) {
allowedPaths = PathSet(); allowedPaths = PathSet();
@ -432,9 +425,7 @@ EvalState::EvalState(const Strings & _searchPath, ref<Store> store)
} }
} }
clearValue(vEmptySet); vEmptySet.mkAttrs(allocBindings(0));
vEmptySet.type = tAttrs;
vEmptySet.attrs = allocBindings(0);
createBaseEnv(); createBaseEnv();
} }
@ -461,6 +452,8 @@ Path EvalState::checkSourcePath(const Path & path_)
*/ */
Path abspath = canonPath(path_); Path abspath = canonPath(path_);
if (hasPrefix(abspath, corepkgsPrefix)) return abspath;
for (auto & i : *allowedPaths) { for (auto & i : *allowedPaths) {
if (isDirOrInDir(abspath, i)) { if (isDirOrInDir(abspath, i)) {
found = true; found = true;
@ -550,16 +543,14 @@ Value * EvalState::addPrimOp(const string & name,
the primop to a dummy value. */ the primop to a dummy value. */
if (arity == 0) { if (arity == 0) {
auto vPrimOp = allocValue(); auto vPrimOp = allocValue();
vPrimOp->type = tPrimOp; vPrimOp->mkPrimOp(new PrimOp { .fun = primOp, .arity = 1, .name = sym });
vPrimOp->primOp = new PrimOp { .fun = primOp, .arity = 1, .name = sym };
Value v; Value v;
mkApp(v, *vPrimOp, *vPrimOp); mkApp(v, *vPrimOp, *vPrimOp);
return addConstant(name, v); return addConstant(name, v);
} }
Value * v = allocValue(); Value * v = allocValue();
v->type = tPrimOp; v->mkPrimOp(new PrimOp { .fun = primOp, .arity = arity, .name = sym });
v->primOp = new PrimOp { .fun = primOp, .arity = arity, .name = sym };
staticBaseEnv.vars[symbols.create(name)] = baseEnvDispl; staticBaseEnv.vars[symbols.create(name)] = baseEnvDispl;
baseEnv.values[baseEnvDispl++] = v; baseEnv.values[baseEnvDispl++] = v;
baseEnv.values[0]->attrs->push_back(Attr(sym, v)); baseEnv.values[0]->attrs->push_back(Attr(sym, v));
@ -574,8 +565,7 @@ Value * EvalState::addPrimOp(PrimOp && primOp)
if (primOp.arity == 0) { if (primOp.arity == 0) {
primOp.arity = 1; primOp.arity = 1;
auto vPrimOp = allocValue(); auto vPrimOp = allocValue();
vPrimOp->type = tPrimOp; vPrimOp->mkPrimOp(new PrimOp(std::move(primOp)));
vPrimOp->primOp = new PrimOp(std::move(primOp));
Value v; Value v;
mkApp(v, *vPrimOp, *vPrimOp); mkApp(v, *vPrimOp, *vPrimOp);
return addConstant(primOp.name, v); return addConstant(primOp.name, v);
@ -586,8 +576,7 @@ Value * EvalState::addPrimOp(PrimOp && primOp)
primOp.name = symbols.create(std::string(primOp.name, 2)); primOp.name = symbols.create(std::string(primOp.name, 2));
Value * v = allocValue(); Value * v = allocValue();
v->type = tPrimOp; v->mkPrimOp(new PrimOp(std::move(primOp)));
v->primOp = new PrimOp(std::move(primOp));
staticBaseEnv.vars[envName] = baseEnvDispl; staticBaseEnv.vars[envName] = baseEnvDispl;
baseEnv.values[baseEnvDispl++] = v; baseEnv.values[baseEnvDispl++] = v;
baseEnv.values[0]->attrs->push_back(Attr(primOp.name, v)); baseEnv.values[0]->attrs->push_back(Attr(primOp.name, v));
@ -603,9 +592,9 @@ Value & EvalState::getBuiltin(const string & name)
std::optional<EvalState::Doc> EvalState::getDoc(Value & v) std::optional<EvalState::Doc> EvalState::getDoc(Value & v)
{ {
if (v.type == tPrimOp || v.type == tPrimOpApp) { if (v.isPrimOp() || v.isPrimOpApp()) {
auto v2 = &v; auto v2 = &v;
while (v2->type == tPrimOpApp) while (v2->isPrimOpApp())
v2 = v2->primOpApp.left; v2 = v2->primOpApp.left;
if (v2->primOp->doc) if (v2->primOp->doc)
return Doc { return Doc {
@ -710,15 +699,13 @@ LocalNoInline(void addErrorTrace(Error & e, const Pos & pos, const char * s, con
void mkString(Value & v, const char * s) void mkString(Value & v, const char * s)
{ {
mkStringNoCopy(v, dupString(s)); v.mkString(dupString(s));
} }
Value & mkString(Value & v, std::string_view s, const PathSet & context) Value & mkString(Value & v, std::string_view s, const PathSet & context)
{ {
v.type = tString; v.mkString(dupStringWithLen(s.data(), s.size()));
v.string.s = dupStringWithLen(s.data(), s.size());
v.string.context = 0;
if (!context.empty()) { if (!context.empty()) {
size_t n = 0; size_t n = 0;
v.string.context = (const char * *) v.string.context = (const char * *)
@ -733,7 +720,7 @@ Value & mkString(Value & v, std::string_view s, const PathSet & context)
void mkPath(Value & v, const char * s) void mkPath(Value & v, const char * s)
{ {
mkPathNoCopy(v, dupString(s)); v.mkPath(dupString(s));
} }
@ -794,16 +781,9 @@ Env & EvalState::allocEnv(size_t size)
void EvalState::mkList(Value & v, size_t size) void EvalState::mkList(Value & v, size_t size)
{ {
clearValue(v); v.mkList(size);
if (size == 1) if (size > 2)
v.type = tList1; v.bigList.elems = (Value * *) allocBytes(size * sizeof(Value *));
else if (size == 2)
v.type = tList2;
else {
v.type = tListN;
v.bigList.size = size;
v.bigList.elems = size ? (Value * *) allocBytes(size * sizeof(Value *)) : 0;
}
nrListElems += size; nrListElems += size;
} }
@ -812,9 +792,7 @@ unsigned long nrThunks = 0;
static inline void mkThunk(Value & v, Env & env, Expr * expr) static inline void mkThunk(Value & v, Env & env, Expr * expr)
{ {
v.type = tThunk; v.mkThunk(&env, expr);
v.thunk.env = &env;
v.thunk.expr = expr;
nrThunks++; nrThunks++;
} }
@ -949,7 +927,7 @@ inline bool EvalState::evalBool(Env & env, Expr * e)
{ {
Value v; Value v;
e->eval(*this, env, v); e->eval(*this, env, v);
if (v.type != tBool) if (v.type() != nBool)
throwTypeError("value is %1% while a Boolean was expected", v); throwTypeError("value is %1% while a Boolean was expected", v);
return v.boolean; return v.boolean;
} }
@ -959,7 +937,7 @@ inline bool EvalState::evalBool(Env & env, Expr * e, const Pos & pos)
{ {
Value v; Value v;
e->eval(*this, env, v); e->eval(*this, env, v);
if (v.type != tBool) if (v.type() != nBool)
throwTypeError(pos, "value is %1% while a Boolean was expected", v); throwTypeError(pos, "value is %1% while a Boolean was expected", v);
return v.boolean; return v.boolean;
} }
@ -968,7 +946,7 @@ inline bool EvalState::evalBool(Env & env, Expr * e, const Pos & pos)
inline void EvalState::evalAttrs(Env & env, Expr * e, Value & v) inline void EvalState::evalAttrs(Env & env, Expr * e, Value & v)
{ {
e->eval(*this, env, v); e->eval(*this, env, v);
if (v.type != tAttrs) if (v.type() != nAttrs)
throwTypeError("value is %1% while a set was expected", v); throwTypeError("value is %1% while a set was expected", v);
} }
@ -1068,7 +1046,7 @@ void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
Value nameVal; Value nameVal;
i.nameExpr->eval(state, *dynamicEnv, nameVal); i.nameExpr->eval(state, *dynamicEnv, nameVal);
state.forceValue(nameVal, i.pos); state.forceValue(nameVal, i.pos);
if (nameVal.type == tNull) if (nameVal.type() == nNull)
continue; continue;
state.forceStringNoCtx(nameVal); state.forceStringNoCtx(nameVal);
Symbol nameSym = state.symbols.create(nameVal.string.s); Symbol nameSym = state.symbols.create(nameVal.string.s);
@ -1153,7 +1131,7 @@ void ExprSelect::eval(EvalState & state, Env & env, Value & v)
Symbol name = getName(i, state, env); Symbol name = getName(i, state, env);
if (def) { if (def) {
state.forceValue(*vAttrs, pos); state.forceValue(*vAttrs, pos);
if (vAttrs->type != tAttrs || if (vAttrs->type() != nAttrs ||
(j = vAttrs->attrs->find(name)) == vAttrs->attrs->end()) (j = vAttrs->attrs->find(name)) == vAttrs->attrs->end())
{ {
def->eval(state, env, v); def->eval(state, env, v);
@ -1193,7 +1171,7 @@ void ExprOpHasAttr::eval(EvalState & state, Env & env, Value & v)
state.forceValue(*vAttrs); state.forceValue(*vAttrs);
Bindings::iterator j; Bindings::iterator j;
Symbol name = getName(i, state, env); Symbol name = getName(i, state, env);
if (vAttrs->type != tAttrs || if (vAttrs->type() != nAttrs ||
(j = vAttrs->attrs->find(name)) == vAttrs->attrs->end()) (j = vAttrs->attrs->find(name)) == vAttrs->attrs->end())
{ {
mkBool(v, false); mkBool(v, false);
@ -1209,9 +1187,7 @@ void ExprOpHasAttr::eval(EvalState & state, Env & env, Value & v)
void ExprLambda::eval(EvalState & state, Env & env, Value & v) void ExprLambda::eval(EvalState & state, Env & env, Value & v)
{ {
v.type = tLambda; v.mkLambda(&env, this);
v.lambda.env = &env;
v.lambda.fun = this;
} }
@ -1229,11 +1205,11 @@ void EvalState::callPrimOp(Value & fun, Value & arg, Value & v, const Pos & pos)
/* Figure out the number of arguments still needed. */ /* Figure out the number of arguments still needed. */
size_t argsDone = 0; size_t argsDone = 0;
Value * primOp = &fun; Value * primOp = &fun;
while (primOp->type == tPrimOpApp) { while (primOp->isPrimOpApp()) {
argsDone++; argsDone++;
primOp = primOp->primOpApp.left; primOp = primOp->primOpApp.left;
} }
assert(primOp->type == tPrimOp); assert(primOp->isPrimOp());
auto arity = primOp->primOp->arity; auto arity = primOp->primOp->arity;
auto argsLeft = arity - argsDone; auto argsLeft = arity - argsDone;
@ -1244,7 +1220,7 @@ void EvalState::callPrimOp(Value & fun, Value & arg, Value & v, const Pos & pos)
Value * vArgs[arity]; Value * vArgs[arity];
auto n = arity - 1; auto n = arity - 1;
vArgs[n--] = &arg; vArgs[n--] = &arg;
for (Value * arg = &fun; arg->type == tPrimOpApp; arg = arg->primOpApp.left) for (Value * arg = &fun; arg->isPrimOpApp(); arg = arg->primOpApp.left)
vArgs[n--] = arg->primOpApp.right; vArgs[n--] = arg->primOpApp.right;
/* And call the primop. */ /* And call the primop. */
@ -1254,9 +1230,7 @@ void EvalState::callPrimOp(Value & fun, Value & arg, Value & v, const Pos & pos)
} else { } else {
Value * fun2 = allocValue(); Value * fun2 = allocValue();
*fun2 = fun; *fun2 = fun;
v.type = tPrimOpApp; v.mkPrimOpApp(fun2, &arg);
v.primOpApp.left = fun2;
v.primOpApp.right = &arg;
} }
} }
@ -1266,12 +1240,12 @@ void EvalState::callFunction(Value & fun, Value & arg, Value & v, const Pos & po
forceValue(fun, pos); forceValue(fun, pos);
if (fun.type == tPrimOp || fun.type == tPrimOpApp) { if (fun.isPrimOp() || fun.isPrimOpApp()) {
callPrimOp(fun, arg, v, pos); callPrimOp(fun, arg, v, pos);
return; return;
} }
if (fun.type == tAttrs) { if (fun.type() == nAttrs) {
auto found = fun.attrs->find(sFunctor); auto found = fun.attrs->find(sFunctor);
if (found != fun.attrs->end()) { if (found != fun.attrs->end()) {
/* fun may be allocated on the stack of the calling function, /* fun may be allocated on the stack of the calling function,
@ -1287,7 +1261,7 @@ void EvalState::callFunction(Value & fun, Value & arg, Value & v, const Pos & po
} }
} }
if (fun.type != tLambda) if (!fun.isLambda())
throwTypeError(pos, "attempt to call something which is not a function but %1%", fun); throwTypeError(pos, "attempt to call something which is not a function but %1%", fun);
ExprLambda & lambda(*fun.lambda.fun); ExprLambda & lambda(*fun.lambda.fun);
@ -1370,7 +1344,7 @@ void EvalState::autoCallFunction(Bindings & args, Value & fun, Value & res)
{ {
forceValue(fun); forceValue(fun);
if (fun.type == tAttrs) { if (fun.type() == nAttrs) {
auto found = fun.attrs->find(sFunctor); auto found = fun.attrs->find(sFunctor);
if (found != fun.attrs->end()) { if (found != fun.attrs->end()) {
Value * v = allocValue(); Value * v = allocValue();
@ -1380,7 +1354,7 @@ void EvalState::autoCallFunction(Bindings & args, Value & fun, Value & res)
} }
} }
if (fun.type != tLambda || !fun.lambda.fun->matchAttrs) { if (!fun.isLambda() || !fun.lambda.fun->matchAttrs) {
res = fun; res = fun;
return; return;
} }
@ -1564,7 +1538,7 @@ void ExprConcatStrings::eval(EvalState & state, Env & env, Value & v)
NixFloat nf = 0; NixFloat nf = 0;
bool first = !forceString; bool first = !forceString;
ValueType firstType = tString; ValueType firstType = nString;
for (auto & i : *es) { for (auto & i : *es) {
Value vTmp; Value vTmp;
@ -1575,36 +1549,36 @@ void ExprConcatStrings::eval(EvalState & state, Env & env, Value & v)
since paths are copied when they are used in a derivation), since paths are copied when they are used in a derivation),
and none of the strings are allowed to have contexts. */ and none of the strings are allowed to have contexts. */
if (first) { if (first) {
firstType = vTmp.type; firstType = vTmp.type();
first = false; first = false;
} }
if (firstType == tInt) { if (firstType == nInt) {
if (vTmp.type == tInt) { if (vTmp.type() == nInt) {
n += vTmp.integer; n += vTmp.integer;
} else if (vTmp.type == tFloat) { } else if (vTmp.type() == nFloat) {
// Upgrade the type from int to float; // Upgrade the type from int to float;
firstType = tFloat; firstType = nFloat;
nf = n; nf = n;
nf += vTmp.fpoint; nf += vTmp.fpoint;
} else } else
throwEvalError(pos, "cannot add %1% to an integer", showType(vTmp)); throwEvalError(pos, "cannot add %1% to an integer", showType(vTmp));
} else if (firstType == tFloat) { } else if (firstType == nFloat) {
if (vTmp.type == tInt) { if (vTmp.type() == nInt) {
nf += vTmp.integer; nf += vTmp.integer;
} else if (vTmp.type == tFloat) { } else if (vTmp.type() == nFloat) {
nf += vTmp.fpoint; nf += vTmp.fpoint;
} else } else
throwEvalError(pos, "cannot add %1% to a float", showType(vTmp)); throwEvalError(pos, "cannot add %1% to a float", showType(vTmp));
} else } else
s << state.coerceToString(pos, vTmp, context, false, firstType == tString); s << state.coerceToString(pos, vTmp, context, false, firstType == nString);
} }
if (firstType == tInt) if (firstType == nInt)
mkInt(v, n); mkInt(v, n);
else if (firstType == tFloat) else if (firstType == nFloat)
mkFloat(v, nf); mkFloat(v, nf);
else if (firstType == tPath) { else if (firstType == nPath) {
if (!context.empty()) if (!context.empty())
throwEvalError(pos, "a string that refers to a store path cannot be appended to a path"); throwEvalError(pos, "a string that refers to a store path cannot be appended to a path");
auto path = canonPath(s.str()); auto path = canonPath(s.str());
@ -1631,7 +1605,7 @@ void EvalState::forceValueDeep(Value & v)
forceValue(v); forceValue(v);
if (v.type == tAttrs) { if (v.type() == nAttrs) {
for (auto & i : *v.attrs) for (auto & i : *v.attrs)
try { try {
recurse(*i.value); recurse(*i.value);
@ -1654,7 +1628,7 @@ void EvalState::forceValueDeep(Value & v)
NixInt EvalState::forceInt(Value & v, const Pos & pos) NixInt EvalState::forceInt(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type != tInt) if (v.type() != nInt)
throwTypeError(pos, "value is %1% while an integer was expected", v); throwTypeError(pos, "value is %1% while an integer was expected", v);
return v.integer; return v.integer;
} }
@ -1663,9 +1637,9 @@ NixInt EvalState::forceInt(Value & v, const Pos & pos)
NixFloat EvalState::forceFloat(Value & v, const Pos & pos) NixFloat EvalState::forceFloat(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type == tInt) if (v.type() == nInt)
return v.integer; return v.integer;
else if (v.type != tFloat) else if (v.type() != nFloat)
throwTypeError(pos, "value is %1% while a float was expected", v); throwTypeError(pos, "value is %1% while a float was expected", v);
return v.fpoint; return v.fpoint;
} }
@ -1674,7 +1648,7 @@ NixFloat EvalState::forceFloat(Value & v, const Pos & pos)
bool EvalState::forceBool(Value & v, const Pos & pos) bool EvalState::forceBool(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type != tBool) if (v.type() != nBool)
throwTypeError(pos, "value is %1% while a Boolean was expected", v); throwTypeError(pos, "value is %1% while a Boolean was expected", v);
return v.boolean; return v.boolean;
} }
@ -1682,14 +1656,14 @@ bool EvalState::forceBool(Value & v, const Pos & pos)
bool EvalState::isFunctor(Value & fun) bool EvalState::isFunctor(Value & fun)
{ {
return fun.type == tAttrs && fun.attrs->find(sFunctor) != fun.attrs->end(); return fun.type() == nAttrs && fun.attrs->find(sFunctor) != fun.attrs->end();
} }
void EvalState::forceFunction(Value & v, const Pos & pos) void EvalState::forceFunction(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type != tLambda && v.type != tPrimOp && v.type != tPrimOpApp && !isFunctor(v)) if (v.type() != nFunction && !isFunctor(v))
throwTypeError(pos, "value is %1% while a function was expected", v); throwTypeError(pos, "value is %1% while a function was expected", v);
} }
@ -1697,7 +1671,7 @@ void EvalState::forceFunction(Value & v, const Pos & pos)
string EvalState::forceString(Value & v, const Pos & pos) string EvalState::forceString(Value & v, const Pos & pos)
{ {
forceValue(v, pos); forceValue(v, pos);
if (v.type != tString) { if (v.type() != nString) {
if (pos) if (pos)
throwTypeError(pos, "value is %1% while a string was expected", v); throwTypeError(pos, "value is %1% while a string was expected", v);
else else
@ -1730,7 +1704,7 @@ void copyContext(const Value & v, PathSet & context)
std::vector<std::pair<Path, std::string>> Value::getContext() std::vector<std::pair<Path, std::string>> Value::getContext()
{ {
std::vector<std::pair<Path, std::string>> res; std::vector<std::pair<Path, std::string>> res;
assert(type == tString); assert(internalType == tString);
if (string.context) if (string.context)
for (const char * * p = string.context; *p; ++p) for (const char * * p = string.context; *p; ++p)
res.push_back(decodeContext(*p)); res.push_back(decodeContext(*p));
@ -1763,11 +1737,11 @@ string EvalState::forceStringNoCtx(Value & v, const Pos & pos)
bool EvalState::isDerivation(Value & v) bool EvalState::isDerivation(Value & v)
{ {
if (v.type != tAttrs) return false; if (v.type() != nAttrs) return false;
Bindings::iterator i = v.attrs->find(sType); Bindings::iterator i = v.attrs->find(sType);
if (i == v.attrs->end()) return false; if (i == v.attrs->end()) return false;
forceValue(*i->value); forceValue(*i->value);
if (i->value->type != tString) return false; if (i->value->type() != nString) return false;
return strcmp(i->value->string.s, "derivation") == 0; return strcmp(i->value->string.s, "derivation") == 0;
} }
@ -1792,17 +1766,17 @@ string EvalState::coerceToString(const Pos & pos, Value & v, PathSet & context,
string s; string s;
if (v.type == tString) { if (v.type() == nString) {
copyContext(v, context); copyContext(v, context);
return v.string.s; return v.string.s;
} }
if (v.type == tPath) { if (v.type() == nPath) {
Path path(canonPath(v.path)); Path path(canonPath(v.path));
return copyToStore ? copyPathToStore(context, path) : path; return copyToStore ? copyPathToStore(context, path) : path;
} }
if (v.type == tAttrs) { if (v.type() == nAttrs) {
auto maybeString = tryAttrsToString(pos, v, context, coerceMore, copyToStore); auto maybeString = tryAttrsToString(pos, v, context, coerceMore, copyToStore);
if (maybeString) { if (maybeString) {
return *maybeString; return *maybeString;
@ -1812,18 +1786,18 @@ string EvalState::coerceToString(const Pos & pos, Value & v, PathSet & context,
return coerceToString(pos, *i->value, context, coerceMore, copyToStore); return coerceToString(pos, *i->value, context, coerceMore, copyToStore);
} }
if (v.type == tExternal) if (v.type() == nExternal)
return v.external->coerceToString(pos, context, coerceMore, copyToStore); return v.external->coerceToString(pos, context, coerceMore, copyToStore);
if (coerceMore) { if (coerceMore) {
/* Note that `false' is represented as an empty string for /* Note that `false' is represented as an empty string for
shell scripting convenience, just like `null'. */ shell scripting convenience, just like `null'. */
if (v.type == tBool && v.boolean) return "1"; if (v.type() == nBool && v.boolean) return "1";
if (v.type == tBool && !v.boolean) return ""; if (v.type() == nBool && !v.boolean) return "";
if (v.type == tInt) return std::to_string(v.integer); if (v.type() == nInt) return std::to_string(v.integer);
if (v.type == tFloat) return std::to_string(v.fpoint); if (v.type() == nFloat) return std::to_string(v.fpoint);
if (v.type == tNull) return ""; if (v.type() == nNull) return "";
if (v.isList()) { if (v.isList()) {
string result; string result;
@ -1886,40 +1860,38 @@ bool EvalState::eqValues(Value & v1, Value & v2)
if (&v1 == &v2) return true; if (&v1 == &v2) return true;
// Special case type-compatibility between float and int // Special case type-compatibility between float and int
if (v1.type == tInt && v2.type == tFloat) if (v1.type() == nInt && v2.type() == nFloat)
return v1.integer == v2.fpoint; return v1.integer == v2.fpoint;
if (v1.type == tFloat && v2.type == tInt) if (v1.type() == nFloat && v2.type() == nInt)
return v1.fpoint == v2.integer; return v1.fpoint == v2.integer;
// All other types are not compatible with each other. // All other types are not compatible with each other.
if (v1.type != v2.type) return false; if (v1.type() != v2.type()) return false;
switch (v1.type) { switch (v1.type()) {
case tInt: case nInt:
return v1.integer == v2.integer; return v1.integer == v2.integer;
case tBool: case nBool:
return v1.boolean == v2.boolean; return v1.boolean == v2.boolean;
case tString: case nString:
return strcmp(v1.string.s, v2.string.s) == 0; return strcmp(v1.string.s, v2.string.s) == 0;
case tPath: case nPath:
return strcmp(v1.path, v2.path) == 0; return strcmp(v1.path, v2.path) == 0;
case tNull: case nNull:
return true; return true;
case tList1: case nList:
case tList2:
case tListN:
if (v1.listSize() != v2.listSize()) return false; if (v1.listSize() != v2.listSize()) return false;
for (size_t n = 0; n < v1.listSize(); ++n) for (size_t n = 0; n < v1.listSize(); ++n)
if (!eqValues(*v1.listElems()[n], *v2.listElems()[n])) return false; if (!eqValues(*v1.listElems()[n], *v2.listElems()[n])) return false;
return true; return true;
case tAttrs: { case nAttrs: {
/* If both sets denote a derivation (type = "derivation"), /* If both sets denote a derivation (type = "derivation"),
then compare their outPaths. */ then compare their outPaths. */
if (isDerivation(v1) && isDerivation(v2)) { if (isDerivation(v1) && isDerivation(v2)) {
@ -1941,15 +1913,13 @@ bool EvalState::eqValues(Value & v1, Value & v2)
} }
/* Functions are incomparable. */ /* Functions are incomparable. */
case tLambda: case nFunction:
case tPrimOp:
case tPrimOpApp:
return false; return false;
case tExternal: case nExternal:
return *v1.external == *v2.external; return *v1.external == *v2.external;
case tFloat: case nFloat:
return v1.fpoint == v2.fpoint; return v1.fpoint == v2.fpoint;
default: default:


@ -432,4 +432,6 @@ struct EvalSettings : Config
extern EvalSettings evalSettings; extern EvalSettings evalSettings;
static const std::string corepkgsPrefix{"/__corepkgs__/"};
} }


@ -73,7 +73,7 @@ static std::tuple<fetchers::Tree, FlakeRef, FlakeRef> fetchOrSubstituteTree(
static void forceTrivialValue(EvalState & state, Value & value, const Pos & pos) static void forceTrivialValue(EvalState & state, Value & value, const Pos & pos)
{ {
if (value.type == tThunk && value.isTrivial()) if (value.isThunk() && value.isTrivial())
state.forceValue(value, pos); state.forceValue(value, pos);
} }
@ -82,9 +82,9 @@ static void expectType(EvalState & state, ValueType type,
Value & value, const Pos & pos) Value & value, const Pos & pos)
{ {
forceTrivialValue(state, value, pos); forceTrivialValue(state, value, pos);
if (value.type != type) if (value.type() != type)
throw Error("expected %s but got %s at %s", throw Error("expected %s but got %s at %s",
showType(type), showType(value.type), pos); showType(type), showType(value.type()), pos);
} }
static std::map<FlakeId, FlakeInput> parseFlakeInputs( static std::map<FlakeId, FlakeInput> parseFlakeInputs(
@ -93,7 +93,7 @@ static std::map<FlakeId, FlakeInput> parseFlakeInputs(
static FlakeInput parseFlakeInput(EvalState & state, static FlakeInput parseFlakeInput(EvalState & state,
const std::string & inputName, Value * value, const Pos & pos) const std::string & inputName, Value * value, const Pos & pos)
{ {
expectType(state, tAttrs, *value, pos); expectType(state, nAttrs, *value, pos);
FlakeInput input; FlakeInput input;
@ -108,19 +108,19 @@ static FlakeInput parseFlakeInput(EvalState & state,
for (nix::Attr attr : *(value->attrs)) { for (nix::Attr attr : *(value->attrs)) {
try { try {
if (attr.name == sUrl) { if (attr.name == sUrl) {
expectType(state, tString, *attr.value, *attr.pos); expectType(state, nString, *attr.value, *attr.pos);
url = attr.value->string.s; url = attr.value->string.s;
attrs.emplace("url", *url); attrs.emplace("url", *url);
} else if (attr.name == sFlake) { } else if (attr.name == sFlake) {
expectType(state, tBool, *attr.value, *attr.pos); expectType(state, nBool, *attr.value, *attr.pos);
input.isFlake = attr.value->boolean; input.isFlake = attr.value->boolean;
} else if (attr.name == sInputs) { } else if (attr.name == sInputs) {
input.overrides = parseFlakeInputs(state, attr.value, *attr.pos); input.overrides = parseFlakeInputs(state, attr.value, *attr.pos);
} else if (attr.name == sFollows) { } else if (attr.name == sFollows) {
expectType(state, tString, *attr.value, *attr.pos); expectType(state, nString, *attr.value, *attr.pos);
input.follows = parseInputPath(attr.value->string.s); input.follows = parseInputPath(attr.value->string.s);
} else { } else {
if (attr.value->type == tString) if (attr.value->type() == nString)
attrs.emplace(attr.name, attr.value->string.s); attrs.emplace(attr.name, attr.value->string.s);
else else
throw TypeError("flake input attribute '%s' is %s while a string is expected", throw TypeError("flake input attribute '%s' is %s while a string is expected",
@ -158,7 +158,7 @@ static std::map<FlakeId, FlakeInput> parseFlakeInputs(
{ {
std::map<FlakeId, FlakeInput> inputs; std::map<FlakeId, FlakeInput> inputs;
expectType(state, tAttrs, *value, pos); expectType(state, nAttrs, *value, pos);
for (nix::Attr & inputAttr : *(*value).attrs) { for (nix::Attr & inputAttr : *(*value).attrs) {
inputs.emplace(inputAttr.name, inputs.emplace(inputAttr.name,
@ -199,10 +199,10 @@ static Flake getFlake(
Value vInfo; Value vInfo;
state.evalFile(flakeFile, vInfo, true); // FIXME: symlink attack state.evalFile(flakeFile, vInfo, true); // FIXME: symlink attack
expectType(state, tAttrs, vInfo, Pos(foFile, state.symbols.create(flakeFile), 0, 0)); expectType(state, nAttrs, vInfo, Pos(foFile, state.symbols.create(flakeFile), 0, 0));
if (auto description = vInfo.attrs->get(state.sDescription)) { if (auto description = vInfo.attrs->get(state.sDescription)) {
expectType(state, tString, *description->value, *description->pos); expectType(state, nString, *description->value, *description->pos);
flake.description = description->value->string.s; flake.description = description->value->string.s;
} }
@ -214,9 +214,9 @@ static Flake getFlake(
auto sOutputs = state.symbols.create("outputs"); auto sOutputs = state.symbols.create("outputs");
if (auto outputs = vInfo.attrs->get(sOutputs)) { if (auto outputs = vInfo.attrs->get(sOutputs)) {
expectType(state, tLambda, *outputs->value, *outputs->pos); expectType(state, nFunction, *outputs->value, *outputs->pos);
if (outputs->value->lambda.fun->matchAttrs) { if (outputs->value->isLambda() && outputs->value->lambda.fun->matchAttrs) {
for (auto & formal : outputs->value->lambda.fun->formals->formals) { for (auto & formal : outputs->value->lambda.fun->formals->formals) {
if (formal.name != state.sSelf) if (formal.name != state.sSelf)
flake.inputs.emplace(formal.name, FlakeInput { flake.inputs.emplace(formal.name, FlakeInput {
@ -231,21 +231,21 @@ static Flake getFlake(
auto sNixConfig = state.symbols.create("nixConfig"); auto sNixConfig = state.symbols.create("nixConfig");
if (auto nixConfig = vInfo.attrs->get(sNixConfig)) { if (auto nixConfig = vInfo.attrs->get(sNixConfig)) {
expectType(state, tAttrs, *nixConfig->value, *nixConfig->pos); expectType(state, nAttrs, *nixConfig->value, *nixConfig->pos);
for (auto & setting : *nixConfig->value->attrs) { for (auto & setting : *nixConfig->value->attrs) {
forceTrivialValue(state, *setting.value, *setting.pos); forceTrivialValue(state, *setting.value, *setting.pos);
if (setting.value->type == tString) if (setting.value->type() == nString)
flake.config.settings.insert({setting.name, state.forceStringNoCtx(*setting.value, *setting.pos)}); flake.config.settings.insert({setting.name, state.forceStringNoCtx(*setting.value, *setting.pos)});
else if (setting.value->type == tInt) else if (setting.value->type() == nInt)
flake.config.settings.insert({setting.name, state.forceInt(*setting.value, *setting.pos)}); flake.config.settings.insert({setting.name, state.forceInt(*setting.value, *setting.pos)});
else if (setting.value->type == tBool) else if (setting.value->type() == nBool)
flake.config.settings.insert({setting.name, state.forceBool(*setting.value, *setting.pos)}); flake.config.settings.insert({setting.name, state.forceBool(*setting.value, *setting.pos)});
else if (setting.value->isList()) { else if (setting.value->type() == nList) {
std::vector<std::string> ss; std::vector<std::string> ss;
for (unsigned int n = 0; n < setting.value->listSize(); ++n) { for (unsigned int n = 0; n < setting.value->listSize(); ++n) {
auto elem = setting.value->listElems()[n]; auto elem = setting.value->listElems()[n];
if (elem->type != tString) if (elem->type() != nString)
throw TypeError("list element in flake configuration setting '%s' is %s while a string is expected", throw TypeError("list element in flake configuration setting '%s' is %s while a string is expected",
setting.name, showType(*setting.value)); setting.name, showType(*setting.value));
ss.push_back(state.forceStringNoCtx(*elem, *setting.pos)); ss.push_back(state.forceStringNoCtx(*elem, *setting.pos));
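The `nixConfig` hunk above accepts a setting whose value is a string, an integer, a Boolean, or a list of strings, and rejects anything else. A rough stand-alone model of that dispatch over a tagged value (the `ConfigValue` variant and the error text are assumptions made for this sketch, not Nix's real types):

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>
#include <variant>
#include <vector>

// A toy stand-in for a forced Nix value: only the cases nixConfig accepts,
// plus std::monostate to represent "anything else".
using ConfigValue = std::variant<std::monostate, std::string, int64_t, bool,
                                 std::vector<std::string>>;

void applyNixConfigSetting(std::map<std::string, ConfigValue> & settings,
                           const std::string & name, const ConfigValue & value)
{
    if (std::holds_alternative<std::monostate>(value))
        throw std::runtime_error(
            "flake configuration setting '" + name + "' has an unsupported type");
    // Strings, integers, Booleans and lists of strings are stored as-is.
    settings.insert_or_assign(name, value);
}
```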

@ -128,7 +128,7 @@ DrvInfo::Outputs DrvInfo::queryOutputs(bool onlyOutputsToInstall)
if (!outTI->isList()) throw errMsg; if (!outTI->isList()) throw errMsg;
Outputs result; Outputs result;
for (auto i = outTI->listElems(); i != outTI->listElems() + outTI->listSize(); ++i) { for (auto i = outTI->listElems(); i != outTI->listElems() + outTI->listSize(); ++i) {
if ((*i)->type != tString) throw errMsg; if ((*i)->type() != nString) throw errMsg;
auto out = outputs.find((*i)->string.s); auto out = outputs.find((*i)->string.s);
if (out == outputs.end()) throw errMsg; if (out == outputs.end()) throw errMsg;
result.insert(*out); result.insert(*out);
@ -172,20 +172,20 @@ StringSet DrvInfo::queryMetaNames()
bool DrvInfo::checkMeta(Value & v) bool DrvInfo::checkMeta(Value & v)
{ {
state->forceValue(v); state->forceValue(v);
if (v.isList()) { if (v.type() == nList) {
for (unsigned int n = 0; n < v.listSize(); ++n) for (unsigned int n = 0; n < v.listSize(); ++n)
if (!checkMeta(*v.listElems()[n])) return false; if (!checkMeta(*v.listElems()[n])) return false;
return true; return true;
} }
else if (v.type == tAttrs) { else if (v.type() == nAttrs) {
Bindings::iterator i = v.attrs->find(state->sOutPath); Bindings::iterator i = v.attrs->find(state->sOutPath);
if (i != v.attrs->end()) return false; if (i != v.attrs->end()) return false;
for (auto & i : *v.attrs) for (auto & i : *v.attrs)
if (!checkMeta(*i.value)) return false; if (!checkMeta(*i.value)) return false;
return true; return true;
} }
else return v.type == tInt || v.type == tBool || v.type == tString || else return v.type() == nInt || v.type() == nBool || v.type() == nString ||
v.type == tFloat; v.type() == nFloat;
} }
@ -201,7 +201,7 @@ Value * DrvInfo::queryMeta(const string & name)
string DrvInfo::queryMetaString(const string & name) string DrvInfo::queryMetaString(const string & name)
{ {
Value * v = queryMeta(name); Value * v = queryMeta(name);
if (!v || v->type != tString) return ""; if (!v || v->type() != nString) return "";
return v->string.s; return v->string.s;
} }
@ -210,8 +210,8 @@ NixInt DrvInfo::queryMetaInt(const string & name, NixInt def)
{ {
Value * v = queryMeta(name); Value * v = queryMeta(name);
if (!v) return def; if (!v) return def;
if (v->type == tInt) return v->integer; if (v->type() == nInt) return v->integer;
if (v->type == tString) { if (v->type() == nString) {
/* Backwards compatibility with before we had support for /* Backwards compatibility with before we had support for
integer meta fields. */ integer meta fields. */
NixInt n; NixInt n;
@ -224,8 +224,8 @@ NixFloat DrvInfo::queryMetaFloat(const string & name, NixFloat def)
{ {
Value * v = queryMeta(name); Value * v = queryMeta(name);
if (!v) return def; if (!v) return def;
if (v->type == tFloat) return v->fpoint; if (v->type() == nFloat) return v->fpoint;
if (v->type == tString) { if (v->type() == nString) {
/* Backwards compatibility with before we had support for /* Backwards compatibility with before we had support for
float meta fields. */ float meta fields. */
NixFloat n; NixFloat n;
@ -239,8 +239,8 @@ bool DrvInfo::queryMetaBool(const string & name, bool def)
{ {
Value * v = queryMeta(name); Value * v = queryMeta(name);
if (!v) return def; if (!v) return def;
if (v->type == tBool) return v->boolean; if (v->type() == nBool) return v->boolean;
if (v->type == tString) { if (v->type() == nString) {
/* Backwards compatibility with before we had support for /* Backwards compatibility with before we had support for
Boolean meta fields. */ Boolean meta fields. */
if (strcmp(v->string.s, "true") == 0) return true; if (strcmp(v->string.s, "true") == 0) return true;
@ -331,7 +331,7 @@ static void getDerivations(EvalState & state, Value & vIn,
/* Process the expression. */ /* Process the expression. */
if (!getDerivation(state, v, pathPrefix, drvs, done, ignoreAssertionFailures)) ; if (!getDerivation(state, v, pathPrefix, drvs, done, ignoreAssertionFailures)) ;
else if (v.type == tAttrs) { else if (v.type() == nAttrs) {
/* !!! undocumented hackery to support combining channels in /* !!! undocumented hackery to support combining channels in
nix-env.cc. */ nix-env.cc. */
@ -353,7 +353,7 @@ static void getDerivations(EvalState & state, Value & vIn,
/* If the value of this attribute is itself a set, /* If the value of this attribute is itself a set,
should we recurse into it? => Only if it has a should we recurse into it? => Only if it has a
`recurseForDerivations = true' attribute. */ `recurseForDerivations = true' attribute. */
if (i->value->type == tAttrs) { if (i->value->type() == nAttrs) {
Bindings::iterator j = i->value->attrs->find(state.sRecurseForDerivations); Bindings::iterator j = i->value->attrs->find(state.sRecurseForDerivations);
if (j != i->value->attrs->end() && state.forceBool(*j->value, *j->pos)) if (j != i->value->attrs->end() && state.forceBool(*j->value, *j->pos))
getDerivations(state, *i->value, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures); getDerivations(state, *i->value, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
@ -362,7 +362,7 @@ static void getDerivations(EvalState & state, Value & vIn,
} }
} }
else if (v.isList()) { else if (v.type() == nList) {
for (unsigned int n = 0; n < v.listSize(); ++n) { for (unsigned int n = 0; n < v.listSize(); ++n) {
string pathPrefix2 = addToPath(pathPrefix, (format("%1%") % n).str()); string pathPrefix2 = addToPath(pathPrefix, (format("%1%") % n).str());
if (getDerivation(state, *v.listElems()[n], pathPrefix2, drvs, done, ignoreAssertionFailures)) if (getDerivation(state, *v.listElems()[n], pathPrefix2, drvs, done, ignoreAssertionFailures))
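`checkMeta` above accepts integers, Booleans, strings and floats, recurses into lists and attribute sets, and rejects any attribute set that carries an `outPath` (i.e. looks like a derivation). A self-contained recursive sketch of the same rule over a toy value type (the `MetaValue` struct is an assumption for illustration only):

```cpp
#include <string>
#include <vector>

// A toy stand-in for a forced Nix value; only what checkMeta cares about.
struct MetaValue
{
    enum Kind { Int, Bool, String, Float, List, Attrs } kind;
    std::vector<MetaValue> children;      // for List and Attrs
    std::vector<std::string> attrNames;   // parallel to children when kind == Attrs
};

// Same shape as DrvInfo::checkMeta: scalars are accepted, containers are
// checked recursively, and attribute sets that look like derivations
// (they contain an outPath attribute) are rejected.
bool checkMeta(const MetaValue & v)
{
    switch (v.kind) {
    case MetaValue::List:
        for (auto & elem : v.children)
            if (!checkMeta(elem)) return false;
        return true;
    case MetaValue::Attrs:
        for (auto & name : v.attrNames)
            if (name == "outPath") return false;
        for (auto & elem : v.children)
            if (!checkMeta(elem)) return false;
        return true;
    default:
        return true; // Int, Bool, String or Float
    }
}
```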

@ -40,6 +40,6 @@ $(eval $(call install-file-in, $(d)/nix-expr.pc, $(prefix)/lib/pkgconfig, 0644))
$(foreach i, $(wildcard src/libexpr/flake/*.hh), \ $(foreach i, $(wildcard src/libexpr/flake/*.hh), \
$(eval $(call install-file-in, $(i), $(includedir)/nix/flake, 0644))) $(eval $(call install-file-in, $(i), $(includedir)/nix/flake, 0644)))
$(d)/primops.cc: $(d)/imported-drv-to-derivation.nix.gen.hh $(d)/primops/derivation.nix.gen.hh $(d)/primops.cc: $(d)/imported-drv-to-derivation.nix.gen.hh $(d)/primops/derivation.nix.gen.hh $(d)/fetchurl.nix.gen.hh
$(d)/flake/flake.cc: $(d)/flake/call-flake.nix.gen.hh $(d)/flake/flake.cc: $(d)/flake/call-flake.nix.gen.hh

@ -129,7 +129,7 @@ struct ExprPath : Expr
{ {
string s; string s;
Value v; Value v;
-    ExprPath(const string & s) : s(s) { mkPathNoCopy(v, this->s.c_str()); };
+    ExprPath(const string & s) : s(s) { v.mkPath(this->s.c_str()); };
COMMON_METHODS COMMON_METHODS
Value * maybeThunk(EvalState & state, Env & env); Value * maybeThunk(EvalState & state, Env & env);
}; };

@ -698,6 +698,10 @@ Path EvalState::findFile(SearchPath & searchPath, const string & path, const Pos
Path res = r.second + suffix; Path res = r.second + suffix;
if (pathExists(res)) return canonPath(res); if (pathExists(res)) return canonPath(res);
} }
if (hasPrefix(path, "nix/"))
return corepkgsPrefix + path.substr(4);
throw ThrownError({ throw ThrownError({
.hint = hintfmt(evalSettings.pureEval .hint = hintfmt(evalSettings.pureEval
? "cannot look up '<%s>' in pure evaluation mode (use '--impure' to override)" ? "cannot look up '<%s>' in pure evaluation mode (use '--impure' to override)"

@ -164,7 +164,15 @@ static void import(EvalState & state, const Pos & pos, Value & vPath, Value * vS
state.forceFunction(**fun, pos); state.forceFunction(**fun, pos);
mkApp(v, **fun, w); mkApp(v, **fun, w);
state.forceAttrs(v, pos); state.forceAttrs(v, pos);
-    } else {
+    }
else if (path == corepkgsPrefix + "fetchurl.nix") {
state.eval(state.parseExprFromString(
#include "fetchurl.nix.gen.hh"
, "/"), v);
}
else {
if (!vScope) if (!vScope)
state.evalFile(realPath, v); state.evalFile(realPath, v);
else { else {
@ -356,24 +364,20 @@ static void prim_typeOf(EvalState & state, const Pos & pos, Value * * args, Valu
 {
     state.forceValue(*args[0], pos);
     string t;
-    switch (args[0]->type) {
-        case tInt: t = "int"; break;
-        case tBool: t = "bool"; break;
-        case tString: t = "string"; break;
-        case tPath: t = "path"; break;
-        case tNull: t = "null"; break;
-        case tAttrs: t = "set"; break;
-        case tList1: case tList2: case tListN: t = "list"; break;
-        case tLambda:
-        case tPrimOp:
-        case tPrimOpApp:
-            t = "lambda";
-            break;
-        case tExternal:
+    switch (args[0]->type()) {
+        case nInt: t = "int"; break;
+        case nBool: t = "bool"; break;
+        case nString: t = "string"; break;
+        case nPath: t = "path"; break;
+        case nNull: t = "null"; break;
+        case nAttrs: t = "set"; break;
+        case nList: t = "list"; break;
+        case nFunction: t = "lambda"; break;
+        case nExternal:
             t = args[0]->external->typeOf();
             break;
-        case tFloat: t = "float"; break;
-        default: abort();
+        case nFloat: t = "float"; break;
+        case nThunk: abort();
     }
     mkString(v, state.symbols.create(t));
 }
@ -393,7 +397,7 @@ static RegisterPrimOp primop_typeOf({
static void prim_isNull(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isNull(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tNull); mkBool(v, args[0]->type() == nNull);
} }
static RegisterPrimOp primop_isNull({ static RegisterPrimOp primop_isNull({
@ -413,18 +417,7 @@ static RegisterPrimOp primop_isNull({
 static void prim_isFunction(EvalState & state, const Pos & pos, Value * * args, Value & v)
 {
     state.forceValue(*args[0], pos);
-    bool res;
-    switch (args[0]->type) {
-        case tLambda:
-        case tPrimOp:
-        case tPrimOpApp:
-            res = true;
-            break;
-        default:
-            res = false;
-            break;
-    }
-    mkBool(v, res);
+    mkBool(v, args[0]->type() == nFunction);
 }
static RegisterPrimOp primop_isFunction({ static RegisterPrimOp primop_isFunction({
@ -440,7 +433,7 @@ static RegisterPrimOp primop_isFunction({
static void prim_isInt(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isInt(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tInt); mkBool(v, args[0]->type() == nInt);
} }
static RegisterPrimOp primop_isInt({ static RegisterPrimOp primop_isInt({
@ -456,7 +449,7 @@ static RegisterPrimOp primop_isInt({
static void prim_isFloat(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isFloat(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tFloat); mkBool(v, args[0]->type() == nFloat);
} }
static RegisterPrimOp primop_isFloat({ static RegisterPrimOp primop_isFloat({
@ -472,7 +465,7 @@ static RegisterPrimOp primop_isFloat({
static void prim_isString(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isString(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tString); mkBool(v, args[0]->type() == nString);
} }
static RegisterPrimOp primop_isString({ static RegisterPrimOp primop_isString({
@ -488,7 +481,7 @@ static RegisterPrimOp primop_isString({
static void prim_isBool(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isBool(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tBool); mkBool(v, args[0]->type() == nBool);
} }
static RegisterPrimOp primop_isBool({ static RegisterPrimOp primop_isBool({
@ -504,7 +497,7 @@ static RegisterPrimOp primop_isBool({
static void prim_isPath(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isPath(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tPath); mkBool(v, args[0]->type() == nPath);
} }
static RegisterPrimOp primop_isPath({ static RegisterPrimOp primop_isPath({
@ -520,20 +513,20 @@ struct CompareValues
 {
     bool operator () (const Value * v1, const Value * v2) const
     {
-        if (v1->type == tFloat && v2->type == tInt)
+        if (v1->type() == nFloat && v2->type() == nInt)
             return v1->fpoint < v2->integer;
-        if (v1->type == tInt && v2->type == tFloat)
+        if (v1->type() == nInt && v2->type() == nFloat)
             return v1->integer < v2->fpoint;
-        if (v1->type != v2->type)
+        if (v1->type() != v2->type())
             throw EvalError("cannot compare %1% with %2%", showType(*v1), showType(*v2));
-        switch (v1->type) {
-            case tInt:
+        switch (v1->type()) {
+            case nInt:
                 return v1->integer < v2->integer;
-            case tFloat:
+            case nFloat:
                 return v1->fpoint < v2->fpoint;
-            case tString:
+            case nString:
                 return strcmp(v1->string.s, v2->string.s) < 0;
-            case tPath:
+            case nPath:
                 return strcmp(v1->path, v2->path) < 0;
             default:
                 throw EvalError("cannot compare %1% with %2%", showType(*v1), showType(*v2));
@ -777,7 +770,7 @@ static RegisterPrimOp primop_deepSeq({
static void prim_trace(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_trace(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
if (args[0]->type == tString) if (args[0]->type() == nString)
printError("trace: %1%", args[0]->string.s); printError("trace: %1%", args[0]->string.s);
else else
printError("trace: %1%", *args[0]); printError("trace: %1%", *args[0]);
@ -902,7 +895,7 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * *
if (ignoreNulls) { if (ignoreNulls) {
state.forceValue(*i->value, pos); state.forceValue(*i->value, pos);
if (i->value->type == tNull) continue; if (i->value->type() == nNull) continue;
} }
if (i->name == state.sContentAddressed) { if (i->name == state.sContentAddressed) {
@ -1107,7 +1100,7 @@ static void prim_derivationStrict(EvalState & state, const Pos & pos, Value * *
// Shouldn't happen as the toplevel derivation is not CA. // Shouldn't happen as the toplevel derivation is not CA.
assert(false); assert(false);
}, },
-                [&](UnknownHashes) {
+                [&](DeferredHash _) {
for (auto & i : outputs) { for (auto & i : outputs) {
drv.outputs.insert_or_assign(i, drv.outputs.insert_or_assign(i,
DerivationOutput { DerivationOutput {
@ -1308,7 +1301,7 @@ static void prim_dirOf(EvalState & state, const Pos & pos, Value * * args, Value
{ {
PathSet context; PathSet context;
Path dir = dirOf(state.coerceToString(pos, *args[0], context, false, false)); Path dir = dirOf(state.coerceToString(pos, *args[0], context, false, false));
if (args[0]->type == tPath) mkPath(v, dir.c_str()); else mkString(v, dir, context); if (args[0]->type() == nPath) mkPath(v, dir.c_str()); else mkString(v, dir, context);
} }
static RegisterPrimOp primop_dirOf({ static RegisterPrimOp primop_dirOf({
@ -1449,7 +1442,7 @@ static void prim_readDir(EvalState & state, const Pos & pos, Value * * args, Val
Value * ent_val = state.allocAttr(v, state.symbols.create(ent.name)); Value * ent_val = state.allocAttr(v, state.symbols.create(ent.name));
if (ent.type == DT_UNKNOWN) if (ent.type == DT_UNKNOWN)
ent.type = getFileType(path + "/" + ent.name); ent.type = getFileType(path + "/" + ent.name);
-        mkStringNoCopy(*ent_val,
+        ent_val->mkString(
ent.type == DT_REG ? "regular" : ent.type == DT_REG ? "regular" :
ent.type == DT_DIR ? "directory" : ent.type == DT_DIR ? "directory" :
ent.type == DT_LNK ? "symlink" : ent.type == DT_LNK ? "symlink" :
@ -1621,7 +1614,12 @@ static RegisterPrimOp primop_toJSON({
static void prim_fromJSON(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_fromJSON(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
string s = state.forceStringNoCtx(*args[0], pos); string s = state.forceStringNoCtx(*args[0], pos);
try {
parseJSON(state, s, v); parseJSON(state, s, v);
} catch (JSONParseError &e) {
e.addTrace(pos, "while decoding a JSON string");
throw e;
}
} }
static RegisterPrimOp primop_fromJSON({ static RegisterPrimOp primop_fromJSON({
@ -1808,7 +1806,7 @@ static void prim_filterSource(EvalState & state, const Pos & pos, Value * * args
}); });
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
if (args[0]->type != tLambda) if (args[0]->type() != nFunction)
throw TypeError({ throw TypeError({
.hint = hintfmt( .hint = hintfmt(
"first argument in call to 'filterSource' is not a function but %1%", "first argument in call to 'filterSource' is not a function but %1%",
@ -2074,7 +2072,7 @@ static RegisterPrimOp primop_hasAttr({
static void prim_isAttrs(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isAttrs(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->type == tAttrs); mkBool(v, args[0]->type() == nAttrs);
} }
static RegisterPrimOp primop_isAttrs({ static RegisterPrimOp primop_isAttrs({
@ -2254,11 +2252,11 @@ static RegisterPrimOp primop_catAttrs({
static void prim_functionArgs(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_functionArgs(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
if (args[0]->type == tPrimOpApp || args[0]->type == tPrimOp) { if (args[0]->isPrimOpApp() || args[0]->isPrimOp()) {
state.mkAttrs(v, 0); state.mkAttrs(v, 0);
return; return;
} }
if (args[0]->type != tLambda) if (!args[0]->isLambda())
throw TypeError({ throw TypeError({
.hint = hintfmt("'functionArgs' requires a function"), .hint = hintfmt("'functionArgs' requires a function"),
.errPos = pos .errPos = pos
@ -2337,7 +2335,7 @@ static RegisterPrimOp primop_mapAttrs({
static void prim_isList(EvalState & state, const Pos & pos, Value * * args, Value & v) static void prim_isList(EvalState & state, const Pos & pos, Value * * args, Value & v)
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
mkBool(v, args[0]->isList()); mkBool(v, args[0]->type() == nList);
} }
static RegisterPrimOp primop_isList({ static RegisterPrimOp primop_isList({
@ -2689,7 +2687,7 @@ static void prim_sort(EvalState & state, const Pos & pos, Value * * args, Value
auto comparator = [&](Value * a, Value * b) { auto comparator = [&](Value * a, Value * b) {
/* Optimization: if the comparator is lessThan, bypass /* Optimization: if the comparator is lessThan, bypass
callFunction. */ callFunction. */
if (args[0]->type == tPrimOp && args[0]->primOp->fun == prim_lessThan) if (args[0]->isPrimOp() && args[0]->primOp->fun == prim_lessThan)
return CompareValues()(a, b); return CompareValues()(a, b);
Value vTmp1, vTmp2; Value vTmp1, vTmp2;
@ -2831,7 +2829,7 @@ static void prim_add(EvalState & state, const Pos & pos, Value * * args, Value &
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
state.forceValue(*args[1], pos); state.forceValue(*args[1], pos);
if (args[0]->type == tFloat || args[1]->type == tFloat) if (args[0]->type() == nFloat || args[1]->type() == nFloat)
mkFloat(v, state.forceFloat(*args[0], pos) + state.forceFloat(*args[1], pos)); mkFloat(v, state.forceFloat(*args[0], pos) + state.forceFloat(*args[1], pos));
else else
mkInt(v, state.forceInt(*args[0], pos) + state.forceInt(*args[1], pos)); mkInt(v, state.forceInt(*args[0], pos) + state.forceInt(*args[1], pos));
@ -2850,7 +2848,7 @@ static void prim_sub(EvalState & state, const Pos & pos, Value * * args, Value &
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
state.forceValue(*args[1], pos); state.forceValue(*args[1], pos);
if (args[0]->type == tFloat || args[1]->type == tFloat) if (args[0]->type() == nFloat || args[1]->type() == nFloat)
mkFloat(v, state.forceFloat(*args[0], pos) - state.forceFloat(*args[1], pos)); mkFloat(v, state.forceFloat(*args[0], pos) - state.forceFloat(*args[1], pos));
else else
mkInt(v, state.forceInt(*args[0], pos) - state.forceInt(*args[1], pos)); mkInt(v, state.forceInt(*args[0], pos) - state.forceInt(*args[1], pos));
@ -2869,7 +2867,7 @@ static void prim_mul(EvalState & state, const Pos & pos, Value * * args, Value &
{ {
state.forceValue(*args[0], pos); state.forceValue(*args[0], pos);
state.forceValue(*args[1], pos); state.forceValue(*args[1], pos);
if (args[0]->type == tFloat || args[1]->type == tFloat) if (args[0]->type() == nFloat || args[1]->type() == nFloat)
mkFloat(v, state.forceFloat(*args[0], pos) * state.forceFloat(*args[1], pos)); mkFloat(v, state.forceFloat(*args[0], pos) * state.forceFloat(*args[1], pos));
else else
mkInt(v, state.forceInt(*args[0], pos) * state.forceInt(*args[1], pos)); mkInt(v, state.forceInt(*args[0], pos) * state.forceInt(*args[1], pos));
@ -2896,7 +2894,7 @@ static void prim_div(EvalState & state, const Pos & pos, Value * * args, Value &
.errPos = pos .errPos = pos
}); });
if (args[0]->type == tFloat || args[1]->type == tFloat) { if (args[0]->type() == nFloat || args[1]->type() == nFloat) {
mkFloat(v, state.forceFloat(*args[0], pos) / state.forceFloat(*args[1], pos)); mkFloat(v, state.forceFloat(*args[0], pos) / state.forceFloat(*args[1], pos));
} else { } else {
NixInt i1 = state.forceInt(*args[0], pos); NixInt i1 = state.forceInt(*args[0], pos);
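The arithmetic primops in this file (`add`, `sub`, `mul`, `div`) all follow the same rule: if either operand is a float the operation is done in floating point, otherwise both operands are forced as integers. A compact stand-alone model of that promotion (the `Number` alias and `add` function are assumptions for this sketch, not the real primop machinery):

```cpp
#include <cstdint>
#include <variant>

using Number = std::variant<int64_t, double>;

static double asFloat(const Number & n)
{
    return std::holds_alternative<double>(n)
        ? std::get<double>(n)
        : static_cast<double>(std::get<int64_t>(n));
}

// Same promotion rule as prim_add and friends: float wins, otherwise integer.
Number add(const Number & a, const Number & b)
{
    if (std::holds_alternative<double>(a) || std::holds_alternative<double>(b))
        return asFloat(a) + asFloat(b);
    return std::get<int64_t>(a) + std::get<int64_t>(b);
}
```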

@ -17,7 +17,7 @@ static void prim_fetchMercurial(EvalState & state, const Pos & pos, Value * * ar
     state.forceValue(*args[0]);
-    if (args[0]->type == tAttrs) {
+    if (args[0]->type() == nAttrs) {
         state.forceAttrs(*args[0], pos);

@ -85,25 +85,25 @@ static void fetchTree(
     state.forceValue(*args[0]);
-    if (args[0]->type == tAttrs) {
+    if (args[0]->type() == nAttrs) {
         state.forceAttrs(*args[0], pos);
         fetchers::Attrs attrs;
         for (auto & attr : *args[0]->attrs) {
             state.forceValue(*attr.value);
-            if (attr.value->type == tPath || attr.value->type == tString)
+            if (attr.value->type() == nPath || attr.value->type() == nString)
                 addURI(
                     state,
                     attrs,
                     attr.name,
                     state.coerceToString(*attr.pos, *attr.value, context, false, false)
                 );
-            else if (attr.value->type == tString)
+            else if (attr.value->type() == nString)
                 addURI(state, attrs, attr.name, attr.value->string.s);
-            else if (attr.value->type == tBool)
+            else if (attr.value->type() == nBool)
                 attrs.emplace(attr.name, Explicit<bool>{attr.value->boolean});
-            else if (attr.value->type == tInt)
+            else if (attr.value->type() == nInt)
                 attrs.emplace(attr.name, uint64_t(attr.value->integer));
             else
                 throw TypeError("fetchTree argument '%s' is %s while a string, Boolean or integer is expected",
@ -163,7 +163,7 @@ static void fetch(EvalState & state, const Pos & pos, Value * * args, Value & v,
     state.forceValue(*args[0]);
-    if (args[0]->type == tAttrs) {
+    if (args[0]->type() == nAttrs) {
         state.forceAttrs(*args[0], pos);
@ -324,6 +324,11 @@ static RegisterPrimOp primop_fetchGit({
A Boolean parameter that specifies whether submodules should be A Boolean parameter that specifies whether submodules should be
checked out. Defaults to `false`. checked out. Defaults to `false`.
- allRefs
Whether to fetch all refs of the repository. With this argument being
true, it's possible to load a `rev` from *any* `ref` (by default only
`rev`s from the specified `ref` are supported).
Here are some examples of how to use `fetchGit`. Here are some examples of how to use `fetchGit`.
- To fetch a private repository over SSH: - To fetch a private repository over SSH:

@ -16,30 +16,30 @@ void printValueAsJSON(EvalState & state, bool strict,
if (strict) state.forceValue(v); if (strict) state.forceValue(v);
switch (v.type) { switch (v.type()) {
case tInt: case nInt:
out.write(v.integer); out.write(v.integer);
break; break;
case tBool: case nBool:
out.write(v.boolean); out.write(v.boolean);
break; break;
case tString: case nString:
copyContext(v, context); copyContext(v, context);
out.write(v.string.s); out.write(v.string.s);
break; break;
case tPath: case nPath:
out.write(state.copyPathToStore(context, v.path)); out.write(state.copyPathToStore(context, v.path));
break; break;
case tNull: case nNull:
out.write(nullptr); out.write(nullptr);
break; break;
case tAttrs: { case nAttrs: {
auto maybeString = state.tryAttrsToString(noPos, v, context, false, false); auto maybeString = state.tryAttrsToString(noPos, v, context, false, false);
if (maybeString) { if (maybeString) {
out.write(*maybeString); out.write(*maybeString);
@ -61,7 +61,7 @@ void printValueAsJSON(EvalState & state, bool strict,
break; break;
} }
case tList1: case tList2: case tListN: { case nList: {
auto list(out.list()); auto list(out.list());
for (unsigned int n = 0; n < v.listSize(); ++n) { for (unsigned int n = 0; n < v.listSize(); ++n) {
auto placeholder(list.placeholder()); auto placeholder(list.placeholder());
@ -70,15 +70,18 @@ void printValueAsJSON(EvalState & state, bool strict,
break; break;
} }
case tExternal: case nExternal:
v.external->printValueAsJSON(state, strict, out, context); v.external->printValueAsJSON(state, strict, out, context);
break; break;
case tFloat: case nFloat:
out.write(v.fpoint); out.write(v.fpoint);
break; break;
default: case nThunk:
throw TypeError("cannot convert %1% to JSON", showType(v));
case nFunction:
throw TypeError("cannot convert %1% to JSON", showType(v)); throw TypeError("cannot convert %1% to JSON", showType(v));
} }
} }

@ -58,31 +58,31 @@ static void printValueAsXML(EvalState & state, bool strict, bool location,
if (strict) state.forceValue(v); if (strict) state.forceValue(v);
switch (v.type) { switch (v.type()) {
case tInt: case nInt:
doc.writeEmptyElement("int", singletonAttrs("value", (format("%1%") % v.integer).str())); doc.writeEmptyElement("int", singletonAttrs("value", (format("%1%") % v.integer).str()));
break; break;
case tBool: case nBool:
doc.writeEmptyElement("bool", singletonAttrs("value", v.boolean ? "true" : "false")); doc.writeEmptyElement("bool", singletonAttrs("value", v.boolean ? "true" : "false"));
break; break;
case tString: case nString:
/* !!! show the context? */ /* !!! show the context? */
copyContext(v, context); copyContext(v, context);
doc.writeEmptyElement("string", singletonAttrs("value", v.string.s)); doc.writeEmptyElement("string", singletonAttrs("value", v.string.s));
break; break;
case tPath: case nPath:
doc.writeEmptyElement("path", singletonAttrs("value", v.path)); doc.writeEmptyElement("path", singletonAttrs("value", v.path));
break; break;
case tNull: case nNull:
doc.writeEmptyElement("null"); doc.writeEmptyElement("null");
break; break;
case tAttrs: case nAttrs:
if (state.isDerivation(v)) { if (state.isDerivation(v)) {
XMLAttrs xmlAttrs; XMLAttrs xmlAttrs;
@ -92,14 +92,14 @@ static void printValueAsXML(EvalState & state, bool strict, bool location,
a = v.attrs->find(state.sDrvPath); a = v.attrs->find(state.sDrvPath);
if (a != v.attrs->end()) { if (a != v.attrs->end()) {
if (strict) state.forceValue(*a->value); if (strict) state.forceValue(*a->value);
if (a->value->type == tString) if (a->value->type() == nString)
xmlAttrs["drvPath"] = drvPath = a->value->string.s; xmlAttrs["drvPath"] = drvPath = a->value->string.s;
} }
a = v.attrs->find(state.sOutPath); a = v.attrs->find(state.sOutPath);
if (a != v.attrs->end()) { if (a != v.attrs->end()) {
if (strict) state.forceValue(*a->value); if (strict) state.forceValue(*a->value);
if (a->value->type == tString) if (a->value->type() == nString)
xmlAttrs["outPath"] = a->value->string.s; xmlAttrs["outPath"] = a->value->string.s;
} }
@ -118,14 +118,19 @@ static void printValueAsXML(EvalState & state, bool strict, bool location,
break; break;
case tList1: case tList2: case tListN: { case nList: {
XMLOpenElement _(doc, "list"); XMLOpenElement _(doc, "list");
for (unsigned int n = 0; n < v.listSize(); ++n) for (unsigned int n = 0; n < v.listSize(); ++n)
printValueAsXML(state, strict, location, *v.listElems()[n], doc, context, drvsSeen); printValueAsXML(state, strict, location, *v.listElems()[n], doc, context, drvsSeen);
break; break;
} }
case tLambda: { case nFunction: {
if (!v.isLambda()) {
// FIXME: Serialize primops and primopapps
doc.writeEmptyElement("unevaluated");
break;
}
XMLAttrs xmlAttrs; XMLAttrs xmlAttrs;
if (location) posToXML(xmlAttrs, v.lambda.fun->pos); if (location) posToXML(xmlAttrs, v.lambda.fun->pos);
XMLOpenElement _(doc, "function", xmlAttrs); XMLOpenElement _(doc, "function", xmlAttrs);
@ -143,15 +148,15 @@ static void printValueAsXML(EvalState & state, bool strict, bool location,
break; break;
} }
case tExternal: case nExternal:
v.external->printValueAsXML(state, strict, location, doc, context, drvsSeen); v.external->printValueAsXML(state, strict, location, doc, context, drvsSeen);
break; break;
case tFloat: case nFloat:
doc.writeEmptyElement("float", singletonAttrs("value", (format("%1%") % v.fpoint).str())); doc.writeEmptyElement("float", singletonAttrs("value", (format("%1%") % v.fpoint).str()));
break; break;
default: case nThunk:
doc.writeEmptyElement("unevaluated"); doc.writeEmptyElement("unevaluated");
} }
} }

@ -27,8 +27,24 @@ typedef enum {
     tPrimOpApp,
     tExternal,
     tFloat
-} ValueType;
+} InternalType;
// This type abstracts over all actual value types in the language,
// grouping together implementation details like tList*, different function
// types, and types in non-normal form (so thunks and co.)
typedef enum {
nThunk,
nInt,
nFloat,
nBool,
nString,
nPath,
nNull,
nAttrs,
nList,
nFunction,
nExternal
} ValueType;
class Bindings; class Bindings;
struct Env; struct Env;
@ -90,7 +106,28 @@ std::ostream & operator << (std::ostream & str, const ExternalValueBase & v);
struct Value struct Value
{ {
-    ValueType type;
+private:
InternalType internalType;
friend std::string showType(const Value & v);
friend void printValue(std::ostream & str, std::set<const Value *> & active, const Value & v);
public:
// Functions needed to distinguish the type
// These should be removed eventually, by putting the functionality that's
// needed by callers into methods of this type
// type() == nThunk
inline bool isThunk() const { return internalType == tThunk; };
inline bool isApp() const { return internalType == tApp; };
inline bool isBlackhole() const { return internalType == tBlackhole; };
// type() == nFunction
inline bool isLambda() const { return internalType == tLambda; };
inline bool isPrimOp() const { return internalType == tPrimOp; };
inline bool isPrimOpApp() const { return internalType == tPrimOpApp; };
union union
{ {
NixInt integer; NixInt integer;
@ -147,24 +184,161 @@ struct Value
NixFloat fpoint; NixFloat fpoint;
}; };
// Returns the normal type of a Value. This only returns nThunk if the
// Value hasn't been forceValue'd
inline ValueType type() const
{
switch (internalType) {
case tInt: return nInt;
case tBool: return nBool;
case tString: return nString;
case tPath: return nPath;
case tNull: return nNull;
case tAttrs: return nAttrs;
case tList1: case tList2: case tListN: return nList;
case tLambda: case tPrimOp: case tPrimOpApp: return nFunction;
case tExternal: return nExternal;
case tFloat: return nFloat;
case tThunk: case tApp: case tBlackhole: return nThunk;
}
abort();
}
/* After overwriting an app node, be sure to clear pointers in the
Value to ensure that the target isn't kept alive unnecessarily. */
inline void clearValue()
{
app.left = app.right = 0;
}
inline void mkInt(NixInt n)
{
clearValue();
internalType = tInt;
integer = n;
}
inline void mkBool(bool b)
{
clearValue();
internalType = tBool;
boolean = b;
}
inline void mkString(const char * s, const char * * context = 0)
{
internalType = tString;
string.s = s;
string.context = context;
}
inline void mkPath(const char * s)
{
clearValue();
internalType = tPath;
path = s;
}
inline void mkNull()
{
clearValue();
internalType = tNull;
}
inline void mkAttrs(Bindings * a)
{
clearValue();
internalType = tAttrs;
attrs = a;
}
inline void mkList(size_t size)
{
clearValue();
if (size == 1)
internalType = tList1;
else if (size == 2)
internalType = tList2;
else {
internalType = tListN;
bigList.size = size;
}
}
inline void mkThunk(Env * e, Expr * ex)
{
internalType = tThunk;
thunk.env = e;
thunk.expr = ex;
}
inline void mkApp(Value * l, Value * r)
{
internalType = tApp;
app.left = l;
app.right = r;
}
inline void mkLambda(Env * e, ExprLambda * f)
{
internalType = tLambda;
lambda.env = e;
lambda.fun = f;
}
inline void mkBlackhole()
{
internalType = tBlackhole;
// Value will be overridden anyways
}
inline void mkPrimOp(PrimOp * p)
{
clearValue();
internalType = tPrimOp;
primOp = p;
}
inline void mkPrimOpApp(Value * l, Value * r)
{
internalType = tPrimOpApp;
app.left = l;
app.right = r;
}
inline void mkExternal(ExternalValueBase * e)
{
clearValue();
internalType = tExternal;
external = e;
}
inline void mkFloat(NixFloat n)
{
clearValue();
internalType = tFloat;
fpoint = n;
}
     bool isList() const
     {
-        return type == tList1 || type == tList2 || type == tListN;
+        return internalType == tList1 || internalType == tList2 || internalType == tListN;
     }
     Value * * listElems()
     {
-        return type == tList1 || type == tList2 ? smallList : bigList.elems;
+        return internalType == tList1 || internalType == tList2 ? smallList : bigList.elems;
     }
     const Value * const * listElems() const
     {
-        return type == tList1 || type == tList2 ? smallList : bigList.elems;
+        return internalType == tList1 || internalType == tList2 ? smallList : bigList.elems;
     }
     size_t listSize() const
     {
-        return type == tList1 ? 1 : type == tList2 ? 2 : bigList.size;
+        return internalType == tList1 ? 1 : internalType == tList2 ? 2 : bigList.size;
     }
     /* Check whether forcing this value requires a trivial amount of
@ -176,86 +350,42 @@ struct Value
 };
/* After overwriting an app node, be sure to clear pointers in the
Value to ensure that the target isn't kept alive unnecessarily. */
static inline void clearValue(Value & v)
{
v.app.left = v.app.right = 0;
}
// TODO: Remove these static functions, replace call sites with v.mk* instead
 static inline void mkInt(Value & v, NixInt n)
 {
-    clearValue(v);
-    v.type = tInt;
-    v.integer = n;
+    v.mkInt(n);
 }
 static inline void mkFloat(Value & v, NixFloat n)
 {
-    clearValue(v);
-    v.type = tFloat;
-    v.fpoint = n;
+    v.mkFloat(n);
 }
 static inline void mkBool(Value & v, bool b)
 {
-    clearValue(v);
-    v.type = tBool;
-    v.boolean = b;
+    v.mkBool(b);
 }
 static inline void mkNull(Value & v)
 {
-    clearValue(v);
-    v.type = tNull;
+    v.mkNull();
 }
 static inline void mkApp(Value & v, Value & left, Value & right)
 {
-    v.type = tApp;
-    v.app.left = &left;
-    v.app.right = &right;
+    v.mkApp(&left, &right);
 }
static inline void mkPrimOpApp(Value & v, Value & left, Value & right)
{
v.type = tPrimOpApp;
v.app.left = &left;
v.app.right = &right;
}
static inline void mkStringNoCopy(Value & v, const char * s)
{
v.type = tString;
v.string.s = s;
v.string.context = 0;
}
 static inline void mkString(Value & v, const Symbol & s)
 {
-    mkStringNoCopy(v, ((const string &) s).c_str());
+    v.mkString(((const string &) s).c_str());
 }
 void mkString(Value & v, const char * s);
static inline void mkPathNoCopy(Value & v, const char * s)
{
clearValue(v);
v.type = tPath;
v.path = s;
}
 void mkPath(Value & v, const char * s);
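The header change above hides the concrete `InternalType` tag behind a coarser public `ValueType`, so callers switch over `nList` or `nFunction` instead of the three list representations and three function representations. A stripped-down model of that pattern (an illustrative restatement of the code above, not the full `nix::Value`):

```cpp
#include <cstdlib>

enum InternalType { tInt, tBool, tList1, tList2, tListN,
                    tLambda, tPrimOp, tPrimOpApp, tThunk };

enum ValueType { nInt, nBool, nList, nFunction, nThunk };

class Value
{
    InternalType internalType = tThunk;   // implementation detail, now private

public:
    // Coarse, public classification; only returns nThunk before forcing.
    ValueType type() const
    {
        switch (internalType) {
            case tInt: return nInt;
            case tBool: return nBool;
            case tList1: case tList2: case tListN: return nList;
            case tLambda: case tPrimOp: case tPrimOpApp: return nFunction;
            case tThunk: return nThunk;
        }
        abort();
    }

    // Narrow predicates kept for the few callers that still need them.
    bool isLambda() const { return internalType == tLambda; }
    bool isPrimOp() const { return internalType == tPrimOp; }

    void mkInt()  { internalType = tInt; }
    void mkBool() { internalType = tBool; }
};
```

Call sites then write `if (v.type() == nList)` rather than testing the three internal list tags, which is exactly the mechanical change running through the other hunks in this commit.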

@ -59,12 +59,13 @@ struct GitInputScheme : InputScheme
if (maybeGetStrAttr(attrs, "type") != "git") return {}; if (maybeGetStrAttr(attrs, "type") != "git") return {};
for (auto & [name, value] : attrs) for (auto & [name, value] : attrs)
if (name != "type" && name != "url" && name != "ref" && name != "rev" && name != "shallow" && name != "submodules" && name != "lastModified" && name != "revCount" && name != "narHash") if (name != "type" && name != "url" && name != "ref" && name != "rev" && name != "shallow" && name != "submodules" && name != "lastModified" && name != "revCount" && name != "narHash" && name != "allRefs")
throw Error("unsupported Git input attribute '%s'", name); throw Error("unsupported Git input attribute '%s'", name);
parseURL(getStrAttr(attrs, "url")); parseURL(getStrAttr(attrs, "url"));
maybeGetBoolAttr(attrs, "shallow"); maybeGetBoolAttr(attrs, "shallow");
maybeGetBoolAttr(attrs, "submodules"); maybeGetBoolAttr(attrs, "submodules");
maybeGetBoolAttr(attrs, "allRefs");
if (auto ref = maybeGetStrAttr(attrs, "ref")) { if (auto ref = maybeGetStrAttr(attrs, "ref")) {
if (std::regex_search(*ref, badGitRefRegex)) if (std::regex_search(*ref, badGitRefRegex))
@ -169,10 +170,12 @@ struct GitInputScheme : InputScheme
bool shallow = maybeGetBoolAttr(input.attrs, "shallow").value_or(false); bool shallow = maybeGetBoolAttr(input.attrs, "shallow").value_or(false);
bool submodules = maybeGetBoolAttr(input.attrs, "submodules").value_or(false); bool submodules = maybeGetBoolAttr(input.attrs, "submodules").value_or(false);
bool allRefs = maybeGetBoolAttr(input.attrs, "allRefs").value_or(false);
std::string cacheType = "git"; std::string cacheType = "git";
if (shallow) cacheType += "-shallow"; if (shallow) cacheType += "-shallow";
if (submodules) cacheType += "-submodules"; if (submodules) cacheType += "-submodules";
if (allRefs) cacheType += "-all-refs";
auto getImmutableAttrs = [&]() auto getImmutableAttrs = [&]()
{ {
@ -337,6 +340,9 @@ struct GitInputScheme : InputScheme
throw; throw;
} }
} }
} else {
if (allRefs) {
doFetch = true;
} else { } else {
/* If the local ref is older than tarball-ttl seconds, do a /* If the local ref is older than tarball-ttl seconds, do a
git fetch to update the local ref to the remote ref. */ git fetch to update the local ref to the remote ref. */
@ -344,6 +350,7 @@ struct GitInputScheme : InputScheme
doFetch = stat(localRefFile.c_str(), &st) != 0 || doFetch = stat(localRefFile.c_str(), &st) != 0 ||
(uint64_t) st.st_mtime + settings.tarballTtl <= (uint64_t) now; (uint64_t) st.st_mtime + settings.tarballTtl <= (uint64_t) now;
} }
}
if (doFetch) { if (doFetch) {
Activity act(*logger, lvlTalkative, actUnknown, fmt("fetching Git repository '%s'", actualUrl)); Activity act(*logger, lvlTalkative, actUnknown, fmt("fetching Git repository '%s'", actualUrl));
@ -352,7 +359,9 @@ struct GitInputScheme : InputScheme
// we're using --quiet for now. Should process its stderr. // we're using --quiet for now. Should process its stderr.
try { try {
auto ref = input.getRef(); auto ref = input.getRef();
-            auto fetchRef = ref->compare(0, 5, "refs/") == 0
+            auto fetchRef = allRefs
+                ? "refs/*"
+                : ref->compare(0, 5, "refs/") == 0
                     ? *ref
                     : "refs/heads/" + *ref;
runProgram("git", true, { "-C", repoDir, "fetch", "--quiet", "--force", "--", actualUrl, fmt("%s:%s", fetchRef, fetchRef) }); runProgram("git", true, { "-C", repoDir, "fetch", "--quiet", "--force", "--", actualUrl, fmt("%s:%s", fetchRef, fetchRef) });
@ -392,6 +401,28 @@ struct GitInputScheme : InputScheme
AutoDelete delTmpDir(tmpDir, true); AutoDelete delTmpDir(tmpDir, true);
PathFilter filter = defaultPathFilter; PathFilter filter = defaultPathFilter;
RunOptions checkCommitOpts(
"git",
{ "-C", repoDir, "cat-file", "commit", input.getRev()->gitRev() }
);
checkCommitOpts.searchPath = true;
checkCommitOpts.mergeStderrToStdout = true;
auto result = runProgram(checkCommitOpts);
if (WEXITSTATUS(result.first) == 128
&& result.second.find("bad file") != std::string::npos
) {
throw Error(
"Cannot find Git revision '%s' in ref '%s' of repository '%s'! "
"Please make sure that the " ANSI_BOLD "rev" ANSI_NORMAL " exists on the "
ANSI_BOLD "ref" ANSI_NORMAL " you've specified or add " ANSI_BOLD
"allRefs = true;" ANSI_NORMAL " to " ANSI_BOLD "fetchGit" ANSI_NORMAL ".",
input.getRev()->gitRev(),
*input.getRef(),
actualUrl
);
}
if (submodules) { if (submodules) {
Path tmpGitDir = createTempDir(); Path tmpGitDir = createTempDir();
AutoDelete delTmpGitDir(tmpGitDir, true); AutoDelete delTmpGitDir(tmpGitDir, true);
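Two behaviours are added here: with `allRefs = true` the fetcher always refreshes and fetches the refspec `refs/*` instead of a single branch, and a failed `git cat-file commit <rev>` (exit status 128 with a "bad file" message) is turned into a friendlier error suggesting `allRefs = true;`. A small sketch of the refspec selection alone (the free-standing function name is an assumption; the strings follow the patch):

```cpp
#include <string>

// Mirrors the fetch logic above: allRefs fetches every ref, a fully
// qualified ref is used as-is, and a bare branch name is expanded.
std::string selectFetchRef(bool allRefs, const std::string & ref)
{
    if (allRefs)
        return "refs/*";
    if (ref.compare(0, 5, "refs/") == 0)
        return ref;
    return "refs/heads/" + ref;
}
```

So `selectFetchRef(false, "master")` yields `refs/heads/master`, while `allRefs` widens the fetch to every ref of the remote, which is what makes an arbitrary `rev` reachable.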

@ -433,7 +433,9 @@ StorePath BinaryCacheStore::addTextToStore(const string & name, const string & s
if (!repair && isValidPath(path)) if (!repair && isValidPath(path))
return path; return path;
-    auto source = StringSource { s };
+    StringSink sink;
dumpString(s, sink);
auto source = StringSource { *sink.s };
return addToStoreCommon(source, repair, CheckSigs, [&](HashResult nar) { return addToStoreCommon(source, repair, CheckSigs, [&](HashResult nar) {
ValidPathInfo info { path, nar.first }; ValidPathInfo info { path, nar.first };
info.narSize = nar.second; info.narSize = nar.second;
@ -443,6 +445,24 @@ StorePath BinaryCacheStore::addTextToStore(const string & name, const string & s
})->path; })->path;
} }
std::optional<const Realisation> BinaryCacheStore::queryRealisation(const DrvOutput & id)
{
auto outputInfoFilePath = realisationsPrefix + "/" + id.to_string() + ".doi";
auto rawOutputInfo = getFile(outputInfoFilePath);
if (rawOutputInfo) {
return {Realisation::fromJSON(
nlohmann::json::parse(*rawOutputInfo), outputInfoFilePath)};
} else {
return std::nullopt;
}
}
void BinaryCacheStore::registerDrvOutput(const Realisation& info) {
auto filePath = realisationsPrefix + "/" + info.id.to_string() + ".doi";
upsertFile(filePath, info.toJSON().dump(), "application/json");
}
ref<FSAccessor> BinaryCacheStore::getFSAccessor() ref<FSAccessor> BinaryCacheStore::getFSAccessor()
{ {
return make_ref<RemoteFSAccessor>(ref<Store>(shared_from_this()), localNarCache); return make_ref<RemoteFSAccessor>(ref<Store>(shared_from_this()), localNarCache);
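The new `registerDrvOutput`/`queryRealisation` pair stores each realisation as a JSON document named `<drv-output-id>.doi` under the `/realisations` prefix of the binary cache. A toy model of that round trip, with a `std::map` standing in for the cache's `upsertFile`/`getFile` (the map-backed store is an assumption; the key layout follows the patch):

```cpp
#include <map>
#include <optional>
#include <string>

struct FakeBinaryCache
{
    std::map<std::string, std::string> files;   // path -> contents
    const std::string realisationsPrefix = "/realisations";

    // registerDrvOutput: write "<prefix>/<id>.doi" with the realisation's JSON.
    void registerDrvOutput(const std::string & id, const std::string & json)
    {
        files[realisationsPrefix + "/" + id + ".doi"] = json;
    }

    // queryRealisation: read it back, or nullopt if the output is unknown.
    std::optional<std::string> queryRealisation(const std::string & id) const
    {
        auto it = files.find(realisationsPrefix + "/" + id + ".doi");
        if (it == files.end()) return std::nullopt;
        return it->second;
    }
};
```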

@ -24,7 +24,7 @@ struct BinaryCacheStoreConfig : virtual StoreConfig
"enable multi-threading compression, available for xz only currently"}; "enable multi-threading compression, available for xz only currently"};
}; };
-class BinaryCacheStore : public Store, public virtual BinaryCacheStoreConfig
+class BinaryCacheStore : public virtual BinaryCacheStoreConfig, public virtual Store
{ {
private: private:
@ -33,6 +33,9 @@ private:
protected: protected:
// The prefix under which realisation infos will be stored
const std::string realisationsPrefix = "/realisations";
BinaryCacheStore(const Params & params); BinaryCacheStore(const Params & params);
public: public:
@ -99,6 +102,10 @@ public:
StorePath addTextToStore(const string & name, const string & s, StorePath addTextToStore(const string & name, const string & s,
const StorePathSet & references, RepairFlag repair) override; const StorePathSet & references, RepairFlag repair) override;
void registerDrvOutput(const Realisation & info) override;
std::optional<const Realisation> queryRealisation(const DrvOutput &) override;
void narFromPath(const StorePath & path, Sink & sink) override; void narFromPath(const StorePath & path, Sink & sink) override;
BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv, BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv,

@ -493,8 +493,9 @@ void DerivationGoal::inputsRealised()
if (useDerivation) { if (useDerivation) {
auto & fullDrv = *dynamic_cast<Derivation *>(drv.get()); auto & fullDrv = *dynamic_cast<Derivation *>(drv.get());
-        if ((!fullDrv.inputDrvs.empty() && derivationIsCA(fullDrv.type()))
-            || fullDrv.type() == DerivationType::DeferredInputAddressed) {
+        if (settings.isExperimentalFeatureEnabled("ca-derivations") &&
+            ((!fullDrv.inputDrvs.empty() && derivationIsCA(fullDrv.type()))
+                || fullDrv.type() == DerivationType::DeferredInputAddressed)) {
             /* We are able to resolve this derivation based on the
                now-known results of dependencies. If so, we become a stub goal
                aliasing that resolved derivation goal */
@ -503,9 +504,6 @@ void DerivationGoal::inputsRealised()
Derivation drvResolved { *std::move(attempt) }; Derivation drvResolved { *std::move(attempt) };
auto pathResolved = writeDerivation(worker.store, drvResolved); auto pathResolved = writeDerivation(worker.store, drvResolved);
/* Add to memotable to speed up downstream goal's queries with the
original derivation. */
drvPathResolutions.lock()->insert_or_assign(drvPath, pathResolved);
auto msg = fmt("Resolved derivation: '%s' -> '%s'", auto msg = fmt("Resolved derivation: '%s' -> '%s'",
worker.store.printStorePath(drvPath), worker.store.printStorePath(drvPath),
@ -1987,7 +1985,7 @@ void DerivationGoal::writeStructuredAttrs()
chownToBuilder(tmpDir + "/.attrs.sh"); chownToBuilder(tmpDir + "/.attrs.sh");
} }
-struct RestrictedStoreConfig : LocalFSStoreConfig
+struct RestrictedStoreConfig : virtual LocalFSStoreConfig
{ {
using LocalFSStoreConfig::LocalFSStoreConfig; using LocalFSStoreConfig::LocalFSStoreConfig;
const std::string name() { return "Restricted Store"; } const std::string name() { return "Restricted Store"; }
@ -1996,14 +1994,19 @@ struct RestrictedStoreConfig : LocalFSStoreConfig
/* A wrapper around LocalStore that only allows building/querying of /* A wrapper around LocalStore that only allows building/querying of
paths that are in the input closures of the build or were added via paths that are in the input closures of the build or were added via
recursive Nix calls. */ recursive Nix calls. */
-struct RestrictedStore : public LocalFSStore, public virtual RestrictedStoreConfig
+struct RestrictedStore : public virtual RestrictedStoreConfig, public virtual LocalFSStore
{ {
ref<LocalStore> next; ref<LocalStore> next;
DerivationGoal & goal; DerivationGoal & goal;
RestrictedStore(const Params & params, ref<LocalStore> next, DerivationGoal & goal) RestrictedStore(const Params & params, ref<LocalStore> next, DerivationGoal & goal)
-        : StoreConfig(params), Store(params), LocalFSStore(params), next(next), goal(goal)
+        : StoreConfig(params)
, LocalFSStoreConfig(params)
, RestrictedStoreConfig(params)
, Store(params)
, LocalFSStore(params)
, next(next), goal(goal)
{ } { }
Path getRealStoreDir() override Path getRealStoreDir() override
@ -2094,6 +2097,16 @@ struct RestrictedStore : public LocalFSStore, public virtual RestrictedStoreConf
/* Nothing to be done; 'path' must already be valid. */ /* Nothing to be done; 'path' must already be valid. */
} }
void registerDrvOutput(const Realisation & info) override
// XXX: This should probably be allowed as a no-op if the realisation
// corresponds to an allowed derivation
{ throw Error("registerDrvOutput"); }
std::optional<const Realisation> queryRealisation(const DrvOutput & id) override
// XXX: This should probably be allowed if the realisation corresponds to
// an allowed derivation
{ throw Error("queryRealisation"); }
void buildPaths(const std::vector<StorePathWithOutputs> & paths, BuildMode buildMode) override void buildPaths(const std::vector<StorePathWithOutputs> & paths, BuildMode buildMode) override
{ {
if (buildMode != bmNormal) throw Error("unsupported build mode"); if (buildMode != bmNormal) throw Error("unsupported build mode");
@ -3379,21 +3392,14 @@ void DerivationGoal::registerOutputs()
means it's safe to link the derivation to the output hash. We must do means it's safe to link the derivation to the output hash. We must do
that for floating CA derivations, which otherwise couldn't be cached, that for floating CA derivations, which otherwise couldn't be cached,
but it's fine to do in all cases. */ but it's fine to do in all cases. */
-        bool isCaFloating = drv->type() == DerivationType::CAFloating;
-        auto drvPathResolved = drvPath;
-        if (!useDerivation && isCaFloating) {
-            /* Once a floating CA derivations reaches this point, it
-               must already be resolved, so we don't bother trying to
-               downcast drv to get would would just be an empty
-               inputDrvs field. */
-            Derivation drv2 { *drv };
-            drvPathResolved = writeDerivation(worker.store, drv2);
+        if (settings.isExperimentalFeatureEnabled("ca-derivations")) {
+            auto outputHashes = staticOutputHashes(worker.store, *drv);
+            for (auto & [outputName, newInfo] : infos)
+                worker.store.registerDrvOutput(Realisation{
+                    .id = DrvOutput{outputHashes.at(outputName), outputName},
+                    .outPath = newInfo.path});
         }
-        if (useDerivation || isCaFloating)
-            for (auto & [outputName, newInfo] : infos)
-                worker.store.linkDeriverToPath(drvPathResolved, outputName, newInfo.path);
     }

@ -0,0 +1,11 @@
-- Extension of the sql schema for content-addressed derivations.
-- Won't be loaded unless the experimental feature `ca-derivations`
-- is enabled
create table if not exists Realisations (
drvPath text not null,
outputName text not null, -- symbolic output id, usually "out"
outputPath integer not null,
primary key (drvPath, outputName),
foreign key (outputPath) references ValidPaths(id) on delete cascade
);
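A realisation row in this table ties a derivation output's symbolic name to the `ValidPaths` row of the path it produced. The following is a hedged sketch of how such a row could be created and filled with the plain SQLite C API; it is not the code Nix uses, and the example values are made up:

```cpp
#include <sqlite3.h>
#include <cstdio>

int main()
{
    sqlite3 * db = nullptr;
    if (sqlite3_open("test-ca.sqlite", &db) != SQLITE_OK) return 1;

    // Same shape as the schema above (ValidPaths reduced to what the foreign key needs).
    const char * schema =
        "create table if not exists ValidPaths (id integer primary key, path text);"
        "create table if not exists Realisations ("
        "  drvPath text not null,"
        "  outputName text not null,"
        "  outputPath integer not null,"
        "  primary key (drvPath, outputName),"
        "  foreign key (outputPath) references ValidPaths(id) on delete cascade);";
    sqlite3_exec(db, schema, nullptr, nullptr, nullptr);

    sqlite3_stmt * stmt = nullptr;
    sqlite3_prepare_v2(db,
        "insert or replace into Realisations (drvPath, outputName, outputPath) values (?, ?, ?)",
        -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "example-drv-hash", -1, SQLITE_TRANSIENT); // hypothetical id
    sqlite3_bind_text(stmt, 2, "out", -1, SQLITE_TRANSIENT);
    sqlite3_bind_int64(stmt, 3, 1);                                       // ValidPaths.id
    if (sqlite3_step(stmt) != SQLITE_DONE) std::fprintf(stderr, "insert failed\n");

    sqlite3_finalize(stmt);
    sqlite3_close(db);
}
```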

@ -868,6 +868,28 @@ static void performOp(TunnelLogger * logger, ref<Store> store,
break; break;
} }
case wopRegisterDrvOutput: {
logger->startWork();
auto outputId = DrvOutput::parse(readString(from));
auto outputPath = StorePath(readString(from));
auto resolvedDrv = StorePath(readString(from));
store->registerDrvOutput(Realisation{
.id = outputId, .outPath = outputPath});
logger->stopWork();
break;
}
case wopQueryRealisation: {
logger->startWork();
auto outputId = DrvOutput::parse(readString(from));
auto info = store->queryRealisation(outputId);
logger->stopWork();
std::set<StorePath> outPaths;
if (info) outPaths.insert(info->outPath);
worker_proto::write(*store, to, outPaths);
break;
}
default: default:
throw Error("invalid operation %1%", op); throw Error("invalid operation %1%", op);
} }

@ -496,10 +496,9 @@ static const DrvHashModulo pathDerivationModulo(Store & store, const StorePath &
*/ */
DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs) DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs)
{ {
bool isDeferred = false;
/* Return a fixed hash for fixed-output derivations. */ /* Return a fixed hash for fixed-output derivations. */
switch (drv.type()) { switch (drv.type()) {
case DerivationType::CAFloating:
return UnknownHashes {};
case DerivationType::CAFixed: { case DerivationType::CAFixed: {
std::map<std::string, Hash> outputHashes; std::map<std::string, Hash> outputHashes;
for (const auto & i : drv.outputs) { for (const auto & i : drv.outputs) {
@ -512,6 +511,9 @@ DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool m
} }
return outputHashes; return outputHashes;
} }
case DerivationType::CAFloating:
isDeferred = true;
break;
case DerivationType::InputAddressed: case DerivationType::InputAddressed:
break; break;
case DerivationType::DeferredInputAddressed: case DerivationType::DeferredInputAddressed:
@ -522,13 +524,16 @@ DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool m
calls to this function. */ calls to this function. */
std::map<std::string, StringSet> inputs2; std::map<std::string, StringSet> inputs2;
for (auto & i : drv.inputDrvs) { for (auto & i : drv.inputDrvs) {
bool hasUnknownHash = false;
const auto & res = pathDerivationModulo(store, i.first); const auto & res = pathDerivationModulo(store, i.first);
std::visit(overloaded { std::visit(overloaded {
// Regular non-CA derivation, replace derivation // Regular non-CA derivation, replace derivation
[&](Hash drvHash) { [&](Hash drvHash) {
inputs2.insert_or_assign(drvHash.to_string(Base16, false), i.second); inputs2.insert_or_assign(drvHash.to_string(Base16, false), i.second);
}, },
[&](DeferredHash deferredHash) {
isDeferred = true;
inputs2.insert_or_assign(deferredHash.hash.to_string(Base16, false), i.second);
},
// CA derivation's output hashes // CA derivation's output hashes
[&](CaOutputHashes outputHashes) { [&](CaOutputHashes outputHashes) {
std::set<std::string> justOut = { "out" }; std::set<std::string> justOut = { "out" };
@ -540,16 +545,37 @@ DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool m
justOut); justOut);
} }
}, },
[&](UnknownHashes) {
hasUnknownHash = true;
},
}, res); }, res);
if (hasUnknownHash) {
return UnknownHashes {};
}
} }
return hashString(htSHA256, drv.unparse(store, maskOutputs, &inputs2)); auto hash = hashString(htSHA256, drv.unparse(store, maskOutputs, &inputs2));
if (isDeferred)
return DeferredHash { hash };
else
return hash;
}
std::map<std::string, Hash> staticOutputHashes(Store& store, const Derivation& drv)
{
std::map<std::string, Hash> res;
std::visit(overloaded {
[&](Hash drvHash) {
for (auto & outputName : drv.outputNames()) {
res.insert({outputName, drvHash});
}
},
[&](DeferredHash deferredHash) {
for (auto & outputName : drv.outputNames()) {
res.insert({outputName, deferredHash.hash});
}
},
[&](CaOutputHashes outputHashes) {
res = outputHashes;
},
}, hashDerivationModulo(store, drv, true));
return res;
} }
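The new `staticOutputHashes` helper is what lets a builder name an output before its path is known: every output gets a stable hash derived from the (possibly deferred) derivation hash. As a rough illustration of the intended call pattern (mirroring the `registerOutputs` change earlier in this diff), here is a hedged sketch; `builtOutputs` is a hypothetical map from output name to the store path the output was actually built at, not something defined in this commit.

```cpp
#include <map>
#include <string>

#include "derivations.hh"
#include "realisation.hh"
#include "store-api.hh"

// Sketch only: pair every output of a just-built derivation with its static
// hash to form a DrvOutput id, then record where that output ended up.
// `builtOutputs` is a hypothetical map from output name to built store path.
void registerBuiltOutputs(nix::Store & store, const nix::Derivation & drv,
    const std::map<std::string, nix::StorePath> & builtOutputs)
{
    auto outputHashes = nix::staticOutputHashes(store, drv);
    for (auto & [outputName, outPath] : builtOutputs)
        store.registerDrvOutput(nix::Realisation {
            .id = nix::DrvOutput { outputHashes.at(outputName), outputName },
            .outPath = outPath,
        });
}
```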
@ -719,10 +745,7 @@ static void rewriteDerivation(Store & store, BasicDerivation & drv, const String
} }
std::optional<BasicDerivation> Derivation::tryResolveUncached(Store & store) {
Sync<DrvPathResolutions> drvPathResolutions;
std::optional<BasicDerivation> Derivation::tryResolve(Store & store) {
BasicDerivation resolved { *this }; BasicDerivation resolved { *this };
// Input paths that we'll want to rewrite in the derivation // Input paths that we'll want to rewrite in the derivation
@ -748,4 +771,34 @@ std::optional<BasicDerivation> Derivation::tryResolve(Store & store) {
return resolved; return resolved;
} }
std::optional<BasicDerivation> Derivation::tryResolve(Store& store)
{
auto drvPath = writeDerivation(store, *this, NoRepair, false);
return Derivation::tryResolve(store, drvPath);
}
std::optional<BasicDerivation> Derivation::tryResolve(Store& store, const StorePath& drvPath)
{
// This is quite dirty and leaky, but will disappear once #4340 is merged
static Sync<std::map<StorePath, std::optional<Derivation>>> resolutionsCache;
{
auto resolutions = resolutionsCache.lock();
auto resolvedDrvIter = resolutions->find(drvPath);
if (resolvedDrvIter != resolutions->end()) {
auto & [_, resolvedDrv] = *resolvedDrvIter;
return *resolvedDrv;
}
}
/* Try resolve drv and use that path instead. */
auto drv = store.readDerivation(drvPath);
auto attempt = drv.tryResolveUncached(store);
if (!attempt)
return std::nullopt;
/* Store in memo table. */
resolutionsCache.lock()->insert_or_assign(drvPath, *attempt);
return *attempt;
}
} }
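The `drvPath`-keyed overload adds a process-wide memo table, so resolving the same derivation twice only reads and rewrites it once. A minimal usage sketch, assuming a `Store` reference and the path of a content-addressed derivation are already in hand; this mirrors what `Store::queryPartialDerivationOutputMap` does further down in this diff.

```cpp
#include <optional>

#include "derivations.hh"
#include "store-api.hh"

// Sketch only: resolve a derivation's inputs to concrete store paths and
// compute the store path of the resolved derivation. Returns std::nullopt
// when some input is not built yet and the resolution cannot be completed.
std::optional<nix::StorePath> resolvedDrvPath(nix::Store & store, const nix::StorePath & drvPath)
{
    auto resolved = nix::Derivation::tryResolve(store, drvPath);
    if (!resolved)
        return std::nullopt;
    /* readOnly = true: only compute the path, mirroring the call in
       Store::queryPartialDerivationOutputMap later in this diff. */
    return nix::writeDerivation(store, *resolved, nix::NoRepair, true);
}
```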


@ -18,8 +18,6 @@ namespace nix {
/* The traditional non-fixed-output derivation type. */ /* The traditional non-fixed-output derivation type. */
struct DerivationOutputInputAddressed struct DerivationOutputInputAddressed
{ {
/* Will need to become `std::optional<StorePath>` once input-addressed
derivations are allowed to depend on cont-addressed derivations */
StorePath path; StorePath path;
}; };
@ -140,10 +138,14 @@ struct Derivation : BasicDerivation
2. Input placeholders are replaced with realized input store paths. */ 2. Input placeholders are replaced with realized input store paths. */
std::optional<BasicDerivation> tryResolve(Store & store); std::optional<BasicDerivation> tryResolve(Store & store);
static std::optional<BasicDerivation> tryResolve(Store & store, const StorePath & drvPath);
Derivation() = default; Derivation() = default;
Derivation(const BasicDerivation & bd) : BasicDerivation(bd) { } Derivation(const BasicDerivation & bd) : BasicDerivation(bd) { }
Derivation(BasicDerivation && bd) : BasicDerivation(std::move(bd)) { } Derivation(BasicDerivation && bd) : BasicDerivation(std::move(bd)) { }
private:
std::optional<BasicDerivation> tryResolveUncached(Store & store);
}; };
@@ -174,12 +176,12 @@ std::string outputPathName(std::string_view drvName, std::string_view outputName
// whose output hashes are always known since they are fixed up-front.
typedef std::map<std::string, Hash> CaOutputHashes;
-struct UnknownHashes {};
+struct DeferredHash { Hash hash; };
typedef std::variant<
    Hash, // regular DRV normalized hash
    CaOutputHashes, // Fixed-output derivation hashes
-    UnknownHashes // Deferred hashes for floating outputs drvs and their dependencies
+    DeferredHash // Deferred hashes for floating outputs drvs and their dependencies
> DrvHashModulo;
/* Returns hashes with the details of fixed-output subderivations /* Returns hashes with the details of fixed-output subderivations
@ -207,22 +209,18 @@ typedef std::variant<
*/ */
DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs); DrvHashModulo hashDerivationModulo(Store & store, const Derivation & drv, bool maskOutputs);
/*
Return a map associating each output to a hash that uniquely identifies its
derivation (modulo the self-references).
*/
std::map<std::string, Hash> staticOutputHashes(Store& store, const Derivation& drv);
/* Memoisation of hashDerivationModulo(). */ /* Memoisation of hashDerivationModulo(). */
typedef std::map<StorePath, DrvHashModulo> DrvHashes; typedef std::map<StorePath, DrvHashModulo> DrvHashes;
// FIXME: global, though at least thread-safe. // FIXME: global, though at least thread-safe.
extern Sync<DrvHashes> drvHashes; extern Sync<DrvHashes> drvHashes;
/* Memoisation of `readDerivation(..).resove()`. */
typedef std::map<
StorePath,
std::optional<StorePath>
> DrvPathResolutions;
// FIXME: global, though at least thread-safe.
// FIXME: arguably overlaps with hashDerivationModulo memo table.
extern Sync<DrvPathResolutions> drvPathResolutions;
bool wantOutput(const string & output, const std::set<string> & wanted); bool wantOutput(const string & output, const std::set<string> & wanted);
struct Source; struct Source;


@ -9,7 +9,7 @@ struct DummyStoreConfig : virtual StoreConfig {
const std::string name() override { return "Dummy Store"; } const std::string name() override { return "Dummy Store"; }
}; };
struct DummyStore : public Store, public virtual DummyStoreConfig struct DummyStore : public virtual DummyStoreConfig, public virtual Store
{ {
DummyStore(const std::string scheme, const std::string uri, const Params & params) DummyStore(const std::string scheme, const std::string uri, const Params & params)
: DummyStore(params) : DummyStore(params)
@ -17,6 +17,7 @@ struct DummyStore : public Store, public virtual DummyStoreConfig
DummyStore(const Params & params) DummyStore(const Params & params)
: StoreConfig(params) : StoreConfig(params)
, DummyStoreConfig(params)
, Store(params) , Store(params)
{ } { }
@ -60,6 +61,9 @@ struct DummyStore : public Store, public virtual DummyStoreConfig
BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv, BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv,
BuildMode buildMode) override BuildMode buildMode) override
{ unsupported("buildDerivation"); } { unsupported("buildDerivation"); }
std::optional<const Realisation> queryRealisation(const DrvOutput&) override
{ unsupported("queryRealisation"); }
}; };
static RegisterStoreImplementation<DummyStore, DummyStoreConfig> regDummyStore; static RegisterStoreImplementation<DummyStore, DummyStoreConfig> regDummyStore;


@ -25,7 +25,14 @@ public:
virtual StringSet readDirectory(const Path & path) = 0; virtual StringSet readDirectory(const Path & path) = 0;
virtual std::string readFile(const Path & path) = 0; /**
* Read a file inside the store.
*
* If `requireValidPath` is set to `true` (the default), the path must be
* inside a valid store path, otherwise it just needs to be physically
* present (but not necessarily properly registered)
*/
virtual std::string readFile(const Path & path, bool requireValidPath = true) = 0;
virtual std::string readLink(const Path & path) = 0; virtual std::string readLink(const Path & path) = 0;
}; };
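A hedged sketch of why the new flag is useful: later in this diff, `readDerivationCommon` needs to parse derivations that are physically on disk but not (yet) registered as valid, which the default `requireValidPath = true` would reject. The function and variable names below are illustrative only.

```cpp
#include <string>

#include "fs-accessor.hh"
#include "store-api.hh"

// Sketch only: read a .drv file that is physically present in the store
// directory but possibly not registered as a valid path yet (for example a
// freshly written resolved derivation).
std::string readPossiblyUnregisteredDrv(nix::Store & store, const nix::StorePath & drvPath)
{
    auto accessor = store.getFSAccessor();
    return accessor->readFile(store.printStorePath(drvPath),
        /* requireValidPath */ false);
}
```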


@ -15,7 +15,7 @@ struct HttpBinaryCacheStoreConfig : virtual BinaryCacheStoreConfig
const std::string name() override { return "Http Binary Cache Store"; } const std::string name() override { return "Http Binary Cache Store"; }
}; };
class HttpBinaryCacheStore : public BinaryCacheStore, public HttpBinaryCacheStoreConfig class HttpBinaryCacheStore : public virtual HttpBinaryCacheStoreConfig, public virtual BinaryCacheStore
{ {
private: private:
@ -36,6 +36,9 @@ public:
const Path & _cacheUri, const Path & _cacheUri,
const Params & params) const Params & params)
: StoreConfig(params) : StoreConfig(params)
, BinaryCacheStoreConfig(params)
, HttpBinaryCacheStoreConfig(params)
, Store(params)
, BinaryCacheStore(params) , BinaryCacheStore(params)
, cacheUri(scheme + "://" + _cacheUri) , cacheUri(scheme + "://" + _cacheUri)
{ {


@ -22,7 +22,7 @@ struct LegacySSHStoreConfig : virtual StoreConfig
const std::string name() override { return "Legacy SSH Store"; } const std::string name() override { return "Legacy SSH Store"; }
}; };
struct LegacySSHStore : public Store, public virtual LegacySSHStoreConfig struct LegacySSHStore : public virtual LegacySSHStoreConfig, public virtual Store
{ {
// Hack for getting remote build log output. // Hack for getting remote build log output.
// Intentionally not in `LegacySSHStoreConfig` so that it doesn't appear in // Intentionally not in `LegacySSHStoreConfig` so that it doesn't appear in
@ -48,6 +48,7 @@ struct LegacySSHStore : public Store, public virtual LegacySSHStoreConfig
LegacySSHStore(const string & scheme, const string & host, const Params & params) LegacySSHStore(const string & scheme, const string & host, const Params & params)
: StoreConfig(params) : StoreConfig(params)
, LegacySSHStoreConfig(params)
, Store(params) , Store(params)
, host(host) , host(host)
, connections(make_ref<Pool<Connection>>( , connections(make_ref<Pool<Connection>>(
@ -333,6 +334,10 @@ public:
auto conn(connections->get()); auto conn(connections->get());
return conn->remoteVersion; return conn->remoteVersion;
} }
std::optional<const Realisation> queryRealisation(const DrvOutput&) override
// TODO: Implement
{ unsupported("queryRealisation"); }
}; };
static RegisterStoreImplementation<LegacySSHStore, LegacySSHStoreConfig> regLegacySSHStore; static RegisterStoreImplementation<LegacySSHStore, LegacySSHStoreConfig> regLegacySSHStore;


@ -11,7 +11,7 @@ struct LocalBinaryCacheStoreConfig : virtual BinaryCacheStoreConfig
const std::string name() override { return "Local Binary Cache Store"; } const std::string name() override { return "Local Binary Cache Store"; }
}; };
class LocalBinaryCacheStore : public BinaryCacheStore, public virtual LocalBinaryCacheStoreConfig class LocalBinaryCacheStore : public virtual LocalBinaryCacheStoreConfig, public virtual BinaryCacheStore
{ {
private: private:
@ -24,6 +24,9 @@ public:
const Path & binaryCacheDir, const Path & binaryCacheDir,
const Params & params) const Params & params)
: StoreConfig(params) : StoreConfig(params)
, BinaryCacheStoreConfig(params)
, LocalBinaryCacheStoreConfig(params)
, Store(params)
, BinaryCacheStore(params) , BinaryCacheStore(params)
, binaryCacheDir(binaryCacheDir) , binaryCacheDir(binaryCacheDir)
{ {
@ -87,6 +90,7 @@ protected:
void LocalBinaryCacheStore::init() void LocalBinaryCacheStore::init()
{ {
createDirs(binaryCacheDir + "/nar"); createDirs(binaryCacheDir + "/nar");
createDirs(binaryCacheDir + realisationsPrefix);
if (writeDebugInfo) if (writeDebugInfo)
createDirs(binaryCacheDir + "/debuginfo"); createDirs(binaryCacheDir + "/debuginfo");
BinaryCacheStore::init(); BinaryCacheStore::init();


@ -19,10 +19,10 @@ struct LocalStoreAccessor : public FSAccessor
LocalStoreAccessor(ref<LocalFSStore> store) : store(store) { } LocalStoreAccessor(ref<LocalFSStore> store) : store(store) { }
Path toRealPath(const Path & path) Path toRealPath(const Path & path, bool requireValidPath = true)
{ {
auto storePath = store->toStorePath(path).first; auto storePath = store->toStorePath(path).first;
if (!store->isValidPath(storePath)) if (requireValidPath && !store->isValidPath(storePath))
throw InvalidPath("path '%1%' is not a valid store path", store->printStorePath(storePath)); throw InvalidPath("path '%1%' is not a valid store path", store->printStorePath(storePath));
return store->getRealStoreDir() + std::string(path, store->storeDir.size()); return store->getRealStoreDir() + std::string(path, store->storeDir.size());
} }
@ -61,9 +61,9 @@ struct LocalStoreAccessor : public FSAccessor
return res; return res;
} }
std::string readFile(const Path & path) override std::string readFile(const Path & path, bool requireValidPath = true) override
{ {
return nix::readFile(toRealPath(path)); return nix::readFile(toRealPath(path, requireValidPath));
} }
std::string readLink(const Path & path) override std::string readLink(const Path & path) override


@ -20,7 +20,7 @@ struct LocalFSStoreConfig : virtual StoreConfig
"log", "directory where Nix will store state"}; "log", "directory where Nix will store state"};
}; };
class LocalFSStore : public virtual Store, public virtual LocalFSStoreConfig class LocalFSStore : public virtual LocalFSStoreConfig, public virtual Store
{ {
public: public:


@ -52,14 +52,56 @@ struct LocalStore::State::Stmts {
SQLiteStmt QueryReferrers; SQLiteStmt QueryReferrers;
SQLiteStmt InvalidatePath; SQLiteStmt InvalidatePath;
SQLiteStmt AddDerivationOutput; SQLiteStmt AddDerivationOutput;
SQLiteStmt RegisterRealisedOutput;
SQLiteStmt QueryValidDerivers; SQLiteStmt QueryValidDerivers;
SQLiteStmt QueryDerivationOutputs; SQLiteStmt QueryDerivationOutputs;
SQLiteStmt QueryRealisedOutput;
SQLiteStmt QueryAllRealisedOutputs;
SQLiteStmt QueryPathFromHashPart; SQLiteStmt QueryPathFromHashPart;
SQLiteStmt QueryValidPaths; SQLiteStmt QueryValidPaths;
}; };
int getSchema(Path schemaPath)
{
int curSchema = 0;
if (pathExists(schemaPath)) {
string s = readFile(schemaPath);
if (!string2Int(s, curSchema))
throw Error("'%1%' is corrupt", schemaPath);
}
return curSchema;
}
void migrateCASchema(SQLite& db, Path schemaPath, AutoCloseFD& lockFd)
{
const int nixCASchemaVersion = 1;
int curCASchema = getSchema(schemaPath);
if (curCASchema != nixCASchemaVersion) {
if (curCASchema > nixCASchemaVersion) {
throw Error("current Nix store ca-schema is version %1%, but I only support %2%",
curCASchema, nixCASchemaVersion);
}
if (!lockFile(lockFd.get(), ltWrite, false)) {
printInfo("waiting for exclusive access to the Nix store for ca drvs...");
lockFile(lockFd.get(), ltWrite, true);
}
if (curCASchema == 0) {
static const char schema[] =
#include "ca-specific-schema.sql.gen.hh"
;
db.exec(schema);
}
writeFile(schemaPath, fmt("%d", nixCASchemaVersion));
lockFile(lockFd.get(), ltRead, true);
}
}
LocalStore::LocalStore(const Params & params) LocalStore::LocalStore(const Params & params)
: StoreConfig(params) : StoreConfig(params)
, LocalFSStoreConfig(params)
, LocalStoreConfig(params)
, Store(params) , Store(params)
, LocalFSStore(params) , LocalFSStore(params)
, realStoreDir_{this, false, rootDir != "" ? rootDir + "/nix/store" : storeDir, "real", , realStoreDir_{this, false, rootDir != "" ? rootDir + "/nix/store" : storeDir, "real",
@ -238,6 +280,10 @@ LocalStore::LocalStore(const Params & params)
else openDB(*state, false); else openDB(*state, false);
if (settings.isExperimentalFeatureEnabled("ca-derivations")) {
migrateCASchema(state->db, dbDir + "/ca-schema", globalLock);
}
/* Prepare SQL statements. */ /* Prepare SQL statements. */
state->stmts->RegisterValidPath.create(state->db, state->stmts->RegisterValidPath.create(state->db,
"insert into ValidPaths (path, hash, registrationTime, deriver, narSize, ultimate, sigs, ca) values (?, ?, ?, ?, ?, ?, ?, ?);"); "insert into ValidPaths (path, hash, registrationTime, deriver, narSize, ultimate, sigs, ca) values (?, ?, ?, ?, ?, ?, ?, ?);");
@ -264,6 +310,28 @@ LocalStore::LocalStore(const Params & params)
state->stmts->QueryPathFromHashPart.create(state->db, state->stmts->QueryPathFromHashPart.create(state->db,
"select path from ValidPaths where path >= ? limit 1;"); "select path from ValidPaths where path >= ? limit 1;");
state->stmts->QueryValidPaths.create(state->db, "select path from ValidPaths"); state->stmts->QueryValidPaths.create(state->db, "select path from ValidPaths");
if (settings.isExperimentalFeatureEnabled("ca-derivations")) {
state->stmts->RegisterRealisedOutput.create(state->db,
R"(
insert or replace into Realisations (drvPath, outputName, outputPath)
values (?, ?, (select id from ValidPaths where path = ?))
;
)");
state->stmts->QueryRealisedOutput.create(state->db,
R"(
select Output.path from Realisations
inner join ValidPaths as Output on Output.id = Realisations.outputPath
where drvPath = ? and outputName = ?
;
)");
state->stmts->QueryAllRealisedOutputs.create(state->db,
R"(
select outputName, Output.path from Realisations
inner join ValidPaths as Output on Output.id = Realisations.outputPath
where drvPath = ?
;
)");
}
} }
@@ -301,16 +369,7 @@ std::string LocalStore::getUri()
int LocalStore::getSchema()
-{
-    int curSchema = 0;
-    if (pathExists(schemaPath)) {
-        string s = readFile(schemaPath);
-        if (!string2Int(s, curSchema))
-            throw Error("'%1%' is corrupt", schemaPath);
-    }
-    return curSchema;
-}
+{ return nix::getSchema(schemaPath); }
void LocalStore::openDB(State & state, bool create) void LocalStore::openDB(State & state, bool create)
{ {
@@ -597,13 +656,19 @@ void LocalStore::checkDerivationOutputs(const StorePath & drvPath, const Derivat
}
-void LocalStore::linkDeriverToPath(const StorePath & deriver, const string & outputName, const StorePath & output)
+void LocalStore::registerDrvOutput(const Realisation & info)
{
    auto state(_state.lock());
-    return linkDeriverToPath(*state, queryValidPathId(*state, deriver), outputName, output);
+    retrySQLite<void>([&]() {
+        state->stmts->RegisterRealisedOutput.use()
+            (info.id.strHash())
+            (info.id.outputName)
+            (printStorePath(info.outPath))
+            .exec();
+    });
}
-void LocalStore::linkDeriverToPath(State & state, uint64_t deriver, const string & outputName, const StorePath & output)
+void LocalStore::cacheDrvOutputMapping(State & state, const uint64_t deriver, const string & outputName, const StorePath & output)
{
    retrySQLite<void>([&]() {
        state.stmts->AddDerivationOutput.use()
@@ -653,7 +718,7 @@ uint64_t LocalStore::addValidPath(State & state,
    /* Floating CA derivations have indeterminate output paths until
       they are built, so don't register anything in that case */
    if (i.second.second)
-        linkDeriverToPath(state, id, i.first, *i.second.second);
+        cacheDrvOutputMapping(state, id, i.first, *i.second.second);
} }
} }
@ -815,69 +880,38 @@ StorePathSet LocalStore::queryValidDerivers(const StorePath & path)
} }
std::map<std::string, std::optional<StorePath>> LocalStore::queryPartialDerivationOutputMap(const StorePath & path_) std::map<std::string, std::optional<StorePath>>
LocalStore::queryDerivationOutputMapNoResolve(const StorePath& path_)
{ {
auto path = path_; auto path = path_;
std::map<std::string, std::optional<StorePath>> outputs; auto outputs = retrySQLite<std::map<std::string, std::optional<StorePath>>>([&]() {
Derivation drv = readDerivation(path);
for (auto & [outName, _] : drv.outputs) {
outputs.insert_or_assign(outName, std::nullopt);
}
bool haveCached = false;
{
auto resolutions = drvPathResolutions.lock();
auto resolvedPathOptIter = resolutions->find(path);
if (resolvedPathOptIter != resolutions->end()) {
auto & [_, resolvedPathOpt] = *resolvedPathOptIter;
if (resolvedPathOpt)
path = *resolvedPathOpt;
haveCached = true;
}
}
/* can't just use else-if instead of `!haveCached` because we need to unlock
`drvPathResolutions` before it is locked in `Derivation::resolve`. */
if (!haveCached && (drv.type() == DerivationType::CAFloating || drv.type() == DerivationType::DeferredInputAddressed)) {
/* Try resolve drv and use that path instead. */
auto attempt = drv.tryResolve(*this);
if (!attempt)
/* If we cannot resolve the derivation, we cannot have any path
assigned so we return the map of all std::nullopts. */
return outputs;
/* Just compute store path */
auto pathResolved = writeDerivation(*this, *std::move(attempt), NoRepair, true);
/* Store in memo table. */
/* FIXME: memo logic should not be local-store specific, should have
wrapper-method instead. */
drvPathResolutions.lock()->insert_or_assign(path, pathResolved);
path = std::move(pathResolved);
}
return retrySQLite<std::map<std::string, std::optional<StorePath>>>([&]() {
auto state(_state.lock()); auto state(_state.lock());
std::map<std::string, std::optional<StorePath>> outputs;
uint64_t drvId; uint64_t drvId;
try {
drvId = queryValidPathId(*state, path); drvId = queryValidPathId(*state, path);
} catch (InvalidPath &) { auto use(state->stmts->QueryDerivationOutputs.use()(drvId));
/* FIXME? if the derivation doesn't exist, we cannot have a mapping while (use.next())
for it. */
return outputs;
}
auto useQueryDerivationOutputs {
state->stmts->QueryDerivationOutputs.use()
(drvId)
};
while (useQueryDerivationOutputs.next())
outputs.insert_or_assign( outputs.insert_or_assign(
useQueryDerivationOutputs.getStr(0), use.getStr(0), parseStorePath(use.getStr(1)));
parseStorePath(useQueryDerivationOutputs.getStr(1))
);
return outputs; return outputs;
}); });
}
if (!settings.isExperimentalFeatureEnabled("ca-derivations"))
return outputs;
auto drv = readInvalidDerivation(path);
auto drvHashes = staticOutputHashes(*this, drv);
for (auto& [outputName, hash] : drvHashes) {
auto realisation = queryRealisation(DrvOutput{hash, outputName});
if (realisation)
outputs.insert_or_assign(outputName, realisation->outPath);
else
outputs.insert_or_assign(outputName, std::nullopt);
}
return outputs;
}
std::optional<StorePath> LocalStore::queryPathFromHashPart(const std::string & hashPart) std::optional<StorePath> LocalStore::queryPathFromHashPart(const std::string & hashPart)
{ {
@ -1612,5 +1646,18 @@ void LocalStore::createUser(const std::string & userName, uid_t userId)
} }
} }
std::optional<const Realisation> LocalStore::queryRealisation(
const DrvOutput& id) {
typedef std::optional<const Realisation> Ret;
return retrySQLite<Ret>([&]() -> Ret {
auto state(_state.lock());
auto use(state->stmts->QueryRealisedOutput.use()(id.strHash())(
id.outputName));
if (!use.next())
return std::nullopt;
auto outputPath = parseStorePath(use.getStr(0));
return Ret{
Realisation{.id = id, .outPath = outputPath}};
});
} }
} // namespace nix


@ -43,7 +43,7 @@ struct LocalStoreConfig : virtual LocalFSStoreConfig
}; };
class LocalStore : public LocalFSStore, public virtual LocalStoreConfig class LocalStore : public virtual LocalStoreConfig, public virtual LocalFSStore
{ {
private: private:
@ -127,7 +127,7 @@ public:
StorePathSet queryValidDerivers(const StorePath & path) override; StorePathSet queryValidDerivers(const StorePath & path) override;
std::map<std::string, std::optional<StorePath>> queryPartialDerivationOutputMap(const StorePath & path) override; std::map<std::string, std::optional<StorePath>> queryDerivationOutputMapNoResolve(const StorePath & path) override;
std::optional<StorePath> queryPathFromHashPart(const std::string & hashPart) override; std::optional<StorePath> queryPathFromHashPart(const std::string & hashPart) override;
@ -208,6 +208,13 @@ public:
garbage until it exceeds maxFree. */ garbage until it exceeds maxFree. */
void autoGC(bool sync = true); void autoGC(bool sync = true);
/* Register the store path 'output' as the output named 'outputName' of
derivation 'deriver'. */
void registerDrvOutput(const Realisation & info) override;
void cacheDrvOutputMapping(State & state, const uint64_t deriver, const string & outputName, const StorePath & output);
std::optional<const Realisation> queryRealisation(const DrvOutput&) override;
private: private:
int getSchema(); int getSchema();
@ -276,11 +283,6 @@ private:
specified by the secret-key-files option. */ specified by the secret-key-files option. */
void signPathInfo(ValidPathInfo & info); void signPathInfo(ValidPathInfo & info);
/* Register the store path 'output' as the output named 'outputName' of
derivation 'deriver'. */
void linkDeriverToPath(const StorePath & deriver, const string & outputName, const StorePath & output);
void linkDeriverToPath(State & state, uint64_t deriver, const string & outputName, const StorePath & output);
Path getRealStoreDir() override { return realStoreDir; } Path getRealStoreDir() override { return realStoreDir; }
void createUser(const std::string & userName, uid_t userId) override; void createUser(const std::string & userName, uid_t userId) override;


@ -48,7 +48,7 @@ ifneq ($(sandbox_shell),)
libstore_CXXFLAGS += -DSANDBOX_SHELL="\"$(sandbox_shell)\"" libstore_CXXFLAGS += -DSANDBOX_SHELL="\"$(sandbox_shell)\""
endif endif
$(d)/local-store.cc: $(d)/schema.sql.gen.hh $(d)/local-store.cc: $(d)/schema.sql.gen.hh $(d)/ca-specific-schema.sql.gen.hh
$(d)/build.cc: $(d)/build.cc:
@ -58,7 +58,7 @@ $(d)/build.cc:
@echo ')foo"' >> $@.tmp @echo ')foo"' >> $@.tmp
@mv $@.tmp $@ @mv $@.tmp $@
clean-files += $(d)/schema.sql.gen.hh clean-files += $(d)/schema.sql.gen.hh $(d)/ca-specific-schema.sql.gen.hh
$(eval $(call install-file-in, $(d)/nix-store.pc, $(prefix)/lib/pkgconfig, 0644)) $(eval $(call install-file-in, $(d)/nix-store.pc, $(prefix)/lib/pkgconfig, 0644))


@ -203,7 +203,7 @@ struct NarAccessor : public FSAccessor
return res; return res;
} }
std::string readFile(const Path & path) override std::string readFile(const Path & path, bool requireValidPath = true) override
{ {
auto i = get(path); auto i = get(path);
if (i.type != FSAccessor::Type::tRegular) if (i.type != FSAccessor::Type::tRegular)


@ -0,0 +1,49 @@
#include "realisation.hh"
#include "store-api.hh"
#include <nlohmann/json.hpp>
namespace nix {
MakeError(InvalidDerivationOutputId, Error);
DrvOutput DrvOutput::parse(const std::string &strRep) {
size_t n = strRep.find("!");
if (n == strRep.npos)
throw InvalidDerivationOutputId("Invalid derivation output id %s", strRep);
return DrvOutput{
.drvHash = Hash::parseAnyPrefixed(strRep.substr(0, n)),
.outputName = strRep.substr(n+1),
};
}
std::string DrvOutput::to_string() const {
return strHash() + "!" + outputName;
}
nlohmann::json Realisation::toJSON() const {
return nlohmann::json{
{"id", id.to_string()},
{"outPath", outPath.to_string()},
};
}
Realisation Realisation::fromJSON(
const nlohmann::json& json,
const std::string& whence) {
auto getField = [&](std::string fieldName) -> std::string {
auto fieldIterator = json.find(fieldName);
if (fieldIterator == json.end())
throw Error(
"Drv output info file '%1%' is corrupt, missing field %2%",
whence, fieldName);
return *fieldIterator;
};
return Realisation{
.id = DrvOutput::parse(getField("id")),
.outPath = StorePath(getField("outPath")),
};
}
} // namespace nix
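To make the textual conventions concrete: a `DrvOutput` prints as `<derivation hash>!<output name>` (the hash rendered by `strHash()` in prefixed base-16), and a `Realisation` serialises to a two-field JSON object. The following is a stand-alone sketch with made-up hash and store-path literals, not code from this commit.

```cpp
#include <cassert>
#include <iostream>
#include <string>

#include <nlohmann/json.hpp>

#include "realisation.hh"

int main()
{
    // Illustrative values only: a prefixed base-16 sha256 hash and a dummy
    // store path base name.
    std::string printed =
        "sha256:15e3f7b7c1bd4d3a8d3a7d9a0c9b8e7f6a5b4c3d2e1f0a9b8c7d6e5f4a3b2c1d!dev";

    auto id = nix::DrvOutput::parse(printed);
    assert(id.outputName == "dev");
    assert(id.to_string() == printed); // strHash() + "!" + outputName

    nix::Realisation r {
        .id = id,
        .outPath = nix::StorePath("g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-example"),
    };
    std::cout << r.toJSON().dump() << std::endl; // {"id":"sha256:…!dev","outPath":"g1w7…-example"}

    return 0;
}
```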


@ -0,0 +1,39 @@
#pragma once
#include "path.hh"
#include <nlohmann/json_fwd.hpp>
namespace nix {
struct DrvOutput {
// The hash modulo of the derivation
Hash drvHash;
std::string outputName;
std::string to_string() const;
std::string strHash() const
{ return drvHash.to_string(Base16, true); }
static DrvOutput parse(const std::string &);
bool operator<(const DrvOutput& other) const { return to_pair() < other.to_pair(); }
bool operator==(const DrvOutput& other) const { return to_pair() == other.to_pair(); }
private:
// Just to make comparison operators easier to write
std::pair<Hash, std::string> to_pair() const
{ return std::make_pair(drvHash, outputName); }
};
struct Realisation {
DrvOutput id;
StorePath outPath;
nlohmann::json toJSON() const;
static Realisation fromJSON(const nlohmann::json& json, const std::string& whence);
};
typedef std::map<DrvOutput, Realisation> DrvOutputs;
}


@ -43,13 +43,13 @@ void RemoteFSAccessor::addToCache(std::string_view hashPart, const std::string &
} }
} }
std::pair<ref<FSAccessor>, Path> RemoteFSAccessor::fetch(const Path & path_) std::pair<ref<FSAccessor>, Path> RemoteFSAccessor::fetch(const Path & path_, bool requireValidPath)
{ {
auto path = canonPath(path_); auto path = canonPath(path_);
auto [storePath, restPath] = store->toStorePath(path); auto [storePath, restPath] = store->toStorePath(path);
if (!store->isValidPath(storePath)) if (requireValidPath && !store->isValidPath(storePath))
throw InvalidPath("path '%1%' is not a valid store path", store->printStorePath(storePath)); throw InvalidPath("path '%1%' is not a valid store path", store->printStorePath(storePath));
auto i = nars.find(std::string(storePath.hashPart())); auto i = nars.find(std::string(storePath.hashPart()));
@ -113,9 +113,9 @@ StringSet RemoteFSAccessor::readDirectory(const Path & path)
return res.first->readDirectory(res.second); return res.first->readDirectory(res.second);
} }
std::string RemoteFSAccessor::readFile(const Path & path) std::string RemoteFSAccessor::readFile(const Path & path, bool requireValidPath)
{ {
auto res = fetch(path); auto res = fetch(path, requireValidPath);
return res.first->readFile(res.second); return res.first->readFile(res.second);
} }


@ -14,7 +14,7 @@ class RemoteFSAccessor : public FSAccessor
Path cacheDir; Path cacheDir;
std::pair<ref<FSAccessor>, Path> fetch(const Path & path_); std::pair<ref<FSAccessor>, Path> fetch(const Path & path_, bool requireValidPath = true);
friend class BinaryCacheStore; friend class BinaryCacheStore;
@ -32,7 +32,7 @@ public:
StringSet readDirectory(const Path & path) override; StringSet readDirectory(const Path & path) override;
std::string readFile(const Path & path) override; std::string readFile(const Path & path, bool requireValidPath = true) override;
std::string readLink(const Path & path) override; std::string readLink(const Path & path) override;
}; };


@ -77,8 +77,8 @@ void write(const Store & store, Sink & out, const std::optional<ContentAddress>
/* TODO: Separate these store impls into different files, give them better names */ /* TODO: Separate these store impls into different files, give them better names */
RemoteStore::RemoteStore(const Params & params) RemoteStore::RemoteStore(const Params & params)
: Store(params) : RemoteStoreConfig(params)
, RemoteStoreConfig(params) , Store(params)
, connections(make_ref<Pool<Connection>>( , connections(make_ref<Pool<Connection>>(
std::max(1, (int) maxConnections), std::max(1, (int) maxConnections),
[this]() { [this]() {
@ -609,6 +609,27 @@ StorePath RemoteStore::addTextToStore(const string & name, const string & s,
return addCAToStore(source, name, TextHashMethod{}, references, repair)->path; return addCAToStore(source, name, TextHashMethod{}, references, repair)->path;
} }
void RemoteStore::registerDrvOutput(const Realisation & info)
{
auto conn(getConnection());
conn->to << wopRegisterDrvOutput;
conn->to << info.id.to_string();
conn->to << std::string(info.outPath.to_string());
conn.processStderr();
}
std::optional<const Realisation> RemoteStore::queryRealisation(const DrvOutput & id)
{
auto conn(getConnection());
conn->to << wopQueryRealisation;
conn->to << id.to_string();
conn.processStderr();
auto outPaths = worker_proto::read(*this, conn->from, Phantom<std::set<StorePath>>{});
if (outPaths.empty())
return std::nullopt;
return {Realisation{.id = id, .outPath = *outPaths.begin()}};
}
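Together with the daemon handlers added above, these two methods give clients a complete register/query round trip through the generic `Store` interface. A hedged sketch of that round trip; the id and output path are assumed to come from the caller.

```cpp
#include <iostream>

#include "realisation.hh"
#include "store-api.hh"

// Sketch only: record where a (content-addressed) output ended up, then look
// it back up. Works against any Store that implements the two new methods,
// whether locally (LocalStore) or through the daemon (RemoteStore).
void recordAndLookUp(nix::Store & store, const nix::DrvOutput & id, const nix::StorePath & outPath)
{
    store.registerDrvOutput(nix::Realisation { .id = id, .outPath = outPath });

    if (auto realisation = store.queryRealisation(id))
        std::cout << id.to_string() << " -> "
                  << store.printStorePath(realisation->outPath) << std::endl;
}
```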
void RemoteStore::buildPaths(const std::vector<StorePathWithOutputs> & drvPaths, BuildMode buildMode) void RemoteStore::buildPaths(const std::vector<StorePathWithOutputs> & drvPaths, BuildMode buildMode)
{ {


@ -29,7 +29,7 @@ struct RemoteStoreConfig : virtual StoreConfig
/* FIXME: RemoteStore is a misnomer - should be something like /* FIXME: RemoteStore is a misnomer - should be something like
DaemonStore. */ DaemonStore. */
class RemoteStore : public virtual Store, public virtual RemoteStoreConfig class RemoteStore : public virtual RemoteStoreConfig, public virtual Store
{ {
public: public:
@ -81,6 +81,10 @@ public:
StorePath addTextToStore(const string & name, const string & s, StorePath addTextToStore(const string & name, const string & s,
const StorePathSet & references, RepairFlag repair) override; const StorePathSet & references, RepairFlag repair) override;
void registerDrvOutput(const Realisation & info) override;
std::optional<const Realisation> queryRealisation(const DrvOutput &) override;
void buildPaths(const std::vector<StorePathWithOutputs> & paths, BuildMode buildMode) override; void buildPaths(const std::vector<StorePathWithOutputs> & paths, BuildMode buildMode) override;
BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv, BuildResult buildDerivation(const StorePath & drvPath, const BasicDerivation & drv,


@ -166,7 +166,8 @@ S3Helper::FileTransferResult S3Helper::getObject(
dynamic_cast<std::stringstream &>(result.GetBody()).str()); dynamic_cast<std::stringstream &>(result.GetBody()).str());
} catch (S3Error & e) { } catch (S3Error & e) {
if (e.err != Aws::S3::S3Errors::NO_SUCH_KEY) throw; if ((e.err != Aws::S3::S3Errors::NO_SUCH_KEY) &&
(e.err != Aws::S3::S3Errors::ACCESS_DENIED)) throw;
} }
auto now2 = std::chrono::steady_clock::now(); auto now2 = std::chrono::steady_clock::now();
@ -176,6 +177,11 @@ S3Helper::FileTransferResult S3Helper::getObject(
return res; return res;
} }
S3BinaryCacheStore::S3BinaryCacheStore(const Params & params)
: BinaryCacheStoreConfig(params)
, BinaryCacheStore(params)
{ }
struct S3BinaryCacheStoreConfig : virtual BinaryCacheStoreConfig struct S3BinaryCacheStoreConfig : virtual BinaryCacheStoreConfig
{ {
using BinaryCacheStoreConfig::BinaryCacheStoreConfig; using BinaryCacheStoreConfig::BinaryCacheStoreConfig;
@ -194,7 +200,7 @@ struct S3BinaryCacheStoreConfig : virtual BinaryCacheStoreConfig
const std::string name() override { return "S3 Binary Cache Store"; } const std::string name() override { return "S3 Binary Cache Store"; }
}; };
struct S3BinaryCacheStoreImpl : public S3BinaryCacheStore, virtual S3BinaryCacheStoreConfig struct S3BinaryCacheStoreImpl : virtual S3BinaryCacheStoreConfig, public virtual S3BinaryCacheStore
{ {
std::string bucketName; std::string bucketName;
@ -207,6 +213,10 @@ struct S3BinaryCacheStoreImpl : public S3BinaryCacheStore, virtual S3BinaryCache
const std::string & bucketName, const std::string & bucketName,
const Params & params) const Params & params)
: StoreConfig(params) : StoreConfig(params)
, BinaryCacheStoreConfig(params)
, S3BinaryCacheStoreConfig(params)
, Store(params)
, BinaryCacheStore(params)
, S3BinaryCacheStore(params) , S3BinaryCacheStore(params)
, bucketName(bucketName) , bucketName(bucketName)
, s3Helper(profile, region, scheme, endpoint) , s3Helper(profile, region, scheme, endpoint)


@ -6,13 +6,11 @@
namespace nix { namespace nix {
class S3BinaryCacheStore : public BinaryCacheStore class S3BinaryCacheStore : public virtual BinaryCacheStore
{ {
protected: protected:
S3BinaryCacheStore(const Params & params) S3BinaryCacheStore(const Params & params);
: BinaryCacheStore(params)
{ }
public: public:


@ -20,12 +20,14 @@ struct SSHStoreConfig : virtual RemoteStoreConfig
const std::string name() override { return "SSH Store"; } const std::string name() override { return "SSH Store"; }
}; };
class SSHStore : public virtual RemoteStore, public virtual SSHStoreConfig class SSHStore : public virtual SSHStoreConfig, public virtual RemoteStore
{ {
public: public:
SSHStore(const std::string & scheme, const std::string & host, const Params & params) SSHStore(const std::string & scheme, const std::string & host, const Params & params)
: StoreConfig(params) : StoreConfig(params)
, RemoteStoreConfig(params)
, SSHStoreConfig(params)
, Store(params) , Store(params)
, RemoteStore(params) , RemoteStore(params)
, host(host) , host(host)


@ -366,6 +366,29 @@ bool Store::PathInfoCacheValue::isKnownNow()
return std::chrono::steady_clock::now() < time_point + ttl; return std::chrono::steady_clock::now() < time_point + ttl;
} }
std::map<std::string, std::optional<StorePath>> Store::queryDerivationOutputMapNoResolve(const StorePath & path)
{
std::map<std::string, std::optional<StorePath>> outputs;
auto drv = readInvalidDerivation(path);
for (auto& [outputName, output] : drv.outputsAndOptPaths(*this)) {
outputs.emplace(outputName, output.second);
}
return outputs;
}
std::map<std::string, std::optional<StorePath>> Store::queryPartialDerivationOutputMap(const StorePath & path)
{
if (settings.isExperimentalFeatureEnabled("ca-derivations")) {
auto resolvedDrv = Derivation::tryResolve(*this, path);
if (resolvedDrv) {
auto resolvedDrvPath = writeDerivation(*this, *resolvedDrv, NoRepair, true);
if (isValidPath(resolvedDrvPath))
return queryDerivationOutputMapNoResolve(resolvedDrvPath);
}
}
return queryDerivationOutputMapNoResolve(path);
}
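For callers the visible effect is that, with `ca-derivations` enabled, the returned map can legitimately contain `std::nullopt` for outputs whose paths are not determined yet. A small caller-side sketch (names assumed, not taken from this commit):

```cpp
#include <iostream>

#include "store-api.hh"

// Sketch only: print the outputs of a derivation, tolerating outputs whose
// paths are not known yet (e.g. unbuilt floating content-addressed outputs).
void printDerivationOutputs(nix::Store & store, const nix::StorePath & drvPath)
{
    for (auto & [outputName, maybePath] : store.queryPartialDerivationOutputMap(drvPath)) {
        if (maybePath)
            std::cout << outputName << " -> " << store.printStorePath(*maybePath) << "\n";
        else
            std::cout << outputName << " -> (not yet built)\n";
    }
}
```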
OutputPathMap Store::queryDerivationOutputMap(const StorePath & path) { OutputPathMap Store::queryDerivationOutputMap(const StorePath & path) {
auto resp = queryPartialDerivationOutputMap(path); auto resp = queryPartialDerivationOutputMap(path);
OutputPathMap result; OutputPathMap result;
@ -729,8 +752,16 @@ void Store::buildPaths(const std::vector<StorePathWithOutputs> & paths, BuildMod
StorePathSet paths2; StorePathSet paths2;
for (auto & path : paths) { for (auto & path : paths) {
if (path.path.isDerivation()) if (path.path.isDerivation()) {
auto outPaths = queryPartialDerivationOutputMap(path.path);
for (auto & outputName : path.outputs) {
auto currentOutputPathIter = outPaths.find(outputName);
if (currentOutputPathIter == outPaths.end() ||
!currentOutputPathIter->second ||
!isValidPath(*currentOutputPathIter->second))
unsupported("buildPaths"); unsupported("buildPaths");
}
} else
paths2.insert(path.path); paths2.insert(path.path);
} }
@@ -1018,26 +1049,23 @@ Derivation Store::derivationFromPath(const StorePath & drvPath)
    return readDerivation(drvPath);
}
-Derivation Store::readDerivation(const StorePath & drvPath)
+Derivation readDerivationCommon(Store& store, const StorePath& drvPath, bool requireValidPath)
{
-    auto accessor = getFSAccessor();
+    auto accessor = store.getFSAccessor();
    try {
-        return parseDerivation(*this,
-            accessor->readFile(printStorePath(drvPath)),
+        return parseDerivation(store,
+            accessor->readFile(store.printStorePath(drvPath), requireValidPath),
            Derivation::nameFromPath(drvPath));
    } catch (FormatError & e) {
-        throw Error("error parsing derivation '%s': %s", printStorePath(drvPath), e.msg());
+        throw Error("error parsing derivation '%s': %s", store.printStorePath(drvPath), e.msg());
    }
}
+Derivation Store::readDerivation(const StorePath & drvPath)
+{ return readDerivationCommon(*this, drvPath, true); }
Derivation Store::readInvalidDerivation(const StorePath & drvPath)
-{
-    return parseDerivation(
-        *this,
-        readFile(Store::toRealPath(drvPath)),
-        Derivation::nameFromPath(drvPath));
-}
+{ return readDerivationCommon(*this, drvPath, false); }
} }


@ -1,5 +1,6 @@
#pragma once #pragma once
#include "realisation.hh"
#include "path.hh" #include "path.hh"
#include "hash.hh" #include "hash.hh"
#include "content-address.hh" #include "content-address.hh"
@ -174,25 +175,7 @@ struct StoreConfig : public Config
{ {
using Config::Config; using Config::Config;
/** StoreConfig() = delete;
* When constructing a store implementation, we pass in a map `params` of
* parameters that's supposed to initialize the associated config.
* To do that, we must use the `StoreConfig(StringMap & params)`
* constructor, so we'd like to `delete` its default constructor to enforce
* it.
*
* However, actually deleting it means that all the subclasses of
* `StoreConfig` will have their default constructor deleted (because it's
* supposed to call the deleted default constructor of `StoreConfig`). But
* because we're always using virtual inheritance, the constructors of
* child classes will never implicitely call this one, so deleting it will
* be more painful than anything else.
*
* So we `assert(false)` here to ensure at runtime that the right
* constructor is always called without having to redefine a custom
* constructor for each `*Config` class.
*/
StoreConfig() { assert(false); }
virtual ~StoreConfig() { } virtual ~StoreConfig() { }
@ -396,6 +379,8 @@ protected:
public: public:
virtual std::optional<const Realisation> queryRealisation(const DrvOutput &) = 0;
/* Queries the set of incoming FS references for a store path. /* Queries the set of incoming FS references for a store path.
The result is not cleared. */ The result is not cleared. */
virtual void queryReferrers(const StorePath & path, StorePathSet & referrers) virtual void queryReferrers(const StorePath & path, StorePathSet & referrers)
@ -413,8 +398,13 @@ public:
    /* Query the mapping outputName => outputPath for the given derivation. All
       outputs are mentioned so ones missing the mapping are mapped to
       `std::nullopt`. */
virtual std::map<std::string, std::optional<StorePath>> queryPartialDerivationOutputMap(const StorePath & path) virtual std::map<std::string, std::optional<StorePath>> queryPartialDerivationOutputMap(const StorePath & path);
{ unsupported("queryPartialDerivationOutputMap"); }
/*
* Similar to `queryPartialDerivationOutputMap`, but doesn't try to resolve
* the derivation
*/
virtual std::map<std::string, std::optional<StorePath>> queryDerivationOutputMapNoResolve(const StorePath & path);
/* Query the mapping outputName=>outputPath for the given derivation. /* Query the mapping outputName=>outputPath for the given derivation.
Assume every output has a mapping and throw an exception otherwise. */ Assume every output has a mapping and throw an exception otherwise. */
@ -468,6 +458,18 @@ public:
virtual StorePath addTextToStore(const string & name, const string & s, virtual StorePath addTextToStore(const string & name, const string & s,
const StorePathSet & references, RepairFlag repair = NoRepair) = 0; const StorePathSet & references, RepairFlag repair = NoRepair) = 0;
/**
* Add a mapping indicating that `deriver!outputName` maps to the output path
* `output`.
*
* This is redundant for known-input-addressed and fixed-output derivations
* as this information is already present in the drv file, but necessary for
* floating-ca derivations and their dependencies as there's no way to
* retrieve this information otherwise.
*/
virtual void registerDrvOutput(const Realisation & output)
{ unsupported("registerDrvOutput"); }
/* Write a NAR dump of a store path. */ /* Write a NAR dump of a store path. */
virtual void narFromPath(const StorePath & path, Sink & sink) = 0; virtual void narFromPath(const StorePath & path, Sink & sink) = 0;


@ -15,6 +15,9 @@ namespace nix {
UDSRemoteStore::UDSRemoteStore(const Params & params) UDSRemoteStore::UDSRemoteStore(const Params & params)
: StoreConfig(params) : StoreConfig(params)
, LocalFSStoreConfig(params)
, RemoteStoreConfig(params)
, UDSRemoteStoreConfig(params)
, Store(params) , Store(params)
, LocalFSStore(params) , LocalFSStore(params)
, RemoteStore(params) , RemoteStore(params)


@ -14,15 +14,10 @@ struct UDSRemoteStoreConfig : virtual LocalFSStoreConfig, virtual RemoteStoreCon
{ {
} }
UDSRemoteStoreConfig()
: UDSRemoteStoreConfig(Store::Params({}))
{
}
const std::string name() override { return "Local Daemon Store"; } const std::string name() override { return "Local Daemon Store"; }
}; };
class UDSRemoteStore : public LocalFSStore, public RemoteStore, public virtual UDSRemoteStoreConfig class UDSRemoteStore : public virtual UDSRemoteStoreConfig, public virtual LocalFSStore, public virtual RemoteStore
{ {
public: public:


@ -1,5 +1,8 @@
#pragma once #pragma once
#include "store-api.hh"
#include "serialise.hh"
namespace nix { namespace nix {
@ -50,6 +53,8 @@ typedef enum {
wopAddToStoreNar = 39, wopAddToStoreNar = 39,
wopQueryMissing = 40, wopQueryMissing = 40,
wopQueryDerivationOutputMap = 41, wopQueryDerivationOutputMap = 41,
wopRegisterDrvOutput = 42,
wopQueryRealisation = 43,
} WorkerOp; } WorkerOp;


@ -254,6 +254,8 @@ nlohmann::json Args::toJSON()
res["description"] = description(); res["description"] = description();
res["flags"] = std::move(flags); res["flags"] = std::move(flags);
res["args"] = std::move(args); res["args"] = std::move(args);
auto s = doc();
if (s != "") res.emplace("doc", stripIndentation(s));
return res; return res;
} }
@ -351,38 +353,6 @@ void printTable(std::ostream & out, const Table2 & table)
} }
} }
void Command::printHelp(const string & programName, std::ostream & out)
{
Args::printHelp(programName, out);
auto exs = examples();
if (!exs.empty()) {
out << "\n" ANSI_BOLD "Examples:" ANSI_NORMAL "\n";
for (auto & ex : exs)
out << "\n"
<< " " << ex.description << "\n" // FIXME: wrap
<< " $ " << ex.command << "\n";
}
}
nlohmann::json Command::toJSON()
{
auto exs = nlohmann::json::array();
for (auto & example : examples()) {
auto ex = nlohmann::json::object();
ex["description"] = example.description;
ex["command"] = chomp(stripIndentation(example.command));
exs.push_back(std::move(ex));
}
auto res = Args::toJSON();
res["examples"] = std::move(exs);
auto s = doc();
if (s != "") res.emplace("doc", stripIndentation(s));
return res;
}
MultiCommand::MultiCommand(const Commands & commands) MultiCommand::MultiCommand(const Commands & commands)
: commands(commands) : commands(commands)
{ {


@ -25,6 +25,9 @@ public:
/* Return a short one-line description of the command. */ /* Return a short one-line description of the command. */
virtual std::string description() { return ""; } virtual std::string description() { return ""; }
/* Return documentation about this command, in Markdown format. */
virtual std::string doc() { return ""; }
protected: protected:
static const size_t ArityAny = std::numeric_limits<size_t>::max(); static const size_t ArityAny = std::numeric_limits<size_t>::max();
@ -225,28 +228,11 @@ struct Command : virtual Args
virtual void prepare() { }; virtual void prepare() { };
virtual void run() = 0; virtual void run() = 0;
/* Return documentation about this command, in Markdown format. */
virtual std::string doc() { return ""; }
struct Example
{
std::string description;
std::string command;
};
typedef std::list<Example> Examples;
virtual Examples examples() { return Examples(); }
typedef int Category; typedef int Category;
static constexpr Category catDefault = 0; static constexpr Category catDefault = 0;
virtual Category category() { return catDefault; } virtual Category category() { return catDefault; }
void printHelp(const string & programName, std::ostream & out) override;
nlohmann::json toJSON() override;
}; };
typedef std::map<std::string, std::function<ref<Command>()>> Commands; typedef std::map<std::string, std::function<ref<Command>()>> Commands;


@ -62,12 +62,8 @@ std::optional<LinesOfCode> getCodeLines(const ErrPos & errPos)
LinesOfCode loc; LinesOfCode loc;
try { try {
AutoCloseFD fd = open(errPos.file.c_str(), O_RDONLY | O_CLOEXEC); AutoCloseFD fd = open(errPos.file.c_str(), O_RDONLY | O_CLOEXEC);
if (!fd) { if (!fd) return {};
logError(SysError("opening file '%1%'", errPos.file).info());
return std::nullopt;
}
else
{
// count the newlines. // count the newlines.
int count = 0; int count = 0;
string line; string line;
@ -77,21 +73,18 @@ std::optional<LinesOfCode> getCodeLines(const ErrPos & errPos)
line = readLine(fd.get()); line = readLine(fd.get());
++count; ++count;
if (count < pl) if (count < pl)
{
; ;
} else if (count == pl)
else if (count == pl) {
loc.prevLineOfCode = line; loc.prevLineOfCode = line;
} else if (count == pl + 1) { else if (count == pl + 1)
loc.errLineOfCode = line; loc.errLineOfCode = line;
} else if (count == pl + 2) { else if (count == pl + 2) {
loc.nextLineOfCode = line; loc.nextLineOfCode = line;
break; break;
} }
} while (true); } while (true);
return loc; return loc;
} }
}
catch (EndOfFile & eof) { catch (EndOfFile & eof) {
if (loc.errLineOfCode.has_value()) if (loc.errLineOfCode.has_value())
return loc; return loc;
@ -99,7 +92,6 @@ std::optional<LinesOfCode> getCodeLines(const ErrPos & errPos)
return std::nullopt; return std::nullopt;
} }
catch (std::exception & e) { catch (std::exception & e) {
printError("error reading nix file: %s\n%s", errPos.file, e.what());
return std::nullopt; return std::nullopt;
} }
} else { } else {


@ -49,7 +49,7 @@ namespace nix {
}); });
auto str = testing::internal::GetCapturedStderr(); auto str = testing::internal::GetCapturedStderr();
ASSERT_STREQ(str.c_str(), "\x1B[31;1merror:\x1B[0m\x1B[34;1m --- SysError --- error-unit-test\x1B[0m\nopening file '\x1B[33;1mrandom.nix\x1B[0m': \x1B[33;1mNo such file or directory\x1B[0m\n@nix {\"action\":\"msg\",\"column\":13,\"file\":\"random.nix\",\"level\":0,\"line\":2,\"msg\":\"\\u001b[31;1merror:\\u001b[0m\\u001b[34;1m --- error name --- error-unit-test\\u001b[0m\\n\\u001b[34;1mat: \\u001b[33;1m(2:13)\\u001b[34;1m in file: \\u001b[0mrandom.nix\\n\\nerror without any code lines.\\n\\nthis hint has \\u001b[33;1myellow\\u001b[0m templated \\u001b[33;1mvalues\\u001b[0m!!\",\"raw_msg\":\"this hint has \\u001b[33;1myellow\\u001b[0m templated \\u001b[33;1mvalues\\u001b[0m!!\"}\n"); ASSERT_STREQ(str.c_str(), "@nix {\"action\":\"msg\",\"column\":13,\"file\":\"random.nix\",\"level\":0,\"line\":2,\"msg\":\"\\u001b[31;1merror:\\u001b[0m\\u001b[34;1m --- error name --- error-unit-test\\u001b[0m\\n\\u001b[34;1mat: \\u001b[33;1m(2:13)\\u001b[34;1m in file: \\u001b[0mrandom.nix\\n\\nerror without any code lines.\\n\\nthis hint has \\u001b[33;1myellow\\u001b[0m templated \\u001b[33;1mvalues\\u001b[0m!!\",\"raw_msg\":\"this hint has \\u001b[33;1myellow\\u001b[0m templated \\u001b[33;1mvalues\\u001b[0m!!\"}\n");
} }
TEST(logEI, appendingHintsToPreviousError) { TEST(logEI, appendingHintsToPreviousError) {
@ -208,7 +208,7 @@ namespace nix {
}); });
auto str = testing::internal::GetCapturedStderr(); auto str = testing::internal::GetCapturedStderr();
ASSERT_STREQ(str.c_str(), "\x1B[31;1merror:\x1B[0m\x1B[34;1m --- SysError --- error-unit-test\x1B[0m\nopening file '\x1B[33;1minvalid filename\x1B[0m': \x1B[33;1mNo such file or directory\x1B[0m\n\x1B[31;1merror:\x1B[0m\x1B[34;1m --- error name --- error-unit-test\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(2:13)\x1B[34;1m in file: \x1B[0minvalid filename\n\nerror without any code lines.\n\nthis hint has \x1B[33;1myellow\x1B[0m templated \x1B[33;1mvalues\x1B[0m!!\n"); ASSERT_STREQ(str.c_str(), "\x1B[31;1merror:\x1B[0m\x1B[34;1m --- error name --- error-unit-test\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(2:13)\x1B[34;1m in file: \x1B[0minvalid filename\n\nerror without any code lines.\n\nthis hint has \x1B[33;1myellow\x1B[0m templated \x1B[33;1mvalues\x1B[0m!!\n");
} }
TEST(logError, logErrorWithOnlyHintAndName) { TEST(logError, logErrorWithOnlyHintAndName) {
@ -290,7 +290,7 @@ namespace nix {
logError(e.info()); logError(e.info());
auto str = testing::internal::GetCapturedStderr(); auto str = testing::internal::GetCapturedStderr();
ASSERT_STREQ(str.c_str(), "\x1B[31;1merror:\x1B[0m\x1B[34;1m --- SysError --- error-unit-test\x1B[0m\nopening file '\x1B[33;1minvalid filename\x1B[0m': \x1B[33;1mNo such file or directory\x1B[0m\n\x1B[31;1merror:\x1B[0m\x1B[34;1m --- AssertionError --- error-unit-test\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(2:13)\x1B[34;1m from string\x1B[0m\n\nshow-traces\n\n 1| previous line of code\n 2| this is the problem line of code\n | \x1B[31;1m^\x1B[0m\n 3| next line of code\n\nit has been \x1B[33;1mzero\x1B[0m days since our last error\n\x1B[34;1m---- show-trace ----\x1B[0m\n\x1B[34;1mtrace: \x1B[0mwhile trying to compute \x1B[33;1m42\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(1:19)\x1B[34;1m from stdin\x1B[0m\n\n 1| this is the other problem line of code\n | \x1B[31;1m^\x1B[0m\n\n\x1B[34;1mtrace: \x1B[0mwhile doing something without a \x1B[33;1mpos\x1B[0m\n\x1B[34;1mtrace: \x1B[0mmissing \x1B[33;1mnix file\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(100:1)\x1B[34;1m in file: \x1B[0minvalid filename\n"); ASSERT_STREQ(str.c_str(), "\x1B[31;1merror:\x1B[0m\x1B[34;1m --- AssertionError --- error-unit-test\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(2:13)\x1B[34;1m from string\x1B[0m\n\nshow-traces\n\n 1| previous line of code\n 2| this is the problem line of code\n | \x1B[31;1m^\x1B[0m\n 3| next line of code\n\nit has been \x1B[33;1mzero\x1B[0m days since our last error\n\x1B[34;1m---- show-trace ----\x1B[0m\n\x1B[34;1mtrace: \x1B[0mwhile trying to compute \x1B[33;1m42\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(1:19)\x1B[34;1m from stdin\x1B[0m\n\n 1| this is the other problem line of code\n | \x1B[31;1m^\x1B[0m\n\n\x1B[34;1mtrace: \x1B[0mwhile doing something without a \x1B[33;1mpos\x1B[0m\n\x1B[34;1mtrace: \x1B[0mmissing \x1B[33;1mnix file\x1B[0m\n\x1B[34;1mat: \x1B[33;1m(100:1)\x1B[34;1m in file: \x1B[0minvalid filename\n");
} }
TEST(addTrace, hideTracesWithoutShowTrace) { TEST(addTrace, hideTracesWithoutShowTrace) {


@ -1138,38 +1138,38 @@ static void opQuery(Globals & globals, Strings opFlags, Strings opArgs)
i.queryName(), j) i.queryName(), j)
}); });
else { else {
if (v->type == tString) { if (v->type() == nString) {
attrs2["type"] = "string"; attrs2["type"] = "string";
attrs2["value"] = v->string.s; attrs2["value"] = v->string.s;
xml.writeEmptyElement("meta", attrs2); xml.writeEmptyElement("meta", attrs2);
} else if (v->type == tInt) { } else if (v->type() == nInt) {
attrs2["type"] = "int"; attrs2["type"] = "int";
attrs2["value"] = (format("%1%") % v->integer).str(); attrs2["value"] = (format("%1%") % v->integer).str();
xml.writeEmptyElement("meta", attrs2); xml.writeEmptyElement("meta", attrs2);
} else if (v->type == tFloat) { } else if (v->type() == nFloat) {
attrs2["type"] = "float"; attrs2["type"] = "float";
attrs2["value"] = (format("%1%") % v->fpoint).str(); attrs2["value"] = (format("%1%") % v->fpoint).str();
xml.writeEmptyElement("meta", attrs2); xml.writeEmptyElement("meta", attrs2);
} else if (v->type == tBool) { } else if (v->type() == nBool) {
attrs2["type"] = "bool"; attrs2["type"] = "bool";
attrs2["value"] = v->boolean ? "true" : "false"; attrs2["value"] = v->boolean ? "true" : "false";
xml.writeEmptyElement("meta", attrs2); xml.writeEmptyElement("meta", attrs2);
} else if (v->isList()) { } else if (v->type() == nList) {
attrs2["type"] = "strings"; attrs2["type"] = "strings";
XMLOpenElement m(xml, "meta", attrs2); XMLOpenElement m(xml, "meta", attrs2);
for (unsigned int j = 0; j < v->listSize(); ++j) { for (unsigned int j = 0; j < v->listSize(); ++j) {
if (v->listElems()[j]->type != tString) continue; if (v->listElems()[j]->type() != nString) continue;
XMLAttrs attrs3; XMLAttrs attrs3;
attrs3["value"] = v->listElems()[j]->string.s; attrs3["value"] = v->listElems()[j]->string.s;
xml.writeEmptyElement("string", attrs3); xml.writeEmptyElement("string", attrs3);
} }
} else if (v->type == tAttrs) { } else if (v->type() == nAttrs) {
attrs2["type"] = "strings"; attrs2["type"] = "strings";
XMLOpenElement m(xml, "meta", attrs2); XMLOpenElement m(xml, "meta", attrs2);
Bindings & attrs = *v->attrs; Bindings & attrs = *v->attrs;
for (auto &i : attrs) { for (auto &i : attrs) {
Attr & a(*attrs.find(i.name)); Attr & a(*attrs.find(i.name));
if(a.value->type != tString) continue; if(a.value->type() != nString) continue;
XMLAttrs attrs3; XMLAttrs attrs3;
attrs3["type"] = i.name; attrs3["type"] = i.name;
attrs3["value"] = a.value->string.s; attrs3["value"] = a.value->string.s;

View file

@ -43,22 +43,11 @@ struct CmdBuild : InstallablesCommand, MixDryRun, MixJSON, MixProfile
return "build a derivation or fetch a store path"; return "build a derivation or fetch a store path";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "build.md"
"To build and run GNU Hello from NixOS 17.03:", ;
"nix build -f channel:nixos-17.03 hello; ./result/bin/hello"
},
Example{
"To build the build.x86_64-linux attribute from release.nix:",
"nix build -f release.nix build.x86_64-linux"
},
Example{
"To make a profile point at GNU Hello:",
"nix build --profile /tmp/profile nixpkgs#hello"
},
};
} }
void run(ref<Store> store) override void run(ref<Store> store) override

92
src/nix/build.md Normal file
View file

@ -0,0 +1,92 @@
R""(
# Examples
* Build the default package from the flake in the current directory:
```console
# nix build
```
* Build and run GNU Hello from the `nixpkgs` flake:
```console
# nix build nixpkgs#hello
# ./result/bin/hello
Hello, world!
```
* Build GNU Hello and Cowsay, leaving two result symlinks:
```console
# nix build nixpkgs#hello nixpkgs#cowsay
# ls -l result*
lrwxrwxrwx 1 … result -> /nix/store/v5sv61sszx301i0x6xysaqzla09nksnd-hello-2.10
lrwxrwxrwx 1 … result-1 -> /nix/store/rkfrm0z6x6jmi7d3gsmma4j53h15mg33-cowsay-3.03+dfsg2
```
* Build a specific output:
```console
# nix build nixpkgs#glibc.dev
# ls -ld ./result-dev
lrwxrwxrwx 1 … ./result-dev -> /nix/store/dkm3gwl0xrx0wrw6zi5x3px3lpgjhlw4-glibc-2.32-dev
```
* Build attribute `build.x86_64-linux` from (non-flake) Nix expression
`release.nix`:
```console
# nix build -f release.nix build.x86_64-linux
```
* Build a NixOS system configuration from a flake, and make a profile
point to the result:
```console
# nix build --profile /nix/var/nix/profiles/system \
~/my-configurations#nixosConfigurations.machine.config.system.build.toplevel
```
(This is essentially what `nixos-rebuild` does.)
* Build an expression specified on the command line:
```console
# nix build --impure --expr \
'with import <nixpkgs> {};
runCommand "foo" {
buildInputs = [ hello ];
}
"hello > $out"'
# cat ./result
Hello, world!
```
Note that `--impure` is needed because we're using `<nixpkgs>`,
which relies on the `$NIX_PATH` environment variable.
* Fetch a store path from the configured substituters, if it doesn't
already exist:
```console
# nix build /nix/store/rkfrm0z6x6jmi7d3gsmma4j53h15mg33-cowsay-3.03+dfsg2
```
# Description
`nix build` builds the specified *installables*. Installables that
resolve to derivations are built (or substituted if possible). Store
path installables are substituted.
Unless `--no-link` is specified, after a successful build, it creates
symlinks to the store paths of the installables. These symlinks have
the prefix `./result` by default; this can be overridden using the
`--out-link` option. Each symlink has a suffix `-<N>-<outname>`, where
*N* is the index of the installable (with the left-most installable
having index 0), and *outname* is the symbolic derivation output name
(e.g. `bin`, `dev` or `lib`). `-<N>` is omitted if *N* = 0, and
`-<outname>` is omitted if *outname* = `out` (denoting the default
output).
)""

View file

@ -40,14 +40,11 @@ struct CmdBundle : InstallableCommand
return "bundle an application so that it works outside of the Nix store"; return "bundle an application so that it works outside of the Nix store";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "bundle.md"
"To bundle Hello:", ;
"nix bundle hello"
},
};
} }
Category category() override { return catSecondary; } Category category() override { return catSecondary; }

36
src/nix/bundle.md Normal file
View file

@ -0,0 +1,36 @@
R""(
# Examples
* Bundle Hello:
```console
# nix bundle nixpkgs#hello
# ./hello
Hello, world!
```
* Bundle a specific version of Nix:
```console
# nix bundle github:NixOS/nix/e3ddffb27e5fc37a209cfd843c6f7f6a9460a8ec
# ./nix --version
nix (Nix) 2.4pre20201215_e3ddffb
```
# Description
`nix bundle` packs the closure of the [Nix app](./nix3-run.md)
*installable* into a single self-extracting executable. See the
[`nix-bundle` homepage](https://github.com/matthewbauer/nix-bundle)
for more details.
> **Note**
>
> This command only works on Linux.
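As a hedged illustration (not taken from this repository), a flake that provides a Nix app which `nix bundle` can pack might look like the sketch below; the `{ type, program }` app schema is assumed from the app definition format referenced above (`nix3-run`) and is not defined in this file.
```nix
# Hypothetical flake.nix: exposes GNU Hello from the nixpkgs flake as the
# default app, which 'nix bundle' could then pack into one executable.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  outputs = { self, nixpkgs }: {
    defaultApp.x86_64-linux = {
      type = "app";
      program = "${nixpkgs.legacyPackages.x86_64-linux.hello}/bin/hello";
    };
  };
}
```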
# Bundler definitions
TODO
)""

View file

@ -37,6 +37,13 @@ struct CmdCatStore : StoreCommand, MixCat
return "print the contents of a file in the Nix store on stdout"; return "print the contents of a file in the Nix store on stdout";
} }
std::string doc() override
{
return
#include "store-cat.md"
;
}
void run(ref<Store> store) override void run(ref<Store> store) override
{ {
cat(store->getFSAccessor()); cat(store->getFSAccessor());
@ -62,6 +69,13 @@ struct CmdCatNar : StoreCommand, MixCat
return "print the contents of a file inside a NAR file on stdout"; return "print the contents of a file inside a NAR file on stdout";
} }
std::string doc() override
{
return
#include "nar-cat.md"
;
}
void run(ref<Store> store) override void run(ref<Store> store) override
{ {
cat(makeNarAccessor(make_ref<std::string>(readFile(narPath)))); cat(makeNarAccessor(make_ref<std::string>(readFile(narPath))));

View file

@ -54,32 +54,11 @@ struct CmdCopy : StorePathsCommand
return "copy paths between Nix stores"; return "copy paths between Nix stores";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "copy.md"
"To copy Firefox from the local store to a binary cache in file:///tmp/cache:", ;
"nix copy --to file:///tmp/cache $(type -p firefox)"
},
Example{
"To copy the entire current NixOS system closure to another machine via SSH:",
"nix copy --to ssh://server /run/current-system"
},
Example{
"To copy a closure from another machine via SSH:",
"nix copy --from ssh://server /nix/store/a6cnl93nk1wxnq84brbbwr6hxw9gp2w9-blender-2.79-rc2"
},
#ifdef ENABLE_S3
Example{
"To copy Hello to an S3 binary cache:",
"nix copy --to s3://my-bucket?region=eu-west-1 nixpkgs#hello"
},
Example{
"To copy Hello to an S3-compatible binary cache:",
"nix copy --to s3://my-bucket?region=eu-west-1&endpoint=example.com nixpkgs#hello"
},
#endif
};
} }
Category category() override { return catSecondary; } Category category() override { return catSecondary; }

58
src/nix/copy.md Normal file
View file

@ -0,0 +1,58 @@
R""(
# Examples
* Copy Firefox from the local store to a binary cache in `/tmp/cache`:
```console
# nix copy --to file:///tmp/cache $(type -p firefox)
```
Note the `file://` - without this, the destination is a chroot
store, not a binary cache.
* Copy the entire current NixOS system closure to another machine via
SSH:
```console
# nix copy -s --to ssh://server /run/current-system
```
The `-s` flag causes the remote machine to try to substitute missing
store paths, which may be faster if the link between the local and
remote machines is slower than the link between the remote machine
and its substituters (e.g. `https://cache.nixos.org`).
* Copy a closure from another machine via SSH:
```console
# nix copy --from ssh://server /nix/store/a6cnl93nk1wxnq84brbbwr6hxw9gp2w9-blender-2.79-rc2
```
* Copy Hello to a binary cache in an Amazon S3 bucket:
```console
# nix copy --to s3://my-bucket?region=eu-west-1 nixpkgs#hello
```
or to an S3-compatible storage system:
```console
# nix copy --to 's3://my-bucket?region=eu-west-1&endpoint=example.com' nixpkgs#hello
```
Note that this only works if Nix is built with AWS support.
* Copy a closure from `/nix/store` to the chroot store `/tmp/nix/nix/store`:
```console
# nix copy --to /tmp/nix nixpkgs#hello --no-check-sigs
```
# Description
`nix copy` copies store path closures between two Nix stores. The
source store is specified using `--from` and the destination using
`--to`. If one of these is omitted, it defaults to the local store.
)""

View file

@ -385,30 +385,11 @@ struct CmdDevelop : Common, MixEnvironment
return "run a bash shell that provides the build environment of a derivation"; return "run a bash shell that provides the build environment of a derivation";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "develop.md"
"To get the build environment of GNU hello:", ;
"nix develop nixpkgs#hello"
},
Example{
"To get the build environment of the default package of flake in the current directory:",
"nix develop"
},
Example{
"To store the build environment in a profile:",
"nix develop --profile /tmp/my-shell nixpkgs#hello"
},
Example{
"To use a build environment previously recorded in a profile:",
"nix develop /tmp/my-shell"
},
Example{
"To replace all occurences of a store path with a writable directory:",
"nix develop --redirect nixpkgs#glibc.dev ~/my-glibc/outputs/dev"
},
};
} }
void run(ref<Store> store) override void run(ref<Store> store) override
@ -495,14 +476,11 @@ struct CmdPrintDevEnv : Common
return "print shell code that can be sourced by bash to reproduce the build environment of a derivation"; return "print shell code that can be sourced by bash to reproduce the build environment of a derivation";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "print-dev-env.md"
"To apply the build environment of GNU hello to the current shell:", ;
". <(nix print-dev-env nixpkgs#hello)"
},
};
} }
Category category() override { return catUtility; } Category category() override { return catUtility; }

94
src/nix/develop.md Normal file
View file

@ -0,0 +1,94 @@
R""(
# Examples
* Start a shell with the build environment of the default package of
the flake in the current directory:
```console
# nix develop
```
Typical commands to run inside this shell are:
```console
# configurePhase
# buildPhase
# installPhase
```
Alternatively, you can run whatever build tools your project uses
directly, e.g. for a typical Unix project:
```console
# ./configure --prefix=$out
# make
# make install
```
* Run a particular build phase directly:
```console
# nix develop --configure
# nix develop --build
# nix develop --check
# nix develop --install
# nix develop --installcheck
```
* Start a shell with the build environment of GNU Hello:
```console
# nix develop nixpkgs#hello
```
* Record a build environment in a profile:
```console
# nix develop --profile /tmp/my-build-env nixpkgs#hello
```
* Use a build environment previously recorded in a profile:
```console
# nix develop /tmp/my-build-env
```
* Replace all occurrences of the store path corresponding to
`glibc.dev` with a writable directory:
```console
# nix develop --redirect nixpkgs#glibc.dev ~/my-glibc/outputs/dev
```
Note that this is useful if you're running a `nix develop` shell for
`nixpkgs#glibc` in `~/my-glibc` and want to compile another package
against it.
# Description
`nix develop` starts a `bash` shell that provides an interactive build
environment nearly identical to what Nix would use to build
*installable*. Inside this shell, environment variables and shell
functions are set up so that you can interactively and incrementally
build your package.
Nix determines the build environment by building a modified version of
the derivation *installable* that just records the environment
initialised by `stdenv` and exits. This build environment can be
recorded into a profile using `--profile`.
The prompt used by the `bash` shell can be customised by setting the
`bash-prompt` and `bash-prompt-suffix` settings in `nix.conf` or in
the flake's `nixConfig` attribute.
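For illustration, a minimal (hypothetical) flake that customises the prompt via `nixConfig` could look like this sketch:
```nix
# Sketch only: sets the prompt used inside 'nix develop' shells for this
# flake via the nixConfig attribute mentioned above.
{
  nixConfig.bash-prompt = "[my-project] ";
  outputs = { self }: { };
}
```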
# Flake output attributes
If no flake output attribute is given, `nix develop` tries the following
flake output attributes (see the sketch after this list):
* `devShell.<system>`
* `defaultPackage.<system>`
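As a sketch (assuming the `nixpkgs` flake and its `mkShell` helper, neither of which is part of this command), a flake providing a `devShell` might look like:
```nix
# Hypothetical flake.nix: running 'nix develop' in this directory picks up
# the devShell.<system> attribute below.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      devShell.x86_64-linux = pkgs.mkShell {
        buildInputs = [ pkgs.hello ];
      };
    };
}
```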
)""

View file

@ -121,14 +121,11 @@ struct CmdDiffClosures : SourceExprCommand
return "show what packages and versions were added and removed between two closures"; return "show what packages and versions were added and removed between two closures";
} }
Examples examples() override std::string doc() override
{ {
return { return
{ #include "diff-closures.md"
"To show what got added and removed between two versions of the NixOS system profile:", ;
"nix store diff-closures /nix/var/nix/profiles/system-655-link /nix/var/nix/profiles/system-658-link",
},
};
} }
void run(ref<Store> store) override void run(ref<Store> store) override

51
src/nix/diff-closures.md Normal file
View file

@ -0,0 +1,51 @@
R""(
# Examples
* Show what got added and removed between two versions of the NixOS
system profile:
```console
# nix store diff-closures /nix/var/nix/profiles/system-655-link /nix/var/nix/profiles/system-658-link
acpi-call: 2020-04-07-5.8.16 → 2020-04-07-5.8.18
baloo-widgets: 20.08.1 → 20.08.2
bluez-qt: +12.6 KiB
dolphin: 20.08.1 → 20.08.2, +13.9 KiB
kdeconnect: 20.08.2 → ∅, -6597.8 KiB
kdeconnect-kde: ∅ → 20.08.2, +6599.7 KiB
```
# Description
This command shows the differences between the two closures *before*
and *after* with respect to the addition, removal, or version change
of packages, as well as changes in store path sizes.
For each package name in the two closures (where a package name is
defined as the name component of a store path excluding the version),
if there is a change in the set of versions of the package, or a
change in the size of the store paths of more than 8 KiB, it prints a
line like this:
```console
dolphin: 20.08.1 → 20.08.2, +13.9 KiB
```
No size change is shown if it's below the threshold. If the package
does not exist in either the *before* or *after* closures, it is
represented using `∅` (empty set) on the appropriate side of the
arrow. If a package has an empty version string, the version is
rendered as `ε` (epsilon).
There may be multiple versions of a package in each closure. In that
case, only the changed versions are shown. Thus,
```console
libfoo: 1.2, 1.3 → 1.4
```
leaves open the possibility that there are other versions (e.g. `1.1`)
that exist in both closures.
)""

View file

@ -11,14 +11,11 @@ struct CmdDumpPath : StorePathCommand
return "serialise a store path to stdout in NAR format"; return "serialise a store path to stdout in NAR format";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "store-dump-path.md"
"To get a NAR from the binary cache https://cache.nixos.org/:", ;
"nix store dump-path --store https://cache.nixos.org/ /nix/store/7crrmih8c52r8fbnqb933dxrsp44md93-glibc-2.25"
},
};
} }
void run(ref<Store> store, const StorePath & storePath) override void run(ref<Store> store, const StorePath & storePath) override
@ -49,14 +46,11 @@ struct CmdDumpPath2 : Command
return "serialise a path to stdout in NAR format"; return "serialise a path to stdout in NAR format";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "nar-dump-path.md"
"To serialise directory 'foo' as a NAR:", ;
"nix nar dump-path ./foo"
},
};
} }
void run() override void run() override

View file

@ -15,14 +15,11 @@ struct CmdEdit : InstallableCommand
return "open the Nix expression of a Nix package in $EDITOR"; return "open the Nix expression of a Nix package in $EDITOR";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "edit.md"
"To open the Nix expression of the GNU Hello package:", ;
"nix edit nixpkgs#hello"
},
};
} }
Category category() override { return catSecondary; } Category category() override { return catSecondary; }

31
src/nix/edit.md Normal file
View file

@ -0,0 +1,31 @@
R""(
# Examples
* Open the Nix expression of the GNU Hello package:
```console
# nix edit nixpkgs#hello
```
* Get the filename and line number used by `nix edit`:
```console
# nix eval --raw nixpkgs#hello.meta.position
/nix/store/fvafw0gvwayzdan642wrv84pzm5bgpmy-source/pkgs/applications/misc/hello/default.nix:15
```
# Description
This command opens the Nix expression of a derivation in an
editor. The filename and line number of the derivation are taken from
its `meta.position` attribute. Nixpkgs' `stdenv.mkDerivation` sets
this attribute to the location of the definition of the
`meta.description`, `version` or `name` derivation attributes.
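For example, given a (hypothetical) package definition like the sketch below, `nix edit` would open the file at the line recorded in its `meta.position` attribute:
```nix
# Sketch of a package definition; stdenv.mkDerivation records meta.position
# based on where attributes such as meta.description are defined.
{ stdenv }:
stdenv.mkDerivation {
  pname = "example";
  version = "1.0";
  src = ./.;
  meta.description = "An example package";
}
```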
The editor to invoke is specified by the `EDITOR` environment
variable. It defaults to `cat`. If the editor is `emacs`, `nano` or
`vim`, it is passed the line number of the derivation using the
argument `+<lineno>`.
)""

View file

@ -40,30 +40,11 @@ struct CmdEval : MixJSON, InstallableCommand
return "evaluate a Nix expression"; return "evaluate a Nix expression";
} }
Examples examples() override std::string doc() override
{ {
return { return
{ #include "eval.md"
"To evaluate a Nix expression given on the command line:", ;
"nix eval --expr '1 + 2'"
},
{
"To evaluate a Nix expression from a file or URI:",
"nix eval -f ./my-nixpkgs hello.name"
},
{
"To get the current version of Nixpkgs:",
"nix eval --raw nixpkgs#lib.version"
},
{
"To print the store path of the Hello package:",
"nix eval --raw nixpkgs#hello"
},
{
"To get a list of checks in the 'nix' flake:",
"nix eval nix#checks.x86_64-linux --apply builtins.attrNames"
},
};
} }
Category category() override { return catSecondary; } Category category() override { return catSecondary; }
@ -97,10 +78,10 @@ struct CmdEval : MixJSON, InstallableCommand
recurse = [&](Value & v, const Pos & pos, const Path & path) recurse = [&](Value & v, const Pos & pos, const Path & path)
{ {
state->forceValue(v); state->forceValue(v);
if (v.type == tString) if (v.type() == nString)
// FIXME: disallow strings with contexts? // FIXME: disallow strings with contexts?
writeFile(path, v.string.s); writeFile(path, v.string.s);
else if (v.type == tAttrs) { else if (v.type() == nAttrs) {
if (mkdir(path.c_str(), 0777) == -1) if (mkdir(path.c_str(), 0777) == -1)
throw SysError("creating directory '%s'", path); throw SysError("creating directory '%s'", path);
for (auto & attr : *v.attrs) for (auto & attr : *v.attrs)

74
src/nix/eval.md Normal file
View file

@ -0,0 +1,74 @@
R""(
# Examples
* Evaluate a Nix expression given on the command line:
```console
# nix eval --expr '1 + 2'
```
* Evaluate a Nix expression to JSON:
```console
# nix eval --json --expr '{ x = 1; }'
{"x":1}
```
* Evaluate a Nix expression from a file:
```console
# nix eval -f ./my-nixpkgs hello.name
```
* Get the current version of the `nixpkgs` flake:
```console
# nix eval --raw nixpkgs#lib.version
```
* Print the store path of the Hello package:
```console
# nix eval --raw nixpkgs#hello
```
* Get a list of checks in the `nix` flake:
```console
# nix eval nix#checks.x86_64-linux --apply builtins.attrNames
```
* Generate a directory with the specified contents:
```console
# nix eval --write-to ./out --expr '{ foo = "bar"; subdir.bla = "123"; }'
# cat ./out/foo
bar
# cat ./out/subdir/bla
123
```
# Description
This command evaluates the Nix expression *installable* and prints the
result on standard output.
# Output format
`nix eval` can produce output in several formats:
* By default, the evaluation result is printed as a Nix expression.
* With `--json`, the evaluation result is printed in JSON format. Note
that this fails if the result contains values that are not
representable as JSON, such as functions.
* With `--raw`, the evaluation result must be a string, which is
printed verbatim, without any quoting.
* With `--write-to` *path*, the evaluation result must be a string or
a nested attribute set whose leaf values are strings. These strings
are written to files named *path*/*attrpath*. *path* must not
already exist.
)""

29
src/nix/flake-archive.md Normal file
View file

@ -0,0 +1,29 @@
R""(
# Examples
* Copy the `dwarffs` flake and its dependencies to a binary cache:
```console
# nix flake archive --to file:///tmp/my-cache dwarffs
```
* Fetch the `dwarffs` flake and its dependencies to the local Nix
store:
```console
# nix flake archive dwarffs
```
* Print the store paths of the flake sources of NixOps without
fetching them:
```console
# nix flake archive --json --dry-run nixops
```
# Description
FIXME
)""

68
src/nix/flake-check.md Normal file
View file

@ -0,0 +1,68 @@
R""(
# Examples
* Evaluate the flake in the current directory, and build its checks:
```console
# nix flake check
```
* Verify that the `patchelf` flake evaluates, but don't build its
checks:
```console
# nix flake check --no-build github:NixOS/patchelf
```
# Description
This command verifies that the flake specified by flake reference
*flake-url* can be evaluated successfully (as detailed below), and
that the derivations specified by the flake's `checks` output can be
built successfully.
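A minimal sketch of a flake whose `checks` output `nix flake check` would build (assuming the `nixpkgs` flake and an `x86_64-linux` system):
```nix
# Hypothetical flake.nix: 'nix flake check' evaluates the outputs below and
# builds checks.x86_64-linux.trivial.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  outputs = { self, nixpkgs }: {
    checks.x86_64-linux.trivial =
      nixpkgs.legacyPackages.x86_64-linux.runCommand "trivial-check" { }
        "echo ok > $out";
  };
}
```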
# Evaluation checks
The following flake output attributes must be derivations:
* `checks.`*system*`.`*name*
* `defaultPackage.`*system*
* `devShell.`*system*
* `nixosConfigurations.`*name*`.config.system.build.toplevel`
* `packages.`*system*`.`*name*
The following flake output attributes must be [app
definitions](./nix3-run.md):
* `apps.`*system*`.`*name*
* `defaultApp.`*system*
The following flake output attributes must be [template
definitions](./nix3-flake-init.md):
* `defaultTemplate`
* `templates.`*name*
The following flake output attributes must be *Nixpkgs overlays*:
* `overlay`
* `overlays.`*name*
The following flake output attributes must be *NixOS modules*:
* `nixosModule`
* `nixosModules.`*name*
The following flake output attributes must be
[bundlers](./nix3-bundle.md):
* `bundlers.`*name*
* `defaultBundler`
In addition, the `hydraJobs` output is evaluated in the same way as
Hydra's `hydra-eval-jobs` (i.e. as an arbitrarily deeply nested
attribute set of derivations). Similarly, the
`legacyPackages`.*system* output is evaluated like `nix-env -qa`.
)""

18
src/nix/flake-clone.md Normal file
View file

@ -0,0 +1,18 @@
R""(
# Examples
* Check out the source code of the `dwarffs` flake and build it:
```console
# nix flake clone dwarffs --dest dwarffs
# cd dwarffs
# nix build
```
# Description
This command performs a Git or Mercurial clone of the repository
containing the source code of the flake *flake-url*.
)""

99
src/nix/flake-info.md Normal file
View file

@ -0,0 +1,99 @@
R""(
# Examples
* Show what `nixpkgs` resolves to:
```console
# nix flake info nixpkgs
Resolved URL: github:NixOS/nixpkgs
Locked URL: github:NixOS/nixpkgs/b67ba0bfcc714453cdeb8d713e35751eb8b4c8f4
Description: A collection of packages for the Nix package manager
Path: /nix/store/23qapccs6cfmwwrlq8kr41vz5vdmns3r-source
Revision: b67ba0bfcc714453cdeb8d713e35751eb8b4c8f4
Last modified: 2020-12-23 12:36:12
```
* Show information about `dwarffs` in JSON format:
```console
# nix flake info dwarffs --json | jq .
{
"description": "A filesystem that fetches DWARF debug info from the Internet on demand",
"lastModified": 1597153508,
"locked": {
"lastModified": 1597153508,
"narHash": "sha256-VHg3MYVgQ12LeRSU2PSoDeKlSPD8PYYEFxxwkVVDRd0=",
"owner": "edolstra",
"repo": "dwarffs",
"rev": "d181d714fd36eb06f4992a1997cd5601e26db8f5",
"type": "github"
},
"original": {
"id": "dwarffs",
"type": "indirect"
},
"originalUrl": "flake:dwarffs",
"path": "/nix/store/hang3792qwdmm2n0d9nsrs5n6bsws6kv-source",
"resolved": {
"owner": "edolstra",
"repo": "dwarffs",
"type": "github"
},
"resolvedUrl": "github:edolstra/dwarffs",
"revision": "d181d714fd36eb06f4992a1997cd5601e26db8f5",
"url": "github:edolstra/dwarffs/d181d714fd36eb06f4992a1997cd5601e26db8f5"
}
```
# Description
This command shows information about the flake specified by the flake
reference *flake-url*. It resolves the flake reference using the
[flake registry](./nix3-registry.md), fetches it, and prints some
metadata. This includes:
* `Resolved URL`: If *flake-url* is a flake identifier, then this is
the flake reference that specifies its actual location, looked up in
the flake registry.
* `Locked URL`: A flake reference that contains a commit or content
hash and thus uniquely identifies a specific flake version.
* `Description`: A one-line description of the flake, taken from the
`description` field in `flake.nix`.
* `Path`: The store path containing the source code of the flake.
* `Revision`: The Git or Mercurial commit hash of the locked flake.
* `Revisions`: The number of ancestors of the Git or Mercurial commit
of the locked flake. Note that this is not available for `github`
flakes.
* `Last modified`: For Git or Mercurial flakes, this is the commit
time of the commit of the locked flake; for tarball flakes, it's the
most recent timestamp of any file inside the tarball.
With `--json`, the output is a JSON object with the following fields:
* `original` and `originalUrl`: The flake reference specified by the
user (*flake-url*) in attribute set and URL representation.
* `resolved` and `resolvedUrl`: The resolved flake reference (see
above) in attribute set and URL representation.
* `locked` and `lockedUrl`: The locked flake reference (see above) in
attribute set and URL representation.
* `description`: See `Description` above.
* `path`: See `Path` above.
* `revision`: See `Revision` above.
* `revCount`: See `Revisions` above.
* `lastModified`: See `Last modified` above.
)""

54
src/nix/flake-init.md Normal file
View file

@ -0,0 +1,54 @@
R""(
# Examples
* Create a flake using the default template:
```console
# nix flake init
```
* List available templates:
```console
# nix flake show templates
```
* Create a flake from a specific template:
```console
# nix flake init -t templates#simpleContainer
```
# Description
This command creates a flake in the current directory by copying the
files of a template. It will not overwrite existing files. The default
template is `templates#defaultTemplate`, but this can be overridden
using `-t`.
# Template definitions
A flake can declare templates through its `templates` and
`defaultTemplate` output attributes. A template has two attributes:
* `description`: A one-line description of the template, in CommonMark
syntax.
* `path`: The path of the directory to be copied.
Here is an example:
```
outputs = { self }: {
templates.rust = {
path = ./rust;
description = "A simple Rust/Cargo project";
};
templates.defaultTemplate = self.templates.rust;
}
```
)""

View file

@ -0,0 +1,23 @@
R""(
# Examples
* Show the inputs of the `hydra` flake:
```console
# nix flake list-inputs github:NixOS/hydra
github:NixOS/hydra/bde8d81876dfc02143e5070e42c78d8f0d83d6f7
├───nix: github:NixOS/nix/79aa7d95183cbe6c0d786965f0dbff414fd1aa67
│ ├───lowdown-src: github:kristapsdz/lowdown/1705b4a26fbf065d9574dce47a94e8c7c79e052f
│ └───nixpkgs: github:NixOS/nixpkgs/ad0d20345219790533ebe06571f82ed6b034db31
└───nixpkgs follows input 'nix/nixpkgs'
```
# Description
This command shows the inputs of the flake specified by the flake
reference *flake-url*. Since it prints the locked inputs that result
from generating or updating the lock file, this command essentially
displays the contents of the flake's lock file in human-readable form.
)""

34
src/nix/flake-new.md Normal file
View file

@ -0,0 +1,34 @@
R""(
# Examples
* Create a flake using the default template in the directory `hello`:
```console
# nix flake new hello
```
* List available templates:
```console
# nix flake show templates
```
* Create a flake from a specific template in the directory `hello`:
```console
# nix flake new hello -t templates#trivial
```
# Description
This command creates a flake in the directory `dest-dir`, which must
not already exist. It's equivalent to:
```console
# mkdir dest-dir
# cd dest-dir
# nix flake init
```
)""

38
src/nix/flake-show.md Normal file
View file

@ -0,0 +1,38 @@
R""(
# Examples
* Show the output attributes provided by the `patchelf` flake:
```console
# nix flake show github:NixOS/patchelf
github:NixOS/patchelf/f34751b88bd07d7f44f5cd3200fb4122bf916c7e
├───checks
│ ├───aarch64-linux
│ │ └───build: derivation 'patchelf-0.12.20201207.f34751b'
│ ├───i686-linux
│ │ └───build: derivation 'patchelf-0.12.20201207.f34751b'
│ └───x86_64-linux
│ └───build: derivation 'patchelf-0.12.20201207.f34751b'
├───defaultPackage
│ ├───aarch64-linux: package 'patchelf-0.12.20201207.f34751b'
│ ├───i686-linux: package 'patchelf-0.12.20201207.f34751b'
│ └───x86_64-linux: package 'patchelf-0.12.20201207.f34751b'
├───hydraJobs
│ ├───build
│ │ ├───aarch64-linux: derivation 'patchelf-0.12.20201207.f34751b'
│ │ ├───i686-linux: derivation 'patchelf-0.12.20201207.f34751b'
│ │ └───x86_64-linux: derivation 'patchelf-0.12.20201207.f34751b'
│ ├───coverage: derivation 'patchelf-coverage-0.12.20201207.f34751b'
│ ├───release: derivation 'patchelf-0.12.20201207.f34751b'
│ └───tarball: derivation 'patchelf-tarball-0.12.20201207.f34751b'
└───overlay: Nixpkgs overlay
```
# Description
This command shows the output attributes provided by the flake
specified by flake reference *flake-url*. These are the top-level
attributes in the `outputs` of the flake, as well as lower-level
attributes for some standard outputs (e.g. `packages` or `checks`).
)""

53
src/nix/flake-update.md Normal file
View file

@ -0,0 +1,53 @@
R""(
# Examples
* Update the `nixpkgs` and `nix` inputs of the flake in the current
directory:
```console
# nix flake update --update-input nixpkgs --update-input nix
* Updated 'nix': 'github:NixOS/nix/9fab14adbc3810d5cc1f88672fde1eee4358405c' -> 'github:NixOS/nix/8927cba62f5afb33b01016d5c4f7f8b7d0adde3c'
* Updated 'nixpkgs': 'github:NixOS/nixpkgs/3d2d8f281a27d466fa54b469b5993f7dde198375' -> 'github:NixOS/nixpkgs/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293'
```
* Recreate the lock file (i.e. update all inputs) and commit the new
lock file:
```console
# nix flake update --recreate-lock-file --commit-lock-file
warning: committed new revision '158bcbd9d6cc08ab859c0810186c1beebc982aad'
```
# Description
This command updates the lock file of a flake (`flake.lock`) so that
it contains a lock for every flake input specified in
`flake.nix`. Note that every command that operates on a flake will
also update the lock file if needed, and supports the same
flags. Therefore,
```console
# nix flake update --update-input nixpkgs
# nix build
```
is equivalent to:
```console
# nix build --update-input nixpkgs
```
Thus, this command is only useful if you want to update the lock file
separately from any other action such as building.
> **Note**
>
> This command does *not* update locks that are already present unless
> you explicitly ask for it using `--update-input` or
> `--recreate-lock-file`. Thus, if the lock file already has locks for
> every input, then `nix flake update` (without arguments) does
> nothing.
)""

View file

@ -104,6 +104,13 @@ struct CmdFlakeUpdate : FlakeCommand
return "update flake lock file"; return "update flake lock file";
} }
std::string doc() override
{
return
#include "flake-update.md"
;
}
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
{ {
/* Use --refresh by default for 'nix flake update'. */ /* Use --refresh by default for 'nix flake update'. */
@ -134,6 +141,13 @@ struct CmdFlakeInfo : FlakeCommand, MixJSON
return "list info about a given flake"; return "list info about a given flake";
} }
std::string doc() override
{
return
#include "flake-info.md"
;
}
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
{ {
auto flake = getFlake(); auto flake = getFlake();
@ -153,6 +167,13 @@ struct CmdFlakeListInputs : FlakeCommand, MixJSON
return "list flake inputs"; return "list flake inputs";
} }
std::string doc() override
{
return
#include "flake-list-inputs.md"
;
}
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
{ {
auto flake = lockFlake(); auto flake = lockFlake();
@ -211,6 +232,13 @@ struct CmdFlakeCheck : FlakeCommand
return "check whether the flake evaluates and run its tests"; return "check whether the flake evaluates and run its tests";
} }
std::string doc() override
{
return
#include "flake-check.md"
;
}
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
{ {
settings.readOnlyMode = !build; settings.readOnlyMode = !build;
@ -260,7 +288,7 @@ struct CmdFlakeCheck : FlakeCommand
auto checkOverlay = [&](const std::string & attrPath, Value & v, const Pos & pos) { auto checkOverlay = [&](const std::string & attrPath, Value & v, const Pos & pos) {
try { try {
state->forceValue(v, pos); state->forceValue(v, pos);
if (v.type != tLambda || v.lambda.fun->matchAttrs || std::string(v.lambda.fun->arg) != "final") if (!v.isLambda() || v.lambda.fun->matchAttrs || std::string(v.lambda.fun->arg) != "final")
throw Error("overlay does not take an argument named 'final'"); throw Error("overlay does not take an argument named 'final'");
auto body = dynamic_cast<ExprLambda *>(v.lambda.fun->body); auto body = dynamic_cast<ExprLambda *>(v.lambda.fun->body);
if (!body || body->matchAttrs || std::string(body->arg) != "prev") if (!body || body->matchAttrs || std::string(body->arg) != "prev")
@ -276,10 +304,10 @@ struct CmdFlakeCheck : FlakeCommand
auto checkModule = [&](const std::string & attrPath, Value & v, const Pos & pos) { auto checkModule = [&](const std::string & attrPath, Value & v, const Pos & pos) {
try { try {
state->forceValue(v, pos); state->forceValue(v, pos);
if (v.type == tLambda) { if (v.isLambda()) {
if (!v.lambda.fun->matchAttrs || !v.lambda.fun->formals->ellipsis) if (!v.lambda.fun->matchAttrs || !v.lambda.fun->formals->ellipsis)
throw Error("module must match an open attribute set ('{ config, ... }')"); throw Error("module must match an open attribute set ('{ config, ... }')");
} else if (v.type == tAttrs) { } else if (v.type() == nAttrs) {
for (auto & attr : *v.attrs) for (auto & attr : *v.attrs)
try { try {
state->forceValue(*attr.value, *attr.pos); state->forceValue(*attr.value, *attr.pos);
@ -371,7 +399,7 @@ struct CmdFlakeCheck : FlakeCommand
auto checkBundler = [&](const std::string & attrPath, Value & v, const Pos & pos) { auto checkBundler = [&](const std::string & attrPath, Value & v, const Pos & pos) {
try { try {
state->forceValue(v, pos); state->forceValue(v, pos);
if (v.type != tLambda) if (!v.isLambda())
throw Error("bundler must be a function"); throw Error("bundler must be a function");
if (!v.lambda.fun->formals || if (!v.lambda.fun->formals ||
v.lambda.fun->formals->argNames.find(state->symbols.create("program")) == v.lambda.fun->formals->argNames.end() || v.lambda.fun->formals->argNames.find(state->symbols.create("program")) == v.lambda.fun->formals->argNames.end() ||
@ -631,22 +659,11 @@ struct CmdFlakeInit : CmdFlakeInitCommon
return "create a flake in the current directory from a template"; return "create a flake in the current directory from a template";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "flake-init.md"
"To create a flake using the default template:", ;
"nix flake init"
},
Example{
"To see available templates:",
"nix flake show templates"
},
Example{
"To create a flake from a specific template:",
"nix flake init -t templates#nixos-container"
},
};
} }
CmdFlakeInit() CmdFlakeInit()
@ -662,6 +679,13 @@ struct CmdFlakeNew : CmdFlakeInitCommon
return "create a flake in the specified directory from a template"; return "create a flake in the specified directory from a template";
} }
std::string doc() override
{
return
#include "flake-new.md"
;
}
CmdFlakeNew() CmdFlakeNew()
{ {
expectArgs({ expectArgs({
@ -681,6 +705,13 @@ struct CmdFlakeClone : FlakeCommand
return "clone flake repository"; return "clone flake repository";
} }
std::string doc() override
{
return
#include "flake-clone.md"
;
}
CmdFlakeClone() CmdFlakeClone()
{ {
addFlag({ addFlag({
@ -720,22 +751,11 @@ struct CmdFlakeArchive : FlakeCommand, MixJSON, MixDryRun
return "copy a flake and all its inputs to a store"; return "copy a flake and all its inputs to a store";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "flake-archive.md"
"To copy the dwarffs flake and its dependencies to a binary cache:", ;
"nix flake archive --to file:///tmp/my-cache dwarffs"
},
Example{
"To fetch the dwarffs flake and its dependencies to the local Nix store:",
"nix flake archive dwarffs"
},
Example{
"To print the store paths of the flake sources of NixOps without fetching them:",
"nix flake archive --json --dry-run nixops"
},
};
} }
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
@ -797,6 +817,13 @@ struct CmdFlakeShow : FlakeCommand
return "show the outputs provided by a flake"; return "show the outputs provided by a flake";
} }
std::string doc() override
{
return
#include "flake-show.md"
;
}
void run(nix::ref<nix::Store> store) override void run(nix::ref<nix::Store> store) override
{ {
auto state = getEvalState(); auto state = getEvalState();
@ -955,6 +982,13 @@ struct CmdFlake : NixMultiCommand
return "manage Nix flakes"; return "manage Nix flakes";
} }
std::string doc() override
{
return
#include "flake.md"
;
}
void run() override void run() override
{ {
if (!command) if (!command)

566
src/nix/flake.md Normal file
View file

@ -0,0 +1,566 @@
R""(
# Description
`nix flake` provides subcommands for creating, modifying and querying
*Nix flakes*. Flakes are the unit for packaging Nix code in a
reproducible and discoverable way. They can have dependencies on other
flakes, making it possible to have multi-repository Nix projects.
A flake is a filesystem tree (typically fetched from a Git repository
or a tarball) that contains a file named `flake.nix` in the root
directory. `flake.nix` specifies some metadata about the flake such as
dependencies (called *inputs*), as well as its *outputs* (the Nix
values such as packages or NixOS modules provided by the flake).
# Flake references
Flake references (*flakerefs*) are a way to specify the location of a
flake. These have two different forms:
* An attribute set representation, e.g.
```nix
{
type = "github";
owner = "NixOS";
repo = "nixpkgs";
}
```
The only required attribute is `type`. The supported types are
listed below.
* A URL-like syntax, e.g.
```
github:NixOS/nixpkgs
```
These are used on the command line as a more convenient alternative
to the attribute set representation. For instance, in the command
```console
# nix build github:NixOS/nixpkgs#hello
```
`github:NixOS/nixpkgs` is a flake reference (while `hello` is an
output attribute). They are also allowed in the `inputs` attribute
of a flake, e.g.
```nix
inputs.nixpkgs.url = github:NixOS/nixpkgs;
```
is equivalent to
```nix
inputs.nixpkgs = {
type = "github";
owner = "NixOS";
repo = "nixpkgs";
};
```
## Examples
Here are some examples of flake references in their URL-like representation:
* `.`: The flake in the current directory.
* `/home/alice/src/patchelf`: A flake in some other directory.
* `nixpkgs`: The `nixpkgs` entry in the flake registry.
* `nixpkgs/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293`: The `nixpkgs`
entry in the flake registry, with its Git revision overridden to a
specific value.
* `github:NixOS/nixpkgs`: The `master` branch of the `NixOS/nixpkgs`
repository on GitHub.
* `github:NixOS/nixpkgs/nixos-20.09`: The `nixos-20.09` branch of the
`nixpkgs` repository.
* `github:NixOS/nixpkgs/a3a3dda3bacf61e8a39258a0ed9c924eeca8e293`: A
specific revision of the `nixpkgs` repository.
* `github:edolstra/nix-warez?dir=blender`: A flake in a subdirectory
of a GitHub repository.
* `git+https://github.com/NixOS/patchelf`: A Git repository.
* `git+https://github.com/NixOS/patchelf?ref=master`: A specific
branch of a Git repository.
* `git+https://github.com/NixOS/patchelf?ref=master&rev=f34751b88bd07d7f44f5cd3200fb4122bf916c7e`:
A specific branch *and* revision of a Git repository.
* `https://github.com/NixOS/patchelf/archive/master.tar.gz`: A tarball
flake.
## Flake reference attributes
The following generic flake reference attributes are supported:
* `dir`: The subdirectory of the flake in which `flake.nix` is
located. This parameter enables having multiple flakes in a
repository or tarball. The default is the root directory of the
flake.
* `narHash`: The hash of the NAR serialisation (in SRI format) of the
contents of the flake. This is useful for flake types such as
tarballs that lack a unique content identifier such as a Git commit
hash.
In addition, the following attributes are common to several flake
reference types:
* `rev`: A Git or Mercurial commit hash.
* `ref`: A Git or Mercurial branch or tag name.
Finally, some attributes are typically not specified by the user, but
can occur in *locked* flake references and are available to Nix code:
* `revCount`: The number of ancestors of the commit `rev`.
* `lastModified`: The timestamp (in seconds since the Unix epoch) of
the last modification of this version of the flake. For
Git/Mercurial flakes, this is the commit time of commit *rev*, while
for tarball flakes, it's the most recent timestamp of any file
inside the tarball.
## Types
Currently the `type` attribute can be one of the following:
* `path`: arbitrary local directories, or local Git trees. The
required attribute `path` specifies the path of the flake. The URL
form is
```
[path:]<path>(\?<params>)?
```
where *path* is an absolute path.
*path* must be a directory in the file system containing a file
named `flake.nix`.
If the directory or any of its parents is a Git repository, then
this is essentially equivalent to `git+file://<path>` (see below),
except that the `dir` parameter is derived automatically. For
example, if `/foo/bar` is a Git repository, then the flake reference
`/foo/bar/flake` is equivalent to `/foo/bar?dir=flake`.
If the directory is not inside a Git repository, then the flake
contents are the entire contents of *path*.
*path* generally must be an absolute path. However, on the command
line, it can be a relative path (e.g. `.` or `./foo`) which is
interpreted as relative to the current directory. In this case, it
must start with `.` to avoid ambiguity with registry lookups
(e.g. `nixpkgs` is a registry lookup; `./nixpkgs` is a relative
path).
* `git`: Git repositories. The location of the repository is specified
by the attribute `url`.
They have the URL form
```
git(+http|+https|+ssh|+git|+file|):(//<server>)?<path>(\?<params>)?
```
The `ref` attribute defaults to `master`.
The `rev` attribute must denote a commit that exists in the branch
or tag specified by the `ref` attribute, since Nix doesn't do a full
clone of the remote repository by default (and the Git protocol
doesn't allow fetching a `rev` without a known `ref`). The default
is the commit currently pointed to by `ref`.
For example, the following are valid Git flake references:
* `git+https://example.org/my/repo`
* `git+https://example.org/my/repo?dir=flake1`
* `git+ssh://git@github.com/NixOS/nix?ref=v1.2.3`
* `git://github.com/edolstra/dwarffs?ref=unstable&rev=e486d8d40e626a20e06d792db8cc5ac5aba9a5b4`
* `git+file:///home/my-user/some-repo/some-repo`
* `mercurial`: Mercurial repositories. The URL form is similar to the
`git` type, except that the URL schema must be one of `hg+http`,
`hg+https`, `hg+ssh` or `hg+file`.
* `tarball`: Tarballs. The location of the tarball is specified by the
attribute `url`.
In URL form, the schema must be `http://`, `https://` or `file://`,
and the extension must be `.zip`, `.tar`, `.tar.gz`, `.tar.xz`
or `.tar.bz2`.
* `github`: A more efficient way to fetch repositories from
GitHub. The following attributes are required:
* `owner`: The owner of the repository.
* `repo`: The name of the repository.
These are downloaded as tarball archives, rather than
through Git. This is often much faster and uses less disk space
since it doesn't require fetching the entire history of the
repository. On the other hand, it doesn't allow incremental fetching
(but full downloads are often faster than incremental fetches!).
The URL syntax for `github` flakes is:
```
github:<owner>/<repo>(/<rev-or-ref>)?(\?<params>)?
```
`<rev-or-ref>` specifies the name of a branch or tag (`ref`), or a
commit hash (`rev`). Note that unlike Git, GitHub allows fetching by
commit hash without specifying a branch or tag.
Some examples:
* `github:edolstra/dwarffs`
* `github:edolstra/dwarffs/unstable`
* `github:edolstra/dwarffs/d3f2baba8f425779026c6ec04021b2e927f61e31`
* `indirect`: Indirections through the flake registry. These have the
form
```
[flake:]<flake-id>(/<rev-or-ref>(/rev)?)?
```
These perform a lookup of `<flake-id>` in the flake registry. For
example, `nixpkgs` and `nixpkgs/release-20.09` are indirect flake
references. The specified `rev` and/or `ref` are merged with the
entry in the registry; see [nix registry](./nix3-registry.md) for
details.
# Flake format
As an example, here is a simple `flake.nix` that depends on the
Nixpkgs flake and provides a single package (i.e. an installable
derivation):
```nix
{
description = "A flake for building Hello World";
inputs.nixpkgs.url = github:NixOS/nixpkgs/nixos-20.03;
outputs = { self, nixpkgs }: {
defaultPackage.x86_64-linux =
# Notice the reference to nixpkgs here.
with import nixpkgs { system = "x86_64-linux"; };
stdenv.mkDerivation {
name = "hello";
src = self;
buildPhase = "gcc -o hello ./hello.c";
installPhase = "mkdir -p $out/bin; install -t $out/bin hello";
};
};
}
```
The following attributes are supported in `flake.nix`:
* `description`: A short, one-line description of the flake.
* `inputs`: An attrset specifying the dependencies of the flake
(described below).
* `outputs`: A function that, given an attribute set containing the
outputs of each of the input flakes keyed by their identifier,
yields the Nix values provided by this flake. Thus, in the example
above, `inputs.nixpkgs` contains the result of the call to the
`outputs` function of the `nixpkgs` flake.
In addition to the outputs of each input, each input in `inputs`
also contains some metadata about the inputs. These are:
* `outPath`: The path in the Nix store of the flake's source tree.
* `rev`: The commit hash of the flake's repository, if applicable.
* `revCount`: The number of ancestors of the revision `rev`. This is
not available for `github` repositories, since they're fetched as
tarballs rather than as Git repositories.
* `lastModifiedDate`: The commit time of the revision `rev`, in the
format `%Y%m%d%H%M%S` (e.g. `20181231100934`). Unlike `revCount`,
this is available for both Git and GitHub repositories, so it's
useful for generating (hopefully) monotonically increasing version
strings.
* `lastModified`: The commit time of the revision `rev` as an integer
denoting the number of seconds since 1970.
* `narHash`: The SHA-256 (in SRI format) of the NAR serialization of
the flake's source tree.
The value returned by the `outputs` function must be an attribute
set. The attributes can have arbitrary values; however, various
`nix` subcommands require specific attributes to have a specific
value (e.g. `packages.x86_64-linux` must be an attribute set of
derivations built for the `x86_64-linux` platform).
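As a hedged illustration of how the input metadata above can be used, the following sketch (with a hypothetical package name) bakes `lastModifiedDate` into a version string:
```nix
# Sketch only: a trivial package whose name embeds a version derived from
# self.lastModifiedDate, as described above.
{
  inputs.nixpkgs.url = "github:NixOS/nixpkgs";
  outputs = { self, nixpkgs }:
    let pkgs = nixpkgs.legacyPackages.x86_64-linux; in {
      packages.x86_64-linux.my-tool = pkgs.runCommand
        "my-tool-0.1.${self.lastModifiedDate or "19700101000000"}" { }
        "mkdir -p $out; echo ok > $out/done";
    };
}
```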
## Flake inputs
The attribute `inputs` specifies the dependencies of a flake, as an
attrset mapping input names to flake references. For example, the
following specifies a dependency on the `nixpkgs` and `import-cargo`
repositories:
```nix
# A GitHub repository.
inputs.import-cargo = {
type = "github";
owner = "edolstra";
repo = "import-cargo";
};
# An indirection through the flake registry.
inputs.nixpkgs = {
type = "indirect";
id = "nixpkgs";
};
```
Alternatively, you can use the URL-like syntax:
```nix
inputs.import-cargo.url = github:edolstra/import-cargo;
inputs.nixpkgs.url = "nixpkgs";
```
Each input is fetched, evaluated and passed to the `outputs` function
as a set of attributes with the same name as the corresponding
input. The special input named `self` refers to the outputs and source
tree of *this* flake. Thus, a typical `outputs` function looks like
this:
```nix
outputs = { self, nixpkgs, import-cargo }: {
... outputs ...
};
```
It is also possible to omit an input entirely and *only* list it as
expected function argument to `outputs`. Thus,
```nix
outputs = { self, nixpkgs }: ...;
```
without an `inputs.nixpkgs` attribute is equivalent to
```nix
inputs.nixpkgs = {
type = "indirect";
id = "nixpkgs";
};
```
Repositories that don't contain a `flake.nix` can also be used as
inputs, by setting the input's `flake` attribute to `false`:
```nix
inputs.grcov = {
type = "github";
owner = "mozilla";
repo = "grcov";
flake = false;
};
outputs = { self, nixpkgs, grcov }: {
packages.x86_64-linux.grcov = stdenv.mkDerivation {
src = grcov;
...
};
};
```
Transitive inputs can be overridden from a `flake.nix` file. For
example, the following overrides the `nixpkgs` input of the `nixops`
input:
```nix
inputs.nixops.inputs.nixpkgs = {
type = "github";
owner = "my-org";
repo = "nixpkgs";
};
```
It is also possible to "inherit" an input from another input. This is
useful to minimize flake dependencies. For example, the following sets
the `nixpkgs` input of the top-level flake to be equal to the
`nixpkgs` input of the `dwarffs` input of the top-level flake:
```nix
inputs.nixops.follows = "dwarffs/nixpkgs";
```
The value of the `follows` attribute is a `/`-separated sequence of
input names denoting the path of inputs to be followed from the root
flake.
Overrides and `follows` can be combined, e.g.
```nix
inputs.nixops.inputs.nixpkgs.follows = "dwarffs/nixpkgs";
```
sets the `nixpkgs` input of `nixops` to be the same as the `nixpkgs`
input of `dwarffs`. It is worth noting, however, that it is generally
not useful to eliminate transitive `nixpkgs` flake inputs in this
way. Most flakes provide their functionality through Nixpkgs overlays
or NixOS modules, which are composed into the top-level flake's
`nixpkgs` input; so their own `nixpkgs` input is usually irrelevant.
# Lock files
Inputs specified in `flake.nix` are typically "unlocked" in the sense
that they don't specify an exact revision. To ensure reproducibility,
Nix will automatically generate and use a *lock file* called
`flake.lock` in the flake's directory. The lock file contains a graph
structure isomorphic to the graph of dependencies of the root
flake. Each node in the graph (except the root node) maps the
(usually) unlocked input specifications in `flake.nix` to locked input
specifications. Each node also contains some metadata, such as the
dependencies (outgoing edges) of the node.
For example, if `flake.nix` has the inputs in the example above, then
the resulting lock file might be:
```json
{
"version": 7,
"root": "n1",
"nodes": {
"n1": {
"inputs": {
"nixpkgs": "n2",
"import-cargo": "n3",
"grcov": "n4"
}
},
"n2": {
"inputs": {},
"locked": {
"owner": "edolstra",
"repo": "nixpkgs",
"rev": "7f8d4b088e2df7fdb6b513bc2d6941f1d422a013",
"type": "github",
"lastModified": 1580555482,
"narHash": "sha256-OnpEWzNxF/AU4KlqBXM2s5PWvfI5/BS6xQrPvkF5tO8="
},
"original": {
"id": "nixpkgs",
"type": "indirect"
}
},
"n3": {
"inputs": {},
"locked": {
"owner": "edolstra",
"repo": "import-cargo",
"rev": "8abf7b3a8cbe1c8a885391f826357a74d382a422",
"type": "github",
"lastModified": 1567183309,
"narHash": "sha256-wIXWOpX9rRjK5NDsL6WzuuBJl2R0kUCnlpZUrASykSc="
},
"original": {
"owner": "edolstra",
"repo": "import-cargo",
"type": "github"
}
},
"n4": {
"inputs": {},
"locked": {
"owner": "mozilla",
"repo": "grcov",
"rev": "989a84bb29e95e392589c4e73c29189fd69a1d4e",
"type": "github",
"lastModified": 1580729070,
"narHash": "sha256-235uMxYlHxJ5y92EXZWAYEsEb6mm+b069GAd+BOIOxI="
},
"original": {
"owner": "mozilla",
"repo": "grcov",
"type": "github"
},
"flake": false
}
}
}
```
This graph has 4 nodes: the root flake, and its 3 dependencies. The
nodes have arbitrary labels (e.g. `n1`). The label of the root node of
the graph is specified by the `root` attribute. Nodes contain the
following fields:
* `inputs`: The dependencies of this node, as a mapping from input
names (e.g. `nixpkgs`) to node labels (e.g. `n2`).
* `original`: The original input specification from `flake.nix`, as a
set of `builtins.fetchTree` arguments.
* `locked`: The locked input specification, as a set of
`builtins.fetchTree` arguments. Thus, in the example above, when we
build this flake, the input `nixpkgs` is mapped to revision
`7f8d4b088e2df7fdb6b513bc2d6941f1d422a013` of the `edolstra/nixpkgs`
repository on GitHub.
It also includes the attribute `narHash`, specifying the expected
contents of the tree in the Nix store (as computed by `nix
hash-path`), and may include input-type-specific attributes such as
the `lastModified` or `revCount`. The main reason for these
attributes is to allow flake inputs to be substituted from a binary
cache: `narHash` allows the store path to be computed, while the
other attributes are necessary because they provide information not
stored in the store path.
* `flake`: A Boolean denoting whether this is a flake or non-flake
dependency. Corresponds to the `flake` attribute in the `inputs`
attribute in `flake.nix`.
The `original` and `locked` attributes are omitted for the root
node. This is because we cannot record the commit hash or content hash
of the root flake, since modifying `flake.lock` will invalidate these.
The graph representation of lock files allows circular dependencies
between flakes. For example, here are two flakes that reference each
other:
```nix
{
inputs.b = ... location of flake B ...;
# Tell the 'b' flake not to fetch 'a' again, to ensure its 'a' is
# *this* 'a'.
inputs.b.inputs.a.follows = "";
outputs = { self, b }: {
foo = 123 + b.bar;
xyzzy = 1000;
};
}
```
and
```nix
{
inputs.a = ... location of flake A ...;
inputs.a.inputs.b.follows = "";
outputs = { self, a }: {
bar = 456 + a.xyzzy;
};
}
```
Lock files transitively lock direct as well as indirect
dependencies. That is, if a lock file exists and is up to date, Nix
will not look at the lock files of dependencies. However, lock file
generation itself *does* use the lock files of dependencies by
default.
)""

17
src/nix/help.md Normal file
View file

@ -0,0 +1,17 @@
R""(
# Examples
* Show help about `nix` in general:
```console
# nix help
```
* Show help about a particular subcommand:
```console
# nix help flake info
```
)""

View file

@ -13,22 +13,11 @@ struct CmdLog : InstallableCommand
return "show the build log of the specified packages or paths, if available"; return "show the build log of the specified packages or paths, if available";
} }
Examples examples() override std::string doc() override
{ {
return { return
Example{ #include "log.md"
"To get the build log of GNU Hello:", ;
"nix log nixpkgs#hello"
},
Example{
"To get the build log of a specific path:",
"nix log /nix/store/lmngj4wcm9rkv3w4dfhzhcyij3195hiq-thunderbird-52.2.1"
},
Example{
"To get a build log from a specific binary cache:",
"nix log --store https://cache.nixos.org nixpkgs#hello"
},
};
} }
Category category() override { return catSecondary; } Category category() override { return catSecondary; }

40
src/nix/log.md Normal file
View file

@ -0,0 +1,40 @@
R""(
# Examples
* Get the build log of GNU Hello:
```console
# nix log nixpkgs#hello
```
* Get the build log of a specific store path:
```console
# nix log /nix/store/lmngj4wcm9rkv3w4dfhzhcyij3195hiq-thunderbird-52.2.1
```
* Get a build log from a specific binary cache:
```console
# nix log --store https://cache.nixos.org nixpkgs#hello
```
# Description
This command prints the log of a previous build of the derivation
*installable* on standard output.
Nix looks for build logs in two places:
* In the directory `/nix/var/log/nix/drvs`, which contains logs for
locally built derivations.
* In the binary caches listed in the `substituters` setting. Logs
should be named `<cache>/log/<base-name-of-store-path>`, where
*store-path* is a derivation path,
e.g. `https://cache.nixos.org/log/dvmig8jgrdapvbyxb1rprckdmdqx08kv-hello-2.10.drv`.
For non-derivation store paths, Nix will first try to determine the
deriver by fetching the `.narinfo` file for this store path.
)""

@@ -75,6 +75,8 @@ struct MixLs : virtual Args, MixJSON
         if (json) {
             JSONPlaceholder jsonRoot(std::cout);
+            if (showDirectory)
+                throw UsageError("'--directory' is useless with '--json'");
             listNar(jsonRoot, accessor, path, recursive);
         } else
             listText(accessor);
@@ -92,21 +94,18 @@ struct CmdLsStore : StoreCommand, MixLs
         });
     }
 
-    Examples examples() override
-    {
-        return {
-            Example{
-                "To list the contents of a store path in a binary cache:",
-                "nix store ls --store https://cache.nixos.org/ -lR /nix/store/0i2jd68mp5g6h2sa5k9c85rb80sn8hi9-hello-2.10"
-            },
-        };
-    }
-
     std::string description() override
     {
         return "show information about a path in the Nix store";
     }
 
+    std::string doc() override
+    {
+        return
+          #include "store-ls.md"
+          ;
+    }
+
     void run(ref<Store> store) override
     {
         list(store->getFSAccessor());
@@ -127,14 +126,11 @@ struct CmdLsNar : Command, MixLs
         expectArg("path", &path);
     }
 
-    Examples examples() override
+    std::string doc() override
     {
-        return {
-            Example{
-                "To list a specific file in a NAR:",
-                "nix nar ls -l hello.nar /bin/hello"
-            },
-        };
+        return
+          #include "nar-ls.md"
+          ;
     }
 
     std::string description() override
@@ -184,6 +184,13 @@ struct NixArgs : virtual MultiCommand, virtual MixCommonArgs
     {
         return "a tool for reproducible and declarative configuration management";
     }
+
+    std::string doc() override
+    {
+        return
+          #include "nix.md"
+          ;
+    }
 };
 
 static void showHelp(std::vector<std::string> subcommand)
@@ -205,21 +212,14 @@ struct CmdHelp : Command
     std::string description() override
     {
-        return "show help about 'nix' or a particular subcommand";
+        return "show help about `nix` or a particular subcommand";
     }
 
-    Examples examples() override
+    std::string doc() override
     {
-        return {
-            Example{
-                "To show help about 'nix' in general:",
-                "nix help"
-            },
-            Example{
-                "To show help about a particular subcommand:",
-                "nix help run"
-            },
-        };
+        return
+          #include "help.md"
+          ;
     }
 
     void run() override
@@ -272,7 +272,7 @@ void mainWrapped(int argc, char * * argv)
         auto builtins = state.baseEnv.values[0]->attrs;
         for (auto & builtin : *builtins) {
             auto b = nlohmann::json::object();
-            if (builtin.value->type != tPrimOp) continue;
+            if (!builtin.value->isPrimOp()) continue;
             auto primOp = builtin.value->primOp;
             if (!primOp->doc) continue;
             b["arity"] = primOp->arity;
@@ -15,21 +15,14 @@ struct CmdMakeContentAddressable : StorePathsCommand, MixJSON
     std::string description() override
     {
-        return "rewrite a path or closure to content-addressable form";
+        return "rewrite a path or closure to content-addressed form";
     }
 
-    Examples examples() override
+    std::string doc() override
     {
-        return {
-            Example{
-                "To create a content-addressable representation of GNU Hello (but not its dependencies):",
-                "nix store make-content-addressable nixpkgs#hello"
-            },
-            Example{
-                "To compute a content-addressable representation of the current NixOS system closure:",
-                "nix store make-content-addressable -r /run/current-system"
-            },
-        };
+        return
+          #include "make-content-addressable.md"
+          ;
     }
 
     void run(ref<Store> store, StorePaths storePaths) override
@@ -0,0 +1,59 @@
R""(
# Examples
* Create a content-addressed representation of the closure of GNU Hello:
```console
# nix store make-content-addressable -r nixpkgs#hello
rewrote '/nix/store/v5sv61sszx301i0x6xysaqzla09nksnd-hello-2.10' to '/nix/store/5skmmcb9svys5lj3kbsrjg7vf2irid63-hello-2.10'
```
Since the resulting paths are content-addressed, they are always
trusted and don't need signatures to be copied to another store:
```console
# nix copy --to /tmp/nix --trusted-public-keys '' /nix/store/5skmmcb9svys5lj3kbsrjg7vf2irid63-hello-2.10
```
By contrast, the original closure is input-addressed, so it does
need signatures to be trusted:
```console
# nix copy --to /tmp/nix --trusted-public-keys '' nixpkgs#hello
cannot add path '/nix/store/zy9wbxwcygrwnh8n2w9qbbcr6zk87m26-libunistring-0.9.10' because it lacks a valid signature
```
* Create a content-addressed representation of the current NixOS
system closure:
```console
# nix store make-content-addressable -r /run/current-system
```
# Description
This command converts the closure of the store paths specified by
*installables* to content-addressed form. Nix store paths are usually
*input-addressed*, meaning that the hash part of the store path is
computed from the contents of the derivation (i.e., the build-time
dependency graph). Input-addressed paths need to be signed by a
trusted key if you want to import them into a store, because we need
to trust that the contents of the path were actually built by the
derivation.
By contrast, in a *content-addressed* path, the hash part is computed
from the contents of the path. This allows the contents of the path to
be verified without any additional information such as
signatures. This means that a command like
```console
# nix store build /nix/store/5skmmcb9svys5lj3kbsrjg7vf2irid63-hello-2.10 \
--substituters https://my-cache.example.org
```
will succeed even if the binary cache `https://my-cache.example.org`
doesn't present any signatures.
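One way to see the difference between the two kinds of paths — a sketch, assuming `jq` is available — is to inspect the `ca` field reported by `nix path-info --json`, which is only set for content-addressed paths:
```console
# nix path-info --json /nix/store/5skmmcb9svys5lj3kbsrjg7vf2irid63-hello-2.10 | jq '.[0].ca'
```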
)""
src/nix/nar-cat.md (new file)
@@ -0,0 +1,19 @@
R""(
# Examples
* List a file in a NAR and pipe it through `gunzip`:
```console
# nix nar cat ./hello.nar /share/man/man1/hello.1.gz | gunzip
.\" DO NOT MODIFY THIS FILE! It was generated by help2man 1.46.4.
.TH HELLO "1" "November 2014" "hello 2.10" "User Commands"
```
# Description
This command prints on standard output the contents of the regular
file *path* inside the NAR file *nar*.
)""
src/nix/nar-dump-path.md (new file)
@@ -0,0 +1,17 @@
R""(
# Examples
* To serialise directory `foo` as a NAR:
```console
# nix nar dump-path ./foo > foo.nar
```
# Description
This command generates a NAR file containing the serialisation of
*path*, which must contain only regular files, directories and
symbolic links. The NAR is written to standard output.
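The resulting NAR can then be inspected with `nix nar ls`; for example, a sketch that assumes the `foo.nar` produced above:
```console
# nix nar ls -R ./foo.nar /
```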
)""
src/nix/nar-ls.md (new file)
@@ -0,0 +1,24 @@
R""(
# Examples
* To list a specific file in a NAR:
```console
# nix nar ls -l ./hello.nar /bin/hello
-r-xr-xr-x 38184 hello
```
* To recursively list the contents of a directory inside a NAR, in JSON
format:
```console
# nix nar ls --json -R ./hello.nar /bin
{"type":"directory","entries":{"hello":{"type":"regular","size":38184,"executable":true,"narOffset":400}}}
```
# Description
This command shows information about a *path* inside NAR file *nar*.
)""
Some files were not shown because too many files have changed in this diff.