Compare commits


138 commits

Author SHA1 Message Date
jade 87fd6e0095 Merge "Revert "libexpr: Replace regex engine with boost::regex"" into main 2024-08-22 22:34:10 +00:00
jade 9896d309cb Revert "libexpr: Replace regex engine with boost::regex"
This reverts commit 447212fa65.

Reason for revert: regression in eval behaviour (breaks bug-compatibility with Nix 2.18).

Expected behaviour (Nix 2.18.5, macOS and Linux [libstdc++/libc++]):

```
nix-repl> builtins.match "\\.*(.*)" ".keep"
[ "keep" ]

nix-repl> builtins.match "(\\.*)(.*)" ".keep"
[ "." "keep" ]
```

Actual behaviour (boost::regex):

```
nix-repl> builtins.match "\\.*(.*)" ".keep"
[ ".keep" ]

nix-repl> builtins.match "(\\.*)(.*)" ".keep"
[
  "."
  "keep"
]
```

Bug: lix-project/lix#483
Change-Id: Id462eb8586dcd54856cf095f09b3e3a216955b60
2024-08-22 18:35:11 +00:00
sugar🍬🍭🏳️‍⚧️ f2e7f8bab8 Merge "libexpr: Replace regex engine with boost::regex" into main 2024-08-22 07:20:00 +00:00
sugar🍬🍭🏳️‍⚧️ 447212fa65 libexpr: Replace regex engine with boost::regex
This avoids C++'s standard library regexes, which aren't the same
across platforms and have many other issues, like using so much stack
that they overflow it when processing a lot of data.

To avoid backwards and forward compatibility issues, regexes are
processed using a function converting libstdc++ regexes into Boost
regexes, escaping characters that Boost needs to have escaped, and
rejecting features that Boost has and libstdc++ doesn't.
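
For instance, a lone `}` compiles as a literal under libstdc++ but fails
to compile under boost::regex unless escaped (see the related context
below), which is the kind of divergence the conversion layer papers
over. A small illustrative sketch, not taken from the patch itself:

```
nix-repl> builtins.match "}" "}"
[ ]
```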

Related context:

- Original attempt to use `boost::regex` in CppNix, which failed due to
  the Boost ICU dependency being large (disabling ICU is no longer
  necessary, since ICU is only linked in when the separate header
  `boost/regex/icu.hpp` is used): https://github.com/NixOS/nix/pull/3826

- An attempt to use PCRE, rejected due to providing less backwards
  compatibility with `std::regex` than `boost::regex`:
  https://github.com/NixOS/nix/pull/7336

- Second attempt to use `boost::regex`, failed due to `}` regex failing
  to compile (dealt with by writing a wrapper that parses a regular
  expression and escapes `}` characters):
  https://github.com/NixOS/nix/pull/7762

Closes #34. Closes #476.

Change-Id: Ieb0eb9e270a93e4c7eed412ba4f9f96cb00a5fa4
2024-08-22 03:17:55 +02:00
jade 651cc0e5b4 fix: build with meson 1.5 also
nixpkgs delivered us the untimely gift of a meson 1.5 upgrade, which
*does* make our lives easier by allowing us to delete wrap generation
code, but it does so at the cost of renaming all rust crates in such a
way that the wrap logic cannot tolerate the new names on the old meson
version 😭.

It also means that support burden for this is going to be atrocious
until we either give in and vendor meson 1.5 or we make a CI target for
it. Neither seems appealing, though the latter is not super absurd for
ensuring we don't break nixpkgs unstable.

This commit causes meson 1.5 to ignore the .wrap files in subprojects/
entirely (since they have the wrong names lol) and instead use
Cargo.lock, so it now hard-depends on our workspace reshuffling
improvement.

It also deletes the hack that we were using to get the sources of Cargo
deps into meson by using a feature that went unnoticed when this code
was originally written: MESON_PACKAGE_CACHE_DIR:
8a202de6ec/mesonbuild/wrap/wrap.py (L490-L502)

Change-Id: I7a28f12fc2812c6ed7537b60bc3025c141a05874
2024-08-21 17:09:10 +00:00
jade dba615098d build: move to a Cargo workspace
This is purely to let Cargo's dependency resolver do stuff for us, we do
not actually intend to build this stuff with Cargo to begin with.

Change-Id: I4c08d55595c7c27b7096375022581e1e34308a87
2024-08-21 17:09:10 +00:00
piegames e38410799b Merge "libexpr: Soft-deprecate ancient let syntax" into main 2024-08-21 11:22:48 +00:00
piegames 0edfea450b libexpr: Soft-deprecate ancient let syntax
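
For reference, the ancient form binds attributes and implicitly returns
the `body` attribute; a small sketch of old versus new:

```
# Soft-deprecated ancient form: evaluates to the `body` attribute, here 2
let { x = 1; body = x + 1; }

# Modern equivalent
let x = 1; in x + 1
```
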
Change-Id: I6802b26f038578870ea1fa1ed298f0c4b1f29c4a
2024-08-21 12:59:03 +02:00
jade 3cbbe22fab Merge "flake: fix compiler warning" into main 2024-08-21 10:44:01 +00:00
piegames 0a8888d1c7 treewide: Stop using ancient let syntax
Shows how long these tests have gone untouched by anyone …

Change-Id: I3d0c1209a86283ddb012db4e7d45073264fdd0eb
2024-08-21 06:55:52 +00:00
piegames 7210ed1b87 libexpr: Soft-deprecate __overrides
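
For reference, `__overrides` lets a `rec` set override its own
attributes (with recursive references seeing the overridden values); a
small illustrative sketch:

```
nix-repl> (rec { x = 1; __overrides = { x = 2; }; }).x
2
```
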
Change-Id: I787e69e1dad6edc5ccdb747b74a9ccd6e8e13bb3
2024-08-21 06:55:52 +00:00
jade c25c43d8c8 flake: fix compiler warning
GCC was complaining, rightfully, about mixed-sign comparisons in there.
I removed some extra sign mixing too.

Change-Id: I949a618c7405c23d4dc3fd17440ea2d7b5c22c9d
2024-08-20 16:13:17 -07:00
Audrey Dutcher ac6974777e Merge "tests/functional/restricted: Don't use a process substitution" into main 2024-08-20 22:51:38 +00:00
jade 736b5d5913 lix-doc: move under src/
This is required to make more meson stuff easier/possible, and honestly
it *is* now Lix sources anyhow.

Change-Id: Ia6c38fabce9aa5c53768745ee38c5cf344f5c226
2024-08-20 13:38:46 -06:00
Qyriad 95863b258b build: build lix-doc with Meson! 🎉
lix-doc is now built with Meson, with lix-doc's dependencies built as
Meson subprojects, either fetched on demand with .wrap files, or fetched
in advance by Nix with importCargoLock. It even builds statically.

Fixes #256.

Co-authored-by: Lunaphied <lunaphied@lunaphied.me>
Co-authored-by: Jade Lovelace <lix@jade.fyi>

Change-Id: I3a4731ff13278e7117e0316bc0d7169e85f5eb0c
2024-08-20 17:21:13 +00:00
Yureka f1533160aa Merge "libutil: fix conditional for close_range availability" into main 2024-08-20 07:55:01 +00:00
Yureka df49d37b71 libutil: fix conditional for close_range availability
This check is wrong and would cause the close_range() function to be called even when it's not available

Change-Id: Ide65b36830e705fe772196c37349873353622761
2024-08-20 08:58:25 +02:00
Audrey Dutcher ae628d4af2 tests/functional/restricted: Don't use a process substitution
The <() process substitution syntax doesn't work for this one testcase
in bash for FreeBSD. The exact reason for this is unknown, possibly to
do with pipe vs file vs fifo EOF behavior. The prior behavior was this
test hanging forever, with no children of the bash process.

Change-Id: I71822a4b9dea6059b34300568256c5b7848109ac
2024-08-19 20:37:51 -07:00
Maximilian Bosch 040e783232 flake: don't refetch unmodified inputs by recursive follows
Closes #460

I managed to trigger the issue by having the following inputs (shortened):

    authentik-nix.url = "github:nix-community/authentik-nix";
    authentik-nix.inputs.poetry2nix.inputs.nixpkgs.follows = "nixpkgs";

When evaluating this using

    nix-eval-jobs --flake .#hydraJobs

I got the following error:

    error: cannot update unlocked flake input 'authentik-nix/poetry2nix' in pure mode

The issue we have here is that `authentik-nix/poetry2nix` was written
into the `overrideMap` which caused Nix to assume it's a new input and
tried to refetch it (#460) or errored out in pure mode
(nix-eval-jobs / Hydra).

The testcase unfortunately only checks the output log, and makes sure
that something *is* logged on the first fetch so that the test doesn't
rot when the logging changes, since I didn't manage to trigger the
error above with the reproducer from #460. In
fact, I only managed to trigger the `cannot update unlocked flake input`
error in this context with `nix-eval-jobs`.

Change-Id: Ifd00091eec9a0067ed4bb3e5765a15d027328807
2024-08-19 19:57:12 +00:00
eldritch horrors e727dbc3a3 libstore: un-enable_shared_from_this Goal
it's no longer needed for anything, and not even a great idea.

Change-Id: Ia7a59e1e3f9d8f4ad2ac3b054e38485157c210a6
2024-08-19 09:13:44 +00:00
eldritch horrors b40369942c libstore: make Worker::childStarted private
this can be a proper WorkResult now. childTerminated is unfortunately a
lot more stubborn and won't be made private for quite a while yet. once
we can get rid of the Worker poll loop that *should* be possible though

Change-Id: I2218df202da5cb84e852f6a37e4c20367495b617
2024-08-19 09:13:44 +00:00
eldritch horrors fca523d661 libstore: turn HookReply into a variant type
we'll need this once we want to pass extra information out of accepting
replies, such as fd sets or possibly even async output reader promises.

Change-Id: I5e2f18cdb80b0d2faf3067703cc18bd263329b3f
2024-08-19 09:13:44 +00:00
eldritch horrors 5e9db09761 libstore: downsize hook pipes
don't keep fds open we're not using. currently this does not cause any
problems, but it does increase the size of our fd table needlessly and
in the future, when we have proper async processing, having builderOut
open in the daemon once the hook has been fully started is problematic

Change-Id: I6e7fb773b280b042873103638d3e04272ca1e4fc
2024-08-19 09:13:44 +00:00
eldritch horrors e513cd2beb libstore: run childStarted as late as possible
otherwise we *technically* give away the output fds before we've read them.

Change-Id: I6ad0d6a1bb553ecfcdd7708f50d34142a425374d
2024-08-19 09:13:44 +00:00
eldritch horrors fb8eb539fc libstore: move respect-timeoutiness to goal method
this is useless to do on the face of it, but it'll make it easier to
convert the entire output handling to use async io and promises soon

Change-Id: I2d1eb62c4bbf8f57bd558b9599c08710a389b1a8
2024-08-19 09:13:44 +00:00
jade 3d14567d0b Merge "doc: fix broken meson deps for various manuals outputs" into main 2024-08-19 04:30:39 +00:00
jade 925e08b858 Merge "build: limit clang-tidy concurrency and respect NIX_BUILD_CORES" into main 2024-08-19 02:55:48 +00:00
eldritch horrors 5cbca85535 libstore: clarify that build log fd and hook log fd are different
only DerivationGoal can set the hook to anything at all. it always sets
buildOutFD to something that is not related to fromHook in any way, and
mixing the two would have rather dire consequences for log consistency.

Change-Id: Ida86727fd1cd5e1ecd78f07f3bde330a346658a8
2024-08-18 22:44:11 +00:00
jade ecfe9345cf build: limit clang-tidy concurrency and respect NIX_BUILD_CORES
Apparently it was impolite to lint with 128 jobs on our CI machine with
128 threads. Let's fix it.

Change-Id: I9ca7306294c6773c6f233690ba49d45a1da6bf7a
2024-08-18 15:39:05 -07:00
jade 84543b459c doc: fix broken meson deps for various manuals outputs
This is incredibly haunted, but it can happen that you change libutil,
breaking the generation of the .json files, which then does not rebuild
the files. I don't expect they are slow to build, so it does not seem so
bad to just rebuild them every time instead of extracting a list of all
the possible deps.

We want to delete this nonsense anyway and replace it with generated
code.

Change-Id: Ia576d1a3bdee48fbaefbb5ac194354428d179a84
2024-08-18 15:19:15 -07:00
eldritch horrors e2d330aeed libstore: remove DerivationGoal::isReadDesc
all derivation goals need a log fd of some description. let's save this
single fd in a dedicated pointer field for all subclasses so that later
we have just the one spot to change if we turn this into async promises

Change-Id: If223adf90909247363fb823d751cae34d25d0c0b
2024-08-18 22:04:06 +00:00
piegames 007211e7a2 libutil: Optimize feature checks
Instead of doing a linear search on an std::set, we use a bitset enum.

Change-Id: Ide537f6cffdd16d06e59aaeb2e4ac0acb6493421
2024-08-18 16:56:49 +00:00
eldritch horrors 7506d680ac libstore: don't ignore max-build-log-size for ssh-ng
Change-Id: Ieab14662bea6e6f5533325f0e945147be998f9a2
2024-08-18 09:10:05 +00:00
eldritch horrors 38f550708d libstore: add explicit in-build-slot-ness to goals
we don't need to expose information about how busy a Worker is if the
worker can instead tell its work items whether they are in a slot. in
the future we might use this to not start items waiting for a slot if
no slots are currently available, but that requires more preparation.

Change-Id: Ibe01ac536da7e6d6f80520164117c43e772f9bd9
2024-08-18 09:10:05 +00:00
eldritch horrors 176e1058f1 libstore: remove method without definition
Change-Id: I676411752a4b1777045d7211ac1176693f1a3d7d
2024-08-18 09:10:05 +00:00
eldritch horrors 91a74ba82a libstore: remove unused includes in worker code
Change-Id: I6c7fccc4e710e23a22faae2669cb75f2f6da27b4
2024-08-18 09:10:05 +00:00
eldritch horrors b66fd9ff4b libstore: make Worker::removeGoal private
Change-Id: I8583d9ff752f702a10ec52b0330b0d4d4d2614fa
2024-08-18 09:10:05 +00:00
piegames 278fddc317 libexpr: Deprecate URL literals
Closes #437.

Change-Id: I9f67fc965bb4a7e7fd849e5067ac1cb3bab064cd
2024-08-17 20:31:57 +02:00
piegames 49d61b2e4b libexpr: Introduce Deprecated features
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
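
Per the release notes, deprecations are enabled by default and can be
opted out of with `--extra-deprecated-features`. For example (assuming
`url-literals` is the feature name, a guess for illustration):

```
» nix eval --extra-deprecated-features url-literals --expr 'https://example.com'
```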

Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
2024-08-17 19:47:51 +02:00
piegames 1c080a8239 treewide: Stop using URL literals
They must die

Change-Id: Ibe2b1818b21d98ec1a68836d01d5dad729b8c501
2024-08-17 15:48:10 +00:00
Artemis Tosini 41a0b08e64 meson: Don't use target_machine
The target_machine variable is meant for the target
of cross compilers. We are not a cross compiler, so
instead reuse our host_machine based checks.

Fixes Linux→FreeBSD cross, since Meson can't figure
out `target_machine.kernel()` in that case.

Fixes: lix-project/lix#469

Change-Id: Ia46a64c8d507c3b08987a1de1eda171ff5e50df4
2024-08-16 14:24:03 +00:00
Artemis Tosini b016eb0895 Merge "libutil: Add bindPath function from libstore" into main 2024-08-13 19:39:10 +00:00
jade f9a3bf6ccc Update version to 2.92
Change-Id: Ib64d695c50a733e0e739ff193f1ea65ed7cb0a57
2024-08-12 18:06:08 -07:00
jade 4d04adf6ba release: merge release 2.91.0 back to mainline
This merge commit returns the branch to its state prior to the release, but leaves the tag in the branch history.
Release created with releng/create_release.xsh

Change-Id: I8fc975f856631dec7fb3314abd436675adabb59c
2024-08-12 16:05:27 -07:00
jade 10ac99a79c release: 2.91.0 "Dragon's Breath"
Release produced with releng/create_release.xsh

Change-Id: I2fa79b268c44b5f024dd833ee366d3e83f054af1
2024-08-12 16:05:26 -07:00
jade 7e0fee5309 release: release notes for 2.91.0
Release created with releng/create_release.xsh

Change-Id: Ieb6ca02d3cf986b28440fce3792e8c38ce80a33e
2024-08-12 16:04:22 -07:00
jade 5137cea990 README: clarify license to match documentation
For years both the documentation and nixpkgs have said that CppNix is
LGPL-2.1-or-later, not LGPL-2.1-only as is somewhat implied by the
README. We are choosing to update the README to match the rest of the
references.

Related: https://github.com/NixOS/nix/pull/5218
Change-Id: I6a765ae7857a2f84872f80a25983c4c4b2b3b1c1
2024-08-10 16:11:58 -07:00
jade b9ed79c99a libutil: deal with Linux systems that do not implement close_range
Seems a little bit Rich that musl does not implement close_range because
they suspect that the system call itself is a bad idea, so they uhhhh
are considering not implementing a wrapper. Let's just fix the problem
at hand by writing our own wrapper.

Change-Id: I1f8e5858e4561d58a5450503d9c4585aded2b216
2024-08-10 16:11:58 -07:00
jade b15d5cc6ee nix: remove explosions if you have a window size less than four
Turns out strings do not like being resized to -4.

This was discovered while messing with the tests to remove unbuffer and
trying stdbuf instead. Turns out that was not the right approach.

This basically rewrites the handling of this case to be much more
correct, and fixes a bug where, with small window sizes, it would
ALSO truncate the attr names in addition to the optional descriptions.

Change-Id: Ifd1beeaffdb47cbb5f4a462b183fcb6c0ff6c524
2024-08-10 16:11:58 -07:00
jade 0c76195351 build: remove expect as a dependency
I was packaging Lix 2.91 for nixpkgs and was annoyed at the expect
dependency. Turns out that you can replace unbuffer with a pretty-short
Python script.

It became less short after I found out that Linux was converting \n to
\r\n in the terminal subsystem, which was not very funny, but is at
least solved by twiddling termios bits.

Change-Id: I8a2700abcbbf6a9902e01b05b40fa9340c0ab90c
2024-08-10 16:10:16 -07:00
jade 292567e0b0 fix: check if it is a Real terminal, not just if it is a terminal
This will stop printing stuff to dumb terminals that they don't support.

I've overall audited usage of isatty and replaced the ones with intent
to mean "is a Real terminal" with checking for that. I've also caught a
case of carelessly assuming "is a tty" means "should be colour" in
nix-env.

Change-Id: I6d83725d9a2d932ac94ff2294f92c0a1100d23c9
2024-08-10 16:07:21 -07:00
jade 3775b6ac88 package: remove unused autotools code, empty file
I noticed there was some stuff setting configureFlags that definitely
does not do anything with meson, so let's rip it out.

As for the empty file, it was added when I was thinking I needed a fake
C++ target to convince meson to create the necessary dependencies. That
was not in fact possible so it should have never been committed.

Change-Id: Ied4723d8a5d21aed85f352c48b080ab2c977a496
2024-08-09 23:22:11 -07:00
jade 9851be99b9 version: update, and add codename
We're going for Dragon's Breath because horrors called dibs on it.

It's fine to merge this a little before the final release, since all the
dev versions have -pre in them anyway.

Change-Id: I763acb2fc1bf76030f7feaed983addf6ae2fdd53
2024-08-09 23:22:11 -07:00
jade 7ca47a0e69 rl-next: add extra context to a few release notes
This was found while writing the release blog post.

Change-Id: Ifd55f308d4d4c831273cbe6ea35d29a38e134783
2024-08-09 23:22:11 -07:00
jade 35c9069c66 rl-next: fix incorrect CL list syntax
This also fixes the script so that it will not pass pre-commit (it
fails to parse an int) if this mistake is made again.

Change-Id: I714369f515dc9987cf0c600d54a2ac745ba56830
2024-08-09 19:03:08 -07:00
eldritch horrors c7d97802e4 libutil: rename and optimize closeMostFDs
this is only used to close non-stdio files in derivation sandboxes. we
may as well encode that in its name, drop the unnecessary integer set,
and use close_range to deal with the actual closing of files. not only
is this clearer, it also makes sandbox setup on linux faster by 1ms each time

Change-Id: Id90e259a49c7bc896189e76bfbbf6ef2c0bcd3b2
2024-08-09 19:59:17 +00:00
eldritch horrors 35a2f28a46 libstore: deprecate the build-hook setting
implementing a build hook is pretty much impossible without either being
a nix, or blindly forwarding the important bits of all build requests to
some kind of nix. we've found no uses of build-hook in the wild, and the
build-hook protocol (apart from being entirely undocumented) is not able
to convey any kind of versioning information between hook and daemon. if
we want to upgrade this infrastructure (which we do), this must not stay

Change-Id: I1ec4976a35adf8105b8ca9240b7984f8b91e147e
2024-08-09 19:30:45 +00:00
jade 790d1079e1 Merge changes Ib7c80826,I636f8a71,I67669b98 into main
* changes:
  perl: un-autos your conf
  build: declare all the deps as -isystem
  darwin: workaround PROC_PIDLISTFDS on processes with no fds
2024-08-09 19:24:29 +00:00
Qyriad 346e340cbf Merge "libexpr: move Value implementations out of eval.cc" into main 2024-08-09 14:25:13 +00:00
eldritch horrors 5d4686bcd5 libutil: allow marking settings as deprecated
this is a bit of a hack, but it's apparently the cleanest way of doing
this in the absence of any kind of priority/provenance information for
values of some given setting. we'll need this to deprecate build-hook.

Change-Id: I03644a9c3f17681c052ecdc610b4f1301266ab9e
2024-08-09 11:33:09 +00:00
eldritch horrors baa4fda340 main: require argv[0]
sure, linux has been providing argv[0] by default for a while now. other
OSes may not be as forthcoming though, and relying on the OS to create a
world in which we can just make assumptions we could test for instead is
unnecessarily lazy. we *could* default argv0, but that's a little silly.

notably we abort instead of returning normally to avoid confusions where
a caller interprets our exit status like a Worker build results bitmask.

Change-Id: Id73f8cd0a630293b789c59a8c4b0c4a2b936b505
2024-08-09 11:33:09 +00:00
eldritch horrors 6491cde997 resolve-system-dependencies: remove entirely
this hasn't been used since 2020, and hasn't been compiled since may.

Change-Id: I865550966630eee6ba18d742ba36f0a90901279d
2024-08-09 11:33:09 +00:00
Qyriad 0787dcf5f6 libexpr: move Value implementations out of eval.cc
Change-Id: I2ce8a9713533888b3d109a56947156eb3a5ab492
2024-08-08 22:01:12 -06:00
jade 3b902683e9 Merge changes I0373ac01,I7b543967,I537103eb into main
* changes:
  releng: fix the git push
  releng: clarify/update docs, add instructions after tag
  Fix is_maintenance_branch heuristic
2024-08-08 23:12:11 +00:00
jade 9682ab4f38 Merge changes I6358a393,I2d9f276b,Idd096dc9 into main
* changes:
  clang-tidy: write a lint for charptr_cast
  tree-wide: automated migration to charptr_cast
  clang-tidy: enforce the new rules
2024-08-08 23:09:30 +00:00
jade 757041c3e7 Merge changes I526cceed,Ia4e2f1fa,I22e66972,I9fbd55a9,Ifca22e44 into main
* changes:
  sqlite: add a Use::fromStrNullable
  util: implement charptr_cast
  tree-wide: fix a pile of lints
  refactor: make HashType and Base enum classes for type safety
  build: integrate clang-tidy into CI
2024-08-08 22:43:10 +00:00
jade a5f0954c29 clang-tidy: write a lint for charptr_cast
This lets us ensure that nobody is putting in new reinterpret_cast
instances where they could safely use charptr_cast instead.

Change-Id: I6358a3934c8133c7150042635843bdbb6b9218d4
2024-08-08 14:53:17 -07:00
jade 4ed8461cac sqlite: add a Use::fromStrNullable
There were several usages of the raw sqlite primitives along with C
style casts, seemingly because nobody thought to use an optional for
getting a string or NULL.

Let's fix this API given we already *have* a wrapper.

Change-Id: I526cceedc2e356209d8fb62e11b3572282c314e8
2024-08-08 14:53:17 -07:00
jade a85c4ce535 tree-wide: automated migration to charptr_cast
The lint did it :3

Change-Id: I2d9f276b01ebbf14101de4257ea13e44ff6fe0a0
2024-08-08 14:53:17 -07:00
jade a318c96851 util: implement charptr_cast
I don't like having so many reinterpret_cast statements that have to
actually be looked at to determine if they are UB. A huge number of the
reinterpret_cast instances in Lix are actually casting to some pointer
of some character type, which is always valid no matter the source type.

However, it is also worth looking at if it is not casting both *from* a
character type and also *to* a character type, since IMO splatting a
struct into a character array should be a very deliberate action instead
of just being about dealing with bad APIs.

So let's write a template that encapsulates this invariant so we can
not worry about the trivially safe reinterpret_cast invocations.

Change-Id: Ia4e2f1fa0c567123a96604ddadb3bdd7449660a4
2024-08-08 14:53:17 -07:00
jade c1291fd102 clang-tidy: enforce the new rules
Fixes: lix-project/lix#241

Change-Id: Idd096dc9ca92ffd4be8c22d293ba5bf2ec48a85f
2024-08-08 14:53:17 -07:00
jade e34833c025 tree-wide: fix a pile of lints
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
  less busted on i686 than it began)
- Fixes some "technically UB" that never had to be UB in the first
  place.
- Makes finally follow the noexcept status of the inner function. Maybe
  in the future we should ban the function from not being noexcept, but
  that is not today.
- Makes various locally-used exceptions inherit from std::exception.

Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
2024-08-08 14:53:17 -07:00
jade 370ac940dd refactor: make HashType and Base enum classes for type safety
Change-Id: I9fbd55a9d50464a56fe11cb42a06a206914150d8
2024-08-08 14:53:17 -07:00
jade f3ef0899c7 build: integrate clang-tidy into CI
This still has utterly unacceptably bad output format design that I
would not inflict on anyone I like, but it *does* now exist, and you
*can* find the errors in the log.

Future work would obviously be to fix that and integrate the actual
errors into Gerrit using codechecker or so.

Followup issue: lix-project/lix#457

Fixes: lix-project/lix#147
Change-Id: Ifca22e443d357762125f4ad6bc4f568af3a26c62
2024-08-08 14:53:17 -07:00
piegames e03cd8b3a6 Merge "libexpr: Add experimental pipe operator" into main 2024-08-08 18:15:21 +00:00
eldritch horrors a957219df2 libstore: make Worker::waitForInput private
Change-Id: I71a42acd5a4a9a18b55cf754cdf9896614134398
2024-08-08 12:02:17 +00:00
eldritch horrors ba85e501ce libstore: make Worker status flags private
Change-Id: I16ec8994c6448d70b686a2e4c10f19d4e240750d
2024-08-08 12:02:17 +00:00
eldritch horrors fc987b4123 libstore: remove Goal::addWaitee
Change-Id: I1b00d1a537d84790878cb0e81aaa1cbaa143d62d
2024-08-08 12:02:17 +00:00
eldritch horrors 4c3010a1be libstore: make Worker::wakeUp private
Change-Id: Iffa55272fe6ef4adaf3e9d4d25e5339792c2e460
2024-08-08 12:02:17 +00:00
eldritch horrors 3ecb46e3e7 libstore: make Worker::waitForAWhile private
Change-Id: I0cdcd436ee71124ca992b4f4fe307624a25f11e9
2024-08-08 12:02:17 +00:00
eldritch horrors b33c969519 libstore: make Worker::waitForBuildSlot private
Change-Id: I02a54846cd65622edbd7a1d6c24a623b4a59e5b3
2024-08-08 12:02:17 +00:00
piegames 28ae24f3f7 libexpr: Add experimental pipe operator
The |> operator is a reverse function operator with low binding strength
to replace lib.pipe. Implements RFC 148, see the RFC text for more
details. Closes #438.
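
Per RFC 148, `a |> f` evaluates to `f a`. A small sketch (with the
experimental feature enabled):

```
nix-repl> 5 |> (x: x + 1) |> (x: x * 2)
12
```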

Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
2024-08-08 11:13:53 +02:00
jade 7246c2d104 releng: fix the git push
This was broken because gerrit requires that the revision actually
is known before it is pushed as a tag.

Also, arguably this fixes the original problem mentioned in
lix-project/lix#439

Change-Id: I0373ac01584440f18d32b8da5699bb359cc2c89a
2024-08-07 21:46:44 -07:00
jade 83247b1c38 releng: clarify/update docs, add instructions after tag
This is not a proper fix for the confusion that can happen about how the
tags are supposed to be used.

For a proper fix, we need to do
lix-project/lix#439 and implement
worktrees such that the user never sees the git state anymore.

Change-Id: I7b543967f522cede486e42684b48cad47da95429
2024-08-07 20:52:09 -07:00
jade 8a86f38bca Fix is_maintenance_branch heuristic
This was broken because the Nix language's version comparison does not
know how to deal with versions like -rc1 and considers them newer, which
is not desirable in this case.

That in turn led to not tagging 2.90.0 docker images as "latest" since
the heuristic was wrong.

This commit also adds some more cross-checking and failsafes in case the
person running releng does not have a local main branch that is up to
date.
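
The comparison the heuristic tripped over, illustrating the behaviour
described above:

```
nix-repl> builtins.compareVersions "2.90.0-rc1" "2.90.0"
1
```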

Fixes: lix-project/lix#443
Change-Id: I537103ebab58ae978c00e06972abe14432dd9c80
2024-08-07 20:14:45 -07:00
Max “Goldstein” Siling 6fdb47f0b2 Merge "src/libcmd/repl.cc: allow :log /path/to/store.drv" into main 2024-08-07 21:48:01 +00:00
jade 0800a81a95 Merge "oops: fix warning about catching polymorphic exception" into main 2024-08-07 19:06:54 +00:00
piegames ec7552ff74 libexpr/parser: Test experimental features
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass it around in the parser::State.

This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.

Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
2024-08-07 13:07:50 +00:00
Max “Goldstein” Siling 9adfd9b8ad src/libcmd/repl.cc: allow :log /path/to/store.drv
This adds a second form to the `:log` command: it now can accept a
derivation path in addition to a derivation expression. As derivation
store paths start with `/nix/store`, this is not ambiguous.
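
For example (with a hypothetical store path):

```
nix-repl> :log /nix/store/…-hello-2.12.drv
```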

Resolves: lix-project/lix#51
Change-Id: Iebc7b011537e7012fae8faed4024ea1b8fdc81c3
2024-08-07 15:58:44 +03:00
Maximilian Bosch 27a63db710 Merge "fix: warn and document when advanced attributes will have no impact due to __structuredAttrs" into main 2024-08-07 10:38:39 +00:00
jade d1fd1dc8ac perl: un-autos your conf
I definitely don't think we were using this, and it is probably an
omission in the original autoconf deletion more than anything.

Change-Id: Ib7c8082685e550575bca5af06f0e93adf982bd7c
2024-08-07 02:52:00 -07:00
jade f8fb335eb7 build: declare all the deps as -isystem
I don't know why but I was getting a spurious -Werror=switch-enum inside
toml11. It is unclear why it did not occur before, but it should be
stopped.

This was not done at an earlier stage to better match the legacy make
build system, but we don't use it anyway.

Change-Id: I636f8a71e8a0ba5e0feb80b435ae24c3af995c5d
2024-08-07 02:52:00 -07:00
jade 1437d3df15 darwin: workaround PROC_PIDLISTFDS on processes with no fds
This has been causing various seemingly spurious CI failures as well as
some failures on people running tests on beta builds.

```
lix> ++(nix-collect-garbage-dry-run.sh:20) nix-store --gc --print-dead
lix> ++(nix-collect-garbage-dry-run.sh:20) wc -l
lix> finding garbage collector roots...
lix> error: Listing pid 87261 file descriptors: Undefined error: 0
```

There is no real way to write a proper test for this, other than to
start a process like the following:

```
/* Close every fd, then idle so the GC can try to list this
   process's (zero) file descriptors. */
#include <unistd.h>

int main(void) {
    for (int i = 0; i < 1000; ++i) {
        close(i);
    }
    sleep(10000);
}
```

and then let Lix's gc look at it.

I have a relatively high confidence this *will* fix the problem since I
have manually confirmed that the libproc call misbehaves as suspected,
and it would perfectly explain the observed symptom.

Fixes: lix-project/lix#446
Change-Id: I67669b98377af17895644b3bafdf42fc33abd076
2024-08-07 02:52:00 -07:00
alois31 780998f4ea Merge "package: improve support for building without BDW-GC" into main 2024-08-07 07:07:28 +00:00
jade d280e4990c oops: fix warning about catching polymorphic exception
This was introduced in I0fc80718eb7e02d84cc4b5d5deec4c0f41116134 and
unnoticed since it only appears in gcc builds.

Change-Id: I1de80ce2a8fab63efdca7ca0de2a302ceb118267
2024-08-06 22:45:19 -07:00
jade 529eed74c4 Merge changes I0fc80718,Ia182b86f,I355f82cb,I8a9b58fa,Id89f8a1f, ... into main
* changes:
  tree-wide: fix various lint warnings
  flake & doxygen: update tagline
  nix flake metadata: print modified dates for input flakes
  cli: eat terminal codes from stdout also
  Implement forcing CLI colour on, and document it better
  manual: fix a syntax error in redirects.js that made it not do anything
  misc docs/meson tidying
  build: implement clang-tidy using our plugin
2024-08-07 00:50:30 +00:00
alois31 2c48460850 libstore/linux: precompile and cache the seccomp BPF
The growth of the seccomp filter in 127ee1a101
made its compilation time significant (roughly 10 milliseconds have been
measured on one machine). For this reason, it is now precompiled and cached in
the parent process so that this overhead is not hit for every single build. It
is still not optimal when going through the daemon, because compilation still
happens once per client, but it's better than before and doing it only once for
the entire daemon requires excessive crimes with the current architecture.

Fixes: lix-project/lix#461
Change-Id: I2277eaaf6bab9bd74bbbfd9861e52392a54b61a3
2024-08-06 19:10:33 +02:00
alois31 403fa9e2b6 libstore/linux: compile the seccomp BPF explicitly
This is a preparation for precompiling the filter, which is done separately.
The behaviour should be unchanged for now.

Change-Id: I899aa7242962615949208597aca88913feba1cb8
2024-08-06 18:31:40 +02:00
alois31 741d3b441c libstore: add LocalDerivationGoal setupSyscallFilter hook
The seccomp setup code was a huge chunk of conditionally compiled
platform-specific code. For this reason, it is appropriate to move it to the
platform-specific implementation file. Ideally its setup could be moved a bit
to make it happen at the same place as the Darwin restrictions, but that change
is going to be less mechanical.

Change-Id: I496aa3c4fabf34656aba1e32b0089044ab5b99f8
2024-08-06 18:27:09 +02:00
alois31 f84997cbef package: don't hide system-wide manual pages
When MANPATH is unset or contains an empty component, a reasonable default is
used. Previously (after 3dced96741), when MANPATH
was unset, the shell hook would only place a location containing the Lix manual
pages there, and system-wide manual pages would become unavailable in the
development shell, which is undesired. Fix the issue by including an empty
component in this case.

Change-Id: Ib3c67a831d709fe2a87520e15917eebb59397bd1
2024-08-06 17:18:05 +02:00
jade ca9d3e6e00 tree-wide: fix various lint warnings
Change-Id: I0fc80718eb7e02d84cc4b5d5deec4c0f41116134
2024-08-04 20:55:45 -07:00
jade 9238e62ae6 flake & doxygen: update tagline
This tagline was left over from CppNix and we should make it tastier.

Change-Id: Ia182b86f6e751591be71a50521992ad73c7b38b5
2024-08-04 20:41:19 -07:00
jade bd1344ec54 nix flake metadata: print modified dates for input flakes
This was always in the lock file, and we can now simply print it.

The test for this is a little bit silly but it should correctly
control for my daring to exercise timezone code *and* locale code in a
test, which I strongly suspect nobody dared do before.

Sample (abridged):
```
Path:          /nix/store/gaxb42z68bcr8lch467shvmnhjjzgd8b-source
Last modified: 1970-01-01 00:16:40
Inputs:
├───flake-compat: github:edolstra/flake-compat/0f9255e01c2351cc7d116c072cb317785dd33b33
│   Last modified: 2023-10-04 13:37:54
├───flake-utils: github:numtide/flake-utils/b1d9ab70662946ef0850d488da1c9019f3a9752a
│   Last modified: 2024-03-11 08:33:50
│   └───systems: github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e
│       Last modified: 2023-04-09 08:27:08
```

Change-Id: I355f82cb4b633974295375ebad646fb6e2107f9b
2024-08-04 20:41:19 -07:00
jade 5f0ef50077 cli: eat terminal codes from stdout also
This *should* be sound, plus or minus the amount that the terminal code
eating code is messed up already.

This is useful for testing CLI output because it will strip the escapes
enough to just shove the expected output in a file.

Change-Id: I8a9b58fafb918466ac76e9ab585fc32fb9294819
2024-08-04 20:41:19 -07:00
jade 378ec5fb06 Implement forcing CLI colour on, and document it better
This is necessary to make some old tests work when testing colour
against non-interactive outputs.

Change-Id: Id89f8a1f45c587fede35a69db85f7a52f2c0a981
2024-08-04 20:41:19 -07:00
jade 700762d8b2 manual: fix a syntax error in redirects.js that made it not do anything
lol lmao

Let's put in a syntax checker in CI so we do not have to deal with this
nonsense ever again.

Change-Id: I0fe875e0cfc59ab1783087762e5bb07e09ded105
2024-08-04 20:41:19 -07:00
jade 0f998056fa misc docs/meson tidying
The docs page has an incorrect escape that leads to a backslash
appearing in output. Meson stuff is self-explanatory, just shortens and
simplifies a bit.

Change-Id: Ib63adf934efd3caeb82ca82988f230e8858a79f9
2024-08-04 20:41:19 -07:00
jade 3daeeaefb1 build: implement clang-tidy using our plugin
The principle of this is that you can either externally build it with
Nix (actual implementation will be in a future commit), or it can be
built with meson if the Nix one is not passed in.

The idea I have is that dev shells don't receive the one from Nix to
avoid having to build it, but CI can use the one from Nix and save some
gratuitous rebuilds.

The design of this is that you can run `ninja -C build clang-tidy` and
it will simply correctly clang-tidy the codebase in spite of PCH
bullshit caused by the cc-wrapper.

This is a truly horrendous number of hacks in a ball, caused by bugs in
several pieces of software, and I am not even getting started.

I don't consider this to close the filed clang-tidy issue, since we still
have a fair number of issues to fix even on the existing minimal
configuration, and I have not yet implemented it in CI. Realistically we
will need to do something like https://github.com/Ericsson/codechecker
to be able to silence warnings without physically touching the code, or
at least *diff* reports between versions.

Also, the run-clang-tidy output design is rather atrocious and must
not be inflicted upon anyone I have respect for, since it buries the
diagnostics in a pile of invocation logs. We would do really well to
integrate with the Gerrit SARIF stuff so we can dump the reports on
people in a user-friendly manner.

Related: lix-project/lix#147

Change-Id: Ifefe533f3b56874795de231667046b2da6ff2461
2024-08-04 20:41:19 -07:00
Tom Bereknyei 7fc481396c fix: warn and document when advanced attributes will have no impact due to __structuredAttrs
Backport of https://github.com/NixOS/nix/pull/10884.

Change-Id: I82cc2794730ae9f4a9b7df0185ed0aea83efb65a
2024-08-03 13:32:51 +02:00
alois31 58758c0f87 package: improve support for building without BDW-GC
Expose an option for disabling the BDW-GC build dependency entirely. Fix the
place where one of its headers was included (unnecessarily) without proper
guarding. Finally, use this machinery to exclude BDW-GC from the ASAN builds
entirely (its usage has already been disabled due to compatibility issues
anyway), to ensure this configuration is not regressed again.

Change-Id: I2ebe8094abf67e7d1e99eed971de3e99d071c10b
2024-08-03 06:14:41 +02:00
eldritch horrors 66469fc281 libstore: move Goal::waiteeDone into Worker::goalFinished
this begins a long and arduous journey to remove all result state from
Goal, to eventually drop the std::enable_shared_from_this base, and to
completely eliminate all unsynchronized modification of states of both
Goal and Worker. by the end of this we will hopefully be able to start
and reap multiple derivation builds in parallel, which should speed up
the process quite a bit (at least for short local builds, others might
not notice a large difference. the build hooks will remain a problem.)

Change-Id: I57dcd9b2cab4636ed4aa24cdec67124fef883345
2024-08-03 00:08:44 +00:00
alois31 32ca194ebf Merge "libstore/ssh: only resume the logger when we paused it" into main 2024-08-02 16:59:44 +00:00
alois31 a93dade821 libstore/ssh: only resume the logger when we paused it
In the SSH code, the logger was conditionally paused, but unconditionally
resumed. This was fine as long as resuming the logger was idempotent. Starting
with 0dd1d8ca1c, it isn't any more, and the
behaviour of the code in question was missed. Consequently, an assertion
failure is triggered for example when performing builds against an "SSH" store
on localhost. Fix the issue by only resuming the logger when it has actually
been paused.

Fixes: lix-project/lix#458
Change-Id: Ib1e4d047744a129f15730b7216f9c9368c2f4211
2024-08-02 18:38:14 +02:00
eldritch horrors e5177dddff libstore: move Goal::amDone to Worker
we still mutate goal state to store the results of any given goal run,
but now we also have that information in Worker and could in theory do
something else with it. we could return a map of goal to goal results,
which would also let us better diagnose failures of subgoals (at all).

Change-Id: I1df956bbd9fa8cc9485fb6df32918d68dda3ff48
2024-08-02 13:52:15 +00:00
eldritch horrors dfcab1c3f0 libstore: return finishedness from Goal methods
this is the first step towards removing all result-related mutation of
Goal state from goal implementations themselves, and into Worker state
instead. once that is done we can treat all non-const Goal fields like
private state of the goal itself, and make threading of goals possible

Change-Id: I69ff7d02a6fd91a65887c6640bfc4f5fb785b45c
2024-08-02 13:52:15 +00:00
eldritch horrors 724b345eb9 libstore: encapsulate worker build hook state
once goals run on multiple threads these fields must be synchronized as
one, or we try to run build hooks too often (or worse, not often enough)

Change-Id: I47860e46fe5c6db41755b2a3a1d9dbb5701c4ca4
2024-08-02 13:52:15 +00:00
eldritch horrors 868eb5ecde libutil: make RunningProgram::wait more resilient
this will usually be used either directly (which is always fine) or in
Finally blocks (where it must never throw exceptions). make sure that,
whether exceptions are being handled or not, calling wait() in Finally
doesn't cause crashes due to the Finally no-nested-exceptions-thrown
assertion

Change-Id: Ib83a5d9483b1fe83b9a957dcefeefce5d088f06d
2024-08-02 13:12:44 +00:00
Jeremy List c907d805bf Merge "package: make aws-sdk-cpp build input optional" into main 2024-08-02 11:42:13 +00:00
Isabel 9eb374dc6d Merge "nix flake show: add the description if it exists" into main 2024-08-02 07:56:06 +00:00
Maximilian Bosch 3bb8c627ae Merge "Reapply "libfetchers: make attribute / URL query handling consistent"" into main 2024-08-02 04:50:25 +00:00
Maximilian Bosch 87fd1f024c Reapply "libfetchers: make attribute / URL query handling consistent"
The original attempt at this introduced a regression; this commit
reverts the revert and fixes the regression.

This reverts commit 3e151d4d77.

Fix to the regression:

flakeref: fix handling of `?dir=` param for flakes in subdirs

As reported in #419[1], with the previous commit[2] applied, accessing
a flake in a subdir of a Git repository fails with the error

    error: unsupported Git input attribute 'dir'

The problem is that the `dir`-param is inserted into the parsed URL if a
flake is fetched from the subdir of a Git repository. However, for the
fetching part this isn't even needed. The fix is to just pass `subdir`
as second argument to `FlakeRef` (which needs a `basedir` that can be
empty) and leave the parsedURL as-is.

Added a regression test to make sure we don't run into this again.
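
Such flakes are addressed with a `?dir=` query parameter; an
illustrative, hypothetical input:

```
inputs.foo.url = "git+https://example.com/repo.git?dir=subdir";
```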

[1] lix-project/lix#419
[2] e22172aaf6b6a366cecd3c025590e68fa2b91bcc,
    originally 3e151d4d77

Change-Id: I2c72d5a32e406a7ca308e271730bd0af01c5d18b
2024-08-01 15:41:30 -07:00
jade 8b69d13368 Merge "flake: remove control character from file by using fromJSON" into main 2024-08-01 22:25:36 +00:00
Jeremy List f41190552f package: make aws-sdk-cpp build input optional
I have added an option to turn off this build input because I'm much
more comfortable when I don't have that type of thing on my computer.
Its default value is true in order to avoid impacting anyone who depends
on AWS features.

Change-Id: Ic57f3c9b9468f422e9fbdcf3ba0fe96177631067
2024-08-02 09:14:48 +12:00
Qyriad 61a93d5308 Merge changes Icc4747ae,Id4717b5b,Ie3ddb3d0,Ic4d93a08,I00d9ff70 into main
* changes:
  remove unused headers in installable-attr-path
  libexpr: include the type of the non-derivation value in the type error
  libexpr: mild cleanup to getDerivations
  libexpr: DrvInfo: remove unused bad-citizen constructor
  cleanup and slightly refactor DrvInfo::queryOutputs
2024-08-01 16:25:43 +00:00
jade e6fc3e9227 flake: remove control character from file by using fromJSON
I was reminded by various evil things puck did to the evaluator
involving null bytes that you can get funny bytes by abusing JSON
parsing. It's neater than putting binary in the source file, so let's do
it.
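
Roughly, JSON `\uXXXX` escapes decode to the raw character, so a
control character can be produced from plain-ASCII source; an
illustrative sketch:

```
nix-repl> builtins.stringLength (builtins.fromJSON "\"\\u001b\"")
1
```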

Change-Id: I1ff2e0d829eb303fbed81fa2ebb3a39412e89ff1
2024-07-31 23:23:42 -07:00
jade a3ab2cc78a Merge changes from topic "undefined-behaviour" into main
* changes:
  releng: move officialRelease to version.json
  Add -Werror CI job
  ci: add a asan+ubsan test run on x86_64-linux
  tree-wide: add support for asan!
2024-08-01 04:01:34 +00:00
Qyriad 17d7e88707 remove unused headers in installable-attr-path
Change-Id: Icc4747aed195e3855b128c73df82e202405af6a8
2024-08-01 00:37:13 +00:00
Qyriad 4f6a3d7e9e libexpr: include the type of the non-derivation value in the type error
Change-Id: Id4717b5b0df7c09b0dbf17e642d8713a0a3efbae
2024-08-01 00:37:03 +00:00
Qyriad 5ffed6d06a libexpr: mild cleanup to getDerivations
Shuffled the logic around a bit so the shorter code paths are early
returns, added comments, etc.

Should be NFC.

Change-Id: Ie3ddb3d0eddd614d6f8c37bf9a4d5a50282084ea
2024-08-01 00:36:55 +00:00
Qyriad 6a30ea0cc4 libexpr: DrvInfo: remove unused bad-citizen constructor
DrvInfo's constructor that only takes `EvalState` leaves everything else
empty; a DrvInfo which has no iota of information about the derivation
it represents is not useful, and was not used anywhere.

Change-Id: Ic4d93a08cb2748b8cef9a61e41e70404834b23f9
2024-08-01 00:36:41 +00:00
Qyriad eb18dcb0ea cleanup and slightly refactor DrvInfo::queryOutputs
Change-Id: I00d9ff707fe61995737b86af6d2eaa1e4d8116ff
2024-08-01 00:36:27 +00:00
jade 5eecdd3ae9 releng: move officialRelease to version.json
This was causing a few bits of suffering downstream, in particular, in
the NixOS module, which, after this change, can have the
`officialRelease` stuff in *it* completely deleted since we now have
correct defaulting in package.nix for it.

It also eliminates some automated editing of Nix files, which is
certainly always welcome to eliminate.

Fixes: lix-project/lix#406
Change-Id: Id12f3018cff4633e379dbfcbe26b7bc84922bdaf
2024-07-31 14:13:39 -07:00
jade b5c6ce7a53 Add -Werror CI job
We should cause CLs that introduce compiler warnings to fail CI. Sadly
this will only cover Clang, but it will cover Clang for free, so it's
truly impossible to say if it's bad or not.

Change-Id: I45ca20d77251af9671d5cbe0d29cb08c5f1d03c2
2024-07-31 14:13:39 -07:00
jade e51263057f ci: add a asan+ubsan test run on x86_64-linux
This should at least catch out blatantly bad patches that don't pass the
test suite with ASan. We don't do this to the integration tests since
they run on relatively limited-memory VMs and so it may not be super
safe to run an evaluator with leak driven garbage collection for them.

Fixes: lix-project/lix#403
Fixes: lix-project/lix#319
Change-Id: I5267b02626866fd33e8b4d8794344531af679f78
2024-07-31 14:13:39 -07:00
jade 19ae87e5ce tree-wide: add support for asan!
What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but this is
blocked on boost coroutines removal at this point tbh.

Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
2024-07-31 14:13:39 -07:00
Qyriad ddfca6e81b libexpr: implement actual constructors for nix::Value
Change-Id: Iebc2bb4e4ea5e93045afe47677df756de4ec4d05
2024-07-31 15:38:37 +02:00
Artemis Tosini 3058029fba libutil: Add bindPath function from libstore
bindPath/doBind is a useful function in the build code, used in several
parts of LocalDerivationGoal. Moving this function makes it easier to
split LocalDerivationGoal implementation between several files.

Change-Id: Ic5a0768479c153c1aa3ed425f12604b20bbf0f42
2024-07-27 19:40:40 +00:00
Isabel d2422771eb nix flake show: add the description if it exists
Squashed from several cherry-picked commits:

- nix flake show: add the description if it exists
  (cherry picked from commit 8cd1d02f90eb9915e640c5d370d919fad9833c65)
- nix flake show: only print up to the first newline if it exists
  (cherry picked from commit 5281a44927bdb51bfe6e5de12262d815c98f6fe7)
- add tests
  (cherry picked from commit 74ae0fbdc70a5079a527fe143c4832d1357011f7)
- handle long strings, embedded newlines, and empty descriptions
  (cherry picked from commit 2ca7b3afdbbd983173a17fa0a822cf7623601367)
- account for a total length of 80
  (cherry picked from commit 1cc808c18cbaaf26aaae42bb1d7f7223f25dd364)
- docs: add nix flake show description release note
- fix: remove whitespace
- nix flake show: trim length based on terminal size
- test: account for terminal size
- docs(flake-description): before and after commands; add myself to credits

Upstream-PR: https://github.com/NixOS/nix/pull/10980
Change-Id: Ie1c667dc816b3dd81e65a1f5395e57ea48ee0362
2024-07-23 13:21:15 +01:00
302 changed files with 6039 additions and 3075 deletions

@@ -16,3 +16,20 @@ Checks:
- -bugprone-unchecked-optional-access
# many warnings, seems like a questionable lint
- -bugprone-branch-clone
+# all thrown exceptions must derive from std::exception
+- hicpp-exception-baseclass
+# capturing async lambdas are dangerous
+- cppcoreguidelines-avoid-capturing-lambda-coroutines
+# crimes must be appropriately declared as crimes
+- cppcoreguidelines-pro-type-cstyle-cast
+- lix-*
+# This can not yet be applied to Lix itself since we need to do source
+# reorganization so that lix/ include paths work.
+- -lix-fixincludes
+# This lint is included as an example, but the lib function it replaces is
+# already gone.
+- -lix-hasprefixsuffix
+CheckOptions:
+  bugprone-reserved-identifier.AllowedIdentifiers: '__asan_default_options'

.gitignore
@@ -9,6 +9,10 @@ GTAGS
# ccls
/.ccls-cache
# auto-generated compilation database
compile_commands.json
rust-project.json
result
result-*
@@ -29,3 +33,6 @@ buildtime.bin
/.pre-commit-config.yaml
/.nocontribmsg
/release
+# Rust build files when using Cargo (not actually supported for building but it spews the files anyway)
+/target/

Cargo.toml (new file)
@@ -0,0 +1,6 @@
[workspace]
resolver = "2"
members = ["src/lix-doc"]

[workspace.package]
edition = "2021"

@@ -26,4 +26,4 @@ See our [Hacking guide](https://git.lix.systems/lix-project/lix/src/branch/main/
## License
-Lix is released under the [LGPL v2.1](./COPYING).
+Lix is released under [LGPL-2.1-or-later](./COPYING).

@@ -1,13 +0,0 @@
project('lix-clang-tidy', ['cpp', 'c'],
  version : '0.1',
  default_options : ['warning_level=3', 'cpp_std=c++20'])

llvm = dependency('Clang', version: '>= 14', modules: ['libclang'])

sources = files(
  'HasPrefixSuffix.cc',
  'LixClangTidyChecks.cc',
  'FixIncludes.cc',
)

shared_module('lix-clang-tidy', sources,
  dependencies: llvm)

@@ -20,7 +20,7 @@ OUTPUT_DIRECTORY = @docdir@
# for a project that appears at the top of each page and should give viewer a
# quick idea about the purpose of the project. Keep the description short.
PROJECT_BRIEF = "Nix, the purely functional package manager; unstable internal interfaces"
PROJECT_BRIEF = "Lix: A modern, delicious implementation of the Nix package manager; unstable internal interfaces"
# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
# The default value is: YES.

@@ -70,10 +70,17 @@ horrors:
iFreilicht:
  github: iFreilicht
isabelroses:
  forgejo: isabelroses
  github: isabelroses
jade:
  forgejo: jade
  github: lf-
kjeremy:
  github: kjeremy
kloenk:
  forgejo: kloenk
  github: kloenk
@@ -96,6 +103,11 @@ midnightveil:
ncfavier:
  github: ncfavier
piegames:
  display_name: piegames
  forgejo: piegames
  github: piegamesde
puck:
  display_name: puck
  forgejo: puck

@@ -1,9 +1,14 @@
# Usually "experimental" or "deprecated"
kind:
# "xp" or "dp"
kindShort:
with builtins;
with import ./utils.nix;
let
showExperimentalFeature = name: doc: ''
- [`${name}`](@docroot@/contributing/experimental-features.md#xp-feature-${name})
- [`${name}`](@docroot@/contributing/${kind}-features.md#${kindShort}-feature-${name})
'';
in
xps: indent " " (concatStrings (attrValues (mapAttrs showExperimentalFeature xps)))

@@ -0,0 +1,18 @@
# Usually "experimental" or "deprecated"
_kind:
# "xp" or "dp"
kindShort:

with builtins;
with import ./utils.nix;

let
  showFeature =
    name: doc:
    squash ''
      ## [`${name}`]{#${kindShort}-feature-${name}}

      ${doc}
    '';
in
xps: (concatStringsSep "\n" (attrValues (mapAttrs showFeature xps)))

@@ -1,13 +0,0 @@
with builtins;
with import ./utils.nix;

let
  showExperimentalFeature =
    name: doc:
    squash ''
      ## [`${name}`]{#xp-feature-${name}}

      ${doc}
    '';
in
xps: (concatStringsSep "\n" (attrValues (mapAttrs showExperimentalFeature xps)))

@@ -20,6 +20,8 @@ conf_file_json = custom_target(
capture : true,
output : 'conf-file.json',
env : nix_env_for_docs,
+# FIXME: put the actual lib targets in here? meson have introspection challenge 2024 though.
+build_always_stale : true,
)
nix_conf_file_md_body = custom_target(
@@ -50,6 +52,8 @@ nix_exp_features_json = custom_target(
command : [ nix, '__dump-xp-features' ],
capture : true,
output : 'xp-features.json',
+# FIXME: put the actual lib targets in here? meson have introspection challenge 2024 though.
+build_always_stale : true,
)
language_json = custom_target(
@@ -57,6 +61,8 @@ language_json = custom_target(
output : 'language.json',
capture : true,
env : nix_env_for_docs,
+# FIXME: put the actual lib targets in here? meson have introspection challenge 2024 though.
+build_always_stale : true,
)
nix3_cli_json = custom_target(
@@ -64,6 +70,8 @@ nix3_cli_json = custom_target(
capture : true,
output : 'nix.json',
env : nix_env_for_docs,
+# FIXME: put the actual lib targets in here? meson have introspection challenge 2024 though.
+build_always_stale : true,
)
generate_manual_deps = files(
@@ -72,9 +80,9 @@ generate_manual_deps = files(
# Generates builtins.md and builtin-constants.md.
subdir('src/language')
-# Generates new-cli pages, experimental-features-shortlist.md, and conf-file.md.
+# Generates new-cli pages, {experimental,deprecated}-features-shortlist.md, and conf-file.md.
subdir('src/command-ref')
-# Generates experimental-feature-descriptions.md.
+# Generates {experimental,deprecated}-feature-descriptions.md.
subdir('src/contributing')
# Generates rl-next-generated.md.
subdir('src/release-notes')
@@ -106,6 +114,8 @@ manual = custom_target(
nix3_cli_files,
experimental_features_shortlist_md,
experimental_feature_descriptions_md,
+deprecated_features_shortlist_md,
+deprecated_feature_descriptions_md,
conf_file_md,
builtins_md,
builtin_constants_md,

@@ -345,7 +345,7 @@ const redirects = {
"linux": "uninstall.html#linux",
"macos": "uninstall.html#macos",
"uninstalling": "uninstall.html",
-}
+},
"contributing/hacking.html": {
"nix-with-flakes": "#building-nix-with-flakes",
"classic-nix": "#building-nix",

@@ -1,23 +0,0 @@
---
synopsis: Define integer overflow in the Nix language as an error
issues: [fj#423]
cls: [1594, 1595, 1597, 1609]
category: Fixes
credits: [jade]
---
Previously, integer overflow in the Nix language invoked C++ level signed overflow, which was undefined behaviour, but *probably* manifested as wrapping around on overflow.
Since prior to the public release of Lix, Lix had C++ signed overflow defined to crash the process, and nobody noticed that this had accidentally removed overflow from the Nix language for three months, until it was caught by fiddling around.
Given the significant body of actual Nix code that has been evaluated by Lix in that time, it does not appear that nixpkgs or much of importance depends on integer overflow, so it is safe to turn into an error.
Some other overflows were fixed:
- `builtins.fromJSON` of values greater than the maximum representable value in a signed 64-bit integer will generate an error.
- `nixConfig` in flakes will no longer accept negative values for configuration options.
Integer overflow now looks like the following:
```
» nix eval --expr '9223372036854775807 + 1'
error: integer overflow in adding 9223372036854775807 + 1
```

@@ -1,90 +0,0 @@
---
synopsis: "Trace which part of a `foo.bar.baz` expression errors"
cls: 1505, 1506
credits: Qyriad
category: Improvements
---
Previously, if an attribute path selection expression like `linux_4_9.meta.description` failed, the error wouldn't show you which one of those parts in the attribute path was at fault, or even that that line of code is what caused evaluation of the failing expression.
The previous error looks like this:
```
pkgs.linuxKernel.kernels.linux_4_9.meta.description
error:
… while evaluating the attribute 'linuxKernel.kernels.linux_4_9.meta.description'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… while calling the 'throw' builtin
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```
Now, the error will look like this:
```
pkgs.linuxKernel.kernels.linux_4_9.meta.description
error:
… while evaluating the attribute 'linuxKernel.kernels.linux_4_9.meta.description'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… while evaluating 'pkgs.linuxKernel.kernels.linux_4_9' to select 'meta' on it
at «string»:1:1:
1| pkgs.linuxKernel.kernels.linux_4_9.meta.description
| ^
… caused by explicit throw
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```
Not only does the line of code that referenced the failing attribute show up in the trace, it also tells you that it was specifically the `linux_4_9` part that failed.
This includes if the failing part is a top-level binding:
```
let
inherit (pkgs.linuxKernel.kernels) linux_4_9;
in linux_4_9.meta.description
error:
… while evaluating 'linux_4_9' to select 'meta.description' on it
at «string»:3:4:
2| inherit (pkgs.linuxKernel.kernels) linux_4_9;
3| in linux_4_9.meta.description
| ^
… while evaluating the attribute 'linux_4_9'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… caused by explicit throw
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```

View file

@ -1,12 +0,0 @@
---
synopsis: "Block io_uring in the Linux sandbox"
cls: 1611
credits: alois31
category: Breaking Changes
---
The io\_uring API has the unfortunate property that it is not possible to selectively decide which operations should be allowed.
This, together with the fact that new operations are routinely added, makes it a hazard to the proper function of the sandbox.
Therefore, any access to io\_uring has been made unavailable inside the sandbox.
As such, attempts to execute any system calls forming part of this API will fail with the error `ENOSYS`, as if io\_uring support had not been configured into the kernel.

View file

@ -1,12 +0,0 @@
---
synopsis: "Add a `build-dir` setting to set the backing directory for builds"
cls: 1514
credits: [roberth, tomberek]
category: Improvements
---
`build-dir` can now be set in the Nix configuration to choose the backing directory for the build sandbox.
This can be useful on systems with `/tmp` on tmpfs, or simply to relocate large builds to another disk.
Also, `XDG_RUNTIME_DIR` is no longer considered when selecting the default temporary directory,
as it's not intended to be used for large amounts of data.
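For example, a minimal `nix.conf` sketch (the directory path here is illustrative):
```
# relocate build sandboxes off tmpfs
build-dir = /var/tmp/nix-builds
```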

View file

@ -0,0 +1,17 @@
---
synopsis: Deprecated language features
issues: [fj#437]
cls: [1785, 1736, 1735, 1744]
category: Breaking Changes
credits: [piegames, horrors]
---
A system for deprecation (and then the planned removal) of undesired language features has been put into place.
It is controlled via feature flags much like experimental features, except that the deprecations are enabled by default,
and can be disabled for backwards compatibility (opt out with `--extra-deprecated-features` or the equivalent Nix configuration setting).
- `url-literals`: **URL literals** have long been obsolete and their use discouraged, and now they are officially deprecated.
This means that all URLs must be quoted like all other strings.
- `rec-set-overrides`: **__overrides** is an old arcane syntax which has not been in use for more than a decade.
It is soft-deprecated with a warning only, with the plan to turn that into an error in a future release.
- `ancient-let`: **The old `let` syntax** (`let { body = …; … }`) is soft-deprecated with a warning as well. Use the regular `let … in` instead. A short migration sketch follows this list.
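For illustration, a minimal migration sketch for the two syntactic deprecations (the URL and names are made up):
```nix
# Deprecated: URL literal inside the ancient let syntax
# let { body = fetchTarball http://example.org/foo.tar.bz2; }

# Preferred: quote the URL and use the regular `let … in`
let
  src = fetchTarball "http://example.org/foo.tar.bz2";
in
  src
```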

View file

@ -1,70 +0,0 @@
---
synopsis: "Distinguish between explicit throws and errors that happened while evaluating a throw"
cls: 1511
credits: Qyriad
category: Improvements
---
Previously, errors caused by an expression like `throw "invalid argument"` were treated like an error that happened simply while some builtin function was being called:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg "linuz"
error:
… while calling the 'throw' builtin
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg "linuz"
error: linuz isn't the right package
```
But the error didn't just happen "while" calling the `throw` builtin — it's a throw error!
Now it looks like this:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg "linuz"
error:
… caused by explicit throw
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg "linuz"
error: linuz isn't the right package
```
This also means that incorrect usage of `throw` or errors evaluating its arguments are easily distinguishable from explicit throws:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg { attrs = "error when coerced in string interpolation"; }
error:
… while calling the 'throw' builtin
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg { attrs = "error when coerced in string interpolation"; }
… while evaluating a path segment
at «string»:2:24:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg { attrs = "error when coerced in string interpolation"; }
error: cannot coerce a set to a string: { attrs = "error when coerced in string interpolation"; }
```
Here, instead of an actual thrown error, a type error happens first (trying to coerce an attribute set to a string), but that type error happened *while* calling `throw`.

View file

@ -1,29 +0,0 @@
---
synopsis: Fix nix-collect-garbage --dry-run
issues: [fj#432]
cls: [1566]
category: Fixes
credits: [quantumjump]
---
`nix-collect-garbage --dry-run` did not previously give any output - it simply
exited without even checking to see what paths would be deleted.
```
$ nix-collect-garbage --dry-run
$
```
We updated the behaviour of the flag so that it instead prints out how many
paths it *would* delete, without actually deleting them.
```
$ nix-collect-garbage --dry-run
finding garbage collector roots...
determining live/dead paths...
...
<nix store paths>
...
2670 store paths deleted, 0.00MiB freed
$
```

View file

@ -1,16 +0,0 @@
---
synopsis: "Hash mismatch diagnostics for fixed-output derivations include the URL"
cls: [1536]
credits: [jade]
category: Improvements
---
Now, when building fixed-output derivations, Lix will guess the URL that was used in the derivation using the `url` or `urls` properties in the derivation environment.
This is a layering violation, but making these diagnostics tractable when there are multiple instances of the `AAAA` hash is too significant an improvement to pass up.
```
error: hash mismatch in fixed-output derivation '/nix/store/sjfw324j4533lwnpmr5z4icpb85r63ai-x1.drv':
likely URL: https://meow.puppy.forge/puppy.tar.gz
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-a1Qvp3FOOkWpL9kFHgugU1ok5UtRPSu+NwCZKbbaEro=
```

View file

@ -1,14 +0,0 @@
---
synopsis: Add log formats `multiline` and `multiline-with-logs`
cls: [1369]
credits: [kloenk]
category: Improvements
---
Added two new log formats (`multiline` and `multiline-with-logs`) that display
current activities below each other for better visibility.
These formats attempt to use the maximum number of available lines
(defaulting to 25 if the terminal height cannot be determined) and print up to that many lines.
The status bar is displayed as the first line, with each subsequent
activity on its own line.
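For instance, selecting the new format on an ordinary build (a sketch; any installable works):
```
$ nix build nixpkgs#hello --log-format multiline
```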

View file

@ -1,12 +0,0 @@
---
synopsis: "`nix copy` is now several times faster at `querying info about /nix/store/...`"
cls: [1462]
issues: [fj#366]
credits: [jade]
category: Fixes
---
We fixed a locking bug that serialized `querying info about /nix/store/...`
onto just one thread such that it was eating `O(paths to copy * latency)` time
while setting up to copy paths to s3 and other stores. It is now `nproc` times
faster.

View file

@ -1,21 +0,0 @@
---
synopsis: "Lix no longer speaks the Nix remote-build worker protocol to clients or servers older than CppNix 2.3"
cls: [1207, 1208, 1206, 1205, 1204, 1203, 1479]
issues: [fj#325]
credits: [jade]
category: Breaking Changes
---
CppNix 2.3 was released in 2019, and is the new oldest supported version. We
will increase our support baseline in the future up to a final version of CppNix
2.18 (which may happen soon given that it is the only still-packaged and thus
still-tested >2.3 version), but this step already removes a significant amount
of dead, untested, code paths.
Lix speaks the same version of the protocol as CppNix 2.18 and that fact will
never change in the future; Lix's plan for protocol evolution entails a
completely incompatible replacement that will be supported in
parallel with the old protocol. Lix will thus retain remote build compatibility
with CppNix as long as CppNix maintains protocol compatibility with 2.18, and
as long as Lix retains legacy protocol support (which will likely be a long
time given that we plan to convert it to a frozen-in-time shim).

View file

@ -1,58 +0,0 @@
---
synopsis: "Eliminate some pretty-printing surprises"
cls: [1616, 1617, 1618]
prs: [11100]
credits: [alois31, roberth]
category: Improvements
---
Some inconsistent and surprising behaviours have been eliminated from the pretty-printing used by the REPL and `nix eval`:
* Lists and attribute sets that contain only a single item without nested structures are no longer sometimes inappropriately indented in the REPL, depending on internal state of the evaluator.
* Empty attribute sets and derivations are no longer shown as `«repeated»`, since they are always cheap to print.
This matches the existing behaviour of `nix-instantiate` on empty attribute sets.
Empty lists were never printed as `«repeated»` already.
* The REPL by default does not print nested attribute sets and lists, and indicates elided items with an ellipsis.
Previously, the ellipsis was printed even when the structure was empty, implying the existence of items that do not in fact exist.
Since this behaviour was confusing, it does not happen any more.
Before:
```
nix-repl> :p let x = 1 + 2; in [ [ x ] [ x ] ]
[
[
3
]
[ 3 ]
]
nix-repl> let inherit (import <nixpkgs> { }) hello; in [ hello hello ]
[
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
«repeated»
]
nix-repl> let x = {}; in [ x ]
[
{ ... }
]
```
After:
```
nix-repl> :p let x = 1 + 2; in [ [ x ] [ x ] ]
[
[ 3 ]
[ 3 ]
]
nix-repl> let inherit (import <nixpkgs> { }) hello; in [ hello hello ]
[
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
]
nix-repl> let x = {}; in [ x ]
[
{ }
]
```

View file

@ -1,10 +0,0 @@
---
synopsis: "`nix registry add` now requires a shorthand flakeref on the 'from' side"
cls: 1494
credits: delan
category: Improvements
---
The 'from' argument must now be a shorthand flakeref like `nixpkgs` or `nixpkgs/nixos-20.03`, making it harder to accidentally swap the 'from' and 'to' arguments.
Registry entries that map from other flake URLs can still be specified in registry.json, the `nix.registry` option in NixOS, or the `--override-flake` option in the CLI, but they are not guaranteed to work correctly.
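For example, pinning the shorthand `nixpkgs` to a branch (a sketch; the branch is illustrative):
```
$ nix registry add nixpkgs github:NixOS/nixpkgs/nixos-24.05
```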

View file

@ -1,9 +0,0 @@
---
synopsis: Allow automatic rejection of configuration options from flakes
cls: [1541]
credits: [alois31]
category: Improvements
---
Setting `accept-flake-config` to `false` now respects user choice by automatically rejecting configuration options set by flakes.
The old behaviour of asking each time is still available (and default) by setting it to the special value `ask`.
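A minimal `nix.conf` sketch:
```
# reject flake-provided configuration instead of prompting
accept-flake-config = false
```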

View file

@ -1,8 +0,0 @@
---
synopsis: "`nix repl` now allows tab-completing the special repl :colon commands"
cls: 1367
credits: Qyriad
category: Improvements
---
The REPL (`nix repl`) supports pressing `<TAB>` to complete a partial expression, and now also supports completing the special :colon commands (`:b`, `:edit`, `:doc`, etc.) if the line starts with a colon.

View file

@ -1,11 +0,0 @@
---
synopsis: "`:edit`ing a file in Nix store no longer reloads the repl"
issues: [fj#341]
cls: [1620]
category: Improvements
credits: [goldstein]
---
Calling `:edit` from the repl now only reloads if the file being edited was outside of Nix store.
That means that all the local variables are now preserved across `:edit`s of store paths.
This is always safe because the store is read-only.

View file

@ -1,10 +0,0 @@
---
synopsis: "Lix now supports building with UndefinedBehaviorSanitizer"
cls: [1483]
credits: [jade]
category: Development
---
You can now build Lix with the configuration option `-Db_sanitize=undefined` and it will both work and pass tests. AddressSanitizer support is also coming soon.
For a list of undefined behaviour fixed by sanitizer usage, see [the gerrit topic "undefined-behaviour"](https://gerrit.lix.systems/q/topic:%22undefined-behaviour%22).

View file

@ -192,11 +192,13 @@
- [Hacking](contributing/hacking.md)
- [Testing](contributing/testing.md)
- [Experimental Features](contributing/experimental-features.md)
- [Deprecated Features](contributing/deprecated-features.md)
- [CLI guideline](contributing/cli-guideline.md)
- [C++ style guide](contributing/cxx.md)
- [Release Notes](release-notes/release-notes.md)
- [Upcoming release](release-notes/rl-next.md)
<!-- RELENG-AUTO-INSERTION-MARKER (see releng/release_notes.py) -->
- [Lix 2.91 (2024-08-12)](release-notes/rl-2.91.md)
- [Lix 2.90 (2024-07-10)](release-notes/rl-2.90.md)
- [Nix 2.18 (2023-09-20)](release-notes/rl-2.18.md)
- [Nix 2.17 (2023-07-24)](release-notes/rl-2.17.md)

View file

@ -1,23 +1,37 @@
xp_features_json = custom_target(
command : [nix, '__dump-xp-features'],
capture : true,
output : 'xp-features.json',
)
experimental_features_shortlist_md = custom_target(
command : nix_eval_for_docs + [
'--expr',
'import @INPUT0@ (builtins.fromJSON (builtins.readFile @INPUT1@))',
'import @INPUT0@ "experimental" "xp" (builtins.fromJSON (builtins.readFile @INPUT1@))',
],
input : [
'../../generate-xp-features-shortlist.nix',
xp_features_json,
'../../generate-features-shortlist.nix',
nix_exp_features_json,
],
capture : true,
output : 'experimental-features-shortlist.md',
env : nix_env_for_docs,
)
dp_features_json = custom_target(
command : [nix, '__dump-dp-features'],
capture : true,
output : 'dp-features.json',
)
deprecated_features_shortlist_md = custom_target(
command : nix_eval_for_docs + [
'--expr',
'import @INPUT0@ "deprecated" "dp" (builtins.fromJSON (builtins.readFile @INPUT1@))',
],
input : [
'../../generate-features-shortlist.nix',
dp_features_json,
],
capture : true,
output : 'deprecated-features-shortlist.md',
env : nix_env_for_docs,
)
# Intermediate step for manpage generation.
# This splorks the output of generate-manpage.nix as JSON,
# which gets written as a directory tree below.
@ -60,6 +74,7 @@ conf_file_md = custom_target(
'../../utils.nix',
conf_file_json,
experimental_features_shortlist_md,
deprecated_features_shortlist_md,
],
output : 'conf-file.md',
env : nix_env_for_docs,

View file

@ -0,0 +1,37 @@
This section describes the notion of *deprecated features*, and how it fits into the big picture of the development of Lix.
# What are deprecated features?
Deprecated features are disabled by default, with the intent to eventually remove them.
Users must explicitly enable them to keep using them, by toggling the associated [deprecated feature flags](@docroot@/command-ref/conf-file.md#conf-deprecated-features).
This allows backwards compatibility and a graceful transition away from undesired features.
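For example, a user who still relies on a deprecated feature can opt back in while migrating (a sketch; `url-literals` is one of the currently deprecated features):
```
$ nix eval --extra-deprecated-features url-literals --expr 'http://example.org'
```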
# Which features can be deprecated?
Undesired features should be soft-deprecated, yielding a warning when used, for a significant amount of time before they can be deprecated.
Legacy obsolete features with little to no usage may go through this process faster.
Deprecated features should have a migration path to a preferred alternative.
# Lifecycle of a deprecated feature
This description is not normative, but a feature removal may roughly happen like this:
1. Add a warning when the feature is being used.
2. Disable the feature by default, putting it behind a deprecated feature flag.
- If disabling the feature started out as an opt-in experimental feature, turn that experimental flag into a no-op or remove it entirely.
For example, `--extra-experimental-features=no-url-literals` becomes `--extra-deprecated-features=url-literals`.
3. Decide on a time frame for how long that feature will still be supported for backwards compatibility, and clearly communicate that in the error messages.
- Sometimes, automatic migration to alternatives is possible, and should be provided where feasible.
- At least one NixOS release cycle should be the minimum.
4. Finally remove the feature entirely, only keeping the error message for those still using it.
# Relation to language versioning
Obviously, removing anything breaks backwards compatibility.
In an ideal world, we'd have SemVer controls over the language and its features, cleanly allowing us to make breaking changes.
See https://wiki.lix.systems/books/lix-contributors/page/language-versioning and [RFC 137](https://github.com/nixos/rfcs/pull/137) for efforts on that.
However, we do not live in such an ideal world, and currently this goal is so far away that "just disable it with some back-compat for a couple of years" is the most realistic solution, especially for comparatively minor changes.
# Currently available deprecated features
{{#include @generated@/contributing/deprecated-feature-descriptions.md}}

View file

@ -4,12 +4,25 @@
experimental_feature_descriptions_md = custom_target(
command : nix_eval_for_docs + [
'--expr',
'import @INPUT0@ (builtins.fromJSON (builtins.readFile @INPUT1@))',
'import @INPUT0@ "experimental" "xp" (builtins.fromJSON (builtins.readFile @INPUT1@))',
],
input : [
'../../generate-xp-features.nix',
xp_features_json,
'../../generate-features.nix',
nix_exp_features_json,
],
capture : true,
output : 'experimental-feature-descriptions.md',
)
deprecated_feature_descriptions_md = custom_target(
command : nix_eval_for_docs + [
'--expr',
'import @INPUT0@ "deprecated" "dp" (builtins.fromJSON (builtins.readFile @INPUT1@))',
],
input : [
'../../generate-features.nix',
dp_features_json,
],
capture : true,
output : 'deprecated-feature-descriptions.md',
)

View file

@ -427,6 +427,7 @@ I grepped `src/` for `get[eE]nv\("` to find the mentions in Lix code.
- `NIX_SHOW_STATS_PATH` - Writes those statistics into a file at the given path instead of stdout. Undocumented.
- `NIX_SHOW_SYMBOLS` - Dumps the symbol table into the show-stats json output.
- `TERM` - If `dumb` or unset, disables ANSI colour output.
- `FORCE_COLOR`, `CLICOLOR_FORCE` - Enables ANSI colour output if `NO_COLOR`/`NOCOLOR` not set.
- `NO_COLOR`, `NOCOLOR` - Disables ANSI colour output.
- `_NIX_DEVELOPER_SHOW_UNKNOWN_LOCATIONS` - Highlights unknown locations in errors.
- `NIX_PROFILE` - Selects which profile `nix-env` will operate on. Documented elsewhere.

View file

@ -292,6 +292,12 @@ Derivations can declare some infrequently used optional attributes.
(associative) arrays. For example, the attribute `hardening.format = true`
ends up as the Bash associative array element `${hardening[format]}`.
> **Warning**
>
> If set to `true`, other advanced attributes such as [`allowedReferences`](#adv-attr-allowedReferences), [`allowedRequisites`](#adv-attr-allowedRequisites),
> [`disallowedReferences`](#adv-attr-disallowedReferences) and [`disallowedRequisites`](#adv-attr-disallowedRequisites), as well as `maxSize` and `maxClosureSize`,
> will have no effect.
- [`outputChecks`]{#adv-attr-outputChecks}\
When using [structured attributes](#adv-attr-structuredAttrs), the `outputChecks`
attribute allows defining checks per-output.
@ -326,7 +332,6 @@ Derivations can declare some infrequently used optional attributes.
```
- [`unsafeDiscardReferences`]{#adv-attr-unsafeDiscardReferences}\
When using [structured attributes](#adv-attr-structuredAttrs), the
attribute `unsafeDiscardReferences` is an attribute set with a boolean value for each output name.
If set to `true`, it disables scanning the output for runtime dependencies.
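For instance, a sketch of a derivation that skips reference scanning for its `out` output (the surrounding attributes are elided):
```nix
derivation {
  # ... name, system, builder, etc.
  # structured attributes are required for unsafeDiscardReferences
  __structuredAttrs = true;
  # do not scan the 'out' output for runtime references
  unsafeDiscardReferences.out = true;
}
```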

View file

@ -26,6 +26,8 @@
| Logical conjunction (`AND`) | *bool* `&&` *bool* | left | 12 |
| Logical disjunction (`OR`) | *bool* <code>\|\|</code> *bool* | left | 13 |
| [Logical implication] | *bool* `->` *bool* | none | 14 |
| \[Experimental\] [Function piping] | *expr* <code>\|></code> *func* | left | 15 |
| \[Experimental\] [Function piping] | *expr* <code>&lt;\|</code> *func* | right | 16 |
[string]: ./values.md#type-string
[path]: ./values.md#type-path
@ -215,3 +217,33 @@ nix-repl> let f = x: 1; s = { func = f; }; in [ (f == f) (s == s) ]
Equivalent to `!`*b1* `||` *b2*.
[Logical implication]: #logical-implication
## \[Experimental\] Function piping
*This language feature is still experimental and may change at any time. Enable `--extra-experimental-features pipe-operator` to use it.*
Pipes are a dedicated operator for function application, but with reverse order and a lower binding strength.
This allows you to chain function calls together in a way that is more natural to read and requires fewer parentheses.
`a |> f b |> g` is equivalent to `g (f b a)`.
`g <| f b <| a` is equivalent to `g (f b a)`.
Example code snippet:
```nix
defaultPrefsFile = defaultPrefs
|> lib.mapAttrsToList (
key: value: ''
// ${value.reason}
pref("${key}", ${builtins.toJSON value.value});
''
)
|> lib.concatStringsSep "\n"
|> pkgs.writeText "nixos-default-prefs.js";
```
Note how `mapAttrsToList` is called with two arguments (the lambda and `defaultPrefs`),
but moving the last argument in front of the rest improves the reading flow.
This is common for functions with a long first argument, including all `map`-like functions.
[Function piping]: #experimental-function-piping

View file

@ -77,12 +77,6 @@
}
```
Finally, as a convenience, *URIs* as defined in appendix B of
[RFC 2396](http://www.ietf.org/rfc/rfc2396.txt) can be written *as
is*, without quotes. For instance, the string
`"http://example.org/foo.tar.bz2"` can also be written as
`http://example.org/foo.tar.bz2`.
- <a id="type-number" href="#type-number">Number</a>
Numbers, which can be *integers* (like `123`) or *floating point*

View file

@ -0,0 +1,489 @@
# Lix 2.91 "Dragon's Breath" (2024-08-12)
# Lix 2.91.0 (2024-08-12)
## Breaking Changes
- Block io_uring in the Linux sandbox [cl/1611](https://gerrit.lix.systems/c/lix/+/1611)
The io\_uring API has the unfortunate property that it is not possible to selectively decide which operations should be allowed.
This, together with the fact that new operations are routinely added, makes it a hazard to the proper function of the sandbox.
Therefore, any access to io\_uring has been made unavailable inside the sandbox.
As such, attempts to execute any system calls forming part of this API will fail with the error `ENOSYS`, as if io\_uring support had not been configured into the kernel.
Many thanks to [alois31](https://git.lix.systems/alois31) for this.
- The `build-hook` setting is now deprecated
Build hooks communicate with the daemon using a custom, internal, undocumented protocol that is entirely unversioned and cannot be changed.
Since we intend to change it anyway, we must unfortunately deprecate the current build hook infrastructure.
We do not expect this to impact most users—we have not found any uses of `build-hook` in the wild—but if this does affect you, we'd like to hear from you!
- Lix no longer speaks the Nix remote-build worker protocol to clients or servers older than CppNix 2.3 [fj#325](https://git.lix.systems/lix-project/lix/issues/325) [cl/1207](https://gerrit.lix.systems/c/lix/+/1207) [cl/1208](https://gerrit.lix.systems/c/lix/+/1208) [cl/1206](https://gerrit.lix.systems/c/lix/+/1206) [cl/1205](https://gerrit.lix.systems/c/lix/+/1205) [cl/1204](https://gerrit.lix.systems/c/lix/+/1204) [cl/1203](https://gerrit.lix.systems/c/lix/+/1203) [cl/1479](https://gerrit.lix.systems/c/lix/+/1479)
CppNix 2.3 was released in 2019, and is the new oldest supported version. We
will increase our support baseline in the future up to a final version of CppNix
2.18 (which may happen soon given that it is the only still-packaged and thus
still-tested >2.3 version), but this step already removes a significant amount
of dead, untested, code paths.
Lix speaks the same version of the protocol as CppNix 2.18 and that fact will
never change in the future; Lix's plan for protocol evolution entails a
completely incompatible replacement that will be supported in
parallel with the old protocol. Lix will thus retain remote build compatibility
with CppNix as long as CppNix maintains protocol compatibility with 2.18, and
as long as Lix retains legacy protocol support (which will likely be a long
time given that we plan to convert it to a frozen-in-time shim).
Many thanks to [jade](https://git.lix.systems/jade) for this.
## Features
- Pipe operator `|>` (experimental) [fj#438](https://git.lix.systems/lix-project/lix/issues/438) [cl/1654](https://gerrit.lix.systems/c/lix/+/1654)
Implementation of the pipe operator (`|>`) in the language as described in [RFC 148](https://github.com/NixOS/rfcs/pull/148).
The feature is still marked experimental, enable `--extra-experimental-features pipe-operator` to use it.
Many thanks to [piegames](https://git.lix.systems/piegames) and [eldritch horrors](https://git.lix.systems/pennae) for this.
## Improvements
- Trace which part of a `foo.bar.baz` expression errors [cl/1505](https://gerrit.lix.systems/c/lix/+/1505) [cl/1506](https://gerrit.lix.systems/c/lix/+/1506)
Previously, if an attribute path selection expression like `linux_4_9.meta.description` failed to evaluate, the error wouldn't show you which part of the attribute path was at fault, or even that that line of code is what caused evaluation of the failing expression.
The previous error looks like this:
```
pkgs.linuxKernel.kernels.linux_4_9.meta.description
error:
… while evaluating the attribute 'linuxKernel.kernels.linux_4_9.meta.description'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… while calling the 'throw' builtin
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```
Now, the error will look like this:
```
pkgs.linuxKernel.kernels.linux_4_9.meta.description
error:
… while evaluating the attribute 'linuxKernel.kernels.linux_4_9.meta.description'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… while evaluating 'pkgs.linuxKernel.kernels.linux_4_9' to select 'meta' on it
at «string»:1:1:
1| pkgs.linuxKernel.kernels.linux_4_9.meta.description
| ^
… caused by explicit throw
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```
Not only does the line of code that referenced the failing attribute show up in the trace, it also tells you that it was specifically the `linux_4_9` part that failed.
This includes if the failing part is a top-level binding:
```
let
inherit (pkgs.linuxKernel.kernels) linux_4_9;
in linux_4_9.meta.description
error:
… while evaluating 'linux_4_9' to select 'meta.description' on it
at «string»:3:4:
2| inherit (pkgs.linuxKernel.kernels) linux_4_9;
3| in linux_4_9.meta.description
| ^
… while evaluating the attribute 'linux_4_9'
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:5:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
… caused by explicit throw
at /nix/store/dk2rpyb6ndvfbf19bkb2plcz5y3k8i5v-source/pkgs/top-level/linux-kernels.nix:278:17:
277| } // lib.optionalAttrs config.allowAliases {
278| linux_4_9 = throw "linux 4.9 was removed because it will reach its end of life within 22.11";
| ^
279| linux_4_14 = throw "linux 4.14 was removed because it will reach its end of life within 23.11";
error: linux 4.9 was removed because it will reach its end of life within 22.11
```
Many thanks to [Qyriad](https://git.lix.systems/Qyriad) for this.
- Confusing 'invalid path' errors are now 'path does not exist' [cl/1161](https://gerrit.lix.systems/c/lix/+/1161) [cl/1160](https://gerrit.lix.systems/c/lix/+/1160) [cl/1159](https://gerrit.lix.systems/c/lix/+/1159)
Previously, if a path did not exist in a Nix store, it was referred to by the internal name "path is invalid".
This is, however, very confusing, and there were numerous such errors that were exactly the same, making it hard to debug.
These errors are now more specific and refer to the path not existing in the store.
Many thanks to [julia](https://git.lix.systems/midnightveil) for this.
- Add a `build-dir` setting to set the backing directory for builds [gh#10303](https://github.com/NixOS/nix/pull/10303) [gh#10312](https://github.com/NixOS/nix/pull/10312) [gh#10883](https://github.com/NixOS/nix/pull/10883) [cl/1514](https://gerrit.lix.systems/c/lix/+/1514)
`build-dir` can now be set in the Nix configuration to choose the backing directory for the build sandbox.
This can be useful on systems with `/tmp` on tmpfs, or simply to relocate large builds to another disk.
Also, `XDG_RUNTIME_DIR` is no longer considered when selecting the default temporary directory,
as it's not intended to be used for large amounts of data.
Many thanks to [Robert Hensing](https://github.com/roberth) and [Tom Bereknyei](https://github.com/tomberek) for this.
- Better usage of colour control environment variables [cl/1699](https://gerrit.lix.systems/c/lix/+/1699) [cl/1702](https://gerrit.lix.systems/c/lix/+/1702)
Lix now heeds `NO_COLOR`/`NOCOLOR` for more output types, such as that used in `nix search`, `nix flake metadata` and similar.
It also now supports `CLICOLOR_FORCE`/`FORCE_COLOR` to force colours regardless of whether there is a terminal on the other side.
It now follows rules compatible with those described on <https://bixense.com/clicolors/> with `CLICOLOR` defaulted to enabled.
That is to say, the following procedure is followed in order:
- `NO_COLOR` or `NOCOLOR` set: always disable colour
- `CLICOLOR_FORCE` or `FORCE_COLOR` set: enable colour
- The output is a tty and `TERM` is not `"dumb"`: enable colour
- Otherwise: disable colour
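For example, using the variables above (a sketch):
```
$ CLICOLOR_FORCE=1 nix search nixpkgs hello | less -R   # keep colours through a pipe
$ NO_COLOR=1 nix flake metadata nixpkgs                 # plain output
```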
Many thanks to [jade](https://git.lix.systems/jade) for this.
- Distinguish between explicit throws and errors that happened while evaluating a throw [cl/1511](https://gerrit.lix.systems/c/lix/+/1511)
Previously, errors caused by an expression like `throw "invalid argument"` were treated like an error that happened simply while some builtin function was being called:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg "linuz"
error:
… while calling the 'throw' builtin
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg "linuz"
error: linuz isn't the right package
```
But the error didn't just happen "while" calling the `throw` builtin — it's a throw error!
Now it looks like this:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg "linuz"
error:
… caused by explicit throw
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg "linuz"
error: linuz isn't the right package
```
This also means that incorrect usage of `throw` or errors evaluating its arguments are easily distinguishable from explicit throws:
```
let
throwMsg = p: throw "${p} isn't the right package";
in throwMsg { attrs = "error when coerced in string interpolation"; }
error:
… while calling the 'throw' builtin
at «string»:2:17:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg { attrs = "error when coerced in string interpolation"; }
… while evaluating a path segment
at «string»:2:24:
1| let
2| throwMsg = p: throw "${p} isn't the right package";
| ^
3| in throwMsg { attrs = "error when coerced in string interpolation"; }
error: cannot coerce a set to a string: { attrs = "error when coerced in string interpolation"; }
```
Here, instead of an actual thrown error, a type error happens first (trying to coerce an attribute set to a string), but that type error happened *while* calling `throw`.
Many thanks to [Qyriad](https://git.lix.systems/Qyriad) for this.
- `nix flake metadata` prints modified date [cl/1700](https://gerrit.lix.systems/c/lix/+/1700)
Ever wonder "gee, when *did* I update nixpkgs"?
Wonder no more, because `nix flake metadata` now simply tells you the times every locked flake input was updated:
```
<...>
Description: The purely functional package manager
Path: /nix/store/c91yi8sxakc2ry7y4ac1smzwka4l5p78-source
Revision: c52cff582043838bbe29768e7da232483d52b61d-dirty
Last modified: 2024-07-31 22:15:54
Inputs:
├───flake-compat: github:edolstra/flake-compat/0f9255e01c2351cc7d116c072cb317785dd33b33
│ Last modified: 2023-10-04 06:37:54
├───nix2container: github:nlewo/nix2container/3853e5caf9ad24103b13aa6e0e8bcebb47649fe4
│ Last modified: 2024-07-10 13:15:56
├───nixpkgs: github:NixOS/nixpkgs/e21630230c77140bc6478a21cd71e8bb73706fce
│ Last modified: 2024-07-25 11:26:27
├───nixpkgs-regression: github:NixOS/nixpkgs/215d4d0fd80ca5163643b03a33fde804a29cc1e2
│ Last modified: 2022-01-24 11:20:45
└───pre-commit-hooks: github:cachix/git-hooks.nix/f451c19376071a90d8c58ab1a953c6e9840527fd
Last modified: 2024-07-15 04:21:09
```
Many thanks to [jade](https://git.lix.systems/jade) for this.
- Hash mismatch diagnostics for fixed-output derivations include the URL [cl/1536](https://gerrit.lix.systems/c/lix/+/1536)
Now, when building fixed-output derivations, Lix will guess the URL that was used in the derivation using the `url` or `urls` properties in the derivation environment.
This is a layering violation, but making these diagnostics tractable when there are multiple instances of the `AAAA` hash is too significant an improvement to pass up.
```
error: hash mismatch in fixed-output derivation '/nix/store/sjfw324j4533lwnpmr5z4icpb85r63ai-x1.drv':
likely URL: https://meow.puppy.forge/puppy.tar.gz
specified: sha256-AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=
got: sha256-a1Qvp3FOOkWpL9kFHgugU1ok5UtRPSu+NwCZKbbaEro=
```
Many thanks to [jade](https://git.lix.systems/jade) for this.
- Add log formats `multiline` and `multiline-with-logs` [cl/1369](https://gerrit.lix.systems/c/lix/+/1369)
Added two new log formats (`multiline` and `multiline-with-logs`) that display
current activities below each other for better visibility.
These formats attempt to use the maximum number of available lines
(defaulting to 25 if the terminal height cannot be determined) and print up to that many lines.
The status bar is displayed as the first line, with each subsequent
activity on its own line.
Many thanks to [kloenk](https://git.lix.systems/kloenk) for this.
- Lix will now show the package descriptions when running `nix flake show`. [cl/1540](https://gerrit.lix.systems/c/lix/+/1540)
When running `nix flake show`, Lix will now show the package descriptions, if they exist.
Before:
```shell
$ nix flake show
path:/home/isabel/dev/lix-show?lastModified=1721736108&narHash=sha256-Zo8HP1ur7Q2b39hKUEG8EAh/opgq8xJ2jvwQ/htwO4Q%3D
└───packages
└───x86_64-linux
├───aNoDescription: package 'simple'
├───bOneLineDescription: package 'simple'
├───cMultiLineDescription: package 'simple'
├───dLongDescription: package 'simple'
└───eEmptyDescription: package 'simple'
```
After:
```shell
$ nix flake show
path:/home/isabel/dev/lix-show?lastModified=1721736108&narHash=sha256-Zo8HP1ur7Q2b39hKUEG8EAh/opgq8xJ2jvwQ/htwO4Q%3D
└───packages
└───x86_64-linux
├───aNoDescription: package 'simple'
├───bOneLineDescription: package 'simple' - 'one line'
├───cMultiLineDescription: package 'simple' - 'line one'
├───dLongDescription: package 'simple' - 'abcdefghijklmnopqrstuvwxyz'
└───eEmptyDescription: package 'simple'
```
Many thanks to [kjeremy](https://github.com/kjeremy) and [isabelroses](https://git.lix.systems/isabelroses) for this.
- Eliminate some pretty-printing surprises [#11100](https://github.com/NixOS/nix/pull/11100) [cl/1616](https://gerrit.lix.systems/c/lix/+/1616) [cl/1617](https://gerrit.lix.systems/c/lix/+/1617) [cl/1618](https://gerrit.lix.systems/c/lix/+/1618)
Some inconsistent and surprising behaviours have been eliminated from the pretty-printing used by the REPL and `nix eval`:
* Lists and attribute sets that contain only a single item without nested structures are no longer sometimes inappropriately indented in the REPL, depending on internal state of the evaluator.
* Empty attribute sets and derivations are no longer shown as `«repeated»`, since they are always cheap to print.
This matches the existing behaviour of `nix-instantiate` on empty attribute sets.
Empty lists were never printed as `«repeated»` already.
* The REPL by default does not print nested attribute sets and lists, and indicates elided items with an ellipsis.
Previously, the ellipsis was printed even when the structure was empty, implying the existence of items that do not in fact exist.
Since this behaviour was confusing, it does not happen any more.
Before:
```
nix-repl> :p let x = 1 + 2; in [ [ x ] [ x ] ]
[
[
3
]
[ 3 ]
]
nix-repl> let inherit (import <nixpkgs> { }) hello; in [ hello hello ]
[
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
«repeated»
]
nix-repl> let x = {}; in [ x ]
[
{ ... }
]
```
After:
```
nix-repl> :p let x = 1 + 2; in [ [ x ] [ x ] ]
[
[ 3 ]
[ 3 ]
]
nix-repl> let inherit (import <nixpkgs> { }) hello; in [ hello hello ]
[
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
«derivation /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv»
]
nix-repl> let x = {}; in [ x ]
[
{ }
]
```
Many thanks to [alois31](https://git.lix.systems/alois31) and [Robert Hensing](https://github.com/roberth) for this.
- `nix registry add` now requires a shorthand flakeref on the 'from' side [cl/1494](https://gerrit.lix.systems/c/lix/+/1494)
The 'from' argument must now be a shorthand flakeref like `nixpkgs` or `nixpkgs/nixos-20.03`, making it harder to accidentally swap the 'from' and 'to' arguments.
Registry entries that map from other flake URLs can still be specified in registry.json, the `nix.registry` option in NixOS, or the `--override-flake` option in the CLI, but they are not guaranteed to work correctly.
Many thanks to [delan](https://git.lix.systems/delan) for this.
- Allow automatic rejection of configuration options from flakes [cl/1541](https://gerrit.lix.systems/c/lix/+/1541)
Setting `accept-flake-config` to `false` now respects user choice by automatically rejecting configuration options set by flakes.
The old behaviour of asking each time is still available (and default) by setting it to the special value `ask`.
Many thanks to [alois31](https://git.lix.systems/alois31) for this.
- `nix repl` now allows tab-completing the special repl :colon commands [cl/1367](https://gerrit.lix.systems/c/lix/+/1367)
The REPL (`nix repl`) supports pressing `<TAB>` to complete a partial expression, and now also supports completing the special :colon commands (`:b`, `:edit`, `:doc`, etc.) if the line starts with a colon.
Many thanks to [Qyriad](https://git.lix.systems/Qyriad) for this.
- `:edit`ing a file in Nix store no longer reloads the repl [fj#341](https://git.lix.systems/lix-project/lix/issues/341) [cl/1620](https://gerrit.lix.systems/c/lix/+/1620)
Calling `:edit` from the repl now only reloads if the file being edited was outside of Nix store.
That means that all the local variables are now preserved across `:edit`s of store paths.
This is always safe because the store is read-only.
Many thanks to [goldstein](https://git.lix.systems/goldstein) for this.
- `:log` in repl now works on derivation paths [fj#51](https://git.lix.systems/lix-project/lix/issues/51) [cl/1716](https://gerrit.lix.systems/c/lix/+/1716)
`:log` can now accept store derivation paths in addition to derivation expressions.
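A sketch, reusing the example derivation path from the pretty-printing notes above:
```
nix-repl> :log /nix/store/fqs92lzychkm6p37j7fnj4d65nq9fzla-hello-2.12.1.drv
```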
Many thanks to [goldstein](https://git.lix.systems/goldstein) for this.
## Fixes
- Define integer overflow in the Nix language as an error [fj#423](https://git.lix.systems/lix-project/lix/issues/423) [cl/1594](https://gerrit.lix.systems/c/lix/+/1594) [cl/1595](https://gerrit.lix.systems/c/lix/+/1595) [cl/1597](https://gerrit.lix.systems/c/lix/+/1597) [cl/1609](https://gerrit.lix.systems/c/lix/+/1609)
Previously, integer overflow in the Nix language invoked C++ level signed overflow, which was undefined behaviour, but *probably* manifested as wrapping around on overflow.
Since prior to the public release of Lix, Lix has had C++ signed overflow defined to crash the process, and nobody noticed that this had accidentally removed overflow from the Nix language for three months, until it was caught by someone fiddling around.
Given the significant body of real Nix code that has been evaluated by Lix in that time, it does not appear that nixpkgs or much else of importance depends on integer overflow, so it is safe to turn it into an error.
Some other overflows were fixed:
- `builtins.fromJSON` of values greater than the maximum representable value in a signed 64-bit integer will generate an error.
- `nixConfig` in flakes will no longer accept negative values for configuration options.
Integer overflow now looks like the following:
```
» nix eval --expr '9223372036854775807 + 1'
error: integer overflow in adding 9223372036854775807 + 1
```
Many thanks to [jade](https://git.lix.systems/jade) for this.
- Fix nix-collect-garbage --dry-run [fj#432](https://git.lix.systems/lix-project/lix/issues/432) [cl/1566](https://gerrit.lix.systems/c/lix/+/1566)
`nix-collect-garbage --dry-run` did not previously give any output - it simply
exited without even checking to see what paths would be deleted.
```
$ nix-collect-garbage --dry-run
$
```
We updated the behaviour of the flag so that it instead prints out how many
paths it *would* delete, without actually deleting them.
```
$ nix-collect-garbage --dry-run
finding garbage collector roots...
determining live/dead paths...
...
<nix store paths>
...
2670 store paths deleted, 0.00MiB freed
$
```
Many thanks to [Quantum Jump](https://github.com/QuantumBJump) for this.
- Fix unexpectedly-successful GC failures on macOS [fj#446](https://git.lix.systems/lix-project/lix/issues/446) [cl/1723](https://gerrit.lix.systems/c/lix/+/1723)
Has the following happened to you on macOS? This failure has been successfully eliminated, thanks to our successful deployment of advanced successful-failure detection technology (it's just `if (failed && errno == 0)`. Patent pending<sup>not really</sup>):
```
$ nix-store --gc --print-dead
finding garbage collector roots...
error: Listing pid 87261 file descriptors: Undefined error: 0
```
Many thanks to [jade](https://git.lix.systems/jade) for this.
- `nix copy` is now several times faster at `querying info about /nix/store/...` [fj#366](https://git.lix.systems/lix-project/lix/issues/366) [cl/1462](https://gerrit.lix.systems/c/lix/+/1462)
We fixed a locking bug that serialized `querying info about /nix/store/...`
onto just one thread such that it was eating `O(paths to copy * latency)` time
while setting up to copy paths to s3 and other stores. It is now `nproc` times
faster.
Many thanks to [jade](https://git.lix.systems/jade) for this.
## Development
- clang-tidy support [fj#147](https://git.lix.systems/lix-project/lix/issues/147) [cl/1697](https://gerrit.lix.systems/c/lix/+/1697)
`clang-tidy` can be used to lint Lix with a limited set of lints using `ninja -C build clang-tidy` and `ninja -C build clang-tidy-fix`.
In practice, this fixes the built-in meson rule of the same name, which had been broken ever since precompiled headers were introduced.
Many thanks to [jade](https://git.lix.systems/jade) for this.
- Lix now supports building with UndefinedBehaviorSanitizer [cl/1483](https://gerrit.lix.systems/c/lix/+/1483) [cl/1481](https://gerrit.lix.systems/c/lix/+/1481) [cl/1669](https://gerrit.lix.systems/c/lix/+/1669)
You can now build Lix with the configuration option `-Db_sanitize=undefined,address` and it will both work and pass tests with both AddressSanitizer and UndefinedBehaviorSanitizer enabled.
To use ASan specifically, you have to set `-Dgc=disabled`, which an error message will tell you to do if necessary anyhow.
Furthermore, tests passing with Clang ASan+UBSan is checked on every change in CI.
For a list of undefined behaviour found by tooling usage, see [the gerrit topic "undefined-behaviour"](https://gerrit.lix.systems/q/topic:%22undefined-behaviour%22).
Many thanks to [jade](https://git.lix.systems/jade) for this.

View file

@ -1,5 +1,5 @@
{
description = "The purely functional package manager";
description = "Lix: A modern, delicious implementation of the Nix package manager";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-24.05-small";
@ -33,7 +33,7 @@
# This notice gets echoed as a dev shell hook, and can be turned off with
# `touch .nocontribmsg`
sgr = ''['';
sgr = builtins.fromJSON ''"\u001b["'';
freezePage = "https://wiki.lix.systems/books/lix-contributors/page/freezes-and-recommended-contributions";
codebaseOverview = "https://wiki.lix.systems/books/lix-contributors/page/codebase-overview";
contribNotice = builtins.toFile "lix-contrib-notice" ''
@ -59,7 +59,8 @@
(Run `touch .nocontribmsg` to hide this message.)
'';
officialRelease = false;
versionJson = builtins.fromJSON (builtins.readFile ./version.json);
officialRelease = versionJson.official_release;
# Set to true to build the release notes for the next release.
buildUnreleasedNotes = true;
@ -196,6 +197,8 @@
busybox-sandbox-shell = final.busybox-sandbox-shell or final.default-busybox-sandbox-shell;
};
lix-clang-tidy = final.callPackage ./subprojects/lix-clang-tidy { };
# Export the patched version of boehmgc that Lix uses into the overlay
# for consumers of this flake.
boehmgc-nix = final.nix.passthru.boehmgc-nix;
@ -275,6 +278,52 @@
# System tests.
tests = import ./tests/nixos { inherit lib nixpkgs nixpkgsFor; } // {
# This is x86_64-linux only, just because we have significantly
# cheaper x86_64-linux compute in CI.
# It is clangStdenv because clang's sanitizers are nicer.
asanBuild = self.packages.x86_64-linux.nix-clangStdenv.override {
# Improve caching of non-code changes by not changing the
# derivation name every single time, since this will never be seen
# by users anyway.
versionSuffix = "";
sanitize = [
"address"
"undefined"
];
# it is very hard to make *every* CI build use this option such
# that we don't wind up building Lix twice, so we do it here where
# we are already doing so.
werror = true;
};
# Although this might be nicer to do with pre-commit, that would
# require adding 12MB of nodejs to the dev shell, whereas building it
# in CI with Nix avoids that at a cost of slower feedback on rarely
# touched files.
jsSyntaxCheck =
let
nixpkgs = nixpkgsFor.x86_64-linux.native;
inherit (nixpkgs) pkgs;
docSources = lib.fileset.toSource {
root = ./doc;
fileset = lib.fileset.fileFilter (f: f.hasExt "js") ./doc;
};
in
pkgs.runCommand "js-syntax-check" { } ''
find ${docSources} -type f -print -exec ${pkgs.nodejs-slim}/bin/node --check '{}' ';'
touch $out
'';
# clang-tidy run against the Lix codebase using the Lix clang-tidy plugin
clang-tidy =
let
nixpkgs = nixpkgsFor.x86_64-linux.native;
inherit (nixpkgs) pkgs;
in
pkgs.callPackage ./package.nix {
versionSuffix = "";
lintInsteadOfBuild = true;
};
# Make sure that nix-env still produces the exact same result
# on a particular version of Nixpkgs.
@ -370,6 +419,8 @@
rec {
inherit (nixpkgsFor.${system}.native) nix;
default = nix;
inherit (nixpkgsFor.${system}.native) lix-clang-tidy;
}
// (
lib.optionalAttrs (builtins.elem system linux64BitSystems) {
@ -406,7 +457,7 @@
pkgs: stdenv:
let
nix = pkgs.callPackage ./package.nix {
inherit stdenv officialRelease versionSuffix;
inherit stdenv versionSuffix;
busybox-sandbox-shell = pkgs.busybox-sandbox-shell or pkgs.default-busybox-sandbox;
internalApiDocs = false;
};

View file

@ -25,3 +25,13 @@ install *OPTIONS: (build OPTIONS)
# Run tests
test *OPTIONS:
meson test -C build --print-errorlogs {{ OPTIONS }}
alias clang-tidy := lint
lint:
ninja -C build clang-tidy
alias clang-tidy-fix := lint-fix
lint-fix:
ninja -C build clang-tidy-fix

View file

@ -106,7 +106,7 @@ def do_category(author_info: AuthorInfoDB, entries: list[Tuple[pathlib.Path, Any
links = []
links += [format_issue(str(s)) for s in listify(entry.metadata.get('issues', []))]
links += [format_pr(str(s)) for s in listify(entry.metadata.get('prs', []))]
links += [format_cl(cl) for cl in listify(entry.metadata.get('cls', []))]
links += [format_cl(int(cl)) for cl in listify(entry.metadata.get('cls', []))]
if links != []:
header += " " + " ".join(links)
if header:
@ -129,7 +129,7 @@ def run_on_dir(author_info: AuthorInfoDB, d):
entries = defaultdict(list)
for p in paths:
try:
e = frontmatter.load(p)
e = frontmatter.load(p) # type: ignore
if 'synopsis' not in e.metadata:
raise Exception('missing synopsis')
unknownKeys = set(e.metadata.keys()) - set(KNOWN_KEYS)

View file

@ -2,6 +2,6 @@
set -e
diff -u <(awk < src/libstore/build/local-derivation-goal.cc '/BEGIN extract-syscalls/ { extracting = 1; next }
diff -u <(awk < src/libstore/platform/linux.cc '/BEGIN extract-syscalls/ { extracting = 1; next }
match($0, /allowSyscall\(ctx, SCMP_SYS\(([^)]*)\)\);|\/\/ skip ([^ ]*)/, result) { print result[1] result[2] }
/END extract-syscalls/ { extracting = 0; next }') <(tail -n+2 "$1" | cut -d, -f 1)

View file

@ -30,6 +30,14 @@
# FIXME: This hack should be removed when https://git.lix.systems/lix-project/lix/issues/359
# is fixed.
#
# lix-doc is built with Meson in lix-doc/meson.build, and linked into libcmd in
# src/libcmd/meson.build. When building outside the Nix sandbox, Meson will use the .wrap
# files in subprojects/ to download and extract the dependency crates into subprojects/.
# When building inside the Nix sandbox, Lix's derivation in package.nix uses a
# fixed-output derivation to fetch those crates in advance instead, and then symlinks
# them into subprojects/ with the same names that Meson uses when downloading them
# itself -- perfect for --wrap-mode=nodownload, which mesonConfigurePhase uses.
#
# Unit tests are setup in tests/unit/meson.build, under the test suite "check".
#
# Functional tests are a bit more complicated. Generally they're defined in
@ -38,10 +46,11 @@
# be placed in specific directories' meson.build files to create the right directory tree
# in the build directory.
project('lix', 'cpp',
project('lix', 'cpp', 'rust',
version : run_command('bash', '-c', 'echo -n $(jq -r .version < ./version.json)$VERSION_SUFFIX', check : true).stdout().strip(),
default_options : [
'cpp_std=c++2a',
'rust_std=2021',
# TODO(Qyriad): increase the warning level
'warning_level=1',
'debug=true',
@ -199,23 +208,27 @@ configdata = { }
# Dependencies
#
boehm = dependency('bdw-gc', required : get_option('gc'), version : '>=8.2.6')
gc_opt = get_option('gc').disable_if(
'address' in get_option('b_sanitize'),
error_message: 'gc does far too many memory crimes for ASan'
)
boehm = dependency('bdw-gc', required : gc_opt, version : '>=8.2.6', include_type : 'system')
configdata += {
'HAVE_BOEHMGC': boehm.found().to_int(),
}
boost = dependency('boost', required : true, modules : ['container'])
boost = dependency('boost', required : true, modules : ['container'], include_type : 'system')
# cpuid only makes sense on x86_64
cpuid_required = is_x64 ? get_option('cpuid') : false
cpuid = dependency('libcpuid', 'cpuid', required : cpuid_required)
cpuid = dependency('libcpuid', 'cpuid', required : cpuid_required, include_type : 'system')
configdata += {
'HAVE_LIBCPUID': cpuid.found().to_int(),
}
# seccomp only makes sense on Linux
seccomp_required = is_linux ? get_option('seccomp-sandboxing') : false
seccomp = dependency('libseccomp', 'seccomp', required : seccomp_required, version : '>=2.5.5')
seccomp = dependency('libseccomp', 'seccomp', required : seccomp_required, version : '>=2.5.5', include_type : 'system')
if is_linux and not seccomp.found()
warning('Sandbox security is reduced because libseccomp has not been found! Please provide libseccomp if it supports your CPU architecture.')
endif
@ -223,19 +236,24 @@ configdata += {
'HAVE_SECCOMP': seccomp.found().to_int(),
}
libarchive = dependency('libarchive', required : true)
libarchive = dependency('libarchive', required : true, include_type : 'system')
brotli = [
dependency('libbrotlicommon', required : true),
dependency('libbrotlidec', required : true),
dependency('libbrotlienc', required : true),
dependency('libbrotlicommon', required : true, include_type : 'system'),
dependency('libbrotlidec', required : true, include_type : 'system'),
dependency('libbrotlienc', required : true, include_type : 'system'),
]
openssl = dependency('libcrypto', 'openssl', required : true)
openssl = dependency('libcrypto', 'openssl', required : true, include_type : 'system')
# FIXME: confirm we actually support such old versions of aws-sdk-cpp
aws_sdk = dependency('aws-cpp-sdk-core', required : false, version : '>=1.8')
aws_sdk_transfer = dependency('aws-cpp-sdk-transfer', required : aws_sdk.found(), fallback : ['aws_sdk', 'aws_cpp_sdk_transfer_dep'])
aws_sdk = dependency('aws-cpp-sdk-core', required : false, version : '>=1.8', include_type : 'system')
aws_sdk_transfer = dependency(
'aws-cpp-sdk-transfer',
required : aws_sdk.found(),
fallback : ['aws_sdk', 'aws_cpp_sdk_transfer_dep'],
include_type : 'system',
)
if aws_sdk.found()
# The AWS pkg-config adds -std=c++11.
# https://github.com/aws/aws-sdk-cpp/issues/2673
@ -255,7 +273,12 @@ if aws_sdk.found()
)
endif
aws_s3 = dependency('aws-cpp-sdk-s3', required : aws_sdk.found(), fallback : ['aws_sdk', 'aws_cpp_sdk_s3_dep'])
aws_s3 = dependency(
'aws-cpp-sdk-s3',
required : aws_sdk.found(),
fallback : ['aws_sdk', 'aws_cpp_sdk_s3_dep'],
include_type : 'system',
)
if aws_s3.found()
# The AWS pkg-config adds -std=c++11.
# https://github.com/aws/aws-sdk-cpp/issues/2673
@ -272,30 +295,30 @@ configdata += {
'ENABLE_S3': aws_s3.found().to_int(),
}
sqlite = dependency('sqlite3', 'sqlite', version : '>=3.6.19', required : true)
sqlite = dependency('sqlite3', 'sqlite', version : '>=3.6.19', required : true, include_type : 'system')
sodium = dependency('libsodium', 'sodium', required : true)
sodium = dependency('libsodium', 'sodium', required : true, include_type : 'system')
curl = dependency('libcurl', 'curl', required : true)
curl = dependency('libcurl', 'curl', required : true, include_type : 'system')
editline = dependency('libeditline', 'editline', version : '>=1.14', required : true)
editline = dependency('libeditline', 'editline', version : '>=1.14', required : true, include_type : 'system')
lowdown = dependency('lowdown', version : '>=0.9.0', required : true)
lowdown = dependency('lowdown', version : '>=0.9.0', required : true, include_type : 'system')
# HACK(Qyriad): rapidcheck's pkg-config doesn't include the libs lol
# Note: technically we 'check' for rapidcheck twice, for the internal-api-docs handling above,
# but Meson will cache the result of the first one, and the required : arguments are different.
rapidcheck_meson = dependency('rapidcheck', required : enable_tests)
rapidcheck_meson = dependency('rapidcheck', required : enable_tests, include_type : 'system')
rapidcheck = declare_dependency(dependencies : rapidcheck_meson, link_args : ['-lrapidcheck'])
gtest = [
dependency('gtest', required : enable_tests),
dependency('gtest_main', required : enable_tests),
dependency('gmock', required : enable_tests),
dependency('gmock_main', required : enable_tests),
dependency('gtest', required : enable_tests, include_type : 'system'),
dependency('gtest_main', required : enable_tests, include_type : 'system'),
dependency('gmock', required : enable_tests, include_type : 'system'),
dependency('gmock_main', required : enable_tests, include_type : 'system'),
]
toml11 = dependency('toml11', version : '>=3.7.0', required : true, method : 'cmake')
toml11 = dependency('toml11', version : '>=3.7.0', required : true, method : 'cmake', include_type : 'system')
pegtl = dependency(
'pegtl',
@ -303,16 +326,10 @@ pegtl = dependency(
required : true,
method : 'cmake',
modules : [ 'taocpp::pegtl' ],
include_type : 'system',
)
nlohmann_json = dependency('nlohmann_json', required : true)
# lix-doc is a Rust project provided via buildInputs and unfortunately doesn't have any way to be detected.
# Just declare it manually to resolve this.
#
# FIXME: build this with meson in the future after we drop Make (with which we
# *absolutely* are not going to make it work)
lix_doc = declare_dependency(link_args : [ '-llix_doc' ])
nlohmann_json = dependency('nlohmann_json', required : true, include_type : 'system')
if is_freebsd
libprocstat = declare_dependency(link_args : [ '-lprocstat' ])
@ -402,6 +419,11 @@ check_funcs = [
'strsignal',
'sysconf',
]
if is_linux or is_freebsd
# musl does not have close_range as of 2024-08-10
# patch: https://www.openwall.com/lists/musl/2024/08/01/9
check_funcs += [ 'close_range' ]
endif
foreach funcspec : check_funcs
define_name = 'HAVE_' + funcspec.underscorify().to_upper()
define_value = cxx.has_function(funcspec).to_int()
@ -482,7 +504,14 @@ if cxx.get_id() == 'clang' and get_option('b_sanitize') != ''
add_project_link_arguments('-shared-libsan', language : 'cpp')
endif
# Clang gets grumpy about missing libasan symbols if -shared-libasan is not
# passed when building shared libs, at least on Linux
if cxx.get_id() == 'clang' and 'address' in get_option('b_sanitize')
add_project_link_arguments('-shared-libasan', language : 'cpp')
endif
add_project_link_arguments('-pthread', language : 'cpp')
if cxx.get_linker_id() in ['ld.bfd', 'ld.gold']
add_project_link_arguments('-Wl,--no-copy-dt-needed-entries', language : 'cpp')
endif
@ -497,7 +526,7 @@ endif
# maintainers/buildtime_report.sh BUILD-DIR to simply work in clang builds.
#
# They can also be manually viewed at https://ui.perfetto.dev
if get_option('profile-build').require(meson.get_compiler('cpp').get_id() == 'clang').enabled()
if get_option('profile-build').require(cxx.get_id() == 'clang').enabled()
add_project_arguments('-ftime-trace', language: 'cpp')
endif
@ -516,6 +545,33 @@ if cxx.get_id() in ['clang', 'gcc']
)
endif
# Until Meson 1.5¹, we can't just give Meson a Cargo.lock file and be done with it.
# Meson will *detect* what dependencies are needed from Cargo files; it just won't
# fetch them. The Meson 1.5 feature essentially internally translates Cargo.lock entries
# to .wrap files, and that translation is incredibly straightforward, so let's just
# use a simple Python script to generate the .wrap files ourselves while we wait for
# Meson 1.5. Weirdly, it seems Meson will only detect dependencies from other
# dependency() calls, so we have to specify lix-doc's two top-level dependencies,
# rnix and rowan, manually, and then their dependencies will be recursively translated
# into more dependency() calls.
#
# When Meson translates a Cargo dependency, the string passed to `dependency()` follows
# a fixed format, which is important as the .wrap files' basenames must match the string
# passed to `dependency()` exactly.
# In Meson 1.4, this format is `$packageName-rs`. Meson 1.5 changes this to
# `$packageName-$shortenedVersionString-rs`, because of course it does, but we'll cross
# that bridge when we get there...
#
# [1]: https://github.com/mesonbuild/meson/commit/9b8378985dbdc0112d11893dd42b33b7bc8d1e62
# FIXME: remove (along with its generated wrap files) when we get rid of meson 1.4
run_command(
python,
meson.project_source_root() / 'meson/cargo-lock-to-wraps.py',
meson.project_source_root() / 'Cargo.lock',
meson.project_source_root() / 'subprojects',
check : true,
)
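For reference, the wrap files that cargo-lock-to-wraps.py (shown further down) generates are trivial; under Meson 1.4 the basename must be exactly `$packageName-rs.wrap` to match the string passed to `dependency()`. A sketch of one such generated file, with the version and hash as placeholders rather than values from the real Cargo.lock:

```
# subprojects/rowan-rs.wrap (hypothetical contents)
[wrap-file]
method = cargo
directory = rowan-0.15.15
source_url = https://crates.io/api/v1/crates/rowan/0.15.15/download
source_filename = rowan-0.15.15.tar.gz
source_hash = <the 'checksum' field for rowan in Cargo.lock>
```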
if is_darwin
configure_file(
input : 'misc/launchd/org.nixos.nix-daemon.plist.in',
@ -537,3 +593,5 @@ if enable_tests
subdir('tests/unit')
subdir('tests/functional')
endif
subdir('meson/clang-tidy')


@ -1,7 +1,7 @@
# vim: filetype=meson
option('enable-build', type : 'boolean', value : true,
description : 'Set to false to not actually build. Only really makes sense with -Dinternal-api-docs=true',
description : 'set to false to not actually build. Only really makes sense with -Dinternal-api-docs=true',
)
option('gc', type : 'feature',
@ -37,7 +37,7 @@ option('tests-brief', type : 'boolean', value : false,
)
option('profile-build', type : 'feature', value: 'disabled',
description : 'whether to enable -ftime-trace in clang builds, allowing for speeding up the build.'
description : 'whether to enable -ftime-trace in clang builds, allowing for diagnosing the cause of build time.'
)
option('store-dir', type : 'string', value : '/nix/store',
@ -68,3 +68,7 @@ option('profile-dir', type : 'string', value : 'etc/profile.d',
option('enable-pch-std', type : 'boolean', value : true,
description : 'whether to use precompiled headers for C++\'s standard library (breaks clangd if you\'re using GCC)',
)
option('lix-clang-tidy-checks-path', type : 'string', value : '',
description: 'path to lix-clang-tidy-checks library file, if providing it externally. Uses an internal one if this is not set',
)

meson/cargo-lock-to-wraps.py Executable file

@ -0,0 +1,43 @@
#!/usr/bin/env python3
import argparse
import tomllib
import sys
DOWNLOAD_URI_FORMAT = 'https://crates.io/api/v1/crates/{crate}/{version}/download'
WRAP_TEMPLATE = """
[wrap-file]
method = cargo
directory = {crate}-{version}
source_url = {url}
source_filename = {crate}-{version}.tar.gz
source_hash = {hash}
""".lstrip()
parser = argparse.ArgumentParser()
parser.add_argument('lockfile', help='path to the Cargo lockfile to generate wraps from')
parser.add_argument('outdir', help="the 'subprojects' directory to write .wrap files to")
args = parser.parse_args()
with open(args.lockfile, 'rb') as f:
lock_toml = tomllib.load(f)
for dependency in lock_toml['package']:
try:
hash = dependency['checksum']
except KeyError:
# The base package, e.g. lix-doc, won't have a checksum, and conveniently
# the base package is also not something we want a wrap file for.
# Doesn't that work out nicely?
continue
crate = dependency['name']
version = dependency['version']
url = DOWNLOAD_URI_FORMAT.format(crate=crate, version=version)
wrap_text = WRAP_TEMPLATE.format(crate=crate, version=version, url=url, hash=hash)
with open(f'{args.outdir}/{crate}-rs.wrap', 'w') as f:
f.write(wrap_text)


@ -0,0 +1,21 @@
#!/usr/bin/env python3
import subprocess
def get_targets_of_rule(build_root: str, rule_name: str) -> list[str]:
return subprocess.check_output(['ninja', '-C', build_root, '-t', 'targets', 'rule', rule_name]).decode().strip().splitlines()
def ninja_build(build_root: str, targets: list[str]):
subprocess.check_call(['ninja', '-C', build_root, '--', *targets])
def main():
import argparse
ap = argparse.ArgumentParser(description='Builds required targets for clang-tidy')
ap.add_argument('build_root', help='Ninja build root', type=str)
args = ap.parse_args()
targets = [t for t in get_targets_of_rule(args.build_root, 'CUSTOM_COMMAND') if t.endswith('gen.hh')]
ninja_build(args.build_root, targets)
if __name__ == '__main__':
main()
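As a hedged illustration, the two helpers above can also be driven directly, assuming a Meson-configured Ninja directory named `build/` (hypothetical); this mirrors exactly what `main()` does:

```python
# List every CUSTOM_COMMAND output, keep only the generated headers, build them.
targets = get_targets_of_rule('build', 'CUSTOM_COMMAND')
gen_headers = [t for t in targets if t.endswith('gen.hh')]
ninja_build('build', gen_headers)
```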


@ -0,0 +1,60 @@
#!/usr/bin/env python3
# Deletes the PCH arguments from a compilation database, to work around nixpkgs
# stdenv having a cc-wrapper that is impossible to use for anything except cc
# itself, for example, clang-tidy.
import json
import shlex
def process_compdb(compdb: list[dict]) -> list[dict]:
def munch_command(args: list[str]) -> list[str]:
out = []
eat_next = False
for i, arg in enumerate(args):
if arg == '-fpch-preprocess':
# as used with gcc
continue
elif arg == '-include-pch' or (arg == '-include' and args[i + 1] == 'precompiled-headers.hh'):
# -include-pch some-pch (clang), or -include some-pch (gcc)
eat_next = True
continue
if not eat_next:
out.append(arg)
eat_next = False
return out
def chomp(item: dict) -> dict:
item = item.copy()
item['command'] = shlex.join(munch_command(shlex.split(item['command'])))
return item
def cmdfilter(item: dict) -> bool:
file = item['file']
return (
not file.endswith('precompiled-headers.hh')
and not file.endswith('.rs')
)
return [chomp(x) for x in compdb if cmdfilter(x)]
def main():
import argparse
ap = argparse.ArgumentParser(
description='Delete pch arguments from compilation database')
ap.add_argument('input',
type=argparse.FileType('r'),
help='Input json file')
ap.add_argument('output',
type=argparse.FileType('w'),
help='Output json file')
args = ap.parse_args()
input_json = json.load(args.input)
json.dump(process_compdb(input_json), args.output, indent=2)
if __name__ == '__main__':
main()
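To make the transformation concrete, here is what `process_compdb()` above does to a single hypothetical entry (file name and flags invented for the example):

```python
entry = {
    'file': 'src/libexpr/eval.cc',
    'command': 'clang++ -include-pch precompiled-headers.hh.pch -O2 -c src/libexpr/eval.cc',
}
cleaned = process_compdb([entry])
# Both '-include-pch' and its argument are gone; everything else survives.
assert cleaned[0]['command'] == 'clang++ -O2 -c src/libexpr/eval.cc'
```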


@ -0,0 +1,105 @@
# The clang-tidy target for Lix
run_clang_tidy = find_program('run-clang-tidy', required : false)
# Although this looks like it wants to be pkg-config, pkg-config does not
# really work for *plugins*, which are executable-like .so files that also
# cannot be found via find_program. Fun!
if get_option('lix-clang-tidy-checks-path') != ''
lix_clang_tidy_so = get_option('lix-clang-tidy-checks-path')
lix_clang_tidy_so_found = true
else
lix_clang_tidy_subproj = subproject(
'lix-clang-tidy',
required : false,
default_options : {'build-by-default': false}
)
if lix_clang_tidy_subproj.found()
lix_clang_tidy_so = lix_clang_tidy_subproj.get_variable('lix_clang_tidy')
lix_clang_tidy_so_found = true
else
lix_clang_tidy_so_found = false
endif
endif
# Due to numerous problems, such as:
# - Meson does not expose pch targets, but *fine*, I can just ask Ninja for
# them with `ninja -t targets rule cpp_PCH` and build them manually:
# https://github.com/mesonbuild/meson/issues/13499
# - Nixpkgs stdenv buries the cc-wrapper under a giant pile of assumptions
# about the cc-wrapper actually being used on the cc of a stdenv, rather than
# independently for clang-tidy, and we need to use cc-wrapper to get the
# correct hardening flags so that clang-tidy can actually parse the PCH file
#
# I give up. I am going to delete the damn PCH args and then it will work.
meson.add_postconf_script(
python,
meson.current_source_dir() / 'clean_compdb.py',
meson.global_build_root() / 'compile_commands.json',
meson.current_build_dir() / 'compile_commands.json',
)
# Horrible hack to get around not being able to depend on another target's
# generated headers in any way in the meson DSL
# https://github.com/mesonbuild/meson/issues/12817 which was incorrectly
# closed, if you *actually* need to generate the files once.
# Also related: https://github.com/mesonbuild/meson/issues/3667
#
# Or we could ban meson generators because their design is broken.
build_all_generated_headers = custom_target(
command : [
python,
meson.current_source_dir() / 'build_required_targets.py',
meson.global_build_root(),
],
output : 'generated_headers.stamp',
build_by_default : false,
build_always_stale : true,
)
if lix_clang_tidy_so_found
default_concurrency = run_command(python, '-c', '''
import multiprocessing
import os
print(min(multiprocessing.cpu_count(), int(os.environ.get("NIX_BUILD_CORES", "16"))))
''', check : true).stdout()
run_clang_tidy_args = [
'-load',
lix_clang_tidy_so,
'-p',
# We have to work around a run-clang-tidy bug too, so we must give the
# directory name rather than the actual compdb file.
# https://github.com/llvm/llvm-project/issues/101440
meson.current_build_dir(),
'-quiet',
'-j', default_concurrency,
]
run_target(
'clang-tidy',
command : [
# XXX: This explicitly invokes it with python because of a nixpkgs bug
# where clang-unwrapped does not patch interpreters in run-clang-tidy.
# However, making clang-unwrapped depend on python is also silly, so idk.
python,
run_clang_tidy,
run_clang_tidy_args,
'-warnings-as-errors',
'*',
],
depends : [
build_all_generated_headers,
],
)
run_target(
'clang-tidy-fix',
command : [
python,
run_clang_tidy,
run_clang_tidy_args,
'-fix',
],
depends : [
build_all_generated_headers,
],
)
endif
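Once configured, these are plain Ninja targets like any other `run_target`; a hypothetical invocation from a script (assuming a `build/` directory):

```python
import subprocess
# Lint the tree; 'clang-tidy-fix' would apply the suggested fixes instead.
subprocess.check_call(['ninja', '-C', 'build', 'clang-tidy'])
```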


@ -4,3 +4,9 @@ subdir('zsh')
subdir('systemd')
subdir('flake-registry')
runinpty = configure_file(
copy : true,
input : meson.current_source_dir() / 'runinpty.py',
output : 'runinpty.py',
)

misc/runinpty.py Executable file

@ -0,0 +1,77 @@
#!/usr/bin/env python3
# SPDX-FileCopyrightText: 2001, 2002, 2003, 2004, 2005, 2006 Python Software Foundation; All Rights Reserved
# SPDX-FileCopyrightText: 2024 Jade Lovelace
# SPDX-License-Identifier: LGPL-2.1-or-later
"""
This script exists to free Lix of a dependency on expect(1) for the ability to
run something in a pty.
Yes, it could be replaced by script(1) but macOS and Linux script(1) have
diverged sufficiently badly that even specifying a subcommand to run is not the
same.
"""
import pty
import sys
import os
from termios import ONLCR, ONLRET, ONOCR, OPOST, TCSAFLUSH, tcgetattr, tcsetattr
from tty import setraw
import termios
def setup_terminal():
# does not matter which fd we use because we are in a fresh pty
modi = tcgetattr(pty.STDOUT_FILENO)
[iflag, oflag, cflag, lflag, ispeed, ospeed, cc] = modi
# Turning \n into \r\n is not cool, Linux!
oflag &= ~ONLCR
# I don't know what "implementation dependent postprocessing" means, but it
# sounds bad
oflag &= ~OPOST
# Assume that NL performs the role of CR; do not insert CRs at column 0
oflag |= ONLRET | ONOCR
modi = [iflag, oflag, cflag, lflag, ispeed, ospeed, cc]
tcsetattr(pty.STDOUT_FILENO, TCSAFLUSH, modi)
def spawn(argv: list[str]):
"""
As opposed to pty.spawn, this takes more deliberate control of the pty settings.
Necessary to turn off such fun functionality as onlcr (LF to CRLF).
This is essentially copy-pasted from pty.spawn, since there is no way to
hook the child pre-execve.
"""
pid, master_fd = pty.fork()
if pid == pty.CHILD:
setup_terminal()
os.execlp(argv[0], *argv)
try:
mode = tcgetattr(pty.STDIN_FILENO)
setraw(pty.STDIN_FILENO)
restore = True
except termios.error:
restore = False
try:
pty._copy(master_fd, pty._read, pty._read) # type: ignore
finally:
if restore:
tcsetattr(pty.STDIN_FILENO, TCSAFLUSH, mode) # type: ignore
os.close(master_fd)
return os.waitpid(pid, 0)[1]
def main():
if len(sys.argv) == 1:
print(f'Usage: {sys.argv[0]} [command args]', file=sys.stderr)
sys.exit(1)
sys.exit(os.waitstatus_to_exitcode(spawn(sys.argv[1:])))
if __name__ == '__main__':
main()
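A minimal smoke test of the behaviour (hypothetical; assumes it is run from the repository root with the script marked executable):

```python
import subprocess
# Under runinpty.py the child's stdout is a pty, so isatty() reports True.
out = subprocess.run(
    ['misc/runinpty.py', 'python3', '-c', 'import sys; print(sys.stdout.isatty())'],
    capture_output=True, text=True,
).stdout
assert out.strip() == 'True'
```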


@ -27,6 +27,8 @@
libcpuid,
libseccomp,
libsodium,
lix-clang-tidy ? null,
llvmPackages,
lsof,
lowdown,
mdbook,
@ -39,6 +41,8 @@
pkg-config,
python3,
rapidcheck,
rustPlatform,
rustc,
sqlite,
toml11,
util-linuxMinimal ? utillinuxMinimal,
@ -47,28 +51,37 @@
busybox-sandbox-shell,
# internal fork of nix-doc providing :doc in the repl
lix-doc ? __forDefaults.lix-doc,
pname ? "lix",
versionSuffix ? "",
officialRelease ? false,
officialRelease ? __forDefaults.versionJson.official_release,
# Set to true to build the release notes for the next release.
buildUnreleasedNotes ? true,
internalApiDocs ? false,
# Support garbage collection in the evaluator.
enableGC ? sanitize == null || !builtins.elem "address" sanitize,
# List of Meson sanitize options. Accepts values of b_sanitize, e.g.
# "address", "undefined", "thread".
# Enabling the "address" sanitizer will disable garbage collection in the evaluator.
sanitize ? null,
# Turn compiler warnings into errors.
werror ? false,
lintInsteadOfBuild ? false,
# Not a real argument, just the only way to approximate let-binding some
# stuff for argument defaults.
__forDefaults ? {
canRunInstalled = stdenv.buildPlatform.canExecute stdenv.hostPlatform;
versionJson = builtins.fromJSON (builtins.readFile ./version.json);
boehmgc-nix = boehmgc.override { enableLargeConfig = true; };
editline-lix = editline.overrideAttrs (prev: {
configureFlags = prev.configureFlags or [ ] ++ [ (lib.enableFeature true "sigstop") ];
});
lix-doc = callPackage ./lix-doc/package.nix { };
build-release-notes = callPackage ./maintainers/build-release-notes.nix { };
},
}:
@ -77,16 +90,19 @@ let
inherit (lib) fileset;
inherit (stdenv) hostPlatform buildPlatform;
versionJson = builtins.fromJSON (builtins.readFile ./version.json);
version = versionJson.version + versionSuffix;
version = __forDefaults.versionJson.version + versionSuffix;
aws-sdk-cpp-nix = aws-sdk-cpp.override {
apis = [
"s3"
"transfer"
];
customMemoryManagement = false;
};
aws-sdk-cpp-nix =
if aws-sdk-cpp == null then
null
else
aws-sdk-cpp.override {
apis = [
"s3"
"transfer"
];
customMemoryManagement = false;
};
# Reimplementation of Nixpkgs' Meson cross file, with some additions to make
# it actually work.
@ -106,10 +122,7 @@ let
# The internal API docs need these for the build, but if we're not building
# Nix itself, then these don't need to be propagated.
maybePropagatedInputs = [
boehmgc-nix
nlohmann_json
];
maybePropagatedInputs = lib.optional enableGC boehmgc-nix ++ [ nlohmann_json ];
# .gitignore has already been processed, so any changes in it are irrelevant
# at this point. It is not represented verbatim for test purposes because
@ -124,6 +137,8 @@ let
./meson
./scripts/meson.build
./subprojects
# Required for meson to generate Cargo wraps
./Cargo.lock
]);
functionalTestFiles = fileset.unions [
@ -132,6 +147,7 @@ let
(fileset.fileFilter (f: lib.strings.hasPrefix "nix-profile" f.name) ./scripts)
];
in
assert (lintInsteadOfBuild -> lix-clang-tidy != null);
stdenv.mkDerivation (finalAttrs: {
inherit pname version;
@ -144,12 +160,13 @@ stdenv.mkDerivation (finalAttrs: {
topLevelBuildFiles
functionalTestFiles
]
++ lib.optionals (!finalAttrs.dontBuild || internalApiDocs) [
++ lib.optionals (!finalAttrs.dontBuild || internalApiDocs || lintInsteadOfBuild) [
./doc
./misc
./src
./COPYING
]
++ lib.optionals lintInsteadOfBuild [ ./.clang-tidy ]
)
);
};
@ -163,9 +180,14 @@ stdenv.mkDerivation (finalAttrs: {
"doc"
];
dontBuild = false;
dontBuild = lintInsteadOfBuild;
mesonFlags =
let
sanitizeOpts = lib.optional (
sanitize != null
) "-Db_sanitize=${builtins.concatStringsSep "," sanitize}";
in
lib.optionals hostPlatform.isLinux [
# You'd think meson could just find this in PATH, but busybox is in buildInputs,
# which don't actually get added to PATH. And buildInputs is correct over
@ -173,16 +195,20 @@ stdenv.mkDerivation (finalAttrs: {
"-Dsandbox-shell=${lib.getExe' busybox-sandbox-shell "busybox"}"
]
++ lib.optional hostPlatform.isStatic "-Denable-embedded-sandbox-shell=true"
++ lib.optional (finalAttrs.dontBuild) "-Denable-build=false"
++ lib.optional (finalAttrs.dontBuild && !lintInsteadOfBuild) "-Denable-build=false"
++ lib.optional lintInsteadOfBuild "-Dlix-clang-tidy-checks-path=${lix-clang-tidy}/lib/liblix-clang-tidy.so"
++ [
# mesonConfigurePhase automatically passes -Dauto_features=enabled,
# so we must explicitly enable or disable features that we are not passing
# dependencies for.
(lib.mesonEnable "gc" enableGC)
(lib.mesonEnable "internal-api-docs" internalApiDocs)
(lib.mesonBool "enable-tests" finalAttrs.finalPackage.doCheck)
(lib.mesonBool "enable-tests" (finalAttrs.finalPackage.doCheck || lintInsteadOfBuild))
(lib.mesonBool "enable-docs" canRunInstalled)
(lib.mesonBool "werror" werror)
]
++ lib.optional (hostPlatform != buildPlatform) "--cross-file=${mesonCrossFile}";
++ lib.optional (hostPlatform != buildPlatform) "--cross-file=${mesonCrossFile}"
++ sanitizeOpts;
# We only include CMake so that Meson can locate toml11, which only ships CMake dependency metadata.
dontUseCmakeConfigure = true;
@ -193,6 +219,7 @@ stdenv.mkDerivation (finalAttrs: {
meson
ninja
cmake
rustc
]
++ [
(lib.getBin lowdown)
@ -210,7 +237,13 @@ stdenv.mkDerivation (finalAttrs: {
]
++ lib.optional hostPlatform.isLinux util-linuxMinimal
++ lib.optional (!officialRelease && buildUnreleasedNotes) build-release-notes
++ lib.optional internalApiDocs doxygen;
++ lib.optional internalApiDocs doxygen
++ lib.optionals lintInsteadOfBuild [
# required for a wrapped clang-tidy
llvmPackages.clang-tools
# required for run-clang-tidy
llvmPackages.clang-unwrapped
];
buildInputs =
[
@ -226,7 +259,6 @@ stdenv.mkDerivation (finalAttrs: {
lowdown
libsodium
toml11
lix-doc
pegtl
]
++ lib.optionals hostPlatform.isLinux [
@ -237,7 +269,10 @@ stdenv.mkDerivation (finalAttrs: {
++ lib.optional hostPlatform.isx86_64 libcpuid
# There have been issues building these dependencies
++ lib.optional (hostPlatform.canExecute buildPlatform) aws-sdk-cpp-nix
++ lib.optionals (finalAttrs.dontBuild) maybePropagatedInputs;
++ lib.optionals (finalAttrs.dontBuild) maybePropagatedInputs
# I am so sorry. This is because checkInputs are required to pass
# configure, but we don't actually want to *run* the checks here.
++ lib.optionals lintInsteadOfBuild finalAttrs.checkInputs;
checkInputs = [
gtest
@ -253,8 +288,15 @@ stdenv.mkDerivation (finalAttrs: {
env = {
BOOST_INCLUDEDIR = "${lib.getDev boost}/include";
BOOST_LIBRARYDIR = "${lib.getLib boost}/lib";
# Meson allows referencing a /usr/share/cargo/registry shaped thing for subproject sources.
# Turns out the Nix-generated Cargo dependencies are named the same as they
# would be in a Cargo registry cache.
MESON_PACKAGE_CACHE_DIR = finalAttrs.cargoDeps;
};
cargoDeps = rustPlatform.importCargoLock { lockFile = ./Cargo.lock; };
preConfigure =
lib.optionalString (!finalAttrs.dontBuild && !hostPlatform.isStatic) ''
# Copy libboost_context so we don't get all of Boost in our closure.
@ -276,13 +318,6 @@ stdenv.mkDerivation (finalAttrs: {
install_name_tool -change ${boost}/lib/libboost_system.dylib $out/lib/libboost_system.dylib $out/lib/libboost_thread.dylib
''
+ ''
# Workaround https://github.com/NixOS/nixpkgs/issues/294890.
if [[ -n "''${doCheck:-}" ]]; then
appendToVar configureFlags "--enable-tests"
else
appendToVar configureFlags "--disable-tests"
fi
# Fix up /usr/bin/env shebangs relied on by the build
patchShebangs --build tests/ doc/manual/
'';
@ -293,7 +328,7 @@ stdenv.mkDerivation (finalAttrs: {
enableParallelBuilding = true;
doCheck = canRunInstalled;
doCheck = canRunInstalled && !lintInsteadOfBuild;
mesonCheckFlags = [
"--suite=check"
@ -305,8 +340,19 @@ stdenv.mkDerivation (finalAttrs: {
# Make sure the internal API docs are already built, because mesonInstallPhase
# won't let us build them there. They would normally be built in buildPhase,
# but the internal API docs are conventionally built with doBuild = false.
preInstall = lib.optional internalApiDocs ''
meson ''${mesonBuildFlags:-} compile "$installTargets"
preInstall =
(lib.optionalString internalApiDocs ''
meson ''${mesonBuildFlags:-} compile "$installTargets"
'')
# evil, but like above, we do not want to run an actual build phase
+ lib.optionalString lintInsteadOfBuild ''
ninja clang-tidy
'';
installPhase = lib.optionalString lintInsteadOfBuild ''
runHook preInstall
touch $out
runHook postInstall
'';
postInstall =
@ -367,8 +413,6 @@ stdenv.mkDerivation (finalAttrs: {
pegtl
;
inherit officialRelease;
# The collection of dependency logic for this derivation is complicated enough that
# it's easier to parameterize the devShell off an already called package.nix.
mkDevShell =
@ -376,12 +420,10 @@ stdenv.mkDerivation (finalAttrs: {
mkShell,
bashInteractive,
clang-tools,
clangbuildanalyzer,
doxygen,
glibcLocales,
just,
llvmPackages,
nixfmt-rfc-style,
skopeo,
xonsh,
@ -403,6 +445,7 @@ stdenv.mkDerivation (finalAttrs: {
p.python-frontmatter
p.requests
p.xdg-base-dirs
p.packaging
(p.toPythonModule xonsh.passthru.unwrapped)
]
);
@ -434,7 +477,7 @@ stdenv.mkDerivation (finalAttrs: {
++ [ (lib.mesonBool "enable-pch-std" stdenv.cc.isClang) ];
packages =
lib.optional (stdenv.cc.isClang && hostPlatform == buildPlatform) clang-tools
lib.optional (stdenv.cc.isClang && hostPlatform == buildPlatform) llvmPackages.clang-tools
++ [
# Why are we providing a bashInteractive? Well, when you run
# `bash` from inside `nix develop`, say, because you are using it
@ -482,7 +525,7 @@ stdenv.mkDerivation (finalAttrs: {
PATH=$prefix/bin''${PATH:+:''${PATH}}
unset PYTHONPATH
export MANPATH=$out/share/man''${MANPATH:+:''${MANPATH}}
export MANPATH=$out/share/man:''${MANPATH:-}
# Make bash completion work.
XDG_DATA_DIRS+=:$out/share


@ -1,84 +0,0 @@
AC_INIT(nix-perl, m4_esyscmd([bash -c "echo -n $(cat ../.version)$VERSION_SUFFIX"]))
AC_CONFIG_SRCDIR(MANIFEST)
AC_CONFIG_AUX_DIR(../config)
CFLAGS=
CXXFLAGS=
AC_PROG_CC
AC_PROG_CXX
AC_CANONICAL_HOST
# Use 64-bit file system calls so that we can support files > 2 GiB.
AC_SYS_LARGEFILE
AC_DEFUN([NEED_PROG],
[
AC_PATH_PROG($1, $2)
if test -z "$$1"; then
AC_MSG_ERROR([$2 is required])
fi
])
NEED_PROG(perl, perl)
NEED_PROG(curl, curl)
NEED_PROG(bzip2, bzip2)
NEED_PROG(xz, xz)
# Test that Perl has the open/fork feature (Perl 5.8.0 and beyond).
AC_MSG_CHECKING([whether Perl is recent enough])
if ! $perl -e 'open(FOO, "-|", "true"); while (<FOO>) { print; }; close FOO or die;'; then
AC_MSG_RESULT(no)
AC_MSG_ERROR([Your Perl version is too old. Lix requires Perl 5.8.0 or newer.])
fi
AC_MSG_RESULT(yes)
# Figure out where to install Perl modules.
AC_MSG_CHECKING([for the Perl installation prefix])
perlversion=$($perl -e 'use Config; print $Config{version};')
perlarchname=$($perl -e 'use Config; print $Config{archname};')
AC_SUBST(perllibdir, [${libdir}/perl5/site_perl/$perlversion/$perlarchname])
AC_MSG_RESULT($perllibdir)
# Look for libsodium.
PKG_CHECK_MODULES([SODIUM], [libsodium], [CXXFLAGS="$SODIUM_CFLAGS $CXXFLAGS"])
# Check for the required Perl dependencies (DBI and DBD::SQLite).
perlFlags="-I$perllibdir"
AC_ARG_WITH(dbi, AC_HELP_STRING([--with-dbi=PATH],
[prefix of the Perl DBI library]),
perlFlags="$perlFlags -I$withval")
AC_ARG_WITH(dbd-sqlite, AC_HELP_STRING([--with-dbd-sqlite=PATH],
[prefix of the Perl DBD::SQLite library]),
perlFlags="$perlFlags -I$withval")
AC_MSG_CHECKING([whether DBD::SQLite works])
if ! $perl $perlFlags -e 'use DBI; use DBD::SQLite;' 2>&5; then
AC_MSG_RESULT(no)
AC_MSG_FAILURE([The Perl modules DBI and/or DBD::SQLite are missing.])
fi
AC_MSG_RESULT(yes)
AC_SUBST(perlFlags)
PKG_CHECK_MODULES([NIX], [nix-store])
NEED_PROG([NIX], [nix])
# Expand all variables in config.status.
test "$prefix" = NONE && prefix=$ac_default_prefix
test "$exec_prefix" = NONE && exec_prefix='${prefix}'
for name in $ac_subst_vars; do
declare $name="$(eval echo "${!name}")"
declare $name="$(eval echo "${!name}")"
declare $name="$(eval echo "${!name}")"
done
rm -f Makefile.config
ln -sfn ../mk mk
AC_CONFIG_FILES([])
AC_OUTPUT


@ -77,7 +77,7 @@ SV * queryReferences(char * path)
SV * queryPathHash(char * path)
PPCODE:
try {
auto s = store()->queryPathInfo(store()->parseStorePath(path))->narHash.to_string(Base32, true);
auto s = store()->queryPathInfo(store()->parseStorePath(path))->narHash.to_string(Base::Base32, true);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
@ -103,7 +103,7 @@ SV * queryPathInfo(char * path, int base32)
XPUSHs(&PL_sv_undef);
else
XPUSHs(sv_2mortal(newSVpv(store()->printStorePath(*info->deriver).c_str(), 0)));
auto s = info->narHash.to_string(base32 ? Base32 : Base16, true);
auto s = info->narHash.to_string(base32 ? Base::Base32 : Base::Base16, true);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
mXPUSHi(info->registrationTime);
mXPUSHi(info->narSize);
@ -205,7 +205,7 @@ SV * hashPath(char * algo, int base32, char * path)
PPCODE:
try {
Hash h = hashPath(parseHashType(algo), path).first;
auto s = h.to_string(base32 ? Base32 : Base16, false);
auto s = h.to_string(base32 ? Base::Base32 : Base::Base16, false);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
@ -216,7 +216,7 @@ SV * hashFile(char * algo, int base32, char * path)
PPCODE:
try {
Hash h = hashFile(parseHashType(algo), path);
auto s = h.to_string(base32 ? Base32 : Base16, false);
auto s = h.to_string(base32 ? Base::Base32 : Base::Base16, false);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
@ -227,7 +227,7 @@ SV * hashString(char * algo, int base32, char * s)
PPCODE:
try {
Hash h = hashString(parseHashType(algo), s);
auto s = h.to_string(base32 ? Base32 : Base16, false);
auto s = h.to_string(base32 ? Base::Base32 : Base::Base16, false);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());
@ -238,7 +238,7 @@ SV * convertHash(char * algo, char * s, int toBase32)
PPCODE:
try {
auto h = Hash::parseAny(s, parseHashType(algo));
auto s = h.to_string(toBase32 ? Base32 : Base16, false);
auto s = h.to_string(toBase32 ? Base::Base32 : Base::Base16, false);
XPUSHs(sv_2mortal(newSVpv(s.c_str(), 0)));
} catch (Error & e) {
croak("%s", e.what());


@ -30,7 +30,7 @@ First, we prepare the release. `python -m releng prepare` is used for this.
Then we tag the release with `python -m releng tag`:
* Git HEAD is detached.
* `officialRelease = true` is set in `flake.nix`, this is committed, and a
* `"official_release": true` is set in `version.json`, this is committed, and a
release is tagged.
* The tag is merged back into the last branch (either `main` for new releases
or `release-MAJOR` for maintenance releases) with `git merge -s ours VERSION`
@ -57,12 +57,10 @@ Next, we do the publication with `python -m releng upload`:
`nix upgrade-nix`.
* s3://releases/lix/lix-VERSION/ gets the following contents
* Binary tarballs
* Docs: `manual/` (FIXME: should we actually do this? what about putting it
on docs.lix.systems? I think doing both is correct, since the Web site
should not be an archive of random old manuals)
* Docs: `manual/`, primarily as an archive of old manuals
* Docs as tarball in addition to web.
* Source tarball
* Docker image (FIXME: upload to forgejo registry and github registry [in the future][upload-docker])
* Docker image
* s3://docs/manual/lix/MAJOR
* s3://docs/manual/lix/stable
@ -80,6 +78,7 @@ Next, we do the publication with `python -m releng upload`:
FIXME: automate branch-off to `release-*` branch.
* **Manually** (FIXME?) switch back to the release branch, which now has the
correct revision.
* Deal with the external systems (see sections below).
* Post!!
* Merge release blog post to [lix-website].
* Toot about it! https://chaos.social/@lix_project
@ -87,22 +86,33 @@ Next, we do the publication with `python -m releng upload`:
[lix-website]: https://git.lix.systems/lix-project/lix-website
[upload-docker]: https://git.lix.systems/lix-project/lix/issues/252
### Installer
The installer is cross-built to several systems from a Mac using
`build-all.xsh` and `upload-to-lix.xsh` in the installer repo (FIXME: currently
at least; maybe this should be moved here?) .
The installer is cross-built to several systems from a Mac using `build-all.xsh` and `upload-to-lix.xsh` in the installer repo (FIXME: currently at least; maybe this should be moved here?).
It installs a binary tarball (FIXME: [it should be taught to substitute from
cache instead][installer-substitute])
from some URL; this is the `hydraJobs.binaryTarball`. The default URLs differ
by architecture and are [configured here][tarball-urls].
It installs a binary tarball (FIXME: [it should be taught to substitute from cache instead][installer-substitute]) from some URL; this is the `hydraJobs.binaryTarball`.
The default URLs differ by architecture and are [configured here][tarball-urls].
To automatically do the file changes for a new version, run `python3 set_version.py NEW_VERSION`, and submit the result for review.
[installer-substitute]: https://git.lix.systems/lix-project/lix-installer/issues/13
[tarball-urls]: https://git.lix.systems/lix-project/lix-installer/src/commit/693592ed10d421a885bec0a9dd45e87ab87eb90a/src/settings.rs#L14-L28
### Web site
The website has various release-version dependent pieces.
You can update them with `python3 update_version.py NEW_VERSION`, which will regenerate the affected page sources.
These need the release to have been done first as they need hashes for tarballs and such.
### NixOS module
The NixOS module has underdeveloped releng in it.
Currently you have to do the whole branch-off dance manually to a `release-VERSION` branch and update the tarball URLs to point to the release versions manually.
FIXME: this should be unified with the `set_version.py` work in `lix-installer` and probably all the releng kept in here, or kept elsewhere.
Related: https://git.lix.systems/lix-project/lix/issues/439
## Infrastructure summary
* releases.lix.systems (`s3://releases`):


@ -1,10 +1,13 @@
import logging
import argparse
import sys
from . import create_release
from . import docker
from .environment import RelengEnvironment
from . import environment
import argparse
import sys
log = logging.getLogger(__name__)
def do_build(args):
if args.target == 'all':
@ -21,6 +24,9 @@ def do_tag(args):
create_release.do_tag_merge(force_tag=args.force_tag,
no_check_git=args.no_check_git)
log.info('Merged the release commit into your last branch, and switched to a detached HEAD of the artifact to be released.')
log.info('After you are done with releasing, switch to your previous branch and push that branch for review.')
def do_upload(env: RelengEnvironment, args):
create_release.setup_creds(env)


@ -2,6 +2,7 @@ import json
import subprocess
import itertools
import textwrap
import logging
from pathlib import Path
import tempfile
import hashlib
@ -11,10 +12,12 @@ from . import environment
from .environment import RelengEnvironment
from . import keys
from . import docker
from .version import VERSION, RELEASE_NAME, MAJOR
from .version import VERSION, RELEASE_NAME, MAJOR, OFFICIAL_RELEASE
from .gitutils import verify_are_on_tag, git_preconditions
from . import release_notes
log = logging.getLogger(__name__)
$RAISE_SUBPROC_ERROR = True
$XONSH_SHOW_TRACEBACK = True
@ -39,16 +42,25 @@ def setup_creds(env: RelengEnvironment):
def official_release_commit_tag(force_tag=False):
print('[+] Setting officialRelease in flake.nix and tagging')
print('[+] Setting officialRelease in version.json and tagging')
prev_branch = $(git symbolic-ref --short HEAD).strip()
git switch --detach
sed -i 's/officialRelease = false/officialRelease = true/' flake.nix
git add flake.nix
# Must be done in two parts due to buffering (opening the file immediately
# would truncate it).
new_version_json = $(jq --indent 4 '.official_release = true' version.json)
with open('version.json', 'w') as fh:
fh.write(new_version_json)
git add version.json
message = f'release: {VERSION} "{RELEASE_NAME}"\n\nRelease produced with releng/create_release.xsh'
git commit -m @(message)
git tag @(['-f'] if force_tag else []) -a -m @(message) @(VERSION)
with open('releng/prev-git-branch.txt', 'w') as fh:
fh.write(prev_branch)
return prev_branch
@ -229,8 +241,24 @@ def upload_artifacts(env: RelengEnvironment, noconfirm=False, no_check_git=False
print('[+] Upload manual')
upload_manual(env)
print('[+] git push tag')
git push @(['-f'] if force_push_tag else []) @(env.git_repo) f'{VERSION}:refs/tags/{VERSION}'
prev_branch = None
try:
with open('releng/prev-git-branch.txt', 'r') as fh:
prev_branch = fh.read().strip()
except FileNotFoundError:
log.warn('Cannot find previous git branch file, skipping pushing git objects')
if prev_branch:
print('[+] git push to the repo')
# We have to push the ref to gerrit for review at least such that the
# commit is known, before we can push it as a tag.
if env.git_repo_is_gerrit:
git push @(env.git_repo) f'{prev_branch}:refs/for/{prev_branch}'
else:
git push @(env.git_repo) f'{prev_branch}:{prev_branch}'
print('[+] git push tag')
git push @(['-f'] if force_push_tag else []) @(env.git_repo) f'{VERSION}:refs/tags/{VERSION}'
def do_tag_merge(force_tag=False, no_check_git=False):
@ -250,15 +278,14 @@ def build_manual(eval_result):
def upload_manual(env: RelengEnvironment):
stable = json.loads($(nix eval --json '.#nix.officialRelease'))
if stable:
if OFFICIAL_RELEASE:
version = MAJOR
else:
version = 'nightly'
print('[+] aws s3 sync manual')
aws s3 sync @(MANUAL)/ @(env.docs_bucket)/manual/lix/@(version)/
if stable:
if OFFICIAL_RELEASE:
aws s3 sync @(MANUAL)/ @(env.docs_bucket)/manual/lix/stable/


@ -52,6 +52,7 @@ class RelengEnvironment:
releases_bucket: str
docs_bucket: str
git_repo: str
git_repo_is_gerrit: bool
docker_targets: list[DockerTarget]
@ -79,6 +80,7 @@ STAGING = RelengEnvironment(
cache_store_overlay={'secret-key': 'staging.key'},
releases_bucket='s3://staging-releases',
git_repo='ssh://git@git.lix.systems/lix-project/lix-releng-staging',
git_repo_is_gerrit=False,
docker_targets=[
# latest will be auto tagged if appropriate
DockerTarget('git.lix.systems/lix-project/lix-releng-staging',
@ -113,6 +115,7 @@ PROD = RelengEnvironment(
cache_store_overlay={'secret-key': 'prod.key'},
releases_bucket='s3://releases',
git_repo=guess_gerrit_remote(),
git_repo_is_gerrit=True,
docker_targets=[
# latest will be auto tagged if appropriate
DockerTarget('git.lix.systems/lix-project/lix',


@ -1,11 +1,24 @@
import subprocess
import json
from packaging.version import Version
from .version import VERSION
def remote_is_plausible(url: str) -> bool:
return ('git.lix.systems' in url and 'lix-project/lix' in url) or ('gerrit.lix.systems' in url and url.endswith('lix'))
def version_compare(v1: str, v2: str):
return json.loads($(nix-instantiate --eval --json --argstr v1 @(v1) --argstr v2 @(v2) --expr '{v1, v2}: builtins.compareVersions v1 v2'))
v1 = Version(v1)
v2 = Version(v2)
if v1 < v2:
return -1
elif v1 > v2:
return 1
elif v1 == v2:
return 0
else:
raise ValueError("these versions are beyond each other's celestial plane")
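A few sanity checks of the new comparison contract, not taken from the source, just to pin down the return values:

```python
assert version_compare('2.90.0', '2.91.0') == -1
assert version_compare('2.91.0', '2.91.0') == 0
assert version_compare('2.91.1', '2.91.0') == 1
```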
def latest_tag_on_branch(branch: str) -> str:
@ -13,16 +26,18 @@ def latest_tag_on_branch(branch: str) -> str:
def is_maintenance_branch(branch: str) -> bool:
try:
main_tag = latest_tag_on_branch('main')
current_tag = latest_tag_on_branch(branch)
return version_compare(current_tag, main_tag) < 0
except subprocess.CalledProcessError:
# This is the case before Lix releases 2.90, since main *has* no
# release tag on it.
# FIXME: delete this case after 2.91
return False
"""
Returns whether the given branch is probably a maintenance branch.
This uses a heuristic: `main` should have a newer tag than a given
maintenance branch if there has been a major release since that maintenance
branch.
"""
assert remote_is_plausible($(git remote get-url origin).strip())
main_tag = latest_tag_on_branch('origin/main')
current_tag = latest_tag_on_branch(branch)
return version_compare(current_tag, main_tag) < 0
def verify_are_on_tag():


@ -4,3 +4,4 @@ version_json = json.load(open('version.json'))
VERSION = version_json['version']
MAJOR = '.'.join(VERSION.split('.')[:2])
RELEASE_NAME = version_json['release_name']
OFFICIAL_RELEASE = version_json['official_release']


@ -0,0 +1,17 @@
/// @file This is very bothersome code that has to be included in every
/// executable to get the correct default ASan options. I am so sorry.
extern "C" [[gnu::retain]] const char *__asan_default_options()
{
// We leak a bunch of memory knowingly on purpose. It's not worthwhile to
// diagnose that memory being leaked for now.
//
// Instruction bytes are useful for finding the actual code that
// corresponds to an ASan report.
//
// TODO: setting log_path=asan.log or not: neither works, since you can't
// write to the fs in certain places in the testsuite, but you also cannot
// write arbitrarily to stderr in other places so the reports get eaten.
// pain 🥖
return "halt_on_error=1:abort_on_error=1:detect_leaks=0:print_summary=1:dump_instruction_bytes=1";
}


@ -42,7 +42,7 @@ static std::string makeLockFilename(const std::string & storeUri) {
// This avoids issues with the escaped URI being very long and causing
// path too long errors, while also avoiding any possibility of collision
// caused by simple truncation.
auto hash = hashString(HashType::htSHA256, storeUri).to_string(Base::Base32, false);
auto hash = hashString(HashType::SHA256, storeUri).to_string(Base::Base32, false);
return escapeUri(storeUri).substr(0, 48) + "-" + hash.substr(0, 16);
}


@ -246,7 +246,7 @@ StorePath ProfileManifest::build(ref<Store> store)
StringSink sink;
sink << dumpPath(tempDir);
auto narHash = hashString(htSHA256, sink.s);
auto narHash = hashString(HashType::SHA256, sink.s);
ValidPathInfo info{
*store,


@ -1,23 +1,11 @@
#include "globals.hh"
#include "installable-attr-path.hh"
#include "outputs-spec.hh"
#include "command.hh"
#include "attr-path.hh"
#include "common-eval-args.hh"
#include "derivations.hh"
#include "eval-inline.hh"
#include "eval.hh"
#include "get-drvs.hh"
#include "store-api.hh"
#include "shared.hh"
#include "flake/flake.hh"
#include "eval-cache.hh"
#include "url.hh"
#include "registry.hh"
#include "build-result.hh"
#include <regex>
#include <queue>
#include <nlohmann/json.hpp>


@ -1,25 +1,11 @@
#pragma once
///@file
#include "globals.hh"
#include "installable-value.hh"
#include "outputs-spec.hh"
#include "command.hh"
#include "attr-path.hh"
#include "common-eval-args.hh"
#include "derivations.hh"
#include "eval-inline.hh"
#include "eval.hh"
#include "get-drvs.hh"
#include "store-api.hh"
#include "shared.hh"
#include "eval-cache.hh"
#include "url.hh"
#include "registry.hh"
#include "build-result.hh"
#include <regex>
#include <queue>
#include <nlohmann/json.hpp>


@ -50,7 +50,7 @@ libcmd = library(
editline,
lowdown,
nlohmann_json,
lix_doc
liblix_doc,
],
cpp_pch : cpp_pch,
install : true,


@ -96,7 +96,7 @@ static int listPossibleCallback(char * s, char *** avp)
return p;
};
vp = check((char **) malloc(possible.size() * sizeof(char *)));
vp = check(static_cast<char **>(malloc(possible.size() * sizeof(char *))));
for (auto & p : possible)
vp[ac++] = check(strdup(p.c_str()));


@ -536,7 +536,7 @@ ProcessLineResult NixRepl::processLine(std::string line)
<< " :t <expr> Describe result of evaluation\n"
<< " :u <expr> Build derivation, then start nix-shell\n"
<< " :doc <expr> Show documentation for the provided function (experimental lambda support)\n"
<< " :log <expr> Show logs for a derivation\n"
<< " :log <expr | .drv path> Show logs for a derivation\n"
<< " :te, :trace-enable [bool] Enable, disable or toggle showing traces for\n"
<< " errors\n"
<< " :?, :help Brings up this help menu\n"
@ -676,7 +676,49 @@ ProcessLineResult NixRepl::processLine(std::string line)
runNix("nix-shell", {state->store->printStorePath(drvPath)});
}
else if (command == ":b" || command == ":bl" || command == ":i" || command == ":sh" || command == ":log") {
else if (command == ":log") {
StorePath drvPath = ([&] {
auto maybeDrvPath = state->store->maybeParseStorePath(arg);
if (maybeDrvPath && maybeDrvPath->isDerivation()) {
return std::move(*maybeDrvPath);
} else {
Value v;
evalString(arg, v);
return getDerivationPath(v);
}
})();
Path drvPathRaw = state->store->printStorePath(drvPath);
settings.readOnlyMode = true;
Finally roModeReset([&]() {
settings.readOnlyMode = false;
});
auto subs = getDefaultSubstituters();
subs.push_front(state->store);
bool foundLog = false;
RunPager pager;
for (auto & sub : subs) {
auto * logSubP = dynamic_cast<LogStore *>(&*sub);
if (!logSubP) {
printInfo("Skipped '%s' which does not support retrieving build logs", sub->getUri());
continue;
}
auto & logSub = *logSubP;
auto log = logSub.getBuildLog(drvPath);
if (log) {
printInfo("got build log for '%s' from '%s'", drvPathRaw, logSub.getUri());
logger->writeToStdout(*log);
foundLog = true;
break;
}
}
if (!foundLog) throw Error("build log of '%s' is not available", drvPathRaw);
}
else if (command == ":b" || command == ":bl" || command == ":i" || command == ":sh") {
Value v;
evalString(arg, v);
StorePath drvPath = getDerivationPath(v);
@ -712,34 +754,6 @@ ProcessLineResult NixRepl::processLine(std::string line)
}
} else if (command == ":i") {
runNix("nix-env", {"-i", drvPathRaw});
} else if (command == ":log") {
settings.readOnlyMode = true;
Finally roModeReset([&]() {
settings.readOnlyMode = false;
});
auto subs = getDefaultSubstituters();
subs.push_front(state->store);
bool foundLog = false;
RunPager pager;
for (auto & sub : subs) {
auto * logSubP = dynamic_cast<LogStore *>(&*sub);
if (!logSubP) {
printInfo("Skipped '%s' which does not support retrieving build logs", sub->getUri());
continue;
}
auto & logSub = *logSubP;
auto log = logSub.getBuildLog(drvPath);
if (log) {
printInfo("got build log for '%s' from '%s'", drvPathRaw, logSub.getUri());
logger->writeToStdout(*log);
foundLog = true;
break;
}
}
if (!foundLog) throw Error("build log of '%s' is not available", drvPathRaw);
} else {
runNix("nix-shell", {drvPathRaw});
}


@ -50,7 +50,7 @@ struct AttrDb
Path cacheDir = getCacheDir() + "/nix/eval-cache-v5";
createDirs(cacheDir);
Path dbPath = cacheDir + "/" + fingerprint.to_string(Base16, false) + ".sqlite";
Path dbPath = cacheDir + "/" + fingerprint.to_string(Base::Base16, false) + ".sqlite";
state->db = SQLite(dbPath);
state->db.isCache();


@ -21,7 +21,6 @@
#include "flake/flakeref.hh"
#include <algorithm>
#include <chrono>
#include <iostream>
#include <sstream>
#include <cstring>
@ -71,11 +70,6 @@ std::string printValue(EvalState & state, Value & v)
return out.str();
}
void Value::print(EvalState & state, std::ostream & str, PrintOptions options)
{
printValue(state, str, *this, options);
}
const Value * getPrimOp(const Value &v) {
const Value * primOp = &v;
while (primOp->isPrimOpApp()) {
@ -125,32 +119,6 @@ std::string showType(const Value & v)
#pragma GCC diagnostic pop
}
PosIdx Value::determinePos(const PosIdx pos) const
{
// Allow selecting a subset of enum values
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wswitch-enum"
switch (internalType) {
case tAttrs: return attrs->pos;
case tLambda: return lambda.fun->pos;
case tApp: return app.left->determinePos(pos);
default: return pos;
}
#pragma GCC diagnostic pop
}
bool Value::isTrivial() const
{
return
internalType != tApp
&& internalType != tPrimOpApp
&& (internalType != tThunk
|| (dynamic_cast<ExprAttrs *>(thunk.expr)
&& ((ExprAttrs *) thunk.expr)->dynamicAttrs.empty())
|| dynamic_cast<ExprLambda *>(thunk.expr)
|| dynamic_cast<ExprList *>(thunk.expr));
}
#if HAVE_BOEHMGC
/* Called when the Boehm GC runs out of memory. */
@ -249,6 +217,12 @@ EvalState::EvalState(
, sRight(symbols.create("right"))
, sWrong(symbols.create("wrong"))
, sStructuredAttrs(symbols.create("__structuredAttrs"))
, sAllowedReferences(symbols.create("allowedReferences"))
, sAllowedRequisites(symbols.create("allowedRequisites"))
, sDisallowedReferences(symbols.create("disallowedReferences"))
, sDisallowedRequisites(symbols.create("disallowedRequisites"))
, sMaxSize(symbols.create("maxSize"))
, sMaxClosureSize(symbols.create("maxClosureSize"))
, sBuilder(symbols.create("builder"))
, sArgs(symbols.create("args"))
, sContentAddressed(symbols.create("__contentAddressed"))
@ -275,6 +249,7 @@ EvalState::EvalState(
.findFile = symbols.create("__findFile"),
.nixPath = symbols.create("__nixPath"),
.body = symbols.create("body"),
.overrides = symbols.create("__overrides"),
}
, repair(NoRepair)
, emptyBindings(0)
@ -493,29 +468,6 @@ std::ostream & operator<<(std::ostream & output, PrimOp & primOp)
return output;
}
PrimOp * Value::primOpAppPrimOp() const
{
Value * left = primOpApp.left;
while (left && !left->isPrimOp()) {
left = left->primOpApp.left;
}
if (!left)
return nullptr;
return left->primOp;
}
void Value::mkPrimOp(PrimOp * p)
{
p->check();
clearValue();
internalType = tPrimOp;
primOp = p;
}
Value * EvalState::addPrimOp(PrimOp && primOp)
{
/* Hack to make constants lazy: turn them into an application of
@ -767,42 +719,6 @@ DebugTraceStacker::DebugTraceStacker(EvalState & evalState, DebugTrace t)
evalState.runDebugRepl(nullptr, trace.env, trace.expr);
}
void Value::mkString(std::string_view s)
{
mkString(gcCopyStringIfNeeded(s));
}
static void copyContextToValue(Value & v, const NixStringContext & context)
{
if (!context.empty()) {
size_t n = 0;
v.string.context = gcAllocType<char const *>(context.size() + 1);
for (auto & i : context)
v.string.context[n++] = gcCopyStringIfNeeded(i.to_string());
v.string.context[n] = 0;
}
}
void Value::mkString(std::string_view s, const NixStringContext & context)
{
mkString(s);
copyContextToValue(*this, context);
}
void Value::mkStringMove(const char * s, const NixStringContext & context)
{
mkString(s);
copyContextToValue(*this, context);
}
void Value::mkPath(const SourcePath & path)
{
mkPath(gcCopyStringIfNeeded(path.path.abs()));
}
inline Value * EvalState::lookupVar(Env * env, const ExprVar & var, bool noEval)
{
for (auto l = var.level; l; --l, env = env->up) ;
@ -2795,20 +2711,29 @@ Expr & EvalState::parseExprFromFile(const SourcePath & path, std::shared_ptr<Sta
}
Expr & EvalState::parseExprFromString(std::string s_, const SourcePath & basePath, std::shared_ptr<StaticEnv> & staticEnv)
Expr & EvalState::parseExprFromString(
std::string s_,
const SourcePath & basePath,
std::shared_ptr<StaticEnv> & staticEnv,
const FeatureSettings & featureSettings
)
{
// NOTE this method (and parseStdin) must take care to *fully copy* their input
// into their respective Pos::Origin until the parser stops overwriting its input
// data.
auto s = make_ref<std::string>(s_);
s_.append("\0\0", 2);
return *parse(s_.data(), s_.size(), Pos::String{.source = s}, basePath, staticEnv);
return *parse(s_.data(), s_.size(), Pos::String{.source = s}, basePath, staticEnv, featureSettings);
}
Expr & EvalState::parseExprFromString(std::string s, const SourcePath & basePath)
Expr & EvalState::parseExprFromString(
std::string s,
const SourcePath & basePath,
const FeatureSettings & featureSettings
)
{
return parseExprFromString(std::move(s), basePath, staticBaseEnv);
return parseExprFromString(std::move(s), basePath, staticBaseEnv, featureSettings);
}


@ -13,7 +13,6 @@
#include "search-path.hh"
#include "repl-exit-status.hh"
#include <gc/gc_allocator.h>
#include <map>
#include <optional>
#include <unordered_map>
@ -162,7 +161,10 @@ public:
const Symbol sWith, sOutPath, sDrvPath, sType, sMeta, sName, sValue,
sSystem, sOverrides, sOutputs, sOutputName, sIgnoreNulls,
sFile, sLine, sColumn, sFunctor, sToString,
sRight, sWrong, sStructuredAttrs, sBuilder, sArgs,
sRight, sWrong, sStructuredAttrs,
sAllowedReferences, sAllowedRequisites, sDisallowedReferences, sDisallowedRequisites,
sMaxSize, sMaxClosureSize,
sBuilder, sArgs,
sContentAddressed, sImpure,
sOutputHash, sOutputHashAlgo, sOutputHashMode,
sRecurseForDerivations,
@ -342,8 +344,17 @@ public:
/**
* Parse a Nix expression from the specified string.
*/
Expr & parseExprFromString(std::string s, const SourcePath & basePath, std::shared_ptr<StaticEnv> & staticEnv);
Expr & parseExprFromString(std::string s, const SourcePath & basePath);
Expr & parseExprFromString(
std::string s,
const SourcePath & basePath,
std::shared_ptr<StaticEnv> & staticEnv,
const FeatureSettings & xpSettings = featureSettings
);
Expr & parseExprFromString(
std::string s,
const SourcePath & basePath,
const FeatureSettings & xpSettings = featureSettings
);
Expr & parseStdin();
@ -566,7 +577,8 @@ private:
size_t length,
Pos::Origin origin,
const SourcePath & basePath,
std::shared_ptr<StaticEnv> & staticEnv);
std::shared_ptr<StaticEnv> & staticEnv,
const FeatureSettings & xpSettings = featureSettings);
/**
* Current Nix call stack depth, used with `max-call-depth` setting to throw stack overflow hopefully before we run out of system stack.


@ -53,7 +53,7 @@ void ConfigFile::apply()
bool trusted = whitelist.count(baseName);
if (!trusted) {
switch (nix::fetchSettings.acceptFlakeConfig) {
switch (nix::fetchSettings.acceptFlakeConfig.get()) {
case AcceptFlakeConfig::True: {
trusted = true;
break;


@ -342,8 +342,21 @@ static void updateOverrides(std::map<InputPath, FlakeInput> & overrideMap, const
for (auto & [id, input] : overrides) {
auto inputPath(inputPathPrefix);
inputPath.push_back(id);
// Do not override existing assignment from outer flake
overrideMap.insert({inputPath, input});
/* Given
*
* { inputs.hydra.inputs.nix-eval-jobs.inputs.lix.follows = "lix"; }
*
* then `nix-eval-jobs` doesn't have an override.
* It's neither replaced using follows nor by a different
* URL, so there is no need to add it to overrides and
* re-fetch it.
*/
if (input.ref || input.follows) {
// Do not override existing assignment from outer flake
overrideMap.insert({inputPath, input});
}
updateOverrides(overrideMap, input.overrides, inputPath);
}
}
@ -937,7 +950,7 @@ Fingerprint LockedFlake::getFingerprint() const
// FIXME: as an optimization, if the flake contains a lock file
// and we haven't changed it, then it's sufficient to use
// flake.sourceInfo.storePath for the fingerprint.
return hashString(htSHA256,
return hashString(HashType::SHA256,
fmt("%s;%s;%d;%d;%s",
flake.sourceInfo->storePath.to_string(),
flake.lockedRef.subdir,


@ -169,14 +169,13 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
if (subdir != "") {
if (parsedURL.query.count("dir"))
throw Error("flake URL '%s' has an inconsistent 'dir' parameter", url);
parsedURL.query.insert_or_assign("dir", subdir);
}
if (pathExists(flakeRoot + "/.git/shallow"))
parsedURL.query.insert_or_assign("shallow", "1");
return std::make_pair(
FlakeRef(Input::fromURL(parsedURL, isFlake), getOr(parsedURL.query, "dir", "")),
FlakeRef(Input::fromURL(parsedURL, isFlake), subdir),
fragment);
}
@ -204,7 +203,13 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
std::string fragment;
std::swap(fragment, parsedURL.fragment);
auto input = Input::fromURL(parsedURL, isFlake);
// This has a special meaning for flakes and must not be passed to libfetchers.
// Of course this means that libfetchers cannot have fetchers
// expecting an argument `dir` 🫠
ParsedURL urlForFetchers(parsedURL);
urlForFetchers.query.erase("dir");
auto input = Input::fromURL(urlForFetchers, isFlake);
input.parent = baseDir;
return std::make_pair(


@ -1,6 +1,7 @@
#include "get-drvs.hh"
#include "eval-inline.hh"
#include "derivations.hh"
#include "eval.hh"
#include "store-api.hh"
#include "path-with-outputs.hh"
@ -101,66 +102,134 @@ StorePath DrvInfo::queryOutPath()
return *outPath;
}
void DrvInfo::fillOutputs(bool withPaths)
{
auto fillDefault = [&]() {
std::optional<StorePath> outPath = std::nullopt;
if (withPaths) {
outPath.emplace(this->queryOutPath());
}
this->outputs.emplace("out", outPath);
};
// lol. lmao even.
if (this->attrs == nullptr) {
fillDefault();
return;
}
Attr * outputs = this->attrs->get(this->state->sOutputs);
if (outputs == nullptr) {
fillDefault();
return;
}
// NOTE(Qyriad): I don't think there is any codepath that can cause this to error.
this->state->forceList(
*outputs->value,
outputs->pos,
"while evaluating the 'outputs' attribute of a derivation"
);
for (auto [idx, elem] : enumerate(outputs->value->listItems())) {
// NOTE(Qyriad): This error should be *extremely* rare in practice.
// It is impossible to construct with `stdenv.mkDerivation`,
// `builtins.derivation`, or even `derivationStrict`. As far as we can tell,
// it is only possible by overriding a derivation attrset already created by
// one of those with `//` to introduce the failing `outputs` entry.
auto errMsg = fmt("while evaluating output %d of a derivation", idx);
std::string_view outputName = state->forceStringNoCtx(
*elem,
outputs->pos,
errMsg
);
if (withPaths) {
// Find the attr with this output's name...
Attr * out = this->attrs->get(this->state->symbols.create(outputName));
if (out == nullptr) {
// FIXME: throw error?
continue;
}
// Meanwhile we couldn't figure out any circumstances
// that cause this to error.
state->forceAttrs(*out->value, outputs->pos, errMsg);
// ...and evaluate its `outPath` attribute.
Attr * outPath = out->value->attrs->get(this->state->sOutPath);
if (outPath == nullptr) {
continue;
// FIXME: throw error?
}
NixStringContext context;
// And idk what could possibly cause this one to error
// that wouldn't error before here.
auto storePath = state->coerceToStorePath(
outPath->pos,
*outPath->value,
context,
errMsg
);
this->outputs.emplace(outputName, storePath);
} else {
this->outputs.emplace(outputName, std::nullopt);
}
}
}
DrvInfo::Outputs DrvInfo::queryOutputs(bool withPaths, bool onlyOutputsToInstall)
{
// If we haven't already cached the outputs set, then do so now.
if (outputs.empty()) {
/* Get the outputs list. */
Bindings::iterator i;
if (attrs && (i = attrs->find(state->sOutputs)) != attrs->end()) {
state->forceList(*i->value, i->pos, "while evaluating the 'outputs' attribute of a derivation");
/* For each output... */
for (auto elem : i->value->listItems()) {
std::string output(state->forceStringNoCtx(*elem, i->pos, "while evaluating the name of an output of a derivation"));
if (withPaths) {
/* Evaluate the corresponding set. */
Bindings::iterator out = attrs->find(state->symbols.create(output));
if (out == attrs->end()) continue; // FIXME: throw error?
state->forceAttrs(*out->value, i->pos, "while evaluating an output of a derivation");
/* And evaluate its outPath attribute. */
Bindings::iterator outPath = out->value->attrs->find(state->sOutPath);
if (outPath == out->value->attrs->end()) continue; // FIXME: throw error?
NixStringContext context;
outputs.emplace(output, state->coerceToStorePath(outPath->pos, *outPath->value, context, "while evaluating an output path of a derivation"));
} else
outputs.emplace(output, std::nullopt);
}
} else
outputs.emplace("out", withPaths ? std::optional{queryOutPath()} : std::nullopt);
// FIXME: this behavior seems kind of busted, since whether or not this
// DrvInfo will have paths is forever determined by the *first* call to
// this function??
fillOutputs(withPaths);
}
// Things that operate on derivations like packages, like `nix-env` and `nix build`,
// allow derivations to specify which outputs should be used in those user-facing
// cases if the user didn't specify an output explicitly.
// If the caller just wanted all the outputs for this derivation, though,
// then we're done here.
if (!onlyOutputsToInstall || !attrs)
return outputs;
Bindings::iterator i;
if (attrs && (i = attrs->find(state->sOutputSpecified)) != attrs->end() && state->forceBool(*i->value, i->pos, "while evaluating the 'outputSpecified' attribute of a derivation")) {
Outputs result;
auto out = outputs.find(queryOutputName());
if (out == outputs.end())
throw Error("derivation does not have output '%s'", queryOutputName());
result.insert(*out);
return result;
// Regardless of `meta.outputsToInstall`, though, you can select into a derivation
// output by its attribute, e.g. `pkgs.lix.dev`, which (lol?) sets the magic
// attribute `outputSpecified = true`, and changes the `outputName` attr to the
// explicitly selected-into output.
if (Attr * outSpecAttr = attrs->get(state->sOutputSpecified)) {
bool outputSpecified = this->state->forceBool(
*outSpecAttr->value,
outSpecAttr->pos,
"while evaluating the 'outputSpecified' attribute of a derivation"
);
if (outputSpecified) {
auto maybeOut = outputs.find(queryOutputName());
if (maybeOut == outputs.end()) {
throw Error("derivation does not have output '%s'", queryOutputName());
}
return Outputs{*maybeOut};
}
}
else {
/* Check for `meta.outputsToInstall` and return `outputs` reduced to that. */
const Value * outTI = queryMeta("outputsToInstall");
if (!outTI) return outputs;
auto errMsg = Error("this derivation has bad 'meta.outputsToInstall'");
/* ^ this shows during `nix-env -i` right under the bad derivation */
if (!outTI->isList()) throw errMsg;
Outputs result;
for (auto elem : outTI->listItems()) {
if (elem->type() != nString) throw errMsg;
auto out = outputs.find(elem->string.s);
if (out == outputs.end()) throw errMsg;
result.insert(*out);
}
return result;
/* Check for `meta.outputsToInstall` and return `outputs` reduced to that. */
const Value * outTI = queryMeta("outputsToInstall");
if (!outTI) return outputs;
auto errMsg = Error("this derivation has bad 'meta.outputsToInstall'");
/* ^ this shows during `nix-env -i` right under the bad derivation */
if (!outTI->isList()) throw errMsg;
Outputs result;
for (auto elem : outTI->listItems()) {
if (elem->type() != nString) throw errMsg;
auto out = outputs.find(elem->string.s);
if (out == outputs.end()) throw errMsg;
result.insert(*out);
}
return result;
}
@ -350,56 +419,95 @@ static void getDerivations(EvalState & state, Value & vIn,
Value v;
state.autoCallFunction(autoArgs, vIn, v);
/* Process the expression. */
if (!getDerivation(state, v, pathPrefix, drvs, ignoreAssertionFailures)) ;
else if (v.type() == nAttrs) {
/* Don't consider sets we've already seen, e.g. y in
`rec { x.d = derivation {...}; y = x; }`. */
if (!done.insert(v.attrs).second) return;
/* !!! undocumented hackery to support combining channels in
nix-env.cc. */
bool combineChannels = v.attrs->find(state.symbols.create("_combineChannels")) != v.attrs->end();
/* Consider the attributes in sorted order to get more
deterministic behaviour in nix-env operations (e.g. when
there are name clashes between derivations, the derivation
bound to the attribute with the "lower" name should take
precedence). */
for (auto & i : v.attrs->lexicographicOrder(state.symbols)) {
debug("evaluating attribute '%1%'", state.symbols[i->name]);
if (!std::regex_match(std::string(state.symbols[i->name]), attrRegex))
continue;
std::string pathPrefix2 = addToPath(pathPrefix, state.symbols[i->name]);
if (combineChannels)
getDerivations(state, *i->value, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
else if (getDerivation(state, *i->value, pathPrefix2, drvs, ignoreAssertionFailures)) {
/* If the value of this attribute is itself a set,
should we recurse into it? => Only if it has a
`recurseForDerivations = true' attribute. */
if (i->value->type() == nAttrs) {
Bindings::iterator j = i->value->attrs->find(state.sRecurseForDerivations);
if (j != i->value->attrs->end() && state.forceBool(*j->value, j->pos, "while evaluating the attribute `recurseForDerivations`"))
getDerivations(state, *i->value, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
}
}
}
bool shouldRecurse = getDerivation(state, v, pathPrefix, drvs, ignoreAssertionFailures);
if (!shouldRecurse) {
// We're done here.
return;
}
else if (v.type() == nList) {
if (v.type() == nList) {
// NOTE: we can't really deduplicate here, because small lists don't have
// stable addresses, which can cause spurious duplicate detections while v is on the stack.
for (auto [n, elem] : enumerate(v.listItems())) {
std::string pathPrefix2 = addToPath(pathPrefix, fmt("%d", n));
if (getDerivation(state, *elem, pathPrefix2, drvs, ignoreAssertionFailures))
getDerivations(state, *elem, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
std::string joinedAttrPath = addToPath(pathPrefix, fmt("%d", n));
bool shouldRecurse = getDerivation(state, *elem, joinedAttrPath, drvs, ignoreAssertionFailures);
if (shouldRecurse) {
getDerivations(
state,
*elem,
joinedAttrPath,
autoArgs,
drvs,
done,
ignoreAssertionFailures
);
}
}
return;
} else if (v.type() != nAttrs) {
state.error<TypeError>(
"expression was expected to be a derivation or collection of derivations, but instead was %s",
showType(v.type(), true)
).debugThrow();
}
else
state.error<TypeError>("expression does not evaluate to a derivation (or a set or list of those)").debugThrow();
/* Don't consider sets we've already seen, e.g. y in
`rec { x.d = derivation {...}; y = x; }`. */
auto const &[_, didInsert] = done.insert(v.attrs);
if (!didInsert) {
return;
}
// FIXME: what the fuck???
/* !!! undocumented hackery to support combining channels in
nix-env.cc. */
bool combineChannels = v.attrs->find(state.symbols.create("_combineChannels")) != v.attrs->end();
/* Consider the attributes in sorted order to get more
deterministic behaviour in nix-env operations (e.g. when
there are name clashes between derivations, the derivation
bound to the attribute with the "lower" name should take
precedence). */
for (auto & attr : v.attrs->lexicographicOrder(state.symbols)) {
debug("evaluating attribute '%1%'", state.symbols[attr->name]);
// FIXME: only consider attrs with identifier-like names?? Why???
if (!std::regex_match(std::string(state.symbols[attr->name]), attrRegex)) {
continue;
}
std::string joinedAttrPath = addToPath(pathPrefix, state.symbols[attr->name]);
if (combineChannels) {
getDerivations(state, *attr->value, joinedAttrPath, autoArgs, drvs, done, ignoreAssertionFailures);
} else if (getDerivation(state, *attr->value, joinedAttrPath, drvs, ignoreAssertionFailures)) {
/* If the value of this attribute is itself a set,
should we recurse into it? => Only if it has a
`recurseForDerivations = true' attribute. */
if (attr->value->type() == nAttrs) {
Attr * recurseForDrvs = attr->value->attrs->get(state.sRecurseForDerivations);
if (recurseForDrvs == nullptr) {
continue;
}
bool shouldRecurse = state.forceBool(
*recurseForDrvs->value,
attr->pos,
fmt("while evaluating the '%s' attribute", Magenta("recurseForDerivations"))
);
if (!shouldRecurse) {
continue;
}
getDerivations(
state,
*attr->value,
joinedAttrPath,
autoArgs,
drvs,
done,
ignoreAssertionFailures
);
}
}
}
}
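The `fillOutputs`/`queryOutputs` split above computes the name-to-path mapping once and then optionally narrows it. A hypothetical call site, assuming an already-constructed `DrvInfo` and store (the printing helper is illustrative, not part of this diff):

```
#include "get-drvs.hh"
#include "store-api.hh"

#include <iostream>

// Sketch: list every output of a derivation, then count the ones selected
// via meta.outputsToInstall / outputSpecified.
void listOutputs(nix::DrvInfo & drv, nix::ref<nix::Store> store)
{
    for (auto & [name, path] : drv.queryOutputs(true)) {
        std::cout << name << " -> "
                  << (path ? store->printStorePath(*path) : "(no path)")
                  << "\n";
    }

    auto selected = drv.queryOutputs(true, /* onlyOutputsToInstall */ true);
    std::cout << selected.size() << " output(s) selected for install\n";
}
```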

View file

@ -37,13 +37,14 @@ private:
bool checkMeta(Value & v);
void fillOutputs(bool withPaths = true);
public:
/**
* path towards the derivation
*/
std::string attrPath;
DrvInfo(EvalState & state) : state(&state) { };
DrvInfo(EvalState & state, std::string attrPath, Bindings * attrs);
DrvInfo(EvalState & state, ref<Store> store, const std::string & drvPathWithOutputs);

View file

@ -31,6 +31,7 @@ libexpr_sources = files(
'print-ambiguous.cc',
'print.cc',
'search-path.cc',
'value.cc',
'value-to-json.cc',
'value-to-xml.cc',
'flake/config.cc',

View file

@ -48,7 +48,7 @@ protected:
public:
struct AstSymbols {
Symbol sub, lessThan, mul, div, or_, findFile, nixPath, body;
Symbol sub, lessThan, mul, div, or_, findFile, nixPath, body, overrides;
};

View file

@ -434,6 +434,8 @@ struct op {
struct and_ : _op<TAO_PEGTL_STRING("&&"), 12> {};
struct or_ : _op<TAO_PEGTL_STRING("||"), 13> {};
struct implies : _op<TAO_PEGTL_STRING("->"), 14, kind::rightAssoc> {};
struct pipe_right : _op<TAO_PEGTL_STRING("|>"), 15> {};
struct pipe_left : _op<TAO_PEGTL_STRING("<|"), 16, kind::rightAssoc> {};
};
struct _expr {
@ -521,6 +523,7 @@ struct _expr {
app
> {};
/* Order matters here. The order is the parsing order, not the precedence order: '<=' must be parsed before '<'. */
struct _binary_operator : sor<
operator_<op::implies>,
operator_<op::update>,
@ -529,6 +532,8 @@ struct _expr {
operator_<op::minus>,
operator_<op::mul>,
operator_<op::div>,
operator_<op::pipe_right>,
operator_<op::pipe_left>,
operator_<op::less_eq>,
operator_<op::greater_eq>,
operator_<op::less>,
@ -649,6 +654,8 @@ struct operator_semantics {
grammar::op::minus,
grammar::op::mul,
grammar::op::div,
grammar::op::pipe_right,
grammar::op::pipe_left,
has_attr
> op;
};
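The two new operators are plain sugar for function application; precedence levels 15 and 16 put them below every existing operator, and only `<|` is right-associative. Illustratively (Nix shown in comments, since the grammar above is what parses it):

```
// x |> f          parses as  f x
// f <| x          parses as  f x
// x |> f |> g     parses as  g (f x)   (|> associates to the left)
// h <| f <| x     parses as  h (f x)   (<| associates to the right)
```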

View file

@ -12,7 +12,6 @@
#include "state.hh"
#include <charconv>
#include <clocale>
#include <memory>
// flip this define when doing parser development to enable some grammar checks.
@ -114,6 +113,29 @@ struct ExprState
return std::make_unique<ExprCall>(pos, std::make_unique<ExprVar>(fn), std::move(args));
}
std::unique_ptr<Expr> pipe(PosIdx pos, State & state, bool flip = false)
{
if (!state.featureSettings.isEnabled(Xp::PipeOperator))
throw ParseError({
.msg = HintFmt("Pipe operator is disabled"),
.pos = state.positions[pos]
});
// Reverse the order compared to normal function application: arg |> fn
std::unique_ptr<Expr> fn, arg;
if (flip) {
fn = popExprOnly();
arg = popExprOnly();
} else {
arg = popExprOnly();
fn = popExprOnly();
}
std::vector<std::unique_ptr<Expr>> args{1};
args[0] = std::move(arg);
return std::make_unique<ExprCall>(pos, std::move(fn), std::move(args));
}
std::unique_ptr<Expr> order(PosIdx pos, bool less, State & state)
{
return call(pos, state.s.lessThan, !less);
@ -163,6 +185,8 @@ struct ExprState
[&] (Op::concat) { return applyBinary<ExprOpConcatLists>(pos); },
[&] (has_attr & a) { return applyUnary<ExprOpHasAttr>(std::move(a.path)); },
[&] (Op::unary_minus) { return negate(pos, state); },
[&] (Op::pipe_right) { return pipe(pos, state, true); },
[&] (Op::pipe_left) { return pipe(pos, state); },
})(op)
};
}
@ -254,7 +278,8 @@ struct AttrState : SubexprState {
std::vector<AttrName> attrs;
void pushAttr(auto && attr, PosIdx) { attrs.emplace_back(std::move(attr)); }
template <typename T>
void pushAttr(T && attr, PosIdx) { attrs.emplace_back(std::forward<T>(attr)); }
};
template<> struct BuildAST<grammar::attr::simple> {
@ -290,7 +315,8 @@ struct InheritState : SubexprState {
std::unique_ptr<Expr> from;
PosIdx fromPos;
void pushAttr(auto && attr, PosIdx pos) { attrs.emplace_back(std::move(attr), pos); }
template <typename T>
void pushAttr(T && attr, PosIdx pos) { attrs.emplace_back(std::forward<T>(attr), pos); }
};
template<> struct BuildAST<grammar::inherit::from> {
@ -630,10 +656,10 @@ template<> struct BuildAST<grammar::expr::path> : p::maybe_nothing {};
template<> struct BuildAST<grammar::expr::uri> {
static void apply(const auto & in, ExprState & s, State & ps) {
static bool noURLLiterals = experimentalFeatureSettings.isEnabled(Xp::NoUrlLiterals);
if (noURLLiterals)
bool URLLiterals = ps.featureSettings.isEnabled(Dep::UrlLiterals);
if (!URLLiterals)
throw ParseError({
.msg = HintFmt("URL literals are disabled"),
.msg = HintFmt("URL literals are deprecated; to use them, pass --extra-deprecated-features=url-literals"),
.pos = ps.positions[ps.at(in)]
});
s.pushExpr<ExprString>(ps.at(in), in.string());
@ -642,6 +668,16 @@ template<> struct BuildAST<grammar::expr::uri> {
template<> struct BuildAST<grammar::expr::ancient_let> : change_head<BindingsState> {
static void success(const auto & in, BindingsState & b, ExprState & s, State & ps) {
// Added 2024-09-18. Turn into an error at some point in the future.
// See the documentation on deprecated features for more details.
if (!ps.featureSettings.isEnabled(Dep::AncientLet))
warn(
"%s found at %s. This feature is deprecated and will be removed in the future. Use %s to silence this warning.",
"let {",
ps.positions[ps.at(in)],
"--extra-deprecated-features ancient-let"
);
b.attrs.pos = ps.at(in);
b.attrs.recursive = true;
s.pushExpr<ExprSelect>(b.attrs.pos, b.attrs.pos, std::make_unique<ExprAttrs>(std::move(b.attrs)), ps.s.body);
@ -650,6 +686,12 @@ template<> struct BuildAST<grammar::expr::ancient_let> : change_head<BindingsSta
template<> struct BuildAST<grammar::expr::rec_set> : change_head<BindingsState> {
static void success(const auto & in, BindingsState & b, ExprState & s, State & ps) {
// Before inserting new attrs, check for __overrides and throw an error
// (the error will initially be a warning to ease migration)
if (!featureSettings.isEnabled(Dep::RecSetOverrides) && b.attrs.attrs.contains(ps.s.overrides)) {
ps.overridesFound(ps.at(in));
}
b.attrs.pos = ps.at(in);
b.attrs.recursive = true;
s.pushExpr<ExprAttrs>(b.attrs.pos, std::move(b.attrs));
@ -831,7 +873,8 @@ Expr * EvalState::parse(
size_t length,
Pos::Origin origin,
const SourcePath & basePath,
std::shared_ptr<StaticEnv> & staticEnv)
std::shared_ptr<StaticEnv> & staticEnv,
const FeatureSettings & featureSettings)
{
parser::State s = {
symbols,
@ -839,6 +882,7 @@ Expr * EvalState::parse(
basePath,
positions.addOrigin(origin, length),
exprSymbols,
featureSettings,
};
parser::ExprState x;
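Taken together, the parser hunks above gate three old constructs behind deprecated-feature flags rather than removing them outright. Roughly (Nix and CLI shown as comments; flag spellings are as in the messages above):

```
// url-literals:       x = https://example.org;
//                     -> now a parse error unless the feature is enabled
// ancient-let:        let { body = 1; }
//                     -> warns, to become an error later
// rec-set-overrides:  rec { __overrides = { a = 1; }; }
//                     -> warns, to become an error later
//
// Opting back in, e.g.:
//   nix-instantiate --extra-deprecated-features url-literals ...
```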

View file

@ -2,6 +2,7 @@
///@file
#include "eval.hh"
#include "logging.hh"
namespace nix::parser {
@ -19,9 +20,11 @@ struct State
SourcePath basePath;
PosTable::Origin origin;
const Expr::AstSymbols & s;
const FeatureSettings & featureSettings;
void dupAttr(const AttrPath & attrPath, const PosIdx pos, const PosIdx prevPos);
void dupAttr(Symbol attr, const PosIdx pos, const PosIdx prevPos);
void overridesFound(const PosIdx pos);
void addAttr(ExprAttrs * attrs, AttrPath && attrPath, std::unique_ptr<Expr> e, const PosIdx pos);
std::unique_ptr<Formals> validateFormals(std::unique_ptr<Formals> formals, PosIdx pos = noPos, Symbol arg = {});
std::unique_ptr<Expr> stripIndentation(const PosIdx pos,
@ -57,6 +60,17 @@ inline void State::dupAttr(Symbol attr, const PosIdx pos, const PosIdx prevPos)
});
}
inline void State::overridesFound(const PosIdx pos) {
// Added 2024-09-18. Turn into an error at some point in the future.
// See the documentation on deprecated features for more details.
warn(
"%s found at %s. This feature is deprecated and will be removed in the future. Use %s to silence this warning.",
"__overrides",
positions[pos],
"--extra-deprecated-features rec-set-overrides"
);
}
inline void State::addAttr(ExprAttrs * attrs, AttrPath && attrPath, std::unique_ptr<Expr> e, const PosIdx pos)
{
AttrPath::iterator i;
@ -122,6 +136,12 @@ inline void State::addAttr(ExprAttrs * attrs, AttrPath && attrPath, std::unique_
dupAttr(attrPath, pos, j->second.pos);
}
} else {
// Before inserting new attrs, check for __overrides and throw an error
// (the error will initially be a warning to ease migration)
if (attrs->recursive && !featureSettings.isEnabled(Dep::RecSetOverrides) && i->symbol == s.overrides) {
overridesFound(pos);
}
// This attr path is not defined. Let's create it.
e->setName(i->symbol);
attrs->attrs.emplace(std::piecewise_construct,

View file

@ -346,7 +346,7 @@ void prim_importNative(EvalState & state, const PosIdx pos, Value * * args, Valu
state.error<EvalError>("could not open '%1%': %2%", path, dlerror()).debugThrow();
dlerror();
ValueInitializer func = (ValueInitializer) dlsym(handle, sym.c_str());
ValueInitializer func = reinterpret_cast<ValueInitializer>(dlsym(handle, sym.c_str()));
if(!func) {
char *message = dlerror();
if (message)
@ -1213,6 +1213,20 @@ drvName, Bindings * attrs, Value & v)
handleOutputs(ss);
}
if (i->name == state.sAllowedReferences)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'allowedReferences'; use 'outputChecks.<output>.allowedReferences' instead", drvName);
if (i->name == state.sAllowedRequisites)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'allowedRequisites'; use 'outputChecks.<output>.allowedRequisites' instead", drvName);
if (i->name == state.sDisallowedReferences)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'disallowedReferences'; use 'outputChecks.<output>.disallowedReferences' instead", drvName);
if (i->name == state.sDisallowedRequisites)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'disallowedRequisites'; use 'outputChecks.<output>.disallowedRequisites' instead", drvName);
if (i->name == state.sMaxSize)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'maxSize'; use 'outputChecks.<output>.maxSize' instead", drvName);
if (i->name == state.sMaxClosureSize)
warn("In a derivation named '%s', 'structuredAttrs' disables the effect of the derivation attribute 'maxClosureSize'; use 'outputChecks.<output>.maxClosureSize' instead", drvName);
} else {
auto s = state.coerceToString(pos, *i->value, context, context_below, true).toOwned();
drv.env.emplace(key, s);
@ -1322,7 +1336,7 @@ drvName, Bindings * attrs, Value & v)
state.error<EvalError>("derivation cannot be both content-addressed and impure")
.atPos(v).debugThrow();
auto ht = parseHashTypeOpt(outputHashAlgo).value_or(htSHA256);
auto ht = parseHashTypeOpt(outputHashAlgo).value_or(HashType::SHA256);
auto method = ingestionMethod.value_or(FileIngestionMethod::Recursive);
for (auto & i : outputs) {
@ -1750,7 +1764,7 @@ static void prim_hashFile(EvalState & state, const PosIdx pos, Value * * args, V
auto path = realisePath(state, pos, *args[1]);
v.mkString(hashString(*ht, path.readFile()).to_string(Base16, false));
v.mkString(hashString(*ht, path.readFile()).to_string(Base::Base16, false));
}
static RegisterPrimOp primop_hashFile({
@ -2332,7 +2346,7 @@ static void prim_path(EvalState & state, const PosIdx pos, Value * * args, Value
else if (n == "recursive")
method = FileIngestionMethod { state.forceBool(*attr.value, attr.pos, "while evaluating the `recursive` attribute passed to builtins.path") };
else if (n == "sha256")
expectedHash = newHashAllowEmpty(state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the `sha256` attribute passed to builtins.path"), htSHA256);
expectedHash = newHashAllowEmpty(state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the `sha256` attribute passed to builtins.path"), HashType::SHA256);
else
state.error<EvalError>(
"unsupported argument '%1%' to 'addPath'",
@ -2425,6 +2439,8 @@ static void prim_attrValues(EvalState & state, const PosIdx pos, Value * * args,
state.mkList(v, args[0]->attrs->size());
// FIXME: this is incredibly evil, *why*
// NOLINTBEGIN(cppcoreguidelines-pro-type-cstyle-cast)
unsigned int n = 0;
for (auto & i : *args[0]->attrs)
v.listElems()[n++] = (Value *) &i;
@ -2438,6 +2454,7 @@ static void prim_attrValues(EvalState & state, const PosIdx pos, Value * * args,
for (unsigned int i = 0; i < n; ++i)
v.listElems()[i] = ((Attr *) v.listElems()[i])->value;
// NOLINTEND(cppcoreguidelines-pro-type-cstyle-cast)
}
static RegisterPrimOp primop_attrValues({
@ -3847,7 +3864,7 @@ static void prim_hashString(EvalState & state, const PosIdx pos, Value * * args,
NixStringContext context; // discarded
auto s = state.forceString(*args[1], context, pos, "while evaluating the second argument passed to builtins.hashString");
v.mkString(hashString(*ht, s).to_string(Base16, false));
v.mkString(hashString(*ht, s).to_string(Base::Base16, false));
}
static RegisterPrimOp primop_hashString({
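The six new warnings all point at the same migration: with `__structuredAttrs = true`, the top-level reference-checking attributes are silently ignored and their `outputChecks` equivalents must be used instead. Illustrative derivation attributes (Nix, in comments):

```
// Ignored (and now warned about) when __structuredAttrs = true:
//   allowedReferences = [ ... ];
// Structured-attrs replacement:
//   outputChecks.out.allowedReferences = [ ... ];
// and likewise for allowedRequisites, disallowedReferences,
// disallowedRequisites, maxSize and maxClosureSize.
```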

View file

@ -31,7 +31,7 @@ static void prim_fetchMercurial(EvalState & state, const PosIdx pos, Value * * a
// be both a revision or a branch/tag name.
auto value = state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the `rev` attribute passed to builtins.fetchMercurial");
if (std::regex_match(value.begin(), value.end(), revRegex))
rev = Hash::parseAny(value, htSHA1);
rev = Hash::parseAny(value, HashType::SHA1);
else
ref = value;
}
@ -73,7 +73,7 @@ static void prim_fetchMercurial(EvalState & state, const PosIdx pos, Value * * a
attrs2.alloc("branch").mkString(*input2.getRef());
// Backward compatibility: set 'rev' to
// 0000000000000000000000000000000000000000 for a dirty tree.
auto rev2 = input2.getRev().value_or(Hash(htSHA1));
auto rev2 = input2.getRev().value_or(Hash(HashType::SHA1));
attrs2.alloc("rev").mkString(rev2.gitRev());
attrs2.alloc("shortRev").mkString(rev2.gitRev().substr(0, 12));
if (auto revCount = input2.getRevCount())

View file

@ -32,7 +32,7 @@ void emitTreeAttrs(
auto narHash = input.getNarHash();
assert(narHash);
attrs.alloc("narHash").mkString(narHash->to_string(SRI, true));
attrs.alloc("narHash").mkString(narHash->to_string(Base::SRI, true));
if (input.getType() == "git")
attrs.alloc("submodules").mkBool(
@ -45,7 +45,7 @@ void emitTreeAttrs(
attrs.alloc("shortRev").mkString(rev->gitShortRev());
} else if (emptyRevFallback) {
// Backwards compat for `builtins.fetchGit`: dirty repos return an empty sha1 as rev
auto emptyHash = Hash(htSHA1);
auto emptyHash = Hash(HashType::SHA1);
attrs.alloc("rev").mkString(emptyHash.gitRev());
attrs.alloc("shortRev").mkString(emptyHash.gitShortRev());
}
@ -226,7 +226,7 @@ static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v
if (n == "url")
url = state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the url we should fetch");
else if (n == "sha256")
expectedHash = newHashAllowEmpty(state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the sha256 of the content we should fetch"), htSHA256);
expectedHash = newHashAllowEmpty(state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the sha256 of the content we should fetch"), HashType::SHA256);
else if (n == "name")
name = state.forceStringNoCtx(*attr.value, attr.pos, "while evaluating the name of the content we should fetch");
else
@ -252,7 +252,7 @@ static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v
state.error<EvalError>("in pure evaluation mode, '%s' requires a 'sha256' argument", who).atPos(pos).debugThrow();
// early exit if pinned and already in the store
if (expectedHash && expectedHash->type == htSHA256) {
if (expectedHash && expectedHash->type == HashType::SHA256) {
auto expectedPath = state.store->makeFixedOutputPath(
name,
FixedOutputInfo {
@ -277,13 +277,13 @@ static void fetch(EvalState & state, const PosIdx pos, Value * * args, Value & v
if (expectedHash) {
auto hash = unpack
? state.store->queryPathInfo(storePath)->narHash
: hashFile(htSHA256, state.store->toRealPath(storePath));
: hashFile(HashType::SHA256, state.store->toRealPath(storePath));
if (hash != *expectedHash) {
state.error<EvalError>(
"hash mismatch in file downloaded from '%s':\n specified: %s\n got: %s",
*url,
expectedHash->to_string(Base32, true),
hash.to_string(Base32, true)
expectedHash->to_string(Base::Base32, true),
hash.to_string(Base::Base32, true)
).withExitStatus(102)
.debugThrow();
}
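`Base16`/`Base32`/`SRI` get the same scoped-enum treatment as `HashType`. The call shapes, as used in the hunks above (the variable is illustrative):

```
// Assuming a `Hash h` and the Base scoping used above:
std::string sri = h.to_string(Base::SRI, true);     // "sha256-<base64>"
std::string b32 = h.to_string(Base::Base32, true);  // "sha256:<base32>"
std::string hex = h.to_string(Base::Base16, false); // bare hex digest
```

For the non-SRI bases, the boolean appears to select whether the `<type>:` prefix is included.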

src/libexpr/value.cc (new file, 106 lines)
View file

@ -0,0 +1,106 @@
#include "value.hh"
#include <ostream>
#include "eval.hh"
#include "print.hh"
namespace nix
{
static void copyContextToValue(Value & v, const NixStringContext & context)
{
if (!context.empty()) {
size_t n = 0;
v.string.context = gcAllocType<char const *>(context.size() + 1);
for (auto & i : context)
v.string.context[n++] = gcCopyStringIfNeeded(i.to_string());
v.string.context[n] = 0;
}
}
Value::Value(primop_t, PrimOp & primop)
: internalType(tPrimOp)
, primOp(&primop)
, _primop_pad(0)
{
primop.check();
}
void Value::print(EvalState & state, std::ostream & str, PrintOptions options)
{
printValue(state, str, *this, options);
}
PosIdx Value::determinePos(const PosIdx pos) const
{
// Allow selecting a subset of enum values
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wswitch-enum"
switch (internalType) {
case tAttrs: return attrs->pos;
case tLambda: return lambda.fun->pos;
case tApp: return app.left->determinePos(pos);
default: return pos;
}
#pragma GCC diagnostic pop
}
bool Value::isTrivial() const
{
return
internalType != tApp
&& internalType != tPrimOpApp
&& (internalType != tThunk
|| (dynamic_cast<ExprAttrs *>(thunk.expr)
&& static_cast<ExprAttrs *>(thunk.expr)->dynamicAttrs.empty())
|| dynamic_cast<ExprLambda *>(thunk.expr)
|| dynamic_cast<ExprList *>(thunk.expr));
}
PrimOp * Value::primOpAppPrimOp() const
{
Value * left = primOpApp.left;
while (left && !left->isPrimOp()) {
left = left->primOpApp.left;
}
if (!left)
return nullptr;
return left->primOp;
}
void Value::mkPrimOp(PrimOp * p)
{
p->check();
clearValue();
internalType = tPrimOp;
primOp = p;
}
void Value::mkString(std::string_view s)
{
mkString(gcCopyStringIfNeeded(s));
}
void Value::mkString(std::string_view s, const NixStringContext & context)
{
mkString(s);
copyContextToValue(*this, context);
}
void Value::mkStringMove(const char * s, const NixStringContext & context)
{
mkString(s);
copyContextToValue(*this, context);
}
void Value::mkPath(const SourcePath & path)
{
mkPath(gcCopyStringIfNeeded(path.path.abs()));
}
}
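`primOpAppPrimOp` walks the left spine of a partial application to find the underlying primop. A sketch of the shape it traverses (illustrative):

```
// A primop of arity 3 applied to two arguments, one at a time, builds:
//   PrimOpApp( PrimOpApp( PrimOp(f), a ), b )
// primOpAppPrimOp() follows .left until it reaches the PrimOp node,
// returning nullptr if the chain is malformed.
```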

View file

@ -3,6 +3,9 @@
#include <cassert>
#include <climits>
#include <functional>
#include <ranges>
#include <span>
#include "gc-alloc.hh"
#include "symbol-table.hh"
@ -11,6 +14,7 @@
#include "source-path.hh"
#include "print-options.hh"
#include "checked-arithmetic.hh"
#include "concepts.hh"
#include <nlohmann/json_fwd.hpp>
@ -132,6 +136,55 @@ class ExternalValueBase
std::ostream & operator << (std::ostream & str, const ExternalValueBase & v);
extern ExprBlackHole eBlackHole;
struct NewValueAs
{
struct integer_t { };
constexpr static integer_t integer{};
struct floating_t { };
constexpr static floating_t floating{};
struct boolean_t { };
constexpr static boolean_t boolean{};
struct string_t { };
constexpr static string_t string{};
struct path_t { };
constexpr static path_t path{};
struct list_t { };
constexpr static list_t list{};
struct attrs_t { };
constexpr static attrs_t attrs{};
struct thunk_t { };
constexpr static thunk_t thunk{};
struct null_t { };
constexpr static null_t null{};
struct app_t { };
constexpr static app_t app{};
struct primop_t { };
constexpr static primop_t primop{};
struct primOpApp_t { };
constexpr static primOpApp_t primOpApp{};
struct lambda_t { };
constexpr static lambda_t lambda{};
struct external_t { };
constexpr static external_t external{};
struct blackhole_t { };
constexpr static blackhole_t blackhole{};
};
struct Value
{
@ -142,6 +195,315 @@ private:
public:
// Discount `using NewValueAs::*;`
#define USING_VALUETYPE(name) using name = NewValueAs::name
USING_VALUETYPE(integer_t);
USING_VALUETYPE(floating_t);
USING_VALUETYPE(boolean_t);
USING_VALUETYPE(string_t);
USING_VALUETYPE(path_t);
USING_VALUETYPE(list_t);
USING_VALUETYPE(attrs_t);
USING_VALUETYPE(thunk_t);
USING_VALUETYPE(primop_t);
USING_VALUETYPE(app_t);
USING_VALUETYPE(null_t);
USING_VALUETYPE(primOpApp_t);
USING_VALUETYPE(lambda_t);
USING_VALUETYPE(external_t);
USING_VALUETYPE(blackhole_t);
#undef USING_VALUETYPE
/// Default constructor which is still used in the codebase but should not
/// be used in new code. Zero initializes its members.
[[deprecated]] Value()
: internalType(static_cast<InternalType>(0))
, _empty{ 0, 0 }
{ }
/// Constructs a nix language value of type "int", with the integral value
/// of @ref i.
Value(integer_t, NixInt i)
: internalType(tInt)
, _empty{ 0, 0 }
{
// the NixInt ctor here is special because NixInt has a ctor too, so
// we're not allowed to have it as an anonymous aggregate member. We do,
// however, still have the option to clear the data members using _empty,
// leaving the second word of data cleared by setting only integer.
integer = i;
}
/// Constructs a nix language value of type "float", with the floating
/// point value of @ref f.
Value(floating_t, NixFloat f)
: internalType(tFloat)
, fpoint(f)
, _float_pad(0)
{ }
/// Constructs a nix language value of type "bool", with the boolean
/// value of @ref b.
Value(boolean_t, bool b)
: internalType(tBool)
, boolean(b)
, _bool_pad(0)
{ }
/// Constructs a nix language value of type "string", with the value of the
/// C-string pointed to by @ref strPtr, and optionally with an array of
/// string context pointed to by @ref contextPtr.
///
/// Neither the C-string nor the context array is copied; this constructor
/// assumes suitable memory has already been allocated (with the GC if
/// enabled), and string and context data copied into that memory.
Value(string_t, char const * strPtr, char const ** contextPtr = nullptr)
: internalType(tString)
, string({ .s = strPtr, .context = contextPtr })
{ }
/// Constructs a nix language value of type "string", with a copy of the
/// string data viewed by @ref copyFrom.
///
/// The string data *is* copied from @ref copyFrom, and this constructor
/// performs a dynamic (GC) allocation to do so.
Value(string_t, std::string_view copyFrom, NixStringContext const & context = {})
: internalType(tString)
, string({ .s = gcCopyStringIfNeeded(copyFrom), .context = nullptr })
{
if (context.empty()) {
// It stays nullptr.
return;
}
// Copy the context.
this->string.context = gcAllocType<char const *>(context.size() + 1);
size_t n = 0;
for (NixStringContextElem const & contextElem : context) {
this->string.context[n] = gcCopyStringIfNeeded(contextElem.to_string());
n += 1;
}
// Terminator sentinel.
this->string.context[n] = nullptr;
}
/// Constructs a nix language value of type "string", with the value of the
/// C-string pointed to by @ref strPtr, and optionally with a set of string
/// context @ref context.
///
/// The C-string is not copied; this constructor assumes suitable memory
/// has already been allocated (with the GC if enabled), and string data
/// has been copied into that memory. The context data *is* copied from
/// @ref context, and this constructor performs a dynamic (GC) allocation
/// to do so.
Value(string_t, char const * strPtr, NixStringContext const & context)
: internalType(tString)
, string({ .s = strPtr, .context = nullptr })
{
if (context.empty()) {
// It stays nullptr
return;
}
// Copy the context.
this->string.context = gcAllocType<char const *>(context.size() + 1);
size_t n = 0;
for (NixStringContextElem const & contextElem : context) {
this->string.context[n] = gcCopyStringIfNeeded(contextElem.to_string());
n += 1;
}
// Terminator sentinel.
this->string.context[n] = nullptr;
}
/// Constructs a nix language value of type "path", with the value of the
/// C-string pointed to by @ref strPtr.
///
/// The C-string is not copied; this constructor assumes suitable memory
/// has already been allocated (with the GC if enabled), and string data
/// has been copied into that memory.
Value(path_t, char const * strPtr)
: internalType(tPath)
, _path(strPtr)
, _path_pad(0)
{ }
/// Constructs a nix language value of type "path", with the path
/// @ref path.
///
/// The data from @ref path *is* copied, and this constructor performs a
/// dynamic (GC) allocation to do so.
Value(path_t, SourcePath const & path)
: internalType(tPath)
, _path(gcCopyStringIfNeeded(path.path.abs()))
, _path_pad(0)
{ }
/// Constructs a nix language value of type "list", with element array
/// @ref items.
///
/// Generally, the data in @ref items is neither deep copied nor shallow
/// copied. This constructor assumes the std::span @ref items is a region of
/// memory that has already been allocated (with the GC if enabled), and
/// an array of valid Value pointers has been copied into that memory.
///
/// However, as an implementation detail, if @ref items holds 2 items or
/// fewer, the list is stored inline, and the Value pointers in
/// @ref items are shallow copied into this structure, without dynamically
/// allocating memory.
Value(list_t, std::span<Value *> items)
{
if (items.size() == 1) {
this->internalType = tList1;
this->smallList[0] = items[0];
this->smallList[1] = nullptr;
} else if (items.size() == 2) {
this->internalType = tList2;
this->smallList[0] = items[0];
this->smallList[1] = items[1];
} else {
this->internalType = tListN;
this->bigList.size = items.size();
this->bigList.elems = items.data();
}
}
/// Constructs a nix language value of type "list", with an element array
/// initialized by applying @ref transformer to each element in @ref items.
///
/// This allows "in-place" construction of a nix list when some logic is
/// needed to get each Value pointer. This constructor dynamically (GC)
/// allocates memory for the size of @ref items, and the Value pointers
/// returned by @ref transformer are shallow copied into it.
template<
std::ranges::sized_range SizedIterableT,
InvocableR<Value *, typename SizedIterableT::value_type const &> TransformerT
>
Value(list_t, SizedIterableT & items, TransformerT const & transformer)
{
if (items.size() == 1) {
this->internalType = tList1;
this->smallList[0] = transformer(*items.begin());
this->smallList[1] = nullptr;
} else if (items.size() == 2) {
this->internalType = tList2;
auto it = items.begin();
this->smallList[0] = transformer(*it);
it++;
this->smallList[1] = transformer(*it);
} else {
this->internalType = tListN;
this->bigList.size = items.size();
this->bigList.elems = gcAllocType<Value *>(items.size());
auto it = items.begin();
for (size_t i = 0; i < items.size(); i++, it++) {
this->bigList.elems[i] = transformer(*it);
}
}
}
/// Constructs a nix language value of the singleton type "null".
Value(null_t)
: internalType(tNull)
, _empty{0, 0}
{ }
/// Constructs a nix language value of type "set", with the attribute
/// bindings pointed to by @ref bindings.
///
/// The bindings are not copied; this constructor assumes @ref bindings
/// has already been suitably allocated by something like nix::buildBindings.
Value(attrs_t, Bindings * bindings)
: internalType(tAttrs)
, attrs(bindings)
, _attrs_pad(0)
{ }
/// Constructs a nix language lazy delayed computation, or "thunk".
///
/// The thunk stores the environment it will be computed in @ref env, and
/// the expression that will need to be evaluated @ref expr.
Value(thunk_t, Env & env, Expr & expr)
: internalType(tThunk)
, thunk({ .env = &env, .expr = &expr })
{ }
/// Constructs a nix language value of type "lambda", which represents
/// a builtin, primitive operation ("primop"), from the primop
/// implemented by @ref primop.
Value(primop_t, PrimOp & primop);
/// Constructs a nix language value of type "lambda", which represents a
/// partially applied primop.
Value(primOpApp_t, Value & lhs, Value & rhs)
: internalType(tPrimOpApp)
, primOpApp({ .left = &lhs, .right = &rhs })
{ }
/// Constructs a nix language value of type "lambda", which represents a
/// lazy partial application of another lambda.
Value(app_t, Value & lhs, Value & rhs)
: internalType(tApp)
, app({ .left = &lhs, .right = &rhs })
{ }
/// Constructs a nix language value of type "external", which is only used
/// by plugins. Do any existing plugins even use this mechanism?
Value(external_t, ExternalValueBase & external)
: internalType(tExternal)
, external(&external)
, _external_pad(0)
{ }
/// Constructs a nix language value of type "lambda", which represents a
/// run of the mill lambda defined in nix code.
///
/// This takes the environment the lambda is closed over @ref env, and
/// the lambda expression itself @ref lambda, which will not be evaluated
/// until it is applied.
Value(lambda_t, Env & env, ExprLambda & lambda)
: internalType(tLambda)
, lambda({ .env = &env, .fun = &lambda })
{ }
/// Constructs an evil thunk, whose evaluation represents infinite recursion.
explicit Value(blackhole_t)
: internalType(tThunk)
, thunk({ .env = nullptr, .expr = reinterpret_cast<Expr *>(&eBlackHole) })
{ }
Value(Value const & rhs) = default;
/// Move constructor. Does the same thing as the copy constructor, but
/// also zeroes out the other Value.
Value(Value && rhs)
: internalType(rhs.internalType)
, _empty{ 0, 0 }
{
*this = std::move(rhs);
}
Value & operator=(Value const & rhs) = default;
/// Move assignment operator.
/// Does the same thing as the copy assignment operator, but also zeroes out
/// the rhs.
inline Value & operator=(Value && rhs)
{
*this = static_cast<const Value &>(rhs);
if (this != &rhs) {
// Kill `rhs`, because non-destructive move lol.
rhs.internalType = static_cast<InternalType>(0);
rhs._empty[0] = 0;
rhs._empty[1] = 0;
}
return *this;
}
void print(EvalState &state, std::ostream &str, PrintOptions options = PrintOptions {});
// Functions needed to distinguish the type
@ -160,8 +522,15 @@ public:
union
{
/// Dummy field, which takes up as much space as the largest union variant
/// so that the union's memory can be fully zero-initialized.
uintptr_t _empty[2];
NixInt integer;
bool boolean;
struct {
bool boolean;
uintptr_t _bool_pad;
};
/**
* Strings in the evaluator carry a so-called `context` which
@ -190,8 +559,14 @@ public:
const char * * context; // must be in sorted order
} string;
const char * _path;
Bindings * attrs;
struct {
const char * _path;
uintptr_t _path_pad;
};
struct {
Bindings * attrs;
uintptr_t _attrs_pad;
};
struct {
size_t size;
Value * * elems;
@ -208,12 +583,21 @@ public:
Env * env;
ExprLambda * fun;
} lambda;
PrimOp * primOp;
struct {
PrimOp * primOp;
uintptr_t _primop_pad;
};
struct {
Value * left, * right;
} primOpApp;
ExternalValueBase * external;
NixFloat fpoint;
struct {
ExternalValueBase * external;
uintptr_t _external_pad;
};
struct {
NixFloat fpoint;
uintptr_t _float_pad;
};
};
/**
@ -449,8 +833,6 @@ public:
};
extern ExprBlackHole eBlackHole;
bool Value::isBlackhole() const
{
return internalType == tThunk && thunk.expr == (Expr*) &eBlackHole;
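A hypothetical use of the tagged constructors introduced in this header. This is a sketch only: it assumes `NixInt` is constructible from an integer literal and that the GC (if enabled) has been initialized; note that the `char const *` string overload deliberately does not copy, so a copying construction must go through `std::string_view`:

```
#include "value.hh"

#include <string_view>

using namespace nix;

void sketch()
{
    Value vInt(NewValueAs::integer, NixInt(42));
    Value vBool(NewValueAs::boolean, true);
    // string_view overload: GC-copies the character data.
    Value vStr(NewValueAs::string, std::string_view("hello"));
    Value vNull(NewValueAs::null);
}
```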

View file

@ -18,8 +18,8 @@ StorePath fetchToStore(
return
settings.readOnlyMode
? store.computeStorePathForPath(name, path.path.abs(), method, htSHA256, filter2).first
: store.addToStore(name, path.path.abs(), method, htSHA256, filter2, repair);
? store.computeStorePathForPath(name, path.path.abs(), method, HashType::SHA256, filter2).first
: store.addToStore(name, path.path.abs(), method, HashType::SHA256, filter2, repair);
}

View file

@ -153,12 +153,12 @@ std::pair<Tree, Input> Input::fetch(ref<Store> store) const
};
auto narHash = store->queryPathInfo(tree.storePath)->narHash;
input.attrs.insert_or_assign("narHash", narHash.to_string(SRI, true));
input.attrs.insert_or_assign("narHash", narHash.to_string(Base::SRI, true));
if (auto prevNarHash = getNarHash()) {
if (narHash != *prevNarHash)
throw Error((unsigned int) 102, "NAR hash mismatch in input '%s' (%s), expected '%s', got '%s'",
to_string(), tree.actualPath, prevNarHash->to_string(SRI, true), narHash.to_string(SRI, true));
to_string(), tree.actualPath, prevNarHash->to_string(Base::SRI, true), narHash.to_string(Base::SRI, true));
}
if (auto prevLastModified = getLastModified()) {
@ -240,8 +240,8 @@ std::string Input::getType() const
std::optional<Hash> Input::getNarHash() const
{
if (auto s = maybeGetStrAttr(attrs, "narHash")) {
auto hash = s->empty() ? Hash(htSHA256) : Hash::parseSRI(*s);
if (hash.type != htSHA256)
auto hash = s->empty() ? Hash(HashType::SHA256) : Hash::parseSRI(*s);
if (hash.type != HashType::SHA256)
throw UsageError("narHash must use SHA-256");
return hash;
}
@ -264,7 +264,7 @@ std::optional<Hash> Input::getRev() const
hash = Hash::parseAnyPrefixed(*s);
} catch (BadHash &e) {
// Default to sha1 for backwards compatibility with existing flakes
hash = Hash::parseAny(*s, htSHA1);
hash = Hash::parseAny(*s, HashType::SHA1);
}
}

View file

@ -159,6 +159,37 @@ struct InputScheme
std::optional<std::string> commitMsg) const;
virtual std::pair<StorePath, Input> fetch(ref<Store> store, const Input & input) = 0;
protected:
void emplaceURLQueryIntoAttrs(
const ParsedURL & parsedURL,
Attrs & attrs,
const StringSet & numericParams,
const StringSet & booleanParams) const
{
for (auto &[name, value] : parsedURL.query) {
if (name == "url") {
throw BadURL(
"URL '%s' must not override url via query param!",
parsedURL.to_string()
);
} else if (numericParams.count(name) != 0) {
if (auto n = string2Int<uint64_t>(value)) {
attrs.insert_or_assign(name, *n);
} else {
throw BadURL(
"URL '%s' has non-numeric parameter '%s'",
parsedURL.to_string(),
name
);
}
} else if (booleanParams.count(name) != 0) {
attrs.emplace(name, Explicit<bool> { value == "1" });
} else {
attrs.emplace(name, value);
}
}
}
};
void registerInputScheme(std::shared_ptr<InputScheme> && fetcher);
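A worked example for the new helper (hypothetical URL), using the parameter sets the git scheme below passes, `{"lastModified", "revCount"}` and `{"shallow", "submodules", "allRefs"}`:

```
// git+https://example.org/repo.git?shallow=1&revCount=7&ref=main
//   shallow  -> Explicit<bool>{true}  (boolean param)
//   revCount -> uint64_t 7            (numeric param; non-numeric throws)
//   ref      -> "main"                (any other param is copied verbatim)
// and a `url` query parameter throws BadURL outright.
```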

View file

@ -49,7 +49,7 @@ bool touchCacheFile(const Path & path, time_t touch_time)
Path getCachePath(std::string_view key)
{
return getCacheDir() + "/nix/gitv3/" +
hashString(htSHA256, key).to_string(Base32, false);
hashString(HashType::SHA256, key).to_string(Base::Base32, false);
}
// Returns the name of the HEAD branch.
@ -238,7 +238,7 @@ std::pair<StorePath, Input> fetchFromWorkdir(ref<Store> store, Input & input, co
return files.count(file);
};
auto storePath = store->addToStore(input.getName(), actualPath, FileIngestionMethod::Recursive, htSHA256, filter);
auto storePath = store->addToStore(input.getName(), actualPath, FileIngestionMethod::Recursive, HashType::SHA256, filter);
// FIXME: maybe we should use the timestamp of the last
// modified dirty file?
@ -273,18 +273,15 @@ struct GitInputScheme : InputScheme
Attrs attrs;
attrs.emplace("type", "git");
for (auto & [name, value] : url.query) {
if (name == "rev" || name == "ref")
attrs.emplace(name, value);
else if (name == "shallow" || name == "submodules" || name == "allRefs")
attrs.emplace(name, Explicit<bool> { value == "1" });
else
url2.query.emplace(name, value);
}
attrs.emplace("url", url2.to_string());
emplaceURLQueryIntoAttrs(
url,
attrs,
{"lastModified", "revCount"},
{"shallow", "submodules", "allRefs"}
);
return inputFromAttrs(attrs);
}
@ -440,8 +437,8 @@ struct GitInputScheme : InputScheme
auto checkHashType = [&](const std::optional<Hash> & hash)
{
if (hash.has_value() && !(hash->type == htSHA1 || hash->type == htSHA256))
throw Error("Hash '%s' is not supported by Git. Supported types are sha1 and sha256.", hash->to_string(Base16, true));
if (hash.has_value() && !(hash->type == HashType::SHA1 || hash->type == HashType::SHA256))
throw Error("Hash '%s' is not supported by Git. Supported types are sha1 and sha256.", hash->to_string(Base::Base16, true));
};
auto getLockedAttrs = [&]()
@ -504,7 +501,7 @@ struct GitInputScheme : InputScheme
if (!input.getRev())
input.attrs.insert_or_assign("rev",
Hash::parseAny(chomp(runProgram("git", true, { "-C", actualUrl, "--git-dir", gitDir, "rev-parse", *input.getRef() })), htSHA1).gitRev());
Hash::parseAny(chomp(runProgram("git", true, { "-C", actualUrl, "--git-dir", gitDir, "rev-parse", *input.getRef() })), HashType::SHA1).gitRev());
repoDir = actualUrl;
} else {
@ -524,7 +521,7 @@ struct GitInputScheme : InputScheme
}
if (auto res = getCache()->lookup(store, unlockedAttrs)) {
auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), htSHA1);
auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), HashType::SHA1);
if (!input.getRev() || input.getRev() == rev2) {
input.attrs.insert_or_assign("rev", rev2.gitRev());
return makeResult(res->first, std::move(res->second));
@ -602,7 +599,7 @@ struct GitInputScheme : InputScheme
}
if (!input.getRev())
input.attrs.insert_or_assign("rev", Hash::parseAny(chomp(readFile(localRefFile)), htSHA1).gitRev());
input.attrs.insert_or_assign("rev", Hash::parseAny(chomp(readFile(localRefFile)), HashType::SHA1).gitRev());
// cache dir lock is removed at scope end; we will only use read-only operations on specific revisions in the remainder
}
@ -698,7 +695,7 @@ struct GitInputScheme : InputScheme
unpackTarfile(*proc.getStdout(), tmpDir);
}
auto storePath = store->addToStore(name, tmpDir, FileIngestionMethod::Recursive, htSHA256, filter);
auto storePath = store->addToStore(name, tmpDir, FileIngestionMethod::Recursive, HashType::SHA256, filter);
auto lastModified = std::stoull(runProgram("git", true, { "-C", repoDir, "--git-dir", gitDir, "log", "-1", "--format=%ct", "--no-show-signature", input.getRev()->gitRev() }));

View file

@ -1,3 +1,4 @@
#include "attrs.hh"
#include "filetransfer.hh"
#include "cache.hh"
#include "globals.hh"
@ -36,18 +37,11 @@ struct GitArchiveInputScheme : InputScheme
auto path = tokenizeString<std::vector<std::string>>(url.path, "/");
std::optional<Hash> rev;
std::optional<std::string> ref;
std::optional<std::string> host_url;
std::optional<std::string> refOrRev;
auto size = path.size();
if (size == 3) {
if (std::regex_match(path[2], revRegex))
rev = Hash::parseAny(path[2], htSHA1);
else if (std::regex_match(path[2], refRegex))
ref = path[2];
else
throw BadURL("in URL '%s', '%s' is not a commit hash or branch/tag name", url.url, path[2]);
refOrRev = path[2];
} else if (size > 3) {
std::string rs;
for (auto i = std::next(path.begin(), 2); i != path.end(); i++) {
@ -58,61 +52,91 @@ struct GitArchiveInputScheme : InputScheme
}
if (std::regex_match(rs, refRegex)) {
ref = rs;
refOrRev = rs;
} else {
throw BadURL("in URL '%s', '%s' is not a branch/tag name", url.url, rs);
}
} else if (size < 2)
throw BadURL("URL '%s' is invalid", url.url);
Attrs attrs;
attrs.emplace("type", type());
attrs.emplace("owner", path[0]);
attrs.emplace("repo", path[1]);
for (auto &[name, value] : url.query) {
if (name == "rev") {
if (rev)
throw BadURL("URL '%s' contains multiple commit hashes", url.url);
rev = Hash::parseAny(value, htSHA1);
if (name == "rev" || name == "ref") {
if (refOrRev) {
throw BadURL("URL '%s' already contains a ref or rev", url.url);
} else {
refOrRev = value;
}
} else if (name == "lastModified") {
if (auto n = string2Int<uint64_t>(value)) {
attrs.emplace(name, *n);
} else {
throw Error(
"Attribute 'lastModified' in URL '%s' must be an integer",
url.to_string()
);
}
} else {
attrs.emplace(name, value);
}
else if (name == "ref") {
if (!std::regex_match(value, refRegex))
throw BadURL("URL '%s' contains an invalid branch/tag name", url.url);
if (ref)
throw BadURL("URL '%s' contains multiple branch/tag names", url.url);
ref = value;
}
else if (name == "host") {
if (!std::regex_match(value, hostRegex))
throw BadURL("URL '%s' contains an invalid instance host", url.url);
host_url = value;
}
// FIXME: barf on unsupported attributes
}
if (ref && rev)
throw BadURL("URL '%s' contains both a commit hash and a branch/tag name %s %s", url.url, *ref, rev->gitRev());
if (refOrRev) attrs.emplace("refOrRev", *refOrRev);
Input input;
input.attrs.insert_or_assign("type", type());
input.attrs.insert_or_assign("owner", path[0]);
input.attrs.insert_or_assign("repo", path[1]);
if (rev) input.attrs.insert_or_assign("rev", rev->gitRev());
if (ref) input.attrs.insert_or_assign("ref", *ref);
if (host_url) input.attrs.insert_or_assign("host", *host_url);
return input;
return inputFromAttrs(attrs);
}
std::optional<Input> inputFromAttrs(const Attrs & attrs) const override
{
if (maybeGetStrAttr(attrs, "type") != type()) return {};
// Attributes may contain a refOrRev entry, and we need to figure out
// which of the two it is (see inputFromURL for when that can happen).
// The resolved attribute (ref or rev) is written into finalAttrs,
// which therefore needs to be mutable.
Attrs finalAttrs(attrs);
auto type_ = maybeGetStrAttr(finalAttrs, "type");
if (type_ != type()) return {};
for (auto & [name, value] : attrs)
if (name != "type" && name != "owner" && name != "repo" && name != "ref" && name != "rev" && name != "narHash" && name != "lastModified" && name != "host")
auto owner = getStrAttr(finalAttrs, "owner");
auto repo = getStrAttr(finalAttrs, "repo");
auto url = fmt("%s:%s/%s", *type_, owner, repo);
if (auto host = maybeGetStrAttr(finalAttrs, "host")) {
if (!std::regex_match(*host, hostRegex)) {
throw BadURL("URL '%s' contains an invalid instance host", url);
}
}
if (auto refOrRev = maybeGetStrAttr(finalAttrs, "refOrRev")) {
finalAttrs.erase("refOrRev");
if (std::regex_match(*refOrRev, revRegex)) {
finalAttrs.emplace("rev", *refOrRev);
} else if (std::regex_match(*refOrRev, refRegex)) {
finalAttrs.emplace("ref", *refOrRev);
} else {
throw Error(
"in URL '%s', '%s' is not a commit hash or a branch/tag name",
url,
*refOrRev
);
}
} else if (auto ref = maybeGetStrAttr(finalAttrs, "ref")) {
if (!std::regex_match(*ref, refRegex)) {
throw BadURL("URL '%s' contains an invalid branch/tag name", url);
}
}
for (auto & [name, value] : finalAttrs) {
if (name != "type" && name != "owner" && name != "repo" && name != "ref" && name != "rev" && name != "narHash" && name != "lastModified" && name != "host") {
throw Error("unsupported input attribute '%s'", name);
getStrAttr(attrs, "owner");
getStrAttr(attrs, "repo");
}
}
Input input;
input.attrs = attrs;
input.attrs = finalAttrs;
return input;
}
@ -125,7 +149,7 @@ struct GitArchiveInputScheme : InputScheme
auto path = owner + "/" + repo;
assert(!(ref && rev));
if (ref) path += "/" + *ref;
if (rev) path += "/" + rev->to_string(Base16, false);
if (rev) path += "/" + rev->to_string(Base::Base16, false);
return ParsedURL {
.scheme = type(),
.path = path,
@ -250,7 +274,7 @@ struct GitHubInputScheme : GitArchiveInputScheme
readFile(
store->toRealPath(
downloadFile(store, url, "source", false, headers).storePath)));
auto rev = Hash::parseAny(std::string { json["sha"] }, htSHA1);
auto rev = Hash::parseAny(std::string { json["sha"] }, HashType::SHA1);
debug("HEAD revision for '%s' is %s", url, rev.gitRev());
return rev;
}
@ -271,7 +295,7 @@ struct GitHubInputScheme : GitArchiveInputScheme
: "https://api.%s/repos/%s/%s/tarball/%s";
const auto url = fmt(urlFmt, host, getOwner(input), getRepo(input),
input.getRev()->to_string(Base16, false));
input.getRev()->to_string(Base::Base16, false));
return DownloadUrl { url, headers };
}
@ -323,7 +347,7 @@ struct GitLabInputScheme : GitArchiveInputScheme
store->toRealPath(
downloadFile(store, url, "source", false, headers).storePath)));
if (json.is_array() && json.size() >= 1 && json[0]["id"] != nullptr) {
auto rev = Hash::parseAny(std::string(json[0]["id"]), htSHA1);
auto rev = Hash::parseAny(std::string(json[0]["id"]), HashType::SHA1);
debug("HEAD revision for '%s' is %s", url, rev.gitRev());
return rev;
} else if (json.is_array() && json.size() == 0) {
@ -343,7 +367,7 @@ struct GitLabInputScheme : GitArchiveInputScheme
auto host = maybeGetStrAttr(input.attrs, "host").value_or("gitlab.com");
auto url = fmt("https://%s/api/v4/projects/%s%%2F%s/repository/archive.tar.gz?sha=%s",
host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"),
input.getRev()->to_string(Base16, false));
input.getRev()->to_string(Base::Base16, false));
Headers headers = makeHeadersWithAuthTokens(host);
return DownloadUrl { url, headers };
@ -420,7 +444,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme
if(!id)
throw BadURL("in '%d', couldn't find ref '%d'", input.to_string(), ref);
auto rev = Hash::parseAny(*id, htSHA1);
auto rev = Hash::parseAny(*id, HashType::SHA1);
debug("HEAD revision for '%s' is %s", fmt("%s/%s", base_url, ref), rev.gitRev());
return rev;
}
@ -430,7 +454,7 @@ struct SourceHutInputScheme : GitArchiveInputScheme
auto host = maybeGetStrAttr(input.attrs, "host").value_or("git.sr.ht");
auto url = fmt("https://%s/%s/%s/archive/%s.tar.gz",
host, getStrAttr(input.attrs, "owner"), getStrAttr(input.attrs, "repo"),
input.getRev()->to_string(Base16, false));
input.getRev()->to_string(Base::Base16, false));
Headers headers = makeHeadersWithAuthTokens(host);
return DownloadUrl { url, headers };
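The net effect of the refOrRev deferral: `inputFromURL` records the ambiguous component and `inputFromAttrs` classifies it, so URL and attrset inputs share the same validation. For example (owner/repo are illustrative):

```
// github:NixOS/nixpkgs/nixos-24.05
//   refOrRev = "nixos-24.05"  -> matches refRegex -> becomes `ref`
// github:NixOS/nixpkgs/0123456789abcdef0123456789abcdef01234567
//   refOrRev = 40 hex chars   -> matches revRegex -> becomes `rev`
// anything matching neither regex throws.
```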

View file

@ -17,38 +17,32 @@ struct IndirectInputScheme : InputScheme
std::optional<Hash> rev;
std::optional<std::string> ref;
Attrs attrs;
if (path.size() == 1) {
} else if (path.size() == 2) {
if (std::regex_match(path[1], revRegex))
rev = Hash::parseAny(path[1], htSHA1);
rev = Hash::parseAny(path[1], HashType::SHA1);
else if (std::regex_match(path[1], refRegex))
ref = path[1];
else
throw BadURL("in flake URL '%s', '%s' is not a commit hash or branch/tag name", url.url, path[1]);
} else if (path.size() == 3) {
if (!std::regex_match(path[1], refRegex))
throw BadURL("in flake URL '%s', '%s' is not a branch/tag name", url.url, path[1]);
ref = path[1];
if (!std::regex_match(path[2], revRegex))
throw BadURL("in flake URL '%s', '%s' is not a commit hash", url.url, path[2]);
rev = Hash::parseAny(path[2], htSHA1);
rev = Hash::parseAny(path[2], HashType::SHA1);
} else
throw BadURL("GitHub URL '%s' is invalid", url.url);
std::string id = path[0];
if (!std::regex_match(id, flakeRegex))
throw BadURL("'%s' is not a valid flake ID", id);
// FIXME: forbid query params?
attrs.emplace("type", "indirect");
attrs.emplace("id", id);
if (rev) attrs.emplace("rev", rev->gitRev());
if (ref) attrs.emplace("ref", *ref);
Input input;
input.direct = false;
input.attrs.insert_or_assign("type", "indirect");
input.attrs.insert_or_assign("id", id);
if (rev) input.attrs.insert_or_assign("rev", rev->gitRev());
if (ref) input.attrs.insert_or_assign("ref", *ref);
emplaceURLQueryIntoAttrs(url, attrs, {}, {});
return input;
return inputFromAttrs(attrs);
}
std::optional<Input> inputFromAttrs(const Attrs & attrs) const override
@ -63,6 +57,18 @@ struct IndirectInputScheme : InputScheme
if (!std::regex_match(id, flakeRegex))
throw BadURL("'%s' is not a valid flake ID", id);
// TODO come up with a nicer error message for those two.
if (auto rev = maybeGetStrAttr(attrs, "rev")) {
if (!std::regex_match(*rev, revRegex)) {
throw BadURL("in flake '%s', '%s' is not a commit hash", id, *rev);
}
}
if (auto ref = maybeGetStrAttr(attrs, "ref")) {
if (!std::regex_match(*ref, refRegex)) {
throw BadURL("in flake '%s', '%s' is not a valid branch/tag name", id, *ref);
}
}
Input input;
input.direct = false;
input.attrs = attrs;

View file

@ -56,12 +56,7 @@ struct MercurialInputScheme : InputScheme
Attrs attrs;
attrs.emplace("type", "hg");
for (auto &[name, value] : url.query) {
if (name == "rev" || name == "ref")
attrs.emplace(name, value);
else
url2.query.emplace(name, value);
}
emplaceURLQueryIntoAttrs(url, attrs, {"revCount"}, {});
attrs.emplace("url", url2.to_string());
@ -203,7 +198,7 @@ struct MercurialInputScheme : InputScheme
return files.count(file);
};
auto storePath = store->addToStore(input.getName(), actualPath, FileIngestionMethod::Recursive, htSHA256, filter);
auto storePath = store->addToStore(input.getName(), actualPath, FileIngestionMethod::Recursive, HashType::SHA256, filter);
return {std::move(storePath), input};
}
@ -213,8 +208,8 @@ struct MercurialInputScheme : InputScheme
auto checkHashType = [&](const std::optional<Hash> & hash)
{
if (hash.has_value() && hash->type != htSHA1)
throw Error("Hash '%s' is not supported by Mercurial. Only sha1 is supported.", hash->to_string(Base16, true));
if (hash.has_value() && hash->type != HashType::SHA1)
throw Error("Hash '%s' is not supported by Mercurial. Only sha1 is supported.", hash->to_string(Base::Base16, true));
};
@ -253,14 +248,14 @@ struct MercurialInputScheme : InputScheme
});
if (auto res = getCache()->lookup(store, unlockedAttrs)) {
auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), htSHA1);
auto rev2 = Hash::parseAny(getStrAttr(res->first, "rev"), HashType::SHA1);
if (!input.getRev() || input.getRev() == rev2) {
input.attrs.insert_or_assign("rev", rev2.gitRev());
return makeResult(res->first, std::move(res->second));
}
}
Path cacheDir = fmt("%s/nix/hg/%s", getCacheDir(), hashString(htSHA256, actualUrl).to_string(Base32, false));
Path cacheDir = fmt("%s/nix/hg/%s", getCacheDir(), hashString(HashType::SHA256, actualUrl).to_string(Base::Base32, false));
/* If this is a commit hash that we already have, we don't
have to pull again. */
@@ -294,7 +289,7 @@ struct MercurialInputScheme : InputScheme
runHg({ "log", "-R", cacheDir, "-r", revOrRef, "--template", "{node} {rev} {branch}" }));
assert(tokens.size() == 3);
input.attrs.insert_or_assign("rev", Hash::parseAny(tokens[0], htSHA1).gitRev());
input.attrs.insert_or_assign("rev", Hash::parseAny(tokens[0], HashType::SHA1).gitRev());
auto revCount = std::stoull(tokens[1]);
input.attrs.insert_or_assign("ref", tokens[2]);
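
The recurring `htSHA1` → `HashType::SHA1` and `Base16` → `Base::Base16` renames in these hunks are the usual unscoped-enum-to-`enum class` migration. A standalone illustration of what that buys (the enumerator list here is assumed, not copied from Lix):

```
#include <cstdio>

// enum class: enumerators are scoped and do not convert to int implicitly
enum class HashType { MD5, SHA1, SHA256, SHA512 };

void printAlgo(HashType t)
{
    switch (t) {
    case HashType::SHA1:   std::puts("sha1");   break;
    case HashType::SHA256: std::puts("sha256"); break;
    default:               std::puts("other");  break;
    }
}

int main()
{
    printAlgo(HashType::SHA1); // was: printAlgo(htSHA1)
    // printAlgo(1);           // no longer compiles: no implicit conversion
}
```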

@@ -72,7 +72,7 @@ DownloadFileResult downloadFile(
} else {
StringSink sink;
sink << dumpString(res.data);
auto hash = hashString(htSHA256, res.data);
auto hash = hashString(HashType::SHA256, res.data);
ValidPathInfo info {
*store,
name,
@@ -81,7 +81,7 @@ DownloadFileResult downloadFile(
.hash = hash,
.references = {},
},
hashString(htSHA256, sink.s),
hashString(HashType::SHA256, sink.s),
};
info.narSize = sink.s.size();
auto source = StringSource { sink.s };
@@ -155,7 +155,7 @@ DownloadTarballResult downloadTarball(
throw nix::Error("tarball '%s' contains an unexpected number of top-level files", url);
auto topDir = tmpDir + "/" + members.begin()->name;
lastModified = lstat(topDir).st_mtime;
unpackedStorePath = store->addToStore(name, topDir, FileIngestionMethod::Recursive, htSHA256, defaultPathFilter, NoRepair);
unpackedStorePath = store->addToStore(name, topDir, FileIngestionMethod::Recursive, HashType::SHA256, defaultPathFilter, NoRepair);
}
Attrs infoAttrs({
@@ -201,29 +201,17 @@ struct CurlInputScheme : InputScheme
if (!isValidURL(_url, requireTree))
return std::nullopt;
Input input;
auto url = _url;
Attrs attrs;
attrs.emplace("type", inputType());
url.scheme = parseUrlScheme(url.scheme).transport;
auto narHash = url.query.find("narHash");
if (narHash != url.query.end())
input.attrs.insert_or_assign("narHash", narHash->second);
emplaceURLQueryIntoAttrs(url, attrs, {"revCount"}, {});
if (auto i = get(url.query, "rev"))
input.attrs.insert_or_assign("rev", *i);
if (auto i = get(url.query, "revCount"))
if (auto n = string2Int<uint64_t>(*i))
input.attrs.insert_or_assign("revCount", *n);
url.query.erase("rev");
url.query.erase("revCount");
input.attrs.insert_or_assign("type", inputType());
input.attrs.insert_or_assign("url", url.to_string());
return input;
attrs.emplace("url", url.to_string());
return inputFromAttrs(attrs);
}
std::optional<Input> inputFromAttrs(const Attrs & attrs) const override
@@ -235,7 +223,7 @@ struct CurlInputScheme : InputScheme
std::set<std::string> allowedNames = {"type", "url", "narHash", "name", "unpack", "rev", "revCount", "lastModified"};
for (auto & [name, value] : attrs)
if (!allowedNames.count(name))
throw Error("unsupported %s input attribute '%s'", *type, name);
throw Error("unsupported %s input attribute '%s'. If you wanted to fetch a tarball with a query parameter, please use '{ type = \"tarball\"; url = \"...\"; }'", *type, name);
Input input;
input.attrs = attrs;
@@ -250,7 +238,7 @@ struct CurlInputScheme : InputScheme
// NAR hashes are preferred over file hashes since tar/zip
// files don't have a canonical representation.
if (auto narHash = input.getNarHash())
url.query.insert_or_assign("narHash", narHash->to_string(SRI, true));
url.query.insert_or_assign("narHash", narHash->to_string(Base::SRI, true));
return url;
}
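
The reworded error above now points users at the attrset form when a tarball URL carries an unknown query parameter. A self-contained sketch of the allow-list check with that hint (simplified types; the real code works on Lix's `Attrs`):

```
#include <map>
#include <set>
#include <stdexcept>
#include <string>

void checkAttrs(const std::map<std::string, std::string> & attrs)
{
    static const std::set<std::string> allowed =
        {"type", "url", "narHash", "name", "unpack", "rev", "revCount", "lastModified"};
    for (auto & [name, value] : attrs)
        if (!allowed.count(name))
            throw std::invalid_argument(
                "unsupported input attribute '" + name + "'. If you wanted to fetch "
                "a tarball with a query parameter, please use "
                "'{ type = \"tarball\"; url = \"...\"; }'");
}
```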

@@ -5,9 +5,9 @@
#include "signals.hh"
#include "loggers.hh"
#include "current-process.hh"
#include "terminal.hh"
#include <algorithm>
#include <cctype>
#include <exception>
#include <iostream>
@@ -347,7 +347,7 @@ int handleExceptions(const std::string & programName, std::function<void()> fun)
RunPager::RunPager()
{
if (!isatty(STDOUT_FILENO)) return;
if (!isOutputARealTerminal(StandardOutputStream::Stdout)) return;
char * pager = getenv("NIX_PAGER");
if (!pager) pager = getenv("PAGER");
if (pager && ((std::string) pager == "" || (std::string) pager == "cat")) return;
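
Replacing the bare `isatty()` check with `isOutputARealTerminal()` lets the pager logic share the same notion of "real terminal" as the colour output code. The helper's exact definition is not shown in this diff; a plausible sketch of such a check, under the assumption it combines a tty test with a TERM sanity check:

```
#include <cstdlib>
#include <cstring>
#include <unistd.h>

// assumption: "real terminal" = a tty whose TERM is not "dumb"
bool isOutputARealTerminal(int fd)
{
    const char * term = std::getenv("TERM");
    return isatty(fd) && term != nullptr && std::strcmp(term, "dumb") != 0;
}
```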

@@ -2,8 +2,6 @@
#include "shared.hh"
#include <cstring>
#include <cstddef>
#include <cstdlib>
#include <unistd.h>
#include <signal.h>
@@ -17,17 +15,17 @@ static void sigsegvHandler(int signo, siginfo_t * info, void * ctx)
the stack pointer. Unfortunately, getting the stack pointer is
not portable. */
bool haveSP = true;
char * sp = 0;
int64_t sp = 0;
#if defined(__x86_64__) && defined(REG_RSP)
sp = (char *) ((ucontext_t *) ctx)->uc_mcontext.gregs[REG_RSP];
sp = static_cast<ucontext_t *>(ctx)->uc_mcontext.gregs[REG_RSP];
#elif defined(REG_ESP)
sp = (char *) ((ucontext_t *) ctx)->uc_mcontext.gregs[REG_ESP];
sp = static_cast<ucontext_t *>(ctx)->uc_mcontext.gregs[REG_ESP];
#else
haveSP = false;
#endif
if (haveSP) {
ptrdiff_t diff = (char *) info->si_addr - sp;
int64_t diff = int64_t(info->si_addr) - sp;
if (diff < 0) diff = -diff;
if (diff < 4096) {
nix::stackOverflowHandler(info, ctx);
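
The switch from `char *` arithmetic to `int64_t` matters because subtracting pointers that do not point into the same object is undefined behaviour in C++; computing the distance on integers is well-defined. A standalone sketch of the same check:

```
#include <cstdint>
#include <cstdio>

// is the faulting address within one page of the stack pointer?
bool faultNearStack(const void * faultAddr, std::int64_t sp)
{
    std::int64_t diff =
        static_cast<std::int64_t>(reinterpret_cast<std::intptr_t>(faultAddr)) - sp;
    if (diff < 0) diff = -diff;
    return diff < 4096;
}

int main()
{
    int probe = 0;
    auto sp = static_cast<std::int64_t>(reinterpret_cast<std::intptr_t>(&probe));
    std::printf("%d\n", faultNearStack(&probe, sp + 128)); // prints 1
}
```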

@@ -3,17 +3,15 @@
#include "compression.hh"
#include "derivations.hh"
#include "fs-accessor.hh"
#include "globals.hh"
#include "nar-info.hh"
#include "sync.hh"
#include "remote-fs-accessor.hh"
#include "nar-info-disk-cache.hh"
#include "nar-info-disk-cache.hh" // IWYU pragma: keep
#include "nar-accessor.hh"
#include "thread-pool.hh"
#include "signals.hh"
#include <chrono>
#include <future>
#include <regex>
#include <fstream>
#include <sstream>
@@ -128,9 +126,9 @@ ref<const ValidPathInfo> BinaryCacheStore::addToStoreCommon(
/* Read the NAR simultaneously into a CompressionSink+FileSink (to
write the compressed NAR to disk), into a HashSink (to get the
NAR hash), and into a NarAccessor (to get the NAR listing). */
HashSink fileHashSink { htSHA256 };
HashSink fileHashSink { HashType::SHA256 };
std::shared_ptr<FSAccessor> narAccessor;
HashSink narHashSink { htSHA256 };
HashSink narHashSink { HashType::SHA256 };
{
FdSink fileSink(fdTemp.get());
TeeSink teeSinkCompressed { fileSink, fileHashSink };
@@ -150,7 +148,7 @@ ref<const ValidPathInfo> BinaryCacheStore::addToStoreCommon(
auto [fileHash, fileSize] = fileHashSink.finish();
narInfo->fileHash = fileHash;
narInfo->fileSize = fileSize;
narInfo->url = "nar/" + narInfo->fileHash->to_string(Base32, false) + ".nar"
narInfo->url = "nar/" + narInfo->fileHash->to_string(Base::Base32, false) + ".nar"
+ (compression == "xz" ? ".xz" :
compression == "bzip2" ? ".bz2" :
compression == "zstd" ? ".zst" :
@@ -288,7 +286,7 @@ void BinaryCacheStore::addToStore(const ValidPathInfo & info, Source & narSource
StorePath BinaryCacheStore::addToStoreFromDump(Source & dump, std::string_view name,
FileIngestionMethod method, HashType hashAlgo, RepairFlag repair, const StorePathSet & references)
{
if (method != FileIngestionMethod::Recursive || hashAlgo != htSHA256)
if (method != FileIngestionMethod::Recursive || hashAlgo != HashType::SHA256)
unsupported("addToStoreFromDump");
return addToStoreCommon(dump, repair, CheckSigs, [&](HashResult nar) {
ValidPathInfo info {
@@ -425,7 +423,7 @@ StorePath BinaryCacheStore::addTextToStore(
const StorePathSet & references,
RepairFlag repair)
{
auto textHash = hashString(htSHA256, s);
auto textHash = hashString(HashType::SHA256, s);
auto path = makeTextPath(name, TextInfo { { textHash }, references });
if (!repair && isValidPath(path))
@@ -480,7 +478,8 @@ void BinaryCacheStore::addSignatures(const StorePath & storePath, const StringSe
when addSignatures() is called sequentially on a path, because
S3 might return an outdated cached version. */
auto narInfo = make_ref<NarInfo>((NarInfo &) *queryPathInfo(storePath));
// downcast: BinaryCacheStore always returns NarInfo from queryPathInfoUncached, making it sound
auto narInfo = make_ref<NarInfo>(dynamic_cast<NarInfo const &>(*queryPathInfo(storePath)));
narInfo->sigs.insert(sigs.begin(), sigs.end());
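
The new comment and `dynamic_cast` make the downcast in `addSignatures()` checked: casting to a reference type throws `std::bad_cast` if the object is not actually a `NarInfo`, where the old C-style cast would silently reinterpret it. A standalone illustration (toy types, not the Lix ones):

```
#include <iostream>
#include <typeinfo>

struct ValidPathInfo { virtual ~ValidPathInfo() = default; };
struct NarInfo : ValidPathInfo { };

NarInfo copyAsNarInfo(const ValidPathInfo & info)
{
    // was: (NarInfo &) info -- silently wrong for a non-NarInfo object
    return dynamic_cast<const NarInfo &>(info);
}

int main()
{
    NarInfo n;
    copyAsNarInfo(n); // fine
    try {
        ValidPathInfo p;
        copyAsNarInfo(p); // not a NarInfo
    } catch (const std::bad_cast &) {
        std::cout << "bad cast caught\n";
    }
}
```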

@@ -1,23 +1,15 @@
#include "derivation-goal.hh"
#include "hook-instance.hh"
#include "worker.hh"
#include "builtins.hh"
#include "builtins/buildenv.hh"
#include "references.hh"
#include "finally.hh"
#include "archive.hh"
#include "compression.hh"
#include "common-protocol.hh"
#include "common-protocol-impl.hh"
#include "topo-sort.hh"
#include "common-protocol-impl.hh" // IWYU pragma: keep
#include "local-store.hh" // TODO remove, along with remaining downcasts
#include "logging-json.hh"
#include "substitution-goal.hh"
#include "drv-output-substitution-goal.hh"
#include <regex>
#include <queue>
#include <fstream>
#include <sys/types.h>
#include <sys/socket.h>
@@ -127,19 +119,20 @@ std::string DerivationGoal::key()
void DerivationGoal::killChild()
{
hook.reset();
builderOutFD = nullptr;
}
void DerivationGoal::timedOut(Error && ex)
Goal::Finished DerivationGoal::timedOut(Error && ex)
{
killChild();
done(BuildResult::TimedOut, {}, std::move(ex));
return done(BuildResult::TimedOut, {}, std::move(ex));
}
void DerivationGoal::work()
Goal::WorkResult DerivationGoal::work(bool inBuildSlot)
{
(this->*state)();
return (this->*state)(inBuildSlot);
}
void DerivationGoal::addWantedOutputs(const OutputsSpec & outputs)
@@ -163,7 +156,7 @@ void DerivationGoal::addWantedOutputs(const OutputsSpec & outputs)
}
void DerivationGoal::getDerivation()
Goal::WorkResult DerivationGoal::getDerivation(bool inBuildSlot)
{
trace("init");
@@ -171,23 +164,21 @@ void DerivationGoal::getDerivation()
exists. If it doesn't, it may be created through a
substitute. */
if (buildMode == bmNormal && worker.evalStore.isValidPath(drvPath)) {
loadDerivation();
return;
return loadDerivation(inBuildSlot);
}
addWaitee(worker.makePathSubstitutionGoal(drvPath));
state = &DerivationGoal::loadDerivation;
return WaitForGoals{{worker.makePathSubstitutionGoal(drvPath)}};
}
void DerivationGoal::loadDerivation()
Goal::WorkResult DerivationGoal::loadDerivation(bool inBuildSlot)
{
trace("loading derivation");
if (nrFailed != 0) {
done(BuildResult::MiscFailure, {}, Error("cannot build missing derivation '%s'", worker.store.printStorePath(drvPath)));
return;
return done(BuildResult::MiscFailure, {}, Error("cannot build missing derivation '%s'", worker.store.printStorePath(drvPath)));
}
/* `drvPath' should already be a root, but let's be on the safe
@@ -209,11 +200,11 @@ void DerivationGoal::loadDerivation()
}
assert(drv);
haveDerivation();
return haveDerivation(inBuildSlot);
}
void DerivationGoal::haveDerivation()
Goal::WorkResult DerivationGoal::haveDerivation(bool inBuildSlot)
{
trace("have derivation");
@@ -241,8 +232,7 @@ void DerivationGoal::haveDerivation()
});
}
gaveUpOnSubstitution();
return;
return gaveUpOnSubstitution(inBuildSlot);
}
for (auto & i : drv->outputsAndOptPaths(worker.store))
@@ -264,18 +254,18 @@ void DerivationGoal::haveDerivation()
/* If they are all valid, then we're done. */
if (allValid && buildMode == bmNormal) {
done(BuildResult::AlreadyValid, std::move(validOutputs));
return;
return done(BuildResult::AlreadyValid, std::move(validOutputs));
}
/* We are first going to try to create the invalid output paths
through substitutes. If that doesn't work, we'll build
them. */
WaitForGoals result;
if (settings.useSubstitutes && parsedDrv->substitutesAllowed())
for (auto & [outputName, status] : initialOutputs) {
if (!status.wanted) continue;
if (!status.known)
addWaitee(
result.goals.insert(
worker.makeDrvOutputSubstitutionGoal(
DrvOutput{status.outputHash, outputName},
buildMode == bmRepair ? Repair : NoRepair
@@ -283,31 +273,31 @@ void DerivationGoal::haveDerivation()
);
else {
auto * cap = getDerivationCA(*drv);
addWaitee(worker.makePathSubstitutionGoal(
result.goals.insert(worker.makePathSubstitutionGoal(
status.known->path,
buildMode == bmRepair ? Repair : NoRepair,
cap ? std::optional { *cap } : std::nullopt));
}
}
if (waitees.empty()) /* to prevent hang (no wake-up event) */
outputsSubstitutionTried();
else
if (result.goals.empty()) { /* to prevent hang (no wake-up event) */
return outputsSubstitutionTried(inBuildSlot);
} else {
state = &DerivationGoal::outputsSubstitutionTried;
return result;
}
}
void DerivationGoal::outputsSubstitutionTried()
Goal::WorkResult DerivationGoal::outputsSubstitutionTried(bool inBuildSlot)
{
trace("all outputs substituted (maybe)");
assert(drv->type().isPure());
if (nrFailed > 0 && nrFailed > nrNoSubstituters + nrIncompleteClosure && !settings.tryFallback) {
done(BuildResult::TransientFailure, {},
return done(BuildResult::TransientFailure, {},
Error("some substitutes for the outputs of derivation '%s' failed (usually happens due to networking issues); try '--fallback' to build derivation from source ",
worker.store.printStorePath(drvPath)));
return;
}
/* If the substitutes form an incomplete closure, then we should
@@ -341,33 +331,32 @@ void DerivationGoal::outputsSubstitutionTried()
if (needRestart == NeedRestartForMoreOutputs::OutputsAddedDoNeed) {
needRestart = NeedRestartForMoreOutputs::OutputsUnmodifedDontNeed;
haveDerivation();
return;
return haveDerivation(inBuildSlot);
}
auto [allValid, validOutputs] = checkPathValidity();
if (buildMode == bmNormal && allValid) {
done(BuildResult::Substituted, std::move(validOutputs));
return;
return done(BuildResult::Substituted, std::move(validOutputs));
}
if (buildMode == bmRepair && allValid) {
repairClosure();
return;
return repairClosure();
}
if (buildMode == bmCheck && !allValid)
throw Error("some outputs of '%s' are not valid, so checking is not possible",
worker.store.printStorePath(drvPath));
/* Nothing to wait for; tail call */
gaveUpOnSubstitution();
return gaveUpOnSubstitution(inBuildSlot);
}
/* At least one of the output paths could not be
produced using a substitute. So we have to build instead. */
void DerivationGoal::gaveUpOnSubstitution()
Goal::WorkResult DerivationGoal::gaveUpOnSubstitution(bool inBuildSlot)
{
WaitForGoals result;
/* At this point we are building all outputs, so if more are wanted there
is no need to restart. */
needRestart = NeedRestartForMoreOutputs::BuildInProgressWillNotNeed;
@@ -379,7 +368,7 @@ void DerivationGoal::gaveUpOnSubstitution()
addWaiteeDerivedPath = [&](ref<SingleDerivedPath> inputDrv, const DerivedPathMap<StringSet>::ChildNode & inputNode) {
if (!inputNode.value.empty())
addWaitee(worker.makeGoal(
result.goals.insert(worker.makeGoal(
DerivedPath::Built {
.drvPath = inputDrv,
.outputs = inputNode.value,
@@ -424,17 +413,19 @@ void DerivationGoal::gaveUpOnSubstitution()
if (!settings.useSubstitutes)
throw Error("dependency '%s' of '%s' does not exist, and substitution is disabled",
worker.store.printStorePath(i), worker.store.printStorePath(drvPath));
addWaitee(worker.makePathSubstitutionGoal(i));
result.goals.insert(worker.makePathSubstitutionGoal(i));
}
if (waitees.empty()) /* to prevent hang (no wake-up event) */
inputsRealised();
else
if (result.goals.empty()) {/* to prevent hang (no wake-up event) */
return inputsRealised(inBuildSlot);
} else {
state = &DerivationGoal::inputsRealised;
return result;
}
}
void DerivationGoal::repairClosure()
Goal::WorkResult DerivationGoal::repairClosure()
{
assert(drv->type().isPure());
@@ -470,6 +461,7 @@ void DerivationGoal::repairClosure()
}
/* Check each path (slow!). */
WaitForGoals result;
for (auto & i : outputClosure) {
if (worker.pathContentsGood(i)) continue;
printError(
@@ -477,9 +469,9 @@ void DerivationGoal::repairClosure()
worker.store.printStorePath(i), worker.store.printStorePath(drvPath));
auto drvPath2 = outputsToDrv.find(i);
if (drvPath2 == outputsToDrv.end())
addWaitee(worker.makePathSubstitutionGoal(i, Repair));
result.goals.insert(worker.makePathSubstitutionGoal(i, Repair));
else
addWaitee(worker.makeGoal(
result.goals.insert(worker.makeGoal(
DerivedPath::Built {
.drvPath = makeConstantStorePathRef(drvPath2->second),
.outputs = OutputsSpec::All { },
@@ -487,42 +479,40 @@ void DerivationGoal::repairClosure()
bmRepair));
}
if (waitees.empty()) {
done(BuildResult::AlreadyValid, assertPathValidity());
return;
if (result.goals.empty()) {
return done(BuildResult::AlreadyValid, assertPathValidity());
}
state = &DerivationGoal::closureRepaired;
return result;
}
void DerivationGoal::closureRepaired()
Goal::WorkResult DerivationGoal::closureRepaired(bool inBuildSlot)
{
trace("closure repaired");
if (nrFailed > 0)
throw Error("some paths in the output closure of derivation '%s' could not be repaired",
worker.store.printStorePath(drvPath));
done(BuildResult::AlreadyValid, assertPathValidity());
return done(BuildResult::AlreadyValid, assertPathValidity());
}
void DerivationGoal::inputsRealised()
Goal::WorkResult DerivationGoal::inputsRealised(bool inBuildSlot)
{
trace("all inputs realised");
if (nrFailed != 0) {
if (!useDerivation)
throw Error("some dependencies of '%s' are missing", worker.store.printStorePath(drvPath));
done(BuildResult::DependencyFailed, {}, Error(
return done(BuildResult::DependencyFailed, {}, Error(
"%s dependencies of derivation '%s' failed to build",
nrFailed, worker.store.printStorePath(drvPath)));
return;
}
if (retrySubstitution == RetrySubstitution::YesNeed) {
retrySubstitution = RetrySubstitution::AlreadyRetried;
haveDerivation();
return;
return haveDerivation(inBuildSlot);
}
/* Gather information necessary for computing the closure and/or
@@ -586,10 +576,9 @@ void DerivationGoal::inputsRealised()
resolvedDrvGoal = worker.makeDerivationGoal(
pathResolved, wantedOutputs, buildMode);
addWaitee(resolvedDrvGoal);
state = &DerivationGoal::resolvedFinished;
return;
return WaitForGoals{{resolvedDrvGoal}};
}
std::function<void(const StorePath &, const DerivedPathMap<StringSet>::ChildNode &)> accumInputPaths;
@@ -654,7 +643,7 @@ void DerivationGoal::inputsRealised()
slot to become available, since we don't need one if there is a
build hook. */
state = &DerivationGoal::tryToBuild;
worker.wakeUp(shared_from_this());
return ContinueImmediately{};
}
void DerivationGoal::started()
@@ -670,7 +659,7 @@ void DerivationGoal::started()
mcRunningBuilds = std::make_unique<MaintainCount<uint64_t>>(worker.runningBuilds);
}
void DerivationGoal::tryToBuild()
Goal::WorkResult DerivationGoal::tryToBuild(bool inBuildSlot)
{
trace("trying to build");
@@ -705,8 +694,7 @@ void DerivationGoal::tryToBuild()
if (!actLock)
actLock = std::make_unique<Activity>(*logger, lvlWarn, actBuildWaiting,
fmt("waiting for lock on %s", Magenta(showPaths(lockFiles))));
worker.waitForAWhile(shared_from_this());
return;
return WaitForAWhile{};
}
actLock.reset();
@@ -723,8 +711,7 @@ void DerivationGoal::tryToBuild()
if (buildMode != bmCheck && allValid) {
debug("skipping build of derivation '%s', someone beat us to it", worker.store.printStorePath(drvPath));
outputLocks.setDeletion(true);
done(BuildResult::AlreadyValid, std::move(validOutputs));
return;
return done(BuildResult::AlreadyValid, std::move(validOutputs));
}
/* If any of the outputs already exist but are not valid, delete
@@ -744,37 +731,45 @@ void DerivationGoal::tryToBuild()
&& settings.maxBuildJobs.get() != 0;
if (!buildLocally) {
switch (tryBuildHook()) {
case rpAccept:
/* Yes, it has started doing so. Wait until we get
EOF from the hook. */
actLock.reset();
buildResult.startTime = time(0); // inexact
state = &DerivationGoal::buildDone;
started();
return;
case rpPostpone:
/* Not now; wait until at least one child finishes or
the wake-up timeout expires. */
if (!actLock)
actLock = std::make_unique<Activity>(*logger, lvlTalkative, actBuildWaiting,
fmt("waiting for a machine to build '%s'", Magenta(worker.store.printStorePath(drvPath))));
worker.waitForAWhile(shared_from_this());
outputLocks.unlock();
return;
case rpDecline:
/* We should do it ourselves. */
break;
auto hookReply = tryBuildHook(inBuildSlot);
auto result = std::visit(
overloaded{
[&](HookReply::Accept & a) -> std::optional<WorkResult> {
/* Yes, it has started doing so. Wait until we get
EOF from the hook. */
actLock.reset();
buildResult.startTime = time(0); // inexact
state = &DerivationGoal::buildDone;
started();
return WaitForWorld{std::move(a.fds), false};
},
[&](HookReply::Postpone) -> std::optional<WorkResult> {
/* Not now; wait until at least one child finishes or
the wake-up timeout expires. */
if (!actLock)
actLock = std::make_unique<Activity>(*logger, lvlTalkative, actBuildWaiting,
fmt("waiting for a machine to build '%s'", Magenta(worker.store.printStorePath(drvPath))));
outputLocks.unlock();
return WaitForAWhile{};
},
[&](HookReply::Decline) -> std::optional<WorkResult> {
/* We should do it ourselves. */
return std::nullopt;
},
},
hookReply);
if (result) {
return std::move(*result);
}
}
actLock.reset();
state = &DerivationGoal::tryLocalBuild;
worker.wakeUp(shared_from_this());
return ContinueImmediately{};
}
void DerivationGoal::tryLocalBuild() {
Goal::WorkResult DerivationGoal::tryLocalBuild(bool inBuildSlot) {
throw Error(
"unable to build with a primary store that isn't a local store; "
"either pass a different '--store' or enable remote builds."
@@ -829,14 +824,16 @@ void replaceValidPath(const Path & storePath, const Path & tmpPath)
int DerivationGoal::getChildStatus()
{
builderOutFD = nullptr;
return hook->pid.kill();
}
void DerivationGoal::closeReadPipes()
{
hook->builderOut.readSide.reset();
hook->fromHook.readSide.reset();
hook->builderOut.reset();
hook->fromHook.reset();
builderOutFD = nullptr;
}
@@ -932,7 +929,7 @@ void runPostBuildHook(
proc.getStdout()->drainInto(sink);
}
void DerivationGoal::buildDone()
Goal::WorkResult DerivationGoal::buildDone(bool inBuildSlot)
{
trace("build done");
@@ -1027,8 +1024,7 @@ void DerivationGoal::buildDone()
outputLocks.setDeletion(true);
outputLocks.unlock();
done(BuildResult::Built, std::move(builtOutputs));
return done(BuildResult::Built, std::move(builtOutputs));
} catch (BuildError & e) {
outputLocks.unlock();
@@ -1049,12 +1045,11 @@ void DerivationGoal::buildDone()
BuildResult::PermanentFailure;
}
done(st, {}, std::move(e));
return;
return done(st, {}, std::move(e));
}
}
void DerivationGoal::resolvedFinished()
Goal::WorkResult DerivationGoal::resolvedFinished(bool inBuildSlot)
{
trace("resolved derivation finished");
@@ -1122,26 +1117,26 @@ void DerivationGoal::resolvedFinished()
if (status == BuildResult::AlreadyValid)
status = BuildResult::ResolvesToAlreadyValid;
done(status, std::move(builtOutputs));
return done(status, std::move(builtOutputs));
}
HookReply DerivationGoal::tryBuildHook()
HookReply DerivationGoal::tryBuildHook(bool inBuildSlot)
{
if (!worker.tryBuildHook || !useDerivation) return rpDecline;
if (!worker.hook.available || !useDerivation) return HookReply::Decline{};
if (!worker.hook)
worker.hook = std::make_unique<HookInstance>();
if (!worker.hook.instance)
worker.hook.instance = std::make_unique<HookInstance>();
try {
/* Send the request to the hook. */
worker.hook->sink
worker.hook.instance->sink
<< "try"
<< (worker.getNrLocalBuilds() < settings.maxBuildJobs ? 1 : 0)
<< (inBuildSlot ? 1 : 0)
<< drv->platform
<< worker.store.printStorePath(drvPath)
<< parsedDrv->getRequiredSystemFeatures();
worker.hook->sink.flush();
worker.hook.instance->sink.flush();
/* Read the first line of input, which should be a word indicating
whether the hook wishes to perform the build. */
@@ -1149,13 +1144,13 @@ HookReply DerivationGoal::tryBuildHook()
while (true) {
auto s = [&]() {
try {
return readLine(worker.hook->fromHook.readSide.get());
return readLine(worker.hook.instance->fromHook.get());
} catch (Error & e) {
e.addTrace({}, "while reading the response from the build hook");
throw;
}
}();
if (handleJSONLogMessage(s, worker.act, worker.hook->activities, true))
if (handleJSONLogMessage(s, worker.act, worker.hook.instance->activities, true))
;
else if (s.substr(0, 2) == "# ") {
reply = s.substr(2);
@@ -1170,14 +1165,14 @@ HookReply DerivationGoal::tryBuildHook()
debug("hook reply is '%1%'", reply);
if (reply == "decline")
return rpDecline;
return HookReply::Decline{};
else if (reply == "decline-permanently") {
worker.tryBuildHook = false;
worker.hook = 0;
return rpDecline;
worker.hook.available = false;
worker.hook.instance.reset();
return HookReply::Decline{};
}
else if (reply == "postpone")
return rpPostpone;
return HookReply::Postpone{};
else if (reply != "accept")
throw Error("bad hook reply '%s'", reply);
@@ -1185,17 +1180,17 @@ HookReply DerivationGoal::tryBuildHook()
if (e.errNo == EPIPE) {
printError(
"build hook died unexpectedly: %s",
chomp(drainFD(worker.hook->fromHook.readSide.get())));
worker.hook = 0;
return rpDecline;
chomp(drainFD(worker.hook.instance->fromHook.get())));
worker.hook.instance.reset();
return HookReply::Decline{};
} else
throw;
}
hook = std::move(worker.hook);
hook = std::move(worker.hook.instance);
try {
machineName = readLine(hook->fromHook.readSide.get());
machineName = readLine(hook->fromHook.get());
} catch (Error & e) {
e.addTrace({}, "while reading the machine name from the build hook");
throw;
@@ -1218,17 +1213,17 @@ HookReply DerivationGoal::tryBuildHook()
}
hook->sink = FdSink();
hook->toHook.writeSide.reset();
hook->toHook.reset();
/* Create the log file and pipe. */
Path logFile = openLogFile();
std::set<int> fds;
fds.insert(hook->fromHook.readSide.get());
fds.insert(hook->builderOut.readSide.get());
worker.childStarted(shared_from_this(), fds, false, false);
fds.insert(hook->fromHook.get());
fds.insert(hook->builderOut.get());
builderOutFD = &hook->builderOut;
return rpAccept;
return HookReply::Accept{std::move(fds)};
}
@@ -1288,25 +1283,23 @@ void DerivationGoal::closeLogFile()
}
bool DerivationGoal::isReadDesc(int fd)
Goal::WorkResult DerivationGoal::handleChildOutput(int fd, std::string_view data)
{
return fd == hook->builderOut.readSide.get();
}
assert(builderOutFD);
auto tooMuchLogs = [&] {
killChild();
return done(
BuildResult::LogLimitExceeded, {},
Error("%s killed after writing more than %d bytes of log output",
getName(), settings.maxLogSize));
};
void DerivationGoal::handleChildOutput(int fd, std::string_view data)
{
// local & `ssh://`-builds are dealt with here.
auto isWrittenToLog = isReadDesc(fd);
if (isWrittenToLog)
{
if (fd == builderOutFD->get()) {
logSize += data.size();
if (settings.maxLogSize && logSize > settings.maxLogSize) {
killChild();
done(
BuildResult::LogLimitExceeded, {},
Error("%s killed after writing more than %d bytes of log output",
getName(), settings.maxLogSize));
return;
return tooMuchLogs();
}
for (auto c : data)
@@ -1321,9 +1314,10 @@ void DerivationGoal::handleChildOutput(int fd, std::string_view data)
}
if (logSink) (*logSink)(data);
return StillAlive{};
}
if (hook && fd == hook->fromHook.readSide.get()) {
if (hook && fd == hook->fromHook.get()) {
for (auto c : data)
if (c == '\n') {
auto json = parseJSONMessage(currentHookLine);
@@ -1331,11 +1325,17 @@ void DerivationGoal::handleChildOutput(int fd, std::string_view data)
auto s = handleJSONLogMessage(*json, worker.act, hook->activities, true);
// ensure that logs from a builder using `ssh-ng://` as protocol
// are also available to `nix log`.
if (s && !isWrittenToLog && logSink) {
if (s && logSink) {
const auto type = (*json)["type"];
const auto fields = (*json)["fields"];
if (type == resBuildLogLine) {
(*logSink)((fields.size() > 0 ? fields[0].get<std::string>() : "") + "\n");
const std::string logLine =
(fields.size() > 0 ? fields[0].get<std::string>() : "") + "\n";
logSize += logLine.size();
if (settings.maxLogSize && logSize > settings.maxLogSize) {
return tooMuchLogs();
}
(*logSink)(logLine);
} else if (type == resSetPhase && ! fields.is_null()) {
const auto phase = fields[0];
if (! phase.is_null()) {
@@ -1357,6 +1357,8 @@ void DerivationGoal::handleChildOutput(int fd, std::string_view data)
} else
currentHookLine += c;
}
return StillAlive{};
}
@@ -1505,7 +1507,7 @@ SingleDrvOutputs DerivationGoal::assertPathValidity()
}
void DerivationGoal::done(
Goal::Finished DerivationGoal::done(
BuildResult::Status status,
SingleDrvOutputs builtOutputs,
std::optional<Error> ex)
@@ -1514,10 +1516,6 @@ void DerivationGoal::done(
buildResult.status = status;
if (ex)
buildResult.errorMsg = fmt("%s", Uncolored(ex->info().msg));
if (buildResult.status == BuildResult::TimedOut)
worker.timedOut = true;
if (buildResult.status == BuildResult::PermanentFailure)
worker.permanentFailure = true;
mcExpectedBuilds.reset();
mcRunningBuilds.reset();
@@ -1540,14 +1538,19 @@ void DerivationGoal::done(
fs << worker.store.printStorePath(drvPath) << "\t" << buildResult.toString() << std::endl;
}
amDone(buildResult.success() ? ecSuccess : ecFailed, std::move(ex));
return Finished{
.result = buildResult.success() ? ecSuccess : ecFailed,
.ex = ex ? std::make_unique<Error>(std::move(*ex)) : nullptr,
.permanentFailure = buildResult.status == BuildResult::PermanentFailure,
.timedOut = buildResult.status == BuildResult::TimedOut,
.hashMismatch = anyHashMismatchSeen,
.checkMismatch = anyCheckMismatchSeen,
};
}
void DerivationGoal::waiteeDone(GoalPtr waitee, ExitCode result)
void DerivationGoal::waiteeDone(GoalPtr waitee)
{
Goal::waiteeDone(waitee, result);
if (!useDerivation) return;
auto * dg = dynamic_cast<DerivationGoal *>(&*waitee);
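
The pattern running through this whole file: state methods used to signal "wake me up", "wait for these goals", or "done" by mutating the worker (`worker.wakeUp(...)`, `addWaitee(...)`, `amDone(...)`); now each state returns a `WorkResult` value and the scheduler interprets it. A toy, self-contained model of that control flow (the variant member names mirror the diff; everything else is invented for illustration):

```
#include <iostream>
#include <set>
#include <string>
#include <variant>

struct StillAlive {};
struct ContinueImmediately {};
struct WaitForGoals { std::set<std::string> goals; };
struct Finished { int exitCode; };

using WorkResult =
    std::variant<StillAlive, ContinueImmediately, WaitForGoals, Finished>;

WorkResult stepOne()
{
    // "tail call" into the next state by returning its result,
    // mirroring e.g. `return loadDerivation(inBuildSlot);` above
    return WaitForGoals{{"substitute something"}};
}

int main()
{
    WorkResult r = stepOne();
    if (auto * w = std::get_if<WaitForGoals>(&r))
        std::cout << "waiting on " << w->goals.size() << " goal(s)\n";
}
```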

@@ -14,7 +14,21 @@ using std::map;
struct HookInstance;
typedef enum {rpAccept, rpDecline, rpPostpone} HookReply;
struct HookReplyBase {
struct [[nodiscard]] Accept {
std::set<int> fds;
};
struct [[nodiscard]] Decline {};
struct [[nodiscard]] Postpone {};
};
struct [[nodiscard]] HookReply
: HookReplyBase,
std::variant<HookReplyBase::Accept, HookReplyBase::Decline, HookReplyBase::Postpone>
{
HookReply() = delete;
using variant::variant;
};
/**
* Unless we are repairing, we don't bother to test validity and just assume it,
@@ -107,6 +121,9 @@ struct DerivationGoal : public Goal
*/
NeedRestartForMoreOutputs needRestart = NeedRestartForMoreOutputs::OutputsUnmodifedDontNeed;
bool anyHashMismatchSeen = false;
bool anyCheckMismatchSeen = false;
/**
* See `retrySubstitution`; just for that field.
*/
@@ -183,12 +200,19 @@ struct DerivationGoal : public Goal
*/
std::unique_ptr<HookInstance> hook;
/**
* Builder output is pulled from this file descriptor when not null.
* Owned by the derivation goal or subclass, must not be reset until
* the build has finished and no more output must be processed by us
*/
AutoCloseFD * builderOutFD = nullptr;
/**
* The sort of derivation we are building.
*/
std::optional<DerivationType> derivationType;
typedef void (DerivationGoal::*GoalState)();
typedef WorkResult (DerivationGoal::*GoalState)(bool inBuildSlot);
GoalState state;
BuildMode buildMode;
@@ -217,11 +241,11 @@ struct DerivationGoal : public Goal
BuildMode buildMode = bmNormal);
virtual ~DerivationGoal() noexcept(false);
void timedOut(Error && ex) override;
Finished timedOut(Error && ex) override;
std::string key() override;
void work() override;
WorkResult work(bool inBuildSlot) override;
/**
* Add wanted outputs to an already existing derivation goal.
@@ -231,23 +255,23 @@ struct DerivationGoal : public Goal
/**
* The states.
*/
void getDerivation();
void loadDerivation();
void haveDerivation();
void outputsSubstitutionTried();
void gaveUpOnSubstitution();
void closureRepaired();
void inputsRealised();
void tryToBuild();
virtual void tryLocalBuild();
void buildDone();
WorkResult getDerivation(bool inBuildSlot);
WorkResult loadDerivation(bool inBuildSlot);
WorkResult haveDerivation(bool inBuildSlot);
WorkResult outputsSubstitutionTried(bool inBuildSlot);
WorkResult gaveUpOnSubstitution(bool inBuildSlot);
WorkResult closureRepaired(bool inBuildSlot);
WorkResult inputsRealised(bool inBuildSlot);
WorkResult tryToBuild(bool inBuildSlot);
virtual WorkResult tryLocalBuild(bool inBuildSlot);
WorkResult buildDone(bool inBuildSlot);
void resolvedFinished();
WorkResult resolvedFinished(bool inBuildSlot);
/**
* Is the build hook willing to perform the build?
*/
HookReply tryBuildHook();
HookReply tryBuildHook(bool inBuildSlot);
virtual int getChildStatus();
@@ -287,12 +311,10 @@ struct DerivationGoal : public Goal
virtual void cleanupPostOutputsRegisteredModeCheck();
virtual void cleanupPostOutputsRegisteredModeNonCheck();
virtual bool isReadDesc(int fd);
/**
* Callback used by the worker to write to the log.
*/
void handleChildOutput(int fd, std::string_view data) override;
WorkResult handleChildOutput(int fd, std::string_view data) override;
void handleEOF(int fd) override;
void flushLine();
@@ -323,16 +345,16 @@ struct DerivationGoal : public Goal
*/
virtual void killChild();
void repairClosure();
WorkResult repairClosure();
void started();
void done(
Finished done(
BuildResult::Status status,
SingleDrvOutputs builtOutputs = {},
std::optional<Error> ex = {});
void waiteeDone(GoalPtr waitee, ExitCode result) override;
void waiteeDone(GoalPtr waitee) override;
StorePathSet exportReferences(const StorePathSet & storePaths);
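
`HookReply` going from a plain enum to a `std::variant` lets the accept case carry its file descriptors, and the call site in `tryToBuild()` dispatches with `std::visit` plus the `overloaded` lambda-merging helper that the diff itself uses. A self-contained version of that idiom:

```
#include <iostream>
#include <set>
#include <variant>

struct Accept { std::set<int> fds; };
struct Decline {};
struct Postpone {};
using HookReply = std::variant<Accept, Decline, Postpone>;

// the classic C++17 "overloaded" helper, as seen at the visit site above
template<class... Ts> struct overloaded : Ts... { using Ts::operator()...; };
template<class... Ts> overloaded(Ts...) -> overloaded<Ts...>;

int main()
{
    HookReply reply = Postpone{};
    std::visit(overloaded{
        [](Accept & a)  { std::cout << "accepted, " << a.fds.size() << " fds\n"; },
        [](Decline)     { std::cout << "declined\n"; },
        [](Postpone)    { std::cout << "postponed\n"; },
    }, reply);
}
```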

@@ -20,30 +20,25 @@ DrvOutputSubstitutionGoal::DrvOutputSubstitutionGoal(
}
void DrvOutputSubstitutionGoal::init()
Goal::WorkResult DrvOutputSubstitutionGoal::init(bool inBuildSlot)
{
trace("init");
/* If the derivation already exists, we're done */
if (worker.store.queryRealisation(id)) {
amDone(ecSuccess);
return;
return Finished{ecSuccess};
}
subs = settings.useSubstitutes ? getDefaultSubstituters() : std::list<ref<Store>>();
tryNext();
return tryNext(inBuildSlot);
}
void DrvOutputSubstitutionGoal::tryNext()
Goal::WorkResult DrvOutputSubstitutionGoal::tryNext(bool inBuildSlot)
{
trace("trying next substituter");
/* Make sure that we are allowed to start a substitution. Note that even
if maxSubstitutionJobs == 0, we still allow a substituter to run. This
prevents infinite waiting. */
if (worker.runningSubstitutions >= std::max(1U, settings.maxSubstitutionJobs.get())) {
worker.waitForBuildSlot(shared_from_this());
return;
if (!inBuildSlot) {
return WaitForSlot{};
}
maintainRunningSubstitutions =
@@ -54,16 +49,14 @@ void DrvOutputSubstitutionGoal::tryNext()
with it. */
debug("derivation output '%s' is required, but there is no substituter that can provide it", id.to_string());
/* Hack: don't indicate failure if there were no substituters.
In that case the calling derivation should just do a
build. */
amDone(substituterFailed ? ecFailed : ecNoSubstituters);
if (substituterFailed) {
worker.failedSubstitutions++;
}
return;
/* Hack: don't indicate failure if there were no substituters.
In that case the calling derivation should just do a
build. */
return Finished{substituterFailed ? ecFailed : ecNoSubstituters};
}
sub = subs.front();
@@ -82,12 +75,11 @@ void DrvOutputSubstitutionGoal::tryNext()
return sub->queryRealisation(id);
});
worker.childStarted(shared_from_this(), {downloadState->outPipe.readSide.get()}, true, false);
state = &DrvOutputSubstitutionGoal::realisationFetched;
return WaitForWorld{{downloadState->outPipe.readSide.get()}, true};
}
void DrvOutputSubstitutionGoal::realisationFetched()
Goal::WorkResult DrvOutputSubstitutionGoal::realisationFetched(bool inBuildSlot)
{
worker.childTerminated(this);
maintainRunningSubstitutions.reset();
@@ -100,9 +92,10 @@ void DrvOutputSubstitutionGoal::realisationFetched()
}
if (!outputInfo) {
return tryNext();
return tryNext(inBuildSlot);
}
WaitForGoals result;
for (const auto & [depId, depPath] : outputInfo->dependentRealisations) {
if (depId != id) {
if (auto localOutputInfo = worker.store.queryRealisation(depId);
@@ -116,38 +109,42 @@ void DrvOutputSubstitutionGoal::realisationFetched()
worker.store.printStorePath(localOutputInfo->outPath),
worker.store.printStorePath(depPath)
);
tryNext();
return;
return tryNext(inBuildSlot);
}
addWaitee(worker.makeDrvOutputSubstitutionGoal(depId));
result.goals.insert(worker.makeDrvOutputSubstitutionGoal(depId));
}
}
addWaitee(worker.makePathSubstitutionGoal(outputInfo->outPath));
result.goals.insert(worker.makePathSubstitutionGoal(outputInfo->outPath));
if (waitees.empty()) outPathValid();
else state = &DrvOutputSubstitutionGoal::outPathValid;
if (result.goals.empty()) {
return outPathValid(inBuildSlot);
} else {
state = &DrvOutputSubstitutionGoal::outPathValid;
return result;
}
}
void DrvOutputSubstitutionGoal::outPathValid()
Goal::WorkResult DrvOutputSubstitutionGoal::outPathValid(bool inBuildSlot)
{
assert(outputInfo);
trace("output path substituted");
if (nrFailed > 0) {
debug("The output path of the derivation output '%s' could not be substituted", id.to_string());
amDone(nrNoSubstituters > 0 || nrIncompleteClosure > 0 ? ecIncompleteClosure : ecFailed);
return;
return Finished{
nrNoSubstituters > 0 || nrIncompleteClosure > 0 ? ecIncompleteClosure : ecFailed
};
}
worker.store.registerDrvOutput(*outputInfo);
finished();
return finished();
}
void DrvOutputSubstitutionGoal::finished()
Goal::WorkResult DrvOutputSubstitutionGoal::finished()
{
trace("finished");
amDone(ecSuccess);
return Finished{ecSuccess};
}
std::string DrvOutputSubstitutionGoal::key()
@@ -157,9 +154,9 @@ std::string DrvOutputSubstitutionGoal::key()
return "a$" + std::string(id.to_string());
}
void DrvOutputSubstitutionGoal::work()
Goal::WorkResult DrvOutputSubstitutionGoal::work(bool inBuildSlot)
{
(this->*state)();
return (this->*state)(inBuildSlot);
}
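
The `inBuildSlot` parameter replaces the old pattern of each state querying worker counters (`worker.runningSubstitutions >= ...`) and parking itself via `waitForBuildSlot()`: the scheduler now decides slot availability and passes it in, and a state that cannot run simply returns `WaitForSlot{}`. A toy model of that shape (names are mine except those taken from the diff):

```
#include <iostream>
#include <variant>

struct WaitForSlot {};
struct Finished { bool success; };
using WorkResult = std::variant<WaitForSlot, Finished>;

WorkResult tryNext(bool inBuildSlot)
{
    if (!inBuildSlot)
        return WaitForSlot{}; // was: worker.waitForBuildSlot(shared_from_this())
    return Finished{true};
}

int main()
{
    if (std::holds_alternative<WaitForSlot>(tryNext(false)))
        std::cout << "queued until a build slot frees up\n";
}
```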

Some files were not shown because too many files have changed in this diff.