Compare commits

...

154 commits

Author SHA1 Message Date
eldritch horrors 5a54b0a20c meson: install systemd files
Change-Id: Idacf602fd379c82a051f00df2293cb02c8b286d4
2024-03-29 20:10:33 +00:00
eldritch horrors e28dc26084 meson: install shell files
Change-Id: I7c30690e5763d095cf7444333f7b687509051c5f
2024-03-29 20:10:33 +00:00
eldritch horrors 1da1f501fc meson: fix state-dir default value
the autoconf build system defaults to /nix/var, not /nix/var/nix. the
latter is only used in libstore, so we'll move the extra segment there.

Change-Id: Idfbc988ee302355982abdcd51d6d7b5d5d661c0d
2024-03-29 19:14:23 +00:00
Winter Cute 6646b80396 meson: add missing explicit dependency on nlohmann_json
Without this, the Meson setup won't bail out if nlohmann_json is
missing, leading to subpar DX (and maybe worse, but I'm not entirely
sure).

Change-Id: I5913111060226b540dcf003257c99a08e84da0de
2024-03-29 14:16:58 -04:00
Rebecca Turner 9d97d1cb68 Merge "Set MAKEFLAGS=-j and GTEST_BRIEF in .envrc" into main 2024-03-29 17:09:13 +00:00
Rebecca Turner 877750b7c5 Merge "Move DebugChar into its own file" into main 2024-03-29 16:20:14 +00:00
eldritch horrors 6e5db5e4a2 meson: install missing/generated headers
one header (args/root.hh) was simply missing, and the generated headers
were not installed. not all of them *should* be installed either, only a
select few (and sadly this needs a custom target for each one, it seems)

Change-Id: I37b25517895d0e5e521abc1202fa65624de57ed1
2024-03-29 02:45:48 +00:00
eldritch horrors 69bfd21e20 meson: install pkg-config files for libraries
Change-Id: I14b9d81d09f188eacfb9c68bcfb84751c18e3779
2024-03-29 02:45:48 +00:00
eldritch horrors 86b954a7af meson: increase functional test timeout
sometimes these fail with timeouts on loaded machines. let's up the
timeouts until we can pull the tests apart into more reasonable sizes

Change-Id: I2dfff2183cc1f3ff5e6107f43748ac046fe00d05
2024-03-29 02:19:36 +00:00
Rebecca Turner f9d5079c69 Set MAKEFLAGS=-j and GTEST_BRIEF in .envrc
- Enable parallel builds by default (and allow using environment
  variables to override `make` variables)
  - Hopefully we can get rid of this once we have Meson
- Set `GTEST_BRIEF=1`
  - This only shows failed tests, instead of listing every test on its
    own line.

```
$ GTEST_BRIEF=1 make check
[==========] 328 tests from 15 test suites ran. (37 ms total)
[  PASSED  ] 328 tests.
```

Change-Id: Id8103a8f24a9681be2be87e1b4df6fd5fdd7e4fd
2024-03-28 18:17:28 -07:00
Rebecca Turner 236bc046ba Merge "Remove HintFmt::operator%" into main 2024-03-29 01:13:45 +00:00
raito 55350bd68d Merge "feat: unprivileged read-only open of SQLite DB" into main 2024-03-29 00:49:17 +00:00
Rebecca Turner 5ec2efb686 Move DebugChar into its own file
Change-Id: Ia40549e5d0b78ece8dd0722c3a5a032b9915f24b
2024-03-28 15:54:12 -07:00
Rebecca Turner 62332c1250 Merge "Move shell_words into its own file" into main 2024-03-28 22:49:00 +00:00
Qyriad 81e50fef70 Merge "meson: implement functional tests" into main 2024-03-28 20:38:05 +00:00
jade 61d394e344 Merge "Build with traps on signed overflow" into main 2024-03-28 20:27:32 +00:00
jade ae065a992d Merge "progress-bar.cc: fix signed overflow" into main 2024-03-28 15:21:11 +00:00
jade 14207e1cf8 Build with traps on signed overflow
This is UB, we should not be doing it, and we can cheaply turn it into
crashes reliably. We would much rather have crashes than the program
doing something silly.

Benchmarks, but I wonder if they are nonsense because they get identical
times across compilers?!

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
|:---|---:|---:|---:|---:|
| `result-clang/bin/nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix` | 375.5 ± 24.0 | 353.8 | 408.8 | 1.00 |
| `result-gcc/bin/nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix` | 407.9 ± 26.0 | 385.1 | 449.5 | 1.09 ± 0.10 |
| `result-clangsan/bin/nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix` | 382.2 ± 26.6 | 354.9 | 419.0 | 1.02 ± 0.10 |
| `result-gccsan/bin/nix eval -f ../nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix` | 408.6 ± 24.6 | 384.5 | 441.9 | 1.09 ± 0.10 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `result-clang/bin/nix search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello` | 17.199 ± 0.167 | 16.930 | 17.499 | 1.01 ± 0.01 |
| `result-gcc/bin/nix search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello` | 17.409 ± 0.126 | 17.242 | 17.633 | 1.02 ± 0.01 |
| `result-clangsan/bin/nix search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello` | 17.080 ± 0.137 | 16.879 | 17.350 | 1.00 |
| `result-gccsan/bin/nix search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello` | 17.396 ± 0.160 | 17.131 | 17.660 | 1.02 ± 0.01 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `result-clang/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 6.267 ± 0.069 | 6.197 | 6.415 | 1.02 ± 0.01 |
| `result-gcc/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 6.232 ± 0.045 | 6.180 | 6.311 | 1.01 ± 0.01 |
| `result-clangsan/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 6.162 ± 0.020 | 6.133 | 6.196 | 1.00 |
| `result-gccsan/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 6.229 ± 0.031 | 6.199 | 6.289 | 1.01 ± 0.01 |

| Command | Mean [s] | Min [s] | Max [s] | Relative |
|:---|---:|---:|---:|---:|
| `GC_INITIAL_HEAP_SIZE=10g result-clang/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 4.683 ± 0.044 | 4.630 | 4.761 | 1.00 |
| `GC_INITIAL_HEAP_SIZE=10g result-gcc/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 4.750 ± 0.041 | 4.680 | 4.812 | 1.01 ± 0.01 |
| `GC_INITIAL_HEAP_SIZE=10g result-clangsan/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 4.703 ± 0.040 | 4.640 | 4.760 | 1.00 ± 0.01 |
| `GC_INITIAL_HEAP_SIZE=10g result-gccsan/bin/nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'` | 4.766 ± 0.037 | 4.727 | 4.844 | 1.02 ± 0.01 |

Change-Id: I616ca3eab670317587d47b41870d8ac963c019ae
2024-03-27 23:54:04 -07:00
jade ffbad9b762 progress-bar.cc: fix signed overflow
this was caused by the use of std::chrono::duration::max(), which gets
multiplied by some ratio to calculate the nanoseconds to wait. it then
explodes because that multiplication is a signed integer overflow. this
was definitely a bug.

error below:

/nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/bits/chrono.h:225:38: runtime error: signed integer overflow: 9223372036854775807 * 1000000 cannot be represented in type 'long'
    #0 0x736d376b2b69 in std::chrono::duration<long, std::ratio<1l, 1000000000l>> std::chrono::__duration_cast_impl<std::chrono::duration<long, std::ratio<1l, 1000000000l>>, std::ratio<1000000l, 1l>, long, false, true>::__cast<long, std::ratio<1l, 1000l>>(std::chrono::duration<long, std::ratio<1l, 1000l>> const&) /nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/bits/chrono.h:225:38
    #1 0x736d376b2b69 in std::enable_if<__is_duration<std::chrono::duration<long, std::ratio<1l, 1000000000l>>>::value, std::chrono::duration<long, std::ratio<1l, 1000000000l>>>::type std::chrono::duration_cast<std::chrono::duration<long, std::ratio<1l, 1000000000l>>, long, std::ratio<1l, 1000l>>(std::chrono::duration<long, std::ratio<1l, 1000l>> const&) /nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/bits/chrono.h:270:9
    #2 0x736d376b2b69 in std::enable_if<__is_duration<std::chrono::duration<long, std::ratio<1l, 1000000000l>>>::value, std::chrono::duration<long, std::ratio<1l, 1000000000l>>>::type std::chrono::ceil<std::chrono::duration<long, std::ratio<1l, 1000000000l>>, long, std::ratio<1l, 1000l>>(std::chrono::duration<long, std::ratio<1l, 1000l>> const&) /nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/bits/chrono.h:386:14
    #3 0x736d376b2b69 in std::cv_status std::condition_variable::wait_for<long, std::ratio<1l, 1000l>>(std::unique_lock<std::mutex>&, std::chrono::duration<long, std::ratio<1l, 1000l>> const&) /nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/condition_variable:164:6
    #4 0x736d376b1ee9 in std::cv_status nix::Sync<nix::ProgressBar::State, std::mutex>::Lock::wait_for<long, std::ratio<1l, 1000l>>(std::condition_variable&, std::chrono::duration<long, std::ratio<1l, 1000l>> const&) /home/jade/lix/lix/src/libutil/sync.hh:65:23
    #5 0x736d376b1ee9 in nix::ProgressBar::ProgressBar(bool)::'lambda'()::operator()() const /home/jade/lix/lix/src/libmain/progress-bar.cc:99:27
    #6 0x736d36de25c2 in execute_native_thread_routine (/nix/store/a3zlvnswi1p8cg7i9w4lpnvaankc7dxx-gcc-12.3.0-lib/lib/libstdc++.so.6+0xe05c2)
    #7 0x736d36b6b0e3 in start_thread (/nix/store/1zy01hjzwvvia6h9dq5xar88v77fgh9x-glibc-2.38-44/lib/libc.so.6+0x8b0e3) (BuildId: 287831bffdbdde0ec25dbd021d12bdfc0ab9f5ff)
    #8 0x736d36bed5e3 in __clone (/nix/store/1zy01hjzwvvia6h9dq5xar88v77fgh9x-glibc-2.38-44/lib/libc.so.6+0x10d5e3) (BuildId: 287831bffdbdde0ec25dbd021d12bdfc0ab9f5ff)

SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /nix/store/fdiknsmnnczx6brsbppyljcs9hqckawk-gcc-12.3.0/include/c++/12.3.0/bits/chrono.h:225:38 in
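
For context, a minimal standalone sketch (not the actual Lix fix; the 24-hour clamp is an arbitrary value chosen for illustration) of why converting duration::max() to a finer unit overflows, and one way to bound the wait:

```cpp
#include <algorithm>
#include <chrono>
#include <cstdio>

int main()
{
    using namespace std::chrono;
    // On typical implementations milliseconds uses a signed 64-bit count, so
    // milliseconds::max() cannot be converted to nanoseconds: the required
    // multiplication by 1'000'000 overflows, the UB reported in the trace above.
    auto wait = milliseconds::max();
    // Clamping to a finite bound (24h here is an arbitrary example value)
    // keeps any later duration_cast inside the representable range.
    auto clamped = std::min(wait, duration_cast<milliseconds>(hours(24)));
    std::printf("clamped wait: %lld ms\n",
                static_cast<long long>(clamped.count()));
}
```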

Change-Id: Ia0303242cdfd5d49385ae9e99718d709625a4633
2024-03-27 22:56:04 -07:00
Winter Cute 80405d0626 Stop vendoring toml11
We don't apply any patches to it, and vendoring it locks users into
bugs (it hasn't been updated since its introduction in late 2021).

Closes lix-project/lix#164

Change-Id: Ied071c841fc30b0dfb575151afd1e7f66970fdb9
2024-03-27 21:04:00 -04:00
Qyriad 038daad218 meson: implement functional tests
Functional tests can be run with
`meson test -C build --suite installcheck`.

Notably, functional tests must be run *after* running `meson install`
(Lix's derivation runs the installcheck suite in installCheckPhase so it
does this correctly), due to some quirks between Meson and the testing
system.

As far as I can tell the functional tests are meant to be run after
installing anyway, but unfortunately I can't transparently make
`meson test --suite installcheck` depend on the install targets.

The script that runs the functional tests, meson/run-test.py, checks
that `meson install` has happened and fails fast with a (hopefully)
helpful error message if any of the functional tests are run before
installing.

TODO: this change needs reflection in developer documentation

Change-Id: I8dcb5fdfc0b6cb17580973d24ad930abd57018f6
2024-03-27 18:37:50 -06:00
jade edba570664 HOT SALE: 15% off your build times!
This was achieved by running maintainers/buildtime_report.sh on the
build directory of a meson build, then asking "why the heck is json
eating our build times", and strategically moving the JSON-using bits
out of widely included headers.

It turns out that putting literally any metrics whatsoever into the
build had immediate and predictable results.

Results are 1382.5s frontend time -> 1175.4s frontend time, back end
time approximately invariant.

Related: lix-project/lix#159

Change-Id: I7edea95c8536203325c8bb4dae5f32d727a21b2d
2024-03-27 03:52:57 +00:00
jade 412a9c9f67 Enable clang build timing analysis
I didn't enable this by default for clang because it makes the build time
about 10% worse. Unfortunate, but tbh devs for whom 10% of build time is
not *that* bad should probably simply enable this.

Change-Id: I8d1e5b6f3f76c649a4e2f115f534f7f97cee46e6
2024-03-27 03:52:57 +00:00
jade 50c6feeb77 Add release notes system for dev facing release notes
We keep changing dev stuff and we probably should keep the news up to
date?

Change-Id: I819da6a29f1c56c8ab8d758c159a9c96164cb04e
2024-03-27 03:52:57 +00:00
eldritch horrors 0436f4cfa6 flake: always build release notes in devshell
Change-Id: I0e02567fe8f102a8a8f1558aa094eefacdac9393
2024-03-27 03:09:14 +00:00
eldritch horrors 279e30e7ef build: replace changelog-d with local script
hacking changelog-d to support not just github but also forgejo and
gerrit is a lot more complicated than it's worth, even more so since
the entire thing can just as well be done with ~60 lines of python.
this new script is also much cheaper to instantiate (being python),
so having it enabled in all shells is far less of a hassle.

we've also adjusted existing release notes that referenced a gerrit
cl to auto-link to the cl in question, making the diff a bit bigger

closes lix-project/lix#176

Change-Id: I8ba7dd0070aad9ba4474401731215fcf5d9d2130
2024-03-27 03:09:14 +00:00
eldritch horrors 8fd02df90d manual: fix release notes
fix key spelling errors, type errors, things-should-not-be-comments errors

Change-Id: I3ce12873aa78002bca686bd88404771895b05d30
2024-03-27 03:09:14 +00:00
Rebecca Turner aee3d639b5 Move shell_words into its own file
Change-Id: I34c0ebfb6dcea49bf632d8880e04075335a132bf
2024-03-26 16:44:04 -07:00
Rebecca Turner 8e63eca912 Remove HintFmt::operator%
Change-Id: Ibcf1a7848b4b18ec9b0807628ff229079ae7a0fe
2024-03-26 15:40:05 -07:00
jade da22dbc333 Merge "Issue importer: do not notify" into main 2024-03-26 22:04:48 +00:00
Ilya K fa3088a878 Merge "envrc: improve" into main 2024-03-26 20:49:40 +00:00
jade a5254186fd Merge "libstore/filetransfer: use Lix UA and unnix error message" into main 2024-03-26 20:46:27 +00:00
Qyriad 207d24da4e Merge "meson: implement unit tests" into main 2024-03-26 20:06:09 +00:00
Ilya K 312cee142a envrc: improve
- add shellcheck hint
- load .envrc.local if one exists
- use a variable to allow choosing the shell variant

Change-Id: Iea34e5a800f5d463e5792020c5c293b8b3071ca5
2024-03-26 19:40:43 +03:00
raito 80b66b5065 libstore/filetransfer: use Lix UA and unnix error message
Once this commit lands, we are even more visible in analytics FWIW.

Change-Id: Id7e0c162315d0f191edbea9cb5fb82ce363704b9
Signed-off-by: Raito Bezarius <raito@lix.systems>
2024-03-26 16:06:27 +00:00
jade 531b8d0ab8 Merge "libmain: version printer uses Lix instead of Nix" into main 2024-03-26 16:06:15 +00:00
Ilya K 7170e9f947 envrc: add
direnv good.

Change-Id: I81bd0086b847c6ee642a201fbc991a1f63fc7d7a
2024-03-26 15:19:37 +03:00
Ilya K a69f6e185a build-remote: fix format string shenanigans
HintFmt(string) invokes the HintFmt("%s", literal) constructor,
which is not what we want here. Add a constructor with a proper name
and call that.

Next step: rename all the other ones to HintFmt::literal(string).

Fixes: lix-project/lix#178

Change-Id: If52d2eb8864ceb8663e05992e9d1fffef573d6b8
2024-03-26 07:58:24 +00:00
Lunaphied 73624f7d9c Merge "Merge pull request #8817 from iFreilicht/flake-update-lock-overhaul" into main 2024-03-26 00:45:49 +00:00
Qyriad e1ffe56793 meson: implement unit tests
Unit tests can be run with `meson test -C build --suite check`.
`--suite check` is optional, as right now that's the only test suite,
but when functional tests are added those will be in a separate suite.

Change-Id: I7f22f1cde4b489b3cdb5f9a36a544f0c409fcc1f
2024-03-26 00:43:33 +01:00
Théophane Hufschmitt 86881226b0 Merge pull request #8817 from iFreilicht/flake-update-lock-overhaul
Overhaul `nix flake update` and `nix flake lock` UX

(cherry picked from commit 12a0ae73dbb37becefa5a442eb4532ff0de9ce65)
Change-Id: Iff3b4f4235ebb1948ec612036b39ab29e4ca22b2
2024-03-25 17:36:24 -06:00
eldritch horrors c3a5f937d3 flake.nix: linearize meson builds
parallel meson builds need too much ram. linearize them for now, and
hopefully we can remove the make build system and this hack soonish.

Change-Id: I42c092db8b0c63680e77da2263cdfe9e7f6575be
2024-03-25 21:48:55 +00:00
Qyriad 787c4397f1 Merge "issue template: use nix --version instead of nix-env --version" into main 2024-03-25 21:47:53 +00:00
Qyriad 1da1aa5045 issue template: use nix --version instead of nix-env --version
`nix --version` doesn't require the `nix-command` experimental feature to
run, and we could all do with less nix-env

Change-Id: I90748d591c574d96eda46591e9f9ce828311da29
2024-03-25 12:59:30 -06:00
Eelco Dolstra aa7653608d Minor cleanup in libexpr/flake/flake.cc
(cherry picked from commit 05316d401fa509557c71140e17bb19814412fcb8)
Change-Id: I6ba0b55709f5fe21beb4e9f3bf72ee28715d15f3
2024-03-25 15:30:36 +00:00
Eelco Dolstra b525d0f20c Input: Replace markFileChanged() by putFile()
Committing a lock file using markFileChanged() required the input to
be writable by the caller in the local filesystem (using the path
returned by getSourcePath()). putFile() abstracts over this.

(cherry picked from commit 95d657c8b3ae4282e24628ba7426edb90c8f3942)
Change-Id: Ie081c5d9eb4e923b229191c5e23ece85145557ff
2024-03-25 15:30:36 +00:00
John Ericson 3d065192c0 Overhaul completions, redo #6693 (#8131)
As I complained in
https://github.com/NixOS/nix/pull/6784#issuecomment-1421777030 (a
comment on the wrong PR, sorry again!), #6693 introduced a second
completions mechanism to fix a bug. Having two completion mechanisms
isn't so nice.

As @thufschmitt also pointed out, it was a bummer to go from `FlakeRef`
to `std::string` when collecting flake refs. Now it is `FlakeRefs`
again.

The underlying issue it sought to work around was that completion of
arguments not at the end can still benefit from the information from
later arguments.

To fix this better, we rip out that change and simply defer all
completion processing until after all the (regular, already-complete)
arguments have been passed.

In addition, I noticed the original completion logic used some global
variables. I do not like global variables, because even if they save
lines of code, they also obfuscate the architecture of the code.

I got rid of them and moved them to a new `RootArgs` class, which now has
`parseCmdline` instead of `Args`. The idea is that we have many argument
parsers from subcommands and what-not, but only one root args that owns
the other per actual parsing invocation. The state that was global is
now part of the root args instead.

This did, admittedly, add a bunch of new code. And I do feel bad about
that. So I went and added a lot of API docs to try to at least make the
current state of things clear to the next person.

--

This is needed for RFC 134 (tracking issue #7868). It was very hard to
modularize `Installable` parsing when there were two completion
arguments. I wouldn't go as far as to say it is *easy* now, but at least
it is less hard (and the completions test finally passed).

Co-authored-by: Valentin Gagarin <valentin.gagarin@tweag.io>
Change-Id: If18cd5be78da4a70635e3fdcac6326dbfeea71a5
(cherry picked from commit 67eb37c1d0de28160cd25376e51d1ec1b1c8305b)
2024-03-25 15:30:36 +00:00
Tom Bereknyei 4494f9097f feat: notation to refer to no attribute search prefix
An attrPath prefix of "." indicates no need to try default attrPath prefixes. For example `nixpkgs#legacyPackages.x86_64-linux.ERROR` searches through

```
trying flake output attribute 'packages.x86_64-linux.legacyPackages.x86_64-linux.ERROR'
using cached attrset attribute ''
trying flake output attribute 'legacyPackages.x86_64-linux.legacyPackages.x86_64-linux.ERROR'
using cached attrset attribute 'legacyPackages.x86_64-linux'
trying flake output attribute 'legacyPackages.x86_64-linux.ERROR'
using cached attrset attribute 'legacyPackages.x86_64-linux'
```

Previously there was no way to specify that one does not want the automatic
search behavior. Now one can specify
`nixpkgs#.legacyPackages.x86_64-linux.ERROR` to only refer to the rooted
attribute path without any default injection of attribute search path or
system.

Change-Id: Iac1334e1470137b7ce11dcf845513810230638ec
(cherry picked from commit d4aed18883b361133607296fb6cd789c47427a38)
2024-03-25 15:30:36 +00:00
Lunaphied d3d7489571 Merge "Improve new CLI UX by supporting short -E flag for --expr" into main 2024-03-25 14:13:44 +00:00
raito ad8a4b380e libmain: version printer uses Lix instead of Nix
Change-Id: I014ff24b900c0b9a48b7a63c8bb8b86cde3ebe54
Signed-off-by: Raito Bezarius <raito@lix.systems>
2024-03-25 08:04:31 +00:00
jade 5a1c35f907 Merge "Restore system-install profile files from the previous installer" into main 2024-03-25 04:17:36 +00:00
jade dee6b75702 Restore system-install profile files from the previous installer
These files are required to get Nix in PATH in existing multi-user installs using
the legacy installer. We really could use some tests.

Cc: lix-project/lix#33

This partially reverts commit 93cc063344.

Fixes: lix-project/lix#173

Change-Id: Iafb55280596732670a432f604b897f48562868e4
2024-03-25 04:17:14 +00:00
Lunaphied 185ecf1f45 Improve new CLI UX by supporting short -E flag for --expr
Change-Id: I55881c846da8416a92a14deedfa5bbbf09a122fb
2024-03-24 21:17:51 -06:00
eldritch horrors c856b82c2e libstore: despecialcase protocol version check
protocol versions are sent as u64. on the peer we read them as uint64,
check that the upper half is 0, and throw an exception if not. we then
read an arbitrary amount of data from the peer and dump it to the user
terminal. this is a little bit ridiculous, can never happen in a correct
implementation, and is severely untested. let us just drop it entirely.
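
Conceptually, all that is needed afterwards is a plain check along these lines (a hypothetical helper, not the actual Lix wire code):

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical helper: the protocol version arrives as a u64, only the low
// 32 bits are meaningful, and anything else is simply a protocol error.
uint32_t checkProtocolVersion(uint64_t raw)
{
    if (raw >> 32)
        throw std::runtime_error("remote sent an invalid protocol version");
    return static_cast<uint32_t>(raw);
}
```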

Change-Id: Ibd2f53a765341ed6439d40d9d1eac11e79c6b5e3
2024-03-24 18:45:22 +00:00
eldritch horrors 3e428f2289 libstore: un-inline copyNAR expansions
these are copies of copyNAR with only some variables renamed.

Change-Id: I98ddd7a98250fa5d304e18e1debf417e9f7768dd
2024-03-24 15:24:02 +01:00
jade 33da9c09c8 Issue importer: do not notify
This uses the forgejo patch we have for dont_notify on issue creation on
the api, and indeed does not notify, so we can simply run the script
safely :D

Fixes: lix-project/web-services#38

Change-Id: I86bcbf9b4499b439b79b82af84ee7df0f8eb3298
2024-03-23 19:03:34 -07:00
jade 946fc12e4e Revert "Merge pull request #9476 from alois31/restore-progress-bar"
Observed to regress nix repl attrset printing with narrow windows.

This reverts commit a2d5e803cf.

Fixes: lix-project/lix#168

Change-Id: I8e0031475b4ec26d6a71014357d973578b70815c
2024-03-23 18:04:29 -07:00
eldritch horrors 652f52f071 libutil: don't memset 64k in drainFD
this is not needed and introduces a bunch of memset calls that alone make
up 3% of the valgrind cycle estimation. real-world impact is a lot
lower on our test machine, but we suspect that less powerful machines
would see a benefit from dropping this.
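
A sketch of the idea, under the assumption that buffer contents past what read() returns are never inspected (this is not the real drainFD):

```cpp
#include <unistd.h>
#include <array>
#include <string>

// Read everything from an fd into a string without zero-initialising the
// scratch buffer on every call: read() reports exactly how many bytes are
// valid, so the rest of the buffer never needs to be touched.
std::string drainFdSketch(int fd)
{
    std::string result;
    std::array<unsigned char, 64 * 1024> buf; // deliberately uninitialised
    while (true) {
        ssize_t n = read(fd, buf.data(), buf.size());
        if (n <= 0) break; // EOF or error; real code would retry on EINTR
        result.append(reinterpret_cast<char *>(buf.data()), n);
    }
    return result;
}
```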

Change-Id: Iad10e9d556e64fdeb0bee0059a4e52520058d11e
2024-03-23 22:17:46 +00:00
raito 8044540c42 feat: unprivileged read-only open of SQLite DB
If the state SQLite database is configured to use a write-ahead-log, it
creates WAL files in the state directory.

When the state SQLite database is closed by the `nix-daemon` after
builds, those files are removed.

When an unprivileged user would like to open that database _read-only_,
they cannot do so, because they would need to create those WAL files and
they do not have the permission to do so.

For this, SQLite offers a "persistent WAL" feature [1] to leave the WAL
files around, even after closing the database.

This CL enables the persistent WAL mode.

Fixes: https://github.com/NixOS/nix/issues/10300
[1]: https://www.sqlite.org/wal.html
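
A minimal sketch of the SQLite API involved (the database path and the standalone program are illustrative, not Lix code): SQLITE_FCNTL_PERSIST_WAL asks SQLite to leave the -wal file in place after the connection closes.

```cpp
#include <sqlite3.h>

int main()
{
    sqlite3 * db = nullptr;
    // Path is an arbitrary example, not Lix's state database location.
    if (sqlite3_open("/tmp/example-state.sqlite", &db) != SQLITE_OK) return 1;

    int persist = 1; // 1 = keep WAL files around after close
    sqlite3_file_control(db, "main", SQLITE_FCNTL_PERSIST_WAL, &persist);

    // Put the database into WAL journal mode (a no-op if already set).
    sqlite3_exec(db, "PRAGMA journal_mode=WAL;", nullptr, nullptr, nullptr);

    sqlite3_close(db);
    return 0;
}
```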

Change-Id: Id8ae534d7d2290457af28782e5215222ae051fe5
Signed-off-by: Raito Bezarius <raito@lix.systems>
2024-03-23 15:07:48 +01:00
Qyriad b4d07656ff build: optionally build and install with meson
This commit adds several meson.build, which successfully build and
install Lix executables, libraries, and headers. Meson does not yet
build docs, Perl bindings, or run tests, which will be added in
following commits. As such, this commit does not remove the existing
build system, or make it the default, and also as such, this commit has
several FIXMEs and TODOs as notes for what should be done before the
existing autoconf + make buildsystem can be removed and Meson made the
default. This commit does not modify any source files.

A Meson-enabled build is also added as a Hydra job, and to
`nix flake check`.

Change-Id: I667c8685b13b7bab91e281053f807a11616ae3d4
2024-03-22 08:36:50 -06:00
jade a7161b6c0f Merge "clang-tidy check infrastructure" into main 2024-03-21 12:28:13 -06:00
Qyriad fab55aff0e flake: fix arm32 Linux cross devShell on macOS (fix nix flake check)
Change-Id: Iacac97de0b3d5f2df52c7bc985148624a351f45d
2024-03-21 08:22:38 -06:00
eldritch horrors 22e3f0e987 libexpr: unbreak PosTable performance
this was mostly an inconvenience for error reporting, but fully broke
the debugger (because the debugger does *a lot* of eager position
resolution). copying the line offsets into a local and filling that
local when empty without also storing the calculated offsets back does
kind of ... not cache anything.

fixes lix-project/lix#165

Change-Id: Iccb0ba193ce2f15c832978daecf7b9bebbbe8585
2024-03-20 13:45:36 +01:00
jade 5a28d70d1e Merge "un-ups your start" into main 2024-03-19 14:29:57 -06:00
eldritch horrors d9a83886f9 libutil: remove exception handling workingness check
within lix itself this problem is caught by the test suite. outside of
lix itself three cases can be had: either the problem is fully inside
lix libs, fully inside user code, or it exists at the boundary. the
first is caught by the test suite, the second isn't caught at all, and
the third is something lix should not be responsible for.

Change-Id: I95aa35d8cb6f0ef5816a2941c467bc0c15916063
2024-03-19 06:09:42 -06:00
jade 4050245faa Merge changes I72c945ca,I2138bb4d,Ib96749f3 into main
* changes:
  Release notes for builtins.nixVersion change
  un-nixes ur lix, a little
  issue importer: list issues that are *not* closed when finding existing issues
2024-03-18 20:19:53 -06:00
jade 985bd5eb9f un-ups your start
We do not need upstart; it is so thoroughly obsolete that we should not
care about supporting it.

Change-Id: Ie0ca084740845555fddffacc899cd129c9a4c1fe
2024-03-18 18:28:08 -07:00
jade 20b4a97af3 Release notes for builtins.nixVersion change
Change-Id: I72c945cab464d26d73f5594ef0a4bb2184545da4
2024-03-18 18:26:52 -07:00
jade 30233d87f9 un-nixes ur lix, a little
I didn't really go attack the docs because we need to pull a bunch of
PRs. I went looking for strings in the code that referred to Lix as Nix.

Change-Id: I2138bb4dd239096bc530946b281db7f875195b39
2024-03-18 18:20:24 -07:00
jade 81be5eb7c6 issue importer: list issues that are *not* closed when finding existing issues
Turns out also, you cannot set the queue to 0 with any success. So we
really should just like, prevent notifications in forgejo itself.

Filed a bug for that:
lix-project/web-services#38

Change-Id: Ib96749f3159659182904963cab7b2ef88fc64442
2024-03-18 18:14:31 -07:00
jade 6b0020749d clang-tidy check infrastructure
This brings in infrastructure for developing new custom clang-tidy lints
and refactors for Lix.

Change-Id: I3df5f5855712ab4f97d4e84d771e5e818f81f881
2024-03-18 16:10:29 -07:00
eldritch horrors f38ae92a38 libutil: make AutoCloseFD a better resource
add a reset() method to close the wrapped fd instead of assigning magic
constants. also make the from-fd constructor explicit so you can't
accidentally assign the *wrong* magic constant, or even an unrelated
integer that also just happens to be an fd by pure chance.
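
As a rough illustration of the pattern (a simplified stand-in, not the real nix::AutoCloseFD):

```cpp
#include <unistd.h>

// An explicit constructor prevents accidental conversions from arbitrary
// ints, and reset() replaces "fd = -1"-style magic-constant assignments.
class FdGuard
{
    int fd = -1;
public:
    FdGuard() = default;
    explicit FdGuard(int fd_) : fd(fd_) {}
    FdGuard(const FdGuard &) = delete;
    FdGuard & operator=(const FdGuard &) = delete;
    ~FdGuard() { reset(); }

    // Close the wrapped fd and optionally take ownership of a new one.
    void reset(int newFd = -1)
    {
        if (fd >= 0) ::close(fd);
        fd = newFd;
    }
    int get() const { return fd; }
};
```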

Change-Id: I51311b0f6e040240886b5103d39d1794a6acc325
2024-03-18 15:42:52 -06:00
eldritch horrors 0f518f44e2 Merge "libexpr: associate let exprs with the correct StaticEnv" into main 2024-03-18 15:20:21 -06:00
eldritch horrors afb839a0c9 libexpr: associate let exprs with the correct StaticEnv
static env association is from expr to its enclosing scope, but let
exprs set their association to their *inner* scope. this skips one level
of envs and will cause segfaults if the parent is a with expr.

fixes #145

Change-Id: I1d22146110f071ede21b4eed7ed34b5850ef2ef3
2024-03-18 14:15:22 -07:00
jade 37c4b10c44 Add clang format configuration
Contemplate the configuration with: https://clang-format-configurator.site/

(cherry picked from commit 53fdcbca509b6c5dacaea3d3c465d86e49b0dd74)
Change-Id: I5446fd45de2bf644e34112f719afb3318a440b30
2024-03-18 13:31:39 -07:00
eldritch horrors b3599166ad libexpr: sort binding name in debugger
not doing this exposes the binding name order to the annoying
interference of parse order on symbol order, which wouldn't be so bad if
it didn't make the tests less reliable and, importantly, dependent on
linker behavior (due to primop initialization being done in static
initializer, and the order of static initializers being defined only
within a single translation unit).

fixes #143

Change-Id: I3cf417893fbcf19e9ad3ff8986deb7cbcf3ca511
2024-03-18 20:03:31 +01:00
jade 47a237f7ec Merge "Delete hasPrefix and hasSuffix from the codebase" into main 2024-03-18 12:01:39 -06:00
eldritch horrors 86a1121d16 use byte indexed locations for PosIdx
we now keep not a table of all positions, but a table of all origins and
their sizes. position indices are now direct pointers into the virtual
concatenation of all parsed contents. this slightly reduces memory usage
and time spent in the parser, at the cost of not being able to report
positions if the total input size exceeds 4GiB. this limit is not unique
to nix though, rustc and clang also limit their input to 4GiB (although
at least clang refuses to process inputs that are larger, we will not).

this new 4GiB limit probably will not cause any problems for quite a
while, all of nixpkgs together is less than 100MiB in size and already
needs over 700MiB of memory and multiple seconds just to parse. 4GiB
worth of input will easily take multiple minutes and over 30GiB of
memory without even evaluating anything. if problems *do* arise we can
probably recover the old table-based system by adding some tracking to
Pos::Origin (or increasing the size of PosIdx outright), but for time
being this looks like more complexity than it's worth.

since we now need to read the entire input again to determine the
line/column of a position we'll make unsafeGetAttrPos slightly lazy:
mostly the set it returns is only used to determine the file of origin
of an attribute, not its exact location. the thunks do not add
measurable runtime overhead.

notably this change is necessary to allow changing the parser since
apparently nothing supports nix's very idiosyncratic line ending choice
of "anything goes", making it very hard to calculate line/column
positions in the parser (while byte offsets are very easy).
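
A rough sketch of the data structure this describes, with invented names and none of the real PosTable's details:

```cpp
#include <algorithm>
#include <cstdint>
#include <string>
#include <vector>

// Each parsed input ("origin") occupies a contiguous range of a virtual byte
// stream; a position is just a 32-bit offset into that stream.
struct Origin { std::string name; uint32_t start; std::string text; };

struct PosTableSketch
{
    std::vector<Origin> origins; // kept sorted by start offset
    uint32_t next = 0;           // total input limited to 4 GiB

    uint32_t addOrigin(std::string name, std::string text)
    {
        uint32_t start = next;
        next += text.size();
        origins.push_back({std::move(name), start, std::move(text)});
        return start;
    }

    // Resolving a position re-scans the origin's text to compute line/column,
    // which is why resolution is best done lazily.
    void resolve(uint32_t pos, std::string & name, uint32_t & line, uint32_t & col) const
    {
        auto it = std::upper_bound(origins.begin(), origins.end(), pos,
            [](uint32_t p, const Origin & o) { return p < o.start; });
        const Origin & o = *--it; // origin containing pos
        name = o.name;
        line = 1, col = 1;
        for (uint32_t i = 0; i < pos - o.start; ++i) {
            if (o.text[i] == '\n') { ++line; col = 1; } else ++col;
        }
    }
};
```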

(cherry picked from commit 5d9fdab3de0ee17c71369ad05806b9ea06dfceda)
Change-Id: Ie0b2430cb120c09097afa8c0101884d94f4bbf34
2024-03-18 16:12:46 +01:00
eldritch horrors c39150e6bb diagnose "unexpected EOF" at EOF
this needs a string comparison because there seems to be no other way to
get that information out of bison. usually the location info is going to
be correct (pointing at a bad token), but since EOF isn't a token as
such, it'll be wrong in this case.

this hasn't shown up much so far because a single line ending *is* a
token, so any file formatted in the usual manner (ie, ending in a line
ending) would have its EOF position reported correctly.

(cherry picked from commit 855fd5a1bb781e4f722c1d757ba43e866d370132)
Change-Id: I120c56a962f4286b1ae3b71da7b71ce8ec3e0535
2024-03-18 16:12:46 +01:00
eldritch horrors 4c072c7c5f match line endings used by parser and error reports
the parser treats a plain \r as a newline, error reports do not. this
can lead to interesting divergences if anything makes use of this
feature, with error reports pointing to wrong locations in the input (or
even outside the input altogether).

(cherry picked from commit 2be6b143289e5479cc4a2667bb84e879116c2447)
Change-Id: Ieb7f7655bac8cb0cf5734c60bd41723388f2973c
2024-03-18 16:12:46 +01:00
eldritch horrors 9cf92c012d report inherit attr errors at the duplicate name
previously we reported the error at the beginning of the binding
block (for plain inherits) or the beginning of the attr list (for
inherit-from), effectively hiding where exactly the error happened.

this also carries over to runtime positions of attributes in sets as
reported by unsafeGetAttrPos. we're not worried about this changing
observable eval behavior because it *is* marked unsafe, and the new
behavior is much more useful.

(cherry picked from commit 1edd6fada53553b89847ac3981ac28025857ca02)
Change-Id: I2f50eb9f3dc3977db4eb3e3da96f1cb37ccd5174
2024-03-18 16:12:45 +01:00
eldritch horrors d826427f02 normalize formal order on ExprLambda::show
we already normalize attr order to lexicographic, doing the same for
formals makes sense. doubly so because the order of formals would
otherwise depend on the context of the expression, which is not quite as
useful as one might expect.

(cherry picked from commit 4147ecfb1c51f3fe3b4adcbd4e753fd487dab645)
Change-Id: I3fd0dbdef3ac7447a3a03ff20bb514a0d0f23fb1
2024-03-18 07:56:34 -06:00
eldritch horrors 314f044c2b keep copies of parser inputs that are in-memory only
the parser modifies its inputs, which means that sharing them between
the error context reporting system and the parser itself can confuse the
reporting system. usually this led to early truncation of error context
reports which, while not dangerous, can be quite confusing.

(cherry picked from commit d384ecd553aa997270b79ee98d02f7cf7e1849e6)
Change-Id: I677646b5675b12b2faa787943646aa36dc6e6ee3
2024-03-18 07:56:23 -06:00
eldritch horrors 1f8b85786e libutil: remove vfork
vfork confers a large performance advantage over fork, measured locally
at 16µs per vfork against 90µs per fork. however nix *almost always*
follows a vfork up with an execve-family call, melting the performance
advantage from 6x to only 15%. in most of those cases it's doing things
that are undefined behavior (like manipulating the heap, or even
throwing exceptions and trashing the parent process stack).

most notably the one place that could benefit from the vfork performance
improvement is linux derivation sandbox setup—which doesn't use vfork.

Change-Id: I2037b7384d5a4ca24da219a569e1b1f39531410e
2024-03-18 06:10:41 -06:00
jade 2890840b96 Merge "Forgejo issue importer" into main 2024-03-17 22:08:59 -06:00
jade 61e21b2557 Delete hasPrefix and hasSuffix from the codebase
These now have equivalents in the standard lib in C++20. This change was
performed with a custom clang-tidy check which I will submit later.
Executed like so:

ninja -C build && run-clang-tidy -checks='-*,nix-*' -load=build/libnix-clang-tidy.so -p .. -fix ../tests | tee -a clang-tidy-result
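
For illustration, roughly what the rewrite amounts to at a call site (the function name and example strings here are made up; the C++20 members are std::string's own starts_with/ends_with):

```cpp
#include <string>

// before: hasPrefix(s, "/nix/store/") && hasSuffix(s, ".drv")
bool demo(const std::string & s)
{
    return s.starts_with("/nix/store/") && s.ends_with(".drv");
}
```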

Change-Id: I62679e315ff9e7ce72a40b91b79c3e9fc01b27e9
2024-03-17 20:17:19 -07:00
jade 706cee5c49 Merge "builtins.nixVersion: return fixed fake version" into main 2024-03-17 12:13:21 -06:00
Qyriad 32d6e58069 flake: fix musl static stdenv devShell (fix nix flake check)
pkgs.pkgsStatic.glibcLocales is null, so the string coercion was failing
for devShells against static stdenvs

Change-Id: Iee8e1042a852133ce0432627d72a85e97c17055e
2024-03-17 10:05:18 -06:00
jade 886a418d23 builtins.nixVersion: return fixed fake version
This builtin is only going to cause us problems because we are not Nix,
so let's just falsify being in the 2.18 series, since that is the
closest target that has any meaning.

In future we might want to have a better feature detection mechanism,
for when we actually add stuff to some builtin's attr set argument. But
builtins.nixVersion is just going to be hopelessly broken and it should
be stubbed out.

Fixes lix-project/lix#144

Change-Id: Id7390b32a29c6147f2977737d81846320de5d67e
2024-03-17 00:32:19 -07:00
eldritch horrors 11f35afa6f diagnose duplicated attrs at correct path
diagnose attr duplication at the path the duplication was detected, not
at the path the current attribute wanted to place. doing the latter is
only correct if a leaf attribute was duplicated, not if an attrpath was
set to a non-attrset in one binding and a (potentially implied) attrset
in another binding.

fixes #124

Change-Id: Ic4aa9cc12a9874d4e7897c6f64408f10aa36fc82
2024-03-16 22:12:49 +01:00
jade 3392020710 Forgejo issue importer
We needed a script to go yoink all the real NixOS/Nix issues from our
mirror into the Lix repo.

Change-Id: If8c8ebfb58634c675eae450454c0189288c6b18a
2024-03-16 00:22:33 -07:00
Rebecca Turner e257ff10fd Merge "Fix gc-small-vector.hh includes" into main 2024-03-15 16:20:40 -06:00
eldritch horrors 3c52344300 out with the -O3, in with the -O2
-O3 does not measurably improve performance of the resulting binaries,
neither with lto enabled nor with lto disabled. what it does do, however,
is cause gcc warning spew in libstdc++ that we can't do anything
about (and that, upon inspection of the libstdc++ source, looks like a gcc
bug).

with lto, -O3:

Benchmark 1: GC_INITIAL_HEAP_SIZE=10g nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):      4.608 s ±  0.027 s    [User: 3.866 s, System: 0.522 s]
    Range (min … max):    4.579 s …  4.640 s    10 runs

Benchmark 2:  nix eval -f <nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix>
    Time (mean ± σ):     408.1 ms ±  25.5 ms    [User: 360.0 ms, System: 28.1 ms]
    Range (min … max):   387.6 ms … 439.0 ms    10 runs

with lto, -O2:

Benchmark 1: GC_INITIAL_HEAP_SIZE=10g nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):      4.632 s ±  0.044 s    [User: 3.874 s, System: 0.544 s]
    Range (min … max):    4.563 s …  4.673 s    10 runs

Benchmark 2:  nix eval -f <nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix>
    Time (mean ± σ):     394.0 ms ±  23.9 ms    [User: 351.2 ms, System: 27.6 ms]
    Range (min … max):   377.8 ms … 429.3 ms    10 runs

without lto, -O3:

Benchmark 1: GC_INITIAL_HEAP_SIZE=10g nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):      4.700 s ±  0.024 s    [User: 3.906 s, System: 0.559 s]
    Range (min … max):    4.663 s …  4.717 s    10 runs

Benchmark 2:  nix eval -f <nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix>
    Time (mean ± σ):     400.4 ms ±  25.6 ms    [User: 353.7 ms, System: 26.8 ms]
    Range (min … max):   379.8 ms … 430.6 ms    10 runs

without lto, -O2:

Benchmark 1: GC_INITIAL_HEAP_SIZE=10g nix eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
    Time (mean ± σ):      4.724 s ±  0.030 s    [User: 3.924 s, System: 0.570 s]
    Range (min … max):    4.687 s …  4.749 s    10 runs

Benchmark 2:  nix eval -f <nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix>
    Time (mean ± σ):     392.4 ms ±  24.3 ms    [User: 350.9 ms, System: 26.4 ms]
    Range (min … max):   376.9 ms … 428.0 ms    10 runs

fixes #46

Change-Id: Ib8afad8a07c278f57f2e3317d00cce4f9ec0f338
2024-03-15 15:31:49 -06:00
Rebecca Turner 7abbce500b Fix gc-small-vector.hh includes
Change-Id: I4abc19029fb62712582761d4fc1895156b68803d
2024-03-15 13:39:32 -07:00
jade 0d85875c3a Allow dlopen of plugins to fail
It happens with some frequency that plugins that might be unimportant to
the evaluation at hand mismatch with the nix version, leading to
spurious load failures. Let's make these non-fatal.
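
A hedged sketch of the general approach (not the actual Lix plugin-loading code; the flags and warning wording are illustrative):

```cpp
#include <dlfcn.h>
#include <cstdio>

// A plugin that fails to load produces a warning instead of aborting.
void loadPlugin(const char * path)
{
    void * handle = dlopen(path, RTLD_NOW | RTLD_GLOBAL);
    if (!handle) {
        std::fprintf(stderr, "warning: could not load plugin '%s': %s\n",
                     path, dlerror());
        return; // non-fatal: continue without this plugin
    }
    // ... look up and register the plugin's entry points here ...
}
```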

Change-Id: Iba10e951d171725ccf1a121bcd9be1e1d6ad69eb
2024-03-15 12:31:16 -07:00
jade 7d361f1a82 Test that :st does ... something
Change-Id: I97c00b5eb1288f68d8c2b484436cc185d040b8b2
2024-03-15 12:31:16 -07:00
jade af066af7f3 repl_characterization: Also verify the stack trace exists
Change-Id: I8b2d8211a24011fae1586a1182d7d0772a039cd7
2024-03-15 12:31:16 -07:00
jade 78513b1fc8 repl_characterization: eat newlines after commands and source-dir paths
This is because they are unrepresentable in the source files with
commentary but not in the output, so we should just eat them in
normalization. It's ok.

Change-Id: I2cb7e8b3fc7b00874885bb287cbaa200b41cb16b
2024-03-15 12:31:16 -07:00
jade 8a8715af89 Add regression tests for #9917, #9918
Change-Id: Ib0591e1499c5dba5e5a83ee75a899c9d16986827
2024-03-15 12:31:16 -07:00
jade 18ed6c3bdf Implement a repl characterization test system
This allows for automating using the repl without needing a PTY, with
very easy-to-write test files.

Change-Id: Ia8d7854edd91f93477638942cb6fc261354e6035
2024-03-15 12:31:16 -07:00
jade 38571c50e6 Implement a parser for a literate testing system for the repl
This parser can be reused for other purposes. It's inspired by
https://bitheap.org/cram/

Although eelco's impostor exists https://github.com/mobusoperandi/eelco,
it is not very nice to depend on out-of-tree testing frameworks with no
way to customize them.

Change-Id: Ifca50177e09730182baf0ebf829c3505bbb0274a
2024-03-14 14:30:38 -07:00
Cole Helbling 84727bebb4 Backport PR#9633 by cole-h: package: don't set sysconfdir in devShells
(cherry-picked from commit ba0087316acc2aba999cabe5e1a159da636b2569)

Change-Id: I5a50afb3b7b65516df798ee51b74f06727a91928
2024-03-14 14:06:31 -07:00
Rebecca Turner 9f242fae76 Only set LOCALE_ARCHIVE on Linux
`macOS` does not have `glibcLocales`:

    error:
       … while calling the 'derivationStrict' builtin

         at /derivation-internal.nix:9:12:

            8|
            9|   strict = derivationStrict drvAttrs;
             |            ^
           10|

       … while evaluating derivation 'nix-2.90.0'
         whose name attribute is located at /nix/store/y0c95bwyvs80pm69hdd4b11pyq2ghiwh-source/pkgs/stdenv/generic/make-derivation.nix:348:7

       … while evaluating attribute 'LOCALE_ARCHIVE' of derivation 'nix-2.90.0'

         at /nix/store/ng5qzbyv4902b4pw7g35caqw5cnmryf9-source/flake.nix:331:15:

          330|               # Required to make non-NixOS Linux not complain about missing loc

Change-Id: I4464484a0eca12b5e073d49d900b6f25886245c1
2024-03-14 14:06:23 -07:00
puck caded0d55e Merge "Delete the existing installer" into main 2024-03-14 13:14:21 -06:00
jade c79b5dca2a Merge "flake: dev shells should include the LOCALE_ARCHIVE so configure does not complain" into main 2024-03-14 12:22:35 -06:00
jade cc55da201b Merge "un-thumbs-up ur github templates" into main 2024-03-14 12:22:09 -06:00
puck 93cc063344 Delete the existing installer
We're not going to use it.

Fixes: #31
Change-Id: Ib17a2eb6cae1ecbbf9ad1062e576ba6107a3c13b
2024-03-14 18:15:46 +00:00
eldritch horrors c26599b143 libexpr: fix elided value counting in printer
using the total-attrs-printed and total-list-items-printed counters to
calculate how many attrs were elided only works properly if no nesting
is involved. once things do nest the global counter can exceed the size
of the currently printed object, leading to unsigned wrapping and great
overestimation of elided counts. counting locally in addition to global
counts fixes this.
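
To illustrate the failure mode with made-up numbers (not taken from the change itself):

```cpp
#include <cstddef>
#include <cstdio>

int main()
{
    size_t totalPrinted = 120;  // items printed across *all* nested objects
    size_t thisListSize = 5;    // size of the list currently being printed
    // Subtracting the global counter wraps around once it exceeds the size
    // of the current object, giving an absurd elided count.
    size_t elidedWrong = thisListSize - totalPrinted;
    size_t printedHere = 5;     // what a local counter would record
    size_t elidedRight = thisListSize - printedHere; // 0, as expected
    std::printf("wrong: %zu, right: %zu\n", elidedWrong, elidedRight);
}
```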

these are functional tests because creating these objects without the
evaluator would require a huge amount of code, and we also want the
defaults to be tested for cli usage.

fixes #14

Change-Id: Icb9a0cb21b2f4bacbc5e9dcdd8c0b9055b4088a7
2024-03-14 01:52:19 -06:00
jade 0278c03de5 un-thumbs-up ur github templates
These are not the way that we want to do things.

Change-Id: I5f3706cf50d007a6659edb96a6230d52e18a769a
2024-03-13 23:10:25 -07:00
jade 2a8f579c53 Merge "Backport PR#10204 by 9999years: Replace foo with __NIX_STR in cxx-big-literal" into main 2024-03-14 00:07:21 -06:00
jade ab5ff86917 flake: dev shells should include the LOCALE_ARCHIVE so configure does not complain
Change-Id: Id661bd8c90696cb663a25503878a697f596d1e77
2024-03-13 12:25:23 -07:00
eldritch horrors 06952cf7c4 support <program>_ENV variables
this lets us set per-test-program environment variables rather than only
a single, global default. this was supported in nix originally but
might've gone partially missing in the upstream backports process?

Change-Id: Iad0919841b1b6d11e0b7ebd3920449a62f544e77
2024-03-13 19:48:26 +01:00
Rebecca Turner da2f165128 Backport PR#10204 by 9999years: Replace foo with __NIX_STR in cxx-big-literal
Looks a little nicer when you check the generated sources.

(cherry-picked from commit e65e9114d2797cc4380da218972979dda7395df6)

Change-Id: I91bd185bf12deef72d20fba36178ff42a686c518
Upstream-PR: https://github.com/NixOS/nix/pull/10204
2024-03-12 11:58:36 -07:00
jade 1b8662b85c import the revisions to the characterization test framework from cppnix
This has some flaws for sure (like, it is going to be a bit stretched to
use for repl characterization), but it is a start.

Change-Id: I258c8beb3aee236f45818a03be83bcda858120c9
2024-03-11 14:14:43 -07:00
jade be2b87ed4d add automated usage mode to the repl
This is definitely not a stable thing, but it does feel slightly crimes
to put it as an experimental feature. Shrug, up for bikeshedding.

Change-Id: I6ef176e3dee6fb1cac9c0a7a60d553a2c63ea728
2024-03-11 14:14:43 -07:00
jade b06a392114 Merge "refactor: repl prompts are now the job of the interacter" into main 2024-03-11 15:12:09 -06:00
jade dd05106d1c Merge "refactor: move readline stuff into its own file" into main 2024-03-11 15:11:56 -06:00
jade df2723b972 Merge "finally.hh: delete copy constructor which is a bad idea" into main 2024-03-11 15:11:20 -06:00
jade d9367da027 Merge "Add box_ptr: nonnull unique_ptr with value semantics" into main 2024-03-11 15:11:15 -06:00
Qyriad f9701da8f6 Merge changes from topic "maint/flake-package.nix" into main
* changes:
  package: cleanup of all intermediaries
  package: migrate devShells
  package: migrate internal-api-docs
  package: migrate testNixVersions
  package: use pname, version, and dontBuild (first change with diff hash)
  package: refactor Nix out of flake.nix and into package.nix
2024-03-11 14:32:37 -06:00
jade 50c401b4c1 Merge "util.hh: split out signals stuff" into main 2024-03-11 11:14:24 -06:00
Qyriad 36a8d151e1 package: cleanup of all intermediaries
Change-Id: I0da5182de6b01c192cfcba407959d659d70c6dc9
2024-03-11 04:26:35 -06:00
Qyriad 529a01ade2 package: migrate devShells
Change-Id: Ic63721667edd4bef79aa699a0de8411639e5159b
2024-03-11 04:26:35 -06:00
Qyriad b072c069b7 package: migrate internal-api-docs
Change-Id: I344d73a412c2c6e4bb2eb14bd4859056324f1ba7
2024-03-11 04:26:35 -06:00
Qyriad 4ad3446311 package: migrate testNixVersions
Change-Id: I71845f8a6d7b77c3617d055e726ed4a28cd05fa3
2024-03-11 04:26:35 -06:00
Qyriad 875b76d0c7 package: use pname, version, and dontBuild (first change with diff hash)
The src fileset, preConfigure, and separateDebugInfo also respond to doBuild if it's overridden

This commit is logically just a continuation of the previous commit's
refactor, but exists separately to delineate when the core Nix
derivation hash changed (this commit).

Change-Id: I67a61bc9608d91b6a833ebc5c3894b2d2e694050
2024-03-11 04:26:35 -06:00
Qyriad 15380b4c6e package: refactor Nix out of flake.nix and into package.nix
This series takes a somewhat different approach from the flake rework
done in NixOS/nix. The package.nix here does not provide callPackage
options for all the various settings in the build, and instead the other
places Nix derivations are used (like internal-api-docs) will .overrideAttrs
the normal Nix package derivation. This more closely matches how these
things were structured originally, and results in less churn and more
atomicity in these changes.

In the future, package.nix likely will migrate to have more build
options in the callPackage arguments, but we are also planning to
rewrite the build system anyway.

Change-Id: I170c4e5a4184bab62e1fd75e56db876d4ff116cf
2024-03-11 04:26:35 -06:00
jade 1758a6ef25 refactor: repl prompts are now the job of the interacter
Change-Id: I17c2873dfbbff303cdbdc7a8903deb8409ce3026
2024-03-11 01:04:52 -07:00
jade 95a87f2c2a refactor: move readline stuff into its own file
This is in direct preparation for an automation mode of nix repl.

Change-Id: I26e6ca88ef1c48aab11a2d1e939ff769f1770caa
2024-03-11 01:04:52 -07:00
jade 45f6e3521a finally.hh: delete copy constructor which is a bad idea
Change-Id: I6d0b5736893c44bddc6f5789b452b434f8671b9b
2024-03-11 01:04:52 -07:00
jade af515baf6e Add box_ptr: nonnull unique_ptr with value semantics
This solves the problem of collections of boxed subclasses with virtual
dispatch, which should still be treated as values, since the
indirection is only there due to the virtual dispatch.
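
A hedged sketch of the underlying idea (a simplified stand-in sharing only the name, not the real Lix type):

```cpp
#include <memory>
#include <utility>

// A unique_ptr wrapper that can never be null, so callers may treat it as a
// value with indirection rather than as an optional pointer.
template<typename T>
class box_ptr
{
    std::unique_ptr<T> ptr;
    explicit box_ptr(std::unique_ptr<T> p) : ptr(std::move(p)) {}
public:
    template<typename... Args>
    static box_ptr make(Args &&... args)
    {
        return box_ptr(std::make_unique<T>(std::forward<Args>(args)...));
    }
    T & operator*() const { return *ptr; }
    T * operator->() const { return ptr.get(); }
};
```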

Change-Id: I368daedd3f31298e99c6e56a15606337a55494c6
2024-03-11 01:04:52 -07:00
jade 8be7030299 util.hh: split out signals stuff
Copies part of the changes of ac89bb064aeea85a62b82a6daf0ecca7190a28b7

Change-Id: I9ce601875cd6d4db5eb1132d7835c5bab9f126d8
2024-03-11 00:52:09 -07:00
jade 6432bf9197 Merge "Print derivation paths in nix eval" into main 2024-03-10 16:12:32 -06:00
eldritch horrors 0a4737f519 add doc comment justifying ExprInheritFrom
Co-authored-by: Robert Hensing <roberth@users.noreply.github.com>
(cherry picked from commit f24e445bc024cfd3c26be5f061280af549321c22)
Change-Id: I7acda5d5c34c0914a78adc2385d32782c4c275cd
2024-03-10 03:18:32 -06:00
eldritch horrors 06764118ea remove ExprAttrs::AttrDef::inherited
it's no longer widely used and has a rather confusing meaning now that
inherit-from is handled very differently.

(cherry picked from commit 1cd87b7042d14aae1fafa47b1c28db4c5bd20de7)
Change-Id: I90bbebddf06762960d8ca4f621cf042ce8ae83f9
2024-03-10 03:18:32 -06:00
eldritch horrors b667b4cded evaluate inherit (from) exprs only once per directive
desugaring inherit-from to syntactic duplication of the source expr also
duplicates side effects of the source expr (such as trace calls) and
expensive computations (such as derivationStrict).

(cherry picked from commit cefd0302b55b3360dbca59cfcb4bf6a750d6cdcf)
Change-Id: Iff519f991adef2e51683ba2c552d37a3df7a179e
2024-03-10 03:18:32 -06:00
eldritch horrors 71e0114708 remove getDerivations deduplication
deduplication does not currently work fully, showing derivations
multiple times if they have different underlying values. this can happen
by selecting the same derivation twice for two different attributes of a
set, using inherit-from (which reduces to the previous), importing
nixpkgs twice, or any number of other things.

since users already have to deal with duplicates for this reason it
won't hurt to add *more* duplicates. the alternative would be to
deduplicate fully, which would drop derivations that are currently
returned and those pose a regression risk.

Change-Id: I64b397351237e10375d270f1bddecb71f62aa131
2024-03-10 03:18:32 -06:00
eldritch horrors 2a84123631 group inherit by source during Expr::show
for plain inherits this is really just a stylistic choice, but for
inherit-from it actually fixes an exponential size increase problem
during expr printing (as may happen during assertion failure reporting,
or during duplicate attr detection in the parser)

(cherry picked from commit ecf8b12d60ad2929f9998666cf0966475b91e291)
Change-Id: Ie55f0cb01a37e766414c31f8d40f51c2c7d106b0
2024-03-10 03:18:32 -06:00
eldritch horrors bf19eebb9b use the same bindings print for ExprAttrs and ExprLet
this also has the effect of sorting let bindings lexicographically
rather than by symbol creation order as was previously done, giving a
better canonicalization in the process.

(cherry picked from commit 6c08fba533ef31cad2bdc03ba72ecf58dc8ee5a0)
Change-Id: Ia887f629305645bb8a165fbbc0d32e620912595a
2024-03-10 03:18:32 -06:00
eldritch horrors 1cf0fa0633 add ExprAttrs::AttrDef::chooseByKind
in place of inherited() — not quite useful yet since we don't
distinguish plain and inheritFrom attr kinds so far.

(cherry picked from commit 1f542adb3e18e7078e6a589182a53a47d971748a)
Change-Id: If948c9d43e875de18f213a73a06a36f7c335b536
2024-03-10 03:18:32 -06:00
eldritch horrors 03f852b2c6 preserve information about whether/how an attribute was inherited
(cherry picked from commit c66ee57edc6cac3571bfbf77d0c0ea4d25b4e805)
Change-Id: Ie8606a8b2f5946c87dd4d16b7b46203e199a4cc1
2024-03-10 03:18:32 -06:00
eldritch horrors 3e43f4aeff add test for inherit expr printing
(cherry picked from commit 73065a400d176b21f518c1f4ece90c31318b218d)
Change-Id: I9356d8084d241a7904b66554d7c4194f8433edf7
2024-03-10 03:18:32 -06:00
eldritch horrors 0c19e6b56c add test for inherit-from semantics
(cherry picked from commit 8669c02468994887be91072ac58b1ee43380d354)
Change-Id: If8513316bf4b4b559c5bb63842c856f016816802
2024-03-10 03:18:32 -06:00
eldritch horrors 7c882a5075 Merge "make the multi-node vm tests a bit more reliable" into main 2024-03-10 03:18:08 -06:00
eldritch horrors 34fb7a7e9d make the multi-node vm tests a bit more reliable
without these changes the tests will very repeatably (although not very
reliably) wedge in our runs. the ssh command starts, opens a sessions,
does something, the session closes again, but the test does not move on.
adding *just* the redirect and not the unit waits is not sufficient
either, it needs both. this feels like a bug in the nixos testing
framework somewhere, but digging that far is not in the cards right now.

Change-Id: Idab577b83a36cc4899bb5ffbb3d9adc04e83e51c
2024-03-10 10:10:52 +01:00
eldritch horrors a9b813cc3b Merge pull request #10066 from 9999years/print-all-frames
Do not skip any stack frames when `--show-trace` is given

(cherry picked from commit 0b47783d0a879875d558f0b56e49584f25ceb2d0)
Change-Id: Ia0f18266dbcf97543110110c655c219c7a3e3270
2024-03-09 10:17:26 -07:00
eldritch horrors f2e11ddce1 Merge pull request #9914 from 9999years/debugger-on-trace
Enter debugger on `builtins.trace` with an option

(cherry picked from commit 774e7ca5847ebc392eac2a124a8f12b24da4f65a)
Change-Id: If01e2110b3a128e639b05143227e365227d149f1
2024-03-09 10:17:26 -07:00
eldritch horrors 030c8aa833 Rename ProcessLineResult variants
(cherry picked from commit 8e71883e3f59100479e96aa1883ef52dbaa03fd3)
Change-Id: If7d8b75eaec623dac106ce2363fa148af37d150c
2024-03-09 10:17:26 -07:00
eldritch horrors 992d99592f :quit in the debugger should quit the whole program
(cherry picked from commit 2a8fe9a93837733e9dd9ed5c078734a35b203e14)
Change-Id: I71dadfef6b24d9272b206e9e2c408040559d8a1c
2024-03-09 10:17:26 -07:00
eldritch horrors 6b11c2cd70 Extract printSpace helper
(cherry picked from commit 403c90ddf58a3f16a44dfe1f20004b6baa4e5ce2)
Change-Id: I53c9824e6b1c4c619b4dfd8346d39e5289d92265
2024-03-09 07:20:23 -07:00
eldritch horrors 73cdaf44cf prettyPrint -> shouldPrettyPrint
(cherry picked from commit 1c5f5d4291df7bf80806e57c75d2ec67bced8616)
Change-Id: I7a517490e7baa5cef00716f6d6cfcbcbcdde11bf
2024-03-09 07:20:23 -07:00
eldritch horrors 4dabde0485 Add assertion for decreasing the indent
Co-authored-by: Théophane Hufschmitt <7226587+thufschmitt@users.noreply.github.com>
(cherry picked from commit a27651908fc1b5ef73a81e46434a408c5868fa7b)
Change-Id: I2ec78e234c1c6e982f7b05f81d8b8356daf6c274
2024-03-09 07:20:23 -07:00
eldritch horrors 1958152d14 Pretty-print values in the REPL
Pretty-print values in the REPL by printing each item in a list or
attrset on a separate line. When possible, single-item lists and
attrsets are printed on one line, as long as they don't contain a nested
list, attrset, or thunk.

Before:
```
{ attrs = { a = { b = { c = { }; }; }; }; list = [ 1 ]; list' = [ 1 2 3 ]; }
```

After:
```
{
  attrs = {
    a = {
      b = {
        c = { };
      };
    };
  };
  list = [ 1 ];
  list' = [
    1
    2
    3
  ];
}
```

(cherry picked from commit c0a15fb7d03dfb8f53bc6726c414bc88aa362592)
Change-Id: Ia2b41849165a5ddb63f7a8c272a2476b3e4292df
2024-03-09 07:20:23 -07:00
Rebecca Turner 6ef9d8efba Print derivation paths in nix eval
`nix eval` forces values and prints derivations as attribute sets, so
commands that print derivations (e.g. `nix eval nixpkgs#bash`) will
infinitely loop and segfault.

Printing derivations as `.drv` paths makes `nix eval` complete as
expected. Further work is needed, but this is better than a segfault.

(cherry picked from commit 4910d74086a85876e093136a0e8ebc547b467af7)

Change-Id: I8e1cb39c05db812080759ec183ee7a131760e6ea
2024-03-08 23:46:16 -08:00
396 changed files with 7511 additions and 17933 deletions

.clang-format (new file, +51 lines)

@ -0,0 +1,51 @@
---
BasedOnStyle: LLVM
AccessModifierOffset: -4
AlignAfterOpenBracket: BlockIndent
AlignEscapedNewlines: Left
AlignOperands: DontAlign
AllowShortBlocksOnASingleLine: Always
AllowShortFunctionsOnASingleLine: Empty
AllowShortIfStatementsOnASingleLine: WithoutElse
AlwaysBreakBeforeMultilineStrings: true
AlwaysBreakTemplateDeclarations: Yes
BinPackArguments: false
BinPackParameters: false
BitFieldColonSpacing: None
BraceWrapping:
  AfterCaseLabel: false
  AfterClass: true
  AfterControlStatement: MultiLine
  AfterEnum: false
  AfterFunction: true
  AfterNamespace: false
  AfterObjCDeclaration: false
  AfterStruct: true
  AfterUnion: true
  AfterExternBlock: false
  BeforeCatch: false
  BeforeElse: false
  BeforeLambdaBody: false
  BeforeWhile: false
  IndentBraces: false
  SplitEmptyFunction: true
  SplitEmptyRecord: false
  SplitEmptyNamespace: true
BreakAfterAttributes: Always
BreakBeforeBinaryOperators: NonAssignment
BreakBeforeBraces: Custom
BreakConstructorInitializers: BeforeComma
ColumnLimit: 100
EmptyLineAfterAccessModifier: Leave
EmptyLineBeforeAccessModifier: Leave
FixNamespaceComments: false
IndentWidth: 4
InsertBraces: true
InsertTrailingCommas: Wrapped
LambdaBodyIndentation: Signature
PackConstructorInitializers: CurrentLine
PointerAlignment: Middle
SortIncludes: Never
SpaceAfterCStyleCast: true
SpaceAfterTemplateKeyword: false

9
.envrc Normal file

@ -0,0 +1,9 @@
# shellcheck shell=bash
source_env_if_exists .envrc.local
# TODO: `use flake .#native-clangStdenvPackages` on macOS?
use flake ".#${LIX_SHELL_VARIANT:-default}" "${LIX_SHELL_EXTRA_ARGS[@]}"
export MAKEFLAGS="$MAKEFLAGS -e"
if [[ -n "$NIX_BUILD_CORES" ]]; then
export MAKEFLAGS="$MAKEFLAGS -j $NIX_BUILD_CORES"
fi
export GTEST_BRIEF=1
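
As a usage sketch, a `.envrc.local` (sourced by the first line above) can select a different dev shell before direnv loads the flake. The `native-clangStdenvPackages` variant name is taken from the TODO comment above and is an assumption about which variants the flake exposes:

```
$ cat > .envrc.local <<'EOF'
export LIX_SHELL_VARIANT=native-clangStdenvPackages
EOF
$ direnv allow
```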


@ -7,30 +7,26 @@ assignees: ''
---
**Describe the bug**
## Describe the bug
A clear and concise description of what the bug is.
If you have a problem with a specific package or NixOS,
you probably want to file an issue at https://github.com/NixOS/nixpkgs/issues.
you probably want to file an issue at https://github.com/NixOS/nixpkgs/issues.
**Steps To Reproduce**
## Steps To Reproduce
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
## Expected behavior
A clear and concise description of what you expected to happen.
**`nix-env --version` output**
## `nix --version` output
**Additional context**
## Additional context
Add any other context about the problem here.
**Priorities**
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).


@ -7,18 +7,18 @@ assignees: ''
---
**Is your feature request related to a problem? Please describe.**
## Is your feature request related to a problem? Please describe.
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
## Describe the solution you'd like
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
## Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
## Additional context
Add any other context or screenshots about the feature request here.
**Priorities**
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).


@ -30,7 +30,3 @@ assignees: ''
```
</details>
## Priorities
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).


@ -25,7 +25,3 @@ assignees: ''
[latest Nix manual]: https://nixos.org/manual/nix/unstable/
[source]: https://github.com/NixOS/nix/tree/master/doc/manual/src
[open documentation issues and pull requests]: https://github.com/NixOS/nix/labels/documentation
## Priorities
Add :+1: to [issues you find important](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc).


@ -9,7 +9,3 @@
<!-- Invasive change: Discuss alternative designs or approaches you considered. -->
<!-- Large change: Provide instructions to reviewers how to read the diff. -->
# Priorities
Add :+1: to [pull requests you find important](https://github.com/NixOS/nix/pulls?q=is%3Aopen+sort%3Areactions-%2B1-desc).

7
.gitignore vendored

@ -146,9 +146,16 @@ result
result-*
.vscode/
.direnv/
.envrc.local
# clangd and possibly more
.cache/
# Mac OS
.DS_Store
# ClangBuildAnalyzer output, see maintainers/buildtime_report.sh
buildtime.bin
.envrc.local


@ -20,8 +20,7 @@ makefiles = \
misc/fish/local.mk \
misc/zsh/local.mk \
misc/systemd/local.mk \
misc/launchd/local.mk \
misc/upstart/local.mk
misc/launchd/local.mk
endif
ifeq ($(ENABLE_BUILD)_$(ENABLE_TESTS), yes_yes)
@ -41,6 +40,7 @@ makefiles += \
tests/functional/ca/local.mk \
tests/functional/dyn-drv/local.mk \
tests/functional/test-libstoreconsumer/local.mk \
tests/functional/repl_characterization/local.mk \
tests/functional/plugins/local.mk
else
makefiles += \
@ -60,7 +60,7 @@ endif
OPTIMIZE = 1
ifeq ($(OPTIMIZE), 1)
GLOBAL_CXXFLAGS += -O3 $(CXXLTO)
GLOBAL_CXXFLAGS += -O2 $(CXXLTO)
GLOBAL_LDFLAGS += $(CXXLTO)
else
GLOBAL_CXXFLAGS += -O0 -U_FORTIFY_SOURCE

1
clang-tidy/.clang-format Normal file

@ -0,0 +1 @@
BasedOnStyle: llvm


@ -0,0 +1,80 @@
#include "HasPrefixSuffix.hh"
#include <clang/AST/ASTTypeTraits.h>
#include <clang/AST/Expr.h>
#include <clang/AST/PrettyPrinter.h>
#include <clang/AST/Type.h>
#include <clang/ASTMatchers/ASTMatchers.h>
#include <clang/Basic/Diagnostic.h>
#include <clang/Frontend/FrontendAction.h>
#include <clang/Frontend/FrontendPluginRegistry.h>
#include <clang/Tooling/Transformer/SourceCode.h>
#include <clang/Tooling/Transformer/SourceCodeBuilders.h>
#include <iostream>
namespace nix::clang_tidy {
using namespace clang::ast_matchers;
using namespace clang;
void HasPrefixSuffixCheck::registerMatchers(ast_matchers::MatchFinder *Finder) {
Finder->addMatcher(
traverse(clang::TK_AsIs,
callExpr(callee(functionDecl(anyOf(hasName("hasPrefix"),
hasName("hasSuffix")))
.bind("callee-decl")),
optionally(hasArgument(
0, cxxConstructExpr(
hasDeclaration(functionDecl(hasParameter(
0, parmVarDecl(hasType(
asString("const char *")))))))
.bind("implicit-cast"))))
.bind("call")),
this);
}
void HasPrefixSuffixCheck::check(
const ast_matchers::MatchFinder::MatchResult &Result) {
const auto *CalleeDecl = Result.Nodes.getNodeAs<FunctionDecl>("callee-decl");
auto FuncName = std::string(CalleeDecl->getName());
std::string NewName;
if (FuncName == "hasPrefix") {
NewName = "starts_with";
} else if (FuncName == "hasSuffix") {
NewName = "ends_with";
} else {
llvm_unreachable("nix-has-prefix: invalid callee");
}
const auto *MatchedDecl = Result.Nodes.getNodeAs<CallExpr>("call");
const auto *ImplicitConvertArg =
Result.Nodes.getNodeAs<CXXConstructExpr>("implicit-cast");
const auto *Lhs = MatchedDecl->getArg(0);
const auto *Rhs = MatchedDecl->getArg(1);
auto Diag = diag(MatchedDecl->getExprLoc(), FuncName + " is deprecated");
std::string Text = "";
// Form possible cast to string_view, or nothing.
if (ImplicitConvertArg) {
Text = "std::string_view(";
Text.append(tooling::getText(*Lhs, *Result.Context));
Text.append(").");
} else {
Text.append(*tooling::buildAccess(*Lhs, *Result.Context));
}
// Call .starts_with.
Text.append(NewName);
Text.push_back('(');
Text.append(tooling::getText(*Rhs, *Result.Context));
Text.push_back(')');
Diag << FixItHint::CreateReplacement(MatchedDecl->getSourceRange(), Text);
// for (const auto *arg : MatchedDecl->arguments()) {
// arg->dumpColor();
// arg->getType().dump();
// }
}
}; // namespace nix::clang_tidy


@ -0,0 +1,25 @@
#pragma once
///@file
/// This is an example of a clang-tidy automated refactoring against the Nix
/// codebase. The refactoring has been completed in
/// https://gerrit.lix.systems/c/lix/+/565, so this code remains only as
/// an example.
#include <clang-tidy/ClangTidyCheck.h>
#include <clang/ASTMatchers/ASTMatchFinder.h>
#include <llvm/ADT/StringRef.h>
namespace nix::clang_tidy {
using namespace clang;
using namespace clang::tidy;
using namespace llvm;
class HasPrefixSuffixCheck : public ClangTidyCheck {
public:
HasPrefixSuffixCheck(StringRef Name, ClangTidyContext *Context)
: ClangTidyCheck(Name, Context) {}
void registerMatchers(ast_matchers::MatchFinder *Finder) override;
void check(const ast_matchers::MatchFinder::MatchResult &Result) override;
};
}; // namespace nix::clang_tidy


@ -0,0 +1,17 @@
#include <clang-tidy/ClangTidyModule.h>
#include <clang-tidy/ClangTidyModuleRegistry.h>
#include "HasPrefixSuffix.hh"
namespace nix::clang_tidy {
using namespace clang;
using namespace clang::tidy;
class NixClangTidyChecks : public ClangTidyModule {
public:
void addCheckFactories(ClangTidyCheckFactories &CheckFactories) override {
CheckFactories.registerCheck<HasPrefixSuffixCheck>("nix-hasprefixsuffix");
}
};
static ClangTidyModuleRegistry::Add<NixClangTidyChecks> X("nix-module", "Adds nix specific checks");
};

56
clang-tidy/README.md Normal file

@ -0,0 +1,56 @@
# Clang tidy lints for Nix
This is a skeleton of a clang-tidy lints library for Nix.
Currently there is one check, `HasPrefixSuffixCheck`, which is already obsolete:
it has served its goal and is kept only as an example.
## Running fixes/checks
One file:
```
ninja -C build && clang-tidy --checks='-*,nix-*' --load=build/libnix-clang-tidy.so -p ../compile_commands.json --fix ../src/libcmd/installables.cc
```
Several files, in parallel:
```
ninja -C build && run-clang-tidy -checks='-*,nix-*' -load=build/libnix-clang-tidy.so -p .. -fix ../src | tee -a clang-tidy-result
```
## Resources
* https://firefox-source-docs.mozilla.org/code-quality/static-analysis/writing-new/clang-query.html
* https://clang.llvm.org/docs/LibASTMatchersReference.html
* https://devblogs.microsoft.com/cppblog/exploring-clang-tooling-part-3-rewriting-code-with-clang-tidy/
## Developing new checks
Put something like so in `myquery.txt`:
```
set traversal IgnoreUnlessSpelledInSource
# ^ Ignore implicit AST nodes. May need to use AsIs depending on how you are
# working.
set bind-root true
# ^ true unless you use any .bind("foo") commands
set print-matcher true
enable output dump
match callExpr(callee(functionDecl(hasName("hasPrefix"))), optionally(hasArgument( 0, cxxConstructExpr(hasDeclaration(functionDecl(hasParameter(0, parmVarDecl(hasType(asString("const char *"))).bind("meow2"))))))))
```
Then run, e.g. `clang-query --preload hasprefix.query -p compile_commands.json src/libcmd/installables.cc`.
With this you can iterate a query before writing it in C++ and suffering from
C++.
### Tips and tricks for the C++
There is a function `dump()` on many things that will dump to stderr. Also
`llvm::errs()` lets you print to stderr.
When I wrote `HasPrefixSuffixCheck`, I was not really able to figure out how
the structured replacement system was supposed to work. In principle you can
describe the replacement with a nice DSL. Look up the Stencil system in Clang
for details.

8
clang-tidy/meson.build Normal file

@ -0,0 +1,8 @@
project('nix-clang-tidy', ['cpp', 'c'],
version : '0.1',
default_options : ['warning_level=3', 'cpp_std=c++20'])
llvm = dependency('Clang', version: '>= 14', modules: ['libclang'])
sources = ['HasPrefixSuffix.cc', 'NixClangTidyChecks.cc']
shared_module('nix-clang-tidy', sources,
dependencies: llvm)


@ -342,6 +342,13 @@ AC_SUBST(doc_generate)
# Look for lowdown library.
PKG_CHECK_MODULES([LOWDOWN], [lowdown >= 0.9.0], [CXXFLAGS="$LOWDOWN_CFLAGS $CXXFLAGS"])
# Look for toml11, a required dependency.
AC_ARG_VAR([TOML11_HEADERS], [include path of toml11 headers])
AC_LANG_PUSH(C++)
[CXXFLAGS="-I $TOML11_HEADERS $CXXFLAGS"]
AC_CHECK_HEADER([toml.hpp], [], [AC_MSG_ERROR([toml11 is not found.])])
AC_LANG_POP(C++)
# Setuid installations.
AC_CHECK_FUNCS([setresuid setreuid lchown])


@ -151,9 +151,9 @@ $(d)/language.json: $(doc_nix)
# Generate "Upcoming release" notes (or clear it and remove from menu)
$(d)/src/release-notes/rl-next.md: $(d)/rl-next $(d)/rl-next/*
@if type -p changelog-d > /dev/null; then \
@if type -p build-release-notes > /dev/null; then \
echo " GEN " $@; \
changelog-d doc/manual/rl-next > $@; \
build-release-notes doc/manual/rl-next > $@; \
else \
echo " NULL " $@; \
true > $@; \


@ -0,0 +1,15 @@
---
synopsis: Clang build timing analysis
cls: 587
---
We now have Clang build profiling available, which generates Chrome
tracing files for each compilation unit. To enable it, run `meson configure
build -Dprofile-build=enabled`, then rerun the compilation.
If you want to make the build go faster, do a Clang build with Meson, run
`maintainers/buildtime_report.sh build`, and then contemplate how to improve the
build time.
You can also look at individual object files' traces in
<https://ui.perfetto.dev>.
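
Putting this together, a profiling session might look like the following sketch (it assumes a Clang Meson build with `ClangBuildAnalyzer` and `jq` available, which the report script relies on):

```
$ meson configure build -Dprofile-build=enabled
$ ninja -C build
$ maintainers/buildtime_report.sh build
```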


@ -1,2 +0,0 @@
organization: NixOS
repository: nix


@ -1,7 +1,7 @@
---
synopsis: "`--debugger` can now access bindings from `let` expressions"
prs: 9918
issues: 8827.
issues: 8827
---
Breakpoints and errors in the bindings of a `let` expression can now access


@ -0,0 +1,9 @@
---
synopsis: Enter the `--debugger` when `builtins.trace` is called if `debugger-on-trace` is set
prs: 9914
---
If the `debugger-on-trace` option is set and `--debugger` is given,
`builtins.trace` calls will behave similarly to `builtins.break` and will enter
the debug REPL. This is useful for determining where warnings are being emitted
from.
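
A minimal sketch of how this might be invoked; the exact spelling may differ, and the option is passed here through the generic `--option` flag:

```
$ nix eval --debugger --option debugger-on-trace true --expr 'builtins.trace "where does this come from?" 42'
```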


@ -0,0 +1,7 @@
---
synopsis: Stop vendoring toml11
cls: 675
---
We don't apply any patches to it, and vendoring it locks users into
bugs (it hasn't been updated since its introduction in late 2021).


@ -0,0 +1,22 @@
---
synopsis: Duplicate attribute reports are more accurate
cls: 557
---
Duplicate attribute errors are now more accurate, showing the path at which an error was detected rather than the full, possibly longer, path that caused the error.
Error reports are now
```ShellSession
$ nix eval --expr '{ a.b = 1; a.b.c.d = 1; }'
error: attribute 'a.b' already defined at «string»:1:3
at «string»:1:12:
1| { a.b = 1; a.b.c.d = 1;
| ^
```
instead of
```ShellSession
$ nix eval --expr '{ a.b = 1; a.b.c.d = 1; }'
error: attribute 'a.b.c.d' already defined at «string»:1:3
at «string»:1:12:
1| { a.b = 1; a.b.c.d = 1;
| ^
```


@ -1,8 +1,6 @@
---
synopsis: Disallow empty search regex in `nix search`
prs: #9481
description: {
prs: 9481
---
[`nix search`](@docroot@/command-ref/new-cli/nix3-search.md) now requires a search regex to be passed. To show all packages, use `^`.
}
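
For example, a sketch of the new behaviour:

```
$ nix search nixpkgs ^       # show all packages
$ nix search nixpkgs         # now an error: a search regex is required
```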


@ -0,0 +1,7 @@
---
synopsis: consistent order of lambda formals in printed expressions
prs: 9874
---
Lambda formals in printed expressions are now always shown in lexicographic order rather than in the internal, creation-time-based symbol order.
This makes printed formals independent of the context in which they appear.
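
A rough way to observe this (a sketch; the exact printed form is not reproduced here) is to parse a lambda whose formals are written out of order and see how they are printed back:

```
$ nix-instantiate --parse --expr '{ b, a, ... }: a'   # formals are now printed as a, b, ...
```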


@ -0,0 +1,6 @@
---
synopsis: fix duplicate attribute error positions for `inherit`
prs: 9874
---
When an `inherit` caused a duplicate attribute error, the position of the error was not reported correctly: the error was placed on the `inherit` itself or at the start of the bindings block instead of on the offending attribute name.
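
A minimal expression that triggers such an error, for illustration; the report now points at the offending attribute name:

```
$ nix eval --expr 'let a = 1; in { inherit a; a = 2; }'   # duplicate attribute 'a'
```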


@ -0,0 +1,7 @@
---
synopsis: "`inherit (x) ...` evaluates `x` only once"
prs: 9847
---
`inherit (x) a b ...` now evaluates the expression `x` only once for all inherited attributes, rather than once per inherited attribute.
This does not usually have a measurable impact, but side effects (such as `builtins.trace`) were previously duplicated, and expensive expressions (such as derivations) could cause a measurable slowdown.
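
A sketch that makes the change observable via `builtins.trace`; the trace message is now emitted once rather than once per inherited attribute:

```
$ nix eval --expr 'let x = builtins.trace "evaluating x" { a = 1; b = 2; }; in (let inherit (x) a b; in a + b)'
```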


@ -0,0 +1,17 @@
---
synopsis: "Overhaul `nix flake update` and `nix flake lock` UX"
prs: 8817
---
The interface for creating and updating lock files has been overhauled:
- [`nix flake lock`](@docroot@/command-ref/new-cli/nix3-flake-lock.md) only creates lock files and adds missing inputs now.
It will *never* update existing inputs.
- [`nix flake update`](@docroot@/command-ref/new-cli/nix3-flake-update.md) does the same, but *will* update inputs.
- Passing no arguments will update all inputs of the current flake, just like it already did.
- Passing input names as arguments will ensure only those are updated. This replaces the functionality of `nix flake lock --update-input`.
- To operate on a flake outside the current directory, you must now pass `--flake path/to/flake`.
- The flake-specific flags `--recreate-lock-file` and `--update-input` have been removed from all commands operating on installables.
They are superseded by `nix flake update`; see the sketch below.
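
A sketch of the new interface described in the list above (`./my-flake` is a placeholder path):

```
$ nix flake lock                        # add missing inputs, never update existing ones
$ nix flake update                      # update all inputs of the current flake
$ nix flake update nixpkgs              # update only the nixpkgs input
$ nix flake update --flake ./my-flake   # operate on a flake outside the current directory
```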


@ -0,0 +1,11 @@
---
synopsis: "`builtins.nixVersion` now returns a fixed value \"2.18.3-lix\""
cls: 558
---
`builtins.nixVersion` now returns a fixed value `"2.18.3-lix"`. This prevents
version-based feature detection from assuming that features added to Nix after
the Lix branch-off are present just because the Lix version number is greater
than the Nix version.
In the future, detect features by checking for the corresponding builtins. If a
feature cannot be detected by those means, please file a Lix bug.
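
For illustration (a sketch; `fetchClosure` is only an arbitrary example of a builtin one might probe for):

```
$ nix eval --expr 'builtins.nixVersion'       # always "2.18.3-lix"
$ nix eval --expr 'builtins ? fetchClosure'   # prefer probing builtins for feature detection
```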


@ -1,7 +1,7 @@
---
synopsis: Coercion errors include the failing value
issues: #561
prs: #9754
issues: 561
prs: 9754
---
The `error: cannot coerce a <TYPE> to a string` message now includes the value
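
A minimal way to provoke such a message, for illustration; the error now shows the offending value (here, the set):

```
$ nix eval --expr '"result: " + { }'   # error: cannot coerce a set to a string
```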


@ -1,7 +1,7 @@
---
synopsis: Type errors include the failing value
issues: #561
prs: #9753
issues: 561
prs: 9753
---
In errors like `value is an integer while a list was expected`, the message now
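
For example, a sketch of an expression that triggers the message quoted above:

```
$ nix eval --expr 'builtins.length 42'   # value is an integer while a list was expected
```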


@ -0,0 +1,14 @@
---
synopsis: reintroduce shortened `-E` form for `--expr` to new CLI
cls: 605
---
In the past, it was possible to supply the short `-E` flag instead of spelling
out `--expr` every time you wished to provide an expression to be evaluated as
the given command's input. When the new CLI utilities were written, the `-f`
short form was kept for `--file`, but `-E` was dropped.
We now restore the `-E` short form for better UX. This is most useful for
`nix eval`, but nearly any command that takes an installable argument should
benefit from it as well.
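
For example, both of the following are now equivalent:

```
$ nix eval -E '1 + 2'
$ nix eval --expr '1 + 2'
```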


@ -0,0 +1,8 @@
---
synopsis: Upstart scripts removed
cls: 574
---
Upstart scripts have been removed from Lix, since Upstart is obsolete and has
not been shipped by any major distribution for many years. If they are necessary
for your use case, please backport them into your packaging.


@ -340,9 +340,6 @@ Significant changes should add the following header, which moves them to the top
significance: significant
```
<!-- Keep an eye on https://codeberg.org/fgaz/changelog-d/issues/1 -->
See also the [format documentation](https://github.com/haskell/cabal/blob/master/CONTRIBUTING.md#changelog).
### Build process
Releases have a precomputed `rl-MAJOR.MINOR.md`, and no `rl-next.md`.

591
flake.nix

@ -48,50 +48,6 @@
})
stdenvs);
baseFiles =
# .gitignore has already been processed, so any changes in it are irrelevant
# at this point. It is not represented verbatim for test purposes because
# that would interfere with repo semantics.
fileset.fileFilter (f: f.name != ".gitignore") ./.;
configureFiles = fileset.unions [
./.version
./configure.ac
./m4
# TODO: do we really need README.md? It doesn't seem used in the build.
./README.md
];
topLevelBuildFiles = fileset.unions [
./local.mk
./Makefile
./Makefile.config.in
./mk
];
functionalTestFiles = fileset.unions [
./tests/functional
./tests/unit
(fileset.fileFilter (f: lib.strings.hasPrefix "nix-profile" f.name) ./scripts)
];
nixSrc = fileset.toSource {
root = ./.;
fileset = fileset.intersection baseFiles (fileset.unions [
configureFiles
topLevelBuildFiles
./boehmgc-coroutine-sp-fallback.diff
./doc
./misc
./precompiled-headers.h
./src
./unit-test-data
./COPYING
./scripts/local.mk
functionalTestFiles
]);
};
# Memoize nixpkgs for different platforms for efficiency.
nixpkgsFor = forAllSystems
(system: let
@ -118,205 +74,45 @@
cross = forAllCrossSystems (crossSystem: make-pkgs crossSystem "stdenv");
});
commonDeps =
{ pkgs
, isStatic ? pkgs.stdenv.hostPlatform.isStatic
}:
with pkgs; rec {
# Use "busybox-sandbox-shell" if present,
# if not (legacy) fallback and hope it's sufficient.
sh = pkgs.busybox-sandbox-shell or (busybox.override {
useMusl = true;
enableStatic = true;
enableMinimal = true;
extraConfig = ''
CONFIG_FEATURE_FANCY_ECHO y
CONFIG_FEATURE_SH_MATH y
CONFIG_FEATURE_SH_MATH_64 y
testNixVersions = pkgs: client: daemon: let
nix = pkgs.callPackage ./package.nix {
pname =
"nix-tests"
+ lib.optionalString
(lib.versionAtLeast daemon.version "2.4pre20211005" &&
lib.versionAtLeast client.version "2.4pre20211005")
"-${client.version}-against-${daemon.version}";
CONFIG_ASH y
CONFIG_ASH_OPTIMIZE_FOR_SIZE y
CONFIG_ASH_ALIAS y
CONFIG_ASH_BASH_COMPAT y
CONFIG_ASH_CMDCMD y
CONFIG_ASH_ECHO y
CONFIG_ASH_GETOPTS y
CONFIG_ASH_INTERNAL_GLOB y
CONFIG_ASH_JOB_CONTROL y
CONFIG_ASH_PRINTF y
CONFIG_ASH_TEST y
'';
});
configureFlags =
lib.optionals stdenv.isLinux [
"--with-boost=${boost}/lib"
"--with-sandbox-shell=${sh}/bin/busybox"
]
++ lib.optionals (stdenv.isLinux && !(isStatic && stdenv.system == "aarch64-linux")) [
"LDFLAGS=-fuse-ld=gold"
];
testConfigureFlags = [
"RAPIDCHECK_HEADERS=${lib.getDev rapidcheck}/extras/gtest/include"
];
internalApiDocsConfigureFlags = [
"--enable-internal-api-docs"
];
changelog-d = pkgs.buildPackages.callPackage ./misc/changelog-d.nix { };
nativeBuildDeps =
[
buildPackages.bison
buildPackages.flex
(lib.getBin buildPackages.lowdown)
buildPackages.mdbook
buildPackages.mdbook-linkcheck
buildPackages.autoconf-archive
buildPackages.autoreconfHook
buildPackages.pkg-config
# Tests
buildPackages.git
buildPackages.mercurial # FIXME: remove? only needed for tests
buildPackages.jq # Also for custom mdBook preprocessor.
]
++ lib.optionals stdenv.hostPlatform.isLinux [(buildPackages.util-linuxMinimal or buildPackages.utillinuxMinimal)]
# Official releases don't have rl-next, so we don't need to compile a changelog
++ lib.optional (!officialRelease && buildUnreleasedNotes) changelog-d
;
buildDeps =
[ curl
bzip2 xz brotli editline
openssl sqlite
libarchive
boost
lowdown
libsodium
]
++ lib.optionals stdenv.isLinux [libseccomp]
++ lib.optional stdenv.hostPlatform.isx86_64 libcpuid;
checkDeps = [
gtest
rapidcheck
];
internalApiDocsDeps = [
buildPackages.doxygen
];
awsDeps = lib.optional (stdenv.isLinux || stdenv.isDarwin)
(aws-sdk-cpp.override {
apis = ["s3" "transfer"];
customMemoryManagement = false;
});
propagatedDeps =
[ ((boehmgc.override {
enableLargeConfig = true;
}).overrideAttrs(o: {
patches = (o.patches or []) ++ [
./boehmgc-coroutine-sp-fallback.diff
# https://github.com/ivmai/bdwgc/pull/586
./boehmgc-traceable_allocator-public.diff
];
})
)
nlohmann_json
];
};
installScriptFor = systems:
with nixpkgsFor.x86_64-linux.native;
runCommand "installer-script"
{ buildInputs = [ nix ];
}
''
mkdir -p $out/nix-support
# Converts /nix/store/50p3qk8k...-nix-2.4pre20201102_550e11f/bin/nix to 50p3qk8k.../bin/nix.
tarballPath() {
# Remove the store prefix
local path=''${1#${builtins.storeDir}/}
# Get the path relative to the derivation root
local rest=''${path#*/}
# Get the derivation hash
local drvHash=''${path%%-*}
echo "$drvHash/$rest"
}
substitute ${./scripts/install.in} $out/install \
${pkgs.lib.concatMapStrings
(system: let
tarball = if builtins.elem system crossSystems then self.hydraJobs.binaryTarballCross.x86_64-linux.${system} else self.hydraJobs.binaryTarball.${system};
in '' \
--replace '@tarballHash_${system}@' $(nix --experimental-features nix-command hash-file --base16 --type sha256 ${tarball}/*.tar.xz) \
--replace '@tarballPath_${system}@' $(tarballPath ${tarball}/*.tar.xz) \
''
)
systems
} --replace '@nixVersion@' ${version}
echo "file installer $out/install" >> $out/nix-support/hydra-build-products
'';
testNixVersions = pkgs: client: daemon: with commonDeps { inherit pkgs; }; with pkgs.lib; pkgs.stdenv.mkDerivation {
inherit fileset;
};
in nix.overrideAttrs (prevAttrs: {
NIX_DAEMON_PACKAGE = daemon;
NIX_CLIENT_PACKAGE = client;
name =
"nix-tests"
+ optionalString
(versionAtLeast daemon.version "2.4pre20211005" &&
versionAtLeast client.version "2.4pre20211005")
"-${client.version}-against-${daemon.version}";
inherit version;
src = fileset.toSource {
root = ./.;
fileset = fileset.intersection baseFiles (fileset.unions [
configureFiles
topLevelBuildFiles
functionalTestFiles
]);
};
VERSION_SUFFIX = versionSuffix;
nativeBuildInputs = nativeBuildDeps;
buildInputs = buildDeps ++ awsDeps ++ checkDeps;
propagatedBuildInputs = propagatedDeps;
enableParallelBuilding = true;
configureFlags =
testConfigureFlags # otherwise configure fails
++ [ "--disable-build" ];
dontBuild = true;
doInstallCheck = true;
configureFlags = prevAttrs.configureFlags ++ [
# We don't need the actual build here.
"--disable-build"
];
installPhase = ''
mkdir -p $out
'';
installCheckPhase = (optionalString pkgs.stdenv.hostPlatform.isDarwin ''
installCheckPhase = lib.optionalString pkgs.stdenv.hostPlatform.isDarwin ''
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
'') + ''
'' + ''
mkdir -p src/nix-channel
make installcheck -j$NIX_BUILD_CORES -l$NIX_BUILD_CORES
'';
};
});
binaryTarball = nix: pkgs:
let
inherit (pkgs) buildPackages;
inherit (pkgs) cacert;
installerClosureInfo = buildPackages.closureInfo { rootPaths = [ nix cacert ]; };
installerClosureInfo = buildPackages.closureInfo { rootPaths = [ nix ]; };
in
buildPackages.runCommand "nix-binary-tarball-${version}"
@ -325,45 +121,7 @@
}
''
cp ${installerClosureInfo}/registration $TMPDIR/reginfo
cp ${./scripts/create-darwin-volume.sh} $TMPDIR/create-darwin-volume.sh
substitute ${./scripts/install-nix-from-closure.sh} $TMPDIR/install \
--subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert}
substitute ${./scripts/install-darwin-multi-user.sh} $TMPDIR/install-darwin-multi-user.sh \
--subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert}
substitute ${./scripts/install-systemd-multi-user.sh} $TMPDIR/install-systemd-multi-user.sh \
--subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert}
substitute ${./scripts/install-multi-user.sh} $TMPDIR/install-multi-user \
--subst-var-by nix ${nix} \
--subst-var-by cacert ${cacert}
if type -p shellcheck; then
# SC1090: Don't worry about not being able to find
# $nix/etc/profile.d/nix.sh
shellcheck --exclude SC1090 $TMPDIR/install
shellcheck $TMPDIR/create-darwin-volume.sh
shellcheck $TMPDIR/install-darwin-multi-user.sh
shellcheck $TMPDIR/install-systemd-multi-user.sh
# SC1091: Don't panic about not being able to source
# /etc/profile
# SC2002: Ignore "useless cat" "error", when loading
# .reginfo, as the cat is a much cleaner
# implementation, even though it is "useless"
# SC2116: Allow ROOT_HOME=$(echo ~root) for resolving
# root's home directory
shellcheck --external-sources \
--exclude SC1091,SC2002,SC2116 $TMPDIR/install-multi-user
fi
chmod +x $TMPDIR/install
chmod +x $TMPDIR/create-darwin-volume.sh
chmod +x $TMPDIR/install-darwin-multi-user.sh
chmod +x $TMPDIR/install-systemd-multi-user.sh
chmod +x $TMPDIR/install-multi-user
dir=nix-${version}-${pkgs.system}
fn=$out/$dir.tar.xz
mkdir -p $out/nix-support
@ -373,123 +131,69 @@
--mtime='1970-01-01' \
--absolute-names \
--hard-dereference \
--transform "s,$TMPDIR/install,$dir/install," \
--transform "s,$TMPDIR/create-darwin-volume.sh,$dir/create-darwin-volume.sh," \
--transform "s,$TMPDIR/reginfo,$dir/.reginfo," \
--transform "s,$NIX_STORE,$dir/store,S" \
$TMPDIR/install \
$TMPDIR/create-darwin-volume.sh \
$TMPDIR/install-darwin-multi-user.sh \
$TMPDIR/install-systemd-multi-user.sh \
$TMPDIR/install-multi-user \
$TMPDIR/reginfo \
$(cat ${installerClosureInfo}/store-paths)
'';
overlayFor = getStdenv: final: prev:
let currentStdenv = getStdenv final; in
{
nixStable = prev.nix;
nix =
with final;
with commonDeps {
let
currentStdenv = getStdenv final;
comDeps = with final; commonDeps {
inherit pkgs;
inherit (currentStdenv.hostPlatform) isStatic;
};
let
canRunInstalled = currentStdenv.buildPlatform.canExecute currentStdenv.hostPlatform;
in currentStdenv.mkDerivation (finalAttrs: {
name = "nix-${version}";
inherit version;
in {
nixStable = prev.nix;
src = nixSrc;
VERSION_SUFFIX = versionSuffix;
# Forward from the previous stage as we dont want it to pick the lowdown override
nixUnstable = prev.nixUnstable;
outputs = [ "out" "dev" "doc" ];
build-release-notes =
final.buildPackages.callPackage ./maintainers/build-release-notes.nix { };
clangbuildanalyzer = final.buildPackages.callPackage ./misc/clangbuildanalyzer.nix { };
boehmgc-nix = (final.boehmgc.override {
enableLargeConfig = true;
}).overrideAttrs (o: {
patches = (o.patches or [ ]) ++ [
./boehmgc-coroutine-sp-fallback.diff
nativeBuildInputs = nativeBuildDeps;
buildInputs = buildDeps
# There have been issues building these dependencies
++ lib.optionals (currentStdenv.hostPlatform == currentStdenv.buildPlatform) awsDeps
++ lib.optionals finalAttrs.doCheck checkDeps;
propagatedBuildInputs = propagatedDeps;
disallowedReferences = [ boost ];
preConfigure = lib.optionalString (! currentStdenv.hostPlatform.isStatic)
''
# Copy libboost_context so we don't get all of Boost in our closure.
# https://github.com/NixOS/nixpkgs/issues/45462
mkdir -p $out/lib
cp -pd ${boost}/lib/{libboost_context*,libboost_thread*,libboost_system*} $out/lib
rm -f $out/lib/*.a
${lib.optionalString currentStdenv.hostPlatform.isLinux ''
chmod u+w $out/lib/*.so.*
patchelf --set-rpath $out/lib:${currentStdenv.cc.cc.lib}/lib $out/lib/libboost_thread.so.*
''}
${lib.optionalString currentStdenv.hostPlatform.isDarwin ''
for LIB in $out/lib/*.dylib; do
chmod u+w $LIB
install_name_tool -id $LIB $LIB
install_name_tool -delete_rpath ${boost}/lib/ $LIB || true
done
install_name_tool -change ${boost}/lib/libboost_system.dylib $out/lib/libboost_system.dylib $out/lib/libboost_thread.dylib
''}
'';
configureFlags = configureFlags ++
[ "--sysconfdir=/etc" ] ++
lib.optional stdenv.hostPlatform.isStatic "--enable-embedded-sandbox-shell" ++
[ (lib.enableFeature finalAttrs.doCheck "tests") ] ++
lib.optionals finalAttrs.doCheck testConfigureFlags ++
lib.optional (!canRunInstalled) "--disable-doc-gen";
enableParallelBuilding = true;
makeFlags = "profiledir=$(out)/etc/profile.d PRECOMPILE_HEADERS=1";
doCheck = true;
installFlags = "sysconfdir=$(out)/etc";
postInstall = ''
mkdir -p $doc/nix-support
echo "doc manual $doc/share/doc/nix/manual" >> $doc/nix-support/hydra-build-products
${lib.optionalString currentStdenv.hostPlatform.isStatic ''
mkdir -p $out/nix-support
echo "file binary-dist $out/bin/nix" >> $out/nix-support/hydra-build-products
''}
${lib.optionalString currentStdenv.isDarwin ''
install_name_tool \
-change ${boost}/lib/libboost_context.dylib \
$out/lib/libboost_context.dylib \
$out/lib/libnixutil.dylib
''}
'';
doInstallCheck = finalAttrs.doCheck;
installCheckFlags = "sysconfdir=$(out)/etc";
installCheckTarget = "installcheck"; # work around buggy detection in stdenv
preInstallCheck = lib.optionalString stdenv.hostPlatform.isDarwin ''
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
'';
separateDebugInfo = !currentStdenv.hostPlatform.isStatic;
strictDeps = true;
hardeningDisable = lib.optional stdenv.hostPlatform.isStatic "pie";
passthru.perl-bindings = final.callPackage ./perl {
inherit fileset;
stdenv = currentStdenv;
};
meta.platforms = lib.platforms.unix;
# https://github.com/ivmai/bdwgc/pull/586
./boehmgc-traceable_allocator-public.diff
];
});
default-busybox-sandbox-shell = final.busybox.override {
useMusl = true;
enableStatic = true;
enableMinimal = true;
extraConfig = ''
CONFIG_FEATURE_FANCY_ECHO y
CONFIG_FEATURE_SH_MATH y
CONFIG_FEATURE_SH_MATH_64 y
CONFIG_ASH y
CONFIG_ASH_OPTIMIZE_FOR_SIZE y
CONFIG_ASH_ALIAS y
CONFIG_ASH_BASH_COMPAT y
CONFIG_ASH_CMDCMD y
CONFIG_ASH_ECHO y
CONFIG_ASH_GETOPTS y
CONFIG_ASH_INTERNAL_GLOB y
CONFIG_ASH_JOB_CONTROL y
CONFIG_ASH_PRINTF y
CONFIG_ASH_TEST y
'';
};
nix = final.callPackage ./package.nix {
inherit versionSuffix fileset;
stdenv = currentStdenv;
boehmgc = final.boehmgc-nix;
busybox-sandbox-shell = final.busybox-sandbox-shell or final.default-busybox-sandbox-shell;
};
};
in {
@ -502,43 +206,53 @@
# Binary package for various platforms.
build = forAllSystems (system: self.packages.${system}.nix);
# FIXME(Qyriad): remove this when the migration to Meson has been completed.
# NOTE: mesonBuildClang depends on mesonBuild depends on build to avoid OOMs
# on aarch64 builders caused by too many parallel compiler/linker processes.
mesonBuild = forAllSystems (system:
(self.packages.${system}.nix.override {
buildWithMeson = true;
}).overrideAttrs (prev: {
buildInputs = prev.buildInputs ++ [ self.packages.${system}.nix ];
}));
mesonBuildClang = forAllSystems (system:
(nixpkgsFor.${system}.stdenvs.clangStdenvPackages.nix.override {
buildWithMeson = true;
}).overrideAttrs (prev: {
buildInputs = prev.buildInputs ++ [ self.hydraJobs.mesonBuild.${system} ];
})
);
# Perl bindings for various platforms.
perlBindings = forAllSystems (system: nixpkgsFor.${system}.native.nix.perl-bindings);
# Binary tarball for various platforms, containing a Nix store
# with the closure of 'nix' package, and the second half of
# the installation script.
# with the closure of 'nix' package.
binaryTarball = forAllSystems (system: binaryTarball nixpkgsFor.${system}.native.nix nixpkgsFor.${system}.native);
# docker image with Nix inside
dockerImage = lib.genAttrs linux64BitSystems (system: self.packages.${system}.dockerImage);
# API docs for Nix's unstable internal C++ interfaces.
internal-api-docs =
with nixpkgsFor.x86_64-linux.native;
with commonDeps { inherit pkgs; };
internal-api-docs = let
nixpkgs = nixpkgsFor.x86_64-linux.native;
inherit (nixpkgs) pkgs;
stdenv.mkDerivation {
pname = "nix-internal-api-docs";
inherit version;
src = nixSrc;
configureFlags = testConfigureFlags ++ internalApiDocsConfigureFlags;
nativeBuildInputs = nativeBuildDeps;
buildInputs = buildDeps ++ propagatedDeps
++ awsDeps ++ checkDeps ++ internalApiDocsDeps;
dontBuild = true;
installTargets = [ "internal-api-html" ];
postInstall = ''
mkdir -p $out/nix-support
echo "doc internal-api-docs $out/share/doc/nix/internal-api/html" >> $out/nix-support/hydra-build-products
'';
nix = pkgs.callPackage ./package.nix {
inherit versionSuffix fileset officialRelease buildUnreleasedNotes;
inherit (pkgs) build-release-notes;
internalApiDocs = true;
boehmgc = pkgs.boehmgc-nix;
busybox-sandbox-shell = pkgs.busybox-sandbox-shell;
};
in
nix.overrideAttrs (prev: {
# This Hydra job is just for the internal API docs.
# We don't need the build artifacts here.
dontBuild = true;
doCheck = false;
doInstallCheck = false;
});
# System tests.
tests = import ./tests/nixos { inherit lib nixpkgs nixpkgsFor; } // {
@ -552,7 +266,7 @@
type -p nix-env
# Note: we're filtering out nixos-install-tools because https://github.com/NixOS/nixpkgs/pull/153594#issuecomment-1020530593.
time nix-env --store dummy:// -f ${nixpkgs-regression} -qaP --drv-path | sort | grep -v nixos-install-tools > packages
[[ $(sha1sum < packages | cut -c1-40) = ff451c521e61e4fe72bdbe2d0ca5d1809affa733 ]]
[[ $(sha1sum < packages | cut -c1-40) = 402242fca90874112b34718b8199d844e8b03d12 ]]
mkdir $out
'';
@ -564,32 +278,23 @@
}
);
};
installTests = forAllSystems (system:
let pkgs = nixpkgsFor.${system}.native; in
pkgs.runCommand "install-tests" {
againstSelf = testNixVersions pkgs pkgs.nix pkgs.pkgs.nix;
againstCurrentUnstable =
# FIXME: temporarily disable this on macOS because of #3605.
if system == "x86_64-linux"
then testNixVersions pkgs pkgs.nix pkgs.nixUnstable
else null;
# Disabled because the latest stable version doesn't handle
# `NIX_DAEMON_SOCKET_PATH` which is required for the tests to work
# againstLatestStable = testNixVersions pkgs pkgs.nix pkgs.nixStable;
} "touch $out");
};
checks = forAllSystems (system: {
checks = forAllSystems (system: let
rl-next-check = name: dir:
let pkgs = nixpkgsFor.${system}.native;
in pkgs.buildPackages.runCommand "test-${name}-release-notes" { } ''
LANG=C.UTF-8 ${lib.getExe pkgs.build-release-notes} ${dir} >$out
'';
in {
# FIXME(Qyriad): remove this when the migration to Meson has been completed.
mesonBuild = self.hydraJobs.mesonBuild.${system};
mesonBuildClang = self.hydraJobs.mesonBuildClang.${system};
binaryTarball = self.hydraJobs.binaryTarball.${system};
perlBindings = self.hydraJobs.perlBindings.${system};
installTests = self.hydraJobs.installTests.${system};
nixpkgsLibTests = self.hydraJobs.tests.nixpkgsLibTests.${system};
rl-next =
let pkgs = nixpkgsFor.${system}.native;
in pkgs.buildPackages.runCommand "test-rl-next-release-notes" { } ''
LANG=C.UTF-8 ${(commonDeps { inherit pkgs; }).changelog-d}/bin/changelog-d ${./doc/manual/rl-next} >$out
'';
rl-next = rl-next-check "rl-next" ./doc/manual/rl-next;
rl-next-dev = rl-next-check "rl-next-dev" ./doc/manual/rl-next-dev;
} // (lib.optionalAttrs (builtins.elem system linux64BitSystems)) {
dockerImage = self.hydraJobs.dockerImage.${system};
});
@ -629,36 +334,43 @@
devShells = let
makeShell = pkgs: stdenv:
let
canRunInstalled = stdenv.buildPlatform.canExecute stdenv.hostPlatform;
nix = pkgs.callPackage ./package.nix {
inherit stdenv versionSuffix fileset;
boehmgc = pkgs.boehmgc-nix;
busybox-sandbox-shell = pkgs.busybox-sandbox-shell or pkgs.default-busybox-sandbox;
forDevShell = true;
};
in
with commonDeps { inherit pkgs; };
stdenv.mkDerivation {
name = "nix";
(nix.override {
buildUnreleasedNotes = true;
officialRelease = false;
}).overrideAttrs (prev: {
# Required for clang-tidy checks
buildInputs = prev.buildInputs ++ lib.optionals (stdenv.cc.isClang) [ pkgs.llvmPackages.llvm pkgs.llvmPackages.clang-unwrapped.dev ];
nativeBuildInputs = prev.nativeBuildInputs
++ lib.optional (stdenv.cc.isClang && !stdenv.buildPlatform.isDarwin) pkgs.buildPackages.bear
# Required for clang-tidy checks
++ lib.optionals (stdenv.cc.isClang) [ pkgs.buildPackages.cmake pkgs.buildPackages.ninja pkgs.buildPackages.llvmPackages.llvm.dev ]
++ lib.optional
(stdenv.cc.isClang && stdenv.hostPlatform == stdenv.buildPlatform)
# for some reason that seems accidental and was changed in
# NixOS 24.05-pre, clang-tools is pinned to LLVM 14 when
# default LLVM is newer.
(pkgs.buildPackages.clang-tools.override { inherit (pkgs.buildPackages) llvmPackages; })
++ [
# FIXME(Qyriad): remove once the migration to Meson is complete.
pkgs.buildPackages.meson
pkgs.buildPackages.ninja
outputs = [ "out" "dev" "doc" ];
pkgs.buildPackages.clangbuildanalyzer
];
nativeBuildInputs = nativeBuildDeps
++ lib.optional (stdenv.cc.isClang && !stdenv.buildPlatform.isDarwin) pkgs.buildPackages.bear
++ lib.optional
(stdenv.cc.isClang && stdenv.hostPlatform == stdenv.buildPlatform)
pkgs.buildPackages.clang-tools
# We want changelog-d in the shell even if the current build doesn't need it
++ lib.optional (officialRelease || ! buildUnreleasedNotes) changelog-d
;
src = null;
buildInputs = buildDeps ++ propagatedDeps
++ awsDeps ++ checkDeps ++ internalApiDocsDeps;
installFlags = "sysconfdir=$(out)/etc";
strictDeps = false;
configureFlags = configureFlags
++ testConfigureFlags ++ internalApiDocsConfigureFlags
++ lib.optional (!canRunInstalled) "--disable-doc-gen";
enableParallelBuilding = true;
installFlags = "sysconfdir=$(out)/etc";
shellHook =
''
shellHook = ''
PATH=$prefix/bin:$PATH
unset PYTHONPATH
export MANPATH=$out/share/man:$MANPATH
@ -666,7 +378,10 @@
# Make bash completion work.
XDG_DATA_DIRS+=:$out/share
'';
};
} // lib.optionalAttrs (stdenv.buildPlatform.isLinux && pkgs.glibcLocales != null) {
# Required to make non-NixOS Linux not complain about missing locale files during configure in a dev shell
LOCALE_ARCHIVE = "${lib.getLib pkgs.glibcLocales}/lib/locale/locale-archive";
});
in
forAllSystems (system:
let


@ -1,4 +1,11 @@
GLOBAL_CXXFLAGS += -Wno-deprecated-declarations -Werror=switch
# 2024-03-24: jade benchmarked the default sanitize reporting in clang and got
# a regression of about 10% on hackage-packages.nix with clang. So we are trapping instead.
#
# This has an overhead of 0-4% on gcc and unmeasurably little on clang, in
# Nix evaluation benchmarks.
DEFAULT_SANITIZE_FLAGS = -fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error
GLOBAL_CXXFLAGS += -Wno-deprecated-declarations -Werror=switch $(DEFAULT_SANITIZE_FLAGS)
GLOBAL_LDFLAGS += $(DEFAULT_SANITIZE_FLAGS)
# Allow switch-enum to be overridden for files that do not support it, usually because of dependency headers.
ERROR_SWITCH_ENUM = -Werror=switch-enum


@ -0,0 +1,6 @@
{ lib, python3, writeShellScriptBin }:
writeShellScriptBin "build-release-notes" ''
exec ${lib.getExe (python3.withPackages (p: [ p.python-frontmatter ]))} \
${./build-release-notes.py} "$@"
''


@ -0,0 +1,66 @@
import frontmatter
import sys
import pathlib
import textwrap
GH_BASE = "https://github.com/NixOS/nix"
FORGEJO_BASE = "https://git.lix.systems/lix-project/lix"
GERRIT_BASE = "https://gerrit.lix.systems/c/lix/+"
SIGNIFICANCECES = {
None: 0,
'significant': 10,
}
def format_link(ident: str, gh_part: str, fj_part: str) -> str:
# FIXME: deprecate github as default
if ident.isdigit():
num, link, base = int(ident), f"#{ident}", f"{GH_BASE}/{gh_part}"
elif ident.startswith("gh#"):
num, link, base = int(ident[3:]), ident, f"{GH_BASE}/{gh_part}"
elif ident.startswith("fj#"):
num, link, base = int(ident[3:]), ident, f"{FORGEJO_BASE}/{fj_part}"
else:
raise Exception("unrecognized reference format", ident)
return f"[{link}]({base}/{num})"
def format_issue(issue: str) -> str:
return format_link(issue, "issues", "issues")
def format_pr(pr: str) -> str:
return format_link(pr, "pull", "pulls")
def format_cl(cl: str) -> str:
clid = int(cl)
return f"[cl/{clid}]({GERRIT_BASE}/{clid})"
paths = pathlib.Path(sys.argv[1]).glob('*.md')
entries = []
for p in paths:
try:
e = frontmatter.load(p)
if 'synopsis' not in e.metadata:
raise Exception('missing synopsis')
unknownKeys = set(e.metadata.keys()) - set(('synopsis', 'cls', 'issues', 'prs', 'significance'))
if unknownKeys:
raise Exception('unknown keys', unknownKeys)
entries.append((p, e))
except Exception as e:
e.add_note(f"in {p}")
raise
for p, entry in sorted(entries, key=lambda e: (-SIGNIFICANCECES[e[1].metadata.get('significance')], e[0])):
try:
header = entry.metadata['synopsis']
links = []
links += map(format_issue, str(entry.metadata.get('issues', "")).split())
links += map(format_pr, str(entry.metadata.get('prs', "")).split())
links += map(format_cl, str(entry.metadata.get('cls', "")).split())
if links != []:
header += " " + " ".join(links)
if header:
print(f"- {header}")
print()
print(textwrap.indent(entry.content, ' '))
print()
except Exception as e:
e.add_note(f"in {p}")
raise
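
A usage sketch, assuming the `build-release-notes` wrapper shown earlier is on `PATH` (for example inside the development shell); the file name and `cls` number are placeholders:

```
$ cat > doc/manual/rl-next/example-change.md <<'EOF'
---
synopsis: One-line summary of the change
cls: 123
---
Longer description of the change, rendered beneath the synopsis.
EOF
$ build-release-notes doc/manual/rl-next
```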

20
maintainers/buildtime_report.sh Executable file

@ -0,0 +1,20 @@
#!/bin/sh
# Generates a report of build time based on a meson build using -ftime-trace in
# Clang.
if [ $# -lt 1 ]; then
echo "usage: $0 BUILD-DIR [filename]" >&2
exit 1
fi
scriptdir=$(cd "$(dirname -- "$0")" || exit ; pwd -P)
filename=${2:-$scriptdir/../buildtime.bin}
if [ "$(meson introspect "$1" --buildoptions | jq -r '.[] | select(.name == "profile-build") | .value')" != enabled ]; then
echo 'This build was not done with profile-build enabled, so cannot generate a report' >&2
# shellcheck disable=SC2016
echo 'Run `meson configure build -Dprofile-build=enabled` then rebuild, first' >&2
exit 1
fi
ClangBuildAnalyzer --all "$1" "$filename" && ClangBuildAnalyzer --analyze "$filename"

148
maintainers/issue_import.py Normal file

@ -0,0 +1,148 @@
import requests
import textwrap
import dataclasses
import logging
import re
import os
API_BASE = 'https://git.lix.systems/api/v1'
API_KEY = os.environ['FORGEJO_API_KEY']
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
fmt = logging.Formatter('{asctime} {levelname} {name}: {message}',
datefmt='%b %d %H:%M:%S',
style='{')
if not any(isinstance(h, logging.StreamHandler) for h in log.handlers):
hand = logging.StreamHandler()
hand.setFormatter(fmt)
log.addHandler(hand)
# These are erring in the direction of re-triage, rather than necessarily
# mapping all metadata of the issue
LABEL_MAPPING = {
'lix-import': 153, # 'imported',
'contributor-experience': 148, # 'devx',
'bug': 150, # 'bug',
'UX': 149, # 'ux',
'error-messages': 149, # 'ux',
'lix-stability': 146, # 'stability',
'performance': 147, # 'performance',
'tests': 121, # 'tests',
}
def api(method, endpoint: str, resp_json=True, **kwargs):
log.info('http %s %s', method, endpoint)
if not endpoint.startswith('https'):
endpoint = API_BASE + endpoint
resp = requests.request(method,
endpoint,
headers={'Authorization': f'Bearer {API_KEY}'},
**kwargs)
resp.raise_for_status()
if resp_json:
return resp.json()
else:
return resp
def paginate(method: str, url: str):
while True:
resp = api(method, url, resp_json=False)
yield from resp.json()
next_one = resp.links.get('next')
if not next_one:
return
url = next_one.get('url')
if not url:
return
class DataClassUnpack:
"""Taken from: https://stackoverflow.com/a/72164665"""
classFieldCache = {}
@classmethod
def instantiate(cls, classToInstantiate, argDict):
if classToInstantiate not in cls.classFieldCache:
cls.classFieldCache[classToInstantiate] = {
f.name
for f in getattr(classToInstantiate, dataclasses._FIELDS).values() if f._field_type is not dataclasses._FIELD_CLASSVAR # type: ignore
}
fieldSet = cls.classFieldCache[classToInstantiate]
filteredArgDict = {k: v for k, v in argDict.items() if k in fieldSet}
return classToInstantiate(**filteredArgDict)
@dataclasses.dataclass
class Label:
name: str
description: str
@dataclasses.dataclass
class Issue:
number: int
url: str
html_url: str
title: str
body: str
labels: dataclasses.InitVar[list[dict]]
labels_clean: list[Label] = dataclasses.field(init=False)
def __post_init__(self, labels):
self.labels_clean = [DataClassUnpack.instantiate(Label, l) for l in labels]
def issues_to_import():
yield from paginate('GET', '/repos/nixos/nix/issues?state=open&labels=lix-import')
def issues_already_imported():
yield from paginate('GET', '/repos/lix-project/lix/issues?state=all&labels=imported')
UPSTREAM_ISSUE_RE = re.compile(r'^Upstream-Issue: https://git\.lix\.systems/NixOS/nix/issues/(\d+)$', re.MULTILINE)
def make_already_imported():
d = {}
for issue in issues_already_imported():
iss = DataClassUnpack.instantiate(Issue, issue)
print(iss)
match = UPSTREAM_ISSUE_RE.search(iss.body)
if match:
d[int(match.group(1))] = iss
return d
def new_issue(title, body, labels):
api('POST', '/repos/lix-project/lix/issues', resp_json=True, json={
'labels': labels,
'body': body,
'title': title,
'dont_notify': True,
})
already_imported = make_already_imported()
def import_issue(iss: Issue):
if iss.number in already_imported:
log.info('Skipping already imported %d', iss.number)
return
new_body = textwrap.dedent('''
Upstream-Issue: {iss}
{original_body}
''').format(iss=iss.html_url, original_body=iss.body)
new_labels = [LABEL_MAPPING[l.name] for l in iss.labels_clean if l.name in LABEL_MAPPING]
new_title = '[Nix#{num}] {title}'.format(num=iss.number, title=iss.title)
log.info('%s', f'create issue with: {new_labels} {new_title} {new_body}')
new_issue(new_title, new_body, new_labels)
def go():
log.info('Importing issues!')
for issue in issues_to_import():
import_issue(DataClassUnpack.instantiate(Issue, issue))
if __name__ == '__main__':
go()
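
A usage sketch (an assumption about invocation: the script takes no arguments, needs the `requests` library, and reads the Forgejo token from `FORGEJO_API_KEY` as shown above; `your-token` is a placeholder):

```
$ FORGEJO_API_KEY=your-token python3 maintainers/issue_import.py
```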


@ -152,7 +152,7 @@ section_title="Release $version_full ($DATE)"
# TODO add minor number, and append?
echo "# $section_title"
echo
changelog-d doc/manual/rl-next | sed -e 's/ *$//'
build-release-notes doc/manual/rl-next
) | tee -a $file
log "Wrote $file"

363
meson.build Normal file

@ -0,0 +1,363 @@
#
# OUTLINE:
#
# The top-level meson.build file (this file) handles general logic for build options,
# generation of config.h (which is put in the build directory, not the source root
# like the previous, autoconf-based build system did), the mechanism for header
# generation, and the few global C++ compiler arguments that are added to all targets in Lix.
#
# src/meson.build coordinates each of Lix's subcomponents (the lib dirs in ./src),
# which each have their own meson.build. Lix's components depend on each other,
# so each of `src/lib{util,store,fetchers,expr,main,cmd}/meson.build` rely on variables
# set in earlier `meson.build` files. Each of these also defines the install targets for
# their headers.
#
# src/meson.build also collects the miscellaneous source files that are in further subdirectories
# that become part of the final Nix command (things like `src/nix-build/*.cc`).
#
# Finally, src/nix/meson.build defines the Nix command itself, relying on all prior meson files.
#
# Unit tests are setup in tests/unit/meson.build, under the test suite "check".
#
# Functional tests are a bit more complicated. Generally they're defined in
# tests/functional/meson.build, and rely on helper scripts meson/setup-functional-tests.py
# and meson/run-test.py. Scattered around also are configure_file() invocations, which must
# be placed in specific directories' meson.build files to create the right directory tree
# in the build directory.
project('lix', 'cpp',
version : run_command('bash', '-c', 'echo -n $(cat ./.version)$VERSION_SUFFIX', check : true).stdout().strip(),
default_options : [
'cpp_std=c++2a',
# TODO(Qyriad): increase the warning level
'warning_level=1',
'debug=true',
'optimization=2',
'errorlogs=true', # Please print logs for tests that fail
],
)
fs = import('fs')
prefix = get_option('prefix')
# For each of these paths, assume that it is relative to the prefix unless
# it is already an absolute path (which is the default for store-dir, state-dir, and log-dir).
path_opts = [
# Meson built-ins.
'datadir',
'sysconfdir',
'bindir',
'mandir',
'libdir',
'includedir',
# Homecooked Lix directories.
'store-dir',
'state-dir',
'log-dir',
]
# For your grepping pleasure, this loop sets the following variables that aren't mentioned
# literally above:
# store_dir
# state_dir
# log_dir
foreach optname : path_opts
varname = optname.replace('-', '_')
path = get_option(optname)
if fs.is_absolute(path)
set_variable(varname, path)
else
set_variable(varname, prefix / path)
endif
endforeach
enable_tests = get_option('enable-tests')
tests_args = []
if get_option('tests-color')
tests_args += '--gtest_color=yes'
endif
if get_option('tests-brief')
tests_args += '--gtest_brief=1'
endif
cxx = meson.get_compiler('cpp')
host_system = host_machine.cpu_family() + '-' + host_machine.system()
message('canonical Nix system name:', host_system)
is_linux = host_machine.system() == 'linux'
is_x64 = host_machine.cpu_family() == 'x86_64'
deps = [ ]
configdata = { }
#
# Dependencies
#
boehm = dependency('bdw-gc', required : get_option('gc'))
if boehm.found()
deps += boehm
endif
configdata += {
'HAVE_BOEHMGC': boehm.found().to_int(),
}
boost = dependency('boost', required : true, modules : ['context', 'coroutine', 'container'])
deps += boost
# cpuid only makes sense on x86_64
cpuid_required = is_x64 ? get_option('cpuid') : false
cpuid = dependency('libcpuid', 'cpuid', required : cpuid_required)
configdata += {
'HAVE_LIBCPUID': cpuid.found().to_int(),
}
deps += cpuid
# seccomp only makes sense on Linux
seccomp_required = is_linux ? get_option('seccomp-sandboxing') : false
seccomp = dependency('libseccomp', 'seccomp', required : seccomp_required)
configdata += {
'HAVE_SECCOMP': seccomp.found().to_int(),
}
libarchive = dependency('libarchive', required : true)
deps += libarchive
brotli = [
dependency('libbrotlicommon', required : true),
dependency('libbrotlidec', required : true),
dependency('libbrotlienc', required : true),
]
deps += brotli
openssl = dependency('libcrypto', 'openssl', required : true)
deps += openssl
aws_sdk = dependency('aws-cpp-sdk-core', required : false)
if aws_sdk.found()
# The AWS pkg-config adds -std=c++11.
# https://github.com/aws/aws-sdk-cpp/issues/2673
aws_sdk = aws_sdk.partial_dependency(
compile_args : false,
includes : true,
link_args : true,
links : true,
sources : true,
)
deps += aws_sdk
s = aws_sdk.version().split('.')
configdata += {
'AWS_VERSION_MAJOR': s[0].to_int(),
'AWS_VERSION_MINOR': s[1].to_int(),
'AWS_VERSION_PATCH': s[2].to_int(),
}
aws_sdk_transfer = dependency('aws-cpp-sdk-transfer', required : true).partial_dependency(
compile_args : false,
includes : true,
link_args : true,
links : true,
sources : true,
)
endif
aws_s3 = dependency('aws-cpp-sdk-s3', required : false)
if aws_s3.found()
# The AWS pkg-config adds -std=c++11.
# https://github.com/aws/aws-sdk-cpp/issues/2673
aws_s3 = aws_s3.partial_dependency(
compile_args : false,
includes : true,
link_args : true,
links : true,
sources : true,
)
deps += aws_s3
endif
configdata += {
'ENABLE_S3': aws_s3.found().to_int(),
}
sqlite = dependency('sqlite3', 'sqlite', version : '>=3.6.19', required : true)
deps += sqlite
sodium = dependency('libsodium', 'sodium', required : true)
deps += sodium
curl = dependency('libcurl', 'curl', required : true)
deps += curl
editline = dependency('libeditline', 'editline', version : '>=1.14', required : true)
deps += editline
lowdown = dependency('lowdown', version : '>=0.9.0', required : true)
deps += lowdown
# HACK(Qyriad): rapidcheck's pkg-config doesn't include the libs lol
rapidcheck_meson = dependency('rapidcheck', required : enable_tests)
rapidcheck = declare_dependency(dependencies : rapidcheck_meson, link_args : ['-lrapidcheck'])
deps += rapidcheck
gtest = [
dependency('gtest', required : enable_tests),
dependency('gtest_main', required : enable_tests),
dependency('gmock', required : enable_tests),
dependency('gmock_main', required : enable_tests),
]
deps += gtest
toml11 = dependency('toml11', version : '>=3.7.0', required : true)
deps += toml11
nlohmann_json = dependency('nlohmann_json', required : true)
deps += nlohmann_json
#
# Build-time tools
#
bash = find_program('bash')
coreutils = find_program('coreutils')
dot = find_program('dot', required : false)
pymod = import('python')
python = pymod.find_installation('python3')
# Used to workaround https://github.com/mesonbuild/meson/issues/2320 in src/nix/meson.build.
installcmd = find_program('install')
sandbox_shell = get_option('sandbox-shell')
# Consider it required if we're on Linux and the user explicitly specified a non-default value.
sandbox_shell_required = sandbox_shell != 'busybox' and host_machine.system() == 'linux'
# NOTE(Qyriad): package.nix puts busybox in buildInputs for Linux.
# Most builds should not require setting this.
busybox = find_program(sandbox_shell, required : sandbox_shell_required, native : false)
if not busybox.found() and host_machine.system() == 'linux' and sandbox_shell_required
warning('busybox not found and other sandbox shell was specified')
warning('a sandbox shell is recommended on Linux -- configure with -Dsandbox-shell=/path/to/shell to set')
endif
# FIXME(Qyriad): the autoconf system checks that busybox has the "standalone" feature, indicating
# that busybox sh won't run busybox applets as builtins (which would break our sandbox).
lsof = find_program('lsof')
bison = find_program('bison')
flex = find_program('flex')
# This is how Nix does generated headers...
# other instances of header generation use a very similar command.
# FIXME(Qyriad): do we really need to use the shell for this?
gen_header_sh = 'echo \'R"__NIX_STR(\' | cat - @INPUT@ && echo \')__NIX_STR"\''
gen_header = generator(
bash,
arguments : [ '-c', gen_header_sh ],
capture : true,
output : '@PLAINNAME@.gen.hh',
)
#
# Configuration
#
run_command('ln', '-s',
meson.project_build_root() / '__nothing_link_target',
meson.project_build_root() / '__nothing_symlink',
check : true,
)
can_link_symlink = run_command('ln',
meson.project_build_root() / '__nothing_symlink',
meson.project_build_root() / '__nothing_hardlink',
check : false,
).returncode() == 0
run_command('rm', '-f',
meson.project_build_root() / '__nothing_symlink',
meson.project_build_root() / '__nothing_hardlink',
check : true,
)
summary('can hardlink to symlink', can_link_symlink, bool_yn : true)
configdata += { 'CAN_LINK_SYMLINK': can_link_symlink.to_int() }
# Check for each of these functions, and create a define like `#define HAVE_LCHOWN 1`.
check_funcs = [
'lchown',
'lutimes',
'pipe2',
'posix_fallocate',
'statvfs',
'strsignal',
'sysconf',
]
foreach funcspec : check_funcs
define_name = 'HAVE_' + funcspec.underscorify().to_upper()
define_value = cxx.has_function(funcspec).to_int()
configdata += {
define_name: define_value,
}
endforeach
config_h = configure_file(
configuration : {
'PACKAGE_NAME': '"' + meson.project_name() + '"',
'PACKAGE_VERSION': '"' + meson.project_version() + '"',
'PACKAGE_TARNAME': '"' + meson.project_name() + '"',
'PACKAGE_STRING': '"' + meson.project_name() + ' ' + meson.project_version() + '"',
'HAVE_STRUCT_DIRENT_D_TYPE': 1, # FIXME: actually check this for solaris
'SYSTEM': '"' + host_system + '"',
} + configdata,
output : 'config.h',
)
install_headers(config_h, subdir : 'nix')
add_project_arguments(
# TODO(Qyriad): Yes this is how the autoconf+Make system did it.
# It would be nice for our headers to be idempotent instead.
'-include', 'config.h',
'-Wno-deprecated-declarations',
'-Wimplicit-fallthrough',
'-Werror=switch',
'-Werror=switch-enum',
language : 'cpp',
)
if cxx.get_id() in ['gcc', 'clang']
# 2024-03-24: jade benchmarked the default sanitize reporting in clang and got
# a regression of about 10% on hackage-packages.nix with clang. So we are trapping instead.
#
# This has an overhead of 0-4% on gcc and unmeasurably little on clang, in
# Nix evaluation benchmarks.
#
# N.B. Meson generates a completely nonsense warning here:
# https://github.com/mesonbuild/meson/issues/9822
# Both of these args cannot be written in the default meson configuration.
# b_sanitize=signed-integer-overflow is ignored, and
# -fsanitize-undefined-trap-on-error is not representable.
sanitize_args = ['-fsanitize=signed-integer-overflow', '-fsanitize-undefined-trap-on-error']
add_project_arguments(sanitize_args, language: 'cpp')
add_project_link_arguments(sanitize_args, language: 'cpp')
endif
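To see what trapping on signed overflow means in practice, here is a small standalone check (not part of the build; the file name and compiler invocation are illustrative):

```bash
cat > /tmp/overflow.cc <<'EOF'
// Signed overflow is UB; with the flags below it traps instead of wrapping.
int main(int argc, char **) {
    int x = 2147483647; // INT_MAX
    return x + argc;    // overflows at runtime
}
EOF
c++ -O2 -fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error \
  /tmp/overflow.cc -o /tmp/overflow
/tmp/overflow; echo "exit status: $?"  # typically 132, i.e. killed by SIGILL
```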
add_project_link_arguments('-pthread', language : 'cpp')
if cxx.get_linker_id() in ['ld.bfd', 'ld.gold']
add_project_link_arguments('-Wl,--no-copy-dt-needed-entries', language : 'cpp')
endif
# Generate Chromium tracing files for each compiled file, which enables
# maintainers/buildtime_report.sh BUILD-DIR to simply work in clang builds.
#
# They can also be manually viewed at https://ui.perfetto.dev
if get_option('profile-build').require(meson.get_compiler('cpp').get_id() == 'clang').enabled()
add_project_arguments('-ftime-trace', language: 'cpp')
endif
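A sketch of the intended workflow (the build directory name is arbitrary; `buildtime_report.sh` is the script referenced in the comment above):

```bash
# Configure a clang build with per-file time traces, build, then aggregate them.
CXX=clang++ meson setup build -Dprofile-build=enabled
ninja -C build
maintainers/buildtime_report.sh build
```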
subdir('src')
subdir('scripts')
subdir('misc')
if enable_tests
subdir('tests/unit')
subdir('tests/functional')
endif

48
meson.options Normal file
View file

@ -0,0 +1,48 @@
# vim: filetype=meson
option('gc', type : 'feature',
description : 'enable garbage collection in the Nix expression evaluator (requires Boehm GC)',
)
# TODO(Qyriad): is this feature maintained?
option('embedded-sandbox-shell', type : 'feature',
description : 'include the sandbox shell in the Nix binary',
)
option('cpuid', type : 'feature',
description : 'determine microarchitecture levels with libcpuid (only relevant on x86_64)',
)
option('seccomp-sandboxing', type : 'feature',
description : 'build support for seccomp sandboxing (recommended unless your arch doesn\'t support libseccomp, only relevant on Linux)',
)
option('sandbox-shell', type : 'string', value : 'busybox',
description : 'path to a statically-linked shell to use as /bin/sh in sandboxes (usually busybox)',
)
option('enable-tests', type : 'boolean', value : true,
description : 'whether to enable tests or not (requires rapidcheck and gtest)',
)
option('tests-color', type : 'boolean', value : true,
description : 'set to false to disable color output in gtest',
)
option('tests-brief', type : 'boolean', value : false,
description : 'set to true for shorter test output',
)
option('profile-build', type : 'feature', value: 'disabled',
description : 'whether to enable -ftime-trace in clang builds, which produces per-file time traces that can be used to speed up the build.'
)
option('store-dir', type : 'string', value : '/nix/store',
description : 'path of the Nix store',
)
option('state-dir', type : 'string', value : '/nix/var',
description : 'path to store state in for Nix',
)
option('log-dir', type : 'string', value : '/nix/var/log',
description : 'path to store logs in for Nix',
)
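For reference, a configure invocation exercising a few of these options might look like the following (all values are only examples):

```bash
meson setup build \
  -Dstore-dir=/nix/store \
  -Dstate-dir=/nix/var \
  -Dsandbox-shell=/path/to/static/busybox \
  -Denable-tests=true \
  -Dtests-brief=true
```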

50
meson/cleanup-install.bash Executable file
View file

@ -0,0 +1,50 @@
#!/usr/bin/env bash
# Meson will call this with an absolute path to Bash.
# The shebang is just for convenience.
# The parser and lexer tab are generated via custom Meson targets in src/libexpr/meson.build,
# but Meson doesn't support marking only part of a target for install. The generation creates
# both headers (parser-tab.hh, lexer-tab.hh) and source files (parser-tab.cc, lexer-tab.cc),
# and we definitely want the former installed, but not the latter. This script is added to
# Meson's install steps to correct that; the logic is just complex enough to warrant
# separate and careful handling, because Meson's configured include directory may or
# may not be an absolute path, and DESTDIR may or may not be set at all, yet cannot be
# manipulated from Meson logic.
set -euo pipefail
echo "cleanup-install: removing Meson-placed C++ sources from dest includedir"
if [[ "${1/--help/}" != "$1" ]]; then
echo "cleanup-install: this script should only be called from the Meson build system"
exit 1
fi
# Ensure the includedir was passed as the first argument
# (set -u will make this fail otherwise).
includedir="$1"
# And then ensure that first argument is a directory that exists.
if ! [[ -d "$1" ]]; then
echo "cleanup-install: this script should only be called from the Meson build system"
echo "argv[1] (${1@Q}) is not a directory"
exit 2
fi
# If DESTDIR environment variable is set, prepend it to the include dir.
# Unfortunately, we cannot do this on the Meson side. We do have an environment variable
# `MESON_INSTALL_DESTDIR_PREFIX`, but that will not refer to the include directory if
# includedir has been set separately, which Lix's split-output derivation does.
# We also cannot simply do an inline bash conditional like "${DESTDIR:=}" or similar,
# because we need to specifically *join* DESTDIR and includedir with a slash, and *not*
# have a slash if DESTDIR isn't set at all, since $includedir could be a relative directory.
# Finally, DESTDIR is only available to us as an environment variable in these install scripts,
# not in Meson logic.
# Therefore, our best option is to have Meson pass this script the configured includedir,
# and perform this dance with it and $DESTDIR.
if [[ -n "${DESTDIR:-}" ]]; then
includedir="$DESTDIR/$includedir"
fi
# Intentionally not using -f.
# If these files don't exist then our assumptions have been violated and we should fail.
rm -v "$includedir/nix/parser-tab.cc" "$includedir/nix/lexer-tab.cc"

87
meson/run-test.py Executable file
View file

@ -0,0 +1,87 @@
#!/usr/bin/env python3
"""
This script is a helper for this project's Meson buildsystem to run Lix's
functional tests. It is an analogue to mk/run-test.sh in the autoconf+Make
buildsystem.
These tests are run in the installCheckPhase in Lix's derivation, and as such
expect to be run after the project has already been "installed" to some extent.
Look at meson/setup-functional-tests.py for more details.
"""
import argparse
from pathlib import Path
import os
import shutil
import subprocess
import sys
name = 'run-test.py'
if 'MESON_BUILD_ROOT' not in os.environ:
raise ValueError(f'{name}: this script must be run from the Meson build system')
def main():
tests_dir = Path(os.path.join(os.environ['MESON_BUILD_ROOT'], 'tests/functional'))
parser = argparse.ArgumentParser(name)
parser.add_argument('target', help='the script path relative to tests/functional to run')
args = parser.parse_args()
target = Path(args.target)
# The test suite considers the test's name to be the path to the test relative to
# `tests/functional`, but without the file extension.
# e.g. for `tests/functional/flakes/develop.sh`, the test name is `flakes/develop`
test_name = target.with_suffix('').as_posix()
if not target.is_absolute():
target = tests_dir.joinpath(target).resolve()
assert target.exists(), f'{name}: test {target} does not exist; did you run `meson install`?'
bash = os.environ.get('BASH', shutil.which('bash'))
if bash is None:
raise ValueError(f'{name}: bash executable not found and BASH environment variable not set')
test_environment = os.environ | {
'TEST_NAME': test_name,
# mk/run-test.sh did this, but I don't know if it has any effect since it seems
# like the tests that interact with remote stores set it themselves?
'NIX_REMOTE': '',
}
# Initialize testing.
init_result = subprocess.run([bash, '-e', 'init.sh'], cwd=tests_dir, env=test_environment)
if init_result.returncode != 0:
print(f'{name}: internal error initializing {args.target}', file=sys.stderr)
print('[ERROR]')
# Meson interprets exit code 99 as indicating an *error* in the testing process.
return 99
# Run the test itself.
test_result = subprocess.run([bash, '-e', target.name], cwd=target.parent, env=test_environment)
if test_result.returncode == 0:
print('[PASS]')
elif test_result.returncode == 99:
print('[SKIP]')
# Meson interprets exit code 77 as indicating a skipped test.
return 77
else:
print('[FAIL]')
return test_result.returncode
try:
sys.exit(main())
except AssertionError as e:
# This should mean that this test was run outside of Meson, probably without
# having run `meson install` first, which is not a bug in this script.
print(e, file=sys.stderr)
sys.exit(99)
except Exception as e:
print(f'{name}: INTERNAL ERROR running test ({sys.argv}): {e}', file=sys.stderr)
print(f'this is a bug in {name}')
sys.exit(99)
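For manual debugging the script can also be run by hand, provided the environment variable it checks for is set and `meson install` has already mirrored the tests into the build directory; the test name here is the example from the docstring above:

```bash
MESON_BUILD_ROOT="$PWD/build" python3 meson/run-test.py flakes/develop.sh
# prints [PASS], [FAIL], or [SKIP]; exit code 77 marks a skip and 99 an
# internal error, matching Meson's conventions.
```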

105
meson/setup-functional-tests.py Executable file
View file

@ -0,0 +1,105 @@
#!/usr/bin/env python3
"""
So like. This script is cursed.
It's a helper for this project's Meson buildsystem for Lix's functional tests.
The functional tests are a bunch of bash scripts that each expect to be run from the
directory that script is in, expect modifications to have happened to the source tree,
and even splork files around. Build processes that aren't self-contained in the
out-of-source build directory are against the spirit of Meson (and personally annoying),
but more problematically we also need configured files in the test tree.
So. We copy the tests tree into the build directory.
Meson doesn't have a good way of doing this natively -- the best you could do is subdir()
into every directory in the tests tree and configure_file(copy : true) on every file,
but this won't copy symlinks as symlinks, which we need since the test suite has, well,
tests on symlinks.
However, the functional tests are normally run during Lix's derivation's installCheckPhase,
after Lix has already been "installed" somewhere. So in Meson we setup add this file as an
install script and copy everything in tests/functional to the build directory, preserving
things like symlinks, even broken ones (which are intentional).
TODO(Qyriad): when we remove the old build system entirely, we can instead fix the tests.
"""
from pathlib import Path
import os, os.path
import shutil
import sys
import traceback
name = 'setup-functional-tests.py'
if 'MESON_SOURCE_ROOT' not in os.environ or 'MESON_BUILD_ROOT' not in os.environ:
raise ValueError(f'{name}: this script must be run from the Meson build system')
print(f'{name}: mirroring tests/functional to build directory')
tests_source = Path(os.environ['MESON_SOURCE_ROOT']) / 'tests/functional'
tests_build = Path(os.environ['MESON_BUILD_ROOT']) / 'tests/functional'
def main():
os.chdir(tests_build)
for src_dirpath, src_dirnames, src_filenames in os.walk(tests_source):
src_dirpath = Path(src_dirpath)
assert src_dirpath.is_absolute(), f'{src_dirpath=} is not absolute'
# os.walk() gives us the absolute path to the directory we're currently in as src_dirpath.
# We want to mirror from the perspective of `tests_source`.
rel_subdir = src_dirpath.relative_to(tests_source)
assert (not rel_subdir.is_absolute()), f'{rel_subdir=} is not relative'
# And then join that relative path on `tests_build` to get the absolute
# path in the build directory that corresponds to `src_dirpath`.
build_dirpath = tests_build / rel_subdir
assert build_dirpath.is_absolute(), f'{build_dirpath=} is not absolute'
# More concretely, for the test file tests/functional/ca/build.sh:
# - src_dirpath is `$MESON_SOURCE_ROOT/tests/functional/ca`
# - rel_subdir is `ca`
# - build_dirpath is `$MESON_BUILD_ROOT/tests/functional/ca`
# `src_dirnames` are the directories directly underneath `src_dirpath`, and are
# relative to it.
for src_dirname in src_dirnames:
# Take the name of the directory in the tests source and join it on `src_dirpath`
# to get the full path to this specific directory in the tests source.
src = src_dirpath / src_dirname
# If there isn't *something* here, then our logic is wrong.
# Path.exists(follow_symlinks=False) wasn't added until Python 3.12, so we use
# os.path.lexists() here.
assert os.path.lexists(src), f'{src=} does not exist'
# Take the name of this directory and join it on `build_dirpath` to get the full
# path to the directory in `build/tests/functional` that we need to create.
dest = build_dirpath / src_dirname
if src.is_symlink():
src_target = src.readlink()
dest.unlink(missing_ok=True)
dest.symlink_to(src_target)
else:
dest.mkdir(parents=True, exist_ok=True)
for src_filename in src_filenames:
# os.walk() should be giving us relative filenames.
# If it isn't, our path joins will be veeeery wrong.
assert (not Path(src_filename).is_absolute()), f'{src_filename=} is not relative'
src = src_dirpath / src_filename
dst = build_dirpath / src_filename
# Mildly misleading name -- unlink removes ordinary files as well as symlinks.
dst.unlink(missing_ok=True)
# shutil.copy2() best-effort preserves metadata.
shutil.copy2(src, dst, follow_symlinks=False)
try:
sys.exit(main())
except Exception as e:
# Any error is likely a bug in this script.
print(f'{name}: INTERNAL ERROR setting up functional tests: {e}', file=sys.stderr)
print(traceback.format_exc())
print(f'this is a bug in {name}')
sys.exit(1)
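It normally runs as a Meson install script, but the mirroring can be triggered by hand with the two environment variables it expects (paths are illustrative):

```bash
MESON_SOURCE_ROOT="$PWD" MESON_BUILD_ROOT="$PWD/build" \
  python3 meson/setup-functional-tests.py
# copies tests/functional into build/tests/functional, preserving symlinks
```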

8
misc/bash/meson.build Normal file
View file

@ -0,0 +1,8 @@
configure_file(
input : 'completion.sh',
output : 'nix',
install : true,
install_dir : datadir / 'bash-completion/completions',
install_mode : 'rw-r--r--',
copy : true,
)

View file

@ -1,31 +0,0 @@
{ mkDerivation, aeson, base, bytestring, cabal-install-parsers
, Cabal-syntax, containers, directory, filepath, frontmatter
, generic-lens-lite, lib, mtl, optparse-applicative, parsec, pretty
, regex-applicative, text, pkgs
}:
let rev = "f30f6969e9cd8b56242309639d58acea21c99d06";
in
mkDerivation {
pname = "changelog-d";
version = "0.1";
src = pkgs.fetchurl {
name = "changelog-d-${rev}.tar.gz";
url = "https://codeberg.org/roberth/changelog-d/archive/${rev}.tar.gz";
hash = "sha256-8a2+i5u7YoszAgd5OIEW0eYUcP8yfhtoOIhLJkylYJ4=";
} // { inherit rev; };
isLibrary = false;
isExecutable = true;
libraryHaskellDepends = [
aeson base bytestring cabal-install-parsers Cabal-syntax containers
directory filepath frontmatter generic-lens-lite mtl parsec pretty
regex-applicative text
];
executableHaskellDepends = [
base bytestring Cabal-syntax directory filepath
optparse-applicative
];
doHaddock = false;
description = "Concatenate changelog entries into a single one";
license = lib.licenses.gpl3Plus;
mainProgram = "changelog-d";
}

View file

@ -1,31 +0,0 @@
# Taken temporarily from <nixpkgs/pkgs/by-name/ch/changelog-d/package.nix>
{
callPackage,
lib,
haskell,
haskellPackages,
}:
let
hsPkg = haskellPackages.callPackage ./changelog-d.cabal.nix { };
addCompletions = haskellPackages.generateOptparseApplicativeCompletions ["changelog-d"];
haskellModifications =
lib.flip lib.pipe [
addCompletions
haskell.lib.justStaticExecutables
];
mkDerivationOverrides = finalAttrs: oldAttrs: {
version = oldAttrs.version + "-git-${lib.strings.substring 0 7 oldAttrs.src.rev}";
meta = oldAttrs.meta // {
homepage = "https://codeberg.org/roberth/changelog-d";
maintainers = [ lib.maintainers.roberth ];
};
};
in
(haskellModifications hsPkg).overrideAttrs mkDerivationOverrides

View file

@ -0,0 +1,27 @@
# Upstreaming here, can be deleted once it's upstreamed:
# https://github.com/NixOS/nixpkgs/pull/297102
{ stdenv, lib, cmake, fetchFromGitHub }:
stdenv.mkDerivation (finalAttrs: {
pname = "clangbuildanalyzer";
version = "1.5.0";
src = fetchFromGitHub {
owner = "aras-p";
repo = "ClangBuildAnalyzer";
rev = "v${finalAttrs.version}";
sha256 = "sha256-kmgdk634zM0W0OoRoP/RzepArSipa5bNqdVgdZO9gxo=";
};
nativeBuildInputs = [
cmake
];
meta = {
description = "Tool for analyzing Clang's -ftrace-time files";
homepage = "https://github.com/aras-p/ClangBuildAnalyzer";
maintainers = with lib.maintainers; [ lf- ];
license = lib.licenses.unlicense;
platforms = lib.platforms.unix;
mainProgram = "ClangBuildAnalyzer";
};
})

8
misc/fish/meson.build Normal file
View file

@ -0,0 +1,8 @@
configure_file(
input : 'completion.fish',
output : 'nix.fish',
install : true,
install_dir : datadir / 'fish/vendor_completions.d',
install_mode : 'rw-r--r--',
copy : true,
)

5
misc/meson.build Normal file
View file

@ -0,0 +1,5 @@
subdir('bash')
subdir('fish')
subdir('zsh')
subdir('systemd')

25
misc/systemd/meson.build Normal file
View file

@ -0,0 +1,25 @@
foreach config : [ 'nix-daemon.socket', 'nix-daemon.service' ]
configure_file(
input : config + '.in',
output : config,
install : true,
install_dir : prefix / 'lib/systemd/system',
install_mode : 'rw-r--r--',
configuration : {
'storedir' : store_dir,
'localstatedir' : state_dir,
'bindir' : bindir,
},
)
endforeach
configure_file(
input : 'nix-daemon.conf.in',
output : 'nix-daemon.conf',
install : true,
install_dir : prefix / 'lib/tmpfiles.d',
install_mode : 'rw-r--r--',
configuration : {
'localstatedir' : state_dir,
},
)

View file

@ -1,7 +0,0 @@
ifdef HOST_LINUX
$(foreach n, nix-daemon.conf, $(eval $(call install-file-in, $(d)/$(n), $(sysconfdir)/init, 0644)))
clean-files += $(d)/nix-daemon.conf
endif

View file

@ -1,5 +0,0 @@
description "Nix Daemon"
start on filesystem
stop on shutdown
respawn
exec @bindir@/nix-daemon --daemon

10
misc/zsh/meson.build Normal file
View file

@ -0,0 +1,10 @@
foreach script : [ [ 'completion.zsh', '_nix' ], [ 'run-help-nix' ] ]
configure_file(
input : script[0],
output : script.get(1, script[0]),
install : true,
install_dir : datadir / 'zsh/site-functions',
install_mode : 'rw-r--r--',
copy : true,
)
endforeach

View file

@ -1,5 +1,5 @@
%.gen.hh: %
@echo 'R"foo(' >> $@.tmp
@echo 'R"__NIX_STR(' >> $@.tmp
$(trace-gen) cat $< >> $@.tmp
@echo ')foo"' >> $@.tmp
@echo ')__NIX_STR"' >> $@.tmp
@mv $@.tmp $@

View file

@ -78,11 +78,7 @@ define build-library
$(1)_LDFLAGS += -undefined suppress -flat_namespace
endif
else
ifndef HOST_DARWIN
ifndef HOST_CYGWIN
$(1)_LDFLAGS += -Wl,-z,defs
endif
endif
# -Wl,-z,defs is broken with sanitizers on Linux/clang at least.
endif
ifndef HOST_DARWIN

View file

@ -6,6 +6,9 @@ programs-list :=
# - $(1)_NAME: the name of the program (e.g. foo); defaults to
# $(1).
#
# - $(1)_ENV: environment variables to set when running the program
# from the Makefile using the $(1)_RUN target.
#
# - $(1)_DIR: the directory where the (non-installed) program will be
# placed.
#
@ -87,6 +90,6 @@ define build-program
# Phony target to run this program (typically as a dependency of 'check').
.PHONY: $(1)_RUN
$(1)_RUN: $$($(1)_PATH)
$(trace-test) $$(UNIT_TEST_ENV) $$($(1)_PATH)
$(trace-test) $$($(1)_ENV) $$($(1)_PATH)
endef

321
package.nix Normal file
View file

@ -0,0 +1,321 @@
{
pkgs,
lib,
stdenv,
autoconf-archive,
autoreconfHook,
aws-sdk-cpp,
boehmgc,
nlohmann_json,
bison,
build-release-notes,
boost,
brotli,
bzip2,
cmake,
curl,
doxygen,
editline,
fileset,
flex,
git,
gtest,
jq,
libarchive,
libcpuid,
libseccomp,
libsodium,
lsof,
lowdown,
mdbook,
mdbook-linkcheck,
mercurial,
meson,
ninja,
openssl,
pkg-config,
rapidcheck,
sqlite,
toml11,
util-linuxMinimal ? utillinuxMinimal,
utillinuxMinimal ? null,
xz,
busybox-sandbox-shell,
pname ? "nix",
versionSuffix ? "",
officialRelease ? true,
# Set to true to build the release notes for the next release.
buildUnreleasedNotes ? false,
internalApiDocs ? false,
# Avoid setting things that would interfere with a functioning devShell
forDevShell ? false,
# FIXME(Qyriad): build Lix using Meson instead of autoconf and make.
# This flag will be removed when the migration to Meson is complete.
buildWithMeson ? false,
# Not a real argument, just the only way to approximate let-binding some
# stuff for argument defaults.
__forDefaults ? {
canRunInstalled = stdenv.buildPlatform.canExecute stdenv.hostPlatform;
},
}: let
inherit (__forDefaults) canRunInstalled;
version = lib.fileContents ./.version + versionSuffix;
aws-sdk-cpp-nix = aws-sdk-cpp.override {
apis = [ "s3" "transfer" ];
customMemoryManagement = false;
};
testConfigureFlags = [
"RAPIDCHECK_HEADERS=${lib.getDev rapidcheck}/extras/gtest/include"
];
# The internal API docs need these for the build, but if we're not building
# Nix itself, then these don't need to be propagated.
maybePropagatedInputs = [
boehmgc
nlohmann_json
];
# .gitignore has already been processed, so any changes in it are irrelevant
# at this point. It is not represented verbatim for test purposes because
# that would interfere with repo semantics.
baseFiles = fileset.fileFilter (f: f.name != ".gitignore") ./.;
configureFiles = fileset.unions [
./.version
./configure.ac
./m4
# TODO: do we really need README.md? It doesn't seem used in the build.
./README.md
];
topLevelBuildFiles = fileset.unions ([
./local.mk
./Makefile
./Makefile.config.in
./mk
] ++ lib.optionals buildWithMeson [
./meson.build
./meson.options
./meson
./scripts/meson.build
]);
functionalTestFiles = fileset.unions [
./tests/functional
./tests/unit
(fileset.fileFilter (f: lib.strings.hasPrefix "nix-profile" f.name) ./scripts)
];
in stdenv.mkDerivation (finalAttrs: {
inherit pname version;
src = fileset.toSource {
root = ./.;
fileset = fileset.intersection baseFiles (fileset.unions ([
configureFiles
topLevelBuildFiles
functionalTestFiles
] ++ lib.optionals (!finalAttrs.dontBuild || internalApiDocs) [
./boehmgc-coroutine-sp-fallback.diff
./doc
./misc
./precompiled-headers.h
./src
./COPYING
./scripts/local.mk
]));
};
VERSION_SUFFIX = versionSuffix;
outputs = [ "out" ]
++ lib.optionals (!finalAttrs.dontBuild) [ "dev" "doc" ];
dontBuild = false;
# FIXME(Qyriad): see if this is still needed once the migration to Meson is completed.
mesonFlags = lib.optionals (buildWithMeson && stdenv.hostPlatform.isLinux) [
"-Dsandbox-shell=${lib.getBin busybox-sandbox-shell}/bin/busybox"
];
# We only include CMake so that Meson can locate toml11, which only ships CMake dependency metadata.
dontUseCmakeConfigure = true;
nativeBuildInputs = [
bison
flex
] ++ [
(lib.getBin lowdown)
mdbook
mdbook-linkcheck
autoconf-archive
] ++ lib.optional (!buildWithMeson) autoreconfHook ++ [
pkg-config
# Tests
git
mercurial
jq
lsof
] ++ lib.optional stdenv.hostPlatform.isLinux util-linuxMinimal
++ lib.optional (!officialRelease && buildUnreleasedNotes) build-release-notes
++ lib.optional internalApiDocs doxygen
++ lib.optionals buildWithMeson [
meson
ninja
cmake
];
buildInputs = [
curl
bzip2
xz
brotli
editline
openssl
sqlite
libarchive
boost
lowdown
libsodium
toml11
]
++ lib.optionals stdenv.hostPlatform.isLinux [ libseccomp busybox-sandbox-shell ]
++ lib.optional stdenv.hostPlatform.isx86_64 libcpuid
# There have been issues building these dependencies
++ lib.optional (stdenv.hostPlatform == stdenv.buildPlatform) aws-sdk-cpp-nix
++ lib.optionals (finalAttrs.dontBuild) maybePropagatedInputs
;
checkInputs = [
gtest
rapidcheck
];
propagatedBuildInputs = lib.optionals (!finalAttrs.dontBuild) maybePropagatedInputs;
disallowedReferences = [
boost
];
# Needed for Meson to find Boost.
# https://github.com/NixOS/nixpkgs/issues/86131.
env = lib.optionalAttrs (buildWithMeson || forDevShell) {
BOOST_INCLUDEDIR = "${lib.getDev boost}/include";
BOOST_LIBRARYDIR = "${lib.getLib boost}/lib";
};
preConfigure = lib.optionalString (!finalAttrs.dontBuild && !stdenv.hostPlatform.isStatic) ''
# Copy libboost_context so we don't get all of Boost in our closure.
# https://github.com/NixOS/nixpkgs/issues/45462
mkdir -p $out/lib
cp -pd ${boost}/lib/{libboost_context*,libboost_thread*,libboost_system*} $out/lib
rm -f $out/lib/*.a
'' + lib.optionalString (!finalAttrs.dontBuild && stdenv.hostPlatform.isLinux) ''
chmod u+w $out/lib/*.so.*
patchelf --set-rpath $out/lib:${stdenv.cc.cc.lib}/lib $out/lib/libboost_thread.so.*
'' + lib.optionalString (!finalAttrs.dontBuild && stdenv.hostPlatform.isDarwin) ''
for LIB in $out/lib/*.dylib; do
chmod u+w $LIB
install_name_tool -id $LIB $LIB
install_name_tool -delete_rpath ${boost}/lib/ $LIB || true
done
install_name_tool -change ${boost}/lib/libboost_system.dylib $out/lib/libboost_system.dylib $out/lib/libboost_thread.dylib
'' + ''
# Workaround https://github.com/NixOS/nixpkgs/issues/294890.
if [[ -n "''${doCheck:-}" ]]; then
appendToVar configureFlags "--enable-tests"
else
appendToVar configureFlags "--disable-tests"
fi
'';
configureFlags = lib.optionals stdenv.isLinux [
"--with-boost=${boost}/lib"
"--with-sandbox-shell=${busybox-sandbox-shell}/bin/busybox"
] ++ lib.optionals (stdenv.isLinux && !(stdenv.hostPlatform.isStatic && stdenv.system == "aarch64-linux")) [
"LDFLAGS=-fuse-ld=gold"
]
++ lib.optional stdenv.hostPlatform.isStatic "--enable-embedded-sandbox-shell"
++ lib.optionals (finalAttrs.doCheck || internalApiDocs) testConfigureFlags
++ lib.optional (!canRunInstalled) "--disable-doc-gen"
++ [ (lib.enableFeature internalApiDocs "internal-api-docs") ]
++ lib.optional (!forDevShell) "--sysconfdir=/etc"
++ [
"TOML11_HEADERS=${lib.getDev toml11}/include"
];
mesonBuildType = lib.optional (buildWithMeson || forDevShell) "debugoptimized";
installTargets = lib.optional internalApiDocs "internal-api-html";
enableParallelBuilding = true;
makeFlags = "profiledir=$(out)/etc/profile.d PRECOMPILE_HEADERS=1";
doCheck = true;
mesonCheckFlags = lib.optionals (buildWithMeson || forDevShell) [
"--suite=check"
];
installFlags = "sysconfdir=$(out)/etc";
postInstall = lib.optionalString (!finalAttrs.dontBuild) ''
mkdir -p $doc/nix-support
echo "doc manual $doc/share/doc/nix/manual" >> $doc/nix-support/hydra-build-products
'' + lib.optionalString stdenv.hostPlatform.isStatic ''
mkdir -p $out/nix-support
echo "file binary-dist $out/bin/nix" >> $out/nix-support/hydra-build-products
'' + lib.optionalString stdenv.isDarwin ''
for lib in libnixutil.dylib libnixexpr.dylib; do
install_name_tool \
-change "${lib.getLib boost}/lib/libboost_context.dylib" \
"$out/lib/libboost_context.dylib" \
"$out/lib/$lib"
done
'' + lib.optionalString internalApiDocs ''
mkdir -p $out/nix-support
echo "doc internal-api-docs $out/share/doc/nix/internal-api/html" >> "$out/nix-support/hydra-build-products"
'';
doInstallCheck = finalAttrs.doCheck;
installCheckFlags = "sysconfdir=$(out)/etc";
installCheckTarget = "installcheck"; # work around buggy detection in stdenv
mesonInstallCheckFlags = [
"--suite=installcheck"
];
preInstallCheck = lib.optionalString stdenv.hostPlatform.isDarwin ''
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
'';
installCheckPhase = lib.optionalString buildWithMeson ''
runHook preInstallCheck
flagsArray=($mesonInstallCheckFlags "''${mesonInstallCheckFlagsArray[@]}")
meson test --no-rebuild "''${flagsArray[@]}"
runHook postInstallCheck
'';
separateDebugInfo = !stdenv.hostPlatform.isStatic && !finalAttrs.dontBuild;
strictDeps = true;
# strictoverflow is disabled because we trap on signed overflow instead
hardeningDisable = [ "strictoverflow" ]
++ lib.optional stdenv.hostPlatform.isStatic "pie";
meta.platforms = lib.platforms.unix;
passthru.perl-bindings = pkgs.callPackage ./perl {
inherit fileset stdenv;
};
})

View file

@ -1,46 +0,0 @@
#!/usr/bin/env bash
((NEW_NIX_FIRST_BUILD_UID=301))
id_available(){
dscl . list /Users UniqueID | grep -E '\b'$1'\b' >/dev/null
}
change_nixbld_names_and_ids(){
local name uid next_id
((next_id=NEW_NIX_FIRST_BUILD_UID))
echo "Attempting to migrate nixbld users."
echo "Each user should change from nixbld# to _nixbld#"
echo "and their IDs relocated to $next_id+"
while read -r name uid; do
echo " Checking $name (uid: $uid)"
# iterate for a clean ID
while id_available "$next_id"; do
((next_id++))
if ((next_id >= 400)); then
echo "We've hit UID 400 without placing all of your users :("
echo "You should use the commands in this script as a starting"
echo "point to review your UID-space and manually move the"
echo "remaining users (or delete them, if you don't need them)."
exit 1
fi
done
if [[ $name == _* ]]; then
echo " It looks like $name has already been renamed--skipping."
else
# first 3 are cleanup, it's OK if they aren't here
sudo dscl . delete /Users/$name dsAttrTypeNative:_writers_passwd &>/dev/null || true
sudo dscl . change /Users/$name NFSHomeDirectory "/private/var/empty 1" "/var/empty" &>/dev/null || true
# remove existing user from group
sudo dseditgroup -o edit -t user -d $name nixbld || true
sudo dscl . change /Users/$name UniqueID $uid $next_id
sudo dscl . change /Users/$name RecordName $name _$name
# add renamed user to group
sudo dseditgroup -o edit -t user -a _$name nixbld
echo " $name migrated to _$name (uid: $next_id)"
fi
done < <(dscl . list /Users UniqueID | grep nixbld | sort -n -k2)
}
change_nixbld_names_and_ids

View file

@ -1,33 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# set -x
# mapfile BUILDS_FOR_LATEST_EVAL < <(
# curl -H 'Accept: application/json' https://hydra.nixos.org/jobset/nix/master/evals | \
# jq -r '.evals[0].builds[] | @sh')
BUILDS_FOR_LATEST_EVAL=$(
curl -sS -H 'Accept: application/json' https://hydra.nixos.org/jobset/nix/master/evals | \
jq -r '.evals[0].builds[]')
someBuildFailed=0
for buildId in $BUILDS_FOR_LATEST_EVAL; do
buildInfo=$(curl --fail -sS -H 'Accept: application/json' "https://hydra.nixos.org/build/$buildId")
finished=$(echo "$buildInfo" | jq -r '.finished')
if [[ $finished = 0 ]]; then
continue
fi
buildStatus=$(echo "$buildInfo" | jq -r '.buildstatus')
if [[ $buildStatus != 0 ]]; then
someBuildFailed=1
echo "Job “$(echo "$buildInfo" | jq -r '.job')” failed on hydra: $buildInfo"
fi
done
exit "$someBuildFailed"

View file

@ -1,875 +0,0 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
# I'm a little agnostic on the choices, but supporting a wide
# slate of uses for now, including:
# - import-only: `. create-darwin-volume.sh no-main[ ...]`
# - legacy: `./create-darwin-volume.sh` or `. create-darwin-volume.sh`
# (both will run main())
# - external alt-routine: `./create-darwin-volume.sh no-main func[ ...]`
if [ "${1-}" = "no-main" ]; then
shift
readonly _CREATE_VOLUME_NO_MAIN=1
else
readonly _CREATE_VOLUME_NO_MAIN=0
# declare some things we expect to inherit from install-multi-user
# I don't love this (because it's a bit of a kludge).
#
# CAUTION: (Dec 19 2020)
# This is a stopgap. It doesn't cover the full slate of
# identifiers we inherit--just those necessary to:
# - avoid breaking direct invocations of this script (here/now)
# - avoid hard-to-reverse structural changes before the call to rm
# single-user support is verified
#
# In the near-mid term, I (personally) think we should:
# - decide to deprecate the direct call and add a notice
# - fold all of this into install-darwin-multi-user.sh
# - intentionally remove the old direct-invocation form (kill the
# routine, replace this script w/ deprecation notice and a note
# on the remove-after date)
#
readonly NIX_ROOT="${NIX_ROOT:-/nix}"
_sudo() {
shift # throw away the 'explanation'
/usr/bin/sudo "$@"
}
failure() {
if [ "$*" = "" ]; then
cat
else
echo "$@"
fi
exit 1
}
task() {
echo "$@"
}
fi
# usually "disk1"
root_disk_identifier() {
# For performance (~10ms vs 280ms) I'm parsing 'diskX' from stat output
# (~diskXsY)--but I'm retaining the more-semantic approach since
# it documents intent better.
# /usr/sbin/diskutil info -plist / | xmllint --xpath "/plist/dict/key[text()='ParentWholeDisk']/following-sibling::string[1]/text()" -
#
local special_device
special_device="$(/usr/bin/stat -f "%Sd" /)"
echo "${special_device%s[0-9]*}"
}
# make it easy to play w/ 'Case-sensitive APFS'
readonly NIX_VOLUME_FS="${NIX_VOLUME_FS:-APFS}"
readonly NIX_VOLUME_LABEL="${NIX_VOLUME_LABEL:-Nix Store}"
# Strongly assuming we'll make a volume on the device / is on
# But you can override NIX_VOLUME_USE_DISK to create it on some other device
readonly NIX_VOLUME_USE_DISK="${NIX_VOLUME_USE_DISK:-$(root_disk_identifier)}"
NIX_VOLUME_USE_SPECIAL="${NIX_VOLUME_USE_SPECIAL:-}"
NIX_VOLUME_USE_UUID="${NIX_VOLUME_USE_UUID:-}"
readonly NIX_VOLUME_MOUNTD_DEST="${NIX_VOLUME_MOUNTD_DEST:-/Library/LaunchDaemons/org.nixos.darwin-store.plist}"
if /usr/bin/fdesetup isactive >/dev/null; then
test_filevault_in_use() { return 0; }
# no readonly; we may modify if user refuses from cure_volume
NIX_VOLUME_DO_ENCRYPT="${NIX_VOLUME_DO_ENCRYPT:-1}"
else
test_filevault_in_use() { return 1; }
NIX_VOLUME_DO_ENCRYPT="${NIX_VOLUME_DO_ENCRYPT:-0}"
fi
should_encrypt_volume() {
test_filevault_in_use && (( NIX_VOLUME_DO_ENCRYPT == 1 ))
}
substep() {
printf " %s\n" "" "- $1" "" "${@:2}"
}
volumes_labeled() {
local label="$1"
xsltproc --novalid --stringparam label "$label" - <(/usr/sbin/ioreg -ra -c "AppleAPFSVolume") <<'EOF'
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
<xsl:output method="text"/>
<xsl:template match="/">
<xsl:apply-templates select="/plist/array/dict/key[text()='IORegistryEntryName']/following-sibling::*[1][text()=$label]/.."/>
</xsl:template>
<xsl:template match="dict">
<xsl:apply-templates match="string" select="key[text()='BSD Name']/following-sibling::*[1]"/>
<xsl:text>=</xsl:text>
<xsl:apply-templates match="string" select="key[text()='UUID']/following-sibling::*[1]"/>
<xsl:text>&#xA;</xsl:text>
</xsl:template>
</xsl:stylesheet>
EOF
# I cut label out of the extracted values, but here it is for reference:
# <xsl:apply-templates match="string" select="key[text()='IORegistryEntryName']/following-sibling::*[1]"/>
# <xsl:text>=</xsl:text>
}
right_disk() {
local volume_special="$1" # (i.e., disk1s7)
[[ "$volume_special" == "$NIX_VOLUME_USE_DISK"s* ]]
}
right_volume() {
local volume_special="$1" # (i.e., disk1s7)
# if set, it must match; otherwise ensure it's on the right disk
if [ -z "$NIX_VOLUME_USE_SPECIAL" ]; then
if right_disk "$volume_special"; then
NIX_VOLUME_USE_SPECIAL="$volume_special" # latch on
return 0
else
return 1
fi
else
[ "$volume_special" = "$NIX_VOLUME_USE_SPECIAL" ]
fi
}
right_uuid() {
local volume_uuid="$1"
# if set, it must match; otherwise allow
if [ -z "$NIX_VOLUME_USE_UUID" ]; then
NIX_VOLUME_USE_UUID="$volume_uuid" # latch on
return 0
else
[ "$volume_uuid" = "$NIX_VOLUME_USE_UUID" ]
fi
}
cure_volumes() {
local found volume special uuid
# loop just in case they have more than one volume
# (nothing stops you from doing this)
for volume in $(volumes_labeled "$NIX_VOLUME_LABEL"); do
# CAUTION: this could (maybe) be a more normal read
# loop like:
# while IFS== read -r special uuid; do
# # ...
# done <<<"$(volumes_labeled "$NIX_VOLUME_LABEL")"
#
# I did it with for to skirt a problem with the obvious
# pattern replacing stdin and causing user prompts
# inside (which also use read and access stdin) to skip
#
# If there's an existing encrypted volume we can't find
# in keychain, the user never gets prompted to delete
# the volume, and the install fails.
#
# If you change this, a human needs to test a very
# specific scenario: you already have an encrypted
# Nix Store volume, and have deleted its credential
# from keychain. Ensure the script asks you if it can
# delete the volume, and then prompts for your sudo
# password to confirm.
#
# shellcheck disable=SC1097
IFS== read -r special uuid <<< "$volume"
# take the first one that's on the right disk
if [ -z "${found:-}" ]; then
if right_volume "$special" && right_uuid "$uuid"; then
cure_volume "$special" "$uuid"
found="${special} (${uuid})"
else
warning <<EOF
Ignoring ${special} (${uuid}) because I am looking for:
disk=${NIX_VOLUME_USE_DISK} special=${NIX_VOLUME_USE_SPECIAL:-${NIX_VOLUME_USE_DISK}sX} uuid=${NIX_VOLUME_USE_UUID:-any}
EOF
# TODO: give chance to delete if ! headless?
fi
else
warning <<EOF
Ignoring ${special} (${uuid}), already found target: $found
EOF
# TODO reminder? I feel like I want one
# idiom that reminds some warnings, or warns
# some reminders?
# TODO: if ! headless, chance to delete?
fi
done
if [ -z "${found:-}" ]; then
readonly NIX_VOLUME_USE_SPECIAL NIX_VOLUME_USE_UUID
fi
}
volume_encrypted() {
local volume_special="$1" # (i.e., disk1s7)
# Trying to match the first line of output; known first lines:
# No cryptographic users for <special>
# Cryptographic user for <special> (1 found)
# Cryptographic users for <special> (2 found)
/usr/sbin/diskutil apfs listCryptoUsers -plist "$volume_special" | /usr/bin/grep -q APFSCryptoUserUUID
}
test_fstab() {
/usr/bin/grep -q "$NIX_ROOT apfs rw" /etc/fstab 2>/dev/null
}
test_nix_root_is_symlink() {
[ -L "$NIX_ROOT" ]
}
test_synthetic_conf_either(){
/usr/bin/grep -qE "^${NIX_ROOT:1}($|\t.{3,}$)" /etc/synthetic.conf 2>/dev/null
}
test_synthetic_conf_mountable() {
/usr/bin/grep -q "^${NIX_ROOT:1}$" /etc/synthetic.conf 2>/dev/null
}
test_synthetic_conf_symlinked() {
/usr/bin/grep -qE "^${NIX_ROOT:1}\t.{3,}$" /etc/synthetic.conf 2>/dev/null
}
test_nix_volume_mountd_installed() {
test -e "$NIX_VOLUME_MOUNTD_DEST"
}
# current volume password
test_keychain_by_uuid() {
local volume_uuid="$1"
# Note: doesn't need sudo just to check; doesn't output pw
security find-generic-password -s "$volume_uuid" &>/dev/null
}
get_volume_pass() {
local volume_uuid="$1"
_sudo \
"to confirm keychain has a password that unlocks this volume" \
security find-generic-password -s "$volume_uuid" -w
}
verify_volume_pass() {
local volume_special="$1" # (i.e., disk1s7)
local volume_uuid="$2"
_sudo "to confirm the password actually unlocks the volume" \
/usr/sbin/diskutil apfs unlockVolume "$volume_special" -verify -stdinpassphrase -user "$volume_uuid"
}
volume_pass_works() {
local volume_special="$1" # (i.e., disk1s7)
local volume_uuid="$2"
get_volume_pass "$volume_uuid" | verify_volume_pass "$volume_special" "$volume_uuid"
}
# Create the paths defined in synthetic.conf, saving us a reboot.
create_synthetic_objects() {
# Big Sur takes away the -B flag we were using and replaces it
# with a -t flag that appears to do the same thing (but they
# don't behave exactly the same way in terms of return values).
# This feels a little dirty, but as far as I can tell the
# simplest way to get the right one is to just throw away stderr
# and call both... :]
{
/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util -t || true # Big Sur
/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util -B || true # Catalina
} >/dev/null 2>&1
}
test_nix() {
test -d "$NIX_ROOT"
}
test_voldaemon() {
test -f "$NIX_VOLUME_MOUNTD_DEST"
}
generate_mount_command() {
local cmd_type="$1" # encrypted|unencrypted
local volume_uuid mountpoint cmd=()
printf -v volume_uuid "%q" "$2"
printf -v mountpoint "%q" "$NIX_ROOT"
case "$cmd_type" in
encrypted)
cmd=(/bin/sh -c "/usr/bin/security find-generic-password -s '$volume_uuid' -w | /usr/sbin/diskutil apfs unlockVolume '$volume_uuid' -mountpoint '$mountpoint' -stdinpassphrase");;
unencrypted)
cmd=(/usr/sbin/diskutil mount -mountPoint "$mountpoint" "$volume_uuid");;
*)
failure "Invalid first arg $cmd_type to generate_mount_command";;
esac
printf " <string>%s</string>\n" "${cmd[@]}"
}
generate_mount_daemon() {
local cmd_type="$1" # encrypted|unencrypted
local volume_uuid="$2"
cat <<EOF
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>RunAtLoad</key>
<true/>
<key>Label</key>
<string>org.nixos.darwin-store</string>
<key>ProgramArguments</key>
<array>
$(generate_mount_command "$cmd_type" "$volume_uuid")
</array>
</dict>
</plist>
EOF
}
_eat_bootout_err() {
/usr/bin/grep -v "Boot-out failed: 36: Operation now in progress"
}
# TODO: remove with --uninstall?
uninstall_launch_daemon_directions() {
local daemon_label="$1" # i.e., org.nixos.blah-blah
local daemon_plist="$2" # abspath
substep "Uninstall LaunchDaemon $daemon_label" \
" sudo launchctl bootout system/$daemon_label" \
" sudo rm $daemon_plist"
}
uninstall_launch_daemon_prompt() {
local daemon_label="$1" # i.e., org.nixos.blah-blah
local daemon_plist="$2" # abspath
local reason_for_daemon="$3"
cat <<EOF
The installer adds a LaunchDaemon to $reason_for_daemon: $daemon_label
EOF
if ui_confirm "Can I remove it?"; then
_sudo "to terminate the daemon" \
launchctl bootout "system/$daemon_label" 2> >(_eat_bootout_err >&2) || true
# this can "fail" with a message like:
# Boot-out failed: 36: Operation now in progress
_sudo "to remove the daemon definition" rm "$daemon_plist"
fi
}
nix_volume_mountd_uninstall_directions() {
uninstall_launch_daemon_directions "org.nixos.darwin-store" \
"$NIX_VOLUME_MOUNTD_DEST"
}
nix_volume_mountd_uninstall_prompt() {
uninstall_launch_daemon_prompt "org.nixos.darwin-store" \
"$NIX_VOLUME_MOUNTD_DEST" \
"mount your Nix volume"
}
# TODO: move nix_daemon to install-darwin-multi-user if/when uninstall_launch_daemon_prompt moves up to install-multi-user
nix_daemon_uninstall_prompt() {
uninstall_launch_daemon_prompt "org.nixos.nix-daemon" \
"$NIX_DAEMON_DEST" \
"run the nix-daemon"
}
# TODO: remove with --uninstall?
nix_daemon_uninstall_directions() {
uninstall_launch_daemon_directions "org.nixos.nix-daemon" \
"$NIX_DAEMON_DEST"
}
# TODO: remove with --uninstall?
synthetic_conf_uninstall_directions() {
# :1 to strip leading slash
substep "Remove ${NIX_ROOT:1} from /etc/synthetic.conf" \
" If nix is the only entry: sudo rm /etc/synthetic.conf" \
" Otherwise: sudo /usr/bin/sed -i '' -e '/^${NIX_ROOT:1}$/d' /etc/synthetic.conf"
}
synthetic_conf_uninstall_prompt() {
cat <<EOF
During install, I add '${NIX_ROOT:1}' to /etc/synthetic.conf, which instructs
macOS to create an empty root directory for mounting the Nix volume.
EOF
# make the edit to a copy
/usr/bin/grep -vE "^${NIX_ROOT:1}($|\t.{3,}$)" /etc/synthetic.conf > "$SCRATCH/synthetic.conf.edit"
if test_synthetic_conf_symlinked; then
warning <<EOF
/etc/synthetic.conf already contains a line instructing your system
to make '${NIX_ROOT}' as a symlink:
$(/usr/bin/grep -nE "^${NIX_ROOT:1}\t.{3,}$" /etc/synthetic.conf)
This may mean your system has/had a non-standard Nix install.
The volume-creation process in this installer is *not* compatible
with a symlinked store, so I'll have to remove this instruction to
continue.
If you want/need to keep this instruction, answer 'n' to abort.
EOF
fi
# ask to rm if this left the file empty aside from comments, else edit
if /usr/bin/diff -q <(:) <(/usr/bin/grep -v "^#" "$SCRATCH/synthetic.conf.edit") &>/dev/null; then
if confirm_rm "/etc/synthetic.conf"; then
if test_nix_root_is_symlink; then
failure >&2 <<EOF
I removed /etc/synthetic.conf, but $NIX_ROOT is already a symlink
(-> $(readlink "$NIX_ROOT")). The system should remove it when you reboot.
Once you've rebooted, run the installer again.
EOF
fi
return 0
fi
else
if confirm_edit "$SCRATCH/synthetic.conf.edit" "/etc/synthetic.conf"; then
if test_nix_root_is_symlink; then
failure >&2 <<EOF
I edited Nix out of /etc/synthetic.conf, but $NIX_ROOT is already a symlink
(-> $(readlink "$NIX_ROOT")). The system should remove it when you reboot.
Once you've rebooted, run the installer again.
EOF
fi
return 0
fi
fi
# fallback instructions
echo "Manually remove nix from /etc/synthetic.conf"
return 1
}
add_nix_vol_fstab_line() {
local uuid="$1"
# shellcheck disable=SC1003,SC2026
local escaped_mountpoint="${NIX_ROOT/ /'\\\'040}"
shift
# wrap `ex` to work around problems w/ vim features breaking exit codes
# - plugins (see github.com/NixOS/nix/issues/5468): -u NONE
# - swap file: -n
#
# the first draft used `--noplugin`, but github.com/NixOS/nix/issues/6462
# suggests we need the less-semantic `-u NONE`
#
# we'd prefer EDITOR="/usr/bin/ex -u NONE" but vifs doesn't word-split
# the EDITOR env.
#
# TODO: at some point we should switch to `--clean`, but it wasn't added
# until https://github.com/vim/vim/releases/tag/v8.0.1554 while the macOS
# minver 10.12.6 seems to have released with vim 7.4
cat > "$SCRATCH/ex_cleanroom_wrapper" <<EOF
#!/bin/sh
/usr/bin/ex -u NONE -n "\$@"
EOF
chmod 755 "$SCRATCH/ex_cleanroom_wrapper"
EDITOR="$SCRATCH/ex_cleanroom_wrapper" _sudo "to add nix to fstab" "$@" <<EOF
:a
UUID=$uuid $escaped_mountpoint apfs rw,noauto,nobrowse,suid,owners
.
:x
EOF
# TODO: preserving my notes on suid,owners above until resolved
# There *may* be some issue regarding volume ownership, see nix#3156
#
# It seems like the cheapest fix is adding "suid,owners" to fstab, but:
# - We don't have much info on this condition yet
# - I'm not certain if these cause other problems?
# - There's a "chown" component some people claim to need to fix this
# that I don't understand yet
# (Note however that I've had to add a chown step to handle
# single->multi-user reinstalls, which may cover this)
#
# I'm not sure if it's safe to approach this way?
#
# I think the most-proper way to test for it is:
# diskutil info -plist "$NIX_VOLUME_LABEL" | xmllint --xpath "(/plist/dict/key[text()='GlobalPermissionsEnabled'])/following-sibling::*[1][name()='true']" -; echo $?
#
# There's also `sudo /usr/sbin/vsdbutil -c /path` (which is much faster, but is also
# deprecated and needs minor parsing).
#
# If no one finds a problem with doing so, I think the simplest approach
# is to just eagerly set this. I found a few imperative approaches:
# (diskutil enableOwnership, ~100ms), a cheap one (/usr/sbin/vsdbutil -a, ~40-50ms),
# a very cheap one (append the internal format to /var/db/volinfo.database).
#
# But vsdbutil's deprecation notice suggests using fstab, so I want to
# give that a whirl first.
#
# TODO: when this is workable, poke infinisil about reproducing the issue
# and confirming this fix?
}
delete_nix_vol_fstab_line() {
# TODO: I'm scaffolding this to handle the new nix volumes
# but it might be nice to generalize a smidge further to
# go ahead and set up a pattern for curing "old" things
# we no longer do?
EDITOR="/usr/bin/patch" _sudo "to cut nix from fstab" "$@" < <(/usr/bin/diff /etc/fstab <(/usr/bin/grep -v "$NIX_ROOT apfs rw" /etc/fstab))
# leaving some parts out of the grep; people may fiddle this a little?
}
# TODO: hope to remove with --uninstall
fstab_uninstall_directions() {
substep "Remove ${NIX_ROOT} from /etc/fstab" \
" If nix is the only entry: sudo rm /etc/fstab" \
" Otherwise, run 'sudo /usr/sbin/vifs' to remove the nix line"
}
fstab_uninstall_prompt() {
cat <<EOF
During install, I add '${NIX_ROOT}' to /etc/fstab so that macOS knows what
mount options to use for the Nix volume.
EOF
cp /etc/fstab "$SCRATCH/fstab.edit"
# technically doesn't need the _sudo path, but throwing away the
# output is probably better than mostly-duplicating the code...
delete_nix_vol_fstab_line patch "$SCRATCH/fstab.edit" &>/dev/null
# if the patch test edit, minus comment lines, is equal to empty (:)
if /usr/bin/diff -q <(:) <(/usr/bin/grep -v "^#" "$SCRATCH/fstab.edit") &>/dev/null; then
# this edit would leave it empty; propose deleting it
if confirm_rm "/etc/fstab"; then
return 0
else
echo "Remove nix from /etc/fstab (or remove the file)"
fi
else
echo "I might be able to help you make this edit. Here's the diff:"
if ! _diff "/etc/fstab" "$SCRATCH/fstab.edit" && ui_confirm "Does the change above look right?"; then
delete_nix_vol_fstab_line /usr/sbin/vifs
else
echo "Remove nix from /etc/fstab (or remove the file)"
fi
fi
}
remove_volume() {
local volume_special="$1" # (i.e., disk1s7)
_sudo "to unmount the Nix volume" \
/usr/sbin/diskutil unmount force "$volume_special" || true # might not be mounted
_sudo "to delete the Nix volume" \
/usr/sbin/diskutil apfs deleteVolume "$volume_special"
}
# aspiration: robust enough to both fix problems
# *and* update older darwin volumes
cure_volume() {
local volume_special="$1" # (i.e., disk1s7)
local volume_uuid="$2"
header "Found existing Nix volume"
row " special" "$volume_special"
row " uuid" "$volume_uuid"
if volume_encrypted "$volume_special"; then
row "encrypted" "yes"
if volume_pass_works "$volume_special" "$volume_uuid"; then
NIX_VOLUME_DO_ENCRYPT=0
ok "Found a working decryption password in keychain :)"
echo ""
else
# - this is a volume we made, and
# - the user encrypted it on their own
# - something deleted the credential
# - this is an old or BYO volume and the pw
# just isn't somewhere we can find it.
#
# We're going to explain why we're freaking out
# and prompt them to either delete the volume
# (requiring a sudo auth), or abort to fix
warning <<EOF
This volume is encrypted, but I don't see a password to decrypt it.
The quick fix is to let me delete this volume and make you a new one.
If that's okay, enter your (sudo) password to continue. If not, you
can ensure the decryption password is in your system keychain with a
"Where" (service) field set to this volume's UUID:
$volume_uuid
EOF
if password_confirm "delete this volume"; then
remove_volume "$volume_special"
else
# TODO: this is a good design case for a warn-and
# remind idiom...
failure <<EOF
Your Nix volume is encrypted, but I couldn't find its password. Either:
- Delete or rename the volume out of the way
- Ensure its decryption password is in the system keychain with a
"Where" (service) field set to this volume's UUID:
$volume_uuid
EOF
fi
fi
elif test_filevault_in_use; then
row "encrypted" "no"
warning <<EOF
FileVault is on, but your $NIX_VOLUME_LABEL volume isn't encrypted.
EOF
# if we're interactive, give them a chance to
# encrypt the volume. If not, /shrug
if ! headless && (( NIX_VOLUME_DO_ENCRYPT == 1 )); then
if ui_confirm "Should I encrypt it and add the decryption key to your keychain?"; then
encrypt_volume "$volume_uuid" "$NIX_VOLUME_LABEL"
NIX_VOLUME_DO_ENCRYPT=0
else
NIX_VOLUME_DO_ENCRYPT=0
reminder "FileVault is on, but your $NIX_VOLUME_LABEL volume isn't encrypted."
fi
fi
else
row "encrypted" "no"
fi
}
remove_volume_artifacts() {
if test_synthetic_conf_either; then
# NIX_ROOT is in synthetic.conf
if synthetic_conf_uninstall_prompt; then
# TODO: moot until we tackle uninstall, but when we're
# actually uninstalling, we should issue:
# reminder "macOS will clean up the empty mount-point directory at $NIX_ROOT on reboot."
:
fi
fi
if test_fstab; then
fstab_uninstall_prompt
fi
if test_nix_volume_mountd_installed; then
nix_volume_mountd_uninstall_prompt
fi
}
setup_synthetic_conf() {
if test_nix_root_is_symlink; then
if ! test_synthetic_conf_symlinked; then
failure >&2 <<EOF
error: $NIX_ROOT is a symlink (-> $(readlink "$NIX_ROOT")).
Please remove it. If nix is in /etc/synthetic.conf, remove it and reboot.
EOF
fi
fi
if ! test_synthetic_conf_mountable; then
task "Configuring /etc/synthetic.conf to make a mount-point at $NIX_ROOT" >&2
# technically /etc/synthetic.d/nix is supported in Big Sur+
# but handling both takes even more code...
# See earlier note; `-u NONE` disables vim plugins/rc, `-n` skips swapfile
_sudo "to add Nix to /etc/synthetic.conf" \
/usr/bin/ex -u NONE -n /etc/synthetic.conf <<EOF
:a
${NIX_ROOT:1}
.
:x
EOF
if ! test_synthetic_conf_mountable; then
failure "error: failed to configure synthetic.conf" >&2
fi
create_synthetic_objects
if ! test_nix; then
failure >&2 <<EOF
error: failed to bootstrap $NIX_ROOT
If you enabled FileVault after booting, this is likely a known issue
with macOS that you'll have to reboot to fix. If you didn't enable FV,
though, please open an issue describing how the system that you see
this error on was set up.
EOF
fi
fi
}
setup_fstab() {
local volume_uuid="$1"
# fstab used to be responsible for mounting the volume. Now the last
# step adds a LaunchDaemon responsible for mounting. This is technically
# redundant for mounting, but diskutil appears to pick up mount options
# from fstab (and diskutil's support for specifying them directly is not
# consistent across versions/subcommands).
if ! test_fstab; then
task "Configuring /etc/fstab to specify volume mount options" >&2
add_nix_vol_fstab_line "$volume_uuid" /usr/sbin/vifs
fi
}
encrypt_volume() {
local volume_uuid="$1"
local volume_label="$2"
local password
task "Encrypt the Nix volume" >&2
# Note: mount/unmount are late additions to support the right order
# of operations for creating the volume and then baking its uuid into
# other artifacts; not as well-trod wrt to potential errors, race
# conditions, etc.
_sudo "to mount your Nix volume for encrypting" \
/usr/sbin/diskutil mount "$volume_label"
password="$(/usr/bin/xxd -l 32 -p -c 256 /dev/random)"
_sudo "to add your Nix volume's password to Keychain" \
/usr/bin/security -i <<EOF
add-generic-password -a "$volume_label" -s "$volume_uuid" -l "$volume_label encryption password" -D "Encrypted volume password" -j "Added automatically by the Nix installer for use by $NIX_VOLUME_MOUNTD_DEST" -w "$password" -T /System/Library/CoreServices/APFSUserAgent -T /System/Library/CoreServices/CSUserAgent -T /usr/bin/security "/Library/Keychains/System.keychain"
EOF
builtin printf "%s" "$password" | _sudo "to actually encrypt your Nix volume" \
/usr/sbin/diskutil apfs encryptVolume "$volume_label" -user disk -stdinpassphrase
_sudo "to unmount the encrypted volume" \
/usr/sbin/diskutil unmount force "$volume_label"
}
create_volume() {
# Notes:
# 1) using `-nomount` instead of `-mountpoint "$NIX_ROOT"` to get
# its UUID and set mount opts in fstab before first mount
#
# 2) system is in some sense less secure than user keychain... (it's
# possible to read the password for decrypting the keychain) but
# the user keychain appears to be available too late. As far as I
# can tell, the file with this password (/var/db/SystemKey) is
# inside the FileVault envelope. If that isn't true, it may make
# sense to store the password inside the envelope?
#
# 3) At some point it would be ideal to have a small binary to serve
# as the daemon itself, and for it to replace /usr/bin/security here.
#
# 4) *UserAgent exemptions should let the system seamlessly supply the
# password if noauto is removed from fstab entry. This is intentional;
# the user will hopefully look for help if the volume stops mounting,
# rather than failing over into subtle race-condition problems.
#
# 5) If we ever get users griping about not having space to do
# anything useful with Nix, it is possible to specify
# `-reserve 10g` or something, which will fail w/o that much
#
# 6) getting special w/ awk may be fragile, but doing it to:
# - save time over running slow diskutil commands
# - skirt risk we grab wrong volume if multiple match
_sudo "to create a new APFS volume '$NIX_VOLUME_LABEL' on $NIX_VOLUME_USE_DISK" \
/usr/sbin/diskutil apfs addVolume "$NIX_VOLUME_USE_DISK" "$NIX_VOLUME_FS" "$NIX_VOLUME_LABEL" -nomount | /usr/bin/awk '/Created new APFS Volume/ {print $5}'
}
volume_uuid_from_special() {
local volume_special="$1" # (i.e., disk1s7)
# For reasons I won't pretend to fathom, this returns 253 when it works
/System/Library/Filesystems/apfs.fs/Contents/Resources/apfs.util -k "$volume_special" || true
}
# this sometimes clears immediately, and AFAIK clears
# within about 1s. diskutil info on an unmounted path
# fails in around 50-100ms and a match takes about
# 250-300ms. I suspect it's usually ~250-750ms
await_volume() {
# caution: this could, in theory, get stuck
until /usr/sbin/diskutil info "$NIX_ROOT" &>/dev/null; do
:
done
}
setup_volume() {
local use_special use_uuid profile_packages
task "Creating a Nix volume" >&2
use_special="${NIX_VOLUME_USE_SPECIAL:-$(create_volume)}"
_sudo "to ensure the Nix volume is not mounted" \
/usr/sbin/diskutil unmount force "$use_special" || true # might not be mounted
use_uuid=${NIX_VOLUME_USE_UUID:-$(volume_uuid_from_special "$use_special")}
setup_fstab "$use_uuid"
if should_encrypt_volume; then
encrypt_volume "$use_uuid" "$NIX_VOLUME_LABEL"
setup_volume_daemon "encrypted" "$use_uuid"
# TODO: might be able to save ~60ms by caching or setting
# this somewhere rather than re-checking here.
elif volume_encrypted "$use_special"; then
setup_volume_daemon "encrypted" "$use_uuid"
else
setup_volume_daemon "unencrypted" "$use_uuid"
fi
await_volume
if [ "$(/usr/sbin/diskutil info -plist "$NIX_ROOT" | xmllint --xpath "(/plist/dict/key[text()='GlobalPermissionsEnabled'])/following-sibling::*[1]" -)" = "<false/>" ]; then
_sudo "to set enableOwnership (enabling users to own files)" \
/usr/sbin/diskutil enableOwnership "$NIX_ROOT"
fi
# TODO: below is a vague kludge for now; I just don't know
# what if any safe action there is to take here. Also, the
# reminder isn't very helpful.
# I'm less sure where this belongs, but it also wants mounted, pre-install
if type -p nix-env; then
profile_packages="$(nix-env --query --installed)"
# TODO: can probably do below faster w/ read
# intentionally unquoted string to eat whitespace in wc output
# shellcheck disable=SC2046,SC2059
if ! [ $(printf "$profile_packages" | /usr/bin/wc -l) = "0" ]; then
reminder <<EOF
Nix now supports only multi-user installs on Darwin/macOS, and your user's
Nix profile has some packages in it. These packages may obscure those in the
default profile, including the Nix this installer will add. You should
review these packages:
$profile_packages
EOF
fi
fi
}
setup_volume_daemon() {
local cmd_type="$1" # encrypted|unencrypted
local volume_uuid="$2"
if ! test_voldaemon; then
task "Configuring LaunchDaemon to mount '$NIX_VOLUME_LABEL'" >&2
# See earlier note; `-u NONE` disables vim plugins/rc, `-n` skips swapfile
_sudo "to install the Nix volume mounter" /usr/bin/ex -u NONE -n "$NIX_VOLUME_MOUNTD_DEST" <<EOF
:a
$(generate_mount_daemon "$cmd_type" "$volume_uuid")
.
:x
EOF
# TODO: should probably alert the user if this is disabled?
_sudo "to launch the Nix volume mounter" \
launchctl bootstrap system "$NIX_VOLUME_MOUNTD_DEST" || true
# TODO: confirm whether kickstart is necessary?
# I feel a little superstitious, but it can guard
# against multiple problems (doesn't start, old
# version still running for some reason...)
_sudo "to launch the Nix volume mounter" \
launchctl kickstart -k system/org.nixos.darwin-store
fi
}
setup_darwin_volume() {
setup_synthetic_conf
setup_volume
}
if [ "$_CREATE_VOLUME_NO_MAIN" = 1 ]; then
if [ -n "$*" ]; then
"$@" # expose functions in case we want multiple routines?
fi
else
# no reason to pay for bash to process this
main() {
{
echo ""
echo " ------------------------------------------------------------------ "
echo " | This installer will create a volume for the nix store and |"
echo " | configure it to mount at $NIX_ROOT. Follow these steps to uninstall. |"
echo " ------------------------------------------------------------------ "
echo ""
echo " 1. Remove the entry from fstab using 'sudo /usr/sbin/vifs'"
echo " 2. Run 'sudo launchctl bootout system/org.nixos.darwin-store'"
echo " 3. Remove $NIX_VOLUME_MOUNTD_DEST"
echo " 4. Destroy the data volume using '/usr/sbin/diskutil apfs deleteVolume'"
echo " 5. Remove the 'nix' line from /etc/synthetic.conf (or the file)"
echo ""
} >&2
setup_darwin_volume
}
main "$@"
fi


@ -1,226 +0,0 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
# System specific settings
export NIX_FIRST_BUILD_UID="${NIX_FIRST_BUILD_UID:-301}"
export NIX_BUILD_USER_NAME_TEMPLATE="_nixbld%d"
readonly NIX_DAEMON_DEST=/Library/LaunchDaemons/org.nixos.nix-daemon.plist
# create the volume by default; set to 0 to DIY, use a symlink, etc.
readonly NIX_VOLUME_CREATE=${NIX_VOLUME_CREATE:-1} # now default
# caution: may update times on / if not run as a normal non-root user
read_only_root() {
# this touch command ~should~ always produce an error
# as of this change I confirmed /usr/bin/touch emits:
# "touch: /: Operation not permitted" Monterey
# "touch: /: Read-only file system" Catalina+ and Big Sur
# "touch: /: Permission denied" Mojave
# (not matching prefix for compat w/ coreutils touch in case using
# an explicit path causes problems; its prefix differs)
case "$(/usr/bin/touch / 2>&1)" in
*"Read-only file system") # Catalina, Big Sur
return 0
;;
*"Operation not permitted") # Monterey
return 0
;;
*)
return 1
;;
esac
# Avoiding the slow semantic way to get this information (~330ms vs ~8ms)
# unless using touch causes problems. Just in case, that approach is:
# diskutil info -plist / | <find the Writable or WritableVolume keys>, i.e.
# diskutil info -plist / | xmllint --xpath "name(/plist/dict/key[text()='Writable']/following-sibling::*[1])" -
}
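The comment above names a slower but more semantic check; spelled out as a standalone sketch (the function name is illustrative, the commands are the ones the comment gives):
```
# Ask diskutil whether / is writable instead of probing with touch (~330ms vs ~8ms).
writable_root_semantic() {
    [ "$(/usr/sbin/diskutil info -plist / \
        | xmllint --xpath "name(/plist/dict/key[text()='Writable']/following-sibling::*[1])" -)" = "true" ]
}
```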
if read_only_root && [ "$NIX_VOLUME_CREATE" = 1 ]; then
should_create_volume() { return 0; }
else
should_create_volume() { return 1; }
fi
# shellcheck source=./create-darwin-volume.sh
. "$EXTRACTED_NIX_PATH/create-darwin-volume.sh" "no-main"
dsclattr() {
/usr/bin/dscl . -read "$1" \
| /usr/bin/awk "/$2/ { print \$2 }"
}
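A brief usage sketch for `dsclattr` (the record names are illustrative): the second argument is used as an awk pattern against `dscl`'s output, and the second field of the matching line is printed.
```
dsclattr "/Users/_nixbld1" "UniqueID"          # e.g. 301
dsclattr "/Users/_nixbld1" "NFSHomeDirectory"
```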
test_nix_daemon_installed() {
test -e "$NIX_DAEMON_DEST"
}
poly_cure_artifacts() {
if should_create_volume; then
task "Fixing any leftover Nix volume state"
cat <<EOF
Before I try to install, I'll check for any existing Nix volume config
and ask for your permission to remove it (so that the installer can
start fresh). I'll also ask for permission to fix any issues I spot.
EOF
cure_volumes
remove_volume_artifacts
fi
}
poly_service_installed_check() {
if should_create_volume; then
test_nix_daemon_installed || test_nix_volume_mountd_installed
else
test_nix_daemon_installed
fi
}
poly_service_uninstall_directions() {
echo "$1. Remove macOS-specific components:"
if should_create_volume && test_nix_volume_mountd_installed; then
nix_volume_mountd_uninstall_directions
fi
if test_nix_daemon_installed; then
nix_daemon_uninstall_directions
fi
}
poly_service_setup_note() {
if should_create_volume; then
echo " - create a Nix volume and a LaunchDaemon to mount it"
fi
echo " - create a LaunchDaemon (at $NIX_DAEMON_DEST) for nix-daemon"
echo ""
}
poly_extra_try_me_commands() {
:
}
poly_configure_nix_daemon_service() {
task "Setting up the nix-daemon LaunchDaemon"
_sudo "to set up the nix-daemon as a LaunchDaemon" \
/usr/bin/install -m "u=rw,go=r" "/nix/var/nix/profiles/default$NIX_DAEMON_DEST" "$NIX_DAEMON_DEST"
_sudo "to load the LaunchDaemon plist for nix-daemon" \
launchctl load /Library/LaunchDaemons/org.nixos.nix-daemon.plist
_sudo "to start the nix-daemon" \
launchctl kickstart -k system/org.nixos.nix-daemon
}
poly_group_exists() {
/usr/bin/dscl . -read "/Groups/$1" > /dev/null 2>&1
}
poly_group_id_get() {
dsclattr "/Groups/$1" "PrimaryGroupID"
}
poly_create_build_group() {
_sudo "Create the Nix build group, $NIX_BUILD_GROUP_NAME" \
/usr/sbin/dseditgroup -o create \
-r "Nix build group for nix-daemon" \
-i "$NIX_BUILD_GROUP_ID" \
"$NIX_BUILD_GROUP_NAME" >&2
}
poly_user_exists() {
/usr/bin/dscl . -read "/Users/$1" > /dev/null 2>&1
}
poly_user_id_get() {
dsclattr "/Users/$1" "UniqueID"
}
poly_user_hidden_get() {
dsclattr "/Users/$1" "IsHidden"
}
poly_user_hidden_set() {
_sudo "in order to make $1 a hidden user" \
/usr/bin/dscl . -create "/Users/$1" "IsHidden" "1"
}
poly_user_home_get() {
dsclattr "/Users/$1" "NFSHomeDirectory"
}
poly_user_home_set() {
# This can trigger a permission prompt now:
# "Terminal" would like to administer your computer. Administration can include modifying passwords, networking, and system settings.
_sudo "in order to give $1 a safe home directory" \
/usr/bin/dscl . -create "/Users/$1" "NFSHomeDirectory" "$2"
}
poly_user_note_get() {
dsclattr "/Users/$1" "RealName"
}
poly_user_note_set() {
_sudo "in order to give $username a useful note" \
/usr/bin/dscl . -create "/Users/$1" "RealName" "$2"
}
poly_user_shell_get() {
dsclattr "/Users/$1" "UserShell"
}
poly_user_shell_set() {
_sudo "in order to give $1 a safe shell" \
/usr/bin/dscl . -create "/Users/$1" "UserShell" "$2"
}
poly_user_in_group_check() {
username=$1
group=$2
/usr/sbin/dseditgroup -o checkmember -m "$username" "$group" > /dev/null 2>&1
}
poly_user_in_group_set() {
username=$1
group=$2
_sudo "Add $username to the $group group"\
/usr/sbin/dseditgroup -o edit -t user \
-a "$username" "$group"
}
poly_user_primary_group_get() {
dsclattr "/Users/$1" "PrimaryGroupID"
}
poly_user_primary_group_set() {
_sudo "to let the nix daemon use this user for builds (this might seem redundant, but there are two concepts of group membership)" \
/usr/bin/dscl . -create "/Users/$1" "PrimaryGroupID" "$2"
}
poly_create_build_user() {
username=$1
uid=$2
builder_num=$3
_sudo "Creating the Nix build user (#$builder_num), $username" \
/usr/bin/dscl . create "/Users/$username" \
UniqueID "${uid}"
}
poly_prepare_to_install() {
if should_create_volume; then
header "Preparing a Nix volume"
# intentional indent below to match task indent
cat <<EOF
Nix traditionally stores its data in the root directory $NIX_ROOT, but
macOS now (starting in 10.15 Catalina) has a read-only root directory.
To support Nix, I will create a volume and configure macOS to mount it
at $NIX_ROOT.
EOF
setup_darwin_volume
fi
if [ "$(/usr/sbin/diskutil info -plist /nix | xmllint --xpath "(/plist/dict/key[text()='GlobalPermissionsEnabled'])/following-sibling::*[1]" -)" = "<false/>" ]; then
failure "This script needs a /nix volume with global permissions! This may require running sudo /usr/sbin/diskutil enableOwnership /nix."
fi
}

File diff suppressed because it is too large


@ -1,284 +0,0 @@
#!/bin/sh
set -e
umask 0022
dest="/nix"
self="$(dirname "$0")"
nix="@nix@"
cacert="@cacert@"
if ! [ -e "$self/.reginfo" ]; then
echo "$0: incomplete installer (.reginfo is missing)" >&2
fi
if [ -z "$USER" ] && ! USER=$(id -u -n); then
echo "$0: \$USER is not set" >&2
exit 1
fi
if [ -z "$HOME" ]; then
echo "$0: \$HOME is not set" >&2
exit 1
fi
# macOS support for 10.12.6 or higher
if [ "$(uname -s)" = "Darwin" ]; then
IFS='.' read -r macos_major macos_minor macos_patch << EOF
$(sw_vers -productVersion)
EOF
if [ "$macos_major" -lt 10 ] || { [ "$macos_major" -eq 10 ] && [ "$macos_minor" -lt 12 ]; } || { [ "$macos_minor" -eq 12 ] && [ "$macos_patch" -lt 6 ]; }; then
# patch may not be present; command substitution for simplicity
echo "$0: macOS $(sw_vers -productVersion) is not supported, upgrade to 10.12.6 or higher"
exit 1
fi
fi
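For clarity, the `IFS='.' read` above splits the dotted version string into three variables; a standalone sketch (the version is illustrative):
```
IFS='.' read -r macos_major macos_minor macos_patch << EOF
10.15.7
EOF
echo "$macos_major/$macos_minor/$macos_patch"   # -> 10/15/7 (patch is empty for e.g. "11.0")
```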
# Determine if we could use the multi-user installer or not
if [ "$(uname -s)" = "Linux" ]; then
echo "Note: a multi-user installation is possible. See https://nixos.org/manual/nix/stable/installation/installing-binary.html#multi-user-installation" >&2
fi
case "$(uname -s)" in
"Darwin")
INSTALL_MODE=daemon;;
*)
INSTALL_MODE=no-daemon;;
esac
# space-separated string
ACTIONS=
# handle the command line flags
while [ $# -gt 0 ]; do
case $1 in
--daemon)
INSTALL_MODE=daemon
ACTIONS="${ACTIONS}install "
;;
--no-daemon)
if [ "$(uname -s)" = "Darwin" ]; then
printf '\e[1;31mError: --no-daemon installs are no longer supported on Darwin/macOS!\e[0m\n' >&2
exit 1
fi
INSTALL_MODE=no-daemon
# intentional tail space
ACTIONS="${ACTIONS}install "
;;
# --uninstall)
# # intentional tail space
# ACTIONS="${ACTIONS}uninstall "
# ;;
--yes)
export NIX_INSTALLER_YES=1;;
--no-channel-add)
export NIX_INSTALLER_NO_CHANNEL_ADD=1;;
--daemon-user-count)
export NIX_USER_COUNT=$2
shift;;
--no-modify-profile)
NIX_INSTALLER_NO_MODIFY_PROFILE=1;;
--darwin-use-unencrypted-nix-store-volume)
{
echo "Warning: the flag --darwin-use-unencrypted-nix-store-volume"
echo " is no longer needed and will be removed in the future."
echo ""
} >&2;;
--nix-extra-conf-file)
# shellcheck disable=SC2155
export NIX_EXTRA_CONF="$(cat "$2")"
shift;;
*)
{
echo "Nix Installer [--daemon|--no-daemon] [--daemon-user-count INT] [--yes] [--no-channel-add] [--no-modify-profile] [--nix-extra-conf-file FILE]"
echo "Choose installation method."
echo ""
echo " --daemon: Installs and configures a background daemon that manages the store,"
echo " providing multi-user support and better isolation for local builds."
echo " Both for security and reproducibility, this method is recommended if"
echo " supported on your platform."
echo " See https://nixos.org/manual/nix/stable/installation/installing-binary.html#multi-user-installation"
echo ""
echo " --no-daemon: Simple, single-user installation that does not require root and is"
echo " trivial to uninstall."
echo " (default)"
echo ""
echo " --yes: Run the script non-interactively, accepting all prompts."
echo ""
echo " --no-channel-add: Don't add any channels. nixpkgs-unstable is installed by default."
echo ""
echo " --no-modify-profile: Don't modify the user profile to automatically load nix."
echo ""
echo " --daemon-user-count: Number of build users to create. Defaults to 32."
echo ""
echo " --nix-extra-conf-file: Path to nix.conf to prepend when installing /etc/nix/nix.conf"
echo ""
if [ -n "${INVOKED_FROM_INSTALL_IN:-}" ]; then
echo " --tarball-url-prefix URL: Base URL to download the Nix tarball from."
fi
} >&2
exit;;
esac
shift
done
if [ "$INSTALL_MODE" = "daemon" ]; then
printf '\e[1;31mSwitching to the Multi-user Installer\e[0m\n'
exec "$self/install-multi-user" $ACTIONS # let ACTIONS split
exit 0
fi
if [ "$(id -u)" -eq 0 ]; then
printf '\e[1;31mwarning: installing Nix as root is not supported by this script!\e[0m\n'
fi
echo "performing a single-user installation of Nix..." >&2
if ! [ -e "$dest" ]; then
cmd="mkdir -m 0755 $dest && chown $USER $dest"
echo "directory $dest does not exist; creating it by running '$cmd' using sudo" >&2
if ! sudo sh -c "$cmd"; then
echo "$0: please manually run '$cmd' as root to create $dest" >&2
exit 1
fi
fi
if ! [ -w "$dest" ]; then
echo "$0: directory $dest exists, but is not writable by you. This could indicate that another user has already performed a single-user installation of Nix on this system. If you wish to enable multi-user support see https://nixos.org/manual/nix/stable/installation/multi-user.html. If you wish to continue with a single-user install for $USER please run 'chown -R $USER $dest' as root." >&2
exit 1
fi
# The auto-chroot code in openFromNonUri() checks for the
# non-existence of /nix/var/nix, so we need to create it here.
mkdir -p "$dest/store" "$dest/var/nix"
printf "copying Nix to %s..." "${dest}/store" >&2
# Insert a newline if no progress is shown.
if [ ! -t 0 ]; then
echo ""
fi
for i in $(cd "$self/store" >/dev/null && echo ./*); do
if [ -t 0 ]; then
printf "." >&2
fi
i_tmp="$dest/store/$i.$$"
if [ -e "$i_tmp" ]; then
rm -rf "$i_tmp"
fi
if ! [ -e "$dest/store/$i" ]; then
cp -RPp "$self/store/$i" "$i_tmp"
chmod -R a-w "$i_tmp"
chmod +w "$i_tmp"
mv "$i_tmp" "$dest/store/$i"
chmod -w "$dest/store/$i"
fi
done
echo "" >&2
if ! "$nix/bin/nix-store" --load-db < "$self/.reginfo"; then
echo "$0: unable to register valid paths" >&2
exit 1
fi
# shellcheck source=./nix-profile.sh.in
. "$nix/etc/profile.d/nix.sh"
NIX_LINK="$HOME/.nix-profile"
if ! "$nix/bin/nix-env" -i "$nix"; then
echo "$0: unable to install Nix into your default profile" >&2
exit 1
fi
# Install an SSL certificate bundle.
if [ -z "$NIX_SSL_CERT_FILE" ] || ! [ -f "$NIX_SSL_CERT_FILE" ]; then
"$nix/bin/nix-env" -i "$cacert"
export NIX_SSL_CERT_FILE="$NIX_LINK/etc/ssl/certs/ca-bundle.crt"
fi
# Subscribe the user to the Nixpkgs channel and fetch it.
if [ -z "$NIX_INSTALLER_NO_CHANNEL_ADD" ]; then
if ! "$nix/bin/nix-channel" --list | grep -q "^nixpkgs "; then
"$nix/bin/nix-channel" --add https://nixos.org/channels/nixpkgs-unstable
fi
if [ -z "$_NIX_INSTALLER_TEST" ]; then
if ! "$nix/bin/nix-channel" --update nixpkgs; then
echo "Fetching the nixpkgs channel failed. (Are you offline?)"
echo "To try again later, run \"nix-channel --update nixpkgs\"."
fi
fi
fi
added=
p=
p_sh=$NIX_LINK/etc/profile.d/nix.sh
p_fish=$NIX_LINK/etc/profile.d/nix.fish
if [ -z "$NIX_INSTALLER_NO_MODIFY_PROFILE" ]; then
# Make the shell source nix.sh during login.
for i in .bash_profile .bash_login .profile; do
fn="$HOME/$i"
if [ -w "$fn" ]; then
if ! grep -q "$p_sh" "$fn"; then
echo "modifying $fn..." >&2
printf '\nif [ -e %s ]; then . %s; fi # added by Nix installer\n' "$p_sh" "$p_sh" >> "$fn"
fi
added=1
p=${p_sh}
break
fi
done
for i in .zshenv .zshrc; do
fn="$HOME/$i"
if [ -w "$fn" ]; then
if ! grep -q "$p_sh" "$fn"; then
echo "modifying $fn..." >&2
printf '\nif [ -e %s ]; then . %s; fi # added by Nix installer\n' "$p_sh" "$p_sh" >> "$fn"
fi
added=1
p=${p_sh}
break
fi
done
if [ -d "$HOME/.config/fish" ]; then
fishdir=$HOME/.config/fish/conf.d
if [ ! -d "$fishdir" ]; then
mkdir -p "$fishdir"
fi
fn="$fishdir/nix.fish"
echo "placing $fn..." >&2
printf '\nif test -e %s; . %s; end # added by Nix installer\n' "$p_fish" "$p_fish" > "$fn"
added=1
p=${p_fish}
fi
else
p=${p_sh}
fi
if [ -z "$added" ]; then
cat >&2 <<EOF
Installation finished! To ensure that the necessary environment
variables are set, please add the line
. $p
to your shell profile (e.g. ~/.profile).
EOF
else
cat >&2 <<EOF
Installation finished! To ensure that the necessary environment
variables are set, either log in again, or type
. $p
in your shell.
EOF
fi


@ -1,222 +0,0 @@
#!/usr/bin/env bash
set -eu
set -o pipefail
# System specific settings
export NIX_FIRST_BUILD_UID="${NIX_FIRST_BUILD_UID:-30001}"
export NIX_BUILD_USER_NAME_TEMPLATE="nixbld%d"
readonly SERVICE_SRC=/lib/systemd/system/nix-daemon.service
readonly SERVICE_DEST=/etc/systemd/system/nix-daemon.service
readonly SOCKET_SRC=/lib/systemd/system/nix-daemon.socket
readonly SOCKET_DEST=/etc/systemd/system/nix-daemon.socket
readonly TMPFILES_SRC=/lib/tmpfiles.d/nix-daemon.conf
readonly TMPFILES_DEST=/etc/tmpfiles.d/nix-daemon.conf
# Path for the systemd override unit file to contain the proxy settings
readonly SERVICE_OVERRIDE=${SERVICE_DEST}.d/override.conf
create_systemd_override() {
header "Configuring proxy for the nix-daemon service"
_sudo "create directory for systemd unit override" mkdir -p "$(dirname "$SERVICE_OVERRIDE")"
cat <<EOF | _sudo "create systemd unit override" tee "$SERVICE_OVERRIDE"
[Service]
$1
EOF
}
escape_systemd_env() {
temp_var="${1//\'/\\\'}"
echo "${temp_var//\%/%%}"
}
# Gather all non-empty proxy environment variables into a string
create_systemd_proxy_env() {
vars="http_proxy https_proxy ftp_proxy no_proxy HTTP_PROXY HTTPS_PROXY FTP_PROXY NO_PROXY"
for v in $vars; do
if [ "x${!v:-}" != "x" ]; then
echo "Environment=${v}=$(escape_systemd_env ${!v})"
fi
done
}
handle_network_proxy() {
# Create a systemd unit override with proxy environment variables
# if any proxy environment variables are not empty.
PROXY_ENV_STRING=$(create_systemd_proxy_env)
if [ -n "${PROXY_ENV_STRING}" ]; then
create_systemd_override "${PROXY_ENV_STRING}"
fi
}
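To illustrate what the proxy handling above produces, a hedged sketch with made-up proxy values; `escape_systemd_env` backslash-escapes single quotes and doubles `%` so the values survive systemd parsing:
```
export http_proxy="http://proxy.example.com:3128"
export NO_PROXY="localhost,127.0.0.1"
create_systemd_proxy_env
# Environment=http_proxy=http://proxy.example.com:3128
# Environment=NO_PROXY=localhost,127.0.0.1
# handle_network_proxy then writes these lines under [Service] in
# /etc/systemd/system/nix-daemon.service.d/override.conf
```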
poly_cure_artifacts() {
:
}
poly_service_installed_check() {
[ "$(systemctl is-enabled nix-daemon.service)" = "linked" ] \
|| [ "$(systemctl is-enabled nix-daemon.socket)" = "enabled" ]
}
poly_service_uninstall_directions() {
cat <<EOF
$1. Delete the systemd service and socket units
sudo systemctl stop nix-daemon.socket
sudo systemctl stop nix-daemon.service
sudo systemctl disable nix-daemon.socket
sudo systemctl disable nix-daemon.service
sudo systemctl daemon-reload
EOF
}
poly_service_setup_note() {
cat <<EOF
- load and start a service (at $SERVICE_DEST
and $SOCKET_DEST) for nix-daemon
EOF
}
poly_extra_try_me_commands() {
if [ -e /run/systemd/system ]; then
:
else
cat <<EOF
$ sudo nix-daemon
EOF
fi
}
poly_configure_nix_daemon_service() {
if [ -e /run/systemd/system ]; then
task "Setting up the nix-daemon systemd service"
_sudo "to create the nix-daemon tmpfiles config" \
ln -sfn "/nix/var/nix/profiles/default$TMPFILES_SRC" "$TMPFILES_DEST"
_sudo "to run systemd-tmpfiles once to pick that path up" \
systemd-tmpfiles --create --prefix=/nix/var/nix
_sudo "to set up the nix-daemon service" \
systemctl link "/nix/var/nix/profiles/default$SERVICE_SRC"
_sudo "to set up the nix-daemon socket service" \
systemctl enable "/nix/var/nix/profiles/default$SOCKET_SRC"
handle_network_proxy
_sudo "to load the systemd unit for nix-daemon" \
systemctl daemon-reload
_sudo "to start the nix-daemon.socket" \
systemctl start nix-daemon.socket
_sudo "to start the nix-daemon.service" \
systemctl restart nix-daemon.service
else
reminder "I don't support your init system yet; you may want to add nix-daemon manually."
fi
}
poly_group_exists() {
getent group "$1" > /dev/null 2>&1
}
poly_group_id_get() {
getent group "$1" | cut -d: -f3
}
poly_create_build_group() {
_sudo "Create the Nix build group, $NIX_BUILD_GROUP_NAME" \
groupadd -g "$NIX_BUILD_GROUP_ID" --system \
"$NIX_BUILD_GROUP_NAME" >&2
}
poly_user_exists() {
getent passwd "$1" > /dev/null 2>&1
}
poly_user_id_get() {
getent passwd "$1" | cut -d: -f3
}
poly_user_hidden_get() {
echo "1"
}
poly_user_hidden_set() {
true
}
poly_user_home_get() {
getent passwd "$1" | cut -d: -f6
}
poly_user_home_set() {
_sudo "in order to give $1 a safe home directory" \
usermod --home "$2" "$1"
}
poly_user_note_get() {
getent passwd "$1" | cut -d: -f5
}
poly_user_note_set() {
_sudo "in order to give $1 a useful comment" \
usermod --comment "$2" "$1"
}
poly_user_shell_get() {
getent passwd "$1" | cut -d: -f7
}
poly_user_shell_set() {
_sudo "in order to prevent $1 from logging in" \
usermod --shell "$2" "$1"
}
poly_user_in_group_check() {
groups "$1" | grep -q "$2" > /dev/null 2>&1
}
poly_user_in_group_set() {
_sudo "Add $1 to the $2 group"\
usermod --append --groups "$2" "$1"
}
poly_user_primary_group_get() {
getent passwd "$1" | cut -d: -f4
}
poly_user_primary_group_set() {
_sudo "to let the nix daemon use this user for builds (this might seem redundant, but there are two concepts of group membership)" \
usermod --gid "$2" "$1"
}
poly_create_build_user() {
username=$1
uid=$2
builder_num=$3
_sudo "Creating the Nix build user, $username" \
useradd \
--home-dir /var/empty \
--comment "Nix build user $builder_num" \
--gid "$NIX_BUILD_GROUP_ID" \
--groups "$NIX_BUILD_GROUP_NAME" \
--no-user-group \
--system \
--shell /sbin/nologin \
--uid "$uid" \
--password "!" \
"$username"
}
poly_prepare_to_install() {
:
}


@ -1,119 +0,0 @@
#!/bin/sh
# This script installs the Nix package manager on your system by
# downloading a binary distribution and running its installer script
# (which in turn creates and populates /nix).
{ # Prevent execution if this script was only partially downloaded
oops() {
echo "$0:" "$@" >&2
exit 1
}
umask 0022
tmpDir="$(mktemp -d -t nix-binary-tarball-unpack.XXXXXXXXXX || \
oops "Can't create temporary directory for downloading the Nix binary tarball")"
cleanup() {
rm -rf "$tmpDir"
}
trap cleanup EXIT INT QUIT TERM
require_util() {
command -v "$1" > /dev/null 2>&1 ||
oops "you do not have '$1' installed, which I need to $2"
}
case "$(uname -s).$(uname -m)" in
Linux.x86_64)
hash=@tarballHash_x86_64-linux@
path=@tarballPath_x86_64-linux@
system=x86_64-linux
;;
Linux.i?86)
hash=@tarballHash_i686-linux@
path=@tarballPath_i686-linux@
system=i686-linux
;;
Linux.aarch64)
hash=@tarballHash_aarch64-linux@
path=@tarballPath_aarch64-linux@
system=aarch64-linux
;;
Linux.armv6l)
hash=@tarballHash_armv6l-linux@
path=@tarballPath_armv6l-linux@
system=armv6l-linux
;;
Linux.armv7l)
hash=@tarballHash_armv7l-linux@
path=@tarballPath_armv7l-linux@
system=armv7l-linux
;;
Darwin.x86_64)
hash=@tarballHash_x86_64-darwin@
path=@tarballPath_x86_64-darwin@
system=x86_64-darwin
;;
Darwin.arm64|Darwin.aarch64)
hash=@tarballHash_aarch64-darwin@
path=@tarballPath_aarch64-darwin@
system=aarch64-darwin
;;
*) oops "sorry, there is no binary distribution of Nix for your platform";;
esac
# Use this command-line option to fetch the tarballs using nar-serve or Cachix
if [ "${1:-}" = "--tarball-url-prefix" ]; then
if [ -z "${2:-}" ]; then
oops "missing argument for --tarball-url-prefix"
fi
url=${2}/${path}
shift 2
else
url=https://releases.nixos.org/nix/nix-@nixVersion@/nix-@nixVersion@-$system.tar.xz
fi
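A hedged usage sketch for the flag handled above (the mirror URL is a placeholder; any remaining arguments are passed through to the unpacked installer):
```
sh ./install --tarball-url-prefix https://my-mirror.example.com/serve --daemon --yes
```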
tarball=$tmpDir/nix-@nixVersion@-$system.tar.xz
require_util tar "unpack the binary tarball"
if [ "$(uname -s)" != "Darwin" ]; then
require_util xz "unpack the binary tarball"
fi
if command -v curl > /dev/null 2>&1; then
fetch() { curl --fail -L "$1" -o "$2"; }
elif command -v wget > /dev/null 2>&1; then
fetch() { wget "$1" -O "$2"; }
else
oops "you don't have wget or curl installed, which I need to download the binary tarball"
fi
echo "downloading Nix @nixVersion@ binary tarball for $system from '$url' to '$tmpDir'..."
fetch "$url" "$tarball" || oops "failed to download '$url'"
if command -v sha256sum > /dev/null 2>&1; then
hash2="$(sha256sum -b "$tarball" | cut -c1-64)"
elif command -v shasum > /dev/null 2>&1; then
hash2="$(shasum -a 256 -b "$tarball" | cut -c1-64)"
elif command -v openssl > /dev/null 2>&1; then
hash2="$(openssl dgst -r -sha256 "$tarball" | cut -c1-64)"
else
oops "cannot verify the SHA-256 hash of '$url'; you need one of 'shasum', 'sha256sum', or 'openssl'"
fi
if [ "$hash" != "$hash2" ]; then
oops "SHA-256 hash mismatch in '$url'; expected $hash, got $hash2"
fi
unpack=$tmpDir/unpack
mkdir -p "$unpack"
tar -xJf "$tarball" -C "$unpack" || oops "failed to unpack '$url'"
script=$(echo "$unpack"/*/install)
[ -e "$script" ] || oops "installation script is missing from the binary tarball!"
export INVOKED_FROM_INSTALL_IN=1
"$script" "$@"
} # End of wrapping

scripts/meson.build (new file, 29 lines)

@ -0,0 +1,29 @@
# configures `scripts/nix-profile.sh.in` (and copies the original to the build directory).
# this is only needed for tests, but running it unconditionally does not hurt enough to care.
configure_file(
input : 'nix-profile.sh.in',
output : 'nix-profile.sh',
configuration : {
'localstatedir': state_dir,
}
)
# https://github.com/mesonbuild/meson/issues/860
configure_file(
input : 'nix-profile.sh.in',
output : 'nix-profile.sh.in',
copy : true,
)
foreach rc : [ '.sh', '.fish', '-daemon.sh', '-daemon.fish' ]
configure_file(
input : 'nix-profile' + rc + '.in',
output : 'nix' + rc,
install : true,
install_dir : sysconfdir / 'profile.d',
install_mode : 'rw-r--r--',
configuration : {
'localstatedir': state_dir,
},
)
endforeach


@ -1,10 +0,0 @@
#!/usr/bin/env bash
set -e
script=$(nix-build -A outputs.hydraJobs.installerScriptForGHA --no-out-link)
installerHash=$(echo "$script" | cut -b12-43 -)
installerURL=https://$CACHIX_NAME.cachix.org/serve/$installerHash/install
echo "::set-output name=installerURL::$installerURL"


@ -126,7 +126,7 @@ static int main_build_remote(int argc, char * * argv)
mkdir(currentLoad.c_str(), 0777);
while (true) {
bestSlotLock = -1;
bestSlotLock.reset();
AutoCloseFD lock = openLockFile(currentLoad + "/main-lock", true);
lockFile(lock.get(), ltWrite, true);
@ -185,16 +185,6 @@ static int main_build_remote(int argc, char * * argv)
std::cerr << "# postpone\n";
else
{
// build the hint template.
std::string errorText =
"Failed to find a machine for remote build!\n"
"derivation: %s\nrequired (system, features): (%s, [%s])";
errorText += "\n%s available machines:";
errorText += "\n(systems, maxjobs, supportedFeatures, mandatoryFeatures)";
for (unsigned int i = 0; i < machines.size(); ++i)
errorText += "\n([%s], %s, [%s], [%s])";
// add the template values.
std::string drvstr;
if (drvPath.has_value())
@ -202,19 +192,30 @@ static int main_build_remote(int argc, char * * argv)
else
drvstr = "<unknown>";
auto error = HintFmt(errorText);
error
% drvstr
% neededSystem
% concatStringsSep<StringSet>(", ", requiredFeatures)
% machines.size();
std::string machinesFormatted;
for (auto & m : machines)
error
% concatStringsSep<StringSet>(", ", m.systemTypes)
% m.maxJobs
% concatStringsSep<StringSet>(", ", m.supportedFeatures)
% concatStringsSep<StringSet>(", ", m.mandatoryFeatures);
for (auto & m : machines) {
machinesFormatted += HintFmt(
"\n([%s], %s, [%s], [%s])",
concatStringsSep<StringSet>(", ", m.systemTypes),
m.maxJobs,
concatStringsSep<StringSet>(", ", m.supportedFeatures),
concatStringsSep<StringSet>(", ", m.mandatoryFeatures)
).str();
}
auto error = HintFmt(
"Failed to find a machine for remote build!\n"
"derivation: %s\n"
"required (system, features): (%s, [%s])\n"
"%s available machines:\n"
"(systems, maxjobs, supportedFeatures, mandatoryFeatures)%s",
drvstr,
neededSystem,
concatStringsSep<StringSet>(", ", requiredFeatures),
machines.size(),
Uncolored(machinesFormatted)
);
printMsg(couldBuildLocally ? lvlChatty : lvlWarn, error.str());
@ -229,7 +230,7 @@ static int main_build_remote(int argc, char * * argv)
futimens(bestSlotLock.get(), NULL);
#endif
lock = -1;
lock.reset();
try {
@ -282,7 +283,7 @@ connected:
copyPaths(*store, *sshStore, store->parseStorePathSet(inputs), NoRepair, NoCheckSigs, substitute);
}
uploadLock = -1;
uploadLock.reset();
auto drv = store->readDerivation(*drvPath);


@ -97,8 +97,6 @@ struct MixFlakeOptions : virtual Args, EvalCommand
{
flake::LockFlags lockFlags;
std::optional<std::string> needsFlakeInputCompletion = {};
MixFlakeOptions();
/**
@ -109,12 +107,8 @@ struct MixFlakeOptions : virtual Args, EvalCommand
* command is operating with (presumably specified via some other
* arguments) so that the completions for these flags can use them.
*/
virtual std::vector<std::string> getFlakesForCompletion()
virtual std::vector<FlakeRef> getFlakeRefsForCompletion()
{ return {}; }
void completeFlakeInput(std::string_view prefix);
void completionHook() override;
};
struct SourceExprCommand : virtual Args, MixFlakeOptions
@ -137,7 +131,13 @@ struct SourceExprCommand : virtual Args, MixFlakeOptions
/**
* Complete an installable from the given prefix.
*/
void completeInstallable(std::string_view prefix);
void completeInstallable(AddCompletions & completions, std::string_view prefix);
/**
* Convenience wrapper around the underlying function to make setting the
* callback easier.
*/
CompleterClosure getCompleteInstallable();
};
/**
@ -170,7 +170,7 @@ struct RawInstallablesCommand : virtual Args, SourceExprCommand
bool readFromStdIn = false;
std::vector<std::string> getFlakesForCompletion() override;
std::vector<FlakeRef> getFlakeRefsForCompletion() override;
private:
@ -199,10 +199,7 @@ struct InstallableCommand : virtual Args, SourceExprCommand
void run(ref<Store> store) override;
std::vector<std::string> getFlakesForCompletion() override
{
return {_installable};
}
std::vector<FlakeRef> getFlakeRefsForCompletion() override;
private:
@ -329,9 +326,16 @@ struct MixEnvironment : virtual Args {
void setEnviron();
};
void completeFlakeRef(ref<Store> store, std::string_view prefix);
void completeFlakeInputPath(
AddCompletions & completions,
ref<EvalState> evalState,
const std::vector<FlakeRef> & flakeRefs,
std::string_view prefix);
void completeFlakeRef(AddCompletions & completions, ref<Store> store, std::string_view prefix);
void completeFlakeRefWithFragment(
AddCompletions & completions,
ref<EvalState> evalState,
flake::LockFlags lockFlags,
Strings attrPathPrefixes,


@ -132,8 +132,8 @@ MixEvalArgs::MixEvalArgs()
if (to.subdir != "") extraAttrs["dir"] = to.subdir;
fetchers::overrideRegistry(from.input, to.input, extraAttrs);
}},
.completer = {[&](size_t, std::string_view prefix) {
completeFlakeRef(openStore(), prefix);
.completer = {[&](AddCompletions & completions, size_t, std::string_view prefix) {
completeFlakeRef(completions, openStore(), prefix);
}}
});
@ -172,7 +172,7 @@ SourcePath lookupFileArg(EvalState & state, std::string_view s)
return state.rootPath(CanonPath(state.store->toRealPath(storePath)));
}
else if (hasPrefix(s, "flake:")) {
else if (s.starts_with("flake:")) {
experimentalFeatureSettings.require(Xp::Flakes);
auto flakeRef = parseFlakeRef(std::string(s.substr(6)), {}, true, false);
auto storePath = flakeRef.resolve(state.store).fetchTree(state.store).first.storePath;


@ -28,6 +28,10 @@ namespace nix {
std::vector<std::string> InstallableFlake::getActualAttrPaths()
{
std::vector<std::string> res;
if (attrPaths.size() == 1 && attrPaths.front().starts_with(".")){
res.push_back(attrPaths.front().substr(1));
return res;
}
for (auto & prefix : prefixes)
res.push_back(prefix + *attrPaths.begin());
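The hunk above lets a leading `.` in the attribute path bypass the default prefixes (such as `packages.<system>.`); a hedged usage sketch with an illustrative flake reference and attribute:
```
# resolved literally as legacyPackages.x86_64-linux.hello, with no prefix guessing
nix build 'nixpkgs#.legacyPackages.x86_64-linux.hello'
```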


@ -28,17 +28,24 @@
namespace nix {
void completeFlakeInputPath(
AddCompletions & completions,
ref<EvalState> evalState,
const std::vector<FlakeRef> & flakeRefs,
std::string_view prefix)
{
for (auto & flakeRef : flakeRefs) {
auto flake = flake::getFlake(*evalState, flakeRef, true);
for (auto & input : flake.inputs)
if (input.first.starts_with(prefix))
completions.add(input.first);
}
}
MixFlakeOptions::MixFlakeOptions()
{
auto category = "Common flake-related options";
addFlag({
.longName = "recreate-lock-file",
.description = "Recreate the flake's lock file from scratch.",
.category = category,
.handler = {&lockFlags.recreateLockFile, true}
});
addFlag({
.longName = "no-update-lock-file",
.description = "Do not allow any updates to the flake's lock file.",
@ -71,19 +78,6 @@ MixFlakeOptions::MixFlakeOptions()
.handler = {&lockFlags.commitLockFile, true}
});
addFlag({
.longName = "update-input",
.description = "Update a specific flake input (ignoring its previous entry in the lock file).",
.category = category,
.labels = {"input-path"},
.handler = {[&](std::string s) {
lockFlags.inputUpdates.insert(flake::parseInputPath(s));
}},
.completer = {[&](size_t, std::string_view prefix) {
needsFlakeInputCompletion = {std::string(prefix)};
}}
});
addFlag({
.longName = "override-input",
.description = "Override a specific flake input (e.g. `dwarffs/nixpkgs`). This implies `--no-write-lock-file`.",
@ -95,11 +89,12 @@ MixFlakeOptions::MixFlakeOptions()
flake::parseInputPath(inputPath),
parseFlakeRef(flakeRef, absPath("."), true));
}},
.completer = {[&](size_t n, std::string_view prefix) {
if (n == 0)
needsFlakeInputCompletion = {std::string(prefix)};
else if (n == 1)
completeFlakeRef(getEvalState()->store, prefix);
.completer = {[&](AddCompletions & completions, size_t n, std::string_view prefix) {
if (n == 0) {
completeFlakeInputPath(completions, getEvalState(), getFlakeRefsForCompletion(), prefix);
} else if (n == 1) {
completeFlakeRef(completions, getEvalState()->store, prefix);
}
}}
});
@ -146,30 +141,12 @@ MixFlakeOptions::MixFlakeOptions()
}
}
}},
.completer = {[&](size_t, std::string_view prefix) {
completeFlakeRef(getEvalState()->store, prefix);
.completer = {[&](AddCompletions & completions, size_t, std::string_view prefix) {
completeFlakeRef(completions, getEvalState()->store, prefix);
}}
});
}
void MixFlakeOptions::completeFlakeInput(std::string_view prefix)
{
auto evalState = getEvalState();
for (auto & flakeRefS : getFlakesForCompletion()) {
auto flakeRef = parseFlakeRefWithFragment(expandTilde(flakeRefS), absPath(".")).first;
auto flake = flake::getFlake(*evalState, flakeRef, true);
for (auto & input : flake.inputs)
if (hasPrefix(input.first, prefix))
completions->add(input.first);
}
}
void MixFlakeOptions::completionHook()
{
if (auto & prefix = needsFlakeInputCompletion)
completeFlakeInput(*prefix);
}
SourceExprCommand::SourceExprCommand()
{
addFlag({
@ -187,6 +164,7 @@ SourceExprCommand::SourceExprCommand()
addFlag({
.longName = "expr",
.shortName = 'E',
.description = "Interpret [*installables*](@docroot@/command-ref/new-cli/nix.md#installables) as attribute paths relative to the Nix expression *expr*.",
.category = installablesCategory,
.labels = {"expr"},
@ -226,11 +204,18 @@ Strings SourceExprCommand::getDefaultFlakeAttrPathPrefixes()
};
}
void SourceExprCommand::completeInstallable(std::string_view prefix)
Args::CompleterClosure SourceExprCommand::getCompleteInstallable()
{
return [this](AddCompletions & completions, size_t, std::string_view prefix) {
completeInstallable(completions, prefix);
};
}
void SourceExprCommand::completeInstallable(AddCompletions & completions, std::string_view prefix)
{
try {
if (file) {
completionType = ctAttrs;
completions.setType(AddCompletions::Type::Attrs);
evalSettings.pureEval = false;
auto state = getEvalState();
@ -265,14 +250,15 @@ void SourceExprCommand::completeInstallable(std::string_view prefix)
std::string name = state->symbols[i.name];
if (name.find(searchWord) == 0) {
if (prefix_ == "")
completions->add(name);
completions.add(name);
else
completions->add(prefix_ + "." + name);
completions.add(prefix_ + "." + name);
}
}
}
} else {
completeFlakeRefWithFragment(
completions,
getEvalState(),
lockFlags,
getDefaultFlakeAttrPathPrefixes(),
@ -285,6 +271,7 @@ void SourceExprCommand::completeInstallable(std::string_view prefix)
}
void completeFlakeRefWithFragment(
AddCompletions & completions,
ref<EvalState> evalState,
flake::LockFlags lockFlags,
Strings attrPathPrefixes,
@ -296,11 +283,16 @@ void completeFlakeRefWithFragment(
try {
auto hash = prefix.find('#');
if (hash == std::string::npos) {
completeFlakeRef(evalState->store, prefix);
completeFlakeRef(completions, evalState->store, prefix);
} else {
completionType = ctAttrs;
completions.setType(AddCompletions::Type::Attrs);
auto fragment = prefix.substr(hash + 1);
std::string prefixRoot = "";
if (fragment.starts_with(".")){
fragment = fragment.substr(1);
prefixRoot = ".";
}
auto flakeRefS = std::string(prefix.substr(0, hash));
auto flakeRef = parseFlakeRef(expandTilde(flakeRefS), absPath("."));
@ -309,6 +301,9 @@ void completeFlakeRefWithFragment(
auto root = evalCache->getRoot();
if (prefixRoot == "."){
attrPathPrefixes.clear();
}
/* Complete 'fragment' relative to all the
attrpath prefixes as well as the root of the
flake. */
@ -320,7 +315,7 @@ void completeFlakeRefWithFragment(
auto attrPath = parseAttrPath(*evalState, attrPathS);
std::string lastAttr;
if (!attrPath.empty() && !hasSuffix(attrPathS, ".")) {
if (!attrPath.empty() && !attrPathS.ends_with(".")) {
lastAttr = evalState->symbols[attrPath.back()];
attrPath.pop_back();
}
@ -329,11 +324,11 @@ void completeFlakeRefWithFragment(
if (!attr) continue;
for (auto & attr2 : (*attr)->getAttrs()) {
if (hasPrefix(evalState->symbols[attr2], lastAttr)) {
if (std::string_view(evalState->symbols[attr2]).starts_with(lastAttr)) {
auto attrPath2 = (*attr)->getAttrPath(attr2);
/* Strip the attrpath prefix. */
attrPath2.erase(attrPath2.begin(), attrPath2.begin() + attrPathPrefix.size());
completions->add(flakeRefS + "#" + concatStringsSep(".", evalState->symbols.resolve(attrPath2)));
completions.add(flakeRefS + "#" + prefixRoot + concatStringsSep(".", evalState->symbols.resolve(attrPath2)));
}
}
}
@ -344,7 +339,7 @@ void completeFlakeRefWithFragment(
for (auto & attrPath : defaultFlakeAttrPaths) {
auto attr = root->findAlongAttrPath(parseAttrPath(*evalState, attrPath));
if (!attr) continue;
completions->add(flakeRefS + "#");
completions.add(flakeRefS + "#" + prefixRoot);
}
}
}
@ -353,27 +348,27 @@ void completeFlakeRefWithFragment(
}
}
void completeFlakeRef(ref<Store> store, std::string_view prefix)
void completeFlakeRef(AddCompletions & completions, ref<Store> store, std::string_view prefix)
{
if (!experimentalFeatureSettings.isEnabled(Xp::Flakes))
return;
if (prefix == "")
completions->add(".");
completions.add(".");
completeDir(0, prefix);
Args::completeDir(completions, 0, prefix);
/* Look for registry entries that match the prefix. */
for (auto & registry : fetchers::getRegistries(store)) {
for (auto & entry : registry->entries) {
auto from = entry.from.to_string();
if (!hasPrefix(prefix, "flake:") && hasPrefix(from, "flake:")) {
if (!prefix.starts_with("flake:") && from.starts_with("flake:")) {
std::string from2(from, 6);
if (hasPrefix(from2, prefix))
completions->add(from2);
if (from2.starts_with(prefix))
completions.add(from2);
} else {
if (hasPrefix(from, prefix))
completions->add(from);
if (from.starts_with(prefix))
completions.add(from);
}
}
}
@ -753,9 +748,7 @@ RawInstallablesCommand::RawInstallablesCommand()
expectArgs({
.label = "installables",
.handler = {&rawInstallables},
.completer = {[&](size_t, std::string_view prefix) {
completeInstallable(prefix);
}}
.completer = getCompleteInstallable(),
});
}
@ -768,6 +761,17 @@ void RawInstallablesCommand::applyDefaultInstallables(std::vector<std::string> &
}
}
std::vector<FlakeRef> RawInstallablesCommand::getFlakeRefsForCompletion()
{
applyDefaultInstallables(rawInstallables);
std::vector<FlakeRef> res;
for (auto i : rawInstallables)
res.push_back(parseFlakeRefWithFragment(
expandTilde(i),
absPath(".")).first);
return res;
}
void RawInstallablesCommand::run(ref<Store> store)
{
if (readFromStdIn && !isatty(STDIN_FILENO)) {
@ -781,10 +785,13 @@ void RawInstallablesCommand::run(ref<Store> store)
run(store, std::move(rawInstallables));
}
std::vector<std::string> RawInstallablesCommand::getFlakesForCompletion()
std::vector<FlakeRef> InstallableCommand::getFlakeRefsForCompletion()
{
applyDefaultInstallables(rawInstallables);
return rawInstallables;
return {
parseFlakeRefWithFragment(
expandTilde(_installable),
absPath(".")).first
};
}
void InstallablesCommand::run(ref<Store> store, std::vector<std::string> && rawInstallables)
@ -800,9 +807,7 @@ InstallableCommand::InstallableCommand()
.label = "installable",
.optional = true,
.handler = {&_installable},
.completer = {[&](size_t, std::string_view prefix) {
completeInstallable(prefix);
}}
.completer = getCompleteInstallable(),
});
}

src/libcmd/meson.build (new file, 73 lines)

@ -0,0 +1,73 @@
libcmd_sources = files(
'built-path.cc',
'command-installable-value.cc',
'command.cc',
'common-eval-args.cc',
'editor-for.cc',
'installable-attr-path.cc',
'installable-derived-path.cc',
'installable-flake.cc',
'installable-value.cc',
'installables.cc',
'legacy.cc',
'markdown.cc',
'repl.cc',
'repl-interacter.cc',
)
libcmd_headers = files(
'built-path.hh',
'command-installable-value.hh',
'command.hh',
'common-eval-args.hh',
'editor-for.hh',
'installable-attr-path.hh',
'installable-derived-path.hh',
'installable-flake.hh',
'installable-value.hh',
'installables.hh',
'legacy.hh',
'markdown.hh',
'repl-interacter.hh',
'repl.hh',
)
libcmd = library(
'nixcmd',
libcmd_sources,
dependencies : [
liblixutil,
liblixstore,
liblixexpr,
liblixfetchers,
liblixmain,
boehm,
editline,
lowdown,
nlohmann_json,
],
install : true,
# FIXME(Qyriad): is this right?
install_rpath : libdir,
)
install_headers(libcmd_headers, subdir : 'nix', preserve_path : true)
liblixcmd = declare_dependency(
include_directories : '.',
link_with : libcmd,
)
# FIXME: not using the pkg-config module because it creates way too many deps
# while meson migration is in progress, and we don't want to include boost here
configure_file(
input : 'nix-cmd.pc.in',
output : 'nix-cmd.pc',
install_dir : libdir / 'pkgconfig',
configuration : {
'prefix' : prefix,
'libdir' : libdir,
'includedir' : includedir,
'PACKAGE_VERSION' : meson.project_version(),
},
)


@ -0,0 +1,208 @@
#include <cstdio>
#include <iostream>
#include <string>
#ifdef READLINE
#include <readline/history.h>
#include <readline/readline.h>
#else
// editline < 1.15.2 doesn't wrap its API for C++ usage
// (added in https://github.com/troglobit/editline/commit/91398ceb3427b730995357e9d120539fb9bb7461).
// This results in linker errors due to name-mangling of editline C symbols.
// For compatibility with these versions, we wrap the API here
// (wrapping multiple times on newer versions is no problem).
extern "C" {
#include <editline.h>
}
#endif
#include "signals.hh"
#include "finally.hh"
#include "repl-interacter.hh"
#include "util.hh"
#include "repl.hh"
namespace nix {
namespace {
// Used to communicate to NixRepl::getLine whether a signal occurred in ::readline.
volatile sig_atomic_t g_signal_received = 0;
void sigintHandler(int signo)
{
g_signal_received = signo;
}
};
static detail::ReplCompleterMixin * curRepl; // ugly
static char * completionCallback(char * s, int * match)
{
auto possible = curRepl->completePrefix(s);
if (possible.size() == 1) {
*match = 1;
auto * res = strdup(possible.begin()->c_str() + strlen(s));
if (!res)
throw Error("allocation failure");
return res;
} else if (possible.size() > 1) {
auto checkAllHaveSameAt = [&](size_t pos) {
auto & first = *possible.begin();
for (auto & p : possible) {
if (p.size() <= pos || p[pos] != first[pos])
return false;
}
return true;
};
size_t start = strlen(s);
size_t len = 0;
while (checkAllHaveSameAt(start + len))
++len;
if (len > 0) {
*match = 1;
auto * res = strdup(std::string(*possible.begin(), start, len).c_str());
if (!res)
throw Error("allocation failure");
return res;
}
}
*match = 0;
return nullptr;
}
static int listPossibleCallback(char * s, char *** avp)
{
auto possible = curRepl->completePrefix(s);
if (possible.size() > (INT_MAX / sizeof(char *)))
throw Error("too many completions");
int ac = 0;
char ** vp = nullptr;
auto check = [&](auto * p) {
if (!p) {
if (vp) {
while (--ac >= 0)
free(vp[ac]);
free(vp);
}
throw Error("allocation failure");
}
return p;
};
vp = check((char **) malloc(possible.size() * sizeof(char *)));
for (auto & p : possible)
vp[ac++] = check(strdup(p.c_str()));
*avp = vp;
return ac;
}
ReadlineLikeInteracter::Guard ReadlineLikeInteracter::init(detail::ReplCompleterMixin * repl)
{
// Allow nix-repl specific settings in .inputrc
rl_readline_name = "nix-repl";
try {
createDirs(dirOf(historyFile));
} catch (SysError & e) {
logWarning(e.info());
}
#ifndef READLINE
el_hist_size = 1000;
#endif
read_history(historyFile.c_str());
auto oldRepl = curRepl;
curRepl = repl;
Guard restoreRepl([oldRepl] { curRepl = oldRepl; });
#ifndef READLINE
rl_set_complete_func(completionCallback);
rl_set_list_possib_func(listPossibleCallback);
#endif
return restoreRepl;
}
static constexpr const char * promptForType(ReplPromptType promptType)
{
switch (promptType) {
case ReplPromptType::ReplPrompt:
return "nix-repl> ";
case ReplPromptType::ContinuationPrompt:
return " ";
}
assert(false);
}
bool ReadlineLikeInteracter::getLine(std::string & input, ReplPromptType promptType)
{
struct sigaction act, old;
sigset_t savedSignalMask, set;
auto setupSignals = [&]() {
act.sa_handler = sigintHandler;
sigfillset(&act.sa_mask);
act.sa_flags = 0;
if (sigaction(SIGINT, &act, &old))
throw SysError("installing handler for SIGINT");
sigemptyset(&set);
sigaddset(&set, SIGINT);
if (sigprocmask(SIG_UNBLOCK, &set, &savedSignalMask))
throw SysError("unblocking SIGINT");
};
auto restoreSignals = [&]() {
if (sigprocmask(SIG_SETMASK, &savedSignalMask, nullptr))
throw SysError("restoring signals");
if (sigaction(SIGINT, &old, 0))
throw SysError("restoring handler for SIGINT");
};
setupSignals();
char * s = readline(promptForType(promptType));
Finally doFree([&]() { free(s); });
restoreSignals();
if (g_signal_received) {
g_signal_received = 0;
input.clear();
return true;
}
if (!s)
return false;
input += s;
input += '\n';
return true;
}
ReadlineLikeInteracter::~ReadlineLikeInteracter()
{
write_history(historyFile.c_str());
}
AutomationInteracter::Guard AutomationInteracter::init(detail::ReplCompleterMixin *)
{
return Guard([] {});
}
// ASCII ENQ character
constexpr const char * automationPrompt = "\x05";
bool AutomationInteracter::getLine(std::string & input, ReplPromptType promptType)
{
std::cout << std::unitbuf;
std::cout << automationPrompt;
if (!std::getline(std::cin, input)) {
// reset failure bits on EOF
std::cin.clear();
return false;
}
return true;
}
};


@ -0,0 +1,57 @@
#pragma once
/// @file
#include "finally.hh"
#include "types.hh"
#include <functional>
#include <string>
namespace nix {
namespace detail {
/** Provides the completion hooks for the repl, without exposing its complete
* internals. */
struct ReplCompleterMixin {
virtual StringSet completePrefix(const std::string & prefix) = 0;
};
};
enum class ReplPromptType {
ReplPrompt,
ContinuationPrompt,
};
class ReplInteracter
{
public:
using Guard = Finally<std::function<void()>>;
virtual Guard init(detail::ReplCompleterMixin * repl) = 0;
/** Reads a line of input; returns false when the interacter hit EOF. */
virtual bool getLine(std::string & input, ReplPromptType promptType) = 0;
virtual ~ReplInteracter(){};
};
class ReadlineLikeInteracter : public virtual ReplInteracter
{
std::string historyFile;
public:
ReadlineLikeInteracter(std::string historyFile)
: historyFile(historyFile)
{
}
virtual Guard init(detail::ReplCompleterMixin * repl) override;
virtual bool getLine(std::string & input, ReplPromptType promptType) override;
virtual ~ReadlineLikeInteracter() override;
};
class AutomationInteracter : public virtual ReplInteracter
{
public:
AutomationInteracter() = default;
virtual Guard init(detail::ReplCompleterMixin * repl) override;
virtual bool getLine(std::string & input, ReplPromptType promptType) override;
virtual ~AutomationInteracter() override = default;
};
};


@ -3,22 +3,8 @@
#include <cstring>
#include <climits>
#include <setjmp.h>
#ifdef READLINE
#include <readline/history.h>
#include <readline/readline.h>
#else
// editline < 1.15.2 don't wrap their API for C++ usage
// (added in https://github.com/troglobit/editline/commit/91398ceb3427b730995357e9d120539fb9bb7461).
// This results in linker errors due to to name-mangling of editline C symbols.
// For compatibility with these versions, we wrap the API here
// (wrapping multiple times on newer versions is no problem).
extern "C" {
#include <editline.h>
}
#endif
#include "box_ptr.hh"
#include "repl-interacter.hh"
#include "repl.hh"
#include "ansicolor.hh"
@ -28,6 +14,7 @@ extern "C" {
#include "eval-inline.hh"
#include "eval-settings.hh"
#include "attr-path.hh"
#include "signals.hh"
#include "store-api.hh"
#include "log-store.hh"
#include "common-eval-args.hh"
@ -40,7 +27,9 @@ extern "C" {
#include "finally.hh"
#include "markdown.hh"
#include "local-fs-store.hh"
#include "signals.hh"
#include "print.hh"
#include "progress-bar.hh"
#if HAVE_BOEHMGC
#define GC_INCLUDE_NEW
@ -49,8 +38,30 @@ extern "C" {
namespace nix {
/**
* Returned by `NixRepl::processLine`.
*/
enum class ProcessLineResult {
/**
* The user exited with `:quit`. The REPL should exit. The surrounding
* program or evaluation (e.g., if the REPL was acting as the debugger)
* should also exit.
*/
Quit,
/**
* The user exited with `:continue`. The REPL should exit, but the program
* should continue running.
*/
Continue,
/**
* The user did not exit. The REPL should request another line of input.
*/
PromptAgain,
};
struct NixRepl
: AbstractNixRepl
, detail::ReplCompleterMixin
#if HAVE_BOEHMGC
, gc
#endif
@ -66,19 +77,18 @@ struct NixRepl
int displ;
StringSet varNames;
const Path historyFile;
box_ptr<ReplInteracter> interacter;
NixRepl(const SearchPath & searchPath, nix::ref<Store> store,ref<EvalState> state,
std::function<AnnotatedValues()> getValues);
virtual ~NixRepl();
virtual ~NixRepl() = default;
void mainLoop() override;
ReplExitStatus mainLoop() override;
void initEnv() override;
StringSet completePrefix(const std::string & prefix);
bool getLine(std::string & input, const std::string & prompt);
virtual StringSet completePrefix(const std::string & prefix) override;
StorePath getDerivationPath(Value & v);
bool processLine(std::string line);
ProcessLineResult processLine(std::string line);
void loadFile(const Path & path);
void loadFlake(const std::string & flakeRef);
@ -98,7 +108,8 @@ struct NixRepl
.ansiColors = true,
.force = true,
.derivationPaths = true,
.maxDepth = maxDepth
.maxDepth = maxDepth,
.prettyIndent = 2
});
}
};
@ -111,6 +122,12 @@ std::string removeWhitespace(std::string s)
return s;
}
static box_ptr<ReplInteracter> makeInteracter() {
if (experimentalFeatureSettings.isEnabled(Xp::ReplAutomation))
return make_box_ptr<AutomationInteracter>();
else
return make_box_ptr<ReadlineLikeInteracter>(getDataDir() + "/nix/repl-history");
}
NixRepl::NixRepl(const SearchPath & searchPath, nix::ref<Store> store, ref<EvalState> state,
std::function<NixRepl::AnnotatedValues()> getValues)
@ -118,16 +135,10 @@ NixRepl::NixRepl(const SearchPath & searchPath, nix::ref<Store> store, ref<EvalS
, debugTraceIndex(0)
, getValues(getValues)
, staticEnv(new StaticEnv(nullptr, state->staticBaseEnv.get()))
, historyFile(getDataDir() + "/nix/repl-history")
, interacter(makeInteracter())
{
}
NixRepl::~NixRepl()
{
write_history(historyFile.c_str());
}
void runNix(Path program, const Strings & args,
const std::optional<std::string> & input = {})
{
@ -144,79 +155,6 @@ void runNix(Path program, const Strings & args,
return;
}
static NixRepl * curRepl; // ugly
static char * completionCallback(char * s, int *match) {
auto possible = curRepl->completePrefix(s);
if (possible.size() == 1) {
*match = 1;
auto *res = strdup(possible.begin()->c_str() + strlen(s));
if (!res) throw Error("allocation failure");
return res;
} else if (possible.size() > 1) {
auto checkAllHaveSameAt = [&](size_t pos) {
auto &first = *possible.begin();
for (auto &p : possible) {
if (p.size() <= pos || p[pos] != first[pos])
return false;
}
return true;
};
size_t start = strlen(s);
size_t len = 0;
while (checkAllHaveSameAt(start + len)) ++len;
if (len > 0) {
*match = 1;
auto *res = strdup(std::string(*possible.begin(), start, len).c_str());
if (!res) throw Error("allocation failure");
return res;
}
}
*match = 0;
return nullptr;
}
static int listPossibleCallback(char *s, char ***avp) {
auto possible = curRepl->completePrefix(s);
if (possible.size() > (INT_MAX / sizeof(char*)))
throw Error("too many completions");
int ac = 0;
char **vp = nullptr;
auto check = [&](auto *p) {
if (!p) {
if (vp) {
while (--ac >= 0)
free(vp[ac]);
free(vp);
}
throw Error("allocation failure");
}
return p;
};
vp = check((char **)malloc(possible.size() * sizeof(char*)));
for (auto & p : possible)
vp[ac++] = check(strdup(p.c_str()));
*avp = vp;
return ac;
}
namespace {
// Used to communicate to NixRepl::getLine whether a signal occurred in ::readline.
volatile sig_atomic_t g_signal_received = 0;
void sigintHandler(int signo) {
g_signal_received = signo;
}
}
static std::ostream & showDebugTrace(std::ostream & out, const PosTable & positions, const DebugTrace & dt)
{
if (dt.isError)
@ -242,56 +180,50 @@ static std::ostream & showDebugTrace(std::ostream & out, const PosTable & positi
static bool isFirstRepl = true;
void NixRepl::mainLoop()
ReplExitStatus NixRepl::mainLoop()
{
if (isFirstRepl) {
std::string_view debuggerNotice = "";
if (state->debugRepl) {
debuggerNotice = " debugger";
}
notice("Nix %1%%2%\nType :? for help.", nixVersion, debuggerNotice);
notice("Lix %1%%2%\nType :? for help.", nixVersion, debuggerNotice);
}
isFirstRepl = false;
loadFiles();
// Allow nix-repl specific settings in .inputrc
rl_readline_name = "nix-repl";
try {
createDirs(dirOf(historyFile));
} catch (SysError & e) {
logWarning(e.info());
}
#ifndef READLINE
el_hist_size = 1000;
#endif
read_history(historyFile.c_str());
auto oldRepl = curRepl;
curRepl = this;
Finally restoreRepl([&] { curRepl = oldRepl; });
#ifndef READLINE
rl_set_complete_func(completionCallback);
rl_set_list_possib_func(listPossibleCallback);
#endif
auto _guard = interacter->init(static_cast<detail::ReplCompleterMixin *>(this));
/* Stop the progress bar because it interferes with the display of
the repl. */
stopProgressBar();
std::string input;
while (true) {
// Hide the progress bar while waiting for user input, so that it won't interfere.
logger->pause();
// When continuing input from previous lines, don't print a prompt, just align to the same
// number of chars as the prompt.
if (!getLine(input, input.empty() ? "nix-repl> " : " ")) {
// ctrl-D should exit the debugger.
if (!interacter->getLine(input, input.empty() ? ReplPromptType::ReplPrompt : ReplPromptType::ContinuationPrompt)) {
// Ctrl-D should exit the debugger.
state->debugStop = false;
state->debugQuit = true;
logger->cout("");
break;
// TODO: Should Ctrl-D exit just the current debugger session or
// the entire program?
return ReplExitStatus::QuitAll;
}
logger->resume();
try {
if (!removeWhitespace(input).empty() && !processLine(input)) return;
switch (processLine(input)) {
case ProcessLineResult::Quit:
return ReplExitStatus::QuitAll;
case ProcessLineResult::Continue:
return ReplExitStatus::Continue;
case ProcessLineResult::PromptAgain:
break;
default:
abort();
}
} catch (ParseError & e) {
if (e.msg().find("unexpected end of file") != std::string::npos) {
// For parse errors on incomplete input, we continue waiting for the next line of
@ -321,51 +253,6 @@ void NixRepl::mainLoop()
}
}
bool NixRepl::getLine(std::string & input, const std::string & prompt)
{
struct sigaction act, old;
sigset_t savedSignalMask, set;
auto setupSignals = [&]() {
act.sa_handler = sigintHandler;
sigfillset(&act.sa_mask);
act.sa_flags = 0;
if (sigaction(SIGINT, &act, &old))
throw SysError("installing handler for SIGINT");
sigemptyset(&set);
sigaddset(&set, SIGINT);
if (sigprocmask(SIG_UNBLOCK, &set, &savedSignalMask))
throw SysError("unblocking SIGINT");
};
auto restoreSignals = [&]() {
if (sigprocmask(SIG_SETMASK, &savedSignalMask, nullptr))
throw SysError("restoring signals");
if (sigaction(SIGINT, &old, 0))
throw SysError("restoring handler for SIGINT");
};
setupSignals();
char * s = readline(prompt.c_str());
Finally doFree([&]() { free(s); });
restoreSignals();
if (g_signal_received) {
g_signal_received = 0;
input.clear();
return true;
}
if (!s)
return false;
input += s;
input += '\n';
return true;
}
StringSet NixRepl::completePrefix(const std::string & prefix)
{
StringSet completions;
@ -387,7 +274,7 @@ StringSet NixRepl::completePrefix(const std::string & prefix)
auto dir = std::string(cur, 0, slash);
auto prefix2 = std::string(cur, slash + 1);
for (auto & entry : readDirectory(dir == "" ? "/" : dir)) {
if (entry.name[0] != '.' && hasPrefix(entry.name, prefix2))
if (entry.name[0] != '.' && entry.name.starts_with(prefix2))
completions.insert(prev + dir + "/" + entry.name);
}
} catch (Error &) {
@ -478,10 +365,11 @@ void NixRepl::loadDebugTraceEnv(DebugTrace & dt)
}
}
bool NixRepl::processLine(std::string line)
ProcessLineResult NixRepl::processLine(std::string line)
{
line = trim(line);
if (line == "") return true;
if (line.empty())
return ProcessLineResult::PromptAgain;
_isInterrupted = false;
@ -576,13 +464,13 @@ bool NixRepl::processLine(std::string line)
else if (state->debugRepl && (command == ":s" || command == ":step")) {
// set flag to stop at next DebugTrace; exit repl.
state->debugStop = true;
return false;
return ProcessLineResult::Continue;
}
else if (state->debugRepl && (command == ":c" || command == ":continue")) {
// set flag to run to next breakpoint or end of program; exit repl.
state->debugStop = false;
return false;
return ProcessLineResult::Continue;
}
else if (command == ":a" || command == ":add") {
@ -725,8 +613,7 @@ bool NixRepl::processLine(std::string line)
else if (command == ":q" || command == ":quit") {
state->debugStop = false;
state->debugQuit = true;
return false;
return ProcessLineResult::Quit;
}
else if (command == ":doc") {
@ -787,7 +674,7 @@ bool NixRepl::processLine(std::string line)
}
}
return true;
return ProcessLineResult::PromptAgain;
}
void NixRepl::loadFile(const Path & path)
@ -918,7 +805,7 @@ std::unique_ptr<AbstractNixRepl> AbstractNixRepl::create(
}
void AbstractNixRepl::runSimple(
ReplExitStatus AbstractNixRepl::runSimple(
ref<EvalState> evalState,
const ValMap & extraEnv)
{
@ -940,7 +827,7 @@ void AbstractNixRepl::runSimple(
for (auto & [name, value] : extraEnv)
repl->addVarToScope(repl->state->symbols.create(name), *value);
repl->mainLoop();
return repl->mainLoop();
}
}


@ -3,11 +3,6 @@
#include "eval.hh"
#if HAVE_BOEHMGC
#define GC_INCLUDE_NEW
#include <gc/gc_cpp.h>
#endif
namespace nix {
struct AbstractNixRepl
@ -28,13 +23,13 @@ struct AbstractNixRepl
const SearchPath & searchPath, nix::ref<Store> store, ref<EvalState> state,
std::function<AnnotatedValues()> getValues);
static void runSimple(
static ReplExitStatus runSimple(
ref<EvalState> evalState,
const ValMap & extraEnv);
virtual void initEnv() = 0;
virtual void mainLoop() = 0;
virtual ReplExitStatus mainLoop() = 0;
};
}

View file

@ -28,15 +28,7 @@ template<class T>
EvalErrorBuilder<T> & EvalErrorBuilder<T>::withTrace(PosIdx pos, const std::string_view text)
{
error.err.traces.push_front(
Trace{.pos = error.state.positions[pos], .hint = HintFmt(std::string(text)), .frame = false});
return *this;
}
template<class T>
EvalErrorBuilder<T> & EvalErrorBuilder<T>::withFrameTrace(PosIdx pos, const std::string_view text)
{
error.err.traces.push_front(
Trace{.pos = error.state.positions[pos], .hint = HintFmt(std::string(text)), .frame = true});
Trace{.pos = error.state.positions[pos], .hint = HintFmt(std::string(text))});
return *this;
}
@ -63,9 +55,9 @@ EvalErrorBuilder<T> & EvalErrorBuilder<T>::withFrame(const Env & env, const Expr
}
template<class T>
EvalErrorBuilder<T> & EvalErrorBuilder<T>::addTrace(PosIdx pos, HintFmt hint, bool frame)
EvalErrorBuilder<T> & EvalErrorBuilder<T>::addTrace(PosIdx pos, HintFmt hint)
{
error.addTrace(error.state.positions[pos], hint, frame);
error.addTrace(error.state.positions[pos], hint);
return *this;
}

View file

@ -90,7 +90,7 @@ public:
[[nodiscard, gnu::noinline]] EvalErrorBuilder<T> & withFrame(const Env & e, const Expr & ex);
[[nodiscard, gnu::noinline]] EvalErrorBuilder<T> & addTrace(PosIdx pos, HintFmt hint, bool frame = false);
[[nodiscard, gnu::noinline]] EvalErrorBuilder<T> & addTrace(PosIdx pos, HintFmt hint);
template<typename... Args>
[[nodiscard, gnu::noinline]] EvalErrorBuilder<T> &

View file

@ -29,7 +29,7 @@ static Strings parseNixPath(const std::string & s)
if (*p == ':') {
auto prefix = std::string(start2, s.end());
if (EvalSettings::isPseudoUrl(prefix) || hasPrefix(prefix, "flake:")) {
if (EvalSettings::isPseudoUrl(prefix) || prefix.starts_with("flake:")) {
++p;
while (p != s.end() && *p != ':') ++p;
}
@ -82,7 +82,7 @@ bool EvalSettings::isPseudoUrl(std::string_view s)
std::string EvalSettings::resolvePseudoUrl(std::string_view url)
{
if (hasPrefix(url, "channel:"))
if (url.starts_with("channel:"))
return "https://nixos.org/channels/" + std::string(url.substr(8)) + "/nixexprs.tar.xz";
else
return std::string(url);

View file

@ -114,6 +114,16 @@ struct EvalSettings : Config
Setting<unsigned int> maxCallDepth{this, 10000, "max-call-depth",
"The maximum function call depth to allow before erroring."};
Setting<bool> builtinsTraceDebugger{this, false, "debugger-on-trace",
R"(
If set to true and the `--debugger` flag is given,
[`builtins.trace`](@docroot@/language/builtins.md#builtins-trace) will
enter the debugger like
[`builtins.break`](@docroot@/language/builtins.md#builtins-break).
This is useful for debugging warnings in third-party Nix code.
)"};
};
extern EvalSettings evalSettings;
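The setting documented above only matters in combination with `--debugger`; a minimal, self-contained toy of that interaction is sketched below. It is not the real primop implementation (which is not part of this diff), and every name in it (`ToyEvalSettings`, `ToyEvalState`, `trace`) is illustrative only.

```
#include <functional>
#include <iostream>
#include <string>

// Toy model of the behaviour documented for `debugger-on-trace`:
// trace always prints its message, and additionally invokes a debugger
// hook when (a) the option is enabled and (b) a debugger is attached
// (i.e. the REPL was started with --debugger). Names are illustrative,
// not the real Lix API.
struct ToyEvalSettings {
    bool builtinsTraceDebugger = false; // corresponds to `debugger-on-trace`
};

struct ToyEvalState {
    ToyEvalSettings settings;
    std::function<void()> debugRepl; // non-empty only when --debugger was passed

    void trace(const std::string & msg)
    {
        std::cerr << "trace: " << msg << "\n";
        if (settings.builtinsTraceDebugger && debugRepl)
            debugRepl(); // the same kind of entry point builtins.break uses
    }
};

int main()
{
    ToyEvalState state;
    state.settings.builtinsTraceDebugger = true;
    state.debugRepl = [] { std::cout << "(entering toy debugger)\n"; };
    state.trace("something suspicious happened");
}
```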

View file

@ -3,6 +3,7 @@
#include "hash.hh"
#include "primops.hh"
#include "print-options.hh"
#include "shared.hh"
#include "types.hh"
#include "util.hh"
#include "store-api.hh"
@ -391,7 +392,6 @@ EvalState::EvalState(
, buildStore(buildStore ? buildStore : store)
, debugRepl(nullptr)
, debugStop(false)
, debugQuit(false)
, trylevel(0)
, regexCache(makeRegexCache())
#if HAVE_BOEHMGC
@ -484,7 +484,7 @@ SourcePath EvalState::checkSourcePath(const SourcePath & path_)
*/
Path abspath = canonPath(path_.path.abs());
if (hasPrefix(abspath, corepkgsPrefix)) return CanonPath(abspath);
if (abspath.starts_with(corepkgsPrefix)) return CanonPath(abspath);
for (auto & i : *allowedPaths) {
if (isDirOrInDir(abspath, i)) {
@ -527,18 +527,18 @@ void EvalState::checkURI(const std::string & uri)
if (uri == prefix ||
(uri.size() > prefix.size()
&& prefix.size() > 0
&& hasPrefix(uri, prefix)
&& uri.starts_with(prefix)
&& (prefix[prefix.size() - 1] == '/' || uri[prefix.size()] == '/')))
return;
/* If the URI is a path, then check it against allowedPaths as
well. */
if (hasPrefix(uri, "/")) {
if (uri.starts_with("/")) {
checkSourcePath(CanonPath(uri));
return;
}
if (hasPrefix(uri, "file://")) {
if (uri.starts_with("file://")) {
checkSourcePath(CanonPath(std::string(uri, 7)));
return;
}
@ -642,7 +642,7 @@ Value * EvalState::addPrimOp(PrimOp && primOp)
}
auto envName = symbols.create(primOp.name);
if (hasPrefix(primOp.name, "__"))
if (primOp.name.starts_with("__"))
primOp.name = primOp.name.substr(2);
Value * v = allocValue();
@ -677,12 +677,21 @@ std::optional<EvalState::Doc> EvalState::getDoc(Value & v)
}
static std::set<std::string_view> sortedBindingNames(const SymbolTable & st, const StaticEnv & se)
{
std::set<std::string_view> bindings;
for (auto [symbol, displ] : se.vars)
bindings.emplace(st[symbol]);
return bindings;
}
// just for the current level of StaticEnv, not the whole chain.
void printStaticEnvBindings(const SymbolTable & st, const StaticEnv & se)
{
std::cout << ANSI_MAGENTA;
for (auto & i : se.vars)
std::cout << st[i.first] << " ";
for (auto & i : sortedBindingNames(st, se))
std::cout << i << " ";
std::cout << ANSI_NORMAL;
std::cout << std::endl;
}
@ -691,13 +700,14 @@ void printStaticEnvBindings(const SymbolTable & st, const StaticEnv & se)
void printWithBindings(const SymbolTable & st, const Env & env)
{
if (!env.values[0]->isThunk()) {
std::set<std::string_view> bindings;
for (const auto & attr : *env.values[0]->attrs)
bindings.emplace(st[attr.name]);
std::cout << "with: ";
std::cout << ANSI_MAGENTA;
Bindings::iterator j = env.values[0]->attrs->begin();
while (j != env.values[0]->attrs->end()) {
std::cout << st[j->name] << " ";
++j;
}
for (auto & i : bindings)
std::cout << i << " ";
std::cout << ANSI_NORMAL;
std::cout << std::endl;
}
@ -718,9 +728,9 @@ void printEnvBindings(const SymbolTable & st, const StaticEnv & se, const Env &
std::cout << ANSI_MAGENTA;
// for the top level, don't print the double underscore ones;
// they are in builtins.
for (auto & i : se.vars)
if (!hasPrefix(st[i.first], "__"))
std::cout << st[i.first] << " ";
for (auto & i : sortedBindingNames(st, se))
if (!i.starts_with("__"))
std::cout << i << " ";
std::cout << ANSI_NORMAL;
std::cout << std::endl;
if (se.isWith)
@ -798,18 +808,30 @@ void EvalState::runDebugRepl(const Error * error, const Env & env, const Expr &
auto se = getStaticEnv(expr);
if (se) {
auto vm = mapStaticEnvBindings(symbols, *se.get(), env);
(debugRepl)(ref<EvalState>(shared_from_this()), *vm);
auto exitStatus = (debugRepl)(ref<EvalState>(shared_from_this()), *vm);
switch (exitStatus) {
case ReplExitStatus::QuitAll:
if (error)
throw *error;
throw Exit(0);
case ReplExitStatus::Continue:
break;
default:
abort();
}
}
}
void EvalState::addErrorTrace(Error & e, const char * s, const std::string & s2) const
template<typename... Args>
void EvalState::addErrorTrace(Error & e, const Args & ... formatArgs) const
{
e.addTrace(nullptr, s, s2);
e.addTrace(nullptr, HintFmt(formatArgs...));
}
void EvalState::addErrorTrace(Error & e, const PosIdx pos, const char * s, const std::string & s2, bool frame) const
template<typename... Args>
void EvalState::addErrorTrace(Error & e, const PosIdx pos, const Args & ... formatArgs) const
{
e.addTrace(positions[pos], HintFmt(s, s2), frame);
e.addTrace(positions[pos], HintFmt(formatArgs...));
}
template<typename... Args>
@ -928,12 +950,11 @@ void EvalState::mkThunk_(Value & v, Expr * expr)
void EvalState::mkPos(Value & v, PosIdx p)
{
auto pos = positions[p];
if (auto path = std::get_if<SourcePath>(&pos.origin)) {
auto origin = positions.originOf(p);
if (auto path = std::get_if<SourcePath>(&origin)) {
auto attrs = buildBindings(3);
attrs.alloc(sFile).mkString(path->path.abs());
attrs.alloc(sLine).mkInt(pos.line);
attrs.alloc(sColumn).mkInt(pos.column);
makePositionThunks(*this, p, attrs.alloc(sLine), attrs.alloc(sColumn));
v.mkAttrs(attrs);
} else
v.mkNull();
@ -1205,6 +1226,18 @@ void ExprPath::eval(EvalState & state, Env & env, Value & v)
}
Env * ExprAttrs::buildInheritFromEnv(EvalState & state, Env & up)
{
Env & inheritEnv = state.allocEnv(inheritFromExprs->size());
inheritEnv.up = &up;
Displacement displ = 0;
for (auto from : *inheritFromExprs)
inheritEnv.values[displ++] = from->maybeThunk(state, up);
return &inheritEnv;
}
void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
{
v.mkAttrs(state.buildBindings(attrs.size() + dynamicAttrs.size()).finish());
@ -1216,6 +1249,7 @@ void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
Env & env2(state.allocEnv(attrs.size()));
env2.up = &env;
dynamicEnv = &env2;
Env * inheritEnv = inheritFromExprs ? buildInheritFromEnv(state, env2) : nullptr;
AttrDefs::iterator overrides = attrs.find(state.sOverrides);
bool hasOverrides = overrides != attrs.end();
@ -1226,11 +1260,11 @@ void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
Displacement displ = 0;
for (auto & i : attrs) {
Value * vAttr;
if (hasOverrides && !i.second.inherited) {
if (hasOverrides && i.second.kind != AttrDef::Kind::Inherited) {
vAttr = state.allocValue();
mkThunk(*vAttr, env2, i.second.e);
mkThunk(*vAttr, *i.second.chooseByKind(&env2, &env, inheritEnv), i.second.e);
} else
vAttr = i.second.e->maybeThunk(state, i.second.inherited ? env : env2);
vAttr = i.second.e->maybeThunk(state, *i.second.chooseByKind(&env2, &env, inheritEnv));
env2.values[displ++] = vAttr;
v.attrs->push_back(Attr(i.first, vAttr, i.second.pos));
}
@ -1262,9 +1296,15 @@ void ExprAttrs::eval(EvalState & state, Env & env, Value & v)
}
}
else
for (auto & i : attrs)
v.attrs->push_back(Attr(i.first, i.second.e->maybeThunk(state, env), i.second.pos));
else {
Env * inheritEnv = inheritFromExprs ? buildInheritFromEnv(state, env) : nullptr;
for (auto & i : attrs) {
v.attrs->push_back(Attr(
i.first,
i.second.e->maybeThunk(state, *i.second.chooseByKind(&env, &env, inheritEnv)),
i.second.pos));
}
}
/* Dynamic attrs apply *after* rec and __overrides. */
for (auto & i : dynamicAttrs) {
@ -1296,12 +1336,17 @@ void ExprLet::eval(EvalState & state, Env & env, Value & v)
Env & env2(state.allocEnv(attrs->attrs.size()));
env2.up = &env;
Env * inheritEnv = attrs->inheritFromExprs ? attrs->buildInheritFromEnv(state, env2) : nullptr;
/* The recursive attributes are evaluated in the new environment,
while the inherited attributes are evaluated in the original
environment. */
Displacement displ = 0;
for (auto & i : attrs->attrs)
env2.values[displ++] = i.second.e->maybeThunk(state, i.second.inherited ? env : env2);
for (auto & i : attrs->attrs) {
env2.values[displ++] = i.second.e->maybeThunk(
state,
*i.second.chooseByKind(&env2, &env, inheritEnv));
}
auto dts = state.debugRepl
? makeDebugTraceStacker(
@ -1596,9 +1641,8 @@ void EvalState::callFunction(Value & fun, size_t nrArgs, Value * * args, Value &
"while calling %s",
lambda.name
? concatStrings("'", symbols[lambda.name], "'")
: "anonymous lambda",
true);
if (pos) addErrorTrace(e, pos, "from call site%s", "", true);
: "anonymous lambda");
if (pos) addErrorTrace(e, pos, "from call site");
}
throw;
}
@ -2710,9 +2754,12 @@ Expr * EvalState::parseExprFromFile(const SourcePath & path, std::shared_ptr<Sta
Expr * EvalState::parseExprFromString(std::string s_, const SourcePath & basePath, std::shared_ptr<StaticEnv> & staticEnv)
{
auto s = make_ref<std::string>(std::move(s_));
s->append("\0\0", 2);
return parse(s->data(), s->size(), Pos::String{.source = s}, basePath, staticEnv);
// NOTE this method (and parseStdin) must take care to *fully copy* their input
// into their respective Pos::Origin until the parser stops overwriting its input
// data.
auto s = make_ref<std::string>(s_);
s_.append("\0\0", 2);
return parse(s_.data(), s_.size(), Pos::String{.source = s}, basePath, staticEnv);
}
@ -2724,12 +2771,15 @@ Expr * EvalState::parseExprFromString(std::string s, const SourcePath & basePath
Expr * EvalState::parseStdin()
{
// NOTE this method (and parseExprFromString) must take care to *fully copy* their
// input into their respective Pos::Origin until the parser stops overwriting its
// input data.
//Activity act(*logger, lvlTalkative, "parsing standard input");
auto buffer = drainFD(0);
// drainFD should have left some extra space for terminators
auto s = make_ref<std::string>(buffer);
buffer.append("\0\0", 2);
auto s = make_ref<std::string>(std::move(buffer));
return parse(s->data(), s->size(), Pos::Stdin{.source = s}, rootPath(CanonPath::fromCwd()), staticBaseEnv);
return parse(buffer.data(), buffer.size(), Pos::Stdin{.source = s}, rootPath(CanonPath::fromCwd()), staticBaseEnv);
}
@ -2755,7 +2805,7 @@ SourcePath EvalState::findFile(const SearchPath & searchPath, const std::string_
if (pathExists(res)) return CanonPath(canonPath(res));
}
if (hasPrefix(path, "nix/"))
if (path.starts_with("nix/"))
return CanonPath(concatStrings(corepkgsPrefix, path.substr(4)));
error<ThrownError>(
@ -2788,7 +2838,7 @@ std::optional<std::string> EvalState::resolveSearchPathPath(const SearchPath::Pa
}
}
else if (hasPrefix(value, "flake:")) {
else if (value.starts_with("flake:")) {
experimentalFeatureSettings.require(Xp::Flakes);
auto flakeRef = parseFlakeRef(value.substr(6), {}, true, false);
debug("fetching flake search path element '%s''", value);

View file

@ -11,6 +11,7 @@
#include "experimental-features.hh"
#include "input-accessor.hh"
#include "search-path.hh"
#include "repl-exit-status.hh"
#include <map>
#include <optional>
@ -207,9 +208,8 @@ public:
/**
* Debugger
*/
void (* debugRepl)(ref<EvalState> es, const ValMap & extraEnv);
ReplExitStatus (* debugRepl)(ref<EvalState> es, const ValMap & extraEnv);
bool debugStop;
bool debugQuit;
int trylevel;
std::list<DebugTrace> debugTraces;
std::map<const Expr*, const std::shared_ptr<const StaticEnv>> exprEnvs;
@ -434,10 +434,12 @@ public:
std::string_view forceString(Value & v, NixStringContext & context, const PosIdx pos, std::string_view errorCtx);
std::string_view forceStringNoCtx(Value & v, const PosIdx pos, std::string_view errorCtx);
template<typename... Args>
[[gnu::noinline]]
void addErrorTrace(Error & e, const char * s, const std::string & s2) const;
void addErrorTrace(Error & e, const Args & ... formatArgs) const;
template<typename... Args>
[[gnu::noinline]]
void addErrorTrace(Error & e, const PosIdx pos, const char * s, const std::string & s2, bool frame = false) const;
void addErrorTrace(Error & e, const PosIdx pos, const Args & ... formatArgs) const;
public:
/**
@ -753,7 +755,6 @@ struct DebugTraceStacker {
DebugTraceStacker(EvalState & evalState, DebugTrace t);
~DebugTraceStacker()
{
// assert(evalState.debugTraces.front() == trace);
evalState.debugTraces.pop_front();
}
EvalState & evalState;

View file

@ -35,7 +35,7 @@ void ConfigFile::apply()
for (auto & [name, value] : settings) {
auto baseName = hasPrefix(name, "extra-") ? std::string(name, 6) : name;
auto baseName = name.starts_with("extra-") ? std::string(name, 6) : name;
// FIXME: Move into libutil/config.cc.
std::string valueS;

View file

@ -227,11 +227,10 @@ static Flake getFlake(
.sourceInfo = std::make_shared<fetchers::Tree>(std::move(sourceInfo))
};
// NOTE evalFile forces vInfo to be an attrset because mustBeTrivial is true.
Value vInfo;
state.evalFile(CanonPath(flakeFile), vInfo, true); // FIXME: symlink attack
expectType(state, nAttrs, vInfo, state.positions.add({CanonPath(flakeFile)}, 1, 1));
if (auto description = vInfo.attrs->get(state.sDescription)) {
expectType(state, nString, *description->value, description->pos);
flake.description = description->value->string.s;
@ -452,8 +451,8 @@ LockedFlake lockFlake(
assert(input.ref);
/* Do we have an entry in the existing lock file? And we
don't have a --update-input flag for this input? */
/* Do we have an entry in the existing lock file?
And the input is not in updateInputs? */
std::shared_ptr<LockedNode> oldLock;
updatesUsed.insert(inputPath);
@ -477,9 +476,8 @@ LockedFlake lockFlake(
node->inputs.insert_or_assign(id, childNode);
/* If we have an --update-input flag for an input
of this input, then we must fetch the flake to
update it. */
/* If we have this input in updateInputs, then we
must fetch the flake to update it. */
auto lb = lockFlags.inputUpdates.lower_bound(inputPath);
auto mustRefetch =
@ -621,19 +619,14 @@ LockedFlake lockFlake(
for (auto & i : lockFlags.inputUpdates)
if (!updatesUsed.count(i))
warn("the flag '--update-input %s' does not match any input", printInputPath(i));
warn("'%s' does not match any input of this flake", printInputPath(i));
/* Check 'follows' inputs. */
newLockFile.check();
debug("new lock file: %s", newLockFile);
auto relPath = (topRef.subdir == "" ? "" : topRef.subdir + "/") + "flake.lock";
auto sourcePath = topRef.input.getSourcePath();
auto outputLockFilePath = sourcePath ? std::optional{*sourcePath + "/" + relPath} : std::nullopt;
if (lockFlags.outputLockFilePath) {
outputLockFilePath = lockFlags.outputLockFilePath;
}
/* Check whether we need to / can write the new lock file. */
if (newLockFile != oldLockFile || lockFlags.outputLockFilePath) {
@ -641,7 +634,7 @@ LockedFlake lockFlake(
auto diff = LockFile::diff(oldLockFile, newLockFile);
if (lockFlags.writeLockFile) {
if (outputLockFilePath) {
if (sourcePath || lockFlags.outputLockFilePath) {
if (auto unlockedInput = newLockFile.isUnlocked()) {
if (fetchSettings.warnDirty)
warn("will not write lock file of flake '%s' because it has an unlocked input ('%s')", topRef, *unlockedInput);
@ -649,41 +642,48 @@ LockedFlake lockFlake(
if (!lockFlags.updateLockFile)
throw Error("flake '%s' requires lock file changes but they're not allowed due to '--no-update-lock-file'", topRef);
bool lockFileExists = pathExists(*outputLockFilePath);
auto newLockFileS = fmt("%s\n", newLockFile);
if (lockFlags.outputLockFilePath) {
if (lockFlags.commitLockFile)
throw Error("'--commit-lock-file' and '--output-lock-file' are incompatible");
writeFile(*lockFlags.outputLockFilePath, newLockFileS);
} else {
auto relPath = (topRef.subdir == "" ? "" : topRef.subdir + "/") + "flake.lock";
auto outputLockFilePath = *sourcePath + "/" + relPath;
bool lockFileExists = pathExists(outputLockFilePath);
if (lockFileExists) {
auto s = chomp(diff);
if (s.empty())
warn("updating lock file '%s'", *outputLockFilePath);
else
warn("updating lock file '%s':\n%s", *outputLockFilePath, s);
} else
warn("creating lock file '%s'", *outputLockFilePath);
if (lockFileExists) {
if (s.empty())
warn("updating lock file '%s'", outputLockFilePath);
else
warn("updating lock file '%s':\n%s", outputLockFilePath, s);
} else
warn("creating lock file '%s':\n%s", outputLockFilePath, s);
newLockFile.write(*outputLockFilePath);
std::optional<std::string> commitMessage = std::nullopt;
std::optional<std::string> commitMessage = std::nullopt;
if (lockFlags.commitLockFile) {
if (lockFlags.outputLockFilePath) {
throw Error("--commit-lock-file and --output-lock-file are currently incompatible");
}
std::string cm;
if (lockFlags.commitLockFile) {
std::string cm;
cm = fetchSettings.commitLockFileSummary.get();
cm = fetchSettings.commitLockFileSummary.get();
if (cm == "") {
cm = fmt("%s: %s", relPath, lockFileExists ? "Update" : "Add");
if (cm == "") {
cm = fmt("%s: %s", relPath, lockFileExists ? "Update" : "Add");
}
cm += "\n\nFlake lock file updates:\n\n";
cm += filterANSIEscapes(diff, true);
commitMessage = cm;
}
cm += "\n\nFlake lock file updates:\n\n";
cm += filterANSIEscapes(diff, true);
commitMessage = cm;
topRef.input.putFile(
CanonPath((topRef.subdir == "" ? "" : topRef.subdir + "/") + "flake.lock"),
newLockFileS, commitMessage);
}
topRef.input.markChangedFile(
(topRef.subdir == "" ? "" : topRef.subdir + "/") + "flake.lock",
commitMessage);
/* Rewriting the lockfile changed the top-level
repo, so we should re-read it. FIXME: we could
also just clear the 'rev' field... */

View file

@ -186,7 +186,7 @@ std::pair<FlakeRef, std::string> parseFlakeRefWithFragment(
}
} else {
if (!hasPrefix(path, "/"))
if (!path.starts_with("/"))
throw BadURL("flake reference '%s' is not an absolute path", url);
auto query = decodeQuery(match[2]);
path = canonPath(path + "/" + getOr(query, "dir", ""));

View file

@ -0,0 +1,8 @@
libexpr_generated_headers += custom_target(
command : [ 'bash', '-c', 'echo \'R"__NIX_STR(\' | cat - @INPUT@ && echo \')__NIX_STR"\'' ],
input : 'call-flake.nix',
output : '@PLAINNAME@.gen.hh',
capture : true,
install : true,
install_dir : includedir / 'nix/flake',
)
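For context, the `bash -c` command above simply wraps the input file in a C++ raw string literal, so the generated `@PLAINNAME@.gen.hh` can be `#include`d wherever the file's text is needed as a string constant. Below is a self-contained toy of the result and its use; the embedded expression and the `callFlakeNix` name are placeholders, not the real generated contents.

```
#include <iostream>

// What @PLAINNAME@.gen.hh roughly contains after the echo/cat pipeline:
// a raw string literal wrapping the original file, so consuming code can do
//     const char * callFlakeNix =
//       #include "call-flake.nix.gen.hh"
//       ;
// Here the literal is written inline; its contents are a stand-in only.
static const char callFlakeNix[] =
    R"__NIX_STR(
{ lockFileStr, rootSrc, rootSubdir }: builtins.trace "placeholder" null
)__NIX_STR";

int main()
{
    std::cout << callFlakeNix;
}
```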

View file

@ -4,6 +4,7 @@
#if HAVE_BOEHMGC
#define GC_INCLUDE_NEW
#include <gc/gc.h>
#include <gc/gc_cpp.h>
#include <gc/gc_allocator.h>
@ -39,4 +40,4 @@ using SmallValueVector = SmallVector<Value *, nItems>;
template <size_t nItems>
using SmallTemporaryValueVector = SmallVector<Value, nItems>;
}
}

View file

@ -296,22 +296,16 @@ void DrvInfo::setMeta(const std::string & name, Value * v)
typedef std::set<Bindings *> Done;
/* Evaluate value `v'. If it evaluates to a set of type `derivation',
then put information about it in `drvs' (unless it's already in `done').
The result boolean indicates whether it makes sense
/* The result boolean indicates whether it makes sense
for the caller to recursively search for derivations in `v'. */
static bool getDerivation(EvalState & state, Value & v,
const std::string & attrPath, DrvInfos & drvs, Done & done,
const std::string & attrPath, DrvInfos & drvs,
bool ignoreAssertionFailures)
{
try {
state.forceValue(v, v.determinePos(noPos));
if (!state.isDerivation(v)) return true;
/* Remove spurious duplicates (e.g., a set like `rec { x =
derivation {...}; y = x;}'. */
if (!done.insert(v.attrs).second) return false;
DrvInfo drv(state, attrPath, v.attrs);
drv.queryName();
@ -330,9 +324,8 @@ static bool getDerivation(EvalState & state, Value & v,
std::optional<DrvInfo> getDerivation(EvalState & state, Value & v,
bool ignoreAssertionFailures)
{
Done done;
DrvInfos drvs;
getDerivation(state, v, "", drvs, done, ignoreAssertionFailures);
getDerivation(state, v, "", drvs, ignoreAssertionFailures);
if (drvs.size() != 1) return {};
return std::move(drvs.front());
}
@ -347,6 +340,9 @@ static std::string addToPath(const std::string & s1, const std::string & s2)
static std::regex attrRegex("[A-Za-z_][A-Za-z0-9-_+]*");
/* Evaluate value `v'. If it evaluates to a set of type `derivation',
then put information about it in `drvs'. If it evaluates to a different
kind of set recurse (unless it's already in `done'). */
static void getDerivations(EvalState & state, Value & vIn,
const std::string & pathPrefix, Bindings & autoArgs,
DrvInfos & drvs, Done & done,
@ -356,10 +352,14 @@ static void getDerivations(EvalState & state, Value & vIn,
state.autoCallFunction(autoArgs, vIn, v);
/* Process the expression. */
if (!getDerivation(state, v, pathPrefix, drvs, done, ignoreAssertionFailures)) ;
if (!getDerivation(state, v, pathPrefix, drvs, ignoreAssertionFailures)) ;
else if (v.type() == nAttrs) {
/* Dont consider sets we've already seen, e.g. y in
`rec { x.d = derivation {...}; y = x; }`. */
if (!done.insert(v.attrs).second) return;
/* !!! undocumented hackery to support combining channels in
nix-env.cc. */
bool combineChannels = v.attrs->find(state.symbols.create("_combineChannels")) != v.attrs->end();
@ -376,7 +376,7 @@ static void getDerivations(EvalState & state, Value & vIn,
std::string pathPrefix2 = addToPath(pathPrefix, state.symbols[i->name]);
if (combineChannels)
getDerivations(state, *i->value, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
else if (getDerivation(state, *i->value, pathPrefix2, drvs, done, ignoreAssertionFailures)) {
else if (getDerivation(state, *i->value, pathPrefix2, drvs, ignoreAssertionFailures)) {
/* If the value of this attribute is itself a set,
should we recurse into it? => Only if it has a
`recurseForDerivations = true' attribute. */
@ -390,9 +390,11 @@ static void getDerivations(EvalState & state, Value & vIn,
}
else if (v.type() == nList) {
// NOTE we can't really deduplicate here because small lists don't have stable addresses
// and can cause spurious duplicate detections due to v being on the stack.
for (auto [n, elem] : enumerate(v.listItems())) {
std::string pathPrefix2 = addToPath(pathPrefix, fmt("%d", n));
if (getDerivation(state, *elem, pathPrefix2, drvs, done, ignoreAssertionFailures))
if (getDerivation(state, *elem, pathPrefix2, drvs, ignoreAssertionFailures))
getDerivations(state, *elem, pathPrefix2, autoArgs, drvs, done, ignoreAssertionFailures);
}
}
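The NOTE above can be seen in isolation with a small toy: deduplicating by object address (as the removed `Done` set in `getDerivation` effectively did) misfires for short-lived stack values, because distinct values routinely reuse the same address. Everything below is illustrative; it models the pitfall, not the real `DrvInfo` code.

```
#include <cstdio>
#include <set>

// Toy model of the pitfall: dedup by address, the way the removed code
// deduplicated attrsets via `std::set<Bindings *> done`. When items live in
// short-lived stack slots, different items commonly end up at the same
// address, so address-based dedup reports false duplicates. (The address
// reuse is implementation behaviour, but it is what happens in practice and
// is why the list case above cannot deduplicate.)
static std::set<const void *> seen;

static void visit(const int & item)
{
    if (!seen.insert(&item).second) {
        std::printf("skipping %d: address already seen (spurious duplicate)\n", item);
        return;
    }
    std::printf("visiting %d\n", item);
}

int main()
{
    for (int i = 0; i < 3; i++) {
        int element = 100 + i; // the same stack slot is typically reused,
        visit(element);        // so 101 and 102 look like duplicates of 100
    }
}
```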

View file

@ -39,33 +39,16 @@ namespace nix {
static void initLoc(YYLTYPE * loc)
{
loc->first_line = loc->last_line = 1;
loc->first_column = loc->last_column = 1;
loc->first_line = loc->last_line = 0;
loc->first_column = loc->last_column = 0;
}
static void adjustLoc(YYLTYPE * loc, const char * s, size_t len)
{
loc->stash();
loc->first_line = loc->last_line;
loc->first_column = loc->last_column;
for (size_t i = 0; i < len; i++) {
switch (*s++) {
case '\r':
if (*s == '\n') { /* cr/lf */
i++;
s++;
}
/* fall through */
case '\n':
++loc->last_line;
loc->last_column = 1;
break;
default:
++loc->last_column;
}
}
loc->last_column += len;
}

src/libexpr/meson.build (new file, 176 lines)
View file

@ -0,0 +1,176 @@
parser_tab = custom_target(
input : 'parser.y',
output : [
'parser-tab.cc',
'parser-tab.hh',
],
command : [
'bison',
'-v',
'-o',
'@OUTPUT0@',
'@INPUT@',
'-d',
],
# NOTE(Qyriad): Meson doesn't support installing only part of a custom target, so we add
# an install script below which removes parser-tab.cc.
install : true,
install_dir : includedir / 'nix',
)
lexer_tab = custom_target(
input : [
'lexer.l',
parser_tab,
],
output : [
'lexer-tab.cc',
'lexer-tab.hh',
],
command : [
'flex',
'--outfile',
'@OUTPUT0@',
'--header-file=' + '@OUTPUT1@',
'@INPUT0@',
],
# NOTE(Qyriad): Meson doesn't support installing only part of a custom target, so we add
# an install script below which removes lexer-tab.cc.
install : true,
install_dir : includedir / 'nix',
)
# TODO(Qyriad): When the parser and lexer are rewritten this should be removed.
# NOTE(Qyriad): We do this this way instead of an inline bash or rm command
# due to subtleties in Meson. Check the comments in cleanup-install.bash for details.
meson.add_install_script(
bash,
meson.project_source_root() / 'meson/cleanup-install.bash',
'@0@'.format(includedir),
)
libexpr_generated_headers = [
gen_header.process('primops/derivation.nix', preserve_path_from : meson.current_source_dir()),
]
foreach header : [ 'imported-drv-to-derivation.nix', 'fetchurl.nix' ]
libexpr_generated_headers += custom_target(
command : [ 'bash', '-c', 'echo \'R"__NIX_STR(\' | cat - @INPUT@ && echo \')__NIX_STR"\'' ],
input : header,
output : '@PLAINNAME@.gen.hh',
capture : true,
install : true,
install_dir : includedir / 'nix',
)
endforeach
subdir('flake')
libexpr_sources = files(
'attr-path.cc',
'attr-set.cc',
'eval-cache.cc',
'eval-error.cc',
'eval-settings.cc',
'eval.cc',
'function-trace.cc',
'get-drvs.cc',
'json-to-value.cc',
'nixexpr.cc',
'paths.cc',
'primops.cc',
'print-ambiguous.cc',
'print.cc',
'search-path.cc',
'value-to-json.cc',
'value-to-xml.cc',
'flake/config.cc',
'flake/flake.cc',
'flake/flakeref.cc',
'flake/lockfile.cc',
'primops/context.cc',
'primops/fetchClosure.cc',
'primops/fetchMercurial.cc',
'primops/fetchTree.cc',
'primops/fromTOML.cc',
'value/context.cc',
)
libexpr_headers = files(
'attr-path.hh',
'attr-set.hh',
'eval-cache.hh',
'eval-error.hh',
'eval-inline.hh',
'eval-settings.hh',
'eval.hh',
'flake/flake.hh',
'flake/flakeref.hh',
'flake/lockfile.hh',
'function-trace.hh',
'gc-small-vector.hh',
'get-drvs.hh',
'json-to-value.hh',
'nixexpr.hh',
'parser-state.hh',
'pos-idx.hh',
'pos-table.hh',
'primops.hh',
'print-ambiguous.hh',
'print-options.hh',
'print.hh',
'repl-exit-status.hh',
'search-path.hh',
'symbol-table.hh',
'value/context.hh',
'value-to-json.hh',
'value-to-xml.hh',
'value.hh',
)
libexpr = library(
'nixexpr',
libexpr_sources,
parser_tab,
lexer_tab,
libexpr_generated_headers,
dependencies : [
liblixutil,
liblixstore,
liblixfetchers,
boehm,
boost,
toml11,
nlohmann_json,
],
# for shared.hh
include_directories : [
'../libmain',
],
install : true,
# FIXME(Qyriad): is this right?
install_rpath : libdir,
)
install_headers(
libexpr_headers,
subdir : 'nix',
preserve_path : true,
)
liblixexpr = declare_dependency(
include_directories : include_directories('.'),
link_with : libexpr,
)
# FIXME: not using the pkg-config module because it creates way too many deps
# while meson migration is in progress, and we want to not include boost here
configure_file(
input : 'nix-expr.pc.in',
output : 'nix-expr.pc',
install_dir : libdir / 'pkgconfig',
configuration : {
'prefix' : prefix,
'libdir' : libdir,
'includedir' : includedir,
'PACKAGE_VERSION' : meson.project_version(),
},
)

View file

@ -68,10 +68,8 @@ void ExprOpHasAttr::show(const SymbolTable & symbols, std::ostream & str) const
str << ") ? " << showAttrPath(symbols, attrPath) << ")";
}
void ExprAttrs::show(const SymbolTable & symbols, std::ostream & str) const
void ExprAttrs::showBindings(const SymbolTable & symbols, std::ostream & str) const
{
if (recursive) str << "rec ";
str << "{ ";
typedef const decltype(attrs)::value_type * Attr;
std::vector<Attr> sorted;
for (auto & i : attrs) sorted.push_back(&i);
@ -79,10 +77,37 @@ void ExprAttrs::show(const SymbolTable & symbols, std::ostream & str) const
std::string_view sa = symbols[a->first], sb = symbols[b->first];
return sa < sb;
});
std::vector<Symbol> inherits;
std::map<ExprInheritFrom *, std::vector<Symbol>> inheritsFrom;
for (auto & i : sorted) {
if (i->second.inherited)
str << "inherit " << symbols[i->first] << " " << "; ";
else {
switch (i->second.kind) {
case AttrDef::Kind::Plain:
break;
case AttrDef::Kind::Inherited:
inherits.push_back(i->first);
break;
case AttrDef::Kind::InheritedFrom: {
auto & select = dynamic_cast<ExprSelect &>(*i->second.e);
auto & from = dynamic_cast<ExprInheritFrom &>(*select.e);
inheritsFrom[&from].push_back(i->first);
break;
}
}
}
if (!inherits.empty()) {
str << "inherit";
for (auto sym : inherits) str << " " << symbols[sym];
str << "; ";
}
for (const auto & [from, syms] : inheritsFrom) {
str << "inherit (";
(*inheritFromExprs)[from->displ]->show(symbols, str);
str << ")";
for (auto sym : syms) str << " " << symbols[sym];
str << "; ";
}
for (auto & i : sorted) {
if (i->second.kind == AttrDef::Kind::Plain) {
str << symbols[i->first] << " = ";
i->second.e->show(symbols, str);
str << "; ";
@ -95,6 +120,13 @@ void ExprAttrs::show(const SymbolTable & symbols, std::ostream & str) const
i.valueExpr->show(symbols, str);
str << "; ";
}
}
void ExprAttrs::show(const SymbolTable & symbols, std::ostream & str) const
{
if (recursive) str << "rec ";
str << "{ ";
showBindings(symbols, str);
str << "}";
}
@ -115,7 +147,10 @@ void ExprLambda::show(const SymbolTable & symbols, std::ostream & str) const
if (hasFormals()) {
str << "{ ";
bool first = true;
for (auto & i : formals->formals) {
// the natural Symbol ordering is by creation time, which can lead to the
// same expression being printed in two different ways depending on its
// context. always use lexicographic ordering to avoid this.
for (auto & i : formals->lexicographicOrder(symbols)) {
if (first) first = false; else str << ", ";
str << symbols[i.name];
if (i.def) {
@ -150,15 +185,7 @@ void ExprCall::show(const SymbolTable & symbols, std::ostream & str) const
void ExprLet::show(const SymbolTable & symbols, std::ostream & str) const
{
str << "(let ";
for (auto & i : attrs->attrs)
if (i.second.inherited) {
str << "inherit " << symbols[i.first] << "; ";
}
else {
str << symbols[i.first] << " = ";
i.second.e->show(symbols, str);
str << "; ";
}
attrs->showBindings(symbols, str);
str << "in ";
body->show(symbols, str);
str << ")";
@ -303,6 +330,12 @@ void ExprVar::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
this->level = withLevel;
}
void ExprInheritFrom::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env)
{
if (es.debugRepl)
es.exprEnvs.insert(std::make_pair(this, env));
}
void ExprSelect::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env)
{
if (es.debugRepl)
@ -326,22 +359,47 @@ void ExprOpHasAttr::bindVars(EvalState & es, const std::shared_ptr<const StaticE
i.expr->bindVars(es, env);
}
std::shared_ptr<const StaticEnv> ExprAttrs::bindInheritSources(
EvalState & es, const std::shared_ptr<const StaticEnv> & env)
{
if (!inheritFromExprs)
return nullptr;
// the inherit (from) source values are inserted into an env of its own, which
// does not introduce any variable names.
// analysis must see an empty env, or an env that contains only entries with
// otherwise unused names to not interfere with regular names. the parser
// has already filled all exprs that access this env with appropriate level
// and displacement, and nothing else is allowed to access it. ideally we'd
// not even *have* an expr that grabs anything from this env since it's fully
// invisible, but the evaluator does not allow for this yet.
auto inner = std::make_shared<StaticEnv>(nullptr, env.get(), 0);
for (auto from : *inheritFromExprs)
from->bindVars(es, env);
return inner;
}
void ExprAttrs::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env)
{
if (es.debugRepl)
es.exprEnvs.insert(std::make_pair(this, env));
if (recursive) {
auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), recursive ? attrs.size() : 0);
auto newEnv = [&] () -> std::shared_ptr<const StaticEnv> {
auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), attrs.size());
Displacement displ = 0;
for (auto & i : attrs)
newEnv->vars.emplace_back(i.first, i.second.displ = displ++);
Displacement displ = 0;
for (auto & i : attrs)
newEnv->vars.emplace_back(i.first, i.second.displ = displ++);
return newEnv;
}();
// No need to sort newEnv since attrs is in sorted order.
auto inheritFromEnv = bindInheritSources(es, newEnv);
for (auto & i : attrs)
i.second.e->bindVars(es, i.second.inherited ? env : newEnv);
i.second.e->bindVars(es, i.second.chooseByKind(newEnv, env, inheritFromEnv));
for (auto & i : dynamicAttrs) {
i.nameExpr->bindVars(es, newEnv);
@ -349,8 +407,10 @@ void ExprAttrs::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv>
}
}
else {
auto inheritFromEnv = bindInheritSources(es, env);
for (auto & i : attrs)
i.second.e->bindVars(es, env);
i.second.e->bindVars(es, i.second.chooseByKind(env, env, inheritFromEnv));
for (auto & i : dynamicAttrs) {
i.nameExpr->bindVars(es, env);
@ -407,19 +467,23 @@ void ExprCall::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> &
void ExprLet::bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env)
{
auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), attrs->attrs.size());
auto newEnv = [&] () -> std::shared_ptr<const StaticEnv> {
auto newEnv = std::make_shared<StaticEnv>(nullptr, env.get(), attrs->attrs.size());
Displacement displ = 0;
for (auto & i : attrs->attrs)
newEnv->vars.emplace_back(i.first, i.second.displ = displ++);
Displacement displ = 0;
for (auto & i : attrs->attrs)
newEnv->vars.emplace_back(i.first, i.second.displ = displ++);
return newEnv;
}();
// No need to sort newEnv since attrs->attrs is in sorted order.
auto inheritFromEnv = attrs->bindInheritSources(es, newEnv);
for (auto & i : attrs->attrs)
i.second.e->bindVars(es, i.second.inherited ? env : newEnv);
i.second.e->bindVars(es, i.second.chooseByKind(newEnv, env, inheritFromEnv));
if (es.debugRepl)
es.exprEnvs.insert(std::make_pair(this, newEnv));
es.exprEnvs.insert(std::make_pair(this, env));
body->bindVars(es, newEnv);
}
@ -517,6 +581,39 @@ std::string ExprLambda::showNamePos(const EvalState & state) const
/* Position table. */
Pos PosTable::operator[](PosIdx p) const
{
auto origin = resolve(p);
if (!origin)
return {};
const auto offset = origin->offsetOf(p);
Pos result{0, 0, origin->origin};
auto lines = this->lines.lock();
auto & linesForInput = (*lines)[origin->offset];
if (linesForInput.empty()) {
auto source = result.getSource().value_or("");
const char * begin = source.data();
for (Pos::LinesIterator it(source), end; it != end; it++)
linesForInput.push_back(it->data() - begin);
if (linesForInput.empty())
linesForInput.push_back(0);
}
// as above: the first line starts at byte 0 and is always present
auto lineStartOffset = std::prev(
std::upper_bound(linesForInput.begin(), linesForInput.end(), offset));
result.line = 1 + (lineStartOffset - linesForInput.begin());
result.column = 1 + (offset - *lineStartOffset);
return result;
}
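In other words, `PosTable::operator[]` now recovers line and column lazily from a byte offset: it records the byte offset of every line start once per origin, then binary-searches for the last line start not past the requested offset. Below is a self-contained sketch of the same arithmetic, minus the caching, locking, and `Pos::LinesIterator` plumbing; the helper name is illustrative.

```
#include <algorithm>
#include <cstdio>
#include <iterator>
#include <string>
#include <utility>
#include <vector>

// Same idea as PosTable::operator[] above: turn a byte offset into a
// 1-based (line, column) pair by binary-searching a table of line starts.
static std::pair<unsigned, unsigned> lineColumnAt(const std::string & source, size_t offset)
{
    // Byte offsets at which each line begins; line 1 always starts at 0.
    std::vector<size_t> lineStarts{0};
    for (size_t i = 0; i < source.size(); i++)
        if (source[i] == '\n')
            lineStarts.push_back(i + 1);

    // Last line start <= offset, i.e. prev(first line start > offset).
    auto it = std::prev(std::upper_bound(lineStarts.begin(), lineStarts.end(), offset));
    unsigned line = 1 + unsigned(it - lineStarts.begin());
    unsigned column = 1 + unsigned(offset - *it);
    return {line, column};
}

int main()
{
    std::string src = "let\n  x = 1;\nin x\n";
    auto [line, column] = lineColumnAt(src, src.find('x'));
    std::printf("first 'x' is at line %u, column %u\n", line, column); // line 2, column 3
}
```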
/* Symbol table. */
size_t SymbolTable::totalSize() const

View file

@ -7,7 +7,6 @@
#include "value.hh"
#include "symbol-table.hh"
#include "error.hh"
#include "chunked-vector.hh"
#include "position.hh"
#include "eval-error.hh"
#include "pos-idx.hh"
@ -129,6 +128,23 @@ struct ExprVar : Expr
COMMON_METHODS
};
/**
* A pseudo-expression for the purpose of evaluating the `from` expression in `inherit (from)` syntax.
* Unlike normal variable references, the displacement is set during parsing, and always refers to
* `ExprAttrs::inheritFromExprs` (by itself or in `ExprLet`), whose values are put into their own `Env`.
*/
struct ExprInheritFrom : ExprVar
{
ExprInheritFrom(PosIdx pos, Displacement displ): ExprVar(pos, {})
{
this->level = 0;
this->displ = displ;
this->fromWith = nullptr;
}
void bindVars(EvalState & es, const std::shared_ptr<const StaticEnv> & env);
};
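The practical effect of `ExprInheritFrom` plus `inheritFromExprs` is that the source of `inherit (expr) attrs;` is forced once per attribute set, and every inherited attribute selects from that shared result, rather than each attribute evaluating the source on its own (the situation the old `/* !!! Should ensure sharing of the expression in $4. */` comment pointed at). A standalone toy of that sharing follows; none of these names are Lix internals.

```
#include <iostream>
#include <map>
#include <string>

// Toy contrast between "each inherited attribute re-evaluates the source"
// and "evaluate the source once into a shared slot, then select from it",
// which is the sharing that ExprInheritFrom + inheritFromExprs provide.
using Attrs = std::map<std::string, int>;

int main()
{
    int evaluations = 0;
    auto expensiveSource = [&]() -> Attrs {
        evaluations++;                 // stands in for forcing the source expression
        return {{"a", 1}, {"b", 2}};
    };

    // Old shape: every inherited attribute selects from its own evaluation.
    evaluations = 0;
    int oldA = expensiveSource().at("a");
    int oldB = expensiveSource().at("b");
    std::cout << "without sharing: a=" << oldA << ", b=" << oldB
              << ", evaluations=" << evaluations << "\n";

    // New shape: evaluate once into a side slot (the nameless inherit-from
    // env), then every inherited attribute selects from the shared result.
    evaluations = 0;
    Attrs shared = expensiveSource();
    std::cout << "with sharing: a=" << shared.at("a") << ", b=" << shared.at("b")
              << ", evaluations=" << evaluations << "\n";
}
```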
struct ExprSelect : Expr
{
PosIdx pos;
@ -154,16 +170,40 @@ struct ExprAttrs : Expr
bool recursive;
PosIdx pos;
struct AttrDef {
bool inherited;
enum class Kind {
/** `attr = expr;` */
Plain,
/** `inherit attr1 attrn;` */
Inherited,
/** `inherit (expr) attr1 attrn;` */
InheritedFrom,
};
Kind kind;
Expr * e;
PosIdx pos;
Displacement displ; // displacement
AttrDef(Expr * e, const PosIdx & pos, bool inherited=false)
: inherited(inherited), e(e), pos(pos) { };
AttrDef(Expr * e, const PosIdx & pos, Kind kind = Kind::Plain)
: kind(kind), e(e), pos(pos) { };
AttrDef() { };
template<typename T>
const T & chooseByKind(const T & plain, const T & inherited, const T & inheritedFrom) const
{
switch (kind) {
case Kind::Plain:
return plain;
case Kind::Inherited:
return inherited;
default:
case Kind::InheritedFrom:
return inheritedFrom;
}
}
};
typedef std::map<Symbol, AttrDef> AttrDefs;
AttrDefs attrs;
std::unique_ptr<std::vector<Expr *>> inheritFromExprs;
struct DynamicAttrDef {
Expr * nameExpr, * valueExpr;
PosIdx pos;
@ -176,6 +216,11 @@ struct ExprAttrs : Expr
ExprAttrs() : recursive(false) { };
PosIdx getPos() const override { return pos; }
COMMON_METHODS
std::shared_ptr<const StaticEnv> bindInheritSources(
EvalState & es, const std::shared_ptr<const StaticEnv> & env);
Env * buildInheritFromEnv(EvalState & state, Env & up);
void showBindings(const SymbolTable & symbols, std::ostream & str) const;
};
struct ExprList : Expr

View file

@ -24,20 +24,15 @@ struct ParserLocation
int last_line, last_column;
// backup to recover from yyless(0)
int stashed_first_line, stashed_first_column;
int stashed_last_line, stashed_last_column;
int stashed_first_column, stashed_last_column;
void stash() {
stashed_first_line = first_line;
stashed_first_column = first_column;
stashed_last_line = last_line;
stashed_last_column = last_column;
}
void unstash() {
first_line = stashed_first_line;
first_column = stashed_first_column;
last_line = stashed_last_line;
last_column = stashed_last_column;
}
};
@ -88,12 +83,12 @@ inline void ParserState::addAttr(ExprAttrs * attrs, AttrPath && attrPath, Expr *
if (i->symbol) {
ExprAttrs::AttrDefs::iterator j = attrs->attrs.find(i->symbol);
if (j != attrs->attrs.end()) {
if (!j->second.inherited) {
if (j->second.kind != ExprAttrs::AttrDef::Kind::Inherited) {
ExprAttrs * attrs2 = dynamic_cast<ExprAttrs *>(j->second.e);
if (!attrs2) dupAttr(attrPath, pos, j->second.pos);
if (!attrs2) dupAttr({attrPath.begin(), i + 1}, pos, j->second.pos);
attrs = attrs2;
} else
dupAttr(attrPath, pos, j->second.pos);
dupAttr({attrPath.begin(), i + 1}, pos, j->second.pos);
} else {
ExprAttrs * nested = new ExprAttrs;
attrs->attrs[i->symbol] = ExprAttrs::AttrDef(nested, pos);
@ -117,13 +112,24 @@ inline void ParserState::addAttr(ExprAttrs * attrs, AttrPath && attrPath, Expr *
auto ae = dynamic_cast<ExprAttrs *>(e);
auto jAttrs = dynamic_cast<ExprAttrs *>(j->second.e);
if (jAttrs && ae) {
if (ae->inheritFromExprs && !jAttrs->inheritFromExprs)
jAttrs->inheritFromExprs = std::make_unique<std::vector<Expr *>>();
for (auto & ad : ae->attrs) {
auto j2 = jAttrs->attrs.find(ad.first);
if (j2 != jAttrs->attrs.end()) // Attr already defined in iAttrs, error.
dupAttr(ad.first, j2->second.pos, ad.second.pos);
jAttrs->attrs.emplace(ad.first, ad.second);
if (ad.second.kind == ExprAttrs::AttrDef::Kind::InheritedFrom) {
auto & sel = dynamic_cast<ExprSelect &>(*ad.second.e);
auto & from = dynamic_cast<ExprInheritFrom &>(*sel.e);
from.displ += jAttrs->inheritFromExprs->size();
}
}
jAttrs->dynamicAttrs.insert(jAttrs->dynamicAttrs.end(), ae->dynamicAttrs.begin(), ae->dynamicAttrs.end());
if (ae->inheritFromExprs) {
jAttrs->inheritFromExprs->insert(jAttrs->inheritFromExprs->end(),
ae->inheritFromExprs->begin(), ae->inheritFromExprs->end());
}
} else {
dupAttr(attrPath, pos, j->second.pos);
}
@ -264,7 +270,7 @@ inline Expr * ParserState::stripIndentation(const PosIdx pos,
inline PosIdx ParserState::at(const ParserLocation & loc)
{
return positions.add(origin, loc.first_line, loc.first_column);
return positions.add(origin, loc.first_column);
}
}

View file

@ -62,6 +62,10 @@ using namespace nix;
void yyerror(YYLTYPE * loc, yyscan_t scanner, ParserState * state, const char * error)
{
if (std::string_view(error).starts_with("syntax error, unexpected end of file")) {
loc->first_column = loc->last_column;
loc->first_line = loc->last_line;
}
throw ParseError({
.msg = HintFmt(error),
.pos = state->positions[state->at(*loc)]
@ -85,6 +89,7 @@ void yyerror(YYLTYPE * loc, yyscan_t scanner, ParserState * state, const char *
nix::StringToken uri;
nix::StringToken str;
std::vector<nix::AttrName> * attrNames;
std::vector<std::pair<nix::AttrName, nix::PosIdx>> * inheritAttrs;
std::vector<std::pair<nix::PosIdx, nix::Expr *>> * string_parts;
std::vector<std::pair<nix::PosIdx, std::variant<nix::Expr *, nix::StringToken>>> * ind_string_parts;
}
@ -95,7 +100,8 @@ void yyerror(YYLTYPE * loc, yyscan_t scanner, ParserState * state, const char *
%type <attrs> binds
%type <formals> formals
%type <formal> formal
%type <attrNames> attrs attrpath
%type <attrNames> attrpath
%type <inheritAttrs> attrs
%type <string_parts> string_parts_interpolated
%type <ind_string_parts> ind_string_parts
%type <e> path_start string_parts string_attr
@ -307,21 +313,30 @@ binds
: binds attrpath '=' expr ';' { $$ = $1; state->addAttr($$, std::move(*$2), $4, state->at(@2)); delete $2; }
| binds INHERIT attrs ';'
{ $$ = $1;
for (auto & i : *$3) {
for (auto & [i, iPos] : *$3) {
if ($$->attrs.find(i.symbol) != $$->attrs.end())
state->dupAttr(i.symbol, state->at(@3), $$->attrs[i.symbol].pos);
auto pos = state->at(@3);
$$->attrs.emplace(i.symbol, ExprAttrs::AttrDef(new ExprVar(CUR_POS, i.symbol), pos, true));
state->dupAttr(i.symbol, iPos, $$->attrs[i.symbol].pos);
$$->attrs.emplace(
i.symbol,
ExprAttrs::AttrDef(new ExprVar(iPos, i.symbol), iPos, ExprAttrs::AttrDef::Kind::Inherited));
}
delete $3;
}
| binds INHERIT '(' expr ')' attrs ';'
{ $$ = $1;
/* !!! Should ensure sharing of the expression in $4. */
for (auto & i : *$6) {
if (!$$->inheritFromExprs)
$$->inheritFromExprs = std::make_unique<std::vector<Expr *>>();
$$->inheritFromExprs->push_back($4);
auto from = new nix::ExprInheritFrom(state->at(@4), $$->inheritFromExprs->size() - 1);
for (auto & [i, iPos] : *$6) {
if ($$->attrs.find(i.symbol) != $$->attrs.end())
state->dupAttr(i.symbol, state->at(@6), $$->attrs[i.symbol].pos);
$$->attrs.emplace(i.symbol, ExprAttrs::AttrDef(new ExprSelect(CUR_POS, $4, i.symbol), state->at(@6)));
state->dupAttr(i.symbol, iPos, $$->attrs[i.symbol].pos);
$$->attrs.emplace(
i.symbol,
ExprAttrs::AttrDef(
new ExprSelect(iPos, from, i.symbol),
iPos,
ExprAttrs::AttrDef::Kind::InheritedFrom));
}
delete $6;
}
@ -329,12 +344,12 @@ binds
;
attrs
: attrs attr { $$ = $1; $1->push_back(AttrName(state->symbols.create($2))); }
: attrs attr { $$ = $1; $1->emplace_back(AttrName(state->symbols.create($2)), state->at(@2)); }
| attrs string_attr
{ $$ = $1;
ExprString * str = dynamic_cast<ExprString *>($2);
if (str) {
$$->push_back(AttrName(state->symbols.create(str->s)));
$$->emplace_back(AttrName(state->symbols.create(str->s)), state->at(@2));
delete str;
} else
throw ParseError({
@ -342,7 +357,7 @@ attrs
.pos = state->positions[state->at(@2)]
});
}
| { $$ = new AttrPath; }
| { $$ = new std::vector<std::pair<AttrName, PosIdx>>; }
;
attrpath
@ -420,7 +435,7 @@ Expr * parseExprFromBuf(
.symbols = symbols,
.positions = positions,
.basePath = basePath,
.origin = {origin},
.origin = positions.addOrigin(origin, length),
.s = astSymbols,
};

View file

@ -6,6 +6,7 @@ namespace nix {
class PosIdx
{
friend struct LazyPosAcessors;
friend class PosTable;
private:

View file

@ -7,6 +7,7 @@
#include "chunked-vector.hh"
#include "pos-idx.hh"
#include "position.hh"
#include "sync.hh"
namespace nix {
@ -17,66 +18,68 @@ public:
{
friend PosTable;
private:
// must always be invalid by default, add() replaces this with the actual value.
// subsequent add() calls use this index as a token to quickly check whether the
// current origins.back() can be reused or not.
mutable uint32_t idx = std::numeric_limits<uint32_t>::max();
uint32_t offset;
// Used for searching in PosTable::[].
explicit Origin(uint32_t idx)
: idx(idx)
, origin{std::monostate()}
{
}
Origin(Pos::Origin origin, uint32_t offset, size_t size):
offset(offset), origin(origin), size(size)
{}
public:
const Pos::Origin origin;
const size_t size;
Origin(Pos::Origin origin)
: origin(origin)
uint32_t offsetOf(PosIdx p) const
{
return p.id - 1 - offset;
}
};
struct Offset
{
uint32_t line, column;
};
private:
std::vector<Origin> origins;
ChunkedVector<Offset, 8192> offsets;
using Lines = std::vector<uint32_t>;
public:
PosTable()
: offsets(1024)
{
origins.reserve(1024);
}
std::map<uint32_t, Origin> origins;
mutable Sync<std::map<uint32_t, Lines>> lines;
PosIdx add(const Origin & origin, uint32_t line, uint32_t column)
const Origin * resolve(PosIdx p) const
{
const auto idx = offsets.add({line, column}).second;
if (origins.empty() || origins.back().idx != origin.idx) {
origin.idx = idx;
origins.push_back(origin);
}
return PosIdx(idx + 1);
}
if (p.id == 0)
return nullptr;
Pos operator[](PosIdx p) const
{
if (p.id == 0 || p.id > offsets.size())
return {};
const auto idx = p.id - 1;
/* we want the last key <= idx, so we'll take prev(first key > idx).
this is guaranteed to never rewind origin.begin because the first
key is always 0. */
const auto pastOrigin = std::upper_bound(
origins.begin(), origins.end(), Origin(idx), [](const auto & a, const auto & b) { return a.idx < b.idx; });
const auto origin = *std::prev(pastOrigin);
const auto offset = offsets[idx];
return {offset.line, offset.column, origin.origin};
this is guaranteed to never rewind origin.begin because the first
key is always 0. */
const auto pastOrigin = origins.upper_bound(idx);
return &std::prev(pastOrigin)->second;
}
public:
Origin addOrigin(Pos::Origin origin, size_t size)
{
uint32_t offset = 0;
if (auto it = origins.rbegin(); it != origins.rend())
offset = it->first + it->second.size;
// +1 because all PosIdx are offset by 1 to begin with (because noPos == 0), and
// another +1 to ensure that all origins can point to EOF, eg on (invalid) empty inputs.
if (2 + offset + size < offset)
return Origin{origin, offset, 0};
return origins.emplace(offset, Origin{origin, offset, size}).first->second;
}
PosIdx add(const Origin & origin, size_t offset)
{
if (offset > origin.size)
return PosIdx();
return PosIdx(1 + origin.offset + offset);
}
Pos operator[](PosIdx p) const;
Pos::Origin originOf(PosIdx p) const
{
if (auto o = resolve(p))
return o->origin;
return std::monostate{};
}
};
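The rewritten table stores no per-position records at all: each origin owns a contiguous range of byte offsets, a `PosIdx` is just `1 + origin.offset + offset-within-origin` (0 stays reserved for `noPos`), and decoding walks back to the owning origin with `upper_bound` over the range starts. The sketch below models that encoding with toy types only (no `Sync`, no line cache); it is not the real class.

```
#include <cstdint>
#include <iostream>
#include <iterator>
#include <map>
#include <string>
#include <utility>

// Toy model of the PosIdx scheme above: each "origin" (a parsed file/string)
// owns a contiguous range of global byte offsets, a position id is
// 1 + rangeStart + offsetInOrigin, and decoding finds the owning origin via
// upper_bound on the range starts.
struct ToyOrigin {
    std::string name;
    uint32_t offset; // start of this origin's range in the global offset space
    uint32_t size;   // length of the origin's source text
};

struct ToyPosTable {
    std::map<uint32_t, ToyOrigin> origins; // keyed by range start

    const ToyOrigin & addOrigin(std::string name, uint32_t size)
    {
        uint32_t start = 0;
        if (auto it = origins.rbegin(); it != origins.rend())
            start = it->first + it->second.size;
        return origins.emplace(start, ToyOrigin{std::move(name), start, size}).first->second;
    }

    uint32_t add(const ToyOrigin & origin, uint32_t offsetInOrigin) const
    {
        return 1 + origin.offset + offsetInOrigin; // the PosIdx analogue
    }

    std::pair<std::string, uint32_t> resolve(uint32_t id) const
    {
        // last range start <= id - 1; the first key is always 0
        // (the real resolve() first checks for the reserved id 0)
        auto it = std::prev(origins.upper_bound(id - 1));
        return {it->second.name, id - 1 - it->second.offset}; // cf. offsetOf()
    }
};

int main()
{
    ToyPosTable table;
    table.addOrigin("a.nix", 100);
    auto & b = table.addOrigin("b.nix", 50);

    uint32_t p = table.add(b, 7); // byte 7 inside b.nix
    auto [name, offset] = table.resolve(p);
    std::cout << name << " @ byte " << offset << " (id " << p << ")\n";
}
```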

Some files were not shown because too many files have changed in this diff.