this gives about a 20% performance improvement on pure parsing. obviously
it will be less on full eval; depending on how much parsing is to be
done (e.g. including hackage-packages.nix or not) it's more like 4%-10%.
this has been tested (with thousands of core hours of fuzzing) to ensure
that the ASTs produced by the new parser are exactly the same as those
the old one would have produced. error messages will change (sometimes
by a lot) and are not yet perfect, but we'd rather leave improving them
for later.
test results for running only the parser (excluding the variable binding
code) in a tight loop with inputs and parameters as given are promising:
- 40% faster on lix's package.nix at 10000 iterations
- 1.3% faster on nixpkgs all-packages.nix at 1000 iterations
- equivalent on all of nixpkgs concatenated at 100 iterations
(excluding invalid files, each file surrounded with parens)
more realistic benchmarks land somewhere between these extremes, with
parsing once again getting the largest uplift. other realistic workloads
improve by a few percentage points as well; notably, system builds are
about 4% faster.
Benchmark summary (from ./bench/summarize.jq bench/bench-*.json)
old/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.408s ± 0.025s
user: 0.355s | system: 0.033s
median: 0.389s
range: 0.388s ... 0.442s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval -f bench/nixpkgs/pkgs/development/haskell-modules/hackage-packages.nix
mean: 0.332s ± 0.024s
user: 0.279s | system: 0.033s
median: 0.314s
range: 0.313s ... 0.361s
relative: 0.814
---
old/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 6.133s ± 0.022s
user: 5.395s | system: 0.437s
median: 6.128s
range: 6.099s ... 6.183s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' eval --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 5.925s ± 0.025s
user: 5.176s | system: 0.456s
median: 5.934s
range: 5.861s ... 5.943s
relative: 0.966
---
GC_INITIAL_HEAP_SIZE=10g old/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.503s ± 0.027s
user: 3.731s | system: 0.547s
median: 4.499s
range: 4.478s ... 4.541s
relative: 1
GC_INITIAL_HEAP_SIZE=10g new/bin/nix eval --extra-experimental-features 'nix-command flakes' --raw --impure --expr 'with import <nixpkgs/nixos> {}; system'
mean: 4.285s ± 0.031s
user: 3.504s | system: 0.571s
median: 4.281s
range: 4.221s ... 4.328s
relative: 0.951
---
old/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 16.475s ± 0.07s
user: 14.088s | system: 1.572s
median: 16.495s
range: 16.351s ... 16.536s
relative: 1
new/bin/nix --extra-experimental-features 'nix-command flakes' search --no-eval-cache github:nixos/nixpkgs/e1fa12d4f6c6fe19ccb59cac54b5b3f25e160870 hello
mean: 15.973s ± 0.013s
user: 13.558s | system: 1.615s
median: 15.973s
range: 15.946s ... 15.99s
relative: 0.97
---
Change-Id: Ie66ec2d045dec964632c6541e25f8f0797319ee2
We realized that there's really no good place to put these dev-facing
bulletins. The user-facing release notes aren't the worst place for
them, I guess, and we do kind of hope that they convert some users
into devs.
Change-Id: Id9387b2964fe291cb5a3f74ad6344157f19b540c
It seems like someone implemented precompiled headers a long time ago
and then it never got ported to meson, or maybe never worked at all.
This is, however, blessedly easy to implement. I went looking for
`#define`s that could affect the result of precompiling the headers, and
as far as I can tell we aren't doing any of that, so this should truly
just be free build time savings.
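For illustration, opting a target into this is just meson's cpp_pch
keyword argument; the target and header names below are made up and not
necessarily what the Lix build actually uses:
```
# hypothetical target; real target/header names may differ
liblixexpr = library(
  'lixexpr',
  lixexpr_sources,
  cpp_pch : 'pch/precompiled-headers.hh',
)
```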
Previous state:
Compilation (551 times):
Parsing (frontend): 1302.1 s
Codegen & opts (backend): 956.3 s
New state:
**** Time summary:
Compilation (567 times):
Parsing (frontend): 1123.0 s
Codegen & opts (backend): 1078.1 s
I wonder if the "regression" in codegen time is just doing the PCH
operation a few times, because meson does it per-target.
Change-Id: I664366b8069bab4851308b3a7571bea97ac64022
this should make it easier to spot future instances of entries being
duplicated by accident. also add a pre-commit check to keep them sorted.
Change-Id: I500caf862e93480b38c9d51144273bb2dcab1af0
Fixes #183, #110, #116.
The default flake-registry option becomes 'vendored' and refers
to a vendored flake-registry.json file in the install path.
The vendored copy of the flake registry is taken from
github:NixOS/flake-registry at commit 9c69f7bd2363e71fe5cd7f608113290c7614dcdd.
Change-Id: I752b81c85ebeaab4e582ac01c239d69d65580f37
* changes:
flake: fix devShell on i686-linux by disabling ClangBuildAnalyzer on it
flake: fix eval of checks & devshell on i686-linux
flake: move the pre-commit definition to its own file
ClangBuildAnalyzer doesn't build on i686-linux due to
`long long int`/`size_t` conversion errors, so let's just exclude it
from the devshell on that platform.
Change-Id: If1077a7b3860db4381999c8e304f6d4b2bc96a05
Because of an objc quirk[1], calling curl_global_init for the first time
after fork() will always result in a crash.
Up until now the solution has been to set
OBJC_DISABLE_INITIALIZE_FORK_SAFETY for every nix process to ignore
that error.
This is less than ideal because we were only setting it in package.nix,
which meant that running nix tests locally (outside of that build) would
fail because the variable was not set there.
Instead of working around that error, we address it at its core by
calling curl_global_init inside initLibStore, which should mean curl
has already been initialized by the time a forked process tries to
do so.
[1] 01edf1705f/runtime/objc-initialize.mm (L614-L636)
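Roughly, the idea looks like this (a simplified sketch, not the actual
Lix code):
```
#include <curl/curl.h>

void initLibStore()
{
    // On macOS, the first curl_global_init() after fork() trips objc's
    // +initialize fork-safety check and aborts. Initializing here, in
    // the parent, means a later forked process already sees curl as
    // set up.
    curl_global_init(CURL_GLOBAL_ALL);

    // ... rest of the store/library initialization ...
}
```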
Change-Id: Icf26010a8be655127cc130efb9c77b603a6660d0
The big ones here are `trim-trailing-whitespace` and `end-of-file-fixer`
(which makes sure that every file ends with exactly one newline
character).
Change-Id: Idca73b640883188f068f9903e013cf0d82aa1123
I didn't enable this by default for clang because it makes the build
time about 10% worse. Unfortunate, but tbh devs for whom an extra 10% of
build time is not *that* bad should probably just enable this themselves.
Change-Id: I8d1e5b6f3f76c649a4e2f115f534f7f97cee46e6
hacking changelog-d to support not just github but also forgejo and
gerrit is a lot more complicated than it's worth, even more so since
the entire thing can just as well be done with ~60 lines of python.
this new script is also much cheaper to instantiate (being python),
so having it enabled in all shells is far less of a hassle.
we've also adjusted existing release notes that referenced a gerrit
cl to auto-link to the cl in question, which makes the diff a bit bigger.
closes lix-project/lix#176
Change-Id: I8ba7dd0070aad9ba4474401731215fcf5d9d2130
rl-next: Fix and support markdown frontmatter syntax
(cherry picked from commit 69b7876a0810269ad71807594cfd99b26cd8a5ff)
Change-Id: I8bfb8967af0943080fdd70d257c34abaf0a9fedf
This provides a platform-independent way to configure the SSL
certificates file in the Nix daemon. Previously we provided
instructions for overriding the environment variable in launchd, but
that obviously doesn't work with systemd. Now we can just tell users
to add
ssl-cert-file = /etc/ssl/my-certificate-bundle.crt
to their nix.conf.
This matches the behavior of bash: we don't want to add a space after
completing attrs, so this uses -S.
It also switches to new compadd-style completions instead of _describe.
Shouldn't have any negative effects from what I can tell.
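The relevant bit looks roughly like this (a sketch, not the actual
completion script; $attr_words is a made-up array of candidates):
```
# an empty -S suffix stops zsh from appending a space after the attr,
# matching bash's behaviour
compadd -S '' -- $attr_words
```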
Users may want to mount a filesystem just for the Nix database, with
the filesystem's parameters specially tuned for sqlite. For example, on
ZFS you might set the recordsize to 64k after changing the database's
page size to 65536.
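A hypothetical setup along those lines (dataset name is illustrative;
note that changing the page size of an existing database only takes
effect on a VACUUM and doesn't work while it's in WAL mode):
```
# dedicated dataset for the Nix database, tuned for 64k records
zfs create -o recordsize=64k -o mountpoint=/nix/var/nix/db rpool/nix-db
# switch the database itself to 64k pages
sqlite3 /nix/var/nix/db/db.sqlite \
  'PRAGMA journal_mode = delete; PRAGMA page_size = 65536; VACUUM;'
```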
nix-daemon.socket is used to socket-activate nix-daemon.service when
/nix/var/nix/daemon-socket/socket is accessed.
In container use cases, /nix/var/nix/daemon-socket is sometimes
bind-mounted read-only into the container.
In these cases, we want to skip starting nix-daemon.socket.
However, since systemd 250, `ConditionPathIsReadWrite` is also not met
if /nix/var/nix/daemon-socket doesn't exist at all. This means a
regular NixOS system will skip starting nix-daemon.socket:
> [ 237.187747] systemd[1]: Nix Daemon Socket was skipped because of a failed condition check (ConditionPathIsReadWrite=/nix/var/nix/daemon-socket).
To prevent this from happening, ship a tmpfiles file that'll cause the
directory to be created if it doesn't exist already.
In the case of NixOS, we can just add Nix to `systemd.tmpfiles.packages`
and have these files picked up automatically.
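The shipped tmpfiles.d entry amounts to something like this (exact mode
and ownership may differ):
```
d /nix/var/nix/daemon-socket 0755 root root - -
```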
Allows completing `nix build ~/flake#<Tab>`.
We can implement expansion for `~user` later if needed.
Not using wordexp(3) since that expands way too much.
The default maxfiles on macOS 11 and macOS 12 is 256, which is too low
for nix to work:
```
$ launchctl limit maxfiles
maxfiles 256 unlimited
```
Set NumberOfFiles of nix-daemon to 4096 to avoid `Too many open files`
errors.
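In the launchd plist this is something along these lines (a sketch; the
surrounding plist is omitted):
```
<key>SoftResourceLimits</key>
<dict>
  <key>NumberOfFiles</key>
  <integer>4096</integer>
</dict>
```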
This is only rudimentary support as allowed by `NIX_GET_COMPLETIONS`.
In the future, we could use complete’s `--wraps` argument to autocomplete arguments for programs after `nix shell -c`.
Previously, the build system used uname(1) output when it wanted to
check the operating system it was being built for, which meant that it
didn't take cross-compilation into account when the build and host
operating systems were different.
To fix this, instead of consulting uname output, we consult the host
triple, specifically the third "kernel" part.
For "kernel"s with stable ABIs, like Linux or Cygwin, we can use a
simple ifeq to test whether we're compiling for that system, but for
other platforms, like Darwin, FreeBSD, or Solaris, we have to use a
more complicated check to take into account the version numbers at the
end of the "kernel"s. I couldn't find a way to just strip these
version numbers in GNU Make without shelling out, which would be even
more ugly IMO. Because these checks differ between kernels, and the
patsubst ones are quite fiddly, I've added variables for each host OS
we might want to check to make them easier to reuse.
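A rough sketch of what these checks look like (variable names here are
illustrative, not necessarily the ones used in the Makefiles;
HOST_KERNEL stands for the kernel field of the host triple, e.g.
"linux" or "darwin21.6.0"):
```
# kernels without version suffixes can be matched exactly
ifeq ($(HOST_KERNEL), linux)
  HOST_LINUX = 1
endif

# darwin's kernel field carries a version suffix, so strip anything
# matching "darwin%" and check whether that left the string empty
ifeq ($(patsubst darwin%,,$(HOST_KERNEL)),)
  HOST_DARWIN = 1
endif
```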