Compare commits

...

32 commits

Author SHA1 Message Date
kloenk bcc12e53e2
libmain: add progress bar with multiple status lines
Add the log-formats `multiline` and `multiline-with-logs`, which show
multiple currently active build status lines.
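A usage sketch (the format names come from this change; the flag is the standard `--log-format`, and the build target is illustrative):

```console
$ nix build --log-format multiline nixpkgs#hello
$ nix build --log-format multiline-with-logs nixpkgs#hello
```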

Change-Id: Idd8afe62f8591b5d8b70e258c5cefa09be4cab03
2024-06-12 13:11:35 +02:00
alois31 e3da55df8a
tests/libcmd: set HOME to a temporary directory
The libcmd unit test creates files (more specifically, the fetcher cache) in
its home directory. In the single-user sandbox, this leads to the creation of
/homeless-shelter, since this is the default HOME and the root is writable.
Unfortunately, this conflicts with the assumption of the functional tests that
this directory does not exist. Use a different home directory to prevent these
test failures, and thus restore the ability to build inside the single-user
sandbox.
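A shell sketch of the idea, assuming the unit tests are run via meson (the suite name is hypothetical): HOME points at a throwaway directory instead of /homeless-shelter.

```console
$ HOME=$(mktemp -d) meson test -C build --suite libcmd
```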

Fixes: lix-project/lix#365
Change-Id: I4df8c53d043234b95a7c0ac45fc5ee89e8d46aff
2024-06-12 13:11:35 +02:00
alois31 e1e9785f4f
libstore/build: use an allowlist approach to syscall filtering
Previously, system call filtering (to prevent builders from storing files with
setuid/setgid permission bits or extended attributes) was performed using a
blocklist. While this looks simple at first, it actually carries significant
security and maintainability risks: after all, the kernel may add new syscalls
to achieve the same functionality one is trying to block, and it can even be
hard to actually add the syscall to the blocklist when building against a C
library that doesn't know about it yet. For a recent demonstration of this
happening in practice to Nix, see the introduction of fchmodat2 [0] [1].

The allowlist approach does not share the same drawback. While it does require
a rather large list of harmless syscalls to be maintained in the codebase,
failing to update this list (and roll out the update to all users) in time has
rather benign effects; at worst, very recent programs that already rely on new
syscalls will fail with an error the same way they would on a slightly older
kernel that doesn't support them yet. Most importantly, no unintended new ways
of performing dangerous operations will be silently allowed.

Another possible drawback is reduced system call performance due to the larger
filter created by the allowlist requiring more computation [2]. However, this
issue has not convincingly been demonstrated yet in practice, for example in
systemd or various browsers.

This commit tries to keep the behavior as close to unchanged as possible. Only
newer syscalls that are not supported by glibc 2.38 (as found in NixOS 23.11)
are blocked. Since this includes fchmodat2, the compatibility code added for
handling this syscall can be removed too.

[0] https://github.com/NixOS/nixpkgs/issues/300635
[1] https://github.com/NixOS/nix/issues/10424
[2] https://github.com/flatpak/flatpak/pull/4462#issuecomment-1061690607
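This compare also adds tooling to keep the allowlist maintained: `maintainers/check-syscalls.sh` (shown further down) diffs the allowlist extracted from `local-derivation-goal.cc` against libseccomp's `syscalls.csv`, and the devshell gains a `check-syscalls` wrapper for it. A usage sketch:

```console
$ check-syscalls   # empty diff: every syscall libseccomp knows is either allowed or explicitly skipped
```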

Change-Id: I541be3ea9b249bcceddfed6a5a13ac10b11e16ad
2024-06-12 13:11:33 +02:00
alois31 e511f66f24
libstore/build: always treat seccomp setup failures as fatal
In f047e4357b, I missed that when building without a dedicated build user
(i.e. in single-user setups), seccomp setup failures are silently ignored.
This was introduced without explanation 7 years ago (ff6becafa8). Hopefully
the only use-case nowadays is causing spurious test suite successes when
messing up the seccomp filter during development. Let's try removing it.

Change-Id: Ibe51416d9c7a6dd635c2282990224861adf1ceab
2024-06-12 13:10:46 +02:00
jade 8a3d063a49 Merge changes from topic "releng" into main
* changes:
  releng: add prod environment, ready for release
  releng: automatically figure out if we should tag latest for docker
  releng: support multiarch docker images
  manual: rewrite the docker guide now that we have images
  Rewrite docker to be sensible and smaller
  Implement docker upload in the releng tools
2024-06-11 04:45:12 +00:00
jade f432e464dd Merge "tests: fix daemon version in isDaemonNewer function" into main 2024-06-10 23:22:05 +00:00
jade a986a8dfa1 Merge "Revert "flake: update nixpkgs pin 23.11->24.05 (+ boehmgc compat changes)"" into main 2024-06-10 04:48:12 +00:00
jade 8a09465c3a Revert "flake: update nixpkgs pin 23.11->24.05 (+ boehmgc compat changes)"
This reverts commit 28a079f841.

Reason for revert: This caused a pile of regressions in CI, and the tree does not pass `nix flake check`. Some of them are fixed in CL https://gerrit.lix.systems/c/lix/+/1429, but there's more to be fixed.

We should defer this after 2.90.

Change-Id: Ib839d0fcb08eb52094af2b521145e3c1b4e0556f
2024-06-10 04:29:13 +00:00
jade 82dc712d93 releng: add prod environment, ready for release
I am *reasonably* confident that this releng infrastructure can actually
build a Lix 2.90 and release it successfully. Let's make it possible to
do, and add some cute colours to the confirmation message.

Change-Id: I85e498b6fb49ffc5e75c0a72c5e45fb1f69030d3
2024-06-09 20:33:24 -07:00
jade ce71d0e9ab releng: automatically figure out if we should tag latest for docker
For example, when releasing from release-2.90, if `main` has a 2.91 tag
ancestor, we know that 2.91 was released, so we should *not* tag latest.
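A hedged sketch of such an ancestry check with plain git (the actual logic lives in the releng tools):

```console
$ git merge-base --is-ancestor 2.91 main && echo "2.91 already released; do not tag latest"
```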

Change-Id: Ia56b17a2ee03bbec74b7c271c742858c690d450d
2024-06-09 20:33:24 -07:00
jade 9aeb314e6a releng: support multiarch docker images
If we don't want to have separate registry tags by architecture (EWWWW),
we need to be able to build multiarch docker images. This is pretty
simple, and just requires making a manifest pointing to each of the
component images.

I was *going* to just do this API prodding with manifest-tool, but it
doesn't support putting metadata on the outer manifest, which is
actually kind of a problem because it then doesn't render the metadata
on github. So I guess we get a simple little containers API
implementation that is 90% auth code.
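For reference, the outer manifest being assembled is a standard Docker manifest list / OCI image index; a trimmed, purely illustrative example of the format (not the actual Lix manifest):

```console
$ skopeo inspect --raw docker://ghcr.io/lix-project/lix:latest
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.list.v2+json",
  "manifests": [
    { "digest": "sha256:...", "platform": { "architecture": "amd64", "os": "linux" } },
    { "digest": "sha256:...", "platform": { "architecture": "arm64", "os": "linux" } }
  ]
}
```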

Change-Id: I8bdd118d4cbc13b23224f2fb174b232432686bea
2024-06-09 20:33:24 -07:00
jade 4392d89eea manual: rewrite the docker guide now that we have images
Change-Id: I5bdf47e67059ae4099552750a47ae070dbe094df
2024-06-09 20:33:24 -07:00
jade 9bb7fb8f69 Rewrite docker to be sensible and smaller
I have checked the image can build things and inspected `diff -ru`
compared to the old image. As far as I can tell it is more or less
the same besides the later git change.

Layers are now 65MB or less, and we aren't against the maxLayers limit
for the broken automatic layering to do anything but shove one store
path in a layer (which is good behaviour, actually).

This uses nix2container which streams images, so the build time is much
shorter.
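Copying the streamed image out follows nix2container's copy attributes; these are the commands the rewritten guide (further down in this diff) documents:

```console
$ nix run '.#dockerImage.copyTo' containers-storage:lix   # Podman
$ nix run '.#dockerImage.copyToDockerDaemon'              # Docker
```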

I have also taken the opportunity to, in addition to fixing the 400MB
single layer (terrible, and what motivated this in the first place),
delete about 200MB of closure size inflicted by using git instead of
gitMinimal, which pulled both perl and python into the closure.

People mostly use this thing for CI, so I don't really think you need
advanced git operations, and large git can be added at the user side if
really motivated.

With love for whichever container developer somewhat ironically assumed
that one would not run skopeo in a minimal container that doesn't have a
/var/tmp.

Fixes: lix-project/lix#378

Change-Id: Icc3aa20e64446276716fbbb87535fd5b50628010
2024-06-09 20:33:24 -07:00
jade 7dfa2a761e Merge changes from topic "releng" into main
* changes:
  releng: support pushing the manual to docs also
  Expose officialRelease from the flake
  Put into place initial release engineering
2024-06-09 08:28:52 +00:00
jade ff95b980d4 Implement docker upload in the releng tools
This uses skopeo so we don't have to think about docker daemons. However, I
noticed that the docker image we had would have totally terrible cache hits,
so I rewrote it.
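skopeo can push the built archive straight to a registry without a daemon; a hedged sketch (registry path and tag illustrative):

```console
$ skopeo copy docker-archive:image.tar docker://ghcr.io/lix-project/lix:2.90.0
```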

Fixes: lix-project/lix#252

Change-Id: I3c5b6c1f3ba0b9dfcac212b2148f390e0cd542b7
2024-06-09 00:30:12 -07:00
Pierre Bourdon 28a079f841
flake: update nixpkgs pin 23.11->24.05 (+ boehmgc compat changes)
The boehmgc changes are bundled into this commit because doing otherwise
would require an annoying dance of "adding compatibility for < 8.2.6 and
>= 8.2.6" then updating the pin then removing the (now unneeded)
compatibility. It doesn't seem worth the trouble to me given the low
complexity of said changes.

Rebased coroutine-sp-fallback.diff patch taken from https://github.com/NixOS/nixpkgs/pull/317227

Change-Id: I8c590e9fe25c0f566d0cfeacb96d8cf50abf12e8
2024-06-09 01:25:53 +02:00
Pierre Bourdon 9281a12532
tests/nixos/nix-copy: fix NixOS >= 24.05 compatibility
4b128008c5d9fde881ce1b0a25e60ae0415a14d5 in nixpkgs introduced a default
hashedPasswordFile for root in NixOS tests, which takes precedence over
the password option set in the nix-copy test.

Change-Id: Iffaebec5992e50614b854033f0d14312c8d275b5
2024-06-08 17:59:08 +02:00
Mario Rodas a05de58ebd tests: fix daemon version in isDaemonNewer function
Since ad8a4b380e, the version printer returns "nix (Lix, like Nix) 2.x",
so `daemonVersion` was being set to the string "like".

Using `compareVersions` with a letter compares them lexicographically:

   builtins.compareVersions "like" "2.12pre20230103"  // => -1
   builtins.compareVersions "like" "2.16.0"           // => -1

This caused `isDaemonNewer` to always return 1, which is falsy in Bash terms.
As a result, the test suite skipped the tests that use it.
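A hedged shell sketch of the failure mode and a robust extraction (field positions illustrative, not the exact test-suite code):

```console
$ nix --version
nix (Lix, like Nix) 2.90.0
$ nix --version | cut -d' ' -f3      # positional extraction now yields "like"
like
$ nix --version | awk '{print $NF}'  # taking the last field is robust
2.90.0
```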

Fixes lix-project/lix#324

Change-Id: If6682515bf0bf8b8add641af9a4e98b50a9acb51
2024-06-08 04:20:00 +00:00
jade 4f94531209 Merge changes from topic "releng" into main
* changes:
  Add meson release note
  Move version to a JSON file so we can have release names
  Remove rl-next-dev
2024-06-07 03:53:31 +00:00
jade 98e8475147 releng: support pushing the manual to docs also
Change-Id: Ifd0b51425ee4955e0230fb2804a6f54ef0fe16e9
2024-06-06 20:53:08 -07:00
jade bdf1b264ad Expose officialRelease from the flake
Change-Id: If87beb3f31dfb5d59862294ac2e1c821ea864277
2024-06-06 20:53:08 -07:00
jade c32a01f9eb Put into place initial release engineering
This can release x86_64-linux binaries to staging, with ephemeral keys.
I think it's good enough to review at least at this point, so we don't
keep adding more stuff to it to make it harder to review.

Change-Id: Ie95e8f35d1252f5d014e819566f170b30eda152e
2024-06-06 20:53:08 -07:00
Qyriad ec768df004 Merge changes Ic4be41eb,I48db2385 into main
* changes:
  devshells: only enable pch for clang
  build: expose option to enable or disable precompiled std headers
2024-06-06 22:21:52 +00:00
jade 611b1de441 Add meson release note
Change-Id: I4d2d08dc77a3ab4dce9fbb129c1487aa8c9f1722
2024-06-06 15:08:12 -07:00
jade 9c77c62e73 Move version to a JSON file so we can have release names
Change-Id: I5ff3396a302565ee5ee6c2db97e048e403779076
2024-06-06 15:08:12 -07:00
jade 24057dcb6a Remove rl-next-dev
We realized that there's really no good place to put these dev-facing
bulletins, and the user-facing release notes aren't really the worst place
for them, I guess; we do kind of hope that they convert users to devs.

Change-Id: Id9387b2964fe291cb5a3f74ad6344157f19b540c
2024-06-06 15:08:12 -07:00
jade 1659404626 Add xonsh to the shell
Change-Id: If8f3825d2bdcc3f1d00583a11d890c1c8ab37b9f
2024-06-06 14:50:27 -07:00
jade e0748377dc pname: nix -> lix
This had a regression last time: https://gerrit.lix.systems/c/lix/+/1196

But f3f68fcfa fixed upgrade-nix to not be broken, so this should be ok tbh.

Change-Id: I48ea1359790878bb8ead5d8a4b3f61caa4aabfb5
2024-06-06 20:42:29 +00:00
Qyriad 766e718f67 devshells: only enable pch for clang
clangd seems to break if GCC is using precompiled headers for C++'s
standard library, so this sets -Denable-pch-std=${stdenv.cc.isClang}
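The new option can also be toggled by hand with standard Meson usage:

```console
$ meson configure build -Denable-pch-std=false
```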

Fixes #374.

Change-Id: Ic4be41ebe7576ebcb9c208275596f953c2003109
2024-06-06 12:48:13 -06:00
Qyriad 06e65e537b build: expose option to enable or disable precompiled std headers
They are enabled by default, and Meson prints whether or not they're enabled
at the end of configuration.

Change-Id: I48db238510bf9e74340b86f243f4bbe360794281
2024-06-06 12:46:26 -06:00
jade 8f9bcd20eb Merge "libstore/filetransfer: fix no-s3 build" into main 2024-06-06 03:08:14 +00:00
Linus Heckemann 609b721425 libstore/filetransfer: fix no-s3 build
Fixes a compiler error that looks like:

error: could not convert '[...]' from 'future<void>' to 'future<nix::FileTransferResult>'

Change-Id: I4aeadfeba0dadfdf133f25e6abce90ede7a86ca6
2024-06-05 15:50:57 -07:00
50 changed files with 2313 additions and 297 deletions

.gitignore

@@ -28,3 +28,4 @@ buildtime.bin
 # We generate this with a Nix shell hook
 /.pre-commit-config.yaml
 /.nocontribmsg
+/release

.version

@@ -1 +0,0 @@
-2.90.0


@@ -1,11 +1,13 @@
 ---
 synopsis: Clang build timing analysis
 cls: 587
+category: Development
 ---
 
 We now have Clang build profiling available, which generates Chrome
 tracing files for each compilation unit. To enable it, run `meson configure
-build -Dprofile-build=enabled` then rerun the compilation.
+build -Dprofile-build=enabled` in a Clang stdenv (`nix develop
+.#native-clangStdenvPackages`) then rerun the compilation.
 
 If you want to make the build go faster, do a clang build with meson, then run
 `maintainers/buildtime_report.sh build`, then contemplate how to improve the
@@ -13,3 +15,8 @@ build time.
 
 You can also look at individual object files' traces in
 <https://ui.perfetto.dev>.
+
+See [the wiki page][improving-build-times-wiki] for more details on how to do
+this.
+
+[improving-build-times-wiki]: https://wiki.lix.systems/link/8#bkmrk-page-title


@@ -0,0 +1,15 @@
---
synopsis: Lix is built with meson
# and many more
cls: [580, 627, 628, 707, 711, 712, 719]
credits: [Qyriad, horrors, jade, 9999years, winter]
category: Packaging
---
Lix is built exclusively with the meson build system thanks to a huge team-wide
effort, and the legacy `make`/`autoconf` based build system has been removed
altogether. This improves maintainability of Lix, enables things like saving
20% of compile times with precompiled headers, and generally makes the build
less able to produce obscure incremental compilation bugs.
Non-Nix-based downstream packaging needs rewriting accordingly.


@@ -1,64 +1,62 @@
 # Using Lix within Docker
 
-Currently the Lix project doesn't ship docker images. However, we have the infrastructure to do it, it's just not yet been done. See https://git.lix.systems/lix-project/lix/issues/252
+Lix is available on the following two container registries:
+- [ghcr.io/lix-project/lix](https://ghcr.io/lix-project/lix)
+- [git.lix.systems/lix-project/lix](https://git.lix.systems/lix-project/-/packages/container/lix)
 
-<!--
 To run the latest stable release of Lix with Docker run the following command:
 
 ```console
-$ docker run -ti nixos/nix
-Unable to find image 'nixos/nix:latest' locally
-latest: Pulling from nixos/nix
-5843afab3874: Pull complete
-b52bf13f109c: Pull complete
-1e2415612aa3: Pull complete
-Digest: sha256:27f6e7f60227e959ee7ece361f75d4844a40e1cc6878b6868fe30140420031ff
-Status: Downloaded newer image for nixos/nix:latest
-35ca4ada6e96:/# nix --version
-nix (Nix) 2.3.12
-35ca4ada6e96:/# exit
+~ » sudo podman run -it ghcr.io/lix-project/lix:latest
+Trying to pull ghcr.io/lix-project/lix:latest...
+
+bash-5.2# nix --version
+nix (Lix, like Nix) 2.90.0
 ```
 
 # What is included in Lix's Docker image?
 
-The official Docker image is created using `pkgs.dockerTools.buildLayeredImage`
+The official Docker image is created using [nix2container]
 (and not with `Dockerfile` as it is usual with Docker images). You can still
 base your custom Docker image on it as you would do with any other Docker
 image.
 
-The Docker image is also not based on any other image and includes minimal set
-of runtime dependencies that are required to use Lix:
+[nix2container]: https://github.com/nlewo/nix2container
 
-- pkgs.nix
-- pkgs.bashInteractive
-- pkgs.coreutils-full
-- pkgs.gnutar
-- pkgs.gzip
-- pkgs.gnugrep
-- pkgs.which
-- pkgs.curl
-- pkgs.less
-- pkgs.wget
-- pkgs.man
-- pkgs.cacert.out
-- pkgs.findutils
+The Docker image is also not based on any other image and includes the nixpkgs
+that Lix was built with along with a minimal set of tools in the system profile:
+
+- bashInteractive
+- cacert.out
+- coreutils-full
+- curl
+- findutils
+- gitMinimal
+- gnugrep
+- gnutar
+- gzip
+- iana-etc
+- less
+- libxml2
+- lix
+- man
+- openssh
+- sqlite
+- wget
+- which
 
 # Docker image with the latest development version of Lix
 
-To get the latest image that was built by [Hydra](https://hydra.nixos.org) run
-the following command:
+FIXME: There are not currently images of development versions of Lix. Tracking issue: https://git.lix.systems/lix-project/lix/issues/381
+
+You can build a Docker image from source yourself and copy it to either:
+
+Podman: `nix run '.#dockerImage.copyTo' containers-storage:lix`
+
+Docker: `nix run '.#dockerImage.copyToDockerDaemon'`
+
+Then:
 
 ```console
-$ curl -L https://hydra.nixos.org/job/nix/master/dockerImage.x86_64-linux/latest/download/1 | docker load
-$ docker run -ti nix:2.5pre20211105
+$ docker run -ti lix
 ```
-
-You can also build a Docker image from source yourself:
-
-```console
-$ nix build ./\#hydraJobs.dockerImage.x86_64-linux
-$ docker load -i ./result/image.tar.gz
-$ docker run -ti nix:2.5pre20211105
-```
--->

docker.nix

@@ -1,7 +1,10 @@
 {
   pkgs ? import <nixpkgs> { },
+  # Git commit ID, if available
+  lixRevision ? null,
+  nix2container,
   lib ? pkgs.lib,
-  name ? "nix",
+  name ? "lix",
   tag ? "latest",
   bundleNixpkgs ? true,
   channelName ? "nixpkgs",
@@ -12,26 +15,51 @@
   flake-registry ? null,
 }:
 let
+  layerContents = with pkgs; [
+    # pulls in glibc and openssl, about 60MB
+    { contents = [ coreutils-full ]; }
+    # some stuff that is low in the closure graph and small ish, mostly to make
+    # incremental lix updates cheaper
+    {
+      contents = [
+        curl
+        libxml2
+        sqlite
+      ];
+    }
+    # 50MB of git
+    { contents = [ gitMinimal ]; }
+    # 144MB of nixpkgs
+    {
+      contents = [ channel ];
+      inProfile = false;
+    }
+  ];
+
+  # These packages are left to be auto layered by nix2container, since it is
+  # less critical that they get layered sensibly and they tend to not be deps
+  # of anything in particular
+  autoLayered = with pkgs; [
+    bashInteractive
+    gnutar
+    gzip
+    gnugrep
+    which
+    less
+    wget
+    man
+    cacert.out
+    findutils
+    iana-etc
+    openssh
+    nix
+  ];
+
   defaultPkgs =
-    with pkgs;
-    [
-      nix
-      bashInteractive
-      coreutils-full
-      gnutar
-      gzip
-      gnugrep
-      which
-      curl
-      less
-      wget
-      man
-      cacert.out
-      findutils
-      iana-etc
-      git
-      openssh
-    ]
+    lib.lists.flatten (
+      map (x: if !(x ? inProfile) || x.inProfile then x.contents else [ ]) layerContents
+    )
+    ++ autoLayered
     ++ extraPkgs;
 
   users =
@@ -139,16 +167,17 @@ let
     ))
     + "\n";
 
+  nixpkgs = pkgs.path;
+  channel = pkgs.runCommand "channel-nixpkgs" { } ''
+    mkdir $out
+    ${lib.optionalString bundleNixpkgs ''
+      ln -s ${nixpkgs} $out/nixpkgs
+      echo "[]" > $out/manifest.nix
+    ''}
+  '';
+
   baseSystem =
     let
-      nixpkgs = pkgs.path;
-      channel = pkgs.runCommand "channel-nixos" { inherit bundleNixpkgs; } ''
-        mkdir $out
-        if [ "$bundleNixpkgs" ]; then
-          ln -s ${nixpkgs} $out/nixpkgs
-          echo "[]" > $out/manifest.nix
-        fi
-      '';
       rootEnv = pkgs.buildPackages.buildEnv {
        name = "root-profile-env";
        paths = defaultPkgs;
@@ -187,7 +216,7 @@ let
      profile = pkgs.buildPackages.runCommand "user-environment" { } ''
        mkdir $out
        cp -a ${rootEnv}/* $out/
-        ln -s ${manifest} $out/manifest.nix
+        ln -sf ${manifest} $out/manifest.nix
      '';
      flake-registry-path =
        if (flake-registry == null) then
@@ -236,6 +265,7 @@ let
        ln -s /nix/var/nix/profiles/share $out/usr/
 
        mkdir -p $out/nix/var/nix/gcroots
+        ln -s /nix/var/nix/profiles $out/nix/var/nix/gcroots/profiles
 
        mkdir $out/tmp
@@ -248,14 +278,14 @@ let
        mkdir -p $out/nix/var/nix/profiles/per-user/root
 
        ln -s ${profile} $out/nix/var/nix/profiles/default-1-link
-        ln -s $out/nix/var/nix/profiles/default-1-link $out/nix/var/nix/profiles/default
+        ln -s /nix/var/nix/profiles/default-1-link $out/nix/var/nix/profiles/default
        ln -s /nix/var/nix/profiles/default $out/root/.nix-profile
 
        ln -s ${channel} $out/nix/var/nix/profiles/per-user/root/channels-1-link
-        ln -s $out/nix/var/nix/profiles/per-user/root/channels-1-link $out/nix/var/nix/profiles/per-user/root/channels
+        ln -s /nix/var/nix/profiles/per-user/root/channels-1-link $out/nix/var/nix/profiles/per-user/root/channels
 
        mkdir -p $out/root/.nix-defexpr
-        ln -s $out/nix/var/nix/profiles/per-user/root/channels $out/root/.nix-defexpr/channels
+        ln -s /nix/var/nix/profiles/per-user/root/channels $out/root/.nix-defexpr/channels
        echo "${channelURL} ${channelName}" > $out/root/.nix-channels
 
        mkdir -p $out/bin $out/usr/bin
@@ -273,43 +303,99 @@ let
        ln -s $globalFlakeRegistryPath $out/nix/var/nix/gcroots/auto/$rootName
      '')
    );
-in
-pkgs.dockerTools.buildLayeredImageWithNixDb {
-  inherit name tag maxLayers;
 
-  contents = [ baseSystem ];
+  layers = builtins.foldl' (
+    layersList: el:
+    let
+      layer = nix2container.buildLayer {
+        deps = el.contents;
+        layers = layersList;
+      };
+    in
+    layersList ++ [ layer ]
+  ) [ ] layerContents;
 
-  extraCommands = ''
-    rm -rf nix-support
-    ln -s /nix/var/nix/profiles nix/var/nix/gcroots/profiles
-  '';
-  fakeRootCommands = ''
-    chmod 1777 tmp
-    chmod 1777 var/tmp
-  '';
+  image = nix2container.buildImage {
+    inherit name tag maxLayers;
 
-  config = {
-    Cmd = [ "/root/.nix-profile/bin/bash" ];
-    Env = [
-      "USER=root"
-      "PATH=${
-        lib.concatStringsSep ":" [
-          "/root/.nix-profile/bin"
-          "/nix/var/nix/profiles/default/bin"
-          "/nix/var/nix/profiles/default/sbin"
-        ]
-      }"
-      "MANPATH=${
-        lib.concatStringsSep ":" [
-          "/root/.nix-profile/share/man"
-          "/nix/var/nix/profiles/default/share/man"
-        ]
-      }"
-      "SSL_CERT_FILE=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
-      "GIT_SSL_CAINFO=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
-      "NIX_SSL_CERT_FILE=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
-      "NIX_PATH=/nix/var/nix/profiles/per-user/root/channels:/root/.nix-defexpr/channels"
+    inherit layers;
+
+    copyToRoot = [ baseSystem ];
+
+    initializeNixDatabase = true;
+
+    perms = [
+      {
+        path = baseSystem;
+        regex = "(/var)?/tmp";
+        mode = "1777";
+      }
     ];
+
+    config = {
+      Cmd = [ "/root/.nix-profile/bin/bash" ];
+
+      Env = [
+        "USER=root"
+        "PATH=${
+          lib.concatStringsSep ":" [
+            "/root/.nix-profile/bin"
+            "/nix/var/nix/profiles/default/bin"
+            "/nix/var/nix/profiles/default/sbin"
+          ]
+        }"
+        "MANPATH=${
+          lib.concatStringsSep ":" [
+            "/root/.nix-profile/share/man"
+            "/nix/var/nix/profiles/default/share/man"
+          ]
+        }"
+        "SSL_CERT_FILE=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
+        "GIT_SSL_CAINFO=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
+        "NIX_SSL_CERT_FILE=/nix/var/nix/profiles/default/etc/ssl/certs/ca-bundle.crt"
+        "NIX_PATH=/nix/var/nix/profiles/per-user/root/channels:/root/.nix-defexpr/channels"
+      ];
+
+      Labels = {
+        "org.opencontainers.image.title" = "Lix";
+        "org.opencontainers.image.source" = "https://git.lix.systems/lix-project/lix";
+        "org.opencontainers.image.vendor" = "Lix project";
+        "org.opencontainers.image.version" = pkgs.nix.version;
+        "org.opencontainers.image.description" = "Minimal Lix container image, with some batteries included.";
+      } // lib.optionalAttrs (lixRevision != null) { "org.opencontainers.image.revision" = lixRevision; };
+    };
+
+    meta = {
+      description = "Docker image for Lix. This is built with nix2container; see that project's README for details";
+      longDescription = ''
+        Docker image for Lix, built with nix2container.
+        To copy it to your docker daemon, nix run .#dockerImage.copyToDockerDaemon
+        To copy it to podman, nix run .#dockerImage.copyTo containers-storage:lix
+      '';
+    };
   };
+in
+image
+// {
+  # We don't ship the tarball as the default output because it is a strange thing to want imo
+  tarball =
+    pkgs.buildPackages.runCommand "docker-image-tarball-${pkgs.nix.version}"
+      {
+        nativeBuildInputs = [ pkgs.buildPackages.bubblewrap ];
+        meta.description = "Docker image tarball with Lix for ${pkgs.system}";
+      }
+      ''
+        mkdir -p $out/nix-support
+        image=$out/image.tar
+        # bwrap for foolish temp dir selection code that forces /var/tmp:
+        # https://github.com/containers/skopeo.git/blob/60ee543f7f7c242f46cc3a7541d9ac8ab1c89168/vendor/github.com/containers/image/v5/internal/tmpdir/tmpdir.go#L15-L18
+        mkdir -p $TMPDIR/fake-var/tmp
+        args=(--unshare-user --bind "$TMPDIR/fake-var" /var)
+        for dir in /*; do
+          args+=(--dev-bind "/$dir" "/$dir")
+        done
+        bwrap ''${args[@]} -- ${lib.getExe image.copyTo} docker-archive:$image
+        gzip $image
+        echo "file binary-dist $image" >> $out/nix-support/hydra-build-products
+      '';
 }

flake.lock

@@ -16,6 +16,22 @@
       "type": "github"
     }
   },
+  "nix2container": {
+    "flake": false,
+    "locked": {
+      "lastModified": 1712990762,
+      "narHash": "sha256-hO9W3w7NcnYeX8u8cleHiSpK2YJo7ecarFTUlbybl7k=",
+      "owner": "nlewo",
+      "repo": "nix2container",
+      "rev": "20aad300c925639d5d6cbe30013c8357ce9f2a2e",
+      "type": "github"
+    },
+    "original": {
+      "owner": "nlewo",
+      "repo": "nix2container",
+      "type": "github"
+    }
+  },
   "nixpkgs": {
     "locked": {
       "lastModified": 1715123187,
@@ -67,6 +83,7 @@
   "root": {
     "inputs": {
       "flake-compat": "flake-compat",
+      "nix2container": "nix2container",
       "nixpkgs": "nixpkgs",
       "nixpkgs-regression": "nixpkgs-regression",
       "pre-commit-hooks": "pre-commit-hooks"

flake.nix

@@ -8,6 +8,10 @@
       url = "github:cachix/git-hooks.nix";
       flake = false;
     };
+    nix2container = {
+      url = "github:nlewo/nix2container";
+      flake = false;
+    };
     flake-compat = {
       url = "github:edolstra/flake-compat";
       flake = false;
@@ -20,6 +24,7 @@
       nixpkgs,
       nixpkgs-regression,
       pre-commit-hooks,
+      nix2container,
       flake-compat,
     }:
@@ -59,7 +64,6 @@
       # Set to true to build the release notes for the next release.
       buildUnreleasedNotes = true;
 
-      version = lib.fileContents ./.version + versionSuffix;
       versionSuffix =
         if officialRelease then
           ""
@@ -149,8 +153,7 @@
         }
       );
 
-      binaryTarball =
-        nix: pkgs: pkgs.callPackage ./nix-support/binary-tarball.nix { inherit nix version; };
+      binaryTarball = nix: pkgs: pkgs.callPackage ./nix-support/binary-tarball.nix { inherit nix; };
 
       overlayFor =
         getStdenv: final: prev:
@@ -164,6 +167,7 @@
           nixUnstable = prev.nixUnstable;
 
           check-headers = final.buildPackages.callPackage ./maintainers/check-headers.nix { };
+          check-syscalls = final.buildPackages.callPackage ./maintainers/check-syscalls.nix { };
           clangbuildanalyzer = final.buildPackages.callPackage ./misc/clangbuildanalyzer.nix { };
 
           default-busybox-sandbox-shell = final.busybox.override {
@@ -191,7 +195,7 @@
           };
 
           nix = final.callPackage ./package.nix {
-            inherit versionSuffix;
+            inherit versionSuffix officialRelease;
             stdenv = currentStdenv;
             busybox-sandbox-shell = final.busybox-sandbox-shell or final.default-busybox-sandbox-shell;
           };
@@ -209,7 +213,6 @@
       overlays.default = overlayFor (p: p.stdenv);
 
       hydraJobs = {
-
         # Binary package for various platforms.
         build = forAllSystems (system: self.packages.${system}.nix);
@@ -227,7 +230,6 @@
         in
         {
           user = rl-next-check "rl-next" ./doc/manual/rl-next;
-          dev = rl-next-check "rl-next-dev" ./doc/manual/rl-next-dev;
         }
       );
@@ -300,6 +302,11 @@
       );
     };
 
+    release-jobs = import ./releng/release-jobs.nix {
+      inherit (self) hydraJobs;
+      pkgs = nixpkgsFor.x86_64-linux.native;
+    };
+
     # NOTE *do not* add fresh derivations to checks, always add them to
     # hydraJobs first (so CI will pick them up) and only link them here
     checks = forAvailableSystems (
@@ -309,7 +316,6 @@
           perlBindings = self.hydraJobs.perlBindings.${system};
           nixpkgsLibTests = self.hydraJobs.tests.nixpkgsLibTests.${system};
           rl-next = self.hydraJobs.rl-next.${system}.user;
-          rl-next-dev = self.hydraJobs.rl-next.${system}.dev;
           # Will be empty attr set on i686-linux, and filtered out by forAvailableSystems.
           pre-commit = self.hydraJobs.pre-commit.${system};
         }
@@ -330,19 +336,13 @@
           dockerImage =
             let
              pkgs = nixpkgsFor.${system}.native;
-              image = import ./docker.nix {
-                inherit pkgs;
-                tag = version;
-              };
+              nix2container' = import nix2container { inherit pkgs system; };
            in
-            pkgs.runCommand "docker-image-tarball-${version}"
-              { meta.description = "Docker image with Lix for ${system}"; }
-              ''
-                mkdir -p $out/nix-support
-                image=$out/image.tar.gz
-                ln -s ${image} $image
-                echo "file binary-dist $image" >> $out/nix-support/hydra-build-products
-              '';
         }
         // builtins.listToAttrs (
          map (crossSystem: {
+            import ./docker.nix {
+              inherit pkgs;
+              nix2container = nix2container'.nix2container;
+              tag = pkgs.nix.version;
+            };
@@ -365,7 +365,7 @@
         pkgs: stdenv:
         let
           nix = pkgs.callPackage ./package.nix {
-            inherit stdenv versionSuffix;
+            inherit stdenv officialRelease versionSuffix;
             busybox-sandbox-shell = pkgs.busybox-sandbox-shell or pkgs.default-busybox-sandbox;
             internalApiDocs = true;
           };


@@ -1,6 +1,5 @@
 from collections import defaultdict
 import frontmatter
-import sys
 import pathlib
 import textwrap
 from typing import Any, Tuple
@@ -27,6 +26,7 @@ CATEGORIES = [
     'Improvements',
     'Fixes',
     'Packaging',
+    'Development',
     'Miscellany',
 ]
@@ -143,7 +143,7 @@ def run_on_dir(author_info: AuthorInfoDB, d):
     for category in CATEGORIES:
         if entries[category]:
-            print('\n#', category)
+            print('\n##', category)
             do_category(author_info, entries[category])
 
 def main():

maintainers/check-syscalls.nix

@@ -0,0 +1,16 @@
{
  runCommandNoCC,
  lib,
  libseccomp,
  writeShellScriptBin,
}:
let
  syscalls-csv = runCommandNoCC "syscalls.csv" { } ''
    echo ${lib.escapeShellArg libseccomp.src}
    tar -xf ${lib.escapeShellArg libseccomp.src} --strip-components=2 ${libseccomp.name}/src/syscalls.csv
    mv syscalls.csv "$out"
  '';
in
writeShellScriptBin "check-syscalls" ''
  ${./check-syscalls.sh} ${syscalls-csv}
''

maintainers/check-syscalls.sh

@@ -0,0 +1,7 @@
#!/usr/bin/env bash
set -e
diff -u <(awk < src/libstore/build/local-derivation-goal.cc '/BEGIN extract-syscalls/ { extracting = 1; next }
match($0, /allowSyscall\(ctx, SCMP_SYS\(([^)]*)\)\);|\/\/ skip ([^ ]*)/, result) { print result[1] result[2] }
/END extract-syscalls/ { extracting = 0; next }') <(tail -n+2 "$1" | cut -d, -f 1)

meson.build

@@ -39,7 +39,7 @@
 # in the build directory.
 project('lix', 'cpp',
-    version : run_command('bash', '-c', 'echo -n $(cat ./.version)$VERSION_SUFFIX', check : true).stdout().strip(),
+    version : run_command('bash', '-c', 'echo -n $(jq -r .version < ./version.json)$VERSION_SUFFIX', check : true).stdout().strip(),
     default_options : [
         'cpp_std=c++2a',
         # TODO(Qyriad): increase the warning level
@@ -129,6 +129,20 @@ endif
 
 cxx = meson.get_compiler('cpp')
 
+# clangd breaks when GCC is using precompiled headers lmao
+# https://git.lix.systems/lix-project/lix/issues/374
+should_pch = get_option('enable-pch-std')
+summary('PCH C++ stdlib', should_pch, bool_yn : true)
+if should_pch
+  # Unlike basically everything else that takes a file, Meson requires the arguments to
+  # cpp_pch : to be strings and doesn't accept files(). So absolute path it is.
+  cpp_pch = [meson.project_source_root() / 'src/pch/precompiled-headers.hh']
+else
+  cpp_pch = []
+endif
+
 # Translate some historical and Mesony CPU names to Lixy CPU names.
 # FIXME(Qyriad): the 32-bit x86 code is not tested right now, because cross compilation for Lix
 # to those architectures is currently broken for other reasons, namely:


@@ -64,3 +64,7 @@ option('internal-api-docs', type : 'feature', value : 'auto',
 option('profile-dir', type : 'string', value : 'etc/profile.d',
   description : 'the path to install shell profile files',
 )
+
+option('enable-pch-std', type : 'boolean', value : true,
+  description : 'whether to use precompiled headers for C++\'s standard library (breaks clangd if you\'re using GCC)',
+)


@@ -63,7 +63,7 @@ pre-commit-run {
      files = ''^doc/manual/(change-authors\.yml|rl-next(-dev)?)'';
      pass_filenames = false;
      entry = ''
-        ${lib.getExe pkgs.build-release-notes} --change-authors doc/manual/change-authors.yml doc/manual/rl-next doc/manual/rl-next-dev
+        ${lib.getExe pkgs.build-release-notes} --change-authors doc/manual/change-authors.yml doc/manual/rl-next
      '';
    };
    change-authors-sorted = {

nix-support/binary-tarball.nix

@@ -3,7 +3,6 @@
   cacert,
   nix,
   system,
-  version,
 }:
 let
   installerClosureInfo = buildPackages.closureInfo {
@@ -15,10 +14,10 @@ let
   meta.description = "Distribution-independent Lix bootstrap binaries for ${system}";
 in
-buildPackages.runCommand "nix-binary-tarball-${version}" { inherit meta; } ''
+buildPackages.runCommand "lix-binary-tarball-${nix.version}" { inherit meta; } ''
   cp ${installerClosureInfo}/registration $TMPDIR/reginfo
 
-  dir=nix-${version}-${system}
+  dir=lix-${nix.version}-${system}
   fn=$out/$dir.tar.xz
   mkdir -p $out/nix-support
   echo "file binary-dist $fn" >> $out/nix-support/hydra-build-products

package.nix

@@ -39,10 +39,12 @@
   pkg-config,
   python3,
   rapidcheck,
+  skopeo,
   sqlite,
   toml11,
   util-linuxMinimal ? utillinuxMinimal,
   utillinuxMinimal ? null,
+  xonsh-unwrapped,
   xz,
 
   busybox-sandbox-shell,
@@ -50,7 +52,7 @@
   # internal fork of nix-doc providing :doc in the repl
   lix-doc ? __forDefaults.lix-doc,
 
-  pname ? "nix",
+  pname ? "lix",
   versionSuffix ? "",
   officialRelease ? false,
   # Set to true to build the release notes for the next release.
@@ -87,7 +89,8 @@ let
   inherit (lib) fileset;
   inherit (stdenv) hostPlatform buildPlatform;
 
-  version = lib.fileContents ./.version + versionSuffix;
+  versionJson = builtins.fromJSON (builtins.readFile ./version.json);
+  version = versionJson.version + versionSuffix;
 
   aws-sdk-cpp-nix = aws-sdk-cpp.override {
     apis = [
@@ -137,7 +140,7 @@ let
   # that would interfere with repo semantics.
   baseFiles = fileset.fileFilter (f: f.name != ".gitignore") ./.;
 
-  configureFiles = fileset.unions [ ./.version ];
+  configureFiles = fileset.unions [ ./version.json ];
 
   topLevelBuildFiles = fileset.unions ([
     ./meson.build
@@ -384,6 +387,8 @@ stdenv.mkDerivation (finalAttrs: {
   passthru = {
     inherit (__forDefaults) boehmgc-nix editline-lix build-release-notes;
 
+    inherit officialRelease;
+
     # The collection of dependency logic for this derivation is complicated enough that
     # it's easier to parameterize the devShell off an already called package.nix.
     mkDevShell =
@@ -397,6 +402,7 @@ stdenv.mkDerivation (finalAttrs: {
         llvmPackages,
         clangbuildanalyzer,
         contribNotice,
+        check-syscalls,
       }:
       let
         glibcFix = lib.optionalAttrs (buildPlatform.isLinux && glibcLocales != null) {
@@ -408,6 +414,19 @@ stdenv.mkDerivation (finalAttrs: {
         # default LLVM is newer.
         clang-tools_llvm = clang-tools.override { inherit llvmPackages; };
 
+        pythonPackages = (
+          p: [
+            p.yapf
+            p.python-frontmatter
+            p.requests
+            p.xdg-base-dirs
+            (p.toPythonModule xonsh-unwrapped)
+          ]
+        );
+        # FIXME: This will explode when we switch to 24.05 if we don't backport
+        # https://github.com/NixOS/nixpkgs/pull/317636 first
+        pythonEnv = python3.withPackages pythonPackages;
+
         # pkgs.mkShell uses pkgs.stdenv by default, regardless of inputsFrom.
         actualMkShell = mkShell.override { inherit stdenv; };
       in
@@ -424,13 +443,22 @@ stdenv.mkDerivation (finalAttrs: {
           # For Meson to find Boost.
           env = finalAttrs.env;
 
-          # I guess this is necessary because mesonFlags to mkDerivation doesn't propagate in inputsFrom,
-          # which only propagates stuff set in hooks? idk.
-          inherit (finalAttrs) mesonFlags;
+          mesonFlags =
+            # I guess this is necessary because mesonFlags to mkDerivation doesn't propagate in inputsFrom,
+            # which only propagates stuff set in hooks? idk.
+            finalAttrs.mesonFlags
+            # Clangd breaks when GCC is using precompiled headers, so for the devshell specifically
+            # we make precompiled C++ stdlib conditional on using Clang.
+            # https://git.lix.systems/lix-project/lix/issues/374
+            ++ [ (lib.mesonBool "enable-pch-std" stdenv.cc.isClang) ];
 
           packages =
             lib.optional (stdenv.cc.isClang && hostPlatform == buildPlatform) clang-tools_llvm
             ++ [
+              pythonEnv
+              # docker image tool
+              skopeo
+              check-syscalls
               just
               nixfmt
               # Load-bearing order. Must come before clang-unwrapped below, but after clang_tools above.


@@ -23,7 +23,7 @@ perl.pkgs.toPerlModule (
     src = fileset.toSource {
       root = ../.;
       fileset = fileset.unions ([
-        ../.version
+        ../version.json
         ./lib
         ./meson.build
       ]);


@@ -1,5 +1,5 @@
 project('lix-perl', 'cpp',
-    version : run_command('bash', '-c', 'echo -n $(cat ../.version)$VERSION_SUFFIX', check : true).stdout().strip(),
+    version : run_command('bash', '-c', 'echo -n $(jq -r .version < ../version.json)$VERSION_SUFFIX', check : true).stdout().strip(),
     default_options : [
         'cpp_std=c++2a',
         # TODO(Qyriad): increase the warning level

releng/README.md

@@ -0,0 +1,132 @@
# Release engineering
This directory contains the release engineering scripts for Lix.
## Release process
### Prerequisites
* FIXME: validation via misc tests in nixpkgs, among other things? What other
validation do we need before we can actually release?
* Have a release post ready to go as a PR on the website repo.
* No [release-blocker bugs][release-blockers].
[release-blockers]: https://git.lix.systems/lix-project/lix/issues?q=&type=all&sort=&labels=145&state=open&milestone=0&project=0&assignee=0&poster=0
### Process
The following process can be done either against the staging environment or the
live environment.
For staging, the buckets are `staging-releases`, `staging-cache`, etc.
FIXME: obtainment of signing key for signing cache paths?
First, we prepare the release. `python -m releng prepare` is used for this.
* Gather everything in `doc/manual/rl-next` and put it in
`doc/manual/src/release-notes/rl-MAJOR.md`.
Then we tag the release with `python -m releng tag`:
* Git HEAD is detached.
* `officialRelease = true` is set in `flake.nix`, this is committed, and a
release is tagged.
* The tag is merged back into the last branch (either `main` for new releases
or `release-MAJOR` for maintenance releases) with `git merge -s ours VERSION`
creating a history link but ignoring the tree of the release tag.
* Git HEAD is once again detached onto the release tag.
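In plain git terms, that tag-and-merge dance is roughly the following sketch (the real implementation is `releng/create_release.xsh`, shown further down; VERSION is a placeholder):

```console
$ git switch --detach
$ sed -i 's/officialRelease = false/officialRelease = true/' flake.nix
$ git commit -am 'release: VERSION'
$ git tag -a VERSION
$ git switch main              # or release-MAJOR for maintenance releases
$ git merge -s ours VERSION    # history link, tree unchanged
$ git switch --detach VERSION
```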
Then, we build the release artifacts with `python -m releng build`:
* Source tarball is generated with `git archive`, then checksummed.
* Manifest for `nix upgrade-nix` is produced and put in `s3://releases` at
`/manifest.nix` and `/lix/lix-VERSION`.
* Release is built: `hydraJobs.binaryTarball` jobs are built, and joined into a
derivation that depends on all of them and adds checksum files. This and the
sources go into `s3://releases/lix/lix-VERSION`.
At this point we have a `release/artifacts` and `release/manual` directory
which are ready to publish, and tags ready for publication. No keys are
required to do this part.
Next, we do the publication with `python -m releng upload`:
* Artifacts for this release are uploaded:
* s3://releases/manifest.nix, changing the default version of Lix for
`nix upgrade-nix`.
* s3://releases/lix/lix-VERSION/ gets the following contents
* Binary tarballs
* Docs: `manual/` (FIXME: should we actually do this? what about putting it
on docs.lix.systems? I think doing both is correct, since the Web site
should not be an archive of random old manuals)
* Docs as tarball in addition to web.
* Source tarball
* Docker image (FIXME: upload to forgejo registry and github registry [in the future][upload-docker])
* s3://docs/manual/lix/MAJOR
* s3://docs/manual/lix/stable
* The tag is uploaded to the remote repo.
* **Manually** build the installer using the scripts in the installer repo and upload.
FIXME: This currently requires a local Apple Macintosh® aarch64 computer, but
we could possibly automate doing it from the aarch64-darwin builder.
* **Manually** Push the main/release branch directly to gerrit.
* If this is a new major release, branch-off to `release-MAJOR` and push *that* branch
directly to gerrit as well (FIXME: special creds for doing this as a service
account so we don't need to have the gerrit perms to shoot ourselves in the
foot by default because pushing to main is bad?).
FIXME: automate branch-off to `release-*` branch.
* **Manually** (FIXME?) switch back to the release branch, which now has the
correct revision.
* Post!!
* Merge release blog post to [lix-website].
* Toot about it! https://chaos.social/@lix_project
* Tweet about it! https://twitter.com/lixproject
[lix-website]: https://git.lix.systems/lix-project/lix-website
[upload-docker]: https://git.lix.systems/lix-project/lix/issues/252
### Installer
The installer is cross-built to several systems from a Mac using
`build-all.xsh` and `upload-to-lix.xsh` in the installer repo (FIXME: currently
at least; maybe this should be moved here?).
It installs a binary tarball (FIXME: [it should be taught to substitute from
cache instead][installer-substitute])
from some URL; this is the `hydraJobs.binaryTarball`. The default URLs differ
by architecture and are [configured here][tarball-urls].
[installer-substitute]: https://git.lix.systems/lix-project/lix-installer/issues/13
[tarball-urls]: https://git.lix.systems/lix-project/lix-installer/src/commit/693592ed10d421a885bec0a9dd45e87ab87eb90a/src/settings.rs#L14-L28
## Infrastructure summary
* releases.lix.systems (`s3://releases`):
* Each release gets a directory: https://releases.lix.systems/lix/lix-2.90-beta.1
* Binary tarballs: `nix-2.90.0-beta.1-x86_64-linux.tar.xz`, from `hydraJobs.binaryTarball`
* Manifest: `manifest.nix`, an attrset of the store paths by architecture.
* Manifest for `nix upgrade-nix` to the latest release at `/manifest.nix`.
* cache.lix.systems (`s3://cache`):
* Receives all artifacts for released versions of Lix; is a plain HTTP binary cache.
* install.lix.systems (`s3://install`):
```
~ » aws s3 ls s3://install/lix/
PRE lix-2.90-beta.0/
PRE lix-2.90-beta.1/
PRE lix-2.90.0pre20240411/
PRE lix-2.90.0pre20240412/
2024-05-05 18:59:11 6707344 lix-installer-aarch64-darwin
2024-05-05 18:59:16 7479768 lix-installer-aarch64-linux
2024-05-05 18:59:14 7982208 lix-installer-x86_64-darwin
2024-05-05 18:59:17 8978352 lix-installer-x86_64-linux
~ » aws s3 ls s3://install/lix/lix-2.90-beta.1/
2024-05-05 18:59:01 6707344 lix-installer-aarch64-darwin
2024-05-05 18:59:06 7479768 lix-installer-aarch64-linux
2024-05-05 18:59:03 7982208 lix-installer-x86_64-darwin
2024-05-05 18:59:07 8978352 lix-installer-x86_64-linux
```

releng/__init__.py

@@ -0,0 +1,39 @@
from xonsh.main import setup
setup()
del setup

import logging

from . import environment
from . import create_release
from . import keys
from . import version
from . import cli
from . import docker
from . import docker_assemble
from . import gitutils

rootLogger = logging.getLogger()
rootLogger.setLevel(logging.DEBUG)
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)

fmt = logging.Formatter('{asctime} {levelname} {name}: {message}',
                        datefmt='%b %d %H:%M:%S',
                        style='{')

if not any(isinstance(h, logging.StreamHandler) for h in rootLogger.handlers):
    hand = logging.StreamHandler()
    hand.setFormatter(fmt)
    rootLogger.addHandler(hand)


def reload():
    import importlib
    importlib.reload(environment)
    importlib.reload(create_release)
    importlib.reload(keys)
    importlib.reload(version)
    importlib.reload(cli)
    importlib.reload(docker)
    importlib.reload(docker_assemble)
    importlib.reload(gitutils)

releng/__main__.py

@@ -0,0 +1,3 @@
from . import cli

cli.main()

releng/cli.py

@@ -0,0 +1,112 @@
from . import create_release
from . import docker
from .environment import RelengEnvironment
from . import environment
import functools
import argparse
import sys


def do_build(args):
    if args.target == 'all':
        create_release.build_artifacts(no_check_git=args.no_check_git)
    elif args.target == 'manual':
        eval_result = create_release.eval_jobs()
        create_release.build_manual(eval_result)
    else:
        raise ValueError('invalid target, unreachable')


def do_tag(args):
    create_release.do_tag_merge(force_tag=args.force_tag,
                                no_check_git=args.no_check_git)


def do_upload(env: RelengEnvironment, args):
    create_release.setup_creds(env)
    if args.target == 'all':
        docker.check_all_logins(env)
        create_release.upload_artifacts(env,
                                        force_push_tag=args.force_push_tag,
                                        noconfirm=args.noconfirm,
                                        no_check_git=args.no_check_git)
    elif args.target == 'manual':
        create_release.upload_manual(env)
    else:
        raise ValueError('invalid target, unreachable')


def do_prepare(args):
    create_release.prepare_release_notes()


def main():
    ap = argparse.ArgumentParser(description='*Lix ur release engineering*')

    def fail(args):
        ap.print_usage()
        sys.exit(1)

    ap.set_defaults(cmd=fail)

    sps = ap.add_subparsers()

    prepare = sps.add_parser(
        'prepare',
        help='Prepares for a release by moving the release notes over.')
    prepare.set_defaults(cmd=do_prepare)

    tag = sps.add_parser(
        'tag',
        help=
        'Create the tag for the current release in .version and merge it back to the current branch, then switch to it'
    )
    tag.add_argument('--no-check-git',
                     action='store_true',
                     help="Don't check git state before tagging. For testing.")
    tag.add_argument('--force-tag',
                     action='store_true',
                     help='Overwrite the existing tag. For testing.')
    tag.set_defaults(cmd=do_tag)

    build = sps.add_parser(
        'build',
        help=
        'Build an artifacts/ directory with the things that would be released')
    build.add_argument(
        '--no-check-git',
        action='store_true',
        help="Don't check git state before building. For testing.")
    build.add_argument('--target',
                       choices=['manual', 'all'],
                       help='Whether to build everything or just the manual')
    build.set_defaults(cmd=do_build)

    upload = sps.add_parser(
        'upload', help='Upload artifacts to cache and releases bucket')
    upload.add_argument(
        '--no-check-git',
        action='store_true',
        help="Don't check git state before uploading. For testing.")
    upload.add_argument('--force-push-tag',
                        action='store_true',
                        help='Force push the tag. For testing.')
    upload.add_argument(
        '--target',
        choices=['manual', 'all'],
        default='all',
        help='Whether to upload a release or just the nightly/otherwise manual'
    )
    upload.add_argument(
        '--noconfirm',
        action='store_true',
        help="Don't ask for confirmation. For testing/automation.")
    upload.add_argument('--environment',
                        choices=list(environment.ENVIRONMENTS.keys()),
                        default='staging',
                        help='Environment to release to')
    upload.set_defaults(cmd=lambda args: do_upload(
        environment.ENVIRONMENTS[args.environment], args))

    args = ap.parse_args()
    args.cmd(args)

releng/create_release.xsh

@@ -0,0 +1,317 @@
import json
import subprocess
import itertools
import textwrap
from pathlib import Path
import tempfile
import hashlib
import datetime

from . import environment
from .environment import RelengEnvironment
from . import keys
from . import docker
from .version import VERSION, RELEASE_NAME, MAJOR
from .gitutils import verify_are_on_tag, git_preconditions

$RAISE_SUBPROC_ERROR = True
$XONSH_SHOW_TRACEBACK = True

GCROOTS_DIR = Path('./release/gcroots')
BUILT_GCROOTS_DIR = Path('./release/gcroots-build')
DRVS_TXT = Path('./release/drvs.txt')
ARTIFACTS = Path('./release/artifacts')
MANUAL = Path('./release/manual')
RELENG_MSG = "Release created with releng/create_release.xsh"

BUILD_CORES = 16
MAX_JOBS = 2

# TODO
RELEASE_SYSTEMS = ["x86_64-linux"]


def setup_creds(env: RelengEnvironment):
    key = keys.get_ephemeral_key(env)
    $AWS_SECRET_ACCESS_KEY = key.secret_key
    $AWS_ACCESS_KEY_ID = key.id
    $AWS_DEFAULT_REGION = 'garage'
    $AWS_ENDPOINT_URL = environment.S3_ENDPOINT


def official_release_commit_tag(force_tag=False):
    print('[+] Setting officialRelease in flake.nix and tagging')
    prev_branch = $(git symbolic-ref --short HEAD).strip()

    git switch --detach
    sed -i 's/officialRelease = false/officialRelease = true/' flake.nix
    git add flake.nix

    message = f'release: {VERSION} "{RELEASE_NAME}"\n\nRelease produced with releng/create_release.xsh'
    git commit -m @(message)
    git tag @(['-f'] if force_tag else []) -a -m @(message) @(VERSION)

    return prev_branch


def merge_to_release(prev_branch):
    git switch @(prev_branch)
    # Create a merge back into the release branch so that git tools understand
    # that the release branch contains the tag, without the release commit
    # actually influencing the tree.
    merge_msg = textwrap.dedent("""\
        release: merge release {VERSION} back to mainline

        This merge commit returns to the previous state prior to the release but leaves the tag in the branch history.
        {RELENG_MSG}
    """).format(VERSION=VERSION, RELENG_MSG=RELENG_MSG)
    git merge -m @(merge_msg) -s ours @(VERSION)


def realise(paths: list[str]):
    args = [
        '--realise',
        '--max-jobs',
        MAX_JOBS,
        '--cores',
        BUILD_CORES,
        '--log-format',
        'bar-with-logs',
        '--add-root',
        BUILT_GCROOTS_DIR
    ]
    nix-store @(args) @(paths)


def eval_jobs():
    nej_output = $(nix-eval-jobs --workers 4 --gc-roots-dir @(GCROOTS_DIR) --force-recurse --flake '.#release-jobs')
    return [x for x in (json.loads(s) for s in nej_output.strip().split('\n'))
            if x['system'] in RELEASE_SYSTEMS
    ]


def upload_drv_paths_and_outputs(env: RelengEnvironment, paths: list[str]):
    proc = subprocess.Popen([
        'nix',
        'copy',
        '-v',
        '--to',
        env.cache_store_uri(),
        '--stdin',
    ],
        stdin=subprocess.PIPE,
        env=__xonsh__.env.detype(),
    )

    proc.stdin.write('\n'.join(itertools.chain(paths, (x + '^*' for x in paths))).encode())
    proc.stdin.close()
    rv = proc.wait()
    if rv != 0:
        raise subprocess.CalledProcessError(rv, proc.args)


def make_manifest(eval_result):
    manifest = {vs['system']: vs['outputs']['out'] for vs in eval_result}

    def manifest_line(system, out):
        return f'  {system} = "{out}";'

    manifest_text = textwrap.dedent("""\
        # This file was generated by releng/create_release.xsh in Lix
        {{
        {lines}
        }}
    """).format(lines='\n'.join(manifest_line(s, p) for (s, p) in manifest.items()))

    return manifest_text


def make_git_tarball(to: Path):
    git archive --verbose --prefix=lix-@(VERSION)/ --format=tar.gz -o @(to) @(VERSION)


def confirm(prompt, expected):
    resp = input(prompt)
    if resp != expected:
        raise ValueError('Unconfirmed')


def sha256_file(f: Path):
    hasher = hashlib.sha256()
    with open(f, 'rb') as h:
        while data := h.read(1024 * 1024):
            hasher.update(data)
    return hasher.hexdigest()


def make_artifacts_dir(eval_result, d: Path):
    d.mkdir(exist_ok=True, parents=True)
    version_dir = d / 'lix' / f'lix-{VERSION}'
    version_dir.mkdir(exist_ok=True, parents=True)

    tarballs_drv = next(p for p in eval_result if p['attr'] == 'tarballs')
    cp --no-preserve=mode -r @(tarballs_drv['outputs']['out'])/* @(version_dir)

    # FIXME: upgrade-nix searches for manifest.nix at root, which is rather annoying
    with open(d / 'manifest.nix', 'w') as h:
        h.write(make_manifest(eval_result))
    with open(version_dir / 'manifest.nix', 'w') as h:
        h.write(make_manifest(eval_result))

    print('[+] Make sources tarball')
    filename = f'lix-{VERSION}.tar.gz'
    git_tarball = version_dir / filename
    make_git_tarball(git_tarball)

    file_hash = sha256_file(git_tarball)
    print(f'Hash: {file_hash}')
    with open(version_dir / f'{filename}.sha256', 'w') as h:
        h.write(file_hash)


def prepare_release_notes():
    print('[+] Preparing release notes')
    RELEASE_NOTES_PATH = Path('doc/manual/rl-next')

    if RELEASE_NOTES_PATH.is_dir():
        notes_body = subprocess.check_output(['build-release-notes', '--change-authors', 'doc/manual/change-authors.yml', 'doc/manual/rl-next']).decode()
    else:
        # I guess nobody put release notes on their changes?
        print('[-] Warning: seemingly missing any release notes, not worrying about it')
        notes_body = ''

    rl_path = Path(f'doc/manual/src/release-notes/rl-{MAJOR}.md')

    existing_rl = ''
    try:
        with open(rl_path, 'r') as fh:
            existing_rl = fh.read()
    except FileNotFoundError:
        pass

    date = datetime.datetime.now().strftime('%Y-%m-%d')

    minor_header = f'# Lix {VERSION} ({date})'

    header = f'# Lix {MAJOR} "{RELEASE_NAME}"'
    if existing_rl.startswith(header):
        # strip the header off for minor releases
        lines = existing_rl.splitlines()
        header = lines[0]
        existing_rl = '\n'.join(lines[1:])
    else:
        header += f' ({date})\n\n'

    header += '\n' + minor_header + '\n'

    notes = header
    notes += notes_body
    notes += "\n\n"
    notes += existing_rl

    # make pre-commit happy about one newline
    notes = notes.rstrip()
    notes += "\n"

    with open(rl_path, 'w') as fh:
        fh.write(notes)

    commit_msg = textwrap.dedent("""\
        release: release notes for {VERSION}

        {RELENG_MSG}
    """).format(VERSION=VERSION, RELENG_MSG=RELENG_MSG)

    git add @(rl_path)
    git rm doc/manual/rl-next/*.md
    git commit -m @(commit_msg)


def upload_artifacts(env: RelengEnvironment, noconfirm=False, no_check_git=False, force_push_tag=False):
    if not no_check_git:
        verify_are_on_tag()
        git_preconditions()

    assert 'AWS_SECRET_ACCESS_KEY' in __xonsh__.env

    tree @(ARTIFACTS)

    env_part = f'environment {env.name}'
    not noconfirm and confirm(
        f'Would you like to release {ARTIFACTS} as {VERSION} in {env.colour(env_part)}? Type "I want to release this to {env.name}" to confirm\n',
        f'I want to release this to {env.name}'
    )

    docker_images = list((ARTIFACTS / f'lix/lix-{VERSION}').glob(f'lix-{VERSION}-docker-image-*.tar.gz'))
    assert docker_images

    print('[+] Upload to cache')
    with open(DRVS_TXT) as fh:
        upload_drv_paths_and_outputs(env, [x.strip() for x in fh.readlines() if x])

    print('[+] Upload docker images')
    for target in env.docker_targets:
        docker.upload_docker_images(target, docker_images)

    print('[+] Upload to release bucket')
    aws s3 cp --recursive @(ARTIFACTS)/ @(env.releases_bucket)/
    print('[+] Upload manual')
    upload_manual(env)

    print('[+] git push tag')
    git push @(['-f'] if force_push_tag else []) @(env.git_repo) f'{VERSION}:refs/tags/{VERSION}'


def do_tag_merge(force_tag=False, no_check_git=False):
    if not no_check_git:
        git_preconditions()
    prev_branch = official_release_commit_tag(force_tag=force_tag)
    merge_to_release(prev_branch)
    git switch --detach @(VERSION)


def build_manual(eval_result):
    manual = next(x['outputs']['doc'] for x in eval_result if x['attr'] == 'build.x86_64-linux')
    print('[+] Building manual')
    realise([manual])

    cp --no-preserve=mode -vr @(manual)/share/doc/nix @(MANUAL)


def upload_manual(env: RelengEnvironment):
    stable = json.loads($(nix eval --json '.#nix.officialRelease'))
    if stable:
        version = MAJOR
    else:
        version = 'nightly'

    print('[+] aws s3 sync manual')
    aws s3 sync @(MANUAL)/ @(env.docs_bucket)/manual/lix/@(version)/
    if stable:
        aws s3 sync @(MANUAL)/ @(env.docs_bucket)/manual/lix/stable/


def build_artifacts(no_check_git=False):
    rm -rf release/
    if not no_check_git:
        verify_are_on_tag()
        git_preconditions()

    print('[+] Evaluating')
    eval_result = eval_jobs()
    drv_paths = [x['drvPath'] for x in eval_result]

    print('[+] Building')
    realise(drv_paths)
    build_manual(eval_result)

    with open(DRVS_TXT, 'w') as fh:
        # don't bother putting the release tarballs themselves because they are duplicate and huge
        fh.write('\n'.join(x['drvPath'] for x in eval_result if x['attr'] != 'lix-release-tarballs'))

    make_artifacts_dir(eval_result, ARTIFACTS)
    print(f'[+] Done! See {ARTIFACTS}')

releng/docker.xsh Normal file

@@ -0,0 +1,74 @@
import json
import logging
from pathlib import Path
import tempfile
import requests
from .environment import DockerTarget, RelengEnvironment
from .version import VERSION, MAJOR
from . import gitutils
from .docker_assemble import Registry, OCIIndex, OCIIndexItem
from . import docker_assemble
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
def check_all_logins(env: RelengEnvironment):
for target in env.docker_targets:
check_login(target)
def check_login(target: DockerTarget):
skopeo login @(target.registry_name())
def upload_docker_images(target: DockerTarget, paths: list[Path]):
if not paths: return
sess = requests.Session()
sess.headers['User-Agent'] = 'lix-releng'
tag_names = [DockerTarget.resolve(tag, version=VERSION, major=MAJOR) for tag in target.tags]
# latest only gets tagged for the current release branch of Lix
if not gitutils.is_maintenance_branch('HEAD'):
tag_names.append('latest')
meta = {}
reg = docker_assemble.Registry(sess)
manifests = []
with tempfile.TemporaryDirectory() as tmp:
tmp = Path(tmp)
for path in paths:
digest_file = tmp / (path.name + '.digest')
inspection = json.loads($(skopeo inspect docker-archive:@(path)))
docker_arch = inspection['Architecture']
docker_os = inspection['Os']
meta = inspection['Labels']
log.info('Pushing image %s for %s to %s', path, docker_arch, target.registry_path)
# insecure-policy: we don't have any signature policy, we are just uploading an image
# We upload to a junk tag, because otherwise it will upload to `latest`, which is undesirable
skopeo --insecure-policy copy --format oci --digestfile @(digest_file) docker-archive:@(path) docker://@(target.registry_path):temp
digest = digest_file.read_text().strip()
# skopeo doesn't give us the manifest size directly, so we just ask the registry
metadata = reg.image_info(target.registry_path, digest)
manifests.append(OCIIndexItem(metadata=metadata, architecture=docker_arch, os=docker_os))
# delete the temp tag, which we only have to create because of skopeo
# limitations anyhow (it seems to not have a way to say "don't tag it, find
# your checksum and put it there")
# FIXME: this is not possible because GitHub only has a proprietary API for it. amazing. 11/10.
# reg.delete_tag(target.registry_path, 'temp')
log.info('Pushed images to %r, building a bigger and more menacing manifest from %r with metadata %r', target, manifests, meta)
# send the multiarch manifest to each tag
index = OCIIndex(manifests=manifests, annotations=meta)
for tag in tag_names:
reg.upload_index(target.registry_path, tag, index)

releng/docker_assemble.py Normal file

@@ -0,0 +1,399 @@
from typing import Any, Literal, Optional
import re
from pathlib import Path
import json
import dataclasses
import time
from urllib.parse import unquote
import urllib.request
import logging
import requests.auth
import requests
import xdg_base_dirs
log = logging.getLogger(__name__)
log.setLevel(logging.INFO)
DEBUG_REQUESTS = False
if DEBUG_REQUESTS:
urllib3_logger = logging.getLogger('requests.packages.urllib3')
urllib3_logger.setLevel(logging.DEBUG)
urllib3_logger.propagate = True
# So, there is a bunch of confusing stuff happening in this file. The gist of why it's Like This is:
#
# nix2container does not concern itself with tags (reasonably enough):
# https://github.com/nlewo/nix2container/issues/59
#
# This is fine. But then we noticed: docker images don't play nice if you have
# multiple architectures you want to abstract over if you don't do special
# things. Those special things are images with manifests containing multiple
# images.
#
# Docker has a data model vaguely analogous to git: you have higher level
# objects referring to a bunch of content-addressed blobs.
#
# A multiarch image is more or less just a manifest that refers to more
# manifests; in OCI it is an Index.
#
# See the API spec here: https://github.com/opencontainers/distribution-spec/blob/v1.0.1/spec.md#definitions
# And the Index spec here: https://github.com/opencontainers/image-spec/blob/v1.0.1/image-index.md
#
# skopeo doesn't *know* how to make multiarch *manifests*:
# https://github.com/containers/skopeo/issues/1136
#
# There is a tool called manifest-tool that is supposed to do this
# (https://github.com/estesp/manifest-tool) but it doesn't support putting in
# annotations on the outer image, and I *really* didn't want to write golang to
# fix that. Thus, a little bit of homebrew containers code.
#
# Essentially what we are doing in here is splatting a bunch of images into the
# registry without tagging them (except as "temp", due to podman issues), then
# simply sending a new composite manifest ourselves.
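#
# For orientation, the composite index this module ultimately uploads looks
# roughly like this (sizes and digests invented):
#   {
#     "schemaVersion": 2,
#     "manifests": [
#       {"mediaType": "application/vnd.oci.image.manifest.v1+json",
#        "size": 1234, "digest": "sha256:...",
#        "platform": {"architecture": "amd64", "os": "linux"}},
#       {"mediaType": "application/vnd.oci.image.manifest.v1+json",
#        "size": 5678, "digest": "sha256:...",
#        "platform": {"architecture": "arm64", "os": "linux"}}
#     ],
#     "annotations": { ... }  # image labels taken from skopeo inspect
#   }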
DockerArchitecture = Literal['amd64'] | Literal['arm64']
MANIFEST_MIME = 'application/vnd.oci.image.manifest.v1+json'
INDEX_MIME = 'application/vnd.oci.image.index.v1+json'
@dataclasses.dataclass(frozen=True, order=True)
class ImageMetadata:
size: int
digest: str
"""sha256:SOMEHEX"""
@dataclasses.dataclass(frozen=True, order=True)
class OCIIndexItem:
"""Information about an untagged uploaded image."""
metadata: ImageMetadata
architecture: DockerArchitecture
os: str = 'linux'
def serialize(self):
return {
'mediaType': MANIFEST_MIME,
'size': self.metadata.size,
'digest': self.metadata.digest,
'platform': {
'architecture': self.architecture,
'os': self.os,
}
}
@dataclasses.dataclass(frozen=True)
class OCIIndex:
manifests: list[OCIIndexItem]
annotations: dict[str, str]
def serialize(self):
return {
'schemaVersion': 2,
'manifests': [item.serialize() for item in sorted(self.manifests)],
'annotations': self.annotations
}
def docker_architecture_from_nix_system(system: str) -> DockerArchitecture:
MAP = {
'x86_64-linux': 'amd64',
'aarch64-linux': 'arm64',
}
return MAP[system] # type: ignore
@dataclasses.dataclass
class TaggingOperation:
manifest: OCIIndex
tags: list[str]
"""Tags this image is uploaded under"""
runtime_dir = xdg_base_dirs.xdg_runtime_dir()
config_dir = xdg_base_dirs.xdg_config_home()
AUTH_FILES = ([runtime_dir / 'containers/auth.json'] if runtime_dir else []) + \
[config_dir / 'containers/auth.json', Path.home() / '.docker/config.json']
# Copied from Werkzeug https://github.com/pallets/werkzeug/blob/62e3ea45846d06576199a2f8470be7fe44c867c1/src/werkzeug/http.py#L300-L325
def parse_list_header(value: str) -> list[str]:
"""Parse a header value that consists of a list of comma separated items according
to `RFC 9110 <https://httpwg.org/specs/rfc9110.html#abnf.extension>`__.
This extends :func:`urllib.request.parse_http_list` to remove surrounding quotes
from values.
.. code-block:: python
parse_list_header('token, "quoted value"')
['token', 'quoted value']
This is the reverse of :func:`dump_header`.
:param value: The header value to parse.
"""
result = []
for item in urllib.request.parse_http_list(value):
if len(item) >= 2 and item[0] == item[-1] == '"':
item = item[1:-1]
result.append(item)
return result
# https://www.rfc-editor.org/rfc/rfc2231#section-4
_charset_value_re = re.compile(
r"""
([\w!#$%&*+\-.^`|~]*)' # charset part, could be empty
[\w!#$%&*+\-.^`|~]*' # don't care about language part, usually empty
([\w!#$%&'*+\-.^`|~]+) # one or more token chars with percent encoding
""",
re.ASCII | re.VERBOSE,
)
# Copied from: https://github.com/pallets/werkzeug/blob/62e3ea45846d06576199a2f8470be7fe44c867c1/src/werkzeug/http.py#L327-L394
def parse_dict_header(value: str) -> dict[str, str | None]:
"""Parse a list header using :func:`parse_list_header`, then parse each item as a
``key=value`` pair.
.. code-block:: python
parse_dict_header('a=b, c="d, e", f')
{"a": "b", "c": "d, e", "f": None}
This is the reverse of :func:`dump_header`.
If a key does not have a value, it is ``None``.
This handles charsets for values as described in
`RFC 2231 <https://www.rfc-editor.org/rfc/rfc2231#section-3>`__. Only ASCII, UTF-8,
and ISO-8859-1 charsets are accepted, otherwise the value remains quoted.
:param value: The header value to parse.
.. versionchanged:: 3.0
Passing bytes is not supported.
.. versionchanged:: 3.0
The ``cls`` argument is removed.
.. versionchanged:: 2.3
Added support for ``key*=charset''value`` encoded items.
.. versionchanged:: 0.9
The ``cls`` argument was added.
"""
result: dict[str, str | None] = {}
for item in parse_list_header(value):
key, has_value, value = item.partition("=")
key = key.strip()
if not has_value:
result[key] = None
continue
value = value.strip()
encoding: str | None = None
if key[-1] == "*":
# key*=charset''value becomes key=value, where value is percent encoded
# adapted from parse_options_header, without the continuation handling
key = key[:-1]
match = _charset_value_re.match(value)
if match:
# If there is a charset marker in the value, split it off.
encoding, value = match.groups()
assert encoding
encoding = encoding.lower()
# A safe list of encodings. Modern clients should only send ASCII or UTF-8.
# This list will not be extended further. An invalid encoding will leave the
# value quoted.
if encoding in {"ascii", "us-ascii", "utf-8", "iso-8859-1"}:
# invalid bytes are replaced during unquoting
value = unquote(value, encoding=encoding)
if len(value) >= 2 and value[0] == value[-1] == '"':
value = value[1:-1]
result[key] = value
return result
def parse_www_authenticate(www_authenticate):
scheme, _, rest = www_authenticate.partition(' ')
scheme = scheme.lower()
rest = rest.strip()
parsed = parse_dict_header(rest.rstrip('='))
return parsed
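# Illustrative example (header value invented):
#   parse_www_authenticate('Bearer realm="https://ghcr.io/token", service="ghcr.io", scope="repository:foo/bar:pull"')
#   == {'realm': 'https://ghcr.io/token', 'service': 'ghcr.io', 'scope': 'repository:foo/bar:pull'}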
class AuthState:
def __init__(self, auth_files: list[Path] = AUTH_FILES):
self.auth_map: dict[str, str] = {}
for f in auth_files:
self.auth_map.update(AuthState.load_auth_file(f))
self.token_cache: dict[str, str] = {}
@staticmethod
def load_auth_file(path: Path) -> dict[str, str]:
if path.exists():
with path.open() as fh:
try:
json_obj = json.load(fh)
return {k: v['auth'] for k, v in json_obj['auths'].items()}
except (json.JSONDecodeError, KeyError) as e:
log.exception('JSON decode error in %s', path, exc_info=e)
return {}
def get_token(self, hostname: str) -> Optional[str]:
return self.token_cache.get(hostname)
def obtain_token(self, session: requests.Session, token_endpoint: str,
scope: str, service: str, image_path: str) -> str:
authority, _, _ = image_path.partition('/')
if tok := self.get_token(authority):
return tok
creds = self.find_credential_for(image_path)
if not creds:
raise ValueError('No credentials available for ' + image_path)
resp = session.get(token_endpoint,
params={
'client_id': 'lix-releng',
'scope': scope,
'service': service,
},
headers={
'Authorization': 'Basic ' + creds
}).json()
token = resp['token']
self.token_cache[service] = token
return token
def find_credential_for(self, image_path: str):
trails = image_path.split('/')
for i in range(len(trails)):
prefix = '/'.join(trails[:len(trails) - i])
if prefix in self.auth_map:
return self.auth_map[prefix]
return None
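# e.g. with auth_map == {'ghcr.io': '<base64 credentials>'}, a lookup for
# 'ghcr.io/lix-project/lix' tries 'ghcr.io/lix-project/lix', then
# 'ghcr.io/lix-project', then 'ghcr.io', returning the first hit.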
class RegistryAuthenticator(requests.auth.AuthBase):
"""Authenticates to an OCI compliant registry"""
def __init__(self, auth_state: AuthState, session: requests.Session,
image: str):
self.auth_map: dict[str, str] = {}
self.image = image
self.session = session
self.auth_state = auth_state
def response_hook(self, r: requests.Response,
**kwargs: Any) -> requests.Response:
if r.status_code == 401:
www_authenticate = r.headers.get('www-authenticate', '').lower()
parsed = parse_www_authenticate(www_authenticate)
assert parsed
tok = self.auth_state.obtain_token(
self.session,
parsed['realm'], # type: ignore
parsed['scope'], # type: ignore
parsed['service'], # type: ignore
self.image)
new_req = r.request.copy()
new_req.headers['Authorization'] = 'Bearer ' + tok
return self.session.send(new_req)
else:
return r
def __call__(self,
r: requests.PreparedRequest) -> requests.PreparedRequest:
authority, _, _ = self.image.partition('/')
auth_may = self.auth_state.get_token(authority)
if auth_may:
r.headers['Authorization'] = 'Bearer ' + auth_may
r.register_hook('response', self.response_hook)
return r
class Registry:
def __init__(self, session: requests.Session):
self.auth_state = AuthState()
self.session = session
def image_info(self, image_path: str, manifest_id: str) -> ImageMetadata:
authority, _, path = image_path.partition('/')
resp = self.session.head(
f'https://{authority}/v2/{path}/manifests/{manifest_id}',
headers={'Accept': MANIFEST_MIME},
auth=RegistryAuthenticator(self.auth_state, self.session,
image_path))
resp.raise_for_status()
return ImageMetadata(int(resp.headers['content-length']),
resp.headers['docker-content-digest'])
def delete_tag(self, image_path: str, tag: str):
authority, _, path = image_path.partition('/')
resp = self.session.delete(
f'https://{authority}/v2/{path}/manifests/{tag}',
headers={'Content-Type': INDEX_MIME},
auth=RegistryAuthenticator(self.auth_state, self.session,
image_path))
resp.raise_for_status()
def _upload_index(self, image_path: str, tag: str, index: OCIIndex):
authority, _, path = image_path.partition('/')
body = json.dumps(index.serialize(),
separators=(',', ':'),
sort_keys=True)
resp = self.session.put(
f'https://{authority}/v2/{path}/manifests/{tag}',
data=body,
headers={'Content-Type': INDEX_MIME},
auth=RegistryAuthenticator(self.auth_state, self.session,
image_path))
resp.raise_for_status()
return resp.headers['Location']
def upload_index(self,
image_path: str,
tag: str,
index: OCIIndex,
retries=20,
retry_delay=1):
# eventual consistency lmao
for _ in range(retries):
try:
return self._upload_index(image_path, tag, index)
except requests.HTTPError as e:
if e.response.status_code != 404:
raise
time.sleep(retry_delay)

releng/environment.py Normal file

@@ -0,0 +1,134 @@
from typing import Callable
import urllib.parse
import re
import functools
import subprocess
import dataclasses
S3_HOST = 's3.lix.systems'
S3_ENDPOINT = 'https://s3.lix.systems'
DEFAULT_STORE_URI_BITS = {
'region': 'garage',
'endpoint': 's3.lix.systems',
'want-mass-query': 'true',
'write-nar-listing': 'true',
'ls-compression': 'zstd',
'narinfo-compression': 'zstd',
'compression': 'zstd',
'parallel-compression': 'true',
}
@dataclasses.dataclass
class DockerTarget:
registry_path: str
"""Registry path without the tag, e.g. ghcr.io/lix-project/lix"""
tags: list[str]
"""List of tags this image should take. There must be at least one."""
@staticmethod
def resolve(item: str, version: str, major: str) -> str:
"""
Applies templates:
- version: the Lix version e.g. 2.90.0
- major: the major Lix version e.g. 2.90
"""
return item.format(version=version, major=major)
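# e.g. DockerTarget.resolve('{version}', version='2.90.0', major='2.90')
# evaluates to '2.90.0' (version numbers here are illustrative).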
def registry_name(self) -> str:
[a, _, _] = self.registry_path.partition('/')
return a
@dataclasses.dataclass
class RelengEnvironment:
name: str
colour: Callable[[str], str]
cache_store_overlay: dict[str, str]
cache_bucket: str
releases_bucket: str
docs_bucket: str
git_repo: str
docker_targets: list[DockerTarget]
def cache_store_uri(self):
qs = DEFAULT_STORE_URI_BITS.copy()
qs.update(self.cache_store_overlay)
return self.cache_bucket + "?" + urllib.parse.urlencode(qs)
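# For STAGING below this renders to something like (keys in insertion order):
#   s3://staging-cache?region=garage&endpoint=s3.lix.systems&...&secret-key=staging.key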
SGR = '\x1b['
RED = '31;1m'
GREEN = '32;1m'
RESET = '0m'
def sgr(colour: str, text: str) -> str:
return f'{SGR}{colour}{text}{SGR}{RESET}'
STAGING = RelengEnvironment(
name='staging',
colour=functools.partial(sgr, GREEN),
docs_bucket='s3://staging-docs',
cache_bucket='s3://staging-cache',
cache_store_overlay={'secret-key': 'staging.key'},
releases_bucket='s3://staging-releases',
git_repo='ssh://git@git.lix.systems/lix-project/lix-releng-staging',
docker_targets=[
# latest will be auto tagged if appropriate
DockerTarget('git.lix.systems/lix-project/lix-releng-staging',
tags=['{version}', '{major}']),
DockerTarget('ghcr.io/lix-project/lix-releng-staging',
tags=['{version}', '{major}']),
],
)
GERRIT_REMOTE_RE = re.compile(r'^ssh://(\w+@)?gerrit.lix.systems:2022/lix$')
def guess_gerrit_remote():
"""
Deals with people having unknown gerrit username.
"""
out = [
x.split()[1] for x in subprocess.check_output(
['git', 'remote', '-v']).decode().splitlines()
]
return next(x for x in out if GERRIT_REMOTE_RE.match(x))
PROD = RelengEnvironment(
name='production',
colour=functools.partial(sgr, RED),
docs_bucket='s3://docs',
cache_bucket='s3://cache',
# FIXME: we should decrypt this with age into a tempdir in the future, but
# the issue is how to deal with the recipients file. For now, we should
# just delete it after doing a release.
cache_store_overlay={'secret-key': 'prod.key'},
releases_bucket='s3://releases',
git_repo=guess_gerrit_remote(),
docker_targets=[
# latest will be auto tagged if appropriate
DockerTarget('git.lix.systems/lix-project/lix',
tags=['{version}', '{major}']),
DockerTarget('ghcr.io/lix-project/lix', tags=['{version}', '{major}']),
],
)
ENVIRONMENTS = {
'staging': STAGING,
'production': PROD,
}
@dataclasses.dataclass
class S3Credentials:
name: str
id: str
secret_key: str

releng/gitutils.xsh Normal file

@@ -0,0 +1,37 @@
import subprocess
import json
from .version import VERSION
def version_compare(v1: str, v2: str):
return json.loads($(nix-instantiate --eval --json --argstr v1 @(v1) --argstr v2 @(v2) --expr '{v1, v2}: builtins.compareVersions v1 v2'))
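# Defers version ordering to Nix's builtins.compareVersions, which returns
# -1, 0 or 1; e.g. version_compare('2.89', '2.90') == -1.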
def latest_tag_on_branch(branch: str) -> str:
return $(git describe --abbrev=0 @(branch) e>/dev/null).strip()
def is_maintenance_branch(branch: str) -> bool:
try:
main_tag = latest_tag_on_branch('main')
current_tag = latest_tag_on_branch(branch)
return version_compare(current_tag, main_tag) < 0
except subprocess.CalledProcessError:
# This is the case before Lix releases 2.90, since main *has* no
# release tag on it.
# FIXME: delete this case after 2.91
return False
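# e.g. if main's newest tag were 2.91 and this branch's newest tag were 2.90,
# the branch would count as a maintenance branch (and would, among other
# things, not receive the 'latest' docker tag).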
def verify_are_on_tag():
current_tag = $(git describe --tag).strip()
assert current_tag == VERSION
def git_preconditions():
# verify there is nothing in index ready to stage
proc = !(git diff-index --quiet --cached HEAD --)
assert proc.rtn == 0
# verify there is nothing *stageable* and tracked
proc = !(git diff-files --quiet)
assert proc.rtn == 0

releng/keys.py Normal file

@@ -0,0 +1,19 @@
import subprocess
import json
from . import environment
def get_ephemeral_key(
env: environment.RelengEnvironment) -> environment.S3Credentials:
output = subprocess.check_output([
'ssh', '-l', 'root', environment.S3_HOST, 'garage-ephemeral-key',
'new', '--name', f'releng-{env.name}', '--read', '--write',
'--age-secs', '3600',
env.releases_bucket.removeprefix('s3://'),
env.cache_bucket.removeprefix('s3://'),
env.docs_bucket.removeprefix('s3://'),
])
d = json.loads(output.decode())
return environment.S3Credentials(name=d['name'],
id=d['id'],
secret_key=d['secret_key'])
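# The remote garage-ephemeral-key invocation is expected to print JSON shaped
# like {"name": ..., "id": ..., "secret_key": ...}; per --age-secs above, the
# credentials expire after an hour.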

releng/release-jobs.nix Normal file

@@ -0,0 +1,57 @@
{ hydraJobs, pkgs }:
let
inherit (pkgs) lib;
lix = hydraJobs.build.x86_64-linux;
systems = [ "x86_64-linux" ];
dockerSystems = [ "x86_64-linux" ];
doTarball =
{
target,
targetName,
rename ? null,
}:
''
echo "doing: ${target}"
# expand wildcard
filename=$(echo ${target}/${targetName})
basename="$(basename $filename)"
echo $filename $basename
cp -v "$filename" "$out"
${lib.optionalString (rename != null) ''
mv "$out/$basename" "$out/${rename}"
basename="${rename}"
''}
sha256sum --binary $filename | cut -f1 -d' ' > $out/$basename.sha256
'';
targets =
builtins.map (system: {
target = hydraJobs.binaryTarball.${system};
targetName = "*.tar.xz";
}) systems
++ builtins.map (system: {
target = hydraJobs.dockerImage.${system}.tarball;
targetName = "image.tar.gz";
rename = "lix-${lix.version}-docker-image-${system}.tar.gz";
}) dockerSystems;
manualTar = pkgs.runCommand "lix-manual-tarball" { } ''
mkdir -p $out
cp -r ${lix.doc}/share/doc/nix/manual lix-${lix.version}-manual
tar -cvzf "$out/lix-${lix.version}-manual.tar.gz" lix-${lix.version}-manual
'';
tarballs = pkgs.runCommand "lix-release-tarballs" { } ''
mkdir -p $out
${lib.concatMapStringsSep "\n" doTarball targets}
cp ${manualTar}/*.tar.gz $out
cp -r ${lix.doc}/share/doc/nix/manual $out
'';
in
{
inherit (hydraJobs) build;
inherit tarballs;
}

releng/version.py Normal file

@@ -0,0 +1,6 @@
import json
version_json = json.load(open('version.json'))
VERSION = version_json['version']
MAJOR = '.'.join(VERSION.split('.')[:2])
RELEASE_NAME = version_json['release_name']
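# e.g. a version.json of {"version": "2.90.0", "release_name": "..."} yields
# VERSION == '2.90.0' and MAJOR == '2.90'.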


@@ -54,7 +54,7 @@ libcmd = library(
     nlohmann_json,
     lix_doc
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,


@@ -145,7 +145,7 @@ libexpr = library(
   include_directories : [
     '../libmain',
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,


@@ -30,7 +30,7 @@ libfetchers = library(
     liblixutil,
     nlohmann_json,
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,


@@ -17,6 +17,12 @@ LogFormat parseLogFormat(const std::string & logFormatStr) {
         return LogFormat::bar;
     else if (logFormatStr == "bar-with-logs")
         return LogFormat::barWithLogs;
+#if !defined(WIN32) || !defined(_WIN32) || !defined(__WIN32__) || !defined(__NT__)
+    else if (logFormatStr == "multiline")
+        return LogFormat::multiline;
+    else if (logFormatStr == "multiline-with-logs")
+        return LogFormat::multilineWithLogs;
+#endif
     throw Error("option 'log-format' has an invalid value '%s'", logFormatStr);
 }
@@ -35,6 +41,19 @@ Logger * makeDefaultLogger() {
         logger->setPrintBuildLogs(true);
         return logger;
     }
+#if !defined(WIN32) || !defined(_WIN32) || !defined(__WIN32__) || !defined(__NT__)
+    case LogFormat::multiline: {
+        auto logger = makeProgressBar();
+        logger->setPrintMultiline(true);
+        return logger;
+    }
+    case LogFormat::multilineWithLogs: {
+        auto logger = makeProgressBar();
+        logger->setPrintMultiline(true);
+        logger->setPrintBuildLogs(true);
+        return logger;
+    }
+#endif
     default:
         abort();
     }


@@ -11,6 +11,10 @@ enum class LogFormat {
     internalJSON,
     bar,
     barWithLogs,
+#if !defined(WIN32) || !defined(_WIN32) || !defined(__WIN32__) || !defined(__NT__)
+    multiline,
+    multilineWithLogs,
+#endif
 };
 void setLogFormat(const std::string & logFormatStr);


@@ -20,7 +20,7 @@ libmain = library(
     liblixutil,
     liblixstore,
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,


@@ -73,6 +73,8 @@ private:
     std::map<ActivityType, ActivitiesByType> activitiesByType;
+    int lastLines = 0;
     uint64_t filesLinked = 0, bytesLinked = 0;
     uint64_t corruptedPaths = 0, untrustedPaths = 0;
@@ -89,6 +91,7 @@ private:
     std::condition_variable quitCV, updateCV;
     bool printBuildLogs = false;
+    bool printMultiline = false;
     bool isTTY;
 public:
@@ -103,7 +106,7 @@ public:
             while (state->active) {
                 if (!state->haveUpdate)
                     state.wait_for(updateCV, nextWakeup);
-                nextWakeup = draw(*state);
+                nextWakeup = draw(*state, {});
                 state.wait_for(quitCV, std::chrono::milliseconds(50));
             }
         });
@@ -165,8 +168,7 @@ public:
     void log(State & state, Verbosity lvl, std::string_view s)
     {
         if (state.active) {
-            writeToStderr("\r\e[K" + filterANSIEscapes(s, !isTTY) + ANSI_NORMAL "\n");
-            draw(state);
+            draw(state, s);
         } else {
             auto s2 = s + ANSI_NORMAL "\n";
             if (!isTTY) s2 = filterANSIEscapes(s2, true);
@@ -354,60 +356,99 @@ public:
         updateCV.notify_one();
     }
-    std::chrono::milliseconds draw(State & state)
+    std::chrono::milliseconds draw(State & state, const std::optional<std::string_view> & s)
     {
         auto nextWakeup = A_LONG_TIME;
         state.haveUpdate = false;
         if (state.paused || !state.active) return nextWakeup;
-        std::string line;
+        auto windowSize = getWindowSize();
+        auto width = windowSize.second;
+        if (width <= 0) {
+            width = std::numeric_limits<decltype(width)>::max();
+        }
+        if (printMultiline && (state.lastLines >= 1)) {
+            writeToStderr(fmt("\e[G\e[%dF\e[J", state.lastLines));
+        }
+        state.lastLines = 0;
+        if (s != std::nullopt)
+            writeToStderr("\r\e[K" + filterANSIEscapes(s.value(), !isTTY) + ANSI_NORMAL "\n");
+        std::string line;
         std::string status = getStatus(state);
         if (!status.empty()) {
             line += '[';
             line += status;
             line += "]";
         }
+        if (printMultiline && !line.empty()) {
+            writeToStderr(filterANSIEscapes(line, false, width) + "\n");
+            state.lastLines++;
+        }
+        auto height = windowSize.first > 0 ? windowSize.first : 25;
+        auto moreBuilds = 0;
         auto now = std::chrono::steady_clock::now();
         if (!state.activities.empty()) {
-            if (!status.empty()) line += " ";
-            auto i = state.activities.rbegin();
-            while (i != state.activities.rend()) {
-                if (i->visible && (!i->s.empty() || !i->lastLine.empty())) {
+            for (auto i = state.activities.begin(); i != state.activities.end(); ++i) {
+                if (!(i->visible && (!i->s.empty() || !i->lastLine.empty()))) {
+                    continue;
+                }
                 /* Don't show activities until some time has
                    passed, to avoid displaying very short
                    activities. */
                 auto delay = std::chrono::milliseconds(10);
-                if (i->startTime + delay < now)
-                    break;
-                else
-                    nextWakeup = std::min(nextWakeup, std::chrono::duration_cast<std::chrono::milliseconds>(delay - (now - i->startTime)));
-                }
-                ++i;
-            }
-            if (i != state.activities.rend()) {
-                line += i->s;
+                if (i->startTime + delay >= now) {
+                    nextWakeup = std::min(
+                        nextWakeup,
+                        std::chrono::duration_cast<std::chrono::milliseconds>(
+                            delay - (now - i->startTime)
+                        )
+                    );
+                }
+                if (printMultiline) {
+                    line = i->s;
+                } else {
+                    line += " ";
+                    line += i->s;
+                }
                 if (!i->phase.empty()) {
                     line += " (";
                     line += i->phase;
                     line += ")";
                 }
                 if (!i->lastLine.empty()) {
-                    if (!i->s.empty()) line += ": ";
+                    if (!i->s.empty()) {
+                        line += ": ";
+                    }
                     line += i->lastLine;
                 }
+                if (printMultiline) {
+                    if (state.lastLines < (height - 1)) {
+                        writeToStderr(filterANSIEscapes(line, false, width) + "\n");
+                        state.lastLines++;
+                    } else {
+                        moreBuilds++;
+                    }
+                }
             }
         }
-        auto width = getWindowSize().second;
-        if (width <= 0) width = std::numeric_limits<decltype(width)>::max();
-        writeToStderr("\r" + filterANSIEscapes(line, false, width) + ANSI_NORMAL + "\e[K");
+        if (printMultiline && moreBuilds) {
+            writeToStderr(fmt("And %d more...", moreBuilds));
+        }
+        if (!printMultiline) {
+            writeToStderr("\r" + filterANSIEscapes(line, false, width) + ANSI_NORMAL + "\e[K");
+        }
         return nextWakeup;
     }
@@ -506,9 +547,8 @@ public:
     {
         auto state(state_.lock());
         if (state->active) {
-            std::cerr << "\r\e[K";
             Logger::writeToStdout(s);
-            draw(*state);
+            draw(*state, {});
         } else {
             Logger::writeToStdout(s);
         }
@@ -521,7 +561,7 @@ public:
         std::cerr << fmt("\r\e[K%s ", msg);
         auto s = trim(readLine(STDIN_FILENO));
         if (s.size() != 1) return {};
-        draw(*state);
+        draw(*state, {});
         return s[0];
     }
@@ -529,6 +569,11 @@ public:
     {
         this->printBuildLogs = printBuildLogs;
     }
+    void setPrintMultiline(bool printMultiline) override
+    {
+        this->printMultiline = printMultiline;
+    }
 };
 Logger * makeProgressBar()


@@ -44,7 +44,6 @@
 #include <sys/prctl.h>
 #include <sys/syscall.h>
 #if HAVE_SECCOMP
-#include "linux/fchmodat2-compat.hh"
 #include <seccomp.h>
 #endif
 #define pivot_root(new_root, put_old) (syscall(SYS_pivot_root, new_root, put_old))
@@ -1617,6 +1616,12 @@ void LocalDerivationGoal::chownToBuilder(const Path & path)
         throw SysError("cannot change ownership of '%1%'", path);
 }
+#if HAVE_SECCOMP
+void allowSyscall(scmp_filter_ctx ctx, int syscall) {
+    if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, syscall, 0) != 0)
+        throw SysError("unable to add seccomp rule");
+}
+#endif
 void setupSeccomp()
 {
@@ -1624,7 +1629,9 @@ void setupSeccomp()
 #if HAVE_SECCOMP
     scmp_filter_ctx ctx;
-    if (!(ctx = seccomp_init(SCMP_ACT_ALLOW)))
+    // Pretend that syscalls we don't yet know about don't exist.
+    // This is the best option for compatibility: after all, they did in fact not exist not too long ago.
+    if (!(ctx = seccomp_init(SCMP_ACT_ERRNO(ENOSYS))))
         throw SysError("unable to initialize seccomp mode 2");
     Finally cleanup([&]() {
@@ -1659,28 +1666,520 @@
         seccomp_arch_add(ctx, SCMP_ARCH_MIPSEL64N32) != 0)
         printError("unable to add mips64el-*abin32 seccomp architecture");
-    /* Prevent builders from creating setuid/setgid binaries. */
-    for (int perm : { S_ISUID, S_ISGID }) {
-        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(chmod), 1,
-                SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0)
-            throw SysError("unable to add seccomp rule");
+    // This list is intended for machine consumption.
+    // Please keep its format, order and BEGIN/END markers.
+    //
+    // Currently, it is up to date with libseccomp 2.5.5 and glibc 2.38.
+    // Run check-syscalls to determine which new syscalls should be added.
+    // New syscalls must be audited and handled in a way that blocks the following dangerous operations:
+    // * Creation of non-empty setuid/setgid files
+    // * Creation of extended attributes (including ACLs)
+    //
+    // BEGIN extract-syscalls
allowSyscall(ctx, SCMP_SYS(accept));
allowSyscall(ctx, SCMP_SYS(accept4));
allowSyscall(ctx, SCMP_SYS(access));
allowSyscall(ctx, SCMP_SYS(acct));
allowSyscall(ctx, SCMP_SYS(add_key));
allowSyscall(ctx, SCMP_SYS(adjtimex));
allowSyscall(ctx, SCMP_SYS(afs_syscall));
allowSyscall(ctx, SCMP_SYS(alarm));
allowSyscall(ctx, SCMP_SYS(arch_prctl));
allowSyscall(ctx, SCMP_SYS(arm_fadvise64_64));
allowSyscall(ctx, SCMP_SYS(arm_sync_file_range));
allowSyscall(ctx, SCMP_SYS(bdflush));
allowSyscall(ctx, SCMP_SYS(bind));
allowSyscall(ctx, SCMP_SYS(bpf));
allowSyscall(ctx, SCMP_SYS(break));
allowSyscall(ctx, SCMP_SYS(breakpoint));
allowSyscall(ctx, SCMP_SYS(brk));
allowSyscall(ctx, SCMP_SYS(cachectl));
allowSyscall(ctx, SCMP_SYS(cacheflush));
allowSyscall(ctx, SCMP_SYS(cachestat));
allowSyscall(ctx, SCMP_SYS(capget));
allowSyscall(ctx, SCMP_SYS(capset));
allowSyscall(ctx, SCMP_SYS(chdir));
// skip chmod (dangerous)
allowSyscall(ctx, SCMP_SYS(chown));
allowSyscall(ctx, SCMP_SYS(chown32));
allowSyscall(ctx, SCMP_SYS(chroot));
allowSyscall(ctx, SCMP_SYS(clock_adjtime));
allowSyscall(ctx, SCMP_SYS(clock_adjtime64));
allowSyscall(ctx, SCMP_SYS(clock_getres));
allowSyscall(ctx, SCMP_SYS(clock_getres_time64));
allowSyscall(ctx, SCMP_SYS(clock_gettime));
allowSyscall(ctx, SCMP_SYS(clock_gettime64));
allowSyscall(ctx, SCMP_SYS(clock_nanosleep));
allowSyscall(ctx, SCMP_SYS(clock_nanosleep_time64));
allowSyscall(ctx, SCMP_SYS(clock_settime));
allowSyscall(ctx, SCMP_SYS(clock_settime64));
allowSyscall(ctx, SCMP_SYS(clone));
allowSyscall(ctx, SCMP_SYS(clone3));
allowSyscall(ctx, SCMP_SYS(close));
allowSyscall(ctx, SCMP_SYS(close_range));
allowSyscall(ctx, SCMP_SYS(connect));
allowSyscall(ctx, SCMP_SYS(copy_file_range));
allowSyscall(ctx, SCMP_SYS(creat));
allowSyscall(ctx, SCMP_SYS(create_module));
allowSyscall(ctx, SCMP_SYS(delete_module));
allowSyscall(ctx, SCMP_SYS(dup));
allowSyscall(ctx, SCMP_SYS(dup2));
allowSyscall(ctx, SCMP_SYS(dup3));
allowSyscall(ctx, SCMP_SYS(epoll_create));
allowSyscall(ctx, SCMP_SYS(epoll_create1));
allowSyscall(ctx, SCMP_SYS(epoll_ctl));
allowSyscall(ctx, SCMP_SYS(epoll_ctl_old));
allowSyscall(ctx, SCMP_SYS(epoll_pwait));
allowSyscall(ctx, SCMP_SYS(epoll_pwait2));
allowSyscall(ctx, SCMP_SYS(epoll_wait));
allowSyscall(ctx, SCMP_SYS(epoll_wait_old));
allowSyscall(ctx, SCMP_SYS(eventfd));
allowSyscall(ctx, SCMP_SYS(eventfd2));
allowSyscall(ctx, SCMP_SYS(execve));
allowSyscall(ctx, SCMP_SYS(execveat));
allowSyscall(ctx, SCMP_SYS(exit));
allowSyscall(ctx, SCMP_SYS(exit_group));
allowSyscall(ctx, SCMP_SYS(faccessat));
allowSyscall(ctx, SCMP_SYS(faccessat2));
allowSyscall(ctx, SCMP_SYS(fadvise64));
allowSyscall(ctx, SCMP_SYS(fadvise64_64));
allowSyscall(ctx, SCMP_SYS(fallocate));
allowSyscall(ctx, SCMP_SYS(fanotify_init));
allowSyscall(ctx, SCMP_SYS(fanotify_mark));
allowSyscall(ctx, SCMP_SYS(fchdir));
// skip fchmod (dangerous)
// skip fchmodat (dangerous)
// skip fchmodat2 (requires glibc 2.39, dangerous)
allowSyscall(ctx, SCMP_SYS(fchown));
allowSyscall(ctx, SCMP_SYS(fchown32));
allowSyscall(ctx, SCMP_SYS(fchownat));
allowSyscall(ctx, SCMP_SYS(fcntl));
allowSyscall(ctx, SCMP_SYS(fcntl64));
allowSyscall(ctx, SCMP_SYS(fdatasync));
allowSyscall(ctx, SCMP_SYS(fgetxattr));
allowSyscall(ctx, SCMP_SYS(finit_module));
allowSyscall(ctx, SCMP_SYS(flistxattr));
allowSyscall(ctx, SCMP_SYS(flock));
allowSyscall(ctx, SCMP_SYS(fork));
allowSyscall(ctx, SCMP_SYS(fremovexattr));
allowSyscall(ctx, SCMP_SYS(fsconfig));
// skip fsetxattr (dangerous)
allowSyscall(ctx, SCMP_SYS(fsmount));
allowSyscall(ctx, SCMP_SYS(fsopen));
allowSyscall(ctx, SCMP_SYS(fspick));
allowSyscall(ctx, SCMP_SYS(fstat));
allowSyscall(ctx, SCMP_SYS(fstat64));
allowSyscall(ctx, SCMP_SYS(fstatat64));
allowSyscall(ctx, SCMP_SYS(fstatfs));
allowSyscall(ctx, SCMP_SYS(fstatfs64));
allowSyscall(ctx, SCMP_SYS(fsync));
allowSyscall(ctx, SCMP_SYS(ftime));
allowSyscall(ctx, SCMP_SYS(ftruncate));
allowSyscall(ctx, SCMP_SYS(ftruncate64));
allowSyscall(ctx, SCMP_SYS(futex));
// skip futex_requeue (requires glibc 2.39)
allowSyscall(ctx, SCMP_SYS(futex_time64));
// skip futex_wait (requires glibc 2.39)
allowSyscall(ctx, SCMP_SYS(futex_waitv));
// skip futex_wake (requires glibc 2.39)
allowSyscall(ctx, SCMP_SYS(futimesat));
allowSyscall(ctx, SCMP_SYS(getcpu));
allowSyscall(ctx, SCMP_SYS(getcwd));
allowSyscall(ctx, SCMP_SYS(getdents));
allowSyscall(ctx, SCMP_SYS(getdents64));
allowSyscall(ctx, SCMP_SYS(getegid));
allowSyscall(ctx, SCMP_SYS(getegid32));
allowSyscall(ctx, SCMP_SYS(geteuid));
allowSyscall(ctx, SCMP_SYS(geteuid32));
allowSyscall(ctx, SCMP_SYS(getgid));
allowSyscall(ctx, SCMP_SYS(getgid32));
allowSyscall(ctx, SCMP_SYS(getgroups));
allowSyscall(ctx, SCMP_SYS(getgroups32));
allowSyscall(ctx, SCMP_SYS(getitimer));
allowSyscall(ctx, SCMP_SYS(get_kernel_syms));
allowSyscall(ctx, SCMP_SYS(get_mempolicy));
allowSyscall(ctx, SCMP_SYS(getpeername));
allowSyscall(ctx, SCMP_SYS(getpgid));
allowSyscall(ctx, SCMP_SYS(getpgrp));
allowSyscall(ctx, SCMP_SYS(getpid));
allowSyscall(ctx, SCMP_SYS(getpmsg));
allowSyscall(ctx, SCMP_SYS(getppid));
allowSyscall(ctx, SCMP_SYS(getpriority));
allowSyscall(ctx, SCMP_SYS(getrandom));
allowSyscall(ctx, SCMP_SYS(getresgid));
allowSyscall(ctx, SCMP_SYS(getresgid32));
allowSyscall(ctx, SCMP_SYS(getresuid));
allowSyscall(ctx, SCMP_SYS(getresuid32));
allowSyscall(ctx, SCMP_SYS(getrlimit));
allowSyscall(ctx, SCMP_SYS(get_robust_list));
allowSyscall(ctx, SCMP_SYS(getrusage));
allowSyscall(ctx, SCMP_SYS(getsid));
allowSyscall(ctx, SCMP_SYS(getsockname));
allowSyscall(ctx, SCMP_SYS(getsockopt));
allowSyscall(ctx, SCMP_SYS(get_thread_area));
allowSyscall(ctx, SCMP_SYS(gettid));
allowSyscall(ctx, SCMP_SYS(gettimeofday));
allowSyscall(ctx, SCMP_SYS(get_tls));
allowSyscall(ctx, SCMP_SYS(getuid));
allowSyscall(ctx, SCMP_SYS(getuid32));
allowSyscall(ctx, SCMP_SYS(getxattr));
allowSyscall(ctx, SCMP_SYS(gtty));
allowSyscall(ctx, SCMP_SYS(idle));
allowSyscall(ctx, SCMP_SYS(init_module));
allowSyscall(ctx, SCMP_SYS(inotify_add_watch));
allowSyscall(ctx, SCMP_SYS(inotify_init));
allowSyscall(ctx, SCMP_SYS(inotify_init1));
allowSyscall(ctx, SCMP_SYS(inotify_rm_watch));
allowSyscall(ctx, SCMP_SYS(io_cancel));
allowSyscall(ctx, SCMP_SYS(ioctl));
allowSyscall(ctx, SCMP_SYS(io_destroy));
allowSyscall(ctx, SCMP_SYS(io_getevents));
allowSyscall(ctx, SCMP_SYS(ioperm));
allowSyscall(ctx, SCMP_SYS(io_pgetevents));
allowSyscall(ctx, SCMP_SYS(io_pgetevents_time64));
allowSyscall(ctx, SCMP_SYS(iopl));
allowSyscall(ctx, SCMP_SYS(ioprio_get));
allowSyscall(ctx, SCMP_SYS(ioprio_set));
allowSyscall(ctx, SCMP_SYS(io_setup));
allowSyscall(ctx, SCMP_SYS(io_submit));
allowSyscall(ctx, SCMP_SYS(io_uring_enter));
allowSyscall(ctx, SCMP_SYS(io_uring_register));
allowSyscall(ctx, SCMP_SYS(io_uring_setup));
allowSyscall(ctx, SCMP_SYS(ipc));
allowSyscall(ctx, SCMP_SYS(kcmp));
allowSyscall(ctx, SCMP_SYS(kexec_file_load));
allowSyscall(ctx, SCMP_SYS(kexec_load));
allowSyscall(ctx, SCMP_SYS(keyctl));
allowSyscall(ctx, SCMP_SYS(kill));
allowSyscall(ctx, SCMP_SYS(landlock_add_rule));
allowSyscall(ctx, SCMP_SYS(landlock_create_ruleset));
allowSyscall(ctx, SCMP_SYS(landlock_restrict_self));
allowSyscall(ctx, SCMP_SYS(lchown));
allowSyscall(ctx, SCMP_SYS(lchown32));
allowSyscall(ctx, SCMP_SYS(lgetxattr));
allowSyscall(ctx, SCMP_SYS(link));
allowSyscall(ctx, SCMP_SYS(linkat));
allowSyscall(ctx, SCMP_SYS(listen));
allowSyscall(ctx, SCMP_SYS(listxattr));
allowSyscall(ctx, SCMP_SYS(llistxattr));
allowSyscall(ctx, SCMP_SYS(_llseek));
allowSyscall(ctx, SCMP_SYS(lock));
allowSyscall(ctx, SCMP_SYS(lookup_dcookie));
allowSyscall(ctx, SCMP_SYS(lremovexattr));
allowSyscall(ctx, SCMP_SYS(lseek));
// skip lsetxattr (dangerous)
allowSyscall(ctx, SCMP_SYS(lstat));
allowSyscall(ctx, SCMP_SYS(lstat64));
allowSyscall(ctx, SCMP_SYS(madvise));
// skip map_shadow_stack (requires glibc 2.39)
allowSyscall(ctx, SCMP_SYS(mbind));
allowSyscall(ctx, SCMP_SYS(membarrier));
allowSyscall(ctx, SCMP_SYS(memfd_create));
allowSyscall(ctx, SCMP_SYS(memfd_secret));
allowSyscall(ctx, SCMP_SYS(migrate_pages));
allowSyscall(ctx, SCMP_SYS(mincore));
allowSyscall(ctx, SCMP_SYS(mkdir));
allowSyscall(ctx, SCMP_SYS(mkdirat));
allowSyscall(ctx, SCMP_SYS(mknod));
allowSyscall(ctx, SCMP_SYS(mknodat));
allowSyscall(ctx, SCMP_SYS(mlock));
allowSyscall(ctx, SCMP_SYS(mlock2));
allowSyscall(ctx, SCMP_SYS(mlockall));
allowSyscall(ctx, SCMP_SYS(mmap));
allowSyscall(ctx, SCMP_SYS(mmap2));
allowSyscall(ctx, SCMP_SYS(modify_ldt));
allowSyscall(ctx, SCMP_SYS(mount));
allowSyscall(ctx, SCMP_SYS(mount_setattr));
allowSyscall(ctx, SCMP_SYS(move_mount));
allowSyscall(ctx, SCMP_SYS(move_pages));
allowSyscall(ctx, SCMP_SYS(mprotect));
allowSyscall(ctx, SCMP_SYS(mpx));
allowSyscall(ctx, SCMP_SYS(mq_getsetattr));
allowSyscall(ctx, SCMP_SYS(mq_notify));
allowSyscall(ctx, SCMP_SYS(mq_open));
allowSyscall(ctx, SCMP_SYS(mq_timedreceive));
allowSyscall(ctx, SCMP_SYS(mq_timedreceive_time64));
allowSyscall(ctx, SCMP_SYS(mq_timedsend));
allowSyscall(ctx, SCMP_SYS(mq_timedsend_time64));
allowSyscall(ctx, SCMP_SYS(mq_unlink));
allowSyscall(ctx, SCMP_SYS(mremap));
allowSyscall(ctx, SCMP_SYS(msgctl));
allowSyscall(ctx, SCMP_SYS(msgget));
allowSyscall(ctx, SCMP_SYS(msgrcv));
allowSyscall(ctx, SCMP_SYS(msgsnd));
allowSyscall(ctx, SCMP_SYS(msync));
allowSyscall(ctx, SCMP_SYS(multiplexer));
allowSyscall(ctx, SCMP_SYS(munlock));
allowSyscall(ctx, SCMP_SYS(munlockall));
allowSyscall(ctx, SCMP_SYS(munmap));
allowSyscall(ctx, SCMP_SYS(name_to_handle_at));
allowSyscall(ctx, SCMP_SYS(nanosleep));
allowSyscall(ctx, SCMP_SYS(newfstatat));
allowSyscall(ctx, SCMP_SYS(_newselect));
allowSyscall(ctx, SCMP_SYS(nfsservctl));
allowSyscall(ctx, SCMP_SYS(nice));
allowSyscall(ctx, SCMP_SYS(oldfstat));
allowSyscall(ctx, SCMP_SYS(oldlstat));
allowSyscall(ctx, SCMP_SYS(oldolduname));
allowSyscall(ctx, SCMP_SYS(oldstat));
allowSyscall(ctx, SCMP_SYS(olduname));
allowSyscall(ctx, SCMP_SYS(open));
allowSyscall(ctx, SCMP_SYS(openat));
allowSyscall(ctx, SCMP_SYS(openat2));
allowSyscall(ctx, SCMP_SYS(open_by_handle_at));
allowSyscall(ctx, SCMP_SYS(open_tree));
allowSyscall(ctx, SCMP_SYS(pause));
allowSyscall(ctx, SCMP_SYS(pciconfig_iobase));
allowSyscall(ctx, SCMP_SYS(pciconfig_read));
allowSyscall(ctx, SCMP_SYS(pciconfig_write));
allowSyscall(ctx, SCMP_SYS(perf_event_open));
allowSyscall(ctx, SCMP_SYS(personality));
allowSyscall(ctx, SCMP_SYS(pidfd_getfd));
allowSyscall(ctx, SCMP_SYS(pidfd_open));
allowSyscall(ctx, SCMP_SYS(pidfd_send_signal));
allowSyscall(ctx, SCMP_SYS(pipe));
allowSyscall(ctx, SCMP_SYS(pipe2));
allowSyscall(ctx, SCMP_SYS(pivot_root));
allowSyscall(ctx, SCMP_SYS(pkey_alloc));
allowSyscall(ctx, SCMP_SYS(pkey_free));
allowSyscall(ctx, SCMP_SYS(pkey_mprotect));
allowSyscall(ctx, SCMP_SYS(poll));
allowSyscall(ctx, SCMP_SYS(ppoll));
allowSyscall(ctx, SCMP_SYS(ppoll_time64));
allowSyscall(ctx, SCMP_SYS(prctl));
allowSyscall(ctx, SCMP_SYS(pread64));
allowSyscall(ctx, SCMP_SYS(preadv));
allowSyscall(ctx, SCMP_SYS(preadv2));
allowSyscall(ctx, SCMP_SYS(prlimit64));
allowSyscall(ctx, SCMP_SYS(process_madvise));
allowSyscall(ctx, SCMP_SYS(process_mrelease));
allowSyscall(ctx, SCMP_SYS(process_vm_readv));
allowSyscall(ctx, SCMP_SYS(process_vm_writev));
allowSyscall(ctx, SCMP_SYS(prof));
allowSyscall(ctx, SCMP_SYS(profil));
allowSyscall(ctx, SCMP_SYS(pselect6));
allowSyscall(ctx, SCMP_SYS(pselect6_time64));
allowSyscall(ctx, SCMP_SYS(ptrace));
allowSyscall(ctx, SCMP_SYS(putpmsg));
allowSyscall(ctx, SCMP_SYS(pwrite64));
allowSyscall(ctx, SCMP_SYS(pwritev));
allowSyscall(ctx, SCMP_SYS(pwritev2));
allowSyscall(ctx, SCMP_SYS(query_module));
allowSyscall(ctx, SCMP_SYS(quotactl));
allowSyscall(ctx, SCMP_SYS(quotactl_fd));
allowSyscall(ctx, SCMP_SYS(read));
allowSyscall(ctx, SCMP_SYS(readahead));
allowSyscall(ctx, SCMP_SYS(readdir));
allowSyscall(ctx, SCMP_SYS(readlink));
allowSyscall(ctx, SCMP_SYS(readlinkat));
allowSyscall(ctx, SCMP_SYS(readv));
allowSyscall(ctx, SCMP_SYS(reboot));
allowSyscall(ctx, SCMP_SYS(recv));
allowSyscall(ctx, SCMP_SYS(recvfrom));
allowSyscall(ctx, SCMP_SYS(recvmmsg));
allowSyscall(ctx, SCMP_SYS(recvmmsg_time64));
allowSyscall(ctx, SCMP_SYS(recvmsg));
allowSyscall(ctx, SCMP_SYS(remap_file_pages));
allowSyscall(ctx, SCMP_SYS(removexattr));
allowSyscall(ctx, SCMP_SYS(rename));
allowSyscall(ctx, SCMP_SYS(renameat));
allowSyscall(ctx, SCMP_SYS(renameat2));
allowSyscall(ctx, SCMP_SYS(request_key));
allowSyscall(ctx, SCMP_SYS(restart_syscall));
allowSyscall(ctx, SCMP_SYS(riscv_flush_icache));
allowSyscall(ctx, SCMP_SYS(rmdir));
allowSyscall(ctx, SCMP_SYS(rseq));
allowSyscall(ctx, SCMP_SYS(rtas));
allowSyscall(ctx, SCMP_SYS(rt_sigaction));
allowSyscall(ctx, SCMP_SYS(rt_sigpending));
allowSyscall(ctx, SCMP_SYS(rt_sigprocmask));
allowSyscall(ctx, SCMP_SYS(rt_sigqueueinfo));
allowSyscall(ctx, SCMP_SYS(rt_sigreturn));
allowSyscall(ctx, SCMP_SYS(rt_sigsuspend));
allowSyscall(ctx, SCMP_SYS(rt_sigtimedwait));
allowSyscall(ctx, SCMP_SYS(rt_sigtimedwait_time64));
allowSyscall(ctx, SCMP_SYS(rt_tgsigqueueinfo));
allowSyscall(ctx, SCMP_SYS(s390_guarded_storage));
allowSyscall(ctx, SCMP_SYS(s390_pci_mmio_read));
allowSyscall(ctx, SCMP_SYS(s390_pci_mmio_write));
allowSyscall(ctx, SCMP_SYS(s390_runtime_instr));
allowSyscall(ctx, SCMP_SYS(s390_sthyi));
allowSyscall(ctx, SCMP_SYS(sched_getaffinity));
allowSyscall(ctx, SCMP_SYS(sched_getattr));
allowSyscall(ctx, SCMP_SYS(sched_getparam));
allowSyscall(ctx, SCMP_SYS(sched_get_priority_max));
allowSyscall(ctx, SCMP_SYS(sched_get_priority_min));
allowSyscall(ctx, SCMP_SYS(sched_getscheduler));
allowSyscall(ctx, SCMP_SYS(sched_rr_get_interval));
allowSyscall(ctx, SCMP_SYS(sched_rr_get_interval_time64));
allowSyscall(ctx, SCMP_SYS(sched_setaffinity));
allowSyscall(ctx, SCMP_SYS(sched_setattr));
allowSyscall(ctx, SCMP_SYS(sched_setparam));
allowSyscall(ctx, SCMP_SYS(sched_setscheduler));
allowSyscall(ctx, SCMP_SYS(sched_yield));
allowSyscall(ctx, SCMP_SYS(seccomp));
allowSyscall(ctx, SCMP_SYS(security));
allowSyscall(ctx, SCMP_SYS(select));
allowSyscall(ctx, SCMP_SYS(semctl));
allowSyscall(ctx, SCMP_SYS(semget));
allowSyscall(ctx, SCMP_SYS(semop));
allowSyscall(ctx, SCMP_SYS(semtimedop));
allowSyscall(ctx, SCMP_SYS(semtimedop_time64));
allowSyscall(ctx, SCMP_SYS(send));
allowSyscall(ctx, SCMP_SYS(sendfile));
allowSyscall(ctx, SCMP_SYS(sendfile64));
allowSyscall(ctx, SCMP_SYS(sendmmsg));
allowSyscall(ctx, SCMP_SYS(sendmsg));
allowSyscall(ctx, SCMP_SYS(sendto));
allowSyscall(ctx, SCMP_SYS(setdomainname));
allowSyscall(ctx, SCMP_SYS(setfsgid));
allowSyscall(ctx, SCMP_SYS(setfsgid32));
allowSyscall(ctx, SCMP_SYS(setfsuid));
allowSyscall(ctx, SCMP_SYS(setfsuid32));
allowSyscall(ctx, SCMP_SYS(setgid));
allowSyscall(ctx, SCMP_SYS(setgid32));
allowSyscall(ctx, SCMP_SYS(setgroups));
allowSyscall(ctx, SCMP_SYS(setgroups32));
allowSyscall(ctx, SCMP_SYS(sethostname));
allowSyscall(ctx, SCMP_SYS(setitimer));
allowSyscall(ctx, SCMP_SYS(set_mempolicy));
allowSyscall(ctx, SCMP_SYS(set_mempolicy_home_node));
allowSyscall(ctx, SCMP_SYS(setns));
allowSyscall(ctx, SCMP_SYS(setpgid));
allowSyscall(ctx, SCMP_SYS(setpriority));
allowSyscall(ctx, SCMP_SYS(setregid));
allowSyscall(ctx, SCMP_SYS(setregid32));
allowSyscall(ctx, SCMP_SYS(setresgid));
allowSyscall(ctx, SCMP_SYS(setresgid32));
allowSyscall(ctx, SCMP_SYS(setresuid));
allowSyscall(ctx, SCMP_SYS(setresuid32));
allowSyscall(ctx, SCMP_SYS(setreuid));
allowSyscall(ctx, SCMP_SYS(setreuid32));
allowSyscall(ctx, SCMP_SYS(setrlimit));
allowSyscall(ctx, SCMP_SYS(set_robust_list));
allowSyscall(ctx, SCMP_SYS(setsid));
allowSyscall(ctx, SCMP_SYS(setsockopt));
allowSyscall(ctx, SCMP_SYS(set_thread_area));
allowSyscall(ctx, SCMP_SYS(set_tid_address));
allowSyscall(ctx, SCMP_SYS(settimeofday));
allowSyscall(ctx, SCMP_SYS(set_tls));
allowSyscall(ctx, SCMP_SYS(setuid));
allowSyscall(ctx, SCMP_SYS(setuid32));
// skip setxattr (dangerous)
allowSyscall(ctx, SCMP_SYS(sgetmask));
allowSyscall(ctx, SCMP_SYS(shmat));
allowSyscall(ctx, SCMP_SYS(shmctl));
allowSyscall(ctx, SCMP_SYS(shmdt));
allowSyscall(ctx, SCMP_SYS(shmget));
allowSyscall(ctx, SCMP_SYS(shutdown));
allowSyscall(ctx, SCMP_SYS(sigaction));
allowSyscall(ctx, SCMP_SYS(sigaltstack));
allowSyscall(ctx, SCMP_SYS(signal));
allowSyscall(ctx, SCMP_SYS(signalfd));
allowSyscall(ctx, SCMP_SYS(signalfd4));
allowSyscall(ctx, SCMP_SYS(sigpending));
allowSyscall(ctx, SCMP_SYS(sigprocmask));
allowSyscall(ctx, SCMP_SYS(sigreturn));
allowSyscall(ctx, SCMP_SYS(sigsuspend));
allowSyscall(ctx, SCMP_SYS(socket));
allowSyscall(ctx, SCMP_SYS(socketcall));
allowSyscall(ctx, SCMP_SYS(socketpair));
allowSyscall(ctx, SCMP_SYS(splice));
allowSyscall(ctx, SCMP_SYS(spu_create));
allowSyscall(ctx, SCMP_SYS(spu_run));
allowSyscall(ctx, SCMP_SYS(ssetmask));
allowSyscall(ctx, SCMP_SYS(stat));
allowSyscall(ctx, SCMP_SYS(stat64));
allowSyscall(ctx, SCMP_SYS(statfs));
allowSyscall(ctx, SCMP_SYS(statfs64));
allowSyscall(ctx, SCMP_SYS(statx));
allowSyscall(ctx, SCMP_SYS(stime));
allowSyscall(ctx, SCMP_SYS(stty));
allowSyscall(ctx, SCMP_SYS(subpage_prot));
allowSyscall(ctx, SCMP_SYS(swapcontext));
allowSyscall(ctx, SCMP_SYS(swapoff));
allowSyscall(ctx, SCMP_SYS(swapon));
allowSyscall(ctx, SCMP_SYS(switch_endian));
allowSyscall(ctx, SCMP_SYS(symlink));
allowSyscall(ctx, SCMP_SYS(symlinkat));
allowSyscall(ctx, SCMP_SYS(sync));
allowSyscall(ctx, SCMP_SYS(sync_file_range));
allowSyscall(ctx, SCMP_SYS(sync_file_range2));
allowSyscall(ctx, SCMP_SYS(syncfs));
allowSyscall(ctx, SCMP_SYS(syscall));
allowSyscall(ctx, SCMP_SYS(_sysctl));
allowSyscall(ctx, SCMP_SYS(sys_debug_setcontext));
allowSyscall(ctx, SCMP_SYS(sysfs));
allowSyscall(ctx, SCMP_SYS(sysinfo));
allowSyscall(ctx, SCMP_SYS(syslog));
allowSyscall(ctx, SCMP_SYS(sysmips));
allowSyscall(ctx, SCMP_SYS(tee));
allowSyscall(ctx, SCMP_SYS(tgkill));
allowSyscall(ctx, SCMP_SYS(time));
allowSyscall(ctx, SCMP_SYS(timer_create));
allowSyscall(ctx, SCMP_SYS(timer_delete));
allowSyscall(ctx, SCMP_SYS(timerfd));
allowSyscall(ctx, SCMP_SYS(timerfd_create));
allowSyscall(ctx, SCMP_SYS(timerfd_gettime));
allowSyscall(ctx, SCMP_SYS(timerfd_gettime64));
allowSyscall(ctx, SCMP_SYS(timerfd_settime));
allowSyscall(ctx, SCMP_SYS(timerfd_settime64));
allowSyscall(ctx, SCMP_SYS(timer_getoverrun));
allowSyscall(ctx, SCMP_SYS(timer_gettime));
allowSyscall(ctx, SCMP_SYS(timer_gettime64));
allowSyscall(ctx, SCMP_SYS(timer_settime));
allowSyscall(ctx, SCMP_SYS(timer_settime64));
allowSyscall(ctx, SCMP_SYS(times));
allowSyscall(ctx, SCMP_SYS(tkill));
allowSyscall(ctx, SCMP_SYS(truncate));
allowSyscall(ctx, SCMP_SYS(truncate64));
allowSyscall(ctx, SCMP_SYS(tuxcall));
allowSyscall(ctx, SCMP_SYS(ugetrlimit));
allowSyscall(ctx, SCMP_SYS(ulimit));
allowSyscall(ctx, SCMP_SYS(umask));
allowSyscall(ctx, SCMP_SYS(umount));
allowSyscall(ctx, SCMP_SYS(umount2));
allowSyscall(ctx, SCMP_SYS(uname));
allowSyscall(ctx, SCMP_SYS(unlink));
allowSyscall(ctx, SCMP_SYS(unlinkat));
allowSyscall(ctx, SCMP_SYS(unshare));
allowSyscall(ctx, SCMP_SYS(uselib));
allowSyscall(ctx, SCMP_SYS(userfaultfd));
allowSyscall(ctx, SCMP_SYS(usr26));
allowSyscall(ctx, SCMP_SYS(usr32));
allowSyscall(ctx, SCMP_SYS(ustat));
allowSyscall(ctx, SCMP_SYS(utime));
allowSyscall(ctx, SCMP_SYS(utimensat));
allowSyscall(ctx, SCMP_SYS(utimensat_time64));
allowSyscall(ctx, SCMP_SYS(utimes));
allowSyscall(ctx, SCMP_SYS(vfork));
allowSyscall(ctx, SCMP_SYS(vhangup));
allowSyscall(ctx, SCMP_SYS(vm86));
allowSyscall(ctx, SCMP_SYS(vm86old));
allowSyscall(ctx, SCMP_SYS(vmsplice));
allowSyscall(ctx, SCMP_SYS(vserver));
allowSyscall(ctx, SCMP_SYS(wait4));
allowSyscall(ctx, SCMP_SYS(waitid));
allowSyscall(ctx, SCMP_SYS(waitpid));
allowSyscall(ctx, SCMP_SYS(write));
allowSyscall(ctx, SCMP_SYS(writev));
// END extract-syscalls
-        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmod), 1,
-                SCMP_A1(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0)
-            throw SysError("unable to add seccomp rule");
-
-        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmodat), 1,
-                SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0)
-            throw SysError("unable to add seccomp rule");
-
-        if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), NIX_SYSCALL_FCHMODAT2, 1,
-                SCMP_A2(SCMP_CMP_MASKED_EQ, (scmp_datum_t) perm, (scmp_datum_t) perm)) != 0)
-            throw SysError("unable to add seccomp rule");
-    }
-
-    /* Prevent builders from creating EAs or ACLs. Not all filesystems
-       support these, and they're not allowed in the Nix store because
-       they're not representable in the NAR serialisation. */
+    // chmod family: prevent adding setuid/setgid bits to existing files.
+    // The Nix store does not support setuid/setgid, and even their temporary creation can weaken the security of the sandbox.
+    if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(chmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISUID | S_ISGID, 0)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(chmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISUID, S_ISUID)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(chmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISGID, S_ISGID)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(fchmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISUID | S_ISGID, 0)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISUID, S_ISUID)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmod), 1, SCMP_A1(SCMP_CMP_MASKED_EQ, S_ISGID, S_ISGID)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(fchmodat), 1, SCMP_A2(SCMP_CMP_MASKED_EQ, S_ISUID | S_ISGID, 0)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmodat), 1, SCMP_A2(SCMP_CMP_MASKED_EQ, S_ISUID, S_ISUID)) != 0 ||
+        seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(fchmodat), 1, SCMP_A2(SCMP_CMP_MASKED_EQ, S_ISGID, S_ISGID)) != 0)
+        throw SysError("unable to add seccomp rule");
+
+    // setxattr family: prevent creation of extended attributes or ACLs.
+    // Not all filesystems support them, and they're incompatible with the NAR format.
     if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(setxattr), 0) != 0 ||
         seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(lsetxattr), 0) != 0 ||
         seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fsetxattr), 0) != 0)
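
From inside the sandbox, the effect of the rules above shows up as ordinary errno values. A hedged illustration of what a builder process would observe; the file name and modes are invented for the example:

// Illustration only: expected errno values under rules like the above.
#include <cerrno>
#include <cstdio>
#include <sys/stat.h>
#include <sys/xattr.h>

int main() {
    if (chmod("./artifact", 0755) != 0)   // no setuid/setgid bits: permitted
        perror("plain chmod");
    if (chmod("./artifact", 04755) != 0)  // S_ISUID requested
        perror("chmod +s");               // expected to fail with EPERM
    if (setxattr("./artifact", "user.example", "x", 1, 0) != 0)
        perror("setxattr");               // expected to fail with ENOTSUP
    return 0;
}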
@ -1714,11 +2213,7 @@ void LocalDerivationGoal::runChild()
     commonChildInit();

-    try {
-        setupSeccomp();
-    } catch (...) {
-        if (buildUser) throw;
-    }
+    setupSeccomp();

     bool setUser = true;

View file

@ -656,7 +656,7 @@ struct curlFileTransfer : public FileTransfer
     /* Ugly hack to support s3:// URIs. */
     if (request.uri.starts_with("s3://")) {
         // FIXME: do this on a worker thread
-        return std::async(std::launch::deferred, [uri{request.uri}] {
+        return std::async(std::launch::deferred, [uri{request.uri}]() -> FileTransferResult {
 #if ENABLE_S3
         auto [bucketName, key, params] = parseS3Uri(uri);
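
The explicit trailing return type matters because a lambda deduces its return type from its return statements; a body whose only exit is a throw (plausibly the case here when S3 support is compiled out) deduces void, which changes the future type std::async hands back. A hedged stand-alone illustration, with all names invented for the example:

// Stand-alone illustration (invented names; not the Lix code).
#include <future>
#include <stdexcept>
#include <string>

int main() {
    // Deduces std::future<void>: the lambda body never returns a value.
    auto withoutType = std::async(std::launch::deferred, [] {
        throw std::runtime_error("unsupported");
    });

    // The trailing return type pins the result to std::future<std::string>
    // even though this body, too, only throws.
    auto withType = std::async(std::launch::deferred, []() -> std::string {
        throw std::runtime_error("unsupported");
    });

    (void) withoutType;
    (void) withType;
    return 0;
}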

View file

@ -1,35 +0,0 @@
/*
* Determine the syscall number for `fchmodat2`.
*
* On most platforms this is 452. Exceptions can be found on
* a glibc git checkout via `rg --pcre2 'define __NR_fchmodat2 (?!452)'`.
*
* The problem is that glibc 2.39 and libseccomp 2.5.5 are needed to
* get the syscall number. However, a Lix built against nixpkgs 23.11
* (glibc 2.38) should still have the issue fixed without depending
* on the build environment.
*
* To achieve that, the macros below try to determine the platform and
* set the syscall number which is platform-specific, but
* in most cases 452.
*
* TODO: remove this when 23.11 is EOL and the entire (supported) ecosystem
* is on glibc 2.39.
*/
#pragma once
///@file
#if defined(__alpha__)
# define NIX_SYSCALL_FCHMODAT2 562
#elif defined(__x86_64__) && SIZE_MAX == 0xFFFFFFFF // x32
# define NIX_SYSCALL_FCHMODAT2 1073742276
#elif defined(__mips__) && defined(__mips64) && defined(_ABIN64) // mips64/n64
# define NIX_SYSCALL_FCHMODAT2 5452
#elif defined(__mips__) && defined(__mips64) && defined(_ABIN32) // mips64/n32
# define NIX_SYSCALL_FCHMODAT2 6452
#elif defined(__mips__) && defined(_ABIO32) // mips32
# define NIX_SYSCALL_FCHMODAT2 4452
#else
# define NIX_SYSCALL_FCHMODAT2 452
#endif

View file

@ -220,7 +220,7 @@ libstore = library(
     nlohmann_json,
   ],
   cpp_args : cpp_args,
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,

View file

@ -114,6 +114,9 @@ public:
     virtual void setPrintBuildLogs(bool printBuildLogs)
     { }

+    virtual void setPrintMultiline(bool printMultiline)
+    { }
+
 };

 /**
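
A hedged sketch of how a logger implementation might honour the new hook. The Logger interface is reduced here to the two setters visible above, and ExampleProgressBar is an invented name:

#include <iostream>

// Reduced interface for the sketch; the real Logger has many more members.
struct Logger {
    virtual ~Logger() = default;
    virtual void setPrintBuildLogs(bool printBuildLogs) { }
    virtual void setPrintMultiline(bool printMultiline) { }
};

struct ExampleProgressBar : Logger {
    bool multiline = false;
    void setPrintMultiline(bool printMultiline) override
    {
        // When enabled, render one status line per active build
        // instead of a single combined status line.
        multiline = printMultiline;
    }
};

int main() {
    ExampleProgressBar bar;
    bar.setPrintMultiline(true); // e.g. selected by a multiline log format
    std::cout << (bar.multiline ? "multiline" : "single line") << "\n";
    return 0;
}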

View file

@ -129,7 +129,7 @@ libutil = library(
     openssl,
     nlohmann_json,
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   implicit_include_directories : true,
   install : true,
 )

View file

@ -89,7 +89,7 @@ nix = executable(
     boehm,
     nlohmann_json,
   ],
-  cpp_pch : ['../pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
   install : true,
   # FIXME(Qyriad): is this right?
   install_rpath : libdir,

View file

@ -146,7 +146,8 @@ fi
 isDaemonNewer () {
   [[ -n "${NIX_DAEMON_PACKAGE:-}" ]] || return 0
   local requiredVersion="$1"
-  local daemonVersion=$($NIX_DAEMON_PACKAGE/bin/nix daemon --version | cut -d' ' -f3)
+  local versionOutput=$($NIX_DAEMON_PACKAGE/bin/nix daemon --version)
+  local daemonVersion=${versionOutput##* }
   [[ $(nix eval --expr "builtins.compareVersions ''$daemonVersion'' ''$requiredVersion''") -ge 0 ]]
 }

View file

@ -37,7 +37,8 @@ in {
   { config, pkgs, ... }:
   { services.openssh.enable = true;
     services.openssh.settings.PermitRootLogin = "yes";
-    users.users.root.password = "foobar";
+    users.users.root.hashedPasswordFile = lib.mkForce null;
+    users.users.root.password = "foobar";
     virtualisation.writableStore = true;
     virtualisation.additionalPaths = [ pkgB pkgC ];
   };

View file

@ -1,21 +0,0 @@
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/syscall.h>
#include <errno.h>
#include <unistd.h>
#include <assert.h>
int main(void) {
char *name = getenv("out");
FILE *fd = fopen(name, "w");
fprintf(fd, "henlo :3");
fclose(fd);
// FIXME use something nicer here that's less
// platform-dependent as soon as we go to 24.05
// and the glibc is new enough to support fchmodat2
long rs = syscall(452, NULL, name, S_ISUID, 0);
assert(rs == -1);
assert(errno == EPERM);
}

View file

@ -4,17 +4,6 @@
 let
   pkgs = config.nodes.machine.nixpkgs.pkgs;
-  fchmodat2-builder = pkgs.runCommandCC "fchmodat2-suid" {
-    passAsFile = [ "code" ];
-    code = builtins.readFile ./fchmodat2-suid.c;
-    # Doesn't work with -O0, shuts up the warning about that.
-    hardeningDisable = [ "fortify" ];
-  } ''
-    mkdir -p $out/bin/
-    $CC -x c "$codePath" -O0 -g -o $out/bin/fchmodat2-suid
-  '';
 in
 {
   name = "setuid";
@ -27,26 +16,13 @@ in
       virtualisation.additionalPaths = [
         pkgs.stdenvNoCC
         pkgs.pkgsi686Linux.stdenvNoCC
-        fchmodat2-builder
       ];
-      # need at least 6.6 to test for fchmodat2
-      boot.kernelPackages = pkgs.linuxKernel.packages.linux_6_6;
     };

   testScript = { nodes }: ''
     # fmt: off
     start_all()

-    with subtest("fchmodat2 suid regression test"):
-        machine.succeed("""
-        nix-build -E '(with import <nixpkgs> {}; runCommand "fchmodat2-suid" {
-          BUILDER = builtins.storePath ${fchmodat2-builder};
-        } "
-          exec \\"$BUILDER\\"/bin/fchmodat2-suid
-        ")'
-        """)
-
     # Copying to /tmp should succeed.
     machine.succeed(r"""
       nix-build --no-sandbox -E '(with import <nixpkgs> {}; runCommand "foo" {} "

View file

@ -68,7 +68,7 @@ libutil_tester = executable(
     liblixutil_test_support,
     nlohmann_json,
   ],
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 test(
@ -103,7 +103,7 @@ libstore_test_support = library(
   include_directories : include_directories(
     'libstore-support',
   ),
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 liblixstore_test_support = declare_dependency(
   include_directories : include_directories('libstore-support'),
@ -137,7 +137,7 @@ libstore_tester = executable(
     gtest,
     nlohmann_json,
   ],
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 test(
@ -169,7 +169,7 @@ libexpr_test_support = library(
   include_directories : include_directories(
     'libexpr-support',
   ),
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 liblixexpr_test_support = declare_dependency(
   include_directories : include_directories('libexpr-support'),
@ -203,7 +203,7 @@ libexpr_tester = executable(
     gtest,
     nlohmann_json,
   ],
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 test(
@ -230,7 +230,7 @@ libcmd_tester = executable(
     gtest,
     boost,
   ],
-  cpp_pch : ['../../src/pch/precompiled-headers.hh'],
+  cpp_pch : cpp_pch,
 )

 test(
@ -241,6 +241,10 @@ test(
     # No special meaning here, it's just a file laying around that is unlikely to go anywhere
     # any time soon.
     '_NIX_TEST_UNIT_DATA': meson.project_source_root() / 'src/nix-env/buildenv.nix',
+    # Use a temporary home directory for the unit tests.
+    # Otherwise, /homeless-shelter is created in the single-user sandbox, and functional tests will fail.
+    # TODO(alois31): handle TMPDIR properly (meson can't, and setting HOME in the test is too late)…
+    'HOME': '/tmp/nix-test/libcmd-unit-tests',
   },
   suite : 'check',
   protocol : 'gtest',

version.json Normal file
View file

@ -0,0 +1,4 @@
{
"version": "2.90.0",
"release_name": "Vanilla Ice Cream"
}