Compare commits

...

30 commits

Author SHA1 Message Date
alois31 c8aa1dcc21
libstore/build: always enable seccomp filtering and no-new-privileges
Seccomp filtering and the no-new-privileges functionality improve the security
of the sandbox, and have been enabled by default for a long time. In
lix-project/lix#265 it was decided that they
should be enabled unconditionally. Accordingly, remove the allow-new-privileges
(which had weird behavior anyway) and filter-syscalls settings, and force the
security features on. This turns libseccomp into a required dependency on
Linux.

Change-Id: Iedbfa18d720ae557dee07a24f69b2520f30119cb
2024-05-21 16:34:12 +02:00
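The gist of the hardening can be seen in the `setupSeccomp` hunk further down: the no-new-privileges flag is now set on the seccomp filter unconditionally instead of being gated on a setting. A minimal stand-alone sketch of that call sequence (simplified error handling, not the actual Lix code):

```cpp
// Sketch only: always-on no-new-privileges via libseccomp (link with -lseccomp).
#include <seccomp.h>
#include <stdexcept>

void setupMinimalSeccomp()
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW); // allow-by-default filter
    if (!ctx)
        throw std::runtime_error("unable to initialise seccomp");
    // Set NO_NEW_PRIVS: lets unprivileged users load the filter and blocks
    // privilege escalation via setuid binaries inside the sandbox.
    if (seccomp_attr_set(ctx, SCMP_FLTATR_CTL_NNP, 1) != 0)
        throw std::runtime_error("unable to set 'no new privileges' seccomp attribute");
    if (seccomp_load(ctx) != 0)
        throw std::runtime_error("unable to load seccomp BPF program");
    seccomp_release(ctx);
}
```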
alois31 d5fdb995d3
doc: fix repl-interrupt release note entry
The timing of the merge resulted in the newly introduced metadata not being
present.

Change-Id: I07f28cf37703ec05c3e1b96301797a42d913264b
2024-05-21 16:34:04 +02:00
Artemis Tosini 3de77e6dbd Merge "libutil: Create chmodPath function" into main 2024-05-20 15:13:53 +00:00
raito 8e1a883186 Merge "chore: remove incorrect maintainers/*.md documentation" into main 2024-05-20 12:35:20 +00:00
jade 992c63fc0b Merge "Remove upload-release.pl" into main 2024-05-20 00:38:12 +00:00
Qyriad 589953e832 Merge "fix -Wdeprecated-copy on clang (BaseError copy assignment)" into main 2024-05-20 00:11:12 +00:00
puck bfb91db4f6 repl-interacter: save history after entering every line
Fixes: lix-project/lix#328
Change-Id: Iedd79ff5f72e84766ebd234c63856170afc624f0
2024-05-19 22:47:45 +00:00
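The corresponding hunk below simply calls `write_history()` inside `getLine`. As a rough, self-contained sketch of the same idea using plain GNU readline (hypothetical demo program, not the Lix repl-interacter):

```cpp
// Persist readline history after every accepted line instead of only at exit,
// so a crash loses nothing. Link with -lreadline.
#include <cstdlib>
#include <readline/readline.h>
#include <readline/history.h>

int main()
{
    const char * historyFile = "/tmp/demo_history";
    read_history(historyFile);                 // load whatever survived last time
    while (char * line = readline("demo> ")) {
        if (*line) {
            add_history(line);
            write_history(historyFile);        // save immediately, per line
        }
        free(line);
    }
}
```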
puck 40311973a8 change-authors: add puck
Change-Id: I04b8cd04a168b3adea7790f816e774d5d90fcea2
2024-05-19 22:47:45 +00:00
Artemis Tosini 5411fbf204
libutil: Create chmodPath function
Move the identical static `chmod_` functions in libstore to
libutil. The function is called `chmodPath` instead of `chmod`
as otherwise it will shadow the standard library chmod in the nix
namespace, which is somewhat confusing.

Change-Id: I7b5ce379c6c602e3d3a1bbc49dbb70b1ae8f7bad
2024-05-19 22:07:58 +00:00
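The moved helper appears in the libutil hunk further down; roughly, it is a thin wrapper that turns the `chmod(2)` error into an exception (here sketched with `std::system_error` standing in for Lix's `SysError`):

```cpp
// Named chmodPath so that an unqualified call inside `namespace nix` cannot
// shadow the libc ::chmod.
#include <sys/stat.h>
#include <cerrno>
#include <string>
#include <system_error>

namespace nix {

using Path = std::string;

void chmodPath(const Path & path, mode_t mode)
{
    if (chmod(path.c_str(), mode) == -1)
        throw std::system_error(errno, std::generic_category(),
            "setting permissions on '" + path + "'");
}

}
```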
jade a354779d78 Remove upload-release.pl
We are doing releases totally differently than Nix so this will need
rewriting anyway.

Change-Id: Iba4ad160b9d215fcbf20a14243fd87cfbb527760
2024-05-19 13:53:39 -07:00
Qyriad 4eb6779ea8 fix -Wdeprecated-copy on clang (BaseError copy assignment)
2bbe3efd1¹ added the -Wdeprecated-copy warning, and fixed the instances
of it which GCC warned about, in HintFmt and ref<T>. However, when
building with Clang, there is an additional deprecated-copy warning in
BaseError. This commit explicitly defaults the copy assignment operator
for BaseError and silences this warning.

1: 2bbe3efd16
Change-Id: I50aa4a7ab1a7aae5d7b31f765994abd3db06379d
2024-05-19 12:32:13 -06:00
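For illustration, a small generic example of the warning and the fix applied to `BaseError` below (stand-in types, not the real class):

```cpp
struct Warns {
    Warns() = default;
    Warns(const Warns &) {}            // user-provided copy constructor...
    // ...so the implicitly-defined copy assignment is deprecated; clang
    // reports its use under -Wdeprecated-copy.
};

struct Fixed {
    Fixed() = default;
    Fixed(const Fixed &) {}
    Fixed & operator=(const Fixed &) = default;  // explicitly defaulted: warning gone
};

int main()
{
    Warns a, b;
    a = b;        // triggers -Wdeprecated-copy on clang
    Fixed c, d;
    c = d;        // fine
}
```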
raito 93dbb698b3 chore: remove incorrect maintainers/*.md documentation
Fate has something different in store for the release process,
backporting process and the general maintainer documentation.

See lix-project/lix#260.

Change-Id: I626686ff4059aee22a3ab1664b52581b2dbf6ed7
Signed-off-by: Raito Bezarius <raito@lix.systems>
2024-05-19 16:58:52 +02:00
eldritch horrors 774c56094f libstore: fix old RemoteStore::addToStore serializer
having the serializer write into `*conn` is not legal because we are
in a sinkToSource that will be drained by the remote we're connected
to. writing into `*conn` directly can break the framing protocol. it
is unlikely this code was ever run: the protocol it caters to is from
2016(!) and thoroughly untested in-tree, and since it's been present
since nix 2.17 while the 1.18 protocol broken here is nix 2.0's, we might
safely assume that daemons older than nix 2.1 are no longer used now

see also #325 (though that wants <2.3 gone, this is sadly only <2.1)

Change-Id: I9d674c18f6d802f61c5d85dfd9608587b73e70a5
2024-05-19 11:57:55 +00:00
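The fix in the RemoteStore hunk below routes the write through a nested `WriteConn` bound to the framing sink. As a generic illustration of why bypassing a framed stream corrupts it (hypothetical types, not the Lix protocol code):

```cpp
// A framed stream length-prefixes every chunk. Bytes written to the
// *underlying* stream bypass the prefixes, so the reader misparses them
// as frame headers.
#include <cstdint>
#include <string>

struct FramedWriter {
    std::string & underlying;
    void write(const std::string & chunk) {
        uint64_t len = chunk.size();
        underlying.append(reinterpret_cast<const char *>(&len), sizeof(len)); // frame header
        underlying.append(chunk);                                             // frame body
    }
};

int main()
{
    std::string wire;
    FramedWriter framed{wire};
    framed.write("hello");   // ok: header + body
    wire += "oops";          // writing to the underlying stream directly, like
                             // writing into *conn here, breaks framing: the
                             // reader will treat "oops" as a frame header.
    framed.write("world");
}
```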
Alyssa Ross 139d31f876 Improve nix-store --delete failure message
On several occasions I've found myself confused when trying to delete
a store path, because I am told it's still alive, but
nix-store --query --roots doesn't show anything.  Let's save future
users this confusion by mentioning that a path might be alive due to
having referrers, not just roots.

(cherry picked from commit 979a019014569eee7d0071605f6ff500b544f6ac)

Upstream-PR: https://github.com/NixOS/nix/pull/10733
Change-Id: I54ae839a85f3de3393493fba27fd40d7d3af0516
2024-05-18 14:49:40 -06:00
puck 62b1adf8c1 Merge "nix cat/dump-path/key: stop progress bar before writeFull" into main 2024-05-18 20:13:48 +00:00
jade d7d1547a41 Merge "lix-doc: don't chomp bold headings off" into main 2024-05-18 18:24:49 +00:00
puck 1fe58bd8a7 nix cat/dump-path/key: stop progress bar before writeFull
These commands output data that may not end with a newline. This
causes problems when the progress bar redraws, as that completely
wipes the last line of output. As nix key generate-secret outputs
a single line of text with no trailing newline, it shows up entirely
blank, making it look like nothing happened.

Fixes: lix-project/lix#320
Change-Id: I5ac706d71d839b6dfa760b60a351414cd96297cf
2024-05-18 17:51:16 +00:00
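A tiny sketch of the failure mode (not the Lix progress bar): a bar that redraws with `\r` overwrites whatever is sitting on the current terminal line, and output without a trailing newline is exactly that.

```cpp
#include <cstdio>

int main()
{
    std::printf("ed25519-secret-key...");      // like `nix key generate-secret`: no '\n'
    std::printf("\r                      \r");  // a progress-bar redraw/erase clobbers it
    // The change: stop (and erase) the progress bar *before* writing such output.
}
```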
Pierre Bourdon d1c8fd3b09 Merge "derived-path: refuse built derived path with a non-derivation base" into main 2024-05-18 07:26:26 +00:00
julia 7a3745b076
Deprecate the online flake registries and vendor the default registry
Fixes #183, #110, #116.

The default flake-registry option becomes 'vendored', and refers
to a vendored flake-registry.json file in the install path.

Vendored copy of the flake-registry is from github:NixOS/flake-registry
at commit 9c69f7bd2363e71fe5cd7f608113290c7614dcdd.

Change-Id: I752b81c85ebeaab4e582ac01c239d69d65580f37
2024-05-18 12:27:23 +10:00
Qyriad 236466faf3 package: add --print-errorlogs to meson's tests
This should have been in there originally, which is our mistake,
considering that debugging CI failures is basically impossible without
it.

Change-Id: I4ab8799e6e0abca1984ed9801fe10c58200861a3
2024-05-17 21:42:33 +00:00
puck 23c92f0815 Merge "primops: change to std::function, allowing the passing of user data" into main 2024-05-17 21:37:41 +00:00
puck 92e1df23b3 Merge "Loosen constness on listElems() result" into main 2024-05-17 21:37:35 +00:00
jade 0d2cc81956 Merge "make lix dev shells un-bear-able since we un-make them now" into main 2024-05-17 20:44:02 +00:00
Qyriad 93b7edfd07 Merge "docs: mention importNative/exec in allow-unsafe-native-code-during-evaluation" into main 2024-05-17 18:17:05 +00:00
jade e1119f4378 make lix dev shells un-bear-able since we un-make them now
We don't need bear anymore, since we no longer have any bad build
systems inside Lix that lack compile command generation.

Change-Id: I7809ddfd993180468f846e8cd862bdd54d5b31ec
2024-05-16 23:43:44 -07:00
Qyriad 5ff076d8ad docs: mention importNative/exec in allow-unsafe-native-code-during-evaluation
Both of these still need their own actual documentation, but it is at
least now mentioned that they exist and what enables them.

Change-Id: I235b9e8e627e04ed06611423c8e67a8eca233120
2024-05-17 00:41:35 +00:00
Pierre Bourdon 5a1824ebe1
derived-path: refuse built derived path with a non-derivation base
Example: /nix/store/dr53sp25hyfsnzjpm8mh3r3y36vrw3ng-neovim-0.9.5^out

This is nonsensical since selecting outputs can only be done for a
buildable derivation, not for a realised store path. The build worker
side of things ends up crashing with an assertion when trying to handle
such malformed paths.

Change-Id: Ia3587c71fe3da5bea45d4e506e1be4dd62291ddf
2024-05-17 02:16:15 +02:00
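A rough sketch of the new check (hypothetical helper; the real one lives in the `parseDerivedPath` template below and throws `InvalidPath`): `^`-style output selection is only accepted when the base is a `.drv` store path.

```cpp
#include <stdexcept>
#include <string>

void checkOutputSelection(const std::string & path)   // e.g. ".../foo.drv^out"
{
    auto caret = path.rfind('^');
    if (caret == std::string::npos)
        return;                                        // plain store path: nothing to check
    std::string base = path.substr(0, caret);
    if (!base.ends_with(".drv"))                       // C++20
        throw std::invalid_argument(
            "cannot use output selection ('^') on non-derivation store path '" + base + "'");
}
```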
Yorick 194654c96f primops: change to std::function, allowing the passing of user data
(cherry picked from commit 48aa57549d514432d6621c1e29f051951eca2d7f)
Change-Id: Ib7d5c6514031ceb6c42ac44588be6b0c1c3c225b
2024-05-16 13:01:40 +00:00
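The comparison pattern this forces can be seen in the `prim_sort` hunk below: once the primop is stored in a `std::function` (so closures carrying user data fit), identifying a specific builtin means asking for the stored target rather than comparing function pointers. A minimal standard-library sketch:

```cpp
#include <functional>
#include <iostream>

static void lessThanBuiltin() {}   // stand-in for prim_lessThan

int main()
{
    std::function<void()> fun = lessThanBuiltin;
    // `fun == lessThanBuiltin` is no longer available; recover the pointer instead:
    if (auto ptr = fun.target<void (*)()>(); ptr && *ptr == lessThanBuiltin)
        std::cout << "stored callable is the lessThan builtin\n";
}
```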
puck c6bb377c91 Loosen constness on listElems() result
Change-Id: I1caff000362c83e5172413a036c22a2e9ed3ede8
2024-05-16 13:01:40 +00:00
FireFly eca8bce081 lix-doc: don't chomp bold headings off
There are a few places in nixpkgs lib where `**Foo**:` is used as a heading instead of the usual markdown `# Foo` ones. I think this is intentional with how it gets rendered in the manual, e.g. [`lib.lists.sortOn`][1].

[1]: https://nixos.org/manual/nixpkgs/stable/#function-library-lib.lists.sortOn

`nix-doc` prints this as
```
   *Laws**:
       ```nix
       sortOn f == sort (p: q: f p < f q)
       ```
```
chomping off the first asterisk as part of `cleanup_single_line` that's meant to deal with `/** \n * \n * \n */` style doc comments. This also means the usage in lix ends up funny-looking with a trailing asterisk as if there's a footnote to pay attention to (which is how I first noticed it, heh)

The fix:

When cleaning up a single line and removing a prefix comment character,
ensure it's followed by whitespace (or the last character of the line).

Upstream-PR: https://github.com/lf-/nix-doc/pull/26
Change-Id: If2870c53a632f6bbbcca98a4bfbd72f5bef37879
2024-05-15 15:24:03 -07:00
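The actual fix is in the Rust `cleanup_single_line` hunk further down; the same rule, expressed as a small C++ sketch for illustration (hypothetical helper, not part of lix-doc):

```cpp
// Strip a leading '#' or '*' comment marker only when it is followed by
// whitespace (or ends the line), so "**Foo**:" style bold headings survive.
#include <cctype>
#include <cstddef>
#include <string_view>

std::string_view cleanupSingleLine(std::string_view s)
{
    for (std::size_t i = 0; i < s.size(); ++i) {
        char ch = s[i];
        char next = i + 1 < s.size() ? s[i + 1] : '\n';   // end of line counts as whitespace
        if (ch == '#' || (ch == '*' && std::isspace(static_cast<unsigned char>(next))))
            return s.substr(i + 1);                        // drop up to and including the marker
    }
    return s;                                              // no marker found: leave untouched
}
```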
49 changed files with 732 additions and 775 deletions

View file

@ -66,3 +66,12 @@ midnightveil:
display_name: julia
forgejo: midnightveil
github: midnightveil
puck:
display_name: puck
forgejo: puck
github: puckipedia
alois31:
forgejo: alois31
github: alois31

View file

@ -24,7 +24,6 @@ const redirects = {
"chap-writing-nix-expressions": "language/index.html",
"part-command-ref": "command-ref/command-ref.html",
"conf-allow-import-from-derivation": "command-ref/conf-file.html#conf-allow-import-from-derivation",
"conf-allow-new-privileges": "command-ref/conf-file.html#conf-allow-new-privileges",
"conf-allowed-uris": "command-ref/conf-file.html#conf-allowed-uris",
"conf-allowed-users": "command-ref/conf-file.html#conf-allowed-users",
"conf-auto-optimise-store": "command-ref/conf-file.html#conf-auto-optimise-store",

View file

@ -0,0 +1,16 @@
---
synopsis: "Deprecate the online flake registries and vendor the default registry"
cls: 1127
credits: midnightveil
issues: [fj#183, fj#110, fj#116, 8953, 9087]
category: Breaking Changes
---
The online flake registry [https://channels.nixos.org/flake-registry.json](https://channels.nixos.org/flake-registry.json) is not pinned in any way,
and the targets of the indirections can update or change entirely at any
point. Furthermore, it is refetched on every use of a flake reference, even if
there is a local flake reference, and even if you are offline (which breaks).
For now, we deprecate the use of any online flake registry, and vendor a copy of
the current online flake registry. This makes it work offline, and ensures that
it won't change in the future.

View file

@ -0,0 +1,10 @@
---
synopsis: Enforce syscall filtering and no-new-privileges on Linux
cls: 1063
category: Breaking Changes
credits: alois31
---
In order to improve consistency of the build environment, system call filtering and no-new-privileges are now unconditionally enabled on Linux.
The `filter-syscalls` and `allow-new-privileges` options which could be used to disable these features under some circumstances have been removed.
Furthermore, libseccomp is now a required dependency on Linux, and syscall filtering cannot be disabled at build time any more either.

View file

@ -0,0 +1,9 @@
---
synopsis: "`nix repl` history is saved more reliably"
cls: 1164
credits: puck
---
`nix repl` now saves its history file after each line, rather than at the end
of the session, ensuring that it will remember what you typed even if it
crashes.

View file

@ -1,6 +1,8 @@
---
synopsis: Interrupting builds in the REPL works more than once
cls: 1097
category: Fixes
credits: alois31
---
Builds in the REPL can be interrupted by pressing Ctrl+C.

View file

@ -68,10 +68,7 @@ The most current alternative to this section is to read `package.nix` and see wh
may also work, but ancient versions like the ubiquitous 2.5.4a
won't.
- The `libseccomp` is used to provide syscall filtering on Linux. This
is an optional dependency and can be disabled passing a
`--disable-seccomp-sandboxing` option to the `configure` script (Not
recommended unless your system doesn't support `libseccomp`). To get
- The `libseccomp` is used to provide syscall filtering on Linux. To get
the library, visit <https://github.com/seccomp/libseccomp>.
- On 64-bit x86 machines only, `libcpuid` library

View file

@ -84,9 +84,13 @@ fn indented(s: &str, indent: usize) -> String {
/// Cleans up a single line, erasing prefix single line comments but preserving indentation
fn cleanup_single_line<'a>(s: &'a str) -> &'a str {
let mut cmt_new_start = 0;
for (idx, ch) in s.char_indices() {
let mut iter = s.char_indices().peekable();
while let Some((idx, ch)) = iter.next() {
// peek at the next character, with an explicit '\n' as "next character" at end of line
let (_, next_ch) = iter.peek().unwrap_or(&(0, '\n'));
// if we find a character, save the byte position after it as our new string start
if ch == '#' || ch == '*' {
if ch == '#' || (ch == '*' && next_ch.is_whitespace()) {
cmt_new_start = idx + 1;
break;
}
@ -206,7 +210,7 @@ fn visit_lambda(name: String, lambda: &Lambda) -> SearchResult {
SearchResult {
identifier: name,
doc: comment,
param_block
param_block,
}
}
@ -246,7 +250,7 @@ pub extern "C" fn nd_get_function_docs(
filename: *const c_char,
line: usize,
col: usize,
) -> *const c_char {
) -> *const c_char {
let fname = unsafe { CStr::from_ptr(filename) };
fname
.to_str()
@ -257,9 +261,9 @@ pub extern "C" fn nd_get_function_docs(
eprintln!("panic!! {:#?}", e);
e
})
.ok()
.ok()
})
.flatten()
.flatten()
.and_then(|s| CString::new(s).ok())
.map(|s| s.into_raw() as *const c_char)
.unwrap_or(ptr::null())
@ -319,8 +323,16 @@ mod tests {
let ex1 = " * a";
let ex2 = " # a";
let ex3 = " a";
let ex4 = " *";
assert_eq!(cleanup_single_line(ex1), " a");
assert_eq!(cleanup_single_line(ex2), " a");
assert_eq!(cleanup_single_line(ex3), ex3);
assert_eq!(cleanup_single_line(ex4), "");
}
#[test]
fn test_single_line_retains_bold_headings() {
let ex1 = " **Foo**:";
assert_eq!(cleanup_single_line(ex1), ex1);
}
}

View file

@ -1,146 +0,0 @@
# Nix maintainers team
## Motivation
The team's main responsibility is to set a direction for the development of Nix and ensure that the code is in good shape.
We aim to achieve this by improving the contributor experience and attracting more maintainers that is, by helping other people contributing to Nix and eventually taking responsibility in order to scale the development process to match users' needs.
### Objectives
- It is obvious what is worthwhile to work on.
- It is easy to find the right place in the code to make a change.
- It is clear what is expected of a pull request.
- It is predictable how to get a change merged and released.
### Tasks
- Establish, communicate, and maintain a technical roadmap
- Improve documentation targeted at contributors
- Record architecture and design decisions
- Elaborate contribution guides and abide to them
- Define and assert quality criteria for contributions
- Maintain the issue tracker and triage pull requests
- Help contributors succeed with pull requests that address roadmap milestones
- Manage the release lifecycle
- Regularly publish reports on work done
- Engage with third parties in the interest of the project
- Ensure the required maintainer capacity for all of the above
## Members
- Eelco Dolstra (@edolstra) Team lead
- Théophane Hufschmitt (@thufschmitt)
- Valentin Gagarin (@fricklerhandwerk)
- Thomas Bereknyei (@tomberek)
- Robert Hensing (@roberth)
- John Ericson (@Ericson2314)
## Meeting protocol
The team meets twice a week:
- Discussion meeting: [Fridays 13:00-14:00 CET](https://calendar.google.com/calendar/event?eid=MHNtOGVuNWtrZXNpZHR2bW1sM3QyN2ZjaGNfMjAyMjExMjVUMTIwMDAwWiBiOW81MmZvYnFqYWs4b3E4bGZraGczdDBxZ0Bn)
1. Triage issues and pull requests from the [No Status](#no-status) column (30 min)
2. Discuss issues and pull requests from the [To discuss](#to-discuss) column (30 min)
- Work meeting: [Mondays 13:00-15:00 CET](https://calendar.google.com/calendar/event?eid=NTM1MG1wNGJnOGpmOTZhYms3bTB1bnY5cWxfMjAyMjExMjFUMTIwMDAwWiBiOW81MmZvYnFqYWs4b3E4bGZraGczdDBxZ0Bn)
1. Code review on pull requests from [In review](#in-review).
2. Other chores and tasks.
Meeting notes are collected on a [collaborative scratchpad](https://pad.lassul.us/Cv7FpYx-Ri-4VjUykQOLAw), and published on Discourse under the [Nix category](https://discourse.nixos.org/c/dev/nix/50).
## Project board protocol
The team uses a [GitHub project board](https://github.com/orgs/NixOS/projects/19/views/1) for tracking its work.
Items on the board progress through the following states:
### No Status
During the discussion meeting, the team triages new items.
To be considered, issues and pull requests must have a high-level description to provide the whole team with the necessary context at a glance.
On every meeting, at least one item from each of the following categories is inspected:
1. [critical](https://github.com/NixOS/nix/labels/critical)
2. [security](https://github.com/NixOS/nix/labels/security)
3. [regression](https://github.com/NixOS/nix/labels/regression)
4. [bug](https://github.com/NixOS/nix/issues?q=is%3Aopen+label%3Abug+sort%3Areactions-%2B1-desc)
5. [tests of existing functionality](https://github.com/NixOS/nix/issues?q=is%3Aopen+label%3Atests+-label%3Afeature+sort%3Areactions-%2B1-desc)
- [oldest pull requests](https://github.com/NixOS/nix/pulls?q=is%3Apr+is%3Aopen+sort%3Acreated-asc)
- [most popular pull requests](https://github.com/NixOS/nix/pulls?q=is%3Apr+is%3Aopen+sort%3Areactions-%2B1-desc)
- [oldest issues](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Acreated-asc)
- [most popular issues](https://github.com/NixOS/nix/issues?q=is%3Aissue+is%3Aopen+sort%3Areactions-%2B1-desc)
Team members can also add pull requests or issues they would like the whole team to consider.
To ensure process quality and reliability, all non-trivial pull requests must be triaged before merging.
If there is disagreement on the general idea behind an issue or pull request, it is moved to [To discuss](#to-discuss).
Otherwise, the issue or pull request in questions get the label [`idea approved`](https://github.com/NixOS/nix/labels/idea%20approved).
For issues this means that an implementation is welcome and will be prioritised for review.
For pull requests this means that:
- Unfinished work is encouraged to be continued.
- A reviewer is assigned to take responsibility for getting the pull request merged.
The item is moved to the [Assigned](#assigned) column.
- If needed, the team can decide to do a collarorative review.
Then the item is moved to the [In review](#in-review) column, and review session is scheduled.
What constitutes a trivial pull request is up to maintainers' judgement.
### To discuss
Pull requests and issues that are deemed important and controversial are discussed by the team during discussion meetings.
This may be where the merit of the change itself or the implementation strategy is contested by a team member.
As a general guideline, the order of items is determined as follows:
- Prioritise pull requests over issues
Contributors who took the time to implement concrete change proposals should not wait indefinitely.
- Prioritise fixing bugs and testing over documentation, improvements or new features
The team values stability and accessibility higher than raw functionality.
- Interleave issues and PRs
This way issues without attempts at a solution get a chance to get addressed.
### In review
Pull requests in this column are reviewed together during work meetings.
This is both for spreading implementation knowledge and for establishing common values in code reviews.
When the overall direction is agreed upon, even when further changes are required, the pull request is assigned to one team member.
If significant changes are requested or reviewers cannot come to a conclusion in reasonable time, the pull request is [marked as draft](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/changing-the-stage-of-a-pull-request#converting-a-pull-request-to-a-draft).
### Assigned
One team member is assigned to each of these pull requests.
They will communicate with the authors, and make the final approval once all remaining issues are addressed.
If more substantive issues arise, the assignee can move the pull request back to [To discuss](#to-discuss) or [In review](#in-review) to involve the team again.
### Flowchart
The process is illustrated in the following diagram:
```mermaid
flowchart TD
discuss[To discuss]
review[To review]
New --> |Disagreement on idea| discuss
New & discuss --> |Consensus on idea| review
review --> |Consensus on implementation| Assigned
Assigned --> |Implementation issues arise| review
Assigned --> |Remaining issues fixed| Merged
```

View file

@ -1,12 +0,0 @@
# Backporting
To [automatically backport a pull request](https://github.com/NixOS/nix/blob/master/.github/workflows/backport.yml) to a release branch once it's merged, assign it a label of the form [`backport <branch>`](https://github.com/NixOS/nix/labels?q=backport).
Since [GitHub Actions workflows will not trigger other workflows](https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow), checks on the automatic backport need to be triggered by another actor.
This is achieved by closing and reopening the backport pull request.
This specifically affects the [`installer_test`] check.
Note that it only runs after the other tests, so it may take a while to appear.
[`installer_test`]: https://github.com/NixOS/nix/blob/895dfc656a21f6252ddf48df0d1f215effa04ecb/.github/workflows/ci.yml#L70-L91

View file

@ -1,196 +0,0 @@
# Nix release process
## Release artifacts
The release process is intended to create the following for each
release:
* A Git tag
* Binary tarballs in https://releases.nixos.org/?prefix=nix/
* Docker images
* Closures in https://cache.nixos.org
* (Optionally) Updated `fallback-paths.nix` in Nixpkgs
* An updated manual on https://nixos.org/manual/nix/stable/
## Creating a new release from the `master` branch
* Make sure that the [Hydra `master` jobset](https://hydra.nixos.org/jobset/nix/master) succeeds.
* In a checkout of the Nix repo, make sure you're on `master` and run
`git pull`.
* Compile the release notes by running
```console
$ git checkout -b release-notes
$ VERSION=X.YY ./maintainers/release-notes
```
where `X.YY` is *without* the patch level, e.g. `2.12` rather than ~~`2.12.0`~~.
A commit is created.
* Proof-read / edit / rearrange the release notes if needed. Breaking changes
and highlights should go to the top.
* Push.
```console
$ git push --set-upstream $REMOTE release-notes
```
* Create a PR for `release-notes`.
* Wait for the PR to be merged.
* Create a branch for the release:
```console
$ git checkout master
$ git pull
$ git checkout -b $VERSION-maintenance
```
* Mark the release as official:
```console
$ sed -e 's/officialRelease = false;/officialRelease = true;/' -i flake.nix
$ sed -e '/rl-next.md/ d' -i doc/manual/src/SUMMARY.md
```
This removes the link to `rl-next.md` from the manual and sets
`officialRelease = true` in `flake.nix`.
* Commit
* Push the release branch:
```console
$ git push --set-upstream origin $VERSION-maintenance
```
* Create a jobset for the release branch on Hydra as follows:
* Go to the jobset of the previous release
(e.g. https://hydra.nixos.org/jobset/nix/maintenance-2.11).
* Select `Actions -> Clone this jobset`.
* Set identifier to `maintenance-$VERSION`.
* Set description to `$VERSION release branch`.
* Set flake URL to `github:NixOS/nix/$VERSION-maintenance`.
* Hit `Create jobset`.
* Wait for the new jobset to evaluate and build. If impatient, go to
the evaluation and select `Actions -> Bump builds to front of
queue`.
* When the jobset evaluation has succeeded building, take note of the
evaluation ID (e.g. `1780832` in
`https://hydra.nixos.org/eval/1780832`).
* Tag the release and upload the release artifacts to
[`releases.nixos.org`](https://releases.nixos.org/) and [Docker Hub](https://hub.docker.com/):
```console
$ IS_LATEST=1 ./maintainers/upload-release.pl <EVAL-ID>
```
Note: `IS_LATEST=1` causes the `latest-release` branch to be
force-updated. This is used by the `nixos.org` website to get the
[latest Nix manual](https://nixos.org/manual/nixpkgs/unstable/).
TODO: This script requires the right AWS credentials. Document.
TODO: This script currently requires a
`/home/eelco/Dev/nix-pristine`.
TODO: trigger nixos.org netlify: https://docs.netlify.com/configure-builds/build-hooks/
* Prepare for the next point release by editing `.version` to
e.g.
```console
$ echo 2.12.1 > .version
$ git commit -a -m 'Bump version'
$ git push
```
Commit and push this to the maintenance branch.
* Bump the version of `master`:
```console
$ git checkout master
$ git pull
$ NEW_VERSION=2.13.0
$ echo $NEW_VERSION > .version
$ git checkout -b bump-$NEW_VERSION
$ git commit -a -m 'Bump version'
$ git push --set-upstream origin bump-$NEW_VERSION
```
Make a pull request and auto-merge it.
* Create a milestone for the next release, move all unresolved issues
from the previous milestone, and close the previous milestone. Set
the date for the next milestone 6 weeks from now.
* Create a backport label.
* Post an [announcement on Discourse](https://discourse.nixos.org/c/announcements/8), including the contents of
`rl-$VERSION.md`.
## Creating a point release
* Checkout.
```console
$ git checkout XX.YY-maintenance
```
* Determine the next patch version.
```console
$ export VERSION=XX.YY.ZZ
```
* Update release notes.
```console
$ ./maintainers/release-notes
```
* Push.
```console
$ git push
```
* Wait for the desired evaluation of the maintenance jobset to finish
building.
* Run
```console
$ IS_LATEST=1 ./maintainers/upload-release.pl <EVAL-ID>
```
Omit `IS_LATEST=1` when creating a point release that is not on the
most recent stable branch. This prevents `nixos.org` to going back
to an older release.
* Bump the version number of the release branch as above (e.g. to
`2.12.2`).
## Recovering from mistakes
`upload-release.pl` should be idempotent. For instance a wrong `IS_LATEST` value can be fixed that way, by running the script on the actual latest release.

View file

@ -1,256 +0,0 @@
#! /usr/bin/env nix-shell
#! nix-shell -i perl -p perl perlPackages.LWPUserAgent perlPackages.LWPProtocolHttps perlPackages.FileSlurp perlPackages.NetAmazonS3 gnupg1
use strict;
use Data::Dumper;
use File::Basename;
use File::Path;
use File::Slurp;
use File::Copy;
use JSON::PP;
use LWP::UserAgent;
use Net::Amazon::S3;
my $evalId = $ARGV[0] or die "Usage: $0 EVAL-ID\n";
my $releasesBucketName = "nix-releases";
my $channelsBucketName = "nix-channels";
my $TMPDIR = $ENV{'TMPDIR'} // "/tmp";
my $isLatest = ($ENV{'IS_LATEST'} // "") eq "1";
# FIXME: cut&paste from nixos-channel-scripts.
sub fetch {
my ($url, $type) = @_;
my $ua = LWP::UserAgent->new;
$ua->default_header('Accept', $type) if defined $type;
my $response = $ua->get($url);
die "could not download $url: ", $response->status_line, "\n" unless $response->is_success;
return $response->decoded_content;
}
my $evalUrl = "https://hydra.nixos.org/eval/$evalId";
my $evalInfo = decode_json(fetch($evalUrl, 'application/json'));
#print Dumper($evalInfo);
my $flakeUrl = $evalInfo->{flake} or die;
my $flakeInfo = decode_json(`nix flake metadata --json "$flakeUrl"` or die);
my $nixRev = $flakeInfo->{revision} or die;
my $buildInfo = decode_json(fetch("$evalUrl/job/build.x86_64-linux", 'application/json'));
#print Dumper($buildInfo);
my $releaseName = $buildInfo->{nixname};
$releaseName =~ /nix-(.*)$/ or die;
my $version = $1;
print STDERR "Flake URL is $flakeUrl, Nix revision is $nixRev, version is $version\n";
my $releaseDir = "nix/$releaseName";
my $tmpDir = "$TMPDIR/nix-release/$releaseName";
File::Path::make_path($tmpDir);
my $narCache = "$TMPDIR/nar-cache";
File::Path::make_path($narCache);
my $binaryCache = "https://cache.nixos.org/?local-nar-cache=$narCache";
# S3 setup.
my $aws_access_key_id = $ENV{'AWS_ACCESS_KEY_ID'} or die "No AWS_ACCESS_KEY_ID given.";
my $aws_secret_access_key = $ENV{'AWS_SECRET_ACCESS_KEY'} or die "No AWS_SECRET_ACCESS_KEY given.";
my $s3 = Net::Amazon::S3->new(
{ aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
host => "s3-eu-west-1.amazonaws.com",
});
my $releasesBucket = $s3->bucket($releasesBucketName) or die;
my $s3_us = Net::Amazon::S3->new(
{ aws_access_key_id => $aws_access_key_id,
aws_secret_access_key => $aws_secret_access_key,
retry => 1,
});
my $channelsBucket = $s3_us->bucket($channelsBucketName) or die;
sub getStorePath {
my ($jobName, $output) = @_;
my $buildInfo = decode_json(fetch("$evalUrl/job/$jobName", 'application/json'));
return $buildInfo->{buildoutputs}->{$output or "out"}->{path} or die "cannot get store path for '$jobName'";
}
sub copyManual {
my $manual = getStorePath("build.x86_64-linux", "doc");
print "$manual\n";
my $manualNar = "$tmpDir/$releaseName-manual.nar.xz";
print "$manualNar\n";
unless (-e $manualNar) {
system("NIX_REMOTE=$binaryCache nix store dump-path '$manual' | xz > '$manualNar'.tmp") == 0
or die "unable to fetch $manual\n";
rename("$manualNar.tmp", $manualNar) or die;
}
unless (-e "$tmpDir/manual") {
system("xz -d < '$manualNar' | nix-store --restore $tmpDir/manual.tmp") == 0
or die "unable to unpack $manualNar\n";
rename("$tmpDir/manual.tmp/share/doc/nix/manual", "$tmpDir/manual") or die;
system("rm -rf '$tmpDir/manual.tmp'") == 0 or die;
}
system("aws s3 sync '$tmpDir/manual' s3://$releasesBucketName/$releaseDir/manual") == 0
or die "syncing manual to S3\n";
}
copyManual;
sub downloadFile {
my ($jobName, $productNr, $dstName) = @_;
my $buildInfo = decode_json(fetch("$evalUrl/job/$jobName", 'application/json'));
#print STDERR "$jobName: ", Dumper($buildInfo), "\n";
my $srcFile = $buildInfo->{buildproducts}->{$productNr}->{path} or die "job '$jobName' lacks product $productNr\n";
$dstName //= basename($srcFile);
my $tmpFile = "$tmpDir/$dstName";
if (!-e $tmpFile) {
print STDERR "downloading $srcFile to $tmpFile...\n";
my $fileInfo = decode_json(`NIX_REMOTE=$binaryCache nix store ls --json '$srcFile'`);
$srcFile = $fileInfo->{target} if $fileInfo->{type} eq 'symlink';
#print STDERR $srcFile, " ", Dumper($fileInfo), "\n";
system("NIX_REMOTE=$binaryCache nix store cat '$srcFile' > '$tmpFile'.tmp") == 0
or die "unable to fetch $srcFile\n";
rename("$tmpFile.tmp", $tmpFile) or die;
}
my $sha256_expected = $buildInfo->{buildproducts}->{$productNr}->{sha256hash};
my $sha256_actual = `nix hash file --base16 --type sha256 '$tmpFile'`;
chomp $sha256_actual;
if (defined($sha256_expected) && $sha256_expected ne $sha256_actual) {
print STDERR "file $tmpFile is corrupt, got $sha256_actual, expected $sha256_expected\n";
exit 1;
}
write_file("$tmpFile.sha256", $sha256_actual);
return $sha256_expected;
}
downloadFile("binaryTarball.i686-linux", "1");
downloadFile("binaryTarball.x86_64-linux", "1");
downloadFile("binaryTarball.aarch64-linux", "1");
downloadFile("binaryTarball.x86_64-darwin", "1");
downloadFile("binaryTarball.aarch64-darwin", "1");
downloadFile("binaryTarballCross.x86_64-linux.armv6l-linux", "1");
downloadFile("binaryTarballCross.x86_64-linux.armv7l-linux", "1");
downloadFile("installerScript", "1");
# Upload docker images to dockerhub.
my $dockerManifest = "";
my $dockerManifestLatest = "";
for my $platforms (["x86_64-linux", "amd64"], ["aarch64-linux", "arm64"]) {
my $system = $platforms->[0];
my $dockerPlatform = $platforms->[1];
my $fn = "nix-$version-docker-image-$dockerPlatform.tar.gz";
downloadFile("dockerImage.$system", "1", $fn);
print STDERR "loading docker image for $dockerPlatform...\n";
system("docker load -i $tmpDir/$fn") == 0 or die;
my $tag = "nixos/nix:$version-$dockerPlatform";
my $latestTag = "nixos/nix:latest-$dockerPlatform";
print STDERR "tagging $version docker image for $dockerPlatform...\n";
system("docker tag nix:$version $tag") == 0 or die;
if ($isLatest) {
print STDERR "tagging latest docker image for $dockerPlatform...\n";
system("docker tag nix:$version $latestTag") == 0 or die;
}
print STDERR "pushing $version docker image for $dockerPlatform...\n";
system("docker push -q $tag") == 0 or die;
if ($isLatest) {
print STDERR "pushing latest docker image for $dockerPlatform...\n";
system("docker push -q $latestTag") == 0 or die;
}
$dockerManifest .= " --amend $tag";
$dockerManifestLatest .= " --amend $latestTag"
}
print STDERR "creating multi-platform docker manifest...\n";
system("docker manifest rm nixos/nix:$version");
system("docker manifest create nixos/nix:$version $dockerManifest") == 0 or die;
if ($isLatest) {
print STDERR "creating latest multi-platform docker manifest...\n";
system("docker manifest rm nixos/nix:latest");
system("docker manifest create nixos/nix:latest $dockerManifestLatest") == 0 or die;
}
print STDERR "pushing multi-platform docker manifest...\n";
system("docker manifest push nixos/nix:$version") == 0 or die;
if ($isLatest) {
print STDERR "pushing latest multi-platform docker manifest...\n";
system("docker manifest push nixos/nix:latest") == 0 or die;
}
# Upload nix-fallback-paths.nix.
write_file("$tmpDir/fallback-paths.nix",
"{\n" .
" x86_64-linux = \"" . getStorePath("build.x86_64-linux") . "\";\n" .
" i686-linux = \"" . getStorePath("build.i686-linux") . "\";\n" .
" aarch64-linux = \"" . getStorePath("build.aarch64-linux") . "\";\n" .
" x86_64-darwin = \"" . getStorePath("build.x86_64-darwin") . "\";\n" .
" aarch64-darwin = \"" . getStorePath("build.aarch64-darwin") . "\";\n" .
"}\n");
# Upload release files to S3.
for my $fn (glob "$tmpDir/*") {
my $name = basename($fn);
next if $name eq "manual";
my $dstKey = "$releaseDir/" . $name;
unless (defined $releasesBucket->head_key($dstKey)) {
print STDERR "uploading $fn to s3://$releasesBucketName/$dstKey...\n";
my $configuration = ();
$configuration->{content_type} = "application/octet-stream";
if ($fn =~ /.sha256|install|\.nix$/) {
$configuration->{content_type} = "text/plain";
}
$releasesBucket->add_key_filename($dstKey, $fn, $configuration)
or die $releasesBucket->err . ": " . $releasesBucket->errstr;
}
}
# Update the "latest" symlink.
$channelsBucket->add_key(
"nix-latest/install", "",
{ "x-amz-website-redirect-location" => "https://releases.nixos.org/$releaseDir/install" })
or die $channelsBucket->err . ": " . $channelsBucket->errstr
if $isLatest;
# Tag the release in Git.
chdir("/home/eelco/Dev/nix-pristine") or die;
system("git remote update origin") == 0 or die;
system("git tag --force --sign $version $nixRev -m 'Tagging release $version'") == 0 or die;
system("git push --tags") == 0 or die;
system("git push --force-with-lease origin $nixRev:refs/heads/latest-release") == 0 or die if $isLatest;

View file

@ -180,11 +180,7 @@ configdata += {
deps += cpuid
# seccomp only makes sense on Linux
seccomp_required = is_linux ? get_option('seccomp-sandboxing') : false
seccomp = dependency('libseccomp', 'seccomp', required : seccomp_required, version : '>=2.5.5')
configdata += {
'HAVE_SECCOMP': seccomp.found().to_int(),
}
seccomp = dependency('libseccomp', 'seccomp', required : is_linux, version : '>=2.5.5')
libarchive = dependency('libarchive', required : true)
deps += libarchive

View file

@ -16,10 +16,6 @@ option('cpuid', type : 'feature',
description : 'determine microarchitecture levels with libcpuid (only relevant on x86_64)',
)
option('seccomp-sandboxing', type : 'feature',
description : 'build support for seccomp sandboxing (recommended unless your arch doesn\'t support libseccomp, only relevant on Linux)',
)
option('sandbox-shell', type : 'string', value : 'busybox',
description : 'path to a statically-linked shell to use as /bin/sh in sandboxes (usually busybox)',
)

View file

@ -0,0 +1,414 @@
{
"flakes": [
{
"from": {
"id": "agda",
"type": "indirect"
},
"to": {
"owner": "agda",
"repo": "agda",
"type": "github"
}
},
{
"from": {
"id": "arion",
"type": "indirect"
},
"to": {
"owner": "hercules-ci",
"repo": "arion",
"type": "github"
}
},
{
"from": {
"id": "blender-bin",
"type": "indirect"
},
"to": {
"dir": "blender",
"owner": "edolstra",
"repo": "nix-warez",
"type": "github"
}
},
{
"from": {
"id": "bundlers",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "bundlers",
"type": "github"
}
},
{
"from": {
"id": "cachix",
"type": "indirect"
},
"to": {
"owner": "cachix",
"repo": "cachix",
"type": "github"
}
},
{
"from": {
"id": "composable",
"type": "indirect"
},
"to": {
"owner": "ComposableFi",
"repo": "composable",
"type": "github"
}
},
{
"from": {
"id": "disko",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "disko",
"type": "github"
}
},
{
"from": {
"id": "dreampkgs",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "dreampkgs",
"type": "github"
}
},
{
"from": {
"id": "dwarffs",
"type": "indirect"
},
"to": {
"owner": "edolstra",
"repo": "dwarffs",
"type": "github"
}
},
{
"from": {
"id": "emacs-overlay",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "emacs-overlay",
"type": "github"
}
},
{
"from": {
"id": "fenix",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "fenix",
"type": "github"
}
},
{
"from": {
"id": "flake-parts",
"type": "indirect"
},
"to": {
"owner": "hercules-ci",
"repo": "flake-parts",
"type": "github"
}
},
{
"from": {
"id": "flake-utils",
"type": "indirect"
},
"to": {
"owner": "numtide",
"repo": "flake-utils",
"type": "github"
}
},
{
"from": {
"id": "gemini",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "flake-gemini",
"type": "github"
}
},
{
"from": {
"id": "helix",
"type": "indirect"
},
"to": {
"owner": "helix-editor",
"repo": "helix",
"type": "github"
}
},
{
"from": {
"id": "hercules-ci-agent",
"type": "indirect"
},
"to": {
"owner": "hercules-ci",
"repo": "hercules-ci-agent",
"type": "github"
}
},
{
"from": {
"id": "hercules-ci-effects",
"type": "indirect"
},
"to": {
"owner": "hercules-ci",
"repo": "hercules-ci-effects",
"type": "github"
}
},
{
"from": {
"id": "home-manager",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "home-manager",
"type": "github"
}
},
{
"from": {
"id": "hydra",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "hydra",
"type": "github"
}
},
{
"from": {
"id": "mach-nix",
"type": "indirect"
},
"to": {
"owner": "DavHau",
"repo": "mach-nix",
"type": "github"
}
},
{
"from": {
"id": "nickel",
"type": "indirect"
},
"to": {
"owner": "tweag",
"repo": "nickel",
"type": "github"
}
},
{
"from": {
"id": "nimble",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "flake-nimble",
"type": "github"
}
},
{
"from": {
"id": "nix",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nix",
"type": "github"
}
},
{
"from": {
"id": "nix-darwin",
"type": "indirect"
},
"to": {
"owner": "LnL7",
"repo": "nix-darwin",
"type": "github"
}
},
{
"from": {
"id": "nix-serve",
"type": "indirect"
},
"to": {
"owner": "edolstra",
"repo": "nix-serve",
"type": "github"
}
},
{
"from": {
"id": "nixops",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nixops",
"type": "github"
}
},
{
"from": {
"id": "nixos-hardware",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nixos-hardware",
"type": "github"
}
},
{
"from": {
"id": "nixos-homepage",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nixos-homepage",
"type": "github"
}
},
{
"from": {
"id": "nixos-search",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "nixos-search",
"type": "github"
}
},
{
"from": {
"id": "nixpkgs",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"ref": "nixpkgs-unstable",
"repo": "nixpkgs",
"type": "github"
}
},
{
"from": {
"id": "nur",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "NUR",
"type": "github"
}
},
{
"from": {
"id": "patchelf",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "patchelf",
"type": "github"
}
},
{
"from": {
"id": "poetry2nix",
"type": "indirect"
},
"to": {
"owner": "nix-community",
"repo": "poetry2nix",
"type": "github"
}
},
{
"from": {
"id": "pridefetch",
"type": "indirect"
},
"to": {
"owner": "SpyHoodle",
"repo": "pridefetch",
"type": "github"
}
},
{
"from": {
"id": "sops-nix",
"type": "indirect"
},
"to": {
"owner": "Mic92",
"repo": "sops-nix",
"type": "github"
}
},
{
"from": {
"id": "systems",
"type": "indirect"
},
"to": {
"owner": "nix-systems",
"repo": "default",
"type": "github"
}
},
{
"from": {
"id": "templates",
"type": "indirect"
},
"to": {
"owner": "NixOS",
"repo": "templates",
"type": "github"
}
}
],
"version": 2
}

View file

@ -0,0 +1,4 @@
install_data(
'flake-registry.json',
install_dir : datadir,
)

View file

@ -3,3 +3,4 @@ subdir('fish')
subdir('zsh')
subdir('systemd')
subdir('flake-registry')

View file

@ -309,7 +309,12 @@ stdenv.mkDerivation (finalAttrs: {
doCheck = canRunInstalled;
mesonCheckFlags = [ "--suite=check" ];
mesonCheckFlags = [
"--suite=check"
"--print-errorlogs"
];
# the tests access localhost.
__darwinAllowLocalNetworking = true;
# Make sure the internal API docs are already built, because mesonInstallPhase
# won't let us build them there. They would normally be built in buildPhase,
@ -342,7 +347,10 @@ stdenv.mkDerivation (finalAttrs: {
doInstallCheck = finalAttrs.doCheck;
mesonInstallCheckFlags = [ "--suite=installcheck" ];
mesonInstallCheckFlags = [
"--suite=installcheck"
"--print-errorlogs"
];
installCheckPhase = ''
runHook preInstallCheck
@ -375,7 +383,6 @@ stdenv.mkDerivation (finalAttrs: {
just,
nixfmt,
glibcLocales,
bear,
pre-commit-checks,
clang-tools,
llvmPackages,
@ -418,7 +425,6 @@ stdenv.mkDerivation (finalAttrs: {
llvmPackages.clang-unwrapped.dev
]
++ lib.optional (pre-commit-checks ? enabledPackages) pre-commit-checks.enabledPackages
++ lib.optional (stdenv.cc.isClang && !stdenv.buildPlatform.isDarwin) bear
++ lib.optional (lib.meta.availableOn stdenv.buildPlatform clangbuildanalyzer) clangbuildanalyzer
++ finalAttrs.checkInputs;

View file

@ -175,6 +175,8 @@ bool ReadlineLikeInteracter::getLine(std::string & input, ReplPromptType promptT
if (!s)
return false;
write_history(historyFile.c_str());
input += s;
input += '\n';
return true;

View file

@ -14,8 +14,11 @@ struct EvalSettings : Config
static std::string resolvePseudoUrl(std::string_view url);
Setting<bool> enableNativeCode{this, false, "allow-unsafe-native-code-during-evaluation",
"Whether builtin functions that allow executing native code should be enabled."};
Setting<bool> enableNativeCode{this, false, "allow-unsafe-native-code-during-evaluation", R"(
Whether builtin functions that allow executing native code should be enabled.
In particular, this adds the `importNative` and `exec` builtins.
)"};
Setting<Strings> nixPath{
this, getDefaultNixPath(), "nix-path",

View file

@ -17,6 +17,7 @@
#include <optional>
#include <unordered_map>
#include <mutex>
#include <functional>
namespace nix {
@ -71,7 +72,7 @@ struct PrimOp
/**
* Implementation of the primop.
*/
PrimOpFun fun;
std::function<std::remove_pointer<PrimOpFun>::type> fun;
/**
* Optional experimental for this to be gated on.

View file

@ -3329,8 +3329,11 @@ static void prim_sort(EvalState & state, const PosIdx pos, Value * * args, Value
callFunction. */
/* TODO: (layus) this is absurd. An optimisation like this
should be outside the lambda creation */
if (args[0]->isPrimOp() && args[0]->primOp->fun == prim_lessThan)
return CompareValues(state, noPos, "while evaluating the ordering function passed to builtins.sort")(a, b);
if (args[0]->isPrimOp()) {
auto ptr = args[0]->primOp->fun.target<decltype(&prim_lessThan)>();
if (ptr && *ptr == prim_lessThan)
return CompareValues(state, noPos, "while evaluating the ordering function passed to builtins.sort")(a, b);
}
Value * vs[] = {a, b};
Value vBool;

View file

@ -384,7 +384,7 @@ public:
return internalType == tList1 || internalType == tList2 ? smallList : bigList.elems;
}
const Value * const * listElems() const
Value * const * listElems() const
{
return internalType == tList1 || internalType == tList2 ? smallList : bigList.elems;
}

View file

@ -71,10 +71,13 @@ struct FetchSettings : public Config
Setting<bool> warnDirty{this, true, "warn-dirty",
"Whether to warn about dirty Git/Mercurial trees."};
Setting<std::string> flakeRegistry{this, "https://channels.nixos.org/flake-registry.json", "flake-registry",
Setting<std::string> flakeRegistry{this, "vendored", "flake-registry",
R"(
Path or URI of the global flake registry.
URIs are deprecated. When set to 'vendored', defaults to a vendored
copy of https://channels.nixos.org/flake-registry.json.
When empty, disables the global flake registry.
)",
{}, true, Xp::Flakes};

View file

@ -16,8 +16,12 @@ std::shared_ptr<Registry> Registry::read(
{
auto registry = std::make_shared<Registry>(type);
if (!pathExists(path))
if (!pathExists(path)) {
if (type == RegistryType::Global) {
warn("cannot read flake registry '%s': path does not exist", path);
}
return std::make_shared<Registry>(type);
}
try {
@ -155,9 +159,13 @@ static std::shared_ptr<Registry> getGlobalRegistry(ref<Store> store)
auto path = fetchSettings.flakeRegistry.get();
if (path == "") {
return std::make_shared<Registry>(Registry::Global); // empty registry
} else if (path == "vendored") {
return Registry::read(settings.nixDataDir + "/flake-registry.json", Registry::Global);
}
if (!path.starts_with("/")) {
warn("config option flake-registry referring to a URL is deprecated and will be removed in Lix 3.0; yours is: `%s'", path);
auto storePath = downloadFile(store, path, "flake-registry.json", false).storePath;
if (auto store2 = store.dynamic_pointer_cast<LocalFSStore>())
store2->addPermRoot(storePath, getCacheDir() + "/nix/flake-registry.json");

View file

@ -45,9 +45,6 @@
#include <sys/param.h>
#include <sys/mount.h>
#include <sys/syscall.h>
#if HAVE_SECCOMP
#include <seccomp.h>
#endif
#define pivot_root(new_root, put_old) (syscall(SYS_pivot_root, new_root, put_old))
#endif
@ -786,13 +783,6 @@ void DerivationGoal::tryLocalBuild() {
}
static void chmod_(const Path & path, mode_t mode)
{
if (chmod(path.c_str(), mode) == -1)
throw SysError("setting permissions on '%s'", path);
}
/* Move/rename path 'src' to 'dst'. Temporarily make 'src' writable if
it's a directory and we're not root (to be able to update the
directory's parent link ".."). */
@ -803,12 +793,12 @@ static void movePath(const Path & src, const Path & dst)
bool changePerm = (geteuid() && S_ISDIR(st.st_mode) && !(st.st_mode & S_IWUSR));
if (changePerm)
chmod_(src, st.st_mode | S_IWUSR);
chmodPath(src, st.st_mode | S_IWUSR);
renameFile(src, dst);
if (changePerm)
chmod_(dst, st.st_mode);
chmodPath(dst, st.st_mode);
}

View file

@ -43,9 +43,7 @@
#include <sys/mount.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#if HAVE_SECCOMP
#include <seccomp.h>
#endif
#define pivot_root(new_root, put_old) (syscall(SYS_pivot_root, new_root, put_old))
#endif
@ -272,12 +270,6 @@ void LocalDerivationGoal::tryLocalBuild()
started();
}
static void chmod_(const Path & path, mode_t mode)
{
if (chmod(path.c_str(), mode) == -1)
throw SysError("setting permissions on '%s'", path);
}
/* Move/rename path 'src' to 'dst'. Temporarily make 'src' writable if
it's a directory and we're not root (to be able to update the
@ -289,12 +281,12 @@ static void movePath(const Path & src, const Path & dst)
bool changePerm = (geteuid() && S_ISDIR(st.st_mode) && !(st.st_mode & S_IWUSR));
if (changePerm)
chmod_(src, st.st_mode | S_IWUSR);
chmodPath(src, st.st_mode | S_IWUSR);
renameFile(src, dst);
if (changePerm)
chmod_(dst, st.st_mode);
chmodPath(dst, st.st_mode);
}
@ -696,7 +688,7 @@ void LocalDerivationGoal::startBuilder()
instead.) */
Path chrootTmpDir = chrootRootDir + "/tmp";
createDirs(chrootTmpDir);
chmod_(chrootTmpDir, 01777);
chmodPath(chrootTmpDir, 01777);
/* Create a /etc/passwd with entries for the build user and the
nobody account. The latter is kind of a hack to support
@ -721,7 +713,7 @@ void LocalDerivationGoal::startBuilder()
build user. */
Path chrootStoreDir = chrootRootDir + worker.store.storeDir;
createDirs(chrootStoreDir);
chmod_(chrootStoreDir, 01775);
chmodPath(chrootStoreDir, 01775);
if (buildUser && chown(chrootStoreDir.c_str(), 0, buildUser->getGID()) == -1)
throw SysError("cannot change ownership of '%1%'", chrootStoreDir);
@ -1618,8 +1610,6 @@ void LocalDerivationGoal::chownToBuilder(const Path & path)
void setupSeccomp()
{
#if __linux__
if (!settings.filterSyscalls) return;
#if HAVE_SECCOMP
scmp_filter_ctx ctx;
if (!(ctx = seccomp_init(SCMP_ACT_ALLOW)))
@ -1684,16 +1674,14 @@ void setupSeccomp()
seccomp_rule_add(ctx, SCMP_ACT_ERRNO(ENOTSUP), SCMP_SYS(fsetxattr), 0) != 0)
throw SysError("unable to add seccomp rule");
if (seccomp_attr_set(ctx, SCMP_FLTATR_CTL_NNP, settings.allowNewPrivileges ? 0 : 1) != 0)
// Set the NO_NEW_PRIVS prctl flag.
// This both makes loading seccomp filters work for unprivileged users,
// and is an additional security measure in its own right.
if (seccomp_attr_set(ctx, SCMP_FLTATR_CTL_NNP, 1) != 0)
throw SysError("unable to set 'no new privileges' seccomp attribute");
if (seccomp_load(ctx) != 0)
throw SysError("unable to load seccomp BPF program");
#else
throw Error(
"seccomp is not supported on this platform; "
"you can bypass this error by setting the option 'filter-syscalls' to false, but note that untrusted builds can then create setuid binaries!");
#endif
#endif
}
@ -1862,7 +1850,7 @@ void LocalDerivationGoal::runChild()
auto dst = chrootRootDir + i.first;
createDirs(dirOf(dst));
writeFile(dst, std::string_view((const char *) sh, sizeof(sh)));
chmod_(dst, 0555);
chmodPath(dst, 0555);
} else
#endif
doBind(i.second.source, chrootRootDir + i.first, i.second.optional);
@ -1900,7 +1888,7 @@ void LocalDerivationGoal::runChild()
/* Make sure /dev/pts/ptmx is world-writable. With some
Linux versions, it is created with permissions 0. */
chmod_(chrootRootDir + "/dev/pts/ptmx", 0666);
chmodPath(chrootRootDir + "/dev/pts/ptmx", 0666);
} else {
if (errno != EINVAL)
throw SysError("mounting /dev/pts");
@ -1911,7 +1899,7 @@ void LocalDerivationGoal::runChild()
/* Make /etc unwritable */
if (!parsedDrv->useUidRange())
chmod_(chrootRootDir + "/etc", 0555);
chmodPath(chrootRootDir + "/etc", 0555);
/* Unshare this mount namespace. This is necessary because
pivot_root() below changes the root of the mount
@ -1960,10 +1948,6 @@ void LocalDerivationGoal::runChild()
throw SysError("setuid failed");
setUser = false;
// Make sure we can't possibly gain new privileges in the sandbox
if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) == -1)
throw SysError("PR_SET_NO_NEW_PRIVS failed");
}
#endif

View file

@ -185,21 +185,32 @@ DerivedPath::Built DerivedPath::Built::parse(
};
}
static SingleDerivedPath parseWithSingle(
template <typename DerivedPathT>
static DerivedPathT parseDerivedPath(
const Store & store, std::string_view s, std::string_view separator,
const ExperimentalFeatureSettings & xpSettings)
{
size_t n = s.rfind(separator);
return n == s.npos
? (SingleDerivedPath) SingleDerivedPath::Opaque::parse(store, s)
: (SingleDerivedPath) SingleDerivedPath::Built::parse(store,
make_ref<SingleDerivedPath>(parseWithSingle(
if (n == s.npos) {
return DerivedPathT::Opaque::parse(store, s);
} else {
auto path = DerivedPathT::Built::parse(store,
make_ref<SingleDerivedPath>(parseDerivedPath<SingleDerivedPath>(
store,
s.substr(0, n),
separator,
xpSettings)),
s.substr(n + 1),
xpSettings);
const auto& basePath = path.getBaseStorePath();
if (!basePath.isDerivation()) {
throw InvalidPath("cannot use output selection ('%s') on non-derivation store path '%s'",
separator, basePath.to_string());
}
return path;
}
}
SingleDerivedPath SingleDerivedPath::parse(
@ -207,7 +218,7 @@ SingleDerivedPath SingleDerivedPath::parse(
std::string_view s,
const ExperimentalFeatureSettings & xpSettings)
{
return parseWithSingle(store, s, "^", xpSettings);
return parseDerivedPath<SingleDerivedPath>(store, s, "^", xpSettings);
}
SingleDerivedPath SingleDerivedPath::parseLegacy(
@ -215,24 +226,7 @@ SingleDerivedPath SingleDerivedPath::parseLegacy(
std::string_view s,
const ExperimentalFeatureSettings & xpSettings)
{
return parseWithSingle(store, s, "!", xpSettings);
}
static DerivedPath parseWith(
const Store & store, std::string_view s, std::string_view separator,
const ExperimentalFeatureSettings & xpSettings)
{
size_t n = s.rfind(separator);
return n == s.npos
? (DerivedPath) DerivedPath::Opaque::parse(store, s)
: (DerivedPath) DerivedPath::Built::parse(store,
make_ref<SingleDerivedPath>(parseWithSingle(
store,
s.substr(0, n),
separator,
xpSettings)),
s.substr(n + 1),
xpSettings);
return parseDerivedPath<SingleDerivedPath>(store, s, "!", xpSettings);
}
DerivedPath DerivedPath::parse(
@ -240,7 +234,7 @@ DerivedPath DerivedPath::parse(
std::string_view s,
const ExperimentalFeatureSettings & xpSettings)
{
return parseWith(store, s, "^", xpSettings);
return parseDerivedPath<DerivedPath>(store, s, "^", xpSettings);
}
DerivedPath DerivedPath::parseLegacy(
@ -248,7 +242,7 @@ DerivedPath DerivedPath::parseLegacy(
std::string_view s,
const ExperimentalFeatureSettings & xpSettings)
{
return parseWith(store, s, "!", xpSettings);
return parseDerivedPath<DerivedPath>(store, s, "!", xpSettings);
}
DerivedPath DerivedPath::fromSingle(const SingleDerivedPath & req)

View file

@ -695,7 +695,7 @@ void LocalStore::collectGarbage(const GCOptions & options, GCResults & results)
throw Error(
"Cannot delete path '%1%' since it is still alive. "
"To find out why, use: "
"nix-store --query --roots",
"nix-store --query --roots and nix-store --query --referrers",
printStorePath(i));
}

View file

@ -912,29 +912,6 @@ public:
)"};
#if __linux__
Setting<bool> filterSyscalls{
this, true, "filter-syscalls",
R"(
Whether to prevent certain dangerous system calls, such as
creation of setuid/setgid files or adding ACLs or extended
attributes. Only disable this if you're aware of the
security implications.
)"};
Setting<bool> allowNewPrivileges{
this, false, "allow-new-privileges",
R"(
(Linux-specific.) By default, builders on Linux cannot acquire new
privileges by calling setuid/setgid programs or programs that have
file capabilities. For example, programs such as `sudo` or `ping`
will fail. (Note that in sandbox builds, no such programs are
available unless you bind-mount them into the sandbox via the
`sandbox-paths` option.) You can allow the use of such programs by
enabling this option. This is impure and usually undesirable, but
may be useful in certain scenarios (e.g. to spin up containers or
set up userspace network interfaces in tests).
)"};
Setting<StringSet> ignoredAcls{
this, {"security.selinux", "system.nfs4_acl", "security.csm"}, "ignored-acls",
R"(

View file

@ -20,18 +20,16 @@
#pragma once
///@file
#if HAVE_SECCOMP
# if defined(__alpha__)
# define NIX_SYSCALL_FCHMODAT2 562
# elif defined(__x86_64__) && SIZE_MAX == 0xFFFFFFFF // x32
# define NIX_SYSCALL_FCHMODAT2 1073742276
# elif defined(__mips__) && defined(__mips64) && defined(_ABIN64) // mips64/n64
# define NIX_SYSCALL_FCHMODAT2 5452
# elif defined(__mips__) && defined(__mips64) && defined(_ABIN32) // mips64/n32
# define NIX_SYSCALL_FCHMODAT2 6452
# elif defined(__mips__) && defined(_ABIO32) // mips32
# define NIX_SYSCALL_FCHMODAT2 4452
# else
# define NIX_SYSCALL_FCHMODAT2 452
# endif
#endif // HAVE_SECCOMP
#if defined(__alpha__)
# define NIX_SYSCALL_FCHMODAT2 562
#elif defined(__x86_64__) && SIZE_MAX == 0xFFFFFFFF // x32
# define NIX_SYSCALL_FCHMODAT2 1073742276
#elif defined(__mips__) && defined(__mips64) && defined(_ABIN64) // mips64/n64
# define NIX_SYSCALL_FCHMODAT2 5452
#elif defined(__mips__) && defined(__mips64) && defined(_ABIN32) // mips64/n32
# define NIX_SYSCALL_FCHMODAT2 6452
#elif defined(__mips__) && defined(_ABIO32) // mips32
# define NIX_SYSCALL_FCHMODAT2 4452
#else
# define NIX_SYSCALL_FCHMODAT2 452
#endif

View file

@ -210,7 +210,6 @@ libstore = library(
seccomp,
sqlite,
sodium,
seccomp,
curl,
openssl,
aws_sdk,

View file

@ -509,7 +509,8 @@ void RemoteStore::addToStore(const ValidPathInfo & info, Source & source,
sink
<< exportMagic
<< printStorePath(info.path);
WorkerProto::write(*this, *conn, info.references);
WorkerProto::WriteConn nested { .to = sink, .version = conn->daemonVersion };
WorkerProto::write(*this, nested, info.references);
sink
<< (info.deriver ? printStorePath(*info.deriver) : "")
<< 0 // == no legacy signature

View file

@ -110,6 +110,8 @@ protected:
public:
BaseError(const BaseError &) = default;
BaseError & operator=(BaseError const & rhs) = default;
template<typename... Args>
BaseError(unsigned int status, const Args & ... args)
: err { .level = lvlError, .msg = HintFmt(args...), .status = status }

View file

@ -184,6 +184,11 @@ Path canonPath(PathView path, bool resolveSymlinks)
return s.empty() ? "/" : std::move(s);
}
void chmodPath(const Path & path, mode_t mode)
{
if (chmod(path.c_str(), mode) == -1)
throw SysError("setting permissions on '%s'", path);
}
Path dirOf(const PathView path)
{
@ -1799,8 +1804,7 @@ AutoCloseFD createUnixDomainSocket(const Path & path, mode_t mode)
bind(fdSocket.get(), path);
if (chmod(path.c_str(), mode) == -1)
throw SysError("changing permissions on '%1%'", path);
chmodPath(path.c_str(), mode);
if (listen(fdSocket.get(), 100) == -1)
throw SysError("cannot listen on socket '%1%'", path);

View file

@ -77,6 +77,13 @@ Path absPath(Path path,
*/
Path canonPath(PathView path, bool resolveSymlinks = false);
/**
* Change the permissions of a path
* Not called `chmod` as it shadows and could be confused with
* `int chmod(char *, mode_t)`, which does not handle errors
*/
void chmodPath(const Path & path, mode_t mode);
/**
* @return The directory part of the given canonical path, i.e.,
* everything before the final `/`. If the path is the root or an

View file

@ -2,6 +2,7 @@
#include "store-api.hh"
#include "fs-accessor.hh"
#include "nar-accessor.hh"
#include "progress-bar.hh"
using namespace nix;
@ -17,7 +18,10 @@ struct MixCat : virtual Args
if (st.type != FSAccessor::Type::tRegular)
throw Error("path '%1%' is not a regular file", path);
writeFull(STDOUT_FILENO, accessor->readFile(path));
auto file = accessor->readFile(path);
stopProgressBar();
writeFull(STDOUT_FILENO, file);
}
};

View file

@ -1,6 +1,7 @@
#include "command.hh"
#include "store-api.hh"
#include "archive.hh"
#include "progress-bar.hh"
using namespace nix;
@ -20,6 +21,7 @@ struct CmdDumpPath : StorePathCommand
void run(ref<Store> store, const StorePath & storePath) override
{
stopProgressBar();
FdSink sink(STDOUT_FILENO);
store->narFromPath(storePath, sink);
sink.flush();
@ -55,6 +57,7 @@ struct CmdDumpPath2 : Command
void run() override
{
stopProgressBar();
FdSink sink(STDOUT_FILENO);
dumpPath(path, sink);
sink.flush();

View file

@ -3,6 +3,7 @@
#include "store-api.hh"
#include "thread-pool.hh"
#include "signals.hh"
#include "progress-bar.hh"
#include <atomic>
@ -220,6 +221,8 @@ struct CmdKey : NixMultiCommand
{
if (!command)
throw UsageError("'nix key' requires a sub-command.");
stopProgressBar();
command->second->run();
}
};
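The `nix cat`, `nix dump-path`, and `nix key` hunks above all apply the same pattern: call `stopProgressBar()` before writing raw bytes to stdout, presumably so that the progress bar's periodic stderr redraws cannot interleave with or garble the payload when both streams share a terminal. A self-contained sketch of the pattern (only the `stopProgressBar` name comes from the diff; the stub and everything else is illustrative):

    // Illustrative pattern: tear down the interactive stderr UI, then stream.
    #include <cstdio>
    #include <iostream>
    #include <string>
    #include <unistd.h>

    static void stopProgressBar()
    {
        // Stand-in for the real function, which erases the status lines the
        // progress bar keeps redrawing on stderr and stops further updates.
        std::cerr.flush();
    }

    static void streamRawToStdout(const std::string & payload)
    {
        stopProgressBar(); // stop the UI before emitting raw data
        if (::write(STDOUT_FILENO, payload.data(), payload.size()) < 0)
            std::perror("write");
    }

    int main()
    {
        streamRawToStdout("raw NAR or file contents would go here\n");
        return 0;
    }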

View file

@ -0,0 +1,72 @@
source ./common.sh
# remove the flake-registry setting from nix.conf so it falls back to the default ("vendored")
sed -i '/flake-registry/d' "$NIX_CONF_DIR/nix.conf"
# Make sure the vendored registry contains the expected number of entries.
[[ $(nix registry list | wc -l) == 37 ]]
# sanity check, contains the important ones
nix registry list | grep '^global flake:nixpkgs'
nix registry list | grep '^global flake:home-manager'
# it should work the same if we set it to "vendored" explicitly.
echo 'flake-registry = vendored' >> "$NIX_CONF_DIR/nix.conf"
[[ $(nix registry list | wc -l) == 37 ]]
# sanity check, contains the important ones
nix registry list | grep '^global flake:nixpkgs'
nix registry list | grep '^global flake:home-manager'
# the online flake registry should still work, but it is deprecated.
set -m
# port 0: automatically pick a free port; -u gives unbuffered output
python3 -u -m http.server 0 --bind 127.0.0.1 > server.out &
# wait for the http server to admit it is working
while ! grep -qP 'port \d+' server.out ; do
echo 'waiting for python http' >&2
sleep 0.2
done
port=$(awk 'match($0,/port ([[:digit:]]+)/, ary) { print ary[1] }' server.out)
sed -i '/flake-registry/d' "$NIX_CONF_DIR/nix.conf"
echo "flake-registry = http://127.0.0.1:$port/flake-registry.json" >> "$NIX_CONF_DIR/nix.conf"
cat <<EOF > flake-registry.json
{
"flakes": [
{
"from": {
"type": "indirect",
"id": "nixpkgs"
},
"to": {
"type": "github",
"owner": "NixOS",
"repo": "nixpkgs"
}
},
{
"from": {
"type": "indirect",
"id": "private-flake"
},
"to": {
"type": "github",
"owner": "fancy-enterprise",
"repo": "private-flake"
}
}
],
"version": 2
}
EOF
[[ $(nix registry list | wc -l) == 2 ]]
nix registry list | grep '^global flake:nixpkgs'
nix registry list | grep '^global flake:private-flake'
# make sure we have a warning:
nix registry list 2>&1 | grep "config option flake-registry referring to a URL is deprecated and will be removed"
kill %1

View file

@ -69,6 +69,7 @@ functional_tests_scripts = [
'flakes/unlocked-override.sh',
'flakes/absolute-paths.sh',
'flakes/build-paths.sh',
'flakes/flake-registry.sh',
'flakes/flake-in-submodule.sh',
'gc.sh',
'nix-collect-garbage-d.sh',

View file

@ -163,7 +163,9 @@ in
symlinkResolvconf = runNixOSTestFor "x86_64-linux" ./symlink-resolvconf.nix;
rootInSandbox = runNixOSTestFor "x86_64-linux" ./root-in-sandbox;
noNewPrivilegesInSandbox = runNixOSTestFor "x86_64-linux" ./no-new-privileges/sandbox.nix;
noNewPrivilegesOutsideSandbox = runNixOSTestFor "x86_64-linux" ./no-new-privileges/no-sandbox.nix;
broken-userns = runNixOSTestFor "x86_64-linux" ./broken-userns.nix;

View file

@ -146,6 +146,8 @@ in
virtualisation.additionalPaths = [ pkgs.hello pkgs.fuse ];
virtualisation.memorySize = 4096;
nix.settings.substituters = lib.mkForce [ ];
# note: URL flake-registries are currently deprecated.
nix.settings.flake-registry = "https://channels.nixos.org/flake-registry.json";
nix.extraOptions = "experimental-features = nix-command flakes";
networking.hosts.${(builtins.head nodes.github.networking.interfaces.eth1.ipv4.addresses).address} =
[ "channels.nixos.org" "api.github.com" "github.com" ];

View file

@ -0,0 +1,21 @@
let
inherit (import ../util.nix) mkNixBuildTest;
in
mkNixBuildTest {
name = "no-new-privileges-outside-sandbox";
extraMachineConfig =
{ pkgs, ... }:
{
security.wrappers.ohno = {
owner = "root";
group = "root";
capabilities = "cap_sys_nice=eip";
source = "${pkgs.libcap}/bin/getpcaps";
};
nix.settings = {
extra-sandbox-paths = [ "/run/wrappers/bin/ohno" ];
sandbox = false;
};
};
expressionFile = ./package.nix;
}

View file

@ -0,0 +1,8 @@
{ runCommand, libcap }:
runCommand "cant-get-capabilities" { nativeBuildInputs = [ libcap.out ]; } ''
if /run/wrappers/bin/ohno; then
echo "Oh no! We gained capabilities!"
exit 1
fi
touch $out
''

View file

@ -0,0 +1,18 @@
let
inherit (import ../util.nix) mkNixBuildTest;
in
mkNixBuildTest {
name = "no-new-privileges-in-sandbox";
extraMachineConfig =
{ pkgs, ... }:
{
security.wrappers.ohno = {
owner = "root";
group = "root";
capabilities = "cap_sys_nice=eip";
source = "${pkgs.libcap}/bin/getpcaps";
};
nix.settings.extra-sandbox-paths = [ "/run/wrappers/bin/ohno" ];
};
expressionFile = ./package.nix;
}

View file

@ -1,15 +0,0 @@
let
inherit (import ../util.nix) mkNixBuildTest;
in mkNixBuildTest {
name = "root-in-sandbox";
extraMachineConfig = { pkgs, ... }: {
security.wrappers.ohno = {
owner = "root";
group = "root";
setuid = true;
source = "${pkgs.coreutils}/bin/whoami";
};
nix.settings.extra-sandbox-paths = ["/run/wrappers/bin"];
};
expressionFile = ./package.nix;
}

View file

@ -1,8 +0,0 @@
{ runCommand }:
runCommand "cant-get-root-in-sandbox" {} ''
if /run/wrappers/bin/ohno; then
echo "Oh no! We're root in the sandbox!"
exit 1
fi
touch $out
''

View file

@ -77,6 +77,15 @@ TEST_F(DerivedPathTest, built_built_xp) {
MissingExperimentalFeature);
}
/**
* Built paths with a non-derivation base should fail parsing.
*/
TEST_F(DerivedPathTest, non_derivation_base) {
ASSERT_THROW(
DerivedPath::parse(*store, "/nix/store/g1w7hy3qg1w7hy3qg1w7hy3qg1w7hy3q-x^foo"),
InvalidPath);
}
#ifndef COVERAGE
RC_GTEST_FIXTURE_PROP(