this was a debugging aid from day one that should not have any impact on
build semantics, and if it *does* have an impact on build semantics then
build semantics are seriously broken. keeping the order imposed by these
keys will be impossible once we let a real event loop schedule our jobs.
Change-Id: I5c313324e1f213ab6453d82f41ae5e59de809a5b
the new event loop could very occasionally notice that a dependency of
some goal has failed, process the failure, cause the depending goal to
fail accordingly, and while doing the latter two steps let further
dependencies that had not previously been reported as failed do their
reporting anyway. in such cases a goal could fail with "1 dependencies
failed", but more than one dependency failure message was shown. we'll
now report the correct number of failed dependency goals in all cases.
Change-Id: I5aa95dcb2db4de4fd5fee8acbf5db833531d81a8
Since fb38459d6e, each `ref` is prefixed
with `refs/heads` unless it already starts with `refs/`. This regressed
two use-cases that worked fine before:
* Specifying a commit hash as `ref`: now, if `ref` looks like a commit
hash it will be directly passed to `git fetch`.
* Specifying a tag without `refs/tags` as prefix: now, the fetcher prepends
`refs/*` to a ref that doesn't start with `refs/` and doesn't look
like a commit hash. That way, both a branch and a tag specified in
`ref` can be fetched.
The order of preference in git is
* file in `refs/` (e.g. `HEAD`)
* file in `refs/tags/`
* file in `refs/heads` (i.e. a branch)
After fetching `refs/*`, ref is resolved the same way as git does.
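The resulting classification is roughly this (illustrative sketch only,
not the fetcher's actual code; the 40-hex-character check assumes SHA-1
object names):
```cpp
#include <regex>
#include <string>

// Decide what to hand to `git fetch` for a given `ref` (illustration).
std::string refspecFor(const std::string & ref)
{
    static const std::regex commitHashRegex("^[0-9a-fA-F]{40}$");
    if (std::regex_match(ref, commitHashRegex))
        return ref;       // commit hashes are passed to `git fetch` directly
    if (ref.rfind("refs/", 0) == 0)
        return ref;       // already fully qualified, use as-is
    return "refs/*";      // fetch everything, then resolve like git does
}
```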
Change-Id: Idd49b97cbdc8c6fdc8faa5a48bef3dec25e4ccc3
like kj::joinPromisesFailFast this allows waiting for the results of
multiple promises at once, but unlike it, results become available without
all input promises having to complete (or any of them having failed) first.
Change-Id: I0e4a37e7bd90651d56b33d0bc5afbadc56cde70c
like a normal semaphore, but with awaitable acquire actions. this is
primarily intended as an intermediate concurrency limiting device in
the Worker code, but it may find other uses over time. we do not use
std::counting_semaphore as a base because its counter is not inspectable,
which the Worker will need. we also do not need atomic operations for
cross-thread consistency since we don't have multiple threads (thanks to
kj event loops being confined to a single thread).
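a minimal sketch of the shape of such a thing, assuming kj's promise
primitives (this is an illustration of the idea, not the class added
here):
```cpp
#include <kj/async.h>
#include <list>

// Awaitable counting semaphore for single-threaded kj code (sketch).
class AsyncSemaphore
{
    size_t available;
    std::list<kj::Own<kj::PromiseFulfiller<void>>> waiters;

public:
    explicit AsyncSemaphore(size_t count) : available(count) {}

    size_t free() const { return available; } // counter stays inspectable

    kj::Promise<void> acquire()
    {
        if (available > 0) {
            available--;
            return kj::READY_NOW;        // uncontended, resolve immediately
        }
        auto pair = kj::newPromiseAndFulfiller<void>();
        waiters.push_back(kj::mv(pair.fulfiller));
        return kj::mv(pair.promise);     // resolved by a later release()
    }

    void release()
    {
        if (!waiters.empty()) {
            waiters.front()->fulfill();  // hand the slot to the oldest waiter
            waiters.pop_front();
        } else {
            available++;
        }
    }
};
```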
Change-Id: Ie2bcb107f3a2c0185138330f7cbba4cec6cbdd95
Without this, verifying TLS certificates would fail on macOS, as well as
on any system that doesn't have a certificate file at
/etc/ssl/certs/ca-certificates.crt,
which includes e.g. Fedora.
Change-Id: Iaa2e0e9db3747645b5482c82e3e0e4e8f229f5f9
This is better for privacy and to avoid leaking netrc credentials in a
MITM attack; moreover, the assumption that we check the hash no longer
holds in some cases (in particular for impure derivations).
Partially reverts 5db358d4d7.
(cherry picked from commit c04bc17a5a0fdcb725a11ef6541f94730112e7b6)
(cherry picked from commit f2f47fa725fc87bfb536de171a2ea81f2789c9fb)
(cherry picked from commit 7b39cd631e0d3c3d238015c6f450c59bbc9cbc5b)
Upstream-PR: https://github.com/NixOS/nix/pull/11585
Change-Id: Ia973420f6098113da05a594d48394ce1fe41fbb9
These stack traces kind of suck for the reasons mentioned on the
CppTrace page here (no symbols for inline functions is a major one):
https://github.com/jeremy-rifkin/cpptrace
I would consider using CppTrace if it were packaged, but to be honest, I
think that the more reasonable option is actually to move entirely to
out-of-process crash handling and symbolization.
The reason for this is that if you want to generate anything of
substance on SIGSEGV or really any deadly signal, you are stuck in
async-signal-safe land, which is not a place to be trying to run a
symbolizer. LLVM does it anyway, probably carefully, and chromium *can*
do it on debug builds but in general uses crashpad:
https://source.chromium.org/chromium/chromium/src/+/main:base/debug/stack_trace_posix.cc;l=974;drc=82dff63dbf9db05e9274e11d9128af7b9f51ceaa;bpv=1;bpt=1
However, some stack traces are better than *no* stack traces when we get
mystery exceptions falling out the bottom of the program. I've also
promoted the path for "mystery exceptions falling out the bottom of the
program" to hard crash and generate a core dump because although there's
been some months since the last one of these, these are nonetheless
always *atrociously* diagnosed.
We can't improve the crash handling further until either we use Crashpad
(which involves more C++ deps, no thanks) or we put in the ostensibly
work-in-progress Rust minidump infrastructure, in which case we need to
finish full support for Rust in libutil first.
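As a sketch of what "promote to hard crash" means in practice (not the
actual Lix handler, just the general shape):
```cpp
#include <cstdlib>
#include <exception>
#include <iostream>

// Last-resort handler for exceptions falling out the bottom of the
// program: report what we can, then abort() so a core dump gets written.
[[noreturn]] void onUnhandledException()
{
    try {
        if (auto e = std::current_exception()) std::rethrow_exception(e);
    } catch (const std::exception & e) {
        std::cerr << "unhandled exception: " << e.what() << "\n";
    } catch (...) {
        std::cerr << "unhandled exception of unknown type\n";
    }
    std::abort();
}

// installed early in main(): std::set_terminate(onUnhandledException);
```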
Sample report:
Lix crashed. This is a bug. We would appreciate if you report it at https://git.lix.systems/lix-project/lix/issues with the following information included:
Exception: std::runtime_error: lol
Stack trace:
0# nix::printStackTrace() in /home/jade/lix/lix3/build/src/nix/../libutil/liblixutil.so
1# 0x000073C9862331F2 in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
2# 0x000073C985F2E21A in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
3# 0x000073C985F2E285 in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
4# nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
5# 0x00005CF65B6B048B in /home/jade/lix/lix3/build/src/nix/nix
6# 0x000073C985C8810E in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
7# __libc_start_main in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
8# 0x00005CF65B610335 in /home/jade/lix/lix3/build/src/nix/nix
Change-Id: I1a9f6d349b617fd7145a37159b78ecb9382cb4e9
This caused an infinite loop before since it would just keep asking the
underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, as well as just patching the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
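The guard is roughly this shape (names are illustrative, not the actual
Lix sink API):
```cpp
#include <functional>
#include <string_view>
#include <utility>

// Wrap a decompressing data callback so that empty chunks (e.g. from a
// HEAD response that has no body at all) never reach the decompressor.
std::function<void(std::string_view)>
skipEmptyChunks(std::function<void(std::string_view)> feedDecompressor)
{
    return [feed = std::move(feedDecompressor)](std::string_view chunk) {
        if (chunk.empty()) return; // nothing to decompress, nothing to do
        feed(chunk);               // may throw CompressionError on short streams
    };
}
```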
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
This test suite was in desperate need of using the parameterization
available with gtest, and was a bunch of useless duplicated code. At
least now it's not duplicated code, though it probably should still make
more use of property tests.
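For reference, the general shape of a value-parameterized gtest suite is
something like this (a generic example, not the actual tests touched
here):
```cpp
#include <gtest/gtest.h>
#include <string>
#include <tuple>

// One fixture, many cases: each tuple is (input, expected).
class RoundTripTest
    : public testing::TestWithParam<std::tuple<std::string, std::string>>
{};

TEST_P(RoundTripTest, matchesExpected)
{
    auto [input, expected] = GetParam();
    EXPECT_EQ(input, expected); // stand-in for the real property under test
}

INSTANTIATE_TEST_SUITE_P(
    Examples,
    RoundTripTest,
    testing::Values(
        std::make_tuple("foo", "foo"),
        std::make_tuple("bar", "bar")));
```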
Change-Id: Ia8ccee7ef4f02b2fa40417b79aa8c8f0626ea479
Fixes:
- Identifiers starting with _ are prohibited
- Some driveby header dependency cleaning which wound up requiring some
extra fixups.
- Fucking C style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them through references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so we get 100% of the bits
being entropy, then memcpying the start of the hash. If this shows
up in a profile we should just make the hash parser faster with a
lookup table or something sensible like that.
- This horrendous bit of UB, which I thankfully slapped a deprecation
warning on and rebuilt; it didn't trigger anywhere, so it was dead
code and I just deleted it. But holy crap you *cannot* do that.
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, one wrong
suspicious-sizeof lint triggered by the template being instantiated
with only pointers, but the calculation being correct for both
pointers and non-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
the current test relies on derivation build order being deterministic,
which will not be a reasonable expectation for much longer.
Change-Id: I9be44a7725185f614a9a4c724045b8b1e6962c03
this should be done where we're actually trying to build something, not
in the main worker loop, which shouldn't have to be aware of such details.
Change-Id: I07276740c0e2e5591a8ce4828a4bfc705396527e
This caused an absolute saga which I would not like anyone else to have
to experience. Let's put in a laser-targeted error message that
diagnoses this exact problem.
Fixes: #484
Change-Id: I2a79f04aeb4a1b67c10115e5e39501d958836298
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
The <() process substitution syntax doesn't work for this one testcase
in bash on FreeBSD. The exact reason for this is unknown, possibly
something to do with pipe vs file vs fifo EOF behavior. Previously, this
test hung forever, with no children of the bash process.
Change-Id: I71822a4b9dea6059b34300568256c5b7848109ac
Closes #460
I managed to trigger the issue by having the following inputs (shortened):
    authentik-nix.url = "github:nix-community/authentik-nix";
    authentik-nix.inputs.poetry2nix.inputs.nixpkgs.follows = "nixpkgs";
When evaluating this using
    nix-eval-jobs --flake .#hydraJobs
I got the following error:
    error: cannot update unlocked flake input 'authentik-nix/poetry2nix' in pure mode
The issue we have here is that `authentik-nix/poetry2nix` was written
into the `overrideMap`, which caused Nix to assume it's a new input and
either try to refetch it (#460) or error out in pure mode
(nix-eval-jobs / Hydra).
The testcase unfortunately only checks the output log, and makes sure
that something *is* logged on the first fetch so that the test doesn't
rot when the logging changes, since I didn't manage to trigger the error
above with the reproducer from #460. In fact, I only managed to trigger
the `cannot update unlocked flake input` error in this context with
`nix-eval-jobs`.
Change-Id: Ifd00091eec9a0067ed4bb3e5765a15d027328807
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
Turns out strings do not like being resized to -4.
This was discovered while messing with the tests to remove unbuffer and
trying stdbuf instead. Turns out that was not the right approach.
This basically rewrites the handling of this case to be much more
correct, and fixes a bug where, with small window sizes, it would ALSO
truncate the attr names in addition to the optional descriptions.
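For the curious, "resized to -4" boils down to this, since
std::string::resize takes a size_t (illustrative snippet, not the actual
code):
```cpp
#include <string>

int main()
{
    std::string label = "abc";
    // 3 - 4 wraps around to SIZE_MAX, so this throws std::length_error
    // instead of truncating the string as intended.
    label.resize(label.size() - 4);
}
```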
Change-Id: Ifd1beeaffdb47cbb5f4a462b183fcb6c0ff6c524
I was packaging Lix 2.91 for nixpkgs and was annoyed at the expect
dependency. Turns out that you can replace unbuffer with a pretty-short
Python script.
It became less short after I found out that Linux was converting \n to
\r\n in the terminal subsystem, which was not very funny, but is at
least solved by twiddling termios bits.
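The bit in question is ONLCR in the pty's output flags; clearing it
stops the kernel from rewriting \n as \r\n. Here is a sketch of the
equivalent twiddle in C (the script presumably does the same thing from
Python):
```cpp
#include <termios.h>

// Stop the line discipline from translating "\n" into "\r\n" on this fd.
void disableNewlineTranslation(int ttyFd)
{
    struct termios tio;
    if (tcgetattr(ttyFd, &tio) != 0) return;
    tio.c_oflag &= ~ONLCR;   // output post-processing: no NL -> CR NL
    tcsetattr(ttyFd, TCSANOW, &tio);
}
```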
Change-Id: I8a2700abcbbf6a9902e01b05b40fa9340c0ab90c
* changes:
sqlite: add a Use::fromStrNullable
util: implement charptr_cast
tree-wide: fix a pile of lints
refactor: make HashType and Base enum classes for type safety
build: integrate clang-tidy into CI
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it was before)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function (see the
sketch after this list). Maybe in the future we should ban the function
from not being noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
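The finally change amounts to something like this (simplified sketch,
not the real Finally in libutil):
```cpp
#include <utility>

// Scope guard whose destructor is noexcept exactly when the callback is,
// rather than the implicit unconditional noexcept that terminates the
// program if the callback throws.
template<typename Fn>
class Finally
{
    Fn fn;

public:
    explicit Finally(Fn fn) : fn(std::move(fn)) {}
    Finally(const Finally &) = delete;
    ~Finally() noexcept(noexcept(fn())) { fn(); }
};
```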
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
The |> operator is a reverse function application operator with low
binding strength, to replace lib.pipe. Implements RFC 148; see the RFC
text for more details. Closes #438.
Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass the feature settings around in the parser::State.
This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.
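A rough illustration of what the cheaper `isEnabled` could look like
(all names here are invented for the example, not Lix's real types):
```cpp
#include <bitset>
#include <cstddef>

// Hypothetical feature registry: one bit per known feature, so a check is
// a constant-time bit test instead of a linear scan over a list.
enum class Feature : std::size_t { PipeOperator, UrlLiterals, Count_ };

struct FeatureSettings
{
    std::bitset<static_cast<std::size_t>(Feature::Count_)> enabled;

    bool isEnabled(Feature f) const
    {
        return enabled.test(static_cast<std::size_t>(f));
    }
};
```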
Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
This adds a second form to the `:log` command: it can now accept a
derivation path in addition to a derivation expression. As derivation
store paths start with `/nix/store`, this is not ambiguous.
Resolves: #51
Change-Id: Iebc7b011537e7012fae8faed4024ea1b8fdc81c3
This was always in the lock file and we can simply actually print it.
The test for this is a little bit silly but it should correctly
control for my daring to exercise timezone code *and* locale code in a
test, which I strongly suspect nobody dared do before.
Sample (abridged):
```
Path: /nix/store/gaxb42z68bcr8lch467shvmnhjjzgd8b-source
Last modified: 1970-01-01 00:16:40
Inputs:
├───flake-compat: github:edolstra/flake-compat/0f9255e01c2351cc7d116c072cb317785dd33b33
│ Last modified: 2023-10-04 13:37:54
├───flake-utils: github:numtide/flake-utils/b1d9ab70662946ef0850d488da1c9019f3a9752a
│ Last modified: 2024-03-11 08:33:50
│ └───systems: github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e
│ Last modified: 2023-04-09 08:27:08
```
Change-Id: I355f82cb4b633974295375ebad646fb6e2107f9b
This *should* be sound, plus or minus however much the terminal-code-eating
code is messed up already.
This is useful for testing CLI output because it will strip the escapes
enough to just shove the expected output in a file.
Change-Id: I8a9b58fafb918466ac76e9ab585fc32fb9294819
The original attempt at this introduced a regression; this commit
reverts the revert and fixes the regression.
This reverts commit 3e151d4d77.
Fix to the regression:
flakeref: fix handling of `?dir=` param for flakes in subdirs
As reported in #419[1], with the previous commit[2] applied, accessing a
flake in a subdir of a Git repository fails with the error
    error: unsupported Git input attribute 'dir'
The problem is that the `dir`-param is inserted into the parsed URL if a
flake is fetched from the subdir of a Git repository. However, for the
fetching part this isn't even needed. The fix is to just pass `subdir`
as second argument to `FlakeRef` (which needs a `basedir` that can be
empty) and leave the parsedURL as-is.
Added a regression test to make sure we don't run into this again.
[1] #419
[2] e22172aaf6b6a366cecd3c025590e68fa2b91bcc,
originally 3e151d4d77
Change-Id: I2c72d5a32e406a7ca308e271730bd0af01c5d18b