This commit should faithfully reproduce the old behavior down to the
bugs. The new code is a lot more readable, all quirks are well
documented, and it is overall much more maintainable.
Change-Id: I629585918e4f2b7d296b6b8330235cdc90b7bade
Some settings only make sense on particular platforms, or only when a certain
experimental feature is enabled. Several of those were already conditionally
available. Do the same for a bunch more instead of silently ignoring them.
Exceptionally, the use-case-hack setting is not made conditional because it is
included in the test suite.
Change-Id: I29e66ad8ee6178a7c0eff9efb55c3410fae32514
The build hook is still running locally, so it will run with the same default
settings. Hence, just as with the daemon, it is enough to send it only the
overridden settings. This will prevent warnings like
warning: Ignoring setting 'auto-allocate-uids' because experimental feature 'auto-allocate-uids' is not enabled
when the user didn't actually set those settings.
This is inspired by and an alternative to [0].
[0] https://github.com/NixOS/nix/pull/10049
Change-Id: I77ea62cd017614b16b55979dd30e75f09f860d21
Only overridden settings are sent to the daemon, and we're going to do the same
for the build hook too. We need to ensure that overridden settings are in
fact consistently marked as such, so that they actually get sent.
Change-Id: I7cd58d925702f86cf2c35ad121eb191ceb62a355
Before this change, expressions like:
with import <nixpkgs> {};
runCommand "foo" {} ''
echo '@nix {}' >&$NIX_LOG_FD
''
would result in Lix crashing, because accessing nonexistent fields of
a JSON object throws an exception.
Rather than handling each field individually, we just catch JSON
exceptions wholesale. Since these log messages are an unusual
circumstance, log a warning when this happens.
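A minimal sketch of the approach, with hypothetical field names (the real
handler reads more fields): any JSON error while pulling values out of a
structured log line is treated as a malformed message and reduced to a warning.
    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    // hypothetical sketch; field names and the warning text are illustrative
    void handleJsonLogLine(const std::string & line)
    {
        try {
            auto json = nlohmann::json::parse(line);
            int action = json["action"];     // throws json::exception if missing or mistyped
            std::string msg = json["msg"];
            std::cout << "action=" << action << " msg=" << msg << "\n";
        } catch (nlohmann::json::exception & e) {
            std::cerr << "warning: invalid JSON log message: " << e.what() << "\n";
        }
    }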
Fixes #544.
Change-Id: Idc2d8acf6e37046b3ec212f42e29269163dca893
By implementing the `PathSet` specialization for `PathsSetting`, we'll
be able to use `PathsSetting` for the `sandboxPaths` setting in
`src/libstore/globals.hh`.
Fixes: #498
Change-Id: I8bf7dfff98609d1774fdb36d63e57d787bcc829f
This was always a terrible idea independently of whether it crashes.
Stop doing it!
This commit was verified by running nix-shell on a trivial derivation
with --debug --verbose to get the vomit-level output of the shell rc
file and then diffing it before/after this change. I have reasonable
confidence it did not regress anything, though this code is genuinely
really hard to follow (which is a second reason that I split it into two
fmt calls).
Fixes: #533
Change-Id: I8e11ddbece2b12749fda13efe0b587a71b00bfe5
A better fix than in 104448e75d, hence a
revert + the fix.
It turns out that this commit has the side effect that, when e.g.
`StrictHostKeyChecking=accept-new` is set for a remote builder, warnings à la
Warning: Permanently added 'builder' (ED25519) to the list of known hosts.
actually end up in the derivation's log, whereas host-key verification
errors don't; those appear only in the stderr of the `nix-build` invocation
(which was the motivation for the patch).
This change writes the stderr from the build-hook to
* the daemon's stderr, so that the SSH errors appear in the journal
(which was the case before 104448e75d)
* the client's stderr, as a log message
* NOT to the drv log (this is handled via `handleJSONLogMessage`)
I tried to fix the issue for legacy-ssh as well, but failed and
ultimately decided to not bother.
I know that we'll sooner or later replace the entire component, however
this is the part of the patch I have working for a while, so I figured I
might still submit it for the time being.
Change-Id: I21ca1aa0d8ae281d2eacddf26e0aa825272707e5
While debugging something else I observed that latest `main` ignores
`Control-C` on `sudo nix-build`.
After reading through the capnproto docs, it seems as if the promise
must be fulfilled to actually terminate the `promise.wait()` below.
This also applies to scenarios such as stopping the client
(`nix-build`) while the builders on the daemon side are still running,
i.e. closes #540
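A minimal sketch of that mechanism (the actual wiring in Lix goes through its
interrupt handling; the hook below is hypothetical): `wait()` only returns once
the paired fulfiller has been fulfilled or rejected.
    #include <kj/async.h>
    #include <kj/exception.h>

    void demo(kj::WaitScope & waitScope)
    {
        auto paf = kj::newPromiseAndFulfiller<void>();

        // hypothetical interrupt hook; rejecting the promise makes wait()
        // throw and unwind instead of hanging forever
        auto notifyInterrupt = [&] {
            paf.fulfiller->reject(KJ_EXCEPTION(FAILED, "interrupted by the user"));
        };

        notifyInterrupt();           // simulate Control-C for the sake of the sketch
        paf.promise.wait(waitScope); // throws instead of blocking indefinitely
    }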
Co-authored-by: eldritch horrors <pennae@lix.systems>
Change-Id: I9634d14df4909fc1b65d05654aad0309bcca8a0a
So we received a report that the thread pool crashed due to an
Interrupted exception.
Relevant log tail:
copying path '/nix/store/0kal2k73inviikxv9f1ciaj39lkl9a87-etc-os-release' to 'ssh://192.168.0.27'...
Lix crashed. This is a bug. We would appreciate if you report it along with what caused it at https://git.lix.systems/lix-project/lix/issues with the following information included:
error (ignored): error: interrupted by the user
Exception: nix::Interrupted: error: interrupted by the user
Relevant stack trace:
4# __cxa_rethrow in /nix/store/22nxhmsfcv2q2rpkmfvzwg2w5z1l231z-gcc-13.3.0-lib/lib/libstdc++.so.6
5# nix::ignoreExceptionExceptInterrupt(nix::Verbosity) in /nix/store/ghxr2ykqc3rrfcy8rzdys0rzx9ah5fqj-lix-2.92.0-dev-pre20241005-ed9b7f4/lib/liblixutil.so
6# nix::ThreadPool::doWork(bool) in /nix/store/ghxr2ykqc3rrfcy8rzdys0rzx9ah5fqj-lix-2.92.0-dev-pre20241005-ed9b7f4/lib/liblixutil.so
7# 0x00007FA7A00E86D3 in /nix/store/22nxhmsfcv2q2rpkmfvzwg2w5z1l231z-gcc-13.3.0-lib/lib/libstdc++.so.6
8# 0x00007FA79FE99A42 in /nix/store/3dyw8dzj9ab4m8hv5dpyx7zii8d0w6fi-glibc-2.39-52/lib/libc.so.6
9# 0x00007FA79FF1905C in /nix/store/3dyw8dzj9ab4m8hv5dpyx7zii8d0w6fi-glibc-2.39-52/lib/libc.so.6
Notably, this is *not* in the main thread, so this implies that the
thread didn't get joined properly before their destructors got called.
That, in turn, should have only possibly happened because join() threw
on a previous iteration of the loop joining threads, I think. Or if it
threw while in the ThreadPool destructor. Either way we had better stop
letting Interrupted fall out of our child threads!
If:
- Interrupted was thrown inside the action in the main thread: it would
have fallen out of doWork if state->exception was already set and got
caught by ThreadPool::process, calling shutdown() and the join loop
which would crash the process entirely.
- Interrupted was thrown inside the action on a secondary thread: it
would have been caught and put into the exception field and then
possibly rethrown to fall out of the thread (since it was previously
ignoreExceptionExceptInterrupt).
The one possible hole in this hypothesis is that there is an "error
(ignored)" line in there implying that at least one Interrupted got
eaten by an ignoreExceptionInDestructor. It's also unclear whether this
got reordered because of stderr buffering.
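As a rough sketch of the remedy (shape assumed, this is not the actual
ThreadPool code): every worker thread catches everything, including
Interrupted, and stashes the first exception for the main thread instead of
letting it escape the thread.
    #include <exception>
    #include <mutex>

    // assumed shape; an exception escaping a std::thread body terminates the
    // whole process, so record it here and rethrow it on the main thread later
    void workerBody(std::mutex & m, std::exception_ptr & firstError)
    {
        try {
            // ... pop and run work items ...
        } catch (...) {
            std::lock_guard<std::mutex> lock(m);
            if (!firstError)
                firstError = std::current_exception();
        }
    }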
Fixes: #542
Change-Id: I322cf050da660af78f5cb0e08ec6e6d27d09ac76
There is absolutely no good reason these should show up in NARs besides
misconfigured systems and as long as the case hack exists, unpacking
such a NAR will cause its repacking to be wrong on systems with case
hack enabled.
This should not have any security impact on Lix to fix, but it was one
of the vectors for CVE-2024-45593:
https://github.com/NixOS/nix/security/advisories/GHSA-h4vv-h3jq-v493
Change-Id: I85b6075aacc069ee7039240b0f525804a2d8edcb
I followed @pennae's advice and moved the constructor definition of
`AttrName` from the header file `nixexpr.hh` to `nixexpr.cc`.
Change-Id: I733f56c25635b366b11ba332ccec38dd7444e793
The approach that was taken here was to add default values to the type
definitions rather than specify them whenever they are missing.
Now the only remaining warning is '-Wunused-parameter', which @jade said
is usually counterproductive and which we can therefore just disable:
#456 (comment)
So this change adds the flags '-Wall', '-Wextra' and
'-Wno-unused-parameter', so that all warnings are enabled except for
'-Wunused-parameter'.
Change-Id: Ic223a964d67ab429e8da804c0721ba5e25d53012
There was a bug report about a potential call to `memcpy` with a null
pointer which is not reproducible:
#492
This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.
To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.
This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.
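A minimal sketch of the guard (names and signature are illustrative; the real
code lives in `InnerSource::read`):
    #include <algorithm>
    #include <cstring>
    #include <string>

    size_t readBuffered(char * dest, size_t maxLen, const std::string & buffered)
    {
        size_t n = std::min(maxLen, buffered.size());
        if (n == 0)
            return 0;            // early return: never reach memcpy with zero bytes to copy
        std::memcpy(dest, buffered.data(), n);
        return n;
    }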
Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
these two functions are now nearly trivial and are much better inlined into
makeGoalCommon. keeping them separate also scatters information about how
goal completion flows and how failure information ends up in `Worker`.
Change-Id: I6af86996e4a2346583371186595e3013c88fb082
we can use our newfound powers of Goal::work Is A Real Promise to remove
completed goals from continuation promises. apart from being much easier
to follow it's also a lot more efficient because we have the iterator to
the item we are trying to remove, skipping a linear search of the cache.
Change-Id: Ie0190d051c5f4b81304d98db478348b20c209df5
Goal::work() is a fully usable promise that does not rely on the worker
to report completion conditions. as such we no longer need the `notify`
field that enabled this interplay. we do have to clear goal caches when
destroying the worker though, otherwise goal promises may (incorrectly)
keep goals alive due to strong shared pointers created by childStarted.
Change-Id: Ie607209aafec064dbdf3464fe207d70ba9ee158a
derivation goals still hold a BuildResult member variable since parts of
these results are accumulated in different places, but the Goal class now
no longer has such a field. substitution goals don't need it at all, and
derivation goals should also be refactored to not drop their buildResult.
Change-Id: Ic6d3d471cdbe790a6e09a43445e25bedec6ed446
the field is simply duplicated between the two, and now that we can
return WorkResults from Worker::run we no longer need both of them.
Change-Id: I82fc47d050b39b7bb7d1656445630d271f6c9830
this will be needed to move all interesting result fields out of Goal
proper and into WorkResult. once that is done we can treat goals as a
totally internal construct of the worker mechanism, which also allows
us to fully stop exposing unclear intermediate state to Worker users.
Change-Id: I98d7778a4b5b2590b7b070bdfc164a22a0ef7190
since we now propagate goal exceptions properly we no longer need to
check topGoals for a reason to abort early. any early abort reasons,
whether by exception or a clean top goal failure, can now be handled
by inspecting the goal result in the main loop. this greatly reduces
goal-to-goal interactions that do not happen at the main loop level.
since the underscore-free name is now available for use as variables
we'll migrate to that where we currently use `_topGoals` for locals.
Change-Id: I5727c5ea7799647c0a69ab76975b1a03a6558aa6
drop childException since it's no longer needed. also makes
waitForInput, childFinished, and childTerminated redundant.
Change-Id: I05d88ffd323c5b5c909ac21056162f69ffb0eb9f
there's no reason to have the worker set information on goals that the
goals themselves return from their entry point. doing this in the goal
`work()` function is much cleaner, and a prerequisite to removing more
implicit strong shared references to goals that are currently running.
Change-Id: Ibb3e953ab8482a6a21ce2ed659d5023a991e7923
this simplifies the worker loop, and lets us remove it entirely later.
note that ideally only one promise waiting for interrupts should exist
in the entire system. not one per event loop, one per *process*. extra
interrupt waiters make interrupt response nondeterministic and as such
aren't great for user experience. if anything wants to react to aborts
caused by explicit interruptions, or anything else, those things would
be much better served using RAII guards such as Finally (or KJ_DEFER).
Change-Id: I41d035ff40172d536e098153c7375b0972110d51
this was a triumph. i'm making a note here: huge success. it's hard to
overstate my satisfaction! i'm not even angry. i'm being so sincere ri
actually, no. we *are* angry. this was one dumbass odyssey. nobody has
asked for this. but not doing it would have locked us into old, broken
protocols forever or (possibly worse) forced us to write our own async
framework building on the old did-you-mean-continuations in Worker. if
we had done that we'd be locked into ever more, and ever more complex,
manual state management all over the place. this just could not stand.
Change-Id: I43a6de1035febff59d2eff83be9ad52af4659871
due to event loop scheduling behavior it's possible for a derivation
goal to fully finish (having seen all paths it was asked to create),
but to not notify the worker of this in time to prevent another goal
asking the recently-finished goal for more outputs. if this happened
the finished goal would ignore the request for more outputs since it
considered itself fully done, and the delayed result reporting would
cause the requesting goal to assume its request had been honored. if
the requested goal had finished *properly* the worker would recreate
it instead of asking for more outputs, and this would succeed. it is
thus safe to always recreate goals once they are done, so we now do.
Change-Id: Ifedd69ca153372c623abe9a9b49cd1523588814f
This moves the "legacy"/"nix2" commands under a new `src/legacy/`
directory, instead of being scattered around in a bunch of different
directories.
A new `liblegacy` build target is defined, and the `nix` binary is
linked against it.
Then, `RegisterLegacyCommand` is replaced with `LegacyCommand::add`
calls in functions like `registerNixCollectGarbage()`. These
registration functions are called explicitly in `src/nix/main.cc`.
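A hypothetical sketch of that registration pattern (the exact
`LegacyCommand::add` signature is an assumption here):
    // in src/legacy/nix-collect-garbage.cc (sketch)
    static int mainNixCollectGarbage(int argc, char * * argv)
    {
        /* ... command implementation ... */
        return 0;
    }

    void registerNixCollectGarbage()
    {
        LegacyCommand::add("nix-collect-garbage", mainNixCollectGarbage);
    }

    // in src/nix/main.cc (sketch): each register function is called explicitly
    //     registerNixCollectGarbage();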
See: #359
Change-Id: Id450ffc3f793374907599cfcc121863b792aac1a
we'll now loop to update displayed statistics, and use this loop to
limit the update rate to 50 times per second. we could have updated
much more frequently before this (once per iteration of `runImpl`),
much faster than would ever be useful in practice. aggressive stats
updates can even impede progress due to terminal or network delays.
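roughly, that cap amounts to something like this (illustrative sketch only):
    #include <chrono>

    // redraw at most every 20 ms, i.e. 50 times per second, no matter how
    // often the underlying counters change
    bool shouldRedraw(std::chrono::steady_clock::time_point & lastDraw)
    {
        auto now = std::chrono::steady_clock::now();
        if (now - lastDraw < std::chrono::milliseconds(20))
            return false;
        lastDraw = now;
        return true;
    }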
Change-Id: Ifba755a2569f73c919b1fbb06a142c0951395d6d
Worker::run() is now entirely based on the kj event loop and promises,
so we need not handle awakeness of goals manually any more. every goal
can instead, once it has finished a partial work call, defer itself to
being called again in the next iteration of the loop. same end effect.
Change-Id: I320eee2fa60bcebaabd74d1323fa96d1402c1d15
notably we will check whether we want to do GC at all only once during
startup, and we'll only attempt GC every ten seconds rather than every
time a goal has finished a partial work call. this shouldn't cause any
problems in practice since relying on auto-gc is not deterministic and
stores in which builds can fill all remaining free space in merely ten
seconds are severely troubled even when garbage collection runs a lot.
Change-Id: I1175a56bf7f4e531f8be90157ad88750ff2ddec4
Revert submission 1946
Reason for revert: regression in building (found via bisection)
Reported by users:
> error: path '/nix/store/04ca5xwvasz6s3jg0k7njz6rzi0d225w-jq-1.7.1-dev' does not exist in the store
Reverted changes: /q/submissionid:1946
Change-Id: I6f1a4b2f7d7ef5ca430e477fc32bca62fd97036b
nothing needs to signal being still active but not actively pollable,
only that immediate polling for the next goal work phase is in order.
Change-Id: Ia43c1015e94ba4f5f6b9cb92943da608c4a01555
this was immensely inefficient on large caches, as can exist when many
derivations are buildable simultaneously. since we have smart pointers
to goals we can do cache maintenance in goal deleters instead, and use
the exact iterators instead of doing a linear search. this *does* rely
on goals being deleted to remove them from the cache, which isn't true
for toplevel goals. those would have previously been removed when done
in all cases, removing the cache entry when keep-going is set. this is
arguably incorrect since it might result in those goals being retried,
although that could only happen with dynamic derivations or the likes.
(luckily dynamic derivations are not complete enough to allow this at all)
Change-Id: I8e750b868393588c33e4829333d370f2c509ce99
makeDerivationGoalCommon had the right idea, but it didn't quite go far
enough. let's do the rest and remove the remaining factory duplication.
Change-Id: I1fe32446bdfb501e81df56226fd962f85720725b
this was a debugging aid from day one that should not have any impact on
build semantics, and if it *does* have an impact on build semantics then
build semantics are seriously broken. keeping the order imposed by these
keys will be impossible once we let a real event loop schedule our jobs.
Change-Id: I5c313324e1f213ab6453d82f41ae5e59de809a5b
without circular references we do not need weak goal pointers except for
caches, which should not prevent goal destructors running. caches though
cannot create circular references even when they keep strong references.
if we removed goals from caches when their work() is fully finished, not
when their destructors are run, we could keep strong pointers in caches.
since we do not gain much from this we keep those pointers weak for now.
Change-Id: I1d4a6850ff5e264443c90eb4531da89f5e97a3a0
have DerivationGoal and its subclasses produce a wrapper promise for
their intermediate results instead, and return this wrapper promise.
Worker already handles promises that do not complete immediately, so
we do not have to duplicate this into an entire result type variant.
Change-Id: Iae8dbf63cfc742afda4d415922a29ac5a3f39348
the new event loop could very occasionally notice that a dependency of
some goal has failed, process the failure, cause the depending goal to
fail accordingly, and in the doing of the latter two steps let further
dependencies that previously have not been reported as failed do their
reporting anyway. in such cases a goal could fail with "1 dependencies
failed", but more than one dependency failure message was shown. we'll
now report the correct number of failed dependency goals in all cases.
Change-Id: I5aa95dcb2db4de4fd5fee8acbf5db833531d81a8
these can be unique rather than shared because shared_ptr has a
converting constructor. preparatory refactor for something else
and not necessary on its own, and the extra allocations we must
do for shared_ptr control blocks aren't usually relevant anyway.
Change-Id: I5391715545240c6ec8e83a031206edafdfc6462f
Since fb38459d6e, each `ref` gets prefixed
with `refs/heads/` unless it starts with `refs/` already. This regressed
two use-cases that worked fine before:
* Specifying a commit hash as `ref`: now, if `ref` looks like a commit
hash it will be directly passed to `git fetch`.
* Specifying a tag without `refs/tags` as prefix: now, the fetcher prepends
`refs/*` to a ref that doesn't start with `refs/` and doesn't look
like a commit hash. That way, both a branch and a tag specified in
`ref` can be fetched.
The order of preference in git is
* file in `refs/` (e.g. `HEAD`)
* file in `refs/tags/`
* file in `refs/heads` (i.e. a branch)
After fetching `refs/*`, ref is resolved the same way as git does.
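A rough sketch of the resulting selection logic (illustrative only, not the
exact fetcher code):
    #include <regex>
    #include <string>

    std::string fetchSpecFor(const std::string & ref)
    {
        static const std::regex commitHash("^[0-9a-fA-F]{40}$");
        if (std::regex_match(ref, commitHash))
            return ref;      // looks like a commit hash: pass it to `git fetch` directly
        if (ref.rfind("refs/", 0) == 0)
            return ref;      // already fully qualified
        return "refs/*";     // fetch everything, then resolve branch vs. tag as described above
    }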
Change-Id: Idd49b97cbdc8c6fdc8faa5a48bef3dec25e4ccc3
Using `configure_file` to copy files has been deprecated since Meson 0.64.0.
The intended replacement is the `fs.copyfile` method.
This removes the following deprecation warning that arises when a minimum
Meson version is specified:
    Project [...] uses feature deprecated since '0.64.0': copy arg in configure_file. Use fs.copyfile instead
Change-Id: I09ffc92e96311ef9ed594343a0a16d51e74b114a
also gets rid of explicit strong references to dependencies of any goal,
and weak references to dependers as well. those are now only held within
promises representing goal completion and are thus independent of the goals'
relations to each other. the weak references to dependers were only needed
for notifications, and that's much better handled entirely by kj itself.
Change-Id: I00d06df9090f8d6336ee4bb0c1313a7052fb016b
now that we have an event loop in the worker we can use it and its
magical execution suspending properties to replace the slot counts
we managed explicitly with semaphores and raii tokens. technically
this would not have needed an event loop base to be doable, but it
is a whole lot easier to wait for a token to be available if there
is a callback mechanism ready for use that doesn't require a whole
damn dedicated abstract method in Goal to work, and specific calls
to that dedicated method strewn all over the worker implementation
Change-Id: I1da7cf386d94e2bbf2dba9b53ff51dbce6a0cff7
with waitForAWhile turned into promises the core functionality of
waitForInput is now merely to let gc run every so often if needed
Change-Id: I68da342bbc1d67653901cf4502dabfa5bc947628
this simplifies waitForInput quite a lot, and at the same time makes
polling less thundering-herd-y. it even fixes early polling wakeups!
Change-Id: I6dfa62ce91729b8880342117d71af5ae33366414
this removes the rather janky did-you-mean-async poll loop we had so
far. sadly kj does not play well with pty file descriptors, so we do
have to add our own async input stream that does not eat pty EIO and
turns it into an exception. that's still a *lot* better than the old
code, and using a real event loop makes everything else easier later.
Change-Id: Idd7e0428c59758602cc530bcad224cd2fed4c15e
When `nix fmt` is called without an argument, Nix appends the "." argument before calling the formatter. The comment in the code is:
> Format the current flake out of the box
This also happens when formatting sub-folders.
This means that the formatter is now unable to distinguish, as an interface, whether the "." argument is coming from the flake or the user's intent to format the current folder. This decision should be up to the formatter.
Treefmt, for example, will automatically look up the project's root and format all the files. This is the desired behaviour. But because the "." argument is passed, it cannot function as expected.
Upstream-PR: https://github.com/nixos/nix/pull/11438
Change-Id: I60fb6b3ed4ec1b24f81b5f0d76c0be98470817ce
like kj::joinPromisesFailFast this allows waiting for the results of
multiple promises at once, but unlike it, results become available without
all input promises having completed (or any of them having failed) first.
Change-Id: I0e4a37e7bd90651d56b33d0bc5afbadc56cde70c
like a normal semaphore, but with awaitable acquire actions. this is
primarily intended as an intermediate concurrency limiting device in
the Worker code, but it may find other uses over time. we do not use
std::counting_semaphore as a base because the counter of that is not
inspectable as will be needed for Worker. we also do not need atomic
operations for cross-thread consistency since we don't have multiple
threads (thanks to kj event loops being confined to a single thread)
Change-Id: Ie2bcb107f3a2c0185138330f7cbba4cec6cbdd95
Without this, verifying TLS certificates would fail on macOS, as well
as any system that doesn't have a certificate file at /etc/ssl/certs/ca-certificates.crt,
which includes e.g. Fedora.
Change-Id: Iaa2e0e9db3747645b5482c82e3e0e4e8f229f5f9
This is better for privacy and to avoid leaking netrc credentials in a
MITM attack, but also the assumption that we check the hash no longer
holds in some cases (in particular for impure derivations).
Partially reverts 5db358d4d7.
(cherry picked from commit c04bc17a5a0fdcb725a11ef6541f94730112e7b6)
(cherry picked from commit f2f47fa725fc87bfb536de171a2ea81f2789c9fb)
(cherry picked from commit 7b39cd631e0d3c3d238015c6f450c59bbc9cbc5b)
Upstream-PR: https://github.com/NixOS/nix/pull/11585
Change-Id: Ia973420f6098113da05a594d48394ce1fe41fbb9
These stack traces kind of suck for the reasons mentioned on the
CppTrace page here (no symbols for inline functions is a major one):
https://github.com/jeremy-rifkin/cpptrace
I would consider using CppTrace if it were packaged, but to be honest, I
think that the more reasonable option is actually to move entirely to
out-of-process crash handling and symbolization.
The reason for this is that if you want to generate anything of
substance on SIGSEGV or really any deadly signal, you are stuck in
async-signal-safe land, which is not a place to be trying to run a
symbolizer. LLVM does it anyway, probably carefully, and chromium *can*
do it on debug builds but in general uses crashpad:
https://source.chromium.org/chromium/chromium/src/+/main:base/debug/stack_trace_posix.cc;l=974;drc=82dff63dbf9db05e9274e11d9128af7b9f51ceaa;bpv=1;bpt=1
However, some stack traces are better than *no* stack traces when we get
mystery exceptions falling out the bottom of the program. I've also
promoted the path for "mystery exceptions falling out the bottom of the
program" to hard crash and generate a core dump because although there's
been some months since the last one of these, these are nonetheless
always *atrociously* diagnosed.
We can't improve the crash handling further until either we use Crashpad
(which involves more C++ deps, no thanks) or we put in the ostensibly
work-in-progress Rust minidump infrastructure, in which case we need to
finish full support for Rust in libutil first.
Sample report:
Lix crashed. This is a bug. We would appreciate if you report it at https://git.lix.systems/lix-project/lix/issues with the following information included:
Exception: std::runtime_error: lol
Stack trace:
0# nix::printStackTrace() in /home/jade/lix/lix3/build/src/nix/../libutil/liblixutil.so
1# 0x000073C9862331F2 in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
2# 0x000073C985F2E21A in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
3# 0x000073C985F2E285 in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
4# nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
5# 0x00005CF65B6B048B in /home/jade/lix/lix3/build/src/nix/nix
6# 0x000073C985C8810E in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
7# __libc_start_main in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
8# 0x00005CF65B610335 in /home/jade/lix/lix3/build/src/nix/nix
Change-Id: I1a9f6d349b617fd7145a37159b78ecb9382cb4e9
This caused an infinite loop before since it would just keep asking the
underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, as well as just patching the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
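A minimal sketch of the client-side guard (names are illustrative, not the
exact `filetransfer.cc` code):
    #include <functional>
    #include <string_view>

    void onHttpData(std::string_view data,
                    const std::function<void(std::string_view)> & decompress)
    {
        if (data.empty())
            return;        // nothing to decompress: never feed the sink an empty chunk
        decompress(data);
    }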
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
The legitimate output of `nix path-info` may visually interfere with the
progress bar, by appending to stale progress output before the latter has been
erased. Conveniently, all expensive operations (evaluation or building) have
already been performed before, so we can simply wipe the progress bar at this
point to fix the issue.
Fixes: #343
Change-Id: Id9a807a5c882295b3e6fbf841f9c15dc96f67f6e
See #496.
The core idea is to be able to do e.g.
nix-instantiate -A some-nonfree-thing --arg config.allowUnfree true
which is currently not possible since `config.allowUnfree` is
interpreted as attribute name with a dot in it.
In order to change that (probably), Jade suggested to find out if there
are any folks out there relying on this behavior.
For such a use-case, it may still be possible to accept strings, i.e.
`--arg '"config.allowUnfree"'`.
Change-Id: I986c73619fbd87a95b55e2f0ac03feaed3de2d2d
* Move the extended attribute deletion after the hardlink sanity check. We
shouldn't be removing extended attributes on random files.
* Make the entity owner-writable before attempting to remove extended
attributes, since this operation usually requires write access on the file,
and we shouldn't fail xattr deletion on a file that has been made unwritable
by the builder or a previous canonicalisation pass.
Fixes: #507
Change-Id: I7e6ccb71649185764cd5210f4a4794ee174afea6
Remove the mutable state stuff that assumes that one file is being
written at a time. It's true that we don't write multiple files
interleaved, but that mutable state is evil.
Change-Id: Ia1481da48255d901e4b09a9b783e7af44fae8cff
When generating shell completions, no logging output should be visible because
it would destroy the shell prompt. Originally this was attempted to be done by
simply disabling the progress bar (ca946860ce),
since the situation is particularly bad there (the screen clearing required for
the rendering ends up erasing the shell prompt). Because the implementation
of this hack was overlooked, it was accidentally undone during a later change
(0dd1d8ca1c).
Since even with the hack correctly in place, it is still possible to mess up
the prompt by logging output (for example warnings for disabled experimental
features, or messages generated by `builtins.trace`), simply send it to the bit
bucket where it belongs. This was already done for bash and zsh
(9d840758a8), and it seems that fish was simply
missed at that time. The last trace of the no-longer-working and obsolete hack
is deleted too.
Fixes: #513
Change-Id: I59f1ebf90903034e2059298fa8d76bf970bc3315
- Rename the listener to not be called a "sink". If it were a "sink" it
would be eating bytes and conform with any of the Nix sink stuff
(maybe FileHandle should be a Sink itself! but that's a later CL's
problem). This is a parser listener.
- Move the RetrieveRegularNARSink thing into store-api.cc, which is its
only usage, and fix it to actually do what it is stated to do: crash
if its invariants are violated.
It's, of course, used to erm, unpack single-file NAR files, generated
via a horrible contraption of sources and sinks that looks like a
plumbing blueprint. Refactoring that is a future task.
- Add a description of the invariants of NARParseVisitor in preparation
of refactoring it.
Change-Id: Ifca1d74d2947204a1f66349772e54dad0743e944
using a proper event loop basis we no longer have to worry about most of
the intricacies of poll(), or platform-dependent replacements for it. we
may even be able to use the event loop and its promise system for all of
our scheduling in the future. we don't do any real async processing yet,
this is just preparation to separate the first such change from the huge
api design difference with the async framework we chose (kj from capnp):
kj::Promise, unlike std::future, doesn't return exceptions unmangled. it
instead wraps any non-kj exception into a kj exception, erasing all type
information and preserving mostly the what() string in the process. this
makes sense in the capnp rpc use case where unrestricted exception types
can't be transferred, and since it moves error handling styles closer to
a world we'd actually like there's no harm in doing it only here for now
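a small demonstration of that wrapping behaviour (sketch; the surviving
description text is roughly, not exactly, the original what()):
    #include <kj/async.h>
    #include <stdexcept>

    void demo(kj::WaitScope & waitScope)
    {
        auto p = kj::evalLater([]() -> int {
            throw std::runtime_error("original type is lost here");
        });

        try {
            (void) p.wait(waitScope);
        } catch (kj::Exception & e) {
            // e.getDescription() still contains the message, but the
            // std::runtime_error type itself is gone
        }
    }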
Change-Id: I20f888de74d525fb2db36ca30ebba4bcfe9cc838
we're using boost::outcome rather than leaf or stl types because stl
types are not available everywhere and leaf does not provide its own
storage for error values, relying on thread-locals and the stack. if
we want to use promises we won't have a stack and would have to wrap
everything into leaf-specific allocating wrappers, so outcome it is.
Change-Id: I35111a1f9ed517e7f12a839e2162b1ba6a993f8f
When the multi-line log format is enabled, the progress bar usually occupies
multiple lines on the screen. When stopping the progress bar, only the last
line was wiped, leaving all others visible on the screen. Erase all lines
belonging to the progress bar to prevent these leftovers.
Asking the user for input is theoretically affected by a similar issue, but
this is not observed in practice since the only place where the user is asked
(whether configuration options coming from flakes should be accepted) does not
actually have multiple lines on the progress bar. However, there is no real
reason to not fix this either, so let's do it anyway.
Change-Id: Iaa5a701874fca32e6f06d85912835d86b8fa7a16
The AcceptFlakeConfig type used was missing its JSON serialisation definition,
so it was incorrectly serialised as an integer, ending up that way for example
in the nix.conf manual page. Declare a proper serialisation.
Change-Id: If8ec210f9d4dd42fe480c4e97d0a4920eb66a01e
The JSON serialisation should be declared in the header so that all translation
units can see it when needed, even though it seems that it has not been used
anywhere else so far. Unfortunately, this means we cannot use the
NLOHMANN_JSON_SERIALIZE_ENUM convenience macro, since it uses a slightly
different signature, but the code is not too bad either.
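A minimal sketch of what such a header declaration looks like (the enum values
shown are an assumption):
    #include <nlohmann/json.hpp>

    enum class AcceptFlakeConfig { False, Ask, True };

    // declared in the header so every translation unit picks these overloads
    // instead of falling back to serialising the enum as an integer
    void to_json(nlohmann::json & json, const AcceptFlakeConfig & config);
    void from_json(const nlohmann::json & json, AcceptFlakeConfig & config);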
Change-Id: I6e2851b250e0b53114d2fecb8011ff1ea9379d0f
This is the repl overlay from my dotfiles, which I think provides a
reasonable and ergonomic set of variables. We can iterate on this over
time, or (perhaps?) provide a sentinel value like `repl-overlays =
<DEFAULT>` to include a "suggested default" overlay like this one.
Change-Id: I8eba3934c50fbac8367111103e66c7375b8d134e
it just makes sense to have it too, rather than just the pass/fail
information we keep so far. once we turn goals into something more
promise-shaped it'll also help detangle the current data flow mess
Change-Id: I915cf04d177cad849ea7a5833215d795326f1946
it doesn't have a purpose except cache priming, which is largely
irrelevant by default (since another code path already runs this
exact query). our store implementations do not benefit that much
from this either, and the more bursty load may indeed harm them.
Change-Id: I1cc12f8c21cede42524317736d5987f1e43fc9c9
updating statistics *immediately* when any counter changes declutters
things somewhat and makes useful status reports less dependent on the
current worker main loop. using callbacks will make it easier to move
the worker loop into kj entirely, using only promises for scheduling.
Change-Id: I695dfa83111b1ec09b1a54cff268f3c1d7743ed6
there's no reason to go through the event loop in these cases. returning
ContinueImmediately here is just a very convoluted way of jumping to the
state we've just set after unwinding one frame of the stack, which never
matters in the cases changed here because there are no live RAII guards.
Change-Id: I7c00948c22e3caf35e934c1a14ffd2d40efc5547
this is not ideal, but it's better than having this stuck in the worker
loop itself. setting ex on all failing goals is not problematic because
only toplevel goals can ever be observable, all the others are ignored.
notably only derivation goals ever set `ex`, substitution goals do not.
Change-Id: I02e2164487b2955df053fef3c8e774d557aa638a
this doesn't serve a great purpose yet except to confine construction of
goals to the stack frame of Worker::run() and its child frames. we don't
need this yet (and the goal constructors remain fully visible), but in a
future change that fully removes the current worker loop we'll need some
way of knowing which goals are top-level goals without passing the goals
themselves around. once that's possible we can remove visible goals as a
concept and rely on build result futures and a scheduler built upon them
Change-Id: Ia73cdeffcfb9ba1ce9d69b702dc0bc637a4c4ce6
whether goal errors are reported via the `ex` member or just printed to
the log depends on whether the goal is a toplevel goal or a dependency.
if goals are aware of this themselves we can move error printing out of
the worker loop, and since a running worker can only be used by running
goals it's totally sufficient to keep a `Worker::running` flag for this
Change-Id: I6b5cbe6eccee1afa5fde80653c4b968554ddd16f
Apparently the fmt contraption has some extremely popular overloads, and
the boost stuff in there gets built approximately infinite times in
every compilation unit.
Change-Id: Ideba2db7d6bf8559e4d91974bab636f5ed106198
Fixes:
- Identifiers starting with _ are prohibited
- Some driveby header dependency cleaning which wound up with doing some
extra fixups.
- Fucking C style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them with references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so we get 100% of the bits
being entropy, then memcpying the start of the hash. If this shows
up in a profile we should just make the hash parser faster with a
lookup table or something sensible like that.
- This horrendous bit of UB which I thankfully slapped a deprecation
warning on, built, and it didn't trigger anywhere so it was dead
code and I just deleted it. But holy crap you *cannot* do that.
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, one wrong
suspicious-sizeof lint triggered by the template being instantiated
with only pointers, but the calculation being correct for both
pointers and not-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
It's nice for this to be a separate function and not just inline in
`absPath`.
Prepared as part of cl/1865, though I don't think I actually ended up
using it there.
Change-Id: I24d9d4a984cee0af587010baf04b3939a1c147ec
this makes WorkResult copyable, and just all around easier to deal with.
in the future we'll need this to let Goal::work() return a promise for a
WorkResult (or even just a Finished) that can be awaited by other goals.
Change-Id: Ic5a1ce04c5a0f8e683bd00a2ed2b77a2e28989c1
this should be done where we're actually trying to build something, not
in the main worker loop that shouldn't have to be aware of such details
Change-Id: I07276740c0e2e5591a8ce4828a4bfc705396527e
This caused an absolute saga which I would not like anyone else to have
to experience. Let's put in a laser targeted error message that
diagnoses this exact problem.
Fixes: #484
Change-Id: I2a79f04aeb4a1b67c10115e5e39501d958836298
I don't know why the AWS sdk disabled it by default. It would be nice
to have test coverage of the s3 store or proxies, but neither currently
exists.
Fixes: #433
Change-Id: If1e76169a3d66dbec2e926af0d0d0eccf983b97b
This avoids C++'s standard library regexes, which aren't the same
across platforms, and have many other issues, like using stack
so much that they stack overflow when processing a lot of data.
To avoid backwards and forward compatibility issues, regexes are
processed using a function converting libstdc++ regexes into Boost
regexes, escaping characters that Boost needs to have escaped, and
rejecting features that Boost has and libstdc++ doesn't.
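As a heavily simplified sketch of the escaping part of that conversion (the
real converter parses the whole expression and handles many more constructs,
e.g. character classes):
    #include <string>
    #include <string_view>

    // escape a `}` that does not close a `{n,m}` quantifier, since
    // boost::regex rejects the bare closing brace that libstdc++ accepts
    std::string escapeBareClosingBraces(std::string_view pattern)
    {
        std::string out;
        int braceDepth = 0;
        for (size_t i = 0; i < pattern.size(); ++i) {
            char c = pattern[i];
            if (c == '\\' && i + 1 < pattern.size()) {  // keep existing escapes untouched
                out += c;
                out += pattern[++i];
            } else if (c == '{') {
                braceDepth++;
                out += c;
            } else if (c == '}' && braceDepth == 0) {
                out += "\\}";                           // bare `}`: escape it for boost
            } else {
                if (c == '}') braceDepth--;
                out += c;
            }
        }
        return out;
    }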
Related context:
- Original attempt to use `boost::regex` in CppNix, which failed due to the
boost icu dependency being large (disabling ICU is no longer necessary
because linking ICU requires using a different header file,
`boost/regex/icu.hpp`): https://github.com/NixOS/nix/pull/3826
- An attempt to use PCRE, rejected due to providing less backwards
compatibility with `std::regex` than `boost::regex`:
https://github.com/NixOS/nix/pull/7336
- Second attempt to use `boost::regex`, failed due to `}` regex failing
to compile (dealt with by writing a wrapper that parses a regular
expression and escapes `}` characters):
https://github.com/NixOS/nix/pull/7762
Closes #34. Closes #476.
Change-Id: Ieb0eb9e270a93e4c7eed412ba4f9f96cb00a5fa4
nixpkgs delivered us the untimely gift of a meson 1.5 upgrade, which
*does* make our lives easier by allowing us to delete wrap generation
code, but it does so at the cost of renaming all rust crates in such a
way that the wrap logic cannot tolerate the new names on the old meson
version 😭.
It also means that support burden for this is going to be atrocious
until we either give in and vendor meson 1.5 or we make a CI target for
it. Neither seems appealing, though the latter is not super absurd for
ensuring we don't break nixpkgs unstable.
This commit causes meson 1.5 to ignore the .wrap files in subprojects/
entirely (since they have the wrong names lol) and instead use
Cargo.lock, so it now hard-depends on our workspace reshuffling
improvement.
It also deletes the hack that we were using to get the sources of Cargo
deps into meson by using a feature that went unnoticed when this code
was originally written: MESON_PACKAGE_CACHE_DIR:
8a202de6ec/mesonbuild/wrap/wrap.py (L490-L502)
Change-Id: I7a28f12fc2812c6ed7537b60bc3025c141a05874
This is purely to let Cargo's dependency resolver do stuff for us, we do
not actually intend to build this stuff with Cargo to begin with.
Change-Id: I4c08d55595c7c27b7096375022581e1e34308a87
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
Commit 0dd1d8ca1c included an accidental revert
of 1461e6cdda (actually slightly worse), leading
to the progress bar not being stopped properly when a legacy command was
invoked with `--log-format bar` (or similar options that show a progress bar).
Move the progress bar stopping code to its proper place again to fix this
regression.
Change-Id: I676333da096d5990b717a387924bb988c9b73fab