there's no reason to have the worker set information on goals that the
goals themselves return from their entry point. doing this in the goal
`work()` function is much cleaner, and a prerequisite to removing more
implicit strong shared references to goals that are currently running.
Change-Id: Ibb3e953ab8482a6a21ce2ed659d5023a991e7923
this simplifies the worker loop, and lets us remove it entirely later.
note that ideally only one promise waiting for interrupts should exist
in the entire system. not one per event loop, one per *process*. extra
interrupt waiters make interrupt response nondeterministic and as such
aren't great for user experience. if anything wants to react to aborts
caused by explicit interruptions, or anything else, those things would
be much better served using RAII guards such as Finally (or KJ_DEFER).
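the RAII shape meant here, as a rough sketch (the helper names are invented, not from this change):
```
#include "finally.hh"   // Lix's scope-exit guard; kj's KJ_DEFER works the same way

void releaseBuildResources() noexcept;   // hypothetical cleanup
void doInterruptibleWork();              // hypothetical work an interrupt may abort

void buildSomething() {
    Finally cleanup([] { releaseBuildResources(); });
    // if an interrupt aborts this by throwing, the guard still runs during
    // unwinding, so nothing needs its own promise waiting on the interrupt
    doInterruptibleWork();
}
```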
Change-Id: I41d035ff40172d536e098153c7375b0972110d51
this was a triumph. i'm making a note here: huge success. it's hard to
overstate my satisfaction! i'm not even angry. i'm being so sincere ri
actually, no. we *are* angry. this was one dumbass odyssey. nobody has
asked for this. but not doing it would have locked us into old, broken
protocols forever or (possibly worse) forced us to write our own async
framework building on the old did-you-mean-continuations in Worker. if
we had done that we'd be locked into ever more, and ever more complex,
manual state management all over the place. this just could not stand.
Change-Id: I43a6de1035febff59d2eff83be9ad52af4659871
due to event loop scheduling behavior it's possible for a derivation
goal to fully finish (having seen all paths it was asked to create),
but to not notify the worker of this in time to prevent another goal
asking the recently-finished goal for more outputs. if this happened
the finished goal would ignore the request for more outputs since it
considered itself fully done, and the delayed result reporting would
cause the requesting goal to assume its request had been honored. if
the requested goal had finished *properly* the worker would recreate
it instead of asking for more outputs, and this would succeed. it is
thus safe to always recreate goals once they are done, so we now do.
Change-Id: Ifedd69ca153372c623abe9a9b49cd1523588814f
This moves the "legacy"/"nix2" commands under a new `src/legacy/`
directory, instead of being scattered around in a bunch of different
directories.
A new `liblegacy` build target is defined, and the `nix` binary is
linked against it.
Then, `RegisterLegacyCommand` is replaced with `LegacyCommand::add`
calls in functions like `registerNixCollectGarbage()`. These
registration functions are called explicitly in `src/nix/main.cc`.
See: #359
Change-Id: Id450ffc3f793374907599cfcc121863b792aac1a
we'll now loop to update displayed statistics, and use this loop to
limit the update rate to 50 times per second. we could have updated
much more frequently before this (once per iteration of `runImpl`),
much faster than would ever be useful in practice. aggressive stats
updates can even impede progress due to terminal or network delays.
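the throttle is just an elapsed-time check, roughly like this (a sketch, names made up):
```
#include <chrono>

void updateStatisticsOutput();   // hypothetical: does the actual redraw

void maybeUpdateStatistics() {
    using namespace std::chrono;
    constexpr auto minInterval = milliseconds(20);        // 1/50th of a second
    static auto lastUpdate = steady_clock::now() - minInterval;
    auto now = steady_clock::now();
    if (now - lastUpdate < minInterval) return;           // too soon since the last redraw
    lastUpdate = now;
    updateStatisticsOutput();
}
```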
Change-Id: Ifba755a2569f73c919b1fbb06a142c0951395d6d
Worker::run() is now entirely based on the kj event loop and promises,
so we need not handle awakeness of goals manually any more. every goal
can instead, once it has finished a partial work call, defer itself to
being called again in the next iteration of the loop. same end effect.
Change-Id: I320eee2fa60bcebaabd74d1323fa96d1402c1d15
notably we will check whether we want to do GC at all only once during
startup, and we'll only attempt GC every ten seconds rather than every
time a goal has finished a partial work call. this shouldn't cause any
problems in practice since relying on auto-gc is not deterministic and
stores in which builds can fill all remaining free space in merely ten
seconds are severely troubled even when garbage collection runs a lot.
Change-Id: I1175a56bf7f4e531f8be90157ad88750ff2ddec4
Revert submission 1946
Reason for revert: regression in building (found via bisection)
Reported by users:
> error: path '/nix/store/04ca5xwvasz6s3jg0k7njz6rzi0d225w-jq-1.7.1-dev' does not exist in the store
Reverted changes: /q/submissionid:1946
Change-Id: I6f1a4b2f7d7ef5ca430e477fc32bca62fd97036b
nothing needs to signal being still active but not actively pollable,
only that immediate polling for the next goal work phase is in order.
Change-Id: Ia43c1015e94ba4f5f6b9cb92943da608c4a01555
this was immensely inefficient on large caches, as can exist when many
derivations are buildable simultaneously. since we have smart pointers
to goals we can do cache maintenance in goal deleters instead, and use
the exact iterators instead of doing a linear search. this *does* rely
on goals being deleted to remove them from the cache, which isn't true
for toplevel goals. those would have previously been removed when done
in all cases, removing the cache entry when keep-going is set. this is
arguably incorrect since it might result in those goals being retried,
although that could only happen with dynamic derivations or the likes.
(luckily dynamic derivations are not complete enough to allow this at all)
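the deleter trick looks roughly like this, with stand-in types rather than the real Goal/Worker classes:
```
#include <map>
#include <memory>
#include <string>

struct Goal { std::string key; };

std::map<std::string, std::weak_ptr<Goal>> goalCache;

// the deleter captures the exact cache slot, so dropping the last strong
// reference erases the entry directly instead of scanning the whole cache
std::shared_ptr<Goal> makeCachedGoal(const std::string & key) {
    auto it = goalCache.emplace(key, std::weak_ptr<Goal>{}).first;
    std::shared_ptr<Goal> goal(new Goal{key}, [it](Goal * g) {
        goalCache.erase(it);
        delete g;
    });
    it->second = goal;
    return goal;
}
```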
Change-Id: I8e750b868393588c33e4829333d370f2c509ce99
makeDerivationGoalCommon had the right idea, but it didn't quite go far
enough. let's do the rest and remove the remaining factory duplication.
Change-Id: I1fe32446bdfb501e81df56226fd962f85720725b
this was a debugging aid from day one that should not have any impact on
build semantics, and if it *does* have an impact on build semantics then
build semantics are seriously broken. keeping the order imposed by these
keys will be impossible once we let a real event loop schedule our jobs.
Change-Id: I5c313324e1f213ab6453d82f41ae5e59de809a5b
without circular references we do not need weak goal pointers except for
caches, which should not prevent goal destructors running. caches though
cannot create circular references even when they keep strong references.
if we removed goals from caches when their work() is fully finished, not
when their destructors are run, we could keep strong pointers in caches.
since we do not gain much from this we keep those pointers weak for now.
Change-Id: I1d4a6850ff5e264443c90eb4531da89f5e97a3a0
have DerivationGoal and its subclasses produce a wrapper promise for
their intermediate results instead, and return this wrapper promise.
Worker already handles promises that do not complete immediately, so
we do not have to duplicate this into an entire result type variant.
Change-Id: Iae8dbf63cfc742afda4d415922a29ac5a3f39348
the new event loop could very occasionally notice that a dependency of
some goal has failed, process the failure, cause the depending goal to
fail accordingly, and in the doing of the latter two steps let further
dependencies that previously have not been reported as failed do their
reporting anyway. in such cases a goal could fail with "1 dependencies
failed", but more than one dependency failure message was shown. we'll
now report the correct number of failed dependency goals in all cases.
Change-Id: I5aa95dcb2db4de4fd5fee8acbf5db833531d81a8
these can be unique rather than shared because shared_ptr has a
converting constructor. preparatory refactor for something else
and not necessary on its own, and the extra allocations we must
do for shared_ptr control blocks aren't usually relevant anyway.
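the converting constructor being relied on, for reference:
```
#include <memory>

struct Output { /* ... */ };

std::unique_ptr<Output> makeOutput() { return std::make_unique<Output>(); }

void caller() {
    // shared_ptr's converting constructor from unique_ptr&&: callers that want
    // shared ownership still get it, the factory itself no longer pays for it
    std::shared_ptr<Output> shared = makeOutput();
    (void) shared;
}
```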
Change-Id: I5391715545240c6ec8e83a031206edafdfc6462f
Since fb38459d6e, each `ref` is prefixed
with `refs/heads` unless it starts with `refs/` already. This regressed
two use-cases that worked fine before:
* Specifying a commit hash as `ref`: now, if `ref` looks like a commit
hash it will be directly passed to `git fetch`.
* Specifying a tag without `refs/tags` as prefix: now, the fetcher prepends
`refs/*` to a ref that doesn't start with `refs/` and doesn't look
like a commit hash. That way, both a branch and a tag specified in
`ref` can be fetched.
The order of preference in git is
* file in `refs/` (e.g. `HEAD`)
* file in `refs/tags/`
* file in `refs/heads` (i.e. a branch)
After fetching `refs/*`, ref is resolved the same way as git does.
Change-Id: Idd49b97cbdc8c6fdc8faa5a48bef3dec25e4ccc3
Using `configure_file` to copy files has been deprecated since Meson 0.64.0.
The intended replacement is the `fs.copyfile` method.
This removes the following deprecation warning that arises when a minimum
Meson version is specified:
```
Project [...] uses feature deprecated since '0.64.0': copy arg in configure_file. Use fs.copyfile instead
```
Change-Id: I09ffc92e96311ef9ed594343a0a16d51e74b114a
also gets rid of explicit strong references to dependencies of any goal,
and weak references to dependers as well. those are now only held within
promises representing goal completion and thus independent of the goal's
relation to each other. the weak references to dependers was only needed
for notifications, and that's much better handled entirely by kj itself.
Change-Id: I00d06df9090f8d6336ee4bb0c1313a7052fb016b
now that we have an event loop in the worker we can use it and its
magical execution suspending properties to replace the slot counts
we managed explicitly with semaphores and raii tokens. technically
this would not have needed an event loop base to be doable, but it
is a whole lot easier to wait for a token to be available if there
is a callback mechanism ready for use that doesn't require a whole
damn dedicated abstract method in Goal to work, and specific calls
to that dedicated method strewn all over the worker implementation
Change-Id: I1da7cf386d94e2bbf2dba9b53ff51dbce6a0cff7
with waitForAWhile turned into promises, the core functionality of
waitForInput is now merely to let gc run every so often if needed
Change-Id: I68da342bbc1d67653901cf4502dabfa5bc947628
this simplifies waitForInput quite a lot, and at the same time makes
polling less thundering-herd-y. it even fixes early polling wakeups!
Change-Id: I6dfa62ce91729b8880342117d71af5ae33366414
this removes the rather janky did-you-mean-async poll loop we had so
far. sadly kj does not play well with pty file descriptors, so we do
have to add our own async input stream that does not eat pty EIO and
turns it into an exception. that's still a *lot* better than the old
code, and using a real event loop makes everything else easier later.
Change-Id: Idd7e0428c59758602cc530bcad224cd2fed4c15e
When `nix fmt` is called without an argument, Nix appends the "." argument before calling the formatter. The comment in the code is:
> Format the current flake out of the box
This also happens when formatting sub-folders.
This means that the formatter is now unable to distinguish, as an interface, whether the "." argument is coming from the flake or the user's intent to format the current folder. This decision should be up to the formatter.
Treefmt, for example, will automatically look up the project's root and format all the files. This is the desired behaviour. But because the "." argument is passed, it cannot function as expected.
Upstream-PR: https://github.com/nixos/nix/pull/11438
Change-Id: I60fb6b3ed4ec1b24f81b5f0d76c0be98470817ce
like kj::joinPromisesFailFast this allows waiting for the results of
multiple promises at once, but unlike it, results can become available
before all input promises have completed (or any of them has failed).
Change-Id: I0e4a37e7bd90651d56b33d0bc5afbadc56cde70c
like a normal semaphore, but with awaitable acquire actions. this is
primarily intended as an intermediate concurrency limiting device in
the Worker code, but it may find other uses over time. we do not use
std::counting_semaphore as a base because the counter of that is not
inspectable as will be needed for Worker. we also do not need atomic
operations for cross-thread consistency since we don't have multiple
threads (thanks to kj event loops being confined to a single thread)
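the resulting interface looks roughly like this (a sketch of the shape, not the checked-in declaration):
```
#include <kj/async.h>

class AsyncSemaphore {
public:
    // RAII token: destroying it hands the slot back to the semaphore
    class Token;

    explicit AsyncSemaphore(unsigned capacity);

    // resolves once a slot becomes free; awaiting callers queue up in order
    kj::Promise<Token> acquire();

    // the counters Worker needs to inspect for status output
    unsigned available() const;
    unsigned used() const;
};
```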
Change-Id: Ie2bcb107f3a2c0185138330f7cbba4cec6cbdd95
Without this, verifying TLS certificates would fail on macOS, as well
as on any system that doesn't have a certificate file at /etc/ssl/certs/ca-certificates.crt,
which includes e.g. Fedora.
Change-Id: Iaa2e0e9db3747645b5482c82e3e0e4e8f229f5f9
This is better for privacy and to avoid leaking netrc credentials in a
MITM attack, but also the assumption that we check the hash no longer
holds in some cases (in particular for impure derivations).
Partially reverts 5db358d4d7.
(cherry picked from commit c04bc17a5a0fdcb725a11ef6541f94730112e7b6)
(cherry picked from commit f2f47fa725fc87bfb536de171a2ea81f2789c9fb)
(cherry picked from commit 7b39cd631e0d3c3d238015c6f450c59bbc9cbc5b)
Upstream-PR: https://github.com/NixOS/nix/pull/11585
Change-Id: Ia973420f6098113da05a594d48394ce1fe41fbb9
These stack traces kind of suck for the reasons mentioned on the
CppTrace page here (no symbols for inline functions is a major one):
https://github.com/jeremy-rifkin/cpptrace
I would consider using CppTrace if it were packaged, but to be honest, I
think that the more reasonable option is actually to move entirely to
out-of-process crash handling and symbolization.
The reason for this is that if you want to generate anything of
substance on SIGSEGV or really any deadly signal, you are stuck in
async-signal-safe land, which is not a place to be trying to run a
symbolizer. LLVM does it anyway, probably carefully, and chromium *can*
do it on debug builds but in general uses crashpad:
https://source.chromium.org/chromium/chromium/src/+/main:base/debug/stack_trace_posix.cc;l=974;drc=82dff63dbf9db05e9274e11d9128af7b9f51ceaa;bpv=1;bpt=1
However, some stack traces are better than *no* stack traces when we get
mystery exceptions falling out the bottom of the program. I've also
promoted the path for "mystery exceptions falling out the bottom of the
program" to hard crash and generate a core dump because although there's
been some months since the last one of these, these are nonetheless
always *atrociously* diagnosed.
We can't improve the crash handling further until either we use Crashpad
(which involves more C++ deps, no thanks) or we put in the ostensibly
work in progress Rust minidump infrastructure, in which case we need to
finish full support for Rust in libutil first.
Sample report:
Lix crashed. This is a bug. We would appreciate if you report it at https://git.lix.systems/lix-project/lix/issues with the following information included:
Exception: std::runtime_error: lol
Stack trace:
0# nix::printStackTrace() in /home/jade/lix/lix3/build/src/nix/../libutil/liblixutil.so
1# 0x000073C9862331F2 in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
2# 0x000073C985F2E21A in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
3# 0x000073C985F2E285 in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
4# nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
5# 0x00005CF65B6B048B in /home/jade/lix/lix3/build/src/nix/nix
6# 0x000073C985C8810E in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
7# __libc_start_main in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
8# 0x00005CF65B610335 in /home/jade/lix/lix3/build/src/nix/nix
Change-Id: I1a9f6d349b617fd7145a37159b78ecb9382cb4e9
This caused an infinite loop before since it would just keep asking the
underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, as well as just patching the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
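The guard boils down to something like this (a hedged sketch, not the exact patch; `decompress` stands in for the real compression helper):
```
#include <string>

std::string decompress(const std::string & method, const std::string & data); // provided elsewhere

std::string decodeBody(const std::string & encoding, const std::string & body) {
    // never hand a decompressor an empty stream: it is entitled to throw
    // CompressionError on truncated input, and before this fix it could loop
    if (encoding.empty() || body.empty())
        return body;
    return decompress(encoding, body);
}
```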
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
The legitimate output of `nix path-info` may visually interfere with the
progress bar, by appending to stale progress output before the latter has been
erased. Conveniently, all expensive operations (evaluation or building) have
already been performed before, so we can simply wipe the progress bar at this
point to fix the issue.
Fixes: #343
Change-Id: Id9a807a5c882295b3e6fbf841f9c15dc96f67f6e
See #496.
The core idea is to be able to do e.g.
    nix-instantiate -A some-nonfree-thing --arg config.allowUnfree true
which is currently not possible since `config.allowUnfree` is
interpreted as attribute name with a dot in it.
In order to change that (probably), Jade suggested to find out if there
are any folks out there relying on this behavior.
For such a use-case, it may still be possible to accept strings, i.e.
`--arg '"config.allowUnfree"'.
Change-Id: I986c73619fbd87a95b55e2f0ac03feaed3de2d2d
* Move the extended attribute deletion after the hardlink sanity check. We
shouldn't be removing extended attributes on random files.
* Make the entity owner-writable before attempting to remove extended
attributes, since this operation usually requires write access on the file,
and we shouldn't fail xattr deletion on a file that has been made unwritable
by the builder or a previous canonicalisation pass.
Fixes: #507
Change-Id: I7e6ccb71649185764cd5210f4a4794ee174afea6
Remove the mutable state stuff that assumes that one file is being
written a time. It's true that we don't write multiple files
interleaved, but that mutable state is evil.
Change-Id: Ia1481da48255d901e4b09a9b783e7af44fae8cff
When generating shell completions, no logging output should be visible because
it would destroy the shell prompt. Originally this was attempted to be done by
simply disabling the progress bar (ca946860ce),
since the situation is particularly bad there (the screen clearing required for
the rendering ends up erasing the shell prompt). Due to overlooking the
implementation of this hack, it was accidentally undone during a later change
(0dd1d8ca1c).
Since even with the hack correctly in place, it is still possible to mess up
the prompt by logging output (for example warnings for disabled experimental
features, or messages generated by `builtins.trace`), simply send it to the bit
bucket where it belongs. This was already done for bash and zsh
(9d840758a8), and it seems that fish was simply
missed at that time. The last trace of the no-longer-working and obsolete hack
is deleted too.
Fixes: #513
Change-Id: I59f1ebf90903034e2059298fa8d76bf970bc3315
- Rename the listener to not be called a "sink". If it were a "sink" it
would be eating bytes and conform with any of the Nix sink stuff
(maybe FileHandle should be a Sink itself! but that's a later CL's
problem). This is a parser listener.
- Move the RetrieveRegularNARSink thing into store-api.cc, which is its
only usage, and fix it to actually do what it is stated to do: crash
if its invariants are violated.
It's, of course, used to erm, unpack single-file NAR files, generated
via a horrible contraption of sources and sinks that looks like a
plumbing blueprint. Refactoring that is a future task.
- Add a description of the invariants of NARParseVisitor in preparation
of refactoring it.
Change-Id: Ifca1d74d2947204a1f66349772e54dad0743e944
using a proper event loop basis we no longer have to worry about most of
the intricacies of poll(), or platform-dependent replacements for it. we
may even be able to use the event loop and its promise system for all of
our scheduling in the future. we don't do any real async processing yet,
this is just preparation to separate the first such change from the huge
api design difference with the async framework we chose (kj from capnp):
kj::Promise, unlike std::future, doesn't return exceptions unmangled. it
instead wraps any non-kj exception into a kj exception, erasing all type
information and preserving mostly the what() string in the process. this
makes sense in the capnp rpc use case where unrestricted exception types
can't be transferred, and since it moves error handling styles closer to
a world we'd actually like there's no harm in doing it only here for now
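concretely, the behaviour that motivates this (illustrative sketch; it needs a running kj event loop to actually execute):
```
#include <kj/async.h>
#include <stdexcept>

kj::Promise<void> example() {
    return kj::evalLater([]() -> void {
        throw std::runtime_error("build failed");
    }).catch_([](kj::Exception && e) {
        // the handler sees a kj::Exception whose description is roughly
        // "build failed"; that it was a std::runtime_error is lost
    });
}
```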
Change-Id: I20f888de74d525fb2db36ca30ebba4bcfe9cc838
we're using boost::outcome rather than leaf or stl types because stl
types are not available everywhere and leaf does not provide its own
storage for error values, relying on thread-locals and the stack. if
we want to use promises we won't have a stack and would have to wrap
everything into leaf-specific allocating wrappers, so outcome it is.
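for reference, the usage shape that makes outcome a good fit (essentially the library's own introductory example):
```
#include <boost/outcome.hpp>
#include <string>
#include <system_error>

namespace outcome = BOOST_OUTCOME_V2_NAMESPACE;

// the error travels inside the returned value: no stack unwinding, no
// thread-local error slot, so it can sit inside a promise without issue
outcome::result<int> convert(const std::string & str) noexcept {
    try {
        return std::stoi(str);
    } catch (const std::invalid_argument &) {
        return std::errc::invalid_argument;
    } catch (const std::out_of_range &) {
        return std::errc::result_out_of_range;
    }
}
```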
Change-Id: I35111a1f9ed517e7f12a839e2162b1ba6a993f8f
When the multi-line log format is enabled, the progress bar usually occupies
multiple lines on the screen. When stopping the progress bar, only the last
line was wiped, leaving all others visible on the screen. Erase all lines
belonging to the progress bar to prevent these leftovers.
Asking the user for input is theoretically affected by a similar issue, but
this is not observed in practice since the only place where the user is asked
(whether configuration options coming from flakes should be accepted) does not
actually have multiple lines on the progress bar. However, there is no real
reason to not fix this either, so let's do it anyway.
Change-Id: Iaa5a701874fca32e6f06d85912835d86b8fa7a16
The AcceptFlakeConfig type used was missing its JSON serialisation definition,
so it was incorrectly serialised as an integer, ending up that way for example
in the nix.conf manual page. Declare a proper serialisation.
Change-Id: If8ec210f9d4dd42fe480c4e97d0a4920eb66a01e
The JSON serialisation should be declared in the header so that all translation
units can see it when needed, even though it seems that it has not been used
anywhere else so far. Unfortunately, this means we cannot use the
NLOHMANN_JSON_SERIALIZE_ENUM convenience macro, since it uses a slightly
different signature, but the code is not too bad either.
Change-Id: I6e2851b250e0b53114d2fecb8011ff1ea9379d0f
This is the repl overlay from my dotfiles, which I think provides a
reasonable and ergonomic set of variables. We can iterate on this over
time, or (perhaps?) provide a sentinel value like `repl-overlays =
<DEFAULT>` to include a "suggested default" overlay like this one.
Change-Id: I8eba3934c50fbac8367111103e66c7375b8d134e
it just makes sense to have it too, rather than just the pass/fail
information we keep so far. once we turn goals into something more
promise-shaped it'll also help detangle the current data flow mess
Change-Id: I915cf04d177cad849ea7a5833215d795326f1946
it doesn't have a purpose except cache priming, which is largely
irrelevant by default (since another code path already runs this
exact query). our store implementations do not benefit that much
from this either, and the more bursty load may indeed harm them.
Change-Id: I1cc12f8c21cede42524317736d5987f1e43fc9c9
updating statistics *immediately* when any counter changes declutters
things somewhat and makes useful status reports less dependent on the
current worker main loop. using callbacks will make it easier to move
the worker loop into kj entirely, using only promises for scheduling.
Change-Id: I695dfa83111b1ec09b1a54cff268f3c1d7743ed6
there's no reason to go through the event loop in these cases. returning
ContinueImmediately here is just a very convoluted way of jumping to the
state we've just set after unwinding one frame of the stack, which never
matters in the cases changed here because there are no live RAII guards.
Change-Id: I7c00948c22e3caf35e934c1a14ffd2d40efc5547
this is not ideal, but it's better than having this stuck in the worker
loop itself. setting ex on all failing goals is not problematic because
only toplevel goals can ever be observable; all the others are ignored.
notably only derivation goals ever set `ex`, substitution goals do not.
Change-Id: I02e2164487b2955df053fef3c8e774d557aa638a
this doesn't serve a great purpose yet except to confine construction of
goals to the stack frame of Worker::run() and its child frames. we don't
need this yet (and the goal constructors remain fully visible), but in a
future change that fully removes the current worker loop we'll need some
way of knowing which goals are top-level goals without passing the goals
themselves around. once that's possible we can remove visible goals as a
concept and rely on build result futures and a scheduler built upon them
Change-Id: Ia73cdeffcfb9ba1ce9d69b702dc0bc637a4c4ce6
whether goal errors are reported via the `ex` member or just printed to
the log depends on whether the goal is a toplevel goal or a dependency.
if goals are aware of this themselves we can move error printing out of
the worker loop, and since a running worker can only be used by running
goals it's totally sufficient to keep a `Worker::running` flag for this
Change-Id: I6b5cbe6eccee1afa5fde80653c4b968554ddd16f
Apparently the fmt contraption has some extremely popular overloads, and
the boost stuff in there gets built approximately infinite times in
every compilation unit.
Change-Id: Ideba2db7d6bf8559e4d91974bab636f5ed106198
Fixes:
- Identifiers starting with _ are prohibited
- Some driveby header dependency cleaning which wound up with doing some
extra fixups.
- Fucking C style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them with references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so we get 100% of the bits
being entropy, then memcpying the start of the hash. If this shows
up in a profile we should just make the hash parser faster with a
lookup table or something sensible like that.
- This horrendous bit of UB which I thankfully slapped a deprecation
warning on, built, and it didn't trigger anywhere so it was dead
code and I just deleted it. But holy crap you *cannot* do that.
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, one wrong
suspicious-sizeof lint triggered by the template being instantiated
with only pointers, but the calculation being correct for both
pointers and not-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
It's nice for this to be a separate function and not just inline in
`absPath`.
Prepared as part of cl/1865, though I don't think I actually ended up
using it there.
Change-Id: I24d9d4a984cee0af587010baf04b3939a1c147ec
this makes WorkResult copyable, and just all around easier to deal with.
in the future we'll need this to let Goal::work() return a promise for a
WorkResult (or even just a Finished) that can be awaited by other goals.
Change-Id: Ic5a1ce04c5a0f8e683bd00a2ed2b77a2e28989c1
this should be done where we're actually trying to build something, not
in the main worker loop that shouldn't have to be aware of such details
Change-Id: I07276740c0e2e5591a8ce4828a4bfc705396527e
This caused an absolute saga which I would not like anyone else to have
to experience. Let's put in a laser targeted error message that
diagnoses this exact problem.
Fixes: #484
Change-Id: I2a79f04aeb4a1b67c10115e5e39501d958836298
I don't know why the AWS sdk disabled it by default. It would be nice
to have test coverage of the s3 store or proxies, but neither currently
exist.
Fixes: #433
Change-Id: If1e76169a3d66dbec2e926af0d0d0eccf983b97b
This avoids C++'s standard library regexes, which aren't the same
across platforms, and have many other issues, like using stack
so much that they stack overflow when processing a lot of data.
To avoid backwards and forward compatibility issues, regexes are
processed using a function converting libstdc++ regexes into Boost
regexes, escaping characters that Boost needs to have escaped, and
rejecting features that Boost has and libstdc++ doesn't.
Related context:
- Original failed attempt to use `boost::regex` in CppNix, failed due to
boost icu dependency being large (disabling ICU is no longer necessary
because linking ICU requires using a different header file,
`boost/regex/icu.hpp`): https://github.com/NixOS/nix/pull/3826
- An attempt to use PCRE, rejected due to providing less backwards
compatibility with `std::regex` than `boost::regex`:
https://github.com/NixOS/nix/pull/7336
- Second attempt to use `boost::regex`, failed due to `}` regex failing
to compile (dealt with by writing a wrapper that parses a regular
expression and escapes `}` characters):
https://github.com/NixOS/nix/pull/7762
Closes #34. Closes #476.
Change-Id: Ieb0eb9e270a93e4c7eed412ba4f9f96cb00a5fa4
nixpkgs delivered us the untimely gift of a meson 1.5 upgrade, which
*does* make our lives easier by allowing us to delete wrap generation
code, but it does so at the cost of renaming all rust crates in such a
way that the wrap logic cannot tolerate the new names on the old meson
version 😭.
It also means that support burden for this is going to be atrocious
until we either give in and vendor meson 1.5 or we make a CI target for
it. Neither seems appealing, though the latter is not super absurd for
ensuring we don't break nixpkgs unstable.
This commit causes meson 1.5 to ignore the .wrap files in subprojects/
entirely (since they have the wrong names lol) and instead use
Cargo.lock, so it now hard-depends on our workspace reshuffling
improvement.
It also deletes the hack that we were using to get the sources of Cargo
deps into meson by using a feature that went unnoticed when this code
was originally written: MESON_PACKAGE_CACHE_DIR:
8a202de6ec/mesonbuild/wrap/wrap.py (L490-L502)
Change-Id: I7a28f12fc2812c6ed7537b60bc3025c141a05874
This is purely to let Cargo's dependency resolver do stuff for us; we do
not actually intend to build this stuff with Cargo to begin with.
Change-Id: I4c08d55595c7c27b7096375022581e1e34308a87
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
Commit 0dd1d8ca1c included an accidental revert
of 1461e6cdda (actually slightly worse), leading
to the progress bar not being stopped properly when a legacy command was
invoked with `--log-format bar` (or similar options that show a progress bar).
Move the progress bar stopping code to its proper place again to fix this
regression.
Change-Id: I676333da096d5990b717a387924bb988c9b73fab
lix-doc is now built with Meson, with lix-doc's dependencies built as
Meson subprojects, either fetched on demand with .wrap files, or fetched
in advance by Nix with importCargoLock. It even builds statically.
Fixes #256.
Co-authored-by: Lunaphied <lunaphied@lunaphied.me>
Co-authored-by: Jade Lovelace <lix@jade.fyi>
Change-Id: I3a4731ff13278e7117e0316bc0d7169e85f5eb0c
Closes #460
I managed to trigger the issue by having the following inputs (shortened):
    authentik-nix.url = "github:nix-community/authentik-nix";
    authentik-nix.inputs.poetry2nix.inputs.nixpkgs.follows = "nixpkgs";
When evaluating this using
    nix-eval-jobs --flake .#hydraJobs
I got the following error:
    error: cannot update unlocked flake input 'authentik-nix/poetry2nix' in pure mode
The issue we have here is that `authentik-nix/poetry2nix` was written
into the `overrideMap` which caused Nix to assume it's a new input and
tried to refetch it (#460) or errored out in pure mode
(nix-eval-jobs / Hydra).
The testcase unfortunately only involves checking the output log
and making sure that something *is* logged on the first fetch (so that
the test doesn't rot when the logging changes), since I didn't
manage to trigger the error above with the reproducer from #460. In
fact, I only managed to trigger the `cannot update unlocked flake input`
error in this context with `nix-eval-jobs`.
Change-Id: Ifd00091eec9a0067ed4bb3e5765a15d027328807
this can be a proper WorkResult now. childTerminated is unfortunately a
lot more stubborn and won't be made private for quite a while yet. once
we can get rid of the Worker poll loop that *should* be possible though
Change-Id: I2218df202da5cb84e852f6a37e4c20367495b617
we'll need this once we want to pass extra information out of accepting
replies, such as fd sets or possibly even async output reader promises.
Change-Id: I5e2f18cdb80b0d2faf3067703cc18bd263329b3f
don't keep fds open we're not using. currently this does not cause any
problems, but it does increase the size of our fd table needlessly and
in the future, when we have proper async processing, having builderOut
open in the daemon once the hook has been fully started is problematic
Change-Id: I6e7fb773b280b042873103638d3e04272ca1e4fc
this is useless to do on the face of it, but it'll make it easier to
convert the entire output handling to use async io and promises soon
Change-Id: I2d1eb62c4bbf8f57bd558b9599c08710a389b1a8
only DerivationGoal can set the hook to anything at all. it always sets
buildOutFD to something that is not related to fromHook in any way, and
mixing the two would have rather dire consequences for log consistency.
Change-Id: Ida86727fd1cd5e1ecd78f07f3bde330a346658a8
all derivation goals need a log fd of some description. let's save this
single fd in a dedicated pointer field for all subclasses so that later
we have just the one spot to change if we turn this into async promises
Change-Id: If223adf90909247363fb823d751cae34d25d0c0b
we don't need to expose information about how busy a Worker is if the
worker can instead tell its work items whether they are in a slot. in
the future we might use this to not start items waiting for a slot if
no slots are currently available, but that requires more preparation.
Change-Id: Ibe01ac536da7e6d6f80520164117c43e772f9bd9
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
Seems a little bit Rich that musl does not implement close_range because
they suspect that the system call itself is a bad idea, so they uhhhh
are considering not implementing a wrapper. Let's just fix the problem
at hand by writing our own wrapper.
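The wrapper amounts to calling the syscall directly; roughly (a sketch, the name here is invented):
```
#include <sys/syscall.h>
#include <unistd.h>
#include <cerrno>

// Linux-only sketch: bypass the libc and invoke the system call directly,
// falling back to ENOSYS where the headers don't even know about it
static int closeRangeCompat(unsigned int first, unsigned int last, unsigned int flags) {
#ifdef SYS_close_range
    return syscall(SYS_close_range, first, last, flags);
#else
    errno = ENOSYS;
    return -1;
#endif
}
```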
Change-Id: I1f8e5858e4561d58a5450503d9c4585aded2b216
Turns out strings do not like being resized to -4.
This was discovered while messing with the tests to remove unbuffer and
trying stdbuf instead. Turns out that was not the right approach.
This basically rewrites the handling of this case to be much more
correct, and fixes a bug where, with small window sizes, it would
ALSO truncate the attr names in addition to the optional descriptions.
Change-Id: Ifd1beeaffdb47cbb5f4a462b183fcb6c0ff6c524
This will stop printing stuff to dumb terminals that they don't support.
I've overall audited usage of isatty and replaced the ones with intent
to mean "is a Real terminal" with checking for that. I've also caught a
case of carelessly assuming "is a tty" means "should be colour" in
nix-env.
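The distinction boils down to roughly this check (a sketch of the intent, not the exact helper in the tree):
```
#include <unistd.h>
#include <cstdlib>
#include <cstring>

// "is a Real terminal": an attached tty whose TERM is set to something other
// than "dumb"; plain isatty() alone says nothing about capabilities
static bool isRealTerminal(int fd) {
    if (!isatty(fd)) return false;
    const char * term = std::getenv("TERM");
    return term != nullptr && *term != '\0' && std::strcmp(term, "dumb") != 0;
}
```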
Change-Id: I6d83725d9a2d932ac94ff2294f92c0a1100d23c9
this is only used to close non-stdio files in derivation sandboxes. we
may as well encode that in its name, drop the unnecessary integer set,
and use close_range to deal with the actual closing of files. not only
is this clearer, it also makes sandbox setup on linux faster by 1ms each
Change-Id: Id90e259a49c7bc896189e76bfbbf6ef2c0bcd3b2
implementing a build hook is pretty much impossible without either being
a nix, or blindly forwarding the important bits of all build requests to
some kind of nix. we've found no uses of build-hook in the wild, and the
build-hook protocol (apart from being entirely undocumented) is not able
to convey any kind of versioning information between hook and daemon. if
we want to upgrade this infrastructure (which we do), this must not stay
Change-Id: I1ec4976a35adf8105b8ca9240b7984f8b91e147e
this is a bit of a hack, but it's apparently the cleanest way of doing
this in the absence of any kind of priority/provenance information for
values of some given setting. we'll need this to deprecate build-hook.
Change-Id: I03644a9c3f17681c052ecdc610b4f1301266ab9e
sure, linux has been providing argv[0] by default for a while now. other
OSes may not be as forthcoming though, and relying on the OS to create a
world in which we can just make assumptions we could test for instead is
unnecessarily lazy. we *could* default argv0, but that's a little silly.
notably we abort instead of returning normally to avoid confusions where
a caller interprets our exit status like a Worker build results bitmask.
Change-Id: Id73f8cd0a630293b789c59a8c4b0c4a2b936b505
* changes:
sqlite: add a Use::fromStrNullable
util: implement charptr_cast
tree-wide: fix a pile of lints
refactor: make HashType and Base enum classes for type safety
build: integrate clang-tidy into CI
There were several usages of the raw sqlite primitives along with C
style casts, seemingly because nobody thought to use an optional for
getting a string or NULL.
Let's fix this API given we already *have* a wrapper.
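The idea, sketched directly against the raw sqlite3 API (the real wrapper lives on Lix's own statement type):
```
#include <sqlite3.h>
#include <optional>
#include <string>

// a NULL column becomes std::nullopt instead of whatever a C-style cast of a
// null pointer happens to produce
std::optional<std::string> columnStrNullable(sqlite3_stmt * stmt, int col) {
    const unsigned char * text = sqlite3_column_text(stmt, col);
    if (text == nullptr) return std::nullopt;
    return std::string(reinterpret_cast<const char *>(text));
}
```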
Change-Id: I526cceedc2e356209d8fb62e11b3572282c314e8
I don't like having so many reinterpret_cast statements that have to
actually be looked at to determine if they are UB. A huge number of the
reinterpret_cast instances in Lix are actually casting to some pointer
of some character type, which is always valid no matter the source type.
However, it is also worth looking at if it is not casting both *from* a
character type and also *to* a character type, since IMO splatting a
struct into a character array should be a very deliberate action instead
of just being about dealing with bad APIs.
So let's write a template that encapsulates this invariant so we can
not worry about the trivially safe reinterpret_cast invocations.
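The template is roughly this shape (a sketch; the checked-in version may differ in detail):
```
#include <cstddef>
#include <type_traits>

// encapsulate the always-valid case: the reinterpret_cast is only allowed when
// the destination is a pointer to a character(-like) type
template<typename To, typename From>
To charptr_cast(From * from) {
    static_assert(std::is_pointer_v<To>, "charptr_cast must cast to a pointer type");
    using ToPointee = std::remove_cv_t<std::remove_pointer_t<To>>;
    static_assert(
        std::is_same_v<ToPointee, char> || std::is_same_v<ToPointee, unsigned char>
            || std::is_same_v<ToPointee, signed char> || std::is_same_v<ToPointee, std::byte>,
        "charptr_cast may only cast to a character pointer type");
    return reinterpret_cast<To>(from);
}
```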
Change-Id: Ia4e2f1fa0c567123a96604ddadb3bdd7449660a4
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it began)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function. Maybe
in the future we should ban the function from not being noexcept, but
that is not today.
- Makes various locally-used exceptions inherit from std::exception.
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
This still has utterly unacceptably bad output format design that I
would not inflict on anyone I like, but it *does* now exist, and you
*can* find the errors in the log.
Future work would obviously be to fix that and integrate the actual
errors into Gerrit using codechecker or so.
Followup issue: #457
Fixes: #147
Change-Id: Ifca22e443d357762125f4ad6bc4f568af3a26c62
The |> operator is a reverse function operator with low binding strength
to replace lib.pipe. Implements RFC 148, see the RFC text for more
details. Closes#438.
Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass it around in the parser::State.
This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.
Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
This adds a second form to the `:log` command: it now can accept a
derivation path in addition to a derivation expression. As derivation
store paths start with `/nix/store`, this is not ambiguous.
Resolves: #51
Change-Id: Iebc7b011537e7012fae8faed4024ea1b8fdc81c3
This has been causing various seemingly spurious CI failures as well as
some failures on people running tests on beta builds.
lix> ++(nix-collect-garbage-dry-run.sh:20) nix-store --gc --print-dead
lix> ++(nix-collect-garbage-dry-run.sh:20) wc -l
lix> finding garbage collector roots...
lix> error: Listing pid 87261 file descriptors: Undefined error: 0
There is no real way to write a proper test for this, other than to
start a process like the following:
    #include <unistd.h>

    int main(void) {
        for (int i = 0; i < 1000; ++i) {
            close(i);
        }
        sleep(10000);
    }
and then let Lix's gc look at it.
I have a relatively high confidence this *will* fix the problem since I
have manually confirmed the behaviour of the libproc call is
as-unexpected, and it would perfectly explain the observed symptom.
Fixes: #446
Change-Id: I67669b98377af17895644b3bafdf42fc33abd076
* changes:
tree-wide: fix various lint warnings
flake & doxygen: update tagline
nix flake metadata: print modified dates for input flakes
cli: eat terminal codes from stdout also
Implement forcing CLI colour on, and document it better
manual: fix a syntax error in redirects.js that made it not do anything
misc docs/meson tidying
build: implement clang-tidy using our plugin
The growth of the seccomp filter in 127ee1a101
made its compilation time significant (roughly 10 milliseconds have been
measured on one machine). For this reason, it is now precompiled and cached in
the parent process so that this overhead is not hit for every single build. It
is still not optimal when going through the daemon, because compilation still
happens once per client, but it's better than before and doing it only once for
the entire daemon requires excessive crimes with the current architecture.
Fixes: #461
Change-Id: I2277eaaf6bab9bd74bbbfd9861e52392a54b61a3
This is a preparation for precompiling the filter, which is done separately.
The behaviour should be unchanged for now.
Change-Id: I899aa7242962615949208597aca88913feba1cb8
The seccomp setup code was a huge chunk of conditionally compiled
platform-specific code. For this reason, it is appropriate to move it to the
platform-specific implementation file. Ideally its setup could be moved a bit
to make it happen at the same place as the Darwin restrictions, but that change
is going to be less mechanical.
Change-Id: I496aa3c4fabf34656aba1e32b0089044ab5b99f8
This was always in the lock file and we can simply actually print it.
The test for this is a little bit silly but it should correctly
control for my daring to exercise timezone code *and* locale code in a
test, which I strongly suspect nobody dared do before.
Sample (abridged):
```
Path: /nix/store/gaxb42z68bcr8lch467shvmnhjjzgd8b-source
Last modified: 1970-01-01 00:16:40
Inputs:
├───flake-compat: github:edolstra/flake-compat/0f9255e01c2351cc7d116c072cb317785dd33b33
│ Last modified: 2023-10-04 13:37:54
├───flake-utils: github:numtide/flake-utils/b1d9ab70662946ef0850d488da1c9019f3a9752a
│ Last modified: 2024-03-11 08:33:50
│ └───systems: github:nix-systems/default/da67096a3b9bf56a91d16901293e51ba5b49a27e
│ Last modified: 2023-04-09 08:27:08
```
Change-Id: I355f82cb4b633974295375ebad646fb6e2107f9b
This *should* be sound, plus or minus the amount that the terminal code
eating code is messed up already.
This is useful for testing CLI output because it will strip the escapes
enough to just shove the expected output in a file.
Change-Id: I8a9b58fafb918466ac76e9ab585fc32fb9294819
The docs page has an incorrect escape that leads to a backslash
appearing in output. Meson stuff is self-explanatory, just shortens and
simplifies a bit.
Change-Id: Ib63adf934efd3caeb82ca82988f230e8858a79f9
Expose an option for disabling the BDW-GC build dependency entirely. Fix the
place where one of its headers was included (unnecessarily) without proper
guarding. Finally, use this machinery to exclude BDW-GC from the ASAN builds
entirely (its usage has already been disabled due to compatibility issues
anyway), to ensure this configuration is not regressed again.
Change-Id: I2ebe8094abf67e7d1e99eed971de3e99d071c10b
this begins a long and arduous journey to remove all result state from
Goal, to eventually drop the std::enable_shared_from_this base, and to
completely eliminate all unsynchronized modification of states of both
Goal and Worker. by the end of this we will hopefully be able to start
and reap multiple derivation builds in parallel, which should speed up
the process quite a bit (at least for short local builds, others might
not notice a large difference. the build hooks will remain a problem.)
Change-Id: I57dcd9b2cab4636ed4aa24cdec67124fef883345
In the SSH code, the logger was conditionally paused, but unconditionally
resumed. This was fine as long as resuming the logger was idempotent. Starting
with 0dd1d8ca1c, it isn't any more, and the
behaviour of the code in question was missed. Consequently, an assertion
failure is triggered for example when performing builds against an "SSH" store
on localhost. Fix the issue by only resuming the logger when it has actually
been paused.
Fixes: #458
Change-Id: Ib1e4d047744a129f15730b7216f9c9368c2f4211
we still mutate goal state to store the results of any given goal run,
but now we also have that information in Worker and could in theory do
something else with it. we could return a map of goal to goal results,
which would also let us better diagnose failures of subgoals (at all).
Change-Id: I1df956bbd9fa8cc9485fb6df32918d68dda3ff48
this is the first step towards removing all result-related mutation of
Goal state from goal implementations themselves, and into Worker state
instead. once that is done we can treat all non-const Goal fields like
private state of the goal itself, and make threading of goals possible
Change-Id: I69ff7d02a6fd91a65887c6640bfc4f5fb785b45c
once goals run on multiple threads these fields must be synchronized as
one, or we try to run build hooks too often (or worse, not often enough)
Change-Id: I47860e46fe5c6db41755b2a3a1d9dbb5701c4ca4
this will usually be used either directly (which is always fine) or in
Finally blocks (where it must never throw exceptions). make sure that,
exceptions being handled or not, a wait() call from a Finally doesn't
cause crashes due to the Finally no-nested-exceptions-thrown assertion
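the calling pattern being protected, sketched with made-up names:
```
#include "finally.hh"   // Lix's scope-exit guard

void startChild();                 // hypothetical
void waitForChild() noexcept;      // the wait() this change makes safe to call here
void doWorkThatMayThrow();         // hypothetical

void runBuild() {
    startChild();
    // the guard may run while another exception is already propagating, so the
    // wait inside it must swallow its own errors rather than throw a second one
    Finally reapChild([] { waitForChild(); });
    doWorkThatMayThrow();
}
```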
Change-Id: Ib83a5d9483b1fe83b9a957dcefeefce5d088f06d
The original attempt at this introduced a regression; this commit
reverts the revert and fixes the regression.
This reverts commit 3e151d4d77.
Fix to the regression:
flakeref: fix handling of `?dir=` param for flakes in subdirs
As reported in #419[1], accessing a flake in a subdir of a Git
repository fails with the previous commit[2] applied with the error
    error: unsupported Git input attribute 'dir'
The problem is that the `dir`-param is inserted into the parsed URL if a
flake is fetched from the subdir of a Git repository. However, for the
fetching part this isn't even needed. The fix is to just pass `subdir`
as second argument to `FlakeRef` (which needs a `basedir` that can be
empty) and leave the parsedURL as-is.
Added a regression test to make sure we don't run into this again.
[1] #419
[2] e22172aaf6b6a366cecd3c025590e68fa2b91bcc,
originally 3e151d4d77
Change-Id: I2c72d5a32e406a7ca308e271730bd0af01c5d18b
* changes:
remove unused headers in installable-attr-path
libexpr: include the type of the non-derivation value in the type error
libexpr: mild cleanup to getDerivations
libexpr: DrvInfo: remove unused bad-citizen constructor
cleanup and slightly refactor DrvInfo::queryOutputs