this will let us turn copyNAR into a generator as well, which in turn is
necessary to turn the users of copyNAR into generators without resorting
to sinkToSource coroutines. currently this uses the SerializingTransform
in all cases, even for copyNAR where it is not necessary. should this be
a performance problem we can easily swap out the transform for one which
does not produce any bytes of its own, but that should not be necessary.
Change-Id: I7e685879318fcbb78d8b88abfddd7752360eb0ce
the sole remaining user of this function can use makeDecompressionSource
instead, while making the sinkToSource in the caller unnecessary as well
Change-Id: I4258227b5dbbb735a75b477d8a57007bfca305e9
the rewriting sink was just broken. when given a rewrite set that
contained a key that is also a proper infix of another key it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this would duplicate rewriteStrings;
to avoid that we'll rewrite rewriteStrings to use RewritingSource
in a new mode that'll allow rewrites we had previously forbidden.
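a concrete illustration of the failure mode (rewrite keys and values made
up for the example):

```cpp
#include <map>
#include <string>

// hypothetical rewrite set where one key ("foo") is a proper infix of another
// ("foobar"); nothing like this occurs with real store paths
static const std::map<std::string, std::string> rewrites{
    {"foo", "oof"},
    {"foobar", "raboof"},
};

// given the input "foobar" in a single block, the longer key matches and the
// output is "raboof". if the writer happens to split the same input into the
// blocks "foo" + "bar", a naive block-wise rewriter only ever sees "foo",
// rewrites it to "oof", and emits "oofbar" -- the result depends on block size.
```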
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0
size tracking can be done with a LengthSink and a tee. match tracking
was defeated by never having done any match tracking; all users would
see the same (empty) set of matches at all times. match tracking with
byte offsets alone would not be sufficient in the general case; it
could only have worked because computeHashModulo uses a single rewrite.
Change-Id: Idb214b5222e0ea24f450f5505712a342b63d7570
this much more closely mimics what is actually happening: we're reading
data from somewhere else, actively, rather than passively waiting. with
the data flow matching the underlying system interactions better we can
remove a few sinkToSource calls that merely exist to undo the mismatch
caused by not treating subprocess output as a data source to begin with
Change-Id: If4abfc2f8398fb5e88c9b91a8bdefd5504bb2d11
this will let us also return a source for the program output later,
which will in turn make sinkToSource unnecessary for program output
processing. this may also reopen a path for providing program input,
but that still needs a proper async io framework to avoid problems.
Change-Id: Iaf93f47db99c38cfaf134bd60ed6a804d7ddf688
generators are a better basis for serializers than streaming into sinks
as we do currently for many reasons, such as being usable as sources if
one wishes to (without requiring an intermediate sink to serialize full
data sets into memory, or boost coroutines to turn sinks into sources),
composing more naturally (as one can just yield a sub-generator instead
of being forced to wrap entire substreams into clunky functions or even
more clunky custom types to implement operator<< on), allowing wrappers
to transform data with clear ownership semantics (removing the need for
explicit memory allocations and Source wrappers), and many other things
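for illustration, a sketch of the composition point using C++23
std::generator (lix's own Generator type is not std::generator, and the
wire format here is made up):

```cpp
#include <generator>
#include <ranges>
#include <string>
#include <string_view>

// serialize one string as a made-up length-prefixed fragment
std::generator<std::string> serializeString(std::string_view s)
{
    co_yield std::to_string(s.size()) + "\n";
    co_yield std::string(s);
}

// composing serializers: yield the sub-generator in place instead of wrapping
// the substream in a helper function or a custom type with operator<<
std::generator<std::string> serializePair(std::string_view a, std::string_view b)
{
    co_yield std::ranges::elements_of(serializeString(a));
    co_yield std::ranges::elements_of(serializeString(b));
}
```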
Change-Id: I361d89ff556354f6930d9204f55117565f2f7f20
the `*Source` name is a slight misnomer since we do also have a
Source type, but we can probably live with this for the time being.
Change-Id: I54eb2e59a4009014e324797f16b80b962759c7d3
not used anywhere yet, but we'll use this a lot soon for generators that
return file contents, wire protocol fragments, or indeed any byte stream
Change-Id: I01a46f9bf9d75aaf4a5d7662773b99f498862a28
this will be the basis of non-boost coroutines in lix. anything that is
a boost coroutine *should* be representable with a Generator coroutine,
and many things that are not currently boost coroutines but behave much
like one (such as, notably, serializers) should be as well. this allows
us to greatly simplify many things that look like iteration but aren't.
Change-Id: I2cebcefa0148b631fb30df4c8cfa92167a407e34
Previously, the progress bar had two subtly different states in which the bar
would not actually render, both with their own shortcomings: inactive (which
was irreversible) and paused (reversible, but swallowing logs). Furthermore,
there was no way of resetting the statistics, so a very bad solution was
implemented (243c0f18da) that would create a new
logger for each line of the repl, leaking the previous one and discarding the
value of printBuildLogs. Finally, if stderr was not attached to a TTY, the
update thread was started even though the logger was not active, violating the
invariant required by the destructor (which is not observed because the logger
is leaked).
In this commit, the two aforementioned states are unified into a single one,
which can be exited again, correctly upholds the invariant that the update
thread is only running while the progress bar is active, and does not swallow
logs. The latter change in behavior is not expected to be a problem in the
rare cases where the paused state was used before, since other loggers (like
the simple one) don't exhibit it anyway. The startProgressBar/stopProgressBar
API is removed due to being a footgun, and a new method for properly resetting
the progress is added.
Co-Authored-By: Qyriad <qyriad@qyriad.me>
Change-Id: I2b7c3eb17d439cd0c16f7b896cfb61239ac7ff3a
We previously allowed you to map any flake URL to any other flake URL,
including shorthand flakerefs, indirect flake URLs like `flake:nixpkgs`,
direct flake URLs like `github:NixOS/nixpkgs`, or local paths.
But flake registry entries mapping from direct flake URLs often come
from swapping the 'from' and 'to' arguments by accident, and even when
created intentionally, they may not actually work correctly.
This patch rejects those URLs (and fully-qualified flake: URLs), making
it harder to swap the arguments by accident.
Fixes #181.
Change-Id: I24713643a534166c052719b8770a4edfcfdb8cf3
This is a squash of upstream PRs #10303, #10312 and #10883.
fix: Treat empty TMPDIR as unset
Fixes an instance of
nix: src/libutil/util.cc:139: nix::Path nix::canonPath(PathView, bool): Assertion `path != ""' failed.
... which I've been getting in one of my shells for some reason.
I have yet to find out why TMPDIR was empty, but it's no reason for
Nix to break.
(cherry picked from commit c3fb2aa1f9d1fa756dac38d3588c836c5a5395dc)
fix: Treat empty XDG_RUNTIME_DIR as unset
See preceding commit. Not observed in the wild, but is sensible
and consistent with TMPDIR behavior.
(cherry picked from commit b9e7f5aa2df3f0e223f5c44b8089cbf9b81be691)
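A sketch of the "empty means unset" handling described in the two items
above (the helper name is made up; the real change may look different):

```cpp
#include <cstdlib>
#include <optional>
#include <string>

// read an environment variable, treating an empty value the same as unset
std::optional<std::string> getEnvNonEmpty(const char * name)
{
    const char * value = std::getenv(name);
    if (!value || *value == '\0') return std::nullopt;
    return std::string(value);
}

// a temp-dir helper can then fall back cleanly even when TMPDIR="" is set:
//   return getEnvNonEmpty("TMPDIR").value_or("/tmp");
```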
local-derivation-goal.cc: Reuse defaultTempDir()
(cherry picked from commit fd31945742710984de22805ee8d97fbd83c3f8eb)
fix: remove usage of XDG_RUNTIME_DIR for TMP
(cherry picked from commit 1363f51bcb24ab9948b7b5093490a009947f7453)
tests/functional: Add count()
(cherry picked from commit 6221770c9de4d28137206bdcd1a67eea12e1e499)
Remove uncalled for message
(cherry picked from commit b1fe388d33530f0157dcf9f461348b61eda13228)
Add build-dir setting
(cherry picked from commit 8b16cced18925aa612049d08d5e78eccbf0530e4)
Change-Id: Ic7b75ff0b6a3b19e50a4ac8ff2d70f15c683c16a
this was only used in one place, and that place has been rewritten to
use a temporary file instead. keeping this around is not very helpful
at this time, and in any case we'd be better off rewriting subprocess
handling in rust, where we not only have a much safer library for such
things but also have the async frameworks necessary for this readily available.
Change-Id: I6f8641b756857c84ae2602cdf41f74ee7a1fda02
copy-constructing or assigning from pid_t can easily lead to duplicate
Pid instances for the same process if a pid_t was used carelessly, and
Pid itself was copy-constructible. both could cause surprising results
such as killing processes twice (which could become very problematic,
but luckily modern systems don't reuse PIDs all that quickly), or more
than one piece of the code believing it owns a process when neither does
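the move-only shape this implies looks roughly like the following sketch
(simplified; not lix's actual Pid):

```cpp
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <utility>

// move-only process handle: exactly one owner may ever signal or reap the pid
class Pid
{
    pid_t pid = -1;
public:
    Pid() = default;
    explicit Pid(pid_t pid) : pid(pid) {}

    Pid(const Pid &) = delete;              // no accidental duplicate owners
    Pid & operator=(const Pid &) = delete;

    Pid(Pid && other) noexcept : pid(std::exchange(other.pid, -1)) {}

    ~Pid()
    {
        if (pid != -1) {
            kill(pid, SIGKILL);             // only the single owner signals
            waitpid(pid, nullptr, 0);
        }
    }
};
```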
Change-Id: Ifea7445f84200b34c1a1d0acc2cdffe0f01e20c6
this is only used in one place, and only to set a nicer error message on
EndOfFile. the only caller that actually *catches* this exception should
provide an error message in that catch block rather than forcing support
for setting an error message so deep into the stack. copyStorePath is never
called outside of PathSubstitutionGoal anyway, which catches everything.
Change-Id: Ifbae8706d781c388737706faf4c8a8b7917ca278
Add the log-formats `multiline` and `multiline-with-logs`, which show
multiple currently active build status lines.
Change-Id: Idd8afe62f8591b5d8b70e258c5cefa09be4cab03
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
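The shape of the fix, sketched with zlib as a stand-in decompressor (not
necessarily the library involved here): keep draining output as long as
the output buffer comes back full, even once all input has been consumed.

```cpp
#include <zlib.h>
#include <stdexcept>
#include <string>

// decompress an entire deflate stream; illustrative only, minimal error handling
std::string inflateAll(const std::string & compressed)
{
    z_stream strm{};
    if (inflateInit(&strm) != Z_OK)
        throw std::runtime_error("inflateInit failed");

    strm.next_in = (Bytef *) compressed.data();
    strm.avail_in = (uInt) compressed.size();

    std::string out;
    char buf[8192];
    int ret;
    do {
        strm.next_out = (Bytef *) buf;
        strm.avail_out = sizeof(buf);
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END) {
            inflateEnd(&strm);
            throw std::runtime_error("inflate failed");
        }
        out.append(buf, sizeof(buf) - strm.avail_out);
        // avail_in == 0 does not mean we are done: loop until the stream ends
        // or the decompressor stops filling the output buffer
    } while (ret != Z_STREAM_END && strm.avail_out == 0);

    inflateEnd(&strm);
    return out;
}
```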
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
don't consume a sink, return a source instead. the only reason to not do
this is a very slight reduction in dynamic allocations, but since we are
going to *at least* do disk io that will not be a lot of overhead anyway
Change-Id: Iae2f879ec64c3c3ac1d5310eeb6a85e696d4614a
This was generating an out-of-range verbosity value. We should just
process it as an int and then convert to verbosity with a clamping
function, which trivially avoids any domain type violations.
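A sketch of the clamping helper meant here (the function name is made up;
lvlError and lvlVomit are the ends of the existing Verbosity range):

```cpp
#include <algorithm>

// parse the option as a plain int first, then clamp into the valid range
// instead of ever constructing an out-of-range Verbosity value
Verbosity verbosityFromInt(int level)
{
    return static_cast<Verbosity>(
        std::clamp(level, static_cast<int>(lvlError), static_cast<int>(lvlVomit)));
}
```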
Change-Id: I0ed20da8e1496a1225ff3008b76827d99265d404
There was a previously-unused move constructor that just called abort,
which makes no sense since it ought to just be `= delete` if you don't
want one (commit history says it was Eelco doing an optimistic
performance optimisation in 2016, so it probably would not pass review
today).
However, a Lock has some great reasons to be moved! You might need to
unlock it early, for instance, or give it to someone else. So we change
the move constructor to instead hollow out the moved-from object and
make it unusable.
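Roughly the shape of the new move constructor (a simplified sketch over an
fd-based lock, not the exact class): the moved-from object is hollowed out
so its destructor becomes a no-op.

```cpp
#include <unistd.h>
#include <utility>

class Lock
{
    int fd = -1;    // -1 marks a hollowed-out (moved-from or unlocked) lock
public:
    explicit Lock(int fd) : fd(fd) {}

    Lock(const Lock &) = delete;
    Lock & operator=(const Lock &) = delete;

    // instead of abort(): steal the handle and leave the source inert
    Lock(Lock && other) noexcept : fd(std::exchange(other.fd, -1)) {}

    void unlock()   // releasing early is now possible too
    {
        if (fd != -1) {
            close(fd);
            fd = -1;
        }
    }

    ~Lock() { unlock(); }
};
```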
Change-Id: Iff2a4c2f7ebd0a558c4866d4dfe526bc8558bed7
FreeBSD uses libunwind unwind.h, which does not require
`_GNU_SOURCE` to expose `_Unwind_Backtrace`.
Tell Boost that.
Change-Id: I81e767967b1458118b86d212b5552d4d0a1200d9
They are enabled by default, and Meson also prints whether or not
they're enabled at the end of its configuration output.
Change-Id: I48db238510bf9e74340b86f243f4bbe360794281
Here's my guide so far:
$ rg '((?!(recursive).*) Nix
(?!(daemon|store|expression|Rocks!|Packages|language|derivation|archive|account|user|sandbox|flake).*))'
-g '!doc/' --pcre2
All items from this query have been tackled. For the documentation side:
that's for #162.
Additionally, all remaining references to github.com/NixOS/nix which
were not relevant were also replaced.
Fixes: #148.
Fixes: #162.
Change-Id: Ib3451fae5cb8ab8cd9ac9e4e4551284ee6794545
Signed-off-by: Raito Bezarius <raito@lix.systems>
This causes libstore, libexpr, libfetchers, and libutil to be linked
with -Wl,--whole-archive to executables, when building statically.
libstore for the store backends, libexpr for the primops, libfetchers
for the fetcher backends I assume(?), and libutil for the nix::logger
initializer (which notably shows in pre-main constructors when HOME is
not owned by the user. cursed.).
This workaround should be removed when #359 is fixed.
Fixes #306.
Change-Id: Ie9ef0154e09a6ed97920ee8ab23810ca5e2de84c
This is because a dynamic_cast<nix::RootArgs *> of a (n-e-j) MyArgs
returns nullptr even though MyArgs has virtual nix::RootArgs as a
parent.
class MyArgs : virtual public nix::MixEvalArgs,
               virtual public nix::MixCommonArgs,
               virtual nix::RootArgs { ... };
So this should work right?? But it does not. We found out that it's
caused by -fvisibility=hidden in n-e-j, but honestly this code was bad
anyway.
The trivial solution is to simply stop relying on RTTI working properly
here, which is probably better OO architecture anyway. However, I am not
100% confident *this* is sound, since we have this horrible hierarchy:
         Args (defines getRoot)
        /          |           \
 RootArgs   MixCommonArgs   MixEvalArgs
(overrides)
I am not confident that this is guaranteed to resolve from Args always
in the case of this override.
Assertion failed: (res), function getRoot, file src/libutil/args.cc, line 67.
6MyArgsProcess 60503 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = hit program assert
frame #4: 0x0000000100b1a41c liblixutil.dylib`nix::Args::processArgs(std::__1::list<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>, std::__1::allocator<std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>>> const&, bool) [inlined] nix::Args::getRoot(this=0x00000001000d0688) at args.cc:67:5 [opt]
64 std::cout << typeid(*p).name();
65
66 auto * res = dynamic_cast<RootArgs *>(p);
-> 67 assert(res);
68 return *res;
69 }
70
Target 0: (nix-eval-jobs) stopped.
(lldb) p this
(MyArgs *) 0x00000001000d0688
(lldb) p *this
(nix::Args) {
longFlags = size=180 { ... }
shortFlags = size=4 { ... }
expectedArgs = size=1 { ... }
processedArgs = size=0 {}
hiddenCategories = size=1 {
[0] = "Options to override configuration settings"
}
parent = nullptr
}
We also found that if we did this:
class [[gnu::visibility("default")]] RootArgs : virtual public Args
it would work properly (???!). This is of course, very strange, because
objdump -Ct output on liblixexpr.dylib is identical both with and
without it.
Possibly related: https://www.qt.io/blog/quality-assurance/one-way-dynamic_cast-across-library-boundaries-can-fail-and-how-to-fix-it
Fixes: lix-project/nix-eval-jobs#2
Change-Id: I6b9ed968ed56420a9c4d2dffd18999d78c2761bd
It seems like someone implemented precompiled headers a long time ago
and then it never got ported to meson or maybe didn't work at all.
This is, however, blessedly easy to simply implement. I went looking for
`#define` that could affect the result of precompiling the headers, and
as far as I can tell we aren't doing any of that, so this should truly
just be free build time savings.
Previous state:
Compilation (551 times):
Parsing (frontend): 1302.1 s
Codegen & opts (backend): 956.3 s
New state:
**** Time summary:
Compilation (567 times):
Parsing (frontend): 1123.0 s
Codegen & opts (backend): 1078.1 s
I wonder if the "regression" in codegen time is just doing the PCH
operation a few times, because meson does it per-target.
Change-Id: I664366b8069bab4851308b3a7571bea97ac64022
We reviewed this code a while ago, and we neglected to get a comment in
saying why it's Like This at the time. Let's fix that, since it is code
that looks very absurd at first glance.
Change-Id: Ib67b49605ef9ef1c84ecda1db16be74fc9105398
This breaks downstreams linking to us on purpose to make sure that if
someone is linking to Lix they're doing it on purpose and crucially not
mixing up Nix and Lix versions in compatibility code.
We still need to fix the internal includes to follow the same schema so
we can drop the single-level include system entirely. However, this
requires a little more effort.
This adds pkg-config for libfetchers and config.h.
Migration path:
expr.hh -> lix/libexpr/expr.hh
nix/config.h -> lix/config.h
To apply this migration automatically, remove all `<nix/>` from
includes, so: `#include <nix/expr.hh>` -> `#include <expr.hh>`. Then,
the correct paths will be resolved from the tangled mess, and the
clang-tidy automated fix will work.
Then run the following for out of tree projects:
```
lix_root=$HOME/lix
(cd $lix_root/clang-tidy && nix develop -c 'meson setup build && ninja -C build')
run-clang-tidy -checks='-*,lix-fixincludes' -load=$lix_root/clang-tidy/build/liblix-clang-tidy.so -p build/ -fix src
```
Related: lix-project/nix-eval-jobs#5
Fixes: #279
Change-Id: I7498e903afa6850a731ef8ce77a70da6b2b46966
Move the identical static `chmod_` functions in libstore to
libutil. The function is called `chmodPath` instead of `chmod`,
as it would otherwise shadow the standard library chmod in the nix
namespace, which is somewhat confusing.
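The shared helper presumably looks roughly like this (a sketch; the
SysError message is made up):

```cpp
#include <sys/stat.h>

// named chmodPath rather than chmod so it does not shadow the libc chmod()
// when called unqualified inside the nix namespace
void chmodPath(const Path & path, mode_t mode)
{
    if (chmod(path.c_str(), mode) == -1)
        throw SysError("setting permissions on '%s'", path);
}
```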
Change-Id: I7b5ce379c6c602e3d3a1bbc49dbb70b1ae8f7bad
2bbe3efd1¹ added the -Wdeprecated-copy warning, and fixed the instances
of it which GCC warned about, in HintFmt and ref<T>. However, when
building with Clang, there is an additional deprecated-copy warning in
BaseError. This commit explicitly defaults the copy assignment operator
for BaseError and silences this warning.
1: 2bbe3efd16
Change-Id: I50aa4a7ab1a7aae5d7b31f765994abd3db06379d
The interrupt-blocking code was originally introduced 20 years ago so that
trying to log an error message does not result in an interrupt exception being
thrown and then going unhandled (c8d3882cdc).
However, the logging code does not check for interrupts any more
(054be50257), so this reasoning is no longer
applicable. Delete this code so that later interrupts are unblocked again, for
example in the next line entered into the repl.
Closes: #296
Change-Id: I48253f5f4272e75001148c13046e709ef5427fbd
it's no longer used. it really shouldn't have existed this long since it
was just a mashup of both std::promise and std::packaged_task in a shape
that makes composition unnecessarily difficult. all but a single case of
Callback pattern calls were fully synchronous anyway, and even this sole
outlier was by far not important enough to justify the extra complexity.
Change-Id: I208aec4572bf2501cdbd0f331f27d505fca3a62f
only two users of this function exist. only one used it in a way that
even bears resemblance to asynchronicity, and even that one didn't do
it right. fully async and parallel computation would only have worked
if getEdgesAsync itself never called the continuation it receives, but
only from more derived callbacks running on other threads. calling it
directly would cause the decoupling promise to be awaited immediately
*on the original thread*, completely negating all nice async effects.
Change-Id: I0aa640950cf327533a32dee410105efdabb448df
If unprivileged userns are *believed* to be disabled (such as with
"kernel.unprivileged_userns_clone = 0"), Lix would previously *give up*
on trying to use a user namespace before actually trying it, even if, in
cases such as unprivileged_userns_clone, it would actually be allowed
since Nix has CAP_SYS_ADMIN when running as daemon.
(see, e.g. 25d4709a4f)
We changed it to actually try it first, and then diagnose possible
causes, and also to be more loud about the whole thing, using warnings
instead of debugs. These warnings will only print on the first build run
by the daemon, which is, tbh, eh, shrug.
This is what led to us realizing that no-userns was a poorly exercised
condition.
Change-Id: I8e4f21afc89c574020dc7e89a560cc740ce6573a
this is used in CA rewriting, replacement of placeholders in
derivations, generating scripts for devShells, and some more
places. in all of these, transitive replacements are unsound,
and overlapping replacements would be as well. there even is
a test that transitive replacements do not happen (in the CA
RewriteSink suite), but none for overlapping replacements. a
minimally surprising binary rewriter surely would not do any
of these replacements, the only reason we have not seen this
break yet is probably that rewriteStrings is only called for
store paths and things that look like store paths (and those
should never overlap nor admit such transitive replacements)
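for contrast, a sketch of the minimally surprising behaviour described
above (not lix's implementation): scan the input once, never rescan
produced output, and consume matched input, so rewrites can neither
chain nor overlap.

```cpp
#include <map>
#include <string>

std::string rewriteFlat(const std::string & s,
    const std::map<std::string, std::string> & rewrites)
{
    std::string out;
    size_t i = 0;
    while (i < s.size()) {
        bool matched = false;
        for (auto & [from, to] : rewrites) {
            if (!from.empty() && s.compare(i, from.size(), from) == 0) {
                out += to;          // output is never rescanned: no transitivity
                i += from.size();   // matched input is consumed: no overlaps
                matched = true;
                break;
            }
        }
        if (!matched)
            out += s[i++];
    }
    return out;
}
```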
Change-Id: I6fc29f939d5061d9f56c752624a823ece8437c07
Previously, the garbage collector found runtime roots on Darwin by
shelling out to `lsof -n -w -F n` then parsing the result.
However, this requires an lsof binary and can be extremely slow.
The official Apple lsof returns in a reasonable amount of time,
about 250ms in my tests, but the lsof packaged in nixpkgs is quite slow,
taking about 40 seconds to run the command.
Using libproc directly is about the same speed as Apple lsof,
and allows us to reënable several tests that were disabled on Darwin.
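Roughly the shape of the libproc approach (a simplified sketch, not the
actual implementation; races between the two sizing calls and permission
errors are ignored):

```cpp
#include <libproc.h>
#include <sys/proc_info.h>
#include <string>
#include <vector>

// collect the paths of all open vnodes across all processes
std::vector<std::string> openVnodePaths()
{
    std::vector<std::string> paths;

    int bytes = proc_listpids(PROC_ALL_PIDS, 0, nullptr, 0);
    std::vector<pid_t> pids(bytes / sizeof(pid_t));
    bytes = proc_listpids(PROC_ALL_PIDS, 0, pids.data(), bytes);
    pids.resize(bytes / sizeof(pid_t));

    for (pid_t pid : pids) {
        int fdBytes = proc_pidinfo(pid, PROC_PIDLISTFDS, 0, nullptr, 0);
        if (fdBytes <= 0) continue;                 // gone or not permitted
        std::vector<proc_fdinfo> fds(fdBytes / sizeof(proc_fdinfo));
        fdBytes = proc_pidinfo(pid, PROC_PIDLISTFDS, 0, fds.data(), fdBytes);
        if (fdBytes <= 0) continue;
        fds.resize(fdBytes / sizeof(proc_fdinfo));

        for (auto & fd : fds) {
            if (fd.proc_fdtype != PROX_FDTYPE_VNODE) continue;
            vnode_fdinfowithpath info;
            if (proc_pidfdinfo(pid, fd.proc_fd, PROC_PIDFDVNODEPATHINFO,
                    &info, sizeof(info)) == int(sizeof(info)))
                paths.emplace_back(info.pvip.vip_path);
        }
    }
    return paths;
}
```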
Change-Id: Ifa0adda7984e13c15535693baba835aae79a3577
Saves us a bunch of thinking about how to handle symlinks, and prevents
the DNS config from changing on the fly under the build, which may or may
not be a good thing?
Change-Id: I071e6ae7e220884690b788d94f480866f428db71
* changes:
meson: fix log-dir
manual: build docs with dummy envs
libcmd: install generated headers as well
docs: redo content generation for mdbook and manual
manpages can be rendered using the markdown output of mdbook, the rest
of the manual can be generated out of the main doc/manual source tree. we
still use lowdown to actually render manpages instead of eg mdbook-man
because lowdown does generate reasonably good manpages (though that is
also somewhat debatable, but they're a lot better than mdbook-man).
doing this not only lets us drastically simplify the lowdown pipeline,
but also remove all custom {{#include}} handling since now mdbook does
all of it, even for the manpage builds. even the lowdown wrapper isn't
entirely necessary because lowdown can take all wrapper arguments with
command line flags rather than bits of input file content.
This also implements running mdbook in Meson, in order to generate the
manpages. The mdbook outputs are also installed in the usual location.
Co-authored-by: Qyriad <qyriad@qyriad.me>
Change-Id: I60193f9fd0f15d48872f071af35855cda2a0f40b
throwing exceptions is fine, but throwing exceptions during exception
handling is hard enough to do correctly that we should just forbid it
entirely out of an overabundance of caution. in cases where terminate
is the correct answer the users of Finally must call it manually now.
Change-Id: Ia51a2cb4a0638500550bfabc89cf01a6d8098983
setting this only on exceptions caused by actual fd access is not
sufficient to diagnose all errors (such as SerialisationError) in
some cases. this usually does not have any negative effects since
those errors will end up killing the process in another way. this
is not a reliable assumption though and we should be using proper
error handling (and closing connections more often, preferring to
close over keeping something open that might be in a weird state)
Change-Id: I1b792cd7ad8ba9ff0f6bd174945ab2575ff2208e
not needed yet, but returning a resource from the exception handling
path that has ownership of a handle is currently not well-supported.
we could also add a default constructor to Handle, but then we would
also need to change the pool reference to a pointer. eventually that
should be done since now resources can be swapped between pools with
clever moves, but since that's not a problem yet we won't do it now.
Change-Id: I26eb06581f7be34569e9e67a33da736128d167af
this is supposed to act like a finally block does in other languages. a
finally block should be able to throw exceptions of its own rather than
just crashing the entire program when it throws its own exceptions. even
in the rare case of a finally throwing an unexpected exception it might
be better to report the exception from Finally instead of the original,
at least that can keep our program running instead of letting it crash.
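a minimal sketch of one way to realise these semantics (not lix's exact
Finally): let the cleanup's own exception propagate when nothing else is
unwinding, and only give up when it would collide with an exception
already in flight.

```cpp
#include <exception>
#include <utility>

template<typename Fn>
class Finally
{
    Fn fun;
public:
    explicit Finally(Fn fun) : fun(std::move(fun)) {}
    Finally(const Finally &) = delete;

    ~Finally() noexcept(false)
    {
        try {
            fun();
        } catch (...) {
            if (std::uncaught_exceptions())
                std::terminate();   // two exceptions in flight: cannot continue
            throw;                  // otherwise report the cleanup's own error
        }
    }
};
```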
Change-Id: Id42011e46b1df369152b4564938c0e93fa1acf32
it was used incorrectly (not swapped on handle move), only used in one
place (that is now handled with exception handling detection in Handle
itself), and if ever reintroduced should be replaced with a different,
more understandable mechanism (like an explicit dropAsInvalid method).
Change-Id: Ie3e5d5cfa81d335429cb2ee5c3ad85c74a9df17b
this was never actually used, and bad design in the first place—why
should a bad resource be put back into the idle pool? just drop it.
Change-Id: Idab8774bee19dadae0209d404c4fb86dd4aeba1e