What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but at this point
it's blocked on the removal of boost coroutines, tbh.
Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
When the configured maximum depth has been reached, attribute sets and lists
are printed with an ellipsis to indicate the elision of nested items. Previously,
this happened even when the structure being printed was empty, i.e. when there
are no nested items to elide at all. This is confusing, so stop doing it.
Change-Id: I0016970dad3e42625e085dc896e6f476b21226c9
The repeated value detection logic exists so that the occurrence of large
common substructures does not fill up the screen or the computer's memory.
However, empty attribute sets and derivations (when their detection is enabled)
are always cheap to print, and in practice I have observed them to make up a
significant majority of the cases where I was annoyed by the repeated value
detection kicking in. Furthermore, `nix-instantiate --eval` already disables
this logic for empty attribute sets, and empty lists are already exempted
everywhere. For these reasons, always print empty attribute sets and
derivations as what they are.
Change-Id: I5dac8e7739f9d726b76fd0521ec46f38af94463f
this is cursed. deeply and profoundly cursed. under NO CIRCUMSTANCES
must protocol serializer helpers be applied to temporaries! doing so
will inevitably cause dangling references and crash the entire thing.
we need to do this even so to get rid of boost coroutines,
and likewise to encapsulate the serializers we suffer today at least
a little bit to allow a gradual migration to an actual IPC protocol.
(this isn't a problem that's unique to generators. c++ coroutines in
general cannot safely take references to arbitrary temporaries since
c++ does not have a lifetime system that can make this safe. -sigh-)
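for illustration, here is the hazard in isolation (using C++23
std::generator as a stand-in, since it is the same shape of coroutine as
our Generator):
```
// intentionally broken example: a coroutine binding a reference parameter to
// a temporary. the reference is stored in the coroutine frame, but the
// temporary dies at the end of the full expression that created the coroutine.
#include <generator>
#include <iostream>
#include <string>

std::generator<char> chars(const std::string & s)
{
    for (char c : s)        // s may already be dangling by the time this runs
        co_yield c;
}

int main()
{
    auto g = chars(std::string("temporary"));   // the temporary is gone after this line
    for (char c : g)
        std::cout << c;                         // undefined behaviour
}
```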
Change-Id: I2921ba451e04d86798752d140885d3c5cc08e146
This also bans various ways of sneaking negative numbers from the
language into unsuspecting builtins, as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
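For the record, a hypothetical sketch of what such a checked-narrowing
helper could look like (not part of this change):
```
// Hypothetical sketch of a checked-narrowing helper; not part of this change.
// Throws instead of silently truncating or wrapping.
#include <cstdint>
#include <limits>
#include <stdexcept>
#include <utility>

template<typename Narrow, typename Wide>
Narrow checkedNarrow(Wide value)
{
    // std::cmp_* compare across signedness without the usual implicit
    // conversion pitfalls.
    if (std::cmp_less(value, std::numeric_limits<Narrow>::min())
        || std::cmp_greater(value, std::numeric_limits<Narrow>::max()))
        throw std::range_error("integer out of range for target type");
    return static_cast<Narrow>(value);
}

// e.g. checkedNarrow<uint32_t>(int64_t{-1}) would throw instead of wrapping
// to 4294967295.
```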
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used for such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice that we
had done this at all until it was reported as a bug a couple of months
later (which is, to be fair, that flag working as intended). It has had
enough production time that, aside from code that is IMHO buggy (and
which is, in any case, not in nixpkgs) such as
#445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
This change will be ported to CppNix as well, to maintain language
consistency.
Fixes: #423
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
the rewriting sink was just broken. when given a rewrite set that
contained a key that is also a proper infix of another key, it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this in place would duplicate
rewriteStrings; to avoid that, we'll rewrite rewriteStrings to use
RewritingSource in a new mode that allows rewrites we had previously
forbidden.
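to illustrate the class of bug with a deliberately naive standalone
rewriter (not the lix code): when one key is a proper infix of another,
the block size decides which key gets matched.
```
// deliberately naive chunked rewriter, for illustration only: it applies
// rewrites within each block independently, so a key that straddles a block
// boundary is missed and a shorter key contained in it may fire instead.
#include <iostream>
#include <map>
#include <string>

static std::string rewriteChunked(std::string input, size_t blockSize,
    const std::map<std::string, std::string> & rewrites)
{
    std::string out;
    for (size_t off = 0; off < input.size(); off += blockSize) {
        std::string block = input.substr(off, blockSize);
        for (auto & [from, to] : rewrites)
            for (size_t pos = 0; (pos = block.find(from, pos)) != std::string::npos;
                 pos += to.size())
                block.replace(pos, from.size(), to);
        out += block;
    }
    return out;
}

int main()
{
    // "c" is a proper infix of "bcd". with the whole input as one block the
    // longer key wins; with a block boundary inside "bcd" the shorter key
    // fires instead, and the two results disagree.
    std::map<std::string, std::string> rewrites{{"bcd", "BCD"}, {"c", "X"}};
    std::cout << rewriteChunked("abcde", 5, rewrites) << "\n";  // aBCDe
    std::cout << rewriteChunked("abcde", 3, rewrites) << "\n";  // abXde
}
```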
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0
generators are a better basis for serializers than streaming into sinks
as we do currently, for many reasons:
- they can be used as sources if one wishes to, without requiring an
  intermediate sink to serialize full data sets into memory, or boost
  coroutines to turn sinks into sources
- they compose more naturally, since one can just yield a sub-generator
  instead of being forced to wrap entire substreams into clunky functions
  or even more clunky custom types to implement operator<< on (see the
  sketch below)
- they allow wrappers to transform data with clear ownership semantics,
  removing the need for explicit memory allocations and Source wrappers
- and many other things
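a sketch of the composition point, with C++23 std::generator and
std::ranges::elements_of standing in for our own Generator type:
```
// illustration only: yielding a sub-generator from a serializer, instead of
// wrapping substreams in custom operator<< machinery.
#include <cstddef>
#include <generator>
#include <ranges>
#include <string>

std::generator<std::byte> serializeString(std::string s)
{
    // toy framing: a single length byte, then the bytes themselves
    co_yield static_cast<std::byte>(s.size() & 0xff);
    for (char c : s)
        co_yield static_cast<std::byte>(c);
}

std::generator<std::byte> serializePair(std::string a, std::string b)
{
    // composing is just yielding the sub-generator
    co_yield std::ranges::elements_of(serializeString(std::move(a)));
    co_yield std::ranges::elements_of(serializeString(std::move(b)));
}
```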
Change-Id: I361d89ff556354f6930d9204f55117565f2f7f20
this will be the basis of non-boost coroutines in lix. anything that is
a boost coroutine *should* be representable with a Generator coroutine,
and many things that are not currently boost coroutines but behave much
like one (such as, notably, serializers) should be as well. this allows
us to greatly simplify many things that look like iteration but aren't.
Change-Id: I2cebcefa0148b631fb30df4c8cfa92167a407e34
Previously, the progress bar had two subtly different states in which the bar
would not actually render, both with their own shortcomings: inactive (which
was irreversible) and paused (reversible, but swallowing logs). Furthermore,
there was no way of resetting the statistics, so a very bad solution was
implemented (243c0f18da) that would create a new
logger for each line of the repl, leaking the previous one and discarding the
value of printBuildLogs. Finally, if stderr was not attached to a TTY, the
update thread was started even though the logger was not active, violating the
invariant required by the destructor (which is not observed because the logger
is leaked).
In this commit, the two aforementioned states are unified into a single one,
which can be exited again, correctly upholds the invariant that the update
thread is only running while the progress bar is active, and does not swallow
logs. The latter change in behavior is not expected to be a problem in the
rare cases where the paused state was used before, since other loggers (like
the simple one) don't exhibit it anyway. The startProgressBar/stopProgressBar
API is removed due to being a footgun, and a new method for properly resetting
the progress is added.
Co-Authored-By: Qyriad <qyriad@qyriad.me>
Change-Id: I2b7c3eb17d439cd0c16f7b896cfb61239ac7ff3a
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
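The general shape of the fix, sketched with zlib purely for illustration
(not the code being changed here): the loop has to key off the output
state, not off whether the input has been consumed.
```
// Generic streaming-decompression pattern: having consumed all input does not
// mean decompression is finished; keep draining output until the library says
// it has produced everything.
#include <zlib.h>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

// strm must already be initialized with inflateInit().
std::string decompressAll(z_stream & strm, const char * in, size_t inLen)
{
    std::string out;
    std::vector<char> buf(16 * 1024);

    strm.next_in = reinterpret_cast<Bytef *>(const_cast<char *>(in));
    strm.avail_in = inLen;

    int ret;
    do {
        strm.next_out = reinterpret_cast<Bytef *>(buf.data());
        strm.avail_out = buf.size();
        ret = inflate(&strm, Z_NO_FLUSH);
        if (ret != Z_OK && ret != Z_STREAM_END && ret != Z_BUF_ERROR)
            throw std::runtime_error("inflate failed");
        out.append(buf.data(), buf.size() - strm.avail_out);
        // Loop on "did we fill the output buffer?", not on "is there input
        // left?" -- the latter is the bug described above.
    } while (strm.avail_out == 0 && ret != Z_STREAM_END);

    return out;
}
```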
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations)
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27
even the transfer function is not all that necessary since there aren't
that many users, but we'll keep it for now. we could've kept both names
but we also kind of want to use `download` for something else very soon
Change-Id: I005e403ee59de433e139e37aa2045c26a523ccbf
* changes:
libstore client: remove remaining dead code
libstore: refuse to serialise ancient protocols
libstore client: remove support for <2.3 clients
libstore daemon: remove very old protocol support (<2.3)
Delete old ValidPathInfo test, fix UnkeyedValidPathInfo
Set up minimum protocol version
with the preparatory work done this mostly means turning plain pointers
into unique_ptrs, with all the associated churn that necessitates. we
might want to change some of these to box_ptrs at some point as well,
but that would be a semantic change that isn't fully appropriate yet.
Change-Id: I0c238c118617420650432f4ed45569baa3e3f413
almost all places where Exprs are passed as pointers expect the pointers
to be non-null. pass them as references to encode this constraint in the
type system as well (and also communicate that Exprs must not be freed).
Change-Id: Ia98f166fec3c23151f906e13acb4a0954a5980a2
We don't want to deal with these at all, let's stop doing so.
(marking this one as the fix commit since its immediate predecessors
aren't the complete fix)
Fixes: #325
Change-Id: Ieea1b0b8ac0f903d1e24e5b3e63cfe12eeec119d
The UnkeyedValidPathInfo test was testing an ancient version but not the
current version. Doesn't make much sense to me.
Change-Id: Ib476a4297d9075f2dcd31a073b3e7b149b2189af
The libcmd unit test creates files (more specifically, the fetcher cache) in
its home directory. In the single-user sandbox, this leads to the creation of
/homeless-shelter, since this is the default HOME and the root is writable.
Unfortunately, this conflicts with the assumption of the functional tests that
this directory does not exist. Use a different home directory to prevent these
test failures, and thus restore the ability to build inside the single-user
sandbox.
Fixes: #365
Change-Id: I4df8c53d043234b95a7c0ac45fc5ee89e8d46aff
They are enabled by default, and Meson also prints whether or not
they're enabled at the end of configuration.
Change-Id: I48db238510bf9e74340b86f243f4bbe360794281
This causes libstore, libexpr, libfetchers, and libutil to be linked
into executables with -Wl,--whole-archive when building statically.
libstore for the store backends, libexpr for the primops, libfetchers
for the fetcher backends I assume(?), and libutil for the nix::logger
initializer (which notably shows in pre-main constructors when HOME is
not owned by the user. cursed.).
This workaround should be removed when #359 is fixed.
Fixes: #306
Change-Id: Ie9ef0154e09a6ed97920ee8ab23810ca5e2de84c
It seems like someone implemented precompiled headers a long time ago
and then it never got ported to meson or maybe didn't work at all.
This is, however, blessedly easy to simply implement. I went looking for
`#define`s that could affect the result of precompiling the headers, and
as far as I can tell we aren't doing any of that, so this should truly
just be free build time savings.
Previous state:
  Compilation (551 times):
    Parsing (frontend): 1302.1 s
    Codegen & opts (backend): 956.3 s
New state:
  Compilation (567 times):
    Parsing (frontend): 1123.0 s
    Codegen & opts (backend): 1078.1 s
I wonder if the "regression" in codegen time is just doing the PCH
operation a few times, because meson does it per-target.
Change-Id: I664366b8069bab4851308b3a7571bea97ac64022
This reverts commit 285bc67318.
Reason for revert: #364
For some reason this broke `main` even though the change we are reverting passed CI! Mysterious, haunted, etc. Needs more debugging, let's turn it off for now.
Change-Id: Ica4819d61cd35b83eb52985bfcb657e858f025a9
* changes:
util.hh: Delete remaining file and clean up headers
util.hh: Move nativeSystem to local-derivation-goal.cc
util.hh: Move stuff to types.hh
util.cc: Delete remaining file
util.{hh,cc}: Move ignoreException to error.{hh,cc}
util.{hh,cc}: Split out namespaces.{hh,cc}
util.{hh,cc}: Split out users.{hh,cc}
util.{hh,cc}: Split out strings.{hh,cc}
util.{hh,cc}: Split out unix-domain-socket.{hh,cc}
util.{hh,cc}: Split out child.{hh,cc}
util.{hh,cc}: Split out current-process.{hh,cc}
util.{hh,cc}: Split out processes.{hh,cc}
util.{hh,cc}: Split out file-descriptor.{hh,cc}
util.{hh,cc}: Split out file-system.{hh,cc}
util.{hh,cc}: Split out terminal.{hh,cc}
util.{hh,cc}: Split out environment-variables.{hh,cc}
while refactoring the curl wrapper we inadvertently broke the immutable
flake protocol, because the immutable flake protocol accumulates headers
across the entire redirect chain instead of using only the headers given
in the final response of the chain. this is a problem because Some Known
Providers Of Flake Infrastructure set rel=immutable link headers only in
the penultimate entry of the redirect chain, and curl does not regard it
as worth returning to us via its response header enumeration mechanisms.
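a sketch of where the behaviour difference lives, using plain libcurl
(not the lix wrapper): the header callback fires for every hop of the
redirect chain, and whether the accumulator is cleared on each new
status line decides whether headers from earlier hops survive.
```
// generic libcurl header handling, for illustration only.
#include <curl/curl.h>
#include <map>
#include <string>

struct Headers {
    std::map<std::string, std::string> fields;
};

static size_t onHeader(char * buf, size_t size, size_t nitems, void * userdata)
{
    auto & hdrs = *static_cast<Headers *>(userdata);
    std::string line(buf, size * nitems);

    if (line.rfind("HTTP/", 0) == 0) {
        // a new response in the chain begins here. clearing at this point
        // keeps only the final response's headers -- the behaviour that broke
        // the immutable flake protocol; keeping the accumulator intact
        // preserves headers from every hop.
        // hdrs.fields.clear();
    } else if (auto colon = line.find(':'); colon != std::string::npos) {
        // value keeps its whitespace/CRLF; trimming omitted for brevity
        hdrs.fields[line.substr(0, colon)] = line.substr(colon + 1);
    }
    return size * nitems;
}

// wiring it up (error handling omitted):
//   curl_easy_setopt(handle, CURLOPT_HEADERFUNCTION, onHeader);
//   curl_easy_setopt(handle, CURLOPT_HEADERDATA, &headers);
//   curl_easy_setopt(handle, CURLOPT_FOLLOWLOCATION, 1L);
```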
fixes #358
Change-Id: I645c3932b465cde848bd6a3565925a1e3cbcdda0
* changes:
docs: linkify nix3-build mention in nix-build.md
build: make internal-api-docs PHONY
cleanup lookupFileArg
add docstring to lookupFileArg
add libcmd test for lookupFileArg
This intentionally breaks downstreams linking to us, to make sure that
if someone is linking to Lix they're doing it on purpose and, crucially,
not mixing up Nix and Lix versions in compatibility code.
We still need to fix the internal includes to follow the same schema so
we can drop the single-level include system entirely. However, this
requires a little more effort.
This adds pkg-config for libfetchers and config.h.
Migration path:
expr.hh -> lix/libexpr/expr.hh
nix/config.h -> lix/config.h
To apply this migration automatically, remove all `<nix/>` from
includes, so: `#include <nix/expr.hh>` -> `#include <expr.hh>`. Then,
the correct paths will be resolved from the tangled mess, and the
clang-tidy automated fix will work.
Then run the following for out of tree projects:
```
lix_root=$HOME/lix
(cd "$lix_root/clang-tidy" && nix develop -c sh -c 'meson setup build && ninja -C build')
run-clang-tidy -checks='-*,lix-fixincludes' -load=$lix_root/clang-tidy/build/liblix-clang-tidy.so -p build/ -fix src
```
Related: lix-project/nix-eval-jobs#5
Fixes: #279
Change-Id: I7498e903afa6850a731ef8ce77a70da6b2b46966
Example: /nix/store/dr53sp25hyfsnzjpm8mh3r3y36vrw3ng-neovim-0.9.5^out
This is nonsensical since selecting outputs can only be done for a
buildable derivation, not for a realised store path. The build worker
side of things ends up crashing with an assertion when trying to handle
such malformed paths.
Change-Id: Ia3587c71fe3da5bea45d4e506e1be4dd62291ddf
Very basic behavior test to ensure that gzip data gets internally
decompressed by the file transfer pipeline.
Change a std::string_view return value in the test harness to
std::string. I wouldn't call myself a C++ beginner and I still managed
to shoot myself in the foot like three times with the lifetime
managements there (e.g. [&] { return an_std_string; } ends up with a
dangling string_view!).
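The foot-gun in question, spelled out in isolation (illustration only):
```
// The lambda's deduced return type is std::string, so it returns a *copy*.
// std::function then converts that temporary copy to a std::string_view, and
// the copy is destroyed before the caller ever sees the view.
#include <functional>
#include <string>
#include <string_view>

int main()
{
    std::string an_std_string = "hello from the test harness";

    std::function<std::string_view()> f = [&] { return an_std_string; };

    std::string_view v = f();   // dangles, even though an_std_string is alive
    (void) v;                   // any use of v is undefined behaviour
}
```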
Change-Id: I1360750d4181ce1ca2a3aa4dc0e97e131351c469
also add a few more tests for exception propagation behavior. using
packaged_tasks and futures (which only allow a single call to a few
of their methods) introduces error paths that weren't there before.
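the single-call restrictions in question, for reference (this is
standard library behaviour, not lix code):
```
// std::packaged_task and std::future allow several operations only once; a
// second call throws std::future_error. these are the new error paths that
// the tests need to cover.
#include <future>
#include <iostream>

int main()
{
    std::packaged_task<int()> task([] { return 42; });
    auto fut = task.get_future();

    try {
        auto again = task.get_future();  // throws future_already_retrieved
        (void) again;
    } catch (const std::future_error & e) {
        std::cout << "second get_future(): " << e.code().message() << "\n";
    }

    task();
    try {
        task();                          // throws promise_already_satisfied
    } catch (const std::future_error & e) {
        std::cout << "second invocation: " << e.code().message() << "\n";
    }

    std::cout << fut.get() << "\n";      // 42; get() itself is also once-only
}
```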
Change-Id: I42ca5236f156fefec17df972f6e9be45989cf805
only two users of this function exist. only one used it in a way that
even bears resemblance to asynchronicity, and even that one didn't do
it right. fully async and parallel computation would only have worked
if getEdgesAsync never called the continuation it receives directly,
but only from more derived callbacks running on other threads. calling
it directly causes the decoupling promise to be awaited immediately
*on the original thread*, completely negating all the nice async effects.
Change-Id: I0aa640950cf327533a32dee410105efdabb448df
not doing this will cause transfers whose readers have disappeared to
linger. with lingering transfers the curl thread can't shut down, which
will cause nix itself to not shut down until the transfer finishes some
other way (most likely network timeouts). also add a new test for this.
Change-Id: Id2401b3ac85731c824db05918d4079125be25b57
this is used in CA rewriting, replacement of placeholders in
derivations, generating scripts for devShells, and some more places.
in all of these, transitive replacements are unsound, and overlapping
replacements would be as well. there is even a test that transitive
replacements do not happen (in the CA RewriteSink suite), but none for
overlapping replacements. a minimally surprising binary rewriter surely
would not do any of these replacements; the only reason we have not
seen this break yet is probably that rewriteStrings is only called for
store paths and things that look like store paths (and those should
never overlap nor admit such transitive replacements).
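to make the transitive case concrete (standalone illustration, not the
lix implementation):
```
// naive "rewrite over the mutated string" strategy, shown only to demonstrate
// the failure mode; a minimally surprising rewriter applies each rule to the
// original input exactly once.
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, std::string> rewrites{{"aa", "ab"}, {"ab", "ac"}};
    std::string s = "aa";

    // transitive: "aa" -> "ab", and then the *result* matches the second
    // rule, yielding "ac" even though "ab" never appeared in the input.
    for (auto & [from, to] : rewrites)
        if (auto pos = s.find(from); pos != std::string::npos)
            s.replace(pos, from.size(), to);

    std::cout << s << "\n";   // prints "ac", not the expected "ab"
}
```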
Change-Id: I6fc29f939d5061d9f56c752624a823ece8437c07
* changes:
nix3-profile: remove check "name" attr in manifests
Add profile migration test
nix3-profile: make element names stable
getNameFromURL(): Support uppercase characters in attribute names
nix3-profile: remove indices
nix3-profile: allow using human-readable names to select packages
implement parsing human-readable names from URLs