it's just a uri and some headers now. those can be function arguments
with no loss of clarity. *actual* additional arguments, for example a
TLS context with additional certificates, could be added on a new and
improved FileTransfer class that carries not just a backend reference
but some real, visible context for its transfers. curl not being very
multi-threading-friendly when using multi handles will make sharing a
bit hard anyway once we drop the single global download worker thread
Change-Id: Id2112c95cbd118c6d920488f38d272d7da926460
we don't even need this outside of tests. maybe we should not do
automatic retries at this level at all and use retrying wrappers
instead? at some point we may have to do this, but not just yet.
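a retrying wrapper could look roughly like the sketch below. this is not an existing Lix helper, just an illustration of the idea:

    #include <chrono>
    #include <thread>

    // illustration only: retry a callable a few times before giving up.
    template<typename F>
    auto withRetries(F f, unsigned attempts = 3,
                     std::chrono::milliseconds delay = std::chrono::milliseconds(100))
    {
        for (unsigned i = 1;; i++) {
            try {
                return f();
            } catch (...) {
                if (i >= attempts) throw;
                std::this_thread::sleep_for(delay);
            }
        }
    }

callers would then write something like withRetries([&] { return transfer->download(uri); }) (a hypothetical call) instead of relying on the transfer layer to retry internally.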
Change-Id: If0088aa55215be81f1770c25b3bb1b5268c65cf8
Only overridden settings are sent to the daemon, and we're going to do the same
for the build hook too. It needs to be ensured that overridden settings are in
fact consistently marked as such, so that they actually get sent.
Change-Id: I7cd58d925702f86cf2c35ad121eb191ceb62a355
There was a bug report about a potential call to `memcpy` with a null
pointer, which is not reproducible:
#492
This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.
To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.
This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.
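A minimal sketch of the guard; the surrounding buffering state of `InnerSource::read` is omitted and the helper below is hypothetical:

    #include <algorithm>
    #include <cstring>
    #include <string_view>

    // Copy at most `wanted` bytes out of the buffered data, but never call
    // memcpy with a length of 0, whose source pointer may be null.
    std::size_t readInto(char * dest, std::size_t wanted, std::string_view buffered)
    {
        std::size_t n = std::min(wanted, buffered.size());
        if (n == 0)
            return 0; // the early return added by this change
        std::memcpy(dest, buffered.data(), n);
        return n;
    }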
Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
The test suite can load the global configuration files under certain
circumstances, and, though we would really rather it didn't ever do that
at all, we should at least break the mechanism.
Fixes: #474
Change-Id: Ib27cb43dd5dfaa70ac491c395b5ba308fd7bd289
This moves the "legacy"/"nix2" commands under a new `src/legacy/`
directory, instead of being scattered around in a bunch of different
directories.
A new `liblegacy` build target is defined, and the `nix` binary is
linked against it.
Then, `RegisterLegacyCommand` is replaced with `LegacyCommand::add`
calls in functions like `registerNixCollectGarbage()`. These
registration functions are called explicitly in `src/nix/main.cc`.
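A sketch of the new registration pattern; `LegacyCommand` here is a minimal stand-in for illustration, not the actual liblegacy declaration:

    #include <functional>
    #include <map>
    #include <string>

    // Stand-in registry; the real one lives in liblegacy.
    struct LegacyCommand {
        using MainFunction = std::function<int(int, char **)>;
        static std::map<std::string, MainFunction> & commands() {
            static std::map<std::string, MainFunction> m;
            return m;
        }
        static void add(const std::string & name, MainFunction fn) {
            commands()[name] = std::move(fn);
        }
    };

    static int mainNixCollectGarbage(int, char **) { return 0; } // placeholder body

    // One such function per legacy command, defined next to the command itself:
    void registerNixCollectGarbage()
    {
        LegacyCommand::add("nix-collect-garbage", mainNixCollectGarbage);
    }

Instead of relying on static initializers the way RegisterLegacyCommand did, src/nix/main.cc calls each register function explicitly before dispatching on the command name.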
See: #359
Change-Id: Id450ffc3f793374907599cfcc121863b792aac1a
like kj::joinPromisesFailFast this allows waiting for the results of
multiple promises at once, but unlike it, results become available as
each input promise settles rather than only once all of them have
completed (or any one of them has failed).
Change-Id: I0e4a37e7bd90651d56b33d0bc5afbadc56cde70c
like a normal semaphore, but with awaitable acquire actions. this is
primarily intended as an intermediate concurrency limiting device in
the Worker code, but it may find other uses over time. we do not use
std::counting_semaphore as a base because its counter is not inspectable,
which Worker will need. we also do not need atomic
operations for cross-thread consistency since we don't have multiple
threads (thanks to kj event loops being confined to a single thread)
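a simplified single-threaded sketch of the idea, with plain callbacks standing in for the awaitable acquire actions the real class hands out:

    #include <cstddef>
    #include <deque>
    #include <functional>

    class AsyncSemaphore {
        std::size_t capacity, inUse = 0;
        std::deque<std::function<void()>> waiters;
    public:
        explicit AsyncSemaphore(std::size_t capacity) : capacity(capacity) {}

        // run the continuation now if a slot is free, otherwise queue it.
        // no atomics needed since everything stays on one thread.
        void acquire(std::function<void()> k) {
            if (inUse < capacity) { inUse++; k(); }
            else waiters.push_back(std::move(k));
        }

        void release() {
            if (!waiters.empty()) {
                auto next = std::move(waiters.front());
                waiters.pop_front();
                next(); // the freed slot goes straight to the next waiter
            } else {
                inUse--;
            }
        }

        // unlike std::counting_semaphore the counts are inspectable.
        std::size_t used() const { return inUse; }
        std::size_t available() const { return capacity - inUse; }
    };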
Change-Id: Ie2bcb107f3a2c0185138330f7cbba4cec6cbdd95
These stack traces kind of suck for the reasons mentioned on the
CppTrace page here (no symbols for inline functions is a major one):
https://github.com/jeremy-rifkin/cpptrace
I would consider using CppTrace if it were packaged, but to be honest, I
think that the more reasonable option is actually to move entirely to
out-of-process crash handling and symbolization.
The reason for this is that if you want to generate anything of
substance on SIGSEGV or really any deadly signal, you are stuck in
async-signal-safe land, which is not a place to be trying to run a
symbolizer. LLVM does it anyway, probably carefully, and chromium *can*
do it on debug builds but in general uses crashpad:
https://source.chromium.org/chromium/chromium/src/+/main:base/debug/stack_trace_posix.cc;l=974;drc=82dff63dbf9db05e9274e11d9128af7b9f51ceaa;bpv=1;bpt=1
However, some stack traces are better than *no* stack traces when we get
mystery exceptions falling out the bottom of the program. I've also
promoted the path for "mystery exceptions falling out the bottom of the
program" to hard crash and generate a core dump because although there's
been some months since the last one of these, these are nonetheless
always *atrociously* diagnosed.
We can't improve the crash handling further until either we use Crashpad
(which involves more C++ deps, no thanks) or we put in the ostensibly
work-in-progress Rust minidump infrastructure, in which case we need to
finish full support for Rust in libutil first.
Sample report:
Lix crashed. This is a bug. We would appreciate if you report it at https://git.lix.systems/lix-project/lix/issues with the following information included:
Exception: std::runtime_error: lol
Stack trace:
0# nix::printStackTrace() in /home/jade/lix/lix3/build/src/nix/../libutil/liblixutil.so
1# 0x000073C9862331F2 in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
2# 0x000073C985F2E21A in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
3# 0x000073C985F2E285 in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
4# nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
5# 0x00005CF65B6B048B in /home/jade/lix/lix3/build/src/nix/nix
6# 0x000073C985C8810E in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
7# __libc_start_main in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
8# 0x00005CF65B610335 in /home/jade/lix/lix3/build/src/nix/nix
Change-Id: I1a9f6d349b617fd7145a37159b78ecb9382cb4e9
This caused an infinite loop before since it would just keep asking the
underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, as well as just patching the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
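The guard itself is tiny; roughly the following, where the `downstream` sink stands in for the decompression step and is an assumption rather than the actual callback signature:

    #include <functional>
    #include <string_view>

    void writeChunk(std::string_view data,
                    const std::function<void(std::string_view)> & downstream)
    {
        // curl never hands us a body for HEAD responses, and other responses
        // may be empty too; decompressors are allowed to reject empty or
        // truncated streams, so don't feed them anything in that case.
        if (data.empty()) return;
        downstream(data);
    }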
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
This test suite was in desperate need of using the parameterization
available with gtest, and was a bunch of useless duplicated code. At
least now it's not duplicated code, though it still probably should be
more full of property tests.
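For reference, the gtest pattern in question looks roughly like this; the fixture and values below are made up rather than taken from the actual suite:

    #include <gtest/gtest.h>
    #include <string>
    #include <utility>

    struct RewriteTest : testing::TestWithParam<std::pair<std::string, std::string>> {};

    // One shared test body instead of a copy-pasted TEST() per case.
    TEST_P(RewriteTest, sizeIsPreserved)
    {
        auto [input, expected] = GetParam();
        EXPECT_EQ(input.size(), expected.size());
    }

    INSTANTIATE_TEST_SUITE_P(
        examples, RewriteTest,
        testing::Values(
            std::pair<std::string, std::string>{"foo", "bar"},
            std::pair<std::string, std::string>{"quux", "zyzz"}));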
Change-Id: Ia8ccee7ef4f02b2fa40417b79aa8c8f0626ea479
Fixes:
- Identifiers starting with _ are prohibited
- Some driveby header dependency cleaning which wound up requiring some
extra fixups.
- Fucking C style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them with references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so we get 100% of the bits
being entropy, then memcpying the start of the hash (see the sketch
after this list). If this shows up in a profile we should just make
the hash parser faster with a lookup table or something sensible like
that.
- This horrendous bit of UB which I thankfully slapped a deprecation
warning on, built, and it didn't trigger anywhere so it was dead
code and I just deleted it. But holy crap you *cannot* do that.
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, one wrong
suspicious-sizeof lint triggered by the template being instantiated
with only pointers, but the calculation being correct for both
pointers and not-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
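As a sketch of the StorePath hash change mentioned above, where `digest` is assumed to already be the decoded hash bytes rather than the base32 text of the store path:

    #include <algorithm>
    #include <cstddef>
    #include <cstring>
    #include <vector>

    std::size_t hashPrefix(const std::vector<unsigned char> & digest)
    {
        // every byte here is entropy, so taking a prefix is fine.
        std::size_t result = 0;
        std::memcpy(&result, digest.data(),
                    std::min(sizeof(result), digest.size()));
        return result;
    }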
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
* changes:
sqlite: add a Use::fromStrNullable
util: implement charptr_cast
tree-wide: fix a pile of lints
refactor: make HashType and Base enum classes for type safety
build: integrate clang-tidy into CI
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it was to begin with)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function. Maybe
in the future we should ban the function from not being noexcept, but
that is not today.
- Makes various locally-used exceptions inherit from std::exception.
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
The |> operator is a reverse function application operator with low
binding strength to replace lib.pipe. Implements RFC 148; see the RFC
text for more details. Closes #438.
Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass it around in the parser::State.
This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.
Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but this is
blocked on boost coroutines removal at this point tbh.
Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
When the configured maximum depth has been reached, attribute sets and lists
are printed with an ellipsis to indicate the elision of nested items.
Previously, this happened even when the structure being printed was empty, so
that there was nothing to elide in the first place. This is confusing, so stop
doing it.
Change-Id: I0016970dad3e42625e085dc896e6f476b21226c9
The repeated value detection logic exists so that the occurrence of large
common substructures does not fill up the screen or the computer's memory.
However, empty attribute sets and derivations (when their detection is enabled)
are always cheap to print, and in practice I have observed them to make up a
significant majority of the cases where I was annoyed by the repeated value
detection kicking in. Furthermore, `nix-instantiate --eval` already disables
this logic for empty attribute sets, and empty lists are already exempted
everywhere. For these reasons, always print empty attribute sets and
derivations as what they are.
Change-Id: I5dac8e7739f9d726b76fd0521ec46f38af94463f
this is cursed. deeply and profoundly cursed. under NO CIRCUMSTANCES
must protocol serializer helpers be applied to temporaries! doing so
will inevitably cause dangling references and cause the entire thing
to crash. we need to do this even so to get rid of boost coroutines,
and likewise to encapsulate the serializers we suffer today at least
a little bit to allow a gradual migration to an actual IPC protocol.
(this isn't a problem that's unique to generators. c++ coroutines in
general cannot safely take references to arbitrary temporaries since
c++ does not have a lifetime system that can make this safe. -sigh-)
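a tiny illustration of the failure mode, written with C++23 std::generator instead of our own coroutine types:

    #include <generator>
    #include <iostream>
    #include <string>

    // the coroutine frame stores only a reference to the argument.
    std::generator<char> chars(const std::string & s)
    {
        for (char c : s)
            co_yield c;
    }

    int main()
    {
        // the temporary dies as soon as chars() returns its suspended
        // generator, before a single element has been produced...
        auto g = chars(std::string("temporary"));
        for (char c : g)    // ...so this reads freed memory, the same
            std::cout << c; // dangling-reference problem described above.
    }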
Change-Id: I2921ba451e04d86798752d140885d3c5cc08e146
This also bans various sneaking of negative numbers from the language
into unsuspecting builtins as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks; however, we put in checks anyway because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was a bad UX, and we didn't even entirely notice that we
had done this at all until it was reported as a bug a couple of months
later (which is, to be fair, that flag working as intended). It's got
enough production time that, aside from code that is IMHO buggy (and
which is, in any case, not in nixpkgs) such as
#445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
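For the record, "throwing errors if they would" amounts to checked arithmetic along these lines; a sketch, not the exact Lix implementation:

    #include <cstdint>
    #include <stdexcept>

    int64_t checkedAdd(int64_t a, int64_t b)
    {
        int64_t result;
        // __builtin_add_overflow reports the overflow instead of invoking
        // undefined behaviour, so we can throw a proper evaluator error
        // rather than trapping via -fsanitize=signed-overflow.
        if (__builtin_add_overflow(a, b, &result))
            throw std::overflow_error("integer overflow in Nix expression");
        return result;
    }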
This change will be ported to CppNix as well, to maintain language
consistency.
Fixes: #423
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
the rewriting sink was just broken. when given a rewrite set that
contained a key that was also a proper infix of another key, it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this duplicates rewriteStrings;
to avoid this we'll rewrite rewriteStrings to use RewritingSource
in a new mode that'll allow rewrites we had previously forbidden.
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0
generators are a better basis for serializers than streaming into sinks
as we do currently for many reasons, such as being usable as sources if
one wishes to (without requiring an intermediate sink to serialize full
data sets into memory, or boost coroutines to turn sinks into sources),
composing more naturally (as one can just yield a sub-generator instead
of being forced to wrap entire substreams into clunky functions or even
more clunky custom types to implement operator<< on), allowing wrappers
to transform data with clear ownership semantics (removing the need for
explicit memory allocations and Source wrappers), and many other things
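as an illustration of the composition point, "just yield a sub-generator" looks roughly like this, written with C++23 std::generator rather than Lix's own Generator type, and with a made-up framing format:

    #include <generator>
    #include <iostream>
    #include <ranges>
    #include <string>
    #include <string_view>
    #include <vector>

    // made-up framing: each string becomes "<length>:<bytes>".
    std::generator<char> serializeString(std::string_view s)
    {
        std::string len = std::to_string(s.size());
        for (char c : len) co_yield c;
        co_yield ':';
        for (char c : s) co_yield c;
    }

    // composing is just another co_yield; no sinks, no operator<< wrappers.
    std::generator<char> serializeStrings(const std::vector<std::string> & ss)
    {
        for (const auto & s : ss)
            co_yield std::ranges::elements_of(serializeString(s));
    }

    int main()
    {
        std::vector<std::string> strings{"foo", "barbaz"};
        for (char c : serializeStrings(strings))
            std::cout << c; // prints 3:foo6:barbaz
        std::cout << '\n';
    }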
Change-Id: I361d89ff556354f6930d9204f55117565f2f7f20
this will be the basis of non-boost coroutines in lix. anything that is
a boost coroutine *should* be representable with a Generator coroutine,
and many things that are not currently boost coroutines but behave much
like one (such as, notably, serializers) should be as well. this allows
us to greatly simplify many things that look like iteration but aren't.
Change-Id: I2cebcefa0148b631fb30df4c8cfa92167a407e34
Previously, the progress bar had two subtly different states in which the bar
would not actually render, both with their own shortcomings: inactive (which
was irreversible) and paused (reversible, but swallowing logs). Furthermore,
there was no way of resetting the statistics, so a very bad solution was
implemented (243c0f18da) that would create a new
logger for each line of the repl, leaking the previous one and discarding the
value of printBuildLogs. Finally, if stderr was not attached to a TTY, the
update thread was started even though the logger was not active, violating the
invariant required by the destructor (which is not observed because the logger
is leaked).
In this commit, the two aforementioned states are unified into a single one,
which can be exited again, correctly upholds the invariant that the update
thread is only running while the progress bar is active, and does not swallow
logs. The latter change in behavior is not expected to be a problem in the
rare cases where the paused state was used before, since other loggers (like
the simple one) don't exhibit it anyway. The startProgressBar/stopProgressBar
API is removed due to being a footgun, and a new method for properly resetting
the progress is added.
Co-Authored-By: Qyriad <qyriad@qyriad.me>
Change-Id: I2b7c3eb17d439cd0c16f7b896cfb61239ac7ff3a
If we've consumed the entire input, that doesn't actually mean we're
done decompressing - there might be more output left. This worked (?)
in most cases because the input and output sizes are pretty comparable,
but sometimes they're not and then things get very funny.
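The general shape of the correct loop, shown with zlib's inflate rather than the actual Lix decompression sinks:

    #define ZLIB_CONST
    #include <zlib.h>

    #include <stdexcept>
    #include <string>
    #include <string_view>

    std::string inflateAll(std::string_view compressed)
    {
        z_stream strm{};
        if (inflateInit(&strm) != Z_OK)
            throw std::runtime_error("inflateInit failed");

        strm.next_in = reinterpret_cast<const Bytef *>(compressed.data());
        strm.avail_in = static_cast<uInt>(compressed.size());

        std::string out;
        char buf[16384];
        int ret;
        do {
            strm.next_out = reinterpret_cast<Bytef *>(buf);
            strm.avail_out = sizeof(buf);
            ret = inflate(&strm, Z_NO_FLUSH);
            if (ret != Z_OK && ret != Z_STREAM_END) {
                inflateEnd(&strm);
                throw std::runtime_error("decompression failed");
            }
            out.append(buf, sizeof(buf) - strm.avail_out);
            // do *not* stop just because avail_in hit zero: the decompressor
            // may still have buffered output to hand back.
        } while (ret != Z_STREAM_END);

        inflateEnd(&strm);
        return out;
    }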
Change-Id: I73435a654a911b8ce25119f713b80706c5783c1b