this only ever worked for empty uploads, and there it worked only by
complete accident: curl was asked to send more data than the wrapper
would provide, which curl would reject and report as an error. the
error would cause a retry with even less data to send, until the
transfer finally failed by running into the retry limit. let's just
forbid all this.
Change-Id: I229a94b3b8b33e2c6cdb8ea19edd57cd6740e6c6
This adds a new temp-dir setting for controlling the temporary directory
without having to change the `TMPDIR` env var. This can be used, for
example, to keep temporary files on a case-sensitive store on macOS
without changing the `TMPDIR` var used by interactive shells or commands
invoked with `nix run`.
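For example, a hypothetical nix.conf entry (the path is purely
illustrative):

    temp-dir = /Volumes/CaseSensitive/tmp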
This also stops unsetting `TMPDIR` on darwin when the env var value
starts with `/var/folders/`, preferring instead to just do the check
when reading `TMPDIR`. This way the inherited `TMPDIR` env var is
preserved for child processes (such as interactive shells).
As a side effect this changes the behavior of `nix-build -o ''` to act
like `nix-build --no-out-link` instead of failing with an error caused
by trying to create a symlink at the cwd.
Fixes: #253
Fixes: #112
Change-Id: I9ee826323f2deca62854715a77ca7a373a948a29
don't pause the entire curl thread. we have multiple consumer threads
after all, not just one, so stalling all of them is likely not great.
note that libcurl advises against using transfer pauses if compressed
encodings are allowed and automatically decoded. this should not lead
to problems in practice because our data is usually not compressed to
such a degree that curl buffering *uncompressed* data matters. should
this cause problems we can reintroduce the whole-thread pause, but we
will probably get away with this until the entire file transfer class
is made kj::Promise-using async (and *then* curl can be hardpaused if
it cannot get rid of its data, solving the problem once and for all).
Change-Id: I218e41bfa5a27c7454eafb0bdb54f2a29a7f6493
it's no longer needed. `download` can do everything `enqueueDownload`
did, and a lot more, e.g. not blocking the calling thread.
Change-Id: I4b36235ed707c92d117b4c33efa3db50d26f9a84
return it as a separate item in a pair instead. this will let us remove
enqueueDownload() in favor of returning metadata from download() itself
Change-Id: I74fad2ca15f920da1eefabc950c2baa2c360f2ba
it's just a uri and some headers now. those can be function arguments
with no loss of clarity. *actual* additional arguments, for example a
TLS context with additional certificates, could be added on a new and
improved FileTransfer class that carries not just a backend reference
but some real, visible context for its transfers. curl not being very
multi-threading-friendly when using multi handles will make sharing a
bit hard anyway once we drop the single global download worker thread
Change-Id: Id2112c95cbd118c6d920488f38d272d7da926460
we don't even need this outside of tests. maybe we should not do
automatic retries at this level at all and use retrying wrappers
instead? at some point we may have to do this, but not just yet.
Change-Id: If0088aa55215be81f1770c25b3bb1b5268c65cf8
Only overridden settings are sent to the daemon, and we're going to do the same
for the build hook too. We need to ensure that overridden settings are in fact
consistently marked as such, so that they actually get sent.
Change-Id: I7cd58d925702f86cf2c35ad121eb191ceb62a355
There was a bug report about a potential call to `memcpy` with a null
pointer which is not reproducible:
#492
This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.
To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.
This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.
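A rough, self-contained sketch of the shape of the guard, assuming the read
copies out of an internal buffer (names are simplified and do not match the
actual `InnerSource::read`):

    #include <algorithm>
    #include <cstring>
    #include <string_view>

    size_t readFromBuffer(std::string_view & buffered, char * out, size_t len)
    {
        size_t n = std::min(len, buffered.size());
        // early return before memcpy when there is nothing to copy, so that
        // memcpy never sees a possibly-null pointer together with length 0
        if (n == 0)
            return 0;
        std::memcpy(out, buffered.data(), n);
        buffered.remove_prefix(n);
        return n;
    }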
Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
The test suite can load the global configuration files under certain
circumstances, and, though we would really rather it didn't ever do that
at all, we should at least break the mechanism.
Fixes: #474
Change-Id: Ib27cb43dd5dfaa70ac491c395b5ba308fd7bd289
This moves the "legacy"/"nix2" commands under a new `src/legacy/`
directory, instead of being scattered around in a bunch of different
directories.
A new `liblegacy` build target is defined, and the `nix` binary is
linked against it.
Then, `RegisterLegacyCommand` is replaced with `LegacyCommand::add`
calls in functions like `registerNixCollectGarbage()`. These
registration functions are called explicitly in `src/nix/main.cc`.
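A sketch of the resulting registration pattern; the exact signatures of
`LegacyCommand::add` and of the command entry points are guessed here, not
copied from the Lix sources:

    #include <functional>
    #include <iostream>
    #include <map>
    #include <string>
    #include <utility>

    struct LegacyCommand {
        using MainFn = std::function<int(int, char **)>;
        static std::map<std::string, MainFn> & commands() {
            static std::map<std::string, MainFn> cmds;
            return cmds;
        }
        static void add(const std::string & name, MainFn fn) {
            commands().emplace(name, std::move(fn));
        }
    };

    // in liblegacy: one explicit registration function per command,
    // replacing the old static RegisterLegacyCommand initializers
    void registerNixCollectGarbage() {
        LegacyCommand::add("nix-collect-garbage", [](int, char **) {
            std::cout << "would collect garbage here\n";
            return 0;
        });
    }

    // in src/nix/main.cc: call the registration functions explicitly,
    // then dispatch on the invoked command name
    int main(int argc, char ** argv) {
        registerNixCollectGarbage();
        auto & cmds = LegacyCommand::commands();
        auto it = cmds.find("nix-collect-garbage");
        return it != cmds.end() ? it->second(argc, argv) : 1;
    }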
See: #359
Change-Id: Id450ffc3f793374907599cfcc121863b792aac1a
like kj::joinPromisesFailFast this allows waiting for the results of
multiple promises at once, but unlike it, results become available as
individual promises complete rather than only once all of them have
completed (or any of them has failed).
Change-Id: I0e4a37e7bd90651d56b33d0bc5afbadc56cde70c
like a normal semaphore, but with awaitable acquire actions. this is
primarily intended as an intermediate concurrency limiting device in
the Worker code, but it may find other uses over time. we do not use
std::counting_semaphore as a base because its counter is not
inspectable, which Worker will need. we also do not need atomic
operations for cross-thread consistency since we don't have multiple
threads (thanks to kj event loops being confined to a single thread)
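a much-simplified sketch of the concept, not the actual class or its
interface, assuming everything runs on the single thread that owns the kj
event loop:

    #include <cstddef>
    #include <deque>
    #include <kj/async.h>

    class AsyncSemaphore {
        size_t available;
        std::deque<kj::Own<kj::PromiseFulfiller<void>>> waiters;

    public:
        explicit AsyncSemaphore(size_t count) : available(count) {}

        // plain, inspectable counter (unlike std::counting_semaphore)
        size_t free() const { return available; }

        kj::Promise<void> acquire() {
            if (available > 0) {
                available--;
                return kj::READY_NOW;
            }
            auto pair = kj::newPromiseAndFulfiller<void>();
            waiters.push_back(kj::mv(pair.fulfiller));
            return kj::mv(pair.promise);
        }

        void release() {
            if (!waiters.empty()) {
                // hand the freed slot directly to the oldest waiter
                auto fulfiller = kj::mv(waiters.front());
                waiters.pop_front();
                fulfiller->fulfill();
            } else {
                available++;
            }
        }
    };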
Change-Id: Ie2bcb107f3a2c0185138330f7cbba4cec6cbdd95
These stack traces kind of suck for the reasons mentioned on the
CppTrace page here (no symbols for inline functions is a major one):
https://github.com/jeremy-rifkin/cpptrace
I would consider using CppTrace if it were packaged, but to be honest, I
think that the more reasonable option is actually to move entirely to
out-of-process crash handling and symbolization.
The reason for this is that if you want to generate anything of
substance on SIGSEGV or really any deadly signal, you are stuck in
async-signal-safe land, which is not a place to be trying to run a
symbolizer. LLVM does it anyway, probably carefully, and chromium *can*
do it on debug builds but in general uses crashpad:
https://source.chromium.org/chromium/chromium/src/+/main:base/debug/stack_trace_posix.cc;l=974;drc=82dff63dbf9db05e9274e11d9128af7b9f51ceaa;bpv=1;bpt=1
However, some stack traces are better than *no* stack traces when we get
mystery exceptions falling out the bottom of the program. I've also
promoted the path for "mystery exceptions falling out the bottom of the
program" to hard-crash and generate a core dump, because although it's
been some months since the last one of these, they are nonetheless
always *atrociously* diagnosed.
We can't improve the crash handling further until either we use Crashpad
(which involves more C++ deps, no thanks) or we put in the ostensibly
work-in-progress Rust minidump infrastructure, in which case we need to
finish full support for Rust in libutil first.
Sample report:
Lix crashed. This is a bug. We would appreciate if you report it at https://git.lix.systems/lix-project/lix/issues with the following information included:
Exception: std::runtime_error: lol
Stack trace:
0# nix::printStackTrace() in /home/jade/lix/lix3/build/src/nix/../libutil/liblixutil.so
1# 0x000073C9862331F2 in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
2# 0x000073C985F2E21A in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
3# 0x000073C985F2E285 in /nix/store/p44qan69linp3ii0xrviypsw2j4qdcp2-gcc-13.2.0-lib/lib/libstdc++.so.6
4# nix::handleExceptions(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::function<void ()>) in /home/jade/lix/lix3/build/src/nix/../libmain/liblixmain.so
5# 0x00005CF65B6B048B in /home/jade/lix/lix3/build/src/nix/nix
6# 0x000073C985C8810E in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
7# __libc_start_main in /nix/store/dbcw19dshdwnxdv5q2g6wldj6syyvq7l-glibc-2.39-52/lib/libc.so.6
8# 0x00005CF65B610335 in /home/jade/lix/lix3/build/src/nix/nix
Change-Id: I1a9f6d349b617fd7145a37159b78ecb9382cb4e9
Decompressing an empty input caused an infinite loop before, since the
decompressor would just keep asking the underlying source for more data.
In practice this happened because an HTTP server served a
response to a HEAD request (for which curl will not retrieve any body or
call our write callback function) with Content-Encoding: br, leading to
decompressing nothing at all and going into an infinite loop.
This adds a test to make sure none of our compression methods do that
again, and also patches the HTTP client to never feed empty data
into a compression algorithm (since they absolutely have the right to
throw CompressionError on unexpectedly-short streams!).
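A minimal sketch of the guard with made-up names; the real change lives in
the file transfer code's data callback path:

    #include <functional>
    #include <string_view>

    void feedDecompressor(std::string_view chunk,
        const std::function<void(std::string_view)> & decompressSink)
    {
        // a HEAD response (or any empty body) produces no data at all; never
        // hand an empty chunk to the decompressor, which is allowed to treat
        // it as an unexpectedly short stream
        if (chunk.empty()) return;
        decompressSink(chunk);
    }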
Reported on Matrix: https://matrix.to/#/!lymvtcwDJ7ZA9Npq:lix.systems/$8BWQR_zKxCQDJ40C5NnDo4bQPId3pZ_aoDj2ANP7Itc?via=lix.systems&via=matrix.org&via=tchncs.de
Change-Id: I027566e280f0f569fdb8df40e5ecbf46c211dad1
This test suite was in desperate need of using the parameterization
available with gtest, and was a bunch of useless duplicated code. At
least now it's not duplicated code, though it still probably should be
more full of property tests.
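A generic illustration of the gtest value-parameterization pattern referred
to above; the fixture name and values are made up, not the real suite:

    #include <gtest/gtest.h>
    #include <string>

    class ExampleParamTest : public ::testing::TestWithParam<std::string> {};

    TEST_P(ExampleParamTest, IsNotEmpty)
    {
        // one test body runs once per parameter instead of N pasted copies
        EXPECT_FALSE(GetParam().empty());
    }

    INSTANTIATE_TEST_SUITE_P(Examples, ExampleParamTest,
        ::testing::Values("foo", "bar", "baz"));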
Change-Id: Ia8ccee7ef4f02b2fa40417b79aa8c8f0626ea479
Fixes:
- Identifiers starting with _ are prohibited
- Some drive-by header dependency cleaning, which wound up requiring some
extra fixups.
- Fucking C style casts, man. C++ made these 1000% worse by letting you
also do memory corruption with them with references.
- Remove casts to Expr * where ExprBlackHole is an incomplete type by
introducing an explicitly-cast eBlackHoleAddr as Expr *.
- An incredibly illegal cast of the text bytes of the StorePath hash
into a size_t directly. You can't DO THAT.
Replaced with actually parsing the hash so that 100% of the bits are
entropy, then memcpying the start of the hash. If this shows
up in a profile we should just make the hash parser faster with a
lookup table or something sensible like that.
- This horrendous bit of UB: I thankfully slapped a deprecation warning
on it, built, and the warning didn't trigger anywhere, so it was dead
code and I just deleted it. But holy crap, you *cannot* do that.
    inline void mkString(const Symbol & s)
    {
        mkString(((const std::string &) s).c_str());
    }
- Some wrong lints. Lots of wrong macro lints, one wrong
suspicious-sizeof lint triggered by the template being instantiated
with only pointers, while the calculation is correct for both
pointers and non-pointers.
- Exceptions in destructors strike again. I tried to catch the
exceptions that might actually happen rather than all the exceptions
imaginable. We can let the runtime hard-kill it on other exceptions
imo.
Change-Id: I71761620846cba64d66ee7ca231b20c061e69710
There have been multiple setting types for paths that are supposed to be
canonicalised, depending on whether zero or one, one, or any number of paths is
to be specified. Naturally, they behaved in slightly different ways in the
code. Simplify things by unifying them and removing special behaviour (mainly
the "multiple paths type can coerce to boolean" thing).
Change-Id: I7c1ce95e9c8e1829a866fb37d679e167811e9705
They are like experimental features, but opt-in instead of opt-out. They
will allow us to gracefully remove language features. See #437
Change-Id: I9ca04cc48e6926750c4d622c2b229b25cc142c42
* changes:
sqlite: add a Use::fromStrNullable
util: implement charptr_cast
tree-wide: fix a pile of lints
refactor: make HashType and Base enum classes for type safety
build: integrate clang-tidy into CI
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it was to begin with)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function; a
sketch of the idea follows this list. Maybe in the future we should ban
the function from not being noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
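A minimal sketch of the Finally point above (not the actual Lix class): the
destructor's noexcept-ness mirrors that of the wrapped callable instead of
being unconditionally noexcept:

    #include <utility>

    template<typename Fn>
    class Finally
    {
        Fn fn;
    public:
        explicit Finally(Fn fn) : fn(std::move(fn)) {}
        Finally(const Finally &) = delete;
        // noexcept(noexcept(fn())) propagates the callable's own guarantee,
        // so a throwing cleanup still unwinds normally
        ~Finally() noexcept(noexcept(fn())) { fn(); }
    };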
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
The |> operator is a reverse function application operator with low binding
strength, intended to replace lib.pipe. Implements RFC 148; see the RFC text
for more details. Closes: #438
Change-Id: I21df66e8014e0d4dd9753dd038560a2b0b7fd805
Currently, the parser relies on the global experimental feature flags.
In order to properly test conditional language features, we instead need
to pass it around in the parser::State.
This means that the parser cannot cache the result of isEnabled anymore,
which wouldn't necessarily hurt performance if the function didn't
perform a linear search on the list of enabled features on every single
call. While we could simply evaluate once at the start of parsing and
cache the result in the parser state, the more sustainable solution
would be to fix `isEnabled` such that all callers may profit from the
performance improvement.
Change-Id: Ic9b9c5d882b6270e1114988b63e6064d36c25cf2
What if you could find memory bugs in Lix without really trying very
hard? I've had variously scuffed patches to do this, but this is
blocked on boost coroutines removal at this point tbh.
Change-Id: Id762af076aa06ad51e77a6c17ed10275929ed578
When the configured maximum depth has been reached, attribute sets and lists
are printed with an ellipsis to indicate the elision of nested items. Previously,
this happened even when the structure being printed was empty and no such items
in fact existed. This is confusing, so stop doing it.
Change-Id: I0016970dad3e42625e085dc896e6f476b21226c9
The repeated value detection logic exists so that the occurrence of large
common substructures does not fill up the screen or the computer's memory.
However, empty attribute sets and derivations (when their detection is enabled)
are always cheap to print, and in practice I have observed them to make up a
significant majority of the cases where I was annoyed by the repeated value
detection kicking in. Furthermore, `nix-instantiate --eval` already disables
this logic for empty attribute sets, and empty lists are already exempted
everywhere. For these reasons, always print empty attribute sets and
derivations as what they are.
Change-Id: I5dac8e7739f9d726b76fd0521ec46f38af94463f
this is cursed. deeply and profoundly cursed. under NO CIRCUMSTANCES
must protocol serializer helpers be applied to temporaries! doing so
will inevitably cause dangling references and cause the entire thing
to crash. we need to do this even so, both to get rid of boost coroutines
and to encapsulate the serializers we suffer today at least a little bit,
allowing a gradual migration to an actual IPC protocol.
(this isn't a problem that's unique to generators. c++ coroutines in
general cannot safely take references to arbitrary temporaries since
c++ does not have a lifetime system that can make this safe. -sigh-)
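a small, self-contained illustration of the coroutine hazard above (the
generator type here is a made-up minimal one, not the Lix Generator):

    #include <coroutine>
    #include <iostream>
    #include <string>
    #include <utility>

    struct StringGen {
        struct promise_type {
            std::string current;
            StringGen get_return_object() {
                return {std::coroutine_handle<promise_type>::from_promise(*this)};
            }
            std::suspend_always initial_suspend() { return {}; }
            std::suspend_always final_suspend() noexcept { return {}; }
            std::suspend_always yield_value(std::string v) {
                current = std::move(v);
                return {};
            }
            void return_void() {}
            void unhandled_exception() {}
        };

        std::coroutine_handle<promise_type> h;
        StringGen(std::coroutine_handle<promise_type> h) : h(h) {}
        StringGen(StringGen && other) : h(other.h) { other.h = {}; }
        StringGen(const StringGen &) = delete;
        ~StringGen() { if (h) h.destroy(); }
    };

    // the reference parameter is stored in the coroutine frame, but the
    // temporary it binds to dies at the end of the call expression
    StringGen frame(const std::string & s) {
        co_yield "prefix " + s;   // may read a destroyed temporary
    }

    int main() {
        auto g = frame(std::string("temporary"));  // temporary dies here
        g.h.resume();                              // undefined behaviour
        std::cout << g.h.promise().current << "\n";
    }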
Change-Id: I2921ba451e04d86798752d140885d3c5cc08e146
This also bans various sneaking of negative numbers from the language
into unsuspecting builtins as was exposed while auditing the
consequences of changing the Nix language integer type to a newtype.
It's unlikely that this change comprehensively ensures correctness when
passing integers out of the Nix language and we should probably add a
checked-narrowing function or something similar, but that's out of scope
for the immediate change.
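A rough sketch of the kind of checked arithmetic this implies; the real Lix
implementation differs in naming and in how the error reaches the evaluator:

    #include <cstdint>
    #include <stdexcept>

    int64_t checkedAdd(int64_t a, int64_t b)
    {
        int64_t result;
        // __builtin_add_overflow (GCC/Clang) reports overflow instead of
        // letting signed overflow be undefined behaviour
        if (__builtin_add_overflow(a, b, &result))
            throw std::overflow_error("integer overflow in Nix language arithmetic");
        return result;
    }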
During the development of this I found a few fun facts about the
language:
- You could overflow integers by converting from unsigned JSON values.
- You could overflow unsigned integers by converting negative numbers
into them when going into Nix config, into fetchTree, and into flake
inputs.
The flake inputs and Nix config cannot actually be tested properly
since they both ban thunks, however, we put in checks anyway because
it's possible these could somehow be used to do such shenanigans some
other way.
Note that Lix has banned Nix language integer overflows since the very
first public beta, but threw a SIGILL about them because we run with
-fsanitize=signed-integer-overflow -fsanitize-undefined-trap-on-error in
production builds. Since the Nix language uses signed integers, overflow
was simply undefined behaviour, and since we defined that to trap, it
did.
Trapping on it was bad UX, but we didn't even entirely notice
that we had done this at all until it was reported as a bug a couple of
months later (which is, to be fair, that flag working as intended). It has
had enough production time that, aside from code that is IMHO buggy
(and which is, in any case, not in nixpkgs) such as
#445, we don't think
anyone doing anything reasonable actually depends on wrapping overflow.
Even for weird use cases such as doing funny bit crimes, it doesn't make
sense IMO to have wrapping behaviour, since two's complement arithmetic
overflow behaviour is so *aggressively* not what you want for *any* kind
of mathematics/algorithms. The Nix language exists for package
management, a domain where bit crimes are already only dubiously in
scope to begin with, and it makes a lot more sense for that domain for
the integers to never lose precision, either by throwing errors if they
would, or by being arbitrary-precision.
This change will be ported to CppNix as well, to maintain language
consistency.
Fixes: #423
Change-Id: I51f253840c4af2ea5422b8a420aa5fafbf8fae75
the rewriting sink was just broken. when given a rewrite set that
contained a key that is also a proper infix of another key it was
possible to produce an incorrectly rewritten result if the writer
used the wrong block size. fixing this duplicates logic from
rewriteStrings; to avoid that we'll rewrite rewriteStrings to use
RewritingSource in a new mode that'll allow rewrites we had previously
forbidden.
Change-Id: I57fa0a9a994e654e11d07172b8e31d15f0b7e8c0