we only wait for the worker thread to remove the handle from its multi
and then signal us back. once that's done the transfer can be cleaned
up independently of the multi handle, and no further transfer progress
is made. we also no longer remove transfer items from multi handles in
the item destructor; only the worker thread can add transfers, and as
such only the worker thread should be responsible for removing them
again later.
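a rough sketch of the handshake this describes, with illustrative names
(TransferItem, workerRemove and waitForRemoval are not the actual lix
identifiers): the worker thread is the only one that touches the multi
handle, and the caller merely waits to be signalled back.

```cpp
#include <curl/curl.h>
#include <condition_variable>
#include <mutex>

struct TransferItem {
    CURL * easy = nullptr;
    std::mutex m;
    std::condition_variable cv;
    bool removedFromMulti = false;
};

// runs on the worker thread, which owns the multi handle
void workerRemove(CURLM * multi, TransferItem & item)
{
    curl_multi_remove_handle(multi, item.easy); // only the worker touches the multi
    {
        std::lock_guard<std::mutex> lock(item.m);
        item.removedFromMulti = true;
    }
    item.cv.notify_one(); // signal the waiting caller back
}

// runs on the caller's thread; it never touches the multi handle itself
void waitForRemoval(TransferItem & item)
{
    std::unique_lock<std::mutex> lock(item.m);
    item.cv.wait(lock, [&] { return item.removedFromMulti; });
    // from here on the transfer can be cleaned up independently of the multi
}
```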
Change-Id: I6ff57bb5e7c1e91faf8657b257e73d6a72aa928e
this only ever worked for empty uploads, and there it worked only by
complete accident: curl was asked to send more data than the wrapper
would provide, which curl would not like and report as an error. the
error would cause a retry with even less data to send, until finally
failing by running into the retry limit. let's just forbid all this.
Change-Id: I229a94b3b8b33e2c6cdb8ea19edd57cd6740e6c6
we'll need this shared code for kjified transfers that don't use
sources. it's a while out, but we can clean this up now already.
Change-Id: Ife8c160e6ab379761362d6c54aba05093deee99e
since 4ae6fb5a8f, dropping a source of a
download might not properly cancel the associated curl transfer after
the transfer was paused. we have also not unpaused transfers often
enough: only once the transfer buffer had been drained in its entirety.
Change-Id: Ic9298d9df71daa0f3d1c3fd718ed441edae9e863
Most builtins are now generated from data too, with two exceptions:
* Undocumented builtins, since supporting them would add complexity to the
generator, the harms of the current implementation mostly don't apply, and
the proper fix is to document them.
* `derivation` is somewhat magic (it is a function, but defined in the code as
a constant), so the current treatment of having it separately documented is
kept (for now, at least).
Since it is slightly easier to do and probably a good idea anyway, the builtin
function registrations generated this way are now processed directly in code
and don't go through global variables any more.
Unfortunately, a slight breaking change is introduced because the order of the
builtins' names in the symbol table changes. Hopefully, this will turn out to
not matter in practice.
Change-Id: I7b4379a93ae380b6524e41a916a21c5c6f70555e
Currently, a bunch of documentation is generated by embedding parts of it in
the nix executable, getting it out again by running it, and then postprocessing
the output. This is bad, since it creates a pointless dependency of the
documentation on the executable, and also makes documentation generation
impossible when cross-compiling.
Instead, both the code and the documentation should be generated from data, see
#292. Here we start applying
this approach to the experimental and deprecated features, which are done in
one go since the technical implementation is very similar.
Of course, the actual benefits are not realised yet, since the offending
pattern is used in several more places. These will be fixed later.
Change-Id: I4c802052cc7e865c61119a34b8f1063c4decc9cb
Commit 4dbbd721eb intended to mark all settings
as overridden when they are. Unfortunately, due to an oversight, this marking
was accidentally performed in the implementation details of non-appendable
options. Move it to the common codepath so that it works for appendable options
too.
Fixes: #573
Change-Id: Idc3402bac48b19d832acd9b553e16e5791470c26
So I recently saw it for the first time in the wild. I liked that you
get interactively asked about the nix.conf settings from the flake, but
there were a few minor things that I'd like to see changed:
* The `(y/N)` was somewhere in the middle of the line. Moved it to
the end. At first I assumed it was a bug because another thread had
written into my terminal while I was answering the question.
* I had to say no four times for a single flake with two options. So if
you already know you don't want any of the config for _this_ flake, I
found a `No to all` switch that ignores the rest of the nix.conf
settings a little more ergonomic than having to stop the invocation,
look up the exact wording of `--no-accept-flake-config`, and restart
it. Hence, I added it.
* Added a note about where the choices of which settings to trust are
persisted. My initial assumption was that this went into `nix.conf`,
which is not writable on NixOS, so I said no there as well.
Change-Id: I0a0d9c403f0662df4707697a77f08e6cd003ec6f
This adds a new temp-dir setting for controlling the temporary directory
without having to change the TMPDIR env var. This can be used to e.g.
use a path on a case-sensitive store on macOS for temporary files
without changing the TMPDIR var used by interactive shells or commands
invoked with `nix run`.
This also stops unsetting `TMPDIR` on darwin when the env var value
starts with `/var/folders/`, preferring instead to just do the check
when reading `TMPDIR`. This way the inherited `TMPDIR` env var is
preserved for child processes (such as interactive shells).
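As a rough sketch of where the check moves (the helper name is
hypothetical, not the actual Lix code), something along these lines
replaces the old startup-time unsetting of `TMPDIR`:

```cpp
#include <cstdlib>
#include <string>

// Hypothetical helper: the problematic darwin value is ignored only when we
// pick a temporary directory ourselves, so the inherited TMPDIR still reaches
// child processes such as interactive shells.
std::string defaultTempDir()
{
    const char * tmpdir = std::getenv("TMPDIR");
#ifdef __APPLE__
    if (tmpdir && std::string(tmpdir).rfind("/var/folders/", 0) == 0)
        tmpdir = nullptr; // previously handled by unsetting TMPDIR at startup
#endif
    return tmpdir && *tmpdir ? tmpdir : "/tmp";
}
```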
As a side effect this changes the behavior of `nix-build -o ''` to act
like `nix-build --no-out-link` instead of failing with an error caused
by trying to create a symlink at the cwd.
Fixes: #253
Fixes: #112
Change-Id: I9ee826323f2deca62854715a77ca7a373a948a29
libcurl may call callbacks during unpause. if libcurl calls these
callbacks often enough they may find that they've exhausted their
allotted buffer space, at which point they will call dataCallback
as provided in enqueueFileTransfer. download() provides callbacks
that have their own state locks and may call unpause on transfers
while holding *their* state locks, which they share with
curlFileTransfer. it was
possible to cause a deadlock between these two if the curl worker
thread tried to provide some data to a callback with the transfer
state lock held and the user transfer simultaneously unpaused its
associated curl transfer, with its own state lock held. obviously
not a great situation, but avoidable by not holding any lock when
we unpause transfers and execute their callbacks, as is otherwise
the case when a transfer runs normally and is never paused at all
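a minimal sketch of the shape of the fix, with illustrative names
rather than the actual lix code: unpause requests are collected under
our state lock, but curl_easy_pause() only runs after the lock has been
dropped, so any callbacks curl invokes from inside it are free to take
their own locks.

```cpp
#include <curl/curl.h>
#include <mutex>
#include <vector>

struct WorkerState {
    std::mutex lock;
    std::vector<CURL *> unpauseRequests; // filled by consumer threads
};

void processUnpauses(WorkerState & state)
{
    std::vector<CURL *> toUnpause;
    {
        std::lock_guard<std::mutex> guard(state.lock);
        std::swap(toUnpause, state.unpauseRequests);
    } // state lock released before calling into curl

    for (CURL * easy : toUnpause) {
        // callbacks running inside this call may take whatever locks they need
        curl_easy_pause(easy, CURLPAUSE_CONT);
    }
}
```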
Change-Id: I58556d292adaf7dfb14001d3a6c5c38fa71994da
FreeBSD was left out of a few refactors over the last few months.
Add a header and register the store implementation so it's back
to working as well as it was before.
Change-Id: I6f7b2ceb557c290f2d9e0d7f207b3fea87b353ed
The devShell relied on several packages directly from `pkgs`
or used with non-splice-aware functions.
These would be built for the host system, making them useless
in a devShell for the build system.
Make sure that all packages are for the build system when needed.
Some other minor changes were also required:
* Make devShells use `clangStdenv` because GCC is currently broken
* Disable rr when making a cross stdenv
Change-Id: Iee5f8a1a0c594139a50f2261b203491bd1644866
The lix package currently fails unless it's using a clang stdenv.
However, the flake's cross build outputs (e.g. `packages.x86_64-linux.nix-armv7l-linux`)
used the default stdenv, normally gcc.
Replace this with clang to fix the package build.
Also take this opportunity to remove the no longer necessary `useLLVM = true`
override on FreeBSD. Since 24.05, nixpkgs always sets `useLLVM = true`
on FreeBSD in `lib.systems.elaborate`.
Change-Id: I939302e4f6385291fa9e582d38d908c42f6db89a
This reverts commit 02c35ea9df.
Reason for revert: this code path is also used for `Input::getRev()`, i.e. flake VCS revision validation, which, in the case of Git, uses SHA1.
As a result, this caused too much noise due to SHA1 revisions in flakes.
Change-Id: I8064c1ebc26e4e83b627f0803a7a9ba56cfe1f37
random() is not thread-safe: it relies on global state, and calling it
from worker threads can result in multiple threads producing the same
value. It also doesn't guarantee unique values even in single-threaded
use.
Use an atomic counter for the use-case of generating temporary paths,
and switch to a thread-local RNG for the one remaining call.
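A minimal sketch of the two replacements (the helper names are
hypothetical, not the actual code):

```cpp
#include <atomic>
#include <cstdint>
#include <random>
#include <string>

// unique temporary path suffixes: an atomic counter cannot hand out the same
// value to two threads, unlike random()
static std::string tempSuffix()
{
    static std::atomic<uint64_t> counter{0};
    return std::to_string(counter.fetch_add(1, std::memory_order_relaxed));
}

// the one remaining "just give me a random number" call site: each thread
// seeds and owns its own engine, so there is no shared global state
static uint64_t threadLocalRandom()
{
    thread_local std::mt19937_64 rng{std::random_device{}()};
    return rng();
}
```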
This will probably fix https://github.com/NixOS/nix/issues/7273 though
I'm not willing to risk corrupting my store to find out.
Change-Id: I4c4c4c9796613573ffefd29cc8efe3d07839facc
Also tweak `pathAccessible` to ignore other relevant errors. It was
documented as ignoring permission errors, but it was only ignoring
`EPERM`, which comes from the darwin sandbox, and not `EACCES`,
which is the real permission error. I figured it also makes sense to
ignore `ELOOP`.
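A sketch of the resulting behaviour (simplified; the real
`pathAccessible` differs in detail, and the set of ignored errors shown
here is only illustrative):

```cpp
#include <cerrno>
#include <stdexcept>
#include <string>
#include <sys/stat.h>

bool pathAccessible(const std::string & path)
{
    struct stat st;
    if (lstat(path.c_str(), &st) == 0)
        return true;
    switch (errno) {
    case EPERM:  // darwin sandbox
    case EACCES: // the "real" permission error
    case ELOOP:  // symlink loop while resolving the path
        return false;
    default:
        throw std::runtime_error("cannot stat '" + path + "'");
    }
}
```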
Fixes: #560
Change-Id: Ibb849b68d07386eb80afb52b57f7d12b3a48a202
curl handles timeouts internally when it acts as the main event loop. we
only need to wake up to add new transfers to the multi handle or restart
an existing transfer that has passed its restart wait time. periodically
waking up is not required for curl, and we don't need it any more either
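the worker loop this implies looks roughly like the following (not the
actual code; curl_multi_wakeup() is what lets another thread interrupt
the poll when a new transfer needs to be added):

```cpp
#include <curl/curl.h>
#include <atomic>

void workerLoop(CURLM * multi, std::atomic<bool> & quit)
{
    while (!quit) {
        int running = 0;
        curl_multi_perform(multi, &running);
        // curl returns early for its own timeouts and retries; the large cap
        // is only a backstop, and curl_multi_wakeup() cuts the wait short
        // when new transfers arrive
        curl_multi_poll(multi, nullptr, 0, 60000, nullptr);
    }
}
```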
Change-Id: Ic774e1d9519f807cda1a89694bc3ede75216f329
don't pause the entire curl thread. we have multiple consumer threads
after all, not just one, so stalling all of them is likely not great.
note that libcurl advises against using transfer pauses if compressed
encodings are allowed and automatically decoded. this should not lead
to problems in practice because our data is usually not compressed to
such a degree that curl buffering *uncompressed* data matters. should
this cause problems we can reintroduce the whole-thread pause, but we
will probably get away with this until the entire file transfer class
is made kj::Promise-using async (and *then* curl can be hardpaused if
it cannot get rid of its data, solving the problem once and for all).
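a sketch of what a per-transfer pause looks like from the write
callback's perspective (illustrative, not the actual callback): only
the transfer whose buffer is full gets paused, while the worker thread
and all other transfers keep running.

```cpp
#include <curl/curl.h>
#include <cstddef>
#include <cstring>

struct Buffer {
    char data[64 * 1024];
    size_t used = 0;
};

size_t writeCallback(char * ptr, size_t size, size_t nmemb, void * userdata)
{
    auto * buf = static_cast<Buffer *>(userdata);
    size_t n = size * nmemb;
    if (buf->used + n > sizeof(buf->data))
        return CURL_WRITEFUNC_PAUSE; // pause just this transfer, not the thread
    std::memcpy(buf->data + buf->used, ptr, n);
    buf->used += n;
    return n;
}
```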
Change-Id: I218e41bfa5a27c7454eafb0bdb54f2a29a7f6493
Sets the `X-GitHub-Api-Version` header to `2022-11-28` for calls to the
GitHub API.
This follows the latest version, as per
https://docs.github.com/en/rest/about-the-rest-api/api-versions?apiVersion=2022-11-28.
This affected the check on whether to use the API versus unauthenticated
calls as well, given the headers would no longer be empty if the
authentication token were missing.
The workaround here is to use a check similar to an existing check for
the token.
In the current implementation, headers are (still) similarly sent to
non-authenticated as well as GitHub on-prem calls.
For what it's worth, manual curl calls with such a header seemed to
break neither unauthenticated calls nor ones to the github.com API.
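For illustration, the header amounts to something like this (the real
code builds its header list elsewhere):

```cpp
#include <curl/curl.h>

struct curl_slist * addGithubApiVersion(struct curl_slist * headers)
{
    return curl_slist_append(headers, "X-GitHub-Api-Version: 2022-11-28");
}
```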
Change-Id: I6e10839e6b99cb65eb451e923b2a64f5d3c0f578
using UTF-8 bullets in the sample avoids locale confusion where Bash
doesn't know to treat `•` as a single character.
Signed-off-by: Dusk Banks <me@bb010g.com>
Change-Id: I829019b66e93e6d33ac3a6641df07d0dd2332a5a
It's only used in a couple of tests, and only in such a way that
replacing it with a random command suffices.
I also removed a few pointless uses of the variable.
Fixes: #376
Change-Id: I90aedb61d64b02f7c9b007e72f9d614cc1b37a2e
Before:

  error: derivation '/nix/store/4spy3nz1661zm15gkybsy1h5f36aliwx-python3.11-test-1.0.0.drv' may not be deterministic: output '/nix/store/ccqcp01zg18wp9iadzmzimqzdi3ll08d-python3.11-test-1.0.0-dist' differs from '/nix/store/ccqcp01zg18wp9iadzmzimqzdi3ll08d-python3.11-test-1.0.0-dist.check'

After:

  error: derivation '4spy3nz1661zm15gkybsy1h5f36aliwx-python3.11-test-1.0.0.drv' may not be deterministic: outputs differ
  output differs: output '/nix/store/ccqcp01zg18wp9iadzmzimqzdi3ll08d-python3.11-test-1.0.0-dist' differs from '/nix/store/ccqcp01zg18wp9iadzmzimqzdi3ll08d-python3.11-test-1.0.0-dist.check'
  output differs: output '/nix/store/yl59v08356i841c560alb0zmk7q16klb-python3.11-test-1.0.0' differs from '/nix/store/yl59v08356i841c560alb0zmk7q16klb-python3.11-test-1.0.0.check'
Change-Id: Ib2871fa602bf1fa9c00e2565b3a2e1b26f908152
curl can't pause downloads of file:// urls, which is very much in the
way of making the curl wrapper fully asynchronous. we can emulate all
the things curl does, but unfortunately curl is *rather extensive* in
its support of frankly weird shit. hopefully this subset of features,
which notably does not include curl readdir support, is enough for us
Change-Id: I5f67768c4b512565655b94b0421270c7dbbd8d11
with the api cleaned up we can suddenly reunify uploads, downloads, and
existence checks through curl in the same wrapper function. uploads and
existence checks simply don't use the result source, and given that all
transfers (or at least *most* transfers to date) go through the network
the few extra allocations do not hurt us at all. even for file:// calls
the overhead won't much matter as going to disk and back *is* expensive
Change-Id: I4f9ca6681a8fc303377b4cf4c63e3363ae32c18b
it's no longer needed. `download` can do everything `enqueueDownload`
did, and a lot more, e.g. not blocking the calling thread.
Change-Id: I4b36235ed707c92d117b4c33efa3db50d26f9a84
this will let us return metadata from FileTransfer::download, which in
turn is necessary to remove enqueueDownload. it also opens avenues for
streaming downloads that keep download metadata instead of dropping it
Change-Id: If0fc6af5eb2aeb689fc866c345c9d7bce4d59f2d