this only ever worked for empty uploads, and there it worked only by
complete accident: curl was asked to send more data than the wrapper
would provide, which curl would not like and report as an error. the
error would cause a retry with even less data to send, until finally
failing by running into the retry limit. let's just forbid all this.
Change-Id: I229a94b3b8b33e2c6cdb8ea19edd57cd6740e6c6
don't pause the entire curl thread. we have multiple consumer threads
after all, not just one, so stalling all of them is likely not great.
note that libcurl advises against using transfer pauses if compressed
encodings are allowed and automatically decoded. this should not lead
to problems in practice because our data is usually not compressed to
such a degree that curl buffering *uncompressed* data matters. should
this cause problems we can reintroduce the whole-thread pause, but we
will probably get away with this until the entire file transfer class
is made kj::Promise-using async (and *then* curl can be hardpaused if
it cannot get rid of its data, solving the problem once and for all).
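roughly, a per-transfer pause looks like this (a sketch only: TransferItem
and its buffer are made-up names here, but CURL_WRITEFUNC_PAUSE and
curl_easy_pause are the actual libcurl mechanisms):

    #include <curl/curl.h>
    #include <string>

    // made-up per-transfer state; the real thing is more involved
    struct TransferItem
    {
        CURL * easy = nullptr;
        std::string buffer;
        size_t maxBuffer = 1024 * 1024;
    };

    // hypothetical write callback registered on each easy handle
    static size_t writeCallback(char * data, size_t size, size_t nmemb, void * userp)
    {
        auto & item = *static_cast<TransferItem *>(userp);
        if (item.buffer.size() >= item.maxBuffer) {
            // pause only this transfer; other transfers on the multi handle
            // keep running and the curl thread itself never blocks
            return CURL_WRITEFUNC_PAUSE;
        }
        item.buffer.append(data, size * nmemb);
        return size * nmemb;
    }

    // later, on the curl thread, once a consumer has drained the buffer:
    //     curl_easy_pause(item.easy, CURLPAUSE_CONT);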
Change-Id: I218e41bfa5a27c7454eafb0bdb54f2a29a7f6493
it's no longer needed. `download` can do everything `enqueueDownload`
did, and a lot more, for instance not blocking the calling thread.
Change-Id: I4b36235ed707c92d117b4c33efa3db50d26f9a84
return it as a separate item in a pair instead. this will let us remove
enqueueDownload() in favor of returning metadata from download() itself.
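the rough shape this is heading towards, with type and parameter names
that are assumptions rather than the final api:

    // metadata rides along in the pair instead of coming back through a
    // separate enqueueDownload() call
    std::pair<FileTransferResult, std::unique_ptr<Source>>
    download(const std::string & uri);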
Change-Id: I74fad2ca15f920da1eefabc950c2baa2c360f2ba
it's just a uri and some headers now. those can be function arguments
with no loss of clarity. *actual* additional arguments, for example a
TLS context with additional certificates, could be added on a new and
improved FileTransfer class that carries not just a backend reference
but some real, visible context for its transfers. curl not being very
multi-threading-friendly when using multi handles will make sharing a
bit hard anyway once we drop the single global download worker thread.
Change-Id: Id2112c95cbd118c6d920488f38d272d7da926460
we don't even need this outside of tests. maybe we should not do
automatic retries at this level at all and use retrying wrappers
instead? at some point we may have to do this, but not just yet.
Change-Id: If0088aa55215be81f1770c25b3bb1b5268c65cf8
There was a bug report about a potential call to `memcpy` with a null
pointer, which is not reproducible:
#492
This occurred in `src/libstore/filetransfer.cc` in `InnerSource::read`.
To ensure that this doesn't happen, an early return is added before
calling `memcpy` if the length of the data to be copied is 0.
This change also adds a test that ensures that when `InnerSource::read`
is called with an empty file, it throws an `EndOfFile` exception.
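A sketch of both halves (member names and the buffer type here are
illustrative, not the exact InnerSource code):

    size_t read(char * data, size_t len) override
    {
        if (buffered.empty() && transferDone)
            throw EndOfFile("transfer finished");
        size_t toCopy = std::min(buffered.size(), len);
        // early return: never call memcpy when there is nothing to copy,
        // which is where the reported null-pointer argument came from
        if (toCopy == 0)
            return 0;
        memcpy(data, buffered.data(), toCopy);
        buffered.remove_prefix(toCopy);
        return toCopy;
    }

    // and the test, roughly: reading from a transfer of an empty file
    // must surface EndOfFile rather than reaching memcpy at all
    ASSERT_THROW(source->read(buf, sizeof(buf)), EndOfFile);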
Change-Id: Ia18149bee9a3488576c864f28475a3a0c9eadfbb
This:
- Converts a bunch of C style casts into C++ casts.
- Removes some very silly pointer subtraction code (which is no more or
less busted on i686 than it was to begin with)
- Fixes some "technically UB" that never had to be UB in the first
place.
- Makes finally follow the noexcept status of the inner function (see
the sketch after this list). Maybe in the future we should ban the
function from not being noexcept, but that is not today.
- Makes various locally-used exceptions inherit from std::exception.
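For the finally point, the idea is roughly this (an illustrative sketch,
not the exact Lix class):

    #include <utility>

    template<typename F>
    class Finally
    {
        F fn;
    public:
        explicit Finally(F fn_) : fn(std::move(fn_)) {}
        // the destructor is only noexcept when the wrapped callable is,
        // instead of pretending the cleanup can never throw
        ~Finally() noexcept(noexcept(std::declval<F &>()())) { fn(); }
    };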
Change-Id: I22e66972602604989b5e494fd940b93e0e6e9297
without this we will not be able to get rid of makeDecompressionSink,
which in turn will be necessary to get rid of sourceToSink (since the
libarchive archive wrapper *must* be a Source due to api limitations).
Change-Id: Iccd3d333ba2cbcab49cb5a1d3125624de16bce27
even the transfer function is not all that necessary since there aren't
that many users, but we'll keep it for now. we could've kept both names
but we also kind of want to use `download` for something else very soon.
Change-Id: I005e403ee59de433e139e37aa2045c26a523ccbf
This reverts commit 285bc67318.
Reason for revert: #364
For some reason this broke `main` even though the change we are reverting passed CI! Mysterious, haunted, etc. Needs more debugging, let's turn it off for now.
Change-Id: Ica4819d61cd35b83eb52985bfcb657e858f025a9
while refactoring the curl wrapper we inadvertently broke the immutable
flake protocol, because the immutable flake protocol accumulates headers
across the entire redirect chain instead of using only the headers given
in the final response of the chain. this is a problem because Some Known
Providers Of Flake Infrastructure set rel=immutable link headers only in
the penultimate entry of the redirect chain, and curl does not regard it
as worth returning to us via its response header enumeration mechanisms.
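one way to keep those headers is to collect them ourselves in the header
callback, which is invoked for every response in the chain. a rough
sketch, where the parsing helpers and the result field are assumptions:

    // called once per header line, for every response in the redirect
    // chain, including intermediate 3xx responses
    static size_t headerCallback(char * buffer, size_t size, size_t nitems, void * userp)
    {
        auto & item = *static_cast<TransferItem *>(userp);
        std::string_view line(buffer, size * nitems);
        // hypothetical helpers: pull out a `link` header and, if it
        // carries rel="immutable", remember its target on the result
        if (auto link = getHeaderValue(line, "link"))
            if (auto target = parseImmutableLink(*link))
                item.result.immutableUrl = *target;
        return size * nitems;
    }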
fixes #358
Change-Id: I645c3932b465cde848bd6a3565925a1e3cbcdda0
Very basic behavior test to ensure that gzip data gets internally
decompressed by the file transfer pipeline.
Change a std::string_view return value in the test harness to
std::string. I wouldn't call myself a C++ beginner and I still managed
to shoot myself in the foot like three times with the lifetime
management there (e.g. [&] { return an_std_string; } ends up with a
dangling string_view!).
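A minimal reproduction of that particular foot-gun, with made-up names:

    #include <functional>
    #include <string>
    #include <string_view>

    int main()
    {
        std::string an_std_string = "hello";

        // the harness hook had a std::string_view return type
        std::function<std::string_view()> reply = [&] {
            return an_std_string;    // the lambda returns a std::string copy
        };

        // invoking the wrapper converts that temporary copy to a
        // string_view and destroys the copy before the call returns,
        // so the view dangles
        std::string_view v = reply();
        (void) v;                    // reading v here is use-after-free
    }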
Change-Id: I1360750d4181ce1ca2a3aa4dc0e97e131351c469
also add a few more tests for exception propagation behavior. using
packaged_tasks and futures (which only allow a single call to a few
of their methods) introduces error paths that weren't there before.
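the single-call semantics in question, for reference (plain standard C++,
just illustrating the constraint):

    #include <future>
    #include <stdexcept>

    int main()
    {
        std::packaged_task<int()> task([]() -> int {
            throw std::runtime_error("boom");
        });
        auto fut = task.get_future(); // a second get_future() throws
                                      // std::future_error
        task();                       // the exception is captured here,
                                      // not thrown
        try {
            fut.get();                // rethrows the stored exception,
                                      // exactly once; fut is invalid after
        } catch (std::runtime_error &) {
        }
    }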
Change-Id: I42ca5236f156fefec17df972f6e9be45989cf805
not doing this will cause transfers whose readers have disappeared to
linger. with lingering transfers the curl thread can't shut down, which
will cause nix itself to not shut down until the transfer finishes some
other way (most likely network timeouts). also add a new test for this.
Change-Id: Id2401b3ac85731c824db05918d4079125be25b57