Flake tarball immutable links are broken #358
Describe the bug

The `Link: <flakeref>; rel="immutable"` HTTP header described in the tarball-fetcher docs doesn't seem to be respected, unlike upstream Nix.

Steps To Reproduce

nix flake show "https://flakehub.com/f/numtide/flake-utils/0.1.*.tar.gz"

(or any URL that eventually redirects with the aforementioned `Link` HTTP header)

Expected behavior

The value of the `Link` HTTP header will be used in place of the explicit URL.

nix --version output

Additional context

This doesn't occur with upstream Nix. Note the resolved URL of each
This is very interesting, because this functionality does work at some level. See:
I think there was some work on the URL handling that might be related, @ma27 may know?
This is a regression since 2.18. Marking as release blocker.
Confirmed the bug is a regression since Nix 2.18. We can go bisect it later today.
Hi, I managed to bisect it to this commit: d0390b5cf2. Reverting said commit seems to fix the problem, at least for me.
this code base is absolutely haunted. holy shit
that revert doesn't seem to fix it for us? what on earth
This issue is very cursed and related to filetransfer.cc refactorings, most likely 86bfede948. In the CppNix version, `Link` headers are taken into account at any point in the redirect chain. In the Lix version, only `Link` headers in the last HTTP request are considered, because they're read by `getHeader` at the end of the request instead of from the streamed headers callback.

cc @pennae
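For illustration, here is a minimal standalone sketch of that distinction, assuming libcurl (which the file transfer code is built on); this is not the actual filetransfer.cc code. With CURLOPT_FOLLOWLOCATION enabled, the CURLOPT_HEADERFUNCTION callback fires for every response in the redirect chain, including the intermediate 3xx responses, so `Link` headers can be collected there; headers queried only after the transfer completes reflect just the final request.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <string>
#include <vector>

// Called once per header line, for every response in the redirect chain
// (including intermediate 301/302 responses, which have no body and thus
// never reach the write callback).
static size_t onHeader(char *buffer, size_t size, size_t nitems, void *userdata)
{
    auto &links = *static_cast<std::vector<std::string> *>(userdata);
    std::string line(buffer, size * nitems);
    if (line.rfind("Link:", 0) == 0 || line.rfind("link:", 0) == 0)
        links.push_back(line.substr(5));
    return size * nitems;
}

int main()
{
    std::vector<std::string> links;

    CURL *curl = curl_easy_init();
    curl_easy_setopt(curl, CURLOPT_URL,
        "https://flakehub.com/f/numtide/flake-utils/0.1.*.tar.gz");
    curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);          // HEAD: we only care about headers here
    curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);  // libcurl follows the whole chain
    curl_easy_setopt(curl, CURLOPT_HEADERFUNCTION, onHeader);
    curl_easy_setopt(curl, CURLOPT_HEADERDATA, &links);
    curl_easy_perform(curl);
    curl_easy_cleanup(curl);

    // Every Link header anywhere in the chain is visible this way.
    for (auto &l : links)
        std::cout << "Link seen in chain:" << l;  // header lines keep their trailing CRLF
    return 0;
}
```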
Note: I'm not convinced that this is how `Link` should work, and it would also be perfectly acceptable to consider this a bug and ask Flakehub to fix it server side, imo. Or the documentation should be improved along the way.

And unlike `content-encoding` and `accept-ranges`, this can't be moved to `writeCallback` because most redirects won't have a body and `writeCallback` is never called for them. In case someone was thinking of trying this fix... (I just did.)

We have to accept ones in the last redirect because the last redirect might be to s3 or something that can't add headers, as is the case in practice here. That said, do we even have coverage on this lmao.
Someone who's willing to interact with the CppNix team should go ask them to better define their protocol, because right now it's a mess. There's nothing in the documentation which specifies headers should be read from intermediate redirects (it's definitely not the usual way HTTP APIs work...). If intermediate redirects are supposed to be able to set a `Link` header, then what happens when there are multiple ones in the chain?

Right now the CppNix implementation does "take the latest value found in the chain" (incl. in the case where multiple `Link` headers are returned in the same request). It's also not really conforming to the HTTP standards since `Link` can have multiple comma-separated values, which CppNix doesn't support... So many problems :/

Changed title from "Flake tarballs are broken" to "Flake tarball immutable links are broken"

I mean, we can define the ~~FlakeHub protocol~~ immutable tarball link protocol ourselves, but it's genuinely a very bad protocol to begin with. I think the best thing for interop is to collect them from all intermediate redirects, since it would be very surprising behaviour if only the last redirect's `Link` header were considered (since the s3-like would then break it by putting in a redirect to itself, e.g.) :/

It is a thing that exists that was shipped and is useful, but it definitely has regrettable design choices. Also, like, why is it `rel="immutable"` but it's a flakeref?! No reasonable implementer who isn't nix would be likely to use it if the defined interface is "it's a flakeref" tbh. We should probably also document some subset of fields that happens to look like a flakeref that we do consider in it, and ignore all other ones, to make it not a flake thing anymore at least.

horrors suggests this could instead be implemented with spec-compliant HTTP redirects:

0.6.*.tar.gz --302-> 0.6.2.tar.gz --301-> some-hash.tar.gz --302-> horrible-s3-thing.tar.gz

We would then store the last 301 in the chain. Obviously we should actually fix the present flakehub regression tho.
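To make that concrete, here is a rough client-side sketch of the redirect-chain variant (hypothetical URL, not actual Lix code, again assuming libcurl): walk the chain one hop at a time and remember the target of the last permanent (301/308) redirect as the locked URL.

```cpp
#include <curl/curl.h>
#include <iostream>
#include <optional>
#include <string>

int main()
{
    std::string url = "https://example.com/0.6.*.tar.gz"; // hypothetical entry point
    std::optional<std::string> lockedUrl;

    CURL *curl = curl_easy_init();
    for (int hop = 0; hop < 10; ++hop) {
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);          // HEAD: we only care about the redirects
        curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 0L);  // walk the chain ourselves
        if (curl_easy_perform(curl) != CURLE_OK)
            break;

        long code = 0;
        char *next = nullptr;
        curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
        curl_easy_getinfo(curl, CURLINFO_REDIRECT_URL, &next);
        if (!next)
            break;                       // no more redirects: `url` is the final resource
        if (code == 301 || code == 308)
            lockedUrl = next;            // permanent redirect: candidate for the locked URL
        url = next;
    }
    curl_easy_cleanup(curl);

    if (lockedUrl)
        std::cout << "would lock to: " << *lockedUrl << "\n";
    return 0;
}
```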
Btw: spec for Link header if we want to write a spec compliant parser: https://httpwg.org/specs/rfc8288.html#parse-fv
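As a rough illustration of what a more conforming parse would involve, here is a hypothetical helper that at least splits one header into multiple comma-separated link-values with their parameters; it's still not fully RFC 8288 compliant (no handling of commas or semicolons inside quoted strings, no escaped quotes).

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

struct LinkValue {
    std::string target;                         // the URI between < and >
    std::map<std::string, std::string> params;  // e.g. rel, rev
};

static std::vector<LinkValue> parseLinkHeader(const std::string &header)
{
    std::vector<LinkValue> out;
    size_t pos = 0;
    while ((pos = header.find('<', pos)) != std::string::npos) {
        size_t end = header.find('>', pos);
        if (end == std::string::npos) break;
        LinkValue lv;
        lv.target = header.substr(pos + 1, end - pos - 1);

        // Parameters for this link-value run until the next top-level ','
        // (which starts the next link-value). Quoted commas are not handled.
        size_t next = header.find(',', end);
        std::string params = header.substr(end + 1,
            (next == std::string::npos ? header.size() : next) - end - 1);

        auto trim = [](std::string s) {
            size_t a = s.find_first_not_of(" \t\"");
            size_t b = s.find_last_not_of(" \t\"");
            return a == std::string::npos ? std::string{} : s.substr(a, b - a + 1);
        };

        size_t p = 0;
        while ((p = params.find(';', p)) != std::string::npos) {
            size_t eq = params.find('=', ++p);
            if (eq == std::string::npos) break;
            size_t valEnd = params.find(';', eq);
            std::string key = params.substr(p, eq - p);
            std::string val = params.substr(eq + 1,
                (valEnd == std::string::npos ? params.size() : valEnd) - eq - 1);
            lv.params[trim(key)] = trim(val);
        }

        out.push_back(lv);
        pos = end + 1;
    }
    return out;
}

int main()
{
    // Two comma-separated link-values in one header, which a "last value wins"
    // implementation would mishandle.
    auto links = parseLinkHeader(
        "<https://example.com/a.tar.gz?rev=abc>; rel=\"immutable\", "
        "<https://example.com/page2>; rel=\"next\"");
    for (auto &l : links)
        std::cout << l.target << "  rel=" << l.params["rel"] << "\n";
    return 0;
}
```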
Hi 👋
Ryan kindly brought this thread to our attention in the meeting today; much appreciated.
I was the only one (not from DetSys) who somewhat reviewed this feature before it was merged rather soon. I had let that slip because I figured it'd be ok to return to it when users have feedback, which I guess is now. (Didn't expect Lix, but hey)
I know I know.
I hope I can help a bit. Occasionally, something is going to be weird for a reason. I'd give it a 50/50 in general.
This is by design; the idea being that an API can add this information in the redirect response, so that the actual bulk data can be handled by a dumb CDN, bucket, etc.
I agree with @delroth that this is generally not how HTTP clients work, but I would consider this to be a case of being liberal in what we accept.
The 302 -> 301 -> 302 sequence is interesting. That does look like it fits the bill, but it feels less intentional than the `Link`-based syntax. Would any such sequence of redirects just work? That might not be desirable. It also seems to require more roundtrips, which is a bit unfortunate.

I'm also reading that our `Link` parser isn't compliant. That definitely seems like a bug to be fixed.

I thought I originally proposed in my reviews for it to be a normal HTTP URL with extra `Link` parameters (i.e. at the level of `rel`) rather than query parameters, but I can't find that in the thread. We do enforce that the flakeref is an http(s) one, for what it's worth.
What do you think of having a syntax like this instead?
Link: <http://example.com/normal/url/foo.tar.gz>; rel="immutable"; rev="abcdef"

Or might the original `X-Nix-Rev` header be better?
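For illustration, a redirect response under that proposal might look roughly like this (hypothetical URLs and rev value), with Location pointing at the dumb CDN/bucket and the `Link` header carrying the plain immutable URL plus revision:

```
HTTP/1.1 302 Found
Location: https://cdn.example.com/some-hash.tar.gz
Link: <http://example.com/normal/url/foo.tar.gz>; rel="immutable"; rev="abcdef"
```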