Cache evaluation errors, including the actual error #250
Cached failures were rightfully removed as of #223.
When I debugged this issue in CppNix with @lheckemann [1] (unaware of Lix back then :p), we had the idea to cache eval errors entirely, and thus save re-evaluation of stuff that's known to break and provide a reasonable UX.
May be worth revisiting after Lix has left the ice-cube state.
[1] https://github.com/NixOS/nix/pull/10368 (not an actual implementation of this proposal, but a little bit of discussion about the topic)
eval failures are nowhere near deterministic enough to even be cached, and probably won't be for quite a while. in fact we::horrors are planning to remove the current eval cache completely, even the positive caches, in favor of builtins that'll allow caches that actually work to be built. because the eval cache as it exists now is ... highly questionable at best.
To make sure I'm not misunderstanding you: are we talking about impure evaluation here, or about transient failures such as a libfetcher error? I think both can be caught and the cache invalidated, right? Or do you have something in mind that I'm unaware of currently?
I do agree here: to my knowledge, the cache only works for top-level attributes (of flakes - though that's not a requirement I think), right?
To me, this is the best solution we have so far, hence the suggestion.
Now that you mention it, I think I read about the idea of basically a `builtins.cacheExpr (<thunk>)` (no idea what it was actually called), but I can't find it. Is that what you have in mind? Are the details up for discussion or are they written down somewhere already? Can't promise that I'll have the spoons then, but I can help out there, let me know.
Will need to think a moment about that idea (and await your answer!), but I think I agree with you :)
all the things you mentioned and more: IFD can cause nondeterministic eval, so we'd have to force it off completely to make the cache sound. evaluation order in builtins plays a role too, so the cache would not be stable across versions. even parse order of identifiers impacts cache validity, if you remember nixpkgs breaking because it depended on the (not guaranteed) order in which nix compares attributes to not crash stdenv all the damn time (and just broke when that order changed through spooky action at a distance from yonder unrelated file)
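To make the IFD point concrete, here is a minimal sketch (not from the thread; it assumes a `<nixpkgs>` channel is available): whether this expression evaluates or throws depends on whether the derivation it imports actually builds, which can change between runs without any eval input changing, so a cached "this expression fails" entry has no stable key to hang off.

```nix
# Hypothetical illustration of import-from-derivation (IFD): the value of
# this expression is only known after building `generated`, so an evaluation
# error here can come from a transient build failure (network, sandbox,
# platform) rather than from the expression itself.
let
  pkgs = import <nixpkgs> { };
  generated = pkgs.runCommand "generated-expr" { } ''
    # any failure in this build step surfaces as an *eval* error upstream
    echo '{ value = 42; }' > $out
  '';
in
(import generated).value
```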
it's a bunch more complicated than that unfortunately; if we only cached toplevel attrs we'd cache nothing, if we cached two levels we'd still not cache anything in nixpkgs, any fixed depth would break caching of scope members in legacyPackages etc
infinisil had a pre-rfc about such things, but we::horrors find this pre-rfc highly ill-advised for many reasons. ultimately something like that is our plan though, in a very different setting: a `memoize :: str -> any -> any` that takes a user-supplied cache key and an environmental namespace key taken from elsewhere (eg scopedImportUsing and the git hash of the evaluated data etc)
In pure mode at least, in a purely functional language we should be able to memoize an arbitrary function (with some limitations on the eval contents, like IFD, obviously) as an optimization, right? That always struck us as a better design approach than the eval cache (and rather than `builtins.memoize` or similar).
it should be able to do this, yes. but for a laundry list of reasons nix is very, very far away from that. relying on heuristics to memoize things to disk seems rather fraught though; we'd much rather leave it up to the authors of a still hypothetical `flake-impl.nix` to encode their own heuristics instead of relying on ours (which, as the current eval cache shows, are always wrong anyway). does that make sense?
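For illustration only, here is a rough sketch of how the memoize-style builtin described above might look from the expression side. Neither the name `builtins.memoize` nor this exact signature exists in Lix today, and the environmental namespace key (e.g. the git hash of the evaluated tree) would be supplied by the evaluator rather than written by the caller.

```nix
# Hypothetical sketch, assuming a `builtins.memoize : string -> any -> any`
# that caches the forced value of its second argument under the given
# user-supplied key (scoped by an evaluator-provided namespace key).
let
  expensiveSum = n:
    builtins.foldl' (acc: x: acc + x) 0 (builtins.genList (i: i * i) n);
in
builtins.memoize "expensive-sum/v1" (expensiveSum 100000)
```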
Checking in: I think the resolution here is wontfix, is that correct?
would say so, yes. at least until we can detect whether a given eval is deterministic, and cache full errors only in that case.
Good point, I just forgot about it here since I refuse to use it in its current state 😅
Oof yeah 🫠
OK, I worded this pretty poorly, sorry!
I think we mean the exact same thing: I meant "top-level" as in flake attributes, but not arbitrary thunks inside the expression (e.g. a `buildInput` of a package, or something with an even narrower scope). But yeah, that was poorly worded on my side.
Given your other input, I think I agree with this.
Absolutely! Apologies for not getting back to this earlier.
https://github.com/tweag/epcb is where you read about a caching builtin, probably maybe?