benchmarking script: allow benchmarking just one variant #234
Unfortunately we cannot combine results in such a case with the existing jq script, since it processes the JSON files benchmark-type by benchmark-type; if you have just one run in one of them, you would probably lose data. We would have to rename the result files for each run (they are named `bench-TYPE.json`) and combine them first (see the sketch below), which is simply not in scope for this patchset. Currently the script would also overwrite the result files for each bench run, which would have to be changed. Alternatively, we could make the script print the JSON file name before printing the summary, and that could work as well. Either way this requires some rethinking of how we use hyperfine in there.

And then we would lose what few niceties hyperfine gives us (at this point not really a lot; we have already reimplemented the display output and the run-to-run comparison).
raised on https://gerrit.lix.systems/c/lix/+/798
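For reference, a minimal sketch of that rename-and-combine idea, assuming hyperfine's standard `--export-json` output format; the `RUN` counter, `TYPE` value, `$BENCH_COMMAND`, and file naming are placeholders for illustration, not anything the script currently does:

```bash
# Hypothetical sketch only: write one result file per run instead of
# overwriting bench-TYPE.json, then merge the exports with jq.
RUN=1                               # assumed per-run counter
TYPE=eval                           # assumed benchmark type name

hyperfine \
  --warmup 2 \
  --export-json "bench-${TYPE}-run${RUN}.json" \
  "$BENCH_COMMAND"                  # placeholder for the benchmarked command

# Each export looks like {"results": [...]}, so merging is just
# concatenating the result arrays across all per-run files.
jq -s '{results: [.[].results[]]}' bench-*-run*.json > bench-combined.json
```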
The output doesn't have to be excellent, or even retained, in the single-variant case; just enough to copy somewhere for reference (even the straight hyperfine output would do perfectly fine). It's more of an in-progress measurement thing than a full benchmark run. Maybe another script, with bits shared between it and the benchmarks?
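As an illustration of that kind of quick in-progress measurement (entirely a sketch; `$BENCH_COMMAND` stands in for whatever variant is being measured), plain hyperfine already covers it:

```bash
# Hypothetical quick check: run hyperfine directly and copy its console
# summary wherever it is needed; no post-processing of JSON required.
hyperfine --warmup 2 --runs 10 "$BENCH_COMMAND"

# If the numbers should be kept around, the raw export can be stashed
# manually instead of going through the usual result files:
hyperfine --warmup 2 --runs 10 --export-json /tmp/bench-single.json "$BENCH_COMMAND"
```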
Yeah, that makes sense for optimization use cases, for sure. I think the easiest way is to change where the JSON files go based on an env var or argument or something (but ugh, arg parsing; that sounds like "rewrite it in Python" to me, which is why I am deferring this).
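A minimal sketch of the env-var approach, assuming a hypothetical `BENCH_RESULTS_DIR` variable (not an existing option of the script), which avoids argument parsing entirely:

```bash
# Hypothetical: an environment variable redirects where the JSON exports
# land, defaulting to the current location, so no argument parsing is needed.
results_dir="${BENCH_RESULTS_DIR:-.}"   # assumed variable name
mkdir -p "$results_dir"

hyperfine \
  --warmup 2 \
  --export-json "$results_dir/bench-${TYPE}.json" \
  "$BENCH_COMMAND"                      # placeholder for the benchmarked command
```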