benchmarking script: allow benchmarking just one variant #234

Open
opened 2024-04-09 01:47:00 +00:00 by jade · 2 comments
Owner

unfortunately we cannot combine results in such a case with the existing jq script: it processes the JSON files benchmark-type by benchmark-type, so if a type has results from just one run, you would probably lose data. we would have to rename the result files for each run (they are named `bench-TYPE.json`) and combine them first, which is simply not in scope for this patchset. the script also currently overwrites the result files on each bench run, which would have to change as well.
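
for illustration, the renaming-and-combining might look roughly like this; the run-id scheme and the `eval` type name are made up for the sketch, not what the script actually does:

```bash
# hypothetical: stash each run's bench-TYPE.json under a run id
# instead of letting the next run overwrite them
run_id=${1:?usage: stash-results RUN_ID}
mkdir -p results/"$run_id"
mv bench-*.json results/"$run_id"/

# later: merge every run of one type back into the shape the jq
# script expects (hyperfine exports {"results": [...]})
jq -s '{results: map(.results) | add}' \
  results/*/bench-eval.json > bench-eval.json
```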

alternatively we could make the script print the JSON file name before printing the summary; that could work as well. either way this requires some rethinking of how we use hyperfine in there.
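
at its smallest, a sketch of that could look like this (the variable names are invented here, not the actual script's):

```bash
# sketch: announce the result file before the summary so a lone
# run can still be located and copied later
out="bench-${bench_type}.json"   # $bench_type is a placeholder
hyperfine --export-json "$out" "$benchmark_cmd"
echo "raw results: $out"
# ...summary printing would follow here...
```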

and then we would lose what few niceties hyperfine gives us (at this point actually not really a lot... we already reimplemented the display output and the run-to-run comparison).

raised on https://gerrit.lix.systems/c/lix/+/798

jade added the devx label 2024-04-09 01:47:00 +00:00
Owner

the output doesn't have to be excellent, or even retained, in the single-variant case; it just needs to be enough to copy somewhere for reference (even the straight hyperfine output would do perfectly fine). it's more of an in-progress measurement thing than a full benchmark run. maybe another script, with bits shared between it and the benchmarks?
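
one possible shape for such a helper, leaning on plain hyperfine output; `bench-lib.sh` and `setup_bench_env` are assumptions standing in for whatever shared bits exist:

```bash
#!/usr/bin/env bash
# quick-measure: sketch of an in-progress measurement helper
set -euo pipefail
source ./bench-lib.sh   # hypothetical shared setup with the bench script
setup_bench_env         # hypothetical

# no JSON export, no combining: the straight hyperfine table is
# enough to copy somewhere for reference
hyperfine --warmup 2 "$@"
```

e.g. `./quick-measure 'result/bin/nix --version'` would print only hyperfine's own summary table and keep nothing around.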

Author
Owner

yeah, that makes sense for optimization use cases, for sure. i think the easiest way is to change where the json files go based on an env var or an argument (but ugh, arg parsing; that sounds like "rewrite it in python" to me, which is why i am deferring this).
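
the env-var version could stay shell-only and skip arg parsing entirely; `BENCH_RESULTS_DIR` is an invented name for the sketch:

```bash
# sketch: an env var redirects where the result JSON lands,
# defaulting to the current behaviour of writing in place
result_dir=${BENCH_RESULTS_DIR:-.}
mkdir -p "$result_dir"
hyperfine --export-json "$result_dir/bench-${bench_type}.json" "$benchmark_cmd"
```

then something like `BENCH_RESULTS_DIR=results/my-opt-run ./bench.sh` would keep a run around instead of clobbering the previous one.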
