Problem:
I wrote a benchmark to compare file-crawling implementations and got quite different results than with, e.g., the `benchmark` package. With `benchmark` I can run tests in sequence, so they access the same resources one after another; this makes the results reliable.
With tinybench I get out-of-memory or other odd errors due to parallel processing.
Suggested solution:
Implement a way to run all test cases in sequence.
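A minimal sketch of the desired behavior. This does not use tinybench's actual API; `runSequentially` and the `tasks` shape are hypothetical stand-ins illustrating how awaiting each case before starting the next keeps shared resources (e.g. the file system) uncontended:

```javascript
// Hypothetical sketch, not tinybench API: run benchmark cases strictly
// one after another so only one case touches shared resources at a time.
async function runSequentially(tasks) {
  const results = [];
  for (const task of tasks) {
    // Awaiting here guarantees the previous case has fully finished
    // before the next one starts.
    const start = Date.now();
    await task.fn();
    results.push({ name: task.name, ms: Date.now() - start });
  }
  return results;
}
```

With an API like this, each case's file-system access would be isolated in time, avoiding the parallel-processing errors described above.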