I want to benchmark how my server (running on AWS) performs under load (a stress test).
I need to run a certain number of concurrent requests from my machine to this server. For example, I want to run 100 concurrent requests (tasks) for 60 seconds.
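For context, this is roughly what I would expect to end up writing (just a sketch; the endpoint URL is a placeholder and I am assuming that time, threshold, and concurrency are the right knobs for "duration" and "number of concurrent requests"):

import { Bench } from 'tinybench';

const bench = new Bench({ time: 60_000 }); // run the task for 60 seconds
bench.concurrency = 'task'; // run iterations of the same task concurrently
bench.threshold = 100;      // cap at 100 in-flight requests

bench.add('stress my endpoint', async () => {
  await fetch('https://my-server.example.com/api'); // placeholder URL for my AWS server
});

await bench.run();
console.table(bench.table());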
I tried to test how it might work using the settings from the docs:
import { Bench } from 'tinybench';

const bench = new Bench({ time: 100 });

bench
  .add('faster task', () => {
    console.log('I am faster')
  })
  .add('slower task', async () => {
    await new Promise(r => setTimeout(r, 1)) // we wait 1ms :)
    console.log('I am slower')
  })

// options way (recommended)
bench.threshold = 10 // The maximum number of concurrent tasks to run. Defaults to Infinity.
bench.concurrency = "task" // The concurrency mode to determine how tasks are run.
// await bench.warmup()
await bench.run()

// standalone method way
// await bench.warmupConcurrently(10, "task")
await bench.runConcurrently(10, "task") // with runConcurrently, mode is set to 'bench' by default
It crashes during warmup or run.
I cloned the repo and reproduced the problem with this test:
import Bench from "src";
import { test, expect, vi } from "vitest";

test("should run concurrently using default time and iterations", async () => {
  const bench = new Bench();
  bench.add("foo", async () => 1);
  bench.concurrency = "task";
  bench.threshold = 2;
  await bench.run(); // FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
});
I found that the line promises.push(limit(executeTask)); runs endlessly because the while condition is always true: while (totalTime < time || ((samples.length + limit.activeCount + limit.pendingCount) < iterations)). Specifically, totalTime < time always holds, since totalTime = 0 and time = 500.
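To illustrate the mechanism (a simplified sketch, not the library's actual code): the loop body is synchronous and never yields to the event loop, so none of the queued tasks can finish, totalTime never advances past 0, and the promises array grows until the heap is exhausted.

const promises: Promise<void>[] = [];
let totalTime = 0; // only updated when a task completes...
const time = 500;

while (totalTime < time) { // ...but no task can complete, because this loop never yields
  promises.push(
    (async () => {
      const start = performance.now();
      await new Promise((resolve) => setTimeout(resolve, 1)); // stand-in for the benchmarked task
      totalTime += performance.now() - start;
    })()
  );
}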
I can work around the problem by passing time: 0, warmupTime: 0 in the options, as is done in your tests, but I find this confusing. If I misconfigure a benchmark, it should throw an error instead of leaking memory until it crashes.
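For reference, this is the workaround that runs without crashing for me (a sketch; I am assuming the iterations option is what controls the sample count once time is 0):

import { Bench } from 'tinybench';

const bench = new Bench({ time: 0, warmupTime: 0, iterations: 100 });
bench.concurrency = 'task';
bench.threshold = 10;
bench.add('foo', async () => 1);
await bench.run(); // completes instead of running out of memory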
Since Theo promoted your library as a Benchmark.js replacement, I think it should have a friendlier API for concurrent tests (or more comprehensive docs) to see wider adoption.