public inbox for bpf@vger.kernel.org
From: Puranjay Mohan <puranjay@kernel.org>
To: bpf@vger.kernel.org
Cc: Puranjay Mohan <puranjay@kernel.org>,
	Puranjay Mohan <puranjay12@gmail.com>,
	Alexei Starovoitov <ast@kernel.org>,
	Andrii Nakryiko <andrii@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Martin KaFai Lau <martin.lau@kernel.org>,
	Eduard Zingerman <eddyz87@gmail.com>,
	Kumar Kartikeya Dwivedi <memxor@gmail.com>,
	Mykyta Yatsenko <mykyta.yatsenko5@gmail.com>,
	Fei Chen <feichen@meta.com>, Taruna Agrawal <taragrawal@meta.com>,
	Nikhil Dixit Limaye <ndixit@meta.com>,
	"Nikita V. Shirokov" <tehnerd@tehnerd.com>,
	kernel-team@meta.com
Subject: [PATCH bpf-next 1/7] selftests/bpf: Add bench_force_done() for early benchmark completion
Date: Mon, 27 Apr 2026 16:22:58 -0700	[thread overview]
Message-ID: <20260427232313.1582588-2-puranjay@kernel.org> (raw)
In-Reply-To: <20260427232313.1582588-1-puranjay@kernel.org>

The bench framework always runs for the full warmup_sec + duration_sec
before finishing.  Benchmarks that know exactly how many samples they
need can call bench_force_done() to signal completion early, avoiding
wasted wall-clock time.

Also refactor collect_measurements() to reuse bench_force_done()
instead of open-coding the same mutex/cond_signal sequence.

Signed-off-by: Puranjay Mohan <puranjay@kernel.org>
---
 tools/testing/selftests/bpf/bench.c | 14 +++++++++-----
 tools/testing/selftests/bpf/bench.h |  1 +
 2 files changed, 10 insertions(+), 5 deletions(-)

diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
index 029b3e21f438..47a4e72208d6 100644
--- a/tools/testing/selftests/bpf/bench.c
+++ b/tools/testing/selftests/bpf/bench.c
@@ -741,6 +741,13 @@ static void setup_benchmark(void)
 static pthread_mutex_t bench_done_mtx = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t bench_done = PTHREAD_COND_INITIALIZER;
 
+void bench_force_done(void)
+{
+	pthread_mutex_lock(&bench_done_mtx);
+	pthread_cond_signal(&bench_done);
+	pthread_mutex_unlock(&bench_done_mtx);
+}
+
 static void collect_measurements(long delta_ns) {
 	int iter = state.res_cnt++;
 	struct bench_res *res = &state.results[iter];
@@ -750,11 +757,8 @@ static void collect_measurements(long delta_ns) {
 	if (bench->report_progress)
 		bench->report_progress(iter, res, delta_ns);
 
-	if (iter == env.duration_sec + env.warmup_sec) {
-		pthread_mutex_lock(&bench_done_mtx);
-		pthread_cond_signal(&bench_done);
-		pthread_mutex_unlock(&bench_done_mtx);
-	}
+	if (iter == env.duration_sec + env.warmup_sec)
+		bench_force_done();
 }
 
 int main(int argc, char **argv)
diff --git a/tools/testing/selftests/bpf/bench.h b/tools/testing/selftests/bpf/bench.h
index 7cf21936e7ed..89a3fc72f70e 100644
--- a/tools/testing/selftests/bpf/bench.h
+++ b/tools/testing/selftests/bpf/bench.h
@@ -70,6 +70,7 @@ extern struct env env;
 extern const struct bench *bench;
 
 void setup_libbpf(void);
+void bench_force_done(void);
 void hits_drops_report_progress(int iter, struct bench_res *res, long delta_ns);
 void hits_drops_report_final(struct bench_res res[], int res_cnt);
 void false_hits_report_progress(int iter, struct bench_res *res, long delta_ns);
-- 
2.52.0



Thread overview: 24+ messages
2026-04-27 23:22 [PATCH bpf-next 0/7] selftests/bpf: Add XDP load-balancer benchmark Puranjay Mohan
2026-04-27 23:22 ` Puranjay Mohan [this message]
2026-04-27 23:39   ` [PATCH bpf-next 1/7] selftests/bpf: Add bench_force_done() for early benchmark completion sashiko-bot
2026-04-28  0:05   ` bot+bpf-ci
2026-04-28  9:15     ` Puranjay Mohan
2026-04-27 23:22 ` [PATCH bpf-next 2/7] selftests/bpf: Add BPF batch-timing library Puranjay Mohan
2026-04-28  0:12   ` sashiko-bot
2026-04-28  0:18   ` bot+bpf-ci
2026-04-28  9:23     ` Puranjay Mohan
2026-04-27 23:23 ` [PATCH bpf-next 3/7] selftests/bpf: Add bpf-nop benchmark for timing overhead baseline Puranjay Mohan
2026-04-27 23:23 ` [PATCH bpf-next 4/7] selftests/bpf: Add XDP load-balancer common definitions Puranjay Mohan
2026-04-28  0:05   ` bot+bpf-ci
2026-04-28  0:38   ` sashiko-bot
2026-04-28  9:29     ` Puranjay Mohan
2026-04-27 23:23 ` [PATCH bpf-next 5/7] selftests/bpf: Add XDP load-balancer BPF program Puranjay Mohan
2026-04-28  0:18   ` bot+bpf-ci
2026-04-28  1:05   ` sashiko-bot
2026-04-28  9:30     ` Puranjay Mohan
2026-04-27 23:23 ` [PATCH bpf-next 6/7] selftests/bpf: Add XDP load-balancer benchmark driver Puranjay Mohan
2026-04-28  0:05   ` bot+bpf-ci
2026-04-28  1:29   ` sashiko-bot
2026-04-28  9:33     ` Puranjay Mohan
2026-04-27 23:23 ` [PATCH bpf-next 7/7] selftests/bpf: Add XDP load-balancer benchmark run script Puranjay Mohan
2026-04-28  2:03   ` sashiko-bot
