public inbox for bpf@vger.kernel.org
From: sashiko-bot@kernel.org
To: "Puranjay Mohan" <puranjay@kernel.org>
Cc: bpf@vger.kernel.org
Subject: Re: [RFC PATCH bpf-next 1/6] selftests/bpf: Add bench_force_done() for early benchmark completion
Date: Mon, 20 Apr 2026 12:41:19 +0000	[thread overview]
Message-ID: <20260420124120.13970C19425@smtp.kernel.org> (raw)
In-Reply-To: <20260420111726.2118636-2-puranjay@kernel.org>

Sashiko AI review found 3 potential issues:
- [High] bench_force_done() introduces lost wakeups and spurious wakeups because it signals a condition variable without a shared state predicate.
- [High] Early termination via bench_force_done() can pass a negative sample count to report_final(), leading to division by zero.
- [High] Potential deadlock and undefined behavior due to locking a non-async-signal-safe mutex inside a signal handler.
--

commit 892d0b6cc51abbb9999e78aaf1cd4e37a1a9bcfc
Author: Puranjay Mohan <puranjay@kernel.org>

selftests/bpf: Add bench_force_done() for early benchmark completion

This commit introduces bench_force_done() to allow benchmarks to signal
early completion. It avoids waiting for the full duration_sec when a
benchmark has already collected the necessary samples.

> diff --git a/tools/testing/selftests/bpf/bench.c b/tools/testing/selftests/bpf/bench.c
> --- a/tools/testing/selftests/bpf/bench.c
> +++ b/tools/testing/selftests/bpf/bench.c

[ ... ]

> @@ -741,6 +741,13 @@ static void setup_benchmark(void)
>  static pthread_mutex_t bench_done_mtx = PTHREAD_MUTEX_INITIALIZER;
>  static pthread_cond_t bench_done = PTHREAD_COND_INITIALIZER;
>  
> +void bench_force_done(void)
> +{
> +	pthread_mutex_lock(&bench_done_mtx);

Is it safe to lock this mutex here considering collect_measurements() is
called from a SIGALRM handler?

If the signal interrupts a thread that is already holding bench_done_mtx
(like a worker thread executing bench_force_done()), the signal handler
will attempt to lock the same mutex and cause a self-deadlock.

Since pthread_mutex_lock() and pthread_cond_signal() are not
async-signal-safe, should an async-signal-safe mechanism like an atomic
flag with sem_post() be used instead?

> +	pthread_cond_signal(&bench_done);

Could this introduce lost wakeups or spurious wakeups because the condition
variable is signaled without a shared state predicate?

If a worker thread completes quickly and calls bench_force_done() before
main() reaches pthread_cond_wait(), the signal will be lost because POSIX
condition variables do not retain state. This would cause main() to block
for the full timer duration.

Also, if main() waits without a while loop checking a condition flag, could
a POSIX spurious wakeup arbitrarily terminate the benchmark prematurely?

> +	pthread_mutex_unlock(&bench_done_mtx);
> +}

What happens if bench_force_done() is called before the warmup period fully
elapses?

In main(), the final sample count passed to bench->report_final() is
calculated as:

state.res_cnt - env.warmup_sec

If state.res_cnt is strictly less than env.warmup_sec, the resulting
negative count will bypass the iteration loops in the report_final
callbacks (like ops_report_final()), leaving aggregate variables at zero.

Would subsequent calculations divide by zero and corrupt the benchmark
summary?

-- 
Sashiko AI review · https://sashiko.dev/#/patchset/20260420111726.2118636-1-puranjay@kernel.org?part=1


Thread overview: 16+ messages
2026-04-20 11:17 [RFC PATCH bpf-next 0/6] selftests/bpf: Add XDP load-balancer benchmark Puranjay Mohan
2026-04-20 11:17 ` [RFC PATCH bpf-next 1/6] selftests/bpf: Add bench_force_done() for early benchmark completion Puranjay Mohan
2026-04-20 12:41   ` sashiko-bot [this message]
2026-04-20 15:32   ` Mykyta Yatsenko
2026-04-20 11:17 ` [RFC PATCH bpf-next 2/6] selftests/bpf: Add BPF batch-timing library Puranjay Mohan
2026-04-20 13:18   ` sashiko-bot
2026-04-22  1:10   ` Alexei Starovoitov
2026-04-20 11:17 ` [RFC PATCH bpf-next 3/6] selftests/bpf: Add XDP load-balancer common definitions Puranjay Mohan
2026-04-20 13:26   ` sashiko-bot
2026-04-20 11:17 ` [RFC PATCH bpf-next 4/6] selftests/bpf: Add XDP load-balancer BPF program Puranjay Mohan
2026-04-20 13:57   ` sashiko-bot
2026-04-20 11:17 ` [RFC PATCH bpf-next 5/6] selftests/bpf: Add XDP load-balancer benchmark driver Puranjay Mohan
2026-04-20 17:11   ` sashiko-bot
2026-04-20 11:17 ` [RFC PATCH bpf-next 6/6] selftests/bpf: Add XDP load-balancer benchmark run script Puranjay Mohan
2026-04-20 17:36   ` sashiko-bot
2026-04-22  1:16 ` [RFC PATCH bpf-next 0/6] selftests/bpf: Add XDP load-balancer benchmark Alexei Starovoitov
