BPF List
From: Eduard Zingerman <eddyz87@gmail.com>
To: Paul Chaignon <paul.chaignon@gmail.com>, bpf@vger.kernel.org
Cc: Alexei Starovoitov <ast@kernel.org>,
	Daniel Borkmann <daniel@iogearbox.net>,
	Andrii Nakryiko <andrii@kernel.org>,
	Kumar Kartikeya Dwivedi <memxor@gmail.com>
Subject: Re: [PATCH bpf-next 1/2] bpf: Report maximum combined stack depth
Date: Tue, 12 May 2026 14:53:33 -0700	[thread overview]
Message-ID: <2a6d16baade6d1d22ebda367c67f49ad9aeb4dc5.camel@gmail.com> (raw)
In-Reply-To: <05f82d1180b68e856bb0cc03a5cd86305f5b7669.1778604369.git.paul.chaignon@gmail.com>

On Tue, 2026-05-12 at 19:19 +0200, Paul Chaignon wrote:
> We've hit the 512 bytes limit on stack depth a few times in Cilium
> recently. As a result, we started reporting in CI our current maximum
> stack depth across all configurations for each BPF program.
> 
> Unfortunately, that is not trivial to compute in userspace. The
> verifier reports the stack depths of individual subprogs at the end of
> the logs. However the maximum combined stack depth also depends on the
> callgraph of those subprogs (the max combined stack depth is the height
> of the callgraph weighted by per-subprog stack depths). We can compute
> a callgraph in userspace from the loaded instructions, but it often
> doesn't match the verifier's own callgraph because of dead code
> elimination. Our current approach relies on dumping the BPF_LOG_LEVEL2
> logs, but this feels overkill considering the verifier already has the
> information we need.
> 
> The patch lets the verifier dump the maximum combined stack depth in
> the logs, on the same line as the per-subprog stack depths:
> 
>     stack depth 16+256 max 272
> 
> The per-subprog stack depths and the new max stack depth are not
> directly comparable. The former is sometimes updated during fixups,
> while the latter is not. As a result, even with a single subprog, we
> may end up with two slightly different values. The aim of the new max
> value is to be closest to what is actually enforced by the verifier.
> 
> Signed-off-by: Paul Chaignon <paul.chaignon@gmail.com>
> ---
>  include/linux/bpf_verifier.h | 2 ++
>  kernel/bpf/verifier.c        | 6 +++++-
>  2 files changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/bpf_verifier.h b/include/linux/bpf_verifier.h
> index 976e2b2f40e8..d91843994c82 100644
> --- a/include/linux/bpf_verifier.h
> +++ b/include/linux/bpf_verifier.h
> @@ -936,6 +936,8 @@ struct bpf_verifier_env {
>  	u32 prev_insn_processed, insn_processed;
>  	/* number of jmps, calls, exits analyzed so far */
>  	u32 prev_jmps_processed, jmps_processed;
> +	/* maximum combined stack depth */
> +	u32 max_stack_depth;
>  	/* total verification time */
>  	u64 verification_time;
>  	/* maximum number of verifier states kept in 'branching' instructions */
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index 11054ad89c14..896dbb4515d7 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -5045,6 +5045,8 @@ static int check_max_stack_depth_subprog(struct bpf_verifier_env *env, int idx,
>  		}
>  	} else {
>  		depth += subprog_depth;
> +		if (depth > env->max_stack_depth)
> +			env->max_stack_depth = depth;
>  		if (depth > MAX_BPF_STACK) {
>  			total = 0;
>  			for (tmp = idx; tmp >= 0; tmp = dinfo[tmp].caller)
> @@ -5185,6 +5187,8 @@ static int check_max_stack_depth(struct bpf_verifier_env *env)
>  	if (priv_stack_mode == PRIV_STACK_UNKNOWN)
>  		priv_stack_mode = bpf_enable_priv_stack(env->prog);
>  
> +	env->max_stack_depth = env->subprog_info[0].stack_depth;
> +

I think this line is redundant: the loop below calls
check_max_stack_depth_subprog() for the main subprogram anyway.
Additionally, it does not round the value the same way
check_max_stack_depth_subprog() does. Also note that if the main
subprogram uses a private stack, its depth is omitted from the
cumulative depth computation.

>  	/* All async_cb subprogs use normal kernel stack. If a particular
>  	 * subprog appears in both main prog and async_cb subtree, that
>  	 * subprog will use normal kernel stack to avoid potential nesting.
> @@ -18289,7 +18293,7 @@ static void print_verification_stats(struct bpf_verifier_env *env)
>  		verbose(env, "stack depth %d", env->subprog_info[0].stack_depth);
>  		for (i = 1; i < subprog_cnt; i++)
>  			verbose(env, "+%d", env->subprog_info[i].stack_depth);
> -		verbose(env, "\n");
> +		verbose(env, " max %d\n", env->max_stack_depth);
>  		verbose(env, "insns processed %d", env->subprog_info[0].insn_processed);
>  		for (i = 1; i < subprog_cnt; i++)
>  			if (bpf_subprog_is_global(env, i))

Maybe also add a veristat metric for this value?

Thread overview: 4+ messages
2026-05-12 17:19 [PATCH bpf-next 1/2] bpf: Report maximum combined stack depth Paul Chaignon
2026-05-12 17:19 ` [PATCH bpf-next 2/2] selftests/bpf: Test reported max " Paul Chaignon
2026-05-12 21:53 ` Eduard Zingerman [this message]
2026-05-13 14:06   ` [PATCH bpf-next 1/2] bpf: Report maximum combined " Paul Chaignon
