From: Paul Chaignon <paul.chaignon@gmail.com>
To: Eduard Zingerman <eddyz87@gmail.com>
Cc: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
	daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com,
	yonghong.song@linux.dev
Subject: Re: [PATCH bpf-next v4 10/14] bpf: change logging scheme for live stack analysis
Date: Mon, 11 May 2026 19:45:09 +0200
Message-ID: <agIVpXzC_bXgJ44N@mail.gmail.com>
In-Reply-To: <639f1b0f97330c98668c00244c0a7bae19e30e3c.camel@gmail.com>

On Fri, May 08, 2026 at 06:46:23AM -0700, Eduard Zingerman wrote:
> On Fri, 2026-05-08 at 01:29 +0200, Paul Chaignon wrote:
> > On Fri, Apr 10, 2026 at 01:56:01PM -0700, Eduard Zingerman wrote:
> > > Instead of breadcrumbs like:
> > > 
> > >   (d2,cs15) frame 0 insn 18 +live -16
> > >   (d2,cs15) frame 0 insn 17 +live -16
> > > 
> > > Print the final accumulated stack use/def data per-func_instance,
> > > per-instruction. Printed func_instances are ordered by callsite and
> > > depth. For example:
> > 
> > Hi Eduard,
> > 
> > Sorry to revive an old thread.
> > 
> > We've started running a kernel with this patchset in Cilium's CI and
> > noticed a big increase in log verbosity for BPF_LOG_LEVEL2. To give an
> > idea, a full dump of all (uncompressed) verifier logs for our tested
> > programs used to take ~8.4G. With this patchset, it now takes 15G.
> > [1] helps a bit, but only brings it down to 12G.
> > 
> > The increase seems to come from print_subprog_arg_access(), which prints
> > full functions with the results of the fixed-point analysis, from
> > print_instances(), which prints full functions with the use/def slots,
> > and to a lesser extent from compute_subprog_args(), which prints the
> > fixed-point iterations.
> 
> Hi Paul,
> 
> Do you have a breakdown? I'd expect most of the log to come from
> arg_track_log(), as it visits instructions multiple times
> (although convergence might take only a few iterations).
> If it is a significant chunk, I think we can drop it completely.

That was also my first guess, but looking at a couple of examples, it
seems to converge fairly quickly. Some statistics to confirm this: if I
filter my 12G of logs down to only the lines containing " -> " (i.e.,
those coming from arg_track_log()), only 30M remain. So arg_track_log()
looks negligible here.
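
For reference, the filtering amounts to something like the sketch below
(Python; the "verifier.log" path is made up, and " -> " is simply the
substring that the arg_track_log() lines contain):

  # Sum the size of verifier log lines containing " -> " (i.e. the lines
  # emitted by arg_track_log()) and compare against the total log size.
  total = matched = 0
  with open("verifier.log", "rb") as f:
      for line in f:
          total += len(line)
          if b" -> " in line:
              matched += len(line)
  print(f"arg_track_log(): {matched} of {total} bytes "
        f"({100 * matched / max(total, 1):.2f}%)")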

> 
> As for print_subprog_arg_access() and print_instances(), as you
> suggest, the output can be combined such that the full function is
> printed only once per call depth. This combined print-out can also
> include the results of the register liveness analysis.

Yep, that makes sense. I'll try to find some time to look into that.

>  
> > I'm wondering whether there are other opportunities to reduce
> > verbosity here (besides [1]). Maybe we don't need to print the
> > fixed-point iterations if we're already printing the results? Or maybe
> > we could put the more detailed liveness-related logs behind
> > BPF_LOG_LEVEL3 or BPF_LOG_LIVENESS?
> 
> Had this been a normal compiler or JIT engine, I'd vote for a
> BPF_LOG_LIVENESS log channel; I'm not sure what our stance is
> regarding user-visible ABI here.

IIUC, it was only released in an RC and the logs themselves are not
really part of the ABI, no? Or is there some other concern I'm missing?

> 
> Regarding BPF_LOG_LEVEL3, I think the original idea behind
> BPF_LOG_LEVEL2 is that it would serve as a "debug log" that regular
> users won't need to consume. What is the motivation behind collecting
> the level 2 log in your CI? Is it to infer clues about programs
> hitting the 1M instruction limit?

At the moment, we collect this (1) in case of failures due to the 1M
limit and (2) to compute the maximum combined stack depth from the
per-subprog stack depths and the callgraph. (1) isn't expected to
happen often and isn't much of an issue. For (2), I'm planning to send
a patch to have the verifier report the max stack depth itself.
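
For context, (2) amounts to a walk of the callgraph that accumulates
per-subprog stack depths along each call chain. A minimal sketch
(Python, with made-up depths and callgraph; it assumes an acyclic
callgraph, which holds since call chains deeper than 8 frames are
rejected anyway):

  # Depths and callgraph below are made up; in practice both are parsed
  # out of the level-2 verifier log.
  def max_combined_depth(depths, callees, subprog=0):
      # Deepest total stack usage over all call chains rooted at `subprog`.
      return depths[subprog] + max(
          (max_combined_depth(depths, callees, c)
           for c in callees.get(subprog, ())),
          default=0,
      )

  depths = {0: 64, 1: 128, 2: 32}  # stack bytes per subprog
  callees = {0: [1, 2], 1: [2]}    # caller -> callees edges
  print(max_combined_depth(depths, callees))  # 64 + 128 + 32 = 224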

Overall, this isn't an issue for us; we're just using a bit more memory
and disk space. The sudden increase was just unexpected, so I thought
I'd have a look :)

> 
> [...]

Thread overview: 30+ messages
2026-04-10 20:55 [PATCH bpf-next v4 00/14] bpf: static stack liveness data flow analysis Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 01/14] bpf: share several utility functions as internal API Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 02/14] bpf: save subprogram name in bpf_subprog_info Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 03/14] bpf: Add spis_*() helpers for 4-byte stack slot bitmasks Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 04/14] bpf: make liveness.c track stack with 4-byte granularity Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 05/14] bpf: 4-byte precise clean_verifier_state Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 06/14] bpf: prepare liveness internal API for static analysis pass Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 07/14] bpf: introduce forward arg-tracking dataflow analysis Eduard Zingerman
2026-04-10 21:44   ` bot+bpf-ci
2026-04-10 21:46     ` Eduard Zingerman
2026-04-10 22:17       ` Alexei Starovoitov
2026-04-10 20:55 ` [PATCH bpf-next v4 08/14] bpf: record arg tracking results in bpf_liveness masks Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 09/14] bpf: simplify liveness to use (callsite, depth) keyed func_instances Eduard Zingerman
2026-04-10 21:39   ` Paul Chaignon
2026-04-10 21:42     ` Eduard Zingerman
2026-04-13 13:31       ` Paul Chaignon
2026-04-10 21:44   ` bot+bpf-ci
2026-04-10 22:33     ` Alexei Starovoitov
2026-04-10 20:56 ` [PATCH bpf-next v4 10/14] bpf: change logging scheme for live stack analysis Eduard Zingerman
2026-05-07 23:29   ` Paul Chaignon
2026-05-08 13:46     ` Eduard Zingerman
2026-05-11 17:45       ` Paul Chaignon [this message]
2026-05-11 18:54         ` Eduard Zingerman
2026-05-11 19:20           ` Kumar Kartikeya Dwivedi
2026-05-11 20:03             ` Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 11/14] selftests/bpf: update existing tests due to liveness changes Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 12/14] selftests/bpf: adjust verifier_log buffers Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 13/14] selftests/bpf: add new tests for static stack liveness analysis Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 14/14] bpf: poison dead stack slots Eduard Zingerman
2026-04-10 22:40 ` [PATCH bpf-next v4 00/14] bpf: static stack liveness data flow analysis patchwork-bot+netdevbpf
