From: Eduard Zingerman <eddyz87@gmail.com>
To: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Cc: Paul Chaignon <paul.chaignon@gmail.com>,
bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org,
daniel@iogearbox.net, martin.lau@linux.dev, kernel-team@fb.com,
yonghong.song@linux.dev
Subject: Re: [PATCH bpf-next v4 10/14] bpf: change logging scheme for live stack analysis
Date: Mon, 11 May 2026 13:03:17 -0700 [thread overview]
Message-ID: <966863ca988cff74c06702c26d318e8d6e2327f9.camel@gmail.com> (raw)
In-Reply-To: <CAP01T76nb17=8eugBG9tPgTY5p-_G4Y9=cq16eiCkFdVujChKw@mail.gmail.com>
On Mon, 2026-05-11 at 21:20 +0200, Kumar Kartikeya Dwivedi wrote:
> On Mon, 11 May 2026 at 20:54, Eduard Zingerman <eddyz87@gmail.com> wrote:
> >
> > On Mon, 2026-05-11 at 19:45 +0200, Paul Chaignon wrote:
> >
> > [...]
> >
> > > > > I'm wondering if maybe there are other opportunities to reduce
> > > > > verbosity here (besides [1]). Maybe we don't need to print the
> > > > > fixed-point iterations if we're already printing the results? Or maybe
> > > > > we could put the more detailed liveness-related logs behind
> > > > > BPF_LOG_LEVEL3 or BPF_LOG_LIVENESS?
> > > >
> > > > Had this been a normal compiler or JIT engine, I'd vote for a
> > > > BPF_LOG_LIVENESS log channel, but I'm not sure what our stance is
> > > > regarding user-visible ABI here.
> > >
> > > IIUC, it was only released in an RC and the logs themselves are not
> > > really part of the ABI, no? Or is there some other concern I'm missing?
> >
> > I mean the valid flag values themselves. I hope the log is not an ABI,
> > given the number of times we have changed it. If BPF_LOG_LEVEL2 is considered
> > a kernel-development-only thing, then splitting it into multiple channels
> > would actually help developers, imo.
> >
> > > > Regarding BPF_LOG_LEVEL3, I think that the original idea behind
> > > > BPF_LOG_LEVEL2 is that it would serve as a "debug log" that regular
> > > > users won't need to consume. What is the motivation behind collecting
> > > > level 2 log on your CI? Is it to infer clues regarding programs
> > > > hitting 1M instructions limit?
> > >
> > > At the moment, we collect this (1) in case of failures due to the 1M
> > > limit and (2) to compute the maximum combined stack depth using the
> > > per-subprog stack depths and the callgraph. (1) is not expected to be
> > > happening often and isn't much of an issue. For (2), I'm planning to
> > > send a patch to have the verifier report the max stack depth itself.
> > >
> > > Overall, this isn't an issue for us. We're just using a bit more memory
> > > and disk space. I just thought the sudden increase was unexpected and
> > > decided to have a look :)
> >
> > For 1M instructions, I have the code to identify loop headers, so
> > error reporting here can be changed as follows:
> > - count the number of times each loop header is visited
> > - when 1M instructions limit is hit, identify the "hottest" loop
> > - print info about the loop
> > - if the loop is supposed to converge (e.g. it is an iterator-based loop),
> >   print out the samples from the states cache, noting which registers
> >   differ between samples.
> >
> > Alongside your future patch to print out the stack depth, this should make
> > log level 3 unnecessary for now, I think.
> >
> > Kartikeya, do we plan to do something about 1M reporting or are we now
> > pivoting towards rust2bpf vision?
>
> The main thing I'm changing is adding more context around an error (by
> relating it to the source), categorization, and mitigation
> information. For now all of this will just be appended to the end of
> the existing logs; once I share something we can discuss specifics.
>
> I do think convergence failures could use more summarized output; right
> now, the huge volume of information makes it difficult for most users
> to figure out why a loop fails to converge. If you have something
> lying around for that, you should go ahead and share it. I was
> wondering if we could avoid emitting a huge volume of info upfront by
> identifying cases of failed loop convergence, instead of logging a lot
> by default and then pruning it later only when the 1M limit is hit.
I don't have anything specifically for error reporting, just the code
to detect the loop structure. But I think that 1M error summarization
should be easy to bolt on top of it. It is also an avenue to start
landing SCEV-related things gradually. I'll take a look.
> Rust-BPF stuff is orthogonal; we will have and still need useful
> verification errors from the in-kernel verifier with or without the
> Rust frontend. It is just likely a lot of the errors will be caught
> ahead of time by encoding a lot of invariants into the Rust type
> system, but the specifics of the solutions there are too speculative
> right now.
The reason I am asking is that Alexei wanted to eventually forgo the 1M
instruction limit completely.
Thread overview: 30+ messages
2026-04-10 20:55 [PATCH bpf-next v4 00/14] bpf: static stack liveness data flow analysis Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 01/14] bpf: share several utility functions as internal API Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 02/14] bpf: save subprogram name in bpf_subprog_info Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 03/14] bpf: Add spis_*() helpers for 4-byte stack slot bitmasks Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 04/14] bpf: make liveness.c track stack with 4-byte granularity Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 05/14] bpf: 4-byte precise clean_verifier_state Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 06/14] bpf: prepare liveness internal API for static analysis pass Eduard Zingerman
2026-04-10 20:55 ` [PATCH bpf-next v4 07/14] bpf: introduce forward arg-tracking dataflow analysis Eduard Zingerman
2026-04-10 21:44 ` bot+bpf-ci
2026-04-10 21:46 ` Eduard Zingerman
2026-04-10 22:17 ` Alexei Starovoitov
2026-04-10 20:55 ` [PATCH bpf-next v4 08/14] bpf: record arg tracking results in bpf_liveness masks Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 09/14] bpf: simplify liveness to use (callsite, depth) keyed func_instances Eduard Zingerman
2026-04-10 21:39 ` Paul Chaignon
2026-04-10 21:42 ` Eduard Zingerman
2026-04-13 13:31 ` Paul Chaignon
2026-04-10 21:44 ` bot+bpf-ci
2026-04-10 22:33 ` Alexei Starovoitov
2026-04-10 20:56 ` [PATCH bpf-next v4 10/14] bpf: change logging scheme for live stack analysis Eduard Zingerman
2026-05-07 23:29 ` Paul Chaignon
2026-05-08 13:46 ` Eduard Zingerman
2026-05-11 17:45 ` Paul Chaignon
2026-05-11 18:54 ` Eduard Zingerman
2026-05-11 19:20 ` Kumar Kartikeya Dwivedi
2026-05-11 20:03 ` Eduard Zingerman [this message]
2026-04-10 20:56 ` [PATCH bpf-next v4 11/14] selftests/bpf: update existing tests due to liveness changes Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 12/14] selftests/bpf: adjust verifier_log buffers Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 13/14] selftests/bpf: add new tests for static stack liveness analysis Eduard Zingerman
2026-04-10 20:56 ` [PATCH bpf-next v4 14/14] bpf: poison dead stack slots Eduard Zingerman
2026-04-10 22:40 ` [PATCH bpf-next v4 00/14] bpf: static stack liveness data flow analysis patchwork-bot+netdevbpf