Subject: [RFC] A couple of issues on BPF callstack
From: Namhyung Kim @ 2022-03-04 23:28 UTC
  To: Alexei Starovoitov, Daniel Borkmann, Andrii Nakryiko
  Cc: Martin KaFai Lau, Song Liu, Yonghong Song, John Fastabend,
	KP Singh, netdev, bpf, Eugene Loh, Peter Zijlstra, Hao Luo

Hello,

While working on lock contention tracepoints [1] for future BPF use,
I found some issues with stack traces in BPF programs.  Maybe I'm
missing something, so I'd like to share my thoughts and get your
feedback.  Please correct me if I'm wrong.

The first thing I found is how bpf_get_stack{,id}() handles skipped
frames.  Initially I wanted a short stack trace, say 4 entries deep,
to identify callers quickly, but it turned out that 4 is not enough:
the trace was entirely filled with the BPF code itself.

So I set it to skip 4 frames, but then it always returned an error
(-EFAULT).  After some time I figured out that BPF doesn't allow the
number of skipped frames to be greater than or equal to the buffer
size.  This seems strange and looks like a bug.  I then found a bug
report (and a partial fix) [2] and am working on a full fix now.
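
To make the failure concrete, here is a minimal sketch of what I was
doing (libbpf-style; the tracepoint is the one proposed in [1], and
the program name is just for illustration):

  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  #define NR_FRAMES  4

  SEC("tracepoint/lock/contention_begin")
  int on_contention(void *ctx)
  {
          __u64 ips[NR_FRAMES] = {};

          /* The low 8 bits of the flags hold the number of frames
           * to skip.  Skipping 4 frames with a 4-entry buffer
           * always fails, because the captured depth is capped at
           * the buffer size before the skip is applied.
           */
          long ret = bpf_get_stack(ctx, ips, sizeof(ips), 4);

          if (ret < 0)
                  bpf_printk("stack: %ld", ret);  /* -EFAULT (-14) */
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";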

But that revealed another problem with BPF programs on perf_events,
which use a variant of the stack trace functions.  The difference is
that they need to use the callchain in the perf sample data.  The
perf callchain is saved from the beginning of the entry, while the
BPF callchain is saved into its last slots so that the stack depth
is limited by the buffer size.  But I can handle that.
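
For reference, this is roughly how __bpf_get_stack() caps the depth
today (simplified from kernel/bpf/stackmap.c, not verbatim):

  u32 num_elem = size / elem_size;
  u32 init_nr = sysctl_perf_event_max_stack < num_elem ?
                  0 : sysctl_perf_event_max_stack - num_elem;

  /* Entries are stored starting at index init_nr, so only the
   * last num_elem slots of the callchain entry are filled.
   */
  trace = get_perf_callchain(regs, init_nr, kernel, user,
                             sysctl_perf_event_max_stack,
                             false, false);

A perf-sampled callchain, on the other hand, is captured with
init_nr == 0 and the full max depth, so the same skip arithmetic
cannot be reused as-is.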

More important to me is the content of the (perf) callchain.  If the
event has __PERF_SAMPLE_CALLCHAIN_EARLY, the callchain will contain
context info like PERF_CONTEXT_KERNEL.  So users might or might not
see those entries depending on whether the perf_event was set up
with precise_ip and PERF_SAMPLE_CALLCHAIN.  This doesn't look good.
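
For example, a consumer that wants a uniform trace currently has to
strip the markers itself.  A small userspace sketch (the marker
values come from uapi/linux/perf_event.h; print_callchain() is just
a name I made up):

  #include <stdio.h>
  #include <linux/perf_event.h>   /* enum perf_callchain_context */
  #include <linux/types.h>

  /* All PERF_CONTEXT_* markers are huge unsigned values
   * (>= PERF_CONTEXT_MAX), so one compare filters them out.
   */
  static void print_callchain(const __u64 *ips, unsigned int nr)
  {
          for (unsigned int i = 0; i < nr; i++) {
                  if (ips[i] >= (__u64)PERF_CONTEXT_MAX)
                          continue;       /* e.g. PERF_CONTEXT_KERNEL */
                  printf("  %#llx\n", (unsigned long long)ips[i]);
          }
  }

  int main(void)
  {
          __u64 ips[] = { PERF_CONTEXT_KERNEL,
                          0xffffffff81234567ULL,
                          0xffffffff8abcdef0ULL };  /* fake addrs */

          print_callchain(ips, 3);
          return 0;
  }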

After all, I think it'd be really great if we could skip this
uninteresting info easily.  Maybe we could add flags to skip BPF
code, perf context entries, and even some scheduler code from the
trace, respectively, like stack_trace_consume_entry_nosched() does
(see the sketch below).
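
For reference, the existing nosched filter looks like this (from
kernel/stacktrace.c, quoting roughly):

  static bool stack_trace_consume_entry_nosched(void *cookie,
                                                unsigned long addr)
  {
          if (in_sched_functions(addr))
                  return true;    /* drop the entry, keep walking */
          return stack_trace_consume_entry(cookie, addr);
  }

A BPF-side flag could apply the same kind of predicate to drop BPF
frames and perf context entries.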

Thoughts?

Thanks,
Namhyung


[1] https://lore.kernel.org/all/20220301010412.431299-1-namhyung@kernel.org/
[2] https://lore.kernel.org/bpf/30a7b5d5-6726-1cc2-eaee-8da2828a9a9c@oracle.com/
