From: Alexei Starovoitov <ast@plumgrid.com>
To: Daniel Wagner <daniel.wagner@bmw-carit.de>
Cc: linux-kernel@vger.kernel.org
Subject: Re: [PATCH v0] bpf: BPF based latency tracing
Date: Fri, 19 Jun 2015 00:06:17 -0700 [thread overview]
Message-ID: <5583BF69.3090204@plumgrid.com> (raw)
In-Reply-To: <5583B1A9.60503@bmw-carit.de>
On 6/18/15 11:07 PM, Daniel Wagner wrote:
>> I'm only a bit suspicious of kprobes, since we have:
>> NOKPROBE_SYMBOL(preempt_count_sub)
>> but trace_preempt_on() called by preempt_count_sub()
>> doesn't have this mark...
> The original commit indicates that anything called from
> preempt_disable() should also be marked as NOKPROBE_SYMBOL:
>
> commit 43627582799db317e966ecb0002c2c3c9805ec0f
> Author: Srinivasa Ds <srinivasa@in.ibm.com>  Sun Feb 24 00:24:04 2008
> Committer: Linus Torvalds <torvalds@woody.linux-foundation.org>  Sun Feb 24 02:13:24 2008
> Original File: kernel/sched.c
>
> kprobes: refuse kprobe insertion on add/sub_preempt_counter()
...
> Obviously, this would render this patch useless.
well, I've tracked it to that commit as well, but I couldn't find
any discussion about kprobe crashes that led to that patch.
kprobe has its own mechanism to prevent recursion.
>>> +SEC("kprobe/trace_preempt_off")
> BTW, is there a reason why built-in
> tracepoints/events are not supported? It looks like it is only an
> artificial limitation of bpf_helpers.
The original bpf+tracing patch attached programs to both
tracepoints and kprobes, but there was a concern that it
promotes tracepoint arguments to stable ABI, since tracepoints in
general are considered stable by most maintainers.
So we decided to go with bpf+kprobe for now; since kprobes
are unstable, no one can complain that scripts suddenly
break because a probed function disappears or its arguments change.
Since then we've discussed attaching to the trace marker, debug
tracepoints, and other things, so hopefully that will be ready soon.
>>> +int bpf_prog1(struct pt_regs *ctx)
>>> +{
>>> +	int cpu = bpf_get_smp_processor_id();
>>> +	u64 *ts = bpf_map_lookup_elem(&my_map, &cpu);
>>> +
>>> +	if (ts)
>>> +		*ts = bpf_ktime_get_ns();
>>
>> btw, I'm planning to add native per-cpu maps which will
>> speed up things more and reduce measurement overhead.
> Funny I was about to suggest something like this :)
>
>> I think you can retarget this patch to net-next and send
>> it to netdev. It's not too late for this merge window.
> I'll rebase it to net-next.
Great :)
Thread overview: 4+ messages
2015-06-18 11:40 [PATCH v0] bpf: BPF based latency tracing Daniel Wagner
2015-06-18 17:06 ` Alexei Starovoitov
2015-06-19 6:07 ` Daniel Wagner
2015-06-19 7:06 ` Alexei Starovoitov [this message]