From: xiakaixu <xiakaixu@huawei.com>
To: Alexei Starovoitov <ast@plumgrid.com>
Cc: <davem@davemloft.net>, <acme@kernel.org>, <mingo@redhat.com>,
	<a.p.zijlstra@chello.nl>, <masami.hiramatsu.pt@hitachi.com>,
	<jolsa@kernel.org>, <daniel@iogearbox.net>, <wangnan0@huawei.com>,
	<linux-kernel@vger.kernel.org>, <pi3orama@163.com>,
	<hekuang@huawei.com>, <netdev@vger.kernel.org>
Subject: Re: [PATCH V2 2/2] bpf: control a set of perf events by creating a new ioctl PERF_EVENT_IOC_SET_ENABLER
Date: Thu, 15 Oct 2015 10:21:17 +0800
Message-ID: <561F0D9D.4000205@huawei.com>
In-Reply-To: <561EC917.8090001@plumgrid.com>

On 2015/10/15 5:28, Alexei Starovoitov wrote:
> On 10/14/15 5:37 AM, Kaixu Xia wrote:
>> +    event->p_sample_disable = &enabler_event->sample_disable;
> 
> I don't like it as a concept and it's buggy implementation.
> What happens here when enabler is alive, but other event is destroyed?
> 
>> --- a/kernel/trace/bpf_trace.c
>> +++ b/kernel/trace/bpf_trace.c
>> @@ -221,9 +221,12 @@ static u64 bpf_perf_event_sample_control(u64 r1, u64 index, u64 flag, u64 r4, u6
>>       struct bpf_array *array = container_of(map, struct bpf_array, map);
>>       struct perf_event *event;
>>
>> -    if (unlikely(index >= array->map.max_entries))
>> +    if (unlikely(index > array->map.max_entries))
>>           return -E2BIG;
>>
>> +    if (index == array->map.max_entries)
>> +        index = 0;
> 
> what is this hack for ?
> 
> Either use notification and user space disable or
> call bpf_perf_event_sample_control() manually for each cpu.

I will discard the current implementation that controls a set of
perf events via the 'enabler' event. Calling bpf_perf_event_sample_control()
manually for each cpu is fine. Maybe we can add a loop that controls all
the events stored in the map by checking the index, OK?
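
A rough sketch of the loop I have in mind (illustrative only; treating
index == max_entries as an "all events" sentinel and using an atomic
sample_disable toggle are assumptions on top of patch 1/2, not final code):

    static u64 bpf_perf_event_sample_control(u64 r1, u64 index, u64 flag,
                                             u64 r4, u64 r5)
    {
            struct bpf_map *map = (struct bpf_map *)(unsigned long) r1;
            struct bpf_array *array = container_of(map, struct bpf_array, map);
            struct perf_event *event;
            int i;

            if (unlikely(index > array->map.max_entries))
                    return -E2BIG;

            if (index == array->map.max_entries) {
                    /* sentinel: walk every populated slot in the map */
                    for (i = 0; i < array->map.max_entries; i++) {
                            event = (struct perf_event *)array->ptrs[i];
                            if (!event)
                                    continue;
                            /* flag != 0 assumed here to mean "disable sampling" */
                            atomic_set(&event->sample_disable, !!flag);
                    }
                    return 0;
            }

            event = (struct perf_event *)array->ptrs[index];
            if (!event)
                    return -ENOENT;

            atomic_set(&event->sample_disable, !!flag);
            return 0;
    }

That would avoid the index remapping hack above and still let a program
toggle either one event or the whole map in a single helper call.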
