From: Alexander Shishkin <alexander.shishkin@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>,
linux-kernel@vger.kernel.org, vince@deater.net,
eranian@google.com, Arnaldo Carvalho de Melo <acme@infradead.org>
Subject: Re: [PATCH v2 2/5] perf: Free aux pages in unmap path
Date: Mon, 14 Mar 2016 16:04:44 +0200 [thread overview]
Message-ID: <87fuvtyx37.fsf@ashishki-desk.ger.corp.intel.com> (raw)
In-Reply-To: <20160314123837.GU6356@twins.programming.kicks-ass.net>
Peter Zijlstra <peterz@infradead.org> writes:
> On Fri, Mar 04, 2016 at 03:42:46PM +0200, Alexander Shishkin wrote:
>> @@ -4649,10 +4679,22 @@ static void perf_mmap_close(struct vm_area_struct *vma)
>> */
>> if (rb_has_aux(rb) && vma->vm_pgoff == rb->aux_pgoff &&
>> atomic_dec_and_mutex_lock(&rb->aux_mmap_count, &event->mmap_mutex)) {
>> + /*
>> + * Stop all aux events that are writing to this here buffer,
>> + * so that we can free its aux pages and corresponding pmu
>> + * data. Note that after rb::aux_mmap_count dropped to zero,
>> + * they won't start any more (see perf_aux_output_begin()).
>> + */
>> + perf_pmu_output_stop(event);
>
> So to me it seems like we're interested in rb, we don't particularly
> care about @event in this case.
Yeah, @event is used only for its rb and pmu down this path.
>> + if (!has_aux(event))
>> + return;
>> +
>
> if (!parent)
> parent = event;
>
>> + if (rcu_dereference(event->rb) == rb)
> s/event/parent/
>
>> + ro->err = __perf_event_stop(event);
>
>> + else if (parent && rcu_dereference(parent->rb) == rb)
>> + ro->err = __perf_event_stop(event);
>
> and these can go.. However..
You're right: it's the parent that's got the ->rb, but it may well be
the child that's actually running and writing data there. So we need to
stop any running children as well. I think I actually broke this bit
relative to the previous version; it clearly needs a comment explaining
this.
>
>> +}
>> +
>> +static int __perf_pmu_output_stop(void *info)
>> +{
>> + struct perf_event *event = info;
>> + struct pmu *pmu = event->pmu;
>> + struct perf_cpu_context *cpuctx = get_cpu_ptr(pmu->pmu_cpu_context);
>> + struct remote_output ro = {
>> + .rb = event->rb,
>> + };
>> +
>> + rcu_read_lock();
>> + perf_event_aux_ctx(&cpuctx->ctx, __perf_event_output_stop, &ro);
>> + if (cpuctx->task_ctx)
>> + perf_event_aux_ctx(cpuctx->task_ctx, __perf_event_output_stop,
>> + &ro);
>> + rcu_read_unlock();
>> +
>> + return ro.err;
>> +}
>> +
>> +static void perf_pmu_output_stop(struct perf_event *event)
>> +{
>> + int cpu, err;
>> +
>> + /* better be thorough */
>> + get_online_cpus();
>> +restart:
>> + for_each_online_cpu(cpu) {
>> + err = cpu_function_call(cpu, __perf_pmu_output_stop, event);
>> + if (err)
>> + goto restart;
>> + }
>> + put_online_cpus();
>> +}
>
> This seems wildly overkill, could we not iterate rb->event_list like we
> do for the normal buffer?
Actually we can. One problem though is that iterating rb::event_list
requires an rcu read section or the irqsafe rb::event_lock, and we need
to send IPIs. The normal buffer case tears down rb::event_list as it
goes, so it can close the rcu read section right after it fetches one
event from it. In this case, however, we must keep the list intact.
> Sure, we need to IPI for each event found, but that seems better than
> unconditionally sending IPIs to all CPUs.
Actually, won't it "often" be the case that the number of events is a
multiple of the number of cpus? The usual use case is one event per
task per cpu with inheritance enabled, in which case we'll zap multiple
events per IPI.
Regards,
--
Alex