public inbox for linux-kernel@vger.kernel.org
From: Frederic Weisbecker <fweisbec@gmail.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: LKML <linux-kernel@vger.kernel.org>, Ingo Molnar <mingo@elte.hu>,
	Arnaldo Carvalho de Melo <acme@redhat.com>,
	Paul Mackerras <paulus@samba.org>,
	Stephane Eranian <eranian@google.com>,
	Will Deacon <will.deacon@arm.com>,
	Paul Mundt <lethal@linux-sh.org>,
	David Miller <davem@davemloft.net>,
	Borislav Petkov <bp@amd64.org>
Subject: Re: [RFC PATCH 5/6] perf: Fix race in callchains
Date: Sat, 3 Jul 2010 22:28:17 +0200
Message-ID: <20100703202814.GA5232@nowhere>
In-Reply-To: <1278094055.1917.285.camel@laptop>

On Fri, Jul 02, 2010 at 08:07:35PM +0200, Peter Zijlstra wrote:
> On Thu, 2010-07-01 at 17:36 +0200, Frederic Weisbecker wrote:
> > Now that software events no longer run with interrupts disabled
> > in the event path, callchains can nest in any context. So
> > separating NMI and other contexts into two buffers has become racy.
> > 
> > Fix this by providing one buffer per nesting level. Given the size
> > of the callchain entries (2040 bytes * 4), we now need to allocate
> > them dynamically.
> 
> OK, so I guess you want to allocate them because 8k per CPU is too
> much to always have around?

Right. I know it really adds complexity, and I hesitated a lot before
doing it. But I think it's necessary.
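
Just for the numbers, assuming PERF_MAX_STACK_DEPTH == 255 as in the
current tree (the split into task/softirq/hardirq/NMI levels is my
reading of the 4 buffers in the patch):

struct perf_callchain_entry {
	__u64	nr;
	__u64	ip[255];	/* PERF_MAX_STACK_DEPTH */
};

/*
 * ~2040 bytes of ips per entry, one entry per nesting level
 * (task, softirq, hardirq, NMI), hence the ~8k per cpu if we
 * were to keep them around permanently.
 */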

> > +static int get_callchain_buffers(void)
> > +{
> > +	int i;
> > +	int err = 0;
> > +	struct perf_callchain_entry_cpus *buf;
> > +
> > +	mutex_lock(&callchain_mutex);
> > +
> > +	if (WARN_ON_ONCE(++nr_callchain_events < 1)) {
> > +		err = -EINVAL;
> > +		goto exit;
> > +	}
> > +
> > +	if (nr_callchain_events > 1)
> > +		goto exit;
> > +
> > +	for (i = 0; i < 4; i++) {
> > +		buf = kzalloc(sizeof(*buf), GFP_KERNEL);
> > +		/* free_event() will clean the rest */
> > +		if (!buf) {
> > +			err = -ENOMEM;
> > +			goto exit;
> > +		}
> > +		buf->entries = alloc_percpu(struct perf_callchain_entry);
> > +		if (!buf->entries) {
> > +			kfree(buf);
> > +			err = -ENOMEM;
> > +			goto exit;
> > +		}
> > +		rcu_assign_pointer(callchain_entries[i], buf);
> > +	}
> > +
> > +exit:
> > +	mutex_unlock(&callchain_mutex);
> > +
> > +	return err;
> > +}
> 
> > +static void put_callchain_buffers(void)
> > +{
> > +	int i;
> > +	struct perf_callchain_entry_cpus *entry;
> > +
> > +	mutex_lock(&callchain_mutex);
> > +
> > +	if (WARN_ON_ONCE(--nr_callchain_events < 0))
> > +		goto exit;
> > +
> > +	if (nr_callchain_events > 0)
> > +		goto exit;
> > +
> > +	for (i = 0; i < 4; i++) {
> > +		entry = callchain_entries[i];
> > +		if (entry) {
> > +			callchain_entries[i] = NULL;
> > +			call_rcu(&entry->rcu_head, release_callchain_buffers);
> > +		}
> > +	}
> > +
> > +exit:
> > +	mutex_unlock(&callchain_mutex);
> > +}
> 
> If you make nr_callchain_events an atomic_t, then you can do the
> refcounting outside the mutex. See the existing user of
> atomic_dec_and_mutex_lock().
> 
> I would also split it into get/put and alloc/free functions for clarity.

OK, I will.
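
Roughly like this, I suppose (just a sketch, with
alloc_callchain_buffers()/release_callchain_buffers() standing for
the split-out alloc/free helpers):

static atomic_t nr_callchain_events;
static DEFINE_MUTEX(callchain_mutex);

static int get_callchain_buffers(void)
{
	int err = 0;

	mutex_lock(&callchain_mutex);

	/* Only the first user allocates the per cpu buffers */
	if (atomic_inc_return(&nr_callchain_events) == 1) {
		err = alloc_callchain_buffers();
		if (err)
			atomic_dec(&nr_callchain_events);
	}

	mutex_unlock(&callchain_mutex);

	return err;
}

static void put_callchain_buffers(void)
{
	/* The last user drops the count to 0, takes the mutex and frees */
	if (atomic_dec_and_mutex_lock(&nr_callchain_events,
				      &callchain_mutex)) {
		release_callchain_buffers();
		mutex_unlock(&callchain_mutex);
	}
}
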
> I'm not at all sure why you're using RCU though.
> 
> > @@ -1895,6 +2072,8 @@ static void free_event(struct perf_event *event)
> >  			atomic_dec(&nr_comm_events);
> >  		if (event->attr.task)
> >  			atomic_dec(&nr_task_events);
> > +		if (event->attr.sample_type & PERF_SAMPLE_CALLCHAIN)
> > +			put_callchain_buffers();
> >  	}
> >  
> >  	if (event->buffer) {
> 
> If this was the last event, there's no callchain user left, so nobody can
> be here:
> 
> > @@ -3480,14 +3610,20 @@ static void perf_event_output(struct perf_event *event, int nmi,
> >  	struct perf_output_handle handle;
> >  	struct perf_event_header header;
> >  
> > +	/* protect the callchain buffers */
> > +	rcu_read_lock();
> > +
> >  	perf_prepare_sample(&header, data, event, regs);
> >  
> >  	if (perf_output_begin(&handle, event, header.size, nmi, 1))
> > -		return;
> > +		goto exit;
> >  
> >  	perf_output_sample(&handle, &header, data, event);
> >  
> >  	perf_output_end(&handle);
> > +
> > +exit:
> > +	rcu_read_unlock();
> >  }
> 
> Rendering that RCU stuff superfluous.

Maybe I'm missing something that would make this safe without RCU.

But consider a perf event running on CPU 1 while you close its fd on
CPU 0. CPU 1 has started to use a callchain buffer when it receives
the IPI to retire the event from the CPU, but it has yet to finish
its callchain processing.

If CPU 0 releases the callchain buffers right after that, CPU 1 may
crash in the middle of that processing.

So you need to wait for the grace period to end.
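
Concretely, the lookup on the reader side would be something like
this (a sketch; get_callchain_entry() is a name I'm making up here,
the rest comes from the patch):

/*
 * Runs under the rcu_read_lock() taken in perf_event_output().
 * If another CPU does the final put_callchain_buffers()
 * concurrently, call_rcu() defers release_callchain_buffers()
 * until we leave the read side section, so the buffer can't be
 * freed under our feet.
 */
static struct perf_callchain_entry *get_callchain_entry(int level)
{
	struct perf_callchain_entry_cpus *buf;

	buf = rcu_dereference(callchain_entries[level]);
	if (!buf)
		return NULL;

	return per_cpu_ptr(buf->entries, smp_processor_id());
}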

Thread overview: 16+ messages
2010-07-01 15:35 [RFC PATCH 0/6] perf: cleanup and fixes Frederic Weisbecker
2010-07-01 15:35 ` [RFC PATCH 1/6] perf: Drop unappropriate tests on arch callchains Frederic Weisbecker
2010-07-01 15:35 ` [RFC PATCH 2/6] perf: Generalize callchain_store() Frederic Weisbecker
2010-07-01 15:35 ` [RFC PATCH 3/6] perf: Generalize some arch callchain code Frederic Weisbecker
2010-07-01 15:46   ` Peter Zijlstra
2010-07-01 15:47     ` Frederic Weisbecker
2010-07-01 15:49     ` Frederic Weisbecker
2010-07-01 15:51       ` Peter Zijlstra
2010-07-01 15:53         ` Frederic Weisbecker
2010-07-01 15:36 ` [RFC PATCH 4/6] perf: Factorize callchain context handling Frederic Weisbecker
2010-07-01 15:36 ` [RFC PATCH 5/6] perf: Fix race in callchains Frederic Weisbecker
2010-07-01 15:42   ` Frederic Weisbecker
2010-07-02 18:07   ` Peter Zijlstra
2010-07-03 20:28     ` Frederic Weisbecker [this message]
2010-07-01 15:36 ` [RFC PATCH 6/6] perf: Fix double put_ctx Frederic Weisbecker
  -- strict thread matches above, loose matches on Subject: below --
2010-08-16 20:48 [RFC PATCH 0/0 v3] callchain fixes and cleanups Frederic Weisbecker
2010-08-16 20:48 ` [RFC PATCH 5/6] perf: Fix race in callchains Frederic Weisbecker