From: Frederic Weisbecker <fweisbec@gmail.com>
To: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
Cc: Ingo Molnar <mingo@elte.hu>,
	Peter Zijlstra <peterz@infradead.org>,
	Paul Mackerras <paulus@samba.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context()
Date: Tue, 19 Jan 2010 09:58:15 +0100	[thread overview]
Message-ID: <20100119085811.GA5145@nowhere> (raw)
In-Reply-To: <4B5508AF.1080302@cn.fujitsu.com>

On Tue, Jan 19, 2010 at 09:19:43AM +0800, Xiao Guangrong wrote:
> 
> 
> Frederic Weisbecker wrote:
> > On Mon, Jan 18, 2010 at 09:42:34PM +0800, Xiao Guangrong wrote:
> >> It only disables preemption in perf_swevent_get_recursion_context(),
> >> which can't avoid races with hard-irqs and NMIs.
> >>
> >> In this patch, we use an atomic operation to avoid the race and
> >> reduce the size of cpu_ctx->recursion; it also removes the need
> >> to disable preemption
> >>
> >> Signed-off-by: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
> > 
> > 
> > 
> > I don't understand what is racy in what we have currently.
> > 
> 
> It's because hard-irqs (an interrupt handler can run with interrupts
> enabled) and NMIs can nest, for example:
> 
> int perf_swevent_get_recursion_context(void)
> {
> 	......
> 	if (cpuctx->recursion[rctx]) {
> 		put_cpu_var(perf_cpu_context);
> 		return -1;
> 	}
> 
> 	/*
> 	 * Another interrupt handler/NMI can re-enter here; if that
> 	 * happens, it makes the recursion value chaotic
> 	 */
> 	cpuctx->recursion[rctx]++;
> 	......




I still don't understand the problem.

It's not a fight between different cpus, it's a local per-cpu
fight.

NMIs can't nest inside other NMIs, but a hardirq can nest inside
another hardirq; we don't care much about that case though.
So let's imagine the following sequence, a fight between nested
hardirqs:

cpuctx->recursion[irq] initially = 0

Interrupt (level 0):

       if (cpuctx->recursion[rctx]) {
               put_cpu_var(perf_cpu_context);
               return -1;
       }

Interrupt (level 1):


	cpuctx->recursion[rctx]++; // = 1

	...
	do something
	...
	cpuctx->recursion[rctx]--; // = 0

End Interrupt (level 1)

	cpuctx->recursion[rctx]++; // = 1

        ...
        do something
        ...
        cpuctx->recursion[rctx]--; // = 0

End interrupt (level 0)

Another sequence could be: interrupt level 0 has already
incremented recursion and we are interrupted by irq level 1,
which then won't be able to get the recursion context. But
that's not a big deal, I think.
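The sequence above can be replayed as a user-space sketch (all names
here are illustrative, not the kernel's): a plain per-cpu counter stays
consistent even when a nested handler fires at the worst point, between
the check and the increment, because on one cpu the nested handler runs
to completion before the outer one resumes:

```c
#include <assert.h>

/* Models cpuctx->recursion[rctx] for one cpu and one context level. */
static int recursion;

/* Nested (level 1) handler: on a single cpu it runs to completion
 * before level 0 resumes, so its ++/-- pair is self-contained. */
static void irq_level1(void)
{
	if (recursion)		/* busy: would bail out */
		return;
	recursion++;		/* = 1 */
	/* ... handle software event ... */
	recursion--;		/* = 0 */
}

/* Worst-case interleave: level 0 is "interrupted" between its check
 * and its increment, exactly where the quoted comment worries. */
static int demo(void)
{
	if (recursion)		/* level 0 check: passes */
		return -1;
	irq_level1();		/* nested hardirq fires right here */
	recursion++;		/* level 0 resumes; = 1 */
	/* ... handle software event ... */
	recursion--;		/* = 0 */
	return recursion;	/* back to a consistent 0 */
}
```

The counter ends at 0 either way; the interleavings only differ in
whether the nested handler gets the context or bails out.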


> > This looks broken. We don't call back perf_swevent_put_recursion_context
> > in the fail case, so the bit won't ever be cleared once we recurse.
> > 
> 
> Um, I think we can't clear the bit in this fail case; consider the
> sequence below:
> 
>  path A:                                path B
> 
>                                 set bit but find the bit already set
>  atomic set bit                                 |
>     |                                           |
>     V                                           |
>  handle SW event                                | 
>     |                                           V
>     V                               exit and not clear the bit 
>  atomic clear bit
> 
> After A and B, the bit is still zero
> 
> Right? :-)


Ah indeed, it will be cleared by the interrupted path.
I still don't understand what this patch brings us though.
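For reference, the atomic-bit variant the patch proposes could look like
this user-space sketch, with GCC/Clang __atomic builtins standing in for
the kernel's test_and_set_bit()/clear_bit() (function names hypothetical).
It also shows why the failing path B must not clear the bit:

```c
#include <assert.h>

/* One bit per recursion context, manipulated atomically. */
static unsigned long recursion_bits;

static int get_recursion_ctx(int rctx)
{
	unsigned long bit = 1UL << rctx;

	/* Atomically set the bit; the old value tells us if we recursed. */
	if (__atomic_fetch_or(&recursion_bits, bit, __ATOMIC_SEQ_CST) & bit)
		return -1;	/* path B: already set, must NOT clear it */
	return rctx;		/* path A: we own the bit */
}

static void put_recursion_ctx(int rctx)
{
	__atomic_fetch_and(&recursion_bits, ~(1UL << rctx), __ATOMIC_SEQ_CST);
}

/* Replays the quoted path A / path B sequence. */
static unsigned long demo(void)
{
	int a = get_recursion_ctx(0);	/* path A: sets the bit */
	int b = get_recursion_ctx(0);	/* path B: finds it set, fails */
	assert(a == 0 && b == -1);
	/* path B exits without clearing; only the owner, path A, clears: */
	put_recursion_ctx(a);
	return recursion_bits;		/* bit is zero again */
}
```

If path B cleared the bit on failure, it would release a context it
never owned, and path A's later clear would be a double release.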



Thread overview: 25+ messages
2010-01-18 13:42 [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context() Xiao Guangrong
2010-01-18 13:44 ` [PATCH 2/3] perf_event: cleanup for event profile buffer operation Xiao Guangrong
2010-01-18 13:46   ` [PATCH 3/3] tracing/kprobe: cleanup unused return value of function Xiao Guangrong
2010-01-18 16:16     ` Masami Hiramatsu
2010-01-19  8:37     ` [PATCH 1/3 v2] perf_event: fix race in perf_swevent_get_recursion_context() Xiao Guangrong
2010-01-19  8:46       ` Peter Zijlstra
2010-01-19  9:06         ` Xiao Guangrong
2010-01-19  8:39     ` [PATCH 2/3 v2] perf_event: cleanup for event profile buffer operation Xiao Guangrong
2010-01-19  8:41     ` [PATCH 3/3 v2] tracing/kprobe: cleanup unused return value of function Xiao Guangrong
2010-01-18 16:21   ` [PATCH 2/3] perf_event: cleanup for event profile buffer operation Masami Hiramatsu
2010-01-18 17:20     ` Frederic Weisbecker
2010-01-18 17:48       ` Masami Hiramatsu
2010-01-18 18:02         ` Frederic Weisbecker
2010-01-19  1:26         ` Xiao Guangrong
2010-01-19  9:00           ` Frederic Weisbecker
2010-01-19 14:26             ` Masami Hiramatsu
2010-01-18 17:11   ` Frederic Weisbecker
2010-01-18 13:55 ` [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context() Peter Zijlstra
2010-01-19  7:36   ` Xiao Guangrong
2010-01-19  8:41     ` Peter Zijlstra
2010-01-18 16:41 ` Frederic Weisbecker
2010-01-19  1:19   ` Xiao Guangrong
2010-01-19  8:46     ` Peter Zijlstra
2010-01-19  8:58     ` Frederic Weisbecker [this message]
2010-01-19  9:09       ` Xiao Guangrong
