From: Xiao Guangrong <xiaoguangrong@cn.fujitsu.com>
To: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@elte.hu>,
Peter Zijlstra <peterz@infradead.org>,
Paul Mackerras <paulus@samba.org>,
LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context()
Date: Tue, 19 Jan 2010 17:09:44 +0800
Message-ID: <4B5576D8.90804@cn.fujitsu.com>
In-Reply-To: <20100119085811.GA5145@nowhere>
Frederic Weisbecker wrote:
>
> I still don't understand the problem.
>
> It's not like a fight between different cpus, it's a local per cpu
> fight.
>
> NMIs can't nest inside other NMIs, but hardirqs can nest inside
> other hardirqs; we don't care much about these though.
> So let's imagine the following sequence, a fight between nested
> hardirqs:
>
> cpuctx->recursion[irq] initially = 0
>
> Interrupt (level 0):
>
>     if (cpuctx->recursion[rctx]) {
>             put_cpu_var(perf_cpu_context);
>             return -1;
>     }
>
>     Interrupt (level 1):
>
>         cpuctx->recursion[rctx]++; // = 1
>         ...
>         do something
>         ...
>         cpuctx->recursion[rctx]--; // = 0
>
>     End Interrupt (level 1)
>
>     cpuctx->recursion[rctx]++; // = 1
>     ...
>     do something
>     ...
>     cpuctx->recursion[rctx]--; // = 0
>
> End interrupt (level 0)
>
> Another sequence could be Interrupt level 0 has
> already incremented recursion and we are interrupted by
> irq level 1 which then won't be able to get the recursion
> context. But that's not a big deal I think.
>
Thanks Frederic, I forgot about this nesting behavior of hard-irqs :-(
Sorry for disturbing you all.
- Xiao
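
The per-cpu recursion logic Frederic describes can be sketched in user space. This is a simplified model, not the kernel code: the array here is a plain global standing in for the per-cpu `cpuctx->recursion[]`, and the function names only mirror `perf_swevent_get_recursion_context()` / `put` for readability.

```c
#include <assert.h>

/* Simplified single-CPU model of cpuctx->recursion[]: one counter per
 * context level (e.g. nmi, irq, softirq, task).  In the kernel this
 * array is per-cpu, so there is no cross-CPU fight, only local nesting. */
static int recursion[4];

/* Models perf_swevent_get_recursion_context(): refuse entry when this
 * context level is already marked active, otherwise mark it busy. */
static int get_recursion_context(int rctx)
{
	if (recursion[rctx])
		return -1;		/* nested hit: drop the event */
	recursion[rctx]++;
	return rctx;
}

/* Models perf_swevent_put_recursion_context(): release the level. */
static void put_recursion_context(int rctx)
{
	recursion[rctx]--;
}
```

Either interleaving in the quoted sequence is safe: a level-1 hardirq that runs entirely between level 0's check and increment pairs its own `++`/`--` and leaves the counter balanced, while a level-1 hardirq that arrives after level 0 incremented simply gets `-1` and drops the event.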
Thread overview: 25+ messages
2010-01-18 13:42 [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context() Xiao Guangrong
2010-01-18 13:44 ` [PATCH 2/3] perf_event: cleanup for event profile buffer operation Xiao Guangrong
2010-01-18 13:46 ` [PATCH 3/3] tracing/kprobe: cleanup unused return value of function Xiao Guangrong
2010-01-18 16:16 ` Masami Hiramatsu
2010-01-19 8:37 ` [PATCH 1/3 v2] perf_event: fix race in perf_swevent_get_recursion_context() Xiao Guangrong
2010-01-19 8:46 ` Peter Zijlstra
2010-01-19 9:06 ` Xiao Guangrong
2010-01-19 8:39 ` [PATCH 2/3 v2] perf_event: cleanup for event profile buffer operation Xiao Guangrong
2010-01-19 8:41 ` [PATCH 3/3 v2] tracing/kprobe: cleanup unused return value of function Xiao Guangrong
2010-01-18 16:21 ` [PATCH 2/3] perf_event: cleanup for event profile buffer operation Masami Hiramatsu
2010-01-18 17:20 ` Frederic Weisbecker
2010-01-18 17:48 ` Masami Hiramatsu
2010-01-18 18:02 ` Frederic Weisbecker
2010-01-19 1:26 ` Xiao Guangrong
2010-01-19 9:00 ` Frederic Weisbecker
2010-01-19 14:26 ` Masami Hiramatsu
2010-01-18 17:11 ` Frederic Weisbecker
2010-01-18 13:55 ` [PATCH 1/3] perf_event: fix race in perf_swevent_get_recursion_context() Peter Zijlstra
2010-01-19 7:36 ` Xiao Guangrong
2010-01-19 8:41 ` Peter Zijlstra
2010-01-18 16:41 ` Frederic Weisbecker
2010-01-19 1:19 ` Xiao Guangrong
2010-01-19 8:46 ` Peter Zijlstra
2010-01-19 8:58 ` Frederic Weisbecker
2010-01-19 9:09 ` Xiao Guangrong [this message]