* [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
@ 2013-03-01 1:36 Yong Zhang
2013-03-01 5:58 ` Yong Zhang
` (2 more replies)
0 siblings, 3 replies; 7+ messages in thread
From: Yong Zhang @ 2013-03-01 1:36 UTC (permalink / raw)
To: linux-kernel, linux-rt-users; +Cc: Thomas Gleixner, Steven Rostedt
From: Yong Zhang <yong.zhang@windriver.com>
Otherwise, the warning below is sometimes shown when running some test:
WARNING: at kernel/sched/core.c:3423 migrate_disable+0xbf/0xd0()
Hardware name: OptiPlex 755
Modules linked in: floppy parport parport_pc minix
Pid: 1800, comm: tst-robustpi8 Tainted: G W 3.4.28-rt40 #1
Call Trace:
[<ffffffff81031f3f>] warn_slowpath_common+0x7f/0xc0
[<ffffffff81031f9a>] warn_slowpath_null+0x1a/0x20
[<ffffffff81066eaf>] migrate_disable+0xbf/0xd0
[<ffffffff81085d95>] exit_pi_state_list+0xa5/0x170
[<ffffffff8102f71f>] mm_release+0x12f/0x170
[<ffffffff81036906>] exit_mm+0x26/0x140
[<ffffffff81090fc6>] ? acct_collect+0x186/0x1c0
[<ffffffff81036b66>] do_exit+0x146/0x930
[<ffffffff810658d1>] ? get_parent_ip+0x11/0x50
[<ffffffff8103760d>] do_group_exit+0x4d/0xc0
[<ffffffff8104828f>] get_signal_to_deliver+0x23f/0x6a0
[<ffffffff810019e5>] do_signal+0x65/0x5e0
[<ffffffff81047816>] ? group_send_sig_info+0x76/0x80
[<ffffffff81002018>] do_notify_resume+0x98/0xd0
[<ffffffff8165779b>] int_signal+0x12/0x17
---[ end trace 0000000000000004 ]---
The reason is that spin_lock() is taken in atomic context, but
spin_unlock() is not.
Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Steven Rostedt <rostedt@goodmis.org>
---
kernel/futex.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/futex.c b/kernel/futex.c
index 9e26e87..2b676a2 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
spin_lock(&hb->lock);
- raw_spin_lock_irq(&curr->pi_lock);
/*
* We dropped the pi-lock, so re-check whether this
* task still owns the PI-state:
*/
if (head->next != next) {
spin_unlock(&hb->lock);
+ raw_spin_lock_irq(&curr->pi_lock);
continue;
}
+ raw_spin_lock_irq(&curr->pi_lock);
WARN_ON(pi_state->owner != curr);
WARN_ON(list_empty(&pi_state->list));
list_del_init(&pi_state->list);
--
1.7.9.5
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 1:36 [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list() Yong Zhang
@ 2013-03-01 5:58 ` Yong Zhang
2013-03-01 10:17 ` Thomas Gleixner
2013-03-01 12:01 ` John Kacur
2 siblings, 0 replies; 7+ messages in thread
From: Yong Zhang @ 2013-03-01 5:58 UTC (permalink / raw)
To: linux-kernel, linux-rt-users
On Fri, Mar 1, 2013 at 9:36 AM, Yong Zhang <yong.zhang0@gmail.com> wrote:
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
>
> spin_lock(&hb->lock);
>
> - raw_spin_lock_irq(&curr->pi_lock);
> /*
> * We dropped the pi-lock, so re-check whether this
> * task still owns the PI-state:
> */
> if (head->next != next) {
Just ignore this patch, race window is opened here.
New patch comes soon.
Thanks,
Yong
> spin_unlock(&hb->lock);
> + raw_spin_lock_irq(&curr->pi_lock);
> continue;
> }
>
> + raw_spin_lock_irq(&curr->pi_lock);
> WARN_ON(pi_state->owner != curr);
> WARN_ON(list_empty(&pi_state->list));
> list_del_init(&pi_state->list);
> --
> 1.7.9.5
>
--
Only stand for myself
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 1:36 [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list() Yong Zhang
2013-03-01 5:58 ` Yong Zhang
@ 2013-03-01 10:17 ` Thomas Gleixner
2013-03-03 8:38 ` Yong Zhang
2013-03-03 21:50 ` Steven Rostedt
2013-03-01 12:01 ` John Kacur
2 siblings, 2 replies; 7+ messages in thread
From: Thomas Gleixner @ 2013-03-01 10:17 UTC (permalink / raw)
To: Yong Zhang; +Cc: linux-kernel, linux-rt-users, Steven Rostedt
On Fri, 1 Mar 2013, Yong Zhang wrote:
> From: Yong Zhang <yong.zhang@windriver.com>
>
> Otherwise, the warning below is sometimes shown when running some test:
>
> WARNING: at kernel/sched/core.c:3423 migrate_disable+0xbf/0xd0()
> Hardware name: OptiPlex 755
> Modules linked in: floppy parport parport_pc minix
> Pid: 1800, comm: tst-robustpi8 Tainted: G W 3.4.28-rt40 #1
> Call Trace:
> [<ffffffff81031f3f>] warn_slowpath_common+0x7f/0xc0
> [<ffffffff81031f9a>] warn_slowpath_null+0x1a/0x20
> [<ffffffff81066eaf>] migrate_disable+0xbf/0xd0
> [<ffffffff81085d95>] exit_pi_state_list+0xa5/0x170
> [<ffffffff8102f71f>] mm_release+0x12f/0x170
> [<ffffffff81036906>] exit_mm+0x26/0x140
> [<ffffffff81090fc6>] ? acct_collect+0x186/0x1c0
> [<ffffffff81036b66>] do_exit+0x146/0x930
> [<ffffffff810658d1>] ? get_parent_ip+0x11/0x50
> [<ffffffff8103760d>] do_group_exit+0x4d/0xc0
> [<ffffffff8104828f>] get_signal_to_deliver+0x23f/0x6a0
> [<ffffffff810019e5>] do_signal+0x65/0x5e0
> [<ffffffff81047816>] ? group_send_sig_info+0x76/0x80
> [<ffffffff81002018>] do_notify_resume+0x98/0xd0
> [<ffffffff8165779b>] int_signal+0x12/0x17
> ---[ end trace 0000000000000004 ]---
>
> The reason is that spin_lock() is taken in atomic context, but
> spin_unlock() is not.
This doesn't make sense. The spin_lock() happens in non atomic
context.
spin_lock(&hb->lock);
raw_spin_lock_irq(&curr->pi_lock);
The unlock is in atomic context and the unlock does not call
migrate_disable().
Though on RT this is caused by the in_atomic check of migrate_enable()
and this results in asymmetry. See below.
> Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> ---
> kernel/futex.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/futex.c b/kernel/futex.c
> index 9e26e87..2b676a2 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
>
> spin_lock(&hb->lock);
>
> - raw_spin_lock_irq(&curr->pi_lock);
> /*
> * We dropped the pi-lock, so re-check whether this
> * task still owns the PI-state:
> */
Did you read and understand this comment ?
The logic here is
raw_spin_lock_irq(&curr->pi_lock);
next = head->next;
raw_spin_unlock_irq(&curr->pi_lock);
spin_lock(&hb->lock);
raw_spin_lock_irq(&curr->pi_lock);
if (head->next != next)
We must drop pi_lock before locking the hash bucket lock. That opens a
window for another task to modify head list. So we must relock pi_lock
and verify whether head->next is unmodified. If it changed, we need to
reevaluate.
> if (head->next != next) {
> spin_unlock(&hb->lock);
> + raw_spin_lock_irq(&curr->pi_lock);
> continue;
> }
>
> + raw_spin_lock_irq(&curr->pi_lock);
> WARN_ON(pi_state->owner != curr);
> WARN_ON(list_empty(&pi_state->list));
> list_del_init(&pi_state->list);
So both your patch description and your patch are patently wrong.
Correct solution below.
Thanks,
tglx
---
futex: Ensure lock/unlock symmetry versus pi_lock and hash bucket lock
In exit_pi_state_list() we have the following locking construct:
spin_lock(&hb->lock);
raw_spin_lock_irq(&curr->pi_lock);
...
spin_unlock(&hb->lock);
In !RT this works, but on RT the migrate_enable() function which is
called from spin_unlock() sees atomic context due to the held pi_lock
and just decrements the migrate_disable_atomic counter of the
task. Now the next call to migrate_disable() sees the counter being
negative and issues a warning. That check should be in
migrate_enable() already.
Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock after that again. This is safe as the loop code reevaluates
head again under the pi_lock.
Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
diff --git a/kernel/futex.c b/kernel/futex.c
index f15f0e4..c795c9c 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -568,7 +568,9 @@ void exit_pi_state_list(struct task_struct *curr)
* task still owns the PI-state:
*/
if (head->next != next) {
+ raw_spin_unlock_irq(&curr->pi_lock);
spin_unlock(&hb->lock);
+ raw_spin_lock_irq(&curr->pi_lock);
continue;
}
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 1:36 [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list() Yong Zhang
2013-03-01 5:58 ` Yong Zhang
2013-03-01 10:17 ` Thomas Gleixner
@ 2013-03-01 12:01 ` John Kacur
2013-03-01 15:12 ` John Kacur
2 siblings, 1 reply; 7+ messages in thread
From: John Kacur @ 2013-03-01 12:01 UTC (permalink / raw)
To: Yong Zhang; +Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Steven Rostedt
On Fri, Mar 1, 2013 at 2:36 AM, Yong Zhang <yong.zhang0@gmail.com> wrote:
> From: Yong Zhang <yong.zhang@windriver.com>
>
> Otherwise, the warning below is sometimes shown when running some test:
It would be nice to know which test!
----->o SNIP
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 12:01 ` John Kacur
@ 2013-03-01 15:12 ` John Kacur
0 siblings, 0 replies; 7+ messages in thread
From: John Kacur @ 2013-03-01 15:12 UTC (permalink / raw)
To: Yong Zhang
Cc: linux-kernel, linux-rt-users, Thomas Gleixner, Steven Rostedt,
Luis Claudio R. Goncalves
On Fri, Mar 1, 2013 at 1:01 PM, John Kacur <jkacur@redhat.com> wrote:
> On Fri, Mar 1, 2013 at 2:36 AM, Yong Zhang <yong.zhang0@gmail.com> wrote:
>> From: Yong Zhang <yong.zhang@windriver.com>
>>
>> Otherwise, the warning below is sometimes shown when running some test:
>
> It would be nice to know which test!
> ----->o SNIP
Luis points out to me that you can tell from the PID that the test is
http://fxr.watson.org/fxr/source/nptl/tst-robust8.c?v=GLIBC27
of the NPTL project (sys/nptl/tst-robust8.c)
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 10:17 ` Thomas Gleixner
@ 2013-03-03 8:38 ` Yong Zhang
2013-03-03 21:50 ` Steven Rostedt
1 sibling, 0 replies; 7+ messages in thread
From: Yong Zhang @ 2013-03-03 8:38 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: linux-kernel, linux-rt-users, Steven Rostedt
On Fri, Mar 01, 2013 at 11:17:42AM +0100, Thomas Gleixner wrote:
> On Fri, 1 Mar 2013, Yong Zhang wrote:
>
> > From: Yong Zhang <yong.zhang@windriver.com>
> >
> > Otherwise, the warning below is sometimes shown when running some test:
> >
> > WARNING: at kernel/sched/core.c:3423 migrate_disable+0xbf/0xd0()
> > Hardware name: OptiPlex 755
> > Modules linked in: floppy parport parport_pc minix
> > Pid: 1800, comm: tst-robustpi8 Tainted: G W 3.4.28-rt40 #1
> > Call Trace:
> > [<ffffffff81031f3f>] warn_slowpath_common+0x7f/0xc0
> > [<ffffffff81031f9a>] warn_slowpath_null+0x1a/0x20
> > [<ffffffff81066eaf>] migrate_disable+0xbf/0xd0
> > [<ffffffff81085d95>] exit_pi_state_list+0xa5/0x170
> > [<ffffffff8102f71f>] mm_release+0x12f/0x170
> > [<ffffffff81036906>] exit_mm+0x26/0x140
> > [<ffffffff81090fc6>] ? acct_collect+0x186/0x1c0
> > [<ffffffff81036b66>] do_exit+0x146/0x930
> > [<ffffffff810658d1>] ? get_parent_ip+0x11/0x50
> > [<ffffffff8103760d>] do_group_exit+0x4d/0xc0
> > [<ffffffff8104828f>] get_signal_to_deliver+0x23f/0x6a0
> > [<ffffffff810019e5>] do_signal+0x65/0x5e0
> > [<ffffffff81047816>] ? group_send_sig_info+0x76/0x80
> > [<ffffffff81002018>] do_notify_resume+0x98/0xd0
> > [<ffffffff8165779b>] int_signal+0x12/0x17
> > ---[ end trace 0000000000000004 ]---
> >
> > The reason is that spin_lock() is taken in atomic context, but
> > spin_unlock() is not.
>
> This doesn't make sense. The spin_lock() happens in non atomic
> context.
>
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
>
> The unlock is in atomic context and the unlock does not call
> migrate_disable().
>
> Though on RT this is caused by the in_atomic check of migrate_enable()
> and this results in asymmetry. See below.
Yeah, this is what I want to say. s/atomic/no-atomic/ in my words :)
>
> > Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> > kernel/futex.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/futex.c b/kernel/futex.c
> > index 9e26e87..2b676a2 100644
> > --- a/kernel/futex.c
> > +++ b/kernel/futex.c
> > @@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
> >
> > spin_lock(&hb->lock);
> >
> > - raw_spin_lock_irq(&curr->pi_lock);
> > /*
> > * We dropped the pi-lock, so re-check whether this
> > * task still owns the PI-state:
> > */
>
> Did you read and understand this comment ?
>
> The logic here is
>
> raw_spin_lock_irq(&curr->pi_lock);
> next = head->next;
> raw_spin_unlock_irq(&curr->pi_lock);
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
> if (head->next != next)
>
> We must drop pi_lock before locking the hash bucket lock. That opens a
> window for another task to modify head list. So we must relock pi_lock
> and verify whether head->next is unmodified. If it changed, we need to
> reevaluate.
>
> > if (head->next != next) {
> > spin_unlock(&hb->lock);
> > + raw_spin_lock_irq(&curr->pi_lock);
> > continue;
> > }
> >
> > + raw_spin_lock_irq(&curr->pi_lock);
> > WARN_ON(pi_state->owner != curr);
> > WARN_ON(list_empty(&pi_state->list));
> > list_del_init(&pi_state->list);
>
> So both your patch description and your patch are patently wrong.
> Correct solution below.
Recognized it after sending my V1 out. So V2 is sent out to close the
race window and it happens to be the same patch with yours below.
http://marc.info/?l=linux-rt-users&m=136211763323758&w=2
Thanks,
Yong
>
> Thanks,
>
> tglx
> ---
> futex: Ensure lock/unlock symmetry versus pi_lock and hash bucket lock
>
> In exit_pi_state_list() we have the following locking construct:
>
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
>
> ...
> spin_unlock(&hb->lock);
>
> In !RT this works, but on RT the migrate_enable() function which is
> called from spin_unlock() sees atomic context due to the held pi_lock
> and just decrements the migrate_disable_atomic counter of the
> task. Now the next call to migrate_disable() sees the counter being
> negative and issues a warning. That check should be in
> migrate_enable() already.
>
> Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
> pi_lock after that again. This is safe as the loop code reevaluates
> head again under the pi_lock.
>
> Reported-by: Yong Zhang <yong.zhang@windriver.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>
> diff --git a/kernel/futex.c b/kernel/futex.c
> index f15f0e4..c795c9c 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -568,7 +568,9 @@ void exit_pi_state_list(struct task_struct *curr)
> * task still owns the PI-state:
> */
> if (head->next != next) {
> + raw_spin_unlock_irq(&curr->pi_lock);
> spin_unlock(&hb->lock);
> + raw_spin_lock_irq(&curr->pi_lock);
> continue;
> }
>
* Re: [PATCH] futex: fix unbalanced spin_lock/spin_unlock() in exit_pi_state_list()
2013-03-01 10:17 ` Thomas Gleixner
2013-03-03 8:38 ` Yong Zhang
@ 2013-03-03 21:50 ` Steven Rostedt
1 sibling, 0 replies; 7+ messages in thread
From: Steven Rostedt @ 2013-03-03 21:50 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: Yong Zhang, linux-kernel, linux-rt-users
On Fri, 2013-03-01 at 11:17 +0100, Thomas Gleixner wrote:
> > Signed-off-by: Yong Zhang <yong.zhang0@gmail.com>
> > Cc: Thomas Gleixner <tglx@linutronix.de>
> > Cc: Steven Rostedt <rostedt@goodmis.org>
> > ---
> > kernel/futex.c | 3 ++-
> > 1 file changed, 2 insertions(+), 1 deletion(-)
> >
> > diff --git a/kernel/futex.c b/kernel/futex.c
> > index 9e26e87..2b676a2 100644
> > --- a/kernel/futex.c
> > +++ b/kernel/futex.c
> > @@ -562,16 +562,17 @@ void exit_pi_state_list(struct task_struct *curr)
> >
> > spin_lock(&hb->lock);
> >
> > - raw_spin_lock_irq(&curr->pi_lock);
> > /*
> > * We dropped the pi-lock, so re-check whether this
> > * task still owns the PI-state:
> > */
>
> Did you read and understand this comment ?
>
> The logic here is
>
> raw_spin_lock_irq(&curr->pi_lock);
> next = head->next;
> raw_spin_unlock_irq(&curr->pi_lock);
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
> if (head->next != next)
>
> We must drop pi_lock before locking the hash bucket lock. That opens a
> window for another task to modify head list. So we must relock pi_lock
> and verify whether head->next is unmodified. If it changed, we need to
> reevaluate.
>
> > if (head->next != next) {
> > spin_unlock(&hb->lock);
> > + raw_spin_lock_irq(&curr->pi_lock);
> > continue;
> > }
> >
> > + raw_spin_lock_irq(&curr->pi_lock);
> > WARN_ON(pi_state->owner != curr);
> > WARN_ON(list_empty(&pi_state->list));
> > list_del_init(&pi_state->list);
>
> So both your patch description and your patch are patently wrong.
> Correct solution below.
>
> Thanks,
>
> tglx
> ---
> futex: Ensure lock/unlock symmetry versus pi_lock and hash bucket lock
>
> In exit_pi_state_list() we have the following locking construct:
>
> spin_lock(&hb->lock);
> raw_spin_lock_irq(&curr->pi_lock);
>
> ...
> spin_unlock(&hb->lock);
>
> In !RT this works, but on RT the migrate_enable() function which is
> called from spin_unlock() sees atomic context due to the held pi_lock
> and just decrements the migrate_disable_atomic counter of the
> task. Now the next call to migrate_disable() sees the counter being
> negative and issues a warning. That check should be in
> migrate_enable() already.
>
> Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
> pi_lock after that again. This is safe as the loop code reevaluates
> head again under the pi_lock.
>
> Reported-by: Yong Zhang <yong.zhang@windriver.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
>
> diff --git a/kernel/futex.c b/kernel/futex.c
> index f15f0e4..c795c9c 100644
> --- a/kernel/futex.c
> +++ b/kernel/futex.c
> @@ -568,7 +568,9 @@ void exit_pi_state_list(struct task_struct *curr)
> * task still owns the PI-state:
> */
> if (head->next != next) {
Can we add the following comment here:
/*
* Normal spin_lock() and matching spin_unlock() must not be
* inside a raw_spin_lock() or any raw preempt disabled
* context.
*/
Or something similar. Otherwise, people looking at this may think that
the unlock and relock are unnecessary. Or they may just not understand it
in general. Things like this deserve comments, or we ourselves may
forget why we did it ;-)
We probably should document somewhere (if we haven't already, I haven't
looked), that if a spin_trylock() is inside a preempt disabled section,
the entire context of that lock must stay inside a preempt disabled
section (to the unlock). No normal spin_lock()s, and their matching
unlocks, should be in any preempt disabled section, although a preempt
disabled section may exist between the two.
-- Steve
> + raw_spin_unlock_irq(&curr->pi_lock);
> spin_unlock(&hb->lock);
> + raw_spin_lock_irq(&curr->pi_lock);
> continue;
> }
>