public inbox for linux-kernel@vger.kernel.org
* [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
@ 2009-04-10 16:13 Jan Blunck
  2009-04-11 17:49 ` Paul E. McKenney
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Jan Blunck @ 2009-04-10 16:13 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Paul E. McKenney, Linux-Kernel Mailinglist

I think it is wrong to unconditionally take the lock before calling
atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
situations where it is known that the counter will not reach zero (e.g. holding
another reference to the same object) but the lock is already taken.

Signed-off-by: Jan Blunck <jblunck@suse.de>
---
 lib/dec_and_lock.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index a65c314..e73822a 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -19,11 +19,10 @@
  */
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
-#ifdef CONFIG_SMP
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
 		return 0;
-#endif
+
 	/* Otherwise do it the slow way */
 	spin_lock(lock);
 	if (atomic_dec_and_test(atomic))
-- 
1.6.0.2


^ permalink raw reply related	[flat|nested] 10+ messages in thread

* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-10 16:13 [PATCH] atomic: Only take lock when the counter drops to zero on UP as well Jan Blunck
@ 2009-04-11 17:49 ` Paul E. McKenney
  2009-04-12 11:32   ` Jan Blunck
  2009-04-14  6:52 ` Nick Piggin
  2009-04-17 22:14 ` Andrew Morton
  2 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2009-04-11 17:49 UTC (permalink / raw)
  To: Jan Blunck; +Cc: Nick Piggin, Linux-Kernel Mailinglist

On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> situations where it is known that the counter will not reach zero (e.g. holding
> another reference to the same object) but the lock is already taken.

The thought of calling _atomic_dec_and_lock() when you already hold the
lock really really scares me.

Could you please give an example where you need to do this?

							Thanx, Paul

> Signed-off-by: Jan Blunck <jblunck@suse.de>
> ---
>  lib/dec_and_lock.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
>   */
>  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>  {
> -#ifdef CONFIG_SMP
>  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>  	if (atomic_add_unless(atomic, -1, 1))
>  		return 0;
> -#endif
> +
>  	/* Otherwise do it the slow way */
>  	spin_lock(lock);
>  	if (atomic_dec_and_test(atomic))
> -- 
> 1.6.0.2
> 


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-11 17:49 ` Paul E. McKenney
@ 2009-04-12 11:32   ` Jan Blunck
  2009-04-13  6:02     ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Blunck @ 2009-04-12 11:32 UTC (permalink / raw)
  To: paulmck@linux.vnet.ibm.com; +Cc: Nick Piggin, Linux-Kernel Mailinglist

On 11.04.2009 at 19:49, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:

> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
>> I think it is wrong to unconditionally take the lock before calling
>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
>> situations where it is known that the counter will not reach zero
>> (e.g. holding another reference to the same object) but the lock is
>> already taken.
>
> The thought of calling _atomic_dec_and_lock() when you already hold the
> lock really really scares me.
>
> Could you please give an example where you need to do this?
>

There is a part of the union mount patches that needs to do a
union_put() (which itself includes a path_put() that uses
atomic_dec_and_lock() in mntput()). Since it is changing the namespace I
need to hold the vfsmount lock. I know that the mnt's count > 1 since it
is a parent of the mnt I'm changing in the mount tree. I could possibly
delay the union_put().

In general this lets atomic_dec_and_lock() behave similarly on SMP and
UP. Remember that this already works this way with CONFIG_SMP, as it did
before Nick's patch.


>                            Thanx, Paul
>
>> Signed-off-by: Jan Blunck <jblunck@suse.de>
>> ---
>> lib/dec_and_lock.c |    3 +--
>> 1 files changed, 1 insertions(+), 2 deletions(-)
>>
>> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
>> index a65c314..e73822a 100644
>> --- a/lib/dec_and_lock.c
>> +++ b/lib/dec_and_lock.c
>> @@ -19,11 +19,10 @@
>>  */
>> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>> {
>> -#ifdef CONFIG_SMP
>>    /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>>    if (atomic_add_unless(atomic, -1, 1))
>>        return 0;
>> -#endif
>> +
>>    /* Otherwise do it the slow way */
>>    spin_lock(lock);
>>    if (atomic_dec_and_test(atomic))
>> -- 
>> 1.6.0.2
>>


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-12 11:32   ` Jan Blunck
@ 2009-04-13  6:02     ` Paul E. McKenney
  2009-04-22 12:56       ` Jan Blunck
  0 siblings, 1 reply; 10+ messages in thread
From: Paul E. McKenney @ 2009-04-13  6:02 UTC (permalink / raw)
  To: Jan Blunck; +Cc: Nick Piggin, Linux-Kernel Mailinglist

On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> On 11.04.2009 at 19:49, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
>
>> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
>>> I think it is wrong to unconditionally take the lock before calling
>>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
>>> situations where it is known that the counter will not reach zero (e.g.
>>> holding
>>> another reference to the same object) but the lock is already taken.
>>
>> The thought of calling _atomic_dec_and_lock() when you already hold the
>> lock really really scares me.
>>
>> Could you please give an example where you need to do this?
>>
>
> There is a part of the union mount patches that needs to do a union_put() 
> (which itself includes a path_put() that uses atomic_dec_and_lock() in 
> mntput() ). Since it is changing the namespace I need to hold the vfsmount 
> lock. I know that the mnt's count > 1 since it is a parent of the mnt I'm 
> changing in the mount tree. I could possibly delay the union_put().
>
> In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> Remember that this already works this way with CONFIG_SMP, as it did before
> Nick's patch.

I asked, I guess.  ;-)

There is some sort of common code path, so that you cannot simply call
atomic_dec() when holding the lock?

                            Thanx, Paul
>>
>>> Signed-off-by: Jan Blunck <jblunck@suse.de>
>>> ---
>>> lib/dec_and_lock.c |    3 +--
>>> 1 files changed, 1 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
>>> index a65c314..e73822a 100644
>>> --- a/lib/dec_and_lock.c
>>> +++ b/lib/dec_and_lock.c
>>> @@ -19,11 +19,10 @@
>>>  */
>>> int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>>> {
>>> -#ifdef CONFIG_SMP
>>>    /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>>>    if (atomic_add_unless(atomic, -1, 1))
>>>        return 0;
>>> -#endif
>>> +
>>>    /* Otherwise do it the slow way */
>>>    spin_lock(lock);
>>>    if (atomic_dec_and_test(atomic))
>>> -- 
>>> 1.6.0.2
>>>


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-10 16:13 [PATCH] atomic: Only take lock when the counter drops to zero on UP as well Jan Blunck
  2009-04-11 17:49 ` Paul E. McKenney
@ 2009-04-14  6:52 ` Nick Piggin
  2009-04-14 16:48   ` Paul E. McKenney
  2009-04-17 22:14 ` Andrew Morton
  2 siblings, 1 reply; 10+ messages in thread
From: Nick Piggin @ 2009-04-14  6:52 UTC (permalink / raw)
  To: Jan Blunck; +Cc: Paul E. McKenney, Linux-Kernel Mailinglist

On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> situations where it is known that the counter will not reach zero (e.g. holding
> another reference to the same object) but the lock is already taken.
> 
> Signed-off-by: Jan Blunck <jblunck@suse.de>

Paul's worry about callers aside, I think it is probably a good idea
to reduce ifdefs and share more code.

So for this patch,

Acked-by: Nick Piggin <npiggin@suse.de>
 
> ---
>  lib/dec_and_lock.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
>   */
>  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>  {
> -#ifdef CONFIG_SMP
>  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>  	if (atomic_add_unless(atomic, -1, 1))
>  		return 0;
> -#endif
> +
>  	/* Otherwise do it the slow way */
>  	spin_lock(lock);
>  	if (atomic_dec_and_test(atomic))
> -- 
> 1.6.0.2


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-14  6:52 ` Nick Piggin
@ 2009-04-14 16:48   ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2009-04-14 16:48 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Jan Blunck, Linux-Kernel Mailinglist

On Tue, Apr 14, 2009 at 08:52:39AM +0200, Nick Piggin wrote:
> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> > I think it is wrong to unconditionally take the lock before calling
> > atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> > situations where it is known that the counter will not reach zero (e.g. holding
> > another reference to the same object) but the lock is already taken.
> > 
> > Signed-off-by: Jan Blunck <jblunck@suse.de>
> 
> Paul's worry about callers aside, I think it is probably a good idea
> to reduce ifdefs and share more code.

I am also OK with this patch.

Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> So for this patch,
> 
> Acked-by: Nick Piggin <npiggin@suse.de>
> 
> > ---
> >  lib/dec_and_lock.c |    3 +--
> >  1 files changed, 1 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> > index a65c314..e73822a 100644
> > --- a/lib/dec_and_lock.c
> > +++ b/lib/dec_and_lock.c
> > @@ -19,11 +19,10 @@
> >   */
> >  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> >  {
> > -#ifdef CONFIG_SMP
> >  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> >  	if (atomic_add_unless(atomic, -1, 1))
> >  		return 0;
> > -#endif
> > +
> >  	/* Otherwise do it the slow way */
> >  	spin_lock(lock);
> >  	if (atomic_dec_and_test(atomic))
> > -- 
> > 1.6.0.2


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-10 16:13 [PATCH] atomic: Only take lock when the counter drops to zero on UP as well Jan Blunck
  2009-04-11 17:49 ` Paul E. McKenney
  2009-04-14  6:52 ` Nick Piggin
@ 2009-04-17 22:14 ` Andrew Morton
  2009-04-23 13:32   ` Jan Blunck
  2 siblings, 1 reply; 10+ messages in thread
From: Andrew Morton @ 2009-04-17 22:14 UTC (permalink / raw)
  To: Jan Blunck; +Cc: npiggin, paulmck, linux-kernel

On Fri, 10 Apr 2009 18:13:57 +0200
Jan Blunck <jblunck@suse.de> wrote:

> I think it is wrong to unconditionally take the lock before calling
> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> situations where it is known that the counter will not reach zero (e.g. holding
> another reference to the same object) but the lock is already taken.
> 

It can't deadlock, because spin_lock() doesn't do anything on
CONFIG_SMP=n.

You might get lockdep whines on CONFIG_SMP=n, but they'd be false
positives because lockdep doesn't know that we generate additional code
for SMP builds.

> ---
>  lib/dec_and_lock.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
>   */
>  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>  {
> -#ifdef CONFIG_SMP
>  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>  	if (atomic_add_unless(atomic, -1, 1))
>  		return 0;
> -#endif
> +
>  	/* Otherwise do it the slow way */
>  	spin_lock(lock);
>  	if (atomic_dec_and_test(atomic))

The patch looks reasonable from a cleanup/consistency POV, but the
analysis and changelog need a bit of help, methinks.



* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-13  6:02     ` Paul E. McKenney
@ 2009-04-22 12:56       ` Jan Blunck
  2009-04-22 14:08         ` Paul E. McKenney
  0 siblings, 1 reply; 10+ messages in thread
From: Jan Blunck @ 2009-04-22 12:56 UTC (permalink / raw)
  To: Paul E. McKenney; +Cc: Nick Piggin, Linux-Kernel Mailinglist

On Sun, Apr 12, Paul E. McKenney wrote:

> On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> > On 11.04.2009 at 19:49, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> >
> >> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> >>> I think it is wrong to unconditionally take the lock before calling
> >>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> >>> situations where it is known that the counter will not reach zero (e.g.
> >>> holding
> >>> another reference to the same object) but the lock is already taken.
> >>
> >> The thought of calling _atomic_dec_and_lock() when you already hold the
> >> lock really really scares me.
> >>
> >> Could you please give an example where you need to do this?
> >>
> >
> > There is a part of the union mount patches that needs to do a union_put() 
> > (which itself includes a path_put() that uses atomic_dec_and_lock() in 
> > mntput() ). Since it is changing the namespace I need to hold the vfsmount 
> > lock. I know that the mnt's count > 1 since it is a parent of the mnt I'm 
> > changing in the mount tree. I could possibly delay the union_put().
> >
> > In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> > Remember that this already works this way with CONFIG_SMP, as it did before
> > Nick's patch.
> 
> I asked, I guess.  ;-)
> 
> There is some sort of common code path, so that you cannot simply call
> atomic_dec() when holding the lock?

If it is possible I don't want to introduce another special mntput() variant
just for that code path.

Thanks,
Jan


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-22 12:56       ` Jan Blunck
@ 2009-04-22 14:08         ` Paul E. McKenney
  0 siblings, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2009-04-22 14:08 UTC (permalink / raw)
  To: Jan Blunck; +Cc: Nick Piggin, Linux-Kernel Mailinglist

On Wed, Apr 22, 2009 at 02:56:20PM +0200, Jan Blunck wrote:
> On Sun, Apr 12, Paul E. McKenney wrote:
> 
> > On Sun, Apr 12, 2009 at 01:32:54PM +0200, Jan Blunck wrote:
> > > On 11.04.2009 at 19:49, "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> wrote:
> > >
> > >> On Fri, Apr 10, 2009 at 06:13:57PM +0200, Jan Blunck wrote:
> > >>> I think it is wrong to unconditionally take the lock before calling
> > >>> atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> > >>> situations where it is known that the counter will not reach zero (e.g.
> > >>> holding
> > >>> another reference to the same object) but the lock is already taken.
> > >>
> > >> The thought of calling _atomic_dec_and_lock() when you already hold the
> > >> lock really really scares me.
> > >>
> > >> Could you please give an example where you need to do this?
> > >>
> > >
> > > There is a part of the union mount patches that needs to do a union_put() 
> > > (which itself includes a path_put() that uses atomic_dec_and_lock() in 
> > > mntput() ). Since it is changing the namespace I need to hold the vfsmount 
> > > lock. I know that the mnt's count > 1 since it is a parent of the mnt I'm 
> > > changing in the mount tree. I could possibly delay the union_put().
> > >
> > > In general this lets atomic_dec_and_lock() behave similarly on SMP and UP.
> > > Remember that this already works this way with CONFIG_SMP, as it did before
> > > Nick's patch.
> > 
> > I asked, I guess.  ;-)
> > 
> > There is some sort of common code path, so that you cannot simply call
> > atomic_dec() when holding the lock?
> 
> If it is possible I don't want to introduce another special mntput() variant
> just for that code path.

Fair enough!!!

							Thanx, Paul


* Re: [PATCH] atomic: Only take lock when the counter drops to zero on UP as well
  2009-04-17 22:14 ` Andrew Morton
@ 2009-04-23 13:32   ` Jan Blunck
  0 siblings, 0 replies; 10+ messages in thread
From: Jan Blunck @ 2009-04-23 13:32 UTC (permalink / raw)
  To: Andrew Morton; +Cc: npiggin, paulmck, linux-kernel

On Fri, Apr 17, Andrew Morton wrote:

> On Fri, 10 Apr 2009 18:13:57 +0200
> Jan Blunck <jblunck@suse.de> wrote:
> 
> > I think it is wrong to unconditionally take the lock before calling
> > atomic_dec_and_test() in _atomic_dec_and_lock(). This will deadlock in
> > situations where it is known that the counter will not reach zero (e.g. holding
> > another reference to the same object) but the lock is already taken.
> > 
> 
> It can't deadlock, because spin_lock() doesn't do anything on
> CONFIG_SMP=n.
> 
> You might get lockdep whines on CONFIG_SMP=n, but they'd be false
> positives because lockdep doesn't know that we generate additional code
> for SMP builds.

Sorry, you are right. spin_lock() isn't the problem here. With
CONFIG_DEBUG_SPINLOCK enabled, _raw_spin_lock() calls into
__spin_lock_debug():

static void __spin_lock_debug(spinlock_t *lock)
{
        u64 i;
        u64 loops = loops_per_jiffy * HZ;
        int print_once = 1;

        for (;;) {
                for (i = 0; i < loops; i++) {
                        if (__raw_spin_trylock(&lock->raw_lock))
                                return;
                        __delay(1);
                }
                /* lockup suspected: */
                if (print_once) {
                        print_once = 0;
                        printk(KERN_EMERG "BUG: spinlock lockup on CPU#%d, "
                                        "%s/%d, %p\n",
                                raw_smp_processor_id(), current->comm,
                                task_pid_nr(current), lock);
                        dump_stack();
#ifdef CONFIG_SMP
                        trigger_all_cpu_backtrace();
#endif
                }
        }
}

This is an endless loop in this case, since the lock is already held and
therefore __raw_spin_trylock() never succeeds.

> > ---
> >  lib/dec_and_lock.c |    3 +--
> >  1 files changed, 1 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> > index a65c314..e73822a 100644
> > --- a/lib/dec_and_lock.c
> > +++ b/lib/dec_and_lock.c
> > @@ -19,11 +19,10 @@
> >   */
> >  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
> >  {
> > -#ifdef CONFIG_SMP
> >  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
> >  	if (atomic_add_unless(atomic, -1, 1))
> >  		return 0;
> > -#endif
> > +
> >  	/* Otherwise do it the slow way */
> >  	spin_lock(lock);
> >  	if (atomic_dec_and_test(atomic))
> 
> The patch looks reasonable from a cleanup/consistency POV, but the
> analysis and changelog need a bit of help, methinks.
> 

Sorry, I'll come up with a more verbose description of the root cause of how
this locks up.

Cheers,
Jan

