* [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Manfred Spraul @ 2015-02-28 20:36 UTC
To: Oleg Nesterov, Paul E. McKenney
Cc: LKML, 1vier1, Peter Zijlstra, Kirill Tkhai, Ingo Molnar,
Josh Poimboeuf, Manfred Spraul, stable
sem_lock() did not properly pair memory barriers:
!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier; otherwise the CPU might perform
read operations before the lock test.
One path handled the memory barriers correctly; in the other path the
memory barrier was missing.
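Illustrative interleaving (a simplified sketch, not the exact code in
ipc/sem.c):

	CPU 1 (sem_lock(), simple op)       CPU 2 (complex op)

	spin_lock(&sem->lock);
	                                    sma->complex_count++;
	                                    spin_unlock(&sem_perm.lock);
	if (!spin_is_locked(&sem_perm.lock))
	        if (!sma->complex_count)
	                /* proceed as simple op */

Without an acquire barrier after the lock test, CPU 1's read of
complex_count may be performed before the spin_is_locked() load and
still observe 0, so both CPUs proceed concurrently.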
The patch:
- defines a new barrier that defaults to smp_rmb().
- converts ipc/sem.c to the new barrier.
It's necessary for all kernels that use sem_wait_array()
(i.e. starting from 3.10).
Open tasks:
- checkpatch.pl gives a warning; I think it is spurious.
- Who can take care of adding it to a tree that is heading
for Linus' tree?
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
---
include/linux/spinlock.h | 10 ++++++++++
ipc/sem.c | 7 ++++++-
2 files changed, 16 insertions(+), 1 deletion(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3e18379..c383a9c 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -140,6 +140,16 @@ do { \
#define smp_mb__after_unlock_lock() do { } while (0)
#endif
+/*
+ * Place this after a control barrier (such as a spin_unlock_wait())
+ * to ensure that reads cannot be moved ahead of the control barrier.
+ * Writes do not need a barrier: they are not speculated and thus cannot
+ * pass the control barrier.
+ */
+#ifndef smp_mb__after_control_barrier
+#define smp_mb__after_control_barrier() smp_rmb()
+#endif
+
/**
* raw_spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 9284211..ea9a989 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -267,6 +267,10 @@ static void sem_wait_array(struct sem_array *sma)
if (sma->complex_count) {
/* The thread that increased sma->complex_count waited on
* all sem->lock locks. Thus we don't need to wait again.
+ *
+ * There is also no need for memory barriers: with
+ * complex_count>0, all threads acquire/release
+ * sem_perm.lock.
*/
return;
}
@@ -275,6 +279,7 @@ static void sem_wait_array(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
+ smp_mb__after_control_barrier();
}
/*
@@ -333,7 +338,7 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
* complex_count++;
* spin_unlock(sem_perm.lock);
*/
- smp_rmb();
+ smp_mb__after_control_barrier();
/*
* Now repeat the test of complex_count:
--
2.1.0
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Peter Zijlstra @ 2015-02-28 21:45 UTC
To: Manfred Spraul
Cc: Oleg Nesterov, Paul E. McKenney, LKML, 1vier1, Kirill Tkhai,
Ingo Molnar, Josh Poimboeuf, stable
On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
> +/*
> + * Place this after a control barrier (such as a spin_unlock_wait())
> + * to ensure that reads cannot be moved ahead of the control barrier.
> + * Writes do not need a barrier: they are not speculated and thus cannot
> + * pass the control barrier.
> + */
> +#ifndef smp_mb__after_control_barrier
> +#define smp_mb__after_control_barrier() smp_rmb()
> +#endif
Sorry to go bikeshedding again, but should we call this:
smp_acquire__after_control_barrier() ?
The thing is, it's not a full MB because:
- stores might actually creep into it; while the control dependency
guarantees stores will not creep out, nothing is stopping them from
getting in;
- it's not transitive, and our MB is defined to be so.
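A rough sketch of the store-creep case (hypothetical variables):

	WRITE_ONCE(a, 1);	/* may be delayed past the test below */
	spin_unlock_wait(&lock);/* control barrier: later stores cannot
				   move up across it */
	smp_rmb();		/* later reads cannot move up either */
	r = READ_ONCE(b);	/* but the store to 'a' may still become
				   visible after this read: not a full MB */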
Oleg, Paul?
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Paul E. McKenney @ 2015-02-28 23:34 UTC
To: Peter Zijlstra
Cc: Manfred Spraul, Oleg Nesterov, LKML, 1vier1, Kirill Tkhai,
Ingo Molnar, Josh Poimboeuf, stable
On Sat, Feb 28, 2015 at 10:45:33PM +0100, Peter Zijlstra wrote:
> On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
> > +/*
> > + * Place this after a control barrier (such as a spin_unlock_wait())
> > + * to ensure that reads cannot be moved ahead of the control barrier.
> > + * Writes do not need a barrier: they are not speculated and thus cannot
> > + * pass the control barrier.
> > + */
> > +#ifndef smp_mb__after_control_barrier
> > +#define smp_mb__after_control_barrier() smp_rmb()
> > +#endif
>
> Sorry to go bikeshedding again, but should we call this:
>
> smp_acquire__after_control_barrier() ?
>
> The thing is, it's not a full MB because:
>
> - stores might actually creep into it; while the control dependency
> guarantees stores will not creep out, nothing is stopping them from
> getting in;
>
> - it's not transitive, and our MB is defined to be so.
>
> Oleg, Paul?
The idea is that this would become a no-op on x86, s390, sparc &c, an isb
instruction on ARM, an isync instruction on Power, and I cannot remember
what on Itanium? The other idea being to provide read-to-read control
ordering in addition to the current read-to-write control ordering?
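Something like the following, mirroring the smp_mb__after_unlock_lock()
pattern (hypothetical per-arch overrides, for illustration only):

	/* strongly ordered architectures (x86, s390, sparc, ...): */
	#define smp_mb__after_control_barrier()	do { } while (0)

	/* while e.g. Power might use something like: */
	#define smp_mb__after_control_barrier()	\
		asm volatile("isync" ::: "memory")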
Thanx, Paul
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Oleg Nesterov @ 2015-03-01 13:22 UTC
To: Peter Zijlstra
Cc: Manfred Spraul, Paul E. McKenney, LKML, 1vier1, Kirill Tkhai,
Ingo Molnar, Josh Poimboeuf, stable
On 02/28, Peter Zijlstra wrote:
>
> On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
> > +/*
> > + * Place this after a control barrier (such as a spin_unlock_wait())
> > + * to ensure that reads cannot be moved ahead of the control barrier.
> > + * Writes do not need a barrier: they are not speculated and thus cannot
> > + * pass the control barrier.
> > + */
> > +#ifndef smp_mb__after_control_barrier
> > +#define smp_mb__after_control_barrier() smp_rmb()
> > +#endif
>
> Sorry to go bikeshedding again, but should we call this:
>
> smp_acquire__after_control_barrier() ?
>
> The thing is, it's not a full MB because:
>
> - stores might actually creep into it; while the control dependency
> guarantees stores will not creep out, nothing is stopping them from
> getting in;
>
> - it's not transitive, and our MB is defined to be so.
I agree, so perhaps it should be named smp_acquire_after_unlock_wait(),
even if it is actually stronger than "acquire"...
To me "control_barrier" looks a bit confusing. I think this helper should
be only used after spin_unlock_wait() or spin_is_locked/unlocked(). In this
case it is clear that this "barrier" pairs with release semantics of
spin_unlock(). And we use it because we want to serialize with that unlock,
as if we are taking this lock.
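I.e. the intended pairing is roughly (sketch):

	CPU 1 (lock owner)              CPU 2
	*data = 1;
	spin_unlock(&lock);  /* release */
	                                spin_unlock_wait(&lock);
	                                smp_acquire_after_unlock_wait();
	                                r = *data;  /* must see 1 */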
But I won't insist.
Oleg.
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Oleg Nesterov @ 2015-03-01 13:28 UTC
To: Paul E. McKenney
Cc: Peter Zijlstra, Manfred Spraul, LKML, 1vier1, Kirill Tkhai,
Ingo Molnar, Josh Poimboeuf, stable
On 02/28, Paul E. McKenney wrote:
>
> The idea is that this would become a no-op on x86, s390, sparc &c, an isb
> instruction on ARM, an isync instruction on Power, and I cannot remember
> what on Itanium? The other idea being to provide read-to-read control
> ordering in addition to the current read-to-write control ordering?
To me, the only purpose is documentation. Let's look at task_work_run():

	/*
	 * Synchronize with task_work_cancel(). It can't remove
	 * the first entry == work, cmpxchg(task_works) should
	 * fail, but it can play with *work and other entries.
	 */
	raw_spin_unlock_wait(&task->pi_lock);
	smp_mb();

It doesn't need the full mb() either. But rmb() would look very confusing
without a fat comment. So I think it would be nice to write this
comment once and put it into the new helper.
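With the new helper (whatever name we pick), that site could become
something like:

	raw_spin_unlock_wait(&task->pi_lock);
	smp_acquire_after_unlock_wait();  /* instead of the full mb();
	                                     pairs with the spin_unlock()
	                                     in task_work_cancel() */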
Oleg.
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers.
From: Manfred Spraul @ 2015-03-01 16:07 UTC
To: Oleg Nesterov, Peter Zijlstra
Cc: Paul E. McKenney, LKML, 1vier1, Kirill Tkhai, Ingo Molnar,
Josh Poimboeuf, stable
Hi Oleg,
On 03/01/2015 02:22 PM, Oleg Nesterov wrote:
> On 02/28, Peter Zijlstra wrote:
>> On Sat, Feb 28, 2015 at 09:36:15PM +0100, Manfred Spraul wrote:
>>> +/*
>>> + * Place this after a control barrier (such as a spin_unlock_wait())
>>> + * to ensure that reads cannot be moved ahead of the control barrier.
>>> + * Writes do not need a barrier: they are not speculated and thus cannot
>>> + * pass the control barrier.
>>> + */
>>> +#ifndef smp_mb__after_control_barrier
>>> +#define smp_mb__after_control_barrier() smp_rmb()
>>> +#endif
>> Sorry to go bikeshedding again, but should we call this:
>>
>> smp_acquire__after_control_barrier() ?
>>
>> The thing is, it's not a full MB because:
>>
>> - stores might actually creep into it; while the control dependency
>> guarantees stores will not creep out, nothing is stopping them from
>> getting in;
>>
>> - it's not transitive, and our MB is defined to be so.
> I agree, so perhaps it should be named smp_acquire_after_unlock_wait(),
> even if it is actually stronger than "acquire"...
>
> To me "control_barrier" looks a bit confusing. I think this helper should
> only be used after spin_unlock_wait() or spin_is_locked/unlocked().
Then let's make two helpers:
smp_acquire__after_spin_unlock_wait() and
smp_acquire__after_spin_is_unlocked().
I'll send a new proposal.
Oleg: I would leave the update of task_work_run() to you:
the current code is not buggy, and doing a documentation update now, at the
risk that the patch might collide with other changes, is probably not worth it.
--
Manfred
* [PATCH] ipc/sem.c: Update/correct memory barriers
From: Manfred Spraul @ 2015-03-01 16:18 UTC
To: Oleg Nesterov, Paul E. McKenney
Cc: LKML, 1vier1, Peter Zijlstra, Kirill Tkhai, Ingo Molnar,
Josh Poimboeuf, Manfred Spraul, stable
3rd version of the patch:
sem_lock() did not properly pair memory barriers:
!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier; otherwise the CPU might perform
read operations before the lock test.
The patch:
- defines new barriers that default to smp_rmb().
- converts ipc/sem.c to the new barriers.
With regards to -stable:
The change of sem_wait_array() is a bugfix; the change to sem_lock()
is a nop (just a preprocessor redefinition to improve readability).
The bugfix is necessary for all kernels that use sem_wait_array()
(i.e. starting from 3.10).
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
---
include/linux/spinlock.h | 15 +++++++++++++++
ipc/sem.c | 8 ++++----
2 files changed, 19 insertions(+), 4 deletions(-)
diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 3e18379..5049ff5 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -140,6 +140,21 @@ do { \
#define smp_mb__after_unlock_lock() do { } while (0)
#endif
+/*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers; they
+ * are only control barriers. Thus a memory barrier is required if the
+ * operation should act as an acquire memory barrier, i.e. if it should
+ * pair with the release memory barrier from the spin_unlock() that released
+ * the spinlock.
+ * smp_rmb() is sufficient, as writes cannot pass the implicit control barrier.
+ */
+#ifndef smp_acquire__after_spin_unlock_wait
+#define smp_acquire__after_spin_unlock_wait() smp_rmb()
+#endif
+#ifndef smp_acquire__after_spin_is_unlocked
+#define smp_acquire__after_spin_is_unlocked() smp_rmb()
+#endif
+
/**
* raw_spin_unlock_wait - wait until the spinlock gets unlocked
* @lock: the spinlock in question.
diff --git a/ipc/sem.c b/ipc/sem.c
index 9284211..d580cfa 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -275,6 +275,7 @@ static void sem_wait_array(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
+ smp_acquire__after_spin_unlock_wait();
}
/*
@@ -327,13 +328,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
/* Then check that the global lock is free */
if (!spin_is_locked(&sma->sem_perm.lock)) {
/*
- * The ipc object lock check must be visible on all
- * cores before rechecking the complex count. Otherwise
- * we can race with another thread that does:
+ * We need a memory barrier with acquire semantics,
+ * otherwise we can race with another thread that does:
* complex_count++;
* spin_unlock(sem_perm.lock);
*/
- smp_rmb();
+ smp_acquire__after_spin_is_unlocked();
/*
* Now repeat the test of complex_count:
--
2.1.0
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers
From: Oleg Nesterov @ 2015-03-01 19:16 UTC
To: Manfred Spraul
Cc: Paul E. McKenney, LKML, 1vier1, Peter Zijlstra, Kirill Tkhai,
Ingo Molnar, Josh Poimboeuf, stable
Manfred,
I leave this to you and Paul/Peter, but...
On 03/01, Manfred Spraul wrote:
>
> +/*
> + * spin_unlock_wait() and !spin_is_locked() are not memory barriers; they
> + * are only control barriers. Thus a memory barrier is required if the
> + * operation should act as an acquire memory barrier, i.e. if it should
> + * pair with the release memory barrier from the spin_unlock() that released
> + * the spinlock.
> + * smp_rmb() is sufficient, as writes cannot pass the implicit control barrier.
> + */
> +#ifndef smp_acquire__after_spin_unlock_wait
> +#define smp_acquire__after_spin_unlock_wait() smp_rmb()
> +#endif
> +#ifndef smp_acquire__after_spin_is_unlocked
> +#define smp_acquire__after_spin_is_unlocked() smp_rmb()
> +#endif
But spin_unlock_wait() and spin_is_locked() are the "same thing" when it
comes to serialization with spin_unlock()... Not sure we need 2 helpers.
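After all, conceptually (sketch):

	spin_unlock_wait(&lock);	/* ~ while (spin_is_locked(&lock))
					 *	cpu_relax();
					 */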
But I won't argue, of course.
Oleg.
* [PATCH] ipc/sem.c: Update/correct memory barriers
From: Manfred Spraul @ 2015-08-09 17:55 UTC
To: Andrew Morton
Cc: LKML, 1vier1, Oleg Nesterov, Paul E. McKenney, Peter Zijlstra,
Kirill Tkhai, Ingo Molnar, Josh Poimboeuf, Manfred Spraul, stable
sem_lock() did not properly pair memory barriers:
!spin_is_locked() and spin_unlock_wait() are both only control barriers.
The code needs an acquire barrier; otherwise the CPU might perform
read operations before the lock test.
As no such primitive exists inside <linux/spinlock.h> and since it seems
no one wants another primitive, the code creates a local primitive within
ipc/sem.c.
With regards to -stable:
The change of sem_wait_array() is a bugfix; the change to sem_lock()
is a nop (just a preprocessor redefinition to improve readability).
The bugfix is necessary for all kernels that use sem_wait_array()
(i.e. starting from 3.10).
Andrew: Could you include it into your tree and forward it?
Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
Reported-by: Oleg Nesterov <oleg@redhat.com>
Cc: <stable@vger.kernel.org>
---
ipc/sem.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/ipc/sem.c b/ipc/sem.c
index bc3d530..e581b08 100644
--- a/ipc/sem.c
+++ b/ipc/sem.c
@@ -253,6 +253,16 @@ static void sem_rcu_free(struct rcu_head *head)
}
/*
+ * spin_unlock_wait() and !spin_is_locked() are not memory barriers; they
+ * are only control barriers.
+ * The code must pair with spin_unlock(&sem->lock) or
+ * spin_unlock(&sem_perm.lock); thus just the control barrier is insufficient.
+ *
+ * smp_rmb() is sufficient, as writes cannot pass the control barrier.
+ */
+#define ipc_smp_acquire__after_spin_is_unlocked() smp_rmb()
+
+/*
* Wait until all currently ongoing simple ops have completed.
* Caller must own sem_perm.lock.
* New simple ops cannot start, because simple ops first check
@@ -275,6 +285,7 @@ static void sem_wait_array(struct sem_array *sma)
sem = sma->sem_base + i;
spin_unlock_wait(&sem->lock);
}
+ ipc_smp_acquire__after_spin_is_unlocked();
}
/*
@@ -327,13 +338,12 @@ static inline int sem_lock(struct sem_array *sma, struct sembuf *sops,
/* Then check that the global lock is free */
if (!spin_is_locked(&sma->sem_perm.lock)) {
/*
- * The ipc object lock check must be visible on all
- * cores before rechecking the complex count. Otherwise
- * we can race with another thread that does:
+ * We need a memory barrier with acquire semantics,
+ * otherwise we can race with another thread that does:
* complex_count++;
* spin_unlock(sem_perm.lock);
*/
- smp_rmb();
+ ipc_smp_acquire__after_spin_is_unlocked();
/*
* Now repeat the test of complex_count:
--
2.4.3
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers
From: Peter Zijlstra @ 2015-08-10 8:15 UTC
To: Manfred Spraul
Cc: Andrew Morton, LKML, 1vier1, Oleg Nesterov, Paul E. McKenney,
Kirill Tkhai, Ingo Molnar, Josh Poimboeuf, stable
On Sun, Aug 09, 2015 at 07:55:39PM +0200, Manfred Spraul wrote:
> sem_lock() did not properly pair memory barriers:
>
> !spin_is_locked() and spin_unlock_wait() are both only control barriers.
> The code needs an acquire barrier; otherwise the CPU might perform
> read operations before the lock test.
> As no such primitive exists inside <linux/spinlock.h> and since it seems
> no one wants another primitive, the code creates a local primitive within
> ipc/sem.c.
>
> With regards to -stable:
> The change of sem_wait_array() is a bugfix; the change to sem_lock()
> is a nop (just a preprocessor redefinition to improve readability).
> The bugfix is necessary for all kernels that use sem_wait_array()
> (i.e. starting from 3.10).
>
> Andrew: Could you include it into your tree and forward it?
>
> Signed-off-by: Manfred Spraul <manfred@colorfullife.com>
> Reported-by: Oleg Nesterov <oleg@redhat.com>
> Cc: <stable@vger.kernel.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* Re: [PATCH] ipc/sem.c: Update/correct memory barriers
From: Oleg Nesterov @ 2015-08-12 13:31 UTC
To: Manfred Spraul
Cc: Andrew Morton, LKML, 1vier1, Paul E. McKenney, Peter Zijlstra,
Kirill Tkhai, Ingo Molnar, Josh Poimboeuf, stable
On 08/09, Manfred Spraul wrote:
>
> /*
> + * spin_unlock_wait() and !spin_is_locked() are not memory barriers; they
> + * are only control barriers.
> + * The code must pair with spin_unlock(&sem->lock) or
> + * spin_unlock(&sem_perm.lock); thus just the control barrier is insufficient.
> + *
> + * smp_rmb() is sufficient, as writes cannot pass the control barrier.
> + */
> +#define ipc_smp_acquire__after_spin_is_unlocked() smp_rmb()
Agreed.
But as a reminder, this can have more users. In particular, task_work_run(),
which currently does mb() after spin_unlock_wait().
Can someone suggest a good "generic" name for this helper so that we can
move it into include/linux/spinlock.h?
Oleg.