linux-kernel.vger.kernel.org archive mirror
* [PATCH] osq_lock: fix osq_lock queue corruption
@ 2017-07-14 13:47 Prateek Sood
  2017-07-18 11:25 ` Peter Zijlstra
  0 siblings, 1 reply; 5+ messages in thread
From: Prateek Sood @ 2017-07-14 13:47 UTC (permalink / raw)
  To: peterz, mingo; +Cc: sramana, linux-kernel, Prateek Sood

Fix the ordering of link creation between node->prev and prev->next in
osq_lock(). Consider a case in which the optimistic spin queue is
CPU6->CPU2, with CPU6 holding the lock. At this point CPU0 comes in to
acquire the osq_lock and updates the tail count. After the tail update,
CPU2 starts to unqueue itself from the optimistic spin queue; it finds
that the tail has been updated to CPU0 and clears CPU2's node->next to
NULL in osq_wait_next(). If the following stores on CPU0 are reordered,
then prev->next, where prev is CPU2's node, is updated to point to
CPU0's node:
	node->prev = prev;
	WRITE_ONCE(prev->next, node);
If the subsequent store
	WRITE_ONCE(next->prev, prev);
on CPU2's path is committed before CPU0's node->prev = prev, then CPU0's
node->prev will point to CPU6's node. When CPU0's delayed node->prev =
prev is then committed, CPU0's node->prev is changed back to CPU2's node.
CPU2's node->next is NULL by now, so if CPU0 enters the unqueue path of
osq_lock() it will keep spinning in an infinite loop, since the condition
prev->next == node can never become true.
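
The two code paths involved can be sketched as follows (an abridged,
annotated sketch of kernel/locking/osq_lock.c, not the verbatim code;
the xchg() shown on the CPU2 side actually happens inside
osq_wait_next()):

	/* CPU0: osq_lock(), queueing behind CPU2 (prev == CPU2's node) */
	node->prev = prev;              /* may be delayed past the next store */
	WRITE_ONCE(prev->next, node);   /* makes CPU0's node visible to CPU2  */

	/* CPU2: unqueue path, racing with the above */
	next = xchg(&node->next, NULL); /* in osq_wait_next(); returns CPU0   */
	WRITE_ONCE(next->prev, prev);   /* unqueue-C: CPU0 node->prev = CPU6  */
	WRITE_ONCE(prev->next, next);   /* CPU6 node->next = CPU0             */

	/*
	 * If CPU0's delayed "node->prev = prev" lands only now, it
	 * overwrites the value written by unqueue-C and points CPU0's
	 * node->prev back at CPU2's already-unqueued node.
	 */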

Signed-off-by: Prateek Sood <prsood@codeaurora.org>
---
 kernel/locking/osq_lock.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index 05a3785..1d371a0 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -104,6 +104,32 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 
 	prev = decode_cpu(old);
 	node->prev = prev;
+
+	/*
+	 * We need to avoid reordering of the osq link update sequence.
+	 * Consider a case in which the optimistic spin queue is
+	 * CPU6->CPU2, with CPU6 holding the lock. At this point
+	 * CPU0 comes in to acquire the osq_lock and updates the tail
+	 * count. After the tail update, CPU2 starts to unqueue itself
+	 * from the optimistic spin queue; it finds the updated tail with
+	 * CPU0 and clears CPU2's node->next to NULL in osq_wait_next().
+	 * If the following stores are reordered, then prev->next, where
+	 * prev is CPU2's node, is updated to point to CPU0's node:
+	 *      node->prev = prev;
+	 *      WRITE_ONCE(prev->next, node);
+	 *
+	 * If the subsequent store
+	 *      WRITE_ONCE(next->prev, prev);
+	 * on CPU2's path is committed before CPU0's node->prev = prev,
+	 * then CPU0's node->prev points to CPU6's node; once CPU0's
+	 * delayed node->prev = prev is committed, CPU0's node->prev is
+	 * changed back to CPU2's node. CPU2's node->next is NULL, so if
+	 * CPU0 enters the unqueue path of osq_lock() it keeps spinning
+	 * in an infinite loop, as the condition prev->next == node will
+	 * never become true.
+	 */
+	smp_wmb();
+
 	WRITE_ONCE(prev->next, node);
 
 	/*
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.


* Re: [PATCH] osq_lock: fix osq_lock queue corruption
  2017-07-14 13:47 Prateek Sood
@ 2017-07-18 11:25 ` Peter Zijlstra
  0 siblings, 0 replies; 5+ messages in thread
From: Peter Zijlstra @ 2017-07-18 11:25 UTC (permalink / raw)
  To: Prateek Sood; +Cc: mingo, sramana, linux-kernel, will.deacon


I added a few pictures; the text alone didn't want to make sense to me.

On Fri, Jul 14, 2017 at 07:17:56PM +0530, Prateek Sood wrote:

> Fix ordering of link creation between node->prev and prev->next in
> osq_lock(). A case in which the status of optimistic spin queue is
> CPU6->CPU2 in which CPU6 has acquired the lock.


        tail
          v
  ,-. <- ,-.
  |6|    |2|
  `-' -> `-'

> At this point if CPU0 comes in to acquire osq_lock, it will update the
> tail count.


  CPU2			CPU0
  ----------------------------------

				       tail
				         v
			  ,-. <- ,-.    ,-.
			  |6|    |2|    |0|
			  `-' -> `-'    `-'


> After tail count update if CPU2 starts to unqueue itself from
> optimistic spin queue, it will find updated tail count with CPU0 and
> update CPU2 node->next to NULL in osq_wait_next().


  unqueue-A

	       tail
	         v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'

  unqueue-B

  ->tail != curr && !node->next

> If reordering of following stores happen then
> prev->next where prev being CPU2 would be updated to point to CPU0 node:

				       tail
				         v
			  ,-. <- ,-.    ,-.
			  |6|    |2|    |0|
			  `-' -> `-' -> `-'

  osq_wait_next()
    node->next <- 0
    xchg(node->next, NULL)

	       tail
	         v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'

  unqueue-C

> At this point if next instruction
>	WRITE_ONCE(next->prev, prev);
> in CPU2 path is committed before the update of CPU0 node->prev = prev then
> CPU0 node->prev will point to CPU6 node.

	       tail
    v----------. v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'
     `----------^

> At this point if CPU0 path's node->prev = prev is committed resulting
> in change of CPU0 prev back to CPU2 node. CPU2 node->next is NULL
> currently,

				       tail
			                 v
			  ,-. <- ,-. <- ,-.
			  |6|    |2|    |0|
			  `-'    `-'    `-'
			     `----------^


> so if CPU0 gets into unqueue path of osq_lock it will keep spinning
> in infinite loop as condition prev->next == node will never be true.

Also updated the comment..
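
For completeness, the unqueue steps that the pictures label look
roughly like this (abridged from the unqueue path in osq_lock(), not
the verbatim code):

	/*
	 * Step A -- stabilize @prev: undo our publish in prev->next.
	 * This is the loop CPU0 gets stuck in above: its node->prev ends
	 * up pointing at CPU2, whose ->next is already NULL, so the
	 * condition below never becomes true.
	 */
	for (;;) {
		if (prev->next == node &&
		    cmpxchg(&prev->next, node, NULL) == node)
			break;

		if (smp_load_acquire(&node->locked))
			return true;            /* got the lock after all */

		cpu_relax();
		prev = READ_ONCE(node->prev);
	}

	/* Step B -- stabilize @next; osq_wait_next() may xchg(&node->next, NULL) */
	next = osq_wait_next(lock, node, prev);
	if (!next)
		return false;

	/* Step C -- link @prev and @next together, skipping our node */
	WRITE_ONCE(next->prev, prev);
	WRITE_ONCE(prev->next, next);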

---
 kernel/locking/osq_lock.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_que
 
 	prev = decode_cpu(old);
 	node->prev = prev;
+
+	/*
+	 * osq_lock()			unqueue
+	 *
+	 * node->prev = prev		osq_wait_next()
+	 * WMB				MB
+	 * prev->next = node		next->prev = prev // unqueue-C
+	 *
+	 * Here 'node->prev' and 'next->prev' are the same variable and we need
+	 * to ensure these stores happen in-order to avoid corrupting the list.
+	 */
+	smp_wmb();
+
 	WRITE_ONCE(prev->next, node);
 
 	/*


* [PATCH] osq_lock: fix osq_lock queue corruption
@ 2017-07-31 17:24 Prateek Sood
  2017-08-07  5:06 ` Prateek Sood
  2017-08-10  4:40 ` Andrea Parri
  0 siblings, 2 replies; 5+ messages in thread
From: Prateek Sood @ 2017-07-31 17:24 UTC (permalink / raw)
  To: peterz, mingo; +Cc: Prateek Sood, sramana, linux-kernel

Fix the ordering of link creation between node->prev and prev->next in
osq_lock(). Consider a case in which the optimistic spin queue is
CPU6->CPU2, with CPU6 holding the lock.

        tail
          v
  ,-. <- ,-.
  |6|    |2|
  `-' -> `-'

At this point CPU0 comes in to acquire the osq_lock and updates the
tail count.

  CPU2			CPU0
  ----------------------------------

				       tail
				         v
			  ,-. <- ,-.    ,-.
			  |6|    |2|    |0|
			  `-' -> `-'    `-'

After the tail update, CPU2 starts to unqueue itself from the
optimistic spin queue; it finds that the tail has been updated to CPU0
and clears CPU2's node->next to NULL in osq_wait_next().

  unqueue-A

	       tail
	         v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'

  unqueue-B

  ->tail != curr && !node->next

If the two stores in osq_lock(), node->prev = prev followed by
WRITE_ONCE(prev->next, node), are reordered, then prev->next, where
prev is CPU2's node, is updated to point to CPU0's node:

				       tail
				         v
			  ,-. <- ,-.    ,-.
			  |6|    |2|    |0|
			  `-' -> `-' -> `-'

  osq_wait_next()
    node->next <- 0
    xchg(node->next, NULL)

	       tail
	         v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'

  unqueue-C

If the subsequent store
WRITE_ONCE(next->prev, prev);
on CPU2's path is committed before CPU0's node->prev = prev, then
CPU0's node->prev will point to CPU6's node.

	       tail
    V----------. v
  ,-. <- ,-.    ,-.
  |6|    |2|    |0|
  `-'    `-'    `-'
     `----------^

At this point CPU0's delayed node->prev = prev is committed, changing
CPU0's node->prev back to CPU2's node. CPU2's node->next is NULL by
now,

				       tail
			                 v
			  ,-. <- ,-. <- ,-.
			  |6|    |2|    |0|
			  `-'    `-'    `-'
			     `----------^

so if CPU0 enters the unqueue path of osq_lock() it will keep spinning
in an infinite loop, since the condition prev->next == node can never
become true.
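
The smp_wmb() added below orders the two stores on the lock side; the
pairing can be sketched as follows (simplified; the full barrier on
the unqueue side is assumed to come from the fully ordered xchg() in
osq_wait_next()):

	/* CPU0: osq_lock(), with the fix */
	node->prev = prev;              /* A */
	smp_wmb();                      /* order A before B */
	WRITE_ONCE(prev->next, node);   /* B */

	/* CPU2: unqueue */
	next = xchg(&node->next, NULL); /* reads from B; fully ordered RMW */
	WRITE_ONCE(next->prev, prev);   /* C: stores to the same location as A */

	/*
	 * Because the xchg() reads from B, and A is ordered before B, the
	 * store C is guaranteed to come after A in the coherence order of
	 * that location, so A can no longer overwrite C.
	 */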

Signed-off-by: Prateek Sood <prsood@codeaurora.org>
---
 kernel/locking/osq_lock.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
index a316794..9f4afa3 100644
--- a/kernel/locking/osq_lock.c
+++ b/kernel/locking/osq_lock.c
@@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_queue *lock)
 
 	prev = decode_cpu(old);
 	node->prev = prev;
+
+	/*
+	 * osq_lock()			unqueue
+	 *
+	 * node->prev = prev            osq_wait_next()
+	 * WMB                          MB
+	 * prev->next = node            next->prev = prev //unqueue-C
+	 *
+	 * Here 'node->prev' and 'next->prev' are the same variable and we need
+	 * to ensure these stores happen in-order to avoid corrupting the list.
+	 */
+	smp_wmb();
+
 	WRITE_ONCE(prev->next, node);
 
 	/*
-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.


* Re: [PATCH] osq_lock: fix osq_lock queue corruption
  2017-07-31 17:24 [PATCH] osq_lock: fix osq_lock queue corruption Prateek Sood
@ 2017-08-07  5:06 ` Prateek Sood
  2017-08-10  4:40 ` Andrea Parri
  1 sibling, 0 replies; 5+ messages in thread
From: Prateek Sood @ 2017-08-07  5:06 UTC (permalink / raw)
  To: peterz, mingo; +Cc: sramana, linux-kernel

On 07/31/2017 10:54 PM, Prateek Sood wrote:
> Fix ordering of link creation between node->prev and prev->next in
> osq_lock(). A case in which the status of optimistic spin queue is
> CPU6->CPU2 in which CPU6 has acquired the lock.
> 
>         tail
>           v
>   ,-. <- ,-.
>   |6|    |2|
>   `-' -> `-'
> 
> At this point if CPU0 comes in to acquire osq_lock, it will update the
> tail count.
> 
>   CPU2			CPU0
>   ----------------------------------
> 
> 				       tail
> 				         v
> 			  ,-. <- ,-.    ,-.
> 			  |6|    |2|    |0|
> 			  `-' -> `-'    `-'
> 
> After tail count update if CPU2 starts to unqueue itself from
> optimistic spin queue, it will find updated tail count with CPU0 and
> update CPU2 node->next to NULL in osq_wait_next().
> 
>   unqueue-A
> 
> 	       tail
> 	         v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
> 
>   unqueue-B
> 
>   ->tail != curr && !node->next
> 
> If reordering of following stores happen then
> prev->next where prev being CPU2 would be updated to point to CPU0 node:
> 
> 				       tail
> 				         v
> 			  ,-. <- ,-.    ,-.
> 			  |6|    |2|    |0|
> 			  `-' -> `-' -> `-'
> 
>   osq_wait_next()
>     node->next <- 0
>     xchg(node->next, NULL)
> 
> 	       tail
> 	         v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
> 
>   unqueue-C
> 
> At this point if next instruction
> WRITE_ONCE(next->prev, prev);
> in CPU2 path is committed before the update of CPU0 node->prev = prev then
> CPU0 node->prev will point to CPU6 node.
> 
> 	       tail
>     V----------. v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
>      `----------^
> 
> At this point if CPU0 path's node->prev = prev is committed resulting
> in change of CPU0 prev back to CPU2 node. CPU2 node->next is NULL
> currently,
> 
> 				       tail
> 			                 v
> 			  ,-. <- ,-. <- ,-.
> 			  |6|    |2|    |0|
> 			  `-'    `-'    `-'
> 			     `----------^
> 
> so if CPU0 gets into unqueue path of osq_lock it will keep spinning
> in infinite loop as condition prev->next == node will never be true.
> 
> Signed-off-by: Prateek Sood <prsood@codeaurora.org>
> ---
>  kernel/locking/osq_lock.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index a316794..9f4afa3 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>  
>  	prev = decode_cpu(old);
>  	node->prev = prev;
> +
> +	/*
> +	 * osq_lock()			unqueue
> +	 *
> +	 * node->prev = prev            osq_wait_next()
> +	 * WMB                          MB
> +	 * prev->next = node            next->prev = prev //unqueue-C
> +	 *
> +	 * Here 'node->prev' and 'next->prev' are the same variable and we need
> +	 * to ensure these stores happen in-order to avoid corrupting the list.
> +	 */
> +	smp_wmb();
> +
>  	WRITE_ONCE(prev->next, node);
>  
>  	/*
> 

Hi Peter,

I have updated the changelog and the comment in the code.

-- 
Qualcomm India Private Limited, on behalf of Qualcomm Innovation
Center, Inc., is a member of Code Aurora Forum, a Linux Foundation
Collaborative Project


* Re: [PATCH] osq_lock: fix osq_lock queue corruption
  2017-07-31 17:24 [PATCH] osq_lock: fix osq_lock queue corruption Prateek Sood
  2017-08-07  5:06 ` Prateek Sood
@ 2017-08-10  4:40 ` Andrea Parri
  1 sibling, 0 replies; 5+ messages in thread
From: Andrea Parri @ 2017-08-10  4:40 UTC (permalink / raw)
  To: Prateek Sood, peterz, mingo; +Cc: sramana, linux-kernel

On Mon, Jul 31, 2017 at 10:54:50PM +0530, Prateek Sood wrote:
> Fix ordering of link creation between node->prev and prev->next in
> osq_lock(). A case in which the status of optimistic spin queue is
> CPU6->CPU2 in which CPU6 has acquired the lock.
> 
>         tail
>           v
>   ,-. <- ,-.
>   |6|    |2|
>   `-' -> `-'
> 
> At this point if CPU0 comes in to acquire osq_lock, it will update the
> tail count.
> 
>   CPU2			CPU0
>   ----------------------------------
> 
> 				       tail
> 				         v
> 			  ,-. <- ,-.    ,-.
> 			  |6|    |2|    |0|
> 			  `-' -> `-'    `-'
> 
> After tail count update if CPU2 starts to unqueue itself from
> optimistic spin queue, it will find updated tail count with CPU0 and
> update CPU2 node->next to NULL in osq_wait_next().
> 
>   unqueue-A
> 
> 	       tail
> 	         v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
> 
>   unqueue-B
> 
>   ->tail != curr && !node->next
> 
> If reordering of following stores happen then
> prev->next where prev being CPU2 would be updated to point to CPU0 node:
> 
> 				       tail
> 				         v
> 			  ,-. <- ,-.    ,-.
> 			  |6|    |2|    |0|
> 			  `-' -> `-' -> `-'
> 
>   osq_wait_next()
>     node->next <- 0
>     xchg(node->next, NULL)
> 
> 	       tail
> 	         v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
> 
>   unqueue-C
> 
> At this point if next instruction
> WRITE_ONCE(next->prev, prev);
> in CPU2 path is committed before the update of CPU0 node->prev = prev then
> CPU0 node->prev will point to CPU6 node.
> 
> 	       tail
>     V----------. v
>   ,-. <- ,-.    ,-.
>   |6|    |2|    |0|
>   `-'    `-'    `-'
>      `----------^
> 
> At this point if CPU0 path's node->prev = prev is committed resulting
> in change of CPU0 prev back to CPU2 node. CPU2 node->next is NULL
> currently,
> 
> 				       tail
> 			                 v
> 			  ,-. <- ,-. <- ,-.
> 			  |6|    |2|    |0|
> 			  `-'    `-'    `-'
> 			     `----------^
> 
> so if CPU0 gets into unqueue path of osq_lock it will keep spinning
> in infinite loop as condition prev->next == node will never be true.
> 
> Signed-off-by: Prateek Sood <prsood@codeaurora.org>
> ---
>  kernel/locking/osq_lock.c | 13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/kernel/locking/osq_lock.c b/kernel/locking/osq_lock.c
> index a316794..9f4afa3 100644
> --- a/kernel/locking/osq_lock.c
> +++ b/kernel/locking/osq_lock.c
> @@ -109,6 +109,19 @@ bool osq_lock(struct optimistic_spin_queue *lock)
>  
>  	prev = decode_cpu(old);
>  	node->prev = prev;
> +
> +	/*
> +	 * osq_lock()			unqueue
> +	 *
> +	 * node->prev = prev            osq_wait_next()
> +	 * WMB                          MB
> +	 * prev->next = node            next->prev = prev //unqueue-C
> +	 *
> +	 * Here 'node->prev' and 'next->prev' are the same variable and we need
> +	 * to ensure these stores happen in-order to avoid corrupting the list.
> +	 */

The pattern/behavior of interest remains somewhat implicit in this snippet
(for example, as you described above, a load "reading from" the store to
prev->next is implicit in osq_wait_next()); however, I was unable to come
up with an alternative without complicating the comment.
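
Concretely, the load "reading from" that store would be the
node->next check/xchg() in osq_wait_next(), roughly (abridged, not
the full function):

	/*
	 * The check and the xchg() below are what observe the lock
	 * side's WRITE_ONCE(prev->next, node).
	 */
	if (node->next) {
		next = xchg(&node->next, NULL); /* fully ordered RMW */
		if (next)
			break;                  /* ... then unqueue-C runs */
	}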

Reviewed-by: Andrea Parri <parri.andrea@gmail.com>

  Andrea


> +	smp_wmb();
> +
>  	WRITE_ONCE(prev->next, node);
>  
>  	/*
> -- 
> Qualcomm India Private Limited, on behalf of Qualcomm Innovation Center, Inc., 
> is a member of Code Aurora Forum, a Linux Foundation Collaborative Project.
> 


end of thread, other threads:[~2017-08-10  4:40 UTC | newest]

Thread overview: 5+ messages
2017-07-31 17:24 [PATCH] osq_lock: fix osq_lock queue corruption Prateek Sood
2017-08-07  5:06 ` Prateek Sood
2017-08-10  4:40 ` Andrea Parri
  -- strict thread matches above, loose matches on Subject: below --
2017-07-14 13:47 Prateek Sood
2017-07-18 11:25 ` Peter Zijlstra
