From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Mon, 23 Jul 2018 18:13:48 -0700
From: isaacm@codeaurora.org
To: peterz@infradead.org, matt@codeblueprint.co.uk, mingo@kernel.org,
	tglx@linutronix.de, bigeasy@linutronix.de
Cc: linux-kernel@vger.kernel.org, psodagud@codeaurora.org,
	gregkh@linuxfoundation.org, pkondeti@codeaurora.org,
	stable@vger.kernel.org
Subject: Re: [PATCH] stop_machine: Disable preemption after queueing stopper threads
In-Reply-To: <1531856129-9871-1-git-send-email-isaacm@codeaurora.org>
References: <1531856129-9871-1-git-send-email-isaacm@codeaurora.org>

Hi all,

Are there any comments about this patch?

Thanks,
Isaac Manjarres

On 2018-07-17 12:35, Isaac J. Manjarres wrote:
> This commit:
>
> 9fb8d5dc4b64 ("stop_machine, Disable preemption when
> waking two stopper threads")
>
> does not fully address the race condition that can occur
> as follows:
>
> On one CPU, call it CPU 3, thread 1 invokes
> cpu_stop_queue_two_works(2, 3, ...), and the execution is such
> that thread 1 queues the works for migration/2 and migration/3,
> and is preempted after releasing the locks for migration/2 and
> migration/3, but before waking the threads.
>
> Then, on CPU 2, a kworker, call it thread 2, is running,
> and it invokes cpu_stop_queue_two_works(1, 2, ...), such that
> thread 2 queues the works for migration/1 and migration/2.
> Meanwhile, on CPU 3, thread 1 resumes execution, and wakes
> migration/2 and migration/3. This means that when CPU 2
> releases the locks for migration/1 and migration/2, but before
> it wakes those threads, it can be preempted by migration/2.
>
> If thread 2 is preempted by migration/2, then migration/2 will
> execute the first work item successfully, since migration/3
> was woken up by CPU 3, but when it goes to execute the second
> work item, it disables preemption and calls multi_cpu_stop(),
> and thus CPU 2 will wait forever for migration/1, which should
> have been woken up by thread 2. However, migration/1 cannot be
> woken up by thread 2: thread 2 is a kworker, so it is affine to
> CPU 2, but CPU 2 is running migration/2 with preemption
> disabled, so thread 2 will never run.
>
> Disable preemption after queueing works for stopper threads
> to ensure that the operation of queueing the works and waking
> the stopper threads is atomic.
>
> Fixes: 9fb8d5dc4b64 ("stop_machine, Disable preemption when waking two
> stopper threads")
> Co-Developed-by: Prasad Sodagudi
> Co-Developed-by: Pavankumar Kondeti
> Signed-off-by: Isaac J. Manjarres
> Signed-off-by: Prasad Sodagudi
> Signed-off-by: Pavankumar Kondeti
> Cc: stable@vger.kernel.org
> ---
>  kernel/stop_machine.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/stop_machine.c b/kernel/stop_machine.c
> index 1ff523d..e190d1e 100644
> --- a/kernel/stop_machine.c
> +++ b/kernel/stop_machine.c
> @@ -260,6 +260,15 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>  	err = 0;
>  	__cpu_stop_queue_work(stopper1, work1, &wakeq);
>  	__cpu_stop_queue_work(stopper2, work2, &wakeq);
> +	/*
> +	 * The waking up of stopper threads has to happen
> +	 * in the same scheduling context as the queueing.
> +	 * Otherwise, there is a possibility of one of the
> +	 * above stoppers being woken up by another CPU,
> +	 * and preempting us. This will cause us to not
> +	 * wake up the other stopper forever.
> +	 */
> +	preempt_disable();
> unlock:
>  	raw_spin_unlock(&stopper2->lock);
>  	raw_spin_unlock_irq(&stopper1->lock);
> @@ -271,7 +280,6 @@ static int cpu_stop_queue_two_works(int cpu1, struct cpu_stop_work *work1,
>  	}
>
>  	if (!err) {
> -		preempt_disable();
>  		wake_up_q(&wakeq);
>  		preempt_enable();
>  	}