From mboxrd@z Thu Jan  1 00:00:00 1970
Return-Path:
Received: from mail.linuxfoundation.org ([140.211.169.12]:39248 "EHLO
        mail.linuxfoundation.org" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
        with ESMTP id S1726002AbeIHBzH (ORCPT);
        Fri, 7 Sep 2018 21:55:07 -0400
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman,
        stable@vger.kernel.org,
        Boqun Feng,
        "Paul E. McKenney",
        "Aneesh Kumar K.V"
Subject: [PATCH 4.18 001/145] rcu: Make expedited GPs handle CPU 0 being offline
Date: Fri, 7 Sep 2018 23:07:47 +0200
Message-Id: <20180907210903.744555886@linuxfoundation.org>
In-Reply-To: <20180907210903.617721278@linuxfoundation.org>
References: <20180907210903.617721278@linuxfoundation.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Sender: stable-owner@vger.kernel.org
List-ID:

4.18-stable review patch.  If anyone has any objections, please let me know.

------------------

From: Boqun Feng

commit fcc63543650150629c8a873cbef3578770acecd9 upstream.

Currently, the parallelized initialization of expedited grace periods uses
the workqueue associated with each rcu_node structure's ->grplo field.
This works fine unless that CPU is offline.  This commit therefore uses
the lowest-numbered online CPU in the rcu_node structure's CPU range, or
just queues the work on WORK_CPU_UNBOUND if there are no online CPUs
corresponding to this rcu_node structure.

Note that this patch uses cpu_is_offline() instead of the usual approach
of checking bits in the rcu_node structure's ->qsmaskinitnext field.  This
is safe because preemption is disabled across both the cpu_is_offline()
check and the call to queue_work_on().

Signed-off-by: Boqun Feng
[ paulmck: Disable preemption to close offline race window. ]
Signed-off-by: Paul E. McKenney
[ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
Tested-by: Aneesh Kumar K.V
Signed-off-by: Greg Kroah-Hartman
---
 kernel/rcu/tree_exp.h |    9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ retry_ipi:
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
 				     smp_call_func_t func)
 {
+	int cpu;
 	struct rcu_node *rnp;
 
 	trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(str
 			continue;
 		}
 		INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-		queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_disable();
+		cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+		/* If all offline, queue the work on an unbound CPU. */
+		if (unlikely(cpu > rnp->grphi))
+			cpu = WORK_CPU_UNBOUND;
+		queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+		preempt_enable();
 		rnp->exp_need_flush = true;
 	}
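
------------------

The CPU-selection idiom in the hunk above is subtle enough to deserve a
standalone sketch.  The helper below is hypothetical (exp_select_cpu()
does not exist in the kernel tree); the cpumask and workqueue calls it
makes are the same ones the patch uses.

#include <linux/cpumask.h>
#include <linux/preempt.h>
#include <linux/workqueue.h>

/*
 * Return the lowest-numbered online CPU in the inclusive range
 * [lo, hi], or WORK_CPU_UNBOUND if every CPU in that range is
 * offline.  The caller must keep preemption disabled across this
 * call and the subsequent queue_work_on() so the chosen CPU cannot
 * go offline in between (the race window this patch closes).
 */
static int exp_select_cpu(int lo, int hi)
{
	int cpu;

	/*
	 * cpumask_next(n, mask) returns the first set bit strictly
	 * after n, so passing lo - 1 starts the scan at lo itself.
	 */
	cpu = cpumask_next(lo - 1, cpu_online_mask);
	return cpu > hi ? WORK_CPU_UNBOUND : cpu;
}

With that helper, the queueing sequence in sync_rcu_exp_select_cpus()
would read:

	preempt_disable();
	queue_work_on(exp_select_cpu(rnp->grplo, rnp->grphi),
		      rcu_par_gp_wq, &rnp->rew.rew_work);
	preempt_enable();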