From mboxrd@z Thu Jan  1 00:00:00 1970
From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Zqiang, Frederic Weisbecker, Neeraj Upadhyay, Boqun Feng, Uladzislau Rezki, Joel Fernandes
Subject: [PATCH 2/3] rcu/nocb: Invert rcu_state.barrier_mutex VS hotplug lock locking order
Date: Tue, 19 Apr 2022 14:23:19 +0200
Message-Id: <20220419122320.2060902-3-frederic@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220419122320.2060902-1-frederic@kernel.org>
References: <20220419122320.2060902-1-frederic@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
List-ID: X-Mailing-List: linux-kernel@vger.kernel.org

From: Zqiang

In case of failure to spawn either rcuog or rcuo[p] kthreads for a given
rdp, rcu_nocb_rdp_deoffload() needs to be called with the hotplug lock
and the barrier_mutex held. However, the cpus write lock is already held
when rcutree_prepare_cpu() is called. It is not possible to call
rcu_nocb_rdp_deoffload() from there while taking only the barrier_mutex,
as this would result in a lock inversion against
rcu_nocb_cpu_deoffload(), which acquires both locks in the reverse
order.

Solve this by inverting the locking order inside
rcu_nocb_cpu_[de]offload(). This is also a prerequisite for toggling
NOCB states from cpusets.
Signed-off-by: Zqiang
Cc: Neeraj Upadhyay
Cc: Boqun Feng
Cc: Uladzislau Rezki
Cc: Joel Fernandes
Signed-off-by: Frederic Weisbecker
---
 kernel/rcu/tree_nocb.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/rcu/tree_nocb.h b/kernel/rcu/tree_nocb.h
index dac74952e1d1..f2f2cab6285a 100644
--- a/kernel/rcu/tree_nocb.h
+++ b/kernel/rcu/tree_nocb.h
@@ -1055,8 +1055,8 @@ int rcu_nocb_cpu_deoffload(int cpu)
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	int ret = 0;
 
-	mutex_lock(&rcu_state.barrier_mutex);
 	cpus_read_lock();
+	mutex_lock(&rcu_state.barrier_mutex);
 	if (rcu_rdp_is_offloaded(rdp)) {
 		if (cpu_online(cpu)) {
 			ret = work_on_cpu(cpu, rcu_nocb_rdp_deoffload, rdp);
@@ -1067,8 +1067,8 @@ int rcu_nocb_cpu_deoffload(int cpu)
 			ret = -EINVAL;
 		}
 	}
-	cpus_read_unlock();
 	mutex_unlock(&rcu_state.barrier_mutex);
+	cpus_read_unlock();
 
 	return ret;
 }
@@ -1134,8 +1134,8 @@ int rcu_nocb_cpu_offload(int cpu)
 	struct rcu_data *rdp = per_cpu_ptr(&rcu_data, cpu);
 	int ret = 0;
 
-	mutex_lock(&rcu_state.barrier_mutex);
 	cpus_read_lock();
+	mutex_lock(&rcu_state.barrier_mutex);
 	if (!rcu_rdp_is_offloaded(rdp)) {
 		if (cpu_online(cpu)) {
 			ret = work_on_cpu(cpu, rcu_nocb_rdp_offload, rdp);
@@ -1146,8 +1146,8 @@ int rcu_nocb_cpu_offload(int cpu)
 			ret = -EINVAL;
 		}
 	}
-	cpus_read_unlock();
 	mutex_unlock(&rcu_state.barrier_mutex);
+	cpus_read_unlock();
 
 	return ret;
 }
-- 
2.25.1