From: Frederic Weisbecker
To: "Paul E. McKenney"
Cc: LKML, Frederic Weisbecker, Uladzislau Rezki, Boqun Feng, Neeraj Upadhyay, Joel Fernandes
Subject: [PATCH 3/4] rcu: Perform early sequence fetch for polling locklessly
Date: Wed, 16 Mar 2022 15:42:54 +0100
Message-Id: <20220316144255.336021-4-frederic@kernel.org>
In-Reply-To: <20220316144255.336021-1-frederic@kernel.org>
References: <20220316144255.336021-1-frederic@kernel.org>

Workqueue ordering guarantees that the work sees all the accesses performed
by the task before its call to the corresponding queue_work(). Therefore the
sequence to poll can be fetched locklessly.

The only downside is that the work may then miss a 0x1 flag set by a prior
work. But that can already happen concurrently anyway once exp_poll_lock is
unlocked. In the worst case, the slow path involving
synchronize_rcu_expedited() takes care of the situation.
Signed-off-by: Frederic Weisbecker
Cc: Neeraj Upadhyay
Cc: Boqun Feng
Cc: Uladzislau Rezki
Cc: Joel Fernandes
---
 kernel/rcu/tree_exp.h | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index 763ec35546ed..c4a19c6a83cf 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -909,9 +909,7 @@ static void sync_rcu_do_polled_gp(struct work_struct *wp)
 	struct rcu_node *rnp = container_of(wp, struct rcu_node, exp_poll_wq);
 	unsigned long s;
 
-	raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
-	s = rnp->exp_seq_poll_rq;
-	raw_spin_unlock_irqrestore(&rnp->exp_poll_lock, flags);
+	s = READ_ONCE(rnp->exp_seq_poll_rq);
 	if (s & 0x1)
 		return;
 	while (!sync_exp_work_done(s))
@@ -919,7 +917,7 @@
 	raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
 	s = rnp->exp_seq_poll_rq;
 	if (!(s & 0x1) && sync_exp_work_done(s))
-		rnp->exp_seq_poll_rq |= 0x1;
+		WRITE_ONCE(rnp->exp_seq_poll_rq, s | 0x1);
 	raw_spin_unlock_irqrestore(&rnp->exp_poll_lock, flags);
 }
@@ -949,7 +947,7 @@ unsigned long start_poll_synchronize_rcu_expedited(void)
 	if (rcu_init_invoked())
 		raw_spin_lock_irqsave(&rnp->exp_poll_lock, flags);
 	if ((rnp->exp_seq_poll_rq & 0x1) || ULONG_CMP_LT(rnp->exp_seq_poll_rq, s)) {
-		rnp->exp_seq_poll_rq = s;
+		WRITE_ONCE(rnp->exp_seq_poll_rq, s);
 		if (rcu_init_invoked())
 			queue_work(rcu_gp_wq, &rnp->exp_poll_wq);
 	}
-- 
2.25.1