From: Kevin Brodsky <kevin.brodsky@arm.com>
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Kevin Brodsky, Alexander Gordeev,
	Andreas Larsson, Andrew Morton, Boris Ostrovsky, Borislav Petkov,
	Catalin Marinas, Christophe Leroy, Dave Hansen, David Hildenbrand,
	"David S. Miller", "H. Peter Anvin", Ingo Molnar, Jann Horn,
	Juergen Gross,
	"Liam R. Howlett", Lorenzo Stoakes, Madhavan Srinivasan,
	Michael Ellerman, Michal Hocko, Mike Rapoport, Nicholas Piggin,
	Peter Zijlstra, Ryan Roberts, Suren Baghdasaryan, Thomas Gleixner,
	Vlastimil Babka, Will Deacon, Yeoreum Yun,
	linux-arm-kernel@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/7] arm64: mm: fully support nested lazy_mmu sections
Date: Mon, 8 Sep 2025 08:39:27 +0100
Message-ID: <20250908073931.4159362-4-kevin.brodsky@arm.com>
In-Reply-To: <20250908073931.4159362-1-kevin.brodsky@arm.com>
References: <20250908073931.4159362-1-kevin.brodsky@arm.com>

Despite recent efforts to prevent lazy_mmu sections from nesting, it
remains difficult to ensure that it never occurs - and in fact it does
occur on arm64 in certain situations (CONFIG_DEBUG_PAGEALLOC).

Commit 1ef3095b1405 ("arm64/mm: Permit lazy_mmu_mode to be nested")
made nesting tolerable on arm64, but without truly supporting it: the
inner leave() call clears TIF_LAZY_MMU, disabling the batching
optimisation before the outer section ends.

Now that the lazy_mmu API allows enter() to pass through a state to
the matching leave() call, we can actually support nesting. If enter()
is called inside an active lazy_mmu section, TIF_LAZY_MMU will already
be set, and we can then return LAZY_MMU_NESTED to instruct the
matching leave() call not to clear TIF_LAZY_MMU.

The only effect of this patch is to ensure that TIF_LAZY_MMU (and
therefore the batching optimisation) remains set until the outermost
lazy_mmu section ends. leave() still emits barriers if needed,
regardless of the nesting level, as the caller may expect any page
table changes to become visible when leave() returns.

Signed-off-by: Kevin Brodsky <kevin.brodsky@arm.com>
---
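A sketch of the resulting behaviour from a caller's perspective
(hypothetical back-to-back nesting, not a call path taken from this
series):

	lazy_mmu_state_t outer, inner;

	outer = arch_enter_lazy_mmu_mode();
	/* TIF_LAZY_MMU was clear: outer == LAZY_MMU_DEFAULT */

	inner = arch_enter_lazy_mmu_mode();
	/* TIF_LAZY_MMU already set: inner == LAZY_MMU_NESTED */

	arch_leave_lazy_mmu_mode(inner);
	/* emits barriers if any are pending; TIF_LAZY_MMU stays set */

	/* ... batching is still active here ... */

	arch_leave_lazy_mmu_mode(outer);
	/* emits barriers if any are pending, then clears TIF_LAZY_MMU */

For reference, the nesting that does occur with CONFIG_DEBUG_PAGEALLOC
has the shape described in the comment removed below (intermediate
calls elided):

	zap_pte_range()                /* enters lazy_mmu mode         */
	  -> page allocation
	    -> apply_to_page_range()   /* linear map permission change */
	                               /* re-enters lazy_mmu mode      */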
 arch/arm64/include/asm/pgtable.h | 19 +++++--------------
 1 file changed, 5 insertions(+), 14 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 816197d08165..602feda97dc4 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -85,24 +85,14 @@ typedef int lazy_mmu_state_t;
 
 static inline lazy_mmu_state_t arch_enter_lazy_mmu_mode(void)
 {
-	/*
-	 * lazy_mmu_mode is not supposed to permit nesting. But in practice this
-	 * does happen with CONFIG_DEBUG_PAGEALLOC, where a page allocation
-	 * inside a lazy_mmu_mode section (such as zap_pte_range()) will change
-	 * permissions on the linear map with apply_to_page_range(), which
-	 * re-enters lazy_mmu_mode. So we tolerate nesting in our
-	 * implementation. The first call to arch_leave_lazy_mmu_mode() will
-	 * flush and clear the flag such that the remainder of the work in the
-	 * outer nest behaves as if outside of lazy mmu mode. This is safe and
-	 * keeps tracking simple.
-	 */
+	int lazy_mmu_nested;
 
 	if (in_interrupt())
 		return LAZY_MMU_DEFAULT;
 
-	set_thread_flag(TIF_LAZY_MMU);
+	lazy_mmu_nested = test_and_set_thread_flag(TIF_LAZY_MMU);
 
-	return LAZY_MMU_DEFAULT;
+	return lazy_mmu_nested ? LAZY_MMU_NESTED : LAZY_MMU_DEFAULT;
 }
 
 static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
@@ -113,7 +103,8 @@ static inline void arch_leave_lazy_mmu_mode(lazy_mmu_state_t state)
 	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
 		emit_pte_barriers();
 
-	clear_thread_flag(TIF_LAZY_MMU);
+	if (state != LAZY_MMU_NESTED)
+		clear_thread_flag(TIF_LAZY_MMU);
 }
 
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-- 
2.47.0