Date: Mon, 14 Apr 2025 18:38:19 +0100
From: Catalin Marinas
To: Ryan Roberts
Cc: Will Deacon, Pasha Tatashin, Andrew Morton, Uladzislau Rezki,
	Christoph Hellwig, David Hildenbrand, "Matthew Wilcox (Oracle)",
	Mark Rutland, Anshuman Khandual, Alexandre Ghiti, Kevin Brodsky,
	linux-arm-kernel@lists.infradead.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 11/11] arm64/mm: Batch barriers when updating kernel mappings
In-Reply-To: <20250304150444.3788920-12-ryan.roberts@arm.com>
References: <20250304150444.3788920-1-ryan.roberts@arm.com>
	<20250304150444.3788920-12-ryan.roberts@arm.com>

On Tue, Mar 04, 2025 at 03:04:41PM +0000, Ryan Roberts wrote:
> diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
> index 1898c3069c43..149df945c1ab 100644
> --- a/arch/arm64/include/asm/pgtable.h
> +++ b/arch/arm64/include/asm/pgtable.h
> @@ -40,6 +40,55 @@
>  #include
>  #include
>  
> +static inline void emit_pte_barriers(void)
> +{
> +	/*
> +	 * These barriers are emitted under certain conditions after a pte entry
> +	 * was modified (see e.g. __set_pte_complete()).
> +	 * The dsb makes the store visible to the table walker. The isb
> +	 * ensures that any previous speculative "invalid translation" marker
> +	 * that is in the CPU's pipeline gets cleared, so that any access to
> +	 * that address after setting the pte to valid won't cause a spurious
> +	 * fault. If the thread gets preempted after storing to the pgtable
> +	 * but before emitting these barriers, __switch_to() emits a dsb which
> +	 * ensures the walker gets to see the store. There is no guarantee of
> +	 * an isb being issued though. This is safe because it will still get
> +	 * issued (albeit on a potentially different CPU) when the thread
> +	 * starts running again, before any access to the address.
> +	 */
> +	dsb(ishst);
> +	isb();
> +}
> +
> +static inline void queue_pte_barriers(void)
> +{
> +	if (test_thread_flag(TIF_LAZY_MMU))
> +		set_thread_flag(TIF_LAZY_MMU_PENDING);

Since we can have lots of calls here, it might be slightly cheaper to
test TIF_LAZY_MMU_PENDING first and avoid setting it unnecessarily.

I haven't checked - does the compiler generate multiple mrs reads of
sp_el0 for subsequent test_thread_flag() calls?

> +	else
> +		emit_pte_barriers();
> +}
> +
> +#define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
> +static inline void arch_enter_lazy_mmu_mode(void)
> +{
> +	VM_WARN_ON(in_interrupt());
> +	VM_WARN_ON(test_thread_flag(TIF_LAZY_MMU));
> +
> +	set_thread_flag(TIF_LAZY_MMU);
> +}
> +
> +static inline void arch_flush_lazy_mmu_mode(void)
> +{
> +	if (test_and_clear_thread_flag(TIF_LAZY_MMU_PENDING))
> +		emit_pte_barriers();
> +}
> +
> +static inline void arch_leave_lazy_mmu_mode(void)
> +{
> +	arch_flush_lazy_mmu_mode();
> +	clear_thread_flag(TIF_LAZY_MMU);
> +}
> +
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  #define __HAVE_ARCH_FLUSH_PMD_TLB_RANGE
> 
> @@ -323,10 +372,8 @@ static inline void __set_pte_complete(pte_t pte)
>  	 * Only if the new pte is valid and kernel, otherwise TLB maintenance
>  	 * has the necessary barriers.
>  	 */
> -	if (pte_valid_not_user(pte)) {
> -		dsb(ishst);
> -		isb();
> -	}
> +	if (pte_valid_not_user(pte))
> +		queue_pte_barriers();
>  }

I think this scheme works; I couldn't find a counter-example unless
__set_pte() gets called in an interrupt context. You could add
VM_WARN_ON(in_interrupt()) in queue_pte_barriers() as well.

With preemption, the newly mapped range shouldn't be used before
arch_flush_lazy_mmu_mode() is called, so it looks safe as well. I think
x86 uses a per-CPU variable to track this, but per-thread is easier to
reason about if there's no nesting.

>  static inline void __set_pte(pte_t *ptep, pte_t pte)
> @@ -778,10 +825,8 @@ static inline void set_pmd(pmd_t *pmdp, pmd_t pmd)
>  
>  	WRITE_ONCE(*pmdp, pmd);
>  
> -	if (pmd_valid(pmd)) {
> -		dsb(ishst);
> -		isb();
> -	}
> +	if (pmd_valid(pmd))
> +		queue_pte_barriers();
>  }

We discussed this on a previous series - for pmd/pud we end up with
barriers even for user mappings, but they are at a much coarser
granularity (and I wasn't keen on 'user' attributes for the table
entries).

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
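
For illustration, a minimal untested sketch of the cheaper variant
suggested above, combined with the in_interrupt() warning, using only
the flags and helpers already introduced by this patch:

static inline void queue_pte_barriers(void)
{
	/* Barrier batching is tracked per thread, not expected in IRQ context. */
	VM_WARN_ON(in_interrupt());

	if (test_thread_flag(TIF_LAZY_MMU)) {
		/* Skip the atomic set once the pending flag is already set. */
		if (!test_thread_flag(TIF_LAZY_MMU_PENDING))
			set_thread_flag(TIF_LAZY_MMU_PENDING);
	} else {
		emit_pte_barriers();
	}
}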