From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Ryan Roberts,
	Mark Rutland, Lorenzo Stoakes, Andrew Morton, David Hildenbrand,
	Mike Rapoport, Linu Cherian, Usama Arif,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [RFC V2 02/14] mm: Add read-write accessors for vm_page_prot
Date: Wed, 13 May 2026 10:15:35 +0530
Message-ID: <20260513044547.4128549-3-anshuman.khandual@arm.com>
In-Reply-To: <20260513044547.4128549-1-anshuman.khandual@arm.com>
References: <20260513044547.4128549-1-anshuman.khandual@arm.com>
Currently vma->vm_page_prot is read and written locklessly via READ_ONCE()
and WRITE_ONCE(). But with the introduction of D128 page tables on the
arm64 platform, vm_page_prot grows to 128 bits, which READ_ONCE() and
WRITE_ONCE() can no longer handle safely. Add read and write accessors for
vm_page_prot, pgprot_read() and pgprot_write(), which a platform can
override when required, while still defaulting to READ_ONCE() and
WRITE_ONCE(), thus preserving existing behaviour for everyone else.

Cc: Andrew Morton
Cc: David Hildenbrand
Cc: Lorenzo Stoakes
Cc: Mike Rapoport
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual
---
Changes in RFC V2:

- Dropped _once from pgprot_[read|write]() callbacks per Mike

 include/linux/pgtable.h | 14 ++++++++++++++
 mm/huge_memory.c        |  4 ++--
 mm/memory.c             |  2 +-
 mm/migrate.c            |  2 +-
 mm/mmap.c               |  2 +-
 5 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index a738048128e7..ca0fc76bedcb 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -501,6 +501,20 @@ static inline pgd_t pgdp_get(pgd_t *pgdp)
 }
 #endif
 
+#ifndef pgprot_read
+static inline pgprot_t pgprot_read(pgprot_t *prot)
+{
+	return READ_ONCE(*prot);
+}
+#endif
+
+#ifndef pgprot_write
+static inline void pgprot_write(pgprot_t *prot, pgprot_t val)
+{
+	WRITE_ONCE(*prot, val);
+}
+#endif
+
 #ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
 static inline bool ptep_test_and_clear_young(struct vm_area_struct *vma,
 					     unsigned long address, pte_t *ptep)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 970e077019b7..a24abf7cfd63 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3339,7 +3339,7 @@ static void __split_huge_pmd_locked(struct vm_area_struct *vma, pmd_t *pmd,
 	} else {
 		pte_t entry;
 
-		entry = mk_pte(page, READ_ONCE(vma->vm_page_prot));
+		entry = mk_pte(page, pgprot_read(&vma->vm_page_prot));
 		if (write)
 			entry = pte_mkwrite(entry, vma);
 		if (!young)
@@ -5042,7 +5042,7 @@ void remove_migration_pmd(struct page_vma_mapped_walk *pvmw, struct page *new)
 	entry = softleaf_from_pmd(*pvmw->pmd);
 	folio_get(folio);
-	pmde = folio_mk_pmd(folio, READ_ONCE(vma->vm_page_prot));
+	pmde = folio_mk_pmd(folio, pgprot_read(&vma->vm_page_prot));
 	if (pmd_swp_soft_dirty(*pvmw->pmd))
 		pmde = pmd_mksoft_dirty(pmde);
diff --git a/mm/memory.c b/mm/memory.c
index 7b6ee3b847a0..86b2c9513885 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -876,7 +876,7 @@ static void restore_exclusive_pte(struct vm_area_struct *vma,
 	VM_WARN_ON_FOLIO(!folio_test_locked(folio), folio);
 
-	pte = pte_mkold(mk_pte(page, READ_ONCE(vma->vm_page_prot)));
+	pte = pte_mkold(mk_pte(page, pgprot_read(&vma->vm_page_prot)));
 	if (pte_swp_soft_dirty(orig_pte))
 		pte = pte_mksoft_dirty(pte);
diff --git a/mm/migrate.c b/mm/migrate.c
index 8a64291ab5b4..ff2cbe66daf5 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -377,7 +377,7 @@ static bool remove_migration_pte(struct folio *folio,
 			continue;
 
 		folio_get(folio);
-		pte = mk_pte(new, READ_ONCE(vma->vm_page_prot));
+		pte = mk_pte(new, pgprot_read(&vma->vm_page_prot));
 		entry = softleaf_from_pte(old_pte);
 		if (!softleaf_is_migration_young(entry))
diff --git a/mm/mmap.c b/mm/mmap.c
index 5754d1c36462..4f11eb732c81 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -89,7 +89,7 @@ void vma_set_page_prot(struct vm_area_struct *vma)
 		vm_page_prot = vm_pgprot_modify(vm_page_prot, vm_flags);
 	}
 	/* remove_protection_ptes reads vma->vm_page_prot without mmap_lock */
-	WRITE_ONCE(vma->vm_page_prot, vm_page_prot);
+	pgprot_write(&vma->vm_page_prot, vm_page_prot);
 }
 
 /*
-- 
2.43.0
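
[Editorial illustration, not part of the patch above: a minimal sketch of
how an architecture might override these hooks once pgprot_t no longer
fits in a machine word, e.g. from its own asm/pgtable.h so the generic
fallbacks are compiled out. The bodies are placeholders only; a real D128
port would supply whatever 128-bit-safe load/store mechanism the hardware
or its locking scheme provides.]

/*
 * Hypothetical arch-side override (illustrative only), defined before
 * the #ifndef fallbacks in <linux/pgtable.h> are considered.
 */
#define pgprot_read pgprot_read
static inline pgprot_t pgprot_read(pgprot_t *prot)
{
	/*
	 * Placeholder body: a plain structure copy. A real 128-bit
	 * pgprot_t would need a single-copy-atomic 128-bit load (or a
	 * lock) here to avoid tearing, since READ_ONCE() only handles
	 * accesses up to the native word size.
	 */
	return *prot;
}

#define pgprot_write pgprot_write
static inline void pgprot_write(pgprot_t *prot, pgprot_t val)
{
	/* Likewise, a 128-bit single-copy-atomic store in a real port. */
	*prot = val;
}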