From mboxrd@z Thu Jan 1 00:00:00 1970
From: Sasha Levin
To: stable@vger.kernel.org
Cc: Anshuman Khandual, David Hildenbrand, Lance Yang, Wei Yang, Dev Jain, Andrew Morton, Sasha Levin
Subject: [PATCH 6.18.y 1/2] mm: replace READ_ONCE() with standard page table accessors
Date: Wed, 1 Apr 2026 12:45:25 -0400
Message-ID: <20260401164526.141913-1-sashal@kernel.org>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <2026033028-blitz-spill-525b@gregkh>
References: <2026033028-blitz-spill-525b@gregkh>
Precedence: bulk
X-Mailing-List: stable@vger.kernel.org
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Anshuman Khandual

[ Upstream commit c0efdb373c3aaacb32db59cadb0710cac13e44ae ]

Replace all READ_ONCE() uses with the standard page table accessors,
i.e. pxdp_get(), which default to READ_ONCE() on platforms that do
not override them.

Link: https://lkml.kernel.org/r/20251007063100.2396936-1-anshuman.khandual@arm.com
Signed-off-by: Anshuman Khandual
Acked-by: David Hildenbrand
Reviewed-by: Lance Yang
Reviewed-by: Wei Yang
Reviewed-by: Dev Jain
Signed-off-by: Andrew Morton
Stable-dep-of: ffef67b93aa3 ("mm/memory: fix PMD/PUD checks in follow_pfnmap_start()")
Signed-off-by: Sasha Levin
---
 mm/gup.c            | 10 +++++-----
 mm/hmm.c            |  2 +-
 mm/memory.c         |  4 ++--
 mm/mprotect.c       |  2 +-
 mm/sparse-vmemmap.c |  2 +-
 mm/vmscan.c         |  2 +-
 6 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index d2524fe09338f..95d948c8e86c9 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -950,7 +950,7 @@ static struct page *follow_pud_mask(struct vm_area_struct *vma,
 	struct mm_struct *mm = vma->vm_mm;
 
 	pudp = pud_offset(p4dp, address);
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (!pud_present(pud))
 		return no_page_table(vma, flags, address);
 	if (pud_leaf(pud)) {
@@ -975,7 +975,7 @@ static struct page *follow_p4d_mask(struct vm_area_struct *vma,
 	p4d_t *p4dp, p4d;
 
 	p4dp = p4d_offset(pgdp, address);
-	p4d = READ_ONCE(*p4dp);
+	p4d = p4dp_get(p4dp);
 	BUILD_BUG_ON(p4d_leaf(p4d));
 
 	if (!p4d_present(p4d) || p4d_bad(p4d))
@@ -3060,7 +3060,7 @@ static int gup_fast_pud_range(p4d_t *p4dp, p4d_t p4d, unsigned long addr,
 
 	pudp = pud_offset_lockless(p4dp, p4d, addr);
 	do {
-		pud_t pud = READ_ONCE(*pudp);
+		pud_t pud = pudp_get(pudp);
 
 		next = pud_addr_end(addr, end);
 		if (unlikely(!pud_present(pud)))
@@ -3086,7 +3086,7 @@ static int gup_fast_p4d_range(pgd_t *pgdp, pgd_t pgd, unsigned long addr,
 
 	p4dp = p4d_offset_lockless(pgdp, pgd, addr);
 	do {
-		p4d_t p4d = READ_ONCE(*p4dp);
+		p4d_t p4d = p4dp_get(p4dp);
 
 		next = p4d_addr_end(addr, end);
 		if (!p4d_present(p4d))
@@ -3108,7 +3108,7 @@ static void gup_fast_pgd_range(unsigned long addr, unsigned long end,
 
 	pgdp = pgd_offset(current->mm, addr);
 	do {
-		pgd_t pgd = READ_ONCE(*pgdp);
+		pgd_t pgd = pgdp_get(pgdp);
 
 		next = pgd_addr_end(addr, end);
 		if (pgd_none(pgd))
diff --git a/mm/hmm.c b/mm/hmm.c
index 87562914670a1..a56081d67ad69 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -491,7 +491,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
 
 	/* Normally we don't want to split the huge page */
 	walk->action = ACTION_CONTINUE;
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (!pud_present(pud)) {
 		spin_unlock(ptl);
 		return hmm_vma_walk_hole(start, end, -1, walk);
diff --git a/mm/memory.c b/mm/memory.c
index e43f0a4702c48..a217d9bacc0cf 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -6691,12 +6691,12 @@ int follow_pfnmap_start(struct follow_pfnmap_args *args)
 		goto out;
 
 	p4dp = p4d_offset(pgdp, address);
-	p4d = READ_ONCE(*p4dp);
+	p4d = p4dp_get(p4dp);
 	if (p4d_none(p4d) || unlikely(p4d_bad(p4d)))
 		goto out;
 
 	pudp = pud_offset(p4dp, address);
-	pud = READ_ONCE(*pudp);
+	pud = pudp_get(pudp);
 	if (pud_none(pud))
 		goto out;
 	if (pud_leaf(pud)) {
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 113b489858341..988c366137d50 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -599,7 +599,7 @@ static inline long change_pud_range(struct mmu_gather *tlb,
 			break;
 		}
 
-		pud = READ_ONCE(*pudp);
+		pud = pudp_get(pudp);
 		if (pud_none(pud))
 			continue;
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index dbd8daccade28..37522d6cb3988 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -439,7 +439,7 @@ int __meminit vmemmap_populate_hugepages(unsigned long start, unsigned long end,
 			return -ENOMEM;
 
 		pmd = pmd_offset(pud, addr);
-		if (pmd_none(READ_ONCE(*pmd))) {
+		if (pmd_none(pmdp_get(pmd))) {
 			void *p;
 
 			p = vmemmap_alloc_block_buf(PMD_SIZE, node, altmap);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 05d9354a59c65..95b1179a14e78 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -3783,7 +3783,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long start, unsigned long end,
 	pud = pud_offset(p4d, start & P4D_MASK);
 restart:
 	for (i = pud_index(start), addr = start; addr != end; i++, addr = next) {
-		pud_t val = READ_ONCE(pud[i]);
+		pud_t val = pudp_get(pud + i);
 
 		next = pud_addr_end(addr, end);
-- 
2.53.0