From mboxrd@z Thu Jan 1 00:00:00 1970
From: Anshuman Khandual
To: linux-arm-kernel@lists.infradead.org
Cc: Anshuman Khandual, Catalin Marinas, Will Deacon, Ryan Roberts,
	Mark Rutland, Lorenzo Stoakes, Andrew Morton, David Hildenbrand,
	Mike Rapoport, Linu Cherian, Usama Arif,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	kasan-dev@googlegroups.com
Subject: [RFC V2 06/14] arm64/mm: Convert READ_ONCE() as pgdp_get() while accessing PGD
Date: Wed, 13 May 2026 10:15:39 +0530
Message-ID: <20260513044547.4128549-7-anshuman.khandual@arm.com>
In-Reply-To: <20260513044547.4128549-1-anshuman.khandual@arm.com>
References: <20260513044547.4128549-1-anshuman.khandual@arm.com>

Convert all READ_ONCE() based PGD accesses to pgdp_get(), which will support
both the D64 and D128 translation regimes going forward. This is needed
because READ_ONCE() would require 128 bit single copy atomicity when reading
128 bit page table entries, which is not currently supported on arm64 - the
build fails when READ_ONCE() is applied to anything wider than 64 bits.
Load Pair/Store Pair (ldp/stp) are only single copy atomic if FEAT_LSE128 is
supported (which is required when FEAT_D128 is supported). Currently, 128 bit
pgtables are a compile time decision, so we could have chosen to extend
READ_ONCE()/WRITE_ONCE() to allow 128 bit accesses for that configuration.
But these are general purpose APIs, and we were concerned that other users
might eventually creep in that expect 128 bit support and then fail to
compile in the other configs. Worse, we are considering eventually making
D128 a boot time option, at which point we would have to make READ_ONCE()
always allow 128 bit at compile time, but it might then silently tear at
runtime. So our preference is to standardize on these existing helpers,
which arm64 can override to provide the 128 bit single copy guarantee when
required.

Cc: Catalin Marinas
Cc: Will Deacon
Cc: Ryan Roberts
Cc: Mark Rutland
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Cc: kasan-dev@googlegroups.com
Signed-off-by: Anshuman Khandual
---
Changes in RFC V2
- Moved the helpers back from arch/arm64/mm/mmu.c into the header

 arch/arm64/mm/fault.c       | 2 +-
 arch/arm64/mm/hugetlbpage.c | 2 +-
 arch/arm64/mm/kasan_init.c  | 2 +-
 arch/arm64/mm/mmu.c         | 6 +++---
 arch/arm64/mm/pageattr.c    | 2 +-
 arch/arm64/mm/trans_pgd.c   | 4 ++--
 6 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 12131ece18af..5c61f39f7f29 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -153,7 +153,7 @@ static void show_pte(unsigned long addr)
 		 mm == &init_mm ? "swapper" : "user", PAGE_SIZE / SZ_1K,
 		 vabits_actual, mm_to_pgd_phys(mm));

 	pgdp = pgd_offset(mm, addr);
-	pgd = READ_ONCE(*pgdp);
+	pgd = pgdp_get(pgdp);
 	pr_alert("[%016lx] pgd=%016llx", addr, pgd_val(pgd));

 	do {
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 8eb235db7581..d4c976128bbd 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -284,7 +284,7 @@ pte_t *huge_pte_offset(struct mm_struct *mm,
 	pmd_t *pmdp, pmd;

 	pgdp = pgd_offset(mm, addr);
-	if (!pgd_present(READ_ONCE(*pgdp)))
+	if (!pgd_present(pgdp_get(pgdp)))
 		return NULL;

 	p4dp = p4d_offset(pgdp, addr);
diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index e50c40162bce..d05c16cfa5aa 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -102,7 +102,7 @@ static pud_t *__init kasan_pud_offset(p4d_t *p4dp, unsigned long addr, int node,
 static p4d_t *__init kasan_p4d_offset(pgd_t *pgdp, unsigned long addr,
				       int node, bool early)
 {
-	if (pgd_none(READ_ONCE(*pgdp))) {
+	if (pgd_none(pgdp_get(pgdp))) {
 		phys_addr_t p4d_phys = early ?
 			__pa_symbol(kasan_early_shadow_p4d) :
 			kasan_alloc_zeroed_page(node);
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index 34e2013c1b7e..7fbb2ef86cfa 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -413,7 +413,7 @@ static int alloc_init_p4d(pgd_t *pgdp, unsigned long addr, unsigned long end,
 {
 	int ret;
 	unsigned long next;
-	pgd_t pgd = READ_ONCE(*pgdp);
+	pgd_t pgd = pgdp_get(pgdp);
 	p4d_t *p4dp;

 	if (pgd_none(pgd)) {
@@ -1587,7 +1587,7 @@ static void unmap_hotplug_range(unsigned long addr, unsigned long end,
 	do {
 		next = pgd_addr_end(addr, end);
 		pgdp = pgd_offset_k(addr);
-		pgd = READ_ONCE(*pgdp);
+		pgd = pgdp_get(pgdp);
 		if (pgd_none(pgd))
 			continue;
@@ -1765,7 +1765,7 @@ static void free_empty_tables(unsigned long addr, unsigned long end,
 	do {
 		next = pgd_addr_end(addr, end);
 		pgdp = pgd_offset_k(addr);
-		pgd = READ_ONCE(*pgdp);
+		pgd = pgdp_get(pgdp);
 		if (pgd_none(pgd))
 			continue;
diff --git a/arch/arm64/mm/pageattr.c b/arch/arm64/mm/pageattr.c
index 2edfde177b6e..9d70f7c0bbae 100644
--- a/arch/arm64/mm/pageattr.c
+++ b/arch/arm64/mm/pageattr.c
@@ -399,7 +399,7 @@ bool kernel_page_present(struct page *page)
 	unsigned long addr = (unsigned long)page_address(page);

 	pgdp = pgd_offset_k(addr);
-	if (pgd_none(READ_ONCE(*pgdp)))
+	if (pgd_none(pgdp_get(pgdp)))
 		return false;

 	p4dp = p4d_offset(pgdp, addr);
diff --git a/arch/arm64/mm/trans_pgd.c b/arch/arm64/mm/trans_pgd.c
index 7afe2beca4ba..06470d690f9f 100644
--- a/arch/arm64/mm/trans_pgd.c
+++ b/arch/arm64/mm/trans_pgd.c
@@ -134,7 +134,7 @@ static int copy_p4d(struct trans_pgd_info *info, pgd_t *dst_pgdp,
 	unsigned long next;
 	unsigned long addr = start;

-	if (pgd_none(READ_ONCE(*dst_pgdp))) {
+	if (pgd_none(pgdp_get(dst_pgdp))) {
 		dst_p4dp = trans_alloc(info);
 		if (!dst_p4dp)
 			return -ENOMEM;
@@ -164,7 +164,7 @@ static int copy_page_tables(struct trans_pgd_info *info, pgd_t *dst_pgdp,
 	dst_pgdp = pgd_offset_pgd(dst_pgdp, start);
 	do {
 		next = pgd_addr_end(addr, end);
-		if (pgd_none(READ_ONCE(*src_pgdp)))
+		if (pgd_none(pgdp_get(src_pgdp)))
 			continue;
 		if (copy_p4d(info, dst_pgdp, src_pgdp, addr, next))
 			return -ENOMEM;
--
2.43.0