From: David Hildenbrand
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, xen-devel@lists.xenproject.org,
    linux-fsdevel@vger.kernel.org, nvdimm@lists.linux.dev,
    David Hildenbrand, Andrew Morton, Juergen Gross, Stefano Stabellini,
    Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
    Alexander Viro, Christian Brauner, Lorenzo Stoakes, "Liam R. Howlett",
Howlett" , Vlastimil Babka , Mike Rapoport , Suren Baghdasaryan , Michal Hocko , Zi Yan , Baolin Wang , Nico Pache , Ryan Roberts , Dev Jain , Barry Song , Jann Horn , Pedro Falcato , Hugh Dickins , Oscar Salvador , Lance Yang Subject: [PATCH v1 7/9] mm/memory: factor out common code from vm_normal_page_*() Date: Tue, 15 Jul 2025 15:23:48 +0200 Message-ID: <20250715132350.2448901-8-david@redhat.com> X-Mailer: git-send-email 2.50.1 In-Reply-To: <20250715132350.2448901-1-david@redhat.com> References: <20250715132350.2448901-1-david@redhat.com> MIME-Version: 1.0 X-Mimecast-Spam-Score: 0 X-Mimecast-MFC-PROC-ID: 0y_OtW6cfyasmus4l7bGcu7EjP55Q0st_R-OkXve58I_1752585850 X-Mimecast-Originator: redhat.com Content-Transfer-Encoding: 8bit content-type: text/plain; charset="US-ASCII"; x-default=true X-Rspamd-Queue-Id: D8508C0002 X-Stat-Signature: tn1k4c17i6yi6rfccsq39uw4d581s6uo X-Rspam-User: X-Rspamd-Server: rspam11 X-HE-Tag: 1752585852-543326 X-HE-Meta: U2FsdGVkX1/jNniqWqXgmw1xQvxnVyPdRUKJFC/g6BgfDKEloGV4I/MVJxXzhz5MZRTsWpm/VX6CxMViRJeofqYGt9gRWkf4Tc+l8okoZH8EFzK4LuqO497rLOqGuPYIbVyUecvlryOPBUQMDPvd1KN/qvXeYP9aSNlPTAkPvaOgum/XN0xioX1U8l2gTf+R2eEyXoYVhlenvRAU4cbyFH48ONxA9ott/7L6oIFIKG56eqwbqeMg9kqDQqR+pPGQCq5VgO2DIWuU7LHd45A7oDnnmKPbKqKj9pHereaXuMHSJmHNQbu0BMQVFMFujl11gPx6ZlBZWnT0T+NVmrAEgQogAytGyqzvBUUjjoRV/T5FXgnY6t6ToImg0fm+KqkW6ZwxJnRmbI5+BK6c81RoPyILHG03WL3frTGCN3LsljLBM+MrB64JgC4OQTwsVX+2QwRCihVjsUL4JEaM7YOojYsh/Ktdr05hgffzXxznml0O8wKwJyVTroZ6fGa2Uj0QZIHobzWOrLb9Za4Pm7n6CJW5WlZ+/6/MbMk7AorjGNJesoDfPnuKXF4RhnD5ojsB1Dd11t4/NndQNS2Zyv7ZmVBNeZzgYhXppXFmfH4DsXKDJrSsE+5215bx9mttobZ89GNowJL6QdpGsIvsaBAiPbymvKDzblJJFB2ksvW8ieCAH14+8+kmiGR5hb5/dQCEWkWItlmwbmxSYsBXWrRutkCi2fvdifQTgif5xzjrILd7wTl+ngRfLZupdcFXRtMkAFPrNoErKQI+R8sqFTgbQFPl2mgGJkxwc7DuGkyqpFSTuNplwsYsj4UStTlpb9QCQTh8HVnFvwhs2vFRxIzBZkhGlpBrot0mpwyj4cddZM+T8A8dcSCeuTuo/O5tf/XL4pDWLIHgqt686lLmA9JyIJMjDMueDv9gmSkGlhlbDF8ffHNKXG53RNI5e+SDptcPduozEqzyRW78fP0VAZZ 13wDKbsm cbFi8vEll+0z7LayvxUS3IPYp1Rb5eqp/lN4cHgAC3tDxxAjVoKEikP58rMazG5t/rcyXc1lzpEbU0NhR+RwInZRVfINCn82U2Hehm24Ok20tPE3CnqhzYV5rVSHxmlJ8Gn6NZ7WXPaokaLgZ5kn+hQkLHP9Vingg+5hicfwYC1GB62lqNiAD1TrFyAIunniTjEAP0HbscaV5aBY9IrDNceQtcZGjtSKt0CkIN0hHvCzukFNrHB6dUqrtAg5Go3/qfWOiqpAHiR/ucyUnOYUb/G9mcJy5SW/tmga/pDtde0Y3eeYc9OnO3akt988W7rqjQ8yfK/1WIi5JK8LcVqimwJGkncwlSY9NyWnkpBF/c2pruaRc20kqzjUgoWw2ffNMzGaNx/WAK+MoW9UEHAopjRwlVJEOQk0ObISb9PdJhyT9oSfcNakws8BwNmjMkGYSz28gi1r1Zl0/lYQ7mifv1sbygni/pGnUkyWKZSl2jAA2q7U5EEEqNvbTRJ3TP/J2IuQprn9hdS/++cCoOdRyzWrgZ4JCICQkukLMbkrkPkt6dGs9Ps0JZGkLvA== X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: List-Subscribe: List-Unsubscribe: Let's reduce the code duplication and factor out the non-pte/pmd related magic into vm_normal_page_pfn(). To keep it simpler, check the pfn against both zero folios. We could optimize this, but as it's only for the !CONFIG_ARCH_HAS_PTE_SPECIAL case, it's not a compelling micro-optimization. With CONFIG_ARCH_HAS_PTE_SPECIAL we don't have to check anything else, really. It's a good question if we can even hit the !CONFIG_ARCH_HAS_PTE_SPECIAL scenario in the PMD case in practice: but doesn't really matter, as it's now all unified in vm_normal_page_pfn(). Add kerneldoc for all involved functions. No functional change intended. 
Signed-off-by: David Hildenbrand
---
 mm/memory.c | 183 +++++++++++++++++++++++++++++++---------------------
 1 file changed, 109 insertions(+), 74 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 00ee0df020503..d5f80419989b9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -596,8 +596,13 @@ static void print_bad_page_map(struct vm_area_struct *vma,
 	add_taint(TAINT_BAD_PAGE, LOCKDEP_NOW_UNRELIABLE);
 }
 
-/*
- * vm_normal_page -- This function gets the "struct page" associated with a pte.
+/**
+ * vm_normal_page_pfn() - Get the "struct page" associated with a PFN in a
+ *			  non-special page table entry.
+ * @vma: The VMA mapping the @pfn.
+ * @addr: The address where the @pfn is mapped.
+ * @pfn: The PFN.
+ * @entry: The page table entry value for error reporting purposes.
  *
  * "Special" mappings do not wish to be associated with a "struct page" (either
  * it doesn't exist, or it exists but they don't want to touch it). In this
@@ -609,10 +614,10 @@ static void print_bad_page_map(struct vm_area_struct *vma,
  * (such as GUP) can still identify these mappings and work with the
  * underlying "struct page".
  *
- * There are 2 broad cases. Firstly, an architecture may define a pte_special()
- * pte bit, in which case this function is trivial. Secondly, an architecture
- * may not have a spare pte bit, which requires a more complicated scheme,
- * described below.
+ * There are 2 broad cases. Firstly, an architecture may define a "special"
+ * page table entry bit (e.g., pte_special()), in which case this function is
+ * trivial. Secondly, an architecture may not have a spare page table
+ * entry bit, which requires a more complicated scheme, described below.
  *
  * A raw VM_PFNMAP mapping (ie. one that is not COWed) is always considered a
  * special mapping (even if there are underlying and valid "struct pages").
@@ -645,15 +650,72 @@ static void print_bad_page_map(struct vm_area_struct *vma,
  * don't have to follow the strict linearity rule of PFNMAP mappings in
  * order to support COWable mappings.
  *
+ * This function is not expected to be called for obviously special mappings:
+ * when the page table entry has the "special" bit set.
+ *
+ * Return: Returns the "struct page" if this is a "normal" mapping. Returns
+ *	   NULL if this is a "special" mapping.
+ */
+static inline struct page *vm_normal_page_pfn(struct vm_area_struct *vma,
+		unsigned long addr, unsigned long pfn, unsigned long long entry)
+{
+	/*
+	 * With CONFIG_ARCH_HAS_PTE_SPECIAL, any special page table mappings
+	 * (incl. shared zero folios) are marked accordingly and are handled
+	 * by the caller.
+	 */
+	if (!IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
+		if (unlikely(vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))) {
+			if (vma->vm_flags & VM_MIXEDMAP) {
+				/* If it has a "struct page", it's "normal". */
+				if (!pfn_valid(pfn))
+					return NULL;
+			} else {
+				unsigned long off = (addr - vma->vm_start) >> PAGE_SHIFT;
+
+				/* Only CoW'ed anon folios are "normal". */
+				if (pfn == vma->vm_pgoff + off)
+					return NULL;
+				if (!is_cow_mapping(vma->vm_flags))
+					return NULL;
+			}
+		}
+
+		if (is_zero_pfn(pfn) || is_huge_zero_pfn(pfn))
+			return NULL;
+	}
+
+	/* Cheap check for corrupted page table entries. */
+	if (pfn > highest_memmap_pfn) {
+		print_bad_page_map(vma, addr, entry, NULL);
+		return NULL;
+	}
+	/*
+	 * NOTE! We still have PageReserved() pages in the page tables.
+	 * For example, VDSO mappings can cause them to exist.
+	 */
+	VM_WARN_ON_ONCE(is_zero_pfn(pfn) || is_huge_zero_pfn(pfn));
+	return pfn_to_page(pfn);
+}
+
+/**
+ * vm_normal_page() - Get the "struct page" associated with a PTE
+ * @vma: The VMA mapping the @pte.
+ * @addr: The address where the @pte is mapped.
+ * @pte: The PTE.
+ *
+ * Get the "struct page" associated with a PTE. See vm_normal_page_pfn()
+ * for details.
+ *
+ * Return: Returns the "struct page" if this is a "normal" mapping. Returns
+ *	   NULL if this is a "special" mapping.
  */
 struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 			    pte_t pte)
 {
 	unsigned long pfn = pte_pfn(pte);
 
-	if (IS_ENABLED(CONFIG_ARCH_HAS_PTE_SPECIAL)) {
-		if (likely(!pte_special(pte)))
-			goto check_pfn;
+	if (unlikely(pte_special(pte))) {
 		if (vma->vm_ops && vma->vm_ops->find_special_page)
 			return vma->vm_ops->find_special_page(vma, addr);
 		if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
@@ -664,44 +726,21 @@ struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
 		print_bad_page_map(vma, addr, pte_val(pte), NULL);
 		return NULL;
 	}
-
-	/* !CONFIG_ARCH_HAS_PTE_SPECIAL case follows: */
-
-	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
-		if (vma->vm_flags & VM_MIXEDMAP) {
-			if (!pfn_valid(pfn))
-				return NULL;
-			if (is_zero_pfn(pfn))
-				return NULL;
-			goto out;
-		} else {
-			unsigned long off;
-			off = (addr - vma->vm_start) >> PAGE_SHIFT;
-			if (pfn == vma->vm_pgoff + off)
-				return NULL;
-			if (!is_cow_mapping(vma->vm_flags))
-				return NULL;
-		}
-	}
-
-	if (is_zero_pfn(pfn))
-		return NULL;
-
-check_pfn:
-	if (unlikely(pfn > highest_memmap_pfn)) {
-		print_bad_page_map(vma, addr, pte_val(pte), NULL);
-		return NULL;
-	}
-
-	/*
-	 * NOTE! We still have PageReserved() pages in the page tables.
-	 * eg. VDSO mappings can cause them to exist.
-	 */
-out:
-	VM_WARN_ON_ONCE(is_zero_pfn(pfn));
-	return pfn_to_page(pfn);
+	return vm_normal_page_pfn(vma, addr, pfn, pte_val(pte));
 }
 
+/**
+ * vm_normal_folio() - Get the "struct folio" associated with a PTE
+ * @vma: The VMA mapping the @pte.
+ * @addr: The address where the @pte is mapped.
+ * @pte: The PTE.
+ *
+ * Get the "struct folio" associated with a PTE. See vm_normal_page_pfn()
+ * for details.
+ *
+ * Return: Returns the "struct folio" if this is a "normal" mapping. Returns
+ *	   NULL if this is a "special" mapping.
+ */
 struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 			      pte_t pte)
 {
@@ -713,6 +752,18 @@ struct folio *vm_normal_folio(struct vm_area_struct *vma, unsigned long addr,
 }
 
 #ifdef CONFIG_PGTABLE_HAS_HUGE_LEAVES
+/**
+ * vm_normal_page_pmd() - Get the "struct page" associated with a PMD
+ * @vma: The VMA mapping the @pmd.
+ * @addr: The address where the @pmd is mapped.
+ * @pmd: The PMD.
+ *
+ * Get the "struct page" associated with a PMD. See vm_normal_page_pfn()
+ * for details.
+ *
+ * Return: Returns the "struct page" if this is a "normal" mapping. Returns
+ *	   NULL if this is a "special" mapping.
+ */
 struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 				pmd_t pmd)
 {
@@ -727,37 +778,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
 		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
 		return NULL;
 	}
-
-	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
-		if (vma->vm_flags & VM_MIXEDMAP) {
-			if (!pfn_valid(pfn))
-				return NULL;
-			goto out;
-		} else {
-			unsigned long off;
-			off = (addr - vma->vm_start) >> PAGE_SHIFT;
-			if (pfn == vma->vm_pgoff + off)
-				return NULL;
-			if (!is_cow_mapping(vma->vm_flags))
-				return NULL;
-		}
-	}
-
-	if (is_huge_zero_pfn(pfn))
-		return NULL;
-	if (unlikely(pfn > highest_memmap_pfn)) {
-		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
-		return NULL;
-	}
-
-	/*
-	 * NOTE! We still have PageReserved() pages in the page tables.
-	 * eg. VDSO mappings can cause them to exist.
-	 */
-out:
-	return pfn_to_page(pfn);
+	return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
 }
 
+/**
+ * vm_normal_folio_pmd() - Get the "struct folio" associated with a PMD
+ * @vma: The VMA mapping the @pmd.
+ * @addr: The address where the @pmd is mapped.
+ * @pmd: The PMD.
+ *
+ * Get the "struct folio" associated with a PMD. See vm_normal_page_pfn()
+ * for details.
+ *
+ * Return: Returns the "struct folio" if this is a "normal" mapping. Returns
+ *	   NULL if this is a "special" mapping.
+ */
 struct folio *vm_normal_folio_pmd(struct vm_area_struct *vma,
 				  unsigned long addr, pmd_t pmd)
 {
-- 
2.50.1
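[Not part of the patch] Callers are unaffected and keep relying on the same
contract: a NULL return means "special mapping, nothing to operate on". For
illustration only, a hypothetical per-PTE handler (helper name made up) would
look like:

/* Illustration only: hypothetical caller of vm_normal_page(). */
static void example_handle_pte(struct vm_area_struct *vma, unsigned long addr,
			       pte_t pte)
{
	struct page *page = vm_normal_page(vma, addr, pte);

	if (!page)
		return;		/* "special" mapping: no struct page to touch */

	/* ... operate on the "normal" page/folio here ... */
}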