From: David Hildenbrand <david@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
Gavin Shan <gshan@redhat.com>,
Catalin Marinas <catalin.marinas@arm.com>,
x86@kernel.org, Ingo Molnar <mingo@redhat.com>,
Andrew Morton <akpm@linux-foundation.org>,
Paolo Bonzini <pbonzini@redhat.com>,
Dave Hansen <dave.hansen@linux.intel.com>,
Thomas Gleixner <tglx@linutronix.de>,
Alistair Popple <apopple@nvidia.com>,
kvm@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
Sean Christopherson <seanjc@google.com>,
Oscar Salvador <osalvador@suse.de>,
Jason Gunthorpe <jgg@nvidia.com>, Borislav Petkov <bp@alien8.de>,
Zi Yan <ziy@nvidia.com>,
Axel Rasmussen <axelrasmussen@google.com>,
Yan Zhao <yan.y.zhao@intel.com>, Will Deacon <will@kernel.org>,
Kefeng Wang <wangkefeng.wang@huawei.com>,
Alex Williamson <alex.williamson@redhat.com>
Subject: Re: [PATCH v2 06/19] mm/pagewalk: Check pfnmap for folio_walk_start()
Date: Wed, 28 Aug 2024 17:30:43 +0200
Message-ID: <c1d8220c-e292-48af-bbab-21f4bb9c7dc5@redhat.com>
In-Reply-To: <Zs8zBT1aDh1v9Eje@x1n>
> This one is correct; I overlooked this comment which can be obsolete. I
> can either refine this patch or add one patch on top to refine the comment
> at least.
Probably best if you use what you consider reasonable in your patch.
>
>> + if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
>
> We don't yet have CONFIG_ARCH_HAS_PMD_SPECIAL, but I get your point.
>
>> + if (likely(!pmd_special(pmd)))
>> + goto check_pfn;
>> + if (vma->vm_ops && vma->vm_ops->find_special_page)
>> + return vma->vm_ops->find_special_page(vma, addr);
>
> Why do we ever need this? This is so far destined to be totally a waste of
> cycles. I think it's better we leave that until either xen/gntdev.c or any
> new driver start to use it, rather than keeping dead code around.
I just copy-pasted what we had in vm_normal_page() to showcase. If it's not
required, good; we can add a comment explaining why it is not required.
>
>> + if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
>> + return NULL;
>> + if (is_huge_zero_pmd(pmd))
>> + return NULL;
>
> This is meaningless too until we make huge zero pmd apply special bit
> first, which does sound like to be outside the scope of this series.
Again, copy-paste, but ...
>
>> + if (pmd_devmap(pmd))
>> + /* See vm_normal_page() */
>> + return NULL;
>
> When will it be pmd_devmap() if it's already pmd_special()?
>
>> + return NULL;
>
> And see this one.. it's after:
>
> if (xxx)
> return NULL;
> if (yyy)
> return NULL;
> if (zzz)
> return NULL;
> return NULL;
>
> Hmm?? If so, what's the difference if we simply check pmd_special and
> return NULL..
Yes, they all return NULL. The compiler likely optimizes it all out.
Maybe we have it like that for pure documentation purposes. But yeah, we
should simply return NULL and think about cleaning up vm_normal_page()
as well; it does look strange.
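Roughly like this, as a sketch of the collapsed branch (untested, and
the config name is still made up at this point):

	if (IS_ENABLED(CONFIG_ARCH_HAS_PMD_SPECIAL)) {
		/*
		 * Special PMDs (pfnmaps, huge zeropage, ...) never
		 * reference a refcounted page.
		 */
		if (unlikely(pmd_special(pmd)))
			return NULL;
		goto check_pfn;
	}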
>
>> + }
>> +
>> + /* !CONFIG_ARCH_HAS_PMD_SPECIAL case follows: */
>> +
>> if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>> if (vma->vm_flags & VM_MIXEDMAP) {
>> if (!pfn_valid(pfn))
>> return NULL;
>> + if (is_huge_zero_pmd(pmd))
>> + return NULL;
>
> I'd rather not touch here as this series doesn't change anything for
> MIXEDMAP yet..
Yes, that can be a separate change.
>
>> goto out;
>> } else {
>> unsigned long off;
>> @@ -692,6 +706,11 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>> }
>> }
>> + /*
>> + * For historical reasons, these might not have pmd_special() set,
>> + * so we'll check them manually, in contrast to vm_normal_page().
>> + */
>> +check_pfn:
>> if (pmd_devmap(pmd))
>> return NULL;
>> if (is_huge_zero_pmd(pmd))
>>
>>
>>
>> We should then look into mapping huge zeropages also with pmd_special.
>> pmd_devmap we'll leave alone until removed. But that's independent of your series.
>
> This does look reasonable to match what we do with pte zeropage. Could you
> remind me what might be the benefit when we switch to using special bit for
> pmd zero pages?
See below. It's the way to tell the VM that a page is special, so you
can avoid a separate check at relevant places, like GUP-fast or in
vm_normal_*.
>
>>
>> I wonder if CONFIG_ARCH_HAS_PTE_SPECIAL is sufficient and we don't need additional
>> CONFIG_ARCH_HAS_PMD_SPECIAL.
>
> The hope is we can always reuse the bit in the pte to work the same for
> pmd/pud.
>
> Now we require arch to select ARCH_SUPPORTS_HUGE_PFNMAP to say "pmd/pud has
> the same special bit defined".
Note that pte_special() is the way to signal to the VM that a PTE
does not reference a refcounted page, or is similarly special and shall
mostly be ignored. It doesn't imply that it is a PFNMAP pte, not at all.
The shared zeropage is usually not refcounted (except during GUP
FOLL_GET ... but not FOLL_PIN) and the huge zeropage is usually also not
refcounted (but FOLL_PIN still does it). Both are special.
If you take a look at the history of pte_special(), it was introduced for
VM_MIXEDMAP handling on s390x, because using pfn_valid() to identify
"special" pages did not work:
commit 7e675137a8e1a4d45822746456dd389b65745bf6
Author: Nick Piggin <npiggin@suse.de>
Date: Mon Apr 28 02:13:00 2008 -0700
mm: introduce pte_special pte bit
In the meantime, it's required for architectures that want to support
GUP-fast, I think: to make GUP-fast bail out and fall back to the slow
path where we do a vm_normal_page() -- or fail right at the VMA check
for now (VM_PFNMAP).
An architecture that doesn't implement pte_special() can support pfnmaps
but not GUP-fast. Similarly, an architecture that doesn't implement
pmd_special() can support huge pfnmaps, but not GUP-fast.
If you take a closer look, really the only two code paths that look at
pte_special() are GUP-fast and vm_normal_page().
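For reference, the fast path does roughly the following (simplified
from gup_pte_range() in mm/gup.c; the exact shape differs between
kernel versions):

	pte_t pte = ptep_get_lockless(ptep);
	...
	if (pte_special(pte))
		/* Bail out; GUP-slow will do vm_normal_page(). */
		return 0;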
If we use pmd_special/pud_special in code other than that, we are
diverging from the pte_special() model and are likely doing something
wrong.
I see how you arrived at the current approach, focusing exclusively on
x86. But I think this just adds inconsistency.
So my point is that we use the same model, where we limit
* pmd_special() to GUP-fast and vm_normal_page_pmd()
* pud_special() to GUP-fast and vm_normal_page_pud()
And simply do the exact same thing as we do for pte_special().
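A vm_normal_page_pud() would then simply mirror the pmd variant. As a
purely hypothetical sketch (nothing like this exists yet; names and
structure are my assumption):

	struct page *vm_normal_page_pud(struct vm_area_struct *vma,
					unsigned long addr, pud_t pud)
	{
		unsigned long pfn = pud_pfn(pud);

		if (unlikely(pud_special(pud)))
			return NULL;
		if (unlikely(pfn > highest_memmap_pfn))
			return NULL;
		return pfn_to_page(pfn);
	}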
If an arch supports pmd_special() and pud_special(), we can support both
types of huge pfn mappings. If not, an architecture *might* support it,
depending on support for GUP-fast and maybe depending on MIXEDMAP
support (again, just like pte_special()). That's not your task to worry
about; you will only "unlock" x86.
So maybe we do want CONFIG_ARCH_HAS_PMD_SPECIAL as well; maybe it can be
glued to CONFIG_ARCH_HAS_PTE_SPECIAL (but I'm afraid it can't, unless all
archs support both). I'll leave that up to you.
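For example, generic code could provide a stub when the arch doesn't
select it -- again just a sketch, the config name follows the pattern
above but doesn't exist yet:

	#ifdef CONFIG_ARCH_HAS_PMD_SPECIAL
	/* pmd_special() is provided by the architecture. */
	#else
	static inline bool pmd_special(pmd_t pmd)
	{
		return false;
	}
	#endif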
>
>>
>> As I said, if you need someone to add vm_normal_page_pud(), I can handle that.
>
> I'm pretty confused why we need that for this series alone.
See above.
>
> If you prefer vm_normal_page_pud() to be defined and check pud_special()
> there, I can do that. But again, I don't yet see how that can make a
> functional difference considering the so far very limited usage of the
> special bit, and wonder whether we can do that on top when it became
> necessary (and when we start to have functional requirement of such).
I hope my explanation of why pte_special() even exists and how it is
used makes it clearer.
It's not that much code to handle it like pte_special(), really. I don't
expect you to teach GUP-slow about vm_normal_page() etc.
If you want me to just take over some stuff, let me know.
--
Cheers,
David / dhildenb