Linux-mm Archive on lore.kernel.org
* Re: [PATCH v2 1/9] mm/huge_memory: move more common code into insert_pmd()
       [not found] ` <20250717115212.1825089-2-david@redhat.com>
@ 2025-07-25  2:47   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-25  2:47 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang, Alistair Popple

On Thu, Jul 17, 2025 at 01:52:04PM +0200, David Hildenbrand wrote:
>Let's clean it all further up.
>
>No functional change intended.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Reviewed-by: Alistair Popple <apopple@nvidia.com>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 2/9] mm/huge_memory: move more common code into insert_pud()
       [not found] ` <20250717115212.1825089-3-david@redhat.com>
@ 2025-07-25  2:56   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-25  2:56 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang, Alistair Popple

On Thu, Jul 17, 2025 at 01:52:05PM +0200, David Hildenbrand wrote:
>Let's clean it all further up.
>
>No functional change intended.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Reviewed-by: Alistair Popple <apopple@nvidia.com>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 3/9] mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd()
       [not found] ` <20250717115212.1825089-4-david@redhat.com>
@ 2025-07-25  8:07   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-25  8:07 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang

On Thu, Jul 17, 2025 at 01:52:06PM +0200, David Hildenbrand wrote:
>Just like we do for vmf_insert_page_mkwrite() -> ... ->
>insert_page_into_pte_locked() with the shared zeropage, support the
>huge zero folio in vmf_insert_folio_pmd().
>
>When (un)mapping the huge zero folio in page tables, we neither
>adjust the refcount nor the mapcount, just like for the shared zeropage.
>
>For now, the huge zero folio is not marked as special yet, although
>vm_normal_page_pmd() really wants to treat it as special. We'll change
>that next.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Signed-off-by: David Hildenbrand <david@redhat.com>
>---
> mm/huge_memory.c | 8 +++++---
> 1 file changed, 5 insertions(+), 3 deletions(-)
>
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 849feacaf8064..db08c37b87077 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -1429,9 +1429,11 @@ static vm_fault_t insert_pmd(struct vm_area_struct *vma, unsigned long addr,
> 	if (fop.is_folio) {
> 		entry = folio_mk_pmd(fop.folio, vma->vm_page_prot);
> 
>-		folio_get(fop.folio);
>-		folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
>-		add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
>+		if (!is_huge_zero_folio(fop.folio)) {
>+			folio_get(fop.folio);
>+			folio_add_file_rmap_pmd(fop.folio, &fop.folio->page, vma);
>+			add_mm_counter(mm, mm_counter_file(fop.folio), HPAGE_PMD_NR);
>+		}

I think this is reasonable.

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

> 	} else {
> 		entry = pmd_mkhuge(pfn_pmd(fop.pfn, prot));
> 		entry = pmd_mkspecial(entry);
>-- 
>2.50.1
>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 5/9] mm/huge_memory: mark PMD mappings of the huge zero folio special
       [not found] ` <20250717115212.1825089-6-david@redhat.com>
@ 2025-07-28  8:49   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-28  8:49 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang

On Thu, Jul 17, 2025 at 01:52:08PM +0200, David Hildenbrand wrote:
>The huge zero folio is refcounted (+mapcounted -- is that a word?)
>differently than "normal" folios, similarly (but different) to the ordinary
>shared zeropage.
>
>For this reason, we special-case these pages in
>vm_normal_page*/vm_normal_folio*, and only allow selected callers to
>still use them (e.g., GUP can still take a reference on them).
>
>vm_normal_page_pmd() already filters out the huge zero folio. However,
>so far we are not marking it as special like we do with the ordinary
>shared zeropage. Let's mark it as special, so we can further refactor
>vm_normal_page_pmd() and vm_normal_page().
>
>While at it, update the doc regarding the shared zero folios.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 8/9] mm: introduce and use vm_normal_page_pud()
       [not found] ` <20250717115212.1825089-9-david@redhat.com>
@ 2025-07-29  7:52   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-29  7:52 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang

On Thu, Jul 17, 2025 at 01:52:11PM +0200, David Hildenbrand wrote:
>Let's introduce vm_normal_page_pud(), which ends up being fairly simple
>because of our new common helpers and there not being a PUD-sized zero
>folio.
>
>Use vm_normal_page_pud() in folio_walk_start() to resolve a TODO,
>structuring the code like the other (pmd/pte) cases. Defer
>introducing vm_normal_folio_pud() until really used.
>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 9/9] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page()
       [not found] ` <20250717115212.1825089-10-david@redhat.com>
@ 2025-07-29  7:53   ` Wei Yang
  0 siblings, 0 replies; 8+ messages in thread
From: Wei Yang @ 2025-07-29  7:53 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Lorenzo Stoakes,
	Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
	Suren Baghdasaryan, Michal Hocko, Zi Yan, Baolin Wang, Nico Pache,
	Ryan Roberts, Dev Jain, Barry Song, Jann Horn, Pedro Falcato,
	Hugh Dickins, Oscar Salvador, Lance Yang, David Vrabel

On Thu, Jul 17, 2025 at 01:52:12PM +0200, David Hildenbrand wrote:
>... and hide it behind a kconfig option. There is really no need for
>any !xen code to perform this check.
>
>The naming is a bit off: we want to find the "normal" page when a PTE
>was marked "special". So it's really not "finding a special" page.
>
>Improve the documentation, and add a comment in the code where XEN ends
>up performing the pte_mkspecial() through a hypercall. More details can
>be found in commit 923b2919e2c3 ("xen/gntdev: mark userspace PTEs as
>special on x86 PV guests").
>
>Cc: David Vrabel <david.vrabel@citrix.com>
>Reviewed-by: Oscar Salvador <osalvador@suse.de>
>Signed-off-by: David Hildenbrand <david@redhat.com>

Reviewed-by: Wei Yang <richard.weiyang@gmail.com>

-- 
Wei Yang
Help you, Help me



* Re: [PATCH v2 7/9] mm/memory: factor out common code from vm_normal_page_*()
       [not found]         ` <eab1eb16-b99b-4d6b-9539-545d62ed1d5d@lucifer.local>
@ 2025-07-30 12:54           ` David Hildenbrand
  2025-07-30 13:24             ` Lorenzo Stoakes
  0 siblings, 1 reply; 8+ messages in thread
From: David Hildenbrand @ 2025-07-30 12:54 UTC (permalink / raw)
  To: Lorenzo Stoakes
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Jann Horn, Pedro Falcato, Hugh Dickins,
	Oscar Salvador, Lance Yang

On 18.07.25 14:43, Lorenzo Stoakes wrote:
> On Thu, Jul 17, 2025 at 10:03:44PM +0200, David Hildenbrand wrote:
>> On 17.07.25 21:55, Lorenzo Stoakes wrote:
>>> On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
>>>>> @@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
>>>>>    		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>>>>    		return NULL;
>>>>>    	}
>>>>> -
>>>>> -	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
>>>>> -		if (vma->vm_flags & VM_MIXEDMAP) {
>>>>> -			if (!pfn_valid(pfn))
>>>>> -				return NULL;
>>>>> -			goto out;
>>>>> -		} else {
>>>>> -			unsigned long off;
>>>>> -			off = (addr - vma->vm_start) >> PAGE_SHIFT;
>>>>> -			if (pfn == vma->vm_pgoff + off)
>>>>> -				return NULL;
>>>>> -			if (!is_cow_mapping(vma->vm_flags))
>>>>> -				return NULL;
>>>>> -		}
>>>>> -	}
>>>>> -
>>>>> -	if (is_huge_zero_pfn(pfn))
>>>>> -		return NULL;
>>>>> -	if (unlikely(pfn > highest_memmap_pfn)) {
>>>>> -		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
>>>>> -		return NULL;
>>>>> -	}
>>>>> -
>>>>> -	/*
>>>>> -	 * NOTE! We still have PageReserved() pages in the page tables.
>>>>> -	 * eg. VDSO mappings can cause them to exist.
>>>>> -	 */
>>>>> -out:
>>>>> -	return pfn_to_page(pfn);
>>>>> +	return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
>>>>
>>>> Hmm this seems broken, because you're now making these special on arches with
>>>> pte_special() right? But then you're invoking the not-special function?
>>>>
>>>> Also for non-pte_special() arches you're kind of implying they _maybe_ could be
>>>> special.
>>>
>>> OK sorry the diff caught me out here, you explicitly handle the pmd_special()
>>> case here, duplicatively (yuck).
>>>
>>> Maybe you fix this up in a later patch :)
>>
>> I had that, but the conditions depend on the level, meaning: unnecessary
>> checks for pte/pmd/pud level.
>>
>> I had a variant where I would pass "bool special" into vm_normal_page_pfn(),
>> but I didn't like it.
>>
>> To optimize out, I would have to provide a "level" argument, and did not
>> convince myself yet that that is a good idea at this point.
> 
> Yeah fair enough. That probably isn't worth it or might end up making things
> even more ugly.

So, I decided to add the level arguments, but not use them to optimize the checks,
only to forward the level to the new print_bad_pte().

So the new helper will be

/**
   * __vm_normal_page() - Get the "struct page" associated with a page table entry.
   * @vma: The VMA mapping the page table entry.
   * @addr: The address where the page table entry is mapped.
   * @pfn: The PFN stored in the page table entry.
   * @special: Whether the page table entry is marked "special".
   * @level: The page table level for error reporting purposes only.
   * @entry: The page table entry value for error reporting purposes only.
...
   */
static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
                unsigned long addr, unsigned long pfn, bool special,
                unsigned long long entry, enum pgtable_level level)
...


And vm_normal_page() will for example be

/**
  * vm_normal_page() - Get the "struct page" associated with a PTE
  * @vma: The VMA mapping the @pte.
  * @addr: The address where the @pte is mapped.
  * @pte: The PTE.
  *
  * Get the "struct page" associated with a PTE. See __vm_normal_page()
  * for details on "normal" and "special" mappings.
  *
  * Return: Returns the "struct page" if this is a "normal" mapping. Returns
  *        NULL if this is a "special" mapping.
  */
struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
                             pte_t pte)
{
        return __vm_normal_page(vma, addr, pte_pfn(pte), pte_special(pte),
                                pte_val(pte), PGTABLE_LEVEL_PTE);
}



-- 
Cheers,

David / dhildenb




* Re: [PATCH v2 7/9] mm/memory: factor out common code from vm_normal_page_*()
  2025-07-30 12:54           ` [PATCH v2 7/9] mm/memory: factor out common code from vm_normal_page_*() David Hildenbrand
@ 2025-07-30 13:24             ` Lorenzo Stoakes
  0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Stoakes @ 2025-07-30 13:24 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: linux-kernel, linux-mm, xen-devel, linux-fsdevel, nvdimm,
	Andrew Morton, Juergen Gross, Stefano Stabellini,
	Oleksandr Tyshchenko, Dan Williams, Matthew Wilcox, Jan Kara,
	Alexander Viro, Christian Brauner, Liam R. Howlett,
	Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
	Zi Yan, Baolin Wang, Nico Pache, Ryan Roberts, Dev Jain,
	Barry Song, Jann Horn, Pedro Falcato, Hugh Dickins,
	Oscar Salvador, Lance Yang

On Wed, Jul 30, 2025 at 02:54:46PM +0200, David Hildenbrand wrote:
> On 18.07.25 14:43, Lorenzo Stoakes wrote:
> > On Thu, Jul 17, 2025 at 10:03:44PM +0200, David Hildenbrand wrote:
> > > On 17.07.25 21:55, Lorenzo Stoakes wrote:
> > > > On Thu, Jul 17, 2025 at 08:51:51PM +0100, Lorenzo Stoakes wrote:
> > > > > > @@ -721,37 +772,21 @@ struct page *vm_normal_page_pmd(struct vm_area_struct *vma, unsigned long addr,
> > > > > >    		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
> > > > > >    		return NULL;
> > > > > >    	}
> > > > > > -
> > > > > > -	if (unlikely(vma->vm_flags & (VM_PFNMAP|VM_MIXEDMAP))) {
> > > > > > -		if (vma->vm_flags & VM_MIXEDMAP) {
> > > > > > -			if (!pfn_valid(pfn))
> > > > > > -				return NULL;
> > > > > > -			goto out;
> > > > > > -		} else {
> > > > > > -			unsigned long off;
> > > > > > -			off = (addr - vma->vm_start) >> PAGE_SHIFT;
> > > > > > -			if (pfn == vma->vm_pgoff + off)
> > > > > > -				return NULL;
> > > > > > -			if (!is_cow_mapping(vma->vm_flags))
> > > > > > -				return NULL;
> > > > > > -		}
> > > > > > -	}
> > > > > > -
> > > > > > -	if (is_huge_zero_pfn(pfn))
> > > > > > -		return NULL;
> > > > > > -	if (unlikely(pfn > highest_memmap_pfn)) {
> > > > > > -		print_bad_page_map(vma, addr, pmd_val(pmd), NULL);
> > > > > > -		return NULL;
> > > > > > -	}
> > > > > > -
> > > > > > -	/*
> > > > > > -	 * NOTE! We still have PageReserved() pages in the page tables.
> > > > > > -	 * eg. VDSO mappings can cause them to exist.
> > > > > > -	 */
> > > > > > -out:
> > > > > > -	return pfn_to_page(pfn);
> > > > > > +	return vm_normal_page_pfn(vma, addr, pfn, pmd_val(pmd));
> > > > >
> > > > > Hmm this seems broken, because you're now making these special on arches with
> > > > > pte_special() right? But then you're invoking the not-special function?
> > > > >
> > > > > Also for non-pte_special() arches you're kind of implying they _maybe_ could be
> > > > > special.
> > > >
> > > > OK sorry the diff caught me out here, you explicitly handle the pmd_special()
> > > > case here, duplicatively (yuck).
> > > >
> > > > Maybe you fix this up in a later patch :)
> > >
> > > I had that, but the conditions depend on the level, meaning: unnecessary
> > > checks for pte/pmd/pud level.
> > >
> > > I had a variant where I would pass "bool special" into vm_normal_page_pfn(),
> > > but I didn't like it.
> > >
> > > To optimize out, I would have to provide a "level" argument, and did not
> > > convince myself yet that that is a good idea at this point.
> >
> > Yeah fair enough. That probably isn't worth it or might end up making things
> > even more ugly.
>
> So, I decided to add the level arguments, but not use them to optimize the checks,
> only to forward the level to the new print_bad_pte().
>
> So the new helper will be
>
> /**
>   * __vm_normal_page() - Get the "struct page" associated with a page table entry.
>   * @vma: The VMA mapping the page table entry.
>   * @addr: The address where the page table entry is mapped.
>   * @pfn: The PFN stored in the page table entry.
>   * @special: Whether the page table entry is marked "special".
>   * @level: The page table level for error reporting purposes only.
>   * @entry: The page table entry value for error reporting purposes only.
> ...
>   */
> static inline struct page *__vm_normal_page(struct vm_area_struct *vma,
>                unsigned long addr, unsigned long pfn, bool special,
>                unsigned long long entry, enum pgtable_level level)
> ...
>
>
> And vm_normal_page() will for example be
>
> /**
>  * vm_normal_page() - Get the "struct page" associated with a PTE
>  * @vma: The VMA mapping the @pte.
>  * @addr: The address where the @pte is mapped.
>  * @pte: The PTE.
>  *
>  * Get the "struct page" associated with a PTE. See __vm_normal_page()
>  * for details on "normal" and "special" mappings.
>  *
>  * Return: Returns the "struct page" if this is a "normal" mapping. Returns
>  *        NULL if this is a "special" mapping.
>  */
> struct page *vm_normal_page(struct vm_area_struct *vma, unsigned long addr,
>                             pte_t pte)
> {
>        return __vm_normal_page(vma, addr, pte_pfn(pte), pte_special(pte),
>                                pte_val(pte), PGTABLE_LEVEL_PTE);
> }
>

OK that could work out well actually, cool thank you!



end of thread (newest: 2025-07-30 13:25 UTC)

Thread overview: 8+ messages
     [not found] <20250717115212.1825089-1-david@redhat.com>
     [not found] ` <20250717115212.1825089-2-david@redhat.com>
2025-07-25  2:47   ` [PATCH v2 1/9] mm/huge_memory: move more common code into insert_pmd() Wei Yang
     [not found] ` <20250717115212.1825089-3-david@redhat.com>
2025-07-25  2:56   ` [PATCH v2 2/9] mm/huge_memory: move more common code into insert_pud() Wei Yang
     [not found] ` <20250717115212.1825089-4-david@redhat.com>
2025-07-25  8:07   ` [PATCH v2 3/9] mm/huge_memory: support huge zero folio in vmf_insert_folio_pmd() Wei Yang
     [not found] ` <20250717115212.1825089-6-david@redhat.com>
2025-07-28  8:49   ` [PATCH v2 5/9] mm/huge_memory: mark PMD mappings of the huge zero folio special Wei Yang
     [not found] ` <20250717115212.1825089-9-david@redhat.com>
2025-07-29  7:52   ` [PATCH v2 8/9] mm: introduce and use vm_normal_page_pud() Wei Yang
     [not found] ` <20250717115212.1825089-10-david@redhat.com>
2025-07-29  7:53   ` [PATCH v2 9/9] mm: rename vm_ops->find_special_page() to vm_ops->find_normal_page() Wei Yang
     [not found] ` <20250717115212.1825089-8-david@redhat.com>
     [not found]   ` <1aef6483-18e6-463b-a197-34dd32dd6fbd@lucifer.local>
     [not found]     ` <50190a14-78fb-4a4a-82fa-d7b887aa4754@lucifer.local>
     [not found]       ` <b7457b96-2b78-4202-8380-4c7cd70767b9@redhat.com>
     [not found]         ` <eab1eb16-b99b-4d6b-9539-545d62ed1d5d@lucifer.local>
2025-07-30 12:54           ` [PATCH v2 7/9] mm/memory: factor out common code from vm_normal_page_*() David Hildenbrand
2025-07-30 13:24             ` Lorenzo Stoakes
