linux-trace-kernel.vger.kernel.org archive mirror
From: Nico Pache <npache@redhat.com>
To: David Hildenbrand <david@redhat.com>
Cc: linux-mm@kvack.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
	ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	ryan.roberts@arm.com, dev.jain@arm.com, corbet@lwn.net,
	rostedt@goodmis.org, mhiramat@kernel.org,
	mathieu.desnoyers@efficios.com, akpm@linux-foundation.org,
	baohua@kernel.org, willy@infradead.org, peterx@redhat.com,
	wangkefeng.wang@huawei.com, usamaarif642@gmail.com,
	sunnanyong@huawei.com, vishal.moola@gmail.com,
	thomas.hellstrom@linux.intel.com, yang@os.amperecomputing.com,
	kirill.shutemov@linux.intel.com, aarcange@redhat.com,
	raquini@redhat.com, anshuman.khandual@arm.com,
	catalin.marinas@arm.com, tiwai@suse.de, will@kernel.org,
	dave.hansen@linux.intel.com, jack@suse.cz, cl@gentwo.org,
	jglisse@google.com, surenb@google.com, zokeefe@google.com,
	hannes@cmpxchg.org, rientjes@google.com, mhocko@suse.com,
	rdunlap@infradead.org, hughd@google.com
Subject: Re: [PATCH v9 05/14] khugepaged: generalize __collapse_huge_page_* for mTHP support
Date: Thu, 17 Jul 2025 01:22:58 -0600
Message-ID: <CAA1CXcBuJfUs_dhzo1CM2B-nDpptAwi+bFGXPn7oxAinmRUggA@mail.gmail.com>
In-Reply-To: <d2f622f6-3ef2-4227-b672-2fbd3a7dc931@redhat.com>

On Wed, Jul 16, 2025 at 7:53 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 14.07.25 02:31, Nico Pache wrote:
> > Generalize the order of the __collapse_huge_page_* functions
> > to support future mTHP collapse.
> >
> > mTHP collapse can suffer from inconsistent behavior and memory-waste
> > "creep". Disable swapin and shared support for mTHP collapse.
> >
> > No functional changes in this patch.
> >
> > Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
> > Co-developed-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Dev Jain <dev.jain@arm.com>
> > Signed-off-by: Nico Pache <npache@redhat.com>
> > ---
> >   mm/khugepaged.c | 49 +++++++++++++++++++++++++++++++------------------
> >   1 file changed, 31 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> > index cc9a35185604..ee54e3c1db4e 100644
> > --- a/mm/khugepaged.c
> > +++ b/mm/khugepaged.c
> > @@ -552,15 +552,17 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                                       unsigned long address,
> >                                       pte_t *pte,
> >                                       struct collapse_control *cc,
> > -                                     struct list_head *compound_pagelist)
> > +                                     struct list_head *compound_pagelist,
> > +                                     u8 order)
>
> u8 ... (applies to all instances)
Fixed all instances of this (aside from the ones that need to stay as-is).
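For anyone skimming: the change being discussed is just the type of the
new "order" parameter threaded through these helpers. A minimal sketch of
the generalized prototype, assuming the type ends up as unsigned int
(the usual kernel convention for page/folio orders; not confirmed here):

    static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
                                            unsigned long address, pte_t *pte,
                                            struct collapse_control *cc,
                                            struct list_head *compound_pagelist,
                                            unsigned int order);

The same signature change applies to the other __collapse_huge_page_*
helpers touched below.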
>
> >   {
> >       struct page *page = NULL;
> >       struct folio *folio = NULL;
> >       pte_t *_pte;
> >       int none_or_zero = 0, shared = 0, result = SCAN_FAIL, referenced = 0;
> >       bool writable = false;
> > +     int scaled_none = khugepaged_max_ptes_none >> (HPAGE_PMD_ORDER - order);
>
> "scaled_max_ptes_none" maybe?
done!
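For reference, the scaling just shifts the PMD-level threshold down so
the none-PTE allowance stays proportional at every order. A standalone
sketch (plain userspace C, not kernel code; assumes the x86-64 defaults
of HPAGE_PMD_ORDER = 9 and max_ptes_none = 511):

    #include <stdio.h>

    #define HPAGE_PMD_ORDER 9
    #define MAX_PTES_NONE ((1 << HPAGE_PMD_ORDER) - 1)  /* default: 511 */

    int main(void)
    {
            for (unsigned int order = 2; order <= HPAGE_PMD_ORDER; order++) {
                    int scaled = MAX_PTES_NONE >> (HPAGE_PMD_ORDER - order);

                    /* e.g. order 4: 511 >> 5 = 15, i.e. (1 << 4) - 1 */
                    printf("order %u: up to %d of %d PTEs may be none\n",
                           order, scaled, 1 << order);
            }
            return 0;
    }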
>
> >
> > -     for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > +     for (_pte = pte; _pte < pte + (1 << order);
> >            _pte++, address += PAGE_SIZE) {
> >               pte_t pteval = ptep_get(_pte);
> >               if (pte_none(pteval) || (pte_present(pteval) &&
> > @@ -568,7 +570,7 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >                       ++none_or_zero;
> >                       if (!userfaultfd_armed(vma) &&
> >                           (!cc->is_khugepaged ||
> > -                          none_or_zero <= khugepaged_max_ptes_none)) {
> > +                          none_or_zero <= scaled_none)) {
> >                               continue;
> >                       } else {
> >                               result = SCAN_EXCEED_NONE_PTE;
> > @@ -596,8 +598,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
> >               /* See hpage_collapse_scan_pmd(). */
> >               if (folio_maybe_mapped_shared(folio)) {
> >                       ++shared;
> > -                     if (cc->is_khugepaged &&
> > -                         shared > khugepaged_max_ptes_shared) {
> > +                     if (order != HPAGE_PMD_ORDER || (cc->is_khugepaged &&
> > +                         shared > khugepaged_max_ptes_shared)) {
>
> Please add a comment explaining why we do something different for the
> PMD case. As I comment below, does this deserve a TODO?
>
> >                               result = SCAN_EXCEED_SHARED_PTE;
> >                               count_vm_event(THP_SCAN_EXCEED_SHARED_PTE);
> >                               goto out;
> > @@ -698,13 +700,14 @@ static void __collapse_huge_page_copy_succeeded(pte_t *pte,
> >                                               struct vm_area_struct *vma,
> >                                               unsigned long address,
> >                                               spinlock_t *ptl,
> > -                                             struct list_head *compound_pagelist)
> > +                                             struct list_head *compound_pagelist,
> > +                                             u8 order)
> >   {
> >       struct folio *src, *tmp;
> >       pte_t *_pte;
> >       pte_t pteval;
> >
> > -     for (_pte = pte; _pte < pte + HPAGE_PMD_NR;
> > +     for (_pte = pte; _pte < pte + (1 << order);
> >            _pte++, address += PAGE_SIZE) {
> >               pteval = ptep_get(_pte);
> >               if (pte_none(pteval) || is_zero_pfn(pte_pfn(pteval))) {
> > @@ -751,7 +754,8 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >                                            pmd_t *pmd,
> >                                            pmd_t orig_pmd,
> >                                            struct vm_area_struct *vma,
> > -                                          struct list_head *compound_pagelist)
> > +                                          struct list_head *compound_pagelist,
> > +                                          u8 order)
> >   {
> >       spinlock_t *pmd_ptl;
> >
> > @@ -768,7 +772,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >        * Release both raw and compound pages isolated
> >        * in __collapse_huge_page_isolate.
> >        */
> > -     release_pte_pages(pte, pte + HPAGE_PMD_NR, compound_pagelist);
> > +     release_pte_pages(pte, pte + (1 << order), compound_pagelist);
> >   }
> >
> >   /*
> > @@ -789,7 +793,7 @@ static void __collapse_huge_page_copy_failed(pte_t *pte,
> >   static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> >               pmd_t *pmd, pmd_t orig_pmd, struct vm_area_struct *vma,
> >               unsigned long address, spinlock_t *ptl,
> > -             struct list_head *compound_pagelist)
> > +             struct list_head *compound_pagelist, u8 order)
> >   {
> >       unsigned int i;
> >       int result = SCAN_SUCCEED;
> > @@ -797,7 +801,7 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> >       /*
> >        * Copying pages' contents is subject to memory poison at any iteration.
> >        */
> > -     for (i = 0; i < HPAGE_PMD_NR; i++) {
> > +     for (i = 0; i < (1 << order); i++) {
> >               pte_t pteval = ptep_get(pte + i);
> >               struct page *page = folio_page(folio, i);
> >               unsigned long src_addr = address + i * PAGE_SIZE;
> > @@ -816,10 +820,10 @@ static int __collapse_huge_page_copy(pte_t *pte, struct folio *folio,
> >
> >       if (likely(result == SCAN_SUCCEED))
> >               __collapse_huge_page_copy_succeeded(pte, vma, address, ptl,
> > -                                                 compound_pagelist);
> > +                                                 compound_pagelist, order);
> >       else
> >               __collapse_huge_page_copy_failed(pte, pmd, orig_pmd, vma,
> > -                                              compound_pagelist);
> > +                                              compound_pagelist, order);
> >
> >       return result;
> >   }
> > @@ -994,11 +998,11 @@ static int check_pmd_still_valid(struct mm_struct *mm,
> >   static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >                                      struct vm_area_struct *vma,
> >                                      unsigned long haddr, pmd_t *pmd,
> > -                                    int referenced)
> > +                                    int referenced, u8 order)
> >   {
> >       int swapped_in = 0;
> >       vm_fault_t ret = 0;
> > -     unsigned long address, end = haddr + (HPAGE_PMD_NR * PAGE_SIZE);
> > +     unsigned long address, end = haddr + (PAGE_SIZE << order);
> >       int result;
> >       pte_t *pte = NULL;
> >       spinlock_t *ptl;
> > @@ -1029,6 +1033,15 @@ static int __collapse_huge_page_swapin(struct mm_struct *mm,
> >               if (!is_swap_pte(vmf.orig_pte))
> >                       continue;
> >
> > +             /* Don't swapin for mTHP collapse */
>
> Should we turn this into a TODO, because it's something to figure out
> regarding the scaling etc?
Good idea! I changed both of these into TODOs.
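For the archive, a sketch of how the swapin bail-out might read once
annotated (the code is from the hunk below; only the TODO wording is
mine and may differ in the next revision):

    /*
     * TODO: mTHP collapse currently bails out instead of swapping in,
     * since we have not yet worked out how max_ptes_swap should scale
     * for non-PMD orders.
     */
    if (order != HPAGE_PMD_ORDER) {
            count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
            pte_unmap(pte);
            mmap_read_unlock(mm);
            result = SCAN_EXCEED_SWAP_PTE;
            goto out;
    }

The shared-PTE check above gets a similar TODO.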
>
> > +             if (order != HPAGE_PMD_ORDER) {
> > +                     count_mthp_stat(order, MTHP_STAT_COLLAPSE_EXCEED_SWAP);
> > +                     pte_unmap(pte);
> > +                     mmap_read_unlock(mm);
> > +                     result = SCAN_EXCEED_SWAP_PTE;
> > +                     goto out;
> > +             }
> > +
> >               vmf.pte = pte;
> >               vmf.ptl = ptl;
> >               ret = do_swap_page(&vmf);
> > @@ -1149,7 +1162,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >                * that case.  Continuing to collapse causes inconsistency.
> >                */
> >               result = __collapse_huge_page_swapin(mm, vma, address, pmd,
> > -                                                  referenced);
> > +                            referenced, HPAGE_PMD_ORDER);
>
> Indent messed up. Feel free to exceed 80 chars if it aids readability.
Fixed!
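i.e. the continuation line now aligns with the opening parenthesis,
something like:

    result = __collapse_huge_page_swapin(mm, vma, address, pmd,
                                         referenced, HPAGE_PMD_ORDER);

(Same fix for the __collapse_huge_page_isolate() call further down.)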
>
> >               if (result != SCAN_SUCCEED)
> >                       goto out_nolock;
> >       }
> > @@ -1197,7 +1210,7 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
> >       pte = pte_offset_map_lock(mm, &_pmd, address, &pte_ptl);
> >       if (pte) {
> >               result = __collapse_huge_page_isolate(vma, address, pte, cc,
> > -                                                   &compound_pagelist);
> > +                                     &compound_pagelist, HPAGE_PMD_ORDER);
>
> Dito.
Fixed!
>
>
> Apart from that, nothing jumped at me
>
> Acked-by: David Hildenbrand <david@redhat.com>
Thanks for the ack! I fixed the compile issue you noted too.
>
> --
> Cheers,
>
> David / dhildenb
>


Thread overview: 51+ messages
2025-07-14  0:31 [PATCH v9 00/14] khugepaged: mTHP support Nico Pache
2025-07-14  0:31 ` [PATCH v9 01/14] khugepaged: rename hpage_collapse_* to collapse_* Nico Pache
2025-07-15 15:39   ` David Hildenbrand
2025-07-16 14:29   ` Liam R. Howlett
2025-07-16 15:20     ` David Hildenbrand
2025-07-17  7:21     ` Nico Pache
2025-07-25 16:43   ` Lorenzo Stoakes
2025-07-25 22:35     ` Nico Pache
2025-07-14  0:31 ` [PATCH v9 02/14] introduce collapse_single_pmd to unify khugepaged and madvise_collapse Nico Pache
2025-07-15 15:53   ` David Hildenbrand
2025-07-23  1:56     ` Nico Pache
2025-07-16 15:12   ` Liam R. Howlett
2025-07-23  1:55     ` Nico Pache
2025-07-14  0:31 ` [PATCH v9 03/14] khugepaged: generalize hugepage_vma_revalidate for mTHP support Nico Pache
2025-07-15 15:55   ` David Hildenbrand
2025-07-14  0:31 ` [PATCH v9 04/14] khugepaged: generalize alloc_charge_folio() Nico Pache
2025-07-16 13:46   ` David Hildenbrand
2025-07-17  7:22     ` Nico Pache
2025-07-14  0:31 ` [PATCH v9 05/14] khugepaged: generalize __collapse_huge_page_* for mTHP support Nico Pache
2025-07-16 13:52   ` David Hildenbrand
2025-07-17  7:22     ` Nico Pache [this message]
2025-07-16 14:02   ` David Hildenbrand
2025-07-17  7:23     ` Nico Pache
2025-07-17 15:54     ` Lorenzo Stoakes
2025-07-25 16:09   ` Lorenzo Stoakes
2025-07-25 22:37     ` Nico Pache
2025-07-14  0:31 ` [PATCH v9 06/14] khugepaged: introduce collapse_scan_bitmap " Nico Pache
2025-07-16 14:03   ` David Hildenbrand
2025-07-17  7:23     ` Nico Pache
2025-07-16 15:38   ` Liam R. Howlett
2025-07-17  7:24     ` Nico Pache
2025-07-14  0:32 ` [PATCH v9 07/14] khugepaged: add " Nico Pache
2025-07-14  0:32 ` [PATCH v9 08/14] khugepaged: skip collapsing mTHP to smaller orders Nico Pache
2025-07-16 14:32   ` David Hildenbrand
2025-07-17  7:24     ` Nico Pache
2025-07-14  0:32 ` [PATCH v9 09/14] khugepaged: avoid unnecessary mTHP collapse attempts Nico Pache
2025-07-18  2:14   ` Baolin Wang
2025-07-18 22:34     ` Nico Pache
2025-07-14  0:32 ` [PATCH v9 10/14] khugepaged: allow khugepaged to check all anonymous mTHP orders Nico Pache
2025-07-16 15:28   ` David Hildenbrand
2025-07-17  7:25     ` Nico Pache
2025-07-18  8:40       ` David Hildenbrand
2025-07-14  0:32 ` [PATCH v9 11/14] khugepaged: kick khugepaged for enabling none-PMD-sized mTHPs Nico Pache
2025-07-14  0:32 ` [PATCH v9 12/14] khugepaged: improve tracepoints for mTHP orders Nico Pache
2025-07-22 15:39   ` David Hildenbrand
2025-07-14  0:32 ` [PATCH v9 13/14] khugepaged: add per-order mTHP khugepaged stats Nico Pache
2025-07-18  5:04   ` Baolin Wang
2025-07-18 21:00     ` Nico Pache
2025-07-19  4:42       ` Baolin Wang
2025-07-14  0:32 ` [PATCH v9 14/14] Documentation: mm: update the admin guide for mTHP collapse Nico Pache
2025-07-15  0:39 ` [PATCH v9 00/14] khugepaged: mTHP support Andrew Morton
