From: David Hildenbrand <david@redhat.com>
To: Baolin Wang <baolin.wang@linux.alibaba.com>,
	Dev Jain <dev.jain@arm.com>,
	akpm@linux-foundation.org, hughd@google.com
Cc: willy@infradead.org, 21cnbao@gmail.com, ryan.roberts@arm.com,
	ziy@nvidia.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/2] mm: mincore: use folio_pte_batch() to batch process large folios
Date: Wed, 7 May 2025 11:54:56 +0200	[thread overview]
Message-ID: <fc883e4c-41cb-4f05-a5ef-3b756c689da3@redhat.com> (raw)
In-Reply-To: <6a8418ba-dbd1-489f-929b-e31831bea0cf@linux.alibaba.com>

On 07.05.25 11:48, Baolin Wang wrote:
> 
> 
> On 2025/5/7 13:12, Dev Jain wrote:
>>
>>
>> On 26/03/25 9:08 am, Baolin Wang wrote:
>>> When I tested the mincore() syscall, I observed that it takes longer with
>>> 64K mTHP enabled on my Arm64 server. The reason is that mincore_pte_range()
>>> still checks each PTE individually, even when the PTEs are contiguous,
>>> which is not efficient.
>>>
>>> Thus we can use folio_pte_batch() to get the number of contiguous present
>>> PTEs in one batch, which can improve performance. I tested the mincore()
>>> syscall with 1G of anonymous memory populated with 64K mTHP, and observed
>>> an obvious performance improvement:
>>>
>>> w/o patch        w/ patch        changes
>>> 6022us            1115us            +81%
>>>
>>> Moreover, I also tested mincore() with mTHP/THP disabled, and did not
>>> see any obvious regression.
>>>
>>> Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
>>> ---
>>>    mm/mincore.c | 27 ++++++++++++++++++++++-----
>>>    1 file changed, 22 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/mm/mincore.c b/mm/mincore.c
>>> index 832f29f46767..88be180b5550 100644
>>> --- a/mm/mincore.c
>>> +++ b/mm/mincore.c
>>> @@ -21,6 +21,7 @@
>>>    #include <linux/uaccess.h>
>>>    #include "swap.h"
>>> +#include "internal.h"
>>>    static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>>>                unsigned long end, struct mm_walk *walk)
>>> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>>        pte_t *ptep;
>>>        unsigned char *vec = walk->private;
>>>        int nr = (end - addr) >> PAGE_SHIFT;
>>> +    int step, i;
>>>        ptl = pmd_trans_huge_lock(pmd, vma);
>>>        if (ptl) {
>>> @@ -118,16 +120,31 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>>>            walk->action = ACTION_AGAIN;
>>>            return 0;
>>>        }
>>> -    for (; addr != end; ptep++, addr += PAGE_SIZE) {
>>> +    for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>>>            pte_t pte = ptep_get(ptep);
>>> +        step = 1;
>>>            /* We need to do cache lookup too for pte markers */
>>>            if (pte_none_mostly(pte))
>>>                __mincore_unmapped_range(addr, addr + PAGE_SIZE,
>>>                             vma, vec);
>>> -        else if (pte_present(pte))
>>> -            *vec = 1;
>>> -        else { /* pte is a swap entry */
>>> +        else if (pte_present(pte)) {
>>> +            if (pte_batch_hint(ptep, pte) > 1) {
>>> +                struct folio *folio = vm_normal_folio(vma, addr, pte);
>>> +
>>> +                if (folio && folio_test_large(folio)) {
>>> +                    const fpb_t fpb_flags = FPB_IGNORE_DIRTY |
>>> +                                FPB_IGNORE_SOFT_DIRTY;
>>> +                    int max_nr = (end - addr) / PAGE_SIZE;
>>> +
>>> +                    step = folio_pte_batch(folio, addr, ptep, pte,
>>> +                            max_nr, fpb_flags, NULL, NULL, NULL);
>>> +                }
>>> +            }
>>
>> Can we go ahead with this along with [1]? That will help us generalize
>> across all arches.
>>
>> [1] https://lore.kernel.org/all/20250506050056.59250-3-dev.jain@arm.com/
>> (Please replace PAGE_SIZE with 1)
> 
> As discussed with Ryan, we don't need to call folio_pte_batch() at all;
> something like the code below is enough, so your patch seems unnecessarily
> complicated. However, David is unhappy about the open-coded
> pte_batch_hint().

I can live with the below :)

Having something more universal probably does not make sense here. Any form 
of batching contiguous PTEs (contiguous PFNs) -- whether with folios or 
not -- is not required here, as we really only want to

(a) Identify pte_present() PTEs
(b) Avoid the cost of repeated ptep_get() with cont-pte.
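
For reference, the generic fallback and the arm64 contpte override of
pte_batch_hint() look roughly like the sketch below (simplified from
include/linux/pgtable.h and arch/arm64/include/asm/pgtable.h; treat the
details as approximate rather than the exact code of any given tree):

/* Generic fallback: no batching hint, the caller processes one PTE. */
static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
	return 1;
}

/*
 * arm64 with contpte (simplified): for a contiguous-bit mapping, report
 * how many entries remain in the current cont block, so the caller can
 * skip the repeated ptep_get() for those entries.
 */
static inline unsigned int pte_batch_hint(pte_t *ptep, pte_t pte)
{
	if (!pte_valid_cont(pte))
		return 1;

	return CONT_PTES - (((unsigned long)ptep >> 3) & (CONT_PTES - 1));
}

The hint only addresses (b); (a) remains the pte_present() check in the
caller, and clamping the hint with min(..., max_nr) as in your snippet
below keeps it correct on architectures where the fallback simply
returns 1.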

> 
>    static int mincore_hugetlb(pte_t *pte, unsigned long hmask, unsigned long addr,
>                           unsigned long end, struct mm_walk *walk)
> @@ -105,6 +106,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>           pte_t *ptep;
>           unsigned char *vec = walk->private;
>           int nr = (end - addr) >> PAGE_SHIFT;
> +       int step, i;
> 
>           ptl = pmd_trans_huge_lock(pmd, vma);
>           if (ptl) {
> @@ -118,16 +120,21 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>                   walk->action = ACTION_AGAIN;
>                   return 0;
>           }
> -       for (; addr != end; ptep++, addr += PAGE_SIZE) {
> +       for (; addr != end; ptep += step, addr += step * PAGE_SIZE) {
>                   pte_t pte = ptep_get(ptep);
> 
> +               step = 1;
>                   /* We need to do cache lookup too for pte markers */
>                   if (pte_none_mostly(pte))
>                           __mincore_unmapped_range(addr, addr + PAGE_SIZE,
>                                                    vma, vec);
> -               else if (pte_present(pte))
> -                       *vec = 1;
> -               else { /* pte is a swap entry */
> +               else if (pte_present(pte)) {
> +                       unsigned int max_nr = (end - addr) / PAGE_SIZE;
> +
> +                       step = min(pte_batch_hint(ptep, pte), max_nr);
> +                       for (i = 0; i < step; i++)
> +                               vec[i] = 1;
> +               } else { /* pte is a swap entry */
>                           swp_entry_t entry = pte_to_swp_entry(pte);
> 
>                           if (non_swap_entry(entry)) {
> @@ -146,7 +153,7 @@ static int mincore_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
>    #endif
>                           }
>                   }
> -               vec++;
> +               vec += step;
>           }
>           pte_unmap_unlock(ptep - 1, ptl);
>    out:
> 
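
(As an aside: the kind of measurement quoted in the changelog can be
approximated from userspace with a sketch like the one below. The mTHP
sysfs knob, the 4K base page size, and the exact sizes are assumptions
about the test setup, not taken from the patch.)

/*
 * Rough sketch: time mincore() over 1G of populated anonymous memory.
 * Assumes 64K mTHP was enabled beforehand, e.g. via
 *   echo always > /sys/kernel/mm/transparent_hugepage/hugepages-64kB/enabled
 * and assumes a 4K base page size for the vector length.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>

int main(void)
{
	size_t len = 1UL << 30;
	unsigned char *vec = malloc(len / 4096);
	struct timespec t0, t1;
	void *buf;

	buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
		   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED || !vec)
		return 1;
	memset(buf, 1, len);	/* populate the mapping */

	clock_gettime(CLOCK_MONOTONIC, &t0);
	if (mincore(buf, len, vec))
		return 1;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("mincore: %ld us\n",
	       (t1.tv_sec - t0.tv_sec) * 1000000L +
	       (t1.tv_nsec - t0.tv_nsec) / 1000L);
	return 0;
}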


-- 
Cheers,

David / dhildenb


