From: Dev Jain <dev.jain@arm.com>
To: akpm@linux-foundation.org, david@redhat.com
Cc: ziy@nvidia.com, baolin.wang@linux.alibaba.com,
	lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
	npache@redhat.com, ryan.roberts@arm.com, baohua@kernel.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Dev Jain <dev.jain@arm.com>
Subject: [PATCH v2 2/3] khugepaged: Optimize collapse_pte_mapped_thp() for large folios by PTE batching
Date: Wed, 25 Jun 2025 11:28:05 +0530
Message-ID: <20250625055806.82645-3-dev.jain@arm.com>
In-Reply-To: <20250625055806.82645-1-dev.jain@arm.com>

Use PTE batching to optimize collapse_pte_mapped_thp().

On arm64, suppose khugepaged is scanning a pte-mapped 2MB THP for
collapse. Calling ptep_clear() on every pte then causes a TLB flush for
every contpte block. Instead, clear_full_ptes() does a
contpte_try_unfold_partial(), which flushes the TLB only for the
starting and ending contpte blocks (if any) that partially overlap the
range khugepaged is collapsing.
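
As a rough illustration of the flush arithmetic, assuming 4K base pages
on arm64 (so a contpte block covers 16 PTEs and HPAGE_PMD_NR is 512;
the numbers below are illustrative, not taken from the patch):

	/*
	 * A pte-mapped 2MB THP spans 512 PTEs = 32 contpte blocks.
	 *
	 * Per-PTE ptep_clear(): the first clear inside each still-folded
	 * contpte block unfolds it, flushing that block -> up to 32
	 * unfold + flush operations across the THP.
	 *
	 * Batched clear_full_ptes(): contpte_try_unfold_partial() unfolds
	 * (and flushes) only a partially covered first and/or last block;
	 * an aligned, fully covered 2MB range needs none.
	 */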

On all arches, there should be a benefit from batching the atomic
operations on the folio's mapcount via folio_remove_rmap_ptes().
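
folio_remove_rmap_pte() is the nr == 1 case of folio_remove_rmap_ptes(),
so the old loop performed one mapcount update per PTE. A minimal
before/after sketch, simplified from the diff below:

	/* Before: one mapcount update per PTE, up to 512 for a 2MB THP */
	ptep_clear(mm, addr, pte);
	folio_remove_rmap_pte(folio, page, vma);

	/* After: one update per batch of consecutive PTEs of the folio */
	clear_full_ptes(mm, addr, pte, nr_batch_ptes, /* full = */ false);
	folio_remove_rmap_ptes(folio, page, nr_batch_ptes, vma);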

Note that the check "if (folio_page(folio, i) != page)" does not need
to change: if the i'th page of the folio equals the first page of our
batch, then pages i + 1, ..., i + nr_batch_ptes - 1 of the folio are
guaranteed to equal the corresponding pages of the batch, since the
batch maps consecutive pages.
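
For example (the numbers here are hypothetical): if a batch starts at
i == 64 with nr_batch_ptes == 16, checking only the first page is
enough:

	/*
	 * folio_pte_batch() only batches PTEs mapping consecutive pages
	 * of the same folio, hence:
	 *
	 *	folio_page(folio, 64)     == page	(checked explicitly)
	 *	folio_page(folio, 64 + k) == page + k	(implied, k = 1..15)
	 */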

No issues were observed with mm-selftests.

Signed-off-by: Dev Jain <dev.jain@arm.com>
---
 mm/khugepaged.c | 37 +++++++++++++++++++++++++------------
 1 file changed, 25 insertions(+), 12 deletions(-)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 3944b112d452..4c8d33abfbd8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -1499,15 +1499,16 @@ static int set_huge_pmd(struct vm_area_struct *vma, unsigned long addr,
 int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 			    bool install_pmd)
 {
+	int nr_mapped_ptes = 0, nr_batch_ptes, result = SCAN_FAIL;
 	struct mmu_notifier_range range;
 	bool notified = false;
 	unsigned long haddr = addr & HPAGE_PMD_MASK;
+	unsigned long end = haddr + HPAGE_PMD_SIZE;
 	struct vm_area_struct *vma = vma_lookup(mm, haddr);
 	struct folio *folio;
 	pte_t *start_pte, *pte;
 	pmd_t *pmd, pgt_pmd;
 	spinlock_t *pml = NULL, *ptl;
-	int nr_ptes = 0, result = SCAN_FAIL;
 	int i;
 
 	mmap_assert_locked(mm);
@@ -1621,11 +1622,17 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 		goto abort;
 
 	/* step 2: clear page table and adjust rmap */
-	for (i = 0, addr = haddr, pte = start_pte;
-	     i < HPAGE_PMD_NR; i++, addr += PAGE_SIZE, pte++) {
+	for (i = 0, addr = haddr, pte = start_pte; i < HPAGE_PMD_NR;
+	     i += nr_batch_ptes, addr += nr_batch_ptes * PAGE_SIZE,
+	     pte += nr_batch_ptes) {
+		const fpb_t flags = FPB_IGNORE_DIRTY | FPB_IGNORE_SOFT_DIRTY;
+		int max_nr_batch_ptes = (end - addr) >> PAGE_SHIFT;
+		struct folio *mapped_folio;
 		struct page *page;
 		pte_t ptent = ptep_get(pte);
 
+		nr_batch_ptes = 1;
+
 		if (pte_none(ptent))
 			continue;
 		/*
@@ -1639,26 +1646,32 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 			goto abort;
 		}
 		page = vm_normal_page(vma, addr, ptent);
+
 		if (folio_page(folio, i) != page)
 			goto abort;
 
+		mapped_folio = page_folio(page);
+		nr_batch_ptes = folio_pte_batch(mapped_folio, addr, pte, ptent,
+						max_nr_batch_ptes, flags,
+						NULL, NULL, NULL);
+
 		/*
 		 * Must clear entry, or a racing truncate may re-remove it.
 		 * TLB flush can be left until pmdp_collapse_flush() does it.
 		 * PTE dirty? Shmem page is already dirty; file is read-only.
 		 */
-		ptep_clear(mm, addr, pte);
-		folio_remove_rmap_pte(folio, page, vma);
-		nr_ptes++;
+		clear_full_ptes(mm, addr, pte, nr_batch_ptes, /* full = */ false);
+		folio_remove_rmap_ptes(folio, page, nr_batch_ptes, vma);
+		nr_mapped_ptes += nr_batch_ptes;
 	}
 
 	if (!pml)
 		spin_unlock(ptl);
 
 	/* step 3: set proper refcount and mm_counters. */
-	if (nr_ptes) {
-		folio_ref_sub(folio, nr_ptes);
-		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
+	if (nr_mapped_ptes) {
+		folio_ref_sub(folio, nr_mapped_ptes);
+		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
 	}
 
 	/* step 4: remove empty page table */
@@ -1691,10 +1704,10 @@ int collapse_pte_mapped_thp(struct mm_struct *mm, unsigned long addr,
 			: SCAN_SUCCEED;
 	goto drop_folio;
 abort:
-	if (nr_ptes) {
+	if (nr_mapped_ptes) {
 		flush_tlb_mm(mm);
-		folio_ref_sub(folio, nr_ptes);
-		add_mm_counter(mm, mm_counter_file(folio), -nr_ptes);
+		folio_ref_sub(folio, nr_mapped_ptes);
+		add_mm_counter(mm, mm_counter_file(folio), -nr_mapped_ptes);
 	}
 unlock:
 	if (start_pte)
-- 
2.30.2


