linux-mm.kvack.org archive mirror
* [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup
@ 2024-06-21  7:54 Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 1/3] mm/ksm: refactor out try_to_merge_with_zero_page() Chengming Zhou
                   ` (3 more replies)
  0 siblings, 4 replies; 6+ messages in thread
From: Chengming Zhou @ 2024-06-21  7:54 UTC
  To: Andrew Morton, david, aarcange, hughd, shr
  Cc: linux-mm, linux-kernel, zhouchengming, Chengming Zhou

Changes in v2:
- Fix the comments of try_to_merge_with_zero_page(), per David.
- Drop the last patch from this version, since it covers a very rare case
  that is hard to prove by testing.
- Rebase on the latest mm/mm-stable branch.
- Link to v1: https://lore.kernel.org/r/20240524-b4-ksm-scan-optimize-v1-0-053b31bd7ab4@linux.dev

Hello,

This series mainly optimizes cmp_and_merge_page() to use more efficient,
separate code flows for ksm pages and non-ksm anon pages.

- ksm page: obviously doesn't need the checksum calculation at all.
- anon page: doesn't need to search the stable tree if it is changing
  fast, and should try to merge with the zero page before searching the
  stable tree for a ksm page.

Please see patch-2 for details; a simplified sketch of the reworked flow
follows.
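
For orientation, the separated flow roughly becomes the following. This is
a simplified sketch pieced together from the patch-2 hunks, not the exact
code:

	stable_node = page_stable_node(page);
	if (stable_node) {
		/* ksm page: it cannot change, so skip the checksum */
		if (!is_page_sharing_candidate(stable_node))
			max_page_sharing_bypass = true;
	} else {
		/* non-ksm anon page */
		remove_rmap_item_from_tree(rmap_item);

		/* Changing fast? Don't bother with any tree search. */
		checksum = calc_checksum(page);
		if (rmap_item->oldchecksum != checksum) {
			rmap_item->oldchecksum = checksum;
			return;
		}

		/* Try the zero page before searching the stable tree. */
		if (!try_to_merge_with_zero_page(rmap_item, page))
			return;
	}

	/* Fall through: search the stable tree, then the unstable tree. */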

Patch-3 is a cleanup, and also a small optimization of the
chain()/chain_prune() interfaces, which had made
stable_tree_search()/stable_tree_insert() overly complex.

I have done simple testing using "hackbench -g 1 -l 300000" (maybe I need
to use a better workload) on my machine, and have seen a small decrease in
ksmd CPU usage along with some improvement in cmp_and_merge_page()
latency.

As the histograms below show (each row is a latency bucket and its sample
count), the latency of cmp_and_merge_page() when handling non-ksm anon
pages has clearly improved.

Thanks for review and comments!

Before:

- ksm page
[128, 256)            21 |                                                    |
[256, 512)         12509 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K)            769 |@@@                                                 |
[1K, 2K)              99 |                                                    |
[2K, 4K)               4 |                                                    |
[4K, 8K)               2 |                                                    |
[8K, 16K)              8 |                                                    |

- anon page
[512, 1K)             19 |                                                    |
[1K, 2K)            7160 |@@@@@@@@@@@                                         |
[2K, 4K)           33516 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K)           33172 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ |
[8K, 16K)          11305 |@@@@@@@@@@@@@@@@@                                   |
[16K, 32K)          1303 |@@                                                  |
[32K, 64K)            16 |                                                    |
[64K, 128K)            6 |                                                    |
[128K, 256K)           6 |                                                    |
[256K, 512K)           9 |                                                    |
[512K, 1M)             3 |                                                    |
[1M, 2M)               2 |                                                    |
[2M, 4M)               1 |                                                    |

After:

- ksm page
[128, 256)             9 |                                                    |
[256, 512)           915 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[512, 1K)             41 |@@                                                  |
[1K, 2K)               1 |                                                    |
[2K, 4K)               1 |                                                    |

- anon page
[512, 1K)            374 |                                                    |
[1K, 2K)            5367 |@@@@                                                |
[2K, 4K)           64362 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
[4K, 8K)           27721 |@@@@@@@@@@@@@@@@@@@@@@                              |
[8K, 16K)           1047 |                                                    |
[16K, 32K)            63 |                                                    |
[32K, 64K)             7 |                                                    |
[64K, 128K)            6 |                                                    |
[128K, 256K)           5 |                                                    |
[256K, 512K)           3 |                                                    |
[512K, 1M)             1 |                                                    |

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
Chengming Zhou (3):
      mm/ksm: refactor out try_to_merge_with_zero_page()
      mm/ksm: don't waste time searching stable tree for fast changing page
      mm/ksm: optimize the chain()/chain_prune() interfaces

 mm/ksm.c | 250 +++++++++++++++++++++------------------------------------------
 1 file changed, 82 insertions(+), 168 deletions(-)
---
base-commit: 6ba59ff4227927d3a8530fc2973b80e94b54d58f
change-id: 20240621-b4-ksm-scan-optimize-e614a3a52217

Best regards,
-- 
Chengming Zhou <chengming.zhou@linux.dev>




* [PATCH v2 1/3] mm/ksm: refactor out try_to_merge_with_zero_page()
  2024-06-21  7:54 [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Chengming Zhou
@ 2024-06-21  7:54 ` Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 2/3] mm/ksm: don't waste time searching stable tree for fast changing page Chengming Zhou
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 6+ messages in thread
From: Chengming Zhou @ 2024-06-21  7:54 UTC
  To: Andrew Morton, david, aarcange, hughd, shr
  Cc: linux-mm, linux-kernel, zhouchengming, Chengming Zhou

In preparation for later changes, refactor out a new function,
try_to_merge_with_zero_page(), which tries to merge the page with the
kernel zero page.
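
With this helper in place, the zero-page handling at its call site shrinks
to a single early-exit check, as the cmp_and_merge_page() hunk below shows:

	if (!try_to_merge_with_zero_page(rmap_item, page))
		return;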

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/ksm.c | 70 ++++++++++++++++++++++++++++++++++++----------------------------
 1 file changed, 40 insertions(+), 30 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 34c4820e0d3d..1427abd18627 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1531,6 +1531,44 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 	return err;
 }
 
+/*
+ * This function returns 0 if the pages were merged or if they are
+ * no longer merging candidates (e.g., VMA stale), -EFAULT otherwise.
+ */
+static int try_to_merge_with_zero_page(struct ksm_rmap_item *rmap_item,
+				       struct page *page)
+{
+	struct mm_struct *mm = rmap_item->mm;
+	int err = -EFAULT;
+
+	/*
+	 * Same checksum as an empty page. We attempt to merge it with the
+	 * appropriate zero page if the user enabled this via sysfs.
+	 */
+	if (ksm_use_zero_pages && (rmap_item->oldchecksum == zero_checksum)) {
+		struct vm_area_struct *vma;
+
+		mmap_read_lock(mm);
+		vma = find_mergeable_vma(mm, rmap_item->address);
+		if (vma) {
+			err = try_to_merge_one_page(vma, page,
+					ZERO_PAGE(rmap_item->address));
+			trace_ksm_merge_one_page(
+				page_to_pfn(ZERO_PAGE(rmap_item->address)),
+				rmap_item, mm, err);
+		} else {
+			/*
+			 * If the vma is out of date, we do not need to
+			 * continue.
+			 */
+			err = 0;
+		}
+		mmap_read_unlock(mm);
+	}
+
+	return err;
+}
+
 /*
  * try_to_merge_with_ksm_page - like try_to_merge_two_pages,
  * but no new kernel page is allocated: kpage must already be a ksm page.
@@ -2306,7 +2344,6 @@ static void stable_tree_append(struct ksm_rmap_item *rmap_item,
  */
 static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_item)
 {
-	struct mm_struct *mm = rmap_item->mm;
 	struct ksm_rmap_item *tree_rmap_item;
 	struct page *tree_page = NULL;
 	struct ksm_stable_node *stable_node;
@@ -2375,36 +2412,9 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 		return;
 	}
 
-	/*
-	 * Same checksum as an empty page. We attempt to merge it with the
-	 * appropriate zero page if the user enabled this via sysfs.
-	 */
-	if (ksm_use_zero_pages && (checksum == zero_checksum)) {
-		struct vm_area_struct *vma;
+	if (!try_to_merge_with_zero_page(rmap_item, page))
+		return;
 
-		mmap_read_lock(mm);
-		vma = find_mergeable_vma(mm, rmap_item->address);
-		if (vma) {
-			err = try_to_merge_one_page(vma, page,
-					ZERO_PAGE(rmap_item->address));
-			trace_ksm_merge_one_page(
-				page_to_pfn(ZERO_PAGE(rmap_item->address)),
-				rmap_item, mm, err);
-		} else {
-			/*
-			 * If the vma is out of date, we do not need to
-			 * continue.
-			 */
-			err = 0;
-		}
-		mmap_read_unlock(mm);
-		/*
-		 * In case of failure, the page was not really empty, so we
-		 * need to continue. Otherwise we're done.
-		 */
-		if (!err)
-			return;
-	}
 	tree_rmap_item =
 		unstable_tree_search_insert(rmap_item, page, &tree_page);
 	if (tree_rmap_item) {

-- 
2.45.2




* [PATCH v2 2/3] mm/ksm: don't waste time searching stable tree for fast changing page
  2024-06-21  7:54 [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 1/3] mm/ksm: refactor out try_to_merge_with_zero_page() Chengming Zhou
@ 2024-06-21  7:54 ` Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 3/3] mm/ksm: optimize the chain()/chain_prune() interfaces Chengming Zhou
  2024-06-28 23:59 ` [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Andrew Morton
  3 siblings, 0 replies; 6+ messages in thread
From: Chengming Zhou @ 2024-06-21  7:54 UTC
  To: Andrew Morton, david, aarcange, hughd, shr
  Cc: linux-mm, linux-kernel, zhouchengming, Chengming Zhou

The code flow in cmp_and_merge_page() is suboptimal because it handles
ksm pages and non-ksm pages along one intertwined path. For example:

- ksm page
 1. We mostly just return if this ksm page has not been migrated and this
    rmap_item is already on the rmap hlist; otherwise we have to fix up
    the rmap_item mapping.
 2. We absolutely don't need to checksum a ksm page, since it is
    write-protected and can't change.

- non-ksm page
 1. There is no need to waste time searching the stable tree if the page
    is changing fast.
 2. We should try to merge with the zero page before searching the
    stable tree.
 3. Only then search the stable tree for a mergeable ksm page.

This patch optimizes the code flow so that the handling of ksm pages and
non-ksm pages becomes clearer, and more efficient too.

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/ksm.c | 32 +++++++++++++++++---------------
 1 file changed, 17 insertions(+), 15 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 1427abd18627..2cf836fb1367 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2370,6 +2370,23 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 		 */
 		if (!is_page_sharing_candidate(stable_node))
 			max_page_sharing_bypass = true;
+	} else {
+		remove_rmap_item_from_tree(rmap_item);
+
+		/*
+		 * If the hash value of the page has changed from the last time
+		 * we calculated it, this page is changing frequently: therefore we
+		 * don't want to insert it in the unstable tree, and we don't want
+		 * to waste our time searching for something identical to it there.
+		 */
+		checksum = calc_checksum(page);
+		if (rmap_item->oldchecksum != checksum) {
+			rmap_item->oldchecksum = checksum;
+			return;
+		}
+
+		if (!try_to_merge_with_zero_page(rmap_item, page))
+			return;
 	}
 
 	/* We first start with searching the page inside the stable tree */
@@ -2400,21 +2417,6 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 		return;
 	}
 
-	/*
-	 * If the hash value of the page has changed from the last time
-	 * we calculated it, this page is changing frequently: therefore we
-	 * don't want to insert it in the unstable tree, and we don't want
-	 * to waste our time searching for something identical to it there.
-	 */
-	checksum = calc_checksum(page);
-	if (rmap_item->oldchecksum != checksum) {
-		rmap_item->oldchecksum = checksum;
-		return;
-	}
-
-	if (!try_to_merge_with_zero_page(rmap_item, page))
-		return;
-
 	tree_rmap_item =
 		unstable_tree_search_insert(rmap_item, page, &tree_page);
 	if (tree_rmap_item) {

-- 
2.45.2




* [PATCH v2 3/3] mm/ksm: optimize the chain()/chain_prune() interfaces
  2024-06-21  7:54 [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 1/3] mm/ksm: refactor out try_to_merge_with_zero_page() Chengming Zhou
  2024-06-21  7:54 ` [PATCH v2 2/3] mm/ksm: don't waste time searching stable tree for fast changing page Chengming Zhou
@ 2024-06-21  7:54 ` Chengming Zhou
  2024-06-22  1:09   ` Andrew Morton
  2024-06-28 23:59 ` [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Andrew Morton
  3 siblings, 1 reply; 6+ messages in thread
From: Chengming Zhou @ 2024-06-21  7:54 UTC
  To: Andrew Morton, david, aarcange, hughd, shr
  Cc: linux-mm, linux-kernel, zhouchengming, Chengming Zhou

The current implementation of stable_node_dup() makes the
chain()/chain_prune() interfaces and their usage overcomplicated.

Why? stable_node_dup() only finds and returns a candidate stable_node for
sharing, so callers have to recheck with stable_node_dup_any() whether any
non-candidate stable_node exists, and then try ksm_get_folio() on it all
over again.

Instead, stable_node_dup() can simply return the best stable_node it can
find, and let callers check whether it is a candidate for sharing or not.

The code is simplified and has fewer corner cases: for example,
stable_node and stable_node_dup can't be NULL if the returned tree_folio
is not NULL.
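
The caller-side pattern then collapses to something like this (a condensed
sketch of the stable_tree_search() hunks below, not a verbatim excerpt):

	tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
	if (!tree_folio) {
		/* Walked over a stale stable_node; restart the walk. */
		goto again;
	}
	/* ... on a content match: ... */
	if (!is_page_sharing_candidate(stable_node_dup)) {
		/* best dup reached the ksm_max_page_sharing limit */
		goto chain_append;
	}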

Signed-off-by: Chengming Zhou <chengming.zhou@linux.dev>
---
 mm/ksm.c | 152 ++++++++++++---------------------------------------------------
 1 file changed, 27 insertions(+), 125 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 2cf836fb1367..8a5d88472223 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1663,7 +1663,6 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 	struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
 	struct hlist_node *hlist_safe;
 	struct folio *folio, *tree_folio = NULL;
-	int nr = 0;
 	int found_rmap_hlist_len;
 
 	if (!prune_stale_stable_nodes ||
@@ -1690,33 +1689,26 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 		folio = ksm_get_folio(dup, KSM_GET_FOLIO_NOLOCK);
 		if (!folio)
 			continue;
-		nr += 1;
-		if (is_page_sharing_candidate(dup)) {
-			if (!found ||
-			    dup->rmap_hlist_len > found_rmap_hlist_len) {
-				if (found)
-					folio_put(tree_folio);
-				found = dup;
-				found_rmap_hlist_len = found->rmap_hlist_len;
-				tree_folio = folio;
-
-				/* skip put_page for found dup */
-				if (!prune_stale_stable_nodes)
-					break;
-				continue;
-			}
+		/* Pick the best candidate if possible. */
+		if (!found || (is_page_sharing_candidate(dup) &&
+		    (!is_page_sharing_candidate(found) ||
+		     dup->rmap_hlist_len > found_rmap_hlist_len))) {
+			if (found)
+				folio_put(tree_folio);
+			found = dup;
+			found_rmap_hlist_len = found->rmap_hlist_len;
+			tree_folio = folio;
+			/* skip put_page for found candidate */
+			if (!prune_stale_stable_nodes &&
+			    is_page_sharing_candidate(found))
+				break;
+			continue;
 		}
 		folio_put(folio);
 	}
 
 	if (found) {
-		/*
-		 * nr is counting all dups in the chain only if
-		 * prune_stale_stable_nodes is true, otherwise we may
-		 * break the loop at nr == 1 even if there are
-		 * multiple entries.
-		 */
-		if (prune_stale_stable_nodes && nr == 1) {
+		if (hlist_is_singular_node(&found->hlist_dup, &stable_node->hlist)) {
 			/*
 			 * If there's not just one entry it would
 			 * corrupt memory, better BUG_ON. In KSM
@@ -1768,25 +1760,15 @@ static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
 			hlist_add_head(&found->hlist_dup,
 				       &stable_node->hlist);
 		}
+	} else {
+		/* Its hlist must be empty if no one found. */
+		free_stable_node_chain(stable_node, root);
 	}
 
 	*_stable_node_dup = found;
 	return tree_folio;
 }
 
-static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
-					       struct rb_root *root)
-{
-	if (!is_stable_node_chain(stable_node))
-		return stable_node;
-	if (hlist_empty(&stable_node->hlist)) {
-		free_stable_node_chain(stable_node, root);
-		return NULL;
-	}
-	return hlist_entry(stable_node->hlist.first,
-			   typeof(*stable_node), hlist_dup);
-}
-
 /*
  * Like for ksm_get_folio, this function can free the *_stable_node and
  * *_stable_node_dup if the returned tree_page is NULL.
@@ -1807,17 +1789,10 @@ static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_d
 					 bool prune_stale_stable_nodes)
 {
 	struct ksm_stable_node *stable_node = *_stable_node;
+
 	if (!is_stable_node_chain(stable_node)) {
-		if (is_page_sharing_candidate(stable_node)) {
-			*_stable_node_dup = stable_node;
-			return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
-		}
-		/*
-		 * _stable_node_dup set to NULL means the stable_node
-		 * reached the ksm_max_page_sharing limit.
-		 */
-		*_stable_node_dup = NULL;
-		return NULL;
+		*_stable_node_dup = stable_node;
+		return ksm_get_folio(stable_node, KSM_GET_FOLIO_NOLOCK);
 	}
 	return stable_node_dup(_stable_node_dup, _stable_node, root,
 			       prune_stale_stable_nodes);
@@ -1831,16 +1806,10 @@ static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
 }
 
 static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
-					   struct ksm_stable_node *s_n,
+					   struct ksm_stable_node **s_n,
 					   struct rb_root *root)
 {
-	struct ksm_stable_node *old_stable_node = s_n;
-	struct folio *tree_folio;
-
-	tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
-	/* not pruning dups so s_n cannot have changed */
-	VM_BUG_ON(s_n != old_stable_node);
-	return tree_folio;
+	return __stable_node_chain(s_n_d, s_n, root, false);
 }
 
 /*
@@ -1858,7 +1827,7 @@ static struct page *stable_tree_search(struct page *page)
 	struct rb_root *root;
 	struct rb_node **new;
 	struct rb_node *parent;
-	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+	struct ksm_stable_node *stable_node, *stable_node_dup;
 	struct ksm_stable_node *page_node;
 	struct folio *folio;
 
@@ -1882,45 +1851,7 @@ static struct page *stable_tree_search(struct page *page)
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
-		stable_node_any = NULL;
 		tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
-		/*
-		 * NOTE: stable_node may have been freed by
-		 * chain_prune() if the returned stable_node_dup is
-		 * not NULL. stable_node_dup may have been inserted in
-		 * the rbtree instead as a regular stable_node (in
-		 * order to collapse the stable_node chain if a single
-		 * stable_node dup was found in it). In such case the
-		 * stable_node is overwritten by the callee to point
-		 * to the stable_node_dup that was collapsed in the
-		 * stable rbtree and stable_node will be equal to
-		 * stable_node_dup like if the chain never existed.
-		 */
-		if (!stable_node_dup) {
-			/*
-			 * Either all stable_node dups were full in
-			 * this stable_node chain, or this chain was
-			 * empty and should be rb_erased.
-			 */
-			stable_node_any = stable_node_dup_any(stable_node,
-							      root);
-			if (!stable_node_any) {
-				/* rb_erase just run */
-				goto again;
-			}
-			/*
-			 * Take any of the stable_node dups page of
-			 * this stable_node chain to let the tree walk
-			 * continue. All KSM pages belonging to the
-			 * stable_node dups in a stable_node chain
-			 * have the same content and they're
-			 * write protected at all times. Any will work
-			 * fine to continue the walk.
-			 */
-			tree_folio = ksm_get_folio(stable_node_any,
-						   KSM_GET_FOLIO_NOLOCK);
-		}
-		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
 		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,
@@ -1958,7 +1889,7 @@ static struct page *stable_tree_search(struct page *page)
 					goto chain_append;
 			}
 
-			if (!stable_node_dup) {
+			if (!is_page_sharing_candidate(stable_node_dup)) {
 				/*
 				 * If the stable_node is a chain and
 				 * we got a payload match in memcmp
@@ -2067,9 +1998,6 @@ static struct page *stable_tree_search(struct page *page)
 	return &folio->page;
 
 chain_append:
-	/* stable_node_dup could be null if it reached the limit */
-	if (!stable_node_dup)
-		stable_node_dup = stable_node_any;
 	/*
 	 * If stable_node was a chain and chain_prune collapsed it,
 	 * stable_node has been updated to be the new regular
@@ -2114,7 +2042,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 	struct rb_root *root;
 	struct rb_node **new;
 	struct rb_node *parent;
-	struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
+	struct ksm_stable_node *stable_node, *stable_node_dup;
 	bool need_chain = false;
 
 	kpfn = folio_pfn(kfolio);
@@ -2130,33 +2058,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
 
 		cond_resched();
 		stable_node = rb_entry(*new, struct ksm_stable_node, node);
-		stable_node_any = NULL;
-		tree_folio = chain(&stable_node_dup, stable_node, root);
-		if (!stable_node_dup) {
-			/*
-			 * Either all stable_node dups were full in
-			 * this stable_node chain, or this chain was
-			 * empty and should be rb_erased.
-			 */
-			stable_node_any = stable_node_dup_any(stable_node,
-							      root);
-			if (!stable_node_any) {
-				/* rb_erase just run */
-				goto again;
-			}
-			/*
-			 * Take any of the stable_node dups page of
-			 * this stable_node chain to let the tree walk
-			 * continue. All KSM pages belonging to the
-			 * stable_node dups in a stable_node chain
-			 * have the same content and they're
-			 * write protected at all times. Any will work
-			 * fine to continue the walk.
-			 */
-			tree_folio = ksm_get_folio(stable_node_any,
-						   KSM_GET_FOLIO_NOLOCK);
-		}
-		VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
+		tree_folio = chain(&stable_node_dup, &stable_node, root);
 		if (!tree_folio) {
 			/*
 			 * If we walked over a stale stable_node,

-- 
2.45.2




* Re: [PATCH v2 3/3] mm/ksm: optimize the chain()/chain_prune() interfaces
  2024-06-21  7:54 ` [PATCH v2 3/3] mm/ksm: optimize the chain()/chain_prune() interfaces Chengming Zhou
@ 2024-06-22  1:09   ` Andrew Morton
  0 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2024-06-22  1:09 UTC
  To: Chengming Zhou
  Cc: david, aarcange, hughd, shr, linux-mm, linux-kernel,
	zhouchengming

On Fri, 21 Jun 2024 15:54:31 +0800 Chengming Zhou <chengming.zhou@linux.dev> wrote:

> mm/ksm.c | 152 ++++++++++++---------------------------------------------------
>  1 file changed, 27 insertions(+), 125 deletions(-)

That got my attention.

Thanks.  I merged it for testing, pending additional review.



* Re: [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup
  2024-06-21  7:54 [PATCH v2 0/3] mm/ksm: cmp_and_merge_page() optimizations and cleanup Chengming Zhou
                   ` (2 preceding siblings ...)
  2024-06-21  7:54 ` [PATCH v2 3/3] mm/ksm: optimize the chain()/chain_prune() interfaces Chengming Zhou
@ 2024-06-28 23:59 ` Andrew Morton
  3 siblings, 0 replies; 6+ messages in thread
From: Andrew Morton @ 2024-06-28 23:59 UTC
  To: Chengming Zhou
  Cc: david, aarcange, hughd, shr, linux-mm, linux-kernel,
	zhouchengming

On Fri, 21 Jun 2024 15:54:28 +0800 Chengming Zhou <chengming.zhou@linux.dev> wrote:

> This series mainly optimizes cmp_and_merge_page() to have more efficient
> separate code flow for ksm page and non-ksm anon page.

Is anyone interested in reviewing this patchset further?


