* [PATCH 0/5] Remove PageKsm()
@ 2024-10-02 15:25 Matthew Wilcox (Oracle)
  2024-10-02 15:25 ` [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page() Matthew Wilcox (Oracle)
                   ` (4 more replies)
  0 siblings, 5 replies; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

The KSM flag is almost always tested on the folio rather than on the page.
This series removes the final users of PageKsm() and makes the flag
testable only on the folio.
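
For reference, the page wrapper that patch 5 removes is just a folio
lookup plus the folio-level test (this is its existing definition in
include/linux/page-flags.h), so every PageKsm() call hides a
compound_head():

	static __always_inline bool PageKsm(const struct page *page)
	{
		return folio_test_ksm(page_folio(page));
	}

Callers that already have the folio can call folio_test_ksm() directly
and skip that lookup.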

Matthew Wilcox (Oracle) (5):
  ksm: Use a folio in try_to_merge_one_page()
  ksm: Convert cmp_and_merge_page() to use a folio
  ksm: Convert should_skip_rmap_item() to take a folio
  mm: Add PageAnonNotKsm()
  mm: Remove PageKsm()

 include/linux/page-flags.h | 18 ++++----
 mm/internal.h              |  2 +-
 mm/ksm.c                   | 93 +++++++++++++++++++-------------------
 3 files changed, 58 insertions(+), 55 deletions(-)

-- 
2.43.0




* [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page()
  2024-10-02 15:25 [PATCH 0/5] Remove PageKsm() Matthew Wilcox (Oracle)
@ 2024-10-02 15:25 ` Matthew Wilcox (Oracle)
  2024-10-07  9:43   ` David Hildenbrand
  2024-10-02 15:25 ` [PATCH 2/5] ksm: Convert cmp_and_merge_page() to use a folio Matthew Wilcox (Oracle)
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

It is safe to use a folio here because all callers took a refcount on
this page.  The one wrinkle is that we have to recalculate the value
of folio after splitting the page, since it has probably changed.
This replaces nine calls to compound_head() with one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/ksm.c | 33 +++++++++++++++++----------------
 1 file changed, 17 insertions(+), 16 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index a2e2a521df0a..57f998b172e6 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1443,28 +1443,29 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 static int try_to_merge_one_page(struct vm_area_struct *vma,
 				 struct page *page, struct page *kpage)
 {
+	struct folio *folio = page_folio(page);
 	pte_t orig_pte = __pte(0);
 	int err = -EFAULT;
 
 	if (page == kpage)			/* ksm page forked */
 		return 0;
 
-	if (!PageAnon(page))
+	if (!folio_test_anon(folio))
 		goto out;
 
 	/*
 	 * We need the folio lock to read a stable swapcache flag in
-	 * write_protect_page().  We use trylock_page() instead of
-	 * lock_page() because we don't want to wait here - we
-	 * prefer to continue scanning and merging different pages,
-	 * then come back to this page when it is unlocked.
+	 * write_protect_page().  We trylock because we don't want to wait
+	 * here - we prefer to continue scanning and merging different
+	 * pages, then come back to this page when it is unlocked.
 	 */
-	if (!trylock_page(page))
+	if (!folio_trylock(folio))
 		goto out;
 
-	if (PageTransCompound(page)) {
+	if (folio_test_large(folio)) {
 		if (split_huge_page(page))
 			goto out_unlock;
+		folio = page_folio(page);
 	}
 
 	/*
@@ -1473,28 +1474,28 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
 	 * ptes are necessarily already write-protected.  But in either
 	 * case, we need to lock and check page_count is not raised.
 	 */
-	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
+	if (write_protect_page(vma, folio, &orig_pte) == 0) {
 		if (!kpage) {
 			/*
-			 * While we hold page lock, upgrade page from
-			 * PageAnon+anon_vma to PageKsm+NULL stable_node:
+			 * While we hold folio lock, upgrade folio from
+			 * anon to a NULL stable_node with the KSM flag set:
 			 * stable_tree_insert() will update stable_node.
 			 */
-			folio_set_stable_node(page_folio(page), NULL);
-			mark_page_accessed(page);
+			folio_set_stable_node(folio, NULL);
+			folio_mark_accessed(folio);
 			/*
-			 * Page reclaim just frees a clean page with no dirty
+			 * Page reclaim just frees a clean folio with no dirty
 			 * ptes: make sure that the ksm page would be swapped.
 			 */
-			if (!PageDirty(page))
-				SetPageDirty(page);
+			if (!folio_test_dirty(folio))
+				folio_mark_dirty(folio);
 			err = 0;
 		} else if (pages_identical(page, kpage))
 			err = replace_page(vma, page, kpage, orig_pte);
 	}
 
 out_unlock:
-	unlock_page(page);
+	folio_unlock(folio);
 out:
 	return err;
 }
-- 
2.43.0




* [PATCH 2/5] ksm: Convert cmp_and_merge_page() to use a folio
  2024-10-02 15:25 [PATCH 0/5] Remove PageKsm() Matthew Wilcox (Oracle)
  2024-10-02 15:25 ` [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page() Matthew Wilcox (Oracle)
@ 2024-10-02 15:25 ` Matthew Wilcox (Oracle)
  2024-10-07  9:44   ` David Hildenbrand
  2024-10-02 15:25 ` [PATCH 3/5] ksm: Convert should_skip_rmap_item() to take " Matthew Wilcox (Oracle)
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

By making try_to_merge_two_pages() and stable_tree_search() return a
folio, we can replace kpage with kfolio.  This replaces seven calls to
compound_head() with one.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/ksm.c | 48 ++++++++++++++++++++++++------------------------
 1 file changed, 24 insertions(+), 24 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 57f998b172e6..19e17b228ae1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1583,7 +1583,7 @@ static int try_to_merge_with_ksm_page(struct ksm_rmap_item *rmap_item,
  * Note that this function upgrades page to ksm page: if one of the pages
  * is already a ksm page, try_to_merge_with_ksm_page should be used.
  */
-static struct page *try_to_merge_two_pages(struct ksm_rmap_item *rmap_item,
+static struct folio *try_to_merge_two_pages(struct ksm_rmap_item *rmap_item,
 					   struct page *page,
 					   struct ksm_rmap_item *tree_rmap_item,
 					   struct page *tree_page)
@@ -1601,7 +1601,7 @@ static struct page *try_to_merge_two_pages(struct ksm_rmap_item *rmap_item,
 		if (err)
 			break_cow(rmap_item);
 	}
-	return err ? NULL : page;
+	return err ? NULL : page_folio(page);
 }
 
 static __always_inline
@@ -1790,7 +1790,7 @@ static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
  * This function returns the stable tree node of identical content if found,
  * NULL otherwise.
  */
-static struct page *stable_tree_search(struct page *page)
+static struct folio *stable_tree_search(struct page *page)
 {
 	int nid;
 	struct rb_root *root;
@@ -1805,7 +1805,7 @@ static struct page *stable_tree_search(struct page *page)
 	if (page_node && page_node->head != &migrate_nodes) {
 		/* ksm page forked */
 		folio_get(folio);
-		return &folio->page;
+		return folio;
 	}
 
 	nid = get_kpfn_nid(folio_pfn(folio));
@@ -1900,7 +1900,7 @@ static struct page *stable_tree_search(struct page *page)
 				folio_put(tree_folio);
 				goto replace;
 			}
-			return &tree_folio->page;
+			return tree_folio;
 		}
 	}
 
@@ -1914,7 +1914,7 @@ static struct page *stable_tree_search(struct page *page)
 out:
 	if (is_page_sharing_candidate(page_node)) {
 		folio_get(folio);
-		return &folio->page;
+		return folio;
 	} else
 		return NULL;
 
@@ -1964,7 +1964,7 @@ static struct page *stable_tree_search(struct page *page)
 	}
 	stable_node_dup->head = &migrate_nodes;
 	list_add(&stable_node_dup->list, stable_node_dup->head);
-	return &folio->page;
+	return folio;
 
 chain_append:
 	/*
@@ -2218,7 +2218,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 	struct ksm_rmap_item *tree_rmap_item;
 	struct page *tree_page = NULL;
 	struct ksm_stable_node *stable_node;
-	struct page *kpage;
+	struct folio *kfolio;
 	unsigned int checksum;
 	int err;
 	bool max_page_sharing_bypass = false;
@@ -2260,31 +2260,31 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 			return;
 	}
 
-	/* We first start with searching the page inside the stable tree */
-	kpage = stable_tree_search(page);
-	if (kpage == page && rmap_item->head == stable_node) {
-		put_page(kpage);
+	/* Start by searching for the folio in the stable tree */
+	kfolio = stable_tree_search(page);
+	if (&kfolio->page == page && rmap_item->head == stable_node) {
+		folio_put(kfolio);
 		return;
 	}
 
 	remove_rmap_item_from_tree(rmap_item);
 
-	if (kpage) {
-		if (PTR_ERR(kpage) == -EBUSY)
+	if (kfolio) {
+		if (kfolio == ERR_PTR(-EBUSY))
 			return;
 
-		err = try_to_merge_with_ksm_page(rmap_item, page, kpage);
+		err = try_to_merge_with_ksm_page(rmap_item, page, &kfolio->page);
 		if (!err) {
 			/*
 			 * The page was successfully merged:
 			 * add its rmap_item to the stable tree.
 			 */
-			lock_page(kpage);
-			stable_tree_append(rmap_item, page_stable_node(kpage),
+			folio_lock(kfolio);
+			stable_tree_append(rmap_item, folio_stable_node(kfolio),
 					   max_page_sharing_bypass);
-			unlock_page(kpage);
+			folio_unlock(kfolio);
 		}
-		put_page(kpage);
+		folio_put(kfolio);
 		return;
 	}
 
@@ -2293,7 +2293,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 	if (tree_rmap_item) {
 		bool split;
 
-		kpage = try_to_merge_two_pages(rmap_item, page,
+		kfolio = try_to_merge_two_pages(rmap_item, page,
 						tree_rmap_item, tree_page);
 		/*
 		 * If both pages we tried to merge belong to the same compound
@@ -2308,20 +2308,20 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
 		split = PageTransCompound(page)
 			&& compound_head(page) == compound_head(tree_page);
 		put_page(tree_page);
-		if (kpage) {
+		if (kfolio) {
 			/*
 			 * The pages were successfully merged: insert new
 			 * node in the stable tree and add both rmap_items.
 			 */
-			lock_page(kpage);
-			stable_node = stable_tree_insert(page_folio(kpage));
+			folio_lock(kfolio);
+			stable_node = stable_tree_insert(kfolio);
 			if (stable_node) {
 				stable_tree_append(tree_rmap_item, stable_node,
 						   false);
 				stable_tree_append(rmap_item, stable_node,
 						   false);
 			}
-			unlock_page(kpage);
+			folio_unlock(kfolio);
 
 			/*
 			 * If we fail to insert the page into the stable tree,
-- 
2.43.0




* [PATCH 3/5] ksm: Convert should_skip_rmap_item() to take a folio
  2024-10-02 15:25 [PATCH 0/5] Remove PageKsm() Matthew Wilcox (Oracle)
  2024-10-02 15:25 ` [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page() Matthew Wilcox (Oracle)
  2024-10-02 15:25 ` [PATCH 2/5] ksm: Convert cmp_and_merge_page() to use a folio Matthew Wilcox (Oracle)
@ 2024-10-02 15:25 ` Matthew Wilcox (Oracle)
  2024-10-07  9:46   ` David Hildenbrand
  2024-10-02 15:25 ` [PATCH 4/5] mm: Add PageAnonNotKsm() Matthew Wilcox (Oracle)
  2024-10-02 15:25 ` [PATCH 5/5] mm: Remove PageKsm() Matthew Wilcox (Oracle)
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

Remove a call to PageKsm() by passing the folio containing tmp_page to
should_skip_rmap_item().  This removes a hidden call to compound_head().

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 mm/ksm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/ksm.c b/mm/ksm.c
index 19e17b228ae1..1b8b43dc6ba7 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2402,10 +2402,10 @@ static unsigned int skip_age(rmap_age_t age)
 /*
  * Determines if a page should be skipped for the current scan.
  *
- * @page: page to check
+ * @folio: folio containing the page to check
  * @rmap_item: associated rmap_item of page
  */
-static bool should_skip_rmap_item(struct page *page,
+static bool should_skip_rmap_item(struct folio *folio,
 				  struct ksm_rmap_item *rmap_item)
 {
 	rmap_age_t age;
@@ -2418,7 +2418,7 @@ static bool should_skip_rmap_item(struct page *page,
 	 * will essentially ignore them, but we still have to process them
 	 * properly.
 	 */
-	if (PageKsm(page))
+	if (folio_test_ksm(folio))
 		return false;
 
 	age = rmap_item->age;
@@ -2561,7 +2561,7 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
 					ksm_scan.rmap_list =
 							&rmap_item->rmap_list;
 
-					if (should_skip_rmap_item(tmp_page, rmap_item)) {
+					if (should_skip_rmap_item(folio, rmap_item)) {
 						folio_put(folio);
 						goto next_page;
 					}
-- 
2.43.0




* [PATCH 4/5] mm: Add PageAnonNotKsm()
  2024-10-02 15:25 [PATCH 0/5] Remove PageKsm() Matthew Wilcox (Oracle)
                   ` (2 preceding siblings ...)
  2024-10-02 15:25 ` [PATCH 3/5] ksm: Convert should_skip_rmap_item() to take " Matthew Wilcox (Oracle)
@ 2024-10-02 15:25 ` Matthew Wilcox (Oracle)
       [not found]   ` <CGME20241004114615eucas1p1910b6b4e74f8a878f56104026eece731@eucas1p1.samsung.com>
                     ` (2 more replies)
  2024-10-02 15:25 ` [PATCH 5/5] mm: Remove PageKsm() Matthew Wilcox (Oracle)
  4 siblings, 3 replies; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

Check that this anonymous page is really anonymous, not
anonymous-or-KSM.  This optimises the debug check, but its real purpose
is to remove the last two users of PageKsm().
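
For reference, a sketch of the mapping-pointer bit encoding that this
single compare relies on (values as defined in
include/linux/page-flags.h):

	PAGE_MAPPING_ANON	0x1
	PAGE_MAPPING_MOVABLE	0x2
	PAGE_MAPPING_KSM	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
	PAGE_MAPPING_FLAGS	(PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)

A KSM page sets both low bits, so testing
(flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON accepts anonymous
pages while rejecting KSM pages in one comparison.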

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/page-flags.h | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c2dfe289046..157c4ffc2fdc 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -689,6 +689,13 @@ static __always_inline bool folio_test_anon(const struct folio *folio)
 	return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
 }
 
+static __always_inline bool PageAnonNotKsm(const struct page *page)
+{
+	unsigned long flags = (unsigned long)page_folio(page)->mapping;
+
+	return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
+}
+
 static __always_inline bool PageAnon(const struct page *page)
 {
 	return folio_test_anon(page_folio(page));
@@ -1129,14 +1136,14 @@ static __always_inline int PageAnonExclusive(const struct page *page)
 
 static __always_inline void SetPageAnonExclusive(struct page *page)
 {
-	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
+	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);
 	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
 	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
 }
 
 static __always_inline void ClearPageAnonExclusive(struct page *page)
 {
-	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
+	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);
 	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
 	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
 }
-- 
2.43.0




* [PATCH 5/5] mm: Remove PageKsm()
  2024-10-02 15:25 [PATCH 0/5] Remove PageKsm() Matthew Wilcox (Oracle)
                   ` (3 preceding siblings ...)
  2024-10-02 15:25 ` [PATCH 4/5] mm: Add PageAnonNotKsm() Matthew Wilcox (Oracle)
@ 2024-10-02 15:25 ` Matthew Wilcox (Oracle)
  2024-10-07  9:47   ` David Hildenbrand
  4 siblings, 1 reply; 13+ messages in thread
From: Matthew Wilcox (Oracle) @ 2024-10-02 15:25 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Matthew Wilcox (Oracle), linux-mm, Alex Shi

All callers have been converted to use folio_test_ksm() or
PageAnonNotKsm(), so we can remove this wrapper.
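
For reference, the !CONFIG_KSM stub that FOLIO_TEST_FLAG_FALSE(ksm)
generates is roughly (a sketch of the page-flags.h macro expansion):

	static inline bool folio_test_ksm(const struct folio *folio)
	{
		return false;
	}

The old TESTPAGEFLAG_FALSE(Ksm, ksm) additionally generated a PageKsm()
stub, which no longer has any callers.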

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
---
 include/linux/page-flags.h | 7 +------
 mm/internal.h              | 2 +-
 mm/ksm.c                   | 4 ++--
 3 files changed, 4 insertions(+), 9 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 157c4ffc2fdc..84746da35f79 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -725,13 +725,8 @@ static __always_inline bool folio_test_ksm(const struct folio *folio)
 	return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
 				PAGE_MAPPING_KSM;
 }
-
-static __always_inline bool PageKsm(const struct page *page)
-{
-	return folio_test_ksm(page_folio(page));
-}
 #else
-TESTPAGEFLAG_FALSE(Ksm, ksm)
+FOLIO_TEST_FLAG_FALSE(ksm)
 #endif
 
 u64 stable_page_flags(const struct page *page);
diff --git a/mm/internal.h b/mm/internal.h
index 93083bbeeefa..5099558c7500 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -1311,7 +1311,7 @@ static inline bool gup_must_unshare(struct vm_area_struct *vma,
 		smp_rmb();
 
 	/*
-	 * Note that PageKsm() pages cannot be exclusive, and consequently,
+	 * Note that KSM pages cannot be exclusive, and consequently,
 	 * cannot get pinned.
 	 */
 	return !PageAnonExclusive(page);
diff --git a/mm/ksm.c b/mm/ksm.c
index 1b8b43dc6ba7..e2068c73429b 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -657,7 +657,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr, bool lock_v
 	 *
 	 * VM_FAULT_SIGBUS could occur if we race with truncation of the
 	 * backing file, which also invalidates anonymous pages: that's
-	 * okay, that truncation will have unmapped the PageKsm for us.
+	 * okay, that truncation will have unmapped the KSM page for us.
 	 *
 	 * VM_FAULT_OOM: at the time of writing (late July 2009), setting
 	 * aside mem_cgroup limits, VM_FAULT_OOM would only be set if the
@@ -1435,7 +1435,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
  * try_to_merge_one_page - take two pages and merge them into one
  * @vma: the vma that holds the pte pointing to page
  * @page: the PageAnon page that we want to replace with kpage
- * @kpage: the PageKsm page that we want to map instead of page,
+ * @kpage: the KSM page that we want to map instead of page,
  *         or NULL the first time when we want to use page as kpage.
  *
  * This function returns 0 if the pages were merged, -EFAULT otherwise.
-- 
2.43.0




* Re: [PATCH 4/5] mm: Add PageAnonNotKsm()
       [not found]   ` <CGME20241004114615eucas1p1910b6b4e74f8a878f56104026eece731@eucas1p1.samsung.com>
@ 2024-10-04 11:46     ` Marek Szyprowski
  0 siblings, 0 replies; 13+ messages in thread
From: Marek Szyprowski @ 2024-10-04 11:46 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.2024 17:25, Matthew Wilcox (Oracle) wrote:
> Check that this anonymous page is really anonymous, not
> anonymous-or-KSM.  This optimises the debug check, but its real purpose
> is to remove the last two users of PageKsm().
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/linux/page-flags.h | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 4c2dfe289046..157c4ffc2fdc 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -689,6 +689,13 @@ static __always_inline bool folio_test_anon(const struct folio *folio)
>   	return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
>   }
>   
> +static __always_inline bool PageAnonNotKsm(const struct page *page)
> +{
> +	unsigned long flags = (unsigned long)page_folio(page)->mapping;
> +
> +	return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
> +}
> +
>   static __always_inline bool PageAnon(const struct page *page)
>   {
>   	return folio_test_anon(page_folio(page));
> @@ -1129,14 +1136,14 @@ static __always_inline int PageAnonExclusive(const struct page *page)
>   
>   static __always_inline void SetPageAnonExclusive(struct page *page)
>   {
> -	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
> +	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);

!PageAnonNotKsm(page) ?

At least, such a change fixes booting of today's linux-next with debug
enabled on RISC-V based boards.
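
Spelled out, the tested change is (a sketch; the same form applies in
both SetPageAnonExclusive() and ClearPageAnonExclusive()):

	VM_BUG_ON_PGFLAGS(!PageAnonNotKsm(page), page);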


>   	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
>   	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
>   }
>   
>   static __always_inline void ClearPageAnonExclusive(struct page *page)
>   {
> -	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
> +	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);

ditto


>   	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
>   	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
>   }

Best regards
-- 
Marek Szyprowski, PhD
Samsung R&D Institute Poland




* Re: [PATCH 4/5] mm: Add PageAnonNotKsm()
  2024-10-02 15:25 ` [PATCH 4/5] mm: Add PageAnonNotKsm() Matthew Wilcox (Oracle)
       [not found]   ` <CGME20241004114615eucas1p1910b6b4e74f8a878f56104026eece731@eucas1p1.samsung.com>
@ 2024-10-06 14:11   ` kernel test robot
  2024-10-07  9:46   ` David Hildenbrand
  2 siblings, 0 replies; 13+ messages in thread
From: kernel test robot @ 2024-10-06 14:11 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle)
  Cc: oe-lkp, lkp, linux-mm, Andrew Morton, Matthew Wilcox (Oracle),
	Alex Shi, oliver.sang



Hello,

kernel test robot noticed "kernel_BUG_at_include/linux/page-flags.h" on:

commit: 3e721b4c4250773f06818e43821a30108775a59b ("[PATCH 4/5] mm: Add PageAnonNotKsm()")
url: https://github.com/intel-lab-lkp/linux/commits/Matthew-Wilcox-Oracle/ksm-Use-a-folio-in-try_to_merge_one_page/20241002-232657
base: https://git.kernel.org/cgit/linux/kernel/git/akpm/mm.git mm-everything
patch link: https://lore.kernel.org/all/20241002152533.1350629-5-willy@infradead.org/
patch subject: [PATCH 4/5] mm: Add PageAnonNotKsm()

in testcase: boot

compiler: clang-18
test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 16G

(please refer to attached dmesg/kmsg for entire log/backtrace)


+------------------------------------------+------------+------------+
|                                          | 3e3e3b90a7 | 3e721b4c42 |
+------------------------------------------+------------+------------+
| boot_successes                           | 6          | 0          |
| boot_failures                            | 0          | 6          |
| kernel_BUG_at_include/linux/page-flags.h | 0          | 6          |
| Oops:invalid_opcode:#[##]SMP             | 0          | 6          |
| RIP:folio_add_new_anon_rmap              | 0          | 6          |
| Kernel_panic-not_syncing:Fatal_exception | 0          | 6          |
+------------------------------------------+------------+------------+


If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202410062122.193c5b0e-oliver.sang@intel.com


[  219.772371][   T65] ------------[ cut here ]------------
[  219.783937][   T65] kernel BUG at include/linux/page-flags.h:1139!
[  219.798558][   T65] Oops: invalid opcode: 0000 [#1] SMP
[  219.800949][   T65] CPU: 1 UID: 0 PID: 65 Comm: kworker/u10:1 Tainted: G                T  6.12.0-rc1-00097-g3e721b4c4250 #1
[  219.800949][   T65] Tainted: [T]=RANDSTRUCT
[ 219.800949][ T65] RIP: 0010:folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 219.800949][ T65] Code: 40 ec ff 48 89 df 48 c7 c6 1a 3c 7b 90 e8 78 58 fc ff 90 0f 0b e8 40 d9 e7 ff 4c 89 ff 48 c7 c6 ef 26 78 90 e8 61 58 fc ff 90 <0f> 0b e8 29 d9 e7 ff 4c 89 ff 48 c7 c6 d9 61 68 90 e8 4a 58 fc ff
All code
========
   0:	40 ec                	rex in (%dx),%al
   2:	ff 48 89             	decl   -0x77(%rax)
   5:	df 48 c7             	fisttps -0x39(%rax)
   8:	c6                   	(bad)
   9:	1a 3c 7b             	sbb    (%rbx,%rdi,2),%bh
   c:	90                   	nop
   d:	e8 78 58 fc ff       	call   0xfffffffffffc588a
  12:	90                   	nop
  13:	0f 0b                	ud2
  15:	e8 40 d9 e7 ff       	call   0xffffffffffe7d95a
  1a:	4c 89 ff             	mov    %r15,%rdi
  1d:	48 c7 c6 ef 26 78 90 	mov    $0xffffffff907826ef,%rsi
  24:	e8 61 58 fc ff       	call   0xfffffffffffc588a
  29:	90                   	nop
  2a:*	0f 0b                	ud2		<-- trapping instruction
  2c:	e8 29 d9 e7 ff       	call   0xffffffffffe7d95a
  31:	4c 89 ff             	mov    %r15,%rdi
  34:	48 c7 c6 d9 61 68 90 	mov    $0xffffffff906861d9,%rsi
  3b:	e8 4a 58 fc ff       	call   0xfffffffffffc588a

Code starting with the faulting instruction
===========================================
   0:	0f 0b                	ud2
   2:	e8 29 d9 e7 ff       	call   0xffffffffffe7d930
   7:	4c 89 ff             	mov    %r15,%rdi
   a:	48 c7 c6 d9 61 68 90 	mov    $0xffffffff906861d9,%rsi
  11:	e8 4a 58 fc ff       	call   0xfffffffffffc5860
[  219.800949][   T65] RSP: 0000:ffff90094044fae8 EFLAGS: 00010246
[  219.800949][   T65] RAX: 0000000000000000 RBX: 0000000000000001 RCX: 0000000000000000
[  219.800949][   T65] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
[  219.800949][   T65] RBP: ffff90094044fb28 R08: 0000000000000000 R09: 0000000000000000
[  219.800949][   T65] R10: 0000000000000000 R11: 0000000000000000 R12: ffff900940107000
[  219.800949][   T65] R13: 00000007fffffffe R14: 0000000000000000 R15: fffff9dd10bf0ec0
[  219.800949][   T65] FS:  0000000000000000(0000) GS:ffff900c2fd00000(0000) knlGS:0000000000000000
[  219.800949][   T65] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  219.800949][   T65] CR2: 0000000000000000 CR3: 00000000a1a37000 CR4: 00000000000006b0
[  219.800949][   T65] Call Trace:
[  219.800949][   T65]  <TASK>
[ 219.800949][ T65] ? __die_body (arch/x86/kernel/dumpstack.c:421) 
[ 219.800949][ T65] ? die (arch/x86/kernel/dumpstack.c:?) 
[ 219.800949][ T65] ? do_trap (arch/x86/kernel/traps.c:171 arch/x86/kernel/traps.c:197) 
[ 219.800949][ T65] ? do_error_trap (arch/x86/kernel/traps.c:217) 
[ 219.800949][ T65] ? folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 219.800949][ T65] ? handle_invalid_op (arch/x86/kernel/traps.c:254) 
[ 219.800949][ T65] ? folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 219.800949][ T65] ? exc_invalid_op (arch/x86/kernel/traps.c:315) 
[ 219.800949][ T65] ? asm_exc_invalid_op (arch/x86/include/asm/idtentry.h:621) 
[ 219.800949][ T65] ? folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 219.800949][ T65] ? folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 219.800949][ T65] handle_pte_fault (mm/memory.c:4842) 
[ 219.800949][ T65] ? ftrace_likely_update (kernel/trace/trace_branch.c:?) 
[ 219.800949][ T65] handle_mm_fault (mm/memory.c:?) 
[ 219.800949][ T65] ? handle_mm_fault (arch/x86/include/asm/current.h:49 mm/memory.c:6062) 
[ 219.800949][ T65] __get_user_pages (mm/gup.c:1189) 
[ 219.800949][ T65] ? ftrace_likely_update (kernel/trace/trace_branch.c:?) 
[ 219.800949][ T65] ? is_valid_gup_args (mm/gup.c:2546) 
[ 219.800949][ T65] get_user_pages_remote (mm/gup.c:?) 
[ 219.800949][ T65] get_arg_page (fs/exec.c:225) 
[ 219.800949][ T65] copy_string_kernel (fs/exec.c:685) 
[ 219.800949][ T65] ? __cond_resched (kernel/sched/core.c:?) 
[ 219.800949][ T65] kernel_execve (fs/exec.c:2000) 
[ 219.800949][ T65] call_usermodehelper_exec_async (kernel/umh.c:110) 
[ 219.800949][ T65] ? __cfi_call_usermodehelper_exec_async (kernel/umh.c:65) 
[ 219.800949][ T65] ret_from_fork (arch/x86/kernel/process.c:153) 
[ 219.800949][ T65] ? __cfi_call_usermodehelper_exec_async (kernel/umh.c:65) 
[ 219.800949][ T65] ret_from_fork_asm (arch/x86/entry/entry_64.S:257) 
[  219.800949][   T65]  </TASK>
[  219.800949][   T65] Modules linked in:
[  220.299454][   T65] ---[ end trace 0000000000000000 ]---
[ 220.310211][ T65] RIP: 0010:folio_add_new_anon_rmap (include/linux/page-flags.h:1139) 
[ 220.322429][ T65] Code: 40 ec ff 48 89 df 48 c7 c6 1a 3c 7b 90 e8 78 58 fc ff 90 0f 0b e8 40 d9 e7 ff 4c 89 ff 48 c7 c6 ef 26 78 90 e8 61 58 fc ff 90 <0f> 0b e8 29 d9 e7 ff 4c 89 ff 48 c7 c6 d9 61 68 90 e8 4a 58 fc ff


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20241006/202410062122.193c5b0e-oliver.sang@intel.com



-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki




* Re: [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page()
  2024-10-02 15:25 ` [PATCH 1/5] ksm: Use a folio in try_to_merge_one_page() Matthew Wilcox (Oracle)
@ 2024-10-07  9:43   ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-10-07  9:43 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.24 17:25, Matthew Wilcox (Oracle) wrote:
> It is safe to use a folio here because all callers took a refcount on
> this page.  The one wrinkle is that we have to recalculate the value
> of folio after splitting the page, since it has probably changed.
> This replaces nine calls to compound_head() with one.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   mm/ksm.c | 33 +++++++++++++++++----------------
>   1 file changed, 17 insertions(+), 16 deletions(-)
> 
> diff --git a/mm/ksm.c b/mm/ksm.c
> index a2e2a521df0a..57f998b172e6 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1443,28 +1443,29 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
>   static int try_to_merge_one_page(struct vm_area_struct *vma,
>   				 struct page *page, struct page *kpage)
>   {
> +	struct folio *folio = page_folio(page);
>   	pte_t orig_pte = __pte(0);
>   	int err = -EFAULT;
>   
>   	if (page == kpage)			/* ksm page forked */
>   		return 0;
>   
> -	if (!PageAnon(page))
> +	if (!folio_test_anon(folio))
>   		goto out;
>   
>   	/*
>   	 * We need the folio lock to read a stable swapcache flag in
> -	 * write_protect_page().  We use trylock_page() instead of
> -	 * lock_page() because we don't want to wait here - we
> -	 * prefer to continue scanning and merging different pages,
> -	 * then come back to this page when it is unlocked.
> +	 * write_protect_page().  We trylock because we don't want to wait
> +	 * here - we prefer to continue scanning and merging different
> +	 * pages, then come back to this page when it is unlocked.
>   	 */
> -	if (!trylock_page(page))
> +	if (!folio_trylock(folio))
>   		goto out;
>   
> -	if (PageTransCompound(page)) {
> +	if (folio_test_large(folio)) {
>   		if (split_huge_page(page))
>   			goto out_unlock;
> +		folio = page_folio(page);
>   	}
>   
>   	/*
> @@ -1473,28 +1474,28 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
>   	 * ptes are necessarily already write-protected.  But in either
>   	 * case, we need to lock and check page_count is not raised.
>   	 */
> -	if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
> +	if (write_protect_page(vma, folio, &orig_pte) == 0) {
>   		if (!kpage) {
>   			/*
> -			 * While we hold page lock, upgrade page from
> -			 * PageAnon+anon_vma to PageKsm+NULL stable_node:
> +			 * While we hold folio lock, upgrade folio from
> +			 * anon to a NULL stable_node with the KSM flag set:
>   			 * stable_tree_insert() will update stable_node.
>   			 */
> -			folio_set_stable_node(page_folio(page), NULL);
> -			mark_page_accessed(page);
> +			folio_set_stable_node(folio, NULL);
> +			folio_mark_accessed(folio);
>   			/*
> -			 * Page reclaim just frees a clean page with no dirty
> +			 * Page reclaim just frees a clean folio with no dirty
>   			 * ptes: make sure that the ksm page would be swapped.
>   			 */
> -			if (!PageDirty(page))
> -				SetPageDirty(page);
> +			if (!folio_test_dirty(folio))
> +				folio_mark_dirty(folio);

Wouldn't the direct translation be folio_set_dirty()?

I guess folio_mark_dirty() will work as well, as we'll usually end up in 
noop_dirty_folio() where we do a folio_test_set_dirty().
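
Roughly, the distinction (a sketch, not the full implementations):

	folio_set_dirty(folio);		/* only sets the PG_dirty flag */
	folio_mark_dirty(folio);	/* full dirtying path; for anon
					 * folios this usually ends up in
					 * noop_dirty_folio(), which does
					 * folio_test_set_dirty() */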

-- 
Cheers,

David / dhildenb




* Re: [PATCH 2/5] ksm: Convert cmp_and_merge_page() to use a folio
  2024-10-02 15:25 ` [PATCH 2/5] ksm: Convert cmp_and_merge_page() to use a folio Matthew Wilcox (Oracle)
@ 2024-10-07  9:44   ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-10-07  9:44 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.24 17:25, Matthew Wilcox (Oracle) wrote:
> By making try_to_merge_two_pages() and stable_tree_search() return a
> folio, we can replace kpage with kfolio.  This replaces seven calls to
> compound_head() with one.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH 3/5] ksm: Convert should_skip_rmap_item() to take a folio
  2024-10-02 15:25 ` [PATCH 3/5] ksm: Convert should_skip_rmap_item() to take " Matthew Wilcox (Oracle)
@ 2024-10-07  9:46   ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-10-07  9:46 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.24 17:25, Matthew Wilcox (Oracle) wrote:
> Remove a call to PageKsm() by passing the folio containing tmp_page to
> should_skip_rmap_item().  This removes a hidden call to compound_head().

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH 4/5] mm: Add PageAnonNotKsm()
  2024-10-02 15:25 ` [PATCH 4/5] mm: Add PageAnonNotKsm() Matthew Wilcox (Oracle)
       [not found]   ` <CGME20241004114615eucas1p1910b6b4e74f8a878f56104026eece731@eucas1p1.samsung.com>
  2024-10-06 14:11   ` kernel test robot
@ 2024-10-07  9:46   ` David Hildenbrand
  2 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-10-07  9:46 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.24 17:25, Matthew Wilcox (Oracle) wrote:
> Check that this anonymous page is really anonymous, not
> anonymous-or-KSM.  This optimises the debug check, but its real purpose
> is to remove the last two users of PageKsm().
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---
>   include/linux/page-flags.h | 11 +++++++++--
>   1 file changed, 9 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 4c2dfe289046..157c4ffc2fdc 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -689,6 +689,13 @@ static __always_inline bool folio_test_anon(const struct folio *folio)
>   	return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
>   }
>   
> +static __always_inline bool PageAnonNotKsm(const struct page *page)
> +{
> +	unsigned long flags = (unsigned long)page_folio(page)->mapping;
> +
> +	return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
> +}
> +
>   static __always_inline bool PageAnon(const struct page *page)
>   {
>   	return folio_test_anon(page_folio(page));
> @@ -1129,14 +1136,14 @@ static __always_inline int PageAnonExclusive(const struct page *page)
>   
>   static __always_inline void SetPageAnonExclusive(struct page *page)
>   {
> -	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
> +	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);
>   	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
>   	set_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
>   }
>   
>   static __always_inline void ClearPageAnonExclusive(struct page *page)
>   {
> -	VM_BUG_ON_PGFLAGS(!PageAnon(page) || PageKsm(page), page);
> +	VM_BUG_ON_PGFLAGS(PageAnonNotKsm(page), page);
>   	VM_BUG_ON_PGFLAGS(PageHuge(page) && !PageHead(page), page);
>   	clear_bit(PG_anon_exclusive, &PF_ANY(page, 1)->flags);
>   }

With the "!" added in both cases

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb




* Re: [PATCH 5/5] mm: Remove PageKsm()
  2024-10-02 15:25 ` [PATCH 5/5] mm: Remove PageKsm() Matthew Wilcox (Oracle)
@ 2024-10-07  9:47   ` David Hildenbrand
  0 siblings, 0 replies; 13+ messages in thread
From: David Hildenbrand @ 2024-10-07  9:47 UTC (permalink / raw)
  To: Matthew Wilcox (Oracle), Andrew Morton; +Cc: linux-mm, Alex Shi

On 02.10.24 17:25, Matthew Wilcox (Oracle) wrote:
> All callers have been converted to use folio_test_ksm() or
> PageAnonNotKsm(), so we can remove this wrapper.
> 
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> ---

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb



