* Re: [PATCH v2 00/14] transfer page to folio in KSM
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
@ 2024-03-25 12:47 ` David Hildenbrand
2024-03-25 12:48 ` [PATCH v3 01/14] mm/ksm: add ksm_get_folio alexs
` (13 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand @ 2024-03-25 12:47 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
On 25.03.24 13:48, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> This is the first part of the page to folio conversion in KSM. Since
> only single pages can be stored in KSM, we can safely convert the
> stable tree pages to folios.
You should slow down a bit with new versions.
--
Cheers,
David / dhildenb
* [PATCH v3 01/14] mm/ksm: add ksm_get_folio
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
2024-03-25 12:47 ` David Hildenbrand
@ 2024-03-25 12:48 ` alexs
2024-04-05 7:23 ` David Hildenbrand
2024-03-25 12:48 ` [PATCH v3 02/14] mm/ksm: use folio in remove_rmap_item_from_tree alexs
` (12 subsequent siblings)
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
KSM only stores single pages, so we can add a new function,
ksm_get_folio(), as a folio version of get_ksm_page(), saving a couple
of compound_head() calls.
Once all callers are converted, get_ksm_page() will be removed.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 40 ++++++++++++++++++++++++----------------
1 file changed, 24 insertions(+), 16 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 8c001819cf10..ac080235b002 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -915,10 +915,10 @@ enum get_ksm_page_flags {
* a page to put something that might look like our key in page->mapping.
* is on its way to being freed; but it is an anomaly to bear in mind.
*/
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
+static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
enum get_ksm_page_flags flags)
{
- struct page *page;
+ struct folio *folio;
void *expected_mapping;
unsigned long kpfn;
@@ -926,8 +926,8 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
PAGE_MAPPING_KSM);
again:
kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
- page = pfn_to_page(kpfn);
- if (READ_ONCE(page->mapping) != expected_mapping)
+ folio = pfn_folio(kpfn);
+ if (READ_ONCE(folio->mapping) != expected_mapping)
goto stale;
/*
@@ -940,41 +940,41 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
* in folio_migrate_mapping(), it might still be our page,
* in which case it's essential to keep the node.
*/
- while (!get_page_unless_zero(page)) {
+ while (!folio_try_get(folio)) {
/*
* Another check for page->mapping != expected_mapping would
* work here too. We have chosen the !PageSwapCache test to
* optimize the common case, when the page is or is about to
* be freed: PageSwapCache is cleared (under spin_lock_irq)
* in the ref_freeze section of __remove_mapping(); but Anon
- * page->mapping reset to NULL later, in free_pages_prepare().
+ * folio->mapping reset to NULL later, in free_pages_prepare().
*/
- if (!PageSwapCache(page))
+ if (!folio_test_swapcache(folio))
goto stale;
cpu_relax();
}
- if (READ_ONCE(page->mapping) != expected_mapping) {
- put_page(page);
+ if (READ_ONCE(folio->mapping) != expected_mapping) {
+ folio_put(folio);
goto stale;
}
if (flags == GET_KSM_PAGE_TRYLOCK) {
- if (!trylock_page(page)) {
- put_page(page);
+ if (!folio_trylock(folio)) {
+ folio_put(folio);
return ERR_PTR(-EBUSY);
}
} else if (flags == GET_KSM_PAGE_LOCK)
- lock_page(page);
+ folio_lock(folio);
if (flags != GET_KSM_PAGE_NOLOCK) {
- if (READ_ONCE(page->mapping) != expected_mapping) {
- unlock_page(page);
- put_page(page);
+ if (READ_ONCE(folio->mapping) != expected_mapping) {
+ folio_unlock(folio);
+ folio_put(folio);
goto stale;
}
}
- return page;
+ return folio;
stale:
/*
@@ -990,6 +990,14 @@ static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
return NULL;
}
+static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
+ enum get_ksm_page_flags flags)
+{
+ struct folio *folio = ksm_get_folio(stable_node, flags);
+
+ return &folio->page;
+}
+
/*
* Removing rmap_item from stable or unstable tree.
* This function will clean the information from the stable/unstable tree.
--
2.43.0
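The compound_head() savings cited throughout this series come from the
fact that every page-based helper must re-derive the head page on each
call, while a folio is by definition already a head page. A minimal
userspace model of that difference follows; the struct layout and helper
names are simplified stand-ins, not the kernel's real ones.

#include <stdio.h>

/* Toy model: a page may be a tail page pointing at its head. */
struct page { struct page *head; int refcount; };
struct folio { struct page page; };     /* a folio is always a head page */

static struct page *compound_head(struct page *p)
{
        return p->head ? p->head : p;   /* extra lookup on every page call */
}

static int page_count(struct page *p)
{
        return compound_head(p)->refcount;      /* normalizes every time */
}

static int folio_ref_count(struct folio *f)
{
        return f->page.refcount;        /* caller already has the head */
}

int main(void)
{
        struct folio f = { { NULL, 3 } };
        struct page tail = { &f.page, 0 };

        printf("%d %d\n", page_count(&tail), folio_ref_count(&f)); /* 3 3 */
        return 0;
}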
* Re: [PATCH v3 01/14] mm/ksm: add ksm_get_folio
2024-03-25 12:48 ` [PATCH v3 01/14] mm/ksm: add ksm_get_folio alexs
@ 2024-04-05 7:23 ` David Hildenbrand
0 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 7:23 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:48, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> KSM only stores single pages, so we can add a new function,
> ksm_get_folio(), as a folio version of get_ksm_page(), saving a couple
> of compound_head() calls.
>
> Once all callers are converted, get_ksm_page() will be removed.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
> mm/ksm.c | 40 ++++++++++++++++++++++++----------------
> 1 file changed, 24 insertions(+), 16 deletions(-)
>
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* [PATCH v3 02/14] mm/ksm: use folio in remove_rmap_item_from_tree
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
2024-03-25 12:47 ` David Hildenbrand
2024-03-25 12:48 ` [PATCH v3 01/14] mm/ksm: add ksm_get_folio alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 03/14] mm/ksm: add folio_set_stable_node alexs
` (11 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Use folios in remove_rmap_item_from_tree() to save two compound_head()
calls.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index ac080235b002..ea3dabf71e47 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1006,16 +1006,16 @@ static void remove_rmap_item_from_tree(struct ksm_rmap_item *rmap_item)
{
if (rmap_item->address & STABLE_FLAG) {
struct ksm_stable_node *stable_node;
- struct page *page;
+ struct folio *folio;
stable_node = rmap_item->head;
- page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
- if (!page)
+ folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
+ if (!folio)
goto out;
hlist_del(&rmap_item->hlist);
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
if (!hlist_empty(&stable_node->hlist))
ksm_pages_sharing--;
--
2.43.0
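The unlock-then-put order in the hunk above is the standard folio
lifecycle: the reference taken by ksm_get_folio() is dropped only after
the lock is released, since dropping the last reference may free the
object. A rough userspace analogue of that ordering, with hypothetical
names:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct obj {
        pthread_mutex_t lock;
        atomic_int refcount;
};

static void obj_put(struct obj *o)
{
        /* Dropping the last reference frees the object. */
        if (atomic_fetch_sub(&o->refcount, 1) == 1) {
                pthread_mutex_destroy(&o->lock);
                free(o);
        }
}

static void remove_item(struct obj *o)
{
        pthread_mutex_lock(&o->lock);           /* folio_lock() analogue */
        /* ... unlink the item while the object is pinned and locked ... */
        pthread_mutex_unlock(&o->lock);         /* folio_unlock() first ... */
        obj_put(o);                             /* ... folio_put() last */
}

int main(void)
{
        struct obj *o = malloc(sizeof(*o));

        pthread_mutex_init(&o->lock, NULL);
        atomic_init(&o->refcount, 1);
        remove_item(o);                         /* frees o */
        puts("done");
        return 0;
}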
* [PATCH v3 03/14] mm/ksm: add folio_set_stable_node
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (2 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 02/14] mm/ksm: use folio in remove_rmap_item_from_tree alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 04/14] mm/ksm: use folio in remove_stable_node alexs
` (10 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Add folio_set_stable_node() as a wrapper around set_page_stable_node(),
and use it to replace the latter. The two will be merged once all
places are converted to folios.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index ea3dabf71e47..c9b7c5701f22 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1109,6 +1109,12 @@ static inline void set_page_stable_node(struct page *page,
page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
}
+static inline void folio_set_stable_node(struct folio *folio,
+ struct ksm_stable_node *stable_node)
+{
+ set_page_stable_node(&folio->page, stable_node);
+}
+
#ifdef CONFIG_SYSFS
/*
* Only called through the sysfs control interface:
@@ -3241,7 +3247,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
* has gone stale (or that folio_test_swapcache has been cleared).
*/
smp_wmb();
- set_page_stable_node(&folio->page, NULL);
+ folio_set_stable_node(folio, NULL);
}
}
#endif /* CONFIG_MIGRATION */
--
2.43.0
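folio_set_stable_node() works by mapping-pointer tagging: a stable node
pointer is word-aligned, so its low bits are free to carry the
PAGE_MAPPING_KSM type flag inside folio->mapping. A standalone sketch of
that encoding; the flag value and helper names here are illustrative,
not the kernel's:

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define MAPPING_KSM 0x2UL       /* illustrative low-bit tag */

struct stable_node { unsigned long kpfn; }; /* aligned, low bits unused */

static void *tag_mapping(struct stable_node *node)
{
        return (void *)((uintptr_t)node | MAPPING_KSM);
}

static struct stable_node *untag_mapping(void *mapping)
{
        return (struct stable_node *)((uintptr_t)mapping & ~MAPPING_KSM);
}

static int mapping_is_ksm(void *mapping)
{
        return ((uintptr_t)mapping & MAPPING_KSM) != 0;
}

int main(void)
{
        struct stable_node node = { 42 };
        void *mapping = tag_mapping(&node);

        assert(mapping_is_ksm(mapping));
        printf("kpfn=%lu\n", untag_mapping(mapping)->kpfn);
        return 0;
}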
* [PATCH v3 04/14] mm/ksm: use folio in remove_stable_node
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (3 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 03/14] mm/ksm: add folio_set_stable_node alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 05/14] mm/ksm: use folio in stable_node_dup alexs
` (9 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Pages in the stable tree are all single normal pages, so use
ksm_get_folio() and folio_set_stable_node() here; this also saves three
calls to compound_head().
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 20 ++++++++++----------
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index c9b7c5701f22..b6ee2bc7646f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1121,11 +1121,11 @@ static inline void folio_set_stable_node(struct folio *folio,
*/
static int remove_stable_node(struct ksm_stable_node *stable_node)
{
- struct page *page;
+ struct folio *folio;
int err;
- page = get_ksm_page(stable_node, GET_KSM_PAGE_LOCK);
- if (!page) {
+ folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
+ if (!folio) {
/*
* get_ksm_page did remove_node_from_stable_tree itself.
*/
@@ -1138,22 +1138,22 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
* merge_across_nodes/max_page_sharing be switched.
*/
err = -EBUSY;
- if (!page_mapped(page)) {
+ if (!folio_mapped(folio)) {
/*
* The stable node did not yet appear stale to get_ksm_page(),
- * since that allows for an unmapped ksm page to be recognized
+ * since that allows for an unmapped ksm folio to be recognized
* right up until it is freed; but the node is safe to remove.
- * This page might be in an LRU cache waiting to be freed,
- * or it might be PageSwapCache (perhaps under writeback),
+ * This folio might be in an LRU cache waiting to be freed,
+ * or it might be in the swapcache (perhaps under writeback),
* or it might have been removed from swapcache a moment ago.
*/
- set_page_stable_node(page, NULL);
+ folio_set_stable_node(folio, NULL);
remove_node_from_stable_tree(stable_node);
err = 0;
}
- unlock_page(page);
- put_page(page);
+ folio_unlock(folio);
+ folio_put(folio);
return err;
}
--
2.43.0
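ksm_get_folio(), which this patch switches to, pins the folio
speculatively: it only takes a reference if the refcount is not already
zero, so a folio that is on its way to being freed is treated as stale
rather than resurrected. A userspace sketch of that get-unless-zero
idiom (names invented for the example):

#include <stdatomic.h>
#include <stdio.h>

static int ref_get_unless_zero(atomic_int *ref)
{
        int old = atomic_load(ref);

        /* Only take a reference if the object is still alive (ref > 0). */
        while (old > 0) {
                if (atomic_compare_exchange_weak(ref, &old, old + 1))
                        return 1;
        }
        return 0;       /* being freed; caller must treat it as stale */
}

int main(void)
{
        atomic_int live = 1, dying = 0;

        printf("live:  %s\n", ref_get_unless_zero(&live) ? "pinned" : "stale");
        printf("dying: %s\n", ref_get_unless_zero(&dying) ? "pinned" : "stale");
        return 0;
}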
* [PATCH v3 05/14] mm/ksm: use folio in stable_node_dup
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (4 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 04/14] mm/ksm: use folio in remove_stable_node alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 06/14] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item alexs
` (8 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Use ksm_get_folio() and save two compound_head() calls.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index b6ee2bc7646f..aa80fbf3a8e0 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1638,7 +1638,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
{
struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
struct hlist_node *hlist_safe;
- struct page *_tree_page, *tree_page = NULL;
+ struct folio *folio, *tree_folio = NULL;
int nr = 0;
int found_rmap_hlist_len;
@@ -1663,18 +1663,18 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
* stable_node parameter itself will be freed from
* under us if it returns NULL.
*/
- _tree_page = get_ksm_page(dup, GET_KSM_PAGE_NOLOCK);
- if (!_tree_page)
+ folio = ksm_get_folio(dup, GET_KSM_PAGE_NOLOCK);
+ if (!folio)
continue;
nr += 1;
if (is_page_sharing_candidate(dup)) {
if (!found ||
dup->rmap_hlist_len > found_rmap_hlist_len) {
if (found)
- put_page(tree_page);
+ folio_put(tree_folio);
found = dup;
found_rmap_hlist_len = found->rmap_hlist_len;
- tree_page = _tree_page;
+ tree_folio = folio;
/* skip put_page for found dup */
if (!prune_stale_stable_nodes)
@@ -1682,7 +1682,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
continue;
}
}
- put_page(_tree_page);
+ folio_put(folio);
}
if (found) {
@@ -1747,7 +1747,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
}
*_stable_node_dup = found;
- return tree_page;
+ return &tree_folio->page;
}
static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
--
2.43.0
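The converted loop above is a "keep the best, release the rest" walk:
each candidate folio stays pinned only while it is inspected, and
exactly one reference (the best sharing candidate's) survives the loop.
A schematic version of that idiom with a toy refcount:

#include <stdio.h>

struct cand { int score; int refs; };

static void cand_get(struct cand *c) { c->refs++; }
static void cand_put(struct cand *c) { c->refs--; }

int main(void)
{
        struct cand cands[] = { {3, 0}, {7, 0}, {5, 0} };
        struct cand *found = NULL;

        for (int i = 0; i < 3; i++) {
                struct cand *c = &cands[i];

                cand_get(c);            /* pin while inspecting */
                if (!found || c->score > found->score) {
                        if (found)
                                cand_put(found); /* release previous best */
                        found = c;      /* keep the pin on the new best */
                        continue;
                }
                cand_put(c);            /* not the best: drop the pin */
        }
        printf("best=%d, still pinned=%d\n", found->score, found->refs);
        return 0;
}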
* [PATCH v3 06/14] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (5 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 05/14] mm/ksm: use folio in stable_node_dup alexs
@ 2024-03-25 12:48 ` alexs
2024-04-05 7:28 ` David Hildenbrand
2024-03-25 12:48 ` [PATCH v3 07/14] mm/ksm: use folio in write_protect_page alexs
` (7 subsequent siblings)
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Save a compound_head() call.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index aa80fbf3a8e0..95a487a21eed 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2611,14 +2611,14 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
*/
if (!ksm_merge_across_nodes) {
struct ksm_stable_node *stable_node, *next;
- struct page *page;
+ struct folio *folio;
list_for_each_entry_safe(stable_node, next,
&migrate_nodes, list) {
- page = get_ksm_page(stable_node,
- GET_KSM_PAGE_NOLOCK);
- if (page)
- put_page(page);
+ folio = ksm_get_folio(stable_node,
+ GET_KSM_PAGE_NOLOCK);
+ if (folio)
+ folio_put(folio);
cond_resched();
}
}
--
2.43.0
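The walk above must use list_for_each_entry_safe() because
ksm_get_folio() can free a stale stable node, and with it the list
element, mid-iteration; the safe variant caches the next element before
visiting the current one. A minimal userspace equivalent of deleting
while walking:

#include <stdio.h>
#include <stdlib.h>

struct node { int stale; struct node *next; };

int main(void)
{
        struct node *head = NULL;

        for (int i = 0; i < 5; i++) {
                struct node *n = malloc(sizeof(*n));
                *n = (struct node){ .stale = (i % 2 == 0), .next = head };
                head = n;
        }

        /* Safe walk: fetch 'next' before the current node may be freed. */
        struct node **link = &head;
        for (struct node *n = head, *next; n; n = next) {
                next = n->next;
                if (n->stale) {
                        *link = next;   /* unlink and free current node */
                        free(n);
                } else {
                        link = &n->next;
                }
        }

        for (struct node *n = head; n; n = n->next)
                printf("kept %d\n", n->stale);  /* two survivors */
        return 0;
}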
* Re: [PATCH v3 06/14] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item
2024-03-25 12:48 ` [PATCH v3 06/14] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item alexs
@ 2024-04-05 7:28 ` David Hildenbrand
0 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 7:28 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:48, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> Save a compound_head() call.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
> mm/ksm.c | 10 +++++-----
> 1 file changed, 5 insertions(+), 5 deletions(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index aa80fbf3a8e0..95a487a21eed 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -2611,14 +2611,14 @@ static struct ksm_rmap_item *scan_get_next_rmap_item(struct page **page)
> */
> if (!ksm_merge_across_nodes) {
> struct ksm_stable_node *stable_node, *next;
> - struct page *page;
> + struct folio *folio;
>
> list_for_each_entry_safe(stable_node, next,
> &migrate_nodes, list) {
> - page = get_ksm_page(stable_node,
> - GET_KSM_PAGE_NOLOCK);
> - if (page)
> - put_page(page);
> + folio = ksm_get_folio(stable_node,
> + GET_KSM_PAGE_NOLOCK);
> + if (folio)
> + folio_put(folio);
> cond_resched();
> }
> }
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* [PATCH v3 07/14] mm/ksm: use folio in write_protect_page
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (6 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 06/14] mm/ksm: use ksm_get_folio in scan_get_next_rmap_item alexs
@ 2024-03-25 12:48 ` alexs
[not found] ` <e16806b0-7c17-4356-8b47-30f624756e85@redhat.com>
2024-03-25 12:48 ` [PATCH v3 08/14] mm/ksm: Convert chain series funcs to use folio alexs
` (6 subsequent siblings)
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Compound pages are checked and skipped before write_protect_page() is
called, so a folio can be used here to save a few compound_head() calls.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 95a487a21eed..5d1f62e7462a 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1289,22 +1289,22 @@ static u32 calc_checksum(struct page *page)
return checksum;
}
-static int write_protect_page(struct vm_area_struct *vma, struct page *page,
+static int write_protect_page(struct vm_area_struct *vma, struct folio *folio,
pte_t *orig_pte)
{
struct mm_struct *mm = vma->vm_mm;
- DEFINE_PAGE_VMA_WALK(pvmw, page, vma, 0, 0);
+ DEFINE_FOLIO_VMA_WALK(pvmw, folio, vma, 0, 0);
int swapped;
int err = -EFAULT;
struct mmu_notifier_range range;
bool anon_exclusive;
pte_t entry;
- pvmw.address = page_address_in_vma(page, vma);
+ pvmw.address = page_address_in_vma(&folio->page, vma);
if (pvmw.address == -EFAULT)
goto out;
- BUG_ON(PageTransCompound(page));
+ VM_BUG_ON(folio_test_large(folio));
mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, pvmw.address,
pvmw.address + PAGE_SIZE);
@@ -1315,12 +1315,12 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
if (WARN_ONCE(!pvmw.pte, "Unexpected PMD mapping?"))
goto out_unlock;
- anon_exclusive = PageAnonExclusive(page);
+ anon_exclusive = PageAnonExclusive(&folio->page);
entry = ptep_get(pvmw.pte);
if (pte_write(entry) || pte_dirty(entry) ||
anon_exclusive || mm_tlb_flush_pending(mm)) {
- swapped = PageSwapCache(page);
- flush_cache_page(vma, pvmw.address, page_to_pfn(page));
+ swapped = folio_test_swapcache(folio);
+ flush_cache_page(vma, pvmw.address, folio_pfn(folio));
/*
* Ok this is tricky, when get_user_pages_fast() run it doesn't
* take any lock, therefore the check that we are going to make
@@ -1340,20 +1340,20 @@ static int write_protect_page(struct vm_area_struct *vma, struct page *page,
* Check that no O_DIRECT or similar I/O is in progress on the
* page
*/
- if (page_mapcount(page) + 1 + swapped != page_count(page)) {
+ if (folio_mapcount(folio) + 1 + swapped != folio_ref_count(folio)) {
set_pte_at(mm, pvmw.address, pvmw.pte, entry);
goto out_unlock;
}
/* See folio_try_share_anon_rmap_pte(): clear PTE first. */
if (anon_exclusive &&
- folio_try_share_anon_rmap_pte(page_folio(page), page)) {
+ folio_try_share_anon_rmap_pte(folio, &folio->page)) {
set_pte_at(mm, pvmw.address, pvmw.pte, entry);
goto out_unlock;
}
if (pte_dirty(entry))
- set_page_dirty(page);
+ folio_mark_dirty(folio);
entry = pte_mkclean(entry);
if (pte_write(entry))
@@ -1519,7 +1519,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
* ptes are necessarily already write-protected. But in either
* case, we need to lock and check page_count is not raised.
*/
- if (write_protect_page(vma, page, &orig_pte) == 0) {
+ if (write_protect_page(vma, page_folio(page), &orig_pte) == 0) {
if (!kpage) {
/*
* While we hold page lock, upgrade page from
--
2.43.0
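The folio_mapcount() test converted above is a reference budget check:
the only legitimate references to the page at this point are its PTE
mappings, a possible swapcache reference, and the one the caller holds,
so any surplus implies a concurrent user such as O_DIRECT and the merge
must be aborted. The same arithmetic as a standalone sketch (the numbers
are hypothetical):

#include <stdio.h>

/* Return 1 if it is safe to write-protect, 0 if extra references exist. */
static int safe_to_write_protect(int mapcount, int swapped, int refcount)
{
        /* expected = PTE mappings + our own pin + one for the swapcache */
        return mapcount + 1 + swapped == refcount;
}

int main(void)
{
        /* one mapping, not in swapcache, only our pin on top: safe */
        printf("%d\n", safe_to_write_protect(1, 0, 2));
        /* same, but an O_DIRECT-style extra reference is present: unsafe */
        printf("%d\n", safe_to_write_protect(1, 0, 3));
        return 0;
}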
* [PATCH v3 08/14] mm/ksm: Convert chain series funcs to use folio
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (7 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 07/14] mm/ksm: use folio in write_protect_page alexs
@ 2024-03-25 12:48 ` alexs
2024-04-05 7:36 ` David Hildenbrand
2024-03-25 12:48 ` [PATCH v3 09/14] mm/ksm: Convert stable_tree_insert " alexs
` (5 subsequent siblings)
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
In the KSM stable tree all pages are single, so let's convert the chain
functions to use folios.
Changing the return type to 'void *' is ugly, but for a series of
functions it is still a bit simpler than adding new ones, and they will
be changed to 'struct folio *' soon.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 5d1f62e7462a..7188997437d3 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1777,7 +1777,7 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
* function and will be overwritten in all cases, the caller doesn't
* need to initialize it.
*/
-static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
+static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
struct ksm_stable_node **_stable_node,
struct rb_root *root,
bool prune_stale_stable_nodes)
@@ -1799,24 +1799,24 @@ static struct page *__stable_node_chain(struct ksm_stable_node **_stable_node_du
prune_stale_stable_nodes);
}
-static __always_inline struct page *chain_prune(struct ksm_stable_node **s_n_d,
+static __always_inline void *chain_prune(struct ksm_stable_node **s_n_d,
struct ksm_stable_node **s_n,
struct rb_root *root)
{
return __stable_node_chain(s_n_d, s_n, root, true);
}
-static __always_inline struct page *chain(struct ksm_stable_node **s_n_d,
+static __always_inline void *chain(struct ksm_stable_node **s_n_d,
struct ksm_stable_node *s_n,
struct rb_root *root)
{
struct ksm_stable_node *old_stable_node = s_n;
- struct page *tree_page;
+ struct folio *tree_folio;
- tree_page = __stable_node_chain(s_n_d, &s_n, root, false);
+ tree_folio = __stable_node_chain(s_n_d, &s_n, root, false);
/* not pruning dups so s_n cannot have changed */
VM_BUG_ON(s_n != old_stable_node);
- return tree_page;
+ return tree_folio;
}
/*
--
2.43.0
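The transitional 'void *' return type leans on a C property: void *
converts implicitly to any object pointer type, so during the conversion
the same helper can be assigned to either a page or a folio pointer
without casts, at the cost of type safety (which a later patch in the
series restores). A small illustration:

#include <stdio.h>

struct folio { int id; };

/* Transitional helper: really produces a folio, but declared void *. */
static void *lookup(struct folio *f)
{
        return f;
}

int main(void)
{
        struct folio f = { 7 };
        struct folio *fp = lookup(&f);  /* implicit conversion, no cast */

        printf("%d\n", fp->id);
        return 0;
}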
* Re: [PATCH v3 08/14] mm/ksm: Convert chain series funcs to use folio
2024-03-25 12:48 ` [PATCH v3 08/14] mm/ksm: Convert chain series funcs to use folio alexs
@ 2024-04-05 7:36 ` David Hildenbrand
0 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 7:36 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:48, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> In the KSM stable tree all pages are single, so let's convert the chain
> functions to use folios. Changing the return type to 'void *' is ugly,
> but for a series of functions it is still a bit simpler than adding new
> ones, and they will be changed to 'struct folio *' soon.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
Why not simply squash 8,9,10 and avoid this completely? There are not
that many relevant calls that need conversion.
--
Cheers,
David / dhildenb
* [PATCH v3 09/14] mm/ksm: Convert stable_tree_insert to use folio
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (8 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 08/14] mm/ksm: Convert chain series funcs to use folio alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 10/14] mm/ksm: Convert stable_tree_search " alexs
` (4 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
The KSM stable tree only stores single pages, so convert
stable_tree_insert() to use folios and save a few compound_head() calls.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 22 +++++++++++-----------
1 file changed, 11 insertions(+), 11 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 7188997437d3..c2afe2f926db 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -2078,7 +2078,7 @@ static struct page *stable_tree_search(struct page *page)
* This function returns the stable tree node just allocated on success,
* NULL otherwise.
*/
-static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
+static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
{
int nid;
unsigned long kpfn;
@@ -2088,7 +2088,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
bool need_chain = false;
- kpfn = page_to_pfn(kpage);
+ kpfn = folio_pfn(kfolio);
nid = get_kpfn_nid(kpfn);
root = root_stable_tree + nid;
again:
@@ -2096,13 +2096,13 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
new = &root->rb_node;
while (*new) {
- struct page *tree_page;
+ struct folio *tree_folio;
int ret;
cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
stable_node_any = NULL;
- tree_page = chain(&stable_node_dup, stable_node, root);
+ tree_folio = chain(&stable_node_dup, stable_node, root);
if (!stable_node_dup) {
/*
* Either all stable_node dups were full in
@@ -2124,11 +2124,11 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
* write protected at all times. Any will work
* fine to continue the walk.
*/
- tree_page = get_ksm_page(stable_node_any,
- GET_KSM_PAGE_NOLOCK);
+ tree_folio = ksm_get_folio(stable_node_any,
+ GET_KSM_PAGE_NOLOCK);
}
VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
- if (!tree_page) {
+ if (!tree_folio) {
/*
* If we walked over a stale stable_node,
* get_ksm_page() will call rb_erase() and it
@@ -2141,8 +2141,8 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
goto again;
}
- ret = memcmp_pages(kpage, tree_page);
- put_page(tree_page);
+ ret = memcmp_pages(&kfolio->page, &tree_folio->page);
+ folio_put(tree_folio);
parent = *new;
if (ret < 0)
@@ -2161,7 +2161,7 @@ static struct ksm_stable_node *stable_tree_insert(struct page *kpage)
INIT_HLIST_HEAD(&stable_node_dup->hlist);
stable_node_dup->kpfn = kpfn;
- set_page_stable_node(kpage, stable_node_dup);
+ folio_set_stable_node(kfolio, stable_node_dup);
stable_node_dup->rmap_hlist_len = 0;
DO_NUMA(stable_node_dup->nid = nid);
if (!need_chain) {
@@ -2439,7 +2439,7 @@ static void cmp_and_merge_page(struct page *page, struct ksm_rmap_item *rmap_ite
* node in the stable tree and add both rmap_items.
*/
lock_page(kpage);
- stable_node = stable_tree_insert(kpage);
+ stable_node = stable_tree_insert(page_folio(kpage));
if (stable_node) {
stable_tree_append(tree_rmap_item, stable_node,
false);
--
2.43.0
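stable_tree_insert() descends the red-black tree ordered purely by
memcmp_pages(), i.e. raw byte comparison of page contents. The same
ordering rule is easier to see as a binary search over sorted fixed-size
buffers; the rb-tree machinery is elided in this sketch:

#include <stdio.h>
#include <string.h>

#define PAGE_SZ 4       /* toy "page" size */

/* "Pages" kept sorted by memcmp order, as the stable tree effectively is. */
static const char tree[][PAGE_SZ] = { "aaa", "abc", "bbb", "zzz" };

static int tree_search(const char *page)
{
        int lo = 0, hi = sizeof(tree) / PAGE_SZ - 1;

        while (lo <= hi) {
                int mid = (lo + hi) / 2;
                int ret = memcmp(page, tree[mid], PAGE_SZ);

                if (ret < 0)
                        hi = mid - 1;   /* go left, as with rb_left */
                else if (ret > 0)
                        lo = mid + 1;   /* go right, as with rb_right */
                else
                        return mid;     /* identical content found */
        }
        return -1;                      /* no match: insertion candidate */
}

int main(void)
{
        printf("%d %d\n", tree_search("bbb"), tree_search("xyz")); /* 2 -1 */
        return 0;
}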
* [PATCH v3 10/14] mm/ksm: Convert stable_tree_search to use folio
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (9 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 09/14] mm/ksm: Convert stable_tree_insert " alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:48 ` [PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info alexs
` (3 subsequent siblings)
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Although the function may be passed a tail page to check its contents,
only single pages exist in the KSM stable tree, so we can still use
folios in stable_tree_search() to save a few compound_head() calls.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 58 +++++++++++++++++++++++++++++---------------------------
1 file changed, 30 insertions(+), 28 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index c2afe2f926db..e92445f29685 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1828,7 +1828,7 @@ static __always_inline void *chain(struct ksm_stable_node **s_n_d,
* This function returns the stable tree node of identical content if found,
* NULL otherwise.
*/
-static struct page *stable_tree_search(struct page *page)
+static void *stable_tree_search(struct page *page)
{
int nid;
struct rb_root *root;
@@ -1836,28 +1836,30 @@ static struct page *stable_tree_search(struct page *page)
struct rb_node *parent;
struct ksm_stable_node *stable_node, *stable_node_dup, *stable_node_any;
struct ksm_stable_node *page_node;
+ struct folio *folio;
- page_node = page_stable_node(page);
+ folio = page_folio(page);
+ page_node = folio_stable_node(folio);
if (page_node && page_node->head != &migrate_nodes) {
/* ksm page forked */
- get_page(page);
- return page;
+ folio_get(folio);
+ return folio;
}
- nid = get_kpfn_nid(page_to_pfn(page));
+ nid = get_kpfn_nid(folio_pfn(folio));
root = root_stable_tree + nid;
again:
new = &root->rb_node;
parent = NULL;
while (*new) {
- struct page *tree_page;
+ struct folio *tree_folio;
int ret;
cond_resched();
stable_node = rb_entry(*new, struct ksm_stable_node, node);
stable_node_any = NULL;
- tree_page = chain_prune(&stable_node_dup, &stable_node, root);
+ tree_folio = chain_prune(&stable_node_dup, &stable_node, root);
/*
* NOTE: stable_node may have been freed by
* chain_prune() if the returned stable_node_dup is
@@ -1891,11 +1893,11 @@ static struct page *stable_tree_search(struct page *page)
* write protected at all times. Any will work
* fine to continue the walk.
*/
- tree_page = get_ksm_page(stable_node_any,
- GET_KSM_PAGE_NOLOCK);
+ tree_folio = ksm_get_folio(stable_node_any,
+ GET_KSM_PAGE_NOLOCK);
}
VM_BUG_ON(!stable_node_dup ^ !!stable_node_any);
- if (!tree_page) {
+ if (!tree_folio) {
/*
* If we walked over a stale stable_node,
* get_ksm_page() will call rb_erase() and it
@@ -1908,8 +1910,8 @@ static struct page *stable_tree_search(struct page *page)
goto again;
}
- ret = memcmp_pages(page, tree_page);
- put_page(tree_page);
+ ret = memcmp_pages(page, &tree_folio->page);
+ folio_put(tree_folio);
parent = *new;
if (ret < 0)
@@ -1952,26 +1954,26 @@ static struct page *stable_tree_search(struct page *page)
* It would be more elegant to return stable_node
* than kpage, but that involves more changes.
*/
- tree_page = get_ksm_page(stable_node_dup,
- GET_KSM_PAGE_TRYLOCK);
+ tree_folio = ksm_get_folio(stable_node_dup,
+ GET_KSM_PAGE_TRYLOCK);
- if (PTR_ERR(tree_page) == -EBUSY)
+ if (PTR_ERR(tree_folio) == -EBUSY)
return ERR_PTR(-EBUSY);
- if (unlikely(!tree_page))
+ if (unlikely(!tree_folio))
/*
* The tree may have been rebalanced,
* so re-evaluate parent and new.
*/
goto again;
- unlock_page(tree_page);
+ folio_unlock(tree_folio);
if (get_kpfn_nid(stable_node_dup->kpfn) !=
NUMA(stable_node_dup->nid)) {
- put_page(tree_page);
+ folio_put(tree_folio);
goto replace;
}
- return tree_page;
+ return tree_folio;
}
}
@@ -1984,8 +1986,8 @@ static struct page *stable_tree_search(struct page *page)
rb_insert_color(&page_node->node, root);
out:
if (is_page_sharing_candidate(page_node)) {
- get_page(page);
- return page;
+ folio_get(folio);
+ return folio;
} else
return NULL;
@@ -2010,12 +2012,12 @@ static struct page *stable_tree_search(struct page *page)
&page_node->node,
root);
if (is_page_sharing_candidate(page_node))
- get_page(page);
+ folio_get(folio);
else
- page = NULL;
+ folio = NULL;
} else {
rb_erase(&stable_node_dup->node, root);
- page = NULL;
+ folio = NULL;
}
} else {
VM_BUG_ON(!is_stable_node_chain(stable_node));
@@ -2026,16 +2028,16 @@ static struct page *stable_tree_search(struct page *page)
DO_NUMA(page_node->nid = nid);
stable_node_chain_add_dup(page_node, stable_node);
if (is_page_sharing_candidate(page_node))
- get_page(page);
+ folio_get(folio);
else
- page = NULL;
+ folio = NULL;
} else {
- page = NULL;
+ folio = NULL;
}
}
stable_node_dup->head = &migrate_nodes;
list_add(&stable_node_dup->list, stable_node_dup->head);
- return page;
+ return folio;
chain_append:
/* stable_node_dup could be null if it reached the limit */
--
2.43.0
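The -EBUSY handling above uses the kernel's error-pointer convention:
one return value encodes either a valid folio pointer, NULL, or a small
negative errno stuffed into the top of the address space. A userspace
re-creation of that scheme; the real helpers live in
include/linux/err.h:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static void *ERR_PTR(long error)     { return (void *)error; }
static long PTR_ERR(const void *ptr) { return (long)(intptr_t)ptr; }
static int IS_ERR(const void *ptr)
{
        /* Errors live in the top MAX_ERRNO addresses. */
        return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

static void *try_get(int busy)
{
        static int folio = 42;          /* stand-in for a struct folio */

        return busy ? ERR_PTR(-EBUSY) : &folio;
}

int main(void)
{
        void *f = try_get(1);

        if (PTR_ERR(f) == -EBUSY)
                puts("trylock failed: -EBUSY");

        f = try_get(0);
        if (!IS_ERR(f))
                printf("got folio %d\n", *(int *)f);
        return 0;
}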
* [PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (10 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 10/14] mm/ksm: Convert stable_tree_search " alexs
@ 2024-03-25 12:48 ` alexs
[not found] ` <15080b4f-ac9f-4f5f-9a27-d1773d015fdc@redhat.com>
2024-03-25 12:48 ` [PATCH v3 12/14] mm/ksm: return folio for chain series funcs alexs
` (2 subsequent siblings)
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Now that all callers have been changed to ksm_get_folio(), update the
related comments and naming to match, and remove get_ksm_page().
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 34 +++++++++++++---------------------
mm/migrate.c | 2 +-
2 files changed, 14 insertions(+), 22 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index e92445f29685..0ad02524e363 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -890,14 +890,14 @@ static void remove_node_from_stable_tree(struct ksm_stable_node *stable_node)
free_stable_node(stable_node);
}
-enum get_ksm_page_flags {
+enum ksm_get_folio_flags {
GET_KSM_PAGE_NOLOCK,
GET_KSM_PAGE_LOCK,
GET_KSM_PAGE_TRYLOCK
};
/*
- * get_ksm_page: checks if the page indicated by the stable node
+ * ksm_get_folio: checks if the page indicated by the stable node
* is still its ksm page, despite having held no reference to it.
* In which case we can trust the content of the page, and it
* returns the gotten page; but if the page has now been zapped,
@@ -916,7 +916,7 @@ enum get_ksm_page_flags {
* is on its way to being freed; but it is an anomaly to bear in mind.
*/
static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
- enum get_ksm_page_flags flags)
+ enum ksm_get_folio_flags flags)
{
struct folio *folio;
void *expected_mapping;
@@ -990,14 +990,6 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
return NULL;
}
-static struct page *get_ksm_page(struct ksm_stable_node *stable_node,
- enum get_ksm_page_flags flags)
-{
- struct folio *folio = ksm_get_folio(stable_node, flags);
-
- return &folio->page;
-}
-
/*
* Removing rmap_item from stable or unstable tree.
* This function will clean the information from the stable/unstable tree.
@@ -1127,7 +1119,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
folio = ksm_get_folio(stable_node, GET_KSM_PAGE_LOCK);
if (!folio) {
/*
- * get_ksm_page did remove_node_from_stable_tree itself.
+ * ksm_get_folio did remove_node_from_stable_tree itself.
*/
return 0;
}
@@ -1140,7 +1132,7 @@ static int remove_stable_node(struct ksm_stable_node *stable_node)
err = -EBUSY;
if (!folio_mapped(folio)) {
/*
- * The stable node did not yet appear stale to get_ksm_page(),
+ * The stable node did not yet appear stale to ksm_get_folio(),
* since that allows for an unmapped ksm folio to be recognized
* right up until it is freed; but the node is safe to remove.
* This folio might be in an LRU cache waiting to be freed,
@@ -1657,7 +1649,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
* We must walk all stable_node_dup to prune the stale
* stable nodes during lookup.
*
- * get_ksm_page can drop the nodes from the
+ * ksm_get_folio can drop the nodes from the
* stable_node->hlist if they point to freed pages
* (that's why we do a _safe walk). The "dup"
* stable_node parameter itself will be freed from
@@ -1764,7 +1756,7 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
}
/*
- * Like for get_ksm_page, this function can free the *_stable_node and
+ * Like for ksm_get_folio, this function can free the *_stable_node and
* *_stable_node_dup if the returned tree_page is NULL.
*
* It can also free and overwrite *_stable_node with the found
@@ -1786,7 +1778,7 @@ static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
if (!is_stable_node_chain(stable_node)) {
if (is_page_sharing_candidate(stable_node)) {
*_stable_node_dup = stable_node;
- return get_ksm_page(stable_node, GET_KSM_PAGE_NOLOCK);
+ return ksm_get_folio(stable_node, GET_KSM_PAGE_NOLOCK);
}
/*
* _stable_node_dup set to NULL means the stable_node
@@ -1900,7 +1892,7 @@ static void *stable_tree_search(struct page *page)
if (!tree_folio) {
/*
* If we walked over a stale stable_node,
- * get_ksm_page() will call rb_erase() and it
+ * ksm_get_folio() will call rb_erase() and it
* may rebalance the tree from under us. So
* restart the search from scratch. Returning
* NULL would be safe too, but we'd generate
@@ -2133,7 +2125,7 @@ static struct ksm_stable_node *stable_tree_insert(struct folio *kfolio)
if (!tree_folio) {
/*
* If we walked over a stale stable_node,
- * get_ksm_page() will call rb_erase() and it
+ * ksm_get_folio() will call rb_erase() and it
* may rebalance the tree from under us. So
* restart the search from scratch. Returning
* NULL would be safe too, but we'd generate
@@ -3245,7 +3237,7 @@ void folio_migrate_ksm(struct folio *newfolio, struct folio *folio)
/*
* newfolio->mapping was set in advance; now we need smp_wmb()
* to make sure that the new stable_node->kpfn is visible
- * to get_ksm_page() before it can see that folio->mapping
+ * to ksm_get_folio() before it can see that folio->mapping
* has gone stale (or that folio_test_swapcache has been cleared).
*/
smp_wmb();
@@ -3272,7 +3264,7 @@ static bool stable_node_dup_remove_range(struct ksm_stable_node *stable_node,
if (stable_node->kpfn >= start_pfn &&
stable_node->kpfn < end_pfn) {
/*
- * Don't get_ksm_page, page has already gone:
+ * Don't ksm_get_folio, page has already gone:
* which is why we keep kpfn instead of page*
*/
remove_node_from_stable_tree(stable_node);
@@ -3360,7 +3352,7 @@ static int ksm_memory_callback(struct notifier_block *self,
* Most of the work is done by page migration; but there might
* be a few stable_nodes left over, still pointing to struct
* pages which have been offlined: prune those from the tree,
- * otherwise get_ksm_page() might later try to access a
+ * otherwise ksm_get_folio() might later try to access a
* non-existent struct page.
*/
ksm_check_stable_tree(mn->start_pfn,
diff --git a/mm/migrate.c b/mm/migrate.c
index 73a052a382f1..9f0494fd902c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -616,7 +616,7 @@ void folio_migrate_flags(struct folio *newfolio, struct folio *folio)
folio_migrate_ksm(newfolio, folio);
/*
* Please do not reorder this without considering how mm/ksm.c's
- * get_ksm_page() depends upon ksm_migrate_page() and PageSwapCache().
+ * ksm_get_folio() depends upon ksm_migrate_page() and PageSwapCache().
*/
if (folio_test_swapcache(folio))
folio_clear_swapcache(folio);
--
2.43.0
* [PATCH v3 12/14] mm/ksm: return folio for chain series funcs
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (11 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 11/14] mm/ksm: remove get_ksm_page and related info alexs
@ 2024-03-25 12:48 ` alexs
2024-03-25 12:49 ` [PATCH v3 13/14] mm/ksm: use folio_set_stable_node in try_to_merge_one_page alexs
2024-03-25 12:49 ` [PATCH v3 14/14] mm/ksm: remove set_page_stable_node alexs
14 siblings, 0 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:48 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Since all callers have been converted to folios, change the return type
of these functions to 'struct folio *' too.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 30 +++++++++++++++---------------
1 file changed, 15 insertions(+), 15 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 0ad02524e363..15a78a9bab59 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1623,10 +1623,10 @@ bool is_page_sharing_candidate(struct ksm_stable_node *stable_node)
return __is_page_sharing_candidate(stable_node, 0);
}
-static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
- struct ksm_stable_node **_stable_node,
- struct rb_root *root,
- bool prune_stale_stable_nodes)
+static struct folio *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
+ struct ksm_stable_node **_stable_node,
+ struct rb_root *root,
+ bool prune_stale_stable_nodes)
{
struct ksm_stable_node *dup, *found = NULL, *stable_node = *_stable_node;
struct hlist_node *hlist_safe;
@@ -1739,7 +1739,7 @@ static struct page *stable_node_dup(struct ksm_stable_node **_stable_node_dup,
}
*_stable_node_dup = found;
- return &tree_folio->page;
+ return tree_folio;
}
static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stable_node,
@@ -1769,10 +1769,10 @@ static struct ksm_stable_node *stable_node_dup_any(struct ksm_stable_node *stabl
* function and will be overwritten in all cases, the caller doesn't
* need to initialize it.
*/
-static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
- struct ksm_stable_node **_stable_node,
- struct rb_root *root,
- bool prune_stale_stable_nodes)
+static struct folio *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
+ struct ksm_stable_node **_stable_node,
+ struct rb_root *root,
+ bool prune_stale_stable_nodes)
{
struct ksm_stable_node *stable_node = *_stable_node;
if (!is_stable_node_chain(stable_node)) {
@@ -1791,16 +1791,16 @@ static void *__stable_node_chain(struct ksm_stable_node **_stable_node_dup,
prune_stale_stable_nodes);
}
-static __always_inline void *chain_prune(struct ksm_stable_node **s_n_d,
- struct ksm_stable_node **s_n,
- struct rb_root *root)
+static __always_inline struct folio *chain_prune(struct ksm_stable_node **s_n_d,
+ struct ksm_stable_node **s_n,
+ struct rb_root *root)
{
return __stable_node_chain(s_n_d, s_n, root, true);
}
-static __always_inline void *chain(struct ksm_stable_node **s_n_d,
- struct ksm_stable_node *s_n,
- struct rb_root *root)
+static __always_inline struct folio *chain(struct ksm_stable_node **s_n_d,
+ struct ksm_stable_node *s_n,
+ struct rb_root *root)
{
struct ksm_stable_node *old_stable_node = s_n;
struct folio *tree_folio;
--
2.43.0
* [PATCH v3 13/14] mm/ksm: use folio_set_stable_node in try_to_merge_one_page
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (12 preceding siblings ...)
2024-03-25 12:48 ` [PATCH v3 12/14] mm/ksm: return folio for chain series funcs alexs
@ 2024-03-25 12:49 ` alexs
2024-04-05 9:06 ` David Hildenbrand
2024-03-25 12:49 ` [PATCH v3 14/14] mm/ksm: remove set_page_stable_node alexs
14 siblings, 1 reply; 27+ messages in thread
From: alexs @ 2024-03-25 12:49 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Only a single page can reach the point where we set the stable node
after write protection, so use the folio-converted function instead of
the page one.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index 15a78a9bab59..d7c4cc4a0cc1 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1518,7 +1518,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
* PageAnon+anon_vma to PageKsm+NULL stable_node:
* stable_tree_insert() will update stable_node.
*/
- set_page_stable_node(page, NULL);
+ folio_set_stable_node(page_folio(page), NULL);
mark_page_accessed(page);
/*
* Page reclaim just frees a clean page with no dirty
--
2.43.0
* Re: [PATCH v3 13/14] mm/ksm: use folio_set_stable_node in try_to_merge_one_page
2024-03-25 12:49 ` [PATCH v3 13/14] mm/ksm: use folio_set_stable_node in try_to_merge_one_page alexs
@ 2024-04-05 9:06 ` David Hildenbrand
0 siblings, 0 replies; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 9:06 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:49, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> Only a single page can reach the point where we set the stable node
> after write protection, so use the folio-converted function instead of
> the page one.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
> mm/ksm.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/ksm.c b/mm/ksm.c
> index 15a78a9bab59..d7c4cc4a0cc1 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -1518,7 +1518,7 @@ static int try_to_merge_one_page(struct vm_area_struct *vma,
> * PageAnon+anon_vma to PageKsm+NULL stable_node:
> * stable_tree_insert() will update stable_node.
> */
> - set_page_stable_node(page, NULL);
> + folio_set_stable_node(page_folio(page), NULL);
> mark_page_accessed(page);
> /*
> * Page reclaim just frees a clean page with no dirty
Reviewed-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* [PATCH v3 14/14] mm/ksm: remove set_page_stable_node
2024-03-25 12:48 [PATCH v2 00/14] transfer page to folio in KSM alexs
` (13 preceding siblings ...)
2024-03-25 12:49 ` [PATCH v3 13/14] mm/ksm: use folio_set_stable_node in try_to_merge_one_page alexs
@ 2024-03-25 12:49 ` alexs
2024-04-05 7:32 ` David Hildenbrand
2024-04-05 9:07 ` David Hildenbrand
14 siblings, 2 replies; 27+ messages in thread
From: alexs @ 2024-03-25 12:49 UTC (permalink / raw)
To: Matthew Wilcox, Andrea Arcangeli, Izik Eidus, david,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Alex Shi (tencent), Hugh Dickins, Chris Wright
From: "Alex Shi (tencent)" <alexs@kernel.org>
Remove the function since all callers are gone. Also remove the
VM_BUG_ON_PAGE(), since it is not applicable to a folio.
Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
To: linux-kernel@vger.kernel.org
To: linux-mm@kvack.org
To: Andrew Morton <akpm@linux-foundation.org>
Cc: Izik Eidus <izik.eidus@ravellosystems.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Hugh Dickins <hughd@google.com>
Cc: Chris Wright <chrisw@sous-sol.org>
---
mm/ksm.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/mm/ksm.c b/mm/ksm.c
index d7c4cc4a0cc1..136909f0c5d5 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1094,17 +1094,10 @@ static inline struct ksm_stable_node *page_stable_node(struct page *page)
return folio_stable_node(page_folio(page));
}
-static inline void set_page_stable_node(struct page *page,
- struct ksm_stable_node *stable_node)
-{
- VM_BUG_ON_PAGE(PageAnon(page) && PageAnonExclusive(page), page);
- page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
-}
-
static inline void folio_set_stable_node(struct folio *folio,
struct ksm_stable_node *stable_node)
{
- set_page_stable_node(&folio->page, stable_node);
+ folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
}
#ifdef CONFIG_SYSFS
--
2.43.0
* Re: [PATCH v3 14/14] mm/ksm: remove set_page_stable_node
2024-03-25 12:49 ` [PATCH v3 14/14] mm/ksm: remove set_page_stable_node alexs
@ 2024-04-05 7:32 ` David Hildenbrand
2024-04-08 7:00 ` Alex Shi
2024-04-05 9:07 ` David Hildenbrand
1 sibling, 1 reply; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 7:32 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:49, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> Remove the function since all callers are gone. Also remove the
> VM_BUG_ON_PAGE(), since it is not applicable to a folio.
Ehm, it is for small folios that we are working with here.
Please keep that check and convert it into a warn.
VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page), folio);
> - page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
> -}
> -
> static inline void folio_set_stable_node(struct folio *folio,
> struct ksm_stable_node *stable_node)
> {
> - set_page_stable_node(&folio->page, stable_node);
> + folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
> }
>
> #ifdef CONFIG_SYSFS
--
Cheers,
David / dhildenb
* Re: [PATCH v3 14/14] mm/ksm: remove set_page_stable_node
2024-04-05 7:32 ` David Hildenbrand
@ 2024-04-08 7:00 ` Alex Shi
0 siblings, 0 replies; 27+ messages in thread
From: Alex Shi @ 2024-04-08 7:00 UTC (permalink / raw)
To: David Hildenbrand, alexs, Matthew Wilcox, Andrea Arcangeli,
Izik Eidus, Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 4/5/24 3:32 PM, David Hildenbrand wrote:
> On 25.03.24 13:49, alexs@kernel.org wrote:
>> From: "Alex Shi (tencent)" <alexs@kernel.org>
>>
>> Remove the function since all callers are gone. Also remove the
>> VM_BUG_ON_PAGE(), since it is not applicable to a folio.
>
> Ehm, it is for small folios that we are working with here.
>
> Please keep that check and convert it into a warn.
>
> VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page), folio);
Will take it. Thanks!
>
>> - page->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
>> -}
>> -
>> static inline void folio_set_stable_node(struct folio *folio,
>> struct ksm_stable_node *stable_node)
>> {
>> - set_page_stable_node(&folio->page, stable_node);
>> + folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
>> }
>> #ifdef CONFIG_SYSFS
>
>
* Re: [PATCH v3 14/14] mm/ksm: remove set_page_stable_node
2024-03-25 12:49 ` [PATCH v3 14/14] mm/ksm: remove set_page_stable_node alexs
2024-04-05 7:32 ` David Hildenbrand
@ 2024-04-05 9:07 ` David Hildenbrand
2024-04-08 7:01 ` Alex Shi
1 sibling, 1 reply; 27+ messages in thread
From: David Hildenbrand @ 2024-04-05 9:07 UTC (permalink / raw)
To: alexs, Matthew Wilcox, Andrea Arcangeli, Izik Eidus,
Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 25.03.24 13:49, alexs@kernel.org wrote:
> From: "Alex Shi (tencent)" <alexs@kernel.org>
>
> Remove the function since all callers are gone. Also remove the
> VM_BUG_ON_PAGE(), since it is not applicable to a folio.
>
> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
> To: linux-kernel@vger.kernel.org
> To: linux-mm@kvack.org
> To: Andrew Morton <akpm@linux-foundation.org>
> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
> Cc: Matthew Wilcox <willy@infradead.org>
> Cc: Andrea Arcangeli <aarcange@redhat.com>
> Cc: Hugh Dickins <hughd@google.com>
> Cc: Chris Wright <chrisw@sous-sol.org>
> ---
Also, best to just squash this and #13.
--
Cheers,
David / dhildenb
* Re: [PATCH v3 14/14] mm/ksm: remove set_page_stable_node
2024-04-05 9:07 ` David Hildenbrand
@ 2024-04-08 7:01 ` Alex Shi
0 siblings, 0 replies; 27+ messages in thread
From: Alex Shi @ 2024-04-08 7:01 UTC (permalink / raw)
To: David Hildenbrand, alexs, Matthew Wilcox, Andrea Arcangeli,
Izik Eidus, Andrew Morton, linux-mm, linux-kernel, ryncsn
Cc: Hugh Dickins, Chris Wright
On 4/5/24 5:07 PM, David Hildenbrand wrote:
> On 25.03.24 13:49, alexs@kernel.org wrote:
>> From: "Alex Shi (tencent)" <alexs@kernel.org>
>>
>> Remove the function since all callers are gone. Also remove the
>> VM_BUG_ON_PAGE(), since it is not applicable to a folio.
>>
>> Signed-off-by: Alex Shi (tencent) <alexs@kernel.org>
>> To: linux-kernel@vger.kernel.org
>> To: linux-mm@kvack.org
>> To: Andrew Morton <akpm@linux-foundation.org>
>> Cc: Izik Eidus <izik.eidus@ravellosystems.com>
>> Cc: Matthew Wilcox <willy@infradead.org>
>> Cc: Andrea Arcangeli <aarcange@redhat.com>
>> Cc: Hugh Dickins <hughd@google.com>
>> Cc: Chris Wright <chrisw@sous-sol.org>
>> ---
>
> Also, best to just squash this and #13.
Sure, it's better to merge them.
>