* [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1)
@ 2025-07-04 10:24 David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
` (29 more replies)
0 siblings, 30 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Based on mm/mm-new.
In the future, as we decouple "struct page" from "struct folio", pages
that support "non-lru page migration" -- movable_ops page migration
such as memory balloons and zsmalloc -- will no longer be folios. They
will have no ->mapping or ->lru, and likely no refcount and no
page lock. But they will have a type and flags 🙂
This is the first part (other parts not written yet) of decoupling
movable_ops page migration from folio migration.
In this series, we get rid of the ->mapping usage, and start cleaning up
the code + separating it from folio migration.
Migration core will have to be further reworked to not treat movable_ops
pages like folios. This is the first step in that direction.
Heavily tested with virtio-balloon and lightly tested with zsmalloc
on x86-64. Cross-compile-tested.
v1 -> v2:
* "mm/balloon_compaction: convert balloon_page_delete() to
balloon_page_finalize()"
-> Extended patch description
* "mm/page_alloc: let page freeing clear any set page type"
-> Add comment
* "mm/zsmalloc: make PageZsmalloc() sticky until the page is freed"
-> Add comment
* "mm/migrate: factor out movable_ops page handling into
migrate_movable_ops_page()"
-> Extended patch description
* "mm/migrate: remove folio_test_movable() and folio_movable_ops()"
-> Extended patch description
* "mm/zsmalloc: stop using __ClearPageMovable()"
-> Clarify+extend comment
* "mm/migration: remove PageMovable()"
-> Adjust patch description
* "mm: rename __PageMovable() to page_has_movable_ops()"
-> Update comment in scan_movable_pages()
* "mm: convert "movable" flag in page->mapping to a page flag"
-> Updated+extended patch description
-> Use TESTPAGEFLAG+SETPAGEFLAG only
-> Adjust comments for #else + #endif
* "mm/page-alloc: remove PageMappingFlags()"
-> Extend patch description
* "docs/mm: convert from "Non-LRU page migration" to "movable_ops page
migration""
-> Fixup usage of page_movable_ops()
* Smaller patch description changes
* Collect RBs+Acks (thanks everybody!)
RFC -> v1:
* Some smaller fixups + comment changes + subject/description updates
* Added ACKs/RBs (hope I didn't miss any)
* "mm/migrate: move movable_ops page handling out of move_to_new_folio()"
-> Fix goto out; vs goto out_unlock_both;
* "mm: remove __folio_test_movable()"
-> Fix page_has_movable_ops() checking wrong page
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Jonathan Corbet <corbet@lwn.net>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Jerrin Shaji George <jerrin.shaji-george@broadcom.com>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: "Eugenio Pérez" <eperezma@redhat.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jan Kara <jack@suse.cz>
Cc: Zi Yan <ziy@nvidia.com>
Cc: Matthew Brost <matthew.brost@intel.com>
Cc: Joshua Hahn <joshua.hahnjy@gmail.com>
Cc: Rakie Kim <rakie.kim@sk.com>
Cc: Byungchul Park <byungchul@sk.com>
Cc: Gregory Price <gourry@gourry.net>
Cc: Ying Huang <ying.huang@linux.alibaba.com>
Cc: Alistair Popple <apopple@nvidia.com>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: "Liam R. Howlett" <Liam.Howlett@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Suren Baghdasaryan <surenb@google.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: "Matthew Wilcox (Oracle)" <willy@infradead.org>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Sergey Senozhatsky <senozhatsky@chromium.org>
Cc: Brendan Jackman <jackmanb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Jason Gunthorpe <jgg@ziepe.ca>
Cc: John Hubbard <jhubbard@nvidia.com>
Cc: Peter Xu <peterx@redhat.com>
Cc: Xu Xin <xu.xin16@zte.com.cn>
Cc: Chengming Zhou <chengming.zhou@linux.dev>
Cc: Miaohe Lin <linmiaohe@huawei.com>
Cc: Naoya Horiguchi <nao.horiguchi@gmail.com>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: Rik van Riel <riel@surriel.com>
Cc: Harry Yoo <harry.yoo@oracle.com>
Cc: Qi Zheng <zhengqi.arch@bytedance.com>
Cc: Shakeel Butt <shakeel.butt@linux.dev>
David Hildenbrand (29):
mm/balloon_compaction: we cannot have isolated pages in the balloon
list
mm/balloon_compaction: convert balloon_page_delete() to
balloon_page_finalize()
mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
mm/page_alloc: let page freeing clear any set page type
mm/balloon_compaction: make PageOffline sticky until the page is freed
mm/zsmalloc: make PageZsmalloc() sticky until the page is freed
mm/migrate: rename isolate_movable_page() to
isolate_movable_ops_page()
mm/migrate: rename putback_movable_folio() to
putback_movable_ops_page()
mm/migrate: factor out movable_ops page handling into
migrate_movable_ops_page()
mm/migrate: remove folio_test_movable() and folio_movable_ops()
mm/migrate: move movable_ops page handling out of move_to_new_folio()
mm/zsmalloc: stop using __ClearPageMovable()
mm/balloon_compaction: stop using __ClearPageMovable()
mm/migrate: remove __ClearPageMovable()
mm/migration: remove PageMovable()
mm: rename __PageMovable() to page_has_movable_ops()
mm/page_isolation: drop __folio_test_movable() check for large folios
mm: remove __folio_test_movable()
mm: stop storing migration_ops in page->mapping
mm: convert "movable" flag in page->mapping to a page flag
mm: rename PG_isolated to PG_movable_ops_isolated
mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
mm/page-alloc: remove PageMappingFlags()
mm/page-flags: remove folio_mapping_flags()
mm: simplify folio_expected_ref_count()
mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_*
docs/mm: convert from "Non-LRU page migration" to "movable_ops page
migration"
mm/balloon_compaction: "movable_ops" doc updates
mm/balloon_compaction: provide single balloon_page_insert() and
balloon_mapping_gfp_mask()
Documentation/mm/page_migration.rst | 39 ++--
arch/powerpc/platforms/pseries/cmm.c | 2 +-
drivers/misc/vmw_balloon.c | 3 +-
drivers/virtio/virtio_balloon.c | 4 +-
fs/proc/page.c | 4 +-
include/linux/balloon_compaction.h | 90 ++++-----
include/linux/fs.h | 2 +-
include/linux/migrate.h | 46 +----
include/linux/mm.h | 4 +-
include/linux/mm_types.h | 1 -
include/linux/page-flags.h | 106 +++++++----
include/linux/pagemap.h | 2 +-
include/linux/zsmalloc.h | 2 +
mm/balloon_compaction.c | 21 ++-
mm/compaction.c | 44 +----
mm/gup.c | 4 +-
mm/internal.h | 2 +-
mm/ksm.c | 4 +-
mm/memory-failure.c | 4 +-
mm/memory_hotplug.c | 10 +-
mm/migrate.c | 271 ++++++++++++++++-----------
mm/page_alloc.c | 13 +-
mm/page_isolation.c | 12 +-
mm/rmap.c | 16 +-
mm/util.c | 6 +-
mm/vmscan.c | 6 +-
mm/zpdesc.h | 15 +-
mm/zsmalloc.c | 33 ++--
28 files changed, 373 insertions(+), 393 deletions(-)
base-commit: 31a2460cb90e6ac3604c72fb54e936b8129fec05
--
2.49.0
^ permalink raw reply [flat|nested] 33+ messages in thread
* [PATCH v2 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
@ 2025-07-04 10:24 ` David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
` (28 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
The core sets PG_isolated only after mops->isolate_page() has been
called. In the case of the balloon, that callback is where we remove the
page from the balloon list. So we cannot have isolated pages on the
balloon list.
Let's drop this unnecessary check.
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/balloon_compaction.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index d3e00731e2628..fcb60233aa35d 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -94,12 +94,6 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
if (!trylock_page(page))
continue;
- if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
- PageIsolated(page)) {
- /* raced with isolation */
- unlock_page(page);
- continue;
- }
balloon_page_delete(page);
__count_vm_event(BALLOON_DEFLATE);
list_add(&page->lru, pages);
--
2.49.0
* [PATCH v2 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
@ 2025-07-04 10:24 ` David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
` (27 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
Let's move the removal of the page from the balloon list into the single
caller, to remove the dependency on the PG_isolated flag and clarify
locking requirements.
Note that, for now, balloon_page_delete() was used on two paths:
(1) Removing a page from the balloon for deflation through
    balloon_page_list_dequeue()
(2) Removing an isolated page from the balloon for migration in the
    per-driver migration handlers
Isolated pages were already removed from the balloon list during
isolation.
So instead of relying on the flag, we can just distinguish both cases
directly and handle it accordingly in the caller.
We'll shuffle the operations a bit such that they logically make more sense
(e.g., remove from the list before clearing flags).
In balloon migration functions we can now move the balloon_page_finalize()
out of the balloon lock and perform the finalization just before dropping
the balloon reference.
Document that the page lock is currently required when modifying the
movability aspects of a page; hopefully we can soon decouple this from the
page lock.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
arch/powerpc/platforms/pseries/cmm.c | 2 +-
drivers/misc/vmw_balloon.c | 3 +-
drivers/virtio/virtio_balloon.c | 4 +--
include/linux/balloon_compaction.h | 43 +++++++++++-----------------
mm/balloon_compaction.c | 3 +-
5 files changed, 21 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
index 5f4037c1d7fe8..5e0a718d1be7b 100644
--- a/arch/powerpc/platforms/pseries/cmm.c
+++ b/arch/powerpc/platforms/pseries/cmm.c
@@ -532,7 +532,6 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
balloon_page_insert(b_dev_info, newpage);
- balloon_page_delete(page);
b_dev_info->isolated_pages--;
spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
@@ -542,6 +541,7 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
*/
plpar_page_set_active(page);
+ balloon_page_finalize(page);
/* balloon page list reference */
put_page(page);
diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index c817d8c216413..6653fc53c951c 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -1778,8 +1778,7 @@ static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
* @pages_lock . We keep holding @comm_lock since we will need it in a
* second.
*/
- balloon_page_delete(page);
-
+ balloon_page_finalize(page);
put_page(page);
/* Inflate */
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 89da052f4f687..e299e18346a30 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -866,15 +866,13 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
tell_host(vb, vb->inflate_vq);
/* balloon's page migration 2nd step -- deflate "page" */
- spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
- balloon_page_delete(page);
- spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
set_page_pfns(vb, vb->pfns, page);
tell_host(vb, vb->deflate_vq);
mutex_unlock(&vb->balloon_lock);
+ balloon_page_finalize(page);
put_page(page); /* balloon reference */
return MIGRATEPAGE_SUCCESS;
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 5ca2d56996201..b9f19da37b089 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -97,27 +97,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
list_add(&page->lru, &balloon->pages);
}
-/*
- * balloon_page_delete - delete a page from balloon's page list and clear
- * the page->private assignement accordingly.
- * @page : page to be released from balloon's page list
- *
- * Caller must ensure the page is locked and the spin_lock protecting balloon
- * pages list is held before deleting a page from the balloon device.
- */
-static inline void balloon_page_delete(struct page *page)
-{
- __ClearPageOffline(page);
- __ClearPageMovable(page);
- set_page_private(page, 0);
- /*
- * No touch page.lru field once @page has been isolated
- * because VM is using the field.
- */
- if (!PageIsolated(page))
- list_del(&page->lru);
-}
-
/*
* balloon_page_device - get the b_dev_info descriptor for the balloon device
* that enqueues the given page.
@@ -141,12 +120,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
list_add(&page->lru, &balloon->pages);
}
-static inline void balloon_page_delete(struct page *page)
-{
- __ClearPageOffline(page);
- list_del(&page->lru);
-}
-
static inline gfp_t balloon_mapping_gfp_mask(void)
{
return GFP_HIGHUSER;
@@ -154,6 +127,22 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
#endif /* CONFIG_BALLOON_COMPACTION */
+/*
+ * balloon_page_finalize - prepare a balloon page that was removed from the
+ * balloon list for release to the page allocator
+ * @page: page to be released to the page allocator
+ *
+ * Caller must ensure that the page is locked.
+ */
+static inline void balloon_page_finalize(struct page *page)
+{
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
+ __ClearPageMovable(page);
+ set_page_private(page, 0);
+ }
+ __ClearPageOffline(page);
+}
+
/*
* balloon_page_push - insert a page into a page list.
* @head : pointer to list
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index fcb60233aa35d..ec176bdb8a78b 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -94,7 +94,8 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
if (!trylock_page(page))
continue;
- balloon_page_delete(page);
+ list_del(&page->lru);
+ balloon_page_finalize(page);
__count_vm_event(BALLOON_DEFLATE);
list_add(&page->lru, pages);
unlock_page(page);
--
2.49.0
* [PATCH v2 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
@ 2025-07-04 10:24 ` David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
` (26 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
Let's drop these checks; the core migration code must make sure these
conditions hold either way, so there is no need to double-check.
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zpdesc.h | 5 -----
mm/zsmalloc.c | 5 -----
2 files changed, 10 deletions(-)
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index d3df316e5bb7b..5cb7e3de43952 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -168,11 +168,6 @@ static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
__ClearPageZsmalloc(zpdesc_page(zpdesc));
}
-static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
-{
- return PageIsolated(zpdesc_page(zpdesc));
-}
-
static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
{
return page_zone(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 999b513c7fdff..7f1431f2be98f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1719,8 +1719,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
* Page is locked so zspage couldn't be destroyed. For detail, look at
* lock_zspage in free_zspage.
*/
- VM_BUG_ON_PAGE(PageIsolated(page), page);
-
return true;
}
@@ -1739,8 +1737,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
unsigned long old_obj, new_obj;
unsigned int obj_idx;
- VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
-
/* The page is locked, so this pointer must remain valid */
zspage = get_zspage(zpdesc);
pool = zspage->pool;
@@ -1811,7 +1807,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
static void zs_page_putback(struct page *page)
{
- VM_BUG_ON_PAGE(!PageIsolated(page), page);
}
static const struct movable_operations zsmalloc_mops = {
--
2.49.0
* [PATCH v2 04/29] mm/page_alloc: let page freeing clear any set page type
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (2 preceding siblings ...)
2025-07-04 10:24 ` [PATCH v2 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
@ 2025-07-04 10:24 ` David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
` (25 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
Currently, any user of page types must clear that type before freeing
a page back to the buddy, otherwise we'll run into mapcount related
sanity checks (because the page type currently overlays the page
mapcount).
Let's allow page type users to skip clearing the page type before
freeing, by letting the buddy handle it instead.
We'll focus on having a page type set on the first page of a larger
allocation only.
With this change, we can reliably identify typed folios even though
they might be in the process of getting freed, which will come in handy
in migration code (at least in the transition phase).
In the future we might want to warn on some page types. Instead of
having an "allow list", let's rather wait until we know about one that
should go on such a "disallow list".
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/page_alloc.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 858bc17653af9..b825f224af01f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1380,6 +1380,10 @@ __always_inline bool free_pages_prepare(struct page *page,
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
page->mapping = NULL;
}
+ if (unlikely(page_has_type(page)))
+ /* Reset the page_type (which overlays _mapcount) */
+ page->page_type = UINT_MAX;
+
if (is_check_pages_enabled()) {
if (free_page_is_bad(page))
bad++;
--
2.49.0
* [PATCH v2 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (3 preceding siblings ...)
2025-07-04 10:24 ` [PATCH v2 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
@ 2025-07-04 10:24 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
` (24 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:24 UTC (permalink / raw)
To: linux-kernel
Let the page freeing code handle clearing the page type. Being able to
identify balloon pages until actually freed is a requirement for
upcoming movable_ops migration changes.
Acked-by: Zi Yan <ziy@nvidia.com>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index b9f19da37b089..bfc6e50bd004b 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -140,7 +140,7 @@ static inline void balloon_page_finalize(struct page *page)
__ClearPageMovable(page);
set_page_private(page, 0);
}
- __ClearPageOffline(page);
+ /* PageOffline is sticky until the page is freed to the buddy. */
}
/*
--
2.49.0
* [PATCH v2 06/29] mm/zsmalloc: make PageZsmalloc() sticky until the page is freed
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (4 preceding siblings ...)
2025-07-04 10:24 ` [PATCH v2 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
` (23 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Let the page freeing code handle clearing the page type. Being able to
identify zsmalloc pages until actually freed is a requirement for
upcoming movable_ops migration changes.
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zpdesc.h | 5 -----
mm/zsmalloc.c | 4 ++--
2 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 5cb7e3de43952..5763f36039736 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -163,11 +163,6 @@ static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
__SetPageZsmalloc(zpdesc_page(zpdesc));
}
-static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
-{
- __ClearPageZsmalloc(zpdesc_page(zpdesc));
-}
-
static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
{
return page_zone(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7f1431f2be98f..626f09fb27138 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -244,6 +244,7 @@ static inline void free_zpdesc(struct zpdesc *zpdesc)
{
struct page *page = zpdesc_page(zpdesc);
+ /* PageZsmalloc is sticky until the page is freed to the buddy. */
__free_page(page);
}
@@ -880,7 +881,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
ClearPagePrivate(page);
zpdesc->zspage = NULL;
zpdesc->next = NULL;
- __ClearPageZsmalloc(page);
+ /* PageZsmalloc is sticky until the page is freed to the buddy. */
}
static int trylock_zspage(struct zspage *zspage)
@@ -1055,7 +1056,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
if (!zpdesc) {
while (--i >= 0) {
zpdesc_dec_zone_page_state(zpdescs[i]);
- __zpdesc_clear_zsmalloc(zpdescs[i]);
free_zpdesc(zpdescs[i]);
}
cache_free_zspage(pool, zspage);
--
2.49.0
* [PATCH v2 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (5 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
` (22 subsequent siblings)
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
... and start moving back to per-page things that will absolutely not be
folio things in the future. Add documentation and a comment that the
remaining folio stuff (lock, refcount) will have to be reworked as well.
While at it, convert the VM_BUG_ON_FOLIO() into a WARN_ON_ONCE() and
handle it gracefully (relevant with further changes), and convert a
WARN_ON_ONCE() into a VM_WARN_ON_ONCE_PAGE().
Note that we will leave anything that needs a rework (lock, refcount,
->lru) using folios for now: that perfectly highlights the
problematic bits.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 4 ++--
mm/compaction.c | 2 +-
mm/migrate.c | 39 +++++++++++++++++++++++++++++----------
3 files changed, 32 insertions(+), 13 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index aaa2114498d6d..c0ec7422837bd 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -69,7 +69,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
unsigned long private, enum migrate_mode mode, int reason,
unsigned int *ret_succeeded);
struct folio *alloc_migration_target(struct folio *src, unsigned long private);
-bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode);
bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -90,7 +90,7 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
static inline struct folio *alloc_migration_target(struct folio *src,
unsigned long private)
{ return NULL; }
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+static inline bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
{ return false; }
static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{ return false; }
diff --git a/mm/compaction.c b/mm/compaction.c
index 3925cb61dbb8f..17455c5a4be05 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1093,7 +1093,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
locked = NULL;
}
- if (isolate_movable_page(page, mode)) {
+ if (isolate_movable_ops_page(page, mode)) {
folio = page_folio(page);
goto isolate_success;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 208d2d4a2f8d4..2e648d75248e4 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,8 +51,26 @@
#include "internal.h"
#include "swap.h"
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+/**
+ * isolate_movable_ops_page - isolate a movable_ops page for migration
+ * @page: The page.
+ * @mode: The isolation mode.
+ *
+ * Try to isolate a movable_ops page for migration. Will fail if the page is
+ * not a movable_ops page, if the page is already isolated for migration
+ * or if the page was just released by its owner.
+ *
+ * Once isolated, the page cannot get freed until it is either putback
+ * or migrated.
+ *
+ * Returns true if isolation succeeded, otherwise false.
+ */
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
{
+ /*
+ * TODO: these pages will not be folios in the future. All
+ * folio dependencies will have to be removed.
+ */
struct folio *folio = folio_get_nontail_page(page);
const struct movable_operations *mops;
@@ -73,7 +91,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
* we use non-atomic bitops on newly allocated page flags so
* unconditionally grabbing the lock ruins page's owner side.
*/
- if (unlikely(!__folio_test_movable(folio)))
+ if (unlikely(!__PageMovable(page)))
goto out_putfolio;
/*
@@ -90,18 +108,19 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- if (!folio_test_movable(folio) || folio_test_isolated(folio))
+ if (!PageMovable(page) || PageIsolated(page))
goto out_no_isolated;
- mops = folio_movable_ops(folio);
- VM_BUG_ON_FOLIO(!mops, folio);
+ mops = page_movable_ops(page);
+ if (WARN_ON_ONCE(!mops))
+ goto out_no_isolated;
- if (!mops->isolate_page(&folio->page, mode))
+ if (!mops->isolate_page(page, mode))
goto out_no_isolated;
/* Driver shouldn't use the isolated flag */
- WARN_ON_ONCE(folio_test_isolated(folio));
- folio_set_isolated(folio);
+ VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
+ SetPageIsolated(page);
folio_unlock(folio);
return true;
@@ -175,8 +194,8 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
if (lru)
isolated = folio_isolate_lru(folio);
else
- isolated = isolate_movable_page(&folio->page,
- ISOLATE_UNEVICTABLE);
+ isolated = isolate_movable_ops_page(&folio->page,
+ ISOLATE_UNEVICTABLE);
if (!isolated)
return false;
--
2.49.0
* [PATCH v2 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (6 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
` (21 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
... and factor the complete handling of movable_ops pages out.
Convert it similarly to isolate_movable_ops_page().
While at it, convert the VM_BUG_ON_FOLIO() into a VM_WARN_ON_ONCE_PAGE().
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 37 ++++++++++++++++++++++++-------------
1 file changed, 24 insertions(+), 13 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 2e648d75248e4..c3cd66b05fe2f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -133,12 +133,30 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
return false;
}
-static void putback_movable_folio(struct folio *folio)
+/**
+ * putback_movable_ops_page - putback an isolated movable_ops page
+ * @page: The isolated page.
+ *
+ * Putback an isolated movable_ops page.
+ *
+ * After the page was putback, it might get freed instantly.
+ */
+static void putback_movable_ops_page(struct page *page)
{
- const struct movable_operations *mops = folio_movable_ops(folio);
-
- mops->putback_page(&folio->page);
- folio_clear_isolated(folio);
+ /*
+ * TODO: these pages will not be folios in the future. All
+ * folio dependencies will have to be removed.
+ */
+ struct folio *folio = page_folio(page);
+
+ VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
+ folio_lock(folio);
+ /* If the page was released by its owner, there is nothing to do. */
+ if (PageMovable(page))
+ page_movable_ops(page)->putback_page(page);
+ ClearPageIsolated(page);
+ folio_unlock(folio);
+ folio_put(folio);
}
/*
@@ -166,14 +184,7 @@ void putback_movable_pages(struct list_head *l)
* have PAGE_MAPPING_MOVABLE.
*/
if (unlikely(__folio_test_movable(folio))) {
- VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
- folio_lock(folio);
- if (folio_test_movable(folio))
- putback_movable_folio(folio);
- else
- folio_clear_isolated(folio);
- folio_unlock(folio);
- folio_put(folio);
+ putback_movable_ops_page(&folio->page);
} else {
node_stat_mod_folio(folio, NR_ISOLATED_ANON +
folio_is_file_lru(folio), -folio_nr_pages(folio));
--
2.49.0
* [PATCH v2 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (7 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
` (20 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Let's factor it out, simplifying the calling code.
Before this change, we would have called flush_dcache_folio() also on
movable_ops pages. As documented in Documentation/core-api/cachetlb.rst:
"This routine need only be called for page cache pages which can
potentially ever be mapped into the address space of a user
process."
So don't do it for movable_ops pages. If there ever were such a
movable_ops page user, it should do the flushing itself after performing
the copy.
Note that we can now change folio_mapping_flags() to folio_test_anon()
to make it clearer, because movable_ops pages will never take that path.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 82 ++++++++++++++++++++++++++++------------------------
1 file changed, 45 insertions(+), 37 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index c3cd66b05fe2f..d66d0776036c3 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -159,6 +159,45 @@ static void putback_movable_ops_page(struct page *page)
folio_put(folio);
}
+/**
+ * migrate_movable_ops_page - migrate an isolated movable_ops page
+ * @page: The isolated page.
+ *
+ * Migrate an isolated movable_ops page.
+ *
+ * If the src page was already released by its owner, the src page is
+ * un-isolated (putback) and migration succeeds; the migration core will be the
+ * owner of both pages.
+ *
+ * If the src page was not released by its owner and the migration was
+ * successful, the owner of the src page and the dst page are swapped and
+ * the src page is un-isolated.
+ *
+ * If migration fails, the ownership stays unmodified and the src page
+ * remains isolated: migration may be retried later or the page can be putback.
+ *
+ * TODO: migration core will treat both pages as folios and lock them before
+ * this call to unlock them after this call. Further, the folio refcounts on
+ * src and dst are also released by migration core. These pages will not be
+ * folios in the future, so that must be reworked.
+ *
+ * Returns MIGRATEPAGE_SUCCESS on success, otherwise a negative error
+ * code.
+ */
+static int migrate_movable_ops_page(struct page *dst, struct page *src,
+ enum migrate_mode mode)
+{
+ int rc = MIGRATEPAGE_SUCCESS;
+
+ VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
+ /* If the page was released by its owner, there is nothing to do. */
+ if (PageMovable(src))
+ rc = page_movable_ops(src)->migrate_page(dst, src, mode);
+ if (rc == MIGRATEPAGE_SUCCESS)
+ ClearPageIsolated(src);
+ return rc;
+}
+
/*
* Put previously isolated pages back onto the appropriate lists
* from where they were once taken off for compaction/migration.
@@ -1023,51 +1062,20 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
mode);
else
rc = fallback_migrate_folio(mapping, dst, src, mode);
- } else {
- const struct movable_operations *mops;
- /*
- * In case of non-lru page, it could be released after
- * isolation step. In that case, we shouldn't try migration.
- */
- VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
- if (!folio_test_movable(src)) {
- rc = MIGRATEPAGE_SUCCESS;
- folio_clear_isolated(src);
+ if (rc != MIGRATEPAGE_SUCCESS)
goto out;
- }
-
- mops = folio_movable_ops(src);
- rc = mops->migrate_page(&dst->page, &src->page, mode);
- WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
- !folio_test_isolated(src));
- }
-
- /*
- * When successful, old pagecache src->mapping must be cleared before
- * src is freed; but stats require that PageAnon be left as PageAnon.
- */
- if (rc == MIGRATEPAGE_SUCCESS) {
- if (__folio_test_movable(src)) {
- VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
-
- /*
- * We clear PG_movable under page_lock so any compactor
- * cannot try to migrate this page.
- */
- folio_clear_isolated(src);
- }
-
/*
- * Anonymous and movable src->mapping will be cleared by
- * free_pages_prepare so don't reset it here for keeping
- * the type to work PageAnon, for example.
+ * For pagecache folios, src->mapping must be cleared before src
+ * is freed. Anonymous folios must stay anonymous until freed.
*/
- if (!folio_mapping_flags(src))
+ if (!folio_test_anon(src))
src->mapping = NULL;
if (likely(!folio_is_zone_device(dst)))
flush_dcache_folio(dst);
+ } else {
+ rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
}
out:
return rc;
--
2.49.0
* [PATCH v2 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (8 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
` (19 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Folios will have nothing to do with movable_ops page migration. These
functions are now unused, so let's remove them.
Note that __folio_test_movable() and friends will be removed separately
next, after more rework.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c0ec7422837bd..c99a00d4ca27d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -118,20 +118,6 @@ static inline void __ClearPageMovable(struct page *page)
}
#endif
-static inline bool folio_test_movable(struct folio *folio)
-{
- return PageMovable(&folio->page);
-}
-
-static inline
-const struct movable_operations *folio_movable_ops(struct folio *folio)
-{
- VM_BUG_ON(!__folio_test_movable(folio));
-
- return (const struct movable_operations *)
- ((unsigned long)folio->mapping - PAGE_MAPPING_MOVABLE);
-}
-
static inline
const struct movable_operations *page_movable_ops(struct page *page)
{
--
2.49.0
* [PATCH v2 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (9 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
` (18 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Let's move that handling directly into migrate_folio_move(), so we can
simplify move_to_new_folio(). While at it, fixup the documentation a
bit.
Note that unmap_and_move_huge_page() does not care, because it only
deals with actual folios (we only support migration of individual
movable_ops pages).
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 63 +++++++++++++++++++++++++---------------------------
1 file changed, 30 insertions(+), 33 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index d66d0776036c3..9a63bd338d30b 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1024,11 +1024,12 @@ static int fallback_migrate_folio(struct address_space *mapping,
}
/*
- * Move a page to a newly allocated page
- * The page is locked and all ptes have been successfully removed.
+ * Move a src folio to a newly allocated dst folio.
*
- * The new page will have replaced the old page if this function
- * is successful.
+ * The src and dst folios are locked and the src folio was unmapped from
+ * the page tables.
+ *
+ * On success, the src folio was replaced by the dst folio.
*
* Return value:
* < 0 - error code
@@ -1037,34 +1038,30 @@ static int fallback_migrate_folio(struct address_space *mapping,
static int move_to_new_folio(struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
+ struct address_space *mapping = folio_mapping(src);
int rc = -EAGAIN;
- bool is_lru = !__folio_test_movable(src);
VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
- if (likely(is_lru)) {
- struct address_space *mapping = folio_mapping(src);
-
- if (!mapping)
- rc = migrate_folio(mapping, dst, src, mode);
- else if (mapping_inaccessible(mapping))
- rc = -EOPNOTSUPP;
- else if (mapping->a_ops->migrate_folio)
- /*
- * Most folios have a mapping and most filesystems
- * provide a migrate_folio callback. Anonymous folios
- * are part of swap space which also has its own
- * migrate_folio callback. This is the most common path
- * for page migration.
- */
- rc = mapping->a_ops->migrate_folio(mapping, dst, src,
- mode);
- else
- rc = fallback_migrate_folio(mapping, dst, src, mode);
+ if (!mapping)
+ rc = migrate_folio(mapping, dst, src, mode);
+ else if (mapping_inaccessible(mapping))
+ rc = -EOPNOTSUPP;
+ else if (mapping->a_ops->migrate_folio)
+ /*
+ * Most folios have a mapping and most filesystems
+ * provide a migrate_folio callback. Anonymous folios
+ * are part of swap space which also has its own
+ * migrate_folio callback. This is the most common path
+ * for page migration.
+ */
+ rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+ mode);
+ else
+ rc = fallback_migrate_folio(mapping, dst, src, mode);
- if (rc != MIGRATEPAGE_SUCCESS)
- goto out;
+ if (rc == MIGRATEPAGE_SUCCESS) {
/*
* For pagecache folios, src->mapping must be cleared before src
* is freed. Anonymous folios must stay anonymous until freed.
@@ -1074,10 +1071,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
if (likely(!folio_is_zone_device(dst)))
flush_dcache_folio(dst);
- } else {
- rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
}
-out:
return rc;
}
@@ -1328,20 +1322,23 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
int rc;
int old_page_state = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = !__folio_test_movable(src);
struct list_head *prev;
__migrate_folio_extract(dst, &old_page_state, &anon_vma);
prev = dst->lru.prev;
list_del(&dst->lru);
+ if (unlikely(__folio_test_movable(src))) {
+ rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
+ if (rc)
+ goto out;
+ goto out_unlock_both;
+ }
+
rc = move_to_new_folio(dst, src, mode);
if (rc)
goto out;
- if (unlikely(!is_lru))
- goto out_unlock_both;
-
/*
* When successful, push dst to LRU immediately: so that if it
* turns out to be an mlocked page, remove_migration_ptes() will
--
2.49.0
* [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (10 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-07 2:39 ` Sergey Senozhatsky
2025-07-04 10:25 ` [PATCH v2 13/29] mm/balloon_compaction: " David Hildenbrand
` (17 subsequent siblings)
29 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Instead, let's check in the callbacks if the page was already destroyed,
which can be checked by looking at zpdesc->zspage (see reset_zpdesc()).
If we detect that the page was destroyed:
(1) Fail isolation, just like the migration core would
(2) Fake migration success just like the migration core would
In the putback case there is nothing to do: we do nothing, just like
the migration core would.
In the future, we should look into not letting these pages get destroyed
while they are isolated -- and instead delaying that to the
putback/migration call. Add a TODO for that.
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zsmalloc.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 626f09fb27138..b12250e219bb7 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -877,7 +877,6 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
{
struct page *page = zpdesc_page(zpdesc);
- __ClearPageMovable(page);
ClearPagePrivate(page);
zpdesc->zspage = NULL;
zpdesc->next = NULL;
@@ -1716,10 +1715,11 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
/*
- * Page is locked so zspage couldn't be destroyed. For detail, look at
- * lock_zspage in free_zspage.
+ * Page is locked so zspage can't be destroyed concurrently
+ * (see free_zspage()). But if the page was already destroyed
+ * (see reset_zpdesc()), refuse isolation here.
*/
- return true;
+ return page_zpdesc(page)->zspage;
}
static int zs_page_migrate(struct page *newpage, struct page *page,
@@ -1737,6 +1737,16 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
unsigned long old_obj, new_obj;
unsigned int obj_idx;
+ /*
+ * TODO: nothing prevents a zspage from getting destroyed while
+ * it is isolated for migration, as the page lock is temporarily
+ * dropped after zs_page_isolate() succeeded: we should rework that
+ * and defer destroying such pages once they are un-isolated (putback)
+ * instead.
+ */
+ if (!zpdesc->zspage)
+ return MIGRATEPAGE_SUCCESS;
+
/* The page is locked, so this pointer must remain valid */
zspage = get_zspage(zpdesc);
pool = zspage->pool;
--
2.49.0
* [PATCH v2 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (11 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
` (16 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
We can just look at the balloon device (stored in page->private), to see
if the page is still part of the balloon.
As isolated balloon pages cannot get released (they are taken off the
balloon list while isolated), we don't have to worry about this case in
the putback and migration callbacks. Add a WARN_ON_ONCE() for now.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 4 +---
mm/balloon_compaction.c | 11 +++++++++++
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index bfc6e50bd004b..9bce8e9f5018c 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -136,10 +136,8 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
*/
static inline void balloon_page_finalize(struct page *page)
{
- if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
- __ClearPageMovable(page);
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
set_page_private(page, 0);
- }
/* PageOffline is sticky until the page is freed to the buddy. */
}
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index ec176bdb8a78b..e4f1a122d786b 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -206,6 +206,9 @@ static bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
struct balloon_dev_info *b_dev_info = balloon_page_device(page);
unsigned long flags;
+ if (!b_dev_info)
+ return false;
+
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
list_del(&page->lru);
b_dev_info->isolated_pages++;
@@ -219,6 +222,10 @@ static void balloon_page_putback(struct page *page)
struct balloon_dev_info *b_dev_info = balloon_page_device(page);
unsigned long flags;
+ /* Isolated balloon pages cannot get deflated. */
+ if (WARN_ON_ONCE(!b_dev_info))
+ return;
+
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
list_add(&page->lru, &b_dev_info->pages);
b_dev_info->isolated_pages--;
@@ -234,6 +241,10 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
+ /* Isolated balloon pages cannot get deflated. */
+ if (WARN_ON_ONCE(!balloon))
+ return -EAGAIN;
+
return balloon->migratepage(balloon, newpage, page, mode);
}
--
2.49.0
* [PATCH v2 14/29] mm/migrate: remove __ClearPageMovable()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (12 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 13/29] mm/balloon_compaction: " David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 15/29] mm/migration: remove PageMovable() David Hildenbrand
` (15 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Unused, let's remove it.
The Chinese docs in Documentation/translations/zh_CN/mm/page_migration.rst
still mention it, but that whole document is bound to get outdated and
will have to be updated by somebody who actually speaks the language.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 8 ++------
mm/compaction.c | 11 -----------
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c99a00d4ca27d..6eeda8eb1e0d8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,8 +35,8 @@ struct migration_target_control;
* @src page. The driver should copy the contents of the
* @src page to the @dst page and set up the fields of @dst page.
* Both pages are locked.
- * If page migration is successful, the driver should call
- * __ClearPageMovable(@src) and return MIGRATEPAGE_SUCCESS.
+ * If page migration is successful, the driver should
+ * return MIGRATEPAGE_SUCCESS.
* If the driver cannot migrate the page at the moment, it can return
* -EAGAIN. The VM interprets this as a temporary migration failure and
* will retry it later. Any other error value is a permanent migration
@@ -106,16 +106,12 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#ifdef CONFIG_COMPACTION
bool PageMovable(struct page *page);
void __SetPageMovable(struct page *page, const struct movable_operations *ops);
-void __ClearPageMovable(struct page *page);
#else
static inline bool PageMovable(struct page *page) { return false; }
static inline void __SetPageMovable(struct page *page,
const struct movable_operations *ops)
{
}
-static inline void __ClearPageMovable(struct page *page)
-{
-}
#endif
static inline
diff --git a/mm/compaction.c b/mm/compaction.c
index 17455c5a4be05..889ec696ba96a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -137,17 +137,6 @@ void __SetPageMovable(struct page *page, const struct movable_operations *mops)
}
EXPORT_SYMBOL(__SetPageMovable);
-void __ClearPageMovable(struct page *page)
-{
- VM_BUG_ON_PAGE(!PageMovable(page), page);
- /*
- * This page still has the type of a movable page, but it's
- * actually not movable any more.
- */
- page->mapping = (void *)PAGE_MAPPING_MOVABLE;
-}
-EXPORT_SYMBOL(__ClearPageMovable);
-
/* Do not skip compaction more than 64 times */
#define COMPACT_MAX_DEFER_SHIFT 6
--
2.49.0
* [PATCH v2 15/29] mm/migration: remove PageMovable()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Previously, invoking __ClearPageMovable() on a page left __PageMovable()
returning true (the PAGE_MAPPING_MOVABLE flag in page->mapping remained
set), while PageMovable() returned false (the movable ops pointer was
gone). With __ClearPageMovable() removed, the two are exactly equivalent.
So we can replace PageMovable() checks with __PageMovable() checks. In
fact, __PageMovable() cannot change until a page is freed, so we can turn
some PageMovable() checks into sanity checks for __PageMovable().
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 2 --
mm/compaction.c | 15 ---------------
mm/migrate.c | 18 ++++++++++--------
3 files changed, 10 insertions(+), 25 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6eeda8eb1e0d8..25659a685e2aa 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -104,10 +104,8 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_COMPACTION
-bool PageMovable(struct page *page);
void __SetPageMovable(struct page *page, const struct movable_operations *ops);
#else
-static inline bool PageMovable(struct page *page) { return false; }
static inline void __SetPageMovable(struct page *page,
const struct movable_operations *ops)
{
diff --git a/mm/compaction.c b/mm/compaction.c
index 889ec696ba96a..5c37373017014 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,21 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-bool PageMovable(struct page *page)
-{
- const struct movable_operations *mops;
-
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- if (!__PageMovable(page))
- return false;
-
- mops = page_movable_ops(page);
- if (mops)
- return true;
-
- return false;
-}
-
void __SetPageMovable(struct page *page, const struct movable_operations *mops)
{
VM_BUG_ON_PAGE(!PageLocked(page), page);
diff --git a/mm/migrate.c b/mm/migrate.c
index 9a63bd338d30b..63a8c94c165e2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -87,9 +87,12 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out;
/*
- * Check movable flag before taking the page lock because
+ * Check for movable_ops pages before taking the page lock because
* we use non-atomic bitops on newly allocated page flags so
* unconditionally grabbing the lock ruins page's owner side.
+ *
+ * Note that once a page has movable_ops, it will stay that way
+ * until the page was freed.
*/
if (unlikely(!__PageMovable(page)))
goto out_putfolio;
@@ -108,7 +111,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- if (!PageMovable(page) || PageIsolated(page))
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ if (PageIsolated(page))
goto out_no_isolated;
mops = page_movable_ops(page);
@@ -149,11 +153,10 @@ static void putback_movable_ops_page(struct page *page)
*/
struct folio *folio = page_folio(page);
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
folio_lock(folio);
- /* If the page was released by it's owner, there is nothing to do. */
- if (PageMovable(page))
- page_movable_ops(page)->putback_page(page);
+ page_movable_ops(page)->putback_page(page);
ClearPageIsolated(page);
folio_unlock(folio);
folio_put(folio);
@@ -189,10 +192,9 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
{
int rc = MIGRATEPAGE_SUCCESS;
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
- /* If the page was released by it's owner, there is nothing to do. */
- if (PageMovable(src))
- rc = page_movable_ops(src)->migrate_page(dst, src, mode);
+ rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
ClearPageIsolated(src);
return rc;
--
2.49.0
* [PATCH v2 16/29] mm: rename __PageMovable() to page_has_movable_ops()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 15/29] mm/migration: remove PageMovable() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Let's make it clearer that we are talking about movable_ops pages.
While at it, convert a VM_BUG_ON to a VM_WARN_ON_ONCE_PAGE.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 2 +-
include/linux/page-flags.h | 2 +-
mm/compaction.c | 7 ++-----
mm/memory-failure.c | 4 ++--
mm/memory_hotplug.c | 10 ++++------
mm/migrate.c | 8 ++++----
mm/page_alloc.c | 2 +-
mm/page_isolation.c | 10 +++++-----
8 files changed, 20 insertions(+), 25 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 25659a685e2aa..e04035f70e36f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -115,7 +115,7 @@ static inline void __SetPageMovable(struct page *page,
static inline
const struct movable_operations *page_movable_ops(struct page *page)
{
- VM_BUG_ON(!__PageMovable(page));
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
return (const struct movable_operations *)
((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4fe5ee67535b2..c67163b73c5ec 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -750,7 +750,7 @@ static __always_inline bool __folio_test_movable(const struct folio *folio)
PAGE_MAPPING_MOVABLE;
}
-static __always_inline bool __PageMovable(const struct page *page)
+static __always_inline bool page_has_movable_ops(const struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
PAGE_MAPPING_MOVABLE;
diff --git a/mm/compaction.c b/mm/compaction.c
index 5c37373017014..41fd6a1fe9a33 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1056,11 +1056,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
* Skip any other type of page
*/
if (!PageLRU(page)) {
- /*
- * __PageMovable can return false positive so we need
- * to verify it under page_lock.
- */
- if (unlikely(__PageMovable(page)) &&
+ /* Isolation code will deal with any races. */
+ if (unlikely(page_has_movable_ops(page)) &&
!PageIsolated(page)) {
if (locked) {
unlock_page_lruvec_irqrestore(locked, flags);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index b91a33fb6c694..9e2cff1999347 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1388,8 +1388,8 @@ static inline bool HWPoisonHandlable(struct page *page, unsigned long flags)
if (PageSlab(page))
return false;
- /* Soft offline could migrate non-LRU movable pages */
- if ((flags & MF_SOFT_OFFLINE) && __PageMovable(page))
+ /* Soft offline could migrate movable_ops pages */
+ if ((flags & MF_SOFT_OFFLINE) && page_has_movable_ops(page))
return true;
return PageLRU(page) || is_free_buddy_page(page);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 62d45752f9f44..69a636e20f7bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1739,8 +1739,8 @@ bool mhp_range_allowed(u64 start, u64 size, bool need_mapping)
#ifdef CONFIG_MEMORY_HOTREMOVE
/*
- * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
- * non-lru movable pages and hugepages). Will skip over most unmovable
+ * Scan pfn range [start,end) to find movable/migratable pages (LRU and
+ * hugetlb folio, movable_ops pages). Will skip over most unmovable
* pages (esp., pages that can be skipped when offlining), but bail out on
* definitely unmovable pages.
*
@@ -1759,13 +1759,11 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
struct folio *folio;
page = pfn_to_page(pfn);
- if (PageLRU(page))
- goto found;
- if (__PageMovable(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
goto found;
/*
- * PageOffline() pages that are not marked __PageMovable() and
+ * PageOffline() pages that do not have movable_ops and
* have a reference count > 0 (after MEM_GOING_OFFLINE) are
* definitely unmovable. If their reference count would be 0,
* they could at least be skipped when offlining memory.
diff --git a/mm/migrate.c b/mm/migrate.c
index 63a8c94c165e2..3be7a53c13b66 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -94,7 +94,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
* Note that once a page has movable_ops, it will stay that way
* until the page was freed.
*/
- if (unlikely(!__PageMovable(page)))
+ if (unlikely(!page_has_movable_ops(page)))
goto out_putfolio;
/*
@@ -111,7 +111,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
if (PageIsolated(page))
goto out_no_isolated;
@@ -153,7 +153,7 @@ static void putback_movable_ops_page(struct page *page)
*/
struct folio *folio = page_folio(page);
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
folio_lock(folio);
page_movable_ops(page)->putback_page(page);
@@ -192,7 +192,7 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
{
int rc = MIGRATEPAGE_SUCCESS;
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b825f224af01f..4aefeb2ae927f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2006,7 +2006,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page,
* migration are movable. But we don't actually try
* isolating, as that would be expensive.
*/
- if (PageLRU(page) || __PageMovable(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
(*num_movable)++;
pfn++;
}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index ece3bfc56bcd5..b97b965b3ed01 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -21,9 +21,9 @@
* consequently belong to a single zone.
*
* PageLRU check without isolation or lru_lock could race so that
- * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
- * check without lock_page also may miss some movable non-lru pages at
- * race condition. So you can't expect this function should be exact.
+ * MIGRATE_MOVABLE block might include unmovable pages. Similarly, pages
+ * with movable_ops can only be identified some time after they were
+ * allocated. So you can't expect this function should be exact.
*
* Returns a page without holding a reference. If the caller wants to
* dereference that page (e.g., dumping), it has to make sure that it
@@ -133,7 +133,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) && PageOffline(page))
continue;
- if (__PageMovable(page) || PageLRU(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
continue;
/*
@@ -421,7 +421,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn,
* proper free and split handling for them.
*/
VM_WARN_ON_ONCE_PAGE(PageLRU(page), page);
- VM_WARN_ON_ONCE_PAGE(__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(page_has_movable_ops(page), page);
goto failed;
}
--
2.49.0
* [PATCH v2 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 18/29] mm: remove __folio_test_movable() David Hildenbrand
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Currently, we only support migration of individual (order-0) movable_ops
pages, so we can never run into a large folio with movable_ops here.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/page_isolation.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index b97b965b3ed01..f72b6cd38b958 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -92,7 +92,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
h = size_to_hstate(folio_size(folio));
if (h && !hugepage_migration_supported(h))
return page;
- } else if (!folio_test_lru(folio) && !__folio_test_movable(folio)) {
+ } else if (!folio_test_lru(folio)) {
return page;
}
--
2.49.0
* [PATCH v2 18/29] mm: remove __folio_test_movable()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Convert to page_has_movable_ops(). While at it, clean up the relevant
code a bit.
The data_race() in migrate_folio_unmap() is questionable: we already
hold a page reference, and concurrent modifications can no longer
happen (in other words, __ClearPageMovable() no longer exists). Drop it
for now; we'll rework page_has_movable_ops() soon either way so it no
longer relies on page->mapping.
Wherever we now cast from a folio to a page is a clear sign that this
code still has to be decoupled.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 6 ------
mm/migrate.c | 43 ++++++++++++--------------------------
mm/vmscan.c | 6 ++++--
3 files changed, 17 insertions(+), 38 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index c67163b73c5ec..4c27ebb689e3c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -744,12 +744,6 @@ static __always_inline bool PageAnon(const struct page *page)
return folio_test_anon(page_folio(page));
}
-static __always_inline bool __folio_test_movable(const struct folio *folio)
-{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_MOVABLE;
-}
-
static __always_inline bool page_has_movable_ops(const struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
diff --git a/mm/migrate.c b/mm/migrate.c
index 3be7a53c13b66..e307b142ab41a 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -219,12 +219,7 @@ void putback_movable_pages(struct list_head *l)
continue;
}
list_del(&folio->lru);
- /*
- * We isolated non-lru movable folio so here we can use
- * __folio_test_movable because LRU folio's mapping cannot
- * have PAGE_MAPPING_MOVABLE.
- */
- if (unlikely(__folio_test_movable(folio))) {
+ if (unlikely(page_has_movable_ops(&folio->page))) {
putback_movable_ops_page(&folio->page);
} else {
node_stat_mod_folio(folio, NR_ISOLATED_ANON +
@@ -237,26 +232,20 @@ void putback_movable_pages(struct list_head *l)
/* Must be called with an elevated refcount on the non-hugetlb folio */
bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{
- bool isolated, lru;
-
if (folio_test_hugetlb(folio))
return folio_isolate_hugetlb(folio, list);
- lru = !__folio_test_movable(folio);
- if (lru)
- isolated = folio_isolate_lru(folio);
- else
- isolated = isolate_movable_ops_page(&folio->page,
- ISOLATE_UNEVICTABLE);
-
- if (!isolated)
- return false;
-
- list_add(&folio->lru, list);
- if (lru)
+ if (page_has_movable_ops(&folio->page)) {
+ if (!isolate_movable_ops_page(&folio->page,
+ ISOLATE_UNEVICTABLE))
+ return false;
+ } else {
+ if (!folio_isolate_lru(folio))
+ return false;
node_stat_add_folio(folio, NR_ISOLATED_ANON +
folio_is_file_lru(folio));
-
+ }
+ list_add(&folio->lru, list);
return true;
}
@@ -1140,12 +1129,7 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
static void migrate_folio_done(struct folio *src,
enum migrate_reason reason)
{
- /*
- * Compaction can migrate also non-LRU pages which are
- * not accounted to NR_ISOLATED_*. They can be recognized
- * as __folio_test_movable
- */
- if (likely(!__folio_test_movable(src)) && reason != MR_DEMOTION)
+ if (likely(!page_has_movable_ops(&src->page)) && reason != MR_DEMOTION)
mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
folio_is_file_lru(src), -folio_nr_pages(src));
@@ -1164,7 +1148,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
int rc = -EAGAIN;
int old_page_state = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = data_race(!__folio_test_movable(src));
bool locked = false;
bool dst_locked = false;
@@ -1265,7 +1248,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
goto out;
dst_locked = true;
- if (unlikely(!is_lru)) {
+ if (unlikely(page_has_movable_ops(&src->page))) {
__migrate_folio_record(dst, old_page_state, anon_vma);
return MIGRATEPAGE_UNMAP;
}
@@ -1330,7 +1313,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
prev = dst->lru.prev;
list_del(&dst->lru);
- if (unlikely(__folio_test_movable(src))) {
+ if (unlikely(page_has_movable_ops(&src->page))) {
rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
if (rc)
goto out;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 331f157a6c62a..935013f73fff6 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1658,9 +1658,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
unsigned int noreclaim_flag;
list_for_each_entry_safe(folio, next, folio_list, lru) {
+ /* TODO: these pages should not even appear in this list. */
+ if (page_has_movable_ops(&folio->page))
+ continue;
if (!folio_test_hugetlb(folio) && folio_is_file_lru(folio) &&
- !folio_test_dirty(folio) && !__folio_test_movable(folio) &&
- !folio_test_unevictable(folio)) {
+ !folio_test_dirty(folio) && !folio_test_unevictable(folio)) {
folio_clear_active(folio);
list_move(&folio->lru, &clean_folios);
}
--
2.49.0
* [PATCH v2 19/29] mm: stop storing migration_ops in page->mapping
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 18/29] mm: remove __folio_test_movable() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
... instead, look them up statically based on the page type. Maybe in
the future we'll want a registration interface? At least for now, it can
easily be handled using the two page types that actually support page
migration.
The remaining use of page->mapping is to flag such pages as actually
being movable (i.e., having movable_ops), which we will change next.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
include/linux/migrate.h | 14 ++------------
include/linux/zsmalloc.h | 2 ++
mm/balloon_compaction.c | 1 -
mm/compaction.c | 5 ++---
mm/migrate.c | 23 +++++++++++++++++++++++
mm/zpdesc.h | 5 ++---
mm/zsmalloc.c | 8 +++-----
8 files changed, 35 insertions(+), 25 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 9bce8e9f5018c..a8a1706cc56f3 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- __SetPageMovable(page, &balloon_mops);
+ __SetPageMovable(page);
set_page_private(page, (unsigned long)balloon);
list_add(&page->lru, &balloon->pages);
}
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e04035f70e36f..6aece3f3c8be8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -104,23 +104,13 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page, const struct movable_operations *ops);
+void __SetPageMovable(struct page *page);
#else
-static inline void __SetPageMovable(struct page *page,
- const struct movable_operations *ops)
+static inline void __SetPageMovable(struct page *page)
{
}
#endif
-static inline
-const struct movable_operations *page_movable_ops(struct page *page)
-{
- VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
-
- return (const struct movable_operations *)
- ((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
-}
-
#ifdef CONFIG_NUMA_BALANCING
int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node);
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 13e9cc5490f71..f3ccff2d966cd 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -46,4 +46,6 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
void zs_obj_write(struct zs_pool *pool, unsigned long handle,
void *handle_mem, size_t mem_len);
+extern const struct movable_operations zsmalloc_mops;
+
#endif
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index e4f1a122d786b..2a4a649805c11 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -253,6 +253,5 @@ const struct movable_operations balloon_mops = {
.isolate_page = balloon_page_isolate,
.putback_page = balloon_page_putback,
};
-EXPORT_SYMBOL_GPL(balloon_mops);
#endif /* CONFIG_BALLOON_COMPACTION */
diff --git a/mm/compaction.c b/mm/compaction.c
index 41fd6a1fe9a33..348eb754cb227 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,11 +114,10 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page, const struct movable_operations *mops)
+void __SetPageMovable(struct page *page)
{
VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE((unsigned long)mops & PAGE_MAPPING_MOVABLE, page);
- page->mapping = (void *)((unsigned long)mops | PAGE_MAPPING_MOVABLE);
+ page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
}
EXPORT_SYMBOL(__SetPageMovable);
diff --git a/mm/migrate.c b/mm/migrate.c
index e307b142ab41a..fde6221562399 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -43,6 +43,8 @@
#include <linux/sched/sysctl.h>
#include <linux/memory-tiers.h>
#include <linux/pagewalk.h>
+#include <linux/balloon_compaction.h>
+#include <linux/zsmalloc.h>
#include <asm/tlbflush.h>
@@ -51,6 +53,27 @@
#include "internal.h"
#include "swap.h"
+static const struct movable_operations *page_movable_ops(struct page *page)
+{
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
+
+ /*
+ * If we enable page migration for a page of a certain type by marking
+ * it as movable, the page type must be sticky until the page gets freed
+ * back to the buddy.
+ */
+#ifdef CONFIG_BALLOON_COMPACTION
+ if (PageOffline(page))
+ /* Only balloon compaction sets PageOffline pages movable. */
+ return &balloon_mops;
+#endif /* CONFIG_BALLOON_COMPACTION */
+#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
+ if (PageZsmalloc(page))
+ return &zsmalloc_mops;
+#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
+ return NULL;
+}
+
/**
* isolate_movable_ops_page - isolate a movable_ops page for migration
* @page: The page.
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 5763f36039736..6855d9e2732d8 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -152,10 +152,9 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
return page_zpdesc(pfn_to_page(pfn));
}
-static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
- const struct movable_operations *mops)
+static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
{
- __SetPageMovable(zpdesc_page(zpdesc), mops);
+ __SetPageMovable(zpdesc_page(zpdesc));
}
static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index b12250e219bb7..4aaff7c26ea96 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1685,8 +1685,6 @@ static void lock_zspage(struct zspage *zspage)
#ifdef CONFIG_COMPACTION
-static const struct movable_operations zsmalloc_mops;
-
static void replace_sub_page(struct size_class *class, struct zspage *zspage,
struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc)
{
@@ -1709,7 +1707,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
set_first_obj_offset(newzpdesc, first_obj_offset);
if (unlikely(ZsHugePage(zspage)))
newzpdesc->handle = oldzpdesc->handle;
- __zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
+ __zpdesc_set_movable(newzpdesc);
}
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -1819,7 +1817,7 @@ static void zs_page_putback(struct page *page)
{
}
-static const struct movable_operations zsmalloc_mops = {
+const struct movable_operations zsmalloc_mops = {
.isolate_page = zs_page_isolate,
.migrate_page = zs_page_migrate,
.putback_page = zs_page_putback,
@@ -1882,7 +1880,7 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
do {
WARN_ON(!zpdesc_trylock(zpdesc));
- __zpdesc_set_movable(zpdesc, &zsmalloc_mops);
+ __zpdesc_set_movable(zpdesc);
zpdesc_unlock(zpdesc);
} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
}
--
2.49.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (18 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-11 9:58 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
` (9 subsequent siblings)
29 siblings, 1 reply; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Instead, let's use a page flag. As the page flag can result in
false positives, glue it to the page types for which we
support/implement movable_ops page migration.
We are reusing PG_uptodate, which is, for example, used to track file
system state and does not apply to movable_ops pages. Consequently,
warning in page_has_movable_ops() whenever the bit is set on other
page types could result in false-positive warnings.
Likely we could set the bit using a non-atomic update. In contrast to
page->mapping, though, others could be trying to update the page flags
concurrently, for example when trying to lock the folio; in
isolate_movable_ops_page(), we already take care of that by checking
whether the page has movable_ops before locking it. Let's start with
the atomic variant; we can switch to the non-atomic variant later,
once we are sure the other cases are similarly fine. Once we perform
the switch, we'll have to introduce __SETPAGEFLAG_NOOP().
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
include/linux/migrate.h | 8 -----
include/linux/page-flags.h | 54 ++++++++++++++++++++++++------
mm/compaction.c | 6 ----
mm/zpdesc.h | 2 +-
5 files changed, 46 insertions(+), 26 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index a8a1706cc56f3..b222b0737c466 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- __SetPageMovable(page);
+ SetPageMovableOps(page);
set_page_private(page, (unsigned long)balloon);
list_add(&page->lru, &balloon->pages);
}
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6aece3f3c8be8..acadd41e0b5cf 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
-#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page);
-#else
-static inline void __SetPageMovable(struct page *page)
-{
-}
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c27ebb689e3c..5f2b570735852 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -170,6 +170,11 @@ enum pageflags {
/* non-lru isolated movable page */
PG_isolated = PG_reclaim,
+#ifdef CONFIG_MIGRATION
+ /* this is a movable_ops page (for selected typed pages only) */
+ PG_movable_ops = PG_uptodate,
+#endif
+
/* Only valid for buddy pages. Used to track pages that are reported */
PG_reported = PG_uptodate,
@@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* bit; and then folio->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged page. See ksm.h.
*
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
- * page and then folio->mapping points to a struct movable_operations.
- *
* Please note that, confusingly, "folio_mapping" refers to the inode
* address_space which maps the folio from disk; whereas "folio_mapped"
* refers to user virtual address space into which the folio is mapped.
@@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
{
return folio_test_anon(page_folio(page));
}
-
-static __always_inline bool page_has_movable_ops(const struct page *page)
-{
- return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_MOVABLE;
-}
-
#ifdef CONFIG_KSM
/*
* A KSM page is one of those write-protected "shared pages" or "merged pages"
@@ -1133,6 +1128,45 @@ bool is_free_buddy_page(const struct page *page);
PAGEFLAG(Isolated, isolated, PF_ANY);
+#ifdef CONFIG_MIGRATION
+/*
+ * This page is migratable through movable_ops (for selected typed pages
+ * only).
+ *
+ * Page migration of such pages might fail, for example, if the page is
+ * already isolated by somebody else, or if the page is about to get freed.
+ *
+ * While a subsystem might set selected typed pages that support page migration
+ * as being movable through movable_ops, it must never clear this flag.
+ *
+ * This flag is only cleared when the page is freed back to the buddy.
+ *
+ * Only selected page types support this flag (see page_movable_ops()) and
+ * the flag might be used in other context for other pages. Always use
+ * page_has_movable_ops() instead.
+ */
+TESTPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+SETPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+#else /* !CONFIG_MIGRATION */
+TESTPAGEFLAG_FALSE(MovableOps, movable_ops);
+SETPAGEFLAG_NOOP(MovableOps, movable_ops);
+#endif /* CONFIG_MIGRATION */
+
+/**
+ * page_has_movable_ops - test for a movable_ops page
+ * @page: The page to test.
+ *
+ * Test whether this is a movable_ops page. Such pages will stay that
+ * way until freed.
+ *
+ * Returns true if this is a movable_ops page, otherwise false.
+ */
+static inline bool page_has_movable_ops(const struct page *page)
+{
+ return PageMovableOps(page) &&
+ (PageOffline(page) || PageZsmalloc(page));
+}
+
static __always_inline int PageAnonExclusive(const struct page *page)
{
VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 348eb754cb227..349f4ea0ec3e5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page)
-{
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
-}
-EXPORT_SYMBOL(__SetPageMovable);
/* Do not skip compaction more than 64 times */
#define COMPACT_MAX_DEFER_SHIFT 6
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 6855d9e2732d8..25bf5ea0beb83 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
{
- __SetPageMovable(zpdesc_page(zpdesc));
+ SetPageMovableOps(zpdesc_page(zpdesc));
}
static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
--
2.49.0
* [PATCH v2 21/29] mm: rename PG_isolated to PG_movable_ops_isolated
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (19 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
` (8 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
Let's rename the flag to make it clearer where it applies (not folios
...).
While at it, define the flag only with CONFIG_MIGRATION.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 16 +++++++++++-----
mm/compaction.c | 2 +-
mm/migrate.c | 14 +++++++-------
3 files changed, 19 insertions(+), 13 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5f2b570735852..8b0e5c7371e67 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -167,10 +167,9 @@ enum pageflags {
/* Remapped by swiotlb-xen. */
PG_xen_remapped = PG_owner_priv_1,
- /* non-lru isolated movable page */
- PG_isolated = PG_reclaim,
-
#ifdef CONFIG_MIGRATION
+ /* movable_ops page that is isolated for migration */
+ PG_movable_ops_isolated = PG_reclaim,
/* this is a movable_ops page (for selected typed pages only) */
PG_movable_ops = PG_uptodate,
#endif
@@ -1126,8 +1125,6 @@ static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
bool is_free_buddy_page(const struct page *page);
-PAGEFLAG(Isolated, isolated, PF_ANY);
-
#ifdef CONFIG_MIGRATION
/*
* This page is migratable through movable_ops (for selected typed pages
@@ -1147,9 +1144,18 @@ PAGEFLAG(Isolated, isolated, PF_ANY);
*/
TESTPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
SETPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+/*
+ * A movable_ops page has this flag set while it is isolated for migration.
+ * This flag primarily protects against concurrent migration attempts.
+ *
+ * Once migration ended (success or failure), the flag is cleared. The
+ * flag is managed by the migration core.
+ */
+PAGEFLAG(MovableOpsIsolated, movable_ops_isolated, PF_NO_TAIL);
#else /* !CONFIG_MIGRATION */
TESTPAGEFLAG_FALSE(MovableOps, movable_ops);
SETPAGEFLAG_NOOP(MovableOps, movable_ops);
+PAGEFLAG_FALSE(MovableOpsIsolated, movable_ops_isolated);
#endif /* CONFIG_MIGRATION */
/**
diff --git a/mm/compaction.c b/mm/compaction.c
index 349f4ea0ec3e5..bf021b31c7ece 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1051,7 +1051,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (!PageLRU(page)) {
/* Isolation code will deal with any races. */
if (unlikely(page_has_movable_ops(page)) &&
- !PageIsolated(page)) {
+ !PageMovableOpsIsolated(page)) {
if (locked) {
unlock_page_lruvec_irqrestore(locked, flags);
locked = NULL;
diff --git a/mm/migrate.c b/mm/migrate.c
index fde6221562399..7fd3d38410c42 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -135,7 +135,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out_putfolio;
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
- if (PageIsolated(page))
+ if (PageMovableOpsIsolated(page))
goto out_no_isolated;
mops = page_movable_ops(page);
@@ -146,8 +146,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out_no_isolated;
/* Driver shouldn't use the isolated flag */
- VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
- SetPageIsolated(page);
+ VM_WARN_ON_ONCE_PAGE(PageMovableOpsIsolated(page), page);
+ SetPageMovableOpsIsolated(page);
folio_unlock(folio);
return true;
@@ -177,10 +177,10 @@ static void putback_movable_ops_page(struct page *page)
struct folio *folio = page_folio(page);
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
- VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
+ VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(page), page);
folio_lock(folio);
page_movable_ops(page)->putback_page(page);
- ClearPageIsolated(page);
+ ClearPageMovableOpsIsolated(page);
folio_unlock(folio);
folio_put(folio);
}
@@ -216,10 +216,10 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
int rc = MIGRATEPAGE_SUCCESS;
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
- VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
+ VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(src), src);
rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
- ClearPageIsolated(src);
+ ClearPageMovableOpsIsolated(src);
return rc;
}
--
2.49.0
* [PATCH v2 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (20 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
` (7 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
KSM is the only remaining user, so let's rename the flag. While at it,
adjust the remaining page -> folio wording in the doc.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8b0e5c7371e67..094c8605a879e 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -697,10 +697,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* folio->mapping points to its anon_vma, not to a struct address_space;
* with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
- * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
- * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON
+ * On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
+ * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
* bit; and then folio->mapping points, not to an anon_vma, but to a private
- * structure which KSM associates with that merged page. See ksm.h.
+ * structure which KSM associates with that merged folio. See ksm.h.
*
* Please note that, confusingly, "folio_mapping" refers to the inode
* address_space which maps the folio from disk; whereas "folio_mapped"
@@ -714,9 +714,9 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* See mm/slab.h.
*/
#define PAGE_MAPPING_ANON 0x1
-#define PAGE_MAPPING_MOVABLE 0x2
-#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
-#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
+#define PAGE_MAPPING_ANON_KSM 0x2
+#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
+#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
static __always_inline bool folio_mapping_flags(const struct folio *folio)
{
--
2.49.0
* [PATCH v2 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (21 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
` (6 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
As PageMappingFlags() now only indicates anon (incl. KSM) folios, we
can simply check for PageAnon() and remove PageMappingFlags().
... and while at it, use the folio instead and operate on
folio->mapping.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 5 -----
mm/page_alloc.c | 7 +++----
2 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 094c8605a879e..fc159fa945351 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -723,11 +723,6 @@ static __always_inline bool folio_mapping_flags(const struct folio *folio)
return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
}
-static __always_inline bool PageMappingFlags(const struct page *page)
-{
- return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
-}
-
static __always_inline bool folio_test_anon(const struct folio *folio)
{
return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4aefeb2ae927f..78ddf1d43c6c1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1375,10 +1375,9 @@ __always_inline bool free_pages_prepare(struct page *page,
(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
}
}
- if (PageMappingFlags(page)) {
- if (PageAnon(page))
- mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
- page->mapping = NULL;
+ if (folio_test_anon(folio)) {
+ mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
+ folio->mapping = NULL;
}
if (unlikely(page_has_type(page)))
/* Reset the page_type (which overlays _mapcount) */
--
2.49.0
* [PATCH v2 24/29] mm/page-flags: remove folio_mapping_flags()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (22 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
` (5 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
It's unused and the page counterpart is gone, so let's remove it.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index fc159fa945351..e575ecf880e59 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -718,11 +718,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
-static __always_inline bool folio_mapping_flags(const struct folio *folio)
-{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
-}
-
static __always_inline bool folio_test_anon(const struct folio *folio)
{
return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
--
2.49.0
* [PATCH v2 25/29] mm: simplify folio_expected_ref_count()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (23 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
` (4 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
folio_test_anon() test only.
... but staring at the users, this function should never even have been
called on movable_ops pages. E.g.,
* __buffer_migrate_folio() does not make sense for them
* folio_migrate_mapping() does not make sense for them
* migrate_huge_page_move_mapping() does not make sense for them
* __migrate_folio() does not make sense for them
* ... and khugepaged should never stumble over them
Let's simply refuse typed pages (which includes slab) except hugetlb,
and WARN.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef40f68c1183d..805108d7bbc31 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2167,13 +2167,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
const int order = folio_order(folio);
int ref_count = 0;
- if (WARN_ON_ONCE(folio_test_slab(folio)))
+ if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
return 0;
if (folio_test_anon(folio)) {
/* One reference per page from the swapcache. */
ref_count += folio_test_swapcache(folio) << order;
- } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
+ } else {
/* One reference per page from the pagecache. */
ref_count += !!folio->mapping << order;
/* One reference from PG_private. */
--
2.49.0
* [PATCH v2 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_*
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (24 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
` (3 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
Now that the mapping flags are only used for folios, let's rename the
defines.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
fs/proc/page.c | 4 ++--
include/linux/fs.h | 2 +-
include/linux/mm_types.h | 1 -
include/linux/page-flags.h | 20 ++++++++++----------
include/linux/pagemap.h | 2 +-
mm/gup.c | 4 ++--
mm/internal.h | 2 +-
mm/ksm.c | 4 ++--
mm/rmap.c | 16 ++++++++--------
mm/util.c | 6 +++---
10 files changed, 30 insertions(+), 31 deletions(-)
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 999af26c72985..0cdc78c0d23fa 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -149,7 +149,7 @@ u64 stable_page_flags(const struct page *page)
k = folio->flags;
mapping = (unsigned long)folio->mapping;
- is_anon = mapping & PAGE_MAPPING_ANON;
+ is_anon = mapping & FOLIO_MAPPING_ANON;
/*
* pseudo flags for the well known (anonymous) memory mapped pages
@@ -158,7 +158,7 @@ u64 stable_page_flags(const struct page *page)
u |= 1 << KPF_MMAP;
if (is_anon) {
u |= 1 << KPF_ANON;
- if (mapping & PAGE_MAPPING_KSM)
+ if (mapping & FOLIO_MAPPING_KSM)
u |= 1 << KPF_KSM;
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c68c9a07cda33..9b0de18746815 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -526,7 +526,7 @@ struct address_space {
/*
* On most architectures that alignment is already the case; but
* must be enforced here for CRIS, to let the least significant bit
- * of struct page's "mapping" pointer be used for PAGE_MAPPING_ANON.
+ * of struct folio's "mapping" pointer be used for FOLIO_MAPPING_ANON.
*/
/* XArray tags, for tagging dirty and writeback pages in the pagecache. */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 804d269a4f5e8..1ec273b066915 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -105,7 +105,6 @@ struct page {
unsigned int order;
};
};
- /* See page-flags.h for PAGE_MAPPING_FLAGS */
struct address_space *mapping;
union {
pgoff_t __folio_index; /* Our offset within mapping. */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e575ecf880e59..970600d79daca 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -695,10 +695,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
/*
* On an anonymous folio mapped into a user virtual memory area,
* folio->mapping points to its anon_vma, not to a struct address_space;
- * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
+ * with the FOLIO_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
* On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
- * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
+ * the FOLIO_MAPPING_ANON_KSM bit may be set along with the FOLIO_MAPPING_ANON
* bit; and then folio->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged folio. See ksm.h.
*
@@ -713,21 +713,21 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* false before calling the following functions (e.g., folio_test_anon).
* See mm/slab.h.
*/
-#define PAGE_MAPPING_ANON 0x1
-#define PAGE_MAPPING_ANON_KSM 0x2
-#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
-#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
+#define FOLIO_MAPPING_ANON 0x1
+#define FOLIO_MAPPING_ANON_KSM 0x2
+#define FOLIO_MAPPING_KSM (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
+#define FOLIO_MAPPING_FLAGS (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
static __always_inline bool folio_test_anon(const struct folio *folio)
{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
+ return ((unsigned long)folio->mapping & FOLIO_MAPPING_ANON) != 0;
}
static __always_inline bool PageAnonNotKsm(const struct page *page)
{
unsigned long flags = (unsigned long)page_folio(page)->mapping;
- return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
+ return (flags & FOLIO_MAPPING_FLAGS) == FOLIO_MAPPING_ANON;
}
static __always_inline bool PageAnon(const struct page *page)
@@ -743,8 +743,8 @@ static __always_inline bool PageAnon(const struct page *page)
*/
static __always_inline bool folio_test_ksm(const struct folio *folio)
{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_KSM;
+ return ((unsigned long)folio->mapping & FOLIO_MAPPING_FLAGS) ==
+ FOLIO_MAPPING_KSM;
}
#else
FOLIO_TEST_FLAG_FALSE(ksm)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e63fbfbd5b0f3..10a222e68b851 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -502,7 +502,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
- VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
+ VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
"Anonymous mapping always supports large folio");
return mapping_max_folio_order(mapping) > 0;
diff --git a/mm/gup.c b/mm/gup.c
index 30d320719fa23..adffe663594dc 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2804,9 +2804,9 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
return false;
/* Anonymous folios pose no problem. */
- mapping_flags = (unsigned long)mapping & PAGE_MAPPING_FLAGS;
+ mapping_flags = (unsigned long)mapping & FOLIO_MAPPING_FLAGS;
if (mapping_flags)
- return mapping_flags & PAGE_MAPPING_ANON;
+ return mapping_flags & FOLIO_MAPPING_ANON;
/*
* At this point, we know the mapping is non-null and points to an
diff --git a/mm/internal.h b/mm/internal.h
index b7131bd3d1ad1..5b0f71e5434b2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -149,7 +149,7 @@ static inline void *folio_raw_mapping(const struct folio *folio)
{
unsigned long mapping = (unsigned long)folio->mapping;
- return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
+ return (void *)(mapping & ~FOLIO_MAPPING_FLAGS);
}
/*
diff --git a/mm/ksm.c b/mm/ksm.c
index ef73b25fd65a6..2b0210d41c553 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -893,7 +893,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
unsigned long kpfn;
expected_mapping = (void *)((unsigned long)stable_node |
- PAGE_MAPPING_KSM);
+ FOLIO_MAPPING_KSM);
again:
kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
folio = pfn_folio(kpfn);
@@ -1070,7 +1070,7 @@ static inline void folio_set_stable_node(struct folio *folio,
struct ksm_stable_node *stable_node)
{
VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page), folio);
- folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
+ folio->mapping = (void *)((unsigned long)stable_node | FOLIO_MAPPING_KSM);
}
#ifdef CONFIG_SYSFS
diff --git a/mm/rmap.c b/mm/rmap.c
index a15939453c41a..f93ce27132abc 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -503,12 +503,12 @@ struct anon_vma *folio_get_anon_vma(const struct folio *folio)
rcu_read_lock();
anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
- if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
goto out;
if (!folio_mapped(folio))
goto out;
- anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+ anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
if (!atomic_inc_not_zero(&anon_vma->refcount)) {
anon_vma = NULL;
goto out;
@@ -550,12 +550,12 @@ struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
retry:
rcu_read_lock();
anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
- if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
goto out;
if (!folio_mapped(folio))
goto out;
- anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+ anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
root_anon_vma = READ_ONCE(anon_vma->root);
if (down_read_trylock(&root_anon_vma->rwsem)) {
/*
@@ -1334,9 +1334,9 @@ void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_VMA(!anon_vma, vma);
- anon_vma += PAGE_MAPPING_ANON;
+ anon_vma += FOLIO_MAPPING_ANON;
/*
- * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
+ * Ensure that anon_vma and the FOLIO_MAPPING_ANON bit are written
* simultaneously, so a concurrent reader (eg folio_referenced()'s
* folio_test_anon()) will not see one without the other.
*/
@@ -1367,10 +1367,10 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
/*
* page_idle does a lockless/optimistic rmap scan on folio->mapping.
* Make sure the compiler doesn't split the stores of anon_vma and
- * the PAGE_MAPPING_ANON type identifier, otherwise the rmap code
+ * the FOLIO_MAPPING_ANON type identifier, otherwise the rmap code
* could mistake the mapping for a struct address_space and crash.
*/
- anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
+ anon_vma = (void *) anon_vma + FOLIO_MAPPING_ANON;
WRITE_ONCE(folio->mapping, (struct address_space *) anon_vma);
folio->index = linear_page_index(vma, address);
}
diff --git a/mm/util.c b/mm/util.c
index ce826ca82a11d..68ea833ba25f1 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -670,9 +670,9 @@ struct anon_vma *folio_anon_vma(const struct folio *folio)
{
unsigned long mapping = (unsigned long)folio->mapping;
- if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
return NULL;
- return (void *)(mapping - PAGE_MAPPING_ANON);
+ return (void *)(mapping - FOLIO_MAPPING_ANON);
}
/**
@@ -699,7 +699,7 @@ struct address_space *folio_mapping(struct folio *folio)
return swap_address_space(folio->swap);
mapping = folio->mapping;
- if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
+ if ((unsigned long)mapping & FOLIO_MAPPING_FLAGS)
return NULL;
return mapping;
--
2.49.0
^ permalink raw reply related [flat|nested] 33+ messages in thread
* [PATCH v2 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration"
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (25 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
` (2 subsequent siblings)
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's bring the docs up-to-date.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
Documentation/mm/page_migration.rst | 39 ++++++++++++++++++++---------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index 519b35a4caf5b..34602b254aa63 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -146,18 +146,33 @@ Steps:
18. The new page is moved to the LRU and can be scanned by the swapper,
etc. again.
-Non-LRU page migration
-======================
-
-Although migration originally aimed for reducing the latency of memory
-accesses for NUMA, compaction also uses migration to create high-order
-pages. For compaction purposes, it is also useful to be able to move
-non-LRU pages, such as zsmalloc and virtio-balloon pages.
-
-If a driver wants to make its pages movable, it should define a struct
-movable_operations. It then needs to call __SetPageMovable() on each
-page that it may be able to move. This uses the ``page->mapping`` field,
-so this field is not available for the driver to use for other purposes.
+movable_ops page migration
+==========================
+
+Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
+zsmalloc pages) can be migrated using the movable_ops migration framework.
+
+The "struct movable_operations" provide callbacks specific to a page type
+for isolating, migrating and un-isolating (putback) these pages.
+
+Once a page is indicated as having movable_ops, that condition must not
+change until the page was freed back to the buddy. This includes not
+changing/clearing the page type and not changing/clearing the
+PG_movable_ops page flag.
+
+Arbitrary drivers cannot currently make use of this framework, as it
+requires:
+
+(a) a page type
+(b) indicating them as possibly having movable_ops in page_has_movable_ops()
+ based on the page type
+(c) returning the movable_ops from page_movable_ops() based on the page
+ type
+(d) not reusing the PG_movable_ops and PG_movable_ops_isolated page flags
+ for other purposes
+
+For example, balloon drivers can make use of this framework through the
+balloon-compaction infrastructure residing in the core kernel.
Monitoring Migration
=====================
--
2.49.0
* [PATCH v2 28/29] mm/balloon_compaction: "movable_ops" doc updates
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (26 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
2025-07-04 21:06 ` [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) Andrew Morton
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's bring the docs up-to-date. Setting PG_movable_ops + page->private
very likely still has to be performed under the documented locks:
it's complicated.
We will rework this in the future, as we try to avoid using the
page lock.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index b222b0737c466..2fecfead91d26 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -4,12 +4,13 @@
*
* Common interface definitions for making balloon pages movable by compaction.
*
- * Balloon page migration makes use of the general non-lru movable page
+ * Balloon page migration makes use of the general "movable_ops page migration"
* feature.
*
* page->private is used to reference the responsible balloon device.
- * page->mapping is used in context of non-lru page migration to reference
- * the address space operations for page isolation/migration/compaction.
+ * That these pages have movable_ops, and which movable_ops apply,
+ * is derived from the page type (PageOffline()) combined with the
+ * PG_movable_ops flag (PageMovableOps()).
*
* As the page isolation scanning step a compaction thread does is a lockless
* procedure (from a page standpoint), it might bring some racy situations while
@@ -17,12 +18,10 @@
* and safely perform balloon's page compaction and migration we must, always,
* ensure following these simple rules:
*
- * i. when updating a balloon's page ->mapping element, strictly do it under
- * the following lock order, independently of the far superior
- * locking scheme (lru_lock, balloon_lock):
+ * i. Setting the PG_movable_ops flag and page->private with the following
+ * lock order
* +-page_lock(page);
* +--spin_lock_irq(&b_dev_info->pages_lock);
- * ... page->mapping updates here ...
*
* ii. isolation or dequeueing procedure must remove the page from balloon
* device page list under b_dev_info->pages_lock.
--
2.49.0
* [PATCH v2 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask()
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (27 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
@ 2025-07-04 10:25 ` David Hildenbrand
2025-07-04 21:06 ` [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) Andrew Morton
29 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-04 10:25 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's just special-case based on IS_ENABLED(CONFIG_BALLOON_COMPACTION)
like we did for balloon_page_finalize().
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 42 +++++++++++-------------------
1 file changed, 15 insertions(+), 27 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 2fecfead91d26..7cfe48769239e 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -77,6 +77,15 @@ static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
#ifdef CONFIG_BALLOON_COMPACTION
extern const struct movable_operations balloon_mops;
+/*
+ * balloon_page_device - get the b_dev_info descriptor for the balloon device
+ * that enqueues the given page.
+ */
+static inline struct balloon_dev_info *balloon_page_device(struct page *page)
+{
+ return (struct balloon_dev_info *)page_private(page);
+}
+#endif /* CONFIG_BALLOON_COMPACTION */
/*
* balloon_page_insert - insert a page into the balloon's page list and make
@@ -91,41 +100,20 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- SetPageMovableOps(page);
- set_page_private(page, (unsigned long)balloon);
- list_add(&page->lru, &balloon->pages);
-}
-
-/*
- * balloon_page_device - get the b_dev_info descriptor for the balloon device
- * that enqueues the given page.
- */
-static inline struct balloon_dev_info *balloon_page_device(struct page *page)
-{
- return (struct balloon_dev_info *)page_private(page);
-}
-
-static inline gfp_t balloon_mapping_gfp_mask(void)
-{
- return GFP_HIGHUSER_MOVABLE;
-}
-
-#else /* !CONFIG_BALLOON_COMPACTION */
-
-static inline void balloon_page_insert(struct balloon_dev_info *balloon,
- struct page *page)
-{
- __SetPageOffline(page);
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
+ SetPageMovableOps(page);
+ set_page_private(page, (unsigned long)balloon);
+ }
list_add(&page->lru, &balloon->pages);
}
static inline gfp_t balloon_mapping_gfp_mask(void)
{
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
+ return GFP_HIGHUSER_MOVABLE;
return GFP_HIGHUSER;
}
-#endif /* CONFIG_BALLOON_COMPACTION */
-
/*
* balloon_page_finalize - prepare a balloon page that was removed from the
* balloon list for release to the page allocator
--
2.49.0
* Re: [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1)
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (28 preceding siblings ...)
2025-07-04 10:25 ` [PATCH v2 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
@ 2025-07-04 21:06 ` Andrew Morton
29 siblings, 0 replies; 33+ messages in thread
From: Andrew Morton @ 2025-07-04 21:06 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Jonathan Corbet, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Jerrin Shaji George, Arnd Bergmann, Greg Kroah-Hartman,
Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Alexander Viro, Christian Brauner, Jan Kara, Zi Yan,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On Fri, 4 Jul 2025 12:24:54 +0200 David Hildenbrand <david@redhat.com> wrote:
> In the future, as we decouple "struct page" from "struct folio", pages
> that support "non-lru page migration" -- movable_ops page migration
> such as memory balloons and zsmalloc -- will no longer be folios. They
> will not have ->mapping, ->lru, and likely no refcount and no
> page lock. But they will have a type and flags 🙂
>
> This is the first part (other parts not written yet) of decoupling
> movable_ops page migration from folio migration.
>
> In this series, we get rid of the ->mapping usage, and start cleaning up
> the code + separating it from folio migration.
>
> Migration core will have to be further reworked to not treat movable_ops
> pages like folios. This is the first step into that direction.
>
> Heavily tested with virtio-balloon and lightly tested with zsmalloc
> on x86-64. Cross-compile-tested.
Thanks, I added this to mm-new. I suppressed the 1363 mm-commits
emails to avoid breaking the internet.
* Re: [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-04 10:25 ` [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
@ 2025-07-07 2:39 ` Sergey Senozhatsky
0 siblings, 0 replies; 33+ messages in thread
From: Sergey Senozhatsky @ 2025-07-07 2:39 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On (25/07/04 12:25), David Hildenbrand wrote:
> Instead, let's check in the callbacks if the page was already destroyed,
> which can be checked by looking at zpdesc->zspage (see reset_zpdesc()).
>
> If we detect that the page was destroyed:
>
> (1) Fail isolation, just like the migration core would
>
> (2) Fake migration success just like the migration core would
>
> In the putback case there is nothing to do, as we don't do anything just
> like the migration core would do.
>
> In the future, we should look into not letting these pages get destroyed
> while they are isolated -- and instead delaying that to the
> putback/migration call. Add a TODO for that.
>
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
* Re: [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-04 10:25 ` [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
@ 2025-07-11 9:58 ` David Hildenbrand
0 siblings, 0 replies; 33+ messages in thread
From: David Hildenbrand @ 2025-07-11 9:58 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
Andrew Morton, Jonathan Corbet, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Jerrin Shaji George, Arnd Bergmann, Greg Kroah-Hartman,
Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Alexander Viro, Christian Brauner, Jan Kara, Zi Yan,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On 04.07.25 12:25, David Hildenbrand wrote:
> Instead, let's use a page flag. As the page flag can result in
> false-positives, glue it to the page types for which we
> support/implement movable_ops page migration.
>
> We are reusing PG_uptodate, which is, for example, used to track file
> system state and does not apply to movable_ops pages. So a warning in
> case it is set in page_has_movable_ops() on other page types could
> result in false-positive warnings.
>
> Likely we could set the bit using a non-atomic update: in contrast to
> page->mapping, we could have others trying to update the flags
> concurrently when trying to lock the folio. In
> isolate_movable_ops_page(), we already take care of that by checking if
> the page has movable_ops before locking it. Let's start with the atomic
> variant, we could later switch to the non-atomic variant once we are
> sure other cases are similarly fine. Once we perform the switch, we'll
> have to introduce __SETPAGEFLAG_NOOP().
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> include/linux/balloon_compaction.h | 2 +-
> include/linux/migrate.h | 8 -----
> include/linux/page-flags.h | 54 ++++++++++++++++++++++++------
> mm/compaction.c | 6 ----
> mm/zpdesc.h | 2 +-
> 5 files changed, 46 insertions(+), 26 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index a8a1706cc56f3..b222b0737c466 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> struct page *page)
> {
> __SetPageOffline(page);
> - __SetPageMovable(page);
> + SetPageMovableOps(page);
> set_page_private(page, (unsigned long)balloon);
> list_add(&page->lru, &balloon->pages);
> }
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6aece3f3c8be8..acadd41e0b5cf 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>
> #endif /* CONFIG_MIGRATION */
>
> -#ifdef CONFIG_COMPACTION
> -void __SetPageMovable(struct page *page);
> -#else
> -static inline void __SetPageMovable(struct page *page)
> -{
> -}
> -#endif
> -
> #ifdef CONFIG_NUMA_BALANCING
> int migrate_misplaced_folio_prepare(struct folio *folio,
> struct vm_area_struct *vma, int node);
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 4c27ebb689e3c..5f2b570735852 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -170,6 +170,11 @@ enum pageflags {
> /* non-lru isolated movable page */
> PG_isolated = PG_reclaim,
>
> +#ifdef CONFIG_MIGRATION
> + /* this is a movable_ops page (for selected typed pages only) */
> + PG_movable_ops = PG_uptodate,
> +#endif
> +
> /* Only valid for buddy pages. Used to track pages that are reported */
> PG_reported = PG_uptodate,
>
> @@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> * bit; and then folio->mapping points, not to an anon_vma, but to a private
> * structure which KSM associates with that merged page. See ksm.h.
> *
> - * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
> - * page and then folio->mapping points to a struct movable_operations.
> - *
> * Please note that, confusingly, "folio_mapping" refers to the inode
> * address_space which maps the folio from disk; whereas "folio_mapped"
> * refers to user virtual address space into which the folio is mapped.
> @@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
> {
> return folio_test_anon(page_folio(page));
> }
> -
> -static __always_inline bool page_has_movable_ops(const struct page *page)
> -{
> - return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> - PAGE_MAPPING_MOVABLE;
> -}
> -
> #ifdef CONFIG_KSM
> /*
> * A KSM page is one of those write-protected "shared pages" or "merged pages"
> @@ -1133,6 +1128,45 @@ bool is_free_buddy_page(const struct page *page);
>
> PAGEFLAG(Isolated, isolated, PF_ANY);
>
> +#ifdef CONFIG_MIGRATION
> +/*
> + * This page is migratable through movable_ops (for selected typed pages
> + * only).
> + *
> + * Page migration of such pages might fail, for example, if the page is
> + * already isolated by somebody else, or if the page is about to get freed.
> + *
> + * While a subsystem might set selected typed pages that support page migration
> + * as being movable through movable_ops, it must never clear this flag.
> + *
> + * This flag is only cleared when the page is freed back to the buddy.
> + *
> + * Only selected page types support this flag (see page_movable_ops()) and
> + * the flag might be used in other context for other pages. Always use
> + * page_has_movable_ops() instead.
> + */
> +TESTPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
> +SETPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
> +#else /* !CONFIG_MIGRATION */
> +TESTPAGEFLAG_FALSE(MovableOps, movable_ops);
> +SETPAGEFLAG_NOOP(MovableOps, movable_ops);
> +#endif /* CONFIG_MIGRATION */
> +
> +/**
> + * page_has_movable_ops - test for a movable_ops page
> + * @page The page to test.
> + *
> + * Test whether this is a movable_ops page. Such pages will stay that
> + * way until freed.
> + *
> + * Returns true if this is a movable_ops page, otherwise false.
> + */
> +static inline bool page_has_movable_ops(const struct page *page)
> +{
> + return PageMovableOps(page) &&
> + (PageOffline(page) || PageZsmalloc(page));
> +}
> +
The following fixup on top:
From 3a52911a299d3328d9fa2aeba00170240795702d Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Fri, 11 Jul 2025 11:57:43 +0200
Subject: [PATCH] fixup: "mm: convert "movable" flag in page->mapping to a page
flag"
We're missing a ":".
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 970600d79daca..8e4d6eda8a8d6 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1150,7 +1150,7 @@ PAGEFLAG_FALSE(MovableOpsIsolated, movable_ops_isolated);
/**
* page_has_movable_ops - test for a movable_ops page
- * @page The page to test.
+ * @page: The page to test.
*
* Test whether this is a movable_ops page. Such pages will stay that
* way until freed.
--
2.50.1
--
Cheers,
David / dhildenb
^ permalink raw reply related [flat|nested] 33+ messages in thread
Thread overview: 33+ messages (newest: 2025-07-11 9:58 UTC)
2025-07-04 10:24 [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
2025-07-04 10:24 ` [PATCH v2 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
2025-07-07 2:39 ` Sergey Senozhatsky
2025-07-04 10:25 ` [PATCH v2 13/29] mm/balloon_compaction: " David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 15/29] mm/migration: remove PageMovable() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 18/29] mm: remove __folio_test_movable() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
2025-07-11 9:58 ` David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
2025-07-04 10:25 ` [PATCH v2 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
2025-07-04 21:06 ` [PATCH v2 00/29] mm/migration: rework movable_ops page migration (part 1) Andrew Morton