* [PATCH v1 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 14:45 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
` (28 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
The core will set PG_isolated only after mops->isolate_page() was
called. In case of the balloon, that is where we will remove it from
the balloon list. So we cannot have isolated pages in the balloon list.
Let's drop this unnecessary check.
Acked-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/balloon_compaction.c | 6 ------
1 file changed, 6 deletions(-)
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index d3e00731e2628..fcb60233aa35d 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -94,12 +94,6 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
if (!trylock_page(page))
continue;
- if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
- PageIsolated(page)) {
- /* raced with isolation */
- unlock_page(page);
- continue;
- }
balloon_page_delete(page);
__count_vm_event(BALLOON_DEFLATE);
list_add(&page->lru, pages);
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
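The invariant the patch relies on can be modeled in a small userspace sketch. All types and helpers below are hypothetical stand-ins, not the kernel's `struct page` or balloon API: isolation unlinks the page from the balloon list *before* the core marks it isolated, so a later walk of the list can never meet an isolated page, and the dropped `PageIsolated()` check could never trigger.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct page {
    struct page *prev, *next;  /* models page->lru */
    bool isolated;             /* models PG_isolated */
};

struct balloon {
    struct page head;          /* circular list of balloon pages */
};

static void balloon_init(struct balloon *b)
{
    b->head.prev = b->head.next = &b->head;
}

static void page_enqueue(struct balloon *b, struct page *p)
{
    p->next = b->head.next;
    p->prev = &b->head;
    b->head.next->prev = p;
    b->head.next = p;
}

/* models mops->isolate_page(): the page is unlinked first; only
 * afterwards does the core set PG_isolated, so the flag is never
 * visible on a page that is still in the balloon list */
static void page_isolate(struct page *p)
{
    p->prev->next = p->next;
    p->next->prev = p->prev;
    p->prev = p->next = NULL;
    p->isolated = true;
}

/* models the walk in balloon_page_list_dequeue() */
static int count_isolated_in_list(struct balloon *b)
{
    int n = 0;
    for (struct page *p = b->head.next; p != &b->head; p = p->next)
        if (p->isolated)
            n++;
    return n;
}
```

Isolating any page and then walking the list always yields zero isolated pages, which is exactly why the in-loop check was dead code.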
* Re: [PATCH v1 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list
2025-06-30 12:59 ` [PATCH v1 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
@ 2025-06-30 14:45 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 14:45 UTC (permalink / raw)
To: David Hildenbrand
On Mon, Jun 30, 2025 at 02:59:42PM +0200, David Hildenbrand wrote:
> The core will set PG_isolated only after mops->isolate_page() was
> called. In case of the balloon, that is where we will remove it from
> the balloon list. So we cannot have isolated pages in the balloon list.
Indeed, I see isolate_movable_ops_page() is the only place the beautiful +
consistent macro SetPageMovableOpsIsolated() is invoked, and
balloon_page_isolate() invokes list_del(&page->lru).
The only case where it doesn't do that is when it returns false, in which
case the flag wouldn't be set.
>
> Let's drop this unnecessary check.
>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
So,
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/balloon_compaction.c | 6 ------
> 1 file changed, 6 deletions(-)
>
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index d3e00731e2628..fcb60233aa35d 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -94,12 +94,6 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
> if (!trylock_page(page))
> continue;
>
> - if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) &&
> - PageIsolated(page)) {
> - /* raced with isolation */
> - unlock_page(page);
> - continue;
> - }
> balloon_page_delete(page);
> __count_vm_event(BALLOON_DEFLATE);
> list_add(&page->lru, pages);
> --
> 2.49.0
>
* [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-06-30 12:59 ` [PATCH v1 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 15:15 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
` (27 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Let's move the removal of the page from the balloon list into the single
caller, to remove the dependency on the PG_isolated flag and clarify
locking requirements.
We'll shuffle the operations a bit such that they logically make more sense
(e.g., remove from the list before clearing flags).
In balloon migration functions we can now move the balloon_page_finalize()
out of the balloon lock and perform the finalization just before dropping
the balloon reference.
Document that the page lock is currently required when modifying the
movability aspects of a page; hopefully we can soon decouple this from the
page lock.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
arch/powerpc/platforms/pseries/cmm.c | 2 +-
drivers/misc/vmw_balloon.c | 3 +-
drivers/virtio/virtio_balloon.c | 4 +--
include/linux/balloon_compaction.h | 43 +++++++++++-----------------
mm/balloon_compaction.c | 3 +-
5 files changed, 21 insertions(+), 34 deletions(-)
diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
index 5f4037c1d7fe8..5e0a718d1be7b 100644
--- a/arch/powerpc/platforms/pseries/cmm.c
+++ b/arch/powerpc/platforms/pseries/cmm.c
@@ -532,7 +532,6 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
balloon_page_insert(b_dev_info, newpage);
- balloon_page_delete(page);
b_dev_info->isolated_pages--;
spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
@@ -542,6 +541,7 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
*/
plpar_page_set_active(page);
+ balloon_page_finalize(page);
/* balloon page list reference */
put_page(page);
diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
index c817d8c216413..6653fc53c951c 100644
--- a/drivers/misc/vmw_balloon.c
+++ b/drivers/misc/vmw_balloon.c
@@ -1778,8 +1778,7 @@ static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
* @pages_lock . We keep holding @comm_lock since we will need it in a
* second.
*/
- balloon_page_delete(page);
-
+ balloon_page_finalize(page);
put_page(page);
/* Inflate */
diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
index 89da052f4f687..e299e18346a30 100644
--- a/drivers/virtio/virtio_balloon.c
+++ b/drivers/virtio/virtio_balloon.c
@@ -866,15 +866,13 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
tell_host(vb, vb->inflate_vq);
/* balloon's page migration 2nd step -- deflate "page" */
- spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
- balloon_page_delete(page);
- spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
set_page_pfns(vb, vb->pfns, page);
tell_host(vb, vb->deflate_vq);
mutex_unlock(&vb->balloon_lock);
+ balloon_page_finalize(page);
put_page(page); /* balloon reference */
return MIGRATEPAGE_SUCCESS;
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 5ca2d56996201..b9f19da37b089 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -97,27 +97,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
list_add(&page->lru, &balloon->pages);
}
-/*
- * balloon_page_delete - delete a page from balloon's page list and clear
- * the page->private assignement accordingly.
- * @page : page to be released from balloon's page list
- *
- * Caller must ensure the page is locked and the spin_lock protecting balloon
- * pages list is held before deleting a page from the balloon device.
- */
-static inline void balloon_page_delete(struct page *page)
-{
- __ClearPageOffline(page);
- __ClearPageMovable(page);
- set_page_private(page, 0);
- /*
- * No touch page.lru field once @page has been isolated
- * because VM is using the field.
- */
- if (!PageIsolated(page))
- list_del(&page->lru);
-}
-
/*
* balloon_page_device - get the b_dev_info descriptor for the balloon device
* that enqueues the given page.
@@ -141,12 +120,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
list_add(&page->lru, &balloon->pages);
}
-static inline void balloon_page_delete(struct page *page)
-{
- __ClearPageOffline(page);
- list_del(&page->lru);
-}
-
static inline gfp_t balloon_mapping_gfp_mask(void)
{
return GFP_HIGHUSER;
@@ -154,6 +127,22 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
#endif /* CONFIG_BALLOON_COMPACTION */
+/*
+ * balloon_page_finalize - prepare a balloon page that was removed from the
+ * balloon list for release to the page allocator
+ * @page: page to be released to the page allocator
+ *
+ * Caller must ensure that the page is locked.
+ */
+static inline void balloon_page_finalize(struct page *page)
+{
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
+ __ClearPageMovable(page);
+ set_page_private(page, 0);
+ }
+ __ClearPageOffline(page);
+}
+
/*
* balloon_page_push - insert a page into a page list.
* @head : pointer to list
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index fcb60233aa35d..ec176bdb8a78b 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -94,7 +94,8 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
if (!trylock_page(page))
continue;
- balloon_page_delete(page);
+ list_del(&page->lru);
+ balloon_page_finalize(page);
__count_vm_event(BALLOON_DEFLATE);
list_add(&page->lru, pages);
unlock_page(page);
--
2.49.0
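The split this patch introduces can be modeled in a short userspace sketch (hypothetical stand-ins for the kernel types, not the real API): the single dequeue caller unlinks the page itself, while `balloon_page_finalize()` only clears per-page state and never touches the list, which is why it also works on the migration path, where the page was already unlinked at isolation time.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct page {
    struct page *prev, *next;   /* models page->lru */
    bool offline;               /* models PG_offline */
    bool movable;               /* models the movable-ops marking */
    unsigned long priv;         /* models page->private */
};

static void list_unlink(struct page *p)
{
    p->prev->next = p->next;
    p->next->prev = p->prev;
    p->prev = p->next = NULL;
}

/* models balloon_page_finalize(): flag/state cleanup only,
 * no list manipulation -- callers handle that */
static void page_finalize(struct page *p)
{
    p->movable = false;
    p->priv = 0;
    p->offline = false;
}

/* models the deflate path in balloon_page_list_dequeue():
 * unlink first, then finalize */
static void dequeue_page(struct page *p)
{
    list_unlink(p);
    page_finalize(p);
}
```

On the migration path, the driver can call `page_finalize()` on a page whose list linkage is long gone, without any `PageIsolated()` special-casing.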
* Re: [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-06-30 12:59 ` [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
@ 2025-06-30 15:15 ` Lorenzo Stoakes
2025-07-01 7:58 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 15:15 UTC (permalink / raw)
To: David Hildenbrand
On Mon, Jun 30, 2025 at 02:59:43PM +0200, David Hildenbrand wrote:
> Let's move the removal of the page from the balloon list into the single
> caller, to remove the dependency on the PG_isolated flag and clarify
> locking requirements.
>
> We'll shuffle the operations a bit such that they logically make more sense
> (e.g., remove from the list before clearing flags).
>
> In balloon migration functions we can now move the balloon_page_finalize()
> out of the balloon lock and perform the finalization just before dropping
> the balloon reference.
>
> Document that the page lock is currently required when modifying the
> movability aspects of a page; hopefully we can soon decouple this from the
> page lock.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> arch/powerpc/platforms/pseries/cmm.c | 2 +-
> drivers/misc/vmw_balloon.c | 3 +-
> drivers/virtio/virtio_balloon.c | 4 +--
> include/linux/balloon_compaction.h | 43 +++++++++++-----------------
> mm/balloon_compaction.c | 3 +-
> 5 files changed, 21 insertions(+), 34 deletions(-)
>
> diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
> index 5f4037c1d7fe8..5e0a718d1be7b 100644
> --- a/arch/powerpc/platforms/pseries/cmm.c
> +++ b/arch/powerpc/platforms/pseries/cmm.c
> @@ -532,7 +532,6 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
>
> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> balloon_page_insert(b_dev_info, newpage);
> - balloon_page_delete(page);
We seem to just be removing this and not replacing with finalize, is this right?
> b_dev_info->isolated_pages--;
> spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
>
> @@ -542,6 +541,7 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
> */
> plpar_page_set_active(page);
>
> + balloon_page_finalize(page);
> /* balloon page list reference */
> put_page(page);
>
> diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
> index c817d8c216413..6653fc53c951c 100644
> --- a/drivers/misc/vmw_balloon.c
> +++ b/drivers/misc/vmw_balloon.c
> @@ -1778,8 +1778,7 @@ static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
> * @pages_lock . We keep holding @comm_lock since we will need it in a
> * second.
> */
> - balloon_page_delete(page);
> -
> + balloon_page_finalize(page);
> put_page(page);
>
> /* Inflate */
> diff --git a/drivers/virtio/virtio_balloon.c b/drivers/virtio/virtio_balloon.c
> index 89da052f4f687..e299e18346a30 100644
> --- a/drivers/virtio/virtio_balloon.c
> +++ b/drivers/virtio/virtio_balloon.c
> @@ -866,15 +866,13 @@ static int virtballoon_migratepage(struct balloon_dev_info *vb_dev_info,
> tell_host(vb, vb->inflate_vq);
>
> /* balloon's page migration 2nd step -- deflate "page" */
> - spin_lock_irqsave(&vb_dev_info->pages_lock, flags);
> - balloon_page_delete(page);
> - spin_unlock_irqrestore(&vb_dev_info->pages_lock, flags);
> vb->num_pfns = VIRTIO_BALLOON_PAGES_PER_PAGE;
> set_page_pfns(vb, vb->pfns, page);
> tell_host(vb, vb->deflate_vq);
>
> mutex_unlock(&vb->balloon_lock);
>
> + balloon_page_finalize(page);
> put_page(page); /* balloon reference */
>
> return MIGRATEPAGE_SUCCESS;
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 5ca2d56996201..b9f19da37b089 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -97,27 +97,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> list_add(&page->lru, &balloon->pages);
> }
>
> -/*
> - * balloon_page_delete - delete a page from balloon's page list and clear
> - * the page->private assignement accordingly.
> - * @page : page to be released from balloon's page list
> - *
> - * Caller must ensure the page is locked and the spin_lock protecting balloon
> - * pages list is held before deleting a page from the balloon device.
> - */
> -static inline void balloon_page_delete(struct page *page)
> -{
> - __ClearPageOffline(page);
> - __ClearPageMovable(page);
> - set_page_private(page, 0);
> - /*
> - * No touch page.lru field once @page has been isolated
> - * because VM is using the field.
> - */
> - if (!PageIsolated(page))
> - list_del(&page->lru);
> I don't see this check elsewhere, is it because, as per the 1/xx of this
> series, by the time we do the finalize an isolated page has already been
> removed from the list?
> -}
> -
> /*
> * balloon_page_device - get the b_dev_info descriptor for the balloon device
> * that enqueues the given page.
> @@ -141,12 +120,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> list_add(&page->lru, &balloon->pages);
> }
>
> -static inline void balloon_page_delete(struct page *page)
> -{
> - __ClearPageOffline(page);
> - list_del(&page->lru);
> -}
> -
> static inline gfp_t balloon_mapping_gfp_mask(void)
> {
> return GFP_HIGHUSER;
> @@ -154,6 +127,22 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
>
> #endif /* CONFIG_BALLOON_COMPACTION */
>
> +/*
> + * balloon_page_finalize - prepare a balloon page that was removed from the
> + * balloon list for release to the page allocator
> + * @page: page to be released to the page allocator
> + *
> + * Caller must ensure that the page is locked.
Can we assert this? Maybe mention that the balloon lock should not be held?
> + */
> +static inline void balloon_page_finalize(struct page *page)
> +{
> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
> + __ClearPageMovable(page);
> + set_page_private(page, 0);
> + }
Why do we check this? Is this function called from anywhere where that config won't be set?
> + __ClearPageOffline(page);
> +}
> +
> /*
> * balloon_page_push - insert a page into a page list.
> * @head : pointer to list
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index fcb60233aa35d..ec176bdb8a78b 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -94,7 +94,8 @@ size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info,
> if (!trylock_page(page))
> continue;
>
> - balloon_page_delete(page);
> + list_del(&page->lru);
> + balloon_page_finalize(page);
> __count_vm_event(BALLOON_DEFLATE);
> list_add(&page->lru, pages);
> unlock_page(page);
> --
> 2.49.0
>
* Re: [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-06-30 15:15 ` Lorenzo Stoakes
@ 2025-07-01 7:58 ` David Hildenbrand
2025-07-01 9:01 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 7:58 UTC (permalink / raw)
To: Lorenzo Stoakes
On 30.06.25 17:15, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:43PM +0200, David Hildenbrand wrote:
>> Let's move the removal of the page from the balloon list into the single
>> caller, to remove the dependency on the PG_isolated flag and clarify
>> locking requirements.
>>
>> We'll shuffle the operations a bit such that they logically make more sense
>> (e.g., remove from the list before clearing flags).
>>
>> In balloon migration functions we can now move the balloon_page_finalize()
>> out of the balloon lock and perform the finalization just before dropping
>> the balloon reference.
>>
>> Document that the page lock is currently required when modifying the
>> movability aspects of a page; hopefully we can soon decouple this from the
>> page lock.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> arch/powerpc/platforms/pseries/cmm.c | 2 +-
>> drivers/misc/vmw_balloon.c | 3 +-
>> drivers/virtio/virtio_balloon.c | 4 +--
>> include/linux/balloon_compaction.h | 43 +++++++++++-----------------
>> mm/balloon_compaction.c | 3 +-
>> 5 files changed, 21 insertions(+), 34 deletions(-)
>>
>> diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
>> index 5f4037c1d7fe8..5e0a718d1be7b 100644
>> --- a/arch/powerpc/platforms/pseries/cmm.c
>> +++ b/arch/powerpc/platforms/pseries/cmm.c
>> @@ -532,7 +532,6 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
>>
>> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
>> balloon_page_insert(b_dev_info, newpage);
>> - balloon_page_delete(page);
>
Hi Lorenzo,
as always, thanks for the detailed review!
> We seem to just be removing this and not replacing with finalize, is this right?
See below.
>
>> b_dev_info->isolated_pages--;
>> spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
>>
>> @@ -542,6 +541,7 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
>> */
>> plpar_page_set_active(page);
>>
>> + balloon_page_finalize(page);
^ here it is, next to the put_page() just like for the other cases.
Or did you mean something else?
>> /* balloon page list reference */
>> put_page(page);
>>
>> diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
>> index c817d8c216413..6653fc53c951c 100644
>> --- a/drivers/misc/vmw_balloon.c
>> +++ b/drivers/misc/vmw_balloon.c
>> @@ -1778,8 +1778,7 @@ static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
>> * @pages_lock . We keep holding @comm_lock since we will need it in a
>> * second.
>> */
>> - balloon_page_delete(page);
>> -
>> + balloon_page_finalize(page);
>> put_page(page);
>>
[...]
>> -/*
>> - * balloon_page_delete - delete a page from balloon's page list and clear
>> - * the page->private assignement accordingly.
>> - * @page : page to be released from balloon's page list
>> - *
>> - * Caller must ensure the page is locked and the spin_lock protecting balloon
>> - * pages list is held before deleting a page from the balloon device.
>> - */
>> -static inline void balloon_page_delete(struct page *page)
>> -{
>> - __ClearPageOffline(page);
>> - __ClearPageMovable(page);
>> - set_page_private(page, 0);
>> - /*
>> - * No touch page.lru field once @page has been isolated
>> - * because VM is using the field.
>> - */
>> - if (!PageIsolated(page))
>> - list_del(&page->lru);
>
> I don't see this check elsewhere, is it because, as per the 1/xx of this
> series, by the time we do the finalize an isolated page has already been
> removed from the list?
balloon_page_delete() was used on two paths
1) Removing a page from the balloon for deflation through
balloon_page_list_dequeue()
2) Removing an isolated page from the balloon for migration in the
per-driver migration handlers. Isolated pages were already removed from
the balloon list during ... isolation.
With this change, 1) does the list_del(&page->lru) manually and 2) only
calls balloon_page_finalize().
During 1) the same reasoning as in 1/xx applies: isolated pages cannot
be in the balloon list.
>
>> -}
>> -
>> /*
>> * balloon_page_device - get the b_dev_info descriptor for the balloon device
>> * that enqueues the given page.
>> @@ -141,12 +120,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
>> list_add(&page->lru, &balloon->pages);
>> }
>>
>> -static inline void balloon_page_delete(struct page *page)
>> -{
>> - __ClearPageOffline(page);
>> - list_del(&page->lru);
>> -}
>> -
>> static inline gfp_t balloon_mapping_gfp_mask(void)
>> {
>> return GFP_HIGHUSER;
>> @@ -154,6 +127,22 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
>>
>> #endif /* CONFIG_BALLOON_COMPACTION */
>>
>> +/*
>> + * balloon_page_finalize - prepare a balloon page that was removed from the
>> + * balloon list for release to the page allocator
>> + * @page: page to be released to the page allocator
>> + *
>> + * Caller must ensure that the page is locked.
>
> Can we assert this?
We could, but I'm planning on removing the page lock next (see patch
description), so not too keen to create more code around that.
> Maybe mention that the balloon lock should not be held?
Not a limitation. It could be called with it, just not a requirement today.
I suspect that once we remove the page lock, that we might use the
balloon lock and rework balloon_page_migrate() to take the lock. TBD.
>> + */
>> +static inline void balloon_page_finalize(struct page *page)
>> +{
>> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
>> + __ClearPageMovable(page);
>> + set_page_private(page, 0);
>> + }
>
> Why do we check this? Is this function called from anywhere where that config won't be set?
Sure. balloon_page_list_dequeue() is called from balloon_page_dequeue(),
which resides outside the CONFIG_BALLOON_COMPACTION ifdef in
mm/balloon_compaction.c.
At some point (not in this series) we should probably rename
balloon_compaction.c -> balloon.c to match CONFIG_MEMORY_BALLOON.
Because the compaction part is just one extra bit in there. (an
important one, but still, you can use the balloon infrastructure without
compaction/page migration)
--
Cheers,
David / dhildenb
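The point about callers outside the compaction ifdef can be illustrated with a userspace sketch. The `CONFIG_` macro here is a local stand-in, not the kernel's Kconfig machinery: the plain deflate path exists even when compaction support is compiled out, and in that configuration there is no movable-ops state to clear, so only the `PG_offline` clearing is unconditional.

```c
#include <assert.h>
#include <stdbool.h>

#ifndef CONFIG_BALLOON_COMPACTION
#define CONFIG_BALLOON_COMPACTION 1   /* flip to 0 to model a !compaction build */
#endif

struct page {
    bool offline;               /* models PG_offline */
    bool movable;               /* models the movable-ops marking */
    unsigned long priv;         /* models page->private */
};

/* models balloon_page_finalize(): the movable-ops cleanup is guarded,
 * mirroring the IS_ENABLED(CONFIG_BALLOON_COMPACTION) check */
static void page_finalize(struct page *p)
{
    if (CONFIG_BALLOON_COMPACTION) {
        p->movable = false;
        p->priv = 0;
    }
    p->offline = false;          /* needed in every configuration */
}
```

Compile-time evaluation of the constant keeps the guarded branch free in `!CONFIG_BALLOON_COMPACTION` builds, which is the same effect `IS_ENABLED()` gives in the kernel.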
* Re: [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-07-01 7:58 ` David Hildenbrand
@ 2025-07-01 9:01 ` Lorenzo Stoakes
2025-07-01 9:59 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 9:01 UTC (permalink / raw)
To: David Hildenbrand
On Tue, Jul 01, 2025 at 09:58:09AM +0200, David Hildenbrand wrote:
> On 30.06.25 17:15, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 02:59:43PM +0200, David Hildenbrand wrote:
> > > Let's move the removal of the page from the balloon list into the single
> > > caller, to remove the dependency on the PG_isolated flag and clarify
> > > locking requirements.
> > >
> > > We'll shuffle the operations a bit such that they logically make more sense
> > > (e.g., remove from the list before clearing flags).
> > >
> > > In balloon migration functions we can now move the balloon_page_finalize()
> > > out of the balloon lock and perform the finalization just before dropping
> > > the balloon reference.
> > >
> > > Document that the page lock is currently required when modifying the
> > > movability aspects of a page; hopefully we can soon decouple this from the
> > > page lock.
> > >
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
Based on below this LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > > ---
> > > arch/powerpc/platforms/pseries/cmm.c | 2 +-
> > > drivers/misc/vmw_balloon.c | 3 +-
> > > drivers/virtio/virtio_balloon.c | 4 +--
> > > include/linux/balloon_compaction.h | 43 +++++++++++-----------------
> > > mm/balloon_compaction.c | 3 +-
> > > 5 files changed, 21 insertions(+), 34 deletions(-)
> > >
> > > diff --git a/arch/powerpc/platforms/pseries/cmm.c b/arch/powerpc/platforms/pseries/cmm.c
> > > index 5f4037c1d7fe8..5e0a718d1be7b 100644
> > > --- a/arch/powerpc/platforms/pseries/cmm.c
> > > +++ b/arch/powerpc/platforms/pseries/cmm.c
> > > @@ -532,7 +532,6 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
> > >
> > > spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> > > balloon_page_insert(b_dev_info, newpage);
> > > - balloon_page_delete(page);
> >
>
> Hi Lorenzo,
>
> as always, thanks for the detailed review!
You're welcome :>)
>
> > We seem to just be removing this and not replacing with finalize, is this right?
>
> See below.
>
> >
> > > b_dev_info->isolated_pages--;
> > > spin_unlock_irqrestore(&b_dev_info->pages_lock, flags);
> > >
> > > @@ -542,6 +541,7 @@ static int cmm_migratepage(struct balloon_dev_info *b_dev_info,
> > > */
> > > plpar_page_set_active(page);
> > >
> > > + balloon_page_finalize(page);
>
> ^ here it is, next to the put_page() just like for the other cases.
OK so it's just moved to a different place for consistency.
>
> Or did you mean something else?
No this is what I meant :)
>
> > > /* balloon page list reference */
> > > put_page(page);
> > >
> > > diff --git a/drivers/misc/vmw_balloon.c b/drivers/misc/vmw_balloon.c
> > > index c817d8c216413..6653fc53c951c 100644
> > > --- a/drivers/misc/vmw_balloon.c
> > > +++ b/drivers/misc/vmw_balloon.c
> > > @@ -1778,8 +1778,7 @@ static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info,
> > > * @pages_lock . We keep holding @comm_lock since we will need it in a
> > > * second.
> > > */
> > > - balloon_page_delete(page);
> > > -
> > > + balloon_page_finalize(page);
> > > put_page(page);
> > >
>
>
> [...]
>
> > > -/*
> > > - * balloon_page_delete - delete a page from balloon's page list and clear
> > > - * the page->private assignement accordingly.
> > > - * @page : page to be released from balloon's page list
> > > - *
> > > - * Caller must ensure the page is locked and the spin_lock protecting balloon
> > > - * pages list is held before deleting a page from the balloon device.
> > > - */
> > > -static inline void balloon_page_delete(struct page *page)
> > > -{
> > > - __ClearPageOffline(page);
> > > - __ClearPageMovable(page);
> > > - set_page_private(page, 0);
> > > - /*
> > > - * No touch page.lru field once @page has been isolated
> > > - * because VM is using the field.
> > > - */
> > > - if (!PageIsolated(page))
> > > - list_del(&page->lru);
> >
> > I don't see this check elsewhere, is it because, as per the 1/xx of this
> > series, by the time we do the finalize an isolated page has already been
> > removed from the list?
>
> balloon_page_delete() was used on two paths
>
> 1) Removing a page from the balloon for deflation through
> balloon_page_list_dequeue()
>
> 2) Removing an isolated page from the balloon for migration in the
> per-driver migration handlers. Isolated pages were already removed from the
> balloon list during ... isolation.
>
> With this change, 1) does the list_del(&page->lru) manually and 2) only
> calls balloon_page_finalize().
>
> During 1) the same reasoning as in 1/xx applies: isolated pages cannot be in
> the balloon list.
Right yeah this is what I thought, thanks!
>
> >
> > > -}
> > > -
> > > /*
> > > * balloon_page_device - get the b_dev_info descriptor for the balloon device
> > > * that enqueues the given page.
> > > @@ -141,12 +120,6 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> > > list_add(&page->lru, &balloon->pages);
> > > }
> > >
> > > -static inline void balloon_page_delete(struct page *page)
> > > -{
> > > - __ClearPageOffline(page);
> > > - list_del(&page->lru);
> > > -}
> > > -
> > > static inline gfp_t balloon_mapping_gfp_mask(void)
> > > {
> > > return GFP_HIGHUSER;
> > > @@ -154,6 +127,22 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
> > >
> > > #endif /* CONFIG_BALLOON_COMPACTION */
> > >
> > > +/*
> > > + * balloon_page_finalize - prepare a balloon page that was removed from the
> > > + * balloon list for release to the page allocator
> > > + * @page: page to be released to the page allocator
> > > + *
> > > + * Caller must ensure that the page is locked.
> >
> > Can we assert this?
>
> We could, but I'm planning on removing the page lock next (see patch
> description), so not too keen to create more code around that.
>
> Maybe mention that the balloon lock should not be held?
>
> Not a limitation. It could be called with it, just not a requirement today.
>
> I suspect that once we remove the page lock, that we might use the balloon
> lock and rework balloon_page_migrate() to take the lock. TBD.
OK fair enough!
>
> > > + */
> > > +static inline void balloon_page_finalize(struct page *page)
> > > +{
> > > + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
> > > + __ClearPageMovable(page);
> > > + set_page_private(page, 0);
> > > + }
> >
> > Why do we check this? Is this function called from anywhere where that config won't be set?
>
> Sure. balloon_page_list_dequeue() is called from balloon_page_dequeue(),
> which resides outside the CONFIG_BALLOON_COMPACTION ifdef in
> mm/balloon_compaction.c.
>
> At some point (not in this series) we should probably rename
>
> balloon_compaction.c -> balloon.c
>
> To match CONFIG_MEMORY_BALLOON.
>
> Because the compaction part is just one extra bit in there (an important
> one, but still, you can use the balloon infrastructure without
> compaction/page migration).
Yeah this would be nice!
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize()
2025-07-01 9:01 ` Lorenzo Stoakes
@ 2025-07-01 9:59 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 9:59 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
[...]
>>>> -{
>>>> - __ClearPageOffline(page);
>>>> - __ClearPageMovable(page);
>>>> - set_page_private(page, 0);
>>>> - /*
>>>> - * No touch page.lru field once @page has been isolated
>>>> - * because VM is using the field.
>>>> - */
>>>> - if (!PageIsolated(page))
>>>> - list_del(&page->lru);
>>>
>>> I don't see this check elsewhere, is it because, as per the 1/xx of this series,
>>> by the time we do the finalize, an isolated page can no longer be on the list?
>>
>> balloon_page_delete() was used on two paths
>>
>> 1) Removing a page from the balloon for deflation through
>> balloon_page_list_dequeue()
>>
>> 2) Removing an isolated page from the balloon for migration in the
>> per-driver migration handlers. Isolated pages were already removed from the
>> balloon list during ... isolation.
>>
>> With this change, 1) does the list_del(&page->lru) manually and 2) only
>> calls balloon_page_finalize().
>>
>> During 1) the same reasoning as in 1/xx applies: isolated pages cannot be in
>> the balloon list.
>
> Right yeah this is what I thought, thanks!
I'll add some of that to the patch description!
--
Cheers,
David / dhildenb
* [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
2025-06-30 12:59 ` [PATCH v1 01/29] mm/balloon_compaction: we cannot have isolated pages in the balloon list David Hildenbrand
2025-06-30 12:59 ` [PATCH v1 02/29] mm/balloon_compaction: convert balloon_page_delete() to balloon_page_finalize() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 15:17 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
` (26 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's drop these checks; these are conditions the core migration code
must make sure will hold either way, no need to double check.
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zpdesc.h | 5 -----
mm/zsmalloc.c | 5 -----
2 files changed, 10 deletions(-)
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index d3df316e5bb7b..5cb7e3de43952 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -168,11 +168,6 @@ static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
__ClearPageZsmalloc(zpdesc_page(zpdesc));
}
-static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
-{
- return PageIsolated(zpdesc_page(zpdesc));
-}
-
static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
{
return page_zone(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 999b513c7fdff..7f1431f2be98f 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1719,8 +1719,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
* Page is locked so zspage couldn't be destroyed. For detail, look at
* lock_zspage in free_zspage.
*/
- VM_BUG_ON_PAGE(PageIsolated(page), page);
-
return true;
}
@@ -1739,8 +1737,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
unsigned long old_obj, new_obj;
unsigned int obj_idx;
- VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
-
/* The page is locked, so this pointer must remain valid */
zspage = get_zspage(zpdesc);
pool = zspage->pool;
@@ -1811,7 +1807,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
static void zs_page_putback(struct page *page)
{
- VM_BUG_ON_PAGE(!PageIsolated(page), page);
}
static const struct movable_operations zsmalloc_mops = {
--
2.49.0
* Re: [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
2025-06-30 12:59 ` [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
@ 2025-06-30 15:17 ` Lorenzo Stoakes
2025-07-01 8:03 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 15:17 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:44PM +0200, David Hildenbrand wrote:
> Let's drop these checks; these are conditions the core migration code
> must make sure will hold either way, no need to double check.
>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, one comment below.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/zpdesc.h | 5 -----
> mm/zsmalloc.c | 5 -----
> 2 files changed, 10 deletions(-)
>
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index d3df316e5bb7b..5cb7e3de43952 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -168,11 +168,6 @@ static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
> __ClearPageZsmalloc(zpdesc_page(zpdesc));
> }
>
> -static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
> -{
> - return PageIsolated(zpdesc_page(zpdesc));
> -}
> -
> static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
> {
> return page_zone(zpdesc_page(zpdesc));
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 999b513c7fdff..7f1431f2be98f 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1719,8 +1719,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
> * Page is locked so zspage couldn't be destroyed. For detail, look at
> * lock_zspage in free_zspage.
> */
> - VM_BUG_ON_PAGE(PageIsolated(page), page);
> -
> return true;
> }
>
> @@ -1739,8 +1737,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> unsigned long old_obj, new_obj;
> unsigned int obj_idx;
>
> - VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
> -
> /* The page is locked, so this pointer must remain valid */
> zspage = get_zspage(zpdesc);
> pool = zspage->pool;
> @@ -1811,7 +1807,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>
> static void zs_page_putback(struct page *page)
> {
> - VM_BUG_ON_PAGE(!PageIsolated(page), page);
> }
Can we just drop zs_page_putback from movable_operations() now this is empty?
>
> static const struct movable_operations zsmalloc_mops = {
> --
> 2.49.0
>
* Re: [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
2025-06-30 15:17 ` Lorenzo Stoakes
@ 2025-07-01 8:03 ` David Hildenbrand
2025-07-01 8:57 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:03 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 17:17, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:44PM +0200, David Hildenbrand wrote:
>> Let's drop these checks; these are conditions the core migration code
>> must make sure will hold either way, no need to double check.
>>
>> Acked-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> LGTM, one comment below.
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
Thanks!
>> ---
>> mm/zpdesc.h | 5 -----
>> mm/zsmalloc.c | 5 -----
>> 2 files changed, 10 deletions(-)
>>
>> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
>> index d3df316e5bb7b..5cb7e3de43952 100644
>> --- a/mm/zpdesc.h
>> +++ b/mm/zpdesc.h
>> @@ -168,11 +168,6 @@ static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
>> __ClearPageZsmalloc(zpdesc_page(zpdesc));
>> }
>>
>> -static inline bool zpdesc_is_isolated(struct zpdesc *zpdesc)
>> -{
>> - return PageIsolated(zpdesc_page(zpdesc));
>> -}
>> -
>> static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
>> {
>> return page_zone(zpdesc_page(zpdesc));
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index 999b513c7fdff..7f1431f2be98f 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -1719,8 +1719,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
>> * Page is locked so zspage couldn't be destroyed. For detail, look at
>> * lock_zspage in free_zspage.
>> */
>> - VM_BUG_ON_PAGE(PageIsolated(page), page);
>> -
>> return true;
>> }
>>
>> @@ -1739,8 +1737,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>> unsigned long old_obj, new_obj;
>> unsigned int obj_idx;
>>
>> - VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
>> -
>> /* The page is locked, so this pointer must remain valid */
>> zspage = get_zspage(zpdesc);
>> pool = zspage->pool;
>> @@ -1811,7 +1807,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>>
>> static void zs_page_putback(struct page *page)
>> {
>> - VM_BUG_ON_PAGE(!PageIsolated(page), page);
>> }
>
> Can we just drop zs_page_putback from movable_operations() now this is empty?
Common code expects there to be a callback, and I don't want to change
that. Long-term I assume it will rather indicate a BUG if there is no
putback handler, not something we want to encourage.
Likely, once we rework things so that isolated pages cannot get freed
here, we'd have to handle that on the putback path (realize that the page
can be freed and free it) -- a TODO for that is added in #12.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs
2025-07-01 8:03 ` David Hildenbrand
@ 2025-07-01 8:57 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 8:57 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 10:03:57AM +0200, David Hildenbrand wrote:
> > > diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> > > index 999b513c7fdff..7f1431f2be98f 100644
> > > --- a/mm/zsmalloc.c
> > > +++ b/mm/zsmalloc.c
> > > @@ -1719,8 +1719,6 @@ static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
> > > * Page is locked so zspage couldn't be destroyed. For detail, look at
> > > * lock_zspage in free_zspage.
> > > */
> > > - VM_BUG_ON_PAGE(PageIsolated(page), page);
> > > -
> > > return true;
> > > }
> > >
> > > @@ -1739,8 +1737,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > > unsigned long old_obj, new_obj;
> > > unsigned int obj_idx;
> > >
> > > - VM_BUG_ON_PAGE(!zpdesc_is_isolated(zpdesc), zpdesc_page(zpdesc));
> > > -
> > > /* The page is locked, so this pointer must remain valid */
> > > zspage = get_zspage(zpdesc);
> > > pool = zspage->pool;
> > > @@ -1811,7 +1807,6 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > >
> > > static void zs_page_putback(struct page *page)
> > > {
> > > - VM_BUG_ON_PAGE(!PageIsolated(page), page);
> > > }
> >
> > Can we just drop zs_page_putback from movable_operations() now this is empty?
>
> Common code expects there to be a callback, and I don't want to change that.
> Long-term I assume it will rather indicate a BUG if there is no putback
> handler, not something we want to encourage.
>
> Likely, once we rework things so that isolated pages cannot get freed
> here, we'd have to handle that on the putback path (realize that the page
> can be freed and free it) -- a TODO for that is added in #12.
Ack thanks!
* [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (2 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 03/29] mm/zsmalloc: drop PageIsolated() related VM_BUG_ONs David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 15:27 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
` (25 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Currently, any user of page types must clear that type before freeing
a page back to the buddy, otherwise we'll run into mapcount related
sanity checks (because the page type currently overlays the page
mapcount).
Let's allow page type users to skip clearing the page type, by letting
the buddy handle it instead.
We'll focus on having a page type set on the first page of a larger
allocation only.
With this change, we can reliably identify typed folios even though
they might be in the process of getting freed, which will come in handy
in migration code (at least in the transition phase).
In the future we might want to warn on some page types. Instead of
having an "allow list", let's rather wait until we know about once that
should go on such a "disallow list".
Reviewed-by: Zi Yan <ziy@nvidia.com>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/page_alloc.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 858bc17653af9..44e56d31cfeb1 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
page->mapping = NULL;
}
+ if (unlikely(page_has_type(page)))
+ page->page_type = UINT_MAX;
+
if (is_check_pages_enabled()) {
if (free_page_is_bad(page))
bad++;
--
2.49.0
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-06-30 12:59 ` [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
@ 2025-06-30 15:27 ` Lorenzo Stoakes
2025-07-01 8:17 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 15:27 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:45PM +0200, David Hildenbrand wrote:
> Currently, any user of page types must clear that type before freeing
> a page back to the buddy, otherwise we'll run into mapcount related
> sanity checks (because the page type currently overlays the page
> mapcount).
>
> Let's allow page type users to skip clearing the page type, by letting
> the buddy handle it instead.
>
> We'll focus on having a page type set on the first page of a larger
> allocation only.
>
> With this change, we can reliably identify typed folios even though
> they might be in the process of getting freed, which will come in handy
> in migration code (at least in the transition phase).
>
> In the future we might want to warn on some page types. Instead of
> having an "allow list", let's rather wait until we know about one that
> should go on such a "disallow list".
Is the idea here to get this to show up on folio dumps or?
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/page_alloc.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 858bc17653af9..44e56d31cfeb1 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> page->mapping = NULL;
> }
> + if (unlikely(page_has_type(page)))
> + page->page_type = UINT_MAX;
Feels like this could do with a comment!
> +
> if (is_check_pages_enabled()) {
> if (free_page_is_bad(page))
> bad++;
> --
> 2.49.0
>
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-06-30 15:27 ` Lorenzo Stoakes
@ 2025-07-01 8:17 ` David Hildenbrand
2025-07-01 8:27 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:17 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 17:27, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:45PM +0200, David Hildenbrand wrote:
>> Currently, any user of page types must clear that type before freeing
>> a page back to the buddy, otherwise we'll run into mapcount related
>> sanity checks (because the page type currently overlays the page
>> mapcount).
>>
>> Let's allow for not clearing the page type by page type users by letting
>> the buddy handle it instead.
>>
>> We'll focus on having a page type set on the first page of a larger
>> allocation only.
>>
>> With this change, we can reliably identify typed folios even though
>> they might be in the process of getting freed, which will come in handy
>> in migration code (at least in the transition phase).
>>
>> In the future we might want to warn on some page types. Instead of
>> having an "allow list", let's rather wait until we know about once that
>> should go on such a "disallow list".
>
> Is the idea here to get this to show up on folio dumps or?
As part of the netmem_desc series, there was a discussion about removing
the mystical PP checks -- page_pool_page_is_pp() in page_alloc.c and
replacing them by a proper page type check.
In that case, we would probably want to warn in case we get such a
netmem page unexpectedly freed.
> But that page type does not exist in the code yet, so the sanity check
> must be added once it is introduced.
>
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> mm/page_alloc.c | 3 +++
>> 1 file changed, 3 insertions(+)
>>
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 858bc17653af9..44e56d31cfeb1 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
>> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>> page->mapping = NULL;
>> }
>> + if (unlikely(page_has_type(page)))
>> + page->page_type = UINT_MAX;
>
> Feels like this could do with a comment!
/* Reset the page_type -> _mapcount to -1 */
--
Cheers,
David / dhildenb
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-07-01 8:17 ` David Hildenbrand
@ 2025-07-01 8:27 ` Lorenzo Stoakes
2025-07-01 8:34 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 8:27 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 10:17:13AM +0200, David Hildenbrand wrote:
> On 30.06.25 17:27, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 02:59:45PM +0200, David Hildenbrand wrote:
> > > Currently, any user of page types must clear that type before freeing
> > > a page back to the buddy, otherwise we'll run into mapcount related
> > > sanity checks (because the page type currently overlays the page
> > > mapcount).
> > >
> > > Let's allow page type users to skip clearing the page type, by letting
> > > the buddy handle it instead.
> > >
> > > We'll focus on having a page type set on the first page of a larger
> > > allocation only.
> > >
> > > With this change, we can reliably identify typed folios even though
> > > they might be in the process of getting freed, which will come in handy
> > > in migration code (at least in the transition phase).
> > >
> > > In the future we might want to warn on some page types. Instead of
> > > having an "allow list", let's rather wait until we know about one that
> > > should go on such a "disallow list".
> >
> > Is the idea here to get this to show up on folio dumps or?
>
> As part of the netmem_desc series, there was a discussion about removing the
> mystical PP checks -- page_pool_page_is_pp() in page_alloc.c and replacing
> them by a proper page type check.
>
> In that case, we would probably want to warn in case we get such a netmem
> page unexpectedly freed.
>
> But, that page type does not exist yet in code, so the sanity check must be
> added once introduced.
OK, and I realise that the UINT_MAX thing is a convention for how a reset
page_type looks anyway now.
>
> >
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Acked-by: Harry Yoo <harry.yoo@oracle.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> > > mm/page_alloc.c | 3 +++
> > > 1 file changed, 3 insertions(+)
> > >
> > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > index 858bc17653af9..44e56d31cfeb1 100644
> > > --- a/mm/page_alloc.c
> > > +++ b/mm/page_alloc.c
> > > @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
> > > mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> > > page->mapping = NULL;
> > > }
> > > + if (unlikely(page_has_type(page)))
> > > + page->page_type = UINT_MAX;
> >
> > Feels like this could do with a comment!
>
> /* Reset the page_type -> _mapcount to -1 */
Hm this feels like saying 'the reason we set it to -1 is to set it to -1' :P
I'd be fine with something simple like
/* Set page_type to reset value */
But... Can't we just put a #define somewhere here to make life easy? Like:
include/linux/page-flags.h | 17 ++++++++++-------
1 file changed, 10 insertions(+), 7 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4fe5ee67535b..c2abf66ebbce 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -197,6 +197,8 @@ enum pageflags {
#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
DECLARE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
+#define PAGE_TYPE_RESET (UINT_MAX)
+
/*
* Return the real head page struct iff the @page is a fake head page, otherwise
* return the @page itself. See Documentation/mm/vmemmap_dedup.rst.
@@ -986,16 +988,16 @@ static __always_inline void __folio_set_##fname(struct folio *folio) \
{ \
if (folio_test_##fname(folio)) \
return; \
- VM_BUG_ON_FOLIO(data_race(folio->page.page_type) != UINT_MAX, \
- folio); \
+ VM_WARN_ON_FOLIO(data_race(folio->page.page_type) != \
+ PAGE_TYPE_RESET, folio); \
folio->page.page_type = (unsigned int)PGTY_##lname << 24; \
} \
static __always_inline void __folio_clear_##fname(struct folio *folio) \
{ \
- if (folio->page.page_type == UINT_MAX) \
+ if (folio->page.page_type == PAGE_TYPE_RESET) \
return; \
VM_BUG_ON_FOLIO(!folio_test_##fname(folio), folio); \
- folio->page.page_type = UINT_MAX; \
+ folio->page.page_type = PAGE_TYPE_RESET; \
}
#define PAGE_TYPE_OPS(uname, lname, fname) \
@@ -1008,15 +1010,16 @@ static __always_inline void __SetPage##uname(struct page *page) \
{ \
if (Page##uname(page)) \
return; \
- VM_BUG_ON_PAGE(data_race(page->page_type) != UINT_MAX, page); \
+ VM_BUG_ON_PAGE(data_race(page->page_type) != \
+ PAGE_TYPE_RESET, page); \
page->page_type = (unsigned int)PGTY_##lname << 24; \
} \
static __always_inline void __ClearPage##uname(struct page *page) \
{ \
- if (page->page_type == UINT_MAX) \
+ if (page->page_type == PAGE_TYPE_RESET) \
return; \
VM_BUG_ON_PAGE(!Page##uname(page), page); \
- page->page_type = UINT_MAX; \
+ page->page_type = PAGE_TYPE_RESET; \
}
/*
--
2.50.0
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-07-01 8:27 ` Lorenzo Stoakes
@ 2025-07-01 8:34 ` David Hildenbrand
2025-07-01 8:37 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:34 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 10:27, Lorenzo Stoakes wrote:
> On Tue, Jul 01, 2025 at 10:17:13AM +0200, David Hildenbrand wrote:
>> On 30.06.25 17:27, Lorenzo Stoakes wrote:
>>> On Mon, Jun 30, 2025 at 02:59:45PM +0200, David Hildenbrand wrote:
>>>> Currently, any user of page types must clear that type before freeing
>>>> a page back to the buddy, otherwise we'll run into mapcount related
>>>> sanity checks (because the page type currently overlays the page
>>>> mapcount).
>>>>
>>>> Let's allow for not clearing the page type by page type users by letting
>>>> the buddy handle it instead.
>>>>
>>>> We'll focus on having a page type set on the first page of a larger
>>>> allocation only.
>>>>
>>>> With this change, we can reliably identify typed folios even though
>>>> they might be in the process of getting freed, which will come in handy
>>>> in migration code (at least in the transition phase).
>>>>
>>>> In the future we might want to warn on some page types. Instead of
> > > > having an "allow list", let's rather wait until we know about ones that
>>>> should go on such a "disallow list".
>>>
>>> Is the idea here to get this to show up on folio dumps or?
>>
>> As part of the netmem_desc series, there was a discussion about removing the
>> mystical PP checks -- page_pool_page_is_pp() in page_alloc.c and replacing
>> them by a proper page type check.
>>
>> In that case, we would probably want to warn in case we get such a netmem
>> page unexpectedly freed.
>>
>> But, that page type does not exist yet in code, so the sanity check must be
>> added once introduced.
>
> OK, and I realise that the UINT_MAX thing is a convention for how a reset
> page_type looks anyway now.
>
>>
>>>
>>>>
>>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>> mm/page_alloc.c | 3 +++
>>>> 1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index 858bc17653af9..44e56d31cfeb1 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
>>>> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>>> page->mapping = NULL;
>>>> }
>>>> + if (unlikely(page_has_type(page)))
>>>> + page->page_type = UINT_MAX;
>>>
>>> Feels like this could do with a comment!
>>
>> /* Reset the page_type -> _mapcount to -1 */
>
> Hm this feels like saying 'the reason we set it to -1 is to set it to -1' :P
Bingo! Guess why I didn't add a comment in the first place :P
>
> I'd be fine with something simple like
>
> /* Set page_type to reset value */
"Reset the page_type (which overlays _mapcount)"
?
> But... Can't we just put a #define somewhere here to make life easy? Like:
Given that Willy will change all that soon, I'm not in favor of doing
that in this series.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-07-01 8:34 ` David Hildenbrand
@ 2025-07-01 8:37 ` Lorenzo Stoakes
2025-07-01 10:02 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 8:37 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 10:34:33AM +0200, David Hildenbrand wrote:
> > > > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > > > Acked-by: Harry Yoo <harry.yoo@oracle.com>
> > > > > Signed-off-by: David Hildenbrand <david@redhat.com>
Based on discussion below, I'm good with this now with the comment change, so
feel free to add:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> > > > > ---
> > > > > mm/page_alloc.c | 3 +++
> > > > > 1 file changed, 3 insertions(+)
> > > > >
> > > > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> > > > > index 858bc17653af9..44e56d31cfeb1 100644
> > > > > --- a/mm/page_alloc.c
> > > > > +++ b/mm/page_alloc.c
> > > > > @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
> > > > > mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> > > > > page->mapping = NULL;
> > > > > }
> > > > > + if (unlikely(page_has_type(page)))
> > > > > + page->page_type = UINT_MAX;
> > > >
> > > > Feels like this could do with a comment!
> > >
> > > /* Reset the page_type -> _mapcount to -1 */
> >
> > Hm this feels like saying 'the reason we set it to -1 is to set it to -1' :P
>
> Bingo! Guess why I didn't add a comment in the first place :P
>
> >
> > I'd be fine with something simple like
> >
> > /* Set page_type to reset value */
>
> "Reset the page_type (which overlays _mapcount)"
>
> ?
Sounds good thanks, have an R-b above on the basis of this change.
>
> > But... Can't we just put a #define somewhere here to make life easy? Like:
>
> Given that Willy will change all that soon, I'm not in favor of doing that
> in this series.
Ah is he? I mean of course he is :))) this does seem like a prime target for the
ongoing memdesc/foliofication efforts.
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type
2025-07-01 8:37 ` Lorenzo Stoakes
@ 2025-07-01 10:02 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 10:02 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 10:37, Lorenzo Stoakes wrote:
> On Tue, Jul 01, 2025 at 10:34:33AM +0200, David Hildenbrand wrote:
>>>>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>>>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>>>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Based on discussion below, I'm good with this now with the comment change, so
> feel free to add:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>>>>>> ---
>>>>>> mm/page_alloc.c | 3 +++
>>>>>> 1 file changed, 3 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>>>> index 858bc17653af9..44e56d31cfeb1 100644
>>>>>> --- a/mm/page_alloc.c
>>>>>> +++ b/mm/page_alloc.c
>>>>>> @@ -1380,6 +1380,9 @@ __always_inline bool free_pages_prepare(struct page *page,
>>>>>> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>>>>>> page->mapping = NULL;
>>>>>> }
>>>>>> + if (unlikely(page_has_type(page)))
>>>>>> + page->page_type = UINT_MAX;
>>>>>
>>>>> Feels like this could do with a comment!
>>>>
>>>> /* Reset the page_type -> _mapcount to -1 */
>>>
>>> Hm this feels like saying 'the reason we set it to -1 is to set it to -1' :P
>>
>> Bingo! Guess why I didn't add a comment in the first place :P
>>
>>>
>>> I'd be fine with something simple like
>>>
>>> /* Set page_type to reset value */
>>
>> "Reset the page_type (which overlays _mapcount)"
>>
>> ?
>
> Sounds good thanks, have an R-b above on the basis of this change.
>
>>
>>> But... Can't we just put a #define somewhere here to make life easy? Like:
>>
>> Given that Willy will change all that soon, I'm not in favor of doing that
>> in this series.
>
> Ah is he? I mean of course he is :))) this does seem like a prime target for the
> ongoing memdesc/foliofication efforts.
Right. According to the plans I know, the type will be stored as part of
the memdesc pointer.
Clearing the type (where to clear, what to clear, when to clear) is
probably the interesting bit in the future: probably it will be cleared
as part of freeing any memdesc (thereby, invalidating the pointer).
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (3 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 04/29] mm/page_alloc: let page freeing clear any set page type David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 16:01 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
` (24 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let the page freeing code handle clearing the page type.
Acked-by: Zi Yan <ziy@nvidia.com>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index b9f19da37b089..bfc6e50bd004b 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -140,7 +140,7 @@ static inline void balloon_page_finalize(struct page *page)
__ClearPageMovable(page);
set_page_private(page, 0);
}
- __ClearPageOffline(page);
+ /* PageOffline is sticky until the page is freed to the buddy. */
}
/*
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 12:59 ` [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
@ 2025-06-30 16:01 ` Lorenzo Stoakes
2025-06-30 16:14 ` Zi Yan
2025-07-01 8:21 ` David Hildenbrand
0 siblings, 2 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 16:01 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
> Let the page freeing code handle clearing the page type.
Why is this advantageous? We want to keep the page marked offline for longer?
>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
guess:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index b9f19da37b089..bfc6e50bd004b 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -140,7 +140,7 @@ static inline void balloon_page_finalize(struct page *page)
> __ClearPageMovable(page);
> set_page_private(page, 0);
> }
> - __ClearPageOffline(page);
> + /* PageOffline is sticky until the page is freed to the buddy. */
OK so we are relying on this UINT_MAX thing in free_pages_prepare() to handle this.
> }
>
> /*
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 16:01 ` Lorenzo Stoakes
@ 2025-06-30 16:14 ` Zi Yan
2025-06-30 16:17 ` Lorenzo Stoakes
` (2 more replies)
2025-07-01 8:21 ` David Hildenbrand
1 sibling, 3 replies; 138+ messages in thread
From: Zi Yan @ 2025-06-30 16:14 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: David Hildenbrand, linux-kernel, linux-mm, linux-doc,
linuxppc-dev, virtualization, linux-fsdevel, Andrew Morton,
Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
Arnd Bergmann, Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang,
Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner,
Jan Kara, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30 Jun 2025, at 12:01, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
>> Let the page freeing code handle clearing the page type.
>
> Why is this advantageous? We want to keep the page marked offline for longer?
>
>>
>> Acked-by: Zi Yan <ziy@nvidia.com>
>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
> guess:
This is how page type is cleared.
See: https://elixir.bootlin.com/linux/v6.15.4/source/include/linux/page-flags.h#L1013.
I agree with you that patch 4 should have a comment in free_pages_prepare()
about what the code is for and why UINT_MAX is used.
Best Regards,
Yan, Zi
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 16:14 ` Zi Yan
@ 2025-06-30 16:17 ` Lorenzo Stoakes
2025-07-01 6:13 ` Harry Yoo
2025-07-01 8:12 ` David Hildenbrand
2 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 16:17 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-kernel, linux-mm, linux-doc,
linuxppc-dev, virtualization, linux-fsdevel, Andrew Morton,
Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
Arnd Bergmann, Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang,
Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner,
Jan Kara, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 12:14:01PM -0400, Zi Yan wrote:
> On 30 Jun 2025, at 12:01, Lorenzo Stoakes wrote:
>
> > On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
> >> Let the page freeing code handle clearing the page type.
> >
> > Why is this advantageous? We want to keep the page marked offline for longer?
> >
> >>
> >> Acked-by: Zi Yan <ziy@nvidia.com>
> >> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> >> Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
> > guess:
>
> This is how page type is cleared.
> See: https://elixir.bootlin.com/linux/v6.15.4/source/include/linux/page-flags.h#L1013.
Doh did go looking there but missed this!
I hate these macros so much. Almost designed to obfuscate.
>
> I agree with you that patch 4 should have a comment in free_pages_prepare()
> about what the code is for and why UINT_MAX is used.
Thx!
>
>
> Best Regards,
> Yan, Zi
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 16:14 ` Zi Yan
2025-06-30 16:17 ` Lorenzo Stoakes
@ 2025-07-01 6:13 ` Harry Yoo
2025-07-01 8:11 ` David Hildenbrand
2025-07-01 8:12 ` David Hildenbrand
2 siblings, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 6:13 UTC (permalink / raw)
To: Zi Yan
Cc: Lorenzo Stoakes, David Hildenbrand, linux-kernel, linux-mm,
linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
Andrew Morton, Jonathan Corbet, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Jerrin Shaji George, Arnd Bergmann, Greg Kroah-Hartman,
Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Alexander Viro, Christian Brauner, Jan Kara, Matthew Brost,
Joshua Hahn, Rakie Kim, Byungchul Park, Gregory Price, Ying Huang,
Alistair Popple, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 12:14:01PM -0400, Zi Yan wrote:
> On 30 Jun 2025, at 12:01, Lorenzo Stoakes wrote:
>
> > On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
> >> Let the page freeing code handle clearing the page type.
> >
> > Why is this advantageous? We want to keep the page marked offline for longer?
> >
> >>
> >> Acked-by: Zi Yan <ziy@nvidia.com>
> >> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> >> Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
> > guess:
>
> This is how page type is cleared.
> See: https://elixir.bootlin.com/linux/v6.15.4/source/include/linux/page-flags.h#L1013.
>
> I agree with you that patch 4 should have a comment in free_pages_prepare()
> about what the code is for and why UINT_MAX is used.
Or instead of comment, maybe something like this:
/* Clear any page type */
static __always_inline void __ClearPageType(struct page *page)
{
VM_WARN_ON_ONCE_PAGE(!page_has_type(page), page);
page->page_type = UINT_MAX;
}
in patch 4:
if (unlikely(page_has_type(page)))
__ClearPageType(page);
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-07-01 6:13 ` Harry Yoo
@ 2025-07-01 8:11 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:11 UTC (permalink / raw)
To: Harry Yoo, Zi Yan
Cc: Lorenzo Stoakes, linux-kernel, linux-mm, linux-doc, linuxppc-dev,
virtualization, linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Qi Zheng, Shakeel Butt
On 01.07.25 08:13, Harry Yoo wrote:
> On Mon, Jun 30, 2025 at 12:14:01PM -0400, Zi Yan wrote:
>> On 30 Jun 2025, at 12:01, Lorenzo Stoakes wrote:
>>
>>> On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
>>>> Let the page freeing code handle clearing the page type.
>>>
>>> Why is this advantageous? We want to keep the page marked offline for longer?
>>>
>>>>
>>>> Acked-by: Zi Yan <ziy@nvidia.com>
>>>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>
>>> On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
>>> guess:
>>
>> This is how page type is cleared.
>> See: https://elixir.bootlin.com/linux/v6.15.4/source/include/linux/page-flags.h#L1013.
>>
>> I agree with you that patch 4 should have a comment in free_pages_prepare()
>> about what the code is for and why UINT_MAX is used.
>
> Or instead of comment, maybe something like this:
>
> /* Clear any page type */
> static __always_inline void __ClearPageType(struct page *page)
> {
> VM_WARN_ON_ONCE_PAGE(!page_has_type(page), page);
> page->page_type = UINT_MAX;
> }
>
> in patch 4:
>
> if (unlikely(page_has_type(page)))
> __ClearPageType(page);
>
I don't think we should do that. It's very specialized code that nobody
should be reusing.
And it will all change once Willy reworks the page_type vs. _mapcount
overlay.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 16:14 ` Zi Yan
2025-06-30 16:17 ` Lorenzo Stoakes
2025-07-01 6:13 ` Harry Yoo
@ 2025-07-01 8:12 ` David Hildenbrand
2 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:12 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 18:14, Zi Yan wrote:
> On 30 Jun 2025, at 12:01, Lorenzo Stoakes wrote:
>
>> On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
>>> Let the page freeing code handle clearing the page type.
>>
>> Why is this advantageous? We want to keep the page marked offline for longer?
>>
>>>
>>> Acked-by: Zi Yan <ziy@nvidia.com>
>>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>
>> On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
>> guess:
>
> This is how page type is cleared.
> See: https://elixir.bootlin.com/linux/v6.15.4/source/include/linux/page-flags.h#L1013.
>
> I agree with you that patch 4 should have a comment in free_pages_prepare()
> about what the code is for and why UINT_MAX is used.
I can add a comment
/* Reset the page_type -> _mapcount to -1 */
To patch #4.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed
2025-06-30 16:01 ` Lorenzo Stoakes
2025-06-30 16:14 ` Zi Yan
@ 2025-07-01 8:21 ` David Hildenbrand
1 sibling, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:21 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 18:01, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:46PM +0200, David Hildenbrand wrote:
>> Let the page freeing code handle clearing the page type.
>
> Why is this advantageous? We want to keep the page marked offline for longer?
Less code? ;)
I will add:
"Being able to identify balloon pages until actually freed is a
requirement for upcoming movable_ops migration changes."
Note that the documentation is extended in patch #27 to mention that.
>
>>
>> Acked-by: Zi Yan <ziy@nvidia.com>
>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> On assumption this UINT_MAX stuff is sane :)) I mean this is straightforward I
> guess:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> include/linux/balloon_compaction.h | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
>> index b9f19da37b089..bfc6e50bd004b 100644
>> --- a/include/linux/balloon_compaction.h
>> +++ b/include/linux/balloon_compaction.h
>> @@ -140,7 +140,7 @@ static inline void balloon_page_finalize(struct page *page)
>> __ClearPageMovable(page);
>> set_page_private(page, 0);
>> }
>> - __ClearPageOffline(page);
>> + /* PageOffline is sticky until the page is freed to the buddy. */
>
> OK so we are relying on this UINT_MAX thing in free_pages_prepare() to handle this.
Yes. Resetting the page_type -> _mapcount to the initial value -1.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() sticky until the page is freed
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (4 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 05/29] mm/balloon_compaction: make PageOffline sticky until the page is freed David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 16:03 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
` (23 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let the page freeing code handle clearing the page type.
Acked-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zpdesc.h | 5 -----
mm/zsmalloc.c | 3 +--
2 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 5cb7e3de43952..5763f36039736 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -163,11 +163,6 @@ static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
__SetPageZsmalloc(zpdesc_page(zpdesc));
}
-static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
-{
- __ClearPageZsmalloc(zpdesc_page(zpdesc));
-}
-
static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
{
return page_zone(zpdesc_page(zpdesc));
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 7f1431f2be98f..f98747aed4330 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -880,7 +880,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
ClearPagePrivate(page);
zpdesc->zspage = NULL;
zpdesc->next = NULL;
- __ClearPageZsmalloc(page);
+ /* PageZsmalloc is sticky until the page is freed to the buddy. */
}
static int trylock_zspage(struct zspage *zspage)
@@ -1055,7 +1055,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
if (!zpdesc) {
while (--i >= 0) {
zpdesc_dec_zone_page_state(zpdescs[i]);
- __zpdesc_clear_zsmalloc(zpdescs[i]);
free_zpdesc(zpdescs[i]);
}
cache_free_zspage(pool, zspage);
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() sticky until the page is freed
2025-06-30 12:59 ` [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
@ 2025-06-30 16:03 ` Lorenzo Stoakes
2025-07-01 8:27 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 16:03 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:47PM +0200, David Hildenbrand wrote:
> Let the page freeing code handle clearing the page type.
>
> Acked-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
> Acked-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
On the basis of the sanity of the UINT_MAX thing:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/zpdesc.h | 5 -----
> mm/zsmalloc.c | 3 +--
> 2 files changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index 5cb7e3de43952..5763f36039736 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -163,11 +163,6 @@ static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
> __SetPageZsmalloc(zpdesc_page(zpdesc));
> }
>
> -static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
> -{
> - __ClearPageZsmalloc(zpdesc_page(zpdesc));
> -}
> -
> static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
> {
> return page_zone(zpdesc_page(zpdesc));
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 7f1431f2be98f..f98747aed4330 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -880,7 +880,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
> ClearPagePrivate(page);
> zpdesc->zspage = NULL;
> zpdesc->next = NULL;
> - __ClearPageZsmalloc(page);
> + /* PageZsmalloc is sticky until the page is freed to the buddy. */
> }
>
> static int trylock_zspage(struct zspage *zspage)
> @@ -1055,7 +1055,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
> if (!zpdesc) {
> while (--i >= 0) {
> zpdesc_dec_zone_page_state(zpdescs[i]);
Maybe for consistency put a
/* PageZsmalloc is sticky until the page is freed to the buddy. */
comment here also?
> - __zpdesc_clear_zsmalloc(zpdescs[i]);
> free_zpdesc(zpdescs[i]);
> }
> cache_free_zspage(pool, zspage);
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() sticky until the page is freed
2025-06-30 16:03 ` Lorenzo Stoakes
@ 2025-07-01 8:27 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:27 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 18:03, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:47PM +0200, David Hildenbrand wrote:
>> Let the page freeing code handle clearing the page type.
>>
>> Acked-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
>> Acked-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> On basis of sanity of UINT_MAX thing:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> mm/zpdesc.h | 5 -----
>> mm/zsmalloc.c | 3 +--
>> 2 files changed, 1 insertion(+), 7 deletions(-)
>>
>> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
>> index 5cb7e3de43952..5763f36039736 100644
>> --- a/mm/zpdesc.h
>> +++ b/mm/zpdesc.h
>> @@ -163,11 +163,6 @@ static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
>> __SetPageZsmalloc(zpdesc_page(zpdesc));
>> }
>>
>> -static inline void __zpdesc_clear_zsmalloc(struct zpdesc *zpdesc)
>> -{
>> - __ClearPageZsmalloc(zpdesc_page(zpdesc));
>> -}
>> -
>> static inline struct zone *zpdesc_zone(struct zpdesc *zpdesc)
>> {
>> return page_zone(zpdesc_page(zpdesc));
>> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
>> index 7f1431f2be98f..f98747aed4330 100644
>> --- a/mm/zsmalloc.c
>> +++ b/mm/zsmalloc.c
>> @@ -880,7 +880,7 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
>> ClearPagePrivate(page);
>> zpdesc->zspage = NULL;
>> zpdesc->next = NULL;
>> - __ClearPageZsmalloc(page);
>> + /* PageZsmalloc is sticky until the page is freed to the buddy. */
>> }
>>
>> static int trylock_zspage(struct zspage *zspage)
>> @@ -1055,7 +1055,6 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
>> if (!zpdesc) {
>> while (--i >= 0) {
>> zpdesc_dec_zone_page_state(zpdescs[i]);
>
> Maybe for consistency put a
>
> /* PageZsmalloc is sticky until the page is freed to the buddy. */
>
I'll add that inside free_zpdesc(), before the __free_page().
Thanks!
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (5 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 06/29] mm/zsmalloc: make PageZsmalloc() " David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 16:24 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
` (22 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
... and start moving back to per-page things that will absolutely not be
folio things in the future. Add documentation and a comment that the
remaining folio stuff (lock, refcount) will have to be reworked as well.
While at it, convert the VM_BUG_ON() into a WARN_ON_ONCE() and handle
it gracefully (relevant with further changes), and convert a
WARN_ON_ONCE() into a VM_WARN_ON_ONCE_PAGE().
Note that we will leave anything that needs a rework (lock, refcount,
->lru) to be using folios for now: that perfectly highlights the
problematic bits.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 4 ++--
mm/compaction.c | 2 +-
mm/migrate.c | 39 +++++++++++++++++++++++++++++----------
3 files changed, 32 insertions(+), 13 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index aaa2114498d6d..c0ec7422837bd 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -69,7 +69,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
unsigned long private, enum migrate_mode mode, int reason,
unsigned int *ret_succeeded);
struct folio *alloc_migration_target(struct folio *src, unsigned long private);
-bool isolate_movable_page(struct page *page, isolate_mode_t mode);
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode);
bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
int migrate_huge_page_move_mapping(struct address_space *mapping,
@@ -90,7 +90,7 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
static inline struct folio *alloc_migration_target(struct folio *src,
unsigned long private)
{ return NULL; }
-static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+static inline bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
{ return false; }
static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{ return false; }
diff --git a/mm/compaction.c b/mm/compaction.c
index 3925cb61dbb8f..17455c5a4be05 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1093,7 +1093,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
locked = NULL;
}
- if (isolate_movable_page(page, mode)) {
+ if (isolate_movable_ops_page(page, mode)) {
folio = page_folio(page);
goto isolate_success;
}
diff --git a/mm/migrate.c b/mm/migrate.c
index 767f503f08758..d4b4a7eefb6bd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -51,8 +51,26 @@
#include "internal.h"
#include "swap.h"
-bool isolate_movable_page(struct page *page, isolate_mode_t mode)
+/**
+ * isolate_movable_ops_page - isolate a movable_ops page for migration
+ * @page: The page.
+ * @mode: The isolation mode.
+ *
+ * Try to isolate a movable_ops page for migration. Will fail if the page is
+ * not a movable_ops page, if the page is already isolated for migration
+ * or if the page was just released by its owner.
+ *
+ * Once isolated, the page cannot get freed until it is either putback
+ * or migrated.
+ *
+ * Returns true if isolation succeeded, otherwise false.
+ */
+bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
{
+ /*
+ * TODO: these pages will not be folios in the future. All
+ * folio dependencies will have to be removed.
+ */
struct folio *folio = folio_get_nontail_page(page);
const struct movable_operations *mops;
@@ -73,7 +91,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
* we use non-atomic bitops on newly allocated page flags so
* unconditionally grabbing the lock ruins page's owner side.
*/
- if (unlikely(!__folio_test_movable(folio)))
+ if (unlikely(!__PageMovable(page)))
goto out_putfolio;
/*
@@ -90,18 +108,19 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- if (!folio_test_movable(folio) || folio_test_isolated(folio))
+ if (!PageMovable(page) || PageIsolated(page))
goto out_no_isolated;
- mops = folio_movable_ops(folio);
- VM_BUG_ON_FOLIO(!mops, folio);
+ mops = page_movable_ops(page);
+ if (WARN_ON_ONCE(!mops))
+ goto out_no_isolated;
- if (!mops->isolate_page(&folio->page, mode))
+ if (!mops->isolate_page(page, mode))
goto out_no_isolated;
/* Driver shouldn't use the isolated flag */
- WARN_ON_ONCE(folio_test_isolated(folio));
- folio_set_isolated(folio);
+ VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
+ SetPageIsolated(page);
folio_unlock(folio);
return true;
@@ -175,8 +194,8 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
if (lru)
isolated = folio_isolate_lru(folio);
else
- isolated = isolate_movable_page(&folio->page,
- ISOLATE_UNEVICTABLE);
+ isolated = isolate_movable_ops_page(&folio->page,
+ ISOLATE_UNEVICTABLE);
if (!isolated)
return false;
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
@ 2025-06-30 16:24 ` Lorenzo Stoakes
2025-07-01 8:29 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 16:24 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:48PM +0200, David Hildenbrand wrote:
> ... and start moving back to per-page things that will absolutely not be
> folio things in the future. Add documentation and a comment that the
> remaining folio stuff (lock, refcount) will have to be reworked as well.
>
> While at it, convert the VM_BUG_ON() into a WARN_ON_ONCE() and handle
> it gracefully (relevant with further changes), and convert a
> WARN_ON_ONCE() into a VM_WARN_ON_ONCE_PAGE().
>
> Note that we will leave anything that needs a rework (lock, refcount,
> ->lru) to be using folios for now: that perfectly highlights the
> problematic bits.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Seems reasonable to me, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/migrate.h | 4 ++--
> mm/compaction.c | 2 +-
> mm/migrate.c | 39 +++++++++++++++++++++++++++++----------
> 3 files changed, 32 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index aaa2114498d6d..c0ec7422837bd 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -69,7 +69,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
> unsigned long private, enum migrate_mode mode, int reason,
> unsigned int *ret_succeeded);
> struct folio *alloc_migration_target(struct folio *src, unsigned long private);
> -bool isolate_movable_page(struct page *page, isolate_mode_t mode);
> +bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode);
> bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
>
> int migrate_huge_page_move_mapping(struct address_space *mapping,
> @@ -90,7 +90,7 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
> static inline struct folio *alloc_migration_target(struct folio *src,
> unsigned long private)
> { return NULL; }
> -static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +static inline bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> { return false; }
> static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
> { return false; }
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 3925cb61dbb8f..17455c5a4be05 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1093,7 +1093,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> locked = NULL;
> }
>
> - if (isolate_movable_page(page, mode)) {
> + if (isolate_movable_ops_page(page, mode)) {
> folio = page_folio(page);
> goto isolate_success;
> }
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 767f503f08758..d4b4a7eefb6bd 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -51,8 +51,26 @@
> #include "internal.h"
> #include "swap.h"
>
> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> +/**
> + * isolate_movable_ops_page - isolate a movable_ops page for migration
> + * @page: The page.
> + * @mode: The isolation mode.
> + *
> + * Try to isolate a movable_ops page for migration. Will fail if the page is
> + * not a movable_ops page, if the page is already isolated for migration
> + * or if the page was just released by its owner.
> + *
> + * Once isolated, the page cannot get freed until it is either putback
> + * or migrated.
> + *
> + * Returns true if isolation succeeded, otherwise false.
> + */
> +bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> {
> + /*
> + * TODO: these pages will not be folios in the future. All
> + * folio dependencies will have to be removed.
> + */
> struct folio *folio = folio_get_nontail_page(page);
> const struct movable_operations *mops;
>
> @@ -73,7 +91,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> * we use non-atomic bitops on newly allocated page flags so
> * unconditionally grabbing the lock ruins page's owner side.
> */
> - if (unlikely(!__folio_test_movable(folio)))
> + if (unlikely(!__PageMovable(page)))
> goto out_putfolio;
>
> /*
> @@ -90,18 +108,19 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
> if (unlikely(!folio_trylock(folio)))
> goto out_putfolio;
>
> - if (!folio_test_movable(folio) || folio_test_isolated(folio))
> + if (!PageMovable(page) || PageIsolated(page))
I wonder, in the wonderful future where PageXXX() always refers to a page, can
we use something less horrible than these macros?
> goto out_no_isolated;
>
> - mops = folio_movable_ops(folio);
> - VM_BUG_ON_FOLIO(!mops, folio);
> + mops = page_movable_ops(page);
> + if (WARN_ON_ONCE(!mops))
> + goto out_no_isolated;
>
> - if (!mops->isolate_page(&folio->page, mode))
> + if (!mops->isolate_page(page, mode))
> goto out_no_isolated;
>
> /* Driver shouldn't use the isolated flag */
> - WARN_ON_ONCE(folio_test_isolated(folio));
> - folio_set_isolated(folio);
> + VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
> + SetPageIsolated(page);
> folio_unlock(folio);
>
> return true;
> @@ -175,8 +194,8 @@ bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
> if (lru)
> isolated = folio_isolate_lru(folio);
> else
> - isolated = isolate_movable_page(&folio->page,
> - ISOLATE_UNEVICTABLE);
> + isolated = isolate_movable_ops_page(&folio->page,
> + ISOLATE_UNEVICTABLE);
>
> if (!isolated)
> return false;
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
2025-06-30 16:24 ` Lorenzo Stoakes
@ 2025-07-01 8:29 ` David Hildenbrand
2025-07-01 9:11 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 8:29 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 18:24, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:48PM +0200, David Hildenbrand wrote:
>> ... and start moving back to per-page things that will absolutely not be
>> folio things in the future. Add documentation and a comment that the
>> remaining folio stuff (lock, refcount) will have to be reworked as well.
>>
>> While at it, convert the VM_BUG_ON() into a WARN_ON_ONCE() and handle
>> it gracefully (relevant with further changes), and convert a
>> WARN_ON_ONCE() into a VM_WARN_ON_ONCE_PAGE().
>>
>> Note that we will leave anything that needs a rework (lock, refcount,
>> ->lru) to be using folios for now: that perfectly highlights the
>> problematic bits.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Seesm reasonable to me so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> include/linux/migrate.h | 4 ++--
>> mm/compaction.c | 2 +-
>> mm/migrate.c | 39 +++++++++++++++++++++++++++++----------
>> 3 files changed, 32 insertions(+), 13 deletions(-)
>>
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index aaa2114498d6d..c0ec7422837bd 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -69,7 +69,7 @@ int migrate_pages(struct list_head *l, new_folio_t new, free_folio_t free,
>> unsigned long private, enum migrate_mode mode, int reason,
>> unsigned int *ret_succeeded);
>> struct folio *alloc_migration_target(struct folio *src, unsigned long private);
>> -bool isolate_movable_page(struct page *page, isolate_mode_t mode);
>> +bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode);
>> bool isolate_folio_to_list(struct folio *folio, struct list_head *list);
>>
>> int migrate_huge_page_move_mapping(struct address_space *mapping,
>> @@ -90,7 +90,7 @@ static inline int migrate_pages(struct list_head *l, new_folio_t new,
>> static inline struct folio *alloc_migration_target(struct folio *src,
>> unsigned long private)
>> { return NULL; }
>> -static inline bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +static inline bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
>> { return false; }
>> static inline bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
>> { return false; }
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 3925cb61dbb8f..17455c5a4be05 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -1093,7 +1093,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
>> locked = NULL;
>> }
>>
>> - if (isolate_movable_page(page, mode)) {
>> + if (isolate_movable_ops_page(page, mode)) {
>> folio = page_folio(page);
>> goto isolate_success;
>> }
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 767f503f08758..d4b4a7eefb6bd 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -51,8 +51,26 @@
>> #include "internal.h"
>> #include "swap.h"
>>
>> -bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> +/**
>> + * isolate_movable_ops_page - isolate a movable_ops page for migration
>> + * @page: The page.
>> + * @mode: The isolation mode.
>> + *
>> + * Try to isolate a movable_ops page for migration. Will fail if the page is
>> + * not a movable_ops page, if the page is already isolated for migration
>> + * or if the page was just released by its owner.
>> + *
>> + * Once isolated, the page cannot get freed until it is either putback
>> + * or migrated.
>> + *
>> + * Returns true if isolation succeeded, otherwise false.
>> + */
>> +bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
>> {
>> + /*
>> + * TODO: these pages will not be folios in the future. All
>> + * folio dependencies will have to be removed.
>> + */
>> struct folio *folio = folio_get_nontail_page(page);
>> const struct movable_operations *mops;
>>
>> @@ -73,7 +91,7 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> * we use non-atomic bitops on newly allocated page flags so
>> * unconditionally grabbing the lock ruins page's owner side.
>> */
>> - if (unlikely(!__folio_test_movable(folio)))
>> + if (unlikely(!__PageMovable(page)))
>> goto out_putfolio;
>>
>> /*
>> @@ -90,18 +108,19 @@ bool isolate_movable_page(struct page *page, isolate_mode_t mode)
>> if (unlikely(!folio_trylock(folio)))
>> goto out_putfolio;
>>
>> - if (!folio_test_movable(folio) || folio_test_isolated(folio))
>> + if (!PageMovable(page) || PageIsolated(page))
>
> I wonder, in the wonderful future where PageXXX() always refers to a page, can
> we use something less horrible than these macros?
Good question. It all interacts with how we believe compound pages will
work / look in the future.
Doing a change from PageXXX() to page_test_XXX() might be a reasonable
change in the future. But, I mean, there are more important things to
clean up than that :)
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page()
2025-07-01 8:29 ` David Hildenbrand
@ 2025-07-01 9:11 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 9:11 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 10:29:54AM +0200, David Hildenbrand wrote:
> > I wonder, in the wonderful future where PageXXX() always refers to a page, can
> > we use something less horrible than these macros?
>
> Good question. It all interacts with how we believe compound pages will work
> / look like in the future.
Indeed.
>
> Doing a change from PageXXX() to page_test_XXX() might be reasonable change
> in the future. But, I mean, there are more important things to clean up that
> that :)
Yeah one for the future, and not exactly high priority :)
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (6 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 07/29] mm/migrate: rename isolate_movable_page() to isolate_movable_ops_page() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 16:29 ` Lorenzo Stoakes
` (2 more replies)
2025-06-30 12:59 ` [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
` (21 subsequent siblings)
29 siblings, 3 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
... and factor the complete handling of movable_ops pages out.
Convert it similar to isolate_movable_ops_page().
While at it, convert the VM_BUG_ON_FOLIO() into a VM_WARN_ON_ONCE_PAGE().
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 37 ++++++++++++++++++++++++-------------
1 file changed, 24 insertions(+), 13 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index d4b4a7eefb6bd..d97f7cd137e63 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -133,12 +133,30 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
return false;
}
-static void putback_movable_folio(struct folio *folio)
+/**
+ * putback_movable_ops_page - putback an isolated movable_ops page
+ * @page: The isolated page.
+ *
+ * Putback an isolated movable_ops page.
+ *
+ * After the page was putback, it might get freed instantly.
+ */
+static void putback_movable_ops_page(struct page *page)
{
- const struct movable_operations *mops = folio_movable_ops(folio);
-
- mops->putback_page(&folio->page);
- folio_clear_isolated(folio);
+ /*
+ * TODO: these pages will not be folios in the future. All
+ * folio dependencies will have to be removed.
+ */
+ struct folio *folio = page_folio(page);
+
+ VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
+ folio_lock(folio);
+ /* If the page was released by its owner, there is nothing to do. */
+ if (PageMovable(page))
+ page_movable_ops(page)->putback_page(page);
+ ClearPageIsolated(page);
+ folio_unlock(folio);
+ folio_put(folio);
}
/*
@@ -166,14 +184,7 @@ void putback_movable_pages(struct list_head *l)
* have PAGE_MAPPING_MOVABLE.
*/
if (unlikely(__folio_test_movable(folio))) {
- VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
- folio_lock(folio);
- if (folio_test_movable(folio))
- putback_movable_folio(folio);
- else
- folio_clear_isolated(folio);
- folio_unlock(folio);
- folio_put(folio);
+ putback_movable_ops_page(&folio->page);
} else {
node_stat_mod_folio(folio, NR_ISOLATED_ANON +
folio_is_file_lru(folio), -folio_nr_pages(folio));
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
@ 2025-06-30 16:29 ` Lorenzo Stoakes
2025-07-01 6:04 ` Harry Yoo
2025-07-01 14:42 ` Zi Yan
2 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 16:29 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:49PM +0200, David Hildenbrand wrote:
> ... and factor the complete handling of movable_ops pages out.
> Convert it similar to isolate_movable_ops_page().
>
> While at it, convert the VM_BUG_ON_FOLIO() into a VM_WARN_ON_ONCE_PAGE().
<3
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/migrate.c | 37 ++++++++++++++++++++++++-------------
> 1 file changed, 24 insertions(+), 13 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index d4b4a7eefb6bd..d97f7cd137e63 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -133,12 +133,30 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> return false;
> }
>
> -static void putback_movable_folio(struct folio *folio)
> +/**
> + * putback_movable_ops_page - putback an isolated movable_ops page
> + * @page: The isolated page.
> + *
> + * Putback an isolated movable_ops page.
> + *
> + * After the page was putback, it might get freed instantly.
> + */
> +static void putback_movable_ops_page(struct page *page)
> {
> - const struct movable_operations *mops = folio_movable_ops(folio);
> -
> - mops->putback_page(&folio->page);
> - folio_clear_isolated(folio);
> + /*
> + * TODO: these pages will not be folios in the future. All
> + * folio dependencies will have to be removed.
> + */
> + struct folio *folio = page_folio(page);
> +
> + VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
> + folio_lock(folio);
> + /* If the page was released by its owner, there is nothing to do. */
> + if (PageMovable(page))
> + page_movable_ops(page)->putback_page(page);
> + ClearPageIsolated(page);
> + folio_unlock(folio);
> + folio_put(folio);
> }
>
> /*
> @@ -166,14 +184,7 @@ void putback_movable_pages(struct list_head *l)
> * have PAGE_MAPPING_MOVABLE.
> */
> if (unlikely(__folio_test_movable(folio))) {
> - VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
> - folio_lock(folio);
> - if (folio_test_movable(folio))
> - putback_movable_folio(folio);
> - else
> - folio_clear_isolated(folio);
> - folio_unlock(folio);
> - folio_put(folio);
> + putback_movable_ops_page(&folio->page);
> } else {
> node_stat_mod_folio(folio, NR_ISOLATED_ANON +
> folio_is_file_lru(folio), -folio_nr_pages(folio));
> --
> 2.49.0
>
* Re: [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
2025-06-30 16:29 ` Lorenzo Stoakes
@ 2025-07-01 6:04 ` Harry Yoo
2025-07-01 14:42 ` Zi Yan
2 siblings, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 6:04 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:49PM +0200, David Hildenbrand wrote:
> ... and factor the complete handling of movable_ops pages out.
> Convert it similar to isolate_movable_ops_page().
>
> While at it, convert the VM_BUG_ON_FOLIO() into a VM_WARN_ON_ONCE_PAGE().
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Looks good to me,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
2025-06-30 16:29 ` Lorenzo Stoakes
2025-07-01 6:04 ` Harry Yoo
@ 2025-07-01 14:42 ` Zi Yan
2 siblings, 0 replies; 138+ messages in thread
From: Zi Yan @ 2025-07-01 14:42 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On 30 Jun 2025, at 8:59, David Hildenbrand wrote:
> ... and factor the complete handling of movable_ops pages out.
> Convert it similar to isolate_movable_ops_page().
>
> While at it, convert the VM_BUG_ON_FOLIO() into a VM_WARN_ON_ONCE_PAGE().
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> mm/migrate.c | 37 ++++++++++++++++++++++++-------------
> 1 file changed, 24 insertions(+), 13 deletions(-)
>
Reviewed-by: Zi Yan <ziy@nvidia.com>
Best Regards,
Yan, Zi
* [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (7 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 08/29] mm/migrate: rename putback_movable_folio() to putback_movable_ops_page() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 17:05 ` Lorenzo Stoakes
2025-07-01 7:05 ` Harry Yoo
2025-06-30 12:59 ` [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
` (20 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's factor it out, simplifying the calling code.
The assumption is that flush_dcache_page() is not required for
movable_ops pages: as documented for flush_dcache_folio(), it really
only applies when the kernel wrote to pagecache pages / pages in
highmem. movable_ops callbacks should be handling flushing
caches if ever required.
Note that we can now change folio_mapping_flags() to folio_test_anon()
to make it clearer, because movable_ops pages will never take that path.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 82 ++++++++++++++++++++++++++++------------------------
1 file changed, 45 insertions(+), 37 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index d97f7cd137e63..0898ddd2f661f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -159,6 +159,45 @@ static void putback_movable_ops_page(struct page *page)
folio_put(folio);
}
+/**
+ * migrate_movable_ops_page - migrate an isolated movable_ops page
+ * @dst: The destination page.
+ * @src: The isolated source page.
+ * @mode: The migration mode.
+ *
+ * Migrate an isolated movable_ops page.
+ *
+ * If the src page was already released by its owner, the src page is
+ * un-isolated (putback) and migration succeeds; the migration core will be the
+ * owner of both pages.
+ *
+ * If the src page was not released by its owner and the migration was
+ * successful, the owner of the src page and the dst page are swapped and
+ * the src page is un-isolated.
+ *
+ * If migration fails, the ownership stays unmodified and the src page
+ * remains isolated: migration may be retried later or the page can be putback.
+ *
+ * TODO: migration core treats both pages as folios, locking them before
+ * this call and unlocking them after it. Further, the folio refcounts on
+ * src and dst are also released by migration core. These pages will not be
+ * folios in the future, so that must be reworked.
+ *
+ * Returns MIGRATEPAGE_SUCCESS on success, otherwise a negative error
+ * code.
+ */
+static int migrate_movable_ops_page(struct page *dst, struct page *src,
+ enum migrate_mode mode)
+{
+ int rc = MIGRATEPAGE_SUCCESS;
+
+ VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
+ /* If the page was released by its owner, there is nothing to do. */
+ if (PageMovable(src))
+ rc = page_movable_ops(src)->migrate_page(dst, src, mode);
+ if (rc == MIGRATEPAGE_SUCCESS)
+ ClearPageIsolated(src);
+ return rc;
+}
+
/*
* Put previously isolated pages back onto the appropriate lists
* from where they were once taken off for compaction/migration.
@@ -1023,51 +1062,20 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
mode);
else
rc = fallback_migrate_folio(mapping, dst, src, mode);
- } else {
- const struct movable_operations *mops;
- /*
- * In case of non-lru page, it could be released after
- * isolation step. In that case, we shouldn't try migration.
- */
- VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
- if (!folio_test_movable(src)) {
- rc = MIGRATEPAGE_SUCCESS;
- folio_clear_isolated(src);
+ if (rc != MIGRATEPAGE_SUCCESS)
goto out;
- }
-
- mops = folio_movable_ops(src);
- rc = mops->migrate_page(&dst->page, &src->page, mode);
- WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
- !folio_test_isolated(src));
- }
-
- /*
- * When successful, old pagecache src->mapping must be cleared before
- * src is freed; but stats require that PageAnon be left as PageAnon.
- */
- if (rc == MIGRATEPAGE_SUCCESS) {
- if (__folio_test_movable(src)) {
- VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
-
- /*
- * We clear PG_movable under page_lock so any compactor
- * cannot try to migrate this page.
- */
- folio_clear_isolated(src);
- }
-
/*
- * Anonymous and movable src->mapping will be cleared by
- * free_pages_prepare so don't reset it here for keeping
- * the type to work PageAnon, for example.
+ * For pagecache folios, src->mapping must be cleared before src
+ * is freed. Anonymous folios must stay anonymous until freed.
*/
- if (!folio_mapping_flags(src))
+ if (!folio_test_anon(src))
src->mapping = NULL;
if (likely(!folio_is_zone_device(dst)))
flush_dcache_folio(dst);
+ } else {
+ rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
}
out:
return rc;
--
2.49.0
* Re: [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
@ 2025-06-30 17:05 ` Lorenzo Stoakes
2025-07-01 9:24 ` David Hildenbrand
2025-07-01 7:05 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 17:05 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:50PM +0200, David Hildenbrand wrote:
> Let's factor it out, simplifying the calling code.
>
> The assumption is that flush_dcache_page() is not required for
> movable_ops pages: as documented for flush_dcache_folio(), it really
> only applies when the kernel wrote to pagecache pages / pages in
> highmem. movable_ops callbacks should be handling flushing
> caches if ever required.
But we've not changed this, have we? The flush_dcache_folio() invocation seems
to happen the same way now as before? Did I miss something?
>
> Note that we can now change folio_mapping_flags() to folio_test_anon()
> to make it clearer, because movable_ops pages will never take that path.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Have scrutinised this a lot and it seems correct to me, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/migrate.c | 82 ++++++++++++++++++++++++++++------------------------
> 1 file changed, 45 insertions(+), 37 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index d97f7cd137e63..0898ddd2f661f 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -159,6 +159,45 @@ static void putback_movable_ops_page(struct page *page)
> folio_put(folio);
> }
>
> +/**
> + * migrate_movable_ops_page - migrate an isolated movable_ops page
> + * @dst: The destination page.
> + * @src: The isolated source page.
> + * @mode: The migration mode.
> + *
> + * Migrate an isolated movable_ops page.
> + *
> + * If the src page was already released by its owner, the src page is
> + * un-isolated (putback) and migration succeeds; the migration core will be the
> + * owner of both pages.
> + *
> + * If the src page was not released by its owner and the migration was
> + * successful, the owner of the src page and the dst page are swapped and
> + * the src page is un-isolated.
> + *
> + * If migration fails, the ownership stays unmodified and the src page
> + * remains isolated: migration may be retried later or the page can be putback.
> + *
> + * TODO: migration core treats both pages as folios, locking them before
> + * this call and unlocking them after it. Further, the folio refcounts on
> + * src and dst are also released by migration core. These pages will not be
> + * folios in the future, so that must be reworked.
> + *
> + * Returns MIGRATEPAGE_SUCCESS on success, otherwise a negative error
> + * code.
> + */
Love these comments you're adding!!
> +static int migrate_movable_ops_page(struct page *dst, struct page *src,
> + enum migrate_mode mode)
> +{
> + int rc = MIGRATEPAGE_SUCCESS;
Maybe worth asserting src, dst locking?
> +
> + VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
> + /* If the page was released by its owner, there is nothing to do. */
> + if (PageMovable(src))
> + rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> + if (rc == MIGRATEPAGE_SUCCESS)
> + ClearPageIsolated(src);
> + return rc;
> +}
> +
> /*
> * Put previously isolated pages back onto the appropriate lists
> * from where they were once taken off for compaction/migration.
> @@ -1023,51 +1062,20 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
> mode);
> else
> rc = fallback_migrate_folio(mapping, dst, src, mode);
> - } else {
> - const struct movable_operations *mops;
>
> - /*
> - * In case of non-lru page, it could be released after
> - * isolation step. In that case, we shouldn't try migration.
> - */
> - VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
> - if (!folio_test_movable(src)) {
> - rc = MIGRATEPAGE_SUCCESS;
> - folio_clear_isolated(src);
> + if (rc != MIGRATEPAGE_SUCCESS)
> goto out;
> - }
> -
> - mops = folio_movable_ops(src);
> - rc = mops->migrate_page(&dst->page, &src->page, mode);
> - WARN_ON_ONCE(rc == MIGRATEPAGE_SUCCESS &&
> - !folio_test_isolated(src));
> - }
> -
> - /*
> - * When successful, old pagecache src->mapping must be cleared before
> - * src is freed; but stats require that PageAnon be left as PageAnon.
> - */
> - if (rc == MIGRATEPAGE_SUCCESS) {
> - if (__folio_test_movable(src)) {
> - VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
> -
> - /*
> - * We clear PG_movable under page_lock so any compactor
> - * cannot try to migrate this page.
> - */
> - folio_clear_isolated(src);
> - }
> -
> /*
> - * Anonymous and movable src->mapping will be cleared by
> - * free_pages_prepare so don't reset it here for keeping
> - * the type to work PageAnon, for example.
> + * For pagecache folios, src->mapping must be cleared before src
> + * is freed. Anonymous folios must stay anonymous until freed.
> */
> - if (!folio_mapping_flags(src))
> + if (!folio_test_anon(src))
> src->mapping = NULL;
>
> if (likely(!folio_is_zone_device(dst)))
> flush_dcache_folio(dst);
> + } else {
> + rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> }
> out:
> return rc;
> --
> 2.49.0
>
* Re: [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-06-30 17:05 ` Lorenzo Stoakes
@ 2025-07-01 9:24 ` David Hildenbrand
2025-07-01 10:10 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 9:24 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 19:05, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:50PM +0200, David Hildenbrand wrote:
>> Let's factor it out, simplifying the calling code.
>>
>> The assumption is that flush_dcache_page() is not required for
>> movable_ops pages: as documented for flush_dcache_folio(), it really
>> only applies when the kernel wrote to pagecache pages / pages in
>> highmem. movable_ops callbacks should be handling flushing
>> caches if ever required.
>
> But we've not changed this, have we? The flush_dcache_folio() invocation seems
> to happen the same way now as before? Did I miss something?
I think before this change we would have called it for movable_ops
pages as well:
if (rc == MIGRATEPAGE_SUCCESS) {
if (__folio_test_movable(src)) {
...
}
...
if (likely(!folio_is_zone_device(dst)))
flush_dcache_folio(dst);
}
Now, we no longer do that for movable_ops pages.
For balloon pages, we're not copying anything, so we never have to
flush the dcache.
For zsmalloc, we do the copy in zs_object_copy() through kmap_local.
I think we could have HIGHMEM, so I wonder if we should just do a
flush_dcache_page() in zs_object_copy().
At least, staring at highmem.h with memcpy_to_page(), it looks like that
might be the right thing to do.
So likely I'll add a patch before this one that will do the
flush_dcache_page() in there.
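Something like this completely untested sketch, just to illustrate the
idea (the exact placement inside zs_object_copy() and the variable names
are hypothetical and would need double-checking against zsmalloc):

	/* hypothetical: at the end of the copy step in zs_object_copy() */
	void *d_addr = kmap_local_page(d_page);

	memcpy(d_addr + d_off, s_addr + s_off, size);
	kunmap_local(d_addr);
	/* the kernel wrote to a (potential highmem) page via a kernel mapping */
	flush_dcache_page(d_page);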
>
>>
>> Note that we can now change folio_mapping_flags() to folio_test_anon()
>> to make it clearer, because movable_ops pages will never take that path.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Have scrutinised this a lot and it seems correct to me, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> mm/migrate.c | 82 ++++++++++++++++++++++++++++------------------------
>> 1 file changed, 45 insertions(+), 37 deletions(-)
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index d97f7cd137e63..0898ddd2f661f 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -159,6 +159,45 @@ static void putback_movable_ops_page(struct page *page)
>> folio_put(folio);
>> }
>>
>> +/**
>> + * migrate_movable_ops_page - migrate an isolated movable_ops page
>> + * @page: The isolated page.
>> + *
>> + * Migrate an isolated movable_ops page.
>> + *
>> + * If the src page was already released by its owner, the src page is
>> + * un-isolated (putback) and migration succeeds; the migration core will be the
>> + * owner of both pages.
>> + *
>> + * If the src page was not released by its owner and the migration was
>> + * successful, the owner of the src page and the dst page are swapped and
>> + * the src page is un-isolated.
>> + *
>> + * If migration fails, the ownership stays unmodified and the src page
>> + * remains isolated: migration may be retried later or the page can be putback.
>> + *
>> + * TODO: migration core will treat both pages as folios and lock them before
>> + * this call to unlock them after this call. Further, the folio refcounts on
>> + * src and dst are also released by migration core. These pages will not be
>> + * folios in the future, so that must be reworked.
>> + *
>> + * Returns MIGRATEPAGE_SUCCESS on success, otherwise a negative error
>> + * code.
>> + */
>
> Love these comments you're adding!!
>
>> +static int migrate_movable_ops_page(struct page *dst, struct page *src,
>> + enum migrate_mode mode)
>> +{
>> + int rc = MIGRATEPAGE_SUCCESS;
>
> Maybe worth asserting src, dst locking?
We do have these sanity checks right now in move_to_new_folio() already.
(next patch moves it further out)
Not sure how reasonable these sanity checks are in these internal
helpers: e.g., after we call move_to_new_folio() we unlock both
folios, which would blow up if the folios weren't locked.
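For completeness, such an assertion in migrate_movable_ops_page() would
presumably just be (hypothetical, not part of the patch):

	/* both pages are expected to be locked by migration core */
	VM_WARN_ON_ONCE_PAGE(!PageLocked(src), src);
	VM_WARN_ON_ONCE_PAGE(!PageLocked(dst), dst);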
--
Cheers,
David / dhildenb
* Re: [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-07-01 9:24 ` David Hildenbrand
@ 2025-07-01 10:10 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 10:10 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 11:24, David Hildenbrand wrote:
> On 30.06.25 19:05, Lorenzo Stoakes wrote:
>> On Mon, Jun 30, 2025 at 02:59:50PM +0200, David Hildenbrand wrote:
>>> Let's factor it out, simplifying the calling code.
>>>
>>> The assumption is that flush_dcache_page() is not required for
>>> movable_ops pages: as documented for flush_dcache_folio(), it really
>>> only applies when the kernel wrote to pagecache pages / pages in
>>> highmem. movable_ops callbacks should be handling flushing
>>> caches if ever required.
>>
>> But we've not changed this, have we? The flush_dcache_folio() invocation seems
>> to happen the same way now as before? Did I miss something?
>
> I think, before this change we would have called it also for movable_ops
> pages
>
>
> if (rc == MIGRATEPAGE_SUCCESS) {
> if (__folio_test_movable(src)) {
> ...
> }
>
> ...
>
> if (likely(!folio_is_zone_device(dst)))
> flush_dcache_folio(dst);
> }
>
> Now, we no longer do that for movable_ops pages.
>
> For balloon pages, we're not copying anything, so we never possibly have
> to flush the dcache.
>
> For zsmalloc, we do the copy in zs_object_copy() through kmap_local.
>
> I think we could have HIGHMEM, so I wonder if we should just do a
> flush_dcache_page() in zs_object_copy().
>
> At least, staring at highmem.h with memcpy_to_page(), it looks like that
> might be the right thing to do.
>
>
> So likely I'll add a patch before this one that will do the
> flush_dcache_page() in there.
But reading the docs again:
"This routine need only be called for page cache pages which can
potentially ever be mapped into the address space of a user process."
So, not required IIUC. I'll clarify in the patch description.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page()
2025-06-30 12:59 ` [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
2025-06-30 17:05 ` Lorenzo Stoakes
@ 2025-07-01 7:05 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 7:05 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:50PM +0200, David Hildenbrand wrote:
> Let's factor it out, simplifying the calling code.
>
> The assumption is that flush_dcache_page() is not required for
> movable_ops pages: as documented for flush_dcache_folio(), it really
> only applies when the kernel wrote to pagecache pages / pages in
> highmem. movable_ops callbacks should be handling flushing
> caches if ever required.
>
> Note that we can now change folio_mapping_flags() to folio_test_anon()
> to make it clearer, because movable_ops pages will never take that path.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Looks correct to me.
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (8 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 09/29] mm/migrate: factor out movable_ops page handling into migrate_movable_ops_page() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-06-30 17:07 ` Lorenzo Stoakes
2025-07-01 6:31 ` Harry Yoo
2025-06-30 12:59 ` [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
` (19 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Folios will have nothing to do with movable_ops page migration. These
functions are now unused, so let's remove them.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 14 --------------
1 file changed, 14 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c0ec7422837bd..c99a00d4ca27d 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -118,20 +118,6 @@ static inline void __ClearPageMovable(struct page *page)
}
#endif
-static inline bool folio_test_movable(struct folio *folio)
-{
- return PageMovable(&folio->page);
-}
-
-static inline
-const struct movable_operations *folio_movable_ops(struct folio *folio)
-{
- VM_BUG_ON(!__folio_test_movable(folio));
-
- return (const struct movable_operations *)
- ((unsigned long)folio->mapping - PAGE_MAPPING_MOVABLE);
-}
-
static inline
const struct movable_operations *page_movable_ops(struct page *page)
{
--
2.49.0
* Re: [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-06-30 12:59 ` [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
@ 2025-06-30 17:07 ` Lorenzo Stoakes
2025-07-01 10:15 ` David Hildenbrand
2025-07-01 6:31 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-06-30 17:07 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:51PM +0200, David Hildenbrand wrote:
> Folios will have nothing to do with movable_ops page migration. These
> functions are now unused, so let's remove them.
Maybe worth mentioning that __folio_test_movable() is still a thing (for now).
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/migrate.h | 14 --------------
> 1 file changed, 14 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index c0ec7422837bd..c99a00d4ca27d 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -118,20 +118,6 @@ static inline void __ClearPageMovable(struct page *page)
> }
> #endif
>
> -static inline bool folio_test_movable(struct folio *folio)
> -{
> - return PageMovable(&folio->page);
> -}
> -
> -static inline
> -const struct movable_operations *folio_movable_ops(struct folio *folio)
> -{
> - VM_BUG_ON(!__folio_test_movable(folio));
> -
> - return (const struct movable_operations *)
> - ((unsigned long)folio->mapping - PAGE_MAPPING_MOVABLE);
> -}
> -
> static inline
> const struct movable_operations *page_movable_ops(struct page *page)
> {
> --
> 2.49.0
>
* Re: [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-06-30 17:07 ` Lorenzo Stoakes
@ 2025-07-01 10:15 ` David Hildenbrand
2025-07-01 10:25 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 10:15 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 19:07, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:51PM +0200, David Hildenbrand wrote:
>> Folios will have nothing to do with movable_ops page migration. These
>> functions are now unused, so let's remove them.
>
> Maybe worth mentioning that __folio_test_movable() is still a thing (for now).
"Note that __folio_test_movable() and friends will be removed separately
next, after more rework."
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-07-01 10:15 ` David Hildenbrand
@ 2025-07-01 10:25 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:25 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 12:15:41PM +0200, David Hildenbrand wrote:
> On 30.06.25 19:07, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 02:59:51PM +0200, David Hildenbrand wrote:
> > > Folios will have nothing to do with movable_ops page migration. These
> > > functions are now unused, so let's remove them.
> >
> > Maybe worth mentioning that __folio_test_movable() is still a thing (for now).
>
> "Note that __folio_test_movable() and friends will be removed separately
> next, after more rework."
Sounds good to me! :)
>
> Thanks!
>
> --
> Cheers,
>
> David / dhildenb
>
* Re: [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops()
2025-06-30 12:59 ` [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
2025-06-30 17:07 ` Lorenzo Stoakes
@ 2025-07-01 6:31 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 6:31 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:51PM +0200, David Hildenbrand wrote:
> Folios will have nothing to do with movable_ops page migration. These
> functions are now unused, so let's remove them.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (9 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 10/29] mm/migrate: remove folio_test_movable() and folio_movable_ops() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 7:14 ` Harry Yoo
2025-07-01 9:37 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
` (18 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's move that handling directly into migrate_folio_move(), so we can
simplify move_to_new_folio(). While at it, fixup the documentation a
bit.
Note that unmap_and_move_huge_page() does not care, because it only
deals with actual folios. (we only support migration of
individual movable_ops pages)
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/migrate.c | 63 +++++++++++++++++++++++++---------------------------
1 file changed, 30 insertions(+), 33 deletions(-)
diff --git a/mm/migrate.c b/mm/migrate.c
index 0898ddd2f661f..22c115710d0e2 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1024,11 +1024,12 @@ static int fallback_migrate_folio(struct address_space *mapping,
}
/*
- * Move a page to a newly allocated page
- * The page is locked and all ptes have been successfully removed.
+ * Move a src folio to a newly allocated dst folio.
*
- * The new page will have replaced the old page if this function
- * is successful.
+ * The src and dst folios are locked and the src folios was unmapped from
+ * the page tables.
+ *
+ * On success, the src folio was replaced by the dst folio.
*
* Return value:
* < 0 - error code
@@ -1037,34 +1038,30 @@ static int fallback_migrate_folio(struct address_space *mapping,
static int move_to_new_folio(struct folio *dst, struct folio *src,
enum migrate_mode mode)
{
+ struct address_space *mapping = folio_mapping(src);
int rc = -EAGAIN;
- bool is_lru = !__folio_test_movable(src);
VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
- if (likely(is_lru)) {
- struct address_space *mapping = folio_mapping(src);
-
- if (!mapping)
- rc = migrate_folio(mapping, dst, src, mode);
- else if (mapping_inaccessible(mapping))
- rc = -EOPNOTSUPP;
- else if (mapping->a_ops->migrate_folio)
- /*
- * Most folios have a mapping and most filesystems
- * provide a migrate_folio callback. Anonymous folios
- * are part of swap space which also has its own
- * migrate_folio callback. This is the most common path
- * for page migration.
- */
- rc = mapping->a_ops->migrate_folio(mapping, dst, src,
- mode);
- else
- rc = fallback_migrate_folio(mapping, dst, src, mode);
+ if (!mapping)
+ rc = migrate_folio(mapping, dst, src, mode);
+ else if (mapping_inaccessible(mapping))
+ rc = -EOPNOTSUPP;
+ else if (mapping->a_ops->migrate_folio)
+ /*
+ * Most folios have a mapping and most filesystems
+ * provide a migrate_folio callback. Anonymous folios
+ * are part of swap space which also has its own
+ * migrate_folio callback. This is the most common path
+ * for page migration.
+ */
+ rc = mapping->a_ops->migrate_folio(mapping, dst, src,
+ mode);
+ else
+ rc = fallback_migrate_folio(mapping, dst, src, mode);
- if (rc != MIGRATEPAGE_SUCCESS)
- goto out;
+ if (rc == MIGRATEPAGE_SUCCESS) {
/*
* For pagecache folios, src->mapping must be cleared before src
* is freed. Anonymous folios must stay anonymous until freed.
@@ -1074,10 +1071,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
if (likely(!folio_is_zone_device(dst)))
flush_dcache_folio(dst);
- } else {
- rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
}
-out:
return rc;
}
@@ -1328,20 +1322,23 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
int rc;
int old_page_state = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = !__folio_test_movable(src);
struct list_head *prev;
__migrate_folio_extract(dst, &old_page_state, &anon_vma);
prev = dst->lru.prev;
list_del(&dst->lru);
+ if (unlikely(__folio_test_movable(src))) {
+ rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
+ if (rc)
+ goto out;
+ goto out_unlock_both;
+ }
+
rc = move_to_new_folio(dst, src, mode);
if (rc)
goto out;
- if (unlikely(!is_lru))
- goto out_unlock_both;
-
/*
* When successful, push dst to LRU immediately: so that if it
* turns out to be an mlocked page, remove_migration_ptes() will
--
2.49.0
* Re: [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio()
2025-06-30 12:59 ` [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
@ 2025-07-01 7:14 ` Harry Yoo
2025-07-01 9:37 ` Lorenzo Stoakes
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 7:14 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:52PM +0200, David Hildenbrand wrote:
> Let's move that handling directly into migrate_folio_move(), so we can
> simplify move_to_new_folio(). While at it, fixup the documentation a
> bit.
>
> Note that unmap_and_move_huge_page() does not care, because it only
> deals with actual folios. (we only support migration of
> individual movable_ops pages)
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Looks correct to me.
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio()
2025-06-30 12:59 ` [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
2025-07-01 7:14 ` Harry Yoo
@ 2025-07-01 9:37 ` Lorenzo Stoakes
1 sibling, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 9:37 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:52PM +0200, David Hildenbrand wrote:
> Let's move that handling directly into migrate_folio_move(), so we can
> simplify move_to_new_folio(). While at it, fixup the documentation a
> bit.
>
> Note that unmap_and_move_huge_page() does not care, because it only
> deals with actual folios. (we only support migration of
> individual movable_ops pages)
Important caveat here :)
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/migrate.c | 63 +++++++++++++++++++++++++---------------------------
> 1 file changed, 30 insertions(+), 33 deletions(-)
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 0898ddd2f661f..22c115710d0e2 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -1024,11 +1024,12 @@ static int fallback_migrate_folio(struct address_space *mapping,
> }
>
> /*
> - * Move a page to a newly allocated page
> - * The page is locked and all ptes have been successfully removed.
> + * Move a src folio to a newly allocated dst folio.
> *
> - * The new page will have replaced the old page if this function
> - * is successful.
> + * The src and dst folios are locked and the src folios was unmapped from
> + * the page tables.
> + *
> + * On success, the src folio was replaced by the dst folio.
> *
> * Return value:
> * < 0 - error code
> @@ -1037,34 +1038,30 @@ static int fallback_migrate_folio(struct address_space *mapping,
> static int move_to_new_folio(struct folio *dst, struct folio *src,
> enum migrate_mode mode)
> {
> + struct address_space *mapping = folio_mapping(src);
> int rc = -EAGAIN;
> - bool is_lru = !__folio_test_movable(src);
This is_lru was already sketchy, !movable_ops doesn't imply on lru...
>
> VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
> VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
>
> - if (likely(is_lru)) {
> - struct address_space *mapping = folio_mapping(src);
> -
> - if (!mapping)
> - rc = migrate_folio(mapping, dst, src, mode);
> - else if (mapping_inaccessible(mapping))
> - rc = -EOPNOTSUPP;
> - else if (mapping->a_ops->migrate_folio)
> - /*
> - * Most folios have a mapping and most filesystems
> - * provide a migrate_folio callback. Anonymous folios
> - * are part of swap space which also has its own
> - * migrate_folio callback. This is the most common path
> - * for page migration.
> - */
> - rc = mapping->a_ops->migrate_folio(mapping, dst, src,
> - mode);
> - else
> - rc = fallback_migrate_folio(mapping, dst, src, mode);
> + if (!mapping)
> + rc = migrate_folio(mapping, dst, src, mode);
> + else if (mapping_inaccessible(mapping))
> + rc = -EOPNOTSUPP;
> + else if (mapping->a_ops->migrate_folio)
> + /*
> + * Most folios have a mapping and most filesystems
> + * provide a migrate_folio callback. Anonymous folios
> + * are part of swap space which also has its own
> + * migrate_folio callback. This is the most common path
> + * for page migration.
> + */
> + rc = mapping->a_ops->migrate_folio(mapping, dst, src,
> + mode);
> + else
> + rc = fallback_migrate_folio(mapping, dst, src, mode);
>
> - if (rc != MIGRATEPAGE_SUCCESS)
> - goto out;
> + if (rc == MIGRATEPAGE_SUCCESS) {
> /*
> * For pagecache folios, src->mapping must be cleared before src
> * is freed. Anonymous folios must stay anonymous until freed.
> @@ -1074,10 +1071,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
>
> if (likely(!folio_is_zone_device(dst)))
> flush_dcache_folio(dst);
> - } else {
> - rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> }
> -out:
> return rc;
> }
>
> @@ -1328,20 +1322,23 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> int rc;
> int old_page_state = 0;
> struct anon_vma *anon_vma = NULL;
> - bool is_lru = !__folio_test_movable(src);
> struct list_head *prev;
>
> __migrate_folio_extract(dst, &old_page_state, &anon_vma);
> prev = dst->lru.prev;
> list_del(&dst->lru);
>
> + if (unlikely(__folio_test_movable(src))) {
> + rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> + if (rc)
> + goto out;
> + goto out_unlock_both;
> + }
> +
> rc = move_to_new_folio(dst, src, mode);
> if (rc)
> goto out;
>
> - if (unlikely(!is_lru))
> - goto out_unlock_both;
> -
> /*
> * When successful, push dst to LRU immediately: so that if it
> * turns out to be an mlocked page, remove_migration_ptes() will
> --
> 2.49.0
>
* [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (10 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 11/29] mm/migrate: move movable_ops page handling out of move_to_new_folio() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 8:20 ` Harry Yoo
` (2 more replies)
2025-06-30 12:59 ` [PATCH v1 13/29] mm/balloon_compaction: " David Hildenbrand
` (17 subsequent siblings)
29 siblings, 3 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Instead, let's check in the callbacks if the page was already destroyed,
which can be checked by looking at zpdesc->zspage (see reset_zpdesc()).
If we detect that the page was destroyed:
(1) Fail isolation, just like the migration core would
(2) Fake migration success just like the migration core would
In the putback case there is nothing to do, as we don't do anything just
like the migration core would do.
In the future, we should look into not letting these pages get destroyed
while they are isolated -- and instead delaying that to the
putback/migration call. Add a TODO for that.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/zsmalloc.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f98747aed4330..72c2b7562c511 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -876,7 +876,6 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
{
struct page *page = zpdesc_page(zpdesc);
- __ClearPageMovable(page);
ClearPagePrivate(page);
zpdesc->zspage = NULL;
zpdesc->next = NULL;
@@ -1715,10 +1714,11 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
{
/*
- * Page is locked so zspage couldn't be destroyed. For detail, look at
- * lock_zspage in free_zspage.
+ * Page is locked so zspage can't be destroyed concurrently
+ * (see free_zspage()). But if the page was already destroyed
+ * (see reset_zpdesc()), refuse isolation here.
*/
- return true;
+ return page_zpdesc(page)->zspage;
}
static int zs_page_migrate(struct page *newpage, struct page *page,
@@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
unsigned long old_obj, new_obj;
unsigned int obj_idx;
+ /*
+ * TODO: nothing prevents a zspage from getting destroyed while
+ * isolated: we should disallow that and defer it.
+ */
+ if (!zpdesc->zspage)
+ return MIGRATEPAGE_SUCCESS;
+
/* The page is locked, so this pointer must remain valid */
zspage = get_zspage(zpdesc);
pool = zspage->pool;
--
2.49.0
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
@ 2025-07-01 8:20 ` Harry Yoo
2025-07-01 9:40 ` Lorenzo Stoakes
2025-07-02 8:11 ` Sergey Senozhatsky
2 siblings, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 8:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:53PM +0200, David Hildenbrand wrote:
> Instead, let's check in the callbacks if the page was already destroyed,
> which can be checked by looking at zpdesc->zspage (see reset_zpdesc()).
>
> If we detect that the page was destroyed:
>
> (1) Fail isolation, just like the migration core would
>
> (2) Fake migration success just like the migration core would
>
> In the putback case there is nothing to do, as we don't do anything just
> like the migration core would do.
>
> In the future, we should look into not letting these pages get destroyed
> while they are isolated -- and instead delaying that to the
> putback/migration call. Add a TODO for that.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Looks correct to me.
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
2025-07-01 8:20 ` Harry Yoo
@ 2025-07-01 9:40 ` Lorenzo Stoakes
2025-07-02 8:11 ` Sergey Senozhatsky
2 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 9:40 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:53PM +0200, David Hildenbrand wrote:
> Instead, let's check in the callbacks if the page was already destroyed,
> which can be checked by looking at zpdesc->zspage (see reset_zpdesc()).
>
> If we detect that the page was destroyed:
>
> (1) Fail isolation, just like the migration core would
>
> (2) Fake migration success just like the migration core would
>
> In the putback case there is nothing to do, as we don't do anything just
> like the migration core would do.
>
> In the future, we should look into not letting these pages get destroyed
> while they are isolated -- and instead delaying that to the
> putback/migration call. Add a TODO for that.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/zsmalloc.c | 15 +++++++++++----
> 1 file changed, 11 insertions(+), 4 deletions(-)
>
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index f98747aed4330..72c2b7562c511 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -876,7 +876,6 @@ static void reset_zpdesc(struct zpdesc *zpdesc)
> {
> struct page *page = zpdesc_page(zpdesc);
>
> - __ClearPageMovable(page);
> ClearPagePrivate(page);
> zpdesc->zspage = NULL;
> zpdesc->next = NULL;
> @@ -1715,10 +1714,11 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
> {
> /*
> - * Page is locked so zspage couldn't be destroyed. For detail, look at
> - * lock_zspage in free_zspage.
> + * Page is locked so zspage can't be destroyed concurrently
> + * (see free_zspage()). But if the page was already destroyed
> + * (see reset_zpdesc()), refuse isolation here.
> */
> - return true;
> + return page_zpdesc(page)->zspage;
> }
>
> static int zs_page_migrate(struct page *newpage, struct page *page,
> @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> unsigned long old_obj, new_obj;
> unsigned int obj_idx;
>
> + /*
> + * TODO: nothing prevents a zspage from getting destroyed while
> + * isolated: we should disallow that and defer it.
> + */
> + if (!zpdesc->zspage)
> + return MIGRATEPAGE_SUCCESS;
> +
> /* The page is locked, so this pointer must remain valid */
> zspage = get_zspage(zpdesc);
> pool = zspage->pool;
> --
> 2.49.0
>
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
2025-07-01 8:20 ` Harry Yoo
2025-07-01 9:40 ` Lorenzo Stoakes
@ 2025-07-02 8:11 ` Sergey Senozhatsky
2025-07-02 8:25 ` David Hildenbrand
2 siblings, 1 reply; 138+ messages in thread
From: Sergey Senozhatsky @ 2025-07-02 8:11 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On (25/06/30 14:59), David Hildenbrand wrote:
[..]
> static int zs_page_migrate(struct page *newpage, struct page *page,
> @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> unsigned long old_obj, new_obj;
> unsigned int obj_idx;
>
> + /*
> + * TODO: nothing prevents a zspage from getting destroyed while
> + * isolated: we should disallow that and defer it.
> + */
Can you elaborate?
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-02 8:11 ` Sergey Senozhatsky
@ 2025-07-02 8:25 ` David Hildenbrand
2025-07-02 10:10 ` Sergey Senozhatsky
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 8:25 UTC (permalink / raw)
To: Sergey Senozhatsky
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On 02.07.25 10:11, Sergey Senozhatsky wrote:
> On (25/06/30 14:59), David Hildenbrand wrote:
> [..]
>> static int zs_page_migrate(struct page *newpage, struct page *page,
>> @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>> unsigned long old_obj, new_obj;
>> unsigned int obj_idx;
>>
>> + /*
>> + * TODO: nothing prevents a zspage from getting destroyed while
>> + * isolated: we should disallow that and defer it.
>> + */
>
> Can you elaborate?
We can only free a zspage in free_zspage() while the page is locked.
After we isolated a zspage page for migration (under page lock!), we
drop the lock again, to retake the lock when trying to migrate it.
That means, there is a window where a zspage can be freed although the
page is isolated for migration.
While we currently keep that working (as far as I can see), in the
future we want to remove that support from the core.
So what probably needs to be done is to check in free_zspage() whether
the page is isolated and, if so, defer freeing to the putback/migration
call.
That way, it will be clear who the current owner of an object is
(isolation makes mm core the owner, while putback returns ownership),
and it prepares for migrating pages that have a permanently frozen
refcount (esp. PageOffline pages without any refcount).
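[ The deferral sketched above could be modeled roughly like this. This is
a hypothetical toy model in plain C, not zsmalloc code: the struct, flags
and function names are all invented for illustration. ]

```c
#include <stdbool.h>

/* Toy model of the proposed scheme; all names are invented. */
struct toy_zspage {
	bool isolated;       /* a page of this zspage is isolated for migration */
	bool free_deferred;  /* a free was requested while isolated */
	bool freed;
};

/* free_zspage() equivalent: if isolated, mm core owns it, so defer. */
static void toy_free_zspage(struct toy_zspage *zp)
{
	if (zp->isolated) {
		zp->free_deferred = true;
		return;
	}
	zp->freed = true;
}

/* Putback (or completed migration) returns ownership; run any deferred free. */
static void toy_putback_zspage(struct toy_zspage *zp)
{
	zp->isolated = false;
	if (zp->free_deferred) {
		zp->free_deferred = false;
		zp->freed = true;
	}
}
```

With this, a free racing with isolation never destroys the zspage while
mm core still holds it; the free simply completes at putback time.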
--
Cheers,
David / dhildenb
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-02 8:25 ` David Hildenbrand
@ 2025-07-02 10:10 ` Sergey Senozhatsky
2025-07-02 10:55 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Sergey Senozhatsky @ 2025-07-02 10:10 UTC (permalink / raw)
To: David Hildenbrand
Cc: Sergey Senozhatsky, linux-kernel, linux-mm, linux-doc,
linuxppc-dev, virtualization, linux-fsdevel, Andrew Morton,
Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
Arnd Bergmann, Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang,
Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner,
Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On (25/07/02 10:25), David Hildenbrand wrote:
> On 02.07.25 10:11, Sergey Senozhatsky wrote:
> > On (25/06/30 14:59), David Hildenbrand wrote:
> > [..]
> > > static int zs_page_migrate(struct page *newpage, struct page *page,
> > > @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > > unsigned long old_obj, new_obj;
> > > unsigned int obj_idx;
> > > + /*
> > > + * TODO: nothing prevents a zspage from getting destroyed while
> > > + * isolated: we should disallow that and defer it.
> > > + */
> >
> > Can you elaborate?
>
> We can only free a zspage in free_zspage() while the page is locked.
>
> After we isolated a zspage page for migration (under page lock!), we drop
^^ a physical page? (IOW zspage chain page?)
> the lock again, to retake the lock when trying to migrate it.
>
> That means, there is a window where a zspage can be freed although the page
> is isolated for migration.
I see, thanks. Looks somewhat fragile. Is this a new thing?
> While we currently keep that working (as far as I can see), in the future we
> want to remove that support from the core.
Maybe the comment can more explicitly distinguish zspage isolation and
physical page (zspage chain) isolation? zspages can get isolated
for compaction (defragmentation), for instance, which is a different
form of isolation.
> So what probably needs to be done is, checking in free_zspage(), whether the
> page is isolated. If isolated, defer freeing to the putback/migration call.
Perhaps.
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-02 10:10 ` Sergey Senozhatsky
@ 2025-07-02 10:55 ` David Hildenbrand
2025-07-03 2:28 ` Sergey Senozhatsky
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 10:55 UTC (permalink / raw)
To: Sergey Senozhatsky
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On 02.07.25 12:10, Sergey Senozhatsky wrote:
> On (25/07/02 10:25), David Hildenbrand wrote:
>> On 02.07.25 10:11, Sergey Senozhatsky wrote:
>>> On (25/06/30 14:59), David Hildenbrand wrote:
>>> [..]
>>>> static int zs_page_migrate(struct page *newpage, struct page *page,
>>>> @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>>>> unsigned long old_obj, new_obj;
>>>> unsigned int obj_idx;
>>>> + /*
>>>> + * TODO: nothing prevents a zspage from getting destroyed while
>>>> + * isolated: we should disallow that and defer it.
>>>> + */
>>>
>>> Can you elaborate?
>>
>> We can only free a zspage in free_zspage() while the page is locked.
>>
>> After we isolated a zspage page for migration (under page lock!), we drop
> ^^ a physical page? (IOW zspage chain page?)
>
>> the lock again, to retake the lock when trying to migrate it.
>>
>> That means, there is a window where a zspage can be freed although the page
>> is isolated for migration.
>
> I see, thanks. Looks somewhat fragile. Is this a new thing?
No, it's been like that forever. And I was surprised that only zsmalloc
behaves that way -- balloon implements isolation as one would expect it
(disallow freeing while isolated).
>
>> While we currently keep that working (as far as I can see), in the future we
>> want to remove that support from the core.
>
> Maybe comment can more explicitly distinguish zspage isolation and
> physical page (zspage chain) isolation? zspages can get isolated
> for compaction (defragmentation), for instance, which is a different
> form of isolation.
Well, it's confusing, as we have MM compaction (-> migration) and
apparently zs_compact.
I'll try to clarify that we are talking about isolation for page
migration purposes.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-02 10:55 ` David Hildenbrand
@ 2025-07-03 2:28 ` Sergey Senozhatsky
2025-07-03 3:22 ` Sergey Senozhatsky
0 siblings, 1 reply; 138+ messages in thread
From: Sergey Senozhatsky @ 2025-07-03 2:28 UTC (permalink / raw)
To: David Hildenbrand
Cc: Sergey Senozhatsky, linux-kernel, linux-mm, linux-doc,
linuxppc-dev, virtualization, linux-fsdevel, Andrew Morton,
Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
Arnd Bergmann, Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang,
Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner,
Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On (25/07/02 12:55), David Hildenbrand wrote:
> On 02.07.25 12:10, Sergey Senozhatsky wrote:
> > On (25/07/02 10:25), David Hildenbrand wrote:
> > > On 02.07.25 10:11, Sergey Senozhatsky wrote:
> > > > On (25/06/30 14:59), David Hildenbrand wrote:
> > > > [..]
> > > > > static int zs_page_migrate(struct page *newpage, struct page *page,
> > > > > @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > > > > unsigned long old_obj, new_obj;
> > > > > unsigned int obj_idx;
> > > > > + /*
> > > > > + * TODO: nothing prevents a zspage from getting destroyed while
> > > > > + * isolated: we should disallow that and defer it.
> > > > > + */
> > > >
> > > > Can you elaborate?
> > >
> > > We can only free a zspage in free_zspage() while the page is locked.
> > >
> > > After we isolated a zspage page for migration (under page lock!), we drop
> > ^^ a physical page? (IOW zspage chain page?)
> >
> > > the lock again, to retake the lock when trying to migrate it.
> > >
> > > That means, there is a window where a zspage can be freed although the page
> > > is isolated for migration.
> >
> > I see, thanks. Looks somewhat fragile. Is this a new thing?
>
> No, it's been like that forever. And I was surprised that only zsmalloc
> behaves that way
Oh, that makes two of us.
> > > While we currently keep that working (as far as I can see), in the future we
> > > want to remove that support from the core.
> >
> > Maybe comment can more explicitly distinguish zspage isolation and
> > physical page (zspage chain) isolation? zspages can get isolated
> > for compaction (defragmentation), for instance, which is a different
> > form of isolation.
>
> Well, it's confusing, as we have MM compaction (-> migration) and apparently
> zs_compact.
True.
> I'll try to clarify that we are talking about isolation for page migration
> purposes.
Thanks.
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-03 2:28 ` Sergey Senozhatsky
@ 2025-07-03 3:22 ` Sergey Senozhatsky
2025-07-03 7:45 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Sergey Senozhatsky @ 2025-07-03 3:22 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt, Sergey Senozhatsky
On (25/07/03 11:28), Sergey Senozhatsky wrote:
> > > > > > static int zs_page_migrate(struct page *newpage, struct page *page,
> > > > > > @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
> > > > > > unsigned long old_obj, new_obj;
> > > > > > unsigned int obj_idx;
> > > > > > + /*
> > > > > > + * TODO: nothing prevents a zspage from getting destroyed while
> > > > > > + * isolated: we should disallow that and defer it.
> > > > > > + */
> > > > >
> > > > > Can you elaborate?
> > > >
> > > > We can only free a zspage in free_zspage() while the page is locked.
> > > >
> > > > After we isolated a zspage page for migration (under page lock!), we drop
> > > ^^ a physical page? (IOW zspage chain page?)
> > >
> > > > the lock again, to retake the lock when trying to migrate it.
> > > >
> > > > That means, there is a window where a zspage can be freed although the page
> > > > is isolated for migration.
> > >
> > > I see, thanks. Looks somewhat fragile. Is this a new thing?
> >
> > No, it's been like that forever. And I was surprised that only zsmalloc
> > behaves that way
>
> Oh, that makes two of us.
I sort of wonder if zs_page_migrate() VM_BUG_ON_PAGE() removal and
zspage check addition need to be landed outside of this series, as
a zsmalloc fixup.
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-03 3:22 ` Sergey Senozhatsky
@ 2025-07-03 7:45 ` David Hildenbrand
2025-07-03 7:49 ` Sergey Senozhatsky
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-03 7:45 UTC (permalink / raw)
To: Sergey Senozhatsky
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On 03.07.25 05:22, Sergey Senozhatsky wrote:
> On (25/07/03 11:28), Sergey Senozhatsky wrote:
>>>>>>> static int zs_page_migrate(struct page *newpage, struct page *page,
>>>>>>> @@ -1736,6 +1736,13 @@ static int zs_page_migrate(struct page *newpage, struct page *page,
>>>>>>> unsigned long old_obj, new_obj;
>>>>>>> unsigned int obj_idx;
>>>>>>> + /*
>>>>>>> + * TODO: nothing prevents a zspage from getting destroyed while
>>>>>>> + * isolated: we should disallow that and defer it.
>>>>>>> + */
>>>>>>
>>>>>> Can you elaborate?
>>>>>
>>>>> We can only free a zspage in free_zspage() while the page is locked.
>>>>>
>>>>> After we isolated a zspage page for migration (under page lock!), we drop
>>>> ^^ a physical page? (IOW zspage chain page?)
>>>>
>>>>> the lock again, to retake the lock when trying to migrate it.
>>>>>
>>>>> That means, there is a window where a zspage can be freed although the page
>>>>> is isolated for migration.
>>>>
>>>> I see, thanks. Looks somewhat fragile. Is this a new thing?
>>>
>>> No, it's been like that forever. And I was surprised that only zsmalloc
>>> behaves that way
>>
>> Oh, that makes two of us.
>
> I sort of wonder if zs_page_migrate() VM_BUG_ON_PAGE() removal and
> zspage check addition need to be landed outside of this series, as
> a zsmalloc fixup.
Not sure if there is real value for that; given the review status, I
assume this series won't take too long to be ready for upstream. Of
course, if that is not the case we could try pulling them out.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable()
2025-07-03 7:45 ` David Hildenbrand
@ 2025-07-03 7:49 ` Sergey Senozhatsky
0 siblings, 0 replies; 138+ messages in thread
From: Sergey Senozhatsky @ 2025-07-03 7:49 UTC (permalink / raw)
To: David Hildenbrand
Cc: Sergey Senozhatsky, linux-kernel, linux-mm, linux-doc,
linuxppc-dev, virtualization, linux-fsdevel, Andrew Morton,
Jonathan Corbet, Madhavan Srinivasan, Michael Ellerman,
Nicholas Piggin, Christophe Leroy, Jerrin Shaji George,
Arnd Bergmann, Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang,
Xuan Zhuo, Eugenio Pérez, Alexander Viro, Christian Brauner,
Jan Kara, Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim,
Byungchul Park, Gregory Price, Ying Huang, Alistair Popple,
Lorenzo Stoakes, Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Brendan Jackman, Johannes Weiner, Jason Gunthorpe,
John Hubbard, Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin,
Naoya Horiguchi, Oscar Salvador, Rik van Riel, Harry Yoo,
Qi Zheng, Shakeel Butt
On (25/07/03 09:45), David Hildenbrand wrote:
> Not sure if there is real value for that; given the review status, I assume
> this series won't take too long to be ready for upstream. Of course, if that
> is not the case we could try pulling them out.
Sounds good to me.
* [PATCH v1 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (11 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 12/29] mm/zsmalloc: stop using __ClearPageMovable() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 10:03 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
` (16 subsequent siblings)
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
We can just look at the balloon device (stored in page->private) to see
whether the page is still part of the balloon.
As isolated balloon pages cannot get released (they are taken off the
balloon list while isolated), we don't have to worry about this case in
the putback and migration callbacks. Add a WARN_ON_ONCE for now.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 4 +---
mm/balloon_compaction.c | 11 +++++++++++
2 files changed, 12 insertions(+), 3 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index bfc6e50bd004b..9bce8e9f5018c 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -136,10 +136,8 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
*/
static inline void balloon_page_finalize(struct page *page)
{
- if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
- __ClearPageMovable(page);
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
set_page_private(page, 0);
- }
/* PageOffline is sticky until the page is freed to the buddy. */
}
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index ec176bdb8a78b..e4f1a122d786b 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -206,6 +206,9 @@ static bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
struct balloon_dev_info *b_dev_info = balloon_page_device(page);
unsigned long flags;
+ if (!b_dev_info)
+ return false;
+
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
list_del(&page->lru);
b_dev_info->isolated_pages++;
@@ -219,6 +222,10 @@ static void balloon_page_putback(struct page *page)
struct balloon_dev_info *b_dev_info = balloon_page_device(page);
unsigned long flags;
+ /* Isolated balloon pages cannot get deflated. */
+ if (WARN_ON_ONCE(!b_dev_info))
+ return;
+
spin_lock_irqsave(&b_dev_info->pages_lock, flags);
list_add(&page->lru, &b_dev_info->pages);
b_dev_info->isolated_pages--;
@@ -234,6 +241,10 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
VM_BUG_ON_PAGE(!PageLocked(page), page);
VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
+ /* Isolated balloon pages cannot get deflated. */
+ if (WARN_ON_ONCE(!balloon))
+ return -EAGAIN;
+
return balloon->migratepage(balloon, newpage, page, mode);
}
--
2.49.0
* Re: [PATCH v1 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 13/29] mm/balloon_compaction: " David Hildenbrand
@ 2025-07-01 10:03 ` Lorenzo Stoakes
2025-07-01 10:19 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:03 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:54PM +0200, David Hildenbrand wrote:
> We can just look at the balloon device (stored in page->private), to see
> if the page is still part of the balloon.
>
> As isolated balloon pages cannot get released (they are taken off the
> balloon list while isolated), we don't have to worry about this case in
> the putback and migration callback. Add a WARN_ON_ONCE for now.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Seems reasonable, one comment below re: comment.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 4 +---
> mm/balloon_compaction.c | 11 +++++++++++
> 2 files changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index bfc6e50bd004b..9bce8e9f5018c 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -136,10 +136,8 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
> */
> static inline void balloon_page_finalize(struct page *page)
> {
> - if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
> - __ClearPageMovable(page);
> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
> set_page_private(page, 0);
> - }
> /* PageOffline is sticky until the page is freed to the buddy. */
> }
>
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index ec176bdb8a78b..e4f1a122d786b 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -206,6 +206,9 @@ static bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
> struct balloon_dev_info *b_dev_info = balloon_page_device(page);
> unsigned long flags;
>
> + if (!b_dev_info)
> + return false;
> +
> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> list_del(&page->lru);
> b_dev_info->isolated_pages++;
> @@ -219,6 +222,10 @@ static void balloon_page_putback(struct page *page)
> struct balloon_dev_info *b_dev_info = balloon_page_device(page);
> unsigned long flags;
>
> + /* Isolated balloon pages cannot get deflated. */
> + if (WARN_ON_ONCE(!b_dev_info))
> + return;
> +
> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> list_add(&page->lru, &b_dev_info->pages);
> b_dev_info->isolated_pages--;
> @@ -234,6 +241,10 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
>
> + /* Isolated balloon pages cannot get deflated. */
Hm do you mean migrated?
> + if (WARN_ON_ONCE(!balloon))
> + return -EAGAIN;
> +
> return balloon->migratepage(balloon, newpage, page, mode);
> }
>
> --
> 2.49.0
>
* Re: [PATCH v1 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-07-01 10:03 ` Lorenzo Stoakes
@ 2025-07-01 10:19 ` David Hildenbrand
2025-07-01 10:40 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 10:19 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 12:03, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:54PM +0200, David Hildenbrand wrote:
>> We can just look at the balloon device (stored in page->private), to see
>> if the page is still part of the balloon.
>>
>> As isolated balloon pages cannot get released (they are taken off the
>> balloon list while isolated), we don't have to worry about this case in
>> the putback and migration callback. Add a WARN_ON_ONCE for now.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Seems reasonable, one comment below re: comment.
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> include/linux/balloon_compaction.h | 4 +---
>> mm/balloon_compaction.c | 11 +++++++++++
>> 2 files changed, 12 insertions(+), 3 deletions(-)
>>
>> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
>> index bfc6e50bd004b..9bce8e9f5018c 100644
>> --- a/include/linux/balloon_compaction.h
>> +++ b/include/linux/balloon_compaction.h
>> @@ -136,10 +136,8 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
>> */
>> static inline void balloon_page_finalize(struct page *page)
>> {
>> - if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
>> - __ClearPageMovable(page);
>> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
>> set_page_private(page, 0);
>> - }
>> /* PageOffline is sticky until the page is freed to the buddy. */
>> }
>>
>> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
>> index ec176bdb8a78b..e4f1a122d786b 100644
>> --- a/mm/balloon_compaction.c
>> +++ b/mm/balloon_compaction.c
>> @@ -206,6 +206,9 @@ static bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
>> struct balloon_dev_info *b_dev_info = balloon_page_device(page);
>> unsigned long flags;
>>
>> + if (!b_dev_info)
>> + return false;
>> +
>> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
>> list_del(&page->lru);
>> b_dev_info->isolated_pages++;
>> @@ -219,6 +222,10 @@ static void balloon_page_putback(struct page *page)
>> struct balloon_dev_info *b_dev_info = balloon_page_device(page);
>> unsigned long flags;
>>
>> + /* Isolated balloon pages cannot get deflated. */
>> + if (WARN_ON_ONCE(!b_dev_info))
>> + return;
>> +
>> spin_lock_irqsave(&b_dev_info->pages_lock, flags);
>> list_add(&page->lru, &b_dev_info->pages);
>> b_dev_info->isolated_pages--;
>> @@ -234,6 +241,10 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
>> VM_BUG_ON_PAGE(!PageLocked(page), page);
>> VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
>>
>> + /* Isolated balloon pages cannot get deflated. */
>
> Hm do you mean migrated?
Well, they can get migrated, obviously :)
Deflation would be the other code path where we would remove a balloon
page from the balloon, and invalidate page->private, suddenly seeing
!b_dev_info here.
But that cannot happen, as isolation takes them off the balloon list. So
deflating the balloon cannot find them until un-isolated.
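[ The invariant described here, that isolation removes the page from the
balloon's list so concurrent deflation can never pick it up, can be
sketched as a toy model. Plain C, names invented for illustration; the
real code manipulates page->lru under b_dev_info->pages_lock. ]

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model: the balloon's page list as an array of slots. */
struct toy_page {
	bool on_balloon_list;  /* enqueued on b_dev_info->pages */
	bool isolated;
};

/* Isolation takes the page off the balloon list (list_del() in real code). */
static void toy_isolate(struct toy_page *p)
{
	p->on_balloon_list = false;
	p->isolated = true;
}

/*
 * Deflation only dequeues pages still on the list, so it can never see an
 * isolated page -- which is why the putback/migrate callbacks can
 * WARN_ON_ONCE(!b_dev_info) instead of handling a deflated page.
 */
static struct toy_page *toy_deflate_one(struct toy_page *pages, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		if (pages[i].on_balloon_list) {
			pages[i].on_balloon_list = false;
			return &pages[i];
		}
	}
	return NULL;
}

/* Putback re-links the page, making it visible to deflation again. */
static void toy_putback(struct toy_page *p)
{
	p->isolated = false;
	p->on_balloon_list = true;
}
```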
--
Cheers,
David / dhildenb
* Re: [PATCH v1 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-07-01 10:19 ` David Hildenbrand
@ 2025-07-01 10:40 ` Lorenzo Stoakes
2025-07-01 12:24 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:40 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 12:19:47PM +0200, David Hildenbrand wrote:
> On 01.07.25 12:03, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 02:59:54PM +0200, David Hildenbrand wrote:
> > > We can just look at the balloon device (stored in page->private), to see
> > > if the page is still part of the balloon.
> > >
> > > As isolated balloon pages cannot get released (they are taken off the
> > > balloon list while isolated), we don't have to worry about this case in
> > > the putback and migration callback. Add a WARN_ON_ONCE for now.
> > >
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > Seems reasonable, one comment below re: comment.
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >
> > > ---
> > > include/linux/balloon_compaction.h | 4 +---
> > > mm/balloon_compaction.c | 11 +++++++++++
> > > 2 files changed, 12 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> > > index bfc6e50bd004b..9bce8e9f5018c 100644
> > > --- a/include/linux/balloon_compaction.h
> > > +++ b/include/linux/balloon_compaction.h
> > > @@ -136,10 +136,8 @@ static inline gfp_t balloon_mapping_gfp_mask(void)
> > > */
> > > static inline void balloon_page_finalize(struct page *page)
> > > {
> > > - if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
> > > - __ClearPageMovable(page);
> > > + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
> > > set_page_private(page, 0);
> > > - }
> > > /* PageOffline is sticky until the page is freed to the buddy. */
> > > }
> > >
> > > diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> > > index ec176bdb8a78b..e4f1a122d786b 100644
> > > --- a/mm/balloon_compaction.c
> > > +++ b/mm/balloon_compaction.c
> > > @@ -206,6 +206,9 @@ static bool balloon_page_isolate(struct page *page, isolate_mode_t mode)
> > > struct balloon_dev_info *b_dev_info = balloon_page_device(page);
> > > unsigned long flags;
> > >
> > > + if (!b_dev_info)
> > > + return false;
> > > +
> > > spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> > > list_del(&page->lru);
> > > b_dev_info->isolated_pages++;
> > > @@ -219,6 +222,10 @@ static void balloon_page_putback(struct page *page)
> > > struct balloon_dev_info *b_dev_info = balloon_page_device(page);
> > > unsigned long flags;
> > >
> > > + /* Isolated balloon pages cannot get deflated. */
> > > + if (WARN_ON_ONCE(!b_dev_info))
> > > + return;
> > > +
> > > spin_lock_irqsave(&b_dev_info->pages_lock, flags);
> > > list_add(&page->lru, &b_dev_info->pages);
> > > b_dev_info->isolated_pages--;
> > > @@ -234,6 +241,10 @@ static int balloon_page_migrate(struct page *newpage, struct page *page,
> > > VM_BUG_ON_PAGE(!PageLocked(page), page);
> > > VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
> > >
> > > + /* Isolated balloon pages cannot get deflated. */
> >
> > Hm do you mean migrated?
>
> Well, they can get migrated, obviously :)
Right yeah we isolate to migrate :P
I guess I was confused by the 'get deflated' wording, wrongly thinking putback was
doing this (I got confused about terminology), but I see that in
balloon_page_dequeue() we call balloon_page_finalize(), which does
set_page_private(page, 0), clearing the pointer that balloon_page_device() returns.
OK I guess this is fine... :)
An aside, unrelated to this series: it'd be nice to use 'deflate' consistently
in this code. We do __count_vm_event(BALLOON_DEFLATE) in
balloon_page_list_dequeue() but say 'deflate' nowhere else... well, before this
patch :)
>
> Deflation would be the other code path where we would remove a balloon page
> from the balloon, and invalidate page->private, suddenly seeing !b_dev_info
> here.
Actually it's 'balloon', not b_dev_info. Kind of out of scope for this patch but
would be good to rename.
>
> But that cannot happen, as isolation takes them off the balloon list. So
> deflating the balloon cannot find them until un-isolated.
Ack.
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 13/29] mm/balloon_compaction: stop using __ClearPageMovable()
2025-07-01 10:40 ` Lorenzo Stoakes
@ 2025-07-01 12:24 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:24 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
> OK I guess this is fine... :)
>
> An aside, unrelated to this series: it'd be nice to use 'deflate' consistently
> in this code. We do __count_vm_event(BALLOON_DEFLATE) in
> balloon_page_list_dequeue() but say 'deflate' nowhere else... well, before this
> patch :)
Right, dequeue is actually deflate, because one couldn't use that
function for anything else as it stands.
TODO list ... :)
--
Cheers,
David / dhildenb
* [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (12 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 13/29] mm/balloon_compaction: " David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 8:36 ` Harry Yoo
2025-07-01 10:43 ` Lorenzo Stoakes
2025-06-30 12:59 ` [PATCH v1 15/29] mm/migration: remove PageMovable() David Hildenbrand
` (15 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Unused, let's remove it.
The Chinese docs in Documentation/translations/zh_CN/mm/page_migration.rst
still mention it, but that whole document is destined to get outdated and
updated by somebody who actually speaks that language.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 8 ++------
mm/compaction.c | 11 -----------
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index c99a00d4ca27d..6eeda8eb1e0d8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -35,8 +35,8 @@ struct migration_target_control;
* @src page. The driver should copy the contents of the
* @src page to the @dst page and set up the fields of @dst page.
* Both pages are locked.
- * If page migration is successful, the driver should call
- * __ClearPageMovable(@src) and return MIGRATEPAGE_SUCCESS.
+ * If page migration is successful, the driver should
+ * return MIGRATEPAGE_SUCCESS.
* If the driver cannot migrate the page at the moment, it can return
* -EAGAIN. The VM interprets this as a temporary migration failure and
* will retry it later. Any other error value is a permanent migration
@@ -106,16 +106,12 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#ifdef CONFIG_COMPACTION
bool PageMovable(struct page *page);
void __SetPageMovable(struct page *page, const struct movable_operations *ops);
-void __ClearPageMovable(struct page *page);
#else
static inline bool PageMovable(struct page *page) { return false; }
static inline void __SetPageMovable(struct page *page,
const struct movable_operations *ops)
{
}
-static inline void __ClearPageMovable(struct page *page)
-{
-}
#endif
static inline
diff --git a/mm/compaction.c b/mm/compaction.c
index 17455c5a4be05..889ec696ba96a 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -137,17 +137,6 @@ void __SetPageMovable(struct page *page, const struct movable_operations *mops)
}
EXPORT_SYMBOL(__SetPageMovable);
-void __ClearPageMovable(struct page *page)
-{
- VM_BUG_ON_PAGE(!PageMovable(page), page);
- /*
- * This page still has the type of a movable page, but it's
- * actually not movable any more.
- */
- page->mapping = (void *)PAGE_MAPPING_MOVABLE;
-}
-EXPORT_SYMBOL(__ClearPageMovable);
-
/* Do not skip compaction more than 64 times */
#define COMPACT_MAX_DEFER_SHIFT 6
--
2.49.0
* Re: [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
@ 2025-07-01 8:36 ` Harry Yoo
2025-07-01 10:43 ` Lorenzo Stoakes
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-01 8:36 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:55PM +0200, David Hildenbrand wrote:
> Unused, let's remove it.
>
> The Chinese docs in Documentation/translations/zh_CN/mm/page_migration.rst
> still mention it, but that whole docs is destined to get outdated and
> updated by somebody that actually speaks that language.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable()
2025-06-30 12:59 ` [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
2025-07-01 8:36 ` Harry Yoo
@ 2025-07-01 10:43 ` Lorenzo Stoakes
2025-07-01 12:25 ` David Hildenbrand
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:43 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:55PM +0200, David Hildenbrand wrote:
> Unused, let's remove it.
>
> The Chinese docs in Documentation/translations/zh_CN/mm/page_migration.rst
> still mention it, but that whole docs is destined to get outdated and
> updated by somebody that actually speaks that language.
Yeah I've noticed these getting out of sync before, perhaps somebody fluent in
Simplified Chinese can assist at some point :) mine is rather rusty...
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Lovely! The best code is no code :>)
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/migrate.h | 8 ++------
> mm/compaction.c | 11 -----------
> 2 files changed, 2 insertions(+), 17 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index c99a00d4ca27d..6eeda8eb1e0d8 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -35,8 +35,8 @@ struct migration_target_control;
> * @src page. The driver should copy the contents of the
> * @src page to the @dst page and set up the fields of @dst page.
> * Both pages are locked.
> - * If page migration is successful, the driver should call
> - * __ClearPageMovable(@src) and return MIGRATEPAGE_SUCCESS.
> + * If page migration is successful, the driver should
> + * return MIGRATEPAGE_SUCCESS.
> * If the driver cannot migrate the page at the moment, it can return
> * -EAGAIN. The VM interprets this as a temporary migration failure and
> * will retry it later. Any other error value is a permanent migration
> @@ -106,16 +106,12 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
> #ifdef CONFIG_COMPACTION
> bool PageMovable(struct page *page);
> void __SetPageMovable(struct page *page, const struct movable_operations *ops);
> -void __ClearPageMovable(struct page *page);
> #else
> static inline bool PageMovable(struct page *page) { return false; }
> static inline void __SetPageMovable(struct page *page,
> const struct movable_operations *ops)
> {
> }
> -static inline void __ClearPageMovable(struct page *page)
> -{
> -}
> #endif
>
> static inline
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 17455c5a4be05..889ec696ba96a 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -137,17 +137,6 @@ void __SetPageMovable(struct page *page, const struct movable_operations *mops)
> }
> EXPORT_SYMBOL(__SetPageMovable);
>
> -void __ClearPageMovable(struct page *page)
> -{
> - VM_BUG_ON_PAGE(!PageMovable(page), page);
> - /*
> - * This page still has the type of a movable page, but it's
> - * actually not movable any more.
> - */
> - page->mapping = (void *)PAGE_MAPPING_MOVABLE;
> -}
> -EXPORT_SYMBOL(__ClearPageMovable);
> -
> /* Do not skip compaction more than 64 times */
> #define COMPACT_MAX_DEFER_SHIFT 6
>
> --
> 2.49.0
>
* Re: [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable()
2025-07-01 10:43 ` Lorenzo Stoakes
@ 2025-07-01 12:25 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:25 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 12:43, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:55PM +0200, David Hildenbrand wrote:
>> Unused, let's remove it.
>>
>> The Chinese docs in Documentation/translations/zh_CN/mm/page_migration.rst
>> still mention it, but that whole docs is destined to get outdated and
>> updated by somebody that actually speaks that language.
>
> Yeah I've noticed these getting out of sync before, perhaps somebody fluent in
> Simplified Chinese can assist at some point :) mine is rather rusty...
I already saw doc updates for this. Obviously, I can't review them in
any way, but people seem to be actively updating them.
--
Cheers,
David / dhildenb
* [PATCH v1 15/29] mm/migration: remove PageMovable()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (13 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 14/29] mm/migrate: remove __ClearPageMovable() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 10:50 ` Lorenzo Stoakes
2025-07-02 9:20 ` Harry Yoo
2025-06-30 12:59 ` [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
` (14 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
As __ClearPageMovable() is gone that would have only made
PageMovable()==false but still __PageMovable()==true, now
PageMovable() == __PageMovable().
So we can replace PageMovable() checks by __PageMovable(). In fact,
__PageMovable() cannot change until a page is freed, so we can turn
some PageMovable() into sanity checks for __PageMovable().
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 2 --
mm/compaction.c | 15 ---------------
mm/migrate.c | 18 ++++++++++--------
3 files changed, 10 insertions(+), 25 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6eeda8eb1e0d8..25659a685e2aa 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -104,10 +104,8 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_COMPACTION
-bool PageMovable(struct page *page);
void __SetPageMovable(struct page *page, const struct movable_operations *ops);
#else
-static inline bool PageMovable(struct page *page) { return false; }
static inline void __SetPageMovable(struct page *page,
const struct movable_operations *ops)
{
diff --git a/mm/compaction.c b/mm/compaction.c
index 889ec696ba96a..5c37373017014 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,21 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-bool PageMovable(struct page *page)
-{
- const struct movable_operations *mops;
-
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- if (!__PageMovable(page))
- return false;
-
- mops = page_movable_ops(page);
- if (mops)
- return true;
-
- return false;
-}
-
void __SetPageMovable(struct page *page, const struct movable_operations *mops)
{
VM_BUG_ON_PAGE(!PageLocked(page), page);
diff --git a/mm/migrate.c b/mm/migrate.c
index 22c115710d0e2..040484230aebc 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -87,9 +87,12 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out;
/*
- * Check movable flag before taking the page lock because
+ * Check for movable_ops pages before taking the page lock because
* we use non-atomic bitops on newly allocated page flags so
* unconditionally grabbing the lock ruins page's owner side.
+ *
+ * Note that once a page has movable_ops, it will stay that way
+ * until the page was freed.
*/
if (unlikely(!__PageMovable(page)))
goto out_putfolio;
@@ -108,7 +111,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- if (!PageMovable(page) || PageIsolated(page))
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ if (PageIsolated(page))
goto out_no_isolated;
mops = page_movable_ops(page);
@@ -149,11 +153,10 @@ static void putback_movable_ops_page(struct page *page)
*/
struct folio *folio = page_folio(page);
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
folio_lock(folio);
- /* If the page was released by it's owner, there is nothing to do. */
- if (PageMovable(page))
- page_movable_ops(page)->putback_page(page);
+ page_movable_ops(page)->putback_page(page);
ClearPageIsolated(page);
folio_unlock(folio);
folio_put(folio);
@@ -189,10 +192,9 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
{
int rc = MIGRATEPAGE_SUCCESS;
+ VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
- /* If the page was released by it's owner, there is nothing to do. */
- if (PageMovable(src))
- rc = page_movable_ops(src)->migrate_page(dst, src, mode);
+ rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
ClearPageIsolated(src);
return rc;
--
2.49.0
* Re: [PATCH v1 15/29] mm/migration: remove PageMovable()
2025-06-30 12:59 ` [PATCH v1 15/29] mm/migration: remove PageMovable() David Hildenbrand
@ 2025-07-01 10:50 ` Lorenzo Stoakes
2025-07-01 12:27 ` David Hildenbrand
2025-07-02 9:20 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:50 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:56PM +0200, David Hildenbrand wrote:
> As __ClearPageMovable() is gone that would have only made
> PageMovable()==false but still __PageMovable()==true, now
> PageMovable() == __PageMovable().
I think this could be rephrased to be clearer, something like:
Previously, if __ClearPageMovable() were invoked on a page, this would
cause __PageMovable() to return false, but due to the continued
existance of page movable ops, PageMovable() would have returned true.
With __ClearPageMovable() gone, the two are exactly equivalent.
>
> So we can replace PageMovable() checks by __PageMovable(). In fact,
> __PageMovable() cannot change until a page is freed, so we can turn
> some PageMovable() into sanity checks for __PageMovable().
Deferring the clear does seem to simplify things!
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/migrate.h | 2 --
> mm/compaction.c | 15 ---------------
> mm/migrate.c | 18 ++++++++++--------
> 3 files changed, 10 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6eeda8eb1e0d8..25659a685e2aa 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -104,10 +104,8 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
> #endif /* CONFIG_MIGRATION */
>
> #ifdef CONFIG_COMPACTION
> -bool PageMovable(struct page *page);
> void __SetPageMovable(struct page *page, const struct movable_operations *ops);
> #else
> -static inline bool PageMovable(struct page *page) { return false; }
> static inline void __SetPageMovable(struct page *page,
> const struct movable_operations *ops)
> {
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 889ec696ba96a..5c37373017014 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -114,21 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
> }
>
> #ifdef CONFIG_COMPACTION
> -bool PageMovable(struct page *page)
> -{
> - const struct movable_operations *mops;
> -
> - VM_BUG_ON_PAGE(!PageLocked(page), page);
> - if (!__PageMovable(page))
> - return false;
> -
> - mops = page_movable_ops(page);
> - if (mops)
> - return true;
> -
> - return false;
> -}
> -
> void __SetPageMovable(struct page *page, const struct movable_operations *mops)
> {
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 22c115710d0e2..040484230aebc 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -87,9 +87,12 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> goto out;
>
> /*
> - * Check movable flag before taking the page lock because
> + * Check for movable_ops pages before taking the page lock because
> * we use non-atomic bitops on newly allocated page flags so
> * unconditionally grabbing the lock ruins page's owner side.
> + *
> + * Note that once a page has movable_ops, it will stay that way
> + * until the page was freed.
> */
> if (unlikely(!__PageMovable(page)))
> goto out_putfolio;
> @@ -108,7 +111,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> if (unlikely(!folio_trylock(folio)))
> goto out_putfolio;
>
> - if (!PageMovable(page) || PageIsolated(page))
> + VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
> + if (PageIsolated(page))
> goto out_no_isolated;
>
> mops = page_movable_ops(page);
> @@ -149,11 +153,10 @@ static void putback_movable_ops_page(struct page *page)
> */
> struct folio *folio = page_folio(page);
>
> + VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
> VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
> folio_lock(folio);
> - /* If the page was released by it's owner, there is nothing to do. */
> - if (PageMovable(page))
> - page_movable_ops(page)->putback_page(page);
> + page_movable_ops(page)->putback_page(page);
> ClearPageIsolated(page);
> folio_unlock(folio);
> folio_put(folio);
> @@ -189,10 +192,9 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
> {
> int rc = MIGRATEPAGE_SUCCESS;
>
> + VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
> VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
> - /* If the page was released by it's owner, there is nothing to do. */
> - if (PageMovable(src))
> - rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> + rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> if (rc == MIGRATEPAGE_SUCCESS)
> ClearPageIsolated(src);
> return rc;
> --
> 2.49.0
>
* Re: [PATCH v1 15/29] mm/migration: remove PageMovable()
2025-07-01 10:50 ` Lorenzo Stoakes
@ 2025-07-01 12:27 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:27 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 12:50, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:56PM +0200, David Hildenbrand wrote:
>> As __ClearPageMovable() is gone that would have only made
>> PageMovable()==false but still __PageMovable()==true, now
>> PageMovable() == __PageMovable().
>
> I think this could be rephrased to be clearer, something like:
>
> Previously, if __ClearPageMovable() were invoked on a page, this would
> cause __PageMovable() to return false, but due to the continued
> existance of page movable ops, PageMovable() would have returned true.
>
"existence", yes will use that, thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 15/29] mm/migration: remove PageMovable()
2025-06-30 12:59 ` [PATCH v1 15/29] mm/migration: remove PageMovable() David Hildenbrand
2025-07-01 10:50 ` Lorenzo Stoakes
@ 2025-07-02 9:20 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 9:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:56PM +0200, David Hildenbrand wrote:
> As __ClearPageMovable() is gone that would have only made
> PageMovable()==false but still __PageMovable()==true, now
> PageMovable() == __PageMovable().
>
> So we can replace PageMovable() checks by __PageMovable(). In fact,
> __PageMovable() cannot change until a page is freed, so we can turn
> some PageMovable() into sanity checks for __PageMovable().
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
(and Lorenzo's rephrasing looks good)
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (14 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 15/29] mm/migration: remove PageMovable() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 10:59 ` Lorenzo Stoakes
2025-07-02 9:29 ` Harry Yoo
2025-06-30 12:59 ` [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
` (13 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's make it clearer that we are talking about movable_ops pages.
While at it, convert a VM_BUG_ON to a VM_WARN_ON_ONCE_PAGE.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/migrate.h | 2 +-
include/linux/page-flags.h | 2 +-
mm/compaction.c | 7 ++-----
mm/memory-failure.c | 4 ++--
mm/memory_hotplug.c | 8 +++-----
mm/migrate.c | 8 ++++----
mm/page_alloc.c | 2 +-
mm/page_isolation.c | 10 +++++-----
8 files changed, 19 insertions(+), 24 deletions(-)
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 25659a685e2aa..e04035f70e36f 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -115,7 +115,7 @@ static inline void __SetPageMovable(struct page *page,
static inline
const struct movable_operations *page_movable_ops(struct page *page)
{
- VM_BUG_ON(!__PageMovable(page));
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
return (const struct movable_operations *)
((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4fe5ee67535b2..c67163b73c5ec 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -750,7 +750,7 @@ static __always_inline bool __folio_test_movable(const struct folio *folio)
PAGE_MAPPING_MOVABLE;
}
-static __always_inline bool __PageMovable(const struct page *page)
+static __always_inline bool page_has_movable_ops(const struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
PAGE_MAPPING_MOVABLE;
diff --git a/mm/compaction.c b/mm/compaction.c
index 5c37373017014..41fd6a1fe9a33 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1056,11 +1056,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
* Skip any other type of page
*/
if (!PageLRU(page)) {
- /*
- * __PageMovable can return false positive so we need
- * to verify it under page_lock.
- */
- if (unlikely(__PageMovable(page)) &&
+ /* Isolation code will deal with any races. */
+ if (unlikely(page_has_movable_ops(page)) &&
!PageIsolated(page)) {
if (locked) {
unlock_page_lruvec_irqrestore(locked, flags);
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index b91a33fb6c694..9e2cff1999347 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1388,8 +1388,8 @@ static inline bool HWPoisonHandlable(struct page *page, unsigned long flags)
if (PageSlab(page))
return false;
- /* Soft offline could migrate non-LRU movable pages */
- if ((flags & MF_SOFT_OFFLINE) && __PageMovable(page))
+ /* Soft offline could migrate movable_ops pages */
+ if ((flags & MF_SOFT_OFFLINE) && page_has_movable_ops(page))
return true;
return PageLRU(page) || is_free_buddy_page(page);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 62d45752f9f44..5fad126949d08 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1739,8 +1739,8 @@ bool mhp_range_allowed(u64 start, u64 size, bool need_mapping)
#ifdef CONFIG_MEMORY_HOTREMOVE
/*
- * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
- * non-lru movable pages and hugepages). Will skip over most unmovable
+ * Scan pfn range [start,end) to find movable/migratable pages (LRU folios,
+ * hugetlb folios, movable_ops pages). Will skip over most unmovable
* pages (esp., pages that can be skipped when offlining), but bail out on
* definitely unmovable pages.
*
@@ -1759,9 +1759,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
struct folio *folio;
page = pfn_to_page(pfn);
- if (PageLRU(page))
- goto found;
- if (__PageMovable(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
goto found;
/*
diff --git a/mm/migrate.c b/mm/migrate.c
index 040484230aebc..587af35b7390d 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -94,7 +94,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
* Note that once a page has movable_ops, it will stay that way
* until the page was freed.
*/
- if (unlikely(!__PageMovable(page)))
+ if (unlikely(!page_has_movable_ops(page)))
goto out_putfolio;
/*
@@ -111,7 +111,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
if (unlikely(!folio_trylock(folio)))
goto out_putfolio;
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
if (PageIsolated(page))
goto out_no_isolated;
@@ -153,7 +153,7 @@ static void putback_movable_ops_page(struct page *page)
*/
struct folio *folio = page_folio(page);
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
folio_lock(folio);
page_movable_ops(page)->putback_page(page);
@@ -192,7 +192,7 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
{
int rc = MIGRATEPAGE_SUCCESS;
- VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 44e56d31cfeb1..a134b9fa9520e 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2005,7 +2005,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page,
* migration are movable. But we don't actually try
* isolating, as that would be expensive.
*/
- if (PageLRU(page) || __PageMovable(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
(*num_movable)++;
pfn++;
}
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index ece3bfc56bcd5..b97b965b3ed01 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -21,9 +21,9 @@
* consequently belong to a single zone.
*
* PageLRU check without isolation or lru_lock could race so that
- * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
- * check without lock_page also may miss some movable non-lru pages at
- * race condition. So you can't expect this function should be exact.
+ * MIGRATE_MOVABLE block might include unmovable pages. Similarly, pages
+ * with movable_ops can only be identified some time after they were
+ * allocated. So you can't expect this function should be exact.
*
* Returns a page without holding a reference. If the caller wants to
* dereference that page (e.g., dumping), it has to make sure that it
@@ -133,7 +133,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) && PageOffline(page))
continue;
- if (__PageMovable(page) || PageLRU(page))
+ if (PageLRU(page) || page_has_movable_ops(page))
continue;
/*
@@ -421,7 +421,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn,
* proper free and split handling for them.
*/
VM_WARN_ON_ONCE_PAGE(PageLRU(page), page);
- VM_WARN_ON_ONCE_PAGE(__PageMovable(page), page);
+ VM_WARN_ON_ONCE_PAGE(page_has_movable_ops(page), page);
goto failed;
}
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops()
2025-06-30 12:59 ` [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
@ 2025-07-01 10:59 ` Lorenzo Stoakes
2025-07-01 12:29 ` David Hildenbrand
2025-07-02 9:29 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 10:59 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:57PM +0200, David Hildenbrand wrote:
> Let's make it clearer that we are talking about movable_ops pages.
>
> While at it, convert a VM_BUG_ON to a VM_WARN_ON_ONCE_PAGE.
<3
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Great, love it.
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
I noticed that the Simplified Chinese documentation has references to this, but
again we have to defer to somebody fluent in this of course!
There is also one in mm/memory_hotplug.c, in scan_movable_pages():
/*
* PageOffline() pages that are not marked __PageMovable() and
Trivial one but might be worth fixing that up also?
> ---
> include/linux/migrate.h | 2 +-
> include/linux/page-flags.h | 2 +-
> mm/compaction.c | 7 ++-----
> mm/memory-failure.c | 4 ++--
> mm/memory_hotplug.c | 8 +++-----
> mm/migrate.c | 8 ++++----
> mm/page_alloc.c | 2 +-
> mm/page_isolation.c | 10 +++++-----
> 8 files changed, 19 insertions(+), 24 deletions(-)
>
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 25659a685e2aa..e04035f70e36f 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -115,7 +115,7 @@ static inline void __SetPageMovable(struct page *page,
> static inline
> const struct movable_operations *page_movable_ops(struct page *page)
> {
> - VM_BUG_ON(!__PageMovable(page));
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
>
> return (const struct movable_operations *)
> ((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 4fe5ee67535b2..c67163b73c5ec 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -750,7 +750,7 @@ static __always_inline bool __folio_test_movable(const struct folio *folio)
> PAGE_MAPPING_MOVABLE;
> }
>
> -static __always_inline bool __PageMovable(const struct page *page)
> +static __always_inline bool page_has_movable_ops(const struct page *page)
> {
> return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> PAGE_MAPPING_MOVABLE;
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 5c37373017014..41fd6a1fe9a33 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1056,11 +1056,8 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> * Skip any other type of page
> */
> if (!PageLRU(page)) {
> - /*
> - * __PageMovable can return false positive so we need
> - * to verify it under page_lock.
> - */
> - if (unlikely(__PageMovable(page)) &&
> + /* Isolation code will deal with any races. */
> + if (unlikely(page_has_movable_ops(page)) &&
> !PageIsolated(page)) {
> if (locked) {
> unlock_page_lruvec_irqrestore(locked, flags);
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index b91a33fb6c694..9e2cff1999347 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1388,8 +1388,8 @@ static inline bool HWPoisonHandlable(struct page *page, unsigned long flags)
> if (PageSlab(page))
> return false;
>
> - /* Soft offline could migrate non-LRU movable pages */
> - if ((flags & MF_SOFT_OFFLINE) && __PageMovable(page))
> + /* Soft offline could migrate movable_ops pages */
> + if ((flags & MF_SOFT_OFFLINE) && page_has_movable_ops(page))
> return true;
>
> return PageLRU(page) || is_free_buddy_page(page);
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 62d45752f9f44..5fad126949d08 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -1739,8 +1739,8 @@ bool mhp_range_allowed(u64 start, u64 size, bool need_mapping)
>
> #ifdef CONFIG_MEMORY_HOTREMOVE
> /*
> - * Scan pfn range [start,end) to find movable/migratable pages (LRU pages,
> - * non-lru movable pages and hugepages). Will skip over most unmovable
> + * Scan pfn range [start,end) to find movable/migratable pages (LRU and
> + * hugetlb folio, movable_ops pages). Will skip over most unmovable
> * pages (esp., pages that can be skipped when offlining), but bail out on
> * definitely unmovable pages.
> *
> @@ -1759,9 +1759,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
> struct folio *folio;
>
> page = pfn_to_page(pfn);
> - if (PageLRU(page))
> - goto found;
> - if (__PageMovable(page))
> + if (PageLRU(page) || page_has_movable_ops(page))
> goto found;
>
> /*
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 040484230aebc..587af35b7390d 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -94,7 +94,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> * Note that once a page has movable_ops, it will stay that way
> * until the page was freed.
> */
> - if (unlikely(!__PageMovable(page)))
> + if (unlikely(!page_has_movable_ops(page)))
> goto out_putfolio;
>
> /*
> @@ -111,7 +111,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> if (unlikely(!folio_trylock(folio)))
> goto out_putfolio;
>
> - VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> if (PageIsolated(page))
> goto out_no_isolated;
>
> @@ -153,7 +153,7 @@ static void putback_movable_ops_page(struct page *page)
> */
> struct folio *folio = page_folio(page);
>
> - VM_WARN_ON_ONCE_PAGE(!__PageMovable(page), page);
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
> folio_lock(folio);
> page_movable_ops(page)->putback_page(page);
> @@ -192,7 +192,7 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
> {
> int rc = MIGRATEPAGE_SUCCESS;
>
> - VM_WARN_ON_ONCE_PAGE(!__PageMovable(src), src);
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
> VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
> rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> if (rc == MIGRATEPAGE_SUCCESS)
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 44e56d31cfeb1..a134b9fa9520e 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -2005,7 +2005,7 @@ static bool prep_move_freepages_block(struct zone *zone, struct page *page,
> * migration are movable. But we don't actually try
> * isolating, as that would be expensive.
> */
> - if (PageLRU(page) || __PageMovable(page))
> + if (PageLRU(page) || page_has_movable_ops(page))
> (*num_movable)++;
> pfn++;
> }
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index ece3bfc56bcd5..b97b965b3ed01 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -21,9 +21,9 @@
> * consequently belong to a single zone.
> *
> * PageLRU check without isolation or lru_lock could race so that
> - * MIGRATE_MOVABLE block might include unmovable pages. And __PageMovable
> - * check without lock_page also may miss some movable non-lru pages at
> - * race condition. So you can't expect this function should be exact.
> + * MIGRATE_MOVABLE block might include unmovable pages. Similarly, pages
> + * with movable_ops can only be identified some time after they were
> + * allocated. So you can't expect this function should be exact.
> *
> * Returns a page without holding a reference. If the caller wants to
> * dereference that page (e.g., dumping), it has to make sure that it
> @@ -133,7 +133,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
> if ((mode == PB_ISOLATE_MODE_MEM_OFFLINE) && PageOffline(page))
> continue;
>
> - if (__PageMovable(page) || PageLRU(page))
> + if (PageLRU(page) || page_has_movable_ops(page))
> continue;
>
> /*
> @@ -421,7 +421,7 @@ static int isolate_single_pageblock(unsigned long boundary_pfn,
> * proper free and split handling for them.
> */
> VM_WARN_ON_ONCE_PAGE(PageLRU(page), page);
> - VM_WARN_ON_ONCE_PAGE(__PageMovable(page), page);
> + VM_WARN_ON_ONCE_PAGE(page_has_movable_ops(page), page);
>
> goto failed;
> }
> --
> 2.49.0
>
* Re: [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops()
2025-07-01 10:59 ` Lorenzo Stoakes
@ 2025-07-01 12:29 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:29 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 12:59, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:57PM +0200, David Hildenbrand wrote:
>> Let's make it clearer that we are talking about movable_ops pages.
>>
>> While at it, convert a VM_BUG_ON to a VM_WARN_ON_ONCE_PAGE.
>
> <3
>
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Great, love it.
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> I noticed that the Simplified Chinese documentation has references for this, but
> again we have to defer to somebody fluent in this of course!
>
> but also in mm/memory_hotplug.c in scan_movable_pages():
>
> /*
> * PageOffline() pages that are not marked __PageMovable() and
>
> Trivial one but might be worth fixing that up also?
Ah, yes, missed that, buried under the Chinese doc occurrences.
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5fad126949d08..69a636e20f7bb 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1763,7 +1763,7 @@ static int scan_movable_pages(unsigned long start, unsigned long end,
goto found;
/*
- * PageOffline() pages that are not marked __PageMovable() and
+ * PageOffline() pages that do not have movable_ops and
* have a reference count > 0 (after MEM_GOING_OFFLINE) are
* definitely unmovable. If their reference count would be 0,
* they could at least be skipped when offlining memory.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops()
2025-06-30 12:59 ` [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
2025-07-01 10:59 ` Lorenzo Stoakes
@ 2025-07-02 9:29 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 9:29 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:57PM +0200, David Hildenbrand wrote:
> Let's make it clearer that we are talking about movable_ops pages.
>
> While at it, convert a VM_BUG_ON to a VM_WARN_ON_ONCE_PAGE.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
With the comment update mentioned in the other thread,
LGTM
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (15 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 16/29] mm: rename __PageMovable() to page_has_movable_ops() David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 11:03 ` Lorenzo Stoakes
2025-07-02 9:48 ` Harry Yoo
2025-06-30 12:59 ` [PATCH v1 18/29] mm: remove __folio_test_movable() David Hildenbrand
` (12 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Currently, we only support migration of individual movable_ops pages, so
we cannot run into that case.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/page_isolation.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/page_isolation.c b/mm/page_isolation.c
index b97b965b3ed01..f72b6cd38b958 100644
--- a/mm/page_isolation.c
+++ b/mm/page_isolation.c
@@ -92,7 +92,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
h = size_to_hstate(folio_size(folio));
if (h && !hugepage_migration_supported(h))
return page;
- } else if (!folio_test_lru(folio) && !__folio_test_movable(folio)) {
+ } else if (!folio_test_lru(folio)) {
return page;
}
--
2.49.0
* Re: [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-06-30 12:59 ` [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
@ 2025-07-01 11:03 ` Lorenzo Stoakes
2025-07-01 12:32 ` David Hildenbrand
2025-07-02 9:48 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 11:03 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:58PM +0200, David Hildenbrand wrote:
> Currently, we only support migration of individual movable_ops pages, so
> we can not run into that.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Seems sensible, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Maybe worth adding a VM_WARN_ON_ONCE() just in case? Or do you think not worth it?
> ---
> mm/page_isolation.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/mm/page_isolation.c b/mm/page_isolation.c
> index b97b965b3ed01..f72b6cd38b958 100644
> --- a/mm/page_isolation.c
> +++ b/mm/page_isolation.c
> @@ -92,7 +92,7 @@ static struct page *has_unmovable_pages(unsigned long start_pfn, unsigned long e
> h = size_to_hstate(folio_size(folio));
> if (h && !hugepage_migration_supported(h))
> return page;
> - } else if (!folio_test_lru(folio) && !__folio_test_movable(folio)) {
> + } else if (!folio_test_lru(folio)) {
> return page;
> }
>
> --
> 2.49.0
>
* Re: [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-07-01 11:03 ` Lorenzo Stoakes
@ 2025-07-01 12:32 ` David Hildenbrand
2025-07-02 9:54 ` Harry Yoo
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:32 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 13:03, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 02:59:58PM +0200, David Hildenbrand wrote:
>> Currently, we only support migration of individual movable_ops pages, so
>> we can not run into that.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Seems sensible, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Maybe worth adding a VM_WARN_ON_ONCE() just in case? Or do you think not worth it?
Not for now I think. Whoever wants to support compound pages has to
fix up a bunch of other stuff first, before running into that one here.
So a full audit of all paths that handle page_has_movable_ops() is
required either way.
--
Cheers,
David / dhildenb
* Re: [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-07-01 12:32 ` David Hildenbrand
@ 2025-07-02 9:54 ` Harry Yoo
0 siblings, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 9:54 UTC (permalink / raw)
To: David Hildenbrand
Cc: Lorenzo Stoakes, linux-kernel, linux-mm, linux-doc, linuxppc-dev,
virtualization, linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Qi Zheng, Shakeel Butt,
Tangquan Zheng, Barry Song, Barry Song
On Tue, Jul 01, 2025 at 02:32:54PM +0200, David Hildenbrand wrote:
> On 01.07.25 13:03, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 02:59:58PM +0200, David Hildenbrand wrote:
> > > Currently, we only support migration of individual movable_ops pages, so
> > > we can not run into that.
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > Seems sensible, so:
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >
> > Maybe worth adding a VM_WARN_ON_ONCE() just in case? Or do you think not worth it?
>
> Not for now I think. Whoever wants to support compound pages has to fixup a
> bunch of other stuff first, before running into that one here.
>
> So a full audit of all paths that handle page_has_movable_ops() is required
> either way.
IIRC there was an RFC series last year [1] that adds support for
order > 0 pages in zsmalloc.
Cc'ing Barry and Tangquan in case it's still on their TODO list...
[1] https://lore.kernel.org/linux-mm/20241121222521.83458-2-21cnbao@gmail.com
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios
2025-06-30 12:59 ` [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
2025-07-01 11:03 ` Lorenzo Stoakes
@ 2025-07-02 9:48 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 9:48 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:58PM +0200, David Hildenbrand wrote:
> Currently, we only support migration of individual movable_ops pages, so
> we can not run into that.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Looks correct to me.
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 18/29] mm: remove __folio_test_movable()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (16 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 17/29] mm/page_isolation: drop __folio_test_movable() check for large folios David Hildenbrand
@ 2025-06-30 12:59 ` David Hildenbrand
2025-07-01 11:30 ` Lorenzo Stoakes
2025-07-02 10:20 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
` (11 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 12:59 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Convert to page_has_movable_ops(). While at it, clean up relevant code
a bit.
The data_race() in migrate_folio_unmap() is questionable: we already
hold a page reference, and concurrent modifications can no longer
happen (iow: __ClearPageMovable() no longer exists). Drop it for now,
we'll rework page_has_movable_ops() soon either way to no longer
rely on page->mapping.
Wherever we cast from folio to page now is a clear sign that this
code has to be decoupled.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 6 ------
mm/migrate.c | 43 ++++++++++++--------------------------
mm/vmscan.c | 6 ++++--
3 files changed, 17 insertions(+), 38 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index c67163b73c5ec..4c27ebb689e3c 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -744,12 +744,6 @@ static __always_inline bool PageAnon(const struct page *page)
return folio_test_anon(page_folio(page));
}
-static __always_inline bool __folio_test_movable(const struct folio *folio)
-{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_MOVABLE;
-}
-
static __always_inline bool page_has_movable_ops(const struct page *page)
{
return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
diff --git a/mm/migrate.c b/mm/migrate.c
index 587af35b7390d..15d3c1031530c 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -219,12 +219,7 @@ void putback_movable_pages(struct list_head *l)
continue;
}
list_del(&folio->lru);
- /*
- * We isolated non-lru movable folio so here we can use
- * __folio_test_movable because LRU folio's mapping cannot
- * have PAGE_MAPPING_MOVABLE.
- */
- if (unlikely(__folio_test_movable(folio))) {
+ if (unlikely(page_has_movable_ops(&folio->page))) {
putback_movable_ops_page(&folio->page);
} else {
node_stat_mod_folio(folio, NR_ISOLATED_ANON +
@@ -237,26 +232,20 @@ void putback_movable_pages(struct list_head *l)
/* Must be called with an elevated refcount on the non-hugetlb folio */
bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
{
- bool isolated, lru;
-
if (folio_test_hugetlb(folio))
return folio_isolate_hugetlb(folio, list);
- lru = !__folio_test_movable(folio);
- if (lru)
- isolated = folio_isolate_lru(folio);
- else
- isolated = isolate_movable_ops_page(&folio->page,
- ISOLATE_UNEVICTABLE);
-
- if (!isolated)
- return false;
-
- list_add(&folio->lru, list);
- if (lru)
+ if (page_has_movable_ops(&folio->page)) {
+ if (!isolate_movable_ops_page(&folio->page,
+ ISOLATE_UNEVICTABLE))
+ return false;
+ } else {
+ if (!folio_isolate_lru(folio))
+ return false;
node_stat_add_folio(folio, NR_ISOLATED_ANON +
folio_is_file_lru(folio));
-
+ }
+ list_add(&folio->lru, list);
return true;
}
@@ -1140,12 +1129,7 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
static void migrate_folio_done(struct folio *src,
enum migrate_reason reason)
{
- /*
- * Compaction can migrate also non-LRU pages which are
- * not accounted to NR_ISOLATED_*. They can be recognized
- * as __folio_test_movable
- */
- if (likely(!__folio_test_movable(src)) && reason != MR_DEMOTION)
+ if (likely(!page_has_movable_ops(&src->page)) && reason != MR_DEMOTION)
mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
folio_is_file_lru(src), -folio_nr_pages(src));
@@ -1164,7 +1148,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
int rc = -EAGAIN;
int old_page_state = 0;
struct anon_vma *anon_vma = NULL;
- bool is_lru = data_race(!__folio_test_movable(src));
bool locked = false;
bool dst_locked = false;
@@ -1265,7 +1248,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
goto out;
dst_locked = true;
- if (unlikely(!is_lru)) {
+ if (unlikely(page_has_movable_ops(&src->page))) {
__migrate_folio_record(dst, old_page_state, anon_vma);
return MIGRATEPAGE_UNMAP;
}
@@ -1330,7 +1313,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
prev = dst->lru.prev;
list_del(&dst->lru);
- if (unlikely(__folio_test_movable(src))) {
+ if (unlikely(page_has_movable_ops(&src->page))) {
rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
if (rc)
goto out;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 098bcc821fc74..103dfc729a823 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1658,9 +1658,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
unsigned int noreclaim_flag;
list_for_each_entry_safe(folio, next, folio_list, lru) {
+ /* TODO: these pages should not even appear in this list. */
+ if (page_has_movable_ops(&folio->page))
+ continue;
if (!folio_test_hugetlb(folio) && folio_is_file_lru(folio) &&
- !folio_test_dirty(folio) && !__folio_test_movable(folio) &&
- !folio_test_unevictable(folio)) {
+ !folio_test_dirty(folio) && !folio_test_unevictable(folio)) {
folio_clear_active(folio);
list_move(&folio->lru, &clean_folios);
}
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 18/29] mm: remove __folio_test_movable()
2025-06-30 12:59 ` [PATCH v1 18/29] mm: remove __folio_test_movable() David Hildenbrand
@ 2025-07-01 11:30 ` Lorenzo Stoakes
2025-07-01 12:36 ` David Hildenbrand
2025-07-02 10:20 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 11:30 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:59PM +0200, David Hildenbrand wrote:
> Convert to page_has_movable_ops(). While at it, cleanup relevant code
> a bit.
>
> The data_race() in migrate_folio_unmap() is questionable: we already
> hold a page reference, and concurrent modifications can no longer
> happen (iow: __ClearPageMovable() no longer exists). Drop it for now,
> we'll rework page_has_movable_ops() soon either way to no longer
> rely on page->mapping.
>
> Wherever we cast from folio to page now is a clear sign that this
> code has to be decoupled.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/page-flags.h | 6 ------
> mm/migrate.c | 43 ++++++++++++--------------------------
> mm/vmscan.c | 6 ++++--
> 3 files changed, 17 insertions(+), 38 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index c67163b73c5ec..4c27ebb689e3c 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -744,12 +744,6 @@ static __always_inline bool PageAnon(const struct page *page)
> return folio_test_anon(page_folio(page));
> }
>
> -static __always_inline bool __folio_test_movable(const struct folio *folio)
> -{
> - return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
> - PAGE_MAPPING_MOVABLE;
> -}
> -
Woah, wait, does this mean we can remove PAGE_MAPPING_MOVABLE??
Nice!
> static __always_inline bool page_has_movable_ops(const struct page *page)
> {
> return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 587af35b7390d..15d3c1031530c 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -219,12 +219,7 @@ void putback_movable_pages(struct list_head *l)
> continue;
> }
> list_del(&folio->lru);
> - /*
> - * We isolated non-lru movable folio so here we can use
> - * __folio_test_movable because LRU folio's mapping cannot
> - * have PAGE_MAPPING_MOVABLE.
> - */
So I hate these references to 'LRU' as in meaning 'pages that could be on the
LRU'.
> - if (unlikely(__folio_test_movable(folio))) {
> + if (unlikely(page_has_movable_ops(&folio->page))) {
> putback_movable_ops_page(&folio->page);
> } else {
> node_stat_mod_folio(folio, NR_ISOLATED_ANON +
> @@ -237,26 +232,20 @@ void putback_movable_pages(struct list_head *l)
> /* Must be called with an elevated refcount on the non-hugetlb folio */
> bool isolate_folio_to_list(struct folio *folio, struct list_head *list)
> {
> - bool isolated, lru;
> -
> if (folio_test_hugetlb(folio))
> return folio_isolate_hugetlb(folio, list);
>
> - lru = !__folio_test_movable(folio);
> - if (lru)
> - isolated = folio_isolate_lru(folio);
> - else
> - isolated = isolate_movable_ops_page(&folio->page,
> - ISOLATE_UNEVICTABLE);
> -
> - if (!isolated)
> - return false;
> -
> - list_add(&folio->lru, list);
> - if (lru)
> + if (page_has_movable_ops(&folio->page)) {
> + if (!isolate_movable_ops_page(&folio->page,
> + ISOLATE_UNEVICTABLE))
> + return false;
> + } else {
> + if (!folio_isolate_lru(folio))
> + return false;
> node_stat_add_folio(folio, NR_ISOLATED_ANON +
> folio_is_file_lru(folio));
> -
> + }
> + list_add(&folio->lru, list);
> return true;
> }
>
> @@ -1140,12 +1129,7 @@ static void migrate_folio_undo_dst(struct folio *dst, bool locked,
> static void migrate_folio_done(struct folio *src,
> enum migrate_reason reason)
> {
> - /*
> - * Compaction can migrate also non-LRU pages which are
> - * not accounted to NR_ISOLATED_*. They can be recognized
> - * as __folio_test_movable
> - */
> - if (likely(!__folio_test_movable(src)) && reason != MR_DEMOTION)
> + if (likely(!page_has_movable_ops(&src->page)) && reason != MR_DEMOTION)
> mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
> folio_is_file_lru(src), -folio_nr_pages(src));
>
> @@ -1164,7 +1148,6 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
> int rc = -EAGAIN;
> int old_page_state = 0;
> struct anon_vma *anon_vma = NULL;
> - bool is_lru = data_race(!__folio_test_movable(src));
> bool locked = false;
> bool dst_locked = false;
>
> @@ -1265,7 +1248,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
> goto out;
> dst_locked = true;
>
> - if (unlikely(!is_lru)) {
> + if (unlikely(page_has_movable_ops(&src->page))) {
> __migrate_folio_record(dst, old_page_state, anon_vma);
> return MIGRATEPAGE_UNMAP;
> }
> @@ -1330,7 +1313,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
> prev = dst->lru.prev;
> list_del(&dst->lru);
>
> - if (unlikely(__folio_test_movable(src))) {
> + if (unlikely(page_has_movable_ops(&src->page))) {
> rc = migrate_movable_ops_page(&dst->page, &src->page, mode);
> if (rc)
> goto out;
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 098bcc821fc74..103dfc729a823 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1658,9 +1658,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> unsigned int noreclaim_flag;
>
> list_for_each_entry_safe(folio, next, folio_list, lru) {
> + /* TODO: these pages should not even appear in this list. */
> + if (page_has_movable_ops(&folio->page))
VM_WARN_ON_ONCE()?
> + continue;
> if (!folio_test_hugetlb(folio) && folio_is_file_lru(folio) &&
> - !folio_test_dirty(folio) && !__folio_test_movable(folio) &&
> - !folio_test_unevictable(folio)) {
> + !folio_test_dirty(folio) && !folio_test_unevictable(folio)) {
> folio_clear_active(folio);
> list_move(&folio->lru, &clean_folios);
> }
> --
> 2.49.0
>
* Re: [PATCH v1 18/29] mm: remove __folio_test_movable()
2025-07-01 11:30 ` Lorenzo Stoakes
@ 2025-07-01 12:36 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:36 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
>> ---
>> include/linux/page-flags.h | 6 ------
>> mm/migrate.c | 43 ++++++++++++--------------------------
>> mm/vmscan.c | 6 ++++--
>> 3 files changed, 17 insertions(+), 38 deletions(-)
>>
>> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
>> index c67163b73c5ec..4c27ebb689e3c 100644
>> --- a/include/linux/page-flags.h
>> +++ b/include/linux/page-flags.h
>> @@ -744,12 +744,6 @@ static __always_inline bool PageAnon(const struct page *page)
>> return folio_test_anon(page_folio(page));
>> }
>>
>> -static __always_inline bool __folio_test_movable(const struct folio *folio)
>> -{
>> - return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
>> - PAGE_MAPPING_MOVABLE;
>> -}
>> -
>
> Woah, wait, does this mean we can remove PAGE_MAPPING_MOVABLE??
Jup :)
>
> Nice!
>
>> static __always_inline bool page_has_movable_ops(const struct page *page)
>> {
>> return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 587af35b7390d..15d3c1031530c 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -219,12 +219,7 @@ void putback_movable_pages(struct list_head *l)
>> continue;
>> }
>> list_del(&folio->lru);
>> - /*
>> - * We isolated non-lru movable folio so here we can use
>> - * __folio_test_movable because LRU folio's mapping cannot
>> - * have PAGE_MAPPING_MOVABLE.
>> - */
>
> So hate these references to 'LRU' as in meaning 'pages that could be on the
> LRU'.
Yeah, it's a historical thing.
But for anything we isolated, it had to be an LRU folio (PageLRU)
because that's how we were even able to isolate it ... from the LRU.
[...]
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 098bcc821fc74..103dfc729a823 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -1658,9 +1658,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
>> unsigned int noreclaim_flag;
>>
>> list_for_each_entry_safe(folio, next, folio_list, lru) {
>> + /* TODO: these pages should not even appear in this list. */
>> + if (page_has_movable_ops(&folio->page))
>
> VM_WARN_ON_ONCE()?
Well, no, it can currently still happen. But really, movable_ops pages
are not folios that could ever be reclaimed that way.
So the TODO highlights that movable_ops pages should never even be put
in a list (page->lru will go away).
--
Cheers,
David / dhildenb
* Re: [PATCH v1 18/29] mm: remove __folio_test_movable()
2025-06-30 12:59 ` [PATCH v1 18/29] mm: remove __folio_test_movable() David Hildenbrand
2025-07-01 11:30 ` Lorenzo Stoakes
@ 2025-07-02 10:20 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 10:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 02:59:59PM +0200, David Hildenbrand wrote:
> Convert to page_has_movable_ops(). While at it, cleanup relevant code
> a bit.
>
> The data_race() in migrate_folio_unmap() is questionable: we already
> hold a page reference, and concurrent modifications can no longer
> happen (iow: __ClearPageMovable() no longer exists). Drop it for now,
> we'll rework page_has_movable_ops() soon either way to no longer
> rely on page->mapping.
>
> Wherever we cast from folio to page now is a clear sign that this
> code has to be decoupled.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 098bcc821fc74..103dfc729a823 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1658,9 +1658,11 @@ unsigned int reclaim_clean_pages_from_list(struct zone *zone,
> unsigned int noreclaim_flag;
>
> list_for_each_entry_safe(folio, next, folio_list, lru) {
> + /* TODO: these pages should not even appear in this list. */
> + if (page_has_movable_ops(&folio->page))
> + continue;
Looking forward to seeing how this TODO will be addressed :)
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (17 preceding siblings ...)
2025-06-30 12:59 ` [PATCH v1 18/29] mm: remove __folio_test_movable() David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 12:12 ` Lorenzo Stoakes
2025-07-02 10:34 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
` (10 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
... instead, look them up statically based on the page type. Maybe in the
future we want a registration interface? At least for now, it can be
easily handled using the two page types that actually support page
migration.
The remaining usage of page->mapping is to flag such pages as actually
being movable (having movable_ops), which we will change next.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
include/linux/migrate.h | 14 ++------------
include/linux/zsmalloc.h | 2 ++
mm/balloon_compaction.c | 1 -
mm/compaction.c | 5 ++---
mm/migrate.c | 23 +++++++++++++++++++++++
mm/zpdesc.h | 5 ++---
mm/zsmalloc.c | 8 +++-----
8 files changed, 35 insertions(+), 25 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 9bce8e9f5018c..a8a1706cc56f3 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- __SetPageMovable(page, &balloon_mops);
+ __SetPageMovable(page);
set_page_private(page, (unsigned long)balloon);
list_add(&page->lru, &balloon->pages);
}
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index e04035f70e36f..6aece3f3c8be8 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -104,23 +104,13 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page, const struct movable_operations *ops);
+void __SetPageMovable(struct page *page);
#else
-static inline void __SetPageMovable(struct page *page,
- const struct movable_operations *ops)
+static inline void __SetPageMovable(struct page *page)
{
}
#endif
-static inline
-const struct movable_operations *page_movable_ops(struct page *page)
-{
- VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
-
- return (const struct movable_operations *)
- ((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
-}
-
#ifdef CONFIG_NUMA_BALANCING
int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node);
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 13e9cc5490f71..f3ccff2d966cd 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -46,4 +46,6 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
void zs_obj_write(struct zs_pool *pool, unsigned long handle,
void *handle_mem, size_t mem_len);
+extern const struct movable_operations zsmalloc_mops;
+
#endif
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index e4f1a122d786b..2a4a649805c11 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -253,6 +253,5 @@ const struct movable_operations balloon_mops = {
.isolate_page = balloon_page_isolate,
.putback_page = balloon_page_putback,
};
-EXPORT_SYMBOL_GPL(balloon_mops);
#endif /* CONFIG_BALLOON_COMPACTION */
diff --git a/mm/compaction.c b/mm/compaction.c
index 41fd6a1fe9a33..348eb754cb227 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,11 +114,10 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page, const struct movable_operations *mops)
+void __SetPageMovable(struct page *page)
{
VM_BUG_ON_PAGE(!PageLocked(page), page);
- VM_BUG_ON_PAGE((unsigned long)mops & PAGE_MAPPING_MOVABLE, page);
- page->mapping = (void *)((unsigned long)mops | PAGE_MAPPING_MOVABLE);
+ page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
}
EXPORT_SYMBOL(__SetPageMovable);
diff --git a/mm/migrate.c b/mm/migrate.c
index 15d3c1031530c..c6c9998014ec8 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -43,6 +43,8 @@
#include <linux/sched/sysctl.h>
#include <linux/memory-tiers.h>
#include <linux/pagewalk.h>
+#include <linux/balloon_compaction.h>
+#include <linux/zsmalloc.h>
#include <asm/tlbflush.h>
@@ -51,6 +53,27 @@
#include "internal.h"
#include "swap.h"
+static const struct movable_operations *page_movable_ops(struct page *page)
+{
+ VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
+
+ /*
+ * If we enable page migration for a page of a certain type by marking
+ * it as movable, the page type must be sticky until the page gets freed
+ * back to the buddy.
+ */
+#ifdef CONFIG_BALLOON_COMPACTION
+ if (PageOffline(page))
+ /* Only balloon compaction sets PageOffline pages movable. */
+ return &balloon_mops;
+#endif /* CONFIG_BALLOON_COMPACTION */
+#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
+ if (PageZsmalloc(page))
+ return &zsmalloc_mops;
+#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
+ return NULL;
+}
+
/**
* isolate_movable_ops_page - isolate a movable_ops page for migration
* @page: The page.
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 5763f36039736..6855d9e2732d8 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -152,10 +152,9 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
return page_zpdesc(pfn_to_page(pfn));
}
-static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
- const struct movable_operations *mops)
+static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
{
- __SetPageMovable(zpdesc_page(zpdesc), mops);
+ __SetPageMovable(zpdesc_page(zpdesc));
}
static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 72c2b7562c511..7192196b9421d 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1684,8 +1684,6 @@ static void lock_zspage(struct zspage *zspage)
#ifdef CONFIG_COMPACTION
-static const struct movable_operations zsmalloc_mops;
-
static void replace_sub_page(struct size_class *class, struct zspage *zspage,
struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc)
{
@@ -1708,7 +1706,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
set_first_obj_offset(newzpdesc, first_obj_offset);
if (unlikely(ZsHugePage(zspage)))
newzpdesc->handle = oldzpdesc->handle;
- __zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
+ __zpdesc_set_movable(newzpdesc);
}
static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
@@ -1815,7 +1813,7 @@ static void zs_page_putback(struct page *page)
{
}
-static const struct movable_operations zsmalloc_mops = {
+const struct movable_operations zsmalloc_mops = {
.isolate_page = zs_page_isolate,
.migrate_page = zs_page_migrate,
.putback_page = zs_page_putback,
@@ -1878,7 +1876,7 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
do {
WARN_ON(!zpdesc_trylock(zpdesc));
- __zpdesc_set_movable(zpdesc, &zsmalloc_mops);
+ __zpdesc_set_movable(zpdesc);
zpdesc_unlock(zpdesc);
} while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
}
--
2.49.0
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-06-30 13:00 ` [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
@ 2025-07-01 12:12 ` Lorenzo Stoakes
2025-07-01 12:41 ` David Hildenbrand
2025-07-02 10:34 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 12:12 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
> ... instead, look them up statically based on the page type. Maybe in the
> future we want a registration interface? At least for now, it can be
> easily handled using the two page types that actually support page
> migration.
>
> The remaining usage of page->mapping is to flag such pages as actually
> being movable (having movable_ops), which we will change next.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
See comment below, this feels iffy in the long run but ok as an interim measure.
So:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 2 +-
> include/linux/migrate.h | 14 ++------------
> include/linux/zsmalloc.h | 2 ++
> mm/balloon_compaction.c | 1 -
> mm/compaction.c | 5 ++---
> mm/migrate.c | 23 +++++++++++++++++++++++
> mm/zpdesc.h | 5 ++---
> mm/zsmalloc.c | 8 +++-----
> 8 files changed, 35 insertions(+), 25 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 9bce8e9f5018c..a8a1706cc56f3 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> struct page *page)
> {
> __SetPageOffline(page);
> - __SetPageMovable(page, &balloon_mops);
> + __SetPageMovable(page);
> set_page_private(page, (unsigned long)balloon);
> list_add(&page->lru, &balloon->pages);
> }
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index e04035f70e36f..6aece3f3c8be8 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -104,23 +104,13 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
> #endif /* CONFIG_MIGRATION */
>
> #ifdef CONFIG_COMPACTION
> -void __SetPageMovable(struct page *page, const struct movable_operations *ops);
> +void __SetPageMovable(struct page *page);
> #else
> -static inline void __SetPageMovable(struct page *page,
> - const struct movable_operations *ops)
> +static inline void __SetPageMovable(struct page *page)
> {
> }
> #endif
>
> -static inline
> -const struct movable_operations *page_movable_ops(struct page *page)
> -{
> - VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> -
> - return (const struct movable_operations *)
> - ((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
> -}
> -
> #ifdef CONFIG_NUMA_BALANCING
> int migrate_misplaced_folio_prepare(struct folio *folio,
> struct vm_area_struct *vma, int node);
> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
> index 13e9cc5490f71..f3ccff2d966cd 100644
> --- a/include/linux/zsmalloc.h
> +++ b/include/linux/zsmalloc.h
> @@ -46,4 +46,6 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
> void zs_obj_write(struct zs_pool *pool, unsigned long handle,
> void *handle_mem, size_t mem_len);
>
> +extern const struct movable_operations zsmalloc_mops;
> +
> #endif
> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
> index e4f1a122d786b..2a4a649805c11 100644
> --- a/mm/balloon_compaction.c
> +++ b/mm/balloon_compaction.c
> @@ -253,6 +253,5 @@ const struct movable_operations balloon_mops = {
> .isolate_page = balloon_page_isolate,
> .putback_page = balloon_page_putback,
> };
> -EXPORT_SYMBOL_GPL(balloon_mops);
>
> #endif /* CONFIG_BALLOON_COMPACTION */
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 41fd6a1fe9a33..348eb754cb227 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -114,11 +114,10 @@ static unsigned long release_free_list(struct list_head *freepages)
> }
>
> #ifdef CONFIG_COMPACTION
> -void __SetPageMovable(struct page *page, const struct movable_operations *mops)
> +void __SetPageMovable(struct page *page)
> {
> VM_BUG_ON_PAGE(!PageLocked(page), page);
> - VM_BUG_ON_PAGE((unsigned long)mops & PAGE_MAPPING_MOVABLE, page);
> - page->mapping = (void *)((unsigned long)mops | PAGE_MAPPING_MOVABLE);
> + page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
> }
> EXPORT_SYMBOL(__SetPageMovable);
>
> diff --git a/mm/migrate.c b/mm/migrate.c
> index 15d3c1031530c..c6c9998014ec8 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -43,6 +43,8 @@
> #include <linux/sched/sysctl.h>
> #include <linux/memory-tiers.h>
> #include <linux/pagewalk.h>
> +#include <linux/balloon_compaction.h>
> +#include <linux/zsmalloc.h>
>
> #include <asm/tlbflush.h>
>
> @@ -51,6 +53,27 @@
> #include "internal.h"
> #include "swap.h"
>
> +static const struct movable_operations *page_movable_ops(struct page *page)
> +{
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> +
> + /*
> + * If we enable page migration for a page of a certain type by marking
> + * it as movable, the page type must be sticky until the page gets freed
> + * back to the buddy.
> + */
Ah now this makes more sense...
> +#ifdef CONFIG_BALLOON_COMPACTION
> + if (PageOffline(page))
> + /* Only balloon compaction sets PageOffline pages movable. */
> + return &balloon_mops;
So it's certain that if we try to invoke movable ops, and it's the balloon
compaction case, the page will be offline?
> +#endif /* CONFIG_BALLOON_COMPACTION */
> +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
> + if (PageZsmalloc(page))
And the same question, but for zsmalloc.
> + return &zsmalloc_mops;
> +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
> + return NULL;
> +}
This is kind of sketchy as it's baking in assumptions implicitly, so I hope we
can find an improved way of doing this later, even if it's about providing
e.g. is_balloon_movable_ops_page() and is_zsmalloc_movable_ops_page() predicates
that abstract this code + placing them in the relevant code so it's at least
obvious to people working on this stuff that this needs to be considered.
But ok as a means of getting away from having to have the hook object encoded.
> +
> /**
> * isolate_movable_ops_page - isolate a movable_ops page for migration
> * @page: The page.
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index 5763f36039736..6855d9e2732d8 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -152,10 +152,9 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
> return page_zpdesc(pfn_to_page(pfn));
> }
>
> -static inline void __zpdesc_set_movable(struct zpdesc *zpdesc,
> - const struct movable_operations *mops)
> +static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
> {
> - __SetPageMovable(zpdesc_page(zpdesc), mops);
> + __SetPageMovable(zpdesc_page(zpdesc));
> }
>
> static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 72c2b7562c511..7192196b9421d 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -1684,8 +1684,6 @@ static void lock_zspage(struct zspage *zspage)
>
> #ifdef CONFIG_COMPACTION
>
> -static const struct movable_operations zsmalloc_mops;
> -
> static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> struct zpdesc *newzpdesc, struct zpdesc *oldzpdesc)
> {
> @@ -1708,7 +1706,7 @@ static void replace_sub_page(struct size_class *class, struct zspage *zspage,
> set_first_obj_offset(newzpdesc, first_obj_offset);
> if (unlikely(ZsHugePage(zspage)))
> newzpdesc->handle = oldzpdesc->handle;
> - __zpdesc_set_movable(newzpdesc, &zsmalloc_mops);
> + __zpdesc_set_movable(newzpdesc);
> }
>
> static bool zs_page_isolate(struct page *page, isolate_mode_t mode)
> @@ -1815,7 +1813,7 @@ static void zs_page_putback(struct page *page)
> {
> }
>
> -static const struct movable_operations zsmalloc_mops = {
> +const struct movable_operations zsmalloc_mops = {
> .isolate_page = zs_page_isolate,
> .migrate_page = zs_page_migrate,
> .putback_page = zs_page_putback,
> @@ -1878,7 +1876,7 @@ static void SetZsPageMovable(struct zs_pool *pool, struct zspage *zspage)
>
> do {
> WARN_ON(!zpdesc_trylock(zpdesc));
> - __zpdesc_set_movable(zpdesc, &zsmalloc_mops);
> + __zpdesc_set_movable(zpdesc);
> zpdesc_unlock(zpdesc);
> } while ((zpdesc = get_next_zpdesc(zpdesc)) != NULL);
> }
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-07-01 12:12 ` Lorenzo Stoakes
@ 2025-07-01 12:41 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:41 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 14:12, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
>> ... instead, look them up statically based on the page type. Maybe in the
>> future we want a registration interface? At least for now, it can be
>> easily handled using the two page types that actually support page
>> migration.
>>
>> The remaining usage of page->mapping is to flag such pages as actually
>> being movable (having movable_ops), which we will change next.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> See comment below, this feels iffy in the long run but ok as an interim measure.
>
> So:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
>> ---
>> include/linux/balloon_compaction.h | 2 +-
>> include/linux/migrate.h | 14 ++------------
>> include/linux/zsmalloc.h | 2 ++
>> mm/balloon_compaction.c | 1 -
>> mm/compaction.c | 5 ++---
>> mm/migrate.c | 23 +++++++++++++++++++++++
>> mm/zpdesc.h | 5 ++---
>> mm/zsmalloc.c | 8 +++-----
>> 8 files changed, 35 insertions(+), 25 deletions(-)
>>
>> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
>> index 9bce8e9f5018c..a8a1706cc56f3 100644
>> --- a/include/linux/balloon_compaction.h
>> +++ b/include/linux/balloon_compaction.h
>> @@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
>> struct page *page)
>> {
>> __SetPageOffline(page);
>> - __SetPageMovable(page, &balloon_mops);
>> + __SetPageMovable(page);
>> set_page_private(page, (unsigned long)balloon);
>> list_add(&page->lru, &balloon->pages);
>> }
>> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
>> index e04035f70e36f..6aece3f3c8be8 100644
>> --- a/include/linux/migrate.h
>> +++ b/include/linux/migrate.h
>> @@ -104,23 +104,13 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>> #endif /* CONFIG_MIGRATION */
>>
>> #ifdef CONFIG_COMPACTION
>> -void __SetPageMovable(struct page *page, const struct movable_operations *ops);
>> +void __SetPageMovable(struct page *page);
>> #else
>> -static inline void __SetPageMovable(struct page *page,
>> - const struct movable_operations *ops)
>> +static inline void __SetPageMovable(struct page *page)
>> {
>> }
>> #endif
>>
>> -static inline
>> -const struct movable_operations *page_movable_ops(struct page *page)
>> -{
>> - VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
>> -
>> - return (const struct movable_operations *)
>> - ((unsigned long)page->mapping - PAGE_MAPPING_MOVABLE);
>> -}
>> -
>> #ifdef CONFIG_NUMA_BALANCING
>> int migrate_misplaced_folio_prepare(struct folio *folio,
>> struct vm_area_struct *vma, int node);
>> diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
>> index 13e9cc5490f71..f3ccff2d966cd 100644
>> --- a/include/linux/zsmalloc.h
>> +++ b/include/linux/zsmalloc.h
>> @@ -46,4 +46,6 @@ void zs_obj_read_end(struct zs_pool *pool, unsigned long handle,
>> void zs_obj_write(struct zs_pool *pool, unsigned long handle,
>> void *handle_mem, size_t mem_len);
>>
>> +extern const struct movable_operations zsmalloc_mops;
>> +
>> #endif
>> diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
>> index e4f1a122d786b..2a4a649805c11 100644
>> --- a/mm/balloon_compaction.c
>> +++ b/mm/balloon_compaction.c
>> @@ -253,6 +253,5 @@ const struct movable_operations balloon_mops = {
>> .isolate_page = balloon_page_isolate,
>> .putback_page = balloon_page_putback,
>> };
>> -EXPORT_SYMBOL_GPL(balloon_mops);
>>
>> #endif /* CONFIG_BALLOON_COMPACTION */
>> diff --git a/mm/compaction.c b/mm/compaction.c
>> index 41fd6a1fe9a33..348eb754cb227 100644
>> --- a/mm/compaction.c
>> +++ b/mm/compaction.c
>> @@ -114,11 +114,10 @@ static unsigned long release_free_list(struct list_head *freepages)
>> }
>>
>> #ifdef CONFIG_COMPACTION
>> -void __SetPageMovable(struct page *page, const struct movable_operations *mops)
>> +void __SetPageMovable(struct page *page)
>> {
>> VM_BUG_ON_PAGE(!PageLocked(page), page);
>> - VM_BUG_ON_PAGE((unsigned long)mops & PAGE_MAPPING_MOVABLE, page);
>> - page->mapping = (void *)((unsigned long)mops | PAGE_MAPPING_MOVABLE);
>> + page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
>> }
>> EXPORT_SYMBOL(__SetPageMovable);
>>
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 15d3c1031530c..c6c9998014ec8 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -43,6 +43,8 @@
>> #include <linux/sched/sysctl.h>
>> #include <linux/memory-tiers.h>
>> #include <linux/pagewalk.h>
>> +#include <linux/balloon_compaction.h>
>> +#include <linux/zsmalloc.h>
>>
>> #include <asm/tlbflush.h>
>>
>> @@ -51,6 +53,27 @@
>> #include "internal.h"
>> #include "swap.h"
>>
>> +static const struct movable_operations *page_movable_ops(struct page *page)
>> +{
>> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
>> +
>> + /*
>> + * If we enable page migration for a page of a certain type by marking
>> + * it as movable, the page type must be sticky until the page gets freed
>> + * back to the buddy.
>> + */
>
> Ah now this makes more sense...
>
>> +#ifdef CONFIG_BALLOON_COMPACTION
>> + if (PageOffline(page))
>> + /* Only balloon compaction sets PageOffline pages movable. */
>> + return &balloon_mops;
>
> So it's certain that if we try to invoke movable ops, and it's the balloon
> compaction case, the page will be offline?
Yes. The page must be marked as having movable_ops by the user. The next
patch reworks that as well.
A PageOffline page without movable_ops will never end up here
(page_has_movable_ops() == false).
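The invariant described here can be modeled in a few lines of standalone C. This is a sketch with made-up names (`page_model`, `lookup_mops`), not the kernel implementation; the point is only that page_movable_ops() is never reached unless page_has_movable_ops() returned true, so an unmarked PageOffline page can never be dispatched to balloon_mops:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Userspace model of the dispatch above -- not kernel code. */
struct page_model {
	bool offline;		/* models PageOffline() */
	bool zsmalloc;		/* models PageZsmalloc() */
	bool movable_ops;	/* models "marked movable by its owner" */
};

static const char balloon_mops[] = "balloon_mops";
static const char zsmalloc_mops[] = "zsmalloc_mops";

static bool page_has_movable_ops(const struct page_model *p)
{
	return p->movable_ops;
}

static const char *page_movable_ops(const struct page_model *p)
{
	if (p->offline)		/* only balloon compaction marks these movable */
		return balloon_mops;
	if (p->zsmalloc)
		return zsmalloc_mops;
	return NULL;
}

/* How the migration core dispatches: the type check is gated. */
static const char *lookup_mops(const struct page_model *p)
{
	if (!page_has_movable_ops(p))
		return NULL;	/* page_movable_ops() is never reached */
	return page_movable_ops(p);
}
```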
>
>> +#endif /* CONFIG_BALLOON_COMPACTION */
>> +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
>> + if (PageZsmalloc(page))
>
> And same question only for ZS malloc.
Same thing.
>
>> + return &zsmalloc_mops;
>> +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
>> + return NULL;
>> +}
>
> This is kind of sketchy as it's baking in assumptions implicitly, so I hope we
> can find an improved way of doing this later, even if it's about providing
> e.g. is_balloon_movable_ops_page() and is_zsmalloc_movable_ops_page() predicates
> that abstract this code + placing them in the relevant code so it's at least
> obvious to people working on this stuff that this needs to be considered.
>
> But ok as a means of getting away from having to have the hook object encoded.
Yeah, not sure yet how to clean that up in the future. As I stated
somewhere, maybe we just want a registration interface to handle a
specific page type. But for handling the two known in-tree users, this
should get us going.
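The registration interface idea floated here could look roughly as follows. This is a hypothetical userspace sketch of the concept only; no such kernel API exists, and every name in it (enum values, mops_table, register/lookup helpers) is invented:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical movable_ops registration interface: each page type that
 * supports migration registers its ops once, and the migration core
 * looks them up by type instead of hardcoding balloon/zsmalloc. */
enum movable_page_type { MPT_OFFLINE, MPT_ZSMALLOC, MPT_MAX };

struct movable_operations { const char *name; };

static const struct movable_operations *mops_table[MPT_MAX];

static int register_movable_ops(enum movable_page_type type,
				const struct movable_operations *mops)
{
	if (type >= MPT_MAX || mops_table[type])
		return -1;	/* invalid type or already registered */
	mops_table[type] = mops;
	return 0;
}

static const struct movable_operations *
lookup_movable_ops(enum movable_page_type type)
{
	return type < MPT_MAX ? mops_table[type] : NULL;
}
```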
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-06-30 13:00 ` [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
2025-07-01 12:12 ` Lorenzo Stoakes
@ 2025-07-02 10:34 ` Harry Yoo
2025-07-02 11:04 ` David Hildenbrand
1 sibling, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 10:34 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
> ... instead, look them up statically based on the page type. Maybe in the
> future we want a registration interface? At least for now, it can be
> easily handled using the two page types that actually support page
> migration.
>
> The remaining usage of page->mapping is to flag such pages as actually
> being movable (having movable_ops), which we will change next.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> +static const struct movable_operations *page_movable_ops(struct page *page)
> +{
> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> +
> + /*
> + * If we enable page migration for a page of a certain type by marking
> + * it as movable, the page type must be sticky until the page gets freed
> + * back to the buddy.
> + */
> +#ifdef CONFIG_BALLOON_COMPACTION
> + if (PageOffline(page))
> + /* Only balloon compaction sets PageOffline pages movable. */
> + return &balloon_mops;
> +#endif /* CONFIG_BALLOON_COMPACTION */
> +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
> + if (PageZsmalloc(page))
> + return &zsmalloc_mops;
> +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
What happens if:
CONFIG_ZSMALLOC=y
CONFIG_TRANSPARENT_HUGEPAGE=n
CONFIG_COMPACTION=n
CONFIG_MIGRATION=y
?
> + return NULL;
> +}
> +
> /**
> * isolate_movable_ops_page - isolate a movable_ops page for migration
> * @page: The page.
Otherwise LGTM.
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-07-02 10:34 ` Harry Yoo
@ 2025-07-02 11:04 ` David Hildenbrand
2025-07-02 11:43 ` Harry Yoo
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 11:04 UTC (permalink / raw)
To: Harry Yoo
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On 02.07.25 12:34, Harry Yoo wrote:
> On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
>> ... instead, look them up statically based on the page type. Maybe in the
>> future we want a registration interface? At least for now, it can be
>> easily handled using the two page types that actually support page
>> migration.
>>
>> The remaining usage of page->mapping is to flag such pages as actually
>> being movable (having movable_ops), which we will change next.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>
>> +static const struct movable_operations *page_movable_ops(struct page *page)
>> +{
>> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
>> +
>> + /*
>> + * If we enable page migration for a page of a certain type by marking
>> + * it as movable, the page type must be sticky until the page gets freed
>> + * back to the buddy.
>> + */
>> +#ifdef CONFIG_BALLOON_COMPACTION
>> + if (PageOffline(page))
>> + /* Only balloon compaction sets PageOffline pages movable. */
>> + return &balloon_mops;
>> +#endif /* CONFIG_BALLOON_COMPACTION */
>> +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
>> + if (PageZsmalloc(page))
>> + return &zsmalloc_mops;
>> +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
>
> What happens if:
> CONFIG_ZSMALLOC=y
> CONFIG_TRANSPARENT_HUGEPAGE=n
> CONFIG_COMPACTION=n
> CONFIG_MIGRATION=y
Pages are never allocated from ZONE_MOVABLE/CMA and are not marked as
having movable_ops, so we never end up in this function. See how
zsmalloc.c deals with CONFIG_COMPACTION, especially how
SetZsPageMovable() is a NOP without it.
As a side note, both should probably be moved from COMPACTION to
MIGRATION, although in practice anybody enabling CONFIG_MIGRATION
likely also enables CONFIG_COMPACTION.
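The #ifdef pattern referred to here can be sketched in standalone C (compiled below without CONFIG_COMPACTION defined; the names mirror mm/zsmalloc.c but this is a model, not the real code):

```c
#include <assert.h>

/* Models how mm/zsmalloc.c stubs out SetZsPageMovable(): without
 * CONFIG_COMPACTION the function compiles to a no-op, so zsmalloc
 * pages are never marked as having movable_ops in such builds. */
static int movable_marked;	/* stands in for SetPageMovableOps() */

#ifdef CONFIG_COMPACTION
static void SetZsPageMovable(void)
{
	movable_marked = 1;
}
#else
static inline void SetZsPageMovable(void)
{
	/* NOP: migration of zsmalloc pages is not supported here */
}
#endif
```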
--
Cheers,
David / dhildenb
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-07-02 11:04 ` David Hildenbrand
@ 2025-07-02 11:43 ` Harry Yoo
2025-07-02 11:51 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 11:43 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Wed, Jul 02, 2025 at 01:04:05PM +0200, David Hildenbrand wrote:
> On 02.07.25 12:34, Harry Yoo wrote:
> > On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
> > > ... instead, look them up statically based on the page type. Maybe in the
> > > future we want a registration interface? At least for now, it can be
> > > easily handled using the two page types that actually support page
> > > migration.
> > >
> > > The remaining usage of page->mapping is to flag such pages as actually
> > > being movable (having movable_ops), which we will change next.
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> >
> > > +static const struct movable_operations *page_movable_ops(struct page *page)
> > > +{
> > > + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> > > +
> > > + /*
> > > + * If we enable page migration for a page of a certain type by marking
> > > + * it as movable, the page type must be sticky until the page gets freed
> > > + * back to the buddy.
> > > + */
> > > +#ifdef CONFIG_BALLOON_COMPACTION
> > > + if (PageOffline(page))
> > > + /* Only balloon compaction sets PageOffline pages movable. */
> > > + return &balloon_mops;
> > > +#endif /* CONFIG_BALLOON_COMPACTION */
> > > +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
> > > + if (PageZsmalloc(page))
> > > + return &zsmalloc_mops;
> > > +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
> >
> > What happens if:
> > CONFIG_ZSMALLOC=y
> > CONFIG_TRANSPARENT_HUGEPAGE=n
> > CONFIG_COMPACTION=n
> > CONFIG_MIGRATION=y
>
> Pages are never allocated from ZONE_MOVABLE/CMA and
I don't understand how that's true, neither zram nor zsmalloc clears
__GFP_MOVABLE when CONFIG_COMPACTION=n?
...Or perhaps I'm still missing some pieces ;)
> are not marked as having movable_ops, so we never end up in this function.
Right.
> See how zsmalloc.c deals with CONFIG_COMPACTION, especially how
> SetZsPageMovable() is a NOP without it.
Right.
Now I see what I was missing in the previous reply.
Thanks!
Please feel free to add:
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-07-02 11:43 ` Harry Yoo
@ 2025-07-02 11:51 ` David Hildenbrand
2025-07-02 11:57 ` Harry Yoo
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 11:51 UTC (permalink / raw)
To: Harry Yoo
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On 02.07.25 13:43, Harry Yoo wrote:
> On Wed, Jul 02, 2025 at 01:04:05PM +0200, David Hildenbrand wrote:
>> On 02.07.25 12:34, Harry Yoo wrote:
>>> On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
>>>> ... instead, look them up statically based on the page type. Maybe in the
>>>> future we want a registration interface? At least for now, it can be
>>>> easily handled using the two page types that actually support page
>>>> migration.
>>>>
>>>> The remaining usage of page->mapping is to flag such pages as actually
>>>> being movable (having movable_ops), which we will change next.
>>>>
>>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>
>>>> +static const struct movable_operations *page_movable_ops(struct page *page)
>>>> +{
>>>> + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
>>>> +
>>>> + /*
>>>> + * If we enable page migration for a page of a certain type by marking
>>>> + * it as movable, the page type must be sticky until the page gets freed
>>>> + * back to the buddy.
>>>> + */
>>>> +#ifdef CONFIG_BALLOON_COMPACTION
>>>> + if (PageOffline(page))
>>>> + /* Only balloon compaction sets PageOffline pages movable. */
>>>> + return &balloon_mops;
>>>> +#endif /* CONFIG_BALLOON_COMPACTION */
>>>> +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
>>>> + if (PageZsmalloc(page))
>>>> + return &zsmalloc_mops;
>>>> +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
>>>
>>> What happens if:
>>> CONFIG_ZSMALLOC=y
>>> CONFIG_TRANSPARENT_HUGEPAGE=n
>>> CONFIG_COMPACTION=n
>>> CONFIG_MIGRATION=y
>>
>> Pages are never allocated from ZONE_MOVABLE/CMA and
>
> I don't understand how that's true, neither zram nor zsmalloc clears
> __GFP_MOVABLE when CONFIG_COMPACTION=n?
>
> ...Or perhaps I'm still missing some pieces ;)
You might have found a bug in zsmalloc then :) Without support for compaction, we
must clear __GFP_MOVABLE in alloc_zpdesc() I assume.
Do you have the capacity to look into that and send a fix if really broken?
In balloon compaction code we properly handle that.
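The suggested fix would amount to masking the flag at allocation time. A rough standalone sketch, with an illustrative bit value and an invented helper name, not an actual patch:

```c
#include <assert.h>

typedef unsigned int gfp_t;
#define __GFP_MOVABLE 0x08u	/* illustrative bit, not the real value */

/*
 * Models the fix suggested for alloc_zpdesc(): when the kernel is built
 * without CONFIG_COMPACTION, zsmalloc pages can never be migrated, so
 * __GFP_MOVABLE must be masked off before allocating them.
 */
static gfp_t zsmalloc_sanitize_gfp(gfp_t gfp, int config_compaction)
{
	if (!config_compaction)
		gfp &= ~__GFP_MOVABLE;
	return gfp;
}
```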
>
>> are not marked as having movable_ops, so we never end up in this function.
>
> Right.
>
>> See how zsmalloc.c deals with CONFIG_COMPACTION, especially how
>> SetZsPageMovable() is a NOP without it.
>
> Right.
>
> Now I see what I was missing in the previous reply.
> Thanks!
>
> Please feel free to add:
>
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping
2025-07-02 11:51 ` David Hildenbrand
@ 2025-07-02 11:57 ` Harry Yoo
0 siblings, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 11:57 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Wed, Jul 02, 2025 at 01:51:52PM +0200, David Hildenbrand wrote:
> On 02.07.25 13:43, Harry Yoo wrote:
> > On Wed, Jul 02, 2025 at 01:04:05PM +0200, David Hildenbrand wrote:
> > > On 02.07.25 12:34, Harry Yoo wrote:
> > > > On Mon, Jun 30, 2025 at 03:00:00PM +0200, David Hildenbrand wrote:
> > > > > ... instead, look them up statically based on the page type. Maybe in the
> > > > > future we want a registration interface? At least for now, it can be
> > > > > easily handled using the two page types that actually support page
> > > > > migration.
> > > > >
> > > > > The remaining usage of page->mapping is to flag such pages as actually
> > > > > being movable (having movable_ops), which we will change next.
> > > > >
> > > > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > > > ---
> > > >
> > > > > +static const struct movable_operations *page_movable_ops(struct page *page)
> > > > > +{
> > > > > + VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> > > > > +
> > > > > + /*
> > > > > + * If we enable page migration for a page of a certain type by marking
> > > > > + * it as movable, the page type must be sticky until the page gets freed
> > > > > + * back to the buddy.
> > > > > + */
> > > > > +#ifdef CONFIG_BALLOON_COMPACTION
> > > > > + if (PageOffline(page))
> > > > > + /* Only balloon compaction sets PageOffline pages movable. */
> > > > > + return &balloon_mops;
> > > > > +#endif /* CONFIG_BALLOON_COMPACTION */
> > > > > +#if defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION)
> > > > > + if (PageZsmalloc(page))
> > > > > + return &zsmalloc_mops;
> > > > > +#endif /* defined(CONFIG_ZSMALLOC) && defined(CONFIG_COMPACTION) */
> > > >
> > > > What happens if:
> > > > CONFIG_ZSMALLOC=y
> > > > CONFIG_TRANSPARENT_HUGEPAGE=n
> > > > CONFIG_COMPACTION=n
> > > > CONFIG_MIGRATION=y
> > >
> > > Pages are never allocated from ZONE_MOVABLE/CMA and
> >
> > I don't understand how that's true, neither zram nor zsmalloc clears
> > __GFP_MOVABLE when CONFIG_COMPACTION=n?
> >
> > ...Or perhaps I'm still missing some pieces ;)
>
> You might have found a bug in zsmalloc then :) Without support for compaction, we
> must clear __GFP_MOVABLE in alloc_zpdesc() I assume.
>
> Do you have the capacity to look into that and send a fix if really broken?
I'll add that somewhere in my TODO list :)
1) confirming if it's really broken and
2) fixing it if so.
> In balloon compaction code we properly handle that.
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (18 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 19/29] mm: stop storing migration_ops in page->mapping David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 12:44 ` Lorenzo Stoakes
2025-07-02 11:54 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
` (9 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Instead, let's use a page flag. As the page flag can result in
false-positives, glue it to the page types for which we
support/implement movable_ops page migration.
The flag reused by PageMovableOps() might be used by other pages, so
warning whenever it is set in page_has_movable_ops() could result in
false-positive warnings.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 2 +-
include/linux/migrate.h | 8 -----
include/linux/page-flags.h | 52 ++++++++++++++++++++++++------
mm/compaction.c | 6 ----
mm/zpdesc.h | 2 +-
5 files changed, 44 insertions(+), 26 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index a8a1706cc56f3..b222b0737c466 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- __SetPageMovable(page);
+ SetPageMovableOps(page);
set_page_private(page, (unsigned long)balloon);
list_add(&page->lru, &balloon->pages);
}
diff --git a/include/linux/migrate.h b/include/linux/migrate.h
index 6aece3f3c8be8..acadd41e0b5cf 100644
--- a/include/linux/migrate.h
+++ b/include/linux/migrate.h
@@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
#endif /* CONFIG_MIGRATION */
-#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page);
-#else
-static inline void __SetPageMovable(struct page *page)
-{
-}
-#endif
-
#ifdef CONFIG_NUMA_BALANCING
int migrate_misplaced_folio_prepare(struct folio *folio,
struct vm_area_struct *vma, int node);
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 4c27ebb689e3c..016a6e6fa428a 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -170,6 +170,11 @@ enum pageflags {
/* non-lru isolated movable page */
PG_isolated = PG_reclaim,
+#ifdef CONFIG_MIGRATION
+ /* this is a movable_ops page (for selected typed pages only) */
+ PG_movable_ops = PG_uptodate,
+#endif
+
/* Only valid for buddy pages. Used to track pages that are reported */
PG_reported = PG_uptodate,
@@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* bit; and then folio->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged page. See ksm.h.
*
- * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
- * page and then folio->mapping points to a struct movable_operations.
- *
* Please note that, confusingly, "folio_mapping" refers to the inode
* address_space which maps the folio from disk; whereas "folio_mapped"
* refers to user virtual address space into which the folio is mapped.
@@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
{
return folio_test_anon(page_folio(page));
}
-
-static __always_inline bool page_has_movable_ops(const struct page *page)
-{
- return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_MOVABLE;
-}
-
#ifdef CONFIG_KSM
/*
* A KSM page is one of those write-protected "shared pages" or "merged pages"
@@ -1133,6 +1128,43 @@ bool is_free_buddy_page(const struct page *page);
PAGEFLAG(Isolated, isolated, PF_ANY);
+#ifdef CONFIG_MIGRATION
+/*
+ * This page is migratable through movable_ops (for selected typed pages
+ * only).
+ *
+ * Page migration of such pages might fail, for example, if the page is
+ * already isolated by somebody else, or if the page is about to get freed.
+ *
+ * While a subsystem might set selected typed pages that support page migration
+ * as being movable through movable_ops, it must never clear this flag.
+ *
+ * This flag is only cleared when the page is freed back to the buddy.
+ *
+ * Only selected page types support this flag (see page_movable_ops()) and
+ * the flag might be used in other context for other pages. Always use
+ * page_has_movable_ops() instead.
+ */
+PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+#else
+PAGEFLAG_FALSE(MovableOps, movable_ops);
+#endif
+
+/**
+ * page_has_movable_ops - test for a movable_ops page
+ * @page: The page to test.
+ *
+ * Test whether this is a movable_ops page. Such pages will stay that
+ * way until freed.
+ *
+ * Returns true if this is a movable_ops page, otherwise false.
+ */
+static inline bool page_has_movable_ops(const struct page *page)
+{
+ return PageMovableOps(page) &&
+ (PageOffline(page) || PageZsmalloc(page));
+}
+
static __always_inline int PageAnonExclusive(const struct page *page)
{
VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
diff --git a/mm/compaction.c b/mm/compaction.c
index 348eb754cb227..349f4ea0ec3e5 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
}
#ifdef CONFIG_COMPACTION
-void __SetPageMovable(struct page *page)
-{
- VM_BUG_ON_PAGE(!PageLocked(page), page);
- page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
-}
-EXPORT_SYMBOL(__SetPageMovable);
/* Do not skip compaction more than 64 times */
#define COMPACT_MAX_DEFER_SHIFT 6
diff --git a/mm/zpdesc.h b/mm/zpdesc.h
index 6855d9e2732d8..25bf5ea0beb83 100644
--- a/mm/zpdesc.h
+++ b/mm/zpdesc.h
@@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
{
- __SetPageMovable(zpdesc_page(zpdesc));
+ SetPageMovableOps(zpdesc_page(zpdesc));
}
static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-06-30 13:00 ` [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
@ 2025-07-01 12:44 ` Lorenzo Stoakes
2025-07-01 12:49 ` David Hildenbrand
2025-07-02 11:54 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 12:44 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
> Instead, let's use a page flag. As the page flag can result in
> false-positives, glue it to the page types for which we
> support/implement movable_ops page migration.
>
> The flag reused by PageMovableOps() might be sued by other pages, so
I assume 'used' not 'sued' :P
> warning in case it is set in page_has_movable_ops() might result in
> false-positive warnings.
Worth mentioning that it's PG_uptodate. Also probably worth putting a proviso
here that we're safe to use it for movable ops pages because it's used to track
file system state.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
Seems reasonable though, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 2 +-
> include/linux/migrate.h | 8 -----
> include/linux/page-flags.h | 52 ++++++++++++++++++++++++------
> mm/compaction.c | 6 ----
> mm/zpdesc.h | 2 +-
> 5 files changed, 44 insertions(+), 26 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index a8a1706cc56f3..b222b0737c466 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -92,7 +92,7 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> struct page *page)
> {
> __SetPageOffline(page);
> - __SetPageMovable(page);
> + SetPageMovableOps(page);
> set_page_private(page, (unsigned long)balloon);
> list_add(&page->lru, &balloon->pages);
> }
> diff --git a/include/linux/migrate.h b/include/linux/migrate.h
> index 6aece3f3c8be8..acadd41e0b5cf 100644
> --- a/include/linux/migrate.h
> +++ b/include/linux/migrate.h
> @@ -103,14 +103,6 @@ static inline int migrate_huge_page_move_mapping(struct address_space *mapping,
>
> #endif /* CONFIG_MIGRATION */
>
> -#ifdef CONFIG_COMPACTION
> -void __SetPageMovable(struct page *page);
> -#else
> -static inline void __SetPageMovable(struct page *page)
> -{
> -}
> -#endif
> -
> #ifdef CONFIG_NUMA_BALANCING
> int migrate_misplaced_folio_prepare(struct folio *folio,
> struct vm_area_struct *vma, int node);
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 4c27ebb689e3c..016a6e6fa428a 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -170,6 +170,11 @@ enum pageflags {
> /* non-lru isolated movable page */
> PG_isolated = PG_reclaim,
>
> +#ifdef CONFIG_MIGRATION
> + /* this is a movable_ops page (for selected typed pages only) */
> + PG_movable_ops = PG_uptodate,
> +#endif
> +
> /* Only valid for buddy pages. Used to track pages that are reported */
> PG_reported = PG_uptodate,
>
> @@ -698,9 +703,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> * bit; and then folio->mapping points, not to an anon_vma, but to a private
> * structure which KSM associates with that merged page. See ksm.h.
> *
> - * PAGE_MAPPING_KSM without PAGE_MAPPING_ANON is used for non-lru movable
> - * page and then folio->mapping points to a struct movable_operations.
> - *
> * Please note that, confusingly, "folio_mapping" refers to the inode
> * address_space which maps the folio from disk; whereas "folio_mapped"
> * refers to user virtual address space into which the folio is mapped.
> @@ -743,13 +745,6 @@ static __always_inline bool PageAnon(const struct page *page)
> {
> return folio_test_anon(page_folio(page));
> }
> -
> -static __always_inline bool page_has_movable_ops(const struct page *page)
> -{
> - return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) ==
> - PAGE_MAPPING_MOVABLE;
> -}
> -
> #ifdef CONFIG_KSM
> /*
> * A KSM page is one of those write-protected "shared pages" or "merged pages"
> @@ -1133,6 +1128,43 @@ bool is_free_buddy_page(const struct page *page);
>
> PAGEFLAG(Isolated, isolated, PF_ANY);
>
> +#ifdef CONFIG_MIGRATION
> +/*
> + * This page is migratable through movable_ops (for selected typed pages
> + * only).
> + *
> + * Page migration of such pages might fail, for example, if the page is
> + * already isolated by somebody else, or if the page is about to get freed.
> + *
> + * While a subsystem might set selected typed pages that support page migration
> + * as being movable through movable_ops, it must never clear this flag.
> + *
> + * This flag is only cleared when the page is freed back to the buddy.
> + *
> + * Only selected page types support this flag (see page_movable_ops()) and
> + * the flag might be used in other contexts for other pages. Always use
> + * page_has_movable_ops() instead.
> + */
> +PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
> +#else
> +PAGEFLAG_FALSE(MovableOps, movable_ops);
> +#endif
> +
> +/**
> + * page_has_movable_ops - test for a movable_ops page
> + * @page: The page to test.
> + *
> + * Test whether this is a movable_ops page. Such pages will stay that
> + * way until freed.
> + *
> + * Returns true if this is a movable_ops page, otherwise false.
> + */
> +static inline bool page_has_movable_ops(const struct page *page)
> +{
> + return PageMovableOps(page) &&
> + (PageOffline(page) || PageZsmalloc(page));
> +}
> +
> static __always_inline int PageAnonExclusive(const struct page *page)
> {
> VM_BUG_ON_PGFLAGS(!PageAnon(page), page);
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 348eb754cb227..349f4ea0ec3e5 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -114,12 +114,6 @@ static unsigned long release_free_list(struct list_head *freepages)
> }
>
> #ifdef CONFIG_COMPACTION
> -void __SetPageMovable(struct page *page)
> -{
> - VM_BUG_ON_PAGE(!PageLocked(page), page);
> - page->mapping = (void *)(PAGE_MAPPING_MOVABLE);
> -}
> -EXPORT_SYMBOL(__SetPageMovable);
>
> /* Do not skip compaction more than 64 times */
> #define COMPACT_MAX_DEFER_SHIFT 6
> diff --git a/mm/zpdesc.h b/mm/zpdesc.h
> index 6855d9e2732d8..25bf5ea0beb83 100644
> --- a/mm/zpdesc.h
> +++ b/mm/zpdesc.h
> @@ -154,7 +154,7 @@ static inline struct zpdesc *pfn_zpdesc(unsigned long pfn)
>
> static inline void __zpdesc_set_movable(struct zpdesc *zpdesc)
> {
> - __SetPageMovable(zpdesc_page(zpdesc));
> + SetPageMovableOps(zpdesc_page(zpdesc));
> }
>
> static inline void __zpdesc_set_zsmalloc(struct zpdesc *zpdesc)
> --
> 2.49.0
>
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-01 12:44 ` Lorenzo Stoakes
@ 2025-07-01 12:49 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 12:49 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 14:44, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
>> Instead, let's use a page flag. As the page flag can result in
>> false-positives, glue it to the page types for which we
>> support/implement movable_ops page migration.
>>
>> The flag reused by PageMovableOps() might be sued by other pages, so
>
> I assume 'used' not 'sued' :P
:)
>
>> warning in case it is set in page_has_movable_ops() might result in
>> false-positive warnings.
>
> Worth mentioning that it's PG_uptodate. Also probably worth putting a proviso
> here that we're safe to use it for movable ops pages because it's used to track
> file system state.
Will do.
>
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> Seems reasonable though, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-06-30 13:00 ` [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
2025-07-01 12:44 ` Lorenzo Stoakes
@ 2025-07-02 11:54 ` Harry Yoo
2025-07-02 12:01 ` David Hildenbrand
1 sibling, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 11:54 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
> Instead, let's use a page flag. As the page flag can result in
> false-positives, glue it to the page types for which we
> support/implement movable_ops page migration.
>
> The flag reused by PageMovableOps() might be sued by other pages, so
> warning in case it is set in page_has_movable_ops() might result in
> false-positive warnings.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
With a question: is there any reason to change the page flag
operations to use atomic bit ops? (.e.g, using SetPageMovableOps instead
of __SetPageMovableOps)
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-02 11:54 ` Harry Yoo
@ 2025-07-02 12:01 ` David Hildenbrand
2025-07-02 13:01 ` Harry Yoo
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 12:01 UTC (permalink / raw)
To: Harry Yoo
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On 02.07.25 13:54, Harry Yoo wrote:
> On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
>> Instead, let's use a page flag. As the page flag can result in
>> false-positives, glue it to the page types for which we
>> support/implement movable_ops page migration.
>>
>> The flag reused by PageMovableOps() might be sued by other pages, so
>> warning in case it is set in page_has_movable_ops() might result in
>> false-positive warnings.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>
> LGTM,
> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
>
> With a question: is there any reason to change the page flag
> operations to use atomic bit ops?
As we have the page lock in there, it's complicated. I thought about
this when writing that code, and was not able to convince myself that it
is safe.
But that was when I was prototyping and reshuffling patches, and we
would still have code that would clear the flag.
Given that we only allow setting the flag, it might be okay to use the
non-atomic variant as long as there can be nobody racing with us when
modifying flags. Especially trying to lock the folio concurrently is the
big problem.
In isolate_movable_ops_page(), there is a comment about checking the
flag before grabbing the page lock, so that should be handled.
I'll have to check some other cases in balloon/zsmalloc code.
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-02 12:01 ` David Hildenbrand
@ 2025-07-02 13:01 ` Harry Yoo
2025-07-02 15:25 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:01 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Wed, Jul 02, 2025 at 02:01:33PM +0200, David Hildenbrand wrote:
> On 02.07.25 13:54, Harry Yoo wrote:
> > On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
> > > Instead, let's use a page flag. As the page flag can result in
> > > false-positives, glue it to the page types for which we
> > > support/implement movable_ops page migration.
> > >
> > > The flag reused by PageMovableOps() might be sued by other pages, so
> > > warning in case it is set in page_has_movable_ops() might result in
> > > false-positive warnings.
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > > ---
> >
> > LGTM,
> > Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> >
> > With a question: is there any reason to change the page flag
> > operations to use atomic bit ops?
>
> As we have the page lock in there, it's complicated. I thought about this
> when writing that code, and was not able to convince myself that it is safe.
>
> But that was when I was prototyping and reshuffling patches, and we would
> still have code that would clear the flag.
> Given that we only allow setting the flag, it might be okay to use the
> non-atomic variant as long as there can be nobody racing with us when
> modifying flags. Especially trying to lock the folio concurrently is the big
> problem.
>
> In isolate_movable_ops_page(), there is a comment about checking the flag
> before grabbing the page lock, so that should be handled.
Right.
> I'll have to check some other cases in balloon/zsmalloc code.
Okay, it's totally fine to go with the atomic version and then
switch back to non-atomic ops when we're sure it's safe.
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag
2025-07-02 13:01 ` Harry Yoo
@ 2025-07-02 15:25 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 15:25 UTC (permalink / raw)
To: Harry Yoo
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On 02.07.25 15:01, Harry Yoo wrote:
> On Wed, Jul 02, 2025 at 02:01:33PM +0200, David Hildenbrand wrote:
>> On 02.07.25 13:54, Harry Yoo wrote:
>>> On Mon, Jun 30, 2025 at 03:00:01PM +0200, David Hildenbrand wrote:
>>>> Instead, let's use a page flag. As the page flag can result in
>>>> false-positives, glue it to the page types for which we
>>>> support/implement movable_ops page migration.
>>>>
>>>> The flag reused by PageMovableOps() might be sued by other pages, so
>>>> warning in case it is set in page_has_movable_ops() might result in
>>>> false-positive warnings.
>>>>
>>>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>>> ---
>>>
>>> LGTM,
>>> Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
>>>
>>> With a question: is there any reason to change the page flag
>>> operations to use atomic bit ops?
>>
>> As we have the page lock in there, it's complicated. I thought about this
>> when writing that code, and was not able to convince myself that it is safe.
>>
>> But that was when I was prototyping and reshuffling patches, and we would
>> still have code that would clear the flag.
>
>> Given that we only allow setting the flag, it might be okay to use the
>> non-atomic variant as long as there can be nobody racing with us when
>> modifying flags. Especially trying to lock the folio concurrently is the big
>> problem.
>>
>> In isolate_movable_ops_page(), there is a comment about checking the flag
>> before grabbing the page lock, so that should be handled.
>
> Right.
>
>> I'll have to check some other cases in balloon/zsmalloc code.
>
> Okay, it's totally fine to go with the atomic version and then
> switch back to non-atomic ops when we're sure it's safe.
>
I'll definitely do the following:
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8b23a74963feb..5f2b570735852 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1145,9 +1145,11 @@ PAGEFLAG(Isolated, isolated, PF_ANY);
* the flag might be used in other contexts for other pages. Always use
* page_has_movable_ops() instead.
*/
-PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+TESTPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+SETPAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
#else /* !CONFIG_MIGRATION */
-PAGEFLAG_FALSE(MovableOps, movable_ops);
+TESTPAGEFLAG_FALSE(MovableOps, movable_ops);
+SETPAGEFLAG_NOOP(MovableOps, movable_ops);
#endif /* CONFIG_MIGRATION */
/**
Because the flag must not get cleared.
There is no __SETPAGEFLAG_NOOP yet, unfortunately.
--
Cheers,
David / dhildenb
* [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (19 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 20/29] mm: convert "movable" flag in page->mapping to a page flag David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 12:51 ` Lorenzo Stoakes
2025-07-02 13:04 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
` (8 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's rename the flag to make it clearer where it applies (not folios
...).
While at it, define the flag only with CONFIG_MIGRATION.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 16 +++++++++++-----
mm/compaction.c | 2 +-
mm/migrate.c | 14 +++++++-------
3 files changed, 19 insertions(+), 13 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 016a6e6fa428a..aa48b05536bca 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -167,10 +167,9 @@ enum pageflags {
/* Remapped by swiotlb-xen. */
PG_xen_remapped = PG_owner_priv_1,
- /* non-lru isolated movable page */
- PG_isolated = PG_reclaim,
-
#ifdef CONFIG_MIGRATION
+ /* movable_ops page that is isolated for migration */
+ PG_movable_ops_isolated = PG_reclaim,
/* this is a movable_ops page (for selected typed pages only) */
PG_movable_ops = PG_uptodate,
#endif
@@ -1126,8 +1125,6 @@ static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
bool is_free_buddy_page(const struct page *page);
-PAGEFLAG(Isolated, isolated, PF_ANY);
-
#ifdef CONFIG_MIGRATION
/*
* This page is migratable through movable_ops (for selected typed pages
@@ -1146,8 +1143,17 @@ PAGEFLAG(Isolated, isolated, PF_ANY);
* page_has_movable_ops() instead.
*/
PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
+/*
+ * A movable_ops page has this flag set while it is isolated for migration.
+ * This flag primarily protects against concurrent migration attempts.
+ *
+ * Once migration ended (success or failure), the flag is cleared. The
+ * flag is managed by the migration core.
+ */
+PAGEFLAG(MovableOpsIsolated, movable_ops_isolated, PF_NO_TAIL);
#else
PAGEFLAG_FALSE(MovableOps, movable_ops);
+PAGEFLAG_FALSE(MovableOpsIsolated, movable_ops_isolated);
#endif
/**
diff --git a/mm/compaction.c b/mm/compaction.c
index 349f4ea0ec3e5..bf021b31c7ece 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1051,7 +1051,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
if (!PageLRU(page)) {
/* Isolation code will deal with any races. */
if (unlikely(page_has_movable_ops(page)) &&
- !PageIsolated(page)) {
+ !PageMovableOpsIsolated(page)) {
if (locked) {
unlock_page_lruvec_irqrestore(locked, flags);
locked = NULL;
diff --git a/mm/migrate.c b/mm/migrate.c
index c6c9998014ec8..62a3ee590b245 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -135,7 +135,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out_putfolio;
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
- if (PageIsolated(page))
+ if (PageMovableOpsIsolated(page))
goto out_no_isolated;
mops = page_movable_ops(page);
@@ -146,8 +146,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
goto out_no_isolated;
/* Driver shouldn't use the isolated flag */
- VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
- SetPageIsolated(page);
+ VM_WARN_ON_ONCE_PAGE(PageMovableOpsIsolated(page), page);
+ SetPageMovableOpsIsolated(page);
folio_unlock(folio);
return true;
@@ -177,10 +177,10 @@ static void putback_movable_ops_page(struct page *page)
struct folio *folio = page_folio(page);
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
- VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
+ VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(page), page);
folio_lock(folio);
page_movable_ops(page)->putback_page(page);
- ClearPageIsolated(page);
+ ClearPageMovableOpsIsolated(page);
folio_unlock(folio);
folio_put(folio);
}
@@ -216,10 +216,10 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
int rc = MIGRATEPAGE_SUCCESS;
VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
- VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
+ VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(src), src);
rc = page_movable_ops(src)->migrate_page(dst, src, mode);
if (rc == MIGRATEPAGE_SUCCESS)
- ClearPageIsolated(src);
+ ClearPageMovableOpsIsolated(src);
return rc;
}
--
2.49.0
* Re: [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated
2025-06-30 13:00 ` [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
@ 2025-07-01 12:51 ` Lorenzo Stoakes
2025-07-01 16:19 ` David Hildenbrand
2025-07-02 13:04 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 12:51 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:02PM +0200, David Hildenbrand wrote:
> Let's rename the flag to make it clearer where it applies (not folios
> ...).
>
> While at it, define the flag only with CONFIG_MIGRATION.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/page-flags.h | 16 +++++++++++-----
> mm/compaction.c | 2 +-
> mm/migrate.c | 14 +++++++-------
> 3 files changed, 19 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index 016a6e6fa428a..aa48b05536bca 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -167,10 +167,9 @@ enum pageflags {
> /* Remapped by swiotlb-xen. */
> PG_xen_remapped = PG_owner_priv_1,
>
> - /* non-lru isolated movable page */
Ah nice to drop another confusing reference to LRU when really meaning
'non-could-be-lru-possibly' :P
> - PG_isolated = PG_reclaim,
> -
> #ifdef CONFIG_MIGRATION
> + /* movable_ops page that is isolated for migration */
> + PG_movable_ops_isolated = PG_reclaim,
> /* this is a movable_ops page (for selected typed pages only) */
> PG_movable_ops = PG_uptodate,
> #endif
> @@ -1126,8 +1125,6 @@ static inline bool folio_contain_hwpoisoned_page(struct folio *folio)
>
> bool is_free_buddy_page(const struct page *page);
>
> -PAGEFLAG(Isolated, isolated, PF_ANY);
> -
> #ifdef CONFIG_MIGRATION
> /*
> * This page is migratable through movable_ops (for selected typed pages
> @@ -1146,8 +1143,17 @@ PAGEFLAG(Isolated, isolated, PF_ANY);
> * page_has_movable_ops() instead.
> */
> PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
> +/*
> + * A movable_ops page has this flag set while it is isolated for migration.
> + * This flag primarily protects against concurrent migration attempts.
> + *
> + * Once migration ended (success or failure), the flag is cleared. The
> + * flag is managed by the migration core.
> + */
> +PAGEFLAG(MovableOpsIsolated, movable_ops_isolated, PF_NO_TAIL);
> #else
> PAGEFLAG_FALSE(MovableOps, movable_ops);
> +PAGEFLAG_FALSE(MovableOpsIsolated, movable_ops_isolated);
> #endif
Nit, but maybe worth sticking /* CONFIG_MIGRATION */ on else and endif? Not a
huge block so maybe not massively important but just a thought!
>
> /**
> diff --git a/mm/compaction.c b/mm/compaction.c
> index 349f4ea0ec3e5..bf021b31c7ece 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -1051,7 +1051,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
> if (!PageLRU(page)) {
> /* Isolation code will deal with any races. */
> if (unlikely(page_has_movable_ops(page)) &&
> - !PageIsolated(page)) {
> + !PageMovableOpsIsolated(page)) {
> if (locked) {
> unlock_page_lruvec_irqrestore(locked, flags);
> locked = NULL;
> diff --git a/mm/migrate.c b/mm/migrate.c
> index c6c9998014ec8..62a3ee590b245 100644
> --- a/mm/migrate.c
> +++ b/mm/migrate.c
> @@ -135,7 +135,7 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> goto out_putfolio;
>
> VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> - if (PageIsolated(page))
> + if (PageMovableOpsIsolated(page))
> goto out_no_isolated;
>
> mops = page_movable_ops(page);
> @@ -146,8 +146,8 @@ bool isolate_movable_ops_page(struct page *page, isolate_mode_t mode)
> goto out_no_isolated;
>
> /* Driver shouldn't use the isolated flag */
> - VM_WARN_ON_ONCE_PAGE(PageIsolated(page), page);
> - SetPageIsolated(page);
> + VM_WARN_ON_ONCE_PAGE(PageMovableOpsIsolated(page), page);
> + SetPageMovableOpsIsolated(page);
> folio_unlock(folio);
>
> return true;
> @@ -177,10 +177,10 @@ static void putback_movable_ops_page(struct page *page)
> struct folio *folio = page_folio(page);
>
> VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(page), page);
> - VM_WARN_ON_ONCE_PAGE(!PageIsolated(page), page);
> + VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(page), page);
> folio_lock(folio);
> page_movable_ops(page)->putback_page(page);
> - ClearPageIsolated(page);
> + ClearPageMovableOpsIsolated(page);
> folio_unlock(folio);
> folio_put(folio);
> }
> @@ -216,10 +216,10 @@ static int migrate_movable_ops_page(struct page *dst, struct page *src,
> int rc = MIGRATEPAGE_SUCCESS;
>
> VM_WARN_ON_ONCE_PAGE(!page_has_movable_ops(src), src);
> - VM_WARN_ON_ONCE_PAGE(!PageIsolated(src), src);
> + VM_WARN_ON_ONCE_PAGE(!PageMovableOpsIsolated(src), src);
> rc = page_movable_ops(src)->migrate_page(dst, src, mode);
> if (rc == MIGRATEPAGE_SUCCESS)
> - ClearPageIsolated(src);
> + ClearPageMovableOpsIsolated(src);
> return rc;
> }
>
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread

* Re: [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated
2025-07-01 12:51 ` Lorenzo Stoakes
@ 2025-07-01 16:19 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 16:19 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,

Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
>> PAGEFLAG(MovableOps, movable_ops, PF_NO_TAIL);
>> +/*
>> + * A movable_ops page has this flag set while it is isolated for migration.
>> + * This flag primarily protects against concurrent migration attempts.
>> + *
>> + * Once migration ended (success or failure), the flag is cleared. The
>> + * flag is managed by the migration core.
>> + */
>> +PAGEFLAG(MovableOpsIsolated, movable_ops_isolated, PF_NO_TAIL);
>> #else
>> PAGEFLAG_FALSE(MovableOps, movable_ops);
>> +PAGEFLAG_FALSE(MovableOpsIsolated, movable_ops_isolated);
>> #endif
>
> Nit, but maybe worth sticking /* CONFIG_MIGRATION */ on else and endif? Not a
> huge block so maybe not massively important but just a thought!
Sure, why not (goes into the introducing patch) :)
--
Cheers,
David / dhildenb
* Re: [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated
2025-06-30 13:00 ` [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
2025-07-01 12:51 ` Lorenzo Stoakes
@ 2025-07-02 13:04 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:04 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:02PM +0200, David Hildenbrand wrote:
> Let's rename the flag to make it clearer where it applies (not folios
> ...).
>
> While at it, define the flag only with CONFIG_MIGRATION.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (20 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 21/29] mm: rename PG_isolated to PG_movable_ops_isolated David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 12:54 ` Lorenzo Stoakes
2025-07-02 13:11 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
` (7 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
KSM is the only remaining user, so let's rename the flag. While at it,
adjust the remaining page -> folio in the doc.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index aa48b05536bca..abed972e902e1 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -697,10 +697,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* folio->mapping points to its anon_vma, not to a struct address_space;
* with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
- * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
- * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON
+ * On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
+ * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
* bit; and then folio->mapping points, not to an anon_vma, but to a private
- * structure which KSM associates with that merged page. See ksm.h.
+ * structure which KSM associates with that merged folio. See ksm.h.
*
* Please note that, confusingly, "folio_mapping" refers to the inode
* address_space which maps the folio from disk; whereas "folio_mapped"
@@ -714,9 +714,9 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* See mm/slab.h.
*/
#define PAGE_MAPPING_ANON 0x1
-#define PAGE_MAPPING_MOVABLE 0x2
-#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
-#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
+#define PAGE_MAPPING_ANON_KSM 0x2
+#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
+#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
static __always_inline bool folio_mapping_flags(const struct folio *folio)
{
--
2.49.0
* Re: [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-06-30 13:00 ` [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
@ 2025-07-01 12:54 ` Lorenzo Stoakes
2025-07-01 19:31 ` David Hildenbrand
2025-07-02 13:11 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 12:54 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:03PM +0200, David Hildenbrand wrote:
> KSM is the only remaining user, let's rename the flag. While at it,
> adjust to remaining page -> folio in the doc.
Hm I wonder if we could just ideally have this be a separate flag rather than a
bitwise combination, however I bet there's code that does somehow rely on this.
I know for sure there's code that has to do a folio_test_ksm() on something
folio_test_anon()'d because the latter isn't sufficient.
But this is one for the future I guess :)
Nice: re the change to folio, that is a nice cleanup given that you've now made
the per-page mapping op stuff not part of this.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/page-flags.h | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index aa48b05536bca..abed972e902e1 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -697,10 +697,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> * folio->mapping points to its anon_vma, not to a struct address_space;
> * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
> *
> - * On an anonymous page in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
> - * the PAGE_MAPPING_MOVABLE bit may be set along with the PAGE_MAPPING_ANON
> + * On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
> + * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
> * bit; and then folio->mapping points, not to an anon_vma, but to a private
> - * structure which KSM associates with that merged page. See ksm.h.
> + * structure which KSM associates with that merged folio. See ksm.h.
> *
> * Please note that, confusingly, "folio_mapping" refers to the inode
> * address_space which maps the folio from disk; whereas "folio_mapped"
> @@ -714,9 +714,9 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> * See mm/slab.h.
> */
> #define PAGE_MAPPING_ANON 0x1
> -#define PAGE_MAPPING_MOVABLE 0x2
> -#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
> -#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_MOVABLE)
> +#define PAGE_MAPPING_ANON_KSM 0x2
> +#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
> +#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
>
> static __always_inline bool folio_mapping_flags(const struct folio *folio)
> {
> --
> 2.49.0
>
* Re: [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-07-01 12:54 ` Lorenzo Stoakes
@ 2025-07-01 19:31 ` David Hildenbrand
2025-07-02 9:06 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 19:31 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 14:54, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 03:00:03PM +0200, David Hildenbrand wrote:
>> KSM is the only remaining user, let's rename the flag. While at it,
>> adjust to remaining page -> folio in the doc.
>
> Hm I wonder if we could just ideally have this be a separate flag rather than a
> bitwise combination, however I bet there's code that does somehow rely on this.
Well, KSM folios are anon folios, so that must hold.
Of course, now you could make folio_test_anon() test both bits, and have
KSM folios only set a PAGE_MAPPING_KSM bit.
That should be possible on top of this change, but not sure if that's
really what we want. After all, KSM folios are special ANON folios.
>
> I know for sure there's code that has to do a folio_test_ksm() on something
> folio_test_anon()'d because the latter isn't sufficient.
>
> But this is one for the future I guess :)
Yes :)
>
> Nice: re change to folio, that is a nice cleanup based on fact you've now made
> the per-page mapping op stuff not be part of this.
>
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> LGTM, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-07-01 19:31 ` David Hildenbrand
@ 2025-07-02 9:06 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-02 9:06 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 09:31:56PM +0200, David Hildenbrand wrote:
> On 01.07.25 14:54, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 03:00:03PM +0200, David Hildenbrand wrote:
> > > KSM is the only remaining user, let's rename the flag. While at it,
> > > adjust to remaining page -> folio in the doc.
> >
> > Hm I wonder if we could just ideally have this be a separate flag rather than a
> > bitwise combination, however I bet there's code that does somehow rely on this.
>
> Well, KSM folios are anon folios, so that must hold.
Right, of course, though they're sort of 'special' anon folios...
>
> Of course, now you could make folio_test_anon() test both bits, and have KSM
> folios only set a PAGE_MAPPING_KSM bit.
>
> That should be possible on top of this change, but not sure if that's really
> what we want. After all, KSM folios are special ANON folios.
Yeah, probably best to keep enforcing that KSM == anon.
>
> >
> > I know for sure there's code that has to do a folio_test_ksm() on something
> > folio_test_anon()'d because the latter isn't sufficient.
> >
> > But this is one for the future I guess :)
>
> Yes :)
>
> >
> > Nice: re change to folio, that is a nice cleanup based on fact you've now made
> > the per-page mapping op stuff not be part of this.
> >
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > LGTM, so:
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
> Thanks!
>
>
> --
> Cheers,
>
> David / dhildenb
>
* Re: [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM
2025-06-30 13:00 ` [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
2025-07-01 12:54 ` Lorenzo Stoakes
@ 2025-07-02 13:11 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:11 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:03PM +0200, David Hildenbrand wrote:
> KSM is the only remaining user, let's rename the flag. While at it,
> adjust to remaining page -> folio in the doc.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
So now PAGE_MAPPING_ANON_KSM without PAGE_MAPPING_ANON is invalid!
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (21 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 22/29] mm/page-flags: rename PAGE_MAPPING_MOVABLE to PAGE_MAPPING_ANON_KSM David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:02 ` Lorenzo Stoakes
2025-07-02 13:20 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
` (6 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
We can now simply check for PageAnon() and remove PageMappingFlags().
... and while at it, use the folio instead and operate on
folio->mapping.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 5 -----
mm/page_alloc.c | 7 +++----
2 files changed, 3 insertions(+), 9 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index abed972e902e1..f539bd5e14200 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -723,11 +723,6 @@ static __always_inline bool folio_mapping_flags(const struct folio *folio)
return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
}
-static __always_inline bool PageMappingFlags(const struct page *page)
-{
- return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
-}
-
static __always_inline bool folio_test_anon(const struct folio *folio)
{
return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a134b9fa9520e..a0ebcc5f54bb2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1375,10 +1375,9 @@ __always_inline bool free_pages_prepare(struct page *page,
(page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
}
}
- if (PageMappingFlags(page)) {
- if (PageAnon(page))
- mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
- page->mapping = NULL;
+ if (folio_test_anon(folio)) {
+ mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
+ folio->mapping = NULL;
}
if (unlikely(page_has_type(page)))
page->page_type = UINT_MAX;
--
2.49.0
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-06-30 13:00 ` [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
@ 2025-07-01 13:02 ` Lorenzo Stoakes
2025-07-01 19:34 ` David Hildenbrand
2025-07-02 13:20 ` Harry Yoo
1 sibling, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:02 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
> We can now simply check for PageAnon() and remove PageMappingFlags().
>
> ... and while at it, use the folio instead and operate on
> folio->mapping.
Probably worth mentioning to be super crystal clear that this is because
now it's either an anon folio or a KSM folio, both of which set the
FOLIO_MAPPING_ANON flag.
I wonder if there's other places that could be fixed up similarly that do
folio_test_anon() || folio_test_ksm() or equivalent?
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/page-flags.h | 5 -----
> mm/page_alloc.c | 7 +++----
> 2 files changed, 3 insertions(+), 9 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index abed972e902e1..f539bd5e14200 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -723,11 +723,6 @@ static __always_inline bool folio_mapping_flags(const struct folio *folio)
> return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
> }
>
> -static __always_inline bool PageMappingFlags(const struct page *page)
> -{
> - return ((unsigned long)page->mapping & PAGE_MAPPING_FLAGS) != 0;
> -}
> -
> static __always_inline bool folio_test_anon(const struct folio *folio)
> {
> return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a134b9fa9520e..a0ebcc5f54bb2 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1375,10 +1375,9 @@ __always_inline bool free_pages_prepare(struct page *page,
> (page + i)->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
> }
> }
> - if (PageMappingFlags(page)) {
> - if (PageAnon(page))
> - mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> - page->mapping = NULL;
> + if (folio_test_anon(folio)) {
> + mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
> + folio->mapping = NULL;
> }
> if (unlikely(page_has_type(page)))
> page->page_type = UINT_MAX;
> --
> 2.49.0
>
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-01 13:02 ` Lorenzo Stoakes
@ 2025-07-01 19:34 ` David Hildenbrand
2025-07-02 8:49 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 19:34 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 01.07.25 15:02, Lorenzo Stoakes wrote:
> On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
>> We can now simply check for PageAnon() and remove PageMappingFlags().
>>
>> ... and while at it, use the folio instead and operate on
>> folio->mapping.
>
> Probably worth mentioning to be super crystal clear that this is because
> now it's either an anon folio or a KSM folio, both of which set the
> FOLIO_MAPPING_ANON flag.
"As PageMappingFlags() now only indicates anon (incl. ksm) folios, we
can now simply check for PageAnon() and remove PageMappingFlags()."
>
> I wonder if there's other places that could be fixed up similarly that do
> folio_test_anon() || folio_test_ksm() or equivalent?
I think you spotted the one in patch #25 :)
I looked for others while crafting this patch, but there might be more
hiding that I didn't catch.
>
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>
> LGTM, so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
>
Thanks!
--
Cheers,
David / dhildenb
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-01 19:34 ` David Hildenbrand
@ 2025-07-02 8:49 ` Lorenzo Stoakes
2025-07-02 9:02 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-02 8:49 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Tue, Jul 01, 2025 at 09:34:41PM +0200, David Hildenbrand wrote:
> On 01.07.25 15:02, Lorenzo Stoakes wrote:
> > On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
> > > We can now simply check for PageAnon() and remove PageMappingFlags().
> > >
> > > ... and while at it, use the folio instead and operate on
> > > folio->mapping.
> >
> > Probably worth mentioning to be super crystal clear that this is because
> > now it's either an anon folio or a KSM folio, both of which set the
> > FOLIO_MAPPING_ANON flag.
>
> "As PageMappingFlags() now only indicates anon (incl. ksm) folios, we can
> now simply check for PageAnon() and remove PageMappingFlags()."
Sounds good! Though the extremely nitty part of me says 'capitalise KSM' :P
>
>
> >
> > I wonder if there's other places that could be fixed up similarly that do
> > folio_test_anon() || folio_test_ksm() or equivalent?
>
> I think you spotted the one in patch #25 :)
:)
>
> I looked for others while crafting this patch, but there might be more
> hiding that I didn't catch.
Yeah, one we can keep an eye out for.
>
> >
> > >
> > > Reviewed-by: Zi Yan <ziy@nvidia.com>
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> >
> > LGTM, so:
> >
> > Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> >
>
> Thanks!
>
> --
> Cheers,
>
> David / dhildenb
>
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-02 8:49 ` Lorenzo Stoakes
@ 2025-07-02 9:02 ` David Hildenbrand
2025-07-02 9:09 ` Lorenzo Stoakes
0 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 9:02 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 02.07.25 10:49, Lorenzo Stoakes wrote:
> On Tue, Jul 01, 2025 at 09:34:41PM +0200, David Hildenbrand wrote:
>> On 01.07.25 15:02, Lorenzo Stoakes wrote:
>>> On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
>>>> We can now simply check for PageAnon() and remove PageMappingFlags().
>>>>
>>>> ... and while at it, use the folio instead and operate on
>>>> folio->mapping.
>>>
>>> Probably worth mentioning to be super crystal clear that this is because
>>> now it's either an anon folio or a KSM folio, both of which set the
>>> FOLIO_MAPPING_ANON flag.
>>
>> "As PageMappingFlags() now only indicates anon (incl. ksm) folios, we can
>> now simply check for PageAnon() and remove PageMappingFlags()."
>
> Sounds good! Though the extremely nitty part of me says 'capitalise KSM' :P
Like we do so consistently with vma, pte and all the other acronyms ;)
Can do!
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
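The point discussed above — that a KSM folio also satisfies the anon test, because the KSM tag is a superset of the anon tag — can be sketched in a userspace toy model (the struct and helpers below are simplified stand-ins mirroring the flag values in page-flags.h, not the kernel's real definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins mirroring the tag values in include/linux/page-flags.h;
 * struct folio here is a toy, not the kernel's. */
#define PAGE_MAPPING_ANON     0x1UL
#define PAGE_MAPPING_ANON_KSM 0x2UL
#define PAGE_MAPPING_KSM      (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)

struct folio { uintptr_t mapping; };

/* True for both plain anon and KSM folios: KSM sets the anon bit too. */
static int folio_test_anon(const struct folio *folio)
{
	return (folio->mapping & PAGE_MAPPING_ANON) != 0;
}

/* True only when both tag bits are set. */
static int folio_test_ksm(const struct folio *folio)
{
	return (folio->mapping & PAGE_MAPPING_KSM) == PAGE_MAPPING_KSM;
}
```

Because the anon bit is part of the KSM tag, folio_test_anon() catches both cases — which is why the PageMappingFlags() check can be folded into PageAnon().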
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-02 9:02 ` David Hildenbrand
@ 2025-07-02 9:09 ` Lorenzo Stoakes
2025-07-02 9:16 ` David Hildenbrand
0 siblings, 1 reply; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-02 9:09 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Wed, Jul 02, 2025 at 11:02:21AM +0200, David Hildenbrand wrote:
> On 02.07.25 10:49, Lorenzo Stoakes wrote:
> > On Tue, Jul 01, 2025 at 09:34:41PM +0200, David Hildenbrand wrote:
> > > On 01.07.25 15:02, Lorenzo Stoakes wrote:
> > > > On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
> > > > > We can now simply check for PageAnon() and remove PageMappingFlags().
> > > > >
> > > > > ... and while at it, use the folio instead and operate on
> > > > > folio->mapping.
> > > >
> > > > Probably worth mentioning to be super crystal clear that this is because
> > > > now it's either an anon folio or a KSM folio, both of which set the
> > > > FOLIO_MAPPING_ANON flag.
> > >
> > > "As PageMappingFlags() now only indicates anon (incl. ksm) folios, we can
> > > now simply check for PageAnon() and remove PageMappingFlags()."
> >
> > Sounds good! Though the extremely nitty part of me says 'capitalise KSM' :P
>
> Like we do so consistently with vma, pte and all the other acronyms ;)
Don't forget pae which now means something different depending on whether you're
talking about x86-64 page tables or anon exclusive flags... :>)
Yeah, I mean it's throwing teaspoons of water out of a reservoir but might as
well :P
>
> Can do!
>
> --
> Cheers,
>
> David / dhildenb
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-07-02 9:09 ` Lorenzo Stoakes
@ 2025-07-02 9:16 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 9:16 UTC (permalink / raw)
To: Lorenzo Stoakes
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On 02.07.25 11:09, Lorenzo Stoakes wrote:
> On Wed, Jul 02, 2025 at 11:02:21AM +0200, David Hildenbrand wrote:
>> On 02.07.25 10:49, Lorenzo Stoakes wrote:
>>> On Tue, Jul 01, 2025 at 09:34:41PM +0200, David Hildenbrand wrote:
>>>> On 01.07.25 15:02, Lorenzo Stoakes wrote:
>>>>> On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
>>>>>> We can now simply check for PageAnon() and remove PageMappingFlags().
>>>>>>
>>>>>> ... and while at it, use the folio instead and operate on
>>>>>> folio->mapping.
>>>>>
>>>>> Probably worth mentioning to be super crystal clear that this is because
>>>>> now it's either an anon folio or a KSM folio, both of which set the
>>>>> FOLIO_MAPPING_ANON flag.
>>>>
>>>> "As PageMappingFlags() now only indicates anon (incl. ksm) folios, we can
>>>> now simply check for PageAnon() and remove PageMappingFlags()."
>>>
>>> Sounds good! Though the extremely nitty part of me says 'capitalise KSM' :P
>>
>> Like we do so consistently with vma, pte and all the other acronyms ;)
>
> Don't forget pae which now means something different depending on whether you're
> talking about x86-64 page tables or anon exclusive flags... :>)
Heh, I don't think we ever used PAE in the src + git log when talking
about PageAnonExclusive at least :)
--
Cheers,
David / dhildenb
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags()
2025-06-30 13:00 ` [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
2025-07-01 13:02 ` Lorenzo Stoakes
@ 2025-07-02 13:20 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:04PM +0200, David Hildenbrand wrote:
> We can now simply check for PageAnon() and remove PageMappingFlags().
>
> ... and while at it, use the folio instead and operate on
> folio->mapping.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (22 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 23/29] mm/page-alloc: remove PageMappingFlags() David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:03 ` Lorenzo Stoakes
2025-07-02 13:23 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
` (5 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
It's unused and the page counterpart is gone, so let's remove it.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/page-flags.h | 5 -----
1 file changed, 5 deletions(-)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f539bd5e14200..b42986a578b71 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -718,11 +718,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
-static __always_inline bool folio_mapping_flags(const struct folio *folio)
-{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
-}
-
static __always_inline bool folio_test_anon(const struct folio *folio)
{
return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags()
2025-06-30 13:00 ` [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
@ 2025-07-01 13:03 ` Lorenzo Stoakes
2025-07-02 13:23 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:03 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:05PM +0200, David Hildenbrand wrote:
> It's unused and the page counterpart is gone, so let's remove it.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/page-flags.h | 5 -----
> 1 file changed, 5 deletions(-)
>
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index f539bd5e14200..b42986a578b71 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -718,11 +718,6 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> #define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
> #define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
>
> -static __always_inline bool folio_mapping_flags(const struct folio *folio)
> -{
> - return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) != 0;
> -}
> -
> static __always_inline bool folio_test_anon(const struct folio *folio)
> {
> return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags()
2025-06-30 13:00 ` [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
2025-07-01 13:03 ` Lorenzo Stoakes
@ 2025-07-02 13:23 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:23 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:05PM +0200, David Hildenbrand wrote:
> It's unused and the page counterpart is gone, so let's remove it.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 25/29] mm: simplify folio_expected_ref_count()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (23 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 24/29] mm/page-flags: remove folio_mapping_flags() David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:15 ` Lorenzo Stoakes
2025-07-02 13:40 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
` (4 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
folio_test_anon() test only.
... but staring at the users, this function should never even have been
called on movable_ops pages. E.g.,
* __buffer_migrate_folio() does not make sense for them
* folio_migrate_mapping() does not make sense for them
* migrate_huge_page_move_mapping() does not make sense for them
* __migrate_folio() does not make sense for them
* ... and khugepaged should never stumble over them
Let's simply refuse typed pages (which includes slab) except hugetlb,
and WARN.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/mm.h | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6a5447bd43fd8..f6ef4c4eb536b 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2176,13 +2176,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
const int order = folio_order(folio);
int ref_count = 0;
- if (WARN_ON_ONCE(folio_test_slab(folio)))
+ if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
return 0;
if (folio_test_anon(folio)) {
/* One reference per page from the swapcache. */
ref_count += folio_test_swapcache(folio) << order;
- } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
+ } else {
/* One reference per page from the pagecache. */
ref_count += !!folio->mapping << order;
/* One reference from PG_private. */
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
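The control flow of folio_expected_ref_count() after this patch can be modeled roughly as follows. This is a userspace sketch: the bool fields stand in for the corresponding folio_test_*() helpers, and "mapped_refs" stands in for the references taken by page-table mappings, which the real function also accounts for.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a folio for illustrating the refcount logic. */
struct folio_model {
	int order;		/* folio_order() */
	bool has_type;		/* page_has_type() */
	bool hugetlb;		/* folio_test_hugetlb() */
	bool anon;		/* folio_test_anon() */
	bool swapcache;		/* folio_test_swapcache() */
	bool has_mapping;	/* folio->mapping != NULL */
	bool has_private;	/* PG_private set */
	int mapped_refs;	/* refs from page-table mappings */
};

static int expected_ref_count(const struct folio_model *f)
{
	int ref_count = 0;

	/* Refuse typed pages (slab, buddy, table, ...) except hugetlb. */
	if (f->has_type && !f->hugetlb)
		return 0;

	if (f->anon) {
		/* One reference per page from the swapcache. */
		ref_count += (int)f->swapcache << f->order;
	} else {
		/* One reference per page from the pagecache. */
		ref_count += (int)f->has_mapping << f->order;
		/* One reference from PG_private. */
		ref_count += (int)f->has_private;
	}

	return ref_count + f->mapped_refs;
}
```

For instance, an order-2 anon folio in the swapcache is expected to hold four references (one per page), while an order-0 pagecache folio with private data and one mapping is expected to hold three.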
* Re: [PATCH v1 25/29] mm: simplify folio_expected_ref_count()
2025-06-30 13:00 ` [PATCH v1 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
@ 2025-07-01 13:15 ` Lorenzo Stoakes
2025-07-02 13:40 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:15 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:06PM +0200, David Hildenbrand wrote:
> Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
> folio_test_anon() test only.
>
> ... but staring at the users, this function should never even have been
> called on movable_ops pages. E.g.,
> * __buffer_migrate_folio() does not make sense for them
> * folio_migrate_mapping() does not make sense for them
> * migrate_huge_page_move_mapping() does not make sense for them
> * __migrate_folio() does not make sense for them
> * ... and khugepaged should never stumble over them
>
> Let's simply refuse typed pages (which includes slab) except hugetlb,
> and WARN.
I guess also:
* PGTY_buddy - raw buddy allocator pages shouldn't be here...
* PGTY_table - nor page tables...
* PGTY_guard - nor whatever kind of guard this is I assume? (Not my precious guard regions :P)
* PGTY_unaccepted - nor unaccepted memory perhaps?
* PGTY_large_kmalloc - slab, shouldn't be here
I'd maybe delineate these cases also.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
On assumption no typed page should be tolerable here:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/mm.h | 4 ++--
> 1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 6a5447bd43fd8..f6ef4c4eb536b 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2176,13 +2176,13 @@ static inline int folio_expected_ref_count(const struct folio *folio)
> const int order = folio_order(folio);
> int ref_count = 0;
>
> - if (WARN_ON_ONCE(folio_test_slab(folio)))
> + if (WARN_ON_ONCE(page_has_type(&folio->page) && !folio_test_hugetlb(folio)))
> return 0;
>
> if (folio_test_anon(folio)) {
> /* One reference per page from the swapcache. */
> ref_count += folio_test_swapcache(folio) << order;
> - } else if (!((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS)) {
> + } else {
> /* One reference per page from the pagecache. */
> ref_count += !!folio->mapping << order;
> /* One reference from PG_private. */
> --
> 2.49.0
>
^ permalink raw reply [flat|nested] 138+ messages in thread
* Re: [PATCH v1 25/29] mm: simplify folio_expected_ref_count()
2025-06-30 13:00 ` [PATCH v1 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
2025-07-01 13:15 ` Lorenzo Stoakes
@ 2025-07-02 13:40 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 13:40 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:06PM +0200, David Hildenbrand wrote:
> Now that PAGE_MAPPING_MOVABLE is gone, we can simplify and rely on the
> folio_test_anon() test only.
>
> ... but staring at the users, this function should never even have been
> called on movable_ops pages. E.g.,
> * __buffer_migrate_folio() does not make sense for them
> * folio_migrate_mapping() does not make sense for them
> * migrate_huge_page_move_mapping() does not make sense for them
> * __migrate_folio() does not make sense for them
> * ... and khugepaged should never stumble over them
>
> Let's simply refuse typed pages (which includes slab) except hugetlb,
> and WARN.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
Yup, it doesn't really make sense to do this for typed pages
because they can't be mapped to userspace, except hugetlb.
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
^ permalink raw reply [flat|nested] 138+ messages in thread
* [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_*
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (24 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 25/29] mm: simplify folio_expected_ref_count() David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:17 ` Lorenzo Stoakes
2025-07-02 14:10 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
` (3 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Now that the mapping flags are only used for folios, let's rename the
defines.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
fs/proc/page.c | 4 ++--
include/linux/fs.h | 2 +-
include/linux/mm_types.h | 1 -
include/linux/page-flags.h | 20 ++++++++++----------
include/linux/pagemap.h | 2 +-
mm/gup.c | 4 ++--
mm/internal.h | 2 +-
mm/ksm.c | 4 ++--
mm/rmap.c | 16 ++++++++--------
mm/util.c | 6 +++---
10 files changed, 30 insertions(+), 31 deletions(-)
diff --git a/fs/proc/page.c b/fs/proc/page.c
index 999af26c72985..0cdc78c0d23fa 100644
--- a/fs/proc/page.c
+++ b/fs/proc/page.c
@@ -149,7 +149,7 @@ u64 stable_page_flags(const struct page *page)
k = folio->flags;
mapping = (unsigned long)folio->mapping;
- is_anon = mapping & PAGE_MAPPING_ANON;
+ is_anon = mapping & FOLIO_MAPPING_ANON;
/*
* pseudo flags for the well known (anonymous) memory mapped pages
@@ -158,7 +158,7 @@ u64 stable_page_flags(const struct page *page)
u |= 1 << KPF_MMAP;
if (is_anon) {
u |= 1 << KPF_ANON;
- if (mapping & PAGE_MAPPING_KSM)
+ if (mapping & FOLIO_MAPPING_KSM)
u |= 1 << KPF_KSM;
}
diff --git a/include/linux/fs.h b/include/linux/fs.h
index c68c9a07cda33..9b0de18746815 100644
--- a/include/linux/fs.h
+++ b/include/linux/fs.h
@@ -526,7 +526,7 @@ struct address_space {
/*
* On most architectures that alignment is already the case; but
* must be enforced here for CRIS, to let the least significant bit
- * of struct page's "mapping" pointer be used for PAGE_MAPPING_ANON.
+ * of struct folio's "mapping" pointer be used for FOLIO_MAPPING_ANON.
*/
/* XArray tags, for tagging dirty and writeback pages in the pagecache. */
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 804d269a4f5e8..1ec273b066915 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -105,7 +105,6 @@ struct page {
unsigned int order;
};
};
- /* See page-flags.h for PAGE_MAPPING_FLAGS */
struct address_space *mapping;
union {
pgoff_t __folio_index; /* Our offset within mapping. */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b42986a578b71..23b1e458dfeda 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -695,10 +695,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
/*
* On an anonymous folio mapped into a user virtual memory area,
* folio->mapping points to its anon_vma, not to a struct address_space;
- * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
+ * with the FOLIO_MAPPING_ANON bit set to distinguish it. See rmap.h.
*
* On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
- * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
+ * the FOLIO_MAPPING_ANON_KSM bit may be set along with the FOLIO_MAPPING_ANON
* bit; and then folio->mapping points, not to an anon_vma, but to a private
* structure which KSM associates with that merged folio. See ksm.h.
*
@@ -713,21 +713,21 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
* false before calling the following functions (e.g., folio_test_anon).
* See mm/slab.h.
*/
-#define PAGE_MAPPING_ANON 0x1
-#define PAGE_MAPPING_ANON_KSM 0x2
-#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
-#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
+#define FOLIO_MAPPING_ANON 0x1
+#define FOLIO_MAPPING_ANON_KSM 0x2
+#define FOLIO_MAPPING_KSM (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
+#define FOLIO_MAPPING_FLAGS (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
static __always_inline bool folio_test_anon(const struct folio *folio)
{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
+ return ((unsigned long)folio->mapping & FOLIO_MAPPING_ANON) != 0;
}
static __always_inline bool PageAnonNotKsm(const struct page *page)
{
unsigned long flags = (unsigned long)page_folio(page)->mapping;
- return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
+ return (flags & FOLIO_MAPPING_FLAGS) == FOLIO_MAPPING_ANON;
}
static __always_inline bool PageAnon(const struct page *page)
@@ -743,8 +743,8 @@ static __always_inline bool PageAnon(const struct page *page)
*/
static __always_inline bool folio_test_ksm(const struct folio *folio)
{
- return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
- PAGE_MAPPING_KSM;
+ return ((unsigned long)folio->mapping & FOLIO_MAPPING_FLAGS) ==
+ FOLIO_MAPPING_KSM;
}
#else
FOLIO_TEST_FLAG_FALSE(ksm)
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index e63fbfbd5b0f3..10a222e68b851 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -502,7 +502,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
static inline bool mapping_large_folio_support(struct address_space *mapping)
{
/* AS_FOLIO_ORDER is only reasonable for pagecache folios */
- VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
+ VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
"Anonymous mapping always supports large folio");
return mapping_max_folio_order(mapping) > 0;
diff --git a/mm/gup.c b/mm/gup.c
index 30d320719fa23..adffe663594dc 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -2804,9 +2804,9 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
return false;
/* Anonymous folios pose no problem. */
- mapping_flags = (unsigned long)mapping & PAGE_MAPPING_FLAGS;
+ mapping_flags = (unsigned long)mapping & FOLIO_MAPPING_FLAGS;
if (mapping_flags)
- return mapping_flags & PAGE_MAPPING_ANON;
+ return mapping_flags & FOLIO_MAPPING_ANON;
/*
* At this point, we know the mapping is non-null and points to an
diff --git a/mm/internal.h b/mm/internal.h
index e84217e27778d..c29ddec7ade3d 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -149,7 +149,7 @@ static inline void *folio_raw_mapping(const struct folio *folio)
{
unsigned long mapping = (unsigned long)folio->mapping;
- return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
+ return (void *)(mapping & ~FOLIO_MAPPING_FLAGS);
}
/*
diff --git a/mm/ksm.c b/mm/ksm.c
index ef73b25fd65a6..2b0210d41c553 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -893,7 +893,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
unsigned long kpfn;
expected_mapping = (void *)((unsigned long)stable_node |
- PAGE_MAPPING_KSM);
+ FOLIO_MAPPING_KSM);
again:
kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
folio = pfn_folio(kpfn);
@@ -1070,7 +1070,7 @@ static inline void folio_set_stable_node(struct folio *folio,
struct ksm_stable_node *stable_node)
{
VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page), folio);
- folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
+ folio->mapping = (void *)((unsigned long)stable_node | FOLIO_MAPPING_KSM);
}
#ifdef CONFIG_SYSFS
diff --git a/mm/rmap.c b/mm/rmap.c
index 34311f654d0c2..de14fb6963c24 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -503,12 +503,12 @@ struct anon_vma *folio_get_anon_vma(const struct folio *folio)
rcu_read_lock();
anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
- if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
goto out;
if (!folio_mapped(folio))
goto out;
- anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+ anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
if (!atomic_inc_not_zero(&anon_vma->refcount)) {
anon_vma = NULL;
goto out;
@@ -550,12 +550,12 @@ struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
retry:
rcu_read_lock();
anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
- if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
goto out;
if (!folio_mapped(folio))
goto out;
- anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
+ anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
root_anon_vma = READ_ONCE(anon_vma->root);
if (down_read_trylock(&root_anon_vma->rwsem)) {
/*
@@ -1334,9 +1334,9 @@ void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
VM_BUG_ON_VMA(!anon_vma, vma);
- anon_vma += PAGE_MAPPING_ANON;
+ anon_vma += FOLIO_MAPPING_ANON;
/*
- * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
+ * Ensure that anon_vma and the FOLIO_MAPPING_ANON bit are written
* simultaneously, so a concurrent reader (eg folio_referenced()'s
* folio_test_anon()) will not see one without the other.
*/
@@ -1367,10 +1367,10 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
/*
* page_idle does a lockless/optimistic rmap scan on folio->mapping.
* Make sure the compiler doesn't split the stores of anon_vma and
- * the PAGE_MAPPING_ANON type identifier, otherwise the rmap code
+ * the FOLIO_MAPPING_ANON type identifier, otherwise the rmap code
* could mistake the mapping for a struct address_space and crash.
*/
- anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
+ anon_vma = (void *) anon_vma + FOLIO_MAPPING_ANON;
WRITE_ONCE(folio->mapping, (struct address_space *) anon_vma);
folio->index = linear_page_index(vma, address);
}
diff --git a/mm/util.c b/mm/util.c
index 0b270c43d7d12..20bbfe4ce1b8b 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -670,9 +670,9 @@ struct anon_vma *folio_anon_vma(const struct folio *folio)
{
unsigned long mapping = (unsigned long)folio->mapping;
- if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
+ if ((mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
return NULL;
- return (void *)(mapping - PAGE_MAPPING_ANON);
+ return (void *)(mapping - FOLIO_MAPPING_ANON);
}
/**
@@ -699,7 +699,7 @@ struct address_space *folio_mapping(struct folio *folio)
return swap_address_space(folio->swap);
mapping = folio->mapping;
- if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
+ if ((unsigned long)mapping & FOLIO_MAPPING_FLAGS)
return NULL;
return mapping;
--
2.49.0
^ permalink raw reply related [flat|nested] 138+ messages in thread
* Re: [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_*
2025-06-30 13:00 ` [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
@ 2025-07-01 13:17 ` Lorenzo Stoakes
2025-07-02 14:10 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:17 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:07PM +0200, David Hildenbrand wrote:
> Now that the mapping flags are only used for folios, let's rename the
> defines.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
As the official King of Churn (TM) I approve of this :)
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> fs/proc/page.c | 4 ++--
> include/linux/fs.h | 2 +-
> include/linux/mm_types.h | 1 -
> include/linux/page-flags.h | 20 ++++++++++----------
> include/linux/pagemap.h | 2 +-
> mm/gup.c | 4 ++--
> mm/internal.h | 2 +-
> mm/ksm.c | 4 ++--
> mm/rmap.c | 16 ++++++++--------
> mm/util.c | 6 +++---
> 10 files changed, 30 insertions(+), 31 deletions(-)
>
> diff --git a/fs/proc/page.c b/fs/proc/page.c
> index 999af26c72985..0cdc78c0d23fa 100644
> --- a/fs/proc/page.c
> +++ b/fs/proc/page.c
> @@ -149,7 +149,7 @@ u64 stable_page_flags(const struct page *page)
>
> k = folio->flags;
> mapping = (unsigned long)folio->mapping;
> - is_anon = mapping & PAGE_MAPPING_ANON;
> + is_anon = mapping & FOLIO_MAPPING_ANON;
>
> /*
> * pseudo flags for the well known (anonymous) memory mapped pages
> @@ -158,7 +158,7 @@ u64 stable_page_flags(const struct page *page)
> u |= 1 << KPF_MMAP;
> if (is_anon) {
> u |= 1 << KPF_ANON;
> - if (mapping & PAGE_MAPPING_KSM)
> + if (mapping & FOLIO_MAPPING_KSM)
> u |= 1 << KPF_KSM;
> }
>
> diff --git a/include/linux/fs.h b/include/linux/fs.h
> index c68c9a07cda33..9b0de18746815 100644
> --- a/include/linux/fs.h
> +++ b/include/linux/fs.h
> @@ -526,7 +526,7 @@ struct address_space {
> /*
> * On most architectures that alignment is already the case; but
> * must be enforced here for CRIS, to let the least significant bit
> - * of struct page's "mapping" pointer be used for PAGE_MAPPING_ANON.
> + * of struct folio's "mapping" pointer be used for FOLIO_MAPPING_ANON.
> */
>
> /* XArray tags, for tagging dirty and writeback pages in the pagecache. */
> diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
> index 804d269a4f5e8..1ec273b066915 100644
> --- a/include/linux/mm_types.h
> +++ b/include/linux/mm_types.h
> @@ -105,7 +105,6 @@ struct page {
> unsigned int order;
> };
> };
> - /* See page-flags.h for PAGE_MAPPING_FLAGS */
> struct address_space *mapping;
> union {
> pgoff_t __folio_index; /* Our offset within mapping. */
> diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> index b42986a578b71..23b1e458dfeda 100644
> --- a/include/linux/page-flags.h
> +++ b/include/linux/page-flags.h
> @@ -695,10 +695,10 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> /*
> * On an anonymous folio mapped into a user virtual memory area,
> * folio->mapping points to its anon_vma, not to a struct address_space;
> - * with the PAGE_MAPPING_ANON bit set to distinguish it. See rmap.h.
> + * with the FOLIO_MAPPING_ANON bit set to distinguish it. See rmap.h.
> *
> * On an anonymous folio in a VM_MERGEABLE area, if CONFIG_KSM is enabled,
> - * the PAGE_MAPPING_ANON_KSM bit may be set along with the PAGE_MAPPING_ANON
> + * the FOLIO_MAPPING_ANON_KSM bit may be set along with the FOLIO_MAPPING_ANON
> * bit; and then folio->mapping points, not to an anon_vma, but to a private
> * structure which KSM associates with that merged folio. See ksm.h.
> *
> @@ -713,21 +713,21 @@ PAGEFLAG_FALSE(VmemmapSelfHosted, vmemmap_self_hosted)
> * false before calling the following functions (e.g., folio_test_anon).
> * See mm/slab.h.
> */
> -#define PAGE_MAPPING_ANON 0x1
> -#define PAGE_MAPPING_ANON_KSM 0x2
> -#define PAGE_MAPPING_KSM (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
> -#define PAGE_MAPPING_FLAGS (PAGE_MAPPING_ANON | PAGE_MAPPING_ANON_KSM)
> +#define FOLIO_MAPPING_ANON 0x1
> +#define FOLIO_MAPPING_ANON_KSM 0x2
> +#define FOLIO_MAPPING_KSM (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
> +#define FOLIO_MAPPING_FLAGS (FOLIO_MAPPING_ANON | FOLIO_MAPPING_ANON_KSM)
>
> static __always_inline bool folio_test_anon(const struct folio *folio)
> {
> - return ((unsigned long)folio->mapping & PAGE_MAPPING_ANON) != 0;
> + return ((unsigned long)folio->mapping & FOLIO_MAPPING_ANON) != 0;
> }
>
> static __always_inline bool PageAnonNotKsm(const struct page *page)
> {
> unsigned long flags = (unsigned long)page_folio(page)->mapping;
>
> - return (flags & PAGE_MAPPING_FLAGS) == PAGE_MAPPING_ANON;
> + return (flags & FOLIO_MAPPING_FLAGS) == FOLIO_MAPPING_ANON;
> }
>
> static __always_inline bool PageAnon(const struct page *page)
> @@ -743,8 +743,8 @@ static __always_inline bool PageAnon(const struct page *page)
> */
> static __always_inline bool folio_test_ksm(const struct folio *folio)
> {
> - return ((unsigned long)folio->mapping & PAGE_MAPPING_FLAGS) ==
> - PAGE_MAPPING_KSM;
> + return ((unsigned long)folio->mapping & FOLIO_MAPPING_FLAGS) ==
> + FOLIO_MAPPING_KSM;
> }
> #else
> FOLIO_TEST_FLAG_FALSE(ksm)
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index e63fbfbd5b0f3..10a222e68b851 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -502,7 +502,7 @@ static inline pgoff_t mapping_align_index(struct address_space *mapping,
> static inline bool mapping_large_folio_support(struct address_space *mapping)
> {
> /* AS_FOLIO_ORDER is only reasonable for pagecache folios */
> - VM_WARN_ONCE((unsigned long)mapping & PAGE_MAPPING_ANON,
> + VM_WARN_ONCE((unsigned long)mapping & FOLIO_MAPPING_ANON,
> "Anonymous mapping always supports large folio");
>
> return mapping_max_folio_order(mapping) > 0;
> diff --git a/mm/gup.c b/mm/gup.c
> index 30d320719fa23..adffe663594dc 100644
> --- a/mm/gup.c
> +++ b/mm/gup.c
> @@ -2804,9 +2804,9 @@ static bool gup_fast_folio_allowed(struct folio *folio, unsigned int flags)
> return false;
>
> /* Anonymous folios pose no problem. */
> - mapping_flags = (unsigned long)mapping & PAGE_MAPPING_FLAGS;
> + mapping_flags = (unsigned long)mapping & FOLIO_MAPPING_FLAGS;
> if (mapping_flags)
> - return mapping_flags & PAGE_MAPPING_ANON;
> + return mapping_flags & FOLIO_MAPPING_ANON;
>
> /*
> * At this point, we know the mapping is non-null and points to an
> diff --git a/mm/internal.h b/mm/internal.h
> index e84217e27778d..c29ddec7ade3d 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -149,7 +149,7 @@ static inline void *folio_raw_mapping(const struct folio *folio)
> {
> unsigned long mapping = (unsigned long)folio->mapping;
>
> - return (void *)(mapping & ~PAGE_MAPPING_FLAGS);
> + return (void *)(mapping & ~FOLIO_MAPPING_FLAGS);
> }
>
> /*
> diff --git a/mm/ksm.c b/mm/ksm.c
> index ef73b25fd65a6..2b0210d41c553 100644
> --- a/mm/ksm.c
> +++ b/mm/ksm.c
> @@ -893,7 +893,7 @@ static struct folio *ksm_get_folio(struct ksm_stable_node *stable_node,
> unsigned long kpfn;
>
> expected_mapping = (void *)((unsigned long)stable_node |
> - PAGE_MAPPING_KSM);
> + FOLIO_MAPPING_KSM);
> again:
> kpfn = READ_ONCE(stable_node->kpfn); /* Address dependency. */
> folio = pfn_folio(kpfn);
> @@ -1070,7 +1070,7 @@ static inline void folio_set_stable_node(struct folio *folio,
> struct ksm_stable_node *stable_node)
> {
> VM_WARN_ON_FOLIO(folio_test_anon(folio) && PageAnonExclusive(&folio->page), folio);
> - folio->mapping = (void *)((unsigned long)stable_node | PAGE_MAPPING_KSM);
> + folio->mapping = (void *)((unsigned long)stable_node | FOLIO_MAPPING_KSM);
> }
>
> #ifdef CONFIG_SYSFS
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 34311f654d0c2..de14fb6963c24 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -503,12 +503,12 @@ struct anon_vma *folio_get_anon_vma(const struct folio *folio)
>
> rcu_read_lock();
> anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
> - if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> + if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
> goto out;
> if (!folio_mapped(folio))
> goto out;
>
> - anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> + anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
> if (!atomic_inc_not_zero(&anon_vma->refcount)) {
> anon_vma = NULL;
> goto out;
> @@ -550,12 +550,12 @@ struct anon_vma *folio_lock_anon_vma_read(const struct folio *folio,
> retry:
> rcu_read_lock();
> anon_mapping = (unsigned long)READ_ONCE(folio->mapping);
> - if ((anon_mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> + if ((anon_mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
> goto out;
> if (!folio_mapped(folio))
> goto out;
>
> - anon_vma = (struct anon_vma *) (anon_mapping - PAGE_MAPPING_ANON);
> + anon_vma = (struct anon_vma *) (anon_mapping - FOLIO_MAPPING_ANON);
> root_anon_vma = READ_ONCE(anon_vma->root);
> if (down_read_trylock(&root_anon_vma->rwsem)) {
> /*
> @@ -1334,9 +1334,9 @@ void folio_move_anon_rmap(struct folio *folio, struct vm_area_struct *vma)
> VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> VM_BUG_ON_VMA(!anon_vma, vma);
>
> - anon_vma += PAGE_MAPPING_ANON;
> + anon_vma += FOLIO_MAPPING_ANON;
> /*
> - * Ensure that anon_vma and the PAGE_MAPPING_ANON bit are written
> + * Ensure that anon_vma and the FOLIO_MAPPING_ANON bit are written
> * simultaneously, so a concurrent reader (eg folio_referenced()'s
> * folio_test_anon()) will not see one without the other.
> */
> @@ -1367,10 +1367,10 @@ static void __folio_set_anon(struct folio *folio, struct vm_area_struct *vma,
> /*
> * page_idle does a lockless/optimistic rmap scan on folio->mapping.
> * Make sure the compiler doesn't split the stores of anon_vma and
> - * the PAGE_MAPPING_ANON type identifier, otherwise the rmap code
> + * the FOLIO_MAPPING_ANON type identifier, otherwise the rmap code
> * could mistake the mapping for a struct address_space and crash.
> */
> - anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
> + anon_vma = (void *) anon_vma + FOLIO_MAPPING_ANON;
> WRITE_ONCE(folio->mapping, (struct address_space *) anon_vma);
> folio->index = linear_page_index(vma, address);
> }
> diff --git a/mm/util.c b/mm/util.c
> index 0b270c43d7d12..20bbfe4ce1b8b 100644
> --- a/mm/util.c
> +++ b/mm/util.c
> @@ -670,9 +670,9 @@ struct anon_vma *folio_anon_vma(const struct folio *folio)
> {
> unsigned long mapping = (unsigned long)folio->mapping;
>
> - if ((mapping & PAGE_MAPPING_FLAGS) != PAGE_MAPPING_ANON)
> + if ((mapping & FOLIO_MAPPING_FLAGS) != FOLIO_MAPPING_ANON)
> return NULL;
> - return (void *)(mapping - PAGE_MAPPING_ANON);
> + return (void *)(mapping - FOLIO_MAPPING_ANON);
> }
>
> /**
> @@ -699,7 +699,7 @@ struct address_space *folio_mapping(struct folio *folio)
> return swap_address_space(folio->swap);
>
> mapping = folio->mapping;
> - if ((unsigned long)mapping & PAGE_MAPPING_FLAGS)
> + if ((unsigned long)mapping & FOLIO_MAPPING_FLAGS)
> return NULL;
>
> return mapping;
> --
> 2.49.0
>
* Re: [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_*
2025-06-30 13:00 ` [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
2025-07-01 13:17 ` Lorenzo Stoakes
@ 2025-07-02 14:10 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 14:10 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:07PM +0200, David Hildenbrand wrote:
> Now that the mapping flags are only used for folios, let's rename the
> defines.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
LGTM,
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
--
Cheers,
Harry / Hyeonggon
* [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration"
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (25 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 26/29] mm: rename PAGE_MAPPING_* to FOLIO_MAPPING_* David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:19 ` Lorenzo Stoakes
2025-07-02 14:23 ` Harry Yoo
2025-06-30 13:00 ` [PATCH v1 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
` (2 subsequent siblings)
29 siblings, 2 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's bring the docs up-to-date.
Reviewed-by: Zi Yan <ziy@nvidia.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
Documentation/mm/page_migration.rst | 39 ++++++++++++++++++++---------
1 file changed, 27 insertions(+), 12 deletions(-)
diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
index 519b35a4caf5b..d611bc21920d7 100644
--- a/Documentation/mm/page_migration.rst
+++ b/Documentation/mm/page_migration.rst
@@ -146,18 +146,33 @@ Steps:
18. The new page is moved to the LRU and can be scanned by the swapper,
etc. again.
-Non-LRU page migration
-======================
-
-Although migration originally aimed for reducing the latency of memory
-accesses for NUMA, compaction also uses migration to create high-order
-pages. For compaction purposes, it is also useful to be able to move
-non-LRU pages, such as zsmalloc and virtio-balloon pages.
-
-If a driver wants to make its pages movable, it should define a struct
-movable_operations. It then needs to call __SetPageMovable() on each
-page that it may be able to move. This uses the ``page->mapping`` field,
-so this field is not available for the driver to use for other purposes.
+movable_ops page migration
+==========================
+
+Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
+zsmalloc pages) can be migrated using the movable_ops migration framework.
+
+The "struct movable_operations" provides callbacks specific to a page type
+for isolating, migrating and un-isolating (putback) these pages.
+
+Once a page is indicated as having movable_ops, that condition must not
+change until the page has been freed back to the buddy. This includes not
+changing/clearing the page type and not changing/clearing the
+PG_movable_ops page flag.
+
+Arbitrary drivers cannot currently make use of this framework, as it
+requires:
+
+(a) a page type
+(b) indicating them as possibly having movable_ops in page_has_movable_ops()
+ based on the page type
+(c) returning the movable_ops from page_has_movable_ops() based on the page
+ type
+(d) not reusing the PG_movable_ops and PG_movable_ops_isolated page flags
+ for other purposes
+
+For example, balloon drivers can make use of this framework through the
+balloon-compaction infrastructure residing in the core kernel.
Monitoring Migration
=====================
--
2.49.0
* Re: [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration"
2025-06-30 13:00 ` [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
@ 2025-07-01 13:19 ` Lorenzo Stoakes
2025-07-02 14:23 ` Harry Yoo
1 sibling, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:19 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:08PM +0200, David Hildenbrand wrote:
> Let's bring the docs up-to-date.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> Documentation/mm/page_migration.rst | 39 ++++++++++++++++++++---------
> 1 file changed, 27 insertions(+), 12 deletions(-)
>
> diff --git a/Documentation/mm/page_migration.rst b/Documentation/mm/page_migration.rst
> index 519b35a4caf5b..d611bc21920d7 100644
> --- a/Documentation/mm/page_migration.rst
> +++ b/Documentation/mm/page_migration.rst
> @@ -146,18 +146,33 @@ Steps:
> 18. The new page is moved to the LRU and can be scanned by the swapper,
> etc. again.
>
> -Non-LRU page migration
> -======================
> -
> -Although migration originally aimed for reducing the latency of memory
> -accesses for NUMA, compaction also uses migration to create high-order
> -pages. For compaction purposes, it is also useful to be able to move
> -non-LRU pages, such as zsmalloc and virtio-balloon pages.
> -
> -If a driver wants to make its pages movable, it should define a struct
> -movable_operations. It then needs to call __SetPageMovable() on each
> -page that it may be able to move. This uses the ``page->mapping`` field,
> -so this field is not available for the driver to use for other purposes.
> +movable_ops page migration
> +==========================
Bye bye inaccurate reference to LRU :)
> +
> +Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
> +zsmalloc pages) can be migrated using the movable_ops migration framework.
> +
> +The "struct movable_operations" provide callbacks specific to a page type
> +for isolating, migrating and un-isolating (putback) these pages.
> +
> +Once a page is indicated as having movable_ops, that condition must not
> +change until the page was freed back to the buddy. This includes not
> +changing/clearing the page type and not changing/clearing the
> +PG_movable_ops page flag.
> +
> +Arbitrary drivers cannot currently make use of this framework, as it
> +requires:
> +
> +(a) a page type
> +(b) indicating them as possibly having movable_ops in page_has_movable_ops()
> + based on the page type
> +(c) returning the movable_ops from page_has_movable_ops() based on the page
> + type
> +(d) not reusing the PG_movable_ops and PG_movable_ops_isolated page flags
> + for other purposes
> +
> +For example, balloon drivers can make use of this framework through the
> +balloon-compaction infrastructure residing in the core kernel.
>
> Monitoring Migration
> =====================
> --
> 2.49.0
>
* Re: [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration"
2025-06-30 13:00 ` [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
2025-07-01 13:19 ` Lorenzo Stoakes
@ 2025-07-02 14:23 ` Harry Yoo
2025-07-02 14:52 ` David Hildenbrand
1 sibling, 1 reply; 138+ messages in thread
From: Harry Yoo @ 2025-07-02 14:23 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:08PM +0200, David Hildenbrand wrote:
> Let's bring the docs up-to-date.
>
> Reviewed-by: Zi Yan <ziy@nvidia.com>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>
> +movable_ops page migration
> +==========================
> +
> +Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
> +zsmalloc pages) can be migrated using the movable_ops migration framework.
> +
> +The "struct movable_operations" provide callbacks specific to a page type
> +for isolating, migrating and un-isolating (putback) these pages.
> +
> +Once a page is indicated as having movable_ops, that condition must not
> +change until the page was freed back to the buddy. This includes not
> +changing/clearing the page type and not changing/clearing the
> +PG_movable_ops page flag.
> +
> +Arbitrary drivers cannot currently make use of this framework, as it
> +requires:
> +
> +(a) a page type
> +(b) indicating them as possibly having movable_ops in page_has_movable_ops()
> + based on the page type
> +(c) returning the movable_ops from page_has_movable_ops() based on the page
> + type
I think you meant page_movable_ops()?
Otherwise LGTM :)
Reviewed-by: Harry Yoo <harry.yoo@oracle.com>
> +(d) not reusing the PG_movable_ops and PG_movable_ops_isolated page flags
> + for other purposes
> +
> +For example, balloon drivers can make use of this framework through the
> +balloon-compaction infrastructure residing in the core kernel.
--
Cheers,
Harry / Hyeonggon
* Re: [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration"
2025-07-02 14:23 ` Harry Yoo
@ 2025-07-02 14:52 ` David Hildenbrand
0 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-02 14:52 UTC (permalink / raw)
To: Harry Yoo
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Qi Zheng, Shakeel Butt
On 02.07.25 16:23, Harry Yoo wrote:
> On Mon, Jun 30, 2025 at 03:00:08PM +0200, David Hildenbrand wrote:
>> Let's bring the docs up-to-date.
>>
>> Reviewed-by: Zi Yan <ziy@nvidia.com>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>
>> +movable_ops page migration
>> +==========================
>> +
>> +Selected typed, non-folio pages (e.g., pages inflated in a memory balloon,
>> +zsmalloc pages) can be migrated using the movable_ops migration framework.
>> +
>> +The "struct movable_operations" provide callbacks specific to a page type
>> +for isolating, migrating and un-isolating (putback) these pages.
>> +
>> +Once a page is indicated as having movable_ops, that condition must not
>> +change until the page was freed back to the buddy. This includes not
>> +changing/clearing the page type and not changing/clearing the
>> +PG_movable_ops page flag.
>> +
>> +Arbitrary drivers cannot currently make use of this framework, as it
>> +requires:
>> +
>> +(a) a page type
>> +(b) indicating them as possibly having movable_ops in page_has_movable_ops()
>> + based on the page type
>
>> +(c) returning the movable_ops from page_has_movable_ops() based on the page
>> + type
>
> I think you meant page_movable_ops()?
Very right, thanks!
--
Cheers,
David / dhildenb
* [PATCH v1 28/29] mm/balloon_compaction: "movable_ops" doc updates
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (26 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 27/29] docs/mm: convert from "Non-LRU page migration" to "movable_ops page migration" David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:20 ` Lorenzo Stoakes
2025-06-30 13:00 ` [PATCH v1 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
2025-07-01 19:38 ` [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's bring the docs up-to-date. Setting PG_movable_ops + page->private
very likely still needs to be performed under the documented locks:
it's complicated.
We will rework this in the future, as we will try to avoid using the
page lock.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 13 ++++++-------
1 file changed, 6 insertions(+), 7 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index b222b0737c466..2fecfead91d26 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -4,12 +4,13 @@
*
* Common interface definitions for making balloon pages movable by compaction.
*
- * Balloon page migration makes use of the general non-lru movable page
+ * Balloon page migration makes use of the general "movable_ops page migration"
* feature.
*
* page->private is used to reference the responsible balloon device.
- * page->mapping is used in context of non-lru page migration to reference
- * the address space operations for page isolation/migration/compaction.
+ * That these pages have movable_ops, and which movable_ops apply,
+ * is derived from the page type (PageOffline()) combined with the
+ * PG_movable_ops flag (PageMovableOps()).
*
* As the page isolation scanning step a compaction thread does is a lockless
* procedure (from a page standpoint), it might bring some racy situations while
@@ -17,12 +18,10 @@
* and safely perform balloon's page compaction and migration we must, always,
* ensure following these simple rules:
*
- * i. when updating a balloon's page ->mapping element, strictly do it under
- * the following lock order, independently of the far superior
- * locking scheme (lru_lock, balloon_lock):
+ * i. Setting the PG_movable_ops flag and page->private with the following
+ * lock order
* +-page_lock(page);
* +--spin_lock_irq(&b_dev_info->pages_lock);
- * ... page->mapping updates here ...
*
* ii. isolation or dequeueing procedure must remove the page from balloon
* device page list under b_dev_info->pages_lock.
--
2.49.0
* Re: [PATCH v1 28/29] mm/balloon_compaction: "movable_ops" doc updates
2025-06-30 13:00 ` [PATCH v1 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
@ 2025-07-01 13:20 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:20 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:09PM +0200, David Hildenbrand wrote:
> Let's bring the docs up-to-date. Setting PG_movable_ops + page->private
> very likely still has to be performed under the documented locks:
> it's complicated.
>
> We will rework this in the future, as we will try avoiding using the
> page lock.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 13 ++++++-------
> 1 file changed, 6 insertions(+), 7 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index b222b0737c466..2fecfead91d26 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -4,12 +4,13 @@
> *
> * Common interface definitions for making balloon pages movable by compaction.
> *
> - * Balloon page migration makes use of the general non-lru movable page
> + * Balloon page migration makes use of the general "movable_ops page migration"
> * feature.
> *
> * page->private is used to reference the responsible balloon device.
> - * page->mapping is used in context of non-lru page migration to reference
> - * the address space operations for page isolation/migration/compaction.
> + * That these pages have movable_ops, and which movable_ops apply,
> + * is derived from the page type (PageOffline()) combined with the
> + * PG_movable_ops flag (PageMovableOps()).
> *
> * As the page isolation scanning step a compaction thread does is a lockless
> * procedure (from a page standpoint), it might bring some racy situations while
> @@ -17,12 +18,10 @@
> * and safely perform balloon's page compaction and migration we must, always,
> * ensure following these simple rules:
> *
> - * i. when updating a balloon's page ->mapping element, strictly do it under
> - * the following lock order, independently of the far superior
> - * locking scheme (lru_lock, balloon_lock):
> + * i. Setting the PG_movable_ops flag and page->private with the following
> + * lock order
> * +-page_lock(page);
> * +--spin_lock_irq(&b_dev_info->pages_lock);
> - * ... page->mapping updates here ...
> *
> * ii. isolation or dequeueing procedure must remove the page from balloon
> * device page list under b_dev_info->pages_lock.
> --
> 2.49.0
>
* [PATCH v1 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask()
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (27 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 28/29] mm/balloon_compaction: "movable_ops" doc updates David Hildenbrand
@ 2025-06-30 13:00 ` David Hildenbrand
2025-07-01 13:22 ` Lorenzo Stoakes
2025-07-01 19:38 ` [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
29 siblings, 1 reply; 138+ messages in thread
From: David Hildenbrand @ 2025-06-30 13:00 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
David Hildenbrand, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
Let's just special-case based on IS_ENABLED(CONFIG_BALLOON_COMPACTION)
like we did for balloon_page_finalize().
Signed-off-by: David Hildenbrand <david@redhat.com>
---
include/linux/balloon_compaction.h | 42 +++++++++++-------------------
1 file changed, 15 insertions(+), 27 deletions(-)
diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
index 2fecfead91d26..7cfe48769239e 100644
--- a/include/linux/balloon_compaction.h
+++ b/include/linux/balloon_compaction.h
@@ -77,6 +77,15 @@ static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
#ifdef CONFIG_BALLOON_COMPACTION
extern const struct movable_operations balloon_mops;
+/*
+ * balloon_page_device - get the b_dev_info descriptor for the balloon device
+ * that enqueues the given page.
+ */
+static inline struct balloon_dev_info *balloon_page_device(struct page *page)
+{
+ return (struct balloon_dev_info *)page_private(page);
+}
+#endif /* CONFIG_BALLOON_COMPACTION */
/*
* balloon_page_insert - insert a page into the balloon's page list and make
@@ -91,41 +100,20 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
struct page *page)
{
__SetPageOffline(page);
- SetPageMovableOps(page);
- set_page_private(page, (unsigned long)balloon);
- list_add(&page->lru, &balloon->pages);
-}
-
-/*
- * balloon_page_device - get the b_dev_info descriptor for the balloon device
- * that enqueues the given page.
- */
-static inline struct balloon_dev_info *balloon_page_device(struct page *page)
-{
- return (struct balloon_dev_info *)page_private(page);
-}
-
-static inline gfp_t balloon_mapping_gfp_mask(void)
-{
- return GFP_HIGHUSER_MOVABLE;
-}
-
-#else /* !CONFIG_BALLOON_COMPACTION */
-
-static inline void balloon_page_insert(struct balloon_dev_info *balloon,
- struct page *page)
-{
- __SetPageOffline(page);
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
+ SetPageMovableOps(page);
+ set_page_private(page, (unsigned long)balloon);
+ }
list_add(&page->lru, &balloon->pages);
}
static inline gfp_t balloon_mapping_gfp_mask(void)
{
+ if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
+ return GFP_HIGHUSER_MOVABLE;
return GFP_HIGHUSER;
}
-#endif /* CONFIG_BALLOON_COMPACTION */
-
/*
* balloon_page_finalize - prepare a balloon page that was removed from the
* balloon list for release to the page allocator
--
2.49.0
* Re: [PATCH v1 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask()
2025-06-30 13:00 ` [PATCH v1 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
@ 2025-07-01 13:22 ` Lorenzo Stoakes
0 siblings, 0 replies; 138+ messages in thread
From: Lorenzo Stoakes @ 2025-07-01 13:22 UTC (permalink / raw)
To: David Hildenbrand
Cc: linux-kernel, linux-mm, linux-doc, linuxppc-dev, virtualization,
linux-fsdevel, Andrew Morton, Jonathan Corbet,
Madhavan Srinivasan, Michael Ellerman, Nicholas Piggin,
Christophe Leroy, Jerrin Shaji George, Arnd Bergmann,
Greg Kroah-Hartman, Michael S. Tsirkin, Jason Wang, Xuan Zhuo,
Eugenio Pérez, Alexander Viro, Christian Brauner, Jan Kara,
Zi Yan, Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Liam R. Howlett,
Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Michal Hocko,
Matthew Wilcox (Oracle), Minchan Kim, Sergey Senozhatsky,
Brendan Jackman, Johannes Weiner, Jason Gunthorpe, John Hubbard,
Peter Xu, Xu Xin, Chengming Zhou, Miaohe Lin, Naoya Horiguchi,
Oscar Salvador, Rik van Riel, Harry Yoo, Qi Zheng, Shakeel Butt
On Mon, Jun 30, 2025 at 03:00:10PM +0200, David Hildenbrand wrote:
> Let's just special-case based on IS_ENABLED(CONFIG_BALLOON_COMPACTION)
> like we did for balloon_page_finalize().
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
LGTM, so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> include/linux/balloon_compaction.h | 42 +++++++++++-------------------
> 1 file changed, 15 insertions(+), 27 deletions(-)
>
> diff --git a/include/linux/balloon_compaction.h b/include/linux/balloon_compaction.h
> index 2fecfead91d26..7cfe48769239e 100644
> --- a/include/linux/balloon_compaction.h
> +++ b/include/linux/balloon_compaction.h
> @@ -77,6 +77,15 @@ static inline void balloon_devinfo_init(struct balloon_dev_info *balloon)
>
> #ifdef CONFIG_BALLOON_COMPACTION
> extern const struct movable_operations balloon_mops;
> +/*
> + * balloon_page_device - get the b_dev_info descriptor for the balloon device
> + * that enqueues the given page.
> + */
> +static inline struct balloon_dev_info *balloon_page_device(struct page *page)
> +{
> + return (struct balloon_dev_info *)page_private(page);
> +}
> +#endif /* CONFIG_BALLOON_COMPACTION */
>
> /*
> * balloon_page_insert - insert a page into the balloon's page list and make
> @@ -91,41 +100,20 @@ static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> struct page *page)
> {
> __SetPageOffline(page);
> - SetPageMovableOps(page);
> - set_page_private(page, (unsigned long)balloon);
> - list_add(&page->lru, &balloon->pages);
> -}
> -
> -/*
> - * balloon_page_device - get the b_dev_info descriptor for the balloon device
> - * that enqueues the given page.
> - */
> -static inline struct balloon_dev_info *balloon_page_device(struct page *page)
> -{
> - return (struct balloon_dev_info *)page_private(page);
> -}
> -
> -static inline gfp_t balloon_mapping_gfp_mask(void)
> -{
> - return GFP_HIGHUSER_MOVABLE;
> -}
> -
> -#else /* !CONFIG_BALLOON_COMPACTION */
> -
> -static inline void balloon_page_insert(struct balloon_dev_info *balloon,
> - struct page *page)
> -{
> - __SetPageOffline(page);
> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION)) {
> + SetPageMovableOps(page);
> + set_page_private(page, (unsigned long)balloon);
> + }
> list_add(&page->lru, &balloon->pages);
> }
>
> static inline gfp_t balloon_mapping_gfp_mask(void)
> {
> + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION))
> + return GFP_HIGHUSER_MOVABLE;
> return GFP_HIGHUSER;
> }
>
> -#endif /* CONFIG_BALLOON_COMPACTION */
> -
> /*
> * balloon_page_finalize - prepare a balloon page that was removed from the
> * balloon list for release to the page allocator
> --
> 2.49.0
>
* Re: [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1)
2025-06-30 12:59 [PATCH v1 00/29] mm/migration: rework movable_ops page migration (part 1) David Hildenbrand
` (28 preceding siblings ...)
2025-06-30 13:00 ` [PATCH v1 29/29] mm/balloon_compaction: provide single balloon_page_insert() and balloon_mapping_gfp_mask() David Hildenbrand
@ 2025-07-01 19:38 ` David Hildenbrand
29 siblings, 0 replies; 138+ messages in thread
From: David Hildenbrand @ 2025-07-01 19:38 UTC (permalink / raw)
To: linux-kernel
Cc: linux-mm, linux-doc, linuxppc-dev, virtualization, linux-fsdevel,
Andrew Morton, Jonathan Corbet, Madhavan Srinivasan,
Michael Ellerman, Nicholas Piggin, Christophe Leroy,
Jerrin Shaji George, Arnd Bergmann, Greg Kroah-Hartman,
Michael S. Tsirkin, Jason Wang, Xuan Zhuo, Eugenio Pérez,
Alexander Viro, Christian Brauner, Jan Kara, Zi Yan,
Matthew Brost, Joshua Hahn, Rakie Kim, Byungchul Park,
Gregory Price, Ying Huang, Alistair Popple, Lorenzo Stoakes,
Liam R. Howlett, Vlastimil Babka, Mike Rapoport,
Suren Baghdasaryan, Michal Hocko, Matthew Wilcox (Oracle),
Minchan Kim, Sergey Senozhatsky, Brendan Jackman, Johannes Weiner,
Jason Gunthorpe, John Hubbard, Peter Xu, Xu Xin, Chengming Zhou,
Miaohe Lin, Naoya Horiguchi, Oscar Salvador, Rik van Riel,
Harry Yoo, Qi Zheng, Shakeel Butt
On 30.06.25 14:59, David Hildenbrand wrote:
> Based on mm/mm-new.
>
> In the future, as we decouple "struct page" from "struct folio", pages
> that support "non-lru page migration" -- movable_ops page migration
> such as memory balloons and zsmalloc -- will no longer be folios. They
> will not have ->mapping, ->lru, and likely no refcount and no
> page lock. But they will have a type and flags :)
>
> This is the first part (other parts not written yet) of decoupling
> movable_ops page migration from folio migration.
>
> In this series, we get rid of the ->mapping usage, and start cleaning up
> the code + separating it from folio migration.
>
> Migration core will have to be further reworked to not treat movable_ops
> pages like folios. This is the first step into that direction.
>
> Heavily tested with virtio-balloon and lightly tested with zsmalloc
> on x86-64. Cross-compile-tested.
Thanks everybody for the review!
I'm planning on sending v2 probably later tomorrow, so we can get it
into mm-new.
So if someone wants to review parts of this series, either (a) do so by
tomorrow; or (b) scream STOP and I'll wait with v2 a bit longer;
or (c) wait until v2.
--
Cheers,
David / dhildenb