* [PATCH v4 0/6] __folio_split() clean up
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
Hi Andrew,
This series replaces both [PATCH v3 0/2] __folio_split() clean up
and [PATCH] mm/huge_memory: refactor after-split (page) cache code.
Hi Lorenzo,
I addressed all of your comments except renaming folio to origin_folio,
since I find that would either cause confusion or require a lot of code
churn. The folio variable points to the original folio throughout
__folio_split(), and using origin_folio in the middle of __folio_split()
is confusing, as one might wonder whether origin_folio is different from
or the same as folio. The alternative is to rename every folio to
origin_folio in __folio_split(), which seems like unnecessary code churn.
Hi all,
This patchset refactors __folio_split() and __split_unmapped_folio() to:
1. make __split_unmapped_folio() reusable for splitting unmapped
folios. This avoids the need for a new boolean "unmapped" parameter to
guard mapping-related code when __split_unmapped_folio() is reused.
2. improve code readability and prevent smatch/coverity checkers from
complaining about NULL mapping dereferences.
An additional benefit of the __split_unmapped_folio() refactoring is that
__split_unmapped_folio() can now be called on after-split folios by
__folio_split(), which enables new split methods. For example, at deferred
split time, unmapped subpages can be scattered arbitrarily within a large
folio, and neither a uniform nor a non-uniform split can maximize the
after-split folio orders for the mapped subpages. The hope is that by
calling __split_unmapped_folio() multiple times, a better split result
can be achieved.
The patchset is based on mm-new with the aforementioned two patchsets
reverted. It passes mm selftests.
Changelog
===
From V3[4]:
1. Split up Patch 1 into incremental changes:
a. Patch 1 moves code out of __split_unmapped_folio();
b. Patch 2 removes after_split label in __split_unmapped_folio();
c. Patch 3 refactors __folio_split() to deduplicate code;
d. Patch 4 converts VM_BUGs to VM_WARNs;
2. Added "mm/huge_memory: refactor after-split (page) cache code"
patch[5] to this series.
3. Added remap_flags to make remap_page() call easier to read.
4. Updated Patch 1 commit log to include variable rename information.
5. Converted additional VM_BUGs in __folio_split().
6. Renamed next_folio to end_folio to avoid confusion.
7. Added a comment explaining why the for loop starts at
folio_next(folio) instead of folio with the loop body skipping folio.
8. Dropped swapcache folio split check code from __split_unmapped_folio(),
since the check is already done at the beginning of __folio_split().
From V2[3]:
1. Code format fixes
2. Restructured code to remove after_split goto label.
From V1[2]:
1. Fixed indentations.
2. Used folio_expected_ref_count() to calculate ref_count instead of
open coding.
[1] https://lore.kernel.org/linux-mm/94D8C1A4-780C-4BEC-A336-7D3613B54845@nvidia.com/
[2] https://lore.kernel.org/linux-mm/20250711030259.3574392-1-ziy@nvidia.com/
[3] https://lore.kernel.org/linux-mm/20250711182355.3592618-1-ziy@nvidia.com/
[4] https://lore.kernel.org/linux-mm/20250714171823.3626213-1-ziy@nvidia.com/
[5] https://lore.kernel.org/linux-mm/20250716171112.3666150-1-ziy@nvidia.com/
Zi Yan (6):
mm/huge_memory: move unrelated code out of __split_unmapped_folio()
mm/huge_memory: remove after_split label in __split_unmapped_folio().
mm/huge_memory: deduplicate code in __folio_split().
mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
mm/huge_memory: get frozen folio refcount with
folio_expected_ref_count()
mm/huge_memory: refactor after-split (page) cache code.
mm/huge_memory.c | 317 ++++++++++++++++++++++++-----------------------
1 file changed, 165 insertions(+), 152 deletions(-)
--
2.47.2
* [PATCH v4 1/6] mm/huge_memory: move unrelated code out of __split_unmapped_folio()
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
remap_page(), folio_ref_unfreeze(), and lru_add_split_folio() are not
relevant to splitting an unmapped folio. Move them out to __folio_split()
so that __split_unmapped_folio() only handles unmapped folio splits. This
makes __split_unmapped_folio() reusable.
Remove the swapcache folio split check code before the
__split_unmapped_folio() call, since it is already checked at the
beginning of __folio_split() in uniform_split_supported() and
non_uniform_split_supported().
Along with the code move, there are some variable renames:
1. release is renamed to new_folio,
2. origin_folio is now folio, since __folio_split() has folio pointing to
the original folio already.
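For reference, after this patch __split_unmapped_folio() no longer takes
lock_at, list, or end; its signature (simplified, as in the diff below)
becomes:

	static int __split_unmapped_folio(struct folio *folio, int new_order,
			struct page *split_at, struct xa_state *xas,
			struct address_space *mapping, bool uniform_split);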
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 270 +++++++++++++++++++++++------------------------
1 file changed, 133 insertions(+), 137 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index ce130225a8e5..63eebca07628 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3385,10 +3385,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* order - 1 to new_order).
* @split_at: in buddy allocator like split, the folio containing @split_at
* will be split until its order becomes @new_order.
- * @lock_at: the folio containing @lock_at is left locked for caller.
- * @list: the after split folios will be added to @list if it is not NULL,
- * otherwise to LRU lists.
- * @end: the end of the file @folio maps to. -1 if @folio is anonymous memory.
* @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
* @mapping: @folio->mapping
* @uniform_split: if the split is uniform or not (buddy allocator like split)
@@ -3414,52 +3410,26 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
* @page, which is split in next for loop.
*
* After splitting, the caller's folio reference will be transferred to the
- * folio containing @page. The other folios may be freed if they are not mapped.
- *
- * In terms of locking, after splitting,
- * 1. uniform split leaves @page (or the folio contains it) locked;
- * 2. buddy allocator like (non-uniform) split leaves @folio locked.
- *
+ * folio containing @page. The caller needs to unlock and/or free after-split
+ * folios if necessary.
*
* For !uniform_split, when -ENOMEM is returned, the original folio might be
* split. The caller needs to check the input folio.
*/
static int __split_unmapped_folio(struct folio *folio, int new_order,
- struct page *split_at, struct page *lock_at,
- struct list_head *list, pgoff_t end,
- struct xa_state *xas, struct address_space *mapping,
- bool uniform_split)
+ struct page *split_at, struct xa_state *xas,
+ struct address_space *mapping, bool uniform_split)
{
- struct lruvec *lruvec;
- struct address_space *swap_cache = NULL;
- struct folio *origin_folio = folio;
- struct folio *next_folio = folio_next(folio);
- struct folio *new_folio;
- struct folio *next;
int order = folio_order(folio);
- int split_order;
int start_order = uniform_split ? new_order : order - 1;
- int nr_dropped = 0;
- int ret = 0;
bool stop_split = false;
-
- if (folio_test_swapcache(folio)) {
- VM_BUG_ON(mapping);
-
- /* a swapcache folio can only be uniformly split to order-0 */
- if (!uniform_split || new_order != 0)
- return -EINVAL;
-
- swap_cache = swap_address_space(folio->swap);
- xa_lock(&swap_cache->i_pages);
- }
+ struct folio *next;
+ int split_order;
+ int ret = 0;
if (folio_test_anon(folio))
mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
- /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
- lruvec = folio_lruvec_lock(folio);
-
folio_clear_has_hwpoisoned(folio);
/*
@@ -3469,9 +3439,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
for (split_order = start_order;
split_order >= new_order && !stop_split;
split_order--) {
- int old_order = folio_order(folio);
- struct folio *release;
struct folio *end_folio = folio_next(folio);
+ int old_order = folio_order(folio);
+ struct folio *new_folio;
/* order-1 anonymous folio is not supported */
if (folio_test_anon(folio) && split_order == 1)
@@ -3506,113 +3476,32 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
after_split:
/*
- * Iterate through after-split folios and perform related
- * operations. But in buddy allocator like split, the folio
+ * Iterate through after-split folios and update folio stats.
+ * But in buddy allocator like split, the folio
* containing the specified page is skipped until its order
* is new_order, since the folio will be worked on in next
* iteration.
*/
- for (release = folio; release != end_folio; release = next) {
- next = folio_next(release);
+ for (new_folio = folio; new_folio != end_folio; new_folio = next) {
+ next = folio_next(new_folio);
/*
- * for buddy allocator like split, the folio containing
- * page will be split next and should not be released,
- * until the folio's order is new_order or stop_split
- * is set to true by the above xas_split() failure.
+ * for buddy allocator like split, new_folio containing
+ * @split_at page could be split again, thus do not
+ * change stats yet. Wait until new_folio's order is
+ * @new_order or stop_split is set to true by the above
+ * xas_split() failure.
*/
- if (release == page_folio(split_at)) {
- folio = release;
+ if (new_folio == page_folio(split_at)) {
+ folio = new_folio;
if (split_order != new_order && !stop_split)
continue;
}
- if (folio_test_anon(release)) {
- mod_mthp_stat(folio_order(release),
- MTHP_STAT_NR_ANON, 1);
- }
-
- /*
- * origin_folio should be kept frozon until page cache
- * entries are updated with all the other after-split
- * folios to prevent others seeing stale page cache
- * entries.
- */
- if (release == origin_folio)
- continue;
-
- folio_ref_unfreeze(release, 1 +
- ((mapping || swap_cache) ?
- folio_nr_pages(release) : 0));
-
- lru_add_split_folio(origin_folio, release, lruvec,
- list);
-
- /* Some pages can be beyond EOF: drop them from cache */
- if (release->index >= end) {
- if (shmem_mapping(mapping))
- nr_dropped += folio_nr_pages(release);
- else if (folio_test_clear_dirty(release))
- folio_account_cleaned(release,
- inode_to_wb(mapping->host));
- __filemap_remove_folio(release, NULL);
- folio_put_refs(release, folio_nr_pages(release));
- } else if (mapping) {
- __xa_store(&mapping->i_pages,
- release->index, release, 0);
- } else if (swap_cache) {
- __xa_store(&swap_cache->i_pages,
- swap_cache_index(release->swap),
- release, 0);
- }
+ if (folio_test_anon(new_folio))
+ mod_mthp_stat(folio_order(new_folio),
+ MTHP_STAT_NR_ANON, 1);
}
}
- /*
- * Unfreeze origin_folio only after all page cache entries, which used
- * to point to it, have been updated with new folios. Otherwise,
- * a parallel folio_try_get() can grab origin_folio and its caller can
- * see stale page cache entries.
- */
- folio_ref_unfreeze(origin_folio, 1 +
- ((mapping || swap_cache) ? folio_nr_pages(origin_folio) : 0));
-
- unlock_page_lruvec(lruvec);
-
- if (swap_cache)
- xa_unlock(&swap_cache->i_pages);
- if (mapping)
- xa_unlock(&mapping->i_pages);
-
- /* Caller disabled irqs, so they are still disabled here */
- local_irq_enable();
-
- if (nr_dropped)
- shmem_uncharge(mapping->host, nr_dropped);
-
- remap_page(origin_folio, 1 << order,
- folio_test_anon(origin_folio) ?
- RMP_USE_SHARED_ZEROPAGE : 0);
-
- /*
- * At this point, folio should contain the specified page.
- * For uniform split, it is left for caller to unlock.
- * For buddy allocator like split, the first after-split folio is left
- * for caller to unlock.
- */
- for (new_folio = origin_folio; new_folio != next_folio; new_folio = next) {
- next = folio_next(new_folio);
- if (new_folio == page_folio(lock_at))
- continue;
-
- folio_unlock(new_folio);
- /*
- * Subpages may be freed if there wasn't any mapping
- * like if add_to_swap() is running on a lru page that
- * had its mapping zapped. And freeing these pages
- * requires taking the lru_lock so we do the put_page
- * of the tail pages after the split is complete.
- */
- free_folio_and_swap_cache(new_folio);
- }
return ret;
}
@@ -3686,6 +3575,11 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
* It is in charge of checking whether the split is supported or not and
* preparing @folio for __split_unmapped_folio().
*
+ * After splitting, the after-split folio containing @lock_at remains locked
+ * and others are unlocked:
+ * 1. for uniform split, @lock_at points to one of @folio's subpages;
+ * 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
+ *
* return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
* split but not to @new_order, the caller needs to check)
*/
@@ -3695,10 +3589,12 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
{
struct deferred_split *ds_queue = get_deferred_split_queue(folio);
XA_STATE(xas, &folio->mapping->i_pages, folio->index);
+ struct folio *end_folio = folio_next(folio);
bool is_anon = folio_test_anon(folio);
struct address_space *mapping = NULL;
struct anon_vma *anon_vma = NULL;
int order = folio_order(folio);
+ struct folio *new_folio, *next;
int extra_pins, ret;
pgoff_t end;
bool is_hzp;
@@ -3829,6 +3725,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
+ struct address_space *swap_cache = NULL;
+ int nr_dropped = 0;
+ struct lruvec *lruvec;
+
if (folio_order(folio) > 1 &&
!list_empty(&folio->_deferred_list)) {
ds_queue->split_queue_len--;
@@ -3862,9 +3762,105 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
}
}
- ret = __split_unmapped_folio(folio, new_order,
- split_at, lock_at, list, end, &xas, mapping,
- uniform_split);
+ if (folio_test_swapcache(folio)) {
+ VM_BUG_ON(mapping);
+
+ swap_cache = swap_address_space(folio->swap);
+ xa_lock(&swap_cache->i_pages);
+ }
+
+ /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
+ lruvec = folio_lruvec_lock(folio);
+
+ ret = __split_unmapped_folio(folio, new_order, split_at, &xas,
+ mapping, uniform_split);
+
+ /*
+ * Unfreeze after-split folios and put them back to the right
+ * list. @folio should be kept frozen until page cache
+ * entries are updated with all the other after-split folios
+ * to prevent others seeing stale page cache entries.
+ * As a result, new_folio starts from the next folio of
+ * @folio.
+ */
+ for (new_folio = folio_next(folio); new_folio != end_folio;
+ new_folio = next) {
+ next = folio_next(new_folio);
+
+ folio_ref_unfreeze(
+ new_folio,
+ 1 + ((mapping || swap_cache) ?
+ folio_nr_pages(new_folio) :
+ 0));
+
+ lru_add_split_folio(folio, new_folio, lruvec, list);
+
+ /* Some pages can be beyond EOF: drop them from cache */
+ if (new_folio->index >= end) {
+ if (shmem_mapping(mapping))
+ nr_dropped += folio_nr_pages(new_folio);
+ else if (folio_test_clear_dirty(new_folio))
+ folio_account_cleaned(
+ new_folio,
+ inode_to_wb(mapping->host));
+ __filemap_remove_folio(new_folio, NULL);
+ folio_put_refs(new_folio,
+ folio_nr_pages(new_folio));
+ } else if (mapping) {
+ __xa_store(&mapping->i_pages, new_folio->index,
+ new_folio, 0);
+ } else if (swap_cache) {
+ __xa_store(&swap_cache->i_pages,
+ swap_cache_index(new_folio->swap),
+ new_folio, 0);
+ }
+ }
+ /*
+ * Unfreeze @folio only after all page cache entries, which
+ * used to point to it, have been updated with new folios.
+ * Otherwise, a parallel folio_try_get() can grab @folio
+ * and its caller can see stale page cache entries.
+ */
+ folio_ref_unfreeze(folio, 1 +
+ ((mapping || swap_cache) ? folio_nr_pages(folio) : 0));
+
+ unlock_page_lruvec(lruvec);
+
+ if (swap_cache)
+ xa_unlock(&swap_cache->i_pages);
+ if (mapping)
+ xas_unlock(&xas);
+
+ local_irq_enable();
+
+ if (nr_dropped)
+ shmem_uncharge(mapping->host, nr_dropped);
+
+ remap_page(folio, 1 << order,
+ !ret && folio_test_anon(folio) ?
+ RMP_USE_SHARED_ZEROPAGE :
+ 0);
+
+ /*
+ * Unlock all after-split folios except the one containing
+ * @lock_at page. If @folio is not split, it will be kept locked.
+ */
+ for (new_folio = folio; new_folio != end_folio;
+ new_folio = next) {
+ next = folio_next(new_folio);
+ if (new_folio == page_folio(lock_at))
+ continue;
+
+ folio_unlock(new_folio);
+ /*
+ * Subpages may be freed if there wasn't any mapping
+ * like if add_to_swap() is running on a lru page that
+ * had its mapping zapped. And freeing these pages
+ * requires taking the lru_lock so we do the put_page
+ * of the tail pages after the split is complete.
+ */
+ free_folio_and_swap_cache(new_folio);
+ }
} else {
spin_unlock(&ds_queue->split_queue_lock);
fail:
--
2.47.2
* [PATCH v4 2/6] mm/huge_memory: remove after_split label in __split_unmapped_folio().
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
Check stop_split instead to avoid the goto statement.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 63eebca07628..e01359008b13 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3463,18 +3463,18 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
if (xas_error(xas)) {
ret = xas_error(xas);
stop_split = true;
- goto after_split;
}
}
}
- folio_split_memcg_refs(folio, old_order, split_order);
- split_page_owner(&folio->page, old_order, split_order);
- pgalloc_tag_split(folio, old_order, split_order);
+ if (!stop_split) {
+ folio_split_memcg_refs(folio, old_order, split_order);
+ split_page_owner(&folio->page, old_order, split_order);
+ pgalloc_tag_split(folio, old_order, split_order);
- __split_folio_to_order(folio, old_order, split_order);
+ __split_folio_to_order(folio, old_order, split_order);
+ }
-after_split:
/*
* Iterate through after-split folios and update folio stats.
* But in buddy allocator like split, the folio
--
2.47.2
* [PATCH v4 3/6] mm/huge_memory: deduplicate code in __folio_split().
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
The xas unlock, remap_page(), and local_irq_enable() calls are moved out
of the if branches to deduplicate the code. While at it, add remap_flags
to clean up the remap_page() call site. nr_dropped is renamed to
nr_shmem_dropped, as it becomes a variable at __folio_split() scope.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 73 +++++++++++++++++++++++-------------------------
1 file changed, 35 insertions(+), 38 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index e01359008b13..d36f7bdaeb38 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3595,6 +3595,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
struct anon_vma *anon_vma = NULL;
int order = folio_order(folio);
struct folio *new_folio, *next;
+ int nr_shmem_dropped = 0;
+ int remap_flags = 0;
int extra_pins, ret;
pgoff_t end;
bool is_hzp;
@@ -3718,15 +3720,16 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
*/
xas_lock(&xas);
xas_reset(&xas);
- if (xas_load(&xas) != folio)
+ if (xas_load(&xas) != folio) {
+ ret = -EAGAIN;
goto fail;
+ }
}
/* Prevent deferred_split_scan() touching ->_refcount */
spin_lock(&ds_queue->split_queue_lock);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
struct address_space *swap_cache = NULL;
- int nr_dropped = 0;
struct lruvec *lruvec;
if (folio_order(folio) > 1 &&
@@ -3798,7 +3801,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
/* Some pages can be beyond EOF: drop them from cache */
if (new_folio->index >= end) {
if (shmem_mapping(mapping))
- nr_dropped += folio_nr_pages(new_folio);
+ nr_shmem_dropped += folio_nr_pages(new_folio);
else if (folio_test_clear_dirty(new_folio))
folio_account_cleaned(
new_folio,
@@ -3828,47 +3831,41 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (swap_cache)
xa_unlock(&swap_cache->i_pages);
- if (mapping)
- xas_unlock(&xas);
+ } else {
+ spin_unlock(&ds_queue->split_queue_lock);
+ ret = -EAGAIN;
+ }
+fail:
+ if (mapping)
+ xas_unlock(&xas);
+
+ local_irq_enable();
- local_irq_enable();
+ if (nr_shmem_dropped)
+ shmem_uncharge(mapping->host, nr_shmem_dropped);
- if (nr_dropped)
- shmem_uncharge(mapping->host, nr_dropped);
+ if (!ret && is_anon)
+ remap_flags = RMP_USE_SHARED_ZEROPAGE;
+ remap_page(folio, 1 << order, remap_flags);
- remap_page(folio, 1 << order,
- !ret && folio_test_anon(folio) ?
- RMP_USE_SHARED_ZEROPAGE :
- 0);
+ /*
+ * Unlock all after-split folios except the one containing
+ * @lock_at page. If @folio is not split, it will be kept locked.
+ */
+ for (new_folio = folio; new_folio != end_folio; new_folio = next) {
+ next = folio_next(new_folio);
+ if (new_folio == page_folio(lock_at))
+ continue;
+ folio_unlock(new_folio);
/*
- * Unlock all after-split folios except the one containing
- * @lock_at page. If @folio is not split, it will be kept locked.
+ * Subpages may be freed if there wasn't any mapping
+ * like if add_to_swap() is running on a lru page that
+ * had its mapping zapped. And freeing these pages
+ * requires taking the lru_lock so we do the put_page
+ * of the tail pages after the split is complete.
*/
- for (new_folio = folio; new_folio != end_folio;
- new_folio = next) {
- next = folio_next(new_folio);
- if (new_folio == page_folio(lock_at))
- continue;
-
- folio_unlock(new_folio);
- /*
- * Subpages may be freed if there wasn't any mapping
- * like if add_to_swap() is running on a lru page that
- * had its mapping zapped. And freeing these pages
- * requires taking the lru_lock so we do the put_page
- * of the tail pages after the split is complete.
- */
- free_folio_and_swap_cache(new_folio);
- }
- } else {
- spin_unlock(&ds_queue->split_queue_lock);
-fail:
- if (mapping)
- xas_unlock(&xas);
- local_irq_enable();
- remap_page(folio, folio_nr_pages(folio), 0);
- ret = -EAGAIN;
+ free_folio_and_swap_cache(new_folio);
}
out_unlock:
--
2.47.2
* [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
These VM_BUG* checks can be handled gracefully without crashing the kernel.
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/huge_memory.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d36f7bdaeb38..d6ff5e8c89d7 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3601,8 +3601,14 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
pgoff_t end;
bool is_hzp;
- VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
- VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
+ if (!folio_test_locked(folio)) {
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
+ }
+ if (!folio_test_large(folio)) {
+ VM_WARN_ON_ONCE_FOLIO(1, folio);
+ return -EINVAL;
+ }
if (folio != page_folio(split_at) || folio != page_folio(lock_at))
return -EINVAL;
@@ -3766,7 +3772,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
}
if (folio_test_swapcache(folio)) {
- VM_BUG_ON(mapping);
+ if (mapping) {
+ VM_WARN_ON_ONCE_FOLIO(mapping, folio);
+ ret = -EINVAL;
+ goto fail;
+ }
swap_cache = swap_address_space(folio->swap);
xa_lock(&swap_cache->i_pages);
--
2.47.2
* [PATCH v4 5/6] mm/huge_memory: get frozen folio refcount with folio_expected_ref_count()
From: Zi Yan @ 2025-07-18 2:29 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
Instead of open coding the refcount calculation, use
folio_expected_ref_count() to calculate the frozen folio refcount.
This works because:
1. __folio_split() does not split a folio with PG_private, so there is no
elevated refcount from PG_private;
2. a frozen folio in __folio_split() is fully unmapped, so folio_mapcount()
in folio_expected_ref_count() is always 0;
3. the (mapping || swap_cache) ? folio_nr_pages(folio) : 0 term is taken
care of by folio_expected_ref_count() too (see the sketch below).
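A sketch of the reduction (illustrative only, not the exact
implementation of folio_expected_ref_count()):

	/*
	 * Roughly: expected refs = cache refs + PG_private ref + mapcount.
	 * Here: no PG_private (point 1), folio_mapcount() == 0 (point 2),
	 * and the page/swap cache holds one ref per page (point 3), so
	 *
	 *   folio_expected_ref_count(folio) ==
	 *	(mapping || swap_cache) ? folio_nr_pages(folio) : 0
	 *
	 * The extra +1 is the caller's reference, matching the old
	 * open-coded value passed to folio_ref_unfreeze().
	 */
	expected_refs = folio_expected_ref_count(folio) + 1;
	folio_ref_unfreeze(folio, expected_refs);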
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Acked-by: Balbir Singh <balbirs@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
mm/huge_memory.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d6ff5e8c89d7..4db67970ae69 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3737,6 +3737,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
if (folio_ref_freeze(folio, 1 + extra_pins)) {
struct address_space *swap_cache = NULL;
struct lruvec *lruvec;
+ int expected_refs;
if (folio_order(folio) > 1 &&
!list_empty(&folio->_deferred_list)) {
@@ -3800,11 +3801,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
new_folio = next) {
next = folio_next(new_folio);
- folio_ref_unfreeze(
- new_folio,
- 1 + ((mapping || swap_cache) ?
- folio_nr_pages(new_folio) :
- 0));
+ expected_refs = folio_expected_ref_count(new_folio) + 1;
+ folio_ref_unfreeze(new_folio, expected_refs);
lru_add_split_folio(folio, new_folio, lruvec, list);
@@ -3834,8 +3832,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
* Otherwise, a parallel folio_try_get() can grab @folio
* and its caller can see stale page cache entries.
*/
- folio_ref_unfreeze(folio, 1 +
- ((mapping || swap_cache) ? folio_nr_pages(folio) : 0));
+ expected_refs = folio_expected_ref_count(folio) + 1;
+ folio_ref_unfreeze(folio, expected_refs);
unlock_page_lruvec(lruvec);
--
2.47.2
* [PATCH v4 6/6] mm/huge_memory: refactor after-split (page) cache code.
From: Zi Yan @ 2025-07-18 2:30 UTC (permalink / raw)
To: David Hildenbrand, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Zi Yan, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
Smatch/coverity checkers report NULL mapping dereference issues[1][2][3]
every time the code is modified, because they do not understand that
mapping cannot be NULL when a folio is in the page cache.
Refactor the code to make that invariant explicit (see the condensed
flow below).
Remove "end = -1" for anonymous folios, since after the refactoring end
is no longer used by the anonymous folio handling code.
No functional change is intended.
[1]https://lore.kernel.org/linux-mm/2afe3d59-aca5-40f7-82a3-a6d976fb0f4f@stanley.mountain/
[2]https://lore.kernel.org/oe-kbuild/64b54034-f311-4e7d-b935-c16775dbb642@suswa.mountain/
[3]https://lore.kernel.org/linux-mm/20250716145804.4836-1-antonio@mandelbit.com/
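Condensed from the diff below, the per-folio flow after the refactor
makes each case explicit:

	if (swap_cache) {	/* anonymous folio in the swap cache */
		__xa_store(&swap_cache->i_pages,
			   swap_cache_index(new_folio->swap), new_folio, 0);
		continue;
	}
	if (!mapping)		/* anonymous folio without swap cache */
		continue;
	if (new_folio->index < end) {	/* in the page cache, within EOF */
		__xa_store(&mapping->i_pages, new_folio->index,
			   new_folio, 0);
		continue;
	}
	/* beyond EOF: drop the folio from the page cache */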
Suggested-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Zi Yan <ziy@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
---
mm/huge_memory.c | 44 ++++++++++++++++++++++++++++----------------
1 file changed, 28 insertions(+), 16 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 4db67970ae69..19342660739b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3646,7 +3646,6 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
ret = -EBUSY;
goto out;
}
- end = -1;
mapping = NULL;
anon_vma_lock_write(anon_vma);
} else {
@@ -3799,6 +3798,8 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
*/
for (new_folio = folio_next(folio); new_folio != end_folio;
new_folio = next) {
+ unsigned long nr_pages = folio_nr_pages(new_folio);
+
next = folio_next(new_folio);
expected_refs = folio_expected_ref_count(new_folio) + 1;
@@ -3806,25 +3807,36 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
lru_add_split_folio(folio, new_folio, lruvec, list);
- /* Some pages can be beyond EOF: drop them from cache */
- if (new_folio->index >= end) {
- if (shmem_mapping(mapping))
- nr_shmem_dropped += folio_nr_pages(new_folio);
- else if (folio_test_clear_dirty(new_folio))
- folio_account_cleaned(
- new_folio,
- inode_to_wb(mapping->host));
- __filemap_remove_folio(new_folio, NULL);
- folio_put_refs(new_folio,
- folio_nr_pages(new_folio));
- } else if (mapping) {
- __xa_store(&mapping->i_pages, new_folio->index,
- new_folio, 0);
- } else if (swap_cache) {
+ /*
+ * Anonymous folio with swap cache.
+ * NOTE: shmem in swap cache is not supported yet.
+ */
+ if (swap_cache) {
__xa_store(&swap_cache->i_pages,
swap_cache_index(new_folio->swap),
new_folio, 0);
+ continue;
+ }
+
+ /* Anonymous folio without swap cache */
+ if (!mapping)
+ continue;
+
+ /* Add the new folio to the page cache. */
+ if (new_folio->index < end) {
+ __xa_store(&mapping->i_pages, new_folio->index,
+ new_folio, 0);
+ continue;
}
+
+ /* Drop folio beyond EOF: ->index >= end */
+ if (shmem_mapping(mapping))
+ nr_shmem_dropped += nr_pages;
+ else if (folio_test_clear_dirty(new_folio))
+ folio_account_cleaned(
+ new_folio, inode_to_wb(mapping->host));
+ __filemap_remove_folio(new_folio, NULL);
+ folio_put_refs(new_folio, nr_pages);
}
/*
* Unfreeze @folio only after all page cache entries, which
--
2.47.2
* Re: [PATCH v4 0/6] __folio_split() clean up
From: Lorenzo Stoakes @ 2025-07-18 5:26 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-mm, Andrew Morton, Dan Carpenter,
Antonio Quartulli, Hugh Dickins, Kirill Shutemov, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Balbir Singh, Matthew Brost, linux-kernel
On Thu, Jul 17, 2025 at 10:29:54PM -0400, Zi Yan wrote:
> Hi Andrew,
>
> This series replaces both [PATCH v3 0/2] __folio_split() clean up
> and [PATCH] mm/huge_memory: refactor after-split (page) cache code.
>
> Hi Lorenzo,
>
> I addressed all of your comments except renaming folio to origin_folio,
> since I find that would either cause confusion or require a lot of code
> churn. The folio variable points to the original folio throughout
> __folio_split(), and using origin_folio in the middle of __folio_split()
> is confusing, as one might wonder whether origin_folio is different from
> or the same as folio. The alternative is to rename every folio to
> origin_folio in __folio_split(), which seems like unnecessary code churn.
Sounds reasonable! Cheers :)
>
> Hi all,
>
> This patchset refactors __folio_split() and __split_unmapped_folio() to:
> 1. make __split_unmapped_folio() reusable for splitting unmapped
> folios. This avoids the need for a new boolean "unmapped" parameter to
> guard mapping-related code when __split_unmapped_folio() is reused.
> 2. improve code readability and prevent smatch/coverity checkers from
> complaining about NULL mapping dereferences.
>
> An additional benefit of the __split_unmapped_folio() refactoring is that
> __split_unmapped_folio() can now be called on after-split folios by
> __folio_split(), which enables new split methods. For example, at deferred
> split time, unmapped subpages can be scattered arbitrarily within a large
> folio, and neither a uniform nor a non-uniform split can maximize the
> after-split folio orders for the mapped subpages. The hope is that by
> calling __split_unmapped_folio() multiple times, a better split result
> can be achieved.
>
> The patchset is based on mm-new with the aforementioned two patchsets
> reverted. It passes mm selftests.
>
> Changelog
> ===
> From V3[4]:
> 1. Split up Patch 1 into incremental changes:
> a. Patch 1 moves code out of __split_unmapped_folio();
> b. Patch 2 removes after_split label in __split_unmapped_folio();
> c. Patch 3 refactors __folio_split() to deduplicate code;
> d. Patch 4 converts VM_BUGs to VM_WARNs;
> 2. Added "mm/huge_memory: refactor after-split (page) cache code"
> patch[5] to this series.
> 3. Added remap_flags to make remap_page() call easier to read.
> 4. Updated Patch 1 commit log to include variable rename information.
> 5. Converted additional VM_BUGs in __folio_split().
> 6. Renamed next_folio to end_folio to avoid confusion.
> 7. Added a comment explaining why the for loop starts at
> folio_next(folio) instead of folio with the loop body skipping folio.
> 8. Dropped swapcache folio split check code from __split_unmapped_folio(),
> since the check is already done at the beginning of __folio_split().
>
> From V2[3]:
> 1. Code format fixes
> 2. Restructured code to remove after_split goto label.
>
> From V1[2]:
> 1. Fixed indentations.
> 2. Used folio_expected_ref_count() to calculate ref_count instead of
> open coding.
>
> [1] https://lore.kernel.org/linux-mm/94D8C1A4-780C-4BEC-A336-7D3613B54845@nvidia.com/
> [2] https://lore.kernel.org/linux-mm/20250711030259.3574392-1-ziy@nvidia.com/
> [3] https://lore.kernel.org/linux-mm/20250711182355.3592618-1-ziy@nvidia.com/
> [4] https://lore.kernel.org/linux-mm/20250714171823.3626213-1-ziy@nvidia.com/
> [5] https://lore.kernel.org/linux-mm/20250716171112.3666150-1-ziy@nvidia.com/
>
> Zi Yan (6):
> mm/huge_memory: move unrelated code out of __split_unmapped_folio()
> mm/huge_memory: remove after_split label in __split_unmapped_folio().
> mm/huge_memory: deduplicate code in __folio_split().
> mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
> mm/huge_memory: get frozen folio refcount with
> folio_expected_ref_count()
> mm/huge_memory: refactor after-split (page) cache code.
>
> mm/huge_memory.c | 317 ++++++++++++++++++++++++-----------------------
> 1 file changed, 165 insertions(+), 152 deletions(-)
>
> --
> 2.47.2
>
* Re: [PATCH v4 1/6] mm/huge_memory: move unrelated code out of __split_unmapped_folio()
From: David Hildenbrand @ 2025-07-18 7:19 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
On 18.07.25 04:29, Zi Yan wrote:
> remap_page(), folio_ref_unfreeze(), and lru_add_split_folio() are not
> relevant to splitting an unmapped folio. Move them out to __folio_split()
> so that __split_unmapped_folio() only handles unmapped folio splits. This
> makes __split_unmapped_folio() reusable.
>
> Remove the swapcache folio split check code before the
> __split_unmapped_folio() call, since it is already checked at the
> beginning of __folio_split() in uniform_split_supported() and
> non_uniform_split_supported().
>
> Along with the code move, there are some variable renames:
>
> 1. release is renamed to new_folio,
> 2. origin_folio is now folio, since __folio_split() has folio pointing to
> the original folio already.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v4 2/6] mm/huge_memory: remove after_split label in __split_unmapped_folio().
From: David Hildenbrand @ 2025-07-18 7:19 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
On 18.07.25 04:29, Zi Yan wrote:
> Check stop_split instead to avoid the goto statement.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
From: David Hildenbrand @ 2025-07-18 7:22 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
On 18.07.25 04:29, Zi Yan wrote:
> These VM_BUG* checks can be handled gracefully without crashing the kernel.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
> mm/huge_memory.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d36f7bdaeb38..d6ff5e8c89d7 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3601,8 +3601,14 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> pgoff_t end;
> bool is_hzp;
>
> - VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
> + if (!folio_test_locked(folio)) {
> + VM_WARN_ON_ONCE_FOLIO(1, folio);
> + return -EINVAL;
> + }
> + if (!folio_test_large(folio)) {
> + VM_WARN_ON_ONCE_FOLIO(1, folio);
> + return -EINVAL;
> + }
For cases that we handle gracefully you usually want to use
if (WARN_ON_ONCE(..))
because then you actually get notified when that unexpected thing happens.
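E.g. (untested sketch, just to show the pattern):

	if (WARN_ON_ONCE(!folio_test_locked(folio)))
		return -EINVAL;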
I am not really sure if recovery is warranted here: smells like a
straight VM_WARN_ON_ONCE_FOLIO() is sufficient to catch, early during
development, that something is extremely off.
--
Cheers,
David / dhildenb
* Re: [PATCH v4 3/6] mm/huge_memory: deduplicate code in __folio_split().
From: David Hildenbrand @ 2025-07-18 7:23 UTC (permalink / raw)
To: Zi Yan, Lorenzo Stoakes, linux-mm
Cc: Andrew Morton, Dan Carpenter, Antonio Quartulli, Hugh Dickins,
Kirill Shutemov, Baolin Wang, Liam R. Howlett, Nico Pache,
Ryan Roberts, Dev Jain, Barry Song, Balbir Singh, Matthew Brost,
linux-kernel
On 18.07.25 04:29, Zi Yan wrote:
> The xas unlock, remap_page(), and local_irq_enable() calls are moved out
> of the if branches to deduplicate the code. While at it, add remap_flags
> to clean up the remap_page() call site. nr_dropped is renamed to
> nr_shmem_dropped, as it becomes a variable at __folio_split() scope.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
> ---
Acked-by: David Hildenbrand <david@redhat.com>
--
Cheers,
David / dhildenb
* Re: [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
From: Zi Yan @ 2025-07-18 14:48 UTC (permalink / raw)
To: David Hildenbrand
Cc: Lorenzo Stoakes, linux-mm, Andrew Morton, Dan Carpenter,
Antonio Quartulli, Hugh Dickins, Kirill Shutemov, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Balbir Singh, Matthew Brost, linux-kernel
On 18 Jul 2025, at 3:22, David Hildenbrand wrote:
> On 18.07.25 04:29, Zi Yan wrote:
>> These VM_BUG* checks can be handled gracefully without crashing the kernel.
>>
>> Signed-off-by: Zi Yan <ziy@nvidia.com>
>> ---
>> mm/huge_memory.c | 16 +++++++++++++---
>> 1 file changed, 13 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index d36f7bdaeb38..d6ff5e8c89d7 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -3601,8 +3601,14 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>> pgoff_t end;
>> bool is_hzp;
>> - VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
>> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
>> + if (!folio_test_locked(folio)) {
>> + VM_WARN_ON_ONCE_FOLIO(1, folio);
>> + return -EINVAL;
>> + }
>> + if (!folio_test_large(folio)) {
>> + VM_WARN_ON_ONCE_FOLIO(1, folio);
>> + return -EINVAL;
>> + }
>
> For cases that we handle gracefully you usually want to use
>
> if (WARN_ON_ONCE(..))
Got it.
>
> because then you actually get notified when that unexpected thing happens.
>
> I am not really sure if recovery is warranted here: smells like a straight VM_WARN_ON_ONCE_FOLIO() is sufficient to catch, early during development, that something is extremely off.
OK. I will update it to just VM_WARN_ON_ONCE_FOLIO().
Thanks.
Best Regards,
Yan, Zi
* Re: [PATCH v4 1/6] mm/huge_memory: move unrelated code out of __split_unmapped_folio()
From: Lorenzo Stoakes @ 2025-07-18 14:55 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-mm, Andrew Morton, Dan Carpenter,
Antonio Quartulli, Hugh Dickins, Kirill Shutemov, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Balbir Singh, Matthew Brost, linux-kernel
On Thu, Jul 17, 2025 at 10:29:55PM -0400, Zi Yan wrote:
> remap_page(), folio_ref_unfreeze(), and lru_add_split_folio() are not
> relevant to splitting an unmapped folio. Move them out to __folio_split()
> so that __split_unmapped_folio() only handles unmapped folio splits. This
> makes __split_unmapped_folio() reusable.
>
> Remove the swapcache folio split check code before the
> __split_unmapped_folio() call, since it is already checked at the
> beginning of __folio_split() in uniform_split_supported() and
> non_uniform_split_supported().
>
> Along with the code move, there are some variable renames:
>
> 1. release is renamed to new_folio,
> 2. origin_folio is now folio, since __folio_split() has folio pointing to
> the original folio already.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
OK, have had a careful look through and LGTM. Thanks very much for
addressing the prior review! So:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/huge_memory.c | 270 +++++++++++++++++++++++------------------------
> 1 file changed, 133 insertions(+), 137 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index ce130225a8e5..63eebca07628 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3385,10 +3385,6 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> * order - 1 to new_order).
> * @split_at: in buddy allocator like split, the folio containing @split_at
> * will be split until its order becomes @new_order.
> - * @lock_at: the folio containing @lock_at is left locked for caller.
> - * @list: the after split folios will be added to @list if it is not NULL,
> - * otherwise to LRU lists.
> - * @end: the end of the file @folio maps to. -1 if @folio is anonymous memory.
> * @xas: xa_state pointing to folio->mapping->i_pages and locked by caller
> * @mapping: @folio->mapping
> * @uniform_split: if the split is uniform or not (buddy allocator like split)
> @@ -3414,52 +3410,26 @@ static void __split_folio_to_order(struct folio *folio, int old_order,
> * @page, which is split in next for loop.
> *
> * After splitting, the caller's folio reference will be transferred to the
> - * folio containing @page. The other folios may be freed if they are not mapped.
> - *
> - * In terms of locking, after splitting,
> - * 1. uniform split leaves @page (or the folio contains it) locked;
> - * 2. buddy allocator like (non-uniform) split leaves @folio locked.
> - *
> + * folio containing @page. The caller needs to unlock and/or free after-split
> + * folios if necessary.
> *
> * For !uniform_split, when -ENOMEM is returned, the original folio might be
> * split. The caller needs to check the input folio.
> */
> static int __split_unmapped_folio(struct folio *folio, int new_order,
> - struct page *split_at, struct page *lock_at,
> - struct list_head *list, pgoff_t end,
> - struct xa_state *xas, struct address_space *mapping,
> - bool uniform_split)
> + struct page *split_at, struct xa_state *xas,
> + struct address_space *mapping, bool uniform_split)
> {
> - struct lruvec *lruvec;
> - struct address_space *swap_cache = NULL;
> - struct folio *origin_folio = folio;
> - struct folio *next_folio = folio_next(folio);
> - struct folio *new_folio;
> - struct folio *next;
> int order = folio_order(folio);
> - int split_order;
> int start_order = uniform_split ? new_order : order - 1;
> - int nr_dropped = 0;
> - int ret = 0;
> bool stop_split = false;
> -
> - if (folio_test_swapcache(folio)) {
> - VM_BUG_ON(mapping);
> -
> - /* a swapcache folio can only be uniformly split to order-0 */
> - if (!uniform_split || new_order != 0)
> - return -EINVAL;
> -
> - swap_cache = swap_address_space(folio->swap);
> - xa_lock(&swap_cache->i_pages);
> - }
> + struct folio *next;
> + int split_order;
> + int ret = 0;
>
> if (folio_test_anon(folio))
> mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
>
> - /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> - lruvec = folio_lruvec_lock(folio);
> -
> folio_clear_has_hwpoisoned(folio);
>
> /*
> @@ -3469,9 +3439,9 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> for (split_order = start_order;
> split_order >= new_order && !stop_split;
> split_order--) {
> - int old_order = folio_order(folio);
> - struct folio *release;
> struct folio *end_folio = folio_next(folio);
> + int old_order = folio_order(folio);
> + struct folio *new_folio;
>
> /* order-1 anonymous folio is not supported */
> if (folio_test_anon(folio) && split_order == 1)
> @@ -3506,113 +3476,32 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
>
> after_split:
> /*
> - * Iterate through after-split folios and perform related
> - * operations. But in buddy allocator like split, the folio
> + * Iterate through after-split folios and update folio stats.
> + * But in buddy allocator like split, the folio
> * containing the specified page is skipped until its order
> * is new_order, since the folio will be worked on in next
> * iteration.
> */
> - for (release = folio; release != end_folio; release = next) {
> - next = folio_next(release);
> + for (new_folio = folio; new_folio != end_folio; new_folio = next) {
> + next = folio_next(new_folio);
> /*
> - * for buddy allocator like split, the folio containing
> - * page will be split next and should not be released,
> - * until the folio's order is new_order or stop_split
> - * is set to true by the above xas_split() failure.
> + * for buddy allocator like split, new_folio containing
> + * @split_at page could be split again, thus do not
> + * change stats yet. Wait until new_folio's order is
> + * @new_order or stop_split is set to true by the above
> + * xas_split() failure.
> */
> - if (release == page_folio(split_at)) {
> - folio = release;
> + if (new_folio == page_folio(split_at)) {
> + folio = new_folio;
> if (split_order != new_order && !stop_split)
> continue;
> }
> - if (folio_test_anon(release)) {
> - mod_mthp_stat(folio_order(release),
> - MTHP_STAT_NR_ANON, 1);
> - }
> -
> - /*
> - * origin_folio should be kept frozon until page cache
> - * entries are updated with all the other after-split
> - * folios to prevent others seeing stale page cache
> - * entries.
> - */
> - if (release == origin_folio)
> - continue;
> -
> - folio_ref_unfreeze(release, 1 +
> - ((mapping || swap_cache) ?
> - folio_nr_pages(release) : 0));
> -
> - lru_add_split_folio(origin_folio, release, lruvec,
> - list);
> -
> - /* Some pages can be beyond EOF: drop them from cache */
> - if (release->index >= end) {
> - if (shmem_mapping(mapping))
> - nr_dropped += folio_nr_pages(release);
> - else if (folio_test_clear_dirty(release))
> - folio_account_cleaned(release,
> - inode_to_wb(mapping->host));
> - __filemap_remove_folio(release, NULL);
> - folio_put_refs(release, folio_nr_pages(release));
> - } else if (mapping) {
> - __xa_store(&mapping->i_pages,
> - release->index, release, 0);
> - } else if (swap_cache) {
> - __xa_store(&swap_cache->i_pages,
> - swap_cache_index(release->swap),
> - release, 0);
> - }
> + if (folio_test_anon(new_folio))
> + mod_mthp_stat(folio_order(new_folio),
> + MTHP_STAT_NR_ANON, 1);
> }
> }
>
> - /*
> - * Unfreeze origin_folio only after all page cache entries, which used
> - * to point to it, have been updated with new folios. Otherwise,
> - * a parallel folio_try_get() can grab origin_folio and its caller can
> - * see stale page cache entries.
> - */
> - folio_ref_unfreeze(origin_folio, 1 +
> - ((mapping || swap_cache) ? folio_nr_pages(origin_folio) : 0));
> -
> - unlock_page_lruvec(lruvec);
> -
> - if (swap_cache)
> - xa_unlock(&swap_cache->i_pages);
> - if (mapping)
> - xa_unlock(&mapping->i_pages);
> -
> - /* Caller disabled irqs, so they are still disabled here */
> - local_irq_enable();
> -
> - if (nr_dropped)
> - shmem_uncharge(mapping->host, nr_dropped);
> -
> - remap_page(origin_folio, 1 << order,
> - folio_test_anon(origin_folio) ?
> - RMP_USE_SHARED_ZEROPAGE : 0);
> -
> - /*
> - * At this point, folio should contain the specified page.
> - * For uniform split, it is left for caller to unlock.
> - * For buddy allocator like split, the first after-split folio is left
> - * for caller to unlock.
> - */
> - for (new_folio = origin_folio; new_folio != next_folio; new_folio = next) {
> - next = folio_next(new_folio);
> - if (new_folio == page_folio(lock_at))
> - continue;
> -
> - folio_unlock(new_folio);
> - /*
> - * Subpages may be freed if there wasn't any mapping
> - * like if add_to_swap() is running on a lru page that
> - * had its mapping zapped. And freeing these pages
> - * requires taking the lru_lock so we do the put_page
> - * of the tail pages after the split is complete.
> - */
> - free_folio_and_swap_cache(new_folio);
> - }
> return ret;
> }
>
> @@ -3686,6 +3575,11 @@ bool uniform_split_supported(struct folio *folio, unsigned int new_order,
> * It is in charge of checking whether the split is supported or not and
> * preparing @folio for __split_unmapped_folio().
> *
> + * After splitting, the after-split folio containing @lock_at remains locked
> + * and others are unlocked:
> + * 1. for uniform split, @lock_at points to one of @folio's subpages;
> + * 2. for buddy allocator like (non-uniform) split, @lock_at points to @folio.
> + *
> * return: 0: successful, <0 failed (if -ENOMEM is returned, @folio might be
> * split but not to @new_order, the caller needs to check)
> */
> @@ -3695,10 +3589,12 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> {
> struct deferred_split *ds_queue = get_deferred_split_queue(folio);
> XA_STATE(xas, &folio->mapping->i_pages, folio->index);
> + struct folio *end_folio = folio_next(folio);
> bool is_anon = folio_test_anon(folio);
> struct address_space *mapping = NULL;
> struct anon_vma *anon_vma = NULL;
> int order = folio_order(folio);
> + struct folio *new_folio, *next;
> int extra_pins, ret;
> pgoff_t end;
> bool is_hzp;
> @@ -3829,6 +3725,10 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> /* Prevent deferred_split_scan() touching ->_refcount */
> spin_lock(&ds_queue->split_queue_lock);
> if (folio_ref_freeze(folio, 1 + extra_pins)) {
> + struct address_space *swap_cache = NULL;
> + int nr_dropped = 0;
> + struct lruvec *lruvec;
> +
> if (folio_order(folio) > 1 &&
> !list_empty(&folio->_deferred_list)) {
> ds_queue->split_queue_len--;
> @@ -3862,9 +3762,105 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> }
> }
>
> - ret = __split_unmapped_folio(folio, new_order,
> - split_at, lock_at, list, end, &xas, mapping,
> - uniform_split);
> + if (folio_test_swapcache(folio)) {
> + VM_BUG_ON(mapping);
> +
> + swap_cache = swap_address_space(folio->swap);
> + xa_lock(&swap_cache->i_pages);
> + }
> +
> + /* lock lru list/PageCompound, ref frozen by page_ref_freeze */
> + lruvec = folio_lruvec_lock(folio);
> +
> + ret = __split_unmapped_folio(folio, new_order, split_at, &xas,
> + mapping, uniform_split);
> +
> + /*
> + * Unfreeze after-split folios and put them back to the right
> +	 * list. @folio should be kept frozen until page cache
> + * entries are updated with all the other after-split folios
> + * to prevent others seeing stale page cache entries.
> + * As a result, new_folio starts from the next folio of
> + * @folio.
> + */
> + for (new_folio = folio_next(folio); new_folio != end_folio;
> + new_folio = next) {
> + next = folio_next(new_folio);
> +
> + folio_ref_unfreeze(
> + new_folio,
> + 1 + ((mapping || swap_cache) ?
> + folio_nr_pages(new_folio) :
> + 0));
> +
> + lru_add_split_folio(folio, new_folio, lruvec, list);
> +
> + /* Some pages can be beyond EOF: drop them from cache */
> + if (new_folio->index >= end) {
> + if (shmem_mapping(mapping))
> + nr_dropped += folio_nr_pages(new_folio);
> + else if (folio_test_clear_dirty(new_folio))
> + folio_account_cleaned(
> + new_folio,
> + inode_to_wb(mapping->host));
> + __filemap_remove_folio(new_folio, NULL);
> + folio_put_refs(new_folio,
> + folio_nr_pages(new_folio));
> + } else if (mapping) {
> + __xa_store(&mapping->i_pages, new_folio->index,
> + new_folio, 0);
> + } else if (swap_cache) {
> + __xa_store(&swap_cache->i_pages,
> + swap_cache_index(new_folio->swap),
> + new_folio, 0);
> + }
> + }
> + /*
> + * Unfreeze @folio only after all page cache entries, which
> + * used to point to it, have been updated with new folios.
> + * Otherwise, a parallel folio_try_get() can grab @folio
> + * and its caller can see stale page cache entries.
> + */
> + folio_ref_unfreeze(folio, 1 +
> + ((mapping || swap_cache) ? folio_nr_pages(folio) : 0));
> +
> + unlock_page_lruvec(lruvec);
> +
> + if (swap_cache)
> + xa_unlock(&swap_cache->i_pages);
> + if (mapping)
> + xas_unlock(&xas);
> +
> + local_irq_enable();
> +
> + if (nr_dropped)
> + shmem_uncharge(mapping->host, nr_dropped);
> +
> + remap_page(folio, 1 << order,
> + !ret && folio_test_anon(folio) ?
> + RMP_USE_SHARED_ZEROPAGE :
> + 0);
> +
> + /*
> + * Unlock all after-split folios except the one containing
> + * @lock_at page. If @folio is not split, it will be kept locked.
> + */
> + for (new_folio = folio; new_folio != end_folio;
> + new_folio = next) {
> + next = folio_next(new_folio);
> + if (new_folio == page_folio(lock_at))
> + continue;
> +
> + folio_unlock(new_folio);
> + /*
> + * Subpages may be freed if there wasn't any mapping
> + * like if add_to_swap() is running on a lru page that
> + * had its mapping zapped. And freeing these pages
> + * requires taking the lru_lock so we do the put_page
> + * of the tail pages after the split is complete.
> + */
> + free_folio_and_swap_cache(new_folio);
> + }
> } else {
> spin_unlock(&ds_queue->split_queue_lock);
> fail:
> --
> 2.47.2
>
^ permalink raw reply [flat|nested] 16+ messages in thread
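The hunk quoted above encodes two ordering rules that are easy to miss in diff
form: every after-split tail folio is unfrozen (to one base reference, plus one
per subpage when it sits in the page cache or swap cache) and has its cache
entry rewritten before the head folio is unfrozen, and at the very end every
after-split folio except the one containing @lock_at is unlocked. A minimal
userspace sketch of that ordering follows; struct folio_sketch, expected_refs()
and the two helpers are invented stand-ins for illustration, not the kernel
implementation:

#include <stdbool.h>
#include <stddef.h>

/* Invented stand-in for a folio: just enough state for the sketch. */
struct folio_sketch {
	unsigned int nr_pages;
	bool in_cache;	/* page cache or swap cache */
	bool frozen;
	bool locked;
};

/* Mirrors "1 + ((mapping || swap_cache) ? folio_nr_pages() : 0)":
 * one base reference, plus one per subpage for cached folios. */
static unsigned int expected_refs(const struct folio_sketch *f)
{
	return 1 + (f->in_cache ? f->nr_pages : 0);
}

/* folios[0] plays the role of @folio; the rest are after-split tails. */
static void unfreeze_after_split(struct folio_sketch *folios, size_t n)
{
	for (size_t i = 1; i < n; i++) {
		/* Each tail is unfrozen and its cache entry rewritten
		 * (the __xa_store() calls in the real code) first ... */
		folios[i].frozen = false;
		(void)expected_refs(&folios[i]);
	}
	/* ... and only then is the head unfrozen, so a parallel
	 * folio_try_get() can never grab it through a stale entry. */
	folios[0].frozen = false;
}

/* Unlock everything except the folio holding @lock_at. */
static void unlock_after_split(struct folio_sketch *folios, size_t n,
			       size_t lock_at)
{
	for (size_t i = 0; i < n; i++)
		if (i != lock_at)
			folios[i].locked = false;
}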
* Re: [PATCH v4 2/6] mm/huge_memory: remove after_split label in __split_unmapped_folio().
2025-07-18 2:29 ` [PATCH v4 2/6] mm/huge_memory: remove after_split label in __split_unmapped_folio() Zi Yan
2025-07-18 7:19 ` David Hildenbrand
@ 2025-07-18 14:58 ` Lorenzo Stoakes
1 sibling, 0 replies; 16+ messages in thread
From: Lorenzo Stoakes @ 2025-07-18 14:58 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-mm, Andrew Morton, Dan Carpenter,
Antonio Quartulli, Hugh Dickins, Kirill Shutemov, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Balbir Singh, Matthew Brost, linux-kernel
On Thu, Jul 17, 2025 at 10:29:56PM -0400, Zi Yan wrote:
> Check stop_split instead to avoid the goto statement.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
Thanks, nice + clear!
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/huge_memory.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 63eebca07628..e01359008b13 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3463,18 +3463,18 @@ static int __split_unmapped_folio(struct folio *folio, int new_order,
> if (xas_error(xas)) {
> ret = xas_error(xas);
> stop_split = true;
> - goto after_split;
> }
> }
> }
>
> - folio_split_memcg_refs(folio, old_order, split_order);
> - split_page_owner(&folio->page, old_order, split_order);
> - pgalloc_tag_split(folio, old_order, split_order);
> + if (!stop_split) {
> + folio_split_memcg_refs(folio, old_order, split_order);
> + split_page_owner(&folio->page, old_order, split_order);
> + pgalloc_tag_split(folio, old_order, split_order);
>
> - __split_folio_to_order(folio, old_order, split_order);
> + __split_folio_to_order(folio, old_order, split_order);
> + }
>
> -after_split:
> /*
> * Iterate through after-split folios and update folio stats.
> * But in buddy allocator like split, the folio
> --
> 2.47.2
>
^ permalink raw reply [flat|nested] 16+ messages in thread
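The shape of the change quoted above — record the failure in stop_split and
guard the remaining split work with it, instead of jumping over that work with
a goto — in a standalone, hedged sketch; split_step() and its stub are
invented for illustration:

#include <stdbool.h>

/* Invented stub standing in for xas_error(). */
static int xas_error_stub(void) { return 0; }

static int split_step(void)
{
	bool stop_split = false;
	int ret = xas_error_stub();

	if (ret)
		stop_split = true;	/* remember the failure ... */

	if (!stop_split) {
		/* ... and guard the work the goto used to skip:
		 * memcg refs, page owner, alloc tags, the split itself. */
	}

	/* the after-split stats pass below the old label runs either way */
	return ret;
}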
* Re: [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split.
2025-07-18 2:29 ` [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split Zi Yan
2025-07-18 7:22 ` David Hildenbrand
@ 2025-07-18 15:05 ` Lorenzo Stoakes
1 sibling, 0 replies; 16+ messages in thread
From: Lorenzo Stoakes @ 2025-07-18 15:05 UTC (permalink / raw)
To: Zi Yan
Cc: David Hildenbrand, linux-mm, Andrew Morton, Dan Carpenter,
Antonio Quartulli, Hugh Dickins, Kirill Shutemov, Baolin Wang,
Liam R. Howlett, Nico Pache, Ryan Roberts, Dev Jain, Barry Song,
Balbir Singh, Matthew Brost, linux-kernel
On Thu, Jul 17, 2025 at 10:29:58PM -0400, Zi Yan wrote:
> These VM_BUG* can be handled gracefully without crashing the kernel.
>
> Signed-off-by: Zi Yan <ziy@nvidia.com>
This LGTM, but obviously it is predicated on David being happy re: his
reply. From my side:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> mm/huge_memory.c | 16 +++++++++++++---
> 1 file changed, 13 insertions(+), 3 deletions(-)
>
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index d36f7bdaeb38..d6ff5e8c89d7 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -3601,8 +3601,14 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> pgoff_t end;
> bool is_hzp;
>
> - VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
> - VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
> + if (!folio_test_locked(folio)) {
> + VM_WARN_ON_ONCE_FOLIO(1, folio);
> + return -EINVAL;
> + }
> + if (!folio_test_large(folio)) {
> + VM_WARN_ON_ONCE_FOLIO(1, folio);
> + return -EINVAL;
> + }
>
> if (folio != page_folio(split_at) || folio != page_folio(lock_at))
> return -EINVAL;
> @@ -3766,7 +3772,11 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
> }
>
> if (folio_test_swapcache(folio)) {
> - VM_BUG_ON(mapping);
> + if (mapping) {
> + VM_WARN_ON_ONCE_FOLIO(mapping, folio);
> + ret = -EINVAL;
> + goto fail;
> + }
>
> swap_cache = swap_address_space(folio->swap);
> xa_lock(&swap_cache->i_pages);
> --
> 2.47.2
>
^ permalink raw reply [flat|nested] 16+ messages in thread
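The behavioural contract the patch above relies on, sketched in userspace:
VM_BUG_ON() halts the kernel on a debug build, while VM_WARN_ON_ONCE_FOLIO()
only logs a one-time warning and keeps running, so each conversion must be
paired with an explicit error return. warn_once() and folio_split_check()
below are invented analogues, not kernel code:

#include <stdbool.h>
#include <stdio.h>

#define EINVAL 22

/* Invented analogue of VM_WARN_ON_ONCE_FOLIO(): warn once, keep going. */
static void warn_once(const char *what)
{
	static bool warned;

	if (!warned) {
		warned = true;
		fprintf(stderr, "WARNING: %s\n", what);
	}
}

/* The graceful pattern: validate, warn once, and fail with -EINVAL
 * instead of taking the whole machine down with BUG(). */
static int folio_split_check(bool locked, bool large)
{
	if (!locked) {
		warn_once("folio not locked");
		return -EINVAL;
	}
	if (!large) {
		warn_once("folio not large");
		return -EINVAL;
	}
	return 0;
}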
end of thread, other threads:[~2025-07-18 15:05 UTC | newest]
Thread overview: 16+ messages
2025-07-18 2:29 [PATCH v4 0/6] __folio_split() clean up Zi Yan
2025-07-18 2:29 ` [PATCH v4 1/6] mm/huge_memory: move unrelated code out of __split_unmapped_folio() Zi Yan
2025-07-18 7:19 ` David Hildenbrand
2025-07-18 14:55 ` Lorenzo Stoakes
2025-07-18 2:29 ` [PATCH v4 2/6] mm/huge_memory: remove after_split label in __split_unmapped_folio() Zi Yan
2025-07-18 7:19 ` David Hildenbrand
2025-07-18 14:58 ` Lorenzo Stoakes
2025-07-18 2:29 ` [PATCH v4 3/6] mm/huge_memory: deduplicate code in __folio_split() Zi Yan
2025-07-18 7:23 ` David Hildenbrand
2025-07-18 2:29 ` [PATCH v4 4/6] mm/huge_memory: convert VM_BUG* to VM_WARN* in __folio_split Zi Yan
2025-07-18 7:22 ` David Hildenbrand
2025-07-18 14:48 ` Zi Yan
2025-07-18 15:05 ` Lorenzo Stoakes
2025-07-18 2:29 ` [PATCH v4 5/6] mm/huge_memory: get frozen folio refcount with folio_expected_ref_count() Zi Yan
2025-07-18 2:30 ` [PATCH v4 6/6] mm/huge_memory: refactor after-split (page) cache code Zi Yan
2025-07-18 5:26 ` [PATCH v4 0/6] __folio_split() clean up Lorenzo Stoakes