From: Zi Yan <ziy@nvidia.com>
To: linmiaohe@huawei.com, david@redhat.com, jane.chu@oracle.com
Cc: kernel@pankajraghav.com, ziy@nvidia.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
nao.horiguchi@gmail.com,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Wei Yang <richard.weiyang@gmail.com>,
Yang Shi <shy828301@gmail.com>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: [PATCH v5 0/3] Optimize folio split in memory failure
Date: Fri, 31 Oct 2025 12:19:58 -0400
Message-ID: <20251031162001.670503-1-ziy@nvidia.com>

This patchset optimizes folio split operations in the memory failure code by
always splitting a folio to min_order_for_split() to minimize the number of
unusable pages, even if min_order_for_split() is non-zero and the memory
failure code would eventually take the failed path for a successfully split
folio. This means that, instead of making the entire original folio unusable,
the memory failure code only makes the after-split folio that contains the
HWPoison page, which has order min_order_for_split(), unusable.

For the soft offline case, the original folio is still accessible, so no
split is performed if the folio cannot be split to order-0, to avoid a
potential performance loss. In addition, add split_huge_page_to_order() to
improve code readability and fix the kernel-doc comment format for
folio_split() and other related functions.
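
To make the benefit concrete, below is a small self-contained model of the
page accounting (plain userspace C, not kernel code; the order values are
illustrative assumptions): with an order-9 folio that can only be split down
to order-3, memory failure previously left all 512 pages unusable, while with
this series only the 8 pages of the after-split folio containing the HWPoison
page stay unusable.

/*
 * Illustrative model only, not kernel code: how many pages stay
 * unusable when a folio with one poisoned page is split down to
 * min_order instead of being kept whole.
 */
#include <stdio.h>

int main(void)
{
	unsigned int folio_order = 9;	/* e.g. 512 base pages */
	unsigned int min_order = 3;	/* e.g. min_order_for_split() > 0 for LBS */
	unsigned long nr_pages = 1UL << folio_order;
	/* only the after-split folio holding the poisoned page stays unusable */
	unsigned long unusable = 1UL << min_order;

	printf("before: %lu of %lu pages unusable\n", nr_pages, nr_pages);
	printf("after:  %lu of %lu pages unusable, %lu freed for reuse\n",
	       unusable, nr_pages, nr_pages - unusable);
	return 0;
}
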
It is based on mm-new without V4 of this patchset.
Background
===
This patchset is a follow-up of "[PATCH v3] mm/huge_memory: do not change
split_huge_page*() target order silently."[1] and
"[PATCH v4] mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split
to >0 order"[2], since both were separated out as hotfixes. It improves how
the memory failure code handles large block size (LBS) folios with
min_order_for_split() > 0. By splitting a large folio containing HW
poisoned pages to min_order_for_split(), the after-split folios without
HW poisoned pages can be freed for reuse. To achieve this, folio split
code needs to set has_hwpoisoned on after-split folios containing HW
poisoned pages; this is done in the hotfix in [2].
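
As a rough sketch of that marking step (again plain userspace C with assumed
order values and a hypothetical poisoned-page offset, not the actual split
code change from [2]): only the after-split folio whose index range covers
the poisoned page keeps has_hwpoisoned; its siblings stay clean and can be
freed.

/*
 * Illustrative model only: after splitting an order-9 folio into
 * order-3 pieces, only the after-split folio whose range covers the
 * poisoned page index needs has_hwpoisoned; its siblings are clean.
 */
#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	unsigned int folio_order = 9, min_order = 3;
	unsigned long poisoned_idx = 100;	/* hypothetical offset of the bad page */
	unsigned long nr_pieces = 1UL << (folio_order - min_order);
	unsigned long i;

	for (i = 0; i < nr_pieces; i++) {
		bool poisoned = (poisoned_idx >> min_order) == i;

		if (poisoned)
			printf("after-split folio %lu: has_hwpoisoned, kept unusable\n", i);
	}
	printf("%lu of %lu after-split folios freed for reuse\n",
	       nr_pieces - 1, nr_pieces);
	return 0;
}
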
This patchset includes:
1. A patch that adds split_huge_page_to_order(),
2. Patches 2 and 3 of "[PATCH v2 0/3] Do not change split folio target
order"[3].
Changelog
===
From V4[5]:
1. updated cover letter.
2. updated __split_unmapped_folio() comment and removed stale text.
From V3[4]:
1. The patch "mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split
   to >0 order" is sent separately as a hotfix[2].
2. made the newly added new_order const in memory_failure() and
   soft_offline_in_use_page().
3. explained in a comment why, in memory_failure(), after-split >0 order
   folios are still treated as if the split failed.
From V2[3]:
1. Patch 1 is sent separately as a hotfix[1].
2. set has_hwpoisoned on after-split folios if any contains HW poisoned
pages.
3. added split_huge_page_to_order().
4. added a missing newline after a variable declaration.
5. added /* release= */ to try_to_split_thp_page().
6. restructured try_to_split_thp_page() in memory_failure().
7. fixed a typo.
8. reworded the comment in soft_offline_in_use_page() for better
understanding.
Link: https://lore.kernel.org/all/20251017013630.139907-1-ziy@nvidia.com/ [1]
Link: https://lore.kernel.org/all/20251023030521.473097-1-ziy@nvidia.com/ [2]
Link: https://lore.kernel.org/all/20251016033452.125479-1-ziy@nvidia.com/ [3]
Link: https://lore.kernel.org/all/20251022033531.389351-1-ziy@nvidia.com/ [4]
Link: https://lore.kernel.org/all/20251030014020.475659-1-ziy@nvidia.com/ [5]
Zi Yan (3):
mm/huge_memory: add split_huge_page_to_order()
mm/memory-failure: improve large block size folio handling.
mm/huge_memory: fix kernel-doc comments for folio_split() and related.
include/linux/huge_mm.h | 22 ++++++++++++++------
mm/huge_memory.c | 45 ++++++++++++++++++++++-------------------
mm/memory-failure.c | 31 ++++++++++++++++++++++++----
3 files changed, 67 insertions(+), 31 deletions(-)
--
2.51.0