From: Zi Yan <ziy@nvidia.com>
To: linmiaohe@huawei.com, david@redhat.com, jane.chu@oracle.com
Cc: kernel@pankajraghav.com, ziy@nvidia.com,
akpm@linux-foundation.org, mcgrof@kernel.org,
nao.horiguchi@gmail.com,
Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
Baolin Wang <baolin.wang@linux.alibaba.com>,
"Liam R. Howlett" <Liam.Howlett@oracle.com>,
Nico Pache <npache@redhat.com>,
Ryan Roberts <ryan.roberts@arm.com>, Dev Jain <dev.jain@arm.com>,
Barry Song <baohua@kernel.org>, Lance Yang <lance.yang@linux.dev>,
"Matthew Wilcox (Oracle)" <willy@infradead.org>,
Wei Yang <richard.weiyang@gmail.com>,
Yang Shi <shy828301@gmail.com>,
linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org
Subject: [PATCH v5 2/3] mm/memory-failure: improve large block size folio handling.
Date: Fri, 31 Oct 2025 12:20:00 -0400
Message-ID: <20251031162001.670503-3-ziy@nvidia.com>
In-Reply-To: <20251031162001.670503-1-ziy@nvidia.com>
Large block size (LBS) folios cannot be split into order-0 folios; they
can only be split down to min_order_for_split(). The current code simply
fails the split, which is not optimal. Split the folio to
min_order_for_split() instead, so that after the split only the smaller
folio containing the poisoned page becomes unusable, rather than the
whole large folio.
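The intended memory_failure() policy can be sketched in user-space C as
below. The kernel helpers are reduced to illustrative stubs (the real
ones take struct page / struct folio arguments), so this models only the
decision, not the implementation:

#include <stdio.h>

/* Stub: min_order_for_split() is 0 for THP, >0 for an LBS folio. */
static int min_order_for_split(int lbs_min_order)
{
	return lbs_min_order;
}

/* Stub: assume the split itself succeeds. */
static int try_to_split_thp_page(int new_order)
{
	(void)new_order;	/* a real split would act here */
	return 0;
}

static int memory_failure_policy(int lbs_min_order)
{
	int new_order = min_order_for_split(lbs_min_order);
	int err = try_to_split_thp_page(new_order);

	/*
	 * Split regardless, to confine the damage to the smallest
	 * possible folio. But since the hwpoison code cannot handle a
	 * still-large folio, a non-zero new_order is reported as a
	 * failure (kill the process) all the same.
	 */
	if (err || new_order)
		return -1;	/* -EHWPOISON in the kernel */
	return 0;
}

int main(void)
{
	printf("THP, min order 0: %d\n", memory_failure_policy(0));
	printf("LBS, min order 2: %d\n", memory_failure_policy(2));
	return 0;
}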
For soft offline, do not split the large folio at all if its
min_order_for_split() is not 0: the folio is still accessible from
userspace, and a premature split might cause a performance loss.
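The soft-offline half of the policy differs only in refusing the partial
split; a matching sketch, again with stubbed helpers (illustrative only,
not kernel API):

/* Same illustrative stubs as in the sketch above. */
static int min_order_for_split(int lbs_min_order) { return lbs_min_order; }
static int try_to_split_thp_page(int new_order) { (void)new_order; return 0; }

static int soft_offline_policy(int lbs_min_order)
{
	int new_order = min_order_for_split(lbs_min_order);

	/*
	 * The folio's data is intact and still mapped from userspace,
	 * so a split that cannot reach order 0 is pure overhead:
	 * keep the large folio whole and report busy instead.
	 */
	if (new_order)
		return -1;	/* -EBUSY in the kernel */

	return try_to_split_thp_page(0);	/* split all the way to order 0 */
}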
Suggested-by: Jane Chu <jane.chu@oracle.com>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Wei Yang <richard.weiyang@gmail.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Barry Song <baohua@kernel.org>
Reviewed-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Zi Yan <ziy@nvidia.com>
---
 mm/memory-failure.c | 31 +++++++++++++++++++++++++++----
 1 file changed, 27 insertions(+), 4 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f698df156bf8..acc35c881547 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1656,12 +1656,13 @@ static int identify_page_state(unsigned long pfn, struct page *p,
  * there is still more to do, hence the page refcount we took earlier
  * is still needed.
  */
-static int try_to_split_thp_page(struct page *page, bool release)
+static int try_to_split_thp_page(struct page *page, unsigned int new_order,
+				 bool release)
 {
 	int ret;
 
 	lock_page(page);
-	ret = split_huge_page(page);
+	ret = split_huge_page_to_order(page, new_order);
 	unlock_page(page);
 
 	if (ret && release)
@@ -2280,6 +2281,9 @@ int memory_failure(unsigned long pfn, int flags)
 	folio_unlock(folio);
 
 	if (folio_test_large(folio)) {
+		const int new_order = min_order_for_split(folio);
+		int err;
+
 		/*
 		 * The flag must be set after the refcount is bumped
 		 * otherwise it may race with THP split.
@@ -2294,7 +2298,16 @@ int memory_failure(unsigned long pfn, int flags)
 		 * page is a valid handlable page.
 		 */
 		folio_set_has_hwpoisoned(folio);
-		if (try_to_split_thp_page(p, false) < 0) {
+		err = try_to_split_thp_page(p, new_order, /* release= */ false);
+		/*
+		 * If splitting a folio to order-0 fails, kill the process.
+		 * Split the folio regardless to minimize unusable pages.
+		 * Because the memory failure code cannot handle large
+		 * folios, this split is always treated as if it failed.
+		 */
+		if (err || new_order) {
+			/* get folio again in case the original one is split */
+			folio = page_folio(p);
 			res = -EHWPOISON;
 			kill_procs_now(p, pfn, flags, folio);
 			put_page(p);
@@ -2621,7 +2634,17 @@ static int soft_offline_in_use_page(struct page *page)
 	};
 
 	if (!huge && folio_test_large(folio)) {
+		const int new_order = min_order_for_split(folio);
+
+		/*
+		 * If new_order (target split order) is not 0, do not split the
+		 * folio at all to retain the still accessible large folio.
+		 * NOTE: if minimizing the number of soft offline pages is
+		 * preferred, split it to non-zero new_order like it is done in
+		 * memory_failure().
+		 */
+		if (new_order || try_to_split_thp_page(page, /* new_order= */ 0,
+						       /* release= */ true)) {
 			pr_info("%#lx: thp split failed\n", pfn);
 			return -EBUSY;
 		}
--
2.51.0