* Re: [PATCH v5 2/4] mm/memory-failure: add panic option for unrecoverable pages
[not found] ` <20260428030721.51274-1-lance.yang@linux.dev>
@ 2026-05-06 16:18 ` Breno Leitao
2026-05-10 14:42 ` Lance Yang
0 siblings, 1 reply; 3+ messages in thread
From: Breno Leitao @ 2026-05-06 16:18 UTC (permalink / raw)
To: Lance Yang
Cc: david, linmiaohe, nao.horiguchi, akpm, corbet, skhan, ljs,
Liam.Howlett, vbabka, rppt, surenb, mhocko, shuah, linux-mm,
linux-kernel, linux-doc, linux-kselftest, kernel-team
On Tue, Apr 28, 2026 at 11:07:21AM +0800, Lance Yang wrote:
>
> On Mon, Apr 27, 2026 at 05:49:28PM +0200, David Hildenbrand (Arm) wrote:
> >> + switch (type) {
> >> + case MF_MSG_KERNEL:
> >> + case MF_MSG_UNKNOWN:
> >> + return true;
> >> + case MF_MSG_KERNEL_HIGH_ORDER:
> >> + /*
> >> + * Rule out a concurrent buddy allocation: give the
> >> + * allocator a moment to finish prep_new_page() and
> >> + * re-check. A genuine high-order kernel tail page stays
> >> + * unowned; an in-flight allocation will have bumped the
> >> + * refcount, attached a mapping, or placed the page on
> >> + * an LRU by now.
> >> + */
> >> + p = pfn_to_online_page(pfn);
> >> + if (!p)
> >> + return true;
> >> + /*
> >> + * Yield so a concurrent allocator on another CPU can
> >> + * finish prep_new_page() and have its writes become
> >> + * visible before we resample the page state.
> >> + */
> >> + cpu_relax();
> >> + return page_count(p) == 0 &&
> >> + !PageLRU(p) &&
> >> + !page_mapped(p) &&
> >> + !page_folio(p)->mapping &&
> >> + !is_free_buddy_page(p);
> >
> >I don't get what you are doing here. The right way to check for a tail page is
> >not by checking the refcount.
> >
> >Further, you are not holding a folio reference? If so, calling
> >page_mapped/folio_mapped is shaky. On concurrent folio split you can trigger a
> >VM_WARN_ON_FOLIO().
> >
> >
> >Maybe folio_snapshot() is what you are looking for, if you are in fact not
> >holding a reference?
>
> Right! Maybe we should not try to make this decision in
> panic_on_unrecoverable_mf().
>
> By the time we get here, we only know the final MF_MSG_* type. The
> real reason why get_hwpoison_page() failed is already lost.
>
> Wonder if it would be better to split that earlier, around
> __get_unpoison_page()/get_any_page(). That code still knows why
> grabbing the page failed, either an unsupported kernel page or
> just a temporary race we cannot really trust :)
>
> Then the later panic logic can be simple: panic for the stable
> unsupported kernel page case, and not for the temporary race case.
>
> That would also avoid trying to guess MF_MSG_KERNEL_HIGH_ORDER here:)
This is very good feedback, and exactly what I wanted to do but
failed to achieve. Once we capture the reason up front, we don't
need this dance to guess it after the fact.
I've hacked a patch based on this approach. How does it sound?
commit ae7a09c989afe7aaed7ac4b5090d993ef1de0b38
Author: Breno Leitao <leitao@debian.org>
Date: Wed May 6 07:41:30 2026 -0700
mm/memory-failure: classify get_any_page() failures by reason
When get_any_page() fails to grab a page reference, the *reason* it
failed is known at the call site but is not surfaced to callers: the
HWPoisonHandlable() rejection path (a stable kernel page hwpoison cannot
handle — slab, vmalloc, page tables, kernel stacks, ...) and the
page_count()/put_page() race paths (a transient page-allocator lifecycle
race) all collapse into a single negative errno by the time
memory_failure() sees them. memory_failure() can only observe the
conflated result, and so reports both as MF_MSG_GET_HWPOISON.
Surface the diagnosis explicitly. Add an mf_get_page_status enum,
plumbed out through get_any_page() and get_hwpoison_page() (NULL is
accepted by callers that do not care — unpoison_memory() and
soft_offline_page() pass NULL). get_any_page() sets the status at the
moment it gives up:
MF_GET_PAGE_UNHANDLABLE — HWPoisonHandlable() rejected the page
after retries.
MF_GET_PAGE_RACE — exhausted retries on a refcount /
lifecycle race with the allocator.
memory_failure() then promotes the unhandlable case to MF_MSG_KERNEL
alongside the existing PageReserved branch, and leaves the
transient-race case as MF_MSG_GET_HWPOISON. The user-visible report
now distinguishes the two; this also forms the foundation a later
patch will rely on to decide whether an unrecoverable failure should
panic.
Suggested-by: Lance Yang <lance.yang@linux.dev>
Signed-off-by: Breno Leitao <leitao@debian.org>
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index f112fb27a8ff6..a83fabadbce99 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1389,7 +1389,32 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
#define GET_PAGE_MAX_RETRY_NUM 3
-static int get_any_page(struct page *p, unsigned long flags)
+/*
+ * Diagnosis of why get_any_page() failed to grab a page reference.
+ *
+ * Set when ret < 0 so callers (notably memory_failure()) can tell apart
+ * a stable kernel page type that hwpoison cannot handle — slab, vmalloc,
+ * page tables, kernel stacks, etc. — from a transient race with the page
+ * allocator lifecycle (allocation/free in flight). The distinction
+ * matters for panic_on_unrecoverable_mf(): the former is a real
+ * unrecoverable kernel-owned poisoning, the latter must not panic since
+ * the page may be destined for userspace where SIGBUS recovery would
+ * otherwise apply.
+ */
+enum mf_get_page_status {
+ MF_GET_PAGE_OK = 0,
+ /*
+ * Transient lifecycle race with the page allocator. Recorded for
+ * symmetry and for future callers that may want to distinguish a
+ * race from an unhandlable kernel page; no in-tree caller acts on
+ * this value yet.
+ */
+ MF_GET_PAGE_RACE,
+ MF_GET_PAGE_UNHANDLABLE, /* stable kernel page hwpoison cannot handle */
+};
+
+static int get_any_page(struct page *p, unsigned long flags,
+ enum mf_get_page_status *status)
{
int ret = 0, pass = 0;
bool count_increased = false;
@@ -1406,11 +1431,15 @@ static int get_any_page(struct page *p, unsigned long flags)
if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EBUSY;
+ if (status)
+ *status = MF_GET_PAGE_RACE;
} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
/* We raced with put_page, retry. */
if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EIO;
+ if (status)
+ *status = MF_GET_PAGE_RACE;
}
goto out;
} else if (ret == -EBUSY) {
@@ -1423,6 +1452,8 @@ static int get_any_page(struct page *p, unsigned long flags)
goto try_again;
}
ret = -EIO;
+ if (status)
+ *status = MF_GET_PAGE_UNHANDLABLE;
goto out;
}
}
@@ -1442,6 +1473,8 @@ static int get_any_page(struct page *p, unsigned long flags)
}
put_page(p);
ret = -EIO;
+ if (status)
+ *status = MF_GET_PAGE_UNHANDLABLE;
}
out:
if (ret == -EIO)
@@ -1503,7 +1536,8 @@ static int __get_unpoison_page(struct page *page)
* operations like allocation and free,
* -EHWPOISON when the page is hwpoisoned and taken off from buddy.
*/
-static int get_hwpoison_page(struct page *p, unsigned long flags)
+static int get_hwpoison_page(struct page *p, unsigned long flags,
+ enum mf_get_page_status *status)
{
int ret;
@@ -1511,7 +1545,7 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
if (flags & MF_UNPOISON)
ret = __get_unpoison_page(p);
else
- ret = get_any_page(p, flags);
+ ret = get_any_page(p, flags, status);
zone_pcp_enable(page_zone(p));
return ret;
@@ -2349,6 +2383,7 @@ int memory_failure(unsigned long pfn, int flags)
bool retry = true;
int hugetlb = 0;
bool is_reserved;
+ enum mf_get_page_status gp_status = MF_GET_PAGE_OK;
if (!sysctl_memory_failure_recovery)
panic("Memory failure on page %lx", pfn);
@@ -2424,7 +2459,7 @@ int memory_failure(unsigned long pfn, int flags)
*/
is_reserved = PageReserved(p);
- res = get_hwpoison_page(p, flags);
+ res = get_hwpoison_page(p, flags, &gp_status);
if (!res) {
if (is_free_buddy_page(p)) {
if (take_page_off_buddy(p)) {
@@ -2445,7 +2480,12 @@ int memory_failure(unsigned long pfn, int flags)
}
goto unlock_mutex;
} else if (res < 0) {
- if (is_reserved)
+ /*
+ * Promote a stable unhandlable kernel page diagnosed by
+ * get_hwpoison_page() to MF_MSG_KERNEL alongside reserved
+ * pages; transient lifecycle races stay as MF_MSG_GET_HWPOISON.
+ */
+ if (is_reserved || gp_status == MF_GET_PAGE_UNHANDLABLE)
res = action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);
else
res = action_result(pfn, MF_MSG_GET_HWPOISON,
@@ -2750,7 +2790,7 @@ int unpoison_memory(unsigned long pfn)
goto unlock_mutex;
}
- ghp = get_hwpoison_page(p, MF_UNPOISON);
+ ghp = get_hwpoison_page(p, MF_UNPOISON, NULL);
if (!ghp) {
if (folio_test_hugetlb(folio)) {
huge = true;
@@ -2957,7 +2997,7 @@ int soft_offline_page(unsigned long pfn, int flags)
retry:
get_online_mems();
- ret = get_hwpoison_page(page, flags | MF_SOFT_OFFLINE);
+ ret = get_hwpoison_page(page, flags | MF_SOFT_OFFLINE, NULL);
put_online_mems();
if (hwpoison_filter(page)) {
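For quick reference, the resulting decision flow can be modeled in a few
lines of Python (enum names mirror the patch; this is a plain
illustration of the mapping, not kernel code):

```python
from enum import Enum

class MfGetPageStatus(Enum):
    OK = 0           # reference grabbed, no diagnosis needed
    RACE = 1         # transient allocator lifecycle race
    UNHANDLABLE = 2  # stable kernel page hwpoison cannot handle

def classify_failure(is_reserved: bool, gp_status: MfGetPageStatus) -> str:
    """Mirror the res < 0 branch of memory_failure() in the diff above."""
    if is_reserved or gp_status is MfGetPageStatus.UNHANDLABLE:
        return "MF_MSG_KERNEL"        # promoted alongside PageReserved
    return "MF_MSG_GET_HWPOISON"      # transient race, not promoted

# A slab page rejected by HWPoisonHandlable() is now reported as
# kernel-owned, while a put_page() race keeps the old report:
print(classify_failure(False, MfGetPageStatus.UNHANDLABLE))  # MF_MSG_KERNEL
print(classify_failure(False, MfGetPageStatus.RACE))         # MF_MSG_GET_HWPOISON
```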
^ permalink raw reply related [flat|nested] 3+ messages in thread
* Re: [PATCH v5 2/4] mm/memory-failure: add panic option for unrecoverable pages
2026-05-06 16:18 ` [PATCH v5 2/4] mm/memory-failure: add panic option for unrecoverable pages Breno Leitao
@ 2026-05-10 14:42 ` Lance Yang
2026-05-11 14:44 ` Breno Leitao
0 siblings, 1 reply; 3+ messages in thread
From: Lance Yang @ 2026-05-10 14:42 UTC (permalink / raw)
To: leitao
Cc: david, linmiaohe, nao.horiguchi, akpm, corbet, skhan, ljs,
Liam.Howlett, vbabka, rppt, surenb, mhocko, shuah, linux-mm,
linux-kernel, linux-doc, linux-kselftest, kernel-team, Lance Yang
On Wed, May 06, 2026 at 09:18:12AM -0700, Breno Leitao wrote:
>On Tue, Apr 28, 2026 at 11:07:21AM +0800, Lance Yang wrote:
>> [...]
>>
>> Right! Maybe we should not try to make this decision in
>> panic_on_unrecoverable_mf().
>>
>> By the time we get here, we only know the final MF_MSG_* type. The
>> real reason why get_hwpoison_page() failed is already lost.
>>
>> Wonder if it would be better to split that earlier, around
>> __get_unpoison_page()/get_any_page(). That code still knows why
>> grabbing the page failed, either an unsupported kernel page or
>> just a temporary race we cannot really trust :)
>>
>> Then the later panic logic can be simple: panic for the stable
>> unsupported kernel page case, and not for the temporary race case.
>>
>> That would also avoid trying to guess MF_MSG_KERNEL_HIGH_ORDER here:)
>
>This is very good feedback, and exactly what I wanted to do but
>failed to achieve. Once we capture the reason up front, we don't
>need this dance to guess it after the fact.
>
>I've hacked a patch based on this approach. How does it sound?
Yes. This direction makes sense to me, not an expert though :D
I played with something similar (untested) on top of patch #01:
---8<---
diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 432d5f996c64..a2799f063913 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -74,6 +74,8 @@ static int sysctl_memory_failure_recovery __read_mostly = 1;
static int sysctl_enable_soft_offline __read_mostly = 1;
+static int sysctl_panic_on_unrecoverable_mf __read_mostly;
+
atomic_long_t num_poisoned_pages __read_mostly = ATOMIC_LONG_INIT(0);
static bool hw_memory_failure __read_mostly = false;
@@ -155,6 +157,15 @@ static const struct ctl_table memory_failure_table[] = {
.proc_handler = proc_dointvec_minmax,
.extra1 = SYSCTL_ZERO,
.extra2 = SYSCTL_ONE,
+ },
+ {
+ .procname = "panic_on_unrecoverable_memory_failure",
+ .data = &sysctl_panic_on_unrecoverable_mf,
+ .maxlen = sizeof(sysctl_panic_on_unrecoverable_mf),
+ .mode = 0644,
+ .proc_handler = proc_dointvec_minmax,
+ .extra1 = SYSCTL_ZERO,
+ .extra2 = SYSCTL_ONE,
}
};
@@ -1281,6 +1292,18 @@ static void update_per_node_mf_stats(unsigned long pfn,
++mf_stats->total;
}
+static bool panic_on_unrecoverable_mf(enum mf_action_page_type type,
+ enum mf_result result)
+{
+ if (!sysctl_panic_on_unrecoverable_mf || result != MF_IGNORED)
+ return false;
+
+ if (type == MF_MSG_KERNEL)
+ return true;
+
+ return false;
+}
+
/*
* "Dirty/Clean" indication is not 100% accurate due to the possibility of
* setting PG_dirty outside page lock. See also comment above set_page_dirty().
@@ -1298,6 +1321,9 @@ static int action_result(unsigned long pfn, enum mf_action_page_type type,
pr_err("%#lx: recovery action for %s: %s\n",
pfn, action_page_types[type], action_name[result]);
+ if (panic_on_unrecoverable_mf(type, result))
+ panic("Memory failure: %#lx: unrecoverable page", pfn);
+
return (result == MF_RECOVERED || result == MF_DELAYED) ? 0 : -EBUSY;
}
@@ -1389,11 +1415,27 @@ static int __get_hwpoison_page(struct page *page, unsigned long flags)
#define GET_PAGE_MAX_RETRY_NUM 3
-static int get_any_page(struct page *p, unsigned long flags)
+enum mf_get_page_status {
+ MF_GET_PAGE_OK = 0,
+ MF_GET_PAGE_RACE,
+ MF_GET_PAGE_UNHANDLABLE,
+};
+
+static void set_mf_get_page_status(enum mf_get_page_status *gp_status,
+ enum mf_get_page_status value)
+{
+ if (gp_status)
+ *gp_status = value;
+}
+
+static int get_any_page(struct page *p, unsigned long flags,
+ enum mf_get_page_status *gp_status)
{
int ret = 0, pass = 0;
bool count_increased = false;
+ set_mf_get_page_status(gp_status, MF_GET_PAGE_OK);
+
if (flags & MF_COUNT_INCREASED)
count_increased = true;
@@ -1406,11 +1448,13 @@ static int get_any_page(struct page *p, unsigned long flags)
if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EBUSY;
+ set_mf_get_page_status(gp_status, MF_GET_PAGE_RACE);
} else if (!PageHuge(p) && !is_free_buddy_page(p)) {
/* We raced with put_page, retry. */
if (pass++ < GET_PAGE_MAX_RETRY_NUM)
goto try_again;
ret = -EIO;
+ set_mf_get_page_status(gp_status, MF_GET_PAGE_RACE);
}
goto out;
} else if (ret == -EBUSY) {
@@ -1423,6 +1467,7 @@ static int get_any_page(struct page *p, unsigned long flags)
goto try_again;
}
ret = -EIO;
+ set_mf_get_page_status(gp_status, MF_GET_PAGE_UNHANDLABLE);
goto out;
}
}
@@ -1442,6 +1487,7 @@ static int get_any_page(struct page *p, unsigned long flags)
}
put_page(p);
ret = -EIO;
+ set_mf_get_page_status(gp_status, MF_GET_PAGE_UNHANDLABLE);
}
out:
if (ret == -EIO)
@@ -1480,6 +1526,7 @@ static int __get_unpoison_page(struct page *page)
* get_hwpoison_page() - Get refcount for memory error handling
* @p: Raw error page (hit by memory error)
* @flags: Flags controlling behavior of error handling
+ * @gp_status: Optional output for the reason get_any_page() failed
*
* get_hwpoison_page() takes a page refcount of an error page to handle memory
* error on it, after checking that the error page is in a well-defined state
@@ -1503,7 +1550,8 @@ static int __get_unpoison_page(struct page *page)
* operations like allocation and free,
* -EHWPOISON when the page is hwpoisoned and taken off from buddy.
*/
-static int get_hwpoison_page(struct page *p, unsigned long flags)
+static int get_hwpoison_page(struct page *p, unsigned long flags,
+ enum mf_get_page_status *gp_status)
{
int ret;
@@ -1511,7 +1559,7 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
if (flags & MF_UNPOISON)
ret = __get_unpoison_page(p);
else
- ret = get_any_page(p, flags);
+ ret = get_any_page(p, flags, gp_status);
zone_pcp_enable(page_zone(p));
return ret;
@@ -2341,6 +2389,7 @@ static int memory_failure_pfn(unsigned long pfn, int flags)
*/
int memory_failure(unsigned long pfn, int flags)
{
+ enum mf_get_page_status gp_status = MF_GET_PAGE_OK;
struct page *p;
struct folio *folio;
struct dev_pagemap *pgmap;
@@ -2413,7 +2462,7 @@ int memory_failure(unsigned long pfn, int flags)
* that may make page_ref_freeze()/page_ref_unfreeze() mismatch.
*/
is_reserved = PageReserved(p);
- res = get_hwpoison_page(p, flags);
+ res = get_hwpoison_page(p, flags, &gp_status);
if (!res) {
if (is_free_buddy_page(p)) {
if (take_page_off_buddy(p)) {
@@ -2437,9 +2486,13 @@ int memory_failure(unsigned long pfn, int flags)
/*
* Pages with PG_reserved set are not currently managed by the
* page allocator (memblock-reserved memory, driver reservations,
- * etc.), so classify them as kernel-owned for reporting.
+ * etc.), so classify them as kernel-owned for reporting. Do the
+ * same for pages that get_any_page() still cannot handle after
+ * retries: likely non-LRU/non-buddy pages such as slab, kernel
+ * stack, page table or vmalloc-backed pages. Transient lifecycle
+ * races stay as MF_MSG_GET_HWPOISON.
*/
- if (is_reserved)
+ if (is_reserved || gp_status == MF_GET_PAGE_UNHANDLABLE)
res = action_result(pfn, MF_MSG_KERNEL, MF_IGNORED);
else
res = action_result(pfn, MF_MSG_GET_HWPOISON,
@@ -2744,7 +2797,7 @@ int unpoison_memory(unsigned long pfn)
goto unlock_mutex;
}
- ghp = get_hwpoison_page(p, MF_UNPOISON);
+ ghp = get_hwpoison_page(p, MF_UNPOISON, NULL);
if (!ghp) {
if (folio_test_hugetlb(folio)) {
huge = true;
@@ -2951,7 +3004,7 @@ int soft_offline_page(unsigned long pfn, int flags)
retry:
get_online_mems();
- ret = get_hwpoison_page(page, flags | MF_SOFT_OFFLINE);
+ ret = get_hwpoison_page(page, flags | MF_SOFT_OFFLINE, NULL);
put_online_mems();
if (hwpoison_filter(page)) {
---
I would leave MF_MSG_KERNEL_HIGH_ORDER out for now. That path still
has the allocator race David pointed out, unless there is an easy way
to rule that out ...
I would also leave MF_MSG_UNKNOWN out. We don't really know what it is,
no? So it's not a good basis for a panic decision :)
Maybe it's better to keep panic_on_unrecoverable_mf() simple: classify
the get_any_page() failure reason earlier, but only panic on MF_MSG_KERNEL.
IMHO, making the knob too complicated for memory failures that should be
rare does not seem worth it. Just covering MF_MSG_KERNEL should already
help crash analysis a lot :)
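In other words, the whole knob reduces to one predicate. A rough
userspace sketch of the logic I have in mind (not the actual kernel
code, just the shape of the decision):

```python
def panic_on_unrecoverable_mf(sysctl_enabled: bool,
                              mf_type: str, result: str) -> bool:
    """Panic only for a stable, kernel-owned poisoning that was ignored."""
    if not sysctl_enabled or result != "MF_IGNORED":
        return False
    # Only MF_MSG_KERNEL qualifies: HIGH_ORDER still races the
    # allocator, and UNKNOWN is by definition not a sound basis
    # for a panic decision.
    return mf_type == "MF_MSG_KERNEL"

# With the sysctl enabled, only an ignored kernel-owned page panics:
print(panic_on_unrecoverable_mf(True, "MF_MSG_KERNEL", "MF_IGNORED"))   # True
print(panic_on_unrecoverable_mf(True, "MF_MSG_UNKNOWN", "MF_IGNORED"))  # False
```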
Feel free to pick up any bits that look useful :)
Cheers, Lance
[...]
* Re: [PATCH v5 2/4] mm/memory-failure: add panic option for unrecoverable pages
2026-05-10 14:42 ` Lance Yang
@ 2026-05-11 14:44 ` Breno Leitao
0 siblings, 0 replies; 3+ messages in thread
From: Breno Leitao @ 2026-05-11 14:44 UTC (permalink / raw)
To: Lance Yang
Cc: david, linmiaohe, nao.horiguchi, akpm, corbet, skhan, ljs,
Liam.Howlett, vbabka, rppt, surenb, mhocko, shuah, linux-mm,
linux-kernel, linux-doc, linux-kselftest, kernel-team
On Sun, May 10, 2026 at 10:42:20PM +0800, Lance Yang wrote:
>
> On Wed, May 06, 2026 at 09:18:12AM -0700, Breno Leitao wrote:
> >On Tue, Apr 28, 2026 at 11:07:21AM +0800, Lance Yang wrote:
> >> [...]
> >>
> >> Right! Maybe we should not try to make this decision in
> >> panic_on_unrecoverable_mf().
> >>
> >> By the time we get here, we only know the final MF_MSG_* type. The
> >> real reason why get_hwpoison_page() failed is already lost.
> >>
> >> Wonder if it would be better to split that earlier, around
> >> __get_unpoison_page()/get_any_page(). That code still knows why
> >> grabbing the page failed, either an unsupported kernel page or
> >> just a temporary race we cannot really trust :)
> >>
> >> Then the later panic logic can be simple: panic for the stable
> >> unsupported kernel page case, and not for the temporary race case.
> >>
> >> That would also avoid trying to guess MF_MSG_KERNEL_HIGH_ORDER here:)
> >
> >This is very good feedback, and exactly what I wanted to do but
> >failed to achieve. Once we capture the reason up front, we don't
> >need this dance to guess it after the fact.
> >
> >I've hacked a patch based on this approach. How does it sound?
>
> Yes. This direction makes sense to me, not an expert though :D
>
> I played with something similar (untested) on top of patch #01:
Thanks!
I'll prepare a new series addressing all the feedback from both
reviewers and the AI analysis. I will resend soon, and we can catch
up on the next revision.
Thanks for the review,
--breno