From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [PATCH v7 2/6] mm/memory-failure: surface unhandlable kernel pages as -ENOTRECOVERABLE
To: Breno Leitao
CC: Andrew Morton, David Hildenbrand, Lorenzo Stoakes, Vlastimil Babka,
 Mike Rapoport, Suren Baghdasaryan, Michal Hocko, Shuah Khan,
 Naoya Horiguchi, Steven Rostedt, Masami Hiramatsu, Mathieu Desnoyers,
 Jonathan Corbet, Shuah Khan, Liam R. Howlett
References: <20260513-ecc_panic-v7-0-be2e578e61da@debian.org>
 <20260513-ecc_panic-v7-2-be2e578e61da@debian.org>
From: Miaohe Lin
Message-ID: <5cbb6038-72e3-9eda-7d1a-464f879fabb9@huawei.com>
Date: Fri, 15 May 2026 11:04:00 +0800
X-Mailing-List: linux-doc@vger.kernel.org
In-Reply-To: <20260513-ecc_panic-v7-2-be2e578e61da@debian.org>

On 2026/5/13 23:39, Breno Leitao wrote:
> get_any_page() collapses three different failure modes into a single
> -EIO return:
>
> * the put_page race in the !count_increased path;
> * the HWPoisonHandlable() rejection that bounces out of
>   __get_hwpoison_page() with -EBUSY and exhausts shake_page() retries;
> * the HWPoisonHandlable() rejection that goes through the
>   count_increased / put_page / shake_page retry loop.
>
> The first is transient (the page is racing with the allocator). The
> second can be either transient (a userspace folio briefly off LRU
> during migration/compaction) or stable (slab/vmalloc/page-table/
> kernel-stack pages). The third describes a stable kernel-owned page
> that the count_increased=true caller already held a reference on.
>
> Distinguish them on the return path: keep -EIO for both the put_page
> race and the -EBUSY-after-retries branch (shake_page() cannot drag a
> folio back from active migration, so we cannot prove the page is
> permanently kernel-owned from there), keep -EBUSY for the allocation
> race (unchanged), and return -ENOTRECOVERABLE only from the
> count_increased-true HWPoisonHandlable() rejection that exhausts its
> retries -- the caller's reference is structural evidence that the
> page is owned by the kernel.
>
> Extend the unhandlable-page pr_err() to fire for either errno and
> update the get_hwpoison_page() kerneldoc.
>
> memory_failure() still folds every negative return into
> MF_MSG_GET_HWPOISON via its existing "else if (res < 0)" branch, so
> this patch is a no-op for users of memory_failure() and only changes
> the errno that soft_offline_page() can propagate to its callers. A
> follow-up wires the new return code through memory_failure() and
> reports MF_MSG_KERNEL for the unrecoverable cases.
>
> Suggested-by: David Hildenbrand
> Signed-off-by: Breno Leitao
> ---
>  mm/memory-failure.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index 49bcfbd04d213..bae883df3ccb2 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1408,6 +1408,15 @@ static int get_any_page(struct page *p, unsigned long flags)
>  			shake_page(p);
>  			goto try_again;
>  		}
> +		/*
> +		 * Return -EIO rather than -ENOTRECOVERABLE: this
> +		 * branch is also reached for pages that are merely
> +		 * off-LRU transiently (e.g. a folio in the middle
> +		 * of migration or compaction), which shake_page()
> +		 * cannot drag back. The caller cannot prove the
> +		 * page is permanently kernel-owned from here, so
> +		 * keep it on the recoverable errno.
> +		 */
>  		ret = -EIO;
>  		goto out;
>  	}
> @@ -1427,10 +1436,10 @@ static int get_any_page(struct page *p, unsigned long flags)
>  			goto try_again;
>  		}
>  		put_page(p);
> -		ret = -EIO;
> +		ret = -ENOTRECOVERABLE;

Theoretically, pages that are merely off-LRU transiently, as you
commented above, could reach here too? Or am I missing something?

Thanks.

>  	}
>  out:
> -	if (ret == -EIO)
> +	if (ret == -EIO || ret == -ENOTRECOVERABLE)
>  		pr_err("%#lx: unhandlable page.\n", page_to_pfn(p));
>
>  	return ret;
> @@ -1487,7 +1496,10 @@ static int __get_unpoison_page(struct page *page)
>   * -EIO for pages on which we can not handle memory errors,
>   * -EBUSY when get_hwpoison_page() has raced with page lifecycle
>   *	operations like allocation and free,
> - * -EHWPOISON when the page is hwpoisoned and taken off from buddy.
> + * -EHWPOISON when the page is hwpoisoned and taken off from buddy,
> + * -ENOTRECOVERABLE for stable kernel-owned pages the handler
> + *	cannot recover (PG_reserved, slab, vmalloc, page tables,
> + *	kernel stacks, and similar non-LRU/non-buddy pages).
>   */
>  static int get_hwpoison_page(struct page *p, unsigned long flags)
>  {
>