From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Sun, 09 Nov 2025 21:20:20 -0800
To: mm-commits@vger.kernel.org,yang@os.amperecomputing.com,willy@infradead.org,stable@vger.kernel.org,ryan.roberts@arm.com,richard.weiyang@gmail.com,npache@redhat.com,nao.horiguchi@gmail.com,mcgrof@kernel.org,lorenzo.stoakes@oracle.com,linmiaohe@huawei.com,liam.howlett@oracle.com,lance.yang@linux.dev,kernel@pankajraghav.com,jane.chu@oracle.com,dev.jain@arm.com,david@redhat.com,baolin.wang@linux.alibaba.com,baohua@kernel.org,ziy@nvidia.com,akpm@linux-foundation.org
From: Andrew Morton
Subject: [merged mm-hotfixes-stable] mm-huge_memory-preserve-pg_has_hwpoisoned-if-a-folio-is-split-to-0-order.patch removed from -mm tree
Message-Id: <20251110052021.41D24C4CEF5@smtp.kernel.org>

The quilt patch titled
     Subject: mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
has been removed from the -mm tree.  Its filename was
     mm-huge_memory-preserve-pg_has_hwpoisoned-if-a-folio-is-split-to-0-order.patch

This patch was dropped because it was merged into the mm-hotfixes-stable
branch of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

------------------------------------------------------
From: Zi Yan
Subject: mm/huge_memory: preserve PG_has_hwpoisoned if a folio is split to >0 order
Date: Wed, 22 Oct 2025 23:05:21 -0400

Folio split clears PG_has_hwpoisoned, but the flag should be preserved in
any after-split folio that contains a page with the PG_hwpoisoned flag
when the folio is split to >0 order.  Scan all pages in a to-be-split
folio to determine which after-split folios need the flag.
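To make the per-chunk scan concrete, here is a minimal userspace model of
the approach (a sketch only, not kernel code: page_poisoned[] and
chunk_has_hwpoison() are illustrative stand-ins for the kernel's
PageHWPoison() test and for the page_range_has_hwpoisoned() helper added
by the patch below).  The real split code treats the first chunk
specially, since that after-split folio inherits the original folio's
flags, but the chunk-by-chunk decision is the same:

	#include <stdbool.h>
	#include <stdio.h>

	/* Userspace stand-in for PageHWPoison(): one flag per page. */
	static bool page_poisoned[1 << 9];	/* model an order-9 (512-page) folio */

	/* Mirrors the patch's page_range_has_hwpoisoned() helper. */
	static bool chunk_has_hwpoison(const bool *pages, long nr_pages)
	{
		for (long i = 0; i < nr_pages; i++)
			if (pages[i])
				return true;
		return false;
	}

	int main(void)
	{
		const int old_order = 9, new_order = 3;
		const long nr_pages = 1L << old_order;
		const long new_nr_pages = 1L << new_order;

		page_poisoned[100] = true;	/* pretend page 100 is hwpoisoned */

		/*
		 * Walk the old folio in new_nr_pages-sized chunks; each chunk
		 * becomes one after-split folio and keeps the flag only if it
		 * actually contains a poisoned page.
		 */
		for (long i = 0; i < nr_pages; i += new_nr_pages)
			if (chunk_has_hwpoison(&page_poisoned[i], new_nr_pages))
				printf("after-split folio covering pages %ld-%ld keeps PG_has_hwpoisoned\n",
				       i, i + new_nr_pages - 1);

		return 0;
	}

Compiled and run, this reports only the after-split folio covering pages
96-103; every other chunk stays clean.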
An alternative is to change PG_has_hwpoisoned to PG_maybe_hwpoisoned to
avoid the scan and set it on all after-split folios, but the resulting
false positives would have an undesirable negative impact: to rule them
out, every caller of folio_test_has_hwpoisoned() and
folio_contain_hwpoisoned_page() would need to do the scan itself.  That
would be a hassle for current and future callers, and more costly than
doing the scan in the split code.  More details are discussed in [1].

This issue can be exposed via:
1. splitting a has_hwpoisoned folio to >0 order from the debugfs interface;
2. truncating part of a has_hwpoisoned folio in truncate_inode_partial_folio().

Later accesses to a hwpoisoned page then become possible because of the
missing has_hwpoisoned folio flag, which will lead to MCE errors.

Link: https://lore.kernel.org/all/CAHbLzkoOZm0PXxE9qwtF4gKR=cpRXrSrJ9V9Pm2DJexs985q4g@mail.gmail.com/ [1]
Link: https://lkml.kernel.org/r/20251023030521.473097-1-ziy@nvidia.com
Fixes: c010d47f107f ("mm: thp: split huge page to any lower order pages")
Signed-off-by: Zi Yan
Acked-by: David Hildenbrand
Reviewed-by: Yang Shi
Reviewed-by: Lorenzo Stoakes
Reviewed-by: Lance Yang
Reviewed-by: Miaohe Lin
Reviewed-by: Baolin Wang
Reviewed-by: Wei Yang
Cc: Pankaj Raghav
Cc: Barry Song
Cc: Dev Jain
Cc: Jane Chu
Cc: Liam Howlett
Cc: Luis Chamberlain
Cc: Matthew Wilcox (Oracle)
Cc: Naoya Horiguchi
Cc: Nico Pache
Cc: Ryan Roberts
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton
---

 mm/huge_memory.c |   23 ++++++++++++++++++++---
 1 file changed, 20 insertions(+), 3 deletions(-)

--- a/mm/huge_memory.c~mm-huge_memory-preserve-pg_has_hwpoisoned-if-a-folio-is-split-to-0-order
+++ a/mm/huge_memory.c
@@ -3263,6 +3263,14 @@ bool can_split_folio(struct folio *folio
 					caller_pins;
 }
 
+static bool page_range_has_hwpoisoned(struct page *page, long nr_pages)
+{
+	for (; nr_pages; page++, nr_pages--)
+		if (PageHWPoison(page))
+			return true;
+	return false;
+}
+
 /*
  * It splits @folio into @new_order folios and copies the @folio metadata to
  * all the resulting folios.
@@ -3270,17 +3278,24 @@ bool can_split_folio(struct folio *folio
 static void __split_folio_to_order(struct folio *folio, int old_order,
 		int new_order)
 {
+	/* Scan poisoned pages when split a poisoned folio to large folios */
+	const bool handle_hwpoison = folio_test_has_hwpoisoned(folio) && new_order;
 	long new_nr_pages = 1 << new_order;
 	long nr_pages = 1 << old_order;
 	long i;
 
+	folio_clear_has_hwpoisoned(folio);
+
+	/* Check first new_nr_pages since the loop below skips them */
+	if (handle_hwpoison &&
+	    page_range_has_hwpoisoned(folio_page(folio, 0), new_nr_pages))
+		folio_set_has_hwpoisoned(folio);
 	/*
 	 * Skip the first new_nr_pages, since the new folio from them have all
 	 * the flags from the original folio.
 	 */
 	for (i = new_nr_pages; i < nr_pages; i += new_nr_pages) {
 		struct page *new_head = &folio->page + i;
-
 		/*
 		 * Careful: new_folio is not a "real" folio before we cleared PageTail.
 		 * Don't pass it around before clear_compound_head().
@@ -3322,6 +3337,10 @@ static void __split_folio_to_order(struc
 			 (1L << PG_dirty) |
 			 LRU_GEN_MASK | LRU_REFS_MASK));
 
+		if (handle_hwpoison &&
+		    page_range_has_hwpoisoned(new_head, new_nr_pages))
+			folio_set_has_hwpoisoned(new_folio);
+
 		new_folio->mapping = folio->mapping;
 		new_folio->index = folio->index + i;
 
@@ -3422,8 +3441,6 @@ static int __split_unmapped_folio(struct
 	if (folio_test_anon(folio))
 		mod_mthp_stat(order, MTHP_STAT_NR_ANON, -1);
 
-	folio_clear_has_hwpoisoned(folio);
-
 	/*
 	 * split to new_order one order at a time. For uniform split,
 	 * folio is split to new_order directly.
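As a worked example of the effect (an illustration, not part of the
patch): splitting an order-9 folio (512 pages) to new_order 3 yields 64
after-split folios of 8 pages each.  With a single hwpoisoned page at
index 100, the scan sets PG_has_hwpoisoned only on the after-split folio
covering pages 96-103 (100 rounded down to a multiple of 8), while the
rejected PG_maybe_hwpoisoned alternative would have flagged all 64 folios
and pushed the disambiguating scan onto every caller.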
_

Patches currently in -mm which might be from ziy@nvidia.com are

mm-huge_memory-fix-folio-split-check-for-anon-folios-in-swapcache.patch
mm-huge_memory-add-split_huge_page_to_order.patch
mm-memory-failure-improve-large-block-size-folio-handling.patch
mm-huge_memory-fix-kernel-doc-comments-for-folio_split-and-related.patch
mm-huge_memory-fix-kernel-doc-comments-for-folio_split-and-related-fix.patch
mm-huge_memory-fix-kernel-doc-comments-for-folio_split-and-related-fix-2.patch
migrate-optimise-alloc_migration_target-fix.patch