linux-mm.kvack.org archive mirror
From: David Hildenbrand <david@redhat.com>
To: Jinjiang Tu <tujinjiang@huawei.com>,
	akpm@linux-foundation.org, linmiaohe@huawei.com,
	osalvador@suse.de, mhocko@kernel.org, ziy@nvidia.com
Cc: linux-mm@kvack.org, wangkefeng.wang@huawei.com
Subject: Re: [PATCH v3] mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list
Date: Fri, 11 Jul 2025 10:05:45 +0200	[thread overview]
Message-ID: <990715ed-f660-4b88-b850-57d6aee6ee59@redhat.com> (raw)
In-Reply-To: <20250711021734.2362044-1-tujinjiang@huawei.com>

On 11.07.25 04:17, Jinjiang Tu wrote:
> In shrink_folio_list(), the hwpoisoned folio may be a large folio, which
> can't be handled by unmap_poisoned_folio(). For THP, try_to_unmap_one()
> must be passed TTU_SPLIT_HUGE_PMD to split the huge PMD first and then
> retry. Without TTU_SPLIT_HUGE_PMD, we will trigger a null-ptr deref of
> pvmw.pte. Even if we pass TTU_SPLIT_HUGE_PMD, we will trigger a
> WARN_ON_ONCE because the page isn't in the swapcache.
> 
> Since UCEs are rare in the real world, and racing with reclaim is rarer
> still, just skipping the hwpoisoned large folio is enough. memory_failure()
> will handle it if the UCE is triggered again.
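
For context: the mm/vmscan.c hunk is trimmed from the quote below, but the
skip described above would look roughly like the following inside
shrink_folio_list(). This is only a sketch of the approach, not the actual
hunk from the patch:

	if (unlikely(folio_test_hwpoison(folio) && folio_mapped(folio))) {
		/*
		 * unmap_poisoned_folio() can't deal with a large folio
		 * here: for a THP, try_to_unmap_one() would need
		 * TTU_SPLIT_HUGE_PMD and would still WARN because the
		 * page isn't in the swapcache. Skip it and let
		 * memory_failure() handle it if the UCE is triggered
		 * again.
		 */
		if (folio_test_large(folio))
			goto keep_locked;
		unmap_poisoned_folio(folio, folio_pfn(folio), false);
	}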
> 
> Fixes: 1b0449544c64 ("mm/vmscan: don't try to reclaim hwpoison folio")
> Reported-by: syzbot+3b220254df55d8ca8a61@syzkaller.appspotmail.com
> Closes: https://lore.kernel.org/all/68412d57.050a0220.2461cf.000e.GAE@google.com/
> Acked-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> ---
> v3:
>   * collect Acked-by and Reviewed-by
>   * update commit message and comments, suggested by Oscar Salvador.
> 
>   mm/memory-failure.c | 4 ++++
>   mm/vmscan.c         | 8 ++++++++
>   2 files changed, 12 insertions(+)
> 
> diff --git a/mm/memory-failure.c b/mm/memory-failure.c
> index b91a33fb6c69..9ee176fcc949 100644
> --- a/mm/memory-failure.c
> +++ b/mm/memory-failure.c
> @@ -1561,6 +1561,10 @@ static int get_hwpoison_page(struct page *p, unsigned long flags)
>   	return ret;
>   }
>   
> +/*
> + * The caller must guarantee the folio isn't large folio. try_to_unmap()
> + * can't handle it.

Not completely accurate: it may be a hugetlb folio, which is also large 
but supported.

"isn't a large folio, except hugetlb."

-- 
Cheers,

David / dhildenb



Thread overview: 7+ messages
2025-07-11  2:17 [PATCH v3] mm/vmscan: fix hwpoisoned large folio handling in shrink_folio_list Jinjiang Tu
2025-07-11  3:04 ` Zi Yan
2025-07-11  5:37 ` Oscar Salvador
2025-07-11  8:05 ` David Hildenbrand [this message]
2025-07-11  8:55   ` [PATCH v4] " Jinjiang Tu
2025-07-12 23:42     ` Andrew Morton
2025-07-11  8:56   ` [PATCH v3] " Jinjiang Tu
