From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Jinjiang Tu <tujinjiang@huawei.com>,
akpm@linux-foundation.org, lorenzo.stoakes@oracle.com,
Liam.Howlett@oracle.com, vbabka@suse.cz, rppt@kernel.org,
surenb@google.com, mhocko@suse.com, fengwei.yin@intel.com,
baohua@kernel.org, ryan.roberts@arm.com, linux-mm@kvack.org
Cc: wangkefeng.wang@huawei.com, sunnanyong@huawei.com
Subject: Re: [PATCH v2] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
Date: Wed, 18 Mar 2026 10:02:57 +0100 [thread overview]
Message-ID: <88f37900-5c8b-4606-9081-e9b72acb0941@kernel.org> (raw)
In-Reply-To: <20260318012055.3593216-1-tujinjiang@huawei.com>
On 3/18/26 02:20, Jinjiang Tu wrote:
> On an arm64 server, we found that the folio obtained from a migration
> entry is not locked in softleaf_to_folio(). This issue triggers when mTHP
> splitting races with zap_nonpresent_ptes(), and the root cause is a
> missing memory barrier in softleaf_to_folio(). The race is as follows:
>
> CPU0                                          CPU1
>
> deferred_split_scan()                         zap_nonpresent_ptes()
>   lock folio
>   split_folio()
>     unmap_folio()
>       change ptes to migration entries
>     __split_folio_to_order()                  softleaf_to_folio()
>       set flags (including PG_locked)           folio = pfn_folio(softleaf_to_pfn(entry))
>         for tail pages
>       smp_wmb()                                 VM_WARN_ON_ONCE(!folio_test_locked(folio))
>       prep_compound_page() for tail pages
>
> In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
> the tail pages are visible before each tail page becomes non-compound.
> This smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(),
> which is missing. As a result, if zap_nonpresent_ptes() accesses a
> migration entry that stores a tail pfn, softleaf_to_folio() may observe
> the updated compound_head of the tail page before its page->flags.
>
> To fix it, add the missing smp_rmb() when the softleaf entry is a
> migration entry, in both softleaf_to_folio() and softleaf_to_page().
>
> Fixes: e9b61f19858a ("thp: reintroduce split_huge_page()")
> Signed-off-by: Jinjiang Tu <tujinjiang@huawei.com>
> ---
>
> Changes since v1:
> * update fix tag
> * use helper softleaf_migration_entry_check()
>
> include/linux/leafops.h | 29 ++++++++++++++++++-----------
> 1 file changed, 18 insertions(+), 11 deletions(-)
>
> diff --git a/include/linux/leafops.h b/include/linux/leafops.h
> index a9ff94b744f2..c7dbc3fb8ab6 100644
> --- a/include/linux/leafops.h
> +++ b/include/linux/leafops.h
> @@ -363,6 +363,22 @@ static inline unsigned long softleaf_to_pfn(softleaf_t entry)
> return swp_offset(entry) & SWP_PFN_MASK;
> }
>
> +static inline void softleaf_migration_entry_check(softleaf_t entry,
> + struct folio *folio)
> +{
> + if (!softleaf_is_migration(entry))
> + return;
> +
> + /* See __split_folio_to_order() comment */
> + smp_rmb();
> +
> + /*
> + * Any use of migration entries may only occur while the
> + * corresponding page is locked
> + */
> + VM_WARN_ON_ONCE(!folio_test_locked(folio));
> +}
> +
> /**
> * softleaf_to_page() - Obtains struct page for PFN encoded within leaf entry.
> * @entry: Leaf entry, softleaf_has_pfn(@entry) must return true.
> @@ -374,11 +390,7 @@ static inline struct page *softleaf_to_page(softleaf_t entry)
> struct page *page = pfn_to_page(softleaf_to_pfn(entry));
>
> VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
> - /*
> - * Any use of migration entries may only occur while the
> - * corresponding page is locked
> - */
> - VM_WARN_ON_ONCE(softleaf_is_migration(entry) && !PageLocked(page));
> + softleaf_migration_entry_check(entry, page_folio(page));
It might be better to do:

	if (softleaf_is_migration(entry))
		softleaf_migration_entry_check(entry, page_folio(page));

removing the softleaf_is_migration() check from
softleaf_migration_entry_check() itself. That way we avoid the
unconditional page_folio() and don't call the function at all for
non-migration entries.
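I.e., something like this (untested sketch, names as in your patch):

```c
/*
 * Callers are expected to have checked softleaf_is_migration()
 * before calling this.
 */
static inline void softleaf_migration_entry_check(softleaf_t entry,
		struct folio *folio)
{
	/* Pairs with smp_wmb() in __split_folio_to_order(). */
	smp_rmb();

	/*
	 * Any use of migration entries may only occur while the
	 * corresponding folio is locked.
	 */
	VM_WARN_ON_ONCE(!folio_test_locked(folio));
}

static inline struct page *softleaf_to_page(softleaf_t entry)
{
	struct page *page = pfn_to_page(softleaf_to_pfn(entry));

	VM_WARN_ON_ONCE(!softleaf_has_pfn(entry));
	if (softleaf_is_migration(entry))
		softleaf_migration_entry_check(entry, page_folio(page));

	return page;
}
```

(and the same pattern in softleaf_to_folio()).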
With that LGTM.
--
Cheers,
David
2026-03-18 1:20 [PATCH v2] mm/huge_memory: fix folio isn't locked in softleaf_to_folio() Jinjiang Tu
2026-03-18 9:02 ` David Hildenbrand (Arm) [this message]
2026-03-18 9:29 ` Jinjiang Tu