From: "David Hildenbrand (Arm)" <david@kernel.org>
To: Jinjiang Tu <tujinjiang@huawei.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: lorenzo.stoakes@oracle.com, Liam.Howlett@oracle.com,
vbabka@kernel.org, rppt@kernel.org, surenb@google.com,
mhocko@suse.com, baohua@kernel.org, ryan.roberts@arm.com,
linux-mm@kvack.org, wangkefeng.wang@huawei.com,
sunnanyong@huawei.com
Subject: Re: [PATCH v3] mm/huge_memory: fix folio isn't locked in softleaf_to_folio()
Date: Fri, 20 Mar 2026 09:10:04 +0100 [thread overview]
Message-ID: <37a204f2-796d-4d15-b21b-09fd4a9e77c2@kernel.org> (raw)
In-Reply-To: <d9d50263-a69f-4a16-b409-caeec93c423a@huawei.com>
On 3/20/26 02:52, Jinjiang Tu wrote:
>
> 在 2026/3/20 6:51, Andrew Morton 写道:
>> On Thu, 19 Mar 2026 09:25:41 +0800 Jinjiang Tu <tujinjiang@huawei.com>
>> wrote:
>>
>>> On an arm64 server, we found that the folio obtained from a migration
>>> entry isn't locked in softleaf_to_folio(). The issue triggers when mTHP
>>> splitting races with zap_nonpresent_ptes(), and the root cause is a
>>> missing memory barrier in softleaf_to_folio(). The race is as follows:
>>>
>>> CPU0                                      CPU1
>>>
>>> deferred_split_scan()                     zap_nonpresent_ptes()
>>>  lock folio
>>>  split_folio()
>>>   unmap_folio()
>>>    change ptes to migration entries
>>>   __split_folio_to_order()
>>>                                            softleaf_to_folio()
>>>    set flags (including PG_locked)          folio =
>>>     for tail pages                           pfn_folio(softleaf_to_pfn(entry))
>>>    smp_wmb()
>>>                                             VM_WARN_ON_ONCE(!folio_test_locked(folio))
>>>    prep_compound_page() for tail pages
>>>
>>> In __split_folio_to_order(), smp_wmb() guarantees that the page flags of
>>> the tail pages are visible before a tail page becomes non-compound. That
>>> smp_wmb() should be paired with an smp_rmb() in softleaf_to_folio(),
>>> which is missing. As a result, if zap_nonpresent_ptes() accesses a
>>> migration entry that stores a tail pfn, softleaf_to_folio() may observe
>>> the updated compound_head of the tail page before its page->flags.
>> Please describe the userspace-visible runtime effects of this bug.
>
> This issue will trigger VM_WARN_ON_ONCE() in pfn_swap_entry_folio().
But the impact is bigger, right, when callers rely on folio_test_anon() etc?
--
Cheers,
David
Thread overview: 11+ messages
2026-03-19 1:25 [PATCH v3] mm/huge_memory: fix folio isn't locked in softleaf_to_folio() Jinjiang Tu
2026-03-19 8:49 ` David Hildenbrand (Arm)
2026-03-19 22:51 ` Andrew Morton
2026-03-20 1:52 ` Jinjiang Tu
2026-03-20 2:31 ` Andrew Morton
2026-03-20 8:10 ` David Hildenbrand (Arm) [this message]
2026-03-20 8:56 ` Jinjiang Tu
2026-03-20 9:57 ` Lorenzo Stoakes (Oracle)
2026-03-20 10:22 ` David Hildenbrand (Arm)
2026-03-20 10:13 ` Lorenzo Stoakes (Oracle)
2026-03-21 2:40 ` Jinjiang Tu