From: "David Hildenbrand (Arm)" <david@kernel.org>
To: "Boone, Max" <mboone@akamai.com>,
Andrew Morton <akpm@linux-foundation.org>
Cc: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>,
"Liam R . Howlett" <Liam.Howlett@oracle.com>,
Vlastimil Babka <vbabka@suse.cz>, Mike Rapoport <rppt@kernel.org>,
Suren Baghdasaryan <surenb@google.com>,
Michal Hocko <mhocko@suse.com>,
Alex Williamson <alex@shazbot.org>,
"linux-mm@kvack.org" <linux-mm@kvack.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"Tottenham, Max" <mtottenh@akamai.com>,
"Hunt, Joshua" <johunt@akamai.com>,
"Pelland, Matt" <mpelland@akamai.com>
Subject: Re: [RFC 1/1] mm/pagewalk: don't split device-backed huge pfnmaps
Date: Tue, 10 Mar 2026 10:11:51 +0100 [thread overview]
Message-ID: <da2ebc9f-ec82-4c55-851c-b27110efdf8c@kernel.org> (raw)
In-Reply-To: <CH2PR17MB3797A91D5107B6E347EEDD97BF79A@CH2PR17MB3797.namprd17.prod.outlook.com>
On 3/10/26 00:02, Boone, Max wrote:
> On Mar 9, 2026 9:19 PM, "David Hildenbrand (Arm)" <david@kernel.org> wrote:
>>
>> On 3/9/26 18:49, Max Boone wrote:
>>> Don't split and descend on special PMD/PUDs, which are generally
>>> device-backed huge pfnmaps as used by vfio for BAR mapping. These
>>> can be faulted back in after splitting and before descending, which
>>> can race to an illegal read.
>>>
>>> Signed-off-by: Max Boone <mboone@akamai.com>
>>> Signed-off-by: Max Tottenham <mtottenh@akamai.com>
>>>
>>> ---
>>> mm/pagewalk.c | 24 ++++++++++++++++++++----
>>> 1 file changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>>> index a94c401ab..d1460dd84 100644
>>> --- a/mm/pagewalk.c
>>> +++ b/mm/pagewalk.c
>>> @@ -147,10 +147,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
>>> continue;
>>> }
>>>
>>> - if (walk->vma)
>>> + if (walk->vma) {
>>> + /*
>>> + * Don't descend into device-backed pfnmaps,
>>> + * they might refault the PMD entry.
>>> + */
>>> + if (unlikely(pmd_special(*pmd)))
>>> + continue;
>>
>> In general, if you're using pmd_special()/pud_special() and friends in
>> ordinary page table walking code, you are doing something wrong. We
>> don't want to leak these details into such page table walkers.
>
> That sounds sensible; there is a check in the split_huge_pud macro, which previously included DAX memory.
>
> Related to handling that macro, I see another proposed patch for lazy provisioning of PTEs for PMD-order THPs [1]. Perhaps adding a return code to the split functions would allow a better solution here as well?
>
Maybe. I think the behavior of trying to split is okay; we just have
to teach the code to deal with races, because the very same problem
can likely be triggered by the splitting/unmapping happening
concurrently from another thread in some other code path.
> I'm not sure that making the split (or rather the unmap; calling it a split has been a bit confusing to me, as it doesn't allocate PMDs) a no-op will improve things, as to my understanding it will still try to descend.
>
>> We do have vm_normal_page_pmd() to identify special mappings, but I
>> first have to understand what exactly you are trying to solve here.
>
> Specifically for the page walker, avoid splitting and descending into the PUD-order pfnmaps that VFIO creates for BAR mappings, as these can represent hardware control registers rather than regular memory. I haven't been able to reproduce it with PMD-level pfnmaps, but I'll build a kernel with PUD-level pfnmaps disabled tomorrow.
>
> But of course I'm mainly concerned with fixing the race so that reading numa_maps does not cause an illegal read, which crashes the reading process while it holds the mmap lock of the target process (with subsequent reads of /proc freezing, waiting for an mmap lock they'll never get).
Right, that's what we should focus on.
>
>> (You would also be affecting the remapping of the huge zero folio.)
>
> Ah, good one, I do think that this race can occur with PMD-level mappings, looking at the walking & splitting logic - but given the PUD-level mapping triggered the (rare) occurrence I'm fine to focus there first. I guess it helps we don't have 1G THPs, but it would be good to treat 2M and 1G similarly?
I don't think it can happen for PMDs, as pte_offset_map_lock() double-checks
that we really have a page table there. See __pte_offset_map() where we do a

	pmdval = pmdp_get_lockless(pmd);
	...
	if (unlikely(pmd_none(pmdval) || !pmd_present(pmdval)))
		goto nomap;
	if (unlikely(pmd_trans_huge(pmdval)))
		goto unmap;
	...
	return __pte_map(&pmdval, addr);

If someone re-faulted the PMD, this function will detect it and reject
walking it as a PMD table.

PMD handling code has to deal with page table removal, so it needs
some extra steps.

For PUD handling we don't need that. Once we spot a PUD table, it's
not going to get yanked underneath our feet.
>
>> A lot more details from the cover letter belong into the patch
>> description. In fact, you don't even need a cover letter :)
>
> Hehe, first timer, still figuring out the process.
:)
>
>> IIUC, this is rather serious and would require a Fixes: and even Cc: stable?
>>
>> I'll spend some time tomorrow trying to understand what the real problem
>> here is.
>
> I think so; the bug can easily be triggered by repeatedly booting a VM that passes through a PCI device with large BARs while continuously reading the numa_maps of the main VM process. The reproducer script is mainly there to narrow things down to the specific part where the race occurs: the VFIO DMA set ioctl.
>
> Should I raise a bug email to refer to, and resubmit a new RFC v2 (without the cover letter), or keep discussion in this thread for now?
No, it's okay. Let's first discuss the proper fix.
>
>> But for now: can this only be reproduced with PUDs (which you mention in
>> the cover letter) or also with PMDs?
>>
>> For the PMD case I would assume that pte_offset_map_lock() performs
>> the proper checks. And for the PUD case we are missing a re-check under the PTL.
>
> Have only seen it with PUDs, will try forcing the mapping to happen with PMDs tomorrow.
Can you try the following:
From b3f0a85b9f071e338097147f997f20d1ac796155 Mon Sep 17 00:00:00 2001
From: "David Hildenbrand (Arm)" <david@kernel.org>
Date: Tue, 10 Mar 2026 10:09:39 +0100
Subject: [PATCH] tmp
Signed-off-by: David Hildenbrand (Arm) <david@kernel.org>
---
mm/pagewalk.c | 22 ++++++++++++++++++----
1 file changed, 18 insertions(+), 4 deletions(-)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index cb358558807c..779f6fa00ab7 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -96,6 +96,7 @@ static int walk_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end,
static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
struct mm_walk *walk)
{
+ pud_t pudval = pudp_get(pud);
pmd_t *pmd;
unsigned long next;
const struct mm_walk_ops *ops = walk->ops;
@@ -104,6 +105,18 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
int err = 0;
int depth = real_depth(3);
+ /*
+ * For PTE handling, pte_offset_map_lock() takes care of checking
+ * whether there actually is a page table. But it also has to be
+ * very careful about concurrent page table reclaim. If we spot a PMD
+ * table, it cannot go away, so we can just walk it. However, if we find
+ * something else, we have to retry.
+ */
+ if (!pud_present(pudval) || pud_leaf(pudval)) {
+ walk->action = ACTION_AGAIN;
+ return 0;
+ }
+
pmd = pmd_offset(pud, addr);
do {
again:
@@ -176,7 +189,7 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
pud = pud_offset(p4d, addr);
do {
- again:
+again:
next = pud_addr_end(addr, end);
if (pud_none(*pud)) {
if (has_install)
@@ -217,12 +230,13 @@ static int walk_pud_range(p4d_t *p4d, unsigned long addr, unsigned long end,
else if (pud_leaf(*pud) || !pud_present(*pud))
continue; /* Nothing to do. */
- if (pud_none(*pud))
- goto again;
-
err = walk_pmd_range(pud, addr, next, walk);
if (err)
break;
+
+ if (walk->action == ACTION_AGAIN)
+ goto again;
+
} while (pud++, addr = next, addr != end);
return err;
--
2.43.0
--
Cheers,
David
Thread overview: 14+ messages
2026-03-09 17:49 [RFC 0/1] Avoid pagewalk hugepage-split race with VFIO DMA set Max Boone
2026-03-09 17:49 ` [RFC 1/1] mm/pagewalk: don't split device-backed huge pfnmaps Max Boone
2026-03-09 20:19 ` David Hildenbrand (Arm)
2026-03-09 22:47 ` Boone, Max
2026-03-09 23:02 ` Boone, Max
2026-03-10 9:11 ` David Hildenbrand (Arm) [this message]
2026-03-10 11:38 ` Boone, Max
2026-03-10 15:19 ` David Hildenbrand (Arm)
2026-03-11 9:42 ` Boone, Max
2026-03-11 9:59 ` David Hildenbrand (Arm)
2026-03-11 10:34 ` Boone, Max
2026-03-11 10:45 ` David Hildenbrand (Arm)
2026-03-11 11:14 ` Boone, Max
2026-03-11 11:59 ` David Hildenbrand (Arm)