From: Baolu Lu <baolu.lu@linux.intel.com>
To: "Tian, Kevin" <kevin.tian@intel.com>, Jason Gunthorpe <jgg@nvidia.com>
Cc: "Hansen, Dave" <dave.hansen@intel.com>,
Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>, Jann Horn <jannh@google.com>,
Vasant Hegde <vasant.hegde@amd.com>,
Alistair Popple <apopple@nvidia.com>,
Peter Zijlstra <peterz@infradead.org>,
Uladzislau Rezki <urezki@gmail.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Andy Lutomirski <luto@kernel.org>, "Lai, Yi1" <yi1.lai@intel.com>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
"security@kernel.org" <security@kernel.org>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"stable@vger.kernel.org" <stable@vger.kernel.org>
Subject: Re: [PATCH v3 1/1] iommu/sva: Invalidate KVA range on kernel TLB flush
Date: Mon, 18 Aug 2025 13:58:55 +0800 [thread overview]
Message-ID: <c534dd05-c1b3-4ed3-bcde-83849d779f32@linux.intel.com> (raw)
In-Reply-To: <BN9PR11MB52760702D919B524849F08F28C34A@BN9PR11MB5276.namprd11.prod.outlook.com>
On 8/15/25 17:46, Tian, Kevin wrote:
>> From: Baolu Lu <baolu.lu@linux.intel.com>
>> Sent: Friday, August 15, 2025 5:17 PM
>>
>> On 8/8/2025 10:57 AM, Tian, Kevin wrote:
>>>> From: Jason Gunthorpe <jgg@nvidia.com>
>>>> Sent: Friday, August 8, 2025 3:52 AM
>>>>
>>>> On Thu, Aug 07, 2025 at 10:40:39PM +0800, Baolu Lu wrote:
>>>>> +static void kernel_pte_work_func(struct work_struct *work)
>>>>> +{
>>>>> + struct page *page, *next;
>>>>> +
>>>>> + iommu_sva_invalidate_kva_range(0, TLB_FLUSH_ALL);
>>>>> +
>>>>> + guard(spinlock)(&kernel_pte_work.lock);
>>>>> + list_for_each_entry_safe(page, next, &kernel_pte_work.list, lru) {
>>>>> + list_del_init(&page->lru);
>>>>
>>>> Please don't add new usages of lru, we are trying to get rid of this. :(
>>>>
>>>> I think the memory should be struct ptdesc, use that..
>>>>
>>>
>>> btw with this change we should also defer free of the pmd page:
>>>
>>> pud_free_pmd_page()
>>> ...
>>> for (i = 0; i < PTRS_PER_PMD; i++) {
>>> if (!pmd_none(pmd_sv[i])) {
>>> pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]);
>>> pte_free_kernel(&init_mm, pte);
>>> }
>>> }
>>>
>>> free_page((unsigned long)pmd_sv);
>>>
>>> Otherwise the risk still exists if the pmd page is repurposed before the
>>> pte work is scheduled.
>>
>> You're right that freeing high-level page table pages also requires an
>> IOTLB flush before the pages are freed. But I question the practical
>> risk of the race given the extremely small time window. If this is a
>
> It's already extremely difficult to conduct a real attack even w/o this
> fix. I'm not sure the criteria how small we consider acceptable in this
> specific case. but leaving an incomplete fix in code doesn't sound clean...
>
>> real concern, a potential mitigation would be to clear the U/S bits in
>> all page table entries for kernel address space? But I am not confident
>> in making that change at this time as I am unsure of the side effects it
>> might cause.
>
> I think there was already consensus that clearing U/S bits in all entries
> doesn't prevent the IOMMU caching them and setting A/D bits on
> the freed pagetable.
>
>>
>>>
>>> another observation - pte_free_kernel is not used in remove_pagetable ()
>>> and __change_page_attr(). Is it straightforward to put it in those paths
>>> or do we need duplicate some deferring logic there?
>>
>> The remove_pagetable() function is called in the path where memory is
>> hot-removed from the system, right? If so, there should be no issue, as
>> the threat model here is a page table page being freed and repurposed
>> while it's still cached in the IOTLB. In the hot-remove case, the memory
>> is removed and will not be reused, so that's fine as far as I can see.
>
> what about the page is hot-added back while the stale entry pointing to
> it is still valid in the IOMMU, theoretically? 😊
>
>>
>> The same to __change_page_attr(), which only changes the attributes of a
>> page table entry while the underlying page remains in use.
>>
>
> it may lead to cpa_collapse_large_pages() if changing attribute leads to
> all adjacent 4k pages in 2M range are with same attribute. Then page
> table might be freed:
>
> cpa_collapse_large_pages():
> list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
> list_del(&ptdesc->pt_list);
> __free_page(ptdesc_page(ptdesc));
> }
All look fair enough to me. I will handle all the cases and make it
complete.
Thanks,
baolu
next prev parent reply other threads:[~2025-08-18 6:01 UTC|newest]
Thread overview: 49+ messages / expand[flat|nested] mbox.gz Atom feed top
2025-08-06 5:25 [PATCH v3 1/1] iommu/sva: Invalidate KVA range on kernel TLB flush Lu Baolu
2025-08-06 15:03 ` Dave Hansen
2025-08-06 15:52 ` Jason Gunthorpe
2025-08-06 16:04 ` Dave Hansen
2025-08-06 16:09 ` Jason Gunthorpe
2025-08-06 16:34 ` Dave Hansen
2025-08-06 16:42 ` Jason Gunthorpe
2025-08-07 14:40 ` Baolu Lu
2025-08-07 15:31 ` Dave Hansen
2025-08-08 5:15 ` Baolu Lu
2025-08-10 7:19 ` Ethan Zhao
2025-08-11 9:15 ` Uladzislau Rezki
2025-08-11 12:55 ` Jason Gunthorpe
2025-08-15 9:23 ` Baolu Lu
2025-08-11 13:55 ` Dave Hansen
2025-08-11 14:56 ` Uladzislau Rezki
2025-08-12 1:17 ` Ethan Zhao
2025-08-15 14:35 ` Dave Hansen
2025-08-11 12:57 ` Jason Gunthorpe
2025-08-13 3:17 ` Ethan Zhao
2025-08-18 1:34 ` Baolu Lu
2025-08-07 19:51 ` Jason Gunthorpe
2025-08-08 2:57 ` Tian, Kevin
2025-08-15 9:16 ` Baolu Lu
2025-08-15 9:46 ` Tian, Kevin
2025-08-18 5:58 ` Baolu Lu [this message]
2025-08-15 14:31 ` Dave Hansen
2025-08-18 6:08 ` Baolu Lu
2025-08-18 6:21 ` Baolu Lu
2025-08-21 7:05 ` Tian, Kevin
2025-08-23 3:26 ` Baolu Lu
2025-08-25 22:36 ` Dave Hansen
2025-08-26 1:25 ` Baolu Lu
2025-08-26 2:49 ` Baolu Lu
2025-08-26 14:22 ` Dave Hansen
2025-08-26 14:33 ` Matthew Wilcox
2025-08-26 14:57 ` Dave Hansen
2025-08-27 10:58 ` Baolu Lu
2025-08-27 23:31 ` Dave Hansen
2025-08-28 5:31 ` Baolu Lu
2025-08-28 7:08 ` Tian, Kevin
2025-08-28 18:56 ` Dave Hansen
2025-08-28 19:10 ` Jason Gunthorpe
2025-08-28 19:31 ` Dave Hansen
2025-08-28 19:39 ` Matthew Wilcox
2025-08-08 5:08 ` Baolu Lu
2025-08-07 6:53 ` Baolu Lu
2025-08-14 4:48 ` Ethan Zhao
2025-08-15 7:48 ` Baolu Lu
Reply instructions:
You may reply publicly to this message via plain-text email
using any one of the following methods:
* Save the following mbox file, import it into your mail client,
and reply-to-all from there: mbox
Avoid top-posting and favor interleaved quoting:
https://en.wikipedia.org/wiki/Posting_style#Interleaved_style
* Reply using the --to, --cc, and --in-reply-to
switches of git-send-email(1):
git send-email \
--in-reply-to=c534dd05-c1b3-4ed3-bcde-83849d779f32@linux.intel.com \
--to=baolu.lu@linux.intel.com \
--cc=apopple@nvidia.com \
--cc=dave.hansen@intel.com \
--cc=iommu@lists.linux.dev \
--cc=jannh@google.com \
--cc=jean-philippe@linaro.org \
--cc=jgg@nvidia.com \
--cc=joro@8bytes.org \
--cc=kevin.tian@intel.com \
--cc=linux-kernel@vger.kernel.org \
--cc=luto@kernel.org \
--cc=peterz@infradead.org \
--cc=robin.murphy@arm.com \
--cc=security@kernel.org \
--cc=stable@vger.kernel.org \
--cc=urezki@gmail.com \
--cc=vasant.hegde@amd.com \
--cc=will@kernel.org \
--cc=yi1.lai@intel.com \
/path/to/YOUR_REPLY
https://kernel.org/pub/software/scm/git/docs/git-send-email.html
* If your mail client supports setting the In-Reply-To header
via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line
before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).