From: Baolu Lu <baolu.lu@linux.intel.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
Robin Murphy <robin.murphy@arm.com>,
Kevin Tian <kevin.tian@intel.com>,
Jason Gunthorpe <jgg@nvidia.com>, Jann Horn <jannh@google.com>,
Vasant Hegde <vasant.hegde@amd.com>,
Dave Hansen <dave.hansen@intel.com>,
Alistair Popple <apopple@nvidia.com>,
Uladzislau Rezki <urezki@gmail.com>,
Jean-Philippe Brucker <jean-philippe@linaro.org>,
Andy Lutomirski <luto@kernel.org>,
"Tested-by : Yi Lai" <yi1.lai@intel.com>,
iommu@lists.linux.dev, security@kernel.org,
linux-kernel@vger.kernel.org, stable@vger.kernel.org
Subject: Re: [PATCH v2 1/1] iommu/sva: Invalidate KVA range on kernel TLB flush
Date: Fri, 11 Jul 2025 11:00:06 +0800
Message-ID: <094fdad4-297b-44e9-a81c-0fe4da07e63f@linux.intel.com>
In-Reply-To: <20250710135432.GO1613376@noisy.programming.kicks-ass.net>
Hi Peter Z,
On 7/10/25 21:54, Peter Zijlstra wrote:
> On Wed, Jul 09, 2025 at 02:28:00PM +0800, Lu Baolu wrote:
>> The vmalloc() and vfree() functions manage virtually contiguous, but not
>> necessarily physically contiguous, kernel memory regions. When vfree()
>> unmaps such a region, it tears down the associated kernel page table
>> entries and frees the physical pages.
>>
>> In the IOMMU Shared Virtual Addressing (SVA) context, the IOMMU hardware
>> shares and walks the CPU's page tables. Architectures like x86 share
>> static kernel address mappings across all user page tables, allowing the
>> IOMMU to access the kernel portion of these tables.
>>
>> Modern IOMMUs often cache page table entries to optimize walk performance,
>> even for intermediate page table levels. If kernel page table mappings are
>> changed (e.g., by vfree()), but the IOMMU's internal caches retain stale
>> entries, a Use-After-Free (UAF) condition arises. If these
>> freed page table pages are reallocated for a different purpose, potentially
>> by an attacker, the IOMMU could misinterpret the new data as valid page
>> table entries. This allows the IOMMU to walk into attacker-controlled
>> memory, leading to arbitrary physical memory DMA access or privilege
>> escalation.
>>
>> To mitigate this, introduce a new iommu interface to flush IOMMU caches
>> and fence pending page table walks when kernel page mappings are updated.
>> This interface should be invoked from architecture-specific code that
>> manages combined user and kernel page tables.
>
> I must say I liked the kPTI based idea better. Having to iterate and
> invalidate an unspecified number of IOMMUs from non-preemptible context
> seems 'unfortunate'.
The cache invalidation path in the IOMMU drivers is already
performance-critical and runs in non-preemptible context. The same
approach has, in fact, been used for user-space page table updates since
the beginning of SVA support.
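To illustrate what that existing user-space path looks like (a rough
sketch only; the function names below are illustrative, not the actual
driver code):

/*
 * The SVA code registers an mmu_notifier for the mm. The core mm calls
 * the arch_invalidate_secondary_tlbs() callback from the CPU TLB flush
 * path, in the same non-preemptible context, and the driver issues the
 * IOTLB/device-TLB invalidations from there.
 */
static void sva_arch_invalidate_secondary_tlbs(struct mmu_notifier *mn,
                                               struct mm_struct *mm,
                                               unsigned long start,
                                               unsigned long end)
{
        /* driver-specific IOTLB and device-TLB invalidation of [start, end) */
}

static const struct mmu_notifier_ops sva_mmu_notifier_ops = {
        .arch_invalidate_secondary_tlbs = sva_arch_invalidate_secondary_tlbs,
};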
>
> Why was this approach chosen over the kPTI one, where we keep a
> page-table root that simply does not include the kernel bits, and
> therefore the IOMMU will never see them (change) and we'll never have to
> invalidate?
The IOMMU subsystem has supported SVA since 2019, and the feature is
already deployed in various production kernels. The issue described here
is fundamentally a software bug: the IOMMU caches are not kept coherent
with the kernel page tables. Therefore we need a quick and simple fix,
one that can easily be backported to production kernels.
While a kPTI-based approach might appear more attractive, I believe some
extra work is still required to properly integrate it into the IOMMU
subsystem. For instance, kPTI is currently an optional mitigation,
enabled via CONFIG_MITIGATION_PAGE_TABLE_ISOLATION and bypassable with
the "nopti" kernel parameter. This optionality is not suitable for the
IOMMU subsystem, as software must always guarantee IOMMU cache coherence
for functional correctness and security.
So, in the short term, let's proceed with a straightforward solution to
resolve this issue and ensure the SVA feature functions correctly. For
the long term, we can explore optimizations and deeper integration
aligned with features like kPTI.
>
>> @@ -132,8 +136,15 @@ struct iommu_sva *iommu_sva_bind_device(struct device *dev, struct mm_struct *mm
>> if (ret)
>> goto out_free_domain;
>> domain->users = 1;
>> - list_add(&domain->next, &mm->iommu_mm->sva_domains);
>>
>> + if (list_empty(&iommu_mm->sva_domains)) {
>> + scoped_guard(spinlock_irqsave, &iommu_mms_lock) {
>> + if (list_empty(&iommu_sva_mms))
>> + static_branch_enable(&iommu_sva_present);
>> + list_add(&iommu_mm->mm_list_elm, &iommu_sva_mms);
>> + }
>> + }
>> + list_add(&domain->next, &iommu_mm->sva_domains);
>> out:
>> refcount_set(&handle->users, 1);
>> mutex_unlock(&iommu_sva_lock);
>> @@ -175,6 +186,15 @@ void iommu_sva_unbind_device(struct iommu_sva *handle)
>> list_del(&domain->next);
>> iommu_domain_free(domain);
>> }
>> +
>> + if (list_empty(&iommu_mm->sva_domains)) {
>> + scoped_guard(spinlock_irqsave, &iommu_mms_lock) {
>> + list_del(&iommu_mm->mm_list_elm);
>> + if (list_empty(&iommu_sva_mms))
>> + static_branch_disable(&iommu_sva_present);
>> + }
>> + }
>> +
>> mutex_unlock(&iommu_sva_lock);
>> kfree(handle);
>> }
>
> This seems an odd coding style choice; why the extra unneeded
> indentation? That is, what's wrong with:
>
>         if (list_empty()) {
>                 guard(spinlock_irqsave)(&iommu_mms_lock);
>                 list_del();
>                 if (list_empty())
>                         static_branch_disable();
>         }
Perhaps I overlooked or misunderstood something, but my understanding is
that the current lock ordering in this function is:

    mutex_lock(&iommu_sva_lock);
    spin_lock(&iommu_mms_lock);
    spin_unlock(&iommu_mms_lock);
    mutex_unlock(&iommu_sva_lock);

With the above change, it would become:

    mutex_lock(&iommu_sva_lock);
    spin_lock(&iommu_mms_lock);
    mutex_unlock(&iommu_sva_lock);
    spin_unlock(&iommu_mms_lock);
>
>> @@ -312,3 +332,15 @@ static struct iommu_domain *iommu_sva_domain_alloc(struct device *dev,
>>
>> return domain;
>> }
>> +
>> +void iommu_sva_invalidate_kva_range(unsigned long start, unsigned long end)
>> +{
>> + struct iommu_mm_data *iommu_mm;
>> +
>> + if (!static_branch_unlikely(&iommu_sva_present))
>> + return;
>> +
>> + guard(spinlock_irqsave)(&iommu_mms_lock);
>> + list_for_each_entry(iommu_mm, &iommu_sva_mms, mm_list_elm)
>> + mmu_notifier_arch_invalidate_secondary_tlbs(iommu_mm->mm, start, end);
>> +}
>
> This is absolutely the wrong way to use static_branch. You want them in
> inline functions guarding the function call, not inside the function
> call.
I don't think a static branch is desirable here, as we have no idea how
often the condition will switch in real-world scenarios. I will remove
it in the next version if there are no objections.
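For reference, if the static key were kept, my understanding of the idiom
you are describing is an inline wrapper in the header that guards the
out-of-line call, roughly (a sketch; __iommu_sva_invalidate_kva_range is a
hypothetical helper name, not something in this patch):

/*
 * Header: assumes DECLARE_STATIC_KEY_FALSE(iommu_sva_present) is visible
 * here, so the out-of-line function is not called at all when no
 * SVA-enabled mm exists.
 */
static inline void iommu_sva_invalidate_kva_range(unsigned long start,
                                                  unsigned long end)
{
        if (static_branch_unlikely(&iommu_sva_present))
                __iommu_sva_invalidate_kva_range(start, end);
}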
Thanks,
baolu