From: Yi Liu <yi.l.liu@intel.com>
To: joro@8bytes.org, jgg@nvidia.com, kevin.tian@intel.com,
baolu.lu@linux.intel.com
Cc: alex.williamson@redhat.com, robin.murphy@arm.com,
eric.auger@redhat.com, nicolinc@nvidia.com, kvm@vger.kernel.org,
chao.p.peng@linux.intel.com, yi.l.liu@intel.com,
yi.y.sun@linux.intel.com, iommu@lists.linux.dev,
linux-kernel@vger.kernel.org, linux-kselftest@vger.kernel.org,
zhenzhong.duan@intel.com, joao.m.martins@oracle.com
Subject: [PATCH rc 2/8] iommu/vt-d: Add __iommu_flush_iotlb_psi()
Date: Thu, 8 Feb 2024 00:23:01 -0800
Message-ID: <20240208082307.15759-3-yi.l.liu@intel.com>
In-Reply-To: <20240208082307.15759-1-yi.l.liu@intel.com>
Add __iommu_flush_iotlb_psi() to do the page-selective (PSI) IOTLB flush
with a DID supplied by the caller rather than calculated within the helper.

This is useful when flushing the cache for a parent domain, which reuses
the DIDs of its nested domains.
Signed-off-by: Yi Liu <yi.l.liu@intel.com>
---
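Note: a later patch in this series can use the new helper to flush a
parent domain's IOTLB under every DID its nested domains hold. Below is a
minimal sketch of such a caller, assuming the s1_domains list and s1_lock
introduced in patch 1/8 and the per-IOMMU DID info already tracked in the
domain's iommu_array; the s2_link field name is an assumption here, not
part of this patch:

static void parent_domain_flush_iotlb_psi(struct dmar_domain *domain,
					  unsigned long pfn,
					  unsigned int pages, int ih)
{
	struct dmar_domain *s1_domain;

	spin_lock(&domain->s1_lock);
	list_for_each_entry(s1_domain, &domain->s1_domains, s2_link) {
		struct iommu_domain_info *info;
		unsigned long i;

		/*
		 * The parent's mappings are cached under the DIDs of
		 * its nested domains, so flush under each DID the
		 * nested domain holds on each IOMMU.
		 */
		xa_for_each(&s1_domain->iommu_array, i, info)
			__iommu_flush_iotlb_psi(info->iommu, info->did,
						pfn, pages, ih);
	}
	spin_unlock(&domain->s1_lock);
}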
drivers/iommu/intel/iommu.c | 79 +++++++++++++++++++++----------------
1 file changed, 44 insertions(+), 35 deletions(-)
diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e393c62776f3..eef6a187b651 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -1368,6 +1368,47 @@ static void domain_flush_pasid_iotlb(struct intel_iommu *iommu,
spin_unlock_irqrestore(&domain->lock, flags);
}
+static void __iommu_flush_iotlb_psi(struct intel_iommu *iommu, u16 did,
+ unsigned long pfn, unsigned int pages,
+ int ih)
+{
+ unsigned int aligned_pages = __roundup_pow_of_two(pages);
+ unsigned int mask = ilog2(aligned_pages);
+ uint64_t addr = (uint64_t)pfn << VTD_PAGE_SHIFT;
+ unsigned long bitmask = aligned_pages - 1;
+
+ /*
+ * PSI masks the low order bits of the base address. If the
+ * address isn't aligned to the mask, then compute a mask value
+ * needed to ensure the target range is flushed.
+ */
+ if (unlikely(bitmask & pfn)) {
+ unsigned long end_pfn = pfn + pages - 1, shared_bits;
+
+ /*
+ * Since end_pfn <= pfn + bitmask, the only way bits
+ * higher than bitmask can differ in pfn and end_pfn is
+ * by carrying. This means after masking out bitmask,
+ * high bits starting with the first set bit in
+ * shared_bits are all equal in both pfn and end_pfn.
+ */
+ shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
+ mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
+ }
+
+ /*
+ * Fallback to domain selective flush if no PSI support or
+ * the size is too big.
+ */
+ if (!cap_pgsel_inv(iommu->cap) ||
+ mask > cap_max_amask_val(iommu->cap))
+ iommu->flush.flush_iotlb(iommu, did, 0, 0,
+ DMA_TLB_DSI_FLUSH);
+ else
+ iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
+ DMA_TLB_PSI_FLUSH);
+}
+
static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
struct dmar_domain *domain,
unsigned long pfn, unsigned int pages,
@@ -1384,42 +1425,10 @@ static void iommu_flush_iotlb_psi(struct intel_iommu *iommu,
if (ih)
ih = 1 << 6;
- if (domain->use_first_level) {
+ if (domain->use_first_level)
domain_flush_pasid_iotlb(iommu, domain, addr, pages, ih);
- } else {
- unsigned long bitmask = aligned_pages - 1;
-
- /*
- * PSI masks the low order bits of the base address. If the
- * address isn't aligned to the mask, then compute a mask value
- * needed to ensure the target range is flushed.
- */
- if (unlikely(bitmask & pfn)) {
- unsigned long end_pfn = pfn + pages - 1, shared_bits;
-
- /*
- * Since end_pfn <= pfn + bitmask, the only way bits
- * higher than bitmask can differ in pfn and end_pfn is
- * by carrying. This means after masking out bitmask,
- * high bits starting with the first set bit in
- * shared_bits are all equal in both pfn and end_pfn.
- */
- shared_bits = ~(pfn ^ end_pfn) & ~bitmask;
- mask = shared_bits ? __ffs(shared_bits) : BITS_PER_LONG;
- }
-
- /*
- * Fallback to domain selective flush if no PSI support or
- * the size is too big.
- */
- if (!cap_pgsel_inv(iommu->cap) ||
- mask > cap_max_amask_val(iommu->cap))
- iommu->flush.flush_iotlb(iommu, did, 0, 0,
- DMA_TLB_DSI_FLUSH);
- else
- iommu->flush.flush_iotlb(iommu, did, addr | ih, mask,
- DMA_TLB_PSI_FLUSH);
- }
+ else
+ __iommu_flush_iotlb_psi(iommu, did, pfn, pages, ih);
/*
* In caching mode, changes of pages from non-present to present require
--
2.34.1
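For reviewers who want to exercise the address-mask computation the helper
carries over, here is a standalone userspace sketch; psi_mask() and
roundup_p2() are local stand-ins for the code in __iommu_flush_iotlb_psi()
and the kernel's __roundup_pow_of_two(), with __builtin_ctzl() in place of
__ffs():

#include <stdio.h>

#define BITS_PER_LONG ((unsigned int)(8 * sizeof(long)))

/* Local stand-in for the kernel's __roundup_pow_of_two(). */
static unsigned int roundup_p2(unsigned int n)
{
	unsigned int v = 1;

	while (v < n)
		v <<= 1;
	return v;
}

/* Mirrors the mask computation in __iommu_flush_iotlb_psi(). */
static unsigned int psi_mask(unsigned long pfn, unsigned int pages)
{
	unsigned int aligned_pages = roundup_p2(pages);
	unsigned int mask = __builtin_ctz(aligned_pages); /* ilog2() */
	unsigned long bitmask = aligned_pages - 1;

	if (bitmask & pfn) {
		unsigned long end_pfn = pfn + pages - 1;
		unsigned long shared_bits = ~(pfn ^ end_pfn) & ~bitmask;

		/* Lowest bit where pfn and end_pfn still agree (__ffs). */
		mask = shared_bits ?
			(unsigned int)__builtin_ctzl(shared_bits) :
			BITS_PER_LONG;
	}
	return mask;
}

int main(void)
{
	/* Aligned: pfn 8, 4 pages -> mask 2 (flush 4 pages at pfn 8). */
	printf("mask = %u\n", psi_mask(8, 4));
	/*
	 * Unaligned: pfn 3, 2 pages crosses a 2-page boundary, so
	 * shared_bits = ~(3 ^ 4) & ~1 yields mask 3: flush 8 pages
	 * based at pfn 0, which covers pfn 3..4.
	 */
	printf("mask = %u\n", psi_mask(3, 2));
	return 0;
}

If the computed mask exceeds cap_max_amask_val(), or the hardware lacks
page-selective invalidation, the helper falls back to a domain-selective
flush rather than issuing an oversized PSI.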
Thread overview: 26+ messages
2024-02-08 8:22 [PATCH rc 0/8] Add missing cache flush and dirty tracking set for nested parent domain Yi Liu
2024-02-08 8:23 ` [PATCH rc 1/8] iommu/vt-d: Track nested domains in parent Yi Liu
2024-02-08 8:28 ` Tian, Kevin
2024-02-08 9:23 ` Yi Liu
2024-02-08 8:23 ` Yi Liu [this message]
2024-02-08 8:30 ` [PATCH rc 2/8] iommu/vt-d: Add __iommu_flush_iotlb_psi() Tian, Kevin
2024-02-08 8:23 ` [PATCH rc 3/8] iommu/vt-d: Add missing iotlb flush for parent domain Yi Liu
2024-02-08 8:38 ` Tian, Kevin
2024-02-09 2:40 ` Baolu Lu
2024-02-21 15:19 ` Jason Gunthorpe
2024-02-22 8:34 ` Yi Liu
2024-02-22 15:16 ` Jason Gunthorpe
2024-02-08 8:23 ` [PATCH rc 4/8] iommu/vt-d: Update iotlb in nested domain attach Yi Liu
2024-02-08 8:40 ` Tian, Kevin
2024-02-08 8:23 ` [PATCH rc 5/8] iommu/vt-d: Add missing device iotlb flush for parent domain Yi Liu
2024-02-08 8:42 ` Tian, Kevin
2024-02-08 8:23 ` [PATCH rc 6/8] iommu/vt-d: Remove @domain parameter from intel_pasid_setup_dirty_tracking() Yi Liu
2024-02-08 8:43 ` Tian, Kevin
2024-02-08 10:29 ` Joao Martins
2024-02-08 8:23 ` [PATCH rc 7/8] iommu/vt-d: Wrap the dirty tracking loop to be a helper Yi Liu
2024-02-08 8:45 ` Tian, Kevin
2024-02-08 10:29 ` Joao Martins
2024-02-09 2:40 ` Baolu Lu
2024-02-08 8:23 ` [PATCH rc 8/8] iommu/vt-d: Add missing dirty tracking set for parent domain Yi Liu
2024-02-08 8:53 ` Tian, Kevin
2024-02-08 9:23 ` Yi Liu