From: Baolu Lu <baolu.lu@linux.intel.com>
To: Jason Gunthorpe <jgg@nvidia.com>,
David Woodhouse <dwmw2@infradead.org>,
iommu@lists.linux.dev, Joerg Roedel <joro@8bytes.org>,
Kevin Tian <kevin.tian@intel.com>,
Robin Murphy <robin.murphy@arm.com>,
Will Deacon <will@kernel.org>
Cc: patches@lists.linux.dev
Subject: Re: [PATCH 2/4] iommu/vtd: Pass size_order to qi_desc_piotlb() not npages
Date: Mon, 30 Mar 2026 15:11:17 +0800 [thread overview]
Message-ID: <dd8038b7-ecf6-45c5-973e-3e1798c0a453@linux.intel.com> (raw)
In-Reply-To: <2-v1-f175e27af136+11647-iommupt_inv_vtd_jgg@nvidia.com>
On 3/27/26 23:25, Jason Gunthorpe wrote:
> It doesn't make sense for the caller to compute mask, throw it away
> and then have qi_desc_piotlb() compute it again.
>
> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
> ---
> drivers/iommu/intel/cache.c | 10 ++++------
> drivers/iommu/intel/iommu.h | 16 ++++++----------
> 2 files changed, 10 insertions(+), 16 deletions(-)
>
> diff --git a/drivers/iommu/intel/cache.c b/drivers/iommu/intel/cache.c
> index e08253980a6ee7..74ee2002fb9c85 100644
> --- a/drivers/iommu/intel/cache.c
> +++ b/drivers/iommu/intel/cache.c
> @@ -338,13 +338,11 @@ static void qi_batch_add_piotlb_all(struct intel_iommu *iommu, u16 did,
> }
>
> static void qi_batch_add_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid,
> - u64 addr, unsigned long npages, bool ih,
> + u64 addr, unsigned int size_order, bool ih,
> struct qi_batch *batch)
> {
> - if (!npages)
> - return;
> -
> - qi_desc_piotlb(did, pasid, addr, npages, ih, &batch->descs[batch->index]);
> + qi_desc_piotlb(did, pasid, addr, size_order, ih,
> + &batch->descs[batch->index]);
> qi_batch_increment_index(iommu, batch);
> }
>
> @@ -385,7 +383,7 @@ static void cache_tag_flush_iotlb(struct dmar_domain *domain, struct cache_tag *
> tag->pasid, domain->qi_batch);
> else
> qi_batch_add_piotlb(iommu, tag->domain_id, tag->pasid,
> - addr, pages, ih, domain->qi_batch);
> + addr, mask, ih, domain->qi_batch);
> return;
> }
>
> diff --git a/drivers/iommu/intel/iommu.h b/drivers/iommu/intel/iommu.h
> index 40759587729953..7a92472985ee02 100644
> --- a/drivers/iommu/intel/iommu.h
> +++ b/drivers/iommu/intel/iommu.h
> @@ -1092,19 +1092,16 @@ static inline void qi_desc_piotlb_all(u16 did, u32 pasid, struct qi_desc *desc)
>
> /* Page-selective-within-PASID IOTLB invalidation */
> static inline void qi_desc_piotlb(u16 did, u32 pasid, u64 addr,
> - unsigned long npages, bool ih,
> + unsigned int size_order, bool ih,
> struct qi_desc *desc)
> {
> - int mask = ilog2(__roundup_pow_of_two(npages));
> - unsigned long align = (1ULL << (VTD_PAGE_SHIFT + mask));
> -
> - if (WARN_ON_ONCE(!IS_ALIGNED(addr, align)))
> - addr = ALIGN_DOWN(addr, align);
> -
> + /*
> + * calculate_psi_aligned_address() must be used for addr and size_order
> + */
> desc->qw0 = QI_EIOTLB_PASID(pasid) | QI_EIOTLB_DID(did) |
> QI_EIOTLB_GRAN(QI_GRAN_PSI_PASID) | QI_EIOTLB_TYPE;
> desc->qw1 = QI_EIOTLB_ADDR(addr) | QI_EIOTLB_IH(ih) |
> - QI_EIOTLB_AM(mask);
> + QI_EIOTLB_AM(size_order);
> }
>
> static inline void qi_desc_dev_iotlb_pasid(u16 sid, u16 pfsid, u32 pasid,
> @@ -1167,8 +1164,7 @@ void qi_flush_dev_iotlb(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> u16 qdep, u64 addr, unsigned mask);
>
> void qi_flush_piotlb_all(struct intel_iommu *iommu, u16 did, u32 pasid);
> -void qi_flush_piotlb(struct intel_iommu *iommu, u16 did, u32 pasid, u64 addr,
> - unsigned long npages, bool ih);
> +
Could we move this cleanup to the previous patch?
Otherwise, looks good to me.
> void qi_flush_dev_iotlb_pasid(struct intel_iommu *iommu, u16 sid, u16 pfsid,
> u32 pasid, u16 qdep, u64 addr,
Thanks,
baolu
2026-03-27 15:25 [PATCH 0/4] Improve the invalidation path in VT-d Jason Gunthorpe
2026-03-27 15:25 ` [PATCH 1/4] iommu/intel: Split piotlb invalidation into range and all Jason Gunthorpe
2026-03-30 6:39 ` Baolu Lu
2026-03-30 15:31 ` Jason Gunthorpe
2026-04-02 7:20 ` Baolu Lu
2026-03-27 15:25 ` [PATCH 2/4] iommu/vtd: Pass size_order to qi_desc_piotlb() not npages Jason Gunthorpe
2026-03-30 7:11 ` Baolu Lu [this message]
2026-03-27 15:25 ` [PATCH 3/4] iommu/vtd: Remove the remaining pages along the invalidation path Jason Gunthorpe
2026-03-27 15:25 ` [PATCH 4/4] iommu/vt: Simplify calculate_psi_aligned_address() Jason Gunthorpe