From: Samiullah Khawaja <skhawaja@google.com>
To: Weinan Liu <wnliu@google.com>
Cc: iommu@lists.linux.dev, jgg@nvidia.com, joro@8bytes.org,
patches@lists.linux.dev, robin.murphy@arm.com,
suravee.suthikulpanit@amd.com, wei.w.wang@hotmail.com,
will@kernel.org, kpsingh@kernel.org, josef@toxicpanda.com
Subject: Re: [PATCH v1 1/1] iommu/amd: Don't split flush for amd_iommu_domain_flush_all()
Date: Tue, 14 Apr 2026 23:55:19 +0000 [thread overview]
Message-ID: <ad7TzkuA1VC26vkk@google.com> (raw)
In-Reply-To: <20260414210626.2097722-2-wnliu@google.com>
On Tue, Apr 14, 2026 at 09:06:26PM +0000, Weinan Liu wrote:
>We have observed multiple full invalidations during device detach when
>we are done using the vfio device.
>
>blocked_domain_attach_device()
>  -> detach_device()
>    -> amd_iommu_domain_flush_all()
>      -> amd_iommu_domain_flush_pages(..., CMD_INV_IOMMU_ALL_PAGES_ADDRESS)
>        while (size != 0) {
>          -> __domain_flush_pages(flush_size /* power-of-2 chunk */)
>            -> domain_flush_pages_v1()
>              -> build_inv_iommu_pages()
>                -> build_inv_address()
>        }
>
>build_inv_address() will trigger a full invalidation if the chunk
>size > (1 << 51). Consequently, the guest will issue multiple full
>invalidations for a single call to amd_iommu_domain_flush_all().
>
>Without this patch, we see 10 full invalidations instead of 1 for
>every amd_iommu_domain_flush_all() call.
>
>Fixes: a270be1b3fdf ("iommu/amd: Use only natural aligned flushes in a VM")
>Suggested-by: Josef Bacik <josef@toxicpanda.com>
>Suggested-by: Jason Gunthorpe <jgg@nvidia.com>
>Signed-off-by: Weinan Liu <wnliu@google.com>
>---
> drivers/iommu/amd/iommu.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
>diff --git a/drivers/iommu/amd/iommu.c b/drivers/iommu/amd/iommu.c
>index 760d5f4623b5..bcec8721d228 100644
>--- a/drivers/iommu/amd/iommu.c
>+++ b/drivers/iommu/amd/iommu.c
>@@ -1769,7 +1769,8 @@ void amd_iommu_domain_flush_pages(struct protection_domain *domain,
> {
> lockdep_assert_held(&domain->lock);
>
>- if (likely(!amd_iommu_np_cache)) {
>+ if (likely(!amd_iommu_np_cache) ||
>+ size == CMD_INV_IOMMU_ALL_PAGES_ADDRESS) {
> __domain_flush_pages(domain, address, size);
>
> /* Wait until IOMMU TLB and all device IOTLB flushes are complete */
>--
>2.54.0.rc0.605.g598a273b03-goog
>
>
Reviewed-by: Samiullah Khawaja <skhawaja@google.com>
Thread overview: 8+ messages
2026-04-14 21:06 [PATCH v1 0/1] Don't split flush for amd_iommu_domain_flush_all() Weinan Liu
2026-04-14 21:06 ` [PATCH v1 1/1] iommu/amd: " Weinan Liu
2026-04-14 23:36 ` Jason Gunthorpe
2026-04-15 0:30 ` Weinan Liu
2026-04-17 11:57 ` Jason Gunthorpe
2026-04-21 5:02 ` Wei Wang
2026-04-14 23:55 ` Samiullah Khawaja [this message]
2026-05-14 5:07 ` Suthikulpanit, Suravee