From: Alex Williamson <alex.williamson@redhat.com>
To: Peter Delevoryas <peter@pjd.dev>
Cc: qemu-devel <qemu-devel@nongnu.org>,
suravee.suthikulpanit@amd.com, iommu@lists.linux.dev,
kvm@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [q&a] Status of IOMMU virtualization for nested virtualization (userspace PCI drivers in VMs)
Date: Wed, 28 Feb 2024 12:38:10 -0700
Message-ID: <20240228123810.70663da2.alex.williamson@redhat.com>
In-Reply-To: <3D96D76D-85D2-47B5-B4C1-D6F95061D7D6@pjd.dev>
On Wed, 28 Feb 2024 10:29:32 -0800
Peter Delevoryas <peter@pjd.dev> wrote:
> Hey guys,
>
> I’m having a little trouble reading between the lines on various
> docs, mailing list threads, KVM presentations, github forks, etc, so
> I figured I’d just ask:
>
> What is the status of IOMMU virtualization, like in the case where I
> want a VM guest to have a virtual IOMMU?
It works fine for simple nested assignment scenarios, i.e., guest
userspace drivers or nested VMs.
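For reference, the shape of configuration I have in mind is roughly the
following (a sketch only; the host BDF and machine options are
placeholders you'd adapt):

  qemu-system-x86_64 \
    -machine q35,accel=kvm,kernel-irqchip=split \
    -device intel-iommu,intremap=on,caching-mode=on \
    -device vfio-pci,host=0000:3b:00.0 \
    ...

caching-mode=on is what lets QEMU shadow the guest's IOMMU mappings
into the host through vfio, and intremap=on requires the split irqchip.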
> I found this great presentation from KVM Forum 2021: [1]
>
> 1. I’m using -device intel-iommu right now. This has performance
> implications and large DMA transfers hit the vfio_iommu_type1
> dma_entry_limit on the host because of how the mappings are made.
Hugepage backing for the guest, and hugepage mappings within the guest,
should help with both mapping performance and the DMA entry limit. In
general, the type1 vfio IOMMU backend is not optimized for dynamic
mapping, so performance-wise your best bet is still to design the
userspace driver around static DMA buffers.
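As a rough sketch (the IDs and sizes here are arbitrary), hugepage-backed
guest RAM would look something like:

  echo 8192 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

  qemu-system-x86_64 \
    -m 16G \
    -object memory-backend-memfd,id=mem0,size=16G,hugetlb=on,hugetlbsize=2M,share=on \
    -machine q35,accel=kvm,memory-backend=mem0 \
    ...

and then have the guest driver allocate its DMA buffers from hugepages
as well, so each map/unmap covers a larger range and consumes fewer
entries.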
> 2. -device virtio-iommu is an improvement, but it doesn’t seem
> compatible with -device vfio-pci? I was only able to test this with
> cloud-hypervisor, and it has a better vfio mapping pattern (avoids
> hitting dma_entry_limit).
AFAIK it's just growing pains; it should work, but it's still working
through bugs.
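If you want to retest on QEMU as things settle, the equivalent would be
along these lines (again a sketch, with a placeholder BDF):

  qemu-system-x86_64 \
    -machine q35,accel=kvm \
    -device virtio-iommu-pci \
    -device vfio-pci,host=0000:3b:00.0 \
    ...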
> 3. -object iommufd [2] I haven’t tried this quite yet, planning to:
> if it’s using iommufd, and I have all the right kernel features in
> the guest and host, I assume it’s implementing the passthrough mode
> that AMD has described in their talk? Because I imagine that would be
> the best solution for me, I’m just having trouble understanding if
> it’s actually related or orthogonal.
For now iommufd provides a similar DMA mapping interface to type1, but
it does remove the DMA entry limit and improves locked page accounting.
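The command-line side of that, for reference (placeholder BDF again),
just adds the iommufd object and ties the device to it:

  qemu-system-x86_64 \
    -object iommufd,id=iommufd0 \
    -device vfio-pci,host=0000:3b:00.0,iommufd=iommufd0 \
    ...

which selects the vfio cdev/iommufd path on hosts where /dev/iommu and
the vfio device cdevs are available.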
To really see a performance improvement for dynamic mappings, you'll
need nesting support in the IOMMU, which is under active development.
In that respect you'll want iommufd, since equivalent features will not
be added to type1. Thanks,
Alex