From: Peter Xu <peterx@redhat.com>
To: Eric Auger <eric.auger@redhat.com>
Cc: Nicolin Chen <nicolinc@nvidia.com>,
	peter.maydell@linaro.org, qemu-devel@nongnu.org,
	qemu-arm@nongnu.org, jgg@nvidia.com, yi.l.liu@intel.com,
	kevin.tian@intel.com
Subject: Re: Multiple vIOMMU instance support in QEMU?
Date: Thu, 18 May 2023 10:16:24 -0400
Message-ID: <ZGYzOEhdTA6sWKjP@x1n>
In-Reply-To: <0defbf3f-a8be-7f1b-3683-e3e3ece295fc@redhat.com>

On Thu, May 18, 2023 at 11:06:50AM +0200, Eric Auger wrote:
> Hi Nicolin,
> 
> On 5/18/23 05:22, Nicolin Chen wrote:
> > Hi Peter,
> >
> > Eric previously mentioned that you might not like the idea.
> > Before we start this big effort, would it be possible for you
> > to comment a word or two on this topic?
> >
> > Thanks!
> >
> > On Mon, Apr 24, 2023 at 04:42:57PM -0700, Nicolin Chen wrote:
> >> Hi all,
> >>
> >> (Please feel free to include related folks into this thread.)
> >>
> >> In light of an ongoing nested-IOMMU support effort via IOMMUFD, we
> >> would likely need multi-vIOMMU support in QEMU, or more
> >> specifically multi-vSMMU support for underlying HW that has multiple
> >> physical SMMUs. This would be used in the following use cases:
> >>  1) Multiple physical SMMUs with different feature bits, so that a
> >>     single vSMMU enabling a nesting configuration cannot reflect
> >>     them properly.
> >>  2) The NVIDIA Grace CPU has a VCMDQ HW extension for the SMMU CMDQ.
> >>     Each VCMDQ has an MMIO region (CONS and PROD indexes) that should
> >>     be exposed to a VM, so that a hypervisor can avoid trapping by
> >>     using this HW accelerator for performance. However, a single
> >>     vSMMU cannot mmap multiple MMIO regions from multiple pSMMUs.
> >>  3) With the latest iommufd design, a single vIOMMU model shares the
> >>     same stage-2 HW pagetable across all physical SMMUs with a shared
> >>     VMID. A stage-1 pagetable invalidation (for one device) at the
> >>     vSMMU would then have to be broadcast to all the SMMU instances,
> >>     which would hurt overall performance (see the sketch after this list).
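
To make point 3 concrete, here is a purely illustrative C sketch; the
types and helpers below are hypothetical and do not correspond to QEMU
or iommufd code:

    #include <stdint.h>

    typedef struct { int id; } PSMMU;                         /* one physical SMMU instance  */
    typedef struct { int asid; uint64_t iova; } Stage1Inval;  /* one guest stage-1 inval cmd */

    /* Stand-in for the per-instance trap + host invalidation call. */
    static void psmmu_issue_inval(PSMMU *p, const Stage1Inval *inv)
    {
        (void)p;
        (void)inv;
    }

    /* One shared vSMMU over a shared VMID/stage-2: the invalidation for a
     * single device has to be replayed on every physical SMMU, because the
     * vSMMU cannot tell which instance backs that device's stage-1 domain. */
    static void shared_vsmmu_invalidate(PSMMU *psmmus, int n, const Stage1Inval *inv)
    {
        for (int i = 0; i < n; i++) {
            psmmu_issue_inval(&psmmus[i], inv);
        }
    }

    /* One vSMMU per pSMMU: the same invalidation goes only to the single
     * instance that owns the device, avoiding the n-way broadcast above. */
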
> Well, if there is a real production use case behind the requirement of
> having multiple vSMMUs (and more generally vIOMMUs), sure, you can go
> ahead. I just wanted to warn you that, as far as I know, multiple vIOMMUs
> are not supported even for Intel IOMMU and virtio-iommu. Let's add Peter
> Xu in CC. I foresee added complexity with regard to how you define the
> RID scope of each vIOMMU, ACPI table generation, the impact on arm-virt
> machine options, how you pass the features associated with each instance,
> and the impact on notifier propagation. And that's not to mention the
> VCMDQ feature addition. We are still far from having even a single
> nested-stage SMMU implementation in QEMU at the moment, but I understand
> you may want to feed the pipeline to pave the way for enhanced use cases.
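
Regarding the RID scope and IORT generation mentioned above: roughly
speaking, with N vSMMUs the IORT would need one SMMUv3 node per instance,
plus disjoint ID mappings routing each RID range to its node. A rough
sketch of that idea; the struct layout and values below are illustrative
only, not QEMU's actual IORT builder:

    #include <stdint.h>

    /* Simplified view of one IORT ID mapping: a range of RIDs at the root
     * complex is routed, as StreamIDs, to one referenced SMMUv3 node. */
    typedef struct {
        uint32_t input_base;   /* first RID covered by this mapping      */
        uint32_t num_ids;      /* number of RIDs in the range            */
        uint32_t output_base;  /* StreamID base at the target SMMU       */
        uint32_t output_ref;   /* table offset of the target SMMUv3 node */
    } IdMapping;

    /* Hypothetical split of the RID space between two vSMMU instances:
     * RIDs 0x000-0x0ff behind vSMMU#0, RIDs 0x100-0x1ff behind vSMMU#1.
     * The node offsets are placeholders only. */
    static const IdMapping rc_id_mappings[] = {
        { 0x000, 0x100, 0x000, 0x48 },  /* -> vSMMU#0 node */
        { 0x100, 0x100, 0x100, 0xa8 },  /* -> vSMMU#1 node */
    };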

I agree with Eric that we're still lacking quite a few things for >1
vIOMMU support, afaik.

What you mentioned above makes sense to me from the POV that one vIOMMU may
not suffice, but that's a totally new area to me, because I've never used
more than one IOMMU even on bare metal (excluding cases where I'm aware
that e.g. a GPU could have its own IOMMU-like DMA translator).

What's the system layout of your multi-vIOMMU world?  Is there still a
central vIOMMU, or can multiple vIOMMUs run fully in parallel, so that e.g.
we can have DEV1,DEV2 under vIOMMU1 and DEV3,DEV4 under vIOMMU2?  Can a
vIOMMU get involved in any plug/unplug dynamically, in any form?  What else
could be different in that regard?
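
For concreteness, the kind of fully-parallel layout in question might look
like this; a hypothetical sketch, with names that do not correspond to
existing QEMU options or structures:

    /* Hypothetical placement table -- illustrative only, not a QEMU structure. */
    typedef struct {
        const char *device;   /* guest-visible passthrough device */
        int         viommu;   /* vSMMU instance it sits behind    */
        int         psmmu;    /* physical SMMU backing that vSMMU */
    } DevPlacement;

    static const DevPlacement layout[] = {
        { "DEV1", 1, 1 },   /* DEV1, DEV2 -> vIOMMU1 -> pSMMU1 */
        { "DEV2", 1, 1 },
        { "DEV3", 2, 2 },   /* DEV3, DEV4 -> vIOMMU2 -> pSMMU2 */
        { "DEV4", 2, 2 },
    };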

Is it a common hardware layout, or NVIDIA-specific?

Thanks,

> 
> Thanks
> 
> Eric
> >>
> >> I previously discussed this topic with Eric in a private email. Eric
> >> noted the difficulty of implementing this in the current QEMU system,
> >> as it would touch different subsystems like IORT and platform devices,
> >> since the passthrough devices would be attached to different vIOMMUs.
> >>
> >> Yet, given the situations above, it's likely best to duplicate the
> >> vIOMMU instance to match the number of physical SMMU instances.
> >>
> >> So, I am sending this email to collect opinions on this and see what
> >> a potential TODO list would be if we decide to go down this path.

-- 
Peter Xu



Thread overview: 10+ messages
2023-04-24 23:42 Multiple vIOMMU instance support in QEMU? Nicolin Chen
2023-05-18  3:22 ` Nicolin Chen
2023-05-18  9:06   ` Eric Auger
2023-05-18 14:16     ` Peter Xu [this message]
2023-05-18 14:56       ` Jason Gunthorpe
2023-05-18 19:45         ` Peter Xu
2023-05-18 20:19           ` Jason Gunthorpe
2023-05-19  0:38             ` Tian, Kevin
2023-05-18 22:56         ` Tian, Kevin
2023-05-18 17:39     ` Nicolin Chen
