From: Nicolin Chen <nicolinc@nvidia.com>
To: <eric.auger@redhat.com>, <peter.maydell@linaro.org>,
	<qemu-devel@nongnu.org>, <qemu-arm@nongnu.org>
Cc: <jgg@nvidia.com>, <yi.l.liu@intel.com>, <kevin.tian@intel.com>
Subject: Multiple vIOMMU instance support in QEMU?
Date: Mon, 24 Apr 2023 16:42:55 -0700
Message-ID: <ZEcT/7erkhHDaNvD@Asurada-Nvidia>

Hi all,

(Please feel free to loop related folks into this thread.)

In light of the ongoing nested-IOMMU support effort via IOMMUFD, we
will likely need multi-vIOMMU support in QEMU, or more specifically
multi-vSMMU support for underlying HW that has multiple physical
SMMUs. This would serve the following use cases:
 1) Multiple physical SMMUs may have different feature bits, so a
    single vSMMU enabling a nesting configuration cannot reflect all
    of them properly.
 2) The NVIDIA Grace CPU has a VCMDQ HW extension for the SMMU CMDQ.
    Each VCMDQ has an MMIO region (CONS and PROD indexes) that should
    be exposed to the VM, so that the hypervisor can avoid trapping
    CMDQ accesses by letting the guest use this HW accelerator
    directly. However, a single vSMMU cannot mmap multiple MMIO
    regions from multiple pSMMUs.
 3) With the latest iommufd design, a single vIOMMU model shares one
    stage-2 HW pagetable, with a shared VMID, across all physical
    SMMUs. A stage-1 pagetable invalidation (for one device) at the
    vSMMU would then have to be broadcast to all the pSMMU instances,
    which would hurt overall performance (see the routing sketch
    after this list).
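
To make (3) concrete, below is a minimal C sketch of the invalidation
routing that a per-pSMMU vSMMU design would allow. All names here
(VSMMU, vsmmu_for_sid, issue_invalidation) are hypothetical, not
existing QEMU or iommufd APIs; the point is only that a per-instance
design lets a stage-1 TLBI reach a single backing pSMMU:

    #include <stdint.h>

    #define NR_VSMMU 2               /* one vSMMU per physical SMMU */

    typedef struct VSMMU {
        uint32_t s2_hwpt_id;   /* per-instance stage-2 HWPT, own VMID */
    } VSMMU;

    static VSMMU vsmmus[NR_VSMMU];

    /* Hypothetical backend call into the kernel; stubbed here, and
     * not a real iommufd ioctl. */
    static void issue_invalidation(uint32_t hwpt_id, uint32_t sid)
    {
        (void)hwpt_id;
        (void)sid;
    }

    /* Hypothetical lookup: which vSMMU instance owns this StreamID.
     * A real implementation would derive this from the IORT ID
     * mappings rather than this illustrative partitioning. */
    static VSMMU *vsmmu_for_sid(uint32_t sid)
    {
        return &vsmmus[(sid >> 16) % NR_VSMMU];
    }

    /* A stage-1 TLBI for one device goes to its one backing pSMMU
     * only, instead of being broadcast to every instance as a
     * shared-VMID model would require. */
    static void propagate_s1_tlbi(uint32_t sid)
    {
        VSMMU *s = vsmmu_for_sid(sid);
        issue_invalidation(s->s2_hwpt_id, sid);
    }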

I previously discussed this topic with Eric in a private email. Eric
pointed out the difficulty of implementing this in the current QEMU
code, as it would touch different subsystems such as IORT and
platform devices, since the passthrough devices would be attached to
different vIOMMUs.

Yet, given the situations above, the best approach is likely to
duplicate the vIOMMU instance to match the number of physical SMMU
instances, as sketched in the hypothetical command line below.
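
For illustration, a multi-instance topology might look like the
following. Note that the virt machine today only offers the single
machine-wide "iommu=smmuv3" option; a pluggable "arm-smmuv3" device
is an assumption of this sketch (pxb-pcie and pcie-root-port are
existing devices, and the host BDFs are placeholders), with each
vSMMU instance tied to its own PCIe root complex and its passthrough
devices:

    # Hypothetical syntax: "-device arm-smmuv3" does not exist today
    qemu-system-aarch64 -machine virt,gic-version=3 \
        -device pxb-pcie,id=pcie.1,bus_nr=8,bus=pcie.0 \
        -device arm-smmuv3,id=smmu0,bus=pcie.0 \
        -device arm-smmuv3,id=smmu1,bus=pcie.1 \
        -device pcie-root-port,id=rp0,bus=pcie.0 \
        -device pcie-root-port,id=rp1,bus=pcie.1 \
        -device vfio-pci,host=0000:01:00.0,bus=rp0 \
        -device vfio-pci,host=0005:01:00.0,bus=rp1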

So, I am sending this email to collect opinions on this and to see
what a potential TODO list would look like if we decide to go down
this path.

Thanks
Nicolin

