From: Eric Auger <eric.auger@redhat.com>
To: Nicolin Chen <nicolinc@nvidia.com>,
	peter.maydell@linaro.org, qemu-devel@nongnu.org,
	qemu-arm@nongnu.org
Cc: jgg@nvidia.com, yi.l.liu@intel.com, kevin.tian@intel.com,
	Peter Xu <peterx@redhat.com>
Subject: Re: Multiple vIOMMU instance support in QEMU?
Date: Thu, 18 May 2023 11:06:50 +0200
Message-ID: <0defbf3f-a8be-7f1b-3683-e3e3ece295fc@redhat.com>
In-Reply-To: <ZGWaCKQqK5hVqbvM@Asurada-Nvidia>

Hi Nicolin,

On 5/18/23 05:22, Nicolin Chen wrote:
> Hi Peter,
>
> Eric previously mentioned that you might not like the idea.
> Before we start this big effort, would it be possible for
> you to comment, even just a word or two, on this topic?
>
> Thanks!
>
> On Mon, Apr 24, 2023 at 04:42:57PM -0700, Nicolin Chen wrote:
>> Hi all,
>>
>> (Please feel free to include related folks into this thread.)
>>
>> In light of an ongoing nested-IOMMU support effort via IOMMUFD, we
>> will likely need multi-vIOMMU support in QEMU, or more specifically
>> multi-vSMMU support for underlying HW that has multiple physical
>> SMMUs. This would be used in the following cases:
>>  1) Multiple physical SMMUs can have different feature bits, which
>>     a single vSMMU enabling a nesting configuration cannot reflect
>>     properly.
>>  2) NVIDIA Grace CPU has a VCMDQ HW extension for the SMMU CMDQ.
>>     Every VCMDQ HW has an MMIO region (CONS and PROD indexes) that
>>     should be exposed to a VM, so that a hypervisor can avoid MMIO
>>     traps by using this HW accelerator for performance. However, a
>>     single vSMMU cannot mmap multiple MMIO regions from multiple
>>     pSMMUs.
>>  3) With the latest iommufd design, a single vIOMMU model shares the
>>     same stage-2 HW pagetable across all physical SMMUs with a shared
>>     VMID. A stage-1 pagetable invalidation (for one device) at the
>>     vSMMU would then have to be broadcast to all the SMMU instances,
>>     which would hurt the overall performance (a sketch follows the
>>     list).
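>>
>> To make case 3 concrete, here is a minimal sketch of the broadcast
>> problem. The VSMMU/Cmd types and the psmmu_invalidate() helper are
>> purely hypothetical stand-ins (e.g. for an IOMMUFD invalidation
>> ioctl); this is not actual QEMU code:
>>
>>   #include <stddef.h>
>>   #include <stdint.h>
>>
>>   typedef struct PSMMU PSMMU;              /* one physical SMMU */
>>   typedef struct { uint64_t opcode; uint64_t addr; } Cmd;
>>   typedef struct {
>>       PSMMU *psmmu[8];                     /* backing physical SMMUs */
>>       size_t num_psmmu;
>>   } VSMMU;
>>
>>   void psmmu_invalidate(PSMMU *p, const Cmd *cmd);
>>
>>   /* A single vSMMU sharing one stage-2/VMID across all pSMMUs cannot
>>    * tell which physical instance a stage-1 TLBI targets, so one guest
>>    * command has to fan out into N invalidations: */
>>   void vsmmu_handle_tlbi(VSMMU *vs, const Cmd *cmd)
>>   {
>>       for (size_t i = 0; i < vs->num_psmmu; i++) {
>>           psmmu_invalidate(vs->psmmu[i], cmd);
>>       }
>>   }
>>
>> With one vSMMU per pSMMU instead, the handler would issue exactly one
>> invalidation, to its single backing instance.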
Well, if there is a real production use case behind the requirement of
having multiple vSMMUs (and more generally vIOMMUs), sure, you can go
ahead. I just wanted to warn you that, as far as I know, multiple
vIOMMUs are not supported even for the Intel IOMMU or virtio-iommu.
Let's add Peter Xu in CC. I foresee added complexity with regard to how
you define the RID scope of each vIOMMU, ACPI table generation, the
impact on arm-virt machine options, how you pass the features
associated with each instance, and the notifier propagation impact.
And that is without even mentioning the VCMDQ feature addition.

We are still far from having a singleton QEMU nested-stage SMMU
implementation at the moment, but I understand you may want to feed the
pipeline to pave the way for enhanced use cases.
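
To illustrate the RID-scope point: in the generated IORT, each vSMMU
would need its own SMMUv3 node, and the root-complex node would have
to split its RID space across them with one ID mapping per instance.
A minimal sketch, with made-up node offsets and simplified field
semantics; this is not the actual hw/arm/virt-acpi-build.c code:

  #include <stdint.h>

  typedef struct {
      uint32_t input_base;   /* first RID covered by this mapping */
      uint32_t num_ids;      /* size of the RID range (simplified) */
      uint32_t output_base;  /* StreamID base at the target SMMU */
      uint32_t output_ref;   /* offset of the target SMMUv3 node */
  } IortIdMapping;

  enum { SMMU0_NODE = 0x30, SMMU1_NODE = 0x90 }; /* made-up offsets */

  /* Root-complex ID mappings: half of the RID space to each vSMMU. */
  const IortIdMapping rc_idmap[] = {
      { 0x0000, 0x8000, 0x0000, SMMU0_NODE }, /* 0x0000-0x7fff -> vSMMU0 */
      { 0x8000, 0x8000, 0x8000, SMMU1_NODE }, /* 0x8000-0xffff -> vSMMU1 */
  };

Today the virt machine emits a single SMMUv3 node whose mapping covers
the whole RID space, so the IORT builder, like the machine options,
would have to become instance-aware.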

Thanks

Eric
>>
>> I previously discussed this topic with Eric in a private email. Eric
>> noted the difficulty of implementing this in the current QEMU system,
>> as it would touch different subsystems such as IORT and platform
>> devices, since the passthrough devices would be attached to different
>> vIOMMUs.
>>
>> Yet, given the situations above, it's likely best to duplicate the
>> vIOMMU instance to match the number of physical SMMU instances.
>>
>> So, I am sending this email to collect opinions on this and see what
>> a potential TODO list would be if we decide to go down this path.
>>
>> Thanks
>> Nicolin


