From: Nicolin Chen <nicolinc@nvidia.com>
To: Robin Murphy <robin.murphy@arm.com>
Cc: "Tian, Kevin" <kevin.tian@intel.com>,
"jgg@nvidia.com" <jgg@nvidia.com>,
"joro@8bytes.org" <joro@8bytes.org>,
"will@kernel.org" <will@kernel.org>,
"shuah@kernel.org" <shuah@kernel.org>,
"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
"linux-kselftest@vger.kernel.org"
<linux-kselftest@vger.kernel.org>
Subject: Re: [PATCH v2 2/3] iommu/dma: Support MSIs through nested domains
Date: Thu, 8 Aug 2024 15:59:31 -0700
Message-ID: <ZrVN05VylFq8lK4q@Asurada-Nvidia>
In-Reply-To: <6da4f216-594b-4c51-848c-86e281402820@arm.com>

On Thu, Aug 08, 2024 at 01:38:44PM +0100, Robin Murphy wrote:
> On 06/08/2024 9:25 am, Tian, Kevin wrote:
> > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > Sent: Saturday, August 3, 2024 8:32 AM
> > >
> > > From: Robin Murphy <robin.murphy@arm.com>
> > >
> > > Currently, iommu-dma is the only place outside of IOMMUFD and drivers
> > > which might need to be aware of the stage 2 domain encapsulated within
> > > a nested domain. This would be in the legacy-VFIO-style case where we're
> >
> > why is it a legacy-VFIO-style? We only support nested in IOMMUFD.
>
> Because with proper nesting we ideally shouldn't need the host-managed
> MSI mess at all, which all stems from the old VFIO paradigm of
> completely abstracting interrupts from userspace. I'm still hoping
> IOMMUFD can grow its own interface for efficient MSI passthrough, where
> the VMM can simply map the physical MSI doorbell into whatever IPA (GPA)
> it wants it to appear at in the S2 domain, then whatever the guest does
> with S1 it can program the MSI address into the endpoint accordingly
> without us having to fiddle with it.

Hmm, until now I wasn't so convinced myself that it could work, as
I was worried about the MSI data. But on second thought, since the
host still configures the MSI, it can set the correct data anyway.
All we need to change is the MSI address: from an RMR-ed IPA/gIOVA
to the real gIOVA of the vITS page.

I did a quick hack to test that flow. MSI in the guest still works
fine without having the RMR node in its IORT. Sweet!

To go further down this path, we will need the following changes:
- MSI configuration in the host (via a VFIO_IRQ_SET_ACTION_TRIGGER
  hypercall) should set the gIOVA instead of fetching it from the
  msi_cookie. That hypercall doesn't forward an address currently,
  since the host kernel pre-sets the msi_cookie. So, we need a way
  to forward the gIOVA to the kernel and pack it into the msi_msg
  structure. I haven't read the VFIO PCI code thoroughly yet, but I
  wonder if we could just let the guest program the gIOVA into the
  PCI register and let it fall through to the hardware, so the host
  kernel handling that hypercall can simply read it back from the
  register?

- IOMMUFD should provide the VMM a way to tell it the gPA (or
  directly gPA + GITS_TRANSLATER?). Then the kernel should do the
  stage-2 mapping. I talked to Jason about this a while ago, and we
  have a few thoughts on how to implement it. But eventually, I
  think we still can't avoid a middle man like msi_cookie to
  associate the gPA in IOMMUFD with the PA in the irqchip code?

One more concern is the MSI window size. The VMM sets up an MSI
region that must fit the hardware window size. Most ITS versions
have only a one-page window, but some can have multiple pages?
What if the vITS is one page while the underlying pITS has
multiple? My understanding is that the current kernel-defined 1MB
size is a hard-coded window meant to fit all potential cases,
since the IOMMU code in the kernel can just eyeball what's going
on in the irqchip subsystem and adjust accordingly if it someday
needs to. But a VMM can't?

Thanks
Nicolin

Thread overview: 20+ messages
2024-08-03 0:32 [PATCH v2 0/3] iommufd: Add selftest coverage for reserved IOVAs Nicolin Chen
2024-08-03 0:32 ` [PATCH v2 1/3] iommufd: Reorder include files Nicolin Chen
2024-08-15 17:51 ` Jason Gunthorpe
2024-08-15 18:12 ` Nicolin Chen
2024-08-03 0:32 ` [PATCH v2 2/3] iommu/dma: Support MSIs through nested domains Nicolin Chen
2024-08-06 8:25 ` Tian, Kevin
2024-08-06 17:24 ` Nicolin Chen
2024-08-08 12:38 ` Robin Murphy
2024-08-08 22:59 ` Nicolin Chen [this message]
2024-08-09 8:00 ` Tian, Kevin
2024-08-09 17:43 ` Robin Murphy
2024-08-09 20:09 ` Nicolin Chen
2024-08-09 23:01 ` Jason Gunthorpe
2024-08-09 7:34 ` Tian, Kevin
2024-08-09 18:41 ` Jason Gunthorpe
2024-08-09 19:18 ` Nicolin Chen
2024-08-09 22:49 ` Jason Gunthorpe
2024-08-09 23:38 ` Nicolin Chen
2024-08-03 0:32 ` [PATCH v2 3/3] iommufd/selftest: Add coverage for reserved IOVAs Nicolin Chen
2024-08-09 15:52 ` kernel test robot