Linux IOMMU Development
From: Nicolin Chen <nicolinc@nvidia.com>
To: "Tian, Kevin" <kevin.tian@intel.com>
Cc: Robin Murphy <robin.murphy@arm.com>,
	"jgg@nvidia.com" <jgg@nvidia.com>,
	"Liu, Yi L" <yi.l.liu@intel.com>,
	"eric.auger@redhat.com" <eric.auger@redhat.com>,
	"baolu.lu@linux.intel.com" <baolu.lu@linux.intel.com>,
	"shameerali.kolothum.thodi@huawei.com"
	<shameerali.kolothum.thodi@huawei.com>,
	"jean-philippe@linaro.org" <jean-philippe@linaro.org>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>
Subject: Re: Cache Invalidation Solution for Nested IOMMU
Date: Mon, 3 Apr 2023 19:47:19 -0700
Message-ID: <ZCuPt0PENL3wxbmi@Asurada-Nvidia>
In-Reply-To: <BN9PR11MB527677A2CE0BA99AABE4FA928C939@BN9PR11MB5276.namprd11.prod.outlook.com>

On Tue, Apr 04, 2023 at 02:15:38AM +0000, Tian, Kevin wrote:
> > From: Nicolin Chen <nicolinc@nvidia.com>
> > Sent: Monday, April 3, 2023 10:30 PM
> >
> > On Mon, Apr 03, 2023 at 08:00:12AM +0000, Tian, Kevin wrote:
> > > > From: Nicolin Chen <nicolinc@nvidia.com>
> > > > Sent: Monday, April 3, 2023 8:34 AM
> > > >
> > > > The new set_rid/unset_rid ioctls and the mmap interface would be
> > > > essential for the VCMDQ support that we'd like to achieve at the
> > > > end of this journey. So, personally, I'd like to see them usable
> > > > at this stage by the generic SMMUv3 (and potentially VT-d) too.
> > > >
> > >
> > > We discussed earlier that there could be multiple VCMDQs when the
> > > guest is assigned multiple devices behind different SMMUs. How
> > > does the mmap interface per iommufd work in that scenario?
> >
> > My draft tries to document that each IOMMUFD object can have a
> > shared page, and the mmap interface takes an IOMMUFD object ID
> > as its index. So either a pt_id (S1) or a dev_id should be able
> > to identify which physical SMMU, I think.
> 
> Are all allowed cmds in VCMDQ per hwpt? If not, then building the
> mmap interface per hwpt object is not correct. We may want an
> explicit VCMDQ object in that case.

There is one VCMDQ HW per SMMU instance, so all HWPTs created by
devices behind the same SMMU instance share the same VCMDQ HW. Each
VCMDQ HW can also allocate multiple queues, which aren't necessarily
tied to any HWPT either.
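
A rough sketch of that ownership model, with entirely hypothetical
struct names (nothing below exists in the SMMUv3 driver today):

    #include <linux/types.h>

    /*
     * Hypothetical sketch only.  The point is the ownership: the VCMDQ
     * block belongs to the SMMU instance, so every HWPT created by a
     * device behind that SMMU ends up sharing it, while the individual
     * queues allocated from the block aren't tied to any HWPT.
     */
    struct vcmdq_queue {
    	void		*base;		/* one HW command queue ring */
    	unsigned long	 base_pfn;	/* PFN of the ring, for mmap */
    	size_t		 size;		/* ring size in bytes */
    	u32		 prod;
    	u32		 cons;
    };

    struct vcmdq {
    	void		   *smmu;	/* owning SMMU instance */
    	struct vcmdq_queue *queues;	/* multiple queues per block */
    	unsigned int	    nr_queues;
    };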

> And devices behind different SMMUs may be attached to the same
> hwpt. In that case the number of VCMDQs associated with a hwpt is
> also dynamic.

Unless the two HWPTs share the same S1 Context Table, how could two
devices behind different SMMUs attach to the same HWPT? And sharing
the same S1 Context Table between two devices doesn't sound very
plausible either.

> But if we are just talking about batching for an emulated SMMU, then
> having the user pass a big buffer makes more sense.

OK. That's in line with Jason's suggestion of passing a queue
buffer via the ioctl.
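
As a rough illustration of what such a batching ioctl could look
like -- the struct, field, and ioctl names below are made up for
this sketch and are not a concrete uAPI proposal:

    #include <linux/types.h>
    #include <linux/ioctl.h>

    /*
     * Hypothetical uAPI sketch for batched invalidation.  Userspace
     * (e.g. QEMU's vSMMU model) collects the guest's invalidation
     * commands into one array and passes the whole buffer in a single
     * ioctl, instead of issuing one syscall per command.
     */
    struct iommu_hwpt_invalidate_batch {
    	__u32 size;		/* sizeof(this struct), for extensibility */
    	__u32 hwpt_id;		/* stage-1 HWPT the commands apply to */
    	__aligned_u64 entries_uptr;	/* user pointer to the entry array */
    	__u32 entry_len;	/* size of one entry in bytes */
    	__u32 entry_num;	/* in: entries queued, out: entries handled */
    };

    /* Hypothetical ioctl number, not the real iommufd numbering. */
    #define IOMMU_HWPT_INVALIDATE_BATCH \
    	_IOWR(';', 0x99, struct iommu_hwpt_invalidate_batch)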

> > > And it looks like this is different from the requirement of having
> > > a software short path in the kernel to reduce the invalidation
> > > overhead for emulated vIOMMUs. In that case the invalidation queue
> > > is in guest memory, so instead we'd want a registration cmd here.
> >
> > Yes for the first part. There are certain difficulties in doing
> > a short path, such as having the host kernel replace the host
> > queue that the actual HW runs on with a guest TLBI queue. So my
> > draft is more about batching.
> >
> > For the last part, what would that "registration cmd" do? In my
> > draft, the hypervisor dispatches all invalidation commands to the
> > guest TLBI queue (or call it a user queue), which is transparent
> > to the guest OS.
> >
> 
> Registration means the user passes the buffer to the kernel.
> 
> If we want to support a kernel short path, then we want the host
> smmu driver to read cmds directly out of the guest TLBI queue.

I expect the mmap approach can go a bit further for an SMMU that
has the ECMDQ capability (multiple CMDQs): it could consume the
guest TLBI queue without copying that buffer into the host queue,
by allocating a separate cmdq object in the SMMU driver and
mmap'ing its cmdq->q.base to QEMU.
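
To illustrate that direction, the kernel side could boil down to
something like the sketch below; vcmdq_from_mmap_offset() and the
base_pfn field are hypothetical glue, while remap_pfn_range() is
the standard way to hand such pages to userspace:

    #include <linux/mm.h>

    /*
     * Hypothetical sketch: map one dedicated ECMDQ ring to QEMU so the
     * guest TLBI queue can be consumed in place, without copying it
     * into the host queue.  The lookup helper and base_pfn are made-up
     * names for this illustration.
     */
    static int iommufd_vcmdq_mmap(struct file *filp, struct vm_area_struct *vma)
    {
    	struct vcmdq_queue *q = vcmdq_from_mmap_offset(vma->vm_pgoff);
    	unsigned long len = vma->vm_end - vma->vm_start;

    	if (!q || len > q->size)
    		return -EINVAL;

    	/* Hand the ring's pages straight to the VMM; the guest-filled
    	 * commands are then consumed directly from this ring. */
    	return remap_pfn_range(vma, vma->vm_start, q->base_pfn, len,
    			       vma->vm_page_prot);
    }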

Thanks
Nicolin

