Linux IOMMU Development
From: Nicolin Chen <nicolinc@nvidia.com>
To: Jason Gunthorpe <jgg@nvidia.com>
Cc: Robin Murphy <robin.murphy@arm.com>, <kevin.tian@intel.com>,
	<yi.l.liu@intel.com>, <eric.auger@redhat.com>,
	<baolu.lu@linux.intel.com>,
	<shameerali.kolothum.thodi@huawei.com>,
	<jean-philippe@linaro.org>, <iommu@lists.linux.dev>
Subject: Re: Cache Invalidation Solution for Nested IOMMU
Date: Mon, 3 Apr 2023 07:51:28 -0700	[thread overview]
Message-ID: <ZCrn8PPPyEJ/VqFU@Asurada-Nvidia> (raw)
In-Reply-To: <ZCrd1wphkCp8uFt0@nvidia.com>

On Mon, Apr 03, 2023 at 11:08:23AM -0300, Jason Gunthorpe wrote:
> On Sun, Apr 02, 2023 at 05:33:35PM -0700, Nicolin Chen wrote:
> > The first version is simply to individually forward the entire
> > command. This can save a few CPU cycles from packing/unpacking
> > invalidation fields of the commands via a data structure, v.s.
> > the structure in v1[2].
> 
> The kernel must validate the SID for the ATS invalidations, we can't
> just blindly pass it through.

Yes. I didn't go further with the first version, but I left a
comment in the handler: as we discussed, we'd need set/unset_rid_user
to validate the SID field of ATC invalidation commands.
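To illustrate the idea, here is a rough user-space mock of that SID validation. The table layout and the names set_rid_user()/validate_atc_inv_sid() are assumptions for this sketch, not the real driver code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical per-vIOMMU table that a set_rid_user() callback would
 * populate, mapping a guest-visible StreamID (vSID) to the physical
 * SID of the device the guest actually owns.
 */
#define MAX_RIDS 8

struct vsid_map {
	uint32_t vsid;
	uint32_t psid;
	bool valid;
};

static struct vsid_map rid_table[MAX_RIDS];

/* What set_rid_user() might record when a device is attached */
static int set_rid_user(uint32_t vsid, uint32_t psid)
{
	for (size_t i = 0; i < MAX_RIDS; i++) {
		if (!rid_table[i].valid) {
			rid_table[i] = (struct vsid_map){ vsid, psid, true };
			return 0;
		}
	}
	return -1; /* table full */
}

/*
 * Validate the SID of a forwarded ATC invalidation: translate the
 * guest SID to the physical one, or fail so the kernel never issues
 * an ATC invalidation for a device the guest doesn't own.
 */
static int validate_atc_inv_sid(uint32_t vsid, uint32_t *psid)
{
	for (size_t i = 0; i < MAX_RIDS; i++) {
		if (rid_table[i].valid && rid_table[i].vsid == vsid) {
			*psid = rid_table[i].psid;
			return 0;
		}
	}
	return -22; /* -EINVAL: unknown SID, reject the command */
}
```

The point is only that the forwarding path does a lookup-and-translate instead of passing the guest SID through blindly.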

> And this simple path needs an explanation how errors are properly
> handled, eg by making execution synchronous, or someone guaranteeing
> that errors are impossible.

Yes. Both versions here execute synchronously, and the error code
is returned in the cache_invalidate_user data structure.
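A toy sketch of that synchronous error-reporting contract. The field names and layout are illustrative assumptions, not the actual uAPI structure:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative layout only: a user-data structure through which a
 * synchronous cache_invalidate_user call could hand back a
 * per-command error code to the VMM.
 */
struct iommu_cache_invalidate_user {
	uint64_t cmd[2];     /* raw command, opaque in this mock */
	uint32_t error_code; /* 0 on success, errno value on failure */
};

/* Mock handler: executes one command synchronously, records the result */
static int cache_invalidate_user(struct iommu_cache_invalidate_user *inv)
{
	/* pretend a zero opcode byte is an invalid command */
	if ((inv->cmd[0] & 0xff) == 0) {
		inv->error_code = 22; /* EINVAL */
		return -1;
	}
	inv->error_code = 0;
	return 0;
}
```

Because the call is synchronous, the VMM can read error_code immediately after the ioctl returns and inject a matching error back into the guest.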

> > Then I added a new mmap interface to share kernel page(s) from the
> > Driver, to allow QEMU to write all TLBI commands as a single batch.
> > Then it can initiate the batch invalidation via another synchronous
> > hypercall.
> 
> I don't think a mmap is really needed for simple batching, just
> passing a larger buffer to ioctl is probably good enough

It wouldn't be a must, but it would save a copy_from_user() on
each hypercall? And it also eases the VCMDQ path a bit.
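For comparison, the larger-ioctl-buffer batching looks something like the sketch below, with memcpy() standing in for the single copy_from_user(); the struct shape and names are assumptions for illustration:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/*
 * Sketch of batching without mmap: userspace hands the kernel one
 * larger buffer of commands, and the handler copies it in once
 * instead of doing one copy per command. memcpy() stands in for
 * copy_from_user() in this user-space mock.
 */
#define BATCH_MAX 32

struct inv_batch {
	uint32_t ncmds;
	uint64_t cmds[BATCH_MAX][2];
};

static int handle_inv_batch(const void *user_buf, uint32_t *done)
{
	struct inv_batch b;

	memcpy(&b, user_buf, sizeof(b)); /* one copy for the whole batch */
	if (b.ncmds > BATCH_MAX)
		return -22; /* -EINVAL */
	for (uint32_t i = 0; i < b.ncmds; i++) {
		/* issue b.cmds[i] to the command queue here */
		(*done)++;
	}
	return 0;
}
```

The mmap approach removes even this one copy, which is the trade-off being discussed.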

> If a SW side is built it should mirror the HW vCMDQ path, not be
> different.

The host kernel owns the host queue, while the hypervisor fills
a guest TLBI queue. Switching between two queues on a single SMMU
CMDQ (HW) would require a very complicated locking scheme, vs.
simply inserting the batch into the existing host queue. And it
probably wouldn't yield a big perf improvement anyway?

If the SMMU has ECMDQ, it could allocate a free CMDQ when one is
available, calling arm_smmu_init_one_queue() and mmapping q->base;
then it could execute the guest TLBI queue directly by passing
that q pointer.
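A minimal ring-producer sketch of "insert the batch into the existing host queue": the guest's commands are appended at the host CMDQ's producer index under the queue's normal ordering, so no queue switching is needed. Queue size and names are illustrative, not the driver's:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define QSZ 16 /* power of two, illustrative only */

struct cmdq {
	uint64_t ents[QSZ][2]; /* 16-byte command slots */
	uint32_t prod, cons;   /* free-running producer/consumer */
};

/*
 * Append a batch of guest commands at the host queue's producer
 * index. A real driver would take the cmdq lock around this; here
 * the mock is single-threaded.
 */
static int cmdq_insert_batch(struct cmdq *q, const uint64_t (*cmds)[2],
			     uint32_t n)
{
	uint32_t space = QSZ - (q->prod - q->cons);

	if (n > space)
		return -11; /* -EAGAIN: no room, caller retries */
	for (uint32_t i = 0; i < n; i++)
		memcpy(q->ents[(q->prod + i) & (QSZ - 1)], cmds[i],
		       sizeof(q->ents[0]));
	q->prod += n;
	return 0;
}
```

This is the simpler alternative being argued for: one producer path into one queue, rather than arbitrating two hardware queues.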

Thanks
Nicolin


Thread overview: 35+ messages
2023-04-03  0:33 Cache Invalidation Solution for Nested IOMMU Nicolin Chen
2023-04-03  7:26 ` Liu, Yi L
2023-04-03  8:39   ` Tian, Kevin
2023-04-03 15:24     ` Nicolin Chen
2023-04-04  2:42       ` Tian, Kevin
2023-04-04  3:12         ` Nicolin Chen
2023-04-03 12:23   ` Jason Gunthorpe
2023-04-03  8:00 ` Tian, Kevin
2023-04-03 14:29   ` Nicolin Chen
2023-04-04  2:15     ` Tian, Kevin
2023-04-04  2:47       ` Nicolin Chen
2023-04-03 14:08 ` Jason Gunthorpe
2023-04-03 14:51   ` Nicolin Chen [this message]
2023-04-03 19:15     ` Robin Murphy
2023-04-04  0:02       ` Nicolin Chen
2023-04-04 16:20         ` Jason Gunthorpe
2023-04-04 16:50           ` Shameerali Kolothum Thodi
2023-04-05 11:57             ` Jason Gunthorpe
2023-04-06  6:23             ` Zhangfei Gao
2023-04-06  6:39               ` Nicolin Chen
2023-04-06 11:40               ` Jason Gunthorpe
2023-04-10  1:08                 ` Nicolin Chen
2023-04-11  9:07                   ` Jean-Philippe Brucker
2023-04-11 11:57                     ` Jason Gunthorpe
2023-04-11 18:39                       ` Nicolin Chen
2023-04-11 18:41                         ` Jason Gunthorpe
2023-04-11 19:02                           ` Nicolin Chen
2023-04-11 18:43                     ` Nicolin Chen
2023-04-12  2:47                   ` Zhangfei Gao
2023-04-12  5:47                     ` Nicolin Chen
2023-05-03 15:14                     ` Shameerali Kolothum Thodi
2023-05-03 23:44                       ` Nicolin Chen
2023-04-05  5:45           ` Nicolin Chen
2023-04-05 11:37             ` Jason Gunthorpe
2023-04-05 15:34               ` Nicolin Chen
