From: Jason Gunthorpe <jgg@nvidia.com>
To: Nicolin Chen <nicolinc@nvidia.com>
Cc: Robin Murphy <robin.murphy@arm.com>,
kevin.tian@intel.com, yi.l.liu@intel.com, eric.auger@redhat.com,
baolu.lu@linux.intel.com, shameerali.kolothum.thodi@huawei.com,
jean-philippe@linaro.org, iommu@lists.linux.dev
Subject: Re: Cache Invalidation Solution for Nested IOMMU
Date: Wed, 5 Apr 2023 08:37:56 -0300 [thread overview]
Message-ID: <ZC1dlENobD4wlZCv@nvidia.com> (raw)
In-Reply-To: <ZC0LFM9hxF9wY76w@Asurada-Nvidia>
On Tue, Apr 04, 2023 at 10:45:56PM -0700, Nicolin Chen wrote:
> On Tue, Apr 04, 2023 at 01:20:01PM -0300, Jason Gunthorpe wrote:
> > On Mon, Apr 03, 2023 at 05:02:09PM -0700, Nicolin Chen wrote:
> >
> > > My preference is to have a mmap'd page, so the interface can
> > > be reused later by VCMDQ too. Performance-wise, it should be
> > > good enough, since it does batching, IMHO.
> >
> > You can't reuse mmapping the queue page with VCMDQ, so it doesn't
> > seem meaningful to me.
> >
> > There should be no mmap on the SW path. If you need a half step
> > between an ioctl as a batch and a full vhost-like queue scheme, then
> > using io_uring with pre-registered memory would be appropriate.
>
> I've changed to a non-mmap approach where the host kernel reads
> the guest queue directly and inserts all the invalidation commands
> into the host queue.
>
> The qsz could be as large as 128 x 64K pages, so the handler has
> to pin a big array of pages.
>
> (The handler still needs a pathway to report errors; I will add
> that tomorrow.)
>
> Does the implementation below look fine in general?
In general, get_user_pages() is really slow compared to copy_from_user().
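As a rough sketch of that suggestion: instead of pinning the guest queue
pages, the handler would copy the pending commands out of the guest queue
in bounded batches. This is a hypothetical userspace analogue (all names
are made up; in the kernel the memcpy() below would be a copy_from_user()
on the guest queue's user VA, with the usual error check):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical 16-byte invalidation command, modeled on an SMMU CMDQ entry. */
struct inv_cmd {
	uint64_t w0, w1;
};

#define BATCH 32	/* commands per copy_from_user()-style call */

/*
 * Drain `n` commands from a circular guest queue of `qlen` entries,
 * starting at consumer index `cons`, into a host-side buffer.
 * Copying in bounded batches avoids pinning the queue pages entirely.
 * Returns the number of commands copied.
 */
static size_t drain_guest_queue(const struct inv_cmd *guest_q, size_t qlen,
				size_t cons, size_t n,
				struct inv_cmd *host_buf)
{
	size_t copied = 0;

	while (copied < n) {
		size_t batch = n - copied;

		if (batch > BATCH)
			batch = BATCH;
		/* Clamp the batch at the queue wrap point. */
		if (batch > qlen - cons)
			batch = qlen - cons;

		/* In the kernel: copy_from_user() from the guest queue VA. */
		memcpy(&host_buf[copied], &guest_q[cons],
		       batch * sizeof(struct inv_cmd));

		copied += batch;
		cons = (cons + batch) % qlen;
	}
	return copied;
}
```

The per-batch copy cost is a straight-line memcpy of already-mapped memory,
versus per-page reference counting and later unpinning with the
get_user_pages() approach.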
Jason