From: "Michael S. Tsirkin" <mst@redhat.com>
To: Peter Xu <peterx@redhat.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>,
"Singh, Brijesh" <brijesh.singh@amd.com>,
Jason Wang <jasowang@redhat.com>,
qemu-devel@nongnu.org, qemu-stable@nongnu.org,
Halil Pasic <pasic@linux.ibm.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH V2] vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM
Date: Tue, 17 Mar 2020 10:55:18 -0400
Message-ID: <20200317105312-mutt-send-email-mst@kernel.org>
In-Reply-To: <20200317143904.GC199571@xz-x1>
On Tue, Mar 17, 2020 at 10:39:04AM -0400, Peter Xu wrote:
> On Tue, Mar 17, 2020 at 02:28:42AM -0400, Michael S. Tsirkin wrote:
> > On Mon, Mar 16, 2020 at 02:14:05PM -0400, Peter Xu wrote:
> > > On Mon, Mar 16, 2020 at 01:19:54PM -0400, Michael S. Tsirkin wrote:
> > > > On Fri, Mar 13, 2020 at 12:31:22PM -0400, Peter Xu wrote:
> > > > > On Fri, Mar 13, 2020 at 11:29:59AM -0400, Michael S. Tsirkin wrote:
> > > > > > On Fri, Mar 13, 2020 at 01:44:46PM +0100, Halil Pasic wrote:
> > > > > > > [..]
> > > > > > > > >
> > > > > > > > > CCing Tom. @Tom does vhost-vsock work for you with SEV and current qemu?
> > > > > > > > >
> > > > > > > > > Also, one can specify iommu_platform=on on a device that ain't a part of
> > > > > > > > > a secure-capable VM, just for the fun of it. And that breaks
> > > > > > > > > vhost-vsock. Or is setting iommu_platform=on only valid if
> > > > > > > > > qemu-system-s390x is protected virtualization capable?
> > > > > > > > >
> > > > > > > > > BTW, I don't have a strong opinion on the fixes tag. We currently do not
> > > > > > > > > recommend setting iommu_platform, and thus I don't think we care too
> > > > > > > > > much about past qemus having problems with it.
> > > > > > > > >
> > > > > > > > > Regards,
> > > > > > > > > Halil
> > > > > > > >
> > > > > > > >
> > > > > > > > Let's just say if we do have a Fixes: tag we want to set it correctly to
> > > > > > > > the commit that needs this fix.
> > > > > > > >
> > > > > > >
> > > > > > > I finally did some digging regarding the performance degradation. For
> > > > > > > s390x the performance degradation on vhost-net was introduced by commit
> > > > > > > 076a93d797 ("exec: simplify address_space_get_iotlb_entry"). Before
> > > > > > > that commit, IOMMUTLBEntry.addr_mask was based on plen, which in turn
> > > > > > > was calculated as the remainder of the memory region's size (from the
> > > > > > > given address), and covered most of the guest address space. That is,
> > > > > > > we didn't have a whole lot of IOTLB API overhead.
> > > > > > >
> > > > > > > With commit 076a93d797 I see IOMMUTLBEntry.addr_mask == 0xfff which comes
> > > > > > > as ~TARGET_PAGE_MASK from flatview_do_translate(). To have things working
> > > > > > > properly I applied 75e5b70e6, b021d1c044, and d542800d1e on the level of
> > > > > > > 076a93d797 and 076a93d797~1.
> > > > > >
> > > > > > Peter, what's your take on this one?
> > > > >
> > > > > Commit 076a93d797 was part of a patchset whose goal was to provide
> > > > > sensible IOTLB entries and also to start working with huge
> > > > > pages.
> > > >
> > > > So fundamentally the issue is that it
> > > > never produces entries larger than page size.
> > > >
> > > > Wasteful even just with huge pages, all the more
> > > > so with passthrough, which could have gigabyte
> > > > entries.
> > > >
> > > > Want to try fixing that?
> > >
> > > Yes we can fix that, but I'm still not sure whether changing the
> > > interface of address_space_get_iotlb_entry() to cover ad-hoc regions is
> > > a good idea, because I think it's still a memory core API and imho it
> > > would still be good for the returned IOTLBs to match what the hardware
> > > will be using (always page-aligned IOTLBs).
> >
> > E.g. with virtio-iommu, there's no hardware in sight.
> > Even with e.g. VTD page aligned does not mean TARGET_PAGE,
> > can be much bigger.
>
> Right. Sorry to be unclear, but I meant the emulated device (in this
> case for x86 it's VT-d) should follow the hardware. Here the page
> mask is decided by VT-d in vtd_iommu_translate() for PT mode, and it
> is 4K only. As another example, ARM SMMU does a similar thing (it
> returns PAGE_SIZE when PT is enabled; see smmuv3_translate()). That
> actually makes sense to me. On the other hand, I'm not sure whether
> there are side effects if we change this to cover the whole address
> space for PT.
>
> Thanks,
Well, we can translate a batch of entries in a loop and, as long as the
VA/PA mappings are consistent, treat the batch as one.
This is a classical batching approach, and not doing it is a classical
cause of bad performance.
> >
> > > Also it would still be
> > > not ideal because the vhost backend will still need to send the MISSING
> > > messages and block for each of the contiguous guest memory ranges
> > > registered, so there will still be mysterious delays. Not to mention
> > > that logically all the caches can be invalidated too, so in that sense
> > > I think it's as hacky as the vhost speedup patch mentioned below..
> > >
> > > Ideally I think vhost should be able to know when PT is enabled or
> > > disabled for the device, so the vhost backend (kernel or userspace)
> > > should be able to directly use GPA for DMA. That might need some new
> > > vhost interface.
> > >
> > > For the s390-specific issue, I would think Jason's patch is a simple
> > > and ideal solution already.
> > >
> > > Thanks,
> > >
> > > >
> > > >
> > > > > Frankly speaking, after a few years I have forgotten the original
> > > > > motivation of that whole thing, but IIRC there's a patch that was
> > > > > trying to speed things up especially for vhost, though I noticed
> > > > > it was never merged:
> > > > >
> > > > > https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg00574.html
> > > > >
> > > > > Regarding the current patch, I'm not sure I understand it
> > > > > correctly, but does that performance issue only happen when (1)
> > > > > there's no intel-iommu device, and (2) there is iommu_platform=on
> > > > > specified for the vhost backend?
> > > > >
> > > > > If so, I'd confess I am not too surprised that this fails the boot
> > > > > with vhost-vsock, because after all we specified iommu_platform=on
> > > > > explicitly on the cmdline, so if we want it to work we can simply
> > > > > remove that iommu_platform=on when vhost-vsock doesn't support it
> > > > > yet... I thought iommu_platform=on was added for that case - when we
> > > > > want to force IOMMU to be enabled from the host side, and it should
> > > > > always be used with a vIOMMU device.
> > > > >
> > > > > However I also agree that from performance POV this patch helps for
> > > > > this quite special case.
> > > > >
> > > > > Thanks,
> > > > >
> > > > > --
> > > > > Peter Xu
> > > >
> > >
> > > --
> > > Peter Xu
> >
>
> --
> Peter Xu