From: Peter Xu <peterx@redhat.com>
To: Halil Pasic <pasic@linux.ibm.com>
Cc: Tom Lendacky <thomas.lendacky@amd.com>,
"Singh, Brijesh" <brijesh.singh@amd.com>,
"Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
qemu-stable@nongnu.org, qemu-devel@nongnu.org,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH V2] vhost: correctly turn on VIRTIO_F_IOMMU_PLATFORM
Date: Mon, 16 Mar 2020 13:31:58 -0400 [thread overview]
Message-ID: <20200316173158.GA184827@xz-x1> (raw)
In-Reply-To: <20200316175737.365d7b32.pasic@linux.ibm.com>
On Mon, Mar 16, 2020 at 05:57:37PM +0100, Halil Pasic wrote:
> On Fri, 13 Mar 2020 12:31:22 -0400
> Peter Xu <peterx@redhat.com> wrote:
>
> > On Fri, Mar 13, 2020 at 11:29:59AM -0400, Michael S. Tsirkin wrote:
> > > On Fri, Mar 13, 2020 at 01:44:46PM +0100, Halil Pasic wrote:
> > > > [..]
> > > > > >
> > > > > > CCing Tom. @Tom does vhost-vsock work for you with SEV and current qemu?
> > > > > >
> > > > > > Also, one can specify iommu_platform=on on a device that ain't a part of
> > > > > > a secure-capable VM, just for the fun of it. And that breaks
> > > > > > vhost-vsock. Or is setting iommu_platform=on only valid if
> > > > > > qemu-system-s390x is protected virtualization capable?
> > > > > >
> > > > > > BTW, I don't have a strong opinion on the Fixes: tag. We currently do
> > > > > > not recommend setting iommu_platform, and thus I don't think we care
> > > > > > too much about past QEMUs having problems with it.
> > > > > >
> > > > > > Regards,
> > > > > > Halil
> > > > >
> > > > >
> > > > > Let's just say that if we do have a Fixes: tag, we want it to point to
> > > > > the commit that needs this fix.
> > > > >
> > > >
> > > > I finally did some digging regarding the performance degradation. For
> > > > s390x the performance degradation on vhost-net was introduced by commit
> > > > 076a93d797 ("exec: simplify address_space_get_iotlb_entry"). Before
> > > > that, IOMMUTLBEntry.addr_mask was based on plen, which in turn was
> > > > calculated as the rest of the memory region's size (from the given
> > > > address) and thus covered most of the guest address space. That is,
> > > > we didn't have a whole lot of IOTLB API overhead.
> > > >
> > > > With commit 076a93d797 I see IOMMUTLBEntry.addr_mask == 0xfff, which comes
> > > > as ~TARGET_PAGE_MASK from flatview_do_translate(). To get a working setup
> > > > for comparison, I applied 75e5b70e6, b021d1c044, and d542800d1e on top of
> > > > both 076a93d797 and 076a93d797~1.
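
For illustration only, here is a minimal sketch (made-up numbers, not code
taken from QEMU) of why a page-sized addr_mask blows up the number of
device-IOTLB entries, and hence the number of vhost IOTLB miss/update round
trips, compared to a mask derived from the remaining region size:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical illustration: how many translations are needed to
     * cover a mapping, depending on how wide each addr_mask is. */
    int main(void)
    {
        uint64_t span        = 4ULL << 30;   /* e.g. 4 GiB of guest memory */
        uint64_t page_mask   = 0xfffULL;     /* ~TARGET_PAGE_MASK, 4 KiB   */
        uint64_t region_mask = span - 1;     /* plen-style mask: one entry
                                                covers the whole region    */

        printf("page-sized mask:   %llu entries\n",
               (unsigned long long)(span / (page_mask + 1)));
        printf("region-sized mask: %llu entries\n",
               (unsigned long long)(span / (region_mask + 1)));
        return 0;
    }

Each additional IOTLB miss presumably costs an extra message round trip
between the vhost backend and QEMU, which would explain the degradation
described above.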
> > >
> > > Peter, what's your take on this one?
> >
> > Commit 076a93d797 was part of a patchset whose goal was to provide
> > sensible IOTLB entries, and that should also have started to work with
> > huge pages. Frankly speaking, after a few years I have forgotten the
> > original motivation for the whole thing, but IIRC there was a patch
> > trying to speed things up especially for vhost, and I noticed it was
> > never merged:
> >
> > https://lists.gnu.org/archive/html/qemu-devel/2017-06/msg00574.html
[1]
> >
>
> From the looks of it, I don't think we would have seen such a big
> performance degradation had this patch been included. I can give
> it a spin if you like. Shall I?
>
> > Regarding the current patch, I'm not sure I understand it correctly,
> > but does that performance issue only happen when (1) there is no
> > intel-iommu device, and (2) iommu_platform=on is specified for the
> > vhost backend?
> >
>
> I can confirm that your description covers my scenario. I didn't
> investigate what happens when we have an intel-iommu, because s390 does
> not do intel-iommu. I can also confirm that no performance degradation
> is observed when the virtio-net device has iommu_platform=off. Note that
> iommu_platform is a property of the virtio device (and not of the backend).
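
For reference, a command line along these lines should correspond to the
reported setup on s390x (ids and the tap backend are placeholders; the
point is vhost on, iommu_platform=on on the device, and no vIOMMU):

    qemu-system-s390x ... \
        -netdev tap,id=hostnet0,vhost=on \
        -device virtio-net-ccw,netdev=hostnet0,iommu_platform=on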
>
>
> > If so, I confess I'm not too surprised that this fails to boot with
> > vhost-vsock, because after all we specified iommu_platform=on
> > explicitly on the cmdline, so if we want it to work we can simply
> > remove that iommu_platform=on while vhost-vsock doesn't support it
> > yet... I thought iommu_platform=on was added for that case - when we
> > want to force the IOMMU to be enabled from the host side, and it
> > should always be used with a vIOMMU device.
> >
>
> The problem is that the virtio feature bit F_ACCESS_PLATFORM, which is
> directly controlled by the iommu_platform property, stands for two things:
> 1) IOVA translation is needed
> 2) the device's access to the guest's RAM is restricted.
>
> There are cases where 2) applies and 1) does not. We need to specify
> iommu_platform=on to make the virtio implementation in the guest use
> the DMA API, because we need to grant access to memory as required. But
> we don't need translation and we don't have a vIOMMU.
I see the point of this patch now. I'm still unclear on how s390
handles DMA protection, but it seems totally different from the IOMMU
model on x86/arm. Considering this, please ignore the above patch [1],
because playing with the IOTLB caches like that is hackish in any case;
the current patch should be much better (and easier) IMHO.
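
To make that distinction concrete, here is a rough sketch (plain C, purely
illustrative, not QEMU or guest code) of the two orthogonal properties that
the single F_ACCESS_PLATFORM bit has to express today:

    #include <stdbool.h>
    #include <stdio.h>

    /* Illustrative model only: what VIRTIO_F_ACCESS_PLATFORM
     * (iommu_platform=on) is overloaded to mean, per the discussion above. */
    struct platform_access {
        bool needs_iova_translation; /* (1) a vIOMMU translates device addrs  */
        bool access_restricted;      /* (2) the device may only touch memory
                                        the guest explicitly made accessible
                                        (e.g. s390 protected virt, AMD SEV)   */
    };

    /* The guest must use the DMA API, and the host must offer the feature
     * bit, as soon as either property holds, even though only (1) implies
     * that any address translation actually takes place. */
    static bool must_offer_access_platform(struct platform_access p)
    {
        return p.needs_iova_translation || p.access_restricted;
    }

    int main(void)
    {
        struct platform_access s390_pv = { .access_restricted = true };
        printf("offer bit: %d\n", must_offer_access_platform(s390_pv));
        return 0;
    }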
Thanks,
--
Peter Xu