qemu-devel.nongnu.org archive mirror
From: "Michael S. Tsirkin" <mst@redhat.com>
To: Halil Pasic <pasic@linux.ibm.com>
Cc: Kevin Wolf <kwolf@redhat.com>, Cornelia Huck <cohuck@redhat.com>,
	Brijesh Singh <brijesh.singh@amd.com>,
	Daniel Henrique Barboza <danielhb@linux.ibm.com>,
	Daniel Henrique Barboza <danielhb413@gmail.com>,
	qemu-devel@nongnu.org, qemu-stable@nongnu.org,
	Jakob Naucke <Jakob.Naucke@ibm.com>
Subject: Re: [PATCH v2 1/1] virtio: fix the condition for iommu_platform not supported
Date: Fri, 28 Jan 2022 04:48:05 -0500	[thread overview]
Message-ID: <20220128044643-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <20220128032911.440323f1.pasic@linux.ibm.com>

On Fri, Jan 28, 2022 at 03:29:11AM +0100, Halil Pasic wrote:
> On Thu, 27 Jan 2022 18:34:23 -0300
> Daniel Henrique Barboza <danielhb413@gmail.com> wrote:
> 
> > On 1/27/22 10:28, Halil Pasic wrote:
> > > ping^2
> > > 
> > > Also adding Brijesh and Daniel, as I believe you guys should be
> > > interested in this, and I'm yet to receive review.
> > > 
> > > @Brijesh, Daniel: Can you confirm that AMD (SEV) and Power are affected
> > > too, and that the fix works for your platforms as well?  
> > 
> > I failed to find a host that has Power secure execution support. I'll keep looking.
> > 
> > 
> > Meanwhile I have to mention that this patch re-introduced the problem that Kevin's
> > commit fixed.
> > 
> > 
> > With current upstream, if you start a regular guest with the following command line:
> > 
> > qemu-system-ppc64 (....)
> > -chardev socket,id=char0,path=/tmp/vhostqemu
> > -device vhost-user-fs-pci,chardev=char0,tag=myfs,iommu_platform=on
> > 
> > i.e. a guest with a vhost-user-fs-pci device that claims to have iommu support,
> > but it doesn't, this is the error message:
> > 
> > 
> > qemu-system-ppc64: -device vhost-user-fs-pci,chardev=char0,tag=myfs,iommu_platform=on: iommu_platform=true is not supported by the device
> > 
> > 
> > With this patch, that command line above starts the guest. 
> > virtiofsd fails during boot:
> > 
> > sudo ~/qemu/build/tools/virtiofsd/virtiofsd --socket-path=/tmp/vhostqemu -o source=~/linux-L1
> > [sudo] password for danielhb:
> > virtio_session_mount: Waiting for vhost-user socket connection...
> > virtio_session_mount: Received vhost-user socket connection
> > virtio_loop: Entry
> > fv_panic: libvhost-user: Invalid vring_addr message
> > 
> > 
> > And inside the guest, if you attempt to mount and use the virtiofs filesystem, the guest
> > hangs:
> > 
> > [root@localhost ~]# mount -t virtiofs myfs /mnt
> > [root@localhost ~]# cd /mnt
> > 
> > (hangs)
> > 
> > Exiting QEMU throws several vhost related errors:
> > 
> > 
> > QEMU 6.2.50 monitor - type 'help' for more information
> > (qemu) quit
> > qemu-system-ppc64: Failed to set msg fds.
> > qemu-system-ppc64: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
> > qemu-system-ppc64: Failed to set msg fds.
> > qemu-system-ppc64: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
> > qemu-system-ppc64: Failed to set msg fds.
> > qemu-system-ppc64: vhost_set_vring_call failed: Invalid argument (22)
> > qemu-system-ppc64: Failed to set msg fds.
> > qemu-system-ppc64: vhost_set_vring_call failed: Invalid argument (22)
> > 
> > 
> 
> 
> Does your VM have an IOMMU, and does your guest see it? If yes, does
> vdev->dma_as != &address_space_memory hold for your virtio device? If not, why not?
> 
> My understanding is that your guest wants to use translated addresses:
> because it sees the ACCESS_PLATFORM feature, it probably thinks that
> your device is indeed behind an IOMMU; at the very least it
> sees that there is an IOMMU. But then I would expect your virtio device
> to have its vdev->dma_as set to something different than
> &address_space_memory. Conversely, if your dma address space is
> address_space_memory, then you don't need address translation, because
> your dma addresses are the same as your guest physical addresses.
> 
> > 
> > I made a little experiment with upstream and reverting Kevin's patch and the result is
> > the same, meaning that this is the original bug [1] Kevin fixed back then. Note that [1]
> > was reported on x86, meaning that this particular issue seems to be arch agnostic.
> 
> We don't have this problem on s390, so it ain't entirely arch agnostic.
> 
> > 
> > 
> > My point here is that your patch fixes the situation for s390x, and Brijesh already chimed
> > in claiming that it fixed things for AMD SEV, but it reintroduced a bug. I believe you should
> > include this test case with vhost-user in your testing to figure out a way to fix what
> > is needed without adding this particular regression.
> 
> Can you help me with this? IMHO the big problem is that iommu_platform
> is used for two distinct things. I've described that in the commit
> message.
> 
> We may be able to differentiate between the two using ->dma_as, but for
> that it needs to be set up correctly: whenever you require translation,
> it should be something different than address_space_memory. The question
> is: why do you require translation but don't have your ->dma_as set up
> properly? It can be a guest thing, i.e. the guest just assumes it has to
> use bus addresses while it actually does not have to, or we indeed do have
> an IOMMU which polices the device's access to the guest memory, but for
> some strange reason we failed to set up ->dma_as to reflect that.
> 
> @Michael: what is your opinion?

Right, I am puzzled too.
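
To make the comparison concrete, here is a minimal stand-alone model of the old check from 04ceb61a40 versus the one proposed in this patch. This is a sketch only, not QEMU code: the struct and the address_space_memory sentinel below are hypothetical stand-ins for QEMU's VirtIODevice and its default address space.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-ins for QEMU's VirtIODevice and address_space_memory. */
static char address_space_memory;   /* sentinel for the default address space */

typedef struct {
    bool has_access_platform;       /* host offers _F_ACCESS_PLATFORM */
    void *dma_as;                   /* device's DMA address space */
} VirtIODeviceModel;

/* Old condition (04ceb61a40): reject whenever iommu_platform=on but the
 * device does not offer _F_ACCESS_PLATFORM. */
static bool old_check_fails(bool iommu_platform, const VirtIODeviceModel *vdev)
{
    return iommu_platform && !vdev->has_access_platform;
}

/* Proposed condition: additionally require that translation is actually
 * in use, i.e. that dma_as differs from the default address space. */
static bool new_check_fails(bool iommu_platform, const VirtIODeviceModel *vdev)
{
    return iommu_platform
           && vdev->dma_as != (void *)&address_space_memory
           && !vdev->has_access_platform;
}
```

A confidential guest's vhost device typically has dma_as equal to address_space_memory and no _F_ACCESS_PLATFORM, so the old check rejects it while the new one accepts it. The open question in this thread is Daniel's case, where translation apparently is needed yet dma_as still equals address_space_memory, so the new check lets a broken configuration boot.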

> > 
> > 
> > In fact, I have a feeling that this is not the first time this kind of situation is discussed
> > around here. This reminds me of [2] and a discussion about the order virtiofs features
> > are negotiated versus when/how QEMU inits the devices.
> > 
> > 
> > 
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1935019
> > [2] https://lists.gnu.org/archive/html/qemu-devel/2021-02/msg05644.html
> > 
> > 
> > Thanks,
> > 
> > 
> > Daniel
> > 
> > 
> > > 
> > > Regards,
> > > Halil
> > > 
> > > On Tue, 25 Jan 2022 11:21:12 +0100
> > > Halil Pasic <pasic@linux.ibm.com> wrote:
> > >   
> > >> ping
> > >>
> > >> On Mon, 17 Jan 2022 13:02:38 +0100
> > >> Halil Pasic <pasic@linux.ibm.com> wrote:
> > >>  
> > >>> The commit 04ceb61a40 ("virtio: Fail if iommu_platform is requested, but
> > >>> unsupported") claims to fail the device hotplug when iommu_platform
> > >>> is requested, but not supported by the (vhost) device. At first
> > >>> glance the condition for detecting that situation looks perfect, but
> > >>> because of a certain peculiarity of iommu_platform it ain't.
> > >>>
> > >>> In fact the aforementioned commit introduces a regression. It breaks
> > >>> virtio-fs support for Secure Execution, and most likely also for AMD SEV
> > >>> or any other confidential guest scenario that relies on encrypted guest
> > >>> memory. The same also applies to any other vhost device that does not
> > >>> support _F_ACCESS_PLATFORM.
> > >>>
> > >>> The peculiarity is that iommu_platform and _F_ACCESS_PLATFORM conflate
> > >>> "device can not access all of the guest RAM" and "iova != gpa, thus
> > >>> device needs to translate iova".
> > >>>
> > >>> Confidential guest technologies currently rely on the device/hypervisor
> > >>> offering _F_ACCESS_PLATFORM, so that, after the feature has been
> > >>> negotiated, the guest grants access to the portions of memory the
> > >>> device needs to see. So for confidential guests, generally,
> > >>> _F_ACCESS_PLATFORM is about the restricted access to memory, but not
> > >>> about the addresses used being something else than guest physical
> > >>> addresses.
> > >>>
> > >>> This is the very reason commit f7ef7e6e3b ("vhost: correctly
> > >>> turn on VIRTIO_F_IOMMU_PLATFORM") exists: it fences _F_ACCESS_PLATFORM
> > >>> from the vhost device that does not need it, because on the vhost
> > >>> interface it only means "I/O address translation is needed".
> > >>>
> > >>> This patch takes inspiration from f7ef7e6e3b ("vhost: correctly turn on
> > >>> VIRTIO_F_IOMMU_PLATFORM"), and uses the same condition for detecting the
> > >>> situation when _F_ACCESS_PLATFORM is requested, but no I/O translation
> > >>> by the device, and thus no device capability, is needed. In this
> > >>> situation claiming that the device does not support iommu_platform=on
> > >>> is counter-productive. So let us stop doing that!
> > >>>
> > >>> Signed-off-by: Halil Pasic <pasic@linux.ibm.com>
> > >>> Reported-by: Jakob Naucke <Jakob.Naucke@ibm.com>
> > >>> Fixes: 04ceb61a40 ("virtio: Fail if iommu_platform is requested, but
> > >>> unsupported")
> > >>> Cc: Kevin Wolf <kwolf@redhat.com>
> > >>> Cc: qemu-stable@nongnu.org
> > >>>
> > >>> ---
> > >>>
> > >>> v1->v2:
> > >>> * Commit message tweaks. Most notably fixed commit SHA (Michael)
> > >>>
> > >>> ---
> > >>>   hw/virtio/virtio-bus.c | 11 ++++++-----
> > >>>   1 file changed, 6 insertions(+), 5 deletions(-)
> > >>>
> > >>> diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
> > >>> index d23db98c56..c1578f3de2 100644
> > >>> --- a/hw/virtio/virtio-bus.c
> > >>> +++ b/hw/virtio/virtio-bus.c
> > >>> @@ -69,11 +69,6 @@ void virtio_bus_device_plugged(VirtIODevice *vdev, Error **errp)
> > >>>           return;
> > >>>       }
> > >>>   
> > >>> -    if (has_iommu && !virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM)) {
> > >>> -        error_setg(errp, "iommu_platform=true is not supported by the device");
> > >>> -        return;
> > >>> -    }
> > >>> -
> > >>>       if (klass->device_plugged != NULL) {
> > >>>           klass->device_plugged(qbus->parent, &local_err);
> > >>>       }
> > >>> @@ -88,6 +83,12 @@ void virtio_bus_device_plugged(VirtIODevice *vdev, Error **errp)
> > >>>       } else {
> > >>>           vdev->dma_as = &address_space_memory;
> > >>>       }
> > >>> +
> > >>> +    if (has_iommu && vdev->dma_as != &address_space_memory
> > >>> +                  && !virtio_host_has_feature(vdev, VIRTIO_F_IOMMU_PLATFORM)) {
> > >>> +        error_setg(errp, "iommu_platform=true is not supported by the device");
> > >>> +        return;
> > >>> +    }
> > >>>   }
> > >>>   
> > >>>   /* Reset the virtio_bus */
> > >>>
> > >>> base-commit: 6621441db50d5bae7e34dbd04bf3c57a27a71b32  
> > >>  
> > > 
> > >   
> > 




Thread overview: 11+ messages
2022-01-17 12:02 [PATCH v2 1/1] virtio: fix the condition for iommu_platform not supported Halil Pasic
2022-01-25 10:21 ` Halil Pasic
2022-01-27 13:28   ` Halil Pasic
2022-01-27 19:17     ` Brijesh Singh
2022-01-27 21:34     ` Daniel Henrique Barboza
2022-01-28  2:29       ` Halil Pasic
2022-01-28  9:48         ` Michael S. Tsirkin [this message]
2022-01-28 11:02         ` Daniel Henrique Barboza
2022-01-28 11:48           ` Halil Pasic
2022-01-28 12:12             ` Daniel Henrique Barboza
2022-01-28 11:52           ` Michael S. Tsirkin
