From: Si-Wei Liu <si-wei.liu@oracle.com>
To: Jason Wang <jasowang@redhat.com>
Cc: eperezma <eperezma@redhat.com>, Eli Cohen <eli@mellanox.com>,
	qemu-devel <qemu-devel@nongnu.org>, mst <mst@redhat.com>
Subject: Re: [PATCH 4/7] virtio: don't read pending event on host notifier if disabled
Date: Wed, 30 Mar 2022 09:40:46 -0700
Message-ID: <4f2acb7a-d436-9d97-80b1-3308c1b396b5@oracle.com>
In-Reply-To: <CACGkMEt=Bs7XPWQaMOQB5iBece1CH9HJZ69YEF_m-e2Tj95qDg@mail.gmail.com>



On 3/30/2022 2:14 AM, Jason Wang wrote:
> On Wed, Mar 30, 2022 at 2:33 PM Si-Wei Liu <si-wei.liu@oracle.com> wrote:
>> The previous commit prevents vhost-user and vhost-vdpa from using
>> the userland vq handler via disable_ioeventfd_handler. The same
>> needs to be done for host notifier cleanup too, as the
>> virtio_queue_host_notifier_read handler still tends to read a
>> pending event left behind on the ioeventfd and attempts to handle
>> outstanding kicks from the QEMU userland vq.
>>
>> If the vq handler is not disabled on cleanup, it may lead to a
>> SIGSEGV with a recursive virtio_net_set_status call on the control vq:
>>
>> 0  0x00007f8ce3ff3387 in raise () at /lib64/libc.so.6
>> 1  0x00007f8ce3ff4a78 in abort () at /lib64/libc.so.6
>> 2  0x00007f8ce3fec1a6 in __assert_fail_base () at /lib64/libc.so.6
>> 3  0x00007f8ce3fec252 in  () at /lib64/libc.so.6
>> 4  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:563
>> 5  0x0000558f52d79421 in vhost_vdpa_get_vq_index (dev=<optimized out>, idx=<optimized out>) at ../hw/virtio/vhost-vdpa.c:558
>> 6  0x0000558f52d7329a in vhost_virtqueue_mask (hdev=0x558f55c01800, vdev=0x558f568f91f0, n=2, mask=<optimized out>) at ../hw/virtio/vhost.c:1557
> I feel it's probably a bug elsewhere, e.g. when we fail to start
> vhost-vDPA, it's QEMU's responsibility to poll the host notifier and
> we will fall back to the userspace vq handler.
Apologies, an incorrect stack trace was pasted; it actually came from
patch #1. I will post a v2 with the correct one, shown below:

0  0x000055f800df1780 in qdev_get_parent_bus (dev=0x0) at ../hw/core/qdev.c:376
1  0x000055f800c68ad8 in virtio_bus_device_iommu_enabled (vdev=vdev@entry=0x0) at ../hw/virtio/virtio-bus.c:331
2  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>) at ../hw/virtio/vhost.c:318
3  0x000055f800d70d7f in vhost_memory_unmap (dev=<optimized out>, buffer=0x7fc19bec5240, len=2052, is_write=1, access_len=2052) at ../hw/virtio/vhost.c:336
4  0x000055f800d71867 in vhost_virtqueue_stop (dev=dev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590, vq=0x55f8037cceb0, idx=0) at ../hw/virtio/vhost.c:1241
5  0x000055f800d7406c in vhost_dev_stop (hdev=hdev@entry=0x55f8037ccc30, vdev=vdev@entry=0x55f8044ec590) at ../hw/virtio/vhost.c:1839
6  0x000055f800bf00a7 in vhost_net_stop_one (net=0x55f8037ccc30, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:315
7  0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1) at ../hw/net/vhost_net.c:423
8  0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
9  0x000055f800d4e628 in virtio_net_set_status (vdev=vdev@entry=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
10 0x000055f800d534d8 in virtio_net_handle_ctrl (iov_cnt=<optimized out>, iov=<optimized out>, cmd=0 '\000', n=0x55f8044ec590) at ../hw/net/virtio-net.c:1408
11 0x000055f800d534d8 in virtio_net_handle_ctrl (vdev=0x55f8044ec590, vq=0x7fc1a7e888d0) at ../hw/net/virtio-net.c:1452
12 0x000055f800d69f37 in virtio_queue_host_notifier_read (vq=0x7fc1a7e888d0) at ../hw/virtio/virtio.c:2331
13 0x000055f800d69f37 in virtio_queue_host_notifier_read (n=n@entry=0x7fc1a7e8894c) at ../hw/virtio/virtio.c:3575
14 0x000055f800c688e6 in virtio_bus_cleanup_host_notifier (bus=<optimized out>, n=n@entry=14) at ../hw/virtio/virtio-bus.c:312
15 0x000055f800d73106 in vhost_dev_disable_notifiers (hdev=hdev@entry=0x55f8035b51b0, vdev=vdev@entry=0x55f8044ec590) at ../../../include/hw/virtio/virtio-bus.h:35
16 0x000055f800bf00b2 in vhost_net_stop_one (net=0x55f8035b51b0, dev=0x55f8044ec590) at ../hw/net/vhost_net.c:316
17 0x000055f800bf0678 in vhost_net_stop (dev=dev@entry=0x55f8044ec590, ncs=0x55f80452bae0, data_queue_pairs=data_queue_pairs@entry=7, cvq=cvq@entry=1) at ../hw/net/vhost_net.c:423
18 0x000055f800d4e628 in virtio_net_set_status (status=<optimized out>, n=0x55f8044ec590) at ../hw/net/virtio-net.c:296
19 0x000055f800d4e628 in virtio_net_set_status (vdev=0x55f8044ec590, status=15 '\017') at ../hw/net/virtio-net.c:370
20 0x000055f800d6c4b2 in virtio_set_status (vdev=0x55f8044ec590, val=<optimized out>) at ../hw/virtio/virtio.c:1945
21 0x000055f800d11d9d in vm_state_notify (running=running@entry=false, state=state@entry=RUN_STATE_SHUTDOWN) at ../softmmu/runstate.c:333
22 0x000055f800d04e7a in do_vm_stop (state=state@entry=RUN_STATE_SHUTDOWN, send_stop=send_stop@entry=false) at ../softmmu/cpus.c:262
23 0x000055f800d04e99 in vm_shutdown () at ../softmmu/cpus.c:280
24 0x000055f800d126af in qemu_cleanup () at ../softmmu/runstate.c:812
25 0x000055f800ad5b13 in main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at ../softmmu/main.c:51

From the trace, the pending read only occurs in the stop path. The
recursive virtio_net_set_status call from virtio_net_handle_ctrl doesn't
make sense to me. I'm not sure I understand why we need to handle a
pending host notification in the userland vq; can you elaborate?
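
For reference, the handler in frames 12/13 boils down to roughly the
following (paraphrased from hw/virtio/virtio.c around this QEMU
version; exact details may differ). Any event still pending on the
ioeventfd at cleanup time gets dispatched to the userland vq handler,
which for the control vq is virtio_net_handle_ctrl, and from there the
recursive virtio_net_set_status seen in the trace:

/* Paraphrase, not a verbatim copy: reading the host notifier
 * test-and-clears the ioeventfd and, if an event was pending,
 * hands it to the userland vq handler.
 */
void virtio_queue_host_notifier_read(EventNotifier *n)
{
    VirtQueue *vq = container_of(n, VirtQueue, host_notifier);

    if (event_notifier_test_and_clear(n)) {
        virtio_queue_notify_vq(vq);    /* ends up in vq->handle_output() */
    }
}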

Thanks,
-Siwei

>
> Thanks
>
>> 7  0x0000558f52c6b89a in virtio_pci_set_guest_notifier (d=d@entry=0x558f568f0f60, n=n@entry=2, assign=assign@entry=true, with_irqfd=with_irqfd@entry=false)
>>     at ../hw/virtio/virtio-pci.c:974
>> 8  0x0000558f52c6c0d8 in virtio_pci_set_guest_notifiers (d=0x558f568f0f60, nvqs=3, assign=true) at ../hw/virtio/virtio-pci.c:1019
>> 9  0x0000558f52bf091d in vhost_net_start (dev=dev@entry=0x558f568f91f0, ncs=0x558f56937cd0, data_queue_pairs=data_queue_pairs@entry=1, cvq=cvq@entry=1)
>>     at ../hw/net/vhost_net.c:361
>> 10 0x0000558f52d4e5e7 in virtio_net_set_status (status=<optimized out>, n=0x558f568f91f0) at ../hw/net/virtio-net.c:289
>> 11 0x0000558f52d4e5e7 in virtio_net_set_status (vdev=0x558f568f91f0, status=15 '\017') at ../hw/net/virtio-net.c:370
>> 12 0x0000558f52d6c4b2 in virtio_set_status (vdev=vdev@entry=0x558f568f91f0, val=val@entry=15 '\017') at ../hw/virtio/virtio.c:1945
>> 13 0x0000558f52c69eff in virtio_pci_common_write (opaque=0x558f568f0f60, addr=<optimized out>, val=<optimized out>, size=<optimized out>) at ../hw/virtio/virtio-pci.c:1292
>> 14 0x0000558f52d15d6e in memory_region_write_accessor (mr=0x558f568f19d0, addr=20, value=<optimized out>, size=1, shift=<optimized out>, mask=<optimized out>, attrs=...)
>>     at ../softmmu/memory.c:492
>> 15 0x0000558f52d127de in access_with_adjusted_size (addr=addr@entry=20, value=value@entry=0x7f8cdbffe748, size=size@entry=1, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x558f52d15cf0 <memory_region_write_accessor>, mr=0x558f568f19d0, attrs=...) at ../softmmu/memory.c:554
>> 16 0x0000558f52d157ef in memory_region_dispatch_write (mr=mr@entry=0x558f568f19d0, addr=20, data=<optimized out>, op=<optimized out>, attrs=attrs@entry=...)
>>     at ../softmmu/memory.c:1504
>> 17 0x0000558f52d078e7 in flatview_write_continue (fv=fv@entry=0x7f8accbc3b90, addr=addr@entry=103079215124, attrs=..., ptr=ptr@entry=0x7f8ce6300028, len=len@entry=1, addr1=<optimized out>, l=<optimized out>, mr=0x558f568f19d0) at ../../../include/qemu/host-utils.h:165
>> 18 0x0000558f52d07b06 in flatview_write (fv=0x7f8accbc3b90, addr=103079215124, attrs=..., buf=0x7f8ce6300028, len=1) at ../softmmu/physmem.c:2822
>> 19 0x0000558f52d0b36b in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>)
>>     at ../softmmu/physmem.c:2914
>> 20 0x0000558f52d0b3da in address_space_rw (as=<optimized out>, addr=<optimized out>, attrs=...,
>>     attrs@entry=..., buf=buf@entry=0x7f8ce6300028, len=<optimized out>, is_write=<optimized out>) at ../softmmu/physmem.c:2924
>> 21 0x0000558f52dced09 in kvm_cpu_exec (cpu=cpu@entry=0x558f55c2da60) at ../accel/kvm/kvm-all.c:2903
>> 22 0x0000558f52dcfabd in kvm_vcpu_thread_fn (arg=arg@entry=0x558f55c2da60) at ../accel/kvm/kvm-accel-ops.c:49
>> 23 0x0000558f52f9f04a in qemu_thread_start (args=<optimized out>) at ../util/qemu-thread-posix.c:556
>> 24 0x00007f8ce4392ea5 in start_thread () at /lib64/libpthread.so.0
>> 25 0x00007f8ce40bb9fd in clone () at /lib64/libc.so.6
>>
>> Fixes: 4023784 ("vhost-vdpa: multiqueue support")
>> Cc: Jason Wang <jasowang@redhat.com>
>> Signed-off-by: Si-Wei Liu <si-wei.liu@oracle.com>
>> ---
>>   hw/virtio/virtio-bus.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/hw/virtio/virtio-bus.c b/hw/virtio/virtio-bus.c
>> index 0f69d1c..3159b58 100644
>> --- a/hw/virtio/virtio-bus.c
>> +++ b/hw/virtio/virtio-bus.c
>> @@ -311,7 +311,8 @@ void virtio_bus_cleanup_host_notifier(VirtioBusState *bus, int n)
>>       /* Test and clear notifier after disabling event,
>>        * in case poll callback didn't have time to run.
>>        */
>> -    virtio_queue_host_notifier_read(notifier);
>> +    if (!vdev->disable_ioeventfd_handler)
>> +        virtio_queue_host_notifier_read(notifier);
>>       event_notifier_cleanup(notifier);
>>   }
>>
>> --
>> 1.8.3.1
>>
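
For context on the hunk quoted above, the complete function with the
change applied would read roughly as below. This is a minimal sketch:
the virtio_bus_get_device() lookup reflects the existing code, but the
disable_ioeventfd_handler field comes from an earlier patch in this
series and is assumed here since it is not visible in the hunk; braces
are added per QEMU coding style.

/* Sketch of hw/virtio/virtio-bus.c with the proposed guard applied;
 * disable_ioeventfd_handler is the flag introduced earlier in this
 * series (assumed, not shown in this hunk).
 */
void virtio_bus_cleanup_host_notifier(VirtioBusState *bus, int n)
{
    VirtIODevice *vdev = virtio_bus_get_device(bus);
    VirtQueue *vq = virtio_get_queue(vdev, n);
    EventNotifier *notifier = virtio_queue_get_host_notifier(vq);

    /* Test and clear notifier after disabling event,
     * in case poll callback didn't have time to run.
     */
    if (!vdev->disable_ioeventfd_handler) {
        virtio_queue_host_notifier_read(notifier);
    }
    event_notifier_cleanup(notifier);
}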


