From: "Alex Bennée" <alex.bennee@linaro.org>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: qemu-devel@nongnu.org, slp@redhat.com, mst@redhat.com,
	marcandre.lureau@redhat.com, stefanha@redhat.com,
	mathieu.poirier@linaro.org, viresh.kumar@linaro.org,
	sgarzare@redhat.com, Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH for 7.2-rc? v2 0/5] continuing efforts to fix vhost-user issues
Date: Sat, 26 Nov 2022 14:12:58 +0000
Message-ID: <87bkot4x4l.fsf@linaro.org>
In-Reply-To: <CAJSP0QXaRrM3NGttNytsOZigF-SwiX4_H-j_6KHxS9VjOrPFkg@mail.gmail.com>


Stefan Hajnoczi <stefanha@gmail.com> writes:

> On Sat, 26 Nov 2022 at 04:45, Alex Bennée <alex.bennee@linaro.org> wrote:
>>
>>
>> Alex Bennée <alex.bennee@linaro.org> writes:
>>
>> > Alex Bennée <alex.bennee@linaro.org> writes:
>> >
>> >> Hi,
>> >>
>> > <snip>
>> >> I can replicate some of the other failures I've been seeing in CI by
>> >> running:
>> >>
>> >>   ../../meson/meson.py test --repeat 10 --print-errorlogs qtest-arm/qos-test
>> >>
>> >> however this seems to run everything in parallel and so may be
>> >> better at exposing race conditions. Perhaps the CI system makes those
>> >> races easier to hit? Unfortunately I've not been able to figure out
>> >> exactly how things go wrong in the failure case.
>> >>
>> > <snip>
>> >
>> > There is a circular call: we are in vu_gpio_stop(), which triggers a
>> > write to the vhost-user socket, which is where we catch the
>> > disconnect event:
>> >
>> >   #0  vhost_dev_is_started (hdev=0x557adf80d878) at /home/alex/lsrc/qemu.git/include/hw/virtio/vhost.h:199
>> >   #1  0x0000557adbe0518a in vu_gpio_stop (vdev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:138
>> >   #2  0x0000557adbe04d56 in vu_gpio_disconnect (dev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:255
>> >   #3  0x0000557adbe049bb in vu_gpio_event (opaque=0x557adf80d640, event=CHR_EVENT_CLOSED) at ../../hw/virtio/vhost-user-gpio.c:274
>>
>> I suspect the best choice here is to schedule the cleanup for a later
>> date. Should I use the aio_bh one-shots for this, or maybe an RCU
>> cleanup event?
>>
>> Paolo, any suggestions?
>>
>> >   #4  0x0000557adc0539ef in chr_be_event (s=0x557adea51f10, event=CHR_EVENT_CLOSED) at ../../chardev/char.c:61
>> >   #5  0x0000557adc0506aa in qemu_chr_be_event (s=0x557adea51f10, event=CHR_EVENT_CLOSED) at ../../chardev/char.c:81
>> >   #6  0x0000557adc04f666 in tcp_chr_disconnect_locked (chr=0x557adea51f10) at ../../chardev/char-socket.c:470
>> >   #7  0x0000557adc04c81a in tcp_chr_write (chr=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20) at ../../chardev/char-socket.c:129
>
> Does this mean the backend closed the connection before receiving all
> the vhost-user protocol messages sent by QEMU?
>
> This looks like a backend bug. It prevents QEMU's vhost-user client
> from cleanly stopping the virtqueue (vhost_virtqueue_stop()).

Well, the backend in this case is the qtest framework, so it is not the
world's most complete implementation.

> QEMU is still broken if it cannot handle disconnect at any time. Maybe
> a simple solution for that is to check for reentrancy (either by
> checking an existing variable or adding a new one to prevent
> vu_gpio_stop() from calling itself).
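
Something like this minimal sketch is, I think, what you mean (the
'stopping' field is hypothetical - it isn't in the current code):

    /*
     * Hypothetical guard against re-entering vu_gpio_stop() when the
     * CHR_EVENT_CLOSED path fires while we are already mid-teardown.
     */
    static void vu_gpio_stop(VirtIODevice *vdev)
    {
        VHostUserGPIO *gpio = VHOST_USER_GPIO(vdev);

        if (gpio->stopping) {
            /* vhost_user_write() hit the disconnect and re-entered us */
            return;
        }
        gpio->stopping = true;

        /* ... existing vhost_dev_stop()/notifier teardown ... */

        gpio->stopping = false;
    }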

vhost-user-blk introduced an additional flag:

    /*
     * There are at least two steps of initialization of the
     * vhost-user device. The first is a "connect" step and
     * second is a "start" step. Make a separation between
     * those initialization phases by using two fields.
     */
    /* vhost_user_blk_connect/vhost_user_blk_disconnect */
    bool connected;
    /* vhost_user_blk_start/vhost_user_blk_stop */
    bool started_vu;

but that in itself is not enough. If you look at the various handlers
for CHR_EVENT_CLOSED you'll see that some schedule the shutdown with an
aio bottom half and some don't even bother (so they will probably break
in the same way).
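
For comparison, vhost-user-blk's CHR_EVENT_CLOSED handling defers the
teardown to a bottom half rather than doing it inline - roughly the
following shape (paraphrased from memory rather than quoted verbatim):

    case CHR_EVENT_CLOSED:
        /*
         * A close can arrive in the middle of a read/write while the
         * vhost code still assumes the vhost_dev is set up, so delay
         * the stop and cleanup to a bottom half.
         */
        if (!runstate_check(RUN_STATE_SHUTDOWN)) {
            AioContext *ctx = qemu_get_current_aio_context();

            /* stop listening for further chardev events... */
            qemu_chr_fe_set_handlers(&s->chardev, NULL, NULL, NULL, NULL,
                                     NULL, NULL, false);
            /* ...and do the actual disconnect from a one-shot BH */
            aio_bh_schedule_oneshot(ctx, vhost_user_blk_chr_closed_bh,
                                    opaque);
        }
        break;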

Rather than have a mish-mash of solutions, maybe we should introduce a
new vhost function - vhost_user_async_close() - which takes care of the
scheduling and wraps the deferred work in a check for a valid vhost
structure, in case it gets shut down in the meantime?
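
To make that concrete, here is a speculative sketch of what the helper
could look like (the typedef, struct and validity check are all
illustrative, not existing code):

    typedef void (*vu_async_close_fn)(DeviceState *dev);

    typedef struct {
        DeviceState *dev;
        struct vhost_dev *vhost;
        vu_async_close_fn cb;
    } VhostAsyncCallback;

    static void vhost_user_async_close_bh(void *opaque)
    {
        VhostAsyncCallback *data = opaque;

        /*
         * Another path may have completed the cleanup while the BH was
         * pending; only run the callback if the vhost_dev is still live.
         */
        if (data->vhost->vdev) {
            data->cb(data->dev);
        }
        g_free(data);
    }

    void vhost_user_async_close(DeviceState *d, CharBackend *chardev,
                                struct vhost_dev *vhost,
                                vu_async_close_fn cb)
    {
        VhostAsyncCallback *data = g_new0(VhostAsyncCallback, 1);

        data->dev = d;
        data->vhost = vhost;
        data->cb = cb;

        /* disable further chardev events and defer the real work */
        qemu_chr_fe_set_handlers(chardev, NULL, NULL, NULL, NULL, NULL,
                                 NULL, false);
        aio_bh_schedule_oneshot(qemu_get_current_aio_context(),
                                vhost_user_async_close_bh, data);
    }

Each device's CHR_EVENT_CLOSED handler would then collapse to a single
vhost_user_async_close() call with a device-specific callback.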

>
>> >   #8  0x0000557adc050999 in qemu_chr_write_buffer (s=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20, offset=0x7ffe8588cbe4, write_all=true) at ../../chardev/char.c:121
>> >   #9  0x0000557adc0507c7 in qemu_chr_write (s=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20, write_all=true) at ../../chardev/char.c:173
>> >   #10 0x0000557adc046f3a in qemu_chr_fe_write_all (be=0x557adf80d830, buf=0x7ffe8588cce0 "\v", len=20) at ../../chardev/char-fe.c:53
>> >   #11 0x0000557adbddc02f in vhost_user_write (dev=0x557adf80d878, msg=0x7ffe8588cce0, fds=0x0, fd_num=0) at ../../hw/virtio/vhost-user.c:490
>> >   #12 0x0000557adbddd48f in vhost_user_get_vring_base (dev=0x557adf80d878, ring=0x7ffe8588d000) at ../../hw/virtio/vhost-user.c:1260
>> >   #13 0x0000557adbdd4bd6 in vhost_virtqueue_stop (dev=0x557adf80d878, vdev=0x557adf80d640, vq=0x557adf843570, idx=0) at ../../hw/virtio/vhost.c:1220
>> >   #14 0x0000557adbdd7eda in vhost_dev_stop (hdev=0x557adf80d878, vdev=0x557adf80d640, vrings=false) at ../../hw/virtio/vhost.c:1916
>> >   #15 0x0000557adbe051a6 in vu_gpio_stop (vdev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:142
>> >   #16 0x0000557adbe04849 in vu_gpio_set_status (vdev=0x557adf80d640, status=15 '\017') at ../../hw/virtio/vhost-user-gpio.c:173
>> >   #17 0x0000557adbdc87ff in virtio_set_status (vdev=0x557adf80d640, val=15 '\017') at ../../hw/virtio/virtio.c:2442
>> >   #18 0x0000557adbdcbfa0 in virtio_vmstate_change (opaque=0x557adf80d640, running=false, state=RUN_STATE_SHUTDOWN) at ../../hw/virtio/virtio.c:3736
>> >   #19 0x0000557adb91ad27 in vm_state_notify (running=false, state=RUN_STATE_SHUTDOWN) at ../../softmmu/runstate.c:334
>> >   #20 0x0000557adb910e88 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=false) at ../../softmmu/cpus.c:262
>> >   #21 0x0000557adb910e30 in vm_shutdown () at ../../softmmu/cpus.c:280
>> >   #22 0x0000557adb91b9c3 in qemu_cleanup () at ../../softmmu/runstate.c:827
>> >   #23 0x0000557adb522975 in qemu_default_main () at ../../softmmu/main.c:38
>> >   #24 0x0000557adb5229a8 in main (argc=27, argv=0x7ffe8588d2f8) at ../../softmmu/main.c:48
>> >   (rr) p hdev->started
>> >   $9 = true
>> >   (rr) info thread
>> >     Id   Target Id                                Frame
>> >   * 1    Thread 2140414.2140414 (qemu-system-aar) vhost_dev_is_started (hdev=0x557adf80d878) at /home/alex/lsrc/qemu.git/include/hw/virtio/vhost.h:199
>> >     2    Thread 2140414.2140439 (qemu-system-aar) 0x0000000070000002 in syscall_traced ()
>> >     3    Thread 2140414.2140442 (qemu-system-aar) 0x0000000070000002 in syscall_traced ()
>> >     4    Thread 2140414.2140443 (qemu-system-aar) 0x0000000070000002 in syscall_traced ()
>> >
>> > In the middle of this we clear the vhost_dev with a memset:
>> >
>> >   Thread 1 hit Hardware watchpoint 2: *(unsigned int *) 0x557adf80da30
>> >
>> >   Old value = 2
>> >   New value = 0
>> >   __memset_avx2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:220
>> >   Download failed: Invalid argument.  Continuing without source file ./string/../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S.
>> >   220     ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S: No such file or directory.
>> >   (rr) bt
>> >   #0  __memset_avx2_unaligned_erms () at ../sysdeps/x86_64/multiarch/memset-vec-unaligned-erms.S:220
>> >   #1  0x0000557adbdd67f8 in vhost_dev_cleanup (hdev=0x557adf80d878) at ../../hw/virtio/vhost.c:1501
>> >   #2  0x0000557adbe04d68 in vu_gpio_disconnect (dev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:256
>> >   #3  0x0000557adbe049bb in vu_gpio_event (opaque=0x557adf80d640, event=CHR_EVENT_CLOSED) at ../../hw/virtio/vhost-user-gpio.c:274
>> >   #4  0x0000557adc0539ef in chr_be_event (s=0x557adea51f10, event=CHR_EVENT_CLOSED) at ../../chardev/char.c:61
>> >   #5  0x0000557adc0506aa in qemu_chr_be_event (s=0x557adea51f10, event=CHR_EVENT_CLOSED) at ../../chardev/char.c:81
>> >   #6  0x0000557adc04f666 in tcp_chr_disconnect_locked (chr=0x557adea51f10) at ../../chardev/char-socket.c:470
>> >   #7  0x0000557adc04c81a in tcp_chr_write (chr=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20) at ../../chardev/char-socket.c:129
>> >   #8  0x0000557adc050999 in qemu_chr_write_buffer (s=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20, offset=0x7ffe8588cbe4, write_all=true) at ../../chardev/char.c:121
>> >   #9  0x0000557adc0507c7 in qemu_chr_write (s=0x557adea51f10, buf=0x7ffe8588cce0 "\v", len=20, write_all=true) at ../../chardev/char.c:173
>> >   #10 0x0000557adc046f3a in qemu_chr_fe_write_all (be=0x557adf80d830, buf=0x7ffe8588cce0 "\v", len=20) at ../../chardev/char-fe.c:53
>> >   #11 0x0000557adbddc02f in vhost_user_write (dev=0x557adf80d878, msg=0x7ffe8588cce0, fds=0x0, fd_num=0) at ../../hw/virtio/vhost-user.c:490
>> >   #12 0x0000557adbddd48f in vhost_user_get_vring_base (dev=0x557adf80d878, ring=0x7ffe8588d000) at ../../hw/virtio/vhost-user.c:1260
>> >   #13 0x0000557adbdd4bd6 in vhost_virtqueue_stop (dev=0x557adf80d878, vdev=0x557adf80d640, vq=0x557adf843570, idx=0) at ../../hw/virtio/vhost.c:1220
>> >   #14 0x0000557adbdd7eda in vhost_dev_stop (hdev=0x557adf80d878, vdev=0x557adf80d640, vrings=false) at ../../hw/virtio/vhost.c:1916
>> >   #15 0x0000557adbe051a6 in vu_gpio_stop (vdev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:142
>> >   #16 0x0000557adbe04849 in vu_gpio_set_status (vdev=0x557adf80d640, status=15 '\017') at ../../hw/virtio/vhost-user-gpio.c:173
>> >   #17 0x0000557adbdc87ff in virtio_set_status (vdev=0x557adf80d640, val=15 '\017') at ../../hw/virtio/virtio.c:2442
>> >   #18 0x0000557adbdcbfa0 in virtio_vmstate_change (opaque=0x557adf80d640, running=false, state=RUN_STATE_SHUTDOWN) at ../../hw/virtio/virtio.c:3736
>> >   #19 0x0000557adb91ad27 in vm_state_notify (running=false, state=RUN_STATE_SHUTDOWN) at ../../softmmu/runstate.c:334
>> >   #20 0x0000557adb910e88 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=false) at ../../softmmu/cpus.c:262
>> >   #21 0x0000557adb910e30 in vm_shutdown () at ../../softmmu/cpus.c:280
>> >   #22 0x0000557adb91b9c3 in qemu_cleanup () at ../../softmmu/runstate.c:827
>> >   #23 0x0000557adb522975 in qemu_default_main () at ../../softmmu/main.c:38
>> >   #24 0x0000557adb5229a8 in main (argc=27, argv=0x7ffe8588d2f8) at ../../softmmu/main.c:48
>> >
>> > Before finally:
>> >
>> >   #0  __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
>> >   #1  0x00007f24dc269537 in __GI_abort () at abort.c:79
>> >   #2  0x00007f24dc26940f in __assert_fail_base (fmt=0x7f24dc3e16a8 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=0x557adc28d8f5 "assign || nvqs == proxy->nvqs_with_notifiers", file=0x557adc28d7ab "../../hw/virtio/virtio-pci.c", line=1029, function=<optimized out>) at assert.c:92
>> >   #3  0x00007f24dc278662 in __GI___assert_fail (assertion=0x557adc28d8f5 "assign || nvqs == proxy->nvqs_with_notifiers", file=0x557adc28d7ab "../../hw/virtio/virtio-pci.c", line=1029, function=0x557adc28d922 "int virtio_pci_set_guest_notifiers(DeviceState *, int, _Bool)") at assert.c:101
>> >   #4  0x0000557adb8e97f1 in virtio_pci_set_guest_notifiers (d=0x557adf805280, nvqs=0, assign=false) at ../../hw/virtio/virtio-pci.c:1029
>> >   #5  0x0000557adbe051c7 in vu_gpio_stop (vdev=0x557adf80d640) at ../../hw/virtio/vhost-user-gpio.c:144
>> >   #6  0x0000557adbe04849 in vu_gpio_set_status (vdev=0x557adf80d640, status=15 '\017') at ../../hw/virtio/vhost-user-gpio.c:173
>> >   #7  0x0000557adbdc87ff in virtio_set_status (vdev=0x557adf80d640, val=15 '\017') at ../../hw/virtio/virtio.c:2442
>> >   #8  0x0000557adbdcbfa0 in virtio_vmstate_change (opaque=0x557adf80d640, running=false, state=RUN_STATE_SHUTDOWN) at ../../hw/virtio/virtio.c:3736
>> >   #9  0x0000557adb91ad27 in vm_state_notify (running=false, state=RUN_STATE_SHUTDOWN) at ../../softmmu/runstate.c:334
>> >   #10 0x0000557adb910e88 in do_vm_stop (state=RUN_STATE_SHUTDOWN, send_stop=false) at ../../softmmu/cpus.c:262
>> >   #11 0x0000557adb910e30 in vm_shutdown () at ../../softmmu/cpus.c:280
>> >   #12 0x0000557adb91b9c3 in qemu_cleanup () at ../../softmmu/runstate.c:827
>> >   #13 0x0000557adb522975 in qemu_default_main () at ../../softmmu/main.c:38
>> >   #14 0x0000557adb5229a8 in main (argc=27, argv=0x7ffe8588d2f8) at ../../softmmu/main.c:48
>> >
>> > Because of course we've just done that on disconnect.
>> >
>> > Not sure what the cleanest way to avoid that is yet. Maybe it will be
>> > clearer on Monday morning.
>>
>>
>> --
>> Alex Bennée
>>


-- 
Alex Bennée

