qemu-devel.nongnu.org archive mirror
* QMP (without OOB) function running in thread different from the main thread as part of aio_poll
@ 2023-04-19 14:09 Fiona Ebner
  2023-04-19 16:59 ` Paolo Bonzini
  2023-04-20  6:11 ` Markus Armbruster
  0 siblings, 2 replies; 21+ messages in thread
From: Fiona Ebner @ 2023-04-19 14:09 UTC (permalink / raw)
  To: QEMU Developers
  Cc: open list:Network Block Dev..., michael.roth, Markus Armbruster,
	Fam Zheng, Stefan Hajnoczi, Paolo Bonzini, Thomas Lamprecht

Hi,
while debugging a completely different issue, I was surprised to see
do_qmp_dispatch_bh being run in a vCPU thread. I was under the
impression that QMP functions are supposed to be executed in the main
thread. Is that wrong?

I managed to reproduce the scenario with a build of upstream QEMU
v8.0.0-rc4 once more (again with GDB attached), but the reproducer is
not simple and it's racy. The backtrace is below[0].

My attempt at explaining the situation is:
1. In qapi/qmp-dispatch.c, the main thread schedules do_qmp_dispatch_bh,
because the coroutine context doesn't match.
2. The host OS switches to the vCPU thread.
3. Something in the vCPU thread triggers aio_poll with the main thread's
AioContext (in my case, a write to a pflash drive).
4. do_qmp_dispatch_bh is executed in the vCPU thread.

Could this be an artifact of running with a debugger?

I CC'ed the maintainers of util/aio-posix.c and qapi/qmp-dispatch.c
hoping that is not too far off.

Best Regards,
Fiona

[0]:
> Thread 5 "CPU 0/KVM" hit Breakpoint 2, do_qmp_dispatch_bh (opaque=0x7ffff2e96e30) at ../qapi/qmp-dispatch.c:124
> 124	    QmpDispatchBH *data = opaque;
> #0  do_qmp_dispatch_bh (opaque=0x7ffff2e96e30) at ../qapi/qmp-dispatch.c:124
> #1  0x000055555604f50a in aio_bh_call (bh=0x7fffe4005430) at ../util/async.c:155
> #2  0x000055555604f615 in aio_bh_poll (ctx=0x555556b3e730) at ../util/async.c:184
> #3  0x00005555560337b8 in aio_poll (ctx=0x555556b3e730, blocking=true) at ../util/aio-posix.c:721
> #4  0x0000555555e8cf1c in bdrv_poll_co (s=0x7ffff1a45eb0) at /home/febner/repos/qemu/block/block-gen.h:43
> #5  0x0000555555e8fc3a in blk_pwrite (blk=0x555556daf840, offset=159232, bytes=512, buf=0x7ffee3226e00, flags=0) at block/block-gen.c:1650
> #6  0x0000555555908078 in pflash_update (pfl=0x555556d92300, offset=159232, size=1) at ../hw/block/pflash_cfi01.c:394
> #7  0x0000555555908749 in pflash_write (pfl=0x555556d92300, offset=159706, value=127, width=1, be=0) at ../hw/block/pflash_cfi01.c:522
> #8  0x0000555555908cda in pflash_mem_write_with_attrs (opaque=0x555556d92300, addr=159706, value=127, len=1, attrs=...) at ../hw/block/pflash_cfi01.c:681
> #9  0x0000555555d8936a in memory_region_write_with_attrs_accessor (mr=0x555556d926c0, addr=159706, value=0x7ffff1a460c8, size=1, shift=0, mask=255, attrs=...) at ../softmmu/memory.c:514
> #10 0x0000555555d894a9 in access_with_adjusted_size (addr=159706, value=0x7ffff1a460c8, size=1, access_size_min=1, access_size_max=4, access_fn=0x555555d89270 <memory_region_write_with_attrs_accessor>, mr=0x555556d926c0, attrs=...) at ../softmmu/memory.c:555
> #11 0x0000555555d8c5de in memory_region_dispatch_write (mr=0x555556d926c0, addr=159706, data=127, op=MO_8, attrs=...) at ../softmmu/memory.c:1522
> #12 0x0000555555d996f4 in flatview_write_continue (fv=0x7fffe843cc60, addr=4290932698, attrs=..., ptr=0x7ffff7fc5028, len=1, addr1=159706, l=1, mr=0x555556d926c0) at ../softmmu/physmem.c:2641
> #13 0x0000555555d99857 in flatview_write (fv=0x7fffe843cc60, addr=4290932698, attrs=..., buf=0x7ffff7fc5028, len=1) at ../softmmu/physmem.c:2683
> #14 0x0000555555d99c07 in address_space_write (as=0x555556a01b20 <address_space_memory>, addr=4290932698, attrs=..., buf=0x7ffff7fc5028, len=1) at ../softmmu/physmem.c:2779
> #15 0x0000555555d99c74 in address_space_rw (as=0x555556a01b20 <address_space_memory>, addr=4290932698, attrs=..., buf=0x7ffff7fc5028, len=1, is_write=true) at ../softmmu/physmem.c:2789
> #16 0x0000555555e2da88 in kvm_cpu_exec (cpu=0x555556ea10d0) at ../accel/kvm/kvm-all.c:2989
> #17 0x0000555555e3079a in kvm_vcpu_thread_fn (arg=0x555556ea10d0) at ../accel/kvm/kvm-accel-ops.c:51
> #18 0x000055555603825f in qemu_thread_start (args=0x555556b3c3b0) at ../util/qemu-thread-posix.c:541
> #19 0x00007ffff7125ea7 in start_thread (arg=<optimized out>) at pthread_create.c:477
> #20 0x00007ffff62c4a2f in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95





Thread overview: 21+ messages
2023-04-19 14:09 QMP (without OOB) function running in thread different from the main thread as part of aio_poll Fiona Ebner
2023-04-19 16:59 ` Paolo Bonzini
2023-04-20  6:11 ` Markus Armbruster
2023-04-20  6:55   ` Paolo Bonzini
2023-04-26 14:31     ` Fiona Ebner
2023-04-27 11:03       ` Kevin Wolf
2023-04-27 12:27         ` Fiona Ebner
2023-04-27 14:36           ` Juan Quintela
2023-04-27 14:56             ` Peter Xu
2023-04-28  7:53               ` Fiona Ebner
2023-04-28  7:23             ` Fiona Ebner
2023-04-28  7:47               ` Juan Quintela
2023-04-28  8:30                 ` Kevin Wolf
2023-04-28  8:38                   ` Juan Quintela
2023-04-28 12:22                     ` Kevin Wolf
2023-04-28 16:54                       ` Juan Quintela
2023-05-02 10:03                         ` Fiona Ebner
2023-05-02 10:25                           ` Fiona Ebner
2023-05-02 10:35                             ` Juan Quintela
2023-05-02 12:49                               ` Fiona Ebner
2023-05-02 10:30                           ` Juan Quintela
