From: "Philippe Mathieu-Daudé" <philmd@redhat.com>
To: qemu-devel@nongnu.org
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>, Li Qiang <liq3ea@gmail.com>,
Qiuhao Li <Qiuhao.Li@outlook.com>,
Alexander Bulekov <alxndr@bu.edu>,
Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>,
Cheolwoo Myung <cwmyung@snu.ac.kr>
Subject: Re: [RFC PATCH] hw/virtio: Do not access vring cache if queue is not ready
Date: Wed, 25 Aug 2021 22:36:34 +0200 [thread overview]
Message-ID: <9ea6cdbd-4ad0-f10c-5229-a68e31fa90c2@redhat.com> (raw)
In-Reply-To: <20210825200953.1684541-1-philmd@redhat.com>
On 8/25/21 10:09 PM, Philippe Mathieu-Daudé wrote:
> Do not attempt to access the vring MemoryRegion cache if
> the queue is not ready (no vring available).
>
> This fixes issue #301:
>
> qemu-system-i386: include/exec/memory_ldst_cached.h.inc:77: void address_space_stw_le_cached(MemoryRegionCache *, hwaddr, uint16_t, MemTxAttrs, MemTxResult *):
> Assertion `addr < cache->len && 2 <= cache->len - addr' failed.
> Thread 1 "qemu-system-i38" received signal SIGABRT, Aborted.
> (gdb) bt
> #1 0x00007ffff4d1a8a4 in abort () at /lib64/libc.so.6
> #4 0x0000555558f2a122 in address_space_stw_le_cached (cache=0x61300010bb70, addr=516, val=0, attrs=..., result=0x0) at include/exec/memory_ldst_cached.h.inc:77
> #5 0x0000555558f2964c in stw_le_phys_cached (cache=0x61300010bb70, addr=516, val=0) at include/exec/memory_ldst_phys.h.inc:109
> #6 0x0000555558f28da8 in virtio_stw_phys_cached (vdev=0x62d00004e680, cache=0x61300010bb70, pa=516, value=0) at include/hw/virtio/virtio-access.h:196
> #7 0x0000555558f287dc in vring_set_avail_event (vq=0x7fff5d23e800, val=0) at hw/virtio/virtio.c:428
> #8 0x0000555558efabb7 in virtio_queue_split_set_notification (vq=0x7fff5d23e800, enable=1) at hw/virtio/virtio.c:437
> #9 0x0000555558ef9f3c in virtio_queue_set_notification (vq=0x7fff5d23e800, enable=1) at hw/virtio/virtio.c:498
> #10 0x0000555558c94786 in virtio_blk_handle_vq (s=0x62d00004e680, vq=0x7fff5d23e800) at hw/block/virtio-blk.c:795
> #11 0x0000555558cb96c2 in virtio_blk_data_plane_handle_output (vdev=0x62d00004e680, vq=0x7fff5d23e800) at hw/block/dataplane/virtio-blk.c:165
> #12 0x0000555558f35937 in virtio_queue_notify_aio_vq (vq=0x7fff5d23e800) at hw/virtio/virtio.c:2323
> #13 0x0000555558f264b3 in virtio_queue_host_notifier_aio_read (n=0x7fff5d23e87c) at hw/virtio/virtio.c:3532
> #14 0x0000555559a9cd3e in aio_dispatch_handler (ctx=0x613000063000, node=0x60d000008f40) at util/aio-posix.c:329
> #15 0x0000555559a963ae in aio_dispatch_handlers (ctx=0x613000063000) at util/aio-posix.c:372
> #16 0x0000555559a95f82 in aio_dispatch (ctx=0x613000063000) at util/aio-posix.c:382
>
> and #302:
>
> qemu-system-i386: include/exec/memory_ldst_cached.h.inc:30: uint16_t address_space_lduw_le_cached(MemoryRegionCache *, hwaddr, MemTxAttrs, MemTxResult *):
> Assertion `addr < cache->len && 2 <= cache->len - addr' failed.
> Thread 1 "qemu-system-i38" received signal SIGABRT, Aborted.
> 0x00007ffff4d312a2 in raise () from /lib64/libc.so.6
> (gdb) bt
> #1 0x00007ffff4d1a8a4 in abort () at /lib64/libc.so.6
> #4 0x0000555558f2b8ec in address_space_lduw_le_cached (cache=0x61300010a7c0, addr=134, attrs=..., result=0x0) at include/exec/memory_ldst_cached.h.inc:30
> #5 0x0000555558f2ac6b in lduw_le_phys_cached (cache=0x61300010a7c0, addr=134) at include/exec/memory_ldst_phys.h.inc:67
> #6 0x0000555558f2a3cd in virtio_lduw_phys_cached (vdev=0x62d00003a680, cache=0x61300010a7c0, pa=134) at include/hw/virtio/virtio-access.h:166
> #7 0x0000555558f300ea in vring_avail_ring (vq=0x7fffdd55d8a0, i=65) at hw/virtio/virtio.c:326
> #8 0x0000555558f33b10 in vring_get_used_event (vq=0x7fffdd55d8a0) at hw/virtio/virtio.c:332
> #9 0x0000555558f33677 in virtio_split_should_notify (vdev=0x62d00003a680, vq=0x7fffdd55d8a0) at hw/virtio/virtio.c:2471
> #10 0x0000555558f1859f in virtio_should_notify (vdev=0x62d00003a680, vq=0x7fffdd55d8a0) at hw/virtio/virtio.c:2523
> #11 0x0000555558f188cc in virtio_notify (vdev=0x62d00003a680, vq=0x7fffdd55d8a0) at hw/virtio/virtio.c:2565
> #12 0x0000555557c2bd52 in virtio_input_handle_sts (vdev=0x62d00003a680, vq=0x7fffdd55d8a0) at hw/input/virtio-input.c:100
> #13 0x0000555558f16df7 in virtio_queue_notify (vdev=0x62d00003a680, n=1) at hw/virtio/virtio.c:2363
> #14 0x00005555583f45c0 in virtio_pci_notify_write (opaque=0x62d000032400, addr=7, val=0, size=1) at hw/virtio/virtio-pci.c:1369
> #15 0x0000555558b80b04 in memory_region_write_accessor (mr=0x62d000033190, addr=7, value=0x7fffffff8eb0, size=1, shift=0, mask=255, attrs=...) at softmmu/memory.c:492
>
> Reported-by: Cheolwoo Myung <cwmyung@snu.ac.kr>
> Reported-by: Qiuhao Li <Qiuhao.Li@outlook.com>
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/301
> BugLink: https://bugs.launchpad.net/qemu/+bug/1910941
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/302
> BugLink: https://bugs.launchpad.net/qemu/+bug/1913510
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> It might be more readable to use virtio_queue_ready().
>
> RFC because I am not familiar with this hot-path code;
> I simply looked at the backtraces.
> ---
> hw/virtio/virtio.c | 82 +++++++++++++++++++++++++++++++++++++---------
> 1 file changed, 66 insertions(+), 16 deletions(-)
>
> diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
> index 874377f37a7..be1ec9e05ef 100644
> --- a/hw/virtio/virtio.c
> +++ b/hw/virtio/virtio.c
> @@ -289,26 +289,38 @@ static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
> /* Called within rcu_read_lock(). */
> static inline uint16_t vring_avail_flags(VirtQueue *vq)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingAvail, flags);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
>
> + if (unlikely(!vq->vring.avail)) {
> + return 0;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return 0;
> }
>
> + pa = offsetof(VRingAvail, flags);
> return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> }
>
> /* Called within rcu_read_lock(). */
> static inline uint16_t vring_avail_idx(VirtQueue *vq)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingAvail, idx);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
>
> + if (unlikely(!vq->vring.avail)) {
> + return 0;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return 0;
> }
>
> + pa = offsetof(VRingAvail, idx);
> vq->shadow_avail_idx = virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> return vq->shadow_avail_idx;
> }
> @@ -316,13 +328,19 @@ static inline uint16_t vring_avail_idx(VirtQueue *vq)
> /* Called within rcu_read_lock(). */
> static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingAvail, ring[i]);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
>
> + if (unlikely(!vq->vring.avail)) {
> + return 0;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return 0;
> }
>
> + pa = offsetof(VRingAvail, ring[i]);
> return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> }
Self-NACK.
While the first part may be OK, what follows is wrong (the used-ring
helpers check vring.avail instead of vring.used), so please disregard
this patch.
> @@ -336,13 +354,19 @@ static inline uint16_t vring_get_used_event(VirtQueue *vq)
> static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
> int i)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingUsed, ring[i]);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
>
> + if (unlikely(!vq->vring.avail)) {
> + return;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return;
> }
>
> + pa = offsetof(VRingUsed, ring[i]);
> virtio_tswap32s(vq->vdev, &uelem->id);
> virtio_tswap32s(vq->vdev, &uelem->len);
> address_space_write_cached(&caches->used, pa, uelem, sizeof(VRingUsedElem));
> @@ -352,23 +376,35 @@ static inline void vring_used_write(VirtQueue *vq, VRingUsedElem *uelem,
> /* Called within rcu_read_lock(). */
> static uint16_t vring_used_idx(VirtQueue *vq)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingUsed, idx);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
>
> + if (unlikely(!vq->vring.avail)) {
> + return 0;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return 0;
> }
>
> + pa = offsetof(VRingUsed, idx);
> return virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> }
>
> /* Called within rcu_read_lock(). */
> static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - hwaddr pa = offsetof(VRingUsed, idx);
> + VRingMemoryRegionCaches *caches;
>
> + if (unlikely(!vq->vring.avail)) {
> + return;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (caches) {
> + hwaddr pa = offsetof(VRingUsed, idx);
> +
> virtio_stw_phys_cached(vq->vdev, &caches->used, pa, val);
> address_space_cache_invalidate(&caches->used, pa, sizeof(val));
> }
> @@ -379,17 +415,22 @@ static inline void vring_used_idx_set(VirtQueue *vq, uint16_t val)
> /* Called within rcu_read_lock(). */
> static inline void vring_used_flags_set_bit(VirtQueue *vq, int mask)
> {
> - VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
> - VirtIODevice *vdev = vq->vdev;
> - hwaddr pa = offsetof(VRingUsed, flags);
> + VRingMemoryRegionCaches *caches;
> + hwaddr pa;
> uint16_t flags;
>
> + if (unlikely(!vq->vring.avail)) {
> + return;
> + }
> +
> + caches = vring_get_region_caches(vq);
> if (!caches) {
> return;
> }
>
> + pa = offsetof(VRingUsed, flags);
> flags = virtio_lduw_phys_cached(vq->vdev, &caches->used, pa);
> - virtio_stw_phys_cached(vdev, &caches->used, pa, flags | mask);
> + virtio_stw_phys_cached(vq->vdev, &caches->used, pa, flags | mask);
> address_space_cache_invalidate(&caches->used, pa, sizeof(flags));
> }
>
> @@ -401,6 +442,10 @@ static inline void vring_used_flags_unset_bit(VirtQueue *vq, int mask)
> hwaddr pa = offsetof(VRingUsed, flags);
> uint16_t flags;
>
> + if (unlikely(!vq->vring.avail)) {
> + return;
> + }
> +
> if (!caches) {
> return;
> }
> @@ -415,10 +460,15 @@ static inline void vring_set_avail_event(VirtQueue *vq, uint16_t val)
> {
> VRingMemoryRegionCaches *caches;
> hwaddr pa;
> +
> if (!vq->notification) {
> return;
> }
>
> + if (unlikely(!vq->vring.avail)) {
> + return;
> + }
> +
> caches = vring_get_region_caches(vq);
> if (!caches) {
> return;
>
Thread overview: 4+ messages
2021-08-25 20:09 [RFC PATCH] hw/virtio: Do not access vring cache if queue is not ready Philippe Mathieu-Daudé
2021-08-25 20:36 ` Philippe Mathieu-Daudé [this message]
2021-08-25 22:50 ` Philippe Mathieu-Daudé
2021-08-26 7:45 ` Stefan Hajnoczi