From: Cornelia Huck <cohuck@redhat.com>
To: "Carlos López" <clopez@suse.de>, qemu-devel@nongnu.org
Cc: "Carlos López" <clopez@suse.de>,
"Michael S. Tsirkin" <mst@redhat.com>,
"Halil Pasic" <pasic@linux.ibm.com>,
"Eric Farman" <farman@linux.ibm.com>,
"Richard Henderson" <richard.henderson@linaro.org>,
"David Hildenbrand" <david@redhat.com>,
"Ilya Leoshkevich" <iii@linux.ibm.com>,
"Christian Borntraeger" <borntraeger@linux.ibm.com>,
"Thomas Huth" <thuth@redhat.com>,
"open list:virtio-ccw" <qemu-s390x@nongnu.org>
Subject: Re: [PATCH v2] virtio: refresh vring region cache after updating a virtqueue size
Date: Wed, 22 Mar 2023 10:52:31 +0100
Message-ID: <87y1npglk0.fsf@redhat.com>
In-Reply-To: <20230317002749.27379-1-clopez@suse.de>
On Fri, Mar 17 2023, Carlos López <clopez@suse.de> wrote:
> When a virtqueue size is changed by the guest via
> virtio_queue_set_num(), its region cache is not automatically updated.
> If the size was increased, this could lead to accessing the cache out
> of bounds. For example, in vring_get_used_event():
>
> static inline uint16_t vring_get_used_event(VirtQueue *vq)
> {
>     return vring_avail_ring(vq, vq->vring.num);
> }
>
> static inline uint16_t vring_avail_ring(VirtQueue *vq, int i)
> {
>     VRingMemoryRegionCaches *caches = vring_get_region_caches(vq);
>     hwaddr pa = offsetof(VRingAvail, ring[i]);
>
>     if (!caches) {
>         return 0;
>     }
>
>     return virtio_lduw_phys_cached(vq->vdev, &caches->avail, pa);
> }
>
> The offset computed for ring[vq->vring.num] will then lie beyond
> caches->avail.len, which triggers a failed assertion down the call
> path of virtio_lduw_phys_cached().
>
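As an aside, the failure mode is easy to model outside QEMU. Below is a
minimal standalone sketch; the types and names are made up for
illustration (this is not the real MemoryRegionCache API, and the
actual assertion lives in the cached load/store helpers), but the
bounds check behaves the same way:

  #include <assert.h>
  #include <stddef.h>
  #include <stdint.h>

  /* Toy stand-in for the avail ring's region cache. */
  struct toy_cache { size_t len; };  /* bytes covered by the cache */

  /* Models a cached load: QEMU's cached accessors assert that the
   * access stays within the cached window, roughly like this. */
  static uint16_t toy_lduw_cached(struct toy_cache *c, size_t pa)
  {
      assert(pa + sizeof(uint16_t) <= c->len);  /* fires if stale */
      return 0;  /* actual load elided */
  }

  int main(void)
  {
      unsigned num = 128;
      /* Cache sized for 128 entries: flags + idx + ring[128]. */
      struct toy_cache avail = { .len = 4 + 2 * 128 };

      num = 256;  /* guest enlarges the queue; cache not refreshed */

      /* vring_get_used_event() reads ring[num], i.e. offset
       * 4 + 2 * num, now past the cached window: the assert trips. */
      return toy_lduw_cached(&avail, 4 + 2 * num);
  }

Compile and run this and the assertion aborts at the marked line,
mirroring the crash described above.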
> Fix this by calling virtio_init_region_cache() after
> virtio_queue_set_num() if we are not already calling
> virtio_queue_set_rings(). In the legacy path this is already done by
> virtio_queue_update_rings().
>
> Signed-off-by: Carlos López <clopez@suse.de>
> ---
> v2: use virtio_init_region_cache() instead of
> virtio_queue_update_rings() in the path for modern devices.
>
>  hw/s390x/virtio-ccw.c      | 1 +
>  hw/virtio/virtio-mmio.c    | 1 +
>  hw/virtio/virtio-pci.c     | 1 +
>  hw/virtio/virtio.c         | 2 +-
>  include/hw/virtio/virtio.h | 1 +
>  5 files changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/hw/s390x/virtio-ccw.c b/hw/s390x/virtio-ccw.c
> index e33e5207ab..f44de1a8c1 100644
> --- a/hw/s390x/virtio-ccw.c
> +++ b/hw/s390x/virtio-ccw.c
> @@ -237,6 +237,7 @@ static int virtio_ccw_set_vqs(SubchDev *sch, VqInfoBlock *info,
>              return -EINVAL;
>          }
>          virtio_queue_set_num(vdev, index, num);
> +        virtio_init_region_cache(vdev, index);
Hmm... this is not wrong, but looking at it again, I see that the guest
has no way to change num after our last call to
virtio_init_region_cache() (while setting up the queue addresses). IOW,
this introduces an extra round trip that is not really needed.
>      } else if (virtio_queue_get_num(vdev, index) > num) {
>          /* Fail if we don't have a big enough queue. */
>          return -EINVAL;
> diff --git a/hw/virtio/virtio-mmio.c b/hw/virtio/virtio-mmio.c
> index 23ba625eb6..c2c6d85475 100644
> --- a/hw/virtio/virtio-mmio.c
> +++ b/hw/virtio/virtio-mmio.c
> @@ -354,6 +354,7 @@ static void virtio_mmio_write(void *opaque, hwaddr offset, uint64_t value,
>          if (proxy->legacy) {
>              virtio_queue_update_rings(vdev, vdev->queue_sel);
>          } else {
> +            virtio_init_region_cache(vdev, vdev->queue_sel);
>              proxy->vqs[vdev->queue_sel].num = value;
>          }
>          break;
> diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
> index 247325c193..02fb84a8fa 100644
> --- a/hw/virtio/virtio-pci.c
> +++ b/hw/virtio/virtio-pci.c
> @@ -1554,6 +1554,7 @@ static void virtio_pci_common_write(void *opaque, hwaddr addr,
>          proxy->vqs[vdev->queue_sel].num = val;
>          virtio_queue_set_num(vdev, vdev->queue_sel,
>                               proxy->vqs[vdev->queue_sel].num);
> +        virtio_init_region_cache(vdev, vdev->queue_sel);
>          break;
>      case VIRTIO_PCI_COMMON_Q_MSIX:
>          vector = virtio_queue_vector(vdev, vdev->queue_sel);
OTOH, all other transports need this call, as setting the addresses and
setting num are two distinct operations. So I think we have two options:
- apply your patch, and accept that we have the extra round trip for ccw
  (which should not really be an issue, as we're processing a channel
  command anyway)
- leave it out for ccw and add a comment explaining why it isn't needed
(I think I'd prefer to just go ahead with your patch.)
Question (mostly for the other ccw folks): Do you think it is a problem
that ccw sets up queue addresses and size via one command, while pci and
mmio set addresses and size independently? I guess not; if anything, not
setting them in one go may lead to issues like the one this patch is
fixing.
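To make the ordering difference concrete, here is a toy model (an
illustrative sketch only; the helper names are invented and this is not
QEMU code): for pci and mmio the size write is a standalone guest
operation, so the cache has to be refreshed right there, while for ccw
the same command goes on to set the ring addresses, which rebuilds the
cache anyway.

  #include <assert.h>
  #include <stddef.h>

  struct toy_dev { unsigned num; size_t cache_len; };

  static void set_num(struct toy_dev *d, unsigned n) { d->num = n; }

  /* Rebuild the cache for the current ring size. */
  static void refresh_cache(struct toy_dev *d)
  {
      d->cache_len = 4 + 2UL * d->num;
  }

  /* Setting the ring addresses rebuilds the cache as a side effect. */
  static void set_ring_addrs(struct toy_dev *d) { refresh_cache(d); }

  /* Any vring access is bounds-checked against the cached length. */
  static void access_ring_end(struct toy_dev *d)
  {
      assert(4 + 2UL * d->num <= d->cache_len);  /* stale cache aborts */
  }

  int main(void)
  {
      struct toy_dev d = { .num = 128 };
      set_ring_addrs(&d);    /* initial setup */

      /* pci/mmio: the size write stands alone, so refresh right here
       * (the fix); without the refresh_cache() call, the next access
       * would abort on the stale length. */
      set_num(&d, 256);
      refresh_cache(&d);
      access_ring_end(&d);

      /* ccw: the same channel command also sets the addresses, which
       * rebuilds the cache anyway; hence the extra round trip above. */
      set_num(&d, 64);
      set_ring_addrs(&d);
      access_ring_end(&d);
      return 0;
  }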