From: Stefano Garzarella <sgarzare@redhat.com>
To: "Philippe Mathieu-Daudé" <philmd@redhat.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Jason Wang <jasowang@redhat.com>,
Cornelia Huck <cohuck@redhat.com>,
qemu-devel@nongnu.org, Stefan Hajnoczi <stefanha@redhat.com>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [RFC PATCH 3/3] hw/virtio: Have virtqueue_get_avail_bytes() pass caches arg to callees
Date: Wed, 1 Sep 2021 17:55:38 +0200
Message-ID: <20210901155538.vbtxakrtbjwon3pt@steredhat>
In-Reply-To: <20210826172658.2116840-4-philmd@redhat.com>
On Thu, Aug 26, 2021 at 07:26:58PM +0200, Philippe Mathieu-Daudé wrote:
>Both virtqueue_packed_get_avail_bytes() and
>virtqueue_split_get_avail_bytes() access the region cache, but so
>does their caller. Simplify by having virtqueue_get_avail_bytes()
>call both with the RCU read lock held, passing the caches as an
>argument.
>
>Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>---
>RFC because I'm not sure this is safe enough
It seems safe to me.
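As I read it, the caller now takes the RCU read lock before looking up
the caches and keeps it held across both helpers, so the caches pointer
cannot go away under them. Roughly (a simplified sketch from memory,
not the exact patched code; the descriptor-size check is elided):

    void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
                                   unsigned int *out_bytes,
                                   unsigned max_in_bytes,
                                   unsigned max_out_bytes)
    {
        VRingMemoryRegionCaches *caches;

        RCU_READ_LOCK_GUARD();   /* held until the end of the function */

        if (unlikely(!vq->vring.desc)) {
            goto err;
        }

        caches = vring_get_region_caches(vq);
        if (!caches) {
            goto err;
        }

        if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED)) {
            virtqueue_packed_get_avail_bytes(vq, in_bytes, out_bytes,
                                             max_in_bytes, max_out_bytes,
                                             caches);
        } else {
            virtqueue_split_get_avail_bytes(vq, in_bytes, out_bytes,
                                            max_in_bytes, max_out_bytes,
                                            caches);
        }
        return;

    err:
        if (in_bytes) {
            *in_bytes = 0;
        }
        if (out_bytes) {
            *out_bytes = 0;
        }
    }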
While reviewing I noticed that vring_get_region_caches() has a
/* Called within rcu_read_lock(). */ comment, but it seems to me that
we also call that function in places where we haven't acquired the
lock. That shouldn't be a problem, but should we remove that comment?
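For context, the function is (quoting from memory, so the exact wording
may differ) just an RCU-protected pointer read:

    /* Called within rcu_read_lock().  */
    static VRingMemoryRegionCaches *vring_get_region_caches(struct VirtQueue *vq)
    {
        return qatomic_rcu_read(&vq->vring.caches);
    }

Reading the pointer outside a read-side critical section is fine per
se; the comment only matters for how long the returned pointer may be
dereferenced before a concurrent cache update can free it.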
Thanks,
Stefano
>---
> hw/virtio/virtio.c | 29 ++++++++++++-----------------
> 1 file changed, 12 insertions(+), 17 deletions(-)
>
>diff --git a/hw/virtio/virtio.c b/hw/virtio/virtio.c
>index 3a1f6c520cb..8237693a567 100644
>--- a/hw/virtio/virtio.c
>+++ b/hw/virtio/virtio.c
>@@ -984,28 +984,23 @@ static int virtqueue_split_read_next_desc(VirtIODevice *vdev, VRingDesc *desc,
> return VIRTQUEUE_READ_DESC_MORE;
> }
>
>+/* Called within rcu_read_lock(). */
> static void virtqueue_split_get_avail_bytes(VirtQueue *vq,
> unsigned int *in_bytes, unsigned int *out_bytes,
>- unsigned max_in_bytes, unsigned max_out_bytes)
>+ unsigned max_in_bytes, unsigned max_out_bytes,
>+ VRingMemoryRegionCaches *caches)
> {
> VirtIODevice *vdev = vq->vdev;
> unsigned int max, idx;
> unsigned int total_bufs, in_total, out_total;
>- VRingMemoryRegionCaches *caches;
> MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
> int64_t len = 0;
> int rc;
>
>- RCU_READ_LOCK_GUARD();
>-
> idx = vq->last_avail_idx;
> total_bufs = in_total = out_total = 0;
>
> max = vq->vring.num;
>- caches = vring_get_region_caches(vq);
>- if (!caches) {
>- goto err;
>- }
>
> while ((rc = virtqueue_num_heads(vq, idx)) > 0) {
> MemoryRegionCache *desc_cache = &caches->desc;
>@@ -1124,32 +1119,28 @@ static int virtqueue_packed_read_next_desc(VirtQueue *vq,
> return VIRTQUEUE_READ_DESC_MORE;
> }
>
>+/* Called within rcu_read_lock(). */
> static void virtqueue_packed_get_avail_bytes(VirtQueue *vq,
> unsigned int *in_bytes,
> unsigned int *out_bytes,
> unsigned max_in_bytes,
>- unsigned max_out_bytes)
>+ unsigned max_out_bytes,
>+ VRingMemoryRegionCaches *caches)
> {
> VirtIODevice *vdev = vq->vdev;
> unsigned int max, idx;
> unsigned int total_bufs, in_total, out_total;
> MemoryRegionCache *desc_cache;
>- VRingMemoryRegionCaches *caches;
> MemoryRegionCache indirect_desc_cache = MEMORY_REGION_CACHE_INVALID;
> int64_t len = 0;
> VRingPackedDesc desc;
> bool wrap_counter;
>
>- RCU_READ_LOCK_GUARD();
> idx = vq->last_avail_idx;
> wrap_counter = vq->last_avail_wrap_counter;
> total_bufs = in_total = out_total = 0;
>
> max = vq->vring.num;
>- caches = vring_get_region_caches(vq);
>- if (!caches) {
>- goto err;
>- }
>
> for (;;) {
> unsigned int num_bufs = total_bufs;
>@@ -1250,6 +1241,8 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
> uint16_t desc_size;
> VRingMemoryRegionCaches *caches;
>
>+ RCU_READ_LOCK_GUARD();
>+
> if (unlikely(!vq->vring.desc)) {
> goto err;
> }
>@@ -1268,10 +1261,12 @@ void virtqueue_get_avail_bytes(VirtQueue *vq, unsigned int *in_bytes,
>
> if (virtio_vdev_has_feature(vq->vdev, VIRTIO_F_RING_PACKED)) {
> virtqueue_packed_get_avail_bytes(vq, in_bytes, out_bytes,
>- max_in_bytes, max_out_bytes);
>+ max_in_bytes, max_out_bytes,
>+ caches);
> } else {
> virtqueue_split_get_avail_bytes(vq, in_bytes, out_bytes,
>- max_in_bytes, max_out_bytes);
>+ max_in_bytes, max_out_bytes,
>+ caches);
> }
>
> return;
>--
>2.31.1
>
>
Thread overview: 12+ messages
2021-08-26 17:26 [PATCH 0/3] hw/virtio: Minor housekeeping patches Philippe Mathieu-Daudé
2021-08-26 17:26 ` [PATCH 1/3] hw/virtio: Document virtio_queue_packed_empty_rcu is called within RCU Philippe Mathieu-Daudé
2021-09-01 15:56 ` Stefano Garzarella
2021-09-02 12:07 ` Stefan Hajnoczi
2021-08-26 17:26 ` [PATCH 2/3] hw/virtio: Remove NULL check in virtio_free_region_cache() Philippe Mathieu-Daudé
2021-09-01 15:56 ` Stefano Garzarella
2021-09-02 12:07 ` Stefan Hajnoczi
2021-08-26 17:26 ` [RFC PATCH 3/3] hw/virtio: Have virtqueue_get_avail_bytes() pass caches arg to callees Philippe Mathieu-Daudé
2021-09-01 15:55 ` Stefano Garzarella [this message]
2021-09-02 12:12 ` Stefan Hajnoczi
2021-09-02 13:09 ` Stefano Garzarella
2021-09-02 15:08 ` Stefan Hajnoczi