From: Jason Wang <jasowang@redhat.com>
To: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: virtualization@lists.linux.dev,
	"Michael S. Tsirkin" <mst@redhat.com>,
	 "David S. Miller" <davem@davemloft.net>,
	Eric Dumazet <edumazet@google.com>,
	 Jakub Kicinski <kuba@kernel.org>,
	Paolo Abeni <pabeni@redhat.com>,
	netdev@vger.kernel.org
Subject: Re: [PATCH vhost v4 02/10] virtio_ring: packed: remove double check of the unmap ops
Date: Thu, 21 Mar 2024 13:57:06 +0800
Message-ID: <CACGkMEtd1L=Cm0DWLZbfSazxxHr+iPP77B1kM=PmjdqeYoAz4w@mail.gmail.com>
In-Reply-To: <20240312033557.6351-3-xuanzhuo@linux.alibaba.com>

On Tue, Mar 12, 2024 at 11:36 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> In the functions vring_unmap_extra_packed and vring_unmap_desc_packed,
> we repeatedly check whether unmapping should be performed and whether
> the descriptor is INDIRECT.
>
> These two functions are usually called in a loop, and we should put the
> check outside the loop.
>
> We also unmap the descs with VRING_DESC_F_INDIRECT on the same path as
> the other descs, which makes things more complex. If we distinguish the
> descs with VRING_DESC_F_INDIRECT before unmapping, things become clearer:
>
> 1. only one desc of the desc table is used, so we do not need the loop
> 2. the unmap API called is different from that of the other descs
> 3. vq->premapped does not need to be checked
> 4. vq->indirect does not need to be checked
> 5. state->indir_desc must not be NULL
>
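
If I read the intent correctly, the shape of the refactor is roughly the
following (an illustrative sketch only, not the actual hunks, which are
quoted below; "num" and "desc" stand in for the caller's loop state):

        /* before: every call re-checks whether unmapping is needed at all */
        for (i = 0; i < num; i++)
                vring_unmap_desc_packed(vq, &desc[i]);  /* check inside the helper */

        /* after: the caller checks once, the helpers assume it holds */
        if (vring_need_unmap_buffer(vq))
                for (i = 0; i < num; i++)
                        vring_unmap_desc_packed(vq, &desc[i]);
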
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 78 ++++++++++++++++++------------------
>  1 file changed, 40 insertions(+), 38 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index c2779e34aac7..0dfbd17e5a87 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1214,6 +1214,7 @@ static u16 packed_last_used(u16 last_used_idx)
>         return last_used_idx & ~(-(1 << VRING_PACKED_EVENT_F_WRAP_CTR));
>  }
>
> +/* caller must check vring_need_unmap_buffer() */
>  static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
>                                      const struct vring_desc_extra *extra)
>  {
> @@ -1221,33 +1222,18 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
>
>         flags = extra->flags;
>
> -       if (flags & VRING_DESC_F_INDIRECT) {
> -               if (!vq->use_dma_api)
> -                       return;
> -
> -               dma_unmap_single(vring_dma_dev(vq),
> -                                extra->addr, extra->len,
> -                                (flags & VRING_DESC_F_WRITE) ?
> -                                DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -       } else {
> -               if (!vring_need_unmap_buffer(vq))
> -                       return;
> -
> -               dma_unmap_page(vring_dma_dev(vq),
> -                              extra->addr, extra->len,
> -                              (flags & VRING_DESC_F_WRITE) ?
> -                              DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -       }
> +       dma_unmap_page(vring_dma_dev(vq),
> +                      extra->addr, extra->len,
> +                      (flags & VRING_DESC_F_WRITE) ?
> +                      DMA_FROM_DEVICE : DMA_TO_DEVICE);
>  }
>
> +/* caller must check vring_need_unmap_buffer() */
>  static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
>                                     const struct vring_packed_desc *desc)
>  {
>         u16 flags;
>
> -       if (!vring_need_unmap_buffer(vq))
> -               return;
> -
>         flags = le16_to_cpu(desc->flags);
>
>         dma_unmap_page(vring_dma_dev(vq),
> @@ -1323,7 +1309,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>                         total_sg * sizeof(struct vring_packed_desc),
>                         DMA_TO_DEVICE);
>         if (vring_mapping_error(vq, addr)) {
> -               if (vq->premapped)
> +               if (!vring_need_unmap_buffer(vq))
>                         goto free_desc;
>
>                 goto unmap_release;
> @@ -1338,10 +1324,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>                 vq->packed.desc_extra[id].addr = addr;
>                 vq->packed.desc_extra[id].len = total_sg *
>                                 sizeof(struct vring_packed_desc);
> -               vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
> -                                                 vq->packed.avail_used_flags;
>         }
>
> +       vq->packed.desc_extra[id].flags = VRING_DESC_F_INDIRECT |
> +               vq->packed.avail_used_flags;
> +
>         /*
>          * A driver MUST NOT make the first descriptor in the list
>          * available before all subsequent descriptors comprising
> @@ -1382,6 +1369,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>  unmap_release:
>         err_idx = i;
>
> +       WARN_ON(!vring_need_unmap_buffer(vq));
> +
>         for (i = 0; i < err_idx; i++)
>                 vring_unmap_desc_packed(vq, &desc[i]);
>
> @@ -1475,12 +1464,13 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>                         desc[i].len = cpu_to_le32(sg->length);
>                         desc[i].id = cpu_to_le16(id);
>
> -                       if (unlikely(vq->use_dma_api)) {
> +                       if (vring_need_unmap_buffer(vq)) {
>                                 vq->packed.desc_extra[curr].addr = addr;
>                                 vq->packed.desc_extra[curr].len = sg->length;
> -                               vq->packed.desc_extra[curr].flags =
> -                                       le16_to_cpu(flags);
>                         }
> +
> +                       vq->packed.desc_extra[curr].flags = le16_to_cpu(flags);
> +
>                         prev = curr;
>                         curr = vq->packed.desc_extra[curr].next;
>
> @@ -1530,6 +1520,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>
>         vq->packed.avail_used_flags = avail_used_flags;
>
> +       WARN_ON(!vring_need_unmap_buffer(vq));
> +
>         for (n = 0; n < total_sg; n++) {
>                 if (i == err_idx)
>                         break;
> @@ -1599,7 +1591,9 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>         struct vring_desc_state_packed *state = NULL;
>         struct vring_packed_desc *desc;
>         unsigned int i, curr;
> +       u16 flags;
>
> +       flags = vq->packed.desc_extra[id].flags;
>         state = &vq->packed.desc_state[id];
>
>         /* Clear data ptr. */
> @@ -1609,22 +1603,32 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>         vq->free_head = id;
>         vq->vq.num_free += state->num;
>
> -       if (unlikely(vq->use_dma_api)) {
> -               curr = id;
> -               for (i = 0; i < state->num; i++) {
> -                       vring_unmap_extra_packed(vq,
> -                                                &vq->packed.desc_extra[curr]);
> -                       curr = vq->packed.desc_extra[curr].next;
> +       if (!(flags & VRING_DESC_F_INDIRECT)) {
> +               if (vring_need_unmap_buffer(vq)) {
> +                       curr = id;
> +                       for (i = 0; i < state->num; i++) {
> +                               vring_unmap_extra_packed(vq,
> +                                                        &vq->packed.desc_extra[curr]);
> +                               curr = vq->packed.desc_extra[curr].next;
> +                       }
>                 }
> -       }
>
> -       if (vq->indirect) {
> +               if (ctx)
> +                       *ctx = state->indir_desc;
> +       } else {
> +               const struct vring_desc_extra *extra;
>                 u32 len;
>
> +               if (vq->use_dma_api) {
> +                       extra = &vq->packed.desc_extra[id];
> +                       dma_unmap_single(vring_dma_dev(vq),
> +                                        extra->addr, extra->len,
> +                                        (flags & VRING_DESC_F_WRITE) ?
> +                                        DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +               }

Theoretically, indirect descriptors could be chained. That case is
supported without this patch, but not after it, since only the head
descriptor's flags are checked here.
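
Something along these lines is what I mean - a rough, untested sketch
reusing only names that appear in the diff above, with the per-entry
flag check that the old code effectively did:

        curr = id;
        for (i = 0; i < state->num; i++) {
                const struct vring_desc_extra *extra = &vq->packed.desc_extra[curr];

                if (extra->flags & VRING_DESC_F_INDIRECT) {
                        if (vq->use_dma_api)
                                dma_unmap_single(vring_dma_dev(vq),
                                                 extra->addr, extra->len,
                                                 (extra->flags & VRING_DESC_F_WRITE) ?
                                                 DMA_FROM_DEVICE : DMA_TO_DEVICE);
                } else if (vring_need_unmap_buffer(vq)) {
                        vring_unmap_extra_packed(vq, extra);
                }

                curr = extra->next;
        }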

Thanks

> +
>                 /* Free the indirect table, if any, now that it's unmapped. */
>                 desc = state->indir_desc;
> -               if (!desc)
> -                       return;
>
>                 if (vring_need_unmap_buffer(vq)) {
>                         len = vq->packed.desc_extra[id].len;
> @@ -1634,8 +1638,6 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>                 }
>                 kfree(desc);
>                 state->indir_desc = NULL;
> -       } else if (ctx) {
> -               *ctx = state->indir_desc;
>         }
>  }
>
> --
> 2.32.0.3.g01195cf9f
>


Thread overview: 31+ messages
2024-03-12  3:35 [PATCH vhost v4 00/10] virtio: drivers maintain dma info for premapped vq Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 01/10] virtio_ring: introduce vring_need_unmap_buffer Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 02/10] virtio_ring: packed: remove double check of the unmap ops Xuan Zhuo
2024-03-21  5:57   ` Jason Wang [this message]
2024-03-21  8:20     ` Xuan Zhuo
2024-03-22  5:10       ` Jason Wang
2024-03-26  7:32       ` Michael S. Tsirkin
2024-03-27  7:11         ` Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 03/10] virtio_ring: packed: structure the indirect desc table Xuan Zhuo
2024-03-21  4:47   ` Jason Wang
2024-03-21  8:24     ` Xuan Zhuo
2024-03-22  5:15       ` Jason Wang
2024-03-22  5:55         ` Xuan Zhuo
2024-03-22  7:51         ` Xuan Zhuo
2024-03-25  7:07           ` Jason Wang
2024-03-12  3:35 ` [PATCH vhost v4 04/10] virtio_ring: split: remove double check of the unmap ops Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 05/10] virtio_ring: split: structure the indirect desc table Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 06/10] virtio_ring: no store dma info when unmap is not needed Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 07/10] virtio: find_vqs: add new parameter premapped Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 08/10] virtio_ring: export premapped to driver by struct virtqueue Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 09/10] virtio_net: set premapped mode by find_vqs() Xuan Zhuo
2024-03-12  3:35 ` [PATCH vhost v4 10/10] virtio_ring: virtqueue_set_dma_premapped support disable Xuan Zhuo
2024-03-21  6:02   ` Jason Wang
2024-03-21  8:21     ` Xuan Zhuo
2024-03-22  5:13       ` Jason Wang
2024-03-22  6:03         ` Xuan Zhuo
2024-03-25  7:10           ` Jason Wang
2024-03-19  6:56 ` [PATCH vhost v4 00/10] virtio: drivers maintain dma info for premapped vq Michael S. Tsirkin
2024-03-20  9:25   ` Jason Wang
2024-03-21  4:45 ` Jason Wang
2024-03-21  8:30   ` Xuan Zhuo
