From: "Michael S. Tsirkin" <mst@redhat.com>
To: Jason Wang <jasowang@redhat.com>
Cc: virtualization@lists.linux-foundation.org
Subject: Re: [PATCH vhost 01/10] virtio_ring: split: refactor virtqueue_add_split() for premapped
Date: Mon, 20 Feb 2023 07:12:54 -0500	[thread overview]
Message-ID: <20230220071202-mutt-send-email-mst@kernel.org> (raw)
In-Reply-To: <CACGkMEtnVwTBzwmRGGXELNkds=b1K+crAmonyjn9=rM1R2-Fkw@mail.gmail.com>

On Mon, Feb 20, 2023 at 01:37:37PM +0800, Jason Wang wrote:
> On Tue, Feb 14, 2023 at 3:27 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > DMA-related logic is separated from the virtqueue_add_split to prepare
> > for subsequent support for premapped.
> 
> The patch seems to do more than what is described here.
> 
> To make review easier, I'd suggest splitting this patch into three:
> 
> 1) virtqueue_add_split_prepare() (could we have a better name?)
> 2) virtqueue_map_sgs()
> 3) virtqueue_add_split_vring()
> 
> (Or only factor the DMA parts out; I haven't gone through the rest of the patches)
> 
> Thanks
> 

It's pretty small, so even a split is not mandatory imho.
But please do fully document everything the patch does.



> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/virtio/virtio_ring.c | 219 ++++++++++++++++++++++++-----------
> >  1 file changed, 152 insertions(+), 67 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 41144b5246a8..560ee30d942c 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -520,29 +520,83 @@ static inline unsigned int virtqueue_add_desc_split(struct virtqueue *vq,
> >         return next;
> >  }
> >
> > -static inline int virtqueue_add_split(struct virtqueue *_vq,
> > -                                     struct scatterlist *sgs[],
> > -                                     unsigned int total_sg,
> > -                                     unsigned int out_sgs,
> > -                                     unsigned int in_sgs,
> > -                                     void *data,
> > -                                     void *ctx,
> > -                                     gfp_t gfp)
> > +static int virtqueue_map_sgs(struct vring_virtqueue *vq,
> > +                            struct scatterlist *sgs[],
> > +                            unsigned int total_sg,
> > +                            unsigned int out_sgs,
> > +                            unsigned int in_sgs)
> >  {
> > -       struct vring_virtqueue *vq = to_vvq(_vq);
> >         struct scatterlist *sg;
> > -       struct vring_desc *desc;
> > -       unsigned int i, n, avail, descs_used, prev, err_idx;
> > -       int head;
> > -       bool indirect;
> > +       unsigned int n;
> >
> > -       START_USE(vq);
> > +       for (n = 0; n < out_sgs; n++) {
> > +               for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > +                       dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> > +
> > +                       if (vring_mapping_error(vq, addr))
> > +                               return -ENOMEM;
> > +
> > +                       sg->dma_address = addr;
> > +               }
> > +       }
> > +       for (; n < (out_sgs + in_sgs); n++) {
> > +               for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > +                       dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> > +
> > +                       if (vring_mapping_error(vq, addr))
> > +                               return -ENOMEM;
> > +
> > +                       sg->dma_address = addr;
> > +               }
> > +       }
> > +
> > +       return 0;
> > +}
> > +
> > +static void virtqueue_unmap_sgs(struct vring_virtqueue *vq,
> > +                               struct scatterlist *sgs[],
> > +                               unsigned int total_sg,
> > +                               unsigned int out_sgs,
> > +                               unsigned int in_sgs)
> > +{
> > +       struct scatterlist *sg;
> > +       unsigned int n;
> > +
> > +       for (n = 0; n < out_sgs; n++) {
> > +               for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > +                       if (!sg->dma_address)
> > +                               return;
> > +
> > +                       dma_unmap_single(vring_dma_dev(vq), sg->dma_address,
> > +                                        sg->length, DMA_TO_DEVICE);
> > +               }
> > +       }
> > +       for (; n < (out_sgs + in_sgs); n++) {
> > +               for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > +                       if (!sg->dma_address)
> > +                               return;
> > +
> > +                       dma_unmap_single(vring_dma_dev(vq), sg->dma_address,
> > +                                        sg->length, DMA_FROM_DEVICE);
> > +               }
> > +       }
> > +}
> > +
> > +static inline int virtqueue_add_split_prepare(struct vring_virtqueue *vq,
> > +                                             unsigned int total_sg,
> > +                                             unsigned int out_sgs,
> > +                                             void *data,
> > +                                             void *ctx,
> > +                                             gfp_t gfp,
> > +                                             struct vring_desc **pdesc)
> > +{
> > +       struct vring_desc *desc;
> > +       unsigned int descs_used;
> >
> >         BUG_ON(data == NULL);
> >         BUG_ON(ctx && vq->indirect);
> >
> >         if (unlikely(vq->broken)) {
> > -               END_USE(vq);
> >                 return -EIO;
> >         }
> >
> > @@ -550,27 +604,17 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >
> >         BUG_ON(total_sg == 0);
> >
> > -       head = vq->free_head;
> > -
> >         if (virtqueue_use_indirect(vq, total_sg))
> > -               desc = alloc_indirect_split(_vq, total_sg, gfp);
> > +               desc = alloc_indirect_split(&vq->vq, total_sg, gfp);
> >         else {
> >                 desc = NULL;
> >                 WARN_ON_ONCE(total_sg > vq->split.vring.num && !vq->indirect);
> >         }
> >
> > -       if (desc) {
> > -               /* Use a single buffer which doesn't continue */
> > -               indirect = true;
> > -               /* Set up rest to use this indirect table. */
> > -               i = 0;
> > +       if (desc)
> >                 descs_used = 1;
> > -       } else {
> > -               indirect = false;
> > -               desc = vq->split.vring.desc;
> > -               i = head;
> > +       else
> >                 descs_used = total_sg;
> > -       }
> >
> >         if (unlikely(vq->vq.num_free < descs_used)) {
> >                 pr_debug("Can't add buf len %i - avail = %i\n",
> > @@ -580,38 +624,64 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >                  * host should service the ring ASAP. */
> >                 if (out_sgs)
> >                         vq->notify(&vq->vq);
> > -               if (indirect)
> > -                       kfree(desc);
> > -               END_USE(vq);
> > +               kfree(desc);
> >                 return -ENOSPC;
> >         }
> >
> > +       *pdesc = desc;
> > +
> > +       return 0;
> > +}
> > +
> > +static inline int virtqueue_add_split_vring(struct vring_virtqueue *vq,
> > +                                           struct scatterlist *sgs[],
> > +                                           unsigned int total_sg,
> > +                                           unsigned int out_sgs,
> > +                                           unsigned int in_sgs,
> > +                                           struct vring_desc *desc)
> > +{
> > +       unsigned int n, i, avail, descs_used, prev;
> > +       struct virtqueue *_vq = &vq->vq;
> > +       struct scatterlist *sg;
> > +       bool indirect;
> > +       int head;
> > +
> > +       head = vq->free_head;
> > +
> > +       if (desc) {
> > +               /* Use a single buffer which doesn't continue */
> > +               indirect = true;
> > +               /* Set up rest to use this indirect table. */
> > +               i = 0;
> > +               descs_used = 1;
> > +       } else {
> > +               indirect = false;
> > +               desc = vq->split.vring.desc;
> > +               i = head;
> > +               descs_used = total_sg;
> > +       }
> > +
> >         for (n = 0; n < out_sgs; n++) {
> >                 for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > -                       dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> > -                       if (vring_mapping_error(vq, addr))
> > -                               goto unmap_release;
> > -
> >                         prev = i;
> >                         /* Note that we trust indirect descriptor
> >                          * table since it use stream DMA mapping.
> >                          */
> > -                       i = virtqueue_add_desc_split(_vq, desc, i, addr, sg->length,
> > +                       i = virtqueue_add_desc_split(_vq, desc, i,
> > +                                                    sg->dma_address,
> > +                                                    sg->length,
> >                                                      VRING_DESC_F_NEXT,
> >                                                      indirect);
> >                 }
> >         }
> >         for (; n < (out_sgs + in_sgs); n++) {
> >                 for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > -                       dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> > -                       if (vring_mapping_error(vq, addr))
> > -                               goto unmap_release;
> > -
> >                         prev = i;
> >                         /* Note that we trust indirect descriptor
> >                          * table since it use stream DMA mapping.
> >                          */
> > -                       i = virtqueue_add_desc_split(_vq, desc, i, addr,
> > +                       i = virtqueue_add_desc_split(_vq, desc, i,
> > +                                                    sg->dma_address,
> >                                                      sg->length,
> >                                                      VRING_DESC_F_NEXT |
> >                                                      VRING_DESC_F_WRITE,
> > @@ -630,7 +700,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >                         vq, desc, total_sg * sizeof(struct vring_desc),
> >                         DMA_TO_DEVICE);
> >                 if (vring_mapping_error(vq, addr))
> > -                       goto unmap_release;
> > +                       return -ENOMEM;
> >
> >                 virtqueue_add_desc_split(_vq, vq->split.vring.desc,
> >                                          head, addr,
> > @@ -648,13 +718,6 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >         else
> >                 vq->free_head = i;
> >
> > -       /* Store token and indirect buffer state. */
> > -       vq->split.desc_state[head].data = data;
> > -       if (indirect)
> > -               vq->split.desc_state[head].indir_desc = desc;
> > -       else
> > -               vq->split.desc_state[head].indir_desc = ctx;
> > -
> >         /* Put entry in available array (but don't update avail->idx until they
> >          * do sync). */
> >         avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> > @@ -677,30 +740,52 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
> >                 virtqueue_kick(_vq);
> >
> >         return 0;
> > +}
> >
> > -unmap_release:
> > -       err_idx = i;
> > +static inline int virtqueue_add_split(struct virtqueue *_vq,
> > +                                     struct scatterlist *sgs[],
> > +                                     unsigned int total_sg,
> > +                                     unsigned int out_sgs,
> > +                                     unsigned int in_sgs,
> > +                                     void *data,
> > +                                     void *ctx,
> > +                                     gfp_t gfp)
> > +{
> > +       struct vring_virtqueue *vq = to_vvq(_vq);
> > +       struct vring_desc *desc;
> > +       int head;
> > +       int err;
> >
> > -       if (indirect)
> > -               i = 0;
> > -       else
> > -               i = head;
> > +       START_USE(vq);
> >
> > -       for (n = 0; n < total_sg; n++) {
> > -               if (i == err_idx)
> > -                       break;
> > -               if (indirect) {
> > -                       vring_unmap_one_split_indirect(vq, &desc[i]);
> > -                       i = virtio16_to_cpu(_vq->vdev, desc[i].next);
> > -               } else
> > -                       i = vring_unmap_one_split(vq, i);
> > -       }
> > +       /* check vq state and try to alloc desc for indirect. */
> > +       err = virtqueue_add_split_prepare(vq, total_sg, out_sgs, data, ctx, gfp, &desc);
> > +       if (err)
> > +               goto end;
> >
> > -       if (indirect)
> > -               kfree(desc);
> > +       err = virtqueue_map_sgs(vq, sgs, total_sg, out_sgs, in_sgs);
> > +       if (err)
> > +               goto err;
> >
> > +       head = vq->free_head;
> > +       err = virtqueue_add_split_vring(vq, sgs, total_sg, out_sgs, in_sgs, desc);
> > +       if (err)
> > +               goto err;
> > +
> > +       /* Store token and indirect buffer state. */
> > +       vq->split.desc_state[head].data = data;
> > +       vq->split.desc_state[head].indir_desc = desc ? desc : ctx;
> > +
> > +       goto end;
> > +
> > +err:
> > +       virtqueue_unmap_sgs(vq, sgs, total_sg, out_sgs, in_sgs);
> > +
> > +       kfree(desc);
> > +
> > +end:
> >         END_USE(vq);
> > -       return -ENOMEM;
> > +       return err;
> >  }
> >
> >  static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> > --
> > 2.32.0.3.g01195cf9f
> >

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization


Thread overview: 46+ messages
2023-02-14  7:26 [PATCH vhost 00/10] virtio core prepares for AF_XDP Xuan Zhuo
2023-02-14  7:26 ` [PATCH vhost 01/10] virtio_ring: split: refactor virtqueue_add_split() for premapped Xuan Zhuo
2023-02-20  5:37   ` Jason Wang
2023-02-20  6:57     ` Xuan Zhuo
2023-02-20 12:12     ` Michael S. Tsirkin [this message]
2023-02-14  7:26 ` [PATCH vhost 02/10] virtio_ring: packed: separate prepare code from virtuque_add_indirect_packed() Xuan Zhuo
2023-02-20  5:37   ` Jason Wang
2023-02-20  6:56     ` Xuan Zhuo
2023-02-14  7:26 ` [PATCH vhost 03/10] virtio_ring: packed: refactor virtqueue_add_packed() for premapped Xuan Zhuo
2023-02-20  5:37   ` Jason Wang
2023-02-14  7:26 ` [PATCH vhost 04/10] virtio_ring: split: introduce virtqueue_add_split_premapped() Xuan Zhuo
2023-02-20  5:38   ` Jason Wang
2023-02-20  6:43     ` Xuan Zhuo
2023-02-21  1:49       ` Jason Wang
2023-02-14  7:26 ` [PATCH vhost 05/10] virtio_ring: packed: introduce virtqueue_add_packed_premapped() Xuan Zhuo
2023-02-14  7:27 ` [PATCH vhost 06/10] virtio_ring: introduce virtqueue_add_inbuf_premapped() Xuan Zhuo
2023-02-14  7:27 ` [PATCH vhost 07/10] virtio_ring: add api virtio_dma_map() for advance dma Xuan Zhuo
2023-02-20  5:38   ` Jason Wang
2023-02-20  6:59     ` Xuan Zhuo
2023-02-21  1:51       ` Jason Wang
2023-02-28 11:15         ` Xuan Zhuo
2023-03-02  2:04           ` Xuan Zhuo
2023-03-02  3:05             ` Jason Wang
2023-03-02  3:21               ` Xuan Zhuo
2023-03-02  3:26                 ` Jason Wang
2023-03-02  6:09                   ` Michael S. Tsirkin
2023-03-02  6:43                     ` Xuan Zhuo
2023-03-02  6:56                     ` Jason Wang
2023-03-02  7:31                       ` Xuan Zhuo
2023-03-02  7:56                         ` Jason Wang
2023-03-02 11:08                           ` Xuan Zhuo
2023-03-01 11:47         ` Xuan Zhuo
2023-02-14  7:27 ` [PATCH vhost 08/10] virtio_ring: introduce dma sync api for virtio Xuan Zhuo
2023-02-20  5:38   ` Jason Wang
2023-02-20  7:04     ` Xuan Zhuo
2023-02-21  1:52       ` Jason Wang
2023-02-14  7:27 ` [PATCH vhost 09/10] virtio_ring: correct the expression of the description of virtqueue_resize() Xuan Zhuo
2023-02-20  5:38   ` Jason Wang
2023-02-14  7:27 ` [PATCH vhost 10/10] virtio_ring: introduce virtqueue_reset() Xuan Zhuo
2023-02-20  5:38   ` Jason Wang
2023-02-20  7:03     ` Xuan Zhuo
2023-02-21  1:51       ` Jason Wang
2023-02-16  5:27 ` [PATCH vhost 00/10] virtio core prepares for AF_XDP Jason Wang
2023-02-16 11:46   ` Xuan Zhuo
2023-02-17  5:23     ` Jason Wang
2023-02-17  9:02       ` Xuan Zhuo
