* Re: [RFC v2 6/7] virtio: in order support for virtio_ring
  [not found] ` <20220817135718.2553-7-qtxuning1999@sjtu.edu.cn>
@ 2022-08-18 3:11 ` Xuan Zhuo
  2022-08-25 7:44 ` Jason Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Xuan Zhuo @ 2022-08-18 3:11 UTC (permalink / raw)
  To: Guo Zhi; +Cc: kvm, mst, netdev, linux-kernel, virtualization, eperezma, Guo Zhi

On Wed, 17 Aug 2022 21:57:17 +0800, Guo Zhi <qtxuning1999@sjtu.edu.cn> wrote:
> If in order feature negotiated, we can skip the used ring to get
> buffer's desc id sequentially.
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/virtio/virtio_ring.c | 53 ++++++++++++++++++++++++++++++------
>  1 file changed, 45 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 1c1b3fa376a2..143184ebb5a1 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -144,6 +144,9 @@ struct vring_virtqueue {
>          /* DMA address and size information */
>          dma_addr_t queue_dma_addr;
>          size_t queue_size_in_bytes;
> +
> +        /* In order feature batch begin here */
> +        u16 next_desc_begin;
>      } split;
>
>      /* Available for packed ring */
> @@ -702,8 +705,13 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>      }
>
>      vring_unmap_one_split(vq, i);
> -    vq->split.desc_extra[i].next = vq->free_head;
> -    vq->free_head = head;
> +    /* In order feature use desc in order,
> +     * that means, the next desc will always be free
> +     */
> +    if (!virtio_has_feature(vq->vq.vdev, VIRTIO_F_IN_ORDER)) {

Call virtio_has_feature() here is not good.

Thanks.

> +        vq->split.desc_extra[i].next = vq->free_head;
> +        vq->free_head = head;
> +    }
>
>      /* Plus final descriptor */
>      vq->vq.num_free++;
> @@ -745,7 +753,7 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
>  {
>      struct vring_virtqueue *vq = to_vvq(_vq);
>      void *ret;
> -    unsigned int i;
> +    unsigned int i, j;
>      u16 last_used;
>
>      START_USE(vq);
> @@ -764,11 +772,38 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
>      /* Only get used array entries after they have been exposed by host. */
>      virtio_rmb(vq->weak_barriers);
>
> -    last_used = (vq->last_used_idx & (vq->split.vring.num - 1));
> -    i = virtio32_to_cpu(_vq->vdev,
> -            vq->split.vring.used->ring[last_used].id);
> -    *len = virtio32_to_cpu(_vq->vdev,
> -            vq->split.vring.used->ring[last_used].len);
> +    if (virtio_has_feature(_vq->vdev, VIRTIO_F_IN_ORDER)) {
> +        /* Skip used ring and get used desc in order*/
> +        i = vq->split.next_desc_begin;
> +        j = i;
> +        /* Indirect only takes one descriptor in descriptor table */
> +        while (!vq->indirect && (vq->split.desc_extra[j].flags & VRING_DESC_F_NEXT))
> +            j = (j + 1) % vq->split.vring.num;
> +        /* move to next */
> +        j = (j + 1) % vq->split.vring.num;
> +        /* Next buffer will use this descriptor in order */
> +        vq->split.next_desc_begin = j;
> +        if (!vq->indirect) {
> +            *len = vq->split.desc_extra[i].len;
> +        } else {
> +            struct vring_desc *indir_desc =
> +                vq->split.desc_state[i].indir_desc;
> +            u32 indir_num = vq->split.desc_extra[i].len, buffer_len = 0;
> +
> +            if (indir_desc) {
> +                for (j = 0; j < indir_num / sizeof(struct vring_desc); j++)
> +                    buffer_len += indir_desc[j].len;
> +            }
> +
> +            *len = buffer_len;
> +        }
> +    } else {
> +        last_used = (vq->last_used_idx & (vq->split.vring.num - 1));
> +        i = virtio32_to_cpu(_vq->vdev,
> +                vq->split.vring.used->ring[last_used].id);
> +        *len = virtio32_to_cpu(_vq->vdev,
> +                vq->split.vring.used->ring[last_used].len);
> +    }
>
>      if (unlikely(i >= vq->split.vring.num)) {
>          BAD_RING(vq, "id %u out of range\n", i);
> @@ -2236,6 +2271,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
>      vq->split.avail_flags_shadow = 0;
>      vq->split.avail_idx_shadow = 0;
>
> +    vq->split.next_desc_begin = 0;
> +
>      /* No callback? Tell other side not to bother us. */
>      if (!callback) {
>          vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
> --
> 2.17.1
>
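For reference, a minimal sketch of the alternative hinted at above — caching the negotiated bit once at virtqueue setup, the way vq->indirect and vq->event already cache their feature bits, so that detach_buf_split() does not call virtio_has_feature() per buffer — might look like this; the in_order field name is an assumption, not something from the posted series:

    /* Sketch only: cache VIRTIO_F_IN_ORDER at setup time. */
    struct vring_virtqueue {
            /* ... existing fields ... */

            /* Hypothetical cached copy of VIRTIO_F_IN_ORDER */
            bool in_order;
    };

    /* In __vring_new_virtqueue(), next to the existing feature caching: */
    vq->in_order = virtio_has_feature(vdev, VIRTIO_F_IN_ORDER);

    /* detach_buf_split() then tests the cached bit in the hot path: */
    if (!vq->in_order) {
            vq->split.desc_extra[i].next = vq->free_head;
            vq->free_head = head;
    }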
* Re: [RFC v2 6/7] virtio: in order support for virtio_ring
  [not found] ` <20220817135718.2553-7-qtxuning1999@sjtu.edu.cn>
  2022-08-18 3:11 ` [RFC v2 6/7] virtio: in order support for virtio_ring Xuan Zhuo
@ 2022-08-25 7:44 ` Jason Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Jason Wang @ 2022-08-25 7:44 UTC (permalink / raw)
  To: Guo Zhi, eperezma, sgarzare, mst
  Cc: netdev, linux-kernel, kvm, virtualization

On 2022/8/17 21:57, Guo Zhi wrote:
> If in order feature negotiated, we can skip the used ring to get
> buffer's desc id sequentially.
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/virtio/virtio_ring.c | 53 ++++++++++++++++++++++++++++++------
>  1 file changed, 45 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 1c1b3fa376a2..143184ebb5a1 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -144,6 +144,9 @@ struct vring_virtqueue {
>          /* DMA address and size information */
>          dma_addr_t queue_dma_addr;
>          size_t queue_size_in_bytes;
> +
> +        /* In order feature batch begin here */

We need to tweak the comment; it's not easy for me to understand the meaning here.

> +        u16 next_desc_begin;
>      } split;
>
>      /* Available for packed ring */
> @@ -702,8 +705,13 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>      }
>
>      vring_unmap_one_split(vq, i);
> -    vq->split.desc_extra[i].next = vq->free_head;
> -    vq->free_head = head;
> +    /* In order feature use desc in order,
> +     * that means, the next desc will always be free
> +     */

Maybe we should add something like "The descriptors are prepared in order".

> +    if (!virtio_has_feature(vq->vq.vdev, VIRTIO_F_IN_ORDER)) {
> +        vq->split.desc_extra[i].next = vq->free_head;
> +        vq->free_head = head;
> +    }
>
>      /* Plus final descriptor */
>      vq->vq.num_free++;
> @@ -745,7 +753,7 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
>  {
>      struct vring_virtqueue *vq = to_vvq(_vq);
>      void *ret;
> -    unsigned int i;
> +    unsigned int i, j;
>      u16 last_used;
>
>      START_USE(vq);
> @@ -764,11 +772,38 @@ static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
>      /* Only get used array entries after they have been exposed by host. */
>      virtio_rmb(vq->weak_barriers);
>
> -    last_used = (vq->last_used_idx & (vq->split.vring.num - 1));
> -    i = virtio32_to_cpu(_vq->vdev,
> -            vq->split.vring.used->ring[last_used].id);
> -    *len = virtio32_to_cpu(_vq->vdev,
> -            vq->split.vring.used->ring[last_used].len);
> +    if (virtio_has_feature(_vq->vdev, VIRTIO_F_IN_ORDER)) {
> +        /* Skip used ring and get used desc in order*/
> +        i = vq->split.next_desc_begin;
> +        j = i;
> +        /* Indirect only takes one descriptor in descriptor table */
> +        while (!vq->indirect && (vq->split.desc_extra[j].flags & VRING_DESC_F_NEXT))
> +            j = (j + 1) % vq->split.vring.num;

Let's move the expensive mod outside the loop. Or, since this is the split ring, can we actually use an AND here, since the size is guaranteed to be a power of two?

Another question: is it better to store the next_desc in e.g. desc_extra?

And this seems very expensive if the device doesn't do the batching (which is not mandatory).

> +        /* move to next */
> +        j = (j + 1) % vq->split.vring.num;
> +        /* Next buffer will use this descriptor in order */
> +        vq->split.next_desc_begin = j;
> +        if (!vq->indirect) {
> +            *len = vq->split.desc_extra[i].len;
> +        } else {
> +            struct vring_desc *indir_desc =
> +                vq->split.desc_state[i].indir_desc;
> +            u32 indir_num = vq->split.desc_extra[i].len, buffer_len = 0;
> +
> +            if (indir_desc) {
> +                for (j = 0; j < indir_num / sizeof(struct vring_desc); j++)
> +                    buffer_len += indir_desc[j].len;

So I think we need to finalize this, then we can have much more stress on the cache:

https://lkml.org/lkml/2021/10/26/1300

It was reverted since it was too aggressive; we should instead:

1) do the validation only for modern devices
2) fail only when we enable the validation (e.g. via a module parameter)

Thanks

> +            }
> +
> +            *len = buffer_len;
> +        }
> +    } else {
> +        last_used = (vq->last_used_idx & (vq->split.vring.num - 1));
> +        i = virtio32_to_cpu(_vq->vdev,
> +                vq->split.vring.used->ring[last_used].id);
> +        *len = virtio32_to_cpu(_vq->vdev,
> +                vq->split.vring.used->ring[last_used].len);
> +    }
>
>      if (unlikely(i >= vq->split.vring.num)) {
>          BAD_RING(vq, "id %u out of range\n", i);
> @@ -2236,6 +2271,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
>      vq->split.avail_flags_shadow = 0;
>      vq->split.avail_idx_shadow = 0;
>
> +    vq->split.next_desc_begin = 0;
> +
>      /* No callback? Tell other side not to bother us. */
>      if (!callback) {
>          vq->split.avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
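To make the mask suggestion above concrete, a sketch of the in-order walk with the per-iteration mod replaced by masking (valid for the split ring, whose size is always a power of two) could look like this; it still assumes the next_desc_begin field introduced by the patch:

    /* Sketch only: vq->split.vring.num is a power of two, so masking
     * replaces the '%' on every iteration.
     */
    unsigned int mask = vq->split.vring.num - 1;
    unsigned int i = vq->split.next_desc_begin;
    unsigned int j = i;

    if (!vq->indirect) {
            /* Chained buffer: advance to the last descriptor of the chain. */
            while (vq->split.desc_extra[j].flags & VRING_DESC_F_NEXT)
                    j = (j + 1) & mask;
    }
    /* The next buffer starts right after this chain (or after the
     * single indirect descriptor).
     */
    vq->split.next_desc_begin = (j + 1) & mask;

Whether next_desc_begin should instead live in desc_extra, and what the walk costs when the device does not batch, are the open questions raised above.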
* Re: [RFC v2 1/7] vhost: expose used buffers
  [not found] ` <20220817135718.2553-2-qtxuning1999@sjtu.edu.cn>
@ 2022-08-25 7:01 ` Jason Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2022-08-25 7:01 UTC (permalink / raw)
  To: Guo Zhi, eperezma, sgarzare, mst
  Cc: netdev, linux-kernel, kvm, virtualization

On 2022/8/17 21:57, Guo Zhi wrote:
> Follow VIRTIO 1.1 spec, only writing out a single used ring for a batch
> of descriptors.
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/vhost/vhost.c | 14 ++++++++++++--
>  drivers/vhost/vhost.h |  1 +
>  2 files changed, 13 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index 40097826cff0..7b20fa5a46c3 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -2376,10 +2376,20 @@ static int __vhost_add_used_n(struct vhost_virtqueue *vq,
>      vring_used_elem_t __user *used;
>      u16 old, new;
>      int start;
> +    int copy_n = count;
>
> +    /**
> +     * If in order feature negotiated, devices can notify the use of a batch of buffers to
> +     * the driver by only writing out a single used ring entry with the id corresponding
> +     * to the head entry of the descriptor chain describing the last buffer in the batch.
> +     */
> +    if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
> +        copy_n = 1;
> +        heads = &heads[count - 1];

Do we need to check whether or not the buffer is fully used before doing this?

> +    }
>      start = vq->last_used_idx & (vq->num - 1);
>      used = vq->used->ring + start;
> -    if (vhost_put_used(vq, heads, start, count)) {
> +    if (vhost_put_used(vq, heads, start, copy_n)) {
>          vq_err(vq, "Failed to write used");
>          return -EFAULT;
>      }
> @@ -2410,7 +2420,7 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
>
>      start = vq->last_used_idx & (vq->num - 1);
>      n = vq->num - start;
> -    if (n < count) {
> +    if (n < count && !vhost_has_feature(vq, VIRTIO_F_IN_ORDER)) {
>          r = __vhost_add_used_n(vq, heads, n);
>          if (r < 0)
>              return r;
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index d9109107af08..0d5c49a30421 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -236,6 +236,7 @@ enum {
>      VHOST_FEATURES = (1ULL << VIRTIO_F_NOTIFY_ON_EMPTY) |
>               (1ULL << VIRTIO_RING_F_INDIRECT_DESC) |
>               (1ULL << VIRTIO_RING_F_EVENT_IDX) |
> +             (1ULL << VIRTIO_F_IN_ORDER) |
>               (1ULL << VHOST_F_LOG_ALL) |

Are we sure all vhost devices can support in-order (especially the SCSI)? It looks better to start from a device specific one.

Thanks

>               (1ULL << VIRTIO_F_ANY_LAYOUT) |
>               (1ULL << VIRTIO_F_VERSION_1)
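As an illustration of the "device specific" suggestion: the bit could be left out of the generic VHOST_FEATURES mask and advertised only by a device that has been audited to complete buffers in order, for example vsock. This is only a sketch, it assumes the existing VHOST_VSOCK_FEATURES definition in drivers/vhost/vsock.c, and whether vsock alone should advertise the bit is exactly the open question here:

    /* drivers/vhost/vhost.h: VHOST_FEATURES left unchanged, i.e. no
     * (1ULL << VIRTIO_F_IN_ORDER) in the generic mask.
     */

    /* drivers/vhost/vsock.c: advertise in-order only for this device. */
    enum {
            VHOST_VSOCK_FEATURES = VHOST_FEATURES |
                                   (1ULL << VIRTIO_F_ACCESS_PLATFORM) |
                                   (1ULL << VIRTIO_VSOCK_F_SEQPACKET) |
                                   (1ULL << VIRTIO_F_IN_ORDER),
    };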
* Re: [RFC v2 2/7] vhost_test: batch used buffer
  [not found] ` <20220817135718.2553-3-qtxuning1999@sjtu.edu.cn>
@ 2022-08-25 7:03 ` Jason Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2022-08-25 7:03 UTC (permalink / raw)
  To: Guo Zhi, eperezma, sgarzare, mst
  Cc: netdev, linux-kernel, kvm, virtualization

On 2022/8/17 21:57, Guo Zhi wrote:
> Only add to used ring when a batch of buffer have all been used. And if
> in order feature negotiated, only add the last used descriptor for a
> batch of buffer.
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/vhost/test.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vhost/test.c b/drivers/vhost/test.c
> index bc8e7fb1e635..57cdb3a3edf6 100644
> --- a/drivers/vhost/test.c
> +++ b/drivers/vhost/test.c
> @@ -43,6 +43,9 @@ struct vhost_test {
>  static void handle_vq(struct vhost_test *n)
>  {
>      struct vhost_virtqueue *vq = &n->vqs[VHOST_TEST_VQ];
> +    struct vring_used_elem *heads = kmalloc(sizeof(*heads)
> +                        * vq->num, GFP_KERNEL);

Though it's a test device, it would be better to try to avoid memory allocation in the datapath. And where is it freed?

Thanks

> +    int batch_idx = 0;
>      unsigned out, in;
>      int head;
>      size_t len, total_len = 0;
> @@ -84,11 +87,14 @@ static void handle_vq(struct vhost_test *n)
>              vq_err(vq, "Unexpected 0 len for TX\n");
>              break;
>          }
> -        vhost_add_used_and_signal(&n->dev, vq, head, 0);
> +        heads[batch_idx].id = cpu_to_vhost32(vq, head);
> +        heads[batch_idx++].len = cpu_to_vhost32(vq, len);
>          total_len += len;
>          if (unlikely(vhost_exceeds_weight(vq, 0, total_len)))
>              break;
>      }
> +    if (batch_idx)
> +        vhost_add_used_and_signal_n(&n->dev, vq, heads, batch_idx);
>
>      mutex_unlock(&vq->mutex);
>  }
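One way to address both points — no allocation in the datapath and nothing left unfreed — is to batch through a small fixed-size array and flush it whenever it fills up. A sketch only; the batch size is an arbitrary assumption and the elided parts stand for the existing handle_vq() logic:

    #define VHOST_TEST_BATCH 64    /* hypothetical batch size */

    static void handle_vq(struct vhost_test *n)
    {
            struct vhost_virtqueue *vq = &n->vqs[VHOST_TEST_VQ];
            struct vring_used_elem heads[VHOST_TEST_BATCH];
            int batch_idx = 0;
            /* ... existing locals and setup ... */

            for (;;) {
                    /* ... existing vhost_get_vq_desc() handling ... */
                    heads[batch_idx].id = cpu_to_vhost32(vq, head);
                    heads[batch_idx++].len = cpu_to_vhost32(vq, len);
                    /* Flush a full batch so the array can stay small. */
                    if (batch_idx == VHOST_TEST_BATCH) {
                            vhost_add_used_and_signal_n(&n->dev, vq, heads, batch_idx);
                            batch_idx = 0;
                    }
                    total_len += len;
                    if (unlikely(vhost_exceeds_weight(vq, 0, total_len)))
                            break;
            }
            if (batch_idx)
                    vhost_add_used_and_signal_n(&n->dev, vq, heads, batch_idx);

            mutex_unlock(&vq->mutex);
    }

Alternatively, the array could be allocated once in vhost_test_open() and freed in vhost_test_release(), keeping kmalloc() out of handle_vq() entirely.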
* Re: [RFC v2 3/7] vsock: batch buffers in tx
  [not found] ` <20220817135718.2553-4-qtxuning1999@sjtu.edu.cn>
@ 2022-08-25 7:08 ` Jason Wang
  0 siblings, 0 replies; 7+ messages in thread
From: Jason Wang @ 2022-08-25 7:08 UTC (permalink / raw)
  To: Guo Zhi, eperezma, sgarzare, mst
  Cc: netdev, linux-kernel, kvm, virtualization

On 2022/8/17 21:57, Guo Zhi wrote:
> Vsock uses buffers in order, and for tx driver doesn't have to
> know the length of the buffer. So we can do a batch for vsock if
> in order negotiated, only write one used ring for a batch of buffers
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/vhost/vsock.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 368330417bde..b0108009c39a 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -500,6 +500,7 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>      int head, pkts = 0, total_len = 0;
>      unsigned int out, in;
>      bool added = false;
> +    int last_head = -1;
>
>      mutex_lock(&vq->mutex);
>
> @@ -551,10 +552,16 @@ static void vhost_vsock_handle_tx_kick(struct vhost_work *work)
>          else
>              virtio_transport_free_pkt(pkt);
>
> -        vhost_add_used(vq, head, 0);
> +        if (!vhost_has_feature(vq, VIRTIO_F_IN_ORDER))
> +            vhost_add_used(vq, head, 0);
> +        else
> +            last_head = head;
>          added = true;
>      } while(likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));
>
> +    /* If in order feature negotiaged, we can do a batch to increase performance */
> +    if (vhost_has_feature(vq, VIRTIO_F_IN_ORDER) && last_head != -1)
> +        vhost_add_used(vq, last_head, 0);

I may be missing something, but the spec says "The device then skips forward in the ring according to the size of the batch." I don't see how that is done here.

Thanks

> no_more_replies:
>      if (added)
>          vhost_signal(&vsock->dev, vq);
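What the quoted spec sentence implies for this path is that the used index still has to advance by the size of the whole batch, even though only one used entry is written. One way to sketch that is to collect the heads and hand them to vhost_add_used_n(), so that the patched __vhost_add_used_n() from patch 1/7 writes only the final entry while still adding the full count to last_used_idx; the heads array, its size and the flush handling are assumptions for illustration:

    struct vring_used_elem heads[64];    /* hypothetical batch storage */
    int nheads = 0;

    do {
            /* ... existing packet handling in vhost_vsock_handle_tx_kick() ... */
            heads[nheads].id = cpu_to_vhost32(vq, head);
            heads[nheads++].len = 0;
            added = true;
            /* A real version would flush with vhost_add_used_n() here
             * whenever the array fills up.
             */
    } while (likely(!vhost_exceeds_weight(vq, ++pkts, total_len)));

    if (nheads)
            /* Advances last_used_idx by nheads, not by 1. */
            vhost_add_used_n(vq, heads, nheads);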
* Re: [RFC v2 5/7] virtio: unmask F_NEXT flag in desc_extra
  [not found] ` <20220817135718.2553-6-qtxuning1999@sjtu.edu.cn>
@ 2022-08-18 3:05 ` Xuan Zhuo
  2022-08-25 7:11 ` Jason Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Xuan Zhuo @ 2022-08-18 3:05 UTC (permalink / raw)
  To: Guo Zhi; +Cc: kvm, mst, netdev, linux-kernel, virtualization, eperezma, Guo Zhi

On Wed, 17 Aug 2022 21:57:16 +0800, Guo Zhi <qtxuning1999@sjtu.edu.cn> wrote:
> We didn't unmask F_NEXT flag in desc_extra in the end of a chain,
> unmask it so that we can access desc_extra to get next information.

I think we should state the purpose of this.

>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>
> ---
>  drivers/virtio/virtio_ring.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index a5ec724c01d8..1c1b3fa376a2 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -567,7 +567,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>      }
>      /* Last one doesn't continue. */
>      desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
> -    if (!indirect && vq->use_dma_api)
> +    if (!indirect)
>          vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
>              ~VRING_DESC_F_NEXT;
>
> @@ -584,6 +584,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>                      total_sg * sizeof(struct vring_desc),
>                      VRING_DESC_F_INDIRECT,
>                      false);
> +    vq->split.desc_extra[head & (vq->split.vring.num - 1)].flags &=
> +        ~VRING_DESC_F_NEXT;

This seems unnecessary.

>  }
>
>  /* We're using some buffers from the free list. */
> @@ -693,7 +695,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>      /* Put back on free list: unmap first-level descriptors and find end */
>      i = head;
>
> -    while (vq->split.vring.desc[i].flags & nextflag) {
> +    while (vq->split.desc_extra[i].flags & nextflag) {

nextflag is __virtio16. You can use VRING_DESC_F_NEXT directly.

Thanks.

>          vring_unmap_one_split(vq, i);
>          i = vq->split.desc_extra[i].next;
>          vq->vq.num_free++;
> --
> 2.17.1
>
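The point about nextflag: in detach_buf_split() it is a __virtio16 value (VRING_DESC_F_NEXT converted with cpu_to_virtio16() for matching against vring.desc[].flags), while desc_extra[].flags is a plain u16 kept in CPU byte order, so the raw constant can be tested directly. A minimal sketch of the loop along those lines:

    /* desc_extra[].flags is native-endian, so no byte swapping is needed. */
    i = head;
    while (vq->split.desc_extra[i].flags & VRING_DESC_F_NEXT) {
            vring_unmap_one_split(vq, i);
            i = vq->split.desc_extra[i].next;
            vq->vq.num_free++;
    }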
* Re: [RFC v2 5/7] virtio: unmask F_NEXT flag in desc_extra
  [not found] ` <20220817135718.2553-6-qtxuning1999@sjtu.edu.cn>
  2022-08-18 3:05 ` [RFC v2 5/7] virtio: unmask F_NEXT flag in desc_extra Xuan Zhuo
@ 2022-08-25 7:11 ` Jason Wang
  1 sibling, 0 replies; 7+ messages in thread
From: Jason Wang @ 2022-08-25 7:11 UTC (permalink / raw)
  To: Guo Zhi, eperezma, sgarzare, mst
  Cc: netdev, linux-kernel, kvm, virtualization

On 2022/8/17 21:57, Guo Zhi wrote:
> We didn't unmask F_NEXT flag in desc_extra in the end of a chain,
> unmask it so that we can access desc_extra to get next information.
>
> Signed-off-by: Guo Zhi <qtxuning1999@sjtu.edu.cn>

I posted a similar patch in the past. Please share the perf numbers (e.g. pps via pktgen).

Thanks

> ---
>  drivers/virtio/virtio_ring.c | 6 ++++--
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index a5ec724c01d8..1c1b3fa376a2 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -567,7 +567,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>      }
>      /* Last one doesn't continue. */
>      desc[prev].flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
> -    if (!indirect && vq->use_dma_api)
> +    if (!indirect)
>          vq->split.desc_extra[prev & (vq->split.vring.num - 1)].flags &=
>              ~VRING_DESC_F_NEXT;
>
> @@ -584,6 +584,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>                      total_sg * sizeof(struct vring_desc),
>                      VRING_DESC_F_INDIRECT,
>                      false);
> +    vq->split.desc_extra[head & (vq->split.vring.num - 1)].flags &=
> +        ~VRING_DESC_F_NEXT;
>  }
>
>  /* We're using some buffers from the free list. */
> @@ -693,7 +695,7 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>      /* Put back on free list: unmap first-level descriptors and find end */
>      i = head;
>
> -    while (vq->split.vring.desc[i].flags & nextflag) {
> +    while (vq->split.desc_extra[i].flags & nextflag) {
>          vring_unmap_one_split(vq, i);
>          i = vq->split.desc_extra[i].next;
>          vq->vq.num_free++;