* [PATCH net-next V2 0/3] in order support for vhost-net
@ 2025-07-14 8:47 Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 1/3] vhost: fail early when __vhost_add_used() fails Jason Wang
` (4 more replies)
0 siblings, 5 replies; 16+ messages in thread
From: Jason Wang @ 2025-07-14 8:47 UTC (permalink / raw)
To: mst, jasowang, eperezma
Cc: kvm, virtualization, netdev, linux-kernel, jonah.palmer
Hi all,
This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
feature is designed to improve the performance of the virtio ring by
optimizing descriptor processing.
Benchmarks show a notable improvement. Please see patch 3 for details.
Changes since V1:
- add a new patch to fail early when vhost_add_used() fails
- drop unused parameters of vhost_add_used_ooo()
- make nheads consistent for vhost_add_used_in_order()
- typo fixes and other tweaks
Thanks
Jason Wang (3):
vhost: fail early when __vhost_add_used() fails
vhost: basic in order support
vhost_net: basic in_order support
drivers/vhost/net.c | 88 +++++++++++++++++++++---------
drivers/vhost/vhost.c | 123 ++++++++++++++++++++++++++++++++++--------
drivers/vhost/vhost.h | 8 ++-
3 files changed, 171 insertions(+), 48 deletions(-)
--
2.39.5
^ permalink raw reply [flat|nested] 16+ messages in thread
* [PATCH net-next V2 1/3] vhost: fail early when __vhost_add_used() fails
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
@ 2025-07-14 8:47 ` Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 2/3] vhost: basic in order support Jason Wang
` (3 subsequent siblings)
4 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2025-07-14 8:47 UTC (permalink / raw)
To: mst, jasowang, eperezma
Cc: kvm, virtualization, netdev, linux-kernel, jonah.palmer
This patch makes vhost_add_used_n() fail early when __vhost_add_used()
fails, to make sure the used index is not updated with stale used ring
information.
Reported-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
drivers/vhost/vhost.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index 3a5ebb973dba..d1d3912f4804 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -2775,6 +2775,9 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
}
r = __vhost_add_used_n(vq, heads, count);
+ if (r < 0)
+ return r;
+
/* Make sure buffer is written before we update index. */
smp_wmb();
if (vhost_put_used_idx(vq)) {
--
2.39.5
* [PATCH net-next V2 2/3] vhost: basic in order support
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 1/3] vhost: fail early when __vhost_add_used() fails Jason Wang
@ 2025-07-14 8:47 ` Jason Wang
2025-07-28 14:26 ` Eugenio Perez Martin
2025-07-14 8:47 ` [PATCH net-next V2 3/3] vhost_net: basic in_order support Jason Wang
` (2 subsequent siblings)
4 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2025-07-14 8:47 UTC (permalink / raw)
To: mst, jasowang, eperezma
Cc: kvm, virtualization, netdev, linux-kernel, jonah.palmer
This patch adds basic in order support for vhost. Two optimizations
are implemented in this patch:
1) Since the driver uses descriptors in order, vhost can deduce the
next avail ring head by counting the number of descriptors that have
been used in next_avail_head. This eliminates the need to access the
available ring in vhost.
2) vhost_add_used_and_signal_n() is extended to accept the number of
batched buffers per used elem. While this increases the number of
userspace memory accesses, it helps to reduce the chance of used ring
accesses by both the driver and vhost.
Vhost-net will be the first user of this.
Acked-by: Jonah Palmer <jonah.palmer@oracle.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
drivers/vhost/net.c | 6 ++-
drivers/vhost/vhost.c | 120 ++++++++++++++++++++++++++++++++++--------
drivers/vhost/vhost.h | 8 ++-
3 files changed, 109 insertions(+), 25 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 9dbd88eb9ff4..2199ba3b191e 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -374,7 +374,8 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net,
while (j) {
add = min(UIO_MAXIOV - nvq->done_idx, j);
vhost_add_used_and_signal_n(vq->dev, vq,
- &vq->heads[nvq->done_idx], add);
+ &vq->heads[nvq->done_idx],
+ NULL, add);
nvq->done_idx = (nvq->done_idx + add) % UIO_MAXIOV;
j -= add;
}
@@ -457,7 +458,8 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
if (!nvq->done_idx)
return;
- vhost_add_used_and_signal_n(dev, vq, vq->heads, nvq->done_idx);
+ vhost_add_used_and_signal_n(dev, vq, vq->heads, NULL,
+ nvq->done_idx);
nvq->done_idx = 0;
}
diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
index d1d3912f4804..dd7963eb6cf0 100644
--- a/drivers/vhost/vhost.c
+++ b/drivers/vhost/vhost.c
@@ -364,6 +364,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
vq->avail = NULL;
vq->used = NULL;
vq->last_avail_idx = 0;
+ vq->next_avail_head = 0;
vq->avail_idx = 0;
vq->last_used_idx = 0;
vq->signalled_used = 0;
@@ -455,6 +456,8 @@ static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
vq->log = NULL;
kfree(vq->heads);
vq->heads = NULL;
+ kfree(vq->nheads);
+ vq->nheads = NULL;
}
/* Helper to allocate iovec buffers for all vqs. */
@@ -472,7 +475,9 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
GFP_KERNEL);
vq->heads = kmalloc_array(dev->iov_limit, sizeof(*vq->heads),
GFP_KERNEL);
- if (!vq->indirect || !vq->log || !vq->heads)
+ vq->nheads = kmalloc_array(dev->iov_limit, sizeof(*vq->nheads),
+ GFP_KERNEL);
+ if (!vq->indirect || !vq->log || !vq->heads || !vq->nheads)
goto err_nomem;
}
return 0;
@@ -1990,14 +1995,15 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
break;
}
if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
- vq->last_avail_idx = s.num & 0xffff;
+ vq->next_avail_head = vq->last_avail_idx =
+ s.num & 0xffff;
vq->last_used_idx = (s.num >> 16) & 0xffff;
} else {
if (s.num > 0xffff) {
r = -EINVAL;
break;
}
- vq->last_avail_idx = s.num;
+ vq->next_avail_head = vq->last_avail_idx = s.num;
}
/* Forget the cached index value. */
vq->avail_idx = vq->last_avail_idx;
@@ -2590,11 +2596,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
unsigned int *out_num, unsigned int *in_num,
struct vhost_log *log, unsigned int *log_num)
{
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
struct vring_desc desc;
unsigned int i, head, found = 0;
u16 last_avail_idx = vq->last_avail_idx;
__virtio16 ring_head;
- int ret, access;
+ int ret, access, c = 0;
if (vq->avail_idx == vq->last_avail_idx) {
ret = vhost_get_avail_idx(vq);
@@ -2605,17 +2612,21 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
return vq->num;
}
- /* Grab the next descriptor number they're advertising, and increment
- * the index we've seen. */
- if (unlikely(vhost_get_avail_head(vq, &ring_head, last_avail_idx))) {
- vq_err(vq, "Failed to read head: idx %d address %p\n",
- last_avail_idx,
- &vq->avail->ring[last_avail_idx % vq->num]);
- return -EFAULT;
+ if (in_order)
+ head = vq->next_avail_head & (vq->num - 1);
+ else {
+ /* Grab the next descriptor number they're
+ * advertising, and increment the index we've seen. */
+ if (unlikely(vhost_get_avail_head(vq, &ring_head,
+ last_avail_idx))) {
+ vq_err(vq, "Failed to read head: idx %d address %p\n",
+ last_avail_idx,
+ &vq->avail->ring[last_avail_idx % vq->num]);
+ return -EFAULT;
+ }
+ head = vhost16_to_cpu(vq, ring_head);
}
- head = vhost16_to_cpu(vq, ring_head);
-
/* If their number is silly, that's an error. */
if (unlikely(head >= vq->num)) {
vq_err(vq, "Guest says index %u > %u is available",
@@ -2658,6 +2669,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
"in indirect descriptor at idx %d\n", i);
return ret;
}
+ ++c;
continue;
}
@@ -2693,10 +2705,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
}
*out_num += ret;
}
+ ++c;
} while ((i = next_desc(vq, &desc)) != -1);
/* On success, increment avail index. */
vq->last_avail_idx++;
+ vq->next_avail_head += c;
/* Assume notifications from guest are disabled at this point,
* if they aren't we would need to update avail_event index. */
@@ -2720,8 +2734,9 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
cpu_to_vhost32(vq, head),
cpu_to_vhost32(vq, len)
};
+ u16 nheads = 1;
- return vhost_add_used_n(vq, &heads, 1);
+ return vhost_add_used_n(vq, &heads, &nheads, 1);
}
EXPORT_SYMBOL_GPL(vhost_add_used);
@@ -2757,10 +2772,9 @@ static int __vhost_add_used_n(struct vhost_virtqueue *vq,
return 0;
}
-/* After we've used one of their buffers, we tell them about it. We'll then
- * want to notify the guest, using eventfd. */
-int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
- unsigned count)
+static int vhost_add_used_n_ooo(struct vhost_virtqueue *vq,
+ struct vring_used_elem *heads,
+ unsigned count)
{
int start, n, r;
@@ -2773,7 +2787,69 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
heads += n;
count -= n;
}
- r = __vhost_add_used_n(vq, heads, count);
+ return __vhost_add_used_n(vq, heads, count);
+}
+
+static int vhost_add_used_n_in_order(struct vhost_virtqueue *vq,
+ struct vring_used_elem *heads,
+ const u16 *nheads,
+ unsigned count)
+{
+ vring_used_elem_t __user *used;
+ u16 old, new = vq->last_used_idx;
+ int start, i;
+
+ if (!nheads)
+ return -EINVAL;
+
+ start = vq->last_used_idx & (vq->num - 1);
+ used = vq->used->ring + start;
+
+ for (i = 0; i < count; i++) {
+ if (vhost_put_used(vq, &heads[i], start, 1)) {
+ vq_err(vq, "Failed to write used");
+ return -EFAULT;
+ }
+ start += nheads[i];
+ new += nheads[i];
+ if (start >= vq->num)
+ start -= vq->num;
+ }
+
+ if (unlikely(vq->log_used)) {
+ /* Make sure data is seen before log. */
+ smp_wmb();
+ /* Log used ring entry write. */
+ log_used(vq, ((void __user *)used - (void __user *)vq->used),
+ (vq->num - start) * sizeof *used);
+ if (start + count > vq->num)
+ log_used(vq, 0,
+ (start + count - vq->num) * sizeof *used);
+ }
+
+ old = vq->last_used_idx;
+ vq->last_used_idx = new;
+ /* If the driver never bothers to signal in a very long while,
+ * used index might wrap around. If that happens, invalidate
+ * signalled_used index we stored. TODO: make sure driver
+ * signals at least once in 2^16 and remove this. */
+ if (unlikely((u16)(new - vq->signalled_used) < (u16)(new - old)))
+ vq->signalled_used_valid = false;
+ return 0;
+}
+
+/* After we've used one of their buffers, we tell them about it. We'll then
+ * want to notify the guest, using eventfd. */
+int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
+ u16 *nheads, unsigned count)
+{
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
+ int r;
+
+ if (!in_order || !nheads)
+ r = vhost_add_used_n_ooo(vq, heads, count);
+ else
+ r = vhost_add_used_n_in_order(vq, heads, nheads, count);
if (r < 0)
return r;
@@ -2856,9 +2932,11 @@ EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
/* multi-buffer version of vhost_add_used_and_signal */
void vhost_add_used_and_signal_n(struct vhost_dev *dev,
struct vhost_virtqueue *vq,
- struct vring_used_elem *heads, unsigned count)
+ struct vring_used_elem *heads,
+ u16 *nheads,
+ unsigned count)
{
- vhost_add_used_n(vq, heads, count);
+ vhost_add_used_n(vq, heads, nheads, count);
vhost_signal(dev, vq);
}
EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
index bb75a292d50c..e714ebf9da57 100644
--- a/drivers/vhost/vhost.h
+++ b/drivers/vhost/vhost.h
@@ -103,6 +103,8 @@ struct vhost_virtqueue {
* Values are limited to 0x7fff, and the high bit is used as
* a wrap counter when using VIRTIO_F_RING_PACKED. */
u16 last_avail_idx;
+ /* Next avail ring head when VIRTIO_F_IN_ORDER is negotiated */
+ u16 next_avail_head;
/* Caches available index value from user. */
u16 avail_idx;
@@ -129,6 +131,7 @@ struct vhost_virtqueue {
struct iovec iotlb_iov[64];
struct iovec *indirect;
struct vring_used_elem *heads;
+ u16 *nheads;
/* Protected by virtqueue mutex. */
struct vhost_iotlb *umem;
struct vhost_iotlb *iotlb;
@@ -213,11 +216,12 @@ bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
int vhost_vq_init_access(struct vhost_virtqueue *);
int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,
- unsigned count);
+ u16 *nheads, unsigned count);
void vhost_add_used_and_signal(struct vhost_dev *, struct vhost_virtqueue *,
unsigned int id, int len);
void vhost_add_used_and_signal_n(struct vhost_dev *, struct vhost_virtqueue *,
- struct vring_used_elem *heads, unsigned count);
+ struct vring_used_elem *heads, u16 *nheads,
+ unsigned count);
void vhost_signal(struct vhost_dev *, struct vhost_virtqueue *);
void vhost_disable_notify(struct vhost_dev *, struct vhost_virtqueue *);
bool vhost_vq_avail_empty(struct vhost_dev *, struct vhost_virtqueue *);
--
2.39.5
* [PATCH net-next V2 3/3] vhost_net: basic in_order support
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 1/3] vhost: fail early when __vhost_add_used() fails Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 2/3] vhost: basic in order support Jason Wang
@ 2025-07-14 8:47 ` Jason Wang
2025-07-16 1:12 ` [PATCH net-next V2 0/3] in order support for vhost-net Lei Yang
2025-07-17 0:04 ` Jakub Kicinski
4 siblings, 0 replies; 16+ messages in thread
From: Jason Wang @ 2025-07-14 8:47 UTC (permalink / raw)
To: mst, jasowang, eperezma
Cc: kvm, virtualization, netdev, linux-kernel, jonah.palmer
This patch introduces basic in-order support for vhost-net. By
recording the number of batched buffers in an array when calling
`vhost_add_used_and_signal_n()`, we can reduce the number of userspace
accesses. Note that the vhost-net batching logic is kept as we still
count the number of buffers there.
Testing Results:
With testpmd:
- TX: txonly mode + vhost_net with XDP_DROP on TAP shows a 17.5%
improvement, from 4.75 Mpps to 5.35 Mpps.
- RX: No obvious improvements were observed.
With virtio-ring in-order experimental code in the guest:
- TX: pktgen in the guest + XDP_DROP on TAP shows a 19% improvement,
from 5.2 Mpps to 6.2 Mpps.
- RX: pktgen on TAP with vhost_net + XDP_DROP in the guest achieves a
6.1% improvement, from 3.47 Mpps to 3.61 Mpps.
Acked-by: Jonah Palmer <jonah.palmer@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
Signed-off-by: Jason Wang <jasowang@redhat.com>
---
drivers/vhost/net.c | 86 ++++++++++++++++++++++++++++++++-------------
1 file changed, 61 insertions(+), 25 deletions(-)
diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
index 2199ba3b191e..b44778d1e580 100644
--- a/drivers/vhost/net.c
+++ b/drivers/vhost/net.c
@@ -74,7 +74,8 @@ enum {
(1ULL << VHOST_NET_F_VIRTIO_NET_HDR) |
(1ULL << VIRTIO_NET_F_MRG_RXBUF) |
(1ULL << VIRTIO_F_ACCESS_PLATFORM) |
- (1ULL << VIRTIO_F_RING_RESET)
+ (1ULL << VIRTIO_F_RING_RESET) |
+ (1ULL << VIRTIO_F_IN_ORDER)
};
enum {
@@ -450,7 +451,8 @@ static int vhost_net_enable_vq(struct vhost_net *n,
return vhost_poll_start(poll, sock->file);
}
-static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
+static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq,
+ unsigned int count)
{
struct vhost_virtqueue *vq = &nvq->vq;
struct vhost_dev *dev = vq->dev;
@@ -458,8 +460,8 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
if (!nvq->done_idx)
return;
- vhost_add_used_and_signal_n(dev, vq, vq->heads, NULL,
- nvq->done_idx);
+ vhost_add_used_and_signal_n(dev, vq, vq->heads,
+ vq->nheads, count);
nvq->done_idx = 0;
}
@@ -468,6 +470,8 @@ static void vhost_tx_batch(struct vhost_net *net,
struct socket *sock,
struct msghdr *msghdr)
{
+ struct vhost_virtqueue *vq = &nvq->vq;
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
struct tun_msg_ctl ctl = {
.type = TUN_MSG_PTR,
.num = nvq->batched_xdp,
@@ -475,6 +479,11 @@ static void vhost_tx_batch(struct vhost_net *net,
};
int i, err;
+ if (in_order) {
+ vq->heads[0].len = 0;
+ vq->nheads[0] = nvq->done_idx;
+ }
+
if (nvq->batched_xdp == 0)
goto signal_used;
@@ -496,7 +505,7 @@ static void vhost_tx_batch(struct vhost_net *net,
}
signal_used:
- vhost_net_signal_used(nvq);
+ vhost_net_signal_used(nvq, in_order ? 1 : nvq->done_idx);
nvq->batched_xdp = 0;
}
@@ -750,6 +759,7 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
int sent_pkts = 0;
bool sock_can_batch = (sock->sk->sk_sndbuf == INT_MAX);
bool busyloop_intr;
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
do {
busyloop_intr = false;
@@ -786,11 +796,13 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
break;
}
- /* We can't build XDP buff, go for single
- * packet path but let's flush batched
- * packets.
- */
- vhost_tx_batch(net, nvq, sock, &msg);
+ if (nvq->batched_xdp) {
+ /* We can't build XDP buff, go for single
+ * packet path but let's flush batched
+ * packets.
+ */
+ vhost_tx_batch(net, nvq, sock, &msg);
+ }
msg.msg_control = NULL;
} else {
if (tx_can_batch(vq, total_len))
@@ -811,8 +823,12 @@ static void handle_tx_copy(struct vhost_net *net, struct socket *sock)
pr_debug("Truncated TX packet: len %d != %zd\n",
err, len);
done:
- vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
- vq->heads[nvq->done_idx].len = 0;
+ if (in_order) {
+ vq->heads[0].id = cpu_to_vhost32(vq, head);
+ } else {
+ vq->heads[nvq->done_idx].id = cpu_to_vhost32(vq, head);
+ vq->heads[nvq->done_idx].len = 0;
+ }
++nvq->done_idx;
} while (likely(!vhost_exceeds_weight(vq, ++sent_pkts, total_len)));
@@ -991,7 +1007,7 @@ static int peek_head_len(struct vhost_net_virtqueue *rvq, struct sock *sk)
}
static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
- bool *busyloop_intr)
+ bool *busyloop_intr, unsigned int count)
{
struct vhost_net_virtqueue *rnvq = &net->vqs[VHOST_NET_VQ_RX];
struct vhost_net_virtqueue *tnvq = &net->vqs[VHOST_NET_VQ_TX];
@@ -1001,7 +1017,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
if (!len && rvq->busyloop_timeout) {
/* Flush batched heads first */
- vhost_net_signal_used(rnvq);
+ vhost_net_signal_used(rnvq, count);
/* Both tx vq and rx socket were polled here */
vhost_net_busy_poll(net, rvq, tvq, busyloop_intr, true);
@@ -1013,7 +1029,7 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
/* This is a multi-buffer version of vhost_get_desc, that works if
* vq has read descriptors only.
- * @vq - the relevant virtqueue
+ * @nvq - the relevant vhost_net virtqueue
* @datalen - data length we'll be reading
* @iovcount - returned count of io vectors we fill
* @log - vhost log
@@ -1021,14 +1037,17 @@ static int vhost_net_rx_peek_head_len(struct vhost_net *net, struct sock *sk,
* @quota - headcount quota, 1 for big buffer
* returns number of buffer heads allocated, negative on error
*/
-static int get_rx_bufs(struct vhost_virtqueue *vq,
+static int get_rx_bufs(struct vhost_net_virtqueue *nvq,
struct vring_used_elem *heads,
+ u16 *nheads,
int datalen,
unsigned *iovcount,
struct vhost_log *log,
unsigned *log_num,
unsigned int quota)
{
+ struct vhost_virtqueue *vq = &nvq->vq;
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
unsigned int out, in;
int seg = 0;
int headcount = 0;
@@ -1065,14 +1084,16 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
nlogs += *log_num;
log += *log_num;
}
- heads[headcount].id = cpu_to_vhost32(vq, d);
len = iov_length(vq->iov + seg, in);
- heads[headcount].len = cpu_to_vhost32(vq, len);
- datalen -= len;
+ if (!in_order) {
+ heads[headcount].id = cpu_to_vhost32(vq, d);
+ heads[headcount].len = cpu_to_vhost32(vq, len);
+ }
++headcount;
+ datalen -= len;
seg += in;
}
- heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
+
*iovcount = seg;
if (unlikely(log))
*log_num = nlogs;
@@ -1082,6 +1103,15 @@ static int get_rx_bufs(struct vhost_virtqueue *vq,
r = UIO_MAXIOV + 1;
goto err;
}
+
+ if (!in_order)
+ heads[headcount - 1].len = cpu_to_vhost32(vq, len + datalen);
+ else {
+ heads[0].len = cpu_to_vhost32(vq, len + datalen);
+ heads[0].id = cpu_to_vhost32(vq, d);
+ nheads[0] = headcount;
+ }
+
return headcount;
err:
vhost_discard_vq_desc(vq, headcount);
@@ -1094,6 +1124,8 @@ static void handle_rx(struct vhost_net *net)
{
struct vhost_net_virtqueue *nvq = &net->vqs[VHOST_NET_VQ_RX];
struct vhost_virtqueue *vq = &nvq->vq;
+ bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
+ unsigned int count = 0;
unsigned in, log;
struct vhost_log *vq_log;
struct msghdr msg = {
@@ -1141,12 +1173,13 @@ static void handle_rx(struct vhost_net *net)
do {
sock_len = vhost_net_rx_peek_head_len(net, sock->sk,
- &busyloop_intr);
+ &busyloop_intr, count);
if (!sock_len)
break;
sock_len += sock_hlen;
vhost_len = sock_len + vhost_hlen;
- headcount = get_rx_bufs(vq, vq->heads + nvq->done_idx,
+ headcount = get_rx_bufs(nvq, vq->heads + count,
+ vq->nheads + count,
vhost_len, &in, vq_log, &log,
likely(mergeable) ? UIO_MAXIOV : 1);
/* On error, stop handling until the next kick. */
@@ -1222,8 +1255,11 @@ static void handle_rx(struct vhost_net *net)
goto out;
}
nvq->done_idx += headcount;
- if (nvq->done_idx > VHOST_NET_BATCH)
- vhost_net_signal_used(nvq);
+ count += in_order ? 1 : headcount;
+ if (nvq->done_idx > VHOST_NET_BATCH) {
+ vhost_net_signal_used(nvq, count);
+ count = 0;
+ }
if (unlikely(vq_log))
vhost_log_write(vq, vq_log, log, vhost_len,
vq->iov, in);
@@ -1235,7 +1271,7 @@ static void handle_rx(struct vhost_net *net)
else if (!sock_len)
vhost_net_enable_vq(net, vq);
out:
- vhost_net_signal_used(nvq);
+ vhost_net_signal_used(nvq, count);
mutex_unlock(&vq->mutex);
}
--
2.39.5
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
` (2 preceding siblings ...)
2025-07-14 8:47 ` [PATCH net-next V2 3/3] vhost_net: basic in_order support Jason Wang
@ 2025-07-16 1:12 ` Lei Yang
2025-07-17 0:04 ` Jakub Kicinski
4 siblings, 0 replies; 16+ messages in thread
From: Lei Yang @ 2025-07-16 1:12 UTC (permalink / raw)
To: Jason Wang
Cc: mst, eperezma, kvm, virtualization, netdev, linux-kernel,
jonah.palmer
Tested v2 of this series with "virtio-net-pci,..,in_order=on";
regression tests pass.
Tested-by: Lei Yang <leiyang@redhat.com>
On Mon, Jul 14, 2025 at 4:48 PM Jason Wang <jasowang@redhat.com> wrote:
>
> Hi all,
>
> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> feature is designed to improve the performance of the virtio ring by
> optimizing descriptor processing.
>
> Benchmarks show a notable improvement. Please see patch 3 for details.
>
> Changes since V1:
> - add a new patch to fail early when vhost_add_used() fails
> - drop unused parameters of vhost_add_used_ooo()
> - make nheads consistent for vhost_add_used_in_order()
> - typo fixes and other tweaks
>
> Thanks
>
> Jason Wang (3):
> vhost: fail early when __vhost_add_used() fails
> vhost: basic in order support
> vhost_net: basic in_order support
>
> drivers/vhost/net.c | 88 +++++++++++++++++++++---------
> drivers/vhost/vhost.c | 123 ++++++++++++++++++++++++++++++++++--------
> drivers/vhost/vhost.h | 8 ++-
> 3 files changed, 171 insertions(+), 48 deletions(-)
>
> --
> 2.39.5
>
>
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
` (3 preceding siblings ...)
2025-07-16 1:12 ` [PATCH net-next V2 0/3] in order support for vhost-net Lei Yang
@ 2025-07-17 0:04 ` Jakub Kicinski
2025-07-17 2:03 ` Jason Wang
4 siblings, 1 reply; 16+ messages in thread
From: Jakub Kicinski @ 2025-07-17 0:04 UTC (permalink / raw)
To: Jason Wang
Cc: mst, eperezma, kvm, virtualization, netdev, linux-kernel,
jonah.palmer
On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> feature is designed to improve the performance of the virtio ring by
> optimizing descriptor processing.
>
> Benchmarks show a notable improvement. Please see patch 3 for details.
You tagged these as net-next but just to be clear -- these don't apply
for us in the current form.
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 0:04 ` Jakub Kicinski
@ 2025-07-17 2:03 ` Jason Wang
2025-07-17 5:54 ` Michael S. Tsirkin
0 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2025-07-17 2:03 UTC (permalink / raw)
To: Jakub Kicinski
Cc: mst, eperezma, kvm, virtualization, netdev, linux-kernel,
jonah.palmer
On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
>
> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > feature is designed to improve the performance of the virtio ring by
> > optimizing descriptor processing.
> >
> > Benchmarks show a notable improvement. Please see patch 3 for details.
>
> You tagged these as net-next but just to be clear -- these don't apply
> for us in the current form.
>
Will rebase and send a new version.
Thanks
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 2:03 ` Jason Wang
@ 2025-07-17 5:54 ` Michael S. Tsirkin
2025-07-17 6:01 ` Jason Wang
0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2025-07-17 5:54 UTC (permalink / raw)
To: Jason Wang
Cc: Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
> >
> > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > > feature is designed to improve the performance of the virtio ring by
> > > optimizing descriptor processing.
> > >
> > > Benchmarks show a notable improvement. Please see patch 3 for details.
> >
> > You tagged these as net-next but just to be clear -- these don't apply
> > for us in the current form.
> >
>
> Will rebase and send a new version.
>
> Thanks
Indeed these look as if they are for my tree (so I put them in
linux-next, without noticing the tag).
But I also guess guest bits should be merged in the same cycle
as host bits, less confusion.
--
MST
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 5:54 ` Michael S. Tsirkin
@ 2025-07-17 6:01 ` Jason Wang
2025-07-17 6:31 ` Michael S. Tsirkin
2025-07-17 13:52 ` Paolo Abeni
0 siblings, 2 replies; 16+ messages in thread
From: Jason Wang @ 2025-07-17 6:01 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
> > >
> > > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > > > feature is designed to improve the performance of the virtio ring by
> > > > optimizing descriptor processing.
> > > >
> > > > Benchmarks show a notable improvement. Please see patch 3 for details.
> > >
> > > You tagged these as net-next but just to be clear -- these don't apply
> > > for us in the current form.
> > >
> >
> > Will rebase and send a new version.
> >
> > Thanks
>
> Indeed these look as if they are for my tree (so I put them in
> linux-next, without noticing the tag).
I think that's also fine.
Do you prefer all vhost/vhost-net patches to go via your tree in the future?
(Note that the conflict arises because net-next merged the UDP
GSO feature).
>
> But I also guess guest bits should be merged in the same cycle
> as host bits, less confusion.
Works for me, I will post the guest bits.
Thanks
>
> --
> MST
>
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 6:01 ` Jason Wang
@ 2025-07-17 6:31 ` Michael S. Tsirkin
2025-07-17 13:52 ` Paolo Abeni
1 sibling, 0 replies; 16+ messages in thread
From: Michael S. Tsirkin @ 2025-07-17 6:31 UTC (permalink / raw)
To: Jason Wang
Cc: Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On Thu, Jul 17, 2025 at 02:01:06PM +0800, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >
> > On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> > > On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
> > > >
> > > > On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> > > > > This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> > > > > feature is designed to improve the performance of the virtio ring by
> > > > > optimizing descriptor processing.
> > > > >
> > > > > Benchmarks show a notable improvement. Please see patch 3 for details.
> > > >
> > > > You tagged these as net-next but just to be clear -- these don't apply
> > > > for us in the current form.
> > > >
> > >
> > > Will rebase and send a new version.
> > >
> > > Thanks
> >
> > Indeed these look as if they are for my tree (so I put them in
> > linux-next, without noticing the tag).
>
> I think that's also fine.
>
> Do you prefer all vhost/vhost-net patches to go via your tree in the future?
>
> (Note that the reason for the conflict is because net-next gets UDP
> GSO feature merged).
Whatever is easier really. Generally I do core vhost but if there is a
conflict we can do net-next.
> >
> > But I also guess guest bits should be merged in the same cycle
> > as host bits, less confusion.
>
> Work for me, I will post guest bits.
>
> Thanks
>
> >
> > --
> > MST
> >
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 6:01 ` Jason Wang
2025-07-17 6:31 ` Michael S. Tsirkin
@ 2025-07-17 13:52 ` Paolo Abeni
2025-07-18 2:04 ` Jason Wang
1 sibling, 1 reply; 16+ messages in thread
From: Paolo Abeni @ 2025-07-17 13:52 UTC (permalink / raw)
To: Jason Wang, Michael S. Tsirkin
Cc: Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On 7/17/25 8:01 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
>>>>
>>>> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
>>>>> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
>>>>> feature is designed to improve the performance of the virtio ring by
>>>>> optimizing descriptor processing.
>>>>>
>>>>> Benchmarks show a notable improvement. Please see patch 3 for details.
>>>>
>>>> You tagged these as net-next but just to be clear -- these don't apply
>>>> for us in the current form.
>>>>
>>>
>>> Will rebase and send a new version.
>>>
>>> Thanks
>>
>> Indeed these look as if they are for my tree (so I put them in
>> linux-next, without noticing the tag).
>
> I think that's also fine.
>
> Do you prefer all vhost/vhost-net patches to go via your tree in the future?
>
> (Note that the reason for the conflict is because net-next gets UDP
> GSO feature merged).
FTR, I thought that such patches should have been pulled into the vhost
tree, too. Did I miss something?
Thanks,
Paolo
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-17 13:52 ` Paolo Abeni
@ 2025-07-18 2:04 ` Jason Wang
2025-07-18 9:19 ` Paolo Abeni
0 siblings, 1 reply; 16+ messages in thread
From: Jason Wang @ 2025-07-18 2:04 UTC (permalink / raw)
To: Paolo Abeni
Cc: Michael S. Tsirkin, Jakub Kicinski, eperezma, kvm, virtualization,
netdev, linux-kernel, jonah.palmer
On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni <pabeni@redhat.com> wrote:
>
> On 7/17/25 8:01 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
> >>>>
> >>>> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> >>>>> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> >>>>> feature is designed to improve the performance of the virtio ring by
> >>>>> optimizing descriptor processing.
> >>>>>
> >>>>> Benchmarks show a notable improvement. Please see patch 3 for details.
> >>>>
> >>>> You tagged these as net-next but just to be clear -- these don't apply
> >>>> for us in the current form.
> >>>>
> >>>
> >>> Will rebase and send a new version.
> >>>
> >>> Thanks
> >>
> >> Indeed these look as if they are for my tree (so I put them in
> >> linux-next, without noticing the tag).
> >
> > I think that's also fine.
> >
> > Do you prefer all vhost/vhost-net patches to go via your tree in the future?
> >
> > (Note that the reason for the conflict is because net-next gets UDP
> > GSO feature merged).
>
> FTR, I thought that such patches should have been pulled into the vhost
> tree, too. Did I miss something?
See: https://www.spinics.net/lists/netdev/msg1108896.html
>
> Thanks,
>
> Paolo
>
Thanks
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-18 2:04 ` Jason Wang
@ 2025-07-18 9:19 ` Paolo Abeni
2025-07-18 9:29 ` Michael S. Tsirkin
0 siblings, 1 reply; 16+ messages in thread
From: Paolo Abeni @ 2025-07-18 9:19 UTC (permalink / raw)
To: Jason Wang, Michael S. Tsirkin
Cc: Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On 7/18/25 4:04 AM, Jason Wang wrote:
> On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni <pabeni@redhat.com> wrote:
>> On 7/17/25 8:01 AM, Jason Wang wrote:
>>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>>>> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
>>>>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
>>>>>>
>>>>>> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
>>>>>>> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
>>>>>>> feature is designed to improve the performance of the virtio ring by
>>>>>>> optimizing descriptor processing.
>>>>>>>
>>>>>>> Benchmarks show a notable improvement. Please see patch 3 for details.
>>>>>>
>>>>>> You tagged these as net-next but just to be clear -- these don't apply
>>>>>> for us in the current form.
>>>>>>
>>>>>
>>>>> Will rebase and send a new version.
>>>>>
>>>>> Thanks
>>>>
>>>> Indeed these look as if they are for my tree (so I put them in
>>>> linux-next, without noticing the tag).
>>>
>>> I think that's also fine.
>>>
>>> Do you prefer all vhost/vhost-net patches to go via your tree in the future?
>>>
>>> (Note that the reason for the conflict is because net-next gets UDP
>>> GSO feature merged).
>>
>> FTR, I thought that such patches should have been pulled into the vhost
>> tree, too. Did I miss something?
>
> See: https://www.spinics.net/lists/netdev/msg1108896.html
I'm sorry I likely was not clear in my previous message. My question is:
any special reason to not pull the UDP tunnel GSO series into the vhost
tree, too?
Thanks,
Paolo
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-18 9:19 ` Paolo Abeni
@ 2025-07-18 9:29 ` Michael S. Tsirkin
2025-07-18 9:44 ` Paolo Abeni
0 siblings, 1 reply; 16+ messages in thread
From: Michael S. Tsirkin @ 2025-07-18 9:29 UTC (permalink / raw)
To: Paolo Abeni
Cc: Jason Wang, Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On Fri, Jul 18, 2025 at 11:19:26AM +0200, Paolo Abeni wrote:
> On 7/18/25 4:04 AM, Jason Wang wrote:
> > On Thu, Jul 17, 2025 at 9:52 PM Paolo Abeni <pabeni@redhat.com> wrote:
> >> On 7/17/25 8:01 AM, Jason Wang wrote:
> >>> On Thu, Jul 17, 2025 at 1:55 PM Michael S. Tsirkin <mst@redhat.com> wrote:
> >>>> On Thu, Jul 17, 2025 at 10:03:00AM +0800, Jason Wang wrote:
> >>>>> On Thu, Jul 17, 2025 at 8:04 AM Jakub Kicinski <kuba@kernel.org> wrote:
> >>>>>>
> >>>>>> On Mon, 14 Jul 2025 16:47:52 +0800 Jason Wang wrote:
> >>>>>>> This series implements VIRTIO_F_IN_ORDER support for vhost-net. This
> >>>>>>> feature is designed to improve the performance of the virtio ring by
> >>>>>>> optimizing descriptor processing.
> >>>>>>>
> >>>>>>> Benchmarks show a notable improvement. Please see patch 3 for details.
> >>>>>>
> >>>>>> You tagged these as net-next but just to be clear -- these don't apply
> >>>>>> for us in the current form.
> >>>>>>
> >>>>>
> >>>>> Will rebase and send a new version.
> >>>>>
> >>>>> Thanks
> >>>>
> >>>> Indeed these look as if they are for my tree (so I put them in
> >>>> linux-next, without noticing the tag).
> >>>
> >>> I think that's also fine.
> >>>
> >>> Do you prefer all vhost/vhost-net patches to go via your tree in the future?
> >>>
> >>> (Note that the reason for the conflict is because net-next gets UDP
> >>> GSO feature merged).
> >>
> >> FTR, I thought that such patches should have been pulled into the vhost
> >> tree, too. Did I miss something?
> >
> > See: https://www.spinics.net/lists/netdev/msg1108896.html
>
> I'm sorry I likely was not clear in my previous message. My question is:
> any special reason to not pull the UDP tunnel GSO series into the vhost
> tree, too?
>
> Thanks,
>
> Paolo
Paolo I'm likely confused. That series is in net-next, right?
So now it would be work to drop it from there, and invalidate
all the testing it got there, for little benefit -
the merge conflict is easy to resolve.
--
MST
* Re: [PATCH net-next V2 0/3] in order support for vhost-net
2025-07-18 9:29 ` Michael S. Tsirkin
@ 2025-07-18 9:44 ` Paolo Abeni
0 siblings, 0 replies; 16+ messages in thread
From: Paolo Abeni @ 2025-07-18 9:44 UTC (permalink / raw)
To: Michael S. Tsirkin
Cc: Jason Wang, Jakub Kicinski, eperezma, kvm, virtualization, netdev,
linux-kernel, jonah.palmer
On 7/18/25 11:29 AM, Michael S. Tsirkin wrote:
> Paolo I'm likely confused. That series is in net-next, right?
> So now it would be work to drop it from there, and invalidate
> all the testing it got there, for little benefit -
> the merge conflict is easy to resolve.
Yes, that series is in net-next now.
My understanding of the merge plan was to pull such series into _both_
the net-next and the vhost trees.
Pulling from a stable public branch allows constant commit hashes in
both trees, avoids conflicts with later vhost patches in the vhost tree
and with later virtio_net/tun/tap patches in net-next, and also avoids
conflicts at merge window time.
We do that sort of hash sharing in net-next from time to time for
cross-subtree changes, like this one.
But it's not a big deal if you didn't/don't pull the thing into the
vhost tree. At this point, merging it would likely be quite complex and
there would likely be no gains on the vhost tree management side.
Perhaps we could use this schema next time.
Thanks,
Paolo
* Re: [PATCH net-next V2 2/3] vhost: basic in order support
2025-07-14 8:47 ` [PATCH net-next V2 2/3] vhost: basic in order support Jason Wang
@ 2025-07-28 14:26 ` Eugenio Perez Martin
0 siblings, 0 replies; 16+ messages in thread
From: Eugenio Perez Martin @ 2025-07-28 14:26 UTC (permalink / raw)
To: Jason Wang; +Cc: mst, kvm, virtualization, netdev, linux-kernel, jonah.palmer
On Mon, Jul 14, 2025 at 10:48 AM Jason Wang <jasowang@redhat.com> wrote:
>
> This patch adds basic in order support for vhost. Two optimizations
> are implemented in this patch:
>
> 1) Since the driver uses descriptors in order, vhost can deduce the
> next avail ring head by counting the number of descriptors that have
> been used, tracked in next_avail_head. This eliminates the need to
> access the available ring in vhost.
>
> 2) vhost_add_used_and_signal_n() is extended to accept the number of
> batched buffers per used elem. While this increases the number of
> userspace memory accesses, it helps to reduce used ring accesses by
> both the driver and vhost.
>
> Vhost-net will be the first user for this.
>
> Acked-by: Jonah Palmer <jonah.palmer@oracle.com>
Acked-by: Eugenio Pérez <eperezma@redhat.com>
> Signed-off-by: Jason Wang <jasowang@redhat.com>
> ---
> drivers/vhost/net.c | 6 ++-
> drivers/vhost/vhost.c | 120 ++++++++++++++++++++++++++++++++++--------
> drivers/vhost/vhost.h | 8 ++-
> 3 files changed, 109 insertions(+), 25 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 9dbd88eb9ff4..2199ba3b191e 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -374,7 +374,8 @@ static void vhost_zerocopy_signal_used(struct vhost_net *net,
> while (j) {
> add = min(UIO_MAXIOV - nvq->done_idx, j);
> vhost_add_used_and_signal_n(vq->dev, vq,
> - &vq->heads[nvq->done_idx], add);
> + &vq->heads[nvq->done_idx],
> + NULL, add);
> nvq->done_idx = (nvq->done_idx + add) % UIO_MAXIOV;
> j -= add;
> }
> @@ -457,7 +458,8 @@ static void vhost_net_signal_used(struct vhost_net_virtqueue *nvq)
> if (!nvq->done_idx)
> return;
>
> - vhost_add_used_and_signal_n(dev, vq, vq->heads, nvq->done_idx);
> + vhost_add_used_and_signal_n(dev, vq, vq->heads, NULL,
> + nvq->done_idx);
> nvq->done_idx = 0;
> }
>
> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
> index d1d3912f4804..dd7963eb6cf0 100644
> --- a/drivers/vhost/vhost.c
> +++ b/drivers/vhost/vhost.c
> @@ -364,6 +364,7 @@ static void vhost_vq_reset(struct vhost_dev *dev,
> vq->avail = NULL;
> vq->used = NULL;
> vq->last_avail_idx = 0;
> + vq->next_avail_head = 0;
> vq->avail_idx = 0;
> vq->last_used_idx = 0;
> vq->signalled_used = 0;
> @@ -455,6 +456,8 @@ static void vhost_vq_free_iovecs(struct vhost_virtqueue *vq)
> vq->log = NULL;
> kfree(vq->heads);
> vq->heads = NULL;
> + kfree(vq->nheads);
> + vq->nheads = NULL;
> }
>
> /* Helper to allocate iovec buffers for all vqs. */
> @@ -472,7 +475,9 @@ static long vhost_dev_alloc_iovecs(struct vhost_dev *dev)
> GFP_KERNEL);
> vq->heads = kmalloc_array(dev->iov_limit, sizeof(*vq->heads),
> GFP_KERNEL);
> - if (!vq->indirect || !vq->log || !vq->heads)
> + vq->nheads = kmalloc_array(dev->iov_limit, sizeof(*vq->nheads),
> + GFP_KERNEL);
> + if (!vq->indirect || !vq->log || !vq->heads || !vq->nheads)
> goto err_nomem;
> }
> return 0;
> @@ -1990,14 +1995,15 @@ long vhost_vring_ioctl(struct vhost_dev *d, unsigned int ioctl, void __user *arg
> break;
> }
> if (vhost_has_feature(vq, VIRTIO_F_RING_PACKED)) {
> - vq->last_avail_idx = s.num & 0xffff;
> + vq->next_avail_head = vq->last_avail_idx =
> + s.num & 0xffff;
> vq->last_used_idx = (s.num >> 16) & 0xffff;
> } else {
> if (s.num > 0xffff) {
> r = -EINVAL;
> break;
> }
> - vq->last_avail_idx = s.num;
> + vq->next_avail_head = vq->last_avail_idx = s.num;
> }
> /* Forget the cached index value. */
> vq->avail_idx = vq->last_avail_idx;
> @@ -2590,11 +2596,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
> unsigned int *out_num, unsigned int *in_num,
> struct vhost_log *log, unsigned int *log_num)
> {
> + bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
> struct vring_desc desc;
> unsigned int i, head, found = 0;
> u16 last_avail_idx = vq->last_avail_idx;
> __virtio16 ring_head;
> - int ret, access;
> + int ret, access, c = 0;
>
> if (vq->avail_idx == vq->last_avail_idx) {
> ret = vhost_get_avail_idx(vq);
> @@ -2605,17 +2612,21 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
> return vq->num;
> }
>
> - /* Grab the next descriptor number they're advertising, and increment
> - * the index we've seen. */
> - if (unlikely(vhost_get_avail_head(vq, &ring_head, last_avail_idx))) {
> - vq_err(vq, "Failed to read head: idx %d address %p\n",
> - last_avail_idx,
> - &vq->avail->ring[last_avail_idx % vq->num]);
> - return -EFAULT;
> + if (in_order)
> + head = vq->next_avail_head & (vq->num - 1);
> + else {
> + /* Grab the next descriptor number they're
> + * advertising, and increment the index we've seen. */
> + if (unlikely(vhost_get_avail_head(vq, &ring_head,
> + last_avail_idx))) {
> + vq_err(vq, "Failed to read head: idx %d address %p\n",
> + last_avail_idx,
> + &vq->avail->ring[last_avail_idx % vq->num]);
> + return -EFAULT;
> + }
> + head = vhost16_to_cpu(vq, ring_head);
> }
>
> - head = vhost16_to_cpu(vq, ring_head);
> -
> /* If their number is silly, that's an error. */
> if (unlikely(head >= vq->num)) {
> vq_err(vq, "Guest says index %u > %u is available",
> @@ -2658,6 +2669,7 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
> "in indirect descriptor at idx %d\n", i);
> return ret;
> }
> + ++c;
> continue;
> }
>
> @@ -2693,10 +2705,12 @@ int vhost_get_vq_desc(struct vhost_virtqueue *vq,
> }
> *out_num += ret;
> }
> + ++c;
> } while ((i = next_desc(vq, &desc)) != -1);
>
> /* On success, increment avail index. */
> vq->last_avail_idx++;
> + vq->next_avail_head += c;
>
> /* Assume notifications from guest are disabled at this point,
> * if they aren't we would need to update avail_event index. */
> @@ -2720,8 +2734,9 @@ int vhost_add_used(struct vhost_virtqueue *vq, unsigned int head, int len)
> cpu_to_vhost32(vq, head),
> cpu_to_vhost32(vq, len)
> };
> + u16 nheads = 1;
>
> - return vhost_add_used_n(vq, &heads, 1);
> + return vhost_add_used_n(vq, &heads, &nheads, 1);
> }
> EXPORT_SYMBOL_GPL(vhost_add_used);
>
> @@ -2757,10 +2772,9 @@ static int __vhost_add_used_n(struct vhost_virtqueue *vq,
> return 0;
> }
>
> -/* After we've used one of their buffers, we tell them about it. We'll then
> - * want to notify the guest, using eventfd. */
> -int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
> - unsigned count)
> +static int vhost_add_used_n_ooo(struct vhost_virtqueue *vq,
> + struct vring_used_elem *heads,
> + unsigned count)
> {
> int start, n, r;
>
> @@ -2773,7 +2787,69 @@ int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
> heads += n;
> count -= n;
> }
> - r = __vhost_add_used_n(vq, heads, count);
> + return __vhost_add_used_n(vq, heads, count);
> +}
> +
> +static int vhost_add_used_n_in_order(struct vhost_virtqueue *vq,
> + struct vring_used_elem *heads,
> + const u16 *nheads,
> + unsigned count)
> +{
> + vring_used_elem_t __user *used;
> + u16 old, new = vq->last_used_idx;
> + int start, i;
> +
> + if (!nheads)
> + return -EINVAL;
> +
> + start = vq->last_used_idx & (vq->num - 1);
> + used = vq->used->ring + start;
> +
> + for (i = 0; i < count; i++) {
> + if (vhost_put_used(vq, &heads[i], start, 1)) {
> + vq_err(vq, "Failed to write used");
> + return -EFAULT;
> + }
> + start += nheads[i];
> + new += nheads[i];
> + if (start >= vq->num)
> + start -= vq->num;
> + }
> +
> + if (unlikely(vq->log_used)) {
> + /* Make sure data is seen before log. */
> + smp_wmb();
> + /* Log used ring entry write. */
> + log_used(vq, ((void __user *)used - (void __user *)vq->used),
> + (vq->num - start) * sizeof *used);
> + if (start + count > vq->num)
> + log_used(vq, 0,
> + (start + count - vq->num) * sizeof *used);
> + }
> +
> + old = vq->last_used_idx;
> + vq->last_used_idx = new;
> + /* If the driver never bothers to signal in a very long while,
> + * used index might wrap around. If that happens, invalidate
> + * signalled_used index we stored. TODO: make sure driver
> + * signals at least once in 2^16 and remove this. */
> + if (unlikely((u16)(new - vq->signalled_used) < (u16)(new - old)))
> + vq->signalled_used_valid = false;
> + return 0;
> +}
> +
> +/* After we've used one of their buffers, we tell them about it. We'll then
> + * want to notify the guest, using eventfd. */
> +int vhost_add_used_n(struct vhost_virtqueue *vq, struct vring_used_elem *heads,
> + u16 *nheads, unsigned count)
> +{
> + bool in_order = vhost_has_feature(vq, VIRTIO_F_IN_ORDER);
> + int r;
> +
> + if (!in_order || !nheads)
> + r = vhost_add_used_n_ooo(vq, heads, count);
> + else
> + r = vhost_add_used_n_in_order(vq, heads, nheads, count);
>
> if (r < 0)
> return r;
> @@ -2856,9 +2932,11 @@ EXPORT_SYMBOL_GPL(vhost_add_used_and_signal);
> /* multi-buffer version of vhost_add_used_and_signal */
> void vhost_add_used_and_signal_n(struct vhost_dev *dev,
> struct vhost_virtqueue *vq,
> - struct vring_used_elem *heads, unsigned count)
> + struct vring_used_elem *heads,
> + u16 *nheads,
> + unsigned count)
> {
> - vhost_add_used_n(vq, heads, count);
> + vhost_add_used_n(vq, heads, nheads, count);
> vhost_signal(dev, vq);
> }
> EXPORT_SYMBOL_GPL(vhost_add_used_and_signal_n);
> diff --git a/drivers/vhost/vhost.h b/drivers/vhost/vhost.h
> index bb75a292d50c..e714ebf9da57 100644
> --- a/drivers/vhost/vhost.h
> +++ b/drivers/vhost/vhost.h
> @@ -103,6 +103,8 @@ struct vhost_virtqueue {
> * Values are limited to 0x7fff, and the high bit is used as
> * a wrap counter when using VIRTIO_F_RING_PACKED. */
> u16 last_avail_idx;
> +	/* Next avail ring head when VIRTIO_F_IN_ORDER is negotiated */
> + u16 next_avail_head;
>
> /* Caches available index value from user. */
> u16 avail_idx;
> @@ -129,6 +131,7 @@ struct vhost_virtqueue {
> struct iovec iotlb_iov[64];
> struct iovec *indirect;
> struct vring_used_elem *heads;
> + u16 *nheads;
> /* Protected by virtqueue mutex. */
> struct vhost_iotlb *umem;
> struct vhost_iotlb *iotlb;
> @@ -213,11 +216,12 @@ bool vhost_vq_is_setup(struct vhost_virtqueue *vq);
> int vhost_vq_init_access(struct vhost_virtqueue *);
> int vhost_add_used(struct vhost_virtqueue *, unsigned int head, int len);
> int vhost_add_used_n(struct vhost_virtqueue *, struct vring_used_elem *heads,
> - unsigned count);
> + u16 *nheads, unsigned count);
> void vhost_add_used_and_signal(struct vhost_dev *, struct vhost_virtqueue *,
> unsigned int id, int len);
> void vhost_add_used_and_signal_n(struct vhost_dev *, struct vhost_virtqueue *,
> - struct vring_used_elem *heads, unsigned count);
> + struct vring_used_elem *heads, u16 *nheads,
> + unsigned count);
> void vhost_signal(struct vhost_dev *, struct vhost_virtqueue *);
> void vhost_disable_notify(struct vhost_dev *, struct vhost_virtqueue *);
> bool vhost_vq_avail_empty(struct vhost_dev *, struct vhost_virtqueue *);
> --
> 2.39.5
>
end of thread, other threads: [~2025-07-28 14:27 UTC | newest]
Thread overview: 16+ messages -- links below jump to the message on this page --
2025-07-14 8:47 [PATCH net-next V2 0/3] in order support for vhost-net Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 1/3] vhost: fail early when __vhost_add_used() fails Jason Wang
2025-07-14 8:47 ` [PATCH net-next V2 2/3] vhost: basic in order support Jason Wang
2025-07-28 14:26 ` Eugenio Perez Martin
2025-07-14 8:47 ` [PATCH net-next V2 3/3] vhost_net: basic in_order support Jason Wang
2025-07-16 1:12 ` [PATCH net-next V2 0/3] in order support for vhost-net Lei Yang
2025-07-17 0:04 ` Jakub Kicinski
2025-07-17 2:03 ` Jason Wang
2025-07-17 5:54 ` Michael S. Tsirkin
2025-07-17 6:01 ` Jason Wang
2025-07-17 6:31 ` Michael S. Tsirkin
2025-07-17 13:52 ` Paolo Abeni
2025-07-18 2:04 ` Jason Wang
2025-07-18 9:19 ` Paolo Abeni
2025-07-18 9:29 ` Michael S. Tsirkin
2025-07-18 9:44 ` Paolo Abeni