From: Laurent Vivier <lvivier@redhat.com>
To: qemu-devel@nongnu.org
Cc: Jason Wang <jasowang@redhat.com>,
David Gibson <david@gibson.dropbear.id.au>,
"Michael S. Tsirkin" <mst@redhat.com>,
Stefano Brivio <sbrivio@redhat.com>,
Laurent Vivier <lvivier@redhat.com>,
alex.williamson@redhat.com
Subject: [PATCH v3 2/2] virtio-net: fix TX timer with tx_burst
Date: Thu, 20 Oct 2022 11:58:46 +0200
Message-ID: <20221020095846.63831-3-lvivier@redhat.com>
In-Reply-To: <20221020095846.63831-1-lvivier@redhat.com>
When virtio_net_flush_tx() reaches the tx_burst value, the queue
is not fully flushed and nothing restarts the timer, so the
remaining packets sit in the queue until the guest kicks it again.

Fix that by doing for the TX timer what we already do for the TX
bottom half: rearm the timer if we find any packet left to send
during the virtio_net_flush_tx() call.
Fixes: e3f30488e5f8 ("virtio-net: Limit number of packets sent per TX flush")
Cc: alex.williamson@redhat.com
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
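A note for reviewers: below is a minimal, standalone model of the
"bounded flush + rearm" pattern this patch applies to the timer path.
Every identifier in it (toy_queue, flush_tx, tx_timer_cb, TX_BURST) is
a hypothetical stand-in rather than the QEMU API; it only illustrates
the control flow:

    /*
     * Toy model of "flush at most tx_burst packets, rearm the timer
     * if a full burst went out". All types and helpers here are
     * hypothetical stand-ins for the QEMU structures in the patch.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define TX_BURST 4

    struct toy_queue {
        int pending;        /* packets waiting in the virtqueue */
        bool timer_armed;   /* stands in for q->tx_timer */
    };

    /* Mirrors virtio_net_flush_tx(): send at most TX_BURST packets
     * and return how many were actually sent. */
    static int flush_tx(struct toy_queue *q)
    {
        int sent = 0;

        while (q->pending > 0 && sent < TX_BURST) {
            q->pending--;
            sent++;
        }
        return sent;
    }

    /* Mirrors the fixed virtio_net_tx_timer(): a full burst means
     * more packets are probably queued, so rearm immediately. */
    static void tx_timer_cb(struct toy_queue *q)
    {
        q->timer_armed = false;
        if (flush_tx(q) >= TX_BURST) {
            q->timer_armed = true;  /* timer_mod() in the real code */
        }
    }

    int main(void)
    {
        struct toy_queue q = { .pending = 10, .timer_armed = true };

        while (q.timer_armed) {
            tx_timer_cb(&q);
            printf("pending=%d rearmed=%s\n", q.pending,
                   q.timer_armed ? "yes" : "no");
        }
        return 0;
    }

Before the fix, tx_timer_cb() would never rearm itself, so with 10
pending packets and a burst of 4 the last 6 would stay queued until
the guest notified again.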
hw/net/virtio-net.c | 50 +++++++++++++++++++++++++++++++++++++--------
1 file changed, 41 insertions(+), 9 deletions(-)
diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1fbf2f3e19a7..b6903aea5450 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2536,14 +2536,19 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
virtio_queue_set_notification(q->tx_vq, 1);
ret = virtio_net_flush_tx(q);
- if (q->tx_bh && ret >= n->tx_burst) {
+ if (ret >= n->tx_burst) {
/*
* the flush has been stopped by tx_burst
* we will not receive notification for the
* remaining part, so re-schedule
*/
virtio_queue_set_notification(q->tx_vq, 0);
- qemu_bh_schedule(q->tx_bh);
+ if (q->tx_bh) {
+ qemu_bh_schedule(q->tx_bh);
+ } else {
+ timer_mod(q->tx_timer,
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+ }
q->tx_waiting = 1;
}
}
@@ -2644,6 +2649,8 @@ drop:
return num_packets;
}
+static void virtio_net_tx_timer(void *opaque);
+
static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
{
VirtIONet *n = VIRTIO_NET(vdev);
@@ -2661,15 +2668,13 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
}
if (q->tx_waiting) {
- virtio_queue_set_notification(vq, 1);
+ /* We already have queued packets, immediately flush */
timer_del(q->tx_timer);
- q->tx_waiting = 0;
- if (virtio_net_flush_tx(q) == -EINVAL) {
- return;
- }
+ virtio_net_tx_timer(q);
} else {
+ /* re-arm timer to flush it (and more) on next tick */
timer_mod(q->tx_timer,
- qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
q->tx_waiting = 1;
virtio_queue_set_notification(vq, 0);
}
@@ -2702,6 +2707,8 @@ static void virtio_net_tx_timer(void *opaque)
VirtIONetQueue *q = opaque;
VirtIONet *n = q->n;
VirtIODevice *vdev = VIRTIO_DEVICE(n);
+ int ret;
+
/* This happens when device was stopped but BH wasn't. */
if (!vdev->vm_running) {
/* Make sure tx waiting is set, so we'll run when restarted. */
@@ -2716,8 +2723,33 @@ static void virtio_net_tx_timer(void *opaque)
return;
}
+ ret = virtio_net_flush_tx(q);
+ if (ret == -EBUSY || ret == -EINVAL) {
+ return;
+ }
+ /*
+ * If we flush a full burst of packets, assume there are
+ * more coming and immediately rearm
+ */
+ if (ret >= n->tx_burst) {
+ q->tx_waiting = 1;
+ timer_mod(q->tx_timer,
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+ return;
+ }
+ /*
+ * If less than a full burst, re-enable notification and flush
+ * anything that may have come in while we weren't looking. If
+ * we find something, assume the guest is still active and rearm
+ */
virtio_queue_set_notification(q->tx_vq, 1);
- virtio_net_flush_tx(q);
+ ret = virtio_net_flush_tx(q);
+ if (ret > 0) {
+ virtio_queue_set_notification(q->tx_vq, 0);
+ q->tx_waiting = 1;
+ timer_mod(q->tx_timer,
+ qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+ }
}
static void virtio_net_tx_bh(void *opaque)
--
2.37.3
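The first hunk applies the same idea to the completion path: when
virtio_net_tx_complete() stops a flush at tx_burst, it now reschedules
via whichever mechanism the device is configured with, bottom half or
timer, instead of handling only the bottom-half case. A toy model of
that dispatch, again with hypothetical stand-in types rather than the
QEMU API:

    /*
     * Toy model of the completion-path fix: after a burst-limited
     * flush, reschedule using the mechanism in use. All identifiers
     * are hypothetical stand-ins for the QEMU structures.
     */
    #include <stdbool.h>
    #include <stdio.h>

    enum tx_mode { TX_BH, TX_TIMER };

    struct toy_txq {
        enum tx_mode mode;
        bool bh_scheduled;   /* stands in for qemu_bh_schedule() */
        bool timer_armed;    /* stands in for timer_mod() */
        bool waiting;        /* stands in for q->tx_waiting */
    };

    /* Called when flush_tx() returned ret >= tx_burst. Before the
     * patch only the bottom-half branch existed, so a timer-mode
     * device stalled here. */
    static void reschedule_after_burst(struct toy_txq *q)
    {
        if (q->mode == TX_BH) {
            q->bh_scheduled = true;
        } else {
            q->timer_armed = true;
        }
        q->waiting = true;
    }

    int main(void)
    {
        struct toy_txq q = { .mode = TX_TIMER };

        reschedule_after_burst(&q);
        printf("timer_armed=%d waiting=%d\n", q.timer_armed, q.waiting);
        return 0;
    }

With the old `q->tx_bh && ret >= n->tx_burst` test, the timer branch
above was simply unreachable for timer-mode devices.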