qemu-devel.nongnu.org archive mirror
* [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx()
@ 2022-10-14 13:20 Laurent Vivier
  2022-10-14 13:20 ` [PATCH v2 1/2] virtio-net: fix bottom-half packet TX on asynchronous completion Laurent Vivier
                   ` (2 more replies)
  0 siblings, 3 replies; 5+ messages in thread
From: Laurent Vivier @ 2022-10-14 13:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Stefano Brivio, Michael S. Tsirkin, David Gibson, Jason Wang,
	Laurent Vivier

When virtio_net_flush_tx() reaches the tx_burst value, the queue is
not fully flushed and nothing restarts the timer or reschedules the
bottom-half function.

For the BH, the fix is only missing in virtio_net_tx_complete().
For the timer, the same fix is needed in virtio_net_tx_complete(),
but the case must also be handled in the TX timer function itself.

v2:
- fix also tx timer

Laurent Vivier (2):
  virtio-net: fix bottom-half packet TX on asynchronous completion
  virtio-net: fix TX timer with tx_burst

 hw/net/virtio-net.c | 68 +++++++++++++++++++++++++++++++++++++--------
 1 file changed, 56 insertions(+), 12 deletions(-)

-- 
2.37.3




* [PATCH v2 1/2] virtio-net: fix bottom-half packet TX on asynchronous completion
  2022-10-14 13:20 [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx() Laurent Vivier
@ 2022-10-14 13:20 ` Laurent Vivier
  2022-10-14 13:20 ` [PATCH v2 2/2] virtio-net: fix TX timer with tx_burst Laurent Vivier
  2022-10-26 20:23 ` [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx() Michael S. Tsirkin
  2 siblings, 0 replies; 5+ messages in thread
From: Laurent Vivier @ 2022-10-14 13:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Stefano Brivio, Michael S. Tsirkin, David Gibson, Jason Wang,
	Laurent Vivier, alex.williamson

When virtio-net is used with the socket netdev backend, the backend
can be busy and unable to collect new packets.

In this case, net_socket_receive() returns 0 and registers a poll function
to detect when the socket is ready again.
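The pattern looks roughly like this (a condensed sketch of the
net/socket.c logic, not the literal code; the real version also handles
the length-prefix framing and partial writes):

    /* Called when the socket becomes writable again (POLLOUT). */
    static void net_socket_writable(void *opaque)
    {
        NetSocketState *s = opaque;

        net_socket_write_poll(s, false);      /* stop watching POLLOUT */
        qemu_flush_queued_packets(&s->nc);    /* ends up calling the sent_cb,
                                                 i.e. virtio_net_tx_complete() */
    }

    static ssize_t net_socket_receive(NetClientState *nc,
                                      const uint8_t *buf, size_t size)
    {
        NetSocketState *s = DO_UPCAST(NetSocketState, nc, nc);
        ssize_t ret = send(s->fd, buf, size, 0);

        if (ret == -1 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
            net_socket_write_poll(s, true);   /* call us back on POLLOUT */
            return 0;                         /* backend busy: packet queued */
        }
        return ret;
    }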

In virtio_net_tx_bh(), virtio_net_flush_tx() forwards the 0, virtio
notifications are disabled, and the bottom half is not re-scheduled
while waiting for the backend to become ready.

When the socket netdev backend is able to send packets again, the poll
function restarts flushing the remaining packets. This is done by
calling virtio_net_tx_complete(), which re-enables notifications and
calls virtio_net_flush_tx() again.

But if virtio_net_flush_tx() reaches the tx_burst value, the queue is
not fully flushed and no new notification is sent to re-schedule
virtio_net_tx_bh(). Nothing restarts the flush, so the remaining
packets are stuck in the queue. For example, with the default tx_burst
of 256, if more than 256 packets are pending when the backend becomes
writable again, the excess packets stay queued until the guest happens
to kick the device again.

To fix that, detect in virtio_net_tx_complete() whether
virtio_net_flush_tx() has been stopped by tx_burst, and if so
re-schedule the bottom-half function virtio_net_tx_bh() to flush the
remaining packets.

This is what virtio_net_tx_bh() already does when virtio_net_flush_tx()
is synchronous, but that logic is completely bypassed when the operation
needs to be asynchronous.
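
For reference, the synchronous path in virtio_net_tx_bh() handles the
tx_burst case roughly like this (paraphrased and trimmed, not the
verbatim code):

    ret = virtio_net_flush_tx(q);
    if (ret == -EBUSY || ret == -EINVAL) {
        return;              /* async completion pending, or broken device */
    }
    if (ret >= n->tx_burst) {
        /* A full burst went out: assume more packets are pending
         * and re-schedule ourselves immediately. */
        qemu_bh_schedule(q->tx_bh);
        q->tx_waiting = 1;
        return;
    }
    /* Less than a full burst: re-enable notification, then flush
     * anything that arrived in the meantime.  If something did,
     * assume the guest is still active and re-schedule. */
    virtio_queue_set_notification(q->tx_vq, 1);
    if (virtio_net_flush_tx(q) > 0) {
        virtio_queue_set_notification(q->tx_vq, 0);
        qemu_bh_schedule(q->tx_bh);
        q->tx_waiting = 1;
    }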

Fixes: a697a334b3c4 ("virtio-net: Introduce a new bottom half packet TX")
Cc: alex.williamson@redhat.com
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
---
 hw/net/virtio-net.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index e9f696b4cfeb..1fbf2f3e19a7 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2526,6 +2526,7 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
     VirtIONet *n = qemu_get_nic_opaque(nc);
     VirtIONetQueue *q = virtio_net_get_subqueue(nc);
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
+    int ret;
 
     virtqueue_push(q->tx_vq, q->async_tx.elem, 0);
     virtio_notify(vdev, q->tx_vq);
@@ -2534,7 +2535,17 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
     q->async_tx.elem = NULL;
 
     virtio_queue_set_notification(q->tx_vq, 1);
-    virtio_net_flush_tx(q);
+    ret = virtio_net_flush_tx(q);
+    if (q->tx_bh && ret >= n->tx_burst) {
+        /*
+         * The flush has been stopped by tx_burst;
+         * we will not receive a notification for the
+         * remaining part, so re-schedule.
+         */
+        virtio_queue_set_notification(q->tx_vq, 0);
+        qemu_bh_schedule(q->tx_bh);
+        q->tx_waiting = 1;
+    }
 }
 
 /* TX */
-- 
2.37.3




* [PATCH v2 2/2] virtio-net: fix TX timer with tx_burst
  2022-10-14 13:20 [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx() Laurent Vivier
  2022-10-14 13:20 ` [PATCH v2 1/2] virtio-net: fix bottom-half packet TX on asynchronous completion Laurent Vivier
@ 2022-10-14 13:20 ` Laurent Vivier
  2022-10-20  4:20   ` Jason Wang
  2022-10-26 20:23 ` [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx() Michael S. Tsirkin
  2 siblings, 1 reply; 5+ messages in thread
From: Laurent Vivier @ 2022-10-14 13:20 UTC (permalink / raw)
  To: qemu-devel
  Cc: Stefano Brivio, Michael S. Tsirkin, David Gibson, Jason Wang,
	Laurent Vivier, alex.williamson

When virtio_net_flush_tx() reaches the tx_burst value, the queue is
not fully flushed and nothing restarts the timer.

Fix that by doing for the TX timer what we do for the bottom-half TX:
re-arm the timer if we find any packets to send during the
virtio_net_flush_tx() call.

Fixes: e3f30488e5f8 ("virtio-net: Limit number of packets sent per TX flush")
Cc: alex.williamson@redhat.com
Signed-off-by: Laurent Vivier <lvivier@redhat.com>
---
 hw/net/virtio-net.c | 59 +++++++++++++++++++++++++++++++++++----------
 1 file changed, 46 insertions(+), 13 deletions(-)

diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
index 1fbf2f3e19a7..b4964b821021 100644
--- a/hw/net/virtio-net.c
+++ b/hw/net/virtio-net.c
@@ -2536,14 +2536,19 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
 
     virtio_queue_set_notification(q->tx_vq, 1);
     ret = virtio_net_flush_tx(q);
-    if (q->tx_bh && ret >= n->tx_burst) {
+    if (ret >= n->tx_burst) {
         /*
          * The flush has been stopped by tx_burst;
          * we will not receive a notification for the
          * remaining part, so re-schedule.
          */
         virtio_queue_set_notification(q->tx_vq, 0);
-        qemu_bh_schedule(q->tx_bh);
+        if (q->tx_bh) {
+            qemu_bh_schedule(q->tx_bh);
+        } else {
+            timer_mod(q->tx_timer,
+                      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+        }
         q->tx_waiting = 1;
     }
 }
@@ -2644,6 +2649,8 @@ drop:
     return num_packets;
 }
 
+static void virtio_net_tx_timer(void *opaque);
+
 static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
 {
     VirtIONet *n = VIRTIO_NET(vdev);
@@ -2661,18 +2668,17 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
     }
 
     if (q->tx_waiting) {
-        virtio_queue_set_notification(vq, 1);
+        /* We already have queued packets, immediately flush */
         timer_del(q->tx_timer);
-        q->tx_waiting = 0;
-        if (virtio_net_flush_tx(q) == -EINVAL) {
-            return;
-        }
-    } else {
-        timer_mod(q->tx_timer,
-                       qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
-        q->tx_waiting = 1;
-        virtio_queue_set_notification(vq, 0);
+        virtio_net_tx_timer(q);
+        return;
     }
+
+    /* re-arm timer to flush it (and more) on next tick */
+    timer_mod(q->tx_timer,
+              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+    q->tx_waiting = 1;
+    virtio_queue_set_notification(vq, 0);
 }
 
 static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
@@ -2702,6 +2708,8 @@ static void virtio_net_tx_timer(void *opaque)
     VirtIONetQueue *q = opaque;
     VirtIONet *n = q->n;
     VirtIODevice *vdev = VIRTIO_DEVICE(n);
+    int ret;
+
     /* This happens when device was stopped but BH wasn't. */
     if (!vdev->vm_running) {
         /* Make sure tx waiting is set, so we'll run when restarted. */
@@ -2716,8 +2724,33 @@ static void virtio_net_tx_timer(void *opaque)
         return;
     }
 
+    ret = virtio_net_flush_tx(q);
+    if (ret == -EBUSY || ret == -EINVAL) {
+        return;
+    }
+    /*
+     * If we flush a full burst of packets, assume there are
+     * more coming and immediately rearm
+     */
+    if (ret >= n->tx_burst) {
+        q->tx_waiting = 1;
+        timer_mod(q->tx_timer,
+                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+        return;
+    }
+    /*
+     * If less than a full burst, re-enable notification and flush
+     * anything that may have come in while we weren't looking.  If
+     * we find something, assume the guest is still active and rearm
+     */
     virtio_queue_set_notification(q->tx_vq, 1);
-    virtio_net_flush_tx(q);
+    ret = virtio_net_flush_tx(q);
+    if (ret > 0) {
+        virtio_queue_set_notification(q->tx_vq, 0);
+        q->tx_waiting = 1;
+        timer_mod(q->tx_timer,
+                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
+    }
 }
 
 static void virtio_net_tx_bh(void *opaque)
-- 
2.37.3




* Re: [PATCH v2 2/2] virtio-net: fix TX timer with tx_burst
  2022-10-14 13:20 ` [PATCH v2 2/2] virtio-net: fix TX timer with tx_burst Laurent Vivier
@ 2022-10-20  4:20   ` Jason Wang
  0 siblings, 0 replies; 5+ messages in thread
From: Jason Wang @ 2022-10-20  4:20 UTC (permalink / raw)
  To: Laurent Vivier
  Cc: qemu-devel, Stefano Brivio, Michael S. Tsirkin, David Gibson,
	alex.williamson

On Fri, Oct 14, 2022 at 9:20 PM Laurent Vivier <lvivier@redhat.com> wrote:
>
> When virtio_net_flush_tx() reaches the tx_burst value, the queue is
> not fully flushed and nothing restarts the timer.
>
> Fix that by doing for the TX timer what we do for the bottom-half TX:
> re-arm the timer if we find any packets to send during the
> virtio_net_flush_tx() call.
>
> Fixes: e3f30488e5f8 ("virtio-net: Limit number of packets sent per TX flush")
> Cc: alex.williamson@redhat.com
> Signed-off-by: Laurent Vivier <lvivier@redhat.com>
> ---
>  hw/net/virtio-net.c | 59 +++++++++++++++++++++++++++++++++++----------
>  1 file changed, 46 insertions(+), 13 deletions(-)
>
> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> index 1fbf2f3e19a7..b4964b821021 100644
> --- a/hw/net/virtio-net.c
> +++ b/hw/net/virtio-net.c
> @@ -2536,14 +2536,19 @@ static void virtio_net_tx_complete(NetClientState *nc, ssize_t len)
>
>      virtio_queue_set_notification(q->tx_vq, 1);
>      ret = virtio_net_flush_tx(q);
> -    if (q->tx_bh && ret >= n->tx_burst) {
> +    if (ret >= n->tx_burst) {
>          /*
>           * The flush has been stopped by tx_burst;
>           * we will not receive a notification for the
>           * remaining part, so re-schedule.
>           */
>          virtio_queue_set_notification(q->tx_vq, 0);
> -        qemu_bh_schedule(q->tx_bh);
> +        if (q->tx_bh) {
> +            qemu_bh_schedule(q->tx_bh);
> +        } else {
> +            timer_mod(q->tx_timer,
> +                      qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
> +        }
>          q->tx_waiting = 1;
>      }
>  }
> @@ -2644,6 +2649,8 @@ drop:
>      return num_packets;
>  }
>
> +static void virtio_net_tx_timer(void *opaque);
> +
>  static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
>  {
>      VirtIONet *n = VIRTIO_NET(vdev);
> @@ -2661,18 +2668,17 @@ static void virtio_net_handle_tx_timer(VirtIODevice *vdev, VirtQueue *vq)
>      }
>
>      if (q->tx_waiting) {
> -        virtio_queue_set_notification(vq, 1);
> +        /* We already have queued packets, immediately flush */
>          timer_del(q->tx_timer);
> -        q->tx_waiting = 0;
> -        if (virtio_net_flush_tx(q) == -EINVAL) {
> -            return;
> -        }
> -    } else {
> -        timer_mod(q->tx_timer,
> -                       qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
> -        q->tx_waiting = 1;
> -        virtio_queue_set_notification(vq, 0);
> +        virtio_net_tx_timer(q);
> +        return;
>      }
> +
> +    /* re-arm timer to flush it (and more) on next tick */
> +    timer_mod(q->tx_timer,
> +              qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
> +    q->tx_waiting = 1;
> +    virtio_queue_set_notification(vq, 0);
>  }

Nit: if we stick the above in the else, we can avoid a lot of changes.
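Concretely, that would keep the original if/else shape, something like:

    if (q->tx_waiting) {
        /* We already have queued packets, immediately flush */
        timer_del(q->tx_timer);
        virtio_net_tx_timer(q);
    } else {
        /* re-arm timer to flush it (and more) on next tick */
        timer_mod(q->tx_timer,
                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
        q->tx_waiting = 1;
        virtio_queue_set_notification(vq, 0);
    }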

Others look good.

Thanks

>
>  static void virtio_net_handle_tx_bh(VirtIODevice *vdev, VirtQueue *vq)
> @@ -2702,6 +2708,8 @@ static void virtio_net_tx_timer(void *opaque)
>      VirtIONetQueue *q = opaque;
>      VirtIONet *n = q->n;
>      VirtIODevice *vdev = VIRTIO_DEVICE(n);
> +    int ret;
> +
>      /* This happens when device was stopped but BH wasn't. */
>      if (!vdev->vm_running) {
>          /* Make sure tx waiting is set, so we'll run when restarted. */
> @@ -2716,8 +2724,33 @@ static void virtio_net_tx_timer(void *opaque)
>          return;
>      }
>
> +    ret = virtio_net_flush_tx(q);
> +    if (ret == -EBUSY || ret == -EINVAL) {
> +        return;
> +    }
> +    /*
> +     * If we flush a full burst of packets, assume there are
> +     * more coming and immediately rearm
> +     */
> +    if (ret >= n->tx_burst) {
> +        q->tx_waiting = 1;
> +        timer_mod(q->tx_timer,
> +                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
> +        return;
> +    }
> +    /*
> +     * If less than a full burst, re-enable notification and flush
> +     * anything that may have come in while we weren't looking.  If
> +     * we find something, assume the guest is still active and rearm
> +     */
>      virtio_queue_set_notification(q->tx_vq, 1);
> -    virtio_net_flush_tx(q);
> +    ret = virtio_net_flush_tx(q);
> +    if (ret > 0) {
> +        virtio_queue_set_notification(q->tx_vq, 0);
> +        q->tx_waiting = 1;
> +        timer_mod(q->tx_timer,
> +                  qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) + n->tx_timeout);
> +    }
>  }
>
>  static void virtio_net_tx_bh(void *opaque)
> --
> 2.37.3
>


* Re: [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx()
  2022-10-14 13:20 [PATCH v2 0/2] virtio-net: re-arm/re-schedule when tx_burst stops virtio_net_flush_tx() Laurent Vivier
  2022-10-14 13:20 ` [PATCH v2 1/2] virtio-net: fix bottom-half packet TX on asynchronous completion Laurent Vivier
  2022-10-14 13:20 ` [PATCH v2 2/2] virtio-net: fix TX timer with tx_burst Laurent Vivier
@ 2022-10-26 20:23 ` Michael S. Tsirkin
  2 siblings, 0 replies; 5+ messages in thread
From: Michael S. Tsirkin @ 2022-10-26 20:23 UTC (permalink / raw)
  To: Laurent Vivier; +Cc: qemu-devel, Stefano Brivio, David Gibson, Jason Wang

On Fri, Oct 14, 2022 at 03:20:02PM +0200, Laurent Vivier wrote:
> When virtio_net_flush_tx() reaches the tx_burst value, the queue is
> not fully flushed and nothing restarts the timer or reschedules the
> bottom-half function.
> 
> For the BH, the fix is only missing in virtio_net_tx_complete().
> For the timer, the same fix is needed in virtio_net_tx_complete(),
> but the case must also be handled in the TX timer function itself.
> 
> v2:
> - fix also tx timer


Jason's area, and he wants a small nit fixed.
Looks good to me overall:

Reviewed-by: Michael S. Tsirkin <mst@redhat.com>

> Laurent Vivier (2):
>   virtio-net: fix bottom-half packet TX on asynchronous completion
>   virtio-net: fix TX timer with tx_burst
> 
>  hw/net/virtio-net.c | 68 +++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 56 insertions(+), 12 deletions(-)
> 
> -- 
> 2.37.3
> 



