* [PATCH 1/2] virtio_net: do not reschedule rx refill forever
From: Rusty Russell @ 2010-07-03 2:32 UTC
To: netdev; +Cc: Michael S. Tsirkin
From: "Michael S. Tsirkin" <mst@redhat.com>
We currently fill all of the RX ring, at which point add_buf returns
-ENOSPC. That gets mis-detected as an out-of-memory condition and causes
us to reschedule the refill work, and so on forever. Fix this by treating
only -ENOMEM as OOM: oom = err == -ENOMEM;
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: stable@kernel.org # .34.x
---
drivers/net/virtio_net.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 06c30df..85615a3 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -416,7 +416,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi, gfp_t gfp)
 static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 {
 	int err;
-	bool oom = false;
+	bool oom;
 
 	do {
 		if (vi->mergeable_rx_bufs)
@@ -426,10 +426,9 @@ static bool try_fill_recv(struct virtnet_info *vi, gfp_t gfp)
 		else
 			err = add_recvbuf_small(vi, gfp);
 
-		if (err < 0) {
-			oom = true;
+		oom = err == -ENOMEM;
+		if (err < 0)
 			break;
-		}
 		++vi->num;
 	} while (err > 0);
 	if (unlikely(vi->num > vi->max))
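
For context on how this spins forever: the refill work is rescheduled by
its own handler whenever try_fill_recv() reports failure. A minimal sketch
of that caller, paraphrased from the virtio_net driver of this era
(simplified, not verbatim):

/* Paraphrased sketch of the refill path; details elided. */
static void refill_work(struct work_struct *work)
{
	struct virtnet_info *vi =
		container_of(work, struct virtnet_info, refill.work);
	bool still_empty;

	napi_disable(&vi->napi);
	/* try_fill_recv() returns !oom: false means "an allocation
	 * failed, retrying later may succeed". */
	still_empty = !try_fill_recv(vi, GFP_KERNEL);
	napi_enable(&vi->napi);

	if (still_empty)
		schedule_delayed_work(&vi->refill, HZ / 2);
}

Before the fix, a full ring made add_buf return -ENOSPC, try_fill_recv()
set oom anyway, and the work rescheduled itself every half second with
nothing left to fill. With oom = err == -ENOMEM, -ENOSPC still ends the
loop but no longer reports failure.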
* [PATCH 2/2] virtio_net: fix oom handling on tx
From: Rusty Russell @ 2010-07-03 2:34 UTC
To: netdev; +Cc: Michael S. Tsirkin, Herbert Xu
virtio_net never tries to overflow the TX ring, so the only reason
add_buf may fail is an out-of-memory condition. Thus we cannot stop the
queue and wait for some request to complete - there is no guarantee
anything at all is outstanding.

Make the error message clearer as well: an error here does not indicate
a full queue.
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (...and avoid TX_BUSY)
Cc: stable@kernel.org # .34.x (s/virtqueue_/vi->svq->vq_ops->/)
---
drivers/net/virtio_net.c | 21 +++++++++++++--------
1 file changed, 13 insertions(+), 8 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -562,7 +562,6 @@ static netdev_tx_t start_xmit(struct sk_
 	struct virtnet_info *vi = netdev_priv(dev);
 	int capacity;
 
-again:
 	/* Free up any pending old buffers before queueing new ones. */
 	free_old_xmit_skbs(vi);
 
@@ -571,14 +570,20 @@ again:
 
 	/* This can happen with OOM and indirect buffers. */
 	if (unlikely(capacity < 0)) {
-		netif_stop_queue(dev);
-		dev_warn(&dev->dev, "Unexpected full queue\n");
-		if (unlikely(!virtqueue_enable_cb(vi->svq))) {
-			virtqueue_disable_cb(vi->svq);
-			netif_start_queue(dev);
-			goto again;
+		if (net_ratelimit()) {
+			if (likely(capacity == -ENOMEM)) {
+				dev_warn(&dev->dev,
+					 "TX queue failure: out of memory\n");
+			} else {
+				dev->stats.tx_fifo_errors++;
+				dev_warn(&dev->dev,
+					 "Unexpected TX queue failure: %d\n",
+					 capacity);
+			}
 		}
-		return NETDEV_TX_BUSY;
+		dev->stats.tx_dropped++;
+		kfree_skb(skb);
+		return NETDEV_TX_OK;
 	}
 	virtqueue_kick(vi->svq);
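
The OOM-without-full-ring case the commit message mentions comes from
indirect descriptors: with VIRTIO_RING_F_INDIRECT_DESC, each add_buf
allocates a per-request descriptor table with kmalloc, so the add can fail
even when the ring itself has plenty of room. A hedged sketch of that
allocation, paraphrased and simplified from the virtio_ring code of this
era (the _sketch name is ours, not the kernel's):

/* Sketch only; the real function is vring_add_indirect() in
 * drivers/virtio/virtio_ring.c. */
static int add_indirect_sketch(unsigned int out, unsigned int in,
			       gfp_t gfp, struct vring_desc **descp)
{
	struct vring_desc *desc;

	/* One descriptor table per request, allocated at add time. */
	desc = kmalloc((out + in) * sizeof(struct vring_desc), gfp);
	if (!desc)
		return -ENOMEM;	/* OOM even though the ring has room */

	/* ... fill the table, then link it into the ring proper ... */
	*descp = desc;
	return 0;
}

This is why start_xmit can see -ENOMEM on an otherwise empty TX ring:
there may be no completion pending to wake a stopped queue, so dropping
the skb and returning NETDEV_TX_OK is the only way to guarantee forward
progress.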
* Re: [PATCH 2/2] virtio_net: fix oom handling on tx
From: David Miller @ 2010-07-03 5:29 UTC
To: rusty; +Cc: netdev, mst, herbert
From: Rusty Russell <rusty@rustcorp.com.au>
Date: Sat, 3 Jul 2010 12:34:01 +1000
> virtio_net never tries to overflow the TX ring, so the only reason
> add_buf may fail is an out-of-memory condition. Thus we cannot stop the
> queue and wait for some request to complete - there is no guarantee
> anything at all is outstanding.
>
> Make the error message clearer as well: an error here does not indicate
> a full queue.
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> (...and avoid TX_BUSY)
> Cc: stable@kernel.org # .34.x (s/virtqueue_/vi->svq->vq_ops->/)
Applied.
* Re: [PATCH 1/2] virtio_net: do not reschedule rx refill forever
From: David Miller @ 2010-07-03 5:29 UTC
To: rusty; +Cc: netdev, mst
From: Rusty Russell <rusty@rustcorp.com.au>
Date: Sat, 3 Jul 2010 12:32:55 +1000
> From: "Michael S. Tsirkin" <mst@redhat.com>
>
> We currently fill all of the RX ring, at which point add_buf returns
> -ENOSPC. That gets mis-detected as an out-of-memory condition and causes
> us to reschedule the refill work, and so on forever. Fix this by treating
> only -ENOMEM as OOM: oom = err == -ENOMEM;
>
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
> Cc: stable@kernel.org # .34.x
Applied.