* [PATCH net 0/3] xen-netfront: more multiqueue fixes
From: David Vrabel @ 2014-07-31 16:38 UTC
To: netdev; +Cc: David Vrabel, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky
A few more xen-netfront fixes for the multiqueue support added in
3.16-rc1. It would be great if these could make it into 3.16 but I
suspect it's a little late for that now.
The second patch fixes a significant resource leak that prevents
guests from migrating more than a handful of times.
These have been tested by repeatedly migrating a guest over 250 times
(it would previously fail with this guest after only 8 iterations).
David
* [PATCH 1/3] xen-netfront: fix locking in connect error path
From: David Vrabel @ 2014-07-31 16:38 UTC
To: netdev; +Cc: David Vrabel, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky
If no queues could be created when connecting to the backend, one of the
error paths would deadlock.
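In outline (a simplified view of the error path patched below, not the full function), the problem was a second rtnl_lock() where an unlock was intended; the RTNL mutex is not recursive, so the task blocks on itself:

	rtnl_lock();
	netif_set_real_num_tx_queues(info->netdev, 0);
	rtnl_lock();	/* bug: re-acquiring the non-recursive RTNL mutex
			 * self-deadlocks; should be rtnl_unlock() */
out:
	return err;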
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
drivers/net/xen-netfront.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 055222b..1cc46d0 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -2001,7 +2001,7 @@ abort_transaction_no_dev_fatal:
info->queues = NULL;
rtnl_lock();
netif_set_real_num_tx_queues(info->netdev, 0);
- rtnl_lock();
+ rtnl_unlock();
out:
return err;
}
--
1.7.10.4
* [PATCH 2/3] xen-netfront: release per-queue Tx and Rx resource when disconnecting
From: David Vrabel @ 2014-07-31 16:38 UTC
To: netdev; +Cc: David Vrabel, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky
Since netfront may reconnect to a backend with a different number of
queues, all per-queue Rx and Tx resources (skbs and grant references)
should be freed when disconnecting.
Without this fix, the Tx and Rx grant refs are not released and
netfront will exhaust them after only a few reconnections. netfront
will fail to connect when no free grant references are available.
Since all Rx buffers are freed and reallocated instead of reused, this
adds some additional delay to the reconnection, but this is expected to
be small compared to the time taken by any backend hotplug scripts etc.
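In outline, the fix is for xennet_disconnect_backend() to tear down each
queue's buffers and grant reference pools before ending ring access (a
simplified sketch assuming the surrounding per-queue loop and the
info/num_queues variables from that function; see the diff below for the
actual change):

	for (i = 0; i < num_queues; i++) {
		struct netfront_queue *queue = &info->queues[i];

		xennet_release_tx_bufs(queue);	/* free pending Tx skbs */
		xennet_release_rx_bufs(queue);	/* free posted Rx skbs */
		/* release the queue's pre-allocated grant reference pools */
		gnttab_free_grant_references(queue->gref_tx_head);
		gnttab_free_grant_references(queue->gref_rx_head);
	}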
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
drivers/net/xen-netfront.c | 68 +++++---------------------------------------
1 file changed, 7 insertions(+), 61 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 1cc46d0..0b133a3 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1196,22 +1196,6 @@ static void xennet_release_rx_bufs(struct netfront_queue *queue)
spin_unlock_bh(&queue->rx_lock);
}
-static void xennet_uninit(struct net_device *dev)
-{
- struct netfront_info *np = netdev_priv(dev);
- unsigned int num_queues = dev->real_num_tx_queues;
- struct netfront_queue *queue;
- unsigned int i;
-
- for (i = 0; i < num_queues; ++i) {
- queue = &np->queues[i];
- xennet_release_tx_bufs(queue);
- xennet_release_rx_bufs(queue);
- gnttab_free_grant_references(queue->gref_tx_head);
- gnttab_free_grant_references(queue->gref_rx_head);
- }
-}
-
static netdev_features_t xennet_fix_features(struct net_device *dev,
netdev_features_t features)
{
@@ -1313,7 +1297,6 @@ static void xennet_poll_controller(struct net_device *dev)
static const struct net_device_ops xennet_netdev_ops = {
.ndo_open = xennet_open,
- .ndo_uninit = xennet_uninit,
.ndo_stop = xennet_close,
.ndo_start_xmit = xennet_start_xmit,
.ndo_change_mtu = xennet_change_mtu,
@@ -1455,6 +1438,11 @@ static void xennet_disconnect_backend(struct netfront_info *info)
napi_synchronize(&queue->napi);
+ xennet_release_tx_bufs(queue);
+ xennet_release_rx_bufs(queue);
+ gnttab_free_grant_references(queue->gref_tx_head);
+ gnttab_free_grant_references(queue->gref_rx_head);
+
/* End access and free the pages */
xennet_end_access(queue->tx_ring_ref, queue->tx.sring);
xennet_end_access(queue->rx_ring_ref, queue->rx.sring);
@@ -2010,10 +1998,7 @@ static int xennet_connect(struct net_device *dev)
{
struct netfront_info *np = netdev_priv(dev);
unsigned int num_queues = 0;
- int i, requeue_idx, err;
- struct sk_buff *skb;
- grant_ref_t ref;
- struct xen_netif_rx_request *req;
+ int err;
unsigned int feature_rx_copy;
unsigned int j = 0;
struct netfront_queue *queue = NULL;
@@ -2040,47 +2025,8 @@ static int xennet_connect(struct net_device *dev)
netdev_update_features(dev);
rtnl_unlock();
- /* By now, the queue structures have been set up */
- for (j = 0; j < num_queues; ++j) {
- queue = &np->queues[j];
-
- /* Step 1: Discard all pending TX packet fragments. */
- spin_lock_irq(&queue->tx_lock);
- xennet_release_tx_bufs(queue);
- spin_unlock_irq(&queue->tx_lock);
-
- /* Step 2: Rebuild the RX buffer freelist and the RX ring itself. */
- spin_lock_bh(&queue->rx_lock);
-
- for (requeue_idx = 0, i = 0; i < NET_RX_RING_SIZE; i++) {
- skb_frag_t *frag;
- const struct page *page;
- if (!queue->rx_skbs[i])
- continue;
-
- skb = queue->rx_skbs[requeue_idx] = xennet_get_rx_skb(queue, i);
- ref = queue->grant_rx_ref[requeue_idx] = xennet_get_rx_ref(queue, i);
- req = RING_GET_REQUEST(&queue->rx, requeue_idx);
-
- frag = &skb_shinfo(skb)->frags[0];
- page = skb_frag_page(frag);
- gnttab_grant_foreign_access_ref(
- ref, queue->info->xbdev->otherend_id,
- pfn_to_mfn(page_to_pfn(page)),
- 0);
- req->gref = ref;
- req->id = requeue_idx;
-
- requeue_idx++;
- }
-
- queue->rx.req_prod_pvt = requeue_idx;
-
- spin_unlock_bh(&queue->rx_lock);
- }
-
/*
- * Step 3: All public and private state should now be sane. Get
+ * All public and private state should now be sane. Get
* ready to start sending and receiving packets and give the driver
* domain a kick because we've probably just requeued some
* packets.
--
1.7.10.4
* [PATCH 3/3] xen-netfront: print correct number of queues
From: David Vrabel @ 2014-07-31 16:38 UTC
To: netdev; +Cc: David Vrabel, xen-devel, Konrad Rzeszutek Wilk, Boris Ostrovsky
When fewer than the requested number of queues can be created, include
the actual number in the warning (instead of the requested number).
Signed-off-by: David Vrabel <david.vrabel@citrix.com>
---
drivers/net/xen-netfront.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 0b133a3..28204bc 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -1815,8 +1815,8 @@ static int xennet_create_queues(struct netfront_info *info,
ret = xennet_init_queue(queue);
if (ret < 0) {
- dev_warn(&info->netdev->dev, "only created %d queues\n",
- num_queues);
+ dev_warn(&info->netdev->dev,
+ "only created %d queues\n", i);
num_queues = i;
break;
}
--
1.7.10.4
* Re: [PATCH net 0/3] xen-netfront: more multiqueue fixes
From: David Miller @ 2014-08-01 5:24 UTC
To: david.vrabel; +Cc: netdev, xen-devel, konrad.wilk, boris.ostrovsky
From: David Vrabel <david.vrabel@citrix.com>
Date: Thu, 31 Jul 2014 17:38:21 +0100
> A few more xen-netfront fixes for the multiqueue support added in
> 3.16-rc1. It would be great if these could make it into 3.16 but I
> suspect it's a little late for that now.
>
> The second patch fixes a significant resource leak that prevents
> guests from migrating more than a handful of times.
>
> These have been tested by repeatedly migrating a guest over 250 times
> (it would previously fail with this guest after only 8 iterations).
Series applied to 'net', thanks.