From: Krishna Kumar <krkumar2@in.ibm.com>
To: rusty@rustcorp.com.au, mst@redhat.com
Cc: netdev@vger.kernel.org, kvm@vger.kernel.org, davem@davemloft.net,
Krishna Kumar <krkumar2@in.ibm.com>,
virtualization@lists.linux-foundation.org
Subject: [RFC] [ver3 PATCH 0/6] Implement multiqueue virtio-net
Date: Fri, 11 Nov 2011 18:32:23 +0530
Message-ID: <20111111130223.9878.59517.sendpatchset@krkumar2.in.ibm.com>
This patch series resurrects the earlier multiple TX/RX queues
functionality for virtio_net and addresses the issues pointed
out in review. It also includes an API to share irqs, e.g.
amongst the TX vqs.

I plan to run TCP/UDP STREAM and RR tests for local->host and
local->remote, and send the results in the next couple of days.
patch #1: Introduce VIRTIO_NET_F_MULTIQUEUE
patch #2: Move 'num_queues' to virtqueue
patch #3: virtio_net driver changes
patch #4: vhost_net changes
patch #5: Implement find_vqs_irq()
patch #6: Convert virtio_net driver to use find_vqs_irq()
Changes from rev2:
Michael:
-------
1. Added functions to handle setting up the RX/TX/CTRL vqs.
2. num_queue_pairs instead of numtxqs (a vq indexing sketch follows
   this list).
3. Experimental support for fewer irqs in find_vqs.
Rusty:
------
4. Cleaned up some existing "while (1)".
5. rvq/svq and rx_sg/tx_sg changed to vq and sg respectively.
6. Cleaned up some "#if 1" code.
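
For item 2, here is a minimal sketch of what indexing by
num_queue_pairs could look like. These helpers are purely
illustrative and assume an interleaved rx0, tx0, rx1, tx1, ...
layout with the control vq last; the actual ordering used by
patch #3 may differ.

static int rxq2vq(int qpair)
{
	return qpair * 2;
}

static int txq2vq(int qpair)
{
	return qpair * 2 + 1;
}

static int ctrl_vq_index(int num_queue_pairs)
{
	return num_queue_pairs * 2;
}
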
Issue when using patch #5:
-------------------------
The new API is designed to minimize code duplication; e.g.,
vp_find_vqs() is implemented as:
static int vp_find_vqs(...)
{
	return vp_find_vqs_irq(vdev, nvqs, vqs, callbacks, names, NULL);
}
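
For context, the prototype below is only a guess at the shape of the
new API, inferred from the call above; the real signature added by
patch #5 may differ. The final argument is assumed to describe how
vqs are grouped onto irqs, with NULL preserving the existing
one-vector-per-vq behaviour:

static int vp_find_vqs_irq(struct virtio_device *vdev, unsigned nvqs,
			   struct virtqueue *vqs[],
			   vq_callback_t *callbacks[],
			   const char *names[],
			   const unsigned *irq_grouping);
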
In my testing, when multiple tx/rx queues are used with multiple
netperf sessions, all the device tx queues stop a few thousand times
and are subsequently woken up by skb_xmit_done. But after some
40K-50K iterations of stop/wake, some of the txqs stop and no wake
interrupt ever arrives (modprobe -r followed by modprobe resolves
this, so it is not a system hang). At the time of the hang
(#txqs=#rxqs=4):
# egrep "CPU|virtio0" /proc/interrupts | grep -v config
           CPU0       CPU1       CPU2       CPU3
 41:      49057      49262      48828      49421   PCI-MSI-edge   virtio0-input.0
 42:       5066       5213       5221       5109   PCI-MSI-edge   virtio0-output.0
 43:      43380      43770      43007      43148   PCI-MSI-edge   virtio0-input.1
 44:      41433      41727      42101      41175   PCI-MSI-edge   virtio0-input.2
 45:      38465      37629      38468      38768   PCI-MSI-edge   virtio0-input.3
# tc -s qdisc show dev eth0
qdisc mq 0: root
 Sent 393196939897 bytes 271191624 pkt (dropped 59897, overlimits 0 requeues 67156)
 backlog 25375720b 1601p requeues 67156
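
For readers unfamiliar with the driver, the stop/wake handshake
referred to above is roughly the usual virtio_net pattern:
ndo_start_xmit stops a subqueue when its ring fills, and the tx
interrupt callback (skb_xmit_done) wakes it. A simplified sketch,
not the patched code (vq_to_txq() is an illustrative helper):

static void skb_xmit_done(struct virtqueue *vq)
{
	struct virtnet_info *vi = vq->vdev->priv;

	/* Only one wakeup is needed; suppress further tx interrupts. */
	virtqueue_disable_cb(vq);

	/* Wake the tx queue that start_xmit stopped when its ring
	 * filled.  vq_to_txq() maps the vq back to its queue index. */
	netif_wake_subqueue(vi->dev, vq_to_txq(vq));
}

This matches the report above: once the wake interrupt stops
arriving, the stopped subqueues are never woken again.
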
I am not sure if patch #5 is responsible for the hang. Separately,
without patches #5 and #6, I changed vp_find_vqs() to:
static int vp_find_vqs(...)
{
	return vp_try_to_find_vqs(vdev, nvqs, vqs, callbacks, names,
				  false, false);
}
No packets were getting TX'd with this change when #txqs > 1. This is
with the MQ-only patch, which doesn't touch the drivers/virtio/
directory. Also, the MQ patch works reasonably well with 2 vectors -
with use_msix=1 and per_vq_vectors=0 in vp_find_vqs().
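
For reference, that 2-vector configuration presumably corresponds to
the existing shared-vector branch of vp_try_to_find_vqs(), i.e.
use_msix true and per_vq_vectors false, roughly:

static int vp_find_vqs(...)
{
	/* One MSI-X vector for config changes, one shared by all vqs. */
	return vp_try_to_find_vqs(vdev, nvqs, vqs, callbacks, names,
				  true, false);
}
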
Patch against net-next - please review.
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
---