From: "Michael S. Tsirkin" <mst@redhat.com>
To: Krishna Kumar2 <krkumar2@in.ibm.com>
Cc: anthony@codemonkey.ws, arnd@arndb.de, avi@redhat.com,
davem@davemloft.net, kvm@vger.kernel.org, netdev@vger.kernel.org,
rusty@rustcorp.com.au, herbert@gondor.hengli.com.au
Subject: Re: [v2 RFC PATCH 0/4] Implement multiqueue virtio-net
Date: Wed, 6 Oct 2010 21:03:16 +0200 [thread overview]
Message-ID: <20101006190316.GB14825@redhat.com> (raw)
In-Reply-To: <OF806283F6.38268772-ON652577B4.005E805A-652577B4.00611B1C@in.ibm.com>
On Wed, Oct 06, 2010 at 11:13:31PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 10/05/2010 11:53:23 PM:
>
> > > > Any idea where does this come from?
> > > > Do you see more TX interrupts? RX interrupts? Exits?
> > > > Do interrupts bounce more between guest CPUs?
> > > > 4. Identify reasons for single netperf BW regression.
> > >
> > > After testing various combinations of #txqs, #vhosts, #netperf
> > > sessions, I think the drop for 1 stream is due to TX and RX for
> > > a flow being processed on different cpus.
> >
> > Right. Can we fix it?
>
> I am not sure how to. My initial patch had one thread but gave
> small gains and ran into limitations once the number of sessions
> became large.
Sure. We will need multiple RX queues, and have a single thread
handle each TX/RX pair. Then we need to make sure packets
from a given flow on TX land on the same thread on RX.
As flows can be hashed differently, for this to work we'll have to
expose this info in the host/guest interface.
But since multiqueue implies host/guest ABI changes anyway,
this point is moot.
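To make this concrete, here is a minimal illustrative sketch (not taken
from the patch set; all names are hypothetical) of how guest TX queue
selection and host RX steering could agree on a flow hash, so that one
vhost thread serves both directions of a given flow:

/*
 * Purely illustrative: map a flow's 4-tuple to a TX/RX queue pair so
 * the same vhost thread handles both TX and RX for that flow.
 */
#include <stdint.h>

struct flow_key {
	uint32_t saddr, daddr;	/* IPv4 source/destination address */
	uint16_t sport, dport;	/* TCP/UDP source/destination port */
};

/* Stand-in hash; a real interface would have to fix the exact function. */
static uint32_t flow_hash(const struct flow_key *k)
{
	uint32_t h = k->saddr ^ k->daddr;

	h ^= ((uint32_t)k->sport << 16) | k->dport;
	h ^= h >> 16;
	h *= 0x45d9f3b;
	h ^= h >> 16;
	return h;
}

/*
 * Both sides compute the same mapping: the guest to pick the TX queue,
 * the host to steer RX packets of the same flow to the paired queue.
 */
static unsigned int select_queue_pair(const struct flow_key *k,
				      unsigned int nr_pairs)
{
	return flow_hash(k) % nr_pairs;
}

The point is not the particular hash, but that both sides compute the
same flow-to-pair mapping, which is why it would have to be part of the
host/guest interface.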
BTW, an interesting approach could be using bonding
and multiple virtio-net interfaces.
What are the disadvantages of such a setup? One advantage
is that it can be made to work in existing guests.
> > > I did two more tests:
> > > 1. Pin vhosts to same CPU:
> > > - BW drop is much lower for 1 stream case (- 5 to -8% range)
> > > - But performance is not so high for more sessions.
> > > 2. Changed vhost to be single threaded:
> > > - No degradation for 1 session, and improvement for up to
> > > 8, sometimes 16 streams (5-12%).
> > > - BW degrades after that, all the way till 128 netperf sessions.
> > > - But overall CPU utilization improves.
> > > Summary of the entire run (for 1-128 sessions):
> > > txq=4: BW: (-2.3) CPU: (-16.5) RCPU: (-5.3)
> > > txq=16: BW: (-1.9) CPU: (-24.9) RCPU: (-9.6)
> > >
> > > I don't see any of the reasons mentioned above. However, for higher
> > > numbers of netperf sessions, I see a big increase in retransmissions:
> >
> > Hmm, ok, and do you see any errors?
>
> I haven't seen any in any statistics, messages, etc.
Herbert, could you help out with debugging this increase in retransmissions,
please? An older mail on netdev in this thread has some numbers that seem
to imply that we start hitting retransmissions much more as the number of
flows goes up.
> Also no
> retransmissions for txq=1.
While it's nice that we have this parameter, the need to choose between
single-stream and multi-stream performance when you start the VM makes
this patch much less interesting IMHO.
--
MST