From: "Michael S. Tsirkin" <mst@redhat.com>
To: Krishna Kumar2 <krkumar2@in.ibm.com>
Cc: anthony@codemonkey.ws, avi@redhat.com, davem@davemloft.net,
kvm@vger.kernel.org, netdev@vger.kernel.org,
rusty@rustcorp.com.au, rick.jones2@hp.com
Subject: Re: [RFC PATCH 0/4] Implement multiqueue virtio-net
Date: Wed, 15 Sep 2010 07:33:28 +0200 [thread overview]
Message-ID: <20100915053328.GA25566@redhat.com> (raw)
In-Reply-To: <OF5A57530E.4E8C0A65-ON6525779D.00591022-6525779D.0059D5BB@in.ibm.com>
On Mon, Sep 13, 2010 at 09:53:40PM +0530, Krishna Kumar2 wrote:
> "Michael S. Tsirkin" <mst@redhat.com> wrote on 09/13/2010 05:20:55 PM:
>
> > > Results with the original kernel:
> > > _________________________________
> > > #        BW        SD       RSD
> > > _________________________________
> > >   1     20903       1         6
> > >   2     21963       6        25
> > >   4     22042      23       102
> > >   8     21674      97       419
> > >  16     22281     379      1663
> > >  24     22521     857      3748
> > >  32     22976    1528      6594
> > >  40     23197    2390     10239
> > >  48     22973    3542     15074
> > >  64     23809    6486     27244
> > >  80     23564   10169     43118
> > >  96     22977   14954     62948
> > > 128     23649   27067    113892
> > > _________________________________
> > >
> > > With a higher number of threads running in parallel, SD
> > > increased. In this case most threads run in parallel only
> > > till __dev_xmit_skb (#numtxqs=1). With the mq TX patch, a
> > > higher number of threads runs in parallel through
> > > ndo_start_xmit. I *think* the increase in SD is due to more
> > > threads running through a larger code path.
> > > From the numbers I posted with the patch (cut-n-pasting
> > > only the % parts), BW increased much more than SD,
> > > sometimes more than twice the increase in SD.
> >
> > Service demand is CPU/BW, right? So if BW goes up by 50%
> > and SD by 40%, this means that CPU more than doubled.
>
> I think the SD calculation might be more complicated;
> it seems to be based on adding up averages sampled
> and stored during the run. But I still don't see how
> CPU can double, e.g.:
> BW: 1000 -> 1500 (50%)
> SD: 100 -> 140 (40%)
> CPU: 10 -> 10.71 (7.1%)
Hmm. Time to look at the source. Which netperf version did you use?
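Just so we argue about the same arithmetic, here is a minimal
sketch of how I read it, assuming netperf's definition of service
demand as CPU microseconds consumed per KB transferred (i.e.
SD = CPU/BW); the helper below is only illustrative, not netperf
source:

/* Sketch only: service demand as CPU usec per KB, so for a fixed
 * elapsed time CPU scales as SD * BW.  Names are illustrative. */
#include <stdio.h>

static double service_demand(double cpu_util_pct, double elapsed_sec,
                             double kbytes_moved)
{
        double cpu_usec = cpu_util_pct / 100.0 * elapsed_sec * 1e6;
        return cpu_usec / kbytes_moved;         /* usec per KB */
}

int main(void)
{
        /* 10% CPU moving 1000 KB vs 21% CPU moving 1500 KB in a 1s
         * run: BW +50%, SD 100 -> 140 usec/KB (+40%), and CPU went
         * from 10% to 21%, i.e. it more than doubled. */
        printf("SD before: %.1f usec/KB\n", service_demand(10.0, 1.0, 1000.0));
        printf("SD after:  %.1f usec/KB\n", service_demand(21.0, 1.0, 1500.0));
        return 0;
}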
> > > N#       BW%      SD%     RSD%
> > >   4     54.30    40.00    -1.16
> > >   8     71.79    46.59    -2.68
> > >  16     71.89    50.40    -2.50
> > >  32     72.24    34.26   -14.52
> > >  48     70.10    31.51   -14.35
> > >  64     69.01    38.81    -9.66
> > >  96     70.68    71.26    10.74
> > >
> > > I also think the SD calculation gets skewed for
> > > guest->local host testing.
> >
> > If it's broken, let's fix it?
> >
> > > For this test, I ran a guest with numtxqs=16.
> > > The first result below is with my patch, which creates 16
> > > vhosts. The second result is with a modified patch which
> > > creates only 2 vhosts (testing with #netperfs = 64):
> >
> > My guess is it's not a good idea to have more TX VQs than guest CPUs.
>
> Definitely. I will try to run tomorrow with more reasonable
> values, and will also test with my second version of the patch,
> which creates a restricted number of vhosts, and post results.
>
> > I realize for management it's easier to pass in a single vhost fd, but
> > just for testing it's probably easier to add code in userspace to open
> > /dev/vhost multiple times.
> >
> > >
> > > #vhosts      BW%      SD%     RSD%
> > >    16      20.79   186.01   149.74
> > >     2      30.89    34.55    18.44
> > >
> > > The remote SD increases with the number of vhost threads,
> > > but that number seems to correlate with guest SD. So
> > > although BW% increased only slightly, from 20% to 30%, SD
> > > fell drastically, from 186% to 34%. I think it could be a
> > > calculation skew with host SD, which also fell from 150%
> > > to 18%.
> >
> > I think by default netperf looks in /proc/stat for CPU
> > utilization data, so host CPU utilization will include the
> > guest's CPU time as well, right?
>
> It appears that way to me too, but the data above seems to
> suggest the opposite...
>
> > I would go further and claim that for host/guest TCP,
> > CPU utilization and SD should always be identical.
> > Makes sense?
>
> It makes sense to me, but once again I am not sure how SD
> is really calculated, or whether it is linear in CPU. Cc'ing
> Rick in case he can comment...
Me neither. I should rephrase: I think we should always
use host CPU utilization.
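To be concrete about what I mean by host CPU utilization, here is
a minimal sketch that reads the aggregate "cpu" line from
/proc/stat on the host (assuming a kernel recent enough to export
the guest field; the parsing and names are illustrative only).
Time spent running the guest is charged to the user field as well
as being reported separately in guest, so a host-side measurement
already covers the guest's cycles:

/* Sketch: aggregate host CPU accounting from /proc/stat.
 * Fields: user nice system idle iowait irq softirq steal guest ...
 * Guest time is included in "user" (and repeated in "guest"), so
 * busy = total - idle - iowait already counts guest execution. */
#include <stdio.h>

int main(void)
{
        unsigned long long user, nice, sys, idle, iowait, irq, sirq,
                           steal, guest;
        FILE *f = fopen("/proc/stat", "r");

        if (!f)
                return 1;
        if (fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
                   &user, &nice, &sys, &idle, &iowait, &irq, &sirq,
                   &steal, &guest) != 9) {
                fclose(f);
                return 1;
        }
        fclose(f);

        unsigned long long total = user + nice + sys + idle + iowait +
                                   irq + sirq + steal;
        unsigned long long busy  = total - idle - iowait;

        printf("busy %llu / total %llu ticks (guest %llu, already in user)\n",
               busy, total, guest);
        return 0;
}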
> >
> > >
> > > I am planning to submit the 2nd patch revision with a
> > > restricted number of vhosts.
> > >
> > > > > Likely cause for the single-stream degradation with the
> > > > > multiple-vhost patch:
> > > > >
> > > > > 1. Two vhosts run, handling RX and TX respectively.
> > > > > I think the issue is related to cache ping-pong,
> > > > > especially since these run on different cpus/sockets.
> > > >
> > > > Right. With TCP I think we are better off handling
> > > > TX and RX for a socket by the same vhost, so that a
> > > > packet and its ack are handled by the same thread.
> > > > Is this what happens with the RX multiqueue patch?
> > > > How do we select an RX queue to put the packet on?
> > >
> > > My (unsubmitted) RX patch doesn't do this yet; that is
> > > something I will check.
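Just to sketch the kind of policy I have in mind here (this is not
code from either patchset, only an illustration): hash the flow
once and use the same hash to pick both the TX and the RX queue,
so that a packet and its ack land on the same vhost thread.

/* Illustrative steering sketch: pick one queue index per flow and
 * use it for both directions.  A real implementation would hash
 * the sk_buff with something like jhash() on the connection tuple. */
#include <stdint.h>
#include <stdio.h>

struct flow_key {
        uint32_t saddr, daddr;
        uint16_t sport, dport;
};

static uint32_t flow_hash(const struct flow_key *k)
{
        uint32_t h = k->saddr ^ k->daddr ^
                     ((uint32_t)k->sport << 16 | k->dport);

        h ^= h >> 16;
        h *= 0x45d9f3bU;
        h ^= h >> 16;
        return h;
}

/* Same index for TX and RX of a flow -> same vhost thread. */
static unsigned int select_queue(const struct flow_key *k,
                                 unsigned int numtxqs)
{
        return flow_hash(k) % numtxqs;
}

int main(void)
{
        struct flow_key k = { 0x0a000001, 0x0a000002, 12865, 50000 };

        printf("flow maps to queue %u of 16\n", select_queue(&k, 16));
        return 0;
}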
> > >
> > > Thanks,
> > >
> > > - KK
> >
> > You'll want to work on top of net-next; I think there's
> > RX flow-filtering work going on there.
>
> Thanks Michael, I will follow up on that for the RX patch,
> plus your suggestion on tying RX with TX.
>
> Thanks,
>
> - KK
>