From: Rick Jones <rick.jones2@hp.com>
To: Sridhar Samudrala <sri@us.ibm.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>,
Tom Lendacky <toml@us.ibm.com>, netdev <netdev@vger.kernel.org>,
"kvm@vger.kernel.org" <kvm@vger.kernel.org>
Subject: Re: [PATCH] vhost: Make it more scalable by creating a vhost thread per device.
Date: Thu, 08 Apr 2010 17:14:35 -0700 [thread overview]
Message-ID: <4BBE716B.7050904@hp.com> (raw)
In-Reply-To: <1270771542.31186.397.camel@w-sridhar.beaverton.ibm.com>
> Here are the results with netperf TCP_STREAM 64K guest to host on a
> 8-cpu Nehalem system.
I presume you mean an 8-core Nehalem-EP, or did you mean an 8-processor Nehalem-EX?
Don't get me wrong, I *like* the netperf 64K TCP_STREAM test, I like it a lot!-)
But I find it incomplete on its own, so I also like to run things like
single-instance TCP_RR and multiple-instance, multiple-"transaction"
(./configure --enable-burst) TCP_RR tests, particularly when concerned with
"scaling" issues.
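For reference, invocations of the sort described above might look like the
following. This is my own sketch, not from the thread: the target address,
test length, and burst depth are placeholder values, and the commands are
built as strings here so the mix is visible at a glance; paste them into a
terminal on a box with netperf installed (and netserver running on $HOST)
to actually run them.

```shell
# Placeholder target; substitute the host-side address of your guest/host pair.
HOST=${HOST:-192.168.122.1}

# Bulk throughput with 64K sends, as in the quoted TCP_STREAM results.
STREAM_CMD="netperf -H $HOST -t TCP_STREAM -l 30 -- -m 65536"

# Single-instance TCP_RR: one request/response transaction in flight at a time.
RR_CMD="netperf -H $HOST -t TCP_RR -l 30"

# Multiple-transaction TCP_RR: requires netperf built with
# ./configure --enable-burst. -b 16 keeps 16 additional transactions
# outstanding; -D sets TCP_NODELAY on the data connection.
BURST_CMD="netperf -H $HOST -t TCP_RR -l 30 -- -b 16 -D"

printf '%s\n' "$STREAM_CMD" "$RR_CMD" "$BURST_CMD"
```

Running several of the TCP_RR instances concurrently (one per guest) is what
exposes scaling behavior that a single bulk-transfer stream can hide.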
happy benchmarking,
rick jones
> It shows cumulative bandwidth in Mbps and host
> CPU utilization.
>
> Current default single vhost thread
> -----------------------------------
> 1 guest: 12500 37%
> 2 guests: 12800 46%
> 3 guests: 12600 47%
> 4 guests: 12200 47%
> 5 guests: 12000 47%
> 6 guests: 11700 47%
> 7 guests: 11340 47%
> 8 guests: 11200 48%
>
> vhost thread per cpu
> --------------------
> 1 guest: 4900 25%
> 2 guests: 10800 49%
> 3 guests: 17100 67%
> 4 guests: 20400 84%
> 5 guests: 21000 90%
> 6 guests: 22500 92%
> 7 guests: 23500 96%
> 8 guests: 24500 99%
>
> vhost thread per guest interface
> --------------------------------
> 1 guest: 12500 37%
> 2 guests: 21000 72%
> 3 guests: 21600 79%
> 4 guests: 21600 85%
> 5 guests: 22500 89%
> 6 guests: 22800 94%
> 7 guests: 24500 98%
> 8 guests: 26400 99%
>
> Thanks
> Sridhar
>
Thread overview: 13+ messages
2010-04-02 17:31 [PATCH] vhost: Make it more scalable by creating a vhost thread per device Sridhar Samudrala
2010-04-04 11:14 ` Michael S. Tsirkin
2010-04-05 17:35 ` Sridhar Samudrala
2010-04-06 18:49 ` Avi Kivity
2010-04-09 0:05 ` Sridhar Samudrala
2010-04-09 0:14 ` Rick Jones [this message]
2010-04-09 15:39 ` Sridhar Samudrala
2010-04-09 17:13 ` Rick Jones
2010-04-11 15:47 ` Michael S. Tsirkin
2010-04-12 17:35 ` Sridhar Samudrala
2010-04-12 17:42 ` Michael S. Tsirkin
2010-04-12 17:50 ` Rick Jones
2010-04-12 16:27 ` Michael S. Tsirkin