From: "Michael S. Tsirkin" <mst@redhat.com>
To: Razya Ladelsky <RAZYA@il.ibm.com>
Cc: Alex Glikson <GLIKSON@il.ibm.com>,
Eran Raichstein <ERANRA@il.ibm.com>,
Yossi Kuperman1 <YOSSIKU@il.ibm.com>,
Joel Nider <JOELN@il.ibm.com>,
abel.gordon@gmail.com, kvm@vger.kernel.org,
Eyal Moscovici <EYALMO@il.ibm.com>
Subject: Re: Fw: Benchmarking for vhost polling patch
Date: Mon, 5 Jan 2015 14:35:36 +0200 [thread overview]
Message-ID: <20150105123536.GA21242@redhat.com> (raw)
In-Reply-To: <OFD75B5428.AB01A5FF-ONC2257DC0.0046C5CD-C2257DC0.00475A5E@il.ibm.com>
Hi Razya,
Thanks for the update.
So that's reasonable I think, and it makes sense
to keep working on this in isolation - it's more
manageable at this size.
The big questions in my mind:
- What happens if the system is lightly loaded?
E.g. a ping/pong benchmark. How much extra CPU are
we wasting?
- We see the best performance on your system is with 10usec worth of polling.
It's OK to be able to tune it for best performance, but
most people don't have the time or the inclination.
So what would be the best value for other CPUs?
- Should this be tunable from userspace per vhost instance?
Why is it only tunable globally?
- How bad is it if you don't pin vhost and vcpu threads?
Is the scheduler smart enough to pull them apart?
- What happens in overcommit scenarios? Does polling make things
much worse?
Clearly polling will work worse if e.g. vhost and vcpu
share the host cpu. How can we avoid conflicts?
For the last two questions, better cooperation with the host scheduler will
likely help here.
See e.g. http://thread.gmane.org/gmane.linux.kernel/1771791/focus=1772505
I'm currently looking at pushing something similar upstream,
if it goes in vhost polling can do something similar.
Any data points to shed light on these questions?
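To make the tradeoff behind these questions concrete: the patch has the vhost thread busy-poll the virtqueue for a bounded number of microseconds before falling back to the usual guest-kick notification. A minimal sketch of that control flow, with illustrative names rather than the actual vhost internals:

```python
import time

def serve_queue(has_work, wait_for_kick, poll_usecs):
    """Busy-poll for up to poll_usecs microseconds; if no work shows up,
    fall back to the blocking notification path (the guest 'kick')."""
    deadline = time.monotonic() + poll_usecs / 1_000_000
    while time.monotonic() < deadline:
        if has_work():
            return "polled"    # work found while spinning: no kick/exit cost
    wait_for_kick()            # nothing arrived: block as before
    return "notified"
```

The spinning burns host CPU whether or not work arrives, which is exactly why light load and overcommit are the worrying cases, and why the choice of poll_usecs (the 10usec above) matters.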
On Thu, Jan 01, 2015 at 02:59:21PM +0200, Razya Ladelsky wrote:
> Hi Michael,
> Just a follow-up on the polling patch numbers.
> Please let me know if you find these numbers satisfying enough to continue
> with submitting this patch.
> Otherwise - we'll have this patch submitted as part of the larger Elvis
> patch set rather than independently.
> Thank you,
> Razya
>
> ----- Forwarded by Razya Ladelsky/Haifa/IBM on 01/01/2015 09:37 AM -----
>
> From: Razya Ladelsky/Haifa/IBM@IBMIL
> To: mst@redhat.com
> Cc:
> Date: 25/11/2014 02:43 PM
> Subject: Re: Benchmarking for vhost polling patch
> Sent by: kvm-owner@vger.kernel.org
>
>
>
> Hi Michael,
>
> > Hi Razya,
> > On the netperf benchmark, it looks like polling=10 gives a modest but
> > measurable gain. So from that perspective it might be worth it if it's
> > not too much code, though we'll need to spend more time checking the
> > macro effect - we barely moved the needle on the macro benchmark and
> > that is suspicious.
>
> I ran memcached with various values for the key & value arguments, and
> managed to see a bigger impact from polling than when I used the default
> values. Here are the numbers:
>
> key=250, value=2048:
>
>              TPS      net    vhost  vm    TPS/cpu   TPS/CPU
>                       rate   util   util            change
> polling=0    101540   103.0  46     100   695.47
> polling=5    136747   123.0  83     100   747.25    0.074440609
> polling=7    140722   125.7  84     100   764.79    0.099663658
> polling=10   141719   126.3  87     100   757.85    0.089688003
> polling=15   142430   127.1  90     100   749.63    0.077863015
> polling=25   146347   128.7  95     100   750.49    0.079107993
> polling=50   150882   131.1  100    100   754.41    0.084733701
>
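The derived columns in that table can be reproduced if TPS/cpu is read as TPS divided by the combined vhost+vm CPU utilization (an assumption about the methodology, but it matches the numbers above):

```python
# Assumed formula: TPS/cpu = TPS / (vhost util + vm util);
# "change" is relative to the polling=0 baseline.
rows = {0: (101540, 46, 100), 5: (136747, 83, 100), 10: (141719, 87, 100)}
base = rows[0][0] / (rows[0][1] + rows[0][2])
for polling, (tps, vhost, vm) in sorted(rows.items()):
    per_cpu = tps / (vhost + vm)
    print(f"polling={polling}: TPS/cpu={per_cpu:.2f} "
          f"change={(per_cpu - base) / base:.9f}")
```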
> Macro benchmarks are less I/O intensive than the micro benchmark, which is
> why we can expect less impact for polling as compared to netperf.
> However, as shown above, we managed to get a 10% TPS/CPU improvement with
> the polling patch.
>
> > Is there a chance you are actually trading latency for throughput?
> > do you observe any effect on latency?
>
> No.
>
> > How about trying some other benchmark, e.g. NFS?
> >
>
> I tried, but it didn't produce enough I/O (vhost was at most at 15% util)
OK but was there a regression in this case?
> >
> > Also, I am wondering:
> >
> > since the vhost thread is polling in the kernel anyway, shouldn't
> > we try and poll the host NIC?
> > that would likely reduce at least the latency significantly,
> > wouldn't it?
> >
>
> Yes, it could be a great addition at some point, but needs a thorough
> investigation. In any case, not a part of this patch...
>
> Thanks,
> Razya
>
Thread overview: 7+ messages
2015-01-01 12:59 Fw: Benchmarking for vhost polling patch Razya Ladelsky
2015-01-05 12:35 ` Michael S. Tsirkin [this message]
2015-01-11 12:44 ` Razya Ladelsky
2015-01-12 10:36 ` Michael S. Tsirkin
2015-01-14 15:01 ` Razya Ladelsky
2015-01-14 15:23 ` Michael S. Tsirkin
2015-01-18 7:40 ` Razya Ladelsky