* Fw: Benchmarking for vhost polling patch
@ 2015-01-01 12:59 Razya Ladelsky
  2015-01-05 12:35 ` Michael S. Tsirkin
  0 siblings, 1 reply; 7+ messages in thread
From: Razya Ladelsky @ 2015-01-01 12:59 UTC (permalink / raw)
  To: mst
  Cc: Alex Glikson, Eran Raichstein, Yossi Kuperman1, Joel Nider,
	abel.gordon, kvm, Eyal Moscovici, Razya Ladelsky

Hi Michael,
Just a follow-up on the polling patch numbers.
Please let me know whether you find these numbers convincing enough to
proceed with submitting this patch on its own.
Otherwise, we'll submit it as part of the larger Elvis patch set rather
than independently.
Thank you,
Razya 

----- Forwarded by Razya Ladelsky/Haifa/IBM on 01/01/2015 09:37 AM -----

From:   Razya Ladelsky/Haifa/IBM@IBMIL
To:     mst@redhat.com
Cc: 
Date:   25/11/2014 02:43 PM
Subject:        Re: Benchmarking for vhost polling patch
Sent by:        kvm-owner@vger.kernel.org



Hi Michael,

> Hi Razya,
> On the netperf benchmark, it looks like polling=10 gives a modest but
> measurable gain.  So from that perspective it might be worth it if it's
> not too much code, though we'll need to spend more time checking the
> macro effect - we barely moved the needle on the macro benchmark and
> that is suspicious.

I ran memcached with various values for the key and value sizes, and saw a
bigger impact from polling than with the default values.
Here are the numbers:

key=250      TPS      net    vhost     vm        TPS/CPU  TPS/CPU change
value=2048            rate   util (%)  util (%)           (vs polling=0)

polling=0    101540   103.0  46        100       695.47
polling=5    136747   123.0  83        100       747.25   0.074440609
polling=7    140722   125.7  84        100       764.79   0.099663658
polling=10   141719   126.3  87        100       757.85   0.089688003
polling=15   142430   127.1  90        100       749.63   0.077863015
polling=25   146347   128.7  95        100       750.49   0.079107993
polling=50   150882   131.1  100       100       754.41   0.084733701
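
For context on the workload: the thread does not say which memcached load
generator was used, so the snippet below is only an illustrative sketch of
what a single key=250 / value=2048 operation looks like on the memcached
text protocol. The address and the single set/get pair are assumptions; a
real benchmark drives many connections in a loop and reports completed
transactions per second.

import socket

# Illustrative sketch only -- not the benchmark tool used for the table above.
KEY_SIZE, VALUE_SIZE = 250, 2048          # matches the table configuration
HOST, PORT = "127.0.0.1", 11211           # assumed memcached endpoint

key = "k" * KEY_SIZE                      # 250 bytes is memcached's maximum key length
value = b"v" * VALUE_SIZE

with socket.create_connection((HOST, PORT)) as s:
    # store one item of the configured size
    s.sendall(f"set {key} 0 0 {len(value)}\r\n".encode() + value + b"\r\n")
    assert s.recv(128).startswith(b"STORED")

    # read it back; a benchmark loop would repeat set/get and time completions
    s.sendall(f"get {key}\r\n".encode())
    assert s.recv(4096).startswith(b"VALUE")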

Macro benchmarks are less I/O-intensive than the netperf micro benchmark,
so we expect polling to have a smaller impact here than it had on netperf.
Even so, as shown above, the polling patch gives up to a ~10% TPS/CPU
improvement.
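
For clarity, the derived columns can be reproduced from the raw numbers. The
formula below (TPS divided by the sum of vhost and vm CPU utilization, with
the change taken relative to polling=0) is inferred from the reported values
rather than stated explicitly in the thread:

# Reproduces the TPS/CPU and change columns of the table above.
rows = {
    # polling: (TPS, vhost util %, vm util %)
    0:  (101540,  46, 100),
    5:  (136747,  83, 100),
    7:  (140722,  84, 100),
    10: (141719,  87, 100),
    15: (142430,  90, 100),
    25: (146347,  95, 100),
    50: (150882, 100, 100),
}

baseline = rows[0][0] / (rows[0][1] + rows[0][2])        # 695.47 for polling=0

for polling, (tps, vhost_util, vm_util) in rows.items():
    tps_per_cpu = tps / (vhost_util + vm_util)
    change = tps_per_cpu / baseline - 1                  # ~0.0997 at polling=7
    print(f"polling={polling:<3d} TPS/CPU={tps_per_cpu:7.2f} change={change:+.4f}")

This is also where the ~10% figure comes from: the polling=7 row gives a
0.0997 relative TPS/CPU improvement over polling=0.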

> Is there a chance you are actually trading latency for throughput?
> do you observe any effect on latency?

No.

> How about trying some other benchmark, e.g. NFS?
> 

We tried NFS, but it did not generate enough I/O to exercise polling (vhost
utilization was at most 15%).

> 
> Also, I am wondering:
> 
> since vhost thread is polling in kernel anyway, shouldn't
> we try and poll the host NIC?
> that would likely reduce at least the latency significantly,
> won't it?
> 

Yes, it could be a great addition at some point, but it needs a thorough
investigation. In any case, it is not part of this patch.
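
For readers not following the patch series: the polling= values in the table
configure how aggressively the vhost worker keeps checking its queues before
going back to sleep. The sketch below is a rough userspace-style illustration
of that bounded-polling idea, not the actual kernel code; the queue, kick
event, handler, and microsecond window are hypothetical stand-ins.

import collections
import threading
import time

# Conceptual illustration of bounded polling, NOT the vhost kernel code.
# Idea: when the worker runs out of work, it keeps checking the queue for a
# bounded window before blocking on a notification ("kick"), trading some
# CPU for lower wakeup latency.  Everything below is a hypothetical stand-in.

requests = collections.deque()        # stand-in for the virtqueue
kick = threading.Event()              # stand-in for the guest's notification

def handle(req):                      # hypothetical request handler
    pass

def worker(poll_window_us):
    while True:
        if requests:
            handle(requests.popleft())
            continue
        # Queue is empty: busy-poll briefly before giving up and sleeping.
        deadline = time.monotonic() + poll_window_us / 1e6
        while time.monotonic() < deadline and not requests:
            pass                      # burn CPU on purpose to catch new work quickly
        if not requests:
            kick.wait()               # block until new work is signalled
            kick.clear()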

Thanks,
Razya





Thread overview: 7+ messages
2015-01-01 12:59 Fw: Benchmarking for vhost polling patch Razya Ladelsky
2015-01-05 12:35 ` Michael S. Tsirkin
2015-01-11 12:44   ` Razya Ladelsky
2015-01-12 10:36     ` Michael S. Tsirkin
2015-01-14 15:01       ` Razya Ladelsky
2015-01-14 15:23         ` Michael S. Tsirkin
2015-01-18  7:40           ` Razya Ladelsky
