From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [PATCH] vhost: Add polling mode
Date: Wed, 20 Aug 2014 13:05:15 +0200
Message-ID: <20140820110515.GD17371@redhat.com>
References: <1407659404-razya@il.ibm.com>
	<20140810083035.0CF58380729@moren.haifa.ibm.com>
	<20140810194559.GA4344@redhat.com>
	<20140811.124621.576073630604147753.davem@davemloft.net>
	<20140812091850.GD6440@redhat.com>
	<20140813121550.GA21026@redhat.com>
	<20140817125809.GA22213@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Content-Disposition: inline
To: Razya Ladelsky
Cc: Eran Raichstein, kvm-owner@vger.kernel.org, kvm@vger.kernel.org,
	linux-kernel@vger.kernel.org, abel.gordon@gmail.com, Alex Glikson,
	Yossi Kuperman1, Joel Nider, netdev@vger.kernel.org,
	virtualization@lists.linux-foundation.org, David Miller
Sender: virtualization-bounces@lists.linux-foundation.org
Errors-To: virtualization-bounces@lists.linux-foundation.org
List-Id: netdev.vger.kernel.org

On Tue, Aug 19, 2014 at 11:36:31AM +0300, Razya Ladelsky wrote:
> > That was just one example. There are many other possibilities. Either
> > actually make the systems load all host CPUs equally, or divide
> > throughput by host CPU.
> >
>
> The polling patch adds this capability to vhost, reducing costly exit
> overhead when the VM is under load.
>
> To load the VM, I ran netperf with a message size of 256 bytes:
>
> Without polling: 2480 Mbits/sec, utilization: vm - 100%  vhost - 64%
> With polling:    4160 Mbits/sec, utilization: vm - 100%  vhost - 100%
>
> Therefore, throughput/cpu (Mbits/sec divided by combined vm + vhost
> utilization) is 15.1 without polling (2480/164) and 20.8 with polling
> (4160/200).
>

Can you please present results in a form that makes it possible to see
the effect on various configurations and workloads? Here's one example
where this was done:
https://lkml.org/lkml/2014/8/14/495

You really should also provide data about your host configuration
(missing in the above link).

> My intention was to load vhost as close as possible to 100% utilization
> without polling, in order to compare it to the polling case (where
> vhost is always at 100%).
> The best use case, of course, would be when the shared vhost thread
> work (TBD) is integrated; then vhost would actually use its polling
> cycles to handle requests from multiple devices (even from multiple
> VMs).
>
> Thanks,
> Razya

-- 
MST