From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Michael S. Tsirkin"
Subject: Re: [v3 RFC PATCH 0/4] Implement multiqueue virtio-net
Date: Wed, 3 Nov 2010 09:01:16 +0200
Message-ID: <20101103070116.GC5245@redhat.com>
References: <20101020085452.15579.76002.sendpatchset@krkumar2.in.ibm.com>
 <20101025161718.GA19559@redhat.com>
 <20101026093846.GA6766@redhat.com>
 <20101026110913.GC7922@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: anthony@codemonkey.ws, arnd@arndb.de, avi@redhat.com,
 davem@davemloft.net, eric.dumazet@gmail.com, kvm@vger.kernel.org,
 netdev@vger.kernel.org, rusty@rustcorp.com.au
To: Krishna Kumar2
Return-path:
Received: from mx1.redhat.com ([209.132.183.28]:57653 "EHLO mx1.redhat.com"
 rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
 id S1753611Ab0KCHBj (ORCPT ); Wed, 3 Nov 2010 03:01:39 -0400
Content-Disposition: inline
In-Reply-To:
Sender: netdev-owner@vger.kernel.org
List-ID:

On Thu, Oct 28, 2010 at 12:48:57PM +0530, Krishna Kumar2 wrote:
> > Krishna Kumar2/India/IBM wrote on 10/28/2010 10:44:14 AM:
> >
> > > > > > Results for UDP BW tests (unidirectional, sum across
> > > > > > 3 iterations, each iteration of 45 seconds, default
> > > > > > netperf, vhosts bound to cpus 0-3; no other tuning):
> > > > >
> > > > > Is binding vhost threads to CPUs really required?
> > > > > What happens if we let the scheduler do its job?
> > > >
> > > > Nothing drastic, I remember BW% and SD% both improved a
> > > > bit as a result of binding.
> > > If there's a significant improvement this would mean that
> > > we need to rethink the vhost-net interaction with the scheduler.
> > I will get a test run with and without binding and post the
> > results later today.
> Correction: The result with binding is much better for
> SD/CPU compared to without-binding:

Something that was suggested to me off-list is trying to set
SMP affinity for the NIC: in the host-to-guest case probably
virtio-net; for external-to-guest, the host NIC as well.
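For reference, a rough sketch of how such binding might be done on the
host. The device name (eth0), the vhost thread-name pattern, and the CPU
set 0-3 are assumptions taken from the test description above; the
privileged commands are shown commented out since they need root and a
live vhost-net setup:

```shell
#!/bin/sh
# Sketch: pin vhost worker threads and NIC interrupts to CPUs 0-3.

# Build a hex affinity mask with bits 0-3 set (one bit per CPU).
mask=0
for cpu in 0 1 2 3; do
    mask=$(( mask | (1 << cpu) ))
done
hexmask=$(printf '%x' "$mask")
echo "affinity mask for CPUs 0-3: $hexmask"   # prints "f"

# Pin vhost worker threads (kernel threads named vhost-<qemu-pid>):
#   for pid in $(pgrep -f 'vhost-'); do taskset -p "$hexmask" "$pid"; done

# Steer the host NIC's interrupts to the same CPUs. Note irqbalance may
# rewrite this unless it is stopped first:
#   for irq in $(awk -F: '/eth0/ {gsub(/ /,"",$1); print $1}' /proc/interrupts); do
#       echo "$hexmask" > "/proc/irq/$irq/smp_affinity"
#   done
```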
-- MST