From mboxrd@z Thu Jan 1 00:00:00 1970
From: Shirley Ma
Subject: Re: vhost-net patches
Date: Wed, 28 Oct 2009 09:45:37 -0700
Message-ID: <1256748337.6433.11.camel@localhost.localdomain>
References: <20091023110438.GA20229@redhat.com>
	<1256310168.4443.2.camel@localhost.localdomain>
	<1256310765.4443.4.camel@localhost.localdomain>
	<1256315020.4443.12.camel@localhost.localdomain>
	<20091026200513.GA26623@redhat.com>
	<1256592889.10142.8.camel@localhost.localdomain>
	<20091027064302.GB26914@redhat.com>
	<1256654819.4753.6.camel@localhost.localdomain>
	<20091027152753.GA4622@redhat.com>
	<1256661378.6745.2.camel@localhost.localdomain>
	<20091028153859.GA28926@redhat.com>
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 7bit
Cc: Sridhar Samudrala, Shirley Ma, David Stevens, kvm@vger.kernel.org,
	sri@linux.vnet.ibm.com, mashirle@linux.vnet.ibm.com
To: "Michael S. Tsirkin"
Return-path: 
Received: from e32.co.us.ibm.com ([32.97.110.150]:35477 "EHLO
	e32.co.us.ibm.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1755114AbZJ1QqC (ORCPT ); Wed, 28 Oct 2009 12:46:02 -0400
Received: from d03relay02.boulder.ibm.com (d03relay02.boulder.ibm.com
	[9.17.195.227]) by e32.co.us.ibm.com (8.14.3/8.13.1) with ESMTP id
	n9SGevJ6014777 for ; Wed, 28 Oct 2009 10:40:57 -0600
Received: from d03av01.boulder.ibm.com (d03av01.boulder.ibm.com
	[9.17.195.167]) by d03relay02.boulder.ibm.com (8.13.8/8.13.8/NCO v9.1)
	with ESMTP id n9SGjnKu126462 for ; Wed, 28 Oct 2009 10:45:51 -0600
Received: from d03av01.boulder.ibm.com (loopback [127.0.0.1]) by
	d03av01.boulder.ibm.com (8.14.3/8.13.1/NCO v10.0 AVout) with ESMTP id
	n9SGjdY6003291 for ; Wed, 28 Oct 2009 10:45:39 -0600
In-Reply-To: <20091028153859.GA28926@redhat.com>
Sender: kvm-owner@vger.kernel.org
List-ID: 

Hello Michael,

On Wed, 2009-10-28 at 17:39 +0200, Michael S. Tsirkin wrote:
> Here's another hack to try.
> It will break raw sockets,
> but just as a test:

This patch looks better than the previous one for guest-to-host
TCP_STREAM performance. The transmission-queue-full condition still
occurs, but the TCP_STREAM result is 43xxMb/s (userspace is about
3500Mb/s). When I increase the transmission queue size to 1K, the
performance bursts up to 53xxMb/s. But in some runs I still hit
several transmission-queue-full events during a 60-second run, even
with the 1K queue size.

Thanks
Shirley
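P.S. For anyone trying to reproduce the numbers above, the commands
below are only a rough sketch of how the transmit queue length can be
raised and the guest-to-host stream test rerun. The interface name
eth0, the use of iproute2's `ip link`, and the netperf invocation are
my assumptions, not taken from this thread.

```shell
# Show the current TX queue length of the guest interface
# ("eth0" is an assumption; substitute the real device name):
ip link show eth0

# Raise the transmit queue to 1K entries (requires root):
ip link set dev eth0 txqueuelen 1024

# Re-run a 60-second guest-to-host bulk transfer with netperf
# (netperf and the host address $HOST_IP are assumptions):
netperf -H "$HOST_IP" -t TCP_STREAM -l 60
```

Note that `ip link set ... txqueuelen` only changes the qdisc-visible
queue length; whether it helps depends on where the ring actually
fills.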