From mboxrd@z Thu Jan 1 00:00:00 1970
From: Xiao Guangrong
Subject: Re: [PATCH 0/15] KVM: optimize for MMIO handled
Date: Wed, 08 Jun 2011 11:25:42 +0800
Message-ID: <4DEEEBB6.5090805@cn.fujitsu.com>
References: <4DEE205E.8000601@cn.fujitsu.com> <20110608121128.2caecdb3.yoshikawa.takuya@oss.ntt.co.jp>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
Cc: Avi Kivity, Marcelo Tosatti, LKML, KVM
To: Takuya Yoshikawa
Return-path:
In-Reply-To: <20110608121128.2caecdb3.yoshikawa.takuya@oss.ntt.co.jp>
Sender: linux-kernel-owner@vger.kernel.org
List-Id: kvm.vger.kernel.org

On 06/08/2011 11:11 AM, Takuya Yoshikawa wrote:
> On Tue, 07 Jun 2011 20:58:06 +0800
> Xiao Guangrong wrote:
>
>> The performance test result:
>>
>> Netperf (TCP_RR):
>> ===========================
>> ept is enabled:
>>
>>         Before    After
>> 1st     709.58    734.60
>> 2nd     715.40    723.75
>> 3rd     713.45    724.22
>>
>> ept=0 bypass_guest_pf=0:
>>
>>         Before    After
>> 1st     706.10    709.63
>> 2nd     709.38    715.80
>> 3rd     695.90    710.70
>>
>
> Under what conditions does TCP_RR perform so badly?
>
> On a 1Gbps network, directly connecting two Intel servers,
> I got a 20 times better result before.
>
> Even when I used a KVM guest as the netperf client,
> I got a more than 10 times better result.
>

Um, which case did you test: ept=1, ept=0 bypass_guest_pf=0, or both?

> Could you tell me a bit more details of your test?
>

Sure. The KVM guest is the client; it uses an e1000 NIC and connects to the netperf server through NAT. The bandwidth of our network is 100Mbps.
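
For reference, a TCP_RR run in the two configurations quoted above is typically driven as sketched below. The module parameters `ept` and `bypass_guest_pf` belong to the `kvm_intel` module; the server hostname is illustrative, not from the thread:

```shell
# On the host: reload kvm_intel in the "ept=0 bypass_guest_pf=0"
# configuration discussed in the thread (requires no running guests).
modprobe -r kvm_intel
modprobe kvm_intel ept=0 bypass_guest_pf=0
# (For the "ept is enabled" case, reload with the default ept=1.)

# Inside the guest: run the request/response benchmark against the
# netperf server for 60 seconds. TCP_RR reports transactions/sec,
# which is what the Before/After columns above show.
netperf -H netperf-server -t TCP_RR -l 60
```

On a 100Mbps NAT setup each transaction pays the full virtualized-NIC round-trip latency, which is why the transaction rates above are far below what a directly connected 1Gbps pair achieves.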