From: Anthony Liguori
Subject: Re: PCI passthrough with VT-d - native performance
Date: Wed, 16 Jul 2008 10:23:08 -0500
To: Ben-Ami Yassour
Cc: Avi Kivity, amit.shah@qumranet.com, kvm@vger.kernel.org, Muli Ben-Yehuda, weidong.han@intel.com

Ben-Ami Yassour wrote:
> On Wed, 2008-07-16 at 17:36 +0300, Avi Kivity wrote:
>> Ben-Ami Yassour wrote:
>>> In the last few tests that we made with PCI passthrough and VT-d
>>> using iperf, we were able to get the same throughput as on the
>>> native OS with a 1G NIC
>>
>> Excellent!
>>
>>> (with higher CPU utilization).
>>
>> How much higher?
>
> Here are some numbers for running iperf -l 1M:
>
> e1000 NIC (behind a PCI bridge)
>                              Bandwidth (Mbit/sec)   CPU utilization
> Native OS                    771                     18%
> Native OS with VT-d          760                     18%
> KVM VT-d                     390                     95%
> KVM VT-d with direct mmio    770                     84%
> KVM emulated                  57                    100%

What about virtio?  Also, which emulated NIC is this?

That CPU utilization is extremely high, and somewhat illogical if native with VT-d has almost no CPU impact.  Have you run oprofile yet, or have any insight into where the CPU time is being burnt?  What does kvm_stat look like?
I wonder if there are a large number of PIO exits.  What does the interrupt count look like on native vs. KVM with VT-d?

Regards,

Anthony Liguori

> Comment: it's not clear to me why native Linux cannot get closer to 1G
> for this NIC (I verified that it's not an external network issue). But
> clearly we shouldn't hope to get more than the host does with a KVM
> guest (especially when the guest and host are the same OS, as in this
> case...).
>
> e1000e NIC (onboard)
>                              Bandwidth (Mbit/sec)   CPU utilization
> Native OS                    915                     18%
> Native OS with VT-d          915                     18%
> KVM VT-d with direct mmio    914                     98%
>
> Clearly we need to try and improve the CPU utilization, but I think
> that this is good enough for the first phase.
>
>>> The following patches are the PCI-passthrough patches that Amit sent
>>> (re-based on the latest kvm tree), followed by a few improvements
>>> and the VT-d extension.
>>> I am also sending the userspace patches: the patch that Amit sent
>>> for PCI passthrough and the direct-mmio extension for userspace
>>> (note that without the direct-mmio extension we get less than half
>>> the throughput).
>>
>> Is mmio passthrough the reason for the performance improvement?  If
>> not, what was the problem?
>
> Direct mmio was definitely a major improvement; without it we got half
> the throughput, as you can see above.
> In addition, patch 4/8 improves the interrupt handling and removes
> unnecessary locks, and I assume that it also fixed performance issues
> (I did not investigate exactly in what way).
>
> Regards,
> Ben
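[Editor's note: one rough way to compare the rows in the e1000 table above is to normalize CPU utilization by throughput. The back-of-the-envelope sketch below is not part of the original mail; the numbers are copied verbatim from the reported table, and the "cost" metric (CPU% per 100 Mbit/s) is just an illustrative normalization, not a figure from the thread.]

```python
# Back-of-the-envelope: CPU percent consumed per 100 Mbit/s of
# throughput, using the e1000 numbers reported in the table above
# (bandwidth in Mbit/sec, CPU utilization in percent).
results = {
    "Native OS":                 (771, 18),
    "Native OS with VT-d":       (760, 18),
    "KVM VT-d":                  (390, 95),
    "KVM VT-d with direct mmio": (770, 84),
    "KVM emulated":              (57, 100),
}

for name, (mbit, cpu) in results.items():
    # Lower is better: CPU% burnt for each 100 Mbit/s delivered.
    cost = cpu / (mbit / 100)
    print(f"{name:28s} {cost:6.1f} CPU% per 100 Mbit/s")
```

By this measure, VT-d with direct mmio burns roughly 4-5x the CPU of native per unit of throughput, which is the gap Anthony is asking to see profiled.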