From mboxrd@z Thu Jan  1 00:00:00 1970
From: Muli Ben-Yehuda
Subject: Re: [Qemu-devel] Ideas wiki for GSoC 2010
Date: Mon, 15 Mar 2010 07:18:12 -0700
Message-ID: <20100315141812.GA2790@il.ibm.com>
References: <20100310183023.6632aece@redhat.com> <4B9E2745.7060903@redhat.com> <20100315125313.GK9457@il.ibm.com> <20100315130310.GE13108@8bytes.org>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
To: Joerg Roedel
Cc: Avi Kivity, Luiz Capitulino, qemu-devel@nongnu.org, aliguori@us.ibm.com, kvm@vger.kernel.org, jan.kiszka@siemens.com, agraf@suse.de, agl@us.ibm.com, Nadav Amit, Ben-Ami Yassour
Content-Disposition: inline
In-Reply-To: <20100315130310.GE13108@8bytes.org>
List-ID: <kvm.vger.kernel.org>

On Mon, Mar 15, 2010 at 02:03:11PM +0100, Joerg Roedel wrote:
> On Mon, Mar 15, 2010 at 05:53:13AM -0700, Muli Ben-Yehuda wrote:
> > On Mon, Mar 15, 2010 at 02:25:41PM +0200, Avi Kivity wrote:
> > > On 03/10/2010 11:30 PM, Luiz Capitulino wrote:
> > > >
> > > > Hi there,
> > > >
> > > > Our wiki page for the Summer of Code 2010 is doing quite well:
> > > >
> > > > http://wiki.qemu.org/Google_Summer_of_Code_2010
> > >
> > > I will add another project - iommu emulation. Could be very
> > > useful for doing device assignment to nested guests, which could
> > > make testing a lot easier.
> >
> > Our experiments show that nested device assignment is pretty much
> > required for I/O performance in nested scenarios.
>
> Really? I did a small test with virtio-blk in a nested guest (disk
> read with dd, so not a real benchmark) and got a reasonable
> read-performance of around 25MB/s from the disk in the l2-guest.

Netperf running in L1 with direct access: ~950 Mbps throughput with
25% CPU utilization. Netperf running in L2 with virtio between L2 and
L1 and direct assignment between L1 and L0: roughly the same
throughput, but over 90% CPU utilization!

Now extrapolate to 10GbE.

Cheers,
Muli