From: Arnd Bergmann
Subject: Re: copyless virtio net thoughts?
Date: Thu, 19 Feb 2009 15:51:38 +0100
Message-ID: <200902191551.39471.arnd@arndb.de>
In-Reply-To: <200902192206.17557.rusty@rustcorp.com.au>
References: <20090205020732.GA27684@sequoia.sous-sol.org> <20090218233126.GA3105@verge.net.au> <200902192206.17557.rusty@rustcorp.com.au>
To: Rusty Russell
Cc: Simon Horman, Chris Wright, Herbert Xu, kvm@vger.kernel.org, "Dong, Eddie"

On Thursday 19 February 2009, Rusty Russell wrote:
> Not quite: I think PCI passthrough IMHO is the *wrong* way to do it:
> it makes migrate complicated (if not impossible), and requires
> emulation or the same NIC on the destination host.
>
> This would be the *host* seeing the virtual functions as multiple
> NICs, then the ability to attach a given NIC directly to a process.

I guess what you mean then is what Intel calls VMDq, not SR-IOV.
Eddie has some slides about this at
http://docs.huihoo.com/kvm/kvmforum2008/kdf2008_7.pdf .

The latest network cards support both modes of operation, and it appears
to me that there is a place for both: VMDq gives you the best performance
without limiting flexibility, while SR-IOV can in theory perform even
better, but at the cost of a lot of flexibility and potentially of local
(guest-to-guest) performance.

AFAICT, any card that supports SR-IOV should also allow a VMDq-like
model, as you describe.

	Arnd <><