Date: Tue, 2 Mar 2010 11:57:57 +0200
From: "Michael S. Tsirkin"
Message-ID: <20100302095757.GA6002@redhat.com>
References: <20100228171920.GE28921@redhat.com> <4B8AD8D4.7070002@codemonkey.ws> <201002282239.22041.paul@codesourcery.com> <20100301192732.GA3239@redhat.com> <4B8C3778.8000108@codemonkey.ws>
In-Reply-To: <4B8C3778.8000108@codemonkey.ws>
Subject: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: amit.shah@redhat.com, quintela@redhat.com, kraxel@redhat.com, Paul Brook, qemu-devel@nongnu.org

On Mon, Mar 01, 2010 at 03:54:00PM -0600, Anthony Liguori wrote:
> On 03/01/2010 01:27 PM, Michael S. Tsirkin wrote:
>> On Sun, Feb 28, 2010 at 10:39:21PM +0000, Paul Brook wrote:
>>>> I'm sympathetic to your arguments though. As qemu is today, the above
>>>> is definitely the right thing to do. But ram is always ram and ram
>>>> always has a fixed (albeit non-linear) mapping within a guest.
>>>
>>> I think this assumption is unsafe. There are machines where RAM mappings can
>>> change. It's not uncommon for a chip select (i.e. physical memory address
>>> region) to be switchable to several different sources, one of which may be
>>> RAM. I'm pretty sure this functionality is present (but not actually
>>> implemented) on some of the current qemu targets.
>>>
>>> I agree that changing RAM mappings under an active DMA is a fairly suspect
>>> thing to do. However I think we need to avoid caching mappings between separate
>>> DMA transactions, i.e. when the guest can know that no DMA will occur, and
>>> safely remap things.
>>>
>>> I'm also of the opinion that virtio devices should behave the same as any
>>> other device, i.e. if you put a virtio-net-pci device on a PCI bus behind an
>>> IOMMU, then it should see the same address space as any other PCI device in
>>> that location.
>>
>> It already doesn't. virtio passes physical memory addresses
>> to the device instead of DMA addresses.
>
> That's technically a bug.
>
>>> Apart from anything else, failure to do this breaks nested
>>> virtualization.
>>
>> Assigning a PV device in nested virtualization? It could work, but I'm not
>> sure what the point would be.
>
> It misses the point really.
>
> vhost-net is not a device model and it shouldn't have to care about
> things like PCI IOMMU. If we did ever implement a PCI IOMMU, then we
> would perform ring translation (or not use vhost-net).
>
> Regards,
>
> Anthony Liguori

Right.

>>> While qemu doesn't currently implement an IOMMU, the DMA
>>> interfaces have been designed to allow it.
>>>
>>>
>>>> void cpu_ram_add(target_phys_addr_t start, ram_addr_t size);
>>>
>>> We need to support aliased memory regions. For example, the ARM RealView boards
>>> expose the first 256M of RAM at both address 0x0 and 0x70000000. It's also common
>>> for systems to create aliases by ignoring certain address bits, e.g. each SIMM
>>> slot is allocated a fixed 256M region. Populating that slot with a 128M stick
>>> will cause the contents to be aliased in both the top and bottom halves of
>>> that region.
>>>
>>> Paul
>>>
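
To make the earlier point concrete: virtio ring descriptors carry guest-physical addresses that the backend dereferences directly. The sketch below is illustrative only, not QEMU code; the IOMMU helpers are hypothetical names standing in for whatever translation a PCI IOMMU would impose on an ordinary device model.

#include <stddef.h>
#include <stdint.h>

/* Virtio ring descriptor layout (per the virtio spec): 'addr' is a
 * guest-physical address, used by the backend with no bus/IOMMU
 * translation in between. */
struct vring_desc {
    uint64_t addr;   /* guest-physical buffer address */
    uint32_t len;    /* buffer length in bytes */
    uint16_t flags;  /* VRING_DESC_F_NEXT, VRING_DESC_F_WRITE, ... */
    uint16_t next;   /* next descriptor index when chained */
};

/* Hypothetical helpers, named only for illustration: the extra step a
 * device model sitting behind a PCI IOMMU would need instead of using
 * the descriptor address verbatim. */
uint64_t iommu_translate(void *iommu, uint64_t bus_addr);     /* bus address -> guest-physical */
void *guest_phys_to_host(uint64_t guest_phys, size_t len);    /* guest-physical -> host pointer */

static void *map_buffer_behind_iommu(void *iommu, const struct vring_desc *d)
{
    uint64_t guest_phys = iommu_translate(iommu, d->addr);
    return guest_phys_to_host(guest_phys, d->len);
}

"Ring translation" as mentioned above would amount to applying that translation step to each descriptor address once, before the ring is handed to vhost, so that vhost-net itself never needs to know an IOMMU exists.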
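
The cpu_ram_add() prototype quoted above implies some consumer keeping a table of guest RAM regions. A minimal sketch of what such a consumer might maintain follows; the names are invented for illustration and only loosely echo the vhost memory-table idea, they are not an actual ABI.

#include <stdint.h>

/* Illustrative only: one entry per guest-physical RAM region, mapping it
 * to where qemu has it in its own (userspace) address space.  An alias,
 * i.e. the same RAM decoded at two guest-physical addresses, would simply
 * be two entries pointing at the same userspace mapping. */
struct ram_region {
    uint64_t guest_phys_addr;  /* start of the region in guest-physical space */
    uint64_t size;             /* region length in bytes */
    uint64_t userspace_addr;   /* corresponding host virtual address */
};

#define MAX_RAM_REGIONS 64

static struct ram_region ram_table[MAX_RAM_REGIONS];
static int ram_table_len;

/* What a cpu_ram_add()-style notification might feed into. */
static void ram_region_added(uint64_t start, uint64_t size, uint64_t hva)
{
    if (ram_table_len < MAX_RAM_REGIONS) {
        ram_table[ram_table_len].guest_phys_addr = start;
        ram_table[ram_table_len].size = size;
        ram_table[ram_table_len].userspace_addr = hva;
        ram_table_len++;
    }
}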
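
Paul's 256M slot / 128M stick example comes down to one ignored address bit. A tiny sketch, with an arbitrary base address chosen purely for illustration:

#include <stdint.h>
#include <stdio.h>

/* A 256M slot window populated with a 128M stick: the hardware ignores
 * the address bit that would select the upper half of the window, so the
 * same cells appear in both halves. */
#define SLOT_BASE   0x70000000u   /* illustrative window base */
#define SLOT_SIZE   (256u << 20)  /* decoded window: 256M */
#define STICK_SIZE  (128u << 20)  /* populated RAM: 128M */

/* Fold an address inside the window down to its canonical RAM offset. */
static uint32_t slot_offset(uint32_t addr)
{
    uint32_t off = (addr - SLOT_BASE) & (SLOT_SIZE - 1);
    return off & (STICK_SIZE - 1);   /* the ignored bit drops out */
}

int main(void)
{
    /* 0x70000000 and 0x78000000 are 128M apart, yet both hit offset 0. */
    printf("%#x -> %#x\n", 0x70000000u, slot_offset(0x70000000u));
    printf("%#x -> %#x\n", 0x78000000u, slot_offset(0x78000000u));
    return 0;
}

Both probe addresses fold to offset 0 here, which is exactly the top-half/bottom-half aliasing described in the quoted mail.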