Date: Tue, 2 Mar 2010 13:21:36 -0300
From: Marcelo Tosatti
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Message-ID: <20100302162136.GA26164@amt.cnet>
References: <201003021455.49620.paul@codesourcery.com> <4B8D2FBE.5010107@codemonkey.ws> <201003021553.31042.paul@codesourcery.com> <4B8D38D5.40507@codemonkey.ws>
In-Reply-To: <4B8D38D5.40507@codemonkey.ws>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
List-Id: qemu-devel.nongnu.org
To: Anthony Liguori
Cc: "Michael S. Tsirkin", quintela@redhat.com, qemu-devel@nongnu.org, kraxel@redhat.com, amit.shah@redhat.com, Paul Brook

On Tue, Mar 02, 2010 at 10:12:05AM -0600, Anthony Liguori wrote:
> On 03/02/2010 09:53 AM, Paul Brook wrote:
> >>>> The key difference is that these regions are created and destroyed
> >>>> rarely and in such a way that the destruction is visible to the guest.
> >>> So you're making ram unmap an asynchronous process, and requiring that
> >>> the address space not be reused until that unmap has completed?
> >> It technically already would be. If you've got a pending DMA
> >> transaction and you try to hot unplug, badness will happen. This is
> >> something that is certainly exploitable.
> > Hmm, I guess we probably want to make this work with all mappings then.
> > DMA to a ram-backed PCI BAR (e.g. video ram) is certainly feasible.
> > Technically it's not the unmap that causes badness, it's freeing the
> > underlying ram.
>
> Let's avoid confusing terminology. We have RAM mappings, and then we
> have PCI BARs that are mapped as IO_MEM_RAM.
>
> PCI BARs mapped as IO_MEM_RAM are allocated by the device and live
> for the duration of the device. If you did something that changed
> the BAR's mapping from IO_MEM_RAM to an actual IO memory type, then
> you'd continue to DMA to the allocated device memory instead of
> doing MMIO operations. [1]
>
> That's completely accurate and safe. If you did this on bare metal,
> I expect you'd get very similar results.
>
> This is different from DMA'ing to a RAM region and then removing the
> RAM region while the IO is in flight. In this case, the mapping
> disappears and you potentially have the guest writing to an invalid
> host pointer.
>
> [1] I don't think it's useful to support DMA'ing to arbitrary
> IO_MEM_RAM areas. Instead, I think we should always bounce to this
> memory. The benefit is that we avoid the complications resulting
> from PCI hot unplug and reference counting.

Agree. Thus the suggestion to tie cpu_physical_memory_map to the qdev
infrastructure.
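
For what it's worth, a minimal sketch of the "always bounce" policy from
[1] is below. The types and helper names (MemRegion, MapHandle, dma_map,
dma_unmap) are made up for illustration only; this is not QEMU's actual
cpu_physical_memory_map()/unmap() implementation, which also has to walk
the physical memory map, handle partial mappings, and so on. The sketch
only shows where the bounce buffer and the copy-back on unmap would sit:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef uint64_t hwaddr_t;

    typedef struct MemRegion {
        hwaddr_t  base;     /* guest-physical base address            */
        size_t    len;      /* length of the region                   */
        uint8_t  *backing;  /* host storage backing the region        */
        int       is_ram;   /* 1 = ordinary guest RAM, 0 = device BAR */
    } MemRegion;

    typedef struct MapHandle {
        uint8_t   *ptr;     /* pointer handed to the device model       */
        uint8_t   *bounce;  /* NULL when the region was mapped directly */
        MemRegion *mr;
        size_t     offset;
        size_t     len;
    } MapHandle;

    /* Map a guest-physical range for DMA. Ordinary RAM is mapped
     * directly; device-backed memory (the IO_MEM_RAM case above) is
     * always bounced, so the region can be retyped or unplugged without
     * leaving the device model holding a stale host pointer. */
    static int dma_map(MemRegion *mr, hwaddr_t addr, size_t len,
                       int is_write, MapHandle *h)
    {
        if (addr < mr->base || addr + len > mr->base + mr->len) {
            return -1;
        }

        h->mr = mr;
        h->offset = (size_t)(addr - mr->base);
        h->len = len;

        if (mr->is_ram) {
            h->bounce = NULL;
            h->ptr = mr->backing + h->offset;
            return 0;
        }

        h->bounce = malloc(len);
        if (!h->bounce) {
            return -1;
        }
        if (!is_write) {
            /* Device will read guest memory: pre-fill the bounce buffer.
             * A real implementation would go through the region's MMIO
             * read callbacks; a plain copy keeps the sketch short. */
            memcpy(h->bounce, mr->backing + h->offset, len);
        }
        h->ptr = h->bounce;
        return 0;
    }

    /* Unmap: for bounced writes, push the data back into the region. */
    static void dma_unmap(MapHandle *h, int is_write)
    {
        if (h->bounce) {
            if (is_write) {
                /* Again, a real implementation would use the MMIO write
                 * callbacks instead of touching backing storage directly. */
                memcpy(h->mr->backing + h->offset, h->bounce, h->len);
            }
            free(h->bounce);
            h->bounce = NULL;
        }
        h->ptr = NULL;
    }

With something along these lines, hot unplug only has to wait for (or
cancel) outstanding bounce copies rather than track every raw host
pointer handed out to device models, which is exactly the reference
counting complication the direct-map approach runs into.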