Date: Tue, 2 Mar 2010 19:00:25 +0200
From: "Michael S. Tsirkin"
To: Anthony Liguori
Cc: quintela@redhat.com, Marcelo Tosatti, qemu-devel@nongnu.org, kraxel@redhat.com, amit.shah@redhat.com, Paul Brook
Subject: Re: [Qemu-devel] Re: [PATCHv2 10/12] tap: add vhost/vhostfd options
Message-ID: <20100302170025.GA8743@redhat.com>
In-Reply-To: <4B8D4350.6040506@codemonkey.ws>
List-Id: qemu-devel.nongnu.org

On Tue, Mar 02, 2010 at 10:56:48AM -0600, Anthony Liguori wrote:
> On 03/02/2010 10:12 AM, Marcelo Tosatti wrote:
>> On Sun, Feb 28, 2010 at 02:57:56PM -0600, Anthony Liguori wrote:
>>> On 02/28/2010 11:19 AM, Michael S.
Tsirkin wrote:
>>>>> Both have security implications so I think it's important that
>>>>> they be addressed. Otherwise, I'm pretty happy with how things are.
>>>>
>>>> Care to suggest some solutions?
>>>
>>> The obvious thing to do would be to use the memory notifier in vhost
>>> to keep track of whenever something remaps the ring's memory region
>>> and, if that happens, issue an ioctl to vhost to change the location
>>> of the ring. Also, you would need to merge the vhost slot
>>> management code with the KVM slot management code.
>>
>> There are no security implications as long as vhost uses the qemu
>> process mappings.
>
> There potentially are within a guest. If a guest can trigger a qemu bug
> that results in qemu writing to a different location than what the guest
> told it to write, malicious software may use this to escalate its
> privileges within a guest.

If malicious software has access to hardware that does DMA, the game is
likely over :)

>>> cpu_ram_add() never gets called with overlapping regions. We'll
>>> modify cpu_register_physical_memory() to ensure that a ram mapping
>>> is never changed after initial registration.
>>
>> What is the difference between your proposal and
>> cpu_physical_memory_map?
>
> cpu_physical_memory_map() has the following semantics:
>
> - it always returns a transient mapping
> - it may (transparently) bounce
> - it may fail to bounce; the caller must deal with that
>
> The new function I'm proposing has the following semantics:
>
> - it always returns a persistent mapping
> - it never bounces
> - it will only fail if the mapping isn't ram
>
> A caller can use the new function to implement an optimization that
> forces the device to only work with real ram. IOW, this is something we
> can use in virtio, but very little else. cpu_physical_memory_map() can
> be used in more circumstances.
>> What I'd like to see is a binding between cpu_physical_memory_map and
>> qdev devices, so that you can use different host memory mappings for
>> device context and for CPU context (and make it possible to, say, map
>> a certain memory region as read-only).
>
> We really want per-bus mappings. At the lowest level, we'll have
> sysbus_memory_map(), but we'll also have pci_memory_map(),
> virtio_memory_map(), etc.
>
> Nothing should ever call cpu_physical_memory_map() directly.
>
> Regards,
>
> Anthony Liguori