Date: Sun, 14 Dec 2008 18:15:58 +0100
From: Andrea Arcangeli
To: Avi Kivity
Cc: chrisw@redhat.com, kvm@vger.kernel.org, Gerd Hoffmann, qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH 2 of 5] add can_dma/post_dma for direct IO
Message-ID: <20081214171558.GH30537@random.random>
In-Reply-To: <49453BF2.9070304@redhat.com>

On Sun, Dec 14, 2008 at 07:01:38PM +0200, Avi Kivity wrote:
> Actually, with Xen, RAM may be unmapped due to Xen limitations when qemu
> runs in dom0 mode. But I think map/unmap makes sense even disregarding

I realize Xen 32bit has issues... Qemu/KVM 32bit has the same issues
too, but there's no point in 2009 (which is when this stuff could go
into production) in trying to run guests with >2G of RAM on a 32bit
host. The issue emerges (I guess with Xen too) only with those obsolete
hardware configurations. Even the Atom and the extremely low-power
Athlons are 64bit-capable, and on embedded systems running a real 32bit
CPU I can't see why anybody would want to run a >2G guest.

> Xen: if we add memory hotunplug, we need to make sure we don't unplug
> memory that has pending dma operations on it. map/unmap gives us the
> opportunity to refcount memory slots.

So memory hotunplug here is considered differently from the real memory
hotplug emulation that simulates removing a DIMM from the hardware.
This is just the Xen trick to handle a >4G guest in a 32bit address
space? Well, that's exactly the thing I'm not interested in supporting.
When 64bit wasn't mainstream it made some sense; these days it's good
enough if we can boot any guest OS (including 64bit ones) on a 32bit
build, but trying to run guest OSes with >2G of RAM doesn't look
useful.

> We can't get all dma to stop during hotunplug, since net rx operations are
> long-running (infinite if there is no activity on the link).
>
> IMO, we do want map/unmap, but this would be just a rename of can_dma and
> friends, and wouldn't have at this time any additional functionality.
> Bouncing has to happen where we have the ability to schedule the actual
> operation, and that's clearly not map/unmap.
It would be a bit more than a rename. Also keep in mind that, as said,
in the longer term we need to build the iovec in the exec.c path: it's
not enough to return a void *. I'd like to support a non-1:1 flat space
to avoid wasting host virtual address space on guest memory holes. But
that's about it: guest memory has to be always mapped, just not with a
1:1 mapping, and surely not with a per-page array that translates each
page's physical address to a host virtual address, but with ranges. So
this map thing that returns a 'void *' won't be there for long even if
I rename it.
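
To make the ranges idea concrete, here is a minimal sketch (an
illustration only, not part of the patch; names like ram_range and
guest_range_to_iovec are invented for this example) of how the exec.c
path could fill an iovec for a guest-physical range from a short table
of ranges, instead of returning a single void * from a flat 1:1 map or
consulting a per-page array:

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

struct ram_range {
    uint64_t gpa_start;   /* guest physical base of this range */
    uint64_t len;         /* length in bytes */
    void    *hva;         /* host virtual address it is mapped at */
};

/* A guest with a memory hole below 4G needs only two entries
 * (hva fields would be filled in at RAM allocation time). */
static struct ram_range ranges[] = {
    { 0x000000000ULL, 0xe0000000ULL, 0 },
    { 0x100000000ULL, 0x40000000ULL, 0 },
};

/* Fill 'iov' with the host-virtual chunks backing [gpa, gpa+len).
 * Returns the number of iovec entries used, or -1 if any part of
 * the range is not guest RAM (hole or MMIO), in which case the
 * caller falls back to bouncing. */
static int guest_range_to_iovec(uint64_t gpa, uint64_t len,
                                struct iovec *iov, int max_iov)
{
    int n = 0;

    while (len > 0) {
        size_t i;
        for (i = 0; i < sizeof(ranges) / sizeof(ranges[0]); i++) {
            struct ram_range *r = &ranges[i];
            if (gpa >= r->gpa_start && gpa < r->gpa_start + r->len) {
                uint64_t chunk = r->gpa_start + r->len - gpa;
                if (chunk > len)
                    chunk = len;
                if (n == max_iov)
                    return -1;
                iov[n].iov_base = (char *)r->hva + (gpa - r->gpa_start);
                iov[n].iov_len  = chunk;
                n++;
                gpa += chunk;
                len -= chunk;
                break;
            }
        }
        if (i == sizeof(ranges) / sizeof(ranges[0]))
            return -1;   /* not RAM: not directly mappable */
    }
    return n;
}

A device doing direct IO would hand the resulting iovec straight to
readv/writev; a -1 return is exactly the point where the bounce-buffer
path mentioned above would kick in.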