From: Paul Brook
Subject: Re: [Qemu-devel] Re: [PATCH 2 of 5] add can_dma/post_dma for direct IO
Date: Mon, 15 Dec 2008 00:57:08 +0000
Message-Id: <200812150057.10162.paul@codesourcery.com>
In-Reply-To: <494591F7.3080002@codemonkey.ws>
References: <49456337.4000000@redhat.com> <494591F7.3080002@codemonkey.ws>
To: qemu-devel@nongnu.org
Cc: Andrea Arcangeli, chrisw@redhat.com, kvm@vger.kernel.org, Gerd Hoffmann, Avi Kivity

> > That's pointless; cirrus, for example, has 8MB of MMIO while a
> > cpu-to-vram blit is in progress, and some random device we add
> > tomorrow could easily introduce more. Our APIs shouldn't depend on
> > properties of the emulated hardware, at least as much as possible.
>
> One way to think of what I'm suggesting is this: if, for every
> cpu_register_physical_memory() call for MMIO, we allocated a buffer,
> then whenever map() was called on MMIO we would return that
> already-allocated buffer. The overhead is fixed and honestly
> relatively small, much smaller than what dma.c proposes.

I wouldn't be surprised if some machines had a large memory-mapped IO
space. Most of it might not be actively used, but once you start
considering 64-bit machines on 32-bit hosts, these allocations would
become problematic.
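
To make that concrete, here is a rough sketch of the scheme being
proposed: one bounce buffer allocated per registered MMIO region, with
map() simply handing that buffer back. The names (MMIORegion,
mmio_register, mmio_map) are made up for illustration and are not the
real QEMU API; the size check in mmio_register() is exactly where the
32-bit-host problem shows up.

#include <stdint.h>
#include <stdlib.h>

typedef uint64_t target_phys_addr_t;   /* guest physical address */

typedef struct MMIORegion {
    target_phys_addr_t base;
    target_phys_addr_t size;
    uint8_t *bounce;                   /* allocated once, at registration */
    struct MMIORegion *next;
} MMIORegion;

static MMIORegion *mmio_regions;

/* Hypothetical hook run wherever cpu_register_physical_memory()
   registers an MMIO range: the bounce buffer is allocated up front,
   so the cost is fixed per region. */
int mmio_register(target_phys_addr_t base, target_phys_addr_t size)
{
    MMIORegion *r = malloc(sizeof(*r));

    if (!r)
        return -1;
    /* On a 32-bit host, size_t is 32 bits: a large MMIO window from a
       64-bit machine cannot even be expressed, let alone allocated.
       This is the allocation cost objected to above. */
    if (size != (target_phys_addr_t)(size_t)size
        || !(r->bounce = malloc((size_t)size))) {
        free(r);
        return -1;
    }
    r->base = base;
    r->size = size;
    r->next = mmio_regions;
    mmio_regions = r;
    return 0;
}

/* map() on an MMIO address allocates nothing: it returns a pointer
   into the buffer set aside when the region was registered. */
uint8_t *mmio_map(target_phys_addr_t addr, target_phys_addr_t len)
{
    MMIORegion *r;

    for (r = mmio_regions; r; r = r->next) {
        target_phys_addr_t off = addr - r->base;

        if (addr >= r->base && off < r->size && len <= r->size - off)
            return r->bounce + off;
    }
    return NULL;                       /* not a registered MMIO range */
}

The fixed per-region cost is fine for a handful of small BARs, but for
a sparse multi-gigabyte MMIO window it front-loads an allocation that
may never be touched, which is the concern raised here.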