Date: Fri, 12 Dec 2008 20:26:46 +0100
From: Andrea Arcangeli
Subject: [Qemu-devel] Re: [PATCH 1 of 5] fix cpu_physical_memory len
To: Anthony Liguori
Cc: chrisw@redhat.com, avi@redhat.com, Gerd Hoffmann, kvm@vger.kernel.org, qemu-devel@nongnu.org
Message-ID: <20081212192646.GB30537@random.random>
In-Reply-To: <4942B650.20609@codemonkey.ws>
References: <5d932ac0ac8940b042c1.1229105803@duo.random> <4942B650.20609@codemonkey.ws>

On Fri, Dec 12, 2008 at 01:06:56PM -0600, Anthony Liguori wrote:
> Andrea Arcangeli wrote:
>> From: Andrea Arcangeli
>>
>> Be consistent and have length be size_t for all methods.
>>
>
> ram_addr_t would be better than size_t here.

Yes, that is feasible even if the DMA API output remains a raw iovec (as it will surely bounce, and the bouncing can internally restart with an unsigned long long length).
To explain why it's set to size_t: I just didn't think an emulated device would ever attempt a DMA on a >4G region on a 32-bit host, and I was led to this assumption by the current code, which can't even handle that on a 64-bit host (I made it possible on a 64-bit host, where it makes some sense since there really can be that much RAM allocated). On a 32-bit host it would mostly make sense for MMIO regions, but doing such a large DMA on an MMIO region would be a real weirdness. So I thought sticking with size_t would be less prone to truncation errors and I could do the sanity checking only once (currently, if you attempt that, you get a graceful driver failure with the submit handler getting an error).

But I can change it to ram_addr_t if you like. It's up to you!