From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrea Arcangeli
Subject: Re: [PATCH 1 of 5] fix cpu_physical_memory len
Date: Fri, 12 Dec 2008 20:26:46 +0100
Message-ID: <20081212192646.GB30537@random.random>
References: <5d932ac0ac8940b042c1.1229105803@duo.random> <4942B650.20609@codemonkey.ws>
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Cc: Gerd Hoffmann, qemu-devel@nongnu.org, kvm@vger.kernel.org, avi@redhat.com, chrisw@redhat.com
To: Anthony Liguori
Return-path:
Received: from mx2.redhat.com ([66.187.237.31]:51864 "EHLO mx2.redhat.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1751320AbYLLT0x (ORCPT); Fri, 12 Dec 2008 14:26:53 -0500
Content-Disposition: inline
In-Reply-To: <4942B650.20609@codemonkey.ws>
Sender: kvm-owner@vger.kernel.org
List-ID:

On Fri, Dec 12, 2008 at 01:06:56PM -0600, Anthony Liguori wrote:
> Andrea Arcangeli wrote:
>> From: Andrea Arcangeli
>>
>> Be consistent and have length be size_t for all methods.
>>
>
> ram_addr_t would be better than size_t here.

Yes, that is feasible even if the dma api output remains a raw iovec
(as it'll surely bounce, and the bouncing internally can restart with
an unsigned long long length).

To explain why it's set to size_t: I didn't think an emulated device
would ever attempt a dma on a >4G region on a 32bit host, and I was
led to make this assumption by the current code, which can't even
handle that on a 64bit host (I made it possible on a 64bit host,
where it makes some sense as there can really be that much ram
allocated). On a 32bit host it would mostly make sense for mmio
regions, but doing such a large dma on a mmio region would be a real
weirdness. So I thought sticking with size_t would be less prone to
truncation errors and I could do the sanity checking only once
(currently, if you attempt that, you get a graceful driver failure
with the submit handler getting an error).

But I can change to ram_addr_t if you like. It's up to you!