From: Avi Kivity
Date: Thu, 04 Oct 2012 19:19:44 +0200
Subject: Re: [Qemu-devel] [RFC v1 19/22] memory: per-AddressSpace dispatch
To: Blue Swirl
Cc: Paolo Bonzini, "Michael S. Tsirkin", qemu-devel@nongnu.org,
    Anthony Liguori, liu ping fan

On 10/04/2012 07:13 PM, Blue Swirl wrote:
> On Thu, Oct 4, 2012 at 6:38 AM, Avi Kivity wrote:
>> On 10/03/2012 10:24 PM, Blue Swirl wrote:
>>> >
>>> >  #else
>>> > -void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>> > -                            int len, int is_write)
>>> > +
>>> > +void address_space_rw(AddressSpace *as, target_phys_addr_t addr, uint8_t *buf,
>>> > +                      int len, bool is_write)
>>>
>>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>>> for the address. It may actually be buggy for 32 bit
>>> target_phys_addr_t and 64 bit DMA addresses, if such architectures
>>> exist. Maybe memory.c could be made target independent one day.
>>
>> We can make target_phys_addr_t 64 bit unconditionally.  The fraction of
>> deployments where both host and guest are 32 bits is dropping, and I
>> doubt the performance drop is noticeable.
>
> My line of thought was that memory.c would not be tied to physical
> addresses, but it would be more general. Then exec.c would specialize
> the API to use target_phys_addr_t. Similarly PCI would specialize it
> to pcibus_t, PIO to pio_addr_t and DMA to dma_addr_t.

The problem is that any transition across those boundaries would then
involve casts (explicit or implicit), with the constant worry of
whether we're truncating or not.  Note that we have transitions in
both directions: the higher layers call the memory APIs, and the
memory API calls them back via MemoryRegionOps or a new
MemoryRegionIOMMUOps.

What does this flexibility buy us, compared to a single hw_addr fixed
at 64 bits?

--
error compiling committee.c: too many arguments to function
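
To make the truncation worry above concrete, here is a minimal sketch
(the typedef widths are assumptions for illustration, not QEMU's real
definitions): a 64-bit dma_addr_t silently loses its high bits when it
crosses a boundary typed as a 32-bit target_phys_addr_t.

#include <inttypes.h>
#include <stdio.h>

/* Assumed widths, matching the combination Blue mentions:
 * 32-bit physical addresses, 64-bit DMA addresses. */
typedef uint32_t target_phys_addr_t;
typedef uint64_t dma_addr_t;

/* Stand-in for a dispatch routine keyed on physical addresses. */
static void dispatch(target_phys_addr_t addr)
{
    printf("access at 0x%08" PRIx32 "\n", addr);
}

int main(void)
{
    dma_addr_t dma = 0x1ffff0000ULL;  /* bus address above 4G */

    /* Implicit conversion at the layer boundary: the compiler
     * accepts it, the high 32 bits are dropped, and the access
     * lands at 0xffff0000 instead of the intended address. */
    dispatch(dma);
    return 0;
}

With a single 64-bit address type throughout, this class of bug cannot
arise: no conversion happens at the boundary, so there is nothing to
truncate.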