From: Anthony Liguori
Date: Thu, 04 Oct 2012 14:05:57 -0500
Message-ID: <871uhejbvu.fsf@codemonkey.ws>
References: <1349280245-16341-1-git-send-email-avi@redhat.com>
 <1349280245-16341-20-git-send-email-avi@redhat.com>
 <506D2EEE.3010904@redhat.com> <506DC530.7040801@redhat.com>
Subject: Re: [Qemu-devel] [RFC v1 19/22] memory: per-AddressSpace dispatch
To: Blue Swirl, Avi Kivity
Cc: Paolo Bonzini, "Michael S. Tsirkin", qemu-devel@nongnu.org,
 liu ping fan

Blue Swirl writes:

> On Thu, Oct 4, 2012 at 5:19 PM, Avi Kivity wrote:
>> On 10/04/2012 07:13 PM, Blue Swirl wrote:
>>> On Thu, Oct 4, 2012 at 6:38 AM, Avi Kivity wrote:
>>>> On 10/03/2012 10:24 PM, Blue Swirl wrote:
>>>>> >  #else
>>>>> > -void cpu_physical_memory_rw(target_phys_addr_t addr, uint8_t *buf,
>>>>> > -                            int len, int is_write)
>>>>> > +
>>>>> > +void address_space_rw(AddressSpace *as, target_phys_addr_t addr, uint8_t *buf,
>>>>> > +                      int len, bool is_write)
>>>>>
>>>>> I'd make address_space_* use uint64_t instead of target_phys_addr_t
>>>>> for the address. It may actually be buggy for 32 bit
>>>>> target_phys_addr_t and 64 bit DMA addresses, if such architectures
>>>>> exist. Maybe memory.c could be made target independent one day.
>>>>
>>>> We can make target_phys_addr_t 64 bit unconditionally. The fraction of
>>>> deployments where both host and guest are 32 bits is dropping, and I
>>>> doubt the performance drop is noticeable.
>>>
>>> My line of thought was that memory.c would not be tied to physical
>>> addresses, but would be more general. Then exec.c would specialize
>>> the API to use target_phys_addr_t. Similarly, PCI would specialize it
>>> to pcibus_t, PIO to pio_addr_t, and DMA to dma_addr_t.
>>
>> The problem is that any transition across the boundaries would then
>> involve casts (explicit or implicit), with the constant worry of
>> whether we're truncating or not. Note we have transitions in both
>> directions, with the higher-layer APIs calling memory APIs, and the
>> memory API calling them back via MemoryRegionOps or a new
>> MemoryRegionIOMMUOps.
>>
>> What does this flexibility buy us, compared to a single hw_addr fixed
>> at 64 bits?
>
> They can all be 64 bits, I'm just considering types. Getting rid of
> target_phys_addr_t, pcibus_t, pio_addr_t and dma_addr_t (are there
> more?) may also be worthwhile.

Where this breaks down is devices that are DMA capable but may exist on
multiple busses. You either end up with a device-specific type and a
layer of casting, or weird acrobatics. It makes more sense IMHO to just
treat bus addresses as a fixed width. target_phys_addr_t is a bad name.
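As an illustration of that fixed-width approach, here is a minimal,
compilable C sketch. It is not QEMU code: bus_addr_t, ToyAddressSpace
and toy_address_space_rw are names invented for this example, loosely
modeled on the address_space_rw() signature quoted above.

/*
 * Sketch only, not QEMU code: one fixed-width 64-bit bus address type
 * shared by every bus, instead of separate target_phys_addr_t /
 * pcibus_t / pio_addr_t / dma_addr_t types. All names here are
 * invented for illustration.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef uint64_t bus_addr_t;    /* one width for every bus, no casts */

typedef struct {
    const char *name;
    uint8_t ram[256];           /* toy backing store */
} ToyAddressSpace;

/* Same shape as the address_space_rw() quoted above, but taking a
 * target-independent 64-bit address. */
static void toy_address_space_rw(ToyAddressSpace *as, bus_addr_t addr,
                                 uint8_t *buf, int len, bool is_write)
{
    if (addr >= sizeof(as->ram) || (uint64_t)len > sizeof(as->ram) - addr) {
        fprintf(stderr, "%s: access outside toy RAM\n", as->name);
        return;
    }
    if (is_write) {
        memcpy(as->ram + addr, buf, (size_t)len);
    } else {
        memcpy(buf, as->ram + addr, (size_t)len);
    }
}

int main(void)
{
    /* A DMA-capable device visible on two different busses reuses the
     * same address type for both, with no per-bus casting acrobatics. */
    ToyAddressSpace pci = { .name = "pci" };
    ToyAddressSpace sysbus = { .name = "sysbus" };
    uint8_t out[4] = { 0xde, 0xad, 0xbe, 0xef };
    uint8_t in[4] = { 0 };

    toy_address_space_rw(&pci, 0x10, out, sizeof(out), true);
    toy_address_space_rw(&pci, 0x10, in, sizeof(in), false);
    printf("pci readback: %02x %02x %02x %02x\n", in[0], in[1], in[2], in[3]);

    toy_address_space_rw(&sysbus, 0x20, out, sizeof(out), true);
    return 0;
}

With a single 64-bit type, the device passes addresses through either
bus without per-bus casts or truncation worries.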
I'd be in favor of either just using uint64_t directly or having a
generic dma_addr_t.

Regards,

Anthony Liguori

>
>>
>> --
>> error compiling committee.c: too many arguments to function