From: Anthony Liguori
Date: Wed, 27 May 2009 17:33:37 -0500
Subject: Re: [Qemu-devel] [PATCH 2/3] Add PCI memory region registration
Message-ID: <4A1DBFC1.6060603@codemonkey.ws>
In-Reply-To: <4A1D5B44.3040207@redhat.com>
To: Avi Kivity
Cc: Anthony Liguori, qemu-devel@nongnu.org

Avi Kivity wrote:
> Anthony Liguori wrote:
>> Avi Kivity wrote:
>>
>> That's because it's an internal performance hack.  We should just
>> avoid the PCI routines for that device, if we can, although that
>> suggests we need a map hook which is ugly.  Clever ideas are welcome.
>
> My original proposal.  Note it uses ram addresses, not cpu physical
> addresses.

I've thought about it, and what I find confusing about your API is
that pci_register_physical_memory includes the phrase "physical
memory", yet a PIO region on x86 is definitely not physical memory.
It overloads the term and still requires separate APIs for IO regions
and MEM regions.  I know you mentioned that ram_addr_t could be
overloaded to also cover IO regions, but IMHO that's rather confusing.

If the new code looked like:

    s->rtl8139_mmio_io_addr = cpu_register_io_memory(0, rtl8139_mmio_read,
                                                     rtl8139_mmio_write, s);
    s->rtl8139_io_io_addr = cpu_register_io_memory(0, rtl8139_ioport_read,
                                                   rtl8139_ioport_write, s);

    pci_register_io_region(&d->dev, 0, 0x100, PCI_ADDRESS_SPACE_IO,
                           s->rtl8139_io_io_addr);
    pci_register_io_region(&d->dev, 1, 0x100, PCI_ADDRESS_SPACE_MEM,
                           s->rtl8139_mmio_io_addr);

I think it would be more understandable.  However, the normal case is
exactly this workflow, so I think it makes sense to collapse it into
two function calls.  So it could look like:

    pci_register_io_region(&d->dev, 0, 0x100, PCI_ADDRESS_SPACE_IO,
                           rtl8139_ioport_read, rtl8139_ioport_write, s);
    pci_register_io_region(&d->dev, 1, 0x100, PCI_ADDRESS_SPACE_MEM,
                           rtl8139_mmio_read, rtl8139_mmio_write, s);

Moreover, you could probably drop the opaque parameter and just use
d->dev.  I hope it's possible to get from one to the other.
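For example, if the device state embedded the PCIDevice as its first
member (just a sketch of the idea, not what rtl8139.c looks like
today):

    typedef struct RTL8139State {
        PCIDevice dev;      /* embedded so the callback can get back to it */
        /* ... device registers, etc. ... */
    } RTL8139State;

then a callback handed &d->dev could recover the full state with
container_of (or an offsetof-based equivalent):

    static uint32_t rtl8139_mmio_readl(void *opaque, target_phys_addr_t addr)
    {
        PCIDevice *dev = opaque;
        RTL8139State *s = container_of(dev, RTL8139State, dev);

        return rtl8139_io_readl(s, addr);   /* dispatch as before */
    }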
You could still have a two-step process where it's absolutely
required (like the VGA optimization).

I think it's worth looking at changing the signatures of the mem
read/write functions.  Introducing a size parameter would greatly
simplify adding 64-bit IO support, for instance; a rough sketch is in
the P.S. below.

I would argue that ram_addr_t is the wrong thing to overload for PIO,
but as long as it's not exposed in the common API, it doesn't matter
that much to me.

Regards,

Anthony Liguori
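P.S. To make the size parameter idea concrete, I'm imagining
something roughly like this (all names hypothetical, just to
illustrate the shape):

    typedef uint64_t IOReadFunc(void *opaque, target_phys_addr_t addr,
                                unsigned size);
    typedef void IOWriteFunc(void *opaque, target_phys_addr_t addr,
                             uint64_t val, unsigned size);

    static uint64_t rtl8139_read(void *opaque, target_phys_addr_t addr,
                                 unsigned size)
    {
        RTL8139State *s = opaque;

        switch (size) {
        case 1:
            return rtl8139_io_readb(s, addr);
        case 2:
            return rtl8139_io_readw(s, addr);
        case 4:
            return rtl8139_io_readl(s, addr);
        case 8:
            return rtl8139_io_readq(s, addr);  /* hypothetical 64-bit helper */
        }
        return 0;
    }

One registered callback with a size argument would replace the three
per-width function arrays cpu_register_io_memory takes today, and
64-bit support becomes one more case in the switch.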