From: Anthony Liguori
Date: Sun, 31 May 2009 08:52:14 -0500
Subject: Re: [Qemu-devel] [PATCH 2/3] Add PCI memory region registration
Message-ID: <4A228B8E.4090304@codemonkey.ws>
In-Reply-To: <4A226E48.1050006@redhat.com>
To: Avi Kivity
Cc: Anthony Liguori, qemu-devel@nongnu.org

Avi Kivity wrote:
>
> I wanted to make it an io_region as well, but pci_register_io_region()
> is taken.  We could rename it to pci_register_bar(), but that will
> cause too much churn IMO.

bar is okay by me.

>> It overloads the term physical memory and still requires separate
>> APIs for IO regions and MEM regions.  I know you mentioned that
>> ram_addr_t could be overloaded to also support IO regions but IMHO
>> that's rather confusing.  If the new code looked like:
>>
>>     s->rtl8139_mmio_io_addr =
>>         cpu_register_io_memory(0, rtl8139_mmio_read, rtl8139_mmio_write, s);
>>
>>     s->rtl8139_io_io_addr =
>>         cpu_register_io_memory(0, rtl8139_ioport_read,
>>                                rtl8139_ioport_write, s);
>>
>>     pci_register_io_region(&d->dev, 0, 0x100,
>>                            PCI_ADDRESS_SPACE_IO, s->rtl8139_io_io_addr);
>>
>>     pci_register_io_region(&d->dev, 1, 0x100,
>>                            PCI_ADDRESS_SPACE_MEM, s->rtl8139_mmio_io_addr);
>>
>> I think it would be more understandable.  However, I think that the
>> normal case is exactly this workflow, so I think it makes sense to
>> collapse it into two function calls.  So it could look like:
>>
>>     pci_register_io_region(&d->dev, 0, 0x100,
>>                            PCI_ADDRESS_SPACE_IO, rtl8139_ioport_read,
>>                            rtl8139_ioport_write, s);
>>
>>     pci_register_io_region(&d->dev, 1, 0x100,
>>                            PCI_ADDRESS_SPACE_MEM, rtl8139_mmio_read,
>>                            rtl8139_mmio_write, s);
>
> That neglects the case of direct mapping.  We could add it as a helper
> though (which would also need to unregister the ram_addr when the
> io_region is unregistered).

I was thinking that the first API (which is very similar to your original
API) would be the implementation, with the second API being helpers that
everyone actually used.

>> You could still have a two-step process for where it's absolutely
>> required (like VGA optimization).
>>
>> I think it's worth looking at changing the signatures of the mem
>> read/write functions.
>> Introducing a size parameter would greatly simplify adding 64-bit IO
>> support, for instance.
>
> Well, let's separate those unrelated changes, otherwise nothing will
> get done.

As long as we agree on the final API, we can take incremental steps there.

>> I would argue that ram_addr_t is the wrong thing to overload for PIO,
>> but as long as it's not exposed in the common API, it doesn't matter
>> that much to me.
>
> Currently ram_addr_t is really a 64-bit encoding for a QEMUIORegion
> object, plus support for address arithmetic.  IMO we should drop it
> and move towards explicit object representation, and drop the page
> descriptor array.
>
> One of the things that prevents this is that the page descriptor array
> is the only place holding the physical -> ioregion mapping.  Once we
> move this information to other objects, we can start on that.  Hence
> this patchset.

So let's pick a better name than physical_memory and I think we've got a
good starting point.

Regards,

Anthony Liguori
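
For concreteness, a rough sketch of how the collapsed helper discussed
above could sit on top of the two-step flow.  This is only an illustration
under the thread's assumptions, not existing code: pci_register_io_bar()
is a made-up name, and the five-argument pci_register_io_region() it calls
is the proposed form that takes an io-memory index directly, not the
map_func-based signature currently in the tree.

    /* Hypothetical helper, assuming the proposed two-step API above.
     * Relies on the usual QEMU device headers (hw/pci.h and friends). */
    static void pci_register_io_bar(PCIDevice *dev, int bar, uint32_t size,
                                    int type,
                                    CPUReadMemoryFunc **read,
                                    CPUWriteMemoryFunc **write,
                                    void *opaque)
    {
        /* Step 1: bind the read/write callbacks to an io-memory slot. */
        int io_index = cpu_register_io_memory(0, read, write, opaque);

        /* Step 2: hand that slot to the proposed low-level registration
         * (the "first API" above, which takes the index instead of a
         * map_func). */
        pci_register_io_region(dev, bar, size, type, io_index);
    }

A device like rtl8139 would then need only two such calls (the collapsed
form quoted earlier), while anything that really needs the split, like the
VGA optimization, keeps using the two underlying calls directly.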