Message-ID: <482717EF.10508@bellard.org>
Date: Sun, 11 May 2008 17:59:43 +0200
From: Fabrice Bellard
Subject: Re: [Qemu-devel] Re: IO_MEM_NB_ENTRIES limit
To: qemu-devel@nongnu.org

andrzej zaborowski wrote:
> On 15/04/2008, andrzej zaborowski wrote:
>> the maximum number of memory-mapped IO regions in qemu is
>> IO_MEM_NB_ENTRIES which is defined using TARGET_PAGE_BITS.
>> Due to the tiny pages available on ARM, IO_MEM_NB_ENTRIES is only 64
>> there. The OMAP2 CPU has many more logical IO regions than 64, and it
>> makes sense to register them separately.
>>
>> To be able to set IO_MEM_NB_ENTRIES higher, the io region index and
>> the address bits would have to be stored in separate fields in the
>> PhysPageDesc and CPUTLBEntry structs, instead of the io index being
>> stored in the lower bits of addresses. This would double the size of
>> both structs. I'd like to hear whether there are any other ideas for
>> removing the upper limit on IO_MEM_NB_ENTRIES.
>
> Here's a less hacky patch that stores the IO region number in a field
> separate from the page start address, in PhysPageDesc and CPUTLBEntry,
> thus simplifying a couple of things. It's intrusive, but it will ease
> any further extension, and I'd like to commit it at some point if
> there are no better ideas. It works in my tests, but there may be
> corner cases that I broke.
>
> The maximum number of IO_MEM_ROMD regions still depends on the page
> size, because the API used to register them stores the address and the
> io_index in the same value, so removing this limit would require an
> API change that affects hw/.

To be more precise, I am concerned about the increase in the TLB size,
which is likely to have a performance impact.

Moreover, unless you modify kqemu, your changes will break it. For
kqemu, my preferred solution would be for QEMU to use an explicit ioctl
to inform kqemu about the memory mappings.

Regarding the limit on the number of entries, a less intrusive change
would be to use something similar to the subpage system (i.e. the same
entry would be used for several devices, depending on the physical
address).

Regards,

Fabrice.