From mboxrd@z Thu Jan 1 00:00:00 1970
Received: from mailman by lists.gnu.org with tmda-scanned (Exim 4.33)
	id 1BV7Dn-0000nA-KP
	for qemu-devel@nongnu.org; Tue, 01 Jun 2004 07:14:19 -0400
Received: from exim by lists.gnu.org with spam-scanned (Exim 4.33)
	id 1BV7Dm-0000lm-34
	for qemu-devel@nongnu.org; Tue, 01 Jun 2004 07:14:19 -0400
Received: from [199.232.76.173] (helo=monty-python.gnu.org)
	by lists.gnu.org with esmtp (Exim 4.33) id 1BV7Dl-0000lj-VX
	for qemu-devel@nongnu.org; Tue, 01 Jun 2004 07:14:18 -0400
Received: from [204.183.119.77] (helo=dash.soliddesign.net)
	by monty-python.gnu.org with esmtp (Exim 4.34) id 1BV7DE-0000im-Aw
	for qemu-devel@nongnu.org; Tue, 01 Jun 2004 07:13:44 -0400
Received: from [10.2.3.220] (unknown [10.2.3.220])
	by dash.soliddesign.net (Postfix) with ESMTP id 340D3575FF
	for ; Tue, 1 Jun 2004 06:13:41 -0500 (EST)
Subject: Re: [Qemu-devel] [PATCH,RFC]: Generic memory callback regions
From: Joe Batt
In-Reply-To: <1086058382.21903.53.camel@sherbert>
References: <1086058382.21903.53.camel@sherbert>
Content-Type: text/plain
Message-Id: <1086088420.21275.6.camel@localhost>
Mime-Version: 1.0
Date: Tue, 01 Jun 2004 06:13:41 -0500
Content-Transfer-Encoding: 7bit
Reply-To: qemu-devel@nongnu.org
List-Id: qemu-devel.nongnu.org
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: qemu-devel@nongnu.org

I haven't read your patch, but I recently wrote an SH2 emulator. We
found that a lot of time was spent finding the right callbacks, so for
the device memory we used a sorted array. After each access, we bubbled
that memory region closer to the beginning of the array. This turned
out to be faster than the tree we had been using. The main differences
are that we had 100+ regions and implemented it in Java.

Joe

On Mon, 2004-05-31 at 21:53, Gianni Tedesco wrote:
> Hi,
>
> This patch adds an API for CONFIG_SOFTMMU mode that hardware drivers
> can use to add memory regions backed by callback functions.
> It simply adds a layer for storing opaque data above the basic
> cpu_register_io_memory / cpu_register_physical_memory functions. I
> used a linked list to store the data structures; I think O(n/2) avg.
> lookup time will be fine as I don't envisage many of these regions
> existing.
>
> I've tested it as far as I currently can for the pciproxy code and it
> seems to do the correct thing. I could work around this manually
> within the pciproxy code itself, but I figured a generic interface
> would be something useful for other drivers (such as any PCI card
> with MMIO resources).
>
> Ideas / thoughts / bugs?
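P.S. For the curious, the scheme I described above boils down to a
linear scan over an array of regions with the classic transpose
heuristic: on each hit, swap the matching region one slot toward the
front, so hot regions migrate to index 0. A minimal sketch in C (the
struct layout and names are invented for illustration; our actual
implementation was in Java):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical region descriptor: covers [base, base + size). */
typedef struct MemRegion {
    uint32_t base;
    uint32_t size;
    int (*read)(void *opaque, uint32_t addr); /* device callback */
    void *opaque;
} MemRegion;

/* Linear scan; on a hit, bubble the region one slot toward the front
 * of the array so frequently accessed regions end up near the head.
 * Returns NULL for an unmapped address. */
static MemRegion *find_region(MemRegion *regs, size_t n, uint32_t addr)
{
    for (size_t i = 0; i < n; i++) {
        /* Unsigned wraparound makes this a single range check:
         * addr < base wraps to a huge value, failing the compare. */
        if (addr - regs[i].base < regs[i].size) {
            if (i > 0) {
                /* transpose heuristic: swap with the predecessor */
                MemRegion tmp = regs[i - 1];
                regs[i - 1] = regs[i];
                regs[i] = tmp;
                return &regs[i - 1];
            }
            return &regs[i];
        }
    }
    return NULL;
}
```

A region accessed k times in a row thus needs at most a few scans
before it sits at the front, which is why this beat the tree for our
access patterns.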