From: Paul Brook
Subject: Re: [Qemu-devel] Translation cache sizes
Date: Sat, 8 Apr 2006 14:16:43 +0100
To: qemu-devel@nongnu.org
Message-Id: <200604081416.44268.paul@codesourcery.com>
In-Reply-To: <200604080413.52908.jseward@acm.org>

On Saturday 08 April 2006 04:13, Julian Seward wrote:
> Using qemu from CVS simulating x86-softmmu (no kqemu) on x86,
> booting SuSE 9.1 and getting to the xdm (kdm?) graphical login
> screen requires making about 1088000 translations, and the
> translation cache is flushed 17 times. Booting is not too bad,
> but once user mode starts to run, the translation cache is
> pretty much hammered.
>
> I made two changes:
>
> * increase CODE_GEN_BUFFER_SIZE from 16*1024*1024
>   to 64*1024*1024,
>
> * observe that CODE_GEN_AVG_BLOCK_SIZE of 128
>   for the softmmu case is too low; my measurements put it
>   at about 247, so I changed it to 256.
>
> With those changes in place, the same boot-to-kdm process
> requires only about 570000 translations and 2 cache flushes.
> Of course, the cost is an extra 48M of memory use.

Did you measure any actual speedup from these changes? In a typical
Linux boot there's a lot of new code that runs only once, so I'd
expect the TB cache to be hammered fairly heavily.

Paul
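
P.S. For anyone who wants to try this, the change amounts to bumping
two constants. A rough sketch of what the relevant defines would look
like afterwards (I'm assuming they still live in exec-all.h in current
CVS, and that CODE_GEN_MAX_BLOCKS is still derived from them; check
your tree and treat this as illustrative, not a tested patch):

    /* exec-all.h -- assumed location of the translation cache
       constants */

    /* total size of the generated-code buffer;
       was (16 * 1024 * 1024) */
    #define CODE_GEN_BUFFER_SIZE     (64 * 1024 * 1024)

    /* assumed average size of one translated block, in bytes;
       was 128, but Julian measured ~247 for the softmmu case */
    #define CODE_GEN_AVG_BLOCK_SIZE  256

    /* derived: number of TB descriptors statically allocated.
       Doubling the average block size means this grows 2x rather
       than 4x along with the buffer, trimming the descriptor
       array's share of the extra memory cost. */
    #define CODE_GEN_MAX_BLOCKS \
        (CODE_GEN_BUFFER_SIZE / CODE_GEN_AVG_BLOCK_SIZE)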