Date: Mon, 18 Apr 2005 10:51:02 +0100
From: Ian Rogers
Subject: Re: [Qemu-devel] Profiling Qemu for speed?
To: qemu-devel@nongnu.org
Reply-To: ian.rogers@manchester.ac.uk, qemu-devel@nongnu.org
Message-ID: <42638306.4050200@cs.man.ac.uk>
In-Reply-To: <20050418083542.45378.qmail@web54110.mail.yahoo.com>
References: <20050418083542.45378.qmail@web54110.mail.yahoo.com>
List-Id: qemu-devel.nongnu.org

There are some code sequences that are quite common, for example a
compare followed by a branch. A threaded decoder tends to look like:

...
// do some work
load
mask out opcode
address_of_decoder = load decoder_lookup
goto *address_of_decoder

but if, as you say, compare and branch are common, then possibly

compare_instruction:
...
// do some work
load
mask out opcode
if opcode == branch then goto branch_decoder
address_of_decoder = load decoder_lookup
goto *address_of_decoder

is better, as in the branch case you only have one load. So it seems
you can use knowledge of common code sequences to remove one memory
access. The overall effect of this is going to come down to the branch
prediction hardware too, so a win isn't obvious. If you have longer
code sequences then you can use specialization to create a specialized
and optimally laid out decoder.

I'm not sure if you can get GCC to generate code sequences like this,
but you probably at least need to use the -fprofile-generate and
-fprofile-use options:
http://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html

I'm doing this in Java and it appears to be worth up to a 10% speedup
in an interpreter, so possibly this would save 10% in the translator.
Sorry if this is obvious.

Regards,
Ian Rogers
--
http://www.binarytranslator.org/

Daniel J Guinan wrote:

>This conversation, below, is very interesting. It is precisely this
>part of QEMU that fascinates me and potentially holds the most promise
>for performance gains. I have even imagined using a genetic algorithm
>to discover optimal block sizes and instruction re-ordering and
>whatnot. This could be done in order to generate translation tables of
>guest instruction sequences and host translated instruction sequences.
>Even if only a handful of very common sequences were translated in
>this fashion, the potential speedups are enormous.
>
>Before even discussing the exotic possibilities, however, we need to
>figure out what is possible within the framework of the current QEMU
>translation system.
>Rewiring QEMU to support translating sequences
>(blocks of instructions) rather than single instructions may or may
>not be necessary. It should be rather simple to instrument QEMU to
>keep track of the most common sequences in order to figure out if
>there are, in fact, sequences that show up with a high enough
>frequency to make this endeavor worthwhile (I would think the answer
>would be yes). Then, someone skilled in machine code for the host and
>guest could take a stab at hand-coding the translation for the most
>common couple of sequences to see how the performance gains come out.
>
>I would love to see some work in this direction and would be willing
>to help, although my skills are limited in x86 machine code.
>
>-Daniel
>
>
>>One thought would be to have a peephole optimizer that looks back
>>over the just-translated basic block (or a state machine that matches
>>such sequences as an on-line algorithm), matches against common,
>>known primitive sequences, and replaces them with optimized versions.
>>
>>The kind of profiling you would want to do here is to run, say,
>>Windows, and take a snapshot of the dynamic code cache, and look for
>>common instruction sequences. Ideally, you could write some software
>>to do this automatically.
>>
>>Anyway, I'm sure there are lots of other ideas lying around.
>>
>>-- John.
>>
>>Another thing I've thought about is checking what sequences of
>>instructions often appear in x86 programs (such as e.g. "push %ebp;
>>movl %esp, %ebp") and then creating C functions which emulate such an
>>entire block, so they can be optimized as a whole by gcc. That would
>>give a similar performance gain on all supported targets, and not
>>just on the one you created the peephole optimizer for (+ less work
>>to debug).
>>
>>The only possible downside is that you can't jump to a particular
>>instruction in such a block (the same goes for several kinds of
>>peephole optimizations though).
>>I don't know yet how Qemu exactly keeps
>>track of the translations it has already performed, whether it
>>supports multiple existing translations of the same instruction
>>and/or whether it can already automatically invalidate the old block
>>in case it turns out it needs to be split and thus re-translated (I
>>guess it should at least do some of these things, since in theory an
>>x86 could jump into the middle of an instruction in order to
>>reinterpret the bytes as another instruction stream).
>>
>>Jonas
>>
>>Unfortunately it's not that simple. The push instruction may cause an
>>exception. Whatever optimizations you apply, you've got to make sure
>>that the guest state is still consistent when the exception occurs.
>>
>>Paul
>>
>>If we just concatenate the C code of the two procedures, won't gcc
>>take care of that for us? Or could scheduling mess this up? Maybe
>>there's a switch to avoid having it reschedule instructions in a way
>>that side effects happen in a different order? (That would still give
>>us the advantage of CSE and peephole optimizations.)
>>
>>Jonas
>
>_______________________________________________
>Qemu-devel mailing list
>Qemu-devel@nongnu.org
>http://lists.nongnu.org/mailman/listinfo/qemu-devel