From: Xin Tong
Date: Wed, 28 Dec 2011 17:48:12 -0700
Subject: Re: [Qemu-devel] interrupt handling in qemu
To: Peter Maydell
Cc: qemu-devel

That was my guess as well at first, but my QEMU is built with
CONFIG_IOTHREAD set to 0. I am not 100% sure how interrupts are
delivered in QEMU; my guess is that some kind of timer device has to
fire, and QEMU may have installed a signal handler that takes the
signal and invokes unlink_tb. I hope you can enlighten me on that.

Thanks

Xin

On Wed, Dec 28, 2011 at 2:10 PM, Peter Maydell wrote:
> On 28 December 2011 00:43, Xin Tong wrote:
>> I modified QEMU to check for interrupt status at the end of every TB
>> and ran it on the SPECINT2000 benchmarks with QEMU 0.15.0. The
>> performance is 70% of the unmodified build for some benchmarks on an
>> x86_64 host. I agree that the extra load-test-branch-not-taken per TB
>> is minimal, but what I found is that the average number of TBs
>> executed per entry into the execution loop is low (~3.5 TBs), while
>> the unmodified approach gets ~10 TBs per entry. This makes me wonder
>> why. Maybe the mechanism I used to gather these statistics is flawed,
>> but the performance is indeed hindered.
>
> Since you said you're using system mode, here's my guess. The
> unlink-TBs method of interrupting the guest CPU thread runs
> in a second thread (the io thread), and doesn't stop the guest
> CPU thread. So while the io thread is trying to unlink TBs,
> the CPU thread is still running on, and might well execute
> a few more TBs before the io thread's traversal of the TB
> graph catches up with it and manages to unlink the TB link
> the CPU thread is about to traverse.
>
> More generally: are we really taking an interrupt every 3 to
> 5 TBs? This seems very high -- surely we will be spending more
> time in the OS servicing interrupts than running useful guest
> userspace code...
>
> -- PMM
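
For reference, a minimal sketch of the modification Xin describes
benchmarking (invented names, not the actual QEMU 0.15 code): each TB
returns to a dispatcher, which pays one load-test-branch per TB to
poll a pending-interrupt flag.

    /* Sketch of a per-TB interrupt check (invented names, not QEMU
     * code).  Every TB comes back to this loop, which tests the
     * pending-interrupt flag once per TB. */
    #include <stddef.h>

    typedef struct TB {
        struct TB *(*run)(void);   /* execute the TB, return successor */
    } TB;

    static volatile int interrupt_pending;   /* set asynchronously */

    static void cpu_exec_loop(TB *tb)
    {
        while (tb != NULL) {
            tb = tb->run();            /* execute one TB */
            if (interrupt_pending) {   /* the extra check per TB */
                break;                 /* exit to service the interrupt */
            }
        }
    }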
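
And a stand-alone sketch of the signal-driven delivery Xin is guessing
at for CONFIG_IOTHREAD=0 builds (again invented names, not QEMU code):
a one-shot timer signal whose handler sets a flag that the execution
loop polls, standing in here for the real handler invoking unlink_tb.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/time.h>

    static volatile sig_atomic_t interrupt_pending;

    static void alarm_handler(int sig)
    {
        (void)sig;
        interrupt_pending = 1;   /* stand-in for unlink_tb breaking chains */
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = alarm_handler;
        sigaction(SIGALRM, &sa, NULL);

        struct itimerval it;
        memset(&it, 0, sizeof(it));
        it.it_value.tv_usec = 10000;        /* "timer device" fires in 10 ms */
        setitimer(ITIMER_REAL, &it, NULL);

        unsigned long tbs = 0;
        while (!interrupt_pending) {
            tbs++;                          /* "execute" one TB */
        }
        printf("noticed the interrupt after %lu TBs\n", tbs);
        return 0;
    }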
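
Finally, a toy model of the race Peter describes (invented, not QEMU
code): an "io thread" walks the TB graph breaking links while a "cpu
thread" keeps following them, so the cpu thread typically executes a
few more TBs before it lands on a link that has already been unlinked.
Build with -pthread.

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NR_TBS 4096

    static _Atomic int tb_next[NR_TBS];   /* successor TB, -1 = unlinked */
    static atomic_ulong executed;

    static void *io_thread(void *arg)
    {
        (void)arg;
        for (int i = 0; i < NR_TBS; i++)
            atomic_store(&tb_next[i], -1);    /* traverse graph, unlinking */
        return NULL;
    }

    static void *cpu_thread(void *arg)
    {
        (void)arg;
        int tb = 0;
        while (tb != -1) {
            atomic_fetch_add(&executed, 1);   /* "execute" TB tb */
            tb = atomic_load(&tb_next[tb]);   /* chained jump, if still linked */
        }
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < NR_TBS; i++)
            atomic_init(&tb_next[i], (i + 1) % NR_TBS);  /* ring of chained TBs */

        pthread_t cpu, io;
        pthread_create(&cpu, NULL, cpu_thread, NULL);
        pthread_create(&io, NULL, io_thread, NULL);
        pthread_join(io, NULL);
        pthread_join(cpu, NULL);

        printf("cpu thread executed %lu TBs before stopping\n",
               (unsigned long)atomic_load(&executed));
        return 0;
    }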