From: Xin Tong <xerox.time.tech@gmail.com>
To: Peter Maydell <peter.maydell@linaro.org>
Cc: qemu-devel <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] interrupt handling in qemu
Date: Wed, 28 Dec 2011 17:48:12 -0700
Message-ID: <CALKntY188dO0DW5gz6tmjAjBwyKbW6XL5Ydtnx_mKrJrGRQcEg@mail.gmail.com>
In-Reply-To: <CAFEAcA95foJ0wKJ5Pc4W5U_VWTBC62OnphAvj=6m8hoXGW8m8A@mail.gmail.com>

That was my guess at first as well, but my QEMU is built with
CONFIG_IOTHREAD set to 0.

I am not 100% sure how interrupts are delivered in QEMU. My guess is
that some timer device has to fire, that QEMU has installed a signal
handler for it, and that the signal handler takes the signal and
invokes unlink_tb. I hope you can enlighten me on that.
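
Roughly, what I picture is something like the following standalone
sketch (not actual QEMU code; the handler, the flag, and the timer
period are made up for illustration): a host timer delivers a signal,
the handler raises an interrupt-request flag, and the execution loop
notices the flag the next time control comes back from translated code.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

/* Set from the signal handler; in QEMU the handler would additionally
 * break the TB chaining so that control returns to the execution loop
 * promptly rather than at some later block boundary. */
static volatile sig_atomic_t interrupt_request;

static void alarm_handler(int sig)
{
    (void)sig;
    interrupt_request = 1;
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = alarm_handler;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGALRM, &sa, NULL);

    /* Periodic 10ms host timer standing in for an emulated timer device. */
    struct itimerval it = {
        .it_interval = { .tv_sec = 0, .tv_usec = 10000 },
        .it_value    = { .tv_sec = 0, .tv_usec = 10000 },
    };
    setitimer(ITIMER_REAL, &it, NULL);

    /* Stand-in for the execution loop: keep "running TBs" and service
     * interrupt requests as they arrive. */
    int serviced = 0;
    while (serviced < 5) {
        if (interrupt_request) {
            interrupt_request = 0;
            serviced++;
            printf("interrupt request %d serviced\n", serviced);
        }
        /* ... execute the next translated block here ... */
    }
    return 0;
}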

Thanks

Xin
On Wed, Dec 28, 2011 at 2:10 PM, Peter Maydell <peter.maydell@linaro.org> wrote:
> On 28 December 2011 00:43, Xin Tong <xerox.time.tech@gmail.com> wrote:
>> I modified QEMU to check the interrupt status at the end of every TB
>> and ran the SPECINT2000 benchmarks on it with QEMU 0.15.0. For some
>> benchmarks on an x86_64 host the performance is 70% of the unmodified
>> build. I agree that the extra load-test-branch-not-taken per TB is
>> minimal, but what I found is that the average number of TBs executed
>> per entry into translated code is low (~3.5 TBs), while the unmodified
>> approach gets ~10 TBs per entry. This makes me wonder why. Maybe the
>> mechanism I used to gather these statistics is flawed, but the
>> performance is indeed hindered.
>
> Since you said you're using system mode, here's my guess. The
> unlink-tbs method of interrupting the guest CPU thread runs
> in a second thread (the io thread), and doesn't stop the guest
> CPU thread. So while the io thread is trying to unlink TBs,
> the CPU thread is still running on, and might well execute
> a few more TBs before the io thread's traversal of the TB
> graph catches up with it and manages to unlink the TB link
> the CPU thread is about to traverse.
>
> More generally: are we really taking an interrupt every 3 to
> 5 TBs? This seems very high -- surely we will be spending more
> time in the OS servicing interrupts than running useful guest
> userspace code...
>
> -- PMM
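
For reference, here is a rough standalone sketch (hypothetical struct
and function names, not QEMU code) of the two strategies compared
above: the default, where TBs are chained together and control only
comes back out of translated code once a link has been cut, versus my
modification, which tests the interrupt flag after every block and so
returns to the execution loop far more often.

#include <stdbool.h>
#include <stdio.h>

/* A "translated block": a body to run plus an optional chained successor. */
struct tb {
    void (*exec)(void);
    struct tb *next;   /* cleared when the link is "unlinked" */
};

volatile bool interrupt_request;

/* Default strategy: follow the chain; we only return to the caller once
 * whoever raised the interrupt has cut the links. */
void exec_chained(struct tb *tb)
{
    while (tb) {
        tb->exec();
        tb = tb->next;
    }
}

/* Modified strategy: load/test/branch on the flag after every block, so
 * an interrupt is noticed at the next block boundary. */
void exec_checked(struct tb *tb)
{
    while (tb && !interrupt_request) {
        tb->exec();
        tb = tb->next;
    }
}

static void body(void) { puts("block executed"); }

int main(void)
{
    struct tb b2 = { body, NULL };
    struct tb b1 = { body, &b2 };
    exec_chained(&b1);   /* runs both chained blocks */
    exec_checked(&b1);   /* would stop early if interrupt_request were set */
    return 0;
}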

Thread overview: 13+ messages
2011-12-27 23:12 [Qemu-devel] interrupt handling in qemu Xin Tong
2011-12-27 23:36 ` Peter Maydell
2011-12-28  0:43   ` Xin Tong
2011-12-28  1:10     ` Peter Maydell
2011-12-28  1:23       ` Xin Tong
2011-12-28 21:10     ` Peter Maydell
2011-12-29  0:48       ` Xin Tong [this message]
2011-12-29  1:31         ` Peter Maydell
2011-12-28 10:42 ` Avi Kivity
2011-12-28 11:40   ` Peter Maydell
2011-12-28 12:04     ` Avi Kivity
2011-12-28 17:00       ` Xin Tong
2011-12-28 19:07         ` Lluís Vilanova
