From: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: jan.kiszka@web.de, qemu-devel@nongnu.org
Subject: [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread
Date: Fri, 11 Mar 2011 14:36:19 +0100 [thread overview]
Message-ID: <20110311133619.GC13236@edde.se.axis.com> (raw)
In-Reply-To: <4D7A2571.7000704@redhat.com>
On Fri, Mar 11, 2011 at 02:36:49PM +0100, Paolo Bonzini wrote:
> On 03/11/2011 01:57 PM, Edgar E. Iglesias wrote:
> > Hi Paolo,
> >
> > I gave this patchset a run and it runs icount and iothread very
> > fast in all my testcases.
>
> Thanks, that's good news.
>
> > But, it suffers from the problem that commit
> > 225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
> > tried to fix.
> >
> > If the virtual CPU goes to sleep waiting for a future timer
> > interrupt to wake it up, qemu deadlocks.
> >
> > The timer interrupt never comes because time is driven by
> > icount, but the vCPU doesn't run any insns.
>
> I'm not sure what it should wait for, though. Is vm_clock supposed to
> be "a count of instructions, or real time if there is need for?" So,
> it's not clear to me what the correct behavior should be in this case.
> Does it make sense to wait at all?
That was my initial local workaround: I just disabled the sleep mode
in the CPU and let it spin. The drawback is of course that the TCG CPU
will consume 100% host CPU time when using icount. I think that would be
a significant regression between non-iothread and iothread builds.
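For illustration only, here is a tiny standalone model of that workaround
(none of this is QEMU's actual code; all names here are invented). The point
is simply that with icount the virtual clock only moves when instructions are
counted, so a spinning vCPU eventually reaches the timer deadline while a
sleeping one never would:

/* Toy model of the "never sleep" workaround -- not QEMU code. */
#include <stdint.h>
#include <stdio.h>

static int64_t icount;                                    /* insns executed  */
static int64_t vm_clock_ns(void) { return icount * 10; } /* 10 ns per insn  */

int main(void)
{
    int64_t timer_deadline = 1000000;   /* next vm_clock timer, in ns */

    while (vm_clock_ns() < timer_deadline) {
        /* The vCPU is halted and has nothing useful to execute, but we
         * do NOT sleep: keep spinning so icount (and therefore vm_clock)
         * keeps advancing and the timer eventually fires.  This is what
         * burns 100% host CPU. */
        icount++;
    }
    printf("timer fired at vm_clock = %lld ns\n", (long long)vm_clock_ns());
    return 0;
}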
> Thinking more about it, perhaps VCPUs should never go to sleep in icount
> mode if there is a pending vm_clock timer; rather time should just warp
> to the next vm_clock event with no sleep ever taking place. (That's my
Yes, something like that. Or somehow sleep for some amount of time related
to the time left until the next event, to avoid the warps being too visible
externally (e.g. sending a network packet continuously instead of every
100ms). The important part is to make sure the insn counter makes
progress even when the vCPU is sleeping.
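To make that concrete, a standalone toy model of the warp idea (again, for
illustration only; none of these names are QEMU's actual functions or
variables). The instruction-driven clock gets a bias added to it while the
vCPU is idle, so the pending vm_clock timer still fires without spinning:

/* Toy model of warping vm_clock to the next deadline -- not QEMU code. */
#include <stdint.h>
#include <stdio.h>

static int64_t icount;          /* instructions executed so far            */
static int64_t icount_bias;     /* extra virtual time added by warping     */

static int64_t vm_clock_ns(void)
{
    return icount * 10 + icount_bias;   /* 10 ns per insn, plus warp bias  */
}

/* Called when all vCPUs are idle and the next vm_clock timer is at
 * 'deadline': jump virtual time forward instead of sleeping.  A real
 * implementation could first sleep for some bounded host time so the
 * warp is less visible externally, then add only the remaining gap. */
static void icount_warp_to(int64_t deadline)
{
    int64_t gap = deadline - vm_clock_ns();
    if (gap > 0) {
        icount_bias += gap;
    }
}

int main(void)
{
    int64_t timer_deadline = 1000000;   /* next vm_clock event, in ns */

    icount = 100;                       /* vCPU ran a bit, then halted */
    icount_warp_to(timer_deadline);     /* instead of sleeping forever */

    printf("vm_clock = %lld ns, the timer can fire now\n",
           (long long)vm_clock_ns());
    return 0;
}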
> reasoning for manual -icount mode, at least; I think "-icount auto" will
> just work thanks to the icount_rt_handler).
>
> Bonus question: how does -icount mode make sense at all for SMP? :)
Sorry, I don't know. I only use it with UP (uniprocessor). Maybe Paul has
more info on that.
Cheers
Thread overview: 8+ messages
2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 2/3] Revert wrong fixes for " Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 3/3] qemu_next_deadline should not consider host-time timers Paolo Bonzini
2011-03-11 12:57 ` [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread Edgar E. Iglesias
2011-03-11 13:36 ` Paolo Bonzini
2011-03-11 13:36 ` Edgar E. Iglesias [this message]
2011-03-11 14:02 ` Paolo Bonzini