* [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread
@ 2011-03-10 12:12 Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case Paolo Bonzini
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-10 12:12 UTC (permalink / raw)
  To: qemu-devel; +Cc: edgar.iglesias, jan.kiszka

This is a "real" fix for -icount, real in the sense that it
works in all cases including those that weren't fixed by my
first attempt.

Patch 1 is the three-line fix.  With that in, patch 2 can revert
the previous attempt(s).  Finally, patch 3 makes the icount code
clearer by finishing the bugfix/reorganization of qemu_next_deadline
vs. qemu_next_alarm_deadline.

v1->v2:
        reordered patches, renamed qemu_next_deadline

Paolo Bonzini (3):
  really fix -icount in the iothread case
  Revert wrong fix for -icount in the iothread case
  qemu_next_deadline should not consider host-time timers

 cpus.c       |    5 +++-
 qemu-timer.c |   75 +++++++++++++++++++++++++++++----------------------------
 qemu-timer.h |    2 +-
 3 files changed, 43 insertions(+), 39 deletions(-)

-- 
1.7.4

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case
  2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
@ 2011-03-10 12:12 ` Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 2/3] Revert wrong fixes for " Paolo Bonzini
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-10 12:12 UTC (permalink / raw)
  To: qemu-devel; +Cc: edgar.iglesias, jan.kiszka

The correct fix for -icount is to consider the biggest difference
between iothread and non-iothread modes.  In the traditional model,
CPUs run _before_ the iothread calls select (or WaitForMultipleObjects
for Win32).  In the iothread model, CPUs run while the iothread
isn't holding the mutex, i.e. _during_ those same calls.

So, the iothread should always block as long as possible to let
the CPUs run smoothly---the timeout might as well be infinite---and
either the OS or the CPU thread itself will let the iothread know
when something happens.  At this point, the iothread wakes up and
interrupts the CPU.

This is exactly the approach that this patch takes: when cpu_exec_all
returns in -icount mode because a vm_clock deadline has been met, the
CPU thread wakes up the iothread to process the timers.  This fixes
icount better than any other patch that was posted.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/cpus.c b/cpus.c
index 0f33945..a953bac 100644
--- a/cpus.c
+++ b/cpus.c
@@ -861,6 +861,9 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         cpu_exec_all();
+        if (use_icount && qemu_next_deadline() <= 0) {
+            qemu_notify_event();
+        }
         qemu_tcg_wait_io_event();
     }
 
-- 
1.7.4


* [Qemu-devel] [PATCH v2 2/3] Revert wrong fixes for -icount in the iothread case
  2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case Paolo Bonzini
@ 2011-03-10 12:12 ` Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 3/3] qemu_next_deadline should not consider host-time timers Paolo Bonzini
  2011-03-11 12:57 ` [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread Edgar E. Iglesias
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-10 12:12 UTC (permalink / raw)
  To: qemu-devel; +Cc: edgar.iglesias, jan.kiszka

This reverts commits 225d02cd and c9f7383c.  While some parts of
the latter could be saved, I preferred a smooth, complete revert.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 qemu-timer.c |   66 +++++++++++++++++++++++++++++++--------------------------
 1 files changed, 36 insertions(+), 30 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index 88c7b28..62dd504 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -110,9 +110,12 @@ static int64_t cpu_get_clock(void)
     }
 }
 
+#ifndef CONFIG_IOTHREAD
 static int64_t qemu_icount_delta(void)
 {
-    if (use_icount == 1) {
+    if (!use_icount) {
+        return 5000 * (int64_t) 1000000;
+    } else if (use_icount == 1) {
         /* When not using an adaptive execution frequency
            we tend to get badly out of sync with real time,
            so just delay for a reasonable amount of time.  */
@@ -121,6 +124,7 @@ static int64_t qemu_icount_delta(void)
         return cpu_get_icount() - cpu_get_clock();
     }
 }
+#endif
 
 /* enable cpu_get_ticks() */
 void cpu_enable_ticks(void)
@@ -1074,39 +1078,41 @@ void quit_timers(void)
 
 int qemu_calculate_timeout(void)
 {
+#ifndef CONFIG_IOTHREAD
     int timeout;
-    int64_t add;
-    int64_t delta;
 
-    /* When using icount, making forward progress with qemu_icount when the
-       guest CPU is idle is critical. We only use the static io-thread timeout
-       for non icount runs.  */
-    if (!use_icount || !vm_running) {
-        return 5000;
-    }
-
-    /* Advance virtual time to the next event.  */
-    delta = qemu_icount_delta();
-    if (delta > 0) {
-        /* If virtual time is ahead of real time then just
-           wait for IO.  */
-        timeout = (delta + 999999) / 1000000;
-    } else {
-        /* Wait for either IO to occur or the next
-           timer event.  */
-        add = qemu_next_deadline();
-        /* We advance the timer before checking for IO.
-           Limit the amount we advance so that early IO
-           activity won't get the guest too far ahead.  */
-        if (add > 10000000)
-            add = 10000000;
-        delta += add;
-        qemu_icount += qemu_icount_round (add);
-        timeout = delta / 1000000;
-        if (timeout < 0)
-            timeout = 0;
+    if (!vm_running)
+        timeout = 5000;
+    else {
+     /* XXX: use timeout computed from timers */
+        int64_t add;
+        int64_t delta;
+        /* Advance virtual time to the next event.  */
+	delta = qemu_icount_delta();
+        if (delta > 0) {
+            /* If virtual time is ahead of real time then just
+               wait for IO.  */
+            timeout = (delta + 999999) / 1000000;
+        } else {
+            /* Wait for either IO to occur or the next
+               timer event.  */
+            add = qemu_next_deadline();
+            /* We advance the timer before checking for IO.
+               Limit the amount we advance so that early IO
+               activity won't get the guest too far ahead.  */
+            if (add > 10000000)
+                add = 10000000;
+            delta += add;
+            qemu_icount += qemu_icount_round (add);
+            timeout = delta / 1000000;
+            if (timeout < 0)
+                timeout = 0;
+        }
     }
 
     return timeout;
+#else /* CONFIG_IOTHREAD */
+    return 1000;
+#endif
 }
 
-- 
1.7.4


* [Qemu-devel] [PATCH v2 3/3] qemu_next_deadline should not consider host-time timers
  2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case Paolo Bonzini
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 2/3] Revert wrong fixes for " Paolo Bonzini
@ 2011-03-10 12:12 ` Paolo Bonzini
  2011-03-11 12:57 ` [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread Edgar E. Iglesias
  3 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-10 12:12 UTC (permalink / raw)
  To: qemu-devel; +Cc: edgar.iglesias, jan.kiszka

qemu_next_deadline is purely for icount-based virtual timers.  Now that
we have the code right, rename the function to clarify the intended scope.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 cpus.c       |    4 ++--
 qemu-timer.c |   11 +++--------
 qemu-timer.h |    2 +-
 3 files changed, 6 insertions(+), 11 deletions(-)

diff --git a/cpus.c b/cpus.c
index a953bac..d410f63 100644
--- a/cpus.c
+++ b/cpus.c
@@ -861,7 +861,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
 
     while (1) {
         cpu_exec_all();
-        if (use_icount && qemu_next_deadline() <= 0) {
+        if (use_icount && qemu_next_icount_deadline() <= 0) {
             qemu_notify_event();
         }
         qemu_tcg_wait_io_event();
@@ -1059,7 +1059,7 @@ static int tcg_cpu_exec(CPUState *env)
         qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
         env->icount_decr.u16.low = 0;
         env->icount_extra = 0;
-        count = qemu_icount_round (qemu_next_deadline());
+        count = qemu_icount_round (qemu_next_icount_deadline());
         qemu_icount += count;
         decr = (count > 0xffff) ? 0xffff : count;
         count -= decr;
diff --git a/qemu-timer.c b/qemu-timer.c
index 62dd504..c527844 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -688,21 +688,16 @@ static void host_alarm_handler(int host_signum)
     }
 }
 
-int64_t qemu_next_deadline(void)
+int64_t qemu_next_icount_deadline(void)
 {
     /* To avoid problems with overflow limit this to 2^32.  */
     int64_t delta = INT32_MAX;
 
+    assert(use_icount);
     if (active_timers[QEMU_CLOCK_VIRTUAL]) {
         delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
                      qemu_get_clock_ns(vm_clock);
     }
-    if (active_timers[QEMU_CLOCK_HOST]) {
-        int64_t hdelta = active_timers[QEMU_CLOCK_HOST]->expire_time -
-                 qemu_get_clock_ns(host_clock);
-        if (hdelta < delta)
-            delta = hdelta;
-    }
 
     if (delta < 0)
         delta = 0;
@@ -1096,7 +1091,7 @@ int qemu_calculate_timeout(void)
         } else {
             /* Wait for either IO to occur or the next
                timer event.  */
-            add = qemu_next_deadline();
+            add = qemu_next_icount_deadline();
             /* We advance the timer before checking for IO.
                Limit the amount we advance so that early IO
                activity won't get the guest too far ahead.  */
diff --git a/qemu-timer.h b/qemu-timer.h
index 8cd8f83..9c3e52f 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -46,7 +46,7 @@ int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
 
 void qemu_run_all_timers(void);
 int qemu_alarm_pending(void);
-int64_t qemu_next_deadline(void);
+int64_t qemu_next_icount_deadline(void);
 void configure_alarms(char const *opt);
 void configure_icount(const char *option);
 int qemu_calculate_timeout(void);
-- 
1.7.4


* [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread
  2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
                   ` (2 preceding siblings ...)
  2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 3/3] qemu_next_deadline should not consider host-time timers Paolo Bonzini
@ 2011-03-11 12:57 ` Edgar E. Iglesias
  2011-03-11 13:36   ` Paolo Bonzini
  3 siblings, 1 reply; 8+ messages in thread
From: Edgar E. Iglesias @ 2011-03-11 12:57 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: jan.kiszka, qemu-devel

On Thu, Mar 10, 2011 at 01:12:48PM +0100, Paolo Bonzini wrote:
> This is a "real" fix for -icount, real in the sense that it
> works in all cases including those that weren't fixed by my
> first attempt.
> 
> Patch 1 is the three-line fix.  With that in, patch 2 can revert
> the previous attempt(s).  Finally, patch 3 makes the icount code
> clearer by finishing the bugfix/reorganization of qemu_next_deadline
> vs. qemu_next_alarm_deadline.
> 
> v1->v2:
>         reordered patches, renamed qemu_next_deadline


Hi Paolo,

I gave this patchset a run and it runs icount and iothread very
fast in all my testcases.

But, it suffers from the problem that commit
225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
tried to fix.

If the virtual CPU goes to sleep waiting for a future timer
interrupt to wake it up, qemu deadlocks.

The timer interrupt never comes because time is driven by
icount, but the vCPU doesn't run any insns.

Cheers


* [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread
  2011-03-11 13:36   ` Paolo Bonzini
@ 2011-03-11 13:36     ` Edgar E. Iglesias
  2011-03-11 14:02       ` Paolo Bonzini
  0 siblings, 1 reply; 8+ messages in thread
From: Edgar E. Iglesias @ 2011-03-11 13:36 UTC (permalink / raw)
  To: Paolo Bonzini; +Cc: jan.kiszka, qemu-devel

On Fri, Mar 11, 2011 at 02:36:49PM +0100, Paolo Bonzini wrote:
> On 03/11/2011 01:57 PM, Edgar E. Iglesias wrote:
> > Hi Paolo,
> >
> > I gave this patchset a run and it runs icount and iothread very
> > fast in all my testcases.
> 
> Thanks, that's good news.
> 
> > But, it suffers from the problem that commit
> > 225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
> > tried to fix.
> >
> > If the virtual CPU goes to sleep waiting for a future timer
> > interrupt to wake it up, qemu deadlocks.
> >
> > The timer interrupt never comes because time is driven by
> > icount, but the vCPU doesn't run any insns.
> 
> I'm not sure what it should wait for, though.  Is vm_clock supposed to 
> be "a count of instructions, or real time when needed"?  It's not clear 
> to me what the correct behavior should be in this case. 
> Does it make sense to wait at all?

That was my initial local workaround: I just disabled the sleep mode
in the CPU and let it spin.  The drawback is of course that the TCG CPU
will consume 100% host CPU time when using icount.  I think that would
be a significant regression between non-iothread and iothread builds.


> Thinking more about it, perhaps VCPUs should never go to sleep in icount 
> mode if there is a pending vm_clock timer; rather time should just warp 
> to the next vm_clock event with no sleep ever taking place.  (That's my 

Yes, something like that.  Or somehow sleep for an amount of time related
to the time left until the next event, to avoid the warps being too visible
externally (e.g. sending a network packet continuously instead of every
100ms).  The important part is to make sure the insn counter also makes
progress while the vCPU is sleeping.


> reasoning for manual -icount mode, at least; I think "-icount auto" will 
> just work thanks to the icount_rt_handler).
> 
> Bonus question: how does -icount mode make sense at all for SMP? :)

Sorry, I don't know.  I only use it with UP (uniprocessor) guests.  Maybe
Paul has more info on that.

Cheers


* [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread
  2011-03-11 12:57 ` [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread Edgar E. Iglesias
@ 2011-03-11 13:36   ` Paolo Bonzini
  2011-03-11 13:36     ` Edgar E. Iglesias
  0 siblings, 1 reply; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-11 13:36 UTC (permalink / raw)
  To: Edgar E. Iglesias; +Cc: jan.kiszka, qemu-devel

On 03/11/2011 01:57 PM, Edgar E. Iglesias wrote:
> Hi Paolo,
>
> I gave this patchset a run and it runs icount and iothread very
> fast in all my testcases.

Thanks, that's good news.

> But, it suffers from the problem that commit
> 225d02cd1a34d5d87e8acefbf8e244a5d12f5f8c
> tried to fix.
>
> If the virtual CPU goes to sleep waiting for a future timer
> interrupt to wake it up, qemu deadlocks.
>
> The timer interrupt never comes because time is driven by
> icount, but the vCPU doesn't run any insns.

I'm not sure what it should wait for, though.  Is vm_clock supposed to 
be "a count of instructions, or real time when needed"?  It's not clear 
to me what the correct behavior should be in this case. 
Does it make sense to wait at all?

Thinking more about it, perhaps VCPUs should never go to sleep in icount 
mode if there is a pending vm_clock timer; rather time should just warp 
to the next vm_clock event with no sleep ever taking place.  (That's my 
reasoning for manual -icount mode, at least; I think "-icount auto" will 
just work thanks to the icount_rt_handler).

Bonus question: how does -icount mode make sense at all for SMP? :)

Paolo


* [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread
  2011-03-11 13:36     ` Edgar E. Iglesias
@ 2011-03-11 14:02       ` Paolo Bonzini
  0 siblings, 0 replies; 8+ messages in thread
From: Paolo Bonzini @ 2011-03-11 14:02 UTC (permalink / raw)
  To: Edgar E. Iglesias; +Cc: jan.kiszka, qemu-devel

On 03/11/2011 02:36 PM, Edgar E. Iglesias wrote:
> Yes, something like that.  Or somehow sleep for an amount of time related
> to the time left until the next event, to avoid the warps being too visible
> externally (e.g. sending a network packet continuously instead of every
> 100ms).  The important part is to make sure the insn counter also makes
> progress while the vCPU is sleeping.

Ok, thanks!

Paolo


end of thread, other threads:[~2011-03-11 14:03 UTC | newest]

Thread overview: 8+ messages
2011-03-10 12:12 [Qemu-devel] [PATCH v2 0/3] really fix -icount with iothread Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 1/3] really fix -icount in the iothread case Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 2/3] Revert wrong fixes for " Paolo Bonzini
2011-03-10 12:12 ` [Qemu-devel] [PATCH v2 3/3] qemu_next_deadline should not consider host-time timers Paolo Bonzini
2011-03-11 12:57 ` [Qemu-devel] Re: [PATCH v2 0/3] really fix -icount with iothread Edgar E. Iglesias
2011-03-11 13:36   ` Paolo Bonzini
2011-03-11 13:36     ` Edgar E. Iglesias
2011-03-11 14:02       ` Paolo Bonzini
