From: Paolo Bonzini <pbonzini@redhat.com>
To: qemu-devel@nongnu.org
Cc: edgar.iglesias@gmail.com
Subject: [Qemu-devel] [PATCH 3/4] rewrite accounting of wait time to the vm_clock
Date: Mon, 21 Feb 2011 09:51:25 +0100	[thread overview]
Message-ID: <1298278286-9158-4-git-send-email-pbonzini@redhat.com> (raw)
In-Reply-To: <1298278286-9158-1-git-send-email-pbonzini@redhat.com>

The current code advances qemu_icount before waiting for I/O.  After
this patch, qemu_icount is left alone (it remains a pure instruction
counter) and qemu_icount_bias is instead adjusted by the actual amount
of time spent in the wait.  This is more accurate, and it also works
in the iothread case.
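
For context, a minimal sketch (not the exact QEMU source) of how the
icount-based vm_clock combines the two variables; the real
cpu_get_icount() additionally subtracts the instructions still pending
on the executing CPU.  Growing qemu_icount_bias therefore advances
virtual time without pretending that any instructions were executed:

    #include <stdint.h>

    extern int64_t qemu_icount;       /* instructions executed so far */
    extern int64_t qemu_icount_bias;  /* extra virtual time, in ns */
    extern int icount_time_shift;     /* log2 of ns per instruction */

    /* Sketch of the vm_clock value when -icount is in use. */
    static int64_t icount_clock_ns(void)
    {
        return qemu_icount_bias + (qemu_icount << icount_time_shift);
    }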

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 qemu-timer.c |   78 +++++++++++++++++++++++++++++-----------------------------
 1 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index 06fa507..163ec69 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -614,6 +614,39 @@ void configure_icount(const char *option)
                    qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
 }
 
+static int64_t cpu_clock_last_read;
+
+int qemu_calculate_timeout(void)
+{
+    int64_t delta;
+
+    /* When using icount, vm_clock timers are handled outside of the alarm
+       timer.  So, wait for I/O in "small bits" to ensure forward progress of
+       vm_clock when the guest CPU is idle.  When not using icount, though, we
+       just wait for a fixed amount of time (it might as well be infinite).  */
+    if (!use_icount || !vm_running) {
+        return 5000;
+    }
+
+    delta = qemu_icount_delta();
+    if (delta > 0) {
+        /* Virtual time is ahead of real time, wait for it to sync.  Time
+           spent waiting for I/O will not be counted.  */
+        cpu_clock_last_read = -1;
+    } else {
+        /* Wait until the next virtual time event, and account the wait
+           as virtual time.  */
+        delta = qemu_next_deadline();
+        cpu_clock_last_read = cpu_get_clock();
+    }
+
+    if (delta > 0) {
+        return (delta + 999999) / 1000000;
+    } else {
+        return 0;
+    }
+}
+
 void qemu_run_all_timers(void)
 {
     alarm_timer->pending = 0;
@@ -626,6 +659,12 @@ void qemu_run_all_timers(void)
 
     /* vm time timers */
     if (vm_running) {
+        if (use_icount && cpu_clock_last_read != -1) {
+            /* Virtual time passed without executing instructions.  Increase
+               the bias between instruction count and virtual time.  */
+            qemu_icount_bias += cpu_get_clock() - cpu_clock_last_read;
+            cpu_clock_last_read = -1;
+        }
         qemu_run_timers(vm_clock);
     }
 
@@ -1066,42 +1105,3 @@ void quit_timers(void)
     alarm_timer = NULL;
     t->stop(t);
 }
-
-int qemu_calculate_timeout(void)
-{
-    int timeout;
-    int64_t add;
-    int64_t delta;
-
-    /* When using icount, making forward progress with qemu_icount when the
-       guest CPU is idle is critical. We only use the static io-thread timeout
-       for non icount runs.  */
-    if (!use_icount || !vm_running) {
-        return 5000;
-    }
-
-    /* Advance virtual time to the next event.  */
-    delta = qemu_icount_delta();
-    if (delta > 0) {
-        /* If virtual time is ahead of real time then just
-           wait for IO.  */
-        timeout = (delta + 999999) / 1000000;
-    } else {
-        /* Wait for either IO to occur or the next
-           timer event.  */
-        add = qemu_next_deadline();
-        /* We advance the timer before checking for IO.
-           Limit the amount we advance so that early IO
-           activity won't get the guest too far ahead.  */
-        if (add > 10000000)
-            add = 10000000;
-        delta += add;
-        qemu_icount += qemu_icount_round (add);
-        timeout = delta / 1000000;
-        if (timeout < 0)
-            timeout = 0;
-    }
-
-    return timeout;
-}
-
-- 
1.7.3.5

Thread overview: 15+ messages
2011-02-21  8:51 [Qemu-devel] [PATCH 0/4] Improve -icount, fix it with iothread Paolo Bonzini
2011-02-21  8:51 ` [Qemu-devel] [PATCH 1/4] do not use qemu_icount_delta in the !use_icount case Paolo Bonzini
2011-02-21  8:51 ` [Qemu-devel] [PATCH 2/4] qemu_next_deadline should not consider host-time timers Paolo Bonzini
2011-02-21  8:51 ` Paolo Bonzini [this message]
2011-02-21  8:51 ` [Qemu-devel] [PATCH 4/4] inline qemu_icount_delta Paolo Bonzini
2011-02-23 10:18 ` [Qemu-devel] Re: [PATCH 0/4] Improve -icount, fix it with iothread Edgar E. Iglesias
2011-02-23 10:25   ` Paolo Bonzini
2011-02-23 11:08     ` Edgar E. Iglesias
2011-02-23 11:39       ` Jan Kiszka
2011-02-23 12:40         ` Edgar E. Iglesias
2011-02-23 12:45           ` Jan Kiszka
2011-02-25 19:33         ` Paolo Bonzini
2011-02-23 12:42       ` Paolo Bonzini
2011-02-23 16:27         ` Edgar E. Iglesias
2011-02-23 16:32           ` Paolo Bonzini
