From mboxrd@z Thu Jan 1 00:00:00 1970
From: Paolo Bonzini <pbonzini@redhat.com>
Date: Mon, 21 Feb 2011 09:51:23 +0100
Message-Id: <1298278286-9158-2-git-send-email-pbonzini@redhat.com>
In-Reply-To: <1298278286-9158-1-git-send-email-pbonzini@redhat.com>
References: <1298278286-9158-1-git-send-email-pbonzini@redhat.com>
Subject: [Qemu-devel] [PATCH 1/4] do not use qemu_icount_delta in the !use_icount case
List-Id: qemu-devel.nongnu.org
To: qemu-devel@nongnu.org
Cc: edgar.iglesias@gmail.com

The !use_icount code is the same for iothread and non-iothread, except
that the timeout is different.  Since the timeout might as well be
infinite and is only masking bugs, use the higher value.  With this
change the !use_icount code is handled equivalently in
qemu_icount_delta and qemu_calculate_timeout, and we rip it out of the
former.

Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
 qemu-timer.c |   59 ++++++++++++++++++++++++---------------------------------
 1 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/qemu-timer.c b/qemu-timer.c
index b0db780..88c7b28 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -112,9 +112,7 @@ static int64_t cpu_get_clock(void)
 
 static int64_t qemu_icount_delta(void)
 {
-    if (!use_icount) {
-        return 5000 * (int64_t) 1000000;
-    } else if (use_icount == 1) {
+    if (use_icount == 1) {
         /* When not using an adaptive execution frequency
            we tend to get badly out of sync with real time,
            so just delay for a reasonable amount of time.  */
@@ -1077,43 +1075,36 @@ void quit_timers(void)
 int qemu_calculate_timeout(void)
 {
     int timeout;
+    int64_t add;
+    int64_t delta;
 
-#ifdef CONFIG_IOTHREAD
     /* When using icount, making forward progress with qemu_icount when the
        guest CPU is idle is critical. We only use the static io-thread timeout
        for non icount runs.  */
-    if (!use_icount) {
-        return 1000;
+    if (!use_icount || !vm_running) {
+        return 5000;
     }
-#endif
 
-    if (!vm_running)
-        timeout = 5000;
-    else {
-        /* XXX: use timeout computed from timers */
-        int64_t add;
-        int64_t delta;
-        /* Advance virtual time to the next event.  */
-        delta = qemu_icount_delta();
-        if (delta > 0) {
-            /* If virtual time is ahead of real time then just
-               wait for IO.  */
-            timeout = (delta + 999999) / 1000000;
-        } else {
-            /* Wait for either IO to occur or the next
-               timer event.  */
-            add = qemu_next_deadline();
-            /* We advance the timer before checking for IO.
-               Limit the amount we advance so that early IO
-               activity won't get the guest too far ahead.  */
-            if (add > 10000000)
-                add = 10000000;
-            delta += add;
-            qemu_icount += qemu_icount_round (add);
-            timeout = delta / 1000000;
-            if (timeout < 0)
-                timeout = 0;
-        }
+    /* Advance virtual time to the next event.  */
+    delta = qemu_icount_delta();
+    if (delta > 0) {
+        /* If virtual time is ahead of real time then just
+           wait for IO.  */
+        timeout = (delta + 999999) / 1000000;
+    } else {
+        /* Wait for either IO to occur or the next
+           timer event.  */
+        add = qemu_next_deadline();
+        /* We advance the timer before checking for IO.
+           Limit the amount we advance so that early IO
+           activity won't get the guest too far ahead.  */
+        if (add > 10000000)
+            add = 10000000;
+        delta += add;
+        qemu_icount += qemu_icount_round (add);
+        timeout = delta / 1000000;
+        if (timeout < 0)
+            timeout = 0;
     }
 
     return timeout;
-- 
1.7.3.5
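
For reference, this is roughly how qemu_calculate_timeout() reads once the
patch is applied, pieced together from the context and added lines of the
second hunk. It is a sketch, not a standalone compilable unit: use_icount,
vm_running, qemu_icount and the qemu_icount_delta(), qemu_next_deadline()
and qemu_icount_round() helpers are defined elsewhere in qemu-timer.c.

    int qemu_calculate_timeout(void)
    {
        int timeout;
        int64_t add;
        int64_t delta;

        /* When using icount, making forward progress with qemu_icount when
           the guest CPU is idle is critical. We only use the static
           io-thread timeout for non icount runs.  */
        if (!use_icount || !vm_running) {
            return 5000;
        }

        /* Advance virtual time to the next event.  */
        delta = qemu_icount_delta();
        if (delta > 0) {
            /* If virtual time is ahead of real time then just
               wait for IO.  */
            timeout = (delta + 999999) / 1000000;
        } else {
            /* Wait for either IO to occur or the next timer event.  */
            add = qemu_next_deadline();
            /* We advance the timer before checking for IO.
               Limit the amount we advance so that early IO
               activity won't get the guest too far ahead.  */
            if (add > 10000000)
                add = 10000000;
            delta += add;
            qemu_icount += qemu_icount_round (add);
            timeout = delta / 1000000;
            if (timeout < 0)
                timeout = 0;
        }

        return timeout;
    }

Reading the arithmetic: delta and add are in nanoseconds while the returned
timeout is in milliseconds, hence the (delta + 999999) / 1000000 round-up and
the 10000000 clamp, which limits the advance to 10 ms of virtual time so that
early IO activity cannot pull the guest too far ahead of real time.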