* [Qemu-devel] [PATCH 00/18] extract qemu-timer.c
@ 2010-03-10 10:38 Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer Paolo Bonzini
` (17 more replies)
0 siblings, 18 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
This is the updated version of my series to remove timer handling from vl.c.
Main changes from v1:
- improved some commit messages
- reordered patches a bit
- removed some irrelevant cleanups, will submit as followup
- removed switch of timer handling to use a bottom half
- fixed one bisectability issue
Paolo Bonzini (18):
avoid dubiously clever code in win32_start_timer
fix error in win32_rearm_timer
only one flag is needed for alarm_timer
more alarm timer cleanup
do not use qemu_event_increment outside qemu_notify_event
tweak qemu_notify_event
remove qemu_rearm_alarm_timer from main loop
extract timer handling out of main_loop_wait
change qemu_run_timers interface
introduce and use qemu_clock_enable
centralize handling of -icount
add qemu_icount_round
add qemu_alarm_pending
new function qemu_icount_delta
move vmstate registration of vmstate_timers earlier
place together more #ifdef CONFIG_IOTHREAD blocks
disentangle tcg and deadline calculation
split out qemu-timer.c
^ permalink raw reply [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-17 16:58 ` Anthony Liguori
2010-03-10 10:38 ` [Qemu-devel] [PATCH 02/18] fix error in win32_rearm_timer Paolo Bonzini
` (16 subsequent siblings)
17 siblings, 1 reply; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
The code initializes an unsigned int to UINT_MAX by assigning "-1", so that
the following always-true comparison seems to be always-false at first
glance. Since alarm timer initializations are never nested, it is simpler
to store the result of timeGetDevCaps into data->period unconditionally.
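For readers who do not have the wrap-around rule in mind, a minimal
standalone illustration of the ISO C behaviour involved (not QEMU code):

    #include <stdio.h>
    #include <limits.h>

    int main(void)
    {
        unsigned int period = -1;   /* "-1" wraps to UINT_MAX, not a negative value */
        printf("%u %d\n", period, period == UINT_MAX);   /* prints: 4294967295 1 */
        return 0;
    }

One has to keep that wrap in mind before the bound check against
tc.wPeriodMin reads the way it was meant to.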
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 6 ++----
1 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/vl.c b/vl.c
index d8328c7..6b1e1a7 100644
--- a/vl.c
+++ b/vl.c
@@ -626,7 +626,7 @@ static struct qemu_alarm_timer *alarm_timer;
struct qemu_alarm_win32 {
MMRESULT timerId;
unsigned int period;
-} alarm_win32_data = {0, -1};
+} alarm_win32_data = {0, 0};
static int win32_start_timer(struct qemu_alarm_timer *t);
static void win32_stop_timer(struct qemu_alarm_timer *t);
@@ -1360,9 +1360,7 @@ static int win32_start_timer(struct qemu_alarm_timer *t)
memset(&tc, 0, sizeof(tc));
timeGetDevCaps(&tc, sizeof(tc));
- if (data->period < tc.wPeriodMin)
- data->period = tc.wPeriodMin;
-
+ data->period = tc.wPeriodMin;
timeBeginPeriod(data->period);
flags = TIME_CALLBACK_FUNCTION;
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 02/18] fix error in win32_rearm_timer
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 03/18] only one flag is needed for alarm_timer Paolo Bonzini
` (15 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
The TIME_ONESHOT and TIME_PERIODIC flags are mutually exclusive.
The code after the patch matches the flags used in win32_start_timer.
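For reference, the shape of the intended call is sketched below; delta_ms,
cb and cookie are placeholder names rather than the identifiers used in
vl.c. For what it is worth, in mmsystem.h TIME_ONESHOT is 0 and
TIME_PERIODIC is 1, so the old TIME_ONESHOT | TIME_PERIODIC combination
silently asked for a periodic timer.

    /* sketch of a correctly armed one-shot multimedia timer (illustration only) */
    MMRESULT id = timeSetEvent(delta_ms,            /* uDelay: ms until expiry       */
                               1,                   /* uResolution: 1 ms             */
                               cb,                  /* LPTIMECALLBACK                */
                               (DWORD_PTR)cookie,   /* opaque value for the callback */
                               TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
    if (!id) {
        /* arming failed; vl.c reports this case with fprintf(stderr, ...) */
    }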
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/vl.c b/vl.c
index 6b1e1a7..7958a26 100644
--- a/vl.c
+++ b/vl.c
@@ -1408,7 +1408,7 @@ static void win32_rearm_timer(struct qemu_alarm_timer *t)
data->period,
host_alarm_handler,
(DWORD)t,
- TIME_ONESHOT | TIME_PERIODIC);
+ TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
if (!data->timerId) {
fprintf(stderr, "Failed to re-arm win32 alarm timer %ld\n",
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 03/18] only one flag is needed for alarm_timer
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 02/18] fix error in win32_rearm_timer Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 04/18] more alarm timer cleanup Paolo Bonzini
` (14 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
The ALARM_FLAG_DYNTICKS flag can be tested simply by checking whether there
is a rearm function.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 31 +++++++++++++++----------------
1 files changed, 15 insertions(+), 16 deletions(-)
diff --git a/vl.c b/vl.c
index 7958a26..086982f 100644
--- a/vl.c
+++ b/vl.c
@@ -592,20 +592,17 @@ struct QEMUTimer {
struct qemu_alarm_timer {
char const *name;
- unsigned int flags;
-
int (*start)(struct qemu_alarm_timer *t);
void (*stop)(struct qemu_alarm_timer *t);
void (*rearm)(struct qemu_alarm_timer *t);
void *priv;
-};
-#define ALARM_FLAG_DYNTICKS 0x1
-#define ALARM_FLAG_EXPIRED 0x2
+ unsigned int expired;
+};
static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
{
- return t && (t->flags & ALARM_FLAG_DYNTICKS);
+ return t && t->rearm;
}
static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
@@ -721,18 +718,18 @@ static void init_icount_adjust(void)
static struct qemu_alarm_timer alarm_timers[] = {
#ifndef _WIN32
#ifdef __linux__
- {"dynticks", ALARM_FLAG_DYNTICKS, dynticks_start_timer,
+ {"dynticks", dynticks_start_timer,
dynticks_stop_timer, dynticks_rearm_timer, NULL},
/* HPET - if available - is preferred */
- {"hpet", 0, hpet_start_timer, hpet_stop_timer, NULL, NULL},
+ {"hpet", hpet_start_timer, hpet_stop_timer, NULL, NULL},
/* ...otherwise try RTC */
- {"rtc", 0, rtc_start_timer, rtc_stop_timer, NULL, NULL},
+ {"rtc", rtc_start_timer, rtc_stop_timer, NULL, NULL},
#endif
- {"unix", 0, unix_start_timer, unix_stop_timer, NULL, NULL},
+ {"unix", unix_start_timer, unix_stop_timer, NULL, NULL},
#else
- {"dynticks", ALARM_FLAG_DYNTICKS, win32_start_timer,
+ {"dynticks", win32_start_timer,
win32_stop_timer, win32_rearm_timer, &alarm_win32_data},
- {"win32", 0, win32_start_timer,
+ {"win32", win32_start_timer,
win32_stop_timer, NULL, &alarm_win32_data},
#endif
{NULL, }
@@ -880,7 +877,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
/* Rearm if necessary */
if (pt == &active_timers[ts->clock->type]) {
- if ((alarm_timer->flags & ALARM_FLAG_EXPIRED) == 0) {
+ if (!alarm_timer->expired) {
qemu_rearm_alarm_timer(alarm_timer);
}
/* Interrupt execution to force deadline recalculation. */
@@ -1053,7 +1050,7 @@ static void host_alarm_handler(int host_signum)
qemu_timer_expired(active_timers[QEMU_CLOCK_HOST],
qemu_get_clock(host_clock))) {
qemu_event_increment();
- if (alarm_timer) alarm_timer->flags |= ALARM_FLAG_EXPIRED;
+ if (alarm_timer) alarm_timer->expired = 1;
#ifndef CONFIG_IOTHREAD
if (next_cpu) {
@@ -1282,6 +1279,7 @@ static void dynticks_rearm_timer(struct qemu_alarm_timer *t)
int64_t nearest_delta_us = INT64_MAX;
int64_t current_us;
+ assert(alarm_has_dynticks(t));
if (!active_timers[QEMU_CLOCK_REALTIME] &&
!active_timers[QEMU_CLOCK_VIRTUAL] &&
!active_timers[QEMU_CLOCK_HOST])
@@ -1397,6 +1395,7 @@ static void win32_rearm_timer(struct qemu_alarm_timer *t)
{
struct qemu_alarm_win32 *data = t->priv;
+ assert(alarm_has_dynticks(t));
if (!active_timers[QEMU_CLOCK_REALTIME] &&
!active_timers[QEMU_CLOCK_VIRTUAL] &&
!active_timers[QEMU_CLOCK_HOST])
@@ -3884,8 +3883,8 @@ void main_loop_wait(int timeout)
slirp_select_poll(&rfds, &wfds, &xfds, (ret < 0));
/* rearm timer, if not periodic */
- if (alarm_timer->flags & ALARM_FLAG_EXPIRED) {
- alarm_timer->flags &= ~ALARM_FLAG_EXPIRED;
+ if (alarm_timer->expired) {
+ alarm_timer->expired = 0;
qemu_rearm_alarm_timer(alarm_timer);
}
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 04/18] more alarm timer cleanup
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (2 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 03/18] only one flag is needed for alarm_timer Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 05/18] do not use qemu_event_increment outside qemu_notify_event Paolo Bonzini
` (13 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
The timer_alarm_pending variable is related to the alarm timer but not
placed in the struct. Also, in qemu_mod_timer the wrong flag was being
tested: the timer is rearmed in the alarm timer "bottom half", so the
right flag to test there is the "pending" flag.
Finally, I hoisted the NULL checks from alarm_has_dynticks to
host_alarm_handler.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 29 ++++++++++++++++++-----------
1 files changed, 18 insertions(+), 11 deletions(-)
diff --git a/vl.c b/vl.c
index 086982f..6acf702 100644
--- a/vl.c
+++ b/vl.c
@@ -258,7 +258,6 @@ uint64_t node_cpumask[MAX_NODES];
static CPUState *cur_cpu;
static CPUState *next_cpu;
-static int timer_alarm_pending = 1;
/* Conversion factor from emulated instructions to virtual clock ticks. */
static int icount_time_shift;
/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
@@ -597,12 +596,13 @@ struct qemu_alarm_timer {
void (*rearm)(struct qemu_alarm_timer *t);
void *priv;
- unsigned int expired;
+ char expired;
+ char pending;
};
static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
{
- return t && t->rearm;
+ return !!t->rearm;
}
static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
@@ -877,7 +877,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
/* Rearm if necessary */
if (pt == &active_timers[ts->clock->type]) {
- if (!alarm_timer->expired) {
+ if (!alarm_timer->pending) {
qemu_rearm_alarm_timer(alarm_timer);
}
/* Interrupt execution to force deadline recalculation. */
@@ -1012,6 +1012,10 @@ static void CALLBACK host_alarm_handler(UINT uTimerID, UINT uMsg,
static void host_alarm_handler(int host_signum)
#endif
{
+ struct qemu_alarm_timer *t = alarm_timer;
+ if (!t)
+ return;
+
#if 0
#define DISP_FREQ 1000
{
@@ -1041,7 +1045,7 @@ static void host_alarm_handler(int host_signum)
last_clock = ti;
}
#endif
- if (alarm_has_dynticks(alarm_timer) ||
+ if (alarm_has_dynticks(t) ||
(!use_icount &&
qemu_timer_expired(active_timers[QEMU_CLOCK_VIRTUAL],
qemu_get_clock(vm_clock))) ||
@@ -1050,7 +1054,7 @@ static void host_alarm_handler(int host_signum)
qemu_timer_expired(active_timers[QEMU_CLOCK_HOST],
qemu_get_clock(host_clock))) {
qemu_event_increment();
- if (alarm_timer) alarm_timer->expired = 1;
+ t->expired = alarm_has_dynticks(t);
#ifndef CONFIG_IOTHREAD
if (next_cpu) {
@@ -1058,7 +1062,7 @@ static void host_alarm_handler(int host_signum)
cpu_exit(next_cpu);
}
#endif
- timer_alarm_pending = 1;
+ t->pending = 1;
qemu_notify_event();
}
}
@@ -1438,6 +1442,8 @@ static int init_timer_alarm(void)
goto fail;
}
+ /* first event is at time 0 */
+ t->pending = 1;
alarm_timer = t;
return 0;
@@ -1448,8 +1454,9 @@ fail:
static void quit_timers(void)
{
- alarm_timer->stop(alarm_timer);
+ struct qemu_alarm_timer *t = alarm_timer;
alarm_timer = NULL;
+ t->stop(t);
}
/***********************************************************/
@@ -3888,6 +3895,8 @@ void main_loop_wait(int timeout)
qemu_rearm_alarm_timer(alarm_timer);
}
+ alarm_timer->pending = 0;
+
/* vm time timers */
if (vm_running) {
if (!cur_cpu || likely(!(cur_cpu->singlestep_enabled & SSTEP_NOTIMER)))
@@ -3957,10 +3966,8 @@ static void tcg_cpu_exec(void)
for (; next_cpu != NULL; next_cpu = next_cpu->next_cpu) {
CPUState *env = cur_cpu = next_cpu;
- if (timer_alarm_pending) {
- timer_alarm_pending = 0;
+ if (alarm_timer->pending)
break;
- }
if (cpu_can_run(env))
ret = qemu_cpu_exec(env);
else if (env->stop)
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 05/18] do not use qemu_event_increment outside qemu_notify_event
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (3 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 04/18] more alarm timer cleanup Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 06/18] tweak qemu_notify_event Paolo Bonzini
` (12 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
In the non-iothread case, qemu_notify_event only stops the current CPU.
However, if the CPU is idle and the main loop is sitting in the select call,
then a call to qemu_event_increment is needed too (as done in
host_alarm_handler). Since in general one does not know whether the CPU is
executing or not, it is a safe bet to always do qemu_event_increment.
Another way to see it: after this patch qemu_event_increment is the
"common part" of qemu_notify_event for both the CONFIG_IOTHREAD and
!CONFIG_IOTHREAD cases, which makes sense.
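Pieced together from the hunk below, the !CONFIG_IOTHREAD version of
qemu_notify_event ends up roughly as:

    void qemu_notify_event(void)
    {
        CPUState *env = cpu_single_env;

        qemu_event_increment();     /* wake up select() in the main loop    */
        if (env) {
            cpu_exit(env);          /* and kick the currently executing CPU */
        }
    }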
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/vl.c b/vl.c
index 6acf702..a546d85 100644
--- a/vl.c
+++ b/vl.c
@@ -1053,7 +1053,7 @@ static void host_alarm_handler(int host_signum)
qemu_get_clock(rt_clock)) ||
qemu_timer_expired(active_timers[QEMU_CLOCK_HOST],
qemu_get_clock(host_clock))) {
- qemu_event_increment();
+
t->expired = alarm_has_dynticks(t);
#ifndef CONFIG_IOTHREAD
@@ -3361,6 +3361,7 @@ void qemu_notify_event(void)
{
CPUState *env = cpu_single_env;
+ qemu_event_increment ();
if (env) {
cpu_exit(env);
}
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 06/18] tweak qemu_notify_event
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (4 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 05/18] do not use qemu_event_increment outside qemu_notify_event Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 07/18] remove qemu_rearm_alarm_timer from main loop Paolo Bonzini
` (11 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Instead of special-casing next_cpu in host_alarm_handler, just do that in
qemu_notify_event. The idea is: if we are not running (or not yet running)
target CPU code, prepare things so that the execution loop is exited as soon
as possible; this change just makes that explicit.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 10 +++-------
1 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/vl.c b/vl.c
index a546d85..1328979 100644
--- a/vl.c
+++ b/vl.c
@@ -1055,13 +1055,6 @@ static void host_alarm_handler(int host_signum)
qemu_get_clock(host_clock))) {
t->expired = alarm_has_dynticks(t);
-
-#ifndef CONFIG_IOTHREAD
- if (next_cpu) {
- /* stop the currently executing cpu because a timer occured */
- cpu_exit(next_cpu);
- }
-#endif
t->pending = 1;
qemu_notify_event();
}
@@ -3365,6 +3358,9 @@ void qemu_notify_event(void)
if (env) {
cpu_exit(env);
}
+ if (next_cpu && env != next_cpu) {
+ cpu_exit(next_cpu);
+ }
}
void qemu_mutex_lock_iothread(void) {}
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 07/18] remove qemu_rearm_alarm_timer from main loop
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (5 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 06/18] tweak qemu_notify_event Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 08/18] extract timer handling out of main_loop_wait Paolo Bonzini
` (10 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Make the timer subsystem register its own callback instead.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 8 +++++++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/vl.c b/vl.c
index 1328979..d6bbdbe 100644
--- a/vl.c
+++ b/vl.c
@@ -1417,6 +1417,12 @@ static void win32_rearm_timer(struct qemu_alarm_timer *t)
#endif /* _WIN32 */
+static void alarm_timer_on_change_state_rearm(void *opaque, int running, int reason)
+{
+ if (running)
+ qemu_rearm_alarm_timer((struct qemu_alarm_timer *) opaque);
+}
+
static int init_timer_alarm(void)
{
struct qemu_alarm_timer *t = NULL;
@@ -1438,6 +1444,7 @@ static int init_timer_alarm(void)
/* first event is at time 0 */
t->pending = 1;
alarm_timer = t;
+ qemu_add_vm_change_state_handler(alarm_timer_on_change_state_rearm, t);
return 0;
@@ -3080,7 +3087,6 @@ void vm_start(void)
cpu_enable_ticks();
vm_running = 1;
vm_state_notify(1, 0);
- qemu_rearm_alarm_timer(alarm_timer);
resume_all_vcpus();
}
}
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 08/18] extract timer handling out of main_loop_wait
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (6 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 07/18] remove qemu_rearm_alarm_timer from main loop Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 09/18] change qemu_run_timers interface Paolo Bonzini
` (9 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 47 +++++++++++++++++++++++++----------------------
1 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/vl.c b/vl.c
index d6bbdbe..3347ba0 100644
--- a/vl.c
+++ b/vl.c
@@ -1002,7 +1002,30 @@ static const VMStateDescription vmstate_timers = {
}
};
-static void qemu_event_increment(void);
+static void qemu_run_all_timers(void)
+{
+ /* rearm timer, if not periodic */
+ if (alarm_timer->expired) {
+ alarm_timer->expired = 0;
+ qemu_rearm_alarm_timer(alarm_timer);
+ }
+
+ alarm_timer->pending = 0;
+
+ /* vm time timers */
+ if (vm_running) {
+ if (!cur_cpu || likely(!(cur_cpu->singlestep_enabled & SSTEP_NOTIMER)))
+ qemu_run_timers(&active_timers[QEMU_CLOCK_VIRTUAL],
+ qemu_get_clock(vm_clock));
+ }
+
+ /* real time timers */
+ qemu_run_timers(&active_timers[QEMU_CLOCK_REALTIME],
+ qemu_get_clock(rt_clock));
+
+ qemu_run_timers(&active_timers[QEMU_CLOCK_HOST],
+ qemu_get_clock(host_clock));
+}
#ifdef _WIN32
static void CALLBACK host_alarm_handler(UINT uTimerID, UINT uMsg,
@@ -3892,27 +3915,7 @@ void main_loop_wait(int timeout)
slirp_select_poll(&rfds, &wfds, &xfds, (ret < 0));
- /* rearm timer, if not periodic */
- if (alarm_timer->expired) {
- alarm_timer->expired = 0;
- qemu_rearm_alarm_timer(alarm_timer);
- }
-
- alarm_timer->pending = 0;
-
- /* vm time timers */
- if (vm_running) {
- if (!cur_cpu || likely(!(cur_cpu->singlestep_enabled & SSTEP_NOTIMER)))
- qemu_run_timers(&active_timers[QEMU_CLOCK_VIRTUAL],
- qemu_get_clock(vm_clock));
- }
-
- /* real time timers */
- qemu_run_timers(&active_timers[QEMU_CLOCK_REALTIME],
- qemu_get_clock(rt_clock));
-
- qemu_run_timers(&active_timers[QEMU_CLOCK_HOST],
- qemu_get_clock(host_clock));
+ qemu_run_all_timers();
/* Check bottom-halves last in case any of the earlier events triggered
them. */
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 09/18] change qemu_run_timers interface
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (7 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 08/18] extract timer handling out of main_loop_wait Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 10/18] introduce and use qemu_clock_enable Paolo Bonzini
` (8 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 18 ++++++++----------
1 files changed, 8 insertions(+), 10 deletions(-)
diff --git a/vl.c b/vl.c
index 3347ba0..2b9b379 100644
--- a/vl.c
+++ b/vl.c
@@ -903,10 +903,13 @@ int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
return (timer_head->expire_time <= current_time);
}
-static void qemu_run_timers(QEMUTimer **ptimer_head, int64_t current_time)
+static void qemu_run_timers(QEMUClock *clock)
{
- QEMUTimer *ts;
+ QEMUTimer **ptimer_head, *ts;
+ int64_t current_time;
+ current_time = qemu_get_clock (clock);
+ ptimer_head = &active_timers[clock->type];
for(;;) {
ts = *ptimer_head;
if (!ts || ts->expire_time > current_time)
@@ -1015,16 +1018,11 @@ static void qemu_run_all_timers(void)
/* vm time timers */
if (vm_running) {
if (!cur_cpu || likely(!(cur_cpu->singlestep_enabled & SSTEP_NOTIMER)))
- qemu_run_timers(&active_timers[QEMU_CLOCK_VIRTUAL],
- qemu_get_clock(vm_clock));
+ qemu_run_timers(vm_clock);
}
- /* real time timers */
- qemu_run_timers(&active_timers[QEMU_CLOCK_REALTIME],
- qemu_get_clock(rt_clock));
-
- qemu_run_timers(&active_timers[QEMU_CLOCK_HOST],
- qemu_get_clock(host_clock));
+ qemu_run_timers(rt_clock);
+ qemu_run_timers(host_clock);
}
#ifdef _WIN32
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 10/18] introduce and use qemu_clock_enable
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (8 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 09/18] change qemu_run_timers interface Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 11/18] centralize handling of -icount Paolo Bonzini
` (7 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
By adding the possibility to turn on/off a clock, yet another
incestuous relationship between timers and CPUs can be disentangled.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 16 ++++++++++++++--
1 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/vl.c b/vl.c
index 2b9b379..6cd77e6 100644
--- a/vl.c
+++ b/vl.c
@@ -578,6 +578,7 @@ void cpu_disable_ticks(void)
struct QEMUClock {
int type;
+ int enabled;
/* XXX: add frequency */
};
@@ -812,9 +813,15 @@ static QEMUClock *qemu_new_clock(int type)
QEMUClock *clock;
clock = qemu_mallocz(sizeof(QEMUClock));
clock->type = type;
+ clock->enabled = 1;
return clock;
}
+static void qemu_clock_enable(QEMUClock *clock, int enabled)
+{
+ clock->enabled = enabled;
+}
+
QEMUTimer *qemu_new_timer(QEMUClock *clock, QEMUTimerCB *cb, void *opaque)
{
QEMUTimer *ts;
@@ -907,6 +914,9 @@ static void qemu_run_timers(QEMUClock *clock)
{
QEMUTimer **ptimer_head, *ts;
int64_t current_time;
+
+ if (!clock->enabled)
+ return;
current_time = qemu_get_clock (clock);
ptimer_head = &active_timers[clock->type];
@@ -1017,8 +1027,7 @@ static void qemu_run_all_timers(void)
/* vm time timers */
if (vm_running) {
- if (!cur_cpu || likely(!(cur_cpu->singlestep_enabled & SSTEP_NOTIMER)))
- qemu_run_timers(vm_clock);
+ qemu_run_timers(vm_clock);
}
qemu_run_timers(rt_clock);
@@ -3970,6 +3979,9 @@ static void tcg_cpu_exec(void)
for (; next_cpu != NULL; next_cpu = next_cpu->next_cpu) {
CPUState *env = cur_cpu = next_cpu;
+ qemu_clock_enable(vm_clock,
+ (cur_cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+
if (alarm_timer->pending)
break;
if (cpu_can_run(env))
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 11/18] centralize handling of -icount
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (9 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 10/18] introduce and use qemu_clock_enable Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 12/18] add qemu_icount_round Paolo Bonzini
` (6 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
A simple patch to place together all handling of -icount.
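For context (example invocations, not part of the patch), -icount accepts
either a fixed shift or "auto":

    -icount 3       vm_clock advances 2^3 ns per executed guest instruction
    -icount auto    start from shift 3 and let icount_adjust retune it at runtime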
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 33 +++++++++++++++++++--------------
1 files changed, 19 insertions(+), 14 deletions(-)
diff --git a/vl.c b/vl.c
index 6cd77e6..d05bae5 100644
--- a/vl.c
+++ b/vl.c
@@ -701,8 +701,23 @@ static void icount_adjust_vm(void * opaque)
icount_adjust();
}
-static void init_icount_adjust(void)
+static void configure_icount(const char *option)
{
+ if (!option)
+ return;
+
+ if (strcmp(option, "auto") != 0) {
+ icount_time_shift = strtol(option, NULL, 0);
+ use_icount = 1;
+ return;
+ }
+
+ use_icount = 2;
+
+ /* 125MIPS seems a reasonable initial guess at the guest speed.
+ It will be corrected fairly quickly anyway. */
+ icount_time_shift = 3;
+
/* Have both realtime and virtual time triggers for speed adjustment.
The realtime trigger catches emulated time passing too slowly,
the virtual time trigger catches emulated time passing too fast.
@@ -4855,6 +4870,7 @@ int main(int argc, char **argv, char **envp)
uint32_t boot_devices_bitmap = 0;
int i;
int snapshot, linux_boot, net_boot;
+ const char *icount_option = NULL;
const char *initrd_filename;
const char *kernel_filename, *kernel_cmdline;
char boot_devices[33] = "cad"; /* default to HD->floppy->CD-ROM */
@@ -5605,12 +5621,7 @@ int main(int argc, char **argv, char **envp)
tb_size = 0;
break;
case QEMU_OPTION_icount:
- use_icount = 1;
- if (strcmp(optarg, "auto") == 0) {
- icount_time_shift = -1;
- } else {
- icount_time_shift = strtol(optarg, NULL, 0);
- }
+ icount_option = optarg;
break;
case QEMU_OPTION_incoming:
incoming = optarg;
@@ -5856,13 +5867,7 @@ int main(int argc, char **argv, char **envp)
fprintf(stderr, "could not initialize alarm timer\n");
exit(1);
}
- if (use_icount && icount_time_shift < 0) {
- use_icount = 2;
- /* 125MIPS seems a reasonable initial guess at the guest speed.
- It will be corrected fairly quickly anyway. */
- icount_time_shift = 3;
- init_icount_adjust();
- }
+ configure_icount(icount_option);
#ifdef _WIN32
socket_init();
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 12/18] add qemu_icount_round
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (10 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 11/18] centralize handling of -icount Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 13/18] add qemu_alarm_pending Paolo Bonzini
` (5 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 13 +++++++------
1 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/vl.c b/vl.c
index d05bae5..2f78817 100644
--- a/vl.c
+++ b/vl.c
@@ -731,6 +731,11 @@ static void configure_icount(const char *option)
qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
}
+static int64_t qemu_icount_round(int64_t count)
+{
+ return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
+}
+
static struct qemu_alarm_timer alarm_timers[] = {
#ifndef _WIN32
#ifdef __linux__
@@ -3961,9 +3966,7 @@ static int qemu_cpu_exec(CPUState *env)
qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
env->icount_decr.u16.low = 0;
env->icount_extra = 0;
- count = qemu_next_deadline();
- count = (count + (1 << icount_time_shift) - 1)
- >> icount_time_shift;
+ count = qemu_icount_round (qemu_next_deadline());
qemu_icount += count;
decr = (count > 0xffff) ? 0xffff : count;
count -= decr;
@@ -4073,9 +4076,7 @@ static int qemu_calculate_timeout(void)
if (add > 10000000)
add = 10000000;
delta += add;
- add = (add + (1 << icount_time_shift) - 1)
- >> icount_time_shift;
- qemu_icount += add;
+ qemu_icount += qemu_icount_round (add);
timeout = delta / 1000000;
if (timeout < 0)
timeout = 0;
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 13/18] add qemu_alarm_pending
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (11 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 12/18] add qemu_icount_round Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 14/18] new function qemu_icount_delta Paolo Bonzini
` (4 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 11 ++++++++---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/vl.c b/vl.c
index 2f78817..18bd2ee 100644
--- a/vl.c
+++ b/vl.c
@@ -601,6 +601,13 @@ struct qemu_alarm_timer {
char pending;
};
+static struct qemu_alarm_timer *alarm_timer;
+
+static inline int qemu_alarm_pending(void)
+{
+ return alarm_timer->pending;
+}
+
static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
{
return !!t->rearm;
@@ -617,8 +624,6 @@ static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
/* TODO: MIN_TIMER_REARM_US should be optimized */
#define MIN_TIMER_REARM_US 250
-static struct qemu_alarm_timer *alarm_timer;
-
#ifdef _WIN32
struct qemu_alarm_win32 {
@@ -4000,7 +4005,7 @@ static void tcg_cpu_exec(void)
qemu_clock_enable(vm_clock,
(cur_cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
- if (alarm_timer->pending)
+ if (qemu_alarm_pending())
break;
if (cpu_can_run(env))
ret = qemu_cpu_exec(env);
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 14/18] new function qemu_icount_delta
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (12 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 13/18] add qemu_alarm_pending Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 15/18] move vmstate registration of vmstate_timers earlier Paolo Bonzini
` (3 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Tweaking the rounding in qemu_next_deadline ensures that there's
no change whatsoever.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 29 ++++++++++++++++++-----------
1 files changed, 18 insertions(+), 11 deletions(-)
diff --git a/vl.c b/vl.c
index 18bd2ee..d10319f 100644
--- a/vl.c
+++ b/vl.c
@@ -548,6 +548,22 @@ static int64_t cpu_get_clock(void)
}
}
+#ifndef CONFIG_IOTHREAD
+static int64_t qemu_icount_delta(void)
+{
+ if (!use_icount) {
+ return 5000 * (int64_t) 1000000;
+ } else if (use_icount == 1) {
+ /* When not using an adaptive execution frequency
+ we tend to get badly out of sync with real time,
+ so just delay for a reasonable amount of time. */
+ return 0;
+ } else {
+ return cpu_get_icount() - cpu_get_clock();
+ }
+}
+#endif
+
/* enable cpu_get_ticks() */
void cpu_enable_ticks(void)
{
@@ -4052,25 +4068,16 @@ static int qemu_calculate_timeout(void)
timeout = 5000;
else if (tcg_has_work())
timeout = 0;
- else if (!use_icount)
- timeout = 5000;
else {
/* XXX: use timeout computed from timers */
int64_t add;
int64_t delta;
/* Advance virtual time to the next event. */
- if (use_icount == 1) {
- /* When not using an adaptive execution frequency
- we tend to get badly out of sync with real time,
- so just delay for a reasonable amount of time. */
- delta = 0;
- } else {
- delta = cpu_get_icount() - cpu_get_clock();
- }
+ delta = qemu_icount_delta();
if (delta > 0) {
/* If virtual time is ahead of real time then just
wait for IO. */
- timeout = (delta / 1000000) + 1;
+ timeout = (delta + 999999) / 1000000;
} else {
/* Wait for either IO to occur or the next
timer event. */
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 15/18] move vmstate registration of vmstate_timers earlier
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (13 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 14/18] new function qemu_icount_delta Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 16/18] place together more #ifdef CONFIG_IOTHREAD blocks Paolo Bonzini
` (2 subsequent siblings)
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 62 +++++++++++++++++++++++++++++++-------------------------------
1 files changed, 31 insertions(+), 31 deletions(-)
diff --git a/vl.c b/vl.c
index d10319f..3d8f089 100644
--- a/vl.c
+++ b/vl.c
@@ -722,36 +722,6 @@ static void icount_adjust_vm(void * opaque)
icount_adjust();
}
-static void configure_icount(const char *option)
-{
- if (!option)
- return;
-
- if (strcmp(option, "auto") != 0) {
- icount_time_shift = strtol(option, NULL, 0);
- use_icount = 1;
- return;
- }
-
- use_icount = 2;
-
- /* 125MIPS seems a reasonable initial guess at the guest speed.
- It will be corrected fairly quickly anyway. */
- icount_time_shift = 3;
-
- /* Have both realtime and virtual time triggers for speed adjustment.
- The realtime trigger catches emulated time passing too slowly,
- the virtual time trigger catches emulated time passing too fast.
- Realtime triggers occur even when idle, so use them less frequently
- than VM triggers. */
- icount_rt_timer = qemu_new_timer(rt_clock, icount_adjust_rt, NULL);
- qemu_mod_timer(icount_rt_timer,
- qemu_get_clock(rt_clock) + 1000);
- icount_vm_timer = qemu_new_timer(vm_clock, icount_adjust_vm, NULL);
- qemu_mod_timer(icount_vm_timer,
- qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
-}
-
static int64_t qemu_icount_round(int64_t count)
{
return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
@@ -1056,6 +1026,37 @@ static const VMStateDescription vmstate_timers = {
}
};
+static void configure_icount(const char *option)
+{
+ vmstate_register(0, &vmstate_timers, &timers_state);
+ if (!option)
+ return;
+
+ if (strcmp(option, "auto") != 0) {
+ icount_time_shift = strtol(option, NULL, 0);
+ use_icount = 1;
+ return;
+ }
+
+ use_icount = 2;
+
+ /* 125MIPS seems a reasonable initial guess at the guest speed.
+ It will be corrected fairly quickly anyway. */
+ icount_time_shift = 3;
+
+ /* Have both realtime and virtual time triggers for speed adjustment.
+ The realtime trigger catches emulated time passing too slowly,
+ the virtual time trigger catches emulated time passing too fast.
+ Realtime triggers occur even when idle, so use them less frequently
+ than VM triggers. */
+ icount_rt_timer = qemu_new_timer(rt_clock, icount_adjust_rt, NULL);
+ qemu_mod_timer(icount_rt_timer,
+ qemu_get_clock(rt_clock) + 1000);
+ icount_vm_timer = qemu_new_timer(vm_clock, icount_adjust_vm, NULL);
+ qemu_mod_timer(icount_vm_timer,
+ qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
+}
+
static void qemu_run_all_timers(void)
{
/* rearm timer, if not periodic */
@@ -5929,7 +5930,6 @@ int main(int argc, char **argv, char **envp)
if (qemu_opts_foreach(&qemu_drive_opts, drive_init_func, machine, 1) != 0)
exit(1);
- vmstate_register(0, &vmstate_timers ,&timers_state);
register_savevm_live("ram", 0, 3, NULL, ram_save_live, NULL,
ram_load, NULL);
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 16/18] place together more #ifdef CONFIG_IOTHREAD blocks
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (14 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 15/18] move vmstate registration of vmstate_timers earlier Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 17/18] disentangle tcg and deadline calculation Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 18/18] split out qemu-timer.c Paolo Bonzini
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
vl.c | 78 +++++++++++++++++++++++++++++++----------------------------------
1 files changed, 37 insertions(+), 41 deletions(-)
diff --git a/vl.c b/vl.c
index 3d8f089..d97da4d 100644
--- a/vl.c
+++ b/vl.c
@@ -3282,13 +3282,39 @@ void qemu_system_powerdown_request(void)
qemu_notify_event();
}
-#ifdef CONFIG_IOTHREAD
-static void qemu_system_vmstop_request(int reason)
+static int cpu_can_run(CPUState *env)
{
- vmstop_requested = reason;
- qemu_notify_event();
+ if (env->stop)
+ return 0;
+ if (env->stopped)
+ return 0;
+ if (!vm_running)
+ return 0;
+ return 1;
+}
+
+static int cpu_has_work(CPUState *env)
+{
+ if (env->stop)
+ return 1;
+ if (env->stopped)
+ return 0;
+ if (!env->halted)
+ return 1;
+ if (qemu_cpu_has_work(env))
+ return 1;
+ return 0;
+}
+
+static int tcg_has_work(void)
+{
+ CPUState *env;
+
+ for (env = first_cpu; env != NULL; env = env->next_cpu)
+ if (cpu_has_work(env))
+ return 1;
+ return 0;
}
-#endif
#ifndef _WIN32
static int io_thread_fd = -1;
@@ -3382,17 +3408,6 @@ static void qemu_event_increment(void)
}
#endif
-static int cpu_can_run(CPUState *env)
-{
- if (env->stop)
- return 0;
- if (env->stopped)
- return 0;
- if (!vm_running)
- return 0;
- return 1;
-}
-
#ifndef CONFIG_IOTHREAD
static int qemu_init_main_loop(void)
{
@@ -3471,8 +3486,6 @@ static QemuCond qemu_pause_cond;
static void tcg_block_io_signals(void);
static void kvm_block_io_signals(CPUState *env);
static void unblock_io_signals(void);
-static int tcg_has_work(void);
-static int cpu_has_work(CPUState *env);
static int qemu_init_main_loop(void)
{
@@ -3823,6 +3836,12 @@ void qemu_notify_event(void)
qemu_event_increment();
}
+static void qemu_system_vmstop_request(int reason)
+{
+ vmstop_requested = reason;
+ qemu_notify_event();
+}
+
void vm_stop(int reason)
{
QemuThread me;
@@ -4037,29 +4056,6 @@ static void tcg_cpu_exec(void)
}
}
-static int cpu_has_work(CPUState *env)
-{
- if (env->stop)
- return 1;
- if (env->stopped)
- return 0;
- if (!env->halted)
- return 1;
- if (qemu_cpu_has_work(env))
- return 1;
- return 0;
-}
-
-static int tcg_has_work(void)
-{
- CPUState *env;
-
- for (env = first_cpu; env != NULL; env = env->next_cpu)
- if (cpu_has_work(env))
- return 1;
- return 0;
-}
-
static int qemu_calculate_timeout(void)
{
#ifndef CONFIG_IOTHREAD
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 17/18] disentangle tcg and deadline calculation
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (15 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 16/18] place together more #ifdef CONFIG_IOTHREAD blocks Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
2010-03-10 10:38 ` [Qemu-devel] [PATCH 18/18] split out qemu-timer.c Paolo Bonzini
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Just tell main_loop_wait whether to be blocking or nonblocking, so that
there is no need to call qemu_cpus_have_work from the timer subsystem.
Instead, tcg_cpu_exec can say "we want the main loop not to block because
we have stuff to do".
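Pieced together from the vl.c hunks below, the inner main loop
(non-iothread case) becomes roughly:

    bool nonblocking = false;
#ifndef CONFIG_IOTHREAD
    nonblocking = tcg_cpu_exec();    /* true when some CPU still has work      */
#endif
    main_loop_wait(nonblocking);     /* computes its own timeout when blocking */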
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/xenfb.c | 6 ++++--
sysemu.h | 2 +-
vl.c | 23 +++++++++++++++--------
3 files changed, 20 insertions(+), 11 deletions(-)
diff --git a/hw/xenfb.c b/hw/xenfb.c
index 795a326..422cd53 100644
--- a/hw/xenfb.c
+++ b/hw/xenfb.c
@@ -983,12 +983,14 @@ void xen_init_display(int domid)
wait_more:
i++;
- main_loop_wait(10); /* miliseconds */
+ main_loop_wait(true);
xfb = xen_be_find_xendev("vfb", domid, 0);
xin = xen_be_find_xendev("vkbd", domid, 0);
if (!xfb || !xin) {
- if (i < 256)
+ if (i < 256) {
+ usleep(10000);
goto wait_more;
+ }
xen_be_printf(NULL, 1, "displaystate setup failed\n");
return;
}
diff --git a/sysemu.h b/sysemu.h
index afa11b5..f84413f 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -64,7 +64,7 @@ void cpu_synchronize_all_post_init(void);
void qemu_announce_self(void);
-void main_loop_wait(int timeout);
+void main_loop_wait(int nonblocking);
int qemu_savevm_state_begin(Monitor *mon, QEMUFile *f, int blk_enable,
int shared);
diff --git a/vl.c b/vl.c
index d97da4d..34d39c0 100644
--- a/vl.c
+++ b/vl.c
@@ -618,6 +618,7 @@ struct qemu_alarm_timer {
};
static struct qemu_alarm_timer *alarm_timer;
+static int qemu_calculate_timeout(void);
static inline int qemu_alarm_pending(void)
{
@@ -3597,7 +3598,7 @@ static void *kvm_cpu_thread_fn(void *arg)
return NULL;
}
-static void tcg_cpu_exec(void);
+static bool tcg_cpu_exec(void);
static void *tcg_cpu_thread_fn(void *arg)
{
@@ -3915,14 +3916,20 @@ static void host_main_loop_wait(int *timeout)
}
#endif
-void main_loop_wait(int timeout)
+void main_loop_wait(int nonblocking)
{
IOHandlerRecord *ioh;
fd_set rfds, wfds, xfds;
int ret, nfds;
struct timeval tv;
+ int timeout;
- qemu_bh_update_timeout(&timeout);
+ if (nonblocking)
+ timeout = 0;
+ else {
+ timeout = qemu_calculate_timeout();
+ qemu_bh_update_timeout(&timeout);
+ }
host_main_loop_wait(&timeout);
@@ -4029,7 +4036,7 @@ static int qemu_cpu_exec(CPUState *env)
return ret;
}
-static void tcg_cpu_exec(void)
+static bool tcg_cpu_exec(void)
{
int ret = 0;
@@ -4054,6 +4061,7 @@ static void tcg_cpu_exec(void)
break;
}
}
+ return tcg_has_work();
}
static int qemu_calculate_timeout(void)
@@ -4063,8 +4071,6 @@ static int qemu_calculate_timeout(void)
if (!vm_running)
timeout = 5000;
- else if (tcg_has_work())
- timeout = 0;
else {
/* XXX: use timeout computed from timers */
int64_t add;
@@ -4124,16 +4130,17 @@ static void main_loop(void)
for (;;) {
do {
+ bool nonblocking = false;
#ifdef CONFIG_PROFILER
int64_t ti;
#endif
#ifndef CONFIG_IOTHREAD
- tcg_cpu_exec();
+ nonblocking = tcg_cpu_exec();
#endif
#ifdef CONFIG_PROFILER
ti = profile_getclock();
#endif
- main_loop_wait(qemu_calculate_timeout());
+ main_loop_wait(nonblocking);
#ifdef CONFIG_PROFILER
dev_time += profile_getclock() - ti;
#endif
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* [Qemu-devel] [PATCH 18/18] split out qemu-timer.c
2010-03-10 10:38 [Qemu-devel] [PATCH 00/18] extract qemu-timer.c Paolo Bonzini
` (16 preceding siblings ...)
2010-03-10 10:38 ` [Qemu-devel] [PATCH 17/18] disentangle tcg and deadline calculation Paolo Bonzini
@ 2010-03-10 10:38 ` Paolo Bonzini
17 siblings, 0 replies; 20+ messages in thread
From: Paolo Bonzini @ 2010-03-10 10:38 UTC (permalink / raw)
To: qemu-devel
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Makefile.target | 1 +
cpu-all.h | 2 +
cutils.c | 18 +
qemu-common.h | 1 +
qemu-timer.c | 1203 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
qemu-timer.h | 12 +
vl.c | 1166 -----------------------------------------------------
7 files changed, 1237 insertions(+), 1166 deletions(-)
create mode 100644 qemu-timer.c
diff --git a/Makefile.target b/Makefile.target
index 320f807..99274b4 100644
--- a/Makefile.target
+++ b/Makefile.target
@@ -169,6 +169,7 @@ endif #CONFIG_BSD_USER
ifdef CONFIG_SOFTMMU
obj-y = vl.o async.o monitor.o pci.o pci_host.o pcie_host.o machine.o gdbstub.o
+obj-y += qemu-timer.o
# virtio has to be here due to weird dependency between PCI and virtio-net.
# need to fix this properly
obj-y += virtio-blk.o virtio-balloon.o virtio-net.o virtio-pci.o virtio-serial-bus.o
diff --git a/cpu-all.h b/cpu-all.h
index 9823c24..e89fb90 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -760,6 +760,8 @@ void QEMU_NORETURN cpu_abort(CPUState *env, const char *fmt, ...)
__attribute__ ((__format__ (__printf__, 2, 3)));
extern CPUState *first_cpu;
extern CPUState *cpu_single_env;
+
+int64_t qemu_icount_round(int64_t count);
extern int64_t qemu_icount;
extern int use_icount;
diff --git a/cutils.c b/cutils.c
index 2365e68..036ae3c 100644
--- a/cutils.c
+++ b/cutils.c
@@ -233,3 +233,21 @@ void qemu_iovec_from_buffer(QEMUIOVector *qiov, const void *buf, size_t count)
count -= copy;
}
}
+
+#ifndef _WIN32
+/* Sets a specific flag */
+int fcntl_setfl(int fd, int flag)
+{
+ int flags;
+
+ flags = fcntl(fd, F_GETFL);
+ if (flags == -1)
+ return -errno;
+
+ if (fcntl(fd, F_SETFL, flags | flag) == -1)
+ return -errno;
+
+ return 0;
+}
+#endif
+
diff --git a/qemu-common.h b/qemu-common.h
index 805be1a..63aab46 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -132,6 +132,7 @@ int qemu_strnlen(const char *s, int max_len);
time_t mktimegm(struct tm *tm);
int qemu_fls(int i);
int qemu_fdatasync(int fd);
+int fcntl_setfl(int fd, int flag);
/* path.c */
void init_paths(const char *prefix);
diff --git a/qemu-timer.c b/qemu-timer.c
new file mode 100644
index 0000000..329d3a4
--- /dev/null
+++ b/qemu-timer.c
@@ -0,0 +1,1203 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "sysemu.h"
+#include "net.h"
+#include "monitor.h"
+#include "console.h"
+
+#include "hw/hw.h"
+
+#include <unistd.h>
+#include <fcntl.h>
+#include <time.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <signal.h>
+
+#ifdef __linux__
+#include <sys/ioctl.h>
+#include <linux/rtc.h>
+/* For the benefit of older linux systems which don't supply it,
+ we use a local copy of hpet.h. */
+/* #include <linux/hpet.h> */
+#include "hpet.h"
+#endif
+
+#ifdef _WIN32
+#include <windows.h>
+#include <mmsystem.h>
+#endif
+
+#include "cpu-defs.h"
+#include "qemu-timer.h"
+#include "exec-all.h"
+
+/* Conversion factor from emulated instructions to virtual clock ticks. */
+static int icount_time_shift;
+/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
+#define MAX_ICOUNT_SHIFT 10
+/* Compensate for varying guest execution speed. */
+static int64_t qemu_icount_bias;
+static QEMUTimer *icount_rt_timer;
+static QEMUTimer *icount_vm_timer;
+
+
+/***********************************************************/
+/* real time host monotonic timer */
+
+
+static int64_t get_clock_realtime(void)
+{
+ struct timeval tv;
+
+ gettimeofday(&tv, NULL);
+ return tv.tv_sec * 1000000000LL + (tv.tv_usec * 1000);
+}
+
+#ifdef WIN32
+
+static int64_t clock_freq;
+
+static void init_get_clock(void)
+{
+ LARGE_INTEGER freq;
+ int ret;
+ ret = QueryPerformanceFrequency(&freq);
+ if (ret == 0) {
+ fprintf(stderr, "Could not calibrate ticks\n");
+ exit(1);
+ }
+ clock_freq = freq.QuadPart;
+}
+
+static int64_t get_clock(void)
+{
+ LARGE_INTEGER ti;
+ QueryPerformanceCounter(&ti);
+ return muldiv64(ti.QuadPart, get_ticks_per_sec(), clock_freq);
+}
+
+#else
+
+static int use_rt_clock;
+
+static void init_get_clock(void)
+{
+ use_rt_clock = 0;
+#if defined(__linux__) || (defined(__FreeBSD__) && __FreeBSD_version >= 500000) \
+ || defined(__DragonFly__) || defined(__FreeBSD_kernel__)
+ {
+ struct timespec ts;
+ if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) {
+ use_rt_clock = 1;
+ }
+ }
+#endif
+}
+
+static int64_t get_clock(void)
+{
+#if defined(__linux__) || (defined(__FreeBSD__) && __FreeBSD_version >= 500000) \
+ || defined(__DragonFly__) || defined(__FreeBSD_kernel__)
+ if (use_rt_clock) {
+ struct timespec ts;
+ clock_gettime(CLOCK_MONOTONIC, &ts);
+ return ts.tv_sec * 1000000000LL + ts.tv_nsec;
+ } else
+#endif
+ {
+ /* XXX: using gettimeofday leads to problems if the date
+ changes, so it should be avoided. */
+ return get_clock_realtime();
+ }
+}
+#endif
+
+/* Return the virtual CPU time, based on the instruction counter. */
+static int64_t cpu_get_icount(void)
+{
+ int64_t icount;
+ CPUState *env = cpu_single_env;;
+ icount = qemu_icount;
+ if (env) {
+ if (!can_do_io(env))
+ fprintf(stderr, "Bad clock read\n");
+ icount -= (env->icount_decr.u16.low + env->icount_extra);
+ }
+ return qemu_icount_bias + (icount << icount_time_shift);
+}
+
+/***********************************************************/
+/* guest cycle counter */
+
+typedef struct TimersState {
+ int64_t cpu_ticks_prev;
+ int64_t cpu_ticks_offset;
+ int64_t cpu_clock_offset;
+ int32_t cpu_ticks_enabled;
+ int64_t dummy;
+} TimersState;
+
+TimersState timers_state;
+
+/* return the host CPU cycle counter and handle stop/restart */
+int64_t cpu_get_ticks(void)
+{
+ if (use_icount) {
+ return cpu_get_icount();
+ }
+ if (!timers_state.cpu_ticks_enabled) {
+ return timers_state.cpu_ticks_offset;
+ } else {
+ int64_t ticks;
+ ticks = cpu_get_real_ticks();
+ if (timers_state.cpu_ticks_prev > ticks) {
+ /* Note: non increasing ticks may happen if the host uses
+ software suspend */
+ timers_state.cpu_ticks_offset += timers_state.cpu_ticks_prev - ticks;
+ }
+ timers_state.cpu_ticks_prev = ticks;
+ return ticks + timers_state.cpu_ticks_offset;
+ }
+}
+
+/* return the host CPU monotonic timer and handle stop/restart */
+static int64_t cpu_get_clock(void)
+{
+ int64_t ti;
+ if (!timers_state.cpu_ticks_enabled) {
+ return timers_state.cpu_clock_offset;
+ } else {
+ ti = get_clock();
+ return ti + timers_state.cpu_clock_offset;
+ }
+}
+
+#ifndef CONFIG_IOTHREAD
+static int64_t qemu_icount_delta(void)
+{
+ if (!use_icount) {
+ return 5000 * (int64_t) 1000000;
+ } else if (use_icount == 1) {
+ /* When not using an adaptive execution frequency
+ we tend to get badly out of sync with real time,
+ so just delay for a reasonable amount of time. */
+ return 0;
+ } else {
+ return cpu_get_icount() - cpu_get_clock();
+ }
+}
+#endif
+
+/* enable cpu_get_ticks() */
+void cpu_enable_ticks(void)
+{
+ if (!timers_state.cpu_ticks_enabled) {
+ timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
+ timers_state.cpu_clock_offset -= get_clock();
+ timers_state.cpu_ticks_enabled = 1;
+ }
+}
+
+/* disable cpu_get_ticks() : the clock is stopped. You must not call
+ cpu_get_ticks() after that. */
+void cpu_disable_ticks(void)
+{
+ if (timers_state.cpu_ticks_enabled) {
+ timers_state.cpu_ticks_offset = cpu_get_ticks();
+ timers_state.cpu_clock_offset = cpu_get_clock();
+ timers_state.cpu_ticks_enabled = 0;
+ }
+}
+
+/***********************************************************/
+/* timers */
+
+#define QEMU_CLOCK_REALTIME 0
+#define QEMU_CLOCK_VIRTUAL 1
+#define QEMU_CLOCK_HOST 2
+
+struct QEMUClock {
+ int type;
+ int enabled;
+ /* XXX: add frequency */
+};
+
+struct QEMUTimer {
+ QEMUClock *clock;
+ int64_t expire_time;
+ QEMUTimerCB *cb;
+ void *opaque;
+ struct QEMUTimer *next;
+};
+
+struct qemu_alarm_timer {
+ char const *name;
+ int (*start)(struct qemu_alarm_timer *t);
+ void (*stop)(struct qemu_alarm_timer *t);
+ void (*rearm)(struct qemu_alarm_timer *t);
+ void *priv;
+
+ char expired;
+ char pending;
+};
+
+static struct qemu_alarm_timer *alarm_timer;
+
+int qemu_alarm_pending(void)
+{
+ return alarm_timer->pending;
+}
+
+static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
+{
+ return !!t->rearm;
+}
+
+static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
+{
+ if (!alarm_has_dynticks(t))
+ return;
+
+ t->rearm(t);
+}
+
+/* TODO: MIN_TIMER_REARM_US should be optimized */
+#define MIN_TIMER_REARM_US 250
+
+#ifdef _WIN32
+
+struct qemu_alarm_win32 {
+ MMRESULT timerId;
+ unsigned int period;
+} alarm_win32_data = {0, 0};
+
+static int win32_start_timer(struct qemu_alarm_timer *t);
+static void win32_stop_timer(struct qemu_alarm_timer *t);
+static void win32_rearm_timer(struct qemu_alarm_timer *t);
+
+#else
+
+static int unix_start_timer(struct qemu_alarm_timer *t);
+static void unix_stop_timer(struct qemu_alarm_timer *t);
+
+#ifdef __linux__
+
+static int dynticks_start_timer(struct qemu_alarm_timer *t);
+static void dynticks_stop_timer(struct qemu_alarm_timer *t);
+static void dynticks_rearm_timer(struct qemu_alarm_timer *t);
+
+static int hpet_start_timer(struct qemu_alarm_timer *t);
+static void hpet_stop_timer(struct qemu_alarm_timer *t);
+
+static int rtc_start_timer(struct qemu_alarm_timer *t);
+static void rtc_stop_timer(struct qemu_alarm_timer *t);
+
+#endif /* __linux__ */
+
+#endif /* _WIN32 */
+
+/* Correlation between real and virtual time is always going to be
+ fairly approximate, so ignore small variation.
+ When the guest is idle real and virtual time will be aligned in
+ the IO wait loop. */
+#define ICOUNT_WOBBLE (get_ticks_per_sec() / 10)
+
+static void icount_adjust(void)
+{
+ int64_t cur_time;
+ int64_t cur_icount;
+ int64_t delta;
+ static int64_t last_delta;
+ /* If the VM is not running, then do nothing. */
+ if (!vm_running)
+ return;
+
+ cur_time = cpu_get_clock();
+ cur_icount = qemu_get_clock(vm_clock);
+ delta = cur_icount - cur_time;
+ /* FIXME: This is a very crude algorithm, somewhat prone to oscillation. */
+ if (delta > 0
+ && last_delta + ICOUNT_WOBBLE < delta * 2
+ && icount_time_shift > 0) {
+ /* The guest is getting too far ahead. Slow time down. */
+ icount_time_shift--;
+ }
+ if (delta < 0
+ && last_delta - ICOUNT_WOBBLE > delta * 2
+ && icount_time_shift < MAX_ICOUNT_SHIFT) {
+ /* The guest is getting too far behind. Speed time up. */
+ icount_time_shift++;
+ }
+ last_delta = delta;
+ qemu_icount_bias = cur_icount - (qemu_icount << icount_time_shift);
+}
+
+static void icount_adjust_rt(void * opaque)
+{
+ qemu_mod_timer(icount_rt_timer,
+ qemu_get_clock(rt_clock) + 1000);
+ icount_adjust();
+}
+
+static void icount_adjust_vm(void * opaque)
+{
+ qemu_mod_timer(icount_vm_timer,
+ qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
+ icount_adjust();
+}
+
+int64_t qemu_icount_round(int64_t count)
+{
+ return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
+}
+
+static struct qemu_alarm_timer alarm_timers[] = {
+#ifndef _WIN32
+#ifdef __linux__
+ {"dynticks", dynticks_start_timer,
+ dynticks_stop_timer, dynticks_rearm_timer, NULL},
+ /* HPET - if available - is preferred */
+ {"hpet", hpet_start_timer, hpet_stop_timer, NULL, NULL},
+ /* ...otherwise try RTC */
+ {"rtc", rtc_start_timer, rtc_stop_timer, NULL, NULL},
+#endif
+ {"unix", unix_start_timer, unix_stop_timer, NULL, NULL},
+#else
+ {"dynticks", win32_start_timer,
+ win32_stop_timer, win32_rearm_timer, &alarm_win32_data},
+ {"win32", win32_start_timer,
+ win32_stop_timer, NULL, &alarm_win32_data},
+#endif
+ {NULL, }
+};
+
+static void show_available_alarms(void)
+{
+ int i;
+
+ printf("Available alarm timers, in order of precedence:\n");
+ for (i = 0; alarm_timers[i].name; i++)
+ printf("%s\n", alarm_timers[i].name);
+}
+
+void configure_alarms(char const *opt)
+{
+ int i;
+ int cur = 0;
+ int count = ARRAY_SIZE(alarm_timers) - 1;
+ char *arg;
+ char *name;
+ struct qemu_alarm_timer tmp;
+
+ if (!strcmp(opt, "?")) {
+ show_available_alarms();
+ exit(0);
+ }
+
+ arg = qemu_strdup(opt);
+
+ /* Reorder the array */
+ name = strtok(arg, ",");
+ while (name) {
+ for (i = 0; i < count && alarm_timers[i].name; i++) {
+ if (!strcmp(alarm_timers[i].name, name))
+ break;
+ }
+
+ if (i == count) {
+ fprintf(stderr, "Unknown clock %s\n", name);
+ goto next;
+ }
+
+ if (i < cur)
+ /* Ignore */
+ goto next;
+
+ /* Swap */
+ tmp = alarm_timers[i];
+ alarm_timers[i] = alarm_timers[cur];
+ alarm_timers[cur] = tmp;
+
+ cur++;
+next:
+ name = strtok(NULL, ",");
+ }
+
+ qemu_free(arg);
+
+ if (cur) {
+ /* Disable remaining timers */
+ for (i = cur; i < count; i++)
+ alarm_timers[i].name = NULL;
+ } else {
+ show_available_alarms();
+ exit(1);
+ }
+}
+
+#define QEMU_NUM_CLOCKS 3
+
+QEMUClock *rt_clock;
+QEMUClock *vm_clock;
+QEMUClock *host_clock;
+
+static QEMUTimer *active_timers[QEMU_NUM_CLOCKS];
+
+static QEMUClock *qemu_new_clock(int type)
+{
+ QEMUClock *clock;
+ clock = qemu_mallocz(sizeof(QEMUClock));
+ clock->type = type;
+ clock->enabled = 1;
+ return clock;
+}
+
+void qemu_clock_enable(QEMUClock *clock, int enabled)
+{
+ clock->enabled = enabled;
+}
+
+QEMUTimer *qemu_new_timer(QEMUClock *clock, QEMUTimerCB *cb, void *opaque)
+{
+ QEMUTimer *ts;
+
+ ts = qemu_mallocz(sizeof(QEMUTimer));
+ ts->clock = clock;
+ ts->cb = cb;
+ ts->opaque = opaque;
+ return ts;
+}
+
+void qemu_free_timer(QEMUTimer *ts)
+{
+ qemu_free(ts);
+}
+
+/* stop a timer, but do not dealloc it */
+void qemu_del_timer(QEMUTimer *ts)
+{
+ QEMUTimer **pt, *t;
+
+ /* NOTE: this code must be signal safe because
+ qemu_timer_expired() can be called from a signal. */
+ pt = &active_timers[ts->clock->type];
+ for(;;) {
+ t = *pt;
+ if (!t)
+ break;
+ if (t == ts) {
+ *pt = t->next;
+ break;
+ }
+ pt = &t->next;
+ }
+}
+
+/* modify the current timer so that it will be fired when current_time
+ >= expire_time. The corresponding callback will be called. */
+void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
+{
+ QEMUTimer **pt, *t;
+
+ qemu_del_timer(ts);
+
+ /* add the timer in the sorted list */
+ /* NOTE: this code must be signal safe because
+ qemu_timer_expired() can be called from a signal. */
+ pt = &active_timers[ts->clock->type];
+ for(;;) {
+ t = *pt;
+ if (!t)
+ break;
+ if (t->expire_time > expire_time)
+ break;
+ pt = &t->next;
+ }
+ ts->expire_time = expire_time;
+ ts->next = *pt;
+ *pt = ts;
+
+ /* Rearm if necessary */
+ if (pt == &active_timers[ts->clock->type]) {
+ if (!alarm_timer->pending) {
+ qemu_rearm_alarm_timer(alarm_timer);
+ }
+ /* Interrupt execution to force deadline recalculation. */
+ if (use_icount)
+ qemu_notify_event();
+ }
+}
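
Editorial aside: the expected way to consume this API is the pattern already
used by icount_adjust_rt() earlier in this file -- create the timer once,
arm it with qemu_mod_timer(), and re-arm it from the callback. A usage
sketch with made-up names (my_timer, my_cb), using a one-second period on
rt_clock, which counts milliseconds:

    /* usage sketch only; mirrors the icount_adjust_rt() pattern above */
    static QEMUTimer *my_timer;

    static void my_cb(void *opaque)
    {
        /* ... periodic work ... then re-arm one second from now */
        qemu_mod_timer(my_timer, qemu_get_clock(rt_clock) + 1000);
    }

    static void my_timer_init(void)
    {
        my_timer = qemu_new_timer(rt_clock, my_cb, NULL);
        qemu_mod_timer(my_timer, qemu_get_clock(rt_clock) + 1000);
    }
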
+
+int qemu_timer_pending(QEMUTimer *ts)
+{
+ QEMUTimer *t;
+ for(t = active_timers[ts->clock->type]; t != NULL; t = t->next) {
+ if (t == ts)
+ return 1;
+ }
+ return 0;
+}
+
+int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
+{
+ if (!timer_head)
+ return 0;
+ return (timer_head->expire_time <= current_time);
+}
+
+static void qemu_run_timers(QEMUClock *clock)
+{
+ QEMUTimer **ptimer_head, *ts;
+ int64_t current_time;
+
+ if (!clock->enabled)
+ return;
+
+ current_time = qemu_get_clock (clock);
+ ptimer_head = &active_timers[clock->type];
+ for(;;) {
+ ts = *ptimer_head;
+ if (!ts || ts->expire_time > current_time)
+ break;
+ /* remove timer from the list before calling the callback */
+ *ptimer_head = ts->next;
+ ts->next = NULL;
+
+ /* run the callback (the timer list can be modified) */
+ ts->cb(ts->opaque);
+ }
+}
+
+int64_t qemu_get_clock(QEMUClock *clock)
+{
+ switch(clock->type) {
+ case QEMU_CLOCK_REALTIME:
+ return get_clock() / 1000000;
+ default:
+ case QEMU_CLOCK_VIRTUAL:
+ if (use_icount) {
+ return cpu_get_icount();
+ } else {
+ return cpu_get_clock();
+ }
+ case QEMU_CLOCK_HOST:
+ return get_clock_realtime();
+ }
+}
+
+int64_t qemu_get_clock_ns(QEMUClock *clock)
+{
+ switch(clock->type) {
+ case QEMU_CLOCK_REALTIME:
+ return get_clock();
+ default:
+ case QEMU_CLOCK_VIRTUAL:
+ if (use_icount) {
+ return cpu_get_icount();
+ } else {
+ return cpu_get_clock();
+ }
+ case QEMU_CLOCK_HOST:
+ return get_clock_realtime();
+ }
+}
+
+void init_clocks(void)
+{
+ init_get_clock();
+ rt_clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
+ vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
+ host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
+
+ rtc_clock = host_clock;
+}
+
+/* save a timer */
+void qemu_put_timer(QEMUFile *f, QEMUTimer *ts)
+{
+ uint64_t expire_time;
+
+ if (qemu_timer_pending(ts)) {
+ expire_time = ts->expire_time;
+ } else {
+ expire_time = -1;
+ }
+ qemu_put_be64(f, expire_time);
+}
+
+void qemu_get_timer(QEMUFile *f, QEMUTimer *ts)
+{
+ uint64_t expire_time;
+
+ expire_time = qemu_get_be64(f);
+ if (expire_time != -1) {
+ qemu_mod_timer(ts, expire_time);
+ } else {
+ qemu_del_timer(ts);
+ }
+}
+
+static const VMStateDescription vmstate_timers = {
+ .name = "timer",
+ .version_id = 2,
+ .minimum_version_id = 1,
+ .minimum_version_id_old = 1,
+ .fields = (VMStateField []) {
+ VMSTATE_INT64(cpu_ticks_offset, TimersState),
+ VMSTATE_INT64(dummy, TimersState),
+ VMSTATE_INT64_V(cpu_clock_offset, TimersState, 2),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+void configure_icount(const char *option)
+{
+ vmstate_register(0, &vmstate_timers, &timers_state);
+ if (!option)
+ return;
+
+ if (strcmp(option, "auto") != 0) {
+ icount_time_shift = strtol(option, NULL, 0);
+ use_icount = 1;
+ return;
+ }
+
+ use_icount = 2;
+
+ /* 125MIPS seems a reasonable initial guess at the guest speed.
+ It will be corrected fairly quickly anyway. */
+ icount_time_shift = 3;
+
+ /* Have both realtime and virtual time triggers for speed adjustment.
+ The realtime trigger catches emulated time passing too slowly,
+ the virtual time trigger catches emulated time passing too fast.
+ Realtime triggers occur even when idle, so use them less frequently
+ than VM triggers. */
+ icount_rt_timer = qemu_new_timer(rt_clock, icount_adjust_rt, NULL);
+ qemu_mod_timer(icount_rt_timer,
+ qemu_get_clock(rt_clock) + 1000);
+ icount_vm_timer = qemu_new_timer(vm_clock, icount_adjust_vm, NULL);
+ qemu_mod_timer(icount_vm_timer,
+ qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
+}
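
Editorial note, not part of the patch: as the code above reads, "-icount N"
(anything other than "auto") fixes the scale at 2^N ns of vm_clock per guest
instruction with no further adjustment (use_icount == 1), while
"-icount auto" (use_icount == 2) starts from shift 3 -- 8 ns per
instruction, roughly 125 MIPS -- and lets icount_adjust() retune the shift
once per real second (rt_clock timer) and ten times per guest second
(vm_clock timer).
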
+
+void qemu_run_all_timers(void)
+{
+ /* rearm timer, if not periodic */
+ if (alarm_timer->expired) {
+ alarm_timer->expired = 0;
+ qemu_rearm_alarm_timer(alarm_timer);
+ }
+
+ alarm_timer->pending = 0;
+
+ /* vm time timers */
+ if (vm_running) {
+ qemu_run_timers(vm_clock);
+ }
+
+ qemu_run_timers(rt_clock);
+ qemu_run_timers(host_clock);
+}
+
+#ifdef _WIN32
+static void CALLBACK host_alarm_handler(UINT uTimerID, UINT uMsg,
+ DWORD_PTR dwUser, DWORD_PTR dw1,
+ DWORD_PTR dw2)
+#else
+static void host_alarm_handler(int host_signum)
+#endif
+{
+ struct qemu_alarm_timer *t = alarm_timer;
+ if (!t)
+ return;
+
+#if 0
+#define DISP_FREQ 1000
+ {
+ static int64_t delta_min = INT64_MAX;
+ static int64_t delta_max, delta_cum, last_clock, delta, ti;
+ static int count;
+ ti = qemu_get_clock(vm_clock);
+ if (last_clock != 0) {
+ delta = ti - last_clock;
+ if (delta < delta_min)
+ delta_min = delta;
+ if (delta > delta_max)
+ delta_max = delta;
+ delta_cum += delta;
+ if (++count == DISP_FREQ) {
+ printf("timer: min=%" PRId64 " us max=%" PRId64 " us avg=%" PRId64 " us avg_freq=%0.3f Hz\n",
+ muldiv64(delta_min, 1000000, get_ticks_per_sec()),
+ muldiv64(delta_max, 1000000, get_ticks_per_sec()),
+ muldiv64(delta_cum, 1000000 / DISP_FREQ, get_ticks_per_sec()),
+ (double)get_ticks_per_sec() / ((double)delta_cum / DISP_FREQ));
+ count = 0;
+ delta_min = INT64_MAX;
+ delta_max = 0;
+ delta_cum = 0;
+ }
+ }
+ last_clock = ti;
+ }
+#endif
+ if (alarm_has_dynticks(t) ||
+ (!use_icount &&
+ qemu_timer_expired(active_timers[QEMU_CLOCK_VIRTUAL],
+ qemu_get_clock(vm_clock))) ||
+ qemu_timer_expired(active_timers[QEMU_CLOCK_REALTIME],
+ qemu_get_clock(rt_clock)) ||
+ qemu_timer_expired(active_timers[QEMU_CLOCK_HOST],
+ qemu_get_clock(host_clock))) {
+
+ t->expired = alarm_has_dynticks(t);
+ t->pending = 1;
+ qemu_notify_event();
+ }
+}
+
+int64_t qemu_next_deadline(void)
+{
+ /* To avoid problems with overflow limit this to 2^32. */
+ int64_t delta = INT32_MAX;
+
+ if (active_timers[QEMU_CLOCK_VIRTUAL]) {
+ delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
+ qemu_get_clock(vm_clock);
+ }
+ if (active_timers[QEMU_CLOCK_HOST]) {
+ int64_t hdelta = active_timers[QEMU_CLOCK_HOST]->expire_time -
+ qemu_get_clock(host_clock);
+ if (hdelta < delta)
+ delta = hdelta;
+ }
+
+ if (delta < 0)
+ delta = 0;
+
+ return delta;
+}
+
+#ifndef _WIN32
+
+#if defined(__linux__)
+
+#define RTC_FREQ 1024
+
+static uint64_t qemu_next_deadline_dyntick(void)
+{
+ int64_t delta;
+ int64_t rtdelta;
+
+ if (use_icount)
+ delta = INT32_MAX;
+ else
+ delta = (qemu_next_deadline() + 999) / 1000;
+
+ if (active_timers[QEMU_CLOCK_REALTIME]) {
+ rtdelta = (active_timers[QEMU_CLOCK_REALTIME]->expire_time -
+ qemu_get_clock(rt_clock))*1000;
+ if (rtdelta < delta)
+ delta = rtdelta;
+ }
+
+ if (delta < MIN_TIMER_REARM_US)
+ delta = MIN_TIMER_REARM_US;
+
+ return delta;
+}
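
Editorial note on units, not part of the patch: qemu_next_deadline() returns
nanoseconds, so the (x + 999) / 1000 above rounds it up to microseconds;
rt_clock deadlines are in milliseconds and are multiplied by 1000 for the
same reason; the result is then clamped to MIN_TIMER_REARM_US (250 us) so
the dynticks timer is never armed for an uselessly short interval.
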
+
+static void enable_sigio_timer(int fd)
+{
+ struct sigaction act;
+
+ /* timer signal */
+ sigfillset(&act.sa_mask);
+ act.sa_flags = 0;
+ act.sa_handler = host_alarm_handler;
+
+ sigaction(SIGIO, &act, NULL);
+ fcntl_setfl(fd, O_ASYNC);
+ fcntl(fd, F_SETOWN, getpid());
+}
+
+static int hpet_start_timer(struct qemu_alarm_timer *t)
+{
+ struct hpet_info info;
+ int r, fd;
+
+ fd = qemu_open("/dev/hpet", O_RDONLY);
+ if (fd < 0)
+ return -1;
+
+ /* Set frequency */
+ r = ioctl(fd, HPET_IRQFREQ, RTC_FREQ);
+ if (r < 0) {
+ fprintf(stderr, "Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a fatal\n"
+ "error, but for better emulation accuracy type:\n"
+ "'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root.\n");
+ goto fail;
+ }
+
+ /* Check capabilities */
+ r = ioctl(fd, HPET_INFO, &info);
+ if (r < 0)
+ goto fail;
+
+ /* Enable periodic mode */
+ r = ioctl(fd, HPET_EPI, 0);
+ if (info.hi_flags && (r < 0))
+ goto fail;
+
+ /* Enable interrupt */
+ r = ioctl(fd, HPET_IE_ON, 0);
+ if (r < 0)
+ goto fail;
+
+ enable_sigio_timer(fd);
+ t->priv = (void *)(long)fd;
+
+ return 0;
+fail:
+ close(fd);
+ return -1;
+}
+
+static void hpet_stop_timer(struct qemu_alarm_timer *t)
+{
+ int fd = (long)t->priv;
+
+ close(fd);
+}
+
+static int rtc_start_timer(struct qemu_alarm_timer *t)
+{
+ int rtc_fd;
+ unsigned long current_rtc_freq = 0;
+
+ TFR(rtc_fd = qemu_open("/dev/rtc", O_RDONLY));
+ if (rtc_fd < 0)
+ return -1;
+ ioctl(rtc_fd, RTC_IRQP_READ, &current_rtc_freq);

+ if (current_rtc_freq != RTC_FREQ &&
+ ioctl(rtc_fd, RTC_IRQP_SET, RTC_FREQ) < 0) {
+ fprintf(stderr, "Could not configure '/dev/rtc' to have a 1024 Hz timer. This is not a fatal\n"
+ "error, but for better emulation accuracy either use a 2.6 host Linux kernel or\n"
+ "type 'echo 1024 > /proc/sys/dev/rtc/max-user-freq' as root.\n");
+ goto fail;
+ }
+ if (ioctl(rtc_fd, RTC_PIE_ON, 0) < 0) {
+ fail:
+ close(rtc_fd);
+ return -1;
+ }
+
+ enable_sigio_timer(rtc_fd);
+
+ t->priv = (void *)(long)rtc_fd;
+
+ return 0;
+}
+
+static void rtc_stop_timer(struct qemu_alarm_timer *t)
+{
+ int rtc_fd = (long)t->priv;
+
+ close(rtc_fd);
+}
+
+static int dynticks_start_timer(struct qemu_alarm_timer *t)
+{
+ struct sigevent ev;
+ timer_t host_timer;
+ struct sigaction act;
+
+ sigfillset(&act.sa_mask);
+ act.sa_flags = 0;
+ act.sa_handler = host_alarm_handler;
+
+ sigaction(SIGALRM, &act, NULL);
+
+ /*
+ * Initialize ev struct to 0 to avoid valgrind complaining
+ * about uninitialized data in timer_create call
+ */
+ memset(&ev, 0, sizeof(ev));
+ ev.sigev_value.sival_int = 0;
+ ev.sigev_notify = SIGEV_SIGNAL;
+ ev.sigev_signo = SIGALRM;
+
+ if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
+ perror("timer_create");
+
+ /* disable dynticks */
+ fprintf(stderr, "Dynamic Ticks disabled\n");
+
+ return -1;
+ }
+
+ t->priv = (void *)(long)host_timer;
+
+ return 0;
+}
+
+static void dynticks_stop_timer(struct qemu_alarm_timer *t)
+{
+ timer_t host_timer = (timer_t)(long)t->priv;
+
+ timer_delete(host_timer);
+}
+
+static void dynticks_rearm_timer(struct qemu_alarm_timer *t)
+{
+ timer_t host_timer = (timer_t)(long)t->priv;
+ struct itimerspec timeout;
+ int64_t nearest_delta_us = INT64_MAX;
+ int64_t current_us;
+
+ assert(alarm_has_dynticks(t));
+ if (!active_timers[QEMU_CLOCK_REALTIME] &&
+ !active_timers[QEMU_CLOCK_VIRTUAL] &&
+ !active_timers[QEMU_CLOCK_HOST])
+ return;
+
+ nearest_delta_us = qemu_next_deadline_dyntick();
+
+ /* check whether a timer is already running */
+ if (timer_gettime(host_timer, &timeout)) {
+ perror("gettime");
+ fprintf(stderr, "Internal timer error: aborting\n");
+ exit(1);
+ }
+ current_us = timeout.it_value.tv_sec * 1000000 + timeout.it_value.tv_nsec/1000;
+ if (current_us && current_us <= nearest_delta_us)
+ return;
+
+ timeout.it_interval.tv_sec = 0;
+ timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
+ timeout.it_value.tv_sec = nearest_delta_us / 1000000;
+ timeout.it_value.tv_nsec = (nearest_delta_us % 1000000) * 1000;
+ if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
+ perror("settime");
+ fprintf(stderr, "Internal timer error: aborting\n");
+ exit(1);
+ }
+}
+
+#endif /* defined(__linux__) */
+
+static int unix_start_timer(struct qemu_alarm_timer *t)
+{
+ struct sigaction act;
+ struct itimerval itv;
+ int err;
+
+ /* timer signal */
+ sigfillset(&act.sa_mask);
+ act.sa_flags = 0;
+ act.sa_handler = host_alarm_handler;
+
+ sigaction(SIGALRM, &act, NULL);
+
+ itv.it_interval.tv_sec = 0;
+ /* for i386 kernel 2.6 to get 1 ms */
+ itv.it_interval.tv_usec = 999;
+ itv.it_value.tv_sec = 0;
+ itv.it_value.tv_usec = 10 * 1000;
+
+ err = setitimer(ITIMER_REAL, &itv, NULL);
+ if (err)
+ return -1;
+
+ return 0;
+}
+
+static void unix_stop_timer(struct qemu_alarm_timer *t)
+{
+ struct itimerval itv;
+
+ memset(&itv, 0, sizeof(itv));
+ setitimer(ITIMER_REAL, &itv, NULL);
+}
+
+#endif /* !defined(_WIN32) */
+
+
+#ifdef _WIN32
+
+static int win32_start_timer(struct qemu_alarm_timer *t)
+{
+ TIMECAPS tc;
+ struct qemu_alarm_win32 *data = t->priv;
+ UINT flags;
+
+ memset(&tc, 0, sizeof(tc));
+ timeGetDevCaps(&tc, sizeof(tc));
+
+ data->period = tc.wPeriodMin;
+ timeBeginPeriod(data->period);
+
+ flags = TIME_CALLBACK_FUNCTION;
+ if (alarm_has_dynticks(t))
+ flags |= TIME_ONESHOT;
+ else
+ flags |= TIME_PERIODIC;
+
+ data->timerId = timeSetEvent(1, // interval (ms)
+ data->period, // resolution
+ host_alarm_handler, // function
+ (DWORD)t, // parameter
+ flags);
+
+ if (!data->timerId) {
+ fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
+ GetLastError());
+ timeEndPeriod(data->period);
+ return -1;
+ }
+
+ return 0;
+}
+
+static void win32_stop_timer(struct qemu_alarm_timer *t)
+{
+ struct qemu_alarm_win32 *data = t->priv;
+
+ timeKillEvent(data->timerId);
+ timeEndPeriod(data->period);
+}
+
+static void win32_rearm_timer(struct qemu_alarm_timer *t)
+{
+ struct qemu_alarm_win32 *data = t->priv;
+
+ assert(alarm_has_dynticks(t));
+ if (!active_timers[QEMU_CLOCK_REALTIME] &&
+ !active_timers[QEMU_CLOCK_VIRTUAL] &&
+ !active_timers[QEMU_CLOCK_HOST])
+ return;
+
+ timeKillEvent(data->timerId);
+
+ data->timerId = timeSetEvent(1,
+ data->period,
+ host_alarm_handler,
+ (DWORD)t,
+ TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
+
+ if (!data->timerId) {
+ fprintf(stderr, "Failed to re-arm win32 alarm timer %ld\n",
+ GetLastError());
+
+ timeEndPeriod(data->period);
+ exit(1);
+ }
+}
+
+#endif /* _WIN32 */
+
+static void alarm_timer_on_change_state_rearm(void *opaque, int running, int reason)
+{
+ if (running)
+ qemu_rearm_alarm_timer((struct qemu_alarm_timer *) opaque);
+}
+
+int init_timer_alarm(void)
+{
+ struct qemu_alarm_timer *t = NULL;
+ int i, err = -1;
+
+ for (i = 0; alarm_timers[i].name; i++) {
+ t = &alarm_timers[i];
+
+ err = t->start(t);
+ if (!err)
+ break;
+ }
+
+ if (err) {
+ err = -ENOENT;
+ goto fail;
+ }
+
+ /* first event is at time 0 */
+ t->pending = 1;
+ alarm_timer = t;
+ qemu_add_vm_change_state_handler(alarm_timer_on_change_state_rearm, t);
+
+ return 0;
+
+fail:
+ return err;
+}
+
+void quit_timers(void)
+{
+ struct qemu_alarm_timer *t = alarm_timer;
+ alarm_timer = NULL;
+ t->stop(t);
+}
+
+int qemu_calculate_timeout(void)
+{
+#ifndef CONFIG_IOTHREAD
+ int timeout;
+
+ if (!vm_running)
+ timeout = 5000;
+ else {
+ /* XXX: use timeout computed from timers */
+ int64_t add;
+ int64_t delta;
+ /* Advance virtual time to the next event. */
+ delta = qemu_icount_delta();
+ if (delta > 0) {
+ /* If virtual time is ahead of real time then just
+ wait for IO. */
+ timeout = (delta + 999999) / 1000000;
+ } else {
+ /* Wait for either IO to occur or the next
+ timer event. */
+ add = qemu_next_deadline();
+ /* We advance the timer before checking for IO.
+ Limit the amount we advance so that early IO
+ activity won't get the guest too far ahead. */
+ if (add > 10000000)
+ add = 10000000;
+ delta += add;
+ qemu_icount += qemu_icount_round (add);
+ timeout = delta / 1000000;
+ if (timeout < 0)
+ timeout = 0;
+ }
+ }
+
+ return timeout;
+#else /* CONFIG_IOTHREAD */
+ return 1000;
+#endif
+}
+
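
Editorial aside, not part of the patch: a standalone sketch of the
non-CONFIG_IOTHREAD timeout arithmetic above, on two example inputs; the
real code also credits qemu_icount for the advance, which is omitted here.

    /* sketch only, not QEMU code; deltas in ns, returned timeout in ms */
    #include <stdio.h>
    #include <stdint.h>

    static int timeout_ms(int64_t delta_ns, int64_t next_deadline_ns)
    {
        if (delta_ns > 0)                        /* virtual time ahead: just wait */
            return (delta_ns + 999999) / 1000000;
        int64_t add = next_deadline_ns;
        if (add > 10000000)                      /* advance at most 10 ms */
            add = 10000000;
        delta_ns += add;
        return delta_ns < 0 ? 0 : delta_ns / 1000000;
    }

    int main(void)
    {
        printf("%d\n", timeout_ms(3000000, 0));         /* 3 ms ahead -> 3 */
        printf("%d\n", timeout_ms(-2000000, 50000000)); /* behind, far deadline -> 8 */
        return 0;
    }
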
diff --git a/qemu-timer.h b/qemu-timer.h
index c17b4e6..fca11eb 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -26,6 +26,7 @@ extern QEMUClock *host_clock;
int64_t qemu_get_clock(QEMUClock *clock);
int64_t qemu_get_clock_ns(QEMUClock *clock);
+void qemu_clock_enable(QEMUClock *clock, int enabled);
QEMUTimer *qemu_new_timer(QEMUClock *clock, QEMUTimerCB *cb, void *opaque);
void qemu_free_timer(QEMUTimer *ts);
@@ -34,11 +35,22 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time);
int qemu_timer_pending(QEMUTimer *ts);
int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
+void qemu_run_all_timers(void);
+int qemu_alarm_pending(void);
+int64_t qemu_next_deadline(void);
+void configure_alarms(char const *opt);
+void configure_icount(const char *option);
+int qemu_calculate_timeout(void);
+void init_clocks(void);
+int init_timer_alarm(void);
+void quit_timers(void);
+
static inline int64_t get_ticks_per_sec(void)
{
return 1000000000LL;
}
+
void qemu_get_timer(QEMUFile *f, QEMUTimer *ts);
void qemu_put_timer(QEMUFile *f, QEMUTimer *ts);
diff --git a/vl.c b/vl.c
index 34d39c0..d81bb71 100644
--- a/vl.c
+++ b/vl.c
@@ -59,14 +59,8 @@
#ifdef __linux__
#include <pty.h>
#include <malloc.h>
-#include <linux/rtc.h>
#include <sys/prctl.h>
-/* For the benefit of older linux systems which don't supply it,
- we use a local copy of hpet.h. */
-/* #include <linux/hpet.h> */
-#include "hpet.h"
-
#include <linux/ppdev.h>
#include <linux/parport.h>
#endif
@@ -101,7 +95,6 @@ extern int madvise(caddr_t, size_t, int);
#ifdef _WIN32
#include <windows.h>
-#include <mmsystem.h>
#endif
#ifdef CONFIG_SDL
@@ -258,14 +251,6 @@ uint64_t node_cpumask[MAX_NODES];
static CPUState *cur_cpu;
static CPUState *next_cpu;
-/* Conversion factor from emulated instructions to virtual clock ticks. */
-static int icount_time_shift;
-/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
-#define MAX_ICOUNT_SHIFT 10
-/* Compensate for varying guest execution speed. */
-static int64_t qemu_icount_bias;
-static QEMUTimer *icount_rt_timer;
-static QEMUTimer *icount_vm_timer;
static QEMUTimer *nographic_timer;
uint8_t qemu_uuid[16];
@@ -421,1117 +406,6 @@ uint64_t muldiv64(uint64_t a, uint32_t b, uint32_t c)
return res.ll;
}
-static int64_t get_clock_realtime(void)
-{
- struct timeval tv;
-
- gettimeofday(&tv, NULL);
- return tv.tv_sec * 1000000000LL + (tv.tv_usec * 1000);
-}
-
-#ifdef WIN32
-
-static int64_t clock_freq;
-
-static void init_get_clock(void)
-{
- LARGE_INTEGER freq;
- int ret;
- ret = QueryPerformanceFrequency(&freq);
- if (ret == 0) {
- fprintf(stderr, "Could not calibrate ticks\n");
- exit(1);
- }
- clock_freq = freq.QuadPart;
-}
-
-static int64_t get_clock(void)
-{
- LARGE_INTEGER ti;
- QueryPerformanceCounter(&ti);
- return muldiv64(ti.QuadPart, get_ticks_per_sec(), clock_freq);
-}
-
-#else
-
-static int use_rt_clock;
-
-static void init_get_clock(void)
-{
- use_rt_clock = 0;
-#if defined(__linux__) || (defined(__FreeBSD__) && __FreeBSD_version >= 500000) \
- || defined(__DragonFly__) || defined(__FreeBSD_kernel__)
- {
- struct timespec ts;
- if (clock_gettime(CLOCK_MONOTONIC, &ts) == 0) {
- use_rt_clock = 1;
- }
- }
-#endif
-}
-
-static int64_t get_clock(void)
-{
-#if defined(__linux__) || (defined(__FreeBSD__) && __FreeBSD_version >= 500000) \
- || defined(__DragonFly__) || defined(__FreeBSD_kernel__)
- if (use_rt_clock) {
- struct timespec ts;
- clock_gettime(CLOCK_MONOTONIC, &ts);
- return ts.tv_sec * 1000000000LL + ts.tv_nsec;
- } else
-#endif
- {
- /* XXX: using gettimeofday leads to problems if the date
- changes, so it should be avoided. */
- return get_clock_realtime();
- }
-}
-#endif
-
-/* Return the virtual CPU time, based on the instruction counter. */
-static int64_t cpu_get_icount(void)
-{
- int64_t icount;
- CPUState *env = cpu_single_env;;
- icount = qemu_icount;
- if (env) {
- if (!can_do_io(env))
- fprintf(stderr, "Bad clock read\n");
- icount -= (env->icount_decr.u16.low + env->icount_extra);
- }
- return qemu_icount_bias + (icount << icount_time_shift);
-}
-
-/***********************************************************/
-/* guest cycle counter */
-
-typedef struct TimersState {
- int64_t cpu_ticks_prev;
- int64_t cpu_ticks_offset;
- int64_t cpu_clock_offset;
- int32_t cpu_ticks_enabled;
- int64_t dummy;
-} TimersState;
-
-TimersState timers_state;
-
-/* return the host CPU cycle counter and handle stop/restart */
-int64_t cpu_get_ticks(void)
-{
- if (use_icount) {
- return cpu_get_icount();
- }
- if (!timers_state.cpu_ticks_enabled) {
- return timers_state.cpu_ticks_offset;
- } else {
- int64_t ticks;
- ticks = cpu_get_real_ticks();
- if (timers_state.cpu_ticks_prev > ticks) {
- /* Note: non increasing ticks may happen if the host uses
- software suspend */
- timers_state.cpu_ticks_offset += timers_state.cpu_ticks_prev - ticks;
- }
- timers_state.cpu_ticks_prev = ticks;
- return ticks + timers_state.cpu_ticks_offset;
- }
-}
-
-/* return the host CPU monotonic timer and handle stop/restart */
-static int64_t cpu_get_clock(void)
-{
- int64_t ti;
- if (!timers_state.cpu_ticks_enabled) {
- return timers_state.cpu_clock_offset;
- } else {
- ti = get_clock();
- return ti + timers_state.cpu_clock_offset;
- }
-}
-
-#ifndef CONFIG_IOTHREAD
-static int64_t qemu_icount_delta(void)
-{
- if (!use_icount) {
- return 5000 * (int64_t) 1000000;
- } else if (use_icount == 1) {
- /* When not using an adaptive execution frequency
- we tend to get badly out of sync with real time,
- so just delay for a reasonable amount of time. */
- return 0;
- } else {
- return cpu_get_icount() - cpu_get_clock();
- }
-}
-#endif
-
-/* enable cpu_get_ticks() */
-void cpu_enable_ticks(void)
-{
- if (!timers_state.cpu_ticks_enabled) {
- timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
- timers_state.cpu_clock_offset -= get_clock();
- timers_state.cpu_ticks_enabled = 1;
- }
-}
-
-/* disable cpu_get_ticks() : the clock is stopped. You must not call
- cpu_get_ticks() after that. */
-void cpu_disable_ticks(void)
-{
- if (timers_state.cpu_ticks_enabled) {
- timers_state.cpu_ticks_offset = cpu_get_ticks();
- timers_state.cpu_clock_offset = cpu_get_clock();
- timers_state.cpu_ticks_enabled = 0;
- }
-}
-
-/***********************************************************/
-/* timers */
-
-#define QEMU_CLOCK_REALTIME 0
-#define QEMU_CLOCK_VIRTUAL 1
-#define QEMU_CLOCK_HOST 2
-
-struct QEMUClock {
- int type;
- int enabled;
- /* XXX: add frequency */
-};
-
-struct QEMUTimer {
- QEMUClock *clock;
- int64_t expire_time;
- QEMUTimerCB *cb;
- void *opaque;
- struct QEMUTimer *next;
-};
-
-struct qemu_alarm_timer {
- char const *name;
- int (*start)(struct qemu_alarm_timer *t);
- void (*stop)(struct qemu_alarm_timer *t);
- void (*rearm)(struct qemu_alarm_timer *t);
- void *priv;
-
- char expired;
- char pending;
-};
-
-static struct qemu_alarm_timer *alarm_timer;
-static int qemu_calculate_timeout(void);
-
-static inline int qemu_alarm_pending(void)
-{
- return alarm_timer->pending;
-}
-
-static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
-{
- return !!t->rearm;
-}
-
-static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
-{
- if (!alarm_has_dynticks(t))
- return;
-
- t->rearm(t);
-}
-
-/* TODO: MIN_TIMER_REARM_US should be optimized */
-#define MIN_TIMER_REARM_US 250
-
-#ifdef _WIN32
-
-struct qemu_alarm_win32 {
- MMRESULT timerId;
- unsigned int period;
-} alarm_win32_data = {0, 0};
-
-static int win32_start_timer(struct qemu_alarm_timer *t);
-static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t);
-
-#else
-
-static int unix_start_timer(struct qemu_alarm_timer *t);
-static void unix_stop_timer(struct qemu_alarm_timer *t);
-
-#ifdef __linux__
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t);
-static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t);
-
-static int hpet_start_timer(struct qemu_alarm_timer *t);
-static void hpet_stop_timer(struct qemu_alarm_timer *t);
-
-static int rtc_start_timer(struct qemu_alarm_timer *t);
-static void rtc_stop_timer(struct qemu_alarm_timer *t);
-
-#endif /* __linux__ */
-
-#endif /* _WIN32 */
-
-/* Correlation between real and virtual time is always going to be
- fairly approximate, so ignore small variation.
- When the guest is idle real and virtual time will be aligned in
- the IO wait loop. */
-#define ICOUNT_WOBBLE (get_ticks_per_sec() / 10)
-
-static void icount_adjust(void)
-{
- int64_t cur_time;
- int64_t cur_icount;
- int64_t delta;
- static int64_t last_delta;
- /* If the VM is not running, then do nothing. */
- if (!vm_running)
- return;
-
- cur_time = cpu_get_clock();
- cur_icount = qemu_get_clock(vm_clock);
- delta = cur_icount - cur_time;
- /* FIXME: This is a very crude algorithm, somewhat prone to oscillation. */
- if (delta > 0
- && last_delta + ICOUNT_WOBBLE < delta * 2
- && icount_time_shift > 0) {
- /* The guest is getting too far ahead. Slow time down. */
- icount_time_shift--;
- }
- if (delta < 0
- && last_delta - ICOUNT_WOBBLE > delta * 2
- && icount_time_shift < MAX_ICOUNT_SHIFT) {
- /* The guest is getting too far behind. Speed time up. */
- icount_time_shift++;
- }
- last_delta = delta;
- qemu_icount_bias = cur_icount - (qemu_icount << icount_time_shift);
-}
-
-static void icount_adjust_rt(void * opaque)
-{
- qemu_mod_timer(icount_rt_timer,
- qemu_get_clock(rt_clock) + 1000);
- icount_adjust();
-}
-
-static void icount_adjust_vm(void * opaque)
-{
- qemu_mod_timer(icount_vm_timer,
- qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
- icount_adjust();
-}
-
-static int64_t qemu_icount_round(int64_t count)
-{
- return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
-}
-
-static struct qemu_alarm_timer alarm_timers[] = {
-#ifndef _WIN32
-#ifdef __linux__
- {"dynticks", dynticks_start_timer,
- dynticks_stop_timer, dynticks_rearm_timer, NULL},
- /* HPET - if available - is preferred */
- {"hpet", hpet_start_timer, hpet_stop_timer, NULL, NULL},
- /* ...otherwise try RTC */
- {"rtc", rtc_start_timer, rtc_stop_timer, NULL, NULL},
-#endif
- {"unix", unix_start_timer, unix_stop_timer, NULL, NULL},
-#else
- {"dynticks", win32_start_timer,
- win32_stop_timer, win32_rearm_timer, &alarm_win32_data},
- {"win32", win32_start_timer,
- win32_stop_timer, NULL, &alarm_win32_data},
-#endif
- {NULL, }
-};
-
-static void show_available_alarms(void)
-{
- int i;
-
- printf("Available alarm timers, in order of precedence:\n");
- for (i = 0; alarm_timers[i].name; i++)
- printf("%s\n", alarm_timers[i].name);
-}
-
-static void configure_alarms(char const *opt)
-{
- int i;
- int cur = 0;
- int count = ARRAY_SIZE(alarm_timers) - 1;
- char *arg;
- char *name;
- struct qemu_alarm_timer tmp;
-
- if (!strcmp(opt, "?")) {
- show_available_alarms();
- exit(0);
- }
-
- arg = qemu_strdup(opt);
-
- /* Reorder the array */
- name = strtok(arg, ",");
- while (name) {
- for (i = 0; i < count && alarm_timers[i].name; i++) {
- if (!strcmp(alarm_timers[i].name, name))
- break;
- }
-
- if (i == count) {
- fprintf(stderr, "Unknown clock %s\n", name);
- goto next;
- }
-
- if (i < cur)
- /* Ignore */
- goto next;
-
- /* Swap */
- tmp = alarm_timers[i];
- alarm_timers[i] = alarm_timers[cur];
- alarm_timers[cur] = tmp;
-
- cur++;
-next:
- name = strtok(NULL, ",");
- }
-
- qemu_free(arg);
-
- if (cur) {
- /* Disable remaining timers */
- for (i = cur; i < count; i++)
- alarm_timers[i].name = NULL;
- } else {
- show_available_alarms();
- exit(1);
- }
-}
-
-#define QEMU_NUM_CLOCKS 3
-
-QEMUClock *rt_clock;
-QEMUClock *vm_clock;
-QEMUClock *host_clock;
-
-static QEMUTimer *active_timers[QEMU_NUM_CLOCKS];
-
-static QEMUClock *qemu_new_clock(int type)
-{
- QEMUClock *clock;
- clock = qemu_mallocz(sizeof(QEMUClock));
- clock->type = type;
- clock->enabled = 1;
- return clock;
-}
-
-static void qemu_clock_enable(QEMUClock *clock, int enabled)
-{
- clock->enabled = enabled;
-}
-
-QEMUTimer *qemu_new_timer(QEMUClock *clock, QEMUTimerCB *cb, void *opaque)
-{
- QEMUTimer *ts;
-
- ts = qemu_mallocz(sizeof(QEMUTimer));
- ts->clock = clock;
- ts->cb = cb;
- ts->opaque = opaque;
- return ts;
-}
-
-void qemu_free_timer(QEMUTimer *ts)
-{
- qemu_free(ts);
-}
-
-/* stop a timer, but do not dealloc it */
-void qemu_del_timer(QEMUTimer *ts)
-{
- QEMUTimer **pt, *t;
-
- /* NOTE: this code must be signal safe because
- qemu_timer_expired() can be called from a signal. */
- pt = &active_timers[ts->clock->type];
- for(;;) {
- t = *pt;
- if (!t)
- break;
- if (t == ts) {
- *pt = t->next;
- break;
- }
- pt = &t->next;
- }
-}
-
-/* modify the current timer so that it will be fired when current_time
- >= expire_time. The corresponding callback will be called. */
-void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
-{
- QEMUTimer **pt, *t;
-
- qemu_del_timer(ts);
-
- /* add the timer in the sorted list */
- /* NOTE: this code must be signal safe because
- qemu_timer_expired() can be called from a signal. */
- pt = &active_timers[ts->clock->type];
- for(;;) {
- t = *pt;
- if (!t)
- break;
- if (t->expire_time > expire_time)
- break;
- pt = &t->next;
- }
- ts->expire_time = expire_time;
- ts->next = *pt;
- *pt = ts;
-
- /* Rearm if necessary */
- if (pt == &active_timers[ts->clock->type]) {
- if (!alarm_timer->pending) {
- qemu_rearm_alarm_timer(alarm_timer);
- }
- /* Interrupt execution to force deadline recalculation. */
- if (use_icount)
- qemu_notify_event();
- }
-}
-
-int qemu_timer_pending(QEMUTimer *ts)
-{
- QEMUTimer *t;
- for(t = active_timers[ts->clock->type]; t != NULL; t = t->next) {
- if (t == ts)
- return 1;
- }
- return 0;
-}
-
-int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time)
-{
- if (!timer_head)
- return 0;
- return (timer_head->expire_time <= current_time);
-}
-
-static void qemu_run_timers(QEMUClock *clock)
-{
- QEMUTimer **ptimer_head, *ts;
- int64_t current_time;
-
- if (!clock->enabled)
- return;
-
- current_time = qemu_get_clock (clock);
- ptimer_head = &active_timers[clock->type];
- for(;;) {
- ts = *ptimer_head;
- if (!ts || ts->expire_time > current_time)
- break;
- /* remove timer from the list before calling the callback */
- *ptimer_head = ts->next;
- ts->next = NULL;
-
- /* run the callback (the timer list can be modified) */
- ts->cb(ts->opaque);
- }
-}
-
-int64_t qemu_get_clock(QEMUClock *clock)
-{
- switch(clock->type) {
- case QEMU_CLOCK_REALTIME:
- return get_clock() / 1000000;
- default:
- case QEMU_CLOCK_VIRTUAL:
- if (use_icount) {
- return cpu_get_icount();
- } else {
- return cpu_get_clock();
- }
- case QEMU_CLOCK_HOST:
- return get_clock_realtime();
- }
-}
-
-int64_t qemu_get_clock_ns(QEMUClock *clock)
-{
- switch(clock->type) {
- case QEMU_CLOCK_REALTIME:
- return get_clock();
- default:
- case QEMU_CLOCK_VIRTUAL:
- if (use_icount) {
- return cpu_get_icount();
- } else {
- return cpu_get_clock();
- }
- case QEMU_CLOCK_HOST:
- return get_clock_realtime();
- }
-}
-
-static void init_clocks(void)
-{
- init_get_clock();
- rt_clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
- vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
- host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
-
- rtc_clock = host_clock;
-}
-
-/* save a timer */
-void qemu_put_timer(QEMUFile *f, QEMUTimer *ts)
-{
- uint64_t expire_time;
-
- if (qemu_timer_pending(ts)) {
- expire_time = ts->expire_time;
- } else {
- expire_time = -1;
- }
- qemu_put_be64(f, expire_time);
-}
-
-void qemu_get_timer(QEMUFile *f, QEMUTimer *ts)
-{
- uint64_t expire_time;
-
- expire_time = qemu_get_be64(f);
- if (expire_time != -1) {
- qemu_mod_timer(ts, expire_time);
- } else {
- qemu_del_timer(ts);
- }
-}
-
-static const VMStateDescription vmstate_timers = {
- .name = "timer",
- .version_id = 2,
- .minimum_version_id = 1,
- .minimum_version_id_old = 1,
- .fields = (VMStateField []) {
- VMSTATE_INT64(cpu_ticks_offset, TimersState),
- VMSTATE_INT64(dummy, TimersState),
- VMSTATE_INT64_V(cpu_clock_offset, TimersState, 2),
- VMSTATE_END_OF_LIST()
- }
-};
-
-static void configure_icount(const char *option)
-{
- vmstate_register(0, &vmstate_timers, &timers_state);
- if (!option)
- return;
-
- if (strcmp(option, "auto") != 0) {
- icount_time_shift = strtol(option, NULL, 0);
- use_icount = 1;
- return;
- }
-
- use_icount = 2;
-
- /* 125MIPS seems a reasonable initial guess at the guest speed.
- It will be corrected fairly quickly anyway. */
- icount_time_shift = 3;
-
- /* Have both realtime and virtual time triggers for speed adjustment.
- The realtime trigger catches emulated time passing too slowly,
- the virtual time trigger catches emulated time passing too fast.
- Realtime triggers occur even when idle, so use them less frequently
- than VM triggers. */
- icount_rt_timer = qemu_new_timer(rt_clock, icount_adjust_rt, NULL);
- qemu_mod_timer(icount_rt_timer,
- qemu_get_clock(rt_clock) + 1000);
- icount_vm_timer = qemu_new_timer(vm_clock, icount_adjust_vm, NULL);
- qemu_mod_timer(icount_vm_timer,
- qemu_get_clock(vm_clock) + get_ticks_per_sec() / 10);
-}
-
-static void qemu_run_all_timers(void)
-{
- /* rearm timer, if not periodic */
- if (alarm_timer->expired) {
- alarm_timer->expired = 0;
- qemu_rearm_alarm_timer(alarm_timer);
- }
-
- alarm_timer->pending = 0;
-
- /* vm time timers */
- if (vm_running) {
- qemu_run_timers(vm_clock);
- }
-
- qemu_run_timers(rt_clock);
- qemu_run_timers(host_clock);
-}
-
-#ifdef _WIN32
-static void CALLBACK host_alarm_handler(UINT uTimerID, UINT uMsg,
- DWORD_PTR dwUser, DWORD_PTR dw1,
- DWORD_PTR dw2)
-#else
-static void host_alarm_handler(int host_signum)
-#endif
-{
- struct qemu_alarm_timer *t = alarm_timer;
- if (!t)
- return;
-
-#if 0
-#define DISP_FREQ 1000
- {
- static int64_t delta_min = INT64_MAX;
- static int64_t delta_max, delta_cum, last_clock, delta, ti;
- static int count;
- ti = qemu_get_clock(vm_clock);
- if (last_clock != 0) {
- delta = ti - last_clock;
- if (delta < delta_min)
- delta_min = delta;
- if (delta > delta_max)
- delta_max = delta;
- delta_cum += delta;
- if (++count == DISP_FREQ) {
- printf("timer: min=%" PRId64 " us max=%" PRId64 " us avg=%" PRId64 " us avg_freq=%0.3f Hz\n",
- muldiv64(delta_min, 1000000, get_ticks_per_sec()),
- muldiv64(delta_max, 1000000, get_ticks_per_sec()),
- muldiv64(delta_cum, 1000000 / DISP_FREQ, get_ticks_per_sec()),
- (double)get_ticks_per_sec() / ((double)delta_cum / DISP_FREQ));
- count = 0;
- delta_min = INT64_MAX;
- delta_max = 0;
- delta_cum = 0;
- }
- }
- last_clock = ti;
- }
-#endif
- if (alarm_has_dynticks(t) ||
- (!use_icount &&
- qemu_timer_expired(active_timers[QEMU_CLOCK_VIRTUAL],
- qemu_get_clock(vm_clock))) ||
- qemu_timer_expired(active_timers[QEMU_CLOCK_REALTIME],
- qemu_get_clock(rt_clock)) ||
- qemu_timer_expired(active_timers[QEMU_CLOCK_HOST],
- qemu_get_clock(host_clock))) {
-
- t->expired = alarm_has_dynticks(t);
- t->pending = 1;
- qemu_notify_event();
- }
-}
-
-static int64_t qemu_next_deadline(void)
-{
- /* To avoid problems with overflow limit this to 2^32. */
- int64_t delta = INT32_MAX;
-
- if (active_timers[QEMU_CLOCK_VIRTUAL]) {
- delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
- qemu_get_clock(vm_clock);
- }
- if (active_timers[QEMU_CLOCK_HOST]) {
- int64_t hdelta = active_timers[QEMU_CLOCK_HOST]->expire_time -
- qemu_get_clock(host_clock);
- if (hdelta < delta)
- delta = hdelta;
- }
-
- if (delta < 0)
- delta = 0;
-
- return delta;
-}
-
-#if defined(__linux__)
-static uint64_t qemu_next_deadline_dyntick(void)
-{
- int64_t delta;
- int64_t rtdelta;
-
- if (use_icount)
- delta = INT32_MAX;
- else
- delta = (qemu_next_deadline() + 999) / 1000;
-
- if (active_timers[QEMU_CLOCK_REALTIME]) {
- rtdelta = (active_timers[QEMU_CLOCK_REALTIME]->expire_time -
- qemu_get_clock(rt_clock))*1000;
- if (rtdelta < delta)
- delta = rtdelta;
- }
-
- if (delta < MIN_TIMER_REARM_US)
- delta = MIN_TIMER_REARM_US;
-
- return delta;
-}
-#endif
-
-#ifndef _WIN32
-
-/* Sets a specific flag */
-static int fcntl_setfl(int fd, int flag)
-{
- int flags;
-
- flags = fcntl(fd, F_GETFL);
- if (flags == -1)
- return -errno;
-
- if (fcntl(fd, F_SETFL, flags | flag) == -1)
- return -errno;
-
- return 0;
-}
-
-#if defined(__linux__)
-
-#define RTC_FREQ 1024
-
-static void enable_sigio_timer(int fd)
-{
- struct sigaction act;
-
- /* timer signal */
- sigfillset(&act.sa_mask);
- act.sa_flags = 0;
- act.sa_handler = host_alarm_handler;
-
- sigaction(SIGIO, &act, NULL);
- fcntl_setfl(fd, O_ASYNC);
- fcntl(fd, F_SETOWN, getpid());
-}
-
-static int hpet_start_timer(struct qemu_alarm_timer *t)
-{
- struct hpet_info info;
- int r, fd;
-
- fd = qemu_open("/dev/hpet", O_RDONLY);
- if (fd < 0)
- return -1;
-
- /* Set frequency */
- r = ioctl(fd, HPET_IRQFREQ, RTC_FREQ);
- if (r < 0) {
- fprintf(stderr, "Could not configure '/dev/hpet' to have a 1024Hz timer. This is not a fatal\n"
- "error, but for better emulation accuracy type:\n"
- "'echo 1024 > /proc/sys/dev/hpet/max-user-freq' as root.\n");
- goto fail;
- }
-
- /* Check capabilities */
- r = ioctl(fd, HPET_INFO, &info);
- if (r < 0)
- goto fail;
-
- /* Enable periodic mode */
- r = ioctl(fd, HPET_EPI, 0);
- if (info.hi_flags && (r < 0))
- goto fail;
-
- /* Enable interrupt */
- r = ioctl(fd, HPET_IE_ON, 0);
- if (r < 0)
- goto fail;
-
- enable_sigio_timer(fd);
- t->priv = (void *)(long)fd;
-
- return 0;
-fail:
- close(fd);
- return -1;
-}
-
-static void hpet_stop_timer(struct qemu_alarm_timer *t)
-{
- int fd = (long)t->priv;
-
- close(fd);
-}
-
-static int rtc_start_timer(struct qemu_alarm_timer *t)
-{
- int rtc_fd;
- unsigned long current_rtc_freq = 0;
-
- TFR(rtc_fd = qemu_open("/dev/rtc", O_RDONLY));
- if (rtc_fd < 0)
- return -1;
- ioctl(rtc_fd, RTC_IRQP_READ, &current_rtc_freq);
- if (current_rtc_freq != RTC_FREQ &&
- ioctl(rtc_fd, RTC_IRQP_SET, RTC_FREQ) < 0) {
- fprintf(stderr, "Could not configure '/dev/rtc' to have a 1024 Hz timer. This is not a fatal\n"
- "error, but for better emulation accuracy either use a 2.6 host Linux kernel or\n"
- "type 'echo 1024 > /proc/sys/dev/rtc/max-user-freq' as root.\n");
- goto fail;
- }
- if (ioctl(rtc_fd, RTC_PIE_ON, 0) < 0) {
- fail:
- close(rtc_fd);
- return -1;
- }
-
- enable_sigio_timer(rtc_fd);
-
- t->priv = (void *)(long)rtc_fd;
-
- return 0;
-}
-
-static void rtc_stop_timer(struct qemu_alarm_timer *t)
-{
- int rtc_fd = (long)t->priv;
-
- close(rtc_fd);
-}
-
-static int dynticks_start_timer(struct qemu_alarm_timer *t)
-{
- struct sigevent ev;
- timer_t host_timer;
- struct sigaction act;
-
- sigfillset(&act.sa_mask);
- act.sa_flags = 0;
- act.sa_handler = host_alarm_handler;
-
- sigaction(SIGALRM, &act, NULL);
-
- /*
- * Initialize ev struct to 0 to avoid valgrind complaining
- * about uninitialized data in timer_create call
- */
- memset(&ev, 0, sizeof(ev));
- ev.sigev_value.sival_int = 0;
- ev.sigev_notify = SIGEV_SIGNAL;
- ev.sigev_signo = SIGALRM;
-
- if (timer_create(CLOCK_REALTIME, &ev, &host_timer)) {
- perror("timer_create");
-
- /* disable dynticks */
- fprintf(stderr, "Dynamic Ticks disabled\n");
-
- return -1;
- }
-
- t->priv = (void *)(long)host_timer;
-
- return 0;
-}
-
-static void dynticks_stop_timer(struct qemu_alarm_timer *t)
-{
- timer_t host_timer = (timer_t)(long)t->priv;
-
- timer_delete(host_timer);
-}
-
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t)
-{
- timer_t host_timer = (timer_t)(long)t->priv;
- struct itimerspec timeout;
- int64_t nearest_delta_us = INT64_MAX;
- int64_t current_us;
-
- assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST])
- return;
-
- nearest_delta_us = qemu_next_deadline_dyntick();
-
- /* check whether a timer is already running */
- if (timer_gettime(host_timer, &timeout)) {
- perror("gettime");
- fprintf(stderr, "Internal timer error: aborting\n");
- exit(1);
- }
- current_us = timeout.it_value.tv_sec * 1000000 + timeout.it_value.tv_nsec/1000;
- if (current_us && current_us <= nearest_delta_us)
- return;
-
- timeout.it_interval.tv_sec = 0;
- timeout.it_interval.tv_nsec = 0; /* 0 for one-shot timer */
- timeout.it_value.tv_sec = nearest_delta_us / 1000000;
- timeout.it_value.tv_nsec = (nearest_delta_us % 1000000) * 1000;
- if (timer_settime(host_timer, 0 /* RELATIVE */, &timeout, NULL)) {
- perror("settime");
- fprintf(stderr, "Internal timer error: aborting\n");
- exit(1);
- }
-}
-
-#endif /* defined(__linux__) */
-
-static int unix_start_timer(struct qemu_alarm_timer *t)
-{
- struct sigaction act;
- struct itimerval itv;
- int err;
-
- /* timer signal */
- sigfillset(&act.sa_mask);
- act.sa_flags = 0;
- act.sa_handler = host_alarm_handler;
-
- sigaction(SIGALRM, &act, NULL);
-
- itv.it_interval.tv_sec = 0;
- /* for i386 kernel 2.6 to get 1 ms */
- itv.it_interval.tv_usec = 999;
- itv.it_value.tv_sec = 0;
- itv.it_value.tv_usec = 10 * 1000;
-
- err = setitimer(ITIMER_REAL, &itv, NULL);
- if (err)
- return -1;
-
- return 0;
-}
-
-static void unix_stop_timer(struct qemu_alarm_timer *t)
-{
- struct itimerval itv;
-
- memset(&itv, 0, sizeof(itv));
- setitimer(ITIMER_REAL, &itv, NULL);
-}
-
-#endif /* !defined(_WIN32) */
-
-
-#ifdef _WIN32
-
-static int win32_start_timer(struct qemu_alarm_timer *t)
-{
- TIMECAPS tc;
- struct qemu_alarm_win32 *data = t->priv;
- UINT flags;
-
- memset(&tc, 0, sizeof(tc));
- timeGetDevCaps(&tc, sizeof(tc));
-
- data->period = tc.wPeriodMin;
- timeBeginPeriod(data->period);
-
- flags = TIME_CALLBACK_FUNCTION;
- if (alarm_has_dynticks(t))
- flags |= TIME_ONESHOT;
- else
- flags |= TIME_PERIODIC;
-
- data->timerId = timeSetEvent(1, // interval (ms)
- data->period, // resolution
- host_alarm_handler, // function
- (DWORD)t, // parameter
- flags);
-
- if (!data->timerId) {
- fprintf(stderr, "Failed to initialize win32 alarm timer: %ld\n",
- GetLastError());
- timeEndPeriod(data->period);
- return -1;
- }
-
- return 0;
-}
-
-static void win32_stop_timer(struct qemu_alarm_timer *t)
-{
- struct qemu_alarm_win32 *data = t->priv;
-
- timeKillEvent(data->timerId);
- timeEndPeriod(data->period);
-}
-
-static void win32_rearm_timer(struct qemu_alarm_timer *t)
-{
- struct qemu_alarm_win32 *data = t->priv;
-
- assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST])
- return;
-
- timeKillEvent(data->timerId);
-
- data->timerId = timeSetEvent(1,
- data->period,
- host_alarm_handler,
- (DWORD)t,
- TIME_ONESHOT | TIME_CALLBACK_FUNCTION);
-
- if (!data->timerId) {
- fprintf(stderr, "Failed to re-arm win32 alarm timer %ld\n",
- GetLastError());
-
- timeEndPeriod(data->period);
- exit(1);
- }
-}
-
-#endif /* _WIN32 */
-
-static void alarm_timer_on_change_state_rearm(void *opaque, int running, int reason)
-{
- if (running)
- qemu_rearm_alarm_timer((struct qemu_alarm_timer *) opaque);
-}
-
-static int init_timer_alarm(void)
-{
- struct qemu_alarm_timer *t = NULL;
- int i, err = -1;
-
- for (i = 0; alarm_timers[i].name; i++) {
- t = &alarm_timers[i];
-
- err = t->start(t);
- if (!err)
- break;
- }
-
- if (err) {
- err = -ENOENT;
- goto fail;
- }
-
- /* first event is at time 0 */
- t->pending = 1;
- alarm_timer = t;
- qemu_add_vm_change_state_handler(alarm_timer_on_change_state_rearm, t);
-
- return 0;
-
-fail:
- return err;
-}
-
-static void quit_timers(void)
-{
- struct qemu_alarm_timer *t = alarm_timer;
- alarm_timer = NULL;
- t->stop(t);
-}
-
/***********************************************************/
/* host time/date access */
void qemu_get_timedate(struct tm *tm, int offset)
@@ -4064,46 +2938,6 @@ static bool tcg_cpu_exec(void)
return tcg_has_work();
}
-static int qemu_calculate_timeout(void)
-{
-#ifndef CONFIG_IOTHREAD
- int timeout;
-
- if (!vm_running)
- timeout = 5000;
- else {
- /* XXX: use timeout computed from timers */
- int64_t add;
- int64_t delta;
- /* Advance virtual time to the next event. */
- delta = qemu_icount_delta();
- if (delta > 0) {
- /* If virtual time is ahead of real time then just
- wait for IO. */
- timeout = (delta + 999999) / 1000000;
- } else {
- /* Wait for either IO to occur or the next
- timer event. */
- add = qemu_next_deadline();
- /* We advance the timer before checking for IO.
- Limit the amount we advance so that early IO
- activity won't get the guest too far ahead. */
- if (add > 10000000)
- add = 10000000;
- delta += add;
- qemu_icount += qemu_icount_round (add);
- timeout = delta / 1000000;
- if (timeout < 0)
- timeout = 0;
- }
- }
-
- return timeout;
-#else /* CONFIG_IOTHREAD */
- return 1000;
-#endif
-}
-
static int vm_can_run(void)
{
if (powerdown_requested)
--
1.6.6
^ permalink raw reply related [flat|nested] 20+ messages in thread
* Re: [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer
2010-03-10 10:38 ` [Qemu-devel] [PATCH 01/18] avoid dubiously clever code in win32_start_timer Paolo Bonzini
@ 2010-03-17 16:58 ` Anthony Liguori
0 siblings, 0 replies; 20+ messages in thread
From: Anthony Liguori @ 2010-03-17 16:58 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: qemu-devel
On 03/10/2010 04:38 AM, Paolo Bonzini wrote:
> The code is initializing an unsigned int to UINT_MAX using "-1", so that
> the following always-true comparison seems to be always-false at a
> first look. Since alarm timer initializations are never nested, it is
> simpler to unconditionally store the result of timeGetDevCaps into
> data->period.
>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
>
Applied all. Thanks.
Nice cleanup.
Regards,
Anthony Liguori
> ---
> vl.c | 6 ++----
> 1 files changed, 2 insertions(+), 4 deletions(-)
>
> diff --git a/vl.c b/vl.c
> index d8328c7..6b1e1a7 100644
> --- a/vl.c
> +++ b/vl.c
> @@ -626,7 +626,7 @@ static struct qemu_alarm_timer *alarm_timer;
> struct qemu_alarm_win32 {
> MMRESULT timerId;
> unsigned int period;
> -} alarm_win32_data = {0, -1};
> +} alarm_win32_data = {0, 0};
>
> static int win32_start_timer(struct qemu_alarm_timer *t);
> static void win32_stop_timer(struct qemu_alarm_timer *t);
> @@ -1360,9 +1360,7 @@ static int win32_start_timer(struct qemu_alarm_timer *t)
> memset(&tc, 0, sizeof(tc));
> timeGetDevCaps(&tc, sizeof(tc));
>
> - if (data->period < tc.wPeriodMin)
> - data->period = tc.wPeriodMin;
> -
> + data->period = tc.wPeriodMin;
> timeBeginPeriod(data->period);
>
> flags = TIME_CALLBACK_FUNCTION;
>
^ permalink raw reply [flat|nested] 20+ messages in thread