* [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop
@ 2011-09-16 15:08 Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 01/14] remove unused function Paolo Bonzini
` (13 more replies)
0 siblings, 14 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:08 UTC
To: qemu-devel; +Cc: aliguori
This patch series makes the QEMU main loop usable outside the main
executable, particularly in the tools and possibly in unit tests. The
series already starts using the refactored main loop in qemu-nbd.
This is cleaner because it avoids introducing partial transitions to
GIOChannel. Interfacing with the glib main loop is still possible.
The main loop code is currently split between cpus.c and vl.c. Moving it
to a new file is easy; the problem is that the main loop depends on the
timer infrastructure in qemu-timer.c, and that file currently contains
the implementation of icount and the vm_clock. That is an obstacle to
linking qemu-timer.c into the tools. Luckily, it is relatively easy to
untangle these pieces and move them out of the way; the first half of
the series (patches 1-7) takes care of this.
Patches 8-11 complete the refactoring and cleanup some surrounding
code.
Patches 12-14 show how the main loop can be used in the tools. This
is, for example, a prerequisite for supporting multiple in-flight
operations in qemu-nbd (and perhaps named exports).
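As a rough, illustrative sketch (not code from this series; the only
declarations taken as given are qemu_init_main_loop() and main_loop_wait()
from the new main-loop.h, everything else is made up), a tool built on the
shared loop would look along these lines:

/* Hypothetical tool skeleton, for illustration only. */
#include "qemu-common.h"
#include "main-loop.h"

int main(int argc, char **argv)
{
    bool done = false;          /* would be set by some event handler */

    if (qemu_init_main_loop() < 0) {
        fprintf(stderr, "%s: failed to initialize main loop\n", argv[0]);
        return 1;
    }

    /* ... register fd handlers and timers here ... */

    do {
        /* Run expired timers and ready fd handlers; passing 0 (i.e. not
         * non-blocking) lets the loop sleep until something is pending. */
        main_loop_wait(0);
    } while (!done);

    return 0;
}

Patch 14 moves qemu-nbd to roughly this model.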
Very lightly tested; I just wanted to throw out the idea.
Paolo Bonzini (14):
remove unused function
qemu-timer: remove active_timers array
qemu-timer: move common code to qemu_rearm_alarm_timer
qemu-timer: more clock functions
qemu-timer: move icount to cpus.c
qemu-timer: do not refer to vm_running
qemu-timer: move more stuff out of qemu-timer.c
create main-loop.h
create main-loop.c
Revert to a hand-made select loop
simplify main loop functions
makefile: extract tools-obj-y
link the main loop and its dependencies into the tools
qemu-nbd: use common main loop
Makefile | 13 +-
Makefile.objs | 2 +-
async.c | 1 +
cpus.c | 502 ++++++++++++++++++++++++++++++------------------------
cpus.h | 3 +-
exec-all.h | 14 ++
exec.c | 7 -
hw/mac_dbdma.c | 5 -
hw/mac_dbdma.h | 1 -
iohandler.c | 54 +------
main-loop.c | 499 +++++++++++++++++++++++++++++++++++++++++++++++++++++
main-loop.h | 80 +++++++++
os-posix.c | 42 -----
os-win32.c | 128 --------------
oslib-posix.c | 42 +++++
oslib-win32.c | 5 +
qemu-char.h | 12 +--
qemu-common.h | 15 +--
qemu-nbd.c | 127 +++++++-------
qemu-os-posix.h | 4 -
qemu-os-win32.h | 15 +--
qemu-timer.c | 470 +++++++++------------------------------------------
qemu-timer.h | 30 +---
qemu-tool.c | 45 +++--
roms/seabios | 2 +-
savevm.c | 25 +++
slirp/libslirp.h | 11 --
sysemu.h | 10 +-
vl.c | 184 ++++----------------
29 files changed, 1173 insertions(+), 1175 deletions(-)
create mode 100644 main-loop.c
create mode 100644 main-loop.h
--
1.7.6
* [Qemu-devel] [RFC PATCH 01/14] remove unused function
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
@ 2011-09-16 15:08 ` Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 02/14] qemu-timer: remove active_timers array Paolo Bonzini
` (12 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:08 UTC
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
hw/mac_dbdma.c | 5 -----
hw/mac_dbdma.h | 1 -
2 files changed, 0 insertions(+), 6 deletions(-)
diff --git a/hw/mac_dbdma.c b/hw/mac_dbdma.c
index 5affdd1..1791ec1 100644
--- a/hw/mac_dbdma.c
+++ b/hw/mac_dbdma.c
@@ -661,11 +661,6 @@ void DBDMA_register_channel(void *dbdma, int nchan, qemu_irq irq,
ch->io.channel = ch;
}
-void DBDMA_schedule(void)
-{
- qemu_notify_event();
-}
-
static void
dbdma_control_write(DBDMA_channel *ch)
{
diff --git a/hw/mac_dbdma.h b/hw/mac_dbdma.h
index 933e17c..6d1abe6 100644
--- a/hw/mac_dbdma.h
+++ b/hw/mac_dbdma.h
@@ -41,5 +41,4 @@ struct DBDMA_io {
void DBDMA_register_channel(void *dbdma, int nchan, qemu_irq irq,
DBDMA_rw rw, DBDMA_flush flush,
void *opaque);
-void DBDMA_schedule(void);
void* DBDMA_init (MemoryRegion **dbdma_mem);
--
1.7.6
* [Qemu-devel] [RFC PATCH 02/14] qemu-timer: remove active_timers array
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 01/14] remove unused function Paolo Bonzini
@ 2011-09-16 15:08 ` Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 03/14] qemu-timer: move common code to qemu_rearm_alarm_timer Paolo Bonzini
` (11 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:08 UTC
To: qemu-devel; +Cc: aliguori
Embed the list of active timers in the QEMUClock itself instead.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
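For illustration only (not part of the patch; the helper name is made up),
embedding the list means a per-clock check no longer needs a global array
indexed by clock type:

static int clock_has_expired_timer(QEMUClock *clock, int64_t current_time)
{
    /* The active_timers list is kept sorted by expire time, so the
     * head is the earliest pending timer. */
    return clock->active_timers &&
           clock->active_timers->expire_time <= current_time;
}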
qemu-timer.c | 59 +++++++++++++++++++++++++++------------------------------
1 files changed, 28 insertions(+), 31 deletions(-)
diff --git a/qemu-timer.c b/qemu-timer.c
index 46dd483..e84e444 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -134,6 +134,7 @@ struct QEMUClock {
int enabled;
QEMUTimer *warp_timer;
+ QEMUTimer *active_timers;
NotifierList reset_notifiers;
int64_t last;
@@ -352,14 +353,10 @@ next:
}
}
-#define QEMU_NUM_CLOCKS 3
-
QEMUClock *rt_clock;
QEMUClock *vm_clock;
QEMUClock *host_clock;
-static QEMUTimer *active_timers[QEMU_NUM_CLOCKS];
-
static QEMUClock *qemu_new_clock(int type)
{
QEMUClock *clock;
@@ -403,7 +400,7 @@ static void icount_warp_rt(void *opaque)
int64_t delta = cur_time - cur_icount;
qemu_icount_bias += MIN(warp_delta, delta);
}
- if (qemu_timer_expired(active_timers[QEMU_CLOCK_VIRTUAL],
+ if (qemu_timer_expired(vm_clock->active_timers,
qemu_get_clock_ns(vm_clock))) {
qemu_notify_event();
}
@@ -434,7 +431,7 @@ void qemu_clock_warp(QEMUClock *clock)
* the earliest vm_clock timer.
*/
icount_warp_rt(NULL);
- if (!all_cpu_threads_idle() || !active_timers[clock->type]) {
+ if (!all_cpu_threads_idle() || !clock->active_timers) {
qemu_del_timer(clock->warp_timer);
return;
}
@@ -489,7 +486,7 @@ void qemu_del_timer(QEMUTimer *ts)
/* NOTE: this code must be signal safe because
qemu_timer_expired() can be called from a signal. */
- pt = &active_timers[ts->clock->type];
+ pt = &ts->clock->active_timers;
for(;;) {
t = *pt;
if (!t)
@@ -513,7 +510,7 @@ static void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
/* add the timer in the sorted list */
/* NOTE: this code must be signal safe because
qemu_timer_expired() can be called from a signal. */
- pt = &active_timers[ts->clock->type];
+ pt = &ts->clock->active_timers;
for(;;) {
t = *pt;
if (!qemu_timer_expired_ns(t, expire_time)) {
@@ -526,7 +523,7 @@ static void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
*pt = ts;
/* Rearm if necessary */
- if (pt == &active_timers[ts->clock->type]) {
+ if (pt == &ts->clock->active_timers) {
if (!alarm_timer->pending) {
qemu_rearm_alarm_timer(alarm_timer);
}
@@ -548,7 +545,7 @@ void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
int qemu_timer_pending(QEMUTimer *ts)
{
QEMUTimer *t;
- for(t = active_timers[ts->clock->type]; t != NULL; t = t->next) {
+ for (t = ts->clock->active_timers; t != NULL; t = t->next) {
if (t == ts)
return 1;
}
@@ -569,7 +566,7 @@ static void qemu_run_timers(QEMUClock *clock)
return;
current_time = qemu_get_clock_ns(clock);
- ptimer_head = &active_timers[clock->type];
+ ptimer_head = &clock->active_timers;
for(;;) {
ts = *ptimer_head;
if (!qemu_timer_expired_ns(ts, current_time)) {
@@ -773,8 +770,8 @@ int64_t qemu_next_icount_deadline(void)
int64_t delta = INT32_MAX;
assert(use_icount);
- if (active_timers[QEMU_CLOCK_VIRTUAL]) {
- delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
+ if (vm_clock->active_timers) {
+ delta = vm_clock->active_timers->expire_time -
qemu_get_clock_ns(vm_clock);
}
@@ -789,20 +786,20 @@ static int64_t qemu_next_alarm_deadline(void)
int64_t delta;
int64_t rtdelta;
- if (!use_icount && active_timers[QEMU_CLOCK_VIRTUAL]) {
- delta = active_timers[QEMU_CLOCK_VIRTUAL]->expire_time -
+ if (!use_icount && vm_clock->active_timers) {
+ delta = vm_clock->active_timers->expire_time -
qemu_get_clock_ns(vm_clock);
} else {
delta = INT32_MAX;
}
- if (active_timers[QEMU_CLOCK_HOST]) {
- int64_t hdelta = active_timers[QEMU_CLOCK_HOST]->expire_time -
+ if (host_clock->active_timers) {
+ int64_t hdelta = host_clock->active_timers->expire_time -
qemu_get_clock_ns(host_clock);
if (hdelta < delta)
delta = hdelta;
}
- if (active_timers[QEMU_CLOCK_REALTIME]) {
- rtdelta = (active_timers[QEMU_CLOCK_REALTIME]->expire_time -
+ if (rt_clock->active_timers) {
+ rtdelta = (rt_clock->active_timers->expire_time -
qemu_get_clock_ns(rt_clock));
if (rtdelta < delta)
delta = rtdelta;
@@ -871,9 +868,9 @@ static void dynticks_rearm_timer(struct qemu_alarm_timer *t)
int64_t current_ns;
assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST])
+ if (!rt_clock->active_timers &&
+ !vm_clock->active_timers &&
+ !host_clock->active_timers)
return;
nearest_delta_ns = qemu_next_alarm_deadline();
@@ -925,9 +922,9 @@ static void unix_rearm_timer(struct qemu_alarm_timer *t)
int err;
assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST])
+ if (!rt_clock->active_timers &&
+ !vm_clock->active_timers &&
+ !host_clock->active_timers)
return;
nearest_delta_ns = qemu_next_alarm_deadline();
@@ -1022,9 +1019,9 @@ static void mm_rearm_timer(struct qemu_alarm_timer *t)
int nearest_delta_ms;
assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST]) {
+ if (!rt_clock->active_timers &&
+ !vm_clock->active_timers &&
+ !host_clock->active_timers) {
return;
}
@@ -1092,9 +1089,9 @@ static void win32_rearm_timer(struct qemu_alarm_timer *t)
BOOLEAN success;
assert(alarm_has_dynticks(t));
- if (!active_timers[QEMU_CLOCK_REALTIME] &&
- !active_timers[QEMU_CLOCK_VIRTUAL] &&
- !active_timers[QEMU_CLOCK_HOST])
+ if (!rt_clock->active_timers &&
+ !vm_clock->active_timers &&
+ !host_clock->active_timers)
return;
nearest_delta_ms = (qemu_next_alarm_deadline() + 999999) / 1000000;
--
1.7.6
* [Qemu-devel] [RFC PATCH 03/14] qemu-timer: move common code to qemu_rearm_alarm_timer
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 01/14] remove unused function Paolo Bonzini
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 02/14] qemu-timer: remove active_timers array Paolo Bonzini
@ 2011-09-16 15:08 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 04/14] qemu-timer: more clock functions Paolo Bonzini
` (10 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:08 UTC
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
qemu-timer.c | 129 ++++++++++++++++++++++++----------------------------------
roms/seabios | 2 +-
2 files changed, 54 insertions(+), 77 deletions(-)
diff --git a/qemu-timer.c b/qemu-timer.c
index e84e444..548b349 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -153,7 +153,7 @@ struct qemu_alarm_timer {
char const *name;
int (*start)(struct qemu_alarm_timer *t);
void (*stop)(struct qemu_alarm_timer *t);
- void (*rearm)(struct qemu_alarm_timer *t);
+ void (*rearm)(struct qemu_alarm_timer *t, int64_t nearest_delta_ns);
#if defined(__linux__)
int fd;
timer_t timer;
@@ -181,12 +181,46 @@ static inline int alarm_has_dynticks(struct qemu_alarm_timer *t)
return !!t->rearm;
}
+static int64_t qemu_next_alarm_deadline(void)
+{
+ int64_t delta;
+ int64_t rtdelta;
+
+ if (!use_icount && vm_clock->active_timers) {
+ delta = vm_clock->active_timers->expire_time -
+ qemu_get_clock_ns(vm_clock);
+ } else {
+ delta = INT32_MAX;
+ }
+ if (host_clock->active_timers) {
+ int64_t hdelta = host_clock->active_timers->expire_time -
+ qemu_get_clock_ns(host_clock);
+ if (hdelta < delta) {
+ delta = hdelta;
+ }
+ }
+ if (rt_clock->active_timers) {
+ rtdelta = (rt_clock->active_timers->expire_time -
+ qemu_get_clock_ns(rt_clock));
+ if (rtdelta < delta) {
+ delta = rtdelta;
+ }
+ }
+
+ return delta;
+}
+
static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
{
- if (!alarm_has_dynticks(t))
+ int64_t nearest_delta_ns;
+ assert(alarm_has_dynticks(t));
+ if (!rt_clock->active_timers &&
+ !vm_clock->active_timers &&
+ !host_clock->active_timers) {
return;
-
- t->rearm(t);
+ }
+ nearest_delta_ns = qemu_next_alarm_deadline();
+ t->rearm(t, nearest_delta_ns);
}
/* TODO: MIN_TIMER_REARM_NS should be optimized */
@@ -196,23 +230,23 @@ static void qemu_rearm_alarm_timer(struct qemu_alarm_timer *t)
static int mm_start_timer(struct qemu_alarm_timer *t);
static void mm_stop_timer(struct qemu_alarm_timer *t);
-static void mm_rearm_timer(struct qemu_alarm_timer *t);
+static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
static int win32_start_timer(struct qemu_alarm_timer *t);
static void win32_stop_timer(struct qemu_alarm_timer *t);
-static void win32_rearm_timer(struct qemu_alarm_timer *t);
+static void win32_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
#else
static int unix_start_timer(struct qemu_alarm_timer *t);
static void unix_stop_timer(struct qemu_alarm_timer *t);
-static void unix_rearm_timer(struct qemu_alarm_timer *t);
+static void unix_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
#ifdef __linux__
static int dynticks_start_timer(struct qemu_alarm_timer *t);
static void dynticks_stop_timer(struct qemu_alarm_timer *t);
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t);
+static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
#endif /* __linux__ */
@@ -715,8 +749,6 @@ void qemu_run_all_timers(void)
qemu_run_timers(host_clock);
}
-static int64_t qemu_next_alarm_deadline(void);
-
#ifdef _WIN32
static void CALLBACK host_alarm_handler(PVOID lpParam, BOOLEAN unused)
#else
@@ -781,33 +813,6 @@ int64_t qemu_next_icount_deadline(void)
return delta;
}
-static int64_t qemu_next_alarm_deadline(void)
-{
- int64_t delta;
- int64_t rtdelta;
-
- if (!use_icount && vm_clock->active_timers) {
- delta = vm_clock->active_timers->expire_time -
- qemu_get_clock_ns(vm_clock);
- } else {
- delta = INT32_MAX;
- }
- if (host_clock->active_timers) {
- int64_t hdelta = host_clock->active_timers->expire_time -
- qemu_get_clock_ns(host_clock);
- if (hdelta < delta)
- delta = hdelta;
- }
- if (rt_clock->active_timers) {
- rtdelta = (rt_clock->active_timers->expire_time -
- qemu_get_clock_ns(rt_clock));
- if (rtdelta < delta)
- delta = rtdelta;
- }
-
- return delta;
-}
-
#if defined(__linux__)
#include "compatfd.h"
@@ -860,20 +865,13 @@ static void dynticks_stop_timer(struct qemu_alarm_timer *t)
timer_delete(host_timer);
}
-static void dynticks_rearm_timer(struct qemu_alarm_timer *t)
+static void dynticks_rearm_timer(struct qemu_alarm_timer *t,
+ int64_t nearest_delta_ns)
{
timer_t host_timer = t->timer;
struct itimerspec timeout;
- int64_t nearest_delta_ns = INT64_MAX;
int64_t current_ns;
- assert(alarm_has_dynticks(t));
- if (!rt_clock->active_timers &&
- !vm_clock->active_timers &&
- !host_clock->active_timers)
- return;
-
- nearest_delta_ns = qemu_next_alarm_deadline();
if (nearest_delta_ns < MIN_TIMER_REARM_NS)
nearest_delta_ns = MIN_TIMER_REARM_NS;
@@ -915,19 +913,12 @@ static int unix_start_timer(struct qemu_alarm_timer *t)
return 0;
}
-static void unix_rearm_timer(struct qemu_alarm_timer *t)
+static void unix_rearm_timer(struct qemu_alarm_timer *t,
+ int64_t nearest_delta_ns)
{
struct itimerval itv;
- int64_t nearest_delta_ns = INT64_MAX;
int err;
- assert(alarm_has_dynticks(t));
- if (!rt_clock->active_timers &&
- !vm_clock->active_timers &&
- !host_clock->active_timers)
- return;
-
- nearest_delta_ns = qemu_next_alarm_deadline();
if (nearest_delta_ns < MIN_TIMER_REARM_NS)
nearest_delta_ns = MIN_TIMER_REARM_NS;
@@ -1014,23 +1005,14 @@ static void mm_stop_timer(struct qemu_alarm_timer *t)
timeEndPeriod(mm_period);
}
-static void mm_rearm_timer(struct qemu_alarm_timer *t)
+static void mm_rearm_timer(struct qemu_alarm_timer *t, int64_t delta)
{
- int nearest_delta_ms;
-
- assert(alarm_has_dynticks(t));
- if (!rt_clock->active_timers &&
- !vm_clock->active_timers &&
- !host_clock->active_timers) {
- return;
- }
-
- timeKillEvent(mm_timer);
-
- nearest_delta_ms = (qemu_next_alarm_deadline() + 999999) / 1000000;
+ int nearest_delta_ms = (delta + 999999) / 1000000;
if (nearest_delta_ms < 1) {
nearest_delta_ms = 1;
}
+
+ timeKillEvent(mm_timer);
mm_timer = timeSetEvent(nearest_delta_ms,
mm_period,
mm_alarm_handler,
@@ -1082,19 +1064,14 @@ static void win32_stop_timer(struct qemu_alarm_timer *t)
}
}
-static void win32_rearm_timer(struct qemu_alarm_timer *t)
+static void win32_rearm_timer(struct qemu_alarm_timer *t,
+ int64_t nearest_delta_ns)
{
HANDLE hTimer = t->timer;
int nearest_delta_ms;
BOOLEAN success;
- assert(alarm_has_dynticks(t));
- if (!rt_clock->active_timers &&
- !vm_clock->active_timers &&
- !host_clock->active_timers)
- return;
-
- nearest_delta_ms = (qemu_next_alarm_deadline() + 999999) / 1000000;
+ nearest_delta_ms = (nearest_delta_ns + 999999) / 1000000;
if (nearest_delta_ms < 1) {
nearest_delta_ms = 1;
}
diff --git a/roms/seabios b/roms/seabios
index 8e30147..cc97564 160000
--- a/roms/seabios
+++ b/roms/seabios
@@ -1 +1 @@
-Subproject commit 8e301472e324b6d6496d8b4ffc66863e99d7a505
+Subproject commit cc975646af69f279396d4d5e1379ac6af80ee637
--
1.7.6
* [Qemu-devel] [RFC PATCH 04/14] qemu-timer: more clock functions
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (2 preceding siblings ...)
2011-09-16 15:08 ` [Qemu-devel] [RFC PATCH 03/14] qemu-timer: move common code to qemu_rearm_alarm_timer Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 05/14] qemu-timer: move icount to cpus.c Paolo Bonzini
` (9 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC
To: qemu-devel; +Cc: aliguori
These will be used when moving icount accounting to cpus.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
qemu-timer.c | 25 +++++++++++++++++++++++++
qemu-timer.h | 3 +++
2 files changed, 28 insertions(+), 0 deletions(-)
diff --git a/qemu-timer.c b/qemu-timer.c
index 548b349..f665000 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -495,6 +495,31 @@ void qemu_clock_warp(QEMUClock *clock)
}
}
+int64_t qemu_clock_has_timers(QEMUClock *clock)
+{
+ return !!clock->active_timers;
+}
+
+int64_t qemu_clock_expired(QEMUClock *clock)
+{
+ return (clock->active_timers &&
+ clock->active_timers->expire_time < qemu_get_clock_ns(clock));
+}
+
+int64_t qemu_clock_deadline(QEMUClock *clock)
+{
+ /* To avoid problems with overflow limit this to 2^32. */
+ int64_t delta = INT32_MAX;
+
+ if (clock->active_timers) {
+ delta = clock->active_timers->expire_time - qemu_get_clock_ns(clock);
+ }
+ if (delta < 0) {
+ delta = 0;
+ }
+ return delta;
+}
+
QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
QEMUTimerCB *cb, void *opaque)
{
diff --git a/qemu-timer.h b/qemu-timer.h
index 0a43469..4578075 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -38,6 +38,9 @@ extern QEMUClock *vm_clock;
extern QEMUClock *host_clock;
int64_t qemu_get_clock_ns(QEMUClock *clock);
+int64_t qemu_clock_has_timers(QEMUClock *clock);
+int64_t qemu_clock_expired(QEMUClock *clock);
+int64_t qemu_clock_deadline(QEMUClock *clock);
void qemu_clock_enable(QEMUClock *clock, int enabled);
void qemu_clock_warp(QEMUClock *clock);
--
1.7.6
* [Qemu-devel] [RFC PATCH 05/14] qemu-timer: move icount to cpus.c
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (3 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 04/14] qemu-timer: more clock functions Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 06/14] qemu-timer: do not refer to vm_running Paolo Bonzini
` (8 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC
To: qemu-devel; +Cc: aliguori
None of this is needed by tools, and most of it can even be made static
inside cpus.c.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 296 +++++++++++++++++++++++++++++++++++++++++++++++++++++----
exec-all.h | 14 +++
exec.c | 7 --
qemu-common.h | 4 +
qemu-timer.c | 279 -----------------------------------------------------
qemu-timer.h | 24 +-----
6 files changed, 297 insertions(+), 327 deletions(-)
diff --git a/cpus.c b/cpus.c
index 54c188c..31d450c 100644
--- a/cpus.c
+++ b/cpus.c
@@ -65,6 +65,282 @@
static CPUState *next_cpu;
/***********************************************************/
+/* guest cycle counter */
+
+/* Conversion factor from emulated instructions to virtual clock ticks. */
+static int icount_time_shift;
+/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
+#define MAX_ICOUNT_SHIFT 10
+/* Compensate for varying guest execution speed. */
+static int64_t qemu_icount_bias;
+static QEMUTimer *icount_rt_timer;
+static QEMUTimer *icount_vm_timer;
+static QEMUTimer *icount_warp_timer;
+static int64_t vm_clock_warp_start;
+int64_t qemu_icount;
+int use_icount;
+
+typedef struct TimersState {
+ int64_t cpu_ticks_prev;
+ int64_t cpu_ticks_offset;
+ int64_t cpu_clock_offset;
+ int32_t cpu_ticks_enabled;
+ int64_t dummy;
+} TimersState;
+
+TimersState timers_state;
+
+/* Return the virtual CPU time, based on the instruction counter. */
+int64_t cpu_get_icount(void)
+{
+ int64_t icount;
+ CPUState *env = cpu_single_env;;
+
+ icount = qemu_icount;
+ if (env) {
+ if (!can_do_io(env)) {
+ fprintf(stderr, "Bad clock read\n");
+ }
+ icount -= (env->icount_decr.u16.low + env->icount_extra);
+ }
+ return qemu_icount_bias + (icount << icount_time_shift);
+}
+
+/* return the host CPU cycle counter and handle stop/restart */
+int64_t cpu_get_ticks(void)
+{
+ if (use_icount) {
+ return cpu_get_icount();
+ }
+ if (!timers_state.cpu_ticks_enabled) {
+ return timers_state.cpu_ticks_offset;
+ } else {
+ int64_t ticks;
+ ticks = cpu_get_real_ticks();
+ if (timers_state.cpu_ticks_prev > ticks) {
+ /* Note: non increasing ticks may happen if the host uses
+ software suspend */
+ timers_state.cpu_ticks_offset += timers_state.cpu_ticks_prev - ticks;
+ }
+ timers_state.cpu_ticks_prev = ticks;
+ return ticks + timers_state.cpu_ticks_offset;
+ }
+}
+
+/* return the host CPU monotonic timer and handle stop/restart */
+int64_t cpu_get_clock(void)
+{
+ int64_t ti;
+ if (!timers_state.cpu_ticks_enabled) {
+ return timers_state.cpu_clock_offset;
+ } else {
+ ti = get_clock();
+ return ti + timers_state.cpu_clock_offset;
+ }
+}
+
+/* enable cpu_get_ticks() */
+void cpu_enable_ticks(void)
+{
+ if (!timers_state.cpu_ticks_enabled) {
+ timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
+ timers_state.cpu_clock_offset -= get_clock();
+ timers_state.cpu_ticks_enabled = 1;
+ }
+}
+
+/* disable cpu_get_ticks() : the clock is stopped. You must not call
+ cpu_get_ticks() after that. */
+void cpu_disable_ticks(void)
+{
+ if (timers_state.cpu_ticks_enabled) {
+ timers_state.cpu_ticks_offset = cpu_get_ticks();
+ timers_state.cpu_clock_offset = cpu_get_clock();
+ timers_state.cpu_ticks_enabled = 0;
+ }
+}
+
+/* Correlation between real and virtual time is always going to be
+ fairly approximate, so ignore small variation.
+ When the guest is idle real and virtual time will be aligned in
+ the IO wait loop. */
+#define ICOUNT_WOBBLE (get_ticks_per_sec() / 10)
+
+static void icount_adjust(void)
+{
+ int64_t cur_time;
+ int64_t cur_icount;
+ int64_t delta;
+ static int64_t last_delta;
+ /* If the VM is not running, then do nothing. */
+ if (!vm_running) {
+ return;
+ }
+ cur_time = cpu_get_clock();
+ cur_icount = qemu_get_clock_ns(vm_clock);
+ delta = cur_icount - cur_time;
+ /* FIXME: This is a very crude algorithm, somewhat prone to oscillation. */
+ if (delta > 0
+ && last_delta + ICOUNT_WOBBLE < delta * 2
+ && icount_time_shift > 0) {
+ /* The guest is getting too far ahead. Slow time down. */
+ icount_time_shift--;
+ }
+ if (delta < 0
+ && last_delta - ICOUNT_WOBBLE > delta * 2
+ && icount_time_shift < MAX_ICOUNT_SHIFT) {
+ /* The guest is getting too far behind. Speed time up. */
+ icount_time_shift++;
+ }
+ last_delta = delta;
+ qemu_icount_bias = cur_icount - (qemu_icount << icount_time_shift);
+}
+
+static void icount_adjust_rt(void *opaque)
+{
+ qemu_mod_timer(icount_rt_timer,
+ qemu_get_clock_ms(rt_clock) + 1000);
+ icount_adjust();
+}
+
+static void icount_adjust_vm(void *opaque)
+{
+ qemu_mod_timer(icount_vm_timer,
+ qemu_get_clock_ns(vm_clock) + get_ticks_per_sec() / 10);
+ icount_adjust();
+}
+
+static int64_t qemu_icount_round(int64_t count)
+{
+ return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
+}
+
+static void icount_warp_rt(void *opaque)
+{
+ if (vm_clock_warp_start == -1) {
+ return;
+ }
+
+ if (vm_running) {
+ int64_t clock = qemu_get_clock_ns(rt_clock);
+ int64_t warp_delta = clock - vm_clock_warp_start;
+ if (use_icount == 1) {
+ qemu_icount_bias += warp_delta;
+ } else {
+ /*
+ * In adaptive mode, do not let the vm_clock run too
+ * far ahead of real time.
+ */
+ int64_t cur_time = cpu_get_clock();
+ int64_t cur_icount = qemu_get_clock_ns(vm_clock);
+ int64_t delta = cur_time - cur_icount;
+ qemu_icount_bias += MIN(warp_delta, delta);
+ }
+ if (qemu_clock_expired(vm_clock)) {
+ qemu_notify_event();
+ }
+ }
+ vm_clock_warp_start = -1;
+}
+
+void qemu_clock_warp(QEMUClock *clock)
+{
+ int64_t deadline;
+
+ /*
+ * There are too many global variables to make the "warp" behavior
+ * applicable to other clocks. But a clock argument removes the
+ * need for if statements all over the place.
+ */
+ if (clock != vm_clock || !use_icount) {
+ return;
+ }
+
+ /*
+ * If the CPUs have been sleeping, advance the vm_clock timer now. This
+ * ensures that the deadline for the timer is computed correctly below.
+ * This also makes sure that the insn counter is synchronized before the
+ * CPU starts running, in case the CPU is woken by an event other than
+ * the earliest vm_clock timer.
+ */
+ icount_warp_rt(NULL);
+ if (!all_cpu_threads_idle() || !qemu_clock_has_timers(vm_clock)) {
+ qemu_del_timer(icount_warp_timer);
+ return;
+ }
+
+ vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
+ deadline = qemu_clock_deadline(vm_clock);
+ if (deadline > 0) {
+ /*
+ * Ensure the vm_clock proceeds even when the virtual CPU goes to
+ * sleep. Otherwise, the CPU might be waiting for a future timer
+ * interrupt to wake it up, but the interrupt never comes because
+ * the vCPU isn't running any insns and thus doesn't advance the
+ * vm_clock.
+ *
+ * An extreme solution for this problem would be to never let VCPUs
+ * sleep in icount mode if there is a pending vm_clock timer; rather
+ * time could just advance to the next vm_clock event. Instead, we
+ * do stop VCPUs and only advance vm_clock after some "real" time,
+ * (related to the time left until the next event) has passed. This
+ * rt_clock timer will do this. This avoids that the warps are too
+ * visible externally---for example, you will not be sending network
+ * packets continously instead of every 100ms.
+ */
+ qemu_mod_timer(icount_warp_timer, vm_clock_warp_start + deadline);
+ } else {
+ qemu_notify_event();
+ }
+}
+
+static const VMStateDescription vmstate_timers = {
+ .name = "timer",
+ .version_id = 2,
+ .minimum_version_id = 1,
+ .minimum_version_id_old = 1,
+ .fields = (VMStateField[]) {
+ VMSTATE_INT64(cpu_ticks_offset, TimersState),
+ VMSTATE_INT64(dummy, TimersState),
+ VMSTATE_INT64_V(cpu_clock_offset, TimersState, 2),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
+void configure_icount(const char *option)
+{
+ vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
+ if (!option) {
+ return;
+ }
+
+ icount_warp_timer = qemu_new_timer_ns(rt_clock, icount_warp_rt, NULL);
+ if (strcmp(option, "auto") != 0) {
+ icount_time_shift = strtol(option, NULL, 0);
+ use_icount = 1;
+ return;
+ }
+
+ use_icount = 2;
+
+ /* 125MIPS seems a reasonable initial guess at the guest speed.
+ It will be corrected fairly quickly anyway. */
+ icount_time_shift = 3;
+
+ /* Have both realtime and virtual time triggers for speed adjustment.
+ The realtime trigger catches emulated time passing too slowly,
+ the virtual time trigger catches emulated time passing too fast.
+ Realtime triggers occur even when idle, so use them less frequently
+ than VM triggers. */
+ icount_rt_timer = qemu_new_timer_ms(rt_clock, icount_adjust_rt, NULL);
+ qemu_mod_timer(icount_rt_timer,
+ qemu_get_clock_ms(rt_clock) + 1000);
+ icount_vm_timer = qemu_new_timer_ns(vm_clock, icount_adjust_vm, NULL);
+ qemu_mod_timer(icount_vm_timer,
+ qemu_get_clock_ns(vm_clock) + get_ticks_per_sec() / 10);
+}
+
+/***********************************************************/
void hw_error(const char *fmt, ...)
{
va_list ap;
@@ -691,7 +967,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
while (1) {
cpu_exec_all();
- if (use_icount && qemu_next_icount_deadline() <= 0) {
+ if (use_icount && qemu_clock_deadline(vm_clock) <= 0) {
qemu_notify_event();
}
qemu_tcg_wait_io_event();
@@ -908,7 +1184,7 @@ static int tcg_cpu_exec(CPUState *env)
qemu_icount -= (env->icount_decr.u16.low + env->icount_extra);
env->icount_decr.u16.low = 0;
env->icount_extra = 0;
- count = qemu_icount_round(qemu_next_icount_deadline());
+ count = qemu_icount_round(qemu_clock_deadline(vm_clock));
qemu_icount += count;
decr = (count > 0xffff) ? 0xffff : count;
count -= decr;
@@ -1000,22 +1276,6 @@ void set_cpu_log_filename(const char *optarg)
cpu_set_log_filename(optarg);
}
-/* Return the virtual CPU time, based on the instruction counter. */
-int64_t cpu_get_icount(void)
-{
- int64_t icount;
- CPUState *env = cpu_single_env;;
-
- icount = qemu_icount;
- if (env) {
- if (!can_do_io(env)) {
- fprintf(stderr, "Bad clock read\n");
- }
- icount -= (env->icount_decr.u16.low + env->icount_extra);
- }
- return qemu_icount_bias + (icount << icount_time_shift);
-}
-
void list_cpus(FILE *f, fprintf_function cpu_fprintf, const char *optarg)
{
/* XXX: implement xxx_cpu_list for targets that still miss it */
diff --git a/exec-all.h b/exec-all.h
index 9b8d62c..1f32b37 100644
--- a/exec-all.h
+++ b/exec-all.h
@@ -344,4 +344,18 @@ extern int singlestep;
/* cpu-exec.c */
extern volatile sig_atomic_t exit_request;
+/* Deterministic execution requires that IO only be performed on the last
+ instruction of a TB so that interrupts take effect immediately. */
+static inline int can_do_io(CPUState *env)
+{
+ if (!use_icount) {
+ return 1;
+ }
+ /* If not executing code then assume we are ok. */
+ if (!env->current_tb) {
+ return 1;
+ }
+ return env->can_do_io != 0;
+}
+
#endif
diff --git a/exec.c b/exec.c
index c1e045d..5f7fe96 100644
--- a/exec.c
+++ b/exec.c
@@ -121,13 +121,6 @@ CPUState *first_cpu;
/* current CPU in the current thread. It is only valid inside
cpu_exec() */
CPUState *cpu_single_env;
-/* 0 = Do not count executed instructions.
- 1 = Precise instruction counting.
- 2 = Adaptive rate instruction counting. */
-int use_icount = 0;
-/* Current instruction counter. While executing translated code this may
- include some instructions that have not yet been executed. */
-int64_t qemu_icount;
typedef struct PageDesc {
/* list of TBs intersecting this ram page */
diff --git a/qemu-common.h b/qemu-common.h
index 404c421..16cc5e9 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -96,6 +96,10 @@ static inline char *realpath(const char *path, char *resolved_path)
}
#endif
+/* icount */
+void configure_icount(const char *option);
+extern int use_icount;
+
/* FIXME: Remove NEED_CPU_H. */
#ifndef NEED_CPU_H
diff --git a/qemu-timer.c b/qemu-timer.c
index f665000..9238458 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -46,82 +46,6 @@
#include "qemu-timer.h"
-/* Conversion factor from emulated instructions to virtual clock ticks. */
-int icount_time_shift;
-/* Arbitrarily pick 1MIPS as the minimum allowable speed. */
-#define MAX_ICOUNT_SHIFT 10
-/* Compensate for varying guest execution speed. */
-int64_t qemu_icount_bias;
-static QEMUTimer *icount_rt_timer;
-static QEMUTimer *icount_vm_timer;
-
-/***********************************************************/
-/* guest cycle counter */
-
-typedef struct TimersState {
- int64_t cpu_ticks_prev;
- int64_t cpu_ticks_offset;
- int64_t cpu_clock_offset;
- int32_t cpu_ticks_enabled;
- int64_t dummy;
-} TimersState;
-
-TimersState timers_state;
-
-/* return the host CPU cycle counter and handle stop/restart */
-int64_t cpu_get_ticks(void)
-{
- if (use_icount) {
- return cpu_get_icount();
- }
- if (!timers_state.cpu_ticks_enabled) {
- return timers_state.cpu_ticks_offset;
- } else {
- int64_t ticks;
- ticks = cpu_get_real_ticks();
- if (timers_state.cpu_ticks_prev > ticks) {
- /* Note: non increasing ticks may happen if the host uses
- software suspend */
- timers_state.cpu_ticks_offset += timers_state.cpu_ticks_prev - ticks;
- }
- timers_state.cpu_ticks_prev = ticks;
- return ticks + timers_state.cpu_ticks_offset;
- }
-}
-
-/* return the host CPU monotonic timer and handle stop/restart */
-static int64_t cpu_get_clock(void)
-{
- int64_t ti;
- if (!timers_state.cpu_ticks_enabled) {
- return timers_state.cpu_clock_offset;
- } else {
- ti = get_clock();
- return ti + timers_state.cpu_clock_offset;
- }
-}
-
-/* enable cpu_get_ticks() */
-void cpu_enable_ticks(void)
-{
- if (!timers_state.cpu_ticks_enabled) {
- timers_state.cpu_ticks_offset -= cpu_get_real_ticks();
- timers_state.cpu_clock_offset -= get_clock();
- timers_state.cpu_ticks_enabled = 1;
- }
-}
-
-/* disable cpu_get_ticks() : the clock is stopped. You must not call
- cpu_get_ticks() after that. */
-void cpu_disable_ticks(void)
-{
- if (timers_state.cpu_ticks_enabled) {
- timers_state.cpu_ticks_offset = cpu_get_ticks();
- timers_state.cpu_clock_offset = cpu_get_clock();
- timers_state.cpu_ticks_enabled = 0;
- }
-}
-
/***********************************************************/
/* timers */
@@ -133,7 +57,6 @@ struct QEMUClock {
int type;
int enabled;
- QEMUTimer *warp_timer;
QEMUTimer *active_timers;
NotifierList reset_notifiers;
@@ -252,61 +175,6 @@ static void dynticks_rearm_timer(struct qemu_alarm_timer *t, int64_t delta);
#endif /* _WIN32 */
-/* Correlation between real and virtual time is always going to be
- fairly approximate, so ignore small variation.
- When the guest is idle real and virtual time will be aligned in
- the IO wait loop. */
-#define ICOUNT_WOBBLE (get_ticks_per_sec() / 10)
-
-static void icount_adjust(void)
-{
- int64_t cur_time;
- int64_t cur_icount;
- int64_t delta;
- static int64_t last_delta;
- /* If the VM is not running, then do nothing. */
- if (!vm_running)
- return;
-
- cur_time = cpu_get_clock();
- cur_icount = qemu_get_clock_ns(vm_clock);
- delta = cur_icount - cur_time;
- /* FIXME: This is a very crude algorithm, somewhat prone to oscillation. */
- if (delta > 0
- && last_delta + ICOUNT_WOBBLE < delta * 2
- && icount_time_shift > 0) {
- /* The guest is getting too far ahead. Slow time down. */
- icount_time_shift--;
- }
- if (delta < 0
- && last_delta - ICOUNT_WOBBLE > delta * 2
- && icount_time_shift < MAX_ICOUNT_SHIFT) {
- /* The guest is getting too far behind. Speed time up. */
- icount_time_shift++;
- }
- last_delta = delta;
- qemu_icount_bias = cur_icount - (qemu_icount << icount_time_shift);
-}
-
-static void icount_adjust_rt(void * opaque)
-{
- qemu_mod_timer(icount_rt_timer,
- qemu_get_clock_ms(rt_clock) + 1000);
- icount_adjust();
-}
-
-static void icount_adjust_vm(void * opaque)
-{
- qemu_mod_timer(icount_vm_timer,
- qemu_get_clock_ns(vm_clock) + get_ticks_per_sec() / 10);
- icount_adjust();
-}
-
-int64_t qemu_icount_round(int64_t count)
-{
- return (count + (1 << icount_time_shift) - 1) >> icount_time_shift;
-}
-
static struct qemu_alarm_timer alarm_timers[] = {
#ifndef _WIN32
#ifdef __linux__
@@ -411,90 +279,6 @@ void qemu_clock_enable(QEMUClock *clock, int enabled)
clock->enabled = enabled;
}
-static int64_t vm_clock_warp_start;
-
-static void icount_warp_rt(void *opaque)
-{
- if (vm_clock_warp_start == -1) {
- return;
- }
-
- if (vm_running) {
- int64_t clock = qemu_get_clock_ns(rt_clock);
- int64_t warp_delta = clock - vm_clock_warp_start;
- if (use_icount == 1) {
- qemu_icount_bias += warp_delta;
- } else {
- /*
- * In adaptive mode, do not let the vm_clock run too
- * far ahead of real time.
- */
- int64_t cur_time = cpu_get_clock();
- int64_t cur_icount = qemu_get_clock_ns(vm_clock);
- int64_t delta = cur_time - cur_icount;
- qemu_icount_bias += MIN(warp_delta, delta);
- }
- if (qemu_timer_expired(vm_clock->active_timers,
- qemu_get_clock_ns(vm_clock))) {
- qemu_notify_event();
- }
- }
- vm_clock_warp_start = -1;
-}
-
-void qemu_clock_warp(QEMUClock *clock)
-{
- int64_t deadline;
-
- if (!clock->warp_timer) {
- return;
- }
-
- /*
- * There are too many global variables to make the "warp" behavior
- * applicable to other clocks. But a clock argument removes the
- * need for if statements all over the place.
- */
- assert(clock == vm_clock);
-
- /*
- * If the CPUs have been sleeping, advance the vm_clock timer now. This
- * ensures that the deadline for the timer is computed correctly below.
- * This also makes sure that the insn counter is synchronized before the
- * CPU starts running, in case the CPU is woken by an event other than
- * the earliest vm_clock timer.
- */
- icount_warp_rt(NULL);
- if (!all_cpu_threads_idle() || !clock->active_timers) {
- qemu_del_timer(clock->warp_timer);
- return;
- }
-
- vm_clock_warp_start = qemu_get_clock_ns(rt_clock);
- deadline = qemu_next_icount_deadline();
- if (deadline > 0) {
- /*
- * Ensure the vm_clock proceeds even when the virtual CPU goes to
- * sleep. Otherwise, the CPU might be waiting for a future timer
- * interrupt to wake it up, but the interrupt never comes because
- * the vCPU isn't running any insns and thus doesn't advance the
- * vm_clock.
- *
- * An extreme solution for this problem would be to never let VCPUs
- * sleep in icount mode if there is a pending vm_clock timer; rather
- * time could just advance to the next vm_clock event. Instead, we
- * do stop VCPUs and only advance vm_clock after some "real" time,
- * (related to the time left until the next event) has passed. This
- * rt_clock timer will do this. This avoids that the warps are too
- * visible externally---for example, you will not be sending network
- * packets continously instead of every 100ms.
- */
- qemu_mod_timer(clock->warp_timer, vm_clock_warp_start + deadline);
- } else {
- qemu_notify_event();
- }
-}
-
int64_t qemu_clock_has_timers(QEMUClock *clock)
{
return !!clock->active_timers;
@@ -709,52 +493,6 @@ void qemu_get_timer(QEMUFile *f, QEMUTimer *ts)
}
}
-static const VMStateDescription vmstate_timers = {
- .name = "timer",
- .version_id = 2,
- .minimum_version_id = 1,
- .minimum_version_id_old = 1,
- .fields = (VMStateField []) {
- VMSTATE_INT64(cpu_ticks_offset, TimersState),
- VMSTATE_INT64(dummy, TimersState),
- VMSTATE_INT64_V(cpu_clock_offset, TimersState, 2),
- VMSTATE_END_OF_LIST()
- }
-};
-
-void configure_icount(const char *option)
-{
- vmstate_register(NULL, 0, &vmstate_timers, &timers_state);
- if (!option)
- return;
-
- vm_clock->warp_timer = qemu_new_timer_ns(rt_clock, icount_warp_rt, NULL);
-
- if (strcmp(option, "auto") != 0) {
- icount_time_shift = strtol(option, NULL, 0);
- use_icount = 1;
- return;
- }
-
- use_icount = 2;
-
- /* 125MIPS seems a reasonable initial guess at the guest speed.
- It will be corrected fairly quickly anyway. */
- icount_time_shift = 3;
-
- /* Have both realtime and virtual time triggers for speed adjustment.
- The realtime trigger catches emulated time passing too slowly,
- the virtual time trigger catches emulated time passing too fast.
- Realtime triggers occur even when idle, so use them less frequently
- than VM triggers. */
- icount_rt_timer = qemu_new_timer_ms(rt_clock, icount_adjust_rt, NULL);
- qemu_mod_timer(icount_rt_timer,
- qemu_get_clock_ms(rt_clock) + 1000);
- icount_vm_timer = qemu_new_timer_ns(vm_clock, icount_adjust_vm, NULL);
- qemu_mod_timer(icount_vm_timer,
- qemu_get_clock_ns(vm_clock) + get_ticks_per_sec() / 10);
-}
-
void qemu_run_all_timers(void)
{
alarm_timer->pending = 0;
@@ -821,23 +559,6 @@ static void host_alarm_handler(int host_signum)
}
}
-int64_t qemu_next_icount_deadline(void)
-{
- /* To avoid problems with overflow limit this to 2^32. */
- int64_t delta = INT32_MAX;
-
- assert(use_icount);
- if (vm_clock->active_timers) {
- delta = vm_clock->active_timers->expire_time -
- qemu_get_clock_ns(vm_clock);
- }
-
- if (delta < 0)
- delta = 0;
-
- return delta;
-}
-
#if defined(__linux__)
#include "compatfd.h"
diff --git a/qemu-timer.h b/qemu-timer.h
index 4578075..ce576b9 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -58,9 +58,7 @@ int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
void qemu_run_all_timers(void);
int qemu_alarm_pending(void);
-int64_t qemu_next_icount_deadline(void);
void configure_alarms(char const *opt);
-void configure_icount(const char *option);
int qemu_calculate_timeout(void);
void init_clocks(void);
int init_timer_alarm(void);
@@ -153,12 +151,8 @@ void ptimer_run(ptimer_state *s, int oneshot);
void ptimer_stop(ptimer_state *s);
/* icount */
-int64_t qemu_icount_round(int64_t count);
-extern int64_t qemu_icount;
-extern int use_icount;
-extern int icount_time_shift;
-extern int64_t qemu_icount_bias;
int64_t cpu_get_icount(void);
+int64_t cpu_get_clock(void);
/*******************************************/
/* host CPU ticks (if available) */
@@ -314,22 +308,6 @@ static inline int64_t cpu_get_real_ticks (void)
}
#endif
-#ifdef NEED_CPU_H
-/* Deterministic execution requires that IO only be performed on the last
- instruction of a TB so that interrupts take effect immediately. */
-static inline int can_do_io(CPUState *env)
-{
- if (!use_icount)
- return 1;
-
- /* If not executing code then assume we are ok. */
- if (!env->current_tb)
- return 1;
-
- return env->can_do_io != 0;
-}
-#endif
-
#ifdef CONFIG_PROFILER
static inline int64_t profile_getclock(void)
{
--
1.7.6
* [Qemu-devel] [RFC PATCH 06/14] qemu-timer: do not refer to vm_running
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (4 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 05/14] qemu-timer: move icount to cpus.c Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 07/14] qemu-timer: move more stuff out of qemu-timer.c Paolo Bonzini
` (7 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 1 +
qemu-timer.c | 5 +----
2 files changed, 2 insertions(+), 4 deletions(-)
diff --git a/cpus.c b/cpus.c
index 31d450c..f8799f5 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1065,6 +1065,7 @@ void pause_all_vcpus(void)
{
CPUState *penv = first_cpu;
+ qemu_clock_enable(vm_clock, false);
while (penv) {
penv->stop = 1;
qemu_cpu_kick(penv);
diff --git a/qemu-timer.c b/qemu-timer.c
index 9238458..0c7cfd2 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -504,10 +504,7 @@ void qemu_run_all_timers(void)
}
/* vm time timers */
- if (vm_running) {
- qemu_run_timers(vm_clock);
- }
-
+ qemu_run_timers(vm_clock);
qemu_run_timers(rt_clock);
qemu_run_timers(host_clock);
}
--
1.7.6
* [Qemu-devel] [RFC PATCH 07/14] qemu-timer: move more stuff out of qemu-timer.c
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (5 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 06/14] qemu-timer: do not refer to vm_running Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 08/14] create main-loop.h Paolo Bonzini
` (6 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
qemu-timer.c | 37 +++++--------------------------------
qemu-timer.h | 3 ++-
savevm.c | 25 +++++++++++++++++++++++++
vl.c | 2 +-
4 files changed, 33 insertions(+), 34 deletions(-)
diff --git a/qemu-timer.c b/qemu-timer.c
index 0c7cfd2..3438c1c 100644
--- a/qemu-timer.c
+++ b/qemu-timer.c
@@ -266,11 +266,8 @@ static QEMUClock *qemu_new_clock(int type)
clock = g_malloc0(sizeof(QEMUClock));
clock->type = type;
clock->enabled = 1;
+ clock->last = INT64_MIN;
notifier_list_init(&clock->reset_notifiers);
- /* required to detect & report backward jumps */
- if (type == QEMU_CLOCK_HOST) {
- clock->last = get_clock_realtime();
- }
return clock;
}
@@ -344,7 +341,7 @@ void qemu_del_timer(QEMUTimer *ts)
/* modify the current timer so that it will be fired when current_time
>= expire_time. The corresponding callback will be called. */
-static void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
+void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
{
QEMUTimer **pt, *t;
@@ -378,8 +375,6 @@ static void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time)
}
}
-/* modify the current timer so that it will be fired when current_time
- >= expire_time. The corresponding callback will be called. */
void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
{
qemu_mod_timer_ns(ts, expire_time * ts->scale);
@@ -459,38 +454,16 @@ void qemu_unregister_clock_reset_notifier(QEMUClock *clock, Notifier *notifier)
notifier_list_remove(&clock->reset_notifiers, notifier);
}
-void init_clocks(void)
+static void __attribute__((constructor)) init_clocks(void)
{
rt_clock = qemu_new_clock(QEMU_CLOCK_REALTIME);
vm_clock = qemu_new_clock(QEMU_CLOCK_VIRTUAL);
host_clock = qemu_new_clock(QEMU_CLOCK_HOST);
-
- rtc_clock = host_clock;
}
-/* save a timer */
-void qemu_put_timer(QEMUFile *f, QEMUTimer *ts)
+uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts)
{
- uint64_t expire_time;
-
- if (qemu_timer_pending(ts)) {
- expire_time = ts->expire_time;
- } else {
- expire_time = -1;
- }
- qemu_put_be64(f, expire_time);
-}
-
-void qemu_get_timer(QEMUFile *f, QEMUTimer *ts)
-{
- uint64_t expire_time;
-
- expire_time = qemu_get_be64(f);
- if (expire_time != -1) {
- qemu_mod_timer_ns(ts, expire_time);
- } else {
- qemu_del_timer(ts);
- }
+ return qemu_timer_pending(ts) ? ts->expire_time : -1;
}
void qemu_run_all_timers(void)
diff --git a/qemu-timer.h b/qemu-timer.h
index ce576b9..19f1d0f 100644
--- a/qemu-timer.h
+++ b/qemu-timer.h
@@ -52,15 +52,16 @@ QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
QEMUTimerCB *cb, void *opaque);
void qemu_free_timer(QEMUTimer *ts);
void qemu_del_timer(QEMUTimer *ts);
+void qemu_mod_timer_ns(QEMUTimer *ts, int64_t expire_time);
void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time);
int qemu_timer_pending(QEMUTimer *ts);
int qemu_timer_expired(QEMUTimer *timer_head, int64_t current_time);
+uint64_t qemu_timer_expire_time_ns(QEMUTimer *ts);
void qemu_run_all_timers(void);
int qemu_alarm_pending(void);
void configure_alarms(char const *opt);
int qemu_calculate_timeout(void);
-void init_clocks(void);
int init_timer_alarm(void);
void quit_timers(void);
diff --git a/savevm.c b/savevm.c
index 1feaa70..1edcbd4 100644
--- a/savevm.c
+++ b/savevm.c
@@ -81,6 +81,7 @@
#include "migration.h"
#include "qemu_socket.h"
#include "qemu-queue.h"
+#include "qemu-timer.h"
#include "cpus.h"
#define SELF_ANNOUNCE_ROUNDS 5
@@ -674,6 +675,30 @@ uint64_t qemu_get_be64(QEMUFile *f)
return v;
}
+
+/* timer */
+
+void qemu_put_timer(QEMUFile *f, QEMUTimer *ts)
+{
+ uint64_t expire_time;
+
+ expire_time = qemu_timer_expire_time_ns(ts);
+ qemu_put_be64(f, expire_time);
+}
+
+void qemu_get_timer(QEMUFile *f, QEMUTimer *ts)
+{
+ uint64_t expire_time;
+
+ expire_time = qemu_get_be64(f);
+ if (expire_time != -1) {
+ qemu_mod_timer_ns(ts, expire_time);
+ } else {
+ qemu_del_timer(ts);
+ }
+}
+
+
/* bool */
static int get_bool(QEMUFile *f, void *pv, size_t size)
diff --git a/vl.c b/vl.c
index b773d2f..56a3db2 100644
--- a/vl.c
+++ b/vl.c
@@ -2202,7 +2202,7 @@ int main(int argc, char **argv, char **envp)
g_mem_set_vtable(&mem_trace);
g_thread_init(NULL);
- init_clocks();
+ rtc_clock = host_clock;
qemu_cache_utils_init(envp);
--
1.7.6
* [Qemu-devel] [RFC PATCH 08/14] create main-loop.h
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (6 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 07/14] qemu-timer: move more stuff out of qemu-timer.c Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 09/14] create main-loop.c Paolo Bonzini
` (5 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
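For illustration only (not part of the patch), this is roughly how a caller
registers an fd handler through the declarations that now live in
main-loop.h; the callback and helper names are made up, and the usual
qemu-common.h includes are assumed:

static void tool_fd_read(void *opaque)
{
    int fd = (intptr_t)opaque;
    char buf[512];

    /* Drain the descriptor; a real handler would parse requests here. */
    while (read(fd, buf, sizeof(buf)) > 0) {
        /* nothing */
    }
}

static void tool_watch_fd(int fd)
{
    /* Watch for readability only; pass NULL for the write handler. */
    qemu_set_fd_handler(fd, tool_fd_read, NULL, (void *)(intptr_t)fd);
}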
async.c | 1 +
cpus.c | 7 +----
cpus.h | 1 -
main-loop.h | 80 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
qemu-char.h | 12 +--------
qemu-common.h | 11 --------
sysemu.h | 10 +------
vl.c | 1 +
8 files changed, 85 insertions(+), 38 deletions(-)
create mode 100644 main-loop.h
diff --git a/async.c b/async.c
index ca13962..332d511 100644
--- a/async.c
+++ b/async.c
@@ -24,6 +24,7 @@
#include "qemu-common.h"
#include "qemu-aio.h"
+#include "main-loop.h"
/* Anchor of the list of Bottom Halves belonging to the context */
static struct QEMUBH *first_bh;
diff --git a/cpus.c b/cpus.c
index f8799f5..627a2b7 100644
--- a/cpus.c
+++ b/cpus.c
@@ -33,17 +33,12 @@
#include "qemu-thread.h"
#include "cpus.h"
+#include "main-loop.h"
#ifndef _WIN32
#include "compatfd.h"
#endif
-#ifdef SIGRTMIN
-#define SIG_IPI (SIGRTMIN+4)
-#else
-#define SIG_IPI SIGUSR1
-#endif
-
#ifdef CONFIG_LINUX
#include <sys/prctl.h>
diff --git a/cpus.h b/cpus.h
index f42b54e..f7b23ef 100644
--- a/cpus.h
+++ b/cpus.h
@@ -2,7 +2,6 @@
#define QEMU_CPUS_H
/* cpus.c */
-int qemu_init_main_loop(void);
void qemu_main_loop_start(void);
void resume_all_vcpus(void);
void pause_all_vcpus(void);
diff --git a/main-loop.h b/main-loop.h
new file mode 100644
index 0000000..d61dfdd
--- /dev/null
+++ b/main-loop.h
@@ -0,0 +1,80 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#ifndef QEMU_MAIN_LOOP_H
+#define QEMU_MAIN_LOOP_H 1
+
+#ifdef SIGRTMIN
+#define SIG_IPI (SIGRTMIN+4)
+#else
+#define SIG_IPI SIGUSR1
+#endif
+
+int qemu_init_main_loop(void);
+int main_loop_wait(int nonblocking);
+
+/* Force QEMU to process pending events */
+void qemu_notify_event(void);
+
+typedef struct vm_change_state_entry VMChangeStateEntry;
+typedef void VMChangeStateHandler(void *opaque, int running, int reason);
+
+VMChangeStateEntry *qemu_add_vm_change_state_handler(VMChangeStateHandler *cb,
+ void *opaque);
+void qemu_del_vm_change_state_handler(VMChangeStateEntry *e);
+
+#ifdef _WIN32
+/* return TRUE if no sleep should be done afterwards */
+typedef int PollingFunc(void *opaque);
+
+int qemu_add_polling_cb(PollingFunc *func, void *opaque);
+void qemu_del_polling_cb(PollingFunc *func, void *opaque);
+
+/* Wait objects handling */
+typedef void WaitObjectFunc(void *opaque);
+
+int qemu_add_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque);
+void qemu_del_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque);
+#endif
+
+/* async I/O support */
+
+typedef void IOReadHandler(void *opaque, const uint8_t *buf, int size);
+typedef int IOCanReadHandler(void *opaque);
+typedef void IOHandler(void *opaque);
+
+void qemu_iohandler_fill(int *pnfds, fd_set *readfds, fd_set *writefds, fd_set *xfds);
+void qemu_iohandler_poll(fd_set *readfds, fd_set *writefds, fd_set *xfds, int rc);
+
+int qemu_set_fd_handler2(int fd,
+ IOCanReadHandler *fd_read_poll,
+ IOHandler *fd_read,
+ IOHandler *fd_write,
+ void *opaque);
+int qemu_set_fd_handler(int fd,
+ IOHandler *fd_read,
+ IOHandler *fd_write,
+ void *opaque);
+
+#endif
diff --git a/qemu-char.h b/qemu-char.h
index eebbdd8..7efcf99 100644
--- a/qemu-char.h
+++ b/qemu-char.h
@@ -7,6 +7,7 @@
#include "qemu-config.h"
#include "qobject.h"
#include "qstring.h"
+#include "main-loop.h"
/* character device */
@@ -237,15 +238,4 @@ void qemu_chr_close_mem(CharDriverState *chr);
QString *qemu_chr_mem_to_qs(CharDriverState *chr);
size_t qemu_chr_mem_osize(const CharDriverState *chr);
-/* async I/O support */
-
-int qemu_set_fd_handler2(int fd,
- IOCanReadHandler *fd_read_poll,
- IOHandler *fd_read,
- IOHandler *fd_write,
- void *opaque);
-int qemu_set_fd_handler(int fd,
- IOHandler *fd_read,
- IOHandler *fd_write,
- void *opaque);
#endif
diff --git a/qemu-common.h b/qemu-common.h
index 16cc5e9..e4fb661 100644
--- a/qemu-common.h
+++ b/qemu-common.h
@@ -211,14 +211,6 @@ int qemu_pipe(int pipefd[2]);
void QEMU_NORETURN hw_error(const char *fmt, ...) GCC_FMT_ATTR(1, 2);
-/* IO callbacks. */
-typedef void IOReadHandler(void *opaque, const uint8_t *buf, int size);
-typedef int IOCanReadHandler(void *opaque);
-typedef void IOHandler(void *opaque);
-
-void qemu_iohandler_fill(int *pnfds, fd_set *readfds, fd_set *writefds, fd_set *xfds);
-void qemu_iohandler_poll(fd_set *readfds, fd_set *writefds, fd_set *xfds, int rc);
-
struct ParallelIOArg {
void *buffer;
int count;
@@ -283,9 +275,6 @@ int cpu_load(QEMUFile *f, void *opaque, int version_id);
/* Force QEMU to stop what it's doing and service IO */
void qemu_service_io(void);
-/* Force QEMU to process pending events */
-void qemu_notify_event(void);
-
/* Unblock cpu */
void qemu_cpu_kick(void *env);
void qemu_cpu_kick_self(void);
diff --git a/sysemu.h b/sysemu.h
index 9090457..351f1bb 100644
--- a/sysemu.h
+++ b/sysemu.h
@@ -7,6 +7,7 @@
#include "qemu-queue.h"
#include "qemu-timer.h"
#include "notify.h"
+#include "main-loop.h"
/* vl.c */
extern const char *bios_name;
@@ -17,13 +18,6 @@ extern uint8_t qemu_uuid[];
int qemu_uuid_parse(const char *str, uint8_t *uuid);
#define UUID_FMT "%02hhx%02hhx%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx-%02hhx%02hhx%02hhx%02hhx%02hhx%02hhx"
-typedef struct vm_change_state_entry VMChangeStateEntry;
-typedef void VMChangeStateHandler(void *opaque, int running, int reason);
-
-VMChangeStateEntry *qemu_add_vm_change_state_handler(VMChangeStateHandler *cb,
- void *opaque);
-void qemu_del_vm_change_state_handler(VMChangeStateEntry *e);
-
#define VMSTOP_USER 0
#define VMSTOP_DEBUG 1
#define VMSTOP_SHUTDOWN 2
@@ -67,8 +61,6 @@ void do_info_snapshots(Monitor *mon);
void qemu_announce_self(void);
-int main_loop_wait(int nonblocking);
-
bool qemu_savevm_state_blocked(Monitor *mon);
int qemu_savevm_state_begin(Monitor *mon, QEMUFile *f, int blk_enable,
int shared);
diff --git a/vl.c b/vl.c
index 56a3db2..f67baf7 100644
--- a/vl.c
+++ b/vl.c
@@ -147,6 +147,7 @@ int main(int argc, char **argv)
#include "qemu-config.h"
#include "qemu-objects.h"
#include "qemu-options.h"
+#include "main-loop.h"
#ifdef CONFIG_VIRTFS
#include "fsdev/qemu-fsdev.h"
#endif
--
1.7.6
* [Qemu-devel] [RFC PATCH 09/14] create main-loop.c
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (7 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 08/14] create main-loop.h Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 10/14] Revert to a hand-made select loop Paolo Bonzini
` (4 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Makefile.objs | 2 +-
cpus.c | 193 +---------------------
cpus.h | 1 +
main-loop.c | 499 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
os-win32.c | 123 -------------
qemu-os-posix.h | 4 -
qemu-os-win32.h | 15 +--
slirp/libslirp.h | 11 --
vl.c | 123 +-------------
9 files changed, 504 insertions(+), 467 deletions(-)
create mode 100644 main-loop.c
diff --git a/Makefile.objs b/Makefile.objs
index 62020d7..a6d4a17 100644
--- a/Makefile.objs
+++ b/Makefile.objs
@@ -81,7 +81,7 @@ common-obj-y += $(oslib-obj-y)
common-obj-$(CONFIG_WIN32) += os-win32.o
common-obj-$(CONFIG_POSIX) += os-posix.o
-common-obj-y += tcg-runtime.o host-utils.o
+common-obj-y += tcg-runtime.o host-utils.o main-loop.o
common-obj-y += irq.o ioport.o input.o
common-obj-$(CONFIG_PTIMER) += ptimer.o
common-obj-$(CONFIG_MAX7310) += max7310.o
diff --git a/cpus.c b/cpus.c
index 627a2b7..715379a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -543,148 +543,10 @@ static void qemu_kvm_eat_signals(CPUState *env)
#endif /* !CONFIG_LINUX */
#ifndef _WIN32
-static int io_thread_fd = -1;
-
-static void qemu_event_increment(void)
-{
- /* Write 8 bytes to be compatible with eventfd. */
- static const uint64_t val = 1;
- ssize_t ret;
-
- if (io_thread_fd == -1) {
- return;
- }
- do {
- ret = write(io_thread_fd, &val, sizeof(val));
- } while (ret < 0 && errno == EINTR);
-
- /* EAGAIN is fine, a read must be pending. */
- if (ret < 0 && errno != EAGAIN) {
- fprintf(stderr, "qemu_event_increment: write() failed: %s\n",
- strerror(errno));
- exit (1);
- }
-}
-
-static void qemu_event_read(void *opaque)
-{
- int fd = (intptr_t)opaque;
- ssize_t len;
- char buffer[512];
-
- /* Drain the notify pipe. For eventfd, only 8 bytes will be read. */
- do {
- len = read(fd, buffer, sizeof(buffer));
- } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
-}
-
-static int qemu_event_init(void)
-{
- int err;
- int fds[2];
-
- err = qemu_eventfd(fds);
- if (err == -1) {
- return -errno;
- }
- err = fcntl_setfl(fds[0], O_NONBLOCK);
- if (err < 0) {
- goto fail;
- }
- err = fcntl_setfl(fds[1], O_NONBLOCK);
- if (err < 0) {
- goto fail;
- }
- qemu_set_fd_handler2(fds[0], NULL, qemu_event_read, NULL,
- (void *)(intptr_t)fds[0]);
-
- io_thread_fd = fds[1];
- return 0;
-
-fail:
- close(fds[0]);
- close(fds[1]);
- return err;
-}
-
static void dummy_signal(int sig)
{
}
-/* If we have signalfd, we mask out the signals we want to handle and then
- * use signalfd to listen for them. We rely on whatever the current signal
- * handler is to dispatch the signals when we receive them.
- */
-static void sigfd_handler(void *opaque)
-{
- int fd = (intptr_t)opaque;
- struct qemu_signalfd_siginfo info;
- struct sigaction action;
- ssize_t len;
-
- while (1) {
- do {
- len = read(fd, &info, sizeof(info));
- } while (len == -1 && errno == EINTR);
-
- if (len == -1 && errno == EAGAIN) {
- break;
- }
-
- if (len != sizeof(info)) {
- printf("read from sigfd returned %zd: %m\n", len);
- return;
- }
-
- sigaction(info.ssi_signo, NULL, &action);
- if ((action.sa_flags & SA_SIGINFO) && action.sa_sigaction) {
- action.sa_sigaction(info.ssi_signo,
- (siginfo_t *)&info, NULL);
- } else if (action.sa_handler) {
- action.sa_handler(info.ssi_signo);
- }
- }
-}
-
-static int qemu_signal_init(void)
-{
- int sigfd;
- sigset_t set;
-
- /* SIGUSR2 used by posix-aio-compat.c */
- sigemptyset(&set);
- sigaddset(&set, SIGUSR2);
- pthread_sigmask(SIG_UNBLOCK, &set, NULL);
-
- /*
- * SIG_IPI must be blocked in the main thread and must not be caught
- * by sigwait() in the signal thread. Otherwise, the cpu thread will
- * not catch it reliably.
- */
- sigemptyset(&set);
- sigaddset(&set, SIG_IPI);
- pthread_sigmask(SIG_BLOCK, &set, NULL);
-
- sigemptyset(&set);
- sigaddset(&set, SIGIO);
- sigaddset(&set, SIGALRM);
- sigaddset(&set, SIGBUS);
- pthread_sigmask(SIG_BLOCK, &set, NULL);
-
- sigfd = qemu_signalfd(&set);
- if (sigfd == -1) {
- fprintf(stderr, "failed to create signalfd\n");
- return -errno;
- }
-
- fcntl_setfl(sigfd, O_NONBLOCK);
-
- qemu_set_fd_handler2(sigfd, NULL, sigfd_handler, NULL,
- (void *)(intptr_t)sigfd);
-
- return 0;
-}
-
static void qemu_kvm_init_cpu_signals(CPUState *env)
{
int r;
@@ -728,38 +590,6 @@ static void qemu_tcg_init_cpu_signals(void)
}
#else /* _WIN32 */
-
-HANDLE qemu_event_handle;
-
-static void dummy_event_handler(void *opaque)
-{
-}
-
-static int qemu_event_init(void)
-{
- qemu_event_handle = CreateEvent(NULL, FALSE, FALSE, NULL);
- if (!qemu_event_handle) {
- fprintf(stderr, "Failed CreateEvent: %ld\n", GetLastError());
- return -1;
- }
- qemu_add_wait_object(qemu_event_handle, dummy_event_handler, NULL);
- return 0;
-}
-
-static void qemu_event_increment(void)
-{
- if (!SetEvent(qemu_event_handle)) {
- fprintf(stderr, "qemu_event_increment: SetEvent failed: %ld\n",
- GetLastError());
- exit (1);
- }
-}
-
-static int qemu_signal_init(void)
-{
- return 0;
-}
-
static void qemu_kvm_init_cpu_signals(CPUState *env)
{
abort();
@@ -785,23 +615,9 @@ static QemuCond qemu_cpu_cond;
static QemuCond qemu_pause_cond;
static QemuCond qemu_work_cond;
-int qemu_init_main_loop(void)
+void qemu_init_cpu_loop(void)
{
- int ret;
-
qemu_init_sigbus();
-
- ret = qemu_signal_init();
- if (ret) {
- return ret;
- }
-
- /* Note eventfd must be drained before signalfd handlers run */
- ret = qemu_event_init();
- if (ret) {
- return ret;
- }
-
qemu_cond_init(&qemu_cpu_cond);
qemu_cond_init(&qemu_pause_cond);
qemu_cond_init(&qemu_work_cond);
@@ -810,8 +626,6 @@ int qemu_init_main_loop(void)
qemu_mutex_lock(&qemu_global_mutex);
qemu_thread_get_self(&io_thread);
-
- return 0;
}
void qemu_main_loop_start(void)
@@ -1135,11 +949,6 @@ void qemu_init_vcpu(void *_env)
}
}
-void qemu_notify_event(void)
-{
- qemu_event_increment();
-}
-
void cpu_stop_current(void)
{
if (cpu_single_env) {
diff --git a/cpus.h b/cpus.h
index f7b23ef..2c06a2b 100644
--- a/cpus.h
+++ b/cpus.h
@@ -2,6 +2,7 @@
#define QEMU_CPUS_H
/* cpus.c */
+void qemu_init_cpu_loop(void);
void qemu_main_loop_start(void);
void resume_all_vcpus(void);
void pause_all_vcpus(void);
diff --git a/main-loop.c b/main-loop.c
new file mode 100644
index 0000000..b13e441
--- /dev/null
+++ b/main-loop.c
@@ -0,0 +1,499 @@
+/*
+ * QEMU System Emulator
+ *
+ * Copyright (c) 2003-2008 Fabrice Bellard
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+#include "config-host.h"
+#include <unistd.h>
+#include <signal.h>
+#include <time.h>
+#include <errno.h>
+#include <sys/time.h>
+#include <stdbool.h>
+
+#ifdef _WIN32
+#include <windows.h>
+#include <winsock2.h>
+#include <ws2tcpip.h>
+#else
+#include <sys/socket.h>
+#include <netinet/in.h>
+#include <net/if.h>
+#include <arpa/inet.h>
+#include <sys/select.h>
+#include <sys/stat.h>
+#include "compatfd.h"
+#endif
+
+#include <glib.h>
+
+#include "main-loop.h"
+#include "qemu-timer.h"
+#include "slirp/libslirp.h"
+
+#ifndef _WIN32
+
+static int io_thread_fd = -1;
+
+void qemu_notify_event(void)
+{
+ /* Write 8 bytes to be compatible with eventfd. */
+ static const uint64_t val = 1;
+ ssize_t ret;
+
+ if (io_thread_fd == -1) {
+ return;
+ }
+ do {
+ ret = write(io_thread_fd, &val, sizeof(val));
+ } while (ret < 0 && errno == EINTR);
+
+ /* EAGAIN is fine, a read must be pending. */
+ if (ret < 0 && errno != EAGAIN) {
+ fprintf(stderr, "qemu_notify_event: write() failed: %s\n",
+ strerror(errno));
+ exit(1);
+ }
+}
+
+static void qemu_event_read(void *opaque)
+{
+ int fd = (intptr_t)opaque;
+ ssize_t len;
+ char buffer[512];
+
+ /* Drain the notify pipe. For eventfd, only 8 bytes will be read. */
+ do {
+ len = read(fd, buffer, sizeof(buffer));
+ } while ((len == -1 && errno == EINTR) || len == sizeof(buffer));
+}
+
+static int qemu_event_init(void)
+{
+ int err;
+ int fds[2];
+
+ err = qemu_eventfd(fds);
+ if (err == -1) {
+ return -errno;
+ }
+ err = fcntl_setfl(fds[0], O_NONBLOCK);
+ if (err < 0) {
+ goto fail;
+ }
+ err = fcntl_setfl(fds[1], O_NONBLOCK);
+ if (err < 0) {
+ goto fail;
+ }
+ qemu_set_fd_handler2(fds[0], NULL, qemu_event_read, NULL,
+ (void *)(intptr_t)fds[0]);
+
+ io_thread_fd = fds[1];
+ return 0;
+
+fail:
+ close(fds[0]);
+ close(fds[1]);
+ return err;
+}
+
+/* If we have signalfd, we mask out the signals we want to handle and then
+ * use signalfd to listen for them. We rely on whatever the current signal
+ * handler is to dispatch the signals when we receive them.
+ */
+static void sigfd_handler(void *opaque)
+{
+ int fd = (intptr_t)opaque;
+ struct qemu_signalfd_siginfo info;
+ struct sigaction action;
+ ssize_t len;
+
+ while (1) {
+ do {
+ len = read(fd, &info, sizeof(info));
+ } while (len == -1 && errno == EINTR);
+
+ if (len == -1 && errno == EAGAIN) {
+ break;
+ }
+
+ if (len != sizeof(info)) {
+ printf("read from sigfd returned %zd: %m\n", len);
+ return;
+ }
+
+ sigaction(info.ssi_signo, NULL, &action);
+ if ((action.sa_flags & SA_SIGINFO) && action.sa_sigaction) {
+ action.sa_sigaction(info.ssi_signo,
+ (siginfo_t *)&info, NULL);
+ } else if (action.sa_handler) {
+ action.sa_handler(info.ssi_signo);
+ }
+ }
+}
+
+static int qemu_signal_init(void)
+{
+ int sigfd;
+ sigset_t set;
+
+ /* SIGUSR2 used by posix-aio-compat.c */
+ sigemptyset(&set);
+ sigaddset(&set, SIGUSR2);
+ pthread_sigmask(SIG_UNBLOCK, &set, NULL);
+
+ /*
+ * SIG_IPI must be blocked in the main thread and must not be caught
+ * by sigwait() in the signal thread. Otherwise, the cpu thread will
+ * not catch it reliably.
+ */
+ sigemptyset(&set);
+ sigaddset(&set, SIG_IPI);
+ pthread_sigmask(SIG_BLOCK, &set, NULL);
+
+ sigemptyset(&set);
+ sigaddset(&set, SIGIO);
+ sigaddset(&set, SIGALRM);
+ sigaddset(&set, SIGBUS);
+ pthread_sigmask(SIG_BLOCK, &set, NULL);
+
+ sigfd = qemu_signalfd(&set);
+ if (sigfd == -1) {
+ fprintf(stderr, "failed to create signalfd\n");
+ return -errno;
+ }
+
+ fcntl_setfl(sigfd, O_NONBLOCK);
+
+ qemu_set_fd_handler2(sigfd, NULL, sigfd_handler, NULL,
+ (void *)(intptr_t)sigfd);
+
+ return 0;
+}
+
+#else /* _WIN32 */
+
+HANDLE qemu_event_handle;
+
+static void dummy_event_handler(void *opaque)
+{
+}
+
+static int qemu_event_init(void)
+{
+ qemu_event_handle = CreateEvent(NULL, FALSE, FALSE, NULL);
+ if (!qemu_event_handle) {
+ fprintf(stderr, "Failed CreateEvent: %ld\n", GetLastError());
+ return -1;
+ }
+ qemu_add_wait_object(qemu_event_handle, dummy_event_handler, NULL);
+ return 0;
+}
+
+void qemu_notify_event(void)
+{
+ if (!SetEvent(qemu_event_handle)) {
+ fprintf(stderr, "qemu_notify_event: SetEvent failed: %ld\n",
+ GetLastError());
+ exit(1);
+ }
+}
+
+static int qemu_signal_init(void)
+{
+ return 0;
+}
+#endif
+
+int qemu_init_main_loop(void)
+{
+ int ret;
+
+ ret = qemu_signal_init();
+ if (ret) {
+ return ret;
+ }
+
+ /* Note eventfd must be drained before signalfd handlers run */
+ ret = qemu_event_init();
+ if (ret) {
+ return ret;
+ }
+
+ return 0;
+}
+
+
+static GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
+static int n_poll_fds;
+static int max_priority;
+
+static void glib_select_fill(int *max_fd, fd_set *rfds, fd_set *wfds,
+ fd_set *xfds, struct timeval *tv)
+{
+ GMainContext *context = g_main_context_default();
+ int i;
+ int timeout = 0, cur_timeout;
+
+ g_main_context_prepare(context, &max_priority);
+
+ n_poll_fds = g_main_context_query(context, max_priority, &timeout,
+ poll_fds, ARRAY_SIZE(poll_fds));
+ g_assert(n_poll_fds <= ARRAY_SIZE(poll_fds));
+
+ for (i = 0; i < n_poll_fds; i++) {
+ GPollFD *p = &poll_fds[i];
+
+ if ((p->events & G_IO_IN)) {
+ FD_SET(p->fd, rfds);
+ *max_fd = MAX(*max_fd, p->fd);
+ }
+ if ((p->events & G_IO_OUT)) {
+ FD_SET(p->fd, wfds);
+ *max_fd = MAX(*max_fd, p->fd);
+ }
+ if ((p->events & G_IO_ERR)) {
+ FD_SET(p->fd, xfds);
+ *max_fd = MAX(*max_fd, p->fd);
+ }
+ }
+
+ cur_timeout = (tv->tv_sec * 1000) + ((tv->tv_usec + 500) / 1000);
+ if (timeout >= 0 && timeout < cur_timeout) {
+ tv->tv_sec = timeout / 1000;
+ tv->tv_usec = (timeout % 1000) * 1000;
+ }
+}
+
+static void glib_select_poll(fd_set *rfds, fd_set *wfds, fd_set *xfds,
+ bool err)
+{
+ GMainContext *context = g_main_context_default();
+
+ if (!err) {
+ int i;
+
+ for (i = 0; i < n_poll_fds; i++) {
+ GPollFD *p = &poll_fds[i];
+
+ if ((p->events & G_IO_IN) && FD_ISSET(p->fd, rfds)) {
+ p->revents |= G_IO_IN;
+ }
+ if ((p->events & G_IO_OUT) && FD_ISSET(p->fd, wfds)) {
+ p->revents |= G_IO_OUT;
+ }
+ if ((p->events & G_IO_ERR) && FD_ISSET(p->fd, xfds)) {
+ p->revents |= G_IO_ERR;
+ }
+ }
+ }
+
+ if (g_main_context_check(context, max_priority, poll_fds, n_poll_fds)) {
+ g_main_context_dispatch(context);
+ }
+}
+
+#ifdef _WIN32
+/***********************************************************/
+/* Polling handling */
+
+typedef struct PollingEntry {
+ PollingFunc *func;
+ void *opaque;
+ struct PollingEntry *next;
+} PollingEntry;
+
+static PollingEntry *first_polling_entry;
+
+int qemu_add_polling_cb(PollingFunc *func, void *opaque)
+{
+ PollingEntry **ppe, *pe;
+ pe = g_malloc0(sizeof(PollingEntry));
+ pe->func = func;
+ pe->opaque = opaque;
+ for(ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next);
+ *ppe = pe;
+ return 0;
+}
+
+void qemu_del_polling_cb(PollingFunc *func, void *opaque)
+{
+ PollingEntry **ppe, *pe;
+ for(ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next) {
+ pe = *ppe;
+ if (pe->func == func && pe->opaque == opaque) {
+ *ppe = pe->next;
+ g_free(pe);
+ break;
+ }
+ }
+}
+
+/***********************************************************/
+/* Wait objects support */
+typedef struct WaitObjects {
+ int num;
+ HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
+ WaitObjectFunc *func[MAXIMUM_WAIT_OBJECTS + 1];
+ void *opaque[MAXIMUM_WAIT_OBJECTS + 1];
+} WaitObjects;
+
+static WaitObjects wait_objects = {0};
+
+int qemu_add_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
+{
+ WaitObjects *w = &wait_objects;
+ if (w->num >= MAXIMUM_WAIT_OBJECTS) {
+ return -1;
+ }
+ w->events[w->num] = handle;
+ w->func[w->num] = func;
+ w->opaque[w->num] = opaque;
+ w->num++;
+ return 0;
+}
+
+void qemu_del_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
+{
+ int i, found;
+ WaitObjects *w = &wait_objects;
+
+ found = 0;
+ for (i = 0; i < w->num; i++) {
+ if (w->events[i] == handle) {
+ found = 1;
+ }
+ if (found) {
+ w->events[i] = w->events[i + 1];
+ w->func[i] = w->func[i + 1];
+ w->opaque[i] = w->opaque[i + 1];
+ }
+ }
+ if (found) {
+ w->num--;
+ }
+}
+
+static void os_host_main_loop_wait(int *timeout)
+{
+ int ret, ret2, i;
+ PollingEntry *pe;
+
+ /* XXX: need to suppress polling by better using win32 events */
+ ret = 0;
+ for (pe = first_polling_entry; pe != NULL; pe = pe->next) {
+ ret |= pe->func(pe->opaque);
+ }
+ if (ret == 0) {
+ int err;
+ WaitObjects *w = &wait_objects;
+
+ qemu_mutex_unlock_iothread();
+ ret = WaitForMultipleObjects(w->num, w->events, FALSE, *timeout);
+ qemu_mutex_lock_iothread();
+ if (WAIT_OBJECT_0 + 0 <= ret && ret <= WAIT_OBJECT_0 + w->num - 1) {
+ if (w->func[ret - WAIT_OBJECT_0]) {
+ w->func[ret - WAIT_OBJECT_0](w->opaque[ret - WAIT_OBJECT_0]);
+ }
+
+ /* Check for additional signaled events */
+ for (i = (ret - WAIT_OBJECT_0 + 1); i < w->num; i++) {
+ /* Check if event is signaled */
+ ret2 = WaitForSingleObject(w->events[i], 0);
+ if (ret2 == WAIT_OBJECT_0) {
+ if (w->func[i]) {
+ w->func[i](w->opaque[i]);
+ }
+ } else if (ret2 != WAIT_TIMEOUT) {
+ err = GetLastError();
+ fprintf(stderr, "WaitForSingleObject error %d %d\n", i, err);
+ }
+ }
+ } else if (ret != WAIT_TIMEOUT) {
+ err = GetLastError();
+ fprintf(stderr, "WaitForMultipleObjects error %d %d\n", ret, err);
+ }
+ }
+
+ *timeout = 0;
+}
+#else
+static inline void os_host_main_loop_wait(int *timeout)
+{
+}
+#endif
+
+int main_loop_wait(int nonblocking)
+{
+ fd_set rfds, wfds, xfds;
+ int ret, nfds;
+ struct timeval tv;
+ int timeout;
+
+ if (nonblocking) {
+ timeout = 0;
+ } else {
+ timeout = qemu_calculate_timeout();
+ qemu_bh_update_timeout(&timeout);
+ }
+
+ os_host_main_loop_wait(&timeout);
+
+ tv.tv_sec = timeout / 1000;
+ tv.tv_usec = (timeout % 1000) * 1000;
+
+ /* poll any events */
+ /* XXX: separate device handlers from system ones */
+ nfds = -1;
+ FD_ZERO(&rfds);
+ FD_ZERO(&wfds);
+ FD_ZERO(&xfds);
+
+ qemu_iohandler_fill(&nfds, &rfds, &wfds, &xfds);
+#ifdef CONFIG_SLIRP
+ slirp_select_fill(&nfds, &rfds, &wfds, &xfds);
+#endif
+ glib_select_fill(&nfds, &rfds, &wfds, &xfds, &tv);
+
+ if (timeout > 0) {
+ qemu_mutex_unlock_iothread();
+ }
+
+ ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
+
+ if (timeout > 0) {
+ qemu_mutex_lock_iothread();
+ }
+
+ qemu_iohandler_poll(&rfds, &wfds, &xfds, ret);
+#ifdef CONFIG_SLIRP
+ slirp_select_poll(&rfds, &wfds, &xfds, (ret < 0));
+#endif
+ glib_select_poll(&rfds, &wfds, &xfds, (ret < 0));
+
+ qemu_run_all_timers();
+
+ /* Check bottom-halves last in case any of the earlier events triggered
+ them. */
+ qemu_bh_poll();
+
+ return ret;
+}
diff --git a/os-win32.c b/os-win32.c
index f09f01f..7909401 100644
--- a/os-win32.c
+++ b/os-win32.c
@@ -48,129 +48,6 @@ int setenv(const char *name, const char *value, int overwrite)
return result;
}
-/***********************************************************/
-/* Polling handling */
-
-typedef struct PollingEntry {
- PollingFunc *func;
- void *opaque;
- struct PollingEntry *next;
-} PollingEntry;
-
-static PollingEntry *first_polling_entry;
-
-int qemu_add_polling_cb(PollingFunc *func, void *opaque)
-{
- PollingEntry **ppe, *pe;
- pe = g_malloc0(sizeof(PollingEntry));
- pe->func = func;
- pe->opaque = opaque;
- for(ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next);
- *ppe = pe;
- return 0;
-}
-
-void qemu_del_polling_cb(PollingFunc *func, void *opaque)
-{
- PollingEntry **ppe, *pe;
- for(ppe = &first_polling_entry; *ppe != NULL; ppe = &(*ppe)->next) {
- pe = *ppe;
- if (pe->func == func && pe->opaque == opaque) {
- *ppe = pe->next;
- g_free(pe);
- break;
- }
- }
-}
-
-/***********************************************************/
-/* Wait objects support */
-typedef struct WaitObjects {
- int num;
- HANDLE events[MAXIMUM_WAIT_OBJECTS + 1];
- WaitObjectFunc *func[MAXIMUM_WAIT_OBJECTS + 1];
- void *opaque[MAXIMUM_WAIT_OBJECTS + 1];
-} WaitObjects;
-
-static WaitObjects wait_objects = {0};
-
-int qemu_add_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
-{
- WaitObjects *w = &wait_objects;
-
- if (w->num >= MAXIMUM_WAIT_OBJECTS)
- return -1;
- w->events[w->num] = handle;
- w->func[w->num] = func;
- w->opaque[w->num] = opaque;
- w->num++;
- return 0;
-}
-
-void qemu_del_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque)
-{
- int i, found;
- WaitObjects *w = &wait_objects;
-
- found = 0;
- for (i = 0; i < w->num; i++) {
- if (w->events[i] == handle)
- found = 1;
- if (found) {
- w->events[i] = w->events[i + 1];
- w->func[i] = w->func[i + 1];
- w->opaque[i] = w->opaque[i + 1];
- }
- }
- if (found)
- w->num--;
-}
-
-void os_host_main_loop_wait(int *timeout)
-{
- int ret, ret2, i;
- PollingEntry *pe;
-
- /* XXX: need to suppress polling by better using win32 events */
- ret = 0;
- for(pe = first_polling_entry; pe != NULL; pe = pe->next) {
- ret |= pe->func(pe->opaque);
- }
- if (ret == 0) {
- int err;
- WaitObjects *w = &wait_objects;
-
- qemu_mutex_unlock_iothread();
- ret = WaitForMultipleObjects(w->num, w->events, FALSE, *timeout);
- qemu_mutex_lock_iothread();
- if (WAIT_OBJECT_0 + 0 <= ret && ret <= WAIT_OBJECT_0 + w->num - 1) {
- if (w->func[ret - WAIT_OBJECT_0])
- w->func[ret - WAIT_OBJECT_0](w->opaque[ret - WAIT_OBJECT_0]);
-
- /* Check for additional signaled events */
- for(i = (ret - WAIT_OBJECT_0 + 1); i < w->num; i++) {
-
- /* Check if event is signaled */
- ret2 = WaitForSingleObject(w->events[i], 0);
- if(ret2 == WAIT_OBJECT_0) {
- if (w->func[i])
- w->func[i](w->opaque[i]);
- } else if (ret2 == WAIT_TIMEOUT) {
- } else {
- err = GetLastError();
- fprintf(stderr, "WaitForSingleObject error %d %d\n", i, err);
- }
- }
- } else if (ret == WAIT_TIMEOUT) {
- } else {
- err = GetLastError();
- fprintf(stderr, "WaitForMultipleObjects error %d %d\n", ret, err);
- }
- }
-
- *timeout = 0;
-}
-
static BOOL WINAPI qemu_ctrl_handler(DWORD type)
{
exit(STATUS_CONTROL_C_EXIT);
diff --git a/qemu-os-posix.h b/qemu-os-posix.h
index 81fd9ab..920499d 100644
--- a/qemu-os-posix.h
+++ b/qemu-os-posix.h
@@ -26,10 +26,6 @@
#ifndef QEMU_OS_POSIX_H
#define QEMU_OS_POSIX_H
-static inline void os_host_main_loop_wait(int *timeout)
-{
-}
-
void os_set_line_buffering(void);
void os_set_proc_name(const char *s);
void os_setup_signal_handling(void);
diff --git a/qemu-os-win32.h b/qemu-os-win32.h
index 8a069d7..84e4d35 100644
--- a/qemu-os-win32.h
+++ b/qemu-os-win32.h
@@ -28,26 +28,13 @@
#include <windows.h>
#include <winsock2.h>
+#include "main-loop.h"
/* Declaration of ffs() is missing in MinGW's strings.h. */
int ffs(int i);
/* Polling handling */
-/* return TRUE if no sleep should be done afterwards */
-typedef int PollingFunc(void *opaque);
-
-int qemu_add_polling_cb(PollingFunc *func, void *opaque);
-void qemu_del_polling_cb(PollingFunc *func, void *opaque);
-
-/* Wait objects handling */
-typedef void WaitObjectFunc(void *opaque);
-
-int qemu_add_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque);
-void qemu_del_wait_object(HANDLE handle, WaitObjectFunc *func, void *opaque);
-
-void os_host_main_loop_wait(int *timeout);
-
static inline void os_setup_signal_handling(void) {}
static inline void os_daemonize(void) {}
static inline void os_setup_post(void) {}
diff --git a/slirp/libslirp.h b/slirp/libslirp.h
index a755123..890fd86 100644
--- a/slirp/libslirp.h
+++ b/slirp/libslirp.h
@@ -3,8 +3,6 @@
#include "qemu-common.h"
-#ifdef CONFIG_SLIRP
-
struct Slirp;
typedef struct Slirp Slirp;
@@ -44,13 +42,4 @@ void slirp_socket_recv(Slirp *slirp, struct in_addr guest_addr,
size_t slirp_socket_can_recv(Slirp *slirp, struct in_addr guest_addr,
int guest_port);
-#else /* !CONFIG_SLIRP */
-
-static inline void slirp_select_fill(int *pnfds, fd_set *readfds,
- fd_set *writefds, fd_set *xfds) { }
-
-static inline void slirp_select_poll(fd_set *readfds, fd_set *writefds,
- fd_set *xfds, int select_error) { }
-#endif /* !CONFIG_SLIRP */
-
#endif
diff --git a/vl.c b/vl.c
index f67baf7..633183a 100644
--- a/vl.c
+++ b/vl.c
@@ -1324,128 +1324,6 @@ void qemu_system_vmstop_request(int reason)
qemu_notify_event();
}
-static GPollFD poll_fds[1024 * 2]; /* this is probably overkill */
-static int n_poll_fds;
-static int max_priority;
-
-static void glib_select_fill(int *max_fd, fd_set *rfds, fd_set *wfds,
- fd_set *xfds, struct timeval *tv)
-{
- GMainContext *context = g_main_context_default();
- int i;
- int timeout = 0, cur_timeout;
-
- g_main_context_prepare(context, &max_priority);
-
- n_poll_fds = g_main_context_query(context, max_priority, &timeout,
- poll_fds, ARRAY_SIZE(poll_fds));
- g_assert(n_poll_fds <= ARRAY_SIZE(poll_fds));
-
- for (i = 0; i < n_poll_fds; i++) {
- GPollFD *p = &poll_fds[i];
-
- if ((p->events & G_IO_IN)) {
- FD_SET(p->fd, rfds);
- *max_fd = MAX(*max_fd, p->fd);
- }
- if ((p->events & G_IO_OUT)) {
- FD_SET(p->fd, wfds);
- *max_fd = MAX(*max_fd, p->fd);
- }
- if ((p->events & G_IO_ERR)) {
- FD_SET(p->fd, xfds);
- *max_fd = MAX(*max_fd, p->fd);
- }
- }
-
- cur_timeout = (tv->tv_sec * 1000) + ((tv->tv_usec + 500) / 1000);
- if (timeout >= 0 && timeout < cur_timeout) {
- tv->tv_sec = timeout / 1000;
- tv->tv_usec = (timeout % 1000) * 1000;
- }
-}
-
-static void glib_select_poll(fd_set *rfds, fd_set *wfds, fd_set *xfds,
- bool err)
-{
- GMainContext *context = g_main_context_default();
-
- if (!err) {
- int i;
-
- for (i = 0; i < n_poll_fds; i++) {
- GPollFD *p = &poll_fds[i];
-
- if ((p->events & G_IO_IN) && FD_ISSET(p->fd, rfds)) {
- p->revents |= G_IO_IN;
- }
- if ((p->events & G_IO_OUT) && FD_ISSET(p->fd, wfds)) {
- p->revents |= G_IO_OUT;
- }
- if ((p->events & G_IO_ERR) && FD_ISSET(p->fd, xfds)) {
- p->revents |= G_IO_ERR;
- }
- }
- }
-
- if (g_main_context_check(context, max_priority, poll_fds, n_poll_fds)) {
- g_main_context_dispatch(context);
- }
-}
-
-int main_loop_wait(int nonblocking)
-{
- fd_set rfds, wfds, xfds;
- int ret, nfds;
- struct timeval tv;
- int timeout;
-
- if (nonblocking)
- timeout = 0;
- else {
- timeout = qemu_calculate_timeout();
- qemu_bh_update_timeout(&timeout);
- }
-
- os_host_main_loop_wait(&timeout);
-
- tv.tv_sec = timeout / 1000;
- tv.tv_usec = (timeout % 1000) * 1000;
-
- /* poll any events */
- /* XXX: separate device handlers from system ones */
- nfds = -1;
- FD_ZERO(&rfds);
- FD_ZERO(&wfds);
- FD_ZERO(&xfds);
-
- qemu_iohandler_fill(&nfds, &rfds, &wfds, &xfds);
- slirp_select_fill(&nfds, &rfds, &wfds, &xfds);
- glib_select_fill(&nfds, &rfds, &wfds, &xfds, &tv);
-
- if (timeout > 0) {
- qemu_mutex_unlock_iothread();
- }
-
- ret = select(nfds + 1, &rfds, &wfds, &xfds, &tv);
-
- if (timeout > 0) {
- qemu_mutex_lock_iothread();
- }
-
- qemu_iohandler_poll(&rfds, &wfds, &xfds, ret);
- slirp_select_poll(&rfds, &wfds, &xfds, (ret < 0));
- glib_select_poll(&rfds, &wfds, &xfds, (ret < 0));
-
- qemu_run_all_timers();
-
- /* Check bottom-halves last in case any of the earlier events triggered
- them. */
- qemu_bh_poll();
-
- return ret;
-}
-
qemu_irq qemu_system_powerdown;
static void main_loop(void)
@@ -3182,6 +3060,7 @@ int main(int argc, char **argv, char **envp)
configure_accelerator();
+ qemu_init_cpu_loop();
if (qemu_init_main_loop()) {
fprintf(stderr, "qemu_init_main_loop failed\n");
exit(1);
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [Qemu-devel] [RFC PATCH 10/14] Revert to a hand-made select loop
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (8 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 09/14] create main-loop.c Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 11/14] simplify main loop functions Paolo Bonzini
` (3 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
This reverts commits c82dc29a9112f34e0a51cad9a412cf6d9d05dfb2
and 4d88a2ac8643265108ef1fb47ceee5d7b28e19f2.
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
iohandler.c | 54 +-----------------------------------------------------
1 files changed, 1 insertions(+), 53 deletions(-)
diff --git a/iohandler.c b/iohandler.c
index 4cc1c5a..4deae1e 100644
--- a/iohandler.c
+++ b/iohandler.c
@@ -80,64 +80,12 @@ int qemu_set_fd_handler2(int fd,
return 0;
}
-typedef struct IOTrampoline
-{
- GIOChannel *chan;
- IOHandler *fd_read;
- IOHandler *fd_write;
- void *opaque;
- guint tag;
-} IOTrampoline;
-
-static gboolean fd_trampoline(GIOChannel *chan, GIOCondition cond, gpointer opaque)
-{
- IOTrampoline *tramp = opaque;
-
- if ((cond & G_IO_IN) && tramp->fd_read) {
- tramp->fd_read(tramp->opaque);
- }
-
- if ((cond & G_IO_OUT) && tramp->fd_write) {
- tramp->fd_write(tramp->opaque);
- }
-
- return TRUE;
-}
-
int qemu_set_fd_handler(int fd,
IOHandler *fd_read,
IOHandler *fd_write,
void *opaque)
{
- static IOTrampoline fd_trampolines[FD_SETSIZE];
- IOTrampoline *tramp = &fd_trampolines[fd];
-
- if (tramp->tag != 0) {
- g_io_channel_unref(tramp->chan);
- g_source_remove(tramp->tag);
- tramp->tag = 0;
- }
-
- if (fd_read || fd_write || opaque) {
- GIOCondition cond = 0;
-
- tramp->fd_read = fd_read;
- tramp->fd_write = fd_write;
- tramp->opaque = opaque;
-
- if (fd_read) {
- cond |= G_IO_IN | G_IO_ERR;
- }
-
- if (fd_write) {
- cond |= G_IO_OUT | G_IO_ERR;
- }
-
- tramp->chan = g_io_channel_unix_new(fd);
- tramp->tag = g_io_add_watch(tramp->chan, cond, fd_trampoline, tramp);
- }
-
- return 0;
+ return qemu_set_fd_handler2(fd, NULL, fd_read, fd_write, opaque);
}
void qemu_iohandler_fill(int *pnfds, fd_set *readfds, fd_set *writefds, fd_set *xfds)
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [Qemu-devel] [RFC PATCH 11/14] simplify main loop functions
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (9 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 10/14] Revert to a hand-made select loop Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 12/14] makefile: extract tools-obj-y Paolo Bonzini
` (2 subsequent siblings)
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
Provide a clean example of how to use the main loop in the tools.
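For illustration only (this sketch is not part of the patch; the profiling
hooks are elided and the assignment to last_io is assumed from the unmodified
context lines), the vl.c event loop after this change boils down to:

    static void main_loop(void)
    {
        bool nonblocking;
        int last_io = 0;

        do {
            /* Poll without blocking when TCG is in use and the
               previous iteration actually handled I/O. */
            nonblocking = !kvm_enabled() && last_io > 0;
            last_io = main_loop_wait(nonblocking);
        } while (!main_loop_should_exit());
    }

with all of the shutdown/reset/powerdown/vmstop checks folded into
main_loop_should_exit().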
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 5 ----
cpus.h | 1 -
vl.c | 72 +++++++++++++++++++++++++++++++++------------------------------
3 files changed, 38 insertions(+), 40 deletions(-)
diff --git a/cpus.c b/cpus.c
index 715379a..21c6a30 100644
--- a/cpus.c
+++ b/cpus.c
@@ -628,11 +628,6 @@ void qemu_init_cpu_loop(void)
qemu_thread_get_self(&io_thread);
}
-void qemu_main_loop_start(void)
-{
- resume_all_vcpus();
-}
-
void run_on_cpu(CPUState *env, void (*func)(void *data), void *data)
{
struct qemu_work_item wi;
diff --git a/cpus.h b/cpus.h
index 2c06a2b..c085a37 100644
--- a/cpus.h
+++ b/cpus.h
@@ -3,7 +3,6 @@
/* cpus.c */
void qemu_init_cpu_loop(void);
-void qemu_main_loop_start(void);
void resume_all_vcpus(void);
void pause_all_vcpus(void);
void cpu_stop_current(void);
diff --git a/vl.c b/vl.c
index 633183a..e925c44 100644
--- a/vl.c
+++ b/vl.c
@@ -1326,18 +1326,46 @@ void qemu_system_vmstop_request(int reason)
qemu_irq qemu_system_powerdown;
+static bool main_loop_should_exit(void)
+{
+ int r;
+ if (qemu_debug_requested()) {
+ vm_stop(VMSTOP_DEBUG);
+ }
+ if (qemu_shutdown_requested()) {
+ qemu_kill_report();
+ monitor_protocol_event(QEVENT_SHUTDOWN, NULL);
+ if (no_shutdown) {
+ vm_stop(VMSTOP_SHUTDOWN);
+ } else {
+ return true;
+ }
+ }
+ if (qemu_reset_requested()) {
+ pause_all_vcpus();
+ cpu_synchronize_all_states();
+ qemu_system_reset(VMRESET_REPORT);
+ resume_all_vcpus();
+ }
+ if (qemu_powerdown_requested()) {
+ monitor_protocol_event(QEVENT_POWERDOWN, NULL);
+ qemu_irq_raise(qemu_system_powerdown);
+ }
+ r = qemu_vmstop_requested();
+ if (r) {
+ vm_stop(r);
+ }
+ return false;
+}
+
static void main_loop(void)
{
bool nonblocking;
- int last_io __attribute__ ((unused)) = 0;
+ int last_io = 0;
#ifdef CONFIG_PROFILER
int64_t ti;
#endif
- int r;
-
- qemu_main_loop_start();
-
- for (;;) {
+ do {
nonblocking = !kvm_enabled() && last_io > 0;
#ifdef CONFIG_PROFILER
ti = profile_getclock();
@@ -1346,34 +1374,7 @@ static void main_loop(void)
#ifdef CONFIG_PROFILER
dev_time += profile_getclock() - ti;
#endif
-
- if (qemu_debug_requested()) {
- vm_stop(VMSTOP_DEBUG);
- }
- if (qemu_shutdown_requested()) {
- qemu_kill_report();
- monitor_protocol_event(QEVENT_SHUTDOWN, NULL);
- if (no_shutdown) {
- vm_stop(VMSTOP_SHUTDOWN);
- } else
- break;
- }
- if (qemu_reset_requested()) {
- pause_all_vcpus();
- cpu_synchronize_all_states();
- qemu_system_reset(VMRESET_REPORT);
- resume_all_vcpus();
- }
- if (qemu_powerdown_requested()) {
- monitor_protocol_event(QEVENT_POWERDOWN, NULL);
- qemu_irq_raise(qemu_system_powerdown);
- }
- if ((r = qemu_vmstop_requested())) {
- vm_stop(r);
- }
- }
- bdrv_close_all();
- pause_all_vcpus();
+ } while (!main_loop_should_exit());
}
static void version(void)
@@ -3330,7 +3331,10 @@ int main(int argc, char **argv, char **envp)
os_setup_post();
+ resume_all_vcpus();
main_loop();
+ bdrv_close_all();
+ pause_all_vcpus();
quit_timers();
net_cleanup();
res_free();
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [Qemu-devel] [RFC PATCH 12/14] makefile: extract tools-obj-y
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (10 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 11/14] simplify main loop functions Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 13/14] link the main loop and its dependencies into the tools Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 14/14] qemu-nbd: use common main loop Paolo Bonzini
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Makefile | 9 +++++----
1 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/Makefile b/Makefile
index 7e9382f..f00afc2 100644
--- a/Makefile
+++ b/Makefile
@@ -142,11 +142,12 @@ endif
qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
-qemu-img$(EXESUF): qemu-img.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
+tools-obj-y = qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) \
+ $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
-qemu-nbd$(EXESUF): qemu-nbd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
-
-qemu-io$(EXESUF): qemu-io.o cmd.o qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
+qemu-img$(EXESUF): qemu-img.o $(tools-obj-y)
+qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y)
+qemu-io$(EXESUF): qemu-io.o cmd.o $(tools-obj-y)
qemu-img-cmds.h: $(SRC_PATH)/qemu-img-cmds.hx
$(call quiet-command,sh $(SRC_PATH)/scripts/hxtool -h < $< > $@," GEN $@")
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [Qemu-devel] [RFC PATCH 13/14] link the main loop and its dependencies into the tools
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (11 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 12/14] makefile: extract tools-obj-y Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 14/14] qemu-nbd: use common main loop Paolo Bonzini
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
This completes the refactoring and lets the tools use the main loop
code from QEMU, enabling them to operate fully asynchronously.
Advantages include better Windows portability (for some definition
of portability) than glib's main loop provides.
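As a rough illustration (not part of the patch -- the descriptor and
handler names are invented for the example), a tool that links in
main-loop.o can now drive its I/O like this:

    #include <stdint.h>
    #include "main-loop.h"

    /* Hypothetical callback: run by the main loop when my_fd is readable. */
    static void my_read_handler(void *opaque)
    {
        int fd = (intptr_t)opaque;
        /* read a request from fd and queue up work here */
    }

    static int my_tool_run(int my_fd)
    {
        if (qemu_init_main_loop() != 0) {   /* sets up eventfd/signalfd */
            return -1;
        }
        qemu_set_fd_handler(my_fd, my_read_handler, NULL,
                            (void *)(intptr_t)my_fd);
        for (;;) {
            main_loop_wait(0);              /* dispatch fd handlers, timers, BHs */
        }
    }

qemu-nbd (patch 14) uses the same pattern, with qemu_set_fd_handler2()
and an accept() handler.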
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
Makefile | 4 +++-
os-posix.c | 42 ------------------------------------------
os-win32.c | 5 -----
oslib-posix.c | 42 ++++++++++++++++++++++++++++++++++++++++++
oslib-win32.c | 5 +++++
qemu-tool.c | 45 ++++++++++++++++++++++++++-------------------
6 files changed, 76 insertions(+), 67 deletions(-)
diff --git a/Makefile b/Makefile
index f00afc2..6644259 100644
--- a/Makefile
+++ b/Makefile
@@ -143,7 +143,9 @@ qemu-img.o: qemu-img-cmds.h
qemu-img.o qemu-tool.o qemu-nbd.o qemu-io.o cmd.o qemu-ga.o: $(GENERATED_HEADERS)
tools-obj-y = qemu-tool.o qemu-error.o $(oslib-obj-y) $(trace-obj-y) \
- $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o
+ $(block-obj-y) $(qobject-obj-y) $(version-obj-y) qemu-timer-common.o \
+ qemu-timer.o main-loop.o notify.o iohandler.o
+tools-obj-$(CONFIG_POSIX) += compatfd.o
qemu-img$(EXESUF): qemu-img.o $(tools-obj-y)
qemu-nbd$(EXESUF): qemu-nbd.o $(tools-obj-y)
diff --git a/os-posix.c b/os-posix.c
index dbf3b24..c0188b3 100644
--- a/os-posix.c
+++ b/os-posix.c
@@ -42,11 +42,6 @@
#ifdef CONFIG_LINUX
#include <sys/prctl.h>
-#include <sys/syscall.h>
-#endif
-
-#ifdef CONFIG_EVENTFD
-#include <sys/eventfd.h>
#endif
static struct passwd *user_pwd;
@@ -333,34 +328,6 @@ void os_set_line_buffering(void)
setvbuf(stdout, NULL, _IOLBF, 0);
}
-/*
- * Creates an eventfd that looks like a pipe and has EFD_CLOEXEC set.
- */
-int qemu_eventfd(int fds[2])
-{
-#ifdef CONFIG_EVENTFD
- int ret;
-
- ret = eventfd(0, 0);
- if (ret >= 0) {
- fds[0] = ret;
- qemu_set_cloexec(ret);
- if ((fds[1] = dup(ret)) == -1) {
- close(ret);
- return -1;
- }
- qemu_set_cloexec(fds[1]);
- return 0;
- }
-
- if (errno != ENOSYS) {
- return -1;
- }
-#endif
-
- return qemu_pipe(fds);
-}
-
int qemu_create_pidfile(const char *filename)
{
char buffer[128];
@@ -381,12 +348,3 @@ int qemu_create_pidfile(const char *filename)
return 0;
}
-
-int qemu_get_thread_id(void)
-{
-#if defined (__linux__)
- return syscall(SYS_gettid);
-#else
- return getpid();
-#endif
-}
diff --git a/os-win32.c b/os-win32.c
index 7909401..30a96ef 100644
--- a/os-win32.c
+++ b/os-win32.c
@@ -143,8 +143,3 @@ int qemu_create_pidfile(const char *filename)
}
return 0;
}
-
-int qemu_get_thread_id(void)
-{
- return GetCurrentThreadId();
-}
diff --git a/oslib-posix.c b/oslib-posix.c
index a304fb0..34b8107 100644
--- a/oslib-posix.c
+++ b/oslib-posix.c
@@ -47,7 +47,21 @@ extern int daemon(int, int);
#include "trace.h"
#include "qemu_socket.h"
+#ifdef CONFIG_LINUX
+#include <sys/syscall.h>
+#endif
+#ifdef CONFIG_EVENTFD
+#include <sys/eventfd.h>
+#endif
+int qemu_get_thread_id(void)
+{
+#if defined(__linux__)
+ return syscall(SYS_gettid);
+#else
+ return getpid();
+#endif
+}
int qemu_daemon(int nochdir, int noclose)
{
@@ -139,6 +153,34 @@ int qemu_pipe(int pipefd[2])
return ret;
}
+/*
+ * Creates an eventfd that looks like a pipe and has EFD_CLOEXEC set.
+ */
+int qemu_eventfd(int fds[2])
+{
+#ifdef CONFIG_EVENTFD
+ int ret;
+
+ ret = eventfd(0, 0);
+ if (ret >= 0) {
+ fds[0] = ret;
+ fds[1] = dup(ret);
+ if (fds[1] == -1) {
+ close(ret);
+ return -1;
+ }
+ qemu_set_cloexec(ret);
+ qemu_set_cloexec(fds[1]);
+ return 0;
+ }
+ if (errno != ENOSYS) {
+ return -1;
+ }
+#endif
+
+ return qemu_pipe(fds);
+}
+
int qemu_utimensat(int dirfd, const char *path, const struct timespec *times,
int flags)
{
diff --git a/oslib-win32.c b/oslib-win32.c
index 5f0759f..4b5b403 100644
--- a/oslib-win32.c
+++ b/oslib-win32.c
@@ -112,3 +112,8 @@ int qemu_gettimeofday(qemu_timeval *tp)
Do not set errno on error. */
return 0;
}
+
+int qemu_get_thread_id(void)
+{
+ return GetCurrentThreadId();
+}
diff --git a/qemu-tool.c b/qemu-tool.c
index eb89fe0..5a54e10 100644
--- a/qemu-tool.c
+++ b/qemu-tool.c
@@ -15,12 +15,12 @@
#include "monitor.h"
#include "qemu-timer.h"
#include "qemu-log.h"
+#include "main-loop.h"
+#include "qemu_socket.h"
+#include "slirp/libslirp.h"
#include <sys/time.h>
-QEMUClock *rt_clock;
-QEMUClock *vm_clock;
-
FILE *logfile;
struct QEMUBH
@@ -60,39 +60,46 @@ void monitor_protocol_event(MonitorEvent event, QObject *data)
{
}
-int qemu_set_fd_handler2(int fd,
- IOCanReadHandler *fd_read_poll,
- IOHandler *fd_read,
- IOHandler *fd_write,
- void *opaque)
+int64 cpu_get_clock(void)
{
- return 0;
+ abort();
}
-void qemu_notify_event(void)
+int64 cpu_get_icount(void)
{
+ abort();
}
-QEMUTimer *qemu_new_timer(QEMUClock *clock, int scale,
- QEMUTimerCB *cb, void *opaque)
+void qemu_mutex_lock_iothread(void)
{
- return g_malloc(1);
}
-void qemu_free_timer(QEMUTimer *ts)
+void qemu_mutex_unlock_iothread(void)
{
- g_free(ts);
}
-void qemu_del_timer(QEMUTimer *ts)
+int use_icount;
+
+void qemu_clock_warp(QEMUClock *clock)
{
}
-void qemu_mod_timer(QEMUTimer *ts, int64_t expire_time)
+VMChangeStateEntry *qemu_add_vm_change_state_handler(VMChangeStateHandler *cb,
+ void *opaque)
{
+ return NULL;
}
-int64_t qemu_get_clock_ns(QEMUClock *clock)
+void qemu_del_vm_change_state_handler(VMChangeStateEntry *e)
+{
+}
+
+void slirp_select_fill(int *pnfds, fd_set *readfds,
+ fd_set *writefds, fd_set *xfds)
+{
+}
+
+void slirp_select_poll(fd_set *readfds, fd_set *writefds,
+ fd_set *xfds, int select_error)
{
- return 0;
}
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread
* [Qemu-devel] [RFC PATCH 14/14] qemu-nbd: use common main loop
2011-09-16 15:08 [Qemu-devel] [RFC PATCH 00/14] allow tools to use the QEMU main loop Paolo Bonzini
` (12 preceding siblings ...)
2011-09-16 15:09 ` [Qemu-devel] [RFC PATCH 13/14] link the main loop and its dependencies into the tools Paolo Bonzini
@ 2011-09-16 15:09 ` Paolo Bonzini
13 siblings, 0 replies; 15+ messages in thread
From: Paolo Bonzini @ 2011-09-16 15:09 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
qemu-nbd.c | 127 +++++++++++++++++++++++++++++-------------------------------
1 files changed, 61 insertions(+), 66 deletions(-)
diff --git a/qemu-nbd.c b/qemu-nbd.c
index 3a39145..a45dd4b 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -37,6 +37,13 @@
#define NBD_BUFFER_SIZE (1024*1024)
static int verbose;
+static int shared = 1;
+static int nb_fds;
+static BlockDriverState *bs;
+static off_t fd_size;
+static off_t dev_offset;
+static off_t offset;
+static bool readonly;
static void usage(const char *name)
{
@@ -180,18 +187,49 @@ static void show_parts(const char *device)
}
}
+static int nbd_can_accept(void *opaque)
+{
+ return nb_fds < shared;
+}
+
+static void nbd_read(void *opaque)
+{
+ static uint8_t *data;
+ int fd = (uintptr_t) opaque;
+
+ if (data == NULL) {
+ data = qemu_blockalign(bs, NBD_BUFFER_SIZE);
+ if (data == NULL) {
+ errx(EXIT_FAILURE, "Cannot allocate data buffer");
+ }
+ }
+
+ if (nbd_trip(bs, fd, fd_size, dev_offset,
+ &offset, readonly, data, NBD_BUFFER_SIZE) != 0) {
+ qemu_set_fd_handler2(fd, NULL, NULL, NULL, NULL);
+ close(fd);
+ nb_fds--;
+ }
+}
+
+static void nbd_accept(void *opaque)
+{
+ int server_fd = (uintptr_t) opaque;
+ struct sockaddr_in addr;
+ socklen_t addr_len = sizeof(addr);
+
+ int fd = accept(server_fd, (struct sockaddr *)&addr, &addr_len);
+ if (fd != -1 && nbd_negotiate(fd, fd_size) != -1) {
+ qemu_set_fd_handler2(fd, NULL, nbd_read, NULL, (void *) (intptr_t) fd);
+ nb_fds++;
+ }
+}
+
int main(int argc, char **argv)
{
- BlockDriverState *bs;
- off_t dev_offset = 0;
- off_t offset = 0;
- bool readonly = false;
bool disconnect = false;
const char *bindto = "0.0.0.0";
int port = NBD_DEFAULT_PORT;
- struct sockaddr_in addr;
- socklen_t addr_len = sizeof(addr);
- off_t fd_size;
char *device = NULL;
char *socket = NULL;
char sockpath[128];
@@ -221,14 +259,8 @@ int main(int argc, char **argv)
int flags = BDRV_O_RDWR;
int partition = -1;
int ret;
- int shared = 1;
- uint8_t *data;
fd_set fds;
- int *sharing_fds;
int fd;
- int i;
- int nb_fds = 0;
- int max_fd;
int persistent = 0;
uint32_t nbdflags;
@@ -431,67 +463,30 @@ int main(int argc, char **argv)
/* children */
}
- sharing_fds = g_malloc((shared + 1) * sizeof(int));
-
if (socket) {
- sharing_fds[0] = unix_socket_incoming(socket);
+ fd = unix_socket_incoming(socket);
} else {
- sharing_fds[0] = tcp_socket_incoming(bindto, port);
+ fd = tcp_socket_incoming(bindto, port);
}
- if (sharing_fds[0] == -1)
+ if (fd == -1) {
return 1;
- max_fd = sharing_fds[0];
- nb_fds++;
-
- data = qemu_blockalign(bs, NBD_BUFFER_SIZE);
- if (data == NULL)
- errx(EXIT_FAILURE, "Cannot allocate data buffer");
-
- do {
-
- FD_ZERO(&fds);
- for (i = 0; i < nb_fds; i++)
- FD_SET(sharing_fds[i], &fds);
-
- ret = select(max_fd + 1, &fds, NULL, NULL, NULL);
- if (ret == -1)
- break;
+ }
- if (FD_ISSET(sharing_fds[0], &fds))
- ret--;
- for (i = 1; i < nb_fds && ret; i++) {
- if (FD_ISSET(sharing_fds[i], &fds)) {
- if (nbd_trip(bs, sharing_fds[i], fd_size, dev_offset,
- &offset, readonly, data, NBD_BUFFER_SIZE) != 0) {
- close(sharing_fds[i]);
- nb_fds--;
- sharing_fds[i] = sharing_fds[nb_fds];
- i--;
- }
- ret--;
- }
- }
- /* new connection ? */
- if (FD_ISSET(sharing_fds[0], &fds)) {
- if (nb_fds < shared + 1) {
- sharing_fds[nb_fds] = accept(sharing_fds[0],
- (struct sockaddr *)&addr,
- &addr_len);
- if (sharing_fds[nb_fds] != -1 &&
- nbd_negotiate(sharing_fds[nb_fds], fd_size) != -1) {
- if (sharing_fds[nb_fds] > max_fd)
- max_fd = sharing_fds[nb_fds];
- nb_fds++;
- }
- }
- }
- } while (persistent || nb_fds > 1);
- qemu_vfree(data);
+ qemu_set_fd_handler2(fd, nbd_can_accept, nbd_accept, NULL,
+ (void *)(uintptr_t)fd);
+
+ /* Wait for the first incoming connection. */
+ FD_ZERO(&fds);
+ FD_SET(fd, &fds);
+ ret = select(fd + 1, &fds, NULL, NULL, NULL);
+ if (ret != -1) {
+ do {
+ main_loop_wait(false);
+ } while (persistent || nb_fds > 0);
+ }
- close(sharing_fds[0]);
bdrv_close(bs);
- g_free(sharing_fds);
if (socket)
unlink(socket);
--
1.7.6
^ permalink raw reply related [flat|nested] 15+ messages in thread