From mboxrd@z Thu Jan  1 00:00:00 1970
Message-Id: <20100217221701.041682707@redhat.com>
Date: Wed, 17 Feb 2010 20:14:41 -0200
From: Marcelo Tosatti
To: kvm@vger.kernel.org, qemu-devel@nongnu.org
Cc: Marcelo Tosatti, avi@redhat.com
Subject: [Qemu-devel] [patch uq/master 2/4] qemu: kvm specific wait_io_event
References: <20100217221439.351652889@redhat.com>
Content-Disposition: inline; filename=kvm-specific-wait-io-event
List-Id: qemu-devel.nongnu.org

In KVM mode the global mutex is released while vcpus are executing, so
acquiring the fairness mutex is not required. Also, with KVM there is
one thread per vcpu, so tcg_has_work() is meaningless.

Add a new qemu_wait_io_event_common() function to hold the code shared
between TCG and KVM.

Signed-off-by: Marcelo Tosatti

Index: qemu/vl.c
===================================================================
--- qemu.orig/vl.c
+++ qemu/vl.c
@@ -3382,6 +3382,7 @@ static QemuCond qemu_pause_cond;
 static void block_io_signals(void);
 static void unblock_io_signals(void);
 static int tcg_has_work(void);
+static int cpu_has_work(CPUState *env);
 
 static int qemu_init_main_loop(void)
 {
@@ -3402,6 +3403,15 @@ static int qemu_init_main_loop(void)
     return 0;
 }
 
+static void qemu_wait_io_event_common(CPUState *env)
+{
+    if (env->stop) {
+        env->stop = 0;
+        env->stopped = 1;
+        qemu_cond_signal(&qemu_pause_cond);
+    }
+}
+
 static void qemu_wait_io_event(CPUState *env)
 {
     while (!tcg_has_work())
@@ -3418,11 +3428,15 @@ static void qemu_wait_io_event(CPUState
     qemu_mutex_unlock(&qemu_fair_mutex);
     qemu_mutex_lock(&qemu_global_mutex);
 
-    if (env->stop) {
-        env->stop = 0;
-        env->stopped = 1;
-        qemu_cond_signal(&qemu_pause_cond);
-    }
+    qemu_wait_io_event_common(env);
+}
+
+static void qemu_kvm_wait_io_event(CPUState *env)
+{
+    while (!cpu_has_work(env))
+        qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
+
+    qemu_wait_io_event_common(env);
 }
 
 static int qemu_cpu_exec(CPUState *env);
@@ -3448,7 +3462,7 @@ static void *kvm_cpu_thread_fn(void *arg
     while (1) {
         if (cpu_can_run(env))
             qemu_cpu_exec(env);
-        qemu_wait_io_event(env);
+        qemu_kvm_wait_io_event(env);
     }
 
     return NULL;
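
For reference, the timed wait in qemu_kvm_wait_io_event() above is the
standard condition-variable wait pattern: re-check the predicate in a
loop (to cope with spurious and timed-out wakeups) with the mutex held
across the check. Below is a minimal self-contained sketch of the same
idiom in plain pthreads; the names global_mutex, halt_cond and
vcpu_has_work are illustrative stand-ins rather than QEMU symbols, and
it assumes qemu_cond_timedwait() wraps pthread_cond_timedwait() with a
millisecond timeout:

    #include <pthread.h>
    #include <time.h>

    static pthread_mutex_t global_mutex = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  halt_cond    = PTHREAD_COND_INITIALIZER;
    static int vcpu_has_work;  /* set by the I/O thread when work is pending */

    /* Must be called with global_mutex held; returns with it held,
     * just as the vcpu thread holds qemu_global_mutex in the patch.
     * Wakes up at least once per second even if never signalled. */
    static void wait_for_work(void)
    {
        while (!vcpu_has_work) {
            struct timespec ts;
            clock_gettime(CLOCK_REALTIME, &ts);
            ts.tv_sec += 1;  /* 1000ms, matching the timeout above */
            pthread_cond_timedwait(&halt_cond, &global_mutex, &ts);
        }
    }

The periodic wakeup means the vcpu thread still polls for work roughly
once a second even if no signal arrives, rather than blocking forever.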