From: Glauber Costa
Date: Thu, 16 Jul 2009 17:55:28 -0400
Message-Id: <1247781328-17249-1-git-send-email-glommer@redhat.com>
Subject: [Qemu-devel] [PATCH v3] introduce on_vcpu
To: qemu-devel@nongnu.org
Cc: Jan Kiszka, aliguori@us.ibm.com

on_vcpu() is a qemu-kvm function that makes sure a specific piece of
code runs on a requested cpu. We don't need that yet, because we're
restricted to -smp 1 right now, but those days are likely to end soon.
So for the benefit of having qemu-kvm share more code with us, I'm
introducing our own version of on_vcpu(). Right now, we either run the
function on the current cpu or abort execution, because being asked to
run on any other cpu would mean something is seriously wrong.

As example code, I "ported" kvm_update_guest_debug to use it, with some
slight differences from qemu-kvm.

This is probably 0.12 material.

Signed-off-by: Glauber Costa
CC: Jan Kiszka
---
 kvm-all.c |   35 +++++++++++++++++++++++++++++------
 1 files changed, 29 insertions(+), 6 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 61194b8..07a1cdb 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -155,6 +155,15 @@ static void kvm_reset_vcpu(void *opaque)
     }
 }
 
+static void on_vcpu(CPUState *env, void (*func)(void *data), void *data)
+{
+    if (env == cpu_single_env) {
+        func(data);
+        return;
+    }
+    abort();
+}
+
 int kvm_init_vcpu(CPUState *env)
 {
     KVMState *s = kvm_state;
@@ -892,18 +901,32 @@ int kvm_sw_breakpoints_active(CPUState *env)
     return !TAILQ_EMPTY(&env->kvm_state->kvm_sw_breakpoints);
 }
 
+struct kvm_set_guest_debug_data {
+    struct kvm_guest_debug dbg;
+    CPUState *env;
+    int err;
+};
+
+static void kvm_invoke_set_guest_debug(void *data)
+{
+    struct kvm_set_guest_debug_data *dbg_data = data;
+    dbg_data->err = kvm_vcpu_ioctl(dbg_data->env, KVM_SET_GUEST_DEBUG, &dbg_data->dbg);
+}
+
 int kvm_update_guest_debug(CPUState *env, unsigned long reinject_trap)
 {
-    struct kvm_guest_debug dbg;
+    struct kvm_set_guest_debug_data data;
 
-    dbg.control = 0;
+    data.dbg.control = 0;
     if (env->singlestep_enabled)
-        dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP;
+        data.dbg.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP;
 
-    kvm_arch_update_guest_debug(env, &dbg);
-    dbg.control |= reinject_trap;
+    kvm_arch_update_guest_debug(env, &data.dbg);
+    data.dbg.control |= reinject_trap;
+    data.env = env;
 
-    return kvm_vcpu_ioctl(env, KVM_SET_GUEST_DEBUG, &dbg);
+    on_vcpu(env, kvm_invoke_set_guest_debug, &data);
+    return data.err;
 }
 
 int kvm_insert_breakpoint(CPUState *current_env, target_ulong addr,
-- 
1.6.2.2
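
For completeness, any other synchronous vcpu ioctl can be funneled
through on_vcpu() with the same pattern kvm_update_guest_debug uses
above: pack the arguments and an error slot into a struct, hand a
void-returning trampoline to on_vcpu(), and read the result back
afterwards. The sketch below is illustration only and not part of this
patch; kvm_get_regs_data, kvm_invoke_get_regs and kvm_get_regs_on_vcpu
are made-up names (KVM_GET_REGS and kvm_vcpu_ioctl are real):

struct kvm_get_regs_data {          /* hypothetical, for illustration */
    CPUState *env;
    struct kvm_regs regs;           /* filled in on the target vcpu */
    int err;                        /* ioctl return value */
};

static void kvm_invoke_get_regs(void *data)
{
    /* trampoline: unpacks the marshalled arguments and does the work */
    struct kvm_get_regs_data *d = data;
    d->err = kvm_vcpu_ioctl(d->env, KVM_GET_REGS, &d->regs);
}

static int kvm_get_regs_on_vcpu(CPUState *env, struct kvm_regs *regs)
{
    struct kvm_get_regs_data data;

    data.env = env;
    on_vcpu(env, kvm_invoke_get_regs, &data);
    if (!data.err)
        *regs = data.regs;
    return data.err;
}

Since on_vcpu() currently either calls func(data) inline or aborts,
the call is synchronous and the struct can safely live on the caller's
stack; that property has to be preserved if on_vcpu() later learns to
cross threads.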