* [Qemu-devel] [PATCH v2 0/11]
@ 2009-12-01 12:51 Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 01/11] Don't mess with halted state Glauber Costa
2009-12-02 10:59 ` [Qemu-devel] [PATCH v2 0/11] Gleb Natapov
0 siblings, 2 replies; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
To: qemu-devel; +Cc: aliguori, avi, agraf
This is a repost of the -smp series. Note that it depends on
irqchip-in-kernel, which is already in staging. Also, you'll have to
enable the io-thread for the time being.

Since the last version, the main change is that I am no longer calling
queue_work automatically from vcpu ioctls. queue_work is currently used
only for the gdb stub. All other uses were bypassed by the new
qemu_register_reset_cpu(), since most of its uses (all racy) came from
the reset handlers.

Enjoy
^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 01/11] Don't mess with halted state.
  2009-12-01 12:51 [Qemu-devel] [PATCH v2 0/11] Glauber Costa
@ 2009-12-01 12:51 ` Glauber Costa
  2009-12-01 12:51   ` [Qemu-devel] [PATCH v2 02/11] store thread-specific env information Glauber Costa
  2009-12-02 10:59 ` [Qemu-devel] [PATCH v2 0/11] Gleb Natapov
  1 sibling, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

When the irqchip is in the kernel, the halted state is the kernel's
business, so don't initialize it in our code.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index 089fa45..e5a0bfc 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -122,10 +122,9 @@ static void kvm_apic_reset(void *opaque)
     s->dev.next_time = 0;
     s->dev.wait_for_sipi = 1;

-    s->cpu_env->halted = !(s->dev.apicbase & MSR_IA32_APICBASE_BSP);
     s->cpu_env->mp_state
-        = s->cpu_env->halted ? KVM_MP_STATE_UNINITIALIZED : KVM_MP_STATE_RUNNABLE;
+        = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;

     kvm_put_mp_state(s->cpu_env);

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 02/11] store thread-specific env information
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 01/11] Don't mess with halted state Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 03/11] update halted state on mp_state sync Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

Since we'll have multiple cpu threads, at least for kvm, we need a way
to store and retrieve the CPUState associated with the current
execution thread. For the I/O thread, this will be NULL. I am using
pthread functions for that, for portability, but we could as well use
the __thread keyword.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 vl.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/vl.c b/vl.c
index 44763af..321b18d 100644
--- a/vl.c
+++ b/vl.c
@@ -3434,6 +3434,24 @@ static void block_io_signals(void);
 static void unblock_io_signals(void);
 static int tcg_has_work(void);

+static pthread_key_t current_env;
+
+CPUState *qemu_get_current_env(void);
+CPUState *qemu_get_current_env(void)
+{
+    return pthread_getspecific(current_env);
+}
+
+static void qemu_set_current_env(CPUState *env)
+{
+    pthread_setspecific(current_env, env);
+}
+
+static void qemu_init_current_env(void)
+{
+    pthread_key_create(&current_env, NULL);
+}
+
 static int qemu_init_main_loop(void)
 {
     int ret;
@@ -3446,6 +3464,7 @@ static int qemu_init_main_loop(void)
     qemu_mutex_init(&qemu_fair_mutex);
     qemu_mutex_init(&qemu_global_mutex);
     qemu_mutex_lock(&qemu_global_mutex);
+    qemu_init_current_env();

     unblock_io_signals();
     qemu_thread_self(&io_thread);
@@ -3484,6 +3503,8 @@ static void *kvm_cpu_thread_fn(void *arg)
     block_io_signals();
     qemu_thread_self(env->thread);

+    qemu_set_current_env(env);
+
     if (kvm_enabled())
         kvm_init_vcpu(env);

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
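[Editorially, the per-thread lookup the patch adds is the classic pthread TLS-key pattern. A minimal, self-contained sketch of that pattern follows; the `CPUStateStub` type and all function names are illustrative stand-ins, not QEMU's actual code:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Hypothetical stand-in for QEMU's CPUState: each vcpu thread stores a
 * pointer to its own state; a thread that never calls set_current_env()
 * (like the I/O thread in the patch) reads back NULL. */
typedef struct CPUStateStub { int cpu_index; } CPUStateStub;

CPUStateStub cpu0 = { 0 };

static pthread_key_t current_env_key;
static pthread_once_t current_env_once = PTHREAD_ONCE_INIT;

static void init_key(void) { pthread_key_create(&current_env_key, NULL); }

void set_current_env(CPUStateStub *env)
{
    pthread_once(&current_env_once, init_key);
    pthread_setspecific(current_env_key, env);
}

CPUStateStub *get_current_env(void)
{
    pthread_once(&current_env_once, init_key);
    return pthread_getspecific(current_env_key);
}

/* Runs on a separate thread: stores its env, then reads it back. */
static void *vcpu_thread(void *arg)
{
    set_current_env(arg);
    return get_current_env();
}

CPUStateStub *run_on_new_thread(CPUStateStub *env)
{
    pthread_t t;
    void *ret = NULL;
    pthread_create(&t, NULL, vcpu_thread, env);
    pthread_join(t, &ret);
    return ret;
}
```

The key property, which the patch relies on, is that the same `pthread_getspecific()` call returns a different value per thread, so each vcpu thread sees only its own env.]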
* [Qemu-devel] [PATCH v2 03/11] update halted state on mp_state sync
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 02/11] store thread-specific env information Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

If we are using the in-kernel irqchip, the halted state belongs to the
kernel. So every time we grab the kernel's idea of mp_state, we also
need to propagate the halted state to userspace.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 target-i386/kvm.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 78f6d65..1c4f660 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -825,6 +825,11 @@ static int kvm_get_mp_state(CPUState *env)
         return ret;
     }
     env->mp_state = mp_state.mp_state;
+
+    if (kvm_irqchip_in_kernel()) {
+        env->halted = (env->mp_state == KVM_MP_STATE_HALTED);
+    }
+
     return 0;
 }

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 03/11] update halted state on mp_state sync Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Glauber Costa
  2009-12-02 13:27     ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Marcelo Tosatti
  0 siblings, 2 replies; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

This function is similar to qemu-kvm's on_vcpu mechanism. It is totally
synchronous, and guarantees that a given function will be executed on
the specified vcpu.

This patch also converts usage within the breakpoints system.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 cpu-all.h  |    3 ++
 cpu-defs.h |   14 +++++++++++++
 kvm-all.c  |   17 +--------------
 vl.c       |   61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 4 files changed, 75 insertions(+), 20 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index e214374..8270d43 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -763,6 +763,9 @@ extern CPUState *cpu_single_env;
 extern int64_t qemu_icount;
 extern int use_icount;

+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data);
+void qemu_flush_work(CPUState *env);
+
 #define CPU_INTERRUPT_HARD   0x02 /* hardware interrupt pending */
 #define CPU_INTERRUPT_EXITTB 0x04 /* exit the current TB (use for x86 a20 case) */
 #define CPU_INTERRUPT_TIMER  0x08 /* internal timer exception pending */
diff --git a/cpu-defs.h b/cpu-defs.h
index 95068b5..18792fc 100644
--- a/cpu-defs.h
+++ b/cpu-defs.h
@@ -31,6 +31,8 @@
 #include "qemu-queue.h"
 #include "targphys.h"

+#include "qemu-thread.h"
+
 #ifndef TARGET_LONG_BITS
 #error TARGET_LONG_BITS must be defined before including this header
 #endif
@@ -134,6 +136,13 @@ typedef struct CPUWatchpoint {
     QTAILQ_ENTRY(CPUWatchpoint) entry;
 } CPUWatchpoint;

+typedef struct QemuWorkItem {
+    void (*func)(void *data);
+    void *data;
+    int done;
+} QemuWorkItem;
+
+
 #define CPU_TEMP_BUF_NLONGS 128
 #define CPU_COMMON                                                      \
     struct TranslationBlock *current_tb; /* currently executing TB  */  \
@@ -175,6 +184,10 @@ typedef struct CPUWatchpoint {
     QTAILQ_HEAD(watchpoints_head, CPUWatchpoint) watchpoints;           \
     CPUWatchpoint *watchpoint_hit;                                      \
                                                                         \
+    QemuWorkItem queued_work;                                           \
+    uint64_t queued_local, queued_total;                                \
+    struct QemuMutex queue_lock;                                        \
+                                                                        \
     struct GDBRegisterState *gdb_regs;                                  \
                                                                         \
     /* Core interrupt code */                                           \
@@ -194,6 +207,7 @@ typedef struct CPUWatchpoint {
     uint32_t created;                                                   \
     struct QemuThread *thread;                                          \
     struct QemuCond *halt_cond;                                         \
+    struct QemuCond work_cond;                                          \
     const char *cpu_model_str;                                          \
     struct KVMState *kvm_state;                                         \
     struct kvm_run *kvm_run;                                            \
diff --git a/kvm-all.c b/kvm-all.c
index 0c0b3c3..40203f0 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -636,7 +636,7 @@ int kvm_cpu_exec(CPUState *env)
     struct kvm_run *run = env->kvm_run;
     int ret;

-    dprintf("kvm_cpu_exec()\n");
+    dprintf("kvm_cpu_exec() %d\n", env->cpu_index);

     do {
         if (env->exit_request) {
@@ -951,19 +951,6 @@ void kvm_setup_guest_memory(void *start, size_t size)
 }

 #ifdef KVM_CAP_SET_GUEST_DEBUG
-static void on_vcpu(CPUState *env, void (*func)(void *data), void *data)
-{
-#ifdef CONFIG_IOTHREAD
-    if (env == cpu_single_env) {
-        func(data);
-        return;
-    }
-    abort();
-#else
-    func(data);
-#endif
-}
-
 struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *env,
                                                  target_ulong pc)
 {
@@ -1011,7 +998,7 @@ int kvm_update_guest_debug(CPUState *env, unsigned long reinject_trap)
         data.dbg.control |= reinject_trap;
     data.env = env;

-    on_vcpu(env, kvm_invoke_set_guest_debug, &data);
+    qemu_queue_work(env, kvm_invoke_set_guest_debug, &data);
     return data.err;
 }
diff --git a/vl.c b/vl.c
index 321b18d..c7b46a9 100644
--- a/vl.c
+++ b/vl.c
@@ -3403,6 +3403,11 @@ void qemu_notify_event(void)
     }
 }

+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
+{
+    func(data);
+}
+
 void qemu_mutex_lock_iothread(void) {}
 void qemu_mutex_unlock_iothread(void) {}

@@ -3436,8 +3441,7 @@ static int tcg_has_work(void);

 static pthread_key_t current_env;

-CPUState *qemu_get_current_env(void);
-CPUState *qemu_get_current_env(void)
+static CPUState *qemu_get_current_env(void)
 {
     return pthread_getspecific(current_env);
 }
@@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void)

 static void qemu_wait_io_event(CPUState *env)
 {
-    while (!tcg_has_work())
+    while (!tcg_has_work()) {
+        qemu_flush_work(env);
         qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
+    }

     qemu_mutex_unlock(&qemu_global_mutex);

@@ -3488,6 +3494,7 @@ static void qemu_wait_io_event(CPUState *env)
     qemu_mutex_unlock(&qemu_fair_mutex);

     qemu_mutex_lock(&qemu_global_mutex);
+
     if (env->stop) {
         env->stop = 0;
         env->stopped = 1;
@@ -3514,8 +3521,11 @@ static void *kvm_cpu_thread_fn(void *arg)
     qemu_cond_signal(&qemu_cpu_cond);

     /* and wait for machine initialization */
-    while (!qemu_system_ready)
+    while (!qemu_system_ready) {
+        /* system reset and initialization is a heavy caller of queue_work() */
+        qemu_flush_work(env);
         qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+    }

     while (1) {
         if (cpu_can_run(env))
@@ -3542,8 +3552,9 @@ static void *tcg_cpu_thread_fn(void *arg)
     qemu_cond_signal(&qemu_cpu_cond);

     /* and wait for machine initialization */
-    while (!qemu_system_ready)
+    while (!qemu_system_ready) {
         qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+    }

     while (1) {
         tcg_cpu_exec();
@@ -3561,6 +3572,44 @@ void qemu_cpu_kick(void *_env)
         qemu_thread_signal(env->thread, SIGUSR1);
 }

+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
+{
+    QemuWorkItem *wii;
+
+    env->queued_total++;
+
+    if (env == qemu_get_current_env()) {
+        env->queued_local++;
+        func(data);
+        return;
+    }
+
+    wii = &env->queued_work;
+    wii->func = func;
+    wii->data = data;
+    wii->done = 0;
+
+    qemu_thread_signal(env->thread, SIGUSR1);
+
+    while (!wii->done) {
+        qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
+    }
+
+    wii->func = NULL;
+}
+
+void qemu_flush_work(CPUState *env)
+{
+    QemuWorkItem *wi;
+
+    wi = &env->queued_work;
+    if (wi->func) {
+        wi->func(wi->data);
+        wi->done = 1;
+    }
+    qemu_cond_broadcast(&env->work_cond);
+}
+
 int qemu_cpu_self(void *_env)
 {
     CPUState *env = _env;
@@ -3719,6 +3768,8 @@ void qemu_init_vcpu(void *_env)
 {
     CPUState *env = _env;

+    qemu_cond_init(&env->work_cond);
+
     if (kvm_enabled())
         kvm_start_vcpu(env);
     else

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
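[Editorially, the hand-off the patch implements — publish a work item, kick the target thread, block on a condition variable until the item is marked done — can be sketched with plain pthreads. The sketch below mirrors the patch's `QemuWorkItem`/`qemu_queue_work`/`qemu_flush_work` trio in simplified form; all names (`Worker`, `queue_work`, `run_demo`) are illustrative, not QEMU's:

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Single-slot synchronous work queue. The caller publishes func/data,
 * wakes the worker, and waits until the worker flags the item done. */
typedef struct WorkItem {
    void (*func)(void *data);
    void *data;
    int done;
} WorkItem;

typedef struct Worker {
    pthread_mutex_t lock;
    pthread_cond_t work_cond;   /* signalled when an item completes */
    pthread_cond_t kick_cond;   /* signalled when an item is queued */
    WorkItem item;
    int stop;
} Worker;

static void *worker_fn(void *arg)
{
    Worker *w = arg;
    pthread_mutex_lock(&w->lock);
    while (!w->stop) {
        if (w->item.func) {             /* flush any pending work */
            w->item.func(w->item.data);
            w->item.func = NULL;
            w->item.done = 1;
            pthread_cond_broadcast(&w->work_cond);
        }
        pthread_cond_wait(&w->kick_cond, &w->lock);
    }
    pthread_mutex_unlock(&w->lock);
    return NULL;
}

/* Fully synchronous, like the patch's qemu_queue_work(). */
void queue_work(Worker *w, void (*func)(void *), void *data)
{
    pthread_mutex_lock(&w->lock);
    w->item.func = func;
    w->item.data = data;
    w->item.done = 0;
    pthread_cond_signal(&w->kick_cond);  /* the "vcpu kick" */
    while (!w->item.done)
        pthread_cond_wait(&w->work_cond, &w->lock);
    pthread_mutex_unlock(&w->lock);
}

static void bump(void *data) { ++*(int *)data; }

int run_demo(void)
{
    Worker w = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                 PTHREAD_COND_INITIALIZER, { NULL, NULL, 0 }, 0 };
    pthread_t t;
    int counter = 0;
    pthread_create(&t, NULL, worker_fn, &w);
    queue_work(&w, bump, &counter);
    queue_work(&w, bump, &counter);
    pthread_mutex_lock(&w.lock);
    w.stop = 1;
    pthread_cond_signal(&w.kick_cond);
    pthread_mutex_unlock(&w.lock);
    pthread_join(t, NULL);
    return counter;
}
```

Note the worker checks for pending work *before* sleeping on `kick_cond`; that ordering is what makes a kick sent before the worker starts waiting safe, the same reason the patch calls `qemu_flush_work()` inside its wait loops.]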
* [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 06/11] flush state in migration post_load Glauber Costa
  2009-12-02 10:48     ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Gleb Natapov
  2009-12-02 13:27   ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Marcelo Tosatti
  1 sibling, 2 replies; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

This fixes a bug with -smp in kvm. Since we have updated apic_base, we
also have to tell the kernel about it. So instead of just updating
mp_state, update all registers.

It is mandatory that this happens synchronously, without waiting for
the next vcpu run. Otherwise, if we are migrating, or initializing the
cpu's APIC, other cpus can still see an invalid state.

Since putting registers already happens on vcpu entry, we factor out
the required code into cpu_flush_state().

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    5 ++++-
 kvm-all.c     |   14 +++++++++-----
 kvm.h         |    8 ++++++++
 3 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index e5a0bfc..9e9790f 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -126,7 +126,10 @@ static void kvm_apic_reset(void *opaque)
     s->cpu_env->mp_state
         = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;

-    kvm_put_mp_state(s->cpu_env);
+    /* We have to tell the kernel about mp_state, but also save sregs, since
+     * apic base was just updated
+     */
+    cpu_flush_state(s->cpu_env);

     if (bsp) {
         /*
diff --git a/kvm-all.c b/kvm-all.c
index 40203f0..318a4e6 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -622,7 +622,6 @@ static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run)
     }
 #endif
 }
-
 void kvm_cpu_synchronize_state(CPUState *env)
 {
     if (!env->kvm_state->regs_modified) {
@@ -631,6 +630,14 @@ void kvm_cpu_synchronize_state(CPUState *env)
     }
 }

+void kvm_cpu_flush_state(CPUState *env)
+{
+    if (env->kvm_state->regs_modified) {
+        kvm_arch_put_registers(env);
+        env->kvm_state->regs_modified = 0;
+    }
+}
+
 int kvm_cpu_exec(CPUState *env)
 {
     struct kvm_run *run = env->kvm_run;
@@ -645,10 +652,7 @@ int kvm_cpu_exec(CPUState *env)
             break;
         }

-        if (env->kvm_state->regs_modified) {
-            kvm_arch_put_registers(env);
-            env->kvm_state->regs_modified = 0;
-        }
+        kvm_cpu_flush_state(env);

         kvm_arch_pre_run(env, run);
         qemu_mutex_unlock_iothread();
diff --git a/kvm.h b/kvm.h
index a474d95..d9af176 100644
--- a/kvm.h
+++ b/kvm.h
@@ -139,6 +139,7 @@ int kvm_check_extension(KVMState *s, unsigned int extension);
 uint32_t kvm_arch_get_supported_cpuid(CPUState *env, uint32_t function,
                                       int reg);
 void kvm_cpu_synchronize_state(CPUState *env);
+void kvm_cpu_flush_state(CPUState *env);

 /* generic hooks - to be moved/refactored once there are more users */

@@ -149,4 +150,11 @@ static inline void cpu_synchronize_state(CPUState *env)
     }
 }

+static inline void cpu_flush_state(CPUState *env)
+{
+    if (kvm_enabled()) {
+        kvm_cpu_flush_state(env);
+    }
+}
+
 #endif

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
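[Editorially, the `regs_modified` dance behind `kvm_cpu_synchronize_state()` / `kvm_cpu_flush_state()` is a dirty-flag cache protocol: a read pulls the authoritative copy in and marks the cache dirty; an explicit flush (or the next vcpu entry) writes it back exactly once. A minimal stand-alone sketch, with all names illustrative rather than QEMU's:

```c
#include <assert.h>
#include <string.h>

/* Userspace register cache with a dirty flag, mirroring the protocol
 * the patch factors into kvm_cpu_flush_state(). */
typedef struct RegCache {
    int regs[4];        /* userspace copy */
    int kernel_regs[4]; /* stands in for the in-kernel state */
    int regs_modified;  /* "userspace copy is authoritative" */
    int put_count;      /* how many write-backs actually happened */
} RegCache;

void synchronize_state(RegCache *c)
{
    if (!c->regs_modified) {
        memcpy(c->regs, c->kernel_regs, sizeof(c->regs)); /* "get registers" */
        c->regs_modified = 1;
    }
}

void flush_state(RegCache *c)
{
    if (c->regs_modified) {
        memcpy(c->kernel_regs, c->regs, sizeof(c->regs)); /* "put registers" */
        c->put_count++;
        c->regs_modified = 0;
    }
}

int demo(void)
{
    RegCache c = {{0}, {1, 2, 3, 4}, 0, 0};
    synchronize_state(&c);
    c.regs[0] = 42;       /* e.g. reset updates the apic base */
    flush_state(&c);      /* synchronous write-back, as the patch requires */
    flush_state(&c);      /* second flush is a no-op: state already clean */
    return c.kernel_regs[0] * 100 + c.put_count;
}
```

The point of the synchronous flush in the patch is exactly the first `flush_state()` call here: without it, the modified copy would sit in userspace until the next vcpu entry, during which other vcpus could observe stale kernel state.]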
* [Qemu-devel] [PATCH v2 06/11] flush state in migration post_load
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 07/11] Don't call kvm cpu reset on initialization Glauber Costa
  2009-12-02 10:48     ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Gleb Natapov
  1 sibling, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

This has already been identified in qemu-kvm. We have to synchronously
tell the kernel about the APIC state. Otherwise, other cpus can see
bogus state for this lapic.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index 9e9790f..b2864b6 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -73,6 +73,8 @@ static int apic_post_load(void *opaque, int version_id)
 {
     APICState *s = opaque;

+    cpu_flush_state(s->cpu_env);
+
     return kvm_set_lapic(s->cpu_env, &s->kvm_lapic_state);
 }

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 07/11] Don't call kvm cpu reset on initialization
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 06/11] flush state in migration post_load Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 08/11] use cpu_kick instead of direct signalling Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

All reset functions are called from the same place, and this was a
leftover.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 kvm-all.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 318a4e6..ecd1102 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -204,8 +204,6 @@ int kvm_init_vcpu(CPUState *env)
     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
         qemu_register_reset(kvm_reset_vcpu, env);
-        kvm_arch_reset_vcpu(env);
-        ret = kvm_arch_put_registers(env);
     }
 err:
     return ret;

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 08/11] use cpu_kick instead of direct signalling.
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 07/11] Don't call kvm cpu reset on initialization Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 09/11] Use per-cpu reset handlers Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

Before signalling a cpu, we have to set exit_request = 1, otherwise it
may go back to executing itself. So every cpu wakeup becomes at least
two statements. qemu_cpu_kick already provides these semantics, so use
it everywhere.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 vl.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/vl.c b/vl.c
index c7b46a9..97446fc 100644
--- a/vl.c
+++ b/vl.c
@@ -3568,6 +3568,7 @@ void qemu_cpu_kick(void *_env)
 {
     CPUState *env = _env;
     qemu_cond_broadcast(env->halt_cond);
+    env->exit_request = 1;
     if (kvm_enabled())
         qemu_thread_signal(env->thread, SIGUSR1);
 }
@@ -3589,7 +3590,7 @@ void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
     wii->data = data;
     wii->done = 0;

-    qemu_thread_signal(env->thread, SIGUSR1);
+    qemu_cpu_kick(env);

     while (!wii->done) {
         qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
@@ -3716,7 +3717,7 @@ static void pause_all_vcpus(void)
         qemu_cond_timedwait(&qemu_pause_cond, &qemu_global_mutex, 100);
         penv = first_cpu;
         while (penv) {
-            qemu_thread_signal(penv->thread, SIGUSR1);
+            qemu_cpu_kick(penv);
             penv = (CPUState *)penv->next_cpu;
         }
     }
@@ -3729,7 +3730,6 @@ static void resume_all_vcpus(void)
     while (penv) {
         penv->stop = 0;
         penv->stopped = 0;
-        qemu_thread_signal(penv->thread, SIGUSR1);
         qemu_cpu_kick(penv);
         penv = (CPUState *)penv->next_cpu;
     }

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* [Qemu-devel] [PATCH v2 09/11] Use per-cpu reset handlers.
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 08/11] use cpu_kick instead of direct signalling Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 10/11] Use __thread where available Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

The proposal in this patch is to add a system_reset caller that only
resets state related to the cpu. This guarantees that those functions
are called from the cpu threads, not the I/O thread. In principle, it
might seem close to the remote execution mechanism, but:
 * It does not involve any extra signalling, so it should be faster.
 * The cpu is guaranteed to be stopped, so it is much, much less racy.
 * What runs where becomes more explicit.

The previous implementation was giving me races on reset. This one
makes it work flawlessly w.r.t. reset.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 cpu-defs.h    |    2 ++
 exec-all.h    |   12 ++++++++++++
 exec.c        |   32 ++++++++++++++++++++++++++++++++
 hw/apic-kvm.c |   16 ++++++++--------
 kvm-all.c     |    7 +++----
 vl.c          |   10 ++++++++++
 6 files changed, 67 insertions(+), 12 deletions(-)

diff --git a/cpu-defs.h b/cpu-defs.h
index 18792fc..37fd11c 100644
--- a/cpu-defs.h
+++ b/cpu-defs.h
@@ -185,6 +185,8 @@ typedef struct QemuWorkItem {
     CPUWatchpoint *watchpoint_hit;                                      \
                                                                         \
     QemuWorkItem queued_work;                                           \
+    int reset_requested;                                                \
+    QTAILQ_HEAD(reset_head, CPUResetEntry) reset_handlers;              \
     uint64_t queued_local, queued_total;                                \
     struct QemuMutex queue_lock;                                        \
                                                                         \
diff --git a/exec-all.h b/exec-all.h
index 820b59e..5acf363 100644
--- a/exec-all.h
+++ b/exec-all.h
@@ -78,6 +78,18 @@ void cpu_io_recompile(CPUState *env, void *retaddr);
 TranslationBlock *tb_gen_code(CPUState *env,
                               target_ulong pc, target_ulong cs_base, int flags,
                               int cflags);
+
+typedef void CPUResetHandler(CPUState *env);
+typedef struct CPUResetEntry {
+    QTAILQ_ENTRY(CPUResetEntry) entry;
+    CPUResetHandler *func;
+    void *opaque;
+} CPUResetEntry;
+
+void qemu_register_reset_cpu(CPUState *env, CPUResetHandler *func);
+void qemu_unregister_reset_cpu(CPUState *env, CPUResetHandler *func);
+
+void qemu_cpu_reset(CPUState *env);
 void cpu_exec_init(CPUState *env);
 void QEMU_NORETURN cpu_loop_exit(void);
 int page_unprotect(target_ulong address, unsigned long pc, void *puc);
diff --git a/exec.c b/exec.c
index eb1ee51..e06d862 100644
--- a/exec.c
+++ b/exec.c
@@ -588,6 +588,7 @@ void cpu_exec_init(CPUState *env)
     env->numa_node = 0;
     QTAILQ_INIT(&env->breakpoints);
     QTAILQ_INIT(&env->watchpoints);
+    QTAILQ_INIT(&env->reset_handlers);
     *penv = env;
 #if defined(CONFIG_USER_ONLY)
     cpu_list_unlock();
@@ -599,6 +600,37 @@ void cpu_exec_init(CPUState *env)
 #endif
 }

+void qemu_register_reset_cpu(CPUState *env, CPUResetHandler *func)
+{
+    CPUResetEntry *re = qemu_mallocz(sizeof(CPUResetEntry));
+
+    re->func = func;
+    re->opaque = env;
+    QTAILQ_INSERT_TAIL(&env->reset_handlers, re, entry);
+}
+
+void qemu_unregister_reset_cpu(CPUState *env, CPUResetHandler *func)
+{
+    CPUResetEntry *re;
+
+    QTAILQ_FOREACH(re, &env->reset_handlers, entry) {
+        if (re->func == func && re->opaque == env) {
+            QTAILQ_REMOVE(&env->reset_handlers, re, entry);
+            qemu_free(re);
+            return;
+        }
+    }
+}
+
+void qemu_cpu_reset(CPUState *env)
+{
+    CPUResetEntry *re, *nre;
+    /* reset all devices */
+    QTAILQ_FOREACH_SAFE(re, &env->reset_handlers, entry, nre) {
+        re->func(re->opaque);
+    }
+}
+
 static inline void invalidate_page_bitmap(PageDesc *p)
 {
     if (p->code_bitmap) {
diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index b2864b6..91a77d0 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -91,20 +91,20 @@ static const VMStateDescription vmstate_apic = {
     .post_load = apic_post_load,
 };

-static void kvm_apic_reset(void *opaque)
+static void kvm_apic_reset(CPUState *env)
 {
-    APICState *s = opaque;
+    APICState *s = env->apic_state;
     int bsp;
     int i;

-    cpu_synchronize_state(s->cpu_env);
+    cpu_synchronize_state(env);

-    bsp = cpu_is_bsp(s->cpu_env);
+    bsp = cpu_is_bsp(env);
     s->dev.apicbase = 0xfee00000 |
         (bsp ? MSR_IA32_APICBASE_BSP : 0) | MSR_IA32_APICBASE_ENABLE;

-    cpu_reset(s->cpu_env);
+    cpu_reset(env);

     s->dev.tpr = 0;
     s->dev.spurious_vec = 0xff;
@@ -125,7 +125,7 @@ static void kvm_apic_reset(void *opaque)

     s->dev.wait_for_sipi = 1;

-    s->cpu_env->mp_state
+    env->mp_state
         = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;

     /* We have to tell the kernel about mp_state, but also save sregs, since
@@ -141,7 +141,7 @@ static void kvm_apic_reset(void *opaque)
          */
         s->dev.lvt[APIC_LVT_LINT0] = 0x700;
     }
-    kvm_set_lapic(s->cpu_env, &s->kvm_lapic_state);
+    kvm_set_lapic(env, &s->kvm_lapic_state);
 }

 int kvm_apic_init(CPUState *env)
@@ -155,7 +155,7 @@ int kvm_apic_init(CPUState *env)
     msix_supported = 1;

     vmstate_register(s->dev.id, &vmstate_apic, s);
-    qemu_register_reset(kvm_apic_reset, s);
+    qemu_register_reset_cpu(env, kvm_apic_reset);

     return 0;
 }
diff --git a/kvm-all.c b/kvm-all.c
index ecd1102..286d0ee 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -21,6 +21,7 @@
 #include <linux/kvm.h>

 #include "qemu-common.h"
+#include "exec-all.h"
 #include "sysemu.h"
 #include "hw/hw.h"
 #include "gdbstub.h"
@@ -148,10 +149,8 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }

-static void kvm_reset_vcpu(void *opaque)
+static void kvm_reset_vcpu(CPUState *env)
 {
-    CPUState *env = opaque;
-
     kvm_arch_reset_vcpu(env);
     if (kvm_arch_put_registers(env)) {
         fprintf(stderr, "Fatal: kvm vcpu reset failed\n");
@@ -203,7 +202,7 @@ int kvm_init_vcpu(CPUState *env)

     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
-        qemu_register_reset(kvm_reset_vcpu, env);
+        qemu_register_reset_cpu(env, kvm_reset_vcpu);
     }
 err:
     return ret;
diff --git a/vl.c b/vl.c
index 97446fc..ad2e7d6 100644
--- a/vl.c
+++ b/vl.c
@@ -3232,11 +3232,17 @@ void qemu_unregister_reset(QEMUResetHandler *func, void *opaque)
 void qemu_system_reset(void)
 {
     QEMUResetEntry *re, *nre;
+    CPUState *penv = first_cpu;

     /* reset all devices */
     QTAILQ_FOREACH_SAFE(re, &reset_handlers, entry, nre) {
         re->func(re->opaque);
     }
+
+    while (penv) {
+        penv->reset_requested = 1;
+        penv = penv->next_cpu;
+    }
 }

 void qemu_system_reset_request(void)
@@ -3528,6 +3534,10 @@ static void *kvm_cpu_thread_fn(void *arg)
     }

     while (1) {
+        if (env->reset_requested) {
+            qemu_cpu_reset(env);
+            env->reset_requested = 0;
+        }
         if (cpu_can_run(env))
             qemu_cpu_exec(env);
         qemu_wait_io_event(env);

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
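[Editorially, the per-cpu registry the patch builds with `QTAILQ` — append handlers per cpu, run them in registration order when that cpu resets — can be shown with a plain linked list. Everything below (`CPUStub`, `register_reset_cpu`, `reset_demo`) is an illustrative stand-in, not QEMU's code:

```c
#include <assert.h>
#include <stdlib.h>

/* Per-cpu reset-handler registry: handlers are appended per cpu and run
 * in registration order, like the patch's qemu_register_reset_cpu() /
 * qemu_cpu_reset() pair. */
typedef struct CPUStub CPUStub;
typedef void ResetHandler(CPUStub *cpu);

typedef struct ResetEntry {
    ResetHandler *func;
    struct ResetEntry *next;
} ResetEntry;

struct CPUStub {
    int value;
    ResetEntry *handlers;
    ResetEntry **tail;      /* append point, to keep registration order */
};

void cpu_init_stub(CPUStub *cpu)
{
    cpu->value = 0;
    cpu->handlers = NULL;
    cpu->tail = &cpu->handlers;
}

void register_reset_cpu(CPUStub *cpu, ResetHandler *func)
{
    ResetEntry *re = calloc(1, sizeof(*re));
    re->func = func;
    *cpu->tail = re;
    cpu->tail = &re->next;
}

void cpu_reset_stub(CPUStub *cpu)
{
    for (ResetEntry *re = cpu->handlers; re; re = re->next)
        re->func(cpu);
}

/* Two toy handlers whose effects encode their execution order. */
static void apic_reset(CPUStub *cpu) { cpu->value = cpu->value * 10 + 1; }
static void vcpu_reset(CPUStub *cpu) { cpu->value = cpu->value * 10 + 2; }

int reset_demo(void)
{
    CPUStub cpu;
    cpu_init_stub(&cpu);
    register_reset_cpu(&cpu, apic_reset);
    register_reset_cpu(&cpu, vcpu_reset);
    cpu_reset_stub(&cpu);   /* runs apic_reset, then vcpu_reset */
    return cpu.value;       /* 12: handlers ran in registration order */
}
```

What the patch adds on top of this sketch is the dispatch site: each vcpu thread checks its own `reset_requested` flag in its run loop and calls the per-cpu reset itself, which is how the handlers end up on the cpu thread rather than the I/O thread.]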
* [Qemu-devel] [PATCH v2 10/11] Use __thread where available.
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 09/11] Use per-cpu reset handlers Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  2009-12-01 12:51     ` [Qemu-devel] [PATCH v2 11/11] remove smp restriction from kvm Glauber Costa
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

It is much faster than pthread_{g,s}etspecific.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 configure |   17 +++++++++++++++++
 vl.c      |   16 ++++++++++++++++
 2 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/configure b/configure
index dca5a43..ce7bcc4 100755
--- a/configure
+++ b/configure
@@ -233,6 +233,7 @@ blobs="yes"
 pkgversion=""
 check_utests="no"
 user_pie="no"
+tls_thread="yes"

 # OS specific
 if check_define __linux__ ; then
@@ -1017,6 +1018,19 @@ EOF
 fi

 ##########################################
+# Check for availability of the __thread keyword
+cat > $TMPC <<EOF
+__thread void *ptr;
+int main(void) { return 0;}
+EOF
+
+if compile_prog; then
+  tls_thread="yes"
+else
+  tls_thread="no"
+fi
+
+##########################################
 # VNC TLS detection
 if test "$vnc_tls" != "no" ; then
   cat > $TMPC <<EOF
@@ -1969,6 +1983,9 @@ echo "CONFIG_BDRV_WHITELIST=$block_drv_whitelist" >> $config_host_mak
 if test "$mixemu" = "yes" ; then
   echo "CONFIG_MIXEMU=y" >> $config_host_mak
 fi
+if test "$tls_thread" = "yes" ; then
+  echo "CONFIG_TLS_HAS_THREAD=y" >> $config_host_mak
+fi
 if test "$vnc_tls" = "yes" ; then
   echo "CONFIG_VNC_TLS=y" >> $config_host_mak
   echo "VNC_TLS_CFLAGS=$vnc_tls_cflags" >> $config_host_mak
diff --git a/vl.c b/vl.c
index ad2e7d6..16a41e3 100644
--- a/vl.c
+++ b/vl.c
@@ -3445,21 +3445,37 @@ static void block_io_signals(void);
 static void unblock_io_signals(void);
 static int tcg_has_work(void);

+#ifdef CONFIG_TLS_HAS_THREAD
+static __thread CPUState *current_env;
+#else
 static pthread_key_t current_env;
+#endif

 static CPUState *qemu_get_current_env(void)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    return current_env;
+#else
     return pthread_getspecific(current_env);
+#endif
 }

 static void qemu_set_current_env(CPUState *env)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    current_env = env;
+#else
     pthread_setspecific(current_env, env);
+#endif
 }

 static void qemu_init_current_env(void)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    current_env = NULL;
+#else
     pthread_key_create(&current_env, NULL);
+#endif
 }

 static int qemu_init_main_loop(void)

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
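[Editorially, the `__thread` keyword the configure probe tests for gives each thread its own instance of the variable with a plain load/store instead of a function call. A small self-contained demonstration (GCC-style TLS; assumes a toolchain where the probe above would succeed; all names are illustrative):

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

/* Each thread sees a private copy of this variable. */
static __thread intptr_t current_env;

static void *thread_fn(void *arg)
{
    current_env = (intptr_t)arg;    /* writes only this thread's copy */
    return (void *)current_env;
}

intptr_t tls_demo(void)
{
    pthread_t a, b;
    void *ra, *rb;
    current_env = 7;                /* main thread's copy */
    pthread_create(&a, NULL, thread_fn, (void *)1);
    pthread_create(&b, NULL, thread_fn, (void *)2);
    pthread_join(a, &ra);
    pthread_join(b, &rb);
    /* each thread saw only its own value; main's copy is untouched */
    return current_env * 100 + (intptr_t)ra * 10 + (intptr_t)rb;
}
```

This is semantically equivalent to the pthread-key fallback in the patch, which is why the two can sit behind the same `qemu_get_current_env()` / `qemu_set_current_env()` interface.]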
* [Qemu-devel] [PATCH v2 11/11] remove smp restriction from kvm
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 10/11] Use __thread where available Glauber Costa
@ 2009-12-01 12:51   ` Glauber Costa
  0 siblings, 0 replies; 30+ messages in thread
From: Glauber Costa @ 2009-12-01 12:51 UTC (permalink / raw)
  To: qemu-devel; +Cc: aliguori, avi, agraf

We don't support smp without the irqchip in kernel, so only abort in
that situation.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 kvm-all.c |   17 ++++++++++-------
 1 files changed, 10 insertions(+), 7 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 286d0ee..db5daf6 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -438,15 +438,23 @@ static int kvm_create_irqchip(KVMState *s)
 {
     int ret = 0;
 #if defined(CONFIG_IOTHREAD)
+    ret = -1;
     if (!kvm_check_extension(s, KVM_CAP_IRQCHIP))
-        return -1;
+        goto failed;

     ret = kvm_vm_ioctl(s, KVM_CREATE_IRQCHIP);
     if (ret < 0)
-        return ret;
+        goto failed;

     s->irqchip_in_kernel = 1;
+
+    return ret;
+failed:
 #endif
+    if (smp_cpus > 1) {
+        fprintf(stderr, "No SMP KVM support, use '-smp 1'\n");
+        ret = -EINVAL;
+    }
     return ret;
 }

@@ -459,11 +467,6 @@ int kvm_init(int smp_cpus)
     int ret;
     int i;

-    if (smp_cpus > 1) {
-        fprintf(stderr, "No SMP KVM support, use '-smp 1'\n");
-        return -EINVAL;
-    }
-
     s = qemu_mallocz(sizeof(KVMState));

 #ifdef KVM_CAP_SET_GUEST_DEBUG

-- 
1.6.5.2

^ permalink raw reply related	[flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state 2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Glauber Costa 2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 06/11] flush state in migration post_load Glauber Costa @ 2009-12-02 10:48 ` Gleb Natapov 2009-12-02 11:22 ` Glauber Costa 1 sibling, 1 reply; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 10:48 UTC (permalink / raw) To: Glauber Costa; +Cc: agraf, aliguori, qemu-devel, avi On Tue, Dec 01, 2009 at 10:51:31AM -0200, Glauber Costa wrote: > This fix a bug with -smp in kvm. Since we have updated apic_base, > we also have to tell kernel about it. So instead of just updating > mp_state, update every regs. > > It is mandatory that this happens synchronously, without waiting for > the next vcpu run. Otherwise, if we are migrating, or initializing > the cpu's APIC, other cpus can still see an invalid state. > > Since putting registers already happen in vcpu entry, we factor > out the required code in cpu_flush_state() > > Signed-off-by: Glauber Costa <glommer@redhat.com> > --- > hw/apic-kvm.c | 5 ++++- > kvm-all.c | 14 +++++++++----- > kvm.h | 8 ++++++++ > 3 files changed, 21 insertions(+), 6 deletions(-) > > diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c > index e5a0bfc..9e9790f 100644 > --- a/hw/apic-kvm.c > +++ b/hw/apic-kvm.c > @@ -126,7 +126,10 @@ static void kvm_apic_reset(void *opaque) > s->cpu_env->mp_state > = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED; > > - kvm_put_mp_state(s->cpu_env); > + /* We have to tell the kernel about mp_state, but also save sregs, since > + * apic base was just updated > + */ > + cpu_flush_state(s->cpu_env); > > if (bsp) { > /* > diff --git a/kvm-all.c b/kvm-all.c > index 40203f0..318a4e6 100644 > --- a/kvm-all.c > +++ b/kvm-all.c > @@ -622,7 +622,6 @@ static void kvm_run_coalesced_mmio(CPUState *env, struct kvm_run *run) > } > #endif > } > - Spurious line removing. 
> void kvm_cpu_synchronize_state(CPUState *env) > { > if (!env->kvm_state->regs_modified) { > @@ -631,6 +630,14 @@ void kvm_cpu_synchronize_state(CPUState *env) > } > } > > +void kvm_cpu_flush_state(CPUState *env) > +{ > + if (env->kvm_state->regs_modified) { > + kvm_arch_put_registers(env); > + env->kvm_state->regs_modified = 0; > + } > +} > + > int kvm_cpu_exec(CPUState *env) > { > struct kvm_run *run = env->kvm_run; > @@ -645,10 +652,7 @@ int kvm_cpu_exec(CPUState *env) > break; > } > > - if (env->kvm_state->regs_modified) { > - kvm_arch_put_registers(env); > - env->kvm_state->regs_modified = 0; > - } > + kvm_cpu_flush_state(env); > > kvm_arch_pre_run(env, run); > qemu_mutex_unlock_iothread(); > diff --git a/kvm.h b/kvm.h > index a474d95..d9af176 100644 > --- a/kvm.h > +++ b/kvm.h > @@ -139,6 +139,7 @@ int kvm_check_extension(KVMState *s, unsigned int extension); > uint32_t kvm_arch_get_supported_cpuid(CPUState *env, uint32_t function, > int reg); > void kvm_cpu_synchronize_state(CPUState *env); > +void kvm_cpu_flush_state(CPUState *env); > > /* generic hooks - to be moved/refactored once there are more users */ > > @@ -149,4 +150,11 @@ static inline void cpu_synchronize_state(CPUState *env) > } > } > > +static inline void cpu_flush_state(CPUState *env) > +{ > + if (kvm_enabled()) { Is this ever called or intended to be called when kvm is disabled? > + kvm_cpu_flush_state(env); > + } > +} > + > #endif > -- > 1.6.5.2 > > -- Gleb. ^ permalink raw reply [flat|nested] 30+ messages in thread
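The synchronize/flush pair this patch introduces is a small dirty-flag protocol: synchronize pulls state from the kernel once and marks the userspace copy authoritative; flush pushes that copy back immediately (instead of waiting for the next vcpu entry) and clears the flag. A minimal sketch of the protocol, where `MockCPUState` and the call counters are illustrative stand-ins for QEMU's `CPUState` and `kvm_arch_{get,put}_registers()` — they only record how often the "kernel" is touched:

```c
/* Sketch of the synchronize/flush dirty-flag protocol from this patch.
 * MockCPUState and the counters are illustrative stand-ins, not QEMU's
 * real types; the arch hooks here just count calls. */
typedef struct {
    int regs_modified;  /* userspace copy is newer than the kernel copy */
    int kernel_gets;    /* times state was read from the "kernel"       */
    int kernel_puts;    /* times state was written back                 */
} MockCPUState;

static void kvm_arch_get_registers(MockCPUState *env) { env->kernel_gets++; }
static void kvm_arch_put_registers(MockCPUState *env) { env->kernel_puts++; }

/* Pull state from the kernel once, then treat the userspace copy as
 * authoritative until it is flushed. */
void cpu_synchronize_state_sketch(MockCPUState *env)
{
    if (!env->regs_modified) {
        kvm_arch_get_registers(env);
        env->regs_modified = 1;
    }
}

/* Push the userspace copy back right away; a no-op when nothing is
 * dirty, so calling it on every vcpu entry stays cheap. */
void cpu_flush_state_sketch(MockCPUState *env)
{
    if (env->regs_modified) {
        kvm_arch_put_registers(env);
        env->regs_modified = 0;
    }
}
```

Because both operations are idempotent while the flag is unchanged, repeated synchronize calls cost one ioctl-equivalent, and the flush in `kvm_cpu_exec()` degenerates to a flag test on the common path.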
* Re: [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state
  2009-12-02 10:48 ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Gleb Natapov
@ 2009-12-02 11:22 ` Glauber Costa
  2009-12-02 12:10   ` [Qemu-devel] " Jan Kiszka
  0 siblings, 1 reply; 30+ messages in thread
From: Glauber Costa @ 2009-12-02 11:22 UTC (permalink / raw)
To: Gleb Natapov; +Cc: qemu-devel, Glauber Costa, avi, agraf, aliguori

>> +{
>> +    if (kvm_enabled()) {
> Is this ever called or intended to be called when kvm is disabled?
>
>> +        kvm_cpu_flush_state(env);
>> +    }

I don't think so. But this is here for consistency with its synchronize brother.

--
Glauber Costa.
"Free as in Freedom"
http://glommer.net

"The less confident you are, the more serious you have to act."

^ permalink raw reply	[flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 05/11] tell kernel about all registers instead of just mp_state 2009-12-02 11:22 ` Glauber Costa @ 2009-12-02 12:10 ` Jan Kiszka 2009-12-02 12:26 ` Gleb Natapov 0 siblings, 1 reply; 30+ messages in thread From: Jan Kiszka @ 2009-12-02 12:10 UTC (permalink / raw) To: Glauber Costa Cc: aliguori, Gleb Natapov, Glauber Costa, agraf, qemu-devel, avi Glauber Costa wrote: >>> +{ >>> + if (kvm_enabled()) { >> Is this ever called or intended to be called when kvm is disabled? >> >>> + kvm_cpu_flush_state(env); >>> + } > > I don't think so. But this is here for consistency with its synchronize brother. Ack. As soon as someone starts to use this generic service outside code that is only built under CONFIG_KVM or called when kvm is enabled, things will break. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 05/11] tell kernel about all registers instead of just mp_state 2009-12-02 12:10 ` [Qemu-devel] " Jan Kiszka @ 2009-12-02 12:26 ` Gleb Natapov 2009-12-02 12:35 ` Jan Kiszka 0 siblings, 1 reply; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 12:26 UTC (permalink / raw) To: Jan Kiszka; +Cc: aliguori, Glauber Costa, Glauber Costa, qemu-devel, agraf, avi On Wed, Dec 02, 2009 at 01:10:22PM +0100, Jan Kiszka wrote: > Glauber Costa wrote: > >>> +{ > >>> + if (kvm_enabled()) { > >> Is this ever called or intended to be called when kvm is disabled? > >> > >>> + kvm_cpu_flush_state(env); > >>> + } > > > > I don't think so. But this is here for consistency with its synchronize brother. > > Ack. As soon as someone starts to use this generic service outside code > that is only built under CONFIG_KVM or called when kvm is enabled, > things will break. > We can add the check when adding first such user. Code outside CONFIG_KVM can't use this since the function is in kvm.h header. I don't have really strong opinion about that. I personally use kvm_enabled() as a hint that code is generic for kvm and non-kvm case when reading code. -- Gleb. ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 05/11] tell kernel about all registers instead of just mp_state 2009-12-02 12:26 ` Gleb Natapov @ 2009-12-02 12:35 ` Jan Kiszka 0 siblings, 0 replies; 30+ messages in thread From: Jan Kiszka @ 2009-12-02 12:35 UTC (permalink / raw) To: Gleb Natapov Cc: aliguori@us.ibm.com, Glauber Costa, Glauber Costa, qemu-devel@nongnu.org, agraf@suse.de, avi@redhat.com Gleb Natapov wrote: > On Wed, Dec 02, 2009 at 01:10:22PM +0100, Jan Kiszka wrote: >> Glauber Costa wrote: >>>>> +{ >>>>> + if (kvm_enabled()) { >>>> Is this ever called or intended to be called when kvm is disabled? >>>> >>>>> + kvm_cpu_flush_state(env); >>>>> + } >>> I don't think so. But this is here for consistency with its synchronize brother. >> Ack. As soon as someone starts to use this generic service outside code >> that is only built under CONFIG_KVM or called when kvm is enabled, >> things will break. >> > We can add the check when adding first such user. Code outside > CONFIG_KVM can't use this since the function is in kvm.h header. > I don't have really strong opinion about that. I personally use > kvm_enabled() as a hint that code is generic for kvm and non-kvm > case when reading code. It's the same case as with cpu_synchronize_state, just that the latter is already used outside kvm code. Both services logically belong into a different header but are in kvm.h for simplicity reasons. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution
  2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Glauber Costa
  2009-12-01 12:51   ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Glauber Costa
@ 2009-12-02 13:27 ` Marcelo Tosatti
  2009-12-02 13:41   ` Glauber Costa
  1 sibling, 1 reply; 30+ messages in thread
From: Marcelo Tosatti @ 2009-12-02 13:27 UTC (permalink / raw)
To: Glauber Costa; +Cc: agraf, aliguori, qemu-devel, avi

On Tue, Dec 01, 2009 at 10:51:30AM -0200, Glauber Costa wrote:
> This function is similar to qemu-kvm's on_vcpu mechanism. Totally synchronous,
> and guarantees that a given function will be executed at the specified vcpu.
>
> This patch also convert usage within the breakpoints system
>
> Signed-off-by: Glauber Costa <glommer@redhat.com>
> ---

> @@ -3436,8 +3441,7 @@ static int tcg_has_work(void);
>
>  static pthread_key_t current_env;
>
> -CPUState *qemu_get_current_env(void);
> -CPUState *qemu_get_current_env(void)
> +static CPUState *qemu_get_current_env(void)
>  {
>      return pthread_getspecific(current_env);
>  }
> @@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void)
>
>  static void qemu_wait_io_event(CPUState *env)
>  {
> -    while (!tcg_has_work())
> +    while (!tcg_has_work()) {

This checks all cpus, while for KVM it should check only the current cpu.

> +        qemu_flush_work(env);
>          qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
> +    }

KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask with the KVM_SET_SIGNAL_MASK ioctl, and eat the signal in qemu_wait_io_event (qemu_flush_work should run after eating the signal). Similarly to qemu-kvm's kvm_main_loop_wait.

Otherwise a vcpu thread can lose a signal (say, handle SIGUSR1 when not holding qemu_global_mutex before kernel entry).

I think this is the source of the problems patch 8 attempts to fix.

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution 2009-12-02 13:27 ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Marcelo Tosatti @ 2009-12-02 13:41 ` Glauber Costa 2009-12-02 13:54 ` Marcelo Tosatti 0 siblings, 1 reply; 30+ messages in thread From: Glauber Costa @ 2009-12-02 13:41 UTC (permalink / raw) To: Marcelo Tosatti; +Cc: agraf, aliguori, qemu-devel, avi On Wed, Dec 02, 2009 at 11:27:53AM -0200, Marcelo Tosatti wrote: > On Tue, Dec 01, 2009 at 10:51:30AM -0200, Glauber Costa wrote: > > This function is similar to qemu-kvm's on_vcpu mechanism. Totally synchronous, > > and guarantees that a given function will be executed at the specified vcpu. > > > > This patch also convert usage within the breakpoints system > > > > Signed-off-by: Glauber Costa <glommer@redhat.com> > > --- > > > @@ -3436,8 +3441,7 @@ static int tcg_has_work(void); > > > > static pthread_key_t current_env; > > > > -CPUState *qemu_get_current_env(void); > > -CPUState *qemu_get_current_env(void) > > +static CPUState *qemu_get_current_env(void) > > { > > return pthread_getspecific(current_env); > > } > > @@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void) > > > > static void qemu_wait_io_event(CPUState *env) > > { > > - while (!tcg_has_work()) > > + while (!tcg_has_work()) { > > This checks all cpus, while for KVM it should check only > the current cpu. > > > + qemu_flush_work(env); > > qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000); > > + } > > KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask > with KVM_SET_SIGNAL_MASK ioctl, and eat the signal in > qemu_wait_io_event (qemu_flush_work should run after eating > the signal). Similarly to qemu-kvm's kvm_main_loop_wait. > > Otherwise a vcpu thread can lose a signal (say handle SIGUSR1 when not > holding qemu_global_mutex before kernel entry). > > I think this the source of the problems patch 8 attempts to fix. 
I remember you had patches for both these fixes. Can you resend them, on top of this series?

^ permalink raw reply	[flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution 2009-12-02 13:41 ` Glauber Costa @ 2009-12-02 13:54 ` Marcelo Tosatti 2009-12-02 14:01 ` Glauber Costa 0 siblings, 1 reply; 30+ messages in thread From: Marcelo Tosatti @ 2009-12-02 13:54 UTC (permalink / raw) To: Glauber Costa; +Cc: agraf, aliguori, qemu-devel, avi On Wed, Dec 02, 2009 at 11:41:19AM -0200, Glauber Costa wrote: > > KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask > > with KVM_SET_SIGNAL_MASK ioctl, and eat the signal in > > qemu_wait_io_event (qemu_flush_work should run after eating > > the signal). Similarly to qemu-kvm's kvm_main_loop_wait. > > > > Otherwise a vcpu thread can lose a signal (say handle SIGUSR1 when not > > holding qemu_global_mutex before kernel entry). > > > > I think this the source of the problems patch 8 attempts to fix. > > I remember you had patches for both theses fixes. Only qemu_wait_io_event split. > Can you resend them, ontop of this series? There is not much sense to split qemu_wait_io_event after what your patch set does (it should be split before). ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution 2009-12-02 13:54 ` Marcelo Tosatti @ 2009-12-02 14:01 ` Glauber Costa 0 siblings, 0 replies; 30+ messages in thread From: Glauber Costa @ 2009-12-02 14:01 UTC (permalink / raw) To: Marcelo Tosatti; +Cc: agraf, aliguori, qemu-devel, avi On Wed, Dec 02, 2009 at 11:54:45AM -0200, Marcelo Tosatti wrote: > On Wed, Dec 02, 2009 at 11:41:19AM -0200, Glauber Costa wrote: > > > KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask > > > with KVM_SET_SIGNAL_MASK ioctl, and eat the signal in > > > qemu_wait_io_event (qemu_flush_work should run after eating > > > the signal). Similarly to qemu-kvm's kvm_main_loop_wait. > > > > > > Otherwise a vcpu thread can lose a signal (say handle SIGUSR1 when not > > > holding qemu_global_mutex before kernel entry). > > > > > > I think this the source of the problems patch 8 attempts to fix. > > > > I remember you had patches for both theses fixes. > > Only qemu_wait_io_event split. > > > Can you resend them, ontop of this series? > > There is not much sense to split qemu_wait_io_event after what your > patch set does (it should be split before). What's the difference here? ^ permalink raw reply [flat|nested] 30+ messages in thread
* Re: [Qemu-devel] [PATCH v2 0/11] 2009-12-01 12:51 [Qemu-devel] [PATCH v2 0/11] Glauber Costa 2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 01/11] Don't mess with halted state Glauber Costa @ 2009-12-02 10:59 ` Gleb Natapov 2009-12-02 12:00 ` Glauber Costa 2009-12-02 12:04 ` [Qemu-devel] " Jan Kiszka 1 sibling, 2 replies; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 10:59 UTC (permalink / raw) To: Glauber Costa; +Cc: agraf, aliguori, qemu-devel, avi On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: > This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, > that is already in staging. Also, you'll have to enable the io-thread, for the time > being. > > >From the last version, main change is that I am not calling queue_work automatically > from vcpu ioctls. queue_work is only used currently for the gdb stub. > > All other uses were by-passed by the new qemu_register_vcpu_reset(), since most > of it uses (all racy) came from the reset handlers. > Looks good to me except one thing. I don't see how you are addressing the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a in qemu-kvm. The problem is that mp_state can change in kernel between call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old mp_state will overwrite new one. -- Gleb. ^ permalink raw reply [flat|nested] 30+ messages in thread
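The lost-update race Gleb points at can be shown in isolation: userspace snapshots mp_state, the kernel changes it afterwards (say, a SIPI arrives), and the blind write-back on the next vcpu entry reverts the kernel's newer value. A toy model — the types and function names are illustrative, not QEMU's:

```c
/* Toy model of the mp_state lost-update race described above. */
enum { MP_UNINITIALIZED = 0, MP_RUNNABLE = 1 };

typedef struct { int mp_state; } KernelSide;            /* in-kernel copy  */
typedef struct { int mp_state; int dirty; } UserSide;   /* userspace copy  */

/* kvm_cpu_synchronize_state() analogue: snapshot and mark dirty. */
void snapshot_state(UserSide *u, const KernelSide *k)
{
    u->mp_state = k->mp_state;
    u->dirty = 1;
}

/* vcpu-entry write-back analogue: a stale snapshot clobbers whatever
 * the kernel changed in the meantime. */
void writeback_state(UserSide *u, KernelSide *k)
{
    if (u->dirty) {
        k->mp_state = u->mp_state;
        u->dirty = 0;
    }
}
```

The qemu-kvm commit Gleb cites avoids this by not overwriting the kernel's mp_state from a snapshot taken before the in-kernel change; the thread below debates whether that belongs in userspace or the kernel.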
* Re: [Qemu-devel] [PATCH v2 0/11] 2009-12-02 10:59 ` [Qemu-devel] [PATCH v2 0/11] Gleb Natapov @ 2009-12-02 12:00 ` Glauber Costa 2009-12-02 12:04 ` [Qemu-devel] " Jan Kiszka 1 sibling, 0 replies; 30+ messages in thread From: Glauber Costa @ 2009-12-02 12:00 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, Glauber Costa, avi, agraf, aliguori On Wed, Dec 2, 2009 at 8:59 AM, Gleb Natapov <gleb@redhat.com> wrote: > On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: >> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, >> that is already in staging. Also, you'll have to enable the io-thread, for the time >> being. >> >> >From the last version, main change is that I am not calling queue_work automatically >> from vcpu ioctls. queue_work is only used currently for the gdb stub. >> >> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most >> of it uses (all racy) came from the reset handlers. >> > Looks good to me except one thing. I don't see how you are addressing > the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > in qemu-kvm. The problem is that mp_state can change in kernel between > call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > mp_state will overwrite new one. I plan to do it in a patch that already lives on my tree. It basically flushes register state in early post_load -- Glauber Costa. "Free as in Freedom" http://glommer.net "The less confident you are, the more serious you have to act." ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 10:59 ` [Qemu-devel] [PATCH v2 0/11] Gleb Natapov 2009-12-02 12:00 ` Glauber Costa @ 2009-12-02 12:04 ` Jan Kiszka 2009-12-02 12:22 ` Gleb Natapov 2009-12-03 7:33 ` Gleb Natapov 1 sibling, 2 replies; 30+ messages in thread From: Jan Kiszka @ 2009-12-02 12:04 UTC (permalink / raw) To: Gleb Natapov; +Cc: qemu-devel, Glauber Costa, avi, agraf, aliguori Gleb Natapov wrote: > On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: >> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, >> that is already in staging. Also, you'll have to enable the io-thread, for the time >> being. >> >> >From the last version, main change is that I am not calling queue_work automatically >> from vcpu ioctls. queue_work is only used currently for the gdb stub. >> >> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most >> of it uses (all racy) came from the reset handlers. >> > Looks good to me except one thing. I don't see how you are addressing > the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > in qemu-kvm. The problem is that mp_state can change in kernel between > call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > mp_state will overwrite new one. + nmi_pending + sipi_vector These things need to be fixed at kernel level as discussed recently: Asynchronous changes done by in-kernel subsystems need to be queued and replayed with a higher priority than user space changes. User space need to stop the vm if it does not want to be overruled. Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:04 ` [Qemu-devel] " Jan Kiszka @ 2009-12-02 12:22 ` Gleb Natapov 2009-12-02 12:25 ` Jan Kiszka 2009-12-03 7:33 ` Gleb Natapov 1 sibling, 1 reply; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 12:22 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, Glauber Costa, avi, agraf, aliguori On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: > Gleb Natapov wrote: > > On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: > >> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, > >> that is already in staging. Also, you'll have to enable the io-thread, for the time > >> being. > >> > >> >From the last version, main change is that I am not calling queue_work automatically > >> from vcpu ioctls. queue_work is only used currently for the gdb stub. > >> > >> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most > >> of it uses (all racy) came from the reset handlers. > >> > > Looks good to me except one thing. I don't see how you are addressing > > the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > > in qemu-kvm. The problem is that mp_state can change in kernel between > > call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > > mp_state will overwrite new one. > > + nmi_pending > + sipi_vector > > These things need to be fixed at kernel level as discussed recently: > Asynchronous changes done by in-kernel subsystems need to be queued and > replayed with a higher priority than user space changes. User space need > to stop the vm if it does not want to be overruled. > Long term yes. Short term qemu need to work with existing kernels. -- Gleb. ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:22 ` Gleb Natapov @ 2009-12-02 12:25 ` Jan Kiszka 2009-12-02 12:30 ` Gleb Natapov 0 siblings, 1 reply; 30+ messages in thread From: Jan Kiszka @ 2009-12-02 12:25 UTC (permalink / raw) To: Gleb Natapov Cc: qemu-devel@nongnu.org, Glauber Costa, avi@redhat.com, agraf@suse.de, aliguori@us.ibm.com Gleb Natapov wrote: > On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: >> Gleb Natapov wrote: >>> On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: >>>> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, >>>> that is already in staging. Also, you'll have to enable the io-thread, for the time >>>> being. >>>> >>>> >From the last version, main change is that I am not calling queue_work automatically >>>> from vcpu ioctls. queue_work is only used currently for the gdb stub. >>>> >>>> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most >>>> of it uses (all racy) came from the reset handlers. >>>> >>> Looks good to me except one thing. I don't see how you are addressing >>> the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a >>> in qemu-kvm. The problem is that mp_state can change in kernel between >>> call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old >>> mp_state will overwrite new one. >> + nmi_pending >> + sipi_vector >> >> These things need to be fixed at kernel level as discussed recently: >> Asynchronous changes done by in-kernel subsystems need to be queued and >> replayed with a higher priority than user space changes. User space need >> to stop the vm if it does not want to be overruled. >> > Long term yes. Short term qemu need to work with existing kernels. Avi's answer was different [1]. 
Jan [1] http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/43288 -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:25 ` Jan Kiszka @ 2009-12-02 12:30 ` Gleb Natapov 2009-12-02 12:33 ` Jan Kiszka 0 siblings, 1 reply; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 12:30 UTC (permalink / raw) To: Jan Kiszka Cc: qemu-devel@nongnu.org, Glauber Costa, avi@redhat.com, agraf@suse.de, aliguori@us.ibm.com On Wed, Dec 02, 2009 at 01:25:39PM +0100, Jan Kiszka wrote: > Gleb Natapov wrote: > > On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: > >> Gleb Natapov wrote: > >>> On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: > >>>> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, > >>>> that is already in staging. Also, you'll have to enable the io-thread, for the time > >>>> being. > >>>> > >>>> >From the last version, main change is that I am not calling queue_work automatically > >>>> from vcpu ioctls. queue_work is only used currently for the gdb stub. > >>>> > >>>> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most > >>>> of it uses (all racy) came from the reset handlers. > >>>> > >>> Looks good to me except one thing. I don't see how you are addressing > >>> the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > >>> in qemu-kvm. The problem is that mp_state can change in kernel between > >>> call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > >>> mp_state will overwrite new one. > >> + nmi_pending > >> + sipi_vector > >> > >> These things need to be fixed at kernel level as discussed recently: > >> Asynchronous changes done by in-kernel subsystems need to be queued and > >> replayed with a higher priority than user space changes. User space need > >> to stop the vm if it does not want to be overruled. > >> > > Long term yes. Short term qemu need to work with existing kernels. > > Avi's answer was different [1]. 
>
> Jan
>
> [1] http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/43288
>
I don't think we can realistically fix all older kernels. Anyway if the
workaround for the bug will not be done in qemu we need to fix it in kernel
ASAP and start to propagate it to stable releases.

--
	Gleb.

^ permalink raw reply	[flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:30 ` Gleb Natapov @ 2009-12-02 12:33 ` Jan Kiszka 2009-12-02 12:49 ` Gleb Natapov 0 siblings, 1 reply; 30+ messages in thread From: Jan Kiszka @ 2009-12-02 12:33 UTC (permalink / raw) To: Gleb Natapov Cc: qemu-devel@nongnu.org, Glauber Costa, avi@redhat.com, agraf@suse.de, aliguori@us.ibm.com Gleb Natapov wrote: > On Wed, Dec 02, 2009 at 01:25:39PM +0100, Jan Kiszka wrote: >> Gleb Natapov wrote: >>> On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: >>>> Gleb Natapov wrote: >>>>> On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: >>>>>> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, >>>>>> that is already in staging. Also, you'll have to enable the io-thread, for the time >>>>>> being. >>>>>> >>>>>> >From the last version, main change is that I am not calling queue_work automatically >>>>>> from vcpu ioctls. queue_work is only used currently for the gdb stub. >>>>>> >>>>>> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most >>>>>> of it uses (all racy) came from the reset handlers. >>>>>> >>>>> Looks good to me except one thing. I don't see how you are addressing >>>>> the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a >>>>> in qemu-kvm. The problem is that mp_state can change in kernel between >>>>> call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old >>>>> mp_state will overwrite new one. >>>> + nmi_pending >>>> + sipi_vector >>>> >>>> These things need to be fixed at kernel level as discussed recently: >>>> Asynchronous changes done by in-kernel subsystems need to be queued and >>>> replayed with a higher priority than user space changes. User space need >>>> to stop the vm if it does not want to be overruled. >>>> >>> Long term yes. Short term qemu need to work with existing kernels. >> Avi's answer was different [1]. 
>> >> Jan >> >> [1] http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/43288 >> > I don't think we can realistically fix all older kernels. Anyway if the > workaround for the bug will not be done in qemu we need to fix it in kernel > ASAP and start to propagate it to stable releases. > Upstream could still easily deny in-kernel irqchip support if the host kvm is not recent enough. qemu-kvm currently has a different policy /wrt old kvm modules, so we may work around the mp_state and vcpu_events issues there (just cleaner than so far). Jan -- Siemens AG, Corporate Technology, CT T DE IT 1 Corporate Competence Center Embedded Linux ^ permalink raw reply [flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:33 ` Jan Kiszka @ 2009-12-02 12:49 ` Gleb Natapov 0 siblings, 0 replies; 30+ messages in thread From: Gleb Natapov @ 2009-12-02 12:49 UTC (permalink / raw) To: Jan Kiszka Cc: qemu-devel@nongnu.org, Glauber Costa, avi@redhat.com, agraf@suse.de, aliguori@us.ibm.com On Wed, Dec 02, 2009 at 01:33:11PM +0100, Jan Kiszka wrote: > Gleb Natapov wrote: > > On Wed, Dec 02, 2009 at 01:25:39PM +0100, Jan Kiszka wrote: > >> Gleb Natapov wrote: > >>> On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: > >>>> Gleb Natapov wrote: > >>>>> On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: > >>>>>> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, > >>>>>> that is already in staging. Also, you'll have to enable the io-thread, for the time > >>>>>> being. > >>>>>> > >>>>>> >From the last version, main change is that I am not calling queue_work automatically > >>>>>> from vcpu ioctls. queue_work is only used currently for the gdb stub. > >>>>>> > >>>>>> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most > >>>>>> of it uses (all racy) came from the reset handlers. > >>>>>> > >>>>> Looks good to me except one thing. I don't see how you are addressing > >>>>> the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > >>>>> in qemu-kvm. The problem is that mp_state can change in kernel between > >>>>> call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > >>>>> mp_state will overwrite new one. > >>>> + nmi_pending > >>>> + sipi_vector > >>>> > >>>> These things need to be fixed at kernel level as discussed recently: > >>>> Asynchronous changes done by in-kernel subsystems need to be queued and > >>>> replayed with a higher priority than user space changes. User space need > >>>> to stop the vm if it does not want to be overruled. > >>>> > >>> Long term yes. Short term qemu need to work with existing kernels. 
> >> Avi's answer was different [1].
> >>
> >> Jan
> >>
> >> [1] http://permalink.gmane.org/gmane.comp.emulators.kvm.devel/43288
> >>
> > I don't think we can realistically fix all older kernels. Anyway if the
> > workaround for the bug will not be done in qemu we need to fix it in kernel
> > ASAP and start to propagate it to stable releases.
> >
> Upstream could still easily deny in-kernel irqchip support if the host
> kvm is not recent enough. qemu-kvm currently has a different policy /wrt
> old kvm modules, so we may work around the mp_state and vcpu_events
> issues there (just cleaner than so far).
>
There is no kernel (released or not) that has the bug fixed. So you say let's always disable everything that this patchset adds?

--
	Gleb.

^ permalink raw reply	[flat|nested] 30+ messages in thread
* [Qemu-devel] Re: [PATCH v2 0/11] 2009-12-02 12:04 ` [Qemu-devel] " Jan Kiszka 2009-12-02 12:22 ` Gleb Natapov @ 2009-12-03 7:33 ` Gleb Natapov 1 sibling, 0 replies; 30+ messages in thread From: Gleb Natapov @ 2009-12-03 7:33 UTC (permalink / raw) To: Jan Kiszka; +Cc: qemu-devel, Glauber Costa, avi, agraf, aliguori On Wed, Dec 02, 2009 at 01:04:16PM +0100, Jan Kiszka wrote: > Gleb Natapov wrote: > > On Tue, Dec 01, 2009 at 10:51:26AM -0200, Glauber Costa wrote: > >> This is a repost of the -smp series. Note that it depends on irqchip-in-kernel, > >> that is already in staging. Also, you'll have to enable the io-thread, for the time > >> being. > >> > >> >From the last version, main change is that I am not calling queue_work automatically > >> from vcpu ioctls. queue_work is only used currently for the gdb stub. > >> > >> All other uses were by-passed by the new qemu_register_vcpu_reset(), since most > >> of it uses (all racy) came from the reset handlers. > >> > > Looks good to me except one thing. I don't see how you are addressing > > the problem fixed by commit b8a7857071b477b28d3055e33ff0298fc91f329a > > in qemu-kvm. The problem is that mp_state can change in kernel between > > call to kvm_cpu_synchronize_state() and vcpu re-entry. In this case old > > mp_state will overwrite new one. > > + nmi_pending > + sipi_vector > > These things need to be fixed at kernel level as discussed recently: > Asynchronous changes done by in-kernel subsystems need to be queued and > replayed with a higher priority than user space changes. User space need > to stop the vm if it does not want to be overruled. > Actually, thinking about it some more, our problem with mp_state comes from the way we try to treat it as part of a vcpu state, but in fact nmi_pending/sipi_vector/mp_state are all part of apic state. So I doubt that the correct solution lays in the kernel. We should remove mp_state syncing from put/get cpu state and move it into apic state management code. 
To avoid breaking "info cpus" halted-state reporting, we should get the apic state in addition to the vcpu state on that monitor command.

--
	Gleb.

^ permalink raw reply	[flat|nested] 30+ messages in thread
end of thread, other threads:[~2009-12-03  7:34 UTC | newest]

Thread overview: 30+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2009-12-01 12:51 [Qemu-devel] [PATCH v2 0/11] Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 01/11] Don't mess with halted state Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 02/11] store thread-specific env information Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 03/11] update halted state on mp_state sync Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 06/11] flush state in migration post_load Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 07/11] Don't call kvm cpu reset on initialization Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 08/11] use cpu_kick instead of direct signalling Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 09/11] Use per-cpu reset handlers Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 10/11] Use __thread where available Glauber Costa
2009-12-01 12:51 ` [Qemu-devel] [PATCH v2 11/11] remove smp restriction from kvm Glauber Costa
2009-12-02 10:48 ` [Qemu-devel] [PATCH v2 05/11] tell kernel about all registers instead of just mp_state Gleb Natapov
2009-12-02 11:22 ` Glauber Costa
2009-12-02 12:10 ` [Qemu-devel] " Jan Kiszka
2009-12-02 12:26 ` Gleb Natapov
2009-12-02 12:35 ` Jan Kiszka
2009-12-02 13:27 ` [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution Marcelo Tosatti
2009-12-02 13:41 ` Glauber Costa
2009-12-02 13:54 ` Marcelo Tosatti
2009-12-02 14:01 ` Glauber Costa
2009-12-02 10:59 ` [Qemu-devel] [PATCH v2 0/11] Gleb Natapov
2009-12-02 12:00 ` Glauber Costa
2009-12-02 12:04 ` [Qemu-devel] " Jan Kiszka
2009-12-02 12:22 ` Gleb Natapov
2009-12-02 12:25 ` Jan Kiszka
2009-12-02 12:30 ` Gleb Natapov
2009-12-02 12:33 ` Jan Kiszka
2009-12-02 12:49 ` Gleb Natapov
2009-12-03  7:33 ` Gleb Natapov