* [Qemu-devel] [PATCH 00/11] SMP support in qemu.git

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

Avi/Marcelo

Please include this in a branch in qemu-kvm for future inclusion in
qemu.git. It is the same material people have already commented on
in the ML.

Glauber Costa (11):
  Don't mess with halted state.
  store thread-specific env information
  update halted state on mp_state sync
  qemu_flush_work for remote vcpu execution
  tell kernel about all registers instead of just mp_state
  flush state in migration post_load
  Don't call kvm cpu reset on initialization
  use cpu_kick instead of direct signalling.
  Use per-cpu reset handlers.
  Use __thread where available.
  remove smp restriction from kvm

 configure         |  17 ++++++++
 cpu-all.h         |   3 +
 cpu-defs.h        |  16 ++++++++
 exec-all.h        |  12 ++++++
 exec.c            |  32 ++++++++++++++++
 hw/apic-kvm.c     |  26 +++++++-----
 kvm-all.c         |  49 +++++++++---------------
 kvm.h             |  10 +++++
 target-i386/kvm.c |  12 ++++++
 target-ppc/kvm.c  |   5 ++
 vl.c              | 108 ++++++++++++++++++++++++++++++++++++++++++++++++++--
 11 files changed, 244 insertions(+), 46 deletions(-)

^ permalink raw reply  [flat|nested]  24+ messages in thread
* [Qemu-devel] [PATCH 01/11] Don't mess with halted state.

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

When we have the irqchip in the kernel, halted state is the kernel's
business, so don't initialize it in our code.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index 089fa45..e5a0bfc 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -122,10 +122,9 @@ static void kvm_apic_reset(void *opaque)
     s->dev.next_time = 0;
     s->dev.wait_for_sipi = 1;
 
-    s->cpu_env->halted = !(s->dev.apicbase & MSR_IA32_APICBASE_BSP);
     s->cpu_env->mp_state
-        = s->cpu_env->halted ? KVM_MP_STATE_UNINITIALIZED : KVM_MP_STATE_RUNNABLE;
+        = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;
 
     kvm_put_mp_state(s->cpu_env);
-- 
1.6.5.2
* [Qemu-devel] [PATCH 02/11] store thread-specific env information

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

Since we'll have multiple cpu threads, at least for kvm, we need a way
to store and retrieve the CPUState associated with the current
execution thread. For the I/O thread, this will be NULL. I am using
pthread functions for that, for portability, but we could as well use
the __thread keyword.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 vl.c |   21 +++++++++++++++++++++
 1 files changed, 21 insertions(+), 0 deletions(-)

diff --git a/vl.c b/vl.c
index 44763af..321b18d 100644
--- a/vl.c
+++ b/vl.c
@@ -3434,6 +3434,24 @@ static void block_io_signals(void);
 static void unblock_io_signals(void);
 static int tcg_has_work(void);
 
+static pthread_key_t current_env;
+
+CPUState *qemu_get_current_env(void);
+CPUState *qemu_get_current_env(void)
+{
+    return pthread_getspecific(current_env);
+}
+
+static void qemu_set_current_env(CPUState *env)
+{
+    pthread_setspecific(current_env, env);
+}
+
+static void qemu_init_current_env(void)
+{
+    pthread_key_create(&current_env, NULL);
+}
+
 static int qemu_init_main_loop(void)
 {
     int ret;
@@ -3446,6 +3464,7 @@ static int qemu_init_main_loop(void)
     qemu_mutex_init(&qemu_fair_mutex);
     qemu_mutex_init(&qemu_global_mutex);
     qemu_mutex_lock(&qemu_global_mutex);
+    qemu_init_current_env();
 
     unblock_io_signals();
     qemu_thread_self(&io_thread);
@@ -3484,6 +3503,8 @@ static void *kvm_cpu_thread_fn(void *arg)
     block_io_signals();
     qemu_thread_self(env->thread);
 
+    qemu_set_current_env(env);
+
     if (kvm_enabled())
         kvm_init_vcpu(env);
-- 
1.6.5.2
* [Qemu-devel] [PATCH 03/11] update halted state on mp_state sync

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

If we are using the in-kernel irqchip, halted state belongs in the
kernel. So every time we grab the kernel's idea of mp_state, we also
need to propagate halted state to userspace.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 target-i386/kvm.c |    5 +++++
 1 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 7e1ce15..65ad2a1 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -825,6 +825,11 @@ static int kvm_get_mp_state(CPUState *env)
         return ret;
     }
     env->mp_state = mp_state.mp_state;
+
+    if (kvm_irqchip_in_kernel()) {
+        env->halted = (env->mp_state == KVM_MP_STATE_HALTED);
+    }
+
     return 0;
 }
-- 
1.6.5.2
* [Qemu-devel] [PATCH 04/11] qemu_flush_work for remote vcpu execution

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

This function is similar to qemu-kvm's on_vcpu mechanism. It is totally
synchronous, and guarantees that a given function will be executed on
the specified vcpu. This patch also converts the existing usage in the
breakpoint code.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 cpu-all.h  |    3 ++
 cpu-defs.h |   14 +++++++++++++
 kvm-all.c  |   17 +--------------
 vl.c       |   61 +++++++++++++++++++++++++++++++++++++++++++++++++++++++----
 4 files changed, 75 insertions(+), 20 deletions(-)

diff --git a/cpu-all.h b/cpu-all.h
index e214374..8270d43 100644
--- a/cpu-all.h
+++ b/cpu-all.h
@@ -763,6 +763,9 @@ extern CPUState *cpu_single_env;
 extern int64_t qemu_icount;
 extern int use_icount;
 
+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data);
+void qemu_flush_work(CPUState *env);
+
 #define CPU_INTERRUPT_HARD   0x02 /* hardware interrupt pending */
 #define CPU_INTERRUPT_EXITTB 0x04 /* exit the current TB (use for x86 a20 case) */
 #define CPU_INTERRUPT_TIMER  0x08 /* internal timer exception pending */
diff --git a/cpu-defs.h b/cpu-defs.h
index 95068b5..18792fc 100644
--- a/cpu-defs.h
+++ b/cpu-defs.h
@@ -31,6 +31,8 @@
 #include "qemu-queue.h"
 #include "targphys.h"
 
+#include "qemu-thread.h"
+
 #ifndef TARGET_LONG_BITS
 #error TARGET_LONG_BITS must be defined before including this header
 #endif
@@ -134,6 +136,13 @@ typedef struct CPUWatchpoint {
     QTAILQ_ENTRY(CPUWatchpoint) entry;
 } CPUWatchpoint;
 
+typedef struct QemuWorkItem {
+    void (*func)(void *data);
+    void *data;
+    int done;
+} QemuWorkItem;
+
+
 #define CPU_TEMP_BUF_NLONGS 128
 #define CPU_COMMON \
     struct TranslationBlock *current_tb; /* currently executing TB */ \
@@ -175,6 +184,10 @@ typedef struct CPUWatchpoint {
     QTAILQ_HEAD(watchpoints_head, CPUWatchpoint) watchpoints; \
     CPUWatchpoint *watchpoint_hit; \
     \
+    QemuWorkItem queued_work; \
+    uint64_t queued_local, queued_total; \
+    struct QemuMutex queue_lock; \
+    \
     struct GDBRegisterState *gdb_regs; \
     \
     /* Core interrupt code */ \
@@ -194,6 +207,7 @@ typedef struct CPUWatchpoint {
     uint32_t created; \
     struct QemuThread *thread; \
     struct QemuCond *halt_cond; \
+    struct QemuCond work_cond; \
     const char *cpu_model_str; \
     struct KVMState *kvm_state; \
     struct kvm_run *kvm_run; \
diff --git a/kvm-all.c b/kvm-all.c
index a5739c4..f7d89c6 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -617,7 +617,7 @@ int kvm_cpu_exec(CPUState *env)
     struct kvm_run *run = env->kvm_run;
     int ret;
 
-    dprintf("kvm_cpu_exec()\n");
+    dprintf("kvm_cpu_exec() %d\n", env->cpu_index);
 
     do {
         if (env->exit_request) {
@@ -932,19 +932,6 @@ void kvm_setup_guest_memory(void *start, size_t size)
 }
 
 #ifdef KVM_CAP_SET_GUEST_DEBUG
-static void on_vcpu(CPUState *env, void (*func)(void *data), void *data)
-{
-#ifdef CONFIG_IOTHREAD
-    if (env == cpu_single_env) {
-        func(data);
-        return;
-    }
-    abort();
-#else
-    func(data);
-#endif
-}
-
 struct kvm_sw_breakpoint *kvm_find_sw_breakpoint(CPUState *env,
                                                  target_ulong pc)
 {
@@ -992,7 +979,7 @@ int kvm_update_guest_debug(CPUState *env, unsigned long reinject_trap)
         data.dbg.control |= reinject_trap;
 
     data.env = env;
-    on_vcpu(env, kvm_invoke_set_guest_debug, &data);
+    qemu_queue_work(env, kvm_invoke_set_guest_debug, &data);
 
     return data.err;
 }
diff --git a/vl.c b/vl.c
index 321b18d..c7b46a9 100644
--- a/vl.c
+++ b/vl.c
@@ -3403,6 +3403,11 @@ void qemu_notify_event(void)
     }
 }
 
+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
+{
+    func(data);
+}
+
 void qemu_mutex_lock_iothread(void) {}
 void qemu_mutex_unlock_iothread(void) {}
 
@@ -3436,8 +3441,7 @@ static int tcg_has_work(void);
 
 static pthread_key_t current_env;
 
-CPUState *qemu_get_current_env(void);
-CPUState *qemu_get_current_env(void)
+static CPUState *qemu_get_current_env(void)
 {
     return pthread_getspecific(current_env);
 }
@@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void)
 
 static void qemu_wait_io_event(CPUState *env)
 {
-    while (!tcg_has_work())
+    while (!tcg_has_work()) {
+        qemu_flush_work(env);
         qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
+    }
 
     qemu_mutex_unlock(&qemu_global_mutex);
 
@@ -3488,6 +3494,7 @@ static void qemu_wait_io_event(CPUState *env)
     qemu_mutex_unlock(&qemu_fair_mutex);
 
     qemu_mutex_lock(&qemu_global_mutex);
+
     if (env->stop) {
         env->stop = 0;
         env->stopped = 1;
@@ -3514,8 +3521,11 @@ static void *kvm_cpu_thread_fn(void *arg)
     qemu_cond_signal(&qemu_cpu_cond);
 
     /* and wait for machine initialization */
-    while (!qemu_system_ready)
+    while (!qemu_system_ready) {
+        /* system reset and initialization is a heavy caller of queue_work() */
+        qemu_flush_work(env);
         qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+    }
 
     while (1) {
         if (cpu_can_run(env))
@@ -3542,8 +3552,9 @@ static void *tcg_cpu_thread_fn(void *arg)
     qemu_cond_signal(&qemu_cpu_cond);
 
     /* and wait for machine initialization */
-    while (!qemu_system_ready)
+    while (!qemu_system_ready) {
         qemu_cond_timedwait(&qemu_system_cond, &qemu_global_mutex, 100);
+    }
 
     while (1) {
         tcg_cpu_exec();
@@ -3561,6 +3572,44 @@ void qemu_cpu_kick(void *_env)
     qemu_thread_signal(env->thread, SIGUSR1);
 }
 
+void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
+{
+    QemuWorkItem *wii;
+
+    env->queued_total++;
+
+    if (env == qemu_get_current_env()) {
+        env->queued_local++;
+        func(data);
+        return;
+    }
+
+    wii = &env->queued_work;
+    wii->func = func;
+    wii->data = data;
+    wii->done = 0;
+
+    qemu_thread_signal(env->thread, SIGUSR1);
+
+    while (!wii->done) {
+        qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
+    }
+
+    wii->func = NULL;
+}
+
+void qemu_flush_work(CPUState *env)
+{
+    QemuWorkItem *wi;
+
+    wi = &env->queued_work;
+    if (wi->func) {
+        wi->func(wi->data);
+        wi->done = 1;
+    }
+    qemu_cond_broadcast(&env->work_cond);
+}
+
 int qemu_cpu_self(void *_env)
 {
     CPUState *env = _env;
@@ -3719,6 +3768,8 @@ void qemu_init_vcpu(void *_env)
 {
     CPUState *env = _env;
 
+    qemu_cond_init(&env->work_cond);
+
     if (kvm_enabled())
         kvm_start_vcpu(env);
     else
-- 
1.6.5.2
* [Qemu-devel] [PATCH 05/11] tell kernel about all registers instead of just mp_state

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

This fixes a bug with -smp in kvm. Since we have updated apic_base, we
also have to tell the kernel about it. So instead of just updating
mp_state, update all registers.

It is mandatory that this happens synchronously, without waiting for
the next vcpu run. Otherwise, if we are migrating, or initializing the
cpu's APIC, other cpus can still see an invalid state.

Since putting registers already happens on vcpu entry, we factor out
the required code into cpu_flush_state().

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    5 ++++-
 kvm-all.c     |   13 +++++++++----
 kvm.h         |    8 ++++++++
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index e5a0bfc..9e9790f 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -126,7 +126,10 @@ static void kvm_apic_reset(void *opaque)
     s->cpu_env->mp_state
         = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;
 
-    kvm_put_mp_state(s->cpu_env);
+    /* We have to tell the kernel about mp_state, but also save sregs, since
+     * apic base was just updated
+     */
+    cpu_flush_state(s->cpu_env);
 
     if (bsp) {
         /*
diff --git a/kvm-all.c b/kvm-all.c
index f7d89c6..596416a 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -612,6 +612,14 @@ void kvm_cpu_synchronize_state(CPUState *env)
     }
 }
 
+void kvm_cpu_flush_state(CPUState *env)
+{
+    if (env->kvm_state->regs_modified) {
+        kvm_arch_put_registers(env);
+        env->kvm_state->regs_modified = 0;
+    }
+}
+
 int kvm_cpu_exec(CPUState *env)
 {
     struct kvm_run *run = env->kvm_run;
@@ -626,10 +634,7 @@ int kvm_cpu_exec(CPUState *env)
             break;
         }
 
-        if (env->kvm_state->regs_modified) {
-            kvm_arch_put_registers(env);
-            env->kvm_state->regs_modified = 0;
-        }
+        kvm_cpu_flush_state(env);
 
         kvm_arch_pre_run(env, run);
         qemu_mutex_unlock_iothread();
diff --git a/kvm.h b/kvm.h
index 15fb34a..7b9d8b3 100644
--- a/kvm.h
+++ b/kvm.h
@@ -144,6 +144,7 @@ int kvm_check_extension(KVMState *s, unsigned int extension);
 uint32_t kvm_arch_get_supported_cpuid(CPUState *env, uint32_t function,
                                       int reg);
 void kvm_cpu_synchronize_state(CPUState *env);
+void kvm_cpu_flush_state(CPUState *env);
 
 /* generic hooks - to be moved/refactored once there are more users */
 
@@ -154,4 +155,11 @@ static inline void cpu_synchronize_state(CPUState *env)
     }
 }
 
+static inline void cpu_flush_state(CPUState *env)
+{
+    if (kvm_enabled()) {
+        kvm_cpu_flush_state(env);
+    }
+}
+
 #endif
-- 
1.6.5.2
* [Qemu-devel] [PATCH 06/11] flush state in migration post_load

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

This has already been identified in qemu-kvm. We have to synchronously
tell the kernel about the APIC state. Otherwise, other cpus can see
bogus state for this lapic.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 hw/apic-kvm.c |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index 9e9790f..b2864b6 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -73,6 +73,8 @@ static int apic_post_load(void *opaque, int version_id)
 {
     APICState *s = opaque;
 
+    cpu_flush_state(s->cpu_env);
+
     return kvm_set_lapic(s->cpu_env, &s->kvm_lapic_state);
 }
-- 
1.6.5.2
* [Qemu-devel] [PATCH 07/11] Don't call kvm cpu reset on initialization

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

All reset functions are called from the same place, and this was a
leftover.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 kvm-all.c |    2 --
 1 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 596416a..1072d63 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -204,8 +204,6 @@ int kvm_init_vcpu(CPUState *env)
     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
         qemu_register_reset(kvm_reset_vcpu, env);
-        kvm_arch_reset_vcpu(env);
-        ret = kvm_arch_put_registers(env);
     }
 err:
     return ret;
-- 
1.6.5.2
* [Qemu-devel] [PATCH 08/11] use cpu_kick instead of direct signalling.

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

Before signalling a cpu, we have to set exit_request = 1, otherwise it
may go back to executing itself. So every cpu wakeup becomes at least
two statements. qemu_cpu_kick already provides those semantics, so use
it all over.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 vl.c |    6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/vl.c b/vl.c
index c7b46a9..97446fc 100644
--- a/vl.c
+++ b/vl.c
@@ -3568,6 +3568,7 @@ void qemu_cpu_kick(void *_env)
 {
     CPUState *env = _env;
     qemu_cond_broadcast(env->halt_cond);
+    env->exit_request = 1;
     if (kvm_enabled())
         qemu_thread_signal(env->thread, SIGUSR1);
 }
@@ -3589,7 +3590,7 @@ void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
     wii->data = data;
     wii->done = 0;
 
-    qemu_thread_signal(env->thread, SIGUSR1);
+    qemu_cpu_kick(env);
 
     while (!wii->done) {
         qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
@@ -3716,7 +3717,7 @@ static void pause_all_vcpus(void)
         qemu_cond_timedwait(&qemu_pause_cond, &qemu_global_mutex, 100);
         penv = first_cpu;
         while (penv) {
-            qemu_thread_signal(penv->thread, SIGUSR1);
+            qemu_cpu_kick(penv);
             penv = (CPUState *)penv->next_cpu;
         }
     }
@@ -3729,7 +3730,6 @@ static void resume_all_vcpus(void)
     while (penv) {
         penv->stop = 0;
         penv->stopped = 0;
-        qemu_thread_signal(penv->thread, SIGUSR1);
         qemu_cpu_kick(penv);
         penv = (CPUState *)penv->next_cpu;
     }
-- 
1.6.5.2
* [Qemu-devel] [PATCH 09/11] Use per-cpu reset handlers.

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

The proposal in this patch is to add a system_reset caller that only
resets state related to the cpu. This guarantees that those functions
are called from the cpu threads, not the I/O thread. In principle, it
might seem close to the remote execution mechanism, but:

 * It does not involve any extra signalling, so it should be faster.
 * The cpu is guaranteed to be stopped, so it is much, much less racy.
 * What runs where becomes more explicit.

The previous implementation was giving me races on reset. This one
makes reset work flawlessly.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 cpu-defs.h    |    2 ++
 exec-all.h    |   12 ++++++++++++
 exec.c        |   32 ++++++++++++++++++++++++++++++++
 hw/apic-kvm.c |   16 ++++++++--------
 kvm-all.c     |    7 +++----
 vl.c          |   10 ++++++++++
 6 files changed, 67 insertions(+), 12 deletions(-)

diff --git a/cpu-defs.h b/cpu-defs.h
index 18792fc..37fd11c 100644
--- a/cpu-defs.h
+++ b/cpu-defs.h
@@ -185,6 +185,8 @@ typedef struct QemuWorkItem {
     CPUWatchpoint *watchpoint_hit; \
     \
     QemuWorkItem queued_work; \
+    int reset_requested; \
+    QTAILQ_HEAD(reset_head, CPUResetEntry) reset_handlers; \
     uint64_t queued_local, queued_total; \
     struct QemuMutex queue_lock; \
     \
diff --git a/exec-all.h b/exec-all.h
index 820b59e..5acf363 100644
--- a/exec-all.h
+++ b/exec-all.h
@@ -78,6 +78,18 @@ void cpu_io_recompile(CPUState *env, void *retaddr);
 TranslationBlock *tb_gen_code(CPUState *env,
                               target_ulong pc, target_ulong cs_base, int flags,
                               int cflags);
+
+typedef void CPUResetHandler(CPUState *env);
+typedef struct CPUResetEntry {
+    QTAILQ_ENTRY(CPUResetEntry) entry;
+    CPUResetHandler *func;
+    void *opaque;
+} CPUResetEntry;
+
+void qemu_register_reset_cpu(CPUState *env, CPUResetHandler *func);
+void qemu_unregister_reset_cpu(CPUState *env, CPUResetHandler *func);
+
+void qemu_cpu_reset(CPUState *env);
 void cpu_exec_init(CPUState *env);
 void QEMU_NORETURN cpu_loop_exit(void);
 int page_unprotect(target_ulong address, unsigned long pc, void *puc);
diff --git a/exec.c b/exec.c
index eb1ee51..f91009d 100644
--- a/exec.c
+++ b/exec.c
@@ -588,6 +588,7 @@ void cpu_exec_init(CPUState *env)
     env->numa_node = 0;
     QTAILQ_INIT(&env->breakpoints);
     QTAILQ_INIT(&env->watchpoints);
+    QTAILQ_INIT(&env->reset_handlers);
     *penv = env;
 #if defined(CONFIG_USER_ONLY)
     cpu_list_unlock();
@@ -599,6 +600,37 @@ void cpu_exec_init(CPUState *env)
 #endif
 }
 
+void qemu_register_reset_cpu(CPUState *env, CPUResetHandler *func)
+{
+    CPUResetEntry *re = qemu_mallocz(sizeof(CPUResetEntry));
+
+    re->func = func;
+    re->opaque = env;
+    QTAILQ_INSERT_TAIL(&env->reset_handlers, re, entry);
+}
+
+void qemu_unregister_reset_cpu(CPUState *env, CPUResetHandler *func)
+{
+    CPUResetEntry *re;
+
+    QTAILQ_FOREACH(re, &env->reset_handlers, entry) {
+        if (re->func == func && re->opaque == env) {
+            QTAILQ_REMOVE(&env->reset_handlers, re, entry);
+            qemu_free(re);
+            return;
+        }
+    }
+}
+
+void qemu_cpu_reset(CPUState *env)
+{
+    CPUResetEntry *re, *nre;
+    /* reset all devices */
+    QTAILQ_FOREACH_SAFE(re, &env->reset_handlers, entry, nre) {
+        re->func(re->opaque);
+    }
+}
+
 static inline void invalidate_page_bitmap(PageDesc *p)
 {
     if (p->code_bitmap) {
diff --git a/hw/apic-kvm.c b/hw/apic-kvm.c
index b2864b6..91a77d0 100644
--- a/hw/apic-kvm.c
+++ b/hw/apic-kvm.c
@@ -91,20 +91,20 @@ static const VMStateDescription vmstate_apic = {
     .post_load = apic_post_load,
 };
 
-static void kvm_apic_reset(void *opaque)
+static void kvm_apic_reset(CPUState *env)
 {
-    APICState *s = opaque;
+    APICState *s = env->apic_state;
     int bsp;
     int i;
 
-    cpu_synchronize_state(s->cpu_env);
+    cpu_synchronize_state(env);
 
-    bsp = cpu_is_bsp(s->cpu_env);
+    bsp = cpu_is_bsp(env);
     s->dev.apicbase = 0xfee00000 |
         (bsp ? MSR_IA32_APICBASE_BSP : 0) | MSR_IA32_APICBASE_ENABLE;
 
-    cpu_reset(s->cpu_env);
+    cpu_reset(env);
 
     s->dev.tpr = 0;
     s->dev.spurious_vec = 0xff;
@@ -125,7 +125,7 @@ static void kvm_apic_reset(CPUState *env)
     s->dev.wait_for_sipi = 1;
 
-    s->cpu_env->mp_state
+    env->mp_state
         = bsp ? KVM_MP_STATE_RUNNABLE : KVM_MP_STATE_UNINITIALIZED;
 
     /* We have to tell the kernel about mp_state, but also save sregs, since
@@ -141,7 +141,7 @@ static void kvm_apic_reset(CPUState *env)
          */
         s->dev.lvt[APIC_LVT_LINT0] = 0x700;
     }
-    kvm_set_lapic(s->cpu_env, &s->kvm_lapic_state);
+    kvm_set_lapic(env, &s->kvm_lapic_state);
 }
 
 int kvm_apic_init(CPUState *env)
@@ -155,7 +155,7 @@ int kvm_apic_init(CPUState *env)
     msix_supported = 1;
 
     vmstate_register(s->dev.id, &vmstate_apic, s);
-    qemu_register_reset(kvm_apic_reset, s);
+    qemu_register_reset_cpu(env, kvm_apic_reset);
 
     return 0;
 }
diff --git a/kvm-all.c b/kvm-all.c
index 1072d63..22d84a3 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -21,6 +21,7 @@
 #include <linux/kvm.h>
 
 #include "qemu-common.h"
+#include "exec-all.h"
 #include "sysemu.h"
 #include "hw/hw.h"
 #include "gdbstub.h"
@@ -148,10 +149,8 @@ static int kvm_set_user_memory_region(KVMState *s, KVMSlot *slot)
     return kvm_vm_ioctl(s, KVM_SET_USER_MEMORY_REGION, &mem);
 }
 
-static void kvm_reset_vcpu(void *opaque)
+static void kvm_reset_vcpu(CPUState *env)
 {
-    CPUState *env = opaque;
-
     kvm_arch_reset_vcpu(env);
     if (kvm_arch_put_registers(env)) {
         fprintf(stderr, "Fatal: kvm vcpu reset failed\n");
@@ -203,7 +202,7 @@ int kvm_init_vcpu(CPUState *env)
 
     ret = kvm_arch_init_vcpu(env);
     if (ret == 0) {
-        qemu_register_reset(kvm_reset_vcpu, env);
+        qemu_register_reset_cpu(env, kvm_reset_vcpu);
     }
 err:
     return ret;
diff --git a/vl.c b/vl.c
index 97446fc..ad2e7d6 100644
--- a/vl.c
+++ b/vl.c
@@ -3232,11 +3232,17 @@ void qemu_unregister_reset(QEMUResetHandler *func, void *opaque)
 void qemu_system_reset(void)
 {
     QEMUResetEntry *re, *nre;
+    CPUState *penv = first_cpu;
 
     /* reset all devices */
     QTAILQ_FOREACH_SAFE(re, &reset_handlers, entry, nre) {
         re->func(re->opaque);
     }
+
+    while (penv) {
+        penv->reset_requested = 1;
+        penv = penv->next_cpu;
+    }
 }
 
 void qemu_system_reset_request(void)
@@ -3528,6 +3534,10 @@ static void *kvm_cpu_thread_fn(void *arg)
     }
 
     while (1) {
+        if (env->reset_requested) {
+            qemu_cpu_reset(env);
+            env->reset_requested = 0;
+        }
         if (cpu_can_run(env))
             qemu_cpu_exec(env);
         qemu_wait_io_event(env);
-- 
1.6.5.2
* [Qemu-devel] [PATCH 10/11] Use __thread where available.

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

It is much faster than pthread_{g,s}et_specific.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 configure |   17 +++++++++++++++++
 vl.c      |   16 ++++++++++++++++
 2 files changed, 33 insertions(+), 0 deletions(-)

diff --git a/configure b/configure
index dca5a43..ce7bcc4 100755
--- a/configure
+++ b/configure
@@ -233,6 +233,7 @@ blobs="yes"
 pkgversion=""
 check_utests="no"
 user_pie="no"
+tls_thread="yes"
 
 # OS specific
 if check_define __linux__ ; then
@@ -1017,6 +1018,19 @@ EOF
 fi
 
 ##########################################
+# Check for availability of the __thread keyword
+cat > $TMPC <<EOF
+__thread void *ptr;
+int main(void) { return 0;}
+EOF
+
+if compile_prog; then
+  tls_thread="yes"
+else
+  tls_thread="no"
+fi
+
+##########################################
 # VNC TLS detection
 if test "$vnc_tls" != "no" ; then
   cat > $TMPC <<EOF
@@ -1969,6 +1983,9 @@ echo "CONFIG_BDRV_WHITELIST=$block_drv_whitelist" >> $config_host_mak
 if test "$mixemu" = "yes" ; then
   echo "CONFIG_MIXEMU=y" >> $config_host_mak
 fi
+if test "$tls_thread" = "yes" ; then
+  echo "CONFIG_TLS_HAS_THREAD=y" >> $config_host_mak
+fi
 if test "$vnc_tls" = "yes" ; then
   echo "CONFIG_VNC_TLS=y" >> $config_host_mak
   echo "VNC_TLS_CFLAGS=$vnc_tls_cflags" >> $config_host_mak
diff --git a/vl.c b/vl.c
index ad2e7d6..16a41e3 100644
--- a/vl.c
+++ b/vl.c
@@ -3445,21 +3445,37 @@ static void block_io_signals(void);
 static void unblock_io_signals(void);
 static int tcg_has_work(void);
 
+#ifdef CONFIG_TLS_HAS_THREAD
+static __thread CPUState *current_env;
+#else
 static pthread_key_t current_env;
+#endif
 
 static CPUState *qemu_get_current_env(void)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    return current_env;
+#else
     return pthread_getspecific(current_env);
+#endif
 }
 
 static void qemu_set_current_env(CPUState *env)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    current_env = env;
+#else
     pthread_setspecific(current_env, env);
+#endif
 }
 
 static void qemu_init_current_env(void)
 {
+#ifdef CONFIG_TLS_HAS_THREAD
+    current_env = NULL;
+#else
     pthread_key_create(&current_env, NULL);
+#endif
 }
 
 static int qemu_init_main_loop(void)
-- 
1.6.5.2
* [Qemu-devel] [PATCH 11/11] remove smp restriction from kvm

From: Glauber Costa @ 2009-12-02 13:48 UTC
To: qemu-devel; +Cc: aliguori, avi

We don't support SMP without the in-kernel irqchip, so only abort in
that situation.

Signed-off-by: Glauber Costa <glommer@redhat.com>
---
 kvm-all.c         |   10 +++++-----
 kvm.h             |    2 ++
 target-i386/kvm.c |    7 +++++++
 target-ppc/kvm.c  |    5 +++++
 4 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/kvm-all.c b/kvm-all.c
index 22d84a3..b17bd74 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -427,7 +427,12 @@ static int kvm_create_irqchip(KVMState *s)
         return ret;
 
     s->irqchip_in_kernel = 1;
+
 #endif
 
+    if (!kvm_arch_can_smp() && (smp_cpus > 1)) {
+        fprintf(stderr, "No SMP KVM support, use '-smp 1'\n");
+        ret = -EINVAL;
+    }
     return ret;
 }
@@ -440,11 +445,6 @@ int kvm_init(int smp_cpus)
     int ret;
     int i;
 
-    if (smp_cpus > 1) {
-        fprintf(stderr, "No SMP KVM support, use '-smp 1'\n");
-        return -EINVAL;
-    }
-
     s = qemu_mallocz(sizeof(KVMState));
 
 #ifdef KVM_CAP_SET_GUEST_DEBUG
diff --git a/kvm.h b/kvm.h
index 7b9d8b3..91a4bf4 100644
--- a/kvm.h
+++ b/kvm.h
@@ -101,6 +101,8 @@ int kvm_arch_init_vcpu(CPUState *env);
 
 void kvm_arch_reset_vcpu(CPUState *env);
 
+int kvm_arch_can_smp(void);
+
 #ifdef TARGET_I386
 int kvm_set_lapic(CPUState *env, struct kvm_lapic_state *s);
 int kvm_get_lapic(CPUState *env, struct kvm_lapic_state *s);
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 65ad2a1..cea0cf1 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -1171,3 +1171,10 @@ int kvm_get_lapic(CPUState *env, struct kvm_lapic_state *s)
 
     return kvm_vcpu_ioctl(env, KVM_GET_LAPIC, s);
 }
+
+int kvm_arch_can_smp(void)
+{
+    if (kvm_irqchip_in_kernel())
+        return 1;
+    return 0;
+}
diff --git a/target-ppc/kvm.c b/target-ppc/kvm.c
index d08639b..cfe467f 100644
--- a/target-ppc/kvm.c
+++ b/target-ppc/kvm.c
@@ -62,6 +62,11 @@ int kvm_arch_set_irq(KVMState *s, int irq, int level, int *status)
     return -ENOSYS;
 }
 
+int kvm_arch_can_smp(void)
+{
+    return 0;
+}
+
 int kvm_arch_put_registers(CPUState *env)
 {
     struct kvm_regs regs;
-- 
1.6.5.2
* [Qemu-devel] Re: [PATCH 11/11] remove smp restriction from kvm

From: Avi Kivity @ 2009-12-03 12:23 UTC
To: Glauber Costa; +Cc: aliguori, qemu-devel

On 12/02/2009 03:48 PM, Glauber Costa wrote:
> We don't support smp without irqchip in kernel, so only abort in
> that situation

What's the reason for this restriction?

-- 
error compiling committee.c: too many arguments to function
* Re: [Qemu-devel] Re: [PATCH 11/11] remove smp restriction from kvm

From: Glauber Costa @ 2009-12-03 12:36 UTC
To: Avi Kivity; +Cc: Glauber Costa, aliguori, qemu-devel

On Thu, Dec 3, 2009 at 10:23 AM, Avi Kivity <avi@redhat.com> wrote:
> On 12/02/2009 03:48 PM, Glauber Costa wrote:
>> We don't support smp without irqchip in kernel, so only abort in
>> that situation
>
> What's the reason for this restriction?

It is temporary. But as far as my testing goes, we don't come even
close to working without in-kernel irqchip.

-- 
Glauber Costa.
"Free as in Freedom" http://glommer.net

"The less confident you are, the more serious you have to act."
* Re: [Qemu-devel] Re: [PATCH 11/11] remove smp restriction from kvm 2009-12-03 12:36 ` Glauber Costa @ 2009-12-03 12:37 ` Avi Kivity 2009-12-03 12:39 ` Glauber Costa 0 siblings, 1 reply; 24+ messages in thread From: Avi Kivity @ 2009-12-03 12:37 UTC (permalink / raw) To: Glauber Costa; +Cc: Glauber Costa, aliguori, qemu-devel On 12/03/2009 02:36 PM, Glauber Costa wrote: > On Thu, Dec 3, 2009 at 10:23 AM, Avi Kivity<avi@redhat.com> wrote: > >> On 12/02/2009 03:48 PM, Glauber Costa wrote: >> >>> We don't support smp without irqchip in kernel, so only abort in >>> that situation >>> >>> >> >> What's the reason for this restriction? >> > It is temporary. But as far as my testing goes, we don't come even close to > working without in-kernel irqchip. > This works well in qemu-kvm. What's the reason? it may indicate a bug in up. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 11/11] remove smp restriction from kvm 2009-12-03 12:37 ` Avi Kivity @ 2009-12-03 12:39 ` Glauber Costa 2009-12-03 12:40 ` Avi Kivity 0 siblings, 1 reply; 24+ messages in thread From: Glauber Costa @ 2009-12-03 12:39 UTC (permalink / raw) To: Avi Kivity; +Cc: Glauber Costa, aliguori, qemu-devel On Thu, Dec 3, 2009 at 10:37 AM, Avi Kivity <avi@redhat.com> wrote: > On 12/03/2009 02:36 PM, Glauber Costa wrote: >> >> On Thu, Dec 3, 2009 at 10:23 AM, Avi Kivity<avi@redhat.com> wrote: >> >>> >>> On 12/02/2009 03:48 PM, Glauber Costa wrote: >>> >>>> >>>> We don't support smp without irqchip in kernel, so only abort in >>>> that situation >>>> >>>> >>> >>> What's the reason for this restriction? >>> >> >> It is temporary. But as far as my testing goes, we don't come even close >> to >> working without in-kernel irqchip. >> > > This works well in qemu-kvm. What's the reason? it may indicate a bug in > up. The reason is that we do not handle SIPI in qemu.git yet. Now that the basics of smp are working, it should not be too difficult to get it working. -- Glauber Costa. "Free as in Freedom" http://glommer.net "The less confident you are, the more serious you have to act." ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 11/11] remove smp restriction from kvm 2009-12-03 12:39 ` Glauber Costa @ 2009-12-03 12:40 ` Avi Kivity 0 siblings, 0 replies; 24+ messages in thread From: Avi Kivity @ 2009-12-03 12:40 UTC (permalink / raw) To: Glauber Costa; +Cc: Glauber Costa, aliguori, qemu-devel On 12/03/2009 02:39 PM, Glauber Costa wrote: > >> This works well in qemu-kvm. What's the reason? it may indicate a bug in >> up. >> > The reason is that we do not handle SIPI in qemu.git yet. Now that the basics > of smp are working, it should not be too difficult to get it working. > OK. I think this is worthwhile. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 24+ messages in thread
* [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-02 13:48 ` [Qemu-devel] [PATCH 04/11] qemu_flush_work for remote vcpu execution Glauber Costa 2009-12-02 13:48 ` [Qemu-devel] [PATCH 05/11] tell kernel about all registers instead of just mp_state Glauber Costa @ 2009-12-03 12:20 ` Avi Kivity 2009-12-03 12:33 ` Glauber Costa 2009-12-03 14:29 ` Paolo Bonzini 2 siblings, 1 reply; 24+ messages in thread From: Avi Kivity @ 2009-12-03 12:20 UTC (permalink / raw) To: Glauber Costa; +Cc: aliguori, qemu-devel On 12/02/2009 03:48 PM, Glauber Costa wrote: > This function is similar to qemu-kvm's on_vcpu mechanism. Is similar? You're replacing on_vcpu(). > Totally synchronous, > and guarantees that a given function will be executed at the specified vcpu. > > This patch also convert usage within the breakpoints system > > +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data); > The name suggests that it is asynchronous. Why is this patch necessary? -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-03 12:20 ` [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution Avi Kivity @ 2009-12-03 12:33 ` Glauber Costa 2009-12-03 12:35 ` Avi Kivity 0 siblings, 1 reply; 24+ messages in thread From: Glauber Costa @ 2009-12-03 12:33 UTC (permalink / raw) To: Avi Kivity; +Cc: Glauber Costa, aliguori, qemu-devel On Thu, Dec 3, 2009 at 10:20 AM, Avi Kivity <avi@redhat.com> wrote: > On 12/02/2009 03:48 PM, Glauber Costa wrote: >> >> This function is similar to qemu-kvm's on_vcpu mechanism. > > Is similar? You're replacing on_vcpu(). Yeah, it began similar, now it is pretty much the same thing, but using qemu-specific data structures > >> Totally synchronous, >> and guarantees that a given function will be executed at the specified >> vcpu. >> >> This patch also convert usage within the breakpoints system >> >> +void qemu_queue_work(CPUState *env, void (*func)(void *data), void >> *data); >> > > The name suggests that it is asynchronous. > > Why is this patch necessary? to keep gdbstub working. > > -- > error compiling committee.c: too many arguments to function > > > > -- Glauber Costa. "Free as in Freedom" http://glommer.net "The less confident you are, the more serious you have to act." ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-03 12:33 ` Glauber Costa @ 2009-12-03 12:35 ` Avi Kivity 2009-12-03 12:37 ` Glauber Costa 0 siblings, 1 reply; 24+ messages in thread From: Avi Kivity @ 2009-12-03 12:35 UTC (permalink / raw) To: Glauber Costa; +Cc: Glauber Costa, aliguori, qemu-devel On 12/03/2009 02:33 PM, Glauber Costa wrote: > On Thu, Dec 3, 2009 at 10:20 AM, Avi Kivity<avi@redhat.com> wrote: > >> On 12/02/2009 03:48 PM, Glauber Costa wrote: >> >>> This function is similar to qemu-kvm's on_vcpu mechanism. >>> >> Is similar? You're replacing on_vcpu(). >> > Yeah, it began similar, now it is pretty much the same thing, but > using qemu-specific > data structures > Keep the name then. The new name is misleading. >>> Totally synchronous, >>> and guarantees that a given function will be executed at the specified >>> vcpu. >>> >>> This patch also convert usage within the breakpoints system >>> >>> +void qemu_queue_work(CPUState *env, void (*func)(void *data), void >>> *data); >>> >>> >> The name suggests that it is asynchronous. >> >> Why is this patch necessary? >> > to keep gdbstub working. > "Because it fixes things". Please elaborate. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-03 12:35 ` Avi Kivity @ 2009-12-03 12:37 ` Glauber Costa 2009-12-03 12:40 ` Avi Kivity 0 siblings, 1 reply; 24+ messages in thread From: Glauber Costa @ 2009-12-03 12:37 UTC (permalink / raw) To: Avi Kivity; +Cc: Glauber Costa, aliguori, qemu-devel > > Keep the name then. The new name is misleading. ok. >>>> Totally synchronous, >>>> and guarantees that a given function will be executed at the specified >>>> vcpu. >>>> >>>> This patch also convert usage within the breakpoints system >>>> >>>> +void qemu_queue_work(CPUState *env, void (*func)(void *data), void >>>> *data); >>>> >>>> >>> >>> The name suggests that it is asynchronous. >>> >>> Why is this patch necessary? >>> >> >> to keep gdbstub working. >> > > "Because it fixes things". > > Please elaborate. > gdbstub is called from the I/O thread, and calls vcpu ioctls. So it has to use the on_vcpu mechanism to guarantee its execution in the right thread. What I meant is that currently, gdbstub is the only user of it, at least in qemu.git -- Glauber Costa. "Free as in Freedom" http://glommer.net "The less confident you are, the more serious you have to act." ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-03 12:37 ` Glauber Costa @ 2009-12-03 12:40 ` Avi Kivity 0 siblings, 0 replies; 24+ messages in thread From: Avi Kivity @ 2009-12-03 12:40 UTC (permalink / raw) To: Glauber Costa; +Cc: Glauber Costa, aliguori, qemu-devel On 12/03/2009 02:37 PM, Glauber Costa wrote: >> "Because it fixes things". >> >> Please elaborate. >> >> > gdbstub is called from the i/o thread , and call vcpu ioctls. So it > has to use the on_vcpu > mechanism to guarantee its execution in the right thread. > > What I meant is that currently, gdbstub is the only user of it, at > least in qemu.git > In this case, the patch is unnecessary for smp. Please submit it separately from this patchset, with a proper changelog. -- error compiling committee.c: too many arguments to function ^ permalink raw reply [flat|nested] 24+ messages in thread
* [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-02 13:48 ` [Qemu-devel] [PATCH 04/11] qemu_flush_work for remote vcpu execution Glauber Costa 2009-12-02 13:48 ` [Qemu-devel] [PATCH 05/11] tell kernel about all registers instead of just mp_state Glauber Costa 2009-12-03 12:20 ` [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution Avi Kivity @ 2009-12-03 14:29 ` Paolo Bonzini 2009-12-03 15:02 ` Glauber Costa 2 siblings, 1 reply; 24+ messages in thread From: Paolo Bonzini @ 2009-12-03 14:29 UTC (permalink / raw) To: qemu-devel On 12/02/2009 02:48 PM, Glauber Costa wrote: > + if (env == qemu_get_current_env()) { Will always be false for TCG + iothread. What's wrong with qemu_cpu_self(env)? It appears to do the same, and it would also make the whole thread-local storage stuff redundant. If there are performance problems, you can fix them by using TLS for the KVM case and keeping the existing implementation for TCG; at which point you can just use __thread because KVM can assume Linux and hence a working __thread. Paolo ^ permalink raw reply [flat|nested] 24+ messages in thread
* Re: [Qemu-devel] Re: [PATCH 04/11] qemu_flush_work for remote vcpu execution 2009-12-03 14:29 ` Paolo Bonzini @ 2009-12-03 15:02 ` Glauber Costa 0 siblings, 0 replies; 24+ messages in thread From: Glauber Costa @ 2009-12-03 15:02 UTC (permalink / raw) To: Paolo Bonzini; +Cc: qemu-devel On Thu, Dec 3, 2009 at 12:29 PM, Paolo Bonzini <pbonzini@redhat.com> wrote: > On 12/02/2009 02:48 PM, Glauber Costa wrote: >> >> + if (env == qemu_get_current_env()) { > > Will always be false for TCG + iothread. What's wrong with > qemu_cpu_self(env)? It appears to do the same, and it would also make the > whole thread-local storage stuff redundant. > Indeed. I'll turn to qemu_cpu_self. thanks -- Glauber Costa. "Free as in Freedom" http://glommer.net "The less confident you are, the more serious you have to act." ^ permalink raw reply [flat|nested] 24+ messages in thread