Date: Thu, 22 Oct 2009 15:37:05 -0200
From: Marcelo Tosatti
Subject: Re: [Qemu-devel] [RFC v2] queue_work proposal
Message-ID: <20091022173705.GD4450@amt.cnet>
In-Reply-To: <1252000886-20611-1-git-send-email-glommer@redhat.com>
To: Glauber Costa
Cc: aliguori@us.ibm.com, qemu-devel@nongnu.org

On Thu, Sep 03, 2009 at 02:01:26PM -0400, Glauber Costa wrote:
> Hi guys,
> 
> In this patch, I am attaching an early version of a new "on_vcpu" mechanism
> (after making it generic, I saw no reason to keep its name). It allows us to
> guarantee that a piece of code will be executed on a certain vcpu, indicated
> by a CPUState.
> 
> I am sorry for the big patch; I just dumped what I had so we can get early
> directions. When it comes time for submission, I'll split it accordingly.
> 
> As we discussed these days on qemu-devel, I am using pthread_set/get_specific
> for dealing with thread-local variables. Note that they are not used from
> signal handlers. A first optimization would be to use TLS variables where
> available.
> 
> In vl.c, I am providing one version of queue_work for the io-thread, and
> another for normal operation. The "normal" one should fix the problems Jan
> is having, since it does nothing more than immediately issue the function we
> want to execute.
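As an aside, here is a minimal sketch of what the pthread_set/get_specific
approach could look like. This is illustrative only, not the patch's actual
code: the key name, the pthread_once initialization, and the
qemu_set_current_env() hook are all assumptions; CPUState is QEMU's existing
type from cpu-defs.h.

    #include <pthread.h>

    /* One process-wide key mapping each thread to its CPUState. */
    static pthread_key_t current_env_key;
    static pthread_once_t current_env_once = PTHREAD_ONCE_INIT;

    static void current_env_key_init(void)
    {
        pthread_key_create(&current_env_key, NULL);
    }

    /* Hypothetical hook: each vcpu thread would call this once at startup. */
    void qemu_set_current_env(CPUState *env)
    {
        pthread_once(&current_env_once, current_env_key_init);
        pthread_setspecific(current_env_key, env);
    }

    CPUState *qemu_get_current_env(void)
    {
        /* Returns NULL in threads that never registered a CPUState. */
        return pthread_getspecific(current_env_key);
    }
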
>
> The io-thread version is tested with both tcg and kvm, and works (to the
> extent they were working before, which in the kvm case is not much).
>
> Changes from v1:
>  * Don't open the possibility of asynchronously calling queue_work,
>    suggested by Avi "Peter Parker" Kivity
>  * Use a local mutex, suggested by Paolo Bonzini
>
> Signed-off-by: Glauber Costa
> ---
>  cpu-all.h  |    3 ++
>  cpu-defs.h |   15 ++++++++++++
>  exec.c     |    1 +
>  kvm-all.c  |   58 +++++++++++++++++++---------------------------
>  kvm.h      |    7 +++++
>  vl.c       |   75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  6 files changed, 125 insertions(+), 34 deletions(-)
>
> diff --git a/cpu-all.h b/cpu-all.h
> index 1a6a812..529479e 100644
> --- a/cpu-all.h
> +++ b/cpu-all.h
> @@ -763,6 +763,9 @@ extern CPUState *cpu_single_env;
>  extern int64_t qemu_icount;
>  extern int use_icount;
>
> +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data);
> +void qemu_flush_work(CPUState *env);
> +
>  #define CPU_INTERRUPT_HARD   0x02 /* hardware interrupt pending */
>  #define CPU_INTERRUPT_EXITTB 0x04 /* exit the current TB (use for x86 a20 case) */
>  #define CPU_INTERRUPT_TIMER  0x08 /* internal timer exception pending */
> @@ -3808,6 +3835,50 @@ void qemu_cpu_kick(void *_env)
>      qemu_thread_signal(env->thread, SIGUSR1);
>  }
>
> +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
> +{
> +    QemuWorkItem wii;
> +
> +    env->queued_total++;
> +
> +    if (env == qemu_get_current_env()) {
> +        env->queued_local++;
> +        func(data);
> +        return;
> +    }
> +
> +    wii.func = func;
> +    wii.data = data;
> +    qemu_mutex_lock(&env->queue_lock);
> +    TAILQ_INSERT_TAIL(&env->queued_work, &wii, entry);
> +    qemu_mutex_unlock(&env->queue_lock);
> +
> +    qemu_thread_signal(env->thread, SIGUSR1);
> +
> +    qemu_mutex_lock(&env->queue_lock);
> +    while (!wii.done) {
> +        qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
> +    }
> +    qemu_mutex_unlock(&env->queue_lock);

How is qemu_flush_work() on the target vcpu supposed to run at all while
env->queue_lock is held across this wait? qemu_cond_wait() should wait on
env->queue_lock (the mutex that actually protects wii.done), and
qemu_global_mutex should be dropped before the wait and reacquired on return.
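
The fix described above is the standard condition-variable pattern: sleep on
the mutex that guards the flag you are checking, and release any outer lock
the other thread needs before blocking. A standalone sketch with raw pthreads
follows; it is a plain illustration, not QEMU code, and all names in it are
made up.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t queue_lock  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  work_cond   = PTHREAD_COND_INITIALIZER;
    static int work_done;

    /* Stands in for the target vcpu thread running its work queue: it
     * needs the "global" lock to execute the work and queue_lock to pop
     * the item and mark it done. */
    static void *vcpu_thread(void *arg)
    {
        pthread_mutex_lock(&global_lock);
        pthread_mutex_lock(&queue_lock);
        work_done = 1;
        pthread_cond_broadcast(&work_cond);
        pthread_mutex_unlock(&queue_lock);
        pthread_mutex_unlock(&global_lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t t;

        pthread_mutex_lock(&global_lock);      /* caller holds the global lock */
        pthread_create(&t, NULL, vcpu_thread, NULL);

        pthread_mutex_unlock(&global_lock);    /* drop it before waiting */
        pthread_mutex_lock(&queue_lock);
        while (!work_done) {
            /* Atomically releases queue_lock while asleep, so the vcpu
             * thread can acquire it and make progress. */
            pthread_cond_wait(&work_cond, &queue_lock);
        }
        pthread_mutex_unlock(&queue_lock);
        pthread_mutex_lock(&global_lock);      /* reacquire on return */
        pthread_mutex_unlock(&global_lock);

        pthread_join(t, NULL);
        printf("queued work completed\n");
        return 0;
    }

With the patch as posted, the waiter instead holds queue_lock while sleeping
on the global mutex, so the vcpu thread can never take queue_lock to mark the
item done: a deadlock.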