Date: Wed, 2 Dec 2009 11:27:53 -0200
From: Marcelo Tosatti
Subject: Re: [Qemu-devel] [PATCH v2 04/11] qemu_flush_work for remote vcpu execution
Message-ID: <20091202132753.GA29490@amt.cnet>
References: <1259671897-22232-1-git-send-email-glommer@redhat.com>
 <1259671897-22232-2-git-send-email-glommer@redhat.com>
 <1259671897-22232-3-git-send-email-glommer@redhat.com>
 <1259671897-22232-4-git-send-email-glommer@redhat.com>
 <1259671897-22232-5-git-send-email-glommer@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <1259671897-22232-5-git-send-email-glommer@redhat.com>
List-Id: qemu-devel.nongnu.org
To: Glauber Costa
Cc: agraf@suse.de, aliguori@us.ibm.com, qemu-devel@nongnu.org, avi@redhat.com

On Tue, Dec 01, 2009 at 10:51:30AM -0200, Glauber Costa wrote:
> This function is similar to qemu-kvm's on_vcpu mechanism. Totally synchronous,
> and guarantees that a given function will be executed at the specified vcpu.
>
> This patch also converts usage within the breakpoints system.
>
> Signed-off-by: Glauber Costa
> ---
> @@ -3436,8 +3441,7 @@ static int tcg_has_work(void);
>  
>  static pthread_key_t current_env;
>  
> -CPUState *qemu_get_current_env(void);
> -CPUState *qemu_get_current_env(void)
> +static CPUState *qemu_get_current_env(void)
>  {
>      return pthread_getspecific(current_env);
>  }

> @@ -3474,8 +3478,10 @@ static int qemu_init_main_loop(void)
>  
>  static void qemu_wait_io_event(CPUState *env)
>  {
> -    while (!tcg_has_work())
> +    while (!tcg_has_work()) {

This checks all cpus, while for KVM it should check only the current cpu.

> +        qemu_flush_work(env);
>          qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
> +    }

KVM vcpu threads should block SIGUSR1, set the in-kernel signal mask with
the KVM_SET_SIGNAL_MASK ioctl, and eat the signal in qemu_wait_io_event
(qemu_flush_work should run after eating the signal), similarly to
qemu-kvm's kvm_main_loop_wait.

Otherwise a vcpu thread can lose a signal (say, by handling SIGUSR1 while
not holding qemu_global_mutex, just before kernel entry). I think this is
the source of the problems patch 8 attempts to fix.
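
Roughly what I have in mind is sketched below. This is illustrative only,
not the actual qemu-kvm code: the names kvm_init_vcpu_signals and
qemu_wait_io_event_kvm are made up, env->kvm_fd / qemu_flush_work /
qemu_cond_timedwait / qemu_global_mutex are assumed from kvm-all.c and this
series, and error handling is omitted.

#include <pthread.h>
#include <signal.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Run once per vcpu thread, before the first KVM_RUN. */
static void kvm_init_vcpu_signals(CPUState *env)
{
    struct kvm_signal_mask *kmask;
    sigset_t set;

    /* Block SIG_IPI (SIGUSR1) in userspace: outside KVM_RUN it is never
     * delivered asynchronously, it just stays pending. */
    pthread_sigmask(SIG_BLOCK, NULL, &set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_SETMASK, &set, NULL);

    /* Unblock it only for the duration of KVM_RUN via the in-kernel mask,
     * so a pending SIG_IPI forces an exit from guest mode. */
    sigdelset(&set, SIGUSR1);
    kmask = malloc(sizeof(*kmask) + sizeof(set));
    kmask->len = 8;                     /* kernel sigset size */
    memcpy(kmask->sigset, &set, sizeof(set));
    ioctl(env->kvm_fd, KVM_SET_SIGNAL_MASK, kmask);
    free(kmask);
}

/* Called with qemu_global_mutex held, as in this patch. */
static void qemu_wait_io_event_kvm(CPUState *env)
{
    const struct timespec zero = { 0, 0 };
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);

    /* Eat any SIG_IPI left pending while we ran in userspace.  The wakeup
     * cannot be lost: the signal stays pending until consumed here. */
    while (sigtimedwait(&set, NULL, &zero) > 0) {
        /* the wakeup itself is the whole payload */
    }

    qemu_flush_work(env);
    qemu_cond_timedwait(env->halt_cond, &qemu_global_mutex, 1000);
}

With that arrangement the only place SIG_IPI is ever acted on is
qemu_wait_io_event_kvm, under qemu_global_mutex, so the window where a
wakeup arrives just before kernel entry and is silently dropped goes away.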