From: Avi Kivity <avi@qumranet.com>
To: Ryan Harper <ryanh@us.ibm.com>
Cc: kvm-devel@lists.sourceforge.net
Subject: Re: [PATCH] Fix QEMU vcpu thread race with apic_reset
Date: Sat, 26 Apr 2008 10:16:26 +0300
Message-ID: <4812D6CA.4040407@qumranet.com>
In-Reply-To: <1209187574-21081-1-git-send-email-ryanh@us.ibm.com>
Ryan Harper wrote:
> There is a race between when the vcpu thread issues a create ioctl and when
> apic_reset() gets called, resulting in a badfd (EBADF) error.
>
The problem is indeed there (apic_reset() can run in the main thread and
use the vcpu fd before the vcpu thread's create ioctl has completed), but
the fix is wrong:
> diff --git a/qemu/qemu-kvm.c b/qemu/qemu-kvm.c
> index 78127de..3513e8c 100644
> --- a/qemu/qemu-kvm.c
> +++ b/qemu/qemu-kvm.c
> @@ -31,7 +31,9 @@ extern int smp_cpus;
> static int qemu_kvm_reset_requested;
>
> pthread_mutex_t qemu_mutex = PTHREAD_MUTEX_INITIALIZER;
> +pthread_mutex_t vcpu_mutex = PTHREAD_MUTEX_INITIALIZER;
> pthread_cond_t qemu_aio_cond = PTHREAD_COND_INITIALIZER;
> +pthread_cond_t qemu_vcpuup_cond = PTHREAD_COND_INITIALIZER;
> __thread struct vcpu_info *vcpu;
>
> struct qemu_kvm_signal_table {
> @@ -369,6 +371,11 @@ static void *ap_main_loop(void *_env)
> sigfillset(&signals);
> sigprocmask(SIG_BLOCK, &signals, NULL);
> kvm_create_vcpu(kvm_context, env->cpu_index);
> + /* block until cond_wait occurs */
> + pthread_mutex_lock(&vcpu_mutex);
> + /* now we can signal */
> + pthread_cond_signal(&qemu_vcpuup_cond);
> + pthread_mutex_unlock(&vcpu_mutex);
> kvm_qemu_init_env(env);
> kvm_main_loop_cpu(env);
> return NULL;
> @@ -388,9 +395,10 @@ static void kvm_add_signal(struct qemu_kvm_signal_table *sigtab, int signum)
>
> void kvm_init_new_ap(int cpu, CPUState *env)
> {
> + pthread_mutex_lock(&vcpu_mutex);
> pthread_create(&vcpu_info[cpu].thread, NULL, ap_main_loop, env);
> - /* FIXME: wait for thread to spin up */
> - usleep(200);
> + pthread_cond_wait(&qemu_vcpuup_cond, &vcpu_mutex);
>
pthread_cond_wait() is never correct outside a loop: the signal may
arrive before wait is called, and pthread_cond_wait() is also subject to
spurious wakeups.
The usual idiom is:

    while (condition is not fulfilled)
            pthread_cond_wait(...);
I see you have something there to ensure we block, but please use the
right idiom.
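Spelled out, with the mutex held around both the predicate test and the
wait (generic names, not taken from the patch):

    pthread_mutex_lock(&mutex);
    while (!condition)
            pthread_cond_wait(&cond, &mutex);
    pthread_mutex_unlock(&mutex);

The loop re-tests the predicate after every wakeup, so neither an early
signal nor a spurious wakeup can let the waiter through too soon.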
> + pthread_mutex_unlock(&vcpu_mutex);
> }
>
Please reuse qemu_mutex for this; there's no need for a new one.
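Putting the two together, something along these lines (assuming a
created flag is added to struct vcpu_info, which does not exist today,
and keeping your qemu_vcpuup_cond):

    /* assumed: an "int created;" field added to struct vcpu_info */

    /* vcpu thread, in ap_main_loop(), right after kvm_create_vcpu(): */
    pthread_mutex_lock(&qemu_mutex);
    vcpu_info[env->cpu_index].created = 1;
    pthread_cond_signal(&qemu_vcpuup_cond);
    pthread_mutex_unlock(&qemu_mutex);

    /* main thread, in kvm_init_new_ap(): */
    pthread_mutex_lock(&qemu_mutex);
    pthread_create(&vcpu_info[cpu].thread, NULL, ap_main_loop, env);
    while (!vcpu_info[cpu].created)
            pthread_cond_wait(&qemu_vcpuup_cond, &qemu_mutex);
    pthread_mutex_unlock(&qemu_mutex);

(If kvm_init_new_ap() already runs with qemu_mutex held, drop the
lock/unlock pair around it.)  A per-vcpu flag also keeps this correct
when several APs are brought up in turn.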
--
Do not meddle in the internals of kernels, for they are subtle and quick to panic.
Thread overview: 9+ messages
2008-04-26 5:26 [PATCH] Fix QEMU vcpu thread race with apic_reset Ryan Harper
2008-04-26 5:33 ` Ryan Harper
2008-04-26 7:16 ` Avi Kivity [this message]
2008-04-28 16:26 ` Ryan Harper
2008-04-28 20:02 ` Anthony Liguori
2008-04-29 0:46 ` Ulrich Drepper
2008-04-26 16:58 ` Ulrich Drepper
2008-04-26 17:04 ` Ulrich Drepper
2008-04-26 17:56 ` Anthony Liguori