From: Glauber Costa <glommer@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: aliguori@us.ibm.com, qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [RFC v2] queue_work proposal
Date: Thu, 22 Oct 2009 16:57:23 -0200
Message-ID: <20091022185655.GK8092@mothafucka.localdomain>
In-Reply-To: <20091022173705.GD4450@amt.cnet>
On Thu, Oct 22, 2009 at 03:37:05PM -0200, Marcelo Tosatti wrote:
> On Thu, Sep 03, 2009 at 02:01:26PM -0400, Glauber Costa wrote:
> > Hi guys
> >
> > In this patch, I am attaching an early version of a new "on_vcpu" mechanism (after
> > making it generic, I saw no reason to keep that name). It allows us to guarantee
> > that a piece of code will be executed on a certain vcpu, indicated by a CPUState.
> >
> > I am sorry for the big patch; I just dumped what I had so we can settle on a direction early.
> > When it reaches submission state, I'll split it accordingly.
> >
> > As we discussed recently on qemu-devel, I am using pthread_set/get_specific for
> > dealing with thread-local variables. Note that they are not used from signal handlers.
> > A first optimization would be to use TLS variables where available.
> >
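For illustration, the pthread-specific scheme described above looks roughly like
the sketch below. This is illustrative only: qemu_get_current_env() appears in the
patch, but the key setup and the setter name are assumptions, not code from it.

    #include <pthread.h>

    static pthread_key_t current_env_key;

    /* Called once at startup, before any vcpu thread is created. */
    static void qemu_init_current_env_key(void)
    {
        pthread_key_create(&current_env_key, NULL);
    }

    /* Called by each vcpu thread as it starts, with its own CPUState. */
    static void qemu_set_current_env(CPUState *env)
    {
        pthread_setspecific(current_env_key, env);
    }

    /* Returns the CPUState of the calling thread, or NULL for non-vcpu threads. */
    static CPUState *qemu_get_current_env(void)
    {
        return pthread_getspecific(current_env_key);
    }
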
> > In vl.c, I am providing one version of queue_work for the IO-thread, and another for normal
> > operation. The "normal" one should fix the problems Jan is having, since it does nothing
> > more than issue the function we want to execute.
> >
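(For reference, the non-io-thread variant described above amounts to little more
than the sketch below; the assumption is that without the io-thread the caller is
already the only thread running vcpu code, so the work can simply run inline.)

    void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
    {
        /* Single-threaded case: we are already on the right thread. */
        func(data);
    }
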
> > The io-thread version is tested with both tcg and kvm, and works (to the extent they were
> > working before, which, in the kvm case, is not much).
> >
> > Changes from v1:
> > * Don't open the possibility of calling queue_work asynchronously, suggested by
> >   Avi "Peter Parker" Kivity
> > * Use a local mutex, suggested by Paolo Bonzini
> >
> > Signed-off-by: Glauber Costa <glommer@redhat.com>
> > ---
> >  cpu-all.h  |  3 ++
> >  cpu-defs.h | 15 ++++++++++++
> >  exec.c     |  1 +
> >  kvm-all.c  | 58 +++++++++++++++++++---------------------------
> >  kvm.h      |  7 +++++
> >  vl.c       | 75 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  6 files changed, 125 insertions(+), 34 deletions(-)
> >
> > diff --git a/cpu-all.h b/cpu-all.h
> > index 1a6a812..529479e 100644
> > --- a/cpu-all.h
> > +++ b/cpu-all.h
> > @@ -763,6 +763,9 @@ extern CPUState *cpu_single_env;
> >  extern int64_t qemu_icount;
> >  extern int use_icount;
> >
> > +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data);
> > +void qemu_flush_work(CPUState *env);
> > +
> >  #define CPU_INTERRUPT_HARD   0x02 /* hardware interrupt pending */
> >  #define CPU_INTERRUPT_EXITTB 0x04 /* exit the current TB (use for x86 a20 case) */
> >  #define CPU_INTERRUPT_TIMER  0x08 /* internal timer exception pending */
>
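(For illustration, the API above would be used roughly as follows; the helpers
below are hypothetical examples, not part of the patch:)

    /* Force a given vcpu out of its current TB, from whatever thread we happen
       to be on: qemu_queue_work() runs the callback on env's own thread and
       blocks until it has completed, or runs it inline when the caller already
       is that vcpu's thread. */
    static void do_exit_tb(void *data)
    {
        CPUState *env = data;

        cpu_interrupt(env, CPU_INTERRUPT_EXITTB);
    }

    static void request_exit_tb(CPUState *env)
    {
        qemu_queue_work(env, do_exit_tb, env);
    }
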
> > @@ -3808,6 +3835,50 @@ void qemu_cpu_kick(void *_env)
> >      qemu_thread_signal(env->thread, SIGUSR1);
> >  }
> >
> > +void qemu_queue_work(CPUState *env, void (*func)(void *data), void *data)
> > +{
> > +    QemuWorkItem wii;
> > +
> > +    env->queued_total++;
> > +
> > +    if (env == qemu_get_current_env()) {
> > +        env->queued_local++;
> > +        func(data);
> > +        return;
> > +    }
> > +
> > +    wii.func = func;
> > +    wii.data = data;
> > +    qemu_mutex_lock(&env->queue_lock);
> > +    TAILQ_INSERT_TAIL(&env->queued_work, &wii, entry);
> > +    qemu_mutex_unlock(&env->queue_lock);
> > +
> > +    qemu_thread_signal(env->thread, SIGUSR1);
> > +
> > +    qemu_mutex_lock(&env->queue_lock);
> > +    while (!wii.done) {
> > +        qemu_cond_wait(&env->work_cond, &qemu_global_mutex);
> > +    }
> > +    qemu_mutex_unlock(&env->queue_lock);
>
> How's qemu_flush_work supposed to execute if env->queue_lock is held
> here?
>
> qemu_cond_wait() should work with env->queue_lock, and qemu_global_mutex
> should be dropped before waiting and reacquired on return.
After some thinking, I don't plan to introduce this until it is absolutely needed.
I believe we can refactor a lot of code to actually run on the vcpu it belongs to,
instead of triggering a remote event.
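
For reference, the locking scheme Marcelo describes amounts to roughly the sketch
below (illustrative only, reusing the names from the patch; qemu_cond_wait() is
assumed to atomically drop and re-take the mutex it is given):

    /* Drop the global mutex so the target vcpu can make progress and reach
       qemu_flush_work(), then wait on the per-vcpu condition under
       env->queue_lock only. */
    qemu_mutex_unlock(&qemu_global_mutex);

    qemu_mutex_lock(&env->queue_lock);
    while (!wii.done) {
        qemu_cond_wait(&env->work_cond, &env->queue_lock);
    }
    qemu_mutex_unlock(&env->queue_lock);

    qemu_mutex_lock(&qemu_global_mutex);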