From: "Andreas Färber" <afaerber@suse.de>
To: Liu Ping Fan <qemulist@gmail.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
	qemu-devel@nongnu.org, Anthony Liguori <anthony@codemonkey.ws>
Subject: Re: [Qemu-devel] [PATCH 1/2] CPUArchState: introduce per-cpu lock
Date: Fri, 22 Jun 2012 14:55:27 +0200
Message-ID: <4FE46B3F.8070600@suse.de>
In-Reply-To: <1340291218-11669-2-git-send-email-qemulist@gmail.com>

On 21.06.2012 17:06, Liu Ping Fan wrote:
> introduce a per-cpu lock to protect against accesses from
> other vcpu threads.
> 
> Signed-off-by: Liu Ping Fan <pingfank@linux.vnet.ibm.com>
> ---
>  cpu-defs.h  |    2 ++
>  cpus.c      |   17 +++++++++++++++++
>  main-loop.h |    3 +++
>  3 files changed, 22 insertions(+), 0 deletions(-)
> 
> diff --git a/cpu-defs.h b/cpu-defs.h
> index f49e950..7305822 100644
> --- a/cpu-defs.h
> +++ b/cpu-defs.h
> @@ -30,6 +30,7 @@
>  #include "osdep.h"
>  #include "qemu-queue.h"
>  #include "targphys.h"
> +#include "qemu-thread-posix.h"
>  
>  #ifndef TARGET_LONG_BITS
>  #error TARGET_LONG_BITS must be defined before including this header
> @@ -220,6 +221,7 @@ typedef struct CPUWatchpoint {
>      CPU_COMMON_THREAD                                                   \
>      struct QemuCond *halt_cond;                                         \
>      int thread_kicked;                                                  \
> +    struct QemuMutex *cpu_lock;                                         \
>      struct qemu_work_item *queued_work_first, *queued_work_last;        \
>      const char *cpu_model_str;                                          \
>      struct KVMState *kvm_state;                                         \

Please don't add stuff to CPU_COMMON. Instead add it to CPUState in
qom/cpu.c. The QOM CPUState part 4 series moves many of those fields there.
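
Something along these lines (rough, untested sketch -- the exact layout of
CPUState depends on what the part 4 series has already moved there):

    /* qom/cpu.h */
    #include "qemu-thread.h"

    struct CPUState {
        /*< private >*/
        Object parent_obj;
        /*< public >*/

        /* Protects per-CPU state that other threads may touch. */
        QemuMutex lock;
        /* ... */
    };

Embedding the QemuMutex directly also avoids the extra g_malloc0() and the
pointer indirection on every lock/unlock.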

> diff --git a/cpus.c b/cpus.c
> index b182b3d..554f7bc 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -735,6 +735,7 @@ static void *qemu_kvm_cpu_thread_fn(void *arg)
>      env->thread_id = qemu_get_thread_id();
>      cpu_single_env = env;
>  
> +

Stray whitespace addition.

>      r = kvm_init_vcpu(env);
>      if (r < 0) {
>          fprintf(stderr, "kvm_init_vcpu failed: %s\n", strerror(-r));
> @@ -891,6 +892,20 @@ int qemu_cpu_is_self(void *_env)
>      return qemu_thread_is_self(env->thread);
>  }
>  
> +void qemu_mutex_lock_cpu(void *_env)
> +{
> +    CPUArchState *env = _env;
> +
> +    qemu_mutex_lock(env->cpu_lock);
> +}
> +
> +void qemu_mutex_unlock_cpu(void *_env)
> +{
> +    CPUArchState *env = _env;
> +
> +    qemu_mutex_unlock(env->cpu_lock);
> +}
> +

I don't like these helpers. For one, you are using void * arguments and
casting them; for another, you are using CPUArchState at all. With my
suggestion above these can simply take a CPUState *cpu.
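
Something like this (untested sketch, assuming the lock is embedded in
CPUState as suggested above):

    void qemu_mutex_lock_cpu(CPUState *cpu)
    {
        qemu_mutex_lock(&cpu->lock);
    }

    void qemu_mutex_unlock_cpu(CPUState *cpu)
    {
        qemu_mutex_unlock(&cpu->lock);
    }

Callers that only have a CPUArchState * can first obtain the corresponding
CPUState * (the already QOM'ified targets provide helpers for that), so the
void * casts disappear completely.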

>  void qemu_mutex_lock_iothread(void)
>  {
>      if (!tcg_enabled()) {
> @@ -1027,6 +1042,8 @@ void qemu_init_vcpu(void *_env)
>      env->nr_cores = smp_cores;
>      env->nr_threads = smp_threads;
>      env->stopped = 1;
> +    env->cpu_lock = g_malloc0(sizeof(QemuMutex));
> +    qemu_mutex_init(env->cpu_lock);

Are you sure this is not needed for linux-user/bsd-user? If it is not
needed, the field should be #ifdef'ed in the struct to make that explicit.
Otherwise, note that this function is never called there, so you need to
move the initialization to the initfn in qom/cpu.c and clean the lock up
in a finalizer.
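
If it turns out to be needed for *-user too, the init/cleanup would roughly
look like this in qom/cpu.c (untested sketch; the instance_init/
instance_finalize callbacks are hooked into the existing TypeInfo there):

    static void cpu_common_initfn(Object *obj)
    {
        CPUState *cpu = CPU(obj);

        qemu_mutex_init(&cpu->lock);
    }

    static void cpu_common_finalizefn(Object *obj)
    {
        CPUState *cpu = CPU(obj);

        qemu_mutex_destroy(&cpu->lock);
    }

    static const TypeInfo cpu_type_info = {
        .name = TYPE_CPU,
        .parent = TYPE_OBJECT,
        .instance_size = sizeof(CPUState),
        .instance_init = cpu_common_initfn,
        .instance_finalize = cpu_common_finalizefn,
        .abstract = true,
        .class_size = sizeof(CPUClass),
    };

That way the lock exists for every CPU object, independently of whether
qemu_init_vcpu() ever runs.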

Andreas

>      if (kvm_enabled()) {
>          qemu_kvm_start_vcpu(env);
>      } else if (tcg_enabled()) {
> diff --git a/main-loop.h b/main-loop.h
> index dce1cd9..d8d44a4 100644
> --- a/main-loop.h
> +++ b/main-loop.h
> @@ -323,6 +323,9 @@ void qemu_bh_delete(QEMUBH *bh);
>  int qemu_add_child_watch(pid_t pid);
>  #endif
>  
> +void qemu_mutex_lock_cpu(void *_env);
> +void qemu_mutex_unlock_cpu(void *_env);
> +
>  /**
>   * qemu_mutex_lock_iothread: Lock the main loop mutex.
>   *

-- 
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg
