From: Sergey Fedorov <sergey.fedorov@linaro.org>
To: qemu-devel@nongnu.org
Cc: patches@linaro.org, Sergey Fedorov <serge.fdrv@gmail.com>,
	Sergey Fedorov <sergey.fedorov@linaro.org>,
	Riku Voipio <riku.voipio@iki.fi>
Subject: [Qemu-devel] [RFC 4/8] linux-user: Rework exclusive operation mechanism
Date: Mon, 20 Jun 2016 01:28:29 +0300
Message-ID: <1466375313-7562-5-git-send-email-sergey.fedorov@linaro.org>
In-Reply-To: <1466375313-7562-1-git-send-email-sergey.fedorov@linaro.org>

From: Sergey Fedorov <serge.fdrv@gmail.com>

A single variable, 'pending_cpus', was used both to count the currently
running CPUs and to signal a pending exclusive operation request.

To prepare for supporting operations which require a quiescent state,
such as a translation buffer flush, it is useful to keep a counter of
currently running CPUs that is always up to date.

Use a separate variable, 'tcg_pending_cpus', to count the currently
running CPUs, and a separate variable, 'exclusive_pending', to indicate
that an exclusive operation is pending.
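
For reference, the resulting protocol can be modelled as the following
self-contained sketch (not part of the patch; the model_*() names are
hypothetical, and the real code additionally walks the CPU list and
kicks running CPUs with cpu_exit(), which is elided here):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t exclusive_cond = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t exclusive_resume = PTHREAD_COND_INITIALIZER;
    static bool exclusive_pending;  /* exclusive operation requested */
    static int tcg_pending_cpus;    /* CPUs currently executing code */

    /* Called by a CPU thread before each execution slice. */
    static void model_cpu_exec_start(void)
    {
        pthread_mutex_lock(&exclusive_lock);
        while (exclusive_pending) {  /* i.e. exclusive_idle() */
            pthread_cond_wait(&exclusive_resume, &exclusive_lock);
        }
        tcg_pending_cpus++;
        pthread_mutex_unlock(&exclusive_lock);
    }

    /* Called by a CPU thread after each execution slice. */
    static void model_cpu_exec_end(void)
    {
        pthread_mutex_lock(&exclusive_lock);
        if (--tcg_pending_cpus == 0) {
            pthread_cond_broadcast(&exclusive_cond);
        }
        while (exclusive_pending) {
            pthread_cond_wait(&exclusive_resume, &exclusive_lock);
        }
        pthread_mutex_unlock(&exclusive_lock);
    }

    /* Caller must not itself be counted in tcg_pending_cpus. */
    static void model_start_exclusive(void)
    {
        pthread_mutex_lock(&exclusive_lock);
        while (exclusive_pending) {
            pthread_cond_wait(&exclusive_resume, &exclusive_lock);
        }
        exclusive_pending = true;
        /* The real code kicks running CPUs via cpu_exit() here. */
        while (tcg_pending_cpus) {
            pthread_cond_wait(&exclusive_cond, &exclusive_lock);
        }
        /* ... quiescent-state work, e.g. a TB flush, goes here ... */
    }

    /* Releases the lock taken in model_start_exclusive(). */
    static void model_end_exclusive(void)
    {
        exclusive_pending = false;
        pthread_cond_broadcast(&exclusive_resume);
        pthread_mutex_unlock(&exclusive_lock);
    }

Broadcasting 'exclusive_cond' when the last CPU leaves means the
requester simply waits until 'tcg_pending_cpus' drops to zero, instead
of pre-counting running CPUs into 'pending_cpus' as before.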

Signed-off-by: Sergey Fedorov <serge.fdrv@gmail.com>
Signed-off-by: Sergey Fedorov <sergey.fedorov@linaro.org>
---
 linux-user/main.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/linux-user/main.c b/linux-user/main.c
index b9a4e0ea45ac..485336f78b8f 100644
--- a/linux-user/main.c
+++ b/linux-user/main.c
@@ -111,7 +111,8 @@ static pthread_mutex_t cpu_list_mutex = PTHREAD_MUTEX_INITIALIZER;
 static pthread_mutex_t exclusive_lock = PTHREAD_MUTEX_INITIALIZER;
 static pthread_cond_t exclusive_cond = PTHREAD_COND_INITIALIZER;
 static pthread_cond_t exclusive_resume = PTHREAD_COND_INITIALIZER;
-static int pending_cpus;
+static bool exclusive_pending;
+static int tcg_pending_cpus;
 
 /* Make sure everything is in a consistent state for calling fork().  */
 void fork_start(void)
@@ -133,7 +134,8 @@ void fork_end(int child)
                 QTAILQ_REMOVE(&cpus, cpu, node);
             }
         }
-        pending_cpus = 0;
+        tcg_pending_cpus = 0;
+        exclusive_pending = false;
         pthread_mutex_init(&exclusive_lock, NULL);
         pthread_mutex_init(&cpu_list_mutex, NULL);
         pthread_cond_init(&exclusive_cond, NULL);
@@ -150,7 +152,7 @@ void fork_end(int child)
    must be held.  */
 static inline void exclusive_idle(void)
 {
-    while (pending_cpus) {
+    while (exclusive_pending) {
         pthread_cond_wait(&exclusive_resume, &exclusive_lock);
     }
 }
@@ -164,15 +166,14 @@ static inline void start_exclusive(void)
     pthread_mutex_lock(&exclusive_lock);
     exclusive_idle();
 
-    pending_cpus = 1;
+    exclusive_pending = true;
     /* Make all other cpus stop executing.  */
     CPU_FOREACH(other_cpu) {
         if (other_cpu->running) {
-            pending_cpus++;
             cpu_exit(other_cpu);
         }
     }
-    if (pending_cpus > 1) {
+    while (tcg_pending_cpus) {
         pthread_cond_wait(&exclusive_cond, &exclusive_lock);
     }
 }
@@ -180,7 +181,7 @@ static inline void start_exclusive(void)
 /* Finish an exclusive operation.  */
 static inline void __attribute__((unused)) end_exclusive(void)
 {
-    pending_cpus = 0;
+    exclusive_pending = false;
     pthread_cond_broadcast(&exclusive_resume);
     pthread_mutex_unlock(&exclusive_lock);
 }
@@ -191,6 +192,7 @@ static inline void cpu_exec_start(CPUState *cpu)
     pthread_mutex_lock(&exclusive_lock);
     exclusive_idle();
     cpu->running = true;
+    tcg_pending_cpus++;
     pthread_mutex_unlock(&exclusive_lock);
 }
 
@@ -199,11 +201,9 @@ static inline void cpu_exec_end(CPUState *cpu)
 {
     pthread_mutex_lock(&exclusive_lock);
     cpu->running = false;
-    if (pending_cpus > 1) {
-        pending_cpus--;
-        if (pending_cpus == 1) {
-            pthread_cond_signal(&exclusive_cond);
-        }
+    tcg_pending_cpus--;
+    if (!tcg_pending_cpus) {
+        pthread_cond_broadcast(&exclusive_cond);
     }
     exclusive_idle();
     pthread_mutex_unlock(&exclusive_lock);
-- 
1.9.1
