* [Qemu-devel] [RFC PATCH V3 1/3] cpus: protect queued_work_* with work_mutex.
2015-07-17 14:45 [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part fred.konrad
@ 2015-07-17 14:45 ` fred.konrad
2015-07-20 16:22 ` Alex Bennée
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 2/3] cpus: add tcg_exec_flag fred.konrad
` (3 subsequent siblings)
4 siblings, 1 reply; 8+ messages in thread
From: fred.konrad @ 2015-07-17 14:45 UTC (permalink / raw)
To: qemu-devel, mttcg
Cc: mark.burton, a.rigo, guillaume.delbergue, pbonzini, alex.bennee,
fred.konrad
From: KONRAD Frederic <fred.konrad@greensocs.com>
This protects the queued_work_* fields used by async_run_on_cpu, run_on_cpu
and flush_queued_work with a new lock (work_mutex) to prevent concurrent
access.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Changes V1 -> V2:
* Unlock the mutex while running the callback.
---
cpus.c | 11 +++++++++++
include/qom/cpu.h | 3 +++
qom/cpu.c | 1 +
3 files changed, 15 insertions(+)
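The locking pattern this patch applies can be sketched outside QEMU as follows
(a minimal pthread-based sketch; the toy_cpu/work_item names are illustrative,
not the real QEMU types). Note how the flush loop drops the lock around each
callback, matching the V1 -> V2 change above:

```c
/* Minimal sketch of a per-CPU work queue protected by a mutex. */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

struct work_item {
    void (*func)(void *data);
    void *data;
    bool done;
    struct work_item *next;
};

struct toy_cpu {
    pthread_mutex_t work_mutex;           /* guards first/last and the list */
    struct work_item *first, *last;
};

static void queue_work(struct toy_cpu *cpu, struct work_item *wi)
{
    pthread_mutex_lock(&cpu->work_mutex);
    wi->next = NULL;
    wi->done = false;
    if (cpu->first == NULL) {
        cpu->first = wi;
    } else {
        cpu->last->next = wi;
    }
    cpu->last = wi;
    pthread_mutex_unlock(&cpu->work_mutex);
}

static void flush_work(struct toy_cpu *cpu)
{
    struct work_item *wi;

    pthread_mutex_lock(&cpu->work_mutex);
    while ((wi = cpu->first)) {
        cpu->first = wi->next;
        /* Drop the lock around the callback so it may queue more work
         * (or take other locks) without deadlocking. */
        pthread_mutex_unlock(&cpu->work_mutex);
        wi->func(wi->data);
        pthread_mutex_lock(&cpu->work_mutex);
        wi->done = true;
    }
    cpu->last = NULL;
    pthread_mutex_unlock(&cpu->work_mutex);
}

/* Tiny demo callback used below. */
static int flushed_count;
static void count_cb(void *data) { (void)data; flushed_count++; }
```

Unlocking work_mutex around wi->func() is the point of the V2 change: a
callback that itself calls run_on_cpu/async_run_on_cpu would otherwise
self-deadlock on the same mutex.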
diff --git a/cpus.c b/cpus.c
index b00a423..eabd4b1 100644
--- a/cpus.c
+++ b/cpus.c
@@ -845,6 +845,8 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
wi.func = func;
wi.data = data;
wi.free = false;
+
+ qemu_mutex_lock(&cpu->work_mutex);
if (cpu->queued_work_first == NULL) {
cpu->queued_work_first = &wi;
} else {
@@ -853,6 +855,7 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
cpu->queued_work_last = &wi;
wi.next = NULL;
wi.done = false;
+ qemu_mutex_unlock(&cpu->work_mutex);
qemu_cpu_kick(cpu);
while (!wi.done) {
@@ -876,6 +879,8 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
wi->func = func;
wi->data = data;
wi->free = true;
+
+ qemu_mutex_lock(&cpu->work_mutex);
if (cpu->queued_work_first == NULL) {
cpu->queued_work_first = wi;
} else {
@@ -884,6 +889,7 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
cpu->queued_work_last = wi;
wi->next = NULL;
wi->done = false;
+ qemu_mutex_unlock(&cpu->work_mutex);
qemu_cpu_kick(cpu);
}
@@ -896,15 +902,20 @@ static void flush_queued_work(CPUState *cpu)
return;
}
+ qemu_mutex_lock(&cpu->work_mutex);
while ((wi = cpu->queued_work_first)) {
cpu->queued_work_first = wi->next;
+ qemu_mutex_unlock(&cpu->work_mutex);
wi->func(wi->data);
+ qemu_mutex_lock(&cpu->work_mutex);
wi->done = true;
if (wi->free) {
g_free(wi);
}
}
cpu->queued_work_last = NULL;
+ qemu_mutex_unlock(&cpu->work_mutex);
+
qemu_cond_broadcast(&qemu_work_cond);
}
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 20aabc9..efa9624 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -242,6 +242,8 @@ struct kvm_run;
* @mem_io_pc: Host Program Counter at which the memory was accessed.
* @mem_io_vaddr: Target virtual address at which the memory was accessed.
* @kvm_fd: vCPU file descriptor for KVM.
+ * @work_mutex: Lock to prevent multiple access to queued_work_*.
+ * @queued_work_first: First asynchronous work pending.
*
* State of one CPU core or thread.
*/
@@ -262,6 +264,7 @@ struct CPUState {
uint32_t host_tid;
bool running;
struct QemuCond *halt_cond;
+ QemuMutex work_mutex;
struct qemu_work_item *queued_work_first, *queued_work_last;
bool thread_kicked;
bool created;
diff --git a/qom/cpu.c b/qom/cpu.c
index eb9cfec..4e12598 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -316,6 +316,7 @@ static void cpu_common_initfn(Object *obj)
cpu->gdb_num_regs = cpu->gdb_num_g_regs = cc->gdb_num_core_regs;
QTAILQ_INIT(&cpu->breakpoints);
QTAILQ_INIT(&cpu->watchpoints);
+ qemu_mutex_init(&cpu->work_mutex);
}
static void cpu_common_finalize(Object *obj)
--
1.9.0
^ permalink raw reply related [flat|nested] 8+ messages in thread
* Re: [Qemu-devel] [RFC PATCH V3 1/3] cpus: protect queued_work_* with work_mutex.
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 1/3] cpus: protect queued_work_* with work_mutex fred.konrad
@ 2015-07-20 16:22 ` Alex Bennée
0 siblings, 0 replies; 8+ messages in thread
From: Alex Bennée @ 2015-07-20 16:22 UTC (permalink / raw)
To: fred.konrad
Cc: mttcg, mark.burton, qemu-devel, a.rigo, guillaume.delbergue,
pbonzini
fred.konrad@greensocs.com writes:
> From: KONRAD Frederic <fred.konrad@greensocs.com>
>
> This protects the queued_work_* fields used by async_run_on_cpu, run_on_cpu
> and flush_queued_work with a new lock (work_mutex) to prevent concurrent
> access.
>
> Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
>
> Changes V1 -> V2:
> * Unlock the mutex while running the callback.
> ---
> cpus.c | 11 +++++++++++
> include/qom/cpu.h | 3 +++
> qom/cpu.c | 1 +
> 3 files changed, 15 insertions(+)
>
> diff --git a/cpus.c b/cpus.c
> index b00a423..eabd4b1 100644
> --- a/cpus.c
> +++ b/cpus.c
> @@ -845,6 +845,8 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> wi.func = func;
> wi.data = data;
> wi.free = false;
> +
> + qemu_mutex_lock(&cpu->work_mutex);
> if (cpu->queued_work_first == NULL) {
> cpu->queued_work_first = &wi;
> } else {
> @@ -853,6 +855,7 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> cpu->queued_work_last = &wi;
> wi.next = NULL;
> wi.done = false;
> + qemu_mutex_unlock(&cpu->work_mutex);
>
> qemu_cpu_kick(cpu);
> while (!wi.done) {
> @@ -876,6 +879,8 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> wi->func = func;
> wi->data = data;
> wi->free = true;
> +
> + qemu_mutex_lock(&cpu->work_mutex);
> if (cpu->queued_work_first == NULL) {
> cpu->queued_work_first = wi;
> } else {
> @@ -884,6 +889,7 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
> cpu->queued_work_last = wi;
> wi->next = NULL;
> wi->done = false;
> + qemu_mutex_unlock(&cpu->work_mutex);
>
> qemu_cpu_kick(cpu);
> }
> @@ -896,15 +902,20 @@ static void flush_queued_work(CPUState *cpu)
> return;
> }
>
> + qemu_mutex_lock(&cpu->work_mutex);
> while ((wi = cpu->queued_work_first)) {
> cpu->queued_work_first = wi->next;
> + qemu_mutex_unlock(&cpu->work_mutex);
> wi->func(wi->data);
> + qemu_mutex_lock(&cpu->work_mutex);
> wi->done = true;
> if (wi->free) {
> g_free(wi);
> }
> }
> cpu->queued_work_last = NULL;
> + qemu_mutex_unlock(&cpu->work_mutex);
> +
> qemu_cond_broadcast(&qemu_work_cond);
> }
>
> diff --git a/include/qom/cpu.h b/include/qom/cpu.h
> index 20aabc9..efa9624 100644
> --- a/include/qom/cpu.h
> +++ b/include/qom/cpu.h
> @@ -242,6 +242,8 @@ struct kvm_run;
> * @mem_io_pc: Host Program Counter at which the memory was accessed.
> * @mem_io_vaddr: Target virtual address at which the memory was accessed.
> * @kvm_fd: vCPU file descriptor for KVM.
> + * @work_mutex: Lock to prevent multiple access to queued_work_*.
> + * @queued_work_first: First asynchronous work pending.
> *
> * State of one CPU core or thread.
> */
> @@ -262,6 +264,7 @@ struct CPUState {
> uint32_t host_tid;
> bool running;
> struct QemuCond *halt_cond;
> + QemuMutex work_mutex;
> struct qemu_work_item *queued_work_first, *queued_work_last;
> bool thread_kicked;
> bool created;
> diff --git a/qom/cpu.c b/qom/cpu.c
> index eb9cfec..4e12598 100644
> --- a/qom/cpu.c
> +++ b/qom/cpu.c
> @@ -316,6 +316,7 @@ static void cpu_common_initfn(Object *obj)
> cpu->gdb_num_regs = cpu->gdb_num_g_regs = cc->gdb_num_core_regs;
> QTAILQ_INIT(&cpu->breakpoints);
> QTAILQ_INIT(&cpu->watchpoints);
> + qemu_mutex_init(&cpu->work_mutex);
> }
>
> static void cpu_common_finalize(Object *obj)
--
Alex Bennée
* [Qemu-devel] [RFC PATCH V3 2/3] cpus: add tcg_exec_flag.
2015-07-17 14:45 [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part fred.konrad
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 1/3] cpus: protect queued_work_* with work_mutex fred.konrad
@ 2015-07-17 14:45 ` fred.konrad
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 3/3] cpus: introduce async_run_safe_work_on_cpu fred.konrad
` (2 subsequent siblings)
4 siblings, 0 replies; 8+ messages in thread
From: fred.konrad @ 2015-07-17 14:45 UTC (permalink / raw)
To: qemu-devel, mttcg
Cc: mark.burton, a.rigo, guillaume.delbergue, pbonzini, alex.bennee,
fred.konrad
From: KONRAD Frederic <fred.konrad@greensocs.com>
This flag indicates the state of the VCPU thread:
* 0 if the VCPU is allowed to execute code.
* 1 if the VCPU is currently executing code.
* -1 if the VCPU is not allowed to execute code.
This allows us to atomically check and run safe work, or to check and continue
TCG execution.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Changes V2 -> V3:
* introduce a third state which allows or forbids execution.
* atomically check and set the flag when starting or blocking code execution.
Changes V1 -> V2:
* do both tcg_executing = 0 or 1 in cpu_exec().
---
cpu-exec.c | 5 +++++
include/qom/cpu.h | 32 ++++++++++++++++++++++++++++++++
qom/cpu.c | 19 +++++++++++++++++++
3 files changed, 56 insertions(+)
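The intended three-state semantics can be sketched with C11 atomics standing in
for QEMU's atomic_cmpxchg() (illustrative names; note that QEMU's
atomic_cmpxchg returns the *old* value rather than a success flag, so the real
patch phrases the test differently):

```c
/* Sketch of the tcg_exec_flag states:
 *   0  -> the vCPU is allowed to enter the execution loop,
 *   1  -> the vCPU is currently executing code,
 *  -1  -> the vCPU must not enter the execution loop. */
#include <stdatomic.h>
#include <stdbool.h>

static bool try_set_flag(atomic_int *flag, int from, int to)
{
    int expected = from;
    /* Succeed either if we performed the from -> to transition ourselves,
     * or if the flag was already in the target state. */
    return atomic_compare_exchange_strong(flag, &expected, to)
           || atomic_load(flag) == to;
}

static bool tcg_try_start_execution(atomic_int *flag)
{
    return try_set_flag(flag, 0, 1);   /* allowed -> executing */
}

static bool tcg_try_block_execution(atomic_int *flag)
{
    return try_set_flag(flag, 0, -1);  /* allowed -> blocked */
}

static void tcg_allow_execution(atomic_int *flag)
{
    atomic_store(flag, 0);             /* back to "may execute" */
}
```

Because both check and transition happen in one compare-and-swap, a vCPU
entering cpu_exec() and a thread trying to block execution can never both
succeed at the same time.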
diff --git a/cpu-exec.c b/cpu-exec.c
index 75694f3..e16666a 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -371,6 +371,10 @@ int cpu_exec(CPUState *cpu)
cpu->halted = 0;
}
+ if (!tcg_cpu_try_start_execution(cpu)) {
+ cpu->exit_request = 1;
+ return 0;
+ }
current_cpu = cpu;
/* As long as current_cpu is null, up to the assignment just above,
@@ -583,5 +587,6 @@ int cpu_exec(CPUState *cpu)
/* fail safe : never use current_cpu outside cpu_exec() */
current_cpu = NULL;
+ tcg_cpu_allow_execution(cpu);
return ret;
}
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index efa9624..de7487e 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -226,6 +226,7 @@ struct kvm_run;
* @stopped: Indicates the CPU has been artificially stopped.
* @tcg_exit_req: Set to force TCG to stop executing linked TBs for this
* CPU and return to its top level loop.
+ * @tcg_exec_flag: See tcg_cpu_flag_* function.
* @singlestep_enabled: Flags for single-stepping.
* @icount_extra: Instructions until next timer event.
* @icount_decr: Number of cycles left, with interrupt flag in high bit.
@@ -322,6 +323,8 @@ struct CPUState {
(absolute value) offset as small as possible. This reduces code
size, especially for hosts without large memory offsets. */
volatile sig_atomic_t tcg_exit_req;
+
+ int tcg_exec_flag;
};
QTAILQ_HEAD(CPUTailQ, CPUState);
@@ -337,6 +340,35 @@ extern struct CPUTailQ cpus;
DECLARE_TLS(CPUState *, current_cpu);
#define current_cpu tls_var(current_cpu)
+
+/**
+ * tcg_cpu_try_block_execution
+ * @cpu: The CPU to block the execution
+ *
+ * Try to set the tcg_exec_flag to -1 saying the CPU can't execute code if the
+ * CPU is not executing code.
+ * Returns true if the cpu execution is blocked, false otherwise.
+ */
+bool tcg_cpu_try_block_execution(CPUState *cpu);
+
+/**
+ * tcg_cpu_allow_execution
+ * @cpu: The CPU to allow the execution.
+ *
+ * Just reset the state of tcg_exec_flag, and allow the execution of some code.
+ */
+void tcg_cpu_allow_execution(CPUState *cpu);
+
+/**
+ * tcg_cpu_try_start_execution
+ * @cpu: The CPU to start the execution.
+ *
+ * Just set the tcg_exec_flag to 1 saying the CPU is executing code if the CPU
+ * is allowed to run some code.
+ * Returns true if the cpu can execute, false otherwise.
+ */
+bool tcg_cpu_try_start_execution(CPUState *cpu);
+
/**
* cpu_paging_enabled:
* @cpu: The CPU whose state is to be inspected.
diff --git a/qom/cpu.c b/qom/cpu.c
index 4e12598..e32f90c 100644
--- a/qom/cpu.c
+++ b/qom/cpu.c
@@ -26,6 +26,23 @@
#include "qemu/error-report.h"
#include "sysemu/sysemu.h"
+bool tcg_cpu_try_block_execution(CPUState *cpu)
+{
+ return (atomic_cmpxchg(&cpu->tcg_exec_flag, 0, -1)
+ || (cpu->tcg_exec_flag == -1));
+}
+
+void tcg_cpu_allow_execution(CPUState *cpu)
+{
+ cpu->tcg_exec_flag = 0;
+}
+
+bool tcg_cpu_try_start_execution(CPUState *cpu)
+{
+ return (atomic_cmpxchg(&cpu->tcg_exec_flag, 0, 1)
+ || (cpu->tcg_exec_flag == 1));
+}
+
bool cpu_exists(int64_t id)
{
CPUState *cpu;
@@ -249,6 +266,8 @@ static void cpu_common_reset(CPUState *cpu)
cpu->icount_decr.u32 = 0;
cpu->can_do_io = 0;
cpu->exception_index = -1;
+
+ tcg_cpu_allow_execution(cpu);
memset(cpu->tb_jmp_cache, 0, TB_JMP_CACHE_SIZE * sizeof(void *));
}
--
1.9.0
* [Qemu-devel] [RFC PATCH V3 3/3] cpus: introduce async_run_safe_work_on_cpu.
2015-07-17 14:45 [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part fred.konrad
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 1/3] cpus: protect queued_work_* with work_mutex fred.konrad
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 2/3] cpus: add tcg_exec_flag fred.konrad
@ 2015-07-17 14:45 ` fred.konrad
2015-07-20 16:20 ` [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part Alex Bennée
2015-07-20 17:36 ` Alex Bennée
4 siblings, 0 replies; 8+ messages in thread
From: fred.konrad @ 2015-07-17 14:45 UTC (permalink / raw)
To: qemu-devel, mttcg
Cc: mark.burton, a.rigo, guillaume.delbergue, pbonzini, alex.bennee,
fred.konrad
From: KONRAD Frederic <fred.konrad@greensocs.com>
We already had async_run_on_cpu, but we need all VCPUs to be outside their
execution loop to execute some tb_flush/invalidate tasks:
async_run_safe_work_on_cpu schedules work on a VCPU, but the work only starts
once no VCPU is executing code any more.
When safe work is pending, cpu_has_work returns true, so cpu_exec returns and
the VCPUs can't enter the execution loop. cpu_thread_is_idle returns false, so
at the moment when all VCPUs are stop || stopped the safe work queue can be
flushed.
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Changes V3 -> V4:
* Use tcg_cpu_try_block_execution.
* Use a counter to know how many safe work are pending.
Changes V2 -> V3:
* Unlock the mutex while executing the callback.
Changes V1 -> V2:
* Move qemu_cpu_kick_thread to avoid prototype declaration.
* Use the work_mutex lock to protect the queued_safe_work_* structures.
---
cpu-exec.c | 5 ++
cpus.c | 149 +++++++++++++++++++++++++++++++++++++++---------------
include/qom/cpu.h | 24 ++++++++-
3 files changed, 137 insertions(+), 41 deletions(-)
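The global gate that keeps vCPUs out of the execution loop while safe work is
pending can be sketched like this (illustrative names; in the patch the counter
is safe_work_pending, manipulated with atomic_inc/atomic_dec, and the check
sits at the top of cpu_exec()):

```c
/* Sketch of the safe-work gate: a global counter of pending safe work items;
 * vCPUs refuse to (re)enter their execution loop while it is non-zero. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_int safe_work_pending_count;

static void schedule_safe_work(void)
{
    atomic_fetch_add(&safe_work_pending_count, 1);
    /* ...queue the qemu_work_item on the target vCPU and kick every
     * vCPU thread so they all leave cpu_exec()... */
}

static void complete_safe_work(void)
{
    /* ...run the callback with work_mutex dropped, then: */
    atomic_fetch_sub(&safe_work_pending_count, 1);
}

static bool safe_work_pending(void)
{
    return atomic_load(&safe_work_pending_count) != 0;
}

/* Mirrors the cpu_exec() hunk above: bail out before executing guest code. */
static int toy_cpu_exec(bool *exit_request)
{
    if (safe_work_pending()) {
        *exit_request = true;  /* leave the loop so safe work can run */
        return 0;
    }
    /* ...translate and execute guest code... */
    return 1;
}
```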
diff --git a/cpu-exec.c b/cpu-exec.c
index e16666a..97805cc 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -363,6 +363,11 @@ int cpu_exec(CPUState *cpu)
/* This must be volatile so it is not trashed by longjmp() */
volatile bool have_tb_lock = false;
+ if (async_safe_work_pending()) {
+ cpu->exit_request = 1;
+ return 0;
+ }
+
if (cpu->halted) {
if (!cpu_has_work(cpu)) {
return EXCP_HALTED;
diff --git a/cpus.c b/cpus.c
index eabd4b1..2250296 100644
--- a/cpus.c
+++ b/cpus.c
@@ -69,6 +69,8 @@ static CPUState *next_cpu;
int64_t max_delay;
int64_t max_advance;
+int safe_work_pending; /* Number of safe work pending for all VCPUs. */
+
bool cpu_is_stopped(CPUState *cpu)
{
return cpu->stopped || !runstate_is_running();
@@ -76,7 +78,7 @@ bool cpu_is_stopped(CPUState *cpu)
static bool cpu_thread_is_idle(CPUState *cpu)
{
- if (cpu->stop || cpu->queued_work_first) {
+ if (cpu->stop || cpu->queued_work_first || cpu->queued_safe_work_first) {
return false;
}
if (cpu_is_stopped(cpu)) {
@@ -833,6 +835,45 @@ void qemu_init_cpu_loop(void)
qemu_thread_get_self(&io_thread);
}
+static void qemu_cpu_kick_thread(CPUState *cpu)
+{
+#ifndef _WIN32
+ int err;
+
+ err = pthread_kill(cpu->thread->thread, SIG_IPI);
+ if (err) {
+ fprintf(stderr, "qemu:%s: %s", __func__, strerror(err));
+ exit(1);
+ }
+#else /* _WIN32 */
+ if (!qemu_cpu_is_self(cpu)) {
+ CONTEXT tcgContext;
+
+ if (SuspendThread(cpu->hThread) == (DWORD)-1) {
+ fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
+ GetLastError());
+ exit(1);
+ }
+
+ /* On multi-core systems, we are not sure that the thread is actually
+ * suspended until we can get the context.
+ */
+ tcgContext.ContextFlags = CONTEXT_CONTROL;
+ while (GetThreadContext(cpu->hThread, &tcgContext) != 0) {
+ continue;
+ }
+
+ cpu_signal(0);
+
+ if (ResumeThread(cpu->hThread) == (DWORD)-1) {
+ fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
+ GetLastError());
+ exit(1);
+ }
+ }
+#endif
+}
+
void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
{
struct qemu_work_item wi;
@@ -894,6 +935,70 @@ void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data)
qemu_cpu_kick(cpu);
}
+void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
+ void *data)
+{
+ struct qemu_work_item *wi;
+
+ wi = g_malloc0(sizeof(struct qemu_work_item));
+ wi->func = func;
+ wi->data = data;
+ wi->free = true;
+
+ atomic_inc(&safe_work_pending);
+ qemu_mutex_lock(&cpu->work_mutex);
+ if (cpu->queued_safe_work_first == NULL) {
+ cpu->queued_safe_work_first = wi;
+ } else {
+ cpu->queued_safe_work_last->next = wi;
+ }
+ cpu->queued_safe_work_last = wi;
+ wi->next = NULL;
+ wi->done = false;
+ qemu_mutex_unlock(&cpu->work_mutex);
+
+ CPU_FOREACH(cpu) {
+ qemu_cpu_kick_thread(cpu);
+ }
+}
+
+static void flush_queued_safe_work(CPUState *cpu)
+{
+ struct qemu_work_item *wi;
+ CPUState *other_cpu;
+
+ if (cpu->queued_safe_work_first == NULL) {
+ return;
+ }
+
+ CPU_FOREACH(other_cpu) {
+ if (!tcg_cpu_try_block_execution(other_cpu)) {
+ return;
+ }
+ }
+
+ qemu_mutex_lock(&cpu->work_mutex);
+ while ((wi = cpu->queued_safe_work_first)) {
+ cpu->queued_safe_work_first = wi->next;
+ qemu_mutex_unlock(&cpu->work_mutex);
+ wi->func(wi->data);
+ qemu_mutex_lock(&cpu->work_mutex);
+ wi->done = true;
+ if (wi->free) {
+ g_free(wi);
+ }
+ atomic_dec(&safe_work_pending);
+ }
+ cpu->queued_safe_work_last = NULL;
+ qemu_mutex_unlock(&cpu->work_mutex);
+ qemu_cond_broadcast(&qemu_work_cond);
+}
+
+bool async_safe_work_pending(void)
+{
+ return safe_work_pending != 0;
+}
+
static void flush_queued_work(CPUState *cpu)
{
struct qemu_work_item *wi;
@@ -926,6 +1031,9 @@ static void qemu_wait_io_event_common(CPUState *cpu)
cpu->stopped = true;
qemu_cond_signal(&qemu_pause_cond);
}
+ qemu_mutex_unlock_iothread();
+ flush_queued_safe_work(cpu);
+ qemu_mutex_lock_iothread();
flush_queued_work(cpu);
cpu->thread_kicked = false;
}
@@ -1085,45 +1193,6 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
return NULL;
}
-static void qemu_cpu_kick_thread(CPUState *cpu)
-{
-#ifndef _WIN32
- int err;
-
- err = pthread_kill(cpu->thread->thread, SIG_IPI);
- if (err) {
- fprintf(stderr, "qemu:%s: %s", __func__, strerror(err));
- exit(1);
- }
-#else /* _WIN32 */
- if (!qemu_cpu_is_self(cpu)) {
- CONTEXT tcgContext;
-
- if (SuspendThread(cpu->hThread) == (DWORD)-1) {
- fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
- GetLastError());
- exit(1);
- }
-
- /* On multi-core systems, we are not sure that the thread is actually
- * suspended until we can get the context.
- */
- tcgContext.ContextFlags = CONTEXT_CONTROL;
- while (GetThreadContext(cpu->hThread, &tcgContext) != 0) {
- continue;
- }
-
- cpu_signal(0);
-
- if (ResumeThread(cpu->hThread) == (DWORD)-1) {
- fprintf(stderr, "qemu:%s: GetLastError:%lu\n", __func__,
- GetLastError());
- exit(1);
- }
- }
-#endif
-}
-
void qemu_cpu_kick(CPUState *cpu)
{
qemu_cond_broadcast(cpu->halt_cond);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index de7487e..23418c0 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -243,8 +243,9 @@ struct kvm_run;
* @mem_io_pc: Host Program Counter at which the memory was accessed.
* @mem_io_vaddr: Target virtual address at which the memory was accessed.
* @kvm_fd: vCPU file descriptor for KVM.
- * @work_mutex: Lock to prevent multiple access to queued_work_*.
+ * @work_mutex: Lock to prevent multiple access to queued_* qemu_work_item.
* @queued_work_first: First asynchronous work pending.
+ * @queued_safe_work_first: First item of safe work pending.
*
* State of one CPU core or thread.
*/
@@ -267,6 +268,7 @@ struct CPUState {
struct QemuCond *halt_cond;
QemuMutex work_mutex;
struct qemu_work_item *queued_work_first, *queued_work_last;
+ struct qemu_work_item *queued_safe_work_first, *queued_safe_work_last;
bool thread_kicked;
bool created;
bool stop;
@@ -575,6 +577,26 @@ void run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data);
void async_run_on_cpu(CPUState *cpu, void (*func)(void *data), void *data);
/**
+ * async_run_safe_work_on_cpu:
+ * @cpu: The vCPU to run on.
+ * @func: The function to be executed.
+ * @data: Data to pass to the function.
+ *
+ * Schedules the function @func for execution on the vCPU @cpu asynchronously
+ * when all the VCPUs are outside their loop.
+ */
+void async_run_safe_work_on_cpu(CPUState *cpu, void (*func)(void *data),
+ void *data);
+
+/**
+ * async_safe_work_pending:
+ *
+ * Check whether any safe work is pending on any VCPUs.
+ * Returns: @true if a safe work is pending, @false otherwise.
+ */
+bool async_safe_work_pending(void);
+
+/**
* qemu_get_cpu:
* @index: The CPUState@cpu_index value of the CPU to obtain.
*
--
1.9.0
* Re: [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part.
2015-07-17 14:45 [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part fred.konrad
` (2 preceding siblings ...)
2015-07-17 14:45 ` [Qemu-devel] [RFC PATCH V3 3/3] cpus: introduce async_run_safe_work_on_cpu fred.konrad
@ 2015-07-20 16:20 ` Alex Bennée
2015-07-20 17:36 ` Alex Bennée
4 siblings, 0 replies; 8+ messages in thread
From: Alex Bennée @ 2015-07-20 16:20 UTC (permalink / raw)
To: fred.konrad
Cc: mttcg, mark.burton, qemu-devel, a.rigo, guillaume.delbergue,
pbonzini
fred.konrad@greensocs.com writes:
> From: KONRAD Frederic <fred.konrad@greensocs.com>
>
> This is the async_safe_work introduction bit of the Multithread TCG work.
> Rebased on current upstream (6169b60285fe1ff730d840a49527e721bfb30899).
>
> (Currently untested as I need to rebase MTTCG first.)
Wouldn't it make sense for this to be re-based onto the current rc
independent of MTTCG and then have those patches based on top of this
series? (see other mail).
>
> It can be cloned here:
> http://git.greensocs.com/fkonrad/mttcg.git branch async_work_v3
Not seeing this at the moment, can you re-push please?
>
> The first patch introduces a mutex to protect the existing queued_work_*
> CPUState members against concurrent access.
>
> The second patch introduces a tcg_exec_flag which will be 1 when we are inside
> cpu_exec(), -1 when we must not enter cpu execution and 0 when we are allowed
> to do so. This is required as safe work needs to be sure that all vCPUs are
> outside cpu_exec().
>
> The last patch introduces async_safe_work. It allows adding work which will be
> done asynchronously, but only when all vCPUs are outside cpu_exec(). The TCG
> thread will wait until no vCPU has any pending safe work before reentering
> cpu_exec().
>
> Changes V2 -> V3:
> * Check atomically that we are not in the execution loop to fix a race
> condition which might happen.
> Changes V1 -> V2:
> * Release the lock while running the callback for both async and safe work.
>
> KONRAD Frederic (3):
> cpus: protect queued_work_* with work_mutex.
> cpus: add tcg_exec_flag.
> cpus: introduce async_run_safe_work_on_cpu.
>
> cpu-exec.c | 10 ++++
> cpus.c | 160 ++++++++++++++++++++++++++++++++++++++++--------------
> include/qom/cpu.h | 57 +++++++++++++++++++
> qom/cpu.c | 20 +++++++
> 4 files changed, 207 insertions(+), 40 deletions(-)
--
Alex Bennée
* Re: [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part.
2015-07-17 14:45 [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part fred.konrad
` (3 preceding siblings ...)
2015-07-20 16:20 ` [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part Alex Bennée
@ 2015-07-20 17:36 ` Alex Bennée
2015-07-20 17:46 ` Frederic Konrad
4 siblings, 1 reply; 8+ messages in thread
From: Alex Bennée @ 2015-07-20 17:36 UTC (permalink / raw)
To: fred.konrad
Cc: mttcg, mark.burton, qemu-devel, a.rigo, guillaume.delbergue,
pbonzini
fred.konrad@greensocs.com writes:
> From: KONRAD Frederic <fred.konrad@greensocs.com>
>
> This is the async_safe_work introduction bit of the Multithread TCG work.
> Rebased on current upstream (6169b60285fe1ff730d840a49527e721bfb30899).
>
> (Currently untested as I need to rebase MTTCG first.)
>
> It can be cloned here:
> http://git.greensocs.com/fkonrad/mttcg.git branch async_work_v3
>
> The first patch introduces a mutex to protect the existing queued_work_*
> CPUState members against concurrent access.
>
> The second patch introduces a tcg_exec_flag which will be 1 when we are inside
> cpu_exec(), -1 when we must not enter cpu execution and 0 when we are allowed
> to do so. This is required as safe work needs to be sure that all vCPUs are
> outside cpu_exec().
>
> The last patch introduces async_safe_work. It allows adding work which will be
> done asynchronously, but only when all vCPUs are outside cpu_exec(). The TCG
> thread will wait until no vCPU has any pending safe work before reentering
> cpu_exec().
>
> Changes V2 -> V3:
> * Check atomically that we are not in the execution loop to fix a race
> condition which might happen.
> Changes V1 -> V2:
> * Release the lock while running the callback for both async and safe work.
>
> KONRAD Frederic (3):
> cpus: protect queued_work_* with work_mutex.
> cpus: add tcg_exec_flag.
> cpus: introduce async_run_safe_work_on_cpu.
Also currently breaks a lot of targets:
https://travis-ci.org/stsquad/qemu/builds/71797143
>
> cpu-exec.c | 10 ++++
> cpus.c | 160 ++++++++++++++++++++++++++++++++++++++++--------------
> include/qom/cpu.h | 57 +++++++++++++++++++
> qom/cpu.c | 20 +++++++
> 4 files changed, 207 insertions(+), 40 deletions(-)
--
Alex Bennée
* Re: [Qemu-devel] [RFC PATCH V3 0/3] Multithread TCG async_safe_work part.
2015-07-20 17:36 ` Alex Bennée
@ 2015-07-20 17:46 ` Frederic Konrad
0 siblings, 0 replies; 8+ messages in thread
From: Frederic Konrad @ 2015-07-20 17:46 UTC (permalink / raw)
To: Alex Bennée
Cc: mttcg, mark.burton, qemu-devel, a.rigo, guillaume.delbergue,
pbonzini
On 20/07/2015 19:36, Alex Bennée wrote:
> fred.konrad@greensocs.com writes:
>
>> From: KONRAD Frederic <fred.konrad@greensocs.com>
>>
>> This is the async_safe_work introduction bit of the Multithread TCG work.
>> Rebased on current upstream (6169b60285fe1ff730d840a49527e721bfb30899).
>>
>> (Currently untested as I need to rebase MTTCG first.)
>>
>> It can be cloned here:
>> http://git.greensocs.com/fkonrad/mttcg.git branch async_work_v3
>>
>> The first patch introduces a mutex to protect the existing queued_work_*
>> CPUState members against concurrent access.
>>
>> The second patch introduces a tcg_exec_flag which will be 1 when we are inside
>> cpu_exec(), -1 when we must not enter cpu execution and 0 when we are allowed
>> to do so. This is required as safe work needs to be sure that all vCPUs are
>> outside cpu_exec().
>>
>> The last patch introduces async_safe_work. It allows adding work which will be
>> done asynchronously, but only when all vCPUs are outside cpu_exec(). The TCG
>> thread will wait until no vCPU has any pending safe work before reentering
>> cpu_exec().
>>
>> Changes V2 -> V3:
>> * Check atomically that we are not in the execution loop to fix a race
>> condition which might happen.
>> Changes V1 -> V2:
>> * Release the lock while running the callback for both async and safe work.
>>
>> KONRAD Frederic (3):
>> cpus: protect queued_work_* with work_mutex.
>> cpus: add tcg_exec_flag.
>> cpus: introduce async_run_safe_work_on_cpu.
> Also currently breaks a lot of targets:
>
> https://travis-ci.org/stsquad/qemu/builds/71797143
oops yes, seems linux-user is not happy at all.
Will fix that.
Thanks,
Fred
>> cpu-exec.c | 10 ++++
>> cpus.c | 160 ++++++++++++++++++++++++++++++++++++++++--------------
>> include/qom/cpu.h | 57 +++++++++++++++++++
>> qom/cpu.c | 20 +++++++
>> 4 files changed, 207 insertions(+), 40 deletions(-)