* [Qemu-devel] [PULL 00/14] MTTCG patches for 2016-10-31
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel
The following changes since commit ed2839166c21e001d15868f4d9591a21aaebd547:
target-alpha: Emulate LL/SC using cmpxchg helpers (2016-10-26 08:29:02 -0700)
are available in the git repository at:
git://github.com/bonzini/qemu.git tags/for-upstream-mttcg
for you to fetch changes up to ba051fb5e56d5ff5e4fa672d37954452e58543b2:
tcg: move locking for tb_invalidate_phys_page_range up (2016-10-31 15:00:25 +0100)
----------------------------------------------------------------
Base patches for MTTCG enablement.
----------------------------------------------------------------
Alex Bennée (11):
cpus: make all_vcpus_paused() return bool
translate_all: DEBUG_FLUSH -> DEBUG_TB_FLUSH
translate-all: add DEBUG_LOCKING asserts
cpu-exec: include cpu_index in CPU_LOG_EXEC messages
linux-user/elfload: ensure mmap_lock() held while setting up
translate-all: Add assert_(memory|tb)_lock annotations
target-arm/arm-powerctl: wake up sleeping CPUs
tcg: move tcg_exec_all and helpers above thread fn
tcg: cpus rm tcg_exec_all()
cpus: re-factor out handle_icount_deadline
tcg: move locking for tb_invalidate_phys_page_range up
KONRAD Frederic (1):
tcg: protect translation related stuff with tb_lock.
Paolo Bonzini (2):
tcg: comment on which functions have to be called with tb_lock held
*_run_on_cpu: introduce run_on_cpu_data type
bsd-user/mmap.c | 5 +
cpu-exec.c | 11 +-
cpus-common.c | 9 +-
cpus.c | 259 +++++++++++++++++++++++----------------------
exec.c | 22 ++++
hw/i386/kvm/apic.c | 14 +--
hw/i386/kvmvapic.c | 17 +--
hw/ppc/ppce500_spin.c | 6 +-
hw/ppc/spapr.c | 4 +-
hw/ppc/spapr_hcall.c | 12 +--
include/exec/exec-all.h | 2 +
include/qom/cpu.h | 31 +++++-
kvm-all.c | 20 ++--
linux-user/elfload.c | 4 +
linux-user/mmap.c | 5 +
target-arm/Makefile.objs | 2 +-
target-arm/arm-powerctl.c | 2 +
target-i386/helper.c | 8 +-
target-i386/kvm.c | 4 +-
target-s390x/cpu.c | 4 +-
target-s390x/cpu.h | 4 +-
target-s390x/kvm.c | 20 ++--
target-s390x/misc_helper.c | 4 +-
tcg/tcg.h | 2 +
translate-all.c | 177 ++++++++++++++++++++++++++-----
25 files changed, 426 insertions(+), 222 deletions(-)
--
2.7.4
* [Qemu-devel] [PULL 01/14] cpus: make all_vcpus_paused() return bool
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-2-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/cpus.c b/cpus.c
index cfd5cdc..5324ba3 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1207,17 +1207,17 @@ void qemu_mutex_unlock_iothread(void)
qemu_mutex_unlock(&qemu_global_mutex);
}
-static int all_vcpus_paused(void)
+static bool all_vcpus_paused(void)
{
CPUState *cpu;
CPU_FOREACH(cpu) {
if (!cpu->stopped) {
- return 0;
+ return false;
}
}
- return 1;
+ return true;
}
void pause_all_vcpus(void)
--
2.7.4
* [Qemu-devel] [PULL 02/14] translate_all: DEBUG_FLUSH -> DEBUG_TB_FLUSH
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
Make the debug define consistent with the others: the flush operation is
all about invalidating TranslationBlocks on flush events.
Also fix up the comment style of the other DEBUG defines for the benefit
of checkpatch.
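Purely for illustration (not part of the patch): tracing flushes now
means uncommenting the renamed define in translate-all.c rather than the
old DEBUG_FLUSH:

    #define DEBUG_TB_FLUSH    /* formerly DEBUG_FLUSH */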
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-3-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
translate-all.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/translate-all.c b/translate-all.c
index 76fc18c..f35522e 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -56,10 +56,10 @@
#include "qemu/timer.h"
#include "exec/log.h"
-//#define DEBUG_TB_INVALIDATE
-//#define DEBUG_FLUSH
+/* #define DEBUG_TB_INVALIDATE */
+/* #define DEBUG_TB_FLUSH */
/* make various TB consistency checks */
-//#define DEBUG_TB_CHECK
+/* #define DEBUG_TB_CHECK */
#if !defined(CONFIG_USER_ONLY)
/* TB consistency checks only implemented for usermode emulation. */
@@ -869,7 +869,7 @@ static void do_tb_flush(CPUState *cpu, void *data)
goto done;
}
-#if defined(DEBUG_FLUSH)
+#if defined(DEBUG_TB_FLUSH)
printf("qemu: flush code_size=%ld nb_tbs=%d avg_tb_size=%ld\n",
(unsigned long)(tcg_ctx.code_gen_ptr - tcg_ctx.code_gen_buffer),
tcg_ctx.tb_ctx.nb_tbs, tcg_ctx.tb_ctx.nb_tbs > 0 ?
--
2.7.4
* [Qemu-devel] [PULL 03/14] translate-all: add DEBUG_LOCKING asserts
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
This adds asserts to check the locking on the various translation
engine structures. There are two sets of structures that are protected
by locks.
The first is the l1_map and PageDesc structures used to track which
translation blocks are associated with which physical addresses. In
user-mode this is covered by the mmap_lock.
The second is the TB context related structures, which are protected by
tb_lock; this lock is also user-mode only for now.
Currently the asserts do nothing in SoftMMU mode but this will change
for MTTCG.
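As a sketch of how the asserts are meant to be used (the surrounding
code here is hypothetical; the lock helpers are from the patch below):

    /* A user-mode path touching l1_map/PageDesc entries must hold the
     * mmap_lock. assert_memory_lock() expands to
     * g_assert(have_mmap_lock()) only when DEBUG_LOCKING is defined,
     * and is a no-op under SoftMMU. */
    mmap_lock();
    assert_memory_lock();
    /* ... walk or modify l1_map / PageDesc structures ... */
    mmap_unlock();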
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-4-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
bsd-user/mmap.c | 5 +++++
include/exec/exec-all.h | 1 +
linux-user/mmap.c | 5 +++++
translate-all.c | 41 +++++++++++++++++++++++++++++++++++++++++
4 files changed, 52 insertions(+)
diff --git a/bsd-user/mmap.c b/bsd-user/mmap.c
index 610f91b..ee59073 100644
--- a/bsd-user/mmap.c
+++ b/bsd-user/mmap.c
@@ -42,6 +42,11 @@ void mmap_unlock(void)
}
}
+bool have_mmap_lock(void)
+{
+ return mmap_lock_count > 0 ? true : false;
+}
+
/* Grab lock to make sure things are in a consistent state after fork(). */
void mmap_fork_start(void)
{
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index cb624e4..4d36ee3 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -369,6 +369,7 @@ void tlb_fill(CPUState *cpu, target_ulong addr, MMUAccessType access_type,
#if defined(CONFIG_USER_ONLY)
void mmap_lock(void);
void mmap_unlock(void);
+bool have_mmap_lock(void);
static inline tb_page_addr_t get_page_addr_code(CPUArchState *env1, target_ulong addr)
{
diff --git a/linux-user/mmap.c b/linux-user/mmap.c
index ffd099d..61685bf 100644
--- a/linux-user/mmap.c
+++ b/linux-user/mmap.c
@@ -41,6 +41,11 @@ void mmap_unlock(void)
}
}
+bool have_mmap_lock(void)
+{
+ return mmap_lock_count > 0 ? true : false;
+}
+
/* Grab lock to make sure things are in a consistent state after fork(). */
void mmap_fork_start(void)
{
diff --git a/translate-all.c b/translate-all.c
index f35522e..5aded3d 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -31,6 +31,7 @@
#include "tcg.h"
#if defined(CONFIG_USER_ONLY)
#include "qemu.h"
+#include "exec/exec-all.h"
#if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
#include <sys/param.h>
#if __FreeBSD_version >= 700104
@@ -58,6 +59,7 @@
/* #define DEBUG_TB_INVALIDATE */
/* #define DEBUG_TB_FLUSH */
+/* #define DEBUG_LOCKING */
/* make various TB consistency checks */
/* #define DEBUG_TB_CHECK */
@@ -66,6 +68,28 @@
#undef DEBUG_TB_CHECK
#endif
+/* Access to the various translations structures need to be serialised via locks
+ * for consistency. This is automatic for SoftMMU based system
+ * emulation due to its single threaded nature. In user-mode emulation
+ * access to the memory related structures are protected with the
+ * mmap_lock.
+ */
+#ifdef DEBUG_LOCKING
+#define DEBUG_MEM_LOCKS 1
+#else
+#define DEBUG_MEM_LOCKS 0
+#endif
+
+#ifdef CONFIG_SOFTMMU
+#define assert_memory_lock() do { /* nothing */ } while (0)
+#else
+#define assert_memory_lock() do { \
+ if (DEBUG_MEM_LOCKS) { \
+ g_assert(have_mmap_lock()); \
+ } \
+ } while (0)
+#endif
+
#define SMC_BITMAP_USE_THRESHOLD 10
typedef struct PageDesc {
@@ -173,6 +197,23 @@ void tb_lock_reset(void)
#endif
}
+#ifdef DEBUG_LOCKING
+#define DEBUG_TB_LOCKS 1
+#else
+#define DEBUG_TB_LOCKS 0
+#endif
+
+#ifdef CONFIG_SOFTMMU
+#define assert_tb_lock() do { /* nothing */ } while (0)
+#else
+#define assert_tb_lock() do { \
+ if (DEBUG_TB_LOCKS) { \
+ g_assert(have_tb_lock); \
+ } \
+ } while (0)
+#endif
+
+
static TranslationBlock *tb_find_pc(uintptr_t tc_ptr);
void cpu_gen_init(void)
--
2.7.4
* [Qemu-devel] [PULL 04/14] cpu-exec: include cpu_index in CPU_LOG_EXEC messages
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
When debugging MTTCG it is even more important to see which vCPU is
currently executing.
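With this change the vCPU index appears inside the brackets of each
trace line; purely for illustration (pointer, address and symbol
invented), output now looks like:

    Trace 0x7f6a4c000b00 [1: 0000000040001000] start_kernel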
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-5-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpu-exec.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
index c999793..4879c7d 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -143,8 +143,9 @@ static inline tcg_target_ulong cpu_tb_exec(CPUState *cpu, TranslationBlock *itb)
uint8_t *tb_ptr = itb->tc_ptr;
qemu_log_mask_and_addr(CPU_LOG_EXEC, itb->pc,
- "Trace %p [" TARGET_FMT_lx "] %s\n",
- itb->tc_ptr, itb->pc, lookup_symbol(itb->pc));
+ "Trace %p [%d: " TARGET_FMT_lx "] %s\n",
+ itb->tc_ptr, cpu->cpu_index, itb->pc,
+ lookup_symbol(itb->pc));
#if defined(DEBUG_DISAS)
if (qemu_loglevel_mask(CPU_LOG_TB_CPU)
--
2.7.4
* [Qemu-devel] [PULL 05/14] tcg: comment on which functions have to be called with tb_lock held
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
softmmu requires more functions to be thread-safe, because translation
blocks can be invalidated from e.g. notdirty callbacks. Probably the
same holds for user-mode emulation; it's just that no one has ever
tried to produce a coherent locking scheme there.
This patch will guide the introduction of more tb_lock and tb_unlock
calls for system emulation.
Note that after this patch some (most) of the mentioned functions are
still called outside tb_lock/tb_unlock. The next patch will rectify this.
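A minimal sketch of the convention the new comments document, using the
existing code-generation path:

    /* Generating, freeing or invalidating TBs must happen between
     * tb_lock/tb_unlock: */
    tb_lock();
    tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
    tb_unlock();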
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-7-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
include/exec/exec-all.h | 1 +
include/qom/cpu.h | 3 +++
tcg/tcg.h | 2 ++
translate-all.c | 28 +++++++++++++++++++++++-----
4 files changed, 29 insertions(+), 5 deletions(-)
diff --git a/include/exec/exec-all.h b/include/exec/exec-all.h
index 4d36ee3..a8c13ce 100644
--- a/include/exec/exec-all.h
+++ b/include/exec/exec-all.h
@@ -316,6 +316,7 @@ static inline void tb_set_jmp_target(TranslationBlock *tb,
#endif
+/* Called with tb_lock held. */
static inline void tb_add_jump(TranslationBlock *tb, int n,
TranslationBlock *tb_next)
{
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 633c3fc..9f597bb 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -319,7 +319,10 @@ struct CPUState {
MemoryRegion *memory;
void *env_ptr; /* CPUArchState */
+
+ /* Writes protected by tb_lock, reads not thread-safe */
struct TranslationBlock *tb_jmp_cache[TB_JMP_CACHE_SIZE];
+
struct GDBRegisterState *gdb_regs;
int gdb_num_regs;
int gdb_num_g_regs;
diff --git a/tcg/tcg.h b/tcg/tcg.h
index b34b5fb..dc1281f 100644
--- a/tcg/tcg.h
+++ b/tcg/tcg.h
@@ -726,6 +726,7 @@ static inline bool tcg_op_buf_full(void)
/* pool based memory allocation */
+/* tb_lock must be held for tcg_malloc_internal. */
void *tcg_malloc_internal(TCGContext *s, int size);
void tcg_pool_reset(TCGContext *s);
@@ -733,6 +734,7 @@ void tb_lock(void);
void tb_unlock(void);
void tb_lock_reset(void);
+/* Called with tb_lock held. */
static inline void *tcg_malloc(int size)
{
TCGContext *s = &tcg_ctx;
diff --git a/translate-all.c b/translate-all.c
index 5aded3d..fad2646 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -308,7 +308,9 @@ static int encode_search(TranslationBlock *tb, uint8_t *block)
return p - block;
}
-/* The cpu state corresponding to 'searched_pc' is restored. */
+/* The cpu state corresponding to 'searched_pc' is restored.
+ * Called with tb_lock held.
+ */
static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
uintptr_t searched_pc)
{
@@ -462,6 +464,7 @@ static void page_init(void)
}
/* If alloc=1:
+ * Called with tb_lock held for system emulation.
* Called with mmap_lock held for user-mode emulation.
*/
static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
@@ -826,8 +829,12 @@ bool tcg_enabled(void)
return tcg_ctx.code_gen_buffer != NULL;
}
-/* Allocate a new translation block. Flush the translation buffer if
- too many translation blocks or too much generated code. */
+/*
+ * Allocate a new translation block. Flush the translation buffer if
+ * too many translation blocks or too much generated code.
+ *
+ * Called with tb_lock held.
+ */
static TranslationBlock *tb_alloc(target_ulong pc)
{
TranslationBlock *tb;
@@ -842,6 +849,7 @@ static TranslationBlock *tb_alloc(target_ulong pc)
return tb;
}
+/* Called with tb_lock held. */
void tb_free(TranslationBlock *tb)
{
/* In practice this is mostly used for single use temporary TB
@@ -966,6 +974,10 @@ do_tb_invalidate_check(struct qht *ht, void *p, uint32_t hash, void *userp)
}
}
+/* verify that all the pages have correct rights for code
+ *
+ * Called with tb_lock held.
+ */
static void tb_invalidate_check(target_ulong address)
{
address &= TARGET_PAGE_MASK;
@@ -1070,7 +1082,10 @@ static inline void tb_jmp_unlink(TranslationBlock *tb)
}
}
-/* invalidate one TB */
+/* invalidate one TB
+ *
+ * Called with tb_lock held.
+ */
void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
{
CPUState *cpu;
@@ -1504,7 +1519,9 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
}
if (!p->code_bitmap &&
++p->code_write_count >= SMC_BITMAP_USE_THRESHOLD) {
- /* build code bitmap */
+ /* build code bitmap. FIXME: writes should be protected by
+ * tb_lock, reads by tb_lock or RCU.
+ */
build_page_bitmap(p);
}
if (p->code_bitmap) {
@@ -1645,6 +1662,7 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
}
#endif /* !defined(CONFIG_USER_ONLY) */
+/* Called with tb_lock held. */
void tb_check_watchpoint(CPUState *cpu)
{
TranslationBlock *tb;
--
2.7.4
* [Qemu-devel] [PULL 06/14] linux-user/elfload: ensure mmap_lock() held while setting up
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
Future patches will enforce the holding of mmap_lock() when we are
manipulating internal memory structures. Technically it doesn't matter
in the case of elfload as we haven't started executing yet. However it
is easier to grab the lock when required than to special-case the
translate-all API.
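The resulting pattern is simply to bracket the image setup with the
lock, as the diff below does; condensed:

    mmap_lock();
    /* ... probe the ELF headers, map the segments, load symbols ... */
    mmap_unlock();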
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-8-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
linux-user/elfload.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/linux-user/elfload.c b/linux-user/elfload.c
index 816272a..547053c 100644
--- a/linux-user/elfload.c
+++ b/linux-user/elfload.c
@@ -1842,6 +1842,8 @@ static void load_elf_image(const char *image_name, int image_fd,
info->pt_dynamic_addr = 0;
#endif
+ mmap_lock();
+
/* Find the maximum size of the image and allocate an appropriate
amount of memory to handle that. */
loaddr = -1, hiaddr = 0;
@@ -2002,6 +2004,8 @@ static void load_elf_image(const char *image_name, int image_fd,
load_symbols(ehdr, image_fd, load_bias);
}
+ mmap_unlock();
+
close(image_fd);
return;
--
2.7.4
* [Qemu-devel] [PULL 07/14] translate-all: Add assert_(memory|tb)_lock annotations
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
This adds calls to assert_(memory|tb)_lock for all public APIs which
are documented as needing the locks held in linux-user mode. The asserts
are NOPs for system-mode, although they will be converted when MTTCG is
enabled.
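From a linux-user caller's point of view (an illustrative sketch, not
code from this patch), the asserts turn documented preconditions into
checkable ones:

    /* tb_invalidate_phys_range() now calls assert_memory_lock(), so a
     * user-mode caller is expected to hold the lock: */
    mmap_lock();
    tb_invalidate_phys_range(start, end);
    mmap_unlock();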
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-9-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
translate-all.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)
diff --git a/translate-all.c b/translate-all.c
index fad2646..3ff43ec 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -473,6 +473,10 @@ static PageDesc *page_find_alloc(tb_page_addr_t index, int alloc)
void **lp;
int i;
+ if (alloc) {
+ assert_memory_lock();
+ }
+
/* Level 1. Always allocated. */
lp = l1_map + ((index >> v_l1_shift) & (v_l1_size - 1));
@@ -839,6 +843,8 @@ static TranslationBlock *tb_alloc(target_ulong pc)
{
TranslationBlock *tb;
+ assert_tb_lock();
+
if (tcg_ctx.tb_ctx.nb_tbs >= tcg_ctx.code_gen_max_blocks) {
return NULL;
}
@@ -852,6 +858,8 @@ static TranslationBlock *tb_alloc(target_ulong pc)
/* Called with tb_lock held. */
void tb_free(TranslationBlock *tb)
{
+ assert_tb_lock();
+
/* In practice this is mostly used for single use temporary TB
Ignore the hard cases and just back up if this TB happens to
be the last one generated. */
@@ -1093,6 +1101,8 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
uint32_t h;
tb_page_addr_t phys_pc;
+ assert_tb_lock();
+
atomic_set(&tb->invalid, true);
/* remove the TB from the hash list */
@@ -1150,7 +1160,7 @@ static void build_page_bitmap(PageDesc *p)
tb_end = tb_start + tb->size;
if (tb_end > TARGET_PAGE_SIZE) {
tb_end = TARGET_PAGE_SIZE;
- }
+ }
} else {
tb_start = 0;
tb_end = ((tb->pc + tb->size) & ~TARGET_PAGE_MASK);
@@ -1173,6 +1183,8 @@ static inline void tb_alloc_page(TranslationBlock *tb,
bool page_already_protected;
#endif
+ assert_memory_lock();
+
tb->page_addr[n] = page_addr;
p = page_find_alloc(page_addr >> TARGET_PAGE_BITS, 1);
tb->page_next[n] = p->first_tb;
@@ -1229,6 +1241,8 @@ static void tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
{
uint32_t h;
+ assert_memory_lock();
+
/* add in the page list */
tb_alloc_page(tb, 0, phys_pc & TARGET_PAGE_MASK);
if (phys_page2 != -1) {
@@ -1260,6 +1274,7 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
#ifdef CONFIG_PROFILER
int64_t ti;
#endif
+ assert_memory_lock();
phys_pc = get_page_addr_code(env, pc);
if (use_icount && !(cflags & CF_IGNORE_ICOUNT)) {
@@ -1388,6 +1403,8 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
*/
void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
{
+ assert_memory_lock();
+
while (start < end) {
tb_invalidate_phys_page_range(start, end, 0);
start &= TARGET_PAGE_MASK;
@@ -1424,6 +1441,8 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
uint32_t current_flags = 0;
#endif /* TARGET_HAS_PRECISE_SMC */
+ assert_memory_lock();
+
p = page_find(start >> TARGET_PAGE_BITS);
if (!p) {
return;
@@ -2031,6 +2050,7 @@ void page_set_flags(target_ulong start, target_ulong end, int flags)
assert(end < ((target_ulong)1 << L1_MAP_ADDR_SPACE_BITS));
#endif
assert(start < end);
+ assert_memory_lock();
start = start & TARGET_PAGE_MASK;
end = TARGET_PAGE_ALIGN(end);
--
2.7.4
* [Qemu-devel] [PULL 08/14] tcg: protect translation related stuff with tb_lock.
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: KONRAD Frederic, Emilio G. Cota, Alex Bennée
From: KONRAD Frederic <fred.konrad@greensocs.com>
This protects all translation related work with tb_lock() to ensure
thread safety. This effectively serialises all code generation. In
addition to the code generation we also take the lock for TB
invalidation. This has the knock-on effect that tb_lock() is also held
when non-self threads modify the SoftMMU TLB, which will be relied upon
in later patches.
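Condensed from the cpu_exec_nocache hunk below, the shape of the change
is that every generate/invalidate pair is serialised while execution
itself stays outside the lock:

    tb_lock();
    tb = tb_gen_code(cpu, pc, cs_base, flags, cflags);
    tb_unlock();

    cpu_tb_exec(cpu, tb);          /* run the generated code unlocked */

    tb_lock();
    tb_phys_invalidate(tb, -1);
    tb_free(tb);
    tb_unlock();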
Signed-off-by: KONRAD Frederic <fred.konrad@greensocs.com>
Message-Id: <1439220437-23957-8-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[AJB: moved into tree, clean-up history]
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-10-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpu-exec.c | 6 ++++++
exec.c | 6 ++++++
hw/i386/kvmvapic.c | 4 ++++
translate-all.c | 34 ++++++++++++++++++++++++++++------
4 files changed, 44 insertions(+), 6 deletions(-)
diff --git a/cpu-exec.c b/cpu-exec.c
index 4879c7d..e9b50a6 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -211,15 +211,21 @@ static void cpu_exec_nocache(CPUState *cpu, int max_cycles,
if (max_cycles > CF_COUNT_MASK)
max_cycles = CF_COUNT_MASK;
+ tb_lock();
tb = tb_gen_code(cpu, orig_tb->pc, orig_tb->cs_base, orig_tb->flags,
max_cycles | CF_NOCACHE
| (ignore_icount ? CF_IGNORE_ICOUNT : 0));
tb->orig_tb = orig_tb;
+ tb_unlock();
+
/* execute the generated code */
trace_exec_tb_nocache(tb, tb->pc);
cpu_tb_exec(cpu, tb);
+
+ tb_lock();
tb_phys_invalidate(tb, -1);
tb_free(tb);
+ tb_unlock();
}
#endif
diff --git a/exec.c b/exec.c
index 4c84389..ab30629 100644
--- a/exec.c
+++ b/exec.c
@@ -2064,6 +2064,12 @@ static void check_watchpoint(int offset, int len, MemTxAttrs attrs, int flags)
continue;
}
cpu->watchpoint_hit = wp;
+
+ /* The tb_lock will be reset when cpu_loop_exit or
+ * cpu_loop_exit_noexc longjmp back into the cpu_exec
+ * main loop.
+ */
+ tb_lock();
tb_check_watchpoint(cpu);
if (wp->flags & BP_STOP_BEFORE_ACCESS) {
cpu->exception_index = EXCP_DEBUG;
diff --git a/hw/i386/kvmvapic.c b/hw/i386/kvmvapic.c
index 74a549b..4448253 100644
--- a/hw/i386/kvmvapic.c
+++ b/hw/i386/kvmvapic.c
@@ -17,6 +17,7 @@
#include "sysemu/kvm.h"
#include "hw/i386/apic_internal.h"
#include "hw/sysbus.h"
+#include "tcg/tcg.h"
#define VAPIC_IO_PORT 0x7e
@@ -449,6 +450,9 @@ static void patch_instruction(VAPICROMState *s, X86CPU *cpu, target_ulong ip)
resume_all_vcpus();
if (!kvm_enabled()) {
+ /* tb_lock will be reset when cpu_loop_exit_noexc longjmps
+ * back into the cpu_exec loop. */
+ tb_lock();
tb_gen_code(cs, current_pc, current_cs_base, current_flags, 1);
cpu_loop_exit_noexc(cs);
}
diff --git a/translate-all.c b/translate-all.c
index 3ff43ec..874f415 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -363,7 +363,9 @@ static int cpu_restore_state_from_tb(CPUState *cpu, TranslationBlock *tb,
bool cpu_restore_state(CPUState *cpu, uintptr_t retaddr)
{
TranslationBlock *tb;
+ bool r = false;
+ tb_lock();
tb = tb_find_pc(retaddr);
if (tb) {
cpu_restore_state_from_tb(cpu, tb, retaddr);
@@ -372,9 +374,11 @@ bool cpu_restore_state(CPUState *cpu, uintptr_t retaddr)
tb_phys_invalidate(tb, -1);
tb_free(tb);
}
- return true;
+ r = true;
}
- return false;
+ tb_unlock();
+
+ return r;
}
void page_size_init(void)
@@ -1456,6 +1460,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
/* we remove all the TBs in the range [start, end[ */
/* XXX: see if in some cases it could be faster to invalidate all
the code */
+ tb_lock();
tb = p->first_tb;
while (tb != NULL) {
n = (uintptr_t)tb & 3;
@@ -1515,6 +1520,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
cpu_loop_exit_noexc(cpu);
}
#endif
+ tb_unlock();
}
#ifdef CONFIG_SOFTMMU
@@ -1584,6 +1590,8 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
if (!p) {
return false;
}
+
+ tb_lock();
tb = p->first_tb;
#ifdef TARGET_HAS_PRECISE_SMC
if (tb && pc != 0) {
@@ -1621,9 +1629,13 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
modifying the memory. It will ensure that it cannot modify
itself */
tb_gen_code(cpu, current_pc, current_cs_base, current_flags, 1);
+ /* tb_lock will be reset after cpu_loop_exit_noexc longjmps
+ * back into the cpu_exec loop. */
return true;
}
#endif
+ tb_unlock();
+
return false;
}
#endif
@@ -1718,6 +1730,7 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
target_ulong pc, cs_base;
uint32_t flags;
+ tb_lock();
tb = tb_find_pc(retaddr);
if (!tb) {
cpu_abort(cpu, "cpu_io_recompile: could not find TB for pc=%p",
@@ -1769,11 +1782,16 @@ void cpu_io_recompile(CPUState *cpu, uintptr_t retaddr)
/* FIXME: In theory this could raise an exception. In practice
we have already translated the block once so it's probably ok. */
tb_gen_code(cpu, pc, cs_base, flags, cflags);
+
/* TODO: If env->pc != tb->pc (i.e. the faulting instruction was not
- the first in the TB) then we end up generating a whole new TB and
- repeating the fault, which is horribly inefficient.
- Better would be to execute just this insn uncached, or generate a
- second new TB. */
+ * the first in the TB) then we end up generating a whole new TB and
+ * repeating the fault, which is horribly inefficient.
+ * Better would be to execute just this insn uncached, or generate a
+ * second new TB.
+ *
+ * cpu_loop_exit_noexc will longjmp back to cpu_exec where the
+ * tb_lock gets reset.
+ */
cpu_loop_exit_noexc(cpu);
}
@@ -1837,6 +1855,8 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
TranslationBlock *tb;
struct qht_stats hst;
+ tb_lock();
+
target_code_size = 0;
max_target_code_size = 0;
cross_page = 0;
@@ -1898,6 +1918,8 @@ void dump_exec_info(FILE *f, fprintf_function cpu_fprintf)
tcg_ctx.tb_ctx.tb_phys_invalidate_count);
cpu_fprintf(f, "TLB flush count %d\n", tlb_flush_count);
tcg_dump_info(f, cpu_fprintf);
+
+ tb_unlock();
}
void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf)
--
2.7.4
* [Qemu-devel] [PULL 09/14] target-arm/arm-powerctl: wake up sleeping CPUs
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée, Alexander Spyridakis
From: Alex Bennée <alex.bennee@linaro.org>
Testing with Alexander's bare-metal synchronisation tests fails under
MTTCG, leaving one CPU spinning forever while waiting for the second CPU
to wake up. We simply need to kick the vCPU once we have processed the
PSCI power-on call.
As the power control API is for system emulation only, as is the
qemu_cpu_kick function, we also ensure arm-powerctl is only built for
SoftMMU builds.
Tested-by: Alex Bennée <alex.bennee@linaro.org>
CC: Alexander Spyridakis <a.spyridakis@virtualopensystems.com>
Message-Id: <1439220437-23957-20-git-send-email-fred.konrad@greensocs.com>
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Message-Id: <20161027151030.20863-11-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
target-arm/Makefile.objs | 2 +-
target-arm/arm-powerctl.c | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/target-arm/Makefile.objs b/target-arm/Makefile.objs
index f206411..847fb52 100644
--- a/target-arm/Makefile.objs
+++ b/target-arm/Makefile.objs
@@ -9,4 +9,4 @@ obj-y += neon_helper.o iwmmxt_helper.o
obj-y += gdbstub.o
obj-$(TARGET_AARCH64) += cpu64.o translate-a64.o helper-a64.o gdbstub64.o
obj-y += crypto_helper.o
-obj-y += arm-powerctl.o
+obj-$(CONFIG_SOFTMMU) += arm-powerctl.o
diff --git a/target-arm/arm-powerctl.c b/target-arm/arm-powerctl.c
index 6519d52..fbb7a15 100644
--- a/target-arm/arm-powerctl.c
+++ b/target-arm/arm-powerctl.c
@@ -166,6 +166,8 @@ int arm_set_cpu_on(uint64_t cpuid, uint64_t entry, uint64_t context_id,
/* Start the new CPU at the requested address */
cpu_set_pc(target_cpu_state, entry);
+ qemu_cpu_kick(target_cpu_state);
+
/* We are good to go */
return QEMU_ARM_POWERCTL_RET_SUCCESS;
}
--
2.7.4
* [Qemu-devel] [PULL 10/14] tcg: move tcg_exec_all and helpers above thread fn
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
This is a purely mechanical change in preparation for upcoming
re-factoring. Instead of using a forward declaration for tcg_exec_all,
it and the associated helper functions are moved in front of their
caller, qemu_tcg_cpu_thread_fn.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-12-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 200 ++++++++++++++++++++++++++++++++---------------------------------
1 file changed, 99 insertions(+), 101 deletions(-)
diff --git a/cpus.c b/cpus.c
index 5324ba3..77cc24b 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1055,7 +1055,105 @@ static void *qemu_dummy_cpu_thread_fn(void *arg)
#endif
}
-static void tcg_exec_all(void);
+static int64_t tcg_get_icount_limit(void)
+{
+ int64_t deadline;
+
+ if (replay_mode != REPLAY_MODE_PLAY) {
+ deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
+
+ /* Maintain prior (possibly buggy) behaviour where if no deadline
+ * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than
+ * INT32_MAX nanoseconds ahead, we still use INT32_MAX
+ * nanoseconds.
+ */
+ if ((deadline < 0) || (deadline > INT32_MAX)) {
+ deadline = INT32_MAX;
+ }
+
+ return qemu_icount_round(deadline);
+ } else {
+ return replay_get_instructions();
+ }
+}
+
+static int tcg_cpu_exec(CPUState *cpu)
+{
+ int ret;
+#ifdef CONFIG_PROFILER
+ int64_t ti;
+#endif
+
+#ifdef CONFIG_PROFILER
+ ti = profile_getclock();
+#endif
+ if (use_icount) {
+ int64_t count;
+ int decr;
+ timers_state.qemu_icount -= (cpu->icount_decr.u16.low
+ + cpu->icount_extra);
+ cpu->icount_decr.u16.low = 0;
+ cpu->icount_extra = 0;
+ count = tcg_get_icount_limit();
+ timers_state.qemu_icount += count;
+ decr = (count > 0xffff) ? 0xffff : count;
+ count -= decr;
+ cpu->icount_decr.u16.low = decr;
+ cpu->icount_extra = count;
+ }
+ cpu_exec_start(cpu);
+ ret = cpu_exec(cpu);
+ cpu_exec_end(cpu);
+#ifdef CONFIG_PROFILER
+ tcg_time += profile_getclock() - ti;
+#endif
+ if (use_icount) {
+ /* Fold pending instructions back into the
+ instruction counter, and clear the interrupt flag. */
+ timers_state.qemu_icount -= (cpu->icount_decr.u16.low
+ + cpu->icount_extra);
+ cpu->icount_decr.u32 = 0;
+ cpu->icount_extra = 0;
+ replay_account_executed_instructions();
+ }
+ return ret;
+}
+
+static void tcg_exec_all(void)
+{
+ int r;
+
+ /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
+ qemu_account_warp_timer();
+
+ if (next_cpu == NULL) {
+ next_cpu = first_cpu;
+ }
+ for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
+ CPUState *cpu = next_cpu;
+
+ qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
+ (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+
+ if (cpu_can_run(cpu)) {
+ r = tcg_cpu_exec(cpu);
+ if (r == EXCP_DEBUG) {
+ cpu_handle_guest_debug(cpu);
+ break;
+ } else if (r == EXCP_ATOMIC) {
+ cpu_exec_step_atomic(cpu);
+ }
+ } else if (cpu->stop || cpu->stopped) {
+ if (cpu->unplug) {
+ next_cpu = CPU_NEXT(cpu);
+ }
+ break;
+ }
+ }
+
+ /* Pairs with smp_wmb in qemu_cpu_kick. */
+ atomic_mb_set(&exit_request, 0);
+}
static void *qemu_tcg_cpu_thread_fn(void *arg)
{
@@ -1412,106 +1510,6 @@ int vm_stop_force_state(RunState state)
}
}
-static int64_t tcg_get_icount_limit(void)
-{
- int64_t deadline;
-
- if (replay_mode != REPLAY_MODE_PLAY) {
- deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
-
- /* Maintain prior (possibly buggy) behaviour where if no deadline
- * was set (as there is no QEMU_CLOCK_VIRTUAL timer) or it is more than
- * INT32_MAX nanoseconds ahead, we still use INT32_MAX
- * nanoseconds.
- */
- if ((deadline < 0) || (deadline > INT32_MAX)) {
- deadline = INT32_MAX;
- }
-
- return qemu_icount_round(deadline);
- } else {
- return replay_get_instructions();
- }
-}
-
-static int tcg_cpu_exec(CPUState *cpu)
-{
- int ret;
-#ifdef CONFIG_PROFILER
- int64_t ti;
-#endif
-
-#ifdef CONFIG_PROFILER
- ti = profile_getclock();
-#endif
- if (use_icount) {
- int64_t count;
- int decr;
- timers_state.qemu_icount -= (cpu->icount_decr.u16.low
- + cpu->icount_extra);
- cpu->icount_decr.u16.low = 0;
- cpu->icount_extra = 0;
- count = tcg_get_icount_limit();
- timers_state.qemu_icount += count;
- decr = (count > 0xffff) ? 0xffff : count;
- count -= decr;
- cpu->icount_decr.u16.low = decr;
- cpu->icount_extra = count;
- }
- cpu_exec_start(cpu);
- ret = cpu_exec(cpu);
- cpu_exec_end(cpu);
-#ifdef CONFIG_PROFILER
- tcg_time += profile_getclock() - ti;
-#endif
- if (use_icount) {
- /* Fold pending instructions back into the
- instruction counter, and clear the interrupt flag. */
- timers_state.qemu_icount -= (cpu->icount_decr.u16.low
- + cpu->icount_extra);
- cpu->icount_decr.u32 = 0;
- cpu->icount_extra = 0;
- replay_account_executed_instructions();
- }
- return ret;
-}
-
-static void tcg_exec_all(void)
-{
- int r;
-
- /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
- qemu_account_warp_timer();
-
- if (next_cpu == NULL) {
- next_cpu = first_cpu;
- }
- for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
- CPUState *cpu = next_cpu;
-
- qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
- (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
-
- if (cpu_can_run(cpu)) {
- r = tcg_cpu_exec(cpu);
- if (r == EXCP_DEBUG) {
- cpu_handle_guest_debug(cpu);
- break;
- } else if (r == EXCP_ATOMIC) {
- cpu_exec_step_atomic(cpu);
- }
- } else if (cpu->stop || cpu->stopped) {
- if (cpu->unplug) {
- next_cpu = CPU_NEXT(cpu);
- }
- break;
- }
- }
-
- /* Pairs with smp_wmb in qemu_cpu_kick. */
- atomic_mb_set(&exit_request, 0);
-}
-
void list_cpus(FILE *f, fprintf_function cpu_fprintf, const char *optarg)
{
/* XXX: implement xxx_cpu_list for targets that still miss it */
--
2.7.4
* [Qemu-devel] [PULL 11/14] tcg: cpus rm tcg_exec_all()
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
In preparation for multi-threaded TCG we remove tcg_exec_all and move
all the CPU cycling into the main thread function. When MTTCG is enabled
we shall use a separate thread function which only handles one vCPU.
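Stripped of icount, debug and unplug handling, the scheduling loop that
replaces tcg_exec_all looks like this (condensed from the diff below):

    cpu = first_cpu;
    while (1) {
        qemu_account_warp_timer();
        if (!cpu) {
            cpu = first_cpu;
        }
        for (; cpu != NULL && !exit_request; cpu = CPU_NEXT(cpu)) {
            if (cpu_can_run(cpu)) {
                tcg_cpu_exec(cpu);
            }
        }
        atomic_mb_set(&exit_request, 0);
        qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
        deal_with_unplugged_cpus();
    }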
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Sergey Fedorov <sergey.fedorov@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-13-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 87 +++++++++++++++++++++++++++++++++---------------------------------
1 file changed, 43 insertions(+), 44 deletions(-)
diff --git a/cpus.c b/cpus.c
index 77cc24b..cc49902 100644
--- a/cpus.c
+++ b/cpus.c
@@ -69,7 +69,6 @@
#endif /* CONFIG_LINUX */
-static CPUState *next_cpu;
int64_t max_delay;
int64_t max_advance;
@@ -1119,46 +1118,26 @@ static int tcg_cpu_exec(CPUState *cpu)
return ret;
}
-static void tcg_exec_all(void)
+/* Destroy any remaining vCPUs which have been unplugged and have
+ * finished running
+ */
+static void deal_with_unplugged_cpus(void)
{
- int r;
-
- /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
- qemu_account_warp_timer();
-
- if (next_cpu == NULL) {
- next_cpu = first_cpu;
- }
- for (; next_cpu != NULL && !exit_request; next_cpu = CPU_NEXT(next_cpu)) {
- CPUState *cpu = next_cpu;
-
- qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
- (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+ CPUState *cpu;
- if (cpu_can_run(cpu)) {
- r = tcg_cpu_exec(cpu);
- if (r == EXCP_DEBUG) {
- cpu_handle_guest_debug(cpu);
- break;
- } else if (r == EXCP_ATOMIC) {
- cpu_exec_step_atomic(cpu);
- }
- } else if (cpu->stop || cpu->stopped) {
- if (cpu->unplug) {
- next_cpu = CPU_NEXT(cpu);
- }
+ CPU_FOREACH(cpu) {
+ if (cpu->unplug && !cpu_can_run(cpu)) {
+ qemu_tcg_destroy_vcpu(cpu);
+ cpu->created = false;
+ qemu_cond_signal(&qemu_cpu_cond);
break;
}
}
-
- /* Pairs with smp_wmb in qemu_cpu_kick. */
- atomic_mb_set(&exit_request, 0);
}
static void *qemu_tcg_cpu_thread_fn(void *arg)
{
CPUState *cpu = arg;
- CPUState *remove_cpu = NULL;
rcu_register_thread();
@@ -1185,8 +1164,39 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
/* process any pending work */
atomic_mb_set(&exit_request, 1);
+ cpu = first_cpu;
+
while (1) {
- tcg_exec_all();
+ /* Account partial waits to QEMU_CLOCK_VIRTUAL. */
+ qemu_account_warp_timer();
+
+ if (!cpu) {
+ cpu = first_cpu;
+ }
+
+ for (; cpu != NULL && !exit_request; cpu = CPU_NEXT(cpu)) {
+
+ qemu_clock_enable(QEMU_CLOCK_VIRTUAL,
+ (cpu->singlestep_enabled & SSTEP_NOTIMER) == 0);
+
+ if (cpu_can_run(cpu)) {
+ int r;
+ r = tcg_cpu_exec(cpu);
+ if (r == EXCP_DEBUG) {
+ cpu_handle_guest_debug(cpu);
+ break;
+ }
+ } else if (cpu->stop || cpu->stopped) {
+ if (cpu->unplug) {
+ cpu = CPU_NEXT(cpu);
+ }
+ break;
+ }
+
+ } /* for cpu.. */
+
+ /* Pairs with smp_wmb in qemu_cpu_kick. */
+ atomic_mb_set(&exit_request, 0);
if (use_icount) {
int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
@@ -1196,18 +1206,7 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
}
}
qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
- CPU_FOREACH(cpu) {
- if (cpu->unplug && !cpu_can_run(cpu)) {
- remove_cpu = cpu;
- break;
- }
- }
- if (remove_cpu) {
- qemu_tcg_destroy_vcpu(remove_cpu);
- cpu->created = false;
- qemu_cond_signal(&qemu_cpu_cond);
- remove_cpu = NULL;
- }
+ deal_with_unplugged_cpus();
}
return NULL;
--
2.7.4
* [Qemu-devel] [PULL 12/14] cpus: re-factor out handle_icount_deadline
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
In preparation for adding an MTTCG thread we re-factor out a bit of what
will be common code to handle the QEMU_CLOCK_VIRTUAL expiration.
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Reviewed-by: Richard Henderson <rth@twiddle.net>
Message-Id: <20161027151030.20863-18-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus.c | 19 +++++++++++++------
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/cpus.c b/cpus.c
index cc49902..6f0dc1a 100644
--- a/cpus.c
+++ b/cpus.c
@@ -1076,6 +1076,18 @@ static int64_t tcg_get_icount_limit(void)
}
}
+static void handle_icount_deadline(void)
+{
+ if (use_icount) {
+ int64_t deadline =
+ qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
+
+ if (deadline == 0) {
+ qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
+ }
+ }
+}
+
static int tcg_cpu_exec(CPUState *cpu)
{
int ret;
@@ -1198,13 +1210,8 @@ static void *qemu_tcg_cpu_thread_fn(void *arg)
/* Pairs with smp_wmb in qemu_cpu_kick. */
atomic_mb_set(&exit_request, 0);
- if (use_icount) {
- int64_t deadline = qemu_clock_deadline_ns_all(QEMU_CLOCK_VIRTUAL);
+ handle_icount_deadline();
- if (deadline == 0) {
- qemu_clock_notify(QEMU_CLOCK_VIRTUAL);
- }
- }
qemu_tcg_wait_io_event(QTAILQ_FIRST(&cpus));
deal_with_unplugged_cpus();
}
--
2.7.4
* [Qemu-devel] [PULL 13/14] *_run_on_cpu: introduce run_on_cpu_data type
From: Paolo Bonzini @ 2016-10-31 14:13 UTC
To: qemu-devel; +Cc: Alex Bennée
This changes the *_run_on_cpu APIs (and helpers) to pass data in a
run_on_cpu_data type instead of a plain void *. This is because we
sometimes want to pass a target address (target_ulong), which fails on
32 bit hosts emulating 64 bit guests.
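A sketch of the intended usage (the helper name is hypothetical; the
union field and macro are from the patch below):

    /* A guest virtual address no longer gets truncated through void *
     * on 32 bit hosts: */
    static void do_flush_guest_page(CPUState *cpu, run_on_cpu_data data)
    {
        target_ulong addr = data.target_ptr;
        /* ... operate on the guest page at addr ... */
    }

    async_run_on_cpu(cpu, do_flush_guest_page, RUN_ON_CPU_TARGET_PTR(addr));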
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20161027151030.20863-24-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
cpus-common.c | 9 +++++----
cpus.c | 7 ++++---
hw/i386/kvm/apic.c | 14 +++++++-------
hw/i386/kvmvapic.c | 13 ++++++-------
hw/ppc/ppce500_spin.c | 6 +++---
hw/ppc/spapr.c | 4 ++--
hw/ppc/spapr_hcall.c | 12 ++++++------
include/qom/cpu.h | 28 +++++++++++++++++++++++-----
kvm-all.c | 20 +++++++++++---------
target-i386/helper.c | 8 ++++----
target-i386/kvm.c | 4 ++--
target-s390x/cpu.c | 4 ++--
target-s390x/cpu.h | 4 ++--
target-s390x/kvm.c | 20 ++++++++++----------
target-s390x/misc_helper.c | 4 ++--
translate-all.c | 13 ++++++-------
16 files changed, 95 insertions(+), 75 deletions(-)
diff --git a/cpus-common.c b/cpus-common.c
index 3e11452..59f751e 100644
--- a/cpus-common.c
+++ b/cpus-common.c
@@ -109,7 +109,7 @@ void cpu_list_remove(CPUState *cpu)
struct qemu_work_item {
struct qemu_work_item *next;
run_on_cpu_func func;
- void *data;
+ run_on_cpu_data data;
bool free, exclusive, done;
};
@@ -129,7 +129,7 @@ static void queue_work_on_cpu(CPUState *cpu, struct qemu_work_item *wi)
qemu_cpu_kick(cpu);
}
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data,
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
QemuMutex *mutex)
{
struct qemu_work_item wi;
@@ -154,7 +154,7 @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data,
}
}
-void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
{
struct qemu_work_item *wi;
@@ -296,7 +296,8 @@ void cpu_exec_end(CPUState *cpu)
}
}
-void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func,
+ run_on_cpu_data data)
{
struct qemu_work_item *wi;
diff --git a/cpus.c b/cpus.c
index 6f0dc1a..5213351 100644
--- a/cpus.c
+++ b/cpus.c
@@ -556,7 +556,7 @@ static const VMStateDescription vmstate_timers = {
}
};
-static void cpu_throttle_thread(CPUState *cpu, void *opaque)
+static void cpu_throttle_thread(CPUState *cpu, run_on_cpu_data opaque)
{
double pct;
double throttle_ratio;
@@ -587,7 +587,8 @@ static void cpu_throttle_timer_tick(void *opaque)
}
CPU_FOREACH(cpu) {
if (!atomic_xchg(&cpu->throttle_thread_scheduled, 1)) {
- async_run_on_cpu(cpu, cpu_throttle_thread, NULL);
+ async_run_on_cpu(cpu, cpu_throttle_thread,
+ RUN_ON_CPU_NULL);
}
}
@@ -914,7 +915,7 @@ void qemu_init_cpu_loop(void)
qemu_thread_get_self(&io_thread);
}
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data)
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data)
{
do_run_on_cpu(cpu, func, data, &qemu_global_mutex);
}
diff --git a/hw/i386/kvm/apic.c b/hw/i386/kvm/apic.c
index 39b73e7..01cbaa8 100644
--- a/hw/i386/kvm/apic.c
+++ b/hw/i386/kvm/apic.c
@@ -133,9 +133,9 @@ static void kvm_apic_vapic_base_update(APICCommonState *s)
}
}
-static void kvm_apic_put(CPUState *cs, void *data)
+static void kvm_apic_put(CPUState *cs, run_on_cpu_data data)
{
- APICCommonState *s = data;
+ APICCommonState *s = data.host_ptr;
struct kvm_lapic_state kapic;
int ret;
@@ -151,12 +151,12 @@ static void kvm_apic_put(CPUState *cs, void *data)
static void kvm_apic_post_load(APICCommonState *s)
{
- run_on_cpu(CPU(s->cpu), kvm_apic_put, s);
+ run_on_cpu(CPU(s->cpu), kvm_apic_put, RUN_ON_CPU_HOST_PTR(s));
}
-static void do_inject_external_nmi(CPUState *cpu, void *data)
+static void do_inject_external_nmi(CPUState *cpu, run_on_cpu_data data)
{
- APICCommonState *s = data;
+ APICCommonState *s = data.host_ptr;
uint32_t lvt;
int ret;
@@ -174,7 +174,7 @@ static void do_inject_external_nmi(CPUState *cpu, void *data)
static void kvm_apic_external_nmi(APICCommonState *s)
{
- run_on_cpu(CPU(s->cpu), do_inject_external_nmi, s);
+ run_on_cpu(CPU(s->cpu), do_inject_external_nmi, RUN_ON_CPU_HOST_PTR(s));
}
static void kvm_send_msi(MSIMessage *msg)
@@ -213,7 +213,7 @@ static void kvm_apic_reset(APICCommonState *s)
/* Not used by KVM, which uses the CPU mp_state instead. */
s->wait_for_sipi = 0;
- run_on_cpu(CPU(s->cpu), kvm_apic_put, s);
+ run_on_cpu(CPU(s->cpu), kvm_apic_put, RUN_ON_CPU_HOST_PTR(s));
}
static void kvm_apic_realize(DeviceState *dev, Error **errp)
diff --git a/hw/i386/kvmvapic.c b/hw/i386/kvmvapic.c
index 4448253..b30d1b9 100644
--- a/hw/i386/kvmvapic.c
+++ b/hw/i386/kvmvapic.c
@@ -487,10 +487,9 @@ typedef struct VAPICEnableTPRReporting {
bool enable;
} VAPICEnableTPRReporting;
-static void vapic_do_enable_tpr_reporting(CPUState *cpu, void *data)
+static void vapic_do_enable_tpr_reporting(CPUState *cpu, run_on_cpu_data data)
{
- VAPICEnableTPRReporting *info = data;
-
+ VAPICEnableTPRReporting *info = data.host_ptr;
apic_enable_tpr_access_reporting(info->apic, info->enable);
}
@@ -505,7 +504,7 @@ static void vapic_enable_tpr_reporting(bool enable)
CPU_FOREACH(cs) {
cpu = X86_CPU(cs);
info.apic = cpu->apic_state;
- run_on_cpu(cs, vapic_do_enable_tpr_reporting, &info);
+ run_on_cpu(cs, vapic_do_enable_tpr_reporting, RUN_ON_CPU_HOST_PTR(&info));
}
}
@@ -738,9 +737,9 @@ static void vapic_realize(DeviceState *dev, Error **errp)
nb_option_roms++;
}
-static void do_vapic_enable(CPUState *cs, void *data)
+static void do_vapic_enable(CPUState *cs, run_on_cpu_data data)
{
- VAPICROMState *s = data;
+ VAPICROMState *s = data.host_ptr;
X86CPU *cpu = X86_CPU(cs);
static const uint8_t enabled = 1;
@@ -762,7 +761,7 @@ static void kvmvapic_vm_state_change(void *opaque, int running,
if (s->state == VAPIC_ACTIVE) {
if (smp_cpus == 1) {
- run_on_cpu(first_cpu, do_vapic_enable, s);
+ run_on_cpu(first_cpu, do_vapic_enable, RUN_ON_CPU_HOST_PTR(s));
} else {
zero = g_malloc0(s->rom_state.vapic_size);
cpu_physical_memory_write(s->vapic_paddr, zero,
diff --git a/hw/ppc/ppce500_spin.c b/hw/ppc/ppce500_spin.c
index 8e16f65..cf958a9 100644
--- a/hw/ppc/ppce500_spin.c
+++ b/hw/ppc/ppce500_spin.c
@@ -84,11 +84,11 @@ static void mmubooke_create_initial_mapping(CPUPPCState *env,
env->tlb_dirty = true;
}
-static void spin_kick(CPUState *cs, void *data)
+static void spin_kick(CPUState *cs, run_on_cpu_data data)
{
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *env = &cpu->env;
- SpinInfo *curspin = data;
+ SpinInfo *curspin = data.host_ptr;
hwaddr map_size = 64 * 1024 * 1024;
hwaddr map_start;
@@ -147,7 +147,7 @@ static void spin_write(void *opaque, hwaddr addr, uint64_t value,
if (!(ldq_p(&curspin->addr) & 1)) {
/* run CPU */
- run_on_cpu(cpu, spin_kick, curspin);
+ run_on_cpu(cpu, spin_kick, RUN_ON_CPU_HOST_PTR(curspin));
}
}
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 486f57d..91989f0 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -2148,7 +2148,7 @@ static void spapr_machine_finalizefn(Object *obj)
g_free(spapr->kvm_type);
}
-static void ppc_cpu_do_nmi_on_cpu(CPUState *cs, void *arg)
+static void ppc_cpu_do_nmi_on_cpu(CPUState *cs, run_on_cpu_data arg)
{
cpu_synchronize_state(cs);
ppc_cpu_do_system_reset(cs);
@@ -2159,7 +2159,7 @@ static void spapr_nmi(NMIState *n, int cpu_index, Error **errp)
CPUState *cs;
CPU_FOREACH(cs) {
- async_run_on_cpu(cs, ppc_cpu_do_nmi_on_cpu, NULL);
+ async_run_on_cpu(cs, ppc_cpu_do_nmi_on_cpu, RUN_ON_CPU_NULL);
}
}
diff --git a/hw/ppc/spapr_hcall.c b/hw/ppc/spapr_hcall.c
index c5e7e8c..682de40 100644
--- a/hw/ppc/spapr_hcall.c
+++ b/hw/ppc/spapr_hcall.c
@@ -18,9 +18,9 @@ struct SPRSyncState {
target_ulong mask;
};
-static void do_spr_sync(CPUState *cs, void *arg)
+static void do_spr_sync(CPUState *cs, run_on_cpu_data arg)
{
- struct SPRSyncState *s = arg;
+ struct SPRSyncState *s = arg.host_ptr;
PowerPCCPU *cpu = POWERPC_CPU(cs);
CPUPPCState *env = &cpu->env;
@@ -37,7 +37,7 @@ static void set_spr(CPUState *cs, int spr, target_ulong value,
.value = value,
.mask = mask
};
- run_on_cpu(cs, do_spr_sync, &s);
+ run_on_cpu(cs, do_spr_sync, RUN_ON_CPU_HOST_PTR(&s));
}
static bool has_spr(PowerPCCPU *cpu, int spr)
@@ -911,10 +911,10 @@ typedef struct {
Error *err;
} SetCompatState;
-static void do_set_compat(CPUState *cs, void *arg)
+static void do_set_compat(CPUState *cs, run_on_cpu_data arg)
{
PowerPCCPU *cpu = POWERPC_CPU(cs);
- SetCompatState *s = arg;
+ SetCompatState *s = arg.host_ptr;
cpu_synchronize_state(cs);
ppc_set_compat(cpu, s->cpu_version, &s->err);
@@ -1017,7 +1017,7 @@ static target_ulong h_client_architecture_support(PowerPCCPU *cpu_,
.err = NULL,
};
- run_on_cpu(cs, do_set_compat, &s);
+ run_on_cpu(cs, do_set_compat, RUN_ON_CPU_HOST_PTR(&s));
if (s.err) {
error_report_err(s.err);
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index 9f597bb..3f79a8e 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -231,7 +231,25 @@ struct kvm_run;
#define TB_JMP_CACHE_SIZE (1 << TB_JMP_CACHE_BITS)
/* work queue */
-typedef void (*run_on_cpu_func)(CPUState *cpu, void *data);
+
+/* The union type allows passing of 64 bit target pointers on 32 bit
+ * hosts in a single parameter
+ */
+typedef union {
+ int host_int;
+ unsigned long host_ulong;
+ void *host_ptr;
+ vaddr target_ptr;
+} run_on_cpu_data;
+
+#define RUN_ON_CPU_HOST_PTR(p) ((run_on_cpu_data){.host_ptr = (p)})
+#define RUN_ON_CPU_HOST_INT(i) ((run_on_cpu_data){.host_int = (i)})
+#define RUN_ON_CPU_HOST_ULONG(ul) ((run_on_cpu_data){.host_ulong = (ul)})
+#define RUN_ON_CPU_TARGET_PTR(v) ((run_on_cpu_data){.target_ptr = (v)})
+#define RUN_ON_CPU_NULL RUN_ON_CPU_HOST_PTR(NULL)
+
+typedef void (*run_on_cpu_func)(CPUState *cpu, run_on_cpu_data data);
+
struct qemu_work_item;
/**
@@ -637,7 +655,7 @@ bool cpu_is_stopped(CPUState *cpu);
*
* Used internally in the implementation of run_on_cpu.
*/
-void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data,
+void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data,
QemuMutex *mutex);
/**
@@ -648,7 +666,7 @@ void do_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data,
*
* Schedules the function @func for execution on the vCPU @cpu.
*/
-void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
+void run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
/**
* async_run_on_cpu:
@@ -658,7 +676,7 @@ void run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
*
* Schedules the function @func for execution on the vCPU @cpu asynchronously.
*/
-void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
+void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
/**
* async_safe_run_on_cpu:
@@ -672,7 +690,7 @@ void async_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
* Unlike run_on_cpu and async_run_on_cpu, the function is run outside the
* BQL.
*/
-void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, void *data);
+void async_safe_run_on_cpu(CPUState *cpu, run_on_cpu_func func, run_on_cpu_data data);
/**
* qemu_get_cpu:
diff --git a/kvm-all.c b/kvm-all.c
index 3dcce16..330219e 100644
--- a/kvm-all.c
+++ b/kvm-all.c
@@ -1856,7 +1856,7 @@ void kvm_flush_coalesced_mmio_buffer(void)
s->coalesced_flush_in_progress = false;
}
-static void do_kvm_cpu_synchronize_state(CPUState *cpu, void *arg)
+static void do_kvm_cpu_synchronize_state(CPUState *cpu, run_on_cpu_data arg)
{
if (!cpu->kvm_vcpu_dirty) {
kvm_arch_get_registers(cpu);
@@ -1867,11 +1867,11 @@ static void do_kvm_cpu_synchronize_state(CPUState *cpu, void *arg)
void kvm_cpu_synchronize_state(CPUState *cpu)
{
if (!cpu->kvm_vcpu_dirty) {
- run_on_cpu(cpu, do_kvm_cpu_synchronize_state, NULL);
+ run_on_cpu(cpu, do_kvm_cpu_synchronize_state, RUN_ON_CPU_NULL);
}
}
-static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, void *arg)
+static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, run_on_cpu_data arg)
{
kvm_arch_put_registers(cpu, KVM_PUT_RESET_STATE);
cpu->kvm_vcpu_dirty = false;
@@ -1879,10 +1879,10 @@ static void do_kvm_cpu_synchronize_post_reset(CPUState *cpu, void *arg)
void kvm_cpu_synchronize_post_reset(CPUState *cpu)
{
- run_on_cpu(cpu, do_kvm_cpu_synchronize_post_reset, NULL);
+ run_on_cpu(cpu, do_kvm_cpu_synchronize_post_reset, RUN_ON_CPU_NULL);
}
-static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, void *arg)
+static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, run_on_cpu_data arg)
{
kvm_arch_put_registers(cpu, KVM_PUT_FULL_STATE);
cpu->kvm_vcpu_dirty = false;
@@ -1890,7 +1890,7 @@ static void do_kvm_cpu_synchronize_post_init(CPUState *cpu, void *arg)
void kvm_cpu_synchronize_post_init(CPUState *cpu)
{
- run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, NULL);
+ run_on_cpu(cpu, do_kvm_cpu_synchronize_post_init, RUN_ON_CPU_NULL);
}
int kvm_cpu_exec(CPUState *cpu)
@@ -2218,9 +2218,10 @@ struct kvm_set_guest_debug_data {
int err;
};
-static void kvm_invoke_set_guest_debug(CPUState *cpu, void *data)
+static void kvm_invoke_set_guest_debug(CPUState *cpu, run_on_cpu_data data)
{
- struct kvm_set_guest_debug_data *dbg_data = data;
+ struct kvm_set_guest_debug_data *dbg_data =
+ (struct kvm_set_guest_debug_data *) data.host_ptr;
dbg_data->err = kvm_vcpu_ioctl(cpu, KVM_SET_GUEST_DEBUG,
&dbg_data->dbg);
@@ -2237,7 +2238,8 @@ int kvm_update_guest_debug(CPUState *cpu, unsigned long reinject_trap)
}
kvm_arch_update_guest_debug(cpu, &data.dbg);
- run_on_cpu(cpu, kvm_invoke_set_guest_debug, &data);
+ run_on_cpu(cpu, kvm_invoke_set_guest_debug,
+ RUN_ON_CPU_HOST_PTR(&data));
return data.err;
}
diff --git a/target-i386/helper.c b/target-i386/helper.c
index 9bc961b..4ecc091 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -1121,9 +1121,9 @@ typedef struct MCEInjectionParams {
int flags;
} MCEInjectionParams;
-static void do_inject_x86_mce(CPUState *cs, void *data)
+static void do_inject_x86_mce(CPUState *cs, run_on_cpu_data data)
{
- MCEInjectionParams *params = data;
+ MCEInjectionParams *params = data.host_ptr;
X86CPU *cpu = X86_CPU(cs);
CPUX86State *cenv = &cpu->env;
uint64_t *banks = cenv->mce_banks + 4 * params->bank;
@@ -1230,7 +1230,7 @@ void cpu_x86_inject_mce(Monitor *mon, X86CPU *cpu, int bank,
return;
}
- run_on_cpu(cs, do_inject_x86_mce, &params);
+ run_on_cpu(cs, do_inject_x86_mce, RUN_ON_CPU_HOST_PTR(&params));
if (flags & MCE_INJECT_BROADCAST) {
CPUState *other_cs;
@@ -1243,7 +1243,7 @@ void cpu_x86_inject_mce(Monitor *mon, X86CPU *cpu, int bank,
if (other_cs == cs) {
continue;
}
- run_on_cpu(other_cs, do_inject_x86_mce, &params);
+ run_on_cpu(other_cs, do_inject_x86_mce, RUN_ON_CPU_HOST_PTR(&params));
}
}
}
diff --git a/target-i386/kvm.c b/target-i386/kvm.c
index 86b41a9..1c0864e 100644
--- a/target-i386/kvm.c
+++ b/target-i386/kvm.c
@@ -183,7 +183,7 @@ static int kvm_get_tsc(CPUState *cs)
return 0;
}
-static inline void do_kvm_synchronize_tsc(CPUState *cpu, void *arg)
+static inline void do_kvm_synchronize_tsc(CPUState *cpu, run_on_cpu_data arg)
{
kvm_get_tsc(cpu);
}
@@ -194,7 +194,7 @@ void kvm_synchronize_all_tsc(void)
if (kvm_enabled()) {
CPU_FOREACH(cpu) {
- run_on_cpu(cpu, do_kvm_synchronize_tsc, NULL);
+ run_on_cpu(cpu, do_kvm_synchronize_tsc, RUN_ON_CPU_NULL);
}
}
}
diff --git a/target-s390x/cpu.c b/target-s390x/cpu.c
index 9e2f239..0a39d31 100644
--- a/target-s390x/cpu.c
+++ b/target-s390x/cpu.c
@@ -164,7 +164,7 @@ static void s390_cpu_machine_reset_cb(void *opaque)
{
S390CPU *cpu = opaque;
- run_on_cpu(CPU(cpu), s390_do_cpu_full_reset, NULL);
+ run_on_cpu(CPU(cpu), s390_do_cpu_full_reset, RUN_ON_CPU_NULL);
}
#endif
@@ -220,7 +220,7 @@ static void s390_cpu_realizefn(DeviceState *dev, Error **errp)
s390_cpu_gdb_init(cs);
qemu_init_vcpu(cs);
#if !defined(CONFIG_USER_ONLY)
- run_on_cpu(cs, s390_do_cpu_full_reset, NULL);
+ run_on_cpu(cs, s390_do_cpu_full_reset, RUN_ON_CPU_NULL);
#else
cpu_reset(cs);
#endif
diff --git a/target-s390x/cpu.h b/target-s390x/cpu.h
index 4e58cde..fd36a25 100644
--- a/target-s390x/cpu.h
+++ b/target-s390x/cpu.h
@@ -502,13 +502,13 @@ static inline hwaddr decode_basedisp_s(CPUS390XState *env, uint32_t ipb,
#define decode_basedisp_rs decode_basedisp_s
/* helper functions for run_on_cpu() */
-static inline void s390_do_cpu_reset(CPUState *cs, void *arg)
+static inline void s390_do_cpu_reset(CPUState *cs, run_on_cpu_data arg)
{
S390CPUClass *scc = S390_CPU_GET_CLASS(cs);
scc->cpu_reset(cs);
}
-static inline void s390_do_cpu_full_reset(CPUState *cs, void *arg)
+static inline void s390_do_cpu_full_reset(CPUState *cs, run_on_cpu_data arg)
{
cpu_reset(cs);
}
diff --git a/target-s390x/kvm.c b/target-s390x/kvm.c
index 7f74572..36b4847 100644
--- a/target-s390x/kvm.c
+++ b/target-s390x/kvm.c
@@ -1607,7 +1607,7 @@ int kvm_s390_cpu_restart(S390CPU *cpu)
{
SigpInfo si = {};
- run_on_cpu(CPU(cpu), sigp_restart, &si);
+ run_on_cpu(CPU(cpu), sigp_restart, RUN_ON_CPU_HOST_PTR(&si));
DPRINTF("DONE: KVM cpu restart: %p\n", &cpu->env);
return 0;
}
@@ -1683,31 +1683,31 @@ static int handle_sigp_single_dst(S390CPU *dst_cpu, uint8_t order,
switch (order) {
case SIGP_START:
- run_on_cpu(CPU(dst_cpu), sigp_start, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_start, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_STOP:
- run_on_cpu(CPU(dst_cpu), sigp_stop, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_stop, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_RESTART:
- run_on_cpu(CPU(dst_cpu), sigp_restart, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_restart, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_STOP_STORE_STATUS:
- run_on_cpu(CPU(dst_cpu), sigp_stop_and_store_status, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_stop_and_store_status, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_STORE_STATUS_ADDR:
- run_on_cpu(CPU(dst_cpu), sigp_store_status_at_address, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_store_status_at_address, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_STORE_ADTL_STATUS:
- run_on_cpu(CPU(dst_cpu), sigp_store_adtl_status, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_store_adtl_status, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_SET_PREFIX:
- run_on_cpu(CPU(dst_cpu), sigp_set_prefix, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_set_prefix, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_INITIAL_CPU_RESET:
- run_on_cpu(CPU(dst_cpu), sigp_initial_cpu_reset, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_initial_cpu_reset, RUN_ON_CPU_HOST_PTR(&si));
break;
case SIGP_CPU_RESET:
- run_on_cpu(CPU(dst_cpu), sigp_cpu_reset, &si);
+ run_on_cpu(CPU(dst_cpu), sigp_cpu_reset, RUN_ON_CPU_HOST_PTR(&si));
break;
default:
DPRINTF("KVM: unknown SIGP: 0x%x\n", order);
diff --git a/target-s390x/misc_helper.c b/target-s390x/misc_helper.c
index 4df2ec6..c9604ea 100644
--- a/target-s390x/misc_helper.c
+++ b/target-s390x/misc_helper.c
@@ -126,7 +126,7 @@ static int modified_clear_reset(S390CPU *cpu)
pause_all_vcpus();
cpu_synchronize_all_states();
CPU_FOREACH(t) {
- run_on_cpu(t, s390_do_cpu_full_reset, NULL);
+ run_on_cpu(t, s390_do_cpu_full_reset, RUN_ON_CPU_NULL);
}
s390_cmma_reset();
subsystem_reset();
@@ -145,7 +145,7 @@ static int load_normal_reset(S390CPU *cpu)
pause_all_vcpus();
cpu_synchronize_all_states();
CPU_FOREACH(t) {
- run_on_cpu(t, s390_do_cpu_reset, NULL);
+ run_on_cpu(t, s390_do_cpu_reset, RUN_ON_CPU_NULL);
}
s390_cmma_reset();
subsystem_reset();
diff --git a/translate-all.c b/translate-all.c
index 874f415..01b1604 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -917,16 +917,14 @@ static void page_flush_tb(void)
}
/* flush all the translation blocks */
-static void do_tb_flush(CPUState *cpu, void *data)
+static void do_tb_flush(CPUState *cpu, run_on_cpu_data tb_flush_count)
{
- unsigned tb_flush_req = (unsigned) (uintptr_t) data;
-
tb_lock();
- /* If it's already been done on request of another CPU,
+ /* If it has already been done on request of another CPU,
* just retry.
*/
- if (tcg_ctx.tb_ctx.tb_flush_count != tb_flush_req) {
+ if (tcg_ctx.tb_ctx.tb_flush_count != tb_flush_count.host_int) {
goto done;
}
@@ -967,8 +965,9 @@ done:
void tb_flush(CPUState *cpu)
{
if (tcg_enabled()) {
- uintptr_t tb_flush_req = atomic_mb_read(&tcg_ctx.tb_ctx.tb_flush_count);
- async_safe_run_on_cpu(cpu, do_tb_flush, (void *) tb_flush_req);
+ unsigned tb_flush_count = atomic_mb_read(&tcg_ctx.tb_ctx.tb_flush_count);
+ async_safe_run_on_cpu(cpu, do_tb_flush,
+ RUN_ON_CPU_HOST_INT(tb_flush_count));
}
}
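For illustration, here is how a caller adopts the new convention end to
end. This is a sketch only, assuming the run_on_cpu_data union and
RUN_ON_CPU_* macros introduced above; the MyWork struct and helper
names are hypothetical, not part of the patch:

/* Sketch: pack a host pointer on the way in, unpack it via the
 * matching union member in the worker. run_on_cpu() is synchronous,
 * so a stack-allocated argument is safe here. */
typedef struct {
    int result;
} MyWork;

static void do_my_work(CPUState *cs, run_on_cpu_data arg)
{
    MyWork *w = arg.host_ptr;   /* matches RUN_ON_CPU_HOST_PTR below */
    w->result = 42;
}

static int run_my_work(CPUState *cs)
{
    MyWork w = { .result = 0 };
    run_on_cpu(cs, do_my_work, RUN_ON_CPU_HOST_PTR(&w));
    return w.result;            /* valid: the worker has completed */
}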
--
2.7.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* [Qemu-devel] [PULL 14/14] tcg: move locking for tb_invalidate_phys_page_range up
2016-10-31 14:13 [Qemu-devel] [PULL 00/14] MTTCG patches for 2016-10-31 Paolo Bonzini
` (12 preceding siblings ...)
2016-10-31 14:13 ` [Qemu-devel] [PULL 13/14] *_run_on_cpu: introduce run_on_cpu_data type Paolo Bonzini
@ 2016-10-31 14:13 ` Paolo Bonzini
2016-10-31 16:12 ` [Qemu-devel] [PULL 00/14] MTTCG patches for 2016-10-31 Peter Maydell
14 siblings, 0 replies; 16+ messages in thread
From: Paolo Bonzini @ 2016-10-31 14:13 UTC (permalink / raw)
To: qemu-devel; +Cc: Alex Bennée
From: Alex Bennée <alex.bennee@linaro.org>
In the linux-user case all things that involve 'l1_map' and PageDesc
tweaks are protected by the memory lock (mmap_lock). For SoftMMU mode
we previously relied on single-threaded behaviour; with MTTCG we now use
the tb_lock().
As a result we need to do a little re-factoring and push the taking of
this lock up the call tree. This requires slightly different entry
points into tb_invalidate_phys_range for the SoftMMU and user-mode cases.
This also means user-mode breakpoint insertion needs to take two locks,
but it hadn't taken any previously, so this is an improvement.
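A minimal sketch of the resulting user-mode convention (the wrapper
name is hypothetical; the lock calls and the invalidate function are
the real API reworked below):

/* User-mode: mmap_lock is the outer lock and tb_lock nests inside it,
 * mirroring the breakpoint_invalidate() change in the diff. */
static void invalidate_code_at(target_ulong pc)
{
    mmap_lock();
    tb_lock();
    tb_invalidate_phys_page_range(pc, pc + 1, 0);
    tb_unlock();
    mmap_unlock();
}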
Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
Message-Id: <20161027151030.20863-20-alex.bennee@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
---
exec.c | 16 ++++++++++++++++
translate-all.c | 39 +++++++++++++++++++++++++++++++--------
2 files changed, 47 insertions(+), 8 deletions(-)
diff --git a/exec.c b/exec.c
index ab30629..4d08581 100644
--- a/exec.c
+++ b/exec.c
@@ -687,7 +687,11 @@ void cpu_exec_realizefn(CPUState *cpu, Error **errp)
#if defined(CONFIG_USER_ONLY)
static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
{
+ mmap_lock();
+ tb_lock();
tb_invalidate_phys_page_range(pc, pc + 1, 0);
+ tb_unlock();
+ mmap_unlock();
}
#else
static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
@@ -696,6 +700,7 @@ static void breakpoint_invalidate(CPUState *cpu, target_ulong pc)
hwaddr phys = cpu_get_phys_page_attrs_debug(cpu, pc, &attrs);
int asidx = cpu_asidx_from_attrs(cpu, attrs);
if (phys != -1) {
+ /* Locks grabbed by tb_invalidate_phys_addr */
tb_invalidate_phys_addr(cpu->cpu_ases[asidx].as,
phys | (pc & ~TARGET_PAGE_MASK));
}
@@ -1988,7 +1993,11 @@ ram_addr_t qemu_ram_addr_from_host(void *ptr)
static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
uint64_t val, unsigned size)
{
+ bool locked = false;
+
if (!cpu_physical_memory_get_dirty_flag(ram_addr, DIRTY_MEMORY_CODE)) {
+ locked = true;
+ tb_lock();
tb_invalidate_phys_page_fast(ram_addr, size);
}
switch (size) {
@@ -2004,6 +2013,11 @@ static void notdirty_mem_write(void *opaque, hwaddr ram_addr,
default:
abort();
}
+
+ if (locked) {
+ tb_unlock();
+ }
+
/* Set both VGA and migration bits for simplicity and to remove
* the notdirty callback faster.
*/
@@ -2477,7 +2491,9 @@ static void invalidate_and_set_dirty(MemoryRegion *mr, hwaddr addr,
cpu_physical_memory_range_includes_clean(addr, length, dirty_log_mask);
}
if (dirty_log_mask & (1 << DIRTY_MEMORY_CODE)) {
+ tb_lock();
tb_invalidate_phys_range(addr, addr + length);
+ tb_unlock();
dirty_log_mask &= ~(1 << DIRTY_MEMORY_CODE);
}
cpu_physical_memory_set_dirty_range(addr, length, dirty_log_mask);
diff --git a/translate-all.c b/translate-all.c
index 01b1604..e6a8b07 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -1402,12 +1402,11 @@ TranslationBlock *tb_gen_code(CPUState *cpu,
* access: the virtual CPU will exit the current TB if code is modified inside
* this TB.
*
- * Called with mmap_lock held for user-mode emulation
+ * Called with mmap_lock held for user-mode emulation, grabs tb_lock
+ * Called with tb_lock held for system-mode emulation
*/
-void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+static void tb_invalidate_phys_range_1(tb_page_addr_t start, tb_page_addr_t end)
{
- assert_memory_lock();
-
while (start < end) {
tb_invalidate_phys_page_range(start, end, 0);
start &= TARGET_PAGE_MASK;
@@ -1415,6 +1414,21 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
}
}
+#ifdef CONFIG_SOFTMMU
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+ assert_tb_lock();
+ tb_invalidate_phys_range_1(start, end);
+}
+#else
+void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
+{
+ assert_memory_lock();
+ tb_lock();
+ tb_invalidate_phys_range_1(start, end);
+ tb_unlock();
+}
+#endif
/*
* Invalidate all TBs which intersect with the target physical address range
* [start;end[. NOTE: start and end must refer to the *same* physical page.
@@ -1422,7 +1436,8 @@ void tb_invalidate_phys_range(tb_page_addr_t start, tb_page_addr_t end)
* access: the virtual CPU will exit the current TB if code is modified inside
* this TB.
*
- * Called with mmap_lock held for user-mode emulation
+ * Called with tb_lock/mmap_lock held for user-mode emulation
+ * Called with tb_lock held for system-mode emulation
*/
void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
int is_cpu_write_access)
@@ -1445,6 +1460,7 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
#endif /* TARGET_HAS_PRECISE_SMC */
assert_memory_lock();
+ assert_tb_lock();
p = page_find(start >> TARGET_PAGE_BITS);
if (!p) {
@@ -1459,7 +1475,6 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
/* we remove all the TBs in the range [start, end[ */
/* XXX: see if in some cases it could be faster to invalidate all
the code */
- tb_lock();
tb = p->first_tb;
while (tb != NULL) {
n = (uintptr_t)tb & 3;
@@ -1519,11 +1534,13 @@ void tb_invalidate_phys_page_range(tb_page_addr_t start, tb_page_addr_t end,
cpu_loop_exit_noexc(cpu);
}
#endif
- tb_unlock();
}
#ifdef CONFIG_SOFTMMU
-/* len must be <= 8 and start must be a multiple of len */
+/* len must be <= 8 and start must be a multiple of len.
+ * Called via softmmu_template.h when code areas are written to with
+ * tb_lock held.
+ */
void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
{
PageDesc *p;
@@ -1537,6 +1554,8 @@ void tb_invalidate_phys_page_fast(tb_page_addr_t start, int len)
(intptr_t)cpu_single_env->segs[R_CS].base);
}
#endif
+ assert_memory_lock();
+
p = page_find(start >> TARGET_PAGE_BITS);
if (!p) {
return;
@@ -1584,6 +1603,8 @@ static bool tb_invalidate_phys_page(tb_page_addr_t addr, uintptr_t pc)
uint32_t current_flags = 0;
#endif
+ assert_memory_lock();
+
addr &= TARGET_PAGE_MASK;
p = page_find(addr >> TARGET_PAGE_BITS);
if (!p) {
@@ -1687,7 +1708,9 @@ void tb_invalidate_phys_addr(AddressSpace *as, hwaddr addr)
return;
}
ram_addr = memory_region_get_ram_addr(mr) + addr;
+ tb_lock();
tb_invalidate_phys_page_range(ram_addr, ram_addr + 1, 0);
+ tb_unlock();
rcu_read_unlock();
}
#endif /* !defined(CONFIG_USER_ONLY) */
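The system-mode side of the same convention, as a sketch (the wrapper
is hypothetical; under CONFIG_SOFTMMU the caller now owns tb_lock, as
the invalidate_and_set_dirty() hunk above shows):

/* SoftMMU: tb_invalidate_phys_range() asserts tb_lock is held, so
 * callers bracket the call explicitly. */
static void softmmu_invalidate(tb_page_addr_t start, tb_page_addr_t end)
{
    tb_lock();
    tb_invalidate_phys_range(start, end);
    tb_unlock();
}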
--
2.7.4
^ permalink raw reply related [flat|nested] 16+ messages in thread
* Re: [Qemu-devel] [PULL 00/14] MTTCG patches for 2016-10-31
2016-10-31 14:13 [Qemu-devel] [PULL 00/14] MTTCG patches for 2016-10-31 Paolo Bonzini
` (13 preceding siblings ...)
2016-10-31 14:13 ` [Qemu-devel] [PULL 14/14] tcg: move locking for tb_invalidate_phys_page_range up Paolo Bonzini
@ 2016-10-31 16:12 ` Peter Maydell
14 siblings, 0 replies; 16+ messages in thread
From: Peter Maydell @ 2016-10-31 16:12 UTC (permalink / raw)
To: Paolo Bonzini; +Cc: QEMU Developers
On 31 October 2016 at 14:13, Paolo Bonzini <pbonzini@redhat.com> wrote:
> The following changes since commit ed2839166c21e001d15868f4d9591a21aaebd547:
>
> target-alpha: Emulate LL/SC using cmpxchg helpers (2016-10-26 08:29:02 -0700)
>
> are available in the git repository at:
>
> git://github.com/bonzini/qemu.git tags/for-upstream-mttcg
>
> for you to fetch changes up to ba051fb5e56d5ff5e4fa672d37954452e58543b2:
>
> tcg: move locking for tb_invalidate_phys_page_range up (2016-10-31 15:00:25 +0100)
>
> ----------------------------------------------------------------
> Base patches for MTTCG enablement.
>
> ----------------------------------------------------------------
> Alex Bennée (11):
> cpus: make all_vcpus_paused() return bool
> translate_all: DEBUG_FLUSH -> DEBUG_TB_FLUSH
> translate-all: add DEBUG_LOCKING asserts
> cpu-exec: include cpu_index in CPU_LOG_EXEC messages
> linux-user/elfload: ensure mmap_lock() held while setting up
> translate-all: Add assert_(memory|tb)_lock annotations
> target-arm/arm-powerctl: wake up sleeping CPUs
> tcg: move tcg_exec_all and helpers above thread fn
> tcg: cpus rm tcg_exec_all()
> cpus: re-factor out handle_icount_deadline
> tcg: move locking for tb_invalidate_phys_page_range up
>
> KONRAD Frederic (1):
> tcg: protect translation related stuff with tb_lock.
>
> Paolo Bonzini (2):
> tcg: comment on which functions have to be called with tb_lock held
> *_run_on_cpu: introduce run_on_cpu_data type
>
Applied, thanks.
-- PMM
^ permalink raw reply [flat|nested] 16+ messages in thread