From mboxrd@z Thu Jan  1 00:00:00 1970
Received: from eggs.gnu.org ([2001:4830:134:3::10]:37677)
	by lists.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1b8vtl-0000Eb-Vg for qemu-devel@nongnu.org;
	Fri, 03 Jun 2016 16:40:55 -0400
Received: from Debian-exim by eggs.gnu.org with spam-scanned (Exim 4.71)
	(envelope-from ) id 1b8vtg-0000d7-Ge for qemu-devel@nongnu.org;
	Fri, 03 Jun 2016 16:40:53 -0400
Received: from mail-wm0-x22e.google.com ([2a00:1450:400c:c09::22e]:36695)
	by eggs.gnu.org with esmtp (Exim 4.71) (envelope-from )
	id 1b8vtf-0000cX-TM for qemu-devel@nongnu.org;
	Fri, 03 Jun 2016 16:40:48 -0400
Received: by mail-wm0-x22e.google.com with SMTP id n184so11949465wmn.1
	for ; Fri, 03 Jun 2016 13:40:47 -0700 (PDT)
From: Alex Bennée
Date: Fri, 3 Jun 2016 21:40:27 +0100
Message-Id: <1464986428-6739-19-git-send-email-alex.bennee@linaro.org>
In-Reply-To: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
References: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
Subject: [Qemu-devel] [RFC v3 18/19] tcg: Ensure safe TB lookup out of 'tb_lock'
List-Id:
List-Unsubscribe: ,
List-Archive:
List-Post:
List-Help:
List-Subscribe: ,
To: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
	fred.konrad@greensocs.com, a.rigo@virtualopensystems.com,
	serge.fdrv@gmail.com, cota@braap.org, bobby.prani@gmail.com
Cc: mark.burton@greensocs.com, pbonzini@redhat.com, jan.kiszka@siemens.com,
	rth@twiddle.net, peter.maydell@linaro.org, claudio.fontana@huawei.com,
	Sergey Fedorov, Peter Crosthwaite

From: Sergey Fedorov

First, ensure atomicity of CPU's 'tb_jmp_cache' access by:

 * using atomic_read() to look up a TB when not holding 'tb_lock';
 * using atomic_set() to remove a TB from each CPU's local cache on TB
   invalidation.

Second, add some memory barriers to ensure we don't put the TB being
invalidated back to CPU's 'tb_jmp_cache'.
If we fail to look up a TB in CPU's local cache because it is being
invalidated by some other thread then it must not be found in the
shared TB hash table. Otherwise we'd put it back to CPU's local cache.

Note that this patch does *not* make CPU's TLB invalidation safe if it
is done from some other thread while the CPU is in its execution loop.

Signed-off-by: Sergey Fedorov
Signed-off-by: Sergey Fedorov
---
 cpu-exec.c      | 7 ++++++-
 translate-all.c | 7 ++++++-
 2 files changed, 12 insertions(+), 2 deletions(-)

diff --git a/cpu-exec.c b/cpu-exec.c
index 5ad3865..b017643 100644
--- a/cpu-exec.c
+++ b/cpu-exec.c
@@ -292,6 +292,11 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
 {
     TranslationBlock *tb;
 
+    /* Ensure that we won't find a TB in the shared hash table
+     * if it is being invalidated by some other thread.
+     * Otherwise we'd put it back to CPU's local cache.
+     * Pairs with smp_wmb() in tb_phys_invalidate(). */
+    smp_rmb();
     tb = tb_find_physical(cpu, pc, cs_base, flags);
     if (tb) {
         goto found;
@@ -336,7 +341,7 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
        is executed. */
     cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
     tb_lock();
-    tb = cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)];
+    tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
     if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
                  tb->flags != flags)) {
         tb = tb_find_slow(cpu, pc, cs_base, flags);
diff --git a/translate-all.c b/translate-all.c
index 95e5284..29a7946 100644
--- a/translate-all.c
+++ b/translate-all.c
@@ -1071,11 +1071,16 @@ void tb_phys_invalidate(TranslationBlock *tb, tb_page_addr_t page_addr)
         invalidate_page_bitmap(p);
     }
 
+    /* Ensure that we won't find the TB in the shared hash table
+     * if we can't see it in CPU's local cache.
+     * Pairs with smp_rmb() in tb_find_slow(). */
+    smp_wmb();
+
     /* remove the TB from the hash list */
     h = tb_jmp_cache_hash_func(tb->pc);
     CPU_FOREACH(cpu) {
         if (cpu->tb_jmp_cache[h] == tb) {
-            cpu->tb_jmp_cache[h] = NULL;
+            atomic_set(&cpu->tb_jmp_cache[h], NULL);
         }
     }
-- 
2.7.4