From: Alex Bennée
Date: Thu, 14 Jul 2016 14:12:06 +0100
Subject: Re: [Qemu-devel] [PATCH v3 04/11] tcg: Prepare safe access to tb_flushed out of tb_lock
In-reply-to: <57878BCA.5010801@gmail.com>
Message-ID: <87inw84b4p.fsf@linaro.org>
To: Sergey Fedorov
Cc: Sergey Fedorov, qemu-devel@nongnu.org, mttcg@listserver.greensocs.com,
    fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
    bobby.prani@gmail.com, rth@twiddle.net, patches@linaro.org,
    mark.burton@greensocs.com, pbonzini@redhat.com, jan.kiszka@siemens.com,
    peter.maydell@linaro.org, claudio.fontana@huawei.com, Peter Crosthwaite

Sergey Fedorov writes:

> On 14/07/16 15:45, Alex Bennée wrote:
>> Sergey Fedorov writes:
>>
>>> From: Sergey Fedorov
>>>
>>> Ensure atomicity of CPU's 'tb_flushed' access for future translation
>>> block lookup out of 'tb_lock'.
>>>
>>> This field can only be touched from another thread by tb_flush() in user
>>> mode emulation. So the only accesses that need to be atomic are:
>>>  * a single write in tb_flush();
>>>  * reads/writes out of 'tb_lock'.
>> It might be worth mentioning the barrier here.
>
> Do you mean atomic_set() vs. atomic_mb_set()?

Yes.
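(For anyone following along, here is a rough sketch of the distinction
being discussed, written with plain C11 atomics rather than QEMU's own
macros; the names "flag", "set_plain" and "set_with_barrier" are invented
for this illustration and are not QEMU code.)

  /* Illustration only: these helpers approximate the semantics of the two
   * macros, they are not QEMU's implementation. */
  #include <stdatomic.h>
  #include <stdbool.h>

  static _Atomic bool flag;

  /* Roughly what atomic_set() provides: the store itself is atomic and
   * free of data races, but no ordering against surrounding memory
   * accesses is implied. */
  void set_plain(void)
  {
      atomic_store_explicit(&flag, true, memory_order_relaxed);
  }

  /* Roughly what atomic_mb_set() provides: an atomic store combined with
   * a full memory barrier, so writes made before it become visible to
   * other threads no later than the flag itself. */
  void set_with_barrier(void)
  {
      atomic_store_explicit(&flag, true, memory_order_seq_cst);
  }

Whether the plain store is enough is exactly the question of which other
memory accesses must be ordered against the 'tb_flushed' update, which is
why calling out the barrier in the commit message seems useful.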
>
>>
>>> In future, before enabling MTTCG in system mode, tb_flush() must be safe
>>> and this field becomes unnecessary.
>>>
>>> Signed-off-by: Sergey Fedorov
>>> Signed-off-by: Sergey Fedorov
>>> ---
>>>  cpu-exec.c      | 16 +++++++---------
>>>  translate-all.c |  4 ++--
>>>  2 files changed, 9 insertions(+), 11 deletions(-)
>>>
>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>> index d6178eab71d4..c973e3b85922 100644
>>> --- a/cpu-exec.c
>>> +++ b/cpu-exec.c
>>> @@ -338,13 +338,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>                   tb->flags != flags)) {
>>>          tb = tb_find_slow(cpu, pc, cs_base, flags);
>>>      }
>>> -    if (cpu->tb_flushed) {
>>> -        /* Ensure that no TB jump will be modified as the
>>> -         * translation buffer has been flushed.
>>> -         */
>>> -        *last_tb = NULL;
>>> -        cpu->tb_flushed = false;
>>> -    }
>>>  #ifndef CONFIG_USER_ONLY
>>>      /* We don't take care of direct jumps when address mapping changes in
>>>       * system emulation. So it's not safe to make a direct jump to a TB
>>> @@ -356,7 +349,12 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>  #endif
>>>      /* See if we can patch the calling TB. */
>>>      if (last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>> -        tb_add_jump(last_tb, tb_exit, tb);
>>> +        /* Check if translation buffer has been flushed */
>>> +        if (cpu->tb_flushed) {
>>> +            cpu->tb_flushed = false;
>>> +        } else {
>>> +            tb_add_jump(last_tb, tb_exit, tb);
>>> +        }
>>>      }
>>>      tb_unlock();
>>>      return tb;
>>> @@ -618,7 +616,7 @@ int cpu_exec(CPUState *cpu)
>>>          }
>>>
>>>          last_tb = NULL; /* forget the last executed TB after exception */
>>> -        cpu->tb_flushed = false; /* reset before first TB lookup */
>>> +        atomic_mb_set(&cpu->tb_flushed, false); /* reset before first TB lookup */
>>>          for(;;) {
>>>              cpu_handle_interrupt(cpu, &last_tb);
>>>              tb = tb_find_fast(cpu, last_tb, tb_exit);
>>> diff --git a/translate-all.c b/translate-all.c
>>> index fdf520a86d68..788fed1e0765 100644
>>> --- a/translate-all.c
>>> +++ b/translate-all.c
>>> @@ -845,7 +845,6 @@ void tb_flush(CPUState *cpu)
>>>          > tcg_ctx.code_gen_buffer_size) {
>>>          cpu_abort(cpu, "Internal error: code buffer overflow\n");
>>>      }
>>> -    tcg_ctx.tb_ctx.nb_tbs = 0;
>>>
>>>      CPU_FOREACH(cpu) {
>>>          int i;
>>> @@ -853,9 +852,10 @@ void tb_flush(CPUState *cpu)
>>>          for (i = 0; i < TB_JMP_CACHE_SIZE; ++i) {
>>>              atomic_set(&cpu->tb_jmp_cache[i], NULL);
>>>          }
>>> -        cpu->tb_flushed = true;
>>> +        atomic_mb_set(&cpu->tb_flushed, true);
>>>      }
>>>
>>> +    tcg_ctx.tb_ctx.nb_tbs = 0;
>>>      qht_reset_size(&tcg_ctx.tb_ctx.htable, CODE_GEN_HTABLE_SIZE);
>> I can see the sense of moving the setting of nb_tbs but is it strictly
>> required as part of this patch?
>
> Yes, otherwise tb_alloc() may start allocating TBs from the beginning of
> the translation buffer before 'tb_flushed' is updated.

Ahh yes I see. Thanks.

Reviewed-by: Alex Bennée

>
> Kind regards,
> Sergey
>
>>
>>>      page_flush_tb();
>>
>> Otherwise:
>>
>> Reviewed-by: Alex Bennée
>>
>> --
>> Alex Bennée

--
Alex Bennée
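(Appendix for the archive: a toy model of the ordering argument above, in
plain C11 atomics rather than QEMU code. All names here are invented;
"slots_used" merely plays the role of tcg_ctx.tb_ctx.nb_tbs and "flushed"
the role of cpu->tb_flushed.)

  /* Toy model only, not QEMU code. Default atomic_store()/atomic_load()
   * are sequentially consistent, mirroring atomic_mb_set(). */
  #include <assert.h>
  #include <stdatomic.h>
  #include <stdbool.h>

  static atomic_int  slots_used = 100;   /* stands in for nb_tbs      */
  static atomic_bool flushed;            /* stands in for tb_flushed  */

  /* Flushing side: publish the flag first, and only then make the
   * buffer reusable. */
  void toy_flush(void)
  {
      atomic_store(&flushed, true);
      atomic_store(&slots_used, 0);
  }

  /* Lookup side: a thread that can already see the recycled buffer is
   * then guaranteed to also see the flag, so it drops its cached
   * last_tb instead of patching a jump into freed code. */
  void toy_lookup(void)
  {
      if (atomic_load(&slots_used) == 0) {
          assert(atomic_load(&flushed));
      }
  }

If toy_flush() performed its two stores in the opposite order, toy_lookup()
could observe slots_used == 0 while still reading flushed == false, which is
the window Sergey describes for tb_alloc().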