From: Sergey Fedorov
Message-ID: <5773DCCC.9080607@gmail.com>
Date: Wed, 29 Jun 2016 17:35:56 +0300
In-Reply-To: <1464986428-6739-20-git-send-email-alex.bennee@linaro.org>
References: <1464986428-6739-1-git-send-email-alex.bennee@linaro.org>
 <1464986428-6739-20-git-send-email-alex.bennee@linaro.org>
Subject: Re: [Qemu-devel] [RFC v3 19/19] cpu-exec: remove tb_lock from the hot-path
To: Alex Bennée, mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
 fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
 bobby.prani@gmail.com
Cc: mark.burton@greensocs.com, pbonzini@redhat.com, jan.kiszka@siemens.com,
 rth@twiddle.net, peter.maydell@linaro.org, claudio.fontana@huawei.com,
 Peter Crosthwaite

On 03/06/16 23:40, Alex Bennée wrote:
> Lock contention in the hot path of moving between existing patched
> TranslationBlocks is the main drag on MTTCG performance. This patch
> pushes the tb_lock() usage down to the two places that really need it:
>
>   - code generation (tb_gen_code)
>   - jump patching (tb_add_jump)
>
> The rest of the code doesn't really need to hold a lock as it is either
> using per-CPU structures or designed to be used in concurrent read
> situations (qht_lookup).
>
> Signed-off-by: Alex Bennée <alex.bennee@linaro.org>
>
> ---
> v3
>   - fix merge conflicts with Sergey's patch
> ---
>  cpu-exec.c | 59 ++++++++++++++++++++++++++++++-----------------------------
>  1 file changed, 30 insertions(+), 29 deletions(-)
>
> diff --git a/cpu-exec.c b/cpu-exec.c
> index b017643..4af0b52 100644
> --- a/cpu-exec.c
> +++ b/cpu-exec.c
> @@ -298,41 +298,38 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
>       * Pairs with smp_wmb() in tb_phys_invalidate(). */
>      smp_rmb();
>      tb = tb_find_physical(cpu, pc, cs_base, flags);
> -    if (tb) {
> -        goto found;
> -    }
> +    if (!tb) {
>
> -    /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
> -     * taken outside tb_lock.  Since we're momentarily dropping
> -     * tb_lock, there's a chance that our desired tb has been
> -     * translated.
> -     */
> -    tb_unlock();
> -    mmap_lock();
> -    tb_lock();
> -    tb = tb_find_physical(cpu, pc, cs_base, flags);
> -    if (tb) {
> -        mmap_unlock();
> -        goto found;
> -    }
> +        /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
> +         * taken outside tb_lock.
> +         */
> +        mmap_lock();
> +        tb_lock();
>
> -    /* if no translated code available, then translate it now */
> -    tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
> +        /* There's a chance that our desired tb has been translated while
> +         * taking the locks so we check again inside the lock.
> +         */
> +        tb = tb_find_physical(cpu, pc, cs_base, flags);
> +        if (!tb) {
> +            /* if no translated code available, then translate it now */
> +            tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
> +        }
>
> -    mmap_unlock();
> +        tb_unlock();
> +        mmap_unlock();
> +    }
>
> -found:
> -    /* we add the TB in the virtual pc hash table */
> +    /* We add the TB in the virtual pc hash table for the fast lookup */
>      cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;

Hmm, seems like I forgot to convert this into atomic_set() in the
previous patch...
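Something like this minimal, untested sketch, reusing the atomic_set()
helper from include/qemu/atomic.h, just to illustrate what I mean:

    /* Publish the TB with an atomic store so that the lock-free readers
     * doing atomic_read() on tb_jmp_cache in tb_find_fast() never see a
     * torn pointer. */
    atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);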
>      return tb;
>  }
>
>  static inline TranslationBlock *tb_find_fast(CPUState *cpu,
> -                                             TranslationBlock **last_tb,
> +                                             TranslationBlock **ltbp,

I'm not sure if it is more readable...

>                                               int tb_exit)
>  {
>      CPUArchState *env = (CPUArchState *)cpu->env_ptr;
> -    TranslationBlock *tb;
> +    TranslationBlock *tb, *last_tb;
>      target_ulong cs_base, pc;
>      uint32_t flags;
>
> @@ -340,7 +337,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>         always be the same before a given translated block
>         is executed. */
>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
> -    tb_lock();
>      tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
>      if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
>                   tb->flags != flags)) {
> @@ -350,7 +346,7 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>          /* Ensure that no TB jump will be modified as the
>           * translation buffer has been flushed.
>           */
> -        *last_tb = NULL;
> +        *ltbp = NULL;
>          cpu->tb_flushed = false;
>      }
>  #ifndef CONFIG_USER_ONLY
> @@ -359,14 +355,19 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>       * spanning two pages because the mapping for the second page can change.
>       */
>      if (tb->page_addr[1] != -1) {
> -        *last_tb = NULL;
> +        *ltbp = NULL;
>      }
>  #endif
> +
>      /* See if we can patch the calling TB. */
> -    if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
> -        tb_add_jump(*last_tb, tb_exit, tb);
> +    last_tb = *ltbp;
> +    if (!qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN) &&
> +        last_tb &&
> +        !last_tb->jmp_list_next[tb_exit]) {

If we're going to check this outside of tb_lock, we have to do it with
atomic_{read,set}(). However, I think a race on tb_add_jump() is rare,
so it is probably okay to do the check under tb_lock. (A rough sketch
of the lock-free variant is in the P.S. below.)

> +        tb_lock();
> +        tb_add_jump(last_tb, tb_exit, tb);
> +        tb_unlock();
>      }
> -    tb_unlock();
>      return tb;
>  }
>

Kind regards,
Sergey
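P.S. For illustration only, a rough, untested sketch of the
atomic_{read,set}() variant of the check, assuming the writes to
jmp_list_next[] are likewise converted to atomic_set():

    /* Racy pre-check without tb_lock: a stale value can only make us
     * take the lock needlessly or miss a patching opportunity, both of
     * which are benign. */
    if (!qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN) && last_tb &&
        !atomic_read(&last_tb->jmp_list_next[tb_exit])) {
        tb_lock();
        /* Re-check under the lock before actually patching. */
        if (!last_tb->jmp_list_next[tb_exit]) {
            tb_add_jump(last_tb, tb_exit, tb);
        }
        tb_unlock();
    }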