From: Alex Bennée
Subject: Re: [Qemu-devel] [RFC v3 19/19] cpu-exec: remove tb_lock from the hot-path
Date: Wed, 29 Jun 2016 17:08:46 +0100
Message-ID: <87oa6k0ygh.fsf@linaro.org>
In-Reply-To: <5773E0BE.9090607@gmail.com>
References: <87r3bg1275.fsf@linaro.org> <5773E0BE.9090607@gmail.com>
To: Sergey Fedorov
Cc: mttcg@listserver.greensocs.com, qemu-devel@nongnu.org,
 fred.konrad@greensocs.com, a.rigo@virtualopensystems.com, cota@braap.org,
 bobby.prani@gmail.com, mark.burton@greensocs.com, pbonzini@redhat.com,
 jan.kiszka@siemens.com, rth@twiddle.net, peter.maydell@linaro.org,
 claudio.fontana@huawei.com, Peter Crosthwaite

Sergey Fedorov writes:

> On 29/06/16 17:47, Alex Bennée wrote:
>> Sergey Fedorov writes:
>>
>>> On 03/06/16 23:40, Alex Bennée wrote:
>>>> Lock contention in the hot path of moving between existing patched
>>>> TranslationBlocks is the main drag on MTTCG performance. This patch
>>>> pushes the tb_lock() usage down to the two places that really need it:
>>>>
>>>>   - code generation (tb_gen_code)
>>>>   - jump patching (tb_add_jump)
>>>>
>>>> The rest of the code doesn't really need to hold a lock as it is either
>>>> using per-CPU structures or designed to be used in concurrent read
>>>> situations (qht_lookup).
>>>>
>>>> Signed-off-by: Alex Bennée
>>>>
>>>> ---
>>>> v3
>>>>   - fix merge conflicts with Sergey's patch
>>>> ---
>>>>  cpu-exec.c | 59 ++++++++++++++++++++++++++++++-----------------------------
>>>>  1 file changed, 30 insertions(+), 29 deletions(-)
>>>>
>>>> diff --git a/cpu-exec.c b/cpu-exec.c
>>>> index b017643..4af0b52 100644
>>>> --- a/cpu-exec.c
>>>> +++ b/cpu-exec.c
>>>> @@ -298,41 +298,38 @@ static TranslationBlock *tb_find_slow(CPUState *cpu,
>>>>       * Pairs with smp_wmb() in tb_phys_invalidate(). */
>>>>      smp_rmb();
>>>>      tb = tb_find_physical(cpu, pc, cs_base, flags);
>>>> -    if (tb) {
>>>> -        goto found;
>>>> -    }
>>>> +    if (!tb) {
>>>>
>>>> -    /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
>>>> -     * taken outside tb_lock. Since we're momentarily dropping
>>>> -     * tb_lock, there's a chance that our desired tb has been
>>>> -     * translated.
>>>> -     */
>>>> -    tb_unlock();
>>>> -    mmap_lock();
>>>> -    tb_lock();
>>>> -    tb = tb_find_physical(cpu, pc, cs_base, flags);
>>>> -    if (tb) {
>>>> -        mmap_unlock();
>>>> -        goto found;
>>>> -    }
>>>> +        /* mmap_lock is needed by tb_gen_code, and mmap_lock must be
>>>> +         * taken outside tb_lock.
>>>> +         */
>>>> +        mmap_lock();
>>>> +        tb_lock();
>>>>
>>>> -    /* if no translated code available, then translate it now */
>>>> -    tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
>>>> +        /* There's a chance that our desired tb has been translated while
>>>> +         * taking the locks so we check again inside the lock.
>>>> +         */
>>>> +        tb = tb_find_physical(cpu, pc, cs_base, flags);
>>>> +        if (!tb) {
>>>> +            /* if no translated code available, then translate it now */
>>>> +            tb = tb_gen_code(cpu, pc, cs_base, flags, 0);
>>>> +        }
>>>>
>>>> -    mmap_unlock();
>>>> +        tb_unlock();
>>>> +        mmap_unlock();
>>>> +    }
>>>>
>>>> -found:
>>>> -    /* we add the TB in the virtual pc hash table */
>>>> +    /* We add the TB in the virtual pc hash table for the fast lookup */
>>>>      cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)] = tb;
>>> Hmm, seems like I forgot to convert this into atomic_set() in the
>>> previous patch...
>> OK, can you fix that in your quick fixes series?
>
> Sure. I think that patch and this are both ready-to-go into mainline.
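For the record, the conversion we're talking about is just the mirror of
the atomic_read() already done on the lookup side in tb_find_fast(). A
sketch only, not the actual quick-fixes patch:

    /* Publish the TB with atomic_set() so the lock-free
     * atomic_read() in tb_find_fast() never sees a torn pointer. */
    atomic_set(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)], tb);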
>
>>
>>>>      return tb;
>>>>  }
>>>>
>>>>  static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>> -                                             TranslationBlock **last_tb,
>>>> +                                             TranslationBlock **ltbp,
>>> I'm not sure if it is more readable...
>> I'll revert. I was trying to keep line lengths short :-/
>>
>>>>                                               int tb_exit)
>>>>  {
>>>>      CPUArchState *env = (CPUArchState *)cpu->env_ptr;
>>>> -    TranslationBlock *tb;
>>>> +    TranslationBlock *tb, *last_tb;
>>>>      target_ulong cs_base, pc;
>>>>      uint32_t flags;
>>>>
>>>> @@ -340,7 +337,6 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>         always be the same before a given translated block
>>>>         is executed. */
>>>>      cpu_get_tb_cpu_state(env, &pc, &cs_base, &flags);
>>>> -    tb_lock();
>>>>      tb = atomic_read(&cpu->tb_jmp_cache[tb_jmp_cache_hash_func(pc)]);
>>>>      if (unlikely(!tb || tb->pc != pc || tb->cs_base != cs_base ||
>>>>                   tb->flags != flags)) {
>>>> @@ -350,7 +346,7 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>          /* Ensure that no TB jump will be modified as the
>>>>           * translation buffer has been flushed.
>>>>           */
>>>> -        *last_tb = NULL;
>>>> +        *ltbp = NULL;
>>>>          cpu->tb_flushed = false;
>>>>      }
>>>>  #ifndef CONFIG_USER_ONLY
>>>> @@ -359,14 +355,19 @@ static inline TranslationBlock *tb_find_fast(CPUState *cpu,
>>>>       * spanning two pages because the mapping for the second page can change.
>>>>       */
>>>>      if (tb->page_addr[1] != -1) {
>>>> -        *last_tb = NULL;
>>>> +        *ltbp = NULL;
>>>>      }
>>>>  #endif
>>>> +
>>>>      /* See if we can patch the calling TB. */
>>>> -    if (*last_tb && !qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN)) {
>>>> -        tb_add_jump(*last_tb, tb_exit, tb);
>>>> +    last_tb = *ltbp;
>>>> +    if (!qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN) &&
>>>> +        last_tb &&
>>>> +        !last_tb->jmp_list_next[tb_exit]) {
>>> If we're going to check this outside of tb_lock we have to do this with
>>> atomic_{read,set}(). However, I think it is a rare case to race on
>>> tb_add_jump() so probably it is okay to do the check under tb_lock.
>> It's checking for NULL, and it gets re-checked in tb_add_jump() while
>> under the lock, so I think the setting race should be OK.
>
> I think we could just skip this check and be fine. What do you think
> regarding this?

I think that means we'll take the lock more frequently than we want to.
I'll have to check.

> Thanks,
> Sergey
>
>>
>>>> +        tb_lock();
>>>> +        tb_add_jump(last_tb, tb_exit, tb);
>>>> +        tb_unlock();
>>>>      }
>>>> -    tb_unlock();
>>>>      return tb;
>>>>  }
>>>>
>>> Kind regards,
>>> Sergey
>>
>> --
>> Alex Bennée

--
Alex Bennée
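PS: to make the "how often do we take the lock" question concrete, the
shape I'd want to try is an atomic_read() of the jump slot before taking
tb_lock, as Sergey suggests. A sketch only -- the benign NULL-check race
is resolved by the re-check inside tb_add_jump():

    /* Only take tb_lock if the jump slot still looks unpatched;
     * tb_add_jump() re-checks under the lock, so a racing reader
     * at worst takes the lock unnecessarily. */
    if (!qemu_loglevel_mask(CPU_LOG_TB_NOCHAIN) && last_tb &&
        !atomic_read(&last_tb->jmp_list_next[tb_exit])) {
        tb_lock();
        tb_add_jump(last_tb, tb_exit, tb);
        tb_unlock();
    }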