From: Alvise Rigo
Date: Tue, 19 Apr 2016 15:39:19 +0200
Message-Id: <1461073171-22953-3-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC v8 02/14] softmmu: Simplify helper_*_st_name, wrap unaligned code
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: jani.kokkonen@huawei.com, claudio.fontana@huawei.com, tech@virtualopensystems.com, alex.bennee@linaro.org, pbonzini@redhat.com, rth@twiddle.net, serge.fdrv@gmail.com, Alvise Rigo, Peter Crosthwaite

Attempting to simplify the helper_*_st_name helpers, wrap the
do_unaligned_access code into a shared inline function. As this also
removes the goto statement, the inline code is expanded twice in each
helper.

From Message-id 1452268394-31252-2-git-send-email-alex.bennee@linaro.org:
There is a minor wrinkle in that we need to use a unique name for each
inline fragment, as the template is included multiple times. For this
the smmu_helper macro does the appropriate glue magic.

I've tested the result with no change to functionality. Comparing the
objdump of cputlb.o shows minimal changes in probe_write; everything
else is identical.

Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
CC: Alvise Rigo
Signed-off-by: Alex Bennée
[Alex Bennée: define smmu_helper and unified logic between be/le]
Signed-off-by: Alvise Rigo
---
 softmmu_template.h | 82 ++++++++++++++++++++++++++++++------------------------
 1 file changed, 46 insertions(+), 36 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 208f808..3eb54f8 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -370,6 +370,46 @@ static inline void glue(io_write, SUFFIX)(CPUArchState *env,
                                  iotlbentry->attrs);
 }
 
+/* Inline helper functions for SoftMMU
+ *
+ * These functions help reduce code duplication in the various main
+ * helper functions. Constant arguments (like endian state) will allow
+ * the compiler to skip code which is never called in a given inline.
+ */
+#define smmu_helper(name) glue(glue(glue(smmu_helper_, SUFFIX), \
+                                    MMUSUFFIX), _##name)
+static inline void smmu_helper(do_unl_store)(CPUArchState *env,
+                                             bool little_endian,
+                                             DATA_TYPE val,
+                                             target_ulong addr,
+                                             TCGMemOpIdx oi,
+                                             unsigned mmu_idx,
+                                             uintptr_t retaddr)
+{
+    int i;
+
+    if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+    /* Note: relies on the fact that tlb_fill() does not remove the
+     * previous page from the TLB cache. */
+    for (i = DATA_SIZE - 1; i >= 0; i--) {
+        uint8_t val8;
+        if (little_endian) {
+            /* Little-endian extract. */
+            val8 = val >> (i * 8);
+        } else {
+            /* Big-endian extract. */
+            val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
+        }
+        /* Note the adjustment at the beginning of the function.
+           Undo that for the recursion. */
+        glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
+                                        oi, retaddr + GETPC_ADJ);
+    }
+}
+
 void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
@@ -399,7 +439,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -414,23 +455,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Little-endian extract. */
-            uint8_t val8 = val >> (i * 8);
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion. */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
         return;
     }
 
@@ -479,7 +504,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
         CPUIOTLBEntry *iotlbentry;
         if ((addr & (DATA_SIZE - 1)) != 0) {
-            goto do_unaligned_access;
+            smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
+            return;
         }
         iotlbentry = &env->iotlb[mmu_idx][index];
 
@@ -494,23 +520,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     if (DATA_SIZE > 1
         && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
                      >= TARGET_PAGE_SIZE)) {
-        int i;
-    do_unaligned_access:
-        if ((get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-            cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                                 mmu_idx, retaddr);
-        }
-        /* XXX: not efficient, but simple */
-        /* Note: relies on the fact that tlb_fill() does not remove the
-         * previous page from the TLB cache. */
-        for (i = DATA_SIZE - 1; i >= 0; i--) {
-            /* Big-endian extract. */
-            uint8_t val8 = val >> (((DATA_SIZE - 1) * 8) - (i * 8));
-            /* Note the adjustment at the beginning of the function.
-               Undo that for the recursion. */
-            glue(helper_ret_stb, MMUSUFFIX)(env, addr + i, val8,
-                                            oi, retaddr + GETPC_ADJ);
-        }
+        smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
         return;
     }
 
--
2.8.0
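
For readers unfamiliar with the glue() trick that the smmu_helper macro
relies on, here is a minimal, self-contained sketch of the same
token-pasting pattern. It is not QEMU code: DEMO_TEMPLATE and the
demo_helper_*_store names are invented for the example, and a
parameterised macro stands in for the repeated inclusion of
softmmu_template.h, but the mechanism is the same: each expansion gets a
unique symbol, and a constant bool argument lets the compiler drop the
unused endian branch in each call.

/* demo.c -- sketch of the name-glueing technique; not QEMU code.
 * DEMO_TEMPLATE and demo_helper_*_store are invented for this example.
 * Build with e.g.: gcc -std=c99 -O2 demo.c -o demo && ./demo
 */
#include <stdbool.h>
#include <stdio.h>

/* Same two-level paste that QEMU's glue() performs: expand, then ## */
#define xglue(x, y) x ## y
#define glue(x, y)  xglue(x, y)

/* Stand-in for the template body.  softmmu_template.h is re-included
 * once per access size; here a parameterised macro plays that role. */
#define DEMO_TEMPLATE(SUFFIX)                                           \
static inline void glue(glue(demo_helper_, SUFFIX), _store)             \
    (bool little_endian, unsigned val)                                  \
{                                                                       \
    if (little_endian) {                                                \
        printf(#SUFFIX ": little-endian store of %#x\n", val);          \
    } else {                                                            \
        printf(#SUFFIX ": big-endian store of %#x\n", val);             \
    }                                                                   \
}

DEMO_TEMPLATE(l)   /* expands to demo_helper_l_store() */
DEMO_TEMPLATE(q)   /* expands to demo_helper_q_store() */

int main(void)
{
    /* Each expansion produced its own symbol, so repeated "inclusion"
     * never clashes; with a constant bool the compiler can fold away
     * the dead branch inside each inlined helper. */
    demo_helper_l_store(true, 0x1234);
    demo_helper_q_store(false, 0x5678);
    return 0;
}

Built with a recent gcc or clang at -O2, this prints the "l" and "q"
variants once each, and the untaken endian branch is typically
eliminated after inlining, which is the effect the comment block in the
patch describes for the little_endian argument.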