From mboxrd@z Thu Jan 1 00:00:00 1970
From: Alvise Rigo
Date: Tue, 19 Apr 2016 15:39:21 +0200
Message-Id: <1461073171-22953-5-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [Qemu-devel] [RFC v8 04/14] softmmu: Simplify helper_*_st_name, wrap RAM code
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: jani.kokkonen@huawei.com, claudio.fontana@huawei.com,
    tech@virtualopensystems.com, alex.bennee@linaro.org, pbonzini@redhat.com,
    rth@twiddle.net, serge.fdrv@gmail.com, Alvise Rigo, Peter Crosthwaite

To simplify the helper_*_st_name helpers, wrap the code that handles a
RAM access into an inline function. The function covers both the BE and
LE cases, selected by a little_endian flag, and is expanded once in each
of the two store helpers.
Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
CC: Alex Bennée
Signed-off-by: Alvise Rigo
---
 softmmu_template.h | 80 +++++++++++++++++++++++++++---------------------------
 1 file changed, 40 insertions(+), 40 deletions(-)

diff --git a/softmmu_template.h b/softmmu_template.h
index 9185486..ea6a0fb 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -433,13 +433,48 @@ static inline void smmu_helper(do_mmio_store)(CPUArchState *env,
     glue(io_write, SUFFIX)(env, iotlbentry, val, addr, retaddr);
 }
 
+static inline void smmu_helper(do_ram_store)(CPUArchState *env,
+                                             bool little_endian, DATA_TYPE val,
+                                             target_ulong addr, TCGMemOpIdx oi,
+                                             unsigned mmu_idx, int index,
+                                             uintptr_t retaddr)
+{
+    uintptr_t haddr;
+
+    /* Handle slow unaligned access (it spans two pages or IO).  */
+    if (DATA_SIZE > 1
+        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
+                    >= TARGET_PAGE_SIZE)) {
+        smmu_helper(do_unl_store)(env, little_endian, val, addr, oi, mmu_idx,
+                                  retaddr);
+        return;
+    }
+
+    /* Handle aligned access or unaligned access in the same page.  */
+    if ((addr & (DATA_SIZE - 1)) != 0
+        && (get_memop(oi) & MO_AMASK) == MO_ALIGN) {
+        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
+                             mmu_idx, retaddr);
+    }
+
+    haddr = addr + env->tlb_table[mmu_idx][index].addend;
+#if DATA_SIZE == 1
+    glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
+#else
+    if (little_endian) {
+        glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
+    } else {
+        glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val);
+    }
+#endif
+}
+
 void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
     unsigned mmu_idx = get_mmuidx(oi);
     int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
-    uintptr_t haddr;
 
     /* Adjust the given return address.  */
     retaddr -= GETPC_ADJ;
@@ -465,27 +500,8 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         return;
     }
 
-    /* Handle slow unaligned access (it spans two pages or IO).  */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        smmu_helper(do_unl_store)(env, true, val, addr, oi, mmu_idx, retaddr);
-        return;
-    }
-
-    /* Handle aligned access or unaligned access in the same page.  */
-    if ((addr & (DATA_SIZE - 1)) != 0
-        && (get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                             mmu_idx, retaddr);
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-#if DATA_SIZE == 1
-    glue(glue(st, SUFFIX), _p)((uint8_t *)haddr, val);
-#else
-    glue(glue(st, SUFFIX), _le_p)((uint8_t *)haddr, val);
-#endif
+    smmu_helper(do_ram_store)(env, true, val, addr, oi, mmu_idx, index,
+                              retaddr);
 }
 
 #if DATA_SIZE > 1
@@ -495,7 +511,6 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
     unsigned mmu_idx = get_mmuidx(oi);
     int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
     target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
-    uintptr_t haddr;
 
     /* Adjust the given return address.  */
     retaddr -= GETPC_ADJ;
@@ -521,23 +536,8 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         return;
     }
 
-    /* Handle slow unaligned access (it spans two pages or IO).  */
-    if (DATA_SIZE > 1
-        && unlikely((addr & ~TARGET_PAGE_MASK) + DATA_SIZE - 1
-                    >= TARGET_PAGE_SIZE)) {
-        smmu_helper(do_unl_store)(env, false, val, addr, oi, mmu_idx, retaddr);
-        return;
-    }
-
-    /* Handle aligned access or unaligned access in the same page.  */
-    if ((addr & (DATA_SIZE - 1)) != 0
-        && (get_memop(oi) & MO_AMASK) == MO_ALIGN) {
-        cpu_unaligned_access(ENV_GET_CPU(env), addr, MMU_DATA_STORE,
-                             mmu_idx, retaddr);
-    }
-
-    haddr = addr + env->tlb_table[mmu_idx][index].addend;
-    glue(glue(st, SUFFIX), _be_p)((uint8_t *)haddr, val);
+    smmu_helper(do_ram_store)(env, false, val, addr, oi, mmu_idx, index,
+                              retaddr);
 }
 
 #endif /* DATA_SIZE > 1 */
-- 
2.8.0