From: Alvise Rigo
Date: Tue, 19 Apr 2016 15:39:26 +0200
Message-Id: <1461073171-22953-10-git-send-email-a.rigo@virtualopensystems.com>
In-Reply-To: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
References: <1461073171-22953-1-git-send-email-a.rigo@virtualopensystems.com>
Subject: [Qemu-devel] [RFC v8 09/14] softmmu: Honor the new exclusive bitmap
To: qemu-devel@nongnu.org, mttcg@listserver.greensocs.com
Cc: jani.kokkonen@huawei.com, claudio.fontana@huawei.com,
 tech@virtualopensystems.com, alex.bennee@linaro.org, pbonzini@redhat.com,
 rth@twiddle.net, serge.fdrv@gmail.com, Alvise Rigo, Peter Crosthwaite

Pages set as exclusive (clean) in the DIRTY_MEMORY_EXCLUSIVE bitmap must
have their TLB entries flagged with TLB_EXCL. Accesses to pages with the
TLB_EXCL flag set have to be handled properly, since they can invalidate
an open LL/SC transaction.

Modify TLB entry generation to honor the new bitmap and extend
softmmu_template to handle accesses made to guest pages marked as
exclusive.

The TLB_EXCL flag is used only for normal RAM memory. Exclusive accesses
to MMIO memory are still not supported, but they will be with the next
patch.

Suggested-by: Jani Kokkonen
Suggested-by: Claudio Fontana
Signed-off-by: Alvise Rigo
---
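
Reviewer note (not part of the commit): below is a minimal, standalone C
model of the protocol the TLB_EXCL slow path enforces. Every store is
checked against the other vCPUs' protected ranges, and any overlap makes
the colliding vCPU's pending SC fail. All names here (ExclRange,
RESET_ADDR, NR_CPUS, load_link, ...) are illustrative, not QEMU's.

/* Standalone sketch, not QEMU code: each vCPU tracks one range protected
 * by a LoadLink; any overlapping store from another vCPU resets it, which
 * makes that vCPU's next StoreConditional fail. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define RESET_ADDR  UINT64_MAX  /* stands in for EXCLUSIVE_RESET_ADDR */
#define NR_CPUS     2

typedef struct { uint64_t begin, end; } ExclRange;
static ExclRange excl[NR_CPUS];

/* LL: start protecting [addr, addr + size) for this vCPU. */
static void load_link(int cpu, uint64_t addr, uint64_t size)
{
    excl[cpu].begin = addr;
    excl[cpu].end = addr + size;
}

/* Any store: reset every *other* vCPU's colliding range, as
 * reset_other_cpus_colliding_ll_addr does in the patch. */
static void store(int cpu, uint64_t addr, uint64_t size)
{
    for (int i = 0; i < NR_CPUS; i++) {
        if (i != cpu && excl[i].begin != RESET_ADDR &&
            addr < excl[i].end && excl[i].begin < addr + size) {
            excl[i].begin = RESET_ADDR;  /* SC on vCPU i will now fail */
        }
    }
}

/* SC: succeeds only if the protected range survived; the transaction is
 * closed either way. */
static bool store_conditional(int cpu, uint64_t addr, uint64_t size)
{
    bool ok = excl[cpu].begin != RESET_ADDR &&
              excl[cpu].begin <= addr && addr + size <= excl[cpu].end;
    if (ok) {
        store(cpu, addr, size);
    }
    excl[cpu].begin = RESET_ADDR;
    return ok;
}

int main(void)
{
    for (int i = 0; i < NR_CPUS; i++) {
        excl[i].begin = RESET_ADDR;
    }
    load_link(0, 0x1000, 4);   /* vCPU 0 opens an LL/SC transaction */
    store(1, 0x1000, 4);       /* vCPU 1 writes the protected word */
    printf("SC %s\n", store_conditional(0, 0x1000, 4) ? "succeeded"
                                                      : "failed");
    return 0;
}

Built with plain gcc this prints "SC failed": vCPU 1's store lands inside
vCPU 0's protected range before the SC completes, which is the collision
case do_excl_store below handles via reset_other_cpus_colliding_ll_addr.
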
 cputlb.c           | 36 ++++++++++++++++++++++++++----
 softmmu_template.h | 65 +++++++++++++++++++++++++++++++++++++++++++++++-------
 2 files changed, 89 insertions(+), 12 deletions(-)

diff --git a/cputlb.c b/cputlb.c
index 02b0d14..e5df3a5 100644
--- a/cputlb.c
+++ b/cputlb.c
@@ -416,11 +416,20 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
             || memory_region_is_romd(section->mr)) {
             /* Write access calls the I/O callback. */
             te->addr_write = address | TLB_MMIO;
-        } else if (memory_region_is_ram(section->mr)
-                   && cpu_physical_memory_is_clean(section->mr->ram_addr
-                                                   + xlat)) {
-            te->addr_write = address | TLB_NOTDIRTY;
         } else {
+            if (memory_region_is_ram(section->mr)
+                && cpu_physical_memory_is_clean(section->mr->ram_addr
+                                                + xlat)) {
+                address |= TLB_NOTDIRTY;
+            }
+            /* Only normal RAM accesses need the TLB_EXCL flag to handle
+             * exclusive store operations. */
+            if (!(address & TLB_MMIO) &&
+                cpu_physical_memory_is_excl(section->mr->ram_addr + xlat)) {
+                /* There is at least one vCPU that has flagged the address as
+                 * exclusive. */
+                address |= TLB_EXCL;
+            }
             te->addr_write = address;
         }
     } else {
@@ -496,6 +505,25 @@ static inline void excl_history_put_addr(hwaddr addr)
     excl_history.c_array[excl_history.last_idx] = addr & TARGET_PAGE_MASK;
 }
 
+/* For every vCPU compare the exclusive address and reset it in case of a
+ * match. Since only one vCPU runs at a time, no lock has to be held to
+ * guard this operation. */
+static inline void reset_other_cpus_colliding_ll_addr(hwaddr addr, hwaddr size)
+{
+    CPUState *cpu;
+
+    CPU_FOREACH(cpu) {
+        if (current_cpu != cpu &&
+            cpu->excl_protected_range.begin != EXCLUSIVE_RESET_ADDR &&
+            ranges_overlap(cpu->excl_protected_range.begin,
+                           cpu->excl_protected_range.end -
+                           cpu->excl_protected_range.begin,
+                           addr, size)) {
+            cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
+        }
+    }
+}
+
 #define MMUSUFFIX _mmu
 
 /* Generates LoadLink/StoreConditional helpers in softmmu_template.h */
diff --git a/softmmu_template.h b/softmmu_template.h
index ede1240..2934a0c 100644
--- a/softmmu_template.h
+++ b/softmmu_template.h
@@ -469,6 +469,43 @@ static inline void smmu_helper(do_ram_store)(CPUArchState *env,
 #endif
 }
 
+static inline void smmu_helper(do_excl_store)(CPUArchState *env,
+                                              bool little_endian,
+                                              DATA_TYPE val, target_ulong addr,
+                                              TCGMemOpIdx oi, int index,
+                                              uintptr_t retaddr)
+{
+    CPUIOTLBEntry *iotlbentry = &env->iotlb[get_mmuidx(oi)][index];
+    CPUState *cpu = ENV_GET_CPU(env);
+    CPUClass *cc = CPU_GET_CLASS(cpu);
+    /* The slow-path has been forced since we are writing to
+     * exclusive-protected memory. */
+    hwaddr hw_addr = (iotlbentry->addr & TARGET_PAGE_MASK) + addr;
+
+    /* The function reset_other_cpus_colliding_ll_addr could have reset
+     * the exclusive address. Fail the SC in this case.
+     * N.B.: here excl_succeeded == true means that the caller is
+     * helper_stcond_name in softmmu_llsc_template.
+     * On the contrary, excl_succeeded == false occurs when a vCPU is
+     * writing through normal store to a page with TLB_EXCL bit set. */
+    if (cpu->excl_succeeded) {
+        if (!cc->cpu_valid_excl_access(cpu, hw_addr, DATA_SIZE)) {
+            /* The vCPU is SC-ing to an unprotected address. */
+            cpu->excl_protected_range.begin = EXCLUSIVE_RESET_ADDR;
+            cpu->excl_succeeded = false;
+
+            return;
+        }
+    }
+
+    smmu_helper(do_ram_store)(env, little_endian, val, addr, oi,
+                              get_mmuidx(oi), index, retaddr);
+
+    reset_other_cpus_colliding_ll_addr(hw_addr, DATA_SIZE);
+
+    return;
+}
+
 void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
                        TCGMemOpIdx oi, uintptr_t retaddr)
 {
@@ -493,11 +530,17 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
     }
 
-    /* Handle an IO access. */
+    /* Handle an IO access or exclusive access. */
     if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        smmu_helper(do_mmio_store)(env, true, val, addr, oi, mmu_idx, index,
-                                   retaddr);
-        return;
+        if (tlb_addr & TLB_EXCL) {
+            smmu_helper(do_excl_store)(env, true, val, addr, oi, index,
+                                       retaddr);
+            return;
+        } else {
+            smmu_helper(do_mmio_store)(env, true, val, addr, oi, mmu_idx,
+                                       index, retaddr);
+            return;
+        }
     }
 
     smmu_helper(do_ram_store)(env, true, val, addr, oi, mmu_idx, index,
@@ -529,11 +572,17 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
         tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
     }
 
-    /* Handle an IO access. */
+    /* Handle an IO access or exclusive access. */
    if (unlikely(tlb_addr & ~TARGET_PAGE_MASK)) {
-        smmu_helper(do_mmio_store)(env, false, val, addr, oi, mmu_idx, index,
-                                   retaddr);
-        return;
+        if (tlb_addr & TLB_EXCL) {
+            smmu_helper(do_excl_store)(env, false, val, addr, oi, index,
+                                       retaddr);
+            return;
+        } else {
+            smmu_helper(do_mmio_store)(env, false, val, addr, oi, mmu_idx,
+                                       index, retaddr);
+            return;
+        }
     }
 
     smmu_helper(do_ram_store)(env, false, val, addr, oi, mmu_idx, index,
-- 
2.8.0