* [Qemu-devel] [PULL 1/6] translate-all: fix locking of TBs whose two pages share the same physical page
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell, Emilio G. Cota
From: "Emilio G. Cota" <cota@braap.org>
Commit 0b5c91f ("translate-all: use per-page locking in !user-mode",
2018-06-15) introduced per-page locking. It assumed that the physical
pages corresponding to a TB (at most two pages) are always distinct,
which is wrong. For instance, an xtensa test provided by Max Filippov
is broken by the commit, since the test maps two virtual pages
to the same physical page:
virt1: 0x7fff, virt2: 0x8000
phys1: 0x6000fff, phys2: 0x6000000
Fix it by removing the assumption from page_lock_pair: if the two
physical addresses fall within the same page, we lock the PageDesc
only once. Note that the two callers of page_lock_pair, namely
page_unlock_tb and tb_link_page, are also updated so that we do
not try to unlock the same PageDesc twice.
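
As a self-contained illustration of the aliasing (not QEMU code; it
assumes a 4 KiB target page and reuses the constants from the
reproducer above):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_BITS 12               /* assumed 4 KiB pages */

    int main(void)
    {
        uint64_t phys1 = 0x6000fff;           /* backs virt 0x7fff */
        uint64_t phys2 = 0x6000000;           /* backs virt 0x8000 */

        /* The addresses differ, yet the page numbers are equal, so
         * there is only one PageDesc and page_lock_pair() must lock
         * it exactly once. */
        assert(phys1 != phys2);
        assert((phys1 >> TARGET_PAGE_BITS) == (phys2 >> TARGET_PAGE_BITS));
        return 0;
    }
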
Fixes: 0b5c91f74f3c83a36f37740969df8c775c997e69
Reported-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Max Filippov <jcmvbkbc@gmail.com>
Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Emilio G. Cota <cota@braap.org>
Message-Id: <1529944302-14186-1-git-send-email-cota@braap.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/translate-all.c | 32 +++++++++++++++++++++++++-------
1 file changed, 25 insertions(+), 7 deletions(-)
diff --git a/accel/tcg/translate-all.c b/accel/tcg/translate-all.c
index e8228bf3e6..170b95793f 100644
--- a/accel/tcg/translate-all.c
+++ b/accel/tcg/translate-all.c
@@ -669,9 +669,15 @@ static inline void page_lock_tb(const TranslationBlock *tb)
static inline void page_unlock_tb(const TranslationBlock *tb)
{
- page_unlock(page_find(tb->page_addr[0] >> TARGET_PAGE_BITS));
+ PageDesc *p1 = page_find(tb->page_addr[0] >> TARGET_PAGE_BITS);
+
+ page_unlock(p1);
if (unlikely(tb->page_addr[1] != -1)) {
- page_unlock(page_find(tb->page_addr[1] >> TARGET_PAGE_BITS));
+ PageDesc *p2 = page_find(tb->page_addr[1] >> TARGET_PAGE_BITS);
+
+ if (p2 != p1) {
+ page_unlock(p2);
+ }
}
}
@@ -850,22 +856,34 @@ static void page_lock_pair(PageDesc **ret_p1, tb_page_addr_t phys1,
PageDesc **ret_p2, tb_page_addr_t phys2, int alloc)
{
PageDesc *p1, *p2;
+ tb_page_addr_t page1;
+ tb_page_addr_t page2;
assert_memory_lock();
- g_assert(phys1 != -1 && phys1 != phys2);
- p1 = page_find_alloc(phys1 >> TARGET_PAGE_BITS, alloc);
+ g_assert(phys1 != -1);
+
+ page1 = phys1 >> TARGET_PAGE_BITS;
+ page2 = phys2 >> TARGET_PAGE_BITS;
+
+ p1 = page_find_alloc(page1, alloc);
if (ret_p1) {
*ret_p1 = p1;
}
if (likely(phys2 == -1)) {
page_lock(p1);
return;
+ } else if (page1 == page2) {
+ page_lock(p1);
+ if (ret_p2) {
+ *ret_p2 = p1;
+ }
+ return;
}
- p2 = page_find_alloc(phys2 >> TARGET_PAGE_BITS, alloc);
+ p2 = page_find_alloc(page2, alloc);
if (ret_p2) {
*ret_p2 = p2;
}
- if (phys1 < phys2) {
+ if (page1 < page2) {
page_lock(p1);
page_lock(p2);
} else {
@@ -1623,7 +1641,7 @@ tb_link_page(TranslationBlock *tb, tb_page_addr_t phys_pc,
tb = existing_tb;
}
- if (p2) {
+ if (p2 && p2 != p) {
page_unlock(p2);
}
page_unlock(p);
--
2.17.1
* [Qemu-devel] [PULL 2/6] tcg: Define and use new tlb_hit() and tlb_hit_page() functions
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell
From: Peter Maydell <peter.maydell@linaro.org>
The condition to check whether an address has hit against a particular
TLB entry is not completely trivial. We do this in various places, and
in fact in one place (get_page_addr_code()) we have got the condition
wrong. Abstract it out into new tlb_hit() and tlb_hit_page() inline
functions (one for a known-page-aligned address and one for an
arbitrary address), and use them in all the places where we had the
condition correct.
This is a no-behaviour-change patch; we leave fixing the buggy
code in get_page_addr_code() to a subsequent patch.
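
For reference, a standalone sketch of the two helpers with
illustrative constants (the real TLB_INVALID_MASK bit position is
target-dependent; this is not the QEMU header itself):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK  (~(uint64_t)0xfff)   /* assumed 4 KiB pages */
    #define TLB_INVALID_MASK  ((uint64_t)1 << 3)   /* illustrative bit */

    /* addr must be page aligned */
    static int tlb_hit_page(uint64_t tlb_addr, uint64_t addr)
    {
        return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
    }

    /* addr need not be aligned */
    static int tlb_hit(uint64_t tlb_addr, uint64_t addr)
    {
        return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
    }

    int main(void)
    {
        assert(tlb_hit(0x6000000, 0x6000abc));                     /* same page */
        assert(!tlb_hit(0x7000000, 0x6000abc));                    /* other page */
        assert(!tlb_hit(0x6000000 | TLB_INVALID_MASK, 0x6000abc)); /* invalid */
        return 0;
    }
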
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629162122.19376-2-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/softmmu_template.h | 16 ++++++----------
include/exec/cpu-all.h | 23 +++++++++++++++++++++++
include/exec/cpu_ldst.h | 3 +--
accel/tcg/cputlb.c | 15 +++++----------
4 files changed, 35 insertions(+), 22 deletions(-)
diff --git a/accel/tcg/softmmu_template.h b/accel/tcg/softmmu_template.h
index c47591c970..badbf14880 100644
--- a/accel/tcg/softmmu_template.h
+++ b/accel/tcg/softmmu_template.h
@@ -123,8 +123,7 @@ WORD_TYPE helper_le_ld_name(CPUArchState *env, target_ulong addr,
}
/* If the TLB entry is for a different page, reload and try again. */
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
@@ -191,8 +190,7 @@ WORD_TYPE helper_be_ld_name(CPUArchState *env, target_ulong addr,
}
/* If the TLB entry is for a different page, reload and try again. */
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(ADDR_READ, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, READ_ACCESS_TYPE,
mmu_idx, retaddr);
@@ -286,8 +284,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
}
/* If the TLB entry is for a different page, reload and try again. */
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
@@ -322,7 +319,7 @@ void helper_le_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
- if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
+ if (!tlb_hit_page(tlb_addr2, page2)
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
@@ -364,8 +361,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
}
/* If the TLB entry is for a different page, reload and try again. */
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
@@ -400,7 +396,7 @@ void helper_be_st_name(CPUArchState *env, target_ulong addr, DATA_TYPE val,
page2 = (addr + DATA_SIZE) & TARGET_PAGE_MASK;
index2 = (page2 >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
tlb_addr2 = env->tlb_table[mmu_idx][index2].addr_write;
- if (page2 != (tlb_addr2 & (TARGET_PAGE_MASK | TLB_INVALID_MASK))
+ if (!tlb_hit_page(tlb_addr2, page2)
&& !VICTIM_TLB_HIT(addr_write, page2)) {
tlb_fill(ENV_GET_CPU(env), page2, DATA_SIZE, MMU_DATA_STORE,
mmu_idx, retaddr);
diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 7338f57062..117d2fbbca 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -339,6 +339,29 @@ CPUArchState *cpu_copy(CPUArchState *env);
#define TLB_FLAGS_MASK (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
| TLB_RECHECK)
+/**
+ * tlb_hit_page: return true if page aligned @addr is a hit against the
+ * TLB entry @tlb_addr
+ *
+ * @addr: virtual address to test (must be page aligned)
+ * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
+ */
+static inline bool tlb_hit_page(target_ulong tlb_addr, target_ulong addr)
+{
+ return addr == (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));
+}
+
+/**
+ * tlb_hit: return true if @addr is a hit against the TLB entry @tlb_addr
+ *
+ * @addr: virtual address to test (need not be page aligned)
+ * @tlb_addr: TLB entry address (a CPUTLBEntry addr_read/write/code value)
+ */
+static inline bool tlb_hit(target_ulong tlb_addr, target_ulong addr)
+{
+ return tlb_hit_page(tlb_addr, addr & TARGET_PAGE_MASK);
+}
+
void dump_exec_info(FILE *f, fprintf_function cpu_fprintf);
void dump_opcount_info(FILE *f, fprintf_function cpu_fprintf);
#endif /* !CONFIG_USER_ONLY */
diff --git a/include/exec/cpu_ldst.h b/include/exec/cpu_ldst.h
index 5de8c8a5af..0f2cb717b1 100644
--- a/include/exec/cpu_ldst.h
+++ b/include/exec/cpu_ldst.h
@@ -422,8 +422,7 @@ static inline void *tlb_vaddr_to_host(CPUArchState *env, target_ulong addr,
g_assert_not_reached();
}
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
/* TLB entry is for a different page */
return NULL;
}
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index eebe97dabb..adb711963b 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -239,12 +239,9 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
{
- if (addr == (tlb_entry->addr_read &
- (TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
- addr == (tlb_entry->addr_write &
- (TARGET_PAGE_MASK | TLB_INVALID_MASK)) ||
- addr == (tlb_entry->addr_code &
- (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (tlb_hit_page(tlb_entry->addr_read, addr) ||
+ tlb_hit_page(tlb_entry->addr_write, addr) ||
+ tlb_hit_page(tlb_entry->addr_code, addr)) {
memset(tlb_entry, -1, sizeof(*tlb_entry));
}
}
@@ -1046,8 +1043,7 @@ void probe_write(CPUArchState *env, target_ulong addr, int size, int mmu_idx,
int index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
target_ulong tlb_addr = env->tlb_table[mmu_idx][index].addr_write;
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
/* TLB entry is for a different page */
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, size, MMU_DATA_STORE,
@@ -1091,8 +1087,7 @@ static void *atomic_mmu_lookup(CPUArchState *env, target_ulong addr,
}
/* Check TLB entry and enforce page permissions. */
- if ((addr & TARGET_PAGE_MASK)
- != (tlb_addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK))) {
+ if (!tlb_hit(tlb_addr, addr)) {
if (!VICTIM_TLB_HIT(addr_write, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, 1 << s_bits, MMU_DATA_STORE,
mmu_idx, retaddr);
--
2.17.1
* [Qemu-devel] [PULL 3/6] accel/tcg: Correct "is this a TLB miss" check in get_page_addr_code()
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell
From: Peter Maydell <peter.maydell@linaro.org>
In commit 71b9a45330fe220d1 we changed the condition we use
to determine whether we need to refill the TLB in
get_page_addr_code() to
if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
(addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
This isn't the right check (it will falsely fail if the
input addr happens to have the low bit corresponding to
TLB_INVALID_MASK set, for instance). Replace it with a
use of the new tlb_hit() function, which is the correct test.
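
A self-contained demonstration of the false miss, reusing the
illustrative constants from the tlb_hit() sketch in the previous
patch (again, not QEMU's real bit positions):

    #include <assert.h>
    #include <stdint.h>

    #define TARGET_PAGE_MASK  (~(uint64_t)0xfff)
    #define TLB_INVALID_MASK  ((uint64_t)1 << 3)

    int main(void)
    {
        uint64_t addr      = 0x6000008;  /* fetch address; bit 3 set by chance */
        uint64_t addr_code = 0x6000000;  /* valid TLB entry for that page */

        /* Old check: masks the *address* with TLB_INVALID_MASK too, so a
         * set low bit in addr produces a spurious mismatch. */
        int old_miss = addr_code != (addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK));

        /* tlb_hit(): masks the *entry*, where the invalid bit actually lives. */
        int new_hit = (addr & TARGET_PAGE_MASK)
                      == (addr_code & (TARGET_PAGE_MASK | TLB_INVALID_MASK));

        assert(old_miss);  /* the buggy condition reports a miss */
        assert(new_hit);   /* the correct condition reports a hit */
        return 0;
    }
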
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629162122.19376-3-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index adb711963b..3ae1198c24 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -957,8 +957,7 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
index = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
mmu_idx = cpu_mmu_index(env, true);
- if (unlikely(env->tlb_table[mmu_idx][index].addr_code !=
- (addr & (TARGET_PAGE_MASK | TLB_INVALID_MASK)))) {
+ if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
if (!VICTIM_TLB_HIT(addr_read, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
}
--
2.17.1
* [Qemu-devel] [PULL 4/6] accel/tcg: Don't treat invalid TLB entries as needing recheck
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell
From: Peter Maydell <peter.maydell@linaro.org>
In get_page_addr_code() when we check whether the TLB entry
is marked as TLB_RECHECK, we should not go down that code
path if the TLB entry is not valid at all (i.e. the TLB_INVALID
bit is set).
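
The fixed condition has the following shape (the bit positions here
are illustrative, not QEMU's actual values):

    #include <assert.h>
    #include <stdint.h>

    #define TLB_RECHECK       ((uint64_t)1 << 2)   /* illustrative bit */
    #define TLB_INVALID_MASK  ((uint64_t)1 << 3)   /* illustrative bit */

    /* Take the recheck slow path only when TLB_RECHECK is set on a
     * still-valid entry; an invalidated entry must be refilled instead. */
    static int needs_recheck(uint64_t addr_code)
    {
        return (addr_code & (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK;
    }

    int main(void)
    {
        assert(needs_recheck(TLB_RECHECK));
        assert(!needs_recheck(TLB_RECHECK | TLB_INVALID_MASK));
        assert(!needs_recheck(0));
        return 0;
    }
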
Tested-by: Laurent Vivier <laurent@vivier.eu>
Reported-by: Laurent Vivier <laurent@vivier.eu>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <20180629161731.16239-1-peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index 3ae1198c24..cc90a5fe92 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -963,7 +963,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
}
}
- if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
+ if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
+ (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
/*
* This is a TLB_RECHECK access, where the MMU protection
* covers a smaller range than a target page, and we must
--
2.17.1
* Re: [Qemu-devel] [PULL 4/6] accel/tcg: Don't treat invalid TLB entries as needing recheck
From: Peter Maydell @ 2018-07-13 11:05 UTC
To: Richard Henderson; +Cc: QEMU Developers
On 2 July 2018 at 17:05, Richard Henderson <richard.henderson@linaro.org> wrote:
> From: Peter Maydell <peter.maydell@linaro.org>
>
> In get_page_addr_code() when we check whether the TLB entry
> is marked as TLB_RECHECK, we should not go down that code
> path if the TLB entry is not valid at all (ie the TLB_INVALID
> bit is set).
>
> Tested-by: Laurent Vivier <laurent@vivier.eu>
> Reported-by: Laurent Vivier <laurent@vivier.eu>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
> Message-Id: <20180629161731.16239-1-peter.maydell@linaro.org>
> Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
> ---
> accel/tcg/cputlb.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
> index 3ae1198c24..cc90a5fe92 100644
> --- a/accel/tcg/cputlb.c
> +++ b/accel/tcg/cputlb.c
> @@ -963,7 +963,8 @@ tb_page_addr_t get_page_addr_code(CPUArchState *env, target_ulong addr)
> }
> }
>
> - if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
> + if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
> + (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
> /*
> * This is a TLB_RECHECK access, where the MMU protection
> * covers a smaller range than a target page, and we must
Looking again at this code, I think that, now that we have the code
to ensure that there's only ever one entry in the TLB/victim TLB for
a given guest address, this change is unnecessary. The sequence
if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
if (!VICTIM_TLB_HIT(addr_read, addr)) {
tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
}
}
should result in us always either (a) taking a guest exception and
longjumping out of the tlb_fill(), or (b) ending up with the TLB
containing an entry valid for an insn fetch, i.e. addr_code does not
have TLB_INVALID_MASK set. So we could drop the check on TLB_INVALID_MASK
here and instead have:
assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
(I'm looking at this code and trying to clean up the mishandling of
execution from rom-device-not-in-romd-mode. Patches to follow later...)
thanks
-- PMM
* Re: [Qemu-devel] [PULL 4/6] accel/tcg: Don't treat invalid TLB entries as needing recheck
From: Richard Henderson @ 2018-07-13 12:36 UTC
To: Peter Maydell; +Cc: QEMU Developers
On 07/13/2018 06:05 AM, Peter Maydell wrote:
>> - if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
>> + if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
>> + (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
>> /*
>> * This is a TLB_RECHECK access, where the MMU protection
>> * covers a smaller range than a target page, and we must
>
> Looking again at this code, I think that now we have the code to
> ensure that there's only ever one entry in the TLB/victim TLB for
> a given guest address...
Which probably wasn't the case the first time you wrote this, no?
Fixing that single entry condition was just a few weeks ago.
> The sequence
>
> if (unlikely(!tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr))) {
> if (!VICTIM_TLB_HIT(addr_read, addr)) {
> tlb_fill(ENV_GET_CPU(env), addr, 0, MMU_INST_FETCH, mmu_idx, 0);
> }
> }
>
> should result in us always either (a) taking a guest exception and
> longjumping out of the tlb_fill(), or (b) ending up with the TLB
> containing an entry valid for an insn fetch, ie addr_code does not
> have TLB_INVALID_MASK set. So we could drop the check on TLB_INVALID_MASK
> here and instead have:
>
> assert(tlb_hit(env->tlb_table[mmu_idx][index].addr_code, addr));
Tuck that assert just after the tlb_fill, if you like.
I think it's unnecessary; we don't do that in any of the
other places where we use tlb_fill.
r~
* Re: [Qemu-devel] [PULL 4/6] accel/tcg: Don't treat invalid TLB entries as needing recheck
From: Peter Maydell @ 2018-07-13 12:50 UTC
To: Richard Henderson; +Cc: QEMU Developers
On 13 July 2018 at 13:36, Richard Henderson
<richard.henderson@linaro.org> wrote:
> On 07/13/2018 06:05 AM, Peter Maydell wrote:
>>> - if (unlikely(env->tlb_table[mmu_idx][index].addr_code & TLB_RECHECK)) {
>>> + if (unlikely((env->tlb_table[mmu_idx][index].addr_code &
>>> + (TLB_RECHECK | TLB_INVALID_MASK)) == TLB_RECHECK)) {
>>> /*
>>> * This is a TLB_RECHECK access, where the MMU protection
>>> * covers a smaller range than a target page, and we must
>>
>> Looking again at this code, I think that now we have the code to
>> ensure that there's only ever one entry in the TLB/victim TLB for
>> a given guest address...
>
> Which probably wasn't the case the first time you wrote this, no?
> Fixing that single entry condition was just a few weeks ago.
Yes, exactly.
OTOH with Laurent's m68k test case I'm finding that the assert
is firing, and I'm not entirely sure why yet...
thanks
-- PMM
* [Qemu-devel] [PULL 5/6] accel/tcg: Avoid caching overwritten tlb entries
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell
When installing a TLB entry, remove any cached version of the
same page from the victim TLB (VTLB). If the existing TLB entry
is already for the same page, do not evict it into the VTLB;
just overwrite it in place.
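
In outline, the install path after this patch behaves like the toy
model below; Entry, the single direct-mapped slot, and the helpers
are stand-ins for QEMU's real structures, not its actual code:

    #include <stdint.h>

    typedef struct { uint64_t page; int valid; } Entry;

    #define VTLB_SIZE 8

    static Entry tlb_slot;              /* one direct-mapped main-TLB slot */
    static Entry vtlb[VTLB_SIZE];       /* victim TLB */
    static unsigned vidx;

    static void install_entry(uint64_t page, Entry new_entry)
    {
        /* 1. Make sure no stale victim-TLB copy of this page survives. */
        for (unsigned k = 0; k < VTLB_SIZE; k++) {
            if (vtlb[k].valid && vtlb[k].page == page) {
                vtlb[k].valid = 0;
            }
        }
        /* 2. Evict the old main-TLB entry only if it is for a different
         *    page; a matching entry is simply overwritten in place. */
        if (tlb_slot.valid && tlb_slot.page != page) {
            vtlb[vidx++ % VTLB_SIZE] = tlb_slot;
        }
        /* 3. Refill. */
        tlb_slot = new_entry;
    }
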
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
accel/tcg/cputlb.c | 63 ++++++++++++++++++++++++++--------------------
1 file changed, 36 insertions(+), 27 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index cc90a5fe92..20c147d655 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -235,17 +235,30 @@ void tlb_flush_by_mmuidx_all_cpus_synced(CPUState *src_cpu,
async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(idxmap));
}
-
-
-static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong addr)
+static inline bool tlb_hit_page_anyprot(CPUTLBEntry *tlb_entry,
+ target_ulong page)
{
- if (tlb_hit_page(tlb_entry->addr_read, addr) ||
- tlb_hit_page(tlb_entry->addr_write, addr) ||
- tlb_hit_page(tlb_entry->addr_code, addr)) {
+ return tlb_hit_page(tlb_entry->addr_read, page) ||
+ tlb_hit_page(tlb_entry->addr_write, page) ||
+ tlb_hit_page(tlb_entry->addr_code, page);
+}
+
+static inline void tlb_flush_entry(CPUTLBEntry *tlb_entry, target_ulong page)
+{
+ if (tlb_hit_page_anyprot(tlb_entry, page)) {
memset(tlb_entry, -1, sizeof(*tlb_entry));
}
}
+static inline void tlb_flush_vtlb_page(CPUArchState *env, int mmu_idx,
+ target_ulong page)
+{
+ int k;
+ for (k = 0; k < CPU_VTLB_SIZE; k++) {
+ tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], page);
+ }
+}
+
static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
{
CPUArchState *env = cpu->env_ptr;
@@ -271,14 +284,7 @@ static void tlb_flush_page_async_work(CPUState *cpu, run_on_cpu_data data)
i = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
tlb_flush_entry(&env->tlb_table[mmu_idx][i], addr);
- }
-
- /* check whether there are entries that need to be flushed in the vtlb */
- for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
- int k;
- for (k = 0; k < CPU_VTLB_SIZE; k++) {
- tlb_flush_entry(&env->tlb_v_table[mmu_idx][k], addr);
- }
+ tlb_flush_vtlb_page(env, mmu_idx, addr);
}
tb_flush_jmp_cache(cpu, addr);
@@ -310,7 +316,6 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
unsigned long mmu_idx_bitmap = addr_and_mmuidx & ALL_MMUIDX_BITS;
int page = (addr >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
int mmu_idx;
- int i;
assert_cpu_is_self(cpu);
@@ -320,11 +325,7 @@ static void tlb_flush_page_by_mmuidx_async_work(CPUState *cpu,
for (mmu_idx = 0; mmu_idx < NB_MMU_MODES; mmu_idx++) {
if (test_bit(mmu_idx, &mmu_idx_bitmap)) {
tlb_flush_entry(&env->tlb_table[mmu_idx][page], addr);
-
- /* check whether there are vltb entries that need to be flushed */
- for (i = 0; i < CPU_VTLB_SIZE; i++) {
- tlb_flush_entry(&env->tlb_v_table[mmu_idx][i], addr);
- }
+ tlb_flush_vtlb_page(env, mmu_idx, addr);
}
}
@@ -609,10 +610,9 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
target_ulong address;
target_ulong code_address;
uintptr_t addend;
- CPUTLBEntry *te, *tv, tn;
+ CPUTLBEntry *te, tn;
hwaddr iotlb, xlat, sz, paddr_page;
target_ulong vaddr_page;
- unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
int asidx = cpu_asidx_from_attrs(cpu, attrs);
assert_cpu_is_self(cpu);
@@ -654,19 +654,28 @@ void tlb_set_page_with_attrs(CPUState *cpu, target_ulong vaddr,
addend = (uintptr_t)memory_region_get_ram_ptr(section->mr) + xlat;
}
+ /* Make sure there's no cached translation for the new page. */
+ tlb_flush_vtlb_page(env, mmu_idx, vaddr_page);
+
code_address = address;
iotlb = memory_region_section_get_iotlb(cpu, section, vaddr_page,
paddr_page, xlat, prot, &address);
index = (vaddr_page >> TARGET_PAGE_BITS) & (CPU_TLB_SIZE - 1);
te = &env->tlb_table[mmu_idx][index];
- /* do not discard the translation in te, evict it into a victim tlb */
- tv = &env->tlb_v_table[mmu_idx][vidx];
- /* addr_write can race with tlb_reset_dirty_range */
- copy_tlb_helper(tv, te, true);
+ /*
+ * Only evict the old entry to the victim tlb if it's for a
+ * different page; otherwise just overwrite the stale data.
+ */
+ if (!tlb_hit_page_anyprot(te, vaddr_page)) {
+ unsigned vidx = env->vtlb_index++ % CPU_VTLB_SIZE;
+ CPUTLBEntry *tv = &env->tlb_v_table[mmu_idx][vidx];
- env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+ /* Evict the old entry into the victim tlb. */
+ copy_tlb_helper(tv, te, true);
+ env->iotlb_v[mmu_idx][vidx] = env->iotlb[mmu_idx][index];
+ }
/* refill the tlb */
/*
--
2.17.1
* [Qemu-devel] [PULL 6/6] cpu: Assert asidx_from_attrs return value in range
From: Richard Henderson @ 2018-07-02 16:05 UTC
To: qemu-devel; +Cc: peter.maydell
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Richard Henderson <richard.henderson@linaro.org>
---
include/qom/cpu.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/include/qom/cpu.h b/include/qom/cpu.h
index cce2fd6acc..bd796579ee 100644
--- a/include/qom/cpu.h
+++ b/include/qom/cpu.h
@@ -620,11 +620,13 @@ static inline hwaddr cpu_get_phys_page_debug(CPUState *cpu, vaddr addr)
static inline int cpu_asidx_from_attrs(CPUState *cpu, MemTxAttrs attrs)
{
CPUClass *cc = CPU_GET_CLASS(cpu);
+ int ret = 0;
if (cc->asidx_from_attrs) {
- return cc->asidx_from_attrs(cpu, attrs);
+ ret = cc->asidx_from_attrs(cpu, attrs);
+ assert(ret < cpu->num_ases && ret >= 0);
}
- return 0;
+ return ret;
}
#endif
--
2.17.1
* Re: [Qemu-devel] [PULL 0/6] tcg queued patches
From: Peter Maydell @ 2018-07-02 18:07 UTC
To: Richard Henderson; +Cc: QEMU Developers
On 2 July 2018 at 17:05, Richard Henderson <richard.henderson@linaro.org> wrote:
> The following changes since commit 2d58e33ec1b76990f09bc1e3412e0b36e1ac4634:
>
> Merge remote-tracking branch 'remotes/vivier/tags/m68k-for-3.0-pull-request' into staging (2018-07-02 13:43:10 +0100)
>
> are available in the Git repository at:
>
> https://github.com/rth7680/qemu.git tags/pull-tcg-20180702
>
> for you to fetch changes up to 9c8c334b0637bf3c592d432b0c11f3b62dd5dba3:
>
> cpu: Assert asidx_from_attrs return value in range (2018-07-02 08:09:49 -0700)
>
> ----------------------------------------------------------------
> Assorted tlb and tb caching fixes
>
> ----------------------------------------------------------------
> Emilio G. Cota (1):
> translate-all: fix locking of TBs whose two pages share the same physical page
>
> Peter Maydell (3):
> tcg: Define and use new tlb_hit() and tlb_hit_page() functions
> accel/tcg: Correct "is this a TLB miss" check in get_page_addr_code()
> accel/tcg: Don't treat invalid TLB entries as needing recheck
>
> Richard Henderson (2):
> accel/tcg: Avoid caching overwritten tlb entries
> cpu: Assert asidx_from_attrs return value in range
Applied, thanks.
-- PMM