From: Nicholas Piggin <npiggin@gmail.com>
To: qemu-ppc@nongnu.org
Cc: "Nicholas Piggin" <npiggin@gmail.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Richard Henderson" <richard.henderson@linaro.org>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Daniel Henrique Barboza" <danielhb413@gmail.com>,
qemu-devel@nongnu.org, "Peter Maydell" <peter.maydell@linaro.org>,
qemu-arm@nongnu.org, qemu-riscv@nongnu.org,
qemu-s390x@nongnu.org
Subject: [PATCH v2 1/3] target/ppc: Fix broadcast tlbie synchronisation
Date: Fri, 5 Apr 2024 22:53:36 +1000
Message-ID: <20240405125340.380828-2-npiggin@gmail.com>
In-Reply-To: <20240405125340.380828-1-npiggin@gmail.com>

With mttcg, broadcast tlbie instructions do not wait until other vCPUs
have been kicked out of TCG execution before they complete (including
the necessary subsequent tlbsync, etc., instructions). This is contrary
to the ISA, and it permits other vCPUs to keep using stale translations
after the TLB flush. For example:

  // *memP is initially 0, memV maps to memP with *pte

  CPU0
  *pte = 0;
  ptesync ; tlbie ; eieio ; tlbsync ; ptesync
  *memP = 1;

  CPU1
  assert(*memV == 0);

It is possible for the assertion to fail because CPU1 can translate
memV through the stale TLB entry after CPU0 has stored 1 to the
underlying memory. The correct behaviour is for the assertion never to
fail: CPU1 should either load the old value 0, or take a page fault
once it observes that the pte has been cleared.
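
To make the interleaving concrete, one failing schedule looks like this
(a sketch; the exact timing depends on host scheduling):

  CPU0: *pte = 0
  CPU0: broadcast tlbie completes without waiting for CPU1 to leave its TB
  CPU0: eieio ; tlbsync ; ptesync also complete without waiting
  CPU0: *memP = 1
  CPU1: translates memV via the stale TLB entry and loads *memP == 1
  CPU1: assert(*memV == 0) fails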

This race was observed with a careful test case in which the CPU1
checks run in a large TB that performs many loads, so CPU1 can keep
executing for the entire window in which CPU0 clears the pte and then
stores to the memory. Host vCPU thread preemption could cause the race
to hit anywhere, however.

As explained in commit 4ddc104689b ("target/ppc: Fix tlbie"), it is
not enough to just use tlb_flush_all_cpus_synced(), because the flush
work it queues does not execute until the calling CPU has finished its
TB. The TB must also be ended immediately, which guarantees that all
CPUs have run the flush work before the next instruction executes.
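
For reference, the synced flush variants queue the flush as async work
on all vCPUs, with the source vCPU's portion run as "safe" work that
only executes once every vCPU has left its TB. A rough sketch of the
pattern in accel/tcg/cputlb.c (paraphrased, details elided):

  void tlb_flush_all_cpus_synced(CPUState *src_cpu)
  {
      const run_on_cpu_func fn = tlb_flush_by_mmuidx_async_work;

      /* Queue the flush on every other vCPU... */
      flush_all_helper(src_cpu, fn, RUN_ON_CPU_HOST_INT(ALL_MMUIDX_BITS));
      /*
       * ...and on the source vCPU as safe work, which runs only after
       * all vCPUs have exited their TBs -- hence the TB must end here
       * so the flush is observed before the next guest instruction.
       */
      async_safe_run_on_cpu(src_cpu, fn, RUN_ON_CPU_HOST_INT(ALL_MMUIDX_BITS));
  }
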
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
 target/ppc/helper_regs.c                     | 2 +-
 target/ppc/mmu_helper.c                      | 2 +-
 target/ppc/translate.c                       | 7 +++++++
 target/ppc/translate/storage-ctrl-impl.c.inc | 7 +++++++
 4 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/target/ppc/helper_regs.c b/target/ppc/helper_regs.c
index 25258986e3..9094ae5004 100644
--- a/target/ppc/helper_regs.c
+++ b/target/ppc/helper_regs.c
@@ -334,7 +334,7 @@ void check_tlb_flush(CPUPPCState *env, bool global)
     if (global && (env->tlb_need_flush & TLB_NEED_GLOBAL_FLUSH)) {
         env->tlb_need_flush &= ~TLB_NEED_GLOBAL_FLUSH;
         env->tlb_need_flush &= ~TLB_NEED_LOCAL_FLUSH;
-        tlb_flush_all_cpus(cs);
+        tlb_flush_all_cpus_synced(cs);
         return;
     }

diff --git a/target/ppc/mmu_helper.c b/target/ppc/mmu_helper.c
index c071b4d5e2..aaa5bfc62a 100644
--- a/target/ppc/mmu_helper.c
+++ b/target/ppc/mmu_helper.c
@@ -533,7 +533,7 @@ void helper_tlbie_isa300(CPUPPCState *env, target_ulong rb, target_ulong rs,
         if (local) {
             tlb_flush_page(env_cpu(env), addr);
         } else {
-            tlb_flush_page_all_cpus(env_cpu(env), addr);
+            tlb_flush_page_all_cpus_synced(env_cpu(env), addr);
         }

         return;
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index 93ffec787c..4ac8af2058 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -3495,6 +3495,13 @@ static inline void gen_check_tlb_flush(DisasContext *ctx, bool global)
         gen_helper_check_tlb_flush_local(tcg_env);
     }
     gen_set_label(l);
+
+    if (global) {
+        /*
+         * Global TLB flush uses async-work which must run before the
+         * next instruction, so this must be the last in the TB.
+         */
+        ctx->base.is_jmp = DISAS_EXIT_UPDATE;
+    }
 }
 #else
 static inline void gen_check_tlb_flush(DisasContext *ctx, bool global) { }
diff --git a/target/ppc/translate/storage-ctrl-impl.c.inc b/target/ppc/translate/storage-ctrl-impl.c.inc
index 74c23a4191..b8b4454663 100644
--- a/target/ppc/translate/storage-ctrl-impl.c.inc
+++ b/target/ppc/translate/storage-ctrl-impl.c.inc
@@ -224,6 +224,13 @@ static bool do_tlbie(DisasContext *ctx, arg_X_tlbie *a, bool local)
                                              a->prs << TLBIE_F_PRS_SHIFT |
                                              a->r << TLBIE_F_R_SHIFT |
                                              local << TLBIE_F_LOCAL_SHIFT));
+    if (!local) {
+        /*
+         * Global TLB flush uses async-work which must run before the
+         * next instruction, so this must be the last in the TB.
+         */
+        ctx->base.is_jmp = DISAS_EXIT_UPDATE;
+    }

     return true;
 #endif
--
2.43.0