* [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
@ 2020-03-18 4:41 Nicholas Piggin
2020-03-18 4:41 ` [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation Nicholas Piggin
` (3 more replies)
0 siblings, 4 replies; 11+ messages in thread
From: Nicholas Piggin @ 2020-03-18 4:41 UTC (permalink / raw)
To: qemu-ppc
Cc: Greg Kurz, Nicholas Piggin, qemu-devel, Cédric Le Goater,
David Gibson
slbia must invalidate TLBs even if it does not remove a valid SLB
entry, because slbmte can overwrite valid entries without removing
their TLBs.
As the architecture says, slbia invalidates all lookaside information,
not conditionally based on if it removed valid entries.
It does not seem possible for POWER8 or earlier Linux kernels to hit
this bug because it never changes its kernel SLB translations, and it
should always have valid entries if any accesses are made to usespace
regions. However other operating systems which may modify SLB entry 0
or do more fancy things with segments might be affected.
When POWER9 slbia support is added in the next patch, this becomes a
real problem because some new slbia variants don't invalidate all
non-zero entries.
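To make the gap concrete, here is a stand-alone toy model of the
pre-patch behaviour (all names here are invented for illustration;
this is not QEMU code):

#include <stdbool.h>
#include <stdio.h>

#define NSLB 32

/* Invented stand-in state: a valid bit per SLB slot, plus one flag
 * standing in for lookaside information left behind by an entry that
 * was later overwritten. */
static bool slb_valid[NSLB];
static bool stale_lookaside;

/* slbmte may overwrite a valid entry without flushing the old
 * translation's lookaside information. */
static void slbmte(int n, bool valid)
{
    if (slb_valid[n]) {
        stale_lookaside = true;   /* old translation stays cached */
    }
    slb_valid[n] = valid;
}

/* Pre-patch slbia: flushed only when it evicted a valid entry. */
static void slbia_pre_patch(void)
{
    bool flushed = false;
    for (int n = 1; n < NSLB; n++) {
        if (slb_valid[n]) {
            slb_valid[n] = false;
            flushed = true;
        }
    }
    if (flushed) {
        stale_lookaside = false;
    }
}

int main(void)
{
    slbmte(1, true);    /* install a segment translation */
    slbmte(1, false);   /* overwrite it without flushing */
    slbia_pre_patch();  /* sees no valid entry, so it never flushes */
    printf("stale lookaside after slbia: %d\n", stale_lookaside); /* 1 */
    return 0;
}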
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
target/ppc/mmu-hash64.c | 21 +++++++++++++++------
1 file changed, 15 insertions(+), 6 deletions(-)
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 34f6009b1e..373d44de74 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
PowerPCCPU *cpu = env_archcpu(env);
int n;
+ /*
+ * slbia must always flush all TLB (which is equivalent to ERAT in ppc
+ * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
+ * can overwrite a valid SLB without flushing its lookaside information.
+ *
+ * It would be possible to keep the TLB in synch with the SLB by flushing
+ * when a valid entry is overwritten by slbmte, and therefore slbia would
+ * not have to flush unless it evicts a valid SLB entry. However it is
+ * expected that slbmte is more common than slbia, and slbia is usually
+ * going to evict valid SLB entries, so that tradeoff is unlikely to be a
+ * good one.
+ */
+
/* XXX: Warning: slbia never invalidates the first segment */
for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
ppc_slb_t *slb = &env->slb[n];
if (slb->esid & SLB_ESID_V) {
slb->esid &= ~SLB_ESID_V;
- /*
- * XXX: given the fact that segment size is 256 MB or 1TB,
- * and we still don't have a tlb_flush_mask(env, n, mask)
- * in QEMU, we just invalidate all TLBs
- */
- env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
}
}
+
+ env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
}
static void __helper_slbie(CPUPPCState *env, target_ulong addr,
--
2.23.0

* [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation
2020-03-18 4:41 [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Nicholas Piggin
@ 2020-03-18 4:41 ` Nicholas Piggin
2020-03-18 17:08 ` [EXTERNAL] " Cédric Le Goater
2020-03-18 16:45 ` [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Cédric Le Goater
` (2 subsequent siblings)
3 siblings, 1 reply; 11+ messages in thread
From: Nicholas Piggin @ 2020-03-18 4:41 UTC (permalink / raw)
To: qemu-ppc
Cc: Greg Kurz, Nicholas Piggin, qemu-devel, Cédric Le Goater,
David Gibson
Linux using the hash MMU ("disable_radix" command line) on a POWER9
machine quickly hits translation bugs due to using v3.0 slbia
features that are not implemented in TCG. Add them.
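The IH dispatch the patch implements can be summarised with the
following stand-alone sketch (a paraphrase for the reader, with
invented names; the authoritative semantics are in the ISA v3.0
books, and the real code is in the diff below):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Returns the first SLB index slbia invalidates for a given IH value
 * on a v3.0 (POWER9) MMU, or -1 when no SLB entries are touched.
 * *class0_kept is set for IH=3, which skips valid entries whose class
 * bit (SLB_VSID_C) is 0. In every case the TCG implementation also
 * flushes the whole TLB. */
static int slbia_first_index(uint32_t ih, bool *class0_kept)
{
    *class0_kept = (ih == 0x3);
    switch (ih) {
    case 0x7:
        return -1;  /* all lookaside information, no SLB entries */
    case 0x3:
    case 0x4:
        return 0;   /* entry 0 is included */
    default:
        return 1;   /* 0,1,2,6 and undefined values: skip entry 0 */
    }
}

int main(void)
{
    for (uint32_t ih = 0; ih <= 7; ih++) {
        bool keep;
        int first = slbia_first_index(ih, &keep);
        printf("IH=%u: first entry %d%s\n", ih, first,
               keep ? " (class 0 entries preserved)" : "");
    }
    return 0;
}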
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
---
target/ppc/helper.h | 2 +-
target/ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++++++++++-----
target/ppc/translate.c | 5 +++-
3 files changed, 55 insertions(+), 9 deletions(-)
diff --git a/target/ppc/helper.h b/target/ppc/helper.h
index ee1498050d..2dfa1c6942 100644
--- a/target/ppc/helper.h
+++ b/target/ppc/helper.h
@@ -615,7 +615,7 @@ DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
DEF_HELPER_2(load_slb_esid, tl, env, tl)
DEF_HELPER_2(load_slb_vsid, tl, env, tl)
DEF_HELPER_2(find_slb_vsid, tl, env, tl)
-DEF_HELPER_FLAGS_1(slbia, TCG_CALL_NO_RWG, void, env)
+DEF_HELPER_FLAGS_2(slbia, TCG_CALL_NO_RWG, void, env, i32)
DEF_HELPER_FLAGS_2(slbie, TCG_CALL_NO_RWG, void, env, tl)
DEF_HELPER_FLAGS_2(slbieg, TCG_CALL_NO_RWG, void, env, tl)
#endif
diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
index 373d44de74..deb1c13a66 100644
--- a/target/ppc/mmu-hash64.c
+++ b/target/ppc/mmu-hash64.c
@@ -95,9 +95,10 @@ void dump_slb(PowerPCCPU *cpu)
}
}
-void helper_slbia(CPUPPCState *env)
+void helper_slbia(CPUPPCState *env, uint32_t ih)
{
PowerPCCPU *cpu = env_archcpu(env);
+ int starting_entry;
int n;
/*
@@ -111,18 +112,59 @@ void helper_slbia(CPUPPCState *env)
* expected that slbmte is more common than slbia, and slbia is usually
* going to evict valid SLB entries, so that tradeoff is unlikely to be a
* good one.
+ *
+ * ISA v2.05 introduced IH field with values 0,1,2,6. These all invalidate
+ * the same SLB entries (everything but entry 0), but differ in what
+ * "lookaside information" is invalidated. TCG can ignore this and flush
+ * everything.
+ *
+ * ISA v3.0 introduced additional values 3,4,7, which change what SLBs are
+ * invalidated.
*/
- /* XXX: Warning: slbia never invalidates the first segment */
- for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
- ppc_slb_t *slb = &env->slb[n];
+ env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
+
+ starting_entry = 1; /* default for IH=0,1,2,6 */
+
+ if (env->mmu_model == POWERPC_MMU_3_00) {
+ switch (ih) {
+ case 0x7:
+ /* invalidate no SLBs, but all lookaside information */
+ return;
- if (slb->esid & SLB_ESID_V) {
- slb->esid &= ~SLB_ESID_V;
+ case 0x3:
+ case 0x4:
+ /* also considers SLB entry 0 */
+ starting_entry = 0;
+ break;
+
+ case 0x5:
+ /* treat undefined values as ih==0, and warn */
+ qemu_log_mask(LOG_GUEST_ERROR,
+ "slbia undefined IH field %u.\n", ih);
+ break;
+
+ default:
+ /* 0,1,2,6 */
+ break;
}
}
- env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
+ for (n = starting_entry; n < cpu->hash64_opts->slb_size; n++) {
+ ppc_slb_t *slb = &env->slb[n];
+
+ if (!(slb->esid & SLB_ESID_V)) {
+ continue;
+ }
+ if (env->mmu_model == POWERPC_MMU_3_00) {
+ if (ih == 0x3 && (slb->vsid & SLB_VSID_C) == 0) {
+ /* preserves entries with a class value of 0 */
+ continue;
+ }
+ }
+
+ slb->esid &= ~SLB_ESID_V;
+ }
}
static void __helper_slbie(CPUPPCState *env, target_ulong addr,
@@ -136,6 +178,7 @@ static void __helper_slbie(CPUPPCState *env, target_ulong addr,
return;
}
+ env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
if (slb->esid & SLB_ESID_V) {
slb->esid &= ~SLB_ESID_V;
diff --git a/target/ppc/translate.c b/target/ppc/translate.c
index eb0ddba850..e514732a09 100644
--- a/target/ppc/translate.c
+++ b/target/ppc/translate.c
@@ -5027,12 +5027,15 @@ static void gen_tlbsync(DisasContext *ctx)
/* slbia */
static void gen_slbia(DisasContext *ctx)
{
+ uint32_t ih = (ctx->opcode >> 21) & 0x7;
+ TCGv_i32 t0 = tcg_const_i32(ih);
+
#if defined(CONFIG_USER_ONLY)
GEN_PRIV;
#else
CHK_SV;
- gen_helper_slbia(cpu_env);
+ gen_helper_slbia(cpu_env, t0);
#endif /* defined(CONFIG_USER_ONLY) */
}
--
2.23.0
* Re: [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
2020-03-18 4:41 [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Nicholas Piggin
2020-03-18 4:41 ` [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation Nicholas Piggin
@ 2020-03-18 16:45 ` Cédric Le Goater
2020-03-19 2:24 ` Nicholas Piggin
2020-03-18 16:52 ` Greg Kurz
2020-03-19 5:19 ` David Gibson
3 siblings, 1 reply; 11+ messages in thread
From: Cédric Le Goater @ 2020-03-18 16:45 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc; +Cc: Greg Kurz, qemu-devel, David Gibson
On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
>
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
>
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace
> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.
Did you hit the bug on the other OS ?
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Looks correct.
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Thanks,
C.
> ---
> target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> PowerPCCPU *cpu = env_archcpu(env);
> int n;
>
> + /*
> + * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> + * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> + * can overwrite a valid SLB without flushing its lookaside information.
> + *
> + * It would be possible to keep the TLB in synch with the SLB by flushing
> + * when a valid entry is overwritten by slbmte, and therefore slbia would
> + * not have to flush unless it evicts a valid SLB entry. However it is
> + * expected that slbmte is more common than slbia, and slbia is usually
> + * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> + * good one.
> + */
> +
> /* XXX: Warning: slbia never invalidates the first segment */
> for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> ppc_slb_t *slb = &env->slb[n];
>
> if (slb->esid & SLB_ESID_V) {
> slb->esid &= ~SLB_ESID_V;
> - /*
> - * XXX: given the fact that segment size is 256 MB or 1TB,
> - * and we still don't have a tlb_flush_mask(env, n, mask)
> - * in QEMU, we just invalidate all TLBs
> - */
> - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
> }
> +
> + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
>
> static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>
* Re: [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
2020-03-18 4:41 [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Nicholas Piggin
2020-03-18 4:41 ` [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation Nicholas Piggin
2020-03-18 16:45 ` [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Cédric Le Goater
@ 2020-03-18 16:52 ` Greg Kurz
2020-03-19 5:20 ` David Gibson
2020-03-19 5:19 ` David Gibson
3 siblings, 1 reply; 11+ messages in thread
From: Greg Kurz @ 2020-03-18 16:52 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: Cédric Le Goater, qemu-ppc, qemu-devel, David Gibson
On Wed, 18 Mar 2020 14:41:34 +1000
Nicholas Piggin <npiggin@gmail.com> wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
>
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
>
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace
s/usespace/userspace
> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.
>
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
LGTM
Reviewed-by: Greg Kurz <groug@kaod.org>
> target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> PowerPCCPU *cpu = env_archcpu(env);
> int n;
>
> + /*
> + * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> + * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> + * can overwrite a valid SLB without flushing its lookaside information.
> + *
> + * It would be possible to keep the TLB in synch with the SLB by flushing
> + * when a valid entry is overwritten by slbmte, and therefore slbia would
> + * not have to flush unless it evicts a valid SLB entry. However it is
> + * expected that slbmte is more common than slbia, and slbia is usually
> + * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> + * good one.
> + */
> +
> /* XXX: Warning: slbia never invalidates the first segment */
> for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> ppc_slb_t *slb = &env->slb[n];
>
> if (slb->esid & SLB_ESID_V) {
> slb->esid &= ~SLB_ESID_V;
> - /*
> - * XXX: given the fact that segment size is 256 MB or 1TB,
> - * and we still don't have a tlb_flush_mask(env, n, mask)
> - * in QEMU, we just invalidate all TLBs
> - */
> - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
> }
> +
> + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
>
> static void __helper_slbie(CPUPPCState *env, target_ulong addr,
* Re: [EXTERNAL] [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation
2020-03-18 4:41 ` [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation Nicholas Piggin
@ 2020-03-18 17:08 ` Cédric Le Goater
2020-03-18 20:46 ` Benjamin Herrenschmidt
0 siblings, 1 reply; 11+ messages in thread
From: Cédric Le Goater @ 2020-03-18 17:08 UTC (permalink / raw)
To: Nicholas Piggin, qemu-ppc; +Cc: Greg Kurz, qemu-devel, David Gibson
On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> Linux using the hash MMU ("disable_radix" command line) on a POWER9
> machine quickly hits translation bugs due to using v3.0 slbia
> features that are not implemented in TCG. Add them.
I checked the ISA books and this looks OK but you are also modifying
slbie.
Thanks,
C.
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> ---
> target/ppc/helper.h | 2 +-
> target/ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++++++++++-----
> target/ppc/translate.c | 5 +++-
> 3 files changed, 55 insertions(+), 9 deletions(-)
>
> diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> index ee1498050d..2dfa1c6942 100644
> --- a/target/ppc/helper.h
> +++ b/target/ppc/helper.h
> @@ -615,7 +615,7 @@ DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG, void, env, tl, tl)
> DEF_HELPER_2(load_slb_esid, tl, env, tl)
> DEF_HELPER_2(load_slb_vsid, tl, env, tl)
> DEF_HELPER_2(find_slb_vsid, tl, env, tl)
> -DEF_HELPER_FLAGS_1(slbia, TCG_CALL_NO_RWG, void, env)
> +DEF_HELPER_FLAGS_2(slbia, TCG_CALL_NO_RWG, void, env, i32)
> DEF_HELPER_FLAGS_2(slbie, TCG_CALL_NO_RWG, void, env, tl)
> DEF_HELPER_FLAGS_2(slbieg, TCG_CALL_NO_RWG, void, env, tl)
> #endif
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 373d44de74..deb1c13a66 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -95,9 +95,10 @@ void dump_slb(PowerPCCPU *cpu)
> }
> }
>
> -void helper_slbia(CPUPPCState *env)
> +void helper_slbia(CPUPPCState *env, uint32_t ih)
> {
> PowerPCCPU *cpu = env_archcpu(env);
> + int starting_entry;
> int n;
>
> /*
> @@ -111,18 +112,59 @@ void helper_slbia(CPUPPCState *env)
> * expected that slbmte is more common than slbia, and slbia is usually
> * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> * good one.
> + *
> + * ISA v2.05 introduced IH field with values 0,1,2,6. These all invalidate
> + * the same SLB entries (everything but entry 0), but differ in what
> + * "lookaside information" is invalidated. TCG can ignore this and flush
> + * everything.
> + *
> + * ISA v3.0 introduced additional values 3,4,7, which change what SLBs are
> + * invalidated.
> */
>
> - /* XXX: Warning: slbia never invalidates the first segment */
> - for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> - ppc_slb_t *slb = &env->slb[n];
> + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> +
> + starting_entry = 1; /* default for IH=0,1,2,6 */
> +
> + if (env->mmu_model == POWERPC_MMU_3_00) {
> + switch (ih) {
> + case 0x7:
> + /* invalidate no SLBs, but all lookaside information */
> + return;
>
> - if (slb->esid & SLB_ESID_V) {
> - slb->esid &= ~SLB_ESID_V;
> + case 0x3:
> + case 0x4:
> + /* also considers SLB entry 0 */
> + starting_entry = 0;
> + break;
> +
> + case 0x5:
> + /* treat undefined values as ih==0, and warn */
> + qemu_log_mask(LOG_GUEST_ERROR,
> + "slbia undefined IH field %u.\n", ih);
> + break;
> +
> + default:
> + /* 0,1,2,6 */
> + break;
> }
> }
>
> - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> + for (n = starting_entry; n < cpu->hash64_opts->slb_size; n++) {
> + ppc_slb_t *slb = &env->slb[n];
> +
> + if (!(slb->esid & SLB_ESID_V)) {
> + continue;
> + }
> + if (env->mmu_model == POWERPC_MMU_3_00) {
> + if (ih == 0x3 && (slb->vsid & SLB_VSID_C) == 0) {
> + /* preserves entries with a class value of 0 */
> + continue;
> + }
> + }
> +
> + slb->esid &= ~SLB_ESID_V;
> + }
> }
>
> static void __helper_slbie(CPUPPCState *env, target_ulong addr,
> @@ -136,6 +178,7 @@ static void __helper_slbie(CPUPPCState *env, target_ulong addr,
> return;
> }
>
> + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> if (slb->esid & SLB_ESID_V) {
> slb->esid &= ~SLB_ESID_V;
>
> diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> index eb0ddba850..e514732a09 100644
> --- a/target/ppc/translate.c
> +++ b/target/ppc/translate.c
> @@ -5027,12 +5027,15 @@ static void gen_tlbsync(DisasContext *ctx)
> /* slbia */
> static void gen_slbia(DisasContext *ctx)
> {
> + uint32_t ih = (ctx->opcode >> 21) & 0x7;
> + TCGv_i32 t0 = tcg_const_i32(ih);
> +
> #if defined(CONFIG_USER_ONLY)
> GEN_PRIV;
> #else
> CHK_SV;
>
> - gen_helper_slbia(cpu_env);
> + gen_helper_slbia(cpu_env, t0);
> #endif /* defined(CONFIG_USER_ONLY) */
> }
>
* Re: [EXTERNAL] [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation
2020-03-18 17:08 ` [EXTERNAL] " Cédric Le Goater
@ 2020-03-18 20:46 ` Benjamin Herrenschmidt
2020-03-19 2:42 ` Nicholas Piggin
2020-03-19 5:22 ` David Gibson
0 siblings, 2 replies; 11+ messages in thread
From: Benjamin Herrenschmidt @ 2020-03-18 20:46 UTC (permalink / raw)
To: Cédric Le Goater, Nicholas Piggin, qemu-ppc
Cc: Greg Kurz, qemu-devel, David Gibson
On Wed, 2020-03-18 at 18:08 +0100, Cédric Le Goater wrote:
> On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> > Linux using the hash MMU ("disable_radix" command line) on a POWER9
> > machine quickly hits translation bugs due to using v3.0 slbia
> > features that are not implemented in TCG. Add them.
>
> I checked the ISA books and this looks OK but you are also modifying
> slbie.
For the same reason, I believe slbie needs to invalidate caches even if
the entry isn't present.
The kernel will under some circumstances overwrite SLB entries without
invalidating (because the translation itself isn't invalid, it's just
that the SLB is full, so anything cached in the ERAT is still
technically ok).
However, when those things get really invalidated, they need to be
taken out, even if they no longer have a corresponding SLB entry.
Cheers,
Ben.
> Thanks,
>
> C.
>
>
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
> > target/ppc/helper.h | 2 +-
> > target/ppc/mmu-hash64.c | 57 ++++++++++++++++++++++++++++++++++++-
> > ----
> > target/ppc/translate.c | 5 +++-
> > 3 files changed, 55 insertions(+), 9 deletions(-)
> >
> > diff --git a/target/ppc/helper.h b/target/ppc/helper.h
> > index ee1498050d..2dfa1c6942 100644
> > --- a/target/ppc/helper.h
> > +++ b/target/ppc/helper.h
> > @@ -615,7 +615,7 @@ DEF_HELPER_FLAGS_3(store_slb, TCG_CALL_NO_RWG,
> > void, env, tl, tl)
> > DEF_HELPER_2(load_slb_esid, tl, env, tl)
> > DEF_HELPER_2(load_slb_vsid, tl, env, tl)
> > DEF_HELPER_2(find_slb_vsid, tl, env, tl)
> > -DEF_HELPER_FLAGS_1(slbia, TCG_CALL_NO_RWG, void, env)
> > +DEF_HELPER_FLAGS_2(slbia, TCG_CALL_NO_RWG, void, env, i32)
> > DEF_HELPER_FLAGS_2(slbie, TCG_CALL_NO_RWG, void, env, tl)
> > DEF_HELPER_FLAGS_2(slbieg, TCG_CALL_NO_RWG, void, env, tl)
> > #endif
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 373d44de74..deb1c13a66 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -95,9 +95,10 @@ void dump_slb(PowerPCCPU *cpu)
> > }
> > }
> >
> > -void helper_slbia(CPUPPCState *env)
> > +void helper_slbia(CPUPPCState *env, uint32_t ih)
> > {
> > PowerPCCPU *cpu = env_archcpu(env);
> > + int starting_entry;
> > int n;
> >
> > /*
> > @@ -111,18 +112,59 @@ void helper_slbia(CPUPPCState *env)
> > * expected that slbmte is more common than slbia, and slbia
> > is usually
> > * going to evict valid SLB entries, so that tradeoff is
> > unlikely to be a
> > * good one.
> > + *
> > + * ISA v2.05 introduced IH field with values 0,1,2,6. These
> > all invalidate
> > + * the same SLB entries (everything but entry 0), but differ
> > in what
> > + * "lookaside information" is invalidated. TCG can ignore this
> > and flush
> > + * everything.
> > + *
> > + * ISA v3.0 introduced additional values 3,4,7, which change
> > what SLBs are
> > + * invalidated.
> > */
> >
> > - /* XXX: Warning: slbia never invalidates the first segment */
> > - for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> > - ppc_slb_t *slb = &env->slb[n];
> > + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > +
> > + starting_entry = 1; /* default for IH=0,1,2,6 */
> > +
> > + if (env->mmu_model == POWERPC_MMU_3_00) {
> > + switch (ih) {
> > + case 0x7:
> > + /* invalidate no SLBs, but all lookaside information
> > */
> > + return;
> >
> > - if (slb->esid & SLB_ESID_V) {
> > - slb->esid &= ~SLB_ESID_V;
> > + case 0x3:
> > + case 0x4:
> > + /* also considers SLB entry 0 */
> > + starting_entry = 0;
> > + break;
> > +
> > + case 0x5:
> > + /* treat undefined values as ih==0, and warn */
> > + qemu_log_mask(LOG_GUEST_ERROR,
> > + "slbia undefined IH field %u.\n", ih);
> > + break;
> > +
> > + default:
> > + /* 0,1,2,6 */
> > + break;
> > }
> > }
> >
> > - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > + for (n = starting_entry; n < cpu->hash64_opts->slb_size; n++)
> > {
> > + ppc_slb_t *slb = &env->slb[n];
> > +
> > + if (!(slb->esid & SLB_ESID_V)) {
> > + continue;
> > + }
> > + if (env->mmu_model == POWERPC_MMU_3_00) {
> > + if (ih == 0x3 && (slb->vsid & SLB_VSID_C) == 0) {
> > + /* preserves entries with a class value of 0 */
> > + continue;
> > + }
> > + }
> > +
> > + slb->esid &= ~SLB_ESID_V;
> > + }
> > }
> >
> > static void __helper_slbie(CPUPPCState *env, target_ulong addr,
> > @@ -136,6 +178,7 @@ static void __helper_slbie(CPUPPCState *env,
> > target_ulong addr,
> > return;
> > }
> >
> > + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > if (slb->esid & SLB_ESID_V) {
> > slb->esid &= ~SLB_ESID_V;
> >
> > diff --git a/target/ppc/translate.c b/target/ppc/translate.c
> > index eb0ddba850..e514732a09 100644
> > --- a/target/ppc/translate.c
> > +++ b/target/ppc/translate.c
> > @@ -5027,12 +5027,15 @@ static void gen_tlbsync(DisasContext *ctx)
> > /* slbia */
> > static void gen_slbia(DisasContext *ctx)
> > {
> > + uint32_t ih = (ctx->opcode >> 21) & 0x7;
> > + TCGv_i32 t0 = tcg_const_i32(ih);
> > +
> > #if defined(CONFIG_USER_ONLY)
> > GEN_PRIV;
> > #else
> > CHK_SV;
> >
> > - gen_helper_slbia(cpu_env);
> > + gen_helper_slbia(cpu_env, t0);
> > #endif /* defined(CONFIG_USER_ONLY) */
> > }
> >
* Re: [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
2020-03-18 16:45 ` [EXTERNAL] [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Cédric Le Goater
@ 2020-03-19 2:24 ` Nicholas Piggin
0 siblings, 0 replies; 11+ messages in thread
From: Nicholas Piggin @ 2020-03-19 2:24 UTC (permalink / raw)
To: Cédric Le Goater, qemu-ppc; +Cc: qemu-devel, Greg Kurz, David Gibson
Cédric Le Goater's on March 19, 2020 2:45 am:
> On 3/18/20 5:41 AM, Nicholas Piggin wrote:
>> slbia must invalidate TLBs even if it does not remove a valid SLB
>> entry, because slbmte can overwrite valid entries without removing
>> their TLBs.
>>
>> As the architecture says, slbia invalidates all lookaside information,
>> not conditionally based on if it removed valid entries.
>>
>> It does not seem possible for POWER8 or earlier Linux kernels to hit
>> this bug because it never changes its kernel SLB translations, and it
>> should always have valid entries if any accesses are made to usespace
>> regions. However other operating systems which may modify SLB entry 0
>> or do more fancy things with segments might be affected.
>
> Did you hit the bug on the other OS ?
No, hit it when fixing POWER9 hash.
>
>> When POWER9 slbia support is added in the next patch, this becomes a
>> real problem because some new slbia variants don't invalidate all
>> non-zero entries.
>>
>> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
>
> Looks correct.
>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
Thanks,
Nick
* Re: [EXTERNAL] [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation
2020-03-18 20:46 ` Benjamin Herrenschmidt
@ 2020-03-19 2:42 ` Nicholas Piggin
2020-03-19 5:22 ` David Gibson
1 sibling, 0 replies; 11+ messages in thread
From: Nicholas Piggin @ 2020-03-19 2:42 UTC (permalink / raw)
To: Benjamin Herrenschmidt, Cédric Le Goater, qemu-ppc
Cc: qemu-devel, Greg Kurz, David Gibson
Benjamin Herrenschmidt's on March 19, 2020 6:46 am:
> On Wed, 2020-03-18 at 18:08 +0100, Cédric Le Goater wrote:
>> On 3/18/20 5:41 AM, Nicholas Piggin wrote:
>> > Linux using the hash MMU ("disable_radix" command line) on a POWER9
>> > machine quickly hits translation bugs due to using v3.0 slbia
>> > features that are not implemented in TCG. Add them.
>>
>> I checked the ISA books and this looks OK but you are also modifying
>> slbie.
That was a mistake that leaked in from debugging the crashes.
> For the same reason, I believe slbie needs to invalidate caches even if
> the entry isn't present.
I don't think it does per the ISA. If we overwrite it then we can only
invalidate with slbia. That's why there is that slb insertion cache for
pre-POWER9 that lets us use slbies to context switch so long as none
have been overwritten.
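Loosely, that scheme looks like the following sketch (all names here
are invented; the real logic lives in Linux's SLB management code,
not in this sketch):

#define SLB_CACHE_MAX 8

/* Remember which user ESIDs were inserted since the last flush;
 * n > SLB_CACHE_MAX means the cache overflowed and an entry may have
 * been silently overwritten. */
struct slb_cache {
    unsigned long esid[SLB_CACHE_MAX];
    int n;
};

/* Stubs standing in for the real instructions. */
static void do_slbie(unsigned long esid) { (void)esid; }
static void do_slbia(void) { }

static void slb_cache_insert(struct slb_cache *c, unsigned long esid)
{
    if (c->n < SLB_CACHE_MAX) {
        c->esid[c->n] = esid;
    }
    c->n++;
}

/* Context switch: precise slbie flushes are only safe while every
 * inserted entry is known to still be in place; once anything may
 * have been overwritten, only slbia clears the stale lookaside
 * information. */
static void slb_flush_user(struct slb_cache *c)
{
    if (c->n <= SLB_CACHE_MAX) {
        for (int i = 0; i < c->n; i++) {
            do_slbie(c->esid[i]);
        }
    } else {
        do_slbia();
    }
    c->n = 0;
}

int main(void)
{
    struct slb_cache c = { .n = 0 };
    slb_cache_insert(&c, 0x1000);
    slb_cache_insert(&c, 0x2000);
    slb_flush_user(&c);  /* two precise slbie flushes */
    return 0;
}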
> The kernel will under some circumstances overwrite SLB entries without
> invalidating (because the translation itself isn't invalid, it's just
> that the SLB is full, so anything cached in the ERAT is still
> technically ok).
>
> However, when those things get really invalidated, they need to be
> taken out, even if they no longer have a corresponding SLB entry.
Yeah we track that and do slbia in that case.
Thanks,
Nick
* Re: [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
2020-03-18 4:41 [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap Nicholas Piggin
` (2 preceding siblings ...)
2020-03-18 16:52 ` Greg Kurz
@ 2020-03-19 5:19 ` David Gibson
3 siblings, 0 replies; 11+ messages in thread
From: David Gibson @ 2020-03-19 5:19 UTC (permalink / raw)
To: Nicholas Piggin; +Cc: Cédric Le Goater, qemu-ppc, qemu-devel, Greg Kurz
On Wed, Mar 18, 2020 at 02:41:34PM +1000, Nicholas Piggin wrote:
> slbia must invalidate TLBs even if it does not remove a valid SLB
> entry, because slbmte can overwrite valid entries without removing
> their TLBs.
>
> As the architecture says, slbia invalidates all lookaside information,
> not conditionally based on if it removed valid entries.
>
> It does not seem possible for POWER8 or earlier Linux kernels to hit
> this bug because it never changes its kernel SLB translations, and it
> should always have valid entries if any accesses are made to usespace
> regions. However other operating systems which may modify SLB entry 0
> or do more fancy things with segments might be affected.
>
> When POWER9 slbia support is added in the next patch, this becomes a
> real problem because some new slbia variants don't invalidate all
> non-zero entries.
>
> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Applied to ppc-for-5.0, thanks.
> ---
> target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> 1 file changed, 15 insertions(+), 6 deletions(-)
>
> diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> index 34f6009b1e..373d44de74 100644
> --- a/target/ppc/mmu-hash64.c
> +++ b/target/ppc/mmu-hash64.c
> @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> PowerPCCPU *cpu = env_archcpu(env);
> int n;
>
> + /*
> + * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> + * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> + * can overwrite a valid SLB without flushing its lookaside information.
> + *
> + * It would be possible to keep the TLB in synch with the SLB by flushing
> + * when a valid entry is overwritten by slbmte, and therefore slbia would
> + * not have to flush unless it evicts a valid SLB entry. However it is
> + * expected that slbmte is more common than slbia, and slbia is usually
> + * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> + * good one.
> + */
> +
> /* XXX: Warning: slbia never invalidates the first segment */
> for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> ppc_slb_t *slb = &env->slb[n];
>
> if (slb->esid & SLB_ESID_V) {
> slb->esid &= ~SLB_ESID_V;
> - /*
> - * XXX: given the fact that segment size is 256 MB or 1TB,
> - * and we still don't have a tlb_flush_mask(env, n, mask)
> - * in QEMU, we just invalidate all TLBs
> - */
> - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
> }
> +
> + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> }
>
> static void __helper_slbie(CPUPPCState *env, target_ulong addr,
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
* Re: [PATCH 1/2] target/ppc: Fix slbia TLB invalidation gap
2020-03-18 16:52 ` Greg Kurz
@ 2020-03-19 5:20 ` David Gibson
0 siblings, 0 replies; 11+ messages in thread
From: David Gibson @ 2020-03-19 5:20 UTC (permalink / raw)
To: Greg Kurz; +Cc: Cédric Le Goater, qemu-ppc, qemu-devel, Nicholas Piggin
On Wed, Mar 18, 2020 at 05:52:32PM +0100, Greg Kurz wrote:
> On Wed, 18 Mar 2020 14:41:34 +1000
> Nicholas Piggin <npiggin@gmail.com> wrote:
>
> > slbia must invalidate TLBs even if it does not remove a valid SLB
> > entry, because slbmte can overwrite valid entries without removing
> > their TLBs.
> >
> > As the architecture says, slbia invalidates all lookaside information,
> > not conditionally based on if it removed valid entries.
> >
> > It does not seem possible for POWER8 or earlier Linux kernels to hit
> > this bug because it never changes its kernel SLB translations, and it
> > should always have valid entries if any accesses are made to usespace
>
> s/usespace/userspace
Corrected in my tree, thanks.
>
> > regions. However other operating systems which may modify SLB entry 0
> > or do more fancy things with segments might be affected.
> >
> > When POWER9 slbia support is added in the next patch, this becomes a
> > real problem because some new slbia variants don't invalidate all
> > non-zero entries.
> >
> > Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> > ---
>
> LGTM
>
> Reviewed-by: Greg Kurz <groug@kaod.org>
>
> > target/ppc/mmu-hash64.c | 21 +++++++++++++++------
> > 1 file changed, 15 insertions(+), 6 deletions(-)
> >
> > diff --git a/target/ppc/mmu-hash64.c b/target/ppc/mmu-hash64.c
> > index 34f6009b1e..373d44de74 100644
> > --- a/target/ppc/mmu-hash64.c
> > +++ b/target/ppc/mmu-hash64.c
> > @@ -100,20 +100,29 @@ void helper_slbia(CPUPPCState *env)
> > PowerPCCPU *cpu = env_archcpu(env);
> > int n;
> >
> > + /*
> > + * slbia must always flush all TLB (which is equivalent to ERAT in ppc
> > + * architecture). Matching on SLB_ESID_V is not good enough, because slbmte
> > + * can overwrite a valid SLB without flushing its lookaside information.
> > + *
> > + * It would be possible to keep the TLB in synch with the SLB by flushing
> > + * when a valid entry is overwritten by slbmte, and therefore slbia would
> > + * not have to flush unless it evicts a valid SLB entry. However it is
> > + * expected that slbmte is more common than slbia, and slbia is usually
> > + * going to evict valid SLB entries, so that tradeoff is unlikely to be a
> > + * good one.
> > + */
> > +
> > /* XXX: Warning: slbia never invalidates the first segment */
> > for (n = 1; n < cpu->hash64_opts->slb_size; n++) {
> > ppc_slb_t *slb = &env->slb[n];
> >
> > if (slb->esid & SLB_ESID_V) {
> > slb->esid &= ~SLB_ESID_V;
> > - /*
> > - * XXX: given the fact that segment size is 256 MB or 1TB,
> > - * and we still don't have a tlb_flush_mask(env, n, mask)
> > - * in QEMU, we just invalidate all TLBs
> > - */
> > - env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > }
> > }
> > +
> > + env->tlb_need_flush |= TLB_NEED_LOCAL_FLUSH;
> > }
> >
> > static void __helper_slbie(CPUPPCState *env, target_ulong addr,
>
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
* Re: [EXTERNAL] [PATCH 2/2] target/ppc: Fix ISA v3.0 (POWER9) slbia implementation
2020-03-18 20:46 ` Benjamin Herrenschmidt
2020-03-19 2:42 ` Nicholas Piggin
@ 2020-03-19 5:22 ` David Gibson
1 sibling, 0 replies; 11+ messages in thread
From: David Gibson @ 2020-03-19 5:22 UTC (permalink / raw)
To: Benjamin Herrenschmidt
Cc: Greg Kurz, qemu-ppc, Cédric Le Goater, Nicholas Piggin,
qemu-devel
On Thu, Mar 19, 2020 at 07:46:54AM +1100, Benjamin Herrenschmidt wrote:
> On Wed, 2020-03-18 at 18:08 +0100, Cédric Le Goater wrote:
> > On 3/18/20 5:41 AM, Nicholas Piggin wrote:
> > > Linux using the hash MMU ("disable_radix" command line) on a POWER9
> > > machine quickly hits translation bugs due to using v3.0 slbia
> > > features that are not implemented in TCG. Add them.
> >
> > I checked the ISA books and this looks OK but you are also modifying
> > slbie.
>
> For the same reason, I believe slbie needs to invalidate caches even if
> the entry isn't present.
>
> The kernel will under some circumstances overwrite SLB entries without
> invalidating (because the translation itself isn't invalid, it's just
> that the SLB is full, so anything cached in the ERAT is still
> technically ok).
>
> However, when those things get really invalidated, they need to be
> taken out, even if they no longer have a corresponding SLB entry.
Right, the slbie change is certainly correct, but it doesn't match
what the commit message says this is doing. Nick, can you split that
out please.
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson